% Source: https://arxiv.org/abs/1508.00667
\title{Direct sums and products in topological groups and vector spaces}
\begin{abstract}
We call a subset $A$ of an abelian topological group $G$: (i) \emph{absolutely Cauchy summable} provided that for every open neighbourhood $U$ of $0$ one can find a finite set $F\subseteq A$ such that the subgroup generated by $A\setminus F$ is contained in $U$; (ii) \emph{absolutely summable} if, for every family $\{z_a:a\in A\}$ of integers, there exists $g\in G$ such that the net $\left\{\sum_{a\in F} z_a a: F\subseteq A\mbox{ is finite}\right\}$ converges to $g$; (iii) \emph{topologically independent} provided that $0\not \in A$ and for every neighbourhood $W$ of $0$ there exists a neighbourhood $V$ of $0$ such that, for every finite set $F\subseteq A$ and each set $\{z_a:a\in F\}$ of integers, $\sum_{a\in F}z_aa\in V$ implies that $z_aa\in W$ for all $a\in F$. We prove that: (1) an abelian topological group contains a direct product (direct sum) of $\kappa$-many non-trivial topological groups if and only if it contains a topologically independent, absolutely (Cauchy) summable subset of cardinality $\kappa$; (2) a topological vector space contains $\mathbb{R}^{(\mathbb{N})}$ as its subspace if and only if it has an infinite absolutely Cauchy summable set; (3) a topological vector space contains $\mathbb{R}^{\mathbb{N}}$ as its subspace if and only if it has an $\mathbb{R}^{(\mathbb{N})}$ multiplier convergent series of non-zero elements. We answer a question of Hu\v{s}ek and generalize results by Bessaga--Pelczynski--Rolewicz, Dominguez--Tarieladze and Lipecki.
\end{abstract}
\section{Introduction}
The aim of this paper is to study the following fundamental question:
\begin{question}
\label{our:aim}
Let $G$ be a topological group.
\begin{itemize}
\item[(i)] When does $G$ contain an (infinite) direct sum of non-trivial topological groups?
\item[(ii)] When does $G$ contain an (infinite) direct product of non-trivial topological groups?
\end{itemize}
\end{question}
Clearly, a topological group contains (a subgroup topologically isomorphic to) a direct product (a direct sum) of $\kappa$-many non-trivial topological groups if and only if it contains a direct product (a direct sum) of $\kappa$-many non-trivial cyclic topological groups.
Therefore, we can reduce Question \ref{our:aim} to the question of when a topological group contains a direct product (a direct sum) of non-trivial {\em cyclic\/} topological groups.
Since both a direct sum and a direct product of cyclic groups are necessarily abelian, by passing to a subgroup of the group $G$ in Question \ref{our:aim} if necessary, we may assume, without loss of generality, that $G$ itself is abelian. This is why for the rest of this paper
{\bf we shall assume that all (topological) groups are abelian\/}.
In order to tackle Question \ref{our:aim}, let us introduce some relevant notation. Let $G$ be a group and let $A\subseteq G\setminus \{0\}$. Then there exists a unique group homomorphism
\begin{equation}
\label{eq:K_A}
\kal{A}: \bigoplus _{a\in A} \hull{a}\to G
\end{equation}
which extends each natural inclusion map $\hull{a}\to G$ for $a\in A$. The set $A$ is said to be {\em independent\/} if $\kal{A}$ is a monomorphism.
We call the map $\kal{A}$ in \eqref{eq:K_A} the {\em Kalton map associated with $A$\/}, in memory of Nigel Kalton, whose idea presented in \cite{kal} inspired us to write this manuscript.
When $G$ is a topological group, the cyclic subgroup $\hull{a}$ of $G$ generated by an element $a\in A$ inherits the subspace topology from $G$. Therefore, one can consider the Tychonoff product topology on the direct product
$$
P_A=\prod_{a\in A}\hull{a},
$$
where each $\hull{a}$ carries the subgroup topology inherited from $G$. Since the direct sum
\begin{equation}
\label{eq:S_A}
S_A=\bigoplus _{a\in A} \hull{a}
\end{equation}
is naturally identified with a subgroup of $P_A$, it carries the subgroup topology. Now both the domain \eqref{eq:S_A} and the range $G$ of the map \eqref{eq:K_A} are equipped with group topologies, thereby allowing one to talk about {\em topological} properties of the homomorphism $\kal{A}$ such as its continuity and openness. If $\kal{A}$ is a monomorphism which is both continuous and an open map onto its image $\kal{A}(S_A)=\hull{A}$,
then $\kal{A}$ is a topological isomorphism between $S_A$ and the subgroup $\hull{A}$
of $G$, thereby yielding (a subgroup topologically isomorphic to) a direct sum of $|A|$-many non-trivial cyclic groups in $G$. The converse also obviously holds.
\begin{fact}
\label{fact}
For a topological group $G$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] $G$ contains a direct sum of $\kappa$-many non-trivial topological groups;
\item[(ii)] there exists a set $A\subseteq G\setminus\{0\}$ such that
$|A|=\kappa$ and $\kal{A}$ is a continuous monomorphism which is an open map onto its image $\kal{A}(S_A)$, or equivalently, $\kal{A}$ is a topological isomorphism between $S_A$ and the subgroup $\kal{A}(S_A)=\hull{A}$
of $G$.
\end{itemize}
\end{fact}
This fact shows that Question \ref{our:aim}~(i) reduces to the following problem:
\begin{problem}
\label{when:Kalton:is:an:top:isomorhism}
Given a subset $A$ of a topological group $G$ such that $0\not\in A$, find necessary and sufficient conditions on $A$ making the Kalton map $\kal{A}:S_A\to G$ a topologically isomorphic embedding.
\end{problem}
We completely resolve this problem
in two steps.
First, in Section \ref{sec:abs:cauch:summable:sets} we introduce the notion of an {\em absolutely Cauchy summable set} in a topological group which generalizes the concept of an $f_\omega$-Cauchy summable sequence introduced and studied extensively by the authors in~\cite{DSS_arxiv}.
The importance of this notion becomes clear from
Theorem \ref{Proposition:Udine}
which states that
the Kalton map $\kal{A}$ is continuous if and only if $A$ is an absolutely Cauchy summable set.
Second, in
Section \ref{sec:top:indep:sets} we introduce the notion of a {\em topologically independent set} in a topological group, which is a counterpart of the notion of an independent set in a group. Indeed, topologically independent sets are independent, and the two notions coincide in discrete groups by Lemma \ref{lemma:top:ind:is:ind}.
While an independent set already produces a direct sum in a group, the topological independence of a set $A$ is only a necessary condition for the Kalton map $\kal{A}$ to be a topologically isomorphic embedding; this fact
is established in Proposition \ref{Proposition:Matsuyama}~(ii).
It turns out that the Kalton map $\kal{A}$ is a topologically isomorphic embedding if and only if
$A$ is {\em both\/} topologically independent and absolutely Cauchy summable;
see Theorem \ref{Kalton:top:iso}. Indeed, the topological independence of $A$ ensures that $\kal{A}$ is an open isomorphism (taken as a map onto $\hull{A}$), while the absolute Cauchy summability
is responsible for
the continuity of $\kal{A}$.
This resolves Problem \ref{when:Kalton:is:an:top:isomorhism} and thus,
Question \ref{our:aim}~(i) as well.
It is worth noting that neither of the two notions taken alone guarantees that
the Kalton map $\kal{A}$ is a topological embedding. (In view of Theorem \ref{Kalton:top:iso} described above, this means that these two notions are logically independent of each other.)
Indeed,
Remark \ref{remark:Zp} provides an example of a compact group in which every null sequence is absolutely Cauchy summable, yet which does not contain even a product of two non-trivial groups.
Similarly,
every Schauder basis in a Banach space is topologically independent
by Proposition \ref{schauder:is:top:ind}, yet
Banach spaces do not contain infinite direct sums; see Remark \ref{rem:on:Schauder}.
The step towards the question of when a topological group contains a direct product of cyclic topological groups is now
straightforward. Given an absolutely Cauchy summable set $A$ in a topological group $G$, the Kalton map $\kal{A}$ is continuous. Thus, it continuously extends to a
unique map $\skal{A}$ from the completion of $S_A$ to the completion of $G$. Since $P_A$ is a subset of the completion of $S_A$, if
the image $\skal{A}(P_A)$ of $P_A$ under $\skal{A}$ is contained in $G$,
then $G$ contains a direct product $P_A$ of $|A|$-many non-trivial groups.
The converse is also true: if $G$ contains a direct product of $\kappa$-many non-trivial groups, then it contains a subset $A$ with $|A|=\kappa$ such that $\skal{A}(P_A)\subseteq G$.
Therefore, Question \ref{our:aim}~(ii)
transforms to the following problem:
\begin{problem}
\label{when:extended:Kalton:is:an:top:isomorhism}
Given a subset $A$ of a topological group $G$ such that $0\not\in A$
and $\kal{A}:S_A\to G$ is a topologically isomorphic embedding,
find the necessary and sufficient conditions on $A$ equivalent to
the inclusion $\skal{A}(P_A)\subseteq G$.
\end{problem}
It turns out that this happens precisely when the set $A$ is {\em absolutely summable\/} in $G$. This notion is introduced in Section \ref{sec:abs:summable:sets}, where we also show that it is a generalization of the concept of $f_\omega$-summable sequences studied in \cite{DSS_arxiv}. We prove that $\skal{A}\restriction{P_A}$ is a topologically isomorphic embedding into $G$ if and only if $A$ is absolutely summable and topologically independent; see Theorem \ref{characterization:of:topological:isomorphism:of:extended:Kalton:maps}.
Absolutely summable sets are absolutely Cauchy summable, and the converse
holds in complete groups; see Proposition \ref{as:is:acs}.
In Section \ref{Sec:inf:dir:sums:in:top:groups} we provide the first applications of the above-mentioned results. Namely, we generalize a result of Dominguez and Tarieladze from \cite{DT-private} by proving that a metric torsion-free group all of whose cyclic subgroups are discrete is not NSS if and only if it contains the topological group ${\mathbb Z}^{({\mathbb N})}$ as a subgroup; see Corollary \ref{sum:of:Z:inside}. As another example of the various possible applications, we also show that a complete metric torsion group is not NSS if and only if it contains a (compact) subgroup that is topologically isomorphic to an infinite product of non-trivial finite cyclic groups; in particular, if such a group is not NSS, then it contains an infinite compact zero-dimensional subgroup (Theorem \ref{compact:zero:dim:inside}). This property was used in \cite{DS_Lie} to obtain a characterization of Lie groups.
If $A$ is a subset of a topological vector space, then the Kalton map $\kal{A}$ can be naturally extended to the {\em linear Kalton map $\lkal{A}$\/}; this is done in Section \ref{Sec:lkal}. We show that if $A$ is infinite and $\kal{A}$ is continuous, then $A$ contains an infinite subset $B$ such that the linear Kalton map $\lkal{B}$ is a topologically isomorphic embedding; see Corollary \ref{continuous:contain:isomorphic:subsets}.
We use this result in the final Section \ref{Sec:final}, where we provide a characterization of topological vector spaces that contain the topological vector space ${\mathbb R}^{({\mathbb N})}$ (Theorem \ref{complete:TVS:not:TAP:iff:contains:R:to:N}) and the topological vector space ${\mathbb R}^{\mathbb N}$ (Theorem \ref{another:theorem}). As corollaries we obtain the result of Lipecki about metric vector spaces containing ${\mathbb R}^{({\mathbb N})}$ (\cite[Theorem 3]{Lip}) and results of Bessaga, Pelczynski and Rolewicz about complete metric vector spaces containing ${\mathbb R}^{\mathbb N}$ (\cite[Theorem 9 and Corollary]{BPR}). Theorem \ref{another:theorem} also answers a question of Hu\v{s}ek posed in \cite{Husek2}; see Remark \ref{answer:to:a:question:of:M:Husek}.
\section{Modified topology of a topological group capturing topological properties of the Kalton map}
\begin{definition}
\label{def:modified:topology}
Let $G$ be a group, and let $A$ be a subset of $G$.
\begin{itemize}
\item[(i)] For every subset $W$ of $G$ containing $0$ and each finite set $F\subseteq A$, let
\begin{equation}
\label{eq:WAF}
\nbh{W}{A}{F}=\sum_{a\in F} (\hull{a} \cap W)+ \hull{A\setminus F}.
\end{equation}
\item[(ii)] For a group topology $\tau$ on $G$, we use $\t{A}{\tau}$ to denote the group topology on $G$ having as its base of neighbourhoods of $0$ the family\begin{equation}
\{\nbh{W}{A}{F}: 0\in W\in \tau, F\subseteq A\mbox{ is finite}\},
\end{equation}
and we shall call $\t{A}{\tau}$ the {\em $A$-modification of $\tau$\/}.
\item[(iii)] For brevity, the $A$-modification of the discrete topology $\delta_G$ on $G$ will be denoted by $\td{A}$ (instead of the longer notation $\t{A}{\delta_G}$).
\end{itemize}
\end{definition}
\begin{remark}
\label{linear:remark}
It follows easily from \eqref{eq:WAF} that $\nbh{\{0\}}{A}{F}=\hull{A\setminus F}$. Combining this with items (ii) and (iii) of Definition \ref{def:modified:topology}, we conclude that the family
$$
\{\hull{A\setminus F}: F\subseteq A, F \mbox{ is finite}\}
$$
forms a base of neighbourhoods of $0$ for the topology $\td{A}$. In particular, $\td{A}$ is a {\em linear\/} topology on $G$; that is, $\td{A}$ has a base at $0$ consisting of subgroups of $G$.
\end{remark}
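To illustrate Definition \ref{def:modified:topology} and Remark \ref{linear:remark}, we record a basic example; the verification is straightforward from the definitions.
\begin{example}
Let $G={\mathbb Z}^{({\mathbb N})}$ be the direct sum of countably many copies of the discrete group ${\mathbb Z}$, and let $A=\{e_n:n\in{\mathbb N}\}$ be the set of its canonical generators. For every finite set $F\subseteq A$ one has
$$
\hull{A\setminus F}=\{x\in G: x_n=0\mbox{ whenever }e_n\in F\},
$$
so by Remark \ref{linear:remark} these subgroups form a base of neighbourhoods of $0$ for $\td{A}$. These are precisely the basic neighbourhoods of $0$ that $G$ inherits as a subgroup of the Tychonoff product ${\mathbb Z}^{\mathbb N}$ of discrete groups; that is, $\td{A}$ coincides with the usual topology of the direct sum ${\mathbb Z}^{({\mathbb N})}$.
\end{example}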
The topology $\t{A}{\tau}$ need not be Hausdorff in general, even when $\tau$ itself is Hausdorff.
For two topologies $\tau$, $\tau'$ on a set $X$, $\tau\le\tau'$ means that every $\tau$-open subset of $X$ is also $\tau'$-open.
\begin{remark}\label{observation:about:t:A}
Let $G$ be a group and $\tau$ be a group topology on $G$.
\begin{itemize}
\item[(a)] If $\tau'$ is a group topology on $G$ with $\tau\le\tau'$, then $ \t{A}{\tau}\le \t{A}{\tau'}$; in particular, $\t{A}{\tau} \leq \td{A}$.
\item[(b)] $\langle A \rangle$ is an open subgroup of $(G,\t{A}{\tau})$ and thus of $(G,\td{A})$ as well.
\item[(c)] $\t{A}{\td{A}}=\td{A}$.
\end{itemize}
\end{remark}
\begin{proposition}
\label{Matsuyama:split}
For a subset $A$ of a topological group $(G,\tau)$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] $\tau \leq \t{A}{\tau}$;
\item[(ii)] $\tau \leq \td{A}$;
\item[(iii)] $\tau\restriction_{\langle A\rangle}\le\t{A}{\tau}\restriction_{\langle A\rangle}$.
\end{itemize}
\end{proposition}
\begin{proof}
(i)$\to$(ii) follows from Remark
\ref{observation:about:t:A}~(a).
(ii)$\to$(i) Pick an arbitrary $U\in\tau$ with $0\in U$. Since $\tau$ is a group topology, we can fix a $\tau$-neighbourhood $V$ of $0$ such that ${V}+ {V}\subseteq U$.
Since $\tau\le \td{A}$ by (ii), $V\in \td{A}$. By Remark \ref{linear:remark}, there exists a finite set $F\subseteq A$ such that
$\hull{A\setminus F}\subseteq {V}$. If $F=\emptyset$, we let $W=V$. Otherwise, we pick a $\tau$-neighbourhood $W$ of $0$ in $G$ such that
$W+W+\ldots+W\subseteq V$, where $|F|$-many $W$'s are taken in the sum. Then $\sum_{a\in F} (\hull{a} \cap W)\subseteq V$, so that
$$
\nbh{W}{A}{F}= \sum_{a\in F} (\hull{a} \cap W)+ \hull{A\setminus F} \subseteq V+{V}\subseteq U.
$$
Since $\nbh{W}{A}{F}\in \t{A}{\tau}$, this implies $U\in \t{A}{\tau}$.
(i)$\to$(iii) is trivial.
(iii)$\to$(i) Let $U$ be a $\tau$-open subset of $G$ containing $0$. Then $V=U\cap \hull{A}\in \tau\restriction_{\langle A\rangle}$, which implies $V\in \t{A}{\tau}\restriction_{\langle A\rangle}$ by (iii). Since $\hull{A}\in \t{A}{\tau}$ by Remark \ref{observation:about:t:A}~(b), this implies that $V\in \t{A}{\tau}$. Since $0\in V\subseteq U$, we conclude that $U\in \t{A}{\tau}$.
\end{proof}
The importance of the $A$-modification of a group topology to the topic of this paper becomes clear from the next proposition and its corollary below.
\begin{proposition}
\label{when:is:kalton:continuous}
Let $A$ be a subset of a topological group $(G,\tau)$ such that $0\not\in A$. Then:
\begin{itemize}
\item[(i)] the Kalton map $\kal{A}$ is continuous if and only if $\tau\le\t{A}{\tau}$, and
\item[(ii)] the Kalton map $\kal{A}$ is an open map onto $\hull{A}$ if and only if $\t{A}{\tau}\restriction_{\hull{A}}\le\tau\restriction_{\hull{A}}$. \end{itemize}
\end{proposition}
\begin{proof}
Note that the topology of $S_A$ has the family
$$
\{\nbh{W}{A}{F}^*: 0\in W\in \tau, F\subseteq A\mbox{ is finite}\}
$$
as its base at $0$, where
$$
\nbh{W}{A}{F}^*=\left(\bigoplus_{{a}\in {F}}(\hull{{a}} \cap W)\right)\oplus
\left(\bigoplus_{{a}\in {A}\setminus {F}} \hull{{a}}\right).
$$
A straightforward check shows that $\kal{A}(\nbh{W}{A}{F}^*) =\nbh{W}{A}{F}$ for every finite set $F\subseteq A$ and every neighbourhood $W$ of $0$ in $(G,\tau)$, where $\nbh{W}{A}{F}$ are as defined in \eqref{eq:WAF}. The rest follows from Definition \ref{def:modified:topology}.
\end{proof}
\begin{corollary}
For a subset $A$ of a topological group $G$ such that $0\not\in A$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] the Kalton map $\kal{A}$ is a topologically isomorphic embedding;
\item[(ii)] the set $A$ is independent and $\t{A}{\tau}\restriction_{\langle A\rangle}=\tau\restriction_{\langle A\rangle}$, where $\tau$ is the topology of $G$.
\end{itemize}
\end{corollary}
\begin{proof}
The inequalities $\tau \leq \t{A}{\tau}$ and $\tau\restriction_{\langle A\rangle}\le\t{A}{\tau}\restriction_{\langle A\rangle}$ are equivalent by Proposition
\ref{Matsuyama:split}.
Therefore, the conclusion of our corollary follows from Proposition \ref{when:is:kalton:continuous}.
\end{proof}
\section{Absolutely Cauchy summable sets and the continuity of the Kalton map}
\label{sec:abs:cauch:summable:sets}
\begin{definition}\label{def:abs:summ:set}
We say that a subset $A$ of a topological group $G$ is {\em absolutely Cauchy summable\/} provided that for every neighbourhood $U$ of $0$ there exists a finite set $F\subseteq A$ such that $\hull{A\setminus F}\subseteq U$.
\end{definition}
\begin{remark}
\label{remark:on:abs:C:summability}
\begin{itemize}
\item[(i)]
Every finite subset of a topological group is absolutely Cauchy summable.
\item[(ii)] A subset $A$ of a topological group $G$ is absolutely Cauchy summable in $G$ if and only if $A$ is absolutely Cauchy summable in the subgroup $\hull{A}$ of $G$.
\end{itemize}
\end{remark}
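The notion is quite restrictive in groups whose non-trivial cyclic subgroups are unbounded; the following example records the simplest case.
\begin{example}
Every absolutely Cauchy summable subset $A$ of ${\mathbb R}$ with $0\not\in A$ is finite. Indeed, let $U=(-1,1)$ and use Definition \ref{def:abs:summ:set} to find a finite set $F\subseteq A$ with $\hull{A\setminus F}\subseteq U$. If there were some $a\in A\setminus F$, then the subgroup $\hull{a}=\{za:z\in{\mathbb Z}\}$ generated by the non-zero real number $a$ would be unbounded and thus could not be contained in $U$. Hence $A\setminus F=\emptyset$, so $A=F$ is finite.
\end{example}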
\begin{remark}
\label{splitted:remark}
Let $A$ be a subset of a group $G$. Clearly, $A$ is absolutely Cauchy summable in $(G, \td{A})$. Therefore, {\em $A$ is absolutely Cauchy summable in $(G, \tau)$
for every topology $\tau$ on $G$ satisfying $\tau\le \td{A}$\/}.
\end{remark}
A typical example of an absolutely Cauchy summable set appears in direct sums.
\begin{lemma}
\label{Cauchy:summable:in:direct:sum}
Let $\{H_a:a\in A\}$ be a family of topological groups and $H=\bigoplus_{a\in A} H_a$ be its direct sum.
If $x_a\in H_a$ for every $a\in A$, then the set $X=\{x_a:a\in A\}$ is absolutely Cauchy summable in $H$.
\end{lemma}
\begin{proof} Let $U$ be a neighbourhood of $0$ in $H$. By the definition of the Tychonoff product topology,
there exists a finite set $F\subseteq A$ such that $\bigoplus_{a\in A\setminus F} H_a\subseteq U$. Clearly, $Y=\{x_a:a\in F\}$ is a finite subset of $X$ such that
$\hull{X\setminus Y}\subseteq \bigoplus_{a\in A\setminus F} H_a$. This gives $\hull{X\setminus Y}\subseteq U$. Therefore, $X$ is absolutely Cauchy summable in $H$ by Definition
\ref{def:abs:summ:set}.
\end{proof}
\begin{theorem}\label{Proposition:Udine} For a subset $A$ of a topological group $G$ such that $0\not\in A$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] $A$ is absolutely Cauchy summable in $G$;
\item[(ii)]
$\kal{A}: S_A \to G$ is continuous.
\end{itemize}
\end{theorem}
\begin{proof}
Let $\tau$ be the topology of $G$.
(i)$\to$(ii)
Let $U$ be an arbitrary $\tau$-neighbourhood of $0$ in $G$. Since $A$ is absolutely Cauchy summable in $(G,\tau)$, there exists a finite set $F\subseteq A$ such that $\hull{A\setminus F}\subseteq U$.
Since $\hull{A\setminus F}$ is a $\td{A}$-neighbourhood of $0$ in $G$ by Remark
\ref{linear:remark}, we conclude that $U\in \td{A}$. This establishes the inclusion $\tau \leq \td{A}$. Applying Proposition \ref{Matsuyama:split}, we get the inclusion $\tau \leq \t{A}{\tau}$. Combining this with Proposition \ref{when:is:kalton:continuous}~(i), we obtain the continuity of the Kalton map $\kal{A}: S_A \to G$.
(ii)$\to$(i) By (ii) and Proposition \ref{when:is:kalton:continuous}~(i), the inequality
$\tau\le\t{A}{\tau}$ holds. By Remark \ref{observation:about:t:A}~(a), $\t{A}{\tau}\le\td{A}$. Therefore, $A$ is absolutely Cauchy summable in $(G,\tau)$ by Remark \ref{splitted:remark}.
\end{proof}
\section{Topologically independent sets}
\label{sec:top:indep:sets}
Recall that a subset $A$ of non-zero elements of a group $G$ is {\em independent\/} provided that for every finite subset $F\subseteq A$ and every indexed set $\{z_a:a\in F\}$ of integers the equality $\sum_{a\in F}z_aa=0$ implies that $z_aa=0$ for all $a\in F$. Our next definition is a topological analogue of this classical notion.
\begin{definition}
\label{def:topological:independence}
A subset $A$ of non-zero elements of a topological group $G$ is called {\em topologically independent\/} provided that for every neighbourhood $W$ of $0$ there exists a neighbourhood $U$ of $0$ such that for every finite subset $F\subseteq A$ and every indexed set $\{z_a:a\in F\}$ of integers the inclusion $\sum_{a\in F}z_aa\in U$ implies that $z_aa\in W$ for all $a\in F$. We will call this $U$ a {\em $W$-witness\/} of the topological independence of $A$.
\end{definition}
Informally speaking, $A$ is topologically independent if for every neighbourhood $W$ of $0$ there exists a neighbourhood $U$ of $0$ such that, whenever $\sum_{a\in F}z_aa$ is $U$-close to zero, all the elements $z_aa$ are $W$-close to zero. Thus, in the definition of a topologically independent set, the algebraic ``equality to zero'' from the definition of an independent set is replaced by the topological notion of ``being close to zero''.
It is clear that a subset of a discrete topological group is topologically independent precisely when it is independent, so topological independence extends the classical notion to the topological setting.
The following lemma shows that, in general, topological independence is stronger than the classical notion.
\begin{lemma}\label{lemma:top:ind:is:ind}
Every topologically independent set is independent.
\end{lemma}
\begin{proof}
Assume that $F$ is a finite subset of a topologically independent set $A$ in a topological group $G$ and that $\{z_a:a\in F\}$ is an indexed set of integers such that $\sum_{a\in F}z_aa=0$. Fix $a\in F$ and let $W$ be an arbitrary neighbourhood of $0$. Since $A$ is topologically independent, we can find a $W$-witness $U$ of its topological independence. Since $\sum_{a\in F}z_aa=0\in U$, it follows that $z_aa\in W$. As $W$ was arbitrary and $G$ is (tacitly assumed to be) Hausdorff, this implies $z_aa=0$. Hence $A$ is independent.
\end{proof}
The next lemma and the example that follows it highlight the difference between topological independence and the classical (algebraic) independence.
\begin{lemma}\label{lemma:step:to:Lie}
For every $n\in {\mathbb N}$, each topologically independent subset of ${\mathbb R}^n$ has size at most $n$.
\end{lemma}
\begin{proof}
Suppose, for contradiction, that ${\mathbb R}^n$ contains a topologically independent subset of size greater than $n$. Since every subset of a topologically independent set is clearly topologically independent, we can fix a topologically independent set $F\subseteq{\mathbb R}^n$ of size $n+1$. Since all cyclic subgroups of ${\mathbb R}^n$ are discrete and $F$ is finite, we can choose an open ball $W$ around $0$ such that
\begin{equation}
\label{eq:5}
W\cap \hull{a}=\{0\}
\
\mbox{ for all }
\
a\in F.
\end{equation}
Since $F$ is topologically independent, we can fix an open set $U$ containing $0$ which is a $W$-witness of the topological independence of $F$; moreover, $F$ is independent by Lemma \ref{lemma:top:ind:is:ind}. Let $H$ be the closure of $\hull{F}$. Since the independent set $F$ contains $(n+1)$-many elements, the closed subgroup $H$ of ${\mathbb R}^n$ is non-discrete; in fact, $H$ contains a line passing through $0$, see \cite[Chap.VII, Th.1 and Prop.3]{Bourbaki}. Therefore, since $U$ is an open set containing $0$,
one can find some non-zero element $\sum_{a\in F}z_aa\in U$, where $z_a\in{\mathbb Z}$ for $a\in F$. Since $U$ is a $W$-witness, this implies that $z_aa\in W$
for all $a\in F$. Combining this with \eqref{eq:5}, we conclude that $z_aa=0$ for all $a\in F$. This contradicts the fact that $\sum_{a\in F}z_aa\not=0$.
\end{proof}
\begin{example}\label{discrete:top:indep:iff:indep}
While the topological group ${\mathbb R}$ contains an independent subset of size ${\mathfrak c}$, every topologically independent subset of ${\mathbb R}$ has size at most $1$, by Lemma \ref{lemma:step:to:Lie}.
\end{example}
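For the reader's convenience, we verify the failure of topological independence directly for the simplest independent pair in ${\mathbb R}$.
\begin{example}
The set $A=\{1,\sqrt{2}\}$ is independent in ${\mathbb R}$ but not topologically independent. Let $W=(-\frac{1}{2},\frac{1}{2})$ and let $U$ be an arbitrary neighbourhood of $0$; we may assume that $U\subseteq(-1,1)$. Since $\sqrt{2}$ is irrational, the subgroup $D=\{z_1+z_2\sqrt{2}:z_1,z_2\in{\mathbb Z}\}$ is dense in ${\mathbb R}$, so we can pick a non-zero $x=z_1+z_2\sqrt{2}\in U$. If $z_1=0$, then $|x|=|z_2|\sqrt{2}\ge\sqrt{2}$, contradicting $x\in(-1,1)$; hence $z_1\not=0$, and so $z_1\cdot 1=z_1\not\in W$. Therefore no neighbourhood $U$ of $0$ is a $W$-witness of the topological independence of $A$.
\end{example}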
\begin{remark}\label{lemma:obvious}
A subset $A$ of non-zero elements of a topological group $G$ is topologically independent if and only if $A$ is topologically independent in the subgroup $\hull{A}$ of $G$.
\end{remark}
The next lemma provides a typical example of a topologically independent set.
\begin{lemma}
\label{topologcally:independent:in:direct:sum}
Let $\{H_a:a\in A\}$ be a family of topological groups and $H=\bigoplus_{a\in A} H_a$ be its direct sum.
If $x_a\in H_a\setminus \{0\}$ for every $a\in A$,
then the set $X=\{x_a:a\in A\}$ is topologically independent in $H$.
\end{lemma}
\begin{proof}
Fix a neighbourhood $W$ of $0$ in $H$. By the definition of the Tychonoff product topology, there exist a finite set $F\subseteq A$
and, for each $a\in F$, an open neighbourhood $V_a$ of $0$ in $H_a$ such that $U\subseteq W$, where
$U=\bigoplus_{a\in F} V_a\oplus\bigoplus_{a\in A\setminus F}H_a$.
Observe that $U$ is a $U$-witness of the topological independence of $X$: if a finite sum $\sum_{a\in F'}z_ax_a$ belongs to $U$, then each summand $z_ax_a$, viewed as an element of $H$ supported on the single coordinate $a$, belongs to $U$ as well. Since $U\subseteq W$, the set $U$ is also a $W$-witness.
\end{proof}
The next proposition shows that topological independence of a set $A$ ensures that the Kalton map is open.
\begin{proposition}\label{Proposition:Matsuyama}
Let $G$ be a topological group, $A\subseteq G\setminus\{0\}$
and let $\kal{A}:S_A\to G$ be the associated Kalton homomorphism.
\begin{itemize}
\item[(i)] If $A$ is topologically independent in $G$, then $\kal{A}$ is an open map onto $\hull{A}$.
\item[(ii)] If $\kal{A}$ is a topologically isomorphic embedding, then $A$ is topologically independent in $G$.
\end{itemize}
\end{proposition}
\begin{proof}
(i)
Let $\tau$ be the topology of $G$.
By Proposition \ref{when:is:kalton:continuous}(ii), it suffices to check that $\t{A}{\tau}\restriction_{\hull{A}}\le\tau\restriction_{\hull{A}}$.
Fix a $\tau$-neighbourhood $W$ of $0$ and a finite set $F\subseteq A$. Since $A$ is topologically independent, we can find a $\tau$-neighbourhood $U$ of $0$ which is a $W$-witness of topological independence of $A$. Then $U\cap\hull{A}\subseteq\nbh{W}{A}{F}$. Combined with Definition \ref{def:modified:topology}, this establishes the inequality $\t{A}{\tau}\restriction_{\hull{A}}\le\tau\restriction_{\hull{A}}$.
(ii) By our assumption, $\kal{A}:S_A\to G$ is a topologically isomorphic embedding.
For every $a\in A$, let $x_a=\kal{A}^{-1}(a)$. Then $x_a$ is a non-zero element of the summand $\hull{a}$ of $S_A$. By Lemma \ref{topologcally:independent:in:direct:sum}, the set $X=\{x_a:a\in A\}$ is topologically independent in $S_A$. Since $\kal{A}:S_A\to G$ is a topologically isomorphic embedding,
$\kal{A}:S_A\to \kal{A}(S_A)=\hull{A}$ is a topological isomorphism, so
$A=\kal{A}(X)$ is topologically independent in $\hull{A}$. By Remark \ref{lemma:obvious}, $A$ is topologically independent also in $G$.
\end{proof}
\begin{proposition}\label{direct:sum:top:indep} \label{top:indep:for:finite}
For a finite subset $A$ of a topological group $G$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] the Kalton map $\kal{A}: S_A\to G$ is a topologically isomorphic embedding;
\item[(ii)] $A$ is topologically independent in $G$.
\end{itemize}
\end{proposition}
\begin{proof}
The implication (i)$\to$(ii) was proved in Proposition \ref{Proposition:Matsuyama}~(ii).
(ii)$\to$(i) By Lemma \ref{lemma:top:ind:is:ind}, $A$ is independent. Hence the Kalton map $\kal{A}$ is an (algebraic) isomorphism onto its image.
Since $A$ is finite, it is absolutely Cauchy summable by Remark \ref{remark:on:abs:C:summability}~(i).
Applying Theorem \ref{Proposition:Udine}, we conclude that
$\kal{A}$ is continuous. By Proposition \ref{Proposition:Matsuyama}~(i), $\kal{A}$ is an open map onto $\hull{A}$. This finishes the proof of (i).
\end{proof}
In the rest of this section we discuss topologically independent sets arising in functional analysis.
Recall that a sequence $\{e_i:i\in{\mathbb N}\}$ in a normed vector space $V$ is called a {\em Schauder basis} provided that for every $v\in V$ there exists a unique sequence $\{s_i:i\in{\mathbb N}\}$ of scalars such that $v=\sum_{n=0}^\infty s_ne_n$, where the convergence is taken with respect to the norm; that is, $\lim_{n\to\infty}||v-\sum_{i=0}^n s_i e_i ||= 0.$
\begin{proposition}\label{schauder:is:top:ind}
Every Schauder basis in a Banach space is topologically independent.
\end{proposition}
\begin{proof}
Let $E=\{e_i:i\in{\mathbb N}\}$ be a Schauder basis in a Banach space $B$. For $x=\sum_{i=0}^\infty a_ie_i\in B$ and $n\in{\mathbb N}$ put $P_n(x)=\sum_{i=0}^na_ie_i$. Then $P_n:B\to B$ is a linear projection for each $n\in{\mathbb N}$. Furthermore, by a theorem of Banach, these projections are uniformly bounded; see, for instance, \cite[Theorem 237]{HHZ}. That is, there exists $C\in{\mathbb R}$ such that
$$
||P_n(x)||\le C||x|| \mbox{ for every $n\in{\mathbb N}$ and $x\in B$}.
$$
Put $P_{-1}(x)=0$ for each $x\in B$. Define $\pi_n=P_n-P_{n-1}$. Then for all $x\in B$ and $n\in{\mathbb N}$ we have
\begin{equation}\label{eq:schauder}
||\pi_n(x)||=||P_n(x)-P_{n-1}(x)||\le||P_n(x)||+||P_{n-1}(x)||\le 2C||x||.
\end{equation}
Take a neighbourhood $W$ of $0$. Then there exists $\varepsilon>0$ such that the open ball $B(0;\varepsilon)$
with centre $0$ and radius $\varepsilon$ is a subset of $W$. We claim that the open ball $B(0;\frac{\varepsilon}{2C})$ is a $W$-witness of the topological independence of $E$.
Indeed, if $x=\sum_{i=0}^na_ie_i\in B(0;\frac{\varepsilon}{2C})$, then
$$
||a_ie_i||=||\pi_i(x)||\le 2C||x||<2C \frac{\varepsilon}{2C}=\varepsilon
$$
by \eqref{eq:schauder}. Thus $a_ie_i\in B(0;\varepsilon)\subseteq W$ for all $i=0,\ldots,n$.
\end{proof}
\begin{remark}\label{rem:on:Schauder}
A Banach space $B$ with an infinite Schauder basis $A$ never contains a topologically isomorphic copy of $S_A$: indeed, a Banach space has no small subgroups (no non-trivial subgroup fits inside a bounded ball), while every neighbourhood of $0$ in an infinite direct sum of non-trivial groups contains a non-trivial subgroup. In view of Proposition \ref{schauder:is:top:ind}, this shows that
(i) and (ii) of Proposition \ref{direct:sum:top:indep} are not equivalent in general.
\end{remark}
Recall that a subset $A$ of a vector space is {\em linearly
independent} if for every finite set $B\subseteq A$ and each set
$\{r_b:b\in B\}$ of real numbers the equality $\sum_{b\in B}r_bb=0$
implies $r_b=0$ for all $b\in B$. Equivalently, $A$ is linearly
independent if every finite set $B\subseteq A$ spans a
$|B|$-dimensional subspace.
\begin{proposition}\label{prop:step:to:Lie}
Let $A$ be a subset of a topological vector space.
\begin{itemize}
\item[(i)] If $A$ is topologically independent, then it is linearly independent.
\item[(ii)] If $A$ is finite and linearly independent, then it is topologically independent.
\end{itemize}
\end{proposition}
\begin{proof}
(i)
It is enough to prove that every non-empty finite subset $B$ of $A$ is linearly independent. Let $B\subseteq A$ be
finite and non-empty. Then $\rhull{B}={\mathbb R}^n$ is an $n$-dimensional Euclidean vector space for a suitable $n\in{\mathbb N}$. Since $B$ generates ${\mathbb R}^n$, one has $n\le |B|$.
Being a subset of the topologically independent set $A$, the set $B$ is itself topologically independent. By Lemma \ref{lemma:step:to:Lie}, the converse inequality $|B|\le n$ also holds. Therefore, $|B|=n$. Since $\rhull{B}={\mathbb R}^n$, $B$ must be a basis of ${\mathbb R}^n$. In particular, $B$ is linearly independent.
(ii)
It is a well-known and simple fact that $\rhull{A}=\bigoplus_{a\in A}\rhull{a}={\mathbb R}^{|A|}$. Consequently, the set $A$ is topologically independent by
Lemma \ref{topologcally:independent:in:direct:sum} and Remark \ref{lemma:obvious}.
\end{proof}
\begin{remark}
\label{remark:4.12}
In \cite{ES}, the following stronger version of linear independence for a countably infinite subset $A=\{a_i:i\in{\mathbb N}\}$ of a topological vector space is introduced:
\begin{equation}\label{eq:dalsi:rovnice}
\mbox{If } \sum_{i=0}^\infty r_ia_i=0 \mbox{ for some real sequence } \{r_i:i\in{\mathbb N}\} \mbox{, then } r_i=0 \mbox{ for all } i\in{\mathbb N}.
\end{equation}
This notion is studied under the names $\omega$-independent in \cite{Lip}, $\omega$-linearly independent in \cite{Singer} and linearly topologically independent in \cite{LL,Lip1}.
\end{remark}
\begin{remark}
If, in Definition \ref{def:topological:independence}, we assume $G$ to be a topological vector space and replace the set $\{z_a:a\in F\}$ of integers by a set $\{r_a:a\in F\}$ of real numbers, then we obtain yet another generalization of linear independence for topological vector spaces, one which is stronger than the topological independence of Definition \ref{def:topological:independence}.
It is an interesting question how these two notions are related. However, this question is beyond the scope of this paper.
\end{remark}
\section{Direct sums in topological groups}
\begin{theorem}\label{Kalton:top:iso}
For a subset $A$ of a topological group $G$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] $A$ is both topologically independent and absolutely Cauchy summable in $G$;
\item[(ii)] the Kalton map $\kal{A}$ is a topologically isomorphic embedding.
\end{itemize}
\end{theorem}
\begin{proof}
(i)$\to$(ii)
Since $A$ is topologically independent, it is independent by Lemma \ref{lemma:top:ind:is:ind}, and so the Kalton map $\kal{A}:S_A\to G$ is a monomorphism. By Proposition \ref{Proposition:Matsuyama}~(i), $\kal{A}$ is an open map onto its image $\kal{A}(S_A)=\hull{A}$. Since $A$ is absolutely Cauchy summable in $G$, the map $\kal{A}$
is continuous by Theorem \ref{Proposition:Udine}. This finishes the proof of (ii).
(ii)$\to$(i) Proposition \ref{Proposition:Matsuyama}~(ii) implies that $A$ is topologically independent in $G$, while Theorem \ref{Proposition:Udine} yields that $A$ is absolutely Cauchy summable.
\end{proof}
From this theorem and Fact \ref{fact}, we obtain the following corollary.
\begin{corollary}
\label{sum:corollary}
For a topological group $G$ and a cardinal $\kappa$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] $G$ contains a subgroup topologically isomorphic to a direct sum of $\kappa$-many non-trivial groups;
\item[(ii)] $G$ contains a topologically independent absolutely Cauchy summable set of size $\kappa$.
\end{itemize}
\end{corollary}
\begin{remark}\label{remark:Zp}
Let ${\mathbb Z}_p$ denote the (compact metric) group of $p$-adic integers. It is known that ${\mathbb Z}_p$ is a linear group; that is, it has a basis at $0$ consisting of its clopen subgroups; see, for instance, \cite{DPS}.
(i) It is shown in \cite{DSS_arxiv} that {\em every infinite null sequence $A$ in ${\mathbb Z}_p$ is absolutely Cauchy summable in ${\mathbb Z}_p$\/}.
(ii) It is known that {\em ${\mathbb Z}_p$ does not contain any direct sum of two non-trivial topological groups\/}.
(iii) It is a simple fact that {\em ${\mathbb Z}_p$ contains an infinite independent null sequence\/}.
(iv) It follows from (i), (ii) and (iii) that one cannot replace ``topologically independent'' in item (ii) in Corollary \ref{sum:corollary} by the weaker condition ``independent'', even when $G$ is a compact metric linear group.
(v) Let $A$ be an infinite null sequence in ${\mathbb Z}_p$.
It follows from (i), (ii) and Corollary \ref{sum:corollary} that $A$ is an absolutely Cauchy summable set such that the only topologically independent subsets of $A$ are singletons.
(vi)
{\em There exists an infinite absolutely Cauchy summable subset $B$ of a compact metric group such that
the Kalton map $\kal{B}$ is continuous but not open.} Indeed, let $B$ be an infinite independent sequence in ${\mathbb Z}_p$
as in
(iii). By (i), $B$ is absolutely Cauchy summable, so $\kal{B}$ is continuous by Theorem \ref{Proposition:Udine}. Since $B$ is independent, $\kal{B}$ is injective. If it were also open, it would be a topologically isomorphic embedding in contradiction with
item
(ii).
\end{remark}
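As a concrete instance of item (i), absolute Cauchy summability of the particular null sequence $A=\{p^n:n\in{\mathbb N}\}$ in ${\mathbb Z}_p$ can be verified by hand (we sketch this only as an illustration; it is a special case of the result quoted from \cite{DSS_arxiv}): given the basic clopen subgroup $p^n{\mathbb Z}_p$, take $F=\{1,p,\ldots,p^{n-1}\}$; then

```latex
$$
\hull{A\setminus F}=\hull{\{p^k:k\ge n\}}\subseteq p^n{\mathbb Z}_p,
\quad\mbox{since}\quad
\sum_{k=n}^{m}z_kp^k=p^n\sum_{k=n}^{m}z_kp^{k-n}\in p^n{\mathbb Z}_p
$$
```

for all integers $z_n,\ldots,z_m$. Note that this particular sequence is not even independent, as $p\cdot p^0-1\cdot p^1=0$; the independent null sequence of item (iii) has to be chosen differently.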
Item
(v) of the above remark shows that an infinite absolutely Cauchy summable set in a compact metric linear group need not contain any infinite topologically independent subset. Our next theorem shows that, under an additional condition imposed on an
infinite absolutely Cauchy summable set, it does contain an infinite topologically independent subset.
\begin{theorem}\label{Gn:discrete:gives:injective:heomeo}
Let $A$ be an infinite absolutely Cauchy summable set in a topological group $G$ such that $\hull{a}$
is discrete for every $a\in A$. Then $A$ contains an infinite topologically independent subset.
\end{theorem}
\begin{proof}
We will build a topologically independent, faithfully indexed subset $B=\{a_{n}:n\in{\mathbb N}\}$ of $A$ by induction on $n\in{\mathbb N}$. At each step we choose an element $a_n\in A$, a finite set $F_n\subseteq A$ and an open symmetric neighbourhood $U_n$ of $0$ satisfying conditions (i$_n$)--(iv$_n$) listed below.
First, we define $F_{-1}=\emptyset$ and $U_{-1}=G$.
\begin{itemize}
\item[(i$_n$)] $F_{n-1}\subseteq F_n$ and $a_n\in F_n\setminus F_{n-1}$,
\item[(ii$_n$)] $(U_n+U_n)\cap \hull{a_{n}}=\{0\}$,
\item[(iii$_n$)] $\hull{A\setminus F_{n}}\subseteq U_{n}$,
\item[(iv$_n$)] $U_n\subseteq U_{n-1}$.
\end{itemize}
Suppose that for $n\in{\mathbb N}$ a finite set $F_{n-1}\subseteq A$ and an open symmetric neighbourhood $U_{n-1}$ of $0$ have already been selected. Let us define $a_{n}\in A$, a finite set $F_{n}\subseteq A$ and an open symmetric neighbourhood $U_{n}$ of $0$ satisfying conditions (i$_{n}$)--(iv$_{n}$).
Since $A$ is infinite and $F_{n-1}$ is finite, we can choose $a_n\in A\setminus F_{n-1}$. Since $\hull{a_{n}}$ is discrete by our assumption, we can fix an open symmetric neighbourhood $U_{n}$ of $0$ satisfying (ii$_{n}$). By shrinking $U_n$ if necessary, we may assume that it satisfies (iv$_{n}$) as well. Since
$A$ is absolutely Cauchy summable, there exists a finite set $E\subseteq A$ such that $\hull{A\setminus E}\subseteq U_{n}$. Clearly, $F_n=\{a_n\}\cup F_{n-1}\cup E$ is a finite subset of $A$ satisfying
(i$_n$). Since $E\subseteq F_n$, we have $\hull{A\setminus F_{n}}\subseteq\hull{A\setminus E}\subseteq U_{n}$; that is,
(iii$_n$) holds as well. This finishes our inductive construction.
Since (i$_n$) holds for every $n\in {\mathbb N}$, we conclude that
\begin{equation}
\label{eq:9}
B=\{a_n:n\in{\mathbb N}\}\subseteq \bigcup_{n\in{\mathbb N}}F_n\subseteq A
\end{equation}
and $a_n\not=a_m$ for $n,m\in {\mathbb N}$ and $m\not=n$.
In particular, $B$ is infinite.
Let us show that $B$ is topologically independent.
Fix a neighbourhood $W$ of $0$. Since $A$ is absolutely Cauchy summable, so is its subset $B$. Therefore,
$\hull{B\setminus S}\subseteq W$ for some finite set $S\subseteq B$. Since $\{F_n:n\in{\mathbb N}\}$ is an increasing sequence of sets by (i$_n$), \eqref{eq:9} allows us to find an $n\in{\mathbb N}$ such that $S\subseteq F_n$. Now
\begin{equation}\label{eq:B:minus:Fn:in:W}
\hull{B\setminus F_{n}}\subseteq \hull{B\setminus S}\subseteq W.
\end{equation}
We claim that $U_n$ is a $W$-witness of the topological independence of $B$. Indeed, take a finite set $F\subseteq B$ and an indexed set $\{z_a:a\in F\}$ of integers such that
\begin{equation}\label{eq:c:in:Un}
c=\sum_{a\in F}z_aa\in U_n.
\end{equation}
Without loss of generality, we may assume that $z_aa\neq 0$ for all $a\in F$.
We are going to show that $z_aa\in W$ for each $a\in F$.
To achieve this, it suffices to check that $F\cap F_n=\emptyset$.
Indeed, assuming that this has already been proved,
for every $a\in F$
we would have $a\in F\setminus F_n\subseteq B\setminus F_n$, so
$z_aa\in \hull{B\setminus F_{n}}\subseteq W$ by \eqref{eq:B:minus:Fn:in:W}.
Therefore, we shall assume that $F\cap F_n\not=\emptyset$ and derive a contradiction from it. Let $m\in{\mathbb N}$ be the minimal element with the property that $F\cap F_m\not=\emptyset$.
Since $F\cap F_n\not=\emptyset$ by our assumption, $m\le n$. Since $F\subseteq B=\{a_n:n\in{\mathbb N}\}$ and
(i$_k$) holds for every $k\in{\mathbb N}$, from the minimality of $m$ one concludes that $F\cap F_m=\{a_m\}$. Therefore,
\begin{equation}
\label{new:eq}
c-z_{a_m} a_m\in\hull{F\setminus F_m}\subseteq \hull{A\setminus F_m}\subseteq U_m
\end{equation}
by \eqref{eq:c:in:Un} and (iii$_m$).
Since (iv$_k$) holds for every $k\in{\mathbb N}$ and $m\le n$, it follows that $U_n\subseteq U_m$. Combining this with \eqref{eq:c:in:Un}, we get
$c\in U_m$. Since $U_m$ is symmetric, this and \eqref{new:eq} yield $z_{a_m} a_m\in U_m+U_m$. Recalling (ii$_m$), we get
$z_{a_m} a_m=0$. On the other hand, since $a_m\in F$, we have $z_{a_m} a_m\not=0$ by our assumption.
This contradiction finishes the proof of the equality $F\cap F_n=\emptyset$.
\end{proof}
\begin{corollary}
\label{precise:corollary}
Let $A$ be an infinite absolutely Cauchy summable set in a topological group $G$ such that $\hull{a}$
is discrete for every $a\in A$. Then
$G$ contains a subgroup (topologically isomorphic to) $S_B$ for some infinite subset $B$ of $A$.
\end{corollary}
\begin{proof}
By Theorem \ref{Gn:discrete:gives:injective:heomeo}, $A$ contains an infinite topologically independent subset $B$. Since $A$ is absolutely Cauchy summable,
so is its subset $B$. By Theorem \ref{Kalton:top:iso}, the Kalton map $\kal{B}: S_B\to G$ is a topologically isomorphic embedding.
\end{proof}
\begin{corollary}\label{cor:thm:basic}
Let $G$ be a topological group such that each of its cyclic subgroups is discrete. Then the following statements are equivalent:
\begin{itemize}
\item[(i)]
$G$ contains an infinite absolutely Cauchy summable set;
\item[(ii)] $G$ contains a subgroup (topologically isomorphic to) $S_A$ for some infinite subset $A$ of $G\setminus\{0\}$.
\end{itemize}
\end{corollary}
\begin{proof}
The implication (i)~$\to$~(ii) follows from Corollary \ref{precise:corollary}.
The reverse implication (ii)~$\to$~(i) follows from the implication (ii)~$\to$~(i) of Theorem \ref{Kalton:top:iso}.
\end{proof}
Remark \ref{remark:Zp} shows that the condition ``all cyclic subgroups are discrete'' cannot be weakened to ``all cyclic subgroups have linear topology'' in
this corollary, as well as in Theorem \ref{Gn:discrete:gives:injective:heomeo} and Corollary \ref{precise:corollary}.
\section{Absolutely summable sets versus absolutely Cauchy summable sets}
\label{sec:abs:summable:sets}
\label{sec:KALTON}
\begin{definition}
We say that a subset $A$ of a topological group $G$ is
{\em absolutely summable} in $G$ provided that, for every family $\{z_a:a\in A\}$ of integers indexed by $A$, there exists $g\in G$ having the following property:
For every neighbourhood $U$ of $0$ one can find a finite set $F\subseteq A$ such that
\begin{equation}\label{eq:def:sum} g-\sum_{a\in E}z_a a\in U \mbox{ for every finite }
E\subseteq A \mbox{ containing } F;
\end{equation}
that is, the indexed set $\{z_aa:a\in A\}$ is summable in the sense of Bourbaki (see \cite[Appendice II, D\'{e}finition 1]{Bourbaki}). In this case we write
\begin{equation}\label{eq:g:is:sum}
g=\sum_{a\in
A}z_a a.
\end{equation}
\end{definition}
The following lemma provides a typical example of an absolutely summable set. We omit its straightforward proof.
\begin{lemma}
\label{summable:sets:in:direct:product}
Let $\{H_a:a\in A\}$ be a family of topological groups and $H=\prod_{a\in A} H_a$ be its direct product. If $x_a\in H_a$ for every $a\in A$, then the set $X=\{x_a:a\in A\}$ is absolutely summable in $H$.
\end{lemma}
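The omitted verification amounts to the following computation, which we record for the reader's convenience (identifying each $x_a$ with the element of $H$ whose $a$-th coordinate is $x_a$ and whose remaining coordinates vanish). Given an indexed set $\{z_a:a\in A\}$ of integers, let $g\in H$ be the point whose $a$-th coordinate is $z_ax_a$. A basic neighbourhood $U$ of $0$ in $H$ restricts only the coordinates in some finite set $F\subseteq A$, and

```latex
$$
g-\sum_{a\in E}z_ax_a\in U
\quad\mbox{for every finite } E\subseteq A \mbox{ with } F\subseteq E,
$$
```

because this difference has $a$-th coordinate $0$ for $a\in E$ and $z_ax_a$ otherwise, so all of its coordinates in $F$ vanish. This is exactly condition \eqref{eq:def:sum} for the sum $g$.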
\begin{lemma}\label{cont:hom:preserves:summ}
Let $G,H$ be topological groups, $A$ an absolutely summable set in $G$ and $f:G\to H$ a continuous homomorphism. Then $f(A)$ is an absolutely summable set in $H$.
If, moreover, $f$ is injective on $A$, then for every indexed set $\{z_a:a\in A\}$ of integers we have
\begin{equation}\label{eq:f(g):is:sum}
f\left(\sum_{a\in A}z_aa\right)=\sum_{a\in A}z_af(a).
\end{equation}
\end{lemma}
\begin{proof}
We may assume that $f$ is injective on $A$ (otherwise we can replace $A$ by $A'\subseteq A$ such that $f(A')=f(A)$ and $f$ is injective on $A'$).
Pick a family $\{z_a:a\in A\}$ of integers arbitrarily. Then there is $g\in G$ such that \eqref{eq:g:is:sum} holds. It suffices to show the equality \eqref{eq:f(g):is:sum} (since $f$ is injective on $A$, we do not distinguish between the summing indexes of $A$ and $f(A)$). Pick a neighbourhood $V$ of $0_H$. Then there is a finite $F\subseteq A$ such that \eqref{eq:def:sum} holds for $U=f^{-1}(V)$. Hence $$f(g)-\sum_{a\in E}z_af(a)=f\left(g-\sum_{a\in E}z_aa\right)\in f(U)=f(f^{-1}(V))\subseteq V$$ holds for every finite $E\subseteq A$ containing $F$. This yields \eqref{eq:f(g):is:sum}. Hence $f(A)$ is absolutely summable.
\end{proof}
\begin{proposition}
\label{exchange of quantifiers generalized}
Given a subset $A$ of a topological group $G$, the following statements are equivalent:
\begin{itemize}
\item[(i)] For every neighbourhood $U$ of $0$ and every indexed set $\{z_a:a\in A\}$ of integers, there exists finite $F\subseteq A$ such that
\begin{equation}\label{exchange:quantifiers}
\sum_{a\in E} z_aa\in U\ \mbox{ for every finite }\ E\subseteq A\setminus F.
\end{equation}
\item[(ii)]
For every neighbourhood $U$ of $0$ there exists finite $F\subseteq A$ such that for every indexed set $\{z_a:a\in A\}$ of integers, we have (\ref{exchange:quantifiers}).
\item[(iii)]
$A$ is absolutely Cauchy summable.
\end{itemize}
\end{proposition}
\begin{proof}
Items (ii) and (iii) are obviously equivalent and (ii) trivially implies (i).
If $A$ is finite, then (i) and (ii) hold trivially (take $F=A$), so we may assume that $A$ is infinite. To show that (i) implies (ii) we prove the contrapositive. Suppose that (ii) does not hold; that is, we can fix a neighbourhood $U$ of $0$ such that for every finite $F\subseteq A$ there exists a finite $E\subseteq A\setminus F$ for which \eqref{exchange:quantifiers} fails. This allows us to pick inductively, for each $n\in{\mathbb N}$, a finite set $E_n\subseteq A\setminus(E_0\cup\dots\cup E_{n-1})$ and a corresponding indexed set $\{z_a:a\in E_n\}$ of integers such that the sets $E_n$ are pairwise disjoint and \begin{equation}\label{eq:exchange:quantifiers}
\sum_{a\in E_n} z_aa\not\in U \mbox{ for all } n\in{\mathbb N}.
\end{equation}
For $a\in A\setminus \bigcup_{n\in{\mathbb N}}E_n$ pick an integer $z_a$ arbitrarily. Then the neighbourhood $U$ and the indexed set $\{z_a:a\in A\}$ of integers witness that (i) does not hold. Indeed, if $F\subseteq A$ is an arbitrary finite set, then, since the sets $E_n$ are pairwise disjoint and $F$ is finite, there is $k\in{\mathbb N}$ such that $E_k\subseteq A\setminus F$. Hence \eqref{exchange:quantifiers} fails by \eqref{eq:exchange:quantifiers}.
\end{proof}
\begin{proposition}\label{as:is:acs}
Every absolutely summable set in a topological group is absolutely Cauchy summable. The converse implication holds in complete groups.
\end{proposition}
\begin{proof} Let $A$ be an absolutely summable set in a topological group $G$. Fix a neighbourhood $U$ of $0$ and an indexed set $\{z_a:a\in A\}$ of integers. By Proposition \ref{exchange of quantifiers generalized} it suffices to find finite $F\subseteq A$ such that \eqref{exchange:quantifiers} holds.
Let $s$ be the sum of $\{z_aa:a\in A\}$. Pick a symmetric neighbourhood $V$ of $0$ with the property that $V+V\subseteq U$. By our assumption, there exists finite $F\subseteq A$ such that
\begin{equation}\label{eq:as:acs}
s-\sum_{a\in E\cup F}z_aa=s-\sum_{a\in F}z_aa-\sum_{a\in E}z_aa\in V \mbox{ for each finite } E\subseteq A\setminus F.
\end{equation}
In particular, for $E=\emptyset$ we have
$$
s-\sum_{a\in F}z_aa\in V.
$$
This together with \eqref{eq:as:acs} gives us
$$
\sum_{a\in E}z_aa\in V+V\subseteq U \mbox{ for every finite } E\subseteq A\setminus F,
$$
because $V$ was chosen symmetric. This gives us \eqref{exchange:quantifiers}.
To prove the second assertion, assume that $A$ is an absolutely Cauchy summable subset of a complete group $G$.
Then for every indexed set $\{z_a:a\in A\}$ of integers the net $\left\{\sum_{a\in F}z_aa:F\subseteq A\mbox{ is finite}\right\}$, ordered by set inclusion, is a Cauchy net in $G$.
Thus, this net converges to some element $g \in G$. A straightforward check shows that $g=\sum_{a\in A}z_aa$.
\end{proof}
\begin{question}
Let $G$ be a linear group. Assume that a subset of $G$ is absolutely Cauchy summable (if and) only if it is absolutely summable. Must $G$ be complete?
\end{question}
\begin{remark}\label{f:omgea:is:abs:sum}
Following \cite{DSS_arxiv} we say that a faithfully indexed sequence $\{a_i:i\in{\mathbb N}\}$ in a topological group $G$ is {\em $f_\omega$-(Cauchy) summable} provided that the sequence $\{\sum_{i=1}^nz_ia_i:n\in{\mathbb N}\}$ converges (is a Cauchy sequence) for every sequence $\{z_i:i\in{\mathbb N}\}$ of integers. It is easy to see that a faithfully indexed sequence in a topological group is $f_\omega$-summable if and only if it is absolutely summable. Further, from Proposition \ref{exchange of quantifiers generalized} it follows that it is $f_\omega$-Cauchy summable if and only if it is absolutely Cauchy summable. For other versions of $f_\omega$-summability see \cite{DSS_arxiv,Spe}.
\end{remark}
\begin{remark}
\label{TAP:remark}
In \cite{SS} the notion of a TAP group was introduced and investigated. (This property appeared earlier without a specific name in \cite{Husek1}; see ($\star$) on page 163 of \cite{Husek1}.) An equivalent definition from \cite{DSS_arxiv} says that a topological group is TAP if and only if it contains no $f_\omega$-summable sequences. Therefore, {\em a topological group $G$ is TAP if and only if every absolutely summable set in $G$ is finite\/}. Other variants of this property were studied in \cite{DT-private}.
\end{remark}
\begin{remark}
Inspired by the notion of a linearly independent set defined in Remark \ref{remark:4.12}, one can introduce a corresponding notion for topological groups.
Indeed, if we replace the real sequence
$\{r_i:i\in{\mathbb N}\}$ in \eqref{eq:dalsi:rovnice} by a sequence $\{z_i:i\in{\mathbb N}\}$ of integers, then the resulting notion makes sense also for a topological group, and we thereby obtain another generalization of independence of a set. (Here $\sum_{i=0}^\infty z_ia_i=0$ means that the net $\left\{\sum_{i\in I}z_ia_i: I\subseteq{\mathbb N}\mbox{ is finite}\right\}$ converges to $0$.)
\end{remark}
\section{Extension of the Kalton map and direct products}
We use $\overline{G}$ to denote the completion of an abelian topological group $G$.
Let $G$ be an abelian topological group.
As we have seen in Theorem \ref{Proposition:Udine}, the Kalton map $\kal{A}:S_A\to G$ is continuous if and only if $A$ is absolutely Cauchy summable.
In such a case,
there exists a unique continuous extension
$$
\skal{A}:\overline{S_A}\to \overline{G}
$$
of $\kal{A}$ which is also a group homomorphism.
Note that
$$
P_A=\prod_{a\in A}\hull{a}\subseteq \prod_{a\in A}\overline{\hull{a}}=\overline{S_A},
$$
so the restriction of $\skal{A}$ to $P_A$ is well-defined. With a slight abuse of notation, we shall denote this restriction also by $\skal{A}$.
In our next proposition we provide a precise condition under which the continuous extension $\skal{A}: P_A\to\overline{G}$ of the Kalton map $\kal{A}$ takes its values in $G$ rather than in $\overline{G}$.
\begin{proposition}\label{Proposition:Prague}
Let $A$ be a subset of a topological group $G$ such that $0\not\in A$. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $A$ is an absolutely summable set in $G$;
\item[(ii)] the Kalton map $\kal{A}:S_A\to G$ is continuous and
its continuous extension $\skal{A}:P_A\to\overline{G}$ satisfies $\skal{A}(P_A)\subseteq G$.
\end{itemize}
Furthermore, if these equivalent conditions hold, then
$\skal{A}(h)=\sum_{a\in A} \kal{A}(h_a)$
for every $h=\{h_a\}_{a\in A}\in P_A$.
\end{proposition}
\begin{proof}
For every $a\in A$ let $x_a$ be the unique element of the cyclic summand $\hull{a}$ of $S_A$ such that $\kal{A}(x_a)=a$.
Define $X=\{x_a:a\in A\}\subseteq S_A$.
Clearly, $A=\kal{A}(X)$ and $\kal{A}$ is injective on $X$.
(i)~$\to$~(ii)
Since $A$ is absolutely summable, it is absolutely Cauchy summable by Proposition \ref{as:is:acs}. By Theorem \ref{Proposition:Udine}, the Kalton map $\kal{A}$ is continuous. Therefore, there exists a unique continuous homomorphism $\skal{A}:P_A\to\overline{G}$ extending $\kal{A}$.
Let $h=\{h_a\}_{a\in A}\in P_A$ be arbitrary. There exists an indexed set $\{z_a:a\in A\}$ of integers such that $h_a=z_a x_a$ for every $a\in A$. Then
$
h=\sum_{a\in A}z_a x_a
$.
Since $X$ is absolutely summable in $P_A$ by Lemma \ref{summable:sets:in:direct:product} and $\kal{A}$ (and thus, $\skal{A}$) is injective on $X$,
from Lemma \ref{cont:hom:preserves:summ}
we conclude that
\begin{equation}
\label{eq:18}
\skal{A}(h)=\skal{A}\left(\sum_{a\in A}z_a x_a\right)=
\sum_{a\in A}z_a\skal{A}(x_a)
=
\sum_{a\in A}z_a\kal{A}(x_a)
=
\sum_{a\in A}z_aa.
\end{equation}
Since $A$ is absolutely summable by (i), $\sum_{a\in A}z_a a\in G$. This shows that $\skal{A}(P_A)\subseteq G$.
(ii)~$\to$~(i)
The set $X$ is absolutely summable in $P_A$ by Lemma \ref{summable:sets:in:direct:product}.
Since $\skal{A}:P_A\to G$ is continuous by (ii),
Lemma \ref{cont:hom:preserves:summ} implies that the set $A=\skal{A}(X)$ is absolutely summable in $\skal{A}(P_A)$. Since $\skal{A}(P_A)\subseteq G$ by (ii), $A$ is absolutely summable in $G$ as well.
The final statement of the proposition follows from \eqref{eq:18}.
\end{proof}
\begin{theorem}\label{prod:in:group}\label{characterization:of:topological:isomorphism:of:extended:Kalton:maps}
For a subset $A$ of a topological group $G$ such that $0\not\in A$,
the following conditions are equivalent:
\begin{itemize}
\item[(i)] the Kalton map $\kal{A}:S_A\to\overline{G}$ is continuous and its continuous extension $\skal{A}:P_A\to\overline{G}$ is a topologically isomorphic embedding satisfying $\skal{A}(P_A)\subseteq G$;
\item[(ii)] the set $A$ is both absolutely summable and topologically independent in $G$.
\end{itemize}
\end{theorem}
\begin{proof}
(i)$\to$(ii) By our assumption and the implication (ii)~$\to$~(i) of Proposition \ref{Proposition:Prague}, $A$ is absolutely summable in $G$. Since $\kal{A}$ is a topologically isomorphic embedding, $A$ is topologically independent in $G$ by Proposition \ref{Proposition:Matsuyama} (ii).
(ii)$\to$(i)
Since $A$ is absolutely summable by our assumption, it is absolutely Cauchy summable in $G$ by Proposition \ref{as:is:acs}.
Since $A$ is also topologically independent in $G$, applying the implication (i)~$\to$~(ii) of Theorem
\ref{Kalton:top:iso},
we conclude that the Kalton map $\kal{A}: S_A\to G$ is a topologically isomorphic embedding. By the uniqueness of completions of topological groups, its continuous extension
$\skal{A}: \overline{S_A}\to \overline{G}$ is also a topologically isomorphic embedding. Since $S_A\subseteq P_A\subseteq\overline{S_A}$, its restriction
$\skal{A}:P_A\to\overline{G}$ to $P_A$ is a topologically isomorphic embedding as well. It remains to note that $\skal{A}(P_A)\subseteq G$ by Proposition \ref{Proposition:Prague}.
\end{proof}
\begin{corollary}
For a topological group $G$ and a cardinal $\kappa$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] $G$ contains a subgroup topologically isomorphic to a direct product of $\kappa$-many non-trivial groups;
\item[(ii)] $G$ contains a topologically independent absolutely summable set of size $\kappa$.
\end{itemize}
\end{corollary}
\begin{proof}
(i)~$\to$~(ii) Let $H$ be a subgroup of $G$ topologically isomorphic to a product $\prod_{a\in A} H_a$ of non-trivial topological groups $H_a$ such that $|A|=\kappa$. Select a non-zero element $x_a\in H_a$ for every $a\in A$ and let
$X=\{x_a:a\in A\}$. Clearly, $|X|=|A|=\kappa$ and $0\not\in X$.
By Lemma \ref{topologcally:independent:in:direct:sum}, $X$ is topologically independent in $\bigoplus_{a\in A} H_a$. Since this direct sum is a subgroup of $H$
which in turn is a subgroup of $G$, we conclude that $X$ is topologically independent in $G$. By Lemma \ref{summable:sets:in:direct:product}, $X$ is absolutely summable in $H$ (and thus, in $G$ as well).
(ii)~$\to$~(i) Let $A$ be a topologically independent absolutely summable subset of $G$ such that $|A|=\kappa$ and $0\not\in A$.
By the implication (ii)~$\to$~(i) of Theorem \ref{prod:in:group}, $G$ contains a subgroup topologically isomorphic to the direct product
$P_A$. Since $0\not\in A$, each $\hull{a}$ for $a\in A$ is a non-trivial group.\end{proof}
\begin{corollary}\label{cor:one:more}
\label{new:products:corollary}
Let $G$ be a topological group such that each of its cyclic subgroups is discrete. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $G$ contains an infinite absolutely summable set;
\item[(ii)] $G$ contains a subgroup (topologically isomorphic to) $P_B$ for some infinite subset $B$ of $G$.
\end{itemize}
Furthermore, if $G$ is complete, then the following item is equivalent to the other two:
\begin{itemize}
\item[(iii)] $G$ contains an infinite absolutely Cauchy summable set.
\end{itemize}
\end{corollary}
\begin{proof}
(i)$\to$(ii) Let $A$ be an infinite absolutely summable set in $G$. Then $A$ is absolutely Cauchy summable by Proposition \ref{as:is:acs}.
By Theorem \ref{Gn:discrete:gives:injective:heomeo}, $A$ contains an infinite topologically independent subset $B$. Being a subset of an absolutely summable set $A$, $B$ is absolutely summable as well.
Now item (ii) follows from the implication (ii)$\to$(i) of Theorem \ref{prod:in:group}.
The implication (ii)$\to$(i) follows from Lemma \ref{summable:sets:in:direct:product}.
Finally, the equivalence of items (i) and (iii) for a complete group $G$ follows from Proposition \ref{as:is:acs}.
\end{proof}
\section{Applications to metric NSS groups}
\label{Sec:inf:dir:sums:in:top:groups}
The first theorem in this section demonstrates that the notion of an absolutely Cauchy summable set can be used to characterize the NSS property in metric groups.
\begin{theorem}\label{metric:NSS:iff:no:abs:C:summable}
A metric group $G$ is NSS if and only if every absolutely Cauchy summable set in $G$ is finite.
\end{theorem}
\begin{proof}
By \cite[Theorem 5.5]{DSS_arxiv}, a group $G$ is NSS if and only if $G$ contains no $f_\omega$-summable sequences. According to Remark \ref{f:omgea:is:abs:sum},
this occurs precisely when every absolutely Cauchy summable set in $G$ is finite.
\end{proof}
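For example, the additive group ${\mathbb R}$ is NSS, and one can see directly that every absolutely Cauchy summable subset $A$ of ${\mathbb R}$ is finite: if $A$ were infinite, then for every finite $F\subseteq A$ the set $A\setminus F$ would contain some $a\neq 0$, whence

```latex
$$
\hull{A\setminus F}\supseteq\hull{a}=\{za:z\in{\mathbb Z}\}\not\subseteq(-1,1),
$$
```

so no finite $F\subseteq A$ can witness the definition of absolute Cauchy summability for the neighbourhood $U=(-1,1)$ of $0$.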
The reader may wish to compare Theorem \ref{metric:NSS:iff:no:abs:C:summable} with \cite[Theorem 3.9]{DT-private}.
\begin{theorem}\label{thm:basic}
Let $G$ be a metric group such that each of its cyclic subgroups is discrete.
Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $G$ contains a subgroup (topologically isomorphic to) $S_A$ for some infinite subset $A$ of $G$;
\item[(ii)] $G$ is not NSS.
\end{itemize}
\end{theorem}
\begin{proof}
Combine Corollary \ref{cor:thm:basic} with Theorem \ref{metric:NSS:iff:no:abs:C:summable}.
\end{proof}
\begin{corollary}\label{sum:of:Z:inside}
Let $G$ be a torsion-free metric group such that each of its cyclic subgroups is discrete. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $G$ is not NSS;
\item[(ii)] $G$ contains a topologically isomorphic copy of ${\mathbb Z}^{({\mathbb N})}$.
\end{itemize}
If $G$ is also complete, then the next condition is equivalent to the above two:
\begin{itemize}
\item[(iii)] $G$ contains a topologically isomorphic copy of ${\mathbb Z}^{{\mathbb N}}$.
\end{itemize}
\end{corollary}
\begin{proof}
The equivalence of items (i) and (ii) follows from Theorem \ref{thm:basic}, as each cyclic subgroup $\hull{a}$ of $G$ is a copy of ${\mathbb Z}$ with the discrete topology. The equivalence of items (i) and (iii) follows from Corollary \ref{new:products:corollary} and Theorem \ref{metric:NSS:iff:no:abs:C:summable}.
\end{proof}
Remark \ref{remark:Zp} shows that the condition ``all cyclic subgroups are discrete'' cannot be weakened to ``all cyclic subgroups have linear topology'' in
Theorem \ref{thm:basic}
and Corollary \ref{sum:of:Z:inside}.
In \cite[Section 6]{DT-private} a subset $U$ of a topological group $G$ is called {\em root invariant} if $\{y:ny=x \mbox{ for some } n\in{\mathbb N}\}\subseteq U$ whenever $x\in U$. The topological group $G$ is then called {\em locally root invariant} provided that its topology has a local base at $0$ consisting of root invariant sets. Finally, $G$ belongs to the class {\em MMP} provided that it is locally root invariant and it is metrizable by a translation-invariant metric $d$ with the property that
\begin{equation}\label{eq:Vajas's}
d(0,g)\le d(0,ng) \mbox{ for all } g\in G\mbox{ and } n\in{\mathbb N}.
\end{equation}
Clearly, locally root invariant groups are torsion-free, while condition \eqref{eq:Vajas's} implies that $\hull{g}$ is discrete for every $g\in G$. Therefore, Corollary \ref{sum:of:Z:inside} generalizes the following result from \cite{DT-private}.
\begin{corollary}
Let $G$ be an MMP group.
\begin{itemize}
\item[(i)]
\cite[equivalence (i)$\leftrightarrow$(iii) of Theorem 6.9]{DT-private}
$G$ contains a topologically isomorphic copy of ${\mathbb Z}^{({\mathbb N})}$ if and only if $G$ is not NSS.
\item[(ii)]\cite[equivalence (i)$\leftrightarrow$(v) of Theorem 6.10]{DT-private} If $G$ is also complete then $G$ contains a topologically isomorphic copy of ${\mathbb Z}^{{\mathbb N}}$ if and only if $G$ is not NSS.
\end{itemize}
\end{corollary}
From Corollary \ref{new:products:corollary} and Theorem \ref{metric:NSS:iff:no:abs:C:summable} one obtains the following result:
\begin{theorem}\label{compact:zero:dim:inside}
Let $G$ be an infinite complete metric torsion group. Then the following statements are equivalent:
\begin{itemize}
\item[(i)] $G$ is not NSS;
\item[(ii)] $G$ contains a subgroup topologically isomorphic to an infinite product of finite non-trivial groups.
\end{itemize}
In particular, if $G$ is not NSS, then $G$ contains an infinite compact zero-dimensional subgroup.
\end{theorem}
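A model example for this theorem is the countable power $G=({\mathbb Z}/2{\mathbb Z})^{\mathbb N}$ with the product topology: it is an infinite compact (hence complete) metric torsion group; it is not NSS, since every neighbourhood of $0$ contains one of the open subgroups

```latex
$$
V_k=\{0\}^{k}\times\prod_{n\ge k}{\mathbb Z}/2{\mathbb Z}
\qquad(k\in{\mathbb N});
$$
```

and item (ii) holds for it trivially, $G$ being itself an infinite product of finite non-trivial groups.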
\begin{remark}
All NSS groups are TAP; see Remark \ref{TAP:remark} for the definition of a TAP group. The implication NSS$\to$TAP was proved in \cite{SS}, although it was mentioned much earlier without a proof in \cite{Husek1}.
TAP groups contain no infinite products of non-trivial groups \cite{SS}.
\end{remark}
\section{Continuity of finite-dimensional projections in topological vector spaces}
The next simple fact is well known; see \cite[1.3 in Chapter III]{Schaefer}.
\begin{fact}
\label{closed:kernels}
Let $E$ and $E'$ be topological vector spaces such that $E'$ is finite-dimensional. Then
a linear map $f:E\to E'$ is continuous if and only if its kernel $\ker f=\{x\in E: f(x)=0\}$ is a closed subset of $E$.
\end{fact}
\begin{proposition}
\label{R:Cauchy:summable}
Let $A$ be an absolutely Cauchy summable subset of a topological vector space $E$. Then:
\begin{itemize}
\item[(i)]
for every neighbourhood $V$ of $0$ in $E$ there exists
a finite set $F\subseteq A$ such that $\rhull{A\setminus F}\subseteq V$;
\item[(ii)] the subspace $\rhull{A}$ of $E$ is locally convex.
\end{itemize}
\end{proposition}
\begin{proof}
(i)
Recall that a subset $S$ of a vector space is called {\em balanced} if $rS\subseteq S$ for every scalar $r$ with $|r|\le 1$. It is a folklore fact that every topological vector space has a base of its topology at $0$ consisting of balanced sets. Therefore, we can fix a closed balanced neighbourhood $U$ of $0$ such that $U\subseteq V$. Since $A$ is absolutely Cauchy summable in $E$, there exists a finite set $F \subseteq A$ such that $\hull{A\setminus F}\subseteq U$. Since $U$ is balanced, it contains all segments connecting $0$ with points of $\hull{A\setminus F}$. Since the union of these segments is dense in $\rhull{A\setminus F}$ and $U$ is closed, $\rhull{A\setminus F}\subseteq U\subseteq V$.
(ii)
Define $L=\rhull{A}$. Let $W$ be a neighbourhood of $0$ in $L$ and let
$V$ be a closed neighbourhood of $0$ in $L$ such that $V + V \subseteq W$.
Apply item (i) to $V$ and $L$ (taken as $E$) to get a finite subset $F$ of $A$ as in the conclusion of item (i).
Let $K$ be the closure of $\rhull{A\setminus F}$ in $L$.
Since $V$ is closed in $L$ and $\rhull{A\setminus F}\subseteq V$, the inclusion $K\subseteq V$ holds as well. Since $K$ contains $\rhull{A\setminus F}$, it has finite co-dimension in $L$, so the quotient $L/K$
is finite-dimensional; since $K$ is closed in $L$, the quotient map $q: L \to L/K$ is continuous by Fact \ref{closed:kernels}.
As $\dim L/K < \infty$, it carries a unique topological vector space topology, so
$q$ is also open. Hence, $q(V)$ is a neighbourhood of $0$ in $L/K$. As $\dim L/K < \infty$, there exists a convex neighbourhood $U$ of $0$ in $L/K$ contained in $q(V)$. Then $q^{-1}(U)$ is a convex neighbourhood of $0$ in $L$ such that $q^{-1}(U) \subseteq q^{-1}(q(V)) = V + K \subseteq V + V \subseteq W$.
\end{proof}
Let $A$ be a linearly independent subset of a vector space $E$. For every set $B\subseteq A$
we denote by $\pi^A_B: \rhull{A}\to \rhull{B}$ the unique projection from $\rhull{A}$ onto $\rhull{B}$
such that $\ker \pi^A_B=\rhull{A\setminus B}$ and the restriction of $\pi^A_B$ to $\rhull{B}$ is the identity map of $\rhull{B}$.
For $b\in A$, we use $\pi^A_b$ instead of $\pi^A_{\{b\}}$ for simplicity.
\begin{lemma}
\label{lemma:continuity:of:projections}
Let $B$ be a non-empty finite subset of a linearly independent subset $A$ of a topological vector space $E$. Then $\pi^A_B$ is continuous if and only if $\pi^A_b$ is continuous for every $b\in B$.
\end{lemma}
\begin{proof}
To check the ``only if part'', assume that $\pi^A_B$ is continuous and fix an arbitrary $b\in B$. Being a linear map defined on the finite-dimensional space $\rhull{B}$, $\pi^{B}_{b}$ is continuous, and so is the composition $\pi^{B}_{b}\circ \pi^A_B=\pi^A_{b}$.
Let us show the ``if'' part. Assume that $\pi^A_b$ is continuous for every $b\in B$. By Fact \ref{closed:kernels}, $\ker \pi^A_b$
is a closed subset of $E$ for every $b\in B$. Therefore, $\ker\pi^A_B=\bigcap_{b\in B} \ker \pi^A_b$ is a closed subset of $E$ as well. Therefore,
$\pi^A_B$ is continuous by Fact \ref{closed:kernels}.
\end{proof}
The next example shows that even for a linearly independent, absolutely Cauchy summable subset $A$ of a topological vector space, {\em all\/} projections $\pi^A_B$ for a non-empty finite set $B\subseteq A$ can be discontinuous.
\begin{example}
\label{bad:projections} For each $i\in{\mathbb N}$, let $e_i=(0,0,\ldots,1,0,\ldots)\in {\mathbb R}^{\mathbb N}$, where the only $1$ is at the $i$th place. Define $a_0=e_0$ and $a_i=e_i-e_{i-1}$ for $i>0$.
Then {\em $A=\{a_i:i\in{\mathbb N}\}$ is a faithfully indexed (hence infinite) linearly independent, absolutely Cauchy summable subset of $E=\rhull{A}={\mathbb R}^{({\mathbb N})}$ such that the projection $\pi^A_B$ is discontinuous for each non-empty
finite set $B\subseteq A$.} All statements except discontinuity of projections are straightforward.
By Lemma \ref{lemma:continuity:of:projections}, it suffices to show that the projection $\pi^A_{a_k}$ is discontinuous for every $k\in{\mathbb N}$. Fix a $k\in{\mathbb N}$. It follows from the definition of $\pi^A_{a_k}$ that
\begin{equation}
\label{eq:projections}
\pi^A_{a_k}(a_k)=a_k
\text{ and }
\pi^A_{a_k}(a_i)=0
\text{ for all }
i\in{\mathbb N}
\text{ with }
i\not=k.
\end{equation}
Let $n\in{\mathbb N}$ with $n\ge k$.
Note that $e_n=\sum_{i=0}^{n} a_i$ by the definition of $a_i$.
Combining this with the linearity of $\pi^A_{a_k}$, \eqref{eq:projections} and $n\ge k$, we get
\begin{equation}
\label{eq:limit}
\pi^A_{a_k}(e_n)=\sum_{i=0}^{n}\pi^A_{a_k}(a_i)=\pi^A_{a_k}(a_k)=a_k\not=0.
\end{equation}
Since $\lim_{n\to\infty} e_n=0$, yet $\lim_{n\to\infty}\pi^A_{a_k}(e_n)=a_k\not=0=\pi^A_{a_k}(0)$ by \eqref{eq:limit}, the map $\pi^A_{a_k}$ is discontinuous.
\end{example}
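The telescoping identity $e_n=\sum_{i=0}^{n}a_i$ and the resulting value $\pi^A_{a_k}(e_n)=a_k$ for $n\ge k$ can be checked numerically. The following sketch is ours (the truncation length $N$ and all helper names are our choices, not the paper's notation); it models vectors of ${\mathbb R}^{({\mathbb N})}$ as length-$N$ tuples and, since $a_i=e_i-e_{i-1}$ gives $v_j=c_j-c_{j+1}$ for $v=\sum_i c_i a_i$, extracts the coefficient of $a_k$ as $c_k=\sum_{j\ge k}v_j$.

```python
# Numeric sketch of Example (bad:projections); N and all names are our choices.
N = 10

def e(i):
    # i-th standard basis vector of R^(N), truncated to length N
    return tuple(1 if j == i else 0 for j in range(N))

def a(i):
    # a_0 = e_0 and a_i = e_i - e_{i-1} for i > 0
    return e(0) if i == 0 else tuple(e(i)[j] - e(i - 1)[j] for j in range(N))

def coef(v, k):
    # coefficient of a_k in the expansion of v, so pi^A_{a_k}(v) = coef(v, k) * a_k
    return sum(v[j] for j in range(k, N))

# telescoping: e_n = a_0 + a_1 + ... + a_n
for n in range(N):
    s = tuple(sum(a(i)[j] for i in range(n + 1)) for j in range(N))
    assert s == e(n)

# pi^A_{a_k}(e_n) = a_k (coefficient 1) whenever n >= k, although e_n -> 0
# coordinatewise: this is exactly the discontinuity of pi^A_{a_k}.
for k in range(N):
    for n in range(k, N):
        assert coef(e(n), k) == 1
```

The last loop makes the discontinuity visible: the inputs $e_n$ tend to $0$, yet every output equals $a_k\not=0$.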
\begin{remark}
(i)
{\em If $A$ is an absolutely Cauchy summable subset of a topological vector space, then for every finite-dimensional subspace $F$ of $\rhull{A}$ there exists a continuous projection from $\rhull{A}$ onto $F$.}
Indeed, since $\rhull{A}$ is locally convex by Proposition \ref{R:Cauchy:summable}~(ii), there exists a closed subspace $L$ of $\rhull{A}$ such that
$\rhull{A}=F\oplus L$ algebraically.\footnote{In fact, it follows from \cite[page 156, statement (2)]{Koethe}
that $\rhull{A}=F\times L$ holds topologically as well.} Therefore, the projection $p:\rhull{A}\to F$ with $\ker p=L$ is continuous by Fact \ref{closed:kernels}.
(ii) Let $A$ be the linearly independent, absolutely Cauchy summable subset of ${\mathbb R}^{({\mathbb N})}$ constructed in Example \ref{bad:projections}.
Then for every non-empty finite set $B\subseteq A$, the canonical projection $\pi^A_B:\rhull{A}\to\rhull{B}$ is discontinuous, yet by item (i), there exists {\em some\/} continuous projection from $\rhull{A}$ onto $\rhull{B}$.
\end{remark}
The following lemma is probably well known; we include its proof for the reader's convenience.
\begin{lemma}
\label{continuity:of:linear:functionals}
A linear functional $f: E\to{\mathbb R}$ on a topological vector space $E$ is continuous if and only if $f(U)\not={\mathbb R}$ for some neighbourhood $U$ of $0$ in $E$.
\end{lemma}
\begin{proof}
The ``only if'' part is clear.
To prove the ``if'' part, fix a neighbourhood $U$ of $0$ in $E$ with $f(U)\not={\mathbb R}$.
Let $V$ be a balanced neighbourhood of $0$ in $E$ contained in $U$. Then $f(V)$ is a proper balanced subset of ${\mathbb R}$, so it must be bounded; that is, $f(V)\subseteq (-r,r)$ for some real
number $r>0$. Finally, for every $\varepsilon>0$, $W=\frac{\varepsilon}{r} V$ is a neighbourhood of $0$ in $E$ such that $f(W)= \frac{\varepsilon}{r} f(V)\subseteq (-\varepsilon,\varepsilon)$. This shows that $f$ is continuous.
\end{proof}
Our next theorem produces infinite sets for which all internal projections are continuous.
\begin{theorem}
\label{subsets:with:continuous:projections}
Every infinite linearly independent, absolutely Cauchy summable subset $A$ of a topological vector space contains an infinite subset $B$ such that the projection $\pi^B_C:\rhull{B}\to \rhull{C}$ is continuous for every finite set $C\subseteq B$.
\end{theorem}
\begin{proof}
Define $B_{-1}=\emptyset$ and $A_{-1}=A$. By induction on $n\in{\mathbb N}$, we shall select sets $B_n$ and $A_n$ satisfying the following conditions:
\begin{itemize}
\item[(i$_n$)] $A_n$ is infinite;
\item[(ii$_n$)] $B_n$ is finite;
\item[(iii$_n$)] $B_n\subseteq A_n\subseteq A$;
\item[(iv$_n$)] $B_{n-1}\subseteq B_n$ and $B_n\setminus B_{n-1}\not=\emptyset$;
\item[(v$_n$)] $A_n\subseteq A_{n-1}$;
\item[(vi$_n$)] $\pi^{A_n}_b$ is continuous for every $b\in B_n$. (By
Lemma \ref{lemma:continuity:of:projections},
this is equivalent to the continuity of the map $\pi^{A_n}_{B_n}$.)
\end{itemize}
Suppose that $n\in{\mathbb N}$ and the sets $A_m$ and $B_m$ satisfying conditions
(i$_m$)--(vi$_m$) have already been selected for all $m\in{\mathbb N}$ with $m<n$. We shall define sets $A_n$ and $B_n$ satisfying conditions
(i$_n$)--(vi$_n$).
Since $A_{n-1}$ is infinite and $B_{n-1}$ is finite by
(i$_{n-1}$) and (ii$_{n-1}$) respectively, we can select $c\in A_{n-1}\setminus (B_{n-1}\cup\{0\})$. Clearly,
\begin{equation}
\label{eq:B_n}
B_n=B_{n-1}\cup\{c\}
\end{equation}
is a finite set, so (ii$_n$) holds. Note that (iv$_n$) holds as well.
Since $c\not=0$, there exists a neighbourhood $O$ of $0$ such that $c\not\in O$. Choose a neighbourhood $V$ of $0$ such that $V-V-V\subseteq O$.
Since $A$ is absolutely Cauchy summable, we can use Proposition \ref{R:Cauchy:summable}~(i) to find a finite set $F\subseteq A$ such that
\begin{equation}
\label{eq:tails}
\rhull{A\setminus F}\subseteq V.
\end{equation}
By enlarging $F$ if necessary, we shall assume, without loss of generality, that $B_n\subseteq F$. Define
\begin{equation}
\label{eq:A_n}
A_n=(A_{n-1}\setminus F)\cup B_n.
\end{equation}
Since $B_{n-1}\subseteq A_{n-1}$ by (iii$_{n-1}$) and $c\in A_{n-1}$, from \eqref{eq:B_n} we get $B_n\subseteq A_{n-1}$. From this and \eqref{eq:A_n} we conclude that (v$_n$) holds.
Since $F$ is finite and $A_{n-1}$ is infinite by (i$_{n-1}$), \eqref{eq:A_n} implies that $A_n$ is infinite as well; that is, (i$_n$) holds. The condition (iii$_n$) follows from (iii$_{n-1}$) and \eqref{eq:A_n}.
It remains only to check the condition (vi$_n$). Consider first the case when $b\in B_{n-1}$. Since $A_n\subseteq A_{n-1}$ by (v$_n$) and $\pi^{A_{n-1}}_b$ is continuous by (vi$_{n-1}$), so is its restriction $\pi^{A_n}_b=\pi^{A_{n-1}}_b\restriction{\rhull{A_n}}$ to $\rhull{A_n}$. Since $B_n\setminus B_{n-1}=\{c\}$, it remains only to verify that the linear functional $\pi^{A_n}_c$ is continuous. By Lemma \ref{continuity:of:linear:functionals},
it suffices to find a neighbourhood $U$ of $0$ such that
\begin{equation}
\label{missing:c}
c\not\in\pi^{A_n}_{c}(U\cap \rhull{A_n}).
\end{equation}
Since $\pi^{A_{n-1}}_{B_{n-1}}$ is continuous by (vi$_{n-1}$),
there exists a neighbourhood $W$ of $0$ such that
\begin{equation}
\label{eq:small:nghb}
\pi^{A_{n-1}}_{B_{n-1}}(W\cap\rhull{A_{n-1}})\subseteq V.
\end{equation}
We claim that $U=V\cap W$ is the desired neighbourhood. Indeed, assume that \eqref{missing:c} fails. Then $c=\pi^{A_n}_{c}(d)$ for some $d\in U\cap \rhull{A_n}$. It follows from \eqref{eq:B_n} and \eqref{eq:A_n} that
$A_n=(A_{n-1}\setminus F)\cup B_{n-1}\cup\{c\}$. Since these three sets are pairwise disjoint and $A$ is linearly independent, there exist unique $a\in \rhull{A_{n-1}\setminus F}$ and $b\in \rhull{B_{n-1}}$ such that
$d=a+b+c$. In particular,
\begin{equation}
\label{eq:b:as:projection}
b=\pi^{A_{n-1}}_{B_{n-1}}(d).
\end{equation}
To get a contradiction, it is enough to show that each of the three elements $d$, $a$, $b$ belongs to $V$. Indeed, assuming that this has already been proved,
we would get $c=d-a-b\in V- V-V\subseteq O$, in contradiction with our choice of $O$.
First, $d\in U\subseteq V$. Second, $d\in U\subseteq W$ and
$d\in \rhull{A_{n}}\subseteq \rhull{A_{n-1}}$ by (v$_n$), so that
$d\in W\cap\rhull{A_{n-1}}$. Combining this with
\eqref{eq:small:nghb}
and
\eqref{eq:b:as:projection},
we conclude that $b\in V$. Finally, $a\in \rhull{A_{n-1}\setminus F}\subseteq \rhull{A\setminus F}\subseteq V$ by (iii$_{n-1}$) and \eqref{eq:tails}.
This finishes the verification of the condition (vi$_n$).
The inductive step has been completed.
Since (iii$_m$) holds for every $m\in{\mathbb N}$,
\begin{equation}
\label{eq:def:B}
B=\bigcup_{m\in{\mathbb N}} B_m
\end{equation}
is a subset of $A$. Since (iv$_m$) holds for every $m\in{\mathbb N}$, $B$ is infinite. We claim that
\begin{equation}
\label{eq:B:in:the:intersection}
B\subseteq \bigcap_{n\in {\mathbb N}} A_n.
\end{equation}
Indeed, by
\eqref{eq:def:B},
in order to establish \eqref{eq:B:in:the:intersection}, it suffices to check that $B_m\subseteq A_n$ for all $m,n\in{\mathbb N}$. If $m\le n$, then $B_m\subseteq B_n$, as (iv$_k$) holds for every $k\in{\mathbb N}$. Since $B_n\subseteq A_n$ by (iii$_n$), this yields the inclusion $B_m\subseteq A_n$ for $m\le n$. Suppose now that $n< m$. By (iii$_m$), $B_m\subseteq A_m$. Since (v$_k$) holds for every $k\in{\mathbb N}$, we have $A_m\subseteq A_n$. This yields the inclusion $B_m\subseteq A_n$ in case $n< m$ as well.
By Lemma \ref{lemma:continuity:of:projections}, to finish the proof of our theorem it remains only to show that $\pi^{B}_b$ is continuous for every $b\in B$. Let $b\in B$ be arbitrary. By \eqref{eq:def:B}, there exists $m\in{\mathbb N}$ such that $b\in B_m$. The map $\pi^{A_m}_b$ is continuous by (vi$_m$). Since $B\subseteq A_m$ by \eqref{eq:B:in:the:intersection}, its restriction $\pi^{A_m}_b\restriction_{\rhull{B}}=\pi^{B}_b$ to $\rhull{B}$ is also continuous.
\end{proof}
\begin{question}
If $A$ is an absolutely Cauchy summable subset of a topological vector space, is $\rhull{A}$ topologically isomorphic to ${\mathbb R}^{(I)}$ for some index set $I$?
\end{question}
\section{The linear Kalton map and its connection to the Kalton map}
\label{Sec:lkal}
Let $E$ be a vector space and let $A\subseteq E\setminus\{0\}$. Then there exists a unique linear map
\begin{equation}\label{eq:def:of:lkal}
\lkal{A}:\bigoplus_{a\in A}\rhull{a}\to E
\end{equation}
which extends each natural inclusion map $\rhull{a}\to E$ for $a\in A$. We call the map $\lkal{A}$ as in \eqref{eq:def:of:lkal} the {\em linear Kalton map associated with $A$}. Clearly, the set $A$ is linearly independent (or equivalently, $A$ is a Hamel basis of $\rhull{A}$) if and only if $\lkal{A}$ is injective. The vector space
$$
lS_A=\bigoplus_{a\in A}\rhull{a}
$$
is topologically isomorphic to the topological vector space ${\mathbb R}^{(A)}$. Moreover, $lS_A$ contains $S_A$ as a subgroup and $\lkal{A}\restriction_{S_A}=\kal{A}$.
When $E$ is a topological vector space, one can discuss topological properties of the linear Kalton map, as now both the domain and range of the map are topological spaces. Proposition \ref{kal:iso:lkal:cont} deals with the continuity of this map, while Proposition \ref{openess:of:linear:kalton:map} deals with its openness.
\begin{proposition}\label{kal:iso:lkal:cont}
Let $E$ be a topological vector space and let $A\subseteq E\setminus\{0\}$. Then
the following statements are equivalent:
\begin{itemize}
\item[(i)] The linear Kalton map $\lkal{A}$ is continuous.
\item[(ii)] The Kalton map $\kal{A}$ is continuous.
\item[(iii)] The set $A$ is absolutely Cauchy summable.
\end{itemize}
\end{proposition}
\begin{proof}
(i) implies (ii), as $\lkal{A}\restriction_{S_A}=\kal{A}$. Items (ii) and (iii) are equivalent by Theorem \ref{Proposition:Udine}.
(iii)~$\to$~(i)
Let $U$ be a neighbourhood of $0$ in $E$. Choose a neighbourhood $V$ of $0$ in $E$ such that $V+V\subseteq U$. By (iii) and Proposition \ref{R:Cauchy:summable}~(i), there exists
a finite set $F\subseteq A$ such that $\rhull{A\setminus F}\subseteq V$. If $F=\emptyset$, we let $W=V$. Otherwise
we fix a neighbourhood $W$ of zero of $E$ such that $W+W+\dots+W\subseteq V$, where $|F|$-many $W$'s are taken in the sum. Then
$$
\lkal{A}\left( \bigoplus_{a\in F} (\rhull{a} \cap W)\oplus\bigoplus_{a\in A\setminus F} \rhull{a}\right)
=
\sum_{a\in F} (\rhull{a} \cap W)+\rhull{A\setminus F}\subseteq V+V\subseteq U.
$$
This establishes the continuity of $\lkal{A}$.
\end{proof}
\begin{proposition}
\label{openess:of:linear:kalton:map}
For a linearly independent subset $A$ of a topological vector space $E$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] the linear Kalton map $\lkal{A}: lS_A\to \rhull{A}$ is an open map onto its image $\rhull{A}$;
\item[(ii)] the linear functional $\pi^A_a: \rhull{A}\to\rhull{a}$ is continuous for every $a\in A$.
\end{itemize}
\end{proposition}
\begin{proof}
For every $a\in A$, denote by $p_a$ the projection from $lS_A$ to the $a$th coordinate $\rhull{a}$ of the direct sum $lS_A$. Clearly,
\begin{equation}
\label{commutative:diagram}
\lkal{A}\restriction_{\rhull{a}}\circ p_a = \pi^A_a\circ \lkal{A}
\
\mbox{ for every }
\
a\in A.
\end{equation}
Recall that each $\lkal{A}\restriction_{\rhull{a}}$ is a topological isomorphism. It follows from the definition of $lS_A$ that each projection $p_a: lS_A\to \rhull{a}$ is continuous.
(i)$\to$(ii) Fix $a\in A$ and let $V$ be an open subset of $\rhull{a}$. It suffices to show that $(\pi^A_a)^{-1}(V)$ is an open subset of $\rhull{A}$.
Since $\lkal{A}\restriction_{\rhull{a}}^{-1}$ is a topological isomorphism, $\lkal{A}\restriction_{\rhull{a}}^{-1}(V)$ is open in $\rhull{a}$. Since $p_a$ is continuous, $W =
p_a^{-1}(\lkal{A}\restriction_{\rhull{a}}^{-1}(V))$ is open in $lS_A$.
Applying (i), we conclude that $\lkal{A}(W)$ is open in
$\lkal{A}(lS_A)=\rhull{A}$.
From \eqref{commutative:diagram}, we get
\begin{equation}
\label{eq:29}
(\pi^A_a)^{-1}(V)=\lkal{A}\left((\lkal{A}\restriction_{\rhull{a}}\circ p_a)^{-1}(V)\right)=\lkal{A}(W).
\end{equation}
Therefore, $(\pi^A_a)^{-1}(V)$ is an open subset of $\rhull{A}$.
(ii)$\to$(i)
To establish (i), it suffices to find, for every basic open neighbourhood $O$ of $0$ in $lS_A$, an open neighbourhood $U$ of $0$ in $E$ such that $U\cap \rhull{A}\subseteq \lkal{A}(O)$. There exist a finite set $F\subseteq A$ and a family $\{W_a:a\in F\}$ of open neighbourhoods of $0$ in $E$ such that
\begin{equation}
\label{eq:O}
O=\bigoplus_{a\in F} (\rhull{a} \cap W_a)\oplus\bigoplus_{a\in A\setminus F} \rhull{a}.
\end{equation}
For every $a\in F$ we can use (ii) to fix an open neighbourhood $V_a$ of $0$ in $E$ such that $\pi^A_a(V_a)\subseteq W_a$. Since $F$ is finite, $U=\bigcap_{a\in F} V_a$ is an open neighbourhood of $0$ in $E$. Since $\pi^A_a(U)\subseteq W_a$ for every $a\in F$, from \eqref{eq:O} and the definition of $\lkal{A}$ we conclude that
$U\cap \rhull{A}\subseteq \lkal{A}(O)$.
\end{proof}
\begin{example}
Let $A$ be the linearly independent, absolutely Cauchy summable subset of ${\mathbb R}^{({\mathbb N})}$ constructed in Example \ref{bad:projections}.
Then {\em the linear Kalton map $\lkal{A}$ is a continuous injection which is not an open map onto its image $\rhull{A}$\/}.
Indeed, the continuity of $\lkal{A}$ follows from Proposition \ref{kal:iso:lkal:cont}. Since the canonical projections $\pi^A_a$ are discontinuous,
$\lkal{A}$ is not open by Proposition \ref{openess:of:linear:kalton:map}.
\end{example}
This example shows that in our next theorem one cannot take $B=S$.
\begin{theorem}
\label{abs:Cauchy:summable:contain:isomorphic:subsets}
Every infinite absolutely Cauchy summable subset
$S$ of a topological vector space
contains an infinite subset $B$
such that
the linear Kalton map
$\lkal{B}$
is a topologically isomorphic embedding.
\end{theorem}
\begin{proof}
Let $A$ be a maximal linearly independent subset of $S$; such a subset exists by Zorn's Lemma.
Then $\rhull{S}=\rhull{A}$ by the maximality of $A$. If $A$ were finite, then
$S$ would be an infinite absolutely Cauchy summable subset of the finite-dimensional Euclidean space $\rhull{A}$. Since finite-dimensional Euclidean spaces are NSS groups, this would contradict Theorem \ref{metric:NSS:iff:no:abs:C:summable}. Therefore, $A$ must be infinite.
As a subset of the absolutely Cauchy summable set $S$, $A$ itself is absolutely Cauchy summable. Since $A$ is also linearly independent, we can use
Theorem \ref{subsets:with:continuous:projections} to choose an infinite subset $B$ of $A$ as in the conclusion of this theorem.
As a subset of the linearly independent set $A$, $B$ is also linearly independent. Therefore, the linear Kalton map $\lkal{B}$ is an injection.
Furthermore, $\lkal{B}$ is an open map onto its image $\rhull{B}$ by Proposition \ref{openess:of:linear:kalton:map}.
As a subset of the absolutely Cauchy summable set $S$, $B$ is absolutely Cauchy summable as well.
Therefore, the linear Kalton map $\lkal{B}$ is continuous by the implication (iii)~$\to$~(i) of Proposition \ref{kal:iso:lkal:cont}.
\end{proof}
\begin{corollary}
\label{continuous:contain:isomorphic:subsets}
Let $S$ be an infinite subset of a topological vector space such that the Kalton map $\kal{S}$ is continuous. Then $S$ contains a countably infinite subset $B$ such that the linear Kalton map $\lkal{B}$ is a topologically isomorphic embedding.
\end{corollary}
\begin{proof}
By the equivalence (ii)~$\leftrightarrow$~(iii) of Proposition \ref{kal:iso:lkal:cont}, $S$ is absolutely Cauchy summable. Now the conclusion follows from Theorem \ref{abs:Cauchy:summable:contain:isomorphic:subsets}.
\end{proof}
If $A$ is a subset of a topological vector space such that the Kalton map $\kal{A}$ is a topologically isomorphic embedding, then the set $A$ is topologically independent (Proposition \ref{Proposition:Matsuyama}) and, consequently, also linearly independent by Proposition \ref{prop:step:to:Lie}~(i). Thus the linear Kalton map $\lkal{A}$ is an (algebraic) isomorphism onto $\rhull{A}$. Moreover, $\lkal{A}$ is also continuous by Proposition \ref{kal:iso:lkal:cont}. However, we do not know whether it is an open map onto its image $\rhull{A}$.
\begin{question}
If $A$ is a subset of a topological vector space such that $\kal{A}$ is a topologically isomorphic embedding, is then $\lkal{A}$
a topologically isomorphic embedding as well?
\end{question}
\section{Infinite direct sums and products in topological vector spaces}\label{Sec:final}
In this section we provide characterizations of topological vector spaces that contain either ${\mathbb R}^{({\mathbb N})}$ or ${\mathbb R}^{\mathbb N}$ as a subspace. As particular corollaries we obtain some classical results from \cite{BPR,Lip}.
\begin{theorem}\label{complete:TVS:not:TAP:iff:contains:R:to:N}
Let $E$ be a topological vector space. Then the following conditions are equivalent:
\begin{itemize}
\item[(i)] $E$ contains the topological vector space ${\mathbb R}^{({\mathbb N})}$ as its subspace;
\item[(ii)] $E$ contains the topological group ${\mathbb Z}^{({\mathbb N})}$ as its subgroup;
\item[(iii)] $E$ has an infinite absolutely Cauchy summable set.
\end{itemize}
Moreover, if $E$ is complete, then these three conditions are also equivalent to:
\begin{itemize}
\item[(iv)]
$E$ contains the topological vector space ${\mathbb R}^{\mathbb N}$ as its subspace.
\end{itemize}
\end{theorem}
\begin{proof} The implication (i)~$\to$~(ii) is clear.
The implication (ii)~$\to$~(iii) follows from Lemma \ref{Cauchy:summable:in:direct:sum} and Remark \ref{remark:on:abs:C:summability}~(ii).
The implication (iii)~$\to$~(i) follows from Theorem
\ref{abs:Cauchy:summable:contain:isomorphic:subsets}.
The equivalence (i)~$\leftrightarrow$~(iv) for a complete topological vector space $E$ follows from the uniqueness of the completion and the fact that ${\mathbb R}^{\mathbb N}$ is the completion of ${\mathbb R}^{({\mathbb N})}$.
\end{proof}
We say that a topological vector space $E$ {\em has small lines\/} if for every neighbourhood $V$ of $0$ in $E$ there exists $x\in E\setminus\{0\}$ with
$\rhull{x}\subseteq V$; that is, $V$ contains a ``line'' (a one-dimensional subspace).
\begin{corollary}\cite[Theorem 3]{Lip}
\label{Lip:corollary}
For a metric vector space $E$, the following conditions are equivalent:
\begin{itemize}
\item[(i)] $E$ contains a subspace topologically isomorphic to ${\mathbb R}^{({\mathbb N})}$;
\item[(ii)] $E$ has small lines.
\end{itemize}
\end{corollary}
\begin{proof}
The implication (i)~$\to$~(ii) follows from the fact that ${\mathbb R}^{({\mathbb N})}$ has small lines. To establish the implication (ii)~$\to$~(i), assume that $E$ has small lines. Then
$E$ is not NSS. By Theorem \ref{metric:NSS:iff:no:abs:C:summable}, $E$ contains an infinite absolutely Cauchy summable set. To get (i) it remains to apply Theorem \ref{complete:TVS:not:TAP:iff:contains:R:to:N}.
\end{proof}
Combining Theorem \ref{complete:TVS:not:TAP:iff:contains:R:to:N} with Corollary \ref{Lip:corollary}, one obtains the following classical result of Bessaga, Pelczynski and Rolewicz:
\begin{corollary}\cite[Theorem 9]{BPR}
A complete metric vector space contains a subspace topologically isomorphic to ${\mathbb R}^{{\mathbb N}}$ if and only if it has small lines.
\end{corollary}
Recall that in the multiplier convergence theory one typically takes a fixed set $\mathscr{F}\subseteq {\mathbb R}^{\mathbb N}$ of ``multipliers'' and calls a
series $\sum_{n=0}^\infty a_n$ in a topological vector space {\em $\mathscr{F}$ multiplier convergent\/} provided that the series $\sum_{n=0}^\infty f(n)a_n$ converges
for every $f\in\mathscr{F}$.
By varying the set $\mathscr{F}$ of multipliers one can obtain a fine description of the level of convergence
of a given series, leading to a rich theory; see \cite{Swartz}. The toughest convergence condition on a series is obviously imposed by taking
$\mathscr{F}$ to be the whole ${\mathbb R}^{\mathbb N}$. For brevity, we shall say that a sequence $\{a_n:n\in{\mathbb N}\}$ of elements of a topological vector space $E$ is {\em ${\mathbb R}^{\mathbb N}$-convergent\/}
if the series $\sum_{n=0}^\infty a_n$ is ${\mathbb R}^{\mathbb N}$ multiplier convergent; that is, if the series $\sum_{n=0}^\infty r_n a_n$ converges to some element of $E$ for every real sequence $\{r_n:n\in{\mathbb N}\}$.
For such a sequence $A=\{a_n:n\in{\mathbb N}\}$, the map ${\mathbb N}\to A$ defined by $n\mapsto a_n$ can fail to be finite-to-one only when $a_n=0$ for all but finitely many members of the sequence. In this case we say that $A=\{a_n:n\in{\mathbb N}\}$ is {\em trivial\/}; otherwise (that is, if $a_n\ne 0$ for infinitely many $n$) we call it {\em non-trivial\/}.
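As a concrete illustration (this example is ours, not taken from the text): in $E={\mathbb R}^{\mathbb N}$ with the product topology, the sequence $a_n=e_n$ is a non-trivial ${\mathbb R}^{\mathbb N}$-convergent sequence, since the $j$-th coordinate of the partial sums of $\sum_{n}r_ne_n$ stabilizes at $r_j$. A minimal numeric sketch:

```python
# Our own example: in R^N with the product (coordinatewise) topology, the
# sequence a_n = e_n is R^N-convergent, because the j-th coordinate of the
# partial sums of sum_n r_n e_n equals r_j once more than j terms are summed.

def coord_of_partial_sum(r, j, num_terms):
    """j-th coordinate of sum_{n=0}^{num_terms-1} r[n] * e_n."""
    return sum(r[n] for n in range(num_terms) if n == j)

r = [3.0, -1.5, 2.0, 0.25, -7.0]
for j in range(len(r)):
    for num_terms in range(j + 1, len(r) + 1):
        # the j-th coordinate has stabilized at r[j]
        assert coord_of_partial_sum(r, j, num_terms) == r[j]
```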
Bessaga, Pelczynski and Rolewicz used the notion of an ${\mathbb R}^{\mathbb N}$-convergent sequence (without giving it a name) to obtain a characterization of complete metric vector spaces containing a subspace isomorphic to ${\mathbb R}^{\mathbb N}$. Indeed, they proved that a complete metric vector space contains a subspace isomorphic to ${\mathbb R}^{\mathbb N}$ if and only if it contains
a non-trivial ${\mathbb R}^{\mathbb N}$-convergent sequence
\cite[Corollary]{BPR}.
Our last theorem shows that both ``metric'' and ``complete'' are superfluous in this result.
\begin{theorem}\label{another:theorem}
A topological vector space contains a subspace isomorphic to ${\mathbb R}^{\mathbb N}$ if and only if it contains
a non-trivial ${\mathbb R}^{\mathbb N}$-convergent sequence.
\end{theorem}
\begin{proof}
The ``only if'' part is obvious. To show the ``if'' part, assume that $A=\{a_n:n\in{\mathbb N}\}$ is a non-trivial
${\mathbb R}^{\mathbb N}$-convergent sequence in a topological vector space $E$. Then the {\em set\/} $A$ must be infinite and absolutely summable.
Consequently, the set $A$ is absolutely Cauchy summable by Proposition \ref{as:is:acs}.
By Theorem \ref{abs:Cauchy:summable:contain:isomorphic:subsets}, there is an infinite faithfully indexed subset $B=\{b_n:n\in{\mathbb N}\}$ of $A$ such that the linear Kalton map $\lkal{B}$ is a topologically isomorphic embedding. By the uniqueness of completions of topological vector spaces, its continuous extension $\slkal{B}: \overline{lS_B} \to \overline{E}$ to the completions $\overline{lS_B}$ and $\overline{E}$ of $lS_B$ and $E$, respectively, is still a topological isomorphism. Clearly, $ \overline{lS_B}\cong {\mathbb R}^{\mathbb N}$. Therefore, it suffices to show that $\slkal{B}(\overline{lS_B})\subseteq E$.
Since the faithfully indexed sequence $\{b_n:n\in{\mathbb N}\}$ is a subsequence of the ${\mathbb R}^{\mathbb N}$-convergent sequence $\{a_n:n\in{\mathbb N}\}$, the former sequence is ${\mathbb R}^{\mathbb N}$-convergent as well.
Let $x_n=(\slkal{B})^{-1}(b_n)$ for every $n\in{\mathbb N}$. Since every element $v \in \overline{lS_B}$ can be written in the form $v = \sum_{n=0}^\infty r_n x_n$, we obtain
$$
\slkal{B}(v) = \slkal{B}\left(\sum_{n=0}^\infty r_nx_n\right)
=
\sum_{n=0}^\infty r_n\slkal{B}(x_n)
=
\sum_{n=0}^\infty r_nb_n
\in E,
$$
which implies $\slkal{B}(\overline{lS_B})\subseteq E$.
\end{proof}
\begin{remark}\label{answer:to:a:question:of:M:Husek} Topological vector spaces without ${\mathbb R}^{\mathbb N}$-convergent sequences were studied in \cite{Husek2} under the name ``spaces without free sums''. In Proposition 4 of \cite{Husek2}
Hu\v{s}ek proved that the following statements are equivalent for a topological vector space $E$:
\begin{enumerate}
\item There exists a continuous linear mapping $f:{\mathbb R}^{\mathbb N}\to E$ such that $f({\mathbb R}^{\mathbb N})$ has infinite dimension.
\item There exists a continuous linear mapping $f:{\mathbb R}^{\mathbb N}\to E$ such that infinitely many $f(e_i)$ are non-zero (here $e_i(j)=\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta).
\item $E$ contains an ${\mathbb R}^{\mathbb N}$-convergent sequence.
\end{enumerate}
In a remark following \cite[Proposition 4]{Husek2}, Hu\v{s}ek states that he does not know whether the following property can be added to the above list of equivalent conditions.
\begin{itemize}
\item[(4)] There exists a one-to-one continuous linear mapping from ${\mathbb R}^{\mathbb N}$ to $E$.
\end{itemize}
Theorem \ref{another:theorem} shows that it is possible, thereby answering the question of Hu\v{s}ek. In fact, the same theorem shows that one can add to the list of equivalent properties even the following stronger property:
\begin{itemize}
\item[(5)] There exists a topologically isomorphic embedding ${\mathbb R}^{\mathbb N} \hookrightarrow E$.
\end{itemize}
\end{remark}
To the best of our knowledge, ${\mathbb R}^{\mathbb N}$-convergent sequences were also studied in \cite{Pfister}.
% https://arxiv.org/abs/1911.05411
\title{Menon-type identities again: A note on a paper by Li, Kim and Qiao}
\begin{abstract}
We give common generalizations of the Menon-type identities by Sivaramakrishnan (1969) and Li, Kim, Qiao (2019). Our general identities involve arithmetic functions of several variables, and also contain, as special cases, identities for gcd-sum type functions. We point out a new Menon-type identity concerning the lcm function. We present a simple character free approach for the proof.
\end{abstract}
\section{Introduction}
Menon's classical identity \cite{Men1965} states that for every $n\in \N:=\{1,2,\ldots \}$,
\begin{equation} \label{Menon_id}
M(n):= \sum_{\substack{a=1 \\ (a,n)=1}}^n (a-1,n) = \varphi(n) \tau(n),
\end{equation}
where $(a,n)$ stands for the greatest common divisor of $a$ and $n$, $\varphi(n)$ is Euler's totient function and
$\tau(n)=\sum_{d\mid n} 1$ is the divisor function.
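Menon's identity \eqref{Menon_id} is easy to verify by brute force for small $n$; the following sketch is ours (the function names are our choices, not standard notation):

```python
# Brute-force numerical check of Menon's identity M(n) = phi(n) * tau(n).
from math import gcd

def phi(n):
    # Euler's totient function, by direct count
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def tau(n):
    # number of divisors of n
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def menon_lhs(n):
    # M(n) = sum of gcd(a - 1, n) over 1 <= a <= n with gcd(a, n) = 1;
    # note the term a = 1 contributes gcd(0, n) = n
    return sum(gcd(a - 1, n) for a in range(1, n + 1) if gcd(a, n) == 1)

for n in range(1, 101):
    assert menon_lhs(n) == phi(n) * tau(n)
```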
Menon \cite{Men1965} proved this identity by three distinct methods, the first one being based on the Cauchy-Frobenius-Burnside lemma on group
actions. This method was used later by Sury \cite{Sur2009}, T\'oth \cite{Tot2011}, Li and Kim \cite{LiKim2017}, and other authors to
derive different generalizations and analogs of \eqref{Menon_id}. Number theoretic methods were also applied in several papers to deduce various
Menon-type identities. See, e.g., \cite{Hau2005,HW1996,HauWan1997,LiHuKimTaiw,LiKim2018,LKQ2019,Tot2018,Tot2019,Tot,WZJ2019,ZhaCao2017}.
The following old generalization, due to Sivaramakrishnan \cite{Siv1969}, is less known and was not considered in the papers mentioned above; we state it in a slightly different form:
\begin{equation} \label{Menon_id_Siv}
M(m,n,t):= \sum_{\substack{a=1 \\ (a,m)=1}}^t (a-1,n) = \frac{t\, \varphi(m)\tau(n)}{m} \prod_{p^\nu\mid \mid n_1}
\left(1-\frac{\nu}{(\nu+1)p} \right),
\end{equation}
where $m,n,t\in \N$ such that $m\mid t$, $n\mid t$ and $n_1=\max \{d\in \N: d\mid n, (d,m)=1\}$. If $m=n=t$, then
$M(n,n,n)=M(n)$, that is, \eqref{Menon_id_Siv} reduces to \eqref{Menon_id}. However, if $n\mid m$ and $t=m$, then it
follows from \eqref{Menon_id_Siv} that
\begin{equation*}
\sum_{\substack{a=1 \\ (a,m)=1}}^m (a-1,n) = \varphi(m) \tau(n),
\end{equation*}
which was recently obtained by Jafari and Madadi \cite[Cor.\ 2.2]{JafMad2017}, using group theoretic arguments,
without referring to the paper \cite{Siv1969}. It was pointed out by Sivaramakrishnan \cite{Siv1969} that if $t=[m,n]$, the least common
multiple of $m$ and $n$, then $M(m,n,[m,n])$ is a multiplicative function of two variables.
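As a sanity check, the next Python snippet (an illustration added here, not part of the paper) verifies \eqref{Menon_id_Siv} exactly, using rational arithmetic, for small $m,n$ and the common multiple $t=mn$:

```python
from math import gcd
from fractions import Fraction

def factor(n):
    # prime factorization as a dict {p: exponent}
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def tau(n):
    t = 1
    for e in factor(n).values():
        t *= e + 1
    return t

def rhs(m, n, t):
    # right-hand side of Sivaramakrishnan's identity
    n1 = max(d for d in range(1, n + 1) if n % d == 0 and gcd(d, m) == 1)
    val = Fraction(t * phi(m) * tau(n), m)
    for p, nu in factor(n1).items():
        val *= 1 - Fraction(nu, (nu + 1) * p)
    return val

for m in range(1, 11):
    for n in range(1, 11):
        t = m * n                      # a common multiple of m and n
        lhs = sum(gcd(a - 1, n) for a in range(1, t + 1) if gcd(a, m) == 1)
        assert lhs == rhs(m, n, t)
```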
In a quite recent paper, Li, Kim and Qiao \cite[Th.\ 2.5]{LKQ2019} proved that for any integers
$n\ge 1$, $k\ge 0$, $\ell \ge 1$ one has
\begin{equation} \label{id_Debrecen_paper}
\sum_{\substack{1\le a,b_1,\ldots,b_k \le n \\ (a,n)=1}} (a^{\ell}-1,b_1,\ldots,b_k,n)=\varphi(n) (\id_k \ast \, C^{(\ell)})(n),
\end{equation}
where $\ast$ denotes the Dirichlet convolution, $\id_k(n)=n^k$ and $C^{(\ell)}(n)$ is the number of solutions of the congruence $x^{\ell}
\equiv 1$ (mod $n$) with $(x, n)=1$. Note that the condition $(x, n)=1$ can be omitted here.
For the proof they used properties of characters of finite abelian groups.
The case $k=0$ recovers certain identities given by the second author \cite{Tot2011}. If $k=0$ and $\ell=1$, then \eqref{id_Debrecen_paper}
reduces to \eqref{Menon_id}.
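Identity \eqref{id_Debrecen_paper} can likewise be tested by brute force. The snippet below (illustrative only) checks the cases $(k,\ell)=(1,2)$ and $(2,1)$ for small $n$:

```python
from math import gcd
from functools import reduce
from itertools import product

def C(l, n):
    # number of x (mod n) with x^l congruent to 1 (mod n)
    return sum(1 for x in range(n) if pow(x, l, n) == 1 % n)

def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def check(n, k, l):
    units = [a for a in range(1, n + 1) if gcd(a, n) == 1]
    lhs = sum(reduce(gcd, (a ** l - 1,) + bs + (n,))
              for a in units
              for bs in product(range(1, n + 1), repeat=k))
    rhs = phi(n) * sum(d ** k * C(l, n // d)
                       for d in range(1, n + 1) if n % d == 0)
    assert lhs == rhs, (n, k, l)

for n in range(1, 16):
    check(n, 1, 2)
    check(n, 2, 1)
```

The comparison `== 1 % n` makes the count correct for $n=1$ as well, where every residue is a solution.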
The sum $M(n)$ is related to the gcd-sum function, also known as Pillai's arithmetical function, given by
\begin{equation} \label{gcd_sum}
G(n):=\sum_{a=1}^n (a,n)= n \sum_{d\mid n} \frac{\varphi(d)}{d} \quad (n\in \N).
\end{equation}
Many different generalizations and analogs of the function $G(n)$ are presented in the literature. See, e.g.,
\cite{Hau2008,Tot2010}.
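The closed form in \eqref{gcd_sum} is also easy to verify by direct computation; the following snippet (illustration only) does so for $n<150$:

```python
from math import gcd

def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

# Pillai's function G(n) against the closed form n * sum_{d|n} phi(d)/d,
# rewritten in integers as sum_{d|n} (n/d) * phi(d)
for n in range(1, 150):
    G = sum(gcd(a, n) for a in range(1, n + 1))
    closed = sum((n // d) * phi(d) for d in range(1, n + 1) if n % d == 0)
    assert G == closed
```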
It is the goal of this paper to give common generalizations of the identities \eqref{Menon_id_Siv} and \eqref{id_Debrecen_paper},
and to present a simple character free approach for the proof. Our general identity, included in Theorem \ref{Theorem_main}, involves
arithmetic functions of several variables, and also contains, as a special case, identities for gcd-sum type functions, such as
identity \eqref{gcd_sum}. The identity of Theorem \ref{Theorem_g} concerns arithmetic functions of a single variable.
We point out the following new Menon-type identity, which is another special case of our results. See Theorem \ref{Th_Menon_lcm} and Corollary
\ref{Cor_lcm_n}. If $n\in \N$, then
\begin{equation} \label{a_b_lcm}
\sum_{\substack{1\le a,b\le n\\ (a,n)=(b,n)=1}} [(a-1,n),(b-1,n)] =\varphi(n)^2 \prod_{p^{\nu}\mid\mid n}
\left(1+ 2\nu -\frac{p^{\nu}-1}{p^{\nu-1}(p-1)^2} \right).
\end{equation}
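The identity \eqref{a_b_lcm} can be confirmed numerically with exact rational arithmetic; the following Python snippet (added here as an illustration, not part of the original text) checks it for $n<40$:

```python
from math import gcd
from fractions import Fraction

def lcm(a, b):
    return a * b // gcd(a, b)

def factor(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def lhs(n):
    units = [a for a in range(1, n + 1) if gcd(a, n) == 1]
    return sum(lcm(gcd(a - 1, n), gcd(b - 1, n)) for a in units for b in units)

def rhs(n):
    val = Fraction(phi(n) ** 2)
    for p, nu in factor(n).items():
        val *= 1 + 2 * nu - Fraction(p ** nu - 1, p ** (nu - 1) * (p - 1) ** 2)
    return val

for n in range(1, 40):
    assert lhs(n) == rhs(n)
```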
Note that identity \eqref{Menon_id_Siv} was generalized by Sita Ramaiah \cite[Th.\ 9.1]{Sit1978} in another
way, namely in terms of regular convolutions. Our results can further be generalized to regular convolutions and to $k$-reduced residue systems.
For the sake of brevity, we do not present the details. For appropriate material we refer to \cite{Coh1949,Coh1956,Sit1978}.
\section{Preliminaries}
\subsection{Arithmetic functions of several variables} \label{Sect_Prel_Arithm_Func}
Let $f, g:\N^k\to\C$ be arithmetic functions of $k$ variables. Their Dirichlet convolution is defined as
\begin{equation*}
(f\ast_k g)(n_1, \ldots, n_k) =\sum_{d_1\mid n_1,\ldots,d_k\mid n_k} f(d_1, \ldots, d_k)g(n_1/d_1,\ldots, n_k/d_k).
\end{equation*}
In the case $k=1$ we write simply $f\ast_1 g=f\ast g$. The identity under $\ast_k$ is
\begin{equation*}
\delta_k(n_1, \ldots, n_k)=\delta(n_1)\cdots\delta(n_k),
\end{equation*}
where $\delta(1)=1$ and $\delta(n)=0$ for $n\ne 1$. An arithmetic function $f$ of $k$ variables possesses an inverse under $\ast_k$
if and only if $f(1,\ldots, 1)\ne 0$. Let $\zeta_k(n_1, \ldots, n_k)$ be defined as
$\zeta_k(n_1, \ldots, n_k)=1$ for all $n_1, \ldots, n_k\in\N$. Its Dirichlet inverse is the M\"{o}bius function $\mu_k$ of $k$
variables given as
\begin{equation*}
\mu_k(n_1, \ldots, n_k) = \mu(n_1)\cdots\mu(n_k),
\end{equation*}
where $\mu$ is the classical M\"{o}bius function of one variable.
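For $k=2$ these definitions can be checked directly; the snippet below (Python, illustration only) verifies the inverse relation $\mu_2\ast_2\zeta_2=\delta_2$ for small arguments:

```python
def factor(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mu(n):
    # classical Moebius function
    f = factor(n)
    if any(e > 1 for e in f.values()):
        return 0
    return (-1) ** len(f)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def conv2(f, g, n1, n2):
    # Dirichlet convolution of two functions of k = 2 variables
    return sum(f(d1, d2) * g(n1 // d1, n2 // d2)
               for d1 in divisors(n1) for d2 in divisors(n2))

mu2 = lambda a, b: mu(a) * mu(b)
zeta2 = lambda a, b: 1

# mu_2 is the Dirichlet inverse of zeta_2: mu_2 *_2 zeta_2 = delta_2
for n1 in range(1, 20):
    for n2 in range(1, 20):
        assert conv2(mu2, zeta2, n1, n2) == (1 if n1 == n2 == 1 else 0)
```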
Let $g$ be an arithmetic function of one variable. Then the principal function $\Pr_k(g)$ associated with $g$ is the arithmetic
function of $k$ variables defined as
\begin{equation*}
\Pr_k(g)(n_1,\ldots, n_k)
= \begin{cases}
g(n), & \text{ if $n_1=\cdots = n_k=n$,}\\
0, & \text{ otherwise.}
\end{cases}
\end{equation*}
(See Vaidyanathaswamy \cite{Vai1931}.) Let $f$ be the arithmetic function of $k$ variables defined by
\begin{equation*}
f(n_1,\ldots, n_k)=g((n_1,\ldots, n_k)),
\end{equation*}
having the gcd on the right-hand side. Then
\begin{equation*}
f(n_1,\ldots, n_k)
=\sum_{d\mid (n_1,\ldots, n_k)}(\mu\ast g)(d) =\sum_{d_1\mid n_1,\ldots, d_k\mid n_k}
\Pr_k(\mu\ast g)(d_1,\ldots, d_k),
\end{equation*}
that is,
\begin{equation*}
f= \Pr_k(\mu\ast g) \ast_k \zeta_k,
\end{equation*}
which means that
\begin{equation}\label{eq:princ}
\Pr_k(\mu\ast g)= \mu_k\ast_k f.
\end{equation}
An arithmetic function $f$ of $k$ variables is said to be multiplicative if
$f(1,\ldots, 1)=1$ and
\begin{equation*}
f(m_1 n_1,\ldots, m_k n_k)=f(m_1,\ldots, m_k)f(n_1,\ldots, n_k)
\end{equation*}
for all positive integers $m_1,\ldots, m_k$ and $n_1,\ldots, n_k$
with $(m_1\cdots m_k, n_1\cdots n_k)=1$. For example, the gcd function $(n_1,\ldots,n_k)$ and the lcm function $[n_1,\ldots,n_k]$ are
multiplicative. If $f$ and $g$ are multiplicative functions of $k$ variables, then their Dirichlet convolution
$f\ast_k g$ is also multiplicative. See \cite{Tot2014,Vai1931}.
\subsection{Number of solutions of congruences} \label{Sect_Nr_Congr}
For a given polynomial $P\in \Z[x]$ let $N_P(n)$ denote the number of solutions $x$ (mod $n$) of the congruence
$P(x)\equiv 0$ (mod $n$) and let $\widehat{N}_P(n)$ be the number of solutions $x$ (mod $n$) such that $(x,n)=1$.
Furthermore, for a fixed integer $s\in \N$, let $\widehat{N}_P(n,s)$ be the number of solutions $x$ (mod $n$) such that
$(x,n,s)=1$.
The functions $N_P(n)$, $\widehat{N}_P(n)$ and $\widehat{N}_P(n,s)$ are multiplicative in $n$, which is a direct
consequence of the Chinese remainder theorem.
It is easy to see that if $P(x)=a_0+a_1x+\cdots + a_kx^k$ and $(a_0,n)=1$, then $N_P(n)=\widehat{N}_P(n)=\widehat{N}_P(n,s)$.
This applies, in particular, to $P(x)=-1+x^{\ell}$. See the Introduction regarding the notation $C^{(\ell)}(n)$,
used by Li, Kim and Qiao \cite{LKQ2019}.
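These counting functions are straightforward to compute, and the stated properties can be spot-checked numerically. The snippet below (an illustration, with $P(x)=x^2-1$ chosen as a test polynomial) verifies that the three counts agree when the constant term is coprime to $n$, and that $N_P$ is multiplicative:

```python
from math import gcd

def N(P, n):
    # solutions of P(x) = 0 (mod n)
    return sum(1 for x in range(n) if P(x) % n == 0)

def N_hat(P, n):
    # solutions with (x, n) = 1
    return sum(1 for x in range(n) if P(x) % n == 0 and gcd(x, n) == 1)

def N_hat_s(P, n, s):
    # solutions with (x, n, s) = 1
    return sum(1 for x in range(n) if P(x) % n == 0 and gcd(gcd(x, n), s) == 1)

P = lambda x: x ** 2 - 1            # constant term -1, coprime to every n

# with (a_0, n) = 1 all three counts agree
for n in range(1, 60):
    assert N(P, n) == N_hat(P, n) == N_hat_s(P, n, 6)

# multiplicativity in n (Chinese remainder theorem)
for m in range(1, 20):
    for n in range(1, 20):
        if gcd(m, n) == 1:
            assert N(P, m * n) == N(P, m) * N(P, n)
```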
\subsection{Lemma}
We will need the next lemma.
\begin{lemma} \label{Lemma_cong} Let $d,r,s\in \N$, $x\in \Z$ such that $d\mid r$, $s\mid r$. Then
\begin{equation*}
\sum_{\substack{1\le a\le r \\ (a,s)=1 \\ a\equiv x \, \text{\rm (mod $d$)} }} 1 =
\begin{cases} \displaystyle{ \frac{r}{d} \prod_{\substack{p\mid s\\ p\nmid d}} \left(1-\frac1{p}\right)}, & \text{ if $(d,s,x)=1$,} \\
0, & \text{ otherwise.}
\end{cases}
\end{equation*}
\end{lemma}
In the special case $r=s$ this is known in the literature, usually proved by the
inclusion-exclusion principle. See, e.g., \cite[Th.\ 5.32]{Apo1976}. Also see \cite[Lemma]{HauWan1997} for a generalization in terms of regular
convolutions. Here we use a different approach, similar to the proof of \cite[Th.\ 9.1]{Sit1978} and to the proofs in our previous papers \cite{Tot2019,Tot}.
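Before giving the proof, the lemma can be confirmed exhaustively for small parameters; the following Python snippet (illustration only) checks all admissible $(r,d,s,x)$ with $r\le 24$:

```python
from math import gcd
from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def prime_divisors(n):
    return [p for p in range(2, n + 1)
            if n % p == 0 and all(p % q for q in range(2, p))]

def count(r, d, s, x):
    # left-hand side of the lemma
    return sum(1 for a in range(1, r + 1)
               if gcd(a, s) == 1 and a % d == x % d)

def expected(r, d, s, x):
    # right-hand side of the lemma
    if gcd(gcd(d, s), x) != 1:
        return 0
    val = Fraction(r, d)
    for p in prime_divisors(s):
        if d % p != 0:
            val *= 1 - Fraction(1, p)
    return val

for r in range(1, 25):
    for d in divisors(r):
        for s in divisors(r):
            for x in range(d):
                assert count(r, d, s, x) == expected(r, d, s, x)
```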
\begin{proof}[Proof of Lemma {\rm \ref{Lemma_cong}}] Let $A$ denote the given sum.
If $(d,s,x)\ne 1$, then the sum $A$ is empty and equal to zero. Indeed, if $p\mid (d,s,x)$ for some prime $p$, then $p\mid d$ and $p\mid x$, so $a\equiv x$
(mod $d$) forces $p\mid a$. Hence $p\mid (a,s)=1$, a contradiction.
Assume now that $(d,s,x)=1$. By using the property of the M\"{o}bius function, the given sum can be written as
\begin{equation} \label{last_sum}
A = \sum_{\substack{1\le a \le r \\ a\equiv x \, \text{\rm (mod $d$)} }} \sum_{\delta \mid (a,s)}
\mu(\delta) = \sum_{\delta \mid s} \mu(\delta) \sum_{\substack{j=1\\ \delta j\equiv x \, \text{\rm (mod $d$)} }}^{r/\delta} 1.
\end{equation}
Let $\delta\mid s$ be fixed. The linear congruence $\delta j\equiv x$ (mod $d$) has solutions in $j$ if and only if $(\delta,d) \mid x$. Here
$(\delta,d)\mid \delta$ and $\delta \mid s$, hence $(\delta,d) \mid s$. Also, $(\delta,d)\mid d$. If $(\delta,d)\mid x$ holds, then $(\delta,d)\mid (d,s,x)=1$, therefore
$(\delta,d)=1$. We deduce that the above congruence has
\begin{equation*}
N= \frac{r}{d \delta}
\end{equation*}
solutions $j$ with $1\le j\le r/\delta$, so the last sum in \eqref{last_sum} equals $N$.
This gives
\begin{equation*}
A= \frac{r}{d} \sum_{\substack{\delta \mid s\\ (\delta,d)=1}} \frac{\mu(\delta)}{\delta}= \frac{r}{d}\prod_{\substack{p\mid s\\ p\nmid d}}
\left(1-\frac1{p}\right).
\end{equation*}
\end{proof}
\section{Main results}
Assume that
(1) $k,\ell \ge 0$ are fixed integers, not both zero;
(2) $m_i,r_i,s_i,n_j,t_j\in \N$ are integers such that $m_i\mid r_i$, $s_i\mid r_i$, $n_j\mid t_j$ ($1\le i\le k$, $1\le j\le \ell$);
(3) $f:\N^{k+\ell} \to \C$ is an arbitrary arithmetic function of $k+\ell$ variables;
(4) $P_i,Q_j\in \Z[x]$ are arbitrary polynomials ($1\le i\le k$, $1\le j\le \ell$).
Consider the sum
\begin{equation*}
S:= \sum_{\substack{1\le a_i\le r_i\\ (a_i,s_i)=1\\ 1\le i\le k}}
\sum_{\substack{1\le b_j\le t_j\\ 1\le j\le \ell}} f((P_1(a_1),m_1), \ldots,(P_k(a_k),m_k), (Q_1(b_1),n_1),\ldots,(Q_{\ell}(b_{\ell}),n_{\ell})),
\end{equation*}
where $(P_i(a_i),m_i)$ and $(Q_j(b_j),n_j)$ represent the gcd's of the corresponding values ($1\le i\le k$, $1\le j\le \ell$).
\begin{theorem} \label{Theorem_main}
Under the above assumptions (1)-(4) we have
\begin{equation*}
S = r_1\cdots r_k t_1\cdots t_{\ell} \sum_{\substack{d_i\mid m_i \\1\le i\le k}}
\sum_{\substack{e_j \mid n_j \\ 1\le j\le \ell}} \frac{(\mu_{k+\ell}\ast_{k+\ell}f)(d_1,\ldots,d_k,e_1,\ldots,e_{\ell})}{d_1\cdots d_k
e_1\cdots e_{\ell}}
\end{equation*}
\begin{equation*}
\times \left(\prod_{1\le i\le k} \widehat{N}_{P_i}(d_i,s_i) \beta(s_i,d_i)\right)\left( \prod_{1\le j \le \ell} N_{Q_j}(e_j)\right),
\end{equation*}
where $\widehat{N}_{P_i}(d_i,s_i)$ and $N_{Q_j}(e_j)$ \textup{($1\le i\le k$, $1\le j\le \ell$)} are defined in Section \ref{Sect_Nr_Congr}, and
\begin{equation*}
\beta(s_i,d_i)= \prod_{\substack{p\mid s_i\\ p\nmid d_i}} \left(1-\frac1{p}\right).
\end{equation*}
\end{theorem}
\begin{cor}
If $\ell=0$, then Theorem \ref{Theorem_main} gives the pure Menon-type identity
\begin{equation}\label{Cor_pure_Menon}
S = r_1\cdots r_k
\sum_{\substack{d_i\mid m_i \\1\le i\le k}}
\frac{(\mu_{k}\ast_{k}f)(d_1,\ldots,d_k)}{d_1\cdots d_k}
\left(\prod_{1\le i\le k} \widehat{N}_{P_i}(d_i,s_i) \beta(s_i,d_i)\right),
\end{equation}
and if $k=0$, it gives the pure gcd-sum identity
\begin{equation*}
S = t_1\cdots t_{\ell}
\sum_{\substack{e_j \mid n_j \\ 1\le j\le \ell}} \frac{(\mu_{\ell}\ast_{\ell}f)(e_1,\ldots,e_{\ell})}{e_1\cdots e_{\ell}}
\left(\prod_{1\le j \le \ell} N_{Q_j}(e_j)\right).
\end{equation*}
\end{cor}
If $k=1$, $f(n)=n$ ($n\in \N$) and $P(x)=x-1$, then identity \eqref{Cor_pure_Menon} reduces to
\begin{equation*}
\sum_{\substack{a=1 \\ (a,s)=1}}^r (a-1,m) = r \sum_{d\mid m} \frac{\varphi(d)}{d} \prod_{\substack{p\mid s\\ p\nmid d}}
\left(1-\frac1{p}\right)
\end{equation*}
\begin{equation*}
= \frac{r\, \varphi(s)\tau(m)}{s} \prod_{p^\nu\mid \mid m_1}
\left(1-\frac{\nu}{(\nu+1)p} \right),
\end{equation*}
where $m\mid r$, $s\mid r$ and $m_1=\max \{d\in \N: d\mid m, (d,s)=1\}$, which is identity \eqref{Menon_id_Siv}
(with the corresponding change of notations).
\begin{remark}{\rm Haukkanen and Wang \cite{HauWan1997} considered
systems of polynomials in several variables and a different constraint,
namely $(a_1,\ldots,a_k,n)=1$ in the first sum defining $S$. }
\end{remark}
\begin{proof}[Proof of Theorem {\rm \ref{Theorem_main}}]
It is an immediate consequence of the definition of the function $\mu_k$ that
\begin{equation} \label{key_id}
f(n_1,\ldots,n_k) = \sum_{d_1\mid n_1,\ldots, d_k\mid n_k} (\mu_k\ast_k f)(d_1,\ldots,d_k).
\end{equation}
By using \eqref{key_id} we have
\begin{equation*}
S = \sum_{\substack{1\le a_i\le r_i\\ (a_i,s_i)=1\\ 1\le i\le k}}
\sum_{\substack{1\le b_j\le t_j\\ 1\le j\le \ell}} \sum_{\substack{d_i\mid (P_i(a_i),m_i)\\1\le i\le k}}
\, \sum_{\substack{e_j \mid (Q_j(b_j),n_j)\\ 1\le j\le \ell}} (\mu_{k+\ell}\ast_{k+\ell} f)(d_1,\ldots,d_k,e_1,\ldots,e_{\ell})
\end{equation*}
\begin{equation*}
= \sum_{\substack{d_i\mid m_i \\1\le i\le k}} \sum_{\substack{e_j \mid n_j \\ 1\le j\le \ell}} (\mu_{k+\ell}\ast_{k+\ell}f)(d_1,\ldots,d_k,e_1,\ldots,e_{\ell})
\end{equation*}
\begin{equation*}
\times \Big( \prod_{1\le i\le k} \sum_{\substack{1\le a_i\le r_i\\ (a_i,s_i)=1\\ P_i(a_i)\equiv 0 \text{ (mod $d_i$)} }} 1 \Big)
\Big( \prod_{1\le j \le \ell} \sum_{\substack{1\le b_j\le t_j\\ Q_j(b_j)\equiv 0 \text{ (mod $e_j$) } }} 1 \Big).
\end{equation*}
Now we use Lemma \ref{Lemma_cong} to evaluate the sum
\begin{equation*}
B_i:= \sum_{\substack{1\le a_i\le r_i\\ (a_i,s_i)=1\\ P_i(a_i)\equiv 0 \text{ (mod $d_i$)} }} 1.
\end{equation*}
For any $x$ such that $(x,d_i,s_i)=1$ we have
\begin{equation*}
\sum_{\substack{1\le a_i\le r_i\\ (a_i,s_i)=1\\ a_i\equiv x \text{ (mod $d_i$)} }} 1 = \frac{r_i}{d_i} \beta(s_i,d_i),
\end{equation*}
and there are $\widehat{N}_{P_i}(d_i,s_i)$ such values of $x$ (mod $d_i$). Hence,
\begin{equation*}
B_i= \frac{r_i}{d_i} \beta(s_i,d_i) \widehat{N}_{P_i}(d_i,s_i).
\end{equation*}
We also have
\begin{equation*}
\sum_{\substack{1\le b_j\le t_j\\ Q_j(b_j)\equiv 0 \text{ (mod $e_j$)} }} 1 = \frac{t_j}{e_j} N_{Q_j}(e_j).
\end{equation*}
Notice that here $r_i/d_i$ and $t_j/e_j$ are integers for any $i,j$.
Putting these together gives
\begin{equation*}
S = \sum_{\substack{d_i\mid m_i \\1\le i\le k}} \sum_{\substack{e_j \mid n_j \\ 1\le j\le \ell}} (\mu_{k+\ell}\ast_{k+\ell}f)(d_1,\ldots,d_k,e_1,\ldots,e_{\ell})
\end{equation*}
\begin{equation*}
\times \Big( \prod_{1\le i\le k} \widehat{N}_{P_i}(d_i,s_i) \frac{r_i}{d_i} \beta(s_i,d_i) \Big)
\Big(\prod_{1\le j \le \ell} \frac{t_j}{e_j} N_{Q_j}(e_j)\Big)
\end{equation*}
\begin{equation*}
= r_1\cdots r_k t_1\cdots t_{\ell} \sum_{\substack{d_i\mid m_i \\1\le i\le k}}
\sum_{\substack{e_j \mid n_j \\ 1\le j\le \ell}} \frac{(\mu_{k+\ell}\ast_{k+\ell}f)(d_1,\ldots,d_k,e_1,\ldots,e_{\ell})}{d_1\cdots d_k
e_1 \cdots e_{\ell}}
\end{equation*}
\begin{equation*}
\times \Big(\prod_{1\le i\le k} \widehat{N}_{P_i}(d_i,s_i) \beta(s_i,d_i) \Big) \Big(\prod_{1\le j \le \ell} N_{Q_j}(e_j) \Big).
\end{equation*}
\end{proof}
\begin{cor} Assume that $m_i\mid s_i$ and $s_i\mid r_i$ for any $i$ with $1\le i\le k$. Then
\begin{equation*}
S = r_1\frac{\varphi(s_1)}{s_1}\cdots r_k\frac{\varphi(s_k)}{s_k} t_1\cdots t_{\ell} \sum_{\substack{d_i\mid m_i \\1\le i\le k}}
\sum_{\substack{e_j \mid n_j \\ 1\le j\le \ell}} \frac{(\mu_{k+\ell}\ast_{k+\ell}f)(d_1,\ldots,d_k,e_1,\ldots,e_{\ell})}{\varphi(d_1)
\cdots \varphi(d_k) e_1\cdots e_{\ell}}
\end{equation*}
\begin{equation*}
\times \Big(\prod_{1\le i\le k} \widehat{N}_{P_i}(d_i) \Big) \Big( \prod_{1\le j \le \ell} N_{Q_j}(e_j) \Big).
\end{equation*}
\end{cor}
\begin{proof} Apply Theorem \ref{Theorem_main}. Since $d_i\mid m_i$, we have $d_i\mid s_i$. Hence
$\widehat{N}_{P_i}(d_i,s_i)= \widehat{N}_{P_i}(d_i)$ and
\begin{equation*}
\beta(s_i,d_i)= \prod_{\substack{p\mid s_i\\ p\nmid d_i}} \left(1-\frac1{p}\right) = \frac{\varphi(s_i)/s_i}{\varphi(d_i)/d_i}.
\end{equation*}
\end{proof}
\begin{cor} Assume that $m_i=r_i=s_i$ and $n_j=t_j$ for any $i,j$ ($1\le i\le k$, $1\le j\le \ell$). Then
\begin{equation} \label{S_special}
S = \varphi(m_1)\cdots \varphi(m_k) n_1\cdots n_{\ell}
\sum_{\substack{d_i\mid m_i \\1\le i\le k}} \sum_{\substack{e_j \mid n_j \\ 1\le j\le \ell}}
\frac{(\mu_{k+\ell}\ast_{k+\ell}f)(d_1,\ldots,d_k,e_1,\ldots,e_{\ell})}{\varphi(d_1)\cdots \varphi(d_k) e_1\cdots e_{\ell}}
\end{equation}
\begin{equation*}
\times \Big( \prod_{1\le i\le k} \widehat{N}_{P_i}(d_i) \Big) \Big( \prod_{1\le j \le \ell} N_{Q_j}(e_j) \Big).
\end{equation*}
\end{cor}
\begin{theorem} Assume conditions (1)-(4). Furthermore, let $r_i=[m_i,s_i]$, $t_j=n_j$ \textup{($1\le i\le k$, $1\le j\le \ell$)}
and let $f$ be a multiplicative function of $k+\ell$ variables. Then the sum
$$
S=S(m_1,\ldots,m_k,s_1,\ldots,s_k,n_1,\ldots,n_{\ell})
$$
represents a multiplicative function of $2k+\ell$ variables.
\end{theorem}
\begin{proof} Note that
\begin{equation} \label{g}
\beta(s_i,d_i)= \sum_{\substack{\delta \mid s_i\\ (\delta,d_i)=1}} \frac{\mu(\delta)}{\delta}=
\sum_{\delta \mid s_i} \frac{\mu(\delta)}{\delta} h(\delta,d_i),
\end{equation}
where the function of two variables
\begin{equation*}
h(\delta,d_i)= \sum_{\substack{c\mid \delta \\ c\mid d_i}} \mu(c)
\end{equation*}
is multiplicative, being the convolution of multiplicative functions. Therefore, $\beta(s_i,d_i)$, given by the
convolution \eqref{g} is also multiplicative.
We conclude that $S$, given in Theorem \ref{Theorem_main} as a convolution of multiplicative functions of $2k+\ell$ variables,
is multiplicative as well.
\end{proof}
\begin{cor} \label{Cor_multipl_k_ell} Assume that $m_i=r_i=s_i$ and $n_j=t_j$ for any $i,j$ and $f$ is multiplicative, viewed as a function of $k+\ell$ variables. Then
$S$ given by \eqref{S_special} is also multiplicative in $m_1,\ldots,m_k,n_1,\ldots,n_{\ell}$, as a function of $k+\ell$ variables.
\end{cor}
\begin{remark} {\rm Note that in his original paper Menon \cite[Lemma]{Men1965} proved that if $f$ is a multiplicative arithmetic function of $r$
variables and $P_i\in \Z[x]$ are polynomials, then the function
\begin{align*}
F(n):= \sum_{a=1}^n f((P_1(a),n),\ldots,(P_r(a),n))
\end{align*}
is multiplicative in the single variable $n$. Here $F(n)$ is not a special case of our sum $S$, but it can be treated in a similar way. By using
\eqref{key_id} one obtains the formula
\begin{equation} \label{F_convo}
F(n)= n \sum_{d_1\mid n,\ldots, d_r\mid n} \frac{(\mu_r \ast_r f)(d_1,\ldots,d_r)}{[d_1,\ldots,d_r]} N(d_1,\ldots,d_r),
\end{equation}
valid for any function $f$ of $r$ variables, where $N(d_1,\ldots,d_r)$ is the number of solutions (mod $[d_1,\ldots,d_r]$) of the simultaneous
congruences $P_1(x)\equiv 0$ (mod $d_1$), ..., $P_r(x)\equiv 0$ (mod $d_r$). Note that $N(d_1,\ldots,d_r)$ is a multiplicative function of $r$ variables.
If $f$ is multiplicative, then the convolution representation \eqref{F_convo} shows that $F$ is also multiplicative.
}
\end{remark}
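For $r=1$ and $f=\operatorname{id}$ (so that $\mu\ast f=\varphi$), formula \eqref{F_convo} reads $F(n)=n\sum_{d\mid n}\frac{\varphi(d)}{d}N_P(d)$. The snippet below (illustration only; the polynomial $x^2+x+1$ is an arbitrary test choice) verifies this case:

```python
from math import gcd

def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def N(P, n):
    # number of solutions of P(x) = 0 (mod n)
    return sum(1 for x in range(n) if P(x) % n == 0)

P = lambda x: x * x + x + 1          # arbitrary test polynomial, r = 1
for n in range(1, 80):
    F = sum(gcd(P(a), n) for a in range(1, n + 1))
    closed = sum((n // d) * phi(d) * N(P, d)
                 for d in range(1, n + 1) if n % d == 0)
    assert F == closed
```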
In what follows assume that
(1') $k,\ell \ge 0$ are fixed integers, not both zero;
(2') $n,r_i,s_i,t_j\in \N$ are integers such that $n\mid r_i$, $s_i\mid r_i$, $n\mid t_j$ ($1\le i\le k$, $1\le j\le \ell$);
(3') $g:\N \to \C$ is an arbitrary arithmetic function;
(4') $P_i,Q_j\in \Z[x]$ are arbitrary polynomials ($1\le i\le k$, $1\le j\le \ell$).
Consider the sum
\begin{equation*}
T:= \sum_{\substack{1\le a_i\le r_i\\ (a_i,s_i)=1\\ 1\le i\le k}} \sum_{\substack{1\le b_j\le t_j\\ 1\le j\le \ell}}
g((P_1(a_1), \ldots,P_k(a_k), Q_1(b_1),\ldots,Q_{\ell}(b_{\ell}),n)),
\end{equation*}
with the gcd on the right hand side.
We have the following result.
\begin{theorem} \label{Theorem_g} Assume conditions (1')-(4'). Then
\begin{equation*}
T = r_1\cdots r_k t_1\cdots t_{\ell} \sum_{d\mid n} \frac{(\mu\ast g)(d)}{d^{k+\ell}}
\left(\prod_{1\le i\le k} \widehat{N}_{P_i}(d,s_i) \beta(s_i,d) \right) \left(\prod_{1\le j \le \ell} N_{Q_j}(d)\right),
\end{equation*}
where
\begin{equation*}
\beta(s_i,d)= \prod_{\substack{p\mid s_i\\ p\nmid d}} \left(1-\frac1{p}\right)
\end{equation*}
\end{theorem}
\begin{proof} Apply Theorem \ref{Theorem_main} in the case when $m_i=n_j=n$ ($1\le i\le k$, $1\le j\le \ell$) and
\begin{equation*}
f(x_1,\ldots,x_k,y_1,\ldots,y_{\ell}) = g((x_1,\ldots,x_k,y_1,\ldots,y_{\ell})).
\end{equation*}
Then
\begin{equation*}
f((P_1(a_1),m_1), \ldots,(P_k(a_k),m_k), (Q_1(b_1),n_1),\ldots,(Q_{\ell}(b_{\ell}),n_{\ell}))
\end{equation*}
\begin{equation*}
= g((P_1(a_1), \ldots,P_k(a_k),Q_1(b_1),\ldots,Q_{\ell}(b_{\ell}),n)).
\end{equation*}
From (\ref{eq:princ}) we obtain
\begin{equation*}
(\mu_{k+\ell}\ast_{k+\ell}f)(x_1,\ldots,x_k,y_1,\ldots,y_{\ell})= \begin{cases} (\mu\ast g)(n), & \text{ if $x_1=\cdots = x_k=y_1=\ldots=y_{\ell}=
n$,}\\ 0, & \text{ otherwise.}
\end{cases}
\end{equation*}
\end{proof}
In the special case $g(n)=n$, $Q_j(x)=x$, $r_i=s_i=t_j=n$ ($1\le i\le k$, $1\le j\le \ell$) we obtain from Theorem \ref{Theorem_g} the next result.
\begin{cor}
\begin{equation*}
\sum_{\substack{1\le a_i\le n\\ (a_i,n)=1\\ 1\le i\le k}}
\sum_{\substack{1\le b_j\le n\\ 1\le j\le \ell}} (P_1(a_1), \ldots,P_k(a_k),b_1,\ldots,b_{\ell},n)
= \varphi(n)^{k} (\id_\ell\ast G_k)(n),
\end{equation*}
where
\begin{equation*}
G_k(n)=\varphi(n)^{1-k}\prod_{1\le i\le k} \widehat{N}_{P_i}(n).
\end{equation*}
\end{cor}
If $P_i(x)=x^{q_i}-1$ ($1\le i\le k$), then we obtain
\begin{cor} If $q_i\in \N$ \textup{($1\le i \le k$)}, then
\begin{equation*}
\sum_{\substack{1\le a_i\le n\\ (a_i,n)=1\\ 1\le i\le k}}
\sum_{\substack{1\le b_j\le n\\ 1\le j\le \ell}} (a_1^{q_1}-1,\ldots,a_k^{q_k}-1,b_1,\ldots,b_{\ell},n)
= \varphi(n)^{k} (\id_\ell\ast H_k)(n),
\end{equation*}
where
\begin{equation} \label{G_k_n}
H_k(n)=\varphi(n)^{1-k}\prod_{1\le i\le k} C^{(q_i)}(n),
\end{equation}
$C^{(q_i)}(n)$ being the number of solutions of the congruence $x^{q_i} \equiv 1$ \textup{(mod $n$)}. \end{cor}
For $k=1$, this corollary reduces to identity \eqref{id_Debrecen_paper} of Li, Kim and Qiao \cite{LKQ2019}.
Several other special cases can be discussed. For example, let $\ell=0$. By formula
\eqref{S_special} we have
\begin{equation} \label{V}
V(n_1,\ldots,n_k): = \sum_{\substack{1\le a_i\le n_i\\ (a_i,n_i)=1\\ 1\le i\le k}} f((P_1(a_1),n_1), \ldots,(P_k(a_k),n_k))
\end{equation}
\begin{equation*}
= \varphi(n_1)\cdots \varphi(n_k) \sum_{\substack{d_i\mid n_i \\1\le i\le k}}
\frac{(\mu_k\ast_k f)(d_1,\ldots,d_k)}{\varphi(d_1)\cdots \varphi(d_k)}
\Big( \prod_{1\le i\le k} \widehat{N}_{P_i}(d_i) \Big).
\end{equation*}
If $f:\N \to \C$ is multiplicative, then $V(n_1,\ldots,n_k)$ is multiplicative, as well, by Corollary \ref{Cor_multipl_k_ell}.
For prime powers $p^{\nu_1},\ldots,p^{\nu_k}$ the values $V(p^{\nu_1},\ldots,p^{\nu_k})$ can be computed in the case of special
functions $f$ and special polynomials $P_i$.
We confine ourselves to the case of the lcm function $f(n_1,\ldots,n_k)=[n_1,\ldots,n_k]$ and the polynomials $P_i(x)=x-1$
($1\le i\le k$), included in the next section.
\section{A special case}
In this section we consider the function
\begin{equation*}
W(n_1,\ldots,n_k): = \sum_{\substack{1\le a_i\le n_i\\ (a_i,n_i)=1 \\ 1\le i\le k}} [(a_1-1,n_1), \ldots,(a_k-1,n_k)].
\end{equation*}
\begin{theorem} \label{Th_Menon_lcm} For any $n_1,\ldots,n_k\in \N$,
\begin{equation*}
W(n_1,\ldots,n_k) = \varphi(n_1)\cdots \varphi(n_k) h(n_1,\ldots,n_k),
\end{equation*}
where the function $h$ is multiplicative, symmetric in the variables and for any prime powers $p^{\nu_1},\ldots, p^{\nu_k}$ such that
$\nu_1\ge \cdots \ge \nu_t\ge 1$, $\nu_{t+1}=\cdots = \nu_k=0$,
\begin{equation*}
h(p^{\nu_1},\ldots,p^{\nu_k})
\end{equation*}
\begin{equation*}
= 1+(\nu_1+\cdots +\nu_t)+ \sum_{j=1}^{t-1} \frac{(-1)^jp^j}{(p-1)^j(p^j-1)} \Big( \binom{t}{j+1} -
\sum_{\substack{M\subseteq \{1,\ldots,t\}\\ \#M=j+1}} \frac1{p^{j\nu_{\max M}}} \Big).
\end{equation*}
\end{theorem}
\begin{proof}
According to \eqref{V} we have
\begin{equation*}
W(n_1,\ldots,n_k) = \varphi(n_1)\cdots \varphi(n_k) \sum_{\substack{d_i\mid n_i \\1\le i\le k}}
\frac{(\mu_k\ast_k f)(d_1,\ldots,d_k)}{\varphi(d_1)\cdots \varphi(d_k)},
\end{equation*}
where $f(n_1,\ldots,n_k)=[n_1,\ldots,n_k]$.
Here $W(n_1,\ldots,n_k)$ is multiplicative and we compute the values $W(p^{\nu_1},\ldots,p^{\nu_k})$. Let
$g=\mu_k\ast_k f$, that is,
\begin{equation*}
g(n_1,\ldots,n_k) = \sum_{d_1\mid n_1,\ldots, d_k\mid n_k} \mu(d_1)\cdots \mu(d_k) \left[n_1/d_1, \ldots, n_k/d_k \right].
\end{equation*}
Then $g$ is multiplicative and for any prime powers $p^{\nu_1},\ldots, p^{\nu_k}$ ($\nu_1,\ldots,\nu_k\ge 0$),
\begin{equation*}
g(p^{\nu_1},\ldots,p^{\nu_k}) = \sum_{d_1,\ldots, d_k\in \{1,p\}} \mu(d_1)\cdots \mu(d_k) \left[p^{\nu_1}/d_1, \ldots, p^{\nu_k}/d_k \right].
\end{equation*}
Assume that there is $j\ge 1$ such that $\nu_1= \nu_2= \cdots = \nu_j=\nu > \nu_{j+1}\ge \nu_{j+2}\ge \cdots \ge \nu_m\ge 1$,
$\nu_{m+1}=\cdots =\nu_k=0$. Then we have for any $d_1,\ldots,d_m\in \{1,p\}$, $d_{m+1},\ldots,d_k=1$,
\begin{equation*}
[p^{\nu_1}/d_1,\ldots,p^{\nu_k}/d_k]= \begin{cases} p^{\nu-1}, & \text{ if $d_1=\cdots =d_j=p$,}\\
p^\nu, & \text{ otherwise,}
\end{cases}
\end{equation*}
and
\begin{equation*}
g(p^{\nu_1},\ldots,p^{\nu_k}) = \Big( p^\nu \sum_{d_1\in \{1,p\}} \mu(d_1) \cdots \sum_{d_j\in \{1,p\}} \mu(d_j) - p^\nu \mu(p)^j +p^{\nu-1} \mu(p)^j \Big)
\end{equation*}
\begin{equation*}
\times \sum_{d_{j+1}\in \{1,p\}} \mu(d_{j+1}) \cdots \sum_{d_m\in \{1,p\}} \mu(d_m) =
\begin{cases} (-1)^{j-1}(p^\nu-p^{\nu-1}), & \text{ if $j=m$,}\\
0, & \text{ otherwise.}
\end{cases}
\end{equation*}
Therefore, since $g$ is symmetric in the variables, we deduce
\begin{equation} \label{val_g}
g(p^{\nu_1},\ldots,p^{\nu_k})=\begin{cases} 1, & \text{ if $\nu_1=\cdots = \nu_k=0$,}\\
(-1)^{j-1} \varphi(p^\nu), & \text{ if a number $j\ge 1$ of $\nu_1,\ldots,\nu_k$ is equal to $\nu \ge 1$, }\\
& \text{ while all others are zero,} \\
0, & \text{ otherwise.}
\end{cases}
\end{equation}
Furthermore, let
\begin{equation*}
h(n_1,\ldots,n_k)= \sum_{d_1\mid n_1, \ldots, d_k\mid n_k} \frac{g(d_1,\ldots,d_k)}{ \varphi(d_1)\cdots \varphi(d_k)},
\end{equation*}
which is also multiplicative and symmetric in the variables. Let $p^{\nu_1}, \ldots, p^{\nu_k}$ be any prime powers and
assume, without loss of generality, that for some $t\ge 0$, one has $\nu_1\ge \cdots \ge \nu_t\ge 1$, $\nu_{t+1}=\cdots =
\nu_k=0$.
If $t=0$, then $h(1,\ldots,1)=1$. If $t\ge 1$, then
\begin{equation*}
h(p^{\nu_1},\ldots,p^{\nu_k})= \sum_{d_1\mid p^{\nu_1}, \ldots, d_t\mid p^{\nu_t}}
\frac{g(d_1,\ldots,d_t,1,\ldots,1)}{ \varphi(d_1)\cdots \varphi(d_t)}.
\end{equation*}
Let $d_1=p^{\beta_1},\ldots, d_t=p^{\beta_t}$, with $0\le \beta_1\le \nu_1,\ldots, 0\le \beta_t\le \nu_t$. For any subset $M$ of $\{1,\ldots,t\}$ such that $\#M=j$
($1\le j\le t$) let $\beta_m=\nu$ ($1\le \nu \le \nu_{\max M}$) for every $m\in M$ and $\beta_m=0$ for $m\notin M$. Then, according to \eqref{val_g},
\begin{equation*}
\frac{g(d_1,\ldots,d_t,1,\ldots,1)}{ \varphi(d_1)\cdots \varphi(d_t)} =\frac{(-1)^{j-1}\varphi(p^\nu)}{\varphi(p^\nu)^j}=
\frac{(-1)^{j-1}}{\varphi(p^\nu)^{j-1}}.
\end{equation*}
We deduce that
\begin{equation*}
h(p^{\nu_1},\ldots,p^{\nu_k})= 1+\sum_{j=1}^t \sum_{\substack{M\subseteq \{1,\ldots,t\}\\ \#M=j}}
\sum_{\nu=1}^{\nu_{\max M}} \frac{(-1)^{j-1}}{\varphi(p^\nu)^{j-1}}
\end{equation*}
\begin{equation*}
= 1+\sum_{j=1}^t (-1)^{j-1} \sum_{\substack{M\subseteq \{1,\ldots,t\}\\ \#M=j}}
\sum_{\nu=1}^{\nu_{\max M}} \frac1{\varphi(p^\nu)^{j-1}}.
\end{equation*}
Here, with the notation $A:=\nu_{\max M}$, we have for $j\ge 2$,
\begin{equation*}
K_j:= \sum_{\nu=1}^{\nu_{\max M}} \frac1{\varphi(p^\nu)^{j-1}}= \frac1{(p-1)^{j-1}} \sum_{\nu=1}^A \frac1{p^{(j-1)(\nu-1)}}
\end{equation*}
\begin{equation*}
= \frac{p^{j-1}}{(p-1)^{j-1}(p^{j-1}-1)}\left(1-\frac1{p^{A(j-1)}} \right),
\end{equation*}
and for $j=1$, $K_1=A$.
That is,
\begin{equation*}
h(p^{\nu_1},\ldots,p^{\nu_k})= 1+(\nu_1+\cdots +\nu_t)+ \sum_{j=2}^t \frac{(-1)^{j-1}p^{j-1}} {(p-1)^{j-1}(p^{j-1}-1)}
\sum_{\substack{M\subseteq \{1,\ldots,t\}\\ \#M=j}} \left(1-\frac1{p^{A(j-1)}} \right)
\end{equation*}
\begin{equation*}
= 1+(\nu_1+\cdots +\nu_t)+ \sum_{j=1}^{t-1} \frac{(-1)^jp^j}{(p-1)^j(p^j-1)} \Big( \binom{t}{j+1} -
\sum_{\substack{M\subseteq \{1,\ldots,t\}\\ \#M=j+1}} \frac1{p^{Aj}}\Big).
\end{equation*}
\end{proof}
\begin{cor} \label{Cor_lcm_n} {\rm ($n_1=\cdots =n_k=n$)}
For any $n,k\in \N$,
\begin{equation*}
\sum_{\substack{1\le a_1\le n\\ (a_1,n)=1}} \cdots \sum_{\substack{1\le a_k\le n\\ (a_k,n)=1}}
[(a_1-1,n), \ldots,(a_k-1,n)]
\end{equation*}
\begin{equation*}
= \varphi(n)^k \prod_{p^\nu \mid \mid n }
\left( 1+k\nu + \sum_{j=1}^{k-1} (-1)^j \binom{k}{j+1} \frac{p^j}{(p-1)^j(p^j-1)} \left(1-\frac1{p^{\nu j}}\right) \right).
\end{equation*}
\end{cor}
In the case $k=2$ this gives the formula \eqref{a_b_lcm}, while for $k=1$ we reobtain Menon's identity \eqref{Menon_id}.
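Corollary \ref{Cor_lcm_n} can also be tested numerically for several values of $k$ at once; the snippet below (illustration only) checks $k=1,2,3$ by brute force for small $n$:

```python
from math import gcd, comb
from fractions import Fraction
from functools import reduce
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

def factor(n):
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def lhs(n, k):
    units = [a for a in range(1, n + 1) if gcd(a, n) == 1]
    return sum(reduce(lcm, (gcd(a - 1, n) for a in t))
               for t in product(units, repeat=k))

def rhs(n, k):
    # closed form of Corollary Cor_lcm_n, evaluated exactly
    val = Fraction(phi(n) ** k)
    for p, nu in factor(n).items():
        term = 1 + k * nu
        for j in range(1, k):
            term += ((-1) ** j * comb(k, j + 1)
                     * Fraction(p ** j, (p - 1) ** j * (p ** j - 1))
                     * (1 - Fraction(1, p ** (nu * j))))
        val *= term
    return val

for n in range(1, 13):
    for k in (1, 2, 3):
        assert lhs(n, k) == rhs(n, k)
```

For $k=1$ this recovers Menon's identity, and for $k=2$ identity \eqref{a_b_lcm}, in agreement with the remark above.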
\section{Acknowledgment} The second author was supported by the European Union, co-financed by the European
Social Fund EFOP-3.6.1.-16-2016-00004.
| {
"timestamp": "2020-05-07T02:20:44",
"yymm": "1911",
"arxiv_id": "1911.05411",
"language": "en",
"url": "https://arxiv.org/abs/1911.05411",
"abstract": "We give common generalizations of the Menon-type identities by Sivaramakrishnan (1969) and Li, Kim, Qiao (2019). Our general identities involve arithmetic functions of several variables, and also contain, as special cases, identities for gcd-sum type functions. We point out a new Menon-type identity concerning the lcm function. We present a simple character free approach for the proof.",
"subjects": "Number Theory (math.NT); Group Theory (math.GR)",
"title": "Menon-type identities again: A note on a paper by Li, Kim and Qiao",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474913588876,
"lm_q2_score": 0.7154239836484143,
"lm_q1q2_score": 0.7078029234805405
} |
https://arxiv.org/abs/1808.09734 | Thurston's metric on Teichmüller space of semi-translation surfaces | The present paper is composed of two parts. In the first one we define two pseudo-metrics $L_F$ and $K_F$ on the Teichmüller space of semi-translation surfaces $\mathcal{TQ}_g(\underline k,\epsilon)$, which are the symmetric counterparts to the metrics defined by William Thurston on $\mathcal{T}_g^n$. We prove some nice properties of $L_F$ and $K_F$, most notably that they are complete pseudo-metrics. In the second part we define their asymmetric analogues $L_F^a$ and $K_F^a$ on $\mathcal{T Q}_g^{(1)}(k, \epsilon)$ and prove that their equality depends on two statements regarding 1-Lipschitz maps between polygons. We are able to prove the first statement, but the second one remains a conjecture: nonetheless, we explain why we believe it is true. | \section{Introduction}
\bigskip
Denote by $\mathcal{T}_g^n$ the Teichm\"uller space of Riemann surfaces of genus $g\ge 2$ and $n\ge 0$ punctures. William Thurston in \cite{Th} defined the following asymmetric metric $L$ on $\mathcal{T}_g^n$:
given any two hyperbolic surfaces $X,X'\in \mathcal{T}_g^n$, their distance with respect to $L$ is defined as
$$L(X,X')=\inf\limits_{\varphi\in Diff^+_0(S_g^n)}\log(Lip(\varphi)_X^{X'}),$$
where $Diff^+_0(S_g^n)$ is the group of diffeomorphisms of $S_g^n$ homotopic to the identity and
$$Lip(\varphi)_X^{X'}=\sup\limits_{x,y\in S_g^n}\frac{d_{X'}(\varphi(x),\varphi(y))}{d_X(x,y)}$$
is the Lipschitz constant of $\varphi$ computed with respect to the hyperbolic metrics of $X$ and $X'$.\\
The main result of \cite{Th} is that for every $X,X'\in \mathcal{T}_g^n$ one has
\begin{equation}
L(X,X')=K(X,X'),
\end{equation}
where $K$ is another asymmetric metric on $\mathcal{T}_g^n$ defined as
$$K(X,X')=\sup\limits_{\alpha\in \mathcal{S}}\log\left(\frac{\hat l_{X'}(\alpha)}{\hat l_X(\alpha)}\right)$$
with $\mathcal{S}$ being the set of homotopy classes of simple closed curves on $S_g^n$ and $\hat l_X(\alpha)$ being the length of the geodesic representative for $X$ of the homotopy class of $\alpha$.\\
The equality (1.1) has been proved by Thurston using the properties of measured laminations on $S_g^n$. Roughly, one could say that the idea of the proof is to triangulate the surface with hyperbolic triangles and then use the fact that for any $c > 1$ there is a $c$-Lipschitz homeomorphism of a filled hyperbolic triangle to itself which maps each side to itself, multiplying arc length on the side by $c$.\\
The Teichm\"uller space endowed with Thurston's metric is a geodesic space; A.~Papadopoulos and G.~Th\'eret proved that it is also a complete asymmetric metric space (\cite{PT}).\\
Every semi-translation surface defines a singular flat metric on $S_g$: the idea of the present paper is to investigate how the definitions of Thurston's metrics $L$ and $K$ on $\mathcal{T}_g^n$ can be adapted to the case of flat singular metrics.
W.~A.~Veech already did something similar in \cite{Ve2}, defining a complex-valued distance map $D_0$ on the Teichm\"uller space $\mathcal{TQ}_g^n(\underline k,\epsilon)$ of semi-translation structures on $S_g^n$.\\
We recall the definition of $D_0$, maintaining the original notation of Veech:
$$D_0(q_1,q_2):=\inf\limits_{\varphi\in Diff_0^+(S_g^n)}\alpha(\varphi^*q_1,q_2),$$
$$ \alpha(\varphi^*q_1,q_2):=\sup\limits_{x\in S_g^n}\left( \sup\limits_{(U_i,f_i)\in q_i, x\in U_1\cap U_2}\left(\limsup_{x'\rightarrow x}Log\left(\left(\frac{f_1(\varphi(x'))-f_1(\varphi(x))}{f_2(x')-f_2(x)}\right)^2\right)\right)\right),$$\\
where $q_i$, $i=1,2$, is regarded as a semi-translation structure and $f_i:U_i\rightarrow \mathbb{C}$, $U_i\subset S_g^n$, are natural charts of $q_i$. The map $Log$ is a branch of the complex logarithm.\\
The real part of $\alpha(\varphi^*q_1,q_2)$ is the Lipschitz constant of $\varphi$ computed with respect to the metrics $|q_1|$ and $|q_2|$ and consequently the real part of the distance function $D_0$ is asymmetric.\\
Veech claimed that the map $D_0$ is a complete pseudo-metric on $\mathcal{TQ}_g^n(\underline k,\epsilon)$ (the proof should be contained in unpublished preprints \cite{Ve3}).\\
We define the pseudo-metric $L_F$ on $\mathcal{TQ}_g(\underline k,\epsilon)$, which is the symmetric analogue of Thurston's metric:
$$L_F(q_1,q_2):=\inf\limits_{\varphi\in Diff^+_0(S_g,\Sigma)}\mathcal{L}_{q_1}^{q_2}(\varphi),$$$$ \mathcal{L}_{q_1}^{q_2}(\varphi):=\sup\limits_{p\in S_g\setminus \Sigma }\left(\sup\limits_{v\in T_pS_g,||v||_{q_1}=1}\left|\log(||d\varphi_pv||_{q_2})\right|\right).$$
One should notice that $L_F$ is different from the real part of Veech's distance function $D_0$.\\
A first notable inequality regarding $L_F$ is given by proposition 2.4: we have
$$L_F(q_1,q_2)\ge d_{\mathcal{T}}(X_1,X_2),$$
where $X_1,X_2$ are the points in $\mathcal{T}_g$ corresponding to the conformal structures underlying the quadratic differentials.\\
The metric $L_F$ endows $\mathcal{TQ}_g(\underline k,\epsilon)$ with the structure of a proper and complete space (propositions 2.8 and 2.10), and the standard topology of $\mathcal{TQ}_g(\underline k,\epsilon)$ (the one induced by its structure of complex manifold) is finer than the topology induced by $L_F$ (proposition 2.6). Furthermore, $L_F$ induces a metric $\mathbb{P}L_F$ on $\mathbb{P}\mathcal{TQ}_g(\underline k,\epsilon)$, and the topology it induces coincides with the standard topology of the projectivization of $\mathcal{TQ}_g(\underline k,\epsilon)$.\\
Motivated by Thurston's work, we define another metric $K_F$ on $\mathcal{TQ}_g(\underline k,\epsilon)$ through ratios of lengths of saddle connections:
$$K_F(q_1,q_2):=\max\{K_F^a(q_1,q_2),K_F^a(q_2,q_1)\},$$
$$K^a_F(q_1,q_2):=\sup\limits_{\gamma\in SC(q_1)}\log\left(\frac{\hat l_{q_2}(\gamma)}{\hat l_{q_1}(\gamma)}\right),$$
where $SC(q_1)$ is the set of saddle connections of $q_1$ (geodesics for the flat metric meeting singular points only at their extremities), and $\hat l_{q_i}(\gamma)$ is the length of the geodesic representative for the metric $|q_i|$ of the homotopy class of $\gamma$ with fixed endpoints.\\
While it is possible to prove that $L_F(q_1,q_2)=K_F(q_1,q_2)$ if $q_1$ and $q_2$ are on the same orbit of the action of $GL(2,\mathbb{R})^+$ (proposition 2.13), in the general case we were not able to adapt Thurston's proof of $L=K$. This is mainly because, as explained at the end of section 2, we believe it is not possible to find a flat analogue of the large class of geodesics of $L$ which Thurston uses in the proof of $L=K$.\\
In section 3 we introduce an asymmetric analogue $L_F^a$ of $L_F$ on $\mathcal{TQ}^{(1)}_g(\underline k,\epsilon)$ (the subset of $\mathcal{TQ}_g(\underline k,\epsilon)$ corresponding to surfaces of unit area), defined as
$$L^a_F(q_1,q_2):=\inf\limits_{\varphi\in \mathcal{D}} \log(Lip(\varphi)_{q_1}^{q_2}),$$
$$Lip(\varphi)_{q_1}^{q_2}=\sup\limits_{p\in S_g\setminus \Sigma}\left(\sup\limits_{v\in T_pS_g, ||v||_{q_1}=1} ||d\varphi_pv||_{q_2}\right) ,$$
with $\mathcal{D}$ being the set of functions $\varphi:S_g\rightarrow S_g$ which are homotopic to the identity, differentiable almost everywhere and which fix the points of $\Sigma$. \\
We are able to reduce the proof of the equality of $L_F^a$ and $K_F^a$ on $\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$ to the proof of two statements (corresponding to theorem 1.1 and conjecture 1.2 below) about 1-Lipschitz maps between planar polygons. In order to give the reader an idea of the arguments involved, we briefly state them in a slightly simplified version.\\
Consider two planar polygons $\Delta$ and $\Delta'$ such that there is an injective function
$$\iota:Vertices(\Delta)\rightarrow Vertices(\Delta')$$
which associates to every vertex $v$ of $\Delta$ a unique vertex $\iota(v)=v'$. Suppose both $\Delta$ and $\Delta'$ have exactly three vertices with strictly convex internal angle, which we denote by $x_i$ and $x_i'$, $i=1,2,3$, respectively. \\
Suppose furthermore that for every $x,y\in Vertices(\Delta)$ we have
$$d_\Delta(x,y)\ge d_{\Delta'}(x',y'),$$
where $d_\Delta$ (resp. $d_{\Delta'}$) is the intrinsic Euclidean metric inside $\Delta$ (resp. $\Delta'$): $d_\Delta(x,y)$ (resp. $d_{\Delta'}(x',y')$) is defined as the infimum of the lengths, computed with respect to the Euclidean metric, of all paths from $x$ to $y$ (resp. from $x'$ to $y'$) entirely contained in $\Delta$ (resp. in $\Delta'$). \\
We say that the vertices of $\Delta$ and of $\iota(Vertices(\Delta))$ are \textit{disposed in the same order} if it is possible to choose two parametrizations $\gamma:[0,1]\rightarrow \partial \Delta$ and $\gamma_1:[0,1]\rightarrow \partial \Delta'$ such that $\gamma(0)=x_1$, $\gamma_1(0)=x_1'$ and $\gamma,\gamma_1$ meet the vertices of $\Delta$ and of $\Delta'$, respectively, in the same order.
\begin{thm}
If $Vertices(\Delta)$ and $\iota(Vertices(\Delta))$ are disposed in the same order, then there is a 1-Lipschitz map $f:\Delta\rightarrow \Delta'$ (with respect to the intrinsic Euclidean metrics of the polygons) which sends vertices to corresponding vertices.
\end{thm}
\begin{conj}
If $Vertices(\Delta)$ and $\iota(Vertices(\Delta))$ are not disposed in the same order, then for every point $p\in \Delta$ there is a point $p'\in \Delta'$ such that $$d_\Delta(p,x_i)\ge d_{\Delta'}(p',x_i'), \quad i=1,2,3.$$
\end{conj}
We were able to prove theorem 1.1, which corresponds to theorem 3.21 of section 3, but not conjecture 1.2, which corresponds to conjecture 5.31 of section 3: we will nonetheless explain why we believe it to be true.\\
We prove the following theorem, which is the main result of this paper.
\begin{thm}
If conjecture 1.2 is true, then for every $q_1,q_2\in \mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$ we have
$$L_F^a(q_1,q_2)= K^a_F(q_1,q_2).$$
\end{thm}
Instead of following Thurston's approach, we prove theorem 1.3 by adapting the idea of F.~A.~Valentine's proof (\cite{Va}) of Kirszbraun's theorem for $\mathbb{R}^2$.
\begin{thm}
Let $S\subset \mathbb{R}^2$ be any subset and $f:S\rightarrow \mathbb{R}^2$ a 1-Lipschitz map.\\
Given any set $T$ which contains $S$, it is possible to extend $f$ to a 1-Lipschitz map $\hat f:T\rightarrow \mathbb{R}^2$ such that $\hat f(T)$ is contained in the convex hull of $f(S)$.
\end{thm}
\bigskip
\section{Symmetric pseudo-metrics $L_F$ and $K_F$}
\bigskip
\subsection{Teichm\"uller space of semi-translation surfaces} In this preliminary part we introduce semi-translation surfaces and their Teichm\"uller spaces, underlining some of their major properties.
\begin{defi}
A semi-translation surface is a closed topological surface $S_g$ of genus $g\ge 2$ endowed with a semi-translation structure, that is:
\begin{enumerate}[label=(\roman*)]
\item a finite set of points $\Sigma\subset S_g$ and an atlas of charts on $S_g\setminus \Sigma$ to $\mathbb{C}$ such that transition maps are of the form $z\mapsto \pm z+c$ with $c\in \mathbb{C}$,
\item a flat singular metric on $S_g$ such that for each point $p\in \Sigma$ there is a homeomorphism of a neighborhood of $p$ with a neighborhood of the apex of a cone of angle $\pi(k+2)$ for some $k>0$, which is an isometry away from $p$ (we call such a point a singular point of order $k$). Furthermore, the charts of the atlas of $(i)$ are isometries for the flat singular metric.
\end{enumerate}
\end{defi}
Equivalently, a semi-translation surface can be defined as a closed Riemann surface $X$ endowed with a non-vanishing holomorphic quadratic differential $q$. Indeed, natural coordinates for $q$ and the metric $|q|$ endow $S_g$ with a semi-translation structure. Conversely, given a semi-translation structure one can obtain a quadratic differential by setting $q=dz^2$ on $S_g\setminus \Sigma$ (where $z$ is a coordinate of the charts of the semi-translation structure) and $q=z^{k}dz^2$ in a neighborhood of a singular point. It is clear then that the sum of the orders of singular points is $4g-4$.\\
A semi-translation surface is naturally endowed with a locally $CAT(0)$ metric. Actually, one can extend the definition to allow the quadratic differentials to have at most simple poles (and consequently cone angles of $\pi$), but then the resulting metric is no longer locally $CAT(0)$.\\
The flat singular metric $|q|$ can be nicely characterized (see \cite{St}) through its local geodesics: they are continuous maps $\gamma:\mathbb{R}\rightarrow S_g$ such that for every $t\in \mathbb{R}$:
\begin{itemize}
\item if $\gamma(t)\notin\Sigma$, then there is a neighborhood $U$ of $t$ in $\mathbb{R}$ such that $\gamma|_U$ is a Euclidean segment,
\item if $\gamma(t)\in\Sigma$, then there is a small neighborhood $V$ of $\gamma(t)$ in $S_g$ and an $\epsilon>0$ small enough such that the angles defined by $\gamma([t,t+\epsilon))$ and $\gamma((t-\epsilon,t])$ in $V$ are both at least $\pi$.
\end{itemize}
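As a concrete illustration (a standard observation, not specific to this paper): at a singular point of order $k$ the cone angle is $\pi(k+2)>2\pi$, so a geodesic reaching the singularity may continue along any direction for which both of the two angles it forms with the incoming ray are at least $\pi$; these directions form a closed arc of angular width
$$\pi(k+2)-2\pi=\pi k>0.$$
In particular, unlike in the smooth Euclidean case, geodesics through a singular point are not locally unique.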
We say that a \textit{saddle connection} on $(X,q)$ is a geodesic for the flat metric going from a singularity to a singularity, with no singularities in the interior of the segment. \\
Since the metric $|q|$ is locally $CAT(0)$, for any arc $\gamma$ with endpoints in $\Sigma$ there is always a unique geodesic representative in the homotopy class of $\gamma$ with fixed endpoints. This geodesic representative is a concatenation of saddle connections.\\
Finally, we define the \textit{systole} of a semi-translation surface $(X,q)$, denoted by $sys(q)$, to be the length of its shortest saddle connection.\\
Given any $g\ge 2$ and $m\ge 1$, fix a finite set of points $\Sigma=\{p_1,\dots,p_m\}\subset S_g$ and an $m$-tuple $\underline{k}=(k_1,k_2,\dots,k_m)\in \mathbb{N}^m$ such that $\sum_{l=1}^mk_l=4g-4$.\\
We denote by $\mathcal{S}\Omega(\underline k,\epsilon,\Sigma)$ the set of semi-translation surfaces on $S_g$ with singularities prescribed by $\underline k$ on the points of $\Sigma$ (i.e. a zero of order $k_i$ at $p_i$, $i=1,\dots,m$) and holonomy determined by $\epsilon\in \{\pm 1\}$ ($\epsilon=1$ in case of trivial holonomy and $\epsilon=-1$ otherwise). \\
Consider the group $Diff^+(S_g,\Sigma)$ of diffeomorphisms of $S_g$ which fix the points of $\Sigma$ and its subgroup $Diff^+_0(S_g,\Sigma)$ consisting of diffeomorphisms homotopic to the identity.\\
We define the Teichm\"uller space and the moduli space of semi-translation surfaces with singularities prescribed by $\underline k$ on $\Sigma$ and holonomy determined by $\epsilon$ as follows:
$$\mathcal{TQ}(\underline k,\epsilon,\Sigma):=\mathcal{S}\Omega(\underline k,\epsilon,\Sigma)/Diff^+_0(S_g,\Sigma),\quad \mathcal{Q}(\underline k,\epsilon,\Sigma):=\mathcal{S}\Omega(\underline k,\epsilon,\Sigma)/Diff^+(S_g,\Sigma),$$
where the two groups of diffeomorphisms act by pullback.\\
We will denote $\mathcal{TQ}_g(\underline k,\epsilon,\Sigma)$ and $\mathcal{Q}(\underline k,\epsilon,\Sigma)$ simply by $\mathcal{TQ}_g(\underline k,\epsilon)$ and $\mathcal{Q}(\underline k,\epsilon)$ in order to lighten the notation: one should keep in mind that the choice of $\Sigma$ is implicit in the definition.\\
Furthermore, we denote simply by $q$ an element of $\mathcal{TQ}_g(\underline k,\epsilon)$ or $\mathcal{Q}(\underline k,\epsilon)$: the fact that it is an equivalence class will be clear from the context.\\
As explained in the following theorem, the spaces $\mathcal{TQ}_g(\underline k,\epsilon)$ have a nice structure of complex manifold.
\begin{thm}
Each space $\mathcal{TQ}_g(\underline k,1)$ has the structure of a complex manifold of dimension $2g+m-1$, while $\mathcal{TQ}_g(\underline k,-1)$ has the structure of a complex manifold of dimension $2g+m-2$.
\end{thm}
Unfortunately, the spaces $\mathcal{Q}(\underline k,\epsilon)$ only have the structure of complex orbifolds of the same dimension as $\mathcal{TQ}(\underline k,\epsilon)$.\\
There is a natural action of $GL(2,\mathbb{R})^+$ on $\mathcal{TQ}_g(\underline k,\epsilon)$ and $\mathcal{Q}_g(\underline k,\epsilon)$: for each \mbox{$A\in GL(2,\mathbb{R})^+$} and each quadratic differential $q$, the element $A\cdot q$ is the quadratic differential obtained by post-composing the natural charts of $q$ with $A$.
\bigskip
\subsection{Definitions of flat Thurston's metrics}
Fix any genus $g\ge 2$ and consider the Teichm\"uller space of semi-translation surfaces $\mathcal{TQ}_g(\underline k,\epsilon)$ with singularities on $\Sigma\subset S_g$ prescribed by the $m$-tuple $\underline k=(k_1,\dots, k_m)\in\mathbb{N}^m$ such that $\sum_{i=1}^mk_i=4g-4$ and holonomy determined by $\epsilon\in\{+1,-1\}$. \\
We now introduce all the flat analogues of Thurston's metrics.\\
First we define the following function $L_F$, which is a symmetric analogue of Thurston's metric $L$:
$$L_F:\mathcal{TQ}_g(\underline k,\epsilon)\times \mathcal{TQ}_g(\underline k,\epsilon)\rightarrow \mathbb{R}, $$
$$L_F(q_1,q_2):=\inf\limits_{\varphi\in Diff^+_0(S_g,\Sigma)}\mathcal{L}_{q_1}^{q_2}(\varphi),$$
$$\mathcal{L}_{q_1}^{q_2}(\varphi):=\sup\limits_{p\in S_g\setminus \Sigma }\left(\sup\limits_{v\in T_pS_g,||v||_{q_1}=1}\left|\log(||d\varphi_pv||_{q_2})\right|\right).$$\\
The quantity $\mathcal{L}_{q_1}^{q_2}(\varphi)$ can be rewritten as
$$\mathcal{L}_{q_1}^{q_2}(\varphi)=\max\{\log(Lip_{q_1}^{q_2}(\varphi)),-\log(lip_{q_1}^{q_2}(\varphi))\},$$
with $Lip_{q_1}^{q_2}(\varphi)$ being the \textit{upper Lipschitz constant} of $\varphi$:
$$Lip_{q_1}^{q_2}(\varphi):=\sup\limits_{p\in S_g\setminus \Sigma}\left(\sup\limits_{v\in T_pS_g,||v||_{q_1}=1}||d\varphi_pv||_{q_2}\right)$$
and $lip_{q_1}^{q_2}(\varphi)$ being the \textit{lower Lipschitz constant} of $\varphi$:
$$lip_{q_1}^{q_2}(\varphi):=\inf\limits_{p\in S_g\setminus \Sigma}\left(\inf\limits_{w\in T_pS_g,||w||_{q_1}=1}||d\varphi_pw||_{q_2}\right).$$
\\
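As a sanity check (our remark, covering only the special case in which such a map exists, for instance when $q_2$ lies in the $GL(2,\mathbb{R})^+$-orbit of $q_1$), suppose $\varphi$ is affine in the natural coordinates of $q_1$ and $q_2$, with constant derivative whose singular values are $a\ge b>0$. Then
$$Lip_{q_1}^{q_2}(\varphi)=a,\qquad lip_{q_1}^{q_2}(\varphi)=b,\qquad \mathcal{L}_{q_1}^{q_2}(\varphi)=\max\{\log a,\,-\log b\},$$
in agreement with the formula for $\mathcal{L}_{q_1}^{q_2}(\varphi)$ in terms of the upper and lower Lipschitz constants.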
We also define an asymmetric analogue of $L$ on $\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$,
$$L^a_F:\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)\times \mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)\rightarrow \mathbb{R},$$
associating to any pair $q_1,q_2 \in\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$ of semi-translation surfaces of unit area the quantity
$$L^a_F(q_1,q_2):=\inf\limits_{\varphi\in \mathcal{D}} \log(Lip(\varphi)_{q_1}^{q_2}),$$
$$Lip(\varphi)_{q_1}^{q_2}=\sup\limits_{p\in S_g\setminus \Sigma}\left(\sup\limits_{v\in T_pS_g, ||v||_{q_1}=1} ||d\varphi_pv||_{q_2}\right),$$
where $\mathcal{D}$ is the set of functions $\varphi:S_g\rightarrow S_g$ which are homotopic to the identity, differentiable almost everywhere and which fix the points of $\Sigma$. \\
Since $Diff_0^+(S_g,\Sigma)\subset \mathcal{D}$, one can immediately deduce $L_F(q_1,q_2)\ge L_F^a(q_1,q_2)$ for every $q_1,q_2 \in\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$.\\
We define two flat counterparts of the metric $K$, namely $K_F^a$ and $K_F$. The first one is asymmetric and the second one is its symmetrization.\\
In particular, for every $q_1,q_2\in \mathcal{TQ}_g(\underline k,\epsilon)$, we set
$$K^a_F(q_1,q_2):=\sup\limits_{\gamma\in SC(q_1)}\log\left(\frac{\hat l_{q_2}(\gamma)}{\hat l_{q_1}(\gamma)}\right),$$
where $SC(q_1)$ is the set of saddle connections of $q_1$, and $\hat l_{q_i}(\gamma)$ is the length of the geodesic representative for $|q_i|$ in the homotopy class of $\gamma$ with fixed endpoints. \\Finally, the symmetric analogue of $K$ is defined as
$$K_F(q_1,q_2):=\max\{K_F^a(q_1,q_2),K_F^a(q_2,q_1)\}$$
for every $q_1,q_2\in \mathcal{TQ}_g(\underline k,\epsilon)$.\\
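As an illustration (our computation, in the spirit of proposition 2.13 below), let $q_2=A_t\cdot q_1$ with $A_t=\mathrm{diag}(e^t,e^{-t})$, $t>0$. Every saddle connection $\gamma$ of $q_1$ with holonomy vector $(x,y)$ is still a saddle connection for $q_2$, and
$$\frac{\hat l_{q_2}(\gamma)}{\hat l_{q_1}(\gamma)}=\sqrt{\frac{e^{2t}x^2+e^{-2t}y^2}{x^2+y^2}}\le e^{t},$$
so that $K_F^a(q_1,q_2)\le t$, with equality whenever $q_1$ has saddle connections of direction arbitrarily close to the horizontal; applying the same estimate to $A_t^{-1}$ gives $K_F(q_1,q_2)\le t$.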
In this section we study the properties of $L_F$ and $K_F$ and explain the difficulties in trying to prove $L_F=K_F$ on $\mathcal{TQ}_g(\underline k,\epsilon)$.\\
These difficulties can be overcome by considering $L_F^a$ instead of $L_F$: the fact that $L_F^a$ is asymmetric and that the infimum is taken over functions in $\mathcal{D}$ will play a crucial role. \\
Indeed, $L_F^a$ is defined specifically so that $L_F^a=K_F^a$: the next section will be entirely devoted to the proof of this equality.\\
We now begin the study of the properties of $L_F$.
\begin{prop}
The function $L_F$ is a symmetric pseudo-metric on $\mathcal{TQ}_g(\underline k,\epsilon)$.
\end{prop}
\begin{proof}
It is clear that $L_F(q,q)=0$ for all $q\in\mathcal{TQ}_g(\underline k,\epsilon)$, since $\mathcal{L}_{q}^{q}(Id)=0$.\\
The equality
$$lip_{q_1}^{q_2}(\varphi)=\frac{1}{Lip_{q_2}^{q_1}(\varphi^{-1})}$$
grants
$$\mathcal{L}_{q_1}^{q_2}(\varphi)=\mathcal{L}_{q_2}^{q_1}(\varphi^{-1})$$
and thus the symmetry of $L_F$.\\
The triangle inequality follows from the inequality
$$\mathcal{L}_{q_1}^{q_3}(\varphi\circ \psi)\le \mathcal{L}_{q_2}^{q_3}(\varphi)+\mathcal{L}_{q_1}^{q_2}(\psi).$$
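The latter can be checked directly: for every $p\in S_g\setminus\Sigma$ and every $v\in T_pS_g$ with $||v||_{q_1}=1$, setting $w=d\psi_pv$ we can write
$$\log(||d(\varphi\circ \psi)_pv||_{q_3})=\log(||w||_{q_2})+\log\left(\left|\left|d\varphi_{\psi(p)}\frac{w}{||w||_{q_2}}\right|\right|_{q_3}\right),$$
and the two summands are bounded in absolute value by $\mathcal{L}_{q_1}^{q_2}(\psi)$ and $\mathcal{L}_{q_2}^{q_3}(\varphi)$ respectively.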
Finally, one easily checks that, given any $q_1\in \mathcal{TQ}_g(\underline k,\epsilon)$, we have \mbox{$L_F(q_1,q_2)=0$} exactly for those $q_2\in \mathcal{TQ}_g(\underline k,\epsilon)$ such that $q_2=e^{i\theta}q_1$.
\end{proof}
Since $L_F(q_1,q_2)=0$ if and only if $q_1$ and $q_2$ are in the same orbit of the action of the unitary group $U(1)\subset \mathbb{C}^*$, the function $L_F$ can be considered as a metric on the space of flat singular metrics with singularities prescribed by $\underline k$ and holonomy prescribed by $\epsilon$.\\
For the same reason, $L_F$ descends to a metric $\mathbb{P} L_F$ on the projectivization $\mathbb{P}\mathcal{TQ}_g(\underline k,\epsilon)=\mathcal{TQ}_g(\underline k,\epsilon)/\mathbb{C}^*=\mathcal{TQ}^{(1)}_g(\underline k,\epsilon)/U(1)$ by setting
$$\mathbb{P} L_F([q_1],[q_2]):=L_F\left(\frac{q_1}{Area(q_1)},\frac{q_2}{Area(q_2)}\right).$$
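This is well defined: if $q'=cq$ with $c\in\mathbb{C}^*$, then $Area(cq)=|c|\,Area(q)$, so
$$\frac{q'}{Area(q')}=\frac{c}{|c|}\cdot\frac{q}{Area(q)}=e^{i\theta}\frac{q}{Area(q)}$$
for some $\theta$, and $L_F$ vanishes on $U(1)$-orbits.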
The first result we present on the pseudo-metric $L_F$ is an inequality involving the Teichm\"uller metric $d_{\mathcal{T}}$.
\begin{prop}\label{tecl}
For any $q_1,q_2\in \mathcal{TQ}_g(\underline k,\epsilon)$, denote by $X_1,X_2\in \mathcal{T}_g$ the points in the Teichm\"uller space corresponding to the underlying conformal structures.
Then:
$$L_F(q_1,q_2)\ge d_{\mathcal{T}}(X_1,X_2).$$
If there is a Teichm\"uller map between $X_1$ and $X_2$ with respect to the differentials $q_1$ and $q_2$, the above inequality is an equality.
\end{prop}
\begin{proof}
For every $\varphi\in Diff^+_0(S_g,\Sigma)$ and $p\in S_g\setminus \Sigma$ we define the quantities
$$Lip_{q_1}^{q_2}(\varphi)_p:=\sup\limits_{v\in T_pS_g,||v||_{q_1}=1}||d\varphi_pv||_{q_2},$$
$$lip_{q_1}^{q_2}(\varphi)_p:=\inf\limits_{w\in T_pS_g,||w||_{q_1}=1}||d\varphi_pw||_{q_2}.$$
Then, since the global dilatation $K(\varphi)$ is independent of the holomorphic charts and thus can be computed in the natural coordinates of $q_1$ and $q_2$ respectively, we get the inequality
$$K(\varphi)=\sup\limits_{p\in S_g\setminus \Sigma}\frac{Lip_{q_1}^{q_2}(\varphi)_p}{lip_{q_1}^{q_2}(\varphi)_p}\le \frac{Lip_{q_1}^{q_2}(\varphi)}{lip_{q_1}^{q_2}(\varphi)}.$$
Since for every $\varphi\in Diff^+_0(S_g,\Sigma)$ we also have
$$K(h)\le K(\varphi),$$
where $K(h)$ is the global dilatation of a Teichm\"uller map $h$ such that $d_{\mathcal{T}}(X_1,X_2)=\frac 1 2\log(K(h))$, combining the last two inequalities we see that it cannot simultaneously hold that
$$Lip_{q_1}^{q_2}(\varphi)<\sqrt{K(h)}\text{ and } lip_{q_1}^{q_2}(\varphi)>\frac{1}{\sqrt{K(h)}},$$
since otherwise we would get $K(\varphi)\le Lip_{q_1}^{q_2}(\varphi)/lip_{q_1}^{q_2}(\varphi)<K(h)$. Hence $\mathcal{L}_{q_1}^{q_2}(\varphi)\ge \frac 1 2\log(K(h))$ for every $\varphi$, and this implies the inequality $L_F(q_1,q_2)\ge d_{\mathcal{T}}(X_1,X_2)$.\\
Finally, if $h$ is a Teichm\"uller map with respect to the quadratic differentials $q_1$ and $q_2$, then, since $h$ can be written in natural coordinates as
$$h(x+iy)=\sqrt{K(h)}x+\frac{i}{\sqrt{K(h)}}y,$$
it follows that
$$Lip_{q_1}^{q_2}(h)=\sqrt{K(h)},\quad lip_{q_1}^{q_2}(h)=\frac{1}{\sqrt{K(h)}},$$
so that $\mathcal{L}_{q_1}^{q_2}(h)=\frac 1 2\log(K(h))=d_{\mathcal{T}}(X_1,X_2)$, which gives the equality of the claim.
\end{proof}
\begin{oss}
Notice that in the proof of proposition 2.4 the fact that the metric induced by the quadratic differential is locally $CAT(0)$ is never used.
For this reason, one could allow the quadratic differentials to have simple poles at the marked points and define $L_F$ in the same way.\\
The same inequality $L_F(q_1,q_2)\ge d_{\mathcal{T}}(X_1,X_2)$ then holds for $X_1,X_2\in \mathcal{T}_g^n$.
\end{oss}
\bigskip
\subsection{Induced topology of $L_F$}
We call \textit{standard topology} on $\mathcal{TQ}_g(\underline k,\epsilon)$, and denote by $\mathbb{T}_{std}$, the topology induced by the structure of complex manifold, that is, the topology induced by the period maps. Given a sequence $\{q_n\}_{n\in\mathbb{N}}\subset \mathcal{TQ}_g(\underline k,\epsilon)$, we write $q_n\rightarrow q$ to denote its convergence to $q\in \mathcal{TQ}_g(\underline k,\epsilon)$ with respect to the standard topology.\\
Similarly, we denote by $\mathbb{T}_{L_F}$ the topology on $\mathcal{TQ}_g(\underline k,\epsilon)$ induced by $L_F$.
\begin{prop}\label{finer}
The topology $\mathbb{T}_{std}$ is finer than $\mathbb{T}_{L_F}$.
\end{prop}
We will prove the equivalent claim that for every sequence \mbox{$\{q_n\}_{n\in\mathbb{N}}\subset \mathcal{TQ}_g(\underline k,\epsilon)$} the convergence $q_n\rightarrow q$ implies $\lim\limits_{n\rightarrow \infty}L_F(q_n,q)=0$.\\
To this end, we first need an observation concerning Euclidean triangles.\\
Denote by $\Xi$ the set of non-degenerate Euclidean triangles $T\subset \mathbb{R}^2$ with one vertex at the origin of $\mathbb{R}^2$: since every triangle $T\in \Xi$ is identified by the coordinates of its two vertices different from the origin, $\Xi$ can be considered as a subset of $\mathbb{R}^4$.\\
Given any sequence $\{T_n\}_{n\in\mathbb{N}}$ in $\Xi$, we say that it converges to $T\in \Xi$, and write $T_n\rightarrow T$, if $\{T_n\}_{n\in\mathbb{N}}$ converges to $T$ as a sequence in $\mathbb{R}^4$ with respect to the standard Euclidean metric. For every $n\in\mathbb{N}$ consider the affine map $A_n$ which sends $T_n$ to $T$ and denote by $\sigma_{1}(A_n),\sigma_{2}(A_n)$ its singular values. Since $A_n$ is determined by the vertex coordinates and depends continuously on them, if $T_n\rightarrow T$ then $A_n\rightarrow Id$ and consequently $\lim\limits_{n\rightarrow \infty}\sigma_1(A_n)=\lim\limits_{n\rightarrow \infty}\sigma_2(A_n)=1$.
\begin{proof}
In order to prove the proposition, given any sequence $\{q_n\}_{n\in\mathbb{N}}\subset \mathcal{TQ}_g(\underline k,\epsilon)$ such that $q_n\rightarrow q$, we will find a sequence of maps $A_n\in Diff^+_0(S_g,\Sigma)$ with the property $\mathcal{L}_{q}^{q_n}(A_n)\rightarrow 0$. The claim will then follow from the inequality \mbox{$L_F(q_n,q)\le \mathcal{L}_{q}^{q_n}(A_n)$}.\\
If $q_n\rightarrow q$, then one can find a collection of arcs $\Gamma=\{\gamma_j\}_{j=1}^{3(m+2g-2)}$ with endpoints in $\Sigma$ which triangulate $S_g$, and an $n_0>0$ such that the geodesic representative of the homotopy class of every $\gamma_j$ for $|q|$ and for $|q_n|$, $n>n_0$, is a saddle connection.\\
The geodesic representatives of the homotopy classes of the arcs in $\Gamma$ for $|q|$ (resp. $|q_n|$) provide us with a set of Euclidean triangles $\Xi_q=\{T_l\}_{l=1}^{2(m+2g-2)}$ (resp. $\Xi_{q_n}=\{T_l^n\}_{l=1}^{2(m+2g-2)}$) which cover $S_g$. Using period coordinates of $\mathcal{TQ}_g(\underline k,\epsilon)$, one can indeed observe that $q_n\rightarrow q$ implies that every triangle $T^n_l$ converges to $T_l$ in the sense explained in the observation preceding this proof.\\
For every $n\in\mathbb{N}$, we define $A_n\in Diff^+_0(S_g,\Sigma)$ to be the map which is piecewise affine in the natural coordinates of $q_n$ and $q$ respectively, and which on every triangle $T_l^n$ of $\Xi_{q_n}$ coincides with the affine map $A_n^l$ sending $T_l^n$ to the corresponding triangle $T_l$ of $\Xi_{q}$. \\
As before, we denote by $\sigma_1(A_n^l),\sigma_2(A_n^l)$ the singular values of $A_n^l$. Since we have
$$Lip_{q}^{q_n}(A_n)=\max\limits_{l=1,\dots ,2(m+2g-2)}\left(\max\{\sigma_1(A_n^l),\sigma_2(A_n^l)\}\right),$$
$$lip_{q}^{q_n}(A_n)=\min\limits_{l=1,\dots ,2(m+2g-2)}\left(\min\{\sigma_1(A_n^l),\sigma_2(A_n^l)\}\right),$$
the claim of the proposition follows from the preceding observation about Euclidean triangles.
\end{proof}
From proposition \ref{finer} it follows that compact sets of $\mathbb{T}_{std}$ are also compact sets of $\mathbb{T}_{L_F}$. It is thus useful to characterize them in a way similar to the statement of Mumford's compactness criterion.\\
Before doing so, let us fix once and for all some notation: for any arc $\gamma$ in $S_g$ with endpoints in $\Sigma$ and any quadratic differential $q\in\mathcal{TQ}_g(\underline k,\epsilon)$, we denote by $l_q(\gamma)$ the length of $\gamma$ with respect to the metric $|q|$ and by $\hat l_q(\gamma)$ the length of the geodesic representative for $|q|$ in the homotopy class of $\gamma$ with fixed endpoints.\\
The following proposition about compact sets of $\mathbb{T}_{std}$ is a consequence of proposition 1, section 3, of \cite{KMS}, which establishes the compactness of subsets of quadratic differentials with lower bound on the area.
\begin{prop}\label{compa}
Fix $\delta,L>0$ and a collection of arcs $\Gamma=\{\gamma_i\}_{i=1}^{3(m+2g-2)}$ with endpoints in $\Sigma$ which triangulates $S_g$.\\
Define the subset $\mathcal{K}_{\delta,L}\subset \mathcal{TQ}_g(\underline k,\epsilon)$ as the set of quadratic differentials $q$ which satisfy the following two conditions.
\begin{enumerate}[label=(\roman*)]
\item $sys(q)\ge \delta$,
\item $\sum\limits_{i=1}\limits^{3(m+2g-2)}\hat l_{q}(\gamma_i)\le L$.
\end{enumerate}
The set $\mathcal{K}_{\delta,L}$ is a compact set of $\mathbb{T}_{std}$.
\end{prop}
Using this characterization of compact sets we can prove the following proposition.
\begin{prop}\label{proper}
Each Teichm\"uller space $\mathcal{TQ}_g(\underline k,\epsilon)$ endowed with the pseudo-metric $L_F$ is a proper space: every closed ball is compact.
\end{prop}
\begin{proof}
We prove that closed balls $B_{L_F}^R(q)$ of $L_F$,
$$B_{L_F}^R(q):=\{q'\in \mathcal{TQ}_g(\underline k,\epsilon)|L_F(q,q')\le R\}$$
are contained in a compact subset of $\mathbb{T}_{std}$: thanks to the result of proposition \ref{finer} they will be contained also in a compact set of $\mathbb{T}_{L_F}$.\\
Let $\{q_n\}_{n\in\mathbb{N}}$ be any sequence in $B_{L_F}^R(q)$ and let $\varphi\in Diff_0^+(S_g,\Sigma)$ be arbitrary. Let $\gamma$ be any geodesic arc for $|q|$ with endpoints in $\Sigma$, and let $\gamma_n$ be the geodesic representative of its homotopy class for the metric $|q_n|$. Then
$$\frac{l_q(\gamma)}{l_{q_n}(\gamma_n)}\le \frac{l_q(\varphi(\gamma_n))}{l_{q_n}(\gamma_n)}\le \sup\limits_{p\in S_g\setminus \Sigma}\left(\sup_{v\in T_pS_g,||v||_{q_n}=1}||d\varphi_pv||_q\right),$$
$$\frac{l_{q_n}(\gamma_n)}{l_{q}(\gamma)}\le \frac{l_{q_n}(\varphi(\gamma))}{l_{q}(\gamma)}\le \sup\limits_{p\in S_g\setminus \Sigma}\left(\sup_{v\in T_pS_g,||v||_{q}=1}||d\varphi_pv||_{q_n}\right)$$
and, since $L_F(q,q_n)\le R$, taking the infimum over $\varphi$ shows that the lengths $\hat l_{q_n}(\gamma)$ are bounded away from $0$ and $\infty$, uniformly in $n$: hence $B_{L_F}^R(q)$ is contained in a set of the form described in proposition \ref{compa}.
\end{proof}
By abuse of notation we will denote by $\mathbb{T}_{std}$ and $\mathbb{T}_{L_F}$ the induced topologies on $\mathbb{P}\mathcal{TQ}_g(\underline k,\epsilon)$.
\begin{prop}
$\mathbb{T}_{std}$ and $\mathbb{T}_{L_F}$ are the same topology on $\mathbb{P}\mathcal{TQ}_g(\underline k,\epsilon)$.
\end{prop}
\begin{proof}
It will be sufficient to prove that $\mathbb{T}_{L_F}$ is finer than $\mathbb{T}_{std}$, i.e. that for every sequence \mbox{$\{q_n\}_{n\in\mathbb{N}}\subset \mathcal{TQ}^{(1)}_g(\underline k,\epsilon)$} such that $\lim\limits_{n\rightarrow \infty}L_F(q_n,q)= 0$ there exists $c\in U(1)$ such that $q_n\rightarrow cq$.\\
Since $\lim\limits_{n\rightarrow \infty}L_F(q_n,q)= 0$, the sequence $\{q_n\}_{n\in\mathbb{N}}$ is contained in a closed ball of $L_F$ and thus in a compact set. Up to passing to a subsequence, we may assume that there is $q'\in \mathcal{TQ}_g(\underline k,\epsilon)$ such that $q_n\rightarrow q'$.
Since
$$L_F(q,q')\le L_F(q,q_n)+L_F(q_n,q')$$
it follows that $L_F(q,q')=0$, and thus $q'=e^{i\theta}q$ for some $\theta$.
\end{proof}
In the following theorem we establish another similarity between $L_F$ and Thurston's asymmetric metric $L$: $L_F$ is a complete pseudo-metric.\\
The notion of completeness makes sense also for pseudo-metrics: a pseudo-metric $d$ on a topological space $X$ is complete if every Cauchy sequence for $d$ admits at least one limit point for $d$. Thus in the proof of the following theorem we will prove that every Cauchy sequence for $L_F$ admits at least one limit point for $L_F$.
\begin{thm}
Every Teichm\"uller space $\mathcal{TQ}_g(\underline k,\epsilon)$ and its quotient $\mathbb{P}\mathcal{TQ}_g(\underline k,\epsilon)$, endowed respectively with the metrics $L_F$ and $\mathbb{P} L_F$, are complete pseudo-metric spaces.
\end{thm}
\begin{proof}
We prove that any Cauchy sequence $\{q_n\}_{n\in\mathbb{N}}$ for $L_F$ on $\mathcal{TQ}_g(\underline k,\epsilon)$ is contained in a compact set of $\mathbb{T}_{std}$: from proposition 2.6 it will follow that $\{q_n\}_{n\in \mathbb{N}}$ is contained in a compact set of $\mathbb{T}_{L_F}$ and therefore is convergent. We will use the same inequalities of the proof of proposition \ref{proper}.\\
Consider any Cauchy sequence $\{q_n\}_{n\in\mathbb{N}}$ for $L_F$ on $\mathcal{TQ}_g(\underline k,\epsilon)$. Given any arc $\gamma$ on $S_g$ with endpoints in $\Sigma$, denote by $\gamma_{n}$ the geodesic representative of the homotopy class of $\gamma$ for the metric $|q_n|$. Then for every $\varphi\in Diff_0^+(S_g,\Sigma)$ we have:
$$\frac{ l_{q_m}(\gamma_m)}{l_{q_n}(\gamma_n)}\le \frac{l_{q_m}(\varphi(\gamma_n))}{l_{q_n}(\gamma_n)}\le Lip_{q_n}^{q_m}(\varphi)$$
and thus the sequence $\{\log(l_{q_n}(\gamma_n))\}_{n\in\mathbb{N}}$ is a Cauchy sequence and consequently bounded: this means that $\{q_n\}_{n\in \mathbb{N}}$ is contained in a set of the form described in proposition 2.7.\\
The completeness of $(\mathbb{P}\mathcal{TQ}_g(\underline k,\epsilon),\mathbb{P} L_F)$ follows from the same reasoning considering a Cauchy sequence $\{q_n\}_{n\in\mathbb{N}}\subset \mathcal{TQ}^{(1)}_g(\underline k,\epsilon)$.
\end{proof}
Finally, it is worth mentioning that the mapping class group $\Gamma(S_g,\Sigma)$ acts on $\mathcal{TQ}_g(\underline k,\epsilon)$ by isometries of $L_F$: in particular, for every $q_1,q_2\in \mathcal{TQ}_g(\underline k,\epsilon)$ and \mbox{$\psi\in \Gamma(S_g,\Sigma)$} we have
$$L_F(q_1,q_2)=L_F(\psi\cdot q_1,\psi\cdot q_2),$$
where $\psi\cdot q$ is the pullback by $\psi^{-1}$ of the quadratic differential $q$. This result follows from the equality
$$\mathcal{L}_{q_1}^{q_2}(\varphi)=\mathcal{L}_{\psi\cdot q_1}^{\psi\cdot q_2}(\psi\circ\varphi\circ \psi^{-1})$$
for every $\varphi\in Diff_0^+(S_g,\Sigma)$, and the fact that conjugation of $Diff_0^+(S_g,\Sigma)$ by any element of $\Gamma(S_g,\Sigma)$ is an automorphism of $Diff_0^+(S_g,\Sigma)$.\\
Since the action of the mapping class group $\Gamma(S_g,\Sigma)$ on $\mathcal{TQ}_g(\underline k,\epsilon)$ is also properly discontinuous, the metric $L_F$ also descends to a metric $\hat L_F$ on $\mathcal{Q}_g(\underline k,\epsilon)$,
$$\hat L_F(\hat q_1,\hat q_2)=\inf L_F(q_1,q_2),$$
where the infimum is taken over all liftings $q_1,q_2$ to $\mathcal{TQ}_g(\underline k,\epsilon)$ of $\hat q_1,\hat q_2\in \mathcal{Q}_g(\underline k,\epsilon)$.
\begin{prop}
The space $\mathcal{Q}_g(\underline k,\epsilon)$ endowed with the metric $\hat L_F$ is a complete pseudo-metric space.
\end{prop}
\begin{proof}
The proof is identical to that of proposition 2.10.
\end{proof}
\bigskip
\subsection{Properties of the pseudo-metric $K_F$}
A first analogy with the metric $L_F$ is that $K_F$ enjoys all the properties we just proved for $L_F$; in particular:
\begin{thm}
The function $K_F$ is a complete and proper symmetric pseudo-metric on $\mathcal{TQ}_g(\underline k, \epsilon)$.
\end{thm}
\begin{proof}
All the previous proofs for $L_F$ adapt to $K_F$ (in particular, the fact that $q_n\rightarrow q$ implies $K_F(q_n,q)\rightarrow 0$ is a direct consequence of the definition of period maps), except for
$$K_F(q_1,q_2)=0 \text{ if and only if } q_1=e^{i\theta}q_2$$
which can be proved as we now explain.\\
If $q_1=e^{i\theta}q_2$ then $q_1$ and $q_2$ induce the same flat metric on $S_g$ and consequently \mbox{$K_F(q_1,q_2)=0$}, so let us prove the other implication.\\
If $K_F(q_1,q_2)=0$, then consider any saddle connection $\sigma$ of $q_1$ and let $\tau$ be the geodesic representative for $|q_2|$ in the homotopy class of $\sigma$. The curve $\tau$ is a concatenation of saddle connections $\tau_1,\dots,\tau_k$ of $q_2$ and since $K_F^a(q_1,q_2)\le 0$ it results
$$l_{q_1}(\sigma)\ge l_{q_2}(\tau_1)+\dots+l_{q_2}(\tau_k).$$
For each $i=1,\dots,k$ let $\sigma_i$ be the geodesic representative for the metric $|q_1|$ in the homotopy class of $\tau_i$. Since $K_F^a(q_2,q_1) \le 0$ it follows
$$l_{q_2}(\tau_1)+\dots+l_{q_2}(\tau_k)\ge l_{q_1}(\sigma_1)+\dots+l_{q_1}(\sigma_k)$$
and, since the concatenation $\sigma_1*\dots *\sigma_k$ is in the same homotopy class of $\sigma$, it also results
$$l_{q_1}(\sigma_1)+\dots+l_{q_1}(\sigma_k)\ge l_{q_1}(\sigma).$$
These inequalities can hold simultaneously only if they are all equalities, and since $\sigma$ is the only geodesic representative in its homotopy class it follows that $\tau$ must be a saddle connection of $q_2$. We have thus proved that if $K_F(q_1,q_2)=0$ then the geodesic representative for $|q_2|$ (resp. for $|q_1|$) of any saddle connection of $q_1$ (resp. of $q_2$) is a saddle connection of the same length.\\
At this point the claim is basically already proved, since $q_1$ and $q_2$ give triangulations of $S_g$ by saddle connections of the same length.
\end{proof}
We can define on $\mathbb{P}\mathcal{TQ}_g(\underline k,\epsilon)$ the metric $\mathbb{P} K_F$ in the same way we defined $\mathbb{P} L_F$ and prove that its induced topology $\mathbb{T}_{K_F}$ coincides with the standard topology $\mathbb{T}_{std}$.\\
As for the metrics $L$ and $K$ on $\mathcal{T}_g$, the inequality
$$L_F(q_1,q_2)\ge K_F(q_1,q_2), \quad \forall q_1,q_2\in\mathcal{TQ}_g(\underline k,\epsilon)$$
is straightforward, while proving the reverse inequality is a much harder problem, which could be solved by finding a function $\varphi\in Diff_0^+(S_g,\Sigma)$ such that
$$\mathcal{L}_{q_1}^{q_2}(\varphi)\le K_F(q_1,q_2).$$
Before studying the general case, let us first state a much simpler fact.
\begin{prop}
Given any $q\in\mathcal{TQ}_g(\underline k,\epsilon)$ and any $A\in GL(2,\mathbb{R})^+$, we have
$$L_F(q,A\cdot q)=K_F(q,A\cdot q)=\log(\sigma),$$
where $\sigma:=\max\{\sigma_1(A),\sigma_1(A)^{-1},\sigma_2(A),\sigma_2(A)^{-1}\}$ and $\sigma_1(A), \sigma_2(A)$ are the two singular values of $A$.
\end{prop}
\begin{proof}
Without loss of generality, we can suppose that $\sigma_1(A)$ is realized in the horizontal direction of $q$ and $\sigma_2(A)$ in the vertical direction. Notice furthermore that
$$\log(\sigma)=\mathcal{L}_{q}^{A\cdot q}(Id).$$
If $\sigma=\sigma_1(A)$ or $\sigma=\sigma_1(A)^{-1}$, then a saddle connection in the horizontal direction would have stretch factor $\sigma$. Although it is not always possible to assume the existence of such a geodesic, it is a consequence of theorem 2 of \cite{Ma2} that the directions of saddle connections of a quadratic differential are dense in $S^1$. Consequently, we can always consider a sequence $\{\gamma_n\}_{n\in \mathbb{N}}$ of saddle connections of $q$ asymptotic to the horizontal direction, meaning that $\lim\limits_{n\rightarrow \infty}\theta(\gamma_n)=0$, where $\theta(\gamma_n)$ is the difference between the direction of $\gamma_n$ and the horizontal direction.\\
Then it follows
$$K_F(q,A\cdot q)\ge \lim\limits_{n\rightarrow \infty}\left|\log\left(\frac{\hat l_{A\cdot q}(\gamma_n)}{\hat l_{q}(\gamma_n)}\right)\right|=\log(\sigma)\ge L_F(q,A\cdot q)$$
and from $K_F(q,A\cdot q)\le L_F(q,A\cdot q)$ one gets $K_F(q,A\cdot q)= L_F(q,A\cdot q)=\log (\sigma)$.
If $\sigma=\sigma_2(A)$ or $\sigma=\sigma_2(A)^{-1}$, one can repeat the same reasoning for the vertical direction.
\end{proof}
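As a quick numerical sanity check of the previous proposition (this example is ours, not part of the text: we restrict to diagonal matrices $A=\mathrm{diag}(s_1,s_2)$, for which the stretch factors can be read off directly, and the helper name is of our choosing), the predicted distance $\log\sigma$ can be tabulated as follows.

```python
import math

def flat_distance_bound(s1, s2):
    """Predicted L_F(q, A.q) = K_F(q, A.q) for A = diag(s1, s2) acting on a
    half-translation surface q: the log of the largest among the two stretch
    factors and their inverses (the quantity called sigma above)."""
    sigma = max(s1, 1.0 / s1, s2, 1.0 / s2)
    return math.log(sigma)

# Teichmueller-type matrix diag(e^t, e^-t): distance t.
t = 0.7
print(flat_distance_bound(math.exp(t), math.exp(-t)))  # ~0.7

# Scaling matrix diag(c, c): the point moves by log(c) as well.
print(flat_distance_bound(2.0, 2.0))                   # ~0.693

# The identity gives distance 0.
print(flat_distance_bound(1.0, 1.0))                   # 0.0
```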
Considering the general case, one could be tempted to adapt the ideas behind Thurston's proof in \cite{Th} to the case of $L_F$ and $K_F$. Specifically, one could try to build a flat analogue of Thurston's stretch maps. \\
We thought the most natural way to do so was to triangulate $S_g$ by saddle connections: clearly this could work only locally on $\mathcal{TQ}_g(\underline k,\epsilon)$, since for quadratic differentials $q_1,q_2$ too far apart there will not be any triangulation $\Gamma=\{\gamma_i\}_{i=1}^{3(m+2g-2)}$ of $S_g$ by arcs together with a continuous path $t\mapsto q_t$ connecting $q_1$ and $q_2$ such that the geodesic representative of the homotopy class of each $\gamma_i$ is a saddle connection for every $q_t$.\\
Another possibility was the use of a flat counterpart to geodesic laminations, called a \textit{flat lamination} (for definitions and properties we refer the reader to \cite{Mo}), in order to obtain a triangulation of $S_g$.\\
Unfortunately, both approaches suffered from the same problem: instead of hyperbolic triangles, singular flat metrics require the use of Euclidean triangles.
Indeed, one can triangulate a semi-translation surface $(X,q)$ with Euclidean triangles and stretch each side of each Euclidean triangle by the same factor $c>1$ as in proposition 2.2 of \cite{Th}, but then the resulting semi-translation surface will simply be $c\cdot q$.\\
The point is that in this case the sides of the triangles of the triangulation should be stretched by different factors. In trying to do so, one notices that there are many pairs of Euclidean triangles $T_1,T_2$, with each side stretched by a factor less than or equal to $c>1$, for which there is no homeomorphism $f:T_1\rightarrow T_2$ sending sides to corresponding sides with $Lip(f)\le c$.
\begin{es}Consider the equilateral triangle $T_1$ with sides of length 1 and the isosceles triangle $T_2$ with base side of length 1 and height $\sqrt{3}$. Then clearly the maximal stretching of the sides of $T_1$ and $T_2$ is $\frac{\sqrt{13}}{2}$, while each homeomorphism \mbox{$f:T_1\rightarrow T_2$} which sends sides to corresponding sides must also send the arc parametrizing the height of $T_1$ to an arc of length at least $\sqrt{3}$. This implies that the Lipschitz constant of such $f$ must be at least $2>\frac{\sqrt{13}}{2}$.
\end{es}
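The arithmetic of the example can be verified directly (a throwaway computation; the variable names are ours):

```python
import math

# T1: equilateral triangle with unit sides; T2: isosceles triangle with
# base 1 and height sqrt(3), apex above the midpoint of the base.
height_T2 = math.sqrt(3)

# Equal sides of T2: hypotenuse over half the base and the full height.
side_T2 = math.hypot(0.5, height_T2)        # = sqrt(13)/2 ~ 1.803
max_side_stretch = side_T2 / 1.0            # sides of T1 have length 1

# The height of T1 must be sent to an arc of length >= height of T2.
height_T1 = math.sqrt(3) / 2
forced_stretch = height_T2 / height_T1      # = 2

assert forced_stretch > max_side_stretch    # 2 > sqrt(13)/2
```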
The fundamental fact highlighted by the previous counter-example is that, if one tries to obtain a diffeomorphism $\varphi\in Diff_0^+(S_g,\Sigma)$ with $\mathcal{L}_{q_1}^{q_2}(\varphi)=K_F(q_1,q_2)$ by defining it first on the Euclidean triangles of a triangulation of $S_g$, then $\mathcal{L}_{q_1}^{q_2}(\varphi)$ should be attained along a curve of the triangulation. As a consequence, when searching for flat analogues of Thurston's stretch maps, one should impose strict conditions on the triangles considered.\\
As we made clear before, for $q_1$ and $q_2$ sufficiently close in $\mathcal{TQ}_g(\underline k,\epsilon)$, there is a triangulation $\Gamma=\{\gamma_i\}_{i=1}^{3(m+2g-2)}$ of $S_g$ by arcs with endpoints in $\Sigma$ such that the geodesic representative of each $\gamma_i$ for $|q_1|$ and $|q_2|$ is a saddle connection. This procedure provides us with a collection $\Xi^1=\{T^1_j\}_{j=1}^{2(m+2g-2)}$ of Euclidean triangles in the natural coordinates of $q_1$ and a collection $\Xi^2=\{T^2_j\}_{j=1}^{2(m+2g-2)}$ of Euclidean triangles in the natural coordinates of $q_2$.\\
Our problem is now to establish whether there is a triangulation $\Gamma$ of $S_g$ such that it is possible to obtain a function $\varphi\in Diff_0^+(S_g,\Sigma)$ with $\mathcal{L}_{q_1}^{q_2}(\varphi)=K_F(q_1,q_2)$ by defining it first on each pair of corresponding triangles of $\Xi^1$ and $\Xi^2$.\\
To this end one should consider the following fact:\\
\textit{Given two Euclidean triangles $T_1,T_2$ with sides labeled, consider the set $L(T_1,T_2)$ of Lipschitz constants of diffeomorphisms $f:T_1\rightarrow T_2$ which send sides to corresponding sides in a linear way. \\
The minimum of $L(T_1,T_2)$ is the Lipschitz constant of the affine map $A$ which maps $T_1$ onto $T_2$.}\\
Note that we considered functions which are linear on the sides of the triangles since we want the Lipschitz constant to be equal to the ratio of the lengths of a side. This suggests that the function $\varphi$ we are trying to obtain should be affine on each triangle $T_j^1$ and that its largest singular value should be attained on the most stretched side of $\Gamma$.\\
Finally, we see that this last condition imposes a very strong constraint on the collections $\Xi^1$ and $\Xi^2$ and consequently on the triangulation $\Gamma$. Since this problem is related to the nature of Euclidean triangles, it does not seem likely to be solved using flat laminations.\\
For the reasons we just explained, we were not able to prove the local equality $L_F=K_F$ by adapting Thurston's approach. In section 3 we will explain another approach, which we used to show that the equality of the two asymmetric pseudo-metrics $L_F^a$ and $K_F^a$ on $\mathcal{TQ}_g^{(1)}(\underline k,\epsilon)$ reduces to two statements about 1-Lipschitz maps between polygons.
\bigskip
\subsection{Geodesics of $L_F$}
In the previous discussion we explained why we are not able to produce a flat counterpart to Thurston's stretch lines; it is nonetheless interesting to investigate what geodesics of $L_F$ look like.\\
We could only find geodesics of $L_F$ which are also geodesics of $K_F$: this is because the only feasible strategy to find geodesics $t\mapsto q_t$ of $L_F$ we could think of was to find functions $\varphi_t\in Diff_0^+(S_g,\Sigma)$ such that $\mathcal{L}_q^{q_t}(\varphi_t)=t=K_F(q,q_t)$ and then conclude from $K_F(q,q_t)\le L_F(q,q_t)$.\\
As one can easily notice, these geodesics of $L_F$ are very particular: as soon as some hypotheses are weakened, one can no longer be sure to find functions $\varphi_t$ such that $\mathcal{L}_q^{q_t}(\varphi_t)=t=K_F(q,q_t)$.\\
Let us explain first how to obtain geodesics of $L_F$ and $K_F$ entirely contained in one orbit of $GL(2,\mathbb{R})^+$.
\begin{prop}
Consider any $q\in \mathcal{TQ}_g(\underline k,\epsilon)$, and any pair of continuous functions $$\theta:[0,1]\rightarrow [0,2\pi), \quad f:[0,1]\rightarrow \mathbb{R}^+$$
such that for every $t_0,t_1\in [0,1]$ with $t_0<t_1$ it holds
$$e^{t_1-t_0}\ge \max\Big\{\frac{f(t_1)}{f(t_0)},\frac{f(t_0)}{f(t_1)}\Big\}.$$
Using these data one can produce four geodesics for $K_F$ and $L_F$ starting at $q$ of the following form
$$\Phi^j:[0,1]\rightarrow \mathcal{TQ}_g(\underline k,\epsilon), \quad t\mapsto q^j_t:= e^{i\theta(t)}\cdot \Sigma_t^j\cdot q,\quad j=1,2,3,4,$$
where $\Sigma_t^j$ is one of the following four diagonal matrices
$$\Sigma_t^1:=\begin{pmatrix} e^t &0\\0&f(t)\end{pmatrix}\quad \Sigma_t^2:=\begin{pmatrix} e^{-t} &0\\0&f(t)\end{pmatrix}\quad \Sigma_t^3:=\begin{pmatrix} f(t) &0\\0&e^t\end{pmatrix}\quad \Sigma_t^4:=\begin{pmatrix} f(t) &0\\0&e^{-t}\end{pmatrix}.$$
\end{prop}
\begin{proof}
The proof is identical for all four geodesics, so we will just prove it for $\Phi^1$. \\
For any $t_0,t_1\in [0,1]$, $t_0<t_1$ it results $q_{t_1}^{1}=A\cdot q_{t_0}^1$, where $A=e^{i\theta(t_1)}\cdot \Sigma \cdot e^{-i\theta(t_0)}$ and $\Sigma$ is the following diagonal matrix
$$\Sigma:=\begin{pmatrix} e^{t_1-t_0} & 0\\ 0&\frac{f(t_1)}{f(t_0)} \end{pmatrix}.$$
Since $\Phi^1$ is contained in a $GL(2,\mathbb{R})^+$-orbit, one can apply the previous proposition 2.13 and get $K_F(q_{t_0}^1,q_{t_1}^1)=L_F(q_{t_0}^1,q_{t_1}^1)=t_1-t_0.$
\end{proof}
Given any Teichm\"uller geodesic $$\Psi:[0,1]\rightarrow \mathcal{T}_g, \quad t\mapsto [(X_t,h_t)]$$ with initial differential $q$ on $X$, we define its lifting on $\mathcal{TQ}_g(\underline k,\epsilon)$ to be $$\widetilde \Psi:[0,1]\rightarrow \mathcal{TQ}_g(\underline k,\epsilon),\quad t\mapsto q_t$$ where $q_t$ is the holomorphic quadratic differential on $X_t$ such that $h_t:X\rightarrow X_t$ is a Teichm\"uller map with respect to $q$ and $q_t$ and with dilatation $e^{2t}$.
\begin{prop}
Liftings to $\mathcal{TQ}_g(\underline k,\epsilon)$ of Teichm\"uller geodesics are geodesics for $L_F$ and $K_F$.
\end{prop}
\begin{proof}
The claim follows immediately from the previous proposition: one just has to notice that the Teichm\"uller map $h_t$ can be locally written in natural coordinates of $q$ and $q_t$ as
$$h_t(x+iy)=e^{t}x+ie^{-t}y.$$
\end{proof}
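Unpacking this (a routine verification of ours, in the notation of proposition 2.15): the lifting is the geodesic $\Phi^1$ with $\theta\equiv 0$ and $f(t)=e^{-t}$, and the stretching hypothesis on $f$ holds with equality, since for $t_0<t_1$

```latex
q_t=\Sigma^1_t\cdot q,\qquad
\Sigma^1_t=\begin{pmatrix} e^{t} & 0\\ 0 & e^{-t}\end{pmatrix},\qquad
\max\Big\{\frac{f(t_1)}{f(t_0)},\frac{f(t_0)}{f(t_1)}\Big\}
=\max\big\{e^{-(t_1-t_0)},\,e^{t_1-t_0}\big\}=e^{t_1-t_0}.
```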
At this point, one could be tempted to try to obtain other geodesics using the result of proposition 2.13. In particular, considering functions $\theta$ and $f$ as in proposition 2.15, one may wonder whether it is possible to impose, for example, $q_t:=\Sigma^1_t\cdot e^{i\theta(t)}\cdot q$.\\
The answer is no: since the direction in which the stretching $e^t$ is attained varies, there is no hope of getting $L_F(q_{t_0},q_{t_1})=t_1-t_0$ or $K_F(q_{t_0},q_{t_1})=t_1-t_0$.\\
It is possible, however, to obtain other kinds of geodesics by modifying only one part of the semi-translation surface, as we now explain.
\begin{prop}
Let $q\in\mathcal{TQ}_g(\underline k,\epsilon)$ be a semi-translation surface which contains a flat cylinder $C$ of height $h>0$ such that there is at least one saddle connection entirely contained in $C$ which realizes the height of the cylinder.\\
The arc $\Phi:[0,1]\rightarrow \mathcal{TQ}_g(\underline k,\epsilon)$, $t\mapsto q_t$, where $q_t$ is the semi-translation surface obtained from $q$ by changing the height of the flat cylinder to $e^th$, is a geodesic for $L_F$ and $K_F$.
\end{prop}
\begin{proof}
Denote by $\gamma_1,\dots,\gamma_k$ the saddle connections entirely contained in $C$ which realize the height of the cylinder. Clearly, if $h$ is stretched by $e^t$ then the lengths of $\gamma_1,\dots,\gamma_k$ are stretched by the same factor.\\
For any $t_0,t_1\in [0,1]$, $t_0<t_1$, the semi-translation surface $q_{t_1}$ is obtained from $q_{t_0}$ by stretching the height of the cylinder by the factor $e^{t_1-t_0}$. All saddle connections of $q_{t_0}$ different from $\gamma_1,\dots,\gamma_k$ are stretched by a factor smaller than $e^{t_1-t_0}$, and consequently one can conclude $$K_F(q_{t_1},q_{t_0})=\log\left(\frac{\hat l_{q_{t_1}}(\gamma_i)}{\hat l_{q_{t_0}}(\gamma_i)}\right)=t_1-t_0.$$
Without loss of generality, we can suppose the direction of the saddle connection $\gamma_1,\dots,\gamma_k$ is the vertical one. Consequently there is a function $\varphi\in Diff^+_0(S_g,\Sigma)$ which, in natural coordinates of $q_{t_0}$ and $q_{t_1}$, can be written as the affine function $\begin{pmatrix} 1& 0\\0& e^{t_1-t_0}\end{pmatrix} $ on the cylinder and as the identity on the complement of the cylinder.\\
From $L_F(q_{t_0},q_{t_1})\le\mathcal{L}_{q_{t_0}}^{q_{t_1}}(\varphi)=t_1-t_0=K_F(q_{t_0},q_{t_1})$ and $L_F(q_{t_0},q_{t_1})\ge K_F(q_{t_0},q_{t_1})$ one gets the last desired equality $L_F(q_{t_1},q_{t_0})=t_1-t_0$.\\
\end{proof}
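The key inequality used in the proof, namely that a Euclidean segment with nonzero horizontal holonomy is stretched strictly less than a vertical one, can be checked numerically (a sketch with values of our choosing; the function name is ours):

```python
import math

def stretch_factor(a, b, k):
    """Length ratio of a Euclidean segment with horizontal component a and
    vertical component b after stretching the vertical direction by k >= 1."""
    return math.hypot(a, k * b) / math.hypot(a, b)

k = math.e ** 0.3   # plays the role of e^{t_1 - t_0}

# A segment realizing the height of the cylinder (a = 0) is stretched
# exactly by k, while any segment with a horizontal part is stretched less:
# sqrt(a^2 + k^2 b^2) < k * sqrt(a^2 + b^2) whenever a != 0 and k > 1.
print(stretch_factor(0.0, 1.0, k))           # = k ~ 1.3499
for a in (0.1, 1.0, 10.0):
    assert stretch_factor(a, 1.0, k) < k
```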
The idea behind the previous proposition can be applied also to the case of a semi-translation surface $q$ obtained \textit{gluing two semi-translation surfaces $q_1,q_2$ along a slit in the horizontal direction}. This means that one cuts two slits of the same length, one in $q_1$ and one in $q_2$, both in the horizontal direction. Each $q_i$ will then have boundary consisting of two segments: each segment of the boundary of $q_1$ will be glued with a segment of the boundary of $q_2$ and the resulting surface $q$ will have two singularities of total angle $4\pi$ at the extremities of the slit.
\begin{prop}
Let $q\in\mathcal{TQ}_g(\underline k,\epsilon)$ be a semi-translation surface obtained gluing two semi-translation surfaces $q_1,q_2$ along a slit in the horizontal direction. Furthermore, suppose that $q_1$ is such that it contains a sequence of saddle connections $\{\gamma_n\}_{n\in \mathbb{N}}$ asymptotic in the vertical direction (i.e. the limit of the differences of their directions with the vertical direction is zero) such that no $\gamma_n$ intersects the slit. \\
Then one obtains the geodesic $\Phi:[0,1]\rightarrow \mathcal{TQ}_g(\underline k,\epsilon)$, $t\mapsto q_t$, where $q_t$ is the semi-translation surface obtained gluing $\begin{pmatrix}1&0 \\ 0&e^t \end{pmatrix}\cdot q_1$ and $q_2$ along the same slit.
\end{prop}
\begin{proof}
The idea of the proof is very similar to that of the previous proposition. \\
First of all notice that $q_t$ is a well-defined semi-translation surface since the slit is horizontal and $q_1$ is stretched only in the vertical direction. \\
Then, for every $t_0,t_1\in [0,1]$, $t_0<t_1$, from the fact that no $\gamma_n$ intersects the slit it follows
$$K_F(q_{t_0},q_{t_1})=\lim_{n\rightarrow \infty}\log\left(\frac{\hat l_{q_{t_1}}(\gamma_n)}{\hat l_{q_{t_0}}(\gamma_n)}\right)=t_1-t_0.$$
One can then conclude noting $L_F(q_{t_0},q_{t_1})\le\mathcal{L}_{q_{t_0}}^{q_{t_1}}(Id)=t_1-t_0.$
\end{proof}
\bigskip
\section{Equality of asymmetric pseudo-metrics $L_F^a$ and $K_F^a$}
\bigskip
In this section we investigate the equality of two asymmetric pseudo-metrics $L_F^a$ and $K_F^a$ on each Teichm\"uller space $\mathcal{TQ}^{(1)}_g(\underline k,\epsilon)$ of holomorphic quadratic differentials of unitary area without simple poles. \\
In particular, using the method we develop in this section, the equality of $L_F^a$ and $K_F^a$ on the whole $\mathcal{TQ}^{(1)}_g(\underline k,\epsilon)$ can be proved if two statements about 1-Lipschitz maps between planar polygons are true. We are able to prove the first statement, but the second remains a conjecture; nonetheless, we explain why we believe it is true.
For any $g\ge 2$ and any Teichm\"uller space $\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$ of holomorphic quadratic differentials of unitary area without simple poles, we define the function
$$L^a_F:\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)\times \mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)\rightarrow \mathbb{R}$$
associating to any pair $q_1,q_2 \in\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$ of semi-translation surfaces of unitary area the quantity
$$L^a_F(q_1,q_2):=\inf\limits_{\varphi\in \mathcal{D}} \log(Lip(\varphi)_{q_1}^{q_2}),$$
$$Lip(\varphi)_{q_1}^{q_2}=\sup\limits_{p\in S_g\setminus \Sigma}\left(\sup\limits_{v\in T_pS_g, ||v||_{q_1}=1} ||d\varphi_pv||_{q_2}\right),$$
where $\mathcal{D}$ is the set of functions $\varphi:S_g\rightarrow S_g$ which are homotopic to the identity, differentiable almost everywhere and which fix the points of $\Sigma$.
\begin{prop}
The function $ L^a_F$ is an asymmetric pseudo-metric on $\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$.
\end{prop}
\begin{proof}
It is clear that $L^a_F(q,q)=0$ for every $q\in \mathcal{TQ}_g(\underline k,\epsilon)$ and that $L^a_F$ is not symmetric.\\
Note that every function $\varphi\in\mathcal{D}$ must be surjective, since it has degree 1; from this it follows that $Lip(\varphi)_{q_1}^{q_2}\ge 1$, with $Lip(\varphi)_{q_1}^{q_2}= 1$ if and only if $q_2=e^{i\theta}q_1$. \\
Finally, $L^a_F$ satisfies the triangle inequality, since for every pair of functions \mbox{$\varphi,\phi\in\mathcal{D}$} we have
$$Lip(\varphi\circ\phi)_{q_1}^{q_3}\le Lip(\varphi)_{q_2}^{q_3}Lip(\phi)_{q_1}^{q_2}.$$
\end{proof}
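For completeness, the last step can be unpacked as follows (a routine manipulation of ours, not spelled out in the text): taking logarithms in the submultiplicativity inequality and then infima over $\varphi,\phi\in\mathcal{D}$, using that $\varphi\circ\phi\in\mathcal{D}$, yields

```latex
L^a_F(q_1,q_3)
=\inf_{\psi\in\mathcal{D}}\log\big(Lip(\psi)_{q_1}^{q_3}\big)
\le\inf_{\varphi,\phi\in\mathcal{D}}
   \Big(\log\big(Lip(\varphi)_{q_2}^{q_3}\big)
        +\log\big(Lip(\phi)_{q_1}^{q_2}\big)\Big)
= L^a_F(q_1,q_2)+L^a_F(q_2,q_3).
```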
The other pseudo-metric we consider in the present section is $K_F^a$: for every $q_1,q_2\in \mathcal{TQ}^{(1)}_g(\underline k,\epsilon)$, $K_F^a(q_1,q_2)$ is defined as
$$K^a_F(q_1,q_2):=\sup\limits_{\gamma\in SC(q_1)}\log\left(\frac{\hat l_{q_2}(\gamma)}{\hat l_{q_1}(\gamma)}\right).$$
For every $q_1,q_2\in \mathcal{TQ}^{(1)}_g(\underline k,\epsilon)$ we clearly have $$L_F^a(q_1,q_2)\ge K_F^a(q_1,q_2).$$
With the techniques presented in this section we are able to reduce the proof of the equality of $L_F^a$ and $K_F^a$ on the whole $\mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$ to the proof of two statements about 1-Lipschitz maps between planar polygons. Given their importance, we briefly anticipate them now in a slightly simplified version.\\
Consider two planar polygons $\Delta$ and $\Delta'$ such that there is an injective function
$$\iota:Vertices(\Delta)\rightarrow Vertices(\Delta')$$
which to every vertex $v$ associates a unique vertex $\iota(v)=v'$. Suppose both $\Delta$ and $\Delta'$ have exactly three vertices with strictly convex internal angle, which we denote $x_i$ and $x_i'$, $i=1,2,3$ respectively. \\
Suppose furthermore that for every $x,y\in Vertices(\Delta)$ it results
$$d_\Delta(x,y)\ge d_{\Delta'}(x',y'),$$
where $d_\Delta$ (resp. $d_{\Delta'}$) is the intrinsic Euclidean metric inside $\Delta$ (resp. $\Delta'$): $d_\Delta(x,y)$ (resp. $d_{\Delta'}(x',y')$) is defined as the infimum of the lengths, computed with respect to the Euclidean metric, of all paths from $x$ to $y$ (resp. from $x'$ to $y'$) entirely contained in $\Delta$ (resp. in $\Delta'$). \\
We say that vertices of $\Delta$ and of $\iota(Vertices(\Delta))$ are \textit{disposed in the same order} if it is possible to choose two parametrizations $\gamma:[0,1]\rightarrow \partial \Delta$ and $\gamma_1:[0,1]\rightarrow \partial \Delta'$ such that $\gamma(0)=x_1$, $\gamma_1(0)=x_1'$ and $\gamma,\gamma_1$ meet respectively vertices of $\Delta$ and of $\Delta'$ in the same order.
\begin{stm}(Theorem 3.21)
If $Vertices(\Delta)$ and $\iota(Vertices(\Delta))$ are disposed in the same order, then there is a 1-Lipschitz map $f:\Delta\rightarrow \Delta'$ (with respect to the intrinsic Euclidean metrics of the polygons) which sends vertices to corresponding vertices.
\end{stm}
\begin{stm}(Conjecture 3.31)
If $Vertices(\Delta)$ and $\iota(Vertices(\Delta))$ are not disposed in the same order, then for every point $p\in \Delta$ there is a point $p'\in \Delta'$ such that $$d_\Delta(p,x_i)\ge d_{\Delta'}(p',x_i'), \quad i=1,2,3.$$
\end{stm}
We were able to prove the first statement, which corresponds to the following theorem 3.21, but not the second one, which from now on will be referred to as conjecture 3.31; we will nonetheless explain why we believe it must be true.\\
We state the following theorem, which is the main result of this paper.
\begin{thm}\label{bigthm}
If conjecture 3.31 is true, then for every $q_1,q_2\in \mathcal{TQ}_g^{(1)}(\underline{k},\epsilon)$ we have
$$L_F^a(q_1,q_2)= K^a_F(q_1,q_2).$$
\end{thm}
We proved theorem \ref{bigthm} using an approach similar to a proof by F.A. Valentine (which can be found in \cite{Va}) of Kirszbraun's theorem for $\mathbb{R}^2$ (first proved by M.D. Kirszbraun in \cite{Ki}).
\begin{thm} (Kirszbraun)\\\label{kirsz}
Let $S\subset \mathbb{R}^2$ be any subset and $f:S\rightarrow \mathbb{R}^2$ a 1-Lipschitz map.\\
Given any set $T$ which contains $S$, it is possible to extend $f$ to a 1-Lipschitz map $\hat f:T\rightarrow \mathbb{R}^2$ such that $\hat f(T)$ is contained in the convex hull of $f(S)$.
\end{thm}
The key ingredients of Valentine's proof of Kirszbraun's theorem are the following two lemmas.
\begin{lem}\label{triang}
Fix two Euclidean triangles $\Delta(x_1,x_2,x_3)$ and $\Delta(x_1',x_2',x_3')$ in $\mathbb{R}^2$ such that
$$|x_i'-x_j'|\le |x_i-x_j|\quad \textit{ for every }i,j=1,2,3.$$
Then for any $x_4\in \mathbb{R}^2$ there is a point $x_4'$ contained in $\Delta(x_1',x_2',x_3')$ such that
$$|x_4'-x_i'|\le |x_4-x_i|\quad \textit{for every } i=1,2,3.$$
\end{lem}
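Lemma \ref{triang} can be illustrated on a concrete instance by brute force (purely a sanity check of ours: the two triangles, the fourth point and the grid resolution are arbitrary choices):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def in_triangle(p, a, b, c):
    # Sign test against the three edges (triangle assumed non-degenerate);
    # boundary points count as inside.
    def s(u, v, w):
        return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])
    d1, d2, d3 = s(p, a, b), s(p, b, c), s(p, c, a)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

# A triangle and a "shrunk" triangle: each pairwise distance decreases.
x  = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
xp = [(0.0, 0.0), (3.0, 0.0), (1.0, 2.0)]
for i in range(3):
    for j in range(3):
        assert dist(xp[i], xp[j]) <= dist(x[i], x[j]) + 1e-12

x4 = (5.0, 2.0)  # an arbitrary fourth point

# Grid search inside the shrunk triangle for a point x4' with
# |x4' - xi'| <= |x4 - xi| for i = 1, 2, 3 (the lemma asserts one exists).
found = None
steps = 200
for i in range(steps + 1):
    for j in range(steps + 1):
        p = (3.0 * i / steps, 2.0 * j / steps)
        if in_triangle(p, *xp) and all(
                dist(p, xp[m]) <= dist(x4, x[m]) + 1e-9 for m in range(3)):
            found = p
            break
    if found:
        break
assert found is not None
```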
The second lemma is often referred to as \textit{Helly's theorem} (first proved by E. Helly in \cite{He}).
\begin{lem}(Helly)\\
Let $F$ be any family of compact and convex subsets of $\mathbb{R}^n$. If for every $C_1,\dots, C_{n+1}\in F$ it holds
$$\bigcap_{i=1}^{n+1}C_i\neq \emptyset,$$
then
$$\bigcap_{C\in F}C\neq \emptyset.$$
\end{lem}
Together, these two lemmas imply the following proposition, from which one easily deduces theorem \ref{kirsz}.
\begin{prop}
Let $\{B_{r_j}(x_j)\}_{j\in J}$ and $\{B_{r_j}(x_j')\}_{j\in J}$ be two collections of closed disks in $\mathbb{R}^2$ with the same radii and with centers such that
$$|x_i'-x_j'|\le |x_i-x_j|\quad \textit{for every } i,j\in J.$$
If
$$\bigcap\limits_{j\in J}B_{r_j}(x_j)\neq \emptyset,$$
then
$$\bigcap\limits_{j\in J}B_{r_j}(x_j')\neq \emptyset.$$
\end{prop}
\bigskip
We carried out a similar reasoning in order to find a function $\phi\in \mathcal{D}$ such that $$Lip(\phi)_{q_1}^{\sigma_2}=1,$$
where $\sigma_2$ is the rescaled differential
$$\sigma_2:=\frac{q_2}{e^{K^a_F(q_1,q_2)}}.$$
The existence of such function $\phi$ proves the equality
\begin{equation}
e^{L_F^a(q_1,\sigma_2)}=e^{K_F^a(q_1,\sigma_2)}=1
\end{equation}
and consequently, since for every $c>0$ we have
$$c e^{L_F^a(q_1,q_2)}=e^{L_F^a(q_1,cq_2)},\quad ce^{K_F^a(q_1,q_2)}=e^{K_F^a(q_1,cq_2)},$$
multiplying both terms of the previous equation by $e^{K^a_F(q_1,q_2)}$ and then taking logarithms, one gets the desired result
$$L_F^a(q_1,q_2)=K_F^a(q_1,q_2).$$
It is important to specify that in our proof we used the following version of Helly's lemma, which can be found in \cite{Iv}.
\begin{lem}\label{hellyiv}
Let $X$ be a uniquely geodesic space of compact topological dimension $n<\infty$. If $\{A_j\}_{j\in J}$ is any finite collection of convex sets in $X$ such that every subcollection of cardinality at most $n+1$ has a nonempty intersection, then
$$\bigcap\limits_{j\in J} A_j\neq \emptyset.$$
\end{lem}
If $q$ is a holomorphic quadratic differential on a closed Riemann surface of genus $g\ge 2$, one can consider a universal cover $\pi:\widetilde S_g\rightarrow S_g$ and the pullback $\widetilde q$ of $q$ to $\widetilde S_g$. Then $|\widetilde q|$ induces a metric which is $Cat(0)$ and consequently uniquely geodesic. But if $q$ has poles then $|\widetilde q|$ does not induce a uniquely geodesic metric space: this is the reason why our proof cannot be adapted to the Teichm\"uller space of quadratic differentials with poles.
\bigskip
One should notice that the equality $L_F^a=K_F^a$ could also be implied by a version of Kirszbraun's theorem suited to semi-translation surfaces (without simple poles). The generalization of theorem \ref{kirsz} which can be considered closest to semi-translation surfaces was proved by S. Alexander, V. Kapovitch and A. Petrunin in \cite{AKP} and applies to functions from complete $CBB(k)$ spaces (spaces with curvature bounded below by $k$) to complete $Cat(k)$ spaces (spaces with curvature bounded above by $k$). Since semi-translation surfaces are only locally $Cat(0)$ spaces, unfortunately the theorem of \cite{AKP} does not apply to our case.\\
At this point it should be more clear why we decided to prove the equality of the two pseudo-metrics $L_F^a,K_F^a$ instead of the equality of the two pseudo-metrics $L_F,K_F$ studied in the preceding section.\\
Indeed, one reason is that it is more convenient to study asymmetric pseudo-metrics, since it is more complicated to control both Lipschitz constants (the lower and the upper one) at once: for an attempt in this direction in the simple case of the unit square see \cite{DP}.\\
The other reason is that using this kind of \textit{Kirszbraun approach} there is no hope of obtaining an injective 1-Lipschitz function. This is the reason why we defined $L_F^a$ as the infimum of Lipschitz constants of functions in $\mathcal{D}$.\\
Finally one should notice that the condition of unitary area of the two semi-translation surfaces $q_1$ and $q_2$ will never be used in the proof. We could actually prove the equality of $L_F^a$ and $K_F^a$ on the whole $\mathcal{TQ}_g(\underline k,\epsilon)$, where the two pseudo-metrics are much more degenerate. \\
The next section is devoted to the construction of the function $\phi\in \mathcal{D}$ such that $Lip(\phi)_{q_1}^{\sigma_2}=1$.
\subsection{Proof of the equality}
Let $\pi:\widetilde S_g\rightarrow S_g$ be a universal cover. Lifting through $\pi$ the complex structure of $X_1$ and the differential $q_1$ one obtains the metric universal cover $\pi:(\widetilde X_1,|\widetilde q_1|)\rightarrow (X_1,|q_1|)$ and doing the same thing to $X_2$ and $q_2$ one obtains the metric universal cover $\pi:(\widetilde X_2,|\widetilde \sigma_2|)\rightarrow (X_2,|\sigma_2|)$.\\
Denote by $d_{\widetilde {q_1}}$ the $Cat(0)$ metric induced by $|\widetilde q_1|$ and by $d_{\widetilde {\sigma_2}}$ the $Cat(0)$ metric induced by $|\widetilde \sigma_2|$. In order to avoid confusion, when we will want to underline that a point of $\widetilde S_g$ is regarded as a point of $\widetilde X_2$, we will denote it with an additional prime symbol: for example a point $\widetilde x\in \pi^{-1}(\Sigma)$ will be denoted as $\widetilde x$ if regarded as a point of $\widetilde X_1$ and $\widetilde x'$ if regarded as a point of $\widetilde X_2$.\\
For every couple of points $\widetilde x,\widetilde y\in \widetilde X_1$, $\overline{ \widetilde x\widetilde y}$ is the $d_{\widetilde q_1}$-geodesic from $\widetilde x$ to $\widetilde y$. Since there will be no ambiguity, we will denote geodesics of $d_{\widetilde \sigma_2}$ in the same way: for every couple of points $\widetilde x',\widetilde y'\in \widetilde X_2$, $\overline{ \widetilde x'\widetilde y'}$ is the $d_{\widetilde \sigma_2}$-geodesic from $\widetilde x'$ to $\widetilde y'$.\\
Fix a point $x_0\in\Sigma\subset S_g$ and $\widetilde x_0\in \pi^{-1}(x_0)$: as is well known, the group $\pi_1(S_g,x_0)$ acts on $\widetilde S_g$ and for every $\gamma\in\pi_1(S_g,x_0)$, $\widetilde x\in \widetilde S_g$ we have
$$\gamma\cdot \widetilde x=\widetilde \tau(1),$$
where $\widetilde \tau$ is the lifting of $\gamma * \pi(\widetilde \sigma)$ ($\widetilde \sigma$ is any path in $\widetilde S_g$ from $\widetilde x_0$ to $\widetilde x$) such that $\widetilde \tau(0)=\widetilde x_0$.\\
Fix a fundamental domain $P\subset (\widetilde X_1,d_{\widetilde q_1})$ for the action of $\pi_1(S_g,x_0)$ and suppose $\widetilde x_0\in P$.\\
We want to build a map $\hat\phi:\hat U\rightarrow (\widetilde X_2,d_{\widetilde \sigma_2})$ (where $\hat U$ is a countable dense subset of $P$ which includes the zeroes of $\widetilde q_1$ contained in $P$), such that for every pair of points $\widetilde x,\widetilde y\in \hat U$ (possibly equal) and every $\gamma\in \pi_1(S_g,x_0)$, we have
\begin{equation}
d_{\widetilde \sigma_2}(\hat \phi(\widetilde x),\gamma\cdot \hat \phi(\widetilde y))\le d_{\widetilde q_1}(\widetilde x,\gamma\cdot \widetilde y)
\end{equation}
and for every zero $\widetilde z$ of $\widetilde q_1$ contained in $P$ it holds $\hat \phi(\widetilde z)=\widetilde z'$ (notice that $\widetilde q_1$ and $\widetilde \sigma_2$ have zeroes at the same points, namely the points of $\pi^{-1}(\Sigma)$).\\
Having done so, we define the dense subset $\widetilde U:=\pi_1(S_g,x_0)\cdot \hat U$ of $\widetilde X_1$ and extend the function $\hat \phi$ by equivariance to a function $\widetilde \phi^U:\widetilde U\rightarrow \widetilde X_2$, imposing
$$\widetilde \phi^U(\gamma\cdot \widetilde x):=\gamma\cdot\hat \phi(\widetilde x)$$
for every $\gamma\in \pi_1(S_g,x_0),\widetilde x\in \hat U$.\\
Notice that for every $\gamma_1\cdot \widetilde x_1,\gamma_2\cdot \widetilde x_2\in \widetilde U$ we have:
$$d_{\widetilde \sigma_2}( \widetilde \phi^U(\gamma_1\cdot \widetilde x_1),\widetilde \phi^U(\gamma_2\cdot \widetilde x_2))=d_{\widetilde \sigma_2}(\gamma_1\cdot \hat \phi(\widetilde x_1),\gamma_2\cdot \hat \phi(\widetilde x_2))=d_{\widetilde \sigma_2}( \hat \phi(\widetilde x_1),(\gamma_1^{-1} * \gamma_2)\cdot \hat \phi(\widetilde x_2))\le$$$$\le d_{\widetilde q_1}(\widetilde x_1,(\gamma_1^{-1}* \gamma_2)\cdot \widetilde x_2)=d_{\widetilde q_1}(\gamma_1\cdot \widetilde x_1, \gamma_2\cdot \widetilde x_2)$$
and consequently $\widetilde \phi^U$ can be extended to a function $\widetilde \phi:(\widetilde X_1,|\widetilde q_1|)\rightarrow (\widetilde X_2,|\widetilde \sigma_2|)$ which has Lipschitz constant 1. \\
In particular, for every point $\widetilde x\in \widetilde X_1\setminus \widetilde U$ we define $\widetilde \phi(\widetilde x)$ as
$$\widetilde \phi(\widetilde x):=\lim\limits_{n\rightarrow \infty}\widetilde \phi^U(\widetilde x_n),$$
where $\{\widetilde x_n\}_{n\in \mathbb{N}}\subset \widetilde U$ is a sequence such that $\lim\limits_{n\rightarrow \infty}\widetilde x_n=\widetilde x$: since $\widetilde \phi^U$ is 1-Lipschitz on $\widetilde U$, the limit in the definition of $\widetilde \phi(\widetilde x)$ exists and does not depend on the chosen sequence $\{\widetilde x_n\}_{n\in \mathbb{N}}$.\\
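To see the independence of the chosen sequence explicitly (a routine verification, spelled out for completeness): if $\{\widetilde x_n\}_{n\in \mathbb{N}}$ and $\{\widetilde y_n\}_{n\in \mathbb{N}}$ are two sequences in $\widetilde U$ both converging to $\widetilde x$, then
$$d_{\widetilde \sigma_2}(\widetilde \phi^U(\widetilde x_n),\widetilde \phi^U(\widetilde y_n))\le d_{\widetilde q_1}(\widetilde x_n,\widetilde y_n)\le d_{\widetilde q_1}(\widetilde x_n,\widetilde x)+d_{\widetilde q_1}(\widetilde x,\widetilde y_n)\longrightarrow 0,$$
so the limits of $\widetilde \phi^U(\widetilde x_n)$ and $\widetilde \phi^U(\widetilde y_n)$ coincide.\\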
Notice furthermore that $\widetilde \phi$ is equivariant for the action of $\pi_1(S_g,x_0)$: for every $\widetilde x\in \widetilde X_1\setminus \widetilde U$ and $\gamma\in \pi_1(S_g,x_0)$, consider a sequence $\{\widetilde x_n\}_{n\in \mathbb{N}}\subset \widetilde U$ such that $\lim\limits_{n\rightarrow \infty}\widetilde x_n=\widetilde x$; then $\lim\limits_{n\rightarrow \infty}\gamma\cdot\widetilde x_n=\gamma\cdot \widetilde x$ and consequently
$$\widetilde \phi(\gamma\cdot \widetilde x)= \lim\limits_{n\rightarrow \infty}\widetilde \phi^U(\gamma\cdot \widetilde x_n)=\lim\limits_{n\rightarrow \infty}\gamma\cdot\widetilde \phi^U(\widetilde x_n)=\gamma\cdot\lim\limits_{n\rightarrow \infty}\widetilde \phi^U(\widetilde x_n)=\gamma\cdot\widetilde \phi(\widetilde x).$$
We have proved that $\widetilde \phi$ descends to a function $\phi:(X_1,q_1)\rightarrow (X_2,\sigma_2)$ which is $1$-Lipschitz and such that
$$(\phi)_*=Id:\pi_1(S_g,x_0)\rightarrow \pi_1(S_g,x_0)$$
which implies that $\phi$ is homotopic to the identity.\\
In the rest of the section we will explain how to obtain a function $\hat \phi$ which satisfies inequality $(5.2)$ above.\\
We have imposed $\hat\phi(\widetilde z)=\widetilde z'$ for every zero $\widetilde z$ of $\widetilde q_1$ which is contained in $P$, so we have to verify
$$d_{\widetilde \sigma_2}(\widetilde z_1',\gamma\cdot\widetilde z_2')\le d_{\widetilde q_1}(\widetilde z_1,\gamma\cdot \widetilde z_2)$$
for every pair of zeroes $\widetilde z_1,\widetilde z_2$ of $\widetilde q_1$ contained in $P$ and every $\gamma\in \pi_1(S_g,x_0)$. \\
Notice that $d_{\widetilde \sigma_2}(\widetilde z_1',\gamma\cdot\widetilde z_2')=\hat l_{\sigma_2}(\tau)$, where $\hat l_{\sigma_2}(\tau)$ is the length of the geodesic representative for $|\sigma_2|$ of the homotopy class (with fixed endpoints) of $\tau:=\pi(\widetilde\tau)$, and $\widetilde \tau$ is any arc in $\widetilde X_2$ from $\widetilde z_1'$ to $\gamma\cdot \widetilde z_2'$. In the same way, $d_{\widetilde q_1}(\widetilde z_1,\gamma\cdot\widetilde z_2)=\hat l_{q_1}(\tau)$. \\
Let $\tau^{q_1}$ be the geodesic representative for $|q_1|$ of the homotopy class (with fixed endpoints) of $\tau$ and suppose $\tau^{q_1}$ is a concatenation of $k\ge 1$ saddle connections $\tau_1^{q_1},\dots ,\tau_k^{q_1}$. \\
From the definition of $\sigma_2$ it follows
$$l_{q_1}(\tau_i^{q_1})\ge \hat l_{\sigma_2}(\tau_i^{q_1})$$
for every $i=1,\dots,k$. We thus obtain the following inequalities:
$$d_{\widetilde q_1}(\widetilde z_1,\gamma\cdot\widetilde z_2)=l_{q_1}(\tau^{q_1})=\sum\limits_{i=1,\dots,k} l_{q_1}(\tau_i^{q_1})\ge \sum\limits_{i=1,\dots,k}\hat l_{\sigma_2}(\tau_i^{q_1})\ge \hat l_{\sigma_2}(\tau)=d_{\widetilde \sigma_2}(\widetilde z_1',\gamma\cdot\widetilde z_2').$$
Now we are going to define the function $\hat \phi$ on $\hat U$ one point at a time.\\
Let $\widetilde p_1\in P\setminus \pi^{-1}(\Sigma)$ be the first point (besides the zeroes of $\widetilde q_1$) on which we want to define $\hat\phi$: we have to find $\hat\phi(\widetilde p_1)\in \widetilde{X}_2$ such that
\begin{equation}
d_{\widetilde \sigma_2}(\hat\phi(\widetilde p_1),\gamma\cdot \widetilde x')\le d_{\widetilde {q_1}}(\widetilde p_1,\gamma\cdot \widetilde x)
\end{equation}
for every zero $\widetilde x$ of $\widetilde q_1$ contained in $P$ and for every $\gamma\in \pi_1(S_g,x_0)$. The point $\hat\phi(\widetilde p_1)$ should also satisfy the condition
\begin{equation}
d_{\widetilde \sigma_2}(\hat\phi(\widetilde p_1),\theta\cdot \hat\phi(\widetilde p_1))\le d_{\widetilde q_1}(\widetilde p_1,\theta\cdot \widetilde p_1)
\end{equation}
for every $\theta\in \pi_1(S_g,x_0)$.\\
Notice that, in order for inequality $(5.3)$ to hold in all cases, it suffices to check it only for the zeroes $\gamma\cdot \widetilde x$ such that $\overline{\widetilde p_1(\gamma\cdot \widetilde x)}$ is smooth and does not contain other zeroes. Indeed, suppose $\overline{\widetilde p_1(\gamma\cdot \widetilde x)}$ is the concatenation of the following segments
$$\overline{\widetilde p_1(\gamma\cdot \widetilde x)}=\overline{\widetilde p_1(\gamma_1\cdot \widetilde x_1)}*\widetilde \tau_1^{q_1}*\dots*\widetilde \tau_l^{q_1},$$
where:
\begin{itemize}
\item $\gamma_1\in \pi_1(S_g,x_0)$,
\item $\widetilde x_1$ is a zero of $\widetilde q_1$ contained in $P$,
\item $\widetilde \tau_i^{q_1}$ are saddle connections for $\widetilde q_1,$
\end{itemize}
then from the inequality
$$d_{\widetilde \sigma_2}(\hat \phi(\widetilde p_1),\gamma_1\cdot \widetilde x_1')\le d_{\widetilde q_1}(\widetilde p_1,\gamma_1\cdot \widetilde x_1)$$
and the definition of $\sigma_2$ it follows that
$$d_{\widetilde q_1}(\widetilde p_1,\gamma\cdot \widetilde{x})=d_{\widetilde q_1}(\widetilde p_1,\gamma_1\cdot \widetilde x_1)+\sum\limits_{i=1,\dots,l}l_{\widetilde q_1}(\widetilde \tau_i^{q_1})\ge$$ $$\ge d_{\widetilde \sigma_2}(\hat \phi(\widetilde p_1),\gamma_1\cdot \widetilde x_1')+\sum\limits_{i=1,\dots,l}\hat l_{\widetilde \sigma_2}(\widetilde \tau_i^{q_1}) \ge d_{\widetilde \sigma_2}(\hat \phi(\widetilde p_1),\gamma\cdot \widetilde x').$$
For the same reason it suffices to verify inequality $(5.4)$ only for $\theta\in \pi_1(S_g,x_0)$ such that $\overline{\widetilde p_1(\theta\cdot \widetilde p_1)}$ is smooth and does not contain zeroes of $\widetilde q_1$.\\
We define the following two sets:
$$\mathcal{X}(\widetilde p_1):=\{\widetilde z\in \pi^{-1}(\Sigma)\text{ such that } \overline{\widetilde z\widetilde p_1} \text{ is smooth and does not contain other zeroes of } \widetilde{q_1} \},$$
$$\Theta(\widetilde p_1):=\{\theta\in \pi_1(S_g,x_0) \text{ such that } \overline{\widetilde p_1(\theta\cdot \widetilde p_1)} \text{ is smooth and does not contain zeroes of } \widetilde{q_1} \}.$$
For every $\theta\in \Theta(\widetilde p_1)$ we define the set
$$V_\theta:=\{\widetilde p'\in \widetilde X_2\enskip |\enskip d_{\widetilde \sigma_2}(\widetilde p',\theta\cdot \widetilde p')\le d_{\widetilde q_1}(\widetilde p_1,\theta\cdot \widetilde p_1) \}.$$
\begin{lem}
For every $\theta\in \Theta(\widetilde p_1)$, the set $V_\theta$ is convex in $(\widetilde X_2,d_{\widetilde \sigma_2})$.
\end{lem}
\begin{proof}
Consider any two points $\widetilde x',\widetilde y'\in V_\theta$. Since $\widetilde \sigma_2$ is invariant under covering transformations, we can choose parametrizations $\widetilde \tau,\widetilde \tau_\theta:[0,1]\rightarrow \widetilde X_2$ of $\overline{\widetilde x'\widetilde y'}$ and of $\overline{(\theta\cdot \widetilde x')(\theta\cdot \widetilde y')}$ respectively, such that $\widetilde \tau_\theta(s)=\theta\cdot \widetilde \tau(s)$.\\
The space $(\widetilde X_2,d_{\widetilde \sigma_2})$ is $\mathrm{CAT}(0)$ and consequently Busemann-convex: this means that the function $$s\mapsto d_{\widetilde \sigma_2}(\widetilde\tau(s),\widetilde\tau_\theta(s))=d_{\widetilde \sigma_2}(\widetilde\tau(s),\theta\cdot\widetilde\tau(s))$$ is convex. From this fact we get $\widetilde\tau(s)\in V_\theta$ for every $s\in [0,1]$.
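Spelled out (a standard convexity estimate), for every $s\in[0,1]$ we get
$$d_{\widetilde \sigma_2}(\widetilde\tau(s),\theta\cdot\widetilde\tau(s))\le (1-s)\, d_{\widetilde \sigma_2}(\widetilde x',\theta\cdot \widetilde x')+s\, d_{\widetilde \sigma_2}(\widetilde y',\theta\cdot \widetilde y')\le d_{\widetilde q_1}(\widetilde p_1,\theta\cdot \widetilde p_1),$$
where the last inequality holds because $\widetilde x',\widetilde y'\in V_\theta$.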
\end{proof}
For every $\widetilde x\in \mathcal{X}(\widetilde p_1)$ we define the following closed ball
$$B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_1)}(\widetilde x'):=\{\widetilde p'\in \widetilde X_2|d_{\widetilde \sigma_2}(\widetilde p',\widetilde x')\le d_{\widetilde q_1}(\widetilde x,\widetilde p_1)\}.$$
Clearly, our goal is to prove that the set $\Pi(\widetilde p_1)$,
$$\Pi(\widetilde p_1):=\left(\bigcap\limits_{\widetilde x\in \mathcal{X}(\widetilde p_1)}B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_1)}(\widetilde x')\right)\bigcap\left(\bigcap\limits_{\theta\in \Theta(\widetilde p_1)}V_\theta\right)$$
is non-empty, in order to be able to choose $\hat\phi(\widetilde p_1)\in\Pi(\widetilde p_1)$.\\
Since the sets $B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_1)}(\widetilde x')$ and $V_\theta$ are convex, we can use Helly's lemma \ref{hellyiv} for uniquely geodesic spaces to prove $\Pi(\widetilde p_1)\neq \emptyset$; it thus suffices to show that every triple of these sets has non-empty intersection.
There are four cases:
\begin{enumerate}
\item $B^2_{d_{\widetilde q_1}(\widetilde x_1,\widetilde p_1)}(\widetilde x_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x_2,\widetilde p_1)}(\widetilde x_2')\cap B^2_{d_{\widetilde q_1}(\widetilde x_3,\widetilde p_1)}(\widetilde x_3')\neq \emptyset,$
\item $V_{\theta_1}\cap B^2_{d_{\widetilde q_1}(\widetilde x_1,\widetilde p_1)}(\widetilde x_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x_2,\widetilde p_1)}(\widetilde x_2')\neq \emptyset,$
\item $V_{\theta_1}\cap V_{\theta_2}\cap B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_1)}(\widetilde x')\neq \emptyset,$
\item $V_{\theta_1}\cap V_{\theta_2}\cap V_{\theta_3}\neq \emptyset.$
\end{enumerate}
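Schematically, writing $C_1,\dots,C_m$ for the convex sets $B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_1)}(\widetilde x')$ and $V_\theta$ involved (the labels $C_i$ are ours and used only here), Helly's lemma \ref{hellyiv}, in the form used in this paper, reduces the claim $\Pi(\widetilde p_1)\neq \emptyset$ to the four types of triple intersections above:
$$C_{i}\cap C_{j}\cap C_{k}\neq \emptyset\ \text{ for all } i,j,k \quad \Longrightarrow \quad \bigcap\limits_{i=1,\dots,m}C_i\neq \emptyset.$$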
The proofs of the four cases will be presented later, since it is best to first complete the construction of $\hat \phi$.\\
So suppose we have proved each of the four preceding cases and chosen $\hat \phi(\widetilde p_1)\in \Pi(\widetilde p_1)$. We now have to find the image of a second point $\widetilde p_2\in P\setminus \pi^{-1}(\Sigma)$ in such a way that:
\begin{enumerate}[label=(\roman*)]
\item
$$d_{\widetilde \sigma_2}(\hat\phi(\widetilde p_2),\gamma\cdot \widetilde x')\le d_{\widetilde q_1}(\widetilde p_2,\gamma\cdot \widetilde x)$$
for every zero $\widetilde x$ of $\widetilde q_1$ contained in $P$ and $\gamma\in \pi_1(S_g,x_0)$ such that $\overline{\widetilde p_2(\gamma\cdot \widetilde x)}$ is smooth and does not contain other zeroes of $\widetilde q_1$,
\item
$$d_{\widetilde \sigma_2}(\hat\phi(\widetilde p_2),\gamma\cdot \hat\phi(\widetilde p_1))\le d_{\widetilde q_1}(\widetilde p_2,\gamma\cdot \widetilde p_1)$$
for every $\gamma\in \pi_1(S_g,x_0)$ such that $\overline{\widetilde p_2(\gamma\cdot \widetilde p_1)}$ is smooth and does not contain zeroes of $\widetilde q_1$,
\item
$$d_{\widetilde \sigma_2}(\hat\phi(\widetilde p_2),\theta\cdot \hat\phi(\widetilde p_2))\le d_{\widetilde q_1}(\widetilde p_2,\theta\cdot \widetilde p_2)$$
for every $\theta\in \pi_1(S_g,x_0)$ such that $\overline{\widetilde p_2(\theta\cdot \widetilde p_2)}$ is smooth and does not contain zeroes of $\widetilde q_1$.
\end{enumerate}
As we did for $\widetilde p_1$, we now define the sets $\mathcal{X}(\widetilde p_2)_\Sigma, \mathcal{X}(\widetilde p_2)_{\widetilde p_1}$ and $\Theta(\widetilde p_2)$:
$$ \mathcal{X}(\widetilde p_2)_\Sigma:=\{\widetilde x\in \pi^{-1}(\Sigma)\enskip|\enspace \overline{\widetilde p_2 \widetilde x}\text{ is smooth and does not contain other zeroes of }\widetilde{q_1}\},$$
$$ \mathcal{X}(\widetilde p_2)_{\widetilde p_1}:=\{\gamma\cdot \widetilde p_1\enspace|\enspace\gamma\in \pi_1(S_g,x_0) \text{ and } \overline{\widetilde p_2(\gamma\cdot \widetilde p_1)}\text{ is smooth and does not contain zeroes of }\widetilde{q_1}\},$$
$$\Theta(\widetilde p_2):=\{\theta\in \pi_1(S_g,x_0)\enskip|\enskip \overline{\widetilde p_2(\theta\cdot \widetilde p_2)}\text{ is smooth and does not contain zeroes of }\widetilde{q_1}\}.$$
We define the following intersections:
$$B_\Sigma:=\bigcap\limits_{\widetilde x\in \mathcal{X}(\widetilde p_2)_\Sigma}B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_2)}(\widetilde x'),$$
$$B_{\widetilde p_1}:=\bigcap\limits_{\gamma\cdot \widetilde p_1\in \mathcal{X}(\widetilde p_2)_{\widetilde p_1}}B^2_{d_{\widetilde q_1}(\gamma\cdot\widetilde p_1,\widetilde p_2)}(\gamma\cdot \hat\phi(\widetilde p_1)),$$
$$V_{\widetilde p_2}:=\bigcap\limits_{\theta\in \Theta(\widetilde p_2)}V_\theta.$$
Again, we want to prove
$$\Pi(\widetilde p_2):=B_\Sigma\cap B_{\widetilde p_1}\cap V_{\widetilde p_2}\neq \emptyset$$
in order to pick $\hat\phi(\widetilde p_2)\in \Pi(\widetilde p_2)$. One can consider the same four cases we previously listed for $\Pi(\widetilde p_1)$, noting that this time the closed balls can also be centered at points $\gamma\cdot \hat \phi(\widetilde p_1)$.\\
We now proceed in the same way, defining $\hat\phi$ on $P$ one point at a time. \\
Suppose $\hat \phi$ is already defined on the points $\widetilde p_1,\dots,\widetilde p_n\in P\setminus \pi^{-1}(\Sigma)$ and that we wish to determine its value at $\widetilde p_{n+1}$. In order to do so we define the following sets:
$$ \mathcal{X}(\widetilde p_{n+1})_\Sigma:=\{\widetilde x\in \pi^{-1}(\Sigma)\enskip|\enskip \overline{\widetilde p_{n+1}\widetilde x}\text{ is smooth and does not contain any other zero of } \widetilde{q_1}\},$$
$$\Theta(\widetilde p_{n+1}):=\{\theta\in \pi_1(S_g,x_0)\enskip|\enskip\overline{\widetilde p_{n+1}(\theta\cdot \widetilde p_{n+1})}\text{ is smooth and does not contain zeroes of } \widetilde{q_1}\},$$
$$ \mathcal{X}(\widetilde p_{n+1})_{\widetilde p_i}:=\{\gamma\cdot \widetilde p_i\enskip|\enskip\gamma\in \pi_1(S_g,x_0) \text{ and } \overline{\widetilde p_{n+1}(\gamma\cdot \widetilde p_i)}\text{ is smooth and does not contain zeroes of }\widetilde{q_1}\},$$
for every $i=1,\dots,n$.\\
Again, we want to prove
$$\Pi(\widetilde p_{n+1}):=B_\Sigma\cap\left(\bigcap\limits_{i=1,\dots,n}B_{\widetilde p_{i}}\right)\cap V_{\widetilde p_{n+1}}\neq \emptyset,$$
where the sets $B_\Sigma,B_{\widetilde p_i},V_{\widetilde p_{n+1}}$ are defined as follows:
$$B_\Sigma:=\bigcap\limits_{\widetilde x\in \mathcal{X}(\widetilde p_{n+1})_\Sigma}B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_{n+1})}(\widetilde x'),$$
$$B_{\widetilde p_i}:=\bigcap\limits_{\gamma\cdot \widetilde p_i\in \mathcal{X}(\widetilde p_{n+1})_{\widetilde p_i}}B^2_{d_{\widetilde q_1}(\gamma\cdot\widetilde p_i,\widetilde p_{n+1})}(\gamma\cdot \hat\phi(\widetilde p_i)),$$
$$V_{\widetilde p_{n+1}}:=\bigcap\limits_{\theta\in \Theta(\widetilde p_{n+1})}V_\theta.$$
We then pick $\hat\phi(\widetilde p_{n+1})\in\Pi(\widetilde p_{n+1})$: notice that also in this case only the same four types of intersections appear that we pointed out for $\widetilde p_1$.\\
Having fully explained our method for defining $\hat \phi$ on a countable dense subset of $P$, we can now concentrate on the four types of intersections which appear in the sets $\Pi(\widetilde p_{i})$ (we will treat $\Pi(\widetilde p_{n+1})$; the reasoning is the same for the other sets $\Pi(\widetilde p_{i})$).\\
The following procedure does not change whether the closed balls are centered at zeroes of $\widetilde q_1$ or at points outside $\pi^{-1}(\Sigma)$: in order to lighten the notation, given any point $\widetilde x=\gamma\cdot \widetilde p_i\in \mathcal{X}(\widetilde p_{n+1})_{\widetilde p_i}$, we will denote the corresponding point $\gamma\cdot \hat \phi(\widetilde p_i)$ simply by $\widetilde x'$.\\
From now on we will also denote the set $\mathcal{X}(\widetilde p_{n+1})_\Sigma\cup (\cup_{i=1}^n\mathcal{X}(\widetilde p_{n+1})_{\widetilde p_i})$ simply as $\mathcal X(\widetilde p_{n+1})$.\\
The first case concerns the intersection of three closed balls and is the most important, since it implies the other three cases. Its proof is quite long and involves the two statements about 1-Lipschitz maps between polygons we introduced at the beginning of this section: for these reasons we postpone it to the next section, which is entirely dedicated to it.\\We will thus state the following theorem and take it for granted for now.
\begin{thm}
If conjecture 3.31 holds, then for every $\widetilde x_1,\widetilde x_2,\widetilde x_3\in \mathcal{X}(\widetilde p_{n+1})$ we have $$B^2_{d_{\widetilde q_1}(\widetilde x_1,\widetilde p_{n+1})}(\widetilde x'_1)\cap B^2_{d_{\widetilde q_1}(\widetilde x_2,\widetilde p_{n+1})}(\widetilde x'_2)\cap B^2_{d_{\widetilde q_1}(\widetilde x_3,\widetilde p_{n+1})}(\widetilde x'_3)\neq \emptyset.$$
\end{thm}
It is important to notice that all subsequent results rely on theorem 3.11: the reader should keep in mind that they consequently depend on conjecture 3.31.\\
We state the following corollary, which is a consequence of theorem 3.11, Helly's lemma and some observations we already made.
\begin{cor}\label{corintersection}
Consider any finite number of points $\widetilde y_1,\dots,\widetilde y_n\in \widetilde X_1\setminus \pi^{-1}(\Sigma)$ and $\widetilde y_1',\dots,\widetilde y_n'\in \widetilde X_2\setminus \pi^{-1}(\Sigma)$ such that
$$d_{\widetilde \sigma_2}(\widetilde y_i',\widetilde y_j')\le d_{\widetilde q_1}(\widetilde y_i,\widetilde y_j) \quad \forall i,j=1,\dots,n$$
and
$$d_{\widetilde \sigma_2}(\widetilde y_i',\widetilde z')\le d_{\widetilde q_1}(\widetilde y_i,\widetilde z)$$ for every $\widetilde z\in \pi^{-1}(\Sigma)$ and $i=1,\dots,n$.\\
Then for every finite set of zeroes $\widetilde x_1,\dots,\widetilde x_m\in \pi^{-1}(\Sigma)$ and for every $\widetilde p\in \widetilde X_1$ it results
$$\left(\bigcap\limits_{i=1,\dots,n}B^2_{d_{\widetilde q_1}(\widetilde y_i,\widetilde p)}(\widetilde y_i')\right) \bigcap\left(\bigcap\limits_{i=1,\dots,m}B^2_{d_{\widetilde q_1}(\widetilde x_i,\widetilde p)}(\widetilde x_i')\right) \neq \emptyset.$$
\end{cor}
\begin{proof}
Closed balls of $d_{\widetilde \sigma_2}$ are convex, so by Helly's lemma \ref{hellyiv} it suffices to prove that the intersection of every triple of these closed balls is non-empty.\\
As we have already seen, given a point $\widetilde y_i$, if
$$\overline{\widetilde y_i\widetilde p}=\overline{\widetilde p\widetilde z}*\widetilde \tau_1^{q_1}*\dots *\widetilde \tau_r^{q_1}*\overline{\widetilde w\widetilde y_i}$$
with $\widetilde w,\widetilde z\in \pi^{-1}(\Sigma)$, the $\widetilde \tau_i^{q_1}$ saddle connections and $\overline{\widetilde p\widetilde z}, \overline{\widetilde w\widetilde y_i}$ smooth, then one can replace the ball $B^2_{d_{\widetilde q_1}(\widetilde y_i,\widetilde p)}(\widetilde y_i')$ in the intersection with the ball $B^2_{d_{\widetilde q_1}(\widetilde z,\widetilde p)}(\widetilde z')$. The same holds for the points $\widetilde x_i$.\\
The result then follows directly from theorem 3.11.
\end{proof}
We now focus on the remaining three cases. To do so we first need to characterize closed geodesics and flat cylinders of a semi-translation surface $(X,q)$. A proof of the following lemma can be found in \cite{St}.
\begin{lem}
Let $\theta$ be a simple closed geodesic for $|q|$ on $X$. Then $\theta$ is a cylinder curve of a flat cylinder $C$ of $(X,q)$. This means that $C$ is foliated by simple closed geodesics, all parallel to $\theta$ and of the same length. The boundary of $C$ consists of two components, each a union of saddle connections of $q$ parallel to $\theta$, and the length of each component equals the length of $\theta$.
\end{lem}
\begin{lem}
Consider any $\theta\in \Theta(\widetilde p_{n+1})$ and let $\widetilde C$ be the lifting to $\widetilde X_1$ of the flat cylinder of $(X_1,q_1)$ corresponding to $\theta$.\\
Let $\widetilde y$ be any point of $\partial \widetilde C$ and let $\widetilde z_1,\widetilde z_2$ be the two zeroes on $\partial \widetilde C$ such that $\overline{\widetilde z_1\widetilde z_2}$ is a saddle connection containing $\widetilde y$. Then
$$B^2_{d_{\widetilde q_1}(\widetilde y,\widetilde z_1)}(\widetilde z_1')\cap B^2_{d_{\widetilde q_1}(\widetilde y,\widetilde z_2)}(\widetilde z_2')\subset V_\theta.$$
\end{lem}
\begin{proof}
Let $\widetilde \tau_1^{q_1},\dots,\widetilde \tau_k^{q_1}$ be the saddle connections such that
$$\overline{\widetilde y(\theta \cdot \widetilde y)}=\overline{\widetilde y\widetilde z_1}*\widetilde \tau_1^{q_1}*\dots *\widetilde \tau_k^{q_1}*\overline{(\theta\cdot \widetilde z_2)(\theta\cdot\widetilde y)}.$$
Then for every point $\widetilde y'\in B^2_{d_{\widetilde q_1}(\widetilde y,\widetilde z_1)}(\widetilde z_1')\cap B^2_{d_{\widetilde q_1}(\widetilde y,\widetilde z_2)}(\widetilde z_2')$ we have
$$d_{\widetilde \sigma_2}(\widetilde y',\theta\cdot \widetilde y')\le d_{\widetilde \sigma_2}(\widetilde y',\widetilde z_1')+\sum\limits_{i=1,\dots,k}\hat l_{\widetilde \sigma_2}(\widetilde\tau_i^{q_1})+ d_{\widetilde \sigma_2}(\theta \cdot \widetilde y',\theta\cdot \widetilde z_2')= $$$$= d_{\widetilde \sigma_2}(\widetilde y',\widetilde z_1')+\sum\limits_{i=1,\dots,k}\hat l_{\widetilde \sigma_2}(\widetilde\tau_i^{q_1})+ d_{\widetilde \sigma_2}(\widetilde y', \widetilde z_2')\le d_{\widetilde q_1}(\widetilde y,\widetilde z_1)+\sum\limits_{i=1,\dots,k} l_{\widetilde q_1}(\widetilde\tau_i^{q_1})+ d_{\widetilde q_1}(\widetilde y, \widetilde z_2)=$$$$=d_{\widetilde q_1}(\widetilde y,\widetilde z_1)+\sum\limits_{i=1,\dots,k} l_{\widetilde q_1}(\widetilde\tau_i^{q_1})+ d_{\widetilde q_1}(\theta\cdot\widetilde y, \theta\cdot \widetilde z_2)=d_{\widetilde q_1}(\widetilde y,\theta\cdot \widetilde y)$$
and consequently $B^2_{d_{\widetilde q_1}(\widetilde y,\widetilde z_1)}(\widetilde z_1')\cap B^2_{d_{\widetilde q_1}(\widetilde y,\widetilde z_2)}(\widetilde z_2')\subset V_\theta$.
\end{proof}
We are now ready to prove the case of the second type of intersections.
\begin{prop}
For every $\theta\in \Theta(\widetilde p_{n+1})$ and $\widetilde x_1,\widetilde x_2\in \mathcal{X}(\widetilde p_{n+1})$ we have
$$V_\theta\cap B^2_{d_{\widetilde q_1}(\widetilde x_1,\widetilde p_{n+1})}(\widetilde x_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x_2,\widetilde p_{n+1})}(\widetilde x_2')\neq \emptyset.$$
\end{prop}
\begin{proof}
Let $\widetilde C$ be the lifting to $\widetilde X_1$ of the flat cylinder corresponding to $\theta$.\\
We will first consider the case $\widetilde x_1\not\in \widetilde C$ and $\widetilde x_2\not\in \widetilde C$, since it is the most complicated one.\\
We define the following points $\widetilde z_1,\widetilde z_2\in \widetilde X_1$:
$$\widetilde z_1:=\overline{\widetilde p_{n+1}\widetilde x_1}\cap \partial \widetilde C,\quad \quad \widetilde z_2:=\overline {\widetilde p_{n+1}\widetilde x_2}\cap \partial \widetilde C.$$
Consider the following two cases:
\begin{itemize}
\item $\overline{\widetilde x_1\widetilde x_2}$ does not traverse $\widetilde C$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.45]{poli8bfeb}
\caption{The case $\overline{\widetilde x_1\widetilde x_2}$ does not traverse $\widetilde C$}
\end{figure}
There is a point $\widetilde z\in \overline{\widetilde z_1\widetilde z_2}$ (possibly equal to $\widetilde z_1$ or $\widetilde z_2$) such that $\widetilde z\in \partial \widetilde C$ and
$$d_{\widetilde q_1}(\widetilde z,\widetilde z_i)\le d_{\widetilde q_1}(\widetilde z_i,\widetilde p_{n+1}),\quad i=1,2$$
and consequently, since $\widetilde z_i\in \overline{\widetilde p_{n+1}\widetilde x_i}$,
$$d_{\widetilde q_1}(\widetilde z,\widetilde x_i)\le d_{\widetilde q_1}(\widetilde z,\widetilde z_i)+d_{\widetilde q_1}(\widetilde z_i,\widetilde x_i)\le d_{\widetilde q_1}(\widetilde z_i,\widetilde p_{n+1})+d_{\widetilde q_1}(\widetilde z_i,\widetilde x_i)= d_{\widetilde q_1}(\widetilde x_i,\widetilde p_{n+1}).$$
Let $\widetilde v_1$ and $\widetilde w_1$ be the two zeroes on $\partial \widetilde C$ such that $\overline{\widetilde v_1\widetilde w_1}$ is a saddle connection containing $\widetilde z$.\\
From corollary \ref{corintersection} it follows
$$ \Lambda:=B^2_{d_{\widetilde q_1}(\widetilde v_1,\widetilde z)}(\widetilde v_1')\cap B^2_{d_{\widetilde q_1}(\widetilde w_1,\widetilde z)}(\widetilde w_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x_1,\widetilde z)}(\widetilde x_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x_2,\widetilde z)}(\widetilde x_2')\neq \emptyset.$$
The inequality $d_{\widetilde q_1}(\widetilde z,\widetilde x_i)\le d_{\widetilde q_1}(\widetilde x_i,\widetilde p_{n+1})$ guarantees
$$\Lambda\subset B^2_{d_{\widetilde q_1}(\widetilde v_1,\widetilde z)}(\widetilde v_1')\cap B^2_{d_{\widetilde q_1}(\widetilde w_1,\widetilde z)}(\widetilde w_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x_1,\widetilde p_{n+1})}(\widetilde x_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x_2,\widetilde p_{n+1})}(\widetilde x_2')$$
and applying the preceding lemma we finally obtain
$$\Lambda\subset V_\theta\cap B^2_{d_{\widetilde q_1}(\widetilde x_1,\widetilde p_{n+1})}(\widetilde x_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x_2,\widetilde p_{n+1})}(\widetilde x_2').$$
\item $\overline{\widetilde x_1\widetilde x_2}$ traverses $\widetilde C$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.45]{poli8feb}
\caption{The case $\overline{\widetilde x_1\widetilde x_2}$ traverses $\widetilde C$.}
\end{figure}
There is a point $\widetilde z\in \overline{\widetilde z_1\widetilde z_2}$ such that $d_{\widetilde q_1}(\widetilde z,\widetilde z_i)\le d_{\widetilde q_1}(\widetilde p_{n+1},\widetilde z_i)$, $i=1,2$.\\
For $i=1,2$, let $\widetilde v_i$ and $\widetilde w_i$ be the two zeroes on $\partial \widetilde C$ such that $\overline{\widetilde v_i\widetilde w_i}$ is a saddle connection and $\widetilde z_i\in \overline{\widetilde v_i\widetilde w_i}$.\\
Denote by $\mathcal{X}(\widetilde z_1)$ the set of the zeroes of $\widetilde q_1$ joined to $\widetilde z_1$ by a smooth geodesic of $|\widetilde q_1|$: clearly $\widetilde v_1,\widetilde w_1\in \mathcal{X}(\widetilde z_1)$.\\
Corollary \ref{corintersection} and the previous lemma guarantee the existence of the following points $\widetilde z_i'\in \widetilde X_2$:
$$\widetilde z_1'\in \left( \bigcap\limits_{\widetilde x\in \mathcal{X}(\widetilde z_1)} B^2_{d_{\widetilde q_1}(\widetilde z_1,\widetilde x)}(\widetilde x')\right)\cap B^2_{d_{\widetilde q_1}(\widetilde z_1,\widetilde x_1)}(\widetilde x_1')\cap B^2_{d_{\widetilde q_1}(\widetilde z_1,\widetilde x_2)}(\widetilde x_2')\subset $$ $$\subset B^2_{d_{\widetilde q_1}(\widetilde z_1,\widetilde x_1)}(\widetilde x_1')\cap B^2_{d_{\widetilde q_1}(\widetilde z_1,\widetilde x_2)}(\widetilde x_2')\cap V_\theta,$$
$$\widetilde z_2'\in B^2_{d_{\widetilde q_1}(\widetilde z_2,\widetilde v_2)}(\widetilde v_2')\cap B^2_{d_{\widetilde q_1}(\widetilde z_2,\widetilde w_2)}(\widetilde w_2')\cap B^2_{d_{\widetilde q_1}(\widetilde z_2,\widetilde z_1)}(\widetilde z_1')\cap B^2_{d_{\widetilde q_1}(\widetilde z_2,\widetilde x_2)}(\widetilde x_2')\subset$$ $$\subset B^2_{d_{\widetilde q_1}(\widetilde z_2,\widetilde z_1)}(\widetilde z_1')\cap B^2_{d_{\widetilde q_1}(\widetilde z_2,\widetilde x_2)}(\widetilde x_2')\cap V_\theta.$$
The set $V_\theta$ is convex, so $\overline{\widetilde z_1'\widetilde z_2'}\subset V_\theta$, and since $d_{\widetilde \sigma_2}(\widetilde z_1',\widetilde z_2')\le d_{\widetilde q_1}(\widetilde z_1,\widetilde z_2)$, we can choose $\widetilde z'\in \overline{\widetilde z_1'\widetilde z_2'}$ such that $d_{\widetilde \sigma_2}(\widetilde z',\widetilde z_i')\le d_{\widetilde q_1}(\widetilde z,\widetilde z_i)$, $i=1,2$. \\
In this way one finally gets the following inequalities:
$$d_{\widetilde \sigma_2}(\widetilde z',\widetilde x_i')\le d_{\widetilde \sigma_2}(\widetilde z',\widetilde z_i')+d_{\widetilde \sigma_2}(\widetilde z_i',\widetilde x_i')\le $$ $$\le d_{\widetilde q_1}(\widetilde z,\widetilde z_i)+d_{\widetilde q_1}(\widetilde z_i,\widetilde x_i)\le d_{\widetilde q_1}(\widetilde p_{n+1},\widetilde z_i)+d_{\widetilde q_1}(\widetilde z_i,\widetilde x_i)=d_{\widetilde q_1}(\widetilde p_{n+1},\widetilde x_i).$$
\end{itemize}
The case $\widetilde x_1\in \widetilde C$ and $\widetilde x_2\not\in \widetilde C$ can be solved in the same way. Define as before $\widetilde z_2:=\overline{\widetilde p_{n+1}\widetilde x_2}\cap \partial \widetilde C$; then one just has to notice that there is always a point $\widetilde z\in \overline{\widetilde z_2\widetilde x_1}$ such that $d_{\widetilde q_1}(\widetilde z,\widetilde x_1)\le d_{\widetilde q_1}(\widetilde p_{n+1},\widetilde x_1)$ and $d_{\widetilde q_1}(\widetilde z,\widetilde z_2)\le d_{\widetilde q_1}(\widetilde p_{n+1},\widetilde z_2)$. \\
Finally, if $\widetilde x_1\in \widetilde C$ and $\widetilde x_2\in \widetilde C$, notice that $\overline{\widetilde{x}_1'\widetilde{x}_2'}\subset V_\theta$. Since $d_{\widetilde \sigma_2}(\widetilde x_1',\widetilde x_2')\le d_{\widetilde q_1}(\widetilde x_1,\widetilde x_2)$, there is a point $\widetilde p_{n+1}'\in \overline{\widetilde{x}_1'\widetilde{x}_2'}$ such that $$d_{\widetilde \sigma_2}(\widetilde p_{n+1}',\widetilde x_i')\le d_{\widetilde q_1}(\widetilde p_{n+1},\widetilde x_i),\quad i=1,2.$$
\end{proof}
\begin{cor}\label{cor529}
For every $\theta\in \Theta(\widetilde p_{n+1})$ and $\widetilde x_i\in \mathcal{X}(\widetilde p_{n+1})$, $i=1,\dots,n$, we have:
$$V_\theta\cap \bigcap\limits_{i=1,\dots,n}B^2_{d_{\widetilde q_1}(\widetilde x_i,\widetilde p_{n+1})}(\widetilde x_i')\neq \emptyset.$$
\end{cor}
\begin{proof}
This is a consequence of the previous results and Helly's lemma \ref{hellyiv} for uniquely geodesic spaces.
\end{proof}
Finally, we can prove that the intersection is non-empty in the last two cases as well.
\begin{prop}
For every $\theta_1,\theta_2\in \Theta(\widetilde p_{n+1})$ and $\widetilde x\in \mathcal{X}(\widetilde p_{n+1})$ we have
$$V_{\theta_1}\cap V_{\theta_2}\cap B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_{n+1})}(\widetilde x')\neq \emptyset.$$
\end{prop}
\begin{proof}
Let $\widetilde C_i$ be the lifting to $\widetilde X_1$ of the flat cylinder corresponding to $\theta_i$, $i=1,2$. \\
We first consider the case $\widetilde x\not\in \widetilde C_1\cup \widetilde C_2$, since it is the most complicated one.\\
We choose the point $\widetilde z$:
$$\widetilde z:=\overline{\widetilde x\widetilde p_{n+1}}\cap \partial(\widetilde C_1\cap \widetilde C_2).$$
Notice that $d_{\widetilde q_1}(\widetilde p_{n+1},\widetilde x)\ge d_{\widetilde q_1}(\widetilde z,\widetilde x)$, and suppose $\widetilde z\in \partial \widetilde C_1$.\\
Let $\widetilde v_1$ and $\widetilde w_1$ be the two zeroes of $\widetilde q_1$ such that $\overline{\widetilde v_1\widetilde w_1}$ is the saddle connection of $\partial \widetilde C_1$ containing $\widetilde z$.\\
Then one gets the following inclusion of sets:
$$B^2_{d_{\widetilde q_1}(\widetilde z,\widetilde v_1)}(\widetilde v_1')\cap B^2_{d_{\widetilde q_1}(\widetilde z,\widetilde w_1)}(\widetilde w_1')\cap B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde z)}(\widetilde x')\cap V_{\theta_2}\subset V_{\theta_1}\cap B^2_{d_{\widetilde q_1}(\widetilde x,\widetilde p_{n+1})}(\widetilde x')\cap V_{\theta_2}$$
and we conclude by applying corollary \ref{cor529}.\\
The case $\widetilde x\in \widetilde C_1$ and $\widetilde x\not\in \widetilde C_2$ can be solved in the same way, choosing $$\widetilde z:=\overline{\widetilde x\widetilde p_{n+1}}\cap \partial\widetilde C_1.$$
Finally, the case $\widetilde x\in \widetilde C_1\cap \widetilde C_2$ is trivial since $\widetilde x'\in V_{\theta_1}\cap V_{\theta_2}$.
\end{proof}
\begin{prop}
For every $\theta_1,\theta_2,\theta_3\in \Theta(\widetilde p_{n+1})$ we have
$$V_{\theta_1}\cap V_{\theta_2}\cap V_{\theta_3}\neq \emptyset.$$
\end{prop}
\begin{proof}
As before, denote by $\widetilde C_i$ the lifting to $\widetilde X_1$ of the flat cylinder corresponding to $\theta_i$, $i=1,2,3$. Up to renumbering the indices, we can suppose there is a point $\widetilde z\in \partial(\widetilde C_1\cap \widetilde C_2)\cap \widetilde C_3$. \\
Then, for $i=1,2$ let $\widetilde v_i,\widetilde w_i$ be the zeroes of $\widetilde q_1$ on the border of $\widetilde C_i$ such that $\overline{\widetilde v_i\widetilde w_i}$ is a saddle connection and $\widetilde z\in \overline{\widetilde v_i\widetilde w_i}$.\\
By lemma 3.14, it suffices to prove
$$B^2_{d_{\widetilde q_1}(\widetilde z,\widetilde v_1)}(\widetilde v_1')\cap B^2_{d_{\widetilde q_1}(\widetilde z,\widetilde w_1)}(\widetilde w_1')\cap B^2_{d_{\widetilde q_1}(\widetilde z,\widetilde v_2)}(\widetilde v_2')\cap B^2_{d_{\widetilde q_1}(\widetilde z,\widetilde w_2)}(\widetilde w_2')\cap V_{\theta_3}\neq \emptyset$$
which is granted by corollary \ref{cor529}.
\end{proof}
This completes the proof of the existence of the desired function $\phi$: if conjecture 3.31 holds, we have described how to obtain the equality $L_F^a(q_1,q_2)=K_F^a(q_1,q_2)$.
\bigskip
\subsection{Proof of theorem 3.11}
The first step towards the proof of theorem 3.11 is the characterization of geodesic triangles in $(\widetilde X,d_{\widetilde q})$ (where, as before, $(X,q)$ is a semi-translation surface and $\pi:(\widetilde X,|\widetilde q|)\rightarrow (X,|q|)$ is a metric universal cover). We will use the following lemma, whose proof can be found in \cite{St}, theorem 16.1.
\begin{lem}\label{streb}
Let $\widetilde \gamma:[0,1]\rightarrow \widetilde X$ be a locally minimizing geodesic for $d_{\widetilde q}$. Then
$$d_{\widetilde q}(\widetilde \gamma(0),\widetilde \gamma(1))=l_{\widetilde q}(\widetilde \gamma)$$
that is, $\widetilde \gamma$ is also globally minimizing. Furthermore, $\widetilde \gamma$ is the unique geodesic with these properties.
\end{lem}
Given any triple of points $\widetilde x_1,\widetilde x_2, \widetilde x_3\in \widetilde X$, denote by $T$ the corresponding geodesic triangle for $d_{\widetilde q}$, that is, the subset of $\widetilde X$ formed by the three geodesics $\overline{\widetilde x_i\widetilde x_j}$.\\
Since $\widetilde X\simeq \mathbb{H}$, it makes sense to define the internal part ${\mathop \Delta\limits^ \circ}$ of $T$: we call \textit{filled geodesic triangle} the set $T\cup {\mathop \Delta\limits^ \circ}$ and we denote it by $\Delta$.\\
Given any planar polygon $P$, we denote by $d_P$ its \textit{intrinsic Euclidean metric}: for every $x_1,x_2\in P$, we define $d_P(x_1,x_2)$ as the infimum of the lengths, computed with respect to the Euclidean metric, of all paths from $x_1$ to $x_2$ entirely contained in $P$. \\
Every polygon used in the following proofs will be endowed with such intrinsic Euclidean metric.
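In symbols, writing $l_{eucl}(\sigma)$ for the Euclidean length of a rectifiable path $\sigma$ (a notation we use only here), the intrinsic Euclidean metric reads:
$$d_P(x_1,x_2):=\inf\left\{l_{eucl}(\sigma)\ :\ \sigma:[0,1]\rightarrow P \text{ rectifiable},\ \sigma(0)=x_1,\ \sigma(1)=x_2\right\}.$$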
\begin{prop}
Filled geodesic triangles of $d_{\widetilde q}$ are convex and do not contain zeroes of $\widetilde q$ in their internal part, which is connected.\\
Given a triple of points $\widetilde x_1,\widetilde x_2,\widetilde x_3\in \widetilde X$, the corresponding filled geodesic triangle $\Delta$ can have one dimensional components. For every $i=1,2,3$ we define $\widetilde v_i$ as the point of $\overline{\widetilde x_i\widetilde x_j}\cap \overline{\widetilde x_i\widetilde x_k}$, $i\neq j\neq k$, which is at maximum distance from $\widetilde x_i$.\\
If ${\mathop \Delta\limits^ \circ}$ is not empty, then its border is exactly $\overline{\widetilde v_1\widetilde v_2}\cup\overline{\widetilde v_2\widetilde v_3}\cup\overline{\widetilde v_1\widetilde v_3}$ and for every $i=1,2,3$, if $\widetilde x_i\neq \widetilde v_i$, then $\overline{\widetilde x_i\widetilde v_i}$ is the only one dimensional component of $\Delta$ starting from $\widetilde x_i$.\\
The internal angles of $\overline{\mathop \Delta\limits^ \circ}$ in the three points $\widetilde v_i$ are strictly convex, while all other internal angles are concave and less than $2\pi$.\\
Finally, every filled geodesic triangle for $d_{\widetilde q}$ is isometric to a planar polygon, which could possibly be degenerate (one dimensional) or have at most three one dimensional components.
\end{prop}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{poli5febb}
\caption{An example of a filled geodesic triangle $\Delta$.}
\end{figure}
\begin{proof}
By lemma \ref{streb}, if $\overline{\widetilde x_i \widetilde x_j}$ and $\overline{\widetilde x_i\widetilde x_k}$ intersect in a point $\widetilde p\neq \widetilde x_i$, then they must coincide over all $\overline{\widetilde p\widetilde x_i}$. It follows that ${\mathop \Delta\limits^ \circ}$ is connected and its border is $\overline{\widetilde v_1\widetilde v_2}\cup\overline{\widetilde v_1\widetilde v_3}\cup\overline{\widetilde v_2\widetilde v_3}$.\\
Suppose ${\mathop \Delta\limits^ \circ}\neq \emptyset$ and denote by $\alpha_1$ the internal angle of $\overline{{\mathop \Delta\limits^ \circ}}$ in $\widetilde v_1$: we prove $\alpha_1< \pi$. \\
Let $\alpha_{12}$ be the angle at $\widetilde v_1$ determined by $\overline{\widetilde x_1\widetilde v_1}$ and $\overline{\widetilde v_1\widetilde v_2}$ completely outside $\Delta$ and let $\alpha_{13}$ be the angle at $\widetilde v_1$ determined by $\overline{\widetilde x_1\widetilde v_1}$ and $\overline{\widetilde v_1\widetilde v_3}$ completely outside $\Delta$. Clearly $\alpha_{12}\ge \pi$ and $\alpha_{13}\ge \pi$: if $\alpha_1\ge \pi$ then lemma \ref{streb} would imply $\widetilde v_1\in \overline{\widetilde v_2\widetilde v_3}$ and consequently $\mathop \Delta\limits^ \circ=\emptyset$. In the same way one proves that the internal angles of $\overline{{\mathop \Delta\limits^ \circ}}$ at $\widetilde v_2,\widetilde v_3$ must be strictly convex.\\
If $\widetilde v$ is a zero of $\widetilde q$ on the border of $\mathop \Delta\limits^ \circ$, $\widetilde v\neq \widetilde v_1,\widetilde v_2,\widetilde v_3$, the internal angle $\beta_{\widetilde v}$ of $\overline{{\mathop \Delta\limits^ \circ}}$ at $\widetilde v$ must be concave, and we now also prove $\beta_{\widetilde v}<2\pi$.\\
Let $\widetilde v\in \overline{\widetilde v_1\widetilde v_2}$, and suppose by contradiction $\beta_{\widetilde v}\ge 2\pi$. Let $\tau_i$, $i=1,2$, be the angle at $\widetilde v$ determined by $\overline{\widetilde v_3\widetilde v}$ and $\overline{\widetilde v\widetilde v_i}$ inside $\Delta$. Since $\beta_{\widetilde v}\ge 2\pi$, it must follow that $\tau_1\ge \pi$ or $\tau_2\ge \pi$. Suppose $\tau_1\ge \pi$; then $\overline{\widetilde v_3\widetilde v_1}$ would be a concatenation of $\overline{\widetilde v_1\widetilde v}$ and $\overline{\widetilde v\widetilde v_3}$, implying $\widetilde v=\widetilde v_1$. This last equality contradicts the previous assumption $\widetilde v\neq \widetilde v_1,\widetilde v_2,\widetilde v_3$.\\
Finally, suppose by contradiction that one or more zeroes $\widetilde z_j$ of $\widetilde q$ are contained in ${\mathop \Delta\limits^ \circ}$.
Denote by $\alpha_i$ the internal angles of $\overline{{\mathop \Delta\limits^ \circ}}$ at the points $\widetilde v_i$, by $\beta_k$ the internal angles at the zeroes $\widetilde w_k$ of $\widetilde q$ lying on the border of $\mathop \Delta\limits^ \circ$, and by $\theta_j$ the total angle of $\widetilde q$ at the interior zero $\widetilde z_j$.\\
Applying the Gauss-Bonnet formula to $\overline{{\mathop \Delta\limits^ \circ}}$ one gets:
$$\sum_k(\pi-\beta_k)+(\pi-\alpha_1)+(\pi-\alpha_2)+(\pi-\alpha_{3})=2\pi+\sum_j(\theta_j-2\pi).$$
From what we have proved it follows
$$\sum_k(\pi-\beta_k)\le 0,\quad (\pi-\alpha_1)+(\pi-\alpha_2)+(\pi-\alpha_{3})<3 \pi$$
and consequently we now get
$$2\pi+\sum_j(\theta_j-2\pi)<3\pi.$$
The total angle at a zero $\widetilde z_j\in \mathop \Delta\limits^ \circ$ must be greater than or equal to $3\pi$, but this contradicts the last inequality.\\
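Explicitly, the total angle of $\widetilde q$ at any zero is at least $3\pi$, so $\theta_j-2\pi\ge \pi$ for every $j$ and
$$2\pi+\sum_j(\theta_j-2\pi)\ge 2\pi+\pi=3\pi,$$
which is incompatible with the strict inequality obtained above.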
In order to prove that $\Delta$ is isometric to a planar polygon endowed with its intrinsic Euclidean metric it is clearly sufficient to prove that $ \overline{{\mathop \Delta\limits^ \circ}}$ is isometric to a planar polygon.\\
Let $Dev:(\widetilde X,|\widetilde q|)\rightarrow \mathbb{R}^2$ be the developing map (for a precise definition see for example \cite{Tr2}). Notice that, if $Dev$ is injective at a point $\widetilde v$ on the border of $ \overline{{\mathop \Delta\limits^ \circ}}$, then the internal angle of $\overline{{\mathop \Delta\limits^ \circ}}$ at $\widetilde v$ coincides with the internal angle of $Dev(\overline{{\mathop \Delta\limits^ \circ}})$ at $Dev(\widetilde v)$.\\
We will prove that $Dev:\overline{{\mathop \Delta\limits^ \circ}}\rightarrow \mathbb{R}^2$ is injective, or equivalently that $Dev(\overline{{\mathop \Delta\limits^ \circ}})$ is a simple polygon (not self-intersecting).\\
Suppose by contradiction that $Dev$ is not injective. We distinguish two cases:
\begin{enumerate}
\item $Dev$ is not injective on any of the points $\widetilde v_i$. Then denote by $P_1$ the simple polygon identified by the external border of $Dev(\overline{{\mathop \Delta\limits^ \circ}})$. Notice that the internal angles of $P_1$ can correspond to internal angles of $\overline{{\mathop \Delta\limits^ \circ}}$ or can arise from overlaps at points where $Dev$ fails to be injective. Internal angles of the latter kind must be strictly concave, and consequently convex internal angles of $P_1$ must correspond to convex internal angles of $\overline{{\mathop \Delta\limits^ \circ}}$. Since $P_1$ is simple, it must have at least three strictly convex internal angles. It would follow that $\overline{{\mathop \Delta\limits^ \circ}}$ must have at least six strictly convex internal angles: the three angles $\alpha_i$ plus the angles which correspond to the three strictly convex internal angles of $P_1$. This clearly contradicts the hypothesis.
\item $Dev$ is injective on $\widetilde v_1$. Then there is a polygon $P_2\subset Dev(\overline{{\mathop \Delta\limits^ \circ}})$ which is maximal with respect to inclusion in the set of polygons $\{Q\}$ such that
\begin{itemize}
\item $Q\subset Dev(\overline{{\mathop \Delta\limits^ \circ}})$,
\item $Dev(\widetilde v_1)$ is a vertex of $Q$,
\item $Dev$ is injective on $Dev^{-1}(Q)$.
\end{itemize}
Let $P_0$ be the simple polygon identified by the external border of $Dev(\overline{{\mathop \Delta\limits^ \circ}})$ and define $P_1:=\overline{P_0\setminus P_2}$ (see figure 4 for an example). As before, convex internal angles of $P_1$ must correspond to convex internal angles of $\overline{{\mathop \Delta\limits^ \circ}}$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{poli3aprc1b}
\caption{ On the left there is an example of $Dev(\overline{{\mathop \Delta\limits^ \circ}})$ we want to exclude. On the right there is the corresponding polygon $P_1$.}
\end{figure}
It would follow that $\overline{{\mathop \Delta\limits^ \circ}}$ must have at least four strictly convex internal angles: $\alpha_1$ plus the angles which correspond to the three strictly convex internal angles of $P_1$. This fact clearly contradicts the hypothesis.
\end{enumerate}
Finally, convexity of $\Delta$ follows from the fact that, given any pair $\widetilde x,\widetilde y\in \Delta$, the geodesic for the intrinsic Euclidean metric connecting them is also a locally minimizing geodesic for $d_{\widetilde q}$ and consequently also globally minimizing.
\end{proof}
We now go back to consider the fundamental domain $P$ defined in the preceding section.\\
Given the point $\widetilde p_{n+1}\in P$ and the three points $\widetilde x_1,\widetilde x_2,\widetilde x_3\in \mathcal{X}(\widetilde p_n)$ corresponding to the centers of the closed balls, we consider the filled geodesic triangle $\Delta$ for $(\widetilde X_1,d_{\widetilde q_1})$ with vertices $\widetilde x_1,\widetilde x_2,\widetilde x_3$. Following the characterization of the previous proposition, we distinguish two cases:
\begin{enumerate}
\item $\widetilde p_{n+1}\in \Delta$: then, since the three geodesics $\overline{\widetilde p_{n+1}\widetilde x_i}$ are smooth and do not contain other zeroes of $\widetilde q_1$, it follows that $\Delta$ cannot have one dimensional components.
\item $\widetilde p_{n+1}\not \in \Delta$: then $\Delta$ can have one dimensional components and can even be a degenerate polygon (one dimensional).
\end{enumerate}
Denote by $\Delta'$ the filled geodesic triangle for $(\widetilde X_2,d_{\widetilde \sigma_2})$ with vertices $\widetilde x_1',\widetilde x_2',\widetilde x_3'$. \\
Again, we distinguish three cases:
\begin{enumerate}[label=(\roman*)]
\item $\Delta'$ is not one dimensional, but can have at most three one dimensional components,
\item $\Delta'$ is one dimensional and it is not possible to renumber the vertices so that $\widetilde x_3'\in \overline{\widetilde x_1'\widetilde x_2'}$,
\item $\Delta'$ is one dimensional and it is possible to renumber the vertices so that $\widetilde x_3'\in \overline{\widetilde x_1'\widetilde x_2'}$.
\end{enumerate}
Combining them, we have a total of six cases to take care of.\\
In cases (1,i),(1,ii),(1,iii) our goal is to find a point $\widetilde p_{n+1}'\in \Delta'$ such that
$$d_{\widetilde q_1}(\widetilde x_i,\widetilde p_{n+1})\ge d_{\widetilde \sigma_2}(\widetilde x_i',\widetilde p_{n+1}') \text{ for } i=1,2,3.$$
In the remaining cases (2,i),(2,ii),(2,iii) we will use the orthogonal projection onto convex sets in $CAT(0)$ spaces:
$$pr:(\widetilde X_1,d_{\widetilde q_1})\rightarrow \Delta,$$
where the image $pr(\widetilde x)$ of every point $\widetilde x\in \widetilde X_1$ is defined as the unique point such that
$$d_{\widetilde q_1}(\widetilde x,pr(\widetilde x))=\inf \limits_{\widetilde y\in \Delta}d_{\widetilde q_1}(\widetilde x,\widetilde y).$$
The projection $pr$ does not increase distances (for a proof and a list of other properties of $pr$ see \cite{BH}, proposition 2.4, page 176); in particular,
$$d_{\widetilde q_1}(\widetilde x_i,\widetilde p_{n+1})\ge d_{\widetilde q_1}(\widetilde x_i, pr(\widetilde p_{n+1})) \text{ for } i=1,2,3.$$
Then, we will look for a point $\widetilde p_{n+1}'\in \Delta'$ such that
$$d_{\widetilde q_1}(\widetilde x_i, pr(\widetilde p_{n+1}))\ge d_{\widetilde \sigma_2}(\widetilde x_i',\widetilde p_{n+1}') \text{ for } i=1,2,3.$$
We chose to compare distances with $pr(\widetilde p_{n+1})$ instead of $\widetilde p_{n+1}$ because in the following procedures it will be crucial to always consider points inside $\Delta$.\\
Cases (1,ii),(1,iii),(2,ii) and (2,iii) are easy to handle. We will prove only case (1,ii), since the others are almost identical. \\
If $\Delta'$ is one dimensional, then it always contains a vertex $\widetilde v'$ such that $$\widetilde v'\in \overline{\widetilde x_1'\widetilde x_2'}\cap \overline{\widetilde x_1'\widetilde x_3'}\cap \overline{\widetilde x_2'\widetilde x_3'}.$$
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{poli18mag}
\caption{An example of vertex $\widetilde v'\in \Delta'$.}
\end{figure}
If, for every index $i=1,2,3$, we have $d_{\widetilde \sigma_2}(\widetilde x_i',\widetilde v')\le d_{\widetilde q_1}(\widetilde x_i,\widetilde p_{n+1})$, then we can choose $\widetilde p_{n+1}'=\widetilde v'$.\\
If instead, up to renumbering the indices, $d_{\widetilde \sigma_2}(\widetilde x_1',\widetilde v')>d_{\widetilde q_1}(\widetilde x_1,\widetilde p_{n+1})$, we choose $\widetilde p_{n+1}'$ to be the point on $\overline{\widetilde x_1'\widetilde v'}$ such that $d_{\widetilde q_1}(\widetilde x_1,\widetilde p_{n+1})=d_{\widetilde \sigma_2}(\widetilde x_1',\widetilde p_{n+1}')$.\\
Then it will follow $d_{\widetilde \sigma_2}(\widetilde x_i',\widetilde p_{n+1}')\le d_{\widetilde q_1}(\widetilde x_i,\widetilde p_{n+1})$ for $i=2,3$, since
$$d_{\widetilde \sigma_2}(\widetilde x_i',\widetilde p_{n+1}')=d_{\widetilde \sigma_2}(\widetilde x_1',\widetilde x_i')-d_{\widetilde q_1}(\widetilde x_1,\widetilde p_{n+1})\le d_{\widetilde q_1}(\widetilde x_1,\widetilde x_i)-d_{\widetilde q_1}(\widetilde x_1,\widetilde p_{n+1})\le d_{\widetilde q_1}(\widetilde p_{n+1},\widetilde x_i).$$
In case (2,i) it will always be possible to assume that $\Delta$ does not have one dimensional components, since
\begin{itemize}
\item if $pr(\widetilde p_{n+1})$ is on a one dimensional component $\overline{\widetilde{x}_1\widetilde v_1}$ of $\Delta$ then it suffices to choose $\widetilde p_{n+1}'\in \overline{\widetilde x_1'\widetilde v_1'}$ such that $$d_{\widetilde \sigma_2}(\widetilde p_{n+1}',\widetilde x_1')\le d_{\widetilde q_1}(pr(\widetilde p_{n+1}),\widetilde x_1) \text{ and } d_{\widetilde \sigma_2}(\widetilde p_{n+1}',\widetilde v_1')\le d_{\widetilde q_1}(pr(\widetilde p_{n+1}),\widetilde v_1).$$
\item Otherwise, $pr(\widetilde p_{n+1})\in \overline{{\mathop \Delta\limits^ \circ}}$, where $\overline{{\mathop \Delta\limits^ \circ}}$ is the filled geodesic triangle with vertices $\widetilde v_1,\widetilde v_2,\widetilde v_3$ (the vertices with strictly convex internal angles, as in proposition 3.20). \\
In this case one can choose $\widetilde p_{n+1}'$ such that $d_{\Delta}(\widetilde v_i,pr(\widetilde p_{n+1}))\ge d_{\Delta'}(\widetilde v_i',\widetilde p_{n+1}')$.\\
In this way one obtains
$$d_{\widetilde \sigma_2}(\widetilde p_{n+1}',\widetilde x_i')\le d_{\widetilde \sigma_2}(\widetilde p_{n+1}',\widetilde v_i')+d_{\widetilde \sigma_2}(\widetilde v_i',\widetilde x_i')\le$$ $$\le d_{\widetilde q_1}(pr(\widetilde p_{n+1}),\widetilde v_i)+d_{\widetilde q_1}(\widetilde v_i,\widetilde x_i)=d_{\widetilde q_1}(pr(\widetilde p_{n+1}),\widetilde x_i)$$ for $i=1,2,3$ as desired.
\end{itemize}
The rest of the section will be devoted to explaining our method for finding $\widetilde p_{n+1}'$ in cases (1,i) and (2,i). As we anticipated, it will depend on theorem 3.21 below and on conjecture 3.31.\\
Consider the two previously defined filled geodesic triangles of vertices respectively $\widetilde x_1,\widetilde x_2,\widetilde x_3$ and $\widetilde x_1',\widetilde x_2',\widetilde x_3'$. Zeroes on the border of $\Delta$ can change position in $\Delta'$ and in particular the following things can happen:
\begin{enumerate}[label=(\roman*)]
\item if $\widetilde z\in\overline{\widetilde x_i\widetilde x_j}$, then it can happen that $\widetilde z'\in \overline{\widetilde x_i'\widetilde x_k'}$,
\item a zero $ \widetilde z$ on the border of $\Delta$ can be such that $\widetilde z'\not\in \Delta'$,
\item a zero $ \widetilde z'$ on the border of $\Delta'$ can be such that $\widetilde z\not\in \Delta$.
\end{enumerate}
Whenever case (ii) occurs, we consider the previously defined orthogonal projection onto convex sets in $CAT(0)$ spaces
$$pr:(\widetilde X_2,d_{\widetilde \sigma_2})\rightarrow \Delta'$$
and take into account the point $pr(\widetilde z')\in \partial\Delta'$. Then it will follow
$$d_{\widetilde \sigma_2}(pr(\widetilde z'),\widetilde x_i')\le d_{\widetilde \sigma_2}(\widetilde z',\widetilde x_i')\le d_{\widetilde q_1}(\widetilde z,\widetilde x_i)$$
for $i=1,2,3$ and
$$d_{\widetilde \sigma_2}(pr(\widetilde z'),\widetilde w')\le d_{\widetilde \sigma_2}(\widetilde z',\widetilde w')\le d_{\widetilde q_1}(\widetilde z,\widetilde w)$$
for every zero $\widetilde w'$ on the border of $\Delta'$.\\
In the following construction we will need to consider, for every point on the border of $\Delta$, a corresponding point on the border of $\Delta'$. For this reason, by abuse of notation, whenever case (ii) above occurs we will denote the point $pr(\widetilde z')$ simply by $\widetilde z'$ and consider it the point on the border of $\Delta'$ corresponding to $\widetilde z$.\\
Notice that proceeding in this way $\Delta'$ could end up having two or more coinciding vertices: this will not be a problem.\\
From now on it will be more convenient to consider the filled geodesic triangles $\Delta$ and $\Delta'$ exclusively as planar polygons endowed respectively with the intrinsic Euclidean metrics $d_\Delta$ and $d_{\Delta'}$. For this reason we will consider zeroes on the border of $\Delta$ simply as vertices of the polygon. Furthermore, in order to lighten the notation, vertices will be denoted without the overlying tilde.\\
For every couple of points $u,v\in \Delta$ we will denote by $\overline{uv}$ the geodesic for $d_\Delta$ connecting them. Given any two points $u',v'\in \Delta'$, we will denote by $\overline{u'v'}$ the geodesic for $d_{\Delta'}$ connecting them. \\
We will initially consider the case in which there is a function $$\iota :Vertices(\Delta)\rightarrow Vertices(\Delta')$$ which to every vertex $z$ of $\Delta$ associates a vertex $\iota(z)=z'$ of $\Delta'$ in such a way that the vertices of $\Delta$ and of $\iota (Vertices(\Delta))$ are \textit{arranged in the same order}. This means that:
\begin{itemize}
\item for every vertex $z$ of $\Delta$, if $z\in \overline{x_ix_j}$, then $z'\in \overline{x_i'x_j'}$,
\item for every couple of vertices $z_1,z_2\in \overline{x_ix_j}$, if $d_\Delta(x_i,z_1)<d_{\Delta}(x_i,z_2)$, then $d_{\Delta'}(x_i',z_1')\le d_{\Delta'}(x_i',z_2')$.
\end{itemize}
We will summarize this condition on the vertices of $\Delta$ and $\Delta'$ by saying that the common vertices of $\Delta$ and $\Delta'$ have the same order.\\
We have noticed that, given two vertices $v_1,v_2$ of $\Delta$, it can happen that their corresponding vertices of $\Delta'$ coincide as points on $\partial\Delta'$. For a reason which will become clear in the following proofs, we will consider $v_1'$ and $v_2'$ as distinct vertices of $\Delta'$ which are at distance zero on $\partial \Delta'$: we will refer to them as multiple vertices. \\
We can thus suppose that the function $\iota$ is always injective and that the number of vertices of $\Delta'$ is always greater than or equal to the number of vertices of $\Delta$.\\
We underline again an important hypothesis on the distances between vertices of $\Delta$ and $\Delta'$: for every pair of vertices $u,v$ of $\Delta$ such that $\overline{uv}$ is smooth we have
$$d_\Delta(u,v)\ge d_{\Delta'}(u',v').$$
This fact clearly implies the same inequality also in the case where $\overline{uv}$ is a concatenation of smooth segments.\\
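Indeed, if $\overline{uv}$ breaks at the vertices $u=w_0,w_1,\dots,w_m=v$ into smooth segments (a labelling we introduce only here), then
$$d_\Delta(u,v)=\sum_{i=0}^{m-1}d_\Delta(w_i,w_{i+1})\ge \sum_{i=0}^{m-1}d_{\Delta'}(w_i',w_{i+1}')\ge d_{\Delta'}(u',v'),$$
where the last inequality is the triangle inequality in $(\Delta',d_{\Delta'})$.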
The following theorem is our fundamental tool to find the desired point $\widetilde p_{n+1}'$.
\begin{thm}\label{lemshort1}
Suppose the number of vertices of $\Delta'$ is greater than or equal to the number of vertices of $\Delta$ and that the common vertices have the same order, in the sense we explained earlier. The polygon $\Delta'$ is allowed to have one dimensional components. \\
Then there is a 1-Lipschitz map $f:\Delta\rightarrow \Delta'$ (with respect to the intrinsic Euclidean metrics of the polygons) such that:
$$f(z)=z'$$
for every vertex $z$ of $\Delta$.
\end{thm}
Clearly, given any point $\widetilde p_{n+1}\in \Delta$, we will set the point $\widetilde p_{n+1}'$ to be $f(\widetilde p_{n+1})$. \\
Instead of proving theorem \ref{lemshort1} directly, we will prove the following theorem \ref{lemshort2} which will then imply theorem \ref{lemshort1}. The reason for this choice will be made clear in the proof of theorem \ref{lemshort2} and in particular by the example of figure 11.\\
Given any planar polygon $P$ with $n\ge 3$ vertices, we will say that $P'$ is a \textit{degenerate polygon comparable with $P$} if $P'$ is obtained by connecting planar polygons through common vertices or one dimensional components and, furthermore, all the following conditions are satisfied.
\begin{enumerate}[label=(\roman*)]
\item $P'$ is connected, simply connected, can be embedded in $\mathbb{R}^2$ and contains at least one planar polygon.
\item Every planar polygon of $P'$ is linked (by shared vertices or one dimensional components) to at most two other planar polygons of $P'$. The degenerate polygon $P'$ can have one dimensional components which are linked to just one planar polygon of $P'$ (as the polygons $\Delta'$ corresponding to geodesic triangles of $d_{\widetilde q}$ do).
\item There is an injective function $\iota : Vertices(P)\rightarrow Vertices(P')$, which to every vertex $z$ of $P$ associates a unique vertex $z'$ of $P'$. \\
Given two vertices $z_1,z_2$ of $P$, their corresponding vertices of $P'$ can coincide as points on $\partial P'$: we will consider $z_1',z_2'$ as distinct vertices of $P'$ at distance zero on $\partial P'$ and refer to them as multiple vertices.\\
Consequently, the total number of vertices of $P'$ is $m\ge n$.
\item For every pair of vertices $z_1,z_2$ of $P$ we have $$d_{P}(z_1,z_2)\ge d_{P'}(z_1',z_2').$$
\item If $y'$ is a vertex of $P'$ which does not correspond to any vertex of $P$ and $y'$ does not lie on a one dimensional component, then the internal angle at $y'$ is:
\begin{itemize}
\item convex, if $y'$ is a shared vertex of two planar polygons of $P'$ or if a one dimensional component starts at $y'$,
\item concave, otherwise.
\end{itemize}
A vertex $y'$ which does not correspond to any vertex of $P$ can also lie on a one dimensional component, but it cannot be at the extremity which is not connected to a planar polygon.
\item The vertices of $P$ and of $\iota(Vertices(P))$ \textit{are arranged in the same order}, in the following sense. There is a continuous, surjective function $\tau:[0,1]\rightarrow \partial P'$ such that $\tau(0)=z'\in \iota(Vertices(P))$ and for every $x'\in \partial P'$ the cardinality of $\tau^{-1}(x')$ is:
\begin{itemize}
\item two, if $x'$ is a shared vertex of two planar polygons of $P'$ or $x'$ is on a one dimensional component,
\item one, otherwise.
\end{itemize}
Then one can choose a parametrization $\gamma:[0,1]\rightarrow \partial P$ of $\partial P$ such that $\gamma(0)=z$ and $\gamma$ and $\tau$ meet the vertices of $P$ and of $\iota(Vertices(P))$, respectively, in the same order (up to removing one copy of the vertices which $\tau$ meets twice).
\end{enumerate}
In figure 6 there are some examples which will clarify our definition of degenerate polygons comparable with $P$ and of condition (vi).
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{fig22mag1}
\caption{In example (1) one can find $\tau$ such that it encounters the vertices of $P'$ in the order $x_1',x_2',x_4',x_3',x_4',x_6',x_5',x_6',x_2'$. One then discards the first copy of $x_4'$ and $x_6'$ and the last copy of $x_2'$.
In example (2) one can find $\tau$ such that it encounters the vertices of $P'$ in the order $x_1',x_5',x_4',x_2',x_3',x_4',x_5',x_6'$. One then discards the first copy of $x_5'$ and $x_4'$.}
\end{figure}
As is easily verified, polygons $\Delta$ and $\Delta'$ as in the hypothesis of theorem \ref{lemshort1} satisfy all the previous conditions.\\
If $u,v$ are vertices of $P$, $\overline{uv}$ is smooth and lies entirely on the border of $P$, then we will call $\overline{uv}$ a \textit{side} of $P$ and sometimes denote it simply by $\gamma$. If $w,z$ are vertices of $P$, $\overline{wz}$ is smooth and $\overline{wz}\cap \partial P=\{w,z\}$, then we will call $\overline{wz}$ a \textit{smooth diagonal} of $P$ and sometimes denote it simply by $d$. If $\overline{wz}$ is a concatenation of segments and is not entirely contained in the border of $P$, we call $\overline{wz}$ a \textit{diagonal} of $P$ and sometimes denote it by the same symbol $d$. \\
We define sides $\gamma'$ of $P'$ in the same way. A diagonal $d'$ of $P'$ is a geodesic $\overline{u'v'}$ such that $\overline{uv}$ is a diagonal of $P$. In particular one should notice that:
\begin{itemize}
\item a diagonal $d'$ of $P'$ can be entirely contained in a one dimensional component,
\item $\overline{u'v'}$ can be a diagonal of $P'$ only if $u',v'\in \iota(Vertices(P))$.
\end{itemize}
Given sides $\gamma,\gamma'$ and diagonals $d,d'$, we will denote by $l(\gamma),l(\gamma'),l(d),l(d')$ their lengths (the first and the third computed with respect to $d_{P}$, the second and the fourth with respect to $d_{P'}$).\\
Before starting the proof, we feel it is necessary to anticipate why we decided to consider such a complicated class of degenerate polygons. The short answer is that the class of degenerate polygons $P'$ comparable with $P$ is closed under the operation of \textit{cutting along a diagonal $d'$ of $P'$}, an operation which is crucial in the proof of theorem \ref{lemshort2}. We will further clarify this concept in the proof.
\begin{thm} \label{lemshort2}
Let $P$ be a planar polygon with $n\ge 3$ vertices and $P'$ a degenerate polygon which is comparable with $P$ in the sense we just explained.
Then there is a 1-Lipschitz map $f:P\rightarrow P'$ (with respect to the intrinsic Euclidean metrics of the polygons) such that
$$f(z)=z'$$
for every vertex $z$ of $P$.
\end{thm}
The idea of the proof will be to turn $P'$ into the polygon $P$ through a finite number of steps, called \textit{elementary steps}, which will modify the lengths of sides and diagonals of $P'$. Each elementary step will provide us with a 1-Lipschitz map: the final 1-Lipschitz map $f$ will be the composition of all the intermediate 1-Lipschitz maps. Of course, all intermediate polygons will be endowed with the corresponding intrinsic Euclidean metric and the intermediate maps will have Lipschitz coefficient 1 with respect to those metrics.\\
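Schematically, denoting by $P'=P_0,P_1,\dots,P_N=P$ the chain of intermediate polygons produced by the elementary steps and by $\phi_i:P_i\rightarrow P_{i-1}$ the corresponding 1-Lipschitz maps (a notation used only here), the final map is
$$f:=\phi_1\circ\phi_2\circ\cdots\circ\phi_N:P\rightarrow P',$$
which is 1-Lipschitz, being a composition of 1-Lipschitz maps.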
We specify that intermediate polygons obtained through elementary steps can fail to be planar and be merely \textit{generalized polygons}: a generalized polygon is a polygon which is obtained by gluing planar polygons along sides of the same length and which cannot be embedded in $\mathbb{R}^2$. For any generalized polygon it still makes sense to define the intrinsic Euclidean metric.\\
Given any generalized polygon $Q$ and a vertex $v$ of $Q$, in the following proofs we will denote by $\alpha_{v}$ the internal angle of $Q$ at $v$.\\
We will now define the two types of elementary steps we will use. In order to make the definition easier, we will first assume that $P'$ does not have one dimensional components. \\
\textbf{Elementary step of type one}: \\
If $\gamma'$ is a side of $P'$ such that $l(\gamma')<l(\gamma)$, then through an elementary step of type one on the side $\gamma'$ of $P'$ it is possible to obtain a polygon $\hat P$ and a 1-Lipschitz map $\phi: \hat P\rightarrow P'$ such that:
\begin{itemize}
\item $l(\gamma) \ge l(\hat \gamma)>l(\gamma')$ (where $\hat \gamma$ denotes the side of $\hat P$ corresponding to $\gamma'$) and all the other sides of $\hat P$ have the same length as the corresponding sides of $P'$,
\item all the diagonals $\hat d$ of $\hat P$ are such that $l(d)\ge l(\hat d)\ge l(d')$.
\end{itemize}
We now explain how the elementary step of type one on the side $\gamma'=\overline{x'y'}$ of $P'$ is performed.\\
Let $z'$ be another vertex of $P'$ such that $\overline{x'z'}$ and $\overline{y'z'}$ are sides or smooth diagonals of $P'$ (it is always possible to suppose that such a $z'$ exists). Let $P(x',y',z')\subset P'$ be the triangle with vertices $x',y',z'$; the set $\overline{P'\setminus P(x',y',z')}$ consists of a number of polygons $Q'_i$ which varies between zero and two. \\
Denote by $P(\hat x,\hat y,\hat z)$ the triangle obtained from $P(x',y',z')$ by increasing $d_{P'}(x',y')$ without changing $d_{P'}(x',z')$ and $d_{P'}(y',z')$. \\
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{elestep1}
\caption{An example of an elementary step of type one: $P(x',y',z')$ is the triangle drawn with a dashed line, while $P(\hat x,\hat y,\hat z)$ is the triangle drawn with a continuous line.}
\end{figure}
The polygon $\hat P$ is then obtained by gluing the corresponding polygons $Q'_i$ back onto the sides of $P(\hat x,\hat y,\hat z)$. \\
There is a 1-Lipschitz map $\phi_1:P(\hat x,\hat y,\hat z)\rightarrow P(x',y',z')$ which is the identity on the two sides whose lengths are not increased. The map $\phi_1$ can then be extended to a 1-Lipschitz map $\phi:\hat P\rightarrow P'$ by defining it as the identity on each $Q'_i$.\\
We will also consider degenerate elementary steps of type one, in which $P(x',y',z')$ is one dimensional and is turned into a triangle. This will happen, for example, in the case of coinciding vertices, which correspond to sides of length zero.\\
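As a quantitative check on the elementary step of type one, set $a:=d_{P'}(x',z')$, $b:=d_{P'}(y',z')$ and let $\hat \alpha$ be the angle of $P(\hat x,\hat y,\hat z)$ at $\hat z$ (notation introduced only here). By the law of cosines,
$$l(\hat \gamma)^2=a^2+b^2-2ab\cos \hat \alpha,$$
so, with $a$ and $b$ fixed, increasing the length of the side corresponds exactly to opening the angle at $\hat z$.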
\textbf{Elementary step of type two}: \\
If $d'$ is a smooth diagonal of $P'$ such that $l(d')<l(d)$, then through an elementary step of type two on the diagonal $d'$ of $P'$ it is possible to obtain a polygon $\hat P$ and a 1-Lipschitz map $\psi:\hat P\rightarrow P'$ such that:
\begin{itemize}
\item all sides of $\hat P$ have the same length as the corresponding sides of $P'$,
\item all diagonals $\hat d$ of $\hat P$ are such that $l(d)\ge l(\hat d)\ge l(d')$.
\end{itemize}
We now explain how the elementary step of type two on the smooth diagonal $d'=\overline{x'y'}$ of $P'$ is performed.\\
Unlike elementary steps of type one, it is possible to perform an elementary step on a smooth diagonal $d'=\overline{x'y'}$ of $P'$ only if there are other vertices $u',v'$ of $P'$ such that:
\begin{itemize}
\item all four geodesics $\overline{u'x'},\overline{x'v'},\overline{v'y'},\overline{y'u'}$ are smooth and thus define a quadrilateral \\ $P(x',y',u',v')\subset P'$,
\item $\overline{x'y'}$ is a smooth diagonal of $P(x',y',u',v')$,
\item $P(x',y',u',v')$ has only one strictly concave internal angle, which is in $x'$ or $y'$. Consequently all other three internal angles of $P(x',y',u',v')$ are strictly convex.
\end{itemize}
We allow the quadrilateral $P(x',y',u',v')$ to be \textit{degenerate} in the sense that one of the internal angles of $P(x',y',u',v')$ in $u'$ or $v'$ can be zero.\\
The set $\overline{P'\setminus P(x',y',u',v')}$ consists of a number of polygons $Q_i'$ which varies between zero and four. It is possible to obtain another quadrilateral $P(\hat x,\hat y,\hat u,\hat v)$ from $P(x',y',u',v')$ by increasing $d_{P'}(x',y')$, decreasing the strictly concave angle of $P(x',y',u',v')$ and leaving the lengths of its sides unchanged. The polygon $\hat P$ is then obtained by gluing the polygons $Q_i'$ back onto the corresponding sides of $P(\hat x,\hat y,\hat u,\hat v)$.\\
\begin{figure}[h!]
\centering
\includegraphics[scale=0.45]{elestep2b}
\caption{An example of an elementary step of type two: $P(x',y',u',v')$ is the quadrilateral drawn with a dashed line, while $P(\hat x,\hat y,\hat u,\hat v)$ is the quadrilateral drawn with a continuous line.}
\end{figure}
There is a 1-Lipschitz map $\psi_1:P(\hat x,\hat y,\hat u,\hat v)\rightarrow P(x',y',u',v')$, with respect to the intrinsic Euclidean metrics of the polygons, which is the identity on the sides of the quadrilaterals. It can be extended to a 1-Lipschitz map $\psi:\hat P\rightarrow P'$ by defining it as the identity on the polygons $Q_i'$. \\
Notice that neither type of elementary step changes the sum of the internal angles of the polygon $Q'$ on which it is performed.\\
Now that we have defined the two types of elementary steps, we can go back to explaining how to obtain the desired 1-Lipschitz map $f:P\rightarrow P'$.\\
We will use the following lemma regarding generalized polygons. Notice that any generalized polygon $Q$ with $m$ vertices has sum of internal angles equal to $\pi(m-2)$. Indeed, suppose $Q$ is obtained by gluing two planar polygons $Q_1$ and $Q_2$ along a side $\overline{vw}$: denote by $m_1,m_2$ the numbers of vertices of $Q_1$ and $Q_2$; then $m_1+m_2=m+2$ (since in gluing $Q_1$ and $Q_2$ the two endpoints of the glued side are identified in pairs and thus counted only once). Consequently the sum of the internal angles of $Q$ is equal to $\pi(m_1-2)+\pi(m_2-2)=\pi(m-2)$.
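The two-polygon computation above extends by induction to generalized polygons obtained through several gluings; as a sketch:

```latex
% Sketch: angle sum of a generalized polygon Q built by gluing
% k planar polygons, by induction on k.
% Base case k=1: Q is planar with m vertices, so the sum is \pi(m-2).
% Inductive step: Q is obtained by gluing a planar polygon Q_2 with
% m_2 vertices onto a generalized polygon Q_1 with m_1 vertices along
% a side, so m = m_1 + m_2 - 2 and
\begin{align*}
\sum \text{(internal angles of } Q)
  &= \pi(m_1-2)+\pi(m_2-2)\\
  &= \pi\bigl((m_1+m_2-2)-2\bigr)=\pi(m-2).
\end{align*}
```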
\begin{lem}\label{conv}
Let $Q'$ be a generalized polygon. Then it is possible to apply a finite sequence of elementary steps of type two to $Q'$, turning it into a convex polygon $\hat Q$ such that all sides of $\hat Q$ have the same length as the corresponding sides of $Q'$.
\end{lem}
\begin{proof}
We proceed by induction on the number $m$ of vertices of $Q'$. \\
If $m=4$ then the result is trivial.
Suppose the statement is true for all generalized polygons with at most $m$ vertices, where $m\ge 4$; we will prove it for polygons $Q'$ with $m+1$ vertices.\\
We cut $Q'$ along a smooth diagonal $\overline{v'w'}$, obtaining two generalized polygons $Q_i$ to which we can apply the inductive hypothesis, thus turning them into two convex polygons $\hat Q_i$. We glue $\hat Q_1,\hat Q_2$ back together along $\overline{\hat v\hat w}$, obtaining a polygon $\hat Q$ which can have strictly concave internal angles only in $\hat v$ and $\hat w$. If $\alpha_{\hat v}>\pi$, one performs an elementary step of type two on $P(\hat v_1,\hat v,\hat v_2,\hat w)$ (where $\hat v_1,\hat v_2$ are the vertices adjacent to $\hat v$), stretching $\overline{\hat v\hat w}$ until $\alpha_{\hat v}=\pi$. After this, only the angle $\alpha_{\hat w}$ can be strictly concave. Notice that all diagonals $\overline{\hat w\hat z}$, where $\hat z$ is any vertex of $\hat Q$ not adjacent to $\hat w$, must be smooth: using this fact and the hypothesis on the internal angles of $Q'$, one gets that it is always possible to flatten the angle $\alpha_{\hat w}$ by performing elementary steps of type two stretching $\overline{\hat w\hat x}$, where $\hat x$ is a vertex of $\hat Q$ such that $\alpha_{\hat x}<\pi$, without making any angle $\alpha_{\hat x}$ strictly concave.
\end{proof}
\begin{oss}Notice that, given polygons $Q'$ and $\hat Q$ as in the previous lemma, if $x'$ is a vertex of $Q'$ such that $\alpha_{x'}\ge \pi$, then it cannot happen that $\alpha_{\hat x}<\pi$.\\
To see this, denote by $x_1',x_2'$ the two vertices of $Q'$ adjacent to $x'$ and by $\hat x_1,\hat x_2$ the two corresponding vertices of $\hat Q$. If $\alpha_{\hat x}<\pi$, then $\overline{\hat x_1\hat x_2}$ is a segment of length strictly smaller than $d_{Q'}(x_1',x')+d_{Q'}(x',x_2')=d_{Q'}(x_1',x_2')$, and consequently we would have $d_{Q'}(x_1',x_2')>d_{\hat Q}(\hat x_1,\hat x_2)$.\\
This inequality would contradict the fact that, since $\hat Q$ is obtained from $Q'$ through a sequence of elementary steps of type two, there is a 1-Lipschitz map $f:\hat Q\rightarrow Q'$ which sends vertices to corresponding vertices.
\end{oss}
We can now start the proof of theorem \ref{lemshort2}, using induction on the number $n$ of vertices of $P$. In order to make the proof more easily readable, we will divide the argument into a sequence of lemmas. \\
Suppose $n=3$; then $P$ is a Euclidean triangle, while $P'$ can have many more vertices than $P$. In order to satisfy condition (v) of the definition of degenerate polygons comparable to $P$, $P'$ can have only one planar subpolygon and at most three one dimensional components ending in points of $\iota(Vertices(P))$.\\
Consequently, for $n=3$, $P$ and $P'$ will be polygons of the type described in theorem \ref{lemshort1}: for this reason we will denote them by $\Delta,\Delta'$.
\begin{lem}
If $n=3$, it is possible to turn $\Delta'$ into $\Delta$ using only elementary steps of type one and two, and consequently to obtain a 1-Lipschitz map $f:\Delta\rightarrow \Delta'$ by composing all the intermediate 1-Lipschitz maps between intermediate polygons.
\end{lem}
\begin{proof}
We first get rid of the one dimensional components of $\Delta'$, turning them into part of $\overline{\mathop \Delta\limits^ \circ}$ using elementary steps, as we now explain (we describe the procedure only for the one dimensional component starting at $x_1'$; the other two are treated in the same way). \\
Suppose there is a one dimensional component $\overline{x_1'v_1'}$ starting at $x_1'$, and let $v_1'\in\overline{\mathop \Delta\limits^ \circ}$ be the vertex at which the corresponding internal angle of $\overline{\mathop \Delta\limits^ \circ}$ must be strictly convex. Let $v_2'$ be the vertex of $\overline{x_1'v_1'}$ closest to $v_1'$ and let $w_1',u_1'$ be the two vertices of $\overline{\mathop \Delta\limits^ \circ}$ adjacent to $v_1'$. We perform an elementary step of type two on $P(u_1',v_1',v_2',w_1')$ (which is a degenerate quadrilateral, since the internal angle in $v_2'$ is zero) until $P(u_1',v_1',v_2',w_1')$ is no longer degenerate. Notice that there are two ways of performing an elementary step of type two on a degenerate quadrilateral $P(u_1',v_1',v_2',w_1')$ (as shown in figure 9): in one, $\overline{u_1'v_1'}$ is stretched and $\hat v_1\in \overline{\hat w_1 \hat v_2}$ results; in the other, $\overline{v_1'w_1'}$ is stretched and $\hat v_1\in \overline{\hat u_1\hat v_2}$ results. Since we do not care on which side the vertex $\hat v_1$ ends up, we can choose either way.\\
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{poli20mag2}
\caption{Two ways of performing an elementary step of type two on $P(u_1',v_1',v_2',w_1')$}
\end{figure}
We then proceed in the same way, considering the vertex $v_3'$ of the one dimensional component $\overline{v_2'x_1'}$ closest to $v_2'$.\\
Having done so, we obtain a polygon without one dimensional components and with concave internal angles at all vertices which are not in $\iota(Vertices(\Delta))$: we turn it into a Euclidean triangle $\widehat \Delta$ with vertices $\hat x_i$, $i=1,2,3$, using lemma \ref{conv} through elementary steps of type two. Finally, we perform a finite sequence of elementary steps of type one on the three sides $\overline{\hat x_i\hat x_j}$ of $\widehat \Delta$ in order to make them the same length as the corresponding sides of $\Delta$.\\
Suppose all three sides of $\widehat \Delta$ satisfy $d_{\widehat \Delta}(\hat x_i,\hat x_j)<d_\Delta(x_i,x_j)$. We start by stretching $\overline{\hat x_1\hat x_2}$ until the angle in $\hat x_3$ equals $\pi-\epsilon$, with $\epsilon>0$ very small: by the law of cosines this gives $l(\overline{\hat x_1\hat x_2})^2=\left(l(\overline{\hat x_1\hat x_3})+l(\overline{\hat x_2\hat x_3})\right)^2-\psi(\epsilon)$ with $\lim\limits_{\epsilon\rightarrow 0}\frac{\psi(\epsilon)}{\epsilon}=0$. Performing another elementary step of type one, stretching $\overline{\hat x_2\hat x_3}$ until the angle in $\hat x_1$ equals $\pi-\epsilon$, one gets $l(\overline{\hat x_2\hat x_3})^2=\left(l(\overline{\hat x_1\hat x_2})+l(\overline{\hat x_1\hat x_3})\right)^2-\psi_1(\epsilon)$ with $\lim\limits_{\epsilon\rightarrow 0}\frac{\psi_1(\epsilon)}{\epsilon}=0$. Consequently, proceeding in this way, the side lengths increase by a definite factor at each step, so after a finite number of steps one side must reach its maximum length (recall that each elementary step of type one stretches a side at most up to the length of the corresponding side of $\Delta$). Then we proceed in the same way until all sides of $\widehat \Delta$ have the same length as the corresponding sides of $\Delta$.
\end{proof}
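The $\epsilon$-estimate invoked in the proof above is the standard law-of-cosines computation; as a sketch, for a triangle with two sides of fixed lengths $a,b$ enclosing the angle $\pi-\epsilon$, the third side $c$ satisfies:

```latex
\begin{align*}
c^2 &= a^2 + b^2 - 2ab\cos(\pi-\epsilon)
     = a^2 + b^2 + 2ab\cos\epsilon\\
    &= (a+b)^2 - 2ab\,(1-\cos\epsilon)
     = (a+b)^2 - \psi(\epsilon),
\end{align*}
% where \psi(\epsilon) = 2ab(1-\cos\epsilon) = ab\,\epsilon^2 + O(\epsilon^4),
% hence \lim_{\epsilon\to 0} \psi(\epsilon)/\epsilon = 0: as the enclosed
% angle approaches \pi, the stretched side approaches a+b.
```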
Now suppose the inductive hypothesis is verified when the number of vertices of $P$ is not greater than $n$; we shall find the 1-Lipschitz map when $P$ has $n+1$ vertices.
\begin{lem}
If there is a diagonal $\overline{v'w'}$ of $P'$ such that $l(\overline{v'w'})=l(\overline{vw})$, then it is possible to apply the inductive hypothesis to obtain the 1-Lipschitz map $f:P\rightarrow P'$.
\end{lem}
\begin{proof}
We distinguish two cases.
\begin{itemize}
\item if $\overline{vw}$ is smooth then we cut the polygons $P$ and $P'$ in the following way:
\begin{itemize}
\item we cut $P$ along $\overline{vw}$ obtaining $P_1$ and $P_2$,
\item we cut $P'$ along $\overline{v'w'}$ obtaining $P_1'$ and $P_2'$.
\end{itemize}
Notice that if $\overline{v'w'}$ is not smooth, then the operation of \textit{cutting along $\overline{v'w'}$} must be further clarified. If $\overline{v'w'}$ passes through a side of $P'$ (resp. a one dimensional component), then such side (resp. one dimensional component) will appear on both polygons $P_i'$. Notice that in this way the polygons $P_i'$ may acquire new one dimensional components and new vertices. We will follow this rule to name the new vertices: if $u'$ is a vertex of $\iota(Vertices(P))$ on $\overline{v'w'}$ and $u\in P_1$ (resp. $u\in P_2$), then $u'$ will be a vertex only of $P_1'$ (resp. $P_2'$).
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{fig22mag3}
\caption{An example of cutting in case $d'$ is not smooth: notice that the points $v',w'$ appear on both $P_1'$ and $P_2'$, while $u'$ appears only on $P_1'$, since $u\in P_1$. On $P_2'$ there is a new vertex $z'\not\in \iota(Vertices(P_2))$.}
\end{figure}
Sometimes a polygon $P_i'$ could be entirely degenerate (i.e. one dimensional): in that case we perform a degenerate elementary step of type one on $P_i'$, turning it into a degenerate polygon which includes at least one planar polygon.\\
We can thus suppose that both newly obtained polygons $P_1'$ and $P_2'$ are degenerate polygons comparable with $P_1$ and $P_2$ respectively. Indeed, condition (v) of the definition is verified since, if $z'\in\overline{v'w'}$, $z'\not\in \iota(Vertices(P))$ is a vertex of a planar polygon of $P'$, the corresponding vertex $z_i'\in P_i'$ can have a strictly convex internal angle only if a one dimensional component starts from $z_i'$.
This is the crucial property we were looking for: we can now apply the inductive hypothesis and obtain two 1-Lipschitz maps $f_i:P_i\rightarrow P_i'$ which must agree on $\overline{vw}$: we define $f:P\rightarrow P'$ by $f|_{P_i}:=f_i$.\\
Notice that the same reasoning could not have been carried out for the polygons $\Delta$ and $\Delta'$ of the hypothesis of theorem \ref{lemshort1}, as explained in figure 5.11.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.55]{fig22mag2}
\caption{The diagonals $d$ and $d'$ are drawn in red. One clearly sees that the bottom half of $\Delta$ has four strictly convex angles, while the bottom half of $\Delta'$ is composed by two triangles connected by a one dimensional component.}
\end{figure}
\item if $\overline{vw}$ is not smooth, then suppose $\overline{vw}$ is the concatenation of segments $\overline{vv_1}*\overline{v_1v_2}*\cdots *\overline{v_{m}w}$: at least one of them must be a smooth diagonal, so suppose $\overline{vv_1}$ is. Notice that if $l(\overline{vw})=l(\overline{v'w'})$ then it must be that $\overline{v'w'}=\overline{v'v_1'}*\overline{v_1'v_2'}*\cdots *\overline{v_{m}'w'}$, since otherwise one would get
$$l(\overline{vv_1})+l(\overline{v_1v_2})+\cdots +l(\overline{v_mw})< l(\overline{v'v_1'})+l(\overline{v_1'v_2'})+\cdots +l(\overline{v_m'w'})$$
which contradicts the hypothesis on the distances in $P$ and $P'$.\\
Now one can simply consider the diagonals $\overline{vv_1}$ (which is smooth) and $\overline{v'v_1'}$ and reduce to the previous case.
\end{itemize}
\end{proof}
After these considerations, we can always suppose that all diagonals of $P'$ are strictly shorter than the corresponding diagonals of $P$. We will now deal with the one dimensional components of $P'$.
\begin{lem}
Suppose all diagonals of $P'$ are strictly shorter than the corresponding diagonals of $P$. Then, using elementary steps, it is possible to turn $P'$ into a degenerate polygon comparable to $P$ without one dimensional components. If in doing so one diagonal of $P'$ reaches its maximum length (i.e. the length of the corresponding diagonal of $P$), it is possible to apply the inductive hypothesis to obtain the desired 1-Lipschitz map $f:P\rightarrow P'$.
\end{lem}
\begin{proof}
We proceed in a way which is almost identical to the one applied in the previous case $n=3$. Let $\overline{v_1'x_1'}$ be a one dimensional component of $P'$ and $v_2'$ the vertex of $\overline{v_1'x_1'}$ closest to $v_1'$; we apply an elementary step of type two on $P(u_1',v_1',v_2',w_1')$ (where, as before, $u_1'$ and $w_1'$ are the vertices of a planar subpolygon of $P'$ adjacent to $v_1'$) in such a way that the newly obtained degenerate polygon $\hat P$ satisfies axiom (vi). In particular, if $v_1',u_1',w_1',v_2'$ are all vertices of $\iota(Vertices(P))$, then we apply the elementary step which gives $\hat v_1\in \overline{\hat w_1\hat v_2}$ (resp. $\hat v_1\in \overline{\hat u_1\hat v_2}$) if $v_1\in \overline{ w_1 v_2}$ (resp. $v_1\in \overline{ u_1v_2}$). If $v_1'$ is not a vertex of $\iota(Vertices(P))$, then either of the two ways of performing the elementary step of type two is allowed. \\
Proceeding in this way, one could end up with a vertex $x_1'$ which connects two planar polygons of $P'$: it is possible to get rid of this ``pathology'' with another elementary step of type two, as explained in figure 12.\\
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{fig22mag4}
\caption{If $v\in \overline{v_1v_2}$ then one performs an elementary step of type two on $P(v_4',v',v_2',v_3')$.}
\end{figure}
In this way we have also explained how to get rid of vertices of $P'$ which link two different planar polygons. Clearly, if at any point during this procedure of elimination of one dimensional components one ends up with a diagonal $d'$ of $P'$ such that $l(d')=l(d)$, then the 1-Lipschitz map $f:P\rightarrow P'$ is obtained as explained before.
\end{proof}
At this point, we can suppose $P'$ does not have one dimensional components, but it can still have more vertices than $P$. Notice that a straightforward consequence of the definition of elementary steps of type two and of condition (v) of the definition of degenerate polygons comparable to $P$ is that the internal angles at all vertices of $P'$ which are not in $\iota(Vertices(P))$ are concave.
\begin{lem}
Suppose all diagonals of $P'$ are strictly shorter than the corresponding diagonals of $P$ and $P'$ does not have one dimensional components. Then, using elementary steps, it is possible to turn $P'$ into a degenerate polygon comparable to $P$ with the same vertices as $P$. If in doing so one diagonal of $P'$ reaches its maximum length (i.e. the length of the corresponding diagonal of $P$), it is possible to apply the inductive hypothesis to obtain the desired 1-Lipschitz map $f:P\rightarrow P'$.
\end{lem}
\begin{proof}
One just has to apply lemma \ref{conv} (and the observation following it), turning $P'$ into a convex polygon $\hat P$. The polygon $\hat P$ will have flat internal angles at the vertices $\hat z$ such that the corresponding vertex $z'$ of $P'$ is not in $\iota(Vertices(P))$. At this point one simply ``forgets'' about $\hat z$ and removes it from the set of vertices of $\hat P$.\\
Again, if, in performing any of the elementary steps of type two of lemma \ref{conv}, one diagonal $d'$ of $P'$ is stretched until $l(\hat d)=l(d)$, then the procedure is finished, as we already explained.
\end{proof}
We will now stretch all sides of $P'$ until they have the same length as the corresponding sides of $P$.
\begin{lem}
Suppose all diagonals of $P'$ are strictly shorter than the corresponding diagonals of $P$, $P'$ has the same vertices as $P$ and $P'$ does not have one dimensional components. Then, using elementary steps of type one, it is possible to stretch all sides of $P'$ until they have the same length as the corresponding sides of $P$. If in doing so one diagonal of $P'$ reaches its maximum length (i.e. the length of the corresponding diagonal of $P$), it is possible to apply the inductive hypothesis to obtain the desired 1-Lipschitz map $f:P\rightarrow P'$.
\end{lem}
\begin{proof}
First, notice that it is not always possible to stretch a side of $P'$ to its maximum length with just one elementary step of type one. Indeed, let $\overline{x_1'x_2'}$ be a side of $P'$ such that $l(\overline{x_1'x_2'})<l(\overline{x_1x_2})$ and $P(x_1',x_2',z')$ a triangle as in the definition of the elementary step of type one. The upper limit for the length of the side $\overline{x_1'x_2'}$ obtainable through an elementary step of type one on $P(x_1',x_2',z')$ is $l(\overline{x_1'z'})+l(\overline{x_2'z'})$ (the length at which $P(x_1',x_2',z')$ becomes a segment). \\
In order to overcome this difficulty, we number the vertices of $P'$ in increasing order starting from $x_1'$, in such a way that its adjacent vertices are $x_2'$ and $x_m'$. We will explain how to turn $P'$ into a triangle with convex angles in $x_1',x_2'$ and internal angle in $x_m'$ equal to $\pi-\epsilon$. In this way $l(\overline{x_1'x_2'})^2$ will equal the square of the sum of the lengths of all the other sides of $P'$ minus a term $\psi(\epsilon)$ such that $\lim\limits_{\epsilon\rightarrow 0}\frac{\psi(\epsilon)}{\epsilon}=0$: the conclusion will then follow in the same way as in the proof of the $n=3$ case. Clearly, if in doing so one diagonal $d'$ of $P'$ is stretched until $l(d')=l(d)$, then the procedure is finished as explained before. One should notice that coinciding vertices do not constitute a problem, since they just correspond to sides of length zero (which will be stretched by degenerate elementary steps of type one).\\
We now explain how to turn $P'$ into a triangle with convex angles in $x_1',x_2'$ and internal angle in $x_m'$ equal to $\pi-\epsilon$: first of all we apply lemma \ref{conv} and turn $P'$ into a convex polygon $\hat P$.\\
Denote by $\alpha_{\hat x_i}$ the internal angle of $\hat P$ in $\hat x_i$. If $\alpha_{\hat x_2}<\pi$ and $\alpha_{\hat x_j}=\pi$ for $j=3,\dots,l-1$, we perform an elementary step of type one on $P(\hat x_1,\hat x_2,\hat x_l)$ until $\alpha_{\hat x_l}=\pi$.\\
If $\alpha_{\hat x_i}=\pi$ for $i=2,\dots, k-1$, we perform an elementary step of type one on $P(\hat x_1,\hat x_2,\hat x_k)$ until $\alpha_{\hat x_k}=\pi$ and then perform an elementary step of type one on $P(\hat x_1,\hat x_2,\hat x_{k-1})$ until $\alpha_{\hat x_{k-1}}=\pi$. \\
Proceeding in this way one can flatten all angles $\alpha_{\hat x_i}$, $i=3,\dots,m-1$, until $\hat P$ becomes a triangle with convex angles only in $\hat x_1,\hat x_2,\hat x_m$. Finally, one performs an elementary step of type one until $\alpha_{\hat x_m}=\pi-\epsilon$.
\end{proof}
The following lemma concludes the proof of theorem \ref{lemshort2}.
\begin{lem}
Suppose all diagonals of $P'$ are strictly shorter than the corresponding diagonals of $P$, $P'$ does not have one dimensional components, $P'$ has the same vertices as $P$ and all sides of $P'$ have the same length as the corresponding sides of $P$. Then it is possible to obtain the desired 1-Lipschitz map $f:P\rightarrow P'$.
\end{lem}
\begin{proof}
We will prove that it is always possible to obtain a diagonal $d'$ of $P'$ of maximum length by applying a finite number of elementary steps of type two to $P'$: the conclusion will then follow as in the previous lemmas.\\
Once again, we turn $P'$ into a convex polygon $\hat P$ using lemma \ref{conv}. In case $\hat P\neq P$, there must be a vertex $\hat x$ of $\hat P$ such that $\alpha_{\hat x}>\alpha_x$. If we can prove that this implies the existence of a diagonal $\hat d$ of $\hat P$ such that $l(\hat d)\ge l(d)$ then the proof is finished, since this means that at some point during the sequence of elementary steps of type two which turns $P'$ into $\hat P$ one gets $l(\hat d)=l(d)$.\\
We prove the equivalent statement that if all diagonals of $\hat P$ are strictly shorter than the corresponding diagonals of $P$, then all convex angles of $P$ must be greater than the corresponding angles of $\hat P$.\\
Denote by $y$ and $z$ the vertices of $P$ next to $x$: suppose $\overline{yz}$ is the concatenation of the smooth segments $\overline{yx_1}*\overline{x_1x_2}*\cdots *\overline{x_kz}$ for $k\ge 0$.\\
Denote by $Q$ the polygon delimited by $\overline{xy},\overline{xz}$ and $\overline{yz}$: all internal angles of $Q$ are concave except for the ones in $x,y,z$, and the $\overline{xx_i}$, $i=1,\dots,k$, are smooth diagonals contained in $Q$. We claim that, decreasing the lengths of all the diagonals $\overline{xx_i}$ without increasing the lengths of the sides of $Q$ and without changing the lengths of $\overline{xy}$ and $\overline{xz}$, the angle $\alpha_x$ decreases: this can be proved by modifying the lengths of the sides of $Q$ one at a time. \\
Indeed, if only $d_Q(x_i,x_{i+1})$ decreases, then $\alpha_x$ must decrease: this is easily seen by shortening the side $\overline{x_ix_{i+1}}$ of the triangle $P(x,x_i,x_{i+1})$ with vertices $x,x_i,x_{i+1}$ without changing the lengths of its other two sides. In the same way, if only $d_Q(x,x_{i})$ decreases, then $\alpha_x$ must decrease: this is easily seen by shortening the diagonal $\overline{xx_i}$ of the quadrilateral $P(x,x_{i-1},x_i,x_{i+1})$ with vertices $x,x_{i-1},x_i,x_{i+1}$ without changing the lengths of its sides.
\end{proof}
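Both monotonicity claims in the last proof reduce to the same law-of-cosines observation: in a Euclidean triangle with sides of lengths $p,q$ enclosing the angle $\alpha$ and opposite side of length $r$,

```latex
\cos\alpha \;=\; \frac{p^2+q^2-r^2}{2pq},
```

so, for fixed $p$ and $q$, the angle $\alpha$ is a strictly increasing function of $r$; shortening the side opposite to an angle while keeping the two adjacent sides fixed therefore decreases that angle.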
As we said, this ends the proof of theorem \ref{lemshort2}, and consequently theorem 3.21 is also proved.\\
We are now left with the case in which the common vertices of $\Delta$ and of $\Delta'$ are not disposed in the same order.\\
From now on, we will denote the vertices of $\Delta$ with concave internal angle in the following way, which will be useful in what follows.
\begin{itemize}
\item Denote by $w_j$ the vertices on $\overline{x_1x_2}$, ordered in increasing order from $x_1$ to $x_2$.
\item Denote by $u_k$ the vertices on $\overline{x_2x_3}$, ordered in increasing order from $x_3$ to $x_2$.
\item Denote by $v_l$ the vertices on $\overline{x_1x_3}$, ordered in increasing order from $x_3$ to $x_1$.
\end{itemize}
As before, we denote by $w_j',u_k',v_l'$ the corresponding vertices of $\Delta'$.\\
We say a vertex $w_j$ has \textit{changed side} on $\Delta'$ if $w_j'\not\in \overline{x_1'x_2'}$.\\
Two vertices $w_m',w_n' \in \overline{x_1'x_2'}$ have \textit{changed their order} if $m<n$ and $d_{\Delta'}(w_n',x_1')<d_{\Delta'}(w_m',x_1')$.\\
Changes of side and order of vertices $u_k$ and $v_l$ are defined in the same way.\\
The common vertices of $\Delta$ and of $\Delta'$ are not disposed in the same order if there is at least one change of side or one change of order.\\
As we anticipated, we are not able to prove a statement similar to that of theorem 3.21 in the case in which the common vertices of $\Delta$ and $\Delta'$ are not disposed in the same order. So we can only state the following conjecture.
\begin{conj}
Suppose the number of vertices of $\Delta'$ can be greater than the number of vertices of $\Delta$, $\Delta'$ can have one dimensional components and the common vertices of $\Delta$ and $\Delta'$ are not disposed in the same order. \\
Then for every $p\in \Delta$ there is a corresponding point $p'\in \Delta'$ such that $$d_{\Delta'}(p',x_i')\le d_\Delta(p,x_i),\quad i=1,2,3.$$
\end{conj}
Clearly, it is not possible to adapt the proof of theorem \ref{lemshort2} to prove the above conjecture, since the method consisting of elementary steps only works when the common vertices of $\Delta$ and $\Delta'$ have the same order.\\
Nonetheless, we are quite confident that the conjecture is true: this is because changes of side or order of vertices force the polygon $\Delta'$ to become smaller.\\
Indeed, if two vertices $w_m,w_n$ of $\overline{x_1x_2}$ change order in $\Delta'$, then it must be that $d_{\Delta'}(x_1',x_2')\le d_\Delta(x_1,x_2)-d_\Delta(w_m,w_n)$. Since each change of order of the vertices contributes to the shortening of $\overline{x_1'x_2'}$, as the number of changes of order of vertices of $\overline{x_1x_2}$ increases, the shortening of $\overline{x_1'x_2'}$ also increases. \\
In a similar way, if a vertex of $\Delta$ changes side, say $u_{k_0}'\in \overline{x_1'x_3'}$, then, since the distances $d_{\Delta'}(u_k',u_{k_0}')$ cannot be greater than the corresponding distances $d_{\Delta}(u_k,u_{k_0})$, all the other vertices $u_k'$ are forced to ``follow'' $u_{k_0}'$ and move closer to the vertices of $\overline{x_1'x_3'}$. This forces some distances inside $\Delta'$ to become smaller than the corresponding distances in $\Delta$. \\
In light of these observations, one could even consider the case in which the common vertices of $\Delta$ and $\Delta'$ are disposed in the same order as the worst one for proving the existence of $p'$, since then no distance inside $\Delta'$ is forced to decrease.\\
The following two propositions should support our intuition. Indeed, they show some cases where the change of side of one or more vertices of $\Delta'$ directly implies the existence of $p'$.
\begin{prop}
Suppose there is at least one vertex of $\Delta'$ which changes side, for example $u'\in \overline{x_1'x_3'}$. Then for every $p\in \Delta$ such that $d_\Delta(p,x_3)\le d_{\Delta'}(u',x_3')$ there is a point $p'\in \Delta'$ such that $$d_{\Delta'}(p',x_i')\le d_\Delta(p,x_i),\quad i=1,2,3.$$
\end{prop}
\begin{proof}
Choose $p'\in \overline{u'x_3'}$ at distance $d_\Delta(p,x_3)$ from $x_3'$. Then
$$d_{\Delta'}(p',x_1')=d_{\Delta'}(x_1',x_3')-d_{\Delta'}(p',x_3')\le d_{\Delta}(x_1,x_3)-d_\Delta(p,x_3)\le d_\Delta(p,x_1),$$
$$d_{\Delta'}(p',x_2')\le d_{\Delta'}(p',u')+d_{\Delta'}(u',x_2')\le d_\Delta(x_2,x_3)-d_\Delta(p,x_3)\le d_\Delta(p,x_2).$$
\end{proof}
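The second chain of inequalities in the proof above implicitly uses the comparability of $\Delta$ and $\Delta'$ together with the fact that $u$ lies on the side $\overline{x_2x_3}$ of $\Delta$; spelled out:

```latex
\begin{align*}
d_{\Delta'}(p',u')+d_{\Delta'}(u',x_2')
  &= \bigl(d_{\Delta'}(u',x_3')-d_{\Delta}(p,x_3)\bigr)+d_{\Delta'}(u',x_2')\\
  &\le d_{\Delta}(u,x_3)+d_{\Delta}(u,x_2)-d_{\Delta}(p,x_3)\\
  &= d_{\Delta}(x_2,x_3)-d_{\Delta}(p,x_3),
\end{align*}
% since d_{\Delta'}(u',x_i') \le d_\Delta(u,x_i) for i=2,3, and
% d_\Delta(u,x_2)+d_\Delta(u,x_3) = d_\Delta(x_2,x_3) because u lies
% on the side joining x_2 and x_3.
```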
\begin{prop}
Suppose one of the following three conditions is satisfied:
\begin{enumerate}[label=(\roman*)]
\item there are vertices $u',v'\in \overline{x_1'x_2'}$ such that $d_{\Delta'}(x_1',v')>d_{\Delta'}(x_1',u')$,
\item there are vertices $u',w'\in \overline{x_1'x_3'}$ such that $d_{\Delta'}(x_1',w')>d_{\Delta'}(x_1',u')$,
\item there are vertices $v',w'\in \overline{x_2'x_3'}$ such that $d_{\Delta'}(x_2',w')>d_{\Delta'}(x_2',v')$.
\end{enumerate}
Then for every $p\in \Delta$ there is a corresponding point $p'\in \Delta'$ such that $$d_{\Delta'}(p',x_i')\le d_\Delta(p,x_i),\quad i=1,2,3.$$
\end{prop}
\begin{proof}
We will prove the proposition only for case $(i)$, since the proof is identical for the other two cases.\\
One can find $p'$ as follows.
\begin{enumerate}
\item If $d_\Delta(p,x_2)\le d_{\Delta'}(u',x_2')$ let $p'$ be the point on $\overline{u'x_2'}$ at distance $d_\Delta(p,x_2)$ from $x_2'$. Then
$$d_{\Delta'}(p',x_1')=d_{\Delta'}(x_1',x_2')-d_{\Delta'}(p',x_2')\le d_\Delta(x_1,x_2)-d_\Delta(p,x_2)\le d_\Delta(p,x_1),$$
$$d_{\Delta'}(p',x_3')\le d_{\Delta'}(x_3',u')+d_{\Delta'}(u',x_2')-d_{\Delta'}(p',x_2')\le d_\Delta(x_2,x_3)-d_\Delta(p,x_2)\le d_\Delta(p,x_3).$$
\item If $d_\Delta(p,x_3)\le d_{\Delta'}(u',x_3')$ let $p'$ be the point on $\overline{x_3'u'}$ at distance $d_\Delta(p,x_3)$ from $x_3'$. Then
$$d_{\Delta'}(p',x_2')\le d_{\Delta'}(x_3',u')+d_{\Delta'}(u',x_2')-d_{\Delta'}(p',x_3')\le d_\Delta(x_2,x_3)-d_\Delta(p,x_3)\le d_\Delta(p,x_2),$$
$$d_{\Delta'}(p',x_3')+d_{\Delta'}(p',x_1')\le d_{\Delta'}(v',x_3')+d_{\Delta'}(v',x_1')\le d_\Delta(x_1,x_3),$$
$$d_{\Delta'}(p',x_1')\le d_\Delta(x_1,x_3)-d_{\Delta'}(p',x_3')=d_\Delta(x_1,x_3)-d_\Delta(p,x_3)\le d_\Delta(p,x_1).$$
\item If $d_\Delta(p,x_2)> d_{\Delta'}(u',x_2')$ and $d_\Delta(p,x_3)> d_{\Delta'}(u',x_3')$ then there is always a point $p'\in \overline{x_1'u'}$ such that one of the following two conditions is satisfied:
\begin{itemize}
\item $d_{\Delta'}(p',x_2')=d_\Delta(p,x_2)$ and $d_{\Delta'}(p',x_3')\le d_\Delta(p,x_3)$, then one can proceed as in previous case (1),
\item $d_{\Delta'}(p',x_3')=d_\Delta(p,x_3)$ and $d_{\Delta'}(p',x_2')\le d_\Delta(p,x_2)$, then one can proceed as in previous case (2).
\end{itemize}
\end{enumerate}
\end{proof}
One could try to prove the first conjecture above using the following approach. \\
Consider a subpolygon $\widehat \Delta\subset \Delta'$ such that to every vertex $x_i,w_j,u_k, v_l$ of $\Delta$ there corresponds a unique vertex $\hat x_i,\hat w_j,\hat u_k,\hat v_l$ of $\widehat \Delta$.\\
We say that $\widehat \Delta$ is a \textit{subpolygon of $\Delta'$ comparable to $\Delta$} if $\hat x_i=x_i'$, $i=1,2,3$, and $\Delta,\widehat \Delta$ satisfy the hypothesis of theorem \ref{lemshort2}. In particular, this condition implies that:
\begin{itemize}
\item common vertices of $\widehat \Delta$ and of $\Delta$ are disposed in the same order,
\item the distance between any two vertices of $\Delta$ is greater than or equal to the distance between the corresponding two vertices of $\widehat \Delta$.
\end{itemize}
If such a polygon $\widehat \Delta$ exists, then the preceding theorem \ref{lemshort2} grants the existence of a 1-Lipschitz map $\phi:\Delta\rightarrow \widehat \Delta$ which sends vertices of $\Delta$ to the corresponding vertices of $\widehat \Delta$. Since for every pair of points $x',y'\in \widehat \Delta$ we have $d_{\widehat \Delta}(x',y')\ge d_{\Delta'}(x',y')$, we conclude that $\phi$ is also a 1-Lipschitz map from $\Delta$ to $\Delta'$ such that $\phi(x_i)=x_i'$, $i=1,2,3$.\\
Notice that there is no need to require the polygon $\widehat \Delta$ to have exactly three strictly convex internal angles, since this is not required in the hypothesis of theorem \ref{lemshort2}.\\
Unfortunately, we were not able to develop a method which always produces such a polygon $\widehat \Delta$ for all $\Delta,\Delta'$. Indeed, we can only make the following conjecture.
\begin{conj}
Suppose that the number of vertices of $\Delta'$ may be greater than the number of vertices of $\Delta$, that $\Delta'$ may have one dimensional components, and that the common vertices of $\Delta$ and $\Delta'$ are not necessarily disposed in the same order. \\
Then there always is a subpolygon $\widehat \Delta$ of $\Delta'$ comparable to $\Delta$.
\end{conj}
As we just explained, conjecture 3.34 implies conjecture 5.31. \\
We believe conjecture 3.34 to be true for the same reasons given to justify conjecture 5.31: each vertex which changes side or order in $\Delta'$ forces some distances to decrease.\\
We are only able to prove conjecture 3.34 in two simple cases, which we now illustrate.
\begin{prop}
If there is only one vertex $u_{k_0}'$ of $\Delta'$ such that $u_{k_0}'\in \overline{x_1'x_3'}$ and no other vertex changes order or side, then there is a subpolygon $\widehat \Delta$ of $\Delta'$ comparable to $\Delta$.
\end{prop}
\begin{proof}
In this case it is possible to obtain $\widehat \Delta$ in the following way.\\
One replaces $\overline{x_2'x_3'}$ with $\overline{x_3'u_{k_0}'}*\overline{u_{k_0}'x_2'}$ and checks whether it is possible to find points \\ $\hat u_1,\dots,\hat u_{k_0-1},\hat u_{k_0+1},\dots,\hat u_k\in \overline{x_3'u_{k_0}'}*\overline{u_{k_0}'x_2'}$ corresponding to $u_1',\dots,u_{k_0-1}',u_{k_0+1}',\dots,u_k'$ such that the polygon identified by $\overline{x_1'x_2'},\overline{x_3'u_{k_0}'}*\overline{u_{k_0}'x_2'}, \overline{x_1'x_3'}$ is a subpolygon of $\Delta'$ comparable to $\Delta$.\\
If so, the proof is concluded, since we have found the desired polygon $\widehat \Delta$. \\
If not, consider the orthogonal projection $pr:\Delta'\rightarrow \overline{x_2'x_3'}$: one moves the point $u_{k_0}'$ to $\hat u_{k_0}$ along $\overline{u_{k_0}'pr(u_{k_0}')}$ towards $pr(u_{k_0}')$ until one of the following events happens.
\begin{enumerate}[label=(\roman*)]
\item Replacing $\overline{x_2'x_3'}$ with $\overline{x_3'\hat u_{k_0}}*\overline{\hat u_{k_0}x_2'}$ it is possible to find points $\hat u_1,\dots,\hat u_{k_0-1},$ $\hat u_{k_0+1},\dots,\hat u_k\in \overline{x_3'\hat u_{k_0}}*\overline{\hat u_{k_0}x_2'}$ corresponding to $u_1',\dots,u_{k_0-1}',u_{k_0+1}',\dots,u_k'$ such that the polygon identified by $\overline{x_1'x_2'},\overline{x_3'\hat u_{k_0}}*\overline{\hat u_{k_0}x_2'}, \overline{x_1'x_3'}$ is a subpolygon of $\Delta'$ comparable to $\Delta$. Then the polygon $\widehat \Delta$ is found.
\item The distance of $\hat u_{k_0}$ with one of the points $x_1', v_j', w_l'$ becomes equal to the distance between corresponding points of $\Delta$ (suppose for example $d_{\Delta'}(\hat u_{k_0},v_j')=d_\Delta(u_{k_0},v_j)$). \\
Define $\widehat \Delta$ as the polygon obtained from $\Delta'$ replacing $\overline{x_2'x_3'}$ with $\overline{x_3'\hat u_{k_0}}*\overline{\hat u_{k_0}x_2'}$. The vertices $\hat u_1,\dots, \hat u_{k_0-1}$ of $\widehat \Delta$ corresponding to $ u_1',\dots, u_{k_0-1}'$ will be their orthogonal projection on $\overline{x_3'\hat u_{k_0}}$, while the vertices $\hat u_{k_0+1},\dots, \hat u_{k}$ of $\widehat \Delta$ corresponding to $ u_{k_0+1}',\dots, u_{k}'$ will be their orthogonal projection on $\overline{\hat u_{k_0}x_2'}$. One then cuts $\widehat \Delta$ along $\overline{\hat u_{k_0}v_j'}$ obtaining $\widehat \Delta_1,\widehat \Delta_2$ and cuts $\Delta$ along $\overline{u_{k_0}v_j}$ obtaining $\Delta_1,\Delta_2$.\\
Since both pairs $\Delta_1,\widehat \Delta_1$ and $\Delta_2,\widehat \Delta_2$ satisfy the hypothesis of theorem \ref{lemshort2}, one concludes that there are two 1-Lipschitz maps $\phi_i:\Delta_i\rightarrow \widehat \Delta_i$, $i=1,2$. From the equality $d_{\Delta'}(\hat u_{k_0},v_j')=d_\Delta(u_{k_0},v_j)$ one obtains a 1-Lipschitz map $\phi:\Delta\rightarrow \widehat \Delta$ sending vertices to corresponding vertices, defined by $\phi(p):=\phi_i(p)$ if $p\in \Delta_i$. This proves that the distance between any two vertices of $\Delta$ is greater than or equal to the distance between the two corresponding vertices of $\widehat \Delta$.
\end{enumerate}
\end{proof}
\begin{prop}
If only two adjacent vertices $w_m,w_{m+1}$ of $\Delta$ change order in $\Delta'$ and no vertex of $\Delta$ changes side, then there is a subpolygon $\widehat \Delta$ of $\Delta'$ comparable to $\Delta$.
\end{prop}
\begin{proof}
Figure 13 represents an example of this situation with $m=1$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{fig14}
\caption{The order of $w_1'$ and $w_2'$ is changed.}
\end{figure}
We will move only one of the two vertices $w_m'$ and $w_{m+1}'$, and prove that it is always possible to set $\hat w_m=\hat w_{m+1}:=w_{m+1}'$ or $\hat w_m=\hat w_{m+1}:=w_{m}'$.\\
Clearly, in case one sets $\hat w_m=\hat w_{m+1}:=w_{m+1}'$, then $w_{m+1}'$ becomes a multiple vertex of $\widehat \Delta$ and $w_m'$ is a vertex of $\widehat \Delta$ not in $\iota(Vertices(\Delta))$.\\
All other vertices of $\widehat \Delta$ will coincide with the corresponding vertices of $\Delta'$. As will become clear, our considerations do not change in case $\Delta'$ has one dimensional components, more vertices than $\Delta$, or multiple vertices.\\
We define the following subsets of the sets of vertices of $\Delta$ and $\Delta'$:
$$T_{w_m}:=\{p \text{ is a vertex of } \Delta \text{ such that } d_{\Delta}(p,w_m)\le d_{\Delta}(p,w_{m+1})\},$$
$$T_{w_{m+1}}:=\{p \text{ is a vertex of } \Delta \text{ such that } d_{\Delta}(p,w_{m+1})\le d_{\Delta}(p,w_m)\},$$
$$T_{w_m}':=\{p' \text{ is a vertex of } \Delta' \text{ such that } p\in T_{w_m}\},$$
$$T_{w_{m+1}}':=\{p' \text{ is a vertex of } \Delta' \text{ such that } p\in T_{w_{m+1}}\}.$$
The meaning of these sets is that, in order to be able to set $\hat w_m:=w_{m+1}'$, one has to check only the distances of $w_{m+1}'$ from the points of $T_{w_m}'$, since for every $p'\in T_{w_{m+1}}'$ we have
$$d_{\Delta'}(p',w_{m+1}')\le d_{\Delta}(p,w_{m+1})\le d_{\Delta}(p,w_m).$$
For the same reason, in order to set $\hat w_{m+1}:=w_{m}'$ one has to check only the distances of $w_m'$ from the points of $T_{w_{m+1}}'$.\\
It is possible to develop this reasoning further by defining the following two sets:
$$\hat T_{w_m}:=\{p' \in T_{w_m}' \text{ such that } d_{\Delta'}(p',w_m')\le d_{\Delta'}(p',w_{m+1}')\},$$
$$\hat T_{w_{m+1}}:=\{p' \in T_{w_{m+1}}' \text{ such that } d_{\Delta'}(p',w_{m+1}')\le d_{\Delta'}(p',w_m')\}.$$
Following our previous idea, only points of $\hat T_{w_m}$ (resp. of $\hat T_{w_{m+1}}$) can prevent one from setting $\hat w_m:=w_{m+1}'$ (resp. $\hat w_{m+1}:=w_{m}'$).\\
The example of figure 13 is particularly simple, since the sets $\hat T_{w_1},\hat T_{w_2}$ are empty and consequently it is possible to set both $\hat w_1=\hat w_2:=w_2'$ and $\hat w_1=\hat w_2:=w_1'$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{fig15}
\caption{The convex envelope of $T_{w_1}$ and $T_{w_1}'$ is drawn in green and the convex envelope of $T_{w_2}$ and $T_{w_2}'$ is drawn in orange.}
\end{figure}
One will not always be so lucky: we will use the following simple lemma to study the general case.
\begin{lem}
Consider $w_m',w_{m+1}'\in \overline{x_1'x_2'}$ such that $d_{\Delta'}(x_1',w_{m+1}')<d_{\Delta'}(x_1',w_m')$. \\
If there is a vertex $v'\in \overline{x_1'x_3'}$ such that $d_{\Delta'}(v',w_m') \le d_{\Delta'}(v',w_{m+1}')$, then for every $p'\in \overline{v'x_3'}\cup\overline{w_m'x_2'}\cup\overline{x_2'x_3'}$ it also follows that $d_{\Delta'}(p',w_m') \le d_{\Delta'}(p',w_{m+1}')$.
\end{lem}
\begin{proof}
We first prove that $d_{\Delta'}(p_1',w_m') \le d_{\Delta'}(p_1',w_{m+1}')$ for every $p_1'\in \overline{v'w_{m}'}$. Suppose by contradiction that there is a point $p_2'\in \overline{v'w_{m}'}$ such that $d_{\Delta'}(p_2',w_m') > d_{\Delta'}(p_2',w_{m+1}')$, and denote by $pr:\Delta'\rightarrow \overline{v'w_{m+1}'}$ the orthogonal projection onto $\overline{v'w_{m+1}'}$. Then it would follow that:
$$d_{\Delta'}(v',w_{m+1}')=d_{\Delta'}(v',pr(p_2'))+d_{\Delta'}(pr(p_2'),w_{m+1}')\le d_{\Delta'}(v',p_2')+d_{\Delta'}(p_2',w_{m+1}')<$$ $$<d_{\Delta'}(v',p_2')+d_{\Delta'}(p_2',w_{m}')=d_{\Delta'}(v',w_m')$$
which contradicts the hypothesis.\\
For every point $p'\in \overline{v'x_3'}\cup\overline{w_m'x_2'}\cup\overline{x_2'x_3'}$, define $\widetilde p:=\overline{p'w_{m+1}'}\cap\overline{v'w_m'}$ (notice that if $\overline{v'w_m'}$ is not smooth then it can happen that $p'=\widetilde p$ for $p'\in \overline{x_2'x_3'}$). We obtain:
$$d_{\Delta'}(p',w_{m+1}')=d_{\Delta'}(p',\widetilde p)+d_{\Delta'}(\widetilde p,w_{m+1}')\ge d_{\Delta'}(p',\widetilde p)+d_{\Delta'}(\widetilde p,w_{m}')\ge d_{\Delta'}(p',w_{m}').$$
\end{proof}
Notice that there cannot be points $w_j'$ in $\hat T_{w_m}$ or $\hat T_{w_{m+1}}$, otherwise there would be another change of order of the vertices.\\
One can apply the preceding lemma to make the following inferences.
\begin{itemize}
\item There cannot be both a point $u_j'\in \hat T_{w_{m+1}}$ and a point $v_i'\in \hat T_{w_m}$: if $d_{\Delta'}(v_i',w_{m}')\le d_{\Delta'}(v_i',w_{m+1}')$, then it must follow that $d_{\Delta'}(u_j',w_{m}')\le d_{\Delta'}(u_j',w_{m+1}')$.
\item There cannot be both a point $v_i'\in \hat T_{w_{m+1}}$ and a point $u_j'\in \hat T_{w_m}$: if $d_{\Delta}(v_i,w_{m+1})\le d_{\Delta}(v_i,w_m)$, then it must follow that $d_{\Delta}(u_j,w_{m+1})\le d_{\Delta}(u_j,w_m)$ (since there is clearly an analogous version of the preceding lemma on $\Delta$).
\item There cannot be both a point $v_i'\in \hat T_{w_{m+1}}$ and a point $v_j'\in \hat T_{w_m}$, since they would be forced to have inverted order.
\end{itemize}
One concludes that the two sets $\hat T_{w_{m+1}},\hat T_{w_{m}}$ cannot both be non-empty, and consequently it is always possible to set $\hat w_m:=w_{m+1}'$ or $\hat w_{m+1}:=w_m'$.
\end{proof}
| {
"timestamp": "2018-08-30T02:08:33",
"yymm": "1808",
"arxiv_id": "1808.09734",
"language": "en",
"url": "https://arxiv.org/abs/1808.09734",
"abstract": "The present paper is composed of two parts. In the first one we define two pseudo-metrics $L_F$ and $K_F$ on the Teichmuüller space of semi-translation surfaces $\\mathcal{TQ}_g(\\underline k,\\epsilon)$, which are the symmetric counterparts to the metrics defined by William Thurston on $\\mathcal{T}_g^n$. We prove some nice properties of $L_F$ and $K_F$, most notably that they are complete pseudo-metrics. In the second part we define their asymmetric analogues $L_F^a$ and $K_F^a$ on $\\mathcal{T Q}_g^{(1)}(k, \\epsilon)$ and prove that their equality depends on two statements regarding 1-Lipschitz maps between polygons. We are able to prove the first statement, but the second one remains a conjecture: nonetheless, we explain why we believe it is true.",
"subjects": "Differential Geometry (math.DG); Metric Geometry (math.MG)",
"title": "Thurston's metric on Teichmüller space of semi-translation surfaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474910447999,
"lm_q2_score": 0.7154239836484144,
"lm_q1q2_score": 0.7078029232558347
} |
https://arxiv.org/abs/0811.4451 | Towards Proving Legendre's Conjecture | Legendre's conjecture states that there is a prime number between n^2 and (n+1)^2 for every positive integer n. We consider the following question : for all integer n>1 and a fixed integer k<=n does there exist a prime number such that kn < p < (k+1)n ? Bertrand-Chebyshev theorem answers this question affirmatively for k=1. A positive answer for k=n would prove Legendre's conjecture. In this paper, we show that one can determine explicitly a number N(k) such that for all n >= N(k), there is at least one prime between kn and (k+1)n. Our proof is based on Erdos's proof of Bertrand-Chebyshev theorem and uses elementary combinatorial techniques without appealing to the prime number theorem. | \section{Introduction}
Bertrand's postulate states that for every positive integer $n$, there is always at least one prime $p$ such that $n < p < 2n$. This was first proved by Chebyshev in 1850 and hence the postulate is also called the Bertrand-Chebyshev theorem. Ramanujan gave a simpler proof by using the properties of the Gamma function \cite{ramanujan}, which resulted in the concept of Ramanujan primes. In 1932, Erd\H{o}s published a simpler proof using the Chebyshev function and properties of binomial coefficients \cite{erdos}.
Legendre's conjecture states that there is a prime number between $n^2$ and $(n + 1)^2$ for every positive integer $n$. It is one of Landau's four problems, regarded as four basic problems about prime numbers. The other three are: (i) Goldbach's conjecture: every even integer $n > 2$ can be written as the sum of two primes; (ii) the twin prime conjecture: there are infinitely many primes $p$ such that $p+2$ is prime; (iii) are there infinitely many primes $p$ such that $p-1$ is a perfect square? All these problems remain open to date.
We consider a generalization of Bertrand's postulate: for every integer $n > 1$ and a fixed integer $k \leq n$, does there exist a prime $p$ such that $kn < p < (k+1)n$? This question was first posed by Bachraoui \cite{two}. He provided an affirmative answer for $k=2$ and observed that a positive answer for $k = n$ would prove Legendre's conjecture. The Bertrand-Chebyshev theorem answers this question affirmatively for $k = 1$. In this paper, we show that one can determine explicitly a number $N_k$ such that for all $n \geq N_k$, there is at least one prime between $kn$ and $(k+1)n$. Note that the prime number theorem guarantees the existence of such an $N_k$. The interesting feature of our proof is that elementary combinatorial techniques suffice to obtain an explicit bound on $N_k$. Our proof is motivated by Erd\H{o}s's proof of the Bertrand-Chebyshev theorem \cite{erdos}.
Let $\pi(x)$ denote the number of primes not greater than $x$, and let $\ln(x)$ denote the natural logarithm of $x$. We write $k \mid n$ when $k$ divides $n$. We let $n$ run through the natural numbers and $p$ through the primes. Let $\phi(a,b)$ denote the product of all primes greater than $a$ and not greater than $b$, i.e.,
\begin{equation*}
\phi(a,b) = \displaystyle\prod_{a < p \leq b}{p}
\end{equation*}
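As a concrete illustration of this notation (ours, not part of the paper), the functions $\phi(a,b)$ and $\pi(x)$ can be sketched in Python with a naive trial-division primality test, which is adequate for small inputs:

```python
def is_prime(m: int) -> bool:
    """Trial-division primality test; adequate for small inputs."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def phi(a: int, b: int) -> int:
    """Product of all primes p with a < p <= b."""
    prod = 1
    for p in range(a + 1, b + 1):
        if is_prime(p):
            prod *= p
    return prod

def pi(x: int) -> int:
    """Number of primes not greater than x."""
    return sum(1 for m in range(2, x + 1) if is_prime(m))
```

For example, `phi(1, 10)` is $2\cdot3\cdot5\cdot7 = 210$ and `pi(10)` is $4$.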
\section{Lemmas}
In this section, we present several lemmas which are used in the proof of our main
theorem, presented in the next section.
\begin{lemma}\label{lem:basic1}
If $k|n$ then
\begin{equation*}
{\frac{(k+1)n}{k} \choose n} < \left(\frac{(k+1)^{(k+1)}}{k^k}\right)^{\frac{n}{k}}
\end{equation*}
If $k|(n+l)$, $0 < l < k$ and $n > (k+1)^k$ then
\begin{equation*}
{\frac{(k+1)n+l}{k} \choose n} < \left(\frac{(k+1)^{(k+1)}}{k^k}\right)^{\frac{n+l}{k}}
\end{equation*}
\end{lemma}
\begin{proof}
We prove this lemma for $l=0$. The case $0 < l < k$ is similar. We use induction on $n$. It is easy to see that
\begin{equation*}
{{k+1} \choose k} < \frac{(k+1)^{(k+1)}}{k^k}
\end{equation*}
Let the inequality hold for ${(k+1)n \choose kn}$. Then
\begin{eqnarray*}
{(k+1)n+(k+1) \choose kn+k} & = & {(k+1)n \choose kn}\frac{((k+1)n+1)\dots((k+1)n+(k+1))}{(n+1)(kn+1)\dots(kn+k)} \\
& = & {(k+1)n \choose kn}\frac{(k+1)((k+1)n+1)\dots((k+1)n+k)}{(kn+1)\dots(kn+k)} \\
\end{eqnarray*}
Comparing the coefficients of $n^k$ and $n^{k-1}$ in the numerator and the denominator we have, for all $n>k$
\begin{equation*}
\frac{(k+1)((k+1)n+1)\dots((k+1)n+k)}{(kn+1)\dots(kn+k)} < \frac{(k+1)^{(k+1)}}{k^k}
\end{equation*}
\end{proof}
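As a numerical sanity check of Lemma \ref{lem:basic1} (our own addition, not part of the proof), the $l=0$ inequality can be verified exactly in integer arithmetic for small parameters. Writing $n=km$, the inequality becomes $\binom{(k+1)m}{km} \, k^{km} < (k+1)^{(k+1)m}$ after clearing denominators:

```python
from math import comb

def lemma1_holds(k: int, m: int) -> bool:
    """Check binom((k+1)m, km) < ((k+1)^(k+1) / k^k)^m for n = k*m,
    i.e. with denominators cleared:
    binom((k+1)m, km) * k^(k*m) < (k+1)^((k+1)*m)."""
    return comb((k + 1) * m, k * m) * k ** (k * m) < (k + 1) ** ((k + 1) * m)
```

For instance, with $k=2$, $m=1$ this reads $3\cdot 4 = 12 < 27$.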
\begin{lemma}\label{lem:basic2}
If $k|n$ and $n \geq k(k+1)^{(k+1)}$ then
\begin{equation*}
{\frac{(k+1)n}{k} \choose n} > \left(\frac{(k+1)^{(k+1)}-1}{k^k}\right)^{\frac{n}{k}}
\end{equation*}
\end{lemma}
\begin{proof}
It is easy to prove that the inequality holds for $n = k(k+1)^{(k+1)}$. Let $S_k$ denote the sum of integers from 1 to $k$, i.e., $S_k = \sum_{i=1}^{k}{i}$. Following the previous proof and comparing the coefficients of $n^k$ and $n^{k-1}$ in the numerator and the denominator, for all $n$ such that $nk^k > {S_k}({k^{k-1}}((k+1)^{k+1}-1)-{k^k}{(k+1)^k})$ we have
\begin{equation*}
\frac{(k+1)((k+1)n+1)\dots((k+1)n+k)}{(kn+1)\dots(kn+k)} > \frac{(k+1)^{(k+1)}-1}{k^k}
\end{equation*}
\end{proof}
\begin{lemma}\label{lem:nknk}
Let $N_k = k(k+1)^{2k+2}$. If $n \geq N_k$ and $k > 1$ then
\begin{equation*}
{\left(\frac{(k+1)^{(k+1)}-1}{k^k}\right)^n}{\left(\frac{1}{(k+1)^{(k+1)}}\right)^{\frac{n}{k}}}\ \ >\ \ ((k+1)n)^{\frac{\sqrt{(k+1)n}}{k}}
\end{equation*}
\end{lemma}
\begin{proof}
The following inequalities are equivalent:
\begin{eqnarray*}
{\left(\frac{(k+1)^{(k+1)}-1}{k^k}\right)^n}{\left(\frac{1}{(k+1)^{(k+1)}}\right)^{\frac{n}{k}}} > {((k+1)n)}^{\frac{\sqrt{(k+1)n}}{k}} \\
{\frac{k}{\sqrt{(k+1)}}}{\ln}{\left({{\left(\frac{(k+1)^{(k+1)}-1}{k^k}\right)}{\left(\frac{1}{(k+1)^{(k+1)}}\right)^{\frac{1}{k}}}}\right)} > \frac{{\ln}{((k+1)n)}}{\sqrt{n}} \\
\end{eqnarray*}
The function $\frac{{\ln}((k+1)x)}{\sqrt{x}}$ is decreasing and the above inequality holds for $n = N_k$.
\end{proof}
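The inequality of Lemma \ref{lem:nknk} can likewise be checked numerically at $n = N_k$ (a sketch we add for illustration; comparing logarithms avoids computing the huge powers directly):

```python
import math

def lemma_nknk_holds(k: int, n: int) -> bool:
    """Check, via logarithms, the inequality of the lemma:
    (((k+1)^(k+1)-1)/k^k)^n * (1/(k+1)^(k+1))^(n/k) > ((k+1)n)^(sqrt((k+1)n)/k)."""
    lhs = n * math.log(((k + 1) ** (k + 1) - 1) / k ** k) \
        - (n / k) * math.log((k + 1) ** (k + 1))
    rhs = (math.sqrt((k + 1) * n) / k) * math.log((k + 1) * n)
    return lhs > rhs
```

For $k=2$ and $n = N_2 = 2\cdot 3^6 = 1458$, the left side is about $326$ and the right side about $277$.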
\begin{lemma}\label{lem:product}
If $k|n$ then
\begin{equation*}
\phi\left(\frac{n}{k},\frac{(k+1)n}{(k+2)}\right)\phi\left(n,\frac{(k+1)n}{k}\right) < {\frac{(k+1)n}{k} \choose n}
\end{equation*}
If $k|(n+l)$, $0 < l < k$, then
\begin{equation*}
\phi\left(\frac{n+l}{k},\frac{(k+1)n}{(k+2)}\right)\phi\left(n,\frac{(k+1)n+l}{k}\right) < {\frac{(k+1)n+l}{k} \choose n}
\end{equation*}
\end{lemma}
\begin{proof}
We prove this lemma for $l=0$. The case $0 < l < k$ is similar. We have
\begin{equation}\label{eqn:prod}
{\frac{(k+1)n}{k} \choose n} = \frac{(n+1)\dots\frac{(k+1)n}{k}}{\frac{n}{k}!}
\end{equation}
Clearly $\phi\left(n,\frac{(k+1)n}{k}\right)$ divides ${\frac{(k+1)n}{k} \choose n}$. If $\frac{n}{k} < p \leq \frac{(k+1)n}{(k+2)}$ then $kp$ occurs in the numerator of (\ref{eqn:prod}) but $p$ does not occur in the denominator. Cancelling $kp$ against a number of the form ${\alpha}k$ from the denominator leaves the prime factor $p$ in ${\frac{(k+1)n}{k} \choose n}$. Hence $\phi\left(\frac{n}{k},\frac{(k+1)n}{(k+2)}\right)$ divides ${\frac{(k+1)n}{k} \choose n}$ too, and the lemma follows.
\end{proof}
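A direct numerical check of the $l=0$ inequality of Lemma \ref{lem:product} for small cases (our illustration; the primality test is naive, and non-integer interval endpoints are truncated when enumerating primes):

```python
from math import comb

def is_prime(m):
    return m > 1 and all(m % i for i in range(2, int(m ** 0.5) + 1))

def phi(a, b):
    """Product of all primes p with a < p <= b (endpoints truncated to ints)."""
    prod = 1
    for p in range(int(a) + 1, int(b) + 1):
        if is_prime(p):
            prod *= p
    return prod

def lemma_product_holds(k, m):
    n = k * m                      # ensure k | n
    lhs = phi(n // k, (k + 1) * n / (k + 2)) * phi(n, (k + 1) * n // k)
    return lhs < comb((k + 1) * m, k * m)
```

For example, with $k=2$, $n=6$: $\phi(3, 4.5)\,\phi(6,9) = 1\cdot 7 = 7 < \binom{9}{6} = 84$. (For very small $n$, such as $k=2$, $n=4$, the two sides can coincide, so the check is meaningful only once $n$ is moderately large.)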
\section{The proof of main theorem}
\begin{theorem}\label{thm:main}
For any integer $k>1$, there exists a number $N_k$ such that for all $n \geq N_k$, there is at least one prime between $kn$ and $(k+1)n$.
\end{theorem}
\begin{proof}
The product of primes between $kn$ and $(k+1)n$, if there are any, divides ${{(k+1)n} \choose kn}$. For a fixed prime $p$, let $\beta(p)$ be the largest integer $x$ such that $p^x$ divides $\textstyle\binom{(k+1)n}{kn}$. Write ${{(k+1)n} \choose kn} = P_1P_2P_3$, where
\begin{equation*}
P_1 = \displaystyle\prod_{p \leq \sqrt{(k+1)n}}{p^{\beta(p)}},\ \ \ \ \ \
P_2 = \displaystyle\prod_{\sqrt{(k+1)n} < p \leq kn}{p^{\beta(p)}},\ \ \ \ \ \
P_3 = \displaystyle\prod_{kn < p \leq (k+1)n}{p^{\beta(p)}}
\end{equation*}
\noindent To prove the theorem we have to show that $P_3 > 1$ for $n \geq N_k$. Clearly, $P_1 < ((k+1)n)^{\pi(\sqrt{(k+1)n})}$. From Lemma \ref{lem:p2bound}, we have $P_2 < ((k+1)^{(k+1)})^{\frac{n}{k}}$. From Lemmas \ref{lem:basic1} and \ref{lem:basic2}, we have
\begin{equation*}
\left(\frac{(k+1)^{(k+1)}-1}{k^k}\right)^n < P_1P_2P_3 < ((k+1)n)^{\pi(\sqrt{(k+1)n})}{((k+1)^{(k+1)})^{\frac{n}{k}}}P_3
\end{equation*}
Using Lemma \ref{lem:nknk} and $\pi(\sqrt{(k+1)n}) \leq \frac{\sqrt{(k+1)n}}{2}$ we have
\begin{equation*}
P_3 > \left(\frac{(k+1)^{(k+1)}-1}{k^k}\right)^n{\left({\frac{1}{(k+1)^{(k+1)}}}\right)^{\frac{n}{k}}}{\frac{1}{((k+1)n)^{\pi(\sqrt{(k+1)n})}}} > 1
\end{equation*}
\end{proof}
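Although the bound $N_k = k(k+1)^{2k+2}$ of Lemma \ref{lem:nknk} is far from tight, the conclusion of Theorem \ref{thm:main} is easy to test empirically for small parameters (our own sanity check, using a naive primality test):

```python
def is_prime(m: int) -> bool:
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def prime_between(k: int, n: int) -> bool:
    """Is there a prime p with k*n < p < (k+1)*n ?"""
    return any(is_prime(p) for p in range(k * n + 1, (k + 1) * n))

# Empirically, a prime appears well before n reaches N_k:
assert all(prime_between(k, n) for k in range(1, 8) for n in range(k + 1, 300))
```

For instance, `prime_between(7, 8)` holds because $59$ and $61$ lie between $56$ and $64$.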
\begin{lemma}\label{lem:p2bound}
Let $P_2$ be as defined in the proof of Theorem \ref{thm:main}. Then $P_2 < ((k+1)^{(k+1)})^{\frac{n}{k}}$.
\end{lemma}
\begin{proof}
We have
\begin{equation}\label{eqn:choose}
{{(k+1)n} \choose kn} = \frac{(kn+1)(kn+2){\cdot}{\cdot}{\cdot}(k+1)n}{1{\cdot}2{\cdot}{\cdot}{\cdot}n}.
\end{equation}
The prime decomposition \cite{erdos-book} of ${{(k+1)n} \choose kn}$ implies that the powers of primes in $P_2$ are less than 2. Clearly, a prime $p$ satisfying $\frac{(k+1)n}{k+2} < p \leq n$ appears in the denominator of (\ref{eqn:choose}) but $2p$ does not, and $(k+1)p$ appears in the numerator of (\ref{eqn:choose}) but $(k+2)p$ does not. Hence the powers of such primes in $P_2$ are 0. Also, if a prime $p$ satisfies $\frac{(k+1)n}{k} < p \leq kn$ then its power in $P_2$ is 0, because it appears neither in the denominator nor in the numerator of (\ref{eqn:choose}). We have
\begin{eqnarray*}
P_2 & < & \phi\left(\sqrt{(k+1)n},\frac{n}{k}\right)\phi\left(\frac{n}{k},\frac{(k+1)n}{(k+2)}\right)\phi\left(n,\frac{(k+1)n}{k}\right) \\
& < & 4^{\frac{n}{k}}{\frac{(k+1)n}{k} \choose n} \\
& < & 4^{\frac{n}{k}}\left(\frac{(k+1)^{(k+1)}}{k^k}\right)^{\frac{n}{k}} \\
& < & ((k+1)^{(k+1)})^{\frac{n}{k}} \\
\end{eqnarray*}
We used Lemmas \ref{lem:product} and \ref{lem:basic1}, together with the fact that $\displaystyle\prod_{p\leq{x}}{p} < 4^x$. Similarly, we get the same bound when $0 < l < k$ in Lemmas \ref{lem:product} and \ref{lem:basic1}.
\end{proof}
\begin{comment}
\section{Small values of $k$}
The following table shows the values of $N_k$ for small values of $k$.
\begin{center}
\begin{tabular}{|c|r|}
\hline
$k$ & $N_k$ \\
\hline
1 & 1 \\
2 & 2 \\
3 & 2 \\
4 & 2 \\
5 & 2 \\
\hline
\end{tabular}
\end{center}
Growth of $N_k$, graph....
\end{comment}
\bibliographystyle{plain}
| {
"timestamp": "2009-01-11T09:23:01",
"yymm": "0811",
"arxiv_id": "0811.4451",
"language": "en",
"url": "https://arxiv.org/abs/0811.4451",
"abstract": "Legendre's conjecture states that there is a prime number between n^2 and (n+1)^2 for every positive integer n. We consider the following question : for all integer n>1 and a fixed integer k<=n does there exist a prime number such that kn < p < (k+1)n ? Bertrand-Chebyshev theorem answers this question affirmatively for k=1. A positive answer for k=n would prove Legendre's conjecture. In this paper, we show that one can determine explicitly a number N(k) such that for all n >= N(k), there is at least one prime between kn and (k+1)n. Our proof is based on Erdos's proof of Bertrand-Chebyshev theorem and uses elementary combinatorial techniques without appealing to the prime number theorem.",
"subjects": "Number Theory (math.NT)",
"title": "Towards Proving Legendre's Conjecture",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474885320984,
"lm_q2_score": 0.7154239836484143,
"lm_q1q2_score": 0.7078029214581878
} |
https://arxiv.org/abs/2201.09850 | Growth of bilinear maps III: Decidability | The following notion of growth rate can be seen as a generalization of joint spectral radius: Given a bilinear map $*:\mathbb R^d\times\mathbb R^d\to\mathbb R^d$ with nonnegative coefficients and a nonnegative vector $s\in\mathbb R^d$, denote by $g(n)$ the largest possible entry of a vector obtained by combining $n$ instances of $s$ using $n-1$ applications of $*$. Let $\lambda$ denote the growth rate $\limsup_{n\to\infty} \sqrt[n]{g(n)}$, Rosenfeld showed that the problem of checking $\lambda\le 1$ is undecidable by reducing the problem of joint spectral radius.In this article, we provide a simpler reduction using the observation that matrix multiplication is actually a bilinear map. Suppose there is no restriction on the signs, an application of this reduction is that the problem of checking if the system can produce a zero vector is undecidable by reducing the problem of checking the mortality of a pair of matrices. This answers a question asked by Rosenfeld. Another application is that the problem does not become harder when we introduce more bilinear maps or more starting vectors, which was remarked by Rosenfeld.It is known that if the vector $s$ is strictly positive, then the limit superior $\lambda$ is actually a limit. However, we show that when $s$ is only nonnegative, the problem of checking the validity of the limit is undecidable. This also answers a question asked by Rosenfeld.We provide a formula for the growth rate $\lambda$. A condition is given so that the limit is always ensured. This actually gives a simpler proof for the limit $\lambda$ when $s>0$. An important corollary of the formula is the computability of the growth rate, which answers another question by Rosenfeld. Also, we relate the finiteness property of a set of matrices to a special structure called "linear pattern" for the problem of bilinear system. | \section{Introduction}
Given a bilinear map $*:\mathbb R^d\times\mathbb R^d\to\mathbb R^d$ with \emph{nonnegative} coefficients and a \emph{positive} vector $s\in\mathbb R^d$, denote by $g(n)$ the largest possible entry of a vector obtained by combining $n$ instances of $s$ using $n-1$ applications of $*$. For example, all the combinations of $4$ instances of $s$ are
\[
s*(s*(s*s)), s*((s*s)*s), (s*s)*(s*s), (s*(s*s))*s, ((s*s)*s)*s.
\]
It was shown in \cite{bui2021growth} that the following limit exists:
\[
\lambda=\lim_{n\to\infty} \sqrt[n]{g(n)}.
\]
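For small $n$, the quantity $g(n)$ can be computed by brute force over all parenthesizations. The sketch below is our own illustration (not from the paper): it takes the bilinear map to be $2\times 2$ matrix multiplication on row-major flattenings, the observation exploited later in Section \ref{sec:reduction}, so every combination of $n$ instances of $s$ yields the same matrix power and $g$ is easy to predict.

```python
def star(u, v):
    """Bilinear map on R^4 given by 2x2 matrix multiplication of the
    row-major flattenings of u and v."""
    a, b, c, d = u
    w, x, y, z = v
    return (a * w + b * y, a * x + b * z, c * w + d * y, c * x + d * z)

def g(n, s):
    """Largest entry over all combinations of n instances of s
    using n-1 applications of star (brute force over parenthesizations)."""
    vals = {1: {tuple(s)}}
    for m in range(2, n + 1):
        vals[m] = {star(u, v)
                   for i in range(1, m)
                   for u in vals[i] for v in vals[m - i]}
    return max(max(v) for v in vals[n])

# s = flattening of [[1, 1], [0, 1]]; since matrix multiplication is
# associative, every parenthesization gives the same power, so g(n) = n.
s = (1, 1, 0, 1)
```

With this choice of $s$, `g(4, s)` returns `4`, the largest entry of $\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)^4$.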
When the entries of $s$ are not necessarily positive but only \emph{nonnegative}, the limit $\lambda$ may no longer exist. However, relaxing the requirements on the signs in this way often arises in applications. Therefore, Rosenfeld in \cite{rosenfeld2022undecidable} extends the notion of the growth rate $\lambda$ to the case where $s$ is nonnegative by defining
\[
\lambda=\limsup_{n\to\infty} \sqrt[n]{g(n)},
\]
which is called the \emph{growth rate of the bilinear system} $(*,s)$.
Let us call the original setting the positive setting and the latter setting the nonnegative setting (with respect to the sign of $s$).
The study of this problem was first initiated by Rote in \cite{rote2019maximum}, with the maximum number of minimal dominating sets in a tree of $n$ leaves as an example. Later, a richer set of applications to the maximum number of different types of dominating sets, perfect codes, different types of matchings, and maximal irredundant sets in a tree was given by Rosenfeld in \cite{rosenfeld2021growth}.
For application purposes, estimating $\lambda$ is a natural problem. In \cite{bui2021growth2} it was shown that the limit in the positive setting can be approximated to arbitrary precision. In \cite{rosenfeld2021growth} the growth rate in the nonnegative setting was shown to be upper semi-computable, i.e. one can generate a sequence of upper bounds converging to $\lambda$. In this article, we show that the growth rate in the nonnegative setting is also lower semi-computable, so that the growth rate is computable. However, there remains the problem of checking whether $\lambda\le 1$. In \cite{rosenfeld2022undecidable} Rosenfeld shows that checking $\lambda\le 1$ is undecidable in the nonnegative setting by reducing from the problem of checking $\rho\le 1$ for the joint spectral radius $\rho$. The notion of joint spectral radius was first introduced in \cite{rota1960note}, and the growth of bilinear maps can be seen as a generalization.
In this paper, we provide another proof with a simpler reduction, using the observation that matrix multiplication is itself a bilinear map, as in Theorem \ref{thm:undecidable-estimate}. The reduction is natural, and the products of the matrices appear in embedded form in the resulting vectors. Note that it is still open whether the problem of checking $\lambda\le 1$ in the positive setting is undecidable. We attempt to prove its undecidability under the assumption that it is undecidable to check $\rho\le 1$ for the joint spectral radius $\rho$ of a pair of \emph{positive} matrices, as in Section \ref{sec:undecidable-positive-setting}.
The undecidability of the problem for joint spectral radius is actually proved by
\[
HP \le PFAE \le JSR,
\]
where $HP$ denotes the halting problem, $PFAE$ denotes the problem of probabilistic finite automaton emptiness and $JSR$ denotes the problem of joint spectral radius. (We denote $A\le B$ if Problem $A$ can be reduced to Problem $B$.) To be more precise, we mean by $JSR$ the problem of checking $\rho\le 1$.
Note that all these problems are in fact Turing equivalent (i.e. each is reducible to any other), since we have a reduction from $JSR$ to $HP$ by the joint spectral radius theorem, which states that for a finite set $\Sigma$ of matrices we have
\begin{equation}\label{eq:jsr-theorem}
\rho(\Sigma)=\sup_n\max_{A_1,\dots,A_n\in\Sigma}\sqrt[n]{\rho(A_1\dots A_n)},
\end{equation}
where $\rho$ denotes both the joint spectral radius of a set of matrices and the ordinary spectral radius of a matrix, depending on the argument. Indeed, we just run the program that looks for a sequence of matrices whose product has spectral radius greater than $1$. The program does not stop if and only if $\rho(\Sigma)\le 1$. (Note that the corresponding problem for the spectral radius ($SR$) is decidable.)
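The search procedure just described is easy to sketch (our illustration): enumerate products of bounded length and keep the best normalized spectral radius, which by Equation \eqref{eq:jsr-theorem} is a lower bound on $\rho(\Sigma)$ converging to it as the length grows. For $2\times 2$ matrices the spectral radius is available directly from the characteristic polynomial:

```python
import itertools
from functools import reduce

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rho2(M):
    """Spectral radius of a 2x2 matrix from its characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det
    if disc < 0:                 # complex conjugate pair: |lambda|^2 = det
        return det ** 0.5
    r = disc ** 0.5
    return max(abs(tr + r), abs(tr - r)) / 2

def jsr_lower_bound(mats, max_len):
    """max over products of length <= max_len of rho(A_1...A_n)^(1/n)."""
    return max(rho2(reduce(matmul2, seq)) ** (1.0 / n)
               for n in range(1, max_len + 1)
               for seq in itertools.product(mats, repeat=n))
```

For the standard pair $\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right), \left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$, whose joint spectral radius is known to be the golden ratio, the lower bound already reaches $(1+\sqrt 5)/2$ at length $2$.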
On the other hand, a formula for $\lambda$ in Section \ref{sec:nonnegative-setting} allows a reduction from checking $\lambda\le 1$ to the halting problem. We state the formula as follows, in a form that resembles the joint spectral radius theorem:
\begin{equation}\label{eq:formula-for-lambda}
\lambda=\sup_n \max_{\substack{\text{linear pattern $P$}\\ |P|=n}} \sqrt[n]{\rho(M(P))}.
\end{equation}
We do not explain the terms in detail here (this is done in Section \ref{sec:nonnegative-setting}), but roughly speaking a linear pattern is a sequence $x_n$, $n=0,1,2,\dots$, such that $x_0=s$ and, for $n\ge 1$, $x_n$ is a combination of some instances of $s$ and exactly \emph{one} instance of $x_{n-1}$. The notation $|P|$ denotes the number of instances of $s$, and the matrix $M=M(P)$ represents the linear relation $x_n=Mx_{n-1}$ for every $n\ge 1$. The reduction from checking $\lambda\le 1$ to the halting problem is similar to the one for the problem of checking $\rho(\Sigma)\le 1$.
This establishes the relation of these problems to the problem of the growth rate of a bilinear system ($GRBS$). An interesting point is that, using reductions of the same kind as the one for $JSR\le GRBS$, we can show that the problem for the growth rate does not become harder when multiple operators and multiple starting vectors are allowed. This was first remarked by Rosenfeld in \cite{rosenfeld2022undecidable}. Let us call this variant the joint growth rate of a bilinear system ($JGRBS$). In total, we have
\[
SR < HP = PFAE = JSR = GRBS = JGRBS,
\]
where $A<B$ means $A\le B$ but we do not have $B\le A$, and $A=B$ means $A\le B$ and $B\le A$, that is each of $A,B$ is reducible to the other, i.e. they are Turing equivalent.
Note that we do not yet have a natural reduction from $GRBS$ to $JSR$ like the one for $JSR\le GRBS$. Such a reduction would be very desirable and would have several consequences, as discussed in Section \ref{sec:applications}.
In \cite{rosenfeld2022undecidable} Rosenfeld asks the following question: if the coefficients of $*$ and the entries of $s$ have no condition on the signs (they may even be complex), is the problem of checking whether the system can produce a zero vector decidable? A negative answer is given in Theorem \ref{thm:undecidable-mortal}. It uses almost the same construction as the one for $JSR\le GRBS$, but reduces from the problem of checking the mortality of a pair of matrices instead.
Since the reduction for $JSR\le GRBS$ is quite natural, we can relate the finiteness property \cite{lagarias1995finiteness} for the joint spectral radius to a result on whether the rate of a linear pattern can attain the growth rate.
A set $\Sigma$ of matrices is said to have the finiteness property if the supremum in Equation \eqref{eq:jsr-theorem} for the joint spectral radius theorem is attained, that is, there exist $A_1,\dots,A_n\in\Sigma$ such that $\sqrt[n]{\rho(A_1\dots A_n)}=\rho(\Sigma)$. Meanwhile, the rate of a linear pattern $P$ is $\sqrt[|P|]{\rho(M(P))}$, and the supremum in Equation \eqref{eq:formula-for-lambda} is not always attained: there exists a system where no linear pattern has the same rate as the growth rate (e.g. see \cite{bui2021growth}).
The relation is presented in Section \ref{sec:applications}.
Checking whether $\lambda$ is actually a limit is also an interesting problem, whose decidability was asked by Rosenfeld in a correspondence. Theorem \ref{thm:undecidable-validity} shows that it is undecidable, by reduction from the problem of checking $\lambda\le 1$.
Along the way, we transform $(*,s)$ into a new system with corresponding function $g'(n)$ so that for every $m\ge 1$ we have $g'(2m)=g(m)$ and $g'(2m+1)=0$.
As an attempt to study the nonnegative setting, we extend the formula for $\lambda$ in \cite{bui2021growth2} to the nonnegative setting in Section \ref{sec:nonnegative-setting}. Using the formula, we give a condition under which the limit is always ensured. This also serves as a proof of the limit $\lambda$ in the positive setting that is considerably simpler than the proof in \cite{bui2021growth}. Another corollary is a transform so that the new system has the same growth rate as the original one but with a valid limit. In fact, the computability of the growth rate in the nonnegative setting is derived from the formula, as in Theorem \ref{thm:computable}.
\section{Reductions}
\label{sec:reduction}
\subsection*{The common construction}
We present a construction that will be used in the proofs of both Theorems \ref{thm:undecidable-estimate} and \ref{thm:undecidable-mortal}.
Since matrix multiplication is itself a bilinear map $\mathbb R^{d^2}\times\mathbb R^{d^2}\to\mathbb R^{d^2}$, we use a consistent embedding of a matrix $A$ as a vector $v$ in the space $\mathbb R^{d^2}$. We denote the embedding by two functions $\Gamma,\tilde \Gamma$ so that
\[
A=\Gamma(v),\qquad v=\tilde \Gamma(A).
\]
We also denote by $v_{[i,j]}$ the subvector of $v$ consisting of the $k$-th entries for $i\le k\le j$.
Let $A,B$ be a pair of matrices in $\mathbb R^d$. Consider the space $\mathbb R^{3d^2+2}$ and denote by $R_A,R_B,R_C$ the ranges $[1,d^2], [d^2+1,2d^2], [2d^2+1,3d^2]$, respectively, and set $i=3d^2+1$, $j=3d^2+2$.
Let the system $(*,s)$ with $*:\mathbb R^{3d^2+2}\times\mathbb R^{3d^2+2}\to\mathbb R^{3d^2+2}$ and $s\in\mathbb R^{3d^2+2}$ be so that
\begin{gather*}
s_{R_A}=\tilde \Gamma(A),\qquad s_{R_B}=\tilde \Gamma(B),\qquad s_{R_C}=0,\\
s_i=1,\qquad s_j=0,
\end{gather*}
and for any two vectors $x, y$,
\begin{gather*}
(x*y)_{R_A} = (x*y)_{R_B} = 0,\\
(x*y)_{R_C} = \tilde \Gamma( \Gamma(x_{R_C}) \Gamma(y_{R_C}) ) + x_j y_{R_A} + x_{R_B} y_j,\\
(x*y)_i = 0,\qquad (x*y)_j = x_i y_i.
\end{gather*}
Let us analyse the vector $v$ obtained from a combination of $n$ instances of $s$. Obviously, $v_i$ is nonzero only when $n=1$, and $v_j$ is nonzero only when $n=2$. Also, $v_{R_A}, v_{R_B}$ are zero whenever $n\ge 2$. It remains to consider the dimensions in $R_C$.
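As a sanity check, the construction can be sketched numerically. The following Python sketch (our illustration, not part of the formal argument; the helper `make_system` and the test pair $A,B$ are ours) builds $(*,s)$ for a pair of $2\times 2$ matrices and verifies that the combinations $(s*s)*s$ and $s*(s*s)$ recover $A$ and $B$ in the $R_C$ block, as noted in the proof of Proposition \ref{prop:buffer-analysis}.

```python
import numpy as np

def make_system(A, B):
    """Sketch of the common construction: embed a pair of d x d matrices
    (A, B) into a bilinear system (*, s) on R^(3d^2+2)."""
    d = A.shape[0]
    D = 3 * d * d + 2
    RA, RB, RC = (slice(0, d * d),
                  slice(d * d, 2 * d * d),
                  slice(2 * d * d, 3 * d * d))
    i, j = 3 * d * d, 3 * d * d + 1   # the two extra coordinates

    s = np.zeros(D)
    s[RA], s[RB] = A.flatten(), B.flatten()   # s_{R_A}, s_{R_B}
    s[i] = 1.0                                # s_i = 1, everything else 0

    def star(x, y):
        v = np.zeros(D)   # (x*y)_{R_A} = (x*y)_{R_B} = (x*y)_i = 0
        v[RC] = ((x[RC].reshape(d, d) @ y[RC].reshape(d, d)).flatten()
                 + x[j] * y[RA] + x[RB] * y[j])
        v[j] = x[i] * y[i]
        return v

    return star, s, RC

# A hypothetical test pair; any two 2 x 2 matrices work here
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
star, s, RC = make_system(A, B)
print(star(star(s, s), s)[RC].reshape(2, 2))  # R_C block of (s*s)*s encodes A
print(star(s, star(s, s))[RC].reshape(2, 2))  # R_C block of s*(s*s) encodes B
```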
\begin{proposition} \label{prop:buffer-analysis}
The vector $\bar v=v_{R_C}$ is zero whenever $n$ is not divisible by $3$. When $n=3m$, if $\bar v$ is nonzero then the matrix form $\Gamma(\bar v)$ is a product of $m$ matrices from $\{A,B\}$. On the other hand, for any $M_1,\dots,M_m\in\{A,B\}$, there is a combination for $n=3m$ so that $\Gamma(\bar v)=M_1\dots M_m$. Moreover, if every subcombination (contained in a matching pair of brackets) has either fewer than $3$ instances of $s$ or a multiple of $3$ instances of $s$, then $\Gamma(\bar v)$ is a product of $m$ matrices from $\{A,B\}$ for $n=3m$.
\end{proposition}
\begin{proof}
At first, we make an observation: If the sum $x_j y_{R_A} + x_{R_B} y_j$ in the expression of $(x*y)_{R_C}$ is not zero, then $n=3$ due to the analysis of $v_{R_A}, v_{R_B}$ and $v_i, v_j$. We have $\bar v=\tilde \Gamma(A)$ if the combination is $(s*s)*s$, and $\bar v=\tilde \Gamma(B)$ for $s*(s*s)$.
We prove the proposition by induction. It trivially holds for any $n\le 3$. We show that it holds for any $n>3$ provided that it holds for smaller numbers.
Let the combination be $X*Y$, where $X,Y$ are combinations of $n_1$ and $n_2$ instances of $s$, respectively, with $n=n_1+n_2$.
As $n>3$, the summands $X_j Y_{R_A}, X_{R_B} Y_j$ in the expression of $(X*Y)_{R_C}$ are zero, hence we can safely ignore these summands, that is
\[
\bar v = (X*Y)_{R_C} = \tilde \Gamma( \Gamma(X_{R_C}) \Gamma(Y_{R_C}) ).
\]
Suppose $n$ is not divisible by $3$, then one of $n_1,n_2$ is not divisible by $3$. By induction hypothesis, either $X_{R_C}$ or $Y_{R_C}$ is zero. It follows that $\bar v=0$.
Suppose $n=3m$ and $\bar v$ is not zero; then $n_1,n_2$ are also divisible by $3$, say $n_1=3m_1$, $n_2=3m_2$. By the induction hypothesis, $\Gamma(X_{R_C})$ and $\Gamma(Y_{R_C})$ are products of $m_1$ and $m_2$ matrices from $\{A,B\}$, respectively. Hence $\Gamma(\bar v)=\Gamma(X_{R_C})\Gamma(Y_{R_C})$ is also a product of $m_1+m_2=m$ matrices from $\{A,B\}$.
On the other hand, for any $M_1,\dots, M_m\in\{A,B\}$, by the induction hypothesis let $X$ be a combination of $3(m-1)$ instances of $s$ so that $\Gamma(X_{R_C})=M_1\dots M_{m-1}$ and let $Y$ be a combination of $3$ instances of $s$ so that $\Gamma(Y_{R_C})=M_m$. It follows that $\Gamma(\bar v)=\Gamma(X_{R_C})\Gamma(Y_{R_C})=M_1\dots M_m$.
The conclusion relating to the number of instances of $s$ in a subcombination is argued in a similar way.
The proof finishes by induction.
\end{proof}
\subsection*{Joint spectral radius}
Before proving that some problems are undecidable, we remind the reader of the notion of the joint spectral radius.
Given a set of matrices $\Sigma$ in $\mathbb R^d$, the joint spectral radius $\rho(\Sigma)$ of $\Sigma$ is defined to be the limit
\[
\rho(\Sigma)=\lim_{n\to\infty} \sqrt[n]{\max_{M_1,\dots,M_n\in\Sigma} \|M_1\dots M_n\|}.
\]
The limit was introduced and proved to exist in \cite{rota1960note}, and it is independent of the norm. For convenience, we let the norm be the maximum norm, i.e. the largest absolute value of an entry in the matrix.
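For small $n$, the quantity under the limit can be evaluated by brute force. A minimal Python sketch (exponential enumeration, for illustration only; the test pair is a classical example whose joint spectral radius is known to be the golden ratio, attained by the product $AB$):

```python
import numpy as np
from itertools import product
from functools import reduce

def jsr_estimates(matrices, n_max):
    """For n = 1..n_max, the max over all length-n products of the n-th
    root of the maximum norm (largest absolute value of an entry)."""
    out = []
    for n in range(1, n_max + 1):
        best = max(np.abs(reduce(np.matmul, ws)).max()
                   for ws in product(matrices, repeat=n))
        out.append(best ** (1.0 / n))
    return out

# A classical pair with joint spectral radius (1 + sqrt(5)) / 2
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
print(jsr_estimates([A, B], 8))  # values approach the golden ratio from below
```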
\subsection*{Checking $\lambda\le 1$ is undecidable}
Using the construction, we can establish the reductions in the following theorems.
\begin{theorem} \label{thm:undecidable-estimate}
The problem of checking if $\lambda\le 1$ for the nonnegative setting is undecidable.
\end{theorem}
\begin{proof}
Consider the problem of checking whether the joint spectral radius $\rho(\{A,B\})$ of a pair of nonnegative matrices $A,B$ in $\mathbb R^d$ is at most $1$, which was shown to be undecidable in \cite{blondel2000boundedness}. We reduce this problem to the problem of checking $\lambda\le 1$ for the system $(*,s)$ constructed above.
By Proposition \ref{prop:buffer-analysis}, we obtain
\[
g(3m)=\max_{M_1,\dots,M_m\in\{A,B\}} \|M_1\dots M_m\|.
\]
Also, for $n>2$ and $n$ not divisible by $3$, we have
\[
g(n)=0.
\]
Therefore,
\[
\lambda=\sqrt[3]{\rho(\{A,B\})}.
\]
It means that we have reduced the problem of the joint spectral radius to the problem of the growth rate. The conclusion on the undecidability follows.
\end{proof}
The variant of checking $\lambda=1$ is also undecidable due to the undecidability of the corresponding problem of checking $\rho=1$ for the joint spectral radius. In fact, we can reduce the problem of checking $\lambda\le 1$ to the problem of checking $\lambda=1$ by adding an extra dimension that is always $1$. However, the question for $\rho\ge 1$ remains open (see \cite[Section 2.2.3]{jungers2009joint} for a discussion); we restate it as follows.
\begin{conjecture}[\cite{blondel2000boundedness}]\label{con:rho>=1}
It is undecidable to check if $\rho\ge 1$ for the joint spectral radius $\rho$.
\end{conjecture}
This has applications in the stability of dynamical systems. If the conjecture holds, then the problem of checking $\lambda\ge 1$ is also undecidable.
Note that the problems of comparing $\rho$ with $1$ for a pair of matrices and for a finite set of matrices are equivalent, see \cite{blondel2000boundedness}.
\subsection*{Checking the mortality is undecidable}
Other properties of a pair of matrices can also be reduced to the corresponding ones of a bilinear system. The following theorem is one example.
\begin{theorem} \label{thm:undecidable-mortal}
When there is no condition on the signs of the coefficients and the entries, the problem of checking if the system can produce a zero vector is undecidable.
\end{theorem}
\begin{proof}
We reduce the problem of checking whether a pair of matrices $A,B$ is mortal to this problem, that is, the problem of checking whether there exists a sequence of matrices $M_1,\dots,M_m$ drawn from $\{A,B\}$ for some $m$ so that $M_1\dots M_m$ is the zero matrix. The mortality problem for a pair of matrices was shown to be undecidable in \cite{blondel1997pair}. We use the same system $(*,s)$ that has been constructed. However, we add three extra dimensions $3d^2+3, 3d^2+4, 3d^2+5$ with $s_{3d^2+3}=1$, $s_{3d^2+4}=0$, $s_{3d^2+5}=0$ and
\begin{gather*}
(x*y)_{3d^2+3}=x_{3d^2+4}y_{3d^2+4},\qquad (x*y)_{3d^2+4}=x_{3d^2+3}y_{3d^2+3},\\
(x*y)_{3d^2+5}=x_{3d^2+3}y_{3d^2+4} + x_{3d^2+4}y_{3d^2+3} + x_{3d^2+5}y_{3d^2+5} - x_i y_j - x_j y_i,
\end{gather*}
where we still denote $i=3d^2+1$, $j=3d^2+2$. (Note that we are allowed to use negative coefficients here.)
It is not hard to see that for any vector $v$ obtained from a combination of $n$ instances of $s$, the entry $v_{3d^2+3}$ is nonzero if and only if $n=3m+1$, the entry $v_{3d^2+4}$ is nonzero if and only if $n=3m+2$.
\emph{Claim}: For $n$ divisible by $3$, if the entry $v_{3d^2+5}$ is zero then every subcombination has either less than $3$ instances of $s$ or a multiple of $3$ instances of $s$.
The claim trivially holds for $n=3$, since both combinations of $3$ instances of $s$ give $v_{3d^2+5}=0$. Note that $v_i\ne 0$ only for $n=1$ and $v_j\ne 0$ only for $n=2$; hence for $n\ne 3$, both $x_iy_j$ and $x_jy_i$ are zero, so we can ignore them whenever $n\ne 3$. This also means $v_{3d^2+5}$ is always nonnegative despite some negative coefficients.
We prove the claim by induction for $n=3m$ with $m>1$, provided that it holds for smaller numbers. If the combination is $X*Y$, then the numbers of instances of $s$ in both $X,Y$ are divisible by $3$: otherwise $X_{3d^2+3}Y_{3d^2+4} + X_{3d^2+4}Y_{3d^2+3}$ is positive while $X_{3d^2+5}Y_{3d^2+5}$ is nonnegative, so that $v_{3d^2+5}>0$. As we need $X_{3d^2+5}Y_{3d^2+5} = 0$, by the induction hypothesis every subcombination also needs to follow the requirement.
The claim is verified.
Therefore, if the resulting vector $v$ is zero, then the combination must follow the requirement for the subcombinations. We can obtain such a zero vector if and only if $\{A,B\}$ is mortal by Proposition \ref{prop:buffer-analysis}. The conclusion follows.
\end{proof}
\section{Checking if the limit holds is undecidable}
\begin{proposition}\label{prop:insert-0-at-odd}
For every $(*,s)$ there exists $(*',s')$ so that for every $m\ge 1$ we have $g'(2m+1)=0$ and $g'(2m)=g(m)$.
\end{proposition}
\begin{proof}
Suppose the space of $(*,s)$ is $\mathbb R^d$ and the coefficients of $*$ are $c_{i,j}^{(k)}$, that is, $(x*y)_k=\sum_{i,j} c_{i,j}^{(k)} x_i y_j$ for any vectors $x,y$ and index $k$. Let $*':\mathbb R^{d+1}\times\mathbb R^{d+1}\to\mathbb R^{d+1}$ and $s'\in\mathbb R^{d+1}$ be so that
\[
s'_{[1,d]}=0,\qquad s'_{d+1}=1,
\]
for every $k=1,\dots,d$,
\[
(x *' y)_k = s_k x_{d+1} y_{d+1} + \sum_{i,j\in [1,d]} c_{i,j}^{(k)} x_{i} y_{j},
\]
and
\[
(x *' y)_{d+1}=0.
\]
For every vector $v$ obtained from a combination of $n$ instances of $s'$ (using the operator $*'$), the last dimension is obvious: $v_{d+1} = 1$ if $n=1$ and $v_{d+1} = 0$ otherwise. The verification that $(*',s')$ satisfies the requirements can be reduced to proving the following claim.
\emph{Claim}: Denote $\bar w=w_{[1,d]}$ for any vector $w$. The vector $\bar v$ is zero whenever $n$ is odd. When $n=2m$, if $\bar v$ is not zero then $\bar v$ is a vector obtained from a combination of $m$ instances of $s$ (using operator $*$).
On the other hand, for every vector $u$ obtained from a combination of $m$ instances of $s$ we have a combination of $2m$ instances of $s'$ so that $\bar v=u$.
Let $X*'Y$ be the combination for $v$, where $X,Y$ are combinations of $n_1$ and $n_2$ instances of $s'$, respectively, with $n=n_1+n_2$.
At first, we should note that if the summand $s_k x_{d+1} y_{d+1}$ in the expression of $(x *' y)_k$ is nonzero, then $n=2$. Therefore, when $n>2$ we can safely remove it from the expression, that is
\[
\bar v=\bar X * \bar Y.
\]
It is not so hard to prove the claim by induction. The claim trivially holds for $n\le 2$. We prove it for $n>2$ provided that it holds for smaller numbers.
If $n$ is odd then either $n_1$ or $n_2$ is odd, hence either $\bar X=0$ or $\bar Y=0$. It follows that $\bar v=0$.
When $n=2m$, if $\bar v$ is not zero, then both $n_1,n_2$ are even. It follows from the induction hypothesis that $\bar X, \bar Y$ are vectors obtained from combinations of $m_1$ and $m_2$ instances of $s$, respectively, where $m_1+m_2=m$. It means $\bar v=\bar X * \bar Y$ is a vector obtained from combining $m$ instances of $s$.
The remaining conclusion on the construction of a combination of $2m$ instances of $s'$ is argued similarly.
This proves the claim, and hence the proposition.
\end{proof}
\begin{theorem}\label{thm:undecidable-validity}
Checking the validity of the limit of $\sqrt[n]{g(n)}$ is undecidable.
\end{theorem}
\begin{proof}
We will reduce the problem of checking if $\lambda=\limsup_{n\to\infty} \sqrt[n]{g(n)} \le 1$ for a system $(*,s)$ to the problem of checking the validity of the limit.
By Proposition \ref{prop:insert-0-at-odd}, there exists a system $(*',s')$ so that for every $m\ge 1$ we have $g'(2m+1)=0$ and $g'(2m)=g(m)$. Let the space of $(*',s')$ be $\mathbb R^{d'}$; we construct $*'':\mathbb R^{d'+1}\times\mathbb R^{d'+1}\to\mathbb R^{d'+1}$ and $s''\in\mathbb R^{d'+1}$ so that the system $(*',s')$ is brought into the first $d'$ dimensions of the new system and
\[
s''_{d'+1}=1,\qquad (x*''y)_{d'+1}=x_{d'+1} y_{d'+1}.
\]
The last dimension is obviously always $1$, so the growth rate $\lambda''$ of the new system is at least $1$. It follows that the following are equivalent: (i) the limit of $\sqrt[n]{g''(n)}$ exists for $(*'',s'')$, and (ii) the growth rate $\lambda=(\lambda')^2$ is at most $1$. The reduction is finished, and the conclusion on undecidability follows.
\end{proof}
\section{Growth rate in nonnegative setting}
\label{sec:nonnegative-setting}
\subsection*{Some definitions}
Before presenting a formula for the growth rate, we present some definitions that can be found in \cite{bui2021growth} and \cite{bui2021growth2}. The definitions are self-contained, but the reader is advised to check the original sources for more intuition and explanation. In fact, the proof of the formula is a simplified version of the argument in \cite{bui2021growth2}, adapted to the nonnegative setting.
Besides $g(n)$, we also denote by $g_i(n)$ the largest possible $i$-th entry of any vector obtained from a combination of $n$ instances of $s$.
We make the assumption that for every $i$ there exists some $n$ so that $g_i(n)>0$; otherwise we can safely eliminate such a degenerate dimension $i$. How to check whether $g_i(n)=0$ for every $n$ for a given $i$ is left as an exercise for the reader. Note that without the assumption, some later results may not hold in their current forms.
A \emph{composition tree} is a rooted binary tree where each vertex is assigned a vector in the following way. We assign the same vector $s$ to all leaves, and assign to each non-leaf vertex the value $x*y$ where $x,y$ are respectively the vectors of the left and right children. The vector obtained at the root is called the \emph{vector associated with} the composition tree. We often call a composition tree a \emph{tree} for short. It can be seen that there is a one-to-one correspondence between a tree of $n$ leaves and a combination of $n$ instances of $s$.
If every leaf is assigned the same vector $s$ but a specially \emph{marked leaf} is assigned a vector variable $u$, then the vector $v$ at the root depends linearly on $u$ by a matrix $M=M(P)$, that is $v=Mu$. If the tree is $T$ and the leaf is $\ell$, we say such a setting is a \emph{linear pattern} $P=(T,\ell)$. We call $M$ the \emph{matrix associated with} $P$.
A \emph{composition} $P_1\oplus P_2$ of two linear patterns $P_1=(T_1,\ell_1),P_2=(T_2,\ell_2)$ is the pattern $(T,\ell)$ so that $T$ is obtained from $T_1$ by replacing $\ell_1$ by $T_2$, and $\ell=\ell_2$. If $M_1,M_2$ are the matrices associated to $P_1,P_2$, then $M_1M_2$ is the matrix associated to $P_1\oplus P_2$.
For $m\ge 1$ we denote $P^m=P\oplus\dots\oplus P$ where there are $m$ instances of $P$. If $M$ is associated to $P$ then obviously $M^m$ is associated to $P^m$.
The \emph{number of leaves} $|P|$ of a pattern $P=(T,\ell)$ is defined to be the number of leaves excluding the marked leaf (i.e. one less than the number of leaves in $T$). One can see that $|P\oplus Q|=|P|+|Q|$.
For convenience, we also denote by $P\oplus T'$ the tree obtained from the tree of the pattern $P$ by replacing the marked leaf by the tree $T'$. Let $u$ be the vector associated with $T'$; the vector $v$ associated with $P\oplus T'$ is $Mu$ for $M=M(P)$. If $T'$ is a tree with a bounded number of leaves and $u_j>0$, then $M_{i,j}\le\const M_{i,j}u_j\le\const v_i\le\const g_i(|P|+O(1))$.
Let $*$ be represented by the coefficients $c_{i,j}^{(k)}$ so that for any vectors $x,y$ and an index $k$,
\[
(x*y)_k = \sum_{i,j} c_{i,j}^{(k)} x_i y_j.
\]
The \emph{dependency graph} is the directed graph whose vertices are the dimensions, with an edge from $k$ to $i$ if and only if either $c_{i,j}^{(k)}\ne 0$ or $c_{j,i}^{(k)}\ne 0$ for some $j$ (loops are allowed). The dependency graph can be partitioned into strongly connected components, which we call \emph{components} for short. These components define a partial order so that for two different components $C', C$, we say $C'<C$ if there is a path from $i$ to $j$ for some $i\in C$ and $j\in C'$.
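The definition can be made concrete in a short Python sketch (our illustration; the coefficient tensor `c[k][i][j]` stands for $c_{i,j}^{(k)}$, and the components are computed by boolean reachability rather than a dedicated strongly-connected-components algorithm):

```python
import numpy as np

def dependency_components(c):
    """Build the dependency graph from the coefficient tensor c[k][i][j]
    (edge k -> i iff c[k][i][j] != 0 or c[k][j][i] != 0 for some j)
    and return its strongly connected components."""
    d = c.shape[0]
    edge = np.zeros((d, d), dtype=bool)
    for k in range(d):
        for i in range(d):
            if np.any(c[k, i, :] != 0) or np.any(c[k, :, i] != 0):
                edge[k, i] = True
    reach = edge | np.eye(d, dtype=bool)
    for _ in range(d):  # repeated squaring until the transitive closure stabilizes
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    same = reach & reach.T  # i, j share a component iff they reach each other
    comps, seen = [], set()
    for k in range(d):
        if k not in seen:
            comp = sorted(i for i in range(d) if same[k, i])
            comps.append(comp)
            seen.update(comp)
    return comps

# Coefficient tensor of the example x*y = (x_1 y_2 + x_2 y_1, x_1 y_2),
# 0-indexed: c[k][i][j] is the coefficient of x_{i+1} y_{j+1} in (x*y)_{k+1}
c = np.zeros((2, 2, 2))
c[0, 0, 1] = c[0, 1, 0] = 1.0
c[1, 0, 1] = 1.0
print(dependency_components(c))  # both dimensions fall in one component
```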
If there is a path from $i$ to $j$, then there is a linear pattern $P_{i\to j}$ with a bounded number of leaves so that $M(P_{i\to j})_{i,j}>0$. This can be seen from the fact that if there is an edge from $k$ to $i$, then $M(P)_{k,i}>0$ for $P=(T,\ell)$, where $\ell$ is the left (or right) branch if $c_{i,j}^{(k)}\ne 0$ (or $c_{j,i}^{(k)}\ne 0$), and the right (or left) branch has a bounded number of leaves whose associated vector has a positive $j$-th entry. The rest is done by composition when the distance from $i$ to $j$ is greater than $1$.
\subsection*{The formula}
We now have enough material to prove the following formula for the growth rate.
\begin{theorem}\label{thm:limsup=sup}
The growth rate can be expressed as a supremum by
\[
\lambda=\limsup_{n\to\infty} \sqrt[n]{g(n)} = \sup_{\text{linear pattern $P$}} \max_i \sqrt[|P|]{M(P)_{i,i}}.
\]
\end{theorem}
\begin{proof}
Let $\theta$ denote the supremum in the theorem. It is obvious that $\lambda\ge\theta$. Indeed, for any $P$ and $i$, consider the sequence $n=q|P|+r$ for $q=1,2,\dots$, where $r$ satisfies $g_i(r)>0$, witnessed by a tree $T_0$. For such $n$, consider the tree $P^q\oplus T_0$: the associated vector has the $i$-th entry at least $\const (M(P)_{i,i})^q$. As $r$ is bounded, the lower bound on $\lambda$ follows.
It remains to prove the other direction $\lambda\le\theta$, by showing that for every $i$ there exists some $r$ so that \footnote{In \cite{bui2021growth2} the bound is even shown to be $\const n^r\theta^n$, but we keep the approach simpler for the purpose of proving the theorem only.}
\begin{equation}\label{eq:upper-bound}
g_i(n)\le \const n^{O((\log n)^r)} \theta^n.
\end{equation}
At first, we make an observation: If $i,j$ are in the same connected component, then for every linear pattern $P$,
\begin{equation}\label{eq:base}
M(P)_{i,j}\le\const\theta^{|P|}.
\end{equation}
Indeed, let $P_{j\to i}$ be the pattern with a bounded number of leaves so that $M(P_{j\to i})_{j,i}>0$; we have $M(P\oplus P_{j\to i})_{i,i} \ge M(P)_{i,j} M(P_{j\to i})_{j,i} \ge \const M(P)_{i,j}$. Meanwhile, $M(P\oplus P_{j\to i})_{i,i}\le \theta^{|P\oplus P_{j\to i}|} \le\const \theta^{|P|}$. This proves the observation.
When the component consists of a single vertex without a loop, the observation is trivial.
We prove \eqref{eq:upper-bound} by induction on the components. The observation in \eqref{eq:base} actually serves as the base case: for any $i$ in a minimal component, let $P$ be a pattern whose tree is the one associated with $g_i(n)$; then $g_i(n)=\sum_j M(P)_{i,j} s_j \le\const M(P)_{i,j}$ for some $j$ (note that $j$ is in the same component). Suppose \eqref{eq:upper-bound} holds with degree $r'$ for every vertex in a component lower than the component of $i$; we prove that it also holds for $i$ with some degree $r$.
Let $T$ be the tree associated with $g_i(n)$. Pick a subtree $T_0$ with $m$ leaves so that $n/3\le m\le 2n/3$. Let the pattern $P'$ be such that we have the decomposition $T=P'\oplus T_0$. Let $M'$ be the matrix associated with $P'$ and $u$ the vector associated with $T_0$; we have
\[
g_i(n)=\sum_j M'_{i,j} u_j \le \const M'_{i,j} u_j \le \const M'_{i,j} g_j(m)
\]
for some $j$.
If $j$ is in the same component as $i$, then we have $M'_{i,j}\le\const\theta^{n-m}$. Therefore,
\[
g_i(n)\le\const \theta^{n-m} g_j(m).
\]
If $j$ is not in the component of $i$, then $g_j(m)\le \const m^{O((\log m)^{r'})} \theta^m$ by induction hypothesis. Since $M'_{i,j}\le \const g_i(|P'|+O(1))$, we have
\[
g_i(n)\le \const g_i(n-m+O(1)) m^{O((\log m)^{r'})} \theta^m.
\]
In either case, we have reduced the argument $n$ to at most a constant fraction of $n$, and $g_i$ to some $g_k$ with $k$ still in the same component as $i$: from $g_i(n)$ to $g_j(m)$ in the first case, and from $g_i(n)$ to $g_i(n-m+O(1))$ in the second case. Repeating the process recursively on either $g_j(m)$ or $g_i(n-m+O(1))$ an $O(\log n)$ number of times until the argument is small enough, we obtain
\[
g_i(n)\le \const K^{O(\log n)} (n^{O((\log n)^{r'})})^{O(\log n)} \theta^{n+O(\log n)} \le \const n^{O((\log n)^r)} \theta^n
\]
where $r=r'+1$ and $K$ is some constant. (Note that $a^{\log b}=b^{\log a}$.)
The proof finishes by induction.
\end{proof}
\subsection*{A condition for the limit to hold}
We provide a condition in the nonnegative setting so that $\lambda$ is a limit.
\begin{theorem}
If there exists some $n_0$ so that for every $n\ge n_0$ and every $i$ we have $g_i(n)>0$, then $\lambda$ is actually a limit.
\end{theorem}
\begin{proof}
Let $\theta$ denote the supremum in Theorem \ref{thm:limsup=sup}. It suffices to prove that
\[
\liminf_{n\to\infty} \sqrt[n]{g(n)}\ge \theta,
\]
which can be reduced to showing that for any pattern $P$ and any index $i$, we have
\[
\liminf_{n\to\infty} \sqrt[n]{g(n)}\ge \sqrt[|P|]{M(P)_{i,i}}.
\]
Indeed, for every $n$ large enough, let $n=q|P|+r$ so that $n_0\le r< n_0+|P|$. Consider the tree $P^q\oplus T_0$ where $T_0$ is the tree associated with $g_i(r)$, the associated vector has the $i$-th entry at least a constant times $(M(P)_{i,i})^q$. Since $r$ is bounded, the conclusion follows.
\end{proof}
We have thus provided another proof that the limit is always valid in the positive setting; it is simpler than both earlier versions in \cite{bui2021growth} and \cite{bui2021growth2}.
\begin{corollary}
The growth rate $\lambda$ is always a limit in the positive setting.
\end{corollary}
\section{Applications of the formula}
\label{sec:applications}
\subsection*{A formula for the spectral radius}
The formula in Theorem \ref{thm:limsup=sup} turns out to give a formula for the spectral radius of a nonnegative matrix $A$.
\begin{theorem}\label{thm:spectral-radius}
For every nonnegative matrix $A$, the spectral radius $\rho(A)$ can be written as
\[
\rho(A)=\sup_n\max_i\sqrt[n]{(A^n)_{i,i}}.
\]
\end{theorem}
\begin{proof}
Suppose $A$ is a $d\times d$ matrix, consider a consistent embedding of a $d\times d$ matrix $B$ to a vector $v$ in $\mathbb R^{d^2}$ by the functions $\Gamma,\tilde \Gamma$ so that
\[
B=\Gamma(v),\qquad v=\tilde \Gamma(B).
\]
Let the system $(*,s)$ in the space $\mathbb R^{d^2}$ be so that $s=\tilde \Gamma(A)$ and
\[
u*v=\tilde \Gamma(\Gamma(u) \Gamma(v)).
\]
One can see that every combination of $n$ instances of $s$ gives $\tilde \Gamma(A^n)$. Therefore,
\[
\lambda=\rho(A).
\]
On the other hand, if $P$ is a linear pattern with $|P|=m$, then the relation between the vector at the root $v$ and the vector at the marked leaf $u$ is
\[
\Gamma(v)= A^m \Gamma(u).
\]
Let $M$ be the $d^2\times d^2$ matrix so that $v=Mu$. It is not hard to see that the diagonal entries of $M$ are exactly the values $(A^m)_{i,i}$ for $i=1,\dots,d$, each repeated $d$ times; in particular, $\max_j M_{j,j}=\max_i (A^m)_{i,i}$.
It follows that
\[
\rho(A)=\lambda=\sup_n \max_{\substack{\text{linear pattern $P$}\\|P|=n}} \max_i \sqrt[n]{M(P)_{i,i}}=\sup_n \max_i \sqrt[n]{(A^n)_{i,i}}.
\]
\end{proof}
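The formula is easy to check numerically. A minimal Python sketch (the function name and the test matrices are our choices):

```python
import numpy as np

def rho_via_diagonal(A, n_max=60):
    """Approximate rho(A) for a nonnegative matrix A through the diagonal
    formula rho(A) = sup_n max_i ((A^n)_{ii})^(1/n)."""
    best, P = 0.0, np.eye(A.shape[0])
    for n in range(1, n_max + 1):
        P = P @ A
        best = max(best, P.diagonal().max() ** (1.0 / n))
    return best

A = np.array([[0.0, 2.0], [3.0, 0.0]])  # rho(A) = sqrt(6), attained at n = 2
print(rho_via_diagonal(A), np.sqrt(6.0))

F = np.array([[1.0, 1.0], [1.0, 0.0]])  # rho(F) is the golden ratio
print(rho_via_diagonal(F))  # approaches (1 + sqrt(5)) / 2 from below
```

Note that for the second matrix the supremum is approached but never attained, matching the remark that attainment in this form is rare.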
\begin{remark}
One can also obtain a similar formula for the joint spectral radius using this method with the construction in Section \ref{sec:reduction}. However, it would be more complicated to argue.
\end{remark}
\subsection*{Finiteness property and linear patterns}
The growth rate can be written a bit differently as
\[
\lambda= \sup_{\text{linear pattern $P$}} \sqrt[|P|]{\sup_n \max_i \sqrt[n]{[M(P)^n]_{i,i}}},
\]
since the linear pattern $P^n$ has the associated matrix $M(P)^n$ and has $|P^n|=n|P|$.
By the formula of the spectral radius in Theorem \ref{thm:spectral-radius}, we have
\[
\lambda=\sup_{\text{linear pattern $P$}} \sqrt[|P|]{\rho(M(P))}.
\]
We call $\bar\lambda_P=\sqrt[|P|]{\rho(M(P))}$ the rate of the pattern $P$.
In this notation, the formula reads
\begin{equation}\label{eq:in-rate-pattern}
\lambda=\sup_{\text{linear pattern $P$}} \bar\lambda_P.
\end{equation}
The rate of a linear pattern is the original motivation for the proof of the limit in the positive setting in \cite{bui2021growth}. Although it is not technically more important than Theorem \ref{thm:limsup=sup}, its meaning is worth mentioning. Consider the sequence of the trees of $P^1, P^2, \dots$; the vectors $v^{(1)},v^{(2)},\dots$ associated with these trees are $Ms, M^2s, \dots$ for $M=M(P)$. As $s>0$, the growth $\lambda_P=\lim_{n\to\infty} \sqrt[n]{\|v^{(n)}\|}$ of the norms $\|v^{(n)}\|$ is the spectral radius of $M$. However, as a lower bound on the growth rate, $\rho(M)$ should be normalized by taking the $|P|$-th root, since the number of leaves of $P^n$ grows by $|P|$ in each step; that is, $\lambda\ge\bar\lambda_P=\sqrt[|P|]{\lambda_P}$. The proof in \cite{bui2021growth} manages to show that this is also the upper bound.
Representing the growth rate in terms of the rates of linear patterns gives some new insight. While the supremum is almost never attained in the form of Theorem \ref{thm:limsup=sup} (as rarely as in the case of Theorem \ref{thm:spectral-radius}), it is quite common that some linear pattern attains the growth rate in the form of \eqref{eq:in-rate-pattern}, that is, $\lambda=\bar\lambda_P$ for some $P$. For example, the system in \cite[Theorem $3$]{bui2021growth}, where $s=(1,1)$ and $x*y=(x_1y_2+x_2y_1,x_1y_2)$, has the golden ratio as its growth rate, and the growth rate is attained by a linear pattern whose tree has two leaves with the marked leaf on the left. The reader can also check \cite{rote2019maximum} for a more complicated example.
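The golden-ratio example can be checked directly. The sketch below (brute-force enumeration of composition trees; helper names are ours) computes $g(n)$ for the system $s=(1,1)$, $x*y=(x_1y_2+x_2y_1,x_1y_2)$, together with the matrix of the two-leaf pattern with the marked leaf on the left:

```python
import numpy as np

def all_vectors(star, s, n):
    """All vectors obtainable from a combination of n instances of s,
    one per composition tree with n leaves (exponential; illustration only)."""
    table = {1: [tuple(s)]}
    for m in range(2, n + 1):
        out = set()
        for k in range(1, m):
            for x in table[k]:
                for y in table[m - k]:
                    out.add(tuple(star(x, y)))
        table[m] = sorted(out)
    return table[n]

# The system from the text: s = (1, 1), x*y = (x_1 y_2 + x_2 y_1, x_1 y_2)
star = lambda x, y: (x[0] * y[1] + x[1] * y[0], x[0] * y[1])
s = (1.0, 1.0)
g = [max(max(v) for v in all_vectors(star, s, n)) for n in range(1, 9)]
print(g)  # Fibonacci-like growth, so the ratios tend to the golden ratio

# The two-leaf pattern with the marked leaf on the left: v = u * s gives
# v = M u with M = [[1, 1], [1, 0]], whose spectral radius is the golden ratio
M = np.array([[1.0, 1.0], [1.0, 0.0]])
print(max(abs(np.linalg.eigvals(M))))
```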
A natural question would be:
\begin{quote}
When is the growth rate actually the rate of a linear pattern?
\end{quote}
We relate this question to the finiteness property of a set of matrices.
Given a pair of matrices $A,B$ and the associated bilinear system that is constructed as in Section \ref{sec:reduction}, we have
\[
\lambda=\sqrt[3]{\rho(\{A,B\})}.
\]
Suppose the pair $A,B$ has the finiteness property, that is, there exists a sequence $M_1,\dots,M_m$ with each matrix in $\{A,B\}$ so that $\sqrt[m]{\rho(M_1\dots M_m)}=\rho(\{A,B\})$. We can then build a pattern $P=(T,\ell)$ so that $\bar\lambda_P=\sqrt[3]{\rho(\{A,B\})}$. Indeed, if $T'$ is the tree with $3m$ leaves that is associated to $M_1\dots M_m$ (as in Proposition \ref{prop:buffer-analysis}), we can let $T$ be the tree with $3m+1$ leaves where one branch is $T'$ and the other branch is the marked leaf $\ell$. The reader can check that $\bar\lambda_P=\sqrt[3]{\rho(\{A,B\})}$.
On the other hand, suppose the pair $A,B$ does not have the finiteness property, e.g. the class of pairs in \cite{blondel2003elementary}, or an explicit instance in \cite{hare2011explicit}. In this case, there is no linear pattern where $\bar\lambda_P=\sqrt[3]{\rho(\{A,B\})}$, since otherwise, by considering the sequence of $P^t$ for $t=1,2,\dots$, we would have a periodic sequence of products of matrices whose norms follow the rate $\rho(\{A,B\})$ (with respect to the number of matrices).
In fact, the readers can find in \cite[Theorem $2$]{bui2021growth} a simple example in the positive setting where no linear pattern has the same rate as the growth rate. The example is not related to the joint spectral radius and involves only binary entries and coefficients with $s=(1,1)$ and $x*y=(x_1y_1+x_2y_2,x_2y_2)$.
On the other hand, the algebraic nature of the entries in the example of \cite{hare2011explicit} is quite complicated. The reduction $JSR\le GRBS$ suggests that some phenomena may be easier to construct for $GRBS$ than their counterparts for $JSR$. Nevertheless, the finiteness conjecture is still open for the case of rational (and equivalently binary) matrices, see \cite{jungers2008finiteness}. Note that if we had a reduction from $GRBS$ to $JSR$ that is as natural as the one in Section \ref{sec:reduction} and keeps the resulting vectors in some form in the resulting matrices, then we would obtain a set of binary matrices without the finiteness property.
\subsection*{Computability of the growth rate}
Before proving the computability of the growth rate in Theorem \ref{thm:computable} below, we give an extension of Fekete's lemma.
\begin{lemma}[An extension of Fekete's lemma for nonnegative sequences]
Given a nonnegative sequence $a_n$ for $n=1,2,\dots$, if the sequence is supermultiplicative (i.e. $a_{m+n}\ge a_m a_n$ for any $m,n$), then the subsequence of all positive $\sqrt[n]{a_n}$, if nonempty, converges to $\sup_n \sqrt[n]{a_n}$.
\end{lemma}
\begin{proof}
If there is any $m$ such that $a_m>0$, then the subsequence $\{a_n: a_n > 0\}$ is infinite, since $a_{mt}>0$ for every $t\ge 1$. We thus suppose that the subsequence is nonempty, hence infinite.
It is obvious by definition that $\limsup\{\sqrt[n]{a_n}: a_n>0\} \le \sup_n \sqrt[n]{a_n}$. To finish the proof, it remains to show $\liminf\{\sqrt[n]{a_n}: a_n>0\} \ge \sup_n \sqrt[n]{a_n}$.
Consider any positive integer $q$.
Let $R$ be the set of integers $r$ ($0\le r< q$) such that there exists some $m_r$ with $m_r\equiv r \pmod{q}$ and $a_{m_r} > 0$.
For every $n$ such that $a_n>0$, if $n\equiv r \pmod{q}$, then $r\in R$. Consider the representation $n=pq+m_r$, we obtain
\[
\sqrt[n]{a_n} \ge \sqrt[n]{(a_q)^p a_{m_r}}.
\]
The lower bound converges to $\sqrt[q]{a_q}$ when $n\to\infty$ since $m_r$ is bounded.
As this holds for every $q$, the limit inferior is at least $\sup_q \sqrt[q]{a_q}=\sup_n \sqrt[n]{a_n}$, which finishes the proof.
\end{proof}
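A quick numerical illustration of the lemma (the sequence is our example): $a_n=2^n$ for even $n$ and $a_n=0$ for odd $n$ is supermultiplicative with infinitely many zero terms, and the roots of its positive terms converge to $\sup_n\sqrt[n]{a_n}=2$.

```python
def a(n):
    """A supermultiplicative nonnegative sequence with infinitely many zeros:
    a(n) = 2^n for even n and a(n) = 0 for odd n."""
    return 2.0 ** n if n % 2 == 0 else 0.0

# Supermultiplicativity check: a(m + n) >= a(m) * a(n)
assert all(a(m + n) >= a(m) * a(n) for m in range(1, 40) for n in range(1, 40))

roots = [a(n) ** (1.0 / n) for n in range(1, 101) if a(n) > 0]
sup = max(a(n) ** (1.0 / n) for n in range(1, 101))
print(roots[-1], sup)  # the positive subsequence converges to the supremum 2
```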
\begin{theorem}\label{thm:computable}
The growth rate in the nonnegative setting is computable.
\end{theorem}
\begin{proof}
It was shown in \cite{rosenfeld2021growth} that the growth rate $\lambda$ in the nonnegative setting is upper semi-computable. It remains to show that it is lower semi-computable, by showing a sequence of lower bounds converging to $\lambda$.
As
\[
\lambda= \sup_{\text{linear pattern $P$}} \max_i \sqrt[|P|]{M(P)_{i,i}},
\]
we have
\[
\lambda=\max_i \sup_n \sup_{\substack{\text{linear pattern $P$}\\ |P|=n}} \sqrt[n]{M(P)_{i,i}}.
\]
For each $i$, consider the sequence $a_n^{(i)}=\sup_{P: |P|=n} M(P)_{i,i}$ for $n=1,2,\dots$; we can see that this sequence is supermultiplicative. Indeed, suppose $P$ is the pattern attaining $a_m^{(i)}$ and $Q$ is the pattern attaining $a_n^{(i)}$ (the suprema are attained since there are only finitely many patterns with a given number of leaves). Since the pattern $P\oplus Q$ has $|P\oplus Q|=|P|+|Q|=m+n$, we have
\[
a_{m+n}^{(i)}\ge M(P\oplus Q)_{i,i} = (M(P)M(Q))_{i,i} \ge M(P)_{i,i} M(Q)_{i,i}=a_m^{(i)} a_n^{(i)}.
\]
By the extension of Fekete's lemma to nonnegative sequences, the subsequence $\{\sqrt[n]{a_n^{(i)}}: a_n^{(i)}>0\}$, if nonempty, converges to $\sup_n \sqrt[n]{a_n^{(i)}}$.
Consider the sequence $x_n=\max_i a_n^{(i)}$: either this sequence is identically zero, or
\[
\lim_{n\to\infty} \{\sqrt[n]{x_n}: x_n>0\}=\max_{\substack{i\\ \exists m,\; a_m^{(i)}>0}} \lim_{n\to\infty} \left\{\sqrt[n]{a_n^{(i)}}: a_n^{(i)}>0\right\} =\max_i \sup_n \sqrt[n]{a_n^{(i)}} = \lambda.
\]
The conclusion follows from the fact that each $\sqrt[n]{x_n}$ is a lower bound for $\lambda$.
\end{proof}
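In the paper's setting the sequences $a_n^{(i)}$ range over linear patterns and their matrices $M(P)$; as a simplified, self-contained sketch (a toy instance of our own, with explicit matrices standing in for the pattern matrices), the same mechanism of computable lower bounds from maximal diagonal entries can be exercised on products of a finite set of nonnegative matrices:

```python
from itertools import product

def mat_mul(P, Q):
    """Multiply two square matrices given as nested lists."""
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def diag_lower_bounds(mats, max_len):
    """x_n = max over all products of n matrices from `mats` of the
    largest diagonal entry; the n-th roots x_n^(1/n) are lower bounds
    for the growth rate, and the x_n form a supermultiplicative sequence."""
    bounds = []
    for n in range(1, max_len + 1):
        best = 0.0
        for word in product(mats, repeat=n):
            M = word[0]
            for W in word[1:]:
                M = mat_mul(M, W)
            best = max(best, max(M[i][i] for i in range(len(M))))
        bounds.append(best ** (1.0 / n))
    return bounds

A = [[2.0, 0.0], [0.0, 0.5]]   # toy matrices (our own choice)
B = [[0.0, 1.0], [1.0, 0.0]]
print(diag_lower_bounds([A, B], 8))   # every bound is ~2.0 here (from A^n)
```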
\subsection*{Transform to make the limit valid}
Besides the transform in Proposition \ref{prop:insert-0-at-odd}, we also present the following transform, as an application of Theorem \ref{thm:limsup=sup}. While the former transform makes the limit invalid, the latter ensures that it holds.
\begin{proposition}\label{prop:keep-rate-limit-ensured}
For every $(*,s)$ there exists $(*',s')$ so that $(*',s')$ has the same growth rate as $(*,s)$ and the limit of $\sqrt[n]{g'(n)}$ is ensured, where $g'(n)$ is the function for $(*',s')$.
\end{proposition}
\begin{proof}
Consider $*':\mathbb R^{d+2}\times\mathbb R^{d+2}\to\mathbb R^{d+2}$ and $s'\in\mathbb R^{d+2}$ such that the coefficients of $*$ and the entries of $s$ are carried over to the first $d$ dimensions of $(*',s')$. We let $s'_{d+1}=s'_{d+2}=\alpha$ (in fact, the value of $s'_{d+1}$ does not matter), where $0<\alpha\le\lambda$; for instance, $\alpha$ can be any positive lower bound of $\lambda$, e.g., one obtained from Theorem \ref{thm:limsup=sup}. (We assume $\lambda>0$, since otherwise the statement is trivial.)
The operator $*'$ is defined so that
\[
(x*'y)_{d+1}=\sum_{i=1}^d x_i y_{d+2},
\]
and
\[
(x*'y)_{d+2}=x_{d+2} y_{d+2}.
\]
The $(d+2)$-th entry of any resulting vector is obviously $\alpha^n$, where $n$ is the number of combined instances of $s'$. It follows that, for any index $i$ and any bounded $\delta$, we have
\[
g'_{d+1}(n+\delta)\ge\const g_i(n),
\]
by considering the composition tree where the left branch is associated with $g_i(n)$ and the right branch is any tree of $\delta$ leaves.
This means that for a bounded $\delta$, we have
\[
g'(n+\delta)\ge g'_{d+1}(n+\delta)\ge\max_i\const g_i(n)=\const g(n).
\]
On the other hand, it is not hard to see (we leave this as an exercise) that
\[
\limsup_{n\to\infty} \sqrt[n]{g'(n)}\le\limsup_{n\to\infty} \sqrt[n]{g(n)}.
\]
For any linear pattern $P$ with the associated matrix $M$ and any index $i$, we prove that
\begin{equation}\label{eq:lower-bound}
\liminf_{n\to\infty} \sqrt[n]{g'(n)}\ge \sqrt[|P|]{M_{i,i}}.
\end{equation}
Indeed, for any $n$ large enough, let $n'$ be a multiple of $|P|$, say $n'=q|P|$, chosen so that $n-n'$ is bounded but not too small. Consider the pattern $P^q$ with the associated matrix $M^q$; we have $(M^q)_{i,i}\ge (M_{i,i})^q$. Let $T_0$ be a tree associated with $g_i(n_0)>0$ for some bounded $n_0$; then the $i$-th entry of the vector associated with $P^q\oplus T_0$ is at least a constant times $(M_{i,i})^q$.
We choose $n'$ so that $r=n-(n'+n_0)>0$. As $r$ is bounded, we have
\[
g'(n)\ge \const g(n-r)\ge\const(M_{i,i})^q.
\]
As $n-n'$ is bounded, we have proved the inequality \eqref{eq:lower-bound}. It follows that
\[
\liminf_{n\to\infty} \sqrt[n]{g'(n)}\ge\sup_{\text{linear pattern $P$}}\max_i \sqrt[|P|]{M(P)_{i,i}}.
\]
By Theorem \ref{thm:limsup=sup}, we have
\[
\liminf_{n\to\infty} \sqrt[n]{g'(n)}\ge\lambda.
\]
In total, we have the limit
\[
\lim_{n\to\infty} \sqrt[n]{g'(n)}=\liminf_{n\to\infty} \sqrt[n]{g'(n)}=\limsup_{n\to\infty} \sqrt[n]{g'(n)}=\lambda.
\]
\end{proof}
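The augmentation $(*,s)\mapsto(*',s')$ from the proof can be written out concretely; the sketch below (with a trivial one-dimensional $*$ as our own test instance) checks that the $(d+2)$-th coordinate of a combination of $n$ instances of $s'$ is $\alpha^n$:

```python
def augment(star, d, alpha):
    """Build *' on R^(d+2) from * on R^d, following the construction:
    (x *' y)_{d+1} = (sum_{i<=d} x_i) * y_{d+2},
    (x *' y)_{d+2} = x_{d+2} * y_{d+2};
    the first d coordinates are combined with the original *."""
    def star2(x, y):
        base = star(x[:d], y[:d])
        extra1 = sum(x[:d]) * y[d + 1]   # 1-based coordinate d+1
        extra2 = x[d + 1] * y[d + 1]     # 1-based coordinate d+2
        return tuple(base) + (extra1, extra2)
    return star2

# Toy instance (our own choice): d = 1, x * y = (x_1 y_1), s = (1,), alpha = 0.5.
star = lambda x, y: (x[0] * y[0],)
alpha = 0.5
s2 = (1.0, alpha, alpha)
star2 = augment(star, 1, alpha)

v = star2(s2, s2)
print(v)   # last coordinate is alpha^2 = 0.25
```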
Assume Conjecture \ref{con:rho>=1} holds, that is, checking $\rho\ge 1$ and checking $\lambda\ge 1$ are undecidable. We then obtain another approach to the undecidability of the problem of checking whether the limit $\lambda$ holds, as an application of Proposition \ref{prop:keep-rate-limit-ensured}.
Given a system $(*,s)$, let the system $(*',s')$ obtained from Proposition \ref{prop:keep-rate-limit-ensured} be in the space $\mathbb R^{d'}$. Consider $*'':\mathbb R^{d'+2}\times\mathbb R^{d'+2}\to\mathbb R^{d'+2}$ and $s''\in\mathbb R^{d'+2}$ where the first $d'$ dimensions are deduced from $(*',s')$. We let $s''_{d'+1}=1$, $s''_{d'+2}=0$, and
\[
(x*''y)_{d'+1}=x_{d'+2} y_{d'+2},\qquad (x*''y)_{d'+2}=x_{d'+1} y_{d'+1}.
\]
We can see that the last $2$ dimensions are independent of the remaining dimensions, and $\max\{g''_{d'+1}(n), g''_{d'+2}(n)\}$ is $0$ if $3\mid n$ and it is $1$ otherwise, where $g''$ is the function for $(*'',s'')$.
The following are now equivalent: (i) the limit of $\sqrt[n]{g''(n)}$ is valid, and (ii) $\lambda\ge 1$. That is, we have reduced the problem of checking $\lambda\ge 1$ to the problem of checking the validity of the limit. Therefore, the latter is undecidable, under the assumption of the undecidability of checking $\lambda\ge 1$.
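The claimed behaviour of the last two coordinates of $(*'',s'')$ can be verified by brute force over all combination trees; the sketch below models only these two coordinates (the remaining dimensions are irrelevant for the claim):

```python
def reachable_last2(n_max):
    """Reachable values of the last two coordinates when combining n
    instances of s'' (whose last two coordinates are (1, 0)) under
    (x, y) -> (x_{d'+2} * y_{d'+2}, x_{d'+1} * y_{d'+1})."""
    reach = {1: {(1, 0)}}
    for n in range(2, n_max + 1):
        reach[n] = {(x[1] * y[1], x[0] * y[0])
                    for k in range(1, n)
                    for x in reach[k] for y in reach[n - k]}
    return reach

reach = reachable_last2(12)
for n, vals in sorted(reach.items()):
    best = max(max(v) for v in vals)
    # the maximum is 0 exactly when 3 divides n, as claimed
    assert (best == 0) == (n % 3 == 0)
```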
\section{Multiple operators and multiple starting vectors}
\label{sec:multiple-operators-vectors}
Rosenfeld in \cite{rosenfeld2022undecidable} made a remark that the problem of the bilinear system is not harder when we allow multiple operators and multiple starting vectors. We give reductions that are similar to those in Section \ref{sec:reduction}.
The construction in Section \ref{sec:reduction} is well suited for reducing the problem for $(*,s,s')$ to the original problem. By the problem for $(*,s,s')$, we mean the problem where we can choose either $s$ or $s'$ in the place of each $s$ instead of fixing the vector $s$. The two vectors $s,s'$ play the roles of $A,B$ in the construction. We rewrite it formally without repeating the verification.
For a bilinear map $*:\mathbb R^d\times\mathbb R^d\to\mathbb R^d$ and two vectors $s,s'\in\mathbb R^d$, consider the space $\mathbb R^{3d+2}$ and denote by $R_A,R_B,R_C$ the ranges $[1,d], [d+1,2d], [2d+1,3d]$, respectively, and $i=3d+1$, $j=3d+2$.
Let the system $(\bullet,u)$ with $\bullet:\mathbb R^{3d+2}\times\mathbb R^{3d+2}\to\mathbb R^{3d+2}$ and $u\in\mathbb R^{3d+2}$ be so that
\begin{gather*}
u_{R_A}=s,\qquad u_{R_B}=s',\qquad u_{R_C}=0,\\
u_i=1,\qquad u_j=0,
\end{gather*}
and for any two vectors $x, y$,
\begin{gather*}
(x\bullet y)_{R_A} = (x\bullet y)_{R_B} = 0,\\
(x\bullet y)_{R_C} = x_{R_C} * y_{R_C} + x_j y_{R_A} + x_{R_B} y_j,\\
(x\bullet y)_i = 0,\qquad (x\bullet y)_j = x_i y_i.
\end{gather*}
By the same analysis, we obtain that the growth rate of $(\bullet,u)$ is the cube root of the growth rate of $(*,s,s')$, like in Theorem \ref{thm:undecidable-estimate}.
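As a sanity check of this first reduction, one can implement $\bullet$ directly and verify that combining three instances of $u$ deposits $s$ or $s'$ in the range $R_C$; the componentwise product used for $*$ below is our own toy choice:

```python
def make_bullet(star, d):
    """The operator (bullet) on R^(3d+2): coordinates [0,d) = R_A,
    [d,2d) = R_B, [2d,3d) = R_C, index 3d is i, index 3d+1 is j."""
    def bullet(x, y):
        xC, yC = x[2 * d:3 * d], y[2 * d:3 * d]
        xi, xj = x[3 * d], x[3 * d + 1]
        yi, yj = y[3 * d], y[3 * d + 1]
        base = star(xC, yC)
        # (x . y)_{R_C} = x_{R_C} * y_{R_C} + x_j y_{R_A} + x_{R_B} y_j
        RC = [base[t] + xj * y[t] + x[d + t] * yj for t in range(d)]
        return [0.0] * d + [0.0] * d + RC + [0.0, xi * yi]
    return bullet

d = 2
star = lambda a, b: [a[t] * b[t] for t in range(d)]  # toy bilinear map
s, s_prime = [2.0, 3.0], [5.0, 7.0]
u = s + s_prime + [0.0, 0.0] + [1.0, 0.0]
bullet = make_bullet(star, d)

uu = bullet(u, u)
print(bullet(uu, u)[2 * d:3 * d])   # (u.u).u carries s  in R_C
print(bullet(u, uu)[2 * d:3 * d])   # u.(u.u) carries s' in R_C
```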
Using the idea of the previous construction, we can reduce the problem for $(*,*',s)$ to the original problem. By the problem for $(*,*',s)$ we mean the problem where we can choose either $*$ or $*'$ in the place of each instance of $*$ instead of fixing $*$.
For two bilinear maps $*,*':\mathbb R^d\times\mathbb R^d\to\mathbb R^d$ and a vector $s\in\mathbb R^d$, consider the space $\mathbb R^{3d+2}$ and denote by $R_A,R_B,R_C$ and $i,j$ as in the first reduction. Let the system $(\bullet, u)$ with $\bullet: \mathbb R^{3d+2}\times\mathbb R^{3d+2}\to\mathbb R^{3d+2}$ and $u\in\mathbb R^{3d+2}$ be so that
\begin{gather*}
u_{R_A}=s,\qquad u_{R_B}=s,\qquad u_{R_C}=0,\\
u_i=1,\qquad u_j=0,
\end{gather*}
and for any two vectors $x, y$,
\begin{gather*}
(x\bullet y)_{R_A} = x_{R_C}*y_{R_C},\qquad (x\bullet y)_{R_B} = x_{R_C} *' y_{R_C},\\
(x\bullet y)_{R_C} = x_j y_{R_A} + x_{R_B} y_j,\\
(x\bullet y)_i = 0,\qquad (x\bullet y)_j = x_i y_i.
\end{gather*}
It is not hard to see that for any vector $v$ obtained from combining $n$ instances of $u$ using $\bullet$, if $v_{R_C}\ne 0$ then $n=5k+3$ for some $k$. Also, if $v_{R_A}$ or $v_{R_B}$ is not a zero vector, then $n=5k+1$ for some $k$. The growth rate of $(\bullet, u)$ is the fifth root of the growth rate of $(*,*',s)$. The verification is similar to that in Proposition \ref{prop:buffer-analysis} and we leave it to the readers.
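The residues modulo $5$ claimed above can be checked with a small support abstraction (our own over-approximation: we only track which of $R_A$, $R_B$, $R_C$, $i$, $j$ can be nonzero, which suffices for the residue claim):

```python
def step(x, y):
    """Support abstraction of the second reduction's operator: a state is
    (R_A nonzero?, R_B nonzero?, R_C nonzero?, i nonzero?, j nonzero?)."""
    A = x[2] and y[2]                       # R_A gets x_{R_C} *  y_{R_C}
    B = x[2] and y[2]                       # R_B gets x_{R_C} *' y_{R_C}
    C = (x[4] and y[0]) or (x[1] and y[4])  # x_j y_{R_A} + x_{R_B} y_j
    return (A, B, C, False, x[3] and y[3])  # i := 0, j := x_i y_i

u = (True, True, False, True, False)        # support of the starting vector u
reach = {1: {u}}
for n in range(2, 19):
    reach[n] = {step(x, y) for k in range(1, n)
                for x in reach[k] for y in reach[n - k]}

for n, states in reach.items():
    if any(s[2] for s in states):           # R_C can be nonzero => n = 5k+3
        assert n % 5 == 3
    if any(s[0] or s[1] for s in states):   # R_A/R_B nonzero => n = 5k+1
        assert n % 5 == 1
```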
A construction for a higher number of starting vectors or a higher number of bilinear operators, or both, can be established similarly by introducing more dimensions. We leave it to the readers as an exercise since the details would be tedious.
In conclusion, introducing more vectors and more operators does not make the problem any harder.
\section{Undecidability of checking $\lambda\le 1$ in the positive setting under an assumption}
\label{sec:undecidable-positive-setting}
As we can reduce the problem of checking $\rho\le 1$ for the joint spectral radius $\rho$ to the problem of checking $\lambda\le 1$ for the growth of bilinear maps in the nonnegative setting, one may wonder whether there is a similar reduction for the positive setting, where all the entries of $s$ must be positive. In this section, we give such a reduction under the assumption that the following conjecture holds.
\begin{conjecture}\label{conj:undecidable-jsr-positive}
It is undecidable to check $\rho\le 1$ for the joint spectral radius $\rho$ of a pair of \emph{positive} matrices.
\end{conjecture}
Provided the conjecture holds, we have a reduction that is almost the same as the one in Section \ref{sec:reduction} but with a slightly more complicated argument.
For a pair of \emph{positive} matrices $A,B$, we reuse the notations there and keep the operator $*$, but set the starting vector $s$ by
\begin{gather*}
s_{R_A}=\tilde \Gamma(A-X),\qquad s_{R_B}=\tilde \Gamma(B-Y),\qquad s_{R_C}=\epsilon,\\
s_i=1,\qquad s_j=\epsilon,
\end{gather*}
where the value $\epsilon>0$ is small enough and $X,Y$ are two matrices that will be given later. (The notation $\epsilon$ may denote a number or a vector, depending on the context.)
By the description of the operator $*$, one needs to take care of the range $R_C$ only. Let us analyze some initial values. At first,
\begin{align*}
\Gamma((s*s)_{R_C})&=\Gamma(\epsilon)^2 + \epsilon (A-X) + \epsilon (B-Y),\\
\Gamma(((s*s)*s)_{R_C})&=(\Gamma(\epsilon)^2 + \epsilon (A-X) + \epsilon (B-Y))\Gamma(\epsilon) + (A-X),\\
\Gamma((s*(s*s))_{R_C})&=\Gamma(\epsilon)(\Gamma(\epsilon)^2 + \epsilon (A-X) + \epsilon (B-Y)) + (B-Y).
\end{align*}
We need $X,Y$ to be such that
\begin{gather*}
\Gamma(((s*s)*s)_{R_C})=A,\\
\Gamma((s*(s*s))_{R_C})=B,\\
X<A,\qquad Y<B.
\end{gather*}
The first two requirements are equivalent to
\begin{align*}
X + (X+Y) \Gamma(\epsilon^2) &= \Gamma(\epsilon)^3 + A\Gamma(\epsilon^2) + B\Gamma(\epsilon^2),\\
Y + \Gamma(\epsilon^2)(X+Y) &= \Gamma(\epsilon)^3 + \Gamma(\epsilon^2)A + \Gamma(\epsilon^2)B.
\end{align*}
Such $X,Y$ always exist. (As the solution would be quite tedious, we leave it as an exercise for the readers.) It follows from the smallness of $\epsilon$ that $X,Y$ are also small, so the requirements $X<A$ and $Y<B$ are guaranteed.
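As a toy illustration of solvability, consider the scalar case where $\Gamma$ is the identity and the two equations coincide, so that we may take $X=Y$ (this simplification is ours; the general matrix case is the tedious exercise mentioned above):

```python
def solve_xy_scalar(A, B, eps):
    """Scalar case of the system: X + (X+Y)*eps^2 = eps^3 + (A+B)*eps^2
    and its twin coincide; choosing X = Y gives
    X * (1 + 2*eps^2) = eps^3 + (A+B)*eps^2."""
    X = (eps ** 3 + (A + B) * eps ** 2) / (1 + 2 * eps ** 2)
    return X, X

A, B, eps = 2.0, 3.0, 0.01
X, Y = solve_xy_scalar(A, B, eps)
# The equation holds, and X, Y are small, so X < A and Y < B.
assert abs(X + (X + Y) * eps ** 2 - (eps ** 3 + (A + B) * eps ** 2)) < 1e-12
assert X < A and Y < B
```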
Denoting $M_1=\Gamma(s_{R_C})=\Gamma(\epsilon)$ and $M_2=\Gamma((s*s)_{R_C})$, we have both $M_1<\Gamma(\epsilon')$ and $M_2 < \Gamma(\epsilon')$ for some $\epsilon'$ that depends on $\epsilon$. The value $\epsilon'$ can be made arbitrarily small by reducing $\epsilon$.
We make the following observation, whose verification is similar to that of Proposition \ref{prop:buffer-analysis} and left to the readers.
\begin{proposition}
The matrix form $\Gamma(v_{R_C})$ for any vector $v$ obtained by combining $n$ instances of $s$ is the product of some matrices from $\{A,B,M_1,M_2\}$. In particular, if $m_A,m_B,m_1,m_2$ are respectively the numbers of instances of $A,B,M_1,M_2$, then $m_1 + 2m_2 + 3(m_A+m_B) = n$. On the other hand, for any product of $m$ matrices from $\{A,B\}$, we have a combination for $n=3m$ so that $\Gamma(v_{R_C})$ is the product.
\end{proposition}
Since $\epsilon'$ can be made arbitrarily small, the contribution of the factors $M_1,M_2$ to the growth is negligible. It follows that $\lambda=\sqrt[3]{\rho\{A,B\}}$ as in Theorem \ref{thm:undecidable-estimate}. Therefore, the problem of checking $\lambda\le 1$ is undecidable in the positive setting under the assumption that Conjecture \ref{conj:undecidable-jsr-positive} holds.
\bibliographystyle{unsrt}
| {
"timestamp": "2022-03-15T01:37:00",
"yymm": "2201",
"arxiv_id": "2201.09850",
"language": "en",
"url": "https://arxiv.org/abs/2201.09850",
"abstract": "The following notion of growth rate can be seen as a generalization of joint spectral radius: Given a bilinear map $*:\\mathbb R^d\\times\\mathbb R^d\\to\\mathbb R^d$ with nonnegative coefficients and a nonnegative vector $s\\in\\mathbb R^d$, denote by $g(n)$ the largest possible entry of a vector obtained by combining $n$ instances of $s$ using $n-1$ applications of $*$. Let $\\lambda$ denote the growth rate $\\limsup_{n\\to\\infty} \\sqrt[n]{g(n)}$, Rosenfeld showed that the problem of checking $\\lambda\\le 1$ is undecidable by reducing the problem of joint spectral radius.In this article, we provide a simpler reduction using the observation that matrix multiplication is actually a bilinear map. Suppose there is no restriction on the signs, an application of this reduction is that the problem of checking if the system can produce a zero vector is undecidable by reducing the problem of checking the mortality of a pair of matrices. This answers a question asked by Rosenfeld. Another application is that the problem does not become harder when we introduce more bilinear maps or more starting vectors, which was remarked by Rosenfeld.It is known that if the vector $s$ is strictly positive, then the limit superior $\\lambda$ is actually a limit. However, we show that when $s$ is only nonnegative, the problem of checking the validity of the limit is undecidable. This also answers a question asked by Rosenfeld.We provide a formula for the growth rate $\\lambda$. A condition is given so that the limit is always ensured. This actually gives a simpler proof for the limit $\\lambda$ when $s>0$. An important corollary of the formula is the computability of the growth rate, which answers another question by Rosenfeld. Also, we relate the finiteness property of a set of matrices to a special structure called \"linear pattern\" for the problem of bilinear system.",
"subjects": "Combinatorics (math.CO)",
"title": "Growth of bilinear maps III: Decidability",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474872757474,
"lm_q2_score": 0.7154239836484144,
"lm_q1q2_score": 0.7078029205593642
} |
https://arxiv.org/abs/1502.00776 | Homomorphisms of binary Cayley graphs | A binary Cayley graph is a Cayley graph based on a binary group. In 1982, Payan proved that any non-bipartite binary Cayley graph must contain a generalized Mycielski graph of an odd-cycle, implying that such a graph cannot have chromatic number 3. We strengthen this result first by proving that any non-bipartite binary Cayley graph must contain a projective cube as a subgraph. We further conjecture that any homomorphism of a non-bipartite binary Cayley graph to a projective cube must be surjective and we prove some special case of this conjecture. | \section{Introduction}
For classic notation we will follow that of \cite{GodsilRoyle}. A
\emph{binary Cayley} graph is a Cayley graph $\mbox{Cay}(\Gamma, \Omega)$
where $\Gamma$ is a binary group (i.e., $x+x=0$ for any element $x$),
and $\Omega$ is any subset of $\Gamma$ (normally not including element
$0$). The vertices of the graph are the elements of $\Gamma$, and two
vertices $u$ and $v$ are adjacent if and only if $u-v \in
\Omega$. Thus $\mbox{Cay}(\Gamma, \Omega)$ is a simple graph when element
$0$ is not in $\Omega$. Hypercubes are the most famous examples of
binary Cayley graphs. In fact, for this reason, binary Cayley graphs
often are referred to as \emph{cube-like graphs}.
Other examples of binary Cayley graphs, which are essential for this
work, are the \emph{projective cubes}. A projective cube of dimension
$d$, denoted $\mathcal{PC}_d$, is defined as the Cayley graph
$\mbox{Cay}(\mathbb{Z}_2^d, \{e_1,e_2,\cdots, e_d, J\})$ where $(e_1, e_2,
\ldots, e_d)$ is the canonical basis and $J$ is the all-1
vector. The projective cube of dimension $d$ can be built from the
hypercube of dimension $d+1$ by identifying antipodal vertices; hence
the name. It can also be built, equivalently, from the
hypercube of dimension $d$ by adding edges between antipodal pairs of
vertices. This satisfies the Cayley graph definition given here. In
some literature they are also referred to as \emph{folded cubes}.
Projective cubes are studied for their highly symmetric structures.
Homomorphisms to projective cubes capture some important packing and
edge-coloring problems, see~\cite{N07, NRS13}.
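Concretely, $\mathcal{PC}_d$ is easy to generate from its Cayley definition. The following sketch (our own illustration) builds it with bitmask vertices and checks two facts stated in this paper: $\mathcal{PC}_2\cong K_4$, and $\mathcal{PC}_4$ is triangle-free while its generators trace a $5$-cycle, so its odd girth is $5$:

```python
def projective_cube(d):
    """PC_d = Cay(Z_2^d, {e_1, ..., e_d, J}); vertices are bitmasks 0..2^d-1."""
    gens = [1 << t for t in range(d)] + [(1 << d) - 1]
    return {v: {v ^ g for g in gens} for v in range(1 << d)}

pc2 = projective_cube(2)
assert all(u in pc2[v] for v in pc2 for u in pc2 if u != v)   # PC_2 = K_4

pc4 = projective_cube(4)
# no two adjacent vertices share a neighbor, i.e. PC_4 is triangle-free
assert all(not (pc4[v] & pc4[u]) for v in pc4 for u in pc4[v])

walk, x = [0], 0
for g in [1, 2, 4, 8, 15]:   # e_1, e_2, e_3, e_4, J sum to zero in Z_2^4
    x ^= g
    walk.append(x)
assert walk[-1] == 0 and len(set(walk[:-1])) == 5   # a 5-cycle through 0
```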
A graph $G$ is a {\em core} if it does not admit a homomorphism to a
proper subgraph of itself.
In this work we show the importance of projective cubes in the study
of homomorphisms of Cayley graphs on binary groups. Among other
properties, we will need the following results:
\begin{theorem}[Naserasr 2007 \cite{N07}]
The projective cube of dimension $2k-1$ is bipartite.
The projective cube of dimension $2k$ is of odd girth $2k+1$. Furthermore,
any pair of vertices of $\mathcal{PC}_{2k}$ is in a common cycle of length
$2k+1$.
\end{theorem}
\begin{corollary}\label{ProjectiveCubeCORE}
The projective cube of dimension $2k$ is a core.
\end{corollary}
In \cite{Payan98}, Payan proved a surprising result that there is no
binary Cayley graph of chromatic number 3. His proof was an
implication of the following stronger result based on the following
definition. Let $G$ be a graph on vertices $v^0_1, v^0_2, \ldots,
v^0_n$. The $k$-th level Mycielski graph of $G$, denoted $M^k(G)$, is
built from $G$ by adding vertices $v^1_1, v^1_2, \ldots,v^1_n$,
$v^2_1, v^2_2, \ldots, v^2_n$ up to $v^k_1, v^k_2, \ldots, v^k_n$
where if $v^0_i$ is adjacent to $v^0_j$, then $v^r_i$ is also adjacent
to $v^{r-1}_j$; finally, we add one more vertex $w$ which is joined
to all vertices $v^k_i$. We will use the following result of
Stiebitz, see~\cite{Matousek2003} for a proof.
\begin{lemma}[Stiebitz 1985 \cite{Stiebitz85}]
Let $C$ be an odd-cycle.
Then for any $i$, $\chi(M^i(C)) = 4$.
\end{lemma}
Payan proved the following stronger statement:
\begin{theorem}[Payan 1998 \cite{Payan98}]
\label{Payan}
Given a binary Cayley graph $\mbox{Cay}(\Gamma, \Omega)$ of odd-girth
$2k+1$, the $k$-th level Mycielski graph $M^k(C_{2k+1})$ is a
subgraph of $\mbox{Cay}(\Gamma, \Omega)$.
\end{theorem}
This in particular implies that the projective cube of dimension $2k$
contains the graph $M^{k}(C_{2k+1})$ as a subgraph. This fact is also
implied from the following view of the projective cubes.
First, recall that for any pair of integers $n,k$ with $k < n$, the
graph $K(n,k)$ is the {\em Kneser graph} of $k$-subsets of an $n$-set. Its vertex
set is made of the $n \choose k$ subsets of $\{1, \ldots, n\}$ of size $k$,
two of them being adjacent if they are disjoint.
Now, fix an integer $k$ and a set $\mathcal A$ of size
$2k+1$. Vertices of $\mathcal{PC}_{2k}$ can be regarded as the partitions $(A,
\bar A)$ of $\mathcal A$. We always assume $A$ is the smaller
part. Two such vertices $(A, \bar A)$ and $(B, \bar B)$ are adjacent
if either $A$ or $\bar A$ is obtained from $B$ by adding one more
element. This implies that the subgraph induced by vertices $(A, \bar
A)$ with $|A|=k$ is isomorphic to the Kneser graph $K(2k+1, k)$. To
find $M^k(C_{2k+1})$ in this graph, just take $v^0_1, v^0_2, \ldots,
v^0_{2k+1}$ to be a $(2k+1)$-cycle in this Kneser graph. Call
$(A_i,\bar{A_i})$ the partition associated with $v^0_i$. Then for each $j$,
$A_{j-1}$ and $A_{j+1}$ (indices are taken modulo $2k+1$) have
exactly $k-1$ elements in common. Let $A^1_j$ be this subset and
define $v^1_j$ to be $(A^1_j, \bar A^1_j)$. Continuing by induction, each
pair $v^i_{j-1}$ and $v^i_{j+1}$ of vertices defines a unique set of
size $k-i-1$, which defines $v^{i+1}_{j}$, with the last vertex being
$(\emptyset, \mathcal{A})$.
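The description above can be verified directly in the partition model; the following sketch (our own check) does so for $k=2$, confirming that the middle layer of $\mathcal{PC}_4$ induces the Kneser graph $K(5,2)$, i.e. the Petersen graph:

```python
from itertools import combinations

U = frozenset(range(5))                       # the (2k+1)-set, with k = 2

def partition(A):
    """Represent a partition (A, complement) with the smaller part first."""
    A = frozenset(A)
    B = U - A
    return (A, B) if len(A) <= len(B) else (B, A)

vertices = {partition(A) for r in range(3) for A in combinations(U, r)}
assert len(vertices) == 16                    # |V(PC_4)| = 2^4

def adjacent(P, Q):
    """(A, Abar) ~ (B, Bbar) iff some part of one is obtained from a part
    of the other by adding one element."""
    return any(len(p) == len(q) + 1 and q < p for p in P for q in Q) or \
           any(len(q) == len(p) + 1 and p < q for p in P for q in Q)

middle = [P for P in vertices if len(P[0]) == 2]
assert len(middle) == 10
for P, Q in combinations(middle, 2):
    # middle-layer adjacency coincides with Kneser (disjointness) adjacency
    assert adjacent(P, Q) == (not (P[0] & Q[0]))
```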
In Section \ref{sec:power}, we strengthen the result of Payan proving
that:
\begin{theorem}\label{PCasSubgraph}
Given a binary Cayley graph $\mbox{Cay}(\Gamma, \Omega)$ of odd-girth
$2k+1$, the projective cube $\mathcal{PC}_{2k}$ is a subgraph of
$\mbox{Cay}(\Gamma, \Omega)$.
\end{theorem}
Since a $k$-coloring of a graph $G$ is equivalent to a homomorphism of
$G$ to $K_k$, the corollary of Payan's theorem can be restated as
follows:
\begin{theorem}[Payan 1998 \cite{Payan98}]
\label{PayanCorollary}
If a non-bipartite binary Cayley graph admits a homomorphism to
$K_4$, then any such homomorphism must be a surjective mapping.
\end{theorem}
Considering the fact that $K_4$ is isomorphic to $\mathcal{PC}_{2}$,
we introduce the following conjecture in generalization of
Theorem~\ref{PayanCorollary}.
\begin{conjecture}
\label{MappingToPC2k}
If a non-bipartite binary Cayley graph admits a homomorphism to
$\mathcal{PC}_{2k}$, then any such homomorphism must be an onto
mapping.
\end{conjecture}
In Section \ref{sec:bintopc}, we reduce this conjecture to properties
of homomorphisms among projective cubes only. Then we prove a special
case.
\section{Power graphs and pseudo-duality}
\label{sec:power}
Given a set $A$, the {\em power set} of $A$ is the set of all subsets
of $A$. It is denoted by ${\cal P }(A)$. This set forms a binary group
together with the operation of \emph{symmetric difference}. In fact it
is isomorphic to $(\mathbb{Z}_{2}^{|A|},+)$, each subset being represented
by its characteristic vector.
For a graph $G$, let $\widehat{G}$ denote the Cayley graph $\mbox{Cay}(
{\cal P}(V(G)), E(G))$. This is the graph whose vertices are the subsets
of vertices of $G$ where two vertices are adjacent if their symmetric
difference is an edge of $G$. It is worth noting that $E(G)$ is the
smallest Cayley subset which makes the natural injection of $G$ into
$\widehat{G}$ a homomorphism. Recall that a homomorphism is an edge
preserving mapping of vertices.
The graph $P_n$ is the path on $n$ vertices. The power graph
$\widehat{P_n}$ consists of two connected components each isomorphic
to the hypercube of dimension $n-1$. For a cycle, $C_n$, the power
graph $\widehat{C_n}$ consists of two connected components each
isomorphic to the projective cube of dimension $n-1$.
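Both examples can be checked by generating $\widehat{G}$ with bitmask arithmetic (a sketch of our own; the vertex and edge encodings are assumptions of this illustration):

```python
def power_graph(n_vertices, edges):
    """G_hat = Cay(P(V(G)), E(G)): vertices are subsets of V(G) (bitmasks),
    u ~ v iff their symmetric difference (XOR) is an edge of G."""
    masks = [(1 << a) | (1 << b) for a, b in edges]
    return {v: {v ^ m for m in masks} for v in range(1 << n_vertices)}

def components(adj):
    """Connected components via depth-first search."""
    seen, comps = set(), []
    for v in adj:
        if v not in seen:
            comp, stack = set(), [v]
            while stack:
                x = stack.pop()
                if x not in comp:
                    comp.add(x)
                    stack.extend(adj[x])
            seen |= comp
            comps.append(comp)
    return comps

# P_3: two components of 4 vertices each (two copies of the hypercube Q_2)
p3_hat = power_graph(3, [(0, 1), (1, 2)])
assert [len(c) for c in components(p3_hat)] == [4, 4]

# C_5: two components of 16 vertices, each 5-regular (two copies of PC_4)
c5_hat = power_graph(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
assert sorted(len(c) for c in components(c5_hat)) == [16, 16]
assert all(len(c5_hat[v]) == 5 for v in c5_hat)
```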
In general the following holds.
\begin{lemma} \label{1}
For a graph $G$, an integer $n$ and a Cayley graph $H$ on ${\mathbb
Z}_2^n$, there exists a homomorphism from $G$ to $H$ if and only if
there exists a homomorphism from $\widehat{G}$ to $H$.
\end{lemma}
We will prove Lemma~\ref{1} in a much more general form,
encompassing all varieties of groups. Let ${\cal V}$ be
a variety of groups, that is, a class of groups defined
by a set of equations. For instance the variety of abelian groups
is defined by the equation $xy = yx$, and the groups
$\mathbb{Z}_2^n$ are (up to isomorphism) the finite
members of the variety of groups defined by the equation
$x^2 = 1$.
For a graph $G$, we denote by ${\cal F_V}(G)$ the free group
on the vertex set of $G$ in the variety ${\cal V}$, and
$S_{\cal V}(G)$ the following subset of ${\cal F_V}(G)$:
$$
S_{\cal V}(G) = \{ u^{-1}v : \{u,v\} \in E(G) \}.
$$
The general form of Lemma~\ref{1} is the following.
\begin{lemma} \label{1+}
Let $\mbox{Cay}(A,S)$ be a Cayley graph, where
$A$ is a group in ${\cal V}$. Then for a graph $G$,
there exists a homomorphism of $G$ to $\mbox{Cay}(A,S)$
if and only if there exists
a homomorphism of $\mbox{Cay}({\cal F_V}(G),S_{\cal V}(G))$ to $\mbox{Cay}(A,S)$.
\end{lemma}
\begin{proof}
By definition of $S_{\cal V}(G)$, the inclusion of $V(G)$ in ${\cal
F_V}(G)$ gives a natural homomorphism from $G$ to $\mbox{Cay}({\cal
F_V}(G),S_{\cal V}(G))$. Therefore, if there exists a homomorphism
of $\mbox{Cay}({\cal F_V}(G),S_{\cal V}(G))$ to $\mbox{Cay}(A,S)$, then there
exists a homomorphism from $G$ to $\mbox{Cay}(A,S)$.
Now suppose that there exists a graph homomorphism $\phi: G \rightarrow
\mbox{Cay}(A,S)$. Then $\phi$ extends to a group homomorphism
$\widehat{\phi}: {\cal F_V}(G) \rightarrow A$, and it is easy to see that
$\widehat{\phi}$ is also a graph homomorphism of $\mbox{Cay}({\cal
F_V}(G),S_{\cal V}(G))$ to $\mbox{Cay}(A,S)$. Indeed, if the set
$\{w_1,w_2\}$ is an edge in $\mbox{Cay}({\cal F_V}(G),S_{\cal V}(G))$, then
$w_1^{-1}w_2 = u^{-1}v$ for some $\{u,v\} \in E(G)$, whence
$\widehat{\phi}(w_1)^{-1}\widehat{\phi}(w_2) = \phi(u)^{-1}\phi(v)$ which is in $S$.
\end{proof}
Note that when ${\cal V}$ is the variety of all groups, then ${\cal
F_V}(G)$ is simply the free group on $V(G)$, and Lemma~\ref{1+}
presents $\mbox{Cay}({\cal F_V}(G),S_{\cal V}(G))$ as the smallest Cayley
graph into which $G$ admits a homomorphism. By a result of Sabidussi
\cite{sabidussi64} reformulated in~\cite{HahnTardif96}, every
vertex-transitive graph is a retract of a Cayley graph. Therefore
$\mbox{Cay}({\cal F_V}(G),S_{\cal V}(G))$ is also the smallest
vertex-transitive graph into which $G$ admits a homomorphism. In
particular, the chromatic number of $\mbox{Cay}({\cal F_V}(G),S_{\cal
V}(G))$ is equal to that of $G$, since the chromatic number is
defined in terms of homomorphisms into complete graphs, which are
Cayley graphs. The fractional chromatic number of $G$ is defined in
terms of homomorphisms to Kneser graphs (see
\cite{ScheinermanUllman97}), which are seldom Cayley graphs (see
\cite{Scapellato96}) but nonetheless vertex-transitive; therefore the
fractional chromatic number of $\mbox{Cay}({\cal F_V}(G),S_{\cal V}(G))$ is
equal to that of $G$.
When ${\cal V}$ is the variety of abelian groups, then the chromatic
number of the Cayley graph $\mbox{Cay}({\cal F_V}(G),S_{\cal V}(G))$ is
again equal to that of $G$, since the complete graphs are also Cayley
graphs on abelian groups. However the fractional chromatic number of
$\mbox{Cay}({\cal F_V}(G),S_{\cal V}(G))$ may be larger than that of $G$.
For instance, it can be shown that the fractional chromatic number of
the Petersen graph $P$ is $\frac{5}{2}$, while that of $\mbox{Cay}({\cal
F_V}(P),S_{\cal V}(P))$ is $3$.
Now, the finite groups in the variety ${\cal V}$ defined by the
identity $x^2 = 1$ are all isomorphic to $\mathbb{Z}_2^n$ for some
$n$. Therefore only the complete graphs whose number of vertices is a
power of $2$ are Cayley graphs on groups in ${\cal V}$, so for an
arbitrary graph $G$, even the chromatic number of $\mbox{Cay}({\cal
F_V}(G),S_{\cal V}(G))$ (which is precisely $\widehat{G}$) may be
larger than that of $G$. In essence, Theorem~\ref{PayanCorollary}
goes a step further than this observation, by stating that the number
$3$ does not even belong to the range of chromatic numbers of Cayley
graphs of groups in ${\cal V}$.
Note that $\widehat{C_n}$ consists of two disjoint copies of
$\mathcal{PC}_{n-1}$. Thus if $C_{2k+1}$ maps to a binary Cayley graph
$G$, then, by Lemma~\ref{1}, the projective cube $\mathcal{PC}_{2k}$
maps to $G$. Furthermore, if $2k+1$ is the length of the shortest
odd-cycle of $G$, then in any mapping of $\mathcal{PC}_{2k}$ to $G$ no
two vertices of $\mathcal{PC}_{2k}$ can be identified. This proves the claim of
Theorem~\ref{PCasSubgraph}.
\section{Mapping binary Cayley graphs to projective cubes}
\label{sec:bintopc}
By restating Payan's theorem with the language of homomorphisms, we
obtain Theorem~\ref{PayanCorollary}. This led us to formulate
Conjecture~\ref{MappingToPC2k}, suggesting that what makes 4-coloring
so special is the fact that $\mathcal{PC}_2$ is isomorphic to $K_4$.
In the context of this conjecture, note that since $G$ is not
bipartite it contains an odd-cycle. Let $2r+1$ be the length of a
shortest odd-cycle of $G$. Since $G$ maps to $\mathcal{PC}_{2k}$ and
since the odd-girth of $\mathcal{PC}_{2k}$ is $2k+1$, we have $r\geq
k$. On the other hand Theorem~\ref{PCasSubgraph} tells us that $G$
contains $\mathcal{PC}_{2r}$ as a subgraph. Since $\mathcal{PC}_{2r}$
itself is a binary Cayley graph, Conjecture~\ref{MappingToPC2k} is
equivalent to the following conjecture.
\begin{conjecture}\label{MappingAmongPC2k}
Given $r\geq k$, any mapping of $\mathcal{PC}_{2r}$ to
$\mathcal{PC}_{2k}$ must be onto.
\end{conjecture}
When $k$ is equal to 1, this conjecture is equivalent to Payan's
theorem and is implied by the fact that $M^r(C_{2r+1})$ is a subgraph
of $\mathcal{PC}_{2r}$, as mentioned in the introduction. The case
when $k$ is equal to $r$ is also equivalent to stating that
$\mathcal{PC}_{2k}$ is a core, as observed in Corollary
\ref{ProjectiveCubeCORE}. In the next theorem we verify the
conjecture for $k=2$ and $r=3$. In other words, we prove that any
homomorphism of $\mathcal{PC}_6$ into $\mathcal{PC}_4$ must be
surjective. We start with a couple of observations that might be
useful in the general case.
\begin{observation}\label{obs:dist2}
If $f: \mathcal{PC}_{2k+2} \rightarrow \mathcal{PC}_{2k}$ is a
homomorphism and $f(x)=f(y)$, then $x$ and $y$ have a common
neighbor, i.e., they are at distance 2.
\end{observation}
\begin{proof}
Vertices $x$ and $y$ belong to a cycle of length $2k+3$ in
$\mathcal{PC}_{2k+2}$. If they are not at distance 2, then there
would be a cycle of odd length strictly smaller than $2k+1$ in
$\mathcal{PC}_{2k}$ which is a contradiction.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,x=.5cm,y=.5cm]
\draw (0,0) node [above] {$\emptyset$};
\fill (0,0) circle (1.5pt);
\draw (-6,-3) node [left] {$1$};
\draw (-3,-3) node [left] {$2$};
\foreach \i in {3,4,...,5}{
\draw (3*\i - 9,-3) node [right] {$\i$};
}
\foreach \i in {1,2,...,5}{
\fill (3*\i - 9,-3) circle (1.5pt);
\draw (0,0) -- (3*\i - 9,-3);
\foreach \j in {1,2,...,4}{
\draw [dotted] (3*\i - 9,-3) -- (.5*\j + 3*\i - 10.25, -4.5);
}
}
\coordinate (centre) at (0,-10.5);
\foreach \angle in {90,162,18,234,306}{
\coordinate (a) at ($ (centre) + (\angle:2)$);
\coordinate (b) at ($ (centre) + (\angle-144:2)$);
\coordinate (c) at ($ (centre) + (\angle-144:4)$);
\coordinate (d) at ($ (centre) + (\angle-72:4)$);
\coordinate (e) at ($ (b) + (\angle - 154:1)$);
\coordinate (f) at ($ (b) + (\angle - 134:1)$);
\coordinate (g) at ($ (c) + (\angle - 154:1)$);
\coordinate (h) at ($ (c) + (\angle - 134:1)$);
\draw (a) -- (b) -- (c) -- (d);
\draw [dotted] (b) -- (e);
\draw [dotted] (b) -- (f);
\draw [dotted] (c) -- (g);
\draw [dotted] (c) -- (h);
\fill (b) circle (1.5pt);
\fill (c) circle (1.5pt);
}
\draw ($ (centre) + (18:2)$) node[below] {$24$};
\draw ($ (centre) + (18:4)$) node[above] {$35$};
\draw ($ (centre) + (90:2)$) node[right] {$34$};
\draw ($ (centre) + (90:4)$) node[right] {$12$};
\draw ($ (centre) + (162:2)$) node[below] {$13$};
\draw ($ (centre) + (162:4)$) node[above] {$45$};
\draw ($ (centre) + (234:2)$) node[right] {$15$};
\draw ($ (centre) + (234:4)$) node[left] {$23$};
\draw ($ (centre) + (306:2)$) node[left] {$25$};
\draw ($ (centre) + (306:4)$) node[right] {$14$};
\end{tikzpicture}
\end{center}
\caption{A depiction of $\mathcal{PC}_{4}$.}
\label{fig:pc4}
\end{figure}
\begin{corollary}\label{preImageOf5}
If $f: \mathcal{PC}_{2k+2} \rightarrow \mathcal{PC}_{2k}$ is a homomorphism and
$|f^{-1}(x)|\geq 5$ for some vertex $x\in V(\mathcal{PC}_{2k})$, then
$f^{-1}(x) \subseteq N(a)$ for some $a\in V(\mathcal{PC}_{2k+2})$.
\end{corollary}
\begin{proof}
Using the partition notation, and without loss of generality, we may
assume that the vertex associated with the empty set is in
$f^{-1}(x)$. Then every other vertex in $f^{-1}(x)$ must be a
2-subset of $\{1, \ldots, 2k+3\}$. Moreover, they must be at distance 2
from each other, so each pair of 2-subsets in $f^{-1}(x)$ has a
non-empty intersection. In order to have four such 2-subsets, there
has to be a fixed element (say $i$) in all of them. Let $a$ be the
vertex associated with the set $\{i\}$ in $\mathcal{PC}_{2k+2}$, we
then have $f^{-1}(x) \subseteq N(a)$.
\end{proof}
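The combinatorial step in the proof above, that four pairwise-intersecting $2$-subsets must share a common element while three need not, can be confirmed by brute force:

```python
from itertools import combinations

pairs = list(combinations(range(7), 2))   # all 2-subsets of a 7-set

def pairwise_intersecting(family):
    """True iff every two members of the family intersect."""
    return all(set(p) & set(q) for p, q in combinations(family, 2))

# Three pairwise-intersecting 2-subsets may form a "triangle" with no
# common element...
assert pairwise_intersecting([(0, 1), (1, 2), (0, 2)])
assert not (set((0, 1)) & set((1, 2)) & set((0, 2)))

# ...but any four distinct pairwise-intersecting 2-subsets share an element.
for family in combinations(pairs, 4):
    if pairwise_intersecting(family):
        assert set(family[0]) & set(family[1]) & set(family[2]) & set(family[3])
```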
\begin{observation}\label{obs:noPreimageOf6}
If there exists a homomorphism of $\mathcal{PC}_{6}$ into
$\mathcal{PC}_{4}$ which is not surjective and such that a vertex of
$\mathcal{PC}_{4}$ has a pre-image of size $6$, then there exists a
homomorphism of $\mathcal{PC}_{6}$ into $\mathcal{PC}_{4}$ which is
not surjective and with no vertex of $\mathcal{PC}_{4}$ being the
image of $6$ vertices of $\mathcal{PC}_{6}$.
\end{observation}
\begin{proof}
Let $f$ be a homomorphism of $\mathcal{PC}_{6}$ into
$\mathcal{PC}_{4}$ which is not surjective and such that a vertex
$x$ of $\mathcal{PC}_{4}$ has a pre-image of size $6$. By Corollary
\ref{preImageOf5}, there exists a vertex $a$ of $\mathcal{PC}_{6}$
such that $f^{-1}(x) \subseteq N(a)$. Without loss of generality, we
may assume that $a$ is the vertex associated with the empty set and
that $f^{-1}(x)$ consists of the singletons $\{1\}$ through
$\{6\}$. Let $y$ be the image of the singleton $\{7\}$. It cannot be
the image of $7$ vertices (otherwise it would be the whole
neighborhood of a vertex in $\mathcal{PC}_{6}$ but each of the
neighbors of $\{7\}$ has one of its neighbors mapped to
$x$). Therefore, mapping the singleton $\{7\}$ to $x$ does not
create a new vertex of $\mathcal{PC}_{4}$ being the image of $6$
vertices of $\mathcal{PC}_{6}$. One can easily check that the modified
map is still a homomorphism and remains non-surjective. We have thus
built a homomorphism from $\mathcal{PC}_{6}$ to $\mathcal{PC}_{4}$
which is not surjective and has strictly fewer vertices of
$\mathcal{PC}_{4}$ being the image of exactly $6$ vertices of
$\mathcal{PC}_{6}$. We may repeat this process until no such
vertex remains.
\end{proof}
\begin{observation}\label{obs:5gives5}
Let $f$ be a homomorphism of $\mathcal{PC}_{6}$ into
$\mathcal{PC}_{4}$. If there is a vertex $x$ of $\mathcal{PC}_{4}$
with a pre-image of size 5 or more, then there is a vertex $y$
adjacent to $x$ with a pre-image of size 5 or more. Moreover the
common neighbor of the vertices in the pre-image of $y$ is adjacent
to the common neighbor of the vertices in the pre-image of $x$.
\end{observation}
\begin{proof}
With Corollary \ref{preImageOf5}, we may assume that $f^{-1}(x) =
\left\{ \{1\}, \{2\}, \{3\}, \{4\}, \{5\} \right\}$, the empty set
being the common neighbor of the pre-image of $x$. The set $f^{-1}(x)$
has twenty-one neighbors in $\mathcal{PC}_{6}$ that must be mapped to the five
neighbors of $x$. One of these neighbors of $x$ must therefore have a
pre-image of size 5 or more; let it be $y$. The only vertices having
at least five neighbors in $N(f^{-1}(x))$ are the vertices
associated with singletons. Therefore the common neighbor of the
vertices in the pre-image of $y$ is a singleton, which is adjacent to
the empty set.
\end{proof}
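The counts in the two proofs above can be checked computationally. The sketch below builds $\mathcal{PC}_{6}$ in the poset notation under an assumed adjacency rule that is consistent with the counts in the text: vertices are the subsets of $\{1,\dots,7\}$ of size at most 3 (each larger subset identified with its complement), two vertices adjacent iff their symmetric difference has size $1$ or $6$:

```python
from itertools import combinations

# Assumed model of PC_6 (consistent with the degrees and counts in the text):
ground = range(1, 8)
V = [frozenset(c) for k in range(4) for c in combinations(ground, k)]

def adj(a, b):
    return len(a ^ b) in (1, 6)

# 64 vertices, 7-regular
assert len(V) == 64
assert all(sum(adj(v, w) for w in V) == 7 for v in V)

# N(f^{-1}(x)) for f^{-1}(x) = {{1},...,{5}} has twenty-one vertices ...
pre = [frozenset([i]) for i in range(1, 6)]
N = {w for w in V if any(adj(v, w) for v in pre)}
print(len(N))  # 21

# ... and only singleton vertices have five or more neighbours inside it.
big = [v for v in V if sum(adj(v, w) for w in N) >= 5]
assert all(len(v) == 1 for v in big)
```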
\begin{theorem}\label{thm:main}
Any homomorphism of $\mathcal{PC}_{6}$ into $\mathcal{PC}_{4}$ must be onto.
\end{theorem}
\begin{proof}
For a contradiction, let $f: \mathcal{PC}_{6} \rightarrow
\mathcal{PC}_{4}$ be a homomorphism which is not onto. By Observation
\ref{obs:noPreimageOf6}, we may assume that for every vertex $x$ in
$\mathcal{PC}_{4}$, the size of $f^{-1}(x)$ is not equal to $6$.
We consider two cases:
{\bf Case 1. There is a vertex $\mathbf{x}$ such that
$\mathbf{|f^{-1}(x)|=7}$.} We may assume that the pre-image of $x$
consists exactly of the singletons. Then $f$ maps the twenty-one vertices of
size 2 into the five neighbors of $x$, thus there should be a neighbor
$y$ of $x$ which is the image of five such vertices. These five
vertices must share a common element (same arguments as for Corollary
\ref{preImageOf5}). Therefore, we may consider that they are
associated with the sets $\{1,2\}, \{1,3\}, \ldots, \{1,6\}$. By
additionally mapping the empty set and the 2-subset $\{1,7\}$ to $y$,
we still have a non-surjective homomorphism with no pre-image of size 6
(same arguments as for Observation \ref{obs:noPreimageOf6}). Therefore, we
may assume that $f^{-1}(x)=N(\emptyset)$ and $f^{-1}(y)=N(\{1\})$.
The remaining 2-subsets (which are the 2-subsets of $\{2,\dots,7\}$)
have to be mapped to the four other neighbors of $x$. Among the
3-subsets, the ones containing the element $1$ have to be mapped to
the four other neighbors of $y$. The remaining sets are the 3-subsets
of $\{2,\dots,7\}$. In $\mathcal{PC}_{6}$, they induce a matching, each set being
matched to its complement within $\{2,\dots,7\}$.
The fifteen 2-subsets of $\{2,\dots,7\}$ have to be mapped within the
four neighbors of $x$ which are not $y$. Two such sets can have the
same image only if they share an element. Therefore, the restriction
of $f$ to these vertices induces a proper coloring of the vertices of
the Kneser graph $K(6,2)$. Since $K(6,2)$ is 4-chromatic, all four of these
neighbors of $x$ have a non-empty pre-image. The same argument works for
the neighbors of $y$.
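The 4-chromaticity of $K(6,2)$ used here is classical; the upper bound can be checked directly (a sketch, not from the paper; the matching lower bound $\chi \geqslant 4$ is the Lov\'asz--Kneser theorem $\chi(K(n,k)) = n - 2k + 2$):

```python
from itertools import combinations

# K(6,2): 2-subsets of {1,...,6}, adjacent iff disjoint.
verts = [frozenset(c) for c in combinations(range(1, 7), 2)]

# Standard Kneser colouring: colour = min element if <= 3, else colour 4.
colour = {v: min(v) if min(v) <= 3 else 4 for v in verts}

# Verify properness: no two disjoint 2-subsets receive the same colour.
proper = all(colour[a] != colour[b]
             for a, b in combinations(verts, 2) if not (a & b))
print(proper)  # True
```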
In $\mathcal{PC}_{4}$ there are six vertices which are neither adjacent to $x$
nor to $y$. These six vertices induce a matching in $\mathcal{PC}_{4}$. Each of
the 3-subsets of $\{2,\dots,7\}$ has to be mapped simultaneously to a
neighbor of a neighbor of $x$ and a neighbor of a neighbor of
$y$. So these twenty vertices are mapped to the aforementioned six
vertices of $\mathcal{PC}_{4}$. Both sets induce matchings in their respective
graphs, hence if a vertex $a$ is mapped to a vertex $z$, the match of
$a$ has to be mapped to the match of $z$. In other words, if some
vertex $z$ is not in the image of $f$, its match is not either. Since
$f$ is not onto, there must be two such vertices. Thus, all twenty
vertices have to be mapped to four vertices, and one of these four
vertices must have a pre-image of size at least 5. By Corollary
\ref{preImageOf5}, its pre-image is included in the neighborhood of
some vertex in $\mathcal{PC}_{6}$. But no five of the twenty
considered vertices lie in a common neighborhood, a contradiction.
We note that we may actually map the twenty remaining vertices of
$\mathcal{PC}_{6}$ to the six remaining vertices of
$\mathcal{PC}_{4}$, and then obtain a homomorphism of
$\mathcal{PC}_{6}$ into $\mathcal{PC}_{4}$.
{\bf Case 2. For every vertex $\mathbf{x}$ of
$\mathbf{\mathcal{PC}_{4}}$, $\mathbf{|f^{-1}(x)| \leq 5}$.} In this
case we first note that if $|f^{-1}(x)|=5$ then all five neighbors of $x$
must be in the image of $f$. Otherwise, the twenty-one neighbors of
$f^{-1}(x)$ are mapped to only four vertices and therefore we have a
neighbor $z$ of $x$ with $|f^{-1}(z)| \geq 6$.
Since $f$ is not onto, there is a vertex $z$ in $\mathcal{PC}_{4}$ with an empty
pre-image. Then every neighbor of $z$ is the image of at most four
vertices of $\mathcal{PC}_{6}$.
{\bf Case 2.1} Suppose there is a neighbor $t$ of $z$ which has a
pre-image of size 4.
Without loss of generality, and using Observation \ref{obs:dist2} and
symmetry arguments, either
$f^{-1}(t)=\left\{\{1\},\{2\},\{3\},\{4\}\right\}$ or
$f^{-1}(t)=\left\{\emptyset,\{1,2\},\{2,3\},\{1,3\}\right\}$.
{\bf Case 2.1.1} Suppose $f^{-1}(t)=\{\emptyset, \{1,2\}, \{1,3\},
\{2,3\}\}$. Then there are twenty vertices in $N(f^{-1}(t))$ and they
must map to four vertices only. So $N(f^{-1}(t))$ should be
partitioned into four sets of size 5, each part being vertices with a
common neighbor in $\mathcal{PC}_{6}$. But the only vertices having
five neighbors in $N(f^{-1}(t))$ are the vertices associated with the
empty set, $\{1,2\}$, $\{1,3\}$, and $\{2,3\}$. Thus they must be
the centers of the four parts; we denote the corresponding parts
by $P_{\emptyset}, P_{\{1,2\}}, P_{\{1,3\}}$ and
$P_{\{2,3\}}$. Private neighborhoods give us that vertices
$\{4\},\{5\},\{6\}$ and $\{7\}$ are in $P_{\emptyset}$, vertices
$\{1,2,4\},\{1,2,5\},\{1,2,6\}$ and $\{1,2,7\}$ are in $P_{\{1,2\}}$,
vertices $\{1,3,4\},\{1,3,5\},\{1,3,6\}$ and $\{1,3,7\}$ are in
$P_{\{1,3\}}$, and finally vertices $\{2,3,4\},\{2,3,5\},\{2,3,6\}$
and $\{2,3,7\}$ are in $P_{\{2,3\}}$. Moreover, each set then contains
exactly one of the four other vertices in $N(f^{-1}(t))$, i.e.,
$\{1\}, \{2\}, \{3\}, \{1,2,3\}$.
Let $x$ be the image of the five vertices of $P_{\emptyset}$. By
Observation \ref{obs:5gives5}, there must be a neighbor $y$ of $x$ in
$\mathcal{PC}_{4}$ and a neighbor $a$ of the empty set in
$\mathcal{PC}_{6}$ such that five of the seven neighbors of $a$ are
mapped into $y$, let $N'(a)$ be these five vertices. Note that for
each $b$ in $\{1,2,3\}$, three of the neighbors of $\{b\}$ are already
mapped into $t$, so $a$ cannot be a singleton included in
$\{1,2,3\}$. We may then assume without loss of generality that $a$ is
the singleton $\{4\}$. Then we observe that for any choice of $N'(a)$,
this set $N'(a)$ will have a neighbor in each of the sets
$P_{\emptyset}, P_{\{1,2\}}, P_{\{1,3\}}, P_{\{2,3\}}$. Therefore
vertices $t, y, f(P_{\emptyset}), f(P_{\{1,2\}}), f(P_{\{1,3\}})$ and
$f(P_{\{2,3\}})$ would induce a $K_{2,4}$ in $\mathcal{PC}_{4}$ which
is a contradiction.
{\bf Case 2.1.2} Suppose $f^{-1}(t)=\{\{1\}, \{2\}, \{3\}, \{4\}\}$. The set
$f^{-1}(t)$ has nineteen neighbors in $\mathcal{PC}_{6}$ and they should map, by $f$,
to only four neighbors of $t$ in $\mathcal{PC}_{4}$. Thus the neighborhood of
$f^{-1}(t)$ is partitioned into four sets, three of which are of size
5 and the last one of size 4. The parts of size 5 must each consist of common
neighbors of a vertex in $\mathcal{PC}_{6}$, and such a central vertex must
be of the form $\{i\}$, but only one such $i$ can be in $\{5, 6,
7\}$. So without loss of generality we may assume that the first two
parts of size 5 are subsets of $N(\{1\})$ and $N(\{2\})$. Furthermore,
since $\{1,2\}$ can lie in only one of these two parts, we may assume it is
not in the first one; similarly we may assume the first part does not
contain $\emptyset$. Thus the first part is precisely
$P=\{\{1,3\},\{1,4\},\{1,5\},\{1,6\},\{1,7\}\}$. Let $x$ be the image
of $P$. Then each neighbor of $\emptyset$ and $\{1, 2\}$ except
$\{2\}$ is also a neighbor of a vertex in $P$. Furthermore,
$f(\{2\})=t$ is also adjacent to $x$. Then if we change $f$ only at
these places, namely defining $f'(\emptyset)=f'(\{1,2\})=x$ and
$f'(a)=f(a)$ otherwise, we obtain a new homomorphism $f'$
whose image is a subset of the image of $f$. This new homomorphism
$f'$ has a vertex with a pre-image of size $7$. But by Case
1, this is impossible.
{\bf Case 2.2} We finally focus our attention on the case where every
neighbor of $z$ is the image of at most three vertices of
$\mathcal{PC}_{6}$. We remind the reader that we are under the
assumption that the pre-image of each vertex has size at most 5. Under all
these assumptions we prove the following claim:
\begin{claim}\label{claim:laclaim}
If vertices $x$ and $y$ of $\mathcal{PC}_{4}$ are such that
$|f^{-1}(x)|=|f^{-1}(y)|=5$ and $x$ is adjacent to $y$, then
$f^{-1}(x) \subset N(a)$ and $f^{-1}(y) \subset N(b)$ for some vertices
$a$ and $b$ of $\mathcal{PC}_{6}$ which are adjacent.
\end{claim}
Let $a$ be the common neighbor of the vertices in $f^{-1}(x)$. Note
that $z$ and $x$ are not adjacent (every neighbor of $z$ has a
pre-image of size at most 3, while $|f^{-1}(x)|=5$) and, therefore, have
two common neighbors. Each of these two common neighbors is the image of
at most three vertices of $\mathcal{PC}_{6}$. Thus the twenty-one vertices of
$N(f^{-1}(x))$ must be partitioned into five sets, three of which are
of size 5 and the other two of size exactly 3. Then $y$ must be the
image of one of the parts of size 5, and such a part of five vertices can
only lie in the neighborhood of a vertex $b$ adjacent to $a$. This
concludes the proof of Claim \ref{claim:laclaim}.
Having observed this, note that there is no vertex mapped to $z$ and
each neighbor of $z$ is the image of at most three vertices; thus at
least forty-nine vertices are mapped to the ten vertices at distance 2
from $z$, and, therefore, at least nine of those ten have a pre-image of
exactly five vertices. These nine vertices induce a subgraph isomorphic to $P^-$,
that is the Petersen graph minus a vertex. Now we consider a mapping
$g$ of $P^-$ which sends each of these nine vertices to the center of
their pre-images under $f$. By Claim \ref{claim:laclaim}, this is a
homomorphism of $P^-$ into $\mathcal{PC}_{6}$. But $P^-$ contains a
$C_5$ while $\mathcal{PC}_{6}$ has odd-girth 7. This contradiction
concludes the proof of Theorem \ref{thm:main}.
\end{proof}
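The two structural facts about the target graphs used at the end of this proof (non-adjacent vertices of $\mathcal{PC}_{4}$ have exactly two common neighbours, and the odd-girths of $\mathcal{PC}_{4}$ and $\mathcal{PC}_{6}$ are $5$ and $7$) can be verified computationally. A sketch under the same assumed poset model as before:

```python
from itertools import combinations, count

def projective_cube(n):
    # Assumed model: subsets of {0,...,n-1} of size <= n//2 (larger subsets
    # identified with complements); adjacent iff |symmetric difference| is
    # 1 or n-1.  n=5 gives PC_4, n=7 gives PC_6.
    V = [frozenset(c) for k in range(n // 2 + 1) for c in combinations(range(n), k)]
    A = {v: {w for w in V if len(v ^ w) in (1, n - 1)} for v in V}
    return V, A

def odd_girth(V, A):
    # Smallest odd g admitting a closed walk of length g; a closed odd walk
    # always contains an odd cycle of at most the same length.
    for g in count(3, 2):
        for v in V:
            reach = {v}
            for _ in range(g):
                reach = {u for w in reach for u in A[w]}
            if v in reach:
                return g

V4, A4 = projective_cube(5)
V6, A6 = projective_cube(7)

# PC_4: any two distinct non-adjacent vertices have exactly 2 common neighbours.
assert all(len(A4[u] & A4[v]) == 2
           for u, v in combinations(V4, 2) if u not in A4[v])

print(odd_girth(V4, A4), odd_girth(V6, A6))  # 5 7
```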
From these results, one can derive the following corollary.
\begin{corollary}
Let $G$ be a binary Cayley graph of odd-girth $7$. If $G$ admits a
homomorphism to $\mathcal{PC}_{4}$, then any such mapping must be onto.
\end{corollary}
\section{Concluding remarks}
Conjecture~\ref{MappingAmongPC2k} can be strengthened in two steps
each of which may give a new idea for proving it. The first
strengthening is based on the following notion.
Given a graph $G$ and a positive integer $l$, we define the
\emph{$l$-th walk power} of $G$, denoted $G^{(l)}$, to be the graph on
the vertex set of $G$ in which two vertices $x$ and $y$
are adjacent if there is a walk of length $l$ connecting $x$ and $y$
in $G$. It follows from this definition that if $\varphi$ is a
homomorphism of $G$ to $H$, then $\varphi$ is also a homomorphism of
$G^{(l)}$ to $H^{(l)}$. Since $\mathcal{PC}_{2k}^{(2k-1)}$ is isomorphic to
$K_{2^{2k}}$, Conjecture~\ref{MappingAmongPC2k} would be implied by
the following conjecture:
\begin{conjecture}
For $r\geq k$ we have $\chi(\mathcal{PC}_{2r}^{(2k-1)})\geq 2^{2k}$.
\end{conjecture}
It seems then that the methods of algebraic topology used for graph
coloring are the best tools for proving this conjecture. To this end we
suggest the following stronger conjecture; we refer to
\cite{Matousek2003} for the definitions and details required for this
conjecture.
\begin{conjecture}
For $r\geq k$ the simplicial complex associated to $\mathcal{PC}_{2r}^{(2k-1)}$ is
$2^{2k}$-connected.
\end{conjecture}
Finally, for odd values of $k$ the projective cube $\mathcal{PC}_{k}$
is a bipartite graph and homomorphism problems to or among these
graphs are trivial. However, the theory becomes more complicated under
the notion of signed graph homomorphisms and signed projective cubes
as studied in~\cite{NRS13}. An analogue of this work for the case of
signed projective cubes is under development.
\bibliographystyle{plain}
% Source: https://arxiv.org/abs/1708.01598
% Title: On covering a ball by congruent subsets in normed spaces
% Abstract: We consider the covering of a ball in certain normed spaces by its
% congruent subsets and show that if the finite number of sets is not greater
% than the dimensionality of the space, then the centre of the ball either
% belongs to the interior of each set, or doesn't belong to the interior of any
% set. We also provide some examples when it belongs to the interior of exactly
% one set. These are the specific cases of the modified problem originally
% posed for dissection.
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}
\hskip 1.5em
In \cite[C6, p. 87]{croft1991} the problem, attributed to S.K. Stein, is posed:
``Whether it is possible to partition the unit circle into congruent pieces so that the center is in the interior of one of the pieces?''.
At present, for an arbitrary number of pieces it is considered to be unsolved (\cite{mathoverflow17313}).
It can be generalized and varied in many ways, not only in dimensionality, as stated in the same place (\cite[p. 88]{croft1991}).
Some related questions were studied and answered more or less fully; see
\cite{douwen1993}, \cite{edelstein1988}, \cite{haddley2016}, \cite{kiss2016}, \cite{richter2008}, \cite{wagon1983}, to name a few.
A problem of this kind may depend greatly on the meaning of involved terms like ``piece'', ``partition'', ``congruence'':
do we allow the pieces to intersect at boundaries?
does congruence include reflection? should the piece be connected? measurable?
For example, it is shown in \cite{wagon1983} that the ball in $\mathbb{R}^m$ cannot be ``strictly'' dissected
into $n\in [2;m]$ topologically congruent pieces, to say nothing of the centre; see also \cite{waerden1949}, \cite[25.A.6, p. 599]{gleason1980}, \cite{edelstein1988}.
Hereinafter, we distinguish between 3 types of ``decomposition'' of the set $B$ (in particular, the~ball) into the congruent (sub)sets $\{ A_i \}_{i\in I}$,
so that $B = \bigcup\limits_{i \in I} A_i$ (cf. \cite[p. 79]{croft1991}, \cite[p. 49]{hertel2003}):
$\bullet$ {\it partition}: $\{ A_i \}$ are pairwise disjoint;
$\bullet$ {\it dissection}: interiors of $\{ A_i \}$ are pairwise disjoint;
$\bullet$ {\it covering} (or {\sl intra}{\it covering} to emphasize $A_i \subseteq B$): no additional constraints are {\sl required}.
These terms aren't ``standardized'', and may have quite different meaning in other works.
Any partition is a dissection, and any dissection is a covering. Therefore, the impossibility of covering satisfying certain additional conditions (e.g. relating to the centre)
implies that dissection and partition satisfying the same conditions are not possible as well.
However, when such covering exists, the corresponding dissection or partition may not exist.
Here we consider the ``decomposition'' of (intra)covering type, in certain specific cases, while the original problem almost surely belongs to dissection type;
and the majority of referenced papers, temporally ordered from \cite{waerden1949} to \cite{kiss2016}, deals with partition.
{\ }
{\sl The routine nature of the inferences suggests that some (perhaps all) of the presented ``results'' are well known, even if not claimed explicitly or publicly;
the aim is rather to dispel the delusion that there is no such well-knownness...}
There are works concerning the original centre-in-interior dissection problem, under ``natural'' (or ``physical'') assumptions
(such as the space being Euclidean, boundaries of parts being rectifiable, parts being connected): cf. \cite{haddley2016}, \cite{kanelbelov2002}, \cite{banakh2010}.
In our opinion, the most similar negative result relating to covering is obtained for pre-Hilbert spaces in \cite[Th. 1.1]{douwen1993}:
in spite of the terms ``indivisibility'' and ``partitioning'', it is actually a covering that is considered there, under the assumption that exactly one set contains the centre.
See Rem. \ref{remIneqCounterEx} below.
{\ }
We try to attain the generality by considering spaces and coverings with as few additional properties and constraints as possible.
Thereby few different interpretations of the problem (the ball is closed/open etc.) are aggregated.
{\ }
Hereinafter, we consider a normed linear space $X$ over the field of reals $\mathbb{R}$, $\theta$ is the zero of $X$.
Where we need the specific space such as $\mathbb{R}^m$, we will note it.
Completeness of $X$ is not assumed.
$\| x \|$ is the norm of $x\in X$, inducing the metric $\rho (x, y) = \| x - y \|$.
The balls: open $B(x,r) = \{ y\in X\colon \| y - x \| < r \}$, closed $\overline{B}(x,r) = \{ y\in X\colon \| y - x \| \leqslant r \}$;
the (closed) sphere $S(x,r) = \{ y\in X\colon \| y - x \| = r \}$. $r > 0$ is assumed.
We call the sets $A\subseteq X$ and $B\subseteq X$ {\it congruent}, $A\cong B$, iff there is an isometric surjective mapping ({\it motion})
$f\colon X \leftrightarrow X$: $\forall x, y \in X$: $\| f(x) - f(y) \| = \| x - y \|$ (surjectivity implies that $f^{-1} \colon X \leftrightarrow X$ is a motion too), and $f(A) = B$.
The identity map $\mathcal{I}$: $\mathcal{I}(x) = x$ is a motion.
$\mathrm{Int}\, A = \{ x\in A\mid \exists \varepsilon > 0\colon B(x,\varepsilon) \subseteq A \}$ and
$\overline{A} = \{ x \in X \mid \forall \varepsilon > 0 \colon B(x, \varepsilon) \cap A \ne \varnothing \}$ are the interior and the closure of $A$, respectively.
{\ }
We assume that $X$ has these additional properties:
$\bullet$ $\dim X > 1$: $\exists a, b \in X$, which are linearly independent.
$\bullet$ NCS: $\| \cdot \|$ is strictly convex, that is, $\forall x, y \in S(\theta, 1)$, $x \ne y$: $\lambda \in (0;1)$ $\Rightarrow$ $\| \lambda x + (1 - \lambda ) y\| < 1$.
Conventional examples of non-NCS spaces are $\mathbb{R}^m_1$ and $\mathbb{R}^m_{\infty}$ when $m \geqslant 2$, or $L_1$ and $L_{\infty}$.
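For instance, the failure of NCS for the sup-norm on $\mathbb{R}^2_{\infty}$, and its validity for the Euclidean norm, can be seen numerically. A small sketch (an illustration, not part of the paper):

```python
import math

def sup_norm(v):
    return max(abs(c) for c in v)

# Two distinct points on the unit sphere of the sup-norm whose midpoint
# still has norm 1: strict convexity (NCS) fails.
x, y = (1.0, 1.0), (1.0, -1.0)
mid = tuple((a + b) / 2 for a, b in zip(x, y))
print(sup_norm(mid))  # 1.0

# The Euclidean norm is strictly convex: the analogous midpoint of two
# distinct unit vectors moves strictly inside the unit ball.
xe, ye = (1.0, 0.0), (0.0, 1.0)
mide = tuple((a + b) / 2 for a, b in zip(xe, ye))
print(math.hypot(*mide) < 1.0)  # True
```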
\section{Preliminaries}
\hskip 1.5em
A warning: some of the following lemmas seem ``folkloric''; their proofs are included for the sake of integrity and are probably present elsewhere.
\begin{lemma}\label{lmMotionSph}
If $f\colon X \leftrightarrow X$ is a motion, then $\forall S(x,r)$: $f\bigl( S(x,r) \bigr) = S(f(x), r)$.
\end{lemma}
\begin{proof}
a) $\forall y \in S(x,r)$: $\| f(y) - f(x) \| = \| y - x \| = r$ $\Rightarrow$ $f\bigl(S(x,r)\bigr) \subseteq S(f(x),r)$.
b) $\forall z\in S(f(x),r)$: $z = f(y)$, $\| y - x \| = \| f(y) - f(x) \| = \| z - f(x) \| = r$ $\Rightarrow$ $y \in S(x,r)$ $\Rightarrow$ $S(f(x), r) \subseteq f\bigl(S(x,r)\bigr)$.
\end{proof}
\begin{lemma}\label{lmMotionBall}
If $f\colon X \leftrightarrow X$ is a motion, then $\forall B(x,r)$: $f\bigl( B(x,r) \bigr) = B(f(x), r)$.
\end{lemma}
\begin{proof}
$f\bigl( B(x,r) \bigr) = f\bigl( \{ x \} \cup \bigcup\limits_{u\in (0;r)} S(x,u) \bigr) = \{ f(x) \} \cup \bigcup\limits_{u\in (0;r)} f\bigl( S(x,u) \bigr)
\stackrel{\text{Lemma \ref{lmMotionSph}}}{=}$
\hfill $= \{ f(x) \} \cup \bigcup\limits_{u\in (0;r)} S\bigl( f(x), u \bigr) = B\bigl( f(x), r \bigr)$
\end{proof}
\begin{lemma}\label{lmMotionDecomp}
Let $f\colon X \leftrightarrow X$ be a motion. Then $f = h \circ g$ (that is, $f(x) = h\bigl(g(x)\bigr)$),
where $g\colon X \leftrightarrow X$ and $h\colon X \leftrightarrow X$ are uniquely determined motions such that
1) $\forall x \in X$: $\| g(x) \| = \| x \|$ ($\Leftrightarrow$ $g(\theta) = \theta$);
2) $\exists a \in X$: $\forall x \in X$: $h(x) = x + a$.
\end{lemma}
\begin{proof}
Consider $g(x) = f(x) - f(\theta)$ and $h(x) = x + f(\theta)$. Obviously, $h \circ g = f$.
$\| g(x) \| = \| f(x) - f(\theta) \| = \| x - \theta \| = \| x \|$. (Implied by $g(\theta) {=} \theta$: $\| g(x) \| {=} \| g(x) {-} g(\theta) \| {=} \| x {-} \theta \|$.)
$g$ and $h$ are motions: $\| g(x) - g(y) \| = \| f(x) - f(y) \| = \| x - y \|$ and $\| h(x) - h(y) \| = \| x - y \|$ (isometry),
inverse maps $g^{-1}(x) = f^{-1}(x + f(\theta))$ and $h^{-1}(x) = x - f(\theta)$ imply surjectivity.
Uniqueness: $f(\theta) = h(g(\theta)) = h(\theta) = a$, $g(x) = h^{-1}(f(x)) = f(x) - a = f(x) - f(\theta)$.
\end{proof}
Here, we call $h$ {\it shift} and $g$ {\it non-shift} components of the motion $f$. If $h=\mathcal{I}$ or $g=\mathcal{I}$, the respective component is called {\it trivial}.
It is easy to see that if $f$ has trivial shift or non-shift component, then the respective component of $f^{-1} = g^{-1} \circ h^{-1}$ is trivial as well.
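A numeric sketch of the decomposition $f = h \circ g$ of Lemma \ref{lmMotionDecomp} in the Euclidean plane (an illustration under assumed data, not part of the paper): for a motion given by a rotation followed by a translation, $g(x) = f(x) - f(\theta)$ is norm-preserving and $h(x) = x + f(\theta)$ is a shift.

```python
import math

theta0, a = 0.7, (2.0, -1.0)  # assumed rotation angle and translation vector

def f(x):  # a motion of the Euclidean plane: rotation by theta0, then shift by a
    c, s = math.cos(theta0), math.sin(theta0)
    return (c * x[0] - s * x[1] + a[0], s * x[0] + c * x[1] + a[1])

f0 = f((0.0, 0.0))                                            # f(theta)
g = lambda x: tuple(fi - f0i for fi, f0i in zip(f(x), f0))    # non-shift component
h = lambda x: tuple(xi + f0i for xi, f0i in zip(x, f0))       # shift component

x = (3.0, 4.0)
assert max(abs(u - v) for u, v in zip(h(g(x)), f(x))) < 1e-12  # f = h o g
assert abs(math.hypot(*g(x)) - math.hypot(*x)) < 1e-12         # ||g(x)|| = ||x||
```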
{\ }
\begin{theorem}\label{thmIsomZero2Zero} (Mazur-Ulam, \cite{mazur1932}; \cite[5.3, Th. 12]{lax2002}).
The motion that maps $\theta$ to $\theta$ is linear.
\end{theorem}
{\bf Remark.} We consider the isometries that map $X$ onto itself,
while the theorem holds true for any bijective isometry between two normed spaces $X$ (with $\theta_X$) and $Y$ (with $\theta_Y$).
{\bf Corollary.} Non-shift component $g$ of the motion $f$ is linear: $g(\lambda x + \mu y) = \lambda g(x) + \mu g(y)$.
{\ }
\begin{lemma}\label{lmDiamTrivShift}
If the motion $f\colon X \leftrightarrow X$ is such that $\exists x \in X$: $\| f(x) \| \leqslant \| x \|$ and $\| f(-x) \| \leqslant \| x \|$, then the shift component of $f$ is trivial.
\end{lemma}
\begin{proof}
Using the notation of Lemma \ref{lmMotionDecomp}, let $f = h \circ g$ and $y = g(x)$.
For $x = \theta$: $y = \theta$, so $f(x) = a$, and $\| a \| \leqslant 0$ $\Leftrightarrow$ $a = \theta$. Suppose $x \ne \theta$.
By Th. \ref{thmIsomZero2Zero}, $-y = g(-x)$, so $f(x) = y + a$ and $\| y + a \| \leqslant \| x \| = \| y \|$,
$f(-x) = -y + a$ and $\| -y + a\| \leqslant \| y \|$ $\Leftrightarrow$ $\| y - a \| \leqslant \| y \|$.
If $\| y + a\| < \| y \|$ or $\| y - a \| < \| y \|$, then by triangle inequality $2 \| y \| = \| y - (-y) \| \leqslant \| y - a \| + \| a - (-y) \| < 2 \| y \|$, --- a contradiction;
thus $\| y - a \| = \| y + a \| = \| y \|$.
$y = \frac{1}{2}(y-a) + \frac{1}{2}(y+a)$. Assume $a \ne \theta$ $\Leftrightarrow$ $y - a \ne y + a$. For $s = (y-a) / \| y \|$, $t = y / \| y \|$, $u = (y+a) / \| y \|$:
$s,t,u \in S(\theta, 1)$, $s \ne u$, $\| \frac{1}{2}s + \frac{1}{2}u \| = \| t \| = 1$, contradicting NCS. So $a = \theta$.
\end{proof}
{\ }
Let $a_1$, ..., $a_m$ be linearly independent (LI) elements of $X$ (thus $\dim X \geqslant m$).
We denote by $M(a_1, ..., a_m) = \bigl\{ \sum\limits_{i=1}^m x_i a_i \mid x_i \in \mathbb{R} \bigr\}$ the $m$-dimensional linear manifold generated by them.
It follows from LI that $\forall x \in M(a_1, ..., a_m)$ the coordinates $\{ x_i \}$ are determined uniquely.
Suppose $x^{(k)}, y \in M(a_1, ..., a_m)$. Since $\| x^{(k)} - y \| \leqslant \sum\limits_{i=1}^m |x^{(k)}_i - y_i| \cdot \| a_i \|$ by triangle inequality,
we immediately see that $x^{(k)}_i \xrightarrow[k\rightarrow \infty]{} y_i$ for $i=\overline{1,m}$
implies $x^{(k)} \xrightarrow[k\rightarrow \infty]{} y$, that is, $\| x^{(k)} - y \| \xrightarrow[k\rightarrow \infty]{} 0$.
The converse implication and the closedness of $M(a_1, ..., a_m)$ (making it a subspace of $X$),
though known well enough (see \cite[1.2.3]{cotlar1974}, \cite[5.2, Ex. 4]{lax2002}), are obtained in the next lemma by ``elementary'' reasoning,
without resorting to norm equivalence or functionals.
\begin{lemma}\label{lmFinDimManifClosed}
If $x^{(k)} \in M(a_1, ..., a_m)$ and $x^{(k)} \xrightarrow[k\rightarrow \infty]{} x \in X$,
then $x \in M(a_1, ..., a_m)$, which is closed therefore, and $x^{(k)}_i \xrightarrow[k\rightarrow \infty]{} x_i$ for $i=\overline{1,m}$.
\end{lemma}
\begin{proof}
The proof is by induction over $\dim M(a_1, ..., a_m)$.
Let $m = 1$. The sequence $\{ x^{(k)} \}_k = \{ x^{(k)}_1 a_1 \}_k$ is convergent (conv.), therefore it is fundamental (fund.).
Assume that $\{ x^{(k)}_1 \}_k$ is not conv., then it isn't fund. due to completeness of $\mathbb{R}$:
\centerline{$\exists \varepsilon_0 > 0$: $\forall N \in \mathbb{N}$: $\exists k_1, k_2 > N$: $|x^{(k_1)}_1 - x^{(k_2)}_1| \geqslant \varepsilon_0$}
But then $\| x^{(k_1)} - x^{(k_2)} \| = | x^{(k_1)}_1 - x^{(k_2)}_1 | \cdot \| a_1 \| \geqslant \varepsilon_0 \| a_1 \| > 0$, which contradicts the fund. of $\{ x^{(k)} \}_k$.
Hence $\exists \lim\limits_{k\rightarrow \infty} x^{(k)}_1 = \widetilde{x}_1$. Let $\widetilde{x} = \widetilde{x}_1 a_1$.
$\| x^{(k)} - \widetilde{x} \| = |x^{(k)}_1 - \widetilde{x}_1| \cdot \| a_1 \| \xrightarrow[k\rightarrow \infty]{} 0$ $\Rightarrow$ $x^{(k)} \rightarrow \widetilde{x}$ as $k\rightarrow \infty$.
This means that $x = \widetilde{x} \in M(a_1)$ and $x^{(k)}_1 \rightarrow x_1$ as $k\rightarrow \infty$.
Now suppose that the statement holds true for $\dim M\bigl(\{ a_i \}\bigr) = 1, 2, ..., m-1$.
Consider the conv. $\{ x^{(k)} \}_k = \{ \sum\limits_{i=1}^m x^{(k)}_i a_i \}_k$, it is fund.
Take any $i_0 = \overline{1,m}$, for instance $i_0 = m$. Assume that $\{ x^{(k)}_m \}$ isn't conv., then it isn't fund.,
$\exists \varepsilon_0 > 0$: $\forall N \in \mathbb{N}$: $\exists k_1, k_2 > N$: $|x^{(k_1)}_m - x^{(k_2)}_m| \geqslant \varepsilon_0$, and
\centerline{$\| x^{(k_1)} - x^{(k_2)} \| = \bigl\| \sum\limits_{i=1}^m (x^{(k_1)}_i - x^{(k_2)}_i) a_i \bigr\| =$}
\centerline{$= |x^{(k_1)}_m - x^{(k_2)}_m| \cdot \bigl\| a_m + \sum\limits_{i=1}^{m-1} \frac{x^{(k_1)}_i - x^{(k_2)}_i}{x^{(k_1)}_m - x^{(k_2)}_m} a_i \bigr\| =
|x^{(k_1)}_m - x^{(k_2)}_m| \cdot \| a_m - z \|$}
\noindent
where $z \in M (a_1, ..., a_{m-1}) = M_{m-1}$. It follows from $\dim M_{m-1} = m-1$ and the induction hypothesis that $M_{m-1}$ is closed.
$a_m \notin M_{m-1}$ due to LI, therefore $\| a_m - z \| \geqslant \rho (a_m, M_{m-1}) > 0$. And we obtain
$\| x^{(k_1)} - x^{(k_2)} \| \geqslant \varepsilon_0 \rho (a_m, M_{m-1}) > 0$,
which contradicts the fund. of $\{ x^{(k)} \}_k$. Hence $\exists \lim\limits_{k\rightarrow \infty} x^{(k)}_m = \widetilde{x}_m$,
and similarly $\exists \lim\limits_{k\rightarrow \infty} x^{(k)}_i = \widetilde{x}_i$ for $i = \overline{1,m-1}$. Let $\widetilde{x} = \sum\limits_{i=1}^m \widetilde{x}_i a_i$.
$\| x^{(k)} - \widetilde{x} \| \leqslant \sum\limits_{i=1}^m |x^{(k)}_i - \widetilde{x}_i| \cdot \| a_i \| \xrightarrow[k\rightarrow \infty]{} 0$, so $x^{(k)} \xrightarrow[k\rightarrow \infty]{} \widetilde{x}$.
Consequently, $x = \widetilde{x} \in M(a_1, ..., a_m)$ and $x^{(k)}_i \xrightarrow[k\rightarrow \infty]{} x_i$ for $i = \overline{1,m}$.
By induction principle, the statement is true for $\forall m \in \mathbb{N}$.
\end{proof}
{\ }
\begin{theorem}\label{thmLSB} (Lusternik-Schnirelmann-Borsuk (LSB), \cite[II.5]{lusternik1930}, \cite{borsuk1933}; see also \cite{matousek2008}).
Let the sphere $S_m = \bigl\{ x\in \mathbb{R}^m \colon \| x \|_m = \sqrt{\sum\limits_{j=1}^m x_j^2} = r \bigr\} = \bigcup\limits_{i=1}^m A_i$, where $A_i$ are closed.
Then $\exists i_0$, $\exists x\in S_m$: $\{ x, -x \} \subseteq A_{i_0}$, --- one of $A_i$ contains the pair of antipodal points of $S_m$.
\end{theorem}
{\ }
The immediate corollary of LSB theorem is this generalization for normed spaces:
\begin{lemma}\label{lmLSBNormSpace}
Let $\dim X \geqslant m \in \mathbb{N}$, that is, $\exists a_1, ..., a_m \in X$, which are linearly independent.
If $S(\theta, r) = \bigcup\limits_{i=1}^m A_i$, where $A_i$ are closed, then $\exists A_{i_0}$, $\exists x \in S(\theta, r)$: $\{ x, -x \} \subseteq A_{i_0}$.
\end{lemma}
See \cite[p. 119]{bollobas2006}, and most likely it's mentioned in \cite{steinlein1985}; more general form is in e.g. \cite[p. 39]{arandjelovic1999}.
\begin{proof}
Let $L = M(a_1, ..., a_m)$ be the subspace of $X$ generated by $\{ a_i \}$ (by Lemma \ref{lmFinDimManifClosed}, $L$ is closed),
$C = S(\theta, r) \cap L$, $S_m = \bigl\{ y \in \mathbb{R}^m \colon \| y \|_m = 1 \bigr\}$.
$\forall x\in L$ has the unique representation $x = (x_1; ...; x_m) = \sum\limits_{i=1}^m x_i a_i$.
Therefore the mapping $s \colon C \rightarrow S_m$: $s(x) = \bigl( x_1 / \| x \|_m ; ...; x_m / \| x \|_m \bigr)$ is well defined.
Moreover, we claim that $s$ is a homeomorphism.
1) $s$ is injective. Indeed, if $s(x') = s(x'')$, where $x', x'' \in C$, then $\frac{x'_i}{\| x' \|_m} = \frac{x''_i}{\| x'' \|_m}$ for $i = \overline{1,m}$,
thus $x'_i = \alpha x''_i$ for $\alpha = \| x' \|_m / \| x'' \|_m > 0$. So $x' = \alpha x''$ $\Rightarrow$ $r = \| x' \| = |\alpha| \cdot \| x'' \| = r \alpha$ $\Rightarrow$ $\alpha = 1$.
2) $s$ is surjective. $\forall y=(y_1;...;y_m) \in S_m$: $s^{-1}(y) = \frac{r}{\| x \|} x$, where $x = \sum\limits_{i=1}^m y_i a_i$.
3) $s$ is continuous. Let $C \ni x^{(k)} \xrightarrow[k\rightarrow \infty]{} x$. Using closedness of $S(\theta, r)$ and Lemma \ref{lmFinDimManifClosed}, we obtain:
$x \in S(\theta, r) \cap L = C$ and $x^{(k)}_i \xrightarrow[k\rightarrow \infty]{} x_i$.
Therefore $\| x^{(k)} \|_m \xrightarrow[k\rightarrow \infty]{} \| x \|_m$, and $s(x^{(k)}) \xrightarrow[k\rightarrow \infty]{} s(x)$.
4) $s^{-1}$ is continuous too. For $S_m \ni y^{(k)} \xrightarrow[k\rightarrow \infty]{} y$: $S_m$ is closed $\Rightarrow$ $y \in S_m$, and $y^{(k)}_i \xrightarrow[k\rightarrow \infty]{} y_i$.
Let $x^{(k)} = \sum\limits_{i=1}^m y^{(k)}_i a_i$, $x = \sum\limits_{i=1}^m y_i a_i$, then $x^{(k)} \xrightarrow[k\rightarrow \infty]{} x$,
$\| x^{(k)} \| \xrightarrow[k\rightarrow \infty]{} \| x \|$, so $s^{-1}(y^{(k)}) \xrightarrow[k\rightarrow \infty]{} s^{-1}(y)$.
Consider $C_i = A_i \cap C = A_i \cap S(\theta, r) \cap L$, they are closed.
Hence the image $s(C_i) \subseteq S_m$, under homeomorphic mapping $s$, is closed too (\cite[XII, \S 3]{kuratowski1961}).
$\bigcup\limits_{i=1}^m C_i = \bigl( \bigcup\limits_{i=1}^m A_i \bigr) \cap C = S(\theta, r) \cap C = C$, so $\bigcup\limits_{i=1}^m s(C_i) = S_m$.
By the LSB theorem, $\exists i_0$, $\exists y\in S_m$: $\{ y,-y \} \subseteq s(C_{i_0})$.
Since $s^{-1}(-y) = - s^{-1}(y)$ (by (2)), we obtain $x = s^{-1}(y) \in C$: $\{ x, -x \} \subseteq C_{i_0} \subseteq A_{i_0}$.
\end{proof}
{\ }
In the Main section, a certain infinite-dimensional ball covering will be considered, for which the following Hilbert space-related lemmas are needed.
{\ }
We denote by $H = l_2$ the separable infinite-dimensional Hilbert space over $\mathbb{R}$.
Until the end of this section, $\| \cdot \| = \| \cdot \|_H$ denotes the norm in $H$. $S = S(\theta, 1)$ is the unit sphere of $H$.
$\inprod{x}{y}$ is the scalar/inner product of $x, y\in H$,
$\angle(x, y) = \arccos \frac{\inprod{x}{y}}{\| x \| \cdot \| y \|} \in [0;\pi]$ is the angle between $x$ and $y$
($\angle(x,y) = 0$ if $x = \theta$ or $y = \theta$). $x \perp y$ means $\inprod{x}{y} = 0$.
The ``basic'' properties of $H$ and $\inprod{\cdot}{\cdot}$ (like $\inprod{x}{x} = \| x \|^2$) are assumed to be known; see e.g. \cite[II.3]{cotlar1974}, \cite[6]{lax2002}.
\begin{lemma}\label{lmCountDenseSphGeod}
$\exists D = \{ d_i \}_{i\in \mathbb{N}} \subset S$ such that $\forall \beta > 0$, $\forall x \in S$: $\exists d \in D$: $\angle(x,d) < \beta$.
\end{lemma}
In other words, there is a countable subset $D$ of $S$ which is everywhere dense (ED) in the ``geodesic'' metric $\rho_S(x,y) = \angle(x,y)$ on $S$
(see \cite[6.4, 17.4]{deza2009}). Such $D$ is said to be {\it geodesically dense in $S$}.
\begin{proof}
$H$ is separable: $\exists R \subset H$, countable and ED in $H$. Let $D = \{ \frac{y}{\| y \|} \mid y\in R, y\ne \theta \}$; $D \subset S$ and $D$ is countable.
Take any $x \in S$. $\forall \delta > 0$ $\exists y \in R$: $\| x - y \| < \delta$. Then, by triangle inequality,
\centerline{$ 1 - \delta < \| x \| - \| x - y \| \leqslant \| y \| \leqslant \| y - x \| + \| x \| < 1 + \delta$}
\noindent
hence $\| x - \frac{y}{\| y \|} \| = \frac{1}{\| y \|} \cdot \bigl\| x \cdot \| y \| - y \bigr\| \leqslant \frac{1}{\| y \|} \Bigl[\bigl\| (\| y \| - 1) x \bigr\| + \| x - y \| \Bigr] \leqslant
\frac{1}{1 - \delta} \bigl[ \delta \| x \| + \delta \bigr] = \frac{2\delta}{1 - \delta}$.
Since $\frac{2\delta}{1 - \delta} \rightarrow 0$ as $\delta \rightarrow 0$, we obtain for $\forall \varepsilon > 0$: $\exists d = \frac{y}{\| y \|} \in D$: $\| x - d \| < \varepsilon$.
Consider $\varepsilon = \frac{1}{n}$ to get $\{ d_n \}_{n\in \mathbb{N}} \subset D$: $d_n \xrightarrow[n\rightarrow \infty]{} x$.
(So, $D$ is ED on $S$ in $\| \cdot \|$-induced metric).
It follows from continuity of $\| \cdot \|$ and $\inprod{\cdot}{\cdot}$ that
$\frac{\inprod{d_n}{x}}{\| d_n \| \cdot \| x \|} \xrightarrow[n\rightarrow \infty]{} \frac{\inprod{x}{x}}{\| x \|^2} = 1$.
In turn, continuous $\arccos \frac{\inprod{d_n}{x}}{\| d_n \| \cdot \| x \|} \xrightarrow[n\rightarrow \infty]{} 0$,
therefore $\exists d_{n_{\beta}} \in D$: $\angle(x,d_{n_{\beta}}) < \beta$.
\end{proof}
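The construction in this proof (normalize a dense set, then pass from norm-density to angular density) can be illustrated numerically. The following Python sketch is our illustration only, not part of the argument: it uses $\mathbb{R}^3$ as a finite-dimensional stand-in for $H$ and a rational grid as the countable dense set $R$; the names `angle`, `D`, `worst` are ours.

```python
import math
import random

def angle(x, y):
    # angle(x, y) = arccos(<x, y> / (|x| |y|)), clamped against rounding
    nx = math.sqrt(sum(c * c for c in x))
    ny = math.sqrt(sum(c * c for c in y))
    if nx == 0.0 or ny == 0.0:
        return 0.0
    c = sum(a * b for a, b in zip(x, y)) / (nx * ny)
    return math.acos(max(-1.0, min(1.0, c)))

# Finite stand-in for the countable dense set R: a rational grid in [-1, 1]^3.
step = 0.1
grid = [i * step for i in range(-10, 11)]
D = []
for p in ((a, b, c) for a in grid for b in grid for c in grid):
    n = math.sqrt(sum(c * c for c in p))
    if n > 0.0:
        D.append(tuple(c / n for c in p))   # normalize: d = y / |y|

# Every random unit vector x should be within a small angle of some d in D.
random.seed(0)
worst = 0.0
for _ in range(30):
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    x = tuple(c / n for c in v)
    worst = max(worst, min(angle(x, d) for d in D))
```

The chord bound $\| x - \frac{y}{\|y\|} \| \leqslant \frac{2\delta}{1-\delta}$ from the proof, with $\delta \approx 0.087$ (half the grid-cell diagonal), guarantees `worst` stays below $0.25$ for this grid step.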
{\bf Remark.} Given such $D$, it is easy to see that $\{ A_i \}_{i\in \mathbb{N}} = \{ \overline{B}(d_i, \varepsilon) \cap S \}_{i\in \mathbb{N}}$ for $\varepsilon < 1$
is a covering of $S$ by closed subsets (moreover, $A_i \cong A_j$, see Lemma \ref{lmOmtdCongr}).
$\dim H = \aleph_0 = |\{ A_i\}|$; however, $\mathrm{diam}\, A_i \leqslant \mathrm{diam}\, \overline{B}(d_i, \varepsilon) = 2\varepsilon < 2$, thus no $A_i$ contains antipodal points of $S$, ---
the ``straightforward'' attempt at an infinite-dimensional generalization of the LSB theorem fails. Cf. \cite{cutler1973}.
\begin{lemma}\label{lmHilbertNonShiftInprod}
If $g\colon H \leftrightarrow H$ is a non-shift motion, $g(\theta) = \theta$, then $\forall x, y \in H$: $\inprodbig{g(x)}{g(y)} = \inprod{x}{y}$.
\end{lemma}
\begin{proof}
$\inprodbig{g(x)}{g(y)} = \inprodbig{g(x) - g(y) + g(y)}{g(y)} = \inprodbig{g(x) - g(y)}{g(y)} + \| g(y) \|^2 =$
\centerline{$= \inprodbig{g(x) - g(y)}{g(y) - g(x) + g(x)} + \| y \|^2 \stackrel{\text{Th. \ref{thmIsomZero2Zero}}}{=}$}
\noindent
$= - \inprodbig{g(x - y)}{g(x - y)} + \| g(x) \|^2 - \inprodbig{g(x)}{g(y)} + \| y \|^2 =
\| x \|^2 + \| y \|^2 - \| x - y \|^2 - \inprodbig{g(x)}{g(y)}$,
therefore $\inprodbig{g(x)}{g(y)} = \frac{1}{2} \bigl[ \| x \|^2 + \| y \|^2 - \inprod{x - y}{x - y} \bigr] = \frac{1}{2} \bigl[ 2 \inprod{x}{y} \bigr] = \inprod{x}{y}$.
\end{proof}
{\ }
{\bf Definition.} Let $H \ni s \ne e \in H$, $\gamma \in [0;\pi]$. We call the set
\centerline{$C(s, e, \gamma) = \bigl\{ x \in H \colon \| x - s \| \leqslant \| e - s \| \text{ and } \angle(x - s, e - s) \leqslant \gamma \bigr\} \subseteq \overline{B}(s, \| e - s \|)$}
\noindent
the (closed) {\it ommatidium}, with origin at $s$, around $[s, e]$, of angle $\gamma$ and of radius $\| e - s \|$.
It's actually a ``sector'' of the ball $\overline{B}(s, \| e - s \|)$, and would be a usual disk sector in $\mathbb{R}^2$.
\begin{lemma}\label{lmOmtdSegm}
If $s \ne x \in C(s, e, \gamma)$, then $\forall \lambda \in [0; \frac{\| e - s\|}{\| x - s \|}]$: $s + \lambda (x - s) \in C(s, e, \gamma)$.
\end{lemma}
\begin{proof}
It follows simply from the definition.
\end{proof}
\begin{lemma}\label{lmOmtdCongr}
Two ommatidia of the same angle and radius are congruent in $H$.
\end{lemma}
\begin{proof}
Evidently, a parallel shift $h(x) = x + a$ transforms $C(s, e, \gamma)$ onto $C(s + a, e + a, \gamma)$.
Thus we consider, without loss of generality, $C_1 = C(\theta, e_1, \gamma)$ and $C_2 = C(\theta, e_2, \gamma)$,
where $\| e_1 \| = \| e_2 \| = r$, $e_1 \ne e_2$. We are going to find a non-shift motion $g$ such that $g(C_1) = C_2$.
It suffices to obtain $g$ such that $g(e_1) = e_2$. Indeed, $\forall x \in C_1$ we have then $\| g(x) \| = \| x \| \leqslant r$ and
$\angle(g(x), e_2) = \arccos \frac{\inprod{g(x)}{g(e_1)}}{\| g(x) \| \cdot \| g(e_1) \|} \stackrel{\text{Lemma \ref{lmHilbertNonShiftInprod}}}{=}
\arccos \frac{\inprod{x}{e_1}}{\| x \| \cdot \| e_1 \|} = \angle(x, e_1) \leqslant \gamma$, so $g(x) \in C_2$.
Conversely, $\forall x \in C_2$: $g^{-1} (x) \in C_1$, because $g^{-1}$ is a non-shift motion as well, and $g^{-1} (e_2) = e_1$.
We apply the ``coordinate'' approach to define such $g$.
Let $e_1' = e_1 / r$, $e_2' = e_2 / r$, and let $M$ be a 2-dimensional subspace of $H$ containing $e_1'$ and $e_2'$ (if $e_2' = -e_1'$, take any 2-dimensional subspace containing $e_1'$; otherwise $M = M(e_1', e_2')$).
$\exists u \in M$ such that $\| u \| = 1$ and $u \perp e_1'$, hence $M = M(e_1', u)$ and $\forall z\in M$: $z = z_1 e_1' + z_2 u$, $\| z \|^2 = z_1^2 + z_2^2$.
Then $e_2' = (\cos \alpha) e_1' + (\sin \alpha) u$ for some $\alpha \in (0;2\pi)$.
$H = M \oplus L$, where $L$ is the orthogonal complement of $M$. It follows that every $x \in H$ has a unique representation $x = x_1 e_1' + x_2 u + w_x$, where $w_x \in L$,
and $\| x \|^2 = x_1^2 + x_2^2 + \| w_x \|^2$. In particular, $e_1 = r e_1'$ and $e_2 = (r \cos \alpha) e_1' + (r \sin \alpha) u$.
Let $g(x) = (x_1 \cos \alpha - x_2 \sin \alpha) e_1' + (x_1 \sin \alpha + x_2 \cos \alpha) u + w_x$. It has the required properties:
1) $g$ is isometric: $\| g(x) - g(y) \|^2 =$
\centerline{$= \bigl[ (x_1 - y_1) \cos \alpha - (x_2 - y_2) \sin \alpha \bigr]^2 + \bigl[ (x_1 - y_1) \sin \alpha + (x_2 - y_2) \cos \alpha \bigr]^2 + \| w_x - w_y \|^2 =$}
\centerline{$ = (x_1 - y_1)^2 + (x_2 - y_2)^2 + \| w_x - w_y \|^2 = \| x - y \|^2$}
2) $g$ is surjective: $g^{-1}(x) = (x_1 \cos \alpha + x_2 \sin \alpha) e_1' + (-x_1 \sin \alpha + x_2 \cos \alpha) u + w_x$.
3) $g(\theta) = \theta + \theta + \theta = \theta$, and 4) $g(e_1) = (r \cos \alpha) e_1' + (r \sin \alpha) u + \theta = e_2$.
\end{proof}
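The ``coordinate'' rotation $g$ from this proof is easy to implement and test numerically. The following Python sketch is our illustration (it assumes $e_2$ is not collinear with $e_1$; `make_motion` is a name we introduce, not the paper's):

```python
import math
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def make_motion(e1, e2):
    """Non-shift motion g with g(theta) = theta and g(e1) = e2
    (requires |e1| = |e2| = r, e2 not collinear with e1):
    a rotation in the plane M(e1', u), identity on the orthogonal complement."""
    r = norm(e1)
    e1p = [c / r for c in e1]                    # e1' = e1 / r
    p = dot(e2, e1p)
    w = [a - p * b for a, b in zip(e2, e1p)]     # Gram-Schmidt step
    u = [c / norm(w) for c in w]                 # unit u, u orthogonal to e1'
    ca, sa = p / r, norm(w) / r                  # cos(alpha), sin(alpha)

    def g(x):
        x1, x2 = dot(x, e1p), dot(x, u)
        wx = [c - x1 * a - x2 * b for c, a, b in zip(x, e1p, u)]
        y1 = x1 * ca - x2 * sa                   # rotate the M-coordinates
        y2 = x1 * sa + x2 * ca
        return [c + y1 * a + y2 * b for c, a, b in zip(wx, e1p, u)]

    return g

random.seed(0)
e1 = [2.0, 0.0, 0.0, 0.0]
e2 = [0.0, 0.0, 2.0, 0.0]                        # same norm r = 2
g = make_motion(e1, e2)

x = [random.gauss(0, 1) for _ in range(4)]
y = [random.gauss(0, 1) for _ in range(4)]
iso_defect = abs(norm([a - b for a, b in zip(g(x), g(y))])
                 - norm([a - b for a, b in zip(x, y)]))
```

The checks mirror properties 1)--4) of the proof: $g$ is isometric, fixes $\theta$, and sends $e_1$ to $e_2$.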
\begin{lemma}\label{lmOmtdCoverBall}
If $D = \{ d_i \}_{i\in \mathbb{N}} \subset S$ is geodesically dense in $S$, then $\forall \beta > 0$:
$\overline{B}(\theta, 1) = \bigcup\limits_{i \in \mathbb{N}} C(\theta, d_i, \beta)$.
\end{lemma}
\begin{proof}
$C(\theta, d_i, \beta) \subseteq \overline{B}(\theta, 1)$ is obvious.
$\theta \in C(\theta, d_i, \beta)$ for any $i$. Take $\forall x \in \overline{B}(\theta, 1) \backslash \{ \theta \}$, then $x' = x / \| x \| \in S$ and $x = \| x \| \cdot x'$.
By definition of $D$, $\exists d \in D$: $\angle(x', d) < \beta$, thus $x' \in C(\theta, d, \beta)$.
By Lemma \ref{lmOmtdSegm}, $x \in C(\theta, d, \beta)$ too.
\end{proof}
\begin{lemma}\label{lmOmtdInsideBall}
Let $d \in S$ and $\gamma \leqslant \arccos \frac{1}{4}$. Then $C_0 = C(-\frac{1}{2}d, \frac{1}{2}d, \gamma) \subset \overline{B}(\theta, 1)$.
\end{lemma}
\begin{proof}
Due to convexity of $\overline{B}(\theta, 1)$, we only need to prove that $\forall x \in S(-\frac{1}{2}d, 1) \cap C_0$: $\| x \| \leqslant 1$,
because $\forall y \in C_0 \backslash \{ -\frac{1}{2}d \}$: $y = \lambda x + (1 - \lambda) (-\frac{1}{2} d)$
for $x = -\frac{1}{2}d + \frac{y + \frac{1}{2}d}{\|y + \frac{1}{2}d\|} \in S(-\frac{1}{2}d, 1) \cap C_0$ ($\in C_0$ follows from Lemma \ref{lmOmtdSegm})
and $\lambda = \| y + \frac{1}{2} d\| \in [0;1]$.
Let $H = M(d) \oplus T$, then $\forall y \in H$:
$y = y_1 d + y_2 u$, where $d \perp u \in T$, $\| u \| = 1$, and $\| y \|^2 = y_1^2 + y_2^2$.
In particular, for $y = x + \frac{1}{2}d$: $\| y \| = 1$, hence we can write $y_1 = \cos \beta$, $y_2 = \sin \beta$ for some $\beta \in [0;2\pi)$.
Moreover, $y_1 = \inprod{y}{d} = \frac{\inprod{y}{d}}{\| y \| \cdot \| d \|} = \cos \angle(x + \frac{1}{2}d,d)$.
Since $x \in C_0$, we get $y_1 = \cos \beta \geqslant \cos \gamma \geqslant \frac{1}{4}$, hence
\hfill $\| x \|^2 = \| y - \frac{1}{2} d \|^2 = \| (\cos \beta - \frac{1}{2}) d + \sin \beta \cdot u \|^2 = (\cos \beta - \frac{1}{2})^2 + \sin^2 \beta =
\frac{5}{4} - \cos \beta \leqslant 1$.
\end{proof}
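As an informal Monte-Carlo sanity check of Lemma \ref{lmOmtdInsideBall} (our illustration, with $\mathbb{R}^6$ as a stand-in for $H$), one can sample points of $C_0 = C(-\frac{1}{2}d, \frac{1}{2}d, \arccos\frac{1}{4})$ in the extreme admissible case and verify $\| x \| \leqslant 1$:

```python
import math
import random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

random.seed(1)
dim = 6
d = [0.0] * dim
d[0] = 1.0                                   # unit vector d in S
gamma = math.acos(0.25)                      # the extreme admissible angle
s = [-0.5 * c for c in d]                    # ommatidium origin s = -d/2

worst = 0.0
for _ in range(10000):
    # random unit u orthogonal to d
    u = [random.gauss(0.0, 1.0) for _ in range(dim)]
    p = sum(a * b for a, b in zip(u, d))
    u = [a - p * b for a, b in zip(u, d)]
    nu = norm(u)
    u = [c / nu for c in u]
    t = random.uniform(0.0, gamma)           # angle between x - s and d
    rho = random.uniform(0.0, 1.0)           # |x - s| <= |e - s| = 1
    x = [sc + rho * (math.cos(t) * dc + math.sin(t) * uc)
         for sc, dc, uc in zip(s, d, u)]
    worst = max(worst, norm(x))
```

By the computation in the proof, $\| x \|^2 = \rho^2 - \rho\cos t + \frac{1}{4} \leqslant \frac{5}{4} - \cos\gamma = 1$, with equality approached at $\rho = 1$, $t = \gamma$.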
\begin{lemma}\label{lmOmtdOriginNonIntr}
If $\gamma < \pi$, then $s \notin \mathrm{Int}\, C(s, e, \gamma)$.
\end{lemma}
\begin{proof}
Let $v = e - s$, then $\forall \varepsilon > 0$: $\angle (-\varepsilon v, e - s) = \arccos \frac{\inprod{-\varepsilon v}{v}}{\| -\varepsilon v \| \cdot \| v \|} = \arccos (-1) = \pi > \gamma$,
hence $B(s, 2 \varepsilon \| e - s\|) \ni s -\varepsilon v \notin C(s, e, \gamma)$, and $B(s, 2 \varepsilon \| e - s\|) \nsubseteq C(s, e, \gamma)$.
\end{proof}
\begin{lemma}\label{lmOmtdMiddleInt}
If $\gamma > 0$, then $\frac{1}{2}(s + e) \in \mathrm{Int}\, C(s, e, \gamma)$.
\end{lemma}
\begin{proof}
Without loss of generality, assume that $s = \theta$, $\| e \| = 1$, and $\gamma \leqslant \frac{\pi}{4}$
(otherwise move the ommatidium so that its origin becomes $\theta$ by Lemma \ref{lmOmtdCongr}, scale it to attain $\| e \| = 1$ ($x\leftrightarrow x / \| e \|$),
and consider $C(\theta, e, \frac{\pi}{4}) \subseteq C(\theta, e, \gamma)$).
We need to show that $\exists \varepsilon > 0$: $B(\frac{1}{2}e, \varepsilon) \subseteq C(\theta, e, \gamma)$ $\Leftrightarrow$ $\forall x\in B(\frac{1}{2}e, \varepsilon)$:
$\| x\| \leqslant 1$ and $\angle (x, e) \leqslant \gamma$; the latter inequality is equivalent to $\cos \angle(x, e) \geqslant \cos \gamma$.
For arbitrary $\varepsilon > 0$ and $\forall x \in B(\frac{1}{2}e, \varepsilon)$: $x = \frac{1}{2}e + b$, where $\| b \| < \varepsilon$.
Then $\| x\| \leqslant \| \frac{1}{2} e\| + \| b\| < \frac{1}{2} + \varepsilon$; the constraint $\varepsilon < \frac{1}{2}$ ensures $\| x\| < 1$.
\centerline{$\cos \angle(x, e) = \frac{\inprod{x}{e}}{\| x \| \cdot \| e\|} = \frac{1}{\| \frac{1}{2}e + b \|} \bigl[ \inprod{\frac{1}{2}e}{e} + \inprod{b}{e} \bigr] =
\frac{1}{\| e + 2b\|} + \frac{2}{\| e + 2b\|} \inprod{b}{e}$}
1) $\| e + 2b \| \leqslant \| e \| + 2 \| b \| < 1 + 2\varepsilon$, hence $\frac{1}{\| e + 2b\|} > \frac{1}{1 + 2 \varepsilon}$.
2) On the other hand, $\| e + 2b\| \geqslant \| e \| - \| - 2b\| > 1 - 2 \varepsilon$ $\Rightarrow$ $\frac{2}{\| e + 2b\|} < \frac{2}{1 - 2\varepsilon}$,
and the Cauchy-Bunyakovsky-Schwarz inequality implies $\bigl| \inprod{b}{e} \bigr| \leqslant \| b \| \cdot \| e \| < \varepsilon$,
therefore $\frac{2}{\| e + 2b\|} \inprod{b}{e} > - \frac{2\varepsilon}{1 - 2\varepsilon}$.
Consequently (for sufficiently small $\varepsilon$) $\cos \angle(x, e) > \frac{1}{1 + 2 \varepsilon} - \frac{2\varepsilon}{1 - 2 \varepsilon} \rightarrow 1$ as $\varepsilon \rightarrow 0$,
thus for some $\varepsilon_0 \in (0;\frac{1}{2})$ we obtain: $\cos \angle(x, e) \geqslant \cos \gamma$ for each $x\in B(\frac{1}{2}e, \varepsilon_0)$.
\end{proof}
\begin{lemma}\label{lmOmtdConvex}
If $\gamma \leqslant \frac{\pi}{2}$, then $C(s, e, \gamma)$ is convex.
\end{lemma}
\begin{proof}
Again, we assume $s = \theta$ and $\| e \| = 1$ without loss of generality.
Let $x, y \in C(\theta, e, \gamma)$. We claim that $\forall \lambda \in [0;1]$: $z = \lambda x + (1 - \lambda) y \in C(\theta, e, \gamma)$.
If $x = \theta$ or $y = \theta$, then $z \in C(\theta, e, \gamma)$ by Lemma \ref{lmOmtdSegm}.
Otherwise: clearly $\theta \in C(\theta, e, \gamma)$, so we may suppose $z \ne \theta$.
1) $\| z\| \leqslant \lambda \| x \| + (1 - \lambda) \| y \| \leqslant \lambda \cdot 1 + (1 - \lambda) \cdot 1 = 1$;
2) $\cos \angle(z, e) = \frac{\inprod{z}{e}}{\| z \| \cdot \| e\|} = \bigl[ \lambda \frac{\inprod{x}{e}}{\| z \|} + (1 - \lambda) \frac{\inprod{y}{e}}{\| z \|} \bigr] =
\bigl[ \lambda \frac{\inprod{x}{e}}{\| x\|} \cdot \frac{\| x \|}{\| z \|} + (1 - \lambda) \frac{\inprod{y}{e}}{\| y\|} \cdot \frac{\| y \|}{\| z \|} \bigr] =$
\hfill $= \bigl[ \frac{\lambda \| x \|}{\| z\|} \cos \angle(x, e) + \frac{(1 - \lambda) \| y\|}{\| z \|} \cos \angle(y, e) \bigr] \geqslant
\frac{\lambda \| x\| + (1 - \lambda) \| y\|}{\| \lambda x + (1 - \lambda) y \|} \cos \gamma \stackrel{\cos \gamma \geqslant 0}{\geqslant} \cos \gamma$
$\Leftrightarrow$ $\angle(z, e) \leqslant \gamma$
\end{proof}
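Lemma \ref{lmOmtdConvex} in the extreme case $\gamma = \frac{\pi}{2}$ can likewise be probed numerically. The sketch below (an informal check in $\mathbb{R}^5$, not a proof; `in_omt` and `sample` are our names) rejection-samples points of the ommatidium and tests convex combinations:

```python
import math
import random

random.seed(2)
dim = 5
e = [0.0] * dim
e[0] = 1.0
gamma = math.pi / 2                          # the extreme case of the lemma

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def in_omt(x):                               # is x in C(theta, e, gamma)?
    nx = norm(x)
    if nx > 1.0 + 1e-12:
        return False
    if nx == 0.0:
        return True
    c = sum(a * b for a, b in zip(x, e)) / nx
    return math.acos(max(-1.0, min(1.0, c))) <= gamma + 1e-9

def sample():                                # rejection-sample the ommatidium
    while True:
        x = [random.uniform(-1.0, 1.0) for _ in range(dim)]
        if in_omt(x):
            return x

convex_ok = True
for _ in range(2000):
    x, y = sample(), sample()
    lam = random.random()
    z = [lam * a + (1.0 - lam) * b for a, b in zip(x, y)]
    convex_ok = convex_ok and in_omt(z)
```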
\section{Main}
\begin{prop}\label{propMain}
Let $\dim X \geqslant m \in \mathbb{N}$, $B(\theta, 1) \subseteq E \subseteq \overline{B}(\theta, 1)$, and $E = \bigcup\limits_{i=1}^m A_i$, where $A_i \cong A_j$.
Then either $\theta \in \bigcap\limits_{i=1}^m \mathrm{Int}\, A_i$, or $\theta \notin \bigcup\limits_{i=1}^m \mathrm{Int}\, A_i$.
\end{prop}
\begin{proof}
For $m = 1$ the statement is trivial, so suppose $m \geqslant 2$. Let $K = \overline{B}(\theta, 1)$, $S = S(\theta, 1)$.
$K = \overline{B(\theta, 1)} \subseteq \overline{E} \subseteq \overline{\overline{B}(\theta, 1)} = K$, hence $\overline{E} = K$.
Let $f_{ij}$ be the motion transforming $A_i$ to $A_j$, so that $f_{ij}(A_i) = A_j$, and $f_{ji} = f^{-1}_{ij}$ ($f_{ii} = \mathcal{I}$).
Consider $S_i = \overline{A_i} \cap S$. They are closed and $\bigcup\limits_{i=1}^m S_i = \bigl( \bigcup\limits_{i=1}^m \overline{A_i} \bigr) \cap S =
\overline{\bigcup\limits_{i=1}^m A_i} \cap S = K \cap S = S$. By Lemma \ref{lmLSBNormSpace}, $\exists S_k$, $\exists d \in S$: $\{ d, -d \} \subseteq S_k$.
Take any $i \ne k$. Let $A_k' = A_k \cup S_k \cup f^{-1}_{ki}(S_i)$ and $A_i' = A_i \cup S_i \cup f_{ki}(S_k)$.
1) $A_i' \subseteq K$. Indeed, a) $A_i \subseteq E \subseteq K$, b) $S_i \subseteq S \subset K$, c) $\forall x \in S_k \subseteq \overline{A_k}$
$\exists \{ x_l \}_{l=1}^{\infty}$, $x_l \in A_k$: $x_l \xrightarrow[l\rightarrow \infty]{} x$, then continuous $f_{ki}(x_l) \xrightarrow[l\rightarrow \infty]{} f_{ki}(x)$.
$f_{ki}(x_l) \in A_i$, hence $f_{ki}(x) \in \overline{A_i} \subseteq \overline{E} = K$.
2) $f_{ki}(A_k') = f_{ki} (A_k) \cup f_{ki}(S_k) \cup f_{ki} \bigl( f^{-1}_{ki} (S_i) \bigr) = A_i \cup S_i \cup f_{ki}(S_k) = A_i'$.
Since $\{ d, -d \} \subseteq S_k \subseteq A_k'$, we obtain $f_{ki} \bigl( \{d, -d \} \bigr) \subseteq A_i' \subseteq K$,
so $\| f_{ki} (d) \| \leqslant 1 = \| d \|$ and $\| f_{ki}(-d) \| \leqslant \| d \|$.
By Lemma \ref{lmDiamTrivShift}, the shift component $h_{ki}$ of $f_{ki} = h_{ki} \circ g_{ki}$ is trivial. Then the shift component $h_{ik}$ of $f_{ik} = f^{-1}_{ki}$ is also trivial.
There are 2 possible cases: either $\exists i$: $\theta \in \mathrm{Int}\, A_i$, or $\forall i$: $\theta \notin \mathrm{Int}\, A_i$ $\Leftrightarrow$ $\theta \notin \bigcup\limits_{i=1}^m \mathrm{Int}\, A_i$.
Consider the former case: then $B(\theta, \varepsilon) \subseteq A_i$ for some $\varepsilon > 0$. Take any $j \ne i$.
By Lemma \ref{lmMotionBall}, $f_{ik} \bigl( B(\theta, \varepsilon) \bigr) = B\bigl( f_{ik}(\theta), \varepsilon \bigr) = B\bigl( g_{ik}(\theta), \varepsilon \bigr) = B(\theta, \varepsilon)$,
hence $B(\theta, \varepsilon) \subseteq A_k$. Apply Lemma \ref{lmMotionBall} again:
$f_{kj} \bigl( B(\theta, \varepsilon) \bigr) = B\bigl( f_{kj}(\theta), \varepsilon \bigr) = B(\theta, \varepsilon) \subseteq A_j$, and $\theta \in \mathrm{Int}\, A_j$.
Therefore $\theta \in \bigcap\limits_{i=1}^m \mathrm{Int}\, A_i$.
\end{proof}
{\bf Corollary.} If $\dim X \geqslant \aleph_0$, then the statement of Prop. \ref{propMain} holds for every $m \in \mathbb{N}$:
a ball in such $X$ cannot be covered by a finite number of congruent subsets in such a way that its centre belongs to the interiors of some of them
and not to the interiors of the others.
As for infinite coverings, see Ex. \ref{exmpUnivInfCover} and Ex. \ref{exmpHilbertCountCover} below.
{\ }
\begin{remark}\label{remIneqCounterEx}
One may ask why we do not generalize the approach from \cite{douwen1993} instead.
The reasoning there essentially relies on the inequality
\centerline{$\forall x, y, z \in X$: $\| x - y \|^2 + \| z \|^2 \leqslant \| x \|^2 + \| y \|^2 + \| x - z \|^2 + \| y - z \|^2$}
\noindent
which is a consequence of the inequality \cite[p. 184, (c)]{douwen1993} (for $p \leftarrow \theta$, $q \leftarrow z = \sigma_A(\theta)$),
established for Euclidean/pre-Hilbert $X$. Unfortunately, it does not hold in an arbitrary NCS $X$: consider $X = \mathbb{R}^2_{3/2}$ with
$\| x \| = \bigl\| (x_1;x_2) \bigr\|_{3/2} = \bigl( |x_1|^{3/2} + |x_2|^{3/2} \bigr)^{2/3}$
and let $x = (1;0)$, $y = (0;1)$, $z = (1;1)$. Then
\centerline{$\| x - y \|^2 + \| z \|^2 = 2 \cdot 2^{\frac{4}{3}} = 4 \cdot 2^{\frac{1}{3}} >
4 = 1 + 1 + 1 + 1 = \| x \|^2 + \| y \|^2 + \| x - z \|^2 + \| y - z \|^2$}
(Maybe some subtler form of the inequality would work.)
\end{remark}
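The counterexample of Remark \ref{remIneqCounterEx} is a direct computation, which the following snippet (our illustration) confirms by evaluating both sides of the inequality in $\mathbb{R}^2_{3/2}$ at $x=(1;0)$, $y=(0;1)$, $z=(1;1)$:

```python
def norm_p(x, p):
    # the l_p norm on R^2: (|x1|^p + |x2|^p)^(1/p)
    return sum(abs(c) ** p for c in x) ** (1.0 / p)

def sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

p = 1.5                                       # X = R^2_{3/2}
x, y, z = (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)

lhs = norm_p(sub(x, y), p) ** 2 + norm_p(z, p) ** 2
rhs = (norm_p(x, p) ** 2 + norm_p(y, p) ** 2
       + norm_p(sub(x, z), p) ** 2 + norm_p(sub(y, z), p) ** 2)
# lhs = 2 * 2^(4/3) = 4 * 2^(1/3) ~ 5.04, while rhs = 4, so the
# inequality |x-y|^2 + |z|^2 <= |x|^2 + |y|^2 + |x-z|^2 + |y-z|^2 fails.
```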
{\ }
\begin{remark}
On the other hand, the LSB theorem is applied here too, being a ``foundation stone'' of the inference;
another key point is that the motions transforming the subsets onto each other contain no parallel shift component, since otherwise one of a pair of antipodal points would move outside of the ball.
Antipodal/``diametral'' points and the constraints they impose are exploited, --- without resort to the LSB theorem, --- in \cite[\S 4]{edelstein1988},
where NCS Banach spaces are considered; see Rem. \ref{remPlaneNonNCSBanachNCS} below.
\end{remark}
{\ }
\begin{remark}
If we replace the condition ``$A_i \cong A_j$'' by ``$\mathrm{Int}\, A_i \cong \mathrm{Int}\, A_j$'', then ``$\theta \in \mathrm{Int}\, A_1$ and $\theta \notin \mathrm{Int}\, A_2$'' becomes possible, evidently;
for example, in $\mathbb{R}^2$ take $z\in K$: $\| z \| = \frac{1}{2}$, and
\centerline{$A_1 = B(\theta, \frac{1}{8}) \cup \{ (x;y) \in K\mid x\in \mathbb{Q} \text{ and } y\in \mathbb{Q} \}$,
$A_2 = B(z, \frac{1}{8}) \cup \{ (x;y) \in K\mid x\notin \mathbb{Q} \text{ or } y\notin \mathbb{Q} \}$}
\noindent
then $A_1 \cup A_2 = K$, $\mathrm{Int}\, A_1 = B(\theta, \frac{1}{8}) \ni \theta$, $\mathrm{Int}\, A_2 = B(z, \frac{1}{8})$, so $\mathrm{Int}\, A_1 \cong \mathrm{Int}\, A_2$ and $\mathrm{Int}\, A_1 \cap \mathrm{Int}\, A_2 = \varnothing$.
The same happens if we replace ``congruence'' by ``homotheticity'': take $A_1 = K$ and let $A_2$, $A_3$, ... be balls of sufficiently small radius $\rho$ such that
all of them can be placed within $K$ without containing its centre (in other words, $A_i = \rho K + c_i$, $\rho < \| c_i \| < 1 - \rho$ for $i \geqslant 2$).
\end{remark}
{\ }
\begin{example}\label{exmpNonNCS}
Without NCS, the statement of Prop. \ref{propMain} can become false.
Consider non-NCS $l_{\infty} = \bigl\{ x = (x_1;x_2;...)\colon \| x \|_{\infty} = \sup\limits_{i\in \mathbb{N}} |x_i| < \infty \bigr\}$
and its unit ball $\overline{B}(\theta,1) = \bigl\{ x\in l_{\infty} \colon \sup\limits_i |x_i| \leqslant 1 \bigr\}$. For any odd $n \geqslant 3$
the subsets $A_i = \bigl\{ x\in \overline{B}(\theta, 1) \colon x_1 \in [-1+2\frac{i-1}{n};-1 + 2 \frac{i}{n}]\bigr\}$, $i=\overline{1,n}$, are congruent
(motion $f_{ij}(x) = (x_1 + 2\frac{j-i}{n};x_2;x_3;...)$ transforms $A_i$ to $A_j$), $\theta \in B(\theta, \frac{1}{2n}) \subset \mathrm{Int}\, A_{1 + \lfloor\frac{n}{2}\rfloor}$,
while $\theta \notin A_i$ for $i \ne 1 + \lfloor\frac{n}{2}\rfloor$, and $A_1 \cup ... \cup A_n = \overline{B}(\theta, 1)$.
Instead of $l_{\infty}$, we can take $\mathbb{R}^m_{\infty}$ if $m \geqslant n$.
Note that this decomposition of $\overline{B}(\theta, 1)$ is a dissection and a covering (but not a partition).
\end{example}
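The slab construction of Example \ref{exmpNonNCS} can be verified mechanically for a truncation to $\mathbb{R}^4_{\infty}$ with $n = 3$. In the sketch below (an informal check, not part of the example), `in_A` and `f` are our names for the sets $A_i$ and the motions $f_{ij}$:

```python
import random

random.seed(3)
n, m = 3, 4                                  # n odd slabs in R^m with the sup-norm

def in_A(i, x):                              # slab A_i, i = 1..n, inside the unit ball
    return (max(abs(c) for c in x) <= 1.0
            and -1.0 + 2.0 * (i - 1) / n <= x[0] <= -1.0 + 2.0 * i / n)

def f(i, j, x):                              # the motion f_ij: shift of the 1st coordinate
    return [x[0] + 2.0 * (j - i) / n] + list(x[1:])

pts = [[random.uniform(-1.0, 1.0) for _ in range(m)] for _ in range(1000)]
covered = all(any(in_A(i, x) for i in range(1, n + 1)) for x in pts)

mid = 1 + n // 2                             # the slab whose interior contains theta
theta = [0.0] * m
theta_ok = (in_A(mid, theta)
            and not any(in_A(i, theta) for i in range(1, n + 1) if i != mid))

# f_ij really maps points of A_i into A_j:
maps_ok = all(in_A(2, f(1, 2, x)) for x in pts if in_A(1, x))
```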
{\ }
\begin{remark}\label{remPlaneNonNCSBanachNCS}
Consider $\mathbb{R}^2_p$, $\| x \|_{2;p} = \bigl( |x_1|^p + |x_2|^p \bigr)^{\frac{1}{p}}$.
For $p = 2$, the usual Euclidean metric, the original dissection problem posed in \cite[C6]{croft1991} remains unsolved.
For $p = 1$ or $p = \infty$, --- non-NCS case, --- $\overline{B}(\theta, 1)$ is a square
(sides being parallel to $Ox_i$ for $p = \infty$, rotated by $\frac{\pi}{4}$ for $p = 1$),
trivially dissectable into 3 (5, 7, ...) congruent rectangles such that the centre $\theta$ is within one of them.
It is shown in \cite[\S 2]{edelstein1988} that $\overline{B}(\theta, 1)$ in non-NCS $c_0$, $C_{[0;1]}$ is partitionable into $n$ congruent subsets for $\forall n \leqslant \aleph_0$,
while in NCS Banach $X$ there's no such partition if $2 \leqslant n < \min\{ \dim X , \aleph_0 \} + 1$.
\end{remark}
{\ }
\begin{example}\label{exmpCoverDisk}
Obviously, as Fig. \ref{figCoverDiskGen} illustrates, the ball/disk in $\mathbb{R}^2$ can be covered by $n \geqslant 4$ congruent and convex subsets such that its centre belongs to the interior of exactly one set; moreover, the centre is at positive distance from other sets.
\begin{figure}[h]
\centerline{\begin{tabular}{ccc}
\includegraphics[width=3cm]{cbcs_pic_4.pdf}
&
\includegraphics[width=3cm]{cbcs_pic_5.pdf}
&
\includegraphics[width=3cm]{cbcs_pic_17.pdf}
\\
\includegraphics[height=1.5cm]{cbcs_pic_4_one.pdf}
&
\includegraphics[height=1.5cm]{cbcs_pic_5_one.pdf}
&
\includegraphics[height=1.5cm]{cbcs_pic_17_one.pdf}
\\
$n=4$ & $n=5$ & $n=17$
\end{tabular}}
\caption{Covering a disk by $n \geqslant 4$ congruent subsets}
\label{figCoverDiskGen}
\end{figure}
The case $n=3$ is slightly different: the sets are not convex and not 1-connected, each one has a circular hole in one of the two symmetric segments it consists of.
In Fig. \ref{figCoverDisk3}, $\angle AOB = 150^{\circ}$ (for instance).
We do not know whether there is any such covering by three 1-connected congruent subsets.
\begin{figure}[h]
\centerline{
\begin{tabular}{ccc}
\includegraphics[width=3.6cm]{cbcs_pic_3.pdf}
&\quad\quad&
\includegraphics[width=3.5cm]{cbcs_pic_3_one.pdf}
\end{tabular}}
\caption{Covering a disk by $n=3$ congruent subsets (resembles ``Biohazard'' symbol)}
\label{figCoverDisk3}
\end{figure}
In fact, the case $n=k$ can be extended to all $n > k$ (which makes Fig. \ref{figCoverDiskGen} redundant),
because covering allows $A_i = A_j$ (not so for dissection): take $A_1$, ..., $A_{k-1}$, and $A_k = A_{k+1} = ... = A_n$.
Similar constructions can be used in $\mathbb{R}^m$. In particular, when $n = m + 2$,
note that the ``hollow'' around the centre in Fig. \ref{figCoverDiskGen}, case $n=4$, is an equilateral triangle and a 2-simplex in $\mathbb{R}^2$.
\end{example}
{\ }
\begin{remark}
Convexity of parts implies the negative answer not only to the original dissection problem, but also to its generalization:
the closed disk in $\mathbb{R}^2$ cannot be dissected into $n \geqslant 2$ homothetic, convex, and closed parts such that the interior of exactly one part contains the centre.
\begin{proof}
Let $K = \overline{B}(\theta, 1)$, $S = S(\theta, 1)$ in $\mathbb{R}^2$, and let $\{A_i\}_{i=1}^n$ be the parts, so that $K=\bigcup\limits_{i=1}^n A_i$,
$A_i \sim A_j$, $\mathrm{Int}\, A_i \cap \mathrm{Int}\, A_j = \varnothing$ for $i \ne j$.
Also, let $\partial A_i = \overline{A_i} \cap \overline{\mathbb{R}^2 \backslash A_i} \subset A_i$ be the boundary of $A_i$.
1) Claim: if $\partial A_i$ contains $2n+4$ different points $x_1$, ..., $x_{2n+4}$ that belong to some circle $S(a, r)$,
then $S(a,r) = S$.
(``The strictly convex section of $\partial A_i$ has to be on $\partial K = S$, not inside $K$.'')
To show that this claim is true, assume the contrary: $\exists x_j \notin S$. Let $N_+$ be the number of $x_j \in S$, and $N_- = \bigl|\{ x_j\colon x_j \notin S \}\bigr|$.
$N_+ + N_- = 2n + 4$.
If $N_+ \geqslant 3$, then three of the points $x_1$, ..., $x_{2n+4}$ lying on $S$ determine the circle $S(a, r)$ uniquely (see e.g. \cite[2.3, Cor. 7]{agricola2008}),
so $S(a,r) = S$, which contradicts the assumption.
Thus $N_+ \leqslant 2$ $\Rightarrow$ $N_- \geqslant 2n+2$: we can take $2n+2$ points on $S(a,r)$ in $\mathrm{Int}\, K = B(\theta, 1)$.
Enumerate them sequentially, for instance, counter-clockwise: $x_1'$, ..., $x_{2n+2}'$.
\begin{figure}[h]
\centerline{\includegraphics[width=3.5cm]{cbcs_conv_diss.pdf}}
\label{figDissDiskConvex}
\end{figure}
Let $x_1' x_3' ... x_{2n+1}'$ be the convex polygon, with interior, inscribed into $S(a,r)$;
it follows from convexity of $A_i$ and $n+1\geqslant 3$ that $x_1' x_3' ... x_{2n+1}' \subseteq A_i$ and $\varnothing \ne \mathrm{Int}\, x_1' x_3' ... x_{2n+1}' \subseteq \mathrm{Int}\, A_i$.
Consider the rest of points: $x_2'$, $x_4'$, ..., $x_{2n+2}'$.
Since $x_j' \in \partial A_i \cap \mathrm{Int}\, K$, each of these $n+1$ points belongs to the boundary $\partial A_{k_j}$ of at least one other part, $k_j \ne i$.
There are $n-1$ other parts, hence two of these points, $x_{(1)}' $ and $x_{(2)}'$, belong to the boundary of the same $A_k$, $k \ne i$.
Then $[x_{(1)}', x_{(2)}'] \subset A_k$.
It is easy to see that $[x_{(1)}', x_{(2)}']$ intersects $\mathrm{Int}\, x_1' x_3' ... x_{2n+1}'$, hence $\mathrm{Int}\, A_i \cap \mathrm{Int}\, A_k \ne \varnothing$
(because $\forall y \in [x_{(1)}', x_{(2)}']$, $\forall B(y, \varepsilon)$: $B(y,\varepsilon) \cap \mathrm{Int}\, A_k \ne \varnothing$), --- a contradiction.
2) $\bigcup\limits_{i=1}^n (A_i \cap S) = S$ and $|S| > \aleph_0$ imply $\exists A_i$: $\partial A_i \supseteq A_i \cap S \supseteq \{ x_1, ..., x_{2n+4} \}$.
Consequently, $\partial A_k = f_{ik}(\partial A_i)$ of any other $A_k$ contains $x^{(k)}_j = f_{ik}(x_j)$, $j=\overline{1,2n+4}$,
which belong to some circle $S(a, r) = f_{ik}(S)$; here $f_{ik}\colon X \leftrightarrow X$, $\| f_{ik} (x) - f_{ik}(y) \| = \alpha_{ik} \| x - y \|$ is the homothety transforming $A_i$ to $A_k$.
By (1), $x^{(k)}_j \in S$ for any $j$ and $k$.
3) Now assume that there's exactly one part $A_{i_0}$ such that $\theta \in \mathrm{Int}\, A_{i_0}$: $B(\theta, \delta) \subseteq A_{i_0}$.
$\{ x^{(i_0)}_j \}_{j=1}^{2n+4} \subseteq S \cap \partial A_{i_0}$, and the point $\theta \in A_{i_0}$ is at distance 1, equidistant, from each $x^{(i_0)}_j$.
Take any $k \ne i_0$. As above, $\{y^{(k)}_j = f_{i_0 k}(x^{(i_0)}_j)\}_{j=1}^{2n+4} \subseteq S \cap A_k$.
And there must be $z \in A_k$, which is equidistant from each $y^{(k)}_j$; clearly, $z = \theta$ ($y^{(k)}_1$, $y^{(k)}_2$, $y^{(k)}_3$ determine it uniquely).
Also, $B(\theta, \delta) \subseteq A_k$ (apply similar arguments to $\forall x \in B(\theta, \delta) \subseteq A_{i_0}$),
so $\theta \in \mathrm{Int}\, A_k$. A contradiction.
\end{proof}
(The 1st step becomes shorter if we assume that some $\partial A_i$ contains an arc $\breve{a}$, which then has to lie on $S$:
otherwise two internal points of $\breve{a}$ would also belong to some $\partial A_k$, $k\ne i$, implying a contradiction.)
\end{remark}
{\ }
\begin{example}\label{exmpUnivInfCover}
Without an upper bound on the cardinality of the covering,
there is a ``universal'' covering of $\overline{B}(\theta, 1)$ such that the interior of exactly one subset contains the centre:
let $\mathcal{C} = \{ A_{\theta} \} \cup \bigl\{ A_y \bigr\}_{y \in S(\theta, \frac{1}{2})}$, where $A_{\theta} = \overline{B}(\theta, \frac{1}{2})$ and
$A_y = \overline{B}(y, \frac{1}{2})$.
Indeed, $\theta \in \mathrm{Int}\, A_{\theta} = B(\theta, \frac{1}{2})$, while $\forall y \in S(\theta, \frac{1}{2})$: $\theta \notin \mathrm{Int}\, A_y = B(y, \frac{1}{2})$,
and for $\forall x \in \overline{B}(\theta, 1) \backslash \{ \theta \}$ we take $y_x = \frac{1}{2 \| x \|} x \in S(\theta, \frac{1}{2})$,
then $\| x - y_x \| = |1 - \frac{1}{2 \| x\|}| \cdot \| x \| = \bigl| \| x \| - \frac{1}{2} \bigr| \leqslant \frac{1}{2}$, thus $x \in A_{y_x}$.
Certainly, $A_i \cong A_j$.
This covering doesn't require NCS or $\dim X > 1$; meanwhile $\dim X > 1$ implies $| \mathcal{C} | > \aleph_0$.
\end{example}
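The membership computation in Example \ref{exmpUnivInfCover}, $\| x - y_x \| = \bigl| \| x \| - \frac{1}{2} \bigr| \leqslant \frac{1}{2}$, is norm-independent, which a small randomized check confirms (our illustration; we test the Euclidean and sup norms on $\mathbb{R}^4$):

```python
import math
import random

random.seed(4)

def covered_by_universal(norm, dim, trials=2000):
    """x in the closed unit ball, x != theta  =>  x in B-bar(y_x, 1/2),
    where y_x = x / (2 |x|) lies on S(theta, 1/2); works for any norm."""
    for _ in range(trials):
        v = [random.gauss(0.0, 1.0) for _ in range(dim)]
        r = random.uniform(1e-6, 1.0)
        nv = norm(v)
        x = [r * c / nv for c in v]                  # |x| = r <= 1 (homogeneity)
        y = [c / (2.0 * norm(x)) for c in x]         # y_x in S(theta, 1/2)
        if norm([a - b for a, b in zip(x, y)]) > 0.5 + 1e-9:
            return False
    return True

eucl = lambda v: math.sqrt(sum(c * c for c in v))
sup = lambda v: max(abs(c) for c in v)
```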
{\ }
\begin{example}\label{exmpHilbertCountCover}
Consider the Hilbert space over $\mathbb{R}$, $X = H = l_2$, and its closed unit ball $\overline{B_H} = \overline{B}(\theta, 1)$, unit sphere $S_H = S(\theta, 1)$.
We claim that there is a countable covering of $\overline{B_H}$ by its congruent and convex subsets $\{ A_i \}$ such that the interior of exactly one set contains the centre.
\begin{proof}
It's a direct corollary of Lemmas \ref{lmCountDenseSphGeod}, \ref{lmOmtdCongr}--\ref{lmOmtdConvex}:
1) Lemma \ref{lmCountDenseSphGeod} and Lemma \ref{lmOmtdCoverBall} along with Lemma \ref{lmOmtdCongr} provide
the countable covering of $\overline{B_H}$ by congruent ommatidia $A_i = C(\theta, d_i, \frac{\pi}{4})$, $i \in \mathbb{N}$, where $d_i \in S_H$.
By Lemma \ref{lmOmtdOriginNonIntr}, $\theta \notin \mathrm{Int}\, A_i$.
2) Then Lemma \ref{lmOmtdInsideBall} allows us to add the ommatidium $A_0 = C(-\frac{1}{2}d_1, \frac{1}{2}d_1, \frac{\pi}{4})$,
which is contained in $\overline{B_H}$ since $\frac{\pi}{4} < \arccos\frac{1}{4}$ and congruent with $A_i$ by Lemma \ref{lmOmtdCongr}.
3) Finally, by Lemma \ref{lmOmtdMiddleInt}, $\theta = \frac{1}{2}\bigl( -\frac{1}{2}d_1 + \frac{1}{2} d_1 \bigr) \in \mathrm{Int}\, A_0$.
By Lemma \ref{lmOmtdConvex}, $A_i$ are convex.
$|\{ A_i \}_{i \in \mathbb{N} \cup \{ 0\}}| = \aleph_0 = \dim H$.
\end{proof}
This covering somewhat resembles those from Fig. \ref{figCoverDiskGen}, except that
a)~the sets intersect ``a~lot'', b)~there's no ``hollow'' at the centre (corrigible by erasing sufficiently small neighborhood of template ommatidium's origin), and
c)~it's infinite-dimensional.
\end{example}
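A finite-dimensional analogue of the covering from Example \ref{exmpHilbertCountCover} can be assembled and tested numerically: in $\mathbb{R}^3$, normalized grid points play the role of the geodesically dense set $D$, and we check that every sampled point of the ball lies in some $C(\theta, d_i, \frac{\pi}{4})$ (our illustration only, not part of the proof):

```python
import math
import random

def angle(x, y):
    nx = math.sqrt(sum(c * c for c in x))
    ny = math.sqrt(sum(c * c for c in y))
    if nx == 0.0 or ny == 0.0:
        return 0.0
    c = sum(a * b for a, b in zip(x, y)) / (nx * ny)
    return math.acos(max(-1.0, min(1.0, c)))

# Directions d_i: normalized grid points, a finite stand-in for the
# geodesically dense set D of Lemma lmCountDenseSphGeod.
step = 0.25
grid = [i * step for i in range(-4, 5)]
D = []
for p in ((a, b, c) for a in grid for b in grid for c in grid):
    n = math.sqrt(sum(c * c for c in p))
    if n > 0.0:
        D.append(tuple(c / n for c in p))

beta = math.pi / 4
random.seed(5)
cover_ok = True
for _ in range(300):
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    rho = random.uniform(0.0, 1.0) ** (1.0 / 3.0)     # uniform radius in the ball
    n = math.sqrt(sum(c * c for c in v))
    x = [rho * c / n for c in v]
    # x lies in C(theta, d, beta) as soon as angle(x, d) <= beta (Lemma lmOmtdSegm)
    cover_ok = cover_ok and any(angle(x, d) <= beta for d in D)
```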
{\ }
The covering problem turns out to admit ``positive'' results more readily than the problems of the dissection and partition types.
{\ }

\noindent
[arXiv:1708.01598, 2017-08-07 --- ``On covering a ball by congruent subsets in normed spaces'' (math.FA; math.MG).]
{\ }

\noindent
[arXiv:1105.3643 --- ``On the identifiability of binary Segre products''.]

\noindent
{\bf Abstract.} We prove that a product of $m>5$ copies of $\PP^1$, embedded in the projective space $\PP^r$ by the standard Segre embedding, is $k$-identifiable (i.e. a general point of the secant variety $S^k(X)$ is contained in only one $(k+1)$-secant $k$-space), for all $k$ such that $k+1\leq 2^{m-1}/m$.

\section{Introduction}
In this paper, we study secant varieties $S^k$ of Segre products
of projective spaces, with special focus on products of many copies
of ${\mathbb{P}}^1$ (binary Segre products or Bernoulli models, in Algebraic Statistics).
We are mainly concerned with the number of secant spaces passing through
a general point of a secant variety.
In the literature, one finds several methods for computing the
dimension of secant varieties of products. Let us just mention the
inductive method introduced by Abo, Ottaviani and Peterson
in \cite{AOP}, which provides a procedure for detecting
when the dimension coincides with the expected one.
In the specific case of products of copies of ${\mathbb{P}}^1$, a complete description
of the dimension of secant varieties has been obtained by Catalisano, Geramita
and Gimigliano in \cite{CGG1}
and \cite{CGG2}. When the number of copies $m$ of ${\mathbb{P}}^1$ is bigger than $4$, they
prove that $S^k$ always has the expected dimension.
From our point of view, the result implies
that, when the secant variety $S^k$ does not fill the ambient space and $m>4$, then
through a general point of $S^k$ one finds only finitely many $(k+1)$-secant
$k$-spaces. In this paper, we go one step further and we ask {\it how many} secant
spaces one finds through a general point of $S^k$. Our main result is:
\begin{Thm}\label{main}
Let $X$ be a product of $m>5$ copies of ${\mathbb{P}}^1$, embedded in the projective
space ${\mathbb{P}}^r$, $r=2^m-1$, by the standard Segre embedding.
Let $S^k(X)$ be the $k$-th secant variety of $X$, generated by $(k+1)$-secant $k$-spaces. If $k+1\leq 2^{m-1}/m$,
then a general point of $S^k(X)$ is contained in
only one $(k+1)$-secant $k$-space.\end{Thm}
Following a notation suggested by applications to Algebraic Statistics,
we say that a variety $X$ is {\it $k$-identifiable} when through a
{\it general} point of the secant variety $S^k(X)$, there is only one
$(k+1)$-secant $k$-space.\end{Thm}
Following a notation suggested by applications to Algebraic Statistics,
we say that a variety $X$ is {\it $k$-identifiable} when through a
{\it general} point of the secant variety $S^k(X)$, there is only one
$(k+1)$-secant $k$-space. (Those who would prefer ``generically $k$-identifiable'' here,
should consider that there are {\it always} points of $S^k(X)$, e.g. points
of $X$, for which the number jumps to infinity.)
With this notation, our result can be rephrased by saying that a product
of $m>5$ copies of ${\mathbb{P}}^1$ is $k$-identifiable, as soon as $k+1\leq 2^{m-1}/m$ (i.e.
$m - \log_2(m) \geq \lceil \log_2 (k+1) \rceil +1$).
\smallskip
From this last point of view, $k$-identifiability has been studied
because of its application to Algebraic Statistics and other fields.
Using methods of Algebraic Geometry, Elmore, Hall and Neeman proved in
\cite{EHN} the following asymptotic result: when the number $m$ of copies of
${\mathbb{P}}^1$ is ``very large" with respect to $k$, then the binary Segre product is
$k$-identifiable.
As far as we know, the best bound for identifiability of binary products
has been obtained by Allman, Matias and Rhodes
in \cite{AMR} (Corollary 5). They prove that the product is $k$-identifiable when
$m> 2\lceil\log_2 (k+1)\rceil +1$. Thus, they
give a lower bound for $2^m$ which is quadratic with respect to $k+1$. Our theorem provides an extension of these results. In order to compare with the aforementioned bounds, notice that
$({\mathbb{P}}^1)^m$ cannot be $k$-identifiable for $k>2^m/(m+1) -1$, by a simple
dimensional count, explained in Section \ref{background}.
Thus, the maximal $k$ for which identifiability makes sense is
$k_{max}=\lfloor 2^m/(m+1) \rfloor -1$. The result of Allman, Matias and Rhodes,
rewritten from this point of view, says that $({\mathbb{P}}^1)^m$ is
$k$-identifiable for $k+1\leq 2^{(m-1)/2}$. Our theorem extends this bound, for we prove that:
$$ ({\mathbb{P}}^1)^m \mbox{ is $k$-identifiable for } k+1\leq \frac{2^{m-1}}{m}.$$
Still roughly half of the maximum, but a sensible improvement nonetheless. For example, for $m=10$, $k_{max}$ is $92$; our bound
proves the $k$-identifiability for $k\leq 50$, while Allman,
Matias and Rhodes give identifiability for $k+1\leq 16\sqrt 2\approx 22.6$, i.e. $k\leq 21$.
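The three bounds above are easy to tabulate. The following sketch (plain Python; the function names are our own) computes $k_{max}=\lfloor 2^m/(m+1)\rfloor-1$, the bound of the present paper $k+1\leq 2^{m-1}/m$, and the Allman--Matias--Rhodes bound $k+1\leq 2^{(m-1)/2}$:

```python
import math

def k_max(m):
    # largest k for which identifiability can hold:
    # need r = 2^m - 1 >= mk + m + k (dimension count of Section 2)
    return (2**m - 1 - m) // (m + 1)

def k_ours(m):
    # bound of the present paper: k + 1 <= 2^(m-1) / m
    return 2**(m - 1) // m - 1

def k_amr(m):
    # Allman-Matias-Rhodes bound: k + 1 <= 2^((m-1)/2)
    return math.floor(2 ** ((m - 1) / 2)) - 1

# agreement with the closed form floor(2^m/(m+1)) - 1
for m in range(6, 13):
    assert k_max(m) == 2**m // (m + 1) - 1

assert (k_max(10), k_ours(10), k_amr(10)) == (92, 50, 21)
```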
\smallskip
Our method is strongly based on the result on the dimension
of secant varieties, contained in \cite{CGG2}. Indeed, for a variety $X$, both the
dimension of secant varieties and the number of secant spaces
passing through a point, are linked to the existence of very
degenerate subvarieties passing through $k+1$ general points
of $X$. We explain this fact in detail in the next section.
Using this remark, transferring results on the dimension
of secant varieties to results on identifiability
becomes straightforward. At the end of the paper, we will
explain why we need the assumption $m>5$.
Namely, we prove that $({\mathbb{P}}^1)^5$ is not $4$-identifiable.
Let us finish by stating the following conjecture, suggested by
our analysis.
\begin{Conj}\label{conge} For $m>5$ and for all $k=1,\dots,
\lfloor 2^m/(m+1) \rfloor -1$, the binary Segre product
$({\mathbb{P}}^1)^m$ is $k$-identifiable.
\end{Conj}
\section{Geometric background}\label{background}
In this section, we collect some known results on secant varieties and Segre products.
We refer to \cite{CC2}, for details and proofs. We work over the complex field
and we consider the projective space ${\mathbb{P}}^r = {\mathbb{P}}^r_{{\mathbb{C}}}$,
equipped with the tautological line bundle ${\mathcal{O}}_{{\mathbb{P}}^r}(1)$.
If $Y$ is a subset of ${\mathbb{P}}^r$, we denote by $\langle Y \rangle$
the linear span of $Y$. We say that $Y$ is {\it non--degenerate}
if $\langle Y \rangle={\mathbb{P}}^r$.
A linear subspace of dimension $n$ of ${\mathbb{P}}^ r$
will be called an \emph{$n$--subspace} of ${\mathbb{P}}^ r$.
Let $X\subset {\mathbb{P}}^ r$ be an irreducible, projective, non--degenerate variety of
dimension $m$. For any non--negative integer $k$,
the {\it $k$--secant variety} of $X$ is the Zariski closure in ${\mathbb{P}}^r$ of the union of all
$k$--dimensional subspaces of ${\mathbb{P}}^r$ that are spanned by $k+1$
independent points of $X$. We denote it by $S^k(X)$, or $S^k$, if
no confusion arises. $S^k(X)$ can be seen as the closure of the image,
under the second projection,
of the {\it abstract secant variety}, i.e. the incidence subvariety
$AbS^k(X)\subset X^{(k)}\times {\mathbb{P}}^r$,
$$AbS^k(X) =\{((P_0,\dots,P_k),P): P\in\langle P_0,\dots,P_k\rangle, \mbox{ and the
$P_i$'s are independent}\}.$$
Notice that $AbS^k(X)$ is {\it always} a variety of dimension $mk+m+k$.
When $X\subset {\mathbb{P}}^ r$ is reducible, the same definition of secant variety
holds, except that we only consider linear spaces meeting every component
of $X$. In particular, when $X$ has $k+1$ components, the secant
variety coincides with the {\it join} of the components
(see \cite{Zak}).
\begin{Def} We say that $X$ {\it has $k$-th secant order $\mu$}
if for a general point $P\in S^k(X)$, there are exactly $\mu$ unordered $(k+1)$-uples
$P_0,\dots,P_k$ of points of $X$ such that $P\in \langle P_0,\dots,P_k\rangle$.
We say that $X$ is (generically) {\it $k$-identifiable}
if it has $k$-th secant order $1$, i.e.
if for a general point $P\in S^k(X)$, there is a unique unordered $(k+1)$-uple
$P_0,\dots,P_k$ of points of $X$ such that $P\in \langle P_0,\dots,P_k\rangle$.
\end{Def}
\begin{Ex}\label{linsp} If $X$ is the union of $k+1$ linearly independent subspaces
of dimension $m$, then $X$ has $k$-th secant order $1$. This is an easy exercise in Linear Algebra for $k+1=2$. For $k+1>2$,
it follows by induction, by projecting from one linear component of $X$. Rational normal curves in ${\mathbb{P}}^{2k+1}$ are the unique irreducible curves with
$k$-th secant order $1$. See e.g. Theorem 3.1 of \cite{CC2}.
\end{Ex}
From the definition of secant varieties, it follows that:
\begin{equation} \label{defect} s^{(k)}(X):= \dim (S^k(X))\leq
\min\{r,mk+m+k\}.\end{equation}
The right hand side is called the {\it expected dimension} of $S^{k}(X)$.
\begin{Def}
We say that $X$ is {\it $k$--defective} when a strict inequality
holds in (\ref{defect}).
\end{Def}
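For instance (a numerical check of our own), for the binary Segre product $X=({\mathbb{P}}^1)^m$ one has $\dim X=m$ and $r=2^m-1$, and the right hand side of (\ref{defect}) can be tabulated directly:

```python
def expected_dim_secant(r, m, k):
    # expected dimension of S^k(X): min(r, mk + m + k)
    return min(r, m * k + m + k)

# X = (P^1)^6 in P^63: S^8 is expected to be a hypersurface,
# while S^9 is expected to fill the ambient space.
assert expected_dim_secant(63, 6, 8) == 62
assert expected_dim_secant(63, 6, 9) == 63
```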
\begin{Rmk}
It is clear that $X$ is $k$-identifiable when the projection $AbS^k(X)\to S^k(X)$
is birational. So $X$ cannot be $k$-identifiable when $\dim(AbS^k(X))> s^{(k)}(X)$. In particular, $X$ is not $k$-identifiable when $r<mk+m+k$ or when $X$ is
$k$-defective.
\end{Rmk}
Let $X\subset {\mathbb{P}}^ r$ be a variety. We denote by ${\rm Sing}(X)$ the Zariski-closed
subset of singular points of $X$.
Let $P\in X\setminus{\rm Sing}(X)$ be a smooth point. We
denote by $T_{X,P}$ the embedded tangent space to $X$ at $P$,
which is an $m$--subspace of ${\mathbb{P}}^ r$. More generally, if $P_0,\ldots,P_k$
are smooth points of $X$, we will set
$$T_{X,P_0,\ldots,P_k}=\langle \bigcup_{i=0}^{k} T_{X,P_i} \rangle.$$
The relations between secant varieties and tangent spaces to $X$
are illuminated by Terracini's celebrated Lemma:
\begin{Lem} (see \cite{Terr1} or, for modern versions, \cite{Adl},
\cite{Dale1}, \cite{Zak}).
Given a general point $P\in S^k(X)$, lying in the subspace $\langle P_0,\dots,P_k
\rangle$ spanned by $k+1$ general points on $X$, then the tangent space
$T_{S^{k}(X),P}$ to $S^{k}(X)$ at $P$ is the span $T_{X,P_0,\dots,P_k}$
of the tangent spaces $T_{X,P_0},\dots, T_{X,P_k}$.
\end{Lem}
Using the correspondence between the abstract secant variety and
$S^k(X)$, one obtains from Terracini's Lemma a condition for the defectivity
of $X$:
\begin{Thm}\label{def-weakdef} (See \cite{CC2}, Theorem 2.5)
Let $P_0,\dots ,P_k$ be general points of $X$.
If $H$ is a general hyperplane tangent to $X$ at $P_0,\dots ,P_k$, we can consider
the {\it contact variety} of $H$, i.e. the union $\Sigma$ of the irreducible components of
${\rm Sing}(X\cap H)$. If $X$ is $k$-defective, then $\Sigma$ is positive dimensional.
\end{Thm}
\noindent The previous Theorem suggests a refinement of the notion
of defective variety.
\begin{Def} An irreducible, non--degenerate variety $X\subset{\mathbb{P}}^r$
such that $s^{(k)}(X)<r$ is {\it $k$--weakly defective} if for
$P_0,...,P_k\in X$ general points, the general hyperplane $H$
containing $T_{X,P_0,...,P_k}$ is tangent to $X$ along a variety
$\Sigma(H)$ of positive dimension. $\Sigma(H)$ is called the {\it
$(k+1)$--contact variety} of $H$.
\end{Def}
\noindent It turns out that $k$-defective implies $k$--weakly defective,
but the converse is false. We refer to \cite{CC0} and \cite{CC2}
for a discussion on the subject.
The main link between identifiability and weakly defective varieties
lies in the following:
\begin{Thm}\label{mu1} (See \cite{CC2}, Corollary 2.7)
Let $X\subset{\mathbb{P}}^r$ be an irreducible,
projective, non--degenerate variety of dimension $m$. Assume $
mk+m+k<r$. Then $X$ is $k$-identifiable, unless it is
$k$--weakly defective. \end{Thm}
\begin{Thm}\label{mu2} (See \cite{CC2}, Theorem 2.4)
Let $X\subset{\mathbb{P}}^r$ be an irreducible,
projective, non--degenerate variety of dimension $m$. Assume $
mk+m+k<r$ and assume that $X$ is $k$-weakly defective.
Call $\Sigma$ a general $(k+1)$-th contact variety.
Then, the $k$-th secant order of $\Sigma$ is equal to the $k$-th
secant order of $X$.
\end{Thm}
\noindent Thus, a way to prove that a variety $X$ is $k$-identifiable, at least
when $r\neq mk+m+k$, is to prove that $X$ is not $k$-weakly defective,
or, if it is $k$-weakly defective, that the general contact variety
$\Sigma$ has $k$-th secant order $1$.
\smallskip
The second cornerstone in our theory links $k$-defectivity
and $k$-weak defectivity with
the existence of degenerate subvarieties, passing through $k+1$ general
points in $X$. Namely, if $X$ is $k$-defective or
$k$-weakly defective, then it turns out
that the general contact variety is highly degenerate.
\begin{Thm}\label{wdef} (See \cite{CC2}, Theorem 2.4 and Theorem 2.5)
Assume $mk+m+k<r$.
If $X$ is $k$-weakly defective, then a general contact variety $\Sigma$
spans a linear space of dimension $\leq nk+n+k$, where $n=\dim(\Sigma)$.
Moreover, $X$ is $k$-defective if and only if
$\Sigma$ spans a space of dimension $< nk+n+k$.
\end{Thm}
In conclusion, we obtain:
\begin{Cor} \label{basic}
Assume $r>mk+m+k$. Assume that for all $n=1,\dots,m-1$, there
are no families of $n$-dimensional subvarieties of $X$, whose general element
spans a linear space of dimension $\leq nk+n+k$ and passes through
$k+1$ general points of $X$. Then $X$ is not $k$-weakly defective.
Hence it is $k$-identifiable.
\end{Cor}
This is our starting point. In the next sections, we will obtain the
$k$-identifiability of products of ${\mathbb{P}}^1$'s, for $k$ in our range,
by proving that subvarieties of Segre products
${\mathbb{P}}^1\times\dots\times{\mathbb{P}}^1$ passing through $k+1$ general points cannot
be too degenerate, unless they are formed by a bunch of
independent linear spaces. One should observe that both Corollary \ref{basic} and
the second part of Theorem \ref{mu1} cannot be inverted.
\begin{Ex} The existence of families of degenerate subvarieties
does not guarantee that $X$ is $k$-weakly defective. For instance,
consider $X={\mathbb{P}}^2$ embedded in ${\mathbb{P}}^9$ by the $3$-Veronese
embedding. Then $X$ is not $1$-weakly defective. Indeed,
the general hyperplane tangent to $X$ at two general points cuts
a divisor which corresponds, in ${\mathbb{P}}^2$, to
a general cubic curve with two singular points. Such a cubic splits into
the union of a conic and a line, and it is reduced. On the other hand,
through $2$ general points of $X$ one finds a curve
spanning a space of dimension $1\cdot 1+1+1=3$. Namely, it is
the twisted cubic, image of the line through the two points.
\end{Ex}
\begin{Ex} When $X$ is $k$-weakly defective, it can
be $k$-identifiable as well. This may happen, by \cite{CC2}, Theorem 2.4,
when the contact locus has $k$-th secant order $1$.
Examples of such varieties can be found in \cite{CC2}, Example 3.7,
but they are singular. A smooth example was communicated to us by G. Ottaviani.
Take the Segre embedding of $X={\mathbb{P}}^1\times{\mathbb{P}}^1\times{\mathbb{P}}^2$ in ${\mathbb{P}}^{11}$.
Using a computer-aided procedure, one can find that the general
hyperplane which is tangent to $X$ at two points is indeed tangent
along a twisted cubic. The computation was indeed performed at
two specific points of $X$, but notice that ${\rm Aut}(X)$ acts
transitively on pairs of points. Thus $X$ is $1$-weakly defective.
Since a twisted cubic curve has first secant order equal to $1$
(Example \ref{linsp}),
it turns out by \cite{CC2}, Theorem 2.4, that $X$ is $1$-identifiable.
The $1$-identifiability of $X$ also follows from Kruskal's
identifiability criterion for the product of three projective spaces
(see \cite{K}).
\end{Ex}
As a consequence, one cannot use the converse of the previous argument
to determine the non-identifiability
of a variety $X$, simply by studying degenerate subvarieties.
\begin{Rmk} \label{monod}
The degenerate subvarieties $\Sigma$, whose existence is guaranteed by
Corollary \ref{basic} in any weakly defective variety, are not necessarily
smooth, nor are they necessarily irreducible
(although one can assume that they are reduced).
On the other hand, one can use a monodromy--type argument
(see e.g. \cite{CC1}, Proposition 3.1) in order to show that,
when $\Sigma$ is reducible, all the components are interchanged in a flat
deformation, thus they are general members of a flat family.
This is due to the generality of the points $P_i$'s.
In particular, we may assume that the components share
the same geometrical properties, also with respect to the linear series
induced by the projections to the factors ${\mathbb{P}}^1$.
\end{Rmk}
\section{Proof of the Theorem}
Let us start with a useful Lemma from Linear Algebra:
\begin{Lem}\label{linalg} Let $H_0,\dots,H_k$ be subspaces of
${\mathbb{P}}^s$, such that the sum $H_0+\dots+H_k$ is not direct.
Let $p$ be the dimension of the linear span of the $H_i$'s.
Then, for a general choice of points $P_0,\dots,P_k\in {\mathbb{P}}^r$,
$r\geq 1$, the dimension of the linear span of the spaces
$H_i\times \{P_i\}\subset {\mathbb{P}}^s\times{\mathbb{P}}^r$ is at least $p+1$.
\end{Lem}
\begin{proof} We may assume that $H_0$ meets the span
$L=\langle H_1\cup\dots\cup H_k\rangle$. If $d=\dim(H_0)$
and $e=\dim(L)$, then by assumption $\langle H_0\cup L\rangle$
has dimension at most $d+e$, while for a general choice of the points
$P_0$, $P_1$,
$$\dim\langle (H_0\times P_0)\cup (L\times P_1)\rangle =d+e+1,$$
for the two spaces belong to linearly independent copies
of ${\mathbb{P}}^s$ in the product.
Now, the claim follows by specializing all $P_2,\dots, P_k$ to $P_1$.
\end{proof}
The proof of our Main Theorem now follows readily from the main result of \cite{CGG2}
and from the following general observation.
\begin{Lem} Let $Y\subset {\mathbb{P}}^s$ be a non-degenerate variety of dimension $d$
which is not $k$-defective ($k\geq 1$). Assume $kd+k+d<s$.
Then $X=Y\times {\mathbb{P}}^q$ ($q\geq 1$) is $k$-identifiable.
\end{Lem}
\begin{proof}
If $X$ is $k$-weakly defective, then by Theorem \ref{wdef}
the $(k+1)$-contact locus is a
subvariety $W$ of some dimension $n>0$, contained in ${\mathbb{P}}^{nk+n+k}$,
which passes through $k+1$ general points of $X$.
Assume that such a variety exists. Call $W'\subset Y$ the image of $W$
in the projection $X\to Y$ and call $n'=\dim(W')$.
Since $W'$ passes through $k+1$ general points of $Y$, and $Y$
is not $k$-defective, then the span of $W'$ has dimension at least
$n'k+n'+k$, by the second part of Theorem \ref{wdef}.
Now, notice that the fibers of the projection $W\to W'$ span a space of
dimension at least $n-n'$ in ${\mathbb{P}}^q$. It follows, by Linear Algebra,
that $W$ spans a space of dimension at least $(n-n'+1)(n'k+n'+k)$.
Now, we have to study several cases.
Assume $0<n'<n$. Then $n'(n-n')\geq n-n'$, so that $(n-n'+1)n'\geq n$.
Moreover $(n-n'+1)k>k$. It follows that
$(n-n'+1)(n'k+n'+k)$ is bigger than $nk+n+k$, so we get a
contradiction.
Assume $n=n'>0$. By construction, the linear series $L$ which sends $W$ to
${\mathbb{P}}^s$, passing through the embedding $W\subset X$ and the projection
$X\to Y$, has dimension equal to that of the span of $W'$. Hence it has
dimension $kn+n+k$, in our case. Call $L'$ the linear series defining
the projection $W\to {\mathbb{P}}^q$. If the image of $W$ in ${\mathbb{P}}^q$ has dimension
at least $1$, then the embedding of $W$ in ${\mathbb{P}}^s\times{\mathbb{P}}^q$ is given
by a series $L+L'$, whose dimension is at least $\dim(L)+1=nk+n+k+1$.
It follows that $W$ spans a space of dimension at least $nk+n+k+1$,
a contradiction. Since $W$ passes through $k+1\geq 2$ points of $X$,
its image into ${\mathbb{P}}^q$ can be trivial only when $W$ is given by
$k+1$ components, $W=W_0\cup\dots\cup W_k$ and each $W_i$ is contained
in a fiber of the projection $X\to {\mathbb{P}}^q$.
By monodromy (Remark \ref{monod}), each $W_i$
has the same dimension $n$ and spans a space of the same dimension
$q'$, in the fiber. Call $H_0,\dots, H_k$ the projections of these
spaces to ${\mathbb{P}}^s$. Since $W'$ spans a space of dimension
$nk+n+k$, then the $H_i$'s span a space of the same dimension.
Thus, if the $H_i$'s are not linearly independent, then
$W$ spans a space of dimension at least $nk+n+k+1$, by Lemma
\ref{linalg}, a contradiction.
It follows that the span of the $H_i$'s has dimension
$nk+n+k=q'k+q'+k$, so that $q'=n$. This means that
each $W_i$ projects to a subspace $H_i$ of dimension $n$
in ${\mathbb{P}}^s$. Since each $W_i$ sits in a fiber of $X\to {\mathbb{P}}^q$,
this implies that each $W_i$ is linear,
and these subspaces are independent.
Assume $n'=0$. Then necessarily $W$ consists of $k+1$ components
$W=W_0\cup\dots\cup W_k$, each $W_i$ being contained in a fiber of
the projection. As above, it turns out that $W$ spans a space of dimension
$kq'+q'+k$. Thus $q'>n$ yields a contradiction. Hence $q'=n$, so that every
$W_i$ is linear.
It follows, from the previous analysis, that necessarily
$W$ is the union of $k+1$ linearly
independent subspaces of dimension $n$.
We get then that either $X$ is not $k$-weakly defective, so it is $k$-identifiable by
Theorem \ref{mu1}, or it is $k$-weakly defective, and the $(k+1)$-contact variety
$W$ is the union of $k+1$ linearly independent subspaces of dimension $n$.
In the latter case, the $k$-th secant order of $W$ is known to be $1$
(see Example \ref{linsp}).
Thus $X$ is $k$-identifiable, by Theorem \ref{mu2}.
\end{proof}
{\bf Proof of the main Theorem}
Let $k$ be any positive integer such that $(k+1)m<2^{m-1}-1$,
so that the $k$-secant variety of $({\mathbb{P}}^1)^{m-1}$ cannot span ${\mathbb{P}}^{2^{m-1}-1}$.
By the main result of \cite{CGG2}, $({\mathbb{P}}^1)^{m-1}$ is not $k$-defective.
Then the previous Lemma implies that $({\mathbb{P}}^1)^m$ is $k$-identifiable.
\hfill\qed
\vskip0.3cm
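The inequality bookkeeping in this proof is easy to check numerically. The following sketch (plain Python; the function names are ours) verifies, for several small $m$, that every $k$ admitted by the Theorem satisfies the hypothesis $kd+k+d<s$ of the previous Lemma, with $d=m-1$ and $s=2^{m-1}-1$; note that $kd+k+d=m(k+1)-1$, so the condition is exactly $m(k+1)<2^{m-1}$:

```python
def theorem_ks(m):
    # k admitted by the Main Theorem: k + 1 <= 2^(m-1) / m
    return range(1, 2**(m - 1) // m)

def lemma_hypothesis(m, k):
    # hypothesis kd + k + d < s of the Lemma, with d = m - 1
    # and s = 2^(m-1) - 1; equivalently m(k+1) < 2^(m-1)
    d, s = m - 1, 2**(m - 1) - 1
    return k * d + k + d < s

# When m is a power of two, the top value of k lands exactly on the
# boundary m(k+1) = 2^(m-1); for the other small m the hypothesis
# holds throughout the Theorem's range.
for m in (6, 7, 9, 10, 11, 12):
    assert all(lemma_hypothesis(m, k) for k in theorem_ks(m))
```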
Indeed, the previous Lemma also proves the following stronger
statement:
\begin{Thm}
Let $X$ be a product of $m>5$ copies of ${\mathbb{P}}^1$, embedded in the projective
space ${\mathbb{P}}^r$, $r=2^m-1$, by the standard Segre embedding.
If $k+1\leq 2^{m-1}/m$, then $X$ is not $k$-weakly defective.
\end{Thm}
\begin{proof} We know from the proof of the previous Lemma that
if $X$ is $k$-weakly defective, then a general hyperplane $H$ tangent at
$k+1$ general points $P_0,\dots, P_k$ of $X$, is
tangent along a union of linear spaces.
Thus it can only be tangent along fibers of some projection $X\to({\mathbb{P}}^1)^{m-1}$,
because the product does not contain other lines.
This is clear for ${\mathbb{P}}^1\times{\mathbb{P}}^1$, while for higher
dimensional products one can argue by induction, on
some projection $({\mathbb{P}}^1)^m\to({\mathbb{P}}^1)^{m-1}$.
Thus, by symmetry, $X$ can be $k$-weakly defective
only when a general $(k+1)$-tangent hyperplane $H$ is tangent along
all the fibers passing through $P_0,\dots, P_k$.
But then, for a general choice of a point $Q$ in some
fiber passing through $P_0$, a general hyperplane tangent
to $P_0,P_1,\dots, P_k$ is also tangent at $Q,P_1,\dots,P_k$.
By the same argument, it is also tangent along any fiber
passing through $Q$.
Arguing again in this way, we get that a general $H$ must
be tangent to $X$ at (thus must contain) every point of $X$,
an obvious contradiction.
\end{proof}
\section{Results for small $m$}
\begin{Prop} The product $X$ of $5$ copies of ${\mathbb{P}}^1$ is not $4$-identifiable.
Through a general point of $S^4(X)$ one finds exactly two $5$-secant $4$-spaces.
\end{Prop}
\begin{proof} Indeed, we prove that through $5$ general points of
$X$ one can find an irreducible elliptic normal curve $C\subset {\mathbb{P}}^9$,
contained in $X$. Since a general point of the ${\mathbb{P}}^9$ spanned
by $C$ sits in exactly two subspaces of dimension $4$, $5$-secant to
an irreducible elliptic normal curve
(by \cite{CC2} Proposition 5.2), it follows that the $4$-th secant order
of $X$ is at least $2$. In particular, $X$ is $4$-weakly defective,
by \cite{CC2}, Proposition 2.7, and the $4$-th contact locus contains an elliptic
normal curve such as $C$. A computer-aided computation, at
$5$ specific points of $X$, proves that indeed the $5$-contact locus of
$X$ is exactly an irreducible elliptic normal curve of degree $12$.
The computation has been performed with the Macaulay2 Computer
Algebra package \cite{GS}, with the script described in \cite{BC}.
Thus the $4$-th secant order of $X$ is $2$
(by Theorem \ref{mu2}) and the claim is proved.
To prove the existence of the curve $C$ passing through $5$
general points $P_0,\dots,P_4$ of $X$, we start with the product of three lines $X'={\mathbb{P}}^1
\times{\mathbb{P}}^1\times{\mathbb{P}}^1$. Through the $5$ points $P'_0,\dots,P'_4\in X'$,
projection of the $P_i$'s, one can find a $2$-dimensional
family $\mathcal F$ of elliptic normal curves
$C'$ of degree $6$. Indeed $X'\subset{\mathbb{P}}^5$ is a sextic threefold with elliptic
curve sections, and there is a $2$-dimensional family of hyperplanes passing
through $5$ general points of $X'$. $\mathcal F$ is parametrized by
points of some plane $\Pi$, obtained by projecting ${\mathbb{P}}^5$ from
the span of the $P'_i$'s.
Consider now the product $X''$ of the two remaining copies of ${\mathbb{P}}^1$,
so that $X=X'\times X''$. We also get $5$ distinguished general points
$P''_0,\dots, P''_4\in X''$. For any curve $C'$ of the family $\mathcal F$, we have
a $7$-dimensional family of embeddings $C'\to X''$. Thus, adding
the automorphisms of $C'$, for $C'\in\mathcal F$
general, we may assume that each $P'_i$, $i=1,\dots,4$,
goes to the corresponding $P''_i$. The condition that $P'_0$ goes to $P''_0$
determines two algebraic conditions on the family, hence two algebraic curves on
$\Pi$. Thus, there is at least one curve $C'$ of the family, for which
$P'_0$ goes to $P''_0$. This determines an elliptic normal curve $C$ in $X$,
passing through the $5$ given general points $P_i$'s.
The fact that $C$ is irreducible, for a general choice
of the points, follows from the computer-aided computation on a specific example.
\end{proof}
For $6$ copies of ${\mathbb{P}}^1$, the maximal value for which $k$-identifiability
makes sense is $k_{max}=\lfloor 2^6/7\rfloor -1=8$.
Our result gives that the product $X$ of $6$ copies of
${\mathbb{P}}^1$ is $k$-identifiable, for $k=1,\dots, 4$.
The $k$-identifiability of $({\mathbb{P}}^1)^6$, for $k=5,\dots,8$, can be directly
checked by a computer-aided procedure.
Indeed, the following observation reduces our problem to checking
only what happens for the maximal number $k$
such that $mk+m+k<r$.
\begin{Prop}\label{maxk}
Assume that $km+m+k<r$ and $X$ is not $k$-weakly defective. Then
$X$ is not $(k-1)$--weakly defective.
\end{Prop}
\begin{proof}
Fix $k+1$ general points $P_0,\dots,P_k\in X$. The family of hyperplanes
containing the tangent space $T_{X,P_1,\dots,P_k}$ is irreducible,
so a general hyperplane tangent to $X$ at $P_0,\dots,P_k$ is the limit
of a family of hyperplanes tangent at $P_1,\dots,P_k$. Since the general
element of this last family has a zero dimensional contact locus,
the claim follows.
\end{proof}
Now, by Corollary \ref{basic}, it is enough to check that
some hyperplane tangent to $X$ at some points $P_0,\dots, P_k$
is in fact tangent only at those $k+1$ points.
Using this procedure with $9$ points of $({\mathbb{P}}^1)^6$, a computer-aided
computation, using the script in \cite{BC},
proves the following:
\begin{Prop} \label{sei} For $m=6$ and for all $k\leq k_{max}= 8$,
the product $X$ of $6$ copies of ${\mathbb{P}}^1$ is $k$-identifiable.
\end{Prop}
With more advanced technical equipment, one could surely
analyze products with more copies of ${\mathbb{P}}^1$.
Nevertheless, Proposition \ref{sei} already provides initial
evidence for our Conjecture \ref{conge}.
https://arxiv.org/abs/1906.05523 | New constructions of asymptotically optimal codebooks via character sums over a local ring | In this paper, we present explicit description on the additive characters, multiplicative characters and Gauss sums over a local ring. As an application, based on the additive characters and multiplicative characters satisfying certain conditions, two new constructions of complex codebooks over a local ring are introduced. With these two constructions, we obtain two families of codebooks achieving asymptotically optimal with respect to the Welch bound. It's worth mentioning that the codebooks constructed in this paper have new parameters. | \section{Introduction}
Let $C=\{\textbf{c}_0,\textbf{c}_1,\cdots, \textbf{c}_{N-1}\}$ be a set of $N$ unit-norm complex vectors $\textbf{c}_l\in \mathbb{C}^K$ over an alphabet $A$, where $l=0, 1,\cdots , N-1$. The size of $A$ is called the alphabet size of $C$. Such a set $C$ is called an $(N, K)$ codebook (also called a signal set). The maximum cross-correlation amplitude of the $(N, K)$ codebook $C$, which is a key performance measure in practical applications, is defined as
\begin{eqnarray*}
I_{\max}(C) &=& \max_{0\leq i<j\leq N-1} |\textbf{c}_i\textbf{c}_j^H|,
\end{eqnarray*}
where $\textbf{c}_j^H$ denotes the conjugate transpose of the complex vector $\textbf{c}_j$. For a given length $K$, it is desirable to design a codebook such that the number $N$ of codewords is as large as possible and the maximum cross-correlation amplitude $I_{\max}(C)$ is as small as possible. To evaluate a codebook $C$ with parameters $(N, K)$, it is important to find the minimum achievable $I_{\max}(C)$ or a lower bound on it. For $I_{\max}(C)$, we have the following well-known Welch bound.
\begin{lem}\label{lem1} \cite{LW} For any $(N, K)$ codebook $C$ with $N\geq K$,
\begin{eqnarray}\label{eq1}
I_{\max}(C) &\geq& I_w=\sqrt{\frac{N-K}{(N-1)K}}.
\end{eqnarray}
Furthermore, the equality in (\ref{eq1}) is achieved if and only if
\begin{eqnarray*}
|\textbf{c}_i\textbf{c}_j^H| &=& \sqrt{\frac{N-K}{(N-1)K}}
\end{eqnarray*}
for all pairs $(i, j)$ with $i\neq j$.
\end{lem}
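As a concrete illustration (our example, not taken from the paper), the real ``Mercedes--Benz'' frame of $N=3$ unit vectors in dimension $K=2$ attains the Welch bound: $I_w=\sqrt{(3-2)/((3-1)\cdot 2)}=1/2$, and every pairwise inner product has absolute value $1/2$. A minimal check in plain Python:

```python
import math

def welch_bound(N, K):
    # Welch bound I_w = sqrt((N-K)/((N-1)K)) for an (N, K) codebook
    return math.sqrt((N - K) / ((N - 1) * K))

# Mercedes-Benz frame: 3 unit vectors in R^2 at mutual angle 120 degrees
frame = [(math.cos(2 * math.pi * l / 3), math.sin(2 * math.pi * l / 3))
         for l in range(3)]
I_max = max(abs(u[0] * v[0] + u[1] * v[1])
            for i, u in enumerate(frame) for v in frame[i + 1:])

assert abs(I_max - welch_bound(3, 2)) < 1e-12  # bound met: an MWBE codebook
```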
A codebook is referred to as a maximum-Welch-bound-equality (MWBE) codebook \cite{DS} or an equiangular tight frame \cite{JK} if it meets the Welch bound (\ref{eq1}) with equality. Codebooks meeting the Welch bound are used to distinguish among the signals of different users in code-division multiple-access (CDMA) systems \cite{MM}. In addition, codebooks that are optimal (or asymptotically optimal) with respect to the Welch bound are much preferred in many practical applications, such as multiple description coding over erasure channels \cite{SH}, communications \cite{DS}, compressed sensing \cite{CW}, space-time codes \cite{TK}, coding theory \cite{DGS} and quantum computing \cite{RBSC}. In general, it is very difficult to construct optimal codebooks achieving the Welch bound (i.e. MWBE codebooks). Hence, many researchers have attempted to construct asymptotically optimal codebooks, i.e., codebooks $C$ whose minimum achievable $I_{\max}(C)$ nearly achieves the Welch bound for large $N$. There are many results on codebooks that are optimal or almost optimal with respect to the Welch bound; interested readers may refer to [1, 2, 4-6, 9-12, 14-16, 18, 28, 29]. The construction method of codebooks is important: to date, many researchers have constructed codebooks based on difference sets, almost difference sets, relative difference sets, binary row selection sequences and cyclotomic classes.
The additive characters, multiplicative characters and Gauss sums over finite fields, together with their good properties, are well known
\cite[Chapter 5]{LNC}. In particular, they have many rich applications in coding theory. It is worth mentioning that some researchers have constructed codebooks by using character sums over finite fields \cite{DF, LC, ZF}. Later, G. Luo and X. Cao proposed two constructions of complex codebooks from character sums over the Galois ring $GR(p^2, r)$ in \cite{LC2}, based on existing results \cite{LZF}. Many scholars have also studied local rings \cite{GSF, LL, SWLS}. Motivated by \cite{LC2} and \cite{LZF}, a natural question arises: after exploring the character sums over the ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$, is it possible to construct codebooks over the ring $R$ based on these character sums and obtain several classes of asymptotically optimal codebooks with respect to the Welch bound?
This paper gives a positive answer to this question. This manuscript has three main contributions. The first contribution is to give an explicit description of the additive characters and multiplicative characters, and to establish a Gauss sum, over a local ring for the first time. The second contribution is to construct codebooks over the ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$ by using these character sums. Finally, we show that the maximum cross-correlation amplitudes $I_{\max}(C)$ of these codebooks asymptotically meet the Welch bound, and that the codebooks have new parameters compared with known classes of asymptotically optimal codebooks.
The rest of this paper is arranged as follows. Section 2 presents some notations and basic results which will be
needed in subsequent sections. In Section 3, we give an explicit description of the additive and multiplicative characters of a local ring. In Section 4, we compute Gauss sums over a local ring. Section 5 introduces two generic families of codebooks that are asymptotically optimal with respect to the Welch bound. In Section 6, we conclude this paper and present several open problems.
\section{Preliminaries}
Let $\mathbb{F}_q$ denote the finite field with $q$ elements and $q=p^m$, where $p$ is a prime and $m$ is a positive integer. We consider the chain ring $R=\mathbb{F}_q+u\mathbb{F}_q=\{a+bu: a, b\in \mathbb{F}_q\} (u^2=0)$ with the unique maximal ideal $M=\langle u\rangle$. In fact, $R=\mathbb{F}_q\oplus u\mathbb{F}_q \simeq \mathbb{F}_q^2$ is a two-dimensional vector space over $\mathbb{F}_q$ and $|R|=q^2.$ The set of invertible elements of $R$ is $$R^*=R\backslash M=\mathbb{F}_q^*+u\mathbb{F}_q=\{a+bu: a\in \mathbb{F}_q^*, b\in \mathbb{F}_q\}$$ with $|R^*|=q(q-1)$. In fact, $R^*$ can also be represented as $\mathbb{F}_q^*\times (1+M)~~({\rm direct~product}).$
A character $\chi$ of a finite abelian group $G$ is a homomorphism from $G$ into the multiplicative group $U$ of complex numbers of absolute value 1, that is, a mapping from $G$ into $U$ with $\chi(g_1g_2)=\chi(g_1)\chi(g_2)$ for all $g_1, g_2\in G.$ Next, we recall the additive characters and multiplicative characters of the finite field $\mathbb{F}_q$.
$\bullet$ The additive character $\chi$ of $\mathbb{F}_q$ is defined by $$\chi(c)=e^{\frac{2\pi i{\rm Tr}(c)}{p}}$$ for all $c\in \mathbb{F}_q$, where Tr: $\mathbb{F}_q\longrightarrow \mathbb{F}_p$ is the absolute trace function from $\mathbb{F}_q$ to $\mathbb{F}_p$~(see Definition 2.22 in \cite{LNC}). For any $c_1, c_2\in \mathbb{F}_q$, we have
\begin{eqnarray}\label{den2}
\chi(c_1+c_2) &=&\chi(c_1)\chi(c_2).
\end{eqnarray}
Moreover, for $b\in \mathbb{F}_q$, the function $\chi_b$ is defined as $\chi_b(c)=\chi(bc)$ for all $c\in \mathbb{F}_q$.\\
$\bullet$ The multiplicative character $\psi_j$ of $\mathbb{F}_q$ is defined by $$\psi_j(g^k)=e^{\frac{2\pi ijk}{q-1}}$$ for each $j=0,1,\cdots, q-2$, where $k=0,1,\cdots, q-2$ and $g$ is a fixed primitive element of $\mathbb{F}_q$. For any $c_1, c_2\in \mathbb{F}_q^*$, we have
\begin{eqnarray}\label{den3}
\psi_j(c_1c_2) &=&\psi_j(c_1)\psi_j(c_2).
\end{eqnarray}
Now, let $\psi$ be a multiplicative character and $\chi$ an additive character of $\mathbb{F}_q$. Then the Gauss sum $G(\psi, \chi)$ of $\mathbb{F}_q$ is defined by
\begin{eqnarray*}
G(\psi, \chi)&=&\sum\limits_{c\in \mathbb{F}_q^*}\psi(c)\chi(c).
\end{eqnarray*}
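For a quick numerical illustration, the following Python sketch evaluates $G(\psi, \chi)$ over a prime field $\mathbb{F}_p$, where the trace map is the identity. The prime $p=7$ and the primitive root $g=3$ are demo choices, not taken from the text.

```python
import cmath
import math

p = 7   # demo prime; over F_p the trace map is the identity
g = 3   # a primitive root modulo 7

# discrete logarithm table: c = g^k (mod p)  ->  dlog[c] = k
dlog, x = {}, 1
for k in range(p - 1):
    dlog[x] = k
    x = (x * g) % p

def chi(b, c):
    """Additive character chi_b(c) = e^{2 pi i * b*c / p}."""
    return cmath.exp(2j * cmath.pi * ((b * c) % p) / p)

def psi(j, c):
    """Multiplicative character psi_j(g^k) = e^{2 pi i * j*k / (p-1)}."""
    return cmath.exp(2j * cmath.pi * ((j * dlog[c]) % (p - 1)) / (p - 1))

def gauss_sum(j, b):
    """G(psi_j, chi_b) = sum over c in F_p^* of psi_j(c) chi_b(c)."""
    return sum(psi(j, c) * chi(b, c) for c in range(1, p))

print(abs(gauss_sum(1, 1)))  # sqrt(7) for nontrivial psi and chi_b
print(gauss_sum(0, 1))       # -1 for trivial psi, nontrivial chi_b
```

The two printed values reproduce the classical facts $|G(\psi, \chi_b)|=q^{1/2}$ for nontrivial $\psi$ and $\chi_b$, and $G(\psi_0, \chi_b)=-1$ for $b\neq 0$.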
We now turn to the additive and multiplicative characters of the local ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$. These characters can be described in detail in terms of the characters of the finite field $\mathbb{F}_q$, and the explicit descriptions we present satisfy properties analogous to the equalities (\ref{den2}) and (\ref{den3}), respectively. In addition, we express the Gauss sum of $R$ through the Gauss sum of $\mathbb{F}_q$. Hence, based on the characters of finite fields, we present the additive and multiplicative characters of $R$ in the following section and propose the Gauss sum of $R$ in Section 4.
\section{Characters}
In this section, we will give the additive characters and multiplicative characters of $R$.\\\\
$\blacktriangle$\textbf{ \large Additive characters of $R$}
The group of additive characters of $(R, +)$ is
$$\widehat{R}:=\{\lambda: R\longrightarrow \mathbb{C}^*| \lambda(\alpha+\beta)=\lambda(\alpha)\lambda(\beta), \alpha, \beta \in R\}.$$
Let $$\lambda: R \longrightarrow\mathbb{C}^*$$ be an additive character of $R$. Since $\lambda(a_0+ua_1)=\lambda(a_0)\lambda(ua_1)$ for any $a_0, a_1\in \mathbb{F}_q,$ we define two maps as follows:
\begin{itemize}
\item $$\lambda^{'}: \mathbb{F}_q \longrightarrow \mathbb{C}^*$$ by $\lambda^{'}(c):=\lambda(c)$ for $c\in \mathbb{F}_q.$
\item $$\lambda^{''}: \mathbb{F}_q \longrightarrow \mathbb{C}^*$$ by $\lambda^{''}(c):=\lambda(uc)$ for $c\in \mathbb{F}_q.$
\end{itemize}
Therefore, it is easy to prove that $\lambda^{'}(c_1+c_2)=\lambda^{'}(c_1)\lambda^{'}(c_2)$ and $\lambda^{''}(c_1+c_2)=\lambda^{''}(c_1)\lambda^{''}(c_2)$ for $c_1, c_2 \in \mathbb{F}_q.$ Hence $\lambda^{'}$ and $\lambda^{''}$ are additive characters of $(\mathbb{F}_q, +)$, so there exist $b, c \in \mathbb{F}_q$ such that $$\lambda^{'}(x)=\zeta_p^{{\rm Tr}(bx)}=\chi_{b}(x), \lambda^{''}(x)=\zeta_p^{{\rm Tr}(cx)}=\chi_{c}(x)$$ for all $x\in \mathbb{F}_q$, where $\zeta_p=e^{\frac{2\pi i}{p}}$ is a primitive $p$th root of unity. Hence, the additive character of $R$ takes the form
\begin{eqnarray*}
\lambda(a_0+ua_1) &=& \lambda'(a_0)\lambda''(a_1)\\
&=& \chi_{b}(a_0)\chi_{c}(a_1).
\end{eqnarray*}
Thus, there is a one-to-one correspondence:
\begin{eqnarray*}
\tau : \widehat{(R,+)} &\longrightarrow& \widehat{(\mathbb{F}_q,+)}\times \widehat{(\mathbb{F}_q,+)},\\
\lambda &\longmapsto& (\chi_b, \chi_c).
\end{eqnarray*}
It is easy to prove that the mapping $\tau$ is an isomorphism.\\
$\blacktriangle$\textbf{ \large Multiplicative characters of $R$}
The structure of the multiplicative group $R^*$ is $$R^*=\mathbb{F}_q^*\times (1+M)~~({\rm direct~product}).$$ Now, we have
\begin{eqnarray*}
R^* &=& \{a_0+ua_1: a_0\in \mathbb{F}_q^*, a_1\in \mathbb{F}_q\} \\
&=& \{b_0(1+ub_1): b_0\in \mathbb{F}_q^*, b_1\in \mathbb{F}_q\}.
\end{eqnarray*}
The group of multiplicative characters of $R$ is denoted by $\widehat{R}^*$, where $$\widehat{R}^*:=\{\varphi: R^*\longrightarrow \mathbb{C}^*| \varphi(\alpha\beta)=\varphi(\alpha)\varphi(\beta), \alpha, \beta \in R^*\},$$ and $\widehat{R}^*=\widehat{\mathbb{F}}_q^*\times\widehat{(1+M)}$.
Let $$\varphi: R^* \longrightarrow\mathbb{C}^*$$ be a multiplicative character of $R$. Since $\varphi(b_0(1+ub_1))=\varphi(b_0)\varphi(1+ub_1)$ for any $b_0\in \mathbb{F}_q^*, b_1\in \mathbb{F}_q,$ we define two maps as follows:
\begin{itemize}
\item $$\varphi^{'}: \mathbb{F}_q^*\longrightarrow \mathbb{C}^*$$ by $\varphi^{'}(c):=\varphi(c)$ for $c\in \mathbb{F}_q^*.$
\item $$\varphi^{''}: \mathbb{F}_q\longrightarrow \mathbb{C}^*$$ by $\varphi^{''}(c):=\varphi(1+uc)$ for $c\in \mathbb{F}_q.$
\end{itemize}
For any $c_1, c_2 \in \mathbb{F}_q^*$, we have $\varphi'(c_1c_2)=\varphi'(c_1)\varphi'(c_2)$ and
\begin{eqnarray*}
\varphi''(c_1+c_2)&=& \varphi(1+u(c_1+c_2)) \\
&=& \varphi((1+uc_1)(1+uc_2)) \\
&=& \varphi(1+uc_1)\varphi(1+uc_2)\\
&=&\varphi''(c_1)\varphi''(c_2).
\end{eqnarray*}
Based on this, we obtain that $\varphi'$ is a multiplicative character of $\mathbb{F}_q$ and $\varphi''$ is an additive character of $\mathbb{F}_q$. Hence, the multiplicative character of $R$ takes the form $$\varphi(b_0(1+ub_1))=\varphi'(b_0)\varphi''(b_1),$$ where $\varphi'\in \widehat{\mathbb{F}}_q^*$ and $\varphi''\in \widehat{\mathbb{F}}_q.$ Since $\varphi''$ is an additive character of $\mathbb{F}_q$, there exists $a\in \mathbb{F}_q$ such that $\varphi''=\chi_a.$
Moreover, we have
\begin{eqnarray*}
\sigma : \widehat{(R^*,\ast)} &\longrightarrow& \widehat{\mathbb({\mathbb{F}}_q^*,\ast)}\times \widehat{(\mathbb{F}_q,+)}, \\
\varphi &\longmapsto& (\psi, \chi_a),
\end{eqnarray*}
where $\psi=\varphi'$ is a multiplicative character of $\mathbb{F}_q$. One can show that the mapping $\sigma$ is an isomorphism.
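As a sanity check, the following Python sketch verifies the multiplicativity $\varphi(st)=\varphi(s)\varphi(t)$ numerically, using the factorisation $s_0(1+us_1)\cdot t_0(1+ut_1)=s_0t_0\bigl(1+u(s_1+t_1)\bigr)$ in $R$. The prime $p=7$, the primitive root $g=3$ and the sample elements are demo assumptions.

```python
import cmath

p = 7   # demo prime, so F_q = F_p
g = 3   # a primitive root modulo 7

dlog, x = {}, 1
for k in range(p - 1):
    dlog[x] = k
    x = (x * g) % p

def chi(a, c):
    """Additive character chi_a(c) of F_p."""
    return cmath.exp(2j * cmath.pi * ((a * c) % p) / p)

def psi(j, c):
    """Multiplicative character psi_j of F_p."""
    return cmath.exp(2j * cmath.pi * ((j * dlog[c]) % (p - 1)) / (p - 1))

def phi(j, a, t):
    """phi(t0(1+u*t1)) = psi_j(t0) * chi_a(t1)."""
    t0, t1 = t
    return psi(j, t0) * chi(a, t1)

def mult(s, t):
    """Product in R^*: s0(1+u*s1) * t0(1+u*t1) = s0*t0*(1+u*(s1+t1)), u^2 = 0."""
    return ((s[0] * t[0]) % p, (s[1] + t[1]) % p)

s, t = (2, 5), (4, 3)   # demo elements of R^*, written as (t0, t1)
lhs = phi(1, 2, mult(s, t))
rhs = phi(1, 2, s) * phi(1, 2, t)
print(abs(lhs - rhs))   # ~ 0: phi is multiplicative
```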
\section{Gaussian sums}
Let $\lambda$ and $\varphi$ be an additive character and a multiplicative character of $R$, respectively. The Gaussian sum
for $\lambda$ and $\varphi$ of $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$ is defined by
\begin{eqnarray*}
G_R(\varphi, \lambda) &=&\sum\limits_{t\in R^*}\varphi(t)\lambda(t).
\end{eqnarray*}
In this section, we calculate the value of $G_R(\varphi, \lambda)$. For convenience, we denote $\varphi:=\psi\star\chi_a~({\rm namely,}~\varphi(t)=\psi(t_0)\chi_a(t_1)), \lambda:=\chi_b\star\chi_c~({\rm namely,}~\lambda(t)=\chi_b(t_0)\chi_c(t_0t_1))$ according to Section 3, where $a, b, c\in \mathbb{F}_q$ and $t=t_0(1+ut_1)\in R^*.$ Hence, we denote $G_R(\varphi, \lambda):=G(\psi\star\chi_a, \chi_b\star\chi_c)$.
\begin{thm}\label{thm1}
Let $\varphi$ be a multiplicative character and $\lambda$ be an additive character of $R$, where $\varphi:=\psi\star\chi_a, \lambda:=\chi_b\star\chi_c$ and $a, b, c\in \mathbb{F}_q.$ Then the Gaussian sum $G_R(\varphi, \lambda)$ satisfies
\begin{equation*}\label{den1}
G_R(\varphi, \lambda)=\begin{cases}
\emph{ }qG(\psi, \chi_b), ~~~~~~~~~{\rm if}~a=0, c=0;\\
\emph{ }0, ~~~~~~~~~~~~~~~~~~~{\rm if}~a=0, c\neq0; \\
\emph{ }0, ~~~~~~~~~~~~~~~~~~~{\rm if}~a\neq0, c=0; \\
\emph{ }q\psi(-\frac{a}{c})\chi(-\frac{ab}{c}), ~~~{\rm if}~a\neq0, c\neq 0, \\
\end{cases}
\end{equation*}
where
\begin{equation*}
G(\psi, \chi_b)=\begin{cases}
\emph{ }q-1, ~~~~{\rm if}~\psi~{\rm is~trivial}, b=0;\\
\emph{ }-1, ~~~~~{\rm if}~\psi~{\rm is~trivial}, b\neq0; \\
\emph{ }0,~~~~~~~~~{\rm if}~\psi~{\rm is~nontrivial}, b=0.\\
\end{cases}
\end{equation*}If $\psi$ is nontrivial and $b\neq0$, then $|G(\psi, \chi_b)|=q^\frac{1}{2}$.
\end{thm}
\begin{proof} Now, let $\varphi:=\psi\star\chi_a$ and $\lambda:=\chi_b\star\chi_c$ with $a,b,c\in \mathbb{F}_q.$ Assume that $t=t_0(1+ut_1)$, where $t_0\in \mathbb{F}_q^*$ and $t_1\in \mathbb{F}_q.$
\begin{eqnarray*}
G_R(\varphi, \lambda) &=& \sum\limits_{t\in R^*}\varphi(t)\lambda(t)\\
&=& \sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\varphi(t_0(1+ut_1))\lambda(t_0(1+ut_1)) \\
&=& \sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi(t_0)\chi_a(t_1)\chi_b(t_0)\chi_c(t_0t_1)\\
&=& \sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi(t_0)\chi(at_1+bt_0+ct_0t_1)\\
&=& \sum\limits_{t_0\in \mathbb{F}_q^*}\psi(t_0)\chi(bt_0)\sum\limits_{t_1\in \mathbb{F}_q}\chi((a+ct_0)t_1)\\
&=& q\sum\limits_{t_0\in \mathbb{F}_q^*, a+ct_0=0}\psi(t_0)\chi(bt_0)\\
&=&\begin{cases}
\emph{ }qG(\psi, \chi_b), ~~~~~~~~~{\rm if}~a=0, c=0;\\
\emph{ }0, ~~~~~~~~~~~~~~~~~~~~{\rm if}~a=0, c\neq0; \\
\emph{ }0, ~~~~~~~~~~~~~~~~~~~~{\rm if}~a\neq0, c=0; \\
\emph{ }q\psi(-\frac{a}{c})\chi(-\frac{ab}{c}),~~{\rm if}~a\neq0, c\neq 0, \\
\end{cases}
\end{eqnarray*}
where $G(\psi, \chi_b)$ is a Gaussian sum of $\mathbb{F}_q.$
\end{proof}
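The case $a\neq0$, $c\neq0$ of Theorem \ref{thm1} can be checked by brute force. The following Python sketch does this over a prime field ($q=p=7$, primitive root $g=3$, and the parameters $j,a,b,c$ are demo choices).

```python
import cmath

p = 7   # demo prime; q = p, so the trace map is the identity
g = 3   # a primitive root modulo 7

dlog, x = {}, 1
for k in range(p - 1):
    dlog[x] = k
    x = (x * g) % p

def chi(c):
    """Canonical additive character of F_p."""
    return cmath.exp(2j * cmath.pi * (c % p) / p)

def psi(j, c):
    """Multiplicative character psi_j of F_p."""
    return cmath.exp(2j * cmath.pi * ((j * dlog[c]) % (p - 1)) / (p - 1))

def G_R(j, a, b, c):
    """Gaussian sum over R^*: sum over t = t0(1+u*t1), t0 in F_p^*, t1 in F_p."""
    return sum(psi(j, t0) * chi(a * t1 + b * t0 + c * t0 * t1)
               for t0 in range(1, p) for t1 in range(p))

j, a, b, c = 1, 2, 3, 4                  # demo parameters with a != 0, c != 0
cinv = pow(c, -1, p)                     # c^{-1} in F_p
predicted = p * psi(j, (-a * cinv) % p) * chi(-a * b * cinv)
print(abs(G_R(j, a, b, c) - predicted))  # ~ 0, matching the last case of the theorem
```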
\section{Two families of asymptotically optimal codebooks}
In this section, we study two classes of codebooks asymptotically
achieving the Welch bound by using character sums over the local ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$. Note that $|R^*|=q(q-1)$; we set $K=q(q-1)$. Let $\varphi:=\psi\star\chi_a$ and $\lambda:=\chi_b\star\chi_c$ with $a,b,c\in \mathbb{F}_q.$ Write $t=t_0(1+ut_1)$, where $t_0\in \mathbb{F}_q^*$ and $t_1\in \mathbb{F}_q.$
Then we can define a set $C_0(R)$ of length $K$ as
\begin{eqnarray*}
C_0(R) &=& \{\frac{1}{\sqrt{K}}(\varphi(t)\lambda(t))_{t\in R^*}, \varphi\in \widehat{R}^*, \lambda\in \widehat{R}\} \\
&=& \{\frac{1}{\sqrt{K}}(\psi(t_0)\chi_a(t_1)\chi_b(t_0)\chi_c(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}, \psi\in \widehat{\mathbb{F}}_q^*,\chi_a, \chi_b, \chi_c \in \widehat{\mathbb{F}}_q\}.
\end{eqnarray*}
Next, we give two constructions of codebooks over the ring $R$.
\subsection{The first construction of codebooks}
The codebook $C_1(R)$ of length $K$ over $R$ is constructed as
\begin{eqnarray*}
C_1(R) &=& \{\frac{1}{\sqrt{K}}(\psi(t_0)\chi_a(t_1)\chi_b(t_0)\chi_c(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}, \\
&& \psi~{\rm is~a~fixed~multiplicative~character~over}~ \mathbb{F}_q,\chi_a, \chi_b, \chi_c \in \widehat{\mathbb{F}}_q\}.
\end{eqnarray*}
Based on this construction of the codebook $C_1(R)$, we have the following theorem.
\begin{thm}\label{thm2}
Let $C_1(R)$ be a codebook defined as above. Then $C_1(R)$ is a $(q^3, q(q-1))$ codebook with the maximum
cross-correlation amplitude $I_{max}(C_1(R))=\frac{1}{q-1}$.
\end{thm}
\begin{proof}
According to the definition of $C_1(R)$, it is easy to see that $C_1(R)$ has $N=q^3$ codewords of length $K=q(q-1)$. Next, our task is to determine the maximum
cross-correlation amplitude $I_{max}$ of the codebook $C_1(R)$. Let $\textbf{c}_1$ and $\textbf{c}_2$ be any two distinct codewords in $C_1(R)$, where
$\textbf{c}_1=\frac{1}{\sqrt{K}}(\psi(t_0)\chi_{a_1}(t_1)\chi_{b_1}(t_0)\chi_{c_1}(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}$ and $\textbf{c}_2=\frac{1}{\sqrt{K}}(\psi(t_0)\chi_{a_2}(t_1)\chi_{b_2}(t_0)\chi_{c_2}(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}$. We denote the trivial multiplicative character of $\mathbb{F}_q$ by $\psi_0$. Then we have
\begin{eqnarray*}
\textbf{c}_1\textbf{c}_2^H &=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi(t_0)\chi_{a_1}(t_1)\chi_{b_1}(t_0)\chi_{c_1}(t_0t_1)\overline{\psi(t_0)\chi_{a_2}(t_1)\chi_{b_2}(t_0)\chi_{c_2}(t_0t_1)}\\
&=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi_0(t_0)\chi((a_1-a_2)t_1+(b_1-b_2)t_0+(c_1-c_2)t_0t_1) \\
&=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*}\psi_0(t_0)\chi{((b_1-b_2)t_0)}\sum\limits_{t_1\in \mathbb{F}_q}\chi((a_1-a_2)t_1+(c_1-c_2)t_0t_1)\\
&=& \frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*}\psi_0(t_0)\chi{(bt_0)}\sum\limits_{t_1\in \mathbb{F}_q}\chi((a+ct_0)t_1)~({\rm Set}~a=a_1-a_2, b=b_1-b_2, c=c_1-c_2)\\
&=&\frac{q}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, a+ct_0=0}\psi_0(t_0)\chi_b(t_0)\\
&=& \frac{1}{K}G_R(\varphi, \lambda)~({\rm By~the~proof~of~Theorem~\ref{thm1},~where}~\varphi:=\psi_0\star\chi_a, \lambda:=\chi_b\star\chi_c)
\end{eqnarray*}
Since $\textbf{c}_1\neq \textbf{c}_2$, the elements $a, b$ and $c$ are not all equal to $0$.
In view of Theorem \ref{thm1}, we have
\begin{equation*}
K\textbf{c}_1\textbf{c}_2^H=\begin{cases}
\emph{ }-q, ~~~~~~~~~~~{\rm if}~a=0, c=0, b\neq 0;\\
\emph{ }q\chi(-\frac{ab}{c}), ~~~~~{\rm if}~a\neq0, c\neq0;\\
\emph{ }0, ~~~~~~~~~~~~~~~{\rm otherwise}.\\
\end{cases}
\end{equation*}
Consequently, we infer that $|\textbf{c}_1\textbf{c}_2^H|\in \{0, \frac{1}{q-1}\}$ for any two distinct codewords $\textbf{c}_1, \textbf{c}_2$ in $C_1(R)$.
Hence, $I_{max}(C_1(R))=\frac{1}{q-1}.$
\end{proof}
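To see Theorem \ref{thm2} in action, the following Python sketch builds $C_1(R)$ for the demo choice $q=p=3$ (so $N=q^3=27$ and $K=q(q-1)=6$) with the fixed character $\psi=\psi_1$, and computes $I_{max}$ by brute force.

```python
import cmath
import itertools

p = 3          # demo prime q = p; then N = q^3 = 27 and K = q(q-1) = 6
g = 2          # a primitive root modulo 3

dlog, x = {}, 1
for k in range(p - 1):
    dlog[x] = k
    x = (x * g) % p

def chi(c):
    """Canonical additive character of F_p."""
    return cmath.exp(2j * cmath.pi * (c % p) / p)

def psi(j, c):
    """Multiplicative character psi_j of F_p."""
    return cmath.exp(2j * cmath.pi * ((j * dlog[c]) % (p - 1)) / (p - 1))

K = p * (p - 1)
support = [(t0, t1) for t0 in range(1, p) for t1 in range(p)]
jfix = 1       # the fixed multiplicative character psi_1

# one codeword per triple (a, b, c) in F_p^3
codebook = [
    [psi(jfix, t0) * chi(a * t1 + b * t0 + c * t0 * t1) / K ** 0.5
     for (t0, t1) in support]
    for a in range(p) for b in range(p) for c in range(p)
]

imax = max(
    abs(sum(u * v.conjugate() for u, v in zip(c1, c2)))
    for c1, c2 in itertools.combinations(codebook, 2)
)
print(len(codebook), imax)   # 27 codewords, I_max = 1/(q-1) = 0.5
```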
By Theorem \ref{thm2}, we can calculate the ratio $\frac{I_{max}(C_1(R))}{I_w}$, which allows us to prove that the codebook $C_1(R)$ is asymptotically optimal.
\begin{thm}\label{thm3}
Let the symbols be the same as those in Theorem \ref{thm2}. Then the codebook $C_1(R)$ asymptotically meets the
Welch bound.
\end{thm}
\begin{proof}
In view of Theorem \ref{thm2}, note that $N=q^3$ and $K=q(q-1)$. Then the corresponding Welch bound of the codebook $C_1(R)$ is
\begin{eqnarray*}
I_w &=& \sqrt{\frac{N-K}{(N-1)K}} \\
&=& \sqrt{\frac{q^3-q(q-1)}{(q^3-1)q(q-1)}}\\
&=&\sqrt{\frac{q^2-q+1}{q^4-q^3-q+1}}.
\end{eqnarray*}It follows from Theorem \ref{thm2} that
\begin{equation*}
\frac{I_{max}(C_1(R))}{I_w}=\sqrt{\frac{q^4-q^3-q+1}{(q^2-q+1)(q-1)^2}}\,.
\end{equation*}
Obviously, we get $\lim\limits_{q\longrightarrow \infty}\frac{I_{max}(C_1(R))}{I_w}=1$, which implies that $C_1(R)$ asymptotically
meets the Welch bound.
\end{proof}
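Numerically, the ratio $\frac{I_{max}(C_1(R))}{I_w}$ from the proof tends to $1$ quickly, as the following short check shows (the sample values of $q$ are arbitrary prime powers chosen for the demo).

```python
import math

def ratio(q):
    """I_max(C_1(R)) / I_w = sqrt((q^4 - q^3 - q + 1) / ((q^2 - q + 1)(q - 1)^2))."""
    return math.sqrt((q**4 - q**3 - q + 1) / ((q**2 - q + 1) * (q - 1) ** 2))

for q in (4, 9, 32, 128, 1024):
    print(q, ratio(q))   # decreases towards 1 as q grows
```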
\subsection{The second construction of codebooks}
The codebook $C_2(R)$ of length $K$ over $R$ is constructed as
\begin{eqnarray*}
C_2(R) &=& \{\frac{1}{\sqrt{K}}(\psi(t_0)\chi_a(t_1)\chi_b(t_0)\chi_c(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}, \\
&& \psi\in \widehat{\mathbb{F}}_q^*, \chi_b~{\rm is~a~fixed~additive~character~over}~\mathbb{F}_q,\chi_a, \chi_c \in \widehat{\mathbb{F}}_q\}.
\end{eqnarray*}
With this construction, we determine the maximum cross-correlation amplitude $I_{max}(C_2(R))$ as follows.
\begin{thm}\label{thm4}
Let $C_2(R)$ be a codebook defined as above. Then $C_2(R)$ is a $(q^2(q-1), q(q-1))$ codebook with the maximum
cross-correlation amplitude $I_{max}(C_2(R))=\frac{1}{q-1}$.
\end{thm}
\begin{proof}
According to the definition of $C_2(R)$, it is obvious that $C_2(R)$ has $N=q^2(q-1)$ codewords of length $K=q(q-1)$. Next, our goal is to determine the maximum
cross-correlation amplitude $I_{max}$ of the codebook $C_2(R)$. Let $\textbf{c}_1$ and $\textbf{c}_2$ be any two distinct codewords in $C_2(R)$, where
$\textbf{c}_1=\frac{1}{\sqrt{K}}(\psi_1(t_0)\chi_{a_1}(t_1)\chi_{b}(t_0)\chi_{c_1}(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}$ and $\textbf{c}_2=\frac{1}{\sqrt{K}}(\psi_2(t_0)\chi_{a_2}(t_1)\chi_{b}(t_0)\chi_{c_2}(t_0t_1))_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}$. Then we have
\begin{eqnarray*}
\textbf{c}_1\textbf{c}_2^H &=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi_1(t_0)\chi_{a_1}(t_1)\chi_{b}(t_0)\chi_{c_1}(t_0t_1)\overline{\psi_2(t_0)\chi_{a_2}(t_1)\chi_{b}(t_0)\chi_{c_2}(t_0t_1)}\\
&=&\frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, t_1\in \mathbb{F}_q}\psi_1\overline{\psi}_2(t_0)\chi((a_1-a_2)t_1+(c_1-c_2)t_0t_1)\\
&=& \frac{1}{K}\sum\limits_{t_0\in \mathbb{F}_q^*}\psi(t_0)\sum\limits_{t_1\in \mathbb{F}_q}\chi((a+ct_0)t_1)~({\rm Set}~\psi=\psi_1\overline{\psi}_2, a=a_1-a_2, c=c_1-c_2)\\
&=&\frac{q}{K}\sum\limits_{t_0\in \mathbb{F}_q^*, a+ct_0=0}\psi(t_0).
\end{eqnarray*}
\begin{itemize}
\item If $a=c=0$, then since $\textbf{c}_1\neq \textbf{c}_2$, the character $\psi$ is nontrivial. Then we have $$K\textbf{c}_1\textbf{c}_2^H=q\sum\limits_{t_0\in \mathbb{F}_q^*}\psi(t_0)=0;$$
\item If $a=0, c\neq 0$ or $a\neq0, c=0$, then $K\textbf{c}_1\textbf{c}_2^H=0$;
\item If $a\neq0, c\neq 0$, then $K\textbf{c}_1\textbf{c}_2^H=q\psi(-\frac{a}{c})$.
\end{itemize}
Altogether,
\begin{equation*}
\textbf{c}_1\textbf{c}_2^H=\begin{cases}
\emph{ }\frac{q}{K}\psi(-\frac{a}{c}), ~~~~~{\rm if}~a\neq0, c\neq0;\\
\emph{ }0, ~~~~~~~~~~~~~~~{\rm otherwise}.\\
\end{cases}
\end{equation*}
Consequently, we infer that $|\textbf{c}_1\textbf{c}_2^H|\in \{0, \frac{1}{q-1}\}$ for any two distinct codewords $\textbf{c}_1, \textbf{c}_2$ in $C_2(R)$.
Hence, $I_{max}(C_2(R))=\frac{1}{q-1}.$
\end{proof}
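Theorem \ref{thm4} can be checked by brute force in the same way as for the first construction. The Python sketch below builds $C_2(R)$ for the demo choice $q=p=3$ (so $N=q^2(q-1)=18$, $K=6$) with the fixed additive character $\chi_b=\chi_1$.

```python
import cmath
import itertools

p = 3          # demo prime q = p; then N = q^2(q-1) = 18 and K = q(q-1) = 6
g = 2          # a primitive root modulo 3

dlog, x = {}, 1
for k in range(p - 1):
    dlog[x] = k
    x = (x * g) % p

def chi(c):
    """Canonical additive character of F_p."""
    return cmath.exp(2j * cmath.pi * (c % p) / p)

def psi(j, c):
    """Multiplicative character psi_j of F_p."""
    return cmath.exp(2j * cmath.pi * ((j * dlog[c]) % (p - 1)) / (p - 1))

K = p * (p - 1)
support = [(t0, t1) for t0 in range(1, p) for t1 in range(p)]
bfix = 1       # the fixed additive character chi_1

# one codeword per (psi_j, a, c) with j in {0, ..., q-2} and a, c in F_p
codebook = [
    [psi(j, t0) * chi(a * t1 + bfix * t0 + c * t0 * t1) / K ** 0.5
     for (t0, t1) in support]
    for j in range(p - 1) for a in range(p) for c in range(p)
]

imax = max(
    abs(sum(u * v.conjugate() for u, v in zip(c1, c2)))
    for c1, c2 in itertools.combinations(codebook, 2)
)
print(len(codebook), imax)   # 18 codewords, I_max = 1/(q-1) = 0.5
```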
Similarly, we show the near-optimality of the codebook $C_2(R)$ in the following theorem.
\begin{thm}\label{thm5}
Let the symbols be the same as those in Theorem \ref{thm4}. Then the codebook $C_2(R)$ asymptotically meets the
Welch bound.
\end{thm}
\begin{proof}
In view of Theorem \ref{thm4}, note that $N=q^2(q-1)$ and $K=q(q-1)$. Then the corresponding Welch bound of the codebook $C_2(R)$ is
\begin{eqnarray*}
I_w &=& \sqrt{\frac{N-K}{(N-1)K}} \\
&=& \sqrt{\frac{q^2(q-1)-q(q-1)}{(q^3-q^2-1)q(q-1)}}\\
&=&\sqrt{\frac{q-1}{q^3-q^2-1}}.
\end{eqnarray*}It follows from Theorem \ref{thm4} that
\begin{equation*}
\frac{I_{max}(C_2(R))}{I_w}=\sqrt{\frac{q^3-q^2-1}{(q-1)(q-1)^2}}\,.
\end{equation*}
Obviously, we get $\lim\limits_{q\longrightarrow \infty}\frac{I_{max}(C_2(R))}{I_w}=1$, which implies that $C_2(R)$ asymptotically
meets the Welch bound.
\end{proof}
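As for the first construction, the ratio $\frac{I_{max}(C_2(R))}{I_w}$ can be checked numerically (the sample values of $q$ are demo choices).

```python
import math

def ratio(q):
    """I_max(C_2(R)) / I_w = sqrt((q^3 - q^2 - 1) / (q - 1)^3)."""
    return math.sqrt((q**3 - q**2 - 1) / (q - 1) ** 3)

for q in (4, 9, 32, 128, 1024):
    print(q, ratio(q))   # tends to 1 as q grows
```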
\section{Conclusions}
In this paper, we described the additive and multiplicative characters of the ring $R=\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$ in detail. Building on these characters, we computed Gauss sums over the ring $R$ explicitly. The purpose of studying the characters of $R$ is to present an application to codebooks. Based on this idea, we proposed two constructions of codebooks and determined the maximum cross-correlation amplitude $I_{\max}(C)$ of the codebooks they generate. Moreover, we showed that these codebooks are asymptotically optimal with respect to the Welch bound and that their parameters are new.
In further research, it would be interesting to find new constructions of codebooks meeting the Welch bound or the Levenshtein bound. In addition, we expect that further properties of Gauss and Jacobi sums over rings will be established and prove useful in applications.
% https://arxiv.org/abs/1906.05523
% New constructions of asymptotically optimal codebooks via character sums over a local ring
% https://arxiv.org/abs/2209.00319

\title{Some old and basic facts about random walks on groups}

\begin{abstract}
This note contains old rather than new results about random walks on groups, which may serve as a small supplement to the author's monograph ``Random Walks on Infinite Graphs and Groups'' (Cambridge Univ. Press 2000/2009). First, we exhibit a basic exercise on the periodicity classes of a random walk. The second topic concerns some basics on ratio limits for random walks, which had been published ``only'' in German in the 1970s.
\end{abstract}

\section{Introduction}\label{sec:intro}
In all that follows, $\Gamma$ will be a countable, discrete group written multiplicatively,
with elements typically denoted by $x,y$, etc., unit element $e$, and $\mu$ a probability
measure on $\Gamma$. The support of $\mu$ is
$$
S_{\mu} = \{ x \in \Gamma : \mu(x) > 0 \}\,.
$$
Recall that the random walk on $\Gamma$ with law $\mu$ is the time-homogeneous
Markov chain with state space $\Gamma$ and transition probabilities $p_{\mu}(x,y) = \mu(x^{-1}y)$.
The $n$-step transition probabilities are then $p_{\mu}^{(n)}(x,y) = \mu^{(n)}(x^{-1}y)$,
where $\mu^{(n)}$ is the $n^{\textrm{th}}$ convolution power of $\mu$. Its support is
$S_{\mu}^n$, the set of all products $x_1 \cdots x_n$ of elements $x_i \in S_{\mu}$.
Thus, the random walk is \emph{irreducible} in the sense of Markov chains (resp., non-negative
matrices) if and only if
\begin{equation}\label{eq:semigroup}
\bigcup_{n=1}^{\infty} S_{\mu}^n = \Gamma\,,
\end{equation}
i.e., the semigroup generated by the support is all of $\Gamma$. (In general, for
two subsets $A, B$ of $\Gamma$, their product is $AB = \{xy : x \in A\,,\; y \in B\}$,
and $A^{-1} = \{x^{-1} : x \in A \}$.)
\smallskip
$\bullet\;$ In this note, we always assume irreducibility.
\smallskip
In \S \ref{sec:period}, we recall a basic fact about the period and aperiodicity
of the random walk, and in \S \ref{sec:specrad}, we recall some ratio limit theorems.
\section{Aperiodicity}\label{sec:period}
As an irreducible Markov chain, the random walk has a \emph{period}
$$
d = d(\mu) = \gcd\{ n \in \N : \mu^{(n)}(e) > 0 \},
$$
compare with \cite[page 3]{Wbook}. The random walk is called \emph{aperiodic,}
if $d(\mu)=1$. For general $d$, as for any irreducible Markov chain, there is a
partition in subsets
$$
\Gamma = C_0 \,\dot{\cup}\, C_1 \,\dot{\cup}\,\dots\,\dot{\cup}\, C_{d-1}\,,
$$
with $e \in C_0\,$, such that the random walk wanders through the sets $C_j$ cyclically:
if $x \in C_j$ and $y \in C_k$ then $\mu^{(n)}(x^{-1}y) > 0$ only when
$n \equiv k-j \pmod d$, and then this holds for all but finitely many such $n$.
We now recall a group-theoretical fact which was considered a basic exercise in earlier
days but now is sometimes being ``discovered'' by younger researchers. The following
has also has a generalisation to random walks on locally compact groups, which was
studied by {\sc Woess}~\cite{Wperiod} -- which however is written in French and not available
online, apart from the author's webpage.
\begin{pro}\label{pro:normal} \hspace*{3.1cm} $\displaystyle
\Gamma_0 = \bigcup_{k=1}^{\infty} S_{\mu}^{-k}S_{\mu}^{k}$\\[3pt]
is a normal subgroup of $\Gamma$ with index $d$. The factor group
$\Gamma/\Gamma_0$ is cyclic of order $d$.
One has $C_0 = \Gamma_0\,$, and the sets $C_j$ are the cosets of $\Gamma_0\,$.
The probability measure $\mu^{(d)}$ is supported by $\Gamma_0$ and
irreducible on that subgroup, with period $1$.
\end{pro}
\begin{proof}
We first show that by \eqref{eq:semigroup}, one has that $\Gamma_0$
is a normal subgroup; this can be found, e.g., in the lecture notes by
{\sc Mukherjea and Tserpes}~\cite[p. 97]{MuTs}.
\\
(a) It is clear that $e \in \Gamma_0$ and that $x^{-1}\in\Gamma_0$ whenever $x \in \Gamma_0\,$.
\\
(b) If $x \in \Gamma$ then there is $n$ such that $x \in S_{\mu}^n$. Hence
$x^{-1}S_{\mu}^{-k}S_{\mu}^{k}x \subset S_{\mu}^{-n-k}S_{\mu}^{k+n}$, so that
$x^{-1}\Gamma_0x \subset \Gamma_0\,$.
\\
(c) Finally, by (b),
$$
(S_{\mu}^{-k}S_{\mu}^{k})(S_{\mu}^{-n}S_{\mu}^{n}) \subset (\Gamma_0 S_{\mu}^{-n})S_{\mu}^{n}
= S_{\mu}^{-n}\Gamma_0 S_{\mu}^{n} \subset \Gamma_0\,,
$$
so that $\Gamma_0$ is closed with respect to the group product.
Next, we observe that $d(\mu) = \gcd N_{\mu}$, where $N_{\mu} = \{ n \in \N : e \in S_{\mu}^n \}$,
and that the latter set
is closed under addition. This implies via elementary number theory that there are $n_1, n_2$
such that $e \in S_{\mu}^{n_1} \cap S_{\mu}^{n_2}$ and $d(\mu)=n_2-n_1\,$. Therefore
$$
e \in S_{\mu}^{-n_1} S_{\mu}^{n_1}S_{\mu}^{d} \subset \Gamma_0 S_{\mu}^{d}\,
\AND S_{\mu}^{-d} \subset S_{\mu}^{-d} \Gamma_0 S_{\mu}^{d} \subset \Gamma_0\,.
$$
Therefore also $S_{\mu}^{nd} \subset \Gamma_0$ for all $n \in \N$.
Now suppose that $S_{\mu}^k \cap S_{\mu}^m \ne \emptyset$ for $k \ne m$, and let $x$ be
an element of that set. By \eqref{eq:semigroup}, there is $n$ such that $x^{-1} \in S_{\mu}^n$.
Then $e \in S_{\mu}^{k+n} \cap S_{\mu}^{m+n}$, whence $k+n$ and $m+n$ belong to $N_{\mu}\,$.
We conclude that $d$ divides $m-k$.
In particular, let $x_0 \in S_{\mu}$. If $x_0^k \in \Gamma_0$
then $x_0^k \in S_{\mu}^{-n} S_{\mu}^{n}$ for some $n$,
so that $S_{\mu}^{k+n} \cap S_{\mu}^{n} \ne \emptyset$ and $d$ divides $k$.
Thus, the cosets $x_0^k\Gamma_0$, $k=0\,,\dots, d-1$ are all distinct, and $x_0^d\Gamma_0 = \Gamma_0$.
This holds for any $x_0 \in S_{\mu}\,$, and it shows that $\Gamma/\Gamma_0$ is cyclic of order $d$.
Turning to the random walk, if $x \in x_0^k \Gamma_0$ for $k \in \{0,\dots, d-1\}$
and $p(x,y) = \mu(x^{-1}y)> 0$ then
$y \in S_{\mu}^{k+1}$ and $y x_0^{d-1-k} \in S_{\mu}^d \subset \Gamma_0\,$. Consequently,
$y \in x_0^{k+1}\Gamma_0\,$.
By all that we have proved so far, we have that $S_{\mu}^n \cap \Gamma_0 = \emptyset$
when $d$ does not divide $n$, so that
$$
\bigcup_{n=1}^{\infty} S_{\mu}^{nd} = \Gamma_0\,,
$$
which means that $\mu^{(d)}$ is irreducible on $\Gamma_0\,$. Since $N_{\mu}$ is closed
under addition, there is $n_0$ such that $nd \in N_{\mu}$ for all $n \ge n_0\,$.
Therefore $\mu^{(d)}$ has period 1 on $\Gamma_0\,$.
\end{proof}
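Proposition \ref{pro:normal} can be illustrated on a small example, say the cyclic group $\mathbb{Z}/6\mathbb{Z}$ (written additively) with $S_{\mu}=\{1\}$; there $d=6$, $\Gamma_0=\{0\}$, and the classes $C_j$ are the singleton cosets $\{j\}$. A Python sketch (the group and support are demo choices):

```python
from functools import reduce
from math import gcd

n = 6            # demo group Z/6Z, written additively; identity is 0
S = {1}          # support of mu

# supports S^k of the convolution powers, return times to 0, and Gamma_0
supp, return_times, gamma0 = {0}, [], set()
for k in range(1, 3 * n + 1):
    supp = {(a + s) % n for a in supp for s in S}         # supp = S^k
    if 0 in supp:
        return_times.append(k)
    gamma0 |= {(-a + b) % n for a in supp for b in supp}  # union of S^{-k} S^k

d = reduce(gcd, return_times)
print(d, sorted(gamma0))   # d = 6 and Gamma_0 = {0}, of index d in Z/6Z
```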
If $\mu$ has period 1, then it is called aperiodic. By the Proposition, we may
restrict attention to that case.
\section{Spectral radius and convergence}\label{sec:specrad}
For irreducible $\mu$ as in the preceding section, the number
$$
\rho(\mu) = \limsup_{n \to \infty} \mu^{(n)}(x)^{1/n}
$$
is independent of $x \in \Gamma$. It is called the \emph{spectral radius} in a variety
of references, including \cite{Wbook}. We stick to this terminology, but
one should be careful: in general, this is not the spectral radius of an operator;
it \emph{is} the spectral radius (and norm) of $\mu$ as a convolution operator on $\ell^2(\Gamma)$
when $\mu$ is symmetric, i.e., $\mu(x^{-1}) = \mu(x)$ for all $x$, and on a
weighted $\ell^2$-space when $\mu$ is reversible, see \cite[\S 10]{Wbook}.
The study of this number, for general irreducible Markov chains, goes back to
{\sc Pruitt}~\cite{Pruitt} and {\sc Vere-Jones}~\cite{VJ}; see also the monograph by
{\sc Seneta}~\cite{Sen}.
\begin{lem}\label{lem:roots} For every $x \in \Gamma$, one has convergence:
$$
\lim_{n \to \infty} \mu^{(n)}(x)^{1/n} = \rho(\mu)\,,
$$
and the sequence $\mu^{(n)}(e)^{1/n}$ converges to its limit from below. Furthermore,
there is $k_0$ such that
$$
\mu^{(n)}(e) < \rho(\mu)^n \quad \text{strictly for all }\;n \ge k_0\,.
$$
\end{lem}
\begin{proof} The convergence result is the multiplicative version of {\sc Fekete}'s Lemma \cite{Fek}.
Let $a_n = \mu^{(n)}(e)$. Then $a_n > 0$ for all $n \ge n_0$ and $a_m a_n \le a_{m+n} \le 1$.
For fixed $m$, write $n = qm + r$ with $n_0 \le r=r_n < n_0+m$. Then
$$
a_n \ge a_r a_m^q \ge c_m a_m^q \,, \quad\text{where}\quad c_m =\min \{ a_r: n_0 \le r < n_0+m\} > 0.
$$
Therefore $a_n^{1/n} \ge c_m^{1/n} a_m^{q/n}$, and letting $n \to \infty$,
$$
\rho(\mu) \ge \liminf_{n \to \infty} a_n^{1/n} \ge a_m^{1/m}\,.
$$
This holds for every $m$, and letting $m \to \infty$, we get that $a_m^{1/m}$ tends
to $\rho(\mu)$ from below. For general $x$, there is $k$ such that $\mu^{(k)}(x) > 0$,
and $\mu^{(n)}(x) \ge \mu^{(k)}(x)\mu^{(n-k)}(e)$. Taking $n^{\text{th}}$ roots on both
sides, we get that also $\mu^{(n)}(x)^{1/n}$ converges.
The last observation is due to {\sc Gerl}~\cite{Gerl78}: fix $x_0 \in S_{\mu} \setminus \{e\}$.
Then $x_0^{-1} \in S_{\mu}^{r_0}$ for some $r_0 \in \N$, and by the above,
$$
\mu^{(n)}(x_0)\mu^{(n)}(x_0^{-1}) > 0 \quad \text{for all }\; n \ge n_0+r_0 =:k_0\,.
$$
Now suppose that for some $n \ge k_0$,
$\mu^{(n)}(e)=\rho(\mu)^n.$ Then
$$
\rho(\mu)^{2n} \ge \mu^{(2n)}(e) = \sum_{x \in \Gamma} \mu^{(n)}(x)\mu^{(n)}(x^{-1})
= \rho(\mu)^{2n} + \sum_{x\ne e} \mu^{(n)}(x)\mu^{(n)}(x^{-1}).
$$
But then $\mu^{(n)}(x_0)\mu^{(n)}(x_0^{-1}) = 0$, a contradiction.
\end{proof}
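Lemma \ref{lem:roots} can be observed numerically. As a demo choice, take the aperiodic walk on $\mathbb{Z}$ with $\mu(-1)=0.2$, $\mu(0)=0.2$, $\mu(1)=0.6$; for such a finitely supported walk on $\mathbb{Z}$ it is a standard fact that $\rho(\mu)=\min_{t>0}\sum_x \mu(x)\,t^x = 0.2+2\sqrt{0.12}$, and the roots $\mu^{(n)}(0)^{1/n}$ stay below $\rho(\mu)$ while creeping up towards it.

```python
import math

mu = {-1: 0.2, 0: 0.2, 1: 0.6}    # demo: aperiodic random walk on Z with drift
rho = 0.2 + 2 * math.sqrt(0.12)   # = min_{t>0} sum_x mu(x) t^x

def convolve(f, g):
    """Convolution of two finitely supported measures on Z, stored as dicts."""
    h = {}
    for x, fx in f.items():
        for y, gy in g.items():
            h[x + y] = h.get(x + y, 0.0) + fx * gy
    return h

conv, roots = dict(mu), []
for n in range(2, 201):
    conv = convolve(conv, mu)     # conv = mu^(n)
    roots.append(conv[0] ** (1.0 / n))

print(roots[0], roots[-1], rho)   # the roots approach rho from below
```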
The following result of {\sc Gerl}~\cite{Gerl78} is less straightforward, and was the
primary motivation for writing this little note. It is based on an argument in \cite{Gerl73},
compare also with {\sc Guivarc'h}~\cite[pp. 18-19]{Gui}.
According to {\sc Le Page}~\cite{LeP}, the argument can be traced back to
{\sc Orey and Kingman}~\cite{OK}.
\begin{theorem1}\label{thm:ratio} If $\mu$ is irreducible and aperiodic on $\Gamma$,
then
$$
\lim_{n \to \infty} \frac{\mu^{(n+1)}(x)}{\mu^{(n)}(x)} = \rho(\mu)
\quad \text{for every }\; x \in \Gamma\,.
$$
\end{theorem1}
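Before turning to the proof, the theorem can be illustrated numerically on a demo walk on $\mathbb{Z}$ with $\mu(-1)=0.2$, $\mu(0)=0.2$, $\mu(1)=0.6$, which is aperiodic and has $\rho(\mu)=0.2+2\sqrt{0.12}$ (a demo choice, as in the illustration of Lemma \ref{lem:roots}).

```python
import math

mu = {-1: 0.2, 0: 0.2, 1: 0.6}    # demo: aperiodic random walk on Z
rho = 0.2 + 2 * math.sqrt(0.12)   # its spectral radius

def convolve(f, g):
    """Convolution of two finitely supported measures on Z, stored as dicts."""
    h = {}
    for x, fx in f.items():
        for y, gy in g.items():
            h[x + y] = h.get(x + y, 0.0) + fx * gy
    return h

conv = dict(mu)
prev = conv[0]                    # mu^(1)(0) > 0 by aperiodicity
for n in range(2, 301):
    conv = convolve(conv, mu)     # conv = mu^(n)
    ratio, prev = conv[0] / prev, conv[0]

print(ratio, rho)                 # the successive ratios approach rho
```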
\begin{comment}
The proof will follow from the following proposition, which appears in \cite{Gerl73}
for random walks on groups, but the group structure is not needed. A similr argument appears
in {\sc Guivarc'h}~\cite[pp. 18-19]{Gui} without reference to \cite{Gerl73}.
\begin{pro}\label{pro:ratio}
Let $P = \bigl(p(x,y)\bigr)_{x,y \in \Gamma}$ be a substochastic, irreducible
transition matrix on the countable set $\Gamma$\footnote{That is, the matrix elements are
non-negative, $\sum_y p(x,y) \le 1$ for every $x$, and for all $x,y$ there is $n$ such that
the element $p^{(n)}(x,y)$ of $P^n$ is $> 0$.}. Suppose that $p(x,x) \ge a > 0$ for all $x$
and that
$\lim_{n\to \infty} p^{(n)(x,y)^{1/n} =1$ for all $x,y$. (The limit exists and is independent of $x,y$
as in Lemma \ref{lem:roots}.)
\end{pro}
\end{comment}
\begin{proof} Write $\rho = \rho(\mu)$.
Recall the transition probabilities $p(x,y) = \mu(x^{-1}y)$ of the random walk.
Irreducibility yields that there is a positive $\rho$-subharmonic function
$h: \Gamma \to (0\,,\,\infty)$, that is,
$$
\sum_{y \in \Gamma} \mu(x^{-1}y)h(y) \le \rho\, h(x) \quad \text{for every }\; x \in \Gamma\,.
$$
See \cite{Pruitt}, \cite{Sen} or \cite[Lemma 7.2]{Wbook}. In many cases, there even is
such a function which is $\rho$-harmonic, i.e., equality holds at every $x$.
We now define a new Markov chain on $\Gamma$, the \emph{$h$-process,} with transition probabilities
$$
p_h(x,y) = \frac{\mu(x^{-1}y)h(y)}{\rho\,h(x)}.
$$
This is in general not a group-invariant random walk. The
transition matrix (denoted $Q$ in \cite{Gerl78})
$$
P_h = \bigl(p_h(x,y)\bigr)_{x,y \in \Gamma}
$$
is substochastic, i.e., $\sum_y p_h(x,y) \le 1$ for all $x$, so that there may be a positive
probability that the Markov chain ``dies'' at $x$. Furthermore, along with $\mu$ it is
irreducible and aperiodic: for all $x,y$, there is $n_{x,y}$ such that $p_h^{(n)}(x,y) > 0$
for all $n \ge n_{x,y}\,$, where $p_h^{(n)}(x,y)$
is the $(x,y)$-entry of the matrix power $P_h^n\,$. We have by Lemma \ref{lem:roots}
$$
p_h^{(n)}(x,y) = \frac{\mu^{(n)}(x^{-1}y)h(y)}{\rho^n\,h(x)} \AND
\lim_{n\to \infty} p_h^{(n)}(x,y)^{1/n} = 1 \quad \text{for all }\;x,y \in \Gamma\,.
$$
In particular, for all $x$,
$$
0 < p_h^{(n)}(x,x) = \frac{\mu^{(n)}(e)}{\rho^n} < 1 \quad \text{for all $n \ge k_0\,$.}
$$
We now fix $m \ge k_0$ and set $a = a_m = 1- p_h^{(m)}(x,x)$, so that $0 < a < 1$, as well as
$$
Q = \frac{1}{a} \bigl( P_h^m - (1-a)I\bigr)\,,
$$
where $I$ is the identity matrix over $\Gamma$. (The matrix $Q$ is denoted $R$ in \cite{Gerl78}.)
We shall also need the matrix $E$ over $\Gamma$ with all entries $=1$.
For the next lines of the proof, we just write $P$ for $P_h^m\,$.
Then $Q$ is also non-negative, substochastic and irreducible, and $P = a\, Q + (1-a)I$. Note
that $Q$ commutes with $P_h\,$, and that $P_hE \le E$. Then
$$
P^n = \sum_{k=0}^n p_a(n,k) \,Q^k\,, \quad\text{where}\quad
p_a(n,k) = {n \choose k}a^k(1-a)^{n-k}\,.
$$
The latter is the probability that the sum $S_n = X_1 +\dots+X_n$ of i.i.d. Bernoulli
random variables with $\Prob[X_k=1] = a$ has value $k$. For $\ep > 0$, consider the set
$$
C_n(\ep) = \bigl\{ k \in \{0\,,\dots, n\} :
p_a(n,k) \le (1+\ep)\,p_a(n+1,k+1) \bigr\}
$$
and its complement $C_n(\ep)^c$ in $\{0\,,\dots, n\}$. Then
$$
\sum_{k \in C_n(\ep)^c} p_a(n,k) = \Prob \biggl[ \frac{S_n+1}{n+1}-a > \ep\,a \biggr]\,.
$$
This is a large deviation probability, which is well known to decay exponentially,
i.e., there is $\delta > 0$
such that it is $\le e^{-n\delta}$. In our specific case, this can also be verified
by combinatorial computations. See e.g. {\sc R\'enyi}~\cite[p. 324]{Renyi}.
Then, using matrix products and elementwise inequality between matrices,
$$
\begin{aligned}
\frac{1}{a}P^{n+1} - \frac{1-a}{a}P^n = QP^n &= \sum_{k=0}^n p_a(n,k)Q^{k+1}\\
&\le e^{-\delta n}E + (1+\ep)\sum_{k \in C_n(\ep)} p_a(n+1,k+1)Q^{k+1}\\
&\le e^{-\delta n}E + (1+\ep)P^{n+1}\,.
\end{aligned}
$$
Reassembling the terms,
$$
\Bigl( 1 - \frac{a}{1-a}\, \ep\Bigr)P^{n+1} \le \frac{a}{1-a}\,e^{-\delta n}E + P^n\,.
$$
We multiply from the left with $P_h^r$, where $r \in \N_0$ is arbitrary, and
get for the matrix elements
$$
\left( 1 - \frac{a}{1-a} \ep\right)\frac{p_h^{(mn + m + r)}(x,y)}{p_h^{(mn + r)}(x,y)} \le
\frac{a}{1-a}\,\frac{e^{-\delta n}}{p_h^{(mn + r)}(x,y)} + 1\,.
$$
We are not dividing by $0$ if $n$ is sufficiently large, and since
$p_h^{(mn + r)}(x,y)^{1/n} \to 1$, the right hand side tends to 1 as $n \to \infty$.
Since we can choose $\ep$ arbitrarily small, we get
\begin{equation}\label{eq:limsup}
\limsup_{n\to \infty} \frac{p_h^{(mn + m + r)}(x,y)}{p_h^{(mn + r)}(x,y)} \le 1\,,
\end{equation}
and this holds for every $m \ge k_0$ and every $r \ge 0$.
\smallskip
For an analogous lower bound, we use the set
$$
D_n(\ep) = \bigl\{ k \in \{0\,,\dots, n\} :
p_a(n+1,k+1) \le (1+\ep)p_a(n,k) \bigr\}
$$
and observe that
$$
\sum_{k \in D_n(\ep)^c} p_a(n+1,k+1) \le \Prob \biggl[\frac{S_{n+1}}{n+1}-a < - \frac{\ep}{1+\ep}\,a \biggr]\,,
$$
where the right hand side again decays exponentially and is bounded by $e^{-\delta n}$ for some $\delta > 0$. Then
$$
\begin{aligned}
P^{n+1} &\le e^{-\delta n}E + (1+\ep)\sum_{k \in D_n(\ep)} p_a(n,k)Q^{k+1}\\
&\le e^{-\delta n}E + (1+\ep)P^n Q
= e^{-\delta n}E + (1+\ep)\left(\frac{1}{a}P^{n+1} - \frac{1-a}{a}P^n\right)\,.
\end{aligned}
$$
Reassembling the terms,
$$
(1+\ep)P^n \le \frac{a}{1-a}\,e^{-\delta n}E + \Bigl(1 + \frac{\ep}{1-a}\Bigr)P^{n+1}\,.
$$
Proceeding as above, we get for all $m \ge k_0$ and all $r \ge 0$
\begin{equation}\label{eq:liminf}
\liminf_{n\to \infty} \frac{p_h^{(mn + m + r)}(x,y)}{p_h^{(mn + r)}(x,y)} \ge 1\,,
\end{equation}
for every $m \ge k_0$ and every $r \ge 0$. Since these two numbers can be chosen arbitrarily within these constraints,
it is an easy exercise to deduce from \eqref{eq:limsup} and \eqref{eq:liminf} that
$$
\lim_{n\to \infty} \frac{p_h^{(n+1)}(x,y)}{p_h^{(n)}(x,y)} = 1\,,
$$
and the stated result follows.
\end{proof}
\begin{rmk}\label{rmk:nogroups}
The last theorem is not restricted to random walks on groups. If $P$ is the transition matrix
of an irreducible Markov chain on a countable set -- say -- $\Gamma$, and it is
\emph{strongly aperiodic,} that is,
$$
\gcd \bigl\{ n \in \N : \inf_x p^{(n)}(x,x) > 0 \bigr\} = 1\,,
$$
then the same proof applies to show that
$$
\lim_{n\to \infty} \frac{p^{(n+1)}(x,y)}{p^{(n)}(x,y)} = \rho(P) \quad \text{for all }\;
x,y \in \Gamma\,. \eqno\square
$$
\end{rmk}
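To see the ratio limit theorem at work in the simplest setting, here is a small numerical sketch (an illustration added in editing, not part of the original argument): for the lazy simple random walk on $\mathbb{Z}$ with $\mu(0)=1/2$ and $\mu(\pm 1)=1/4$, which is irreducible and aperiodic with $\rho(\mu)=1$, one has $\mu^{(n)}(0)=\binom{2n}{n}/4^n$, so that $\mu^{(n+1)}(0)/\mu^{(n)}(0)=(2n+1)/(2n+2)\to 1$.

```python
import numpy as np

# Lazy simple random walk on Z: mu(0) = 1/2, mu(+1) = mu(-1) = 1/4.
# This walk is irreducible and aperiodic, and rho(mu) = 1.
mu = np.array([0.25, 0.5, 0.25])

p = mu.copy()
for _ in range(199):
    p = np.convolve(p, mu)      # after the loop, p = mu^{(200)}
p_next = np.convolve(p, mu)     # mu^{(201)}

at0 = lambda arr: arr[len(arr) // 2]   # value at the origin (arrays are centred)
ratio = at0(p_next) / at0(p)           # exactly (2n+1)/(2n+2) = 401/402 here
```

At $n=200$ the ratio is $401/402 \approx 0.9975$, consistent with the limit $\rho(\mu)=1$.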
In the situation of Theorem 1, assume for simplicity that $S_{\mu}$ is finite.
Then it is well known and easy to deduce that the sequence of measures
$$
\Bigl(\frac{\mu^{(n)}}{\mu^{(n)}(e)}\Bigr)_{n \in \N}
$$
is relatively compact in the topology of pointwise convergence, and every
limit measure $\nu$ satisfies the convolution equation
\begin{equation}\label{eq:conv}
\mu * \nu = \rho(\mu)\cdot \nu\,,
\end{equation}
or in other words, the function $h(x) = \nu(x^{-1})$
is $\rho(\mu)$-harmonic,
that is
$$
\sum_y \mu(x^{-1}y)h(y) = \rho(\mu)\, h(x) \quad \text{for all }\; x \in \Gamma\,.
$$
{\sc Gerl}~\cite{Gerl78} and (under slightly different conditions, where
the state space is not necessarily discrete) {\sc Guivarc'h}~\cite{Gui} prove the following ratio limit theorem.
\begin{theorem2}\label{thm:2}
Assume that $\mu$ is irreducible and aperiodic, and that $S_{\mu}$ is finite.
Suppose that {\sf (P)} is a certain property of positive measures $\nu$ on $\Gamma$
such that
\begin{itemize}
\item every limit along a pointwise convergent subsequence
$$
\Bigl(\frac{\mu^{(n_k)}}{\mu^{(n_k)}(e)}\Bigr)_{k \in \N}
$$
must have property {\sf (P)}, and
\item there is a unique positive measure $\nu$ with $\nu(e)=1$ that satisfies
\eqref{eq:conv}.
\end{itemize}
Then
$$
\lim_{n \to \infty}\frac{\mu^{(n)}(x)}{\mu^{(n)}(e)} = \nu(x) \quad \text{for all }\; x \in \Gamma.
$$
\end{theorem2}
The proof is clear in view of the observations preceding the statement of Theorem 2,
i.e., relative compactness and \eqref{eq:conv}, which are left to the reader as exercises
or to be looked up in old references. Finiteness of $S_{\mu}$ can be relaxed.
Typical examples of application are (virtually) Abelian groups, where property {\sf (P)}
is vacuous, because there is a unique solution $\nu$ of \eqref{eq:conv}: there is a good
amount of literature from the 1960s on this. The most significant reference for Abelian
groups is {\sc Stone}~\cite{Stone}.
Other typical examples are isotropic random walks on free groups, and property {\sf (P)}
is that $\nu$ is also isotropic; see \cite{Gerl78}.
It should be noted that in those cases, one also has stronger results, namely
local limit theorems; see \cite[Chapter III]{Wbook}.
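For a concrete sanity check of Theorem 2 (an editorial sketch, added here): take the lazy simple random walk on $\mathbb{Z}$ with $\mu(0)=1/2$, $\mu(\pm 1)=1/4$, so $\rho(\mu)=1$. The positive solutions of the convolution equation \eqref{eq:conv} correspond to positive harmonic functions, which for this walk are affine and hence constant on $\mathbb{Z}$; the unique normalised solution is $\nu \equiv 1$, and indeed the ratios $\mu^{(n)}(x)/\mu^{(n)}(e)$ flatten out as $n$ grows.

```python
import numpy as np

# Lazy simple random walk on Z; the unique positive solution of
# mu * nu = rho * nu with nu(0) = 1 and rho = 1 is nu = 1, so
# mu^{(n)}(x) / mu^{(n)}(0) -> 1 for every fixed x.
mu = np.array([0.25, 0.5, 0.25])

p = mu.copy()
for _ in range(299):
    p = np.convolve(p, mu)      # p = mu^{(300)}

c = len(p) // 2                 # index of the origin
ratios = p[c - 3:c + 4] / p[c]  # mu^{(300)}(x) / mu^{(300)}(0) for |x| <= 3
```

For $|x|\le 3$ and $n=300$ all ratios are already within a few percent of $1$.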
\smallskip
\textbf{\, Author's final, personal remarks.} When I wrote the monograph \cite{Wbook} in the
second half of the 1990s, ratio limit theorems were not an active topic,
having been replaced by the study of the asymptotic type of random walk transition probabilities and
the sharper local limit theorems. Therefore, having to keep the book size under control,
I had ``sacrificed'' the material on ratio limit theorems. In the meantime, the
subject has ``woken up'' again, e.g. in a recent paper of {\sc Dor-On}~\cite{Doron} and current
work of {\sc Dougall and Sharp}.
\begin{comment}
Another motivation for preparing this little supplement to \cite{Wbook} lies in the fact that
the old results re-presented here are not easily available online, and that they were written
in \emph{German.} Today's publication rush, where everything should be quickly available
online and written in English, may lead to ignoring such old work, or maybe to vaguely
citing it along with another, well-announced proof. While the latter is a reasonable attitude,
I have learnt that it often leads to the situation that the person cited for a result is the \emph{last}
author who proved it, the others being buried in oblivion. In recent discussions, the
attitude of referring to the last result as a ``discovery'' led to coining the expression that
the preceding results were ``prediscoveries''. There are numerous results of this type.
For example, the Hahn-Banach theorem was first proved by Hahn, but \emph{prediscovered}
(in a less general version) 15 years earlier by Helly -- working in Vienna like Hahn -- in his
PhD thesis. A generating function identity for cogrowth of groups is very frequently
attributed to Bartholdi, while it was \emph{prediscovered} 10 years earlier by Woess, and
still about 10 more years earlier was contained in the Doctoral thesis of Grigorchuk - which
admittedly is not easily accessible. So my own sensitivity in this respect has led me
to re-evoke some work of my former PhD advisor Gerl.
\end{comment}
% Source: arXiv:2209.00319, ``Some old and basic facts about random walks on groups'' (math.PR).
% Source: https://arxiv.org/abs/1911.07676
\title{Learning with Good Feature Representations in Bandits and in RL with a Generative Model}
\begin{abstract}
The construction by Du et al. (2019) implies that even if a learner is given linear features in $\mathbb R^d$ that approximate the rewards in a bandit with a uniform error of $\epsilon$, then searching for an action that is optimal up to $O(\epsilon)$ requires examining essentially all actions. We use the Kiefer-Wolfowitz theorem to prove a positive result that by checking only a few actions, a learner can always find an action that is suboptimal with an error of at most $O(\epsilon \sqrt{d})$. Thus, features are useful when the approximation error is small relative to the dimensionality of the features. The idea is applied to stochastic bandits and reinforcement learning with a generative model where the learner has access to $d$-dimensional linear features that approximate the action-value functions for all policies to an accuracy of $\epsilon$. For linear bandits, we prove a bound on the regret of order $\sqrt{dn \log(k)} + \epsilon n \sqrt{d} \log(n)$ with $k$ the number of actions and $n$ the horizon. For RL we show that approximate policy iteration can learn a policy that is optimal up to an additive error of order $\epsilon \sqrt{d}/(1 - \gamma)^2$ and using $d/(\epsilon^2(1 - \gamma)^4)$ samples from a generative model. These bounds are independent of the finer details of the features. We also investigate how the structure of the feature set impacts the tradeoff between sample complexity and estimation error.
\end{abstract}
\section{Introduction}
\citet{du2019good} ask whether ``good feature representation'' is sufficient for efficient reinforcement learning and suggest a negative answer.
Efficiency here means learning a good policy with a small number of interactions either with the environment (on-line learning),
or with a simulator (planning). A linear feature representation is called ``good'' if it approximates the value functions of \emph{all} policies with a small uniform error.
The same question can also be asked for learning in bandits.
The ideas by \citet{du2019good} suggest that the answer is also negative in finite-armed bandits with a misspecified linear model.
All is not lost, however. By relaxing the objective, we will show that
one can obtain positive results showing that efficient learning \textit{is} possible in interactive settings with good feature representations.
The rest of this article is organised as follows.
First we introduce the problem of learning to identify a near-optimal action with side information about the possible reward (\cref{sec:prob}).
We then adapt the argument of \citet{du2019good} to show that
no algorithm can find an $O(\epsilon)$-optimal action without examining nearly all the actions, even when the rewards lie within an $\epsilon$-vicinity of
a subspace spanned by some features available to the algorithm (\cref{sec:negative}).
The negative result is complemented by a positive result showing
that there exists an algorithm such that for any feature map of dimension $d$,
the algorithm is able to find an action with suboptimality gap of at most $O(\epsilon \sqrt{d})$ where $\epsilon$ is the maximum distance between
the reward and the subspace spanned by the features in the max-norm. The algorithm only needs to investigate the reward at
$O(d \log\log d)$ well-chosen actions.
The main idea is to use the Kiefer-Wolfowitz theorem with a least-squares estimator of the reward function.
Finally, we apply the idea to stochastic bandits (\cref{sec:bandits}) and reinforcement learning with a generative model (\cref{sec:rl}).
\paragraph{Related work}
Despite its importance, the problem of identifying near-optimal actions when rewards follow a misspecified linear model has only recently received attention.
Of course, there is the recent paper by \citet{du2019good}, whose negative result inspired this work and is summarised for our setting in Section~\ref{sec:negative}.
A contemporaneous work also addressing the issues raised by \citet{du2019good} is by \citet{DV19}, who make a connection to the Eluder dimension \citep{RR13} and prove
a variation on our Proposition~\ref{prop:upper}.
The setting studied here in Section~\ref{sec:negative} is closely related to the query complexity of exactly maximising a function in a given function class, which was studied by \citet{AKS11}.
They introduced the haystack dimension as a hardness measure for exact maximisation. Unfortunately, their results for infinite classes
are generally not tight and no results for misspecified linear models were provided.
Another related area is pure exploration in bandits, which was popularised in the machine learning community by \citet{EMM06,AB10}.
The standard problem is to identify a (near)-optimal action in an unstructured bandit. \citet{SLM14} study pure exploration in linear bandits, but
do not address the case where the model is misspecified. More general structured settings have also been considered by \citet{DKM19} and others.
The algorithms in these works begin by playing every action once, which is inconsequential in an asymptotic sense.
Our focus, however, is on the finite-time regime where the number of actions is very large, which makes these algorithms unusable.
We discuss the related work on linear bandits and RL in the relevant sections later.
\paragraph{Notation}
Given a matrix $A \in \mathbb{R}^{n \times m}$, the set of rows is denoted by $\operatorname{rows}(A)$ and its range is
$\Range(A) = \{A \theta : \theta \in \mathbb{R}^m\}$. When $A$ is positive semi-definite, we define $\norm{x}_A^2 = x^\top A x$.
The Minkowski sum of sets $U, V \subset \mathbb{R}^d$ is $U + V = \{u + v : u \in U, v \in V\}$.
The standard basis vectors are $e_1,\ldots,e_d$. There will never be ambiguity about deducing the dimension.
\section{Problem setup}\label{sec:prob}
We start with an abstract problem that is reminiscent of pure exploration in bandits, but without noise. Fix $\delta>0$ and
consider the problem of identifying a $\delta$-optimal action out of $k$ actions with the additional information that the unknown reward vector $\mu\in \mathbb{R}^k$
belongs to a known hypothesis set $\mathcal{H}\subset \mathbb{R}^k$. An action $j\in [k] = \{1,\ldots,k\}$ is $\delta$-optimal for $\mu = (\mu_i)_{i=1}^k$ if $\mu_j > \max_i \mu_i - \delta$.
The learner sequentially queries actions $i \in [k]$ and observes the reward $\mu_i$. At some point the learner should stop and output both an estimated optimal action $\hat a \in [k]$ and an estimation
vector $\hat \mu \in \mathbb{R}^k$. There is no noise, so the learner has no reason to query the same action twice. Of course, if the learner queries all the actions, then it knows both $\mu$ and the optimal action.
The learner is permitted to randomise.
Two objectives are considered. The first only measures the quality of the outputted action $\hat a$, while the second depends on $\hat \mu$.
\begin{definition}
A learner is called sound for $(\mathcal{H}, \delta)$ if $\norm{\hat \mu - \mu}_\infty < \delta$ almost surely for all $\mu \in \mathcal{H}$.
It is called max-sound for $(\mathcal{H},\delta)$ if $\mu_{\hat a} > \max_a \mu_a - \delta$ almost surely for all $\mu \in \mathcal{H}$.
\end{definition}
Denote by $q_\delta(\mathscr A,\mu)$ the expected number of queries learner $\mathscr A$ executes when interacting with the environment specified by $\mu$ and let
\begin{align*}
c^{\max}_\delta(\mathcal{H}) &= \inf_{\mathscr A : \mathscr A \text{ is } (\mathcal{H}, \delta)\text{-max-sound}} \sup_{\mu \in \mathcal{H}} q_\delta(\mathscr A,\mu) \\
c^{\operatorname{est}}_\delta(\mathcal{H}) &= \inf_{\mathscr A : \mathscr A \text{ is } (\mathcal{H}, \delta)\text{-sound}} \sup_{\mu \in \mathcal{H}} q_\delta(\mathscr A, \mu) \,,
\end{align*}
which are the minimax query complexities for max-sound/sound learners respectively when interacting with reward vectors in $\mathcal{H}$ and with error tolerance $\delta$.
Both complexity measures are monotone in the hypothesis class:
if $U\subset V$, then $c^{\max}_\delta(U)\le c^{\max}_\delta(V)$, and similarly for $c^{\operatorname{est}}_\delta$.
If a learner is sound for $(\mathcal{H}, \delta)$ and $\hat a = \argmax \hat \mu$, then clearly it is also max-sound for $(\mathcal{H}, 2\delta)$, which
shows that
\begin{align}
c^{\max}_{2\delta}(\mathcal{H}) \leq c^{\operatorname{est}}_{\delta}(\mathcal{H})\,.
\label{eq:rel}
\end{align}
Our primary interest is to understand $c^{\max}_\delta(\mathcal{H})$. Upper bounds, however, will be proven using \cref{eq:rel} and by controlling $c^{\operatorname{est}}_\delta(\mathcal{H})$. Furthermore,
in Section~\ref{sec:pos} we provide a simple characterisation of $c^{\operatorname{est}}_\delta(\mathcal{H})$, while $c^{\max}_\delta(\mathcal{H})$ is apparently more subtle.
Later we need the following intuitive result, which
says that the complexity of finding a near-optimal action when the hypothesis set consists of the unit vectors is linear in the number of actions. The
proof is given in
\ifsup
Section~\ref{sec:ellb}.
\else
the supplementary material.
\fi
\begin{lemma}\label{lem:obv}
$c^{\max}_1( \{ e_1, \dots, e_k \} ) = (k+1)/2$.
\end{lemma}
It follows from the aforementioned monotonicity that if $\{e_1,\ldots,e_k\} \subseteq \mathcal{H}$, then $c_1^{\max}(\mathcal{H}) \geq (k+1)/2$.
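The upper-bound half of \cref{lem:obv} can be illustrated by simulation (an editorial sketch): scanning the actions in a uniformly random order until the needle $\mu = e_i$ is found requires a number of queries that is uniform on $\{1,\dots,k\}$, hence $(k+1)/2$ in expectation.

```python
import random

random.seed(0)
k, trials = 9, 20000
total = 0
for _ in range(trials):
    needle = random.randrange(k)           # index i of the hidden mu = e_i
    order = random.sample(range(k), k)     # uniformly random scan order
    total += order.index(needle) + 1       # queries until mu_needle = 1 is seen
avg = total / trials                       # close to (k + 1) / 2 = 5
```

The empirical average matches $(k+1)/2$ up to Monte Carlo error.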
\section{Negative result}\label{sec:negative}
Let $\Phi \in \mathbb{R}^{k\times d}$.
The rows of $\Phi$ should be thought of as feature vectors assigned to each of the $k$ actions; accordingly we call $\Phi$ a feature matrix.
Furthermore, when $\mu \in \mathbb{R}^k$ and $a \in \operatorname{rows}(\Phi)$, we abuse notation by writing $\mu_a$ for the value of vector $\mu$ at the index of row $a$ in $\Phi$.
Our interest lies in the regime where $k$ is much larger than $d$ and where $\exp(d)$ is large.
We consider hypothesis classes where the true reward lies within an $\epsilon$-vicinity of $\Range(\Phi)$ as measured in max-norm.
Let $\mathcal{H}_\Phi^\epsilon = \Range(\Phi) + B_\infty(\epsilon)$, where $B_\infty(\epsilon) = [-\epsilon, \epsilon]^k$ is a $k$-dimensional hypercube.
How large is $c_\delta^{\max}(\mathcal{H}_\Phi^\epsilon)$ for different regimes of $\delta$ and $\epsilon$ and feature matrices?
As we shall see, for $\delta=\Omega(\epsilon \sqrt{d})$ one can keep the complexity small, while for smaller $\delta$ there exist feature matrices for which
the complexity can be as high as the large dimension, $k$.
The latter result follows from the core argument of the recent paper by \citet{du2019good}.
The next lemma is the key tool, and is a consequence of the Johnson--Lindenstrauss lemma.
It shows that there exist matrices $\Phi \in \mathbb{R}^{k \times d}$ with $k$ much larger than $d$ where
rows have unit length and all non-equal rows are almost orthogonal.
\begin{lemma}
\label{lem:jl}
For any $\epsilon>0$ and $d\in [k]$ such that
$d\ge \lceil 8 \log(k)/\epsilon^2 \rceil$,
there exists a feature matrix $\Phi\in \mathbb{R}^{k\times d}$ with unique rows such that
for all $a, b \in \operatorname{rows}(\Phi)$ with $a \neq b$, $\norm{a}_2 = 1$ and $|a^\top b| \leq \epsilon$.
\end{lemma}
\cref{lem:obv,lem:jl} together imply the promised result:
\begin{proposition}\label{prop:badmx}
For any $\epsilon>0$ and $d\in [k]$ with
$d\ge \lceil 8 \log(k)/\epsilon^2 \rceil$,
there exists a feature matrix $\Phi\in \mathbb{R}^{k\times d}$ such that $c_1^{\max}(\mathcal{H}_\Phi^\epsilon) \geq (k+1)/2$.
\end{proposition}
\begin{proof}
Let $\Phi$ be the matrix from \cref{lem:jl} with $\operatorname{rows}(\Phi) = (a_i)_{i=1}^k$.
We want to show that $e_i \in \mathcal{H}_\Phi^\epsilon$ for all $i \in [k]$ and then apply \cref{lem:obv}.
If $\theta = a_i$, then $\Phi \theta = ( a_1^\top a_i, \dots, a_i^\top a_i, \dots, a_k^\top a_i)^\top$.
By the choice of $\Phi$ the $i$th component is one and the others are all less than $\epsilon$ in absolute value.
Hence, $\|\Phi \theta - e_i\|_\infty\le \epsilon$, which completes the proof.
\end{proof}
The proposition has a worst-case flavour. Not all feature matrices have a high query complexity.
For a silly example, the query complexity of the zero matrix $\Phi=\mathbf{0}$ satisfies $c_1^{\max}(\mathcal{H}_\Phi^\epsilon) = 0$ provided $\epsilon < 1/2$: all rewards lie in $[-\epsilon,\epsilon]$, so any fixed answer $\hat a$ is $1$-optimal without querying.
That said, the matrix witnessing the claims in \cref{lem:jl} can be found with non-negligible probability by sampling the rows of $\Phi$ uniformly from the surface of
the $(d-1)$-dimensional sphere.
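To make this remark concrete, here is a small numerical check (an editorial sketch; the constant $8$ is the one from \cref{lem:jl}): sampling $k$ rows uniformly from the unit sphere in dimension $d=\lceil 8\log(k)/\epsilon^2\rceil$ typically yields pairwise inner products below $\epsilon$ in absolute value.

```python
import numpy as np

rng = np.random.default_rng(0)
k, eps = 60, 0.5
d = int(np.ceil(8 * np.log(k) / eps**2))       # d = 132 for these values

A = rng.normal(size=(k, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)  # rows uniform on the sphere S^{d-1}

G = A @ A.T                                    # Gram matrix of the rows
max_offdiag = np.abs(G - np.eye(k)).max()      # max |a^T b| over distinct rows
```

For this seed the maximal off-diagonal inner product is well below $\epsilon$, as the lemma requires.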
There is another way of writing this result, emphasising the role of the dimension rather than the number of actions.
\begin{corollary}\label{cor:badmx}
For all $\delta > \epsilon$, there exists a feature matrix $\Phi \in \mathbb{R}^{k \times d}$ with suitably large $k$ such that
\begin{align*}
c_\delta^{\max}(\mathcal{H}^\epsilon_\Phi) \geq \frac{1}{2} \exp\left(\frac{d-1}{8} \left(\frac{\epsilon}{\delta}\right)^2\right)\,.
\end{align*}
\end{corollary}
The proof follows by rescaling the features in \cref{prop:badmx} and is given in
\ifsup
\cref{app:cor:badmx}.
\else
the supplementary material.
\fi
\section{Positive result}\label{sec:pos}
The negative result of the previous section is complemented with a positive result showing that the query
complexity can be bounded independently of $k$ whenever $\delta = \Omega(\epsilon \sqrt{d})$.
For the remainder of the article we make the following assumption:
\begin{assumption}
$\Phi \in \mathbb{R}^{k \times d}$ has unique rows and the span of $\operatorname{rows}(\Phi)$ is all of $\mathbb{R}^d$.
\end{assumption}
We discuss the relationship between this result and \cref{prop:badmx} at the end of the section.
\begin{proposition}\label{prop:upper}
Let $\Phi \in \mathbb{R}^{k \times d}$ and
$\delta > 2\epsilon(1 + \sqrt{2d})$. Then, $c^{\max}_\delta(\mathcal{H}_\Phi^\epsilon) \leq 4 d \log \log d + 16$.
\end{proposition}
The proof relies on the Kiefer--Wolfowitz theorem, which we now recall.
Given a probability distribution $\rho : \operatorname{rows}(\Phi) \to [0,1]$, let $G(\rho) \in \mathbb{R}^{d\times d}$ and $g(\rho) \in \mathbb{R}$ be given by
\begin{align*}
G(\rho) &= \sum_{a \in \operatorname{rows}(\Phi)} \rho(a) a a^\top\,, &
g(\rho) &= \max_{a\in \operatorname{rows}(\Phi)} \norm{a}_{G(\rho)^{-1}}^2 \,.
\end{align*}
\begin{theorem}[\citealt{KW60}]\label{thm:kiefer-wolfowitz}
The following are equivalent:
\begin{enumerate}[itemsep=0pt,nosep]
\item $\rho^*$ is a minimiser of $g$.
\item $\rho^*$ is a maximiser of $f(\rho) = \log \det G(\rho)$. \label{thm:des:kw:2}
\item $g(\rho^*) = d$. \label{thm:des:kw:3}
\end{enumerate}
Furthermore, there exists a minimiser $\rho^*$ of $g$ whose support has cardinality $|\operatorname{supp}(\rho^*)| \leq d(d+1)/2$.
\end{theorem}
The distribution $\rho^*$ is called an (optimal) experimental design and
the elements of its support are called its core set.
Intuitively, when covariates are sampled from $\rho$, then $g(\rho)$
is proportional to the maximum variance of the corresponding least-squares estimator
over all directions in $\operatorname{rows}(\Phi)$. Hence, minimising $g$ corresponds to minimising the worst-case variance of the resulting least-squares estimator.
A geometric interpretation is that the core set lies on the boundary of the central ellipsoid of minimum volume that contains $\operatorname{rows}(\Phi)$.
The next theorem shows that there exists a near-optimal design with a small core set.
The proof follows immediately from part (ii) of Lemma 3.9 in the book by \citet{Tod16},
which also provides an algorithm for computing such a distribution in roughly order $k d^2$ computation steps.
\begin{theorem}\label{thm:todd}
There exists a probability distribution $\rho$ such that $g(\rho) \leq 2d$ and the core set of $\rho$ has size at most $4d \log \log(d) + 16$.
\end{theorem}
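The following sketch (added in editing) implements the classical Fedorov--Wynn exchange, one standard way to compute near-optimal designs; it is \emph{not} the algorithm analysed by \citet{Tod16} and makes no attempt to keep the core set small, but it shows how $g(\rho)$ is driven towards $d$ by repeatedly boosting the action with the largest $\norm{a}^2_{G(\rho)^{-1}}$.

```python
import numpy as np

def approx_design(A, target=2.0, max_iter=500):
    """Fedorov-Wynn exchange: returns weights rho with g(rho) <= target * d."""
    k, d = A.shape
    rho = np.full(k, 1.0 / k)                  # start from the uniform design
    for _ in range(max_iter):
        G = A.T @ (rho[:, None] * A)           # G(rho) = sum_a rho(a) a a^T
        lev = np.einsum('ij,jl,il->i', A, np.linalg.inv(G), A)  # ||a||^2_{G^{-1}}
        g = lev.max()
        if g <= target * d:
            break
        i = int(lev.argmax())
        gamma = (g - d) / (d * (g - 1.0))      # exact line search for log det G
        rho *= 1.0 - gamma                     # shift mass towards the worst-
        rho[i] += gamma                        # covered action
    return rho, g

rng = np.random.default_rng(0)
A = rng.normal(size=(300, 5))                  # k = 300 actions, d = 5
rho, g = approx_design(A)
```

The step size $\gamma$ maximises $\log\det$ along the segment between $\rho$ and the point mass at the worst-covered action, mirroring criterion \ref{thm:des:kw:2} of \cref{thm:kiefer-wolfowitz}.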
The proof of \cref{prop:upper} is a corollary of the following more general result about least-squares estimators
over near-optimal designs.
\begin{proposition}\label{prop:inf}
Let $\mu \in \mathcal{H}_\Phi^\epsilon$ and $\eta \in [-\beta, \beta]^k$.
Suppose that $\rho$ is a probability distribution over $\operatorname{rows}(\Phi)$ satisfying the conclusions of \cref{thm:todd}.
Then $\norm{\Phi \hat \theta - \mu}_\infty \leq \epsilon + (\epsilon + \beta) \sqrt{2d}$, where
\begin{align*}
\hat \theta = G(\rho)^{-1} \sum_{a \in \operatorname{rows}(\Phi)} \rho(a) (\mu_a + \eta_a) a \,.
\end{align*}
\end{proposition}
The problem can be reduced to the case where $\eta = \mathbf{0}$ by noting that $\mu + \eta \in \mathcal{H}_\Phi^{\epsilon+\beta}$.
The only disadvantage is that this leads to an additional additive dependence on $\beta$.
\begin{proof}
Let $\mu = \Phi \theta + \Delta$ where $\norm{\Delta}_\infty \leq \epsilon$.
The difference between $\hat \theta$ and $\theta$ can be written as
\begin{align*}
\hat \theta - \theta
&= G(\rho)^{-1} \sum_{a \in \operatorname{rows}(\Phi)} \rho(a) a\left(a^\top \theta + \Delta_a + \eta_a\right) - \theta \\
&= G(\rho)^{-1} \sum_{a \in \operatorname{rows}(\Phi)} \rho(a) (\Delta_a + \eta_a) a\,.
\end{align*}
Next, for any $b \in \operatorname{rows}(\Phi)$,
\begin{align*}
&\ip{b, \hat \theta - \theta}
= \bip{b, G(\rho)^{-1} \sum_{a \in \operatorname{rows}(\Phi)} \rho(a) (\Delta_a+\eta_a) a} \nonumber \\
&= \sum_{a \in \operatorname{rows}(\Phi)} \rho(a) (\Delta_a + \eta_a) \ip{b, G(\rho)^{-1} a} \nonumber \\
&\leq (\epsilon + \beta) \sum_{a \in \operatorname{rows}(\Phi)} \rho(a) |\ip{b, G(\rho)^{-1} a}| \nonumber \\
&\leq (\epsilon + \beta) \sqrt{\sum_{a \in \operatorname{rows}(\Phi)} \rho(a) \ip{b, G(\rho)^{-1} a}^2} \nonumber \\
&= (\epsilon + \beta) \sqrt{\sum_{a \in \operatorname{rows}(\Phi)} \rho(a) b^\top G(\rho)^{-1} a a^\top G(\rho)^{-1} b} \\
&= (\epsilon + \beta) \sqrt{\norm{b}^2_{G(\rho)^{-1}}}
\leq (\epsilon + \beta) \sqrt{g(\rho)} \leq (\epsilon + \beta) \sqrt{2d}\,,
\end{align*}
where the first inequality follows from H\"older's inequality and the
fact that $\norm{\Delta}_\infty \leq \epsilon$, the second by Jensen's inequality and the last two by our choice of $\rho$ and \cref{thm:todd}.
Therefore
\begin{align*}
\ip{b, \hat \theta} \leq \ip{b, \theta} + (\epsilon+\beta) \sqrt{2d} \leq \mu_b + \epsilon + (\epsilon + \beta) \sqrt{2d}\,.
\end{align*}
A symmetrical argument completes the proof.
\end{proof}
\begin{proof}[Proof of \cref{prop:upper}]
Let $\rho$ be a probability distribution over $\operatorname{rows}(\Phi)$ satisfying the conclusions of \cref{thm:todd}.
Consider the algorithm that evaluates $\mu$ on each point of the support of $\rho$ and computes the least-squares
estimator defined in \cref{prop:inf} and predicts $\hat a = \argmax_{a \in \operatorname{rows}(\Phi)} \ip{a, \hat \theta}$.
Let $a^*=\argmax_{a\in \operatorname{rows}(\Phi)} \mu_a$ be the optimal action.
Then by \cref{prop:inf} with $\eta = \mathbf{0}$,
\begin{align*}
\mu_{\hat a}
&\geq \ip{\hat a, \hat \theta} - \epsilon\left(1 + \sqrt{2d}\right)
\geq \ip{a^*, \hat \theta} - \epsilon\left(1 + \sqrt{2d}\right) \\
&\geq \mu_{a^*} - 2\epsilon \left(1 + \sqrt{2d}\right) > \mu_{a^*} - \delta\,.
\qedhere
\end{align*}
\end{proof}
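The whole argument can be checked numerically on a toy instance (an editorial sketch, using the normalised hypercube $\{\pm 1/\sqrt{d}\}^d$ as action set, for which the uniform design satisfies $g(\rho)=d$ exactly by \cref{thm:kiefer-wolfowitz}): query $\mu$ on the design, form the least-squares estimate of \cref{prop:inf} with $\eta=\mathbf{0}$, and verify the bounds $\norm{\Phi\hat\theta-\mu}_\infty \le \epsilon(1+\sqrt{g(\rho)})$ and $\mu_{a^*}-\mu_{\hat a} \le 2\epsilon(1+\sqrt{g(\rho)})$.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
d, eps = 6, 0.05

# rows(Phi): all sign vectors of {-1,+1}^d, normalised to unit length.
Phi = np.array(list(itertools.product([-1.0, 1.0], repeat=d))) / np.sqrt(d)
k = Phi.shape[0]                                   # k = 2^d = 64

rho = np.full(k, 1.0 / k)                          # uniform design
G = Phi.T @ (rho[:, None] * Phi)                   # equals I/d for this set
lev = np.einsum('ij,jl,il->i', Phi, np.linalg.inv(G), Phi)
# Kiefer-Wolfowitz criterion (3) holds with g(rho) = d, so rho is optimal.

# A reward vector in H_Phi^eps: linear part plus a max-norm eps perturbation.
theta = rng.normal(size=d)
mu = Phi @ theta + rng.uniform(-eps, eps, size=k)

# Noiseless least squares over the design (eta = 0 in Proposition prop:inf).
theta_hat = np.linalg.solve(G, Phi.T @ (rho * mu))
est_err = np.abs(Phi @ theta_hat - mu).max()       # <= eps * (1 + sqrt(d))
a_hat = int(np.argmax(Phi @ theta_hat))
gap = mu.max() - mu[a_hat]                         # <= 2 * eps * (1 + sqrt(d))
```

For this symmetric action set the design queries all $k$ actions; the point of the check is the error bound, not the query count, which \cref{thm:todd} controls in general.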
\paragraph{Discussion}
\cref{cor:badmx} shows that the query complexity is exponential in $d$ when
$\delta$ is not much larger than $\epsilon$, but is benign when $\delta = \Omega(\epsilon \sqrt{d})$.
The positive result shows that in the latter regime the complexity is more or less linear in $d$. Precisely,
\begin{align*}
\min\left\{\delta : c^{\max}_\delta(\mathcal{H}_\Phi^\epsilon) \leq 4 d \log \log(d) + 16 \right\} = O(\epsilon \sqrt{d})\,.
\end{align*}
The message is that there is a sharp tradeoff between query complexity and error. The learner pays dearly in terms of query complexity if they demand an estimation error that is close to the approximation error.
By sacrificing a factor of $\sqrt{d}$ in estimation error, the query complexity is practically just linear in $d$.
\paragraph{Comparison to supervised learning}
As noted by \citet{du2019good}, the negative result does not hold in supervised learning, where the learner is judged on its average prediction error
with respect to the data generating distribution.
Suppose that $a,a_1,\ldots,a_n$ are sampled i.i.d.\ from some distribution $P$ on $\operatorname{rows}(\Phi)$, and that the learner observes $(a_t)_{t=1}^n$ and $(\mu_{a_t})_{t=1}^n$ and forms the least-squares estimator
\begin{align*}
\hat \theta = \left(\sum_{t=1}^n a_t a_t^\top\right)^{-1} \sum_{t=1}^n a_t \mu_{a_t}\,.
\end{align*}
Then, by making reasonable boundedness and span assumptions on $\operatorname{rows}(\Phi)$, and by combining the results in chapters 13 and 14 of \citet{Wai19}, with high probability,
\begin{align*}
\mathbb E\left[(a^\top \hat \theta - \mu_a)^2 \,\Big|\, \hat \theta\right] = O\left(\frac{d}{n} + \epsilon^2\right)\,.
\end{align*}
Notice that there is no $d$ multiplying the dependence on the approximation error.
The fundamental difference is that $a$ is sampled from $P$. The quantity $\max_{a \in \operatorname{rows}(\Phi)} (a^\top \hat \theta - \mu_a)^2$ behaves quite differently, as the lower bound shows.
\paragraph{Feature-dependent bounds}
The negative result in Section~\ref{sec:negative} shows that there \textit{exist} feature matrices for which the
learner must query exponentially many actions or suffer an estimation error that expands the approximation error by a factor of $\sqrt{d}$.
On the other hand, Proposition~\ref{prop:upper} shows that for \textit{any} feature matrix, there exists a learner that queries $O(d \log \log(d))$
actions for an estimation error of $\epsilon \sqrt{d}$, roughly matching the lower bound.
One might wonder whether or not there exists a feature-dependent measure that characterises the blowup in estimation error in terms of the feature matrix and
query budget. One such measure is given here. Given a set $C \subseteq [k]$ with $|C| = q$, let $\Phi_C \in \mathbb{R}^{q \times d}$ be the matrix obtained from $\Phi$
by restricting to those rows indexed by $C$. Define
\begin{align*}
\lambda_q(\Phi)
&= \min_{\substack{C \subset [k], |C| = q}} \max_{v \in \mathbb{R}^d \setminus \{\mathbf{0}\}} \frac{\norm{\Phi v}_\infty}{\norm{\Phi_C v}_\infty}\,.
\end{align*}
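For small instances, $\lambda_q(\Phi)$ can be evaluated exactly: for fixed $C$, the inner maximum equals $\max_{i \in [k]} \max\{\phi_i^\top v : \norm{\Phi_C v}_\infty \leq 1\}$ by symmetry of the constraint set, i.e.\ a family of linear programs. The following brute-force sketch is an illustration only (it assumes SciPy's LP solver and a tiny, arbitrary feature matrix), enumerating all subsets $C$:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def lambda_q(Phi, q):
    k, d = Phi.shape
    best = np.inf
    for C in combinations(range(k), q):
        Phi_C = Phi[list(C)]
        # max_v ||Phi v||_inf subject to ||Phi_C v||_inf <= 1,
        # solved as one LP per row of Phi
        A_ub = np.vstack([Phi_C, -Phi_C])
        b_ub = np.ones(2 * q)
        worst = 0.0
        for i in range(k):
            res = linprog(-Phi[i], A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * d)
            if res.status == 3:          # unbounded: Phi_C does not span R^d
                worst = np.inf
                break
            worst = max(worst, -res.fun)
        best = min(best, worst)
    return best

Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
lam = lambda_q(Phi, 2)
```

Here the minimising subset is the last two rows, for which the blowup is $1$; choosing the first two rows instead would give a ratio of $2$ via the row $(1, 1)$.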
\begin{proposition}\label{prop:bounds}
Let $1 \leq q < k$ and
$\delta_1 = \epsilon(1 + \lambda_q(\Phi))$ and $\delta_2 > \epsilon(1 + 2\lambda_q(\Phi))$.
Then,
\begin{align*}
c^{\operatorname{est}}_{\delta_1}(\mathcal{H}^\epsilon_\Phi) > q \geq c^{\operatorname{est}}_{\delta_2}(\mathcal{H}^\epsilon_\Phi)\,.
\end{align*}
\end{proposition}
\ifsup
The proof is supplied in Appendix~\ref{app:bounds}.
\else
The proof is supplied in the supplementary material.
\fi
By (\ref{eq:rel}), it also holds that $c^{\max}_{2\delta_2}(\mathcal{H}^\epsilon_\Phi) \leq q$. Currently we do not have a corresponding lower bound, however.
\section{Misspecified linear bandits}\label{sec:bandits}
Here we consider the classic stochastic bandit where the mean rewards are nearly a linear function of their associated features.
We assume for simplicity that no two actions have the same features. In case this does not hold, a representative action can be chosen for each feature without
changing the main theorem.
Let $\Phi \in \mathbb{R}^{k \times d}$ and $\mu \in \mathcal{H}_\Phi^\epsilon$.
In rounds $t \in [n]$, the learner chooses actions $(X_t)_{t=1}^n$ with $X_t \in \operatorname{rows}(\Phi)$ and the reward is $Y_t = \mu_{X_t} + \eta_t$
where $(\eta_t)_{t=1}^n$ is a sequence of independent $1$-subgaussian random variables.
The optimal action has expected reward $\mu^* = \max_{a \in \mathcal{A}} \mu_a$ and the
expected regret is $R_n = \mathbb E[\sum_{t=1}^n \mu^* - \mu_{X_t}]$.
The idea is to use essentially the same elimination algorithm as \citet[chapter~22]{LaSz18:book}, which is summarised in \cref{alg:elim}.
In each episode, the algorithm computes a near-optimal design over a subset of the actions that are plausibly optimal.
It then chooses each action in proportion to the optimal design and eliminates arms that appear sufficiently suboptimal.
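A near-optimal design can be computed by a standard Frank--Wolfe scheme that repeatedly mixes mass onto the worst-covered action. The sketch below is an illustration only (the instance and iteration count are arbitrary); the sparsity guarantee $|\operatorname{supp}(\rho)| \leq 4d \log\log d + 16$ used by the algorithm requires the separate argument of \cref{thm:todd}:

```python
import numpy as np

def g(rho, X):
    # g(rho) = max_a ||a||^2_{G(rho)^{-1}}, the Kiefer--Wolfowitz criterion
    G = (X * rho[:, None]).T @ X
    Ginv = np.linalg.inv(G)
    return float(np.max(np.einsum('ij,jk,ik->i', X, Ginv, X)))

def fw_design(X, iters=500):
    # Frank--Wolfe: mix mass onto the worst-covered action, using the
    # closed-form line-search step for the D-optimal objective
    k, d = X.shape
    rho = np.full(k, 1.0 / k)
    for _ in range(iters):
        G = (X * rho[:, None]).T @ X
        norms = np.einsum('ij,jk,ik->i', X, np.linalg.inv(G), X)
        j = int(np.argmax(norms))
        gmax = norms[j]
        step = (gmax / d - 1.0) / (gmax - 1.0)
        rho = (1.0 - step) * rho
        rho[j] += step
    return rho

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
rho = fw_design(X)
```

Since $\sum_a \rho(a)\norm{a}^2_{G(\rho)^{-1}} = d$, the criterion always satisfies $g(\rho) \geq d$; Frank--Wolfe drives it towards $d$, comfortably below the $2d$ required by the algorithm.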
\begin{proposition}\label{prop:linear}
When $\alpha = 1/(kn)$ and $C$ is a suitably large universal constant
and $\max_a \mu_a - \min_a \mu_a\le 1$,
\cref{alg:elim} satisfies
\begin{align*}
R_n \leq C\left[\sqrt{d n \log(nk)} + \epsilon n \sqrt{d} \log(n)\right]\,.
\end{align*}
\end{proposition}
\ifsup
In Appendix~\ref{sec:bandit-lower},
\else
In the supplementary material,
\fi
we show that the bound in Proposition~\ref{prop:linear} is tight up to logarithmic factors in the interesting regime where $k$ is comparable to $n$.
\begin{proof}
Let $\mu = \Phi \theta + \Delta$ with $\norm{\Delta}_\infty \leq \epsilon$, which exists by the assumption that $\mu \in \mathcal{H}_\Phi^\epsilon$.
We only analyse the behaviour of the algorithm within an episode, showing that the least-squares estimator is guaranteed
to have sufficient accuracy so that (a) arms that are sufficiently suboptimal are eliminated and (b) some near-optimal arms are retained. Fix any $b\in \mathcal{A}$.
Using the notation in \cref{alg:elim},
\begin{align}
&\left|\ip{b, \hat \theta - \theta}\right|
= \left|b^\top G^{-1} \sum_{s=1}^u \Delta_{X_s} X_s + b^\top G^{-1} \sum_{s=1}^u X_s \eta_s\right| \nonumber \\
&\,\,\leq \left|b^\top G^{-1} \sum_{a \in \mathcal{A}} u(a) a \Delta_a \right| + \left|b^\top G^{-1} \sum_{s=1}^u X_s \eta_s\right|\,.
\label{eq:bandit1}
\end{align}
The first term is bounded using H\"older's and Jensen's inequalities as before:
\begin{align*}
&\left|b^\top G^{-1} \sum_{a \in \mathcal{A}} u(a) \Delta_a a\right|
\leq \epsilon \sum_{a \in \mathcal{A}} u(a) \left|b^\top G^{-1} a\right| \\
&\quad\leq \epsilon \sqrt{\left(\sum_{a \in \mathcal{A}} u(a)\right) b^\top \sum_{a \in \mathcal{A}} u(a) G^{-1} aa^\top G^{-1} b} \\
&\quad= \epsilon \sqrt{\sum_{a \in \mathcal{A}} u(a) \norm{b}^2_{G^{-1}}}
\leq \epsilon \sqrt{\frac{2d u}{m}}
\leq 2\epsilon \sqrt{d}\,,
\end{align*}
where the first inequality follows from H\"older's inequality,
the second is Jensen's inequality and the last follows from the exploration distribution that guarantees $\norm{b}^2_{G^{-1}} \leq 2d/m$.
The second term in \cref{eq:bandit1} is bounded using standard concentration bounds. Precisely, by eq.~(20.2) of \cite{LaSz18:book}, with probability at least $1 - 2\alpha$,
\begin{align*}
\left|b^\top G^{-1} \sum_{s=1}^u X_s \eta_s\right|
&\leq \norm{b}_{G^{-1}} \sqrt{2 \log\left(\frac{1}{\alpha}\right)} \\
&\leq \sqrt{\frac{4d}{m} \log\left(\frac{1}{\alpha}\right)}
\end{align*}
and $|\ip{b, \hat \theta - \theta} |\le 2\epsilon \sqrt{d} +
\sqrt{\frac{4d}{m} \log\left(\frac{1}{\alpha}\right)}$.
Continuing with standard calculations, provided in
\ifsup
\cref{app:bandit},
\else
the supplementary material
\fi
one gets that the expected regret satisfies
\begin{align*}
R_n \leq C\left[\sqrt{d n \log(nk)} + \epsilon n \sqrt{d} \log(n)\right]
\end{align*}
where $C > 0$ is a suitably large universal constant. The logarithmic factor in the second term is due to the fact that in each of the logarithmically many episodes the algorithm may eliminate the best remaining arm, but
keep an arm that is at most $O(\epsilon \sqrt{d})$ worse than the best remaining arm.
\end{proof}
\newcommand{\algitem}[1]{\item[(#1)]}
\begin{algorithm}
\textsc{input} $\Phi \in \mathbb{R}^{k \times d}$ and confidence level $\alpha \in (0,1)$
\begin{enumerate}[leftmargin=0.6cm]
\algitem{1} Set $m = \ceil{4 d \log \log d} + 16$ and $\mathcal{A} = \operatorname{rows}(\Phi)$
\algitem{2} Find design $\rho : \mathcal{A} \to [0,1]$ with $g(\rho) \leq 2d$ and $|\operatorname{supp}(\rho)| \leq 4d \log \log(d) + 16$
\algitem{3} Compute $\displaystyle u(a) = \ceil{m\rho(a)}$ and $\displaystyle u = \sum_{a \in \mathcal{A}} u(a)$
\algitem{4} Take each action $a \in \mathcal{A}$ exactly $u(a)$ times with corresponding features $(X_s)_{s=1}^u$ and rewards $(Y_s)_{s=1}^u$
\algitem{5} Calculate the vector $\hat \theta$:
\begin{align*}
\hat \theta = G^{-1} \sum_{s=1}^u X_s Y_s \quad \text{with} \quad G = \sum_{a \in \mathcal{A}} u(a) aa^\top
\end{align*}
\algitem{6} Update active set: \\ \!\!\!$\displaystyle \mathcal{A} \leftarrow \left\{a \in \mathcal{A} : \max_{b \in \mathcal{A}} \ip{\hat \theta, b - a} \leq 2\sqrt{\frac{4d}{m} \log\left(\frac{1}{\alpha}\right)}\right\}$.
\algitem{7} $m \leftarrow 2m$ and \textsc{goto} (2)
\end{enumerate}
\caption{\textsc{phased elimination}}\label{alg:elim}
\end{algorithm}
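The following compressed simulation of \cref{alg:elim} is illustrative only: uniform exploration over the active set stands in for the near-optimal design of step (2), the misspecification is set to zero, all constants are arbitrary, and a pseudo-inverse covers episodes in which the active set no longer spans $\mathbb{R}^d$ (cf.\ the remark below):

```python
import numpy as np

rng = np.random.default_rng(1)
k, d, alpha = 20, 3, 0.01
Phi = rng.standard_normal((k, d))
theta = rng.standard_normal(d)
mu = Phi @ theta                          # epsilon = 0: exactly linear means

active = np.arange(k)                     # indices of plausibly optimal arms
m = 100
for episode in range(5):
    A = Phi[active]
    # uniform allocation in place of the near-optimal design of step (2)
    counts = np.full(len(active), int(np.ceil(m / len(active))))
    G = (A * counts[:, None]).T @ A       # Gram matrix of step (5)
    s = np.zeros(d)
    for a, c in zip(A, counts):           # pull arm `c` times, sum the rewards
        s += a * (c * (a @ theta) + 0.05 * rng.standard_normal(c).sum())
    theta_hat = np.linalg.pinv(G) @ s     # pinv: valid on span(A) if rank-deficient
    est = A @ theta_hat
    radius = 2 * np.sqrt(4 * d / m * np.log(1 / alpha))  # elimination rule, step (6)
    active = active[est >= est.max() - radius]
    m *= 2

best = int(np.argmax(mu))
```

With the conservative elimination radius of step (6), the optimal action survives every episode with high probability.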
\begin{remark}
When the active set contains fewer than $d$ actions, the conditions of the Kiefer--Wolfowitz theorem are not satisfied because $\mathcal{A}$ cannot span $\mathbb{R}^d$.
Rest assured, however, since in these cases one can simply work in the smaller space spanned by $\mathcal{A}$ and the analysis goes through without further changes.
\end{remark}
\paragraph{Known approximation error}
The logarithmic factor in the second term in the regret bound can be removed when $\epsilon$ is known by modifying the elimination criteria so that with high probability the
optimal action is never eliminated, as explained in
\ifsup
\cref{rem:known}.
\else
the supplementary material.
\fi
\paragraph{Infinite action sets}
The logarithmic dependence on $k$ follows from the choice of $\alpha$, which is needed to guarantee the concentration holds for all actions.
When $k = \Omega(\exp(d))$, the union bound can be improved by a covering argument or using the argument in the next section. This leads to a bound
of $O(d \sqrt{n \log(n)} + \epsilon n \sqrt{d} \log(n))$, which is independent of the number of arms.
\paragraph{Other approaches}
We are not the first to consider misspecified linear bandits.
\citet{ghchgo07} consider the same setting and show that in the favourable case when one can cheaply test linearity, there exist algorithms for which the regret has
order $\min(d, \sqrt{k}) \sqrt{n}$ up to logarithmic factors. While such results are certainly welcome, our focus is on the case where $k$ has the same order of magnitude as $n$ and hence
the dependence of the regret on $\epsilon$ is paramount.
Another way to obtain a similar result to ours is to use the Eluder dimension \citep{RR13}, which should first be generalised a little to accommodate the need to use an accuracy
threshold that does not decrease with the horizon. Then the Eluder dimension can be controlled using either our techniques or the alternative argument by \citet{DV19}.
\paragraph{Contextual linear bandits}
Algorithms based on phased elimination are not easily adapted to the contextual case, which is usually addressed using optimistic methods.
You might wonder whether or not LinUCB \citep{AST11} serendipitously adapts to misspecified models in the contextual case.
\citet{GMZ16} have shown that LinUCB \textit{is} robust in the non-contextual case when $\epsilon$ is very small.
Their conditions, however, depend on the structure of the problem, and in particular on having good control of the $2$-norm of $\Delta$, which may
scale like $\Omega(\epsilon \sqrt{k})$ and is too big for large action sets.
We provide a negative result in the supplementary material, as well as a modification that corrects the algorithm, but requires knowledge of the approximation error.
The modification is a data-dependent refinement of the bonus used by \citet{JYW19}.
An open question is to find an algorithm for contextual linear bandits whose regret is similar to \cref{prop:linear} and which does not
need to know the approximation error.
\section{Reinforcement learning}\label{sec:rl}
We now consider discounted reinforcement learning with a generative model,
which means the learner can sample next-states and rewards for any state-action pair of their choice.
The notation is largely borrowed from \citet{Sze10}.
Fix an MDP with state space $[S]$, action space $[A]$, transition kernel $P$, reward function $r : [S] \times [A] \to [0,1]$
and discount factor $\gamma \in (0, 1)$. The finiteness of the state space is assumed only for simplicity.
As usual, $V^\pi$ and $Q^\pi$ refer to the value and action-value functions for policy $\pi$ (e.g., $V^\pi(s)$ is the total expected discounted reward incurred while following policy $\pi$ in the MDP) and $V^*$ and $Q^*$ the same for the optimal policy.
The learner is given a feature matrix $\Phi \in \mathbb{R}^{SA \times d}$ such that $Q^\pi \in \mathcal{H}_\Phi^\epsilon$ for all policies $\pi$ and where $Q^\pi$ is vectorised
in the obvious way. The notation $\Phi(s, a) \in \mathbb{R}^d$ denotes the feature associated with state-action pair $(s, a)$.
The main idea is the observation that if $Q^*$ were known with reasonable accuracy on the support
of an approximately optimal design $\rho$ on the set of vectors $(\Phi(s, a) : s,a \in [S] \times [A])$, then least-squares in combination with our earlier arguments
would provide a good estimation of the optimal state-action value function.
Approximating $Q^*$ on the core set $\mathcal{C} = \operatorname{supp}(\rho) \subset [S] \times [A]$ is possible using approximate policy iteration.
For the remainder of this section let $\rho$ be a design with $g(\rho) \leq 2d$ and with support $\mathcal{C}$ and
$G(\rho) = \sum_{(s, a) \in \mathcal{C}} \rho(s, a) \Phi(s, a) \Phi(s,a)^\top$.
\paragraph{Related work}
The idea of extrapolating a value function from a few anchor state--action pairs has appeared in several earlier works.
The recent work by \citet{ZLK19} considers approximate value iteration in the episodic
setting and does not make a connection to optimal design.
The challenge in the finite-horizon setting is that one must learn one parameter vector for each layer and, at least naively, errors propagate multiplicatively.
For this reason using the anchor pairs from the support of an experimental design would not make the algorithm proposed by the aforementioned paper practical.
\citet{YW19} assume the transition matrix has a linear structure and also use least-squares with data from a pre-selected collection of anchor state/action pairs.
Their assumption is that the features of all state-action pairs can be written as a convex combination of the anchoring features, which means the number of anchors
is the number of corners of the polytope spanned by $\operatorname{rows}(\Phi)$ and may be much larger than $d$. One notable feature of their paper is that the sample
complexity's dependence on the horizon is cubic in $1/(1 - \gamma)$, while in our theorem it is quartic.
Earlier, \citet{LaBhSze18} described how anchor states (with some lag allowed) can be used to reduce the number of constraints in the approximate linear programming approach to approximate planning in MDPs, while maintaining error bounds.
\paragraph{Approximate policy iteration}
Let $\pi_1$ be an arbitrary policy and define
a sequence of policies $(\pi_k)_{k=1}^\infty$ inductively using the following procedure.
From each state-action pair $(s, a) \in \mathcal{C}$ take $m$ roll-outs of length $n$ following policy $\pi_k$ and let $\hat Q_k(s, a)$
be the empirical average, which is only defined on the core set $\mathcal{C}$.
The estimation of $Q^{\pi_k}$ is then extended to all state-action pairs using the features and least-squares
\begin{align*}
\hat \theta_k = G(\rho)^{-1} \sum_{(s, a) \in \mathcal{C}} \rho(s, a) \Phi(s, a) \hat Q_k(s, a) \quad
Q_k = \Phi \hat \theta_k\,.
\end{align*}
Then $\pi_{k+1}$ is chosen to be the greedy policy with respect to $Q_k$ and the process is repeated.
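This loop can be sketched on a tiny tabular MDP. For determinism, the illustration below replaces the Monte Carlo roll-outs by exact policy evaluation and uses tabular features ($\Phi = I$, so $\epsilon = 0$ and the least-squares extrapolation step is the identity); the instance sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
r = rng.random((S, A))

def q_pi(pi):
    # exact Q^pi, standing in for the roll-out estimate hat-Q_k
    P_pi = P[np.arange(S), pi]               # S x S transition matrix under pi
    r_pi = r[np.arange(S), pi]
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return r + gamma * P @ V                 # Q(s, a) = r(s, a) + gamma E[V(s')]

pi = np.zeros(S, dtype=int)
for _ in range(50):                          # the policy-iteration loop
    pi = q_pi(pi).argmax(axis=1)             # greedy policy w.r.t. Q_k

# reference: Q* via value iteration
V = np.zeros(S)
for _ in range(3000):
    V = (r + gamma * P @ V).max(axis=1)
Q_star = r + gamma * P @ V
```

With exact evaluation this is plain policy iteration, which reaches an optimal policy after finitely many steps; the theorem below quantifies how roll-out noise and the $\epsilon\sqrt{d}$ extrapolation error degrade this.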
The following theorem shows that for suitable choices of roll-out length $n$, roll-out number $m$ and iterations $k$, the policy
$\pi_{k+1}$ is nearly optimal with high probability. Significantly, the choice of parameters ensures that the total number of samples
from the generative model is independent of $S$ and $A$.
\begin{theorem}\label{thm:mdp}
Suppose that approximate policy iteration is run with
\begin{align*}
k = \frac{\log\left(\frac{1}{\epsilon \sqrt{d}}\right)}{1 - \gamma} \quad
m = \frac{\log\left(\frac{2k|\mathcal{C}|}{\alpha}\right)}{2\epsilon^2(1 - \gamma)^2} \quad
n = \frac{\log\left(\frac{1}{\epsilon(1-\gamma)}\right)}{1 - \gamma}\,.
\end{align*}
Then there exists a universal constant $C$ such that with probability at least $1 - \alpha$, the policy $\pi_{k+1}$ satisfies
\begin{align*}
\max_{s \in [S]} \left(V^*(s) - V^{\pi_{k+1}}(s)\right) \leq C\epsilon \sqrt{d} / (1 - \gamma)^2\,.
\end{align*}
\end{theorem}
When $\rho$ is chosen using \cref{thm:todd} so that $|\mathcal{C}| \leq 4 d \log \log(d) + 16$, then
the number of samples from the generative model is $kmn|\mathcal{C}|$, which is
\begin{align*}
O\left(\frac{\log\left(\frac{1}{\epsilon(1 - \gamma)}\right) \log\left(\frac{2k|\mathcal{C}|}{\alpha}\right)
\log\left(\frac{1}{\epsilon \sqrt{d}}\right) d \log \log(d)}{\epsilon^2 (1 - \gamma)^4}\right) \,.
\end{align*}
Before the proof we need two lemmas.
The first controls the propagation of errors in policy iteration when using $Q_k$ rather than $Q^{\pi_k}$.
For a policy $\pi$, let $P^\pi : \mathbb{R}^{[S] \times [A]} \to \mathbb{R}^{[S] \times [A]}$ be defined by $(P^\pi Q)(s,a)=\sum_{s'} P(s'|s,a) Q(s',\pi(s'))$.
\begin{lemma}\label{lem:mdp1}
Let $\delta_i = Q_i - Q^{\pi_i}$ and $E_i = P^{\pi_{i+1}}(I-\gamma P^{\pi_{i+1}})^{-1} (I-\gamma P^{\pi_i})-P^{\pi^*}$.
Then,
$
Q^* - Q^{\pi_k}
\le
(\gamma P^{\pi^*})^k (Q^* - Q^{\pi_0})
+ \gamma \sum_{i=0}^{k-1} (\gamma P^{\pi^*})^{k-i-1} E_i \delta_i
$.
\end{lemma}
\begin{proof}
This is stated as Eq. (7) in the proof of part~(b) of Theorem~3 of \cite{FaMuSz10}
and ultimately follows from Lemma~4 of \citet{Munos03}.
\end{proof}
The second lemma controls the value of the greedy policy with respect to a $Q$ function in terms of the quality of the $Q$ function.
\begin{lemma}[\citet{SinghYee94}, corollary 2]\label{lem:mdp2}
Let $\pi$ be greedy with respect to an action-value function $Q$.
Then for any state $s\in [S]$, $V^\pi(s)\ge V^*(s) - \frac{2}{1-\gamma} \norm{Q-Q^*}_\infty$.
\end{lemma}
\begin{proof}[Proof of \cref{thm:mdp}]
Hoeffding's bound and the definition of the roll-out length show that for any $i \leq k$ and $(s, a) \in \mathcal{C}$, with probability at least $1 - \alpha/(k|\mathcal{C}|)$,
\begin{align*}
\left|\hat Q_i(s, a) - Q^{\pi_i}(s,a)\right| \leq \frac{1}{1 - \gamma}\sqrt{\frac{1}{2m} \log\left(\frac{2k|\mathcal{C}|}{\alpha}\right)} + \epsilon = 2\epsilon\,,
\end{align*}
where the additive $\epsilon$ accounts for the bias introduced by truncating the roll-outs at length $n$.
At the end we analyse the failure probability of the algorithm, but for now assume the above inequality holds for all $i \leq k$ and $(s, a) \in \mathcal{C}$.
Let $\theta_i = \argmin_\theta \norm{Q^{\pi_i} - \Phi \theta}_{\infty}$. Then, by \cref{prop:inf} with $\beta = 2\epsilon$,
\begin{align*}
\norm{Q_i - Q^{\pi_i}}_\infty
= \norm{\Phi \hat \theta_i - Q^{\pi_i}}_\infty
\leq 3 \epsilon \sqrt{2d} + \epsilon
\doteq \delta \,.
\end{align*}
Since the rewards belong to the unit interval,
taking the maximum norm of both sides in \cref{lem:mdp1} shows that
$\norm{Q^* - Q^{\pi_k}}_\infty \leq 2\delta / (1-\gamma) + \gamma^k / (1-\gamma)$.
Then, by the triangle inequality,
\begin{align*}
\norm{Q_k-Q^*}_\infty
&\leq \norm{Q_k-Q^{\pi_k}}_\infty + \norm{Q^*-Q^{\pi_k}}_\infty \\
&\leq \frac{3\delta}{1-\gamma} + \frac{\gamma^k}{1-\gamma}\,.
\end{align*}
Next, by \cref{lem:mdp2}, for any state $s \in [S]$,
\begin{align*}
V^{\pi_{k+1}}(s)
& \geq V^*(s) - \frac{2}{1-\gamma} \norm{Q_k-Q^*}_\infty \\
& \geq V^*(s) - \frac{2}{(1-\gamma)^2} \left( 3\delta + \gamma^k\right)\,.
\end{align*}
All that remains is bounding the failure probability, which follows immediately from a union bound over all iterations $i \leq k$ and state-action pairs $(s, a) \in \mathcal{C}$.
\end{proof}
\section{Conclusions}
Are good representations sufficient for efficient learning in bandits or in RL with a generative model?
The answer depends on whether one accepts a blowup of the approximation error by a factor of $\sqrt{d}$, and
is positive if and only if this blowup is acceptable.
The implication is that the role of bias/prior information is more pronounced than in supervised learning where the blowup does not appear.
One may wonder whether the usual changes to the learning problem, such as considering sparse approximations, could reduce the blowup.
Since sparsity is of little help even in the realisable setting \citep[chapter~23]{LaSz18:book}, we are only modestly optimistic in this regard.
Note also that in reinforcement learning, the blowup is even harsher: in the discounted case we see that a factor of $1/(1-\gamma)^2$ also appears, which
we believe is not improvable.
The analysis in both the bandit and reinforcement learning settings can be decoupled into two components.
The first is to control the query complexity of identifying a near-optimal action and the second is to estimate the value of an action/policy using roll-outs.
This view may prove fruitful when analysing non-linear classes of reward functions.
There are many open questions.
First, in order to compute an approximately optimal design, the algorithm needs to examine all features; it remains open whether this can be avoided.
Second, the argument in \cref{sec:rl}
heavily relies on the uniform contraction property of the various operators involved. It remains to be seen whether similar arguments hold for other settings,
such as the finite horizon setting or the average
cost setting.
Another interesting open question is whether a similar result holds for the online setting when the learner needs to control its regret.
| {
"timestamp": "2020-02-20T02:16:23",
"yymm": "1911",
"arxiv_id": "1911.07676",
"language": "en",
"url": "https://arxiv.org/abs/1911.07676",
"abstract": "The construction by Du et al. (2019) implies that even if a learner is given linear features in $\\mathbb R^d$ that approximate the rewards in a bandit with a uniform error of $\\epsilon$, then searching for an action that is optimal up to $O(\\epsilon)$ requires examining essentially all actions. We use the Kiefer-Wolfowitz theorem to prove a positive result that by checking only a few actions, a learner can always find an action that is suboptimal with an error of at most $O(\\epsilon \\sqrt{d})$. Thus, features are useful when the approximation error is small relative to the dimensionality of the features. The idea is applied to stochastic bandits and reinforcement learning with a generative model where the learner has access to $d$-dimensional linear features that approximate the action-value functions for all policies to an accuracy of $\\epsilon$. For linear bandits, we prove a bound on the regret of order $\\sqrt{dn \\log(k)} + \\epsilon n \\sqrt{d} \\log(n)$ with $k$ the number of actions and $n$ the horizon. For RL we show that approximate policy iteration can learn a policy that is optimal up to an additive error of order $\\epsilon \\sqrt{d}/(1 - \\gamma)^2$ and using $d/(\\epsilon^2(1 - \\gamma)^4)$ samples from a generative model. These bounds are independent of the finer details of the features. We also investigate how the structure of the feature set impacts the tradeoff between sample complexity and estimation error.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG)",
    "title": "Learning with Good Feature Representations in Bandits and in RL with a Generative Model"
} |
https://arxiv.org/abs/1703.10741 | Minimum degree conditions for small percolating sets in bootstrap percolation | The $r$-neighbour bootstrap process is an update rule for the states of vertices in which `uninfected' vertices with at least $r$ `infected' neighbours become infected and a set of initially infected vertices is said to \emph{percolate} if eventually all vertices are infected. For every $r \geq 3$, a sharp condition is given for the minimum degree of a sufficiently large graph that guarantees the existence of a percolating set of size $r$. In the case $r=3$, for $n$ large enough, any graph on $n$ vertices with minimum degree $\lfloor n/2 \rfloor +1$ has a percolating set of size $3$ and for $r \geq 4$ and $n$ large enough (in terms of $r$), every graph on $n$ vertices with minimum degree $\lfloor n/2 \rfloor + (r-3)$ has a percolating set of size $r$. A class of examples are given to show the sharpness of these results. | \section{Introduction}\label{sec:intro}
Bootstrap percolation is a model for the spread of an `infection' in a network. The $r$-neighbour bootstrap process is an example of a cellular automaton, a notion introduced by von Neumann~\cite{jvN66} after a suggestion of Ulam~\cite{sU52}. In this paper, an extremal problem related to these processes is considered.
For any integer $r \geq 2$, the $r$-neighbour bootstrap process is an update rule for the states of vertices in a graph which are in one of two possible states at any given time: `infected' or `uninfected'. From an initial configuration of infected and uninfected vertices, the following state updates occur simultaneously and at discrete time steps: any uninfected vertex with at least $r$ infected neighbours becomes infected while infected vertices remain infected forever. To be precise, given a graph $G$ and a set $A \subseteq V(G)$ of `initially infected' vertices, set $A_0 = A$ and for every $t \geq 1$ define
\[
A_t = A_{t-1} \cup \{v \in V(G) \mid |N(v) \cap A_{t-1}| \geq r\}.
\]
The \emph{closure} of $A$ is $\langle A \rangle_r = \cup_{t \geq 0} A_t$; the set of vertices that are eventually infected starting from $A = A_0$. The set $A_t \setminus A_{t-1}$ shall often be referred to as the vertices \emph{infected at time step $t$}. The set $A$ is said to \emph{span} $\langle A \rangle_r$. The set $A$ is called \emph{closed} if{f} $\langle A \rangle_r = A$ and is said to \emph{percolate} if{f} $\langle A \rangle_r = V(G)$.
The class of $r$-neighbour bootstrap processes were first introduced and investigated by Chalupa, Leath, and Reich~\cite{CLR79} as a monotone model of the dynamics of ferromagnetism.
While the focus of study for such processes is often the behaviour of initially infected sets that are chosen at random, a number of natural extremal problems arise. For any graph $G$ and $r \geq 2$, define the size of the smallest percolating set to be
\[
m(G, r) = \min\{|A| \mid A \subseteq V(G),\ \langle A \rangle_r = V(G)\}.
\]
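For small graphs, both the closure and $m(G, r)$ can be computed by brute force. The sketch below is an illustration, not part of the paper; it represents a graph as a dictionary of neighbour sets:

```python
from itertools import combinations

def closure(adj, A0, r):
    """The closed set <A0>_r under the r-neighbour bootstrap process."""
    infected = set(A0)
    while True:
        new = {v for v in adj if v not in infected
               and len(adj[v] & infected) >= r}
        if not new:
            return infected
        infected |= new

def m(adj, r):
    """Size of a smallest percolating set, by exhaustive search."""
    n = len(adj)
    for size in range(1, n + 1):
        if any(len(closure(adj, A0, r)) == n
               for A0 in combinations(adj, size)):
            return size

# the 5-cycle 0-1-2-3-4-0
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```

For example, on the $5$-cycle no pair of vertices percolates under the $2$-neighbour process, while $\{0, 1, 3\}$ does, so $m(C_5, 2) = 3$.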
One class of graphs that have received a great deal of attention in this area are the square grids. For any $n$ and $d$, let $[n]^d$ denote the $d$-dimensional $n \times n \times \cdots \times n$ grid. In the case that $r = 2$, for all $n$ and $d$, the quantity $m([n]^d, 2)$ is known exactly (see~\cite{BB06} and \cite{BBM10}). Pete (see~\cite{BP98}) gave a number of general results about the smallest percolating sets in grids with other thresholds and observed that $m([n]^d, d) = n^{d-1}$. In the case of hypercubes, $Q_d = [2]^d$, Morrison and Noel~\cite{MN15} confirmed a conjecture of Balogh and Bollob\'{a}s~\cite{BB06}, showing that for each fixed $r$, $m(Q_d, r) = \frac{(1+o(1))}{r} \binom{d}{r-1}$.
Minimum percolating sets in trees were investigated by Riedl~\cite{eR10}.
The size of minimum percolating sets in regular graphs have been examined by Coja-Oghlan, Feige, Krivelevich and Reichman~\cite{C-ORKR} who gave bounds on $m(G, r)$ in a number of different cases in which $G$ is a regular graph satisfying various expansion properties. Bounds on the size of a minimum percolating set (or `contagious set') in both binomial random graphs and random regular graphs have been given by Feige, Krivelevich, and Reichman~\cite{FKR16} and Guggiola and Semerjian~\cite{GS15}.
Extremal problems for more general `$\mathcal{H}$-bootstrap processes' were considered by Balogh, Bollob\'{a}s, Morris, and Riordan~\cite{BBMR12} and many other natural extremal problems have been examined including the largest minimal percolating sets~\cite{rM09} and the `percolation time'~\cite{BP13, BP15, mP12}.
In this note, we shall focus on conditions on the minimum degree of a graph that imply the existence of a percolating set of the smallest possible size. It is clear that for any graph on at least $r$ vertices, $m(G, r) \geq r$. Throughout, $\delta(G)$ is used to denote the minimum degree of a graph $G$.
Considering the degree sequence of a graph, Reichman~\cite{dR12} showed that for any graph $G$ and threshold $r$,
\[
m(G, r) \leq \sum_{v \in V(G)} \min\left\{1, \frac{r}{\deg(v) + 1} \right\}.
\]
For any $d \geq r-1$, this upper bound is achieved by disjoint copies of cliques on $d+1$ vertices.
Freund, Poloczek, and Reichman~\cite{FPR15} showed that if $G$ is a graph on $n$ vertices with $\delta(G) \geq \left\lceil \frac{(r-1)}{r}n \right\rceil$, then $m(G, r) = r$. Furthermore, they gave the example, for odd $r$, of a clique on $n = r+1$ vertices with a perfect matching deleted. No set of size $r$ percolates in such a graph and the minimum degree is $n-2 = \left\lfloor \frac{r-1}{r}n \right\rfloor$. In the special case of $r = 2$, it is noted in \cite{FPR15} that for any $n$, a graph consisting of two disjoint cliques on $\lfloor n/2 \rfloor$ vertices and $\lceil n/2 \rceil$ vertices has minimum degree $\left\lfloor \frac{(2-1)}{2}n\right\rfloor - 1$ and no set of size $2$ that percolates in $2$-neighbour bootstrap percolation. Though it is not stated in their paper, the proof idea in~\cite{FPR15} can be used, with a small extra check, to show that if $n$ is sufficiently large and $\delta(G) \geq \lfloor n/2 \rfloor$, then $m(G, 2) = 2$.
Freund, Poloczek, and Reichman~\cite{FPR15} further investigated Ore-type degree conditions for a graph that guarantee that $m(G, 2) = 2$. Defining $\sigma_2(G)$ to be the minimum sum of degrees of non-adjacent vertices in $G$, they showed that for a graph on $n \geq 2$ vertices, if $\sigma_2(G) \geq n$, then $m(G, 2) =2$. Recently, Dairyko, Ferrara, Lidick\'{y}, Martin, Pfender, and Uzzell~\cite{DFLMPU16} improved this result, showing that, except for a list of exceptional graphs that they completely characterized, $\sigma_2(G) \geq n-2$ implies $m(G, 2) = 2$. Their results show that the only graph with $\delta(G) = \lfloor |V(G)|/2 \rfloor$ and $m(G, 2) > 2$ is the $5$-cycle.
The examples showing the tightness of results on the minimum degree in \cite{FPR15} are only given for a small value of $n$ depending on $r$. When $r \geq 3$ and the number of vertices is large relative to $r$, a different picture emerges and, in fact, when $n$ is large, any graph on $n$ vertices with a minimum degree that exceeds $n/2$ by some constant that depends on $r$ will have a set of size $r$ that percolates in $r$-neighbour bootstrap percolation. The main result of this paper is the following.
\begin{theorem}\label{thm:main-ub}
For any $r \geq 4$ and $n$ sufficiently large, if $G$ is a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor +(r-3)$, then $m(G, r) = r$.
\end{theorem}
The result for the case $r = 3$ is slightly different than the rest and is, perhaps, closer to the behaviour of the case $r = 2$ examined in~\cite{FPR15}.
\begin{theorem}\label{thm:ub-3}
For any $n \geq 30$, any graph $G$ on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor + 1$ satisfies $m(G, 3) = 3$.
\end{theorem}
In both Theorem~\ref{thm:main-ub} and Theorem~\ref{thm:ub-3}, no attempt has been made to optimize the possible lower bounds on $n$.
While it remains true that a graph consisting of two disjoint cliques of size $\lfloor n/2\rfloor$ and $\lceil n/2 \rceil$ will have no set of size $r$ that percolates in $r$-neighbour bootstrap percolation, for large $n$, graphs with larger minimum degree exist with no small percolating sets. In Section~\ref{sec:construction}, examples are given of graphs on $n$ vertices with $\delta(G) = \lfloor n/2 \rfloor$ and $m(G, 3) > 3$ and for every $r \geq 4$, examples of graphs with $\delta(G) = \lfloor n/2 \rfloor + (r-4)$ and $m(G, r) > r$. These examples show that Theorem~\ref{thm:main-ub} and Theorem~\ref{thm:ub-3} are sharp.
Throughout, the following notation is used. Given two disjoint sets of vertices $A$ and $B$ in a graph $G$, let $e(A, B)$ denote the number of edges with one endpoint in $A$ and the other in $B$. The subgraph of $G$ induced by the set $A$ is denoted by $G[A]$ and given two disjoint sets $A$ and $B$, let $G[A, B]$ denote the bipartite subgraph consisting of all the edges in $G$ with one endpoint in $A$ and the other in $B$. Given a set $A$ and a vertex $x$, let $\deg_A(x)$ be the number of neighbours of $x$ in the set $A$. The neighbourhood of a vertex $x$ in $G$ is denoted $N(x)$.
The remainder of the paper is organized as follows. In Section~\ref{sec:construction}, we give the classes of graphs that show the sharpness of Theorem~\ref{thm:main-ub} and Theorem~\ref{thm:ub-3}. In Section~\ref{sec:extremal}, it is shown that for all large graphs satisfying the degree conditions of Theorem~\ref{thm:main-ub} or Theorem~\ref{thm:ub-3}, every closed set is either relatively small, consists of around half the vertices, or is the set of all vertices. Using the existence of small complete bipartite subgraphs, it is shown that there is always a set of $r$ vertices whose closure is not too small. In Section~\ref{sec:large-closed}, it is shown that graphs with closed sets consisting of nearly half the vertices are highly structured and that this structure can be exploited to find a percolating set of size $r$. Finally, in Section~\ref{sec:open}, some further open problems are given.
\section{Graphs with no small percolating sets}\label{sec:construction}
The graphs described in this section showing the sharpness of Theorem~\ref{thm:main-ub} and Theorem~\ref{thm:ub-3} consist of two disjoint cliques, with a regular (or nearly-regular when the number of vertices is odd) bipartite graph between them.
\begin{theorem}\label{thm:const-r}
For $r \geq 4$, let $n \geq 2(r-1)$ be even and suppose that $H$ is any $(r-3)$-regular bipartite graph with no $4$-cycles on parts $A$ and $B$ of size $n/2$ each. The graph $G$ consisting of $H$ together with a clique on the vertices of $A$ and a clique on the vertices of $B$ has $\delta(G) = n/2 + (r-4)$ and $m(G, r) > r$.
\end{theorem}
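Before giving the proof, the construction can be checked exhaustively in the smallest case $r = 4$ and $n = 8$, where a perfect matching serves as the $C_4$-free $(r-3)$-regular bipartite graph $H$ (an illustrative sketch only):

```python
from itertools import combinations

n, r = 8, 4
half = n // 2
adj = {v: set() for v in range(n)}
for side in (range(half), range(half, n)):   # a clique on each part
    for u in side:
        for v in side:
            if u != v:
                adj[u].add(v)
for i in range(half):                        # H: a perfect matching (1-regular, C4-free)
    adj[i].add(half + i)
    adj[half + i].add(i)

def closure(A0):
    infected = set(A0)
    while True:
        new = {v for v in adj if v not in infected
               and len(adj[v] & infected) >= r}
        if not new:
            return infected
        infected |= new

regular = all(len(adj[v]) == half + r - 4 for v in adj)
no_small_percolating_set = all(len(closure(X)) < n
                               for X in combinations(range(n), r))
```

The search confirms that the graph is $(n/2 + r - 4)$-regular and that no set of $r = 4$ vertices percolates.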
\begin{proof}
As the graph $G$ is $(n/2+(r-4))$-regular, it remains only to show that no set of $r$ vertices percolates.
Let $X$ be any initially infected set of $r$ vertices in $G$ and set $|X\cap A| = k$. Note that every vertex in $A$ has $r-3$ neighbours in $B$ and every vertex in $B$ has $r-3$ neighbours in $A$. Hence, if $k \leq 2$, then no vertex of $A \setminus X$ is ever infected: the first such vertex to become infected would, at that moment, have at most $k$ infected neighbours in $A$ and at most $r-3$ infected neighbours in $B$, for a total of at most $k + (r-3) \leq r-1 < r$, even if every vertex of $B$ were infected. Symmetrically, if $r-k \leq 2$, then no vertex of $B \setminus X$ is ever infected.
We first use this observation to deal with some of the small values of $r$.
For $r \in \{4, 5\}$, if $k \geq 3$, then $r - k \leq r-3 \leq 2$. Thus, in the cases $r = 4$ or $r=5$, it is immediate that $X$ does not percolate and so $m(G, r) > r$.
Next, consider the case that $r =6$. By the previous observation and relabelling $A$ and $B$ if necessary, assume that $3 \leq k \leq r-k \leq r-3$, so that we have $3 = k = r-k$. If anything further is infected by $X$, say $a \in A$, then $a$ must be adjacent to all $3$ elements of $X \cap B$. Since $H$ contains no copies of $C_4$, no other vertices in $A$ can be adjacent to all elements of $X\cap B$ and so there is at most one such $a \in A$.
If $a$ is the only vertex infected at time $1$, then no vertex in $B$ is adjacent to all elements of $X \cap A$ (or else it would have been infected in the first time step) and the only vertices adjacent to $a$ are those in $X \cap B$, which are already infected. Thus, nothing further is infected.
If two vertices are infected at the first time step, it can only be that one $a \in A$ and one $b \in B$ are infected, where $a$ is adjacent to all elements of $X \cap B$ and $b$ is adjacent to all elements of $X \cap A$. Since $a$ and $b$ each have exactly $r-3 = 3$ neighbours in the opposite part, the neighbourhood of $a$ in $B$ is precisely $X \cap B$ and the neighbourhood of $b$ in $A$ is precisely $X \cap A$. At the second time step, any further vertex in $A$ is adjacent to $4$ infected vertices in $A$, is not adjacent to $b$, and is adjacent to at most one vertex of $X \cap B$ (two would create a $C_4$ with $a$), for a total of at most $5$ infected neighbours, and so it does not become infected. Similarly, no uninfected vertex in $B$ has more than $5$ infected neighbours. Thus, $X$ does not percolate and so $m(G, 6) > 6$.
Now we consider the most general case: $r \geq 7$. As above, let $X$ be any set of $r$ vertices in $G$, set $k = |X \cap A|$ and assume that $3 \leq k, r-k \leq r-3$.
\begin{center}
\begin{figure}[htb]
$(a)$\begin{tikzpicture}[scale = 0.5]
\tikzstyle{vertex}=[circle, draw=black, minimum size=4pt,inner sep=0pt]
\filldraw[draw = black, fill = gray!20] (0, 2.5) ellipse (1cm and 3.5cm);
\filldraw[draw = black, fill = gray!20] (4, 2.5) ellipse (1cm and 3.5cm);
\node at (0, -1.5) {$A$};
\node at (4, -1.5) {$B$};
\draw (0, 4.25) ellipse (0.5cm and 1.25cm);
\node at (-2.3, 4.25) {$X \cap A$};
\draw (4, 4) ellipse (0.5cm and 1.5cm);
\node at (6.3, 4) {$X \cap B$};
\node[vertex, fill = black] at (0, 5.25) (a1) {};
\node[vertex, fill = black] at (0, 4.75) (a2) {};
\node at (0, 4.3) {\small $\vdots$};
\node[vertex, fill = black] at (0, 3.75) (a3) {};
\node[vertex, fill = black] at (0, 3.25) (a4) {};
\node[vertex, fill = white, label=below:{$x$}] at (0, 0) (x) {};
\node[vertex, fill = black] at (4, 5.25) (b1) {};
\node[vertex, fill = black] at (4, 4.75) (b2) {};
\node[vertex, fill = black] at (4, 4.25) (b3) {};
\node at (4, 3.9) {\small $\vdots$};
\node[vertex, fill = black] at (4, 3.25) (b4) {};
\node[vertex, fill = black] at (4, 2.75) (b5) {};
\draw (4, 1) ellipse (0.5cm and 1cm);
\node at (5.5, 1) {$N_x$};
\node[vertex, fill = white, label=below:{$y$}] at (4, 1) (y) {};
\draw (x) -- (y);
\draw (x) -- (3.9, 1.98);
\draw (x) -- (4, 0);
\draw (x) -- (b1);
\draw (x) -- (b2);
\draw (x) -- (b3);
\draw (x) -- (b4);
\draw (x) -- (b5);
\draw (x) -- (3.7, 3.5);
\draw[dashed] (y) -- (a1);
\draw (y) -- (a2);
\draw (y) -- (a3);
\draw (y) -- (a4);
\draw (y) -- (0.25, 4);
\draw (0, 1.75) ellipse (0.5cm and 1 cm);
\draw (y) -- (0.1, 2.73);
\draw (y) -- (0, 0.75);
\end{tikzpicture}
$(b)$ \begin{tikzpicture}[scale=0.5]
\tikzstyle{vertex}=[circle, draw=black, minimum size=4pt,inner sep=0pt]
\filldraw[draw = black, fill = gray!20] (0, 2.5) ellipse (1cm and 3.5cm);
\filldraw[draw = black, fill = gray!20] (4, 2.5) ellipse (1cm and 3.5cm);
\node at (0, -1.5) {$A$};
\node at (4, -1.5) {$B$};
\draw (0, 4.25) ellipse (0.5cm and 1.25cm);
\node at (-2.3, 4.25) {$X \cap A$};
\draw (4, 4) ellipse (0.5cm and 1.5cm);
\node at (6.3, 4) {$X \cap B$};
\node[vertex, fill = black] at (0, 5.25) (a1) {};
\node[vertex, fill = black] at (0, 4.75) (a2) {};
\node at (0, 4.3) {\small $\vdots$};
\node[vertex, fill = black] at (0, 3.75) (a3) {};
\node[vertex, fill = black] at (0, 3.25) (a4) {};
\node[vertex, fill=white, label=below:{$x$}] at (0, 0) (x) {};
\node[vertex, fill = black] at (4, 5.25) (b1) {};
\node[vertex, fill = black] at (4, 4.75) (b2) {};
\node[vertex, fill = black] at (4, 4.25) (b3) {};
\node at (4, 3.9) {\small $\vdots$};
\node[vertex, fill = black] at (4, 3.25) (b4) {};
\node[vertex, fill = black] at (4, 2.75) (b5) {};
\draw (4, 1.5) ellipse (0.5cm and 0.75cm);
\node at (4, 1.5) {$N_x$};
\node[vertex, fill=white, label=below:{$y$}] at (4, 0) (y) {};
\draw (x) -- (3.8, 2.19);
\draw (x) -- (4, 0.75);
\draw (x) -- (b1);
\draw (x) -- (b2);
\draw (x) -- (b3);
\draw (x) -- (b4);
\draw (x) -- (b5);
\draw (x) -- (3.7, 3.5);
\draw (y) -- (a1);
\draw (y) -- (a2);
\draw (y) -- (a3);
\draw (y) -- (a4);
\draw (y) -- (0.25, 4);
\draw (0, 1.75) ellipse (0.5cm and 1 cm);
\node at (0, 1.75) {$N_y$};
\draw (y) -- (0.15, 2.71);
\draw (y) -- (0, 0.75);
\end{tikzpicture}
\caption{Possible structures for the set $X$ when $r \geq 7$ if $(a)$ one vertex is infected in the first time step or $(b)$ two vertices are infected in the first time step. Shaded regions represent cliques.}\label{fig:Ext-example-r7}
\end{figure}
\end{center}
First suppose that only vertices in one partition set, say $A$, are infected at the first time step. Since $H$ has no copy of $C_4$, there can be only one such vertex $x$ adjacent to all $r-k$ vertices in $X \cap B$. At the second time step, no vertex in $A$ can be infected as it would have to have $r-k-1 \geq 2$ neighbours in $X \cap B$, which would create a $C_4$ with $x$. Any vertex in $B$ that is infected at time $2$ is in $N(x) \cap B \setminus X$. Set $N_x = N(x) \cap B \setminus X$ and note that $|N_x| = k-3$. If $k = 3$, then no further vertices are infected and the process stops. If $k \geq 4$ and $y \in N_x$ is infected at the second time step, then $y$ has exactly $k-1$ neighbours in $X \cap A$ and there can only be one such vertex since a second would have two common neighbours with $y$ in $A$. See Figure~\ref{fig:Ext-example-r7} $(a)$. At time step $3$, any vertex in $A$ has at most $k+1$ infected neighbours in $A$ and at most $1$ infected neighbour in $B$ (since two would create a $C_4$ with $x$ in $H$). Any vertex in $B$ has at most $(r-k) + 1$ infected neighbours in $B$ and is either adjacent to $x$ and at most one vertex from $X \cap A \setminus N(y)$ or else at most two vertices from $X\cap A$. Thus, any uninfected vertex is adjacent to at most $\max\{k+2, r-k+3\}$ infected vertices. Since $k \geq 4$, then $\max\{k+2, r-k+3\} \leq r-1$ and so no further vertices are infected.
Next, suppose that $x \in A$ and $y \in B$ are both infected at the first time step. Without loss of generality, assume that $3 \leq k \leq r-k \leq r-3$. As before, there can be only one vertex in each partition set that is infected in the first time step. Set $N_x = B \cap N(x) \setminus X$ and $N_y = A \cap N(y) \setminus X$ so that $|N_x| = k-3$ and $|N_y| = r-k-3$. See Figure~\ref{fig:Ext-example-r7} $(b)$. At time step $2$, any vertex in $N_y$ is adjacent to $k+1$ infected vertices in $A$ and at most $2$ vertices in $B$ ($y$ and at most one from $X\cap B$). Since $k+1+2 = k+3 \leq r-1$, such a vertex is not infected. If $k = 3$, then there are no vertices in $N_x$ and so any vertex in $B$ has at most $r-k+2 \leq r-1$ infected neighbours and so is not infected. If $k \geq 4$, then any vertex in $N_x$ is adjacent to at most $(r-k) + 1$ infected neighbours in $B$ and at most $2$ in $A$ since it can have at most one neighbour in $X \cap A$. Since $r-k+3 \leq r-1$, such a vertex is not infected.
As the set $X$ was arbitrary and in every case the bootstrap process halts before all vertices are infected, $m(G, r) > r$.
\end{proof}
Note that a graph $H$ with girth at least $6$ as required by Theorem~\ref{thm:const-r} is given with positive probability by taking a random $(r-3)$-regular graph on two vertex sets of size $n/2$ as long as $n$ is sufficiently large (see, for example, \cite{nW78} and \cite{nW99}).
\begin{corollary}\label{cor:const-r}
For every $r \geq 4$ and $n$ sufficiently large, there is a graph $G$ on $n$ vertices with $\delta(G) = \lfloor n/2 \rfloor + (r-4)$ and $m(G, r) > r$.
\end{corollary}
\begin{proof}
If $n$ is even, let $G$ be given by Theorem~\ref{thm:const-r}. If $n$ is odd, let $G_1$ be the graph on $n+1$ vertices given by Theorem~\ref{thm:const-r} and define a graph $G$ by deleting one vertex from $G_1$. The vertices of $G$ are partitioned into a set $A$ of size $\lceil n/2 \rceil$ and a set $B$ of size $\lfloor n/2 \rfloor$, each of which induces a clique. Vertices in $A$ have degree at least $\lceil n/2 \rceil - 1 + (r-3) - 1 = \lfloor n/2 \rfloor + (r-4)$ while vertices in $B$ have degree exactly $\lfloor n/2 \rfloor -1 + (r-3) = \lfloor n/2 \rfloor + (r-4)$. If $G$ had a percolating set of size $r$ for $r$-neighbour bootstrap percolation, then this same set would percolate in $G_1$ since the deleted vertex is joined to at least $r$ neighbours in $B$. As this would contradict the fact that $m(G_1, r) > r$, it follows that $m(G, r) > r$ also.
\end{proof}
The case $r = 3$ behaves differently from larger values of $r$. The proof that the example has no small percolating sets is closely related to the corresponding proofs for $r \in \{4, 5\}$.
\begin{theorem}\label{thm:const-3}
For any even $n \geq 4$, let $A=[1, n/2]$ and $B = [n/2+1, n]$ and let $G$ be the graph given by a complete graph on $A$, a complete graph on $B$ and a perfect matching between $A$ and $B$. Then, $\delta(G) = n/2$ and $m(G, 3) > 3$.
\end{theorem}
\begin{proof}
Let $X$ be any set of $3$ vertices in $G$. Note that either $|X \cap A| \leq 1$ or else $|X \cap B| \leq 1$. Suppose, without loss of generality, that $|X \cap A| \leq 1$. Even if every vertex in $B$ becomes infected, any uninfected vertex in $A$ has at most $2$ infected neighbours: the at most one vertex in $X \cap A$ and its single neighbour in $B$. Thus, these vertices never become infected and so $X$ does not percolate.
\end{proof}
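The theorem can also be checked by brute force for small even $n$; the following sketch (our own sanity check, not part of the proof) builds the graph of Theorem~\ref{thm:const-3} for $n = 8$ and verifies that no $3$-set percolates.

```python
from itertools import combinations

def closure(neighbors, initial, r):
    """Return <A>_r under r-neighbour bootstrap percolation."""
    infected = set(initial)
    while True:
        newly = {v for v, nbrs in neighbors.items()
                 if v not in infected and len(nbrs & infected) >= r}
        if not newly:
            return infected
        infected |= newly

# Cliques on A = {0,...,3} and B = {4,...,7} plus the matching i ~ i + 4.
n = 8
G = {v: set() for v in range(n)}
for part in (range(0, n // 2), range(n // 2, n)):
    for u, w in combinations(part, 2):
        G[u].add(w)
        G[w].add(u)
for i in range(n // 2):
    G[i].add(i + n // 2)
    G[i + n // 2].add(i)

# delta(G) = n/2 = 4, yet no 3-set percolates, so m(G, 3) > 3.
print(any(closure(G, X, 3) == set(G) for X in combinations(G, 3)))  # False
```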
The same argument as in the proof of Corollary~\ref{cor:const-r} extends Theorem~\ref{thm:const-3} to all $n \geq 4$.
\begin{corollary}
For any $n \geq 4$, there exists a graph $G$ with $\delta(G) = \lfloor n/2 \rfloor$ and $m(G, 3) > 3$.
\end{corollary}
Note that the graph described in Theorem~\ref{thm:const-3} was also used in \cite{FPR15}, where it was called $DC_n$ and where it was noted, in relation to $2$-neighbour bootstrap percolation, that this graph has sets of size $2$ whose closure is of size $n/2$, while other sets of size two percolate.
This concludes the description of the constructions. In the subsequent sections, it is shown that large graphs with minimum degree one larger (for fixed $n$ and $r$) than those in Theorems~\ref{thm:const-r} and \ref{thm:const-3} do have small percolating sets. No attempt has been made here to classify the extremal examples.
\section{Sets with large closure}\label{sec:extremal}
Before proceeding to the proofs of the main theorems, we give a number of results about the size of the closures of sets in $r$-neighbour bootstrap percolation. In particular, the goal is to show that the closure of \emph{any} set in graphs satisfying the minimum degree conditions under consideration can only have a small number of different sizes.
The following straightforward lemma uses the minimum degree condition to show that any large set will percolate. This will be used throughout in arguments to come.
\begin{lemma}\label{lem:big-sets-perc}
For any $r \geq 3$, $k \geq 1$, let $G$ be a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor + k$. Every set $A \subseteq V(G)$ with $|A| \geq \left\lceil \frac{n}{2} \right\rceil +(r-k-1)$ satisfies $\langle A \rangle_r = V(G)$.
\end{lemma}
\begin{proof}
Every $x \in A^c$ has at most $|A^c|-1 = n-|A|-1$ neighbours within $A^c$, and hence
\begin{align*}
\deg_A(x) &= \deg(x) - \deg_{A^c}(x)\\
&\geq \left\lfloor \frac{n}{2} \right\rfloor + k - n + |A|+1\\
&\geq \left\lfloor \frac{n}{2} \right\rfloor + k - n + \left(\left\lceil \frac{n}{2} \right\rceil +r-k-1 \right) + 1\\
&=r.
\end{align*}
Thus, as every vertex in $A^c$ has at least $r$ neighbours in $A$, if the set $A$ is initially infected, the remainder of the graph becomes infected in one time step.
\end{proof}
Two different choices of $k$ in Lemma~\ref{lem:big-sets-perc} are used here. In the case $r = 3$ with $k = 1$, the lemma states that if $\delta(G) \geq \lfloor n/2 \rfloor +1$, then any set of size $\lceil n/2 \rceil +1$ percolates. For all $r \geq 4$, taking $k = r-3$, the lemma shows that for $\delta(G) \geq \lfloor n/2 \rfloor +r-3$, any set of size $\lceil n/2 \rceil +2$ percolates.
In the following proposition, we consider large graphs with a given minimum degree condition. Edge-counting is used to show that any set that is closed is either relatively small or else contains nearly half of the vertices of the graph. This includes the possibility that the set percolates.
\begin{proposition}\label{prop:no-mid-size-closed}
Let $r \geq 3$, set $k = \max\{1, r-3\}$ and let $G$ be a graph on $n$ vertices with $n \geq 10r$ and $\delta(G) \geq \lfloor n/2 \rfloor + k$. If $A \subseteq V(G)$ is such that $\langle A \rangle_r = A$, then either $|A| \leq 2(r-1)$ or else $|A| \geq \lfloor n/2 \rfloor - \min\{1, r-3\}$.
\end{proposition}
\begin{proof}
Let $A$ be a set of vertices with $\langle A \rangle_r = A$ and set $|A| = \ell$. The proof proceeds by counting the edges with one endpoint in $A$ and the other in $A^c$ in two different ways.
Since any vertex in $A$ has at most $\ell-1$ neighbours within the set $A$, each $x \in A$ has at least $\delta(G) - \ell+1$ neighbours in the set $A^c$. Thus,
\begin{equation}\label{eq:lb-cross-edges}
e(A, A^c) = \sum_{x \in A} \deg_{A^c}(x) \geq \ell(\delta(G) - \ell+1) \geq \ell(\lfloor n/2 \rfloor - \ell + k+1).
\end{equation}
On the other hand, since $\langle A \rangle_r = A$, every vertex in $A^c$ can have at most $r-1$ neighbours in the set $A$. Thus,
\begin{equation}\label{eq:ub-cross-edges}
e(A, A^c) = \sum_{x \in A^c} \deg_A(x) \leq (r-1)|A^c| = (r-1)(n-\ell).
\end{equation}
Combining the inequalities \eqref{eq:lb-cross-edges} and \eqref{eq:ub-cross-edges} and rearranging gives that
\begin{equation}\label{eq:edge-balance}
0 \leq \ell^2 - \ell\left(\left\lfloor \frac{n}{2} \right\rfloor + k+r \right) + (r-1)n.
\end{equation}
Define $D(\ell) = \ell^2 - \ell\left(\lfloor n/2 \rfloor + k+r \right) + (r-1)n$, the right-hand side of inequality \eqref{eq:edge-balance}. Substituting $\ell=2r-1$ into $D(\ell)$ gives
\begin{align*}
D(2r-1) &=(2r-1)^2 - (2r-1)\lfloor n/2 \rfloor -(2r-1)(k+r) + n(r-1)\\
&= (2r-1)(r-k-1) + (r-1) \left(n - 2 \left\lfloor \frac{n}{2} \right\rfloor \right) - \left\lfloor \frac{n}{2} \right\rfloor\\
&\leq (2r-1)(r-k-1) + (r-1) - \left\lfloor \frac{n}{2} \right\rfloor\\
&=
\begin{cases}
7 - \left\lfloor \frac{n}{2} \right\rfloor &\text{if $r = 3$}\\
5r-3 - \left\lfloor \frac{n}{2} \right\rfloor &\text{if $r \geq 4$}
\end{cases}\\
&< 0
\end{align*}
since $\lfloor n/2 \rfloor > 5r-3$ whenever $n \geq 10r-4$. Furthermore, substituting $\ell = 2r-2$ gives, for all $n$,
\begin{align*}
D(2r-2) &=(2r-2)^2 - 2(r-1)\lfloor n/2 \rfloor - (2r-2)(k+r) + n(r-1)\\
&= (2r-2)(r-k-2) + (r-1)\left(n - 2 \left\lfloor \frac{n}{2}\right\rfloor \right) \geq 0.
\end{align*}
Similarly, substituting $\ell = \lfloor n/2 \rfloor -2$ gives
\begin{align*}
D(\lfloor n/2 \rfloor -2)
&=\lfloor n/2 \rfloor^2 - 4\lfloor n/2 \rfloor + 4 - \lfloor n/2 \rfloor^2 - (k+r-2)\lfloor n/2 \rfloor\\
& \qquad + 2(k+r) + n(r-1)\\
&=(r-1)\left( n - 2 \lfloor n/2 \rfloor \right) - (k-r+4)\lfloor n/2 \rfloor + 2(k+r+2)\\
&\leq 2k+3r+3 - \lfloor n/2 \rfloor < 0
\end{align*}
for $n \geq 10r$. Next consider the result of substituting $\ell = \lfloor n/2 \rfloor -1$,
\begin{align*}
D(\lfloor n/2 \rfloor -1)
&=\lfloor n/2 \rfloor^2 - 2\lfloor n/2 \rfloor + 1 - \lfloor n/2 \rfloor^2 - (k+r-1)\lfloor n/2 \rfloor\\
& \qquad + (k+r) + n(r-1)\\
&=n(r-1) - (k+r+1)\lfloor n/2 \rfloor + (k+r+1)\\
&=\begin{cases}
2n - 5\lfloor n/2 \rfloor + 5 &\text{if $r = 3$}\\
(r-1)\left(n - 2\lfloor n/2 \rfloor\right) + 2r-2 &\text{if $r \geq 4$}.
\end{cases}
\end{align*}
Thus, when $r = 3$ and $n \geq 16$, $D(\lfloor n/2 \rfloor -1)< 0$, whereas for $r \geq 4$ and all $n$, we have $D(\lfloor n/2 \rfloor -1) \geq 0$. Finally, consider $D(\lfloor n/2 \rfloor)$ in the case that $r = 3$:
\begin{align*}
D(\lfloor n/2 \rfloor)
&=\lfloor n/2 \rfloor^2 - \lfloor n/2 \rfloor^2 - 4\lfloor n/2 \rfloor + 2n\\
&=2\left(n - 2\lfloor n/2 \rfloor \right) \geq 0.
\end{align*}
Note that $D$ is a quadratic function of $\ell$ with a unique minimum, satisfying $D(2r-2) \geq 0$ and $D(2r-1) < 0$. When $r = 3$, since $D(\lfloor n/2 \rfloor - 1) < 0$ and $D(\lfloor n/2 \rfloor) \geq 0$, any $\ell$ with $D(\ell) \geq 0$ satisfies either $\ell \leq 4$ or else $\ell \geq \lfloor n/2 \rfloor$. When $r \geq 4$, since $D(\lfloor n/2 \rfloor - 2) < 0$ and $D(\lfloor n/2 \rfloor - 1) \geq 0$, any $\ell$ with $D(\ell) \geq 0$ satisfies either $\ell \leq 2(r-1)$ or else $\ell \geq \lfloor n/2 \rfloor -1$. This completes the proof.
\end{proof}
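The sign pattern of $D(\ell)$ established above can also be verified numerically for sample values of $n$ and $r$; the following quick check (ours, not part of the proof) evaluates $D$ at the four substituted values.

```python
def D(l, n, r):
    """The right-hand side of the edge-balance inequality, with k = max(1, r-3)."""
    k = max(1, r - 3)
    return l * l - l * (n // 2 + k + r) + (r - 1) * n

# For r >= 4: D(2r-2) >= 0 > D(2r-1) and D(n/2 - 2) < 0 <= D(n/2 - 1).
for n in (60, 61):
    r = 5
    assert D(2 * r - 2, n, r) >= 0 > D(2 * r - 1, n, r)
    assert D(n // 2 - 2, n, r) < 0 <= D(n // 2 - 1, n, r)

# For r = 3 the upper crossover is at n/2 instead: D(n/2 - 1) < 0 <= D(n/2).
n, r = 40, 3
assert D(2 * r - 2, n, r) >= 0 > D(2 * r - 1, n, r)
assert D(n // 2 - 1, n, r) < 0 <= D(n // 2, n, r)
print("sign pattern confirmed")
```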
In summary, Lemma~\ref{lem:big-sets-perc} and Proposition~\ref{prop:no-mid-size-closed} together show that if $G$ is a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor + \max\{1, r-3\}$, then any set of $r$ vertices either percolates, spans a set of size at most $2(r-1)$ or else spans a set of cardinality close to $n/2$.
In order to address the existence of small closed sets of vertices in the graph, note that for $r$ fixed and $n$ large enough, the K\"{o}vari-S\'{o}s-Tur\'{a}n theorem~\cite{KST54} implies that a graph on $n$ vertices with minimum degree $\delta(G) \geq \lfloor n/2 \rfloor + (r-3)$ contains complete bipartite subgraphs of the form $K_{r, r-1}$, which give a subgraph on $2r-1$ vertices with $m(K_{r, r-1}, r) = r$. For the sake of completeness, the following pair of lemmas with standard proofs makes this precise.
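The equality $m(K_{r, r-1}, r) = r$ can be confirmed exhaustively for small $r$; the sketch below (an aside, with helper names of our choosing) computes the minimum percolating set size of $K_{4,3}$ by brute force.

```python
from itertools import combinations

def closure(neighbors, initial, r):
    """Return <A>_r under r-neighbour bootstrap percolation."""
    infected = set(initial)
    while True:
        newly = {v for v, nbrs in neighbors.items()
                 if v not in infected and len(nbrs & infected) >= r}
        if not newly:
            return infected
        infected |= newly

def m(neighbors, r):
    """Smallest size of a percolating set, by exhaustive search."""
    vertices = sorted(neighbors)
    for s in range(1, len(vertices) + 1):
        if any(closure(neighbors, X, r) == set(vertices)
               for X in combinations(vertices, s)):
            return s

# K_{4,3} on parts {0,1,2,3} and {4,5,6}.
K43 = {v: {4, 5, 6} for v in range(4)}
K43.update({v: {0, 1, 2, 3} for v in range(4, 7)})
print(m(K43, 4))  # 4: only the part of size r = 4 percolates
```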
\begin{lemma}\label{lem:complete-bip-3}
For $n \geq 6$, if $G$ is a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor +1$, then any vertex of $G$ is contained in a copy of $K_{2, 3}$.
\end{lemma}
\begin{proof}
Let $x$ be any vertex in $G$. If $x$ is adjacent to all other vertices, then for any $y \neq x$, the common neighbourhood of $x$ and $y$ has at least $\lfloor n/2 \rfloor \geq 3$ vertices and these together with $x$ and $y$ form a copy of $K_{2, 3}$. Otherwise, let $z$ be any non-neighbour of $x$. Then the common neighbourhood of $x$ and $z$ has at least $2(\lfloor n/2 \rfloor + 1) - (n-2) = 2\lfloor n/2 \rfloor - n + 4 \geq 3$ vertices and these together with $x$ and $z$ form a copy of $K_{2, 3}$.
\end{proof}
\begin{lemma}\label{lem:complete-bip-r}
For each $r \geq 3$ and $n \geq (r-1)2^{r-1} +4$, if $G$ is a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor + (r-3)$, then $G$ contains a copy of $K_{r, r-1}$.
\end{lemma}
\begin{proof}
The proof proceeds by counting copies of stars of the form $K_{1, r-1}$. Define the set
\[
S = \{(x, A) \mid x \in V(G),\ |A| = r-1,\ A \subseteq N(x)\}.
\]
Then, counting elements of $S$ by the first coordinate, as long as $\lfloor n/2 \rfloor + (r-3) \geq r-1$, then
\begin{align*}
|S| &=\sum_{x \in V} \binom{\deg(x)}{r-1} \\
& \geq \sum_{x \in V} \binom{\lfloor n/2 \rfloor + (r-3)}{r-1}\\
& = n \binom{\lfloor n/2 \rfloor + (r-3)}{r-1}\\
& \geq \frac{n}{(r-1)!}\left(\frac{n-1}{2} + (r-3)\right)\cdots \left(\frac{n-1}{2}+1\right)\left(\frac{n-1}{2}\right)\left(\frac{n-1}{2} - 1\right)\\
&\geq \frac{n}{(r-1)!} \cdot \frac{(n-1)(n-2) \cdots (n-r+2)}{2^{r-1}} \cdot (n-3)\\
&\geq \frac{n-3}{2^{r-1}} \binom{n}{r-1} > (r-1) \binom{n}{r-1}.
\end{align*}
As there are $\binom{n}{r-1}$ possible choices for the second coordinate of elements of $S$, by the pigeonhole principle, there is a set $A \subseteq V(G)$ of $r-1$ vertices with at least $r$ common neighbours. These $r$ vertices, together with $A$, form a copy of $K_{r, r-1}$ in the graph.
\end{proof}
Thus, when $n$ is sufficiently large, any graph on $n$ vertices with minimum degree $\lfloor n/2 \rfloor + (r-3)$ has a set of size $r$ whose span contains at least $2r-1$ vertices and hence, by Proposition~\ref{prop:no-mid-size-closed}, at least $\lfloor n/2 \rfloor -1$ vertices.
What remains to show is that if such a graph contains a set $A$ with around half the vertices in $G$ and $\langle A \rangle_r = A$, then $G$ contains some set of size $r$ that percolates.
\section{Structure of large closed sets}\label{sec:large-closed}
In this section, we show that if a graph on $n$ vertices has minimum degree $\lfloor n/2 \rfloor + \max\{r-3, 1\}$ and a set $A$ with $\langle A \rangle_r = A$ and $A$ has close to half the vertices of $G$, then enough structural information about $G$ can be deduced to show that there is some set of size $r$ that percolates, completing the proof of Theorem~\ref{thm:main-ub}. As the minimum degree conditions for the case $r = 3$ are different from all others, these are dealt with separately.
Before proceeding with these results, a straightforward lemma is recorded to be used repeatedly. If the minimum degree of a graph is large enough, not only is there a set of $r$ vertices that percolates in $r$-neighbour bootstrap percolation, but, in fact, any set of $r$ vertices will percolate.
\begin{lemma}\label{lem:all-sets-perc}
Let $k \geq 0$, $r \geq 3$ and $n \geq k(r+1)-1$. For any graph $G$ on $n$ vertices with $\delta(G) \geq n-k$ and any set $A \subseteq V(G)$ of $r$ vertices, $\langle A \rangle_r = V(G)$.
\end{lemma}
\begin{proof}
As each vertex in $G$ has at most $k-1$ non-neighbours, the vertices in the set $A$ have at least $n-|A| - (k-1)|A| =n-kr$ common neighbours. Since $n-kr \geq k-1$, when the set $A$ is initially infected at least $k-1$ further vertices are infected at the first time step. At this point, any uninfected vertex is adjacent to at least $(r+k-1) - (k-1) = r$ infected vertices and so becomes infected in the second time step. Thus, the set $A$ percolates.
\end{proof}
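For a concrete instance of the lemma (our illustration, not from the paper), take $r = 3$, $k = 2$ and $n = 8 \geq k(r+1)-1 = 7$: the complete graph $K_8$ minus a perfect matching has $\delta(G) = n-2$, and indeed every $3$-set percolates.

```python
from itertools import combinations

def closure(neighbors, initial, r):
    """Return <A>_r under r-neighbour bootstrap percolation."""
    infected = set(initial)
    while True:
        newly = {v for v, nbrs in neighbors.items()
                 if v not in infected and len(nbrs & infected) >= r}
        if not newly:
            return infected
        infected |= newly

# K_8 minus the perfect matching {0,1}, {2,3}, {4,5}, {6,7}: delta = 6 = n - 2.
n = 8
G = {v: {u for u in range(n) if u != v and u != v ^ 1} for v in range(n)}

# delta(G) >= n - k with k = 2 and n >= k(r+1) - 1, so every 3-set percolates.
print(all(closure(G, X, 3) == set(G) for X in combinations(range(n), 3)))  # True
```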
\subsection{Threshold $r = 3$}\label{sec:structure-3}
In this subsection, it is shown that if any set of size $3$ in a graph with minimum degree $\lfloor n/2 \rfloor +1$ spans either $\lfloor n/2 \rfloor$ or $\lceil n/2 \rceil$ vertices, then some set of $3$ vertices percolates in $3$-neighbour bootstrap percolation. The two different cases that arise when $n$ is odd are handled in separate propositions.
\begin{proposition}\label{prop:small-half-3}
Let $G$ be a graph on $n \geq 13$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor +1$ and let $A \subseteq V(G)$ be such that $|A| = \lfloor n/2 \rfloor$. If $\langle A \rangle_3 = A$, then $m(G, 3) = 3$.
\end{proposition}
\begin{proof}
Since any vertex $x \in A$ has at most $\lfloor n/2 \rfloor -1$ neighbours within $A$,
\[
\deg_{A^c}(x) \geq \lfloor n/2 \rfloor + 1 - \left(\lfloor n/2 \rfloor -1 \right) = 2.
\]
Since $\langle A \rangle_3 = A$, any vertex $y \in A^c$ has at most $2$ neighbours in $A$ and so
\[
\deg_{A^c}(y) \geq \lfloor n/2 \rfloor + 1 - 2 = \lfloor n/2 \rfloor -1 \geq \left(\lceil n/2 \rceil -1\right) - 1.
\]
Then, by Lemma~\ref{lem:all-sets-perc}, any set of $3$ vertices in $A^c$ infects all of $A^c$.
Set $A_3 = \{x \in A \mid \deg_{A^c}(x) \geq 3\}$. If $A_3 \neq \emptyset$, then for any three vertices $a, b, c \in A^c$,
\[
|\langle \{a, b, c\} \rangle_3| \geq |A^c| + |A_3| \geq \lceil n/2 \rceil +1.
\]
Then, by Lemma~\ref{lem:big-sets-perc}, $\langle \{a, b, c\} \rangle_3 = V(G)$.
Thus, assume that $A_3 = \emptyset$ and hence $G[A]$ is a complete graph with every vertex having exactly $2$ neighbours in $A^c$. Since any vertex in $A^c$ has at most $\lceil n/2 \rceil -1$ neighbours within $A^c$, every vertex in $A^c$ has at least $1$ neighbour in $A$. Set $B_1 = \{y \in A^c \mid \deg_A(y) = 1\}$. Since $A$ is closed, every vertex in $A^c$ has at most $2$ neighbours in $A$. Then,
\[
2\lfloor n/2 \rfloor = e(A, A^c) = |B_1| + 2(\lceil n/2 \rceil - |B_1|) = 2\lceil n/2 \rceil - |B_1|
\]
which implies that $|B_1| = 2(\lceil n/2 \rceil - \lfloor n/2 \rfloor) \leq 2$.
\begin{center}
\begin{figure}[htb]
\begin{tikzpicture}[scale = 0.75]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\filldraw[draw = black, fill = gray!20] (0, 2) ellipse (1cm and 3cm);
\filldraw[draw = black, fill = white] (4, 2) ellipse (1cm and 3cm);
\node at (0, -1.5) {$A$};
\node at (4, -1.5) {$A^c$};
\node[vertex, fill = white, label=above:{$a$}] at (0, 4) (a) {};
\node[vertex, fill = white, label=below right:{$b$}] at (0, 3) (b) {};
\node[vertex, fill = black, label=below:{$c$}] at (0, 0) (c) {};
\node[vertex, fill = black, label=right:{$y$}] at (4, 4) (y) {};
\node[vertex, fill =black, label=right:{$z$}] at (4, 3) (z) {};
\draw (z) -- (a) -- (y) -- (b) -- (c);
\draw (a) -- (b);
\draw (c) to[bend left = 30] (a);
\end{tikzpicture}
\caption{Case for $r = 3$ and $A_3 = \emptyset$ in the proof of Proposition~\ref{prop:small-half-3}.}\label{fig:r3-floor}
\end{figure}
\end{center}
Pick any $y \in A^c \setminus B_1$ and let $a, b$ be its $2$ neighbours in $A$. Let $z \in A^c$ be the other neighbour of $a$ in $A^c$ and choose any $c \in A \setminus \{a, b\}$. Consider the effect of initially infecting the set $\{c, y, z\}$; see Figure~\ref{fig:r3-floor}. Then, $a$ is adjacent to all $3$ and becomes infected in the first time step. Then, $b$ is adjacent to $a$, $c$, and $y$ and so becomes infected by the second time step. Since $G[A]$ is complete and contains three infected vertices, all remaining vertices of $A$ are infected by the third time step. Next, any vertex in $A^c \setminus B_1$ is adjacent to at least one of $y$ and $z$ and has two further infected neighbours in $A$ and so becomes infected by time step $4$. Finally, any vertex in $B_1$ is adjacent to all other elements of $A^c$, in particular to $y$ and $z$, and has one infected neighbour in $A$ and so also becomes infected by time step $4$.
\end{proof}
\begin{proposition}\label{prop:big-half-3}
Let $n \geq 13$ be odd and let $G$ be a graph on $n$ vertices with $\delta(G) \geq \frac{n+1}{2}$ and let $A \subseteq V(G)$ be such that $|A| =\frac{n+1}{2}$. If $\langle A \rangle_3 = A$, then $m(G, 3) = 3$.
\end{proposition}
\begin{proof}
Counting degrees as in the previous proof, for every $y \in A^c$, since $\langle A \rangle_3 = A$, then $y$ has at most $2$ neighbours in $A$ and so at least $\frac{n+1}{2} - 2 = |A^c| -1$ neighbours in $A^c$. That is, $G[A^c]$ is a complete graph and every vertex has exactly $2$ neighbours in $A$.
Any vertex in $A$ has at least $1$ neighbour in $A^c$. Set $A_3 = \{x \in A \mid \deg_{A^c}(x) \geq 3\}$. If $|A_3| \geq 2$, then any three vertices in $A^c$ span at least $|A^c| + |A_3| \geq \frac{n-1}{2} + 2 = \lceil n/2 \rceil +1$ vertices and so percolate by Lemma~\ref{lem:big-sets-perc}. If $A_3 = \emptyset$, then $\langle A^c \rangle_3 = A^c$ and so by Proposition~\ref{prop:small-half-3}, $G$ has a percolating set of size $3$.
Assume now that $|A_3| = 1$. Note that every vertex in $A\setminus A_3$ has either $1$ or $2$ neighbours in $A^c$ and at most one non-neighbour in $A$. Thus, by Lemma~\ref{lem:all-sets-perc}, any set of size $3$ in $A \setminus A_3$ eventually infects all of $A \setminus A_3$. If $\langle A \setminus A_3 \rangle_3 = A \setminus A_3$, then again by Proposition~\ref{prop:small-half-3}, $G$ has a percolating set of size $3$.
Therefore, assume further that $|A_3| = 1$ and that $\langle A\setminus A_3 \rangle_3 = A$. Let $x$ be any vertex in $A \setminus A_3$ and let $a$ be one of its neighbours in $A^c$. Let $y, z \in A \setminus A_3$ be any two neighbours of $x$ and consider the effect of initially infecting $\{a, y, z\}$. Since $x$ is adjacent to all $3$, it is infected in the first time step. By assumption, $\langle \{x, y, z\} \rangle_3 = A$ and so $\langle \{a, y, z\}\rangle_3 \supseteq A \cup \{a\}$, which is a set of size $\lceil n/2 \rceil +1$. Thus, by Lemma~\ref{lem:big-sets-perc}, the vertices $a, y, z$ percolate.
In all cases, the graph $G$ contains $3$ vertices that percolate and so $m(G, 3) = 3$.
\end{proof}
With these two results, the proof of Theorem~\ref{thm:ub-3} now follows.
\begin{proof}[Proof of Theorem~\ref{thm:ub-3}]
Let $n \geq 30$ and let $G$ be a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor +1$. By Lemma~\ref{lem:complete-bip-3}, $G$ contains a copy of $K_{2, 3}$. Let $A$ be the partition class of size $3$ in such a copy. Since $|\langle A \rangle_3| \geq 5 > 2(3-1)$, Lemma~\ref{lem:big-sets-perc} and Proposition~\ref{prop:no-mid-size-closed} imply that either $A$ percolates or else $|\langle A \rangle_3| \in \{\lfloor n/2 \rfloor, \lceil n/2 \rceil\}$. In the latter case, by Propositions~\ref{prop:small-half-3} and \ref{prop:big-half-3}, $G$ contains some set of size $3$ that percolates. Thus, $m(G, 3) = 3$, which completes the proof.
\end{proof}
\subsection{Threshold $r \geq 4$}\label{sec:structure-r}
In this subsection, we consider bootstrap processes with infection threshold $r \geq 4$ and give the proof of Theorem~\ref{thm:main-ub}. The proof requires more steps than the corresponding result for $r=3$ because Proposition~\ref{prop:no-mid-size-closed} is weaker in the case $r \geq 4$.
\begin{proposition}\label{prop:vsmall-half-r}
Let $r \geq 4$, let $n$ be sufficiently large, and let $G$ be a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor + (r-3)$. If there is a set $A \subseteq V(G)$ with $|A| = \lfloor n/2 \rfloor -1$ and $\langle A \rangle_r = A$, then $m(G, r) = r$.
\end{proposition}
\begin{proof}
Counting edges as in previous proofs and using the fact that $\langle A \rangle_r = A$, for any vertex $y \in A^c$
\[
r-4 \leq (r-3) - \lceil n/2 \rceil + \lfloor n/2 \rfloor \leq \deg_{A}(y) \leq r-1
\]
and $y$ has at most $3$ non-neighbours in the set $A^c$. Thus, by Lemma~\ref{lem:all-sets-perc}, any set of $r$ vertices in $A^c$ infects all of $A^c$. Since any vertex in $A$ has at least $r-1 \geq 1$ neighbours in $A^c$, $e(A, A^c) \neq 0$. Let $b \in A^c$ be any vertex with a neighbour $a \in A$. Let $v_1, v_2, \ldots, v_{r-1}$ be any neighbours of $b$ in $A^c$ and consider initially infecting the set $\{a, v_1, v_2, \ldots, v_{r-1}\}$. Since $b$ is adjacent to all $r$ infected vertices, it is infected at the first time step. Then, by the previous comment, $\{b, v_1, \ldots, v_{r-1}\}$ internally spans the entire set $A^c$. Since
\[
|\langle \{a, v_1, v_2, \ldots, v_{r-1}\} \rangle_r| \geq |A^c \cup \{a\}| = \lceil n/2 \rceil + 1 +1 = \lceil n/2 \rceil +2,
\]
then by Lemma~\ref{lem:big-sets-perc}, the set $\{a, v_1, v_2, \ldots, v_{r-1}\}$ percolates and so $m(G, r) = r$.
\end{proof}
\begin{proposition}\label{prop:floor-half-r}
Let $r \geq 4$, let $n$ be sufficiently large, and let $G$ be a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor + (r-3)$. If there is a set $A \subseteq V(G)$ with $|A| = \lfloor n/2 \rfloor$ and $\langle A \rangle_r = A$, then $m(G, r) = r$.
\end{proposition}
\begin{proof}
Since $\langle A \rangle_r = A$ and $|A^c| = \lceil n/2 \rceil$, for every $y \in A^c$,
\[
r-3 \leq \lfloor n/2 \rfloor + (r-3) - (\lceil n/2 \rceil -1) \leq \deg_A(y) \leq r-1
\]
and also $y$ has at most $2$ non-neighbours within $A^c$. Thus, by Lemma~\ref{lem:all-sets-perc}, any $r$ vertices in $A^c$ infect all of $A^c$.
\begin{center}
\begin{figure}
$(a)$ \begin{tikzpicture}[scale = 0.8]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\filldraw[draw = black, fill = white] (0, 2.5) ellipse (1cm and 3.5cm);
\filldraw[draw = black, fill = white] (4, 2.5) ellipse (1.1cm and 3.5cm);
\node at (0, -1.5) {$A$};
\node at (4, -1.5) {$A^c$};
\node[vertex, fill = black, label=left:{$a$}] at (0, 4) (a) {};
\node[vertex, fill = black, label=left:{$b$}] at (0, 3) (b) {};
\node[vertex, label=above:{$x$}] at (4, 4) (x) {};
\node[vertex, label=above:{$y$}] at (4, 3) (y) {};
\draw (a) -- (x) -- (b) -- (y) -- (a);
\draw (4, 1) ellipse (0.9cm and 1.5 cm);
\node[vertex, fill = black, label=left:{$v_1$}] at (4, 1.7) (v1) {};
\node[vertex, fill = black, label=left:{$v_2$}] at (4, 1.2) (v2) {};
\node at (4, 0.8) {$\vdots$};
\node[vertex, fill = black, label=below:{$v_{r-2}$}] at (4, 0.3) (v3) {};
\draw (y) to[bend left=10] (v1);
\draw (y) to[bend left = 15] (v2);
\draw (y) to[bend left=20] (4.1, 0.9);
\draw (y) to[bend left = 25] (v3);
\draw (x) to[bend left =30] (v1);
\draw (x) to[bend left = 35] (v2);
\draw (x) to[bend left = 40] (4.1, 0.8);
\draw (x) to[bend left = 45] (v3);
\end{tikzpicture}
$(b)$ \hspace*{10pt}
\begin{tikzpicture}[scale = 0.8]
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\filldraw[draw = black, fill = white] (0, 2.5) ellipse (1cm and 3.5cm);
\filldraw[draw = black, fill = white] (4, 2.5) ellipse (1.2cm and 3.5cm);
\node at (0, -1.5) {$A$};
\node at (4, -1.5) {$A^c$};
\node[vertex, label=right:{$b_1$}] at (4, 3.7) (b1) {};
\node[vertex, label=below left:{$x$}] at (4, 2.7) (x) {};
\draw (x) -- (b1);
\node[vertex, label=right:{$b_2$}] at (4, 4.2) (b2) {};
\node[vertex, label=right:{$b_3$}] at (4, 4.7) (b3) {};
\node at (4, 5.2) {$\vdots$};
\node[vertex, label=right:{$b_i$}] at (4, 5.5) (b4) {};
\node[vertex, fill = black, label=left:{$a_1$}] at (0, 1) (a1) {};
\draw (x) -- (a1) -- (b1);
\node[vertex, fill = black, label=left:{$a_2$}] at (0, 2) (a2) {};
\draw (x) -- (a2) -- (b2);
\node[vertex, label=left:{$a_3$}] at (0, 3) (a3) {};
\draw (x) -- (a3) -- (b3);
\node at (0, 4) {$\vdots$};
\draw (x) -- (0.2, 3.9);
\node[vertex, label=left:{$a_i$}] at (0, 5) (a4) {};
\draw (x) -- (a4) -- (b4);
\draw (4, 0.7) ellipse (0.9cm and 1.5 cm);
\node[vertex, fill = black, label=left:{$v_1$}] at (4, 1.4) (v1) {};
\node[vertex, fill = black, label=left:{$v_2$}] at (4, 0.9) (v2) {};
\node at (4, 0.8) {$\vdots$};
\node[vertex, fill = black, label=below:{$v_{r-2}$}] at (4, 0) (v3) {};
\draw (x) to[bend left=10] (v1);
\draw (x) to[bend left = 15] (v2);
\draw (x) to[bend left=20] (4.1, 0.6);
\draw (x) to[bend left = 25] (v3);
\draw (b1) to[bend left =30] (v1);
\draw (b1) to[bend left = 35] (v2);
\draw (b1) to[bend left = 40] (4.1, 0.5);
\draw (b1) to[bend left = 45] (v3);
\end{tikzpicture}
\caption{$(a)$ $G[A, A^c]$ contains a copy of $K_{2,2}$; $(b)$ $G[A, A^c]$ is $K_{2,2}$-free}\label{fig:r4-small-half}
\end{figure}
\end{center}
If the graph $G[A, A^c]$ contains a copy of $K_{2, 2}$ with vertices $a, b \in A$ and $x, y \in A^c$, let $v_1, v_2, \ldots, v_{r-2}$ be any $r-2$ common neighbours of $x$ and $y$ in $A^c$ and consider initially infecting the set $\{a, b, v_1, v_2, \ldots, v_{r-2}\}$, as in Figure~\ref{fig:r4-small-half} $(a)$. The vertices $x$ and $y$ are infected in the first time step and subsequently all vertices in $A^c$ are infected. Since at least $|A^c| + 2 = \lceil n/2 \rceil +2$ vertices are infected, the set percolates by Lemma~\ref{lem:big-sets-perc}.
Now, assume that the graph $G[A, A^c]$ contains no copy of $K_{2,2}$. Since every vertex $x \in A$ has at least $\lfloor n/2 \rfloor + (r-3) - (\lfloor n/2 \rfloor -1) = r-2$ neighbours in $A^c$, then $e(A, A^c) \geq (r-2)\lfloor n/2 \rfloor$ and so there are at most $\lceil n/2 \rceil/(r-2)$ vertices $y \in A^c$ with $\deg_A(y) = r-3$. Let $x \in A^c$ be a vertex with $\deg_A(x) = i \in \{r-2, r-1\}$ and let $a_1, a_2, \ldots, a_i$ be its neighbours in $A$. Note that $i \geq 2$. As each $a_j$ has at least $r-2 \geq 2$ neighbours in $A^c$, for each $j \leq i$, let $b_j \in A^c \setminus \{x\}$ be a neighbour of $a_j$. Since $G[A, A^c]$ contains no copy of $K_{2, 2}$, all of the vertices $\{b_1, b_2, \ldots, b_i\}$ are distinct. Since the vertex $x$ has at most $i - (r-3)$ non-neighbours in $A^c$ and $i - (r-3) \leq i-1$, then $x$ is adjacent to at least one vertex in $\{b_1, b_2, \ldots, b_i\}$. Without loss of generality, suppose that $x$ is adjacent to $b_1$. Let $v_1, v_2, \ldots, v_{r-2}$ be any $r-2$ common neighbours of $x$ and $b_1$ in $A^c$ and consider initially infecting the set $\{a_1, a_2, v_1, v_2, \ldots, v_{r-2}\}$, as in Figure~\ref{fig:r4-small-half} $(b)$. The vertex $x$ is infected in the first time step, and $b_1$ by the second time step. Then, $A^c$ is internally spanned by $\{x, b_1, v_1, \ldots, v_{r-2}\}$ and since
\[
|\langle \{a_1, a_2, v_1, v_2, \ldots, v_{r-2}\} \rangle_r| \geq |A^c \cup \{a_1, a_2\}| = \lceil n/2 \rceil +2,
\]
then by Lemma~\ref{lem:big-sets-perc}, all vertices are eventually infected and so $m(G, r) = r$.
\end{proof}
The remaining two cases consist of showing that if a set $A$ satisfies $\langle A \rangle_r = A$ and $\lceil n/2 \rceil \leq |A| \leq \lceil n/2 \rceil +1$, then there is a set of size $r$ that percolates. The following structural fact about graphs with a large closed set is used repeatedly. The straightforward proof follows the same arguments used in previous propositions in this section.
\begin{fact}\label{fact:big-half-r}
For any $r \geq 4$, $n \geq 2r$ and $G$ a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor +1$, let $A$ be a set with $|A| = \lceil n/2 \rceil +r-3$ and $\langle A \rangle_r = A$. Then $G[A^c]$ is a complete graph and every vertex of $A^c$ has exactly $r-1$ neighbours in $A$.
\end{fact}
The aim in all of the proofs of this section is to use the structural information about the graphs to find a set of $r$ vertices that internally spans at least $\lfloor n/2 \rfloor +2$ vertices (and hence percolates). In some circumstances, finding many sets whose span is $\lfloor n/2 \rfloor +1$ can be quite useful, as Fact~\ref{fact:big-half-r} provides a great deal of structural information about such sets.
\begin{proposition}\label{prop:ceil-half-r}
Let $r \geq 4$, let $n$ be sufficiently large and odd, and let $G$ be a graph on $n$ vertices with $\delta(G) \geq \frac{n-1}{2} + (r-3)$. If there is a set $A \subseteq V(G)$ with $|A| = \left\lceil \frac{n}{2} \right\rceil = \frac{n+1}{2}$ and $\langle A \rangle_r = A$, then $m(G, r) = r$.
\end{proposition}
\begin{proof}
Since any vertex $y \in A^c$ has at most $\frac{n-1}{2} - 1$ neighbours within $A^c$ and $\langle A \rangle_r = A$, then
\[
r-2 = \frac{n-1}{2} + (r-3) - \left(\frac{n-1}{2} - 1\right) \leq \deg_A(y) \leq r-1
\]
and so every vertex in $A^c$ has at most $1$ non-neighbour in $A^c$. As before, by Lemma~\ref{lem:all-sets-perc}, any set of $r$ vertices in $A^c$ infects at least all of $A^c$.
Since $|A| = \frac{n+1}{2}$, then every vertex in $A$ has at least $r-3 \geq 1$ neighbours in $A^c$. Set $A_r = \{x \in A \mid \deg_{A^c}(x) \geq r\}$. Note that since $r|A_r| \leq e(A, A^c) \leq (r-1)|A^c|$, then $|A_r| \leq \frac{(r-1)}{r}\cdot \frac{(n-1)}{2} < \frac{n+1}{2}$ and so $A \setminus A_r \neq \emptyset$. Any vertex in $A \setminus A_r$ has at most $2$ non-neighbours in $A$ and so any set of size $r$ in $A \setminus A_r$ infects the remainder of $A \setminus A_r$ by Lemma~\ref{lem:all-sets-perc}.
If $A_r = \emptyset$, then $\langle A^c \rangle_r = A^c$, but since $|A^c| = \frac{n-1}{2} = \lfloor n/2 \rfloor$, then by Proposition~\ref{prop:floor-half-r}, there is a set of size $r$ that percolates.
If $|A_r| \geq 2$, then choose any element $a \in A \setminus A_r$, let $b \in A^c$ be any neighbour of $a$ and let $v_1, v_2, \ldots, v_{r-1}$ be any $r-1$ neighbours of $b$ in $A^c$. Then, letting $B = \{a, v_1, v_2, \ldots, v_{r-1}\}$ be the set of initially infected vertices, $B$ infects $b$ and so with $r$ infected vertices in $A^c$, $|\langle B \rangle_r| \geq |A^c \cup A_r \cup \{a\}| \geq \frac{n-1}{2} + 2+1 = \lceil n/2 \rceil +2$ and so by Lemma~\ref{lem:big-sets-perc}, $B$ percolates.
Suppose now that $|A_r| = 1$ and let $A_r = \{x\}$. If $x$ has fewer than $r$ neighbours in $A$, then $A \setminus \{x\}$ is a closed set of size $\frac{n-1}{2}$ and so by Proposition~\ref{prop:floor-half-r}, $G$ contains a set of size $r$ that percolates. The remainder of the proof involves considering many different sets of size $r$ and showing that if none percolate, then Fact~\ref{fact:big-half-r} can be used to deduce sufficient structural information about $G$ to find a small percolating set.
\begin{center}
\begin{figure}
\begin{tikzpicture}
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\filldraw[draw=black, fill = white] (0, 0) ellipse (1cm and 2cm);
\filldraw[draw=black, fill = white] (4, 0) ellipse (1cm and 2cm);
\draw (-0.866, -1) to[bend left=10] (0.866, -1);
\node at (-1.5, -1.5) {$A_r$};
\node[vertex, label=below:{$x$}] at (0, -1.5) (x) {};
\node at (0, 2.5) {$A$};
\node at (4, 2.5) {$A^c$};
\node[vertex, label=above:{$a$}] at (0, 1.4) (a) {};
\node[vertex, label=above:{$b$}] at (4, 1.4) (b) {};
\draw (0, 0) ellipse (0.75cm and 0.4cm);
\node at (0, 0) {$I(a)$};
\draw (0.725, 0.1) -- (a) -- (-0.725, 0.1);
\draw (4, 0) ellipse (0.75cm and 0.4cm);
\node at (4, 0) {$I(b)$};
\draw (4.725, 0.1) -- (b) -- (3.275, 0.1);
\draw (a) -- (b);
\draw (x) -- ++ (1, 0.45);
\draw (x) -- ++ (1, 0.15);
\draw (x) -- ++ (1, -0.15);
\draw (x) -- ++ (1, -0.45);
\draw [decorate, decoration={brace}] (1.1, -1) -- (1.1, -2);
\node at (1.7, -1.5) {$\geq r$};
\draw (x) -- ++ (-0.45, 0.8);
\draw (x) -- ++ (-0.15, 0.8);
\draw (x) -- ++ (0.15, 0.8);
\draw (x) -- ++ (0.45, 0.8);
\end{tikzpicture}
\caption{The sets $I(a)$ and $I(b)$ for an edge $\{a, b\}$.}\label{fig:big-half-r-Isets}
\end{figure}
\end{center}
For every vertex $a \in V(G) \setminus \{x\}$, choose a set, denoted $I(a)$, of $r-1$ neighbours of $a$ so that, if $a \in A\setminus\{x\}$, then $I(a) \subseteq A\setminus\{x\}$ and if $a \in A^c$, then $I(a) \subseteq A^c$. Every vertex in $A$ has at least one neighbour in $A^c$ and since $|A_r| =1$, then every vertex in $A^c$ has at least one neighbour in $A\setminus A_r$. For any edge $\{a, b\} \in E(G)$ with $a \in A\setminus A_r$ and $b \in A^c$,
\begin{align*}
\langle \{b\} \cup I(a) \rangle_r &\supseteq A \cup \{b\}, \text{ and}\\
\langle \{a\} \cup I(b) \rangle_r &\supseteq A^c \cup A_r \cup \{a\}.
\end{align*}
See Figure~\ref{fig:big-half-r-Isets}. Since the sets $\{b\} \cup I(a)$ and $\{a\} \cup I(b)$ each span a set of size at least $\frac{n+1}{2} + 1 = \frac{n-1}{2} +2$, either one of them percolates, or else by Fact~\ref{fact:big-half-r}, for every $a \in A\setminus A_r$, the graph induced by $G$ on $A \setminus \{a, x\}$ is a clique with every vertex having exactly $r-1$ neighbours in $A^c \cup \{a, x\}$ and similarly, for every $b \in A^c$, the set $A^c \setminus \{b\}$ induces a clique with every vertex having $r-1$ neighbours in $A \cup \{b\}$.
\begin{center}
\begin{figure}
\begin{tikzpicture}
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\filldraw[draw=black, fill = gray!20] (0, 0) ellipse (1cm and 2cm);
\filldraw[draw=black, fill = gray!20] (4, 0) ellipse (1cm and 2cm);
\draw (-0.866, -1) to[bend left=10] (0.866, -1);
\node at (-1, -1.5) {$A_x$};
\draw (3.134, -1) to[bend left=10] (4.866, -1);
\node at (5, -1.5) {$B_x$};
\node at (0, 2.5) {$A\setminus \{x\}$};
\node at (4, 2.5) {$A^c$};
\node[vertex, label=below:{$x$}] at (2, -2.5) (x) {};
\draw (-0.1, -1.99) -- (x) -- (0.866, -1);
\draw (3.134, -1) -- (x) -- (4.1, -1.99);
\node[vertex, fill = white] at (0, -1.5) (a1) {};
\node[vertex, fill = white] at (0, 1.5) (a2) {};
\node[vertex, fill = white] at (4, -1.5) (b1) {};
\node[vertex, fill = white] at (4, 1.5) (b2) {};
\draw (a1) -- (x) -- (b1);
\draw (a2) -- ++ (1, 0.3);
\draw (a2) -- ++ (1, 0);
\draw (a2) -- ++ (1, -0.3);
\draw (b2) -- ++ (-1, 0.3);
\draw (b2) -- ++ (-1, 0);
\draw (b2) -- ++ (-1, -0.3);
\draw [decorate,decoration={brace}] (1.1, 1.9) --(1.1, 1.1);
\draw [decorate,decoration={brace}] (2.9, 1.1) --(2.9, 1.9);
\node at (2, 1.5) {$r-2$};
\draw (a1) -- ++ (1.3, 1.5);
\draw (a1) -- ++ (1.3, 1);
\draw (b1) -- ++ (-1.3, 1.5);
\draw (b1) -- ++ (-1.3, 1);
\draw [decorate,decoration={brace}] (1.4, 0.1) --(1.4, -0.55);
\draw [decorate,decoration={brace}] (2.6, -0.55) --(2.6, 0.1);
\node at (2, -0.25) {$r-3$};
\end{tikzpicture}
\caption{Structure of the graph if no set $\{a\} \cup I(b)$ or $\{b\} \cup I(a)$ percolates.}\label{fig:r-big-half}
\end{figure}
\end{center}
Note that any graph $H$ on at least $3$ vertices with the property that deleting any vertex gives a clique is itself a clique. Thus, each of $G[A \setminus \{x\}]$ and $G[A^c]$ is a complete graph where every vertex in $A \setminus \{x\}$ has exactly $r-2$ neighbours in $A^c \cup \{x\}$ and every vertex in $A^c$ has exactly $r-2$ neighbours in $A$.
Set $A_x = A \cap N(x)$, $A_1 = A \setminus (\{x\} \cup N(x))$, $B_x = A^c \cap N(x)$ and $B_1 = A^c \setminus N(x)$. By a previous comment, $|A_x|, |B_x| \geq r$. Note that every vertex in $A_1$ has $r-2 \geq 2$ neighbours in $A^c$ and every vertex in $A_x$ has $r-3 \geq 0$ neighbours in $A^c$; see Figure~\ref{fig:r-big-half}.
If any vertex $a \in A \setminus \{x\}$ has two neighbours $b_1, b_2 \in B_x$, then let $\{v_1, v_2, \ldots, v_{r-2}\}$ be any $r-2$ vertices in $A_x \setminus \{a\}$ and consider initially infecting $\{b_1, b_2, v_1, v_2, \ldots, v_{r-2}\}$. Both $x$ and $a$ are adjacent to all infected vertices and so become infected at the first time step. Thereafter, the remainder of $A_x$ is infected and hence all of $A$. Since at least $|A \cup \{b_1, b_2\}| = \lceil n/2 \rceil +2$ vertices are infected, the set percolates, by Lemma~\ref{lem:big-sets-perc}.
If any vertex $a \in A \setminus \{x \}$ has two neighbours $b_1, b_2 \in A^c$ with $b_1 \in B_1$, then since $b_1$ has at least $r-2\geq 2$ neighbours in $A \setminus \{x\}$, let $c \in A \setminus \{a, x\}$ be any other neighbour of $b_1$. Let $v_1, v_2, \ldots, v_{r-2} \in A^c\setminus \{b_1, b_2\}$ be any $r-2$ vertices in $A^c$ and consider initially infecting the set $\{a, c, v_1, v_2, \ldots, v_{r-2}\}$. In the first step $b_1$ is infected, then $b_2$ and subsequently the remainder of $A^c$ and so also $x$. As at least $|A^c \cup \{a, c, x\}| = \lceil n/2 \rceil +2$ vertices are infected, the set percolates, by Lemma~\ref{lem:big-sets-perc}.
By symmetry, the same is true for any vertex in $A^c$ with two neighbours in $A_x$ or else two neighbours in $A \setminus \{x\}$, one of which is in $A_1$.
For any $r \geq 5$, every vertex in $A_1 \cup A_x$ has at least $r-3 \geq 2$ neighbours in $A^c$ and so some vertex has either two neighbours in $B_x$ or two neighbours in $A^c$, one of which is in $B_1$. In either case, there is some set of size $r$ that percolates.
\begin{center}
\begin{figure}
\begin{tikzpicture}
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\filldraw[draw=black, fill = gray!20] (0, 0) ellipse (1cm and 2cm);
\filldraw[draw=black, fill = gray!20] (4, 0) ellipse (1cm and 2cm);
\node at (0, 2.5) {$A_x$};
\node at (4, 2.5) {$B_x$};
\node[vertex, label=below:{$x$}] at (2, -3) (x) {};
\node[vertex, fill = black, label=left:{$a$}] at (0, 1.5) (a1) {};
\node[vertex, fill = black, label=left:{$b$}] at (0, 1) (a2) {};
\node[vertex, fill = white] at (0, 0.5) (a3) {};
\node at (0, 0.1) {$\vdots$};
\node[vertex, fill = white] at (0, -0.5) (a4) {};
\node[vertex, fill = white] at (0, -1) (a5) {};
\node[vertex, fill = white] at (0, -1.5) (a6) {};
\node[vertex, fill = white] at (4, 1.5) (b1) {};
\node[vertex, fill = white] at (4, 1) (b2) {};
\node[vertex, fill = white] at (4, 0.5) (b3) {};
\node at (4, 0.1) {$\vdots$};
\node[vertex, fill = white] at (4, -0.5) (b4) {};
\node[vertex, fill = black, label=right:{$c$}] at (4, -1) (b5) {};
\node[vertex, fill = black, label=right:{$d$}] at (4, -1.5) (b6) {};
\draw (a1) -- (x) -- (a2);
\draw (a3) -- (x) -- (a4);
\draw (a5) -- (x) -- (a6);
\draw (x) -- (0.1, -0.1);
\draw (b1) -- (x) -- (b2);
\draw (b3) -- (x) -- (b4);
\draw (b5) -- (x) -- (b6);
\draw (x) -- (3.9, -0.1);
\draw (a1) -- (b1);
\draw (a2) -- (b2);
\draw (a3) -- (b3);
\draw (a4) -- (b4);
\draw (a5) -- (b5);
\draw (a6) -- (b6);
\end{tikzpicture}\caption{Case where $r = 4$ and every vertex in $A\setminus \{x\}$ has only one neighbour in $A^c$.}\label{fig:r-big-half-cases}
\end{figure}
\end{center}
The only remaining case is when $r = 4$ and there are no vertices in $A\setminus \{x\}$ with two neighbours in $A^c$ and similarly, no vertices in $A^c$ with two neighbours in $A \setminus \{x\}$. That is, $A_1 = B_1 = \emptyset$ and $G$ consists of a clique on $A_x$, a clique on $B_x$, all vertices in $A_x \cup B_x$ joined to $x$ and a perfect matching between $A_x$ and $B_x$, as in Figure~\ref{fig:r-big-half-cases}. Since $(n-1)/2 \geq 4$, choose $a, b \in A_x$ and $c, d \in B_x$ with $c, d \notin N(a) \cup N(b)$ and initially infect the set $\{a, b, c, d\}$. The vertex $x$ is infected at the first time step. At the second time step, the neighbours of $a$ and $b$ in $A^c$ and the neighbours of $c$ and $d$ in $A_x$ are infected and then all remaining vertices are infected in the third time step.
This completes the proof in the case that $\langle A \rangle_r = A$ and $|A| = \left\lceil \frac{n}{2} \right\rceil$.
\end{proof}
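The exceptional $r=4$ configuration of Figure~\ref{fig:r-big-half-cases} is fully explicit, so the infection order described in the preceding proof can be replayed computationally. The sketch below (our own illustrative code; vertex labels and the small size $|A_x|=|B_x|=5$ are ours, not the ``sufficiently large $n$'' regime) builds the two cliques, the hub $x$, and the perfect matching:

```python
def closure(adj, seed, r):
    """<seed>_r: repeatedly infect any vertex with >= r infected neighbours."""
    infected = set(seed)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in infected and len(adj[v] & infected) >= r:
                infected.add(v)
                changed = True
    return infected

# Exceptional r = 4 case: cliques on A_x and B_x, a hub x joined to everything,
# and a perfect matching a_i -- b_i between the two cliques.
k = 5
adj = {v: set() for v in
       [f"a{i}" for i in range(k)] + [f"b{i}" for i in range(k)] + ["x"]}
def join(u, v):
    adj[u].add(v); adj[v].add(u)
for i in range(k):
    join("x", f"a{i}"); join("x", f"b{i}"); join(f"a{i}", f"b{i}")
    for j in range(i + 1, k):
        join(f"a{i}", f"a{j}"); join(f"b{i}", f"b{j}")

# a, b in A_x together with c, d in B_x avoiding N(a) and N(b) percolate:
assert closure(adj, {"a0", "a1", "b2", "b3"}, 4) == set(adj)
```

On this instance, seeding instead with the two matched pairs $\{a_0, a_1, b_0, b_1\}$ stalls after infecting only $x$, which illustrates why the proof insists on $c, d \notin N(a) \cup N(b)$.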
The final remaining case to be dealt with is the following.
\begin{proposition}\label{prop:vbig-half-r}
Let $r \geq 4$, $n$ be sufficiently large, and let $G$ be a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor + (r-3)$. If there is a set $A \subseteq V(G)$ with $|A| = \lceil n/2 \rceil +1$ and $\langle A \rangle_r = A$, then $m(G, r) = r$.
\end{proposition}
\begin{proof}
By Fact~\ref{fact:big-half-r}, the set $A^c$ induces a complete graph with every vertex having exactly $r-1$ neighbours in $A$. Any vertex $x \in A$ has
\[
\deg_{A^c}(x) \geq \lfloor n/2 \rfloor + (r-3) - \lceil n/2 \rceil =
\begin{cases}
r-3 &\text{if $n$ is even},\\
r-4 &\text{if $n$ is odd}
\end{cases}.
\]
As in previous proofs, set $A_r = \{x \in A \mid \deg_{A^c}(x) \geq r\}$. Again, if $A_r = \emptyset$, then $A^c$ is a closed set of size $\lfloor n/2 \rfloor -1$ and so by Proposition~\ref{prop:vsmall-half-r}, there is a percolating set of size $r$. Thus, assume that $A_r \neq \emptyset$. Every vertex in $A\setminus A_r$ has at most $3$ non-neighbours in $A$. If $A_r \nsubseteq \langle A \setminus A_r \rangle_r$, then there is a closed set that is smaller than $A$ and so by one of Propositions~\ref{prop:vsmall-half-r}, \ref{prop:floor-half-r}, or \ref{prop:ceil-half-r}, $G$ has a percolating set of size $r$. Therefore, assume that $\langle A \setminus A_r \rangle_r = A$.
Note that since
\[
(r-1)\left( \lfloor n/2 \rfloor -1 \right) = e(A, A^c) \geq r|A_r|,
\]
then $|A_r| \leq \frac{(r-1)}{r}(\lfloor n/2 \rfloor -1) \leq \lceil n/2 \rceil - (r+3)$ as long as $n \geq 2(r^2+2r+1)$.
If there is any vertex $a \in A \setminus A_r$ with a neighbour $b \in A^c$, then since $a$ has at most $3$ non-neighbours in $A$, there are at least $r-1$ neighbours of $a$ in $A \setminus A_r$. Let $v_1, v_2, \ldots, v_{r-1}$ be any $r-1$ neighbours of $a$ in $A \setminus A_r$. Since the set $\{b, v_1, v_2, \ldots, v_{r-1}\}$ infects $a$ and hence all of $A\setminus A_r$ and subsequently $A_r$, the closure of this set has at least $|A| + 1 = \lceil n/2 \rceil +2$ vertices and hence the set percolates by Lemma~\ref{lem:big-sets-perc}.
The only case in which there can be no edges between $A \setminus A_r$ and $A^c$ is when $r = 4$, $n$ is odd, every vertex in $A^c$ has $r-1 = 3$ neighbours in $A_r$ and $G[A \setminus A_r]$ is a complete graph with all vertices in $A \setminus A_r$ adjacent to every vertex in $A_r$. In this case $|A_r| \geq 3$. If $|A_r| \geq 4$, then any set of size $4$ in $A^c$ percolates. Therefore, assume that $|A_r| = 3$. The graph is as in Figure~\ref{fig:vbig-half-r4}. Then any set consisting of two vertices from $A \setminus A_r$ and two vertices from $A^c$ will infect all of $A_r$ and subsequently the remainder of the graph.
\begin{center}
\begin{figure}[htb]
\begin{tikzpicture}
\tikzstyle{vertex}=[circle, draw=black, minimum size=5pt,inner sep=0pt]
\filldraw[draw=black, fill = gray!20] (0, 0) ellipse (0.8cm and 2cm);
\filldraw[draw=black, fill = gray!20] (4, 0) ellipse (0.8cm and 2cm);
\node at (-1.5, 0) {$A\setminus A_r$};
\node at (5.3, 0) {$A^c$};
\draw (2, -3) ellipse (0.8cm and 0.2cm);
\node[vertex] at (1.5, -3) (c1) {};
\node[vertex] at (2, -3) (c2) {};
\node[vertex] at (2.5, -3) (c3) {};
\node at (2, -3.5) {$A_r$};
\node[vertex, fill = black] at (0, 1.5) (a1) {};
\node[vertex, fill = black] at (0, 1) (a2) {};
\node[vertex, fill = white] at (0, 0.5) (a3) {};
\node at (0, 0.1) {$\vdots$};
\node[vertex, fill = white] at (0, -0.5) (a4) {};
\node[vertex, fill = white] at (0, -1) (a5) {};
\node[vertex, fill = white] at (0, -1.5) (a6) {};
\node[vertex, fill = black] at (4, 1.5) (b1) {};
\node[vertex, fill = black] at (4, 1) (b2) {};
\node[vertex, fill = white] at (4, 0.5) (b3) {};
\node at (4, 0.1) {$\vdots$};
\node[vertex, fill = white] at (4, -0.5) (b4) {};
\node[vertex, fill = white] at (4, -1) (b5) {};
\node[vertex, fill = white] at (4, -1.5) (b6) {};
\draw (a1) -- (c1) -- (a2);
\draw (a3) -- (c1) -- (a4);
\draw (a5) -- (c1) -- (a6);
\draw (c1) -- (0.1, -0.1);
\draw (b1) -- (c1) -- (b2);
\draw (b3) -- (c1) -- (b4);
\draw (b5) -- (c1) -- (b6);
\draw (c1) -- (3.9, 0.1);
\draw (a1) -- (c2) -- (a2);
\draw (a3) -- (c2) -- (a4);
\draw (a5) -- (c2) -- (a6);
\draw (c2) -- (0.1, 0);
\draw (b1) -- (c2) -- (b2);
\draw (b3) -- (c2) -- (b4);
\draw (b5) -- (c2) -- (b6);
\draw (c2) -- (3.9, 0);
\draw (a1) -- (c3) -- (a2);
\draw (a3) -- (c3) -- (a4);
\draw (a5) -- (c3) -- (a6);
\draw (c3) -- (0.1, 0.1);
\draw (b1) -- (c3) -- (b2);
\draw (b3) -- (c3) -- (b4);
\draw (b5) -- (c3) -- (b6);
\draw (c3) -- (3.9, -0.1);
\end{tikzpicture}
\caption{$r = 4$ and no edges between $A \setminus A_r$ and $A^c$.}\label{fig:vbig-half-r4}
\end{figure}
\end{center}
In all cases, there is some set of $r$ vertices that percolates and so $m(G, r) = r$.
\end{proof}
The proof of Theorem~\ref{thm:main-ub} can now be completed.
\begin{proof}[Proof of Theorem~\ref{thm:main-ub}]
Let $n$ be large enough to apply the lemmas and propositions given previously and let $G$ be a graph on $n$ vertices with $\delta(G) \geq \lfloor n/2 \rfloor + (r-3)$. By Lemma~\ref{lem:complete-bip-r}, $G$ contains a copy of $K_{r, r-1}$ and the $r$ vertices in one partition set, $A$, have closure $|\langle A \rangle_r| \geq 2r-1 > 2(r-1)$. By Lemma~\ref{lem:big-sets-perc} and Proposition~\ref{prop:no-mid-size-closed}, either $A$ percolates or else $|\langle A \rangle_r| \in \left[\lfloor n/2 \rfloor -1, \lceil n/2 \rceil + 1\right]$. In the latter case, by one of Propositions~\ref{prop:vsmall-half-r}, \ref{prop:floor-half-r}, \ref{prop:ceil-half-r}, or \ref{prop:vbig-half-r}, $G$ contains a percolating set of size $r$.
\end{proof}
\section{Open problems}\label{sec:open}
There are a number of natural questions related to the results in this paper that remain open. One could ask for the conditions on $\delta(G)$ that guarantee $m(G, r) \leq k$ for a fixed $k \geq r+1$. Following the line of inquiry in \cite{DFLMPU16} and \cite{FPR15}, one might consider the lower bounds on $\sigma_2(G)$ that guarantee that $m(G, r) = r$ for $r \geq 3$. A problem that may be quite technical would be the characterization of those small graphs for which $\delta(G) = \lfloor n/2 \rfloor + \min\{1, r-3\}$ but $m(G, r) > r$.
| {
"timestamp": "2017-04-03T02:03:17",
"yymm": "1703",
"arxiv_id": "1703.10741",
"language": "en",
"url": "https://arxiv.org/abs/1703.10741",
"abstract": "The $r$-neighbour bootstrap process is an update rule for the states of vertices in which `uninfected' vertices with at least $r$ `infected' neighbours become infected and a set of initially infected vertices is said to \\emph{percolate} if eventually all vertices are infected. For every $r \\geq 3$, a sharp condition is given for the minimum degree of a sufficiently large graph that guarantees the existence of a percolating set of size $r$. In the case $r=3$, for $n$ large enough, any graph on $n$ vertices with minimum degree $\\lfloor n/2 \\rfloor +1$ has a percolating set of size $3$ and for $r \\geq 4$ and $n$ large enough (in terms of $r$), every graph on $n$ vertices with minimum degree $\\lfloor n/2 \\rfloor + (r-3)$ has a percolating set of size $r$. A class of examples are given to show the sharpness of these results.",
"subjects": "Combinatorics (math.CO)",
"title": "Minimum degree conditions for small percolating sets in bootstrap percolation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806546550657,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.707727431593115
} |
https://arxiv.org/abs/1002.2033 | Explicit integrable systems on two dimensional manifolds with a cubic first integral | A few years ago Selivanova gave an existence proof for some integrable models, in fact geodesic flows on two dimensional manifolds, with a cubic first integral. However the explicit form of these models hinged on the solution of a nonlinear third order ordinary differential equation which could not be obtained. We show that an appropriate choice of coordinates allows for integration and gives the explicit local form for the full family of integrable systems. The relevant metrics are described by a finite number of parameters and lead to a large class of models on the manifolds ${\mb S}^2, {\mb H}^2$ and $P^2({\mb R})$ containing as special cases examples due to Goryachev, Chaplygin, Dullin, Matveev and Tsiganov. | \section{Introduction}
Let $M$ be an $n$-dimensional smooth manifold with metric $g(X,Y)=g_{ij}\,X^i\,Y^j$ and let
$T^*\,M$ be its cotangent bundle with coordinates $(x,P)$, where $P$ is a covector
from $T^*_x\,M$. Let us recall that $T^*M$ is a smooth symplectic $2n$-manifold with respect
to the standard 2-form $\om=dP_i\wedge dx^i$ which induces the Poisson bracket
\[\{f,g\}=\sum_{i=1}^n\left(\frac{\pt f}{\pt x^i}\,\frac{\pt g}{\pt P_i}-
\frac{\pt f}{\pt P_i}\,\frac{\pt g}{\pt x^i}\right).\]
In $T^*\,M$ the geodesic flow is defined by the Hamiltonian
\beq\label{notH}
H=K+V,\qq K=\frac 12\sum_{i,j=1}^n\,g^{ij}(x)\,P_i\,P_j,\qq V=V(x),\eeq
where $g^{ij}$ is the inverse metric of $g_{ij}$.
An ``observable'' $f:\ T^*M\to {\mb R}$, which can be written locally
\[f=\sum_{i_1+\cdots+i_n\leq m}\,f^{i_1,\cdots,i_n}(x)\,P_{i_1}\,\cdots P_{i_n},\qq \#(f)=m,\]
is a constant of motion iff $\{H,f\}=0$. A hamiltonian system is said to be integrable in the
Liouville sense if there exist $n$ constants of motion (including $H$), generically independent
and in pairwise involution with respect to the Poisson bracket.
In what follows we will deal exclusively with integrable systems defined on two dimensional manifolds. An integrable system then consists of two independent observables $H$ and $Q$ with $\{H,Q\}=0$.
The general line of attack of this problem is based on the integer $m=\#(Q)$. For $m=1$ $M$ is
a surface of revolution and for $m=2$ $M$ is a Liouville surface \cite{Da}.
For higher values
of $m$ only particular examples have been obtained, some of them in explicit form. For
$M={\mb S}^2$ and $m=3$ the oldest explicit examples (early twentieth century) were due to
Goryachev and Chaplygin on the one hand and to Chaplygin on the other hand
(see \cite[p.~483]{bkf} and \cite{Ts} for the detailed references). On the same manifold
with $m=4$ there is the famous Kovalevskaya system \cite{Ko} and some extension
due to Goryachev (see \cite{Se4} for the reference).
More recently there was a revival of this subject due to Selivanova \cite{Se}, \cite{Se4}
and Kiyohara \cite{Ki}, who proved {\em existence} theorems for integrable systems: for
$m=3,4$ in the case of the former author and for any $m\geq 3$ in the case of the latter. As observed by
Kiyohara himself, for $m=3$ the two classes of models are markedly different. Even more
recently several new explicit examples for $m=3$ were given by Dullin and Matveev \cite{dm}
and Tsiganov \cite{Ts}.
In this work we will focus on Selivanova's integrable systems with a cubic first integral
discussed in \cite{Se}. The existence theorems she proved are not explicit since there remains
to solve a nonlinear ODE of third order. In Tsiganov's article, too, a non-linear ODE of
fourth order appears, for which only special solutions could be obtained.
We will show that the solutions of these ODEs are not required: the use of appropriate
coordinates allows one to obtain locally the explicit form of the full family of integrable systems.
Having the local form of the metric $g$ on $M$ one can determine the global structure of the manifold. In view of the many parameters exhibited by the metric, the global analysis gives
rise to plenty of integrable models, some of which were discovered only recently.
The plan of the article is the following: in Section 2 we consider the class of models
analyzed by Selivanova with the following leading terms for the cubic observable:
\[Q=p\,\pf^3+2q\,K\,\pf+\cdots,\qq p\in{\mb R},\quad q\geq 0,\]
and the general differential system resulting from $\{H,Q\}=0$ is given.
In Section 3 we first integrate the special case where $q=0$: the differential system is reduced to
a second order non-linear ODE. Its integration gives the local form of the integrable system and
the global analysis determines the manifolds according to the parameters that appeared in the integration process.
In Section 4 we consider the general case $q>0$. Here we have linearized, by an appropriate choice
of the coordinates, the possibly non-linear ODE of third order. In Section 5, with the explicit
local form of the metric, it is then straightforward (but lengthy because an enumeration of
cases is required) to determine on which manifolds the metric is defined. We check that all
the previously explicitly known integrable examples are indeed recovered.
\section{Cubic first integral}
Let us consider the hamiltonian (\ref{notH})
with
\beq
K=\frac 12\Big(\pth^2+a(\tht)\pf^2\Big),\qq V=f(\tht)\cos\phi+g(\tht),\qq f(\tht)\not\equiv 0,
\eeq
and the cubic observable
\beq
Q=Q_3+Q_1,\eeq
with
\beq\left\{\barr{l}
Q_3=p\,\pf^3+2q\,K\,\pf,\qq p\,\in{\mb R},\ q\geq 0,\\[4mm] Q_1=\chi(\tht)\sin\phi\,\pth+\Big(\be(\tht)+\ga(\tht)\cos\phi\Big)\pf.\earr\right.\eeq
\begin{nlem} The constraint $\{H,Q\}=0$ is equivalent to the following differential system:
\beq\label{eq0}\barr{lc}
(a)\quad & \chi\,\dot{f}=\ga f,\qq \chi \dot{g}=\be\,f,\qq\quad\Big(\ \dot{}=D_{\tht}\Big),\\[4mm]
(b)\quad & \dst\dot{\chi}=-q\,f,\qq\dot{\be}=2q\,\dot{g},\qq \dot{\ga}+\chi\,a=2q\,\dot{f},\qq a\ga+\chi\,\frac{\dot{a}}{2}=3(p+qa)f.\earr\eeq
\end{nlem}
\nin{\bf Proof:}
The relation $\{H,Q\}=0$ splits into three constraints
\beq
\{K,Q_3\}=0,\qq \{K,Q_1\}+\{V,Q_3\}=0,\qq \{V,Q_1\}=0.\eeq
The first is identically true, the second one is equivalent to the relations (\ref{eq0}\,b) while the last one is equivalent to (\ref{eq0}\,a).$\quad\Box$
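The claim that $\{K,Q_3\}=0$ holds identically can be confirmed with a short symbolic computation. The following sympy sketch (symbol names are ours) implements the canonical Poisson bracket from the Introduction and checks that the bracket vanishes because $a$ depends on $\tht$ only:

```python
import sympy as sp

th, phi, pth, pphi, p, q = sp.symbols('theta phi p_theta p_phi p q')
a = sp.Function('a')(th)  # a(theta): no phi dependence

def pb(f, g, coords=((th, pth), (phi, pphi))):
    """Canonical Poisson bracket {f, g} on T^*M."""
    return sum(sp.diff(f, x) * sp.diff(g, P) - sp.diff(f, P) * sp.diff(g, x)
               for x, P in coords)

K = sp.Rational(1, 2) * (pth**2 + a * pphi**2)
Q3 = p * pphi**3 + 2 * q * K * pphi

# Q3 is built from K and p_phi alone, and {K, p_phi} = dK/dphi = 0,
# so {K, Q3} vanishes identically:
assert sp.simplify(pb(K, pphi)) == 0
assert sp.simplify(pb(K, Q3)) == 0
```

The same `pb` helper could in principle be used to expand the remaining two constraints of the Lemma, though extracting the coefficient equations (\ref{eq0}) is lengthier.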
The special case $q=0$ is rather difficult to obtain as the limit of the general case $q\neq 0$,
so we will first work it out completely.
\section{The special case $q=0$}
We can take $p=1$ and obvious integrations give
\beq\label{sc1}
\chi=\chi_0>0,\qq \be=\be_0\in{\mb R},\qq \ga=\chi_0\,\frac{\dot{f}}{f},
\qq \dot{g}=\frac{\be_0}{\chi_0}\,f,\qq a=-\frac{\dot{\ga}}{\chi_0},
\eeq
and the last equation
\beq\label{sc2}
\ddot{\ga}+2\,\frac{\dot{f}}{f}\,\dot{\ga}+6f=0.\eeq
An appropriate choice of coordinates does simplify matters:
\begin{nlem} The differential equation for $u=\dot{f}$ as a function of the
variable $x=f$ is given by
\beq\label{eqd}
u\left(\frac{uu'}{x}\right)'+cx=0,\qq c=\frac 6{\chi_0}>0.\qq\qq\qq \Big(\ '=D_x \Big).\eeq
\end{nlem}
\nin{\bf Proof:}
The relations in (\ref{sc1}) become
\beq
g'=\frac{\be_0}{\chi_0}\,\frac xu,\qq \ga=\chi_0\,\frac ux,\qq a=-u\left(\frac ux\right)',\eeq
and (\ref{sc2}) gives (\ref{eqd}).$\quad\Box$
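For the reader's convenience we add the computation behind this last step, using only (\ref{sc2}) and the chain rule $D_{\tht}=u\,D_x$ (valid since $x=f$ and $u=\dot{f}$):

```latex
Since $x=f$ and $u=\dot{f}$, one has $D_{\tht}=u\,D_x$, so that
\beq
\dot{\ga}=\chi_0\,u\Big(\frac ux\Big)',\qq
\ddot{\ga}=\chi_0\,u\left(u\Big(\frac ux\Big)'\right)',\qq \frac{\dot{f}}{f}=\frac ux,
\eeq
and (\ref{sc2}) becomes, upon division by $\chi_0$,
\beq
u\left(u\Big(\frac ux\Big)'\right)'+\frac{2u^2}{x}\Big(\frac ux\Big)'+cx=0,
\qq c=\frac 6{\chi_0}.
\eeq
The identity $\,\dst u\Big(\frac{u^2}{x^2}\Big)'=\frac{2u^2}{x}\Big(\frac ux\Big)'$
then collapses the first two terms into $\,\dst u\Big(\frac{uu'}{x}\Big)'$,
which is precisely (\ref{eqd}).
```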
The solution of this ODE follows from
\begin{nlem} The general solution of (\ref{eqd}) is given by
\beq\label{eqds1}
u=-\frac{\ze^2+c_0}{2c},\eeq
with
\beq\label{eqds2}
\ze^3+3c_0\,\ze-2\rho=0,\qq\qq 2(\rho-\rho_0)=3c^2x^2,
\eeq
and integration constants $(\rho_0,c_0)$.
\end{nlem}
\nin{\bf Proof:}
Let us define $\,\dst \ze'=-c\,x/u$. This allows a first integration of (\ref{eqd}), giving
$\,\dst\frac{uu'}{x}=\ze$. From this we deduce
\[cu'=-\ze\ze'\qq\Longrightarrow\qq 2c\,u=-\ze^2-c_0,\]
which in turn implies
\beq
\Big[\ze^2+c_0\Big]\ze'=2c^2\,x\quad\Longrightarrow\quad
\ze^3+3c_0\,\ze-2\rho=0,\eeq
which concludes the proof. $\quad\Box$
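The solution can also be verified directly, a check we add here (it is not part of the original proof):

```latex
Differentiating the cubic in (\ref{eqds2}) with respect to $x$ gives
\beq
3(\ze^2+c_0)\,\ze'=2\rho'=6c^2x\qq\Longrightarrow\qq
\ze'=\frac{2c^2x}{\ze^2+c_0}=-\frac{cx}{u},
\eeq
where we used $2c\,u=-\ze^2-c_0$ from (\ref{eqds1}). Then
\beq
u'=-\frac{\ze\ze'}{c}=\frac{\ze x}{u}\qq\Longrightarrow\qq
\frac{uu'}{x}=\ze,\qq u\Big(\frac{uu'}{x}\Big)'=u\,\ze'=-cx,
\eeq
so (\ref{eqd}) is indeed satisfied.
```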
It is now clear that the initial coordinates $(\tht,\phi)$ chosen on $S^2$ will not lead,
at least generically, to a simple form of the hamiltonian! To achieve a real simplification
of the observables we perform the symplectic change of coordinates
$(\tht,\phi,\pth,\pf)\ \to\ (\ze,\phi,\pz,\pf)$, which gives:
\begin{nth}\label{th1}
Locally, the integrable system has for explicit form
\beq\label{sys0}
\left\{\barr{l}\dst
H=\frac 12\left(F\,\pz^2+\frac{G}{4F}\,\pf^2\right)+\chi_0\,\sqrt{F}\,\cos\phi
-\be_0\,\ze,\\[5mm]\dst
Q=\pf^3-2\chi_0\Big(\sqrt{F}\,\sin\phi\,\pz+(\sqrt{F}\,)'\,\cos\phi\,\pf\Big)
+2\be_0\,\pf,\earr\right.\qq \Big(\ '=D_{\ze}\Big),\eeq
with
\beq\label{FetG}
F=-2\rho_0+3c_0\,\ze+\ze^3,\qq\qq G=9c_0^2+24\rho_0\,\ze-18c_0\,\ze^2-3\ze^4.\eeq
\end{nth}
\nin{\bf Proofs:}
One may obtain these formulas by elementary computations and some scalings of $\chi_0,\,\be_0$
and $H$.
Alternatively, one can check that (\ref{FetG}) implies the relations
\beq\label{usid0}
G'=-12\,F,\qq G=F'^2-2F\,F'',\eeq
which allows for a direct check of $\,\{H,Q\}=0$.
As proved in \cite{Se} this system does not exhibit any linear or quadratic constant of motion
and $\,(H,Q)$ are algebraically independent.
$\quad\Box$
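The identities (\ref{usid0}) are elementary to check from (\ref{FetG}); we spell out the computation:

```latex
From $F=\ze^3+3c_0\,\ze-2\rho_0$ one gets $F'=3(\ze^2+c_0)$ and $F''=6\ze$, hence
\beq
F'^2-2F\,F''=9(\ze^2+c_0)^2-12\,\ze\,(\ze^3+3c_0\ze-2\rho_0)
=9c_0^2+24\rho_0\,\ze-18c_0\,\ze^2-3\ze^4=G,
\eeq
while
\beq
G'=24\rho_0-36c_0\,\ze-12\ze^3=-12\,(\ze^3+3c_0\ze-2\rho_0)=-12\,F.
\eeq
```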
We are now in a position to analyze the global geometric aspects related to the metric
\beq\label{met}
g=\frac{d\ze^2}{F}+\frac{4F}{G}\,d\phi^2,\qq\quad\phi\in [0,2\pi).\eeq
One has first to impose the positivity of both $F$ and $G$ for this metric to be riemannian.
This gives for $\ze$ some interval $I$ whose end-points are possible singularities of the metric.
To ascertain that the metric is defined on some manifold one has to ensure that these
singularities are apparent ones and not true curvature singularities.
Let us define, for the cubic $F$, its discriminant $\De=c_0^3+\rho_0^2$.
\begin{nth}\label{th2}
The metric (\ref{met}):
\brm
\item[(i)] For $\De<0$ is defined on ${\mb S}^2$ iff
\[F=(\ze-\ze_0)(\ze-\ze_1)(\ze-\ze_2),\qq\quad \ze_0<\ze<\ze_1<\ze_2.\]
The change of coordinates
\beq\label{notq0}
{\rm sn}\,(u,k^2)=\sqrt{\frac{\ze-\ze_0}{\ze_1-\ze_0}},\qq
k^2=\frac{\ze_1-\ze_0}{\ze_2-\ze_0}\in\,(0,1),\eeq
gives for the integrable system\footnote{We use the shorthand notation: $s,\,c,\,d$ respectively
for ${\rm sn}\,(u,k^2),\,{\rm cn}\,(u,k^2),\,{\rm dn}\,(u,k^2)$.}
\beq\label{sys1q0}\left\{\barr{l}\dst
H=\frac 12\left(P_u^2+\frac{D(u)}{s^2 c^2 d^2}\pf^2\right)+\chi_0\,k^2\,s c d\,\cos\phi
-\be_0\,k^2 s^2,\\[5mm]\dst
Q=4\pf^3-\chi_0\left(\sin\phi\,P_u+\frac{(scd)'}{scd}\,\cos\phi\,\pf\right)+2\be_0\,\pf,\\[5mm]
D(u)=(1-k^2 s^4)^2-4k^2\,s^4c^2d^2,\qq u\in(0,K).\earr\right.\eeq
\item[(ii)] For $\De=0$ is defined on ${\mb H}^2$ iff
\[F=(\ze-\ze_1)^2(\ze+2\ze_1),\qq\quad -2\ze_1<\ze<\ze_1,\quad (\ze_1>0).\]
The change of coordinates
\beq\label{notspeq0}
\ze=\ze_1(-2+3\,\tanh^2 u),\qq u\in(0,+\nf),\eeq
gives for the integrable system\footnote{We use the shorthand notation $S,\,C,\,T$ respectively for $\sinh u,\,\cosh u,\,\tanh u$.}
\beq\label{sys2q0}
\left\{\barr{l}\dst
H=M_1^2+M_2^2-\left(1-\frac 3{C^2}\right)M_3^2+\chi_0\,T(1-T^2)\,\cos\phi-\be_0\,T^2,\\[5mm]\dst
Q=4M_3^3-\chi_0\Big(M_1-3T\,\cos\phi\,M_3\Big)+2\be_0\,M_3.\earr\right.\eeq
\item[(iii)] For $\De>0$ is not defined on a manifold.
\erm
\end{nth}
\nin{\bf Proof of (i):}
If $\,\De<0$ the cubic $F$ has three simple real roots $\ze_0<\ze_1<\ze_2$.
If we take $\,\ze\in(\ze_2,+\nf)$ then $F$ is positive. The relation $\,G'=-12F$ shows
that in this interval $G$ is decreasing from $G(\ze_2)=F'^2(\ze_2)>0$ to
$-\nf$ and will vanish for some $\wh{\ze}>\ze_2$. Hence to ensure positivity for $F$ and $G$
we must restrict $\ze$ to the interval $(\ze_2,\wh{\ze})$. Since at $\ze=\wh{\ze}$ the
function $F$ does not vanish while $G$ does, this point is a curvature
singularity and the metric cannot be defined on a manifold.
The positivity of $F$ is also ensured if we take $\ze\in(\ze_0,\ze_1)$. In this
interval $G$ decreases monotonically from $G(\ze_0)$ to $\,G(\ze_1)=F'^2(\ze_1)>0$.
Let us analyze the singularities at the end points. For $\ze$ close to $\ze_0$ the
approximate metric is
\beq
g\approx \frac 4{F'(\ze_0)}\left[\frac{d\ze^2}{4(\ze-\ze_0)}
+\frac{F'^2(\ze_0)}{G(\ze_0)}\,(\ze-\ze_0)\,d\phi^2\right].
\eeq
The relation (\ref{usid0}) gives $G(\ze_0)=F'^2(\ze_0)$, so the change of variable
$\rho=\sqrt{\ze-\ze_0}$ allows us to write
\beq
g\approx \frac 4{F'(\ze_0)}\Big(d\rho^2+\rho^2\,d\phi^2\Big),\eeq
which shows that $\rho=0$ is an {\em apparent} singularity, due to the choice of polar coordinates, which can be removed by switching back to Cartesian coordinates. So the point $\ze=\ze_0$ can be added to the manifold.
A similar argument works for $\ze=\ze_1$. In fact these end-points are geometrically the poles of the manifold, and the index theorem for $\,\pt_{\phi}$ gives the Euler characteristic $\,\chi=2$, showing that the manifold is indeed ${\mb S}^2$.
Then, using the change of variable (\ref{notq0}), it is a routine exercise in
the theory of elliptic functions to perform the symplectic change of coordinates
$\,(\ze,\phi,\pz,\pf)\ \to\ (u,\phi,P_u,\pf)$ which, after several scalings of the
observables and of their parameters, gives (\ref{sys1q0}). Notice that one can also check, by direct computation, that $\,\{H,Q\}=0$ from the formulas given in (\ref{sys1q0}). $\quad\Box$
\vspace{2mm}
\nin{\bf Proof of (ii):}
In this case we have
\[F=(\ze+2\ze_1)(\ze-\ze_1)^2,\qq\quad G=-3(\ze+3\ze_1)(\ze-\ze_1)^3,\qq\qq \ze_1=-\rho_0^{1/3}.\]
For $\ze_1<0$ the positivity of $F$ implies $\ze\in(2|\ze_1|,+\nf)$ and $G$ decreases and vanishes for $\wh{\ze}=3|\ze_1|$ leading to a curvature singularity. The case
$\ze_1=0$ is also excluded since then $G\leq 0$ and the remaining case is $\ze_1>0$. The
positivity of $F$ and $G$ requires $\ze\in(-2\ze_1,\ze_1)$. The singularity structure is most
conveniently discussed thanks to the coordinates change (\ref{notspeq0})
which brings the metric to the form
\beq\label{methyp}
g=\frac 4{3\ze_1}\left\{du^2+\frac{\sinh^2 u}{1+3\tanh^2 u}\,d\phi^2\right\},\qq
u\in(0,+\nf),\eeq
from which we conclude that the manifold is ${\mb H}^2$.
Then starting from (\ref{sys0}), the symplectic change of coordinates $(\ze,\phi,\pz,\pf)\ \to\ (u,\phi,P_u,\pf)$, and some scalings, gives
\beq\label{sys2bq0}\left\{\barr{l}\dst
H=\frac 12\left(P_u^2+\frac{(1+3\,T^2)}{S^2}\,\pf^2\right)
+\chi_0\,T(1-T^2)\cos\phi-\be_0\,T^2,\\[5mm]\dst
Q=4\pf^3-\chi_0\left(\sin\phi\,P_u+\frac{1-3T^2}{T}\,\cos\phi\,\pf
\right)+2\be_0\,\pf,\earr\right.\eeq
Defining the generators of the $so(2,1)$ Lie algebra in $T^*{\mb H}^2$ to be
\beq\label{genM}
M_1=\sin\phi\,P_u+\frac{\cos\phi}{T}\,\pf,\qq
M_2=\cos\phi\,P_u-\frac{\sin\phi}{T}\,\pf,\qq M_3=\pf,\eeq
transforms the observables (\ref{sys2bq0}) into (\ref{sys2q0}).$\quad\Box$
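As a consistency check (our addition, not part of the original argument), one may verify that the generators (\ref{genM}) do close on $so(2,1)$. With the Poisson bracket convention $\{f,g\}=\pt_u f\,\pt_{P_u}g-\pt_{P_u}f\,\pt_u g+\pt_{\phi}f\,\pt_{\pf}g-\pt_{\pf}f\,\pt_{\phi}g$ and $T'=1-T^2$, a short computation gives:

```latex
\beq
\{M_1,M_2\}=M_3,\qq \{M_2,M_3\}=-M_1,\qq \{M_3,M_1\}=-M_2,
\qq\quad M_1^2+M_2^2-M_3^2=P_u^2+\frac{\pf^2}{\sinh^2 u},
\eeq
```

The relative signs display the $(+,+,-)$ signature of $so(2,1)$, and the quadratic combination is the kinetic energy of the canonical metric on ${\mb H}^2$, as it should be.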
\vspace{2mm}
\nin{\bf Proof of (iii):}
For $\,\De>0$ the cubic $F$ has a single real zero $\ze_0$. The
positivity of $F$ requires that $\ze\in\,(\ze_0,+\nf)$. Since $G'=-12F$ the function
$G$ decreases from $G(\ze_0)$ to $-\nf$. Since $G(\ze_0)>0$ there exists $\wh{\ze}>\ze_0$ for
which $G(\wh{\ze})=0$. So positivity restricts $\ze\in(\ze_0,\wh{\ze})$ and
$\,\wh{\ze}$ is a curvature singularity showing that the metric cannot be defined
on a manifold.$\quad\Box$
\nin{\bf Remarks:}
\brm
\item The integrable system (\ref{sys2q0}) corresponds to the limit of (\ref{sys1q0}) when
$\ze_2\to\ze_1$ or $k^2\to 1$. Then the elliptic functions degenerate into hyperbolic functions.
Let us emphasize that in this limit the observables behave smoothly while the manifold changes
drastically. Let us also observe that $H$ is globally defined on the manifold while $Q$ is not.
\item In \cite{Se} Selivanova proved an existence theorem for an integrable system
on $S^2$ with a cubic observable (case (i) of her Theorem 1.1). The observables are
\beq\label{eqSe1}\barr{l}\dst
H=\frac{\psi'^2(y)}{2}\Big(P_y^2+\pf^2\Big)
+\frac{\psi'^2(y)}{2}(\psi(y)-\psi''(y))\,\cos\phi,\\[5mm]\dst
Q=\pf^3-\frac 32\,\psi'(y)\,\sin\phi\,P_y+\frac 32\,\psi(y)\,\cos\phi\,\pf,\earr
\qq\Big(\ '=D_y\Big),\eeq
where $\psi(y)$ is a solution of the ODE
\beq\label{eqSe2}
\psi'\,\psi''=\psi\,\psi''-2\psi''^2+\psi'^2+\psi^2,\qq \psi(0)=0,\ \psi'(0)=1,\ \psi''(0)=\tau.\eeq
Comparing (\ref{eqSe1}) and (\ref{sys0}) for $\be_0=0$ makes it obvious that
we are dealing with the same integrable system, up to diffeomorphism. The
{\em local} identification follows from
\beq\label{Seli}
\psi(y)=-\frac{(\ze^2+c_0)}{2\sqrt{F}},
\qq\quad \frac{\sqrt{G}}{F}\,d\ze=\pm\sqrt{3}\,dy, \eeq
and we have checked that the ODE (\ref{eqSe2}) is a consequence of the
relations (\ref{Seli}) and (\ref{usid0}). We see clearly that Selivanova's choice of the
coordinate $y$ led to a complicated ODE, very difficult to solve. In fact one should rather
find coordinates such that the ODE becomes tractable, as we did.
\erm
\section{Local structure of the integrable systems for $q>0$}
As already observed, if one insists on working with the variable $\tht$, the differential system (\ref{eq0}) can be reduced either to a third order \cite{Se} or to a fourth order \cite{Ts} non-linear ODE. The key to a full integration of this system is again an appropriate choice of coordinates on the manifold.
\begin{nth}
Locally, the integrable system $\,(H,Q)$ has for explicit form
\beq\label{sysq}
\left\{\barr{l}\dst
H=\frac 1{2\ze}\Big(F\,P_{\ze}^2+\frac G{4\,F}\,\pf^2\Big)
+\frac{\sqrt{F}}{2q\ze}\,\cos\phi+\frac{\be_0}{2q\ze},\\[5mm]\dst
Q=p\,\pf^3+2q\,H\,\pf-\sqrt{F}\,\sin\phi\,P_{\ze}
-(\sqrt{F}\,)'\,\cos\phi\,\pf,\earr\right.\qq \Big(\ '=D_{\ze}\Big),\eeq
with the polynomials
\beq
F=c_0+c_1\ze+c_2\ze^2+\frac pq\,\ze^3,\qq G=F'^2-2F\,F''.\eeq
\end{nth}
{\bf Proofs:}
Starting from (\ref{eq0}) the functions $\be$ and $g$ are easily determined to be
\beq
\be=\frac{\be_0}{\chi^2},\qq g=\frac{\be_0}{2q\chi^2}.\eeq
The functions $\ga$ and $a$ can be expressed in terms of $f$ and its derivatives with
respect to $\,\chi$ as
\beq
\ga=-q\chi\,f',\qq a=-q^2\left(ff''+\frac 3{\chi}\,ff'\right).\eeq
Then the last relation in (\ref{eq0}) reduces to a second order {\em linear} ODE
\beq
\chi\,(ff')''+9\,(ff')'+\frac{15}{\chi}\,ff'=\frac{6p}{q^3},\eeq
which is readily integrated to
\beq\label{ff1}
f=\pm\sqrt{c_2+f_1\,\chi^2+\frac{c_1}{\chi^2}+\frac{c_0}{\chi^4}},\qq\qq f_1=\frac p{4q^3}.\eeq
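To see why (\ref{ff1}) solves the linear ODE, we add the verification: inserting $f^2=c_2+f_1\chi^2+c_1/\chi^2+c_0/\chi^4$ one computes

```latex
\beq
ff'=\frac 12\,(f^2)'=f_1\chi-\frac{c_1}{\chi^3}-\frac{2c_0}{\chi^5},\qq
(ff')'=f_1+\frac{3c_1}{\chi^4}+\frac{10c_0}{\chi^6},\qq
(ff')''=-\frac{12c_1}{\chi^5}-\frac{60c_0}{\chi^7},
\eeq
hence
\beq
\chi\,(ff')''+9\,(ff')'+\frac{15}{\chi}\,ff'
=24\,f_1+(-12+27-15)\,\frac{c_1}{\chi^4}+(-60+90-30)\,\frac{c_0}{\chi^6}=24\,f_1,
\eeq
so the ODE reduces to $24\,f_1=6p/q^3$, i.e. $f_1=p/(4q^3)$.
```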
The remaining functions become
\beq\label{aga}\barr{l}\dst
a=\frac{q^2}{f^2}\left(\frac{c_1^2-4c_0c_2}{\chi^6}-\frac{12c_0f_1}{\chi^4}-\frac{6c_1f_1}{\chi^2}
-4c_2f_1-3f_1^2\,\chi^2\right),\\[5mm]\dst
\ga=\frac q{f}\left(-f_1 \chi^2+\frac{c_1}{\chi^2}+\frac{2c_0}{\chi^4}\right).\earr\eeq
The observables can be written, up to a scaling of the parameters, in terms of $F$ and $G$ defined by
\beq\label{basdef}\barr{l}\dst
F=4q^2\,\chi^4\,f^2=c_0+c_1\ze+c_2\ze^2+g_1\ze^3,\qq g_1=\frac pq,\qq \ze=\chi^2,\\[4mm]\dst
G=16q^2\,\chi^6\,f^2\,a=c_1^2-4c_0c_2-12c_0g_1\ze-6c_1g_1\ze^2-4c_2g_1\ze^3-3g_1^2\ze^4.
\earr\eeq
To simplify matters, the symplectic change of coordinates
$(\tht,\phi,\pth,\pf)\ \to\ (\ze,\phi,P_{\ze},\pf)$
gives the required result, up to scalings.
Alternatively (\ref{basdef}) implies the relations
\beq\label{usidq}
G'=-12\frac pq\,F,\qq G=F'^2-2F\,F'',\eeq
which allow a direct check of $\,\{H,Q\}=0$. As proved in \cite{Se} this system does not exhibit any
other conserved observable linear or quadratic in the momenta, and $\,(H,Q)$ are
algebraically independent.$\quad\Box$
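For instance, the first relation in (\ref{usidq}) follows at once from (\ref{basdef}) (a check we add; the second relation is verified the same way by expanding both sides):

```latex
\beq
G'=-12c_0g_1-12c_1g_1\,\ze-12c_2g_1\,\ze^2-12g_1^2\,\ze^3
=-12\,g_1\,\big(c_0+c_1\ze+c_2\ze^2+g_1\ze^3\big)=-12\,\frac pq\,F.
\eeq
```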
\vspace{4mm}
\nin {\bf Remarks:}
\brm
\item The limit $q=0$ is quite tricky: this is why we analyzed it separately in the
previous section.
\item Let us observe that the kinetic parts of $H$ in (\ref{sys0}) and (\ref{sysq}) are
conformally related.
\item It is still possible to come back to the coordinate $\tht$ but the price to pay is the
integration of the relation
\beq
\sqrt{\frac{\ze}{F}}\,d\ze=-d\tht,\eeq
which can be done using elementary functions for $c_0=0$.
\erm
\section{The global structure}
Let us now examine the global geometric aspects of the metric
\beq\label{metq}
g=\frac{\ze}{F}\,d\ze^2+\frac{4\,\ze\,F}{G}\,d\phi^2,\qq\quad \phi\in[0,2\pi),\eeq
taking into account the following observations:
\brm
\item The positivity constraints are $\ze F(\ze)>0$ and $G(\ze)>0$. They define the
end-points of some interval $I$ for $\ze$. In some cases, discussed in detail later on,
one can obtain extensions beyond some of the end-points.
\item For the observables to be defined it is required that $\,F\geq 0\quad\forall\ze\in I$.
\item As already observed any point $\ze_0$ with $F(\ze_0)\neq 0$ and $G(\ze_0)=0$ is a curvature singularity.
\item The point $\ze=0$ is a curvature singularity for $F(0)\neq 0$ and $G(0)\neq 0$.
\erm
In order to have a complete description of all the possible integrable models, we will present them
in three sets:
\brm
\item The first set, $p=0$, with a simpler geometric structure.
\item The second set, $p>0$, somewhat similar to the $q=0$ case.
\item The third set, $p<0$, which displays the richest structure.
\erm
\subsection{First set of integrable models}
Since $\,p=0$ we have
\beq\label{notp0}
F=c_0+c_1\ze+c_2\ze^2=c_2(\ze-\ze_1)(\ze-\ze_2),\qq G=c_1^2-4c_0c_2,
\qq\quad (c_0,\,c_1,\,c_2)\in{\mb R}^3.\eeq
\begin{nth}
In this set we have the following integrable models:
\brm
\item[(i)] Iff $c_2>0$ and $0<\ze_2<\ze$ the metric (\ref{metq}) is defined in $\,{\mb H}^2$ and
\beq\label{si1p0}\left\{\barr{l}\dst
H=\frac 12\,\frac{M_1^2+M_2^2-M_3^2}{\rho+\cosh u}
+\frac{\alf\,\sinh u\,\cos\phi+\be}{\rho+\cosh u},\qq u\in(0,+\nf),\\[5mm]\dst
Q=H\,M_3-\alf\,M_1,\qq \rho=\frac{\ze_2+\ze_1}{\ze_2-\ze_1}\in(-1,+\nf).\earr\right.\eeq
\item[(ii)] Iff $c_2<0$ and $\,0<\ze_1<\ze<\ze_2$ the metric
(\ref{metq}) is defined in $\,{\mb S}^2$ and
\beq\label{si2p0}\left\{\barr{l}\dst
H=\frac 12\,\frac{L_1^2+L_2^2+L_3^2}{1+\rho\cos\tht}
+\frac{\alf\,\rho\,\sin\tht\,\cos\phi+\be}{1+\rho\cos\tht},\qq\tht\in(0,\pi),\\[5mm]\dst
Q=H\,L_3+\alf\,L_1,\qq \rho=\frac{\ze_2-\ze_1}{\ze_2+\ze_1}\in(0,+1).\earr\right.\eeq
\item[(iii)] Iff $c_2=0$ the metric
(\ref{metq}) is defined in $\,{\mb R}^2$ and
\beq\label{si3p0}\left\{\barr{l}\dst
H=\frac 12\,\frac{P_x^2+P_y^2}{1+\rho^2(x^2+y^2)}
+\frac{2\alf\,\rho^2\,x+\beta}{1+\rho^2(x^2+y^2)},\qq (x,y)\in{\mb R}^2,\\[5mm]\dst
Q=H\,L_z-\alf\,P_y,\qq \rho>0.\earr\right.\eeq
\erm
In all cases $\alf$ and $\be$ are free parameters.
\end{nth}
\nin{\bf Proof of (i):}
The positivity condition $G>0$ shows that $F$ has two real and distinct roots $\ze_1<\ze_2$,
so we will write
\beq
F=c_2(\ze-\ze_1)(\ze-\ze_2),\qq G=c_2^2(\ze_1-\ze_2)^2.\eeq
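That $G$ is constant is immediate from $G=F'^2-2F\,F''$ applied to the quadratic $F$, a computation we spell out:

```latex
\beq
F'=c_2(2\ze-\ze_1-\ze_2),\qq F''=2c_2,
\eeq
so that
\beq
F'^2-2F\,F''=c_2^2\Big[(2\ze-\ze_1-\ze_2)^2-4(\ze-\ze_1)(\ze-\ze_2)\Big]
=c_2^2\,(\ze_1-\ze_2)^2.
\eeq
```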
Then, imposing the positivity of $\ze F$, the iff part of the proof proceeds by an enumeration
of all possible orderings of the triplet $(0,\ze_1,\ze_2)$, including the possibility of one $\ze_i$
being zero. Taking into account the remarks given at the end of Section 4, one concludes that for $c_2>0$ we must take $\ze>\ze_2>0$. The change of coordinates
\[\ze=\frac{\ze_2-\ze_1}{2}\Big(\rho+\cosh u\Big),\qq(\ze_2,+\nf)\ \to\ (0,+\nf),
\qq \rho=\frac{\ze_2+\ze_1}{\ze_2-\ze_1}.\]
brings the metric (\ref{metq}) to the form
\beq
g=\frac{\ze_2-\ze_1}{2c_2}\Big(\rho+\cosh u\Big)
\Big(du^2+\sinh^2 u\,d\phi^2\Big),\qq u\in(0,+\nf),\eeq
which is conformal to the canonical metric on ${\mb H}^2$. Using the definitions (\ref{genM})
we obtain (\ref{si1p0}), up to scalings.$\quad\Box$
\vspace{2mm}
\nin{\bf Proof of (ii):}
For $c_2<0$ positivity requires either $0<\ze_1<\ze<\ze_2$ or $\ze_1<\ze<\ze_2<0$.
In both cases the change of coordinates
\[\ze=\frac{\ze_1+\ze_2}{2}\Big(1+\rho\,\cos\tht\Big),\qq (\ze_1,\ze_2)\to(\pi,0),
\qq \rho=\frac{\ze_2-\ze_1}{\ze_2+\ze_1},\]
brings the metric (\ref{metq}) to one and the same form
\beq
g=\frac{\ze_1+\ze_2}{2c_2}\Big(1+\rho\,\cos\tht\Big)
\Big(d\tht^2+\sin^2\tht\,d\phi^2\Big),\qq \tht\in(0,\pi),\eeq
which is conformal to the canonical metric on ${\mb S}^2$ for $\rho\in(0,+1)$. Using the $so(3)$
Lie algebra generators acting in $T^*{\mb S}^2$
\beq\label{genL}
L_1=\sin\phi\,P_{\tht}+\frac{\cos\phi}{\tan\tht}\,\pf,\qq
L_2=\cos\phi\,P_{\tht}-\frac{\sin\phi}{\tan\tht}\,\pf,\qq L_3=\pf,\eeq
one obtains (\ref{si2p0}), up to scalings.$\quad\Box$
\vspace{2mm}
\nin{\bf Proof of (iii):}
For $c_2=0$ we have $G=c_1^2>0$.
If $c_1<0$ we can write $F=|c_1|(\ze_1-\ze)$ and positivity
requires $\ze\in(0,\ze_1)$. If $\ze_1\neq 0$ then $\ze=0$ is a curvature singularity because
$F(0)$ and $G(0)$ are not vanishing.
If $c_1>0$ we have $F=c_1(\ze-\ze_1)$. If $\ze_1<0$ positivity requires either $\ze>0$, but $\ze=0$
is a curvature singularity, or $\ze<\ze_1$ and then $F$ is negative. If
$\ze_1=0$ the metric becomes
\[g=\frac 1{c_1}\Big(d\ze^2+4\ze^2\,d\phi^2\Big),\]
so to recover flat space we have to take $\wti{\phi}=2\phi\,\in\,[0,2\pi)$; but then a term
of the form $\cos(\wti{\phi}/2)$ appears in $H$, which does not define a function on ${\mb R}^2$.
Finally, if $\ze_1>0$ and we take $\ze<0$, the point $\ze=0$ is singular, so we are left with
$\ze>\ze_1$. The change of coordinates
\[\ze=\ze_1(1+\rho^2\,r^2),\quad \rho>0,\qq x=r\cos\phi,\quad y=r\sin\phi,\]
brings the metric (\ref{metq}) to the form
\beq
g=\frac{4\ze_1^2\rho^2}{c_1}(1+\rho^2\,r^2)(dx^2+dy^2),\qq (x,\,y)\in{\mb R}^2.\eeq
Using the $e(2)$ Lie algebra generators $(P_x,P_y,L_z=xP_y-yP_x)$ we obtain (\ref{si3p0}),
up to scalings.
$\quad\Box$
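For completeness we add the intermediate steps of the substitution used in the proof of (iii):

```latex
With $\ze=\ze_1(1+\rho^2r^2)$ one has $d\ze=2\ze_1\rho^2\,r\,dr$, while
$F=c_1(\ze-\ze_1)=c_1\ze_1\rho^2r^2$ and $G=c_1^2$, so that
\beq
\frac{\ze}{F}\,d\ze^2=\frac{4\ze_1^2\rho^2}{c_1}\,(1+\rho^2r^2)\,dr^2,\qq
\frac{4\,\ze\,F}{G}\,d\phi^2=\frac{4\ze_1^2\rho^2}{c_1}\,(1+\rho^2r^2)\,r^2\,d\phi^2,
\eeq
and $dr^2+r^2d\phi^2=dx^2+dy^2$ gives the stated conformally flat metric.
```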
The remaining cases are given by $p\neq 0$. It is convenient to rescale $F$
by $|p|/q$ and $G$ by $p^2/q^2$ in order to have
\beq\label{newF}
F=\eps(\ze^3+f_0\ze^2+c_1\ze+c_0),\quad \eps={\rm sign}(p),\qq G=F'^2-2FF'',
\quad G'=-12\,\eps\,F,\eeq
and for the observables, up to scalings
\beq\label{newH}
\left\{\barr{l}\dst
H=\frac 1{2\ze}\Big(F\,P_{\ze}^2+\frac G{4\,F}\,\pf^2\Big)
+\alf\,\frac{\sqrt{F}}{\ze}\,\cos\phi+\frac{\be}{\ze},\\[5mm]\dst
Q=\eps\,\pf^3+2\,H\,\pf-2\alf\Big(\sqrt{F}\,\sin\phi\,P_{\ze}
+(\sqrt{F}\,)'\,\cos\phi\,\pf\Big),\earr\right.\eeq
So the metric is still given by (\ref{metq}). We will denote
by $\De_{\eps}$ the discriminant of $F$ according to the sign of $\eps$.
\subsection{Second set of integrable models}
It is given by $\,p>0$ or $\,\eps=+1$. We have:
\begin{nth}
The metric (\ref{metq}):
\brm
\item[(i)] For $\De_+<0$ is defined on ${\mb S}^2$ iff
\[F=(\ze-\ze_0)(\ze-\ze_1)(\ze-\ze_2),\qq\quad 0<\ze_0<\ze<\ze_1<\ze_2.\]
The integrable system, using the notations of Theorem 2 case (i), is
\beq\label{sys1q}\left\{\barr{l}\dst
H=\frac 1{2\ze_+(u)}\left(P_u^2+\frac{D(u)}{s^2 c^2 d^2}\pf^2\right)
+\alf k^2\frac{\,scd}{\ze_+(u)}\,\cos\phi+\frac{\be}{\ze_+(u)},\\[5mm]\dst
Q=4\pf^3+2H\,\pf-\alf\left(\sin\phi\,P_u+\frac{(scd)'}{scd}\,\cos\phi\,\pf\right),\\[5mm]
\ze_+(u)=\rho+k^2\,{\rm sn}^2\,u,\qq u\in\,(0,K),
\quad \rho=\frac{\ze_0}{\ze_2-\ze_0}>0.\earr\right.\eeq
\item[(ii)] For $\De_+=0$ is defined on ${\mb H}^2$ iff
\[F=(\ze-\ze_0)(\ze-\ze_1)^2,\qq\quad 0<\ze_0<\ze<\ze_1.\]
The integrable system, using the notations of Theorem 2 case (ii), is
\beq\label{sys2q}
\left\{\barr{l}\dst
H=\frac 1{2\ze_+(u)}\left\{M_1^2+M_2^2-\left(1-\frac 3{C^2}\right)M_3^2\right\}+\alf\,\frac{T(1-T^2)}{\ze_+(u)}\,\cos\phi
+\frac{\be}{\ze_+(u)},\\[5mm]\dst
Q=4M_3^3+2H\,M_3-\alf\Big(M_1-3T\,\cos\phi\,M_3\Big),\\[5mm]
\ze_+(u)=\rho+\tanh^2 u,\quad u\in(0,+\nf),\qq \rho=\frac{\ze_0}{\ze_1-\ze_0}>0.
\earr\right.\eeq
\item[(iii)] For $\De_+>0$ is not defined on a manifold.
\erm\end{nth}
\nin{\bf Proof of (i):} The iff part results from a case-by-case examination of all possible
orderings of the 4-plet $(0,\ze_0,\ze_1,\ze_2)$, including the possibility of one
of the $\,\ze_i$ being zero. We will not give the full details, which can easily be
checked by the reader, taking into account the remarks presented at the end of Section 4.
The reader can check that with $F=(\ze-\ze_0)(\ze-\ze_1)(\ze-\ze_2)$ and
$0<\ze_0<\ze<\ze_1<\ze_2$, the polynomial $F$ is positive and vanishes at the end-points
$(\ze_0,\,\ze_1)$ while $G$ is strictly positive. It follows that $\ze=\ze_0$ and $\ze=\ze_1$
are poles and the manifold is ${\mb S}^2$. Operating the same coordinate change as
in Theorem 2, case (i), one obtains (\ref{sys1q}).$\quad\Box$
\vspace{2mm}
\nin{\bf Proof of (ii):}
The polynomial $G$ becomes $G=(\ze_1-\ze)^3(3\ze+\ze_1-4\ze_0)$. The change of variable
\[\ze=(\ze_1-\ze_0)(\rho+\tanh^2 u),
\qq (\ze_0,\ze_1)\to(0,+\nf),\qq\rho=\frac{\ze_0}{\ze_1-\ze_0}>0,\]
transforms the observables, up to scalings, into
\beq\left\{\barr{l}\dst
H=\frac 1{2\ze_+(u)}\left(P_u^2+\frac{1+3T^2}{S^2}\,\pf^2\right)
+\frac{\alf}{\ze_+(u)}\,T(1-T^2)\,\cos\phi+\frac{\be}{\ze_+(u)},\\[5mm]\dst
Q=4\pf^3+2H\,\pf-\alf\,\sin\phi\,P_u-\alf\,\frac{(1-3T^2)}{T}\,\cos\phi\,\pf,\\[5mm]
\ze_+(u)=\rho+\tanh^2 u.\earr\right.\eeq
Using the relations (\ref{genM}) one gets (\ref{sys2q}). $\quad\Box$
\vspace{2mm}
\nin{\bf Proof of (iii):}
Examining all the possible cases shows that the metric is not defined on any manifold.$\quad\Box$
\subsection{Third set of integrable models}
It is given by $\,p<0$ or $\,\eps=-1$. It displays a richer structure and for
clarity we will split up the description of the integrable systems into several theorems.
\begin{nth}
The metric (\ref{metq}) for $\De_-<0$ is defined on ${\mb S}^2$ iff:
\brm
\item[(i)] either $\ F=(\ze-\ze_0)(\ze-\ze_1)(\ze_2-\ze),\quad
\ze_0<\ze_1<\ze<\ze_2\ (\ze_1>0).$
\nin The change of coordinates
\beq\label{notq}
{\rm sn}\,(u,k^2)=\sqrt{\frac{\ze_2-\ze}{\ze_2-\ze_1}}, \qq
k^2=\frac{\ze_2-\ze_1}{\ze_2-\ze_0}\in\,(0,1),\eeq
gives for the integrable system
\beq\label{sys1qm}\left\{\barr{l}\dst
H=\frac 1{2\ze_-(u)}\left(P_u^2+\frac{D(u)}{s^2 c^2 d^2}\,\pf^2\right)
+\alf\frac{k^2\,scd}{\ze_-(u)}\,\cos\phi+\frac{\be}{\ze_-(u)},\\[5mm]\dst
Q=-4\,\pf^3+2H\,\pf+\alf\left(\sin\phi\,P_u+\frac{(scd)'}{scd}\,\cos\phi\,\pf\right),\\[5mm]
\ze_-(u)=k^2\Big(\rho-{\rm sn}^2\,u\Big),\qq u\in(0,K),\qq\rho=\frac{\ze_2}{\ze_2-\ze_1}>1.
\earr\right.\eeq
\item[(ii)] or $\ F=(\ze_0-\ze)(\ze-\ze_1)(\ze-\ze_2)\ \mbox{and}\ \ G(0)=0,
\quad 0<\ze<\ze_0<\ze_1<\ze_2.$
\nin The integrable system is
\beq\label{sys2qm}
\left\{\barr{l}\dst
H=\frac 12\,f\,(L_1^2+L_2^2)+\frac 12\left(\frac{h}{3f}-\cos^2\tht\,f\right)
\frac{L_3^2}{\sin^2\tht}+\\[5mm]\dst \hspace{8cm}
+\alf\,\frac{\sin\tht\,\sqrt{f}}{(\cos^2\tht)^{1/3}}\,\cos\phi
+\frac{\be}{(\cos^2\tht)^{1/3}},\\[5mm]\dst
Q=-\frac 49\,L_3^3+2H\,L_3+3\alf\,(\cos\tht)^{1/3}\Big(\sqrt{f}\,L_1
+(\sqrt{f})'\,\cos\phi\,L_3\Big),\earr\right.\eeq
where $\,f(\tht)=\hat{f}(\cos\tht)$ with
\beq
\hat{f}(\mu)=\frac{\Big(\mu^{2/3}-\frac{\ze_1}{\ze_0}\Big)\Big(\mu^{2/3}-\frac{\ze_2}{\ze_0}\Big)}
{\mu^{4/3}+\mu^{2/3}+1},\qq \mu\in(-1,+1),\eeq
and $\,h(\tht)=\hat{h}(\cos\tht)$ with
\beq
\hat{h}(\mu)=-\mu^2+\frac 43\Big(1+\frac{\ze_1+\ze_2}{\ze_0}\Big)\mu^{4/3}
-2\Big(\frac{\ze_1+\ze_2}{\ze_0}+\frac{\ze_1\ze_2}{\ze_0^2}\Big)\mu^{2/3}
+4\frac{\ze_1\ze_2}{\ze_0^2}.\eeq
The parameter $\ze_0$ is:
\beq\label{ze0cas1}
\ze_0=\frac{\ze_1\ze_2}{(\sqrt{\ze_1}+\sqrt{\ze_2})^2}<\ze_1.\eeq
\erm
\end{nth}
\vspace{2mm}
\nin {\bf Proof of (i):}
The change of variable indicated gives (\ref{sys1qm}) by lengthy but straightforward computations.
$\quad\Box$
\nin{\bf Remark:} The previous analysis does not appropriately describe the special case
$\ze_0=0$, for which elliptic functions are no longer required. In this case the change of coordinates
\[\ze=\frac{\ze_1+\ze_2}{2}-\frac{\ze_1-\ze_2}{2}\,\cos\tht,\qq (\ze_1,\ze_2)\ \to\ (\pi,0),\]
gives for the metric
\beq
g=d\tht^2+\frac{\sin^2\tht}{1+\sin^2\tht\,G(\cos\tht)}\,d\phi^2,\eeq
with
\beq
G(\mu)=\frac{3\mu^2+4\rho\mu+1}{4(\rho+\mu)^2},\qq\rho=\frac{\ze_2+\ze_1}{\ze_2-\ze_1}>1.\eeq
The integrable system is
\beq\label{DM}\left\{\barr{l}\dst
H=\frac 12\left(P_{\tht}^2+\Big(\frac 1{\sin^2\tht}+G(\cos\tht)\Big)\pf^2\right)
+\alf\,\frac{\sin\tht}{\sqrt{U}}\,\cos\phi+\frac{\be}{U},\\[5mm]\dst
Q=-\pf^3+2H\,\pf+2\alf\,\sqrt{U}\sin\phi\,P_{\tht}
+2\alf\frac{(\sin\tht\sqrt{U})'}{\sin\tht}\,\cos\phi\,\pf,\\[5mm]
U=\rho+\cos\tht,\earr\right.\eeq
in which we recognize the Dullin-Matveev system \cite{dm}.
\vspace{2mm}
\nin {\bf Proof of (ii):}
One has
\[G(0)=(\ze_1-\ze_2)^2\,\ze_0^2-2\ze_1\ze_2(\ze_1+\ze_2)\,\ze_0+\ze_1^2\ze_2^2.\]
Its vanishing determines uniquely $\ze_0$ in terms of $(\ze_1,\ze_2)$ as given by (\ref{ze0cas1}).
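Explicitly (a step we add for clarity), dividing $G(0)=0$ by $\ze_0^2$ and setting $w=\ze_1\ze_2/\ze_0$ gives

```latex
\beq
w^2-2(\ze_1+\ze_2)\,w+(\ze_1-\ze_2)^2=0\qq\Longrightarrow\qq
\big(w-(\ze_1+\ze_2)\big)^2=4\,\ze_1\ze_2\qq\Longrightarrow\qq
w=(\sqrt{\ze_1}\pm\sqrt{\ze_2})^2.
\eeq
```

The root $w=(\sqrt{\ze_2}-\sqrt{\ze_1})^2$ would give $\ze_0=\ze_1\ze_2/(\sqrt{\ze_2}-\sqrt{\ze_1})^2>\ze_1$, incompatible with the required ordering $\ze_0<\ze_1$, so only (\ref{ze0cas1}) survives.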
At this stage positivity requires $\ze\in(0,\ze_0)$. Let us make the change of variable
$\,\ze=\ze_0\,\mu^{2/3}$. The metric becomes
\[g=\frac 49\left\{\frac{d\mu^2}{(1-\mu^2)\hat{f}(\mu)}
+3(1-\mu^2)\frac{\hat{f}(\mu)}{\hat{h}(\mu)}\,d\phi^2\right\},\qq \mu\in(0,1).\]
All the functions in the metric are {\em even} functions of $\mu$: we can therefore
take $\mu\in(-1,+1)$, extending the metric beyond $\mu=0$. One can check that the
points $\mu=\pm 1$ are poles and therefore the manifold is again ${\mb S}^2$. The change of
variable $\mu=\cos\tht$ with $\tht\in(0,\pi)$ then gives the result (\ref{sys2qm}). $\quad\Box$
Let us now proceed to the case $\De_-=0$:
\vspace{2mm}
\begin{nth}
\brm
\item[(a)] The metric (\ref{metq}) for $\De_-=0$ is defined on ${\mb S}^2$ iff:
\brm
\item[(i)] $\mbox{either}\quad F=\ze^2(\ze_0-\ze),\qq 0<\ze<\ze_0,\ $ and we have
\beq\label{sys2bisq}\left\{\barr{l}\dst
H=\frac 12\Big(L_1^2+L_2^2+4L_3^2\Big)+\alf\,\sin\tht\,\cos\phi+\frac{\be}{\cos^2\tht},
\qq\tht\in(0,\pi),\\[5mm]\dst
Q=-4\,L_3^3+2H\,L_3+\alf\Big(\cos\tht\,L_1-2\sin\tht\,\cos\phi\,L_3\Big),\earr\right.\eeq
which is the Goryachev-Chaplygin top.
\item[(ii)] $\mbox{or}\quad F=(\ze-\ze_1)^2(\ze_0-\ze)\ \mbox{and}\ \ G(0)=0,
\qq 0<\ze<\ze_0.$
\nin The integrable system is of the form (\ref{sys2qm}) with the functions
\beq
\hat{f}(\mu)=\frac{\Big(4-\mu^{2/3}\Big)^2}
{\mu^{4/3}+\mu^{2/3}+1},\qq \hat{h}(\mu)=(4-\mu^{2/3})^3,\qq\mu\,\in(-1,+1).\eeq
\erm
\item[(b)]The metric (\ref{metq}) for $\De_-=0$ is defined on ${\mb H}^2$ iff:
\[F=(\ze-\ze_1)^2(\ze_0-\ze),\qq 0<\ze_1<\ze<\ze_0.\]
The integrable system, in the notations of Theorem 2, case (ii), is
\beq\label{sys2terq}\left\{\barr{l}\dst
H=\frac 1{2\ze_-(u)}\left\{M_1^2+M_2^2-\left(1-\frac 3{C^2}\right)M_3^2\right\}+\alf\,\frac{T(1-T^2)}{\ze_-(u)}\,\cos\phi
+\frac{\be}{\ze_-(u)},\\[5mm]\dst
Q=-4M_3^3+2H\,M_3+\alf\Big(M_1-3T\,\cos\phi\,M_3\Big),\\[5mm]
\ze_-(u)=\rho-\tanh^2 u,\quad u\in(0,+\nf),\qq \rho=\frac{\ze_0}{\ze_0-\ze_1}>1.
\earr\right.\eeq
\erm
\end{nth}
\vspace{2mm}
\nin {\bf Proof of (a)(i):}
We have $F=\ze^2(\ze_0-\ze)$ and $G=\ze^3(4\ze_0-3\ze)$ and $\ze\in(0,\ze_0)$ from positivity.
Taking a new variable $\tht$ such that $\ze=\ze_0\,\cos^2\tht$, we get
for the metric
\beq
g=4\left(d\tht^2+\frac{\sin^2\tht}{1+3\sin^2\tht}\,d\phi^2\right),\qq \tht\in(0,\pi/2).\eeq
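The intermediate steps of this substitution, which we add for the reader's convenience (with $F=\ze^2(\ze_0-\ze)$ and $G=\ze^3(4\ze_0-3\ze)$):

```latex
\beq
d\ze=-2\ze_0\cos\tht\,\sin\tht\,d\tht,\qq
F=\ze_0^3\cos^4\tht\,\sin^2\tht,\qq
G=\ze_0^4\cos^6\tht\,(1+3\sin^2\tht),
\eeq
so that
\beq
\frac{\ze}{F}\,d\ze^2=4\,d\tht^2,\qq
\frac{4\,\ze\,F}{G}\,d\phi^2=\frac{4\sin^2\tht}{1+3\sin^2\tht}\,d\phi^2.
\eeq
```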
As it stands the manifold is $P^2({\mb R})$ (see \cite[p.~268]{Be}). However we can also
extend the metric by taking $\tht\in(0,\pi)$: then the manifold extends to ${\mb S}^2$,
since $\tht=0$ and $\tht=\pi$ are poles, and in this case we recover the Goryachev-Chaplygin top.
The observables can be transformed into (\ref{sys2bisq}).
$\quad\Box$
\vspace{2mm}
\nin {\bf Proof of (a)(ii):}
In this case we have $\,G(0)=\ze_1^3(\ze_1-4\ze_0)$ which fixes $\ze_0=\ze_1/4$. The argument then proceeds as in the proof of Theorem 6, case (ii).$\quad\Box$
\vspace{2mm}
\nin {\bf Proof of (b):}
The proof is identical to the one for Theorem 5, case (ii), except for the change of coordinates,
which is now
\[\ze=\ze_0-(\ze_0-\ze_1)\tanh^2 u:\qq (\ze_0,\ze_1)\quad\to\quad(0,+\nf).\]
One gets (\ref{sys2terq}) by similar arguments.$\quad\Box$
\begin{nth}
The metric (\ref{metq}) for $\De_->0$ is defined on ${\mb S}^2$ iff:
\[ F=(\ze_0-\ze)(\ze-\ze_1)(\ze-\overline{\ze_1})\ \ \mbox{and}\ \ G(0)=0,
\quad 0<\ze<\ze_0.\]
The integrable system is of the form (\ref{sys2qm}) with the functions
\beq
\hat{f}(\mu)=\frac{\Big(\mu^{2/3}-\frac{\ze_1}{\ze_0}\Big)\Big(\mu^{2/3}-\frac{\ol{\ze}_1}{\ze_0}\Big)}{\mu^{4/3}+\mu^{2/3}+1},\qq\mu\,\in(-1,+1),\eeq
and
\beq
\hat{h}(\mu)=-\mu^2+\frac 43\Big(1+\frac{\ze_1+\ol{\ze}_1}{\ze_0}\Big)\mu^{4/3}
-2\Big(\frac{\ze_1+\ol{\ze}_1}{\ze_0}+\frac{|\ze_1|^2}{\ze_0^2}\Big)\mu^{2/3}
+4\frac{|\ze_1|^2}{\ze_0^2}.\eeq
We have two possible values for $\ze_0$ which are
\beq\label{ze0cas3}
\ze_0=\frac{|\ze_1|^2}{\ze_1+\ol{\ze}_1 \pm 2|\ze_1|}.\eeq
\end{nth}
\vspace{2mm}
\nin{\bf Proof:}
We have
\[G(0)=(\ze_1-\ol{\ze}_1)^2\,\ze_0^2-2(\ze_1+\ol{\ze}_1)|\ze_1|^2\,\ze_0+|\ze_1|^4.\]
Its vanishing gives for $\ze_0$ the roots (\ref{ze0cas3}). The subsequent analysis is identical
to that already given in the proof of Theorem 6, case (ii).$\quad\Box$
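A quick way to obtain (\ref{ze0cas3}), which we add here: divide $G(0)=0$ by $\ze_0^2$, set $w=|\ze_1|^2/\ze_0$, and use $(\ze_1-\ol{\ze}_1)^2=(\ze_1+\ol{\ze}_1)^2-4|\ze_1|^2$, which gives

```latex
\beq
w^2-2(\ze_1+\ol{\ze}_1)\,w+(\ze_1+\ol{\ze}_1)^2-4|\ze_1|^2=0
\qq\Longrightarrow\qq
\big(w-(\ze_1+\ol{\ze}_1)\big)^2=4\,|\ze_1|^2,
\eeq
```

hence $w=\ze_1+\ol{\ze}_1\pm 2|\ze_1|$, which is exactly (\ref{ze0cas3}).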
It is interesting to examine the explicitly known integrable systems with a metric
defined on ${\mb S}^2$ and a cubic observable, already given in the literature:
\brm
\item The Goryachev-Chaplygin top given by Theorem 6, case (ii).
\item The Dullin-Matveev top \cite{dm} is given in the remark after Theorem 6.
\item If we restrict, in Theorem 8, the parameters according to
\[\ze_0=-(\ze_1+\ol{\ze}_1)\quad\mbox{and}\quad \ze_0^2=|\ze_1|^2,
\qq\Longrightarrow\qq \hat{f}=1,\quad \hat{h}=4-\mu^2,\]
we recover the Goryachev top
\beq\left\{\barr{l}\dst
H=\frac 12\left(L_1^2+L_2^2+\frac 43\,L_3^2\right)+\alf\,\frac{\sin\tht}{(\cos^2\tht)^{1/3}}\,\cos\phi
+\frac{\be}{(\cos^2\tht)^{1/3}},\\[5mm]\dst
Q=-\frac 49\,\pf^3+2H\,\pf+3\alf\,(\cos\tht)^{1/3}\,L_1.\earr\right.\eeq
\erm
The two new examples given by Tsiganov in \cite{Ts} are not defined on a manifold.
\nin{\bf Remarks:}
\brm
\item All of the previous examples belong to the third set with $p<0$.
\item Considering the genus of the algebraic curve
$\dst y^2=\frac{F(\ze)}{\ze}$, let us observe that the Goryachev-Chaplygin and Dullin-Matveev
systems have genus zero while the Goryachev system has genus one.
\item In general the potential $V$ as well as the observable $Q$ are not defined
on the whole manifold.
\erm
\section{Conclusion}
We have exhaustively constructed all the integrable models, on two dimensional manifolds,
characterized by the following form of the observables
\beq\left\{\barr{l}\dst
H=\frac 12\Big(\pth^2+a(\tht)\pf^2\Big)+f(\tht)\,\cos\phi+g(\tht)\\[4mm]\dst
Q=p\,\pf^3+q\,\Big(\pth^2+a(\tht)\pf^2\Big)\pf+\chi(\tht)\sin\phi\,\pth+\Big(\be(\tht)
+\ga(\tht)\cos\phi\Big)\pf\earr\right.\eeq
The main lesson from the failure of \cite{Se} to solve the problem has to do with the
crucial role of the choice of coordinates, which determines the structure of the ODE eventually
to be solved. This is a familiar phenomenon for people dealing with the Einstein equations: despite their diffeomorphism invariance, finding exact solutions relies on an adapted choice of
coordinates which can simplify, or even linearize, the differential system to be integrated.
\vspace{2mm}
\nin{\bf Acknowledgments:} We are greatly indebted to K. P. Tod for his kind and efficient
help in the analysis of the metrics singularities of Section 5.
% arXiv:1303.7374
\title{P\'olya Urn Schemes with Infinitely Many Colors}
\begin{abstract}
In this work we introduce a new type of urn model with countably infinitely many colors indexed by an appropriate infinite set. We mainly consider the indexing set of colors to be the $d$-dimensional integer lattice and consider balanced replacement schemes associated with bounded increment random walks on it. We prove central and local limit theorems for the random color of the $n$-th selected ball and show that, irrespective of the null recurrent or transient behavior of the underlying random walks, the asymptotic distribution is Gaussian after appropriate centering and scaling. We show that the order of any non-zero centering is always ${\mathcal O}\left(\log n\right)$ and the scaling is ${\mathcal O}\left(\sqrt{\log n}\right)$. The work also provides similar results for urn models with infinitely many colors indexed by more general lattices in ${\mathbb R}^d$. We introduce a novel technique of representing the random color of the $n$-th selected ball as a suitably sampled point on the path of the underlying random walk. This helps us to derive the central and local limit theorems.
\end{abstract}
\section{Introduction}
\label{Sec:Intro}
P\'olya urn schemes and their various generalizations with finitely many colors have been widely studied in the literature
\cite{Polya30, Fri49, Free65, AthKar68, BagPal85, Pe90, Gouet, Svante1, FlDuPu06, maulik1, maulik2, DasMau11}; see also \cite{Pe07} for an extensive survey
of many of the known results.
The model is described as follows:
\begin{quote}
We start with an urn containing finitely many balls of different colors.
At any time $n\geq 1$, a ball is selected uniformly at random from the urn
and its color is noted; the selected ball is
then returned to the urn along with a set of balls
of various colors, which may depend on the color of the selected ball.
\end{quote}
The goal is to study the asymptotic properties of the configuration of the urn.
Suppose there are $K \geq 1$ different colors and we denote the configuration of the urn at time $n$ by
$U_{n}=\left(U_{n,1},U_{n,2}\ldots, U_{n,K}\right)$, where $U_{n,j}$ denotes the number of balls of color $j$, $1 \leq j \leq K$.
The dynamics of the urn model depend on the \emph{replacement policy}, which can be represented by
a $K \times K$ matrix, say $R$, whose $\left(i,j\right)^{\text{th}}$ entry is the number of balls of
color $j$ which are to be added to the urn if the selected color is $i$. In the literature $R$ is termed the \emph{replacement matrix}.
The dynamics of the model can then be written as
\begin{equation}
U_{n+1} = U_n + R_i
\label{Equ:Basic-Recurssion}
\end{equation}
where $R_i$ is the $i^{\mbox{th}}$ row of the replacement matrix $R$, and $i$ is the random color of the ball selected at the
$\left(n+1\right)^{\mbox{th}}$ draw.
A replacement matrix is said to be \emph{balanced} if the row sums are constant. In that case
the asymptotic behavior of the proportion of balls of different colors remains the same if we replace the
replacement matrix $R$ by $R/s$, where $s$ is the common row sum. Note that the latter matrix is a
\emph{stochastic matrix} and hence its entries are not necessarily non-negative integers. Since we will mostly be
interested in the asymptotic behavior of the configuration of a balanced urn model,
we assume without loss of generality that the replacement matrix $R$ is a stochastic matrix.
In that case it is also customary to assume that $U_0$ is a probability distribution on the
set of colors, which is to be interpreted as the probability distribution of
the selected color of the first ball drawn from the urn.
Note that in this case the entries of $U_n$
indexed by the colors are no longer the numbers of balls of those colors; instead, $U_n/\left(n+1\right)$ is
the probability mass function of
the color of the $\left(n+1\right)^{\mbox{th}}$ selected ball.
In other words, if $Z_n$ is the color of the ball selected at the $\left(n+1\right)^{\text{th}}$ draw then
\begin{equation}
\mathbb{P}\left( Z_n = i \,\Big\vert\,U_0, U_1,\ldots, U_n \right) = \frac{U_{n,i}}{n+1}, \,\,\, 1 \leq i \leq K.
\label{Equ:Choise-Mass-Function}
\end{equation}
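The dynamics \eqref{Equ:Basic-Recurssion}--\eqref{Equ:Choise-Mass-Function} are straightforward to simulate. The following minimal sketch (plain Python; the $3\times 3$ stochastic matrix $R$ and the initial vector $U_0$ are illustrative choices of ours, not taken from any result above) performs $n$ draws and checks that the total mass after $n$ draws equals $n+1$, as it must for a stochastic replacement matrix.

```python
import random

# Minimal simulation sketch of a balanced (stochastic-R) finite-color urn.
# The 3x3 stochastic replacement matrix R and the initial distribution U0
# below are illustrative choices, not taken from the paper.
R = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
U0 = [1.0, 0.0, 0.0]            # a probability vector, as in the text

def simulate(n, rng):
    U = list(U0)
    for m in range(n):
        # select a color with P(Z_m = i | U_m) = U_{m,i} / (m + 1)
        i = rng.choices(range(len(U)), weights=U)[0]
        # add the i-th row of R to the configuration
        U = [u + r for u, r in zip(U, R[i])]
    return U

rng = random.Random(0)
n = 1000
U_n = simulate(n, rng)
total_mass = sum(U_n)           # equals n + 1 since every row of R sums to 1
```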
We can now consider a Markov chain with state space as the set of colors, the transition matrix as $R$
and starting distribution as $U_0$. We call such a chain, a chain
associated with the urn model and vice-versa. In other words given a balanced urn model we
can associate with it a unique Markov chain on the set of colors and conversely given a
Markov chain on a finite state space there is an associated urn model with balls of
colors indexed by the state space.
It is well known \cite{Gouet, Svante1, maulik1, maulik2, DasMau11} that the asymptotic properties of a balanced urn are often related to the qualitative
properties of the associated Markov chain on the finite state space. For example, if the
associated finite state Markov chain is irreducible and aperiodic with stationary distribution $\pi$, then it has been proved in \cite{Gouet, Svante1} that
\begin{equation}
\frac{U_n}{n+1} \longrightarrow \pi \mbox{\ \ a.s.}
\label{Equ:Irreducible-Aperiodic-Limit}
\end{equation}
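Conditioning on the past gives $\mathbb{E}\left[U_{n+1}\right]=\mathbb{E}\left[U_{n}\right]\left(I+\frac{R}{n+1}\right)$, so the expected proportions can be iterated exactly, without any sampling. The sketch below (plain Python; the irreducible aperiodic two-color matrix $R$ is an illustrative choice of ours) checks numerically that $\mathbb{E}\left[U_{n}\right]/(n+1)$ approaches the stationary distribution $\pi$, in line with \eqref{Equ:Irreducible-Aperiodic-Limit}.

```python
# Iterate the exact recursion E[U_{n+1}] = E[U_n] (I + R/(n+1)),
# obtained by conditioning on the past, for an illustrative irreducible
# aperiodic 2-color stochastic matrix (not taken from the paper).
R = [[0.2, 0.8],
     [0.6, 0.4]]
pi = [3.0 / 7.0, 4.0 / 7.0]     # stationary distribution: pi R = pi

u = [1.0, 0.0]                   # E[U_0], a probability vector
N = 5000
for n in range(N):
    uR = [u[0] * R[0][j] + u[1] * R[1][j] for j in range(2)]
    u = [u[j] + uR[j] / (n + 1) for j in range(2)]

proportions = [x / (N + 1) for x in u]   # E[U_N]/(N+1), the law of Z_N
```

The non-stationary component decays polynomially in $N$, so for $N=5000$ the expected proportions already agree with $\pi$ to several digits.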
The reducible case for the finite color urn model has been extensively studied in \cite{maulik1, maulik2, DasMau11}, and various kinds of
limiting results have been derived based on the properties of the replacement/stochastic matrix $R$.
In this work we introduce a new urn model with countably infinitely many colors.
We would like to note that in \cite{BlackMac73} Blackwell and MacQueen introduced a model
with possibly infinitely many, even uncountably many, colors, but with a very simple replacement
mechanism corresponding
to the original P\'olya type schemes, namely a
``diagonal replacement scheme''. Our generalization considers only
certain infinite dimensional matrices with some non-zero off diagonal entries.
Thus it is important to note that
the classical P\'olya scheme \cite{Polya30}
and the Blackwell--MacQueen scheme \cite{BlackMac73} are not covered by our
generalization. We will see in Section \ref{Sec:Main Results} that the results we obtain are very different from those
in these two classical cases.
\subsection{Urn Model with Infinitely Many Colors}
\label{SubSec:Model}
Let $\left\{X_j\right\}_{j \geq 1}$ be i.i.d. random vectors taking values in $\mbox{${\mathbb Z}$}^d$ with probability
mass function $p\left(u\right) := \mbox{${\mathbb P}$}\left(X_1 = u\right), u \in \mbox{${\mathbb Z}$}^d$. We assume that the
distribution
of $X_1$ is bounded, that is, there exists a non-empty finite subset $B \subseteq \mbox{${\mathbb Z}$}^d$ such that
$p\left(u\right) = 0$ for all $u \not\in B$. Throughout this paper
we take the convention of writing all vectors as row vectors.
Thus for a vector $x \in \mbox{${\mathbb R}$}^d$ we will write $x^T$ to denote it as a column vector.
The notation
$\langle \cdot , \cdot \rangle$ will denote
the usual Euclidean inner product on $\mbox{${\mathbb R}$}^d$, and $\| \cdot \|$ the
Euclidean norm. We will always write
\begin{equation}
\begin{array}{rcl}
{\mathbf \mu} & := & \mbox{${\mathbb E}$}\left[X_1\right] \\
\varSigma & := & \mbox{${\mathbb E}$}\left[ X_1^T X_1 \right] \\
e\left(\lambda\right) & := & \mbox{${\mathbb E}$}\left[e^{\langle \lambda , X_1 \rangle}\right], \, \lambda \in \mbox{${\mathbb R}$}^d. \\
\end{array}
\label{Equ:Basic-Notations}
\end{equation}
We will write
$\varSigma := \left(\left(\sigma_{ij}\right)\right)_{1 \leq i,j \leq d}$ and
assume that it is a positive definite matrix.
Also
$\varSigma^{\frac{1}{2}}$ will denote the unique \emph{positive definite square root} of $\varSigma$,
that is, $\varSigma^{\frac{1}{2}}$ is a positive definite matrix such that
$\varSigma = \varSigma^{\frac{1}{2}} \varSigma^{\frac{1}{2}}$.
When the dimension $d=1$, we will denote the mean and variance simply by $\mu$ and $\sigma^2$ respectively
and in that case we assume $\sigma^2 > 0$.
Let $S_n := X_0 + X_1 + \cdots + X_n$, $n \geq 0$, be the random walk
on $\mbox{${\mathbb Z}$}^d$ starting at $X_0$, which is taken independent of the increments $\left\{X_j\right\}_{j \geq 1}$.
Needless to say, $\left\{S_n\right\}_{n \geq 0}$ is a Markov chain with state-space $\mbox{${\mathbb Z}$}^d$,
initial distribution given by the distribution of $X_0$ and transition matrix
\[ R := \left(\left( p\left(v - u\right) \right)\right)_{u, v \in {\mathbb Z}^d}.\]
In this work we consider
the following infinite color generalization of the P\'olya urn scheme, where
the colors are indexed by $\mbox{${\mathbb Z}$}^d$.
Let $U_n := \left(U_{n,v}\right)_{v \in {\mathbb Z}^d} \in [0, \infty)^{{\mathbb Z}^d}$
denote the configuration of the urn at time $n$, that is,
\small{
\[
\mbox{${\mathbb P}$}\left( \left(n+1\right)^{\mbox{th}} \mbox{\ selected ball has color\ } v
\,\Big\vert\, U_n, U_{n-1}, \cdots, U_0 \right)
\propto U_{n,v}, \, v \in \mbox{${\mathbb Z}$}^d.
\]
}
Starting with $U_0$ which is a probability distribution we define $\left(U_n\right)_{n \geq 0}$
recursively as follows
\begin{equation}
\label{recurssion}
U_{n+1}=U_{n} + \mathcal{X}_{n+1} R
\end{equation}
where $\mathcal{X}_{n+1} = \left(\mathcal{X}_{n+1,v}\right)_{v \in {\mathbb Z}^d}$ is such that
$\mathcal{X}_{n+1,V}=1$ and $\mathcal{X}_{n+1,u} = 0$ for $u \neq V$, where $V$ is the random color
chosen from the configuration $U_n$. In other words,
\[
U_{n+1}=U_n + R_V
\]
where $R_V$ is the $V^{\text{th}}$ row of the replacement matrix $R$.
We will call
the process $\left(U_n\right)_{n \geq 0}$ the \emph{infinite color urn model} with
initial configuration $U_0$ and replacement matrix $R$. We will also refer to it as the
\emph{infinite color urn model associated with the random walk $\left\{S_n\right\}_{n \geq 0}$ on $\mbox{${\mathbb Z}$}^d$}.
Throughout this paper we will assume that
$U_0 = \left(U_{0,v}\right)_{v \in {\mathbb Z}^d}$ is such that
$U_{0,v} = 0$ for all but finitely many $v \in \mbox{${\mathbb Z}$}^d$.
It is worth noting that
\[
\sum_{u \in {\mathbb Z}^d} U_{n,u} = n + 1
\]
for all $n \geq 0$. So if
$Z_n$ denotes the $\left(n+1\right)^{\mbox{th}}$ selected color then
\begin{equation}
\mbox{${\mathbb P}$}\left(Z_n = v \,\Big\vert\, U_n, U_{n-1}, \cdots, U_0 \right) = \frac{U_{n,v}}{n+1}
\end{equation}
which implies
\begin{equation}
\mbox{${\mathbb P}$}\left(Z_n = v \right) = \frac{\mbox{${\mathbb E}$}\left[U_{n,v}\right]}{n+1}.
\end{equation}
In other words the expected configuration of the urn at time $n$ is given by the distribution of $Z_n$.
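At any finite time only finitely many colors carry positive mass, so the infinite color model can be simulated with a sparse (dictionary) representation of $U_n$. The following minimal sketch (plain Python) runs the urn associated with the one dimensional simple symmetric random walk and checks the mass identity $\sum_{u}U_{n,u}=n+1$ together with the finiteness of the support.

```python
import random

# Sparse simulation of the infinite color urn on Z associated with the
# one dimensional simple symmetric random walk: R(u, v) = 1/2 for |v-u| = 1.
rng = random.Random(1)
U = {0: 1.0}                     # U_0 concentrated at color 0
n = 2000
for m in range(n):
    colors = list(U)
    # select color V with probability U_{m,V} / (m + 1)
    V = rng.choices(colors, weights=[U[c] for c in colors])[0]
    # add the V-th row of R: half a ball at V-1 and half a ball at V+1
    U[V - 1] = U.get(V - 1, 0.0) + 0.5
    U[V + 1] = U.get(V + 1, 0.0) + 0.5

total_mass = sum(U.values())     # equals n + 1 surely
support = len(U)                 # at most 2n + 1 colors are ever charged
```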
We will use the following two special cases for illustration purposes.
\begin{enumerate}
\item In one dimension we consider a trivial walk, namely, ``\emph{move one step to the right}''. Formally, in this case
$d=1$ and $B=\{1\}$, the law of $X_{1}$ is given by
$\mathbb{P}\left(X_{1}=1\right)=1$.
The associated Markov chain $S_{n}=S_{0}+n$ is deterministic and trivially transient.
The transition matrix $R$ is given by
\begin{eqnarray}\label{RS}
R(i,j)=
\begin{cases}
1 &\text{if\ } j=i+1\\
0 &\text{otherwise.}
\end{cases}
\end{eqnarray}
We call this $R$ the \emph{right-shift operator}.
\item The other special case is the \emph{simple symmetric random walk} on $\mbox{${\mathbb Z}$}^d$. For this
$B=\{v \in \mathbb{Z}^{d} \,\Big\vert\, \|v\|_{1}=1 \,\}$ where $\|\cdot\|_{1}$ denotes the $l_{1}$-norm. The law of $X_{1}$ is given by
$\mathbb{P}\left(X_{1}=v\right)=\frac{1}{2d}, \,\,\, v \in B$. The matrix $R$ is given by
\begin{eqnarray}\label{SRW}
R(u,v)=
\begin{cases}
\frac{1}{2d} & \text{for\ }v-u \in B\\
0 &\text{otherwise.}
\end{cases}
\end{eqnarray}
Here we note that by the famous result of P\'{o}lya \cite{Polya21}
the simple symmetric random walk is null recurrent when $d \leq 2$ and transient when $d \geq 3$.
\end{enumerate}
In Section \ref{Sec:General} we will further generalize the model to the case where the associated random walk
takes values in other $d$-dimensional discrete lattices, for example, the \emph{triangular lattice} in
two dimensions.
\subsection{Notations} The following notations and conventions are used in the paper.
\begin{itemize}
\item For two sequences $\left\{a_{n}\right\}_{n \geq 1}$ and $\left\{b_{n}\right\}_{n \geq 1}$ of positive real numbers
such that $b_{n}\neq 0$ for all $n\geq1$, we will write $a_{n} \sim b_{n}$ if $\displaystyle \lim_{n \to \infty}\frac{a_{n}}{b_{n}}=1$.
\item As mentioned earlier, all vectors are written as row vectors unless otherwise stated.
For example, a finite dimensional vector
$x \in \mbox{${\mathbb R}$}^d$ is written as
$x=\left(x^{(1)},x^{(2)},\ldots,x^{(d)}\right)$ where $x^{(i)}$ denotes the $i^{\text{th}}$ coordinate.
To be consistent with this notation matrices are multiplied to the right of the vectors.
The infinite dimensional vectors are written as $y=\left(y_{j}\right)_{j\in \mathcal{J}}$ where $y_{j}$
is the $j^{\text{th}}$ coordinate and $\mathcal{J}$ is the indexing set. Column vectors are denoted by $x^{T},$ where $x$ is a row vector.
\item For any vector $x$, $x^2$ will denote a vector with the coordinates squared.
\item By $N_{d}\left({\mathbf \mu},\varSigma\right)$ we denote the $d$-dimensional Gaussian distribution with mean vector ${\mathbf \mu} \in \mbox{${\mathbb R}$}^d$ and
variance-covariance matrix $\varSigma$.
For $d=1$, we simply write $N(\mu, \sigma^{2})$ with mean $\mu \in \mbox{${\mathbb R}$}$ and variance $\sigma^{2} > 0$.
\item The standard Gaussian measure on $\left(\mbox{${\mathbb R}$}^d, \mbox{${\mathcal B}$}\left(\mbox{${\mathbb R}$}^d\right) \right)$ is denoted by
$\Phi_d$ and its density by $\phi_d$, that is for $A \in \mbox{${\mathcal B}$}\left(\mbox{${\mathbb R}$}^d\right)$,
\begin{equation}
\Phi_d\left(A\right) := \int\limits_A \! \phi_d\left(x\right) \, \mathrm{d}x
\mbox{\ \ where\ \ }
\phi_d\left(x\right) := \frac{1}{\left(2 \pi \right)^{d/2}} e^{- \frac{\| x \|^2}{2}}, x \in \mbox{${\mathbb R}$}^d.
\end{equation}
For $d=1$, we will simply write $\Phi$ for the standard Gaussian measure on
$\left(\mbox{${\mathbb R}$}, \mbox{${\mathcal B}$}\left(\mbox{${\mathbb R}$}\right)\right)$ and $\phi$ for its density.
\item We will write $\nu_n \stackrel{w}{\longrightarrow} \nu$ for a sequence of probability measures
$\left\{\nu_n\right\}$ converging weakly to a probability measure $\nu$. We will also write
$W_n \Rightarrow \nu$ for a
sequence of random variables/vectors $\left\{W_n\right\}$ converging in distribution to a probability
measure $\nu$.
\item The symbol $\stackrel{p}{\longrightarrow}$ will denote convergence in probability.
\item For any two random variables/vectors $X$ and $Y$, we will write $X \stackrel{d}{=} Y$ to denote that $X$
and $Y$ have the same distribution.
\end{itemize}
\subsection{Outline}
In the following section we state the main results which we prove in Section \ref{Sec:Proofs}.
In Section \ref{SubSec:Intermediate} we state and prove two important results which we use in the proofs
of the main results and which are also of some independent interest.
In Section \ref{Sec:General} we further generalize our results for urns with infinitely many
colors where the color sets are indexed by other countable lattices on $\mbox{${\mathbb R}$}^d$.
In particular, we consider the example of the two dimensional triangular lattice.
An elementary technical result which is needed in the proofs of the main results is presented in the appendix.
\section{Main Results}\label{Sec:Main Results}
Throughout this paper we assume that
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$ is a probability space on which all the random processes are defined.
\subsection{Weak Convergence of the Expected Configuration}
\begin{theorem}
\label{GRW}
Let $\overline{\Lambda}_{n}$ be the probability measure on $\mathbb{R}^{d}$ corresponding to the probability
vector $\frac{1}{n+1}\left(\mathbb{E}[U_{n,v}]\right)_{v \in \mathbb{Z}^{d}}$ and let
\[
\overline{\Lambda}_{n}^{cs}(A)
:= \overline{\Lambda}_{n}\left(\sqrt{\log n}A\varSigma^{-1/2}+ {\mathbf \mu}\log n\right), \,\,\,
A \in \mathcal{B}\left(\mathbb{R}^{d}\right).
\]
Then, as $n \to \infty$,
\begin{equation}
\overline{\Lambda}_{n}^{cs}\stackrel{w}{\longrightarrow} \Phi_{d}.
\end{equation}
\end{theorem}
Recall that if $Z_n$ denotes the $\left(n+1\right)^{\mbox{th}}$ selected color then
its probability mass function is given by
$\left(\frac{\mbox{${\mathbb E}$}\left[U_{n,v}\right]}{n+1}\right)_{v \in {\mathbb Z}^d}$. Thus
$\overline{\Lambda}_{n}$ is the probability distribution of $Z_n$ on
$\left(\mbox{${\mathbb R}$}^d, \mbox{${\mathcal B}$}\left(\mbox{${\mathbb R}$}^d\right)\right)$. So the above theorem can be restated as
\begin{eqnarray}\label{ED2}
\frac{Z_{n}-{\mathbf \mu}\log n} {\sqrt{\log n}} \Rightarrow N_{d}(0,\varSigma)\text{ as } n \to \infty \text{.}
\end{eqnarray}
The following two results are immediate applications of Theorem \ref{GRW}.
\begin{cor}
For $d=1$, let $X_{i}\equiv 1$, that is, the underlying Markov chain moves deterministically one step to the right at each time. Then, as $n \to \infty$,
\begin{eqnarray*}
\frac{Z_{n}-\log n}{\sqrt{\log n}} \Rightarrow N(0,1).
\end{eqnarray*}
\end{cor}
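For the right shift, the representation of Theorem \ref{LRW1} in Section \ref{SubSec:Intermediate} gives $Z_{n}\stackrel{d}{=}\sum_{j=1}^{n}I_{j}$ (taking $Z_0=0$) with $I_{j}\sim\mbox{Bernoulli}\left(\frac{1}{j+1}\right)$ independent, so the mean and variance of $Z_n$ are explicit harmonic-type sums. The arithmetic sketch below (plain Python) evaluates them and confirms the $\mbox{${\mathcal O}$}\left(\log n\right)$ centering and $\mbox{${\mathcal O}$}\left(\sqrt{\log n}\right)$ scaling.

```python
import math

# For X_1 = 1 a.s. the representation Z_n = sum_{j=1}^n I_j with
# I_j ~ Bernoulli(1/(j+1)) independent (Theorem LRW1, taking Z_0 = 0)
# gives the mean and variance of Z_n in closed form.
n = 10**5
mean = sum(1.0 / (j + 1) for j in range(1, n + 1))            # = H_{n+1} - 1
var = sum((1.0 / (j + 1)) * (1 - 1.0 / (j + 1)) for j in range(1, n + 1))

log_n = math.log(n)
# mean - log n converges to a bounded constant (Euler-Mascheroni minus 1),
# while var / log n -> 1: the O(log n) centering and O(sqrt(log n)) scaling.
```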
\begin{cor}
Let $\left\{S_{n}\right\}_{n \geq 0}$
be the simple symmetric random walk on $\mathbb{Z}^{d}, d\geq 1$. Then,
as $n\to \infty$,
\begin{eqnarray*}
\frac{Z_{n}}{\sqrt{\log n}} \Rightarrow N_{d}(0,d^{-1}\mathbb{I}_{d}),
\end{eqnarray*}
where $\mathbb{I}_{d}$ is the $d \times d$ identity matrix.
\end{cor}
The above two results essentially show that,
irrespective of the recurrent or transient behavior of the underlying random walk, the associated
urn models have similar asymptotic behavior. In particular, the limiting distribution
is always Gaussian, with universal orders for the centering and scaling, namely $\mbox{${\mathcal O}$}\left(\log n\right)$ and
$\mbox{${\mathcal O}$}\left(\sqrt{\log n}\right)$ respectively.
\subsection{Weak Convergence of the Random Configuration}
Let $\mathcal{M}_{1}$ be the space of probability measures on $\mathbb{R}^{d},\mbox{ }d\geq 1$
endowed with the topology of weak convergence. For $\omega \in \Omega$,
let $\Lambda_{n}(\omega)\in \mathcal{M}_{1}$ be the random probability measure corresponding to the
random probability vector $\frac{U_{n}(\omega)}{n+1}$. It is easy to note that the function
$\Lambda_n : \Omega \rightarrow {\mathcal M}_1$ is measurable.
\begin{theorem}
\label{ASd}
For $\omega \in \Omega$ and $A \in \mathcal{B}\left(\mathbb{R}^{d}\right)$, let
\[
\Lambda^{cs}_{n}(\omega)\left(A\right)=\Lambda_{n}(\omega)\left(\sqrt{\log n}A\varSigma^{-1/2}
+ {\mathbf \mu} \log n\right).
\]
Then, as $n \to \infty $,
\begin{equation}
\label{Eq:PrCgs}
\Lambda_{n}^{cs}\stackrel{p}{\longrightarrow} \Phi_{d} \mbox{ on }\mathcal{M}_{1}.
\end{equation}
\end{theorem}
Theorem \ref{ASd} is a stronger version of Theorem \ref{GRW}. Using it
we can conclude that given any subsequence $\{n_{k}\}_{k=1}^{\infty}$ there exists a further
subsequence $\{n_{k_{j}}\}_{j=1}^{\infty}$, such that as $j \to \infty$,
\[
\Lambda^{cs}_{n_{k_{j}}}\left(\omega\right) \stackrel{w}{\longrightarrow} \Phi_{d} \mbox{\ \ a.s.\ \ }
\left[ \mbox{${\mathbb P}$} \right],
\]
that is, given any subsequence $\{n_{k}\}_{k=1}^{\infty}$ there exists a further
subsequence $\{n_{k_{j}}\}_{j=1}^{\infty}$ such that, as $j \to \infty$, the random configuration of the
urn, after appropriate non-random centering and scaling,
converges weakly almost surely to the standard Gaussian measure on $\mbox{${\mathbb R}$}^d$.
\subsection{Local Limit Theorem Type Results for the Expected Configuration}
It turns out that under certain assumptions the expected configuration of the urn at time $n$, namely
$\left(\frac{\mbox{${\mathbb E}$}\left[U_n\right]}{n+1}\right)_{n \geq 0}$, satisfies a \emph{local limit theorem}.
\subsubsection{Local Limit Type Results for One Dimension}
\label{SubSubSec:LLT1}
In this subsection, we present the local limit theorems for urns with colors indexed by $\mathbb{Z}$.
Note that $X_1$ is a lattice random variable, so we can write
\begin{equation}
\mathbb{P}\left(X_{1} \in a+ h\mathbb{Z}\right)=1,
\label{Equ:Span}
\end{equation}
where $a \in \mathbb{R}$ and $h>0$ is the maximal value for which \eqref{Equ:Span} holds; $h$ is called the
span of $X_1$ (see Section 3.5 of \cite{Durr10}).
We define
\begin{equation}
\mathcal{L}_{n}^{(1)} :=
\left\{x\colon x=\frac{n}{\sigma\sqrt{\log n}}a-\frac{\mu}{\sigma} \sqrt{\log n}+\frac{h}{\sigma \sqrt{\log n}}
\mathbb{Z}\right\}.
\label{Equ:Def-L^1}
\end{equation}
\begin{theorem}\label{llt1}Assume that
$\mathbb{P}\left[X_{1}=0\right]>0$. Then, as $n \to \infty$
\begin{equation}
\sup_{x \in \mathcal{L}_{n}^{(1)}} \left\vert \sigma \frac{\sqrt{ \log n}}{h}\mathbb{P}\left(\frac{Z_{n}-
\mu \log n}{\sigma \sqrt{\log n}}=x\right)-\phi(x) \right\vert \longrightarrow 0.
\end{equation}
\end{theorem}
The above local limit theorem does not cover all cases. A slightly more general version can be derived
by using the proof of this theorem and is given in Section \ref{Sec:Proofs}.
The next theorem deals with the special case when the urn is associated with the
simple symmetric random walk, which is not covered by Theorem \ref{llt1} or its generalization
given in Section \ref{Sec:Proofs}.
\begin{theorem}
\label{llt4}
Assume that $\mbox{${\mathbb P}$}\left(X_1 = 1 \right) = \mbox{${\mathbb P}$}\left(X_1 = -1\right) = \frac{1}{2}$. Then, as $n \to \infty$
\begin{equation}
\sup_{x \in \mathcal{L}_{n}^{(1)}} \left\vert \sqrt{ \log n}
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)-\phi(x) \right\vert \longrightarrow 0
\end{equation}
where
$\mathcal{L}_{n}^{(1)}$ is given by \eqref{Equ:Def-L^1} with $\mu=0=a$ and $\sigma = 1 = h$.
\end{theorem}
\subsubsection{Local Limit Type Results for Higher Dimensions}
Now we consider the case $d \geq 2$. $X_1$ is then a lattice random vector taking values in $\mbox{${\mathbb Z}$}^d$.
Let $\mathcal{L}$ be its \emph{minimal lattice}, that is, $\mbox{${\mathbb P}$}\left(X_1 \in x + \mbox{${\mathcal L}$}\right) = 1$ for
every $x \in \mbox{${\mathbb Z}$}^d$ such that $\mbox{${\mathbb P}$}\left(X_1 = x \right) > 0$ and if $\mbox{${\mathcal L}$}'$ is any closed subgroup
of $\mbox{${\mathbb R}$}^d$,
such that $\mbox{${\mathbb P}$}\left(X_1 \in y + \mbox{${\mathcal L}$}'\right) = 1$ for some $y \in \mbox{${\mathbb Z}$}^d$, then $\mbox{${\mathcal L}$} \subseteq \mbox{${\mathcal L}$}'$
and the rank of $\mbox{${\mathcal L}$}$ is $d$.
We refer to the pages 226 -- 227 of \cite{BhRa76} for a formal definition of the minimal lattice of a
$d$-dimensional lattice random variable.
Let $l = \det\left(\mbox{${\mathcal L}$}\right)$ (see the pages 228 -- 229 of \cite{BhRa76} for more details).
Now let $x_0$ be such that $\mbox{${\mathbb P}$}\left(X_1 \in x_0 + \mbox{${\mathcal L}$}\right) = 1$, and define
\begin{equation}
\mathcal{L}_{n}^{(d)} :=
\left\{ x\colon x = \frac{n}{\sqrt{\log n}} x_{0}\varSigma ^{-1/2}-\sqrt{\log n} \,
{\mathbf \mu} \,\varSigma^{-1/2}+\frac{1}{\sqrt{\log n}}\mathcal{L}\varSigma ^{-1/2} \right\}.
\label{Equ:Def-L^d}
\end{equation}
\begin{theorem}\label{llt2}Assume that
$\mathbb{P}\left[X_{1}=0\right]>0$. Then, as $n \to \infty$
\begin{equation}
\sup_{x \in \mathcal{L}_{n}^{(d)}} \left\vert
\frac{\text{det}(\varSigma^{1/2})\left(\sqrt{\log n} \right)^{d}}{l}
\mathbb{P}\left(\frac{Z_{n}-{\mathbf \mu}\log n}{\sqrt{\log n}}\varSigma ^{-1/2}=x\right)-
\phi_{d}(x) \right\vert \longrightarrow 0.
\end{equation}
\end{theorem}
Observe that, as in the one dimensional case, the above theorem does not cover all cases.
The next theorem deals with the special case when the urn is associated with the
simple symmetric random walk in dimension $d \geq 2$, which is not covered by Theorem \ref{llt2}.
\begin{theorem}\label{llt5}
Assume that $\mbox{${\mathbb P}$}\left(X_1 = \pm e_i\right) = \frac{1}{2d}$ for $1 \leq i \leq d$, where $e_i$ is the
$i^{\text{th}}$ standard unit vector in $\mbox{${\mathbb R}$}^d$. Then, as $n \rightarrow \infty$,
\begin{eqnarray}\label{lltSSRW}
\sup_{x \in \mathcal{L}_{n}^{(d)}} \left\vert \left(d\right)^{\frac{d}{2}} \left(\sqrt{\log n} \right)^{d}
\mathbb{P}\left(\frac{\sqrt{d}}{\sqrt{\log n}} Z_n =x\right)-
\phi_{d}(x) \right\vert \longrightarrow 0,
\end{eqnarray}
where $\mathcal{L}_{n}^{(d)}$ is as defined in \eqref{Equ:Def-L^d} with $\mu = 0 = x_0$, $\varSigma = \mbox{${\mathbb I}$}_d$
and $\mbox{${\mathcal L}$} = \sqrt{d} \, \mbox{${\mathbb Z}$}^d$.
\end{theorem}
\noindent
{\bf Remark:} The
assumption $\mathbb{P}\left[X_{1}=0 \right]>0$ can be removed,
at least in some cases; Theorems \ref{llt4}, \ref{llt5} and \ref{llt3} are such examples.
Because of certain technical difficulties,
we do not know the full generality under which the local limit
theorem holds, though we conjecture that it holds for all the cases.
\section{Auxiliary Results}
\label{SubSec:Intermediate}
In this section, we present two results which we will need in the proofs of the main results.
These results are of independent interest and are hence presented separately.
Define $\Pi_{n}\left(z\right)=\displaystyle\prod_{j=1}^{n}\left(1+\frac{z}{j}\right)$ for $z \in \mathbb{C}.$
It is known from the Euler product formula for the gamma function, also referred to as
Gauss's formula (see page 178 of \cite{Con78}), that
\begin{eqnarray}\label{Euler}
\displaystyle \lim_{n\to \infty} \frac{\Pi_{n}(z)}{n^{z}}\Gamma(z+1)=1
\end{eqnarray} uniformly on compact subsets of
$\mathbb{C}\setminus\{0, -1,-2,\ldots\}$.
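Gauss's formula \eqref{Euler} is easy to check numerically; the sketch below (plain Python) evaluates $\Pi_{n}\left(z\right)$ by direct multiplication and compares it with $n^{z}/\Gamma\left(z+1\right)$ for a real $z>0$.

```python
import math

# Numerical check of lim_n Pi_n(z) Gamma(z+1) / n^z = 1 for a real z > 0.
def Pi(n, z):
    prod = 1.0
    for j in range(1, n + 1):
        prod *= 1.0 + z / j
    return prod

z = 0.5
n = 200000
ratio = Pi(n, z) * math.gamma(z + 1.0) / n ** z   # close to 1 for large n
```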
Recall
$e\left(\lambda\right) := \sum _{v \in B}e^{\langle\lambda, v\rangle}p(v)$
is the moment generating function of $X_1$. It is easy
to check that $e\left(\lambda\right)$ is an eigenvalue of $R$ corresponding to the right eigenvector
$x\left(\lambda\right)=\left(e^{\langle \lambda, v\rangle}\right)_{ v \in \mathbb{Z}^{d}}^{T}$.
Let $\mathcal{F}_{n}=\sigma \left(U_{j}\colon 0\leq j\leq n\right), n \geq 0$ be the natural filtration.
Define
\[\overline{M}_{n}\left(\lambda\right)=\frac{U_{n}x\left(\lambda\right)}{\Pi_{n}\left(e\left(\lambda\right)\right)}.\]
From the fundamental recursion (\ref{recurssion}) we get
\[ U_{n+1}x\left(\lambda\right)=U_{n}x\left(\lambda\right)+\mathcal{X}_{n+1}Rx\left(\lambda\right).\]
Thus,
\begin{eqnarray*}
\mathbb{E}\left[U_{n+1}x\left(\lambda\right)\Big{\lvert} \mathcal{F}_{n} \right]&= U_{n}x\left(\lambda\right)+e\left(\lambda\right)
\mathbb{E}\left[\mathcal{X}_{n+1}x\left(\lambda\right)\Big{\lvert} \mathcal{F}_{n}\right]
=\left(1+\frac{e\left(\lambda\right)}{n+1}\right)U_{n}x\left(\lambda\right).
\end{eqnarray*}
Therefore, $\left(\overline{M}_{n}\left(\lambda\right)\right)_{n \geq 0}$ is a non-negative martingale for every
$\lambda \in \mathbb{R}^{d}$. In particular,
$\mathbb{E}\left[\overline{M}_{n}\left(\lambda\right)\right]= \overline{M}_{0}\left(\lambda\right)$ for all $n \geq 0$.
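The one-step identity $\mathbb{E}\left[U_{n+1}x\left(\lambda\right)\,\middle\vert\,\mathcal{F}_{n}\right]=\left(1+\frac{e\left(\lambda\right)}{n+1}\right)U_{n}x\left(\lambda\right)$ behind the martingale property can be verified numerically for any fixed configuration by averaging over the possible draws. The sketch below (plain Python; the simple symmetric walk on $\mathbb{Z}$ and an arbitrary finitely supported configuration of our choosing) does exactly that.

```python
import math

# Verify E[U_{n+1} x(lam) | F_n] = (1 + e(lam)/(n+1)) U_n x(lam)
# for the SSRW on Z, where x(lam)_v = exp(lam*v) and e(lam) = cosh(lam).
lam = 0.3
e_lam = math.cosh(lam)           # MGF of X_1 with P(X_1 = +-1) = 1/2

# an arbitrary configuration after n = 3 draws (total mass n + 1 = 4)
n = 3
U = {-1: 0.5, 0: 2.0, 1: 1.5}

Ux = sum(w * math.exp(lam * v) for v, w in U.items())

# condition on the selected color V: P(V = v | F_n) = U_{n,v}/(n+1);
# selecting v adds half a ball at v-1 and half a ball at v+1
lhs = 0.0
for v, w in U.items():
    added = 0.5 * math.exp(lam * (v - 1)) + 0.5 * math.exp(lam * (v + 1))
    lhs += (w / (n + 1)) * (Ux + added)

rhs = (1.0 + e_lam / (n + 1)) * Ux
```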
We now present a representation of the marginal distribution of
$Z_{n}$ in terms of the increments $\left(X_{j}\right)_{j \geq 1}$.
\begin{theorem}
\label{LRW1}
For each $n\geq 1$,
\begin{eqnarray}\label{Marginal}
Z_{n}\stackrel{d}{=}Z_{0}+\displaystyle\sum_{j=1}^{n}I_{j}X_{j}.
\end{eqnarray}
where $\{I_{j}\}_{j\geq 1}$ are independent random variables such that
$I_{j}\sim Bernoulli\left(\frac{1}{j+1}\right), j \geq 1$ and are independent of
$\{X_{j}\}_{j\geq 1}$; and $Z_{0}$ is a random vector taking values in $\mathbb{Z}^{d}$
distributed according to the probability vector $U_{0}$ and is
independent of $\left(\{I_{j}\}_{j\geq 1}; \{X_{j}\}_{j\geq1}\right)$.
\end{theorem}
\begin{proof}
As noted before, the probability mass function for the color of the $\left(n+1\right)^{\text{th}}$
selected ball, namely $Z_n$, is
$\left(\frac{\mbox{${\mathbb E}$}\left[U_{n,v}\right]}{n+1}\right)_{v \in {\mathbb Z}^d}$. So
for $\lambda \in \mathbb{R}^{d}$, the moment generating function of $Z_{n}$ is given by
\begin{eqnarray}
\frac{1}{n+1}\sum_{v\in \mathbb{Z}^{d}}e^{\langle\lambda,v\rangle }\mathbb{E}\left[U_{n,v}\right]
& = & \frac{\Pi_{n}\left(e(\lambda)\right)}{n+1}\mathbb{E}\left[\overline{M}_{n}(\lambda)\right] \nonumber\\
& = & \frac{\Pi_{n}\left(e(\lambda)\right)}{n+1}\overline {M}_{0}(\lambda) \nonumber\\
& = & \overline {M}_{0}(\lambda) \prod_{j=1}^{n}\left(1-\frac{1}{j+1}+\frac{e(\lambda)}{j+1}\right) \label{MGF}.
\end{eqnarray}
The representation \eqref{Marginal} now follows from \eqref{MGF}, since $\overline{M}_{0}(\lambda)$ is the moment generating function of $Z_{0}$ and the $j^{\text{th}}$ factor in the product is precisely the moment generating function of $I_{j}X_{j}$.
\end{proof}
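For the right shift the identity in Theorem \ref{LRW1} can be confirmed exactly for small $n$: the law of $Z_{n}$ obtained from the recursion $\mathbb{E}\left[U_{n+1}\right]=\mathbb{E}\left[U_{n}\right]+\frac{1}{n+1}\mathbb{E}\left[U_{n}\right]R$ must coincide with the Poisson-binomial law of $\sum_{j=1}^{n}I_{j}$ (taking $Z_{0}=0$). The sketch below (plain Python) computes both probability mass functions and compares them.

```python
# Check Theorem LRW1 exactly for the right shift (X_1 = 1 a.s., Z_0 = 0):
# the law of Z_n from the urn recursion equals that of sum_{j=1}^n I_j,
# I_j ~ Bernoulli(1/(j+1)) independent (a Poisson-binomial law).
n = 12

# (i) urn side: E[U_{m+1, v}] = E[U_{m, v}] + E[U_{m, v-1}]/(m+1)
u = [1.0] + [0.0] * n            # E[U_0] concentrated at color 0
for m in range(n):
    u = [u[v] + (u[v - 1] if v > 0 else 0.0) / (m + 1) for v in range(n + 1)]
pmf_urn = [w / (n + 1) for w in u]

# (ii) representation side: convolve the independent Bernoulli(1/(j+1))
pmf_rep = [1.0]
for j in range(1, n + 1):
    q = 1.0 / (j + 1)
    pmf_rep = [(pmf_rep[k] if k < len(pmf_rep) else 0.0) * (1 - q)
               + (pmf_rep[k - 1] if k >= 1 else 0.0) * q
               for k in range(len(pmf_rep) + 1)]

max_diff = max(abs(a - b) for a, b in zip(pmf_urn, pmf_rep))
```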
Our next theorem states that, on a non-trivial closed neighborhood of $0$, the martingales
$\left( \overline{M}_{n}\left(\lambda\right) \right)_{n \geq 0}$ are uniformly (in $\lambda$)
${\mathcal L}_2$ bounded.
\begin{theorem}
\label{Martingale}
There exists $\delta > 0$ such that
\begin{equation}
\sup_{\lambda \in \left[-\delta, \delta\right]^{d}} \sup_{n\geq 1}
\mathbb{E} \left[\overline{M}^{2}_{n}\left(\lambda\right)\right]<\infty.
\label{Equ:L-2-bound}
\end{equation}
\end{theorem}
\begin{proof}
From (\ref{recurssion}), we obtain
\begin{eqnarray*}
\mathbb{E}\left[\left(U_{n+1}x\left(\lambda\right)\right)^{2}\Big{\lvert }\mathcal{F}_{n}\right]
& = & \left(U_{n}x\left(\lambda\right)\right)^{2}
+2e\left(\lambda\right)U_{n}x\left(\lambda\right)
\mathbb{E}\left[\mathcal{X}_{n+1}x\left(\lambda\right)\Big{\lvert }\mathcal{F}_{n}\right] \\
& & \quad + e^{2}\left(\lambda\right)
\mathbb{E}\left[\left(\mathcal{X}_{n+1}x\left(\lambda\right)\right)^{2}\Big{\lvert }\mathcal{F}_{n}\right]
\end{eqnarray*}
It is easy to see that
\begin{equation}
\mathbb{E}\left[\mathcal{X}_{n+1}x\left(\lambda\right)\Big{\lvert} \mathcal{F}_{n}\right]=\frac{1}{n+1}U_{n}x\left(\lambda\right)\text{ and }
\mathbb{E}\left[\left(\mathcal{X}_{n+1}x\left(\lambda\right)\right)^{2}\Big{\lvert }\mathcal{F}_{n}\right]=\frac{1}{n+1}U_{n}x\left(2\lambda\right).
\end{equation}
Therefore, we get the recursion
\begin{eqnarray}\label{2M}
\mathbb{E}\left[\left(U_{n+1}x\left(\lambda\right)\right)^{2}\right]=\left(1+\frac{2e\left(\lambda\right)}{n+1}\right)
\mathbb{E}\left[\left(U_{n}x\left(\lambda\right)\right)^{2}\right]+\frac{e^{2}\left(\lambda\right)}{n+1}\mathbb{E}\left[U_{n}x\left(2\lambda\right)\right].
\end{eqnarray}
Dividing both sides of (\ref{2M}) by $\Pi^{2}_{n+1}\left(e\left(\lambda\right)\right)$, we get
\begin{eqnarray}\label{22M}
\mathbb{E}\left[\overline{M}^{2}_{n+1}\left(\lambda\right)\right]=\frac{\left(1+\frac{2e\left(\lambda\right)}{n+1}\right)}
{\left(1+\frac{e\left(\lambda\right)}{n+1}\right)^{2}}\mathbb{E}\left[\overline{M}^{2}_{n}\left(\lambda\right)\right]+\frac
{e^{2}\left(\lambda\right)}{n+1}\frac{\mathbb{E}\left[U_{n}x\left(2\lambda\right)\right]}{\Pi^{2}_{n+1}\left(e\left(\lambda\right)\right)}.
\end{eqnarray}
Since $\left(\overline{M}_{n}\left(2\lambda\right)\right)_{n \geq 0}$ is a martingale, we obtain $\mathbb{E}\left[U_{n}x\left(2\lambda\right)\right]=
\Pi_{n}\left(e\left(2\lambda\right)\right)\overline{M}_{0}\left(2\lambda\right)$. Therefore from (\ref{22M}) we get
\begin{eqnarray}
\mathbb{E}\left[\overline{M}_{n}^{2}\left(\lambda\right)\right]
& = & \frac{\Pi_{n}\left(2e\left(\lambda\right)\right)}{\Pi_{n}\left(e\left(\lambda\right)\right)^{2}}
\overline{M}_{0}^{2}\left(\lambda\right) \nonumber \\
& & \quad + \sum_{k=1}^{n} \frac{e^{2}\left(\lambda\right)}{k}
\left\{\prod_{j>k}^{n}\frac{\left(1+\frac{2e\left(\lambda\right)}{j}\right)}
{\left(1+\frac{e\left(\lambda\right)}{j}\right)^{2}}\right\}
\frac{\Pi_{k-1}\left(e\left(2\lambda\right)\right)}
{\Pi_{k}^{2}\left(e\left(\lambda\right)\right)}\overline{M}_{0}\left(2\lambda\right). \label{M2}
\end{eqnarray}
We observe that, since $e\left(\lambda\right)>0$, we have
$\frac{1+\frac{2e\left(\lambda\right)}{j}}{\left(1+\frac{e\left(\lambda\right)}{j}\right)^{2}} \leq 1$
and hence
$\frac{\Pi_{n}\left(2e\left(\lambda\right)\right)}{\Pi^{2}_{n}\left(e\left(\lambda\right)\right)}\leq 1$.
Thus
\begin{eqnarray}\label{B1}
\mathbb{E}\left[\overline{M}^{2}_{n}\left(\lambda\right)\right]\leq \overline{M}^{2}_{0}\left(\lambda\right)+e^{2}\left(\lambda\right)
\overline{M}_{0}\left(2\lambda\right)\displaystyle \sum_{k=1}^{n}\frac{1}{k}\frac{\Pi_{k-1}\left(e\left(2\lambda\right)\right)}
{\Pi^{2}_{k}\left(e\left(\lambda\right)\right)}\mbox{.}
\end{eqnarray}
Using (\ref{Euler}), we know that
\begin{equation}
\Pi_{n}^{2}\left(e\left(\lambda\right)\right)\sim \frac{n^{2e\left(\lambda\right)}}{\Gamma^{2}\left(e\left(\lambda\right)+1\right)}.
\end{equation}
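The asymptotics (\ref{Euler}) used here are easy to check numerically. The following sketch is our own illustration, not part of the proof; it uses the product form $\Pi_{n}\left(x\right)=\prod_{j=1}^{n}\left(1+x/j\right)$, which can be read off from the characteristic-function identity appearing later in this section, together with $\Pi_{n}\left(x\right)\sim n^{x}/\Gamma\left(x+1\right)$:

```python
import math

def Pi(n, x):
    # Pi_n(x) = prod_{j=1}^n (1 + x/j), accumulated in log-space for stability
    return math.exp(sum(math.log1p(x / j) for j in range(1, n + 1)))

n = 200_000
for x in (0.5, 1.0, 1.7):
    # Euler's formula: Pi_n(x) ~ n^x / Gamma(x+1), so this ratio tends to 1
    ratio = Pi(n, x) * math.gamma(x + 1) / n ** x
    print(x, ratio)
```

For $n=200{,}000$ the ratios agree with $1$ to within a fraction of a percent, consistent with the $O(1/n)$ relative error of the approximation.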
Since $e\left(0\right)=1$ and $e\left(\lambda\right)$ is continuous as a function of $\lambda$, given $\eta>0$ there exist
$0<K_{1},K_{2}<\infty$ such that for all $\lambda \in \left[-\eta, \eta\right]^{d}$, $K_{1}\leq e\left(\lambda\right)\leq K_{2}$. Since the convergence in (\ref{Euler}) is uniform on compact subsets of $\left[0,\infty\right)$, given $\epsilon>0$ there exists $N_{1}>0$ such that for all $n\geq N_{1}$ and $\lambda \in \left[-\eta, \eta\right]^{d}$,
\begin{eqnarray*}
& & \left(1-\epsilon\right)\frac{\Gamma^{2}\left(e\left(\lambda\right)+1\right)}
{\Gamma \left(e\left(2\lambda\right)+1\right)}
\sum_{k\geq N_{1}}^{n} \frac{1}{k^{1+2e\left(\lambda\right)-e\left(2\lambda\right)}} \\
& \leq & \sum _{k\geq N_{1}}^{n}\frac{1}{k}\frac{\Pi_{k-1}\left(e\left(2\lambda\right)\right)}
{\Pi^{2}_{k}\left(e\left(\lambda\right)\right)} \\
& \leq &\left(1+\epsilon\right)\frac{\Gamma^{2}\left(e\left(\lambda\right)+1\right)}{\Gamma \left(e\left(2\lambda\right)+1\right)}
\displaystyle \sum_{k\geq N_{1}}^{n}\frac{1}{k^{1+2e\left(\lambda\right)-e\left(2\lambda\right)}}.
\end{eqnarray*}
Recall that $e\left(\lambda\right)=\textstyle\sum _{v \in B}e^{\langle\lambda, v\rangle}p(v)$. Since the cardinality of $B$ is finite, we can choose a $\delta_{0}>0 $ such that for every $\lambda \in \left[-\delta_{0}, \delta_{0}\right]^{d}$,
$2e\left(\lambda\right)-e\left(2\lambda\right)>0$. Choose $\delta =\textstyle \min \{\eta,\delta_{0}\}$.
Since $2e\left(\lambda\right)-e\left(2\lambda\right)$ is continuous as a function of $\lambda$, there exists a $\lambda_{0}\in \left[-\delta,\delta\right]^{d}$ such that
$\textstyle\min_{\lambda \in \left[-\delta,\delta\right]^{d}}2e\left(\lambda\right)-e\left(2\lambda\right)
=2e\left(\lambda_{0}\right)-e\left(2\lambda_{0}\right)>0$. Therefore
\begin{eqnarray*}
\displaystyle \sum_{k=1}^{\infty}\frac{1}{k^{1+2e\left(\lambda\right)-e\left(2\lambda\right)}}\leq
\sum_{k=1}^{\infty}\frac{1}{k^{1+2e\left(\lambda_{0}\right)-e\left(2\lambda_{0}\right)}}.
\end{eqnarray*}
Therefore, given $\epsilon >0$,
there exists $N_{2}>0 $ such that for all $\lambda \in \left[-\delta, \delta\right]^{d}$,
\begin{eqnarray*}
\displaystyle \sum_{k>N_{2}}^{\infty}\frac{1}{k^{1+2e\left(\lambda\right)-e\left(2\lambda\right)}}\leq \sum_{k>N_{2}}^{\infty}\frac{1}
{k^{1+2e\left(\lambda_{0}\right)-e\left(2\lambda_{0}\right)}}<\epsilon.
\end{eqnarray*}
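The existence of such a $\delta_{0}$ is easy to visualize for a concrete walk. Below is a minimal numerical sketch of our own, assuming the simple random walk on $\mathbb{Z}$, for which $e\left(\lambda\right)=\cosh\lambda$: the function $2e\left(\lambda\right)-e\left(2\lambda\right)$ equals $1$ at $\lambda=0$ and stays positive on a neighbourhood of $0$, though not on all of $\mathbb{R}$.

```python
import math

def e(lam):
    # moment generating function of one SRW step on Z: P(X = +1) = P(X = -1) = 1/2
    return math.cosh(lam)

# positive on a neighbourhood of 0 (here delta_0 = 0.5 works) ...
for k in range(-50, 51):
    lam = k / 100.0
    assert 2 * e(lam) - e(2 * lam) > 0

# ... but the sign eventually flips away from the origin
print(2 * e(1.5) - e(3.0))  # negative
```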
The quantities $\textstyle\frac{\Gamma^{2}\left(e\left(\lambda\right)+1\right)}
{\Gamma \left(e\left(2\lambda\right)+1\right)}$, $e^{2}\left(\lambda\right)$ and $\overline{M}_{0}\left(2\lambda\right)$, being continuous as functions of $\lambda$, are bounded for $ \lambda \in \left[-\delta, \delta\right]^{d}$.
Choose $N=\max\{N_{1},N_{2}\}$. From (\ref{B1}) we obtain for all $n\geq N$
\begin{eqnarray}\label{Delta1}
\mathbb{E}\left[\overline{M}^{2}_{n}\left(\lambda\right)\right]\leq \overline{M}^{2}_{0}\left(\lambda\right)+C_{1}
\displaystyle \sum_{k=1}^{N}\frac{1}{k}\frac{\Pi_{k-1}\left(e\left(2\lambda\right)\right)}
{\Pi^{2}_{k}\left(e\left(\lambda\right)\right)}+\epsilon
\end{eqnarray} for an appropriate positive constant $C_{1}$.
$\textstyle\sum_{k=1}^{N}\frac{1}{k}\frac{\Pi_{k-1}\left(e\left(2\lambda\right)\right)}
{\Pi^{2}_{k}\left(e\left(\lambda\right)\right)}$ and $\overline{M}^{2}_{0}\left(\lambda\right)$ being continuous as functions of $\lambda$, are bounded for $\lambda \in \left[-\delta,\delta\right]^{d}$.
Therefore, from (\ref{Delta1}) we obtain that there
exists $C>0$ such that
for all $\lambda \in \left[-\delta,\delta\right]^{d}$ and for all $n\geq 1$
\begin{eqnarray*}
\mathbb{E}\left[\overline{M}^{2}_{n}\left(\lambda\right)\right]\leq C.
\end{eqnarray*}
This proves \eqref{Equ:L-2-bound}.
\end{proof}
\section{Proofs of the Main Results}\label{Sec:Proofs}
In this section we present the proofs of the main results.
\subsection{Proofs for the Expected Configuration}
\begin{proof}[Proof of Theorem \ref{GRW}]
Using Theorem \ref{LRW1} we note that to prove Theorem \ref{GRW} it is enough to prove it when
$Z_0 = 0$, that is, the initial configuration of the urn consists of one ball of color $0$. In
that case
\begin{equation}
Z_{n}\stackrel{d}{=}\displaystyle \sum_{j=1}^{n}I_{j}X_{j}.
\end{equation}
Now we observe that
\begin{equation}
\mathbb{E}\left[\sum_{j=1}^{n}I_{j}X_{j}\right]-{\mathbf \mu} \log n
= \sum_{j=1}^{n}\frac{1}{j+1}{\mathbf \mu}- {\mathbf \mu}\log n\longrightarrow \left(\gamma - 1\right){\mathbf \mu},
\end{equation}
where $\gamma$ is Euler's constant. \\
\noindent
{\bf Case I:} Let $d=1$. Let $s^{2}_{n}=\mbox{Var}\left(\sum_{j=1}^{n}I_{j}X_{j}\right)$.
It is easy to note that
\[
s^{2}_{n} = \sum_{j=1}^{n}\left(\frac{1}{j+1}\mathbb{E}\left[X_{1}^{2}\right]-\frac{\mu^{2}}{(j+1)^{2}}\right)
\sim \sigma ^{2}\log n.
\]
Since the cardinality of $B$ is finite, for any $\epsilon>0$ we have
\[
\frac{1}{s^{2}_{n}} \sum_{j=1}^{n}
\mathbb{E}\left[I_{j}X^{2}_{j}1_{\{I_{j}X_{j}>\epsilon s_{n}\}}\right]\longrightarrow 0
\]
as $n \to \infty$.
Therefore, by the Lindeberg central limit theorem, we conclude that as $n \to \infty$
\[
\frac{Z_{n}-\mu \log n}{\sigma \sqrt{\log n}}\Rightarrow N(0,1).
\]
This completes the proof in this case.\\
\noindent
{\bf Case II:} Now suppose $d\geq 2$.
Let $\varSigma_{n}=\left[\sigma_{k,l}(n)\right]_{d\times d}$ denote the variance-covariance matrix of $\textstyle\sum_{j=1}^{n}I_{j}X_{j}$. Then, by calculations
similar to those in one dimension, it is easy to see that for all $k,l \in \{1,2,\ldots, d\}$, as $n \to \infty$,
\begin{eqnarray*}
\frac{\sigma_{k,l}(n)}{(\log n)\sigma_{k,l}}\longrightarrow 1.
\end{eqnarray*}
Therefore, for every $\theta \in \mathbb{R}^{d}$, by the Lindeberg central limit theorem in one dimension,
\begin{eqnarray*}
\frac{\langle \theta,\displaystyle \sum_{j=1}^{n}I_{j}X_{j}\rangle -\langle \theta, {\mathbf \mu}\log n\rangle }
{\sqrt{\log n}\left(\theta\varSigma \theta^T\right)^{1/2} }\Rightarrow N(0,1)\mbox{ as }n \to\infty.
\end{eqnarray*}
Therefore, by the Cram\'{e}r-Wold device, it follows that as $n \to \infty$
\begin{eqnarray*}
\frac{\displaystyle \sum_{j=1}^{n}I_{j}X_{j}-{\mathbf \mu}\log n}{\sqrt{\log n}}\Rightarrow N_{d}\left(0,\varSigma\right).
\end{eqnarray*}
So we conclude that as $n \to \infty$
\begin{eqnarray*}
\frac{Z_{n}- {\mathbf \mu} \log n}{\sqrt{\log n}} \Rightarrow N_{d}\left(0,\varSigma\right).
\end{eqnarray*}
This completes the proof.
\end{proof}
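The convergence in Theorem \ref{GRW} can also be observed numerically from the product form of the characteristic function, $\psi_{n}(t)=\frac{1}{n+1}\Pi_{n}\left(e\left(it/\sqrt{\log n}\right)\right)$, derived in the proof of Theorem \ref{llt1} below. The following sketch is our own illustration, taking the simple random walk step, so $e\left(it\right)=\cos t$, $\mu=0$ and $\sigma=1$:

```python
import math

def psi(n, t):
    # psi_n(t) = E[exp(i t Z_n / sqrt(log n))]
    #          = prod_{j=1}^n (1 - (1 - cos(t / sqrt(log n))) / (j + 1))
    c = 1.0 - math.cos(t / math.sqrt(math.log(n)))
    return math.exp(sum(math.log1p(-c / (j + 1)) for j in range(1, n + 1)))

n = 200_000
for t in (0.5, 1.0):
    # should approach the N(0,1) characteristic function exp(-t^2 / 2)
    print(t, psi(n, t), math.exp(-t * t / 2))
```

The agreement improves only at rate $1/\log n$, which is why even $n=200{,}000$ gives roughly two correct digits.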
\subsection{Proofs for Random Configuration}
In this subsection we will present the proof of Theorem \ref{ASd}.
We start with the following lemma which is needed in the proof of Theorem \ref{ASd}.
\begin{lem}\label{PC}
Let $\delta$ be as in Theorem \ref{Martingale}. Then for every $\lambda \in \left[-\delta, \delta \right]^{d}$,
as $n \to \infty$,
\begin{equation}
\overline{M}_{n}\left(\frac{\lambda}{\sqrt{\log n}}\right)\stackrel{p}{\longrightarrow} 1.
\end{equation}
\end{lem}
\begin{proof}
Without loss of generality, we may assume that the process starts with a single ball of color indexed by
$0$. From equation (\ref{M2})
we get
\begin{eqnarray*}
\mathbb{E}\left[\overline{M}^{2}_{n}\left(\lambda\right)\right]=\frac{\Pi_{n}\left(2e(\lambda)\right)}{\Pi^{2}_{n}\left(e(\lambda)\right)
}+\frac{\Pi_{n}\left(2e(\lambda)\right)}{\Pi^{2}_{n}\left(e(\lambda)\right)}\displaystyle\sum_{k=1}^{n}\frac{e^{2}(\lambda)}{k}
\frac{\Pi_{k-1}\left(e(2\lambda)\right)}{\Pi_{k}\left(2e(\lambda)\right)}.
\end{eqnarray*}
Replacing $\lambda$ by $\lambda_{n}=\frac{\lambda}{\sqrt{\log n}}$, we obtain
\begin{eqnarray}\label{Eq:Martingale}
\mathbb{E}\left[\overline{M}^{2}_{n}\left(\lambda_{n}\right)\right]=\frac{\Pi_{n}\left(2e\left(\lambda_{n}\right)\right)}
{\Pi^{2}_{n}\left(e\left(\lambda_{n}\right)\right)}+ \frac{\Pi_{n}\left(2e\left(\lambda_{n}\right)\right)}
{\Pi^{2}_{n}\left(e\left(\lambda_{n}\right)\right)} \displaystyle \sum_{k=1}^{n}\frac{e^{2}\left(\lambda_{n}\right)}{k}
\frac{\Pi_{k-1}\left(e\left(2\lambda_{n}\right)\right)}{\Pi_{k}\left(2e\left(\lambda_{n}\right)\right)}.
\end{eqnarray}
Since the convergence in formula (\ref{Euler}) is uniform on compact sets of $\left[0,\infty \right)$, we observe that for $\lambda \in \left[-\delta, \delta\right]^{d}$
\begin{eqnarray*}
\displaystyle \lim_{n \to \infty}\frac{\Pi_{n}\left(2e\left(\lambda_{n}\right)\right)}{\Pi^{2}_{n}\left(e\left(\lambda_{n}\right)\right)}
=\frac{\Gamma^{2}\left(2\right)}{\Gamma\left(3\right)}=\frac{1}{2}.
\end{eqnarray*}
We observe that $\textstyle\lim_{n \to \infty }e\left(\lambda_{n}\right)=1$ and
\[
\lim_{n \to \infty}\frac{\Pi_{n}\left(2e(\lambda_{n})\right)}{\Pi^{2}_{n}\left(e(\lambda_{n})\right)}
\frac{e^{2}\left(\lambda_{n}\right)}{k}
\frac{\Pi_{k-1}\left(e\left(2\lambda_{n}\right)\right)}{\Pi_{k}\left(2e\left(\lambda_{n}\right)\right)}
=\frac{1}{2}\frac{1}{k}\frac{\Pi_{k-1}(1)}{\Pi_{k}\left(2\right)}.
\]
Now using Theorem \ref{Martingale} and the dominated convergence theorem, we get
\begin{eqnarray*}
\displaystyle \lim_{n\to \infty}\frac{\Pi_{n}\left(2e\left(\lambda_{n}\right)\right)}
{\Pi^{2}_{n}\left(e\left(\lambda_{n}\right)\right)} \displaystyle \sum_{k=1}^{n}\frac{e^{2}\left(\lambda_{n}\right)}{k}
\frac{\Pi_{k-1}\left(e\left(2\lambda_{n}\right)\right)}{\Pi_{k}\left(2e\left(\lambda_{n}\right)\right)}=\frac{1}{2}
\displaystyle \sum_{k=1}^{\infty}\frac{2}{(k+2)(k+1)}=\frac{1}{2}.
\end{eqnarray*}
Therefore, from (\ref{Eq:Martingale}) we obtain
\begin{equation}
\mathbb{E}\left[\overline{M}_{n}^{2}\left(\lambda_{n}\right)\right]\longrightarrow 1 \mbox{ as } n \to \infty.
\end{equation}
Observing that $\mathbb{E}\left[ \overline{M}_{n}\left(\lambda_{n}\right) \right] = 1$, we get
\begin{equation}
\mbox{Var}\left( \overline{M}_{n}\left(\lambda_{n}\right) \right) \rightarrow 0,
\end{equation}
as $n \to \infty$. This implies
\[
\overline{M}_{n}\left(\lambda_{n}\right)\stackrel{p}{\longrightarrow} 1 \mbox{ as } n \to \infty,
\]
completing the proof of the lemma.
\end{proof}
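The algebra in the proof of Lemma \ref{PC}, namely the reduction $\frac{1}{k}\,\Pi_{k-1}(1)/\Pi_{k}(2)=\frac{2}{(k+1)(k+2)}$ and the value of the resulting telescoping series, can be verified exactly in rational arithmetic. A sketch of our own, using the product form $\Pi_{n}\left(x\right)=\prod_{j=1}^{n}\left(1+x/j\right)$:

```python
from fractions import Fraction

def Pi(n, x):
    # Pi_n(x) = prod_{j=1}^n (1 + x/j), computed exactly
    out = Fraction(1)
    for j in range(1, n + 1):
        out *= 1 + Fraction(x, j)
    return out

# term-by-term identity: (1/k) Pi_{k-1}(1) / Pi_k(2) = 2 / ((k+1)(k+2))
for k in range(1, 200):
    assert Fraction(1, k) * Pi(k - 1, 1) / Pi(k, 2) == Fraction(2, (k + 1) * (k + 2))

# partial sums telescope: sum_{k=1}^K 2/((k+2)(k+1)) = 1 - 2/(K+2) -> 1
K = 1000
s = sum(Fraction(2, (k + 2) * (k + 1)) for k in range(1, K + 1))
print(s == 1 - Fraction(2, K + 2))  # True
```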
\begin{proof}[Proof of Theorem \ref{ASd}]
Without loss of generality, we may assume that the urn process starts with a single ball of color indexed by
$0$. Now
$\Lambda_{n}$ is the random probability measure on $\mathbb{R}^{d}$ corresponding to the random probability vector
$\frac{1}{n+1}U_{n}$. For $\lambda \in \mathbb{R}^{d}$ the corresponding moment generating function is given by
\begin{equation}
\frac{1}{n+1} \sum_{v \in \mathbb{Z}^{d}}e^{\langle \lambda, v\rangle}U_{n,v}
=
\frac{1}{n+1} U_{n}x\left(\lambda\right)=\frac{1}{n+1}
\overline{M}_{n}\left(\lambda\right)\Pi_{n}\left(e(\lambda)\right).
\end{equation}
The moment generating function corresponding to the scaled and centered random measure $\Lambda^{cs}_{n}$ is
\[
\frac{1}{n+1}e^{-\langle \lambda, {\mathbf \mu}\sqrt{\log n}\rangle}U_{n}x\left(\frac{\lambda}{\sqrt{\log n}}\right)
=
\frac{1}{n+1}e^{-\langle \lambda, {\mathbf \mu}\sqrt{\log n}\rangle}
\overline{M}_{n}\left(\frac{\lambda}{\sqrt{\log n}}\right)\Pi_{n}\left(e\left(\frac{\lambda}{\sqrt{\log n}}\right)\right).
\]
To show (\ref{Eq:PrCgs}) it is enough to show that for every subsequence $\{n_{k}\}_{k\geq 1}$,
there exists a further subsequence $\{n_{k_{j}}\}_{j=1}^{\infty}$ such that as $j\to \infty$
\begin{equation}
\label{MGFCS}
\frac{e^{-\langle\lambda,{\mathbf \mu}\sqrt{\log n_{k_{j}}}\rangle}}{n_{k_{j}}+1}
\overline{M}_{n_{k_{j}}}\left(\frac{\lambda}{\sqrt{\log n_{k_{j}}}}\right)
\Pi_{n_{k_{j}}}\left(e\left(\frac{\lambda}{\sqrt{\log n_{k_{j}}}}\right)\right)\longrightarrow
e^{\frac{\lambda\varSigma \lambda^{T}}{2}}
\end{equation}
for all $\lambda \in \left[-\delta, \delta\right]^{d}$ almost surely, where $\delta$ is as in
Theorem \ref{Martingale}.
From Theorem \ref{GRW} we know that
\[
\frac{Z_{n}-{\mathbf \mu} \log n}{\sqrt{\log n}} \Rightarrow N_{d}\left(0,\varSigma\right).
\]
Therefore, using (\ref{MGF}), as $n \to \infty$ we obtain
\begin{eqnarray*}
e^{-\langle\lambda,{\mathbf \mu}\sqrt{\log n}\rangle}\mathbb{E}\left[e^{\langle\lambda,\frac{Z_{n}}{\sqrt{\log n}}\rangle}\right]=\frac{1}{n+1}e^{-\langle\lambda,{\mathbf \mu}\sqrt{\log n}\rangle}\Pi_{n}\left(e\left(\frac{\lambda}{\sqrt{ \log n}}\right)\right)\longrightarrow e^{\frac{\lambda\varSigma \lambda^{T}}{2}}.
\end{eqnarray*}
Now, using Theorem \ref{Rational2} from the appendix,
it is enough to show (\ref{MGFCS}) only for $\lambda \in \mathbb{Q}^{d}\cap\left[-\delta, \delta\right]^{d}$,
which is equivalent to proving that for every $\lambda \in \mathbb{Q}^{d}\cap \left[-\delta, \delta\right]^{d}$
as $j \to \infty$
\begin{eqnarray*}
\overline{M}_{n_{k_{j}}}\left(\frac{\lambda}{\sqrt{\log n_{k_{j}}}}\right)\longrightarrow 1 \mbox{ almost surely.}
\end{eqnarray*} From Lemma \ref{PC} we know that for all $\lambda \in \left[-\delta, \delta \right]^{d}$
\begin{eqnarray*}
\overline{M}_{n}\left(\frac{\lambda}{\sqrt{\log n}}\right)\stackrel{p}{\longrightarrow} 1\mbox{ as }n \to \infty.
\end{eqnarray*}
Therefore, by a standard diagonalization argument, given a subsequence $\{n_{k}\}_{k\geq1}$ there exists a further subsequence $\{n_{k_{j}}\}_{j=1}^{\infty}$ such that
for every $\lambda \in \mathbb{Q}^{d}\cap\left[-\delta,\delta \right]^{d}$
\begin{eqnarray*}
\overline{M}_{n_{k_{j}}}\left(\frac{\lambda}{\sqrt{\log n_{k_{j}}}}\right)\longrightarrow 1 \mbox{ almost surely.}
\end{eqnarray*}
This completes the proof.
\end{proof}
\noindent
{\bf Remark:}
It is worth noting that the proofs of Theorems \ref{GRW} and \ref{ASd} go through if we assume $U_{0}$ to be a
non-random probability vector such that there exists $r > 0$ with
$\sum_{v \in \mathbb{Z}^{d}}e^{\langle\lambda,v\rangle } U_{0,v} < \infty$
whenever $\| \lambda \| < r$.
\subsection{Proofs of the Local Limit Type Results}
In this subsection, we present the proofs for the local limit theorems.
As before, we present the proof for $d = 1$ first.
\subsubsection{Proof for the Local Limit Theorems for d=1}
\begin{proof}[Proof of Theorem \ref{llt1}]
Without loss of generality we may assume $\mu=0$ and $\sigma =1$. We further assume that the process begins
with a single ball of color indexed by 0. From Theorem \ref{LRW1}, we know that
$Z_{n}\stackrel{d}{=}\textstyle\sum_{j=1}^{n}I_{j}X_{j}$. Since $X_{j}$ is a lattice random variable, so is
$I_{j}X_{j}$. By our assumption $\mbox{${\mathbb P}$}\left(X_1 = 0 \right) > 0$, we have
$0 \in B$, and therefore $I_{j}X_{j}$ and $X_{j}$ have the
same lattice structure. Hence $Z_{n}$ is a lattice random
variable with lattice $\mathcal{L}_{n}^{(1)}$. Applying the Fourier inversion formula, for all
$x \in \mathcal{L}_{n}^{(1)}$ we obtain
\begin{eqnarray}
\label{FI1}
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)
& = &
\frac{h}{2\pi \sqrt{\log n}}
\int\limits_{-\frac{\pi \sqrt{\log n}}{h}}^{\frac{\pi \sqrt{\log n}}{h}} \! e^{-i tx}\psi_{n}(t)\, \mathrm{d}t \\
& = &
\frac{1}{2\pi \sqrt{\log n}}
\int\limits_{-\pi \sqrt{\log n}}^{\pi \sqrt{\log n}} \! e^{-i \frac{tx}{h}} \psi_{n}\left(\frac{t}{h}\right)\, \mathrm{d}t
\end{eqnarray}
where $\psi_{n}\left(t\right)=\mathbb{E}\left[e^{it\frac{Z_{n}}{\sqrt{\log n}}}\right].$
Notice that, without loss of generality, we can now assume $h=1$.
Also, by the Fourier inversion formula, for all $x \in \mathbb{R}$,
\begin{equation}
\label{FI2}
\phi(x)=\frac{1}{2 \pi }\displaystyle\int\limits_{-\infty}^{\infty}\!{e^{-itx}e^{\frac{-t^{2}}{2}}\, \mathrm{d}t}.
\end{equation}
Given $\epsilon>0$, there exists $N$ large enough such that for all $n\geq N$
\begin{eqnarray*}
& & \Big{\lvert}\sqrt{ \log n}\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)-\phi(x)\Big{\rvert} \\
& \leq & \int\limits_{-\pi \sqrt{\log n}}^{\pi\sqrt{\log n}} \!
\Big{\lvert}\psi_{n}(t)-e^{\frac{-t^{2}}{2}}\Big{\rvert}\, \mathrm{d}t
+ 2 \int\limits_{\left[-\pi \sqrt{\log n},\pi \sqrt{\log n}\right]^{c}} \! \phi(t) \, \mathrm{d}t \\
& \leq & \int\limits_{-\pi \sqrt{\log n}}^{\pi \sqrt{\log n}} \!
\Big{\lvert}\psi_{n}(t)-e^{\frac{-t^{2}}{2}}\Big{\rvert} \, \mathrm{d}t +\epsilon.
\end{eqnarray*}
Given $M>0$, we can write for all $n$ large enough
\begin{eqnarray}
\int\limits_{-\pi \sqrt{\log n}}^{\pi \sqrt{\log n}} \!
\Big{\lvert}{\psi_{n}(t)-e^{\frac{-t^{2}}{2}}\Big{\rvert}\, \mathrm{d}t}
& \leq & \int\limits_{-M}^{M}{\!\Big{\lvert}\psi_{n}(t)-e^{\frac{-t^{2}}{2}}\Big{\rvert}\, \mathrm{d}t}
+ \int\limits_{M}^{\pi \sqrt{\log n}}{\!\Big{\lvert }\psi_{n}(t)\Big{\rvert}\,\mathrm{d}t} \nonumber\\
& & \quad +2\displaystyle\int\limits_{M}^{\pi \sqrt{\log n}}\!{e^{\frac{-t^{2}}{2}}\,\mathrm{d}t}. \label{L}
\end{eqnarray}
Given $\epsilon>0$, we choose an $M>0$ such that
\[
\int\limits_{\left[-M,M\right]^{c}}\!{e^{\frac{-t^{2}}{2}}\,\mathrm{d}t} < \epsilon.
\] Therefore,
\begin{eqnarray}\label{normal}
\displaystyle \int\limits_{M}^{\pi \sqrt{\log n}}\!{e^{-\frac{t^{2}}{2}}\, \mathrm{d}t}\leq \int\limits_{\left[-M,M\right]^{c}}\!{e^{\frac{-t^{2}}{2}}\,\mathrm{d}t} < \epsilon .
\end{eqnarray}
We know from Theorem \ref{GRW} that as $n \to \infty$, $\frac{Z_{n}}{\sqrt{\log n}}\Rightarrow N(0,1)$. Hence
for all $t \in \mathbb{R}$, $\psi_{n}(t)\longrightarrow
e^{\frac{-t^{2}}{2}}$. Therefore, for the chosen $M>0$, by the bounded convergence theorem we get, as $n \to \infty$,
\begin{eqnarray*}\label{BC}
\displaystyle\int\limits_{-M}^{M}\!{\Big{\lvert} \psi_{n}(t)-e^{\frac{-t^{2}}{2}}\Big{\rvert}\,\mathrm{d}t}\longrightarrow 0.
\end{eqnarray*}
Let
\begin{eqnarray*}
\mathcal{I}(n)=\displaystyle\int\limits_{M}^{\pi \sqrt{\log n}}\!{\Big{\lvert} \psi_{n}(t)\Big{\rvert}\,\mathrm{d}t}.
\end{eqnarray*}
We will show that as $n \to \infty$, $\mathcal{I}(n)\longrightarrow 0$.
Since
$Z_{n}\stackrel{d}{=}\textstyle\sum_{j=1}^{n}I_{j}X_{j}$, we have
\begin{align}
\nonumber \mathbb{E}\left[e^{itZ_{n}}\right]&=\displaystyle \prod_{j=1}^{n}\left(1-\frac{1}{j+1}+\frac{e\left(i
t\right)}{j+1}\right)\\
\nonumber &=\frac{1}{n+1}\Pi_{n}\left(e\left(it\right)\right)
\end{align} where $e\left(it\right)=\mathbb{E}\left[e^{itX_{1}}\right]$. Therefore,
\[\psi_{n}(t)= \mathbb{E}\left[e^{it\frac{Z_{n}}{\sqrt{\log n}}}\right]=\frac{1}{n+1}\Pi_{n}\left(e(it/\sqrt{\log n})\right).\]
Applying a change of variables $\frac{t}{\sqrt{\log n}}=w$, we obtain
\begin{eqnarray}\label{I}
\mathcal{I}(n)=\sqrt{\log n}\displaystyle \int\limits_{M/\sqrt{\log n}}^{\pi}\!{\Big{\lvert}\psi_{n}\left(w\sqrt{\log n}\right)\Big{\rvert}\,\mathrm{d}w}.
\end{eqnarray}
Now there exists $\delta>0$ such that for all $t \in \left(0,\delta\right)$
\begin{eqnarray}\label{Delta}
\lvert e\left(it\right)\rvert\leq 1-\frac{t^{2}}{4}.
\end{eqnarray}
Therefore using the inequality $1-x\leq e^{-x}$, we obtain
$1-\frac{1}{j+1}+\frac{\lvert e\left(it\right)\rvert}{j+1}\leq e^{-\frac{1}{j+1}\frac{t^{2}}{4}}$.
Hence, for all $t \in \left(0,\delta\right)$
\begin{eqnarray}\label{exp}
\frac{1}{n+1}\lvert\Pi_{n}\left(e\left(it\right)\right)\rvert \leq e^{-\frac{t^{2}}{4}\displaystyle
\sum_{j=1}^{n}\frac{1}{j+1}}.
\end{eqnarray}
We observe from (\ref{I}) that we can write
\begin{eqnarray*}
\mathcal{I}(n)=\sqrt{\log n}\displaystyle\int\limits_{M/\sqrt{\log n}}^{\delta}\!{\Big{\lvert}\psi_{n}\left(w\sqrt{\log n}\right)\Big{\rvert}\,\mathrm{d}w}+\sqrt{\log n}\displaystyle\int\limits_{\delta}^{\pi}\!{\Big{\lvert}\psi_{n}\left(w\sqrt{\log n}\right)\Big{\rvert}\,
\mathrm{d}w}.
\end{eqnarray*}
Let us write
\begin{eqnarray*}
\mathcal{I}_1(n)=\sqrt{\log n}\displaystyle\int\limits_{M/\sqrt{\log n}}^{\delta}\!{\Big{\lvert}\psi_{n}\left(w\sqrt{\log n}\right)\Big{\rvert} \mathrm{d}w}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{I}_{2}(n)=\sqrt{\log n}\displaystyle\int\limits_{\delta}^{\pi}\!{\left\lvert\psi_{n}\left(w\sqrt{\log n}\right)\right\rvert\,\mathrm{d}w}.
\end{eqnarray*}
From (\ref{exp}) we have
$ \mathcal{I}_1(n)\longrightarrow 0 \mbox{ as } n \to \infty.$
Since we have assumed $h=1$, for all $t \in \left[\delta,2\pi\right)$ we have
$\lvert e\left(it\right)\rvert<1$. The characteristic function being continuous in $t$, there exists $0<\eta<1$
such that $\lvert e\left(it\right)\rvert\leq \eta$ for all $t\in \left[\delta, \pi\right]$.
Therefore
\begin{eqnarray*}
1-\frac{1}{j+1}+\frac{\lvert e\left(it\right)\rvert}{j+1}\leq 1-\frac{1}{j+1}+\frac{\eta }{j+1}\leq e^{-\frac{1-\eta}{j+1}}.
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\frac{1}{n+1}\lvert\Pi_{n}\left(e\left(it\right)\right)\rvert\leq e^{-\displaystyle \sum_{j=1}^{n}\frac{1-\eta}{j+1}}
\leq C_{2}e^{-\left(1-\eta\right)\log n}
\end{eqnarray*} where $C_{2}$ is some positive constant.
So as $n \to \infty$
\begin{eqnarray*}
\mathcal{I}_2(n)\leq C_{2} e^{-\left(1-\eta\right)\log n}\left(\pi-\delta \right)\sqrt{\log n}\longrightarrow 0.
\end{eqnarray*}
Combining the facts that
$\mathcal{I}_1(n)\longrightarrow 0,\mbox{ }\mathcal{I}_2(n)\longrightarrow 0\mbox{ as }n \to \infty$
and from (\ref{L}) and
(\ref{normal}), the proof is complete.
\end{proof}
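The two estimates driving the proof above, the quadratic bound (\ref{Delta}) on $\lvert e\left(it\right)\rvert$ and the resulting decay (\ref{exp}), can be checked numerically for the step distribution with $e\left(it\right)=\cos t$. The following is our own sketch, taking $\delta=1$, which happens to work for $\cos$; the proof only asserts that some $\delta>0$ exists:

```python
import math

# the bound in (Delta): |cos t| <= 1 - t^2/4 for all t in (0, 1]
for k in range(1, 101):
    t = k / 100.0
    assert abs(math.cos(t)) <= 1 - t * t / 4

# the bound in (exp): (1/(n+1)) |Pi_n(cos t)| <= exp(-(t^2/4) sum_{j=1}^n 1/(j+1)),
# which is of order n^(-t^2/4)
n, t = 50_000, 1.0
lhs = math.exp(sum(math.log1p(-(1.0 - math.cos(t)) / (j + 1)) for j in range(1, n + 1)))
rhs = math.exp(-(t * t / 4) * sum(1.0 / (j + 1) for j in range(1, n + 1)))
assert lhs <= rhs
print(lhs, rhs)
```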
As discussed before, Theorem \ref{llt1} covers only the lazy-walk cases. Suppose now that
$\mathbb{P}\left[X_{1}=0\right]=0$ and let $\tilde{h}$ be the span of $X_1$.
We can then write $\mathbb{P}\left(I_{1}X_{1}
\in a+ h\mathbb{Z}\right)=1$, where $a \in \mathbb{R}$ and $h>0$ is the span of $I_{1}X_{1}$.
It is easy to note that $h \leq \tilde{h}$. The following result gives a local limit theorem for the
case when $\tilde{h} < 2h$.
\begin{theorem}
\label{llt3}
Assume that $\tilde{h} < 2h$. Then, as $n \to \infty$,
\begin{equation}
\sup_{x \in \mathcal{L}_{n}^{(1)}}
\left\vert \sigma \frac{\sqrt{ \log n}}{h}\mathbb{P}\left(\frac{Z_{n}-\mu \log n}{\sigma \sqrt{\log n}}=x\right)
-\phi(x) \right\vert \longrightarrow 0,
\end{equation}
where
$\mathcal{L}_{n}^{(1)}
=
\left\{x\colon x
=
\frac{n}{\sigma\sqrt{\log n}}a-\frac{\mu}{\sigma} \sqrt{\log n}+\frac{h}{\sigma \sqrt{\log n}}\mathbb{Z}\right\}$.
\end{theorem}
\begin{proof}
Without loss of generality we may assume that $\mu=0$, $\sigma=1$ and the process begins with a single ball of color indexed by $0$. The proof is similar to that of Theorem \ref{llt1}, and therefore we omit certain details. Since for every $ j \in \mathbb{N}$ the span of $I_{j}X_{j}$ is $h$, for all $x \in \mathcal{L}_{n}^{(1)}$ we obtain, by the Fourier inversion formula,
\begin{eqnarray*}\label{FI3}
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)=\frac{h}{2\pi \sqrt{\log n}}\displaystyle
\int\limits_{-\frac{\pi \sqrt{\log n}}{h}}^{\frac{\pi \sqrt{\log n}}{h}}\!{e^{-i tx}\psi_{n}(t)\, \mathrm{d}t}
\end{eqnarray*} where $\psi_{n}\left(t\right)=\mathbb{E}\left[e^{it\frac{Z_{n}}{\sqrt{\log n}}}\right].$
Without loss of generality, we may assume $h=1$.
Also by Fourier inversion formula, for all $x \in \mathbb{R}$
\begin{eqnarray*}\label{FI4}
\phi(x)=\frac{1}{2 \pi }\displaystyle\int\limits_{-\infty}^{\infty}\!{e^{-itx}e^{\frac{-t^{2}}{2}}\, \mathrm{d}t}.
\end{eqnarray*}
The bounds for $\Big{\lvert} \sqrt{ \log n}\mathbb{P}\left(\frac{Z_{n}
}{\sqrt{\log n}}=x\right)-\phi(x)
\Big{\rvert }$ are similar to those in the proof of Theorem \ref{llt1}, except for that of $\mathcal{I}_2(n)$, where
\begin{eqnarray*}
\mathcal{I}_2(n)=\sqrt{\log n}\displaystyle\int\limits_{\delta}^{\pi}\!{\left\lvert\psi_{n}\left(w\sqrt{\log n}\right)\right\rvert\,\mathrm{d}w}
\end{eqnarray*} and $\delta$ is chosen as in (\ref{Delta}).
We have to show
\[\mathcal{I}_2(n)\longrightarrow 0 \mbox{ as } n \to \infty.
\]
The span of $X_{1}$ being $\tilde{h}$, for all $t \in \left[\delta,\frac{2\pi}{\tilde{h}}\right)$ we have
$\lvert e\left(it\right)\rvert<1$. Since we have assumed $h=1$, it follows that $\tilde{h}<2$. The characteristic function being continuous in $t$, there exists $0<\eta<1$
such that $\lvert e\left(it\right)\rvert\leq \eta$ for all $t\in \left[\delta, \pi\right]\subset \left[\delta,\frac{2\pi}{\tilde{h}}\right)$. Therefore $\mathcal{I}_2(n)\longrightarrow 0 \mbox{ as } n \to \infty$, using bounds similar to those in the proof of Theorem \ref{llt1}.
\end{proof}
Next we prove Theorem \ref{llt4}.
\begin{proof}[Proof of Theorem \ref{llt4}]
In this case $\mbox{${\mathbb P}$}\left(X_1 = 1 \right) = \mbox{${\mathbb P}$}\left(X_1 = -1\right) = \frac{1}{2}$, so
the span of $X_1$ is $2$.
The random variable $I_1 X_1$ is supported on the set $\left\{-1,0,1\right\}$ and has span $1$.
We have $\mu =0$ and $\sigma=1$, so from equation (\ref{Equ:Def-L^1}) we get
$\mathcal{L}_{n}^{(1)}=\{x\colon x=\frac{1}{\sqrt{\log n}}\mathbb{Z}\}$.
As earlier, we may assume without loss of generality that the process begins with a single ball of color indexed by $0$.
For all $x \in \mathcal{L}_{n}^{(1)}$ we obtain, by the Fourier inversion formula,
\begin{eqnarray*}
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)=\frac{1}{2\pi \sqrt{\log n}}\displaystyle
\int\limits_{-\pi \sqrt{\log n}}^{\pi \sqrt{\log n}}\!{e^{-i tx}\psi_{n}(t)\, \mathrm{d}t}
\end{eqnarray*} where $\psi_{n}\left(t\right)=\mathbb{E}\left[e^{it\frac{Z_{n}}{\sqrt{\log n}}}\right].$
Furthermore, by the Fourier inversion formula, for all $x \in \mathbb{R}$,
\begin{eqnarray*}
\phi(x)=\frac{1}{2 \pi }\displaystyle\int\limits_{-\infty}^{\infty}\!{e^{-itx}e^{\frac{-t^{2}}{2}}\, \mathrm{d}t}.
\end{eqnarray*}
The proof of this theorem is also very similar to that of Theorem \ref{llt1}.
The bounds for $\Big{\lvert} \sqrt{ \log n}\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)-\phi(x) \Big{\rvert }$ are similar to those in the proof of
Theorem \ref{llt1}, except for that of $\mathcal{I}_2(n)$
where
\begin{eqnarray*}
\mathcal{I}_2(n)=\sqrt{\log n}\displaystyle\int\limits_{\delta}^{\pi}\!{\left\lvert\psi_{n}\left(w\sqrt{\log n}\right)\right\rvert\,\mathrm{d}w}
\end{eqnarray*} and $\delta $ is chosen as in (\ref{Delta}).
To show that $\mathcal{I}_2(n)\longrightarrow 0 \mbox{ as } n \to \infty$, we observe that
\begin{align}
\nonumber \mathbb{E}\left[e^{itZ_{n}}\right]&=\displaystyle \prod_{j=1}^{n}\left(1-\frac{1}{j+1}+\frac{\cos t
}{j+1}\right)\\
\nonumber &=\frac{1}{n+1}\Pi_{n}\left(\cos t\right)
\end{align} since $\mathbb{E}\left[e^{itX_{1}}\right]=\cos t$. Therefore,
\[\psi_{n}(w\sqrt{\log n})= \mathbb{E}\left[e^{iwZ_{n}}\right]=\frac{1}{n+1}\Pi_{n}\left(\cos w\right).\]
We note that $\cos w$ is decreasing on $\left[\frac{\pi}{2},\pi\right]$ and that $-1\leq \cos w \leq 0$ for all $w \in \left[\frac{\pi}{2}, \pi\right]$. Therefore, there exists $\eta >0$ (small enough) such that $\left[\pi-\eta, \pi\right)\subset \left(\frac{\pi}{2}, \pi\right] $,
$-1< \cos(\pi-\eta)<0$, and for all $w \in \left[\pi-\eta, \pi\right)$
\begin{eqnarray*}
\Big{\lvert}\psi_{n}(w\sqrt{\log n})\Big{\rvert}\leq \frac{1}{n+1}\Pi_{n}\left(\cos (\pi-\eta)\right).
\end{eqnarray*}
Since $-1<\cos (\pi-\eta)<0$, for all $j\geq1$ we have $\left(1+\frac{\cos (\pi -\eta)}{j}\right)<1$. Therefore,
\begin{eqnarray}\label{J2}
\Pi_{n}\left(\cos (\pi-\eta)\right)\leq 1.
\end{eqnarray}
Let us write
\[\mathcal{I}_2(n)=\mathcal{J}_{1}(n)+\mathcal{J}_2(n)
\] where
\begin{eqnarray}\label{J1}
\mathcal{J}_1(n)= \sqrt{\log n}\displaystyle\int\limits_{\delta}^{\pi-\eta}\!{\left\lvert\psi_{n}\left(w\sqrt{\log n}\right)\right\rvert\,\mathrm{d}w}
\end{eqnarray} and
\begin{eqnarray*}
\mathcal{J}_2(n)= \sqrt{\log n}\displaystyle\int\limits_{\pi -\eta}^{\pi}\!{\left\lvert\psi_{n}\left(w\sqrt{\log n}\right)\right\rvert\,\mathrm{d}w}.
\end{eqnarray*}
It is easy to see from (\ref{J2}) that
\begin{eqnarray*}
\mathcal{J}_2(n)\leq \frac{\eta}{n+1}\sqrt{\log n}\longrightarrow 0 \mbox{ as } n \to \infty.
\end{eqnarray*}
For all $t\in \left[\delta, \pi-\eta \right]$ we have $0\leq \lvert \cos t\rvert <1$, so there exists $0<\alpha<1$ such that
$0\leq \lvert \cos t \rvert \leq \alpha$ for all $t\in \left[\delta, \pi-\eta \right]$. Recall that
\[\psi_{n}(w\sqrt{\log n})=\prod_{j=1}^{n}\left(1-\frac{1}{j+1}+\frac{\cos w}{j+1}\right).
\] Using the inequality $1-x\leq e^{-x}$, it follows that for all $t\in \left[\delta, \pi-\eta \right]$
\begin{eqnarray*}
1-\frac{1}{j+1}+\frac{\lvert \cos t\rvert}{j+1}\leq 1-\frac{1}{j+1}+\frac{\alpha }{j+1}\leq e^{-\frac{1-\alpha}{j+1}}
\end{eqnarray*}
and hence
\begin{eqnarray*}
\frac{1}{n+1}\lvert\Pi_{n}\left(\cos t\right)\rvert\leq e^{-\displaystyle \sum_{j=1}^{n}\frac{1-\alpha}{j+1}}
\leq Ce^{-\left(1-\alpha\right)\log n}
\end{eqnarray*} where $C$ is some positive constant. Therefore from (\ref{J1}) we obtain as $n \to \infty$
\begin{eqnarray*}
\mathcal{J}_1(n)\leq C e^{-\left(1-\alpha \right)\log n}\left(\pi-\eta -\delta \right)\sqrt{\log n}\longrightarrow 0.
\end{eqnarray*}
\end{proof}
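All the local limit proofs above rest on the Fourier inversion formula for lattice random variables. For integer-valued variables (span $h=1$) the inversion can even be checked exactly for small $n$: an $N$-point discretization of $\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-itm}\,\mathbb{E}\left[e^{itZ_{n}}\right]\mathrm{d}t$ recovers the point masses exactly once $N$ exceeds the width of the support. A self-contained sketch of our own, with $I_{j}X_{j}$ distributed as in the proof of Theorem \ref{llt4}:

```python
import cmath
import math

def pmf(n):
    # exact law of Z_n = sum_{j=1}^n I_j X_j with P(I_j = 1) = 1/(j+1)
    # and X_j = +/-1 with probability 1/2 each, built by convolution
    dist = {0: 1.0}
    for j in range(1, n + 1):
        p = 1.0 / (j + 1)
        new = {}
        for m, q in dist.items():
            new[m] = new.get(m, 0.0) + (1 - p) * q          # I_j = 0
            new[m + 1] = new.get(m + 1, 0.0) + p * q / 2    # I_j X_j = +1
            new[m - 1] = new.get(m - 1, 0.0) + p * q / 2    # I_j X_j = -1
        dist = new
    return dist

def char_fn(dist, t):
    return sum(q * cmath.exp(1j * t * m) for m, q in dist.items())

n, N = 8, 64  # N-point rule: exact while the support sits inside (-N/2, N/2)
dist = pmf(n)
for m, q in dist.items():
    approx = sum(char_fn(dist, 2 * math.pi * k / N) *
                 cmath.exp(-2j * math.pi * k * m / N) for k in range(N)) / N
    assert abs(approx.real - q) < 1e-12
print("inversion recovers the pmf up to rounding error")
```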
\subsubsection{Proofs for the Local Limit Type Results for $d\geq2$}
\begin{proof}[Proof of Theorem \ref{llt2} ]
Without loss of generality we may assume that ${\mathbf \mu}=0$ and $\varSigma=\mathbb{I}_{d}$ and the process begins with a single ball of color indexed by $0$.
From Theorem \ref{LRW1}, we obtain $Z_{n}\stackrel{d}{=}\textstyle \sum_{j=1}^{n}I_{j}X_{j}$.
Since $X_{j}$ is a lattice random variable, so is $I_{j}X_{j}$.
By our assumption $\mbox{${\mathbb P}$}\left(X_1 = 0 \right) > 0$, we have
$0 \in B$, and therefore
$X_{j}$ and $I_{j}X_{j}$ are supported on the same lattice.
For $A\subset \mathbb{R}^{d}$ and $x\in \mathbb{R}$, we define
$xA=\{xy\colon y \in A\}$.
By the Fourier inversion formula (see 21.28 on page 230 of \cite{BhRa76}), we get, for $x\in \mathcal{L}_{n}^{(d)}$,
\begin{eqnarray*}
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)=\frac{l}{(2\pi\sqrt{\log n})^{d}}\int\limits_{\sqrt{\log n}\mathcal{F}^{*}}
\!{\psi_{n}(t)e^{-i\langle t,x\rangle}\, \mathrm{d}t}
\end{eqnarray*} where $\psi_{n}(t)=\mathbb{E}\left[e^{i\langle t,\frac{Z_{n}}{\sqrt{\log n}}\rangle}\right]$, $l=\lvert\text{det}\left(\mathcal{L}\right)\rvert$ and
$\mathcal{F}^{*}$ is the fundamental domain for $X_{1}$ as defined in equation (21.22) on page 229 of \cite{BhRa76}.
Also, by the Fourier inversion formula,
\begin{eqnarray*}
\phi_{d}(x)=\frac{1}{(2\pi)^{d}}\int\limits\limits_{\mathbb{R}^{d}}\!
{e^{-i\langle t,x\rangle}e^{-\frac{\|t\|^{2}}{2}}\,\mathrm{d}t}.
\end{eqnarray*}
Given $\epsilon>0$, there exists $N>0$ such that for all $n\geq N$,
\begin{eqnarray*}
& & \Big{\lvert} \frac{\left(\sqrt{\log n} \right)^{d}}{l}
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)-
\phi_{d}(x)\Big{\rvert } \\
&\leq & \frac{1}{(2\pi)^{d}}\int\limits\limits_{(\sqrt{\log n}\mathcal{F}^{*})}\!
\Big{\lvert}\psi_{n}(t)-
e^{-\frac{\|t\|^{2}}{2}}\Big{\rvert} \, \mathrm{d}t
+\frac{1}{(2\pi)^{d}}\int\limits \limits_{\mathbb{R}^{d}\setminus \sqrt{\log n}\mathcal{F}^{*} }\!{e^{-\frac{\|t\|^{2}}{2}}\,\mathrm{d}t}\\
&\leq & \frac{1}{(2\pi)^{d}}\int\limits\limits_{(\sqrt{\log n}\mathcal{F}^{*})}\!{\Big{\lvert}\psi_{n}(t)-
e^{-\frac{\|t\|^{2}}{2}}\Big{\rvert}\, \mathrm{d}t}+ \epsilon.
\end{eqnarray*}
Given any compact set $A\subset\mathbb{R}^{d}$ for all $n$ large enough
\begin{eqnarray*}
\int\limits\limits_{(\sqrt{\log n}\mathcal{F}^{*})} \!
{\Big{\lvert}\psi_{n}(t)-e^{-\frac{\|t\|^{2}}{2}}\Big{\rvert}\, \mathrm{d}t}
& \leq & \int\limits\limits_{A} \! \Big{\lvert}\psi_{n}(t)- e^{-\frac{\|t\|^{2}}{2}}\Big{\rvert}\, \mathrm{d}t
+ \int\limits_{(\sqrt{\log n}\mathcal{F}^{*})\setminus A} \!
\Big{\lvert}\psi_{n}(t)\Big{\rvert}\, \mathrm{d}t \\
& & \qquad \qquad + \int\limits_{\mathbb{R}^{d}\setminus A}\! e^{-\frac{\|t\|^{2}}{2}}\, \mathrm{d}t.
\end{eqnarray*}
By Theorem \ref{GRW}, we know that $\frac{Z_{n}}{\sqrt{\log n}}\Rightarrow N_{d}(0,\mathbb{I}_{d})$ as $n \to \infty$. Therefore,
for any compact set $A\subset \mathbb{R}^{d}$ by bounded convergence theorem,
\begin{eqnarray*}
\int\limits\limits_{A}\!{\Big{\lvert} \psi_{n}(t)-e^{-\frac{\|
t\|^{2}}{2}}\Big{\rvert}\mathrm {d}t} \longrightarrow 0 \mbox{ as } n \to \infty.
\end{eqnarray*}
Choose $A$ such that
\begin{eqnarray*}
\int\limits\limits_{A^{c}}\!{e^{-\frac{\|t\|^{2}}{2}}\,\mathrm{d}t}<\epsilon.
\end{eqnarray*}
Let us write
\begin{eqnarray}\label{Int}
\mathcal{I}(n)=\int\limits\limits _{(\sqrt{\log n}\mathcal{F}^{*})\setminus A}\!{\Big{\lvert}\psi_{n}(t)\Big{\rvert}\,\mathrm{d}t}.
\end{eqnarray}
For the above choice of $A$, we will show that
\begin{eqnarray*}
\mathcal{I}(n)\longrightarrow 0 \mbox{ as } n \to \infty.
\end{eqnarray*}
Since $Z_{n}\stackrel{d}{=}\textstyle \sum_{j=1}^{n}I_{j}X_{j}$, we have
\begin{align}
\nonumber \mathbb{E}\left[e^{i\langle t,Z_{n}\rangle}\right]&=\displaystyle \prod_{j=1}^{n}\left(1-\frac{1}{j+1}+\frac{e\left(i
t\right)}{j+1}\right)\\
\nonumber &=\frac{1}{n+1}\Pi_{n}\left(e\left(it\right)\right)
\end{align} where $e\left(it\right)=\mathbb{E}\left[e^{i\langle t,X_{1}\rangle}\right]$. So,
\[\psi_{n}(t)=\mathbb{E}\left[e^{i\langle t,\frac{Z_{n}}{\sqrt{\log n}}\rangle}\right]=\frac{1}{n+1}\Pi_{n}\left(e\left(\frac{1}{\sqrt{\log n}}it\right)\right).\]
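The two product representations of the characteristic function used in this proof, $\prod_{j=1}^{n}\left(1-\frac{1}{j+1}+\frac{x}{j+1}\right)$ above and $\frac{1}{n+1}\prod_{j=1}^{n}\left(1+\frac{x}{j}\right)$ later in the treatment of the corner terms, coincide, since both equal $\frac{1}{(n+1)!}\prod_{j=1}^{n}(j+x)$. A quick numerical sanity check of this algebraic identity (illustrative only, not part of the argument; the tested values of $x$ are arbitrary points of the closed unit disc):

```python
import cmath

def prod_form_a(x, n):
    # \prod_{j=1}^{n} (1 - 1/(j+1) + x/(j+1)) = \prod_{j=1}^{n} (j+x)/(j+1)
    p = 1.0 + 0j
    for j in range(1, n + 1):
        p *= 1 - 1 / (j + 1) + x / (j + 1)
    return p

def prod_form_b(x, n):
    # (1/(n+1)) \prod_{j=1}^{n} (1 + x/j) = (1/(n+1)) \prod_{j=1}^{n} (j+x)/j
    p = 1.0 + 0j
    for j in range(1, n + 1):
        p *= 1 + x / j
    return p / (n + 1)

# x plays the role of a characteristic-function value e(it), so |x| <= 1
for x in [0.3 + 0.4j, -1.0 + 0j, cmath.exp(0.7j)]:
    for n in [1, 5, 50]:
        assert abs(prod_form_a(x, n) - prod_form_b(x, n)) < 1e-12
```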
Applying a change of variables $t=\frac{1}{\sqrt{\log n}}w$ to (\ref{Int}), we obtain
\begin{eqnarray}\label{CV}
\mathcal{I}(n)=(\sqrt{\log n})^{d}\int\limits\limits_{\mathcal{F}^{*}\setminus \frac{1}{\sqrt{\log n}}A}\!{\Big{\lvert}\psi_{n}\left(\sqrt{\log n}w\right)\Big{\rvert}\,\mathrm{d}w}.
\end{eqnarray}
We can choose $\delta>0$ and $b>0$ such that for all $w\in B(0,\delta)\setminus \{0\}$
\begin{eqnarray}\label{RWB}
\lvert e(iw)\rvert \leq 1-\frac{b\|w\|^{2}}{2}.
\end{eqnarray}
Therefore, using the inequality $1-x\leq e^{-x}$
\begin{eqnarray}
\nonumber\lvert\psi_{n}(\sqrt{\log n}w)\rvert &=& \frac{1}{n+1}\lvert\Pi_{n}\left(e(iw)\right)\rvert\\
\nonumber &\leq& \displaystyle \prod_{j=1}^{n}\left(1-\frac{1}{j+1}+\frac{\lvert e(iw)\rvert}{j+1}\right)\\
\label{Dbound} &\leq & e^{-\displaystyle\sum_{j=1}^{n}\frac{b}{j+1}\frac{\|w\|^{2}}{2}} \leq C_{1}e^{-b \frac{\|w\|^{2}}{2}\log n}
\end{eqnarray} for some positive constant $C_{1}$.
From (\ref{CV}) we can write
\begin{eqnarray*}
\mathcal{I}(n)=\mathcal{I}_1(n)+ \mathcal{I}_2(n)
\end{eqnarray*}
where
\begin{eqnarray*}
\mathcal{I}_1(n)=(\sqrt{\log n})^{d}\int\limits\limits_{\left(B(0,\delta)\setminus \frac{1}{\sqrt{\log n}}A\right)\cap \mathcal{F}^{*}}\!{\lvert{\psi_{n}
\left(\sqrt{\log n}w\right)\rvert}\,\mathrm{d}w}
\end{eqnarray*} and
\begin{eqnarray*}
\mathcal{I}_2(n)=(\sqrt{\log n})^{d}\int\limits_{\mathcal{F}^{*}\setminus B(0,\delta)}{\lvert\psi_{n}\left(\sqrt{\log n}w\right)\rvert dw}.
\end{eqnarray*}
Since (\ref{Dbound}) holds, given $\epsilon>0$, we have for all $n$ large enough
\begin{eqnarray}\label{I1}
\mathcal{I}_1(n)\leq (\sqrt{\log n})^{d}\int\limits\limits_{B(0,\delta)\setminus \frac{A}{\sqrt{\log n}}}\!{C_{1}e^{-b\frac{\|w\|^{2}}{2}\log n}\,\mathrm{d}w}\leq \epsilon.
\end{eqnarray}
Since the lattices for $X_1$ and $I_1 X_1$ are the same, we have $\lvert e(iw)\rvert<1$ for all $w \in \mathcal{F}^{*}\setminus B(0,\delta)$, so there exists $\eta\in(0,1)$ such that
$\lvert e(iw)\rvert \leq \eta$ on this set. Therefore, using the inequality $1-x\leq e^{-x}$, we obtain
\begin{eqnarray}\label{Exp1}
\lvert\psi_{n}(\sqrt{\log n}w)\rvert\leq e^{- \sum_{j=1}^{n}\frac{1}{j+1}(1-\eta)}\leq C_{2}e^{-(1-\eta)\log n}
\end{eqnarray} for some positive constant $C_{2}$.
Therefore, using equation (21.25) on page 230 of \cite{BhRa76} we obtain
\begin{eqnarray*}
\mathcal{I}_2(n)\leq C'_{2}(\sqrt{\log n})^{d}e^{-(1-\eta)\log n}\longrightarrow 0 \mbox{ as } n \to \infty
\end{eqnarray*} where $C'_{2}$ is an appropriate positive constant.
\end{proof}
\begin{proof}[Proof of Theorem \ref{llt5}]
In this case $\mbox{${\mathbb P}$}\left(X_1 = \pm e_i\right) = \frac{1}{2d}$ for $1 \leq i \leq d$, where $e_i$ is the
$i^{\text{th}}$ standard unit vector; thus $\mu = 0$ and $\varSigma = \frac{1}{d} {\mathbb I}_d$.
For notational simplicity we consider the case $d = 2$; the general case can be treated similarly.
Now for each $j \in \mathbb{N}$, $I_{j}X_{j}$ is a lattice random vector with the minimal lattice $\mathbb{Z}^{2}$. It is easy to note that
$2\pi \mathbb{Z}\times 2\pi\mathbb{Z}$ is the set of all periods for $I_{j}X_{j}$ and its fundamental domain is given by $\left(-\pi,\pi\right)^{2}$.
To prove (\ref{lltSSRW}), it is enough to show
\begin{eqnarray*}
\displaystyle \sup_{x \in \frac{1}{\sqrt{2}}\mathcal{L}_{n}^{(2)}} \left\vert \left(\log n\right)
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)-
\phi_{2,\frac{1}{2}\mathbb{I}_{2}}(x) \right\vert \longrightarrow 0 \mbox{ as } n \to \infty,
\end{eqnarray*} where $\phi_{2,\frac{1}{2}\mathbb{I}_{2}}(x)=\frac{1}{\pi}e^{-\|x\|^2}$ is the
bivariate normal density with mean vector $0$ and variance-covariance matrix $\frac{1}{2}\mathbb{I}_{2}$
and
$\frac{1}{\sqrt{2}}\mathcal{L}_{n}^{(2)}=\frac{1}{\sqrt{\log n}}\mathbb{Z}^{2}=\left\{x\in\mathbb{R}^{2}\colon \sqrt{\log n}\,x\in \mathbb{Z}^{2}\right\}.$
By the Fourier inversion formula (see (21.28) on page 230 of \cite{BhRa76}), we get for $x\in \frac{1}{\sqrt{2}}\mathcal{L}_{n}^{(2)}$
\begin{eqnarray*}
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)=\frac{1}{(2\pi)^{2} \log n}\int\limits_{\left(-\sqrt{\log n}\pi,\sqrt{\log n}\pi\right)^{2}}
\!{\psi_{n}(t)e^{-i\langle t,x\rangle}\, \mathrm{d}t}.
\end{eqnarray*}
Also by the Fourier inversion formula,
\begin{eqnarray*}
\phi_{2,\frac{1}{2}\mathbb{I}_{2}}(x)=\frac{1}{(2\pi)^{2}}\int\limits\limits_{\mathbb{R}^{2}}\!{e^{-i\langle t,x\rangle}e^{-\frac{\|t\|^{2}}{4}}\,\mathrm{d}t}.
\end{eqnarray*} Let us write $H_{n}=\left(-\sqrt{\log n}\pi,\sqrt{\log n}\pi\right)^{2}.$
Given $\epsilon>0$, there exists $N>0$ such that for all $n\geq N$,
\begin{eqnarray*}
\nonumber \Big{\lvert} \log n \,
\mathbb{P}\left(\frac{Z_{n}}{\sqrt{\log n}}=x\right)-
\phi_{2,\frac{1}{2}\mathbb{I}_{2}}(x)\Big{\rvert }&\leq & \frac{1}{(2\pi)^{2}}\int\limits\limits_{H_{n}}\!{\Big{\lvert}\psi_{n}(t)-
e^{-\frac{\|t\|^{2}}{4}}\Big{\rvert}\, \mathrm{d}t}+\frac{1}{(2\pi)^{2}}\int\limits \limits_{\mathbb{R}^{2}\setminus H_{n}}\!{e^{-\frac{\|t\|^{2}}{4}}\,\mathrm{d}t}\\
&\leq & \frac{1}{(2\pi)^{2}}\int\limits\limits_{H_{n}}\!{\Big{\lvert}\psi_{n}(t)-
e^{-\frac{\|t\|^{2}}{4}}\Big{\rvert}\, \mathrm{d}t}+ \epsilon.
\end{eqnarray*}
Given any compact set $A\subset\mathbb{R}^{2}$, for all $n$ large enough we have
\begin{eqnarray}\label{III}
\int\limits_{H_{n}}\!{\Big{\lvert}\psi_{n}(t)-
e^{-\frac{\|t\|^{2}}{4}}\Big{\rvert}\, \mathrm{d}t}\leq \int\limits_{A}\!{\Big{\lvert}\psi_{n}(t)- e^{-\frac{\|t\|^{2}}{4}}\Big{\rvert}\, \mathrm{d}t}+ \int\limits_{H_{n}\setminus A}{\Big{\lvert}\psi_{n}(t)\Big{\rvert}\, \mathrm{d}t}+
\int\limits_{\mathbb{R}^{2}\setminus A}\!{e^{-\frac{\|t\|^{2}}{4}}\, \mathrm{d}t}.
\end{eqnarray}
By Theorem \ref{GRW}, we know that $\frac{Z_{n}}{\sqrt{\log n}}\Rightarrow N_{2}(0,2^{-1}\mathbb{I}_{2})$ as $n \to \infty$. Therefore,
for any compact set $A\subset \mathbb{R}^{2}$, by the bounded convergence theorem,
\begin{eqnarray*}
\int\limits\limits_{A}\!{\Big{\lvert} \psi_{n}(t)-e^{-\frac{\|
t\|^{2}}{4}}\Big{\rvert}\mathrm {d}t} \longrightarrow 0 \mbox{ as } n \to \infty.
\end{eqnarray*}
Choose a compact set $A$ such that
\begin{eqnarray*}
\int\limits\limits_{A^{c}}\!{e^{-\frac{\|t\|^{2}}{4}}\,\mathrm{d}t}<\epsilon.
\end{eqnarray*}
Let us write
\begin{eqnarray*}
\mathcal{I}(n)=\int\limits\limits _{H_{n}\setminus A}\!{\Big{\lvert}\psi_{n}(t)\Big{\rvert}\,\mathrm{d}t}.
\end{eqnarray*}
For the above choice of $A$, we will show that
\begin{eqnarray*}
\mathcal{I}(n)\longrightarrow 0 \mbox{ as } n \to \infty.
\end{eqnarray*}
Applying a change of variables $t=\frac{1}{\sqrt{\log n}}w$, we obtain
\begin{eqnarray*}
\mathcal{I}(n)=\log n\int\limits_{\left(-\pi,\pi\right)^{2}\setminus \frac{1}{\sqrt{\log n}}A}\!{\Big{\lvert}\psi_{n}\left(\sqrt{\log n}w\right)\Big{\rvert}\,\mathrm{d}w},
\end{eqnarray*}
where for $A \subset \mathbb{R}^{d}$ and $x \in \mathbb{R}$, we write $xA=\{xy\colon y \in A\}$.
We can write
\begin{eqnarray*}
\mathcal{I}(n)=\mathcal{I}_1(n)+ \mathcal{I}_2(n)
\end{eqnarray*}
where
\begin{eqnarray*}
\mathcal{I}_1(n)=\log n \int\limits\limits_{\left(B(0,\delta)\setminus \frac{1}{\sqrt{\log n}}A\right)\cap \left(-\pi,\pi\right)^{2}}\!{\lvert{\psi_{n}
\left(\sqrt{\log n}w\right)\rvert}\,\mathrm{d}w}
\end{eqnarray*} and
\begin{eqnarray*}
\mathcal{I}_2(n)=\log n\int\limits_{\left(-\pi,\pi\right)^{2}\setminus B(0,\delta)}{\lvert\psi_{n}\left(\sqrt{\log n}w\right)\rvert dw},
\end{eqnarray*}
where $\delta$ is as in (\ref{RWB}).
Using arguments similar to (\ref{I1}), we can show that $\mathcal{I}_1(n)\longrightarrow 0 \mbox{ as } n \to \infty.$ Therefore it is enough to show that $\mathcal{I}_2(n)\longrightarrow 0 \mbox{ as } n \to \infty.$
To do so, we first observe that for $t=\left(t^{(1)},t^{(2)}\right)\in \mathbb{R}^{2}$ the characteristic function of $X_{1}$ is given by $e\left(it\right)=\frac{1}{2}\left(\cos t^{(1)}+\cos t^{(2)}\right)$. So if $t\in \left[-\pi,\pi\right]^{2}\setminus\{(0,0)\}$ is such that
$\lvert e\left(it\right)\rvert=1,$ then $t\in \{(\pi,\pi),(-\pi,\pi),(\pi,-\pi),(-\pi,-\pi)\}$.
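On $\left[-\pi,\pi\right]^{2}$ the modulus $\lvert e(it)\rvert=\frac{1}{2}\lvert\cos t^{(1)}+\cos t^{(2)}\rvert$ attains the value $1$ only at the origin (excluded here by the ball $B(0,\delta)$) and at the four corners. A simple grid check of this localization (illustrative only; the grid size and tolerance are arbitrary choices):

```python
import numpy as np

# e(it) = (cos t1 + cos t2)/2 for the planar simple random walk
ts = np.linspace(-np.pi, np.pi, 201)
t1, t2 = np.meshgrid(ts, ts)
e_abs = np.abs(np.cos(t1) + np.cos(t2)) / 2

# grid points where |e(it)| is numerically close to 1
mask = e_abs >= 1 - 1e-4
pts = np.column_stack([t1[mask], t2[mask]])

# each such point is the origin or one of the corners (±π, ±π)
special = [(0.0, 0.0), (np.pi, np.pi), (-np.pi, np.pi),
           (np.pi, -np.pi), (-np.pi, -np.pi)]
for x, y in pts:
    assert min(np.hypot(x - sx, y - sy) for sx, sy in special) < 0.05
```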
The function $\cos \theta$ is continuous and decreasing in $\theta$ on $\left[\frac{\pi}{2},\pi\right]$. Choose $0<\eta\leq\frac{\pi}{2}$ such that for $t\in A_{1}=(-\pi,\pi)^{2}\cap B^{c}(0,\delta)\cap D^{c} \mbox{ we have }\lvert e\left(it\right)\rvert<1,$ where $D=\left[\pi-\eta,\pi \right)^{2}\cup\left(-\pi,-\pi+\eta\right]\times \left[\pi-\eta,\pi\right)\cup \left(-\pi, -\pi+\eta\right]^{2}\cup \left[\pi-\eta,\pi\right)\times \left(-\pi,-\pi+\eta\right]$. Let us write
\[\mathcal{I}_2(n)=\mathcal{J}_1(n)+\mathcal{J}_2(n)
\] where
\begin{eqnarray*}
\mathcal{J}_1(n)=\log n\int\limits_{A_{1}}{\lvert\psi_{n}\left(\sqrt{\log n}w\right)\rvert dw}
\end{eqnarray*} and
\begin{eqnarray*}
\mathcal{J}_2(n)=\log n\int\limits_{D}{\lvert\psi_{n}\left(\sqrt{\log n}w\right)\rvert dw}\mbox{.}
\end{eqnarray*} It is easy to note that
\[\mathcal{J}_1(n)\leq \log n \int\limits_{\overline{A}_{1}}{\lvert\psi_{n}\left(\sqrt{\log n}w\right)\rvert dw}
\] where $\overline{A}_{1}$ denotes the closure of $A_{1}$. Since $\lvert e\left(iw\right)\rvert<1$ on the compact set $\overline{A}_{1}$, there exists $\alpha\in(0,1)$ such that $\lvert e\left(iw\right)\rvert\leq \alpha$ for all $w\in\overline{A}_{1}$. Therefore, using bounds similar to those in (\ref{Exp1}), we can show that
\[\mathcal{J}_1(n)\longrightarrow 0 \mbox{ as } n \to \infty.
\]
We observe that
\[\mathcal{J}_2(n)\leq 4 \log n\int\limits_{\left[\pi-\eta,\pi\right]^{2}}{\lvert\psi_{n}\left(\sqrt{\log n}w\right)\rvert dw}.\] Hence, it is enough to show that
$\log n\int\limits_{\left[\pi-\eta,\pi\right]^{2}}{\lvert\psi_{n}\left(\sqrt{\log n}w\right)\rvert dw}\longrightarrow 0$
as $n \to \infty$.
For $w \in \left[\pi-\eta,\pi \right]^{2}$ we have $0\leq \Big{\lvert}1+\frac{e(iw)}{j}\Big{\rvert}\leq 1+\frac{\cos (\pi-\eta)}{j}\leq 1$. Therefore
\begin{eqnarray*}
\lvert\psi_{n}(\sqrt{\log n}\,w)\rvert=\displaystyle\frac{1}{n+1}\prod_{j=1}^{n}\Big{\lvert}1+\frac{e(iw)}{j}\Big{\rvert}\leq \frac{1}{n+1}.
\end{eqnarray*} So,
\begin{eqnarray*}
\log n\int\limits_{\left[\pi-\eta,\pi\right]^{2}}{\lvert\psi_{n}\left(\sqrt{\log n}w\right)\rvert dw}\leq \frac{\eta^{2}}{n+1}\log n\longrightarrow 0 \mbox{ as } n \to \infty.
\end{eqnarray*}
\end{proof}
\section{Urns with Colors Indexed by Other Lattices on $\mathbb{R}^{d}$}
\label{Sec:General}
We can further generalize the urn model to color sets indexed by certain countable lattices in $\mbox{${\mathbb R}$}^d$.
Such a model is associated with the corresponding random walk on the lattice. To state the results
rigorously we introduce the following notation.
Let $\{X_{i}\}_{i\geq 1}$ be a sequence of i.i.d. $d$-dimensional random vectors with non-empty, finite support
set $B \subseteq \mathbb{R}^{d}$ and probability mass function $p$.
Consider the countable subset
\[
S^d := \left\{\textstyle \sum_{i=1}^{k}n_{i}b_{i}\colon n_{1},n_{2},\ldots, n_{k} \in \mathbb{N},
b_{1},b_{2},\ldots,b_{k}\in B\right\}
\]
of $\mbox{${\mathbb R}$}^d$ which will index the set of colors.
As earlier, we consider $S_n := X_0 + X_1 + \cdots + X_n$, $n \geq 0$, the random walk starting at $X_0$
and now taking values in $S^d$. The transition matrix for this walk is given by
\[ R := \left(\left( p\left(u - v\right) \right)\right)_{u, v \in S^d}.\]
We say a process $\left(U_n\right)_{n \geq 0}$ is an urn scheme with colors indexed by $S^d$
and replacement matrix $R$ if
starting with $U_0$, a probability distribution on $S^d$,
we define $\left(U_n\right)_{n \geq 0}$ recursively by
\begin{equation}
\label{recurssion-S}
U_{n+1}=U_{n} + \zeta_{n+1} R
\end{equation}
where $\zeta_{n+1} = \left(\zeta_{n+1,v}\right)_{v \in S^d}$ is such that
$\zeta_{n+1,V}=1$ and $\zeta_{n+1,u} = 0$ if $u \neq V$ where $V$ is a random color
chosen from the configuration $U_n$. In other words
\[
U_{n+1}=U_n + R_V
\]
where $R_V$ is the $V^{\text{th}}$ row of the replacement matrix $R$.
We will also term this process $\left(U_n\right)_{n \geq 0}$
\emph{infinite color urn model associated with the random walk $\left\{S_n\right\}_{n \geq 0}$ on $S^d$}.
Naturally when $S^d = \mbox{${\mathbb Z}$}^d$ this is exactly the process which is discussed earlier.
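The recursion above is straightforward to simulate. The sketch below (illustrative only; the support set $B$, the random seed and the horizon are arbitrary choices, not taken from the paper) stores the configuration $U_n$ as a sparse dictionary over $S^d$ and checks the balance property: every row of $R$ sums to $1$, so the total mass after $n$ draws is exactly $1+n$.

```python
import random
from collections import defaultdict

random.seed(0)  # arbitrary seed, for reproducibility

# illustrative support set: planar simple-random-walk increments, p uniform on B
B = [(1, 0), (-1, 0), (0, 1), (0, -1)]
p = {b: 1 / len(B) for b in B}

U = defaultdict(float)
U[(0, 0)] = 1.0  # U_0: all mass on a single color

n_steps = 500
for _ in range(n_steps):
    # draw a random color V from the configuration U_n
    colors, weights = zip(*U.items())
    V = random.choices(colors, weights=weights)[0]
    # add the V-th row of R = ((p(u - v))): mass p(b) at each color V + b
    for b in B:
        U[(V[0] + b[0], V[1] + b[1])] += p[b]

# balanced scheme: total mass grows by exactly one per draw
assert abs(sum(U.values()) - (1 + n_steps)) < 1e-9
```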
We will use the same notation as earlier for the mean, variance-covariance matrix and moment generating function
of the increment $X_1$.
We will still denote by $Z_n$ the $\left(n+1\right)^{\mbox{th}}$ selected color; as earlier,
the expected configuration of the urn at time $n$ is given by the distribution of $Z_n$,
but now on $S^d$.
We first note that Theorem \ref{LRW1} is still valid with exactly the same proof.
This enables us to generalize
Theorem \ref{GRW} and Theorem \ref{ASd} as follows.
\begin{theorem}
\label{GRWg}
Let $\overline{\Lambda}_{n}$ be the probability measure on $\mathbb{R}^{d}$ corresponding to the probability
vector $\frac{1}{n+1}\left(\mathbb{E}[U_{n,v}]\right)_{v \in S^d}$ and let
\[
\overline{\Lambda}_{n}^{cs}(A)
:=\overline{\Lambda}_{n}\left(\sqrt{\log n}A\varSigma^{-1/2}+ {\mathbf \mu}\log n\right), \,\,\,
A \in \mathcal{B}\left(\mathbb{R}^{d}\right).
\]
Then, as $n \to \infty$,
\begin{equation}
\overline{\Lambda}_{n}^{cs}\stackrel{w}{\longrightarrow} \Phi_{d}.
\end{equation}
\end{theorem}
\begin{theorem}
Let $\Lambda_{n}(\omega)\in \mathcal{M}_{1}$ be the random probability measure corresponding to the
random probability vector $\frac{U_{n}(\omega)}{n+1}$.
For $\omega \in \Omega$ and $A \in \mathcal{B}\left(\mathbb{R}^{d}\right)$,
let
\[
\Lambda^{cs}_{n}(\omega)\left(A\right)
=\Lambda_{n}(\omega)\left(\sqrt{\log n}A\varSigma^{-1/2}+ {\mathbf \mu} \log n\right).
\]
Then, as $n \to \infty $,
\begin{equation}
\label{Eq:PrCgsg}
\Lambda_{n}^{cs}\stackrel{p}{\longrightarrow} \Phi_{d} \mbox{ on }\mathcal{M}_{1}.
\end{equation}
\end{theorem}
The proofs of these theorems are exactly analogous to their counterparts for the walks on $\mbox{${\mathbb Z}$}^d$ and
hence are omitted.
We now consider a specific example, namely, the triangular lattice in two dimensions. For this
the support set for the i.i.d. increment vectors is given by
\[
B= \left\{(1,0), (-1,0),\omega, -\omega, \omega^{2}, -\omega^{2} \right\},
\]
where $\omega, \omega^{2}$ are the non-trivial complex cube
roots of unity, regarded as vectors in $\mathbb{R}^{2}$. The law of $X_1$ is uniform on $B$. This gives the
random walk on the triangular lattice in two dimensions.
\begin{figure}
\centering
\includegraphics [scale=0.45] {hexa.png}
\caption{Triangular Lattice}
\end{figure}
The following is an immediate corollary of Theorem \ref{GRWg}.
\begin{cor}
\label{TRW}
For the urn model associated with the random walk on two dimensional triangular lattice if $Z_n$ is the
color of the $\left(n+1\right)^{\mbox{th}}$ selected ball, then as $n \rightarrow \infty$
\begin{eqnarray}
\frac{Z_{n}}{\sqrt{\log n}} \Rightarrow N_{2}\left(0,\frac{1}{2}\mathbb{I}_{2}\right).
\end{eqnarray}
\end{cor}
\begin{proof}
Since $1+\omega+\omega^{2}=0$, it is immediate that ${\mathbf \mu}=0$. Also we know that
$\omega=-\frac{1}{2}+\mathit{i}\frac{\sqrt{3}}{2}$.
Observe that $\mathbb{E}\left[\left(X_{1}^{(1)}\right)^{2}\right]=\frac{2}{6}\left(1+\left(\mathit{Re}\mbox{ }
\omega\right)^{2}+
\left(\mathit{Re}\mbox{ }\omega^{2}\right)^{2}\right)$
where we write $\omega = \mathit{Re}\mbox{ } \omega + i \mathit{Im}\mbox{ } \omega$.
But $\mathit{Re}\mbox{ }\omega=\mathit{Re}\mbox{ }\omega^{2}$,
therefore $\mathbb{E}\left[\left(X_{1}^{(1)}\right)^{2}\right]
=\frac{2}{6}\left(1+2\left(\mathit{Re}\mbox{ }\omega\right)^{2}\right)=\frac{1}{2}$.
Similarly, $\mathit{Im}(\omega)=-\mathit{Im}(\omega^{2})$,
and hence $\mathbb{E}\left[\left(X_{1}^{(2)}\right)^{2}\right]=
\frac{2}{6}\left(\left(\mathit{Im}(\omega)\right)^{2}+\left(\mathit{Im}(\omega^{2})\right)^{2}\right)=\frac{1}{2}$.
Finally,
$\mathbb{E}\left[X_{1}^{(1)}X_{1}^{(2)}\right]=\frac{2}{6}\left(\mathit{Re}\mbox{ }\omega\,\mathit{Im}\mbox{ }\omega+\mathit{Re}\mbox{ }\omega^{2}\,\mathit{Im}\mbox{ }\omega^{2}\right)=0$, again since $\mathit{Re}\mbox{ }\omega=\mathit{Re}\mbox{ }\omega^{2}$ and $\mathit{Im}\mbox{ }\omega=-\mathit{Im}\mbox{ }\omega^{2}$.
So $\varSigma = \frac{1}{2} {\mathbb I}_2$. The rest is just an application of Theorem \ref{GRWg}.
\end{proof}
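The moment computations above are easy to confirm numerically. The following snippet (an independent sanity check, not part of the proof) evaluates $\mu$ and $\varSigma$ for the uniform law on the six triangular-lattice directions and confirms that $\mu=0$ and $\varSigma=\frac{1}{2}\mathbb{I}_{2}$:

```python
import cmath

# the six increments: ±1, ±ω, ±ω² with ω = exp(2πi/3) = -1/2 + i√3/2
w = cmath.exp(2j * cmath.pi / 3)
B = [1, -1, w, -w, w * w, -w * w]
steps = [(z.real, z.imag) for z in B]  # regarded as vectors in R^2

n = len(B)
mu = (sum(x for x, _ in steps) / n, sum(y for _, y in steps) / n)
s11 = sum(x * x for x, _ in steps) / n   # E[(X^(1))^2]
s22 = sum(y * y for _, y in steps) / n   # E[(X^(2))^2]
s12 = sum(x * y for x, y in steps) / n   # E[X^(1) X^(2)]

assert abs(mu[0]) < 1e-12 and abs(mu[1]) < 1e-12          # μ = 0
assert abs(s11 - 0.5) < 1e-12 and abs(s22 - 0.5) < 1e-12  # Σ = ½ I₂
assert abs(s12) < 1e-12
```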
\section*{Appendix}
We present here an elementary but technical result which we have used in the proof of Theorem \ref{ASd}.
It is a generalization of the classical result for Laplace transforms, namely,
Theorem 22.2 of \cite{Bi95}.
\begin{theorem}\label{Rational2}
Let $\left(\nu_{n}\right)_{n\geq 1}$ be a sequence of probability measures on $\left(\mathbb{R}^{d},
\mathcal{B}(\mathbb{R}^{d})\right)$ and let $m_{n}(\cdotp)$
be the corresponding moment generating functions. If there exists $\delta>0$ such that
$m_{n}(\lambda)\longrightarrow e^{\frac{\|\lambda\|^{2}}{2}}\mbox{ as }
n \to \infty$ for every $\lambda \in \left[-\delta, \delta\right]^{d}\cap \mathbb{Q}^{d}$,
then as $n \to \infty$
\begin{equation}
\nu_{n} \stackrel{w}{\longrightarrow } \Phi_{d}.
\label{Equ:nu_n-to-normal}
\end{equation}
\end{theorem}
\begin{proof}
Choose a $\delta' \in \mathbb{Q}$ such that $0 < \delta' < \delta$, and observe that for every $ a > 0$
\[
\nu_{n}\left(\left(\left[-a,a\right]^d\right)^{c}\right) \leq
\sum_{i=1}^d e^{-\delta' a}\left(m_{n}(-\delta' e_i) + m_{n}(\delta' e_i)\right),
\]
where $\left\{e_i\right\}_{i=1}^d$ are the standard unit vectors in $\mathbb{R}^{d}$.
Now from our assumption we get
$m_{n}(\delta' e_i) \to e^{\frac{{\delta'}^2}{2}}$ and
$m_{n}(-\delta' e_i)\to e^{\frac{{\delta'}^2}{2}}$ as $ n \to \infty$
for every $1 \leq i \leq d$. Thus we get
\begin{eqnarray*}
\sup_{n \geq 1} \nu_{n} \left(\left(\left[-a,a\right]^d\right)^{c}\right)
\longrightarrow 0 \mbox{ as } a \to \infty.
\end{eqnarray*}
So the sequence of probability measures $\left(\nu_n\right)_{n \geq 1}$ is tight.
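The tail estimate used above is a two-sided Chernoff-type bound. As an illustration (not part of the proof), it can be checked numerically in the case $d=1$ with every $\nu_{n}$ standard Gaussian, so that $m_{n}(\lambda)=e^{\lambda^{2}/2}$ and $\nu_{n}\left(\left[-a,a\right]^{c}\right)=\operatorname{erfc}\left(a/\sqrt{2}\right)$:

```python
import math

def gauss_tail(a):
    # P(|X| > a) for X ~ N(0, 1)
    return math.erfc(a / math.sqrt(2))

def chernoff_bound(a, delta):
    # e^{-δa} (m(-δ) + m(δ)) with m(λ) = e^{λ²/2}
    return math.exp(-delta * a) * 2 * math.exp(delta ** 2 / 2)

for a in [1.0, 2.0, 3.0, 5.0]:
    for delta in [0.5, 1.0, 2.0]:
        assert gauss_tail(a) <= chernoff_bound(a, delta)
```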
Therefore, for every subsequence $\{n_{k}\}_{k\geq 1}$ there exist a further subsequence
$\{n_{k_{j}}\}_{j\geq 1}$ and a probability measure $\nu$ such that, as $j \to \infty$,
\[ \nu_{n_{k_{j}}} \stackrel{w}{\longrightarrow } \nu. \]
Then, by the dominated convergence theorem,
\[
m_{n_{k_j}}\left(\lambda\right) \longrightarrow m_{\infty}\left(\lambda\right), \,\,\, \forall \,\,
\lambda \in \left(- \delta, \delta\right)^d \cap {\mathbb Q}^d
\]
where $m_{\infty}$ is the moment generating function of $\nu$.
But from our assumption
\[
m_{n_{k_j}}\left(\lambda\right) \rightarrow e^{\frac{\| \lambda \|^2}{2}}, \,\,\, \forall \,\,
\lambda \in \left[-\delta, \delta\right]^d \cap \mathbb{Q}^{d}.
\]
So we conclude that
\[
m_{\infty}\left(\lambda\right) = e^{\frac{\| \lambda \|^2}{2}}, \,\,\, \forall \,\,
\lambda \in \left(- \delta, \delta\right)^d \cap {\mathbb Q}^d.
\]
Since both sides of the above equation are continuous on $\left(- \delta, \delta\right)^d$ and $\mathbb{Q}^{d}$ is dense, we get that
$m_{\infty}\left(\lambda\right) = e^{\frac{\| \lambda \|^2}{2}}$ for every
$\lambda \in \left(- \delta, \delta\right)^d$. But the standard Gaussian distribution
is characterized by the values of
its moment generating function in an open neighborhood of $0$, so
we conclude that every sub-sequential limit is standard Gaussian.
This proves \eqref{Equ:nu_n-to-normal}.
\end{proof}
\section*{Acknowledgement}
The authors are grateful to Krishanu Maulik for various discussions regarding general urn models and for looking through the first
draft of this work.
\bibliographystyle{plain}
% https://arxiv.org/abs/1803.04533
\title{A note on $p$-adic locally analytic functions with application to behavior of the $p$-adic valuations of Stirling numbers}
\begin{abstract}
The aim of this paper is to prove conjectures concerning $p$-adic valuations of Stirling numbers of the second kind $S(n,k)$, $n,k\in\mathbb{N}_+$, stated by Amdeberhan, Manna and Moll and by Berrizbeitia et al., where $p$ is a prime number. The proof is based on elementary facts from $p$-adic analysis.
\end{abstract}
\section{Introduction}
Throughout the paper, we denote the set of non-negative integers (i.e. the positive integers together with $0$) by $\mathbb{N}$, and the set of positive integers (with $0$ excluded) by $\mathbb{N}_+$.
Let $n,k\in\mathbb{N}_+$. The Stirling number of the second kind $S(n,k)$ counts the number of partitions of a set with $n$ elements into exactly $k$ nonempty subsets. Using simple combinatorial reasoning, one can obtain the recurrence
\begin{equation*}
S(n+1,k+1)=S(n,k)+(k+1)S(n,k+1),\quad\quad n,k\in\mathbb{N}_+,
\end{equation*}
with initial conditions $S(1,1)=1$, $S(1,k)=0$ for $k>1$ and $S(n,1)=1$ for $n\in\mathbb{N}_+$. The exact formula for the Stirling numbers of the second kind takes the form
\begin{equation*}
S(n,k)=\frac{1}{k!}\sum_{j=1}^k (-1)^{k-j}{k\choose j}j^n.
\end{equation*}
It is worth noting that the numbers $k!S(n,k)$ also have a combinatorial interpretation. Namely, $k!S(n,k)$ is the number of all surjections of a set with $n$ elements onto a set with $k$ elements.
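The recurrence, the closed formula and the surjection interpretation are easy to cross-check for small parameters (an illustrative check, not part of the paper):

```python
from itertools import product
from math import comb, factorial

def stirling2(n, k):
    # explicit formula S(n,k) = (1/k!) \sum_{j=1}^{k} (-1)^{k-j} C(k,j) j^n
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(1, k + 1)) // factorial(k)

def surjections(n, k):
    # brute-force count of surjections from an n-set onto a k-set
    return sum(1 for f in product(range(k), repeat=n) if len(set(f)) == k)

# recurrence S(n+1, k+1) = S(n, k) + (k+1) S(n, k+1)
for n in range(1, 8):
    for k in range(1, n + 1):
        assert stirling2(n + 1, k + 1) == stirling2(n, k) + (k + 1) * stirling2(n, k + 1)

# k! S(n,k) counts surjections of an n-set onto a k-set
assert factorial(2) * stirling2(4, 2) == surjections(4, 2) == 14
assert factorial(3) * stirling2(5, 3) == surjections(5, 3) == 150
```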
Let us recall that for a prime number $p$ and a nonzero rational number $x$ we define the $p$-adic valuation $v_p(x)$ of the number $x$ as a number $t\in\mathbb{Z}$ such that $x=\frac{a}{b}p^t$, where $a\in\mathbb{Z}$, $b\in\mathbb{N}_+$, $\gcd(a,b)=1$ and $p\nmid ab$. Moreover, we put $v_p(0)=+\infty$. The problem of computation of the $p$-adic valuations (with emphasis on $2$-adic valuations) of Stirling numbers of the second kind and their relatives generated a lot of research, e.g. \cite{Cla, Dav2, Dav3, HZZ, Leng2, Lun1, Wan}. The Stirling numbers of the second kind appear in algebraic topology. Their connections with algebraic topological problems can be found in \cite{BenDav, CraKna, Dav4, DavPot, DavSun, KomLam, Lun2}. Amdeberhan, Manna and Moll in \cite{AMM} and Berrizbeitia, Medina, Moll, Moll and Noble in \cite{BMMMN} stated two conjectures on the behavior of the $p$-adic valuations of the numbers $S(n,k)$. In order to formulate these conjectures we will denote by $[a]_d$ the congruence class of an integer $a$ modulo a positive integer $d$:
\begin{equation*}
[a]_d=\{n\in\mathbb{Z}: n\equiv a\pmod{d}\}.
\end{equation*}
Moreover, for a sequence $(c(n))_{n\in\mathbb{N}_+}$ of rational numbers we will write
\begin{equation*}
c\left([a]_d\right)=\left\{c(n): n\in [a]_d, n\in\mathbb{N}_+\right\}.
\end{equation*}
In particular,
\begin{equation*}
S\left([a]_d,k\right)=\left\{S(n,k): n\in [a]_d, n\in\mathbb{N}_+\right\}
\end{equation*}
for $k\in\mathbb{N}$. Consequently,
\begin{equation*}
v_p\left(c\left([a]_d\right)\right)=\left\{v_p(c(n)): n\in [a]_d, n\in\mathbb{N}_+\right\}
\end{equation*}
and
\begin{equation*}
v_p\left(S\left([a]_d,k\right)\right)=\left\{v_p(S(n,k)): n\in [a]_d, n\in\mathbb{N}_+\right\}
\end{equation*}
for a prime number $p$. Fixing a sequence $(c(n))_{n\in\mathbb{N}_+}$ of rational numbers and a prime number $p$, we say that a congruence class $[a]_d$ is \emph{constant} with $p$-adic valuation $t$ if $v_p\left(c\left(n\right)\right)=t$ for each $n\in [a]_d$. Every congruence class that is not constant will be called \emph{non-constant}. The value $\min v_p\left(c\left([a]_d\right)\right)$ (if it exists) will be called \emph{the least $p$-adic valuation of the class $[a]_d$}. Then the first conjecture, concerning the $2$-adic valuations of Stirling numbers of the second kind, can be presented in the following way.
\begin{con}[Conjecture 1.6. in \cite{AMM}]\label{Con1}
Fix a number $k\in\mathbb{N}_+$. Then there exist $m_0(k)\in\mathbb{N}_+$ and $\mu(k)\in\mathbb{N}$ such that for any integer $m\geq m_0(k)$ there are exactly $\mu(k)$ non-constant congruence classes $[a]_{2^m}$ modulo $2^m$ and each non-constant class modulo $2^m$ splits into one constant and one non-constant class modulo $2^{m+1}$.
\end{con}
The authors of \cite{AMM} proved the above conjecture for $k\leq 5$. Their work was continued by Bennett and Mosteig in \cite{BenMos}, who proved this conjecture in case $5\leq k\leq 20$. Conjecture \ref{Con1} in a slightly different form was proved by Davis in \cite{Dav1} for partial Stirling numbers of the second kind $S_2(n,k)=\frac{1}{k!}\sum_{j=1, 2\nmid j}^k (-1)^{k-j}{k\choose j}j^n$, where $k\leq 36$. It is remarkable that according to Conjecture \ref{Con1} all the numbers $S(2^n,k)$, $n\gg 0$, have the same $2$-adic valuation. The independence of $2$-adic valuation of $S(2^n,k)$ on $n$ was noticed by Lengyel in \cite{Leng}. He conjectured and proved in some cases that
\begin{equation}\label{LengWan}
v_2(k!S(2^n,k))=k-1
\end{equation}
for every non-negative integer $n$ such that $2^n\geq k$. The above equality in full generality was proved by De Wannemacker in \cite{Wan}.
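Identity (\ref{LengWan}) is easy to verify computationally for small parameters. The sketch below (an illustration, not part of the argument) computes $S(n,k)$ exactly via the recurrence from the introduction and checks that $v_{2}(k!\,S(2^{n},k))=k-1$ whenever $2^{n}\geq k$:

```python
from math import factorial

def stirling2(n, k):
    # S(n, k) via the recurrence S(m, j) = S(m-1, j-1) + j S(m-1, j), exact integers
    row = [1] + [0] * k           # row m = 0: S(0,0) = 1, S(0,j) = 0 for j >= 1
    for m in range(1, n + 1):
        new = [0] * (k + 1)
        for j in range(1, k + 1):
            new[j] = row[j - 1] + j * row[j]
        row = new
    return row[k]

def v2(x):
    # 2-adic valuation of a positive integer
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

# De Wannemacker's theorem: v_2(k! S(2^n, k)) = k - 1 whenever 2^n >= k
for k in range(2, 9):
    for n in range(0, 7):
        if 2 ** n >= k:
            assert v2(factorial(k) * stirling2(2 ** n, k)) == k - 1
```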
For an odd prime number $p$ and positive integer $m$ we put
$$L_{p^m}=(p-1)p^{\lceil\log_p k\rceil +m-2}.$$
Then the sequence $(S(n,k)\pmod{p^m})_{n\in\mathbb{N}_{\geq k}}$ is ultimately periodic with period $L_{p^m}$ (see \cite{BeckRio, Car}). Now we are ready to present the second conjecture, which generalizes the first one to an arbitrary prime number $p$.
\begin{con}[Conjecture 5.4. in \cite{BMMMN}]\label{Con2}
Fix a positive integer $k$ and a prime number $p$ such that $k<p$. Then there exist $m_0(k)\in\mathbb{N}_+$ and $\mu(k)\in\mathbb{N}$ such that for any integer $m\geq m_0(k)$ there are exactly $\mu(k)$ non-constant congruence classes $[a]_{L_{p^m}}$ modulo $L_{p^m}$ and each non-constant class modulo $L_{p^m}$ splits into $p-1$ constant and one non-constant class modulo $L_{p^{m+1}}$.
\end{con}
The above conjecture implies that, under certain conditions, for any $a\in\mathbb{N}$ and sufficiently large $n\in\mathbb{N}$ the $p$-adic valuation of the number $S(ap^n(p-1),k)$ is independent of $a$ and $n$. This fact was proved by Gessel and Lengyel in \cite{GesLeng} provided that $\frac{k}{p}$ is not an odd integer. More precisely, they gave the formula
\begin{equation}\label{GesLeng}
v_p(k!S(ap^n(p-1),k))=\left\lfloor\frac{k-1}{p-1}\right\rfloor+\tau_p(k),
\end{equation}
where $\tau_p(k)\in\mathbb{N}$. It is not clear in general what values $\tau_p(k)$ can take; an explicit formula for $\tau_p(k)$ is given for $p\in\{3,5\}$. Moreover, $\tau_p(k)=0$ for $k$ being a multiple of $p-1$. As we can see, formula (\ref{GesLeng}) is a generalization of (\ref{LengWan}).
We will say that a congruence class $[a]_d$ of an integer $a$ modulo a positive integer $d$ is \emph{almost constant} with $p$-adic valuation $t$ if $v_p(S(n,k))=t$ for almost all positive integers $n\in [a]_d$. The congruence class which is not almost constant will be called \emph{essentially non-constant}. If the limit
$$\liminf_{n\rightarrow +\infty, n\in [a]_d} v_p(c_n)=:\liminf v_p(S([a]_d,k))$$
exists then it will be called \emph{the ultimate minimal $p$-adic valuation of the class $[a]_d$}. Now we are ready to state the theorem describing the behavior of the $p$-adic valuations of the Stirling numbers of the second kind.
\begin{mainthm}\label{thm1}
Fix a positive integer $k$ and a prime number $p$. Then there exist $m_0(k)\in\mathbb{N}_+$ and $\mu(k)\in\mathbb{N}$ such that for any integer $m\geq m_0(k)$ there are exactly $\mu(k)$ essentially non-constant congruence classes $[a]_{p^{m-1}(p-1)}$ modulo $p^{m-1}(p-1)$. Moreover, each essentially non-constant class $[a]_{p^{m-1}(p-1)}$ modulo $p^{m-1}(p-1)$ splits into $p-1$ almost constant classes $[a_1]_{p^{m}(p-1)}$, ..., $[a_{p-1}]_{p^{m}(p-1)}$ modulo $p^{m}(p-1)$ such that
$$v_p(S([a_t]_{p^{m}(p-1)}))=\{\min v_p(S([a]_{p^{m-1}(p-1)}))\},\quad t\in\{1,...,p-1\},$$
and one essentially non-constant class $[a_0]_{p^{m}(p-1)}$ modulo $p^{m}(p-1)$ with
\begin{align*}
0<\min v_p(S([a_0]_{p^{m}(p-1)}))-\min v_p(S([a]_{p^{m-1}(p-1)}))<k-\left\lfloor\frac{k}{p}\right\rfloor.
\end{align*}
More precisely, for each essentially non-constant congruence class\linebreak $[a]_{p^{m_0-1}(p-1)}$ modulo $p^{m_0-1}(p-1)$ there are $\alpha\in\mathbb{Z}$ and $l\in\left\{1,...,k-\left\lfloor\frac{k}{p}\right\rfloor-1\right\}$ such that for each $m\geq m_0$ the ultimate minimal $p$-adic valuation of the unique essentially non-constant class modulo $p^{m-1}(p-1)$ contained in\linebreak $[a]_{p^{m_0-1}(p-1)}$ is equal to $\alpha+l(m-m_0)$.
In the case $p>k$ we can omit the words ``almost'', ``essentially'' and ``ultimate'' in the above statement. Then, for each non-constant congruence class $[a]_{p^{m_0-1}(p-1)}$ modulo $p^{m_0-1}(p-1)$ there are $\beta\in\mathbb{Z}$, $l\in\left\{1,...,k-\left\lfloor\frac{k}{p}\right\rfloor-1\right\}$ and $x_0\in\mathbb{Z}_p$ such that $v_p(S(n,k))=\beta+lv_p(n-x_0)$ for $n\in [a]_{p^{m_0-1}(p-1)}$.
\end{mainthm}
\begin{rem}
{\rm In the cases $k=1$ or $k=p=2$ one can straightforwardly check that $v_p(S(n,k))=0$ for all $n\in\mathbb{N}_+$. Hence the statement of Theorem \ref{thm1} remains true in these cases, although the set $\left\{t+1,...,t+k-\left\lfloor\frac{k}{p}\right\rfloor-1\right\}$ is empty. In fact, we then have no non-constant classes with respect to the sequence $(S(n,k))_{n\in\mathbb{N}_+}$.}
\end{rem}
\section{Elementary $p$-adic analytic approach}\label{sec1}
In order to prove Theorem \ref{thm1} we will use some basic facts about the field of $p$-adic numbers and $p$-adic locally analytic functions.
First, let us recall the construction of the field of $p$-adic numbers. For every rational number $x$ we define its $p$-adic norm $|x|_p$ by the formula
\begin{equation*}
|x|_p =
\begin{cases}
p^{-v_p(x)}, & \mbox{when } x\neq 0
\\ 0, & \mbox{when } x=0
\end{cases}.
\end{equation*}
Since for all rational numbers $x,y$ we have $|x+y|_p \leq \max\{ |x|_p, |y|_p\}$, the $p$-adic norm gives a metric space structure on $\mathbb{Q}$. Namely, the distance between rational numbers $x,y$ is equal to $d_p(x,y) = |x-y|_p$. The field $\mathbb{Q}$ equipped with the $p$-adic metric $d_p$ is not a complete metric space. The completion of $\mathbb{Q}$ with respect to this metric has the structure of a field; this field is called the field of $p$-adic numbers and denoted by $\mathbb{Q}_p$. We extend the $p$-adic valuation and the $p$-adic norm to $\mathbb{Q}_p$ in the following way: $v_p(x) = \lim_{n\rightarrow +\infty} v_p(x_n)$, $|x|_p = \lim_{n\rightarrow +\infty} |x_n|_p$, where $x\in\mathbb{Q}_p$, $(x_n)_{n\in\mathbb{N}} \in \mathbb{Q}^{\mathbb{N}}$ and $x = \lim_{n\rightarrow +\infty} x_n$. The values $v_p(x)$ and $|x|_p$ do not depend on the choice of the sequence $(x_n)_{n\in\mathbb{N}}$, thus they are well defined. For the proofs of these facts one can consult \cite{Mahl}.
We define the ring of $p$-adic integers $\mathbb{Z}_p$ as the set of all $p$-adic numbers with non-negative $p$-adic valuation. Note that $\mathbb{Z}_p$ is the completion of $\mathbb{Z}$ as a space with the $p$-adic metric. In the sequel we will use the fact that $\mathbb{Z}_p$ is a compact metric space.
For a prime number $p$, an integer $k$ and $p$-adic numbers $x,y$, the expression $x \equiv y \pmod{p^k}$ means $v_p(x-y) \geq k$.
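These definitions are easy to experiment with. The sketch below (our helper names) implements $v_p$ and $|\cdot|_p$ for rationals and checks the strong triangle inequality on a grid of samples:

```python
from fractions import Fraction
from itertools import product

def vp(x, p):
    """p-adic valuation of a nonzero rational x (an integer, possibly negative)."""
    x = Fraction(x)
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def norm_p(x, p):
    """p-adic norm |x|_p = p^(-v_p(x)), with |0|_p = 0."""
    x = Fraction(x)
    return Fraction(0) if x == 0 else Fraction(1, p) ** vp(x, p)

p = 5
assert vp(Fraction(50, 27), p) == 2 and vp(Fraction(1, 125), p) == -3
# the strong triangle (ultrametric) inequality |x+y|_p <= max(|x|_p, |y|_p):
samples = [Fraction(a, b) for a, b in product(range(1, 8), repeat=2)]
for x, y in product(samples, repeat=2):
    assert norm_p(x + y, p) <= max(norm_p(x, p), norm_p(y, p))
# x == y (mod p^k) in the sense of the text means v_p(x - y) >= k:
assert vp(Fraction(7) - Fraction(132), p) >= 3   # 132 - 7 = 125 = 5^3
```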
By a $p$-adic continuous function we mean a function $f:S\rightarrow\mathbb{Q}_p$, defined on some subset $S$ of $\mathbb{Q}_p$, which is continuous with respect to the $p$-adic metric. Assuming that $S$ is an open subset of $\mathbb{Q}_p$, we say that $f$ is differentiable at a point $x_0\in S$ if the limit $\lim_{x\rightarrow x_0}\frac{f(x)-f(x_0)}{x-x_0}$ exists. In this situation the limit is called the derivative of $f$ at the point $x_0$ and denoted by $f'(x_0)$. If $f$ is differentiable at each point of its domain, then we say that $f$ is a $p$-adic differentiable function and the mapping $S\ni x\mapsto f'(x)\in\mathbb{Q}_p$ is the derivative of $f$. Recursively we define the higher order derivatives of $f$ as follows: $f^{(1)}(x)=f'(x)$, and if $f^{(n)}$ is a $p$-adic differentiable function for $n\in\mathbb{N}_+$ then we define $f^{(n+1)}(x)=\left(f^{(n)}\right)'(x)$. Finally, still assuming $S$ to be an open subset of $\mathbb{Q}_p$, we say that $f:S\rightarrow\mathbb{Q}_p$ is a $p$-adic locally analytic function if for each $x_0\in S$ there exist a neighbourhood $U\subset S$ of $x_0$ and a sequence $(a_n)_{n\in\mathbb{N}}$ of $p$-adic numbers such that for any $x\in U$ the series $\sum_{n=0}^{+\infty} a_n(x-x_0)^n$ converges and its sum equals $f(x)$. A $p$-adic locally analytic function $f$ is continuous, arbitrarily many times differentiable, and $f^{(n)}(x_0)=n!a_n$ for $n\in\mathbb{N}$ (see \cite{Mahl}).
If $f:\mathbb{Z}_p\rightarrow\mathbb{Q}_p$ is a $p$-adic locally analytic function then we can easily obtain a description of the $p$-adic valuations of numbers $f(n)$, $n\in\mathbb{N}$.
\begin{thm}\label{thm2}
Let $f:\mathbb{Z}_p\rightarrow\mathbb{Q}_p$ be a $p$-adic locally analytic function. Then there exist $m_0\in\mathbb{N}_+$ and $\mu\in\mathbb{N}$ such that for any integer $m\geq m_0$ there are exactly $\mu$ non-constant congruence classes $[a]_{p^m}$ modulo $p^m$ with respect to the sequence $(f(n))_{n\in\mathbb{N}_+}$ and each non-constant class modulo $p^m$ splits into $p-1$ constant classes $[a_1]_{p^{m+1}}$, ..., $[a_{p-1}]_{p^{m+1}}$ modulo $p^{m+1}$ with $p$-adic valuation equal to $\min v_p(f([a]_{p^m}))$ and one non-constant class $[a_0]_{p^{m+1}}$ modulo $p^{m+1}$ with
$$\min v_p(f([a_0]_{p^{m+1}}))>\min v_p(f([a]_{p^m})).$$
More precisely, for each non-constant congruence class $[a]_{p^{m_0}}$ modulo $p^{m_0}$ there are $\beta\in\mathbb{Z}$, $l\in\mathbb{N}_+$ and $x_0\in\mathbb{Z}_p$ such that $v_p(f(n))=\beta+lv_p(n-x_0)$ for $n\in [a]_{p^{m_0}}$. In particular, for each non-constant congruence class $[a]_{p^{m_0}}$ modulo $p^{m_0}$ there are $\alpha\in\mathbb{Z}$ and $l\in\mathbb{N}_+$ such that for each $m\geq m_0$ the minimal $p$-adic valuation of the unique non-constant class modulo $p^m$ contained in $[a]_{p^{m_0}}$ is equal to $\alpha+l(m-m_0)$.
\end{thm}
\begin{proof}
Choose an arbitrary $x_0\in\mathbb{Z}_p$. We consider three cases.
\noindent\textbf{Case 1.} If $f(x_0)\neq 0$ then $v_p(f(x_0))<+\infty$. By continuity of $f$ there exists a neighbourhood $U_{x_0}\subset\mathbb{Z}_p$ of $x_0$ such that $v_p(f(x)-f(x_0))>v_p(f(x_0))$ for each $x\in U_{x_0}$. As a consequence, we have $v_p(f(x))=v_p(f(x_0))$ for $x\in U_{x_0}$.
\noindent\textbf{Case 2.} There exists a neighbourhood $U_{x_0}\subset\mathbb{Z}_p$ of $x_0$ such that $f(x)=0$ for each $x\in U_{x_0}$ and then $v_p(f(x))=+\infty$.
\noindent\textbf{Case 3.} We have $f(x_0)=0$ and $f$ does not vanish identically on any neighbourhood of $x_0$. Then for some neighbourhood $U_{x_0}\subset\mathbb{Z}_p$ of $x_0$ we can write $f(x)=\sum_{n=l}^{+\infty} a_n(x-x_0)^n$ for $x\in U_{x_0}$, where $l>0$ and $a_l\neq 0$. Since $$\lim_{x\rightarrow x_0}\sum_{n=l}^{+\infty} a_n(x-x_0)^{n-l}=a_l,$$ we may assume without loss of generality that the neighbourhood $U_{x_0}$ is so small that $v_p\left(\sum_{n=l}^{+\infty} a_n(x-x_0)^{n-l}\right)=v_p(a_l)$ for each $x\in U_{x_0}$. Then
\begin{align*}
v_p(f(x))&=v_p\left(\sum_{n=l}^{+\infty} a_n(x-x_0)^n\right)=v_p\left(\sum_{n=l}^{+\infty} a_n(x-x_0)^{n-l}\right)+v_p\left((x-x_0)^l\right)\\
&=v_p(a_l)+lv_p(x-x_0).
\end{align*}
For each $x_0\in\mathbb{Z}_p$ we take $U_{x_0}$ specified in the above cases. We may choose $U_{x_0}$ to be a ball in the $p$-adic metric:
\begin{align*}
B(x_0,p^{-m})&=\{x\in\mathbb{Z}_p: |x-x_0|_p\leq p^{-m}\}=\{x\in\mathbb{Z}_p: v_p(x-x_0)\geq m\}\\
&=\{x\in\mathbb{Z}_p: x\equiv x_0\pmod{p^m}\},
\end{align*}
where $m\in\mathbb{N}$. Let us notice that $B(x_0,p^{-m})\cap\mathbb{Z}=[x_1]_{p^m}$ for any integer $x_1$ satisfying $x_1\equiv x_0\pmod{p^m}$. The family $\{U_{x_0}\}_{x_0\in\mathbb{Z}_p}$ is an open cover of $\mathbb{Z}_p$. By compactness of $\mathbb{Z}_p$ we may choose $x_1,...,x_r\in\mathbb{Z}_p$ such that $\{U_{x_j}\}_{j=1}^r$ is a cover of $\mathbb{Z}_p$. We write $U_{x_j}=B(x_j,p^{-m_j})$ for $j\in\{1,...,r\}$. Any two balls in the $p$-adic metric are either disjoint or one contains the other. Hence we may assume that the finite subcover $(U_{x_j})_{j=1}^r$ consists of pairwise disjoint balls. We put $m_0=\max_{1\leq j\leq r}m_j$. Let us consider the balls $B(i,p^{-m_0})$, $i\in\{0,...,p^{m_0}-1\}$. These balls form a partition of $\mathbb{Z}_p$. For each fixed $i\in\{0,...,p^{m_0}-1\}$ there exists $j\in\{1,...,r\}$ such that $B(i,p^{-m_0})\subset B(x_j,p^{-m_j})$. If $x_j$ is as in Case 1 or Case 2 of the above reasoning then $\#\{v_p(f(x)): x\in B(i,p^{-m_0})\}=1$ and, as a result, the congruence class $[i]_{p^{m_0}}$ is constant.
If $x_j$ is as in Case 3 and $v_p(i-x_j)<m_0$ then for $x\in B(i,p^{-m_0})$ we have $f(x)=\sum_{n=l}^{+\infty} a_n(x-x_j)^n$ and
\begin{align*}
v_p(f(x))&=v_p(a_l)+lv_p(x-x_j)=v_p(a_l)+lv_p((x-i)+(i-x_j))\\
&=v_p(a_l)+lv_p(i-x_j).
\end{align*}
Thus the congruence class $[i]_{p^{m_0}}$ is constant. On the other hand, if $x_j$ is as in Case 3 and $v_p(i-x_j)\geq m_0$ then $v_p(f(x))=v_p(a_l)+lv_p(x-x_j)$ and we easily conclude that for each $m\geq m_0$ there exists exactly one non-constant class $[h]_{p^m}$ (namely the class of $h\equiv x_j\pmod{p^m}$) contained in $[i]_{p^{m_0}}$. This class splits into $p-1$ constant classes modulo $p^{m+1}$ and one non-constant class modulo $p^{m+1}$. Each non-constant class modulo $p^m$, $m\geq m_0$, corresponds to one $j\in\{1,...,r\}$ such that $x_j$ is as in Case 3. Hence the number $\mu$ postulated in the statement of our theorem is the number of indices $j$ for which $x_j$ is as in Case 3. In other words, $\mu$ is the number of the isolated zeros of the function $f$.
\end{proof}
The reasoning in the proof of Theorem \ref{thm2} recovers the fact that the set of zeros of a $p$-adic locally analytic function $f:\mathbb{Z}_p\rightarrow\mathbb{Q}_p$ is a union of a finite set and a clopen subset of $\mathbb{Z}_p$. This fact is used in a proof of the basic version of the Skolem--Mahler--Lech theorem: if $(c(n))_{n\in\mathbb{N}_+}$ is a sequence of rational numbers given by a linear recurrence $c(n+k)=\alpha_0c(n)+...+\alpha_{k-1}c(n+k-1)$ for some $\alpha_0,...,\alpha_{k-1}\in\mathbb{Q}$, then the set $\{n\in\mathbb{N}_+: c(n)=0\}$ is a union of a finite set and finitely many arithmetic progressions (see \cite{Sko}).
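As a toy illustration of the Skolem--Mahler--Lech phenomenon (the choice of sequence is ours, not from \cite{Sko}), take $c(n)=2^n-(-2)^n$, which satisfies $c(n+2)=4c(n)$; its zero set is exactly the arithmetic progression of even indices:

```python
def c(n):
    # c(n) = 2^n - (-2)^n satisfies the linear recurrence c(n+2) = 4*c(n)
    # with rational (here integer) coefficients.
    return 2**n - (-2)**n

# The recurrence holds...
assert all(c(n + 2) == 4 * c(n) for n in range(1, 50))
# ...and the zero set {n : c(n) = 0} is exactly the even indices,
# a single arithmetic progression (and an empty finite exceptional set).
zeros = {n for n in range(1, 200) if c(n) == 0}
assert zeros == set(range(2, 200, 2))
```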
The next fact that we utilize to prove the main result of the paper is a generalization of the lifting-the-exponent lemma. This generalization describes the behavior of the $p$-adic valuations of a linear combination of exponential functions. Observe that if $p$ is an odd prime and $a$ is a $p$-adic integer congruent to $1$ modulo $p$, i.e., $v_p(a-1)\geq 1$, then writing $a=1+pb$ for some $b\in\mathbb{Z}_p$ we can expand the expression $a^x$ for $x\in\mathbb{N}$ as follows:
\begin{equation}\label{exp}
a^x=(1+pb)^x=\sum_{j=0}^x {x\choose j}p^jb^j=\sum_{j=0}^{+\infty} b^j(x)_j\frac{p^j}{j!},
\end{equation}
where $(x)_j=x(x-1)\cdot...\cdot(x-j+1)$ for $j\in\mathbb{N}_+$ and $(x)_0=1$ denotes the Pochhammer symbol (falling factorial). The series on the right-hand side of equation (\ref{exp}) converges for every $x\in\mathbb{Z}_p$. Hence we can extend the exponential function $a^x$ to $x\in\mathbb{Z}_p$ by defining it as $\sum_{j=0}^{+\infty} b^j(x)_j\frac{p^j}{j!}$. The function $a^x$, $x\in\mathbb{Z}_p$, so defined is a $p$-adic locally analytic function (see \cite{Mahl}). In particular, this function is continuous. This fact combined with the identity $a^{x+y}=a^xa^y$ for $x,y\in\mathbb{N}$ and the density of $\mathbb{N}$ in $\mathbb{Z}_p$ implies the identity $a^{x+y}=a^xa^y$ for $x,y\in\mathbb{Z}_p$.
The above preparation remains valid for $p=2$ provided that we additionally assume $v_2(b)\geq 1$, i.e., $a\equiv 1\pmod{4}$.
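For a non-negative integer $x$ the series (\ref{exp}) terminates, since $(x)_j=0$ for $j>x$. The following sketch (our function name) evaluates it term by term in exact rational arithmetic and compares with ordinary powers:

```python
from fractions import Fraction

def a_to_x(a, x, p):
    """Evaluate a^x for a = 1 + p*b via sum_j b^j (x)_j p^j / j!.
    For a non-negative integer x the falling factorial (x)_j vanishes
    once j > x, so the series terminates and the value is exact."""
    assert (a - 1) % p == 0
    b = Fraction(a - 1, p)
    total, term, j = Fraction(0), Fraction(1), 0   # term = b^j (x)_j p^j / j!
    while True:
        total += term
        if j >= x:
            break
        # term_{j+1} = term_j * b * p * (x - j) / (j + 1)
        term *= b * p * Fraction(x - j, j + 1)
        j += 1
    return total

assert a_to_x(7, 5, 3) == 7**5      # a = 1 + 3*2, p = 3
assert a_to_x(11, 4, 5) == 11**4    # a = 1 + 5*2, p = 5
assert a_to_x(9, 0, 2) == 1
```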
Assuming that $p$ is a prime number and $a$ is a $p$-adic integer of the form $a=1+pb$ for some $b\in\mathbb{Z}_p$, we define the $p$-adic logarithm of the number $a$ as follows:
\begin{align*}
\log_p a=\sum_{j=1}^{+\infty} \frac{(-1)^{j-1}}{j}(a-1)^j=\sum_{j=1}^{+\infty} \frac{(-1)^{j-1}b^jp^j}{j}.
\end{align*}
One can check that the series defining the $p$-adic logarithm converges. Hence the function $\log_p x$ is a $p$-adic locally analytic function defined on the closed ball $B(1,p^{-1})$. We have $\log_p a_1+\log_p a_2=\log_p a_1a_2$ for any $a_1,a_2\in B(1,p^{-1})$, and the derivative of the function $a^x$ (of the variable $x\in\mathbb{Z}_p$) is equal to $a^x\log_p a$ for $a\in B(1,p^{-1})$ (see \cite{Mahl}). Moreover, $\log_p a=0$ if and only if $a$ is a root of unity (see \cite{Schi}). Since $a\equiv 1\pmod{p}$, we conclude that $\log_p a=0$ only for $a=1$ or, when $p=2$, for $a=-1$.
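The additivity of the $p$-adic logarithm can be tested numerically. In the sketch below (our truncation length) the $j$-th term of the series has valuation at least $j-v_p(j)$, so the tail beyond the cut-off is divisible by a high power of $p$ and additivity can be checked modulo $5^{20}$:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a rational; returns a large sentinel for 0."""
    x = Fraction(x)
    if x == 0:
        return 10**9
    v, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def log_p(a, p, terms=60):
    """Truncation of log_p a = sum_{j>=1} (-1)^(j-1) (a-1)^j / j
    for a congruent to 1 modulo p."""
    u = Fraction(a - 1)
    return sum(Fraction((-1)**(j - 1), j) * u**j for j in range(1, terms + 1))

p = 5
# additivity log_p(a1) + log_p(a2) = log_p(a1*a2), checked modulo 5^20:
assert vp(log_p(6, p) + log_p(11, p) - log_p(66, p), p) >= 20
# log_5(1) = 0 exactly:
assert log_p(1, p) == 0
```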
Now we are ready to state and prove the announced generalization of the lifting-the-exponent lemma.
\begin{thm}\label{thm3}
Let $p$ be a prime number and let $k\in\mathbb{N}_+$, $k\geq 2$. Assume that $a_1,...,a_k$ are pairwise distinct $p$-adic integers congruent to $1$ modulo $p$. In the case $p=2$ we additionally assume that $\frac{a_j}{a_i}\neq -1$ for all $i,j\in\{1,...,k\}$. Let $c_1,...,c_k\in\mathbb{Q}_p\backslash\{0\}$. Assume that a number $x_0\in\mathbb{Z}_p$ is a zero of the function $f(x)=\sum_{i=1}^k c_ia_i^x$, $x\in\mathbb{Z}_p$. Then there exist an integer $\alpha$, a positive integer $l<k$ and a non-negative integer $m$ such that for any $t\in\mathbb{Z}_p$ congruent to $x_0$ modulo $p^m$ we have $v_p(f(t))=\alpha+lv_p(t-x_0)$. In particular, $x_0$ is an isolated zero of the function $f$.
\end{thm}
\begin{proof}
From the proof of Theorem \ref{thm2} we know that there exist $\alpha\in\mathbb{Z}$, $l\in\mathbb{N}\cup\{+\infty\}$ and $m\in\mathbb{N}$ such that $v_p(f(t))=\alpha+lv_p(t-x_0)$ for any $t\in B(x_0,p^{-m})$, where $l=\inf\{n\in\mathbb{N}: f^{(n)}(x_0)\neq 0\}$ (with the convention $\inf\varnothing=+\infty$). Hence it remains to prove that $l<k$. If we assume the contrary, then $f^{(n)}(x_0)=0$ for any $n\in\{0,...,k-1\}$. On the other hand,
\begin{align*}
f^{(n)}(x_0)=\sum_{i=1}^k c_ia_i^{x_0}(\log_p a_i)^n,\quad n\in\mathbb{N}.
\end{align*}
Then the $k$-tuple $(c_ia_i^{x_0})_{i=1}^k$ is a non-zero solution of the system of linear equations $\sum_{i=1}^k (\log_p a_i)^nX_i=0$, $0\leq n\leq k-1$, with unknowns $X_i$, $1\leq i\leq k$. However, the matrix of this system is the Vandermonde matrix of the values $\log_p a_i$, $1\leq i\leq k$. Its determinant equals $\prod_{1\leq i<j\leq k} (\log_p a_j-\log_p a_i)$, and this product is non-zero: since the $a_i$ are pairwise distinct and no ratio $\frac{a_j}{a_i}$ equals $-1$, the logarithms $\log_p a_i$, $1\leq i\leq k$, are pairwise distinct. Thus the only solution of this system is the zero one, a contradiction. This proves the inequality $l<k$.
\end{proof}
\begin{rem}
{\rm It is worth noting that in general the value $l$ in the statement of Theorem \ref{thm3} can be greater than $1$. Let us take any two distinct integers $a$, $b$ congruent to $1$ modulo $p$ if $p>2$, or congruent to $1$ modulo $4$ if $p=2$. Then the function $f(x)=(a^2)^x+(b^2)^x-2(ab)^x$ vanishes at $0$ and $f'(0)=\log_p a^2 +\log_p b^2 -2\log_p ab=0$. Thus $l>1$. On the other hand, by Theorem \ref{thm3} we have $l<3$. As a result, $l=2$ and finally there exist $\alpha\in\mathbb{Z}$ and $m\in\mathbb{N}$ such that $v_p(f(x))=\alpha+2v_p(x)$ for $x\in B(0,p^{-m})$.
In fact, if we fix $k$, then $l$ may attain every value from the set $\{0,...,k-1\}$. It suffices to choose $c_1,...,c_k\in\mathbb{Q}_p$ such that $\sum_{i=1}^k (\log_p a_i)^nc_i=0$, $0\leq n\leq l-1$, and $\sum_{i=1}^k (\log_p a_i)^lc_i\neq 0$. We are able to choose such $c_1,...,c_k$ since, as we noted in the proof of Theorem \ref{thm3}, the determinant of the system of equations $\sum_{i=1}^k (\log_p a_i)^nX_i=0$, $0\leq n\leq k-1$, is non-zero. Then for some integer $\alpha$ we have $v_p(f(x))=\alpha+lv_p(x)$ for any $x$ sufficiently close to $0$ with respect to the $p$-adic metric.
Unfortunately, for fixed $k>l>2$ we do not know whether $a_1,...,a_k$ can be chosen to be integers and $c_1,...,c_k$ to be rational numbers. Hence, in the case $a_1,...,a_k\in\mathbb{Z}$, $c_1,...,c_k\in\mathbb{Q}$, we know nothing more about the value $l$ than what Theorem \ref{thm3} provides.}
\end{rem}
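The example in the remark is easy to test numerically. With the (arbitrary) choices $p=5$, $a=6$, $b=11$ one observes $v_5(f(n))=2+2v_5(n)$ for small $n$, in agreement with $l=2$:

```python
def v5(x):
    """5-adic valuation of a nonzero integer."""
    v = 0
    while x % 5 == 0:
        x //= 5
        v += 1
    return v

a, b = 6, 11                      # both congruent to 1 modulo 5
def f(n):
    return (a * a)**n + (b * b)**n - 2 * (a * b)**n

assert f(0) == 0                  # x_0 = 0 is a zero of f
assert v5(f(1)) == 2              # v_5(1) = 0, so alpha = 2 here
assert v5(f(2)) == 2              # v_5(2) = 0
assert v5(f(5)) == 4              # v_5(5) = 1: the valuation jumps by l = 2
```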
\section{Proof of Theorem \ref{thm1}}
Let us denote, as in \cite{Cla}, $$T_p(n,k)=\sum_{j=1, p\nmid j}^k (-1)^{k-j}{k\choose j}j^n.$$ By the preparation before Theorem \ref{thm3}, if $j\in\mathbb{Z}$ is not divisible by $p$ then for $a\in\{0,...,p-2\}$ the function $\mathbb{N}_+\ni n\mapsto j^{a+n(p-1)}\in\mathbb{Z}$ extends to a $p$-adic locally analytic function defined on $\mathbb{Z}_p$. Hence for each $a\in\{0,...,p-2\}$ the function $\mathbb{N}_+\ni n\mapsto T_p(a+n(p-1),k)\in\mathbb{N}$ is the restriction to $\mathbb{N}_+$ of a $p$-adic locally analytic function $f_{a,k}:\mathbb{Z}_p\rightarrow\mathbb{Z}_p$. We apply Theorem \ref{thm2} and Theorem \ref{thm3} to each function $f_{a,k}$ and obtain $m_{0,a}$ and $\mu_a$ as specified in the statement of Theorem \ref{thm2}. We put $$m_0=m_0(k)=1+\max_{0\leq a\leq p-2} m_{0,a}$$ and $$\mu=\mu(k)=\sum_{a=0}^{p-2} \mu_a.$$ Then the statement of Theorem \ref{thm1} holds for the sequence $(T_p(n,k))_{n\in\mathbb{N}_+}$ with the words ``almost'' and ``essentially'' omitted.
If $p>k$ then $S(n,k)=\frac{1}{k!}T_p(n,k)$ and the statement holds with the words ``almost'' and ``essentially'' omitted. Let us fix a non-constant class $[a]_{p^{m_0-1}(p-1)}$ modulo $p^{m_0-1}(p-1)$. Each member of this class can be written in the form $n=a_0+\tilde{n}(p-1)$, where $a_0=a\bmod{p-1}$. Then, by Theorem \ref{thm2} and Theorem \ref{thm3}, there exist $\beta\in\mathbb{Z}$, $l\in\{1,...,k-1\}$ and $\tilde{x}_0\in\mathbb{Z}_p$ such that
\begin{align*}
v_p(S(n,k))=v_p(S(a_0+\tilde{n}(p-1),k))=v_p(f_{a_0,k}(\tilde{n}))=\beta+lv_p(\tilde{n}-\tilde{x}_0)
\end{align*}
for $n\in [a]_{p^{m_0-1}(p-1)}$. Putting $x_0=a_0+\tilde{x}_0(p-1)$, since $v_p(n-x_0)=v_p(\tilde{n}-\tilde{x}_0)$, we get
\begin{align*}
v_p(S(n,k))=\beta+lv_p(n-x_0)
\end{align*}
for $n\in [a]_{p^{m_0-1}(p-1)}$.
In general
\begin{align}\label{sum}
S(n,k)=\frac{1}{k!}\left(T_p(n,k)+\sum_{0<j\leq k, p\mid j}(-1)^{k-j}{k\choose j}j^n\right).
\end{align}
Certainly, $v_p\left(\sum_{0<j\leq k, p\mid j}(-1)^{k-j}{k\choose j}j^n\right)\geq n$. Hence, if $[a]_{(p-1)p^{m-1}}$ is a constant class with respect to $(T_p(n,k))_{n\in\mathbb{N}_+}$ with the $p$-adic valuation $t$ then, by (\ref{sum}), $v_p(S(n,k))=t-v_p(k!)$ for each $n\in [a]_{(p-1)p^{m-1}}$ satisfying $n>t$. On the other hand, if $[a]_{(p-1)p^{m-1}}$ is a non-constant class with respect to $(T_p(n,k))_{n\in\mathbb{N}_+}$ then it contains infinitely many constant classes with respect to $(T_p(n,k))_{n\in\mathbb{N}_+}$ with pairwise distinct $p$-adic valuations. Hence the numbers $v_p(S(n,k))$, $n\in [a]_{(p-1)p^{m-1}}$, attain infinitely many values and the class $[a]_{(p-1)p^{m-1}}$ is not almost constant with respect to $(S(n,k))_{n\in\mathbb{N}_+}$.
\section{Concluding remarks}
Theorem \ref{thm1} states that for a given prime number $p$ and a positive integer $k$ there exists $m_0=m_0(k)\in\mathbb{N}_+$ such that for each essentially non-constant congruence class $[a]_{p^{m_0-1}(p-1)}$ modulo $p^{m_0-1}(p-1)$ with respect to the sequence $(S(n,k))_{n\in\mathbb{N}_+}$ and the prime number $p$ there are $\alpha\in\mathbb{Z}$ and $l\in\left\{1,...,k-\left\lfloor\frac{k}{p}\right\rfloor-1\right\}$ such that for each $m\geq m_0$ the ultimate minimal $p$-adic valuation of the unique essentially non-constant class modulo $p^{m-1}(p-1)$ contained in $[a]_{p^{m_0-1}(p-1)}$ is equal to $\alpha+l(m-m_0)$. However, we do not know any example of a prime number $p$, a positive integer $k$ and an essentially non-constant class modulo $p^{m_0-1}(p-1)$ with respect to the sequence $(S(n,k))_{n\in\mathbb{N}_+}$ and the prime number $p$ for which $l>1$.
\begin{prob}
Do there exist a prime number $p$ and positive integers $k,a$ such that $[a]_{p^{m_0-1}(p-1)}$ ($m_0$ as specified in the statement of Theorem \ref{thm1}) is an essentially non-constant class modulo $p^{m_0-1}(p-1)$ with respect to the sequence $(S(n,k))_{n\in\mathbb{N}_+}$ and prime number $p$ for which the value $l$ specified in the statement of Theorem \ref{thm1} is greater than $1$?
\end{prob}
We can show that if $a<k<p$, then $l$ must be equal to $1$.
\begin{thm}
Let $k,a$ be positive integers and $p$ be a prime number such that $a<k<p$. Then for each positive integer $n\equiv a\pmod{p^{m_0-1}(p-1)}$ there holds $v_p(S(n,k))=v_p(S(a+p^{m_0-1}(p-1),k))+v_p(n-a)-m_0+1$.
In particular, for each positive integer $m\geq m_0$, $[a]_{p^{m-1}(p-1)}$ is the only non-constant class modulo $p^{m-1}(p-1)$ contained in $[a]_{p^{m_0-1}(p-1)}$ and its least $p$-adic valuation is equal to $v_p(S(a+p^{m_0-1}(p-1),k))+m-m_0$.
\end{thm}
\begin{proof}
We have $S(a,k)=0$. Since $k<p$, by Theorem \ref{thm1} there exist $\beta\in\mathbb{Z}$ and $l\in\mathbb{N}_+$ such that
\begin{equation}\label{form}
v_p(S(n,k))=\beta+lv_p(n-a)
\end{equation}
for $n\equiv a\pmod{p^{m_0-1}(p-1)}$. Putting $n=a+p^{m_0-1}(p-1)$ in (\ref{form}) we obtain $\beta=v_p(S(a+p^{m_0-1}(p-1),k))-l(m_0-1)$. It suffices to prove that $l=1$. We have $S(a+n(p-1),k)=\frac{1}{k!}f_{a,k}(n)$ for $n\in\mathbb{N}$, where $f_{a,k}(x)=\sum_{j=1}^k (-1)^{k-j}{k\choose j}j^a(j^{p-1})^x$, $x\in\mathbb{Z}_p$, is a $p$-adic locally analytic function. Since $f_{a,k}(0)=k!\,S(a,k)=0$ and $v_p(f_{a,k}(n))=v_p(S(a+n(p-1),k))+v_p(k!)=v_p(S(a+n(p-1),k))$ (because $k<p$), we infer that $l=\inf\{i\in\mathbb{N}: f_{a,k}^{(i)}(0)\neq 0\}$. Hence it remains to show that $f_{a,k}'(0)\neq 0$. Indeed,
$$f_{a,k}'(0)=\sum_{j=1}^k (-1)^{k-j}{k\choose j}j^a\log_p(j^{p-1})=\log_p\left(\prod_{j=1}^k j^{(-1)^{k-j}{k\choose j}j^a(p-1)}\right).$$
Let us note that for $k\geq 2$, by Chebyshev's theorem (Bertrand's postulate) there exists a prime number $q$ such that $\frac{k}{2}<q\leq k$. Then the $q$-adic valuation of the number $\prod_{j=1}^k j^{(-1)^{k-j}{k\choose j}j^a(p-1)}$ is equal to $(-1)^{k-q}{k\choose q}q^a(p-1)\neq 0$. This means that $\prod_{j=1}^k j^{(-1)^{k-j}{k\choose j}j^a(p-1)}\not\in\{-1,1\}$ and from our preparation about the $p$-adic logarithm we conclude that $$f_{a,k}'(0)=\log_p\left(\prod_{j=1}^k j^{(-1)^{k-j}{k\choose j}j^a(p-1)}\right)\neq 0.$$
\end{proof}
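A small numerical illustration of the theorem (parameter choices are ours): take $p=5$, $k=3$, $a=2$, so $a<k<p$ and $S(2,3)=0$. For these small sampled values $n\equiv 2\pmod{4}$ the predicted pattern $v_5(S(n,3))=1+v_5(n-2)$, i.e.\ $l=1$, is already visible:

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind via inclusion-exclusion."""
    return sum((-1)**(k - j) * comb(k, j) * j**n for j in range(k + 1)) // factorial(k)

def v5(x):
    """5-adic valuation of a nonzero integer."""
    v = 0
    while x % 5 == 0:
        x //= 5
        v += 1
    return v

# p = 5, k = 3, a = 2 < k < p; S(2,3) = 0.
assert stirling2(2, 3) == 0
# v_5(S(n,3)) = 1 + v_5(n - 2) for the sampled n = 2 + 4u:
assert v5(stirling2(6, 3)) == 1     # v_5(4)  = 0
assert v5(stirling2(10, 3)) == 1    # v_5(8)  = 0
assert v5(stirling2(22, 3)) == 2    # v_5(20) = 1
```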
% Source metadata (arXiv): arXiv:1803.04533 [math.NT], 2018-03-14.
% Title: A note on $p$-adic locally analytic functions with application to behavior of the $p$-adic valuations of Stirling numbers.
% Abstract: The aim of this paper is to prove conjectures concerning $p$-adic valuations of Stirling numbers of the second kind $S(n,k)$, $n,k\in\mathbb{N}_+$, stated by Amdeberhan, Manna and Moll and Berrizbeitia et al., where $p$ is a prime number. The proof is based on elementary facts from $p$-adic analysis.
% Source metadata (arXiv): arXiv:math/0701296, https://arxiv.org/abs/math/0701296.
% Title: Shellable graphs and sequentially Cohen-Macaulay bipartite graphs.
% Abstract: Associated to a simple undirected graph $G$ is a simplicial complex whose faces correspond to the independent sets of $G$. We call a graph $G$ shellable if this simplicial complex is a shellable simplicial complex in the non-pure sense of Bj\"orner--Wachs. We are then interested in determining what families of graphs have the property that $G$ is shellable. We show that all chordal graphs are shellable. Furthermore, we classify all the shellable bipartite graphs; they are precisely the sequentially Cohen-Macaulay bipartite graphs. We also give a recursive procedure to verify if a bipartite graph is shellable. Because shellable implies that the associated Stanley-Reisner ring is sequentially Cohen-Macaulay, our results complement and extend recent work on the problem of determining when the edge ideal of a graph is (sequentially) Cohen-Macaulay. We also give a new proof for a result of Faridi on the sequentially Cohen-Macaulayness of simplicial forests.
\section{Introduction}
Let $G$ be a simple (no loops or multiple edges) undirected graph
on the vertex set $V_G = \{x_1,\ldots,x_n\}$. By identifying the vertex $x_i$
with the variable $x_i$ in the polynomial ring $R = k[x_1,\ldots,x_n]$
over a field $k$, we can
associate to $G$ a quadratic square-free monomial ideal
$I(G) = ( \{x_ix_j ~|~ \{x_i,x_j\} \in E_G\})$ where $E_G$ is the edge set of $G$.
The ideal $I(G)$ is called the {\it edge ideal} of $G$.
Using the Stanley-Reisner correspondence,
we can associate to $G$ the simplicial complex $\Delta_G$ where
$I_{\Delta_G} = I(G)$.
Notice that the faces of $\Delta_G$ are the {\it independent
sets} or {\it stable sets\/} of $G$. Thus $F$ is a face of $\Delta_G$
if and only if there is no edge of $G$ joining any two vertices of
$F$. The dual concept of an independent set is a
{\it vertex cover\/}, i.e., a subset $C$ of $V_G$ is a vertex cover of
$G$ if and only if $V_G\setminus C$ is an independent set of $G$.
We call a graph $G$ {\it {\rm (}sequentially{\rm )} Cohen-Macaulay}
if $R/I(G)$ is (sequentially) Cohen-Macaulay.
Recently, a number of authors (for example, see
\cite{EV,FH,FVT2,herzog-hibi,HHZ,Vi2,unmixed}) have been interested in
classifying or identifying (sequentially) Cohen-Macaulay graphs $G$ in terms of
the combinatorial properties of $G$. This paper complements and extends some
of this work by introducing the notion of a {\it shellable graph}.
We shall call a graph $G$ shellable if the simplicial complex $\Delta_G$
is a shellable simplicial complex
(see Definition \ref{shellabledefn}). Here, we mean
the non-pure definition of shellability as introduced by Bj\"orner
and Wachs \cite{BW}.
Because a shellable simplicial complex has the property that its associated
Stanley-Reisner ring is sequentially Cohen-Macaulay, by identifying
shellable graphs, we are in fact identifying some
of the sequentially Cohen-Macaulay graphs.
We begin in Section 2 by formally introducing shellable graphs and
discussing some of their basic properties. We then focus on the shellability
of bipartite graphs.
Recall that a graph $G$ is {\it bipartite\/} if the vertex set $V_G$
can be partitioned into two disjoint sets $V = V_1 \cup V_2$ such
that every edge of $G$ contains one vertex in $V_1$ and the other in $V_2$.
Furthermore, let $N_G(x)$ denote the set of {\it neighbors} of the
vertex $x$.
We then show:
\begin{theorem}[Corollary \ref{recursivebuild}]\label{thm1}
Let $G$ be a bipartite graph. Then $G$ is shellable if and only
if there are adjacent vertices $x$ and $y$
with $\deg(x)=1$ such that
the bipartite graphs $G \setminus (\{x\} \cup N_G(x))$
and $G \setminus (\{y\} \cup N_G(y))$ are shellable.
\end{theorem}
We also consider the shellability of chordal graphs.
A graph ${G}$ is
{\it chordal\/} (or {\it triangulated}) if every cycle
${\mathcal C}_n$ of ${G}$ of length $n\geq 4$ has a
chord. A {\it chord} of ${\mathcal C}_n$ is an edge joining two
non-adjacent vertices of ${\mathcal C}_n$. Chordal graphs
then have a nice combinatorial property:
\begin{theorem}[Theorem \ref{chordaltheorem}]
Let $G$ be a chordal graph. Then $G$ is shellable.
\end{theorem}
\noindent
Because $G$ being shellable implies that $G$ is sequentially
Cohen-Macaulay, the above result gives a new proof to the main result
of Francisco and the first author \cite{FVT2} that all chordal
graphs are sequentially Cohen-Macaulay.
The main result of Section 3 is to classify all sequentially Cohen-Macaulay
bipartite graphs. Precisely, we show:
\begin{theorem}[Theorem \ref{SCM=shellable}] \label{them3}
Let $G$ be a bipartite graph.
Then $G$ is sequentially Cohen-Macaulay if and only if $G$ is
shellable.
\end{theorem}
\noindent
Note that all shellable graphs are automatically sequentially Cohen-Macaulay
(see Stanley \cite{Stanley} or Theorem \ref{shellable->scm}), but
the converse is not true in general. So,
the above theorem says that among the bipartite graphs, those that are sequentially
Cohen-Macaulay are precisely those that are shellable. This generalizes
a result of Estrada and the second author \cite{EV} which
showed that $G$ is a Cohen-Macaulay bipartite graph if and only
if $\Delta_G$ has a pure shelling.
Because we can use Theorem \ref{thm1} to recursively check if a
bipartite graph is shellable, Theorem \ref{them3} implies we can
verify recursively if a
bipartite graph is sequentially Cohen-Macaulay.
In the fourth section we consider connected bipartite graphs with
bipartition $V_1=\{x_1,\ldots,x_g\}$ and $V_2=\{y_1,\ldots,y_g\}$ such
that $\{x_i,y_i\}\in E_G$ for all $i$ and $g\geq 2$.
Following Carr\'a Ferro and Ferrarello \cite{carra-ferrarello},
we can associate to $G$ a directed graph $\mathcal{D}$. Carr\'a
Ferro and Ferrarello
gave an alternative classification of Cohen-Macaulay bipartite graphs in terms
of the properties of $\mathcal{D}$ (the original classification is
due to Herzog and Hibi \cite{herzog-hibi}). We show
how $G$ being sequentially Cohen-Macaulay affects the graph $\mathcal{D}$.
In the final section we extend the scope of our investigation
to include the edge ideals associated to clutters (a type of hypergraph).
As in the graph case, we say that a clutter
$\mathcal{C}$ is shellable if the simplicial complex associated to
the edge ideal $I(\mathcal{C})$
is a shellable simplicial complex.
We show (the free vertex property is defined in Section 5):
\begin{theorem}[Theorem \ref{fvp}]
If a clutter $\mathcal{C}$ has the free vertex property, then
$\mathcal{C}$ is shellable.
\end{theorem}
By applying
a result of Herzog, Hibi, Trung and Zheng \cite{hhtz}, we recover as a corollary
the fact that all simplicial forests are sequentially Cohen-Macaulay.
This result was first proved by Faridi \cite{Faridi}.
\section{Shellable graphs}
We continue to use the notation and definitions used in the introduction. In
this section we introduce shellable graphs, describe some
of their properties, and identify some families of shellable graphs.
\begin{definition}\label{shellabledefn}
A simplicial complex $\Delta$ is {\it shellable\/} if
the facets (maximal faces) of $\Delta$ can be ordered $F_1,\ldots,F_s$ such that
for all $1\leq i<j\leq s$, there
exists some $v\in F_j\setminus F_i$ and some
$\ell\in \{1,\ldots,j-1\}$ with $F_j\setminus F_\ell= \{v\}$. We call
$F_1,\ldots,F_s$ a {\it shelling} of $\Delta$ when the facets have been
ordered with respect to the shellable definition. For a fixed
shelling of $\Delta$, if $F,F' \in \Delta$
then we write $F < F'$ to mean that $F$
appears before $F'$ in the ordering.
\end{definition}
\begin{remark}
The above definition of shellable is due to
Bj\"orner and Wachs \cite{BW} and is usually referred
to as {\it nonpure shellable}, although
in this paper we will drop the adjective ``nonpure''. Originally, the definition of
shellable also required that
the simplicial complex be pure, that is, all the facets have the same dimension.
We will say $\Delta$ is {\it pure shellable} if it also satisfies this
hypothesis.
\end{remark}
\begin{definition} Let $G$ be a simple undirected graph with associated
simplicial complex $\Delta_G$. We say
$G$ is a {\it shellable graph} if $\Delta_G$ is a shellable simplicial
complex.
\end{definition}
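For small graphs, shellability of $\Delta_G$ can be tested by brute force: list the maximal independent sets and search over orderings for one satisfying Definition \ref{shellabledefn}. The sketch below (our code, exponential-time, for illustration only) confirms that the path on four vertices is shellable while the four-cycle is not:

```python
from itertools import combinations, permutations

def facets(vertices, edges):
    """Facets of Delta_G, i.e. the maximal independent sets of G."""
    E = [frozenset(e) for e in edges]
    ind = [set(S) for r in range(len(vertices) + 1)
           for S in combinations(vertices, r)
           if not any(e <= set(S) for e in E)]
    return [F for F in ind if not any(F < G for G in ind)]

def is_shelling(order):
    """Check the Bjorner-Wachs condition for a given facet ordering."""
    for j in range(1, len(order)):
        for i in range(j):
            # need some l < j with F_j \ F_l a singleton {v}, v in F_j \ F_i
            if not any(len(order[j] - order[l]) == 1
                       and (order[j] - order[l]) <= (order[j] - order[i])
                       for l in range(j)):
                return False
    return True

def is_shellable(vertices, edges):
    return any(is_shelling(list(p)) for p in permutations(facets(vertices, edges)))

path4 = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
cycle4 = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
assert is_shellable(*path4)
assert not is_shellable(*cycle4)
```

Note that the four-cycle is bipartite but has no vertex of degree one, which is consistent with the criterion of Theorem \ref{thm1}.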
To prove that a graph $G$ is shellable, it suffices to prove each
connected component of $G$ is shellable, as demonstrated below.
\begin{lemma}\label{oct14-06} Let $G_1$ and $G_2$ be two graphs with
disjoint sets of
vertices and let $G=G_1\cup G_2$.
Then $G_1$ and $G_2$ are shellable if and only if $G$
is shellable.
\end{lemma}
\begin{proof} $(\Rightarrow)$
Let $F_{1},\ldots,F_{r}$ and $H_{1},\ldots,H_{s}$ be
the shellings of $\Delta_{G_1}$ and $\Delta_{G_2}$ respectively.
Then
if we order the facets of $\Delta_G$ as
$$
F_{1}\cup H_{1},\ldots,F_{1}\cup H_{s};\,
F_{2}\cup H_{1},\ldots,F_{2}\cup H_{s};\, \ldots;\,
F_{r}\cup H_{1},\ldots,F_{r}\cup H_{s}
$$
we get a shelling of $\Delta_G$. Indeed if $F'<F$ are two facets
of $\Delta_G$ we have two cases to consider.
Case (i): $F'=F_i\cup H_k$ and $F=F_j\cup H_t$, where $i<j$. Because
$\Delta_{G_1}$ is shellable there is $v\in F_j\setminus F_i$ and
$\ell<j$ with $F_j\setminus F_\ell= \{v\}$.
Hence $v\in F\setminus F'$, $F_\ell\cup H_t<F$, and
$F\setminus(F_\ell\cup H_t)=\{v\}$. Case (ii): $F'=F_k\cup H_i$ and
$F=F_k\cup H_j$, where $i<j$. This case follows from the
shellability of $\Delta_{G_2}$.
$(\Leftarrow)$ Note that if $F$ is a facet of $\Delta_G$,
then $F' = F \cap V_{G_1}$, respectively, $F'' = F \cap V_{G_2}$, is a facet
of $\Delta_{G_1}$, respectively $\Delta_{G_2}$. We now show that
$G_1$ is shellable and omit the similar proof for the shellability of $G_2$.
Let $F_1,\ldots,F_t$ be a shelling of $\Delta_G$, and consider the subsequence
\[F_{i_1},\ldots,F_{i_s} ~~~\mbox{with $1 = i_1 < i_2 < \cdots < i_s$} \]
where $F_1 \cap V_{G_2} = F_{i_j} \cap V_{G_2}$ for $i_j \in \{i_1,\ldots,i_s\}$,
but $F_1 \cap V_{G_2} \neq F_k \cap V_{G_2}$ for any $k \in \{1,\ldots,t\} \setminus
\{i_1,\ldots,i_s\}$. We then claim that
\[F'_1 = F_{i_1} \setminus V_{G_2},\ F'_2 = F_{i_2} \setminus V_{G_2},
\ldots, F'_s = F_{i_s} \setminus V_{G_2}\]
is a shelling of $\Delta_{G_1}$. We first
show that this is a complete list of facets; indeed, each $F'_j = F_{i_j} \cap V_{G_1}$
is a facet of $\Delta_{G_1}$, and furthermore, for any facet
$F \in \Delta_{G_1}$, $F \cup (F_1 \cap V_{G_2})$ is a facet of $\Delta_G$,
and hence $F \cup (F_1 \cap V_{G_2}) = F_{i_j}$ for some $i_j \in \{i_1,\ldots,i_s\}$.
Because the $F_i$'s form a shelling, if $1 \leq k < j \leq s$,
there exists $v \in F_{i_j} \setminus F_{i_k} = (F_{i_j} \setminus V_{G_2})\setminus
(F_{i_k}\setminus V_{G_2}) = F'_j \setminus F'_k$ such that
$\{v\} = F_{i_j} \setminus F_{\ell}$ for some $1 \leq \ell < i_j$.
It suffices to show that $F_{\ell}$ is among $F_{i_1},\ldots,F_{i_s}$. Now
because $F_{i_j} \cap V_{G_2} \subset F_{i_j}$ and $v \not\in
F_{i_j} \cap V_{G_2}$, we must have $F_{i_j} \cap V_{G_2} \subset F_{\ell}$.
So, $F_{\ell} \cap V_{G_2} \supset F_{i_j} \cap V_{G_2}$.
But $F_{\ell} \cap V_{G_2}$ is a facet of $\Delta_{G_2}$,
so we must have $F_{\ell} \cap V_{G_2} = F_{i_j} \cap V_{G_2}$.
So $F_{\ell} = F_{i_r}$ for some $r < j$,
and hence, $\{v\} = F'_j \setminus F'_r$, as desired.
\end{proof}
Given a subset $S \subset V_G$, by $G \setminus S$, we mean
the graph formed from $G$ by deleting all the vertices in $S$, and all
edges incident to a vertex in $S$. If $x$ is a vertex of $G$, then
its {\it neighbor set},
denoted by $N_G(x)$, is
the set of vertices of $G$ adjacent to $x$.
If $F$ is a face of a simplicial complex $\Delta$, the {\it link} of $F$ is
defined to be $\operatorname{lk}_{\Delta}(F) = \{G ~|~ G \cup F \in
\Delta,~~ G \cap F = \emptyset\}$.
When $F = \{x\}$, then we shall abuse notation and
write $\operatorname{lk}_{\Delta}(x)$
instead of $\operatorname{lk}_{\Delta}(\{x\})$.
\begin{lemma}\label{link}
Let $x$ be a vertex of $G$ and let
$G' = G\setminus(\{x\}\cup N_G(x))$. Then
\[\Delta_{G'} = \operatorname{lk}_{\Delta_G}(x).\]
In particular, $F$ is a facet of $\Delta_{G'}$ if and only if
$x\notin F$ and $F \cup
\{x\}$ is a facet
of $\Delta_G$.
\end{lemma}
\begin{proof}
If $F \in \operatorname{lk}_{\Delta_G}(x)$,
then $x \not\in F$, and $F \cup \{x\} \in \Delta_G$ implies that $F \cup \{x\}$
is an independent set of $G$. So $(F \cup \{x\}) \cap N_G(x) = \emptyset$.
But this means that $F \subset V_{G'}$ because
$V_{G'}= V_G \setminus (\{x\} \cup N_G(x))$.
Thus $F \in \Delta_{G'}$ since $F$ is also
an independent set of the smaller graph $G'$.
Conversely, if $F \in \Delta_{G'}$, then $F$ is an independent set of $G'$
that does not contain any of the vertices of $\{x\} \cup N_G(x)$. But
then $F \cup \{x\}$ is an independent set of $G$, i.e., $F \cup \{x\} \in \Delta_G$.
So $F \in \operatorname{lk}_{\Delta_G}(x)$.
The last statement follows readily from the fact that $F$ is a facet
of $\operatorname{lk}_{\Delta_G}(x)$ if and only if $x\notin F$ and
$F \cup \{x\}$ is a facet of $\Delta_G$.
\end{proof}
The property of shellability is preserved when
removing the vertices $\{x\} \cup N_G(x)$ and all incident
edges from $G$ for
any vertex $x$.
\begin{theorem} \label{removevertex-shellable}
Let $x$ be a vertex of $G$ and let
$G' = G\setminus(\{x\}\cup N_G(x))$.
If $G$ is shellable, then $G'$ is shellable.
\end{theorem}
\begin{proof}
Let $F_1,\ldots,F_s$ be a shelling of $\Delta_G$. Suppose the subsequence
\[F_{i_1},F_{i_2},\ldots,F_{i_t} ~~\mbox{with $i_1 < i_2 < \cdots < i_t$}\]
is the list of all the facets with $x \in F_{i_j}$. Setting $H_j =
F_{i_j}\setminus\{x\}$
for each $j =1,\ldots,t$, Lemma \ref{link} implies that the $H_j$'s
are the facets of $\Delta_{G'}$.
We claim that $H_1,\ldots,H_t$ is a shelling of $\Delta_{G'}$. Because
the $F_i$'s form a shelling, if $1 \leq k < j \leq t$, there
exists a vertex $v \in F_{i_j} \setminus F_{i_k} = (F_{i_j}
\setminus\{x\}) \setminus
(F_{i_k}\setminus\{x\}) =
(H_j \setminus H_k)$ such that $\{v\} = F_{i_j} \setminus F_{\ell}$
for some $1 \leq \ell < i_j$. It suffices to show that $F_\ell$ is
among the list $F_{i_1},
\ldots, F_{i_t}$. But because $x \in F_{i_j}$ and $x \neq v$, we must
have $x \in F_{\ell}$. Thus $F_{\ell} = F_{i_k}$ for some $k < j$. But
then $\{v\} = F_{i_j} \setminus F_{\ell} = H_{j} \setminus H_k$. So,
the $H_i$'s form
a shelling of $\Delta_{G'}$.
\end{proof}
Let $G$ be a graph and let $S\subset V_G$. For use below, consider the
graph $G\cup W_G(S)$ obtained from $G$ by
adding new vertices $\{y_i ~|~ x_i\in S\}$ and new
edges $\{\{x_i,y_i\} ~|~ x_i\in S\}$. The edges $\{x_i,y_i\}$ are
called {\it whiskers}. The notion of a whisker was introduced by
the second author \cite{ITG,Vi2} to study how modifying the graph $G$
affected the Cohen-Macaulayness of $G$; this
idea was later generalized by Francisco and H\`a \cite{FH} in
their study of sequentially Cohen-Macaulay graphs. We can give
a shellable analog of \cite[Theorem 4.1]{FH}.
\begin{corollary} Let $G$ be a graph and let
$S\subset V_G$. If $G\cup W_G(S)$ is shellable, then
$G\setminus S$ is shellable.
\end{corollary}
\begin{proof} We may assume that $S=\{x_1,\ldots,x_s\}$. Set
$G_0=G \cup W_G(S)$ and $G_i=G_{i-1}\setminus(\{y_i\}\cup N_G(y_i))$ for
$i=1,\ldots,s$. Notice that $G_s=G\setminus S$. Hence, by repeatedly applying
Theorem \ref{removevertex-shellable}, the graph $G\setminus S$ is
shellable. \end{proof}
We now turn our attention to the shellability of bipartite graphs.
\begin{lemma} \label{degree1} Let $G$ be a bipartite graph with
bipartition $\{x_1,\ldots,x_m\}$, $\{y_1,\ldots,y_n\}$. If $G$
is shellable and $G$ has no isolated vertices,
then there is $v\in V_G$ with $\deg(v)=1$.
\end{lemma}
\begin{proof} Let $F_1,\ldots,F_s$ be a shelling of $\Delta_G$. We may
assume that $F_i=\{y_1,\ldots,y_n\}$, $F_j=\{x_1,\ldots,x_m\}$ and
$i<j$. Then there is $x_k\in F_j\setminus F_i$ and $F_\ell$ with
$\ell\leq j-1$ such that $F_j\setminus F_\ell=\{x_k\}$. For simplicity
assume that $x_k=x_1$. Then $\{x_2,\ldots,x_m\}\subset F_\ell$ and
there is $y_t$ in $F_\ell$ for some $1\leq t\leq n$. Since
$\{y_t,x_2,\ldots,x_m\}$ is an independent set of $G$, we get that
$y_t$ can only be adjacent to $x_1$. Thus $\deg(y_t)=1$ because
$G$ has no isolated vertices. \end{proof}
\begin{theorem}\label{oct15-1-06} Let $G$ be a graph and let $x_1,y_1$ be two
adjacent vertices of $G$ with $\deg(x_1)=1$. If
$G_1=G\setminus(\{x_1\}\cup N_G(x_1))$ and
$G_2=G\setminus(\{y_1\}\cup N_G(y_1))$, then $G$ is shellable
if and only if $G_1$ and $G_2$ are shellable.
\end{theorem}
\begin{proof} If $G$ is shellable, then $G_1$ and $G_2$ are shellable by
Theorem \ref{removevertex-shellable}. So it suffices to prove the
reverse direction.
Let $F_1',\ldots,F_r'$ be a shelling of
$\Delta_{G_1}$ and let $H_1',\ldots,H_s'$ be a shelling of
$\Delta_{G_2}$. It suffices to prove that
$$
F_1'\cup\{x_1\},\ldots,F_r'\cup\{x_1\},H_1'\cup\{y_1\},\ldots,
H_s'\cup\{y_1\}
$$
is a shelling of $\Delta_G$. One first shows that this is the complete list
of facets of $\Delta_G$ using Lemma~\ref{link}. Indeed,
take any facet $F$ of $\Delta_G$. If $y_1 \in F$,
then $x_1 \not\in F$ because $\{x_1,y_1\}$ is an edge of $G$,
and by Lemma~\ref{link}, $F \setminus \{y_1\} = H_i'$ for
some $i$. On the other hand, if $y_1 \not\in F$,
we must have $x_1 \in F$, for otherwise
$\{x_1\} \cup F$ would be a larger independent set of $G$, since
$x_1$ is adjacent only to $y_1$. Again, by Lemma~\ref{link},
we have $F \setminus \{x_1\} = F_i'$ for some $i$.
Let $F'<F$ be two facets of $\Delta_G$.
There are three cases to consider. Case (i): $F'=F_i'\cup\{x_1\}$
and $F=H_j'\cup\{y_1\}$. Since $H_j'\cup\{x_1\}$ is an independent
set of $G$, it is contained in a facet of $\Delta_G$, i.e.,
$H_j'\cup\{x_1\}\subset F_\ell'\cup\{x_1\}$ for some $\ell$. Hence
$(H_j'\cup\{y_1\})\setminus(F_\ell'\cup\{x_1\})=\{y_1\}$,
$y_1\in F\setminus F'$, and $F_\ell'\cup\{x_1\}<F$. The remaining two
cases follow readily from the shellability of $\Delta_{G_1}$
and $\Delta_{G_2}$. \end{proof}
Putting together the last two results yields
a recursive procedure to verify if a bipartite
graph is shellable.
\begin{corollary} \label{recursivebuild}
Let $G$ be a bipartite graph. Then $G$ is shellable if and only
if there are adjacent vertices $x$ and $y$
with $\deg(x)=1$ such that
the bipartite graphs $G \setminus (\{x\} \cup N_G(x))$
and $G \setminus (\{y\} \cup N_G(y))$ are shellable.
\end{corollary}
\begin{proof} By Lemma \ref{oct14-06} it suffices to
verify the statement when $G$ is connected.
By Lemma \ref{degree1} there exists
a vertex $x_1$ with $\deg(x_1) =1$. Now apply
Theorem \ref{oct15-1-06}.
\end{proof}
\begin{example}
The {\it complete bipartite graph}, denoted $\mathcal{K}_{m,n}$,
is the graph with vertex set $V_G = \{x_1,\ldots,x_m,y_1,\ldots,y_n\}$
and edge set $E_G = \{\{x_i,y_j\} ~|~ 1 \leq i \leq m, 1 \leq j \leq n\}$.
If $m,n \geq 2$, then $\mathcal{K}_{m,n}$
is not shellable since the graph has no vertex of degree one.
On the other hand, if $m =1$ and $n \geq 1$, then
$\mathcal{K}_{m,n}$ is shellable since
the only facets are $F_1 = \{y_1,\ldots,y_n\}$
and $F_2 = \{x_1\}$ and we have a shelling with $F_1 < F_2$. Similarly,
$\mathcal{K}_{m,1}$ is shellable for all $m \geq 1$.
\end{example}
We complete this section by showing that all chordal
graphs are shellable.
A graph ${G}$ is {\it triangulated\/} or
{\it chordal\/} if every cycle ${\mathcal C}_n$ of ${G}$ of length $n\geq 4$ has a
chord. A {\it chord} of ${\mathcal C}_n$ is an edge joining two
non-adjacent vertices of ${\mathcal C}_n$. Let $S$ be a set of
vertices of a graph $G$. The {\it induced
subgraph\/} $G_S$ is the maximal subgraph of $G$ with
vertex set $S$.
For use below we call a complete subgraph of $G$ a {\it clique}. As usual,
a complete graph with $r$ vertices is denoted by ${\mathcal K}_r$.
\begin{lemma}{\rm\cite[Theorem~8.3]{Toft}}\label{toft-lemma}
Let $G$ be a chordal graph and let ${\mathcal K}$ be a complete
subgraph of $G$. If ${\mathcal K}\neq G$, then there is $x\not\in
V({\mathcal K})$
such that $G_{N_G(x)}$ is a complete subgraph.
\end{lemma}
\begin{theorem} \label{chordaltheorem}
Let $G$ be a chordal graph. Then $G$ is
shellable.
\end{theorem}
\begin{proof} We proceed by induction on $n = |V_G|$. Let
$V_G=\{x_1,\ldots,x_n\}$ be the vertex set of $G$. If $G$ is a
complete graph, then $\Delta_G$ consists of $n$ isolated vertices and
they clearly form a shelling. Thus by Lemma~\ref{oct14-06} we may assume
that $G$ is a connected non-complete graph. According to
Lemma~\ref{toft-lemma} there is
$x_1\in V_G$ such that $G_{N_G(x_1)}={\mathcal K}_{r-1}$ is a
complete subgraph for some $r \geq 1$. (To apply Lemma~\ref{toft-lemma}, take
$\mathcal K$ to be any edge of $G$; this is clearly a complete graph.)
Notice that
$G_{\{x_1\}\cup N_G(x_1)} ={\mathcal K}_r$ and that ${\mathcal
K}_r$ is the
only maximal complete subgraph of $G$ that contains $x_1$. We may
assume that $V({\mathcal K}_r)=\{x_1,\ldots,x_r\}$.
Consider the subgraphs $G_i=G\setminus(\{x_i\}\cup N_G(x_i))$,
which are also chordal. By
induction there is a shelling $F_{i1},\ldots,F_{is_i}$
of $\Delta_{G_i}$ for $i=1,\ldots,r$. Observe that any facet of
$\Delta_G$ intersects $V({\mathcal K}_r)$ in exactly one vertex. Thus
by Lemma~\ref{link} the following is the
complete list of facets of $\Delta_G$:
$$
F_{11}\cup\{x_1\},\ldots,F_{1s_1}\cup\{x_1\};
F_{21}\cup\{x_2\},\ldots,F_{2s_2}\cup\{x_2\};\ldots;
F_{r1}\cup\{x_r\},\ldots,F_{rs_r}\cup\{x_r\}.
$$
We claim that this linear ordering is a shelling of $\Delta_G$. Let
$F'<F$ be two facets of $\Delta_G$. There are two cases to consider.
Case (i): $F'=F_{ik}\cup\{x_i\}$ and $F=F_{jt}\cup\{x_j\}$, where
$i<j$. Notice that $F_{jt}\cup\{x_1\}$ is an independent set of $G$
because $F_{jt}\cap V(\mathcal{K}_r)=\emptyset$. Thus
$F_{jt}\cup\{x_1\}$ can be extended to a facet of $\Delta_G$, i.e.,
$F_{jt}\cup\{x_1\}\subset F_{1\ell}\cup\{x_1\}$ for some
$1\leq\ell\leq s_1$. Set $F''= F_{1\ell}\cup\{x_1\}$. Hence $x_j\in
F\setminus F'$, $F\setminus F''=\{x_j\}$, and $F''<F$. Case (ii):
$F'=F_{ik}\cup\{x_i\}$ and $F=F_{it}\cup\{x_i\}$, with $k<t$.
This case follows from the
shellability of $\Delta_{G_i}$. \end{proof}
\begin{remark}
As shown below (Theorem \ref{shellable->scm}), if a graph $G$
is shellable, then it is also sequentially Cohen-Macaulay. The above
theorem, therefore, gives a new proof to the fact that all chordal
graphs are sequentially Cohen-Macaulay as first proved in \cite{FVT2}.
To show that all chordal graphs are
sequentially Cohen-Macaulay, the authors of \cite{FVT2}
show that for each degree $d \geq 0$, the square-free part
of the Alexander dual $I(G)^{\vee}$ (also
defined below) in degree $d$ has {\it linear quotients},
that is, there is an ordering of the generators $\{u_1,\ldots,u_s\}$
of the square-free part of $I(G)^{\vee}$ of degree $d$
such that $(u_1,\ldots,u_{i-1}):(u_i) = (x_{i_1},\ldots,x_{i_t})$
for $i=1,\ldots,s$.
However, when $G$ is shellable, the
generators of the Alexander dual $I(G)^{\vee}$
must have linear quotients (see \cite[Theorem 1.4(c)]{hhz-ejc} and
\cite{SJZ});
so, when $G$ is chordal, the ideal $I(G)^{\vee}$ also has linear
quotients, a fact that, to the best of our knowledge, has not been
observed before.
\end{remark}
\section{Sequentially Cohen-Macaulay bipartite graphs}
In this section we classify all sequentially Cohen-Macaulay bipartite graphs.
We begin by recalling the relevant definitions and results about sequentially
Cohen-Macaulay modules.
\begin{definition} \label{d.seqcm}
Let $R=k[x_1,\dots,x_n]$. A graded $R$-module $M$ is called
{\it sequentially Cohen-Macaulay} (over $k$)
if there exists a finite filtration of graded $R$-modules
\[ 0 = M_0 \subset M_1 \subset \cdots \subset M_r = M \]
such that each $M_i/M_{i-1}$ is Cohen-Macaulay, and the Krull dimensions of the
quotients are increasing:
\[\dim (M_1/M_0) < \dim (M_2/M_1) < \cdots < \dim (M_r/M_{r-1}).\]
\end{definition}
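For instance, any Cohen-Macaulay module $M$ is sequentially
Cohen-Macaulay, as witnessed by the filtration $0 = M_0 \subset M_1 = M$;
the interest of the definition lies in modules that fail to be
Cohen-Macaulay.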
As first shown by Stanley \cite{Stanley}, shellability implies
sequential Cohen-Macaulayness.
\begin{theorem}\label{shellable->scm}
Let $\Delta$ be a simplicial complex, and suppose that $R/I_{\Delta}$
is the associated Stanley-Reisner ring. If $\Delta$ is shellable, then
$R/I_{\Delta}$ is sequentially Cohen-Macaulay.
\end{theorem}
We now specialize to the case of graphs by providing a sequentially
Cohen-Macaulay analog of Theorem \ref{removevertex-shellable}.
\begin{theorem} \label{gscm->cscm1}
Let $x$ be a vertex of $G$ and let
$G' = G\setminus(\{x\}\cup N_G(x))$.
If $G$ is sequentially Cohen-Macaulay, then $G'$ is sequentially Cohen-Macaulay.
\end{theorem}
\begin{proof}
Let $F_1,\ldots,F_s$ be the facets of
$\Delta=\Delta_G$, and let $F_1,\ldots,F_r$ be the facets of $\Delta$ that
contain $x$. Set $\Gamma=\Delta_{G'}$; by Lemma~\ref{link}, the
facets of $\Gamma$ are
$F_1'=F_1\setminus\{x\},\ldots,F_r'=F_r\setminus\{x\}$.
Consider the pure simplicial complexes
\begin{eqnarray*}
\Delta^{[k]}&=&\langle\{F\in\Delta\vert\, \dim(F)=k\}\rangle;\ \ -1\leq
k\leq\dim(\Delta),\\
\Gamma^{[k]}&=&\langle\{F\in\Gamma\vert\, \dim(F)=k\}\rangle;\ \ -1\leq
k\leq\dim(\Gamma),
\end{eqnarray*}
where $\langle{\mathcal F}\rangle$ denotes the subcomplex generated by
the set of faces $\mathcal F$. Recall that $H$ is a face of
$\langle{\mathcal F}\rangle$ if and only if $H$ is contained in
some $F$ in $\mathcal{F}$. Take a facet $F_i'$ of $\Gamma$ of
dimension $d=\dim(\Gamma)$. Then
$F_i'\cup\{x\}\in\Delta^{[d+1]}$ and consequently $\{x\}\in
\Delta^{[k+1]}$ for $k\leq d$. Because the
facets of $\Gamma$ are
$F_1'=F_1\setminus\{x\},\ldots,F_r'=F_r\setminus\{x\}$, we have
the equality
$$
\Gamma^{[k]}={\rm lk}_{\Delta^{[k+1]}}(x)
$$
for $k\leq d$. By \cite[Theorem~3.3]{duval} the complex
$\Delta$ is sequentially Cohen-Macaulay if and only if $\Delta^{[k]}$
is Cohen-Macaulay for
$-1\leq k\leq \dim(\Delta)$. Because $\Delta^{[k]}$ is Cohen-Macaulay,
by \cite[Proposition 5.3.8]{V}
${\rm lk}_{\Delta^{[k]}}(F)$ is Cohen-Macaulay for any $F \in
\Delta^{[k]}$. Thus, $\Gamma^{[k]}={\rm lk}_{\Delta^{[k+1]}}(x)$ is Cohen-Macaulay
for any $-1 \leq k \leq \dim (\Gamma) \leq \dim(\Delta)-1$.
Therefore $\Gamma$ is sequentially Cohen-Macaulay by
\cite[Theorem~3.3]{duval}, as
required. \end{proof}
\begin{example} The six-cycle $\mathcal{C}_6$ is a counterexample to the
converse of the above statement. For any vertex $x$ of $\mathcal{C}_6$,
the graph $\mathcal{C}_6 \setminus (\{x\}\cup N_G(x))$ is a tree,
which is sequentially Cohen-Macaulay. (A tree is a chordal
graph, so by Theorem \ref{chordaltheorem}, a tree is shellable, and hence,
sequentially Cohen-Macaulay
by Theorem \ref{shellable->scm}.) However, the only
sequentially Cohen-Macaulay cycles are $\mathcal{C}_3$ and $\mathcal{C}_5$
\cite[Proposition 4.1]{FVT2}.
\end{example}
A corollary of the above result is the following result of
Francisco and H\`a. Here $W_G(S)$
is the whisker notation introduced in the previous section.
\begin{corollary}{\rm \cite[Theorem~4.1]{FH}} Let $G$ be a graph and let
$S\subset V_G$. If $G\cup W_G(S)$ is sequentially Cohen-Macaulay, then
$G\setminus S$ is sequentially Cohen-Macaulay.
\end{corollary}
\begin{proof} We may assume that $S=\{x_1,\ldots,x_s\}$. Set
$G_0=G\cup W_G(S)$ and $G_i=G_{i-1}\setminus(\{y_i\}\cup N_G(y_i))$ for
$i=1,\ldots,s$, where $y_i$ is
the degree one vertex adjacent to $x_i$.
Notice that $G_s=G\setminus S$. Hence, by repeatedly applying
Theorem~\ref{gscm->cscm1}, the graph $G\setminus S$ is
sequentially Cohen-Macaulay. \end{proof}
We make use of the following result of
Herzog and Hibi that links the notions of componentwise
linearity and sequentially Cohen-Macaulayness. We begin
by recalling:
\begin{definition}
\label{d:CWL}
Let $(I_d)$
denote the ideal generated by all degree $d$ elements of
a homogeneous ideal $I$.
Then $I$ is called {\it componentwise linear} if $(I_d)$ has a linear
resolution for all $d$.
\end{definition}
\begin{definition} If $I$ is a squarefree monomial ideal, then
the {\it squarefree Alexander dual} of
$I = (x_{1,1}\cdots x_{1,{s_1}},\ldots,x_{t,1}\cdots x_{t,{s_t}})$
is the ideal
\[I^{\vee} =
(x_{1,1},\ldots,x_{1,s_1}) \cap \cdots \cap (x_{t,1},\ldots,x_{t,s_t}).\]
\end{definition}
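For example, if $G$ is the path with edges $\{x_1,x_2\}$ and
$\{x_2,x_3\}$, so that $I(G)=(x_1x_2,x_2x_3)$, then
$$
I(G)^{\vee} = (x_1,x_2)\cap(x_2,x_3) = (x_2,\, x_1x_3),
$$
and the generators $x_2$ and $x_1x_3$ correspond to the minimal vertex
covers $\{x_2\}$ and $\{x_1,x_3\}$ of $G$.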
If $I$ is a square-free
monomial ideal, we write $I_{[d]}$ for the ideal generated by all the
square-free monomials of degree $d$ in $I$.
\begin{theorem}{\rm (\cite{HH})} \label{t.seqcm}
Let $I$ be a squarefree monomial ideal of $R$. Then
\begin{itemize}
\item[\rm (a)] $R/I$ is sequentially Cohen-Macaulay if and only if $I^{\vee}$
is componentwise linear.
\item[\rm (b)] $I$ is componentwise linear if and only if
$I_{[d]}$ has a linear resolution for all $d \geq 0$.
\end{itemize}
\end{theorem}
\begin{lemma}\label{oct15-06} Let $G$ be a bipartite graph with
bipartition $\{x_1,\ldots,x_m\}$, $\{y_1,\ldots,y_n\}$. If $G$ is
sequentially Cohen-Macaulay, then there is $v\in V_G$ with $\deg(v)=1$.
\end{lemma}
\begin{proof} We may assume that $m\leq n$ and that $G$ has no isolated
vertices. Let $J$ be the Alexander dual of $I=I(G)$ and
let $L=J_{[n]}$ be the monomial ideal generated by the
square-free monomials of $J$ of degree $n$. We may assume that $L$ is
generated by $g_1,\ldots,g_q$, where $g_1=y_1y_2\cdots y_n$ and
$g_2=x_1\cdots x_my_1\cdots y_{n-m}$. Consider the linear map
$$
R^q\stackrel{\varphi}{\longrightarrow} R\ \ \ (e_i\mapsto g_i).
$$
The kernel of this map is generated by syzygies of the
form
$$
(g_j/\gcd(g_i,g_j))e_i-(g_i/\gcd(g_i,g_j))e_j.
$$
Since the vector $\alpha=x_1\cdots x_me_1-y_{n-m+1}\cdots y_ne_2$ is
in ${\rm ker}(\varphi)$ and since ${\rm ker}(\varphi)$ is generated by
linear syzygies (see Theorem~\ref{t.seqcm}), there is a linear syzygy
of $L$ of the form
$x_je_1-ze_k$, where $z$ is a variable and $k\neq 1$. Hence
$x_j(y_1\cdots y_n)=z(g_k)$ and $g_k=x_jy_1\cdots
y_{i-1}y_{i+1}\cdots y_n$ for some $i$. Because the support of $g_k$ is a
vertex cover of $G$, we get that the complement of the support of
$g_k$, i.e., $\{y_i,x_1,\ldots,x_{j-1},x_{j+1},\ldots,x_m\}$, is an
independent set of $G$. Thus $y_i$ can only be adjacent to $x_j$,
i.e., $\deg(y_i)=1$. \end{proof}
We come to the main result of this section.
\begin{theorem} \label{SCM=shellable}
Let $G$ be a bipartite graph. Then $G$ is
shellable if and only if $G$ is sequentially Cohen-Macaulay.
\end{theorem}
\begin{proof} Since $G$ shellable implies $G$
sequentially Cohen-Macaulay (Theorem \ref{shellable->scm}) we only need
to show the converse. Assume that $G$ is sequentially
Cohen-Macaulay. The proof is by induction on the number of
vertices of $G$. By Lemma~\ref{oct15-06} there is a vertex $x_1$ of
$G$ of degree $1$. Let $y_1$ be the vertex of $G$ adjacent to $x_1$.
Consider the subgraphs $G_1=G\setminus(\{x_1\}\cup N_G(x_1))$ and
$G_2=G\setminus(\{y_1\}\cup N_G(y_1))$. By
Theorem~\ref{gscm->cscm1} $G_1$ and $G_2$ are sequentially
Cohen-Macaulay. Hence $\Delta_{G_1}$ and $\Delta_{G_2}$ are shellable
by the induction hypothesis. Therefore $\Delta_G$ is shellable by
Theorem~\ref{oct15-1-06}. \end{proof}
As we saw in Corollary \ref{recursivebuild},
one can verify recursively that a bipartite graph
is shellable. The above theorem, therefore, implies the same
for sequentially Cohen-Macaulay bipartite graphs.
In particular, we have:
\begin{corollary}\label{scm-build}
Let $G$ be a bipartite graph. Then $G$ is sequentially Cohen-Macaulay if and only
if there are adjacent vertices $x$ and $y$
with $\deg(x)=1$ such that the bipartite graphs $G \setminus (\{x\} \cup N_G(x))$
and $G \setminus (\{y\} \cup N_G(y))$ are sequentially Cohen-Macaulay.
\end{corollary}
\begin{example}
No even cycle $\mathcal{C}_{2m}$ can be sequentially Cohen-Macaulay
since $\mathcal{C}_{2m}$ is
a bipartite graph that
does not have a vertex of degree 1.
\end{example}
\section{An application to Cohen-Macaulay bipartite graphs}\label{s.main}
If $G$ is a bipartite graph without isolated vertices whose edge
ideal $I(G)$ is unmixed,
i.e., all the associated primes of $I(G)$ have the same height, then one can
show (see, for example, \cite[Theorem 6.4.2]{V})
that $G$ must have the following two properties:
\begin{enumerate}
\item[$(1)$]
if $V_1 = \{x_1,\ldots,x_g\}$ and $V_2 = \{y_1,\ldots,y_h\}$
is the bipartition of $V_G$, then $g = h$,
\item[$(2)$] for $i=1,\ldots,g$, (after relabeling) $\{x_i,y_i\}$ is an edge
of $G$.
\end{enumerate}
Properties $(1)$ and $(2)$ are deduced from
the fact that all the minimal vertex covers of a graph
whose edge ideal is unmixed must have the same size. Cohen-Macaulay
bipartite graphs are, therefore, a subset of the graphs that
satisfy $(1)$ and $(2)$, since their edge ideals are unmixed.
If $G$ is any bipartite graph that satisfies $(1)$ and $(2)$,
then Carr\'a Ferro and Ferrarello \cite{carra-ferrarello}
introduced a way to construct a directed graph from
the graph $G$. Precisely, we define a directed graph $\mathcal D$ with
vertex set $V_1$ as follows: $(x_i,x_j)$ is a directed edge of $\mathcal D$
if $i\neq j$ and $\{x_i,y_j\}$ is an edge of $G$.
In this section $G$ will be any bipartite graph that satisfies
conditions $(1)$ and $(2)$. We will show how $G$
being (sequentially) Cohen-Macaulay affects the graph $\mathcal D$.
In particular, we can express Herzog and Hibi's \cite{herzog-hibi}
classification of Cohen-Macaulay bipartite graphs
in terms of the graph $\mathcal D$.
We say that a cycle
$\mathcal{C}$ of ${\mathcal D}$ is {\it oriented\/} if all the arrows
of $\mathcal{C}$ are oriented in the same direction.
\begin{example}
If $G = \mathcal{C}_4$ with edge set $\{\{x_1,y_1\},\{x_2,y_2\},\{x_1,y_2\},
\{x_2,y_1\}\}$,
then $\mathcal D$ has two vertices $x_1,x_2$ and
two arrows $(x_1,x_2)$, $(x_2,x_1)$ forming an oriented
cycle of length
two.
\end{example}
\begin{lemma}{\rm \cite[Theorem~16.3(4),
p.~200]{Har}}\label{acyclic-char} Let $\mathcal D$ be the directed
graph described above. Then $\mathcal D$ is {\it acyclic},
i.e., $\mathcal D$ has no oriented cycles,
if and only if there is a linear
ordering of the vertex set $V_1$ such that all the
edges of $\mathcal D$ are of the form
$(x_i,x_j)$ with $i<j$.
\end{lemma}
Recall that $\mathcal D$ is called {\it
transitive} if for any two $(x_i,x_j)$,
$(x_j,x_k)$ in $E_{\mathcal D}$ with $i,j,k$ distinct, we have
that $(x_i,x_k)\in E_{\mathcal D}$.
\begin{theorem}{\rm(\cite{unmixed})}\label{unmixed-char}
Let $G$ be a bipartite graph satisfying $(1)$ and $(2)$.
The digraph $\mathcal D$ is
transitive if and only if $G$ is unmixed, i.e., all minimal vertex covers
of $G$ have the same cardinality.
\end{theorem}
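For instance, if $G=\mathcal{C}_4$ with the edge set of the earlier
example, then $\mathcal D$ is transitive (vacuously, since it has only
two vertices), and indeed $\mathcal{C}_4$ is unmixed: its minimal vertex
covers are $\{x_1,x_2\}$ and $\{y_1,y_2\}$. On the other hand,
$\mathcal D$ contains the oriented cycle $(x_1,x_2),(x_2,x_1)$,
consistent with the fact that $\mathcal{C}_4$ is not sequentially
Cohen-Macaulay.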
We can now show how $G$ being sequentially Cohen-Macaulay affects the graph $\mathcal{D}$.
\begin{theorem}\label{chordalCWL}
Let $G$ be a bipartite graph satisfying $(1)$ and $(2)$.
If $G$ is sequentially Cohen-Macaulay, then the directed graph $\mathcal{D}$ is acyclic.
\end{theorem}
\begin{proof} We proceed by induction on the number of
vertices of $G$. Assume that $\mathcal D$ has an oriented cycle
$\mathcal{C}_r$ with vertices $\{x_{i_1},\ldots,x_{i_r}\}$. This means that the
graph $G$ has a cycle
$$
\mathcal{C}_{2r}=\{y_{i_1},x_{i_1},y_{i_2},x_{i_2},y_{i_3},
\ldots,y_{i_{r-1}},x_{i_{r-1}},
y_{i_r},x_{i_r}\}
$$
of length $2r$. By Lemma~\ref{oct15-06}, the graph $G$ has a vertex
$v$ of degree $1$. Notice that
$v\notin\{x_{i_1},\ldots,x_{i_r},y_{i_1},\ldots,y_{i_r}\}$.
Furthermore, if $w$ is the vertex adjacent to $v$, we also have
$w \notin \{x_{i_1},\ldots,x_{i_r},y_{i_1},\ldots,y_{i_r}\}$. Hence
by Theorem~\ref{gscm->cscm1} the graph $G'=G\setminus(\{v\}\cup
N_G(v))$ is sequentially Cohen-Macaulay and ${\mathcal D}_{G'}$ has
an oriented cycle, a
contradiction to the induction hypothesis.
Thus $\mathcal D$ has no oriented cycles, as required.
\end{proof}
\begin{example} The converse of the above theorem does not hold,
as the following example illustrates.
Let $G$ be the graph
\begin{picture}(100,60)(-100,30)
\put(0,0){\line(0,1){50}}
\put(0,0){\circle*{5}}
\put(0,50){\circle*{5}}
\put(0,50){\line(1,-1){50}}
\put(50,50){\line(1,-1){50}}
\put(50,50){\line(2,-1){100}}
\put(100,50){\line(1,-1){50}}
\put(50,0){\line(0,1){50}}
\put(100,0){\line(0,1){50}}
\put(150,0){\line(0,1){50}}
\put(200,0){\line(0,1){50}}
\put(150,50){\line(1,-1){50}}
\put(50,0){\circle*{5}}
\put(100,0){\circle*{5}}
\put(150,0){\circle*{5}}
\put(200,0){\circle*{5}}
\put(50,50){\circle*{5}}
\put(100,50){\circle*{5}}
\put(150,50){\circle*{5}}
\put(200,50){\circle*{5}}
\put(-4,55){$x_1$}
\put(46,55){$x_2$}
\put(96,55){$x_3$}
\put(146,55){$x_4$}
\put(196,55){$x_5$}
\put(-4,-10){$y_1$}
\put(46,-10){$y_2$}
\put(96,-10){$y_3$}
\put(146,-10){$y_4$}
\put(196,-10){$y_5$}
\end{picture}
\vspace{2cm}
\noindent
By Lemma \ref{acyclic-char} $G$ is a bipartite graph whose directed graph
$\mathcal{D}$ is acyclic. However, $G$ is not sequentially
Cohen-Macaulay. To verify this,
note that by Corollary \ref{scm-build}, if $G$ is sequentially Cohen-Macaulay,
then $G_1 = G\setminus(\{x_5\} \cup N_G(x_5))$ and $G_2 =
G\setminus(\{y_5\} \cup N_G(y_5))$
are sequentially Cohen-Macaulay. (Note that by the symmetry of the
graph, we can use
either $\{x_5,y_5\}$ or $\{x_1,y_1\}$.)
But $G_2$ is sequentially Cohen-Macaulay if and only if $H_1 = G_2
\setminus(\{y_1\}\cup N_G(y_1))$
and $H_2 = G_2 \setminus(\{x_1\} \cup N_G(x_1))$ are sequentially Cohen-Macaulay.
But $H_2$ is the graph of $\mathcal{C}_4$ which is not sequentially Cohen-Macaulay.
Hence, $G$ is not sequentially Cohen-Macaulay.
\end{example}
Bipartite Cohen-Macaulay graphs have been studied in
\cite{EV,herzog-hibi,V}. In \cite{EV} it is shown that $G$ is a
Cohen-Macaulay bipartite graph if and only if $\Delta_G$ is pure
shellable. In \cite{herzog-hibi} Herzog and Hibi give a graph
theoretical description of
Cohen-Macaulay bipartite graphs. This description
can be expressed in terms of $\mathcal D$, as was pointed out in
\cite{carra-ferrarello}. As a corollary, we prove Herzog and Hibi's
result classifying Cohen-Macaulay bipartite graphs.
\begin{corollary}{\rm(\cite{carra-ferrarello,herzog-hibi})}\label{c.hhz}
Let $G$ be a bipartite graph satisfying $(1)$ and $(2)$.
Then
$G$ is Cohen-Macaulay if and only
if $\mathcal D$ is acyclic and transitive.
\end{corollary}
\begin{proof}
($\Rightarrow$) By Theorem~\ref{unmixed-char}, $\mathcal D$ is
transitive, and by Theorem~\ref{chordalCWL}, $\mathcal D$ is
acyclic.
($\Leftarrow$) The proof is by induction on $g = |V_1|$. The case $g=1$ is
clear. We may assume that $G$ is connected and $g\geq 2$. By
Lemma~\ref{acyclic-char} we may also assume that
if $\{x_i,y_j\} \in E_G$, then $i \leq j$. Let
$N_G(y_g)=\{x_{r_1},\ldots,x_{r_s}\}$ be the set of all
vertices of $G$ adjacent to $y_g$, where $x_{r_s}=x_g$.
Consider the subgraph
$G'=G\setminus (\{y_g\}\cup N_G(y_g))$. We claim that
$y_{r_1},\ldots,y_{r_{s-1}}$ are isolated vertices of $G'$. Indeed if
$y_{r_j}$ is not isolated, there is an edge $\{x_i,y_{r_j}\}$ in $G'$
with $i<r_j$. Hence, by the transitivity of $\mathcal D$, we get that
$\{x_i,y_g\}$ is an edge
of $G$ and $x_i$ must be a vertex in $N_G(y_g)$, a contradiction. Thus, by
induction, the graphs $G'$ and
$G''= G\setminus\{x_g,y_g\}=G\setminus(\{x_g\}\cup N_G(x_g))$ are
Cohen-Macaulay. If $R_1 = k[x ~|~ x \in V_{G'}]$ and
$R_2 = k[x ~|~ x \in V_{G''}]$, then by induction
$\dim R_1/I(G') = g-s$ and $\dim R_2/I(G'') = g-1$.
Since $(I(G)\colon y_g) = (x_{r_1},\ldots,x_{r_s},I(G'))$ and
$(y_g,I(G)) = (y_g,I(G''))$, the
ends of the exact sequence
$$
0\longrightarrow R/(I(G)\colon y_g)\stackrel{y_g}{\longrightarrow}
R/I(G)\longrightarrow R/(I(G),y_g)\longrightarrow 0
$$
are Cohen-Macaulay modules of dimension $g$.
On the other hand, because $\mathcal D$ is transitive,
by Theorem \ref{unmixed-char} the graph $G$ is unmixed,
and thus $\dim R/I(G) =
\dim R - {\rm ht}(I(G)) = 2g - g = g$ since $g$
is the size of every minimal vertex cover. Consequently, by applying
the depth lemma (see \cite[Corollary 18.6]{E}) to the above
short exact sequence, we have
\[\dim R/I(G) \geq \operatorname{depth} R/I(G) \geq
\min\{\operatorname{depth}~R/(I(G)\colon y_g),\operatorname{depth}~R/(I(G), y_g)+1\} = g\]
whence $R/I(G)$ is Cohen-Macaulay of dimension $g$.
\end{proof}
Cohen-Macaulay trees, first studied in \cite{Vi2}, can also be described in terms of $\mathcal D$:
\begin{theorem} Let $G$ be a tree satisfying $(1)$ and $(2)$.
Then $G$ is a Cohen-Macaulay tree if and only if $\mathcal D$ is a
tree such that every vertex $x_i$ of $\mathcal D$ is either a {\it
source\/} {\rm (}i.e., has only arrows leaving $x_i${\rm )} or a
{\it sink\/}
{\rm (}i.e.,
has only arrows entering $x_i${\rm )}.
\end{theorem}
\begin{proof}
($\Rightarrow$)
Since a tree is bipartite and $G$ is Cohen-Macaulay, Corollary \ref{c.hhz}
implies that $\mathcal{D}$ is both acyclic and transitive.
Suppose there is a vertex $x_i$ that is neither a sink nor a source, i.e.,
there is an arrow entering $x_i$ and one leaving $x_i$. Suppose
the arrow entering $x_i$ originates at $x_j$, and the arrow leaving
$x_i$ goes to $x_k$. Note that $x_j \neq x_k$ because
otherwise we would have a cycle in the acyclic graph $\mathcal{D}$.
Because $\mathcal{D}$ is transitive, the directed edge $(x_j,x_k)$
also belongs to $\mathcal{D}$. But then the induced graph
on the vertices $\{x_j,y_i,x_i,y_k\}$ in $G$ forms the cycle $\mathcal{C}_4$,
contradicting the fact that $G$ is a tree.
$(\Leftarrow)$ The hypotheses on $\mathcal{D}$ imply $\mathcal{D}$
is acyclic and transitive, so apply Corollary \ref{c.hhz}.
\end{proof}
\section{Clutters with the free vertex property are shellable}
We now extend the scope of our paper to include a special
family of hypergraphs called clutters. The results of this section
allow us to give a new proof to a result of Faridi on the
sequentially Cohen-Macaulayness
of simplicial forests.
A {\it clutter\/} $\mathcal C$ with
vertex set $X=\{x_1,\ldots,x_n\}$ is a family of subsets of $X$,
called edges, none of which is included in another. The set of
vertices and edges of $\mathcal C$ are denoted by $V_{\mathcal C}$ and
$E_{\mathcal C}$ respectively. A basic example
of a clutter is a graph. Note that a clutter is an example of a hypergraph
on the vertex set $X$; a clutter is sometimes called a simple hypergraph,
as in \cite{HVT}. For a thorough study of clutters from the point of
view of combinatorial optimization, including 18 conjectures in the
area, see \cite{cornu-book}.
Let $R=k[x_1,\ldots,x_n]$ be a polynomial ring
over a field $k$ and let $I$ be an ideal
of $R$ minimally generated by a finite set
$\{x^{v_1},\ldots,x^{v_q}\}$ of
square-free monomials. As usual we
use $x^a$ as an abbreviation for $x_1^{a_1} \cdots x_n^{a_n}$,
where $a=(a_1,\ldots,a_n)\in \mathbb{N}^n$. Note that the entries of
each $v_i$ are in $\{0,1\}$. We associate to the
ideal $I$ a {\it clutter\/} $\mathcal C$ by taking the set
of indeterminates $V_{\mathcal C}=\{x_1,\ldots,x_n\}$ as the vertex set and
$E_{\mathcal C}=\{S_1,\ldots,S_q\}$ as the edge set, where
$S_i={\rm supp}(x^{v_i})$ is the {\it support\/} of
$x^{v_i}$, i.e., $S_i$ is the set of variables that occur in
$x^{v_i}$. For this
reason $I$ is called the {\it edge ideal\/} of $\mathcal C$ and is denoted
$I = I(\mathcal C)$. Edge ideals of clutters
are also called {\it facet ideals} \cite{Faridi} because
$S_1,\ldots,S_q$ are exactly the facets of the simplicial complex
$\Delta=\langle S_1,\ldots,S_q\rangle$ generated by $S_1,\ldots,S_q$.
A subset $C\subset X$ is a
{\it minimal vertex cover\/} of the clutter $\mathcal C$ if:
(i) every edge of $\mathcal C$ contains at least one vertex of $C$,
and (ii) there is no proper subset of $C$ with the first
property. If $C$ only satisfies condition (i), then $C$ is
called a {\it vertex cover\/} of $\mathcal C$. Notice that
$\mathfrak{p}$ is a minimal prime of $I =I(\mathcal C)$ if and only if
$\mathfrak{p}=(C)$ for some minimal vertex cover $C$ of $\mathcal C$.
In particular, if $D_1,\ldots,D_t$ is a complete list
of the minimal vertex covers of $\mathcal C$, then
\[I(\mathcal C) =(D_1)\cap (D_2)\cap\cdots\cap(D_t).\]
Because $I = I(\mathcal C)$ is a square-free monomial ideal,
it also corresponds to a simplicial complex via
the Stanley-Reisner correspondence \cite{Stanley}.
We let $\Delta_{\mathcal C}$ represent
this simplicial complex.
Note that $F$ is a facet of $\Delta_{\mathcal C}$ if and only if
$X\setminus F$ is a minimal vertex cover of $\mathcal C$.
As for graphs, we may say
that the clutter $\mathcal C$ is {\it shellable\/} if
$\Delta_{\mathcal C}$ is shellable.
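The complement correspondence between minimal vertex covers of $\mathcal C$ and facets of $\Delta_{\mathcal C}$ is easy to check directly on small examples. The following brute-force sketch is an illustration added here (the path-like clutter and all names are ours, not from the text), feasible only for tiny vertex sets:

```python
from itertools import combinations

def minimal_vertex_covers(vertices, edges):
    """Enumerate the minimal vertex covers of a clutter by brute force.

    A cover meets every edge; it is minimal if no proper subset is a cover.
    """
    def is_cover(C):
        return all(C & e for e in edges)
    covers = [frozenset(C) for r in range(len(vertices) + 1)
              for C in map(set, combinations(vertices, r)) if is_cover(C)]
    # keep only the inclusion-minimal covers
    return {C for C in covers if not any(D < C for D in covers)}

# A small clutter on {1,...,4} with edges {1,2}, {2,3}, {3,4}.
V = {1, 2, 3, 4}
E = [{1, 2}, {2, 3}, {3, 4}]
covers = minimal_vertex_covers(V, E)
# Facets of the Stanley-Reisner complex are complements of minimal covers.
facets = {frozenset(V - C) for C in covers}
```

Here the minimal covers are $\{2,3\}$, $\{2,4\}$, $\{1,3\}$, and the facets of $\Delta_{\mathcal C}$ are their complements.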
\begin{lemma}\label{induced-shelling} Let $\mathcal{C}$ be a clutter
with minimal vertex covers $D_1,\ldots,D_t$.
If $\Delta_{\mathcal{C}}$ is shellable and $A\subset
V_{\mathcal{C}}$ is a set
of vertices, then the Stanley-Reisner complex $\Delta_{I'}$ of
the ideal
$$
I'=\bigcap_{\scriptstyle D_i\cap A=\emptyset}(D_i)
$$
is shellable with respect to the linear ordering of the facets of
$\Delta_{I'}$ induced by the shelling of the simplicial complex
$\Delta_{\mathcal{C}}$.
\end{lemma}
\begin{proof} Let $H_1,\ldots,H_t$ be a shelling of $\Delta_{\mathcal{C}}$. We may
assume that $H_i=V_{\mathcal{C}}\setminus D_i$ for all $i$. Let $H_i$ and $H_j$ be
two facets of $\Delta_{I'}$ with
$i<j$, i.e., $A\cap D_i=\emptyset$ and $A\cap D_j=\emptyset$.
By the shellability of $\Delta_{\mathcal{C}}$, there
is an $x\in H_j\setminus H_i$ and an $\ell<j$ such that $H_j\setminus
H_\ell=\{x\}$. It suffices to prove that
$D_\ell\cap A=\emptyset$. If $D_\ell\cap A\neq\emptyset$, pick $z\in
D_\ell \cap A$. Then $z\notin D_i\cup D_j$ and $z\in H_i\cap H_j$.
Since $z\notin H_\ell$ (otherwise $z\notin D_\ell$, a contradiction),
we get $z\in H_j\setminus H_\ell$, i.e., $z=x$, a contradiction
because $x\notin H_i$.
\end{proof}
An ideal $I'$ is called a {\it minor\/} of $I$ if there is a subset
$X'=\{x_{i_1},\ldots,x_{i_r},x_{j_1},\ldots,x_{j_s}\}$ of the set of
variables $X=\{x_1,\ldots,x_n\}$ such that $I'$ is a proper ideal of
$R'=k[X\setminus X']$ that can be obtained from a generating set of
$I$ by setting
$x_{i_k}=0$ and $x_{j_\ell}=1$ for all $k,\ell$. The ideal $I$ is
also considered to be a minor. A {\it minor\/} of $\mathcal C$
is a clutter ${\mathcal C}'$ on the vertex set
$V_{\mathcal{C}'}=X\setminus X'$
that corresponds to a minor $(0)\subsetneq
I'\subsetneq R'$. Notice that the edges of ${\mathcal C}'$ are obtained from $I'$
by considering the unique set
of square-free monomials of $R'$ that minimally generate $I'$.
For use below we say $x_i$ is a {\it free variable\/} (resp. {\it free
vertex}) of $I$ (resp. ${\mathcal C}$) if $x_i$
only appears in one of the monomials $x^{v_1},\ldots,x^{v_q}$ (resp. in
one of the edges of $\mathcal C$). If all the minors of
$\mathcal C$ have free vertices, we say that ${\mathcal C}$
has the {\it free vertex property}. Note that if $\mathcal C$
has the free vertex property, then so do all of its minors.
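Since minors arise by setting variables to $0$ or $1$ and then passing to minimal generators, the free vertex property can be tested by brute force on small clutters. The sketch below is our illustration, under our own conventions (edges as frozensets; an assignment that turns a generator into a unit is discarded, since then $I'$ is not proper):

```python
from itertools import combinations

def minimize(edges):
    """Keep only the inclusion-minimal edges (minimal monomial generators)."""
    return {e for e in edges if not any(f < e for f in edges)}

def minor(edges, zeros, ones):
    """Minor of a clutter: x = 0 deletes every edge through x,
    x = 1 deletes x from the remaining edges."""
    kept = [e - ones for e in edges if not (e & zeros)]
    if any(not e for e in kept):  # a generator became 1: I' is not proper
        return None
    return minimize(set(kept))

def has_free_vertex(edges):
    """A free vertex occurs in exactly one edge."""
    vertices = set().union(*edges)
    return any(sum(v in e for e in edges) == 1 for v in vertices)

def free_vertex_property(edges):
    """Check that the clutter and all of its minors have a free vertex."""
    vertices = sorted(set().union(*edges))
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            for k in range(len(subset) + 1):
                for Z in combinations(subset, k):
                    zeros, ones = frozenset(Z), frozenset(subset) - set(Z)
                    m = minor(edges, zeros, ones)
                    if m and not has_free_vertex(m):
                        return False
    return True

path = {frozenset(e) for e in [(1, 2), (2, 3)]}              # has the property
triangle = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3)]}  # does not
```

The triangle fails already for the clutter itself, since each of its vertices lies in two edges.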
\begin{lemma}\label{covers-xn} Let $x_n$ be a free variable of $I = I(\mathcal{C})
= (x^{v_1},\ldots,x^{v_{q-1}},x^{v_q})$,
and let $x^{v_q}=x_nx^u$. {\rm (a)} If $\mathcal{C}_1$ is the clutter associated
to $J=(x^{v_1},\ldots,x^{v_{q-1}})$, then $C$ is a minimal vertex
cover of $\mathcal{C}$ containing $x_n$ if and only if
$C\cap{\rm supp}(x^{u})=\emptyset$ and $C=\{x_n\}\cup C'$ for some
minimal vertex cover $C'$ of $\mathcal{C}_1$. {\rm (b)} If $\mathcal{C}_2$
is the clutter associated
to $L=(x^{v_1},\ldots,x^{v_{q-1}},x^u)$, then $C$ is a minimal vertex
cover of $\mathcal{C}$ not containing $x_n$ if and only if $C$ is a
minimal vertex cover of $\mathcal{C}_2$.
\end{lemma}
\begin{proof} (a) Assume that $C$ is a minimal vertex cover of
$\mathcal{C}$ containing $x_n$. If $C\cap{\rm
supp}(x^{u})\neq\emptyset$, then $C\setminus\{x_n\}$ is a vertex cover
of $\mathcal{C}$, a contradiction. Thus $C\cap{\rm
supp}(x^{u})=\emptyset$. Hence it suffices to notice that
$C'=C\setminus\{x_n\}$ is a minimal
vertex cover of $\mathcal{C}_1$. The converse also follows readily.
(b) Assume that $C$ is a minimal vertex cover of
$\mathcal{C}$ not containing $x_n$. Let $x^a$ be a minimal generator
of $I(\mathcal{C}_2)$; then either $x^u$ divides $x^a$ or
$x^a=x^{v_i}$ for some $i<q$. Then clearly $C\cap {\rm
supp}(x^a)\neq\emptyset$ because $C\cap A\neq\emptyset$, where
$A={\rm supp}(x^u)$. Thus $C$ is a vertex cover of $\mathcal{C}_2$.
To prove that $C$ is minimal take $C'\subsetneq C$. We must show that
there is an edge of $\mathcal{C}_2$ not covered by $C'$. As $C$ is a
minimal vertex cover of $\mathcal{C}$, there is $x^{v_i}$ such
that ${\rm supp}(x^{v_i})\cap C'=\emptyset$. If $x^{v_i}$ is a minimal
generator of $I(\mathcal{C}_2)$, there is nothing to prove; otherwise
$x^u$ divides $x^{v_i}$ and the edge $A$ of $\mathcal{C}_2$ is not
covered by $C'$. The converse also follows readily.
\end{proof}
\begin{theorem} \label{fvp}
If the clutter $\mathcal{C}$ has the free vertex property, then
$\Delta_{\mathcal{C}}$ is shellable.
\end{theorem}
\begin{proof} We proceed by induction on the number of vertices of
$\mathcal C$. Let $x_n$ be a free variable of $I=I(\mathcal{C}) =
(x^{v_1},\ldots,x^{v_{q-1}},x^{v_q})$. We
may assume that $x_n$ occurs in $x^{v_q}$. Hence we can write
$x^{v_q}=x_nx^u$ for some $x^u$ such that
$x_n\notin{\rm supp}(x^u)$. For use below we set $A={\rm supp}(x^u)$.
Consider the ideals $J=(x^{v_1},\ldots,x^{v_{q-1}})$ and
$L=(J,x^u)$. Then
$J=I({\mathcal C}_1)$ and $L=I({\mathcal C}_2)$,
where ${\mathcal C}_1$ and ${\mathcal C}_2$ are the clutters
defined by the ideals $J$ and $L$, respectively.
Notice that $J$ and $L$ are minors of
the ideal $I$ obtained by setting $x_n=0$
and $x_n=1$, respectively. The vertex set of $\mathcal{C}_i$ is
$V_{\mathcal{C}_i}=X\setminus\{x_n\}$ for $i=1,2$. Thus $\Delta_{\mathcal{C}_1}$ and
$\Delta_{\mathcal{C}_2}$ are
shellable by the induction hypothesis. Let $F_1,\ldots,F_r$ be the facets
of $\Delta_{\mathcal C}$ that contain $x_n$ and let $G_1,\ldots,G_s$ be the facets
of $\Delta_{\mathcal C}$ that do not contain $x_n$. Set
$C_i=X\setminus G_i$ and $C_i'=C_i\setminus \{x_n\}$ for
$i=1,\ldots,s$. Then $C_1,\ldots,C_s$ is the set of minimal vertex
covers of $\mathcal{C}$ that contain $x_n$, and by
Lemma~\ref{covers-xn}(a) $C_1',\ldots,C_s'$ is the set of minimal vertex
covers of $\mathcal{C}_1$ that do not intersect $A$. One has the
equality $G_i=V_{\mathcal{C}_1}\setminus C_i'$ for all $i$. Hence, by
the shellability of $\Delta_{\mathcal{C}_1}$ and using
Lemma~\ref{induced-shelling}, we may assume that
$G_1,\ldots,G_s$ is a shelling for the simplicial complex generated by
$G_1,\ldots,G_s$. By Lemma~\ref{covers-xn}(b) one has that $C$ is a minimal vertex
cover of $\mathcal{C}$ not containing $x_n$ if and only if $C$ is a
minimal vertex cover of $\mathcal{C}_2$. Thus $F = F' \cup \{x_n\}$ is a facet
of $\Delta_{\mathcal{C}}$ containing $x_n$ if and only if
$F'$ is a facet of $\Delta_{\mathcal{C}_2}$.
By induction we may also assume that
$F'_1=F_1\setminus\{x_n\},\ldots,F'_r=F_r\setminus\{x_n\}$
is a shelling of $\Delta_{\mathcal{C}_2}$. We now prove that
$$
F_1,\ldots,F_r,G_1,\ldots,G_s ~~\mbox{with $F_i = F'_i \cup \{x_n\}$}
$$
is a shelling of $\Delta_{\mathcal{C}}$. We need only show that
given $G_j$ and $F_i$ there is $a\in G_j\setminus F_i$ and $F_\ell$
such that $G_j\setminus F_\ell=\{a\}$. We can write
$$
G_j=X\setminus C_j\ \mbox{ and }\ F_i=X\setminus C_i,
$$
where $C_j$ (resp. $C_i$) is a minimal vertex cover of $\mathcal C$
containing $x_n$ (resp. not containing $x_n$). Recall that
$A={\rm supp}(x^u)$ is an edge of $\mathcal{C}_2$. Notice the
following: (i) $C_j=C_j'\cup\{x_n\}$ for some minimal vertex cover $C_j'$
of ${\mathcal C}_1$ such that $A\cap C_j'=\emptyset$, and (ii) $C_i$
is a minimal vertex cover of
${\mathcal C}_2$. From (i) we get that $A\subset G_j$. Observe that
$A\not\subset F_i$, otherwise $A\cap C_i=\emptyset$, a contradiction
because $C_i$ must cover the edge $A={\rm supp}(x^u)$. Hence there is
$a\in A\setminus F_i$ and $a\in G_j\setminus F_i$. Since
$C_j'\cup\{a\}$ is a vertex cover of $\mathcal{C}$, there is
a minimal vertex cover $C_\ell$ of $\mathcal{C}$ contained in
$C_j'\cup\{a\}$. Clearly $a\in C_\ell$ because $C_\ell$ has to cover
$x^u$ and $C_j'\cap A=\emptyset$. Thus
$F_\ell=X\setminus C_\ell$ is a facet of $\Delta_{\mathcal{C}}$ containing $x_n$.
To finish the proof we now prove that $G_j\setminus F_\ell=\{a\}$. We
know that $a\in G_j$. If $a\in F_\ell$, then $a\notin C_\ell$, a
contradiction. Thus $a\in G_j\setminus F_\ell$. Conversely take $z\in
G_j\setminus F_\ell$. Then $z\notin C_j'\cup\{x_n\}$ and
$z\in C_\ell\subset C_j'\cup\{a\}$. Hence $z=a$, as required. \end{proof}
The $n\times q$ matrix $A$ with column vectors
$v_1,\ldots,v_q$ is called the {\it incidence
matrix\/} of $\mathcal C$. This matrix has entries in $\{0,1\}$. We
say that $A$ (resp. $\mathcal{C}$) is a {\it totally balanced\/}
matrix (resp. clutter) if $A$ has no square
submatrix of order at least $3$ with exactly two $1$'s in
each row and column. According to \cite[Corollary~83.3a]{Schr2} a
totally balanced clutter satisfies the free vertex property. Thus we
obtain:
\begin{corollary} If $\mathcal{C}$ is a totally balanced clutter,
then $\Delta_{\mathcal{C}}$ is shellable.
\end{corollary}
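For small incidence matrices the totally balanced condition can be verified straight from the definition. The following brute-force check is an illustrative sketch (the matrices and names are ours; the search over submatrices is exponential, so it is only for tiny examples):

```python
from itertools import combinations

def is_totally_balanced(A):
    """Brute-force check: no square submatrix of order >= 3 has
    exactly two 1's in every row and every column."""
    m, n = len(A), len(A[0])
    for k in range(3, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if all(sum(r) == 2 for r in sub) and \
                   all(sum(r[j] for r in sub) == 2 for j in range(k)):
                    return False
    return True

# Incidence matrix of the triangle clutter {12, 23, 13}:
# rows are vertices, columns are edges.
triangle = [[1, 0, 1],
            [1, 1, 0],
            [0, 1, 1]]
# Incidence matrix of the path clutter {12, 23}.
path = [[1, 0],
        [1, 1],
        [0, 1]]
```

The triangle matrix itself is a $3\times 3$ submatrix with two $1$'s in each row and column, so the triangle clutter is not totally balanced (consistent with its lack of a free vertex), while the path clutter is.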
Faridi \cite{Faridi} introduced the notion of a leaf for a simplicial
complex $\Delta$. Precisely, a facet $F$ of $\Delta$ is a {\it leaf} if $F$ is
the only facet of $\Delta$, or there exists a facet $G \neq F$ in $\Delta$ such that
$F \cap F' \subset F \cap G$ for all facets $F' \neq F$ in $\Delta$.
A simplicial complex $\Delta$ is a {\it simplicial forest} if every nonempty
subcollection of $\Delta$, i.e., every subcomplex whose
facets are also facets of $\Delta$, contains a leaf.
We can translate Faridi's definition into hypergraph language; we call the
translated version of Faridi's leaf an $f$-leaf.
\begin{definition} An edge $E$ of a clutter $\mathcal{C}$ is an {\it $f$-leaf} if $E$
is the only edge of $\mathcal{C}$, or if there exists an edge $H$ of
$\mathcal{C}$ such that
$E \cap E' \subset E \cap H$ for all edges $E' \neq E$ of $\mathcal{C}$.
A clutter $\mathcal{C}$ is an $f$-{\it forest} if every
subclutter of $\mathcal{C}$, including $\mathcal{C}$ itself, contains
an $f$-leaf.
\end{definition}
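The $f$-leaf and $f$-forest conditions are likewise directly checkable on small clutters; the sketch below is our illustration (examples hypothetical), testing every subclutter for an $f$-leaf:

```python
from itertools import combinations

def is_f_leaf(E, edges):
    """E is an f-leaf if it is the only edge, or some edge H != E
    contains E's intersection with every other edge."""
    others = [F for F in edges if F != E]
    if not others:
        return True
    return any(all(E & F <= E & H for F in others) for H in others)

def is_f_forest(edges):
    """Every nonempty subclutter must contain an f-leaf (brute force)."""
    edges = list(edges)
    for r in range(1, len(edges) + 1):
        for sub in combinations(edges, r):
            if not any(is_f_leaf(E, sub) for E in sub):
                return False
    return True

path = [frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]]
triangle = [frozenset(e) for e in [(1, 2), (2, 3), (1, 3)]]
```

For the triangle, no edge can serve as an $f$-leaf (no edge $H$ contains both pairwise intersections), so it is not an $f$-forest; the path is.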
In \cite[Theorem~3.2]{hhtz} it is shown that $\mathcal{C}$ is an $f$-forest if
and only if $\mathcal{C}$ is a totally balanced clutter. Thus we
obtain:
\begin{corollary}\label{maintheorem-simplicial}
If the clutter $\mathcal{C}$ is an $f$-forest,
then $\Delta_{\mathcal{C}}$ is shellable.
\end{corollary}
We now recover the main
result of Faridi \cite{Faridi}:
\begin{corollary} Let $I = I(\Delta)$ be the facet ideal
of a simplicial forest. Then
$R/I(\Delta)$ is
sequentially Cohen-Macaulay.
\end{corollary}
\begin{proof}
If $\Delta = \langle F_1,\ldots,F_s\rangle$, then $I(\Delta)$
is also the edge ideal of the clutter $\mathcal{C}$ whose
edge set is $E_{\mathcal{C}} = \{F_1,\ldots,F_s\}$. Now apply
Corollary \ref{maintheorem-simplicial}
and Theorem \ref{shellable->scm}.
\end{proof}
\begin{remark} Since submitting this paper, Soleyman Jahan and Zheng
\cite[Theorem 3.4]{SJZ} have given a generalization of Theorem \ref{fvp}
using the notion of a {\it pretty clean monomial ideal}.
\end{remark}
\noindent
{\bf Acknowledgments.} We gratefully acknowledge the computer algebra
system {\tt CoCoA} \cite{Co} which was invaluable in our work on this paper.
The first author also acknowledges the financial support of NSERC;
the second author acknowledges the financial support of
CONACyT grant 49251-F and SNI. We also thank the referees for their
careful reading of the paper and for the improvements that
they suggested.
\bibliographystyle{plain}
| {
"timestamp": "2007-11-06T14:57:09",
"yymm": "0701",
"arxiv_id": "math/0701296",
"language": "en",
"url": "https://arxiv.org/abs/math/0701296",
"abstract": "Associated to a simple undirected graph G is a simplicial complex whose faces correspond to the independent sets of G. We call a graph G shellable if this simplicial complex is a shellable simplicial complex in the non-pure sense of Bjorner-Wachs. We are then interested in determining what families of graphs have the property that G is shellable. We show that all chordal graphs are shellable. Furthermore, we classify all the shellable bipartite graphs; they are precisely the sequentially Cohen-Macaulay bipartite graphs. We also give an recursive procedure to verify if a bipartite graph is shellable. Because shellable implies that the associated Stanley-Reisner ring is sequentially Cohen-Macaulay, our results complement and extend recent work on the problem of determining when the edge ideal of a graph is (sequentially) Cohen-Macaulay. We also give a new proof for a result of Faridi on the sequentially Cohen-Macaulayness of simplicial forests.",
"subjects": "Combinatorics (math.CO); Commutative Algebra (math.AC)",
"title": "Shellable graphs and sequentially Cohen-Macaulay bipartite graphs"
} |
https://arxiv.org/abs/1708.08532 | Two remarks on Wall's D2 problem | If a finite group $G$ is isomorphic to a subgroup of $SO(3)$, then $G$ has the D2-property. Let $X$ be a finite complex satisfying Wall's D2-conditions. If $\pi_1(X)=G$ is finite, and $\chi(X) \geq 1-Def(G)$, then $X \vee S^2$ is simple homotopy equivalent to a finite $2$-complex, whose simple homotopy type depends only on $G$ and $\chi(X)$. | \section{Introduction}
\label{sect: introduction}
In \cite[\S 2]{wall-finiteness1}, C.~T.~C.~Wall initiated the study of the relations between homological and geometrical dimension conditions for finite $CW$-complexes. In particular, a finite complex $X$ \emph{satisfies Wall's \textup{D2}-conditions} if $H_i(\widetilde X) =0$, for $i >2$, and $H^{3}(X; \mathcal B) = 0$, for all coefficient bundles $\mathcal B$. Here $\widetilde X$ denotes the universal covering of $X$. If these conditions hold, we will say that $X$ is a
\emph{{\textup{D2}-complex}}. If every {\textup{D2}-complex}\ with fundamental group $G$ is homotopy equivalent to a finite $2$-complex, then we say that \emph{$G$ has the \textup{D2}-property}.
In \cite[p.~64]{wall-finiteness1}, Wall proved that a finite complex $X$ satisfying the D2-conditions is homotopy equivalent to a finite $3$-complex. We will therefore assume that all our D2-complexes have $\dim X \leq 3$.
The D2 problem for a finitely-presented group $G$ asks whether every finite complex $X$ with fundamental group $G$ which satisfies the D2-conditions is homotopy equivalent to a finite $2$-complex. The D2 problem has been actively studied for finite groups, but answered affirmatively only in a limited number of cases (see \cite{hog-angeloni-metzler1}, \cite{johnson2} for references to the literature on $2$-complexes and the D2-problem, and compare \cite{Mannan:2009a}, \cite{Jin:2015},
\cite{ye:2015} for some more recent work).
In this note, I make two remarks concerning the (stable) solution of the D2-problem and cancellation, based on my joint work with Matthias Kreck \cite[Theorem B]{hk4}. I am indebted to Dr.~W.~H.~Mannan for asking about this connection some years ago.
For $G$ a finitely presented group, let $\Def(G)$ denote the \emph{deficiency of $G$}, defined as the maximum value of the number of generators minus the number of relations over all finite presentations of $G$. We note that $1-\Def(G)$ is the minimal Euler characteristic possible for a finite $2$-complex with fundamental group $G$.
Swan defined $\mu_2(G)$ as the minimum of the numbers $\mu_2(\mathcal F) =f_2 - f_1 + f_0$, where $f_i$ are the ranks of the finitely generated free $\bZ G$-modules $F_i$ in an exact sequence
$$ F_2 \to F_1 \to F_0 \to \mathbb Z \to 0.$$
In general, Swan \cite[Proposition 1]{Swan:1965} noted that $\mu_2(G) \leq 1-\Def(G)$. For a finite D2-complex $X$, we have the Euler characteristic inequality $\chi(X) \geq \mu_2(G)$ (see Section \ref{sec:one} for details). In addition, $\mu_2(G) \geq 1$ for $G$ a finite group by \cite[Corollary 1.3]{Swan:1965}.
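As a concrete, standard illustration of these inequalities (our remark, not taken from the text): for a finite cyclic group all three quantities coincide.

```latex
% G = Z/n has the presentation <x | x^n>, so Def(Z/n) >= 1 - 1 = 0. Since
% 1 <= mu_2(G) <= 1 - Def(G) for a finite group, all the inequalities collapse:
\Def(\mathbb{Z}/n) = 0, \qquad
\mu_2(\mathbb{Z}/n) = 1 - \Def(\mathbb{Z}/n) = 1,
% so the minimal Euler characteristic of a finite 2-complex with fundamental
% group Z/n is 1, realized by the presentation complex of <x | x^n>.
```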
\begin{thma} Let $X$ be a finite complex satisfying the \textup{D2}-conditions, and assume that $G:=\pi_1(X)$ is a finite group. Then
\begin{enumerate}
\item if $\chi(X) > 1-\Def(G) $, $X$ is simple homotopy equivalent to a finite $2$-complex;
\item If $\chi(X) =1-\Def(G)$,
$ X \vee S^2$ is simple homotopy equivalent to a finite $2$-complex.
\end{enumerate}
In case \textup{(i)} the simple homotopy type of $X$ depends only on $\pi_1(X)$ and $\chi(X)$.
\end{thma}
The uniqueness part is a direct application of \cite[Theorem B]{hk4}, since the resulting $2$-complexes have non-minimal Euler characteristic. We remark that the unpublished work of Browning \cite{Browning:1978} implies the corresponding weaker statements for homotopy equivalence, rather than simple homotopy equivalence (see Corollary \ref{cor:browning}).
\begin{remark} A stable solution of the problem for D2-complexes with any finitely presented fundamental group was first given by Cohen \cite[Theorem 1]{Cohen:1978}: if $X$ is a D2-complex, then there exists an integer $r\geq 0$ such that the stabilized complex
$ X \vee r(S^2)$ is homotopy equivalent to a finite $2$-complex.
This result and the foundational work of
J.~H.~C.~Whitehead \cite{whitehead1} shows that any two {\textup{D2}-complexes}\ with isomorphic fundamental groups become stably simple homotopy equivalent after wedging on sufficiently many $2$-spheres. I give a different argument in Lemma \ref{lem:stableequiv} for the stable result, and show that it holds whenever $r\geq b_3(X)$ (compare
\cite[Proposition 3.5]{ye:2015}). Here $b_3(X)$ denotes the number of $3$-cells in $X$.
If the group ring $\bZ G$ is noetherian, then there exists a uniform bound for this stable range, depending only on the fundamental group (see Proposition \ref{prop:noetherian}). This remark applies for example to polycyclic-by-finite fundamental groups.
\end{remark}
\begin{thmb}
Let $G$ be a finite subgroup of $SO(3)$. Then any \textup{D2}-complex with fundamental group $G$ is simple homotopy equivalent to a finite $2$-complex, and $G$ has the \textup{D2}-property.
\end{thmb}
This result is an application of \cite[Theorem 2.1]{hk4}. The result was known for cyclic and dihedral groups
(see \cite{Mannan:2007}, \cite{OShea:2012}, \cite{Mannan:2013}), but the argument given here is more uniform and the tetrahedral, octahedral and icosahedral groups do not seem to have been covered before.
\begin{remark}Brown and Kahn \cite[Theorem 2.1]{brown-kahn1} proved that a {\textup{D2}-complex}\ which is a nilpotent space is homotopy equivalent to a $2$-complex, but this does not appear to settle the D2 problem for nilpotent fundamental groups.
\end{remark}
\begin{remark} A result essentially contained in the proof of Wall \cite[Theorem 4]{wall-finiteness2} shows that there exist finite D2-complexes $X$, with $\pi_1(X) = G$ and $\chi(X) = \mu_2(G)$ realizing this minimum value, for every finitely presented group $G$. Since $\mu_2(G) \leq 1- \Def(G)$ by Swan \cite[Proposition 1]{Swan:1965}, a \emph{necessary} condition for any group $G$ to have the D2-property is that $\mu_2(G) = 1-\Def(G)$.
\end{remark}
\begin{acknowledgement} I would like to thank Jens Harlander and Jonathan Hillman for helpful comments and references.
\end{acknowledgement}
\section{Cancellation and the D2 Problem}\label{sec:one}
We assume that $X$ is a finite, connected $3$-complex, with fundamental group $G = \pi_1(X)$, satisfying the D2-conditions. We use the following notation for the chain complex $C(\widetilde X;\mathbb Z)$ of the universal covering:
$$C(X) : 0 \to C_3 \xrightarrow{\partial_3} C_2 \xrightarrow{\partial_2} C_1\xrightarrow{\partial_1} C_0 \to \mathbb Z \to 0,$$
considered as a chain complex of finitely-generated, free $\Lambda$-modules relative to a single $0$-cell as base-point, where $\Lambda = \bZ G$ is the integral group ring.
The boundary map $\partial_3$ is injective because $H_3(\widetilde X) = 0$. Let $B_3= \Image(\partial_3)$, with $j\colon B_3 \to C_2$ the inclusion map, and consider the boundary map $\partial_3\colon C_3 \to B_3$ as defining a $3$-cocycle. Since $H^3(X; B_3) = 0$, there is a $\Lambda$-module homomorphism $\phi\colon C_2 \to B_3$ such that $\phi\circ j = \mathrm{id}$. We have an exact sequence
$$ 0 \to C_3 \to \pi_2(K) \to \pi_2(X) \to 0$$
where $K \subset X$ denotes the $2$-skeleton (since $\pi_2(K) = Z_2 = \ker \partial_2$).
It follows that
$C_3$ is a direct summand of $\pi_2(K)$, and hence $\pi_2(X)$ is a representative of the stable class $\Omega^3(\mathbb Z)$. More explicitly, the map $\phi$ induces a direct sum splitting $C_2 = \Image(\partial_3) \oplus P$, and $P\cong C_2/\Image(\partial_3)$ is a finitely-generated, stably-free $\Lambda$-module since $C_3 \cong \Image(\partial_3)$ is a finitely-generated, free $\Lambda$-module. This gives a
commutative diagram:
$$\xymatrix@R-4pt@C-4pt{&0\ar[d]&0\ar[d]&&\cr
&C_3\ar[d]\ar@{=}[r]^{\partial_3}&B_3\ar[d]&&\cr
0 \ar[r]&Z_2\ar[d]\ar[r]&C_2\ar[d]\ar[r]&B_2 \ar@{=}[d]\ar[r]&0\cr
0 \ar[r]&\pi_2(X)\ar[d]\ar[r]&C_2/B_3\ar[d]\ar[r]&B_2 \ar[r]&0\cr
&0&0&&}
$$
where the vertical sequences are split exact, and hence a resolution
$$0 \to \pi_2(X) \to P \to C_1 \to C_0 \to \mathbb Z\to 0\ .$$
By a sequence of elementary expansions (on the chain complex these are just the direct sum with copies of
$\xymatrix@C-15pt{\Lambda \ar@{=}[r]& \Lambda}$ in dimensions 1 and 2), we may assume that $P$ is a finitely-generated, free $\Lambda$-module. This operation doesn't change the (simple) homotopy type of $X$. The following result has also been observed in \cite{Cohen:1978}, \cite[Theorem 3.5]{ye:2015}.
Our proof uses the techniques of \cite[\S 2]{hk4}.
\begin{lemma}\label{lem:stableequiv}
The stabilized complex $X \vee r(S^2)$, with $r=b_3(X)$, is simple homotopy equivalent to a finite $2$-complex $K$.
\end{lemma}
\begin{proof} Let $u \colon K \subset X$ denote the inclusion of the $2$-skeleton of $X$, so that we have the identification $\pi_2(K) \cong \pi_2(X) \oplus C_3$ discussed above. We further identify
\eqncount
\begin{equation}\label{eq:twotwo}
\pi_2(K \vee r(S^2)) \cong \pi_2(K) \oplus \Lambda^r\cong \pi_2(X) \oplus C_3 \oplus F
\end{equation}
and fix free $\Lambda$-bases $\{e_1, \dots, e_r\}$ for $C_3 \cong \Lambda^r$, and $\{f_1, \dots, f_r\}$ for $F \cong \Lambda^r$.
The same notation $\{e_i\}$ and $\{f_j\}$ will also be used for continuous maps $S^2 \to K \vee r(S^2)$ in the homotopy classes
of $ \pi_2(K \vee r(S^2))$ defined by these basis elements. Notice that the maps $f_j\colon S^2 \to K \vee r(S^2)$ may be chosen
to represent the inclusions of the $S^2$ wedge factors.
\smallskip
We first claim that there exists a (simple) self-homotopy equivalence
$$h\colon K \vee r(S^2) \to K \vee r(S^2)$$
such that the induced isomorphism
$$h_*\colon \pi_2(K \vee r(S^2)) \xrightarrow{\cong} \pi_2(K \vee r(S^2))$$
has the property
$h_*(e_i) = f_i$, for $1\leq i \leq r$, with respect to the chosen bases
in the right-hand side of \eqref{eq:twotwo}, and induces the identity on the summand $\pi_2(X)$.
The construction of the required self-homotopy equivalences is given in \cite[p.~101]{hk4}, where the realization of the group of elementary automorphisms
$E(P_1, L\oplus P_0)$ is studied. In this notation $P_0$, $P_1$ are free modules of rank one, and $L$ is an arbitrary $\Lambda$-module. The basic construction is to realize automorphisms of the form $1+f$ and $1 +g$, where
$f\colon L\oplus P_0 \to P_1$ and $g\colon P_1 \to L \oplus P_0$ are arbitrary $\Lambda$-homomorphisms.
We apply this to the sub-module $L \oplus \Lambda\cdot e_1\oplus \Lambda\cdot f_1$, where $L = \pi_2(X)$, and realize the automorphism $\mathrm{id}_L \oplus \alpha$ with $\alpha(e_1) = -f_1$ and $\alpha(f_1) = e_1$ via the composition
$$\mmatrix{{\hphantom{-} 0}}{1}{-1}{0}=\mmatrix{{\hphantom{-} 1}}{0}{-1}{1}\mmatrix{{1}}{1}{0}{1}\mmatrix{{\hphantom{-} 1}}{0}{-1}{1}.$$
We can now construct a homotopy equivalence $f\colon X \vee r(S^2)\to K$, by extending the simple homotopy equivalence $h \colon K \vee r(S^2) \to K \vee r(S^2)$ over the (stabilized) inclusion
$$u \vee \mathrm{id}\colon K \vee r(S^2) \to X \vee r(S^2)$$
by attaching the $3$-cells of $X$ in the domain, and $3$-cells in the range which cancel the $S^2$ wedge factors. For the attaching maps $[\partial D^3_i ]= e_i$, $1\leq i \leq r$, of the $3$-cells of $X$ we have $h \circ [\partial D^3_i ]= f_i$. Hence we can extend by the identity to $3$-cells attached along the maps $\{f_i\colon S^2 \to K \vee r(S^2)\}$. We obtain
a map
$$h'\colon X\vee r(S^2) \to K \vee r(S^2) \bigcup\{ D^3_i : [\partial D^3_i] = f_i, 1\leq i \leq r\} \simeq K$$
extending $h$. It is easy to check that $h'$ is a (simple) homotopy equivalence.
\end{proof}
An \emph{algebraic $2$-complex over the group ring $\Lambda:=\bZ G$} is a chain complex $(F_*, \partial_*)$ of the form
$$ F_2 \xrightarrow{\partial_2} F_1\xrightarrow{\partial_1} F_0 $$
consisting of an exact sequence of finitely-generated, stably-free $\Lambda$-modules, such that $H_0(F_*) = \mathbb Z$. An \emph{$r$-stabilization} of an algebraic $2$-complex is the result of direct sum with a complex $(E_*, \partial_*)$, where $E_2 = \Lambda^r$ for some $r\geq 0$, $\partial_* = 0$ and $E_i = 0$ for $i \neq 2$. We say that an algebraic $2$-complex is \emph{geometrically realizable} if it is chain homotopy equivalent to the cellular chain complex $C(X)$ of a (geometric) finite $2$-complex $X$ with fundamental group $\pi_1(X) =G$.
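For later use of the notation $\chi(F)$ (our remark; ranks of stably-free $\bZ G$-modules are well-defined since $\bZ G$ has invariant basis number):

```latex
% With F_i of stably-free rank f_i, the Euler characteristic of the
% algebraic 2-complex is
\chi(F_*) \;=\; f_0 - f_1 + f_2,
% and an r-stabilization adds Lambda^r in degree 2, so
\chi(F_* \oplus E_*) \;=\; \chi(F_*) + r .
```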
\begin{lemma}\label{lem:algreal}
Any algebraic $2$-complex $(F_*, \partial_*)$ over $\Lambda = \bZ G$ is geometrically realizable after an $r$-stabilization, for some $r\geq 0$.
\end{lemma}
\begin{proof} We compare the resolution
$$0 \to L \to F_2 \to F_1 \to F_0 \to \mathbb Z\to 0,$$
where $L = \ker\partial_2$, to one obtained from the chain complex
$$0 \to \pi_2(K) \to C_2(K) \to C_1(K) \to C_0(K) \to \mathbb Z \to 0$$
of any finite $2$-complex $K$ with fundamental group $G$. Then Schanuel's Lemma shows that these two resolutions of $\Lambda$-modules (regarded as connected $3$-dimensional chain complexes) are stably chain isomorphic after direct sum with elementary complexes of the form
$ \xymatrix@C-15pt{\Lambda \ar@{=}[r]& \Lambda}$ in degrees $(i, i-1)$ for $1\leq i \leq 3$ (compare \cite[Lemma 3B]{wall-finiteness2}, or \cite[p.~415]{hpy1}).
The stabilizations in degrees $(i, i-1)$ for $i < 3$ produce a complex $(F'_*, \partial'_*)$ of finitely generated free $\Lambda$-modules, and a chain homotopy equivalence $(F'_*, \partial'_*) \simeq (F_*, \partial_*)$. The additional degree $(3,2)$ stabilizations produce a complex $(F''_*, \partial''_*)$, and a chain homotopy equivalence $(F''_*, \partial''_*) \simeq (F_*, \partial_*)\oplus (E_*, \partial_*)$, where $(E_*, \partial_*)$ is a complex concentrated in degree 2 (as defined above).
In other words, the resulting stabilized complex $ (F_*, \partial_*) \oplus (E_*, \partial_*) $ is an $r$-stabilization of $(F_*, \partial_*)$.
The chain homotopy equivalence
$$ (F_*, \partial_*) \oplus (E_*, \partial_*) \simeq C_*(K \vee r(S^2))$$
shows that the algebraic $2$-complex $(F_*, \partial_*)$ is geometrically realizable after $r$-stabilization.
\end{proof}
\begin{corollary}[Wall]\label{cor:wall}\label{cor:algreal}
Every algebraic $2$-complex $F_*$ is chain homotopy equivalent to the chain complex $C_*(X)$ of a \textup{D2}-complex.
\end{corollary}
\begin{proof} The construction produces a chain homotopy equivalence
$$ (F_*, \partial_*) \oplus (E_*, \partial_*) \simeq C_*(K \vee r(S^2))$$
after an $r$-stabilization of $F_*$, and in particular an isomorphism
$L \oplus E_2 \cong \pi_2(K)\oplus \Lambda^r$, for some $r \geq 0$. Then one can attach $3$-cells to $K\vee r(S^2)$, using the images in $\pi_2(K \vee r(S^2))$ of a free basis of the summand $E_2 \cong \Lambda^r$, to produce a D2-complex $X$ and a chain homotopy equivalence $C(X) \simeq F_*$.
\end{proof}
\begin{remark}
The ingredients in the proof of Lemma \ref{lem:algreal} are essentially the same as those used by Wall to prove \cite[Theorem 4]{wall-finiteness2}.
Similar ideas appear in \cite[Appendix B]{johnson2}, \cite[Theorem 2.1]{Mannan:2009}.
\end{remark}
\begin{proof}[The proof of Theorem A]
Let $X$ be a finite $3$-complex which satisfies the D2-conditions. By Lemma \ref{lem:stableequiv}, there exists a finite $2$-complex $K$ and a simple homotopy equivalence $f\colon X' := X \vee r(S^2) \to K$, for any $r\geq b_3(X)$. Now let $G = \pi_1(X)$ be a finite group, and let $K_0$ denote a minimal finite $2$-complex with fundamental group $G$. Then $\chi(K_0) = 1-\Def(G)$, and, after perhaps stabilizing further, we can assume that $K$ is simple homotopy equivalent to a stabilization of $K_0$. We then obtain a simple homotopy equivalence of the form
$$ X \vee r(S^2) \simeq K_0 \vee t(S^2) \vee r(S^2)$$
where $t \geq 0$ provided that $\chi(X) \geq 1-\Def(G) = \chi(K_0)$. We note that the arguments in \cite[\S 2]{hk4} are at first completely algebraic (to obtain cancellation of the $\pi_2$ modules via elementary automorphisms), and then we show as above (compare the proof of \cite[Theorem B]{hk4}) how to realize the necessary elementary automorphisms by simple homotopy equivalences.
If $\chi(X) > \chi(K_0)$, then $t \geq 1$ and we can construct simple self-equivalences of $K_0 \vee t(S^2) \vee r(S^2)$ to cancel the extra $r$ wedge summands of $X \vee r(S^2)$. The resulting $2$-complex will be $K' \simeq K_0 \vee t(S^2)$.
If $\chi(X) = \chi(K_0)$, then $t=0$ but we can perform the same operations after replacing $X$ by $X \vee S^2$, and the resulting $2$-complex will be $K' \simeq K_0 \vee S^2$. In either case, the resulting $2$-complex $K'$ has non-minimal Euler characteristic $\chi(K') > \chi(K_0)$, so its simple homotopy type is uniquely determined by $G$ and $\chi(X)$ (see \cite[Theorem B]{hk4}).
\end{proof}
The techniques used in this proof also give a version for algebraic $2$-complexes (answering a question of Browning \cite[\S 5.6]{Browning:1978}). We recall that an \emph{$s$-basis} for a stably free $\Lambda$-module $M$ is a free $\Lambda$-basis for some stabilization $M \oplus \Lambda^r$ by a free module.
\begin{corollary}\label{cor:browning}
Let $F$ and $F'$ be $s$-based algebraic $2$-complexes over $\Lambda= \bZ G$, where $G$ is a finite group. If $\chi(F) = \chi(F') > \mu_2(G)$, then $F$ and $F'$ are simple chain homotopy equivalent.
\end{corollary}
\begin{proof}
We apply Corollary \ref{cor:algreal} and the method of proof for Theorem A.
\end{proof}
\begin{proof}[The proof of Theorem B]
The same remarks as above apply to the proof of \cite[Theorem 2.1]{hk4}. In addition, we note that $\mu_2(G) = 1-\Def(G)$ for all of the finite subgroups of $SO(3)$. For these groups, $\Def(G) \geq -1$ (see Coxeter \cite[\S 6.4]{Coxeter:1980}), and $\mu_2(G)$ can be estimated by group cohomology using Swan \cite[Theorem 1.1]{Swan:1965}. We can now apply cancellation down to $r=0$ for fundamental groups which are finite subgroups of $SO(3)$. This proves that every algebraic $2$-complex with one of these fundamental groups is geometrically realizable.
\end{proof}
The uniform stability bound for D2-complexes in Theorem A is a special result for finite fundamental groups, based initially on the fact that their integral group rings are finite algebras over the integers. Here is a sample stability result which applies to certain infinite fundamental groups (compare Brown \cite{Brown:1981}).
\begin{proposition}\label{prop:noetherian} Let $G$ be a finitely presented group such that the integral group ring $\bZ G$ is noetherian of Krull dimension $d_G$. If $X$ is a finite complex with $\pi_1(X) = G$ satisfying the \textup{D2}-conditions, then, for $r \geq d_G + 1$, the complex $X \vee r(S^2)$ is simple homotopy equivalent to a finite $2$-complex whose simple homotopy type is uniquely determined by $G$ and $\chi(X)$.
\end{proposition}
\begin{proof}(Sketch) The arguments follow the same outline as those used by Bass \cite[Chap IV.3.5]{Bass:1968}
to prove a cancellation theorem for modules using elementary automorphisms. The ingredients in these arguments were generalized to apply to non-commutative noetherian rings by Magurn, van der Kallen and Vaserstein \cite{Magurn:1988}, and Stafford \cite{Stafford:1977,Stafford:1990} (see also McConnell and Robson \cite[Chap.~11]{McConnell:2001}). The application to $2$-complexes follows by realizing elementary automorphisms by simple homotopy self-equivalences, as in \cite[\S 2]{hk4}.
\end{proof}
\begin{remark} For $G$ finite, the integral group ring $\bZ G$ has Krull dimension $d_G=1$, so the Bass stability bound would be $d_G + 1 = 2$. If $G$ is a polycyclic-by-finite group, the group ring $\bZ G$ is again noetherian and $d_G = h_G + 1 $, where $h_G$ denotes the \emph{Hirsch length} of $G$ (see \cite[6.6.1]{McConnell:2001}). The examples of \cite{Dunwoody:1976}, \cite{Harlander:2006a}, \cite{Harlander:2006}, \cite{Harlander:2011} show that for general infinite fundamental groups (for example, the fundamental group of the trefoil knot), there can be (infinitely) many distinct $2$-complexes with the same Euler characteristic.
\end{remark}
\section{The relation gap problem}
We will conclude by mentioning a related problem. If $F/R$ is a finite presentation for a group $G$, then the action of the free group $F$ by conjugation on the normal subgroup $R$ induces an action of $G$ on the quotient abelian group $R_{ab} := R/[R,R]$. This $\bZ G$-module $R_{ab}$ is called the \emph{relation module} for $G$.
Let $d(\Gamma)$ denote the minimum number of elements needed to generate a group $\Gamma$, and if a group $Q$ acts on $\Gamma$, then let $d_Q(\Gamma)$ denote the minimum number of $Q$-orbits needed to generate $\Gamma$. Note that $d(\Gamma) \geq d_Q(\Gamma)$.
In this notation, $d_F(R)$ is the minimum number of normal generators for $R$, and $d_G(R/[R,R])$ is the minimum number of $\bZ G$-module generators for the module $R_{ab}$.
\begin{definition} For a finite presentation $F/R$ of a group $G$, the \emph{relation gap} is the difference $d_F(R) - d_G(R/[R,R])$. The \emph{relation gap problem} is to decide whether there exists a finite presentation with a positive relation gap.
\end{definition}
The survey articles of Harlander \cite{Harlander:2000,Harlander:2015} provide some key examples (such as those constructed by Bridson and Tweedale \cite{Bridson:2007a}), and a guide to the literature.
A connection to the D2 problem is provided by the following result:
\begin{theorem}[{Dyer \cite[Theorem 3.5]{Harlander:2000}}] Let $G$ be a group with $H^3(G; \bZ G) = 0$. If there exists a finite presentation $F/R$ with a positive relation gap, realizing the deficiency of $G$, then the \textup{D2} property does not hold for $G$.
\end{theorem}
The D2 problem can be considered a generalization of the Eilenberg-Ganea conjecture \cite{Eilenberg:1957}, which states that a group $G$ with cohomological dimension 2 also has geometric dimension 2. If $\text{cd}(G) = 2$ and the classifying space $BG$ is homotopy equivalent to a finite complex, then
$G$ satisfies the Eilenberg-Ganea conjecture provided it has the D2 property.
A striking result of Bestvina and Brady \cite[Theorem 8.7]{Bestvina:1997} shows that either the Eilenberg-Ganea conjecture is false, or there is a counterexample to the Whitehead conjecture, which states that every connected subcomplex of an aspherical 2-complex is aspherical.
| {
"timestamp": "2017-08-30T02:02:14",
"yymm": "1708",
"arxiv_id": "1708.08532",
"language": "en",
"url": "https://arxiv.org/abs/1708.08532",
"abstract": "If a finite group $G$ is isomorphic to a subgroup of $SO(3)$, then $G$ has the D2-property. Let $X$ be a finite complex satisfying Wall's D2-conditions. If $\\pi_1(X)=G$ is finite, and $\\chi(X) \\geq 1-Def(G)$, then $X \\vee S^2$ is simple homotopy equivalent to a finite $2$-complex, whose simple homotopy type depends only on $G$ and $\\chi(X)$.",
"subjects": "Algebraic Topology (math.AT); Geometric Topology (math.GT)",
"title": "Two remarks on Wall's D2 problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806489800368,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7077274274972014
} |
https://arxiv.org/abs/1012.5027 | Moderate deviations via cumulants | The purpose of the present paper is to establish moderate deviation principles for a rather general class of random variables fulfilling certain bounds of the cumulants. We apply a celebrated lemma of the theory of large deviations probabilities due to Rudzkis, Saulis and Statulevicius. The examples of random objects we treat include dependency graphs, subgraph-counting statistics in Erdős-Rényi random graphs and $U$-statistics. Moreover, we prove moderate deviation principles for certain statistics appearing in random matrix theory, namely characteristic polynomials of random unitary matrices as well as the number of particles in a growing box of random determinantal point processes like the number of eigenvalues in the GUE or the number of points in Airy, Bessel, and $\sin$ random point fields. | \section{Introduction}
Since the late seventies, cumulant estimates have been used not only to prove
convergence in law, but also to obtain a more precise asymptotic analysis of the
distribution via rates of convergence and large deviation probabilities,
see e.g. \cite{SaulisStratulyavichus:1989} and the references therein.
In \cite{ERS:2009} it was shown how such bounds can be used to prove a
moderate deviation principle for a class of counting functionals in models of geometric probability.
This paper provides a general approach to show moderate deviation principles
via cumulants.
Let $X$ be a real-valued random variable all of whose absolute moments are finite. Then
$$
\left. \Gamma_j := \Gamma_j(X) :=(-i)^j \frac{d^j}{dt^j} \log \E\bigl[e^{i t X}\bigr] \right|_{t=0}
$$
exists for all $j\in\mathbb N$ and the term is called the {\it $j$th cumulant}
(also called semi-invariant) of $X$. Here and in the following $\E$ denotes the expectation
of the corresponding random variable.
The method of moments yields a corresponding method of cumulants: if
the distribution of $X$ is determined by its moments and $(X_n)_n$ are random variables with finite moments
such that $\Gamma_j(X_n) \to \Gamma_j(X)$ as $n \to \infty$ for every $j \geq 1$, then $(X_n)_n$ converges in distribution to $X$.
Hence if the first cumulant of $X_n$ converges to zero, the second cumulant converges to one,
and all cumulants of $X_n$ of order greater than $2$ vanish in the limit, then the sequence $(X_n)_n$ satisfies a Central Limit Theorem (CLT).
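As a quick numerical illustration (the helper below is our own sketch, not part of the argument), cumulants can be computed from raw moments via the recursion $\Gamma_n = m_n - \sum_{j=1}^{n-1}\binom{n-1}{j-1}\Gamma_j\, m_{n-j}$; for the standard normal all cumulants beyond the second vanish, while every cumulant of a Poisson$(1)$ variable equals $1$:

```python
from math import comb

def cumulants_from_moments(moments):
    """Turn raw moments m_1, ..., m_n into cumulants k_1, ..., k_n via the
    standard recursion  k_n = m_n - sum_{j=1}^{n-1} C(n-1, j-1) k_j m_{n-j}."""
    k = []
    for n in range(1, len(moments) + 1):
        k_n = moments[n - 1] - sum(comb(n - 1, j - 1) * k[j - 1] * moments[n - j - 1]
                                   for j in range(1, n))
        k.append(k_n)
    return k

# Standard normal: raw moments 0, 1, 0, 3, 0, 15 -> cumulants 0, 1, 0, 0, 0, 0.
print(cumulants_from_moments([0, 1, 0, 3, 0, 15]))  # [0, 1, 0, 0, 0, 0]
# Poisson(1): raw moments are the Bell numbers 1, 2, 5, 15 -> all cumulants 1.
print(cumulants_from_moments([1, 2, 5, 15]))        # [1, 1, 1, 1]
```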
Knowing, in addition, exact bounds on the cumulants, one can describe
the asymptotic behaviour more precisely. Let $Z_n$ be a real-valued random
variable with mean $\E Z_n=0$ and variance $\V Z_n=1$ and
\begin{equation} \label{cum1}
|\Gamma_j(Z_n)| \leq \frac{(j!)^{1+\gamma}}{\Delta^{j-2}}
\end{equation}
for all $j=3,4, \ldots$ and all $n \geq 1$ for fixed $\gamma\geq 0$ and $\Delta>0$.
Here and in the following $\V$ denotes the variance of the corresponding random variable.
Denoting the standard normal distribution function by
$$\Phi(x):=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-\frac{y^2}{2}}dy\,,$$
one obtains the following bound for the Kolmogorov distance
$$
\sup_{x\in\mathbb R}\bigl|P(Z_n\leq x)-\Phi(x)\bigr|
\leq c_{\gamma} \, \Delta^{\frac{1}{1+2\gamma}}
$$
where $c_{\gamma}$ is a constant depending only on $\gamma$, see \cite[Lemma 2.1]{SaulisStratulyavichus:1989}.
By this result, the distribution function $F_n$ of $Z_n$ converges uniformly to $\Phi$ as $n \to \infty$.
Hence, when $x=O(1)$ we have
\begin{equation} \label{mainratio}
\lim_{n \to \infty} \frac{1-F_n(x)}{1 -\Phi(x)} = 1.
\end{equation}
One is interested in having, under additional conditions,
such a relation in the case when $x$ depends on $n$ and tends to $\infty$ as $n \to \infty$. In particular,
one asks for conditions under which relation \eqref{mainratio} holds in an interval
$0 \leq x \leq f(n)$, where $f(n)$ is a non-decreasing function with $f(n) \to \infty$.
If the relation holds in such an interval, we call the interval a zone of normal convergence. For
partial sums of i.i.d. random variables with zero mean and finite positive variance, one can show
using Mills' ratio that $f(n)$ may be chosen as $(1- \varepsilon) (\log n)^{1/2}$ for any $0 < \varepsilon <1$,
provided the third absolute moment of $X_1$ is finite (see \cite[Lemma 5.8]{Petrov:book}).
Moreover, \eqref{mainratio} cannot be true in general since for the symmetric binomial distribution the numerator
vanishes for all $x > \sqrt{n}$. For i.i.d. partial sums the classical result due to Cram\'er is that if $\E e^{t |X_1|^{1/2}} < \infty$
for some $t >0$, \eqref{mainratio} holds with $f(n) = o(n^{1/6})$.
In \cite[Chapter 2]{SaulisStratulyavichus:1989}, relations of large deviations of the type \eqref{mainratio} are proved under the
condition \eqref{cum1} on cumulants with a zone of normal convergence of size proportional to $\Delta^{\frac{1}{1 + 2 \gamma}}$, see
Lemma 2.3 in \cite{SaulisStratulyavichus:1989}.
The aim of this paper is to show that under the same type of cumulant condition on random variables $Z_n$, moderate deviation
principles can be deduced. We actually take a detour via large deviation probabilities, showing that under condition
\eqref{cum1} the resulting large deviation probability estimates imply a moderate deviation principle. For partial sums $S_n$ of i.i.d. random variables
$(X_i)_i$, one finds in \cite{LiRosalsky:2004} the remark that large deviation probability results imply asymptotic expansions for the tail probabilities
$P(S_n \geq n \E(X_1) + n^{1/2} x)$ and $P(S_n \leq n \E(X_1) - n^{1/2} x)$ for $x \geq 0$ and $x = o(n^{1/2})$, and moreover that these expansions imply
a moderate deviation principle. We have not found a proof in the literature of the general statement that large deviation probability results imply
a moderate deviation principle. Our abstract result, Theorem \ref{thmcumulants}, is motivated by various applications.
We will prove moderate deviation principles for a number of statistics by applying Theorem \ref{thmcumulants}. Some of these results improve
existing ones; most of our examples are new moderate deviation results.
Let us recall the definition of a large deviation principle (LDP) due to Varadhan, see for example
\cite{DemboZeitouni:1998}. A sequence of probability measures $(\mu_n)_{n\in \mathbb N}$ on a
topological space $\mathcal X$ equipped with a $\sigma$-field $\mathcal
B$ is said to satisfy the LDP with speed $s_n\nearrow \infty$ and
good rate function $I(\cdot)$ if the level sets $\{x: I(x)\leq
\alpha\}$ are compact for all $\alpha\in[0,\infty)$ and for all
$\Gamma\in\mathcal B$ the lower bound
$$
\liminf_{n\to\infty} \frac{1}{s_n} \log \mu_n(\Gamma)
\geq - \inf_{x\in \operatorname{int}(\Gamma)} I(x)
$$
and the upper bound
$$
\limsup_{n\to\infty} \frac{1}{s_n} \log \mu_n(\Gamma)
\leq - \inf_{x\in \operatorname{cl}(\Gamma)} I(x)
$$
hold. Here $\operatorname{int}(\Gamma)$ and
$\operatorname{cl}(\Gamma)$ denote the interior and closure of
$\Gamma$, respectively. We say that a sequence of random variables satisfies
the LDP when the sequence of measures induced by these variables
satisfies the LDP. Formally, a moderate deviation principle is nothing
but an LDP. However, we speak of a moderate deviation
principle (MDP) for a sequence of random variables whenever the scaling
of the corresponding random variables lies between that of an ordinary Law
of Large Numbers and that of a Central Limit Theorem.
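The MDP scaling can be made concrete in the simplest possible case (a numerical sketch of our own): for a standard normal $Z$, one has $\frac{1}{a^2}\log P(Z \geq a x) \to -\frac{x^2}{2}$ as $a \to \infty$, matching the rate function $x^2/2$ appearing throughout this paper. The exact Gaussian tail can be evaluated with the complementary error function:

```python
from math import erfc, log, sqrt

def gauss_tail_log_rate(a, x):
    """(1/a^2) * log P(Z >= a*x) for standard normal Z, via the exact tail
    P(Z >= t) = erfc(t / sqrt(2)) / 2."""
    tail = 0.5 * erfc(a * x / sqrt(2.0))
    return log(tail) / a**2

# As a grows, the normalized log-tail approaches -x^2/2 = -0.5 for x = 1.
for a in (5.0, 10.0, 25.0):
    print(a, gauss_tail_log_rate(a, 1.0))
```

For $a=25$ the value is already within $0.01$ of $-1/2$; the subexponential correction $\log(ax\sqrt{2\pi})/a^2$ explains the remaining gap.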
The following main theorem of this paper generalizes the idea in
\cite{ERS:2009} to use the method of cumulants to investigate moderate deviation principles:
\begin{theorem}\label{thmcumulants}
For any $n \in {\Bbb N}$, let $Z_n$ be a centered random variable with variance one and finite
absolute moments of all orders, which satisfies
\begin{equation}\label{eqcumulants}
\bigl| \Gamma_j(Z_n) \bigr| \leq (j!)^{1+\gamma} / \Delta_n^{j-2}
\quad\text{for all } j=3,4, \dots
\end{equation}
for fixed $\gamma\geq 0$ and $\Delta_n>0$.
Let the sequence $(a_n)_{n \geq 1}$ of real numbers tend to infinity, but slowly enough that
$$
\frac{a_n}{\Delta_n^{1/(1+2\gamma)}} \stackrel{n\to\infty}{\longrightarrow} 0
$$
holds.
Then the moderate deviation principle for
$\bigl(\frac{1}{a_n} Z_n\bigr)_n$
with speed $a_n^2$ and rate function $I(x)=\frac{x^2}{2}$
holds true.
\end{theorem}
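A minimal check of hypothesis \eqref{eqcumulants} (our own back-of-envelope example): for $Z_n = n^{-1/2}\sum_{i=1}^n X_i$ with i.i.d. Rademacher $X_i$, additivity and homogeneity of cumulants give $\Gamma_j(Z_n) = \Gamma_j(X_1)\, n^{1-j/2}$, and the bound holds with $\gamma = 0$ and, for instance, $\Delta_n = \sqrt{n}/2$ (verified below for $3 \leq j \leq 8$):

```python
from math import comb, factorial

def cumulants_from_moments(moments):
    """Moment-to-cumulant recursion k_n = m_n - sum C(n-1, j-1) k_j m_{n-j}."""
    k = []
    for n in range(1, len(moments) + 1):
        k_n = moments[n - 1] - sum(comb(n - 1, j - 1) * k[j - 1] * moments[n - j - 1]
                                   for j in range(1, n))
        k.append(k_n)
    return k

# Rademacher: m_j = 1 for even j, 0 for odd j.
J = 8
kappa = cumulants_from_moments([1 - j % 2 for j in range(1, J + 1)])

n = 100
delta = n**0.5 / 2                         # candidate Delta_n, with gamma = 0
for j in range(3, J + 1):
    gamma_j = kappa[j - 1] * n**(1 - j / 2)  # Gamma_j(Z_n) by additivity
    assert abs(gamma_j) <= factorial(j) / delta**(j - 2)
print("cumulant bound (gamma = 0) holds for j = 3..8 at n =", n)
```

Since $\Delta_n^{1/(1+2\gamma)} = \sqrt{n}/2$ here, Theorem \ref{thmcumulants} recovers the familiar moderate deviation range $1 \ll a_n \ll \sqrt{n}$ up to a constant.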
\noindent
The theorem opens up the possibility of proving moderate deviations for a wide range
of dependent random variables. Before proceeding, we consider a moderate deviation principle
for partial sums of independent, non-identically distributed random variables. Interestingly enough, we have not
found any reference for the following result.
\begin{theorem} \label{thmnotidentical}
Let $(X_i)_{i \geq 1}$ be a sequence of independent real-valued random variables with expectation zero and variances
$\sigma_i^2>0$, $i \geq 1$, and let us assume that $\gamma\geq 0$ and $K>0$ exist such that for all $i \geq 1$
\begin{equation}\label{momentenbedingungen}
\bigl| \E X_i^j\bigr| \leq (j!)^{1+\gamma} K^{j-2} \sigma_i^2
\quad\text{for all } j=3,4, \dots\, .
\end{equation}
Let $Z_n:=\frac{1}{\sqrt{\sum_{i=1}^n \sigma_i^2}}\sum_{i=1}^n X_i$. Then
$\bigl(\frac{1}{a_n} Z_n \bigr)_{n \geq 1}$ satisfies the moderate
deviation principle with speed $a_n^2$ and rate function $\frac{x^2}{2}$ for any
$1\ll a_n\ll \left(\frac{\sqrt{\sum_{i=1}^n \sigma_i^2}}{\displaystyle{2 \max\bigl\{K; \max_{1\leq i\leq n}}\{\sigma_i\}\bigr\}}\right)^{1/(1+2\gamma)}$.
\end{theorem}
Note that condition \eqref{momentenbedingungen} generalizes the classical Bernstein condition ($\gamma =0$).
\begin{proof}
Using a relation between moments and cumulants, condition \eqref{momentenbedingungen} implies that the $j$-th cumulant of $X_i$
can be bounded by $(j!)^{1 + \gamma} (2 \max \{K, \sigma_i \})^{j-2} \sigma_i^2$. Hence it follows from the independence of the random variables $X_i$, $i \geq 1$,
that the $j$-th cumulant of $Z_n$ has the bound
\begin{equation}\label{cumulantenbedinungen}
|\Gamma_j(Z_n)| \leq (j!)^{1+\gamma} \left(
\frac{2 \max\bigl\{K; \max_{1\leq i\leq n}\{\sigma_i\}\bigr\}}{\sqrt{\sum_{i=1}^n \sigma_i^2}}\right)^{j-2}
\, ,
\end{equation}
for details see for example \cite[Theorem 3.1]{SaulisStratulyavichus:1989}.
Thus for $Z_n$ the condition of Theorem \ref{thmcumulants} holds with
$$
\Delta_n=\frac{\sqrt{\sum_{i=1}^n \sigma_i^2}}{\displaystyle{2 \max\bigl\{K; \max_{1\leq i\leq n}\{\sigma_i\}\bigr\}}}\,.
$$
The result follows from Theorem \ref{thmcumulants}.
\end{proof}
\begin{remark}
If Cram\'er's condition holds, that is there
exists $\lambda>0$ such that $\E e^{\lambda |X_i|} < \infty$
holds for all $i\in\mathbb N$, then $X_i$ satisfies Bernstein's condition, which is the bound \eqref{momentenbedingungen}
with $\gamma=0$, see for example \cite[Remark 3.6.1]{Yurinsky:1995}.
This implies \eqref{cumulantenbedinungen}, and we can apply Theorem \ref{thmcumulants}
as above. Therefore Theorem \ref{thmcumulants} imposes weaker restrictions on the random sequence
than Cram\'er's condition does.
\end{remark}
The paper is organized as follows. Section 2 is devoted to applications to so-called {\it dependency graphs},
including counting statistics of subgraphs in Erd\H{o}s-R\'enyi random graphs. Section 3 presents applications to $U$-statistics.
Theorem \ref{thmcumulants} and Theorem \ref{thmnotidentical} will be applied in random matrix theory in Section 4. We will be able to reprove
moderate deviations for the characteristic polynomials of the COE, CUE and CSE matrix ensembles. Moreover,
we will prove moderate deviations for determinantal point processes with applications in random matrix theory.
Finally, in Section 5 we present the proof of Theorem \ref{thmcumulants}.
\section{Applications to dependency graphs}\label{Application to dependency graphs}
Let ${\{X_{\alpha}\}}_{\alpha\in \mathcal I}$ be a family of random variables defined
on a common probability space. A {\it dependency graph} for ${\{X_{\alpha}\}}_{\alpha\in \mathcal I}$
is any graph $L$ with vertex set $\mathcal I$ which satisfies the following condition:
For any two disjoint subsets of vertices $V_1$ and $V_2$ such that there is no edge from any vertex in
$V_1$ to any vertex in $V_2$, the corresponding collections of random
variables $\{X_{\alpha}\}_{\alpha\in V_1}$ and $\{X_{\alpha}\}_{\alpha\in V_2}$ are independent.
Let the {\it maximal degree} of a dependency graph $L$ be the maximal number
of edges meeting at a vertex of $L$. The idea behind the usefulness of dependency graphs is that if the maximal degree
is not too large, one expects a Central Limit Theorem for the partial sums of the family ${\{X_{\alpha}\}}_{\alpha\in \mathcal I}$.
We will consider moderate deviations. Note that a dependency graph is not unique; for example, the complete graph is a dependency graph for any family of random variables.
\begin{example} \label{stexample}
A standard situation is that there is an underlying family of independent random variables $\{Y_i\}_{i \in \mathcal A}$, and each $X_{\alpha}$
is a function of the variables $\{Y_i\}_{i \in \mathcal A_{\alpha}}$, for some $\mathcal A_{\alpha} \subseteq \mathcal A$. With $\mathcal S = \{ \mathcal A_{\alpha}:
\alpha \in \mathcal I \}$, the graph $L=L(\mathcal S)$ with vertex set $\mathcal I$ and edge set $\{ \alpha \beta: \mathcal A_{\alpha} \cap \mathcal A_{\beta} \not= \emptyset \}$
is a dependency graph for the family ${\{X_{\alpha}\}}_{\alpha\in \mathcal I}$. As a special case of this example, we will consider subgraphs
of an Erd\H{o}s-R\'enyi random graph.
\end{example}
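The construction $L(\mathcal S)$ from Example \ref{stexample} is straightforward to implement; the helper below (hypothetical, for illustration only) joins $\alpha$ and $\beta$ whenever $\mathcal A_{\alpha} \cap \mathcal A_{\beta} \neq \emptyset$, applied here to the sliding-window family $\mathcal A_{\alpha} = \{\alpha, \alpha+1\}$:

```python
from itertools import combinations

def dependency_graph(index_sets):
    """Given {alpha: A_alpha}, return adjacency sets of L(S): alpha ~ beta
    iff A_alpha and A_beta share an underlying independent variable."""
    adj = {a: set() for a in index_sets}
    for a, b in combinations(index_sets, 2):
        if index_sets[a] & index_sets[b]:
            adj[a].add(b)
            adj[b].add(a)
    return adj

# X_alpha depending on Y_alpha and Y_{alpha+1}: A_alpha = {alpha, alpha+1}.
L = dependency_graph({a: {a, a + 1} for a in range(1, 7)})
max_degree = max(len(nb) for nb in L.values())
print(max_degree)  # 2: each interior index shares a Y with its two neighbours
```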
Another context, outside the scope of the present paper, in which dependency graphs are used is the Lov\'asz Local Lemma,
see \cite{AlonSpencer08}. Central limit theorems for $Z:=\sum_{\alpha \in \mathcal I} X_{\alpha}$ are obtained in \cite{BaldiRinott:1989}, see \cite[Theorem 9.6]{Steinbuch2010}
for corresponding Berry-Esseen bounds. We obtain the following bounds on cumulants of $Z$:
\begin{theorem}\label{lemmacumulantsrg}
Suppose that $L$ is a dependency graph for the family $\{X_{\alpha}\}_{\alpha \in \mathcal I}$ and that
$M$ is the maximal degree of $L$. Suppose further that $|X_{\alpha}| \leq A$ almost surely
for any $\alpha \in \mathcal I$ and some constant $A$. Let $\sigma^2$ be the variance of $Z:=\sum_{\alpha \in \mathcal I} X_{\alpha}$.
Then the cumulants $\Gamma_j$ of $\frac{Z}{\sigma}$ are bounded by
\begin{equation}\label{eqcumulantRG}
\bigl|\Gamma_j\bigr|\leq
(j!)^3 |\mathcal I| \, (M+1)^{j-1} \, (2eA)^j \frac{1}{\sigma^j}
\end{equation}
for all $j\geq 1$.
\end{theorem}
\begin{proof}
For notational convenience we assume without loss of generality that the index set is
$\mathcal I=\{1,\dots,N\}$ for a fixed natural number $N\in\N$. In \cite[Lemma 4]{Janson:1988} bounds for
the cumulants were given. Our main task is to obtain a bound which tracks the dependence on $j$ (and $j!$) as precisely as possible. The first steps of our proof
follow \cite[Lemma 4]{Janson:1988} exactly.
Assuming the existence of the $m$-th moments of $X_1,\dots,X_j$ define the multi-linear function
$$
\kappa(X_1,\dots,X_j) := (-i)^j \frac{\partial^j}{\partial t_1 \cdots \partial t_j} \log \E \bigl[\exp(i t_1 X_1)\cdots \exp(i t_j X_j)\bigr] \Big|_{(t_1,\dots,t_j)=(0,\dots,0)} \,.
$$
By definition, for any random variable $X$ the cumulant is given by $\Gamma_j(X)=\kappa(\underbrace{X,\dots,X}_{j \text{ times}})$
and for the cumulant of $\frac{Z}{\sigma}$ we have
\begin{equation}
\Gamma_j
= \kappa\Bigl(\underbrace{\sum_{i=1}^{N}X_i,\dots,\sum_{i=1}^{N}X_i}_{j \text{ times}}\Bigr) \frac{1}{\sigma^j}
= \sum_{{i_1}=1}^{N}\dots \sum_{{i_j}=1}^{N} \kappa(X_{i_1},\dots, X_{i_j}) \frac{1}{\sigma^j}\,.
\label{eqGesamtausdruckKumulante}
\end{equation}
Suppose that $(X_{1},\dots,X_{m})$ is independent of $(X_{m+1},\dots,X_{j})$
for some $1\leq m<j$. Then
\begin{eqnarray*}
\kappa(X_{1},\dots, X_{j})
&=&
(-i)^j \frac{\partial^j}{\partial t_1 \cdots \partial t_j} \log \E \bigl[\exp(i t_{1} X_{1})\cdots \exp(i t_{j} X_{j})\bigr] \Big|_{(0,\dots,0)}
\\
&=&
(-i)^j \frac{\partial^j}{\partial t_{1} \cdots \partial t_{j}}
\log \E \bigl[\exp(i t_1 X_1)\cdots \exp(i t_m X_m)\bigr]\Big|_{(0,\dots,0)}
\\
&&{}+ (-i)^j \frac{\partial^j}{\partial t_{1} \cdots \partial t_{j}}
\log \E \bigl[\exp(i t_{m+1} X_{m+1})\cdots \exp(i t_j X_j)\bigr]
\Big|_{(0,\dots,0)}
\\&=& 0\,.
\end{eqnarray*}
Thus in \eqref{eqGesamtausdruckKumulante} we only have to consider those terms $\kappa(X_{i_1},\dots,X_{i_j})$ for which the corresponding $j$ vertices of $L$ (not necessarily
distinct) form a connected subgraph.
For $1\leq q\leq j$, let $\sum_{I_1,\dots,I_q}$ denote summation over all partitions
of $\{1,\dots,j\}$ into $q$ nonempty subsets $I_1,\dots,I_q$.
The representation of a cumulant in \cite[Eq. (1.57)]{SaulisStratulyavichus:1989},
which was derived by Leonov and Shiryaev in 1959 via Taylor's expansion,
gives
\begin{eqnarray}
\bigl|\kappa(X_{i_1},\dots, X_{i_j})\bigr|
&=& \left| \sum_{q=1}^j \sum_{I_1,\dots,I_q}
(-1)^{q-1} (q-1)! \prod_{m=1}^q \E\left[\prod_{r\in I_m} X_{i_r}\right] \right|
\nonumber\\
&\leq& \sum_{q=1}^j \sum_{I_1,\dots,I_q}
(q-1)! \prod_{m=1}^q \prod_{r\in I_m} \|X_{i_r}\|_{m_i}
\label{eqcumulantshoelder}
\end{eqnarray}
applying H\"older's inequality with $\sum_{i\in I_m}\frac{1}{m_i}=1$
and writing $\|X_{i_r}\|_{m_i}$ for $\bigl( \E |X_{i_r}|^{m_i}\bigr)^{1/m_i}$.
Choosing $m_i=|I_m|$ and using the fact that $\|X_{i_r}\|_{m_i}\leq \|X_{i_r}\|_j$
for $m_i\leq j$ implies
\begin{eqnarray}
\bigl|\kappa(X_{i_1},\dots, X_{i_j})\bigr|
&\leq&
\sum_{q=1}^j \sum_{I_1,\dots,I_q} (q-1)! \prod_{m=1}^q \prod_{r\in I_m} \|X_{i_r}\|_j
\nonumber
\\
&=&
\|X_{i_1}\|_j \cdots \|X_{i_j}\|_j \sum_{q=1}^j \sum_{I_1,\dots,I_q} (q-1)!
\,.\label{eqforeachcumulant}
\end{eqnarray}
The number of partitions of a set with $j$ elements into $q$ nonempty parts is the Stirling number of the second kind
$$\frac{1}{q!} \sum_{m=0}^q (-1)^{q-m} \left(q\atop m\right) m^j\,.$$
Hence inequality \eqref{eqforeachcumulant} implies
$$
\bigl|\kappa(X_{i_1},\dots, X_{i_j})\bigr|
\leq
\|X_{i_1}\|_j \cdots \|X_{i_j}\|_j \sum_{q=1}^j \frac{(q-1)!}{q!} \sum_{m=0}^q \left(q\atop m\right) m^j \,.
$$
Since
$$\frac{1}{q} \sum_{m=0}^q \left(q\atop m\right) m^j
\leq \sum_{m=1}^q \left(q\atop {\lfloor q/2\rfloor}\right) q^{j-1}
= \left(q\atop {\lfloor q/2\rfloor}\right) q^j
\leq j^j \left(j\atop {\lfloor j/2\rfloor}\right)
$$
holds, we can apply $\left(j\atop {\lfloor j/2\rfloor}\right)
= \frac{j!}{ ({\lfloor j/2\rfloor})! ({\lceil j/2\rceil})!}
\leq \frac{j!}{ ({\lfloor j/2\rfloor})!^2}$
and the Stirling approximation $m! > \sqrt{2 \pi m} \left(\frac{m}{e}\right)^m$ to get
\begin{eqnarray}
\bigl|\kappa(X_{i_1},\dots, X_{i_j})\bigr|
&\leq&
\|X_{i_1}\|_j \cdots \|X_{i_j}\|_j \cdot j^{j+1} \frac{j!}{2\pi \frac{j}{2} \left(\frac{j}{2e}\right)^{j}}
\nonumber\\
&\leq&
\|X_{i_1}\|_j \cdots \|X_{i_j}\|_j \cdot j! \cdot (2e)^j
\quad \leq \quad j! (2e A)^j \,.
\label{eqcumulantsfastfertig}
\end{eqnarray}
Now we need to know the number of possible sets of $j$ vertices forming
a connected subgraph of $L$. If $v_1,\dots,v_j$ are $j$ such vertices,
then we can rearrange the indices such that each set $\{v_1,v_2\}$,
$\{v_1,v_2,v_3\}, \dots, \{v_1,\dots,v_j\}$ forms itself a connected subgraph
of $L$. There are at most $j!$ tuples of $j$ vertices associated to the
same ordering.
There are $N$ ways of choosing $v_1$. The vertex $v_2$ must equal $v_1$ or
be connected to $v_1$, for which we have the choice of at most $M$ possible
vertices.
Similarly, $v_3$ either equals $v_1$ or $v_2$ or is connected to one of them.
For this choice we have at most $2+2 M=2(M+1)$ possibilities.
Continuing this way we see that there are at most
$$
j! N (M+1) 2(M+1) \cdots (j-1)(M+1)
=j! (j-1)! N (M+1)^{j-1}
$$
choices of $j$ vertices forming a connected subgraph in $L$.
Inserting this estimate and the bound in \eqref{eqcumulantsfastfertig}
into equation \eqref{eqGesamtausdruckKumulante} completes the proof of Theorem
\ref{lemmacumulantsrg}.
\end{proof}
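The combinatorial ingredients of the proof can be cross-checked numerically (our own sanity check): the closed formula for the Stirling numbers of the second kind against their standard recursion $S(j,q) = q\,S(j-1,q) + S(j-1,q-1)$, and the intermediate estimate $\sum_{q=1}^{j} (q-1)!\, S(j,q) \leq j!\,(2e)^j$ for small $j$:

```python
from math import comb, factorial, e

def stirling2(j, q):
    """Stirling number of the second kind, via the closed formula
    S(j, q) = (1/q!) * sum_{m=0}^{q} (-1)^(q-m) C(q, m) m^j."""
    return sum((-1) ** (q - m) * comb(q, m) * m ** j for m in range(q + 1)) // factorial(q)

# Cross-check against the recursion S(j, q) = q*S(j-1, q) + S(j-1, q-1).
for j in range(2, 9):
    for q in range(1, j + 1):
        assert stirling2(j, q) == q * stirling2(j - 1, q) + stirling2(j - 1, q - 1)

# The bound extracted in the proof: summing (q-1)! over partitions stays below j!(2e)^j.
for j in range(1, 9):
    assert sum(factorial(q - 1) * stirling2(j, q) for q in range(1, j + 1)) \
        <= factorial(j) * (2 * e) ** j
print("Stirling identities and the j!(2e)^j bound hold for j <= 8")
```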
\subsection{Subgraphs in Erd\H{o}s-R\'enyi random graphs}
Consider an Erd\H{o}s-R{\'e}nyi random graph with $n$ vertices, where
for all $\left(n \atop 2\right)$ different pairs of vertices the
existence of an edge is decided by an independent Bernoulli experiment
with probability $p$. For each $i\in\{1,\dots,\left({{n}\atop{2}}\right)\}$,
let $X_i$ be the random variable determining if the edge $e_i$ is present,
i.e. $P(X_i=1)=1-P(X_i=0)=p(n) =:p$. The model is called ${\Bbb G}(n,p)$.
The following statistic counts the number of subgraphs isomorphic to a fixed
graph $G$ with $k$ edges and $l$ vertices
\begin{equation} \label{wdef}
W=
\sum_{1\leq \kappa_1<\dots < \kappa_k \leq \left({{n}\atop{2}}\right)}
1_{\{(e_{\kappa_1},\dots,e_{\kappa_k})\sim G\}} \left(\prod_{i=1}^k X_{\kappa_i} \right)
\:.
\end{equation}
Here $(e_{\kappa_1}, \ldots, e_{\kappa_k})$ denotes the graph with edges
$e_{\kappa_1}, \ldots, e_{\kappa_k}$ present and $A \sim G$ denotes the fact that the subgraph $A$
of the complete graph is isomorphic to $G$. Here and in the following we speak about connected subgraphs only.
Let the constant $a := \mathrm{aut}(G)$ denote the order of the automorphism group of $G$.
The number of copies of $G$ in $K_n$, the complete graph with $n$
vertices and $\left(n \atop 2\right)$ edges, is given by
$\left(n \atop l\right) l!/a$ and the expectation of $W$ is equal to
$\E[W] = \frac{\left(n \atop l\right) l!}{a} p^k = {\mathcal O}(n^l p^k) \:$.
It is easy to see that $P(W> 0)= o(1)$ if $p\ll n^{-l/k}$.
Moreover, for the graph property that $G$ is a subgraph, the probability that
a random graph possesses it jumps from $0$ to $1$ at the threshold probability
$n^{-1/m(G)}$, where $m(G)= \max \left\{ \frac{e_H}{v_H} : H\subseteq G, v_H >0 \right\}$,
$e_H, v_H$ denote the number of edges and vertices of $H\subseteq G$,
respectively, see \cite{JLR:2000}.
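For orientation, the statistic \eqref{wdef} can be evaluated by brute force for the triangle $G = K_3$ (so $k = l = 3$ and $a = 6$); the helper below is our own illustration, not part of the argument. At $p = 1$ every edge is present, so $W$ must equal the copy count $\binom{n}{l}\, l!/a = \binom{n}{3}$:

```python
from itertools import combinations
from math import comb

def triangle_count(n, edges):
    """W for G = K_3: number of vertex triples of {1,...,n} whose three edges are all present."""
    edge_set = {frozenset(p) for p in edges}
    return sum(1 for t in combinations(range(1, n + 1), 3)
               if all(frozenset(p) in edge_set for p in combinations(t, 2)))

n = 6
complete = list(combinations(range(1, n + 1), 2))  # p = 1: every edge present
w = triangle_count(n, complete)
print(w, comb(n, 3))  # both 20
```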
Ruci{\'n}ski proved in \cite{Rucinski:1988} that
$\frac{W-\E(W)}{\sqrt{\V(W)}}$
converges in distribution to a standard normal distribution if and only if
\begin{equation}
n p^{m(G)}\stackrel{n\to\infty}{\longrightarrow} \infty
\quad\text{and}\quad
n^2 (1-p) \stackrel{n\to\infty}{\longrightarrow} \infty\:.
\end{equation}
An upper bound for {\it lower tails} was proven by Janson \cite{Janson:1990}, applying the FKG-inequality. A comparison of {\it seven} different
techniques proving bounds for {\it the infamous} upper tail can be found in \cite{JansonRucinski:2002}, see also \cite{Chatterjee:2010}
for a recent improvement. The large deviation principle for subgraph count statistics in Erd\H{o}s-R\'enyi random graphs with
fixed $p$ is established in \cite{ChatterjeeVaradhan:2010}.
As a special case of Example \ref{stexample}, let $\{H_{\alpha}\}_{\alpha \in \mathcal I}$ be given subgraphs of the complete graph $K_n$ and let $I_{\alpha}$
be the indicator that $H_{\alpha}$ appears as a subgraph of ${\Bbb G}(n,p)$, that is, $I_{\alpha} = 1_{\{ H_{\alpha} \subseteq {\Bbb G}(n,p) \}}$, $\alpha \in \mathcal I$.
Then $L(\mathcal S)$ with $\mathcal S = \{e_{H_{\alpha}} : \alpha \in \mathcal I\}$ is a dependency graph with edge set $\{ \alpha \, \beta: e_{H_{\alpha}} \cap e_{H_{\beta}} \not= \emptyset \}$.
Here we take the family of subgraphs of $K_n$ that are isomorphic to a fixed graph $G$, denoted by $\{G_{\alpha}\}_{\alpha \in A_n}$.
Consider $X_{\alpha} = I_{\alpha} - {\Bbb E} I_{\alpha}$ and define the graph $L_n$ by connecting every pair of indices $\alpha$ and $\beta$ such that the corresponding
graphs $G_{\alpha}$ and $G_{\beta}$ have a common edge. This is evidently a dependency graph for $(X_{\alpha})_{\alpha \in A_n}$; see \cite[Example 6.19]{JLR:2000}.
Note that the subgraph count statistic $W-\E W$ given in \eqref{wdef} is equal to the sum of the $X_{\alpha}$, $\alpha \in A_n$.
We will be able to prove the following moderate deviation principle for the subgraph count statistic:
\begin{theorem}\label{thmMDPtriangle}
Let $G$ be a fixed graph with $k$ edges and $l$ vertices.
Let $(a_n)_n$ be a sequence with
$$
1 \ll a_n \ll
\Bigl(\frac{n \bigl(p^{k-1} \sqrt{p (1-p)}\bigr)^3}{8 k^2 e^3}\Bigr)^{1/5}
\,,
$$
where $e$ is Euler's number.
Then the scaled subgraph count statistics
$\bigl(\frac{1}{a_n} \frac{W-\E W}{\sqrt{\V W}}\bigr)_n$
satisfy the moderate deviation principle with speed $a_n^2$ and rate
function $x^2/2$ if
\begin{equation}\label{eqprobbedingung}
n^2 p^{3(2k-1)} (1-p)^3\stackrel{n\to \infty}{\longrightarrow} \infty
\end{equation}
holds.
\end{theorem}
\begin{remark}
Condition \eqref{eqprobbedingung} on $p(n)$ ensures that the upper bound on $(a_n)_n$ tends to infinity.
Moderate deviations for the subgraph count statistic of Erd{\H o}s-R{\'e}nyi
random graphs are already considered in \cite{DoeringEichelsbacher:2009}
studying the log-Laplace transform via martingale differences and using the
G\"artner-Ellis Theorem. The stated moderate deviation principle in Theorem \ref{thmMDPtriangle} is on one
hand valid for more probabilities $p(n)$ than in \cite[Theorem 1.1]{DoeringEichelsbacher:2009}.
But on the other hand the scaling $\beta_n :=a_n \sqrt{\V W}$ has a smaller range in comparison to \cite{DoeringEichelsbacher:2009}:
using $c_1 \, n^{2l-2} p^{2k-1} (1-p)
\leq \V W \leq c_2 \, n^{2l-2} p^{2k-1} (1-p)$
for constants $c_2 \geq c_1 > 0$
\label{Rucinskivar}
(see \cite[2nd section, page 5]{Rucinski:1988}),
the scaling in Theorem \ref{thmMDPtriangle} lies in the range
$$
n^{l-1} p^{k-1} \sqrt{ p(1-p) } \ll
a_n \sqrt{\V W}
\ll n^{l-\frac{4}{5}} \bigl(p^{k-1} \sqrt{ p(1-p) }\bigr)^{8/5}\,.
$$
In contrast, the scaling in \cite[Theorem 1.1]{DoeringEichelsbacher:2009} is bounded by
$$
n^{l-1} p^{k-1} \sqrt{ p(1-p) } \ll \beta_n \ll n^{l} \left( p^{k-1} \sqrt{p(1-p)} \right)^4\,.
$$
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thmMDPtriangle}]
In order to prove Theorem \ref{thmMDPtriangle} we apply
Theorem \ref{lemmacumulantsrg} to show that the conditions
of Theorem \ref{thmcumulants} are satisfied.
Let us consider the subgraph count statistic in an
Erd{\H o}s-R{\'e}nyi random graph for any fixed subgraph $G$ with $l$ vertices
and $k$ edges and its associated dependency graph $L_n$ defined as above.
Let $M_n$ be the maximal degree of the dependency graph $L_n$. Thus to
determine $M_n$ we need to bound the maximal number of subgraphs isomorphic
to $G$ having at least one edge in common with a fixed subgraph $G'$ which
is itself isomorphic to $G$. For every subgraph $G'$, isomorphic to $G$, we
have to consider one of the $k$ edges of $G'$ to be the common edge.
Accordingly we can choose $l-2$ further vertices out of $n-2$ possible vertices,
which contributes a factor $(n-2)_{l-2}:=(n-2) (n-3) \cdots (n-l+1)$.
We can subtract one, because we do not count $G'$ itself, and obtain
$$
M_n \leq k (n-2)_{l-2} -1 \leq k n^{l-2} -1\,.
$$
The number $N_n$ of the subgraphs in $K_n$ which are isomorphic to $G$ satisfies the inequality
$$
\frac{n (n-1)\cdots (n-l+1)}{l (l-1)\cdots 1}=\left({n}\atop{l}\right)
\leq N_n \leq n_l:=n (n-1)\cdots (n-l+1)\,.
$$
As stated on page \pageref{Rucinskivar}, the variance $\sigma_n^2=\V W$ of
$\sum_{\alpha\in A_n} X_{\alpha}= W-\E W$ is bounded above and below by
constant multiples of $n^{2l-2} p^{2k-1} (1-p)$.
For the cumulants of $\frac{W-\E W}{\sqrt{\V W}}$ it follows with \eqref{eqcumulantRG} that, for $j\geq 3$,
\begin{eqnarray}
\bigl|\Gamma_j\bigr|
&\leq&
(j!)^3 n^l \bigl(k n^{l-2}\bigr)^{j-1} (2e)^j \frac{1}{\bigl(\text{const.} n^{l-1} p^{k-1} \sqrt{p (1-p)}\bigr)^j}\nonumber\\
&=&
(j!)^3 \frac{1}{n^{j-2}} k^{j-1} (2e)^j \frac{1}{\bigl(\text{const.} p^{k-1} \sqrt{p (1-p)}\bigr)^j}\nonumber\\
&\leq&
(j!)^3 \left(\frac{8 k^2 e^3}{n \bigl(\text{const.} p^{k-1} \sqrt{p (1-p)}\bigr)^3}\right)^{j-2}\,.
\label{subgraphcumulant}
\end{eqnarray}
In the last inequality we used the fact that $3(j-2)\geq j$ is equivalent to
$j\geq 3$.
This implies
that condition \eqref{eqcumulants} is satisfied for $\gamma=2$ and
$$\Delta_n
=\frac{n \bigl(\text{const.} p^{k-1} \sqrt{p (1-p)}\bigr)^3}{8 k^2 e^3}\,.$$
$\Delta_n$ tends to infinity if $n^2 p^{3(2k-1)} (1-p)^3\stackrel{n\to \infty}{\longrightarrow} \infty$,
so in the following we consider only this case.
Now we choose a sequence $(a_n)_n$ such that
$$
1 \ll a_n \ll \Delta_n^{1/(1+2\gamma)}
= \left(\frac{n \bigl(\text{const.}\, p^{k-1} \sqrt{p (1-p)}\bigr)^3}{8 k^2 e^3}\right)^{1/5}\,,
$$
and apply Theorem \ref{thmcumulants}, which ends the proof of Theorem \ref{thmMDPtriangle}.
\end{proof}
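The edge-sharing bound $M_n \leq k\, n^{l-2}-1$ from the proof above can be checked by brute force for the smallest interesting case $G=K_3$ (triangle, $l=3$, $k=3$). The following sketch is illustrative only and not part of the argument; the helper name is our own.

```python
# Brute-force check of M_n <= k*n^(l-2) - 1 for G = K_3 in K_n.
from itertools import combinations

def triangles_sharing_edge(n, fixed=(0, 1, 2)):
    """Count triangles in K_n (other than `fixed`) sharing >= 1 edge with `fixed`."""
    fixed_edges = set(combinations(sorted(fixed), 2))
    count = 0
    for tri in combinations(range(n), 3):
        if set(combinations(tri, 2)) & fixed_edges and tri != tuple(sorted(fixed)):
            count += 1
    return count

n, l, k = 10, 3, 3
# each of the 3 edges lies in n-3 further triangles, so the exact count is 3(n-3) = 21
assert triangles_sharing_edge(n) == 3 * (n - 3)
assert triangles_sharing_edge(n) <= k * n ** (l - 2) - 1
```

For triangles the bound $3n-1$ is generous compared to the exact value $3(n-3)$; the proof only needs the order $n^{l-2}$.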
\begin{remark}\label{CLTcumulants}
As mentioned in the introduction, the cumulant bounds \eqref{eqcumulants}
imply a Central Limit Theorem if $\lim_{n\to\infty}\Delta_n=\infty$.
Moreover applying \cite[Lemma 2.1]{SaulisStratulyavichus:1989} and inequality
\eqref{subgraphcumulant} proves the following bound for the Kolmogorov distance:
\begin{equation*}
\sup_{x\in\mathbb R} \Bigl| P\Bigl(\frac{W-\E W}{\sqrt{\V W}}\leq x\Bigr)-\Phi(x)\Bigr|
\leq 108 \left(\frac{\sqrt{2}}{6} \Delta_n \right)^{-\frac{1}{1+2\gamma}}
\leq \frac{\text{const.}}{n^{1/5} \bigl(p^{k-1} \sqrt{p (1-p)}\bigr)^{3/5}}\,.
\end{equation*}
This bound is weaker than the inequality obtained in \cite{BarbourKaronskiRucinski:1989} via Stein's method.
For some improvements see \cite{Goldstein}.
\end{remark}
\subsection{Another example of a dependency graph}
Let $X_i$, $i \geq 1$, be independent centered random variables
with finite variances $\V X_i \geq \varepsilon$ for some fixed $\varepsilon>0$ and define $Z_n:=\sum_{i=1}^n X_i X_{i+1}$.
Let $A$ be a constant such that $|X_i|\leq \sqrt{A}$ almost surely. Let $(a_n)_n$
be a divergent sequence where $a_n\ll n^{1/10}$. Then $\Bigl(\frac{1}{a_n \sqrt{\V Z_n}} Z_n\Bigr)_{n\in\mathbb N}$
satisfies the moderate deviation principle with speed $a_n^2$ and rate function $I(x)=x^2/2$.
\begin{proof}
Set $Y_i:= X_i X_{i+1}$ for all $i=1,\dots,n$. Then $Y_i$ is independent of $Y_j$ for all
$j \notin \{i-1, i+1\}$. Let $L_n$ be the graph with vertex set
$\{1,\dots,n\}$ and edges between $i$ and $i+1$ for $i=1,\dots,n-1$.
Then $L_n$ is a dependency graph of $\{Y_i\}_{i=1}^{N}$ with $N=n$ and $M=2$.
The variance $\sigma_n^2=\V Z_n$ is bounded from below by a constant times $n$:
\begin{eqnarray*}
\V Z_n &=& \sum_{i,j =1}^n \E \bigl[Y_i Y_j\bigr] = \sum_{i,j =1}^n \E[X_i X_{i+1} X_j X_{j+1}]\\
&=& \sum_{i=1}^n \E\bigl[X_i^2 X_{i+1}^2\bigr]
+ \sum_{i=2}^n \E\bigl[X_i X_{i+1} X_{i-1} X_i\bigr]
+ \sum_{i=1}^{n-1} \E\bigl[X_i X_{i+1} X_{i+1} X_{i+2}\bigr]\\
&=& \sum_{i=1}^n \V(X_i) \V(X_{i+1}) + 2 \sum_{i=2}^n \E[X_{i-1}] \E[X_i^2] \E[X_{i+1}]\\
&=& \sum_{i=1}^n \V(X_i) \V(X_{i+1})
\geq n \Bigl(\min_{i=1,\dots,n+1} \V(X_i)\Bigr)^2 \geq n \varepsilon^2
\end{eqnarray*}
due to the independence of $X_1,\dots,X_{n+1}$ and the fact that their expectations
are equal to zero. In particular, for independent and identically distributed random
variables $X_i$, we have $\V Z_n= \text{const.} \cdot n$.
Using Theorem \ref{lemmacumulantsrg} we obtain a bound for the cumulants
of $\frac{1}{\sqrt{\V Z_n}} \sum_{i=1}^n Y_i =\frac{1}{\sqrt{\V Z_n}} Z_n$:
$$
\bigl|\Gamma_j\bigr| \leq (j!)^3 \left( \frac{A^3}{\text{const.} \sqrt{n}}\right)^{j-2}\,.
$$
Now we can apply Theorem \ref{thmcumulants} with $\gamma=2$, $\Delta_n= \text{const.} \sqrt{n}$
and a sequence $(a_n)_n$ satisfying
$\frac{a_n}{n^{1/10}}\stackrel{n\to\infty}{\longrightarrow}0$.
This proves the claim.
\end{proof}
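The variance identity $\V Z_n = \sum_{i=1}^n \V(X_i)\V(X_{i+1})$ used in the proof above can be verified by exact enumeration for a toy choice of distribution (Rademacher signs, our own choice, so that $\V X_i = 1$ and $\V Z_n = n$):

```python
# Exact variance of Z_n = sum_{i=1}^n X_i X_{i+1} for i.i.d. Rademacher X_i,
# computed by enumerating all 2^(n+1) sign vectors.
from itertools import product

def var_Zn(n):
    outcomes = [sum(x[i] * x[i + 1] for i in range(n))
                for x in product([-1, 1], repeat=n + 1)]
    mean = sum(outcomes) / len(outcomes)
    return sum((z - mean) ** 2 for z in outcomes) / len(outcomes)

assert var_Zn(4) == 4.0  # equals n * (V X_1)^2 with V X_1 = 1
```

The cross terms $\E[X_{i-1}X_i^2X_{i+1}]$ vanish by independence and centering, which the enumeration confirms.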
\section{Application to non-degenerate $U$-statistics}
\label{Application to non-degenerate U-statistics}
Let $X_1,\dots,X_n$ be independent and identically distributed random
variables with values in a measurable space $\mathcal X$. For a
measurable and symmetric function $h:{\mathcal X}^m\to \R$ we define
$$
U_n(h):= \frac{1}{\left(n\atop m\right)}
\sum_{1\leq i_1<\dots<i_m \leq n} h(X_{i_1},\dots,X_{i_m})\:,
$$
where symmetric means invariant under all permutations of its arguments.
$U_n(h)$ is called a {\it U-statistic} with {\it kernel} $h$ and
{\it degree} $m$.
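As a small illustration of the definition (our own example, not from the text): for $m=2$ and the kernel $h(x,y)=(x-y)^2/2$, the $U$-statistic is exactly the unbiased sample variance.

```python
# U_n(h): average of h over all m-subsets of the sample, as defined above.
from itertools import combinations

def u_statistic(h, xs, m=2):
    subsets = list(combinations(xs, m))
    return sum(h(*s) for s in subsets) / len(subsets)

h = lambda x, y: (x - y) ** 2 / 2   # kernel of the sample variance
xs = [1.0, 2.0, 4.0]
mean = sum(xs) / len(xs)
sample_var = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)
assert abs(u_statistic(h, xs) - sample_var) < 1e-12
```

Both quantities equal $7/3$ for this sample.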
Define the conditional expectation for $c=1,\dots,m$ by
\begin{eqnarray*}
h_c(x_1,\dots,x_c)
&:=&\E\bigl[ h(x_1,\dots,x_c,X_{c+1},\dots, X_m)\bigr]
\\
&=&\E\bigl[ h(X_1,\dots, X_m)\big| X_1=x_1,\dots, X_c=x_c\bigr]
\end{eqnarray*}
and the variances by $\sigma_c^2:=\V\bigl[h_c(X_1,\dots,X_c)\bigr]$.
A U-statistic is called {\it degenerate of order $d$} if and only if $0=\sigma_1^2 = \cdots = \sigma_d^2 < \sigma_{d+1}^2$ and
{\it non-degenerate} if $\sigma_1^2>0$. As is well known, the weak limits of appropriately scaled $U$-statistics depend
on the order of degeneracy. By the Hoeffding-decomposition (see for example \cite{Lee:1990}), we know
that for every symmetric function $h$, the $U$-statistic can be decomposed into a sum
of degenerate $U$-statistics of different orders.
In the degenerate case the linear term of this decomposition
disappears. On the level of moderate deviations, in \cite{EichelsbacherSchmock:2003} the MDP for non-degenerate $U$-statistics
is investigated; the proof used the fact that the linear term
in the Hoeffding-decomposition is leading in the non-degenerate case.
Moreover in \cite{EichelsbacherSchmock:2003},
moderate deviation principles for Banach-space valued degenerate $U$-statistics were established, with non-convex rate functions.
In the present paper the $U$-statistics under consideration are assumed to be non-degenerate. The main result is:
\begin{theorem}{\it (Moderate deviations for non-degenerate $U$-statistics)}
\label{thm2.14}\newline
Let $X_1, X_2, \dots$ be a sequence of independent and identically distributed random variables and
$$
U_n(h)= \frac{1}{\left(n\atop 2\right)} \sum_{1\leq i_1 < i_2 \leq n} h(X_{i_1}, X_{i_2})
$$
a non-degenerate $U$-statistic of degree two. Let $\sigma_1^2:= \V \left( \E[h(X_1,X_2)| X_1] \right) <\infty$ and suppose that there exist constants $\gamma\geq1$ and $C>0$ such that
\begin{equation}\label{UStatKum}
\E\bigl[|h(X_1,X_2)|^j\bigr]\leq C^j (j!)^{\gamma}
\end{equation}
for all $j\geq 3$.
Defining
$$
C(\sigma_1) :=
\left\{\begin{array}{ll}
\frac{C}{\sigma_1}&\text{, if } C\leq \sigma_1\\
\frac{C^3}{\sigma_1^3}&\text{, if } C> \sigma_1\\
\end{array}
\right.
$$
and $\Delta_n:=\left(\frac{\sqrt{n}}{2 \sqrt{2} e C(\sigma_1)}\right)$,
let $(a_n)_n$ be a sequence growing to infinity such that
\begin{equation}\label{eqna_nDelta}
\frac{a_n}{\Delta_n^{1/(1+2\gamma)}} \stackrel{n\to\infty}{\longrightarrow} 0\, .
\end{equation}
Then $\Bigl(\frac{U_n}{a_n \sqrt{\V(U_n)}}\Bigr)_{n\in\mathbb N}$ satisfies the moderate deviation principle with speed $a_n^2$ and rate function
$I(x)=\frac{x^2}{2}$.
\end{theorem}
\begin{remark}\label{Remarkustat}
Let us discuss the conditions \eqref{UStatKum} and \eqref{eqna_nDelta} in detail.
\noindent
{\bf a.}
In \cite{EichelsbacherSchmock:2003} a moderate deviation
principle for degenerate and for non-degenerate U-statistics with a kernel
function $h$, which is bounded or satisfies exponential moment conditions,
was considered (see also \cite{Eichelsbacher:2001}).
In \cite{EichelsbacherSchmock:2003} the exponential moment conditions for a non-degenerate
$U$-statistic of degree two read as follows: the function $h_1$ of the leading term in the Hoeffding-decomposition
has to satisfy the weak Cram\'er condition $\int \exp(\alpha \|h_1\|) dP < \infty$
for some $\alpha >0$. Moreover $h_2$ has to satisfy the condition that there exists at least one $\alpha_h >0$ such that
$\int \exp(\alpha_h \|h_2\|^2) dP^2 < \infty$. The MDP in \cite{EichelsbacherSchmock:2003}
was proved for $1 \ll a_n \ll \sqrt{n}$. Since the leading term of the Hoeffding decomposition is a partial sum of
i.i.d. random variables, the weak Cram\'er condition on $h_1$ can be relaxed. A necessary and sufficient condition
is given in \cite{EichelsbacherLoewe:2003} which is
$$
\limsup_{n \to \infty} \frac{1}{a_n^2} \log \bigl( n P \bigl( |h_1(X_1)| > \sqrt{n} a_n \bigr) \bigr) = - \infty.
$$
The strong condition on $h_2$ is due to the fact that a Bernstein-type inequality for the degenerate
part of the Hoeffding-decomposition was applied, see \cite[Theorem 3.26]{EichelsbacherSchmock:2003}.
Unfortunately it is not obvious how to compare condition \eqref{UStatKum} with the conditions in \cite{EichelsbacherSchmock:2003}.
Condition \eqref{UStatKum} is a Bernstein-type condition on the moments of $h$, which is equivalent to a weak Cram\'er condition
on $h$. We impose no assumptions on $h_2$, hence \eqref{UStatKum} seems to be weaker. On the other hand, even in the
case of the best bounds ($\gamma =1$) in \eqref{UStatKum}, our result is restricted to $1 \ll a_n \ll n^{1/6}$. The price
of less restrictive conditions on $h$ seems to be that the moderate deviation principle holds in a smaller scaling interval.
Our Theorem is an improvement of \cite{EichelsbacherSchmock:2003} for some $a_n$.
\noindent
{\bf b.}
We can also compare the result in Theorem \ref{thm2.14} with the result in
\cite[Theorem 3.1]{DoeringEichelsbacher:2009}, which was deduced via the Laplace transform. Let the kernel function $h$ be bounded.
Obviously condition \eqref{UStatKum} is fulfilled with $\gamma=1$ and according to Theorem \ref{thm2.14}
the object $\Bigl(\frac{U_n}{a_n \sqrt{\V(U_n)}}\Bigr)_n$ satisfies the MDP with speed $a_n^2$ and rate function
$I(x)=\frac{x^2}{2}$ for every sequence $(a_n)_n$ growing to infinity slow enough such that $1 \ll a_n\ll n^{1/6}$.
Let $(b_n)_n$ be a sequence satisfying $\sqrt{n}\ll b_n \ll n$. From \cite[Theorem 3.1]{DoeringEichelsbacher:2009} it follows that
$\bigl(\frac{n}{b_n}U_n\bigr)_n$ satisfies the MDP with speed $\frac{b_n^2}{n}$ and rate function
$I(x)=\frac{x^2}{8\sigma_1^2}$. Choosing $b_n=n a_n \sqrt{\V U_n}$ in \cite[Theorem 3.1]{DoeringEichelsbacher:2009} requires
the scaling $\sqrt{n}\ll b_n =n a_n \sqrt{\V U_n} \ll n \,$.
Using that $n\V U_n = 4\sigma_1^2 + O\bigl(\frac{1}{n}\bigr)$ gives
$1\ll a_n \ll \sqrt{n}$. From \cite[Theorem 3.1]{DoeringEichelsbacher:2009} we obtain, that
$\frac{n}{b_n}U_n=\frac{U_n}{a_n \sqrt{\V(U_n)}}$ satisfies the MDP with speed
$n a_n^2 \V U_n= a_n^2 4 \sigma_1^2 + O\bigl(\frac{a_n^2}{n}\bigr)$
and rate function $I(x)=\frac{x^2}{8\sigma_1^2}$. This is the same result
as stated above via Theorem \ref{thm2.14}. Therefore the MDP via the log-Laplace transform holds for a larger scaling range.
But \cite[Theorem 3.1]{DoeringEichelsbacher:2009} assumed bounded $U$-statistics, and thus Theorem \ref{thm2.14} is valid for more
general kernel functions $h$ for some $a_n$.
\end{remark}
\begin{proof}
According to \cite{Alesk:1990}, see \cite[Lemma 5.3]{SaulisStratulyavichus:1989},
the cumulants of $U_n$ can be bounded by
$$
|\Gamma_j(U_n)| < 2 e^{2(j-2)}\frac{2^j-1}{j} C^j (j!)^{1+\gamma} \frac{1}{n^{j-1}}
$$
for all $j=1,2,\dots,n-1$ and $n\geq 7$. The quite involved proof is presented in \cite{SaulisStratulyavichus:1989}.
The variance for the non-degenerate $U$-statistic is given by
$\V (U_n)= \frac{4 \sigma_1^2}{n} \frac{n-2}{n-1} + \frac{2 \sigma_2^2}{n(n-1)}$,
see Theorem 3 in \cite[chapter 1.3]{Lee:1990}. Therefore there exists an $n_0\geq 7$
large enough such that $\sqrt{\V (U_n)}\geq \frac{e \sigma_1}{\sqrt{2n}}$.
The following bound holds for the cumulants of $\frac{U_n}{\sqrt{\V(U_n)}}$:
$$
|\Gamma_j|\leq
(j!)^{1+\gamma} \left(\frac{2 \sqrt{2} e C(\sigma_1)}{\sqrt{n}}\right)^{j-2}
$$
for all $j= 3,\dots, n-1$ and $n\geq n_0$.
Applying Theorem \ref{thmcumulants} with $\Delta_n=\left(\frac{\sqrt{n}}{2 \sqrt{2} e C(\sigma_1)}\right)$,
condition \eqref{eqna_nDelta} is sufficient for the moderate deviation principle.
\end{proof}
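The variance formula $\V(U_n)= \frac{4 \sigma_1^2}{n} \frac{n-2}{n-1} + \frac{2 \sigma_2^2}{n(n-1)}$ quoted from \cite{Lee:1990} can be checked by exact enumeration; the kernel $h(x,y)=x+y$ with Bernoulli$(1/2)$ data is a toy choice of ours (here $\sigma_1^2=\frac14$, $\sigma_2^2=\frac12$).

```python
# Exact V(U_n) for h(x, y) = x + y and i.i.d. Bernoulli(1/2) data, n = 3,
# compared with the closed-form variance formula for degree-two U-statistics.
from fractions import Fraction as F
from itertools import combinations, product

def exact_var_U(n):
    pairs = list(combinations(range(n), 2))
    vals = [F(sum(x[i] + x[j] for i, j in pairs), len(pairs))
            for x in product([0, 1], repeat=n)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

n, s1sq, s2sq = 3, F(1, 4), F(1, 2)
formula = F(4) * s1sq / n * F(n - 2, n - 1) + F(2) * s2sq / (n * (n - 1))
assert exact_var_U(n) == formula  # both equal 1/3
```

Exact rational arithmetic via `fractions` avoids any tolerance question in the comparison.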
\begin{remark}
Let us remark that known precise estimates on cumulants enable us to prove moderate deviation principles
for further probabilistic objects. Examples are polynomial forms, Pitman polynomial estimators and multiple stochastic integrals (see \cite{SaulisStratulyavichus:1989}).
This, however, is not the topic of the present paper.
\end{remark}
\bigskip
\section{Moderate deviations for the characteristic polynomials in the circular ensembles}
In the last decade, a huge number of results in random matrix theory were proved. Some of the results were extrapolated
to make interesting conjectures on the behaviour of the Riemann zeta function on the critical line. It is known that random matrix
statistics describe the local statistics of the imaginary parts of the zeros high up on the critical line. The random matrix
statistic considered for this conjectural understanding of the zeta-function is the characteristic polynomial $Z(\theta) := Z(U,\theta) =
\det\bigl(I-U e^{-i \theta}\bigr)$ of a unitary $n \times n$ matrix $U$. The matrix $U$ is considered as a random variable in the
{\it circular unitary ensemble} (CUE), that is, the unitary group $U(n)$ equipped with the unique translation-invariant (Haar) probability measure.
In \cite{KeatingSnaith:2000} exact expressions for any matrix size $n$ are derived for the moments of $|Z|$ and from these the asymptotics of the value
distribution and cumulants of the real and imaginary parts of $\log Z$ as $n \to \infty$ are obtained. In the limit, these distributions are independent and Gaussian.
In \cite{KeatingSnaith:2000} the results were generalized to the circular orthogonal (COE) and the circular symplectic (CSE) ensembles. The goal of this section is to
prove a moderate deviation principle for the appropriately rescaled $\log Z$ for the three classical circular ensembles applying Theorem \ref{thmcumulants}.
Note that our result is known for the CUE, see \cite[Theorem 3.5]{HughesKeatingOConnell:2001} and Remark \ref{remarkchar}.
We present a different proof and generalize the result to the COE and CSE ensembles. We start with the representation of $Z(U, \theta)$ in terms
of the eigenvalues $e^{i \theta_k}$ of $U$:
$$
Z(U,\theta)= \det\bigl(I-U e^{-i \theta}\bigr) = \prod_{k=1}^n \bigl(1-e^{i(\theta_k-\theta)}\bigr).
$$
Let $Z$ now represent the characteristic polynomial of an $n \times n$ matrix $U$ in either the CUE ($\beta=2$), the COE ($\beta=1$), or the CSE ($\beta=4$).
The $C \beta E$ average can then be performed using the joint probability density for the eigenphases $\theta_k$:
$$
\frac{(\beta/2)!^n}{(n \beta/2)! (2 \pi)^n} \prod_{1 \leq j < m \leq n} |e^{i \theta_j} - e^{i \theta_m}|^{\beta}.
$$
Hence the $s$-moment of $|Z|$ is of the form
$$
\langle |Z|^s \rangle_{\beta} = \frac{(\beta/2)!^n}{(n \beta/2)! (2 \pi)^n} \int_0^{2 \pi} \cdots \int_0^{2 \pi} d\theta_1 \cdots d\theta_n
\prod_{1 \leq j < m \leq n} |e^{i \theta_j} - e^{i \theta_m}|^{\beta} \times \bigg| \prod_{k=1}^n \bigl(1 - e^{i(\theta_k - \theta)} \bigr) \bigg|^s.
$$
This integral can be evaluated using Selberg's formula, see \cite{Mehta:book}, which leads to
$$
\langle |Z|^s \rangle_{\beta} = \prod_{j=0}^{n-1} \frac{\Gamma(1 + j \beta /2) \Gamma(1 + s + j \beta /2)}{(\Gamma( 1 + s/2 + j \beta /2))^2}
$$
denoting the gamma function by $\Gamma$ (without an index).
Hence $\log \langle |Z|^s \rangle_{\beta}$ has an easy form and equals at the same time by definition $\sum_{j \geq 1} \frac{\Gamma_j(\beta)}{j!} s^j$,
where $\Gamma_j(\beta)= \Gamma_j(\Re \log Z)$ denotes the $j$-th cumulant of the distribution of the real part of $\log Z$ under $C \beta E$. Differentiating $\log \langle |Z|^s \rangle_{\beta}$
one obtains
\begin{equation} \label{vorbereitung}
\Gamma_j (\beta) = \frac{2^{j-1} -1}{2^{j-1}} \sum_{k=0}^{n-1} \psi^{(j-1)}(1 + k \beta /2),
\end{equation}
where
$$
\psi^{(j)}(z):= \frac{d^{j+1} \log \Gamma(z)}{dz^{j+1}}
= (-1)^{j+1} \int_0^\infty \frac{t^j e^{-zt}}{1-e^{-t}} dt
$$
for $z\in\mathbb C$ with $\Re z>0$ are the polygamma functions, see \cite[6.4.1]{AbramowitzStegun:1964}.
The result of this section is:
\begin{theorem}\label{thmRM}
Let $(a_n)_{n\in\mathbb N}$ be a sequence in $\mathbb R$ such that
$1\ll a_n \ll \sqrt{\log n}$ holds. The sequence of random variables
$\Bigl(\frac{\Re \log Z}{a_n \sqrt{\log{n}}}\Bigr)_{n\in\mathbb N}$ and
$\Bigl(\frac{\Im \log Z }{a_n \sqrt{\log{n}}}\Bigr)_{n\in\mathbb N}$ under the average over the $C \beta E$ of $n \times n$ matrices
satisfy a moderate deviation principle for $\beta=1,2$ and $4$ with speed $a_n^2$ and rate function
$I(x)=\frac{x^2}{2}$.
\end{theorem}
\begin{remark} \label{remarkchar}
Theorem~\ref{thmRM} for $\beta=2$ states the same moderate deviation principle as
in \cite[page 440, Theorem 3.5]{HughesKeatingOConnell:2001} for the same scaling
range $1\ll a_n \ll \sqrt{\log n}$ -- but the speed in Theorem~\ref{thmRM} here is
given more explicitly:
The speed $b_n$ of moderate deviations in
\cite[Theorem 3.5]{HughesKeatingOConnell:2001} is given by
$b_n= - \frac{a_n^2 \sigma_n^2}{W_{-1}\bigl(-\frac{a_n \sigma_n}{n}\bigr)}$,
where $W_{-1}$ denotes a real branch of the Lambert $W$-function.
The Lambert $W$-function solves the equation $W(x) e^{W(x)}=x$, and $W_{-1}$
denotes the real branch with $W_{-1}(x)\leq -1$. For negative $x$
tending to zero we get the following asymptotic behaviour:
$W_{-1}(x) = \log |x| + {\mathcal O}\bigl(\log\bigl| \log|x| \bigr|\bigr) $.
This implies that the limiting speed behaves like
$$
b_n= - \frac{a_n^2 \sigma_n^2}{W_{-1}\bigl(-\frac{a_n \sigma_n}{n}\bigr)}
\sim \frac{a_n^2 \sigma_n^2}{\log n - \log{a_n \sigma_n}}
\sim \frac{a_n^2 \sigma_n^2}{\log n}\sim a_n^2 = s_n\,.
$$
Additionally in \cite[Theorem 3.5]{HughesKeatingOConnell:2001}
the asymptotic behaviour of $\frac{\Re \log(Z)}{a_n \sqrt{\log{n}}}$ for scaling ranges $a_n=\sqrt{\log n}$
and $\sqrt{\log n} \ll a_n \ll n / \sqrt{\log n}$ is considered.
The circular orthogonal and circular symplectic ensembles were not studied
in \cite{HughesKeatingOConnell:2001}.
\end{remark}
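The asymptotics $W_{-1}(x) = \log|x| + {\mathcal O}(\log|\log|x||)$ quoted in the remark above can be illustrated numerically; the bisection helper below is our own (it is not a library routine) and exploits that $w \mapsto w e^w$ is monotone on $(-\infty,-1]$.

```python
# Numerical sketch of the W_{-1} asymptotics: solve w * exp(w) = x by bisection
# on the branch w <= -1, then compare w with log|x|.
import math

def lambert_w_minus1(x, lo=-50.0, hi=-1.0):
    """Real branch W_{-1} (w <= -1) of w*exp(w) = x, for x in (-1/e, 0)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.exp(mid) > x:   # w*e^w decreases from 0^- to -1/e on this branch
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = -1e-6
w = lambert_w_minus1(x)
assert abs(w * math.exp(w) - x) < 1e-12                      # it solves the equation
assert abs(w - math.log(abs(x))) < 2 * math.log(-math.log(-x))  # log|x| + O(log|log|x||)
```

For $x=-10^{-6}$ one finds $W_{-1}(x)\approx-16.6$ against $\log|x|\approx-13.8$, consistent with the stated error order.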
\begin{proof}[Proof of Theorem \ref{thmRM}]
In \cite[eq. (47)]{KeatingSnaith:2000} an integral
representation of the cumulants of $\Re \log(Z)$ for the case $\beta=2$ is derived and
an outline of the extension to $\beta=1$ and $4$ is given. Similarly we prove a bound of the
cumulants satisfying the condition \eqref{eqcumulants} for these three circular
ensembles. With \eqref{vorbereitung} the cumulant can be written as
\begin{eqnarray*}
\Gamma_j\bigl(\Re \log(Z)\bigr)
&=& \frac{2^{j-1}-1}{2^{j-1}} \sum_{k=0}^{n-1} \psi^{(j-1)}\bigl(1+k \frac{\beta}{2}\bigr)
\\
&=& \frac{2^{j-1}-1}{2^{j-1}} \sum_{k=0}^{n-1} (-1)^{j} \int_0^{\infty} \frac{t^{j-1} e^{-(1+k\frac{\beta}{2})t}}{1-e^{-t}} dt
\\
&=& \frac{2^{j-1}-1}{2^{j-1}} (-1)^{j} \int_0^{\infty} t^{j-1} \frac{e^{-t}}{1-e^{-t}} \frac{1-e^{-n \frac{\beta}{2} t}}{1-e^{-\frac{\beta}{2}t}} dt
\\
&=& \frac{2^{j-1}-1}{2^{j-1}} (-1)^{j} \int_0^{\infty} t^{j-1} e^{-t} \bigl(1-e^{-n \frac{\beta}{2} t}\bigr) \sum_{r=0}^{\infty} \sum_{s=0}^{\infty} e^{-(s+r \frac{\beta}{2})t} dt
\end{eqnarray*}
using properties of geometric series for the last two equalities.
Thus we have
$$
\Gamma_j\bigl(\Re \log(Z)\bigr)
= \frac{2^{j-1}-1}{2^{j-1}} (-1)^{j} \sum_{r=0}^{\infty} \sum_{s=1}^{\infty} \int_0^{\infty} t^{j-1} e^{-(s+r \frac{\beta}{2})t} \bigl(1-e^{-n \frac{\beta}{2} t}\bigr) dt.
$$
To obtain a representation via the gamma function we substitute $u=(s+r \frac{\beta}{2})t$ in each
integral, which yields a factor $\Gamma(j) \bigl(s+r \frac{\beta}{2}\bigr)^{-j}$:
\begin{eqnarray}
\Gamma_j\bigl(\Re \log(Z)\bigr)
&=& \frac{2^{j-1}-1}{2^{j-1}} (-1)^{j} \Gamma(j) \left(
\sum_{r=0}^{\infty} \sum_{s=1}^{\infty} \frac{1}{(s+r \frac{\beta}{2})^j} -
\sum_{r=n}^{\infty} \sum_{s=1}^{\infty} \frac{1}{(s+r \frac{\beta}{2})^j}
\right)
\nonumber\\
&\leq& \frac{2^{j-1}-1}{2^{j-1}} (-1)^{j} \Gamma(j)
\sum_{r=0}^{\infty} \sum_{s=1}^{\infty} \frac{1}{(s+r \frac{\beta}{2})^j}\,.
\label{cumulantenzwischenstand}
\end{eqnarray}
In the case $\beta=1$ we can estimate the sum as follows:
For $r\in \mathbb N_0$ and $s\in \mathbb N$ the integer $k= 2 (s+\frac{r}{2})=2s+r$ can be
written in $k/2$ ways if $k$ is even, in no way if $k=1$, and in $\frac{k-1}{2}$
ways otherwise.
\begin{eqnarray*}
\sum_{r=0}^{\infty} \sum_{s=1}^{\infty} \frac{1}{(s+\frac{r}{2})^j}
&=& 2^j \left( \sum_{k=1}^{\infty} \frac{k}{(2k)^j}
+ \sum_{k=1}^{\infty} \frac{k}{(2k+1)^j}\right)
\\
&\leq& 2^{j-1} \left(
\sum_{k=1}^{\infty} \frac{2k}{(2k)^j}
+ \sum_{k=1}^{\infty} \frac{2k-1}{(2k-1)^j}
+ \sum_{k=1}^{\infty} \frac{1}{(2k-1)^j}
\right)
\\
&=& 2^{j-1} \left(\zeta(j-1) + \bigl(1-\frac{1}{2^j}\bigr) \zeta(j) \right),
\end{eqnarray*}
applying the fact that
$
\sum_{k=1}^{\infty} (2k-1)^{-j}
= \sum_{k=1}^{\infty} k^{-j} - \sum_{k=1}^{\infty} (2k)^{-j}
= \bigl(1-\frac{1}{2^j}\bigr) \zeta(j)
$.
Since $j\geq 3$, both zeta values are bounded by $\zeta(2)=\frac{\pi^2}{6}$, which gives
$$
\sum_{r=0}^{\infty} \sum_{s=1}^{\infty} \frac{1}{(s+\frac{r}{2})^j}
\leq 2^{j-1} 2 \frac{\pi^2}{6} = 2^{j-1} \frac{\pi^2}{3}.
$$
For $\beta=2$ we immediately get:
$$
\sum_{r=0}^{\infty} \sum_{s=1}^{\infty} \frac{1}{(s+r \frac{\beta}{2})^j}
= \sum_{k=1}^{\infty} \frac{k}{k^j} = \zeta(j-1) \leq \frac{\pi^2}{6}.
$$
The case $\beta=4$ can be treated similarly, see \cite[p.84]{KeatingSnaith:2000}:
counting the ways in which $k=s+2r$ can be written, this yields
$$
\sum_{r=0}^{\infty} \sum_{s=1}^{\infty} \frac{1}{(s+2r)^j}
\leq \frac{1}{2} \left(\zeta(j-1) + \bigl(1-\frac{1}{2^j}\bigr) \zeta(j) \right)
\leq \frac{\pi^2}{6}.
$$
Together with equation \eqref{cumulantenzwischenstand} we can conclude that
\begin{eqnarray*}
\left| \Gamma_j\Bigl(\frac{\Re \log(Z)}{\sigma_{n,\beta}}\Bigr) \right|
&=& \frac{\bigr| \Gamma_j \Re \log(Z) \bigr|}{\sigma_{n,\beta}^j}
\leq \frac{2^{j-1}-1}{2^{j-1}} \Gamma(j)
\sum_{r=0}^{\infty} \sum_{s=1}^{\infty} \frac{1}{(s+r \frac{\beta}{2})^j}
\frac{1}{\sigma_{n,\beta}^j}
\\
&\leq& \Gamma(j) \frac{1}{\sigma_{n,\beta}^j}
\left\{\begin{array}{ll}
2^{j-1} \frac{\pi^2}{3} & \text{for }\beta=1\\
\frac{\pi^2}{6} & \text{for }\beta=2,4.
\end{array}
\right.
\end{eqnarray*}
In order to read off the parameters $\gamma$ and $\Delta_n$ we use that the variance of
$\Re \log(Z)$ is bounded from below by
$\sigma_{n,\beta}^2 \geq \frac{\log 2}{\beta} \geq \frac{1}{2\beta}$.
Finally we have
\begin{equation}
\left| \Gamma_j\Bigl(\frac{\Re \log(Z)}{\sigma_{n,\beta}}\Bigr) \right|
\leq
(j!) \frac{1}{\sigma_{n,\beta}^{j-2}}
\left\{\begin{array}{ll}
2^j \frac{\pi^2}{3} & \text{for }\beta=1\\
4 \frac{\pi^2}{6} & \text{for }\beta=2\\
8 \frac{\pi^2}{6} & \text{for }\beta=4
\end{array}
\right\}
\leq
(j!) \frac{1}{\sigma_{n,\beta}^{j-2}}
\left\{\begin{array}{ll}
\bigl( \frac{8\pi^2}{3}\bigr)^{j-2} & \text{for }\beta=1\\
\bigl(\frac{2\pi^2}{3}\bigr)^{j-2} & \text{for }\beta=2\\
\bigl(\frac{4\pi^2}{3}\bigr)^{j-2} & \text{for }\beta=4
\end{array}
\right.
\end{equation}
for all $j\geq 3$, hence equation \eqref{eqcumulants} is satisfied for $\gamma=0$ and
$\Delta_n= \frac{3 \sigma_{n,\beta}}{8\pi^2}$.
Theorem~\ref{thmcumulants} completes the proof for $\Re \log(Z)$.
Since the $j$-th cumulant of the distribution of the imaginary part of $\log Z$ can be bounded by
the $j$-th cumulant of the distribution of the real part of $\log Z$
for all $j\geq 3$, see \cite[eq. (62)]{KeatingSnaith:2000}, the MDP of
$\Im \log(Z)$ follows immediately.
\end{proof}
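The odd-index zeta identity $\sum_{k\geq1}(2k-1)^{-j}=(1-2^{-j})\zeta(j)$ used in the $\beta=1$ estimate can be checked numerically with plain partial sums; the helpers below are our own and are not part of the proof.

```python
# Numerical check of sum_{k>=1} (2k-1)^(-j) = (1 - 2^(-j)) * zeta(j) for j >= 3,
# using truncated sums (tails are of order terms^(1-j), negligible here).
def zeta(j, terms=200000):
    return sum(k ** -j for k in range(1, terms))

def odd_sum(j, terms=200000):
    return sum((2 * k - 1) ** -j for k in range(1, terms))

for j in (3, 4, 5):
    assert abs(odd_sum(j) - (1 - 2.0 ** -j) * zeta(j)) < 1e-9
```

The same truncation also confirms the bound $\zeta(j)\leq\zeta(2)=\pi^2/6$ for $j\geq2$ used above.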
\begin{remark}
Dyson observed that the induced eigenvalue distributions of the $C \beta E$ ensembles correspond to the Gibbs distribution
for the classical Coulomb gas on the circle at three different temperatures. Matrix models for general $\beta >0$
for Dyson's circular eigenvalue statistics are provided in \cite{KillipNenciu:2007}, using the theory of orthogonal polynomials
on the unit circle. They obtained a sparse matrix model which is five-diagonal. In this framework, there is no natural
underlying measure such as the Haar measure; the matrix ensembles are characterized by the laws of their elements.
\end{remark}
\section{Moderate deviations for determinantal point processes}
The collection of eigenvalues of a random matrix can be viewed as a configuration of points
(on ${\Bbb R}$ or on ${\Bbb C}$), that is, as a determinantal point process. Central Limit Theorems
for occupation numbers were studied in the literature, see \cite{Zeitounibook}
and references therein. This section is devoted to the study of moderate deviation principles
for occupation numbers of determinantal point processes. We will see that it will be an application
of Theorem \ref{thmnotidentical}.
Let $\Lambda$ be a locally compact Polish space, equipped with a positive Radon measure $\mu$ on its
Borel $\sigma$-algebra. Let ${\mathcal M}_+(\Lambda)$ denote the set of positive $\sigma$-finite Radon measures on $\Lambda$.
A point process is a random, integer-valued measure $\chi \in {\mathcal M}_+(\Lambda)$, and it is called simple if $P( \exists x \in \Lambda: \chi(\{x\}) >1)=0$.
A locally integrable function $\varrho : \Lambda^k \to [0, \infty)$ is called a joint intensity (correlation), if for
any mutually disjoint family of subsets $D_1, \ldots, D_k$ of $\Lambda$
$$
\E \bigl( \prod_{i=1}^k \chi(D_i) \bigr) = \int_{\prod_{i=1}^k D_i} \varrho_k(x_1, \ldots, x_k) d\mu(x_1) \cdots d \mu(x_k),
$$
where $\E$ denotes the expectation with respect to the law of the point configurations of $\chi$.
A simple point process $\chi$ is said to be a {\it determinantal point process} with kernel $K$ if its joint intensities $\varrho_k$
exist and are given by
\begin{equation} \label{DPP}
\varrho_k(x_1, \ldots, x_k) = \det_{i,j=1}^k \bigl( K(x_i, x_j) \bigr).
\end{equation}
An integral operator ${\mathcal K}: L^2(\mu) \to L^2(\mu)$ with kernel $K$ given by
$$
{\mathcal K}(f)(x) = \int K(x,y) f(y) \, d \mu(y), \quad f \in L^2(\mu)
$$
is {\it admissible} with admissible kernel $K$ if ${\mathcal K}$ is self-adjoint, nonnegative and locally trace-class
(for details see \cite[4.2.12]{Zeitounibook}). A standard result is that a compact integral operator
${\mathcal K}$ with admissible kernel $K$ possesses the decomposition
$$
{\mathcal K} f(x) = \sum_{k=1}^n \lambda_k \phi_k(x) \langle \phi_k, f \rangle_{L^2(\mu)},
$$
where the functions $\phi_k$ are orthonormal in $L^2(\mu)$, $n$ is either finite or infinite, and $\lambda_k >0$
for all $k$, leading to
\begin{equation} \label{kernelrep}
K(x,y) = \sum_{k=1}^n \lambda_k \phi_k(x) \phi_k^*(y),
\end{equation}
an equality in $L^2(\mu \times \mu)$.
Moreover, an admissible integral operator ${\mathcal K}$ with kernel $K$ is called {\it good} with good kernel $K$ if the $\lambda_k$ in \eqref{kernelrep}
satisfy $\lambda_k \in (0,1]$. If the kernel $K$ of a determinantal point process is (locally) admissible, then it must in fact be good, see
\cite[4.2.21]{Zeitounibook}.
\begin{example} \label{GUEDDP}
Let $(\lambda_1, \ldots, \lambda_n)$ be the eigenvalues of the GUE (Gaussian unitary ensemble) of dimension $n$ and denote
by $\chi_n$ the point process $\chi_n(D) = \sum_{i=1}^n 1_{\{ \lambda_i \in D\}}$. Then $\chi_n$ is a determinantal point process with admissible, good
kernel $K(x,y)= \sum_{k=0}^{n-1} \Psi_k(x) \Psi_k(y)$, where the functions $\Psi_k$ are the oscillator wave-functions, that is
$\Psi_k(x) := \frac{e^{-x^2/4} H_k(x)}{\sqrt{\sqrt{2 \pi} k!}}$, where $H_k(x):= (-1)^k e^{x^2/2} \frac{d^k}{dx^k} e^{-x^2/2}$ is the $k$-th
Hermite polynomial; see \cite[Def. 3.2.1, Ex. 4.2.15]{Zeitounibook}.
\end{example}
We will apply the following representation due to \cite[Theorem 7]{HoKPV06}: Suppose $\chi$ is a determinantal process with good kernel $K$ of the form
\eqref{kernelrep}, with $\sum_k \lambda_k < \infty$. Let $(I_k)_{k=1}^n$ be independent Bernoulli variables with $P(I_k=1) = \lambda_k$. Set
$$
K_I(x,y) = \sum_{k=1}^n I_k \, \phi_k(x) \phi_k^*(y),
$$
and let $\chi_I$ denote the determinantal point process with random kernel $K_I$. Then $\chi$ and $\chi_I$ have the same distribution.
Therefore, let $K$ be a good kernel and for $D \subset \Lambda$ we write $K_D(x,y)= 1_D(x) K(x,y) 1_D(y)$. Let $D$ be such that
$K_D$ is trace-class, with eigenvalues $\lambda_k$, $k \geq 1$. Then $\chi(D)$ has the same distribution as $\sum_k \xi_k$
where $\xi_k$ are independent Bernoulli random variables with $P(\xi_k=1)= \lambda_k$ and $P(\xi_k =0) = 1 - \lambda_k$.
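The Bernoulli representation above can be made concrete: the law of $\sum_k \xi_k$ (a Poisson-binomial distribution) is computable exactly by dynamic programming, and its mean $\sum_k \lambda_k$ and variance $\sum_k \lambda_k(1-\lambda_k)$ match the moments used below. The helper and the sample $\lambda_k$ are our own illustration.

```python
# Exact pmf of a sum of independent Bernoulli(lambda_k) variables
# (Poisson-binomial), built up one Bernoulli at a time.
def poisson_binomial(lams):
    pmf = [1.0]
    for lam in lams:
        new = [0.0] * (len(pmf) + 1)
        for i, p in enumerate(pmf):
            new[i] += (1 - lam) * p      # xi_k = 0
            new[i + 1] += lam * p        # xi_k = 1
        pmf = new
    return pmf

lams = [0.2, 0.5, 0.9]
pmf = poisson_binomial(lams)
mean = sum(i * p for i, p in enumerate(pmf))
var = sum(i ** 2 * p for i, p in enumerate(pmf)) - mean ** 2
assert abs(mean - sum(lams)) < 1e-12
assert abs(var - sum(l * (1 - l) for l in lams)) < 1e-12
```

In the determinantal setting the $\lambda_k$ are the eigenvalues of $(K_n)_{D_n}$, so this computes the exact law of $\chi(D)$ whenever finitely many eigenvalues are nonzero.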
Now we can state the main result of this section:
\begin{theorem} \label{mdpDDP}
Consider a sequence $(\chi_n)_n$ of determinantal point processes on $\Lambda$ with good kernels $K_n$. Let $D_n$
be a sequence of measurable subsets of $\Lambda$ such that $(K_n)_{D_n}$ is trace class. Assume that $(a_n)_n$ is a sequence of real numbers such that
$$
1 \ll a_n \ll \frac{\bigl( \sum_{k=1}^n \lambda_k^n(1- \lambda_k^n) \bigr)^{1/2}}{ \max_{1 \le i \le n} (\lambda_i^n(1-\lambda_i^n))^{1/2}}.
$$
Then $(Z_n)_n$ with
$$
Z_n := \frac{1}{a_n} \frac{\chi_n(D_n) - \E (\chi_n(D_n))}{\sqrt{\V (\chi_n(D_n))}}
$$
satisfies a moderate deviation principle with speed $a_n^2$ and rate function $I(x) = \frac{x^2}{2}$.
\end{theorem}
\begin{remark}
Obviously we have $\max_{1 \le i \le n} (\lambda_i^n(1-\lambda_i^n))^{1/2} \leq \frac 12$. To ensure that $(a_n)_n$ can grow to infinity,
it is necessary that $\V (\chi_n(D_n))$ goes to infinity. Moreover, under the assumptions of Theorem \ref{mdpDDP},
$$
\V (\chi_n(D_n)) = \sum_k \lambda_k^n (1- \lambda_k^n) \leq \sum_k \lambda_k^n = \int_{D_n} K_n(x,x) d \mu_n(x),
$$
thus for a moderate deviation principle, it is necessary that $\lim_{n \to \infty} \int_{D_n} K_n(x,x) d\mu_n(x) = + \infty$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{mdpDDP}]
We only have to check a moderate deviation principle for the rescaled partial sums of independent Bernoulli random variables $\xi_k$
with $P(\xi_k =1) = \lambda_k$. Therefore we apply Theorem \ref{thmnotidentical}. Take $X_k^n := \frac{\xi_k - \lambda_k^n}{\sqrt{\lambda_k^n(1- \lambda_k^n)}}$.
Then we obtain easily that condition \eqref{momentenbedingungen} is satisfied for $X_k^n$ with $\gamma=0$ and a constant $K_n=1$.
\end{proof}
\begin{example}[{\it Eigenvalues of the GUE/GOE}]
Let $D=[-a,b]$ with $a,b>0$ and $\alpha \in (-\frac 12, \frac 12)$, and $D_n := n^{\alpha} D$. Consider the determinantal point process of Example
\ref{GUEDDP}. Then $(Z_n)_n$ satisfies a moderate deviation principle; see \cite[4.2.27]{Zeitounibook}, where $\V( \chi_n(D_n)) \to \infty$ is proved applying
an upper bound with the help of the sine-kernel. Note, that the same conclusions hold when the GUE is replaced by the GOE (Gaussian orthogonal ensembles), see
\cite[4.2.29]{Zeitounibook}.
\end{example}
\begin{example}[{\it Sine-, Airy- and Bessel point processes}]
Recall the sine-kernel $K_{sine}(x,y)= \frac{1}{\pi} \frac{\sin(x-y)}{x-y}$ which arises as the limit of many interesting
point processes, for example as a scaling limit in the bulk of the spectrum in the GUE.
With $\Lambda = {\Bbb R}$ and $\mu$ to be the Lebesgue measure, the corresponding operator is locally admissible
and determines a determinantal point process on ${\Bbb R}$. The operator is not of trace class but locally of trace class.
For $D_n = [-n,n]$, consider $K_n = 1_{D_n} K_{sine}$. The Central Limit Theorem for the rescaled $\chi_n(D_n)$ was proved by Costin and Lebowitz
in 1995. They proved that $\V (\chi_n(D_n))$ goes to infinity. Hence a moderate deviation principle for the appropriately rescaled
sine kernel process follows. It was shown in \cite{Soshnikov:2000}, that the condition $\lim_{n \to \infty} \V (\chi_n(D_n))= +\infty$
is satisfied for the Airy kernel $K_{Airy}$ with $D_n=[-n,n]$, and for Bessel kernel $K_{Bessel}$ with $D_n=[-n,n]$. In these cases, the growth
of $\V(\chi_n(D_n))$ is logarithmic with respect to the mean number of points in $D_n$. For a proof that the Airy process has a locally admissible
kernel which determines a determinantal point process, see \cite[4.2.30]{Zeitounibook}.
The Airy kernel arises as a scaling limit at the edge of the spectrum in the GUE and at the soft right edge of the spectrum in the Laguerre ensemble,
while the Bessel kernel arises as a scaling limit at the hard left edge in the Laguerre ensemble. We conclude a moderate deviation principle
for the corresponding kernel point processes. For details and more examples like families of kernels corresponding to random matrices
for the classical compact groups, see \cite{Soshnikov2:2000}.
\end{example}
\section{Proof of Theorem \ref{thmcumulants}}\label{proofsection}
The following lemma is an essential element of the proof of
Theorem \ref{thmcumulants}. Rudzkis, Saulis and Statulevi{\v{c}}ius showed in 1978 that
condition \eqref{eqcumulants} on the cumulants implies the
following large deviation probabilities:
\begin{lemma}\label{lemmaRSS}
Let $Z$ be a centered random variable with variance one and existing
absolute moments, which satisfies
$$
\bigl| \Gamma_j \bigr| \leq (j!)^{1+\gamma} / \Delta^{j-2}
\quad\text{for all } j=3,4, \dots
$$
for fixed $\gamma\geq 0$ and $\Delta>0$.
Then
\begin{eqnarray*}
\frac{P(Z\geq x)}{1-\Phi(x)}&=&
\exp\bigl( L_{\gamma}(x)\bigr)
\Bigl(1+q_1 \psi(x) \frac{x+1}{\Delta_{\gamma}}\Bigr)
\\\text{and }\quad
\frac{P(Z\leq- x)}{\Phi(-x)}&=&
\exp\bigl( L_{\gamma}(-x)\bigr)
\Bigl(1+q_2 \psi(x) \frac{x+1}{\Delta_{\gamma}}\Bigr)
\end{eqnarray*}
hold in the interval $0\leq x < \Delta_{\gamma}$,
using the following notation:
\begin{eqnarray}
\Delta_{\gamma}
&=& \frac{1}{6}\left( \frac{\sqrt{2}}{6} \Delta\right)^{1/(1+2\gamma)}
\nonumber
\\
\psi(x)
&=& \frac{60 \left(1+ 10 \Delta_{\gamma}^2 \exp\bigl( -(1-x/\Delta_{\gamma}) \sqrt{\Delta_{\gamma}}\bigr)\right)}{1-x/\Delta_{\gamma}}\,,
\label{eqpsi}
\end{eqnarray}
$q_1,q_2$ are two constants in the interval $[-1,1]$
and $L_{\gamma}$ is a function
(defined in \cite[Lemma 2.3, eq. (2.8)]{SaulisStratulyavichus:1989}) satisfying
\begin{equation}\label{eqLgamma}
\bigl| L_{\gamma}(x)\bigr| \leq \frac{|x|^3}{3 \Delta_{\gamma}}
\text{ for all $x$ with } |x|\leq \Delta_{\gamma}\,.
\end{equation}
\end{lemma}
For the proof see \cite[Lemma 2.3]{SaulisStratulyavichus:1989}.
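For concreteness, the scaling $\Delta_{\gamma}$ and the cumulant bound of Lemma \ref{lemmaRSS} can be evaluated numerically. The helper functions below are our own sketch (the names `delta_gamma` and `cumulant_bound` are not from the cited works):

```python
import math

def delta_gamma(Delta: float, gamma: float) -> float:
    """Evaluate Delta_gamma = (1/6) * ((sqrt(2)/6) * Delta)**(1/(1+2*gamma))."""
    return ((math.sqrt(2) / 6) * Delta) ** (1 / (1 + 2 * gamma)) / 6

def cumulant_bound(j: int, Delta: float, gamma: float) -> float:
    """Assumed upper bound (j!)**(1+gamma) / Delta**(j-2) on |Gamma_j|."""
    return math.factorial(j) ** (1 + gamma) / Delta ** (j - 2)

# For gamma = 0 the scaling is linear in Delta: Delta_0 = sqrt(2) * Delta / 36.
print(delta_gamma(36 / math.sqrt(2), 0.0))  # approximately 1.0
```

Note that $\Delta_{\gamma}$ decreases in $\gamma$ for fixed $\Delta$, so stronger cumulant growth (larger $\gamma$) shrinks the interval $0\leq x<\Delta_{\gamma}$ on which the expansion is valid.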
\begin{lemma}\label{lemmaERS}
In the situation of Lemma \ref{lemmaRSS} there exist two constants
$C_1(\gamma)$ and $C_2(\gamma)$, which depend only on $\gamma$ and
satisfy the following inequalities:
\begin{eqnarray*}
&&\left| \log \frac{P(Z\geq x)}{1-\Phi(x)} \right| \leq C_2(\gamma) \frac{1+x^3}{\Delta^{1/(1+2\gamma)}} \\
\text{and}&& \left| \log \frac{P(Z\leq -x)}{\Phi(-x)} \right| \leq C_2(\gamma) \frac{1+x^3}{\Delta^{1/(1+2\gamma)}}
\end{eqnarray*}
for all $0\leq x\leq C_1(\gamma) \Delta^{1/(1+2\gamma)}$.
\end{lemma}
\begin{proof}
In \cite{ERS:2009} these bounds were concluded from the previous Lemma \ref{lemmaRSS}.
The proof here is analogous to the proof of \cite[Corollary 3.1]{ERS:2009}.
In the situation of Lemma \ref{lemmaRSS} the function $\psi$
defined in \eqref{eqpsi} is bounded by
$\psi(x) \leq c_1+ c_2\Delta_{\gamma}^2 \exp\bigl(-c_3\sqrt{\Delta_{\gamma}}\bigr)$
for all $0\leq x\leq q \Delta_{\gamma}$ for any fixed constant $q\in[0,1)$
and some positive constants $c_1,c_2$ and $c_3$ depending on $q$ only.
The term $c_1+ c_2\Delta_{\gamma}^2 \exp\bigl(-c_3\sqrt{\Delta_{\gamma}}\bigr)$
can be bounded uniformly in
$\Delta_{\gamma}
=\frac{1}{6}\left( \frac{\sqrt{2}}{6} \Delta\right)^{1/(1+2\gamma)}$,
which, combined with the estimate \eqref{eqLgamma}, implies the existence
of universal positive constants $c_4,c_5$ and $c_6$, such that
$$
\exp\Bigl( \frac{-c_5 x^3}{\Delta^{1/(1+2\gamma)}}\Bigr)
\Bigl( 1-\frac{c_6(1+x)}{\Delta^{1/(1+2\gamma)}}\Bigr)
\leq \frac{P(Z\geq x)}{1-\Phi(x)} \leq
\exp\Bigl( \frac{c_5 x^3}{\Delta^{1/(1+2\gamma)}}\Bigr)
\Bigl( 1+\frac{c_6(1+x)}{\Delta^{1/(1+2\gamma)}}\Bigr)
$$
holds for all $0\leq x \leq c_4 \Delta^{1/(1+2\gamma)}$.
If $\Delta^{1/(1+2\gamma)}\leq 3 c_6$, we can choose $C_1(\gamma)$
and $C_2(\gamma)$ such that the first inequality in Lemma
\ref{lemmaERS} is satisfied.
In the case $\Delta^{1/(1+2\gamma)}> 3 c_6$ we have for all
$0\leq x \leq \frac{\Delta^{1/(1+2\gamma)}}{3 c_6}$
$$
\frac{c_6(1+x)}{\Delta^{1/(1+2\gamma)}} \leq
\frac{c_6}{\Delta^{1/(1+2\gamma)}} +\frac{1}{3}
\leq \frac{2}{3}\,.
$$
If $\Delta^{1/(1+2\gamma)}> 3 c_6$ and
$0\leq x \leq \frac{\Delta^{1/(1+2\gamma)}}{3 c_6}$ hold,
we can bound
\begin{equation*}
\left| \log \frac{P(Z\leq -x)}{\Phi(-x)} \right| \leq \frac{c_5 x^3}{\Delta^{1/(1+2\gamma)}}
+ \max\left\{
\Bigl|\log\Bigl( 1-\frac{c_6(1+x)}{\Delta^{1/(1+2\gamma)}}\Bigr)\Bigr|;
\Bigl|\log\Bigl( 1+\frac{c_6(1+x)}{\Delta^{1/(1+2\gamma)}}\Bigr)\Bigr|
\right\}\,.
\end{equation*}
Due to the concavity of the logarithm, the absolute value of the straight line
\begin{equation*}
g(x)=\frac{3 \log 3}{2} x - \frac{3 \log 3}{2}
\end{equation*}
is greater than or equal to the absolute value of $\log(x)$ for any
$\frac{1}{3}\leq x \leq \frac{5}{3}$. Consequently, we have
\begin{equation*}
|\log(1-y)|\leq \frac{\log 3}{2/3}\, y = \frac{3 \log 3}{2}\, y
\qquad\text{and}\qquad
|\log(1+y)|\leq y
\end{equation*}
for any $0\leq y \leq \frac{2}{3}$.
Thus for $\Delta^{1/(1+2\gamma)}> 3 c_6$ and
$0\leq x \leq \frac{\Delta^{1/(1+2\gamma)}}{3 c_6}$ it follows that
\begin{equation*}
\left| \log \frac{P(Z\leq -x)}{\Phi(-x)} \right|
\leq \frac{c_5 x^3}{\Delta^{1/(1+2\gamma)}}
+ \frac{3 \log 3}{2} \frac{c_6(1+x)}{\Delta^{1/(1+2\gamma)}}
\leq \frac{c_5 x^3}{\Delta^{1/(1+2\gamma)}}
+ \frac{\log 3}{2} \frac{c_6(5+x^3)}{\Delta^{1/(1+2\gamma)}}\,,
\end{equation*}
where we applied $x^3-3x+2=(x-1)^2(x+2)\geq 0$, which is equivalent to
$3(1+x)\leq 5+x^3$.
Thus the first inequality in Lemma \ref{lemmaERS} is proved.
The second inequality in Lemma \ref{lemmaERS} can be proved similarly.
\end{proof}
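The elementary inequalities used in the proof above are easy to confirm numerically; the following sketch (a sanity check only, not part of the argument) tests them on a grid:

```python
import math

# |log(1-y)| <= (3*log(3)/2)*y and |log(1+y)| <= y for 0 <= y <= 2/3,
# and 3*(1+x) <= 5 + x**3, i.e. x**3 - 3*x + 2 = (x-1)**2*(x+2) >= 0, for x >= 0.
slope = 3 * math.log(3) / 2
for k in range(1, 1000):
    y = (2 / 3) * k / 999
    assert abs(math.log(1 - y)) <= slope * y + 1e-12
    assert abs(math.log(1 + y)) <= y
    x = 10 * k / 999
    assert 3 * (1 + x) <= 5 + x ** 3 + 1e-9
print("all grid checks passed")
```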
\begin{proof}[Proof of Theorem \ref{thmcumulants}]
The idea of the proof is similar to that of
\cite[Lemma 3.6]{ERS:2009} for the case of bounded geometric functionals.
It follows from Lemma \ref{lemmaERS} that in the situation of Theorem
\ref{thmcumulants} there exist two constants $C_1(\gamma)$ and $C_2(\gamma)$,
which satisfy the following inequalities:
\begin{eqnarray*}
&&\left| \log \frac{P(Z_n\geq y)}{1-\Phi(y)} \right| \leq C_2(\gamma) \frac{1+y^3}{\Delta_n^{1/(1+2\gamma)}} \\
\text{and}&& \left| \log \frac{P(Z_n\leq -y)}{\Phi(-y)} \right| \leq C_2(\gamma) \frac{1+y^3}{\Delta_n^{1/(1+2\gamma)}}
\end{eqnarray*}
for all $0\leq y\leq C_1(\gamma) \Delta_n^{1/(1+2\gamma)}$.
The logarithm can be represented as
\begin{eqnarray*}
\log \frac{P\left(\frac{1}{a_n} Z_n\geq x\right)}{1-\Phi(a_n x)}
&=& \log \frac{P\left(\frac{1}{a_n}Z_n\geq x\right)}{e^{\frac{(a_n x)^2}{2}} \bigl(1-\Phi(a_n x)\bigr)} e^{\frac{(a_n x)^2}{2}} \\
&=& \log P\left(\frac{1}{a_n}Z_n\geq x\right) +\frac{(a_n x)^2}{2}
-\log\Bigl({e^{\frac{(a_n x)^2}{2}} \bigl(1-\Phi(a_n x)\bigr)}\Bigr) \, .
\end{eqnarray*}
For the term on the left-hand side we can use the bounds
provided by Lemma \ref{lemmaERS} for
$y=a_n x$ and $0\leq x \leq C_1(\gamma) \frac{\Delta_n^{1/(1+2\gamma)}}{a_n}$.
Note that the upper bound for $x$ grows to infinity as $n$ does, so it
imposes no restriction asymptotically.
Since, for all $y\geq 0$, we have
$$
\frac{1}{2+\sqrt{2\pi} y} \leq e^{\frac{y^2}{2}} \bigl(1-\Phi(y)\bigr) \leq \frac{1}{2}
$$
the monotonicity of the logarithm implies
\begin{eqnarray*}
\left|\log P\left(\frac{1}{a_n}Z_n\geq x\right) + \frac{(a_n x)^2}{2}\right|
&\leq& \left|\log\Bigl({e^{\frac{(a_n x)^2}{2}} \bigl(1-\Phi(a_n x)\bigr)}\Bigr)\right| + C_2(\gamma) \frac{1+(a_n x)^3}{\Delta_n^{1/(1+2\gamma)}}
\\
&\leq& \left|\log\left(\frac{1}{2+\sqrt{2\pi}a_n x}\right)\right| + C_2(\gamma) \frac{1+(a_n x)^3}{\Delta_n^{1/(1+2\gamma)}}
\\
&\leq& \log\bigl(2+\sqrt{2\pi}a_n x\bigr) + C_2(\gamma) \frac{1+(a_n x)^3}{\Delta_n^{1/(1+2\gamma)}}\, .
\end{eqnarray*}
It follows that
\begin{eqnarray*}
&&\left|\frac{1}{a_n^2} \log P\left(\frac{1}{a_n}Z_n\geq x\right) + \frac{x^2}{2}\right|
\leq \frac{1}{a_n^2} \log\bigl(2+\sqrt{2\pi}a_n x\bigr)
+ C_2(\gamma) \frac{1+(a_n x)^3}{a_n^2 \Delta_n^{1/(1+2\gamma)}}\\
&=& \frac{1}{a_n^2} \log(2+\sqrt{2\pi}a_n x)
+C_2(\gamma) \left(\frac{1}{a_n^2 \Delta_n^{1/(1+2\gamma)}}+\frac{a_n}{\Delta_n^{1/(1+2\gamma)}} x^3\right)
\stackrel{n\to\infty}{\longrightarrow} 0\, .
\end{eqnarray*}
Similarly we can prove
\begin{align*}
\left|\frac{1}{a_n^2} \log P\left(\frac{1}{a_n}Z_n\leq -x\right) + \frac{x^2}{2}\right|
&\leq \frac{1}{a_n^2} \log\bigl(2+\sqrt{2\pi}a_n x\bigr)
+ C_2(\gamma) \frac{1+(a_n x)^3}{a_n^2 \Delta_n^{1/(1+2\gamma)}}\\
&\stackrel{n\to\infty}{\longrightarrow} 0\, .
\end{align*}
These bounds can be extended to a full moderate deviation principle
analogously to the proof of \cite[Theorem 1.2]{ERS:2009}.
\end{proof}
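The Gaussian tail bounds $\frac{1}{2+\sqrt{2\pi}\,y} \leq e^{y^2/2}\bigl(1-\Phi(y)\bigr) \leq \frac{1}{2}$ used in the proof can also be checked numerically, writing the standard normal tail via the complementary error function (our own sanity check):

```python
import math

def gauss_tail(y: float) -> float:
    """1 - Phi(y) for the standard normal law, via erfc."""
    return 0.5 * math.erfc(y / math.sqrt(2))

for k in range(200):
    y = 0.05 * k  # grid on [0, 10)
    m = math.exp(y * y / 2) * gauss_tail(y)
    assert 1 / (2 + math.sqrt(2 * math.pi) * y) <= m <= 0.5
print("Mills-ratio bounds hold on the grid")
```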
% arXiv:1012.5027 --- Moderate deviations via cumulants (math.PR).

% arXiv:1410.5559 --- Computing Symmetric Positive Definite Solutions of Three Types of Nonlinear Matrix Equations.
\section{Introduction}
In several practical applications concerned with solving partial differential equations, control theory and ladder network design, a symmetric and positive definite solution of a nonlinear matrix equation needs to be computed; e.g., see \cite{2,5,6}. We consider the nonlinear matrix equation
\begin{equation}\label{19}
X+{\sum}_{i=1}^mA_i^Tf_i(X)A_i=Q,
\end{equation}
with $A_i$, $i=1,\cdots,m,$ and $Q$ being $n\times n$ matrices. For some special choices of the functions $f_i$, $i=1,\cdots,m,$ equation (\ref{19}) reduces to well-known nonlinear matrix equations. Sayed \cite{19} considered a class of nonlinear equations of the general form (\ref{19}) with some special choices of the $f_i$. He introduced an iterative algorithm to solve the equations. Here, we discuss three cases of interest:\\
\textbf{Case 1: $m=1, f_1(X)=X^{-1}.$} This leads to
\begin{equation}\label{20}
X+A^TX^{-1}A=Q,
\end{equation}
which arises in contexts related to control theory; e.g., see \cite{8}. Zhou \cite{8} discussed a method for solving (\ref{20}). A similar equation
\begin{equation}\label{21}
X+A^{\ast}X^{-q}A=Q
\end{equation}
has also been considered, where $0<q\leq 1$; e.g., see \cite{9,21}. In 2005, Hasanov \cite{9} introduced a method for solving (\ref{21}). Also, in 2013, Yin \cite{10} outlined a novel method for solving (\ref{21}). Assuming $A=B+iC$ and $Q=M+iN$, a complex form of (\ref{21}) is defined, whose solution was discussed by Guo \cite{13}.\\
\textbf{Case 2: $m=1, f_1(X)=-X^{-2}.$} Then, the nonlinear equation
\begin{equation}\label{22}
X-A^TX^{-2}A=Q
\end{equation}
is at hand. This equation arises in solving special types of partial differential equations using finite element methods; e.g., see \cite{5,17}. Ivanov \cite{17} discussed two iterative methods for solving (\ref{22}). Cheng \cite{11} derived a perturbation analysis of the Hermitian solution to (\ref{22}). Also, Sayed \cite{14} and Ivanov \cite{5} introduced iterative methods for solving the slightly different equations $X-A^TX^{-n}A=Q,
$ with $n\geq 2$ being an integer, and $X+A^TX^{-2}A=Q$.\\
\textbf{Case 3: $m=2, f_1(X)=X^{-t_1}, f_2(X)=X^{-t_2}.$} This results in the equation
\begin{equation}\label{23}
X^s+A_1^TX^{-t_1}A_1+A_2^TX^{-t_2}A_2=Q
\end{equation}
with different applications in control theory, dynamic programming and statistics; e.g., see \cite{6,12}. In 2010, Liu \cite{6} described a method for solving (\ref{23}). Also, Long \cite{12} considered a special case of (\ref{23}) with $t_1=t_2=1$. After discussing some properties of symmetric and positive definite solutions of the corresponding nonlinear equation, Long outlined an iterative method to solve it. Pei \cite{20} and Duan \cite{18} considered another special case with $A_2=0$. They studied the conditions for existence of a symmetric and positive definite solution to the corresponding nonlinear equation and outlined two different algorithms to compute it.\\ Solving some other nonlinear matrix equations has also been discussed in the literature. For instance, solutions of the nonlinear equations
\begin{center}
$X={\sum}_{l=0}^{k-1}P_l^TX^{{\alpha}_l}P_l$
\end{center}
and
\begin{center}
$X={\sum}_{l=0}^{k-1}{P_l^TXP_l}^{{\alpha}_l}$
\end{center}
have been considered in \cite{15,16}.
The remainder of our work is organized as follows. In Section 2, we describe our general idea for computing a symmetric positive definite solution to the above three different types of nonlinear matrix equations. In Section 3, we provide the details and outline three algorithms for solving the equations. Computational results and comparisons with available methods are given in Section 4. Section 5 gives our concluding remarks.
\section{The General Idea}
In \cite{1}, we recently proposed a new method for solving a positive definite total least squares problem.
There, the goal was to compute a symmetric and positive definite solution of the over-determined linear system of equations
\begin{equation}\label{1}
DX \simeq T,
\end{equation}
where $D,T \in {\mathbb{R}}^{m \times n},$ with $m \geq n,$ are known, using a total error formulation. Unlike the ordinary
least squares formulation, in a total formulation both $D$ and $T$ are assumed to contain error. Hence, we proposed an error function
\begin{equation}\label{2}
f(X)=\mathop{\mathrm{tr}}{(DX-T)}^T(D-TX^{-1}),
\end{equation}
with $\mathop{\mathrm{tr}}(\cdot)$ standing for the trace of a matrix. Then, the solution of the positive definite total least squares problem (\ref{1}) was considered to be the symmetric and positive definite matrix $X$ minimizing $f(X)$. In \cite{1}, we proposed positive definite total least squares with Cholesky decomposition algorithm (PDTLS-Chol) and positive definite total least squares with spectral decomposition algorithm (PDTLS-Spec) to compute the solution to the positive definite total least squares problems and showed that PDTLS-Chol has less computational complexity. In both algorithms, the key point is that since the objective function $f(X)$ is strictly convex on the cone of the symmetric and positive definite matrices, the solution to (\ref{2}) is the symmetric and positive definite matrix $X$ satisfying the first order optimality conditions $\nabla f(X)=0$. It was shown that computing such a matrix is possible using the Cholesky or spectral decomposition of $D^TD$. Here, we intend to make use of PDTLS-Chol to compute a symmetric and positive definite solution to some nonlinear
equations. The general idea is to propose a linear approximation of the nonlinear equation and solve the corresponding linear problem
in every iteration. To find a proper linear approximation, we define a suitable change of variables. We also make use of the iterative formula
$Y_{n+1}=Y_n(2I-XY_n)$, known as Newton's iteration, to converge to the solution of $X-Y^{-1}=0$, which is $X^{-1}$; e.g., see \cite{22,23}. In a total formulation, both the coefficient and the right-hand side matrices are assumed to contain error. Hence, an error is also supposed for the inverse term in the linear subproblems, and it seems appropriate to approximate these terms using iterative formulas.\\
Therefore, in each iteration of our proposed algorithm for solving a nonlinear matrix equation, a symmetric and positive
definite solution to the linear approximation of the nonlinear equation is computed using PDTLS-Chol. The process is terminated after satisfaction of a proper
stopping criterion.\\
In Section 3, we discuss solving the nonlinear equation $X+{\sum}_{i=1}^mA_i^Tf_i(X)A_i=Q,$ for the specified three cases mentioned above. In the remainder of our work, by solving a nonlinear equation, we mean finding its symmetric and positive definite solution.
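The inverse iteration $Y_{n+1}=Y_n(2I-XY_n)$ used above is the Newton(-Schulz) method for the matrix inverse. A minimal NumPy sketch is given below; the initialization $Y_0=X^T/(\|X\|_1\|X\|_\infty)$ is a standard choice (our addition, not part of the algorithms in this paper) that guarantees convergence for any nonsingular $X$:

```python
import numpy as np

def newton_schulz_inverse(X: np.ndarray, iters: int = 30) -> np.ndarray:
    """Approximate X^{-1} via the iteration Y <- Y(2I - XY)."""
    n = X.shape[0]
    # Starting guess ensuring ||I - X Y0||_2 < 1 for nonsingular X.
    Y = X.T / (np.linalg.norm(X, 1) * np.linalg.norm(X, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        Y = Y @ (2 * I - X @ Y)
    return Y

X = np.array([[4.0, 1.0], [1.0, 3.0]])
Y = newton_schulz_inverse(X)
print(np.linalg.norm(Y @ X - np.eye(2)))  # near machine precision
```

The iteration converges quadratically once $\|I-XY_k\|<1$, which is why a single Newton-Schulz step per outer iteration suffices in the algorithms below.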
\section{Solving the Nonlinear Equations}
\subsection{Case 1: $m=1, f_1(X)=X^{-1}.$}
Here, the goal is to develop an algorithm to solve the nonlinear matrix equation
\begin{equation}\label{8}
X+A^TX^{-1}A=Q,
\end{equation}
with $A, Q \in {\mathbb{R}}^{n \times n}$. Assuming $Q$ to be the $n \times n$ identity matrix, $I$, the matrix equation
\[X+A^TX^{-1}A=I,\]
is at hand. This equation arises in different contexts including
analysis of ladder networks, dynamic programming, control theory, stochastic filtering and statistics; e.g., see \cite{2,8,3,4}.\\
Letting $Y=X^{-1}$, (\ref{8}) becomes
\[X+A^TYA=Q.\]
We are to make use of the iterative formula
\[Y_{k+1}=Y_k(2I-XY_k)\]
for the $Y_k$ converging to $X^{-1}$. Thus, to solve (\ref{8}), we define the sequences $Y_{k+1}$ and $X_{k+1}$ by
\begin{subequations}
\begin{equation}\label{9}
Y_{k+1}=Y_k(2I-X_kY_k),
\end{equation}
\begin{equation}\label{10}
X_{k+1}\simeq Q-A^TY_{k+1}A,
\end{equation}
\end{subequations}
starting with arbitrary symmetric and positive definite points $X_0, Y_0 \in {\mathbb{R}}^{n \times n}$. Hence, in each iteration of our proposed algorithm for solving (\ref{8}),
after computing $Y_{k+1}$ from (\ref{9}), we perform PDTLS-Chol for $D=I$ and $T=Q-A^TY_{k+1}A$ to compute $X_{k+1}$. In the remainder of our work, by $X=\textrm{PDTLS-Chol}(D,T)$, we mean that $X$ is computed by applying PDTLS-Chol for the input arguments $D$ and $T$. The advantage of this method, as compared to simply letting $X_{k+1}=Q-A^TY_{k+1}A$, is that $X_{k+1}$ remains positive definite in all iterations. A proper
stopping criterion here would be
\[E=\|X_{k+1}+A^TY_{k+1}A-Q\|\leq \delta+\epsilon \|X_{k+1}\|,\]
where $\delta$ is close to the machine
(or user's) zero, and $\epsilon$ is close to the unit round-off error.\\
Next, we outline the steps of our proposed algorithm for solving (\ref{8}).\\
\begin{algorithm}[H]
\caption{ Solving the Nonlinear Matrix Equation $X+A^TX^{-1}A=Q$: Nonlinear1.}
\label{alg1}
\begin{algorithmic}[1]
\Procedure {Nonlinear1}{$A$, $Q$, $\delta$, $\epsilon$}
\State Choose the arbitrary symmetric and positive definite matrices $X, Y \in {\mathbb{R}}^{n \times n}$.
\Repeat
\State Let
\[Y=Y(2I-XY),\]
\Statex $\hspace{1.1cm}$and
\[X=\textrm{PDTLS-Chol}(I,Q-A^TYA).\]
\State Compute $E=\|X+A^TYA-Q\|$.
\Until{$E\leq \delta+\epsilon \|X\|$.}
\EndProcedure
\end{algorithmic}
\end{algorithm}
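For illustration, the coupled iteration (\ref{9})--(\ref{10}) can be sketched numerically as follows. For simplicity, the sketch replaces the PDTLS-Chol step by the plain update $X_{k+1}=Q-A^TY_{k+1}A$, so unlike Nonlinear1 it does not enforce positive definiteness by construction; it is only meant for a small, well-conditioned example:

```python
import numpy as np

A = np.array([[0.10, 0.05], [0.02, 0.10]])  # small entries: contraction regime
Q = np.eye(2)
X, Y = np.eye(2), np.eye(2)                 # symmetric positive definite starts

for _ in range(60):
    Y = Y @ (2 * np.eye(2) - X @ Y)  # Newton-Schulz step towards X^{-1}
    X = Q - A.T @ Y @ A              # plain update in place of PDTLS-Chol

residual = np.linalg.norm(X + A.T @ np.linalg.inv(X) @ A - Q)
print(residual, np.linalg.eigvalsh(X).min())  # tiny residual, positive spectrum
```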
\subsection{Case 2: $m=1, f_1(X)=-X^{-2}.$}
Here, we consider solving the nonlinear matrix equation
\begin{equation}\label{11}
X-A^TX^{-2}A=Q,
\end{equation}
where $A \in {\mathbb{R}}^{n \times n}$. This equation arises in solving partial differential equations; e.g., see \cite{5}. As before, defining $Y=X^{-1}$, we get
\[Y^{-1}-A^TY^2A=Q.\]
Hence, the iterative equation
\begin{equation}\label{12}
Y_{k+1}^{-1}-A^TY_{k}^2A=Q
\end{equation}
needs to be solved. We make use of the formula
\begin{equation}\label{13}
X_{k+1}=X_k(2I-Y_{k+1}X_k),
\end{equation}
converging to $Y_{k+1}^{-1}$. Substituting $X_{k+1}$ in (\ref{12}), we get
\[X_k(2I-Y_{k+1}X_k)-A^TY_{k}^2A=Q,\]
or equivalently,
\[2I-X_kY_{k+1}-A^TY_{k}^2A{X_k}^{-1}-Q{X_k}^{-1}=0.\]
Hence, $Y_{k+1}$ can be computed using
\begin{equation}\label{14}
Y_{k+1}=\textrm{PDTLS-Chol}(X_k,2I-A^TY_{k}^2A{X_k}^{-1}-Q{X_k}^{-1}).
\end{equation}
Thus, in every iteration of our proposed algorithm, starting from arbitrary symmetric and positive definite matrices $Y_0, X_0\in {\mathbb{R}}^{n\times n}$, we compute $Y_{k+1}$ from (\ref{14}) and $X_{k+1}$ from (\ref{13}). A proper stopping criterion would be
\[\|X_{k+1}-A^TY_{k+1}^2A-Q\|<\delta+\epsilon \|X_{k+1}\|,\]
with $\delta$ and $\epsilon$ as defined in Case 1. Now, $X_{k+1}$ gives an approximate solution of (\ref{11}). The described steps for solving (\ref{11}) are summarized in the following algorithm.
\begin{algorithm}[H]
\caption{ Solving the Nonlinear Matrix Equation $X-A^TX^{-2}A=Q$: Nonlinear2.}
\label{alg2}
\begin{algorithmic}[1]
\Procedure {Nonlinear2}{$A$, $Q$, $\delta$, $\epsilon$}
\State Choose arbitrary symmetric and positive definite matrices $X, Y \in {\mathbb{R}}^{n \times n}$.
\Repeat
\State Let
\[Y=\textrm{PDTLS-Chol}(X,2I-A^TY^2AX^{-1}-QX^{-1}).\]
\State Let
\[X=X(2I-YX).\]
\State Compute $E=\|X-A^TY^2A-Q\|$.
\Until{$E\leq \delta+\epsilon \|X\|$.}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Case 3: $m=2, f_1(X)=X^{-t_1}, f_2(X)=X^{-t_2}.$} The nonlinear matrix equation
\begin{equation}\label{15}
X^s+A_1^TX^{-t_1}A_1+A_2^TX^{-t_2}A_2=Q
\end{equation}
has applications in different areas such as control theory and dynamic programming; e.g., see \cite{6}. To solve (\ref{15}), we make use of the same change of variables as in Section 3.2, $Y=X^{-1}$. Substituting $Y$ in (\ref{15}), we get
\[X^s+A_1^TY^{t_1}A_1+{A_2}^TY^{t_2}A_2=Q.\]
Thus, the iterative equation
\begin{equation}
X_{k+1}^s+A_1^TY_k^{t_1}A_1+A_2^TY_k^{t_2}A_2=Q
\end{equation}
is generated, which is equivalent to
\begin{eqnarray}
U=\textrm{PDTLS-Chol}(I,Q-A_1^TY_k^{t_1}A_1-A_2^TY_k^{t_2}A_2),\label{16}\\
X_{k+1}=U^{\frac{1}{s}}.\nonumber
\end{eqnarray}
To update $Y_k$ to $Y_{k+1}$, one iteration of the formula,
\begin{equation}\label{17}
Y_{k+1}=Y_k(2I-X_{k+1}Y_k),
\end{equation}
is applied. Thus, in each iteration of our proposed algorithm for solving (\ref{15}), starting from an arbitrary symmetric and positive definite $n\times n$ matrix $Y_0$, we compute $X_{k+1}$ using (\ref{16}) and then apply (\ref{17}) to compute $Y_{k+1}$. Instead of starting from an arbitrary matrix $Y_0$, as suggested in \cite{6}, $Y_0=(\frac{\gamma+1}{2\gamma})Q^{-\frac{1}{s}}$ is a suitable starting point. The stopping condition can be set to $E=\|X_{k+1}^s+A_1^TY_k^{t_1}A_1+A_2^TY_k^{t_2}A_2-Q\|<\delta+\epsilon \|X_{k+1}^s\|$ with $\delta$ and $\epsilon$ as before. We now outline our proposed algorithm for solving (\ref{15}).
\begin{algorithm}[H]
\caption{ Solving the Nonlinear Matrix Equation $X^s+A_1^TX^{-t_1}A_1+A_2^TX^{-t_2}A_2=Q$: Nonlinear3.}
\label{alg3}
\begin{algorithmic}[1]
\Procedure {Nonlinear3}{$A_1$, $A_2$, $Q$, $s$, $t_1$, $t_2$, $\delta$, $\epsilon$}
\State Choose arbitrary symmetric and positive definite matrix $Y \in {\mathbb{R}}^{n \times n}$.
\Repeat
\State Let
\[U=\textrm{PDTLS-Chol}(I,Q-A_1^TY^{t_1}A_1-A_2^TY^{t_2}A_2)\]
\Statex $\hspace{1.1cm}$and
\[X=U^{\frac{1}{s}}.\]
\State Compute $Y=Y(2I-XY),$ and compute $E=\|U+A_1^TY^{t_1}A_1+$
\Statex $\hspace{1.1cm}A_2^TY^{t_2}A_2-Q\|$.
\Until{$E\leq \delta+\epsilon \|U\|$.}
\EndProcedure
\end{algorithmic}
\end{algorithm}
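The step $X=U^{\frac{1}{s}}$ in Algorithm \ref{alg3} requires a fractional power of a symmetric positive definite matrix. One standard way to compute it, shown here purely for illustration, is via the spectral decomposition $U=V\,\mathrm{diag}(w)\,V^T$:

```python
import numpy as np

def spd_power(U: np.ndarray, p: float) -> np.ndarray:
    """U**p for a symmetric positive definite U via U = V diag(w) V^T."""
    w, V = np.linalg.eigh(U)
    return (V * w ** p) @ V.T  # V diag(w**p) V^T

U = np.array([[2.0, 0.5], [0.5, 1.0]])
X = spd_power(U, 1 / 3)               # U^{1/s} with s = 3
print(np.linalg.norm(X @ X @ X - U))  # near machine precision
```

Since $U$ is symmetric positive definite, all eigenvalues $w$ are positive and the fractional power is well defined; the result $X$ is again symmetric positive definite.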
\paragraph{Note 1}
In \cite{6}, an iterative algorithm was proposed for solving (\ref{15}). An advantage of Algorithm 3 is that in all iterations, $X_k$ remains positive definite because of using PDTLS-Chol for updating $X_k$.
\paragraph{Note 2}
Considering Case 3 with $m>2$ results in the nonlinear matrix equation
\begin{equation}\label{18}
X^s+{\sum}_{i=1}^mA_i^TX^{-t_i}A_i=Q.
\end{equation}
The nonlinear equation (\ref{18}) arises in the same contexts as (\ref{15}), including control theory and dynamic programming; e.g., see \cite{7}. A procedure similar to Algorithm 3 can be applied to solve (\ref{18}). In each iteration, it is appropriate to let
\[U=\textrm{PDTLS-Chol}(I,Q-{\sum}_{i=1}^mA_i^TY^{t_i}A_i),\]
\[X=U^{\frac{1}{s}},\]
and
\[Y=Y(2I-XY).\]
The process would be terminated when $E=\|U+{\sum}_{i=1}^mA_i^TY^{t_i}A_i-Q\|<\delta+\epsilon \|U\|$.\\
Next, in Section 4, we discuss necessary and sufficient conditions for the existence of solutions for the three cases considered above.
\section{Existence of Solution}
In \cite{6}, the necessary and sufficient conditions for the existence of a symmetric and positive definite solution to Case 3 were discussed. Here, we first recall the conditions in the following theorem and then make use of the theorem for special choices of parameters to provide the necessary and sufficient conditions for the existence of a symmetric and positive definite solution for Case 1. Finally, we point out a theorem from \cite{17} about sufficient conditions for the existence of a positive definite solution for Case 2.
\begin{theorem}\label{th1}
Case 3 has a unique symmetric and positive definite solution if and only if $A_1$ and $A_2$ can be factored as $A_1={(LL^T)}^{\frac{t_1}{2s}}N_1$ and $A_2={(LL^T)}^{\frac{t_2}{2s}}N_2$, so that the matrix $\left(\begin{array}{c}
LQ^{-\frac{1}{2}} \\
N_1Q^{-\frac{1}{2}} \\
N_2Q^{-\frac{1}{2}}
\end{array}\right)$ has orthogonal columns.
\end{theorem}
\begin{proof}
See \cite{6}.$\hspace{1cm}\square$
\end{proof}
\begin{theorem}\label{th2}
Case 1 has a unique symmetric and positive definite solution if and only if $A$ can be factored as $A={(LL^T)}^{\frac{1}{2}}N$, with $Q^{-\frac{1}{2}}L^TLQ^{-\frac{1}{2}}+Q^{-\frac{1}{2}}N^TNQ^{-\frac{1}{2}}$ being a diagonal matrix.
\end{theorem}
\begin{proof}
Making use of Theorem \ref{th1} for $t_1=1$, $t_2=0$, $A_2=0$ and $s=1$, it can be concluded that Case 1 has a symmetric and positive definite solution if and only if $A$ can be factored as $A={(LL^T)}^{\frac{1}{2}}N$ such that $\left(\begin{array}{c}
LQ^{-\frac{1}{2}} \\
NQ^{-\frac{1}{2}}
\end{array}\right)$ has orthogonal columns, or equivalently $Q^{-\frac{1}{2}}L^TLQ^{-\frac{1}{2}}+Q^{-\frac{1}{2}}N^TNQ^{-\frac{1}{2}}$ is a diagonal matrix.$\hspace{1cm}\square$ \\
\end{proof}
In the following theorem, sufficient conditions for existence of a symmetric and positive definite solution to Case 2 are given.
\begin{theorem}\label{th3}
If there exists an $\alpha >2$ such that
\begin{subequations}
\begin{equation}\label{24}
AA^T>{\alpha}^2(\alpha-1)I,
\end{equation}
\begin{equation}\label{25}
\sqrt{\frac{AA^T}{\alpha-1}}-\frac{1}{{\alpha}^2}AA^T<I,
\end{equation}
\begin{equation}\label{26}
\frac{{\|A\|}^2}{2\alpha{(\alpha -1)}^2}<1,
\end{equation}
\end{subequations}
then, Case 2 has a symmetric and positive definite solution.
\end{theorem}
\begin{proof}
See \cite{17}.$\hspace{1cm}\square$
\end{proof}
\begin{theorem}\label{th4}
If all of the singular values of $A$, ${\sigma}_i$, for $i=1, \cdots, n$, satisfy $\alpha\sqrt{\alpha-1}<{\sigma}_i<\sqrt{2\alpha}(\alpha-1)$, for an $\alpha>2$, then the conditions (\ref{24})-(\ref{26}) are satisfied. Thus, Case 2 has a symmetric and positive definite solution.
\end{theorem}
\begin{proof}
The eigenvalues of $AA^T$ are equal to ${\lambda}_i={\sigma}_i^2$, $i=1, \cdots, n$. Since $\alpha\sqrt{\alpha-1}<{\sigma}_i$, we have ${\alpha}^2(\alpha-1)<{\lambda}_i$ and (\ref{24}) is satisfied. Let $U=\sqrt{AA^T}$. From (\ref{24}), we get
\[U>({\alpha}\sqrt{(\alpha-1)})I,\]
and
\[\sqrt{(\alpha-1)}U-\frac{{\alpha}^2}{2}I>\frac{\alpha}{2}(\alpha-2)I.\]
Thus, we have
\begin{eqnarray}\nonumber
(\sqrt{(\alpha-1)}U-\frac{{\alpha}^2}{2}I)^2&>&(\frac{{\alpha}^4}{4}-{\alpha}^3+{\alpha}^2)I\nonumber\\
&=&\frac{{\alpha}^2}{4}{(\alpha-2)}^2I.\label{28}
\end{eqnarray}
Hence, (\ref{28}) results in
\[{\alpha}^2U-\sqrt{(\alpha-1)}U^2<\sqrt{(\alpha-1)}{\alpha}^2I,\]
or equivalently,
\[\frac{U}{\sqrt{(\alpha-1)}}-\frac{1}{{\alpha}^2}U^2<I.\]
Now, substituting $U$ with $\sqrt{AA^T}$ gives (\ref{25}). Finally, we show that under the assumption ${\sigma}_i<\sqrt{2\alpha}(\alpha-1)$, (\ref{26}) holds. It suffices to note that ${\|A\|}^2=\max_i ({\lambda}_i)$. Since $\max_i ({\lambda}_i)<2\alpha{(\alpha-1)}^2$, we have ${\|A\|}^2<2\alpha{(\alpha-1)}^2$.$\hspace{1cm}\square$\\
\end{proof}
Consequently, to generate a test problem in Case 1 having a symmetric and positive definite solution, it is sufficient to choose $A$ with singular values satisfying $\alpha\sqrt{(\alpha-1)}<{\sigma}_i<\sqrt{2\alpha}(\alpha-1)$, for an $\alpha>2$. In the next section, we will use the above results to generate test problems with symmetric and positive definite solutions.
\section{Numerical Results}
We made use of MATLAB 2014a on a Windows 7 machine with a 2.4 GHz CPU and 6 GB of RAM to implement our proposed algorithms and the other methods. We then applied the programs to some existing test problems as well as newly generated ones. The numerical results corresponding to cases 1, 2 and 3 are reported in sections 5.1, 5.2 and 5.3, respectively.\\
In Section 5.1, first a table is provided in which the selected values for the matrices $A$ and $Q$ are reported. We then report the computing time and the resulting error for solving the nonlinear equation (\ref{8}) using our proposed method and an existing method due to Zhou \cite{8}. In one of these examples, a complex value for the matrix $A$ has been chosen to affirm that our proposed method is applicable to both real and complex problems.\\
In Section 5.2, the computing times and the resulting errors in solving the nonlinear equation (\ref{11}), for almost the same $A$ and $Q$ matrices as given in Section 5.1, are reported using our proposed method and the method discussed by Ivanov \cite{5}. \\
In Section 5.3, we first provide three tables to present the values of $A_1$, $A_2$, $Q$, $s$, $t_1$ and $t_2$ for our test problems. We then report the computing times and the error values for solving the nonlinear equation (\ref{15}) using our proposed method and the method described by Liu \cite{6}. \\
Furthermore, in each section, we generate 100 random test problems satisfying the sufficient conditions for existence of a symmetric and positive definite solution discussed in Section 4. Presenting the Dolan-Mor\'{e} time and error profiles, we confirm the efficiency of our proposed algorithms for solving the random test problems. \\
To generate these test problems for cases 1, 2 and 3, we use the results of Theorems \ref{th2}, \ref{th4} and \ref{th1}, respectively. For Case 1, assuming $A={(LL^T)}^{\frac{1}{2}}N$ and $Q=I$, the matrix $\left(\begin{array}{c}
L \\
N
\end{array}\right)$ is needed to have orthogonal columns. Thus, to generate a test problem for Case 1 with a symmetric and positive definite solution, it is sufficient to set\\\\
\textbf{Pseudocode 1}
\begin{verbatim}
[Q1,~]=qr(rand(2*n));
Q2=Q1(:,1:n);
L=Q2(1:n,:);
N=Q2(n+1:2*n,:);
A=(L*L')^(0.5)*N;
\end{verbatim}
in MATLAB. Similarly, for Case 3, to generate a test problem having a symmetric and positive definite solution, we set \\\\
\textbf{Pseudocode 2}
\begin{verbatim}
[Q1,~]=qr(rand(3*n));
Q2=Q1(:,1:n);
L=Q2(1:n,:);
N1=Q2(n+1:2*n,:);
N2=Q2(2*n+1:3*n,:);
A1=(L*L')^(t1/(2*s))*N1;
A2=(L*L')^(t2/(2*s))*N2;
\end{verbatim}
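Pseudocodes 1 and 2 share the same QR-based construction. For illustration, the Case 1 version can be mirrored in NumPy as follows (a sketch; NumPy's `qr` returns the orthogonal factor directly, and we form $(LL^T)^{1/2}$ via the spectral decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q1, _ = np.linalg.qr(rng.random((2 * n, 2 * n)))  # orthogonal factor, 2n x 2n
Q2 = Q1[:, :n]                                    # first n orthonormal columns
L, N = Q2[:n, :], Q2[n:, :]

w, V = np.linalg.eigh(L @ L.T)                    # (L L^T)^{1/2} via eigh
A = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T @ N

# The stacked matrix [L; N] has orthonormal columns: L^T L + N^T N = I.
print(np.allclose(L.T @ L + N.T @ N, np.eye(n)))  # True
```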
Finally, generating a test problem for Case 2 with a symmetric and positive definite solution is possible using the pseudocode below for an arbitrary $\alpha>2$:\\\\
\textbf{Pseudocode 3}
\begin{verbatim}
s1=alpha*sqrt(alpha-1);
s2=sqrt(2*alpha)*(alpha-1);
d=(s2-s1)*rand(n,1)+s1;
D=diag(d);
[U,~]=qr(rand(n));
[V,~]=qr(rand(n));
A=U*D*V';
\end{verbatim}
\paragraph{Note} In pseudocodes 1, 2 and 3, QR factorizations of $2n\times 2n$, $3n\times 3n$ and $n\times n$ matrices need to be computed, respectively. Thus, the computational cost of Pseudocode 3 is lower than that of the others and, as seen in the numerical results, it is possible to generate larger test problems for Case 2 without encountering memory problems.
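For readers without MATLAB, Pseudocode 1 translates directly to Python/NumPy. The sketch below is an illustration, not part of the proposed algorithms: it uses the two-output form of the QR factorization to obtain an orthogonal factor, and an eigendecomposition of $LL^T$ to form the matrix square root $(LL^T)^{1/2}$; the variable names follow the MATLAB version.

```python
import numpy as np

def generate_case1(n, seed=0):
    """Generate a test matrix A for Case 1 (Q = I), following Pseudocode 1:
    the stacked matrix [L; N] must have orthonormal columns."""
    rng = np.random.default_rng(seed)
    Q1, _ = np.linalg.qr(rng.random((2 * n, 2 * n)))  # orthogonal factor of a random matrix
    Q2 = Q1[:, :n]                  # 2n x n block with orthonormal columns
    L, N = Q2[:n, :], Q2[n:, :]
    # symmetric square root of the positive semidefinite matrix L L^T
    w, V = np.linalg.eigh(L @ L.T)
    sqrtLLT = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    return sqrtLLT @ N, L, N

A, L, N = generate_case1(4)
```

Note that the two-output call `np.linalg.qr` (like `[Q1,~]=qr(...)` in MATLAB) is essential here: it is the orthogonal factor, not the triangular one, whose columns are orthonormal.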
\subsection{Results for solving (\ref{8})}
In Table \ref{t1}, we list four test examples, assuming $Q=I$ with different values of $A$, to test Nonlinear1 for solving (\ref{8}).
\newpage
\begin{table}[!ht]
\caption{The values of $A$ in test problems.}
\label{t1}
\begin{center}\footnotesize
\begin{tabular}{|c|c|}
\hline
& \\
Example 1 & $A=\left(
\begin{array}{cccc}
0.0955 & 0.0797 & 0.0848 & 0.0575\\
0.0920 & 0.0114 & 0.0583 & 0.0010\\
0.0385 & 0.0159 & 0.0586 & 0.0809\\
0.0163 & 0.0356 & 0.0926 & 0.0609
\end{array}
\right)$\\
& \\
\hline
& \\
Example 2 & $A=\left(
\begin{array}{cccc}
0.8862 & 0.8978 & 0.8194 & 0.4279\\
0.9311 & 0.5934 & 0.5319 & 0.9661\\
0.1908 & 0.5038 & 0.2021 & 0.6201 \\
0.2586 & 0.6128 & 0.4539 & 0.6954
\end{array}
\right)$ \\
& \\
\hline
& \\
Example 3 & $A=A_1+iA_2$ \\
& \\
\hline
& \\
Example 4 & $A=\left(
\begin{array}{cccccc}
0.0450 & 0.0440 & 0.0900 & 0.0660 & 0.0470 & 0.0060\\
0.0810 & 0.0680 & 0.0550 & 0.0700 & 0.0460 & 0.0140\\
0.0930 & 0.0470 & 0.0750 & 0.0920 & 0.0810 & 0.0170 \\
0.0670 & 0.0950 & 0.0120 & 0.0660 & 0.0820 & 0.0630\\
0.0370 & 0.0350 & 0.0450 & 0.0690 & 0.0190 & 0.0030\\
0.0410 & 0.0340 & 0.0070 & 0.0850 & 0.0030 & 0.0470
\end{array}
\right)$\\
& \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{center}
$A_1=\left(\begin{array}{cccccc}
0.0320 & 0.0540 & 0.0220 & 0.0370 & 0.0190 & 0.0860\\
0.0120 & 0.0650 & 0.0110 & 0.0760 & 0.0140 & 0.0480 \\
0.0940 & 0.0540 & 0.0110 & 0.0630 & 0.0700 & 0.0390 \\
0.0650 & 0.0720 & 0.0060 & 0.0770 & 0.0090 & 0.0670 \\
0.0480 & 0.0520 & 0.0400 & 0.0930 & 0.0530 & 0.0740 \\
0.0640 & 0.0990 & 0.0450 & 0.0970 & 0.0530 & 0.0520
\end{array}
\right)$
$\vspace{1cm}$
\end{center}
\begin{center}
$A_2=\left(\begin{array}{cccccc}
0.0350 & 0.0240 & 0.0680 & 0.0270 & 0.0770 & 0.0790\\
0.0150 & 0.0440 & 0.0700 & 0.0200 & 0.0400 & 0.0950\\
0.0590 & 0.0690 & 0.0440 & 0.0820 & 0.0810 & 0.0330\\
0.0260 & 0.0360 & 0.0020 & 0.0430 & 0.0760 & 0.0670\\
0.0040 & 0.0740 & 0.0330 & 0.0890 & 0.0380 & 0.0440\\
0.0750 & 0.0390 & 0.0420 & 0.0390 & 0.0220 & 0.0830
\end{array}
\right)$
\end{center}
\normalsize
In Table \ref{t2}, the computed solutions to (\ref{8}) are reported for the considered examples.
\begin{table}[!ht]
\caption{Computed positive definite solutions of (\ref{8}) using Nonlinear1 for examples 1 - 4 as reported in Table \ref{t1}.}
\label{t2}
\begin{center}\footnotesize
\begin{tabular}{|c|c|}
\hline
& \\
Example 1 & $X=\left(
\begin{array}{cccc}
1.0009 & 0.0007 & 0.0012 & 0.0009\\
0.0007 & 1.0005 & 0.0009 & 0.0007\\
0.0012 & 0.0009 & 1.0016 & 0.0013\\
0.0009 & 0.0007 & 0.0013 & 1.0010
\end{array}
\right)$\\
& \\
\hline
& \\
Example 2 & $X=\left(
\begin{array}{cccc}
1.9517 & 1.1863 & 0.9614 & 1.0964\\
1.1863 & 2.3866 & 1.1153 & 1.2316\\
0.9614 & 1.1153 & 1.9041 & 0.9967 \\
1.0964 & 1.2316 & 0.9967 & 0.6954
\end{array}
\right)$
\\
& \\
\hline
& \\
Example 3 & $X=X_1+iX_2$ \\
& \\
\hline
& \\
Example 4 & $X=\left(
\begin{array}{cccccc}
1.2245 & 0.1924 & 0.1858 & 0.2775 & 0.1651 & 0.0823\\
0.1924 & 1.1642 & 0.1606 & 0.2382 & 0.1422 & 0.0699\\
0.1858 & 0.1606 & 1.1569 & 0.2301 & 0.1380 & 0.0679 \\
0.2775 & 0.2382 & 0.2301 & 1.3441 & 0.2027 & 0.1016\\
0.1651 & 0.1422 & 0.1380 & 0.2027 & 1.1221 & 0.0596\\
0.0823 & 0.0699 & 0.0679 & 0.1016 & 0.0596 & 1.0295
\end{array}
\right)$ \\
& \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{center}
$X_1=\left(\begin{array}{cccccc}
1.4837 & 0.5838 & 0.3177 & 0.6582 & 0.4726 & 0.6214 \\
0.5838 & 1.7355 & 0.4195 & 0.8271 & 0.6059 & 0.7910 \\
0.3177 & 0.4195 & 1.2801 & 0.4660 & 0.3809 & 0.4745\\
0.6582 & 0.8271 & 0.4660 & 1.9316 & 0.6778 & 0.8879 \\
0.4726 & 0.6059 & 0.3809 & 0.6778 & 1.5422 & 0.6681 \\
0.6214 & 0.7910 & 0.4745 & 0.8879 & 0.6681 & 1.8703
\end{array}
\right)$\end{center}
\begin{center}
$X_2=\left(\begin{array}{cccccc}
0.0000 & 0.0547 & 0.1716 & 0.0498 & 0.1856 & 0.1349\\
- 0.0547 & 0.0000 & 0.1793 & - 0.0210 & 0.1711 & 0.0981\\
0.1716 & -0.1793 & 0.0000 & -0.2160 & - 0.0492 & - 0.1332\\
- 0.0498 & 0.0210 & 0.2160 & - 0.0000 & 0.2104 & 0.1339\\
- 0.1856 & - 0.1711 & 0.0492 & - 0.2104 & - 0.0000 & - 0.1071\\
- 0.1349 & - 0.0981 & 0.1332 & - 0.1339 & 0.1071 & 0.0000
\end{array}
\right)$
\end{center}
\normalsize
Table \ref{t3} reports the error values, $E=\|X+A^TX^{-1}A-Q\|$, the computing times, $T$, and the numbers of iterations, $n_{It}$, for solving (\ref{8}) in examples 1 through 4 using our proposed algorithm, Nonlinear1, and the method introduced by Zhou \cite{8}, denoted by Zhou's algorithm.
\newpage
\begin{table}[htbp]
\caption{Error values, computing times and number of iterations for Nonlinear1 and Zhou's algorithm.}
\label{t3}
\begin{center}\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{\multirow{2}{*}{Example}} & \multicolumn{3}{c|}{Nonlinear1} & \multicolumn{3}{c|}{Zhou's algorithm}\\
\cline{2-7}
& $E$ & $T$ & $n_{It}$ & $E$ & $T$ & $n_{It}$ \\
\hline
1 & 5.78E-010 & 1.21E-004 & 5 & 1.64E-009 & 2.87E-003 & 4\\
2 & 1.33E-011 & 7.11E-004 & 4 & 3.26E-010 & 1.13E-003 & 6\\
3 & 2.71E-010 & 2.14E-003 & 9 & 5.61E-009 & 4.31E-002 & 9\\
4 & 2.36E-010 & 1.11E-004 & 3 & 1.27E-009 & 7.87E-004 & 8\\
\hline
\end{tabular}
\end{center}
\end{table}
\normalsize
Considering the results in Table \ref{t3}, it is concluded that our proposed method for solving (\ref{8}) computes a symmetric and positive definite solution faster than Zhou's algorithm, while the error values are almost the same. We also compare the two algorithms on 100 random test problems generated using Pseudocode 1, with the size of the matrix $A$ taken to be $5\times 5$, $10\times 10$, $100 \times 100$ or $1200 \times 1200$. It should be mentioned that our proposed algorithm was capable of computing the solution corresponding to a $1200\times 1200$ matrix, while Zhou's algorithm encountered a memory problem. In figures \ref{f1} and \ref{f2}, the Dolan-Mor\'{e} time and error profiles are presented, confirming the higher efficiency and lower error values of our proposed algorithm in solving (\ref{8}).
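As an independent sanity check, taking equation (\ref{8}) in the form implied by the error measure $E=\|X+A^TX^{-1}A-Q\|$, for small $\|A\|$ the equation can also be solved by the classical fixed-point iteration $X_{k+1}=Q-A^TX_k^{-1}A$. This is \emph{not} our PDTLS-based method, only a textbook reference scheme; a Python sketch using the data of Example 1 from Table \ref{t1}:

```python
import numpy as np

# Example 1 data from Table 1, with Q = I
A = np.array([[0.0955, 0.0797, 0.0848, 0.0575],
              [0.0920, 0.0114, 0.0583, 0.0010],
              [0.0385, 0.0159, 0.0586, 0.0809],
              [0.0163, 0.0356, 0.0926, 0.0609]])
Q = np.eye(4)

# Fixed-point iteration X_{k+1} = Q - A^T X_k^{-1} A, started from X_0 = Q.
# For ||A|| this small the map is a contraction and converges quickly.
X = Q.copy()
for _ in range(50):
    X = Q - A.T @ np.linalg.solve(X, A)

residual = np.linalg.norm(X + A.T @ np.linalg.inv(X) @ A - Q)
```

The computed $X$ is symmetric and positive definite with residual at machine precision, which makes the small error values in Table \ref{t3} plausible for such well-conditioned data.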
\begin{figure}[ht!]
\includegraphics[width=0.7\textwidth]{time1.eps}
\caption{Comparing the computing times by Nonlinear1 and Zhou's algorithms.}
\label{f1}
\end{figure}
\newpage
\begin{figure}[ht!]
\includegraphics[width=0.7\textwidth]{error1.eps}
\caption{Comparing the error values of Nonlinear1 and Zhou's algorithms.}
\label{f2}
\end{figure}
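For completeness, a performance profile in the sense of Dolan and Mor\'{e} is computed as follows: for each solver $s$ and problem $p$, form the ratio $r_{p,s}=t_{p,s}/\min_{s'} t_{p,s'}$, and plot the fraction $\rho_s(\tau)$ of problems with $r_{p,s}\le\tau$. A minimal Python sketch (the sample cost matrix is made up for illustration, not taken from our experiments):

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s]: cost (time or error) of solver s on problem p.
    Returns rho[s, i] = fraction of problems with ratio r_{p,s} <= taus[i]."""
    ratios = T / T.min(axis=1, keepdims=True)       # r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# made-up times for 4 problems x 2 solvers
T = np.array([[1.0, 2.0],
              [0.5, 0.6],
              [3.0, 3.0],
              [0.2, 1.0]])
rho = performance_profile(T, taus=[1.0, 2.0, 5.0])
```

Here $\rho_s(1)$ is the fraction of problems on which solver $s$ is (tied for) fastest, and the solver whose curve lies higher is the more efficient one, which is how figures \ref{f1}--\ref{f6} should be read.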
\subsection{Results for solving (\ref{11})}
Here, we compare the results of solving (\ref{11}) using our proposed algorithm, Nonlinear2, and the algorithm due to Ivanov \cite{17}. The values of $A$ and $Q$ are almost the same as in Section 5.1; the only difference is the value of $A$ in Example 3, which is here assumed to be
\begin{center}
$A=\left(\begin{array}{cccc}
-0.1 & -0.1 & 0.02 & 0.08\\
-0.09 & 0.3 & -0.2 & -0.1\\
-0.04 & 0.1 & 0.01 & -0.1\\
-0.08 & -0.06 & -0.1 & -0.2
\end{array}
\right).$
\end{center}
In Table \ref{t4}, we report the computed solution $X$ of (\ref{11}) using Nonlinear2 for the four examples, with $Q=I$.
\newpage
\begin{table}[!ht]
\caption{Computed positive definite solutions of (\ref{11}) using Nonlinear2 for examples 1 through 4.}
\label{t4}
\begin{center}\footnotesize
\begin{tabular}{|c|c|}
\hline
& \\
Example 1 & $X=\left(
\begin{array}{cccc}
0.7138 & 0.4637 & 0.4443 & -0.3512\\
0.4637 & 1.3535 & 0.1059 & 0.0952\\
0.4443 & 0.1059 & 0.9837 & 0.1273\\
-0.3512 & 0.0952 & 0.1273 & 1.1561
\end{array}
\right)$\\
& \\
\hline
& \\
Example 2 & $X=\left(
\begin{array}{cccc}
0.3837 & 0.0774 & -0.0886 & -0.0423\\
0.0774 & 0.6145 & 0.2798 & 0.3775\\
-0.0886 & 0.2798 & 0.7603 & 0.6386\\
-0.0423 & 0.3775 & 0.6386 & 0.7550
\end{array}
\right)$
\\
& \\
\hline
& \\
Example 3 & $X=\left(
\begin{array}{cccccc}
0.9877 & 0.0125 & 0.0068 & 0.0170\\
0.0125 & 1.0821 & -0.0433 & -0.0525\\
0.0068 & -0.0433 & 1.0540 & 0.0560\\
0.0170 & -0.0525 & 0.0560 & 1.0621 \end{array}
\right)$ \\
& \\
\hline
& \\
Example 4 & $X=\left(
\begin{array}{cccccc}
1.1818 & 0.0117 & -0.0053 & -0.0099 & 0.0477 & -0.0952\\
0.0117 & 0.9994 & -0.0202 & -0.0168 & 0.0000 & -0.0160\\
-0.0053 & -0.0202 & 0.9526 & -0.0482 & -0.0181 & -0.0341\\
-0.0099 & -0.0168 & -0.0482 & 0.9554 & -0.0181 & -0.0289\\
0.0477 & 0.0000 & -0.0181 & -0.0181 & 1.0107 & -0.0345\\
-0.0952 & -0.0160 & -0.0341 & -0.0289 & -0.0345 & 1.0267
\end{array}
\right)$ \\
& \\
\hline
\end{tabular}
\end{center}
\end{table}
\normalsize
In Table \ref{t5}, we compare our proposed method for solving (\ref{11}) on examples 1 through 4 with the method due to Ivanov \cite{17}, denoted by Ivanov's algorithm, reporting the error values, $E=\|X-A^TX^{-2}A-Q\|$, the computing times, $T$, and the numbers of iterations, $n_{It}$.
\begin{table}[htbp]
\caption{Computing times, error values and number of iterations for Nonlinear2 and Ivanov's algorithm.}
\label{t5}
\begin{center}\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{\multirow{2}{*}{Example}} & \multicolumn{3}{c|}{Nonlinear2} & \multicolumn{3}{c|}{Ivanov's algorithm}\\
\cline{2-7}
& $E$ & $T$ & $n_{It}$ & $E$ & $T$ & $n_{It}$ \\
\hline
1 & 8.61E-011 & 1.23E-003 & 2 & 1.64E+000 & 5.32E+000 & 27\\
2 & 6.79E-010 & 9.24E-004 & 3 & 1.09E-009 & 1.11E-002 & 13\\
3 & 1.28E-010 & 2.59E-002 & 55 & 3.77E+000 & 1.14E-001 & 100\\
4$^{\ast}$ & 2.51E-011 & 7.85E-004 & 12 & 5.43E+000 & 2.04E-002 & 28\\
\hline
\end{tabular}
\end{center}
\end{table}
\normalsize
\vspace{0.1cm}
$\ast:$ Chosen from \cite{17}.\\
\newpage
The reported results in Table \ref{t5} show that our proposed algorithm computes the symmetric and positive definite solution to (\ref{11}) faster and with lower error values than Ivanov's algorithm. The Dolan-Mor\'{e} time and error profiles are presented in figures \ref{f3} and \ref{f4} to confirm the efficiency of our proposed algorithm in computing a symmetric and positive definite solution to (\ref{11}) over test problems randomly generated using Pseudocode 3. The size of the matrix $A$ is taken to be $10\times 10$, $100 \times 100$, $1000 \times 1000$ or $3000\times 3000$. Although both algorithms were able to compute the solutions for large matrices, as shown in figures \ref{f3} and \ref{f4}, the computing times and the error values are lower for our proposed algorithm.
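Equation (\ref{11}) admits a similarly simple reference scheme: reading the equation off the error measure $E=\|X-A^TX^{-2}A-Q\|$, one may iterate $X_{k+1}=Q+A^TX_k^{-2}A$. Again, this is a textbook fixed-point iteration rather than Nonlinear2; a sketch with the Example 3 matrix given above and $Q=I$:

```python
import numpy as np

# Example 3 data for equation (11), with Q = I
A = np.array([[-0.10, -0.10,  0.02,  0.08],
              [-0.09,  0.30, -0.20, -0.10],
              [-0.04,  0.10,  0.01, -0.10],
              [-0.08, -0.06, -0.10, -0.20]])
Q = np.eye(4)

# Fixed-point iteration X_{k+1} = Q + A^T X_k^{-2} A, started from X_0 = Q.
# Since X_k >= I throughout, the map is contractive for moderate ||A||.
X = Q.copy()
for _ in range(100):
    Xinv = np.linalg.inv(X)
    X = Q + A.T @ Xinv @ Xinv @ A

Xinv = np.linalg.inv(X)
residual = np.linalg.norm(X - A.T @ Xinv @ Xinv @ A - Q)
```

Every iterate satisfies $X_k\succeq Q$ by construction, so symmetry and positive definiteness are automatic here; the point of Nonlinear2 is to retain these properties for problem data where such naive iterations fail or converge slowly.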
\begin{figure}[ht!]
\includegraphics[width=0.7\textwidth]{time2.eps}
\caption{Comparing the computing times by Nonlinear2 and Ivanov's algorithm.}
\label{f3}
\end{figure}
\newpage
\begin{figure}[ht!]
\includegraphics[width=0.7\textwidth]{error2.eps}
\caption{Comparing the error values of Nonlinear2 and Ivanov's algorithm.}
\label{f4}
\end{figure}
\subsection{Results for solving (\ref{15})}
Two test problems are reported in Table \ref{t6}. The first one is chosen from \cite{6}.\\
\begin{table}[!ht]
\caption{The values of $A$, $B$, $Q$, $s$, $t_1$ and $t_2$ for the test problems.}
\label{t6}
\begin{center}\footnotesize
\begin{tabular}{|c|c|}
\hline
Example 1$^{\ast}$ & Example 2\\
\hline
$A=\left(
\begin{array}{cccccc}
2 & 0 & 0 & 1 & 0 & 0\\
1 & 2 & 0 & 0 & 1 & 0\\
0 & 0 & 3 & 0 & 1 & 0\\
1 & 0 & 0 & 2 & 0 & 1\\
1 & 0 & 1 & 0 & 3 & 0\\
0 & 1 & 0 & 0 & 1 & 2
\end{array}
\right)$ & $A=\left(
\begin{array}{cccc}
0.5853 & 0\\
0 & 0.5497
\end{array}
\right)$ \\
\hline
$B=\left(
\begin{array}{cccccc}
2 & 1 & 6 & 0 & 5 & 7\\
3 & 4 & 7 & 1 & 3 & 0\\
0 & 9 & 2 & 4 & 7 & 8\\
8 & 5 & 3 & 0 & 0 & 1\\
2 & 5 & 0 & 2 & 1 & 7\\
4 & 0 & 0 & 1 & 4 & 9
\end{array}
\right)$ & $B=\left(
\begin{array}{cccc}
0.9172 & 0\\
0 & 0.2858
\end{array}
\right)$ \\
\hline
$Q=\left(
\begin{array}{cccccc}
105 & 66 & 58 & 15 & 41 & 73\\
66 & 154 & 67 & 50 & 88 & 121\\
58 & 67 & 109 & 15 & 71 & 61\\
15 & 50 & 15 & 28 & 37 & 57\\
41 & 88 & 71 & 37 & 113 & 136\\
73 & 121 & 61 & 57 & 136 & 250
\end{array}
\right)$ & $Q=\left(
\begin{array}{cccc}
0.3786 & 0\\
0 & 0.3769
\end{array}
\right)$ \\
\hline
$s=5,\hspace{0.2cm} t_1=0.2,\hspace{0.2cm}t_2=0.5$ & $s=2,\hspace{0.2cm} t_1=t_2=0.5$\\
\hline
\end{tabular}
\end{center}
\end{table}
\vspace{0.1cm}
$\ast:$ Chosen from \cite{6}.\\
In Table \ref{t7}, the error values, $E$, the computing times, $T$, and the numbers of iterations, $n_{It}$, are reported for solving (\ref{15}) for examples 1 and 2 of Table \ref{t6}, using our proposed method, Nonlinear3, and the method given by Liu \cite{6}, denoted by Liu's algorithm.
\begin{table}[htbp]
\caption{Error values, computing times and number of iterations for Nonlinear3 and Liu's algorithm.}
\label{t7}
\begin{center}\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
{\multirow{2}{*}{Example}} & \multicolumn{3}{c|}{Nonlinear3} & \multicolumn{3}{c|}{Liu's algorithm}\\
\cline{2-7}
& $E$ & $T$ & $n_{It}$ & $E$ & $T$ & $n_{It}$ \\
\hline
1 & 1.71E-008 & 5.64E-003 & 3 & 1.12 & 4.41E-004 & 1\\
1 & 1.52E-011 & 8.38E-002 & 8 & 4.33E+097 & 1.31E-002 & 10\\
2 & 3.93E-011 & 3.35E-002 & 29 & 3.93E-011 & 1.50E-002 & 29\\
\hline
\end{tabular}
\end{center}
\end{table}
As seen in Table \ref{t7}, our proposed method for solving (\ref{15}) computes the solution faster and with lower error values in Example 1, while both methods perform exactly the same on the second example. Also, 100 random test problems were generated using Pseudocode 2, with the matrices $A_1$ and $A_2$ taken to be $5\times 5$, $10\times 10$, $100 \times 100$ or $1000\times 1000$. For $1000\times 1000$ matrices $A_1$ and $A_2$, our proposed algorithm could compute the solution, while Liu's algorithm encountered a memory problem. The Dolan-Mor\'{e} time and error profiles for these test problems are presented in figures \ref{f5} and \ref{f6}; they show that our proposed algorithm for solving (\ref{15}) computes the symmetric and positive definite solution faster and with lower error values.
\begin{figure}[ht!]
\includegraphics[width=0.7\textwidth]{time3.eps}
\caption{Comparing the computing time for Nonlinear3 and Liu's algorithms.}
\label{f5}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=0.7\textwidth]{error3.eps}
\caption{Comparing the error values for Nonlinear3 and Liu's algorithms.}
\label{f6}
\end{figure}
\section*{Concluding Remarks}
Making use of our recently proposed method for solving positive definite total least squares problems, we presented iterative methods for solving three types of nonlinear matrix equations. We provided linear iterative formulas by using special convergent formulas and defining proper changes of variables. To solve the resulting linear problems, we used our recently proposed method for solving positive definite total least squares problems (PDTLS-Chol). Compared with other methods, using PDTLS-Chol to solve the linear problems offers two useful features. First, the solution remains symmetric and positive definite in all iterations. Second, in a total formulation, as used in PDTLS-Chol, the right-hand side matrix of a linear system is also assumed to contain error, and hence approximating the inverse in the right-hand side of the linear equations is not problematic. We outlined three specific algorithms for solving the three types of nonlinear matrix equations having applications in control theory and numerical solution of partial differential equations. We then experimented with some existing test problems as well as our randomly generated ones for each of the three problem types and reported the corresponding numerical results. Compared with the existing methods, the reported Dolan-Mor\'{e} profiles confirm the effectiveness of our proposed algorithms in computing a positive definite solution with lower error values and lower computing times.
\section*{Acknowledgements}
The authors thank Research Council of Sharif University of Technology for supporting this work.
\section*{References}
% Source: https://arxiv.org/abs/2003.06339 -- "Extreme boundary conditions and random tilings"
\section*{Abstract}
{\bf
Standard statistical mechanical or condensed matter arguments tell us that bulk properties of a physical system do not depend too much on boundary conditions. Random tilings of large regions provide counterexamples to such intuition, as illustrated by the famous 'arctic circle theorem' for dimer coverings in two dimensions.
In these notes, I discuss such examples in the context of critical phenomena, and their relation to 1+1d quantum particle models. All those turn out to share a common feature: they are inhomogeneous, in the sense that local densities now depend on position in the bulk. I explain how such problems may be understood using variational (or hydrodynamic) arguments, how to treat long range correlations, and how non trivial edge behavior can occur.
While all this is done on the example of the dimer model, the results presented here have much greater generality. In that sense the dimer model serves as an opportunity to discuss broader methods and results. [These notes require only a basic knowledge of statistical mechanics.]
}
\pagebreak
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\input{sec1.tex}
\newpage
\input{sec2.tex}
\input{sec3.tex}
\input{sec4.tex}
\input{sec5.tex}
\input{sec6.tex}
\pagebreak
\input{sec_appfermions.tex}
\pagebreak
\section{The limit shape phenomenon}
\label{sec:intro}
Perhaps one of the greatest strengths of theoretical physicists is their ability to make simplifying assumptions which are obviously not true. These untrue assumptions are then used to make powerful predictions, some of which can be confirmed by experiments or even proved mathematically.
For example, band theory assumes periodic boundary conditions to get a well defined momentum, and explains why certain materials are conductors, insulators or even topological insulators. To take an even older example, the hydrodynamic description of a glass of water relies on well defined conserved quantities such as mass density or momentum density. This is obviously not true, since the boundary breaks momentum conservation. However, far from the boundary this assumption is much more reasonable to leading order, which is why nobody is (or should be) worried about this.
Similar assumptions are routinely made in classical statistical mechanics. For example, Onsager famously solved \cite{Onsager_1944} the 2d Ising model, and found an exact formula for the free energy. What he did was assume periodic boundary conditions (or translational invariance), which simplify the calculations considerably. His result for the free energy then holds for any reasonable large chunk of the square lattice, since the free energy is well known not to depend on boundary conditions. This simple but important observation is taught in any course on statistical mechanics. It also motivates introducing the \emph{dimer model}, which we will discuss at great length in these lectures. One of the most striking properties of the dimer model is that it totally violates this well-known fact of life. The free energy of the dimer model does depend, heavily, on boundary conditions.
\subsection{Domino tilings and boundary conditions}
Let us introduce the dimer (or domino tiling) model on the square lattice. Edges connect nearest neighbors. Dimers are entities that cover edges of the lattice. The model then asks us to cover the lattice with dimers, subject to the constraint that \emph{each lattice site be touched by exactly one dimer}.
A valid configuration of dimers on the lattice is then called a dimer covering. For example, all $5$ possible coverings of the $4\times 2$ square lattice are shown below.\\
\begin{tikzpicture}[scale=0.9]
\begin{scope}[xshift=-3.5cm]
\draw[thick,color=black] (-0.5,0) -- (1,0); \draw[thick,color=black] (-0.5,0.5) -- (1,0.5);
\begin{scope}[rotate=90,yshift=-0.5cm]
\draw[thick,color=black] (-0.0,0) -- (0.5,0);
\draw[thick,color=black] (-0.0,0.5) -- (0.5,0.5);
\end{scope}
\draw[line width=5pt,color=dblue] (-0.5,0) -- (-0.5,0.5);\draw[line width=5pt,color=dblue] (1,0) -- (1,0.5);
\draw[line width=5pt,color=dblue] (0,0) -- (0.5,0);
\draw[line width=5pt,color=dblue] (0,0.5) -- (0.5,0.5);
\end{scope}
\begin{scope}[yshift=-0cm]
\begin{scope}[xshift=0cm]
\draw[thick,color=black] (-0.5,0) -- (1,0); \draw[thick,color=black] (-0.5,0.5) -- (1,0.5);
\begin{scope}[rotate=90,yshift=-0.5cm]
\draw[thick,color=black] (-0.0,0) -- (0.5,0); \draw[thick,color=black] (-0.,0.5) -- (0.5,0.5);
\draw[thick,color=black] (0,1) -- (0.5,1); \draw[thick,color=black] (0,-0.5) -- (0.5,-0.5);
\end{scope}
\draw[line width=5pt,color=dblue] (-0.5,0) -- (0,0);\draw[line width=5pt,color=dblue] (0.5,0) -- (1,0);
\draw[line width=5pt,color=dblue] (-0.5,0.5) -- (0,0.5);\draw[line width=5pt,color=dblue] (0.5,0.5) -- (1,0.5);
\end{scope}
\begin{scope}[xshift=3.5cm]
\draw[thick,color=black] (-0.5,0) -- (1,0); \draw[thick,color=black] (-0.5,0.5) -- (1,0.5);
\begin{scope}[rotate=90,yshift=-0.5cm]
\draw[thick,color=black] (-0.0,0) -- (0.5,0); \draw[thick,color=black] (-0.,0.5) -- (0.5,0.5);
\draw[thick,color=black] (0,1) -- (0.5,1); \draw[thick,color=black] (0,-0.5) -- (0.5,-0.5);
\end{scope}
\draw[line width=5pt,color=dblue] (-0.5,0) -- (-0.5,0.5);\draw[line width=5pt,color=dblue] (0.5,0) -- (1,0);
\draw[line width=5pt,color=dblue] (0,0) -- (0,0.5);\draw[line width=5pt,color=dblue] (0.5,0.5) -- (1,0.5);
\end{scope}
\begin{scope}[xshift=7cm]
\draw[thick,color=black] (-0.5,0) -- (1,0); \draw[thick,color=black] (-0.5,0.5) -- (1,0.5);
\begin{scope}[rotate=90,yshift=-0.5cm]
\draw[thick,color=black] (-0.0,0) -- (0.5,0); \draw[thick,color=black] (-0.,0.5) -- (0.5,0.5);
\draw[thick,color=black] (0,1) -- (0.5,1); \draw[thick,color=black] (0,-0.5) -- (0.5,-0.5);
\end{scope}
\draw[line width=5pt,color=dblue] (0.5,0) -- (0.5,0.5);\draw[line width=5pt,color=dblue] (-0.5,0) -- (0,0);
\draw[line width=5pt,color=dblue] (1,0) -- (1,0.5);\draw[line width=5pt,color=dblue] (-0.5,0.5) -- (0,0.5);
\end{scope}
\begin{scope}[xshift=10.5cm]
\draw[thick,color=black] (-0.5,0) -- (1,0); \draw[thick,color=black] (-0.5,0.5) -- (1,0.5);
\begin{scope}[rotate=90,yshift=-0.5cm]
\draw[thick,color=black] (-0.,0) -- (0.5,0); \draw[thick,color=black] (-0.,0.5) -- (0.5,0.5);
\draw[thick,color=black] (0,1) -- (0.5,1); \draw[thick,color=black] (0,-0.5) -- (0.5,-0.5);
\end{scope}
\draw[line width=5pt,color=dblue] (-0.5,0) -- (-0.5,0.5);\draw[line width=5pt,color=dblue] (-0.,0) -- (-0.,0.5);
\draw[line width=5pt,color=dblue] (0.5,0) -- (0.5,0.5);\draw[line width=5pt,color=dblue] (1,0) -- (1,0.5);
\end{scope}
\end{scope}
\end{tikzpicture}
We are of course interested in the thermodynamic limit, that is, dimer coverings of large $M\times L$ lattices. As we shall see shortly, the dimer model is exactly solvable; for example, the partition function was computed by Kasteleyn \cite{Kasteleyn} and by Temperley and Fisher \cite{TemperleyFisher}. They found that the corresponding free energy is
\begin{align}
F&=-\frac{1}{LM}\log Z\\
&\to -\frac{C}{\pi}
\end{align}
in the limit $L\to \infty$, $M\to \infty$ with $L/M$ fixed. Here $C$ is the Catalan constant, and $C/\pi \simeq 0.29156$. Hence the number of possible coverings grows extremely fast as $L$ and $M$ are increased.
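The finite-size counts behind this asymptotic growth can be reproduced with a standard transfer-matrix ("broken profile") dynamic program over columns. The Python sketch below is an illustration, not part of the original derivation; it recovers the $5$ coverings of the $4\times 2$ lattice drawn above, and the classic count $12\,988\,816$ for the $8\times 8$ board.

```python
def count_tilings(rows, cols):
    """Count dimer (domino) coverings of a rows x cols rectangle."""
    def protrusions(mask):
        """Given the cells of the current column already filled by horizontal
        dominoes coming from the left (bits of `mask`), enumerate all ways to
        complete the column; yield the masks protruding into the next column."""
        out = []
        def go(i, nxt):
            if i == rows:
                out.append(nxt)
                return
            if mask >> i & 1:                      # already filled from the left
                go(i + 1, nxt)
            else:
                go(i + 1, nxt | 1 << i)            # horizontal domino to the right
                if i + 1 < rows and not mask >> (i + 1) & 1:
                    go(i + 2, nxt)                 # vertical domino in this column
        go(0, 0)
        return out

    dp = {0: 1}                                    # profile mask -> number of ways
    for _ in range(cols):
        new = {}
        for mask, c in dp.items():
            for nxt in protrusions(mask):
                new[nxt] = new.get(nxt, 0) + c
        dp = new
    return dp.get(0, 0)                            # nothing may protrude at the end
```

For the $8\times 8$ board, $\log Z/(LM)\simeq 0.256$, still some way below $C/\pi\simeq 0.2916$: the convergence to the bulk entropy is slow precisely because of boundary effects.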
Examples of dimer coverings on an $L\times L$ grid are shown in figure \ref{fig:square}. In each picture, a dimer covering is picked uniformly at random and drawn using a color code that we explain now. The lattice is bipartite, which means one can label lattice sites with two colors, black and white, in such a way that each black (white) lattice point has all its nearest neighbors of the opposite color. Then, we draw a vertical dimer in blue (resp. yellow) if its bottom part touches a black (resp. white) vertex. Similarly, green (resp. red) dimers have their left part touching a black (resp. white) vertex. At this stage the only purpose of this convention is to make the pictures look nicer. For the same reason we also draw the dimers much thicker, so that they fill all space and the underlying lattice cannot be seen anymore.
\begin{myfigure}[label=fig:square]{Thermodynamic limit}
\includegraphics[width=4.4cm]{./Pictures/fig_1_16_3.pdf}\hfill
\includegraphics[width=4.4cm]{./Pictures/fig_1_64_3.pdf}\hfill
\includegraphics[width=4.4cm]{./Pictures/fig_1_256_0.pdf}\hfill
\tcbline
Dimer configurations chosen uniformly at random from the set of all possible coverings of the $L\times L$ square lattice $\{(x,y)\in \mathbb{N}^2,x<L, y<L\}$. From left to right, $L=16,64,256$.
\end{myfigure}
Much later, in 1991, Ref.~\cite{Elkies1991} studied dimer coverings of a peculiar region
\begin{equation}
\mathbb{A}_L=\{x,y\in \mathbb{Z}+1/2\quad,\quad |x|+|y|\leq L\},
\end{equation}
which they called \emph{Aztec diamond}. They showed using combinatorial methods that the partition function, or number of dimer coverings, is given by the remarkably simple formula
\begin{equation}\label{eq:AztecZ}
Z=2^{L(L+1)/2}
\end{equation}
Therefore, there are far fewer available dimer coverings on the Aztec diamond than there would be on a regular square grid with the same area. We will see in the following that this is an effect of boundary conditions, so the free energy \emph{does} depend on boundary conditions for dimers.
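Formula \eqref{eq:AztecZ} is easily checked by brute force for small $L$, enumerating perfect matchings of the unit squares whose centers lie in $\mathbb{A}_L$. A Python sketch (the recursion always matches the lexicographically smallest uncovered square with its right or top neighbour, so each covering is generated exactly once):

```python
def aztec_cells(L):
    """Centers (x, y) in (Z + 1/2)^2 with |x| + |y| <= L."""
    return frozenset((i + 0.5, j + 0.5)
                     for i in range(-L, L) for j in range(-L, L)
                     if abs(i + 0.5) + abs(j + 0.5) <= L)

def count_coverings(cells):
    """Count dimer coverings (perfect matchings) of a set of unit squares."""
    def rec(uncovered):
        if not uncovered:
            return 1
        x, y = min(uncovered)   # its left/bottom neighbours are already covered
        total = 0
        for nb in ((x + 1, y), (x, y + 1)):
            if nb in uncovered:
                total += rec(uncovered - {(x, y), nb})
        return total
    return rec(frozenset(cells))

counts = [count_coverings(aztec_cells(L)) for L in (1, 2, 3)]
```

This returns $2, 8, 64$ for $L=1,2,3$, matching $2^{L(L+1)/2}$; of course the combinatorial proof of \eqref{eq:AztecZ}, not enumeration, is what establishes the formula for all $L$.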
Even more remarkable is the following observation. For reasonably large $L$, draw one of the $2^{L(L+1)/2}$ available dimer coverings uniformly at random, as we did before. The corresponding pictures are shown in figure \ref{fig:aztec}. The covering appears random only inside a region which, for large $L$, looks roughly like a disk. Outside of the disk the orientations appear deterministic, with red dimers on the left, blue dimers at the bottom, etc.
\begin{myfigure}[label=fig:aztec]{The arctic circle theorem in action}
\includegraphics[width=4.5cm]{./Pictures/fig_16.png}\hfill
\includegraphics[width=4.5cm]{./Pictures/fig_128.png}\hfill
\includegraphics[width=4.5cm]{./Pictures/fig_1024.png}
\tcbline
Dimer coverings of an Aztec diamond of order $L$, chosen uniformly at random. From left to right, $L=16,128,1024$. As $L$ increases, dimers appear totally frozen outside a region, which looks like a disk. There are, however, non trivial long range fluctuations inside the disk.
\end{myfigure}
This is one of the most famous instances of what is called the limit shape phenomenon. We will give a more precise definition of limit shapes later on, but for now, let us just introduce the terminology that we will use in these notes. In the deterministic region dimers are essentially frozen, so it is called the \emph{frozen region}. In contrast, dimer orientations fluctuate in the \emph{fluctuating region}, sometimes also called the liquid region. We will see later that correlation functions decay as power laws, so the liquid region is critical. The interface between the frozen and liquid regions is called an arctic curve. In the case discussed above, Ref.~\cite{ArcticCircle} proved that the arctic curve becomes an exact circle in the limit $L\to\infty$, a result which now goes under the name of the \emph{arctic circle theorem}.
There is no a priori reason why the arctic curve should be a circle, or even smooth, in this particular case; this just comes out of a long calculation. Lattice symmetries, however, do impose that the arctic curve be invariant under rotations by $\pi/2$. To illustrate this last point, one can generalise the model to include ``interactions'' between dimers. The simplest way to do that is to put weights favouring (or disfavouring) aligned dimers on a given plaquette \cite{alet2005interacting,alet2006classical}. In that case the partition function may be written as
\begin{equation}
Z=\sum_{\mathcal{C}}e^{\lambda N_{\rm par}},
\end{equation}
where $N_{\rm par}$ counts the number of plaquettes with two dimers parallel to each other, that is, plaquettes of the form:
\begin{equation}\label{eq:flipableplaquettes}
\begin{tikzpicture}
\draw[thick,color=black] (0.5,0) -- (0.5,0.5) -- (1,0.5) -- (1,0) --cycle;
\draw[line width=5pt,color=dblue] (0.5,0) -- (0.5,0.5);
\draw[line width=5pt,color=dblue] (1,0) -- (1,0.5);
\draw (2.5,0.25) node {or};
\begin{scope}[xshift=4cm]
\draw[thick,color=black] (0.,0) -- (0.,0.5) -- (0.5,0.5) -- (0.5,0) --cycle;
\draw[line width=5pt,color=dblue] (0.,0) -- (0.5,0.);
\draw[line width=5pt,color=dblue] (0,0.5) -- (0.5,0.5);
\end{scope}
\end{tikzpicture}
\end{equation}
Positive (negative) $\lambda$ corresponds to attractive (repulsive) interactions between the dimers.
For $\lambda\neq 0$ the arctic curve is not known analytically, but simulations shown in figure \ref{fig:interactingdimers} clearly suggest that the arctic curve is very different from a circle. In fact, a closely related six vertex model has arctic curves which can be computed \cite{colomo2010arctic,ColomoPronkoZinn} and they are not even algebraic in general. See the bottom part of figure \ref{fig:interactingdimers} for examples.
To finish this section let us mention that the pictures can be generated using a simple Markov chain Monte Carlo algorithm, which goes as follows in the free case. Start from any simple configuration, and then repeat the following lots of times:
\begin{enumerate}[label=(\roman*)]
\item Pick a plaquette uniformly at random.
\item If it has two horizontal (resp. vertical) dimers such as shown in (\ref{eq:flipableplaquettes}), flip them to get vertical (resp. horizontal) dimers. Otherwise do nothing.
\end{enumerate}
After lots of updates, the Markov chain will thermalize and produce typical configurations, such as the ones shown above. Of course, the number of necessary updates increases quickly with system size, so generating the best pictures might take a while for large $L$. There are more complicated (but more efficient) algorithms, see e.g.~\cite{Elkies1992,Propp2003} for this geometry, and \cite{alet2006classical} for more general boundary conditions.
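A minimal sketch of this Markov chain (our own toy version: a plain $L_x\times L_y$ rectangle rather than the Aztec diamond, started from a ``brick wall'' covering; names are ours):

```python
import random

def flip_chain(Lx, Ly, steps, seed=0):
    """Plaquette-flip Markov chain for dimers on an Lx x Ly rectangle
    (Lx even).  A covering is stored as a dict: vertex -> matched vertex."""
    rng = random.Random(seed)
    # start from the all-horizontal "brick wall" covering
    match = {}
    for y in range(Ly):
        for x in range(0, Lx, 2):
            match[(x, y)] = (x + 1, y)
            match[(x + 1, y)] = (x, y)
    for _ in range(steps):
        # (i) pick the bottom-left corner of a plaquette uniformly at random
        x, y = rng.randrange(Lx - 1), rng.randrange(Ly - 1)
        a, b, c, d = (x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)
        # (ii) flip a pair of parallel dimers, if the plaquette carries one
        if match[a] == b and match[c] == d:      # two horizontal dimers
            match[a], match[b], match[c], match[d] = c, d, a, b
        elif match[a] == c and match[b] == d:    # two vertical dimers
            match[a], match[b], match[c], match[d] = b, a, d, c
    return match
```

Each flip preserves the perfect-matching property, and the move is its own inverse, so the chain satisfies detailed balance with respect to the uniform measure.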
\begin{myfigure}[label=fig:interactingdimers,float=ht!]{Interacting dimers and six vertex model}
\includegraphics[width=4.5cm]{./Pictures/fig_1024_0p4.png}\hfill
\includegraphics[width=4.5cm]{./Pictures/fig_1024.png}\hfill
\includegraphics[width=4.5cm]{./Pictures/fig_1024_3.png}
\tcbline
Interacting dimers on an Aztec diamond of size $L=1024$. Left: repulsive interactions $e^\lambda=0.4$. Middle: free dimers $e^\lambda=1$. Right: attractive interactions $e^\lambda=3$.
\tcbline
\includegraphics[width=4.6cm]{./Pictures/fig6v_1024_1_0p4_13.png}\hfill
\includegraphics[width=4.6cm]{./Pictures/fig6v_1024_1_1_13.png}\hfill
\includegraphics[width=4.6cm]{./Pictures/fig6v_1024_1_7_13.png}\hfill
\tcbline
Interacting dimers on the Aztec diamond with interaction set only on plaquettes of the even sublattice. This model can be mapped to the integrable six vertex model with domain wall boundary conditions (in which case $a=b=1$ and $\Delta=1-e^{\lambda}$) \cite{Elkies1991}. Left: $\Delta=0.4$, middle $\Delta=0$, right $\Delta=-6$. The arctic curve is known but complicated away from the free point $\Delta=0$, where we are back to a circle. Here we show in blue even flippable plaquettes, or odd empty plaquettes (this corresponds to $c$ vertices in six vertex language). Note the appearance of a third region for negative $\Delta$, where most vertices are of $c$ type, and correlation functions decay exponentially. This region is usually called \emph{gas}.
\end{myfigure}
\subsection{Detour: a one-dimensional classical Coulomb gas model}
\label{sec:coulombgas}
The limit shape phenomenon may be illustrated by the following simple 1d model, which contains the basic ingredients needed to get an inhomogeneous density profile with a frozen region. We consider $N$ particles living in a box $B=[-1,1]$. The probability of having the particles at positions $x_1,\ldots, x_N \in B$ is given by
\begin{equation}
P(x_1,\ldots,x_N)=\frac{1}{Z_{N}(\beta)} \prod_{1\leq i<j\leq N}\left|x_i-x_j\right|^\beta,
\end{equation}
where $\beta$ is a positive real number. This probability density function may be interpreted as a Boltzmann weight $P(x_1,\ldots,x_N)=\frac{1}{Z_N(\beta)} e^{-\beta E(x_1,\ldots,x_N)}$, where $\beta$ is now inverse temperature. The energy
\begin{equation}\label{eq:energy}
E(x_1,\ldots,x_N)=-\frac{1}{2}\sum_{1\leq i<j\leq N} \log (x_i-x_j)^2
\end{equation}
is the electrostatic energy of a gas of $N$ particles with 2d Coulomb interactions. For this reason it is often called a Coulomb gas\footnote{By 2d we mean what would be the Coulomb potential, $V_{2}(\mathbf{r}_1,\mathbf{r}_2)=\log \frac{1}{|\mathbf{r}_1-\mathbf{r}_2|}$, corresponding to solving the Poisson equation in two dimensions. Unfortunately we live in three dimensions, so the real Coulomb potential is $V_3(\mathbf{r}_1,\mathbf{r}_2)=\frac{1}{|\mathbf{r}_1-\mathbf{r}_2|}$, even when the particles are stuck to a lower dimensional region.}.
We will consider two versions of the model:
\begin{itemize}
\item The first is the model as stated, with partition function
\begin{equation}
Z_N(\beta)=\int_{B^N} dx_1 \ldots dx_N e^{-\beta E(x_1,\ldots,x_N)}.
\end{equation}
This is well known from random matrix theory, where it is called (a limit of) the Jacobi ensemble.
\item In the second we impose that the allowed positions for the particles be discrete. The particles now live in $B_L=\{-1+\frac{2j}{L-1}\,:\, j=0,1,\ldots,L-1\}$. The partition function reads
\begin{equation}
Z_N(\beta)=\sum_{x_1 \in B_L}\ldots \sum_{x_N\in B_L} e^{-\beta E(x_1,\ldots,x_N)}.
\end{equation}
Models of this type usually go under the name of discrete beta ensembles.
\end{itemize}
We are interested in the average distribution of the charges, in the limit $N\to \infty$. In the second model, we further impose a fixed density $d=N/L$. Formally, the first may be recovered from the second by considering a low density limit $d\to 0$. We study the density profile in both models separately.
Before proceeding let us point out the following important fact: the interaction is long-range, so the energy is of order $N^2$, while the entropy is expected to be of order $N$. This is not the standard situation in thermodynamics, where there is typically a competition between energy and entropy. Here energy dominates, which means the average density profile (limit shape) can be obtained just by minimising the energy. Hence, the precise value of $\beta$ does not matter here. Of course, fluctuations on top of the limit shape are still there. They are more complicated, especially in the discrete case.
\subsubsection{The continuous gas}
The continuous gas may be treated using standard techniques \cite{Forrester}. Introducing the density $\rho(x)=\sum_{i=1}^N \delta(x-x_i)$, the energy may be rewritten, up to an unimportant additive constant, as
\begin{equation}
E(x_1,\ldots,x_N)=-\frac{1}{2} \int_{B^2}dx dx' \log(x-x')^2\rho(x)\rho(x').
\end{equation}
(The singularity along the diagonal in the previous equation is integrable.)
In the thermodynamic limit, $\rho$ is expected to become a smooth function. Therefore, finding the equilibrium distribution of the charges boils down to minimizing the energy functional
\begin{equation}\label{eq:functional}
\mathcal{E}[\rho]=-\int_{B^2} dx dx' \log(x-x')^2 \rho(x)\rho(x')
\end{equation}
subject to the constraint
\begin{equation}\label{eq:constraint}
\int_B \rho(x)dx=N.
\end{equation}
The constraint can be handled by introducing a Lagrange multiplier $\lambda$. We consider the functional
\begin{equation}
\mathcal{L}[\rho,\lambda]=\mathcal{E}[\rho]+\lambda\left(N-\int_B dx\, \rho(x)\right),
\end{equation}
and write down the Euler-Lagrange (EL) equations
\begin{eqnarray}
\frac{\delta \mathcal{L}}{\delta \rho(x)}&=&0,\\
\frac{\partial \mathcal{L}}{\partial \lambda}&=&0.
\end{eqnarray}
The second equation gives back the constraint, while the first reads
\begin{equation}
\int_B dx'\log(x-x')^2 \rho(x')+\lambda=0.
\end{equation}
One can check that $\rho(x)=\frac{\lambda}{2\pi \log 2}\frac{1}{\sqrt{1-x^2}}$ is a solution to the above linear integral equation. Typically, uniqueness is guaranteed by the convexity of the energy functional, which is the case here. Finally, normalization of $\rho$ yields $\lambda=2N\log 2$. Hence we get the density profile
\begin{equation}\label{eq:density_continuum}
\rho(x)=\frac{N}{\pi \sqrt{1-x^2}}.
\end{equation}
This density, called the arcsine law, is integrable but unbounded near $x=\pm 1$. This means the particles tend to accumulate near the two boundaries, a non-trivial effect of the long-range repulsion between them. Let us finally mention that this type of problem has been studied for a very long time, in relation to potential theory as well as orthogonal polynomials. For example, Stieltjes observed back in 1885 \cite{stieltjes1885,stieltjes1887} that the positions of the particles that minimize the electrostatic energy (\ref{eq:energy}) on $B=[-1,1]$ coincide with the zeroes of known orthogonal polynomials\footnote{More precisely, the equilibrium positions for the pdf $\prod_{i=1}^N (1+x_i)^a (1-x_i)^{b}\prod_{j>i}(x_i-x_j)^\beta$ on $B^N$ are the $N$ roots of the polynomial $P_N^{(2\beta a-1,2\beta b-1)}(x)$, where the $P_N^{(p,q)}$ are the Jacobi polynomials, which are orthogonal on $B$ with respect to the measure $d\mu(x)=(1+x)^p (1-x)^q dx$ ($p,q>-1$). The interested reader may consult \cite{Stieltjes_review} and references therein for a broader overview. The case $a=b=0$ which we are dealing with here is degenerate \cite{Schur1918}, since the measure is not integrable; in this case the equilibrium positions of the particles are the zeroes of the polynomial $(1-x^2)P_{N-2}^{(1,1)}(x)$.}. In this context, the limiting distribution (\ref{eq:density_continuum}) was probably known even before. See also exercise \ref{exo:chebyshev}.
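One can check numerically that the arcsine law indeed makes the logarithmic potential constant on $B$, which is the content of the Euler-Lagrange equation above. Substituting $x'=\cos\theta$ turns $\int_B \frac{\log|x-x'|}{\pi\sqrt{1-x'^2}}\,dx'$ into $\frac{1}{\pi}\int_0^\pi \log|x-\cos\theta|\,d\theta$, which a simple midpoint rule evaluates (function name is ours):

```python
import math

def log_potential(x, n=20000):
    """(1/pi) * integral_0^pi log|x - cos(theta)| d(theta), i.e. the
    logarithmic potential of the arcsine density at x, via a midpoint
    rule (whose nodes avoid the integrable singularity at cos(theta)=x)."""
    h = math.pi / n
    return sum(math.log(abs(x - math.cos((k + 0.5) * h)))
               for k in range(n)) * h / math.pi
```

Up to discretization errors, `log_potential(x)` returns $-\log 2\approx -0.6931$ for every $x\in(-1,1)$, so $\int_B \log(x-x')^2\,\rho(x')\,dx'=-2N\log 2$ is indeed constant on $B$, as required.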
\begin{myfigure}{Limit shapes for the continuous gas}
\begin{tikzpicture}[scale=0.89]
\foreach \x in {3,7,14,46,99,117,174,198,309,342,379,444,468,492,506,510}{
\draw[black] (0.03,0) -- (0.03*512,0);
\filldraw[dblue] (0.03*\x,0) circle (0.08cm);
}
\node (myfirstpic) at (7.68,-4.) {\includegraphics[height=5.8cm]{./Plots/DCG/density_b2_continuum.pdf}};
\end{tikzpicture}
\tcblower Top: a typical configuration for $N=16$ particles on $B$. Bottom: density profile for $N=16$ particles (Monte Carlo simulation) and comparison with the thermodynamic limit (\ref{eq:density_continuum}).
\end{myfigure}
\subsubsection{The discrete gas}
The discrete model can be treated using a similar method. Introducing the particle density $\rho_x =\sum_i \delta_{x,x_i}$, we expect that it becomes a continuous function $\rho(x)$, normalized to $\int_B \rho(x)dx=2d$, in the limit $L\to \infty,N\to\infty$ with fixed total density $N/L=d$. We still have to minimize the energy functional $\mathcal{E}[\rho]$ of (\ref{eq:functional}) with the constraint (\ref{eq:constraint}). However, due to the discrete nature of the problem, the density cannot exceed one, since two particles cannot sit on the same site. This means we get a second constraint
\begin{equation}\label{eq:constraint2}
\rho(x)\leq 1\quad,\quad \forall x \in [-1,1].
\end{equation}
This extra constraint is \emph{typical for discrete models}.
The previous solution (\ref{eq:density_continuum}) is unbounded, which means a new analysis is needed.
The solution to minimizing (\ref{eq:functional}) with the constraints (\ref{eq:constraint}) and (\ref{eq:constraint2}) was found in Ref.~\cite{Rakhmanov_1996}. Let us first give the result and comment on it later. The limit shape is given by
\begin{equation}\label{eq:density_discrete}
\rho(x)=\left\{
\begin{array}{ccc}
\frac{2}{\pi}\arcsin\left(\frac{d}{\sqrt{1-x^2}}\right)&,& \left|x\right|<\sqrt{1-d^2}\\
1&,& \sqrt{1-d^2}<\left|x\right|<1
\end{array}
\right.
\end{equation}
As in the continuum, the Coulomb interaction wants to make the density very large close to the edge. This is not possible, due to the constraint (\ref{eq:constraint2}). Hence, the system compromises by having the maximum allowed density over a larger region $[-1,-\sqrt{1-d^2}]\cup [\sqrt{1-d^2},1]$. In this region the positions of the particles become deterministic; we call this the \emph{frozen region}, in analogy to dimers. The other region is fluctuating. See figure \ref{fig:limshapes2} for an illustration.
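As a sanity check, one can verify numerically that (\ref{eq:density_discrete}) carries total mass $2d$: the arcsin part on $|x|<\sqrt{1-d^2}$ plus the frozen plateaus add up to $2d$ for any $0<d<1$ (a quick sketch; function names are ours):

```python
import math

def rho(x, d):
    """Limit shape of the discrete Coulomb gas at total density d."""
    if abs(x) >= math.sqrt(1 - d * d):
        return 1.0  # frozen plateau
    return (2 / math.pi) * math.asin(d / math.sqrt(1 - x * x))

def total_mass(d, n=100000):
    """Midpoint-rule integral of rho over [-1, 1]; should equal 2*d."""
    h = 2.0 / n
    return sum(rho(-1 + (k + 0.5) * h, d) for k in range(n)) * h
```

In the limit $d\to 1$ the fluctuating region shrinks to nothing (the lattice is full), while for $d\to 0$ the plateaus disappear and one recovers the continuum arcsine law.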
\begin{myfigure}[label=fig:limshapes2,float=ht!]{Limit shapes for the discrete model}
\centering\begin{tikzpicture}[scale=0.89]
\begin{scope}[yshift=1.cm]
\foreach \x in {1,...,64}{
\draw[thick,dblue] (0.24*\x,0) circle (0.08cm);
}
\foreach \x in {1,2,3,4,5,6,7,8,9,10,11,12,13,15,16,17,19,21,23,24,26,27,30,32,34,36,37,40,42,43,45,47,48,49,50,51,53,54,55,56,57,58,59,60,61,62,63,64}{
\filldraw[dblue] (0.24*\x,0) circle (0.08cm);
}
\end{scope}
\foreach \x in {1,...,64}{
\draw[thick,dblue] (0.24*\x,0) circle (0.08cm);
}
\foreach \x in {1,2,3,4,5,6,8,9,11,14,16,19,23,25,28,30,31,37,41,44,45,47,50,52,55,56,59,60,61,62,63,64}{
\filldraw[dblue] (0.24*\x,0) circle (0.08cm);
}
\begin{scope}[yshift=-1.0cm]
\foreach \x in {1,...,64}{
\draw[thick,dblue] (0.24*\x,0) circle (0.08cm);
}
\foreach \x in {1,2,3,8,12,16,20,27,34,43,46,55,59,61,63,64}{
\filldraw[dblue] (0.24*\x,0) circle (0.08cm);
}
\end{scope}
\node (m) at (7.68,-5.5) {\includegraphics[height=6.5cm]{./Plots/DCG/density_b2_32_64.pdf}};
\end{tikzpicture}
\tcblower Top: typical configurations for $N$ particles on $L=64$ sites. (a) $N=48$ (b) $N=32$ (c) $N=16$. Bottom (d): Corresponding density profiles and expected limit shapes (\ref{eq:density_discrete}) in the thermodynamic limit.
\end{myfigure}
Let us now explain heuristically how to find the solution. The crucial point is that the second constraint (\ref{eq:constraint2}) may be temporarily lifted by assuming that the density equals one in a given region $x\in [-1,-r]\cup [r,1]$ close to the edges. Then, we have to minimise the new functional
\begin{equation}\label{eq:tominimize}
\mathcal{E}_r[\rho_r]=-\int_{-r}^r dx \int_{-r}^{r}dx' \log(x-x')^2 \rho_r(x)\rho_r(x')+\int_{-r}^r \rho_r(x) V(x)dx+E_r,
\end{equation}
where the potential
\begin{equation}
V(x)=-\left(\int_{-1}^{-r}+\int_{r}^{1}\right)dx' \log(x-x')^2
\end{equation}
simply comes from the interaction of the charges in $[-r,r]$ with the charges in the frozen region (in (\ref{eq:tominimize}) there is also an additive constant $E_r$ which accounts for self-interaction in the frozen region).
The constraints on the new equilibrium measure now read
\begin{eqnarray}\label{eq:constraint1bis}
\int_{-r}^r \rho_r(x)dx&=&2d-2(1-r),\\ \label{eq:constraint2bis}
\rho_r(x)&\leq& 1.
\end{eqnarray}
Now, depending on the value of $r$, the solution of the minimization problem without imposing the second constraint may well satisfy (\ref{eq:constraint2bis}) anyway (e.g.\ for $r$ close to one it does not, while for $r=1-d$ it trivially does, since there is no mass left in $\rho_r$). The solution to the minimization problem (\ref{eq:tominimize}) with only constraint (\ref{eq:constraint1bis}) can be found by writing down the Euler-Lagrange equation once again. Denoting by $\rho_r$ the solution, we choose the value
\begin{equation}
r_0= \sup \{r\in [0,1]\;,\; \rho_r(x)\leq 1 \quad \forall x \in [-r,r]\}.
\end{equation}
In words, this is the largest $r$ such that the extra constraint is automatically satisfied.
The solution to the minimization of (\ref{eq:tominimize}) with constraints (\ref{eq:constraint1bis}), (\ref{eq:constraint2}) is then $\rho(x)=\rho_{r_0}(x)$ if $|x|<r_0$, and $\rho(x)=1$ otherwise. This yields (\ref{eq:density_discrete}). For a rigorous proof, we refer to \cite{Rakhmanov_1996}.
To finish this section, let us finally mention that some examples of limit shape problems in 2d statistical mechanics can also be mapped to a similar discrete Coulomb gas problem in 1d, as was pointed out in \cite{Johansson2000}. In the following we will not pursue this direction, however, and aim for a more general hydrodynamic theory.
\pagebreak
\paragraph{Plan of the rest of the lectures.}
It looks like a good idea to try and apply the general ideas we used to solve the discrete Coulomb gas model. However, it is not clear at this stage what we should minimize exactly to obtain the limit shapes in general. This question is addressed in section \ref{sec:hydro}, where we introduce the height mapping and use it to understand which functional to minimize. Then, we actually compute this functional in section \ref{sec:tranfermatrix}, for dimers on the hexagonal lattice. We use the transfer matrix formalism, and a mapping onto free fermions (see appendix \ref{app:freefermions_allyouneedtoknow} for a description of free fermion techniques, if need be). The choice of the hexagonal lattice is motivated by technical simplicity. The square lattice is similar, and left to the reader (see exercises \ref{ex:tmsquare} and \ref{ex:hydroquare}). We then use these results to find the limit shapes in section \ref{sec:cburgers}. Section \ref{sec:exactcalc} deals with exact lattice calculations, which allow us to recover the arctic curves in a few selected cases. Finally, we discuss a few related and more complicated problems in section \ref{sec:interactions}, and conclude.
\vspace*{\fill}
\pagebreak
\begin{exercise}{Coverings of an Aztec diamond \quad \cite{Elkies1991}}
Try (and perhaps fail) to show that (\ref{eq:AztecZ}) is correct.
\end{exercise}
\vspace*{\fill}
\begin{exercise}[label=ex:fqh]{Fractional quantum Hall effect}
Laughlin famously guessed \cite{Laughlin} that the experimental observation of fractional plateaus by Tsui et al \cite{Tsui} may be understood via the model wave function\\ ${\Psi(z_1,\ldots,z_N)=\prod_{1\leq i<j\leq N}(z_i-z_j)^{\beta/2}} e^{-\frac{1}{2}\sum |z_i|^2}$, where $\beta/2$ is a positive integer and $z_i\in \mathbb{C}$, and they got the Nobel prize after that.
\tcbline
1. By interpreting $|\Psi(z_1,\ldots,z_N)|^2$ as a pdf for the $N$ particles, what is the limit shape in the limit $N\to \infty$? \\
2. The $N$ particles are now constrained to live on the sites of a $L\times L$ square lattice with mesh $a=1$ (the origin $z=0$ is set at the barycenter). What is the limit shape in the limit $N\to \infty$ with fixed density $d=N/L^2$ when $d$ is not too large? What happens at higher densities?
\end{exercise}
\vspace*{\fill}
\begin{exercise}[label=exo:chebyshev]{Zeros of orthogonal polynomials [Stieltjes 1885]}
Consider the electrostatic energy from before, with two extra charges at positions $\pm 1$: $E(x_1,\ldots,x_N)=-\sum_{i<j} \log |x_i-x_j|-a \sum_i\log|1+x_i|-b\sum_i \log|1-x_i|$ on $B$, for $a,b \geq 0$. For large $N$, the limit shape does not depend on $a,b$.
\tcbline
1. Why? In the following, we set $a=b=1/4$.\\
2. Write down a system of equations satisfied by the positions $y_1,\ldots,y_N$ that minimise the electrostatic energy.\\
3. Let $p_N(x)=\prod_{i=1}^N (x-y_i)$. Show that the previous system is equivalent to $2\frac{p_N''(y_i)}{p_N'(y_i)}+\frac{1}{1+y_i}-\frac{1}{1-y_i}=0$\quad$\forall i=1,\ldots,N$. \\
4. Show that $(1-x^2)p_N''(x)-xp_N'(x)=-N^2 p_N(x)$. Deduce that, up to normalization, $p_N(\cos \theta)\propto\cos(N\theta)$.\\
5. Find the zeroes of $p_N$ and recover the limit shape (\ref{eq:density_continuum}) in the limit $N\to \infty$.
\end{exercise}
\vspace*{\fill}
\section{Variational principle}
\label{sec:hydro}
We show in this section that the right quantity to minimise is a variant of the free energy. To understand this, we first introduce an important ingredient, the height mapping.
\subsection{The height mapping}
Consider the dimer model on the square lattice. Remember the lattice is bipartite, which means one can label sites with two colors, black and white, in such a way that each black (white) lattice point has all its nearest neighbors of the opposite color.
To each dimer configuration we associate a height configuration, as follows. Heights are discrete numbers (integers in some units, see figure \ref{fig:heightmapping}) which live on plaquettes (or the dual lattice, which is also square). We pick a reference point, say the bottom-left plaquette, and set its height to zero. Then, turning counterclockwise around a black (resp. white) vertex, the height picks up $+3$ (resp. $-3$) when crossing a dimer, and $-1$ (resp. $+1$) otherwise. A dimer configuration with the corresponding height configuration is shown in figure \ref{fig:heightmapping} on the left. Recall the colour code used in these notes. We draw a vertical dimer in blue (resp. yellow) if its bottom endpoint is a black (resp. white) vertex. Similarly, green (resp. red) dimers have their left endpoint on a black (resp. white) vertex. Hence, crossing a blue dimer from left to right, the height always picks up $-3$, etc.
\begin{myfigure}[label=fig:heightmapping]{The height mapping for dimers on the square lattice}
\begin{tikzpicture}[scale=0.85]
\foreach \x in {0,1,2,3,4,5,6,7}{
\draw (\x,0) -- (\x,7);\draw (0,\x) -- (7,\x);
}
\draw[color=dblue,line width=8pt] (0,0) -- (0,1);
\draw[color=yello,line width=8pt] (1,0) -- (1,1);
\draw[color=yello,line width=8pt] (0,3) -- (0,4);
\draw[color=dblue,line width=8pt] (1,3) -- (1,4);
\draw[color=dblue,line width=8pt] (0,6) -- (0,7.);
\draw[color=yello,line width=8pt] (1,6) -- (1,7.);
\draw[color=yello,line width=8pt] (2,1) -- (2,2);
\draw[color=dblue,line width=8pt] (3,1) -- (3,2);
\draw[color=yello,line width=8pt] (2,5) -- (2,6);
\draw[color=dblue,line width=8pt] (3,5) -- (3,6);
\draw[color=dblue,line width=8pt] (4,2) -- (4,3);
\draw[color=yello,line width=8pt] (5,2) -- (5,3);
\draw[color=dblue,line width=8pt] (4,6) -- (4,7);
\draw[color=dblue,line width=8pt] (6,4) -- (6,5);
\draw[color=yello,line width=8pt] (7,4) -- (7,5);
\draw[color=yello,line width=8pt] (7,6) -- (7,7);
\draw[color=gree,line width=8pt] (0,2) -- (1,2);
\draw[color=dred,line width=8pt] (0,5) -- (1,5);
\draw[color=gree,line width=8pt] (2,0) -- (3,0);
\draw[color=dred,line width=8pt] (2,3) -- (3,3);
\draw[color=gree,line width=8pt] (2,4) -- (3,4);
\draw[color=dred,line width=8pt] (2,7) -- (3,7);
\draw[color=gree,line width=8pt] (4,0) -- (5,0);
\draw[color=dred,line width=8pt] (4,1) -- (5,1);
\draw[color=gree,line width=8pt] (4,4) -- (5,4);
\draw[color=dred,line width=8pt] (4,5) -- (5,5);
\draw[color=dred,line width=8pt] (5,6) -- (6,6);
\draw[color=gree,line width=8pt] (5,7) -- (6,7);
\draw[color=gree,line width=8pt] (6,0) -- (7,0);
\draw[color=dred,line width=8pt] (6,1) -- (7,1);
\draw[color=gree,line width=8pt] (6,2) -- (7,2);
\draw[color=dred,line width=8pt] (6,3) -- (7,3);
\foreach \y in {0,2,4,6,8}{
\draw (-0.5,-0.5+\y) node {$0$};\draw (7.5,-0.5+\y) node {$0$};
}
\foreach \y in {1,3,5,7}{
\draw (-0.5,-0.5+\y) node {$1$};\draw (7.5,-0.5+\y) node {$1$};
}
\foreach \x in {2,4,6}{
\draw (\x-0.5,-0.5) node {$0$};\draw (\x-0.5,7.5) node {$0$};
}
\foreach \x in {1,3,5,7}{
\draw (\x-0.5,-0.5) node {$-1$};\draw (\x-0.5,7.5) node {$-1$};
}
\draw (0.5,0.5) node {$-2$};\draw (1.5,0.5) node {$1$};\draw (2.5,0.5) node {$2$};\draw (3.5,0.5) node {$1$};\draw (4.5,0.5) node {$2$};\draw (5.5,0.5) node {$1$};\draw (6.5,0.5) node {$2$};
\draw (0.5,1.5) node {$-1$};\draw (1.5,1.5) node {$0$};\draw (2.5,1.5) node {$3$};\draw (3.5,1.5) node {$0$};\draw (4.5,1.5) node {$-1$};\draw (5.5,1.5) node {$0$};\draw (6.5,1.5) node {$-1$};
\draw (0.5,2.5) node {$2$};\draw (1.5,2.5) node {$1$};\draw (2.5,2.5) node {$2$};\draw (3.5,2.5) node {$1$};\draw (4.5,2.5) node {$-2$};\draw (5.5,2.5) node {$1$};\draw (6.5,2.5) node {$2$};
\draw (0.5,3.5) node {$3$};\draw (1.5,3.5) node {$0$};\draw (2.5,3.5) node {$-1$};\draw (3.5,3.5) node {$0$};\draw (4.5,3.5) node {$-1$};\draw (5.5,3.5) node {$0$};\draw (6.5,3.5) node {$-1$};
\draw (0.5,4.5) node {$2$};\draw (1.5,4.5) node {$1$};\draw (2.5,4.5) node {$2$};\draw (3.5,4.5) node {$1$};\draw (4.5,4.5) node {$2$};\draw (5.5,4.5) node {$1$};\draw (6.5,4.5) node {$-2$};
\draw (0.5,5.5) node {$-1$};\draw (1.5,5.5) node {$0$};\draw (2.5,5.5) node {$3$};\draw (3.5,5.5) node {$0$};\draw (4.5,5.5) node {$-1$};\draw (5.5,5.5) node {$0$};\draw (6.5,5.5) node {$-1$};
\draw (0.5,6.5) node {$-2$};\draw (1.5,6.5) node {$1$};\draw (2.5,6.5) node {$2$};\draw (3.5,6.5) node {$1$};\draw (4.5,6.5) node {$-2$};\draw (5.5,6.5) node {$-3$};\draw (6.5,6.5) node {$-2$};
\end{tikzpicture}\hfill
\begin{tikzpicture}[scale=0.8]
\foreach \x in {0,1,2,3,4,5,6}{
\draw (\x,0) -- (\x,5);
}
\foreach \x in {0,1,2,3,4,5}{\draw (0,\x) -- (6,\x);}
\draw[color=dblue,line width=8pt] (0,0) -- (0,1);
\draw[color=dblue,line width=8pt] (1,-0.5) -- (1,0);
\draw[color=dblue,line width=8pt] (2,0) -- (2,1);
\draw[color=dblue,line width=8pt] (3,-0.5) -- (3,0);
\draw[color=dblue,line width=8pt] (4,0) -- (4,1);
\draw[color=dblue,line width=8pt] (5,-0.5) -- (5,0);
\draw[color=dblue,line width=8pt,opacity=0.6] (1,1) -- (1,2);
\draw[color=dblue,line width=8pt,opacity=0.6] (3,1) -- (3,2);
\draw[color=dblue,line width=8pt,opacity=0.6] (0,2) -- (0,3);
\draw[color=dblue,line width=8pt,opacity=0.6] (2,2) -- (2,3);
\draw[color=dblue,line width=8pt,opacity=0.6] (1,3) -- (1,4);
\draw[color=dblue,line width=8pt,opacity=0.6] (0,4) -- (0,5);
\draw (-0.6,0.5) node {$-1$};\draw (0.5,0.5) node {$2$};\draw (1.5,0.5) node {$3$};\draw (2.5,0.5) node {$6$};\draw (3.5,0.5) node {$7$};\draw (4.5,0.5) node {$10$};
\draw (-0.5,1.5) node {$0$};\draw (0.5,1.5) node {$1$};\draw (1.5,1.5) node {$4$};\draw (2.5,1.5) node {$5$};\draw (3.5,1.5) node {$8$};
\draw (-0.6,2.5) node {$-1$};\draw (0.5,2.5) node {$2$};\draw (1.5,2.5) node {$3$};\draw (2.5,2.5) node {$6$};
\draw (-0.5,3.5) node {$0$};\draw (0.5,3.5) node {$1$};\draw (1.5,3.5) node {$4$};
\draw (-0.6,4.5) node {$-1$};\draw (0.5,4.5) node {$2$};
\end{tikzpicture}
\tcbline
Height mapping on the square lattice, in units of $\pi/4$. Left: heights corresponding to a dimer configuration picked uniformly at random. Observe that the heights remain on average quite close to zero. Right: heights corresponding to an alternating boundary condition with zigzag blue dimers. Some other vertical dimer occupancies are automatically set as a result; those are shown in lighter blue. The corresponding mean horizontal slope is maximal, $\partial_x h=\pi/2$. The gradient in the vertical direction is also automatically set to $\partial_y h=0$. In fact, in general the slopes satisfy $|\partial_x h|+|\partial_y h|\leq\pi/2$.
\end{myfigure}
At the very end, and for later convenience, all heights are multiplied by $\pi/4$: in the remainder of these notes, heights are therefore elements of $\frac{\pi}{4}\mathbb{Z}$.
The mapping is one-to-one, and has many nice properties that we will investigate in the following. For the moment let us just say that mapping to discrete heights has a long history in statistical mechanics, dating back to Refs.~\cite{BloteHilhorst_1982,Nienhuis_1984}. In fact, the model studied in these references can be mapped onto dimers on the hexagonal lattice, which we will study later on.
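The height rules above can be turned into code. The sketch below uses our own conventions (vertices $(x,y)$ with $x+y$ even taken as black, a covering stored as a dictionary mapping each vertex to its partner, heights in integer units before the $\pi/4$ rescaling): it computes the height change across each dual edge, deciding the sense of rotation around the black endpoint with a cross product, and then integrates from the bottom-left plaquette.

```python
def height_step(match, p, q):
    """Height change when stepping between adjacent plaquettes p -> q.
    Convention: vertices (x, y) with x + y even are black; the step
    picks up +3 across a dimer and -1 across an empty edge when it
    turns counterclockwise around the black endpoint of the crossed
    edge, with both signs reversed otherwise."""
    (i, j), (k, l) = p, q
    if k == i + 1:          # step east: cross the vertical edge at x = i+1
        u, v = (i + 1, j), (i + 1, j + 1)
    elif i == k + 1:        # step west
        u, v = (k + 1, l), (k + 1, l + 1)
    elif l == j + 1:        # step north: cross the horizontal edge at y = j+1
        u, v = (i, j + 1), (i + 1, j + 1)
    else:                   # step south
        u, v = (k, l + 1), (k + 1, l + 1)
    dimer = match.get(u) == v
    b = u if (u[0] + u[1]) % 2 == 0 else v   # black endpoint of the edge
    # cross product (edge midpoint - b) x (step direction), in doubled
    # coordinates, decides the sense of rotation around b
    ccw = (u[0] + v[0] - 2 * b[0]) * (l - j) \
        - (u[1] + v[1] - 2 * b[1]) * (k - i) > 0
    return (3 if dimer else -1) * (1 if ccw else -1)

def heights(match, Lx, Ly):
    """Heights on the (Lx-1) x (Ly-1) plaquettes, with h = 0 at (0, 0)."""
    h = {(0, 0): 0}
    for i in range(Lx - 1):
        for j in range(Ly - 1):
            if (i, j) != (0, 0):
                prev = (i - 1, 0) if j == 0 else (i, j - 1)
                h[(i, j)] = h[prev] + height_step(match, prev, (i, j))
    return h
```

The result does not depend on the integration path: around any interior vertex exactly one of the four incident edges carries a dimer, so the circulation of `height_step` is $\pm(3-1-1-1)=0$.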
\subsection{Minimizing the free energy}
From the height mapping, we know that dimers like to be in configurations where the height gradient is close to zero. A good example is provided by a rectangular domain, as illustrated in figure \ref{fig:heightmapping} on the left. Boundary conditions might spoil that, however. The reader can easily check that the Aztec diamond geometry of figure \ref{fig:aztec} can be cooked up by imposing the following boundary condition
\begin{center}\begin{tikzpicture}[scale=0.8]
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11}{
\draw (\x,0) -- (\x,1);
}
\foreach \x in {0,1}{\draw (0,\x) -- (11,\x);}
\draw[color=dblue,line width=8pt] (0,0) -- (0,1);
\draw[color=dblue,line width=8pt] (1,-0.5) -- (1,0);
\draw[color=dblue,line width=8pt] (2,0) -- (2,1);
\draw[color=dblue,line width=8pt] (3,-0.5) -- (3,0);
\draw[color=dblue,line width=8pt] (4,0) -- (4,1);
\draw[color=dblue,line width=8pt] (5,-0.5) -- (5,0);
\draw[color=yello,line width=8pt] (6,-0.5) -- (6,0);
\draw[color=yello,line width=8pt] (7,0) -- (7,1);
\draw[color=yello,line width=8pt] (8,-0.5) -- (8,0);
\draw[color=yello,line width=8pt] (9,0) -- (9,1);
\draw[color=yello,line width=8pt] (10,-0.5) -- (10,0);
\draw[color=yello,line width=8pt] (11,0) -- (11,1);
\end{tikzpicture}\end{center}
at the bottom, and a similar one at the top. The ``half-dimers'' in the above picture mean that the edge above is not occupied by a dimer. In that case the slopes are maximal along the Aztec diamond, and alternate $-\pi/2,\pi/2,-\pi/2,\pi/2$ from one boundary to the other.
These \emph{extreme} boundary conditions play an important role, and are, as we shall see, responsible for the appearance of the arctic circle. Indeed, the number of available dimer coverings in a given region strongly depends on the average slopes imposed at the boundary, as can already be guessed by looking at the right part of figure~\ref{fig:heightmapping}.
A way to see this is to use the fact that all possible slopes are allowed when covering a torus with dimers, as can be seen in figure~\ref{fig:torusconfig}. So consider a $\ell\times \ell$ torus ($\ell$ even), and associate a weight $a=e^{\mu}$ for yellow dimers, $1/a$ for blue dimers, $b=e^{\nu}$ for green dimers, $1/b$ for red dimers.
The corresponding partition function may be written as
\begin{equation}
Z(\mu,\nu)=\sum_r \sum_{s} Z_{r,s}\,e^{\frac{\ell^2}{\pi}(r \mu+s \nu)}.
\end{equation}
Above, $Z_{r,s}$ precisely counts the number of dimer coverings in the $(r,s)$ sector, where $r$ is the mean horizontal slope $\braket{\partial_x h}$ and $s$ the mean vertical slope $\braket{\partial_y h}$. The total number of dimer coverings is $Z(0,0)$. Hence $Z(\mu,\nu)$ encodes all the information about the $Z_{r,s}$. For example, for an $8\times 8$ torus the generating function reads ($311853312$ coverings in total; we will explain later how to calculate this):
\begin{align}\nonumber
&Z(\mu,\nu)=153722916+33490432 \left(a+\frac{1}{a}+b+\frac{1}{b}\right)+5427224 \left(a b+\frac{a}{b}+\frac{1}{a b}+\frac{b}{a}\right)\\\nonumber
&+550928 \left(a^2+\frac{1}{a^2}+b^2+\frac{1}{b^2}\right)+31232 \left(a^2 b+\frac{a^2}{b}+\frac{b}{a^2}+\frac{1}{a^2 b}+a b^2+\frac{a}{b^2}+\frac{1}{a b^2}+\frac{b^2}{a}\right)\\\nonumber
&+1536 \left(a^3+\frac{1}{a^3}+b^3+\frac{1}{b^3}\right)+6 \left(a^2 b^2+\frac{a^2}{b^2}+\frac{b^2}{a^2}+\frac{1}{a^2 b^2}\right)+4 \left(a^3 b+\frac{a^3}{b}+\frac{b}{a^3}+\frac{1}{a^3 b}\right.\\&+\left.a b^3+\frac{a}{b^3}+\frac{1}{a b^3}+\frac{b^3}{a}\right)+\frac{1}{a^4} + a^4 + \frac{1}{b^4} + b^4.\label{eq:brutecounting}
\end{align}
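Expression (\ref{eq:brutecounting}) is easy to check for consistency: setting $a=b=1$ must give back the total number of coverings, and the polynomial must be invariant under $a\leftrightarrow b$ and $a\leftrightarrow 1/a$ (lattice symmetries). A direct transcription (the function name is ours; exact rational arithmetic avoids floating-point issues):

```python
from fractions import Fraction

def Z(a, b):
    """Transcription of the 8x8 torus generating function Z(mu, nu),
    with a = exp(mu), b = exp(nu)."""
    return (153722916
            + 33490432 * (a + 1/a + b + 1/b)
            + 5427224 * (a*b + a/b + 1/(a*b) + b/a)
            + 550928 * (a**2 + 1/a**2 + b**2 + 1/b**2)
            + 31232 * (a**2*b + a**2/b + b/a**2 + 1/(a**2*b)
                       + a*b**2 + a/b**2 + 1/(a*b**2) + b**2/a)
            + 1536 * (a**3 + 1/a**3 + b**3 + 1/b**3)
            + 6 * (a**2*b**2 + a**2/b**2 + b**2/a**2 + 1/(a**2*b**2))
            + 4 * (a**3*b + a**3/b + b/a**3 + 1/(a**3*b)
                   + a*b**3 + a/b**3 + 1/(a*b**3) + b**3/a)
            + a**4 + 1/a**4 + b**4 + 1/b**4)
```

Evaluating at `a = b = Fraction(1)` indeed returns $311853312$, and the symmetries hold term by term.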
\pagebreak
\begin{myfigure}[label=fig:torusconfig]{A possible height gradient on the torus}
\centering\begin{tikzpicture}[scale=0.8]
\foreach \x in {0,1,2,3,4,5,6,7}{
\draw (\x,0) -- (\x,7);\draw (0,\x) -- (7,\x);
\draw[dashed] (-0.5,\x) -- (0,\x); \draw[dashed] (7,\x) -- (7.5,\x);
\draw[dashed] (\x,-0.5) -- (\x,0);\draw[dashed] (\x,7) -- (\x,7.5);
}
\draw[color=dblue,line width=8pt] (0,0) -- (0,1);\draw[color=yello,line width=8pt] (1,0) -- (1,1);
\draw[color=yello,line width=8pt] (2,1) -- (2,2); \draw[color=yello,line width=8pt] (3,0) -- (3,1);\draw[color=yello,line width=8pt] (4,1) -- (4,2);
\draw[color=dblue,line width=8pt] (5,1) -- (5,2);\draw[color=yello,line width=8pt] (6,1) -- (6,2);\draw[color=yello,line width=8pt] (7,0) -- (7,1);
\draw[color=gree,line width=8pt] (4,0) -- (5,0);
\draw[color=yello,line width=8pt] (1,2) -- (1,3); \draw[color=yello,line width=8pt] (3,2) -- (3,3); \draw[color=yello,line width=8pt] (2,3) -- (2,4);
\draw[color=yello,line width=8pt] (0,3) -- (0,4); \draw[color=yello,line width=8pt] (0,5) -- (0,6); \draw[color=yello,line width=8pt] (1,4) -- (1,5);
\draw[color=dred,line width=8pt] (0,7) -- (1,7); \draw[color=dred,line width=8pt] (1,6) -- (2,6); \draw[color=dred,line width=8pt] (2,5) -- (3,5);
\draw[color=dred,line width=8pt] (3,6) -- (4,6); \draw[color=gree,line width=8pt] (3,7) -- (4,7);
\draw[color=dred,line width=8pt] (3,4) -- (4,4);\draw[color=dred,line width=8pt] (4,3) -- (5,3);\draw[color=dred,line width=8pt] (4,5) -- (5,5);
\draw[color=dred,line width=8pt] (5,4) -- (6,4); \draw[color=dred,line width=8pt] (6,3) -- (7,3);
\draw[color=yello,line width=8pt] (5,6) -- (5,7); \draw[color=yello,line width=8pt] (6,5) -- (6,6);\draw[color=yello,line width=8pt] (7,4) -- (7,5);
\draw[color=yello,line width=8pt] (7,6) -- (7,7);
\draw[color=gree,line width=8pt] (-0.6,2) -- (0,2); \draw[color=gree,line width=8pt] (7,2) -- (7.6,2);
\draw[color=yello,line width=8pt] (2,-0.6) -- (2,0); \draw[color=yello,line width=8pt] (2,7) -- (2,7.6);
\draw[color=yello,line width=8pt] (6,-0.6) -- (6,0);\draw[color=yello,line width=8pt] (6,7) -- (6,7.6);
\draw (-0.5,-0.5) node {$0$};\draw (0.5,-0.5) node {$-1$};\draw (1.5,-0.5) node {$0$};\draw (2.5,-0.5) node {$3$};\draw (3.5,-0.5) node {$4$};
\draw (4.5,-0.5) node {$3$};\draw (5.5,-0.5) node {$4$};\draw (6.5,-0.5) node {$7$};\draw (7.5,-0.5) node {$8$};
\draw (-0.5,0.5) node {$1$};\draw (0.5,0.5) node {$-2$};\draw (1.5,0.5) node {$1$};\draw (2.5,0.5) node {$2$};\draw (3.5,0.5) node {$5$};
\draw (4.5,0.5) node {$6$};\draw (5.5,0.5) node {$5$};\draw (6.5,0.5) node {$6$};\draw (7.5,0.5) node {$9$};
\draw (-0.5,1.5) node {$0$};\draw (0.5,1.5) node {$-1$};\draw (1.5,1.5) node {$0$};\draw (2.5,1.5) node {$3$};\draw (3.5,1.5) node {$4$};
\draw (4.5,1.5) node {$7$};\draw (5.5,1.5) node {$4$};\draw (6.5,1.5) node {$7$};\draw (7.5,1.5) node {$8$};
\draw (-0.5,2.5) node {$-3$};\draw (0.5,2.5) node {$-2$};\draw (1.5,2.5) node {$1$};\draw (2.5,2.5) node {$2$};\draw (3.5,2.5) node {$5$};
\draw (4.5,2.5) node {$6$};\draw (5.5,2.5) node {$5$};\draw (6.5,2.5) node {$6$};\draw (7.5,2.5) node {$5$};
\draw (-0.5,3.5) node {$-4$};\draw (0.5,3.5) node {$-1$};\draw (1.5,3.5) node {$0$};\draw (2.5,3.5) node {$3$};\draw (3.5,3.5) node {$4$};
\draw (4.5,3.5) node {$3$};\draw (5.5,3.5) node {$4$};\draw (6.5,3.5) node {$3$};\draw (7.5,3.5) node {$4$};
\draw (-0.5,4.5) node {$-3$};\draw (0.5,4.5) node {$-2$};\draw (1.5,4.5) node {$1$};\draw (2.5,4.5) node {$2$};\draw (3.5,4.5) node {$1$};
\draw (4.5,4.5) node {$2$};\draw (5.5,4.5) node {$1$};\draw (6.5,4.5) node {$2$};\draw (7.5,4.5) node {$5$};
\draw (-0.5,5.5) node {$-4$};\draw (0.5,5.5) node {$-1$};\draw (1.5,5.5) node {$0$};\draw (2.5,5.5) node {$-1$};\draw (3.5,5.5) node {$0$};
\draw (4.5,5.5) node {$-1$};\draw (5.5,5.5) node {$0$};\draw (6.5,5.5) node {$3$};\draw (7.5,5.5) node {$4$};
\draw (-0.5,6.5) node {$-3$};\draw (0.5,6.5) node {$-2$};\draw (1.5,6.5) node {$-3$};\draw (2.5,6.5) node {$-2$};\draw (3.5,6.5) node {$-3$};
\draw (4.5,6.5) node {$-2$};\draw (5.5,6.5) node {$1$};\draw (6.5,6.5) node {$2$};\draw (7.5,6.5) node {$5$};
\draw (-0.5,7.5) node {$-4$};\draw (0.5,7.5) node {$-5$};\draw (1.5,7.5) node {$-4$};\draw (2.5,7.5) node {$-1$};\draw (3.5,7.5) node {$0$};
\draw (4.5,7.5) node {$-1$};\draw (5.5,7.5) node {$0$};\draw (6.5,7.5) node {$3$};\draw (7.5,7.5) node {$4$};
\end{tikzpicture}
\tcbline
\raggedright
A dimer covering of the $8\times 8$ torus, and its height configuration (in units of $\pi/4$). The fact that dimers can wrap around the torus means both $r$ and $s$ can be non-zero. The slopes in this example are $r=(8/8)\pi/4=\pi/4$, $s=(-4/8)\pi/4=-\pi/8$. The dimer configuration drawn here contributes to the $a^2/b$ term in (\ref{eq:brutecounting}). Note that we refrain from identifying the bottom/top and leftmost/rightmost plaquettes here, for the sake of the argument.
\end{myfigure}
The corresponding free energy is, in the thermodynamic limit,
\begin{equation}\label{eq:surfacetension}
F(r,s)=-\lim_{L\to \infty}\frac{1}{\ell^2}\log Z_{r,s}.
\end{equation}
The crucial point is that once $r$ and $s$ are fixed, the resulting free energy no longer depends on the details of the boundary conditions. So, provided $r$ and $s$ are fixed, we are back to the situation discussed in the introduction for the Ising model.
This free energy (or `surface tension') has many fascinating properties, and will play a central role in the following. For now let us just mention that it is minimal at $(r,s)=(0,0)$; in most cases it is also a convex function.
With the free energy $F(r,s)$ as a given, one way to understand the limit shape phenomenon is to consider a very large lattice, say $L\times L$, and cut it into many much smaller (say square) cells of size $\ell\times \ell$, where $\ell$ is still much larger than the lattice spacing ($=1$ for us). Namely, our system is macroscopic, and we look at it at mesoscopic scales where (i) it can be described in the continuum and (ii) it is uniform. The total system is then considered to be a collection of smoothly connected uniform cells, each with its own average boundary gradient $\vec{\nabla} h$, which is imposed by the surrounding cells.
The system will then try to minimize the total free energy (maximize the number of dimer coverings) by choosing the appropriate slopes in each mesoscopic cell. Hence we need to minimize the functional (or action)
\begin{empheq}[box=\othermathbox]{equation}\label{eq:classical_action}
S_0[h]=\int_{D} dx dy F(\partial_x h,\partial_y h)
\end{empheq}
over some domain $D$. The Euler-Lagrange equations for this variational problem read
\begin{equation}\label{eq:eulerlagrange_surface}
\frac{\partial}{\partial x} F^{(10)}(\partial_x h,\partial_y h)+ \frac{\partial}{\partial y} F^{(01)}(\partial_x h,\partial_y h)=0,
\end{equation}
where we have introduced the notation
\begin{equation}
F^{(ij)}(r,s)=\frac{\partial^i}{\partial r^i}\frac{\partial^j}{\partial s^j}F(r,s),
\end{equation}
for the partial derivatives of $F$,
and $h=h(x,y)$. It is also possible to add extra constraints in a similar fashion to what we did in section \ref{sec:coulombgas}.
On physical grounds this variational (or hydrodynamic) principle is expected to hold under very general conditions, and has been used for a long time in the context of crystal surfaces in statistical mechanics (see e.g. \cite{Andreev1981,Nienhuis_1984}). For dimers the situation is even more favorable, since the validity of the variational principle is now a theorem \cite{variationaldimers}.
Therefore, the limit shape is given by the solution to the PDE (\ref{eq:eulerlagrange_surface}) with appropriate boundary conditions. The free energy can also be computed exactly \cite{Nienhuis_1984,variationaldimers}; we will explain how in the following section. It should be stressed, however, that even with an exact expression for the free energy, solving the PDE for arbitrary boundary conditions is in general a difficult problem.
\subsection{Fluctuations and free field theory}
We have so far only discussed the limit shape, which gives the average height field at position $(x,y)$. There are of course fluctuations on top of that, which are, we argue here, described by a massless free field theory. These fluctuations are in particular responsible for the power-law decay of correlation functions.
\paragraph{Homogeneous case.}
To discuss fluctuations, let us first consider the simpler case of the rectangular geometry (or really, any simply connected finite planar domain), for which $\nabla h$ is close to zero on average (see figure \ref{fig:heightmapping}), and the slopes in the neighborhood of $(r,s)=(0,0)$ dominate. Due to lattice symmetry considerations $F(\pm r,\pm s)=F(r,s)$ and $F(r,s)=F(s,r)$, so $F^{(10)}$ and $F^{(01)}$ both vanish, and to lowest non-trivial order we have
\begin{equation}\label{eq:surface_approx}
F(r,s)=F(0,0)+\frac{r^2+s^2}{2\pi K}+o(r^2,s^2).
\end{equation}
So, neglecting higher order corrections, the free energy is determined up to a single unspecified parameter $K>0$. This parameter has many names\footnote{The inverse of $K$ may be interpreted as a stiffness; for this reason $\kappa=1/(2K)$ is called the stiffness. The terminology compactification radius $R^2=1/K$ comes from string theory.}; in the following we call it the \emph{Luttinger parameter}. The Euler-Lagrange equations for the average height take a simple form in that case:
\begin{equation}
\left(\partial_x^2+\partial_y^2\right)h=\nabla^2 h=0,
\end{equation}
so $h$ is a harmonic function with appropriate boundary conditions on $\partial D$. The solution is sometimes called the classical part of the field, and denoted $h_{\rm cl}$. In fancier terms, $h_{\rm cl}(x,y)$ is the harmonic extension of $h_\partial$, its value at the boundary, to $D$.
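As an illustration, the harmonic extension can be computed numerically by simple Jacobi relaxation of the discrete Laplace equation. The sketch below (function name ours) keeps the edge values of the input grid fixed and iterates the interior; since a quadratic like $x^2-y^2$ is exactly harmonic for the five-point Laplacian, the relaxation reproduces it to high accuracy.

```python
import numpy as np

def harmonic_extension(boundary, n_iter=5000):
    """Harmonic extension of boundary data by Jacobi relaxation:
    solve the discrete Laplace equation in the interior, keeping the
    edge values of `boundary` fixed."""
    h = boundary.astype(float).copy()
    h[1:-1, 1:-1] = 0.0  # discard the interior, keep only the boundary data
    for _ in range(n_iter):
        # five-point stencil: each interior site -> average of its neighbours
        h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                + h[1:-1, :-2] + h[1:-1, 2:])
    return h
```

The same routine, fed with the alternating boundary heights discussed below for dimers, produces a numerical approximation of $h_{\rm cl}$ on the rectangle.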
On top of that, fluctuations may be handled by writing
\begin{equation}
h=h_{\rm cl}+\frac{\delta h}{2}
\end{equation}
where $\delta h$ satisfies Dirichlet boundary conditions on $\partial D$ (the factor $1/2$ is set to match standard conventions later on).
Plugging in (\ref{eq:surface_approx}) and dropping $F(0,0)$ yields an action
\begin{equation}
S[h]=S_0[h_{\rm cl}]+S_{\rm fluc}[\delta h]
\end{equation}
where the linear term drops out due to the Euler-Lagrange equations, combined with the fact that $\delta h$ obeys Dirichlet boundary conditions (integrate by parts). The action for the fluctuations is
\begin{equation}\label{eq:simpleaction}
S_{\rm fluc}[\phi]=\frac{1}{8\pi K}\int_D dx dy\, (\nabla \phi)^2.
\end{equation}
The path integral formulation reads
\begin{equation}\label{eq:pathi}
\mathcal{Z}=e^{-S[h_{\rm cl}]}\int [\mathcal{D}\delta h]e^{-S_{\rm fluc}[\delta h]},
\end{equation}
where $\delta h$ satisfies Dirichlet boundary conditions, or, equivalently,
\begin{equation}
\mathcal{Z}=\int [\mathcal{D} h]e^{-S[ h]},
\end{equation}
where $h=h_{\rm cl}$ at the boundary.
The above is the Euclidean action of a massless free (or Gaussian) scalar field in two dimensions; in the following we will simply refer to it as the \emph{free field}\footnote{It has many other names: free (compact or not) boson, bosonic string, (Tomonaga-)Luttinger liquid, $c=1$ conformal field theory. In mathematics the names massless Gaussian field, Gaussian free field or the related Gaussian multiplicative chaos can also be found.}. It is always desirable to visualize things; see figure \ref{fig:discreheight_realisition} for two realizations of the discrete height field for dimers. The reader can then try to imagine what this becomes in the continuum limit.
\begin{myfigure}[label=fig:discreheight_realisition]{Discrete height field}
\includegraphics[height=4.7cm]{./Plots/discreteheightssquaredimers_16.jpg}
\includegraphics[height=4.7cm]{./Plots/discreteheightssquaredimers_16scale}
\hfill
\includegraphics[height=4.7cm]{./Plots/discreteheightssquaredimers_64.jpg}
\includegraphics[height=4.7cm]{./Plots/discreteheightssquaredimers_64scale}
\tcbline
Discrete height configuration corresponding to a uniform random covering on an $L\times L$ square lattice (free boundary conditions, which means the average slopes are zero). Left: $L=16$. Right: $L=64$. At a given point the variance of the field can be shown to grow like $\approx \log L$.
\end{myfigure}
Before heading to the inhomogeneous case several important remarks are in order.
\begin{itemize}
\item The argument we just provided is quite standard \cite{Spohn_lectures}; in fact, the very reason field theory techniques may be applied to statistical or condensed matter systems is that this type of reasoning often just works, and provides \emph{exact results} for critical exponents and long range correlation functions. However, a proper derivation from a concrete lattice model is very often a difficult task.
\item As can be seen from the figure, height field configurations look wilder and wilder as the system size is increased. In particular, the variance at any given point can be shown to diverge as $\approx\log L$. The free field is, in fact, a singular object.
On a simply connected bounded planar domain $D$ a possible mathematical definition is as follows. Consider (minus) the Laplacian $-\nabla^2$ on $D$ with Dirichlet boundary conditions. We write its normalized eigenfunctions as $u_k(x,y)$ and the corresponding eigenvalues as $\lambda_k$ (they are all strictly positive). Then introduce
\begin{equation}\label{eq:gff}
\varphi(x,y)=\sum_{k=1}^{\infty} \xi_k\, u_k(x,y),
\end{equation}
where the $\xi_{k}$ are independent centered Gaussian random variables with variance $\mathbb{E}[(\xi_k)^2]=1/\lambda_k$. This implies that $\mathbb{E}[\varphi(x,y)\varphi(x',y')]=\sum_{k}\frac{1}{\lambda_k} u_k(x,y)u_k(x',y')$ is the Green's function of the Laplacian on $D$ with Dirichlet boundary conditions, which means $\mathbb{E}[\varphi(x,y)\varphi(x',y')]=\braket{\delta h(x,y) \delta h(x',y')}$, where the bracket on the right denotes the expectation value with respect to the path integral (\ref{eq:pathi}). At the level of correlation functions, $\delta h$ and $\varphi$ are the same object.
The series (\ref{eq:gff}), in fact, converges almost nowhere. Therefore, the free field has to be seen as a random distribution, not a random function\footnote{This follows from Kolmogorov's three-series theorem for the convergence of random series. A necessary condition for convergence of $\sum_k X_k$ in our case is that $\sum_k \textrm{var}\,X_k<\infty$, which fails here since the eigenvalues of the Laplacian grow as $k^{2/d}$ in $d$ spatial dimensions, so that $\sum_k 1/\lambda_k$ diverges for $d\geq 2$. The analogous construction in one spatial dimension converges almost surely, and defines a Brownian bridge (Brownian motion on $[0,1]$ conditioned to come back to its starting point). So things get worse in higher dimensions.}. The interested reader can have a look at References \cite{FreeField_Dubedat,Garban_kpz,GFF_Sheffield,GMCLiouville_lectures,Kahane,Simon} for a mathematical treatment of the free field. For computations what matters is to be able to integrate against smooth test functions, so this is not a problem. It is, however, nice to keep in mind that the free field is fundamentally a singular object.
\item For the dimer model we will see that $K=1$, and relate this to the fact that dimers map to free fermions. Adding local interactions (say on plaquettes, as described earlier) affects the free energy and changes the Luttinger parameter over a wide range in parameter space \cite{alet2005interacting,alet2006classical}. This result has been proved \cite{Giuliani_2017,Giuliani_2019} for sufficiently small (but finite) interaction strength. When interactions are too strong, the system typically undergoes a roughening transition \cite{Roughening,BloteHilhorst_1982} of Berezinskii-Kosterlitz-Thouless (BKT) type \cite{Berezinski,KT} to a phase where the height field becomes regular. In that case long range power law correlations are lost.
\item The reader might be tempted to point out that the boundary heights are flat on average for the dimer model in a rectangular domain, so that it is reasonable to impose Dirichlet boundary conditions $h_{\partial}=0$, and safely conclude that $h_{\rm cl}(x,y)=0$ for all $(x,y)\in D$, since this is the only harmonic function satisfying these boundary conditions. This is not quite true: the correct average boundary heights in fact alternate ($\pm \pi/8$), and this slight change has an impact on some observables in the continuum limit. $h_{\rm cl}$ is calculated in exercise~\ref{ex:evenodd}.
\item The action (\ref{eq:simpleaction}) is one of the simplest examples of a conformal field theory. This becomes transparent when rewriting the action in terms of complex coordinates $z=x+iy$, $\bar{z}=x-iy$, in which case $S=\frac{1}{4\pi K}\int dz d\bar{z}(\partial_z \phi) (\partial_{\bar{z}} \phi)$. One can easily check that the action is invariant under any transformation $z\mapsto g(z)$, $\bar{z}\mapsto \bar{g}(\bar{z})$, where $g$ ($\bar{g}$) is any holomorphic (antiholomorphic) function. All such transformations preserve angles locally; they are \emph{conformal}. The action (\ref{eq:simpleaction}) is therefore conformally invariant. Conformal field theory is a vast subject, and we will barely scratch the surface in these notes. We refer to \cite{Ginsparg_cft,Yellowbook,Mussardo} for reviews.
\end{itemize}
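To make the eigenfunction construction (\ref{eq:gff}) concrete, here is a minimal numerical sketch on the square $[0,\pi]^2$, where the Dirichlet eigenfunctions are $u_{jk}(x,y)=\frac{2}{\pi}\sin(jx)\sin(ky)$ with eigenvalues $\lambda_{jk}=j^2+k^2$. The expansion is truncated at $j,k\leq k_{\max}$ (the singular field only emerges as $k_{\max}\to\infty$, in the distributional sense discussed above); function names are ours.

```python
import numpy as np

def sample_free_field(points, kmax=10, rng=None):
    """One sample of the Dirichlet free field on [0,pi]^2, truncating the
    eigenfunction expansion at modes j, k <= kmax.
    `points` is a list of (x, y) where the sample is evaluated."""
    rng = np.random.default_rng() if rng is None else rng
    j = np.arange(1, kmax + 1)
    lam = j[:, None]**2 + j[None, :]**2                 # Laplacian eigenvalues
    xi = rng.normal(size=(kmax, kmax)) / np.sqrt(lam)   # var(xi_k) = 1/lambda_k
    out = []
    for x, y in points:
        # normalized eigenfunctions u_jk evaluated at (x, y)
        u = (2.0 / np.pi) * np.outer(np.sin(j * x), np.sin(j * y))
        out.append(np.sum(xi * u))
    return np.array(out)

def greens_function(p, q, kmax=10):
    """Truncated mode sum for the covariance E[phi(p) phi(q)]."""
    j = np.arange(1, kmax + 1)
    lam = j[:, None]**2 + j[None, :]**2
    up = (2.0 / np.pi) * np.outer(np.sin(j * p[0]), np.sin(j * p[1]))
    uq = (2.0 / np.pi) * np.outer(np.sin(j * q[0]), np.sin(j * q[1]))
    return np.sum(up * uq / lam)
```

Averaging $\varphi(p)\varphi(q)$ over many samples reproduces the (truncated) Green's function, as it should. Evaluating one sample on a full grid also gives a quick answer to exercise~\ref{ex:ffsample}.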
\paragraph{Inhomogeneous case.} The inhomogeneous case is slightly more complicated, due to the fact that the classical solution solves a more complicated PDE (\ref{eq:eulerlagrange_surface}). The free energy is still expected to be a strictly convex function, at least over a wide range of slopes, which means the determinant of its Hessian matrix is strictly positive. This still allows us to define the Luttinger parameter, as
\begin{equation}
\frac{1}{\pi K(r,s)}=\sqrt{\det H[r,s]},
\end{equation}
where $H$ is the Hessian matrix
\begin{equation}
H[r,s]=\left(\begin{array}{cc}F^{(20)}(r,s)&F^{(11)}(r,s)\\F^{(11)}(r,s)&F^{(02)}(r,s)\end{array}\right).
\end{equation}
The inverse of the Luttinger parameter tells us how convex the free energy is, so it still measures a stiffness. The smaller $K$ is, the more energy slope fluctuations on top of the classical solution will cost.
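As a quick consistency check of this definition, one can estimate the Hessian by central finite differences and verify that the quadratic free energy (\ref{eq:surface_approx}) gives back the homogeneous $K$. A minimal sketch (function name ours):

```python
import numpy as np

def luttinger_K(F, r, s, eps=1e-4):
    """Estimate K(r,s) = 1/(pi * sqrt(det H)) from a free-energy function
    F(r, s), using central finite differences for the Hessian H."""
    f20 = (F(r + eps, s) - 2 * F(r, s) + F(r - eps, s)) / eps**2
    f02 = (F(r, s + eps) - 2 * F(r, s) + F(r, s - eps)) / eps**2
    f11 = (F(r + eps, s + eps) - F(r + eps, s - eps)
           - F(r - eps, s + eps) + F(r - eps, s - eps)) / (4 * eps**2)
    det_H = f20 * f02 - f11**2
    return 1.0 / (np.pi * np.sqrt(det_H))
```

For $F(r,s)=(r^2+s^2)/(2\pi K_0)$ the Hessian is $\mathrm{diag}(1/\pi K_0,\,1/\pi K_0)$, so $\sqrt{\det H}=1/(\pi K_0)$ and the routine returns $K_0$ at any slope, as expected.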
By repeating the same arguments as before, we get
\begin{equation}
S[h]=S_0[h_{\rm cl}]+S_{\rm fluc}[\delta h]
\end{equation}
with
\begin{equation}\label{eq:tominimizeS}
S_0[h_{\rm cl}]=\int_{D} dx dy F(\partial_x h_{\rm cl},\partial_y h_{\rm cl})
\end{equation}
and
\begin{equation}\label{eq:inhaction}
S_{\rm fluc}[\phi]=\frac{1}{8}\int dx dy (\partial_x \phi\;\; \partial_y \phi) H[\partial_x h_{\rm cl},\partial_y h_{\rm cl}] (\partial_x \phi\;\; \partial_y \phi)^T.
\end{equation}
This can be written in covariant form as
\begin{equation}\label{eq:cov}
S_{\rm fluc}[\phi]= \frac{1}{8\pi}\int \frac{\sqrt{\det g}}{K} g^{ab}(\partial_a \phi)(\partial_b \phi)
\end{equation}
where $g=H^{-1}$ is an emergent metric. It is a priori non-flat, since $H=H[r,s]$ depends on $x,y$ through $r=\partial_x h_{\rm cl}(x,y)$, $s=\partial_y h_{\rm cl}(x,y)$.
What we just wrote is an action for the fluctuation $\delta h$ in a curved metric, which itself is determined from the limit shape $h_{\rm cl}(x,y)$. A sample of the discrete height field is shown in figure \ref{fig:discreteheightfield_aztec}, and illustrates what we just said.
\begin{myfigure}[label=fig:discreteheightfield_aztec]{Discrete height field on the Aztec diamond}
\includegraphics[height=4.7cm]{./Plots/discreteheightssquaredimers_Aztec_72} \hfill
\includegraphics[height=4.7cm]{./Plots/discreteheightssquaredimers_Aztectop_72}
\tcbline
Discrete height configuration on an Aztec diamond of order $L=72$. Left: front view. Right: top view. The average height is of order $L$, while fluctuations are of order $\approx \log L$, as before.
\end{myfigure}
\begin{itemize}
\item In the covariant form of the action (\ref{eq:cov}) $K=K(\partial_x h_{\rm cl}(x,y),\partial_y h_{\rm cl}(x,y))$, so the Luttinger parameter depends on position in general. This has important conceptual consequences, in particular it means that conformal invariance is broken in the general inhomogeneous case. For dimers we will see in fact that $K=\textrm{cst}=1$, and explain this as a general property of models that map to free fermions.
\item The argument leading to (\ref{eq:inhaction}) is appealing but oversimplified, for the following reason. As in the homogeneous case, we have used the Euler-Lagrange equations to get rid of the linear term in the expansion. However, it is important to keep in mind how all those terms scale with $L$. The quadratic term is subleading (by $1/L$) compared to the linear one, and the Euler-Lagrange equations only imply that the leading part of the linear term vanishes; they tell us nothing about the lower order part, which could potentially break the quadratic nature of the action. As explained in Ref.~\cite{Granet_2019}, a more precise argument considers the dependence of the free energy on higher derivatives $\partial^2 h$ of the height field. Then, redoing the previous steps leads to a quadratic action identical to (\ref{eq:inhaction}), albeit with an extra contribution to the average of the height field ($1/L$ subleading compared to the limit shape). This is not shocking if one compares to the homogeneous case, where the limit shape is zero to leading order, but the classical solution $h_{\rm cl}$ is $O(1)$ and still matters.
\item This is as far as we can go without knowing the specific form of the free energy. We compute it in the next section in the simplest example, that is dimers on the honeycomb lattice.
\item Of course, obtaining exact expressions for correlations on the lattice is also very much desirable, to check our hydrodynamic assumptions and their limitations. This approach is discussed in section \ref{sec:exactcalc}.
\end{itemize}
\pagebreak
\begin{exercise}[label=ex:ffsample]{Sample of the free field}
Using your favorite programming language, draw an (approximate) sample of the free field on a rectangle $[0,\pi]^2$ with Dirichlet boundary conditions.
\end{exercise}
\paragraph{}\mbox{}
\begin{exercise}[label=ex:evenodd,breakable]{The even-odd effect \qquad [Inspired by \cite{Ferdinand,kenyon2000,Stephan_RVB,Stephan_phase,Bondesan_rectangle2}]}
We consider interacting dimers on the $L_x\times L_y$ square lattice, in the rectangular
(figure~\ref{fig:heightmapping}) and cylinder geometries. $L_x$ is assumed to be even, while $L_y$ can be either even (``even case'') or odd (``odd case''). We wish to compute $\mathcal{R}(\alpha)=\frac{Z(L_x,L_y)^2}{Z(L_x,L_y+1)Z(L_x,L_y-1)}$ for even $L_y$ in the limit $L_x,L_y\to \infty$ with fixed aspect ratio $\alpha=L_y/L_x$. To do that we assume that the only relevant contribution to the ratio of partition functions is that of the classical part $e^{-S_0[h_{\rm cl}]}$, with the action (\ref{eq:tominimizeS}), meaning the contributions from fluctuations are the same in the even and odd cases. \\
1.\,Explain why $\mathcal{R}(\alpha)=O(1)$ from physical arguments. Surprisingly, it turns out that $\mathcal{R}(\alpha)\neq 1$, see below.
\tcbline
We deal with a cylinder of circumference $L_x$ and height $L_y$ first.\\
2. What are the possible height differences between top and bottom contributing to $Z(L_x,L_y)$ for even $L_y$? Same question for odd $L_y$. \\
3. For each case, find the set of harmonic functions on the cylinder that implement these height differences. [Hint: those are really simple functions.] Then show that
$$\mathcal{R}_{\rm cyl}(\alpha)=\frac{\sum_{n\in \mathbb{Z}} e^{-\pi n^2/K\alpha}}{\sum_{n\in \mathbb{Z}}e^{-\pi (n+1/2)^2/K\alpha}}.$$
\tcbline
We now treat the (more technical; use Maple/Mathematica) case of a rectangle.\\
4. Compute the average heights on all sides in the even and odd cases.\\
5. Show that the real and imaginary parts of a holomorphic function are harmonic. \\
6. Show that the Dirichlet energy $\int_D dx dy (\nabla h_{\rm e})^2-(\nabla h_{\rm o})^2$ is invariant under conformal maps.\\
The conformal map from the upper-half plane $\mathbb{H}=\{z,\, \textrm{Im}\, z>0\}$ to $D$ is given by the \href{https://en.wikipedia.org/wiki/Schwarz\%E2\%80\%93Christoffel_mapping}{Schwarz-Christoffel} map $w(z)=\int^z \frac{dt}{\sqrt{1-t^2}\sqrt{1-k^2t^2}}$, $0<k<1$, where $k=k(\alpha)$ depends on $\alpha$. The points $z=-1/k,-1,1,1/k$ are mapped to the top left, bottom left, bottom right, top right vertices respectively. \\
7. Find the harmonic functions $\phi_{\rm e,o}(u,v)$ in the upper-half plane that implement the boundary conditions along the real axis. [Hint: with $z=u+\mathrm{i}\mkern1mu v$, $\arg z=\textrm{Im} \log (z)$]\\
8. Show that $S[h_{\rm e}]-S[h_{\rm o}]=\frac{1}{4K}\log \frac{1-k}{1+k}$, which means $\mathcal{R}_{\rm rect}(\alpha)=\left(\frac{1+k(\alpha)}{1-k(\alpha)}\right)^{1/(4K)}$. Working out $k(\alpha)$ from the map $w(z)$ leads to
$$\mathcal{R}_{\rm rect}(\alpha)=\left(\frac{\sum_{n\in \mathbb{Z}} e^{-\pi \alpha n^2}}{\sum_{n\in \mathbb{Z}} (-1)^ne^{-\pi \alpha n^2}}\right)^{1/(2K)}.$$
For free dimers ($K=1$) the result can also be extracted from the lattice\cite{Ferdinand}.\vspace{0.2cm}\\
\includegraphics[height=4.5cm]{./Plots/harmonicrectangle_even.jpg}\hfill
\includegraphics[height=4.5cm]{./Plots/harmonicrectangle_odd.jpg}
\end{exercise}
\pagebreak
\section{Transfer matrix for dimers}
\label{sec:tranfermatrix}
We show here how the dimer model is equivalent to a system of free fermions, through the transfer matrix formalism. Before explaining the mapping a few remarks are in order. First, the method we present is not the only one. The most widely used for dimers relies on the work of Kasteleyn \cite{Kasteleyn}, who was the first to solve the model, expressing e.g. the partition function for dimers on planar graphs as Pfaffians (see also \cite{TemperleyFisher}). In particular, many results on the mathematical side rely almost exclusively on Kasteleyn theory. The two methods are not that different, see e.g. Ref.~\cite{FendleyMoessnerSondhi}, and lead to essentially the same results. Our choice has the advantage of connecting to techniques better known to physicists, in particular free fermions, similar to the (simplified version \cite{Kaufman,SchultzMattisLieb} of) the Onsager transfer matrix \cite{Onsager_1944} of the Ising model. This choice is also motivated by possible generalizations to interacting integrable systems such as the six-vertex model, where the transfer matrix method cannot really be avoided. The first mapping of dimers onto free fermions is due to Lieb \cite{Lieb_dimers}. The version we present here is slightly different, and leads in a transparent way to a Hermitian transfer matrix with a conserved number of particles, in the spirit of Refs.~\cite{alet2006classical,Stephan_Shannon,Allegra_2016}. For simplicity, we mainly focus on the dimer model on the hexagonal lattice. The square lattice is similar, and worked out in exercise \ref{ex:tmsquare}.
\subsection{Reminder on the transfer matrix formalism}
Throughout this section we generalize the honeycomb dimer model slightly by putting alternating weights $\ldots,1,u,1,u,\ldots$ on horizontal dimers along a given horizontal line (see figure \ref{fig:hexadimersmapping}, where the lattice is drawn as a brick wall). The first important observation is that a given dimer configuration on the lattice is uniquely determined by the occupancies of the vertical edges, so we may completely ignore the occupancies of the horizontal ones.
\begin{myfigure}[label=fig:hexadimersmapping]{Transfer matrix for dimers on honeycomb}
\centering\begin{tikzpicture}[scale=0.9]
\foreach \y in {0,1,2,3}{
\draw[thick] (0,\y) -- (11,\y);
}
\foreach \x in {0,2,4,6,8,10}{
\draw[thick] (\x+1,-1) -- (\x+1,0);
\draw[red!50!black,line width=3pt,opacity=0.6] (\x+1,-1) -- (\x+1,0);
\foreach \y in {0,2}{
\draw (\x,\y) -- (\x,\y+1);
\draw[red!50!black,line width=3pt,opacity=0.6] (\x,\y) -- (\x,\y+1);
\draw (\x+1,\y+1) -- (\x+1,\y+2);
\draw[red!50!black,line width=3pt,opacity=0.6] (\x+1,\y+1) -- (\x+1,\y+2);
}
}
\draw[line width=5pt,dblue] (1,-1) -- (1,0);
\draw[line width=5pt,dblue] (7,-1) -- (7,0);
\draw[line width=5pt,dblue] (9,-1) -- (9,0);
\draw[line width=5pt,dblue] (8,0) -- (8,1);
\draw[line width=5pt,dblue] (11,1) -- (11,2);
\draw[line width=5pt,dblue] (10,2) -- (10,3);
\draw[line width=5pt,dblue] (11,3) -- (11,4);
\draw[line width=5pt,dblue] (0,0) -- (0,1);
\draw[line width=5pt,dblue] (4,0) -- (4,1);
\draw[line width=5pt,dblue] (7,1) -- (7,2);
\draw[line width=5pt,dblue] (0,2) -- (0,3);
\draw[line width=5pt,dblue] (1,1) -- (1,2);
\draw[line width=5pt,dblue] (6,2) -- (6,3);
\draw [line width=5pt,dblue] (2,0) -- (3,0);
\draw [line width=5pt,dblue] (2,1) -- (3,1);
\draw [line width=5pt,dblue] (2,2) -- (3,2);
\draw [line width=5pt,dblue] (1,3) -- (2,3);
\draw [line width=5pt,dblue] (5,0) -- (6,0);
\draw [line width=5pt,dblue] (10,0) -- (11,0);
\draw [line width=5pt,dblue] (9,1) -- (10,1);
\draw [line width=5pt,dblue] (8,2) -- (9,2);
\draw [line width=5pt,dblue] (8,3) -- (9,3);
\draw [line width=5pt,dblue] (5,1) -- (6,1);
\draw [line width=5pt,dblue] (4,2) -- (5,2);
\draw [line width=5pt,dblue] (4,3) -- (5,3);
\draw [line width=5pt,dblue] (3,3) -- (3,4);
\draw [line width=5pt,dblue] (7,3) -- (7,4);
\draw (13,-0.5) node {$\ket{011001}$};
\draw (13,3.5) node {$\bra{101010}$};
\draw (-1.5,0) node {$T$};
\draw (-1.45,1) node {$T'$};
\draw (-1.5,2) node {$T$};
\draw (-1.45,3) node {$T'$};
\foreach \x in {0,2,4,6,8}{
\foreach \y in {0,1,2,3}{
\draw (1.5+\x,\y-0.28) node {$u$};
}
}
\begin{scope}[xshift=-1.5cm]
\draw [very thick,domain=55:-45,xshift=1cm,<-,yshift=0.45cm,scale=0.85] plot ({-0.6*cos(\x)}, {0.72*sin(\x)-0.6});
\draw [very thick,domain=55:-45,xshift=1cm,<-,yshift=1.45cm,scale=0.85] plot ({-0.6*cos(\x)}, {0.72*sin(\x)-0.6});
\draw [very thick,domain=55:-45,xshift=1cm,<-,yshift=2.45cm,scale=0.85] plot ({-0.6*cos(\x)}, {0.72*sin(\x)-0.6});
\draw [very thick,domain=55:-45,xshift=1cm,<-,yshift=3.45cm,scale=0.85] plot ({-0.6*cos(\x)}, {0.72*sin(\x)-0.6});
\end{scope}
\end{tikzpicture}
\tcblower
$(L=6)\times (M=4)$ hexagonal lattice, with an example of dimer covering (dimers are thick blue lines). The occupancies of the top and bottom vertical edges are imposed. The vertical edges not occupied by dimers are drawn in thinner dark red; these become particles ($1$) in the following, while actual dimers are holes ($0$).
\end{myfigure}
Now put a $0$ on vertical edges occupied by a dimer (shown in thick blue in the figure), and a $1$ otherwise (thinner dark red). We see the ones as a collection of particles propagating upwards, from bottom to top. We then associate a basis vector to the vertical dimer occupancies along a given horizontal line. For example, for $L=6$ in the picture, to the bottom line configuration $011001$ we associate the vector $\ket{011001}$. The scalar product is $\braket{\mathcal{C}|\mathcal{C}'}=1$ if the two line configurations $\mathcal{C}$ and $\mathcal{C}'$ are the same, and zero otherwise. There are $2^L$ possible line configurations.
Imagine now that it is possible to find a $2^L\times 2^L$ matrix $T$, called the \emph{transfer matrix}, such that $\braket{\mathcal{C}|T|\mathcal{C}'}=u^{n}$ if the configurations of two successive horizontal lines are compatible when stitched together --meaning they form a valid dimer covering-- and zero otherwise. Here $n$ is the number of horizontal dimers with a $u$ weight generated when stitching the two horizontal lines. Then the partition function on the $L\times M$ lattice is simply $Z=\braket{\textrm{top}|T^{M}|\textrm{bottom}}$, where $\bra{\rm top}$ and $\ket{\rm bottom}$ are the bra and ket corresponding to the top and bottom configurations respectively.
\emph{Proof.} $\braket{\textrm{top}|T^{M}|\textrm{bottom}}=\sum_{\mathcal{C}_1,\ldots,\mathcal{C}_{M-1}}\braket{\textrm{top}|T|\mathcal{C}_{M-1}}\braket{\mathcal{C}_{M-1}|T|\mathcal{C}_{M-2}}\ldots \braket{\mathcal{C}_{1}|T|\textrm{bottom}}$. Each term in the sum is the product of the row weights (a power of $u$, equal to one when $u=1$) for valid configurations, and zero otherwise, so this sums the weights of all dimer coverings compatible with the top and bottom boundary conditions; for $u=1$ it just counts them. It is also possible to require the bottom and top boundaries to coincide (periodic boundary conditions), in which case $Z={\rm Tr}\, T^{M}$.
Here we actually need two transfer matrices $T$ and $T'$, since the rule changes depending on the parity of the row considered. Let us again consider the example in the figure. With a natural labeling of the edges $1,\ldots,L=6$ from left to right, the bottom configuration is $\ket{\rm bottom}=\ket{011001}$, and $T\ket{011001}=\ket{010101}+\ket{011001}+\ket{001101}$, but we have $T'\ket{011001}=\ket{011001}+\ket{101001}+\ket{110001}+\ket{011010}+\ket{101010}+\ket{110010}\neq T\ket{011001}$.
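The action of $T$ and $T'$ just described amounts to a simple hopping rule: each particle either stays put or hops one site (to the right for $T$, to the left for $T'$), with weight $u$ per hop, and moves where two particles would land on the same site are discarded. The sketch below is our own reading of that rule, inferred from the quoted example (open chain, no wrapping); for $u=1$ it reproduces the three and six terms above.

```python
from itertools import product

def apply_T(state, u=1, direction=+1):
    """Act with the transfer matrix on a basis state (tuple of 0/1).
    Each particle stays or hops one site in `direction` (+1 for T, -1 for T'),
    with weight u per hop; moves where two particles would land on the same
    site, or leave the chain, are discarded.
    Returns a dict {new_state: weight}."""
    L = len(state)
    particles = [i for i, n in enumerate(state) if n == 1]
    out = {}
    for moves in product((0, direction), repeat=len(particles)):
        new_pos = [p + m for p, m in zip(particles, moves)]
        if len(set(new_pos)) != len(new_pos):
            continue  # hardcore constraint: two particles on one site
        if any(p < 0 or p >= L for p in new_pos):
            continue  # open chain: no hopping past the ends
        new_state = tuple(1 if i in set(new_pos) else 0 for i in range(L))
        w = u ** sum(m != 0 for m in moves)  # weight u per hop
        out[new_state] = out.get(new_state, 0) + w
    return out
```

Restricted to one particle, the same rule gives back $A_{ij}=\delta_{i,j}+u\delta_{i,j+1}$ (stay, or hop with weight $u$), consistent with the one-particle sector discussed next.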
\subsection{Dimers as free fermions}
It is important to realize that both transfer matrices (TMs) preserve the number of particles, so they are block diagonal, with each block indexed by the number of particles (or ones).
\paragraph{Transfer matrix in the zero particle sector.} One can easily check that $T,T'$ act as identity in this sector, namely $T'\ket{00\ldots0}=\ket{00\ldots 0}=T\ket{00\ldots0}$.
\paragraph{Transfer matrix in the one particle sector.} This one is not much more difficult. The corresponding matrix elements are $A_{ij}=\delta_{i,j}+u\delta_{i,j+1}$, $A'_{ij}=\delta_{ij}+u\delta_{i+1,j}$, where
\begin{equation}
A_{ij}=\braket{0\ldots 0\underbrace{1}_{i}0\ldots 0|T|0\ldots 0\underbrace{1}_{j}0\ldots 0}.
\end{equation}
Here $i,j$ are the indices corresponding to the position of the (unique) one. $A'$ is similar.
\paragraph{Full Transfer matrix.}
For more particles the rules become more cumbersome, due to the dimer hardcore constraint, which prevents two $1$s from occupying the same site. This is where the power of the free fermion formalism comes in. The reader not well acquainted with these techniques is invited to have a look at the self-contained introduction in appendix \ref{app:freefermions_allyouneedtoknow}, which has everything needed to understand the present lectures. If you already know (\ref{eq:ff_fancy}), however, chances are you don't need to read it.
First, let us represent vectors $\ket{\mathcal{C}}$ using the fermion formalism.
We use fermion operators to represent all states. For example for $L=4$, we have $\ket{1101}=c_1^\dag c_2^\dag c_4^\dag \ket{\bf 0}$, where $\ket{\bf 0}=\ket{0000}$ is the fermion vacuum. For any dimer configuration $\mathcal{C}$, the associated ket reads $\ket{\mathcal{C}}=c_{i_1}^\dag \ldots c_{i_n}^\dag \ket{\bf 0}$, where $i_1<\ldots<i_n$ are the positions of the $n$ particles (the $n$ ones) in the configuration $\mathcal{C}$. Note that since fermions anticommute, applying the operators in a different order might generate undesirable minus signs, e.g. $c_1^\dag c_2^\dag \ket{\bf 0}=-c_2^\dag c_1^\dag \ket{\bf 0}$.
The previous results read $T\ket{\bf 0}=\ket{\bf 0}$ and $T c_i^\dag \ket{\bf 0}=\sum_{j} A_{ji} c_j^\dag \ket{\bf 0}$. Now the main claim is (see e.g. \cite{Spohn_lectures})
\begin{empheq}[box=\othermathbox]{equation}\label{eq:tmdimers}
T=\exp\left(\sum_{i,j=1}^L(\log A)_{ij}c_i^\dag c_j\right)
\end{empheq}
provided the logarithm of $A$ makes sense. [For $T'$ just replace $A$ by $A'$ in (\ref{eq:tmdimers}).] Hence both TMs are the exponential of a Hamiltonian that is quadratic in the fermion operators, a \emph{free fermions Hamiltonian}.
\emph{Sketch of the proof.} It goes in two steps. First, let us show that a sufficient condition for a good TM is that
\begin{align}\label{eq:tvac}
T\ket{\bf 0} &=\ket{\bf 0},\\\label{eq:tcom}
T c_i^\dag&=\left(\sum_{j=1}^L A_{ji} c_j^\dag\right) T.
\end{align}
This obviously works for $n=0,1$ particles.
One then needs to determine the action on states $c_{i_1}^\dag \ldots c_{i_n}^\dag \ket{\bf 0}$, $i_1<\ldots<i_n$, in the $n$-particle sector, for $n=2,\ldots,L$. It is obtained by commuting $T$ successively with all fermion operators using (\ref{eq:tcom}), and then using (\ref{eq:tvac}):
\begin{equation}\label{eq:tmsum}
T c_{i_1}^\dag \ldots c_{i_n}^\dag \ket{\bf 0}=\sum_{j_1,\ldots,j_n} A_{j_1 i_1}\ldots A_{j_n i_n}c_{j_1}^\dag \ldots c_{j_n}^\dag \ket{\bf 0}
\end{equation}
The sum generates all possibilities for the particles to go upward-left or upward-right. The jumps are not independent, however: if two particles land on the same site $i$, the corresponding contribution is proportional to $c_i^\dag c_i^\dag=0$, which means the fermion operators effectively enforce the dimer hardcore constraint. [This observation also justifies the terminology free fermions: the particles interact only through the Pauli exclusion principle, which prevents two fermions from being in the same state (occupying the same site). In that sense free fermions are as free as fermions can ever be.]
Importantly, in any nonzero contribution to the sum in (\ref{eq:tmsum}) the creation operators remain ordered, since only nearest neighbor jumps are allowed in $A$. Hence all valid configurations are counted with the correct ($+$) sign.
The second step is to realize that (\ref{eq:tmdimers}) satisfies (\ref{eq:tvac}), (\ref{eq:tcom}). This is essentially formula (\ref{eq:gentimeevolution}) in appendix \ref{app:freefermions_allyouneedtoknow}, go there for the proof.
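The claim (\ref{eq:tmdimers}) can also be checked numerically for small $L$. The following sketch (assuming \texttt{numpy} and \texttt{scipy} are available; the choice $u=1$ is made so that all weights in the $T\ket{011001}$ example above are unity) builds $T$ from a Jordan-Wigner representation of the $c_j$ and reproduces that example:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm, logm

L, u = 6, 1.0  # u = 1: all hopping weights in the example are unity

# Jordan-Wigner representation of c_j on the 2^L-dimensional space;
# site 1 is the leftmost bit of |n_1 ... n_L>
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])                 # string operator (-1)^n
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-site annihilation

def c(j):  # j = 1, ..., L
    return reduce(np.kron, [Z] * (j - 1) + [a] + [I2] * (L - j))

# one-particle transfer matrix A_{ij} = delta_{ij} + u delta_{i,j+1}
A = np.eye(L) + u * np.diag(np.ones(L - 1), -1)

# T = exp( sum_{ij} (log A)_{ij} c_i^dag c_j ); log A exists since A = 1 + nilpotent
X = logm(A).real
M = sum(X[i, j] * c(i + 1).T @ c(j + 1) for i in range(L) for j in range(L))
T = expm(M)

def ket(bits):  # basis vector |n_1 ... n_L>
    v = np.zeros(2 ** L)
    v[int(bits, 2)] = 1.0
    return v

# the example from the text: T|011001> = |010101> + |011001> + |001101>
print(np.allclose(T @ ket("011001"), ket("010101") + ket("011001") + ket("001101")))
```

The same construction with general $u$ attaches a weight $u$ per rightward jump, as dictated by $A$.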
\paragraph{Various subtleties.}
\begin{enumerate}[label=\emph{\alph*)}]
\item\label{item:subt_1} The reader might be worried that the matrices $A$, $A'$ contain Jordan blocks, so are not diagonalizable. While the derivation does not require $A,A'$ to be diagonalizable, this is still a nuisance. An easy fix is to consider $T'T$ and call it the transfer matrix. Using the identity (\ref{eq:productformula}) we have
\begin{equation}
T'T=\exp\left(\sum_{i,j}(\log B)_{ij}c_i^\dag c_j\right)\qquad,\qquad B=A'A.
\end{equation}
$B$ (and $\log B$) are now symmetric, so the Hamiltonian inside the exponential is a legitimate quantum mechanical Hermitian operator. In particular, one can use standard band theory techniques to examine its long range properties. All of them are solely determined from $T'T$, which can be considered to be the true transfer matrix. In the following, we call $T'T$ the \emph{transfer matrix for the dimer model}.
\item\label{item:subt_2} Care must be taken when implementing periodic boundary conditions. Indeed, $Tc_L^\dag\ket{\bf 0}=(c_L^\dag+c_1^\dag) \ket{\bf 0}$, but it is incorrect to write $T c_2^\dag c_L^\dag \ket{\bf 0}=(c_2^\dag+c_3^\dag)(c_L^\dag+c_1^\dag)\ket{\bf 0}$, since this equals $c_2^\dag c_L^\dag \ket{\bf 0}-c_1^\dag c_2^\dag\ket{\bf 0}+c_3^\dag c_L^\dag \ket{\bf 0} -c_1^\dag c_3^\dag\ket{\bf 0}$. Remember that the correspondence with the stat mech model imposes that fermion operators be applied in order; PBC spoil that natural order when the number of particles is even. Hence the transfer matrix has to satisfy $Tc_{L}^\dag=\left(c_L^\dag+(-1)^{\hat{N}-1} c_1^\dag\right)T$, where $\hat{N}=\sum_{j=1}^L c_j^\dag c_j$ is the total fermion number operator. In practice, this means it is necessary to consider the even and odd fermion sectors separately, in order to keep the quadratic nature of the Hamiltonian.
\item\label{item:subt_3} Diagonalization. As explained in appendix \ref{sec:howtodiag}, diagonalizing $T'T$ boils down to diagonalizing the $L\times L$ matrix $B$, which is a huge simplification. We obtain
\begin{equation}\label{eq:tm_diagonalized}
T'T=\exp\left(-2\sum_{k\in \Omega}\varepsilon(k)f_k^\dag f_k\right)
\end{equation}
where the $\varepsilon(k)$ are the eigenvalues of $-\frac{1}{2}\log B$, $\Omega$ is the set of labels for those, and
\begin{equation}
f_k^\dag =\sum_{j=1}^L v_{jk}c_j^\dag,
\end{equation}
where $v_{jk}$ are the normalized eigenvectors of $B$, $\sum_{j} v_{jk}v_{jq}=\delta_{kq}$. This implies
\begin{equation}
\{f_k,f_{k'}^\dag \}=\delta_{kk'}\qquad,\qquad \{f_k,f_{k'}\}=\{f_k^\dag, f_{k'}^\dag\}=0.
\end{equation}
For dimers on the honeycomb lattice with open boundary conditions (OBC) as described in this section, it is possible to obtain the $\varepsilon(k)$ and $f_k^\dag$ explicitly. One possibility is to make the Ansatz $v_{jk}\propto \sin(kj+\gamma)$, and check that these provide unnormalized eigenvectors of $B$ provided $\gamma=0$ and $\sin [k(L+1)]+u\sin [kL]=0$. $\Omega$ is then the set of the $L$ solutions to the previous equation in the interval $(0,\pi)$,
and
\begin{equation}\label{eq:hexadispersion}
\varepsilon(k)=-\frac{1}{2}\log\left(1+u^2+2u \cos k\right).
\end{equation}
The minus signs in (\ref{eq:tm_diagonalized}),(\ref{eq:hexadispersion}) are here to mimic usual band theory, where one wants to minimize the energy and look for ground states; the factor $2$ reminds us that our transfer matrix $T'T$ advances by two rows at a time.
The case of periodic boundary conditions (PBC) turns out to be easier, and the reader can check that the set of allowed momenta is
\begin{equation}\label{eq:pbc_momenta}
\Omega_\alpha=\left\{\frac{(2m+\alpha)\pi}{L}\quad,\quad m=-L/2,\ldots,L/2-1\right\},
\end{equation}
where $\alpha=1$ for even $\hat{N}$ and $\alpha=0$ for odd $\hat{N}$. This will be used in section \ref{sec:torus}.
\end{enumerate}
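The diagonalization of \ref{item:subt_3} is easy to verify numerically: every eigenvalue of $B=A'A$ must be of the form $1+u^2+2u\cos k$ with $k$ solving the quantization condition $\sin[k(L+1)]+u\sin[kL]=0$. A minimal sketch (assuming \texttt{numpy}; the values $L=8$, $u=0.7$ are illustrative):

```python
import numpy as np

L, u = 8, 0.7
A = np.eye(L) + u * np.diag(np.ones(L - 1), -1)   # A_{ij} = delta_{ij} + u delta_{i,j+1}
B = A.T @ A                                       # B = A'A, symmetric

lam = np.linalg.eigvalsh(B)
# invert lambda = 1 + u^2 + 2u cos(k) for each eigenvalue of B
k = np.arccos((lam - 1 - u ** 2) / (2 * u))
# every k must satisfy the OBC quantization condition
residual = np.sin(k * (L + 1)) + u * np.sin(k * L)
print(np.max(np.abs(residual)))   # close to machine precision
```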
\subsection{Hamiltonian limit and quantum spin chains}
We have just seen that dimers on the honeycomb lattice can be understood as a free fermion system, with dispersion (\ref{eq:hexadispersion}). A similar result holds for the square lattice. For $0<u<1$, the dispersion relation is analytic in $[-\pi,\pi]$ and thus leads to a \emph{local} Hamiltonian, that is, one whose hoppings decay exponentially fast with distance. Let us write $\mathcal{T}(u)=T'T$ for the corresponding transfer matrix. It is easy to see that
\begin{equation}\label{eq:hamlimit}
\lim_{u\to 0} \mathcal{T}(u)^{1/u}=\exp\left(-H\right)
\end{equation}
where $H$ is a free fermion Hamiltonian with dispersion $\varepsilon(k)=-\cos k$, that is, a tight binding model with only nearest neighbour hoppings. As is well known, this model also corresponds to the spin-1/2 XX chain \cite{LiebSchultzMattis_fermions}. Eq.~(\ref{eq:hamlimit}) is called a Hamiltonian (or Trotter) limit, so the XX chain is a Hamiltonian limit of dimers on the honeycomb lattice (this works for the square lattice too, see figure \ref{fig:hamlimit}).
A way to think of this limit is to consider a finite $L\times M$ lattice, and notice that $u$ essentially controls the hopping rates to the left and right. Then multiply the vertical size by an integer $p$ and divide $u$ by $p$. We have an $L\times (pM)$ lattice with hopping rate $u/p$. The limit $p\to \infty$ is nontrivial because while the hopping rate goes to zero, dimers are given more opportunities to hop since the number of applications of the transfer matrix increases. It is also convenient to make the lattice spacing in the vertical direction proportional to $1/p$, so that the distance between top and bottom stays the same in that limit.
The Hamiltonian limit is in a sense intermediate between discrete and continuous. The horizontal direction stays discrete, but the vertical one becomes continuous. Hamiltonian limits are key to the application of integrability techniques to quantum spin chains \cite{korepin_bogoliubov_izergin_1993,gaudin_2014}. To finish this section let us mention the two most important examples of Hamiltonian limits: the XXZ spin chain is the Hamiltonian limit of the six-vertex model (interacting dimers), while the Ising chain in transverse field is the Hamiltonian limit of the 2d classical Ising model.
\begin{myfigure}[label=fig:hamlimit,float=!ht]{Hamiltonian limit of square dimers}
\hspace{0.57cm}
\includegraphics[height=4cm]{./Pictures/fig_96_1_1_13.png}\hfill
\includegraphics[height=4cm]{./Pictures/fig_192_2_1_10.png}
\tcbline
\includegraphics[height=4cm]{./Pictures/fig_384_4_1_11.png}\hfill
\includegraphics[height=4cm]{./Pictures/fig_768_8_1_10.png}
\tcbline
Illustration of the Hamiltonian limit $u\to 0$ for dimers on the square lattice. In that case $u$ corresponds to the weight of all horizontal dimers. Top left: $u=1$. Top right: $u=1/2$. Bottom left: $u=1/4$. Bottom right: $u=1/8$. In this limit the vertical direction becomes continuous, but the arctic curve $x^2+y^2=R^2/(1+u^2)$ stays a circle.
\end{myfigure}
\subsection{Connection to the height mapping}
\label{sec:heightconnection}
The height mapping for dimers on the honeycomb lattice is very similar to that of the square. Turning counterclockwise around black (resp. white) vertices, the height picks up $+2$ (resp. $-2$) when crossing a dimer, and $-1$ (resp. $+1$) otherwise. As before, we use a color code to make the mapping more readable: vertical dimers always belong to the same sublattice, so we color them all in blue. Horizontal dimers are shown either in red or green. The rules are illustrated in figure~\ref{fig:hexadimers_heights}. For future convenience, we also multiply all heights by $\pi/3$ at the end, so that they are integer multiples of $\pi/3$.
The relation to the free fermion mapping is as follows. First, notice that the minimum slope in the horizontal direction corresponds to all vertical edges being occupied by dimers, so $\partial_x h=-\frac{2\pi}{3}$. The corresponding fermion density is $n_x=0$. The maximal slope is $\partial_x h=\frac{\pi}{3}$ and corresponds to fermion density $n_x=1$. In the vertical direction, the minimum slope is $-\frac{\pi}{2}$ while the maximal is $\frac{\pi}{2}$. Notice again that the two slopes are not independent: $\partial_y h$ takes values in $[-k_F,k_F]$, where $k_F=\partial_x h+2\pi/3$.
$\partial_x h$ is simply related to the fermion density, but how do we control $\partial_y h$? The simplest way to do so is to assign different weights to red and green dimers: from now on we assign a weight $b$ to red dimers, and a weight $b^{-1}$ to green dimers. It is easy to see that the power of $b$ in the expansion of the partition function allows one to determine $\partial_y h$. Then, the corresponding transfer matrix has dispersion $\varepsilon(k+\mathrm{i}\mkern1mu\nu)$, where $b=e^{\nu}$. Due to $\varepsilon(-k)=\varepsilon(k)$ the transfer matrix is still normal and real, but not symmetric anymore.
\begin{myfigure}[label=fig:hexadimers_heights]{Height mapping for dimers on the honeycomb lattice}
\begin{center}\begin{tikzpicture}[scale=0.9]
\foreach \y in {0,1,2,3}{
\draw (0,\y) -- (11,\y);
}
\foreach \x in {0,2,4,6,8,10}{
\foreach \y in {0,2,4}{
\draw (\x+1,-1+\y) -- (\x+1,0+\y);
}
}
\foreach \x in {-1,1,3,5,7,9}{
\foreach \y in {1,3}{
\draw (\x+1,-1+\y) -- (\x+1,0+\y);
}
}
\draw[line width=5pt,dblue] (1,-1) -- (1,0);
\draw[line width=5pt,dblue] (7,-1) -- (7,0);
\draw[line width=5pt,dblue] (9,-1) -- (9,0);
\draw[line width=5pt,dblue] (8,0) -- (8,1);
\draw[line width=5pt,dblue] (11,1) -- (11,2);
\draw[line width=5pt,dblue] (10,2) -- (10,3);
\draw[line width=5pt,dblue] (11,3) -- (11,4);
\draw[line width=5pt,dblue] (0,0) -- (0,1);
\draw[line width=5pt,dblue] (4,0) -- (4,1);
\draw[line width=5pt,dblue] (7,1) -- (7,2);
\draw[line width=5pt,dblue] (0,2) -- (0,3);
\draw[line width=5pt,dblue] (1,1) -- (1,2);
\draw[line width=5pt,dblue] (6,2) -- (6,3);
\draw [line width=5pt,dred] (2,0) -- (3,0);
\draw [line width=5pt,gree] (2,1) -- (3,1);
\draw [line width=5pt,dred] (2,2) -- (3,2);
\draw [line width=5pt,dred] (1,3) -- (2,3);
\draw [line width=5pt,gree] (5,0) -- (6,0);
\draw [line width=5pt,dred] (10,0) -- (11,0);
\draw [line width=5pt,dred] (9,1) -- (10,1);
\draw [line width=5pt,dred] (8,2) -- (9,2);
\draw [line width=5pt,gree] (8,3) -- (9,3);
\draw [line width=5pt,dred] (5,1) -- (6,1);
\draw [line width=5pt,dred] (4,2) -- (5,2);
\draw [line width=5pt,gree] (4,3) -- (5,3);
\draw [line width=5pt,dblue] (3,3) -- (3,4);
\draw [line width=5pt,dblue] (7,3) -- (7,4);
\draw (0,-0.5) node {$0$}; \draw (-1,0.5) node {$1$};\draw (0,1.5) node {$0$};\draw (-1,2.5) node {$1$};\draw (0,3.5) node {$0$};
\draw (2,-0.5) node {$-2$}; \draw (1,0.5) node {$-1$};\draw (2,1.5) node {$-2$};\draw (1,2.5) node {$-1$};\draw (2,3.5) node {$1$};
\draw (4,-0.5) node {$-1$};\draw (3,0.5) node {$0$};\draw (4,1.5) node {$-1$};\draw (3,2.5) node {$0$};\draw (4,3.5) node {$-1$};
\draw (6,-0.5) node {$0$};\draw (5,0.5) node {$-2$};\draw (6,1.5) node {$0$};\draw (5,2.5) node {$1$};\draw (6,3.5) node {$0$};
\draw (8,-0.5) node {$-2$};\draw (7,0.5) node {$-1$};\draw (8,1.5) node {$-2$};\draw (7,2.5) node {$-1$};\draw (8,3.5) node {$-2$};
\draw (10,-0.5) node {$-4$};\draw (9,0.5) node {$-3$};\draw (10,1.5) node {$-1$};\draw (9,2.5) node {$0$};\draw (10,3.5) node {$-1$};
\draw (12,-0.5) node {$-3$};\draw (11,0.5) node {$-2$};\draw (12,1.5) node {$-3$};\draw (11,2.5) node {$-2$};\draw (12,3.5) node {$-3$};
\end{tikzpicture}
\end{center}
\tcbline
Height mapping for dimers on honeycomb, drawn as a brickwall. Heights are shown in units of $\pi/3$, as explained in the text. Recall that particles (fermions) are vertical edges not occupied by a dimer.
\end{myfigure}
The information about heights can also be extracted in a more algebraic way, directly from the transfer matrix. The operator $c_x^\dag c_x$ encodes the fermion density, and the knowledge of the on-site densities for all $x$ trivially allows one to reconstruct all heights along a given horizontal line, as well as the average slope $\partial_x h$. This seems less obvious for $\partial_y h$, but there is nevertheless an operator which allows one to do so. Assuming OBC for simplicity, it is defined as
\begin{equation}\label{eq:icurrent_def}
J_x=\sum_{i,j=1}^L \Gamma_{ij}^{(x)} c_i^\dag c_j\qquad,\qquad \Gamma_{ij}^{(x)}=\sum_{l=x}^{L} \left(\delta_{il}\delta_{lj}-B_{il}(B^{-1})_{lj}\right).
\end{equation}
Using (\ref{eq:gentimeevolution}), (\ref{eq:gentimeevolution2}), one can show that it satisfies the identity
\begin{equation}\label{eq:icurrent_property}
\sum_{x=a}^b\left(T'Tc_x^\dag c_x (T'T)^{-1}-c_x^\dag c_x\right)=J_{b+1}-J_a
\end{equation}
for all $a,b\in \{1,\ldots,L\}$, $b\geq a$. This is similar to a continuity equation: the difference in total fermion number on the segment $[a,b]$ from one horizontal line to the next can be interpreted as a current of particles flowing through the endpoints of the segment. $J_x$ is also local, in the sense that for any analytic dispersion, the $\Gamma_{ij}^{(x)}$ decay exponentially fast with both $|i-x|$ and $|j-x|$. Hence we call $J_x$ the \emph{local current}.
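Both properties of $J_x$ (the continuity identity and locality) can be checked numerically at the level of the first-quantized kernels, since conjugating $c_x^\dag c_x$ by $T'T$ replaces its kernel $e_xe_x^T$ by $Be_xe_x^TB^{-1}$. A sketch for the honeycomb $B=A'A$ (assuming \texttt{numpy}; $L=40$, $u=0.5$, $a=10$, $b=25$ are illustrative choices):

```python
import numpy as np

L, u = 40, 0.5
A = np.eye(L) + u * np.diag(np.ones(L - 1), -1)   # A_{ij} = delta_{ij} + u delta_{i,j+1}
B = A.T @ A
Binv = np.linalg.inv(B)
E = np.eye(L)

def Gamma(x):
    """Kernel of J_x (x is 1-indexed); Gamma^{(L+1)} vanishes."""
    G = np.zeros((L, L))
    for l in range(x - 1, L):
        G += np.outer(E[l], E[l]) - np.outer(B[:, l], Binv[l, :])
    return G

# continuity identity (eq:icurrent_property) in first-quantized form:
# sum_{x=a}^{b} [ B e_x e_x^T B^{-1} - e_x e_x^T ] = Gamma(b+1) - Gamma(a)
a, b = 10, 25
lhs = sum(np.outer(B[:, x - 1], Binv[x - 1, :]) - np.outer(E[x - 1], E[x - 1])
          for x in range(a, b + 1))
ok_identity = np.allclose(lhs, Gamma(b + 1) - Gamma(a))

# locality: entries of Gamma^{(x)} far from x are exponentially small
G = Gamma(L // 2)
far, near = np.abs(G[:, :5]).max(), np.abs(G).max()
print(ok_identity, far < 1e-3 * near)
```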
It is also possible to make global versions of these operators. The particle density $\rho=\frac{1}{L}\sum_x c_x^\dag c_x$ gives access to the average slope $\partial_x h$, and becomes for $L\to \infty$
\begin{equation}
\rho=\int_{-\pi}^{\pi} \frac{dk}{2\pi}c^\dag(k)c(k),
\end{equation}
with conventions explained in appendix~\ref{sec:infinitelattice}.
The current density $J=\frac{1}{L}\sum_{x}J_x$ simplifies for large $L$ to\footnote{For an infinite system we may analogously define the current operator as $J_x=\sum_{i,j\in \mathbb{Z}} \Gamma_{ij}^{(x)} c_i^\dag c_j$, with $\Gamma_{ij}^{(x)}=\sum_{l=x}^{\infty}\left[\delta_{il}\delta_{lj}-B_{il}(B^{-1})_{lj}\right]$. Now $B$ is an infinite matrix, which can easily be inverted. This leads to the integral representation $\Gamma_{ij}^{(x)}=\int_{-\pi}^{\pi}\frac{dq}{2\pi}\int_{-\pi}^{\pi} \frac{dq'}{2\pi}e^{-\mathrm{i}\mkern1mu q i}e^{\mathrm{i}\mkern1mu q'j}e^{\mathrm{i}\mkern1mu x(q-q')}\frac{1-e^{-\varepsilon(q+\mathrm{i}\mkern1mu \nu)+\varepsilon(q'+\mathrm{i}\mkern1mu \nu)}}{1-e^{\mathrm{i}\mkern1mu (q-q')}}$. Then, equation (\ref{eq:totcurrent}) follows from the identity $\sum_{x\in \mathbb{Z}}\Gamma_{ij}^{(x)}=\int_{-\pi}^{\pi} \frac{dq}{2\pi} e^{-\mathrm{i}\mkern1mu q(i-j)}\mathrm{i}\mkern1mu \varepsilon'(q+\mathrm{i}\mkern1mu \nu)$, which is obtained by applying L'Hôpital's rule to the previous equation.}
\begin{equation}\label{eq:totcurrent}
J=\int_{-\pi}^{\pi}\frac{dk}{2\pi}\mathrm{i}\mkern1mu \varepsilon'(k+\mathrm{i}\mkern1mu \nu)c^\dag(k)c(k),
\end{equation}
and gives access to the average $\partial_y h$, as we shall check in the next subsection. Looking at (\ref{eq:totcurrent}), we immediately see that a non-zero $\nu$ is necessary to get a real non-zero current in any eigenstate of $H$.
\subsection[Exact free energy for free fermions]{Torus partition function and exact free energy for free fermions}
\label{sec:torus}
We have now all the ingredients needed to compute the free energy. The generating function for all slopes on the $L\times M$ torus is given by
\begin{equation}
e^{-2 ML\mu /3} Z(\mu,\nu)=\textrm{Tr}\left[ \displaystyle{e^{-M\sum_k \left[\varepsilon(k+\mathrm{i}\mkern1mu\nu)+\mu\right]f_k^\dag f_k}}\right]
\end{equation}
This can be evaluated in closed form, using formula (\ref{eq:traceexp}), and being careful about the point discussed in \ref{item:subt_3}. We find
\begin{equation}
e^{-2 ML\mu/3} Z(\mu,\nu)=\frac{1}{2}\left(Z_0^+-Z_0^-+Z_1^++Z_1^-\right)
\end{equation}
where
\begin{equation}\label{eq:fourterms}
Z_{\alpha}^{\pm}=\prod_{k\in \Omega_\alpha} (1\pm e^{-M\left[\varepsilon(k+i\nu)+\mu\right]})
\end{equation}
and where $\Omega_\alpha$ is given by (\ref{eq:pbc_momenta}).
The reader can check that $\Omega_1$ (resp. $\Omega_0$) allows one to generate all eigenvalues in the even (resp. odd) fermion sector. The leading asymptotic behavior may be determined by picking any\footnote{Except e.g. when $u=1$, $\nu=\mu=0$, where $Z_0^-$ vanishes identically, since one of the $\varepsilon(k)$ is exactly zero. This can only happen to one term at a time, so it does not matter.} of the four terms (\ref{eq:fourterms}). It is then easy to see that the only factors that matter are those for which the argument of the exponential is strictly positive. Hence, setting $M=L$ for convenience, we obtain
\begin{align}
f(\mu,\nu)&=-\lim_{L\to \infty}\frac{\log Z(\mu,\nu)}{L^2}\\
&=-\frac{2\mu}{3}+\int_{\rm FS} \frac{dk}{2\pi}\left[\varepsilon(k+i\nu)+\mu\right].
\end{align}
The Fermi sea $\textrm{FS}$ is the set of $k$ for which the real part of the integrand is negative; in our case it is a single symmetric interval $[-k_F,k_F]$. Therefore, the generating free energy is simply the ground state energy corresponding to the dispersion $\varepsilon(k+i\nu)+\mu$. One can check that it is real.
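As a consistency check, one can compare the finite-$L$ product formula with the Fermi sea integral. A minimal sketch (assuming \texttt{numpy} and \texttt{scipy}; the choices $u=1/2$, $\mu=\nu=0$ and $L=M=512$ are illustrative, and a single term, say $Z_1^+$, already captures the leading behaviour):

```python
import numpy as np
from scipy.integrate import quad

L = M = 512
u = 0.5
eps = lambda k: -0.5 * np.log(1 + u ** 2 + 2 * u * np.cos(k))  # honeycomb dispersion

# PBC momenta in the even-N sector (alpha = 1)
m = np.arange(-L // 2, L // 2)
k = (2 * m + 1) * np.pi / L

# log Z_1^+ = sum_k log(1 + e^{-M eps(k)}), evaluated stably
logZ = np.sum(np.logaddexp(0.0, -M * eps(k)))
f_finite = -logZ / (L * M)

# Fermi sea: eps(k) < 0  <=>  |k| < k_F = arccos(-u/2)
kF = np.arccos(-u / 2)
f_exact = quad(eps, -kF, kF)[0] / (2 * np.pi)

print(abs(f_finite - f_exact))   # small finite-size correction
```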
The last step to get the asymptotic behavior of $Z_{r,s}$ is to choose $\mu,\nu$ in such a way that the term $e^{\mu r+\nu s}Z_{r,s}$ dominates in $Z(\mu,\nu)$. In the thermodynamic limit the corresponding free energy $F(r,s)$ is given by
\begin{equation}
F(r,s)=f(\mu,\nu)-\frac{\mu r}{\pi}-\frac{\nu s}{\pi},
\end{equation}
where $\mu$ and $\nu$ are determined from
\begin{align}
\partial_\mu F(r,s) &=0,\\
\partial_\nu F(r,s) &=0.
\end{align}
That is, $F(r,s)$ is the Legendre transform of $f(\mu,\nu)$, with $r$ (resp. $s$) conjugate to $\mu$ (resp. $\nu$).
We finally obtain the following fundamental result
\begin{empheq}[box=\othermathbox]{equation}\label{eq:gen1bsurfacetension}
F(r,s)=\int_{-k_F}^{k_F}\frac{dk}{2\pi}\varepsilon(k+\mathrm{i}\mkern1mu\nu)-\frac{\nu s}{\pi},
\end{empheq}
where $k_F$ and $\nu$ are determined from $(r,s)$ through
\begin{empheq}[box=\othermathbox]{align}\label{eq:kf_r}
r&=k_F-\frac{2\pi}{3},\\\label{eq:nu_s}
s&=-\textrm{Im}\,\varepsilon(k_F+\mathrm{i}\mkern1mu\nu).
\end{empheq}
Hence, the free energy is solely determined from the dispersion relation, extended to the strip $[-\pi,\pi]+\mathrm{i}\mkern1mu \mathbb{R}$ of the complex plane. From this formula it is also obvious that (\ref{eq:gen1bsurfacetension}) is not specific to the honeycomb lattice, but applies to any one-band fermion problem that satisfies $\varepsilon(-k)=\varepsilon(k)$. $s$ is also proportional to the mean current $\braket{J}$ in a Fermi sea eigenstate, since $\int_{-k_F}^{k_F} \frac{dk}{2\pi}\varepsilon'(k+\mathrm{i}\mkern1mu \nu)=\frac{\mathrm{i}\mkern1mu}{\pi} \textrm{Im}\, \varepsilon(k_F+\mathrm{i}\mkern1mu\nu)$, further justifying the discussion in section \ref{sec:heightconnection}.
The result can also be generalized to problems with several bands, but we do not investigate this here. Let us now compute the free energy in two important examples.
\paragraph{XX chain in imaginary time.}
Let us start with the simplest case, the PNG droplet, or XX chain in imaginary time. The dispersion relation is $\varepsilon(k)=-\cos k$, which makes explicit computations easy. We find
\begin{equation}
F(r,s)=\frac{s}{\pi}\textrm{arcsinh}\left( \frac{s}{\cos r}\right)-\frac{1}{\pi} \sqrt{\cos^2 r+s^2}.
\end{equation}
It is shown in figure \ref{fig:exsurface} on the left. Now $r=k_F-\pi/2$ --notice the difference compared to dimers-- so $r\in [-\pi/2,\pi/2]$, while $s\in \mathbb{R}$. The constraints between $r$ and $s$ are relaxed in this limit, except for the fact that the free energy is infinite when $r=\pm \pi/2$, $s\neq 0$.
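This closed form can be checked against the general formula (\ref{eq:gen1bsurfacetension}): with $\varepsilon(k)=-\cos k$, eq.~(\ref{eq:nu_s}) gives $\sinh\nu=-s/\cos r$. A numerical sketch (assuming \texttt{numpy} and \texttt{scipy}; the point $(r,s)=(0.3,-0.7)$ is an arbitrary sample):

```python
import numpy as np
from scipy.integrate import quad

r, s = 0.3, -0.7                 # sample point, |r| < pi/2
kF = r + np.pi / 2
nu = -np.arcsinh(s / np.cos(r))  # solves s = -Im eps(k_F + i nu) for eps(k) = -cos k

# general formula: F = int_{-kF}^{kF} eps(k + i nu) dk / (2 pi) - nu s / pi
integrand = lambda k: (-np.cos(k + 1j * nu)).real   # imaginary part integrates to zero
F_general = quad(integrand, -kF, kF)[0] / (2 * np.pi) - nu * s / np.pi

# closed form quoted in the text
F_closed = (s / np.pi) * np.arcsinh(s / np.cos(r)) - np.sqrt(np.cos(r) ** 2 + s ** 2) / np.pi

print(abs(F_general - F_closed))
```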
\paragraph{Dimers on honeycomb.}
In this case integration is a little bit more tedious. We obtain after some algebra
\begin{equation}
2\pi F(r,s)=\textrm{Im}\left[{\rm Li}_2(-u e^{\mathrm{i}\mkern1mu k_F-\nu})+{\rm Li}_2(-u^{-1}e^{\mathrm{i}\mkern1mu k_F-\nu})\right]-k_F \log u-(k_F+2s)\nu,
\end{equation}
where $\nu$ solves (\ref{eq:nu_s}) and ${\rm Li}_2(\alpha)=-\int_0^\alpha \frac{\log (1-t)}{t}dt$ is the dilogarithm.
The free energy is shown in figure \ref{fig:exsurface} on the right. To our knowledge, all explicitly known free energies can be expressed in terms of such dilogarithms \cite{Nienhuis_1984,variationaldimers,deGierKenyon}.
\begin{myfigure}[label=fig:exsurface]{Examples of free energies}
\includegraphics[width=6cm]{./Plots/XXchain_surfacetension}\hfill
\includegraphics[width=6cm]{./Plots/hexa_surfacetension_3s5}
\tcbline
Left: free energy for the XX chain. Right: free energy for dimers on the hexagonal lattice, shown as a function of $k_F$, $s$, for $u=3/5$.
\end{myfigure}
\subsection{$K=1$ for free fermions}
We are now in a position to compute the Hessian of the free energy, and check our previous claim that $K=1$ in general for free fermions. The calculation presented below is essentially that of \cite{Abanov_hydro}. The first partial derivatives are
\begin{align}
F^{(10)}(r,s)&=\frac{\varepsilon(z)+\varepsilon(\bar{z})}{2\pi},\\
F^{(01)}(r,s)&= \frac{\mathrm{i}\mkern1mu (z-\bar{z})}{2\pi},
\end{align}
where we have introduced $z=k_F+\mathrm{i}\mkern1mu\nu$. Eq.~(\ref{eq:nu_s}), or $2s=\mathrm{i}\mkern1mu(\varepsilon(z)-\varepsilon(\bar{z}))$, was also explicitly used to derive the second equation. Now for the second derivatives:
\begin{align}
F^{(20)}(r,s)&=\frac{(\partial_r z) \varepsilon'(z)+(\partial_r \bar{z}) \varepsilon'(\bar{z})}{2\pi},\\
F^{(02)}(r,s)&= \frac{\mathrm{i}\mkern1mu \partial_s( z-\bar{z})}{2\pi}.
\end{align}
$F^{(11)}$ can be computed in two ways, either $\partial_r F^{(01)}(r,s)$ or $\partial_s F^{(10)}(r,s)$, so
\begin{align}
F^{(11)}(r,s)&=\frac{\mathrm{i}\mkern1mu \partial_r(z-\bar{z})}{2\pi}\\
&=\frac{(\partial_s z)\varepsilon'(z)+(\partial_s \bar{z})\varepsilon'(\bar{z})}{2\pi}.
\end{align}
Hence
\begin{align}
F^{(20)}F^{(02)}-F^{(11)}F^{(11)}&=\frac{\mathrm{i}\mkern1mu}{4\pi^2}\big[\partial_r (z+\bar{z})\big] \big[(\partial_s z) \varepsilon'(z)-(\partial_s \bar{z})\varepsilon'(\bar{z})\big]\\
&=\frac{\mathrm{i}\mkern1mu}{2\pi^2} \partial_s \left[\varepsilon(z)-\varepsilon(\bar{z})\right]\\
&=\frac{1}{\pi^2},
\end{align}
where we have again used (\ref{eq:nu_s}) to get the last line.
So $K=1$, independent of $r$ and $s$. This identity holds for any statistical mechanical model which can be mapped onto free fermions. For more complicated models such as interacting dimers or the six vertex model, $K$ generically depends on both $r$ and $s$ (see e.g. \cite{korepin_bogoliubov_izergin_1993,Granet_2019,Giuliani_2019}).
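The reader may also check $K=1$ numerically, e.g. by taking finite differences of the closed-form XX free energy given earlier. A minimal sketch (assuming \texttt{numpy}; the point $(r,s)=(0.4,0.25)$ and step $h=10^{-4}$ are arbitrary choices):

```python
import numpy as np

def F(r, s):  # closed-form free energy of the XX chain
    return (s / np.pi) * np.arcsinh(s / np.cos(r)) - np.sqrt(np.cos(r) ** 2 + s ** 2) / np.pi

r, s, h = 0.4, 0.25, 1e-4
# central second differences
F20 = (F(r + h, s) - 2 * F(r, s) + F(r - h, s)) / h ** 2
F02 = (F(r, s + h) - 2 * F(r, s) + F(r, s - h)) / h ** 2
F11 = (F(r + h, s + h) - F(r + h, s - h) - F(r - h, s + h) + F(r - h, s - h)) / (4 * h ** 2)

det = F20 * F02 - F11 ** 2
print(det * np.pi ** 2)   # close to 1, i.e. K = 1
```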
\vspace*{\fill}
\begin{exercise}[label=ex:gasfreefermions]{The log gas with free fermions}
Let $\displaystyle{\psi(x_1,\ldots,x_N)=\frac{1}{\sqrt{Z_N}}\prod_{i<j}(x_i-x_j)e^{-\frac{1}{2}\sum_i V(x_i)}}$ be a normalized wave function on the real line, $x_i\in \mathbb{R}$.
\tcbline
1. Consider the state $\ket{\Psi}=\frac{1}{\sqrt{Z_N}}f_1^\dag \ldots f_N^\dag \ket{0}$, where ${f_m^\dag=\int_{\mathbb{R}} dx x^{m-1}e^{-V(x)/2} c^\dag(x)}$ and $\{c(x),c^\dag(x')\}=\delta(x-x')$. Show that $\psi(x_1,\ldots,x_N)=\braket{0|c(x_1)\ldots c(x_N)|\Psi}$. Check that $\braket{\Psi|\Psi}=1$.\vspace{0.1cm}\\
2. Let ${d_k^\dag =\int dx\, p_k(x)e^{-V(x)/2} c^\dag(x)}$ where $p_k(x)$ is a polynomial in $x$ of degree at most $k-1$. Under which condition do we have $\{d_k,d_q^\dag\}=\delta_{kq}$ for $k,q \in \{1,\ldots,N\}$? We assume it is satisfied in the following.\vspace{0.1cm}\\
3. Show that $\ket{\Psi}=d_1^\dag \ldots d_N^\dag\ket{0}$.\vspace{0.1cm}\\
4. Show that $\braket{\Psi|c^\dag(x)c(x')|\Psi}=\sum_{k=1}^N p_k(x)p_k(x')$.\vspace{0.1cm}\\
5. Show that $\mathbb{E}(\rho(x_1)\ldots \rho(x_n))=\det_{1\leq i,j\leq n}(\braket{\Psi|c^\dag(x_i)c(x_j)|\Psi})$, where the expectation value $\mathbb{E}$ is taken with respect to the pdf $|\psi(x_1,\ldots,x_N)|^2$.
\end{exercise}
\pagebreak
\section{Solving the minimization problem}
\label{sec:cburgers}
From the previous sections, we have now all the necessary ingredients to solve the variational problem.
\subsection{Complex Burgers equations}
Recall the EL equations are
\begin{equation}
\partial_x F^{(10)}+\partial_y F^{(01)}=0,
\end{equation}
which read for free fermions, using (\ref{eq:gen1bsurfacetension},\ref{eq:kf_r},\ref{eq:nu_s}),
\begin{equation}
\partial_x \frac{\varepsilon(k_F+\mathrm{i}\mkern1mu\nu)+\varepsilon(-k_F+\mathrm{i}\mkern1mu\nu)}{2\pi}-\partial_y \frac{\nu}{\pi}=0,
\end{equation}
where $k_F$ and $\nu$ depend on position $x,y$. It is possible to express $k_F,\nu$ in terms of $r,s$ but this is not necessary. A convenient way to proceed is to introduce
\begin{equation}
z=z(x,y)=k_F+\mathrm{i}\mkern1mu\nu,
\end{equation}
as we did before.
In terms of $z$ and $\bar{z}$, the EL equation reads
\begin{equation}\label{eq:ELz}
\partial_x (\varepsilon(z)+\varepsilon(\bar{z}))+\mathrm{i}\mkern1mu\partial_y (z-\bar{z})=0.
\end{equation}
Since $r,s$ are by definition partial derivatives of the height field, we also have the continuity equation $\partial_y r-\partial_x s=0$, which may also be rewritten as $\partial_{y}(z+\bar{z})-\mathrm{i}\mkern1mu\partial_x (\varepsilon(z)-\varepsilon(\bar{z}))=0$. Combining with (\ref{eq:ELz}) yields the pair of mutually conjugate equations \cite{Abanov_hydro}
\begin{empheq}[box=\othermathbox]{align}\label{eq:nice}
\mathrm{i}\mkern1mu\partial_y z+\partial_x \varepsilon(z)&=0,\\\label{eq:nice2}
-\mathrm{i}\mkern1mu\partial_y \bar{z}+\partial_x \varepsilon(\bar{z})&=0.
\end{empheq}
For $\varepsilon(k)=k^2/2$ (free Fermi gas), this PDE is called the \emph{complex Burgers equation}; in the following we refer to (\ref{eq:nice}) as a complex Burgers-type equation. An alternative but equivalent point of view is discussed in section~\ref{sec:alg_geom}.
The interpretation of these equations is very nice. Think of a free fermion system described by a (boosted) Fermi sea $[-k_l,k_r]$, where $k_{l}$ and $k_r$ are the left and right Fermi momenta. The particle density is $\rho=\frac{k_r+k_l}{2\pi}$, while the current is $\frac{\varepsilon(k_r)-\varepsilon(k_l)}{2\pi}$. Under unitary time evolution, they satisfy the continuity equations $\partial_t k_{l,r}+\partial_x \varepsilon(k_{l,r})=0$. Since $\partial_x \varepsilon(k)=(\partial_x k)\varepsilon'(k)$, this is essentially the statement that each quasiparticle with momentum $k$ moves at a speed given by the group velocity $v(k)=\varepsilon'(k)$. Now $z$ and $\bar{z}$ are the respective analytic continuations of $k_l$ and $k_r$ to the complex plane, and the continuity equations are continued by the Wick rotation $t=-\mathrm{i}\mkern1mu y$, leading to (\ref{eq:nice},\ref{eq:nice2}). Hence (\ref{eq:nice},\ref{eq:nice2}) can be seen as Wick-rotated continuity equations.
In fact, Ref.~\cite{Abanov_hydro} proceeds in exactly the reverse order to the one we just followed. Eqs.~(\ref{eq:nice},\ref{eq:nice2}) are simply assumed to hold in imaginary time, as the only sensible analytic continuation of the real time continuity equations. Then, it is possible to find the simplest Lagrangian
\begin{equation}
\mathcal{L}=-\frac{1}{4\pi}(z-\bar{z})(\varepsilon(z)-\varepsilon(\bar{z}))+\int_{-\bar{z}}^z \frac{dk}{2\pi}\varepsilon(k),
\end{equation}
for which (\ref{eq:nice},\ref{eq:nice2}) are the EL equations. The reader can easily check that this is exactly the free energy (\ref{eq:gen1bsurfacetension}). Here, by contrast, we derived everything starting from the lattice model.
\subsection{Complex characteristics}
Such equations may be ``solved'' by the method of complex characteristics. Without entering too much into specifics (see e.g. \cite{Amoeba}), the solution $z$ satisfies
\begin{equation}\label{eq:implicit}
G(z)=x+\mathrm{i}\mkern1mu\varepsilon'(z)y,
\end{equation}
where $G$ is an analytic function. This result is nothing more than a (nice) parametrisation of the whole set of solutions. As with all PDEs, it is very important to specify the boundary conditions; here they turn out to be enough to determine the desired analytic function $G$. Said differently, to each boundary condition (bc) there corresponds an analytic function. However, finding it from a given bc is, in general, a difficult task. At this stage the reader might be worried that we did not achieve much from a practical perspective: we mapped the difficult problem of finding the limit shape onto the difficult problem of determining the analytic function.
Fortunately the situation is not that bad, since there are a few interesting examples where $G$ can be guessed. Those include the emptiness formation probability setup \cite{Abanov_hydro} (see also exercise \ref{ex:emptiness}), but the method can also be adapted to a domain wall geometry, as is discussed below. Note also that in this approach, if the density profile along any horizontal or vertical slice is known, then $G$ is known, so everything is known.
\subsection{Introducing the domain wall geometry}
A simple way to generate limit shapes is to look at a ``domain wall geometry'' similar to the one we used to generate the Aztec diamond for square lattice dimers. We consider lattice fermions on the infinite line $\mathbb{Z}$, and impose the configuration $\ket{\psi}=\prod_{x<0}c_x^\dag \ket{\bf 0}$ at the bottom and top boundaries. In height language these boundary conditions correspond to the maximum slopes $\partial_x h=\frac{\pi}{3}$ for $x<0$, $\partial_x h=-\frac{2\pi}{3}$ for $x>0$. The boundary conditions are shown in figure \ref{fig:dwhexa} on the left, while a typical configuration is shown on the right.
\begin{myfigure}[label=fig:dwhexa,float=!ht]{Domain wall geometry for dimers on the hexagonal lattice}
\includegraphics[height=5.4cm]{./Plots/domainwall_hexa/hexagon.pdf}
\tcbline Domain wall geometry (left). Right: typical configuration with domain wall boundary conditions. The particles (fermions) are shown in yellow.
\end{myfigure}
Specific examples with this geometry are worked out in the next subsection. We will see in particular that the corresponding arctic curve is an ellipse.
\subsection{Examples}
Here we discuss two examples with boundary conditions that correspond to a domain wall initial state. We focus on the $XX$ chain first, and then proceed to dimers on the hexagonal lattice (assuming $u<1$).
\paragraph{The XX chain in imaginary time (or PNG droplet\cite{Spohn_prl,PraehoferSpohn2002}).} Here $\varepsilon(z)=-\cos z$, which gives $\varepsilon'(z)=\sin z$.
One can check that the correct solution for the domain wall geometry takes the form
\begin{equation}\label{eq:quadratic}
R\cos z=x+\mathrm{i}\mkern1mu y \sin z.
\end{equation}
[This corresponds to $G(z)=R\cos z$]. It is sufficient to check that the boundary conditions are correct. Indeed, those are set at $y=\pm R$, and read (for $y=R$)
\begin{equation}
R e^{-\mathrm{i}\mkern1mu k_F(x, R)+\nu(x, R)}=x.
\end{equation}
The rhs is real, so this imposes $k_F(x>0)=0$ and $k_F(x<0)=\pi$, which is exactly the density corresponding to the domain wall boundary conditions.
(\ref{eq:quadratic}) becomes a quadratic equation in $e^{\mathrm{i}\mkern1mu z}$, so it can easily be solved. Provided $x^2+y^2<R^2$ it has two simple roots $z$ and $-z^*$, with
\begin{equation}
z=\arccos \frac{x}{\sqrt{R^2-y^2}}-\mathrm{i}\mkern1mu\textrm{arctanh} \frac{y}{R}.
\end{equation}
From this we can compute essentially everything, as we demonstrate now.
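As a quick sanity check, one can verify numerically that this root indeed solves (\ref{eq:quadratic}); a small Python sketch:

```python
import numpy as np

# Check that z = arccos(x/sqrt(R^2-y^2)) - i*arctanh(y/R) solves the
# hydrodynamic equation R*cos(z) = x + i*y*sin(z) at sample points
# inside the arctic circle x^2 + y^2 < R^2.
R = 1.0
for x, y in [(0.3, 0.2), (-0.5, 0.1), (0.0, -0.6)]:
    z = np.arccos(x / np.sqrt(R**2 - y**2)) - 1j * np.arctanh(y / R)
    residual = R * np.cos(z) - (x + 1j * y * np.sin(z))
    assert abs(residual) < 1e-12
print("root check passed")
```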
The density inside the arctic circle is simply given by $\rho(x,y)=\frac{\partial_x h}{\pi}+1/2=\frac{1}{\pi}\textrm{Re} \,z=\frac{1}{\pi}\arccos \frac{x}{\sqrt{R^2-y^2}}$, the current being $-\textrm{Im}\,\varepsilon(z)=\frac{y \sqrt{R^2-x^2-y^2}}{R^2-y^2}$. The height profile is given by
\begin{equation}
h(x,y)=-\sqrt{R^2-x^2-y^2}-|x|\arcsin \frac{|x|}{\sqrt{R^2-y^2}},
\end{equation}
and the corresponding (minimal) free energy is
\begin{equation}
F(\partial_x h_{\rm cl}(x,y),\partial_y h_{\rm cl}(x,y))=\frac{\sqrt{R^2-x^2-y^2} \left(-R+y \,{\rm arcsinh}\left(\frac{y}{\sqrt{R^2-y^2}}\right)\right)}{\pi \left(R^2-y^2\right)}.
\end{equation}
See figure \ref{fig:xxmin} for pictures.
The total free energy is $S_0[h_{\rm cl}]=\int dx dy F(\partial_x h_{\rm cl}(x,y),\partial_y h_{\rm cl}(x,y))=-R^2/2$, which means the partition function scales as $Z(R)\sim e^{R^2/2}$. In fact, we will see in section \ref{sec:exactcalc} that $Z(R)=e^{R^2/2}$ exactly for all $R$.
\begin{myfigure}[label=fig:xxmin,float=!ht]{Minimiser for the XX chain}
\includegraphics[width=6cm]{./Plots/XXchain_densityprofile}\hfill
\includegraphics[width=6cm]{./Plots/XXchain_currentprofile}
\includegraphics[width=6cm]{./Plots/XXchain_minimalsurfacetension}\hfill
\includegraphics[width=6cm]{./Plots/XXchain_heightprofile}
\tcbline
Density profile, current, free energy in the fluctuating region, (minus the) height profile corresponding to the minimiser of $\int_D dx dy F(\partial_x h,\partial_y h)$.
\end{myfigure}
Looking at the edge behavior is also worthwhile. Except near $x=0$, $y=\pm R$, the density profile vanishes with a square-root singularity. The height field in turn vanishes as the $3/2$ power of the distance to the arctic curve, a result which is known more generally as the Pokrovsky-Talapov law \cite{Pokrovsky_Talapov}.
\paragraph{Hilbert transform and dimers on the hexagonal lattice.}
For a general dispersion relation $\varepsilon(z)=-\sum_p a_p \cos pz$, $v(z)=\varepsilon'(z)$, it is easy to see that the right generalisation of (\ref{eq:quadratic}) reads
\begin{equation}\label{eq:hydro_hilbert}
-R \tilde{v}(z)=x+\mathrm{i}\mkern1mu y v(z)
\end{equation}
where $\tilde{v}$ is obtained from $v(z)=\sum_p p a_p \sin pz$ by replacing all $\sin p z$ by $-\cos pz$. This transformation is called the \emph{Hilbert transform}.
Applying this to the dispersion (\ref{eq:hexadispersion}) leads to
\begin{equation}
\tilde{v}(z)=-\frac{u(u+\cos z)}{1+u^2+2u \cos z}.
\end{equation}
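The closed form above is the resummation of a geometric series. Assuming the honeycomb dispersion expands as $\varepsilon(k)=\sum_{p\geq 1}\frac{(-u)^p}{p}\cos pk$ (our reading of (\ref{eq:hexadispersion}), stated here as an assumption), both $v$ and $\tilde{v}$ can be checked numerically against their truncated series:

```python
import numpy as np

# With p*a_p = -(-u)^p, the velocity is v(k) = -sum_p (-u)^p sin(pk);
# the Hilbert transform replaces sin(pk) -> -cos(pk), so
# vt(k) = sum_p (-u)^p cos(pk). Compare truncated sums to closed forms.
u, k = 0.4, 0.7
p = np.arange(1, 200)
vt_series = np.sum((-u)**p * np.cos(p * k))
vt_closed = -u * (u + np.cos(k)) / (1 + u**2 + 2*u*np.cos(k))
assert abs(vt_series - vt_closed) < 1e-12
v_series = -np.sum((-u)**p * np.sin(p * k))
v_closed = u * np.sin(k) / (1 + u**2 + 2*u*np.cos(k))
assert abs(v_series - v_closed) < 1e-12
print("Hilbert transform check passed")
```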
We also get a quadratic equation, whose solution is
\begin{equation}
z=\arccos \left(\frac{(1+u^2)x-Ru^2}{u\sqrt{(R-2x)^2-y^2}}\right)-\mathrm{i}\mkern1mu\textrm{arctanh} \left(\frac{y}{R-2x}\right)
\end{equation}
inside the arctic ellipse $X^2+y^2<R^2$, where $X=\frac{1-u^2}{u}x+Ru$. It is also possible to compute everything as we did before, but we refrain from doing so.
\begin{myfigure}[label=fig:hexamin,float=!ht]{Minimiser for dimers on the honeycomb lattice}
\includegraphics[width=6cm]{./Plots/hexa_densityprofile}\hfill
\includegraphics[width=6cm]{./Plots/hexa_currentprofile}
\includegraphics[width=6cm]{./Plots/hexa_minimalsurfacetension_3s5}\hfill
\includegraphics[width=6cm]{./Plots/hexa_heightprofile}
\tcbline
Density profile $\frac{1}{\pi}\partial_x h+\frac{2}{3}$, current $\partial_y h$, free energy in the fluctuating region and (minus the) corresponding height profile.
\end{myfigure}
\subsection{Algebraic geometry}
\label{sec:alg_geom}
Recall the free energy may be put into the form
\begin{equation}
F(r,s)=f(\mu,\nu)-\mu r -\nu s,
\end{equation}
where
\begin{equation}
f(\mu,\nu)=\int_{-\pi}^{\pi}\frac{dk}{2\pi}\left[\varepsilon(k+i\nu)+\mu\right],
\end{equation}
and $\mu,\nu$ are determined from (\ref{eq:kf_r},\ref{eq:nu_s});
in other words, $F$ is the (double) Legendre transform of the generating function $f(\mu,\nu)$. Let us take the honeycomb dimer model as an example. There is an alternative (equivalent) way of accessing the generating function, using the Kasteleyn approach \cite{Nienhuis_1984,kenyon2000}, which leads to the more symmetric expression
\begin{equation}\label{eq:symm}
f(\mu,\nu)=\int_{-\pi}^{\pi}\frac{dk}{2\pi}\int_{-\pi}^{\pi}\frac{dq}{2\pi} \log \left|P(e^{\mu/\pi+ik},e^{\nu/\pi+iq})\right|
\end{equation}
where $P(z,w)=z+w+1$. The reader can check that the two expressions match, using $\int_{-\pi}^{\pi} \frac{d\theta}{2\pi}\log\left|\alpha+\beta e^{i\theta}\right|=\log \max(|\alpha|,|\beta|)$. In a sense, the transfer matrix approach does the integral over $q$ for free in (\ref{eq:symm}), but depending on context, the symmetric form can be nicer, and this is the way the free energy is calculated in the mathematical literature, see e.g. \cite{variationaldimers}. The polynomial $P$ essentially encodes the lattice; more complicated lattices lead to higher order polynomials, for example $P(z,w)=1+z+w-zw$ for the square lattice. In Kasteleyn's approach, $P$ is related to the determinant of the Kasteleyn matrix.
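The averaging identity used here is easy to check numerically; a small sketch:

```python
import numpy as np

# Check (1/2pi) int_{-pi}^{pi} log|alpha + beta e^{i theta}| dtheta
#       = log(max(|alpha|, |beta|))
# by averaging over a uniform periodic grid (trapezoid rule for a
# periodic integrand converges very fast).
theta = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
for alpha, beta in [(2.0, 0.5), (0.3, 1.7), (1.5 + 0.5j, 0.4)]:
    lhs = np.mean(np.log(np.abs(alpha + beta * np.exp(1j * theta))))
    rhs = np.log(max(abs(alpha), abs(beta)))
    assert abs(lhs - rhs) < 1e-6
print("averaging identity check passed")
```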
In the algebraic geometry context, $f(\mu,\nu)$ as defined in (\ref{eq:symm}) is called the \emph{Ronkin function} of the polynomial $P(z,w)=z+w+1$, and the free energy $F(r,s)$ is then its Legendre dual. The dual takes values inside a polygon of allowed slopes, which is the \emph{Newton polygon} of the polynomial $P$. One can also define the \emph{Amoeba} of the algebraic curve $P(z,w)=0$, as the set
\begin{equation}
\mathcal{A}(P)=\{(\log |z|,\log |w|)\in \mathbb{R}^2,\;P(z,w)=0\},
\end{equation}
namely, the projection of the algebraic curve (in $\mathbb{C}^2$) $P(z,w)=0$ to $\mathbb{R}^2$ by the map $(z,w)\mapsto (\log |z|,\log |w|)$.
Now, from general algebraic geometry machinery, the Ronkin function is convex, and linear in the complement of the Amoeba. The complement is a union of disjoint simply connected pieces, which correspond to frozen regions \cite{Nienhuis_1984}. Under the Legendre duality each component of the complement is mapped to a single point of the Newton polygon. One can also show that the Hessian of the Ronkin function has constant determinant, $\det\textrm{Hess}(f)=\pi^2$, for any point inside the Amoeba. This is interpreted as a Monge-Amp\`ere equation. By Legendre duality this implies $\det\textrm{Hess}(F)=1/\pi^2$, so $K=1$ in the fluctuating region. This result happens to be a characterisation of algebraic curves known as Harnack curves, so the algebraic curves one gets from dimer models are always Harnack. The interested reader may have a look at references \cite{kenyon2006,kenyon2007,Amoeba} for a much more precise discussion.
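For $P(z,w)=z+w+1$, the amoeba is completely explicit: $(a,b)=(\log|z|,\log|w|)$ belongs to it if and only if $e^a$, $e^b$ and $1$ can be the side lengths of a triangle. A minimal numerical illustration (the helper name is ours):

```python
import numpy as np

# Membership test for the amoeba of P(z,w) = z + w + 1:
# (a,b) is in the amoeba iff some theta solves |e^a e^{i theta} + 1| = e^b,
# i.e. iff cos(theta) = (e^{2b} - e^{2a} - 1) / (2 e^a) lies in [-1, 1].
# This is the triangle inequality for side lengths e^a, e^b, 1.
def in_amoeba(a, b):
    c = (np.exp(2*b) - np.exp(2*a) - 1) / (2 * np.exp(a))
    return abs(c) <= 1

assert in_amoeba(0.0, 0.0)        # sides 1,1,1: an equilateral triangle
assert not in_amoeba(3.0, 0.0)    # e^3 > e^0 + 1: frozen region
assert not in_amoeba(-3.0, -3.0)  # both sides tiny: cannot close against 1
print("amoeba membership check passed")
```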
Hence, the statement ``The free energy of the dimer model is the Ronkin function of a Harnack curve, so satisfies a Monge-Amp\`ere relation for any point in the Amoeba'' translates for us into the statement
``Dimers map to free fermions, so bosonising the fermions we get $K=1$ in the fluctuating region''. A nice illustration of the two different types of jargons.
\subsection{Edge behavior}
\label{sec:edgebe}
The hydrodynamic solution gave $z=z(x,y)=k_F+\mathrm{i}\mkern1mu\nu$ from which the limit shape follows. Our aim is to provide a heuristic derivation of the universal edge behavior near the arctic curve. To do that, we look at the particular case where $\nu$ (or the current) is zero, which occurs at least in the middle at $y=0$ in all the examples we discussed. Let us emphasize that the argument we present here may be generalized to $\nu\neq 0$. However, the discussion would involve left/right ground states of non-Hermitian Hamiltonians, which would obscure the argument slightly. We now assume that $z(x,y)=k_F(x,y)$ is known from the hydrodynamic solution and happens to be real.
The first claim is that the correlations around a given point $(x,y)$ are those of the ground state of the Hamiltonian
\begin{equation}
H=\int_{-\pi}^{\pi} \frac{dk}{2\pi} \big[\varepsilon(k)+\mu\big]c^\dag(k)c(k),
\end{equation}
where $\mu$ is set such that $\varepsilon(k_F)=-\mu$, to ensure that $k_F$ be the Fermi momentum.
Close to the arctic curve the density goes to $0$ or $1$; let us focus on the case of vanishing density. This means $k_F$ goes to zero, and it is possible to expand the dispersion around $k=0$. Since $\varepsilon(-k)=\varepsilon(k)$, we have $\varepsilon(k)=\varepsilon(0)+\frac{1}{2}\varepsilon''(0)k^2+O(k^4)$, and we get the effective edge Hamiltonian
\begin{equation}
H_{\rm edge}=\int_\mathbb{R} \frac{dk}{2\pi}\frac{1}{2}\varepsilon''(0)\left(k^2-k_F^2\right)c^\dag(k)c(k).
\end{equation}
Now how does $k_F$ depend on $x,y$ close to the edge? The edge corresponds to the case where two (or more) solutions $z_s$ and $-z_s^*$ to the hydrodynamic equation
\begin{equation}
x+iy \varepsilon'(z)+R\tilde{\varepsilon}'(z)=0
\end{equation}
coalesce, so that we get a double root or higher. \emph{Generically} this root will be only double, meaning $\varepsilon''(0)\neq 0$. Writing $x_{\rm a}(y,R)$ for the arctic curve, and setting $x=x_{\rm a}- \tilde{x}$, we get that, generically
\begin{equation}\label{eq:edgekf}
k_F(x)\sim \alpha\sqrt{\frac{\tilde{x}}{R}}
\end{equation}
for some coefficient $\alpha$ that depends on $x_{\rm a}/R$ (for example, for the domain wall XX chain, $\alpha=\sqrt{2}$). Therefore, $k_F^2$ behaves linearly in $\tilde{x}$ close to the edge.
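For the domain wall XX chain at $y=0$ we found $k_F(x)=\arccos(x/R)$, so the claim $\alpha=\sqrt{2}$ can be checked directly:

```python
import numpy as np

# Edge scaling for the domain wall XX chain at y = 0: k_F(x) = arccos(x/R).
# Near the edge x = R - xt, arccos(1 - xt/R) ~ sqrt(2*xt/R), i.e. alpha = sqrt(2).
R = 1.0
for xt in [1e-4, 1e-6, 1e-8]:
    kF = np.arccos((R - xt) / R)
    approx = np.sqrt(2 * xt / R)
    assert abs(kF / approx - 1) < 1e-3
print("edge scaling check passed")
```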
Now what are free fermions with a $k^2$ dispersion? This is just a free Fermi gas in the continuum, sometimes encountered via its relation to the Tonks-Girardeau gas in cold atom systems \cite{1dbosons_review}. Crudely making the substitution $k\to -i\frac{d}{d\tilde{x}}$ and undoing the Fourier transform yields the Hamiltonian
\begin{equation}
H_{\rm edge}\propto\int_\mathbb{R} d\tilde{x} c^\dag(\tilde{x})\left(-\frac{d^2}{d \tilde{x}^2}+\frac{\alpha \tilde{x}}{R}\right) c(\tilde{x}),
\end{equation}
that is
free Dirac fermions $\{c(\tilde{x}),c^\dag(\tilde{x}')\}=\delta(\tilde{x}-\tilde{x}')$, in a linear potential. The change of variables $\tilde{x}=(R/\alpha)^{1/3}u$ leads to
\begin{equation}\label{eq:edgeH}
H_{\rm edge}\propto\int_\mathbb{R} du \,c^\dag(u)\left(-\frac{d^2}{du^2}+u\right) c(u).
\end{equation}
Therefore, a non-trivial Hamiltonian emerges at distances $\sim R^{1/3}$ from the edge. We expect correlations on such distances to be those of the ground state of (\ref{eq:edgeH}).
This is still a free problem, so diagonalizing $H_{\rm edge}$ boils down to solving the single-particle eigenvalue equation $\left(-\frac{d^2}{du^2}+u\right) f(\lambda,u)=\lambda f(\lambda,u)$. The solutions are well-known to be Airy functions. It is an exercise to show that the (Dirac sea-like) ground state propagator for this Hamiltonian is
\begin{equation}
\braket{c^\dag(u)c(u')}=\int_{0}^{\infty}d\lambda \,{\rm Ai}(u+\lambda) {\rm Ai}(u'+\lambda),
\end{equation}
which is known as the \emph{Airy kernel}.
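The Airy kernel also has an equivalent ``integrable'' form, $\big({\rm Ai}(u){\rm Ai}'(u')-{\rm Ai}'(u){\rm Ai}(u')\big)/(u-u')$, which follows from the Airy equation ${\rm Ai}''(x)=x\,{\rm Ai}(x)$. The equality of the two forms is easy to check numerically (a sketch using SciPy):

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

# Compare the two standard forms of the Airy kernel:
# int_0^inf Ai(u+l) Ai(u'+l) dl = (Ai(u)Ai'(u') - Ai'(u)Ai(u')) / (u - u')
def Ai(x):  return airy(x)[0]
def Aip(x): return airy(x)[1]

u, up = 0.3, 1.1
integral, _ = quad(lambda l: Ai(u + l) * Ai(up + l), 0, np.inf)
closed = (Ai(u) * Aip(up) - Aip(u) * Ai(up)) / (u - up)
assert abs(integral - closed) < 1e-8
print("Airy kernel check passed")
```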
Let us now look at the region $A=[s,\infty)$ of the infinite line. It is another nice exercise to compute the generating function $\Upsilon(\alpha,s)=\braket{e^{\alpha\int_s^\infty c^\dag(u)c(u)du}}$, and show that it is given by the infinite series
\begin{align}\label{eq:deffred}
\Upsilon(\alpha,s)&=1+\sum_{n=1}^\infty \frac{(e^\alpha-1)^n}{n!} \int_{s}^\infty du_1\ldots \int_s^\infty du_n \det_{1\leq i,j\leq n} \left(\braket{c^\dag(u_i)c(u_j)}\right).
\end{align}
Now define the emptiness formation probability $E(s)=\lim_{\alpha\to-\infty}\Upsilon(\alpha,s)$, which is, as its name indicates, the probability that the interval $A=[s,\infty)$ be empty of particles. We have from the previous formula the exact series representation
\begin{equation}\label{eq:tracywidom}
E(s)=1+\sum_{n=1}^\infty \frac{(-1)^n}{n!} \int_{s}^\infty du_1\ldots \int_s^\infty du_n \det_{1\leq i,j\leq n} \left(\braket{c^\dag(u_i)c(u_j)}\right).
\end{equation}
One can show that $E(s)$ is smooth, positive, strictly increasing, and $\lim_{s\to-\infty}E(s)=0$, while $\lim_{s\to \infty}E(s)=1$. $E(s)$ gives us information about the last (or rightmost) particle. Indeed $E(s+ds)-E(s)$ is proportional to the probability that the rightmost particle lies in the interval $[s,s+ds]$. Hence $p(s)=\frac{dE}{ds}$ is actually the probability density function for the rightmost particle, so $E(s)$ may be interpreted as the cumulative distribution for the rightmost particle.
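Series like (\ref{eq:tracywidom}) can actually be evaluated numerically as finite determinants, by discretising the kernel on quadrature nodes. A rough sketch (the truncation length and node count below are ad hoc choices, not part of the derivation):

```python
import numpy as np
from scipy.special import airy

# Nystrom evaluation of E(s) = det(I - K_Airy) on [s, inf), truncating at
# s + 15 (the kernel decays superexponentially) with Gauss-Legendre nodes.
def airy_kernel(x, y):
    Ax, Apx, _, _ = airy(x)
    Ay, Apy, _, _ = airy(y)
    num = Ax * Apy - Apx * Ay
    d = x - y
    # on the diagonal, use the limit K(x,x) = Ai'(x)^2 - x Ai(x)^2
    return np.where(np.abs(d) > 1e-8, num / np.where(d == 0, 1, d),
                    Apx**2 - x * Ax**2)

def E(s, n=80, L=15.0):
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = s + L * (t + 1) / 2                     # map to [s, s + L]
    w = w * L / 2
    K = airy_kernel(x[:, None], x[None, :])
    M = np.eye(n) - np.sqrt(w[:, None]) * K * np.sqrt(w[None, :])
    return np.linalg.det(M)

vals = [E(s) for s in (-2.0, -1.0, 0.0, 2.0)]
assert all(a < b for a, b in zip(vals, vals[1:]))   # E is increasing
assert 0 < vals[0] < 1 and vals[-1] > 0.99          # E -> 1 at +infinity
print("E(s) values:", np.round(vals, 4))
```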
The distribution that has the rhs of (\ref{eq:tracywidom}) as a cumulative distribution has a name. It is called the \emph{Tracy-Widom (T-W) distribution} \cite{TracyWidom_tw}. We finish with a few remarks:
\begin{itemize}
\item Exact series such as (\ref{eq:tracywidom}) or the one above are called Fredholm determinants. They are the continuum analog of the regular determinant, for operators. See e.g. (\ref{eq:secondtolast_der},\ref{eq:last_der}) for a discrete analog.
\item Proving that the distribution of the rightmost fermion does converge, after proper rescaling, to the T-W distribution of course requires more work than the heuristic argument we just gave. However, the argument still illustrates the physical mechanism through which T-W emerges, namely free fermions in a linear potential. We will see in the next section another mechanism through which T-W occurs.
\item Convergence to T-W has been proved in all the models we considered so far. For the $XX$ chain this was done by Pr\"ahofer and Spohn \cite{PraehoferSpohn2002}, while dimers have been treated by Johansson (honeycomb \cite{Johansson2000} and square \cite{johansson2005}).
\item The attentive reader might have spotted a physical flaw in the argument we just made. We have implicitly assumed separation of scales throughout. This is fine in the bulk, but it is not clear that it still holds at the edge. In fact, one can show that the edge is exactly borderline with respect to separation of scales, since the density varies on distances that are comparable to inter-particle distances. In that sense T-W is smoothly connected to the bulk, where separation of scales does hold. This also justifies the need for exact calculations starting from the lattice, see section~\ref{sec:exactcalc}.
\item The word generic near (\ref{eq:edgekf}) is important. It is necessary for the density to vanish as a square root to get a $k^2$ dispersion and a linear potential for the fermions, hence the $R^{1/3}$ behavior and T-W. If the arctic curve has cusps, for example, edge correlations near (generic) cusps are described by a higher order kernel known as the Pearcey kernel. We refer to \cite{Johansson_edgereview} for a discussion of all these kernels. Another example of a higher order kernel can be found in exercise \ref{ex:higherorder}.
\item Below is a plot of the T-W probability density function. As can be observed, it superficially resembles a Gaussian, but it is slightly skewed.
\begin{myfigure}[label=fig:tw]{Tracy-Widom distribution}
\centering
\includegraphics[height=5cm]{./Plots/tw}
\end{myfigure}
\item T-W scaling is part of a broader subject, which goes under the name Kardar-Parisi-Zhang (KPZ) equation \cite{KPZ_1986} and KPZ universality class. Roughly speaking, the KPZ equation is a stochastic PDE that models interface growth \cite{TakeuchiSano,Takeuchi2011}, and it turns out the long time limit of this equation with certain initial conditions is exactly the T-W distribution. We refer to \cite{Spohn_lectures,Corwin_review,Quastel_Spohn_2015,Sasamoto_review,Dean_2019} for reviews on this equation, the KPZ universality class, and related topics in random matrix theory.
\end{itemize}
\vspace*{\fill}
\begin{exercise}[label=ex:emptiness]{Emptiness boundary conditions \qquad \cite{Abanov_hydro}}
We consider fermions with dispersion $\varepsilon(z)=-\cos z$.
We have seen in the text that solutions to the complex Burgers equation may be put in the form $\cos z=G(x+iy \sin z)$, where $G$ is analytic. We wish to find the limit shape corresponding to emptiness boundary conditions, that is (i) vanishing density on the interval $x\in [-\ell,\ell]$, $y=0$ of the full complex plane and (ii) density $1/2$ at infinity.
\tcbline
1. What is the correct function $G$ to implement those boundary conditions?\\
\, [Hint: the full density profile is shown in the picture below for $\ell=1$]\\
2. Where do you expect Pokrovsky-Talapov-Tracy-Widom behavior?
\tcbline
\begin{center}
\includegraphics[height=6cm]{./Plots/abanov_emptiness}
\includegraphics[height=6cm]{./Plots/abanov_emptiness_scale}
\end{center}
\end{exercise}
\begin{exercise}[label=ex:other]{Reverse engineering other solutions}
By playing with simple functions, find other solutions to the complex Burgers equation and interpret them.
\end{exercise}
\begin{exercise}[label=ex:higherorder]{Another higher order edge kernel \qquad \cite{Multi,Stephan_edge}}
We consider the Hamiltonian
$$H=\int_{-\pi}^{\pi} \frac{dk}{2\pi}\big[\varepsilon(k)-\mu \big]c^\dag(k)c(k)$$
where, semiclassically, $\mu$ depends on position as $\mu=\mu(x)=x/R$ for large $R$. The dispersion relation is $\varepsilon(k)=-\cos k+\frac{1}{4}\cos (2k)$.
\tcbline
1. Where is the location of the right edge?\\
2. On which scale do you expect the distribution of the rightmost fermion to lie?\\
3. Compute the associated edge kernel and edge distribution.
\end{exercise}
\pagebreak
\begin{exercise}[label=ex:tmsquare,breakable]{The Aztec arctic circle (I: Transfer matrix)}
The aim of this exercise is to work out the transfer matrix for dimers on the square lattice. This will be used in exercise \ref{ex:hydroquare} to recover the arctic circle shown in figure \ref{fig:aztec}, and discussed at length in the introduction.
\vspace{0.5cm}
The mapping to fermions goes as follows. With the conventions of section \ref{sec:hydro}, we define a fermion as a blue dimer, or an empty vertical edge of the even sublattice, shown by a zigzag line below. As we shall see, one complication compared to the honeycomb lattice is that the transfer matrix is only invariant under translations by two lattice sites. We are therefore dealing with a two-band problem, in fermion language.
\vspace{0.4cm}
\begin{tikzpicture}[scale=0.75]
\draw (3.5,8) node {Rectangle};
\foreach \x in {0,1,2,3,4,5,6,7}{
\draw (\x,0) -- (\x,7);\draw (0,\x) -- (7,\x);
}
\draw[color=dblue,line width=8pt] (0,0) -- (0,1);
\draw[color=yello,line width=8pt] (1,0) -- (1,1);
\draw[color=yello,line width=8pt] (0,3) -- (0,4);
\draw[color=dblue,line width=8pt] (1,3) -- (1,4);
\draw[color=dblue,line width=8pt] (0,6) -- (0,7.);
\draw[color=yello,line width=8pt] (1,6) -- (1,7.);
\draw[color=yello,line width=8pt] (2,1) -- (2,2);
\draw[color=dblue,line width=8pt] (3,1) -- (3,2);
\draw[color=yello,line width=8pt] (2,5) -- (2,6);
\draw[color=dblue,line width=8pt] (3,5) -- (3,6);
\draw[color=dblue,line width=8pt] (4,2) -- (4,3);
\draw[color=yello,line width=8pt] (5,2) -- (5,3);
\draw[color=dblue,line width=8pt] (4,6) -- (4,7);
\draw[color=dblue,line width=8pt] (6,4) -- (6,5);
\draw[color=yello,line width=8pt] (7,4) -- (7,5);
\draw[color=yello,line width=8pt] (7,6) -- (7,7);
\draw[color=gree,line width=8pt] (0,2) -- (1,2);
\draw[color=dred,line width=8pt] (0,5) -- (1,5);
\draw[color=gree,line width=8pt] (2,0) -- (3,0);
\draw[color=dred,line width=8pt] (2,3) -- (3,3);
\draw[color=gree,line width=8pt] (2,4) -- (3,4);
\draw[color=dred,line width=8pt] (2,7) -- (3,7);
\draw[color=gree,line width=8pt] (4,0) -- (5,0);
\draw[color=dred,line width=8pt] (4,1) -- (5,1);
\draw[color=gree,line width=8pt] (4,4) -- (5,4);
\draw[color=dred,line width=8pt] (4,5) -- (5,5);
\draw[color=dred,line width=8pt] (5,6) -- (6,6);
\draw[color=gree,line width=8pt] (5,7) -- (6,7);
\draw[color=gree,line width=8pt] (6,0) -- (7,0);
\draw[color=dred,line width=8pt] (6,1) -- (7,1);
\draw[color=gree,line width=8pt] (6,2) -- (7,2);
\draw[color=dred,line width=8pt] (6,3) -- (7,3);
\foreach \i in {1,5}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (0,\i) -- (0,\i+1);
}
\foreach \i in {2,4}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (1,\i) -- (1,\i+1);
}
\foreach \i in {3}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (2,\i) -- (2,\i+1);
}
\foreach \i in {0,2,4,6}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (3,\i) -- (3,\i+1);
}
\foreach \i in {1,3,5}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (4,\i) -- (4,\i+1);
}
\foreach \i in {0,4,6}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (5,\i) -- (5,\i+1);
}
\foreach \i in {1,3,5}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (6,\i) -- (6,\i+1);
}
\foreach \i in {0,2}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (7,\i) -- (7,\i+1);
}
\begin{scope}[xshift=11cm]
\draw (3.5,8) node {Aztec diamond};
\foreach \x in {0,1,2,3,4,5,6,7}{
\draw (\x,0) -- (\x,7);\draw (0,\x) -- (7,\x);
}
\draw[color=dblue,line width=8pt] (0,0) -- (0,1);
\draw[color=dblue,line width=8pt] (1,-0.5) -- (1,0);
\draw[color=dblue,line width=8pt] (2,0) -- (2,1);
\draw[color=dblue,line width=8pt] (3,-0.5) -- (3,0);
\draw[color=yello,line width=8pt] (4,-0.5) -- (4,0);
\draw[color=yello,line width=8pt] (5,0) -- (5,1);
\draw[color=yello,line width=8pt] (6,-0.5) -- (6,0);
\draw[color=yello,line width=8pt] (7,0) -- (7,1);
\draw[color=dblue,line width=8pt] (1,1) -- (1,2);
\draw[color=dblue,line width=8pt] (0,2) -- (0,3);
\draw[color=dblue,line width=8pt] (1,5) -- (1,6);
\draw[color=dblue,line width=8pt] (0,6) -- (0,7);
\draw[color=dblue,line width=8pt] (2,6) -- (2,7);
\draw[color=dblue,line width=8pt] (0,4) -- (0,5);
\draw[color=yello,line width=8pt] (6,1) -- (6,2);
\draw[color=yello,line width=8pt] (7,2) -- (7,3);
\draw[color=dblue,line width=8pt] (1,7) -- (1,7.5);
\draw[color=dblue,line width=8pt] (3,7) -- (3,7.5);
\draw[color=yello,line width=8pt] (4,7) -- (4,7.5);
\draw[color=yello,line width=8pt] (6,7) -- (6,7.5);
\draw[color=yello,line width=8pt] (5,6) -- (5,7);
\draw[color=yello,line width=8pt] (7,6) -- (7,7);
\draw[color=yello,line width=8pt] (6,5) -- (6,6);
\draw[color=yello,line width=8pt] (7,4) -- (7,5);
\draw[color=gree,line width=8pt] (3,1) -- (4,1);
\draw[color=dred,line width=8pt] (3,6) -- (4,6);
\draw[color=dblue,line width=8pt] (1,3) -- (1,4);
\draw[color=yello,line width=8pt] (2,3) -- (2,4);
\draw[color=gree,line width=8pt] (2,2) -- (3,2);
\draw[color=yello,line width=8pt] (6,3) -- (6,4);
\draw[color=gree,line width=8pt] (4,2) -- (5,2);
\draw[color=dred,line width=8pt] (4,3) -- (5,3);
\draw[color=gree,line width=8pt] (4,4) -- (5,4);
\draw[color=dred,line width=8pt] (4,5) -- (5,5);
\draw[color=dred,line width=8pt] (2,5) -- (3,5);
\draw[color=dblue,line width=8pt] (3,3) -- (3,4);
\foreach \i in {0,2,4,6}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (1,\i) -- (1,\i+1);
}
\foreach \i in {1,3,5}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (0,\i) -- (0,\i+1);
}
\foreach \i in {1,5}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (2,\i) -- (2,\i+1);
}
\foreach \i in {0,2,4,6}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (3,\i) -- (3,\i+1);
}
\foreach \i in {1,3,5}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (4,\i) -- (4,\i+1);
}
\foreach \i in {2,4}{
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (5,\i) -- (5,\i+1);
}
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (0,-0.5) -- (0,0);
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (2,-0.5) -- (2,0);
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (0,7) -- (0,7.5);
\draw[decorate,decoration={zigzag,amplitude=0.8mm,segment length=2.5mm,post length=0mm},line width =2 pt,color=black] (2,7) -- (2,7.5);
\end{scope}
\end{tikzpicture}
\tcbline
Similar to the honeycomb lattice, we need two transfer matrices $T$ and $T'$, and assume periodic boundary conditions in the horizontal direction.
1. Show $T c_{2j}^\dag T^{-1}=c_{2j}^\dag$, $T c_{2j+1}^\dag T^{-1}=c_{2j}^\dag+c_{2j+1}^\dag+c_{2j+2}^\dag$, and $T\ket{\bf 0}=\ket{\bf 0}$.
2. Show $T' c_{2j}^\dag T'^{-1}=c_{2j-1}^\dag+c_{2j}^\dag+c_{2j+1}^\dag$, $T' c_{2j+1}^\dag T'^{-1}=c_{2j+1}^\dag$, and $T'\ket{\bf 0}=\ket{\bf 0}$.
3. Show $$(T'T)\left(\begin{array}{c}a^\dag_k \\b_k^\dag\end{array}\right)(T'T)^{-1}=\left(\begin{array}{cc}1&2\cos k\\2\cos k&1+4 \cos^2 k\end{array}\right)\left(\begin{array}{c}a^\dag_k \\b_k^\dag\end{array}\right)$$
where $a_k^\dag=\sum_{j=1}^{L/2} e^{\mathrm{i}\mkern1mu 2k j}c_{2j}^\dag$, $b_k^\dag=\sum_{j=1}^{L/2} e^{\mathrm{i}\mkern1mu k(2j+1)}c_{2j+1}^\dag$ and properly quantized momenta $k$ in $[-\pi/2,\pi/2]$.
4. Show $(T'T)f_{\pm}^\dag (k) (T'T)^{-1}=\lambda_{\pm}(k)f_{\pm}^\dag(k)$, where $f_+^\dag (k)=\cos \theta(k)a_k^\dag +\sin \theta(k)b_k^\dag$, $f_-^\dag (k)=-\sin \theta(k)a_k^\dag +\cos \theta(k)b_k^\dag$, $\lambda_+(k)=\cot^2 \theta(k)$, $\lambda_-(k)=\tan^2 \theta(k)$ and
$$\cot 2\theta(k)=\cos k$$
5. Show
\begin{equation}\label{eq:squaretm}
T'T=\exp\left(-2\sum_k \varepsilon(k)f^\dag(k)f(k)\right),
\end{equation}
where the $k$'s are now quantized in $[-\pi,\pi]$,
\begin{equation}\label{eq:squaredisp}
\varepsilon(k)=-\log \left(\cos k+\sqrt{1+\cos^2 k}\right),
\end{equation}
and find an expression for $f^\dag(k)$. [Hint: $\cot^2(\alpha+\pi/2)=\tan^2 \alpha$]
\end{exercise}
\begin{exercise}[label=ex:hydroquare]{The Aztec arctic circle (II: hydrodynamics)}
The fact that the transfer matrix can be put in the form (\ref{eq:squaretm}), (\ref{eq:squaredisp}) means we can apply the framework explained in the present chapter. In particular, (\ref{eq:implicit}) still holds in the hydrodynamic limit.
6. Show that the hydrodynamic equation
\begin{equation}\label{eq:hydro_square}
x+\mathrm{i}\mkern1mu y \frac{\sin z}{\sqrt{1+\cos^2 z}}=R \frac{\cos z}{\sqrt{1+\cos^2 z}}
\end{equation}
does implement the correct boundary conditions $\textrm{Re}\, z=0$ $\forall x>R-|y|$ and $\textrm{Re}\, z=\pi$ $\forall x<|y|-R$.
7. We introduce the map $z\mapsto \zeta(z)=\arctan[\frac{1}{\sqrt{2}}\tan z]$, initially defined for any $z$ in the strip $\textrm{Re}(z)\in [-\pi/2,\pi/2]$. We then extend it to $\textrm{Re}\, z\in [-\pi,\pi]$ by requiring $\zeta(z\pm \pi)=\zeta(z)\pm\pi$.
Show that (\ref{eq:hydro_square}) may be rewritten as
\begin{equation}\label{eq:hydro_squarebis}
x+\mathrm{i}\mkern1mu y \sin \zeta=\frac{R}{\sqrt{2}}\cos \zeta.
\end{equation}
Show that (\ref{eq:hydro_squarebis}) has two solutions $\zeta_F$ and $-\zeta_F^*$, with $\textrm{Re}\, \zeta_F\in (0,\pi)$, provided $x$ and $y$ satisfy $x^2+y^2<R^2/2$.
8. For $y=0$ and $x^2<R^2/2$, show that $\zeta_F$ is real, and
\begin{align*}
\braket{c_{2j+\sigma}^\dag c_{2j+\sigma}}&=\int_{-z_F}^{z_F}\frac{dk}{\pi}\left(\frac{1}{2}+(-1)^\sigma \frac{\cos 2\theta(k)}{2}\right)\\
&=\frac{z_F}{\pi}+\frac{(-1)^\sigma}{\pi}\arctan(\sin \zeta_F)
\end{align*}
where $\sigma=0,1$.
9. Compute the probabilities for vertical dimer occupancies along the horizontal line $y=0$ in the scaling limit.
\end{exercise}
\pagebreak
\section{Exact lattice calculations}
\label{sec:exactcalc}
The approach we have taken so far was variational or hydrodynamic: we showed how computing the limit shape boils down to solving PDEs, and found a few cases where this could be done explicitly. It turns out those precise cases can often be treated with other, more direct methods. That is, one computes the two point function explicitly, and then recovers our previous results by a careful asymptotic analysis. This approach is often more technical, but nicely complements hydrodynamics, by confirming its predictions and also providing more information.
Our aim is to give a flavor of how this can be done, on one of the simplest examples. We refer to \cite{okounkovreshetikhin,Boutillier_2017,Allegra_2016} for similar calculations. We take the transfer matrices from before, and try to compute the two point function for fermions in the domain wall geometry, where the top and bottom boundaries are domain wall-like, with all particles packed to the left of the origin, set at $x=0$. This is the precise geometry in which the hydrodynamic problem was solved in terms of the Hilbert transform (section \ref{sec:cburgers}).
Of course, as always in a free fermion calculation, if we know the propagator then Wick's theorem allows us to reconstruct all higher order correlations. However, as we shall see, computing the two point function is more difficult than in standard condensed matter theory setups.
Let us now fix some notation.
We take lattice sites to be half-integers, that is $x\in \mathbb{Z}+1/2$. We focus on the special case where operators are measured at the same imaginary time $y$, but the generalisation is straightforward. We wish to evaluate
\begin{equation}
\braket{c_x^\dag (y,R)c_{x'}(y,R)}=\frac{\braket{\psi|e^{-(R-y)H}c_x^\dag c_{x'}e^{-(R+y)H}|\psi}}{\braket{\psi|e^{-2RH}|\psi}}
\end{equation}
where expectation values are taken in the domain wall state $\ket{\psi}=\prod_{x<0} c_x^\dag \ket{\bf 0}$.
We also have $H=\int \frac{dk}{2\pi}\varepsilon(k)c^\dag(k)c(k)$, and recall $c^\dag_x=\int_{-\pi}^{\pi} \frac{dk}{2\pi}e^{-\mathrm{i}\mkern1mu kx}c^\dag(k)$, $c^\dag(k)=\sum_{j\in\mathbb{Z}+1/2}e^{\mathrm{i}\mkern1mu k j}c_j^\dag$. Using $e^{\varepsilon(k) c^\dag(k) c(k)}c^\dag(k) e^{-\varepsilon(k) c^\dag(k) c(k)}=e^{\varepsilon(k)}c^\dag(k)$, this may be rewritten as
\begin{equation}
\braket{c_x^\dag (y,R)c_{x'}(y,R)}=\int_{-\pi}^{\pi} \frac{dk}{2\pi}\int_{-\pi}^{\pi}\frac{dq}{2\pi}e^{-\mathrm{i}\mkern1mu kx +\mathrm{i}\mkern1mu qx'} e^{-(R-y)\varepsilon(k)+(R-y)\varepsilon(q)} G_R(k,q),
\end{equation}
with
\begin{equation}\label{eq:boundaryprop}
G_R(k,q)=\frac{\braket{c^\dag(k)c(q)e^{-2RH}}}{\Braket{e^{-2RH}}}.
\end{equation}
This remaining term is unusual, and illustrates the extra layer of complexity associated with imaginary time problems --for a real time calculation, just set $y=it$ and $R=0$. It is very difficult to evaluate (\ref{eq:boundaryprop}) in general; however, the special form of the domain wall state in which averages are taken allows for a small miracle.
\subsection{A nice bosonization trick}
Let us work out the case of nearest neighbor hoppings, for which $\varepsilon(k)=-\cos k$. The main player in the calculation will be the operator
\begin{equation}\label{eq:boson}
b=\sum_{x\in \mathbb{Z}+1/2} c_x^\dag c_{x+1}.
\end{equation}
Obviously, $H=-(b+b^\dag)/2$. Think of a finite-size regularisation of the chain, e.g.\ with sites from $-l$ to $l$. The commutator of $b$ with $b^\dag$ is given by a telescopic sum, which simplifies to
\begin{eqnarray}
[b,b^\dag]&=&c_{-l}^\dag c_{-l}-c_{l}^\dag c_l
\end{eqnarray}
which means $\braket{\psi|[b,b^\dag]|\psi}=1$.
Now expand $e^{-2RH}$ in power series. It is easy to check that $\braket{\psi|[b,b^\dag]H^p|\psi}=\braket{\psi|H^p|\psi}$ provided $l>p$. Hence for any term in the power series, we can always choose $l$ sufficiently large such that the commutator is scalar.
For any finite $R$ the series is expected to converge quite fast, which means we are allowed to assume $[b,b^\dag]=1$ throughout. Hence the operator $b$ is, effectively, a boson. It also happens to annihilate the domain wall state, $b\ket{\psi}=0$. Now, recall the following formula
\begin{equation}
e^{\alpha (b^\dag+b)}=e^{\alpha b^\dag }e^{\alpha b}e^{\frac{\alpha^2}{2}},
\end{equation}
which is a special case of the Baker-Campbell-Hausdorff identity\footnote{A proof of the general formula $e^{s(A+B)}=e^{sA}e^{sB}e^{-(s^2/2) [A,B]}$, valid in case $[A,B]$ commutes with $A$ and $B$, may be obtained by (i) showing that $e^{-sB}A e^{sB}=A+s[A,B]$ (take a derivative), and (ii) showing that both the rhs and lhs of the formula satisfy the same first order differential equation with the same initial data (again take a derivative).}. Using this formula both in the numerator and denominator with $\alpha=R$, combined with $e^{\alpha b}\ket{\psi}=\ket{\psi}$, $\bra{\psi}e^{\alpha b^\dag}=\bra{\psi}$, yields
\begin{equation}
G_R(k,q)=\braket{c^\dag(k)c(q) e^{Rb^\dag }}.
\end{equation}
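Before continuing, let us mention that the effective bosonic behaviour of $b$ is easy to check numerically, using the explicit Jordan-Wigner matrices recalled in the appendix. A minimal sketch (numpy assumed; the chain length $L=8$ and the integer labelling of sites, instead of $\mathbb{Z}+1/2$, are arbitrary choices):

```python
import numpy as np
from functools import reduce

# Explicit Jordan-Wigner matrices (see the appendix): c_k = Z⊗...⊗Z⊗c⊗I⊗...⊗I
c = np.array([[0., 0.], [1., 0.]])    # single-site annihilation operator
Z = np.diag([-1., 1.])                # Jordan-Wigner string factor (-1)^{c†c}
I2 = np.eye(2)
L = 8                                 # sites labelled 0..L-1

def c_op(k):
    return reduce(np.kron, [Z] * k + [c] + [I2] * (L - k - 1))

cs  = [c_op(k) for k in range(L)]
cds = [m.T for m in cs]               # real matrices: dagger = transpose

vac = np.zeros(2**L); vac[-1] = 1.0   # vacuum |0...0⟩ = (0,1)^⊗L in this basis
psi = vac
for k in range(L // 2):               # domain wall: left half occupied
    psi = cds[k] @ psi

b = sum(cds[k] @ cs[k + 1] for k in range(L - 1))
comm = b @ b.T - b.T @ b              # [b, b†]

assert np.allclose(b @ psi, 0)                               # b|ψ⟩ = 0
assert np.allclose(comm, cds[0] @ cs[0] - cds[-1] @ cs[-1])  # telescoping
assert np.isclose(psi @ comm @ psi, 1.0)                     # ⟨ψ|[b,b†]|ψ⟩ = 1
```

The last assertion is the statement that, in the domain wall state, $b$ effectively behaves as a canonical boson.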
The last trick is to differentiate with respect to $R$. Computing the commutator
\begin{equation}
[c^\dag(k)c(q),b^\dag]=\left(e^{-\mathrm{i}\mkern1mu q}-e^{-\mathrm{i}\mkern1mu k}\right)c^\dag(k)c(q),
\end{equation}
we obtain
\begin{equation}
\partial_R G_R(k,q)=(e^{-\mathrm{i}\mkern1mu q}-e^{-\mathrm{i}\mkern1mu k})G_R(k,q).
\end{equation}
Integrating back we finally obtain
\begin{equation}
G_R(k,q)=e^{R(e^{-\mathrm{i}\mkern1mu q}-e^{-\mathrm{i}\mkern1mu k})}G_0(k,q).
\end{equation}
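This prediction is easy to test numerically. For a Slater determinant state and a quadratic Hamiltonian, the overlap $\braket{\psi|e^{-2RH}|\psi}$ reduces to a determinant of the single-particle propagator restricted to the occupied sites (a standard free-fermion identity, which we do not derive here); the sketch below (numpy/scipy assumed) checks that, on a finite chain, the denominator matches $e^{R^2/2}$, the value implied by the bosonic manipulation above:

```python
import numpy as np
from scipy.linalg import expm

# For a Slater determinant |ψ⟩ = Π_{x∈S} c_x†|0⟩ and a quadratic
# H = Σ_{xy} h_{xy} c_x† c_y, one has ⟨ψ|e^{-2RH}|ψ⟩ = det[(e^{-2Rh})_{S,S}],
# the single-particle propagator restricted to the occupied sites S.
M = 60                        # chain length, large enough to mimic infinity
h = np.zeros((M, M))
for x in range(M - 1):        # H = -(b + b†)/2: hopping amplitude -1/2
    h[x, x + 1] = h[x + 1, x] = -0.5

S = np.arange(M // 2)         # domain wall: left half occupied
R = 1.0
num = np.linalg.det(expm(-2 * R * h)[np.ix_(S, S)])

# bosonization (BCH with α = R) predicts ⟨ψ|e^{-2RH}|ψ⟩ = e^{R²/2}
assert abs(num - np.exp(R**2 / 2)) < 1e-6
```

The finite-size corrections are governed by processes reaching the ends of the chain, and are astronomically small here.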
\subsection{General dispersion relation}
The case of general dispersion relation $\varepsilon(k)=-\sum_{n\geq 1}h_n \cos(nk)$ can be handled in a similar fashion. One introduces the set of operators
\begin{equation}
b_n=\sum_x c_x^\dag c_{x+n}\quad,\quad n\geq 1
\end{equation}
which effectively satisfy the commutation relations
\begin{equation}
[b_n,b_m^\dag]=n\delta_{nm}\qquad,\qquad [b_n,b_m]=0=[b_n^\dag,b_m^\dag].
\end{equation}
Using this one can show in a similar fashion
\begin{equation}
\braket{e^{2RH}}=e^{(\sum_n n h_n^2)R^2/2},
\end{equation}
and
\begin{equation}
G_R(k,q)=e^{R\sum_{n} h_n(e^{-\mathrm{i}\mkern1mu n q}-e^{-\mathrm{i}\mkern1mu n k})}G_0(k,q).
\end{equation}
The propagator finally reads
\begin{empheq}[box=\othermathbox]{align}\label{eq:exactpropagator}
\braket{c_x^\dag (y,R)c_{x'}(y,R)}=\int_{-\pi}^{\pi} \frac{dk}{2\pi}\int_{-\pi}^{\pi}\frac{dq}{2\pi}e^{-\mathrm{i}\mkern1mu kx +\mathrm{i}\mkern1mu qx' +y(\varepsilon(k)-\varepsilon(q))-\mathrm{i}\mkern1mu R(\tilde{\varepsilon}(k)- \tilde{\varepsilon}(q))} G_0(k,q)
\end{empheq}
where $\tilde{\varepsilon}(k)$ is the Hilbert transform of $\varepsilon(k)$. Let us insist again that this only holds for expectation values in the domain wall state.
All that is left is to compute $G_0(k,q)$. A direct calculation gives
\begin{equation}\label{eq:G0}
G_0(k,q)=\frac{1}{2\mathrm{i}\mkern1mu \sin \left(\frac{k-q}{2}-\mathrm{i}\mkern1mu 0^+\right)}.
\end{equation}
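As a sanity check of this formula, the sum $G_0(k,q)=\braket{c^\dag(k)c(q)}=\sum_{x<0}e^{\mathrm{i}\mkern1mu(k-q)x}$ over the occupied half-integer sites can be evaluated numerically, with a damping factor $e^{\eta x}$ playing the role of the $0^+$ prescription (numpy assumed; the values of $k$, $q$, $\eta$ are arbitrary):

```python
import numpy as np

# G_0(k,q) = Σ_{x<0, x∈Z+1/2} e^{i(k-q)x}, damped by e^{η x} with η → 0^+
k, q, eta = 0.7, -0.3, 0.01
x = -(np.arange(8000) + 0.5)                    # the occupied sites x < 0
lattice_sum = np.sum(np.exp((1j * (k - q) + eta) * x))
closed_form = 1 / (2j * np.sin((k - q - 1j * eta) / 2))
assert abs(lattice_sum - closed_form) < 1e-10
```

At finite $\eta$ the geometric sum equals $1/(2\mathrm{i}\mkern1mu\sin(\frac{k-q-\mathrm{i}\mkern1mu\eta}{2}))$ exactly, which reduces to (\ref{eq:G0}) as $\eta\to0^+$.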
\subsection{Asymptotic analysis}
The general method to evaluate the double integral in the limit $R\to \infty$ with $x/R$, $x'/R$, $y/R$ fixed is the stationary phase (or steepest descent) method. The argument inside the exponential can have very large real and imaginary parts. Writing
\begin{equation}
\theta(k)=kx+\mathrm{i}\mkern1mu y \varepsilon(k)+R\tilde{\varepsilon}(k),
\end{equation}
one expects the integral to be dominated, after proper contour deformation, by the region close to the points $k_c$ (resp. $q_c$) where the phase $\theta(k)$ (resp. $-\theta(q)$) becomes stationary. The stationary points are the solutions of the equation
\begin{equation}
\theta'(k)=x+\mathrm{i}\mkern1mu y \varepsilon'(k)+R\tilde{\varepsilon}'(k)=0,
\end{equation}
whose solutions we denote by $z$ and $-z^*$.
This equation is, in fact, identical to (\ref{eq:hydro_hilbert}), which we obtained from the hydrodynamic approach. A full asymptotic analysis falls outside the scope of these lectures. However, let us just mention that what matters is the Taylor expansion of the phase around the saddle points, that is the expansion
\begin{equation}
\theta(k)=\theta(z)+\frac{1}{2}\theta''(z)(k-z)^2+o((k-z)^2).
\end{equation}
Essentially, computing the asymptotics boils down to computing a Gaussian integral. The case of coinciding points $x'=x$ is trickier, since the $k$ and $q$ saddle points might coincide. In that case one has to take into account the pole at $k=q$. See e.g. \cite{2012BorodinGorin} for the details.
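The mechanics of the stationary phase method can be illustrated on a model integral not taken from the text, $\int_{-\pi}^{\pi}e^{\mathrm{i}\mkern1mu R\cos k}\frac{dk}{2\pi}=J_0(R)$, which has two saddle points, at $k=0$ and $k=\pi$. Expanding the phase to second order around each saddle and performing the two Gaussian integrals gives $J_0(R)\approx\sqrt{2/(\pi R)}\cos(R-\pi/4)$, with relative corrections of order $1/R$ (numpy/scipy assumed):

```python
import numpy as np
from scipy.special import j0

R = 50.0
exact = j0(R)                         # the integral, evaluated exactly by scipy
saddle = np.sqrt(2 / (np.pi * R)) * np.cos(R - np.pi / 4)
assert abs(exact - saddle) < 1e-3     # two-saddle approximation, O(1/R) error
```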
Let us briefly comment on the edge behavior. The arctic curve corresponds to the points where the two solutions (assuming there are two) $z$ and $-z^*$ become equal.
This means the second derivative $\theta''(z)$ vanishes, and it becomes necessary to expand to third order
\begin{equation}
\theta(k)=\theta(z)+\frac{1}{6}\theta'''(z)(k-z)^3+o((k-z)^3).
\end{equation}
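As an illustration, the merging condition can be worked out explicitly in the nearest neighbor case, for which $\varepsilon(k)=-\cos k$ and (with the sign conventions used above) $\tilde{\varepsilon}(k)=-\sin k$. The conditions $\theta'(z)=0$ and $\theta''(z)=0$ read
\begin{align*}
x+\mathrm{i}\mkern1mu y\sin z-R\cos z&=0,\\
\mathrm{i}\mkern1mu y\cos z+R\sin z&=0.
\end{align*}
The second equation gives $\tan z=-\mathrm{i}\mkern1mu y/R$, hence $\cos z=R/\sqrt{R^2-y^2}$ and $\sin z=-\mathrm{i}\mkern1mu y/\sqrt{R^2-y^2}$; plugging this into the first equation yields $x=\sqrt{R^2-y^2}$. The saddle points thus merge precisely on the arctic circle $x^2+y^2=R^2$.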
This naturally leads to the Airy kernel, since Airy functions may be alternatively defined as ${\rm Ai}(x)=\int_{\mathbb{R}+\mathrm{i}\mkern1mu 0^+} \frac{dk}{2\pi} e^{\mathrm{i}\mkern1mu kx +\mathrm{i}\mkern1mu k^3/3}$. The subject would require a longer exposition, but we have illustrated the two equivalent routes through which the Airy kernel emerges: either via fermions in a potential that becomes linear, in a Hamiltonian point of view, or through the coalescence of two saddle points.
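This integral representation of the Airy function is easy to check numerically: shifting the contour to $\mathrm{Im}\,k=\delta>0$ makes the integrand decay like $e^{-\delta k^2}$, so a plain discretised sum converges rapidly. A sketch (numpy/scipy assumed; the values of $x$, $\delta$ and the grid are arbitrary):

```python
import numpy as np
from scipy.special import airy

# Ai(x) = ∫_{R+i0^+} e^{ikx + ik³/3} dk/(2π), contour shifted to Im k = δ
x, delta, dt = 1.0, 0.5, 1e-4
k = np.arange(-20.0, 20.0, dt) + 1j * delta
val = np.sum(np.exp(1j * k * x + 1j * k**3 / 3)) * dt / (2 * np.pi)

assert abs(val - airy(x)[0]) < 1e-8     # airy(x)[0] is Ai(x)
```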
\pagebreak
\begin{exercise}[label=ex:toeplitz]{Bosonizing Toeplitz determinants\qquad \cite{Baxter_Onsagerhistory,Szego,okounkovreshetikhin}}
A semi-infinite Toeplitz matrix is a matrix $T=(T_{ij})_{i,j\in \mathbb{N}}$ whose elements depend only on the difference $i-j$, $T_{ij}=g_{i-j}$. It is convenient to interpret the $g_l$ as Fourier coefficients of a periodic function, sometimes called symbol:
\begin{equation}
g(k)=\sum_{l\in \mathbb{Z}} e^{\mathrm{i}\mkern1mu k l}g_l\qquad,\qquad g_l=\int_{-\pi}^{\pi} \frac{dk}{2\pi}e^{-\mathrm{i}\mkern1mu k l}g(k)
\end{equation}
We assume that $g$ is sufficiently smooth, has a well-defined logarithm which we denote by $\varepsilon(k)=\log g(k)$, and also that $\int_{-\pi}^{\pi}\frac{dk}{2\pi}\varepsilon(k)=0$. Consider the free fermion Hamiltonian $H=\int_{-\pi}^{\pi} \frac{dk}{2\pi}\varepsilon(k)c^\dag(k)c(k)$, with conventions (\ref{eq:fourierconventions}), which reads $H=\sum_{j\in \mathbb{Z}}\sum_{p>0} \varepsilon_p( c_{j+p}^\dag c_j+h.c)$ in real space, where the $\varepsilon_p$ are the Fourier coefficients of $\varepsilon(k)$. We introduce a similar domain wall state $\ket{\phi}=\prod_{j=0}^\infty c_j^\dag \ket{0}$ as in the text; note that the fermions are now located on the nonnegative integers.\\
\tcbline
{1.} Show using bosonization that $\braket{\phi|e^H|\phi}=\exp\left(\frac{1}{2}\sum_{p=1}^{\infty}p \varepsilon_p \varepsilon_{-p}\right)$.\\
{2.} Show that $T_{ij}=\braket{0|c_j e^H c_i^\dag |0}$ and $(T^{-1})_{ij}=\displaystyle{\frac{\braket{\phi| c_i^\dag e^H c_j|\phi}}{\braket{\phi|e^H|\phi}}}$.\\
{3.} Show using bosonization that
$$(T^{-1})_{ij}=\int_{-\pi}^{\pi}\frac{dk}{2\pi}\int_{-\pi}^\pi \frac{dq}{2\pi}e^{-\mathrm{i}\mkern1mu (ki-qj)} g_+^{-1}(k)g_-^{-1}(q)\frac{1}{1-e^{-\mathrm{i}\mkern1mu (k-q-\mathrm{i}\mkern1mu 0^+)}},$$
where the $g_{\pm }(k)=\exp\left(\sum_{\pm n>0}\varepsilon_n e^{\mathrm{i}\mkern1mu k n}\right)$ are the Wiener-Hopf factors of $g(k)$.
\tcbline
We now look at a finite $2N\times 2N$ truncation of $T$, which we denote $T_{N}$. We want to evaluate $\det T_{N}=\det_{0\leq i,j\leq 2N-1} (g_{i-j})$ in the limit $N\to \infty$. For this purpose, we introduce the state $\ket{\psi_N}=\prod_{|x|<N}c_x^\dag\ket{\bf 0}$ where the sites are now put on the half-integer line $x\in \mathbb{Z}+1/2$.\\
{4.} Show using Wick's theorem that $\det T_N=\braket{\psi_N|e^H|\psi_N}$.\\
Let us introduce the `right modes' $r_{n}=\sum_{x>0} c_x^\dag c_{x+n}$ as well as the `left modes' $l_{n}^\dag=\sum_{x<0} c_x^\dag c_{x+n}$ for $n\in \mathbb{N}^*$.\\
{5.} Under which condition on $n,m,N$ do we have $[r_n,b_m]=0$? $[r_n,r_m^\dag]=n\delta_{nm}$? $[l_n,l_m^\dag]=n\delta_{nm}$? \\
{6.} Bosonize and show that when $N\to \infty$ the determinant should converge to
$$\lim_{N\to \infty}\det T_N=\exp\left(\sum_{p=1}^{\infty}p \varepsilon_p \varepsilon_{-p}\right)$$
provided the series inside the exponential converges sufficiently fast.
This result was first found by Onsager-Kaufman\cite{Baxter_Onsagerhistory}, and proved by Szeg\"o \cite{Szego} shortly thereafter (both used different techniques).\\
{7.} You are Onsager and Kaufman, and you just realized that the spin-spin correlations $\braket{\sigma_{0,0}\sigma_{n,n}}$ along the diagonal of the classical isotropic Ising model on $\mathbb{Z}^2$ are given by a $n\times n$ Toeplitz determinant with symbol $g(k)^2=\frac{1-\alpha e^{-ik}}{1-\alpha e^{ik}}$, where $1/\alpha=\sinh^2 (\beta J)$. This holds below the critical temperature, $J\beta>J\beta_c=\textrm{arcsinh} \,1$. What is the magnetisation exponent of the 2d Ising model?
\end{exercise}
\pagebreak
\section[Conclusion]{Conclusion and related problems}
\label{sec:interactions}
We finish with a discussion of a few selected topics that go beyond the lectures, but still fit well with the spirit of the notes. We first examine the effects of interactions (section \ref{sec:interactions}), then how these (do not) affect the edge behavior (section \ref{sec:interaction_edge}), and finally explore the intricacies of the Wick rotation (section \ref{sec:wickrotation}).
\subsection{Interactions}
For interacting systems, i.e. systems that cannot be mapped onto free fermions, the logic of the present notes still applies. The difference is that no exact formula for the free energy exists in general anymore. There are however deformations of the dimer model for which some analytical progress is possible. Those models are called \emph{integrable}.
A discussion of all the intricacies of integrable models falls well outside the scope of the present notes, see \cite{Baxter1982,korepin_bogoliubov_izergin_1993,gaudin_2014} for reviews.
We consider the case of the six vertex model (equivalently, dimers interacting on even plaquettes), as explained in section \ref{sec:intro}. We parametrize the interaction term as $e^{\lambda}=1-\Delta$ or $e^{\lambda}=1-\cos\gamma$, depending on convenience.
The only result that we need here is the following fact: in the same way that the free energy at the free fermion point could be determined from a simple ground state energy with some current,
\begin{equation}
F(r,s)=\sum_{|k|<k_F} \varepsilon(k+\mathrm{i}\mkern1mu\nu)-\nu s,
\end{equation}
where $\nu$ and $k_F$ are determined from $r,s$, a similar expression holds true for half-plaquette interacting dimers (or the six vertex model). Namely, the free energy is determined from the biggest eigenvalue of the transfer matrix (with appropriate particle number and current). The latter is given by \cite{Reshetikhin,Granet_2019}
\begin{equation}
\Lambda=e^{-L\nu}\prod_{j=1}^{N} \frac{\sinh (\lambda_j+\mathrm{i}\mkern1mu \gamma) }{\sinh \lambda_j}+e^{L\nu} \prod_{j=1}^{N} \frac{\sinh (\lambda_j-\mathrm{i}\mkern1mu\gamma)}{\sinh \lambda_j}.
\end{equation}
The $\lambda_j$ play a role similar to momenta for free fermions, and $N/L$ controls $r$. The big difference is that the quantization condition is much more complicated. It is given by a set of equations
\begin{equation}\label{eq:Bethe_equations}
\left[e^{-2\nu} \frac{\sinh\left(\lambda_i+\mathrm{i}\mkern1mu \gamma/2\right)}{\sinh \left(\lambda_i-\mathrm{i}\mkern1mu \gamma/2\right)} \right]^{L}= \prod_{j\neq i}^N \frac{\sinh (\lambda_i-\lambda_j+\mathrm{i}\mkern1mu \gamma)}{\sinh (\lambda_i-\lambda_j-\mathrm{i}\mkern1mu \gamma)},
\end{equation}
called Bethe equations\footnote{The standard Bethe equations for the six vertex model, or its Hamiltonian limit the spin-1/2 XXZ chain, correspond to the case $\nu=0$. Here we need a slight generalisation to induce some imaginary current.}. The free case corresponds to $\gamma=\pi/2$, for which the rhs reduces to the constant sign $(-1)^{N-1}$ and the $\lambda_j$ can be obtained explicitly. The reader can easily imagine that solving the Bethe equations is in general extremely difficult. The fact that (away from the free fermion point) the rhs is a complicated product over the positions of the particles has an important physical consequence, which usually goes under the name \emph{dressing}: changing the number of particles ($N$) affects all the rapidities, as illustrated in figure \ref{fig:dressing}.
\begin{myfigure}[label=fig:dressing]{Dressing}
\begin{tikzpicture}[scale=0.7]
\filldraw (-6.179673045, 3.149150899) circle (0.15cm);
\filldraw (-5.047602703, 3.059403614) circle (0.15cm);
\filldraw (-3.919374928, 2.996286058) circle (0.15cm);
\filldraw (-2.795684542, 2.953567878) circle (0.15cm);
\filldraw (-1.675786344, 2.927120059) circle (0.15cm);
\filldraw (-0.5583186444,2.914471491) circle (0.15cm);
\tikzset{cross/.style={cross out, draw=black, fill=none, minimum size=2*(#1-\pgflinewidth), inner sep=0pt, outer sep=0pt}, cross/.default={2pt}}
\begin{scope}[xshift=11cm]
\filldraw (-5.399612373, 3.) circle (0.15cm);
\filldraw (-4.417864669, 3) circle (0.15cm);
\filldraw (-3.436116965, 3.) circle (0.15cm);
\filldraw (-2.454369261, 3.) circle (0.15cm);
\filldraw (-1.472621556, 3.) circle (0.15cm);
\filldraw (-0.4908738521, 3.) circle (0.15cm);
\end{scope}
\begin{scope}[yshift=-0.0cm]
\filldraw (-8.762162721, 3.550787827) node[cross=4pt,blue,ultra thick] {};
\filldraw (-7.628359244, 3.287397452) node[cross=4pt,blue,ultra thick] {};
\filldraw (-6.456658735, 3.088688702) node[cross=4pt,blue,ultra thick] {};
\filldraw (-5.273171345, 2.947930434) node[cross=4pt,blue,ultra thick] {};
\filldraw (-4.091475601, 2.851947) node[cross=4pt,blue,ultra thick] {};
\filldraw (-2.916069972, 2.789028335) node[cross=4pt,blue,ultra thick] {};
\filldraw (-1.74685429, 2.751049404) node[cross=4pt,blue,ultra thick] {};
\filldraw (-0.5818034649, 2.733170847) node[cross=4pt,blue,ultra thick] {};
\end{scope}
\begin{scope}[xshift=11cm,yshift=-0.0cm]
\draw (-7.363107782, 3.) node[cross=4pt,blue,ultra thick] {};
\filldraw (-6.381360078, 3.) node[cross=4pt,blue,ultra thick] {};
\filldraw (-5.399612373, 3.) node[cross=4pt,blue,ultra thick] {};
\filldraw (-4.417864669, 3) node[cross=4pt,blue,ultra thick] {};
\filldraw (-3.436116965, 3.) node[cross=4pt,blue,ultra thick] {};
\filldraw (-2.454369261, 3.) node[cross=4pt,blue,ultra thick] {};
\filldraw (-1.472621556, 3.) node[cross=4pt,blue,ultra thick] {};
\filldraw (-0.4908738521, 3.) node[cross=4pt,blue,ultra thick] {};
\end{scope}
\end{tikzpicture}
\tcbline
Illustration of dressing. Left: Case $\Delta=0.6$, $L=32$. Black circles are half the Bethe roots for $N=12$, blue crosses for $N=16$. Right: same for the free case $\Delta=0$ or $\gamma=\pi/2$.
\end{myfigure}
Dressing severely complicates asymptotic analysis, since any $\sum_j f(\lambda_j)$ becomes in the thermodynamic limit $\int f(\lambda)\rho(\lambda)d\lambda$, where $\rho$ is the density of Bethe roots. Now comes the big problem: the root density is only known in the case of zero current $\nu=0$, in which case it is a solution to a linear integral equation over a segment of the real line \cite{korepin_bogoliubov_izergin_1993}. In the case of nonzero current these aspects have been investigated numerically in Ref.~\cite{Granet_2019}, and one finds that the Bethe roots condense on some nontrivial curve in the complex plane.
A particular case that can be treated exactly is the so-called five vertex model, obtained from the six vertex model by setting one of the vertex weights to zero --it can also be seen as a (half-plaquette) interacting version of the honeycomb dimer model \cite{5vdimers}. This model is related to stochastic processes such as TASEP. The exact free energy can be computed, and limit shapes are also parametrized by analytic functions \cite{deGierKenyon}. This is to date one of the most complicated models in which the variational/hydrodynamic program has been applied. This model is also the only one for which one can show that the Luttinger parameter $K$ is not constant.
For the full six vertex model the hydrodynamic program has not been completed, the main bottleneck being to determine the exact curve on which the roots densify. However, the arctic curve has been determined analytically, using the lattice approach. In a series of papers \cite{ColomoPronko_arctic,ColomoPronkoZinnJustin}, Colomo and Pronko, and Colomo-Pronko-Zinn-Justin managed to compute exactly the lattice emptiness formation probability, which gives the distribution of the last particle. They managed to determine the precise location where this probability goes to zero in the thermodynamic limit. This location coincides with the arctic curve. Except at special values where $\gamma/\pi$ is a rational number, this curve is not algebraic. Later, an attractive \emph{tangent method} was also introduced to get the arctic curve in a slightly simpler way \cite{TangentMethod}. This method was also recently used to provide a proof \cite{Aggarwal_iceproof} of the Colomo-Pronko formula for the curve in the special case $\Delta=1/2$, the so-called combinatorial point.
\subsection{Tracy-Widom at the edge}\label{sec:interaction_edge}
Tracy-Widom scaling is, in fact, also expected at the edge, for the following simple physical reason: near the edge the particle (or hole) density goes to zero, hence particles (holes) are diluted. For local interactions such as the plaquette terms we discussed previously, it is reasonable to assume that the interactions become weaker and weaker. Hence particles become effectively free near the edge, and we expect the arguments presented in section \ref{sec:edgebe} to hold, with T-W scaling at the edge. One can check numerically that this is the case: in figure \ref{fig:twmc} we show Monte Carlo simulations of interacting dimers and the six vertex model. We compute the lattice emptiness formation probability numerically, rescale appropriately, and compare to the T-W distribution. As can be seen the agreement is quite good. We also compute the skewness of the discrete distribution, and compare it to T-W, with excellent (and improving for larger $L$) agreement. Other similar checks in inhomogeneous quantum chains can be found in \cite{Stephan_edge}, or with anharmonic chains in thermal equilibrium \cite{Mendl_2015}.
\begin{myfigure}[label=fig:twmc]{Tracy-Widom scaling with interactions}
\includegraphics[width=7cm]{./Plots/TWdimers6v/twdimercheck.pdf}\hfill
\includegraphics[width=7cm]{./Plots/C3lectures.pdf}
\tcbline
Numerical check of Tracy-Widom scaling for interacting dimers and half-plaquette interacting dimers (aka six vertex model). Left: rescaled distribution for interacting dimers ($e^\lambda=2$) and $N=64,256,512$, compared to the centered T-W distribution (thick red line). Right: skewness $\textrm{sk}=\mathbb{E}[(s-\mathbb{E}[s])^3]$ as a function of $N^{-2/3}$, for both dimers interacting on all plaquettes and the six vertex model, in which case we show the corresponding value of $\Delta$. This is compared to the T-W value $\textrm{sk}\simeq 0.22408$. The thickness of the lines gives the Monte Carlo error bars. The exact lattice skewness is also shown for comparison at the free fermion point $\Delta=0$ (bullets).
\end{myfigure}
Showing this analytically in some generality for integrable models is not easy, despite the fact that the lattice emptiness formation probability is known exactly in the six vertex model. Note that the argument above does not assume integrability. However, the dilution argument can be nicely illustrated in Bethe-Ansatz integrable models, such as the six vertex model. Indeed, looking back at the Bethe equations (\ref{eq:Bethe_equations}), the edge diluted limit corresponds to $N\ll L$, for which the rhs can be considered a constant. Hence in this limit dressing disappears, and we are back to free fermions.
On the rigorous side, stochastic processes such as ASEP appear to be more tractable. For example, a proof of T-W scaling is known for ASEP with step initial conditions \cite{Tracy2009}.
\subsection{Wick rotation and inhomogeneous quantum quenches}\label{sec:wickrotation}
We have seen that the expectation value of an observable $O_x$ in imaginary time is given by
\begin{equation}\label{eq:imagter}
\braket{O_x}=\frac{\braket{\psi|e^{-(R+y)H} O_x e^{-(R-y)H} |\psi}}{\braket{\psi|e^{-2RH}|\psi}}
\end{equation}
with either $R$ and $y$ discrete (dimer models, etc) or $R$ and $y$ continuous (XX chain in imaginary time). Our starting point will be the latter case. The reader interested in quantum models might have spotted a similarity between the previous expression and regular time evolution stemming from the Schr\"odinger equation. Indeed, for a quantum system prepared in a state $\ket{\psi}$ and let evolve in time with the Hamiltonian $H$, the wave function at time $t$ is $\ket{\psi(t)}=e^{-\mathrm{i}\mkern1mu H t}\ket{\psi}$, and the expectation value of $O$ at time $t$ becomes
\begin{align}
\braket{O_x(t)}&=\braket{\psi(t)|O_x|\psi(t)}\\
&=\braket{\psi|e^{\mathrm{i}\mkern1mu H t}O_x e^{-\mathrm{i}\mkern1mu H t}|\psi}
\end{align}
Formally the real time evolution may be recovered from setting $y=-\mathrm{i}\mkern1mu t$ in (\ref{eq:imagter}) and taking the limit $R\to 0^+$. This procedure is the famous \emph{Wick rotation}.
This observation can be useful for two reasons.
\begin{enumerate}[label=\emph{\alph*)}]
\item
First, exact calculations in the spirit of section \ref{sec:exactcalc}, or using integrability techniques, are quite algebraic in nature. For this reason they are often valid for any value of $R,y\in \mathbb{C}$, which means the Wick rotation is perfectly justified for any finite time $t$. We can use this to derive exact, highly nontrivial expressions for out of equilibrium quantities in a few selected cases.
For example, the partition function $Z(R)=\braket{\psi|e^{-RH}|\psi}$ of the six vertex model with domain wall boundary conditions is known exactly from the work of Korepin \cite{Korepin1982} and Izergin \cite{Izergin1987} (see also \cite{IzerginCokerKorepin1992}). We can then take the Hamiltonian limit, perform the Wick rotation, and get an exact expression \cite{Return_Stephan} for the amplitude $\braket{\psi|e^{\mathrm{i}\mkern1mu t H_{\rm XXZ}}|\psi}$, where $H_{\rm XXZ}$ is the Hamiltonian of the XXZ spin chain, an integrable generalisation of the XX chain. The modulus square of the amplitude is called return probability, or Loschmidt echo. This exact result is difficult to get from more direct approaches \cite{FeherPozsgay}, which makes this method worthwhile.
\item At a more speculative level, it is tempting to try and Wick-rotate results already in the thermodynamic limit, following \cite{Calabrese_2016}. Of course, there is no mathematical justification for this, and the fact that such a procedure can produce wrong results is a manifestation of the \emph{Stokes phenomenon}. Let us go back to our quench from a domain wall state. Hydrodynamic ideas can also be applied to out of equilibrium quantum integrable systems, as was established recently \cite{Doyonhydro,YoungItalians}. This subject goes under the name \emph{generalized hydrodynamics}. For a quench from a domain wall state a small miracle occurs, and it is possible to get the density profile exactly \cite{ColluraDeLucaViti}. What is nice is that the Wick rotation of the arctic curve gives back precisely the location of the front, the simplest example being the free fermion point arctic circle $x^2+y^2=R^2$, which becomes $x=\pm t$ after Wick rotation.
This does not work for all observables, though. For example the return probability or the density profile have the crazy property of being nowhere continuous as a function of $\Delta$ in the thermodynamic limit, which is clearly not the case in the original statistical problem.
\end{enumerate}
\vfill
\paragraph{Acknowledgements.}
I am grateful to many researchers for discussions and collaboration on various topics very much related to the present lectures. Among those are
J\'er\'emie Bouttier, Filippo Colomo, J\'er\^ome Dubail, Paul Fendley, Christophe Garban, Gr\'egoire Misguich, Vincent Pasquier, Fabio Toninelli and Jacopo Viti. I thank Saverio Bocini for a careful reading of these notes, suggesting various improvements, and helping correct many misprints. I also thank Alejandro Caicedo for working out the intricacies of exercises \ref{ex:tmsquare} and \ref{ex:hydroquare}.
\paragraph{Funding information.}
The ANR-18-CE40-0033 grant ('DIMERS') is warmly acknowledged.
\section[Free fermions reminder]{Self-contained reminder on free fermions techniques}
\label{app:freefermions_allyouneedtoknow}
\subsection{An explicit construction of lattice fermions}
\paragraph{Two level system.} The two level system is obviously the most important genuine quantum system ever, so let us start from this one. We consider the Hilbert space $\mathcal{H}\simeq \mathbb{C}^2$, and take the most down-to-earth approach, which is to work with two by two matrices ($\dim \mathcal{H}=2$). We choose the two basis vectors $\ket{0}=\twovec{0}{1}$ and $\ket{1}=\twovec{1}{0}$. $\ket{1}$ is interpreted as the presence of a particle, $\ket{0}$ as the absence of a particle (vacuum state). Pure states (or wave functions) are of the form
\begin{equation}
\ket{\psi}=\alpha \ket{0}+\beta \ket{1}\qquad, \qquad(\alpha,\beta) \in \mathbb{C}^2.
\end{equation}
Now let us introduce the two main heroes,
\begin{equation}
c=\twomat{0}{0}{1}{0}
\end{equation}
and
\begin{equation}
c^\dag=\twomat{0}{1}{0}{0}.
\end{equation}
$^\dag$ denotes the Hermitian conjugate. We use bra/ket notations, e.g.\ the bra $\bra{0}=(\ket{0})^\dag=(0\;\;1)$ is a row vector. We have $c^\dag \ket{0}=\ket{1}$ and $c\ket{1}=\ket{0}$: $c$ destroys the particle, so we call it the \emph{annihilation operator}, while $c^\dag$ creates a particle from the vacuum, so it is the \emph{creation operator}. We also have $c^\dag\ket{1}=c\ket{0}=\twovec{0}{0}=0$, so that it is not possible to create two particles, or to destroy a non existent particle. The zero vector is not to be confused with the vacuum. The right-hand side of the previous equation involves a slight abuse of notation; in the following we keep writing $0$ for any vector/matrix whose elements are all equal to zero.
Note also $c^\dag c=\twomat{1}{0}{0}{0}$, and $c^\dag c+c c^\dag=I_2=\twomat{1}{0}{0}{1}$, which will be useful in the following. Obviously, any matrix in $M_2(\mathbb{C})$ can be written as a linear combination of $c,c^\dag$, $c^\dag c$ and $cc^\dag$.
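All of these $2\times 2$ relations can be verified in a few lines (numpy assumed):

```python
import numpy as np

c = np.array([[0, 0], [1, 0]])
cd = c.T                                      # c†: real matrix, dagger = transpose
ket0, ket1 = np.array([0, 1]), np.array([1, 0])

assert np.array_equal(cd @ ket0, ket1)        # c†|0⟩ = |1⟩
assert np.array_equal(c @ ket1, ket0)         # c|1⟩ = |0⟩
assert not np.any(cd @ ket1)                  # c†|1⟩ = 0
assert not np.any(c @ ket0)                   # c|0⟩ = 0
assert np.array_equal(cd @ c + c @ cd, np.eye(2))   # c†c + cc† = I₂
```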
\paragraph{A collection of $L$ two level systems.}
We now consider the Hilbert space $\mathcal{H}\simeq (\mathbb{C}^2)^{\otimes L}$, where $L$ is an integer $\geq 2$. $\mathcal{H}$ has dimension $\dim \mathcal{H}=2^L$. We want a set of (Dirac) fermionic operators, that is, a set of $2^L\times 2^L$ matrices $c_i,c_i^\dag$ for $i=1,\ldots,L$ that satisfy
\begin{eqnarray}\label{eq:acom1}
c_i c_j^\dag+c_j^\dag c_i&=&\delta_{ij}I\\\label{eq:acom2}
c_i c_j&=&-c_j c_i
\end{eqnarray}
where $I=I_2\otimes \ldots \otimes I_2$ is the identity operator. $\otimes$ denotes tensor product, given by
\begin{equation}\label{eq:ugly}
\twomat{a_0}{b_0}{c_0}{d_0} \otimes \twomat{a_1}{b_1}{c_1}{d_1} =\left(\begin{array}{cccc}a_0 a_1&a_0 b_1&b_0 a_1&b_0 b_1\\a_0 c_1&a_0 d_1&b_0 c_1&b_0 d_1\\c_0 a_1&c_0 b_1&d_0 a_1&d_0 b_1\\c_0 c_1&c_0 d_1&d_0 c_1&d_0 d_1\end{array}\right)
\end{equation}
The fact that it satisfies
\begin{equation}\label{eq:useful}
(A\otimes B)(C\otimes D)=(AC)\otimes (BD)
\end{equation}
is more important than the explicit formula (\ref{eq:ugly}).
The relations (\ref{eq:acom1}) and (\ref{eq:acom2}) are called canonical anticommutation relations (CAR), since they involve the anticommutator $\{A,B\}=AB+BA$, instead of the commutator $[A,B]=AB-BA$.
An explicit construction, due to Jordan and Wigner \cite{JordanWigner1928} \footnote{This result is often used when studying quantum spin chains, which are modeled using Pauli matrices. The construction goes $\sigma_j^\alpha=I_2\otimes \ldots \otimes \sigma^\alpha\otimes I_2\ldots\otimes I_2$, for $\alpha=\textrm{x},\textrm{y},\textrm{z}$, where $\sigma^{\rm x}=c^\dag+c$, $\sigma^{\rm y}=-ic^\dag+ic$, $\sigma^{\rm z}=2c^\dag c-I_2$ are the Pauli matrices. The ``Pauli matrices acting on site $j$'' are related to fermions through the \emph{Jordan-Wigner transformation} $\sigma_j^{\rm z}=2c^\dag_j c_j-I$, $\sigma_j^x+i \sigma_j^y=2c_j^\dag \prod_{l=1}^{j-1}\left(I-2c_l^\dag c_l\right)=2 c_j^\dag (-1)^{\sum_{l=1}^{j-1}c_j^\dag c_j}$.} is given by:
\begin{eqnarray}
c_1^\dag&=&c^\dag \otimes \underbrace{I_2\otimes \ldots \otimes I_2}_{L-1\textrm{\, times}}\\\nonumber
\vdots&&\\
c_k^\dag &=& \underbrace{\twomat{-1}{0}{0}{1}\otimes \ldots \otimes \twomat{-1}{0}{0}{1}}_{k-1 \textrm{\, times}} \otimes\,c^\dag\,\otimes \underbrace{I_2\otimes \ldots \otimes I_2}_{L-k\textrm{\, times}}\label{eq:kth}\\\nonumber
\vdots&&\\
c_{L}^\dag&=&\underbrace{\twomat{-1}{0}{0}{1}\otimes \ldots \otimes \twomat{-1}{0}{0}{1}}_{L-1 \textrm{\, times}} \otimes\,c^\dag
\end{eqnarray}
The chain of $k-1$ tensor products of $(-1)^{c^\dag c}=e^{\mathrm{i}\mkern1mu \pi c^\dag c}$ in (\ref{eq:kth}) is called a \emph{Jordan-Wigner string}.
One can readily check that the CAR (\ref{eq:acom1}), (\ref{eq:acom2}) are satisfied, using (\ref{eq:useful}) lots of times.
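A lazy alternative to checking the CAR by hand is to let a computer do it on a small chain. A sketch (numpy assumed; $L=4$ is an arbitrary choice):

```python
import numpy as np
from functools import reduce

c = np.array([[0., 0.], [1., 0.]])    # single-site annihilation operator
Z = np.diag([-1., 1.])                # the string factor (-1)^{c†c}
I2 = np.eye(2)
L = 4                                 # 16x16 matrices, checked exhaustively

def cdag(k):                          # k-th creation operator, k = 1..L
    return reduce(np.kron, [Z] * (k - 1) + [c.T] + [I2] * (L - k))

cds = [cdag(k) for k in range(1, L + 1)]
cs = [m.T for m in cds]               # real matrices: dagger = transpose
Id = np.eye(2 ** L)

for i in range(L):
    for j in range(L):
        assert np.allclose(cs[i] @ cds[j] + cds[j] @ cs[i], (i == j) * Id)
        assert np.allclose(cs[i] @ cs[j] + cs[j] @ cs[i], 0)
```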
Of course, in practice, one often just uses the CAR without caring about an explicit representation. However, in the context of classical statistical mechanics, the above explicit construction turns out to be very useful.
\subsection{Summary of useful fermions properties}
From now on, we drop the identity matrix in the equations, and simply treat it as a scalar. The CAR now read
\begin{equation}\label{eq:carbis}
c_i^\dag c_j=\delta_{ij}-c_j c_i^\dag\qquad,\qquad c_i c_j=-c_j c_i\qquad,\qquad c_i^\dag c_j^\dag=-c_j^\dag c_i^\dag.
\end{equation}
In particular $c_i c_i=0=c_i^\dag c_i^\dag$. The following properties all follow rather straightforwardly from (\ref{eq:carbis}) and the existence of a vacuum state $\ket{\bf 0}=\ket{0}\otimes \ldots\otimes \ket{0}$ annihilated by all $c_i$, $i=1,\ldots,L$:
\begin{itemize}
\item $\bra{\bf 0}$ is annihilated by all $c_i^\dag$, $\bra{\bf 0}c_i^\dag=0$, $\forall i \in \{1,\ldots,L\}$.
\item Commutation relations for quadratic forms:
\begin{equation}\label{eq:quadraticcomm}
[c_i^\dag c_j,c_k^\dag c_l]=\delta_{jk}c_i^\dag c_l-\delta_{il}c_k^\dag c_j.
\end{equation}
In particular,
\begin{equation}\label{eq:comm}
[c_i^\dag c_i,c_k^\dag c_k]=0.
\end{equation}
\item Exponentiation [follows from the fact that $c_i^\dag c_i$ is idempotent, $(c_i^\dag c_i)^2=c_i^\dag c_i$]:
\begin{equation}\label{eq:expo}
\exp\left(\tau c_i^\dag c_i\right)=1+(e^\tau-1)c_i^\dag c_i.
\end{equation}
\item Time evolution [take derivative]:
\begin{equation}\label{eq:timeevolution}
e^{\tau c_i^\dag c_i}c_j^\dag e^{-\tau c_i^\dag c_i}=e^{\tau \delta_{ij}}c_j^\dag.
\end{equation}
\end{itemize}
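These identities can be checked numerically on small systems. A sketch in Python/NumPy (not from the original notes; the Jordan-Wigner helper is an illustrative reimplementation of (\ref{eq:kth})):

```python
import numpy as np

def jw_creation_ops(N):
    # Jordan-Wigner creation operators; convention |1> = (1,0)^T, |0> = (0,1)^T
    cdag = np.array([[0., 1.], [0., 0.]])
    string = np.diag([-1., 1.])
    ops = []
    for k in range(N):
        M = np.eye(1)
        for f in [string] * k + [cdag] + [np.eye(2)] * (N - k - 1):
            M = np.kron(M, f)
        ops.append(M)
    return ops

def expm_h(H, t):
    # e^{tH} for Hermitian H, via spectral decomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(t * w)) @ V.conj().T

L = 3
cd = jw_creation_ops(L); c = [m.T for m in cd]
n = [cd[i] @ c[i] for i in range(L)]
comm = lambda X, Y: X @ Y - Y @ X

# quadratic commutator: [c_0^dag c_1, c_1^dag c_2] = c_0^dag c_2
assert np.allclose(comm(cd[0] @ c[1], cd[1] @ c[2]), cd[0] @ c[2])

# exponentiation: exp(tau n_i) = 1 + (e^tau - 1) n_i
tau = 0.7
assert np.allclose(expm_h(n[0], tau), np.eye(2**L) + (np.exp(tau) - 1) * n[0])

# time evolution: e^{tau n_i} c_j^dag e^{-tau n_i} = e^{tau delta_ij} c_j^dag
E, Einv = expm_h(n[1], tau), expm_h(n[1], -tau)
assert np.allclose(E @ cd[1] @ Einv, np.exp(tau) * cd[1])
assert np.allclose(E @ cd[2] @ Einv, cd[2])
```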
\subsection{How to diagonalize a free fermion Hamiltonian?}
\label{sec:howtodiag}
A free (lattice) fermion Hamiltonian is a $2^L\times 2^L$ matrix that is quadratic in the fermion creation and annihilation operators (we assume hermiticity here, which is not necessary, strictly speaking):
\begin{equation}
H=\sum_{i,j=1}^{L} \left(A_{ij}c_i^\dag c_j+B_{ij}c_i^\dag c_j^\dag +B_{ij}^* c_j c_i\right)
\end{equation}
where $A$ and $B$ are $L\times L$ matrices ($A$ is Hermitian). This is of course a specific class of Hamiltonians, since we have at most $2L^2$ free parameters while the dimension of the Hilbert space is $2^L$.
In this context, free really means quadratic in the fermion operators.
In the following we explain how to diagonalise $H$ in the special case $B=0$, for simplicity. The procedure described below can be generalized to treat cases where $B$ is a nonzero matrix. Hamiltonians of the form
\begin{equation}\label{eq:U1}
H=\sum_{i,j=1}^{L} A_{ij}c_i^\dag c_j
\end{equation}
conserve the number of particles: applying such a Hamiltonian to an $n$-particle state $c_{i_1}^\dag \ldots c_{i_n}^\dag \ket{\bf 0}$ returns a sum of $n$-particle states (any fermion destroyed by $c_j$ is immediately created back by $c_i^\dag$). $A$ is a Hermitian $L\times L$ matrix, so it can be diagonalized in an orthonormal basis. The corresponding eigenvalue equations read (assume no multiplicities for simplicity)
\begin{equation}
\sum_{j=1}^L A_{ij} u_{jk}=\epsilon_k u_{ik}\quad,\quad k=1,\ldots,L.
\end{equation}
The eigenvalues are the $\epsilon_k$ and the $u_{jk}$ are orthonormal, meaning $\sum_{j=1}^{L}u_{jk}^* u_{jq}=\delta_{kq}$.
Now introduce a new set of fermions as
\begin{equation}
f_k^\dag =\sum_{j=1}^L u_{jk} c_j^\dag\quad,\quad k=1,\ldots,L\qquad,\qquad f_k=(f_k^\dag)^{\dag}.
\end{equation}
Then it is easy to show $\{f_k,f_q^\dag\}=f_k f_q^\dag+f_q^\dag f_k=\delta_{kq}$ and $\{f_k,f_q\}=f_k f_q+f_q f_k=0$, so the new set of operators also obeys the CAR. In terms of these the Hamiltonian reads
\begin{equation}
H=\sum_{k=1}^L \epsilon_k f_k^\dag f_k.
\end{equation}
Obtaining the spectrum becomes quite easy now. Obviously $H\ket{\bf 0}=0$. Using the anticommutation relations, $Hf_k^\dag\ket{\bf 0}=\epsilon_k f_k^\dag\ket{\bf 0}$. By induction, we obtain
\begin{equation}
H f_{k_1}^\dag f_{k_2}^\dag \ldots f_{k_n}^\dag\ket{\bf 0}=(\epsilon_{k_1}+\ldots+\epsilon_{k_n}) f_{k_1}^\dag f_{k_2}^\dag \ldots f_{k_n}^\dag \ket{\bf 0}.
\end{equation}
To get a nonzero eigenvector, the $k_i$ have to be pairwise distinct. Also, any permutation of the $k_i$ gives back the same eigenvector up to a sign. Hence the spectrum is
$$\sum_{i=1}^n \epsilon_{k_i}\quad,\quad \{k_1,\ldots,k_n\}\textrm{ subset of } \{1,\ldots,L\}.$$ There are $\binom{L}{n}$ linearly independent eigenvectors in the sector with $n$ particles, so the total number of eigenvalues is $\sum_{n=0}^L \binom{L}{n}=2^L=\dim \mathcal{H}$, as it should be.
An eigenstate with smallest eigenvalue is obtained by choosing all single particle energies $\epsilon_k$ that are negative, $\ket{\bf \Omega}=\prod_{k,\epsilon_k< 0}f_k^\dag \ket{\bf 0}$ (irrespective of the order in which the product is taken).
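The counting argument above can be tested by brute force for small $L$: the $2^L$ eigenvalues of the many-body matrix $H$ should be exactly the subset sums of the single-particle energies $\epsilon_k$. A sketch (Python/NumPy, illustrative helper, not from the original notes):

```python
import numpy as np
from itertools import combinations

def jw_creation_ops(N):
    # Jordan-Wigner creation operators; convention |1> = (1,0)^T, |0> = (0,1)^T
    cdag = np.array([[0., 1.], [0., 0.]])
    string = np.diag([-1., 1.])
    ops = []
    for k in range(N):
        M = np.eye(1)
        for f in [string] * k + [cdag] + [np.eye(2)] * (N - k - 1):
            M = np.kron(M, f)
        ops.append(M)
    return ops

L = 3
rng = np.random.default_rng(0)
A = rng.normal(size=(L, L)); A = (A + A.T) / 2        # random real symmetric A
cd = jw_creation_ops(L); c = [m.T for m in cd]
H = sum(A[i, j] * cd[i] @ c[j] for i in range(L) for j in range(L))

eps = np.linalg.eigvalsh(A)                           # single-particle energies
subset_sums = sorted(float(eps[list(S)].sum())
                     for n in range(L + 1) for S in combinations(range(L), n))
# the 2^L many-body eigenvalues are exactly the subset sums of the eps_k
assert np.allclose(np.linalg.eigvalsh(H), subset_sums)
```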
\subsection{More elaborate properties}
\label{sec:ff_advanced}
Here is a collection of results that are very useful in practice. Almost all free fermion calculations make use of several of them at some point. We start with the most famous one.
\begin{enumerate}[label=\emph{\alph*)}]
\item Wick's theorem \cite{Wick}. Let the $f_j$ be linear combinations of the $c_i,c_i^\dag$ for $j\in \{1,\ldots,2n\}$, and $H$ be a free fermion Hamiltonian. Then the thermal average
\begin{equation}
\braket{f_1\ldots f_{2n}}_{\beta}=\frac{\textrm{Tr} \left[f_1 \ldots f_{2n}e^{-\beta H}\right]}{\textrm{Tr}\left[ e^{-\beta H}\right]}
\end{equation}
may be expressed as
\begin{empheq}[box=\othermathbox]{align}\label{eq:Wickstheorem}
\braket{f_1 \ldots f_{2n}}_{\beta}=\underset{1\leq i,j\leq 2n}{\textrm{Pf}}\left( \Braket{\mathcal{T}[f_i f_j]}_{\beta}\right),
\end{empheq}
where $\mathcal{T}[f_i f_j]$ equals $f_i f_j$ if $i<j$, $0$ if $i=j$, and $-f_j f_i$ if $i>j$.
$\textrm{Pf}$ is the Pfaffian. For an antisymmetric matrix $A=(A_{ij})_{1\leq i,j\leq 2n}$, it is defined as
\begin{equation}
\textrm{Pf}\, A=\frac{1}{2^n n!}\sum_{\sigma \in S_{2n}}(-1)^\sigma A_{\sigma(1),\sigma(2)}\ldots A_{\sigma(2n-1)\sigma(2n)},
\end{equation}
where the sum runs over all permutations $\sigma$ of $\{1,\ldots,2n\}$, and $(-1)^\sigma$ is the signature of the permutation. The Pfaffian can be shown to be a square root of the determinant.
The factor $1/(2^n n!)$ may also be removed by restricting the sum to permutations satisfying $\sigma(2i-1)<\sigma(2i)$ for all $i$ and $\sigma(1)<\sigma(3)<\ldots<\sigma(2n-1)$.
As an example, the theorem yields
\begin{equation}
\braket{f_1 f_2 f_3 f_4}_{\beta}=\braket{f_1 f_2}_\beta\braket{f_3 f_4}_\beta-\braket{f_1 f_3}_\beta\braket{f_2 f_4}_\beta+\braket{f_1 f_4}_\beta\braket{f_2 f_3}_\beta.
\end{equation}
We refer to \cite{Gaudin_Wick} for a proof of (\ref{eq:Wickstheorem}). Note that as a particular case, expectation values in any ground state may be obtained by taking the limit $\beta\to \infty$.
\item Wick's theorem (no pairings). We consider the special case where $H$ is of the form (\ref{eq:U1}), while for $j=1,\ldots,n$ the $f_j$ are linear combinations of the $c_i^\dag$ only, and the $f_{n+j}=g_j$ are linear combinations of the $c_i$ only. Then
\begin{equation}
\braket{f_1 \ldots f_{2n}}_{\beta}=\det_{1\leq i,j\leq n}\left(\braket{f_i g_j}_\beta\right).
\end{equation}
We will need only this particular case in the limit $\beta\to \infty$ in these notes.
\item Trace of exponential.
\begin{equation}\label{eq:traceexp}
\textrm{Tr}\, e^{-\beta \sum_{i,j} P_{ij}c_i^\dag c_j} =\det(1+e^{-\beta P}).
\end{equation}
\emph{Proof}: diagonalize the quadratic form in the exponential.
\item General time evolution.
\begin{align}\label{eq:gentimeevolution}
e^{\sum_{i,j} P_{ij} c_i^\dag c_j}c_l^\dag e^{-\sum_{i,j} P_{ij} c_i^\dag c_j}&=\textstyle{\sum_m} (e^P)_{ml}c_m^\dag,\\\label{eq:gentimeevolution2}
e^{\sum_{i,j} P_{ij} c_i^\dag c_j}c_l e^{-\sum_{i,j} P_{ij} c_i^\dag c_j}&=\textstyle{\sum_m} (e^{-P})_{lm}c_m
\end{align}
\emph{Proof}: diagonalize the quadratic form in the exponentials.
\item Product of exponentials.
\begin{equation}\label{eq:productformula}
e^{\sum_{i,j} P_{ij}c_i^\dag c_j} e^{\sum_{i,j}Q_{ij}c_i^\dag c_j}=e^{\sum_{i,j}\log(e^P e^Q)_{ij}c_i^\dag c_j}.
\end{equation}
\emph{Proof}: use the Baker-Campbell-Hausdorff formula.
\item Average of an exponential. Let $P=(P_{ij})_{1\leq i,j\leq L}$ be an $L\times L$ matrix. Then, in any state where Wick's theorem applies, we have
\begin{equation}\label{eq:ff_fancy}
\Braket{e^{\sum_{i,j}P_{ij}c_i^\dag c_j}}=\det(I+(e^P-I) C).
\end{equation}
$C$ is the $L\times L$ matrix with elements $C_{ij}=\braket{c_i^\dag c_j}$, $I$ the $L\times L$ identity. Of course, it is also possible to combine (\ref{eq:ff_fancy}) with (\ref{eq:productformula}) to compute the average of a product of exponentials.
The formula gives the full counting statistics as a byproduct: for any subset $A$ of $\{1,2,\ldots,L\}$:
\begin{equation}
\Braket{e^{\lambda \sum_{j\in A}c_j^\dag c_j}}=\det_{i,j \in A}\left(\delta_{ij}+[e^{\lambda}-1]\braket{c_i^\dag c_j}\right).
\end{equation}
\emph{Proof}: just pick $P_{ij}=\lambda \delta_{ij}\delta_{i\in A}$ in (\ref{eq:ff_fancy}) and check that only the block $i,j\in A$ contributes to the determinant.
\end{enumerate}
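Several of these identities are easily checked numerically for small $L$. For instance, a sketch verifying (\ref{eq:traceexp}) for a Hermitian $P$ (Python/NumPy, illustrative helpers, not from the original notes):

```python
import numpy as np

def jw_creation_ops(N):
    # Jordan-Wigner creation operators; convention |1> = (1,0)^T, |0> = (0,1)^T
    cdag = np.array([[0., 1.], [0., 0.]])
    string = np.diag([-1., 1.])
    ops = []
    for k in range(N):
        M = np.eye(1)
        for f in [string] * k + [cdag] + [np.eye(2)] * (N - k - 1):
            M = np.kron(M, f)
        ops.append(M)
    return ops

def expm_h(H, t):
    # e^{tH} for Hermitian H, via spectral decomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(t * w)) @ V.conj().T

L, beta = 3, 0.8
rng = np.random.default_rng(1)
P = rng.normal(size=(L, L)); P = (P + P.T) / 2        # random real symmetric P
cd = jw_creation_ops(L); c = [m.T for m in cd]
K = sum(P[i, j] * cd[i] @ c[j] for i in range(L) for j in range(L))

lhs = np.trace(expm_h(K, -beta))                      # Tr e^{-beta sum P c^dag c}
rhs = np.linalg.det(np.eye(L) + expm_h(P, -beta))     # det(1 + e^{-beta P})
assert np.allclose(lhs, rhs)
```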
\emph{Proof of (\ref{eq:ff_fancy}):} Introduce a new set of fermions $d_k,d_k^\dag$ that diagonalise the quadratic form in the exponential on the lhs, as in section \ref{sec:howtodiag}. Denote by $\epsilon_k$ the corresponding eigenvalues of $P$. The $d_k,d_k^\dag$ also satisfy the CAR. Then
\begin{align}
\Braket{e^{\sum_{i,j=1}^L P_{ij}c_i^\dag c_j}}&=\Braket{e^{\sum_{k=1}^L\epsilon_{k}d_k^\dag d_k}},\\
&=\Braket{\prod_{k=1}^L e^{\epsilon_k d_k^\dag d_k}},\\
&=\Braket{\prod_{k=1}^L \left(1+[e^{\epsilon_k}-1]d_k^\dag d_k\right)},\\
&=\Braket{\sum_{n=0}^L \,\,\sum_{k_1<\ldots<k_n}\prod_{\alpha=1}^n (e^{\epsilon_{k_\alpha}}-1)d_{k_\alpha}^\dag d_{k_\alpha}},\\\label{eq:secondtolast_der}
&=\sum_{n=0}^L \,\,\sum_{k_1<\ldots<k_n} \det_{1\leq \alpha,\beta\leq n}\left([e^{\epsilon_{k_\alpha}}-1]\braket{d_{k_\alpha}^\dag d_{k_\beta}}\right),\\\label{eq:last_der}
&=\det_{1\leq k,q\leq L}\left(\delta_{kq}+[e^{\epsilon_k}-1]\braket{d_{k}^\dag d_q}\right).
\end{align}
We have used, in succession, (\ref{eq:comm}), (\ref{eq:expo}), the expansion of the product over $k$, Wick's theorem, and the Cauchy-Binet identity. Writing $P=U\Delta U^\dag$, where $\Delta=\textrm{diag}(\epsilon_1,\ldots,\epsilon_L)$, (\ref{eq:last_der}) reads
\begin{equation}
\Braket{e^{\sum_{i,j=1}^L P_{ij}c_i^\dag c_j}}=\det(I+(e^\Delta-I)U^\dag C U).
\end{equation}
Finally, multiplying by $U$ on the left and $U^\dag$ on the right does not change the determinant, and gives (\ref{eq:ff_fancy}).
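Formula (\ref{eq:ff_fancy}) can also be verified by brute force for small $L$: build the Slater-determinant ground state $\ket{\bf \Omega}$ explicitly, compute $C_{ij}=\braket{c_i^\dag c_j}$ in it, and compare both sides. A sketch (Python/NumPy, illustrative helpers, not from the original notes):

```python
import numpy as np

def jw_creation_ops(N):
    # Jordan-Wigner creation operators; convention |1> = (1,0)^T, |0> = (0,1)^T
    cdag = np.array([[0., 1.], [0., 0.]])
    string = np.diag([-1., 1.])
    ops = []
    for k in range(N):
        M = np.eye(1)
        for f in [string] * k + [cdag] + [np.eye(2)] * (N - k - 1):
            M = np.kron(M, f)
        ops.append(M)
    return ops

def expm_h(H, t):
    # e^{tH} for Hermitian H, via spectral decomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(t * w)) @ V.conj().T

L = 3
rng = np.random.default_rng(2)
A = rng.normal(size=(L, L)); A = (A + A.T) / 2 - 0.2 * np.eye(L)
cd = jw_creation_ops(L); c = [m.T for m in cd]

# Slater ground state |Omega> = prod_{eps_k < 0} f_k^dag |0>  (Wick's theorem applies)
eps, U = np.linalg.eigh(A)
gs = np.zeros(2**L); gs[-1] = 1.0                     # vacuum |0>
for k in range(L):
    if eps[k] < 0:
        gs = sum(U[j, k] * cd[j] for j in range(L)) @ gs
gs = gs / np.linalg.norm(gs)

C = np.array([[gs @ cd[i] @ c[j] @ gs for j in range(L)] for i in range(L)])
P = rng.normal(size=(L, L)); P = (P + P.T) / 2
K = sum(P[i, j] * cd[i] @ c[j] for i in range(L) for j in range(L))

lhs = gs @ expm_h(K, 1.0) @ gs                        # <e^{sum P c^dag c}>
rhs = np.linalg.det(np.eye(L) + (expm_h(P, 1.0) - np.eye(L)) @ C)
assert np.allclose(lhs, rhs)
```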
\subsection{Infinite lattice or continuum limit}
\label{sec:infinitelattice}
So far we have introduced lattice fermions as $2^L\times 2^L$ matrices, where $L$ is the number of lattice sites. It is however very useful to consider generalisations where the matrices become infinite or operators in the continuum. The case of the infinite lattice, e.g.\ $\mathbb{Z}$, can easily be dealt with by considering a finite lattice $j\in \{-l,-l+1,\ldots ,l\}$, computing observables $\braket{O}_l$, and then taking $l\to \infty$.
We often make use of the following notation
\begin{equation}\label{eq:fourierconventions}
c^\dag(k)=\sum_{x\in \mathbb{Z}} e^{\mathrm{i}\mkern1mu k x}c_x^\dag\qquad,\qquad c_x^\dag =\int_{-\pi}^{\pi} \frac{dk}{2\pi} e^{-\mathrm{i}\mkern1mu kx}c^\dag(k)
\end{equation}
for the Fourier transform on the infinite lattice in the main text, where $k\in [-\pi,\pi]$. They obey the anticommutation relations $\{c(k),c^\dag(k')\}=2\pi \delta(k-k')$.
It is of course also possible to consider continuous real space fermions, in which case we use the notation $c^\dag(x)$, which obeys the anticommutation relations $\{c(x),c^\dag(x')\}=\delta(x-x')$. In the main text, $k$ and $q$ always refer to momentum while $x$ and $y$ always refer to position, so the two cannot be confused.
\pagebreak
\end{appendix}
\section{Introduction}
In this article we continue the study of graphs and their complements initiated nearly sixty years ago by Battle, Harary and Kodama~\cite{BHK}.
They established that the complement of a planar graph with nine vertices is necessarily non-planar.
This result was independently proved by Tutte in \cite{Tutte}.
We express this by saying that the complete graph $K_9$ is not bi-planar.
In a series of articles, Harary and Akiyama investigated conditions under which both a graph $G$ and its complement $cG$ possess a specified property: have connectivity one, have line-connectivity one, are 2-connected, are forests, are bipartite, are outerplanar, are Eulerian \cite{AH1}, have the same girth, have circumference 3 or 4 \cite{AH3}, have the same number of endpoints \cite{AH4}.
Ichihara and Mattman \cite{IM} proved that the complement of an $n$-apex graph with $2n+9$ vertices is not $n$-apex, that is, $K_{2n+9}$ is not bi-$n$-apex.
In particular $K_{11}$ is not bi-1-apex.
A graph is \dfn{$n$-apex} if deleting some set of $n$ vertices yields a planar graph.
A 1-apex graph is usually called apex.
This article is about the property of linkless embeddability.
A graph is \dfn{intrinsically linked} (\dfn{IL}) if every embedding of it in $\mathbb{R}^3$ (or $S^3$) contains a nontrivial 2-component link.
A graph is \dfn{linklessly embeddable} if it is not intrinsically linked (\dfn{nIL}).
The combined work of Conway and Gordon \cite{CG}, Sachs \cite{Sa}, and Robertson, Seymour and Thomas \cite{RST} fully characterizes IL graphs: a graph is IL if and only if it contains a graph in the Petersen family as a minor.
The Petersen family consists of seven graphs obtained from $K_6$ by $\nabla Y-$moves and $Y\nabla-$moves, as presented in Figure \ref{fig-ty}.
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(160, 50)
\put(0,0){\includegraphics[width=2.4in]{fig-ty}}
\end{picture}
\caption{\small $\nabla Y-$ and $Y\nabla-$moves}
\label{fig-ty}
\end{center}
\end{figure}
We ask: what is the smallest $N$ such that the complete graph $K_N$ is not bi-nIL?
It is easy to prove that $N \le 15$:
Mader \cite{Ma} showed that a graph on $n\ge 6$ vertices and at least $4n-9$ edges contains a $K_6$ minor.
Since, for $n \ge 15$, $K_n$ has $\binom{n}{2} \ge 2(4n-9)$ edges,
for any graph $G$ with at least 15 vertices,
either $G$ or $cG$ contains a $K_6$ minor and hence is IL.
In \cite{PP2}, the last two authors proved $K_{10}$ is bi-apex and therefore bi-nIL (by \cite{Sa}), and that $K_{13}$ is not bi-nIL, thereby showing that $11\le N \le 13$.
In this paper we show that $N=11$.
Then we prove:
\begin{theorem}
If $G$ is a linklessly embeddable graph with at least 11 vertices, then the complement of $G$ is intrinsically linked.
\label{main}
\end{theorem}
Kotlov, Lov\'asz and Vempala \cite{KLV} conjectured that
for any graph of order $n$, $\mu(G)+\mu(cG)\ge n-2$,
where $\mu$ denotes the Colin de Verdi\`ere invariant of a graph.
We use Theorem \ref{main} in the next section to prove this conjecture for graphs of order 11.
A nIL graph is \textit{maxnIL} if it is not a proper subgraph of a nIL graph of the same order.
To prove Theorem~\ref{main}, it is enough to check the statement for maxnIL graphs.
This is because if $G$ is a subgraph of a nIL graph $G'$ of the same order, and $cG'$ is IL, then $cG$ is also IL, as it contains $cG'$ as a subgraph.
To this end, using two computer algorithms that will be detailed in Section \ref{CS}, we found a complete list of maxnIL graphs of order up to 11. We provide the list in the appendix \cite{appendix}.
For each maxnIL graph of order 11, its complement was found to be IL.
Techniques and considerations used to narrow down the computer search are presented in Section \ref{maxn}.\\
\section{maxnIL graphs of order up to 11}
\label{maxn}
By Mader \cite{Ma}, any graph of order 11 and size greater than 34 contains a $K_6$ minor, so it is IL.
On the other hand, by work of Aires \cite{A}, we know that a maxnIL graph on 11 vertices has at least 22 edges.
We therefore narrowed our search to graphs on 11 vertices and $22\le m\le 34$ edges.
Similar bounds on the number of edges were set for $n=7,8,9,10$
(in order to find all maxnIL graphs on 11 vertices,
we first needed to find all maxnIL graphs on 10 or fewer vertices,
as explained further below).
By \cite{NPP}, maxnIL graphs are 2-connected; so the search was restricted to graphs with connectivity at least 2.
We further narrowed the search space by proving a set of results on maxnIL graphs,
which we describe below.\\
To \textit{cone} a vertex over a graph $H$ means adding a new vertex $v$ and connecting $v$ to every vertex of $H$.
Sachs \cite{Sa} showed that a graph $H$ is planar if and only if coning one vertex over $H$ yields a nIL graph.
It follows that
if a nIL graph $G$ of order $n$ has a vertex $v$ of degree $n-1$,
then $G\setminus v$ is planar.
We use this to show:
\begin{proposition}
\label{prop-maxnil-apex}
Let $G$ be a maxnIL graph of order $n$.
Then the following are equivalent:
(i)~$G$ is apex;
(ii)~$G$ has a vertex of degree $n-1$;
(iii)~$G$ is a cone on a maximal planar graph of order $n-1$.
\end{proposition}
\begin{proof}
(i) $\to$ (ii):
Suppose $G$ is apex.
Then for some vertex $v \in G$, $G\setminus v$ is planar.
If $\deg(v) < n-1$, then an edge $e$ incident to $v$ can be added to $G$
such that $G+e$ is nIL, contradicting the maximality of $G$.
(ii) $\to$ (iii):
If a vertex $v$ has degree $n-1$,
then, by \cite{Sa}, $G\setminus v$ is planar.
It must be maximal planar since otherwise
an edge $e$ can be added to $G\setminus v$ while preserving its planarity,
so that $G+e$ is nIL, again a contradiction.
(iii) $\to$ (i): This follows from the definition of apex.
\end{proof}
This means that each plane triangulation of order $n\ge 5$ gives rise to a unique maxnIL graph of order $n+1$.
There is only one plane triangulation with 5 vertices; and, by work of Bowen and Fisk \cite{BF}, the plane triangulations with $6\le n\le 10$ vertices are also known, and their numbers are: $T_6=2, T_7=5, T_8=14, T_9=50, T_{10}=233$.
The number of maxnIL apex graphs is presented in Table 1.
An edge $e$ of a graph $G$ is \textit{triangular} if $e$ is in some 3-cycle in $G$.
A graph is \textit{triangular} if every edge of it is triangular.
A graph $G$ is the \dfn{clique sum} of two graphs $G_1$ and $G_2$ over $K_p$ if $V(G)=V(G_1)\cup V(G_2)$, $E(G)=E(G_1)\cup E(G_2)$, and the subgraphs induced by $V(G_1)\cap V(G_2)$ in $G_1$ and $G_2$ are both complete of order $p$.
Clique sum constructions are often used to create larger order graphs with a certain property (e.g., linklessly embeddable, or $K_6$-minor-free) from two graphs with the same property.
\begin{proposition}
If every maxnIL graph of order at least 3 and at most $n$ is triangular, then every maxnIL graph $G$ of order $n+1$ is 3-connected.
\label{3-connected}
\end{proposition}
\begin{proof}
Assume $G$ is a maxnIL graph of order $n+1$ that is not 3-connected.
By \cite{NPP}, $G$ is a clique sum of two maxnIL graphs of order at most $n$
over an edge which is non-triangular in at least one of the two graphs.
This contradicts the assumption that every maxnIL graph of order at most $n$ is triangular.
\end{proof}
The only maxnIL graphs of order $3 \le n \le 5$ are $K_n$.
And $K_6^-$ ($K_6$ minus an edge) is the only maxnIL graph of order 6.
These graphs are all triangular.
It follows by Proposition \ref{3-connected} that every maxnIL graph of order 7 is 3-connected and hence has minimal degree at least 3.
For each $n=7, 8, 9, 10$, we used the Nauty program designed by McKay~\cite{MP} to generate the 2-connected graphs of minimal degree 3 with $n$ vertices and $2n\le m\le 4n-10$ edges.
We then used the Mathematica program detailed in Section \ref{CS}
to find the maxnIL graphs among them and to confirm they were all triangular.
The search considered a total of 5,065,328 graphs.
By Proposition \ref{3-connected}, this implies that all maxnIL graphs of order 11 are 3-connected.
(However, there exists a maxnIL graph of order 11 which is not triangular.
See Figure \ref{1127}.)
As our next step in finding all maxnIL graphs of order 11,
we first focused on those with a vertex of degree 3.
By \cite{NPP}, if a vertex $v$ of a maxnIL graph $G$ has degree 3,
then $v$ belongs to a $K_4$ subgraph of $G$ and $G\setminus v$ is maxnIL.
Having found all maxnIL graphs of order 10 (there are 107 of them),
we constructed, from each such graph $G$, graphs of order 11 by
adding a new vertex and connecting it to three vertices which form a triangle in $G$.
We obtained 1963 graphs this way, of which 159 were maxnIL.
Since plane triangulations have minimal degree 3,
by Proposition \ref{prop-maxnil-apex} maxnIL apex graphs have minimal degree 4 for $n\ge 5$.
Therefore this set of 159 maxnIL graphs consists of non-apex graphs only
and is disjoint from the set of 233 maxnIL apex graphs found above.
The search for all maxnIL graphs of order 11 was now reduced to considering 3-connected graphs
with minimum degree $\ge 4$.
Furthermore, since we had already found all maxnIL graphs of order 11 that are apex,
we now only had to search for non-apex maxnIL graphs of order 11;
by Proposition \ref{prop-maxnil-apex},
these graphs all have maximum degree $\le 9$.
Unfortunately, (to our knowledge) Nauty does not accept connectivity 3 as a parameter,
so we had to consider all 2-connected graphs of order 11 with vertex degrees between 4 and 9.
There are 158,056,639 such graphs, which we saved in a .g6 master file.
We used the Alabama Super Computer (ASC) and the Python code detailed in Section \ref{CS} to sift out the graphs which had $K_6$ minors.
To do this, we split the master file into subfiles with approximately 1,000,000 graphs each, and then each subfile was tested on 16 cores for up to 150 hours, with at most 15 jobs running at a time.
Due to the nature of the algorithm, the larger graphs (size 34 for instance) ran faster, since they were more likely to contain a $K_6$ minor and be dismissed by the program.
Some instances for size 27 timed out, so we had to run them again on 32 cores with more time allotment. This process took roughly 3 months.
Once a job would finish and a $K_6$-minor free list of graphs was produced, the list would be portioned into batches of 300,000 graphs and the Mathematica program for finding maxnIL graphs would run on up to 4 Intel Core i5-6500 @ 3.20GHz processors at a time, at a rate of roughly 120,000 graphs per day.
This time, the jobs involving smaller graphs would finish first, since fewer cycles needed to be considered.
The search yielded 317 order 11, 3-connected maxnIL graphs with vertex degrees between 4 and 9.
All of them are necessarily non-apex (as explained above).
Together with the 159 maxnIL graphs with minimum vertex degree 3 and the 233 maxnIL apex graphs, the search produced all the 709 maxnIL graphs of order 11.
The Mathematica program checked all their complements to be IL.
This proves Theorem~\ref{main},
as any nIL graph of order greater than 11 contains a nIL induced subgraph of order 11.
Table~1 shows the number of maxnIL graphs for each order of at most 11.
For order 11, the search found maxnIL graphs with $m$ edges for every $27 \le m\le 34$,
and none for any other values of $m$.
\begin{table}[htpb!]
\label{TableMN}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
order $n$& 6 & 7 & 8 & 9 & 10 & 11 \\ \hline
$1-$apex maxnIL graphs & 1 & 2 & 5 & 14 & 50 & 233 \\ \hline
all maxnIL graphs & 1 & 2 & 6 & 20 & 107 & 709 \\ \hline
\end{tabular}\\\vspace*{0.2in}
\caption{ maxnIL graphs with $6\le n\le 11$ vertices.}
\end{table}
\begin{remark} The graph $G_{11,27}$, depicted in Figure \ref{1127}, has some unique properties among the maxnIL graphs of order 11. It is the only graph in that list which has 27 edges. It is also the only maxnIL graph of order 11 which is not triangular. In this graph, the edges $v_1v_8$ and $v_2v_{10}$ are non-triangular. Given that all maxnIL graphs of order at most 10 are triangular, $G_{11,27}$ is a minimal order example of a non-triangular maxnIL graph.
\end{remark}
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(250, 200)
\put(0,0){\includegraphics[width=3in]{fig-maxnIL1127}}
\end{picture}
\caption{\small MaxnIL graph with $n=11$ vertices and $m=27$ edges. Edges $v_1v_8$ and $v_2v_{10}$ are non-triangular.}
\label{1127}
\end{center}
\end{figure}
\begin{remark}
Given that all maxnIL graphs of order $\le 11$ are 3-connected,
it is natural to ask whether every maxnIL graph (of any order) is 3-connected.
We see as follows that the answer is negative.
Starting with $G_{11,27}$, adding a new vertex and connecting it to the endpoints of one of its non-triangular edges, yields a nIL graph of order 12 that is not 3-connected.
This graph is maxnIL, since it is the clique sum of two maxnIL graphs over an edge which is non-triangular in one of them \cite{NPP}. This is a minimal order maxnIL graph with vertex connectivity 2.
\end{remark}
\begin{remark}
In light of Theorem~\ref{main},
one may wonder whether for every graph $G$ of order 11 either $G$ or $cG$ has a $K_6$ minor.
The graph in Figure \ref{1121}(a) is a maxnIL graph of order 11 since it is a cone over a maximal planar graph.
In particular, it has no $K_6$ minor.
It turns out that the complement of this graph has no $K_6$ minor either.
Thus $K_{11}$ is ``bi-$K_6$-minor-free.''
We leave it as an open question whether $K_{12}$ is bi-$K_6$-minor-free.
\end{remark}
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(370, 170)
\put(0,0){\includegraphics[width=5in]{fig-11K6free}}
\end{picture}
\caption{\small (a) MaxnIL apex graph $G$. Vertex $v_{11}$ connects with all other ten vertices. (b) The complement of $G$.}
\label{1121}
\end{center}
\end{figure}
\begin{remark} The search found all maxnIL graphs of order at most 11 to be 2-apex.
While there are known maxnIL graphs of order 13 which are not 2-apex (such as Maharry's $Q_{13,3}$ graph \cite{M}), it is still an open question whether all maxnIL graphs of order 12 are 2-apex.
\end{remark}
\begin{remark} In 1990, Colin de Verdi\`ere \cite{dV} introduced the graph invariant $\mu$, based on spectral properties of matrices associated with a graph $G$. He showed that $\mu$ is monotone under taking minors
and that planarity is characterized by the inequality $\mu\le 3$. Lov\'asz and Schrijver
showed that linkless embeddability is characterized by the inequality $\mu\le 4$ \cite{LS}.
By reformulating the definition of $\mu$ in terms of vector labelings, Kotlov, Lov\'asz, and Vempala \cite{KLV} related some topological properties of a graph to the $\mu$ invariant of its complement:
for $G$ a simple graph on $n$ vertices,
(a)~if $G$ is planar then $\mu(cG)\ge n-5$;
(b)~if $G$ is outerplanar then $\mu(cG)\ge n-4$.
The three authors also conjectured that for any simple graph $G$ of order $n$, $\mu(G)+\mu(cG) \ge n-2$.
The conjecture is known to hold for planar graphs, graphs of order at most 10, and graphs with $\mu(G)\ge n-6$ \cite{Hog}.
Theorem \ref{main} validates the conjecture for graphs of order 11:
$G$ or $cG$, say $G$, is IL, so $\mu(G) \ge 5 = 11-6$;
and the conjecture holds for graphs with $\mu(G) \ge n-6$.
\end{remark}
We make a few observations about the graphs listed in Table 1.
\begin{enumerate}
\item There is exactly one non-apex maxnIL graph with 8 vertices, called the J{\o}rgensen graph \cite{J}.
The J{\o}rgensen graph is minor minimal non-apex.
\item Since the complement of each plane triangulation with 9 vertices is non-planar by \cite{BHK}, the complement of each apex maxnIL graph with 10 vertices is non-planar.
Among these 50 apex maxnIL graphs, only one has a complement which is IL, with all other complements being nIL.
This complement is the graph $G_9$ in the Petersen family plus one isolated vertex.
\item
It is straightforward to show that
if a maxnIL graph of order $n$ is apex then it has $4n-10$ edges.
But the converse does not hold;
there are maxnIL graphs of order $n$ with $4n-10$ edges that are not apex.
In Figure~\ref{non-apex} we provide a minimal order example of such a graph.
\end{enumerate}
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(250, 130)
\put(0,0){\includegraphics[width=3.5in]{fig-nonapex}}
\end{picture}
\caption{\small A maxnIL graph with $n=9$ vertices and $4n-10$ edges that is non-apex.}
\label{non-apex}
\end{center}
\end{figure}
\section{The Algorithms Used for the Searches}
\label{CS}
{\it Algorithms for determining if a graph is IL or maxnIL}.
We start with an arbitrary embedding $G \subset S^3$ of the given graph.
Any other embedding $G'$ of the same graph
can be obtained from $G$ by a combination of isotopy
and crossing changes (i.e., allowing edges to ``go through each other'').
A crossing change between a pair of edges
is equivalent to adding a full twist (two half-twists)
between the two edges, as in Figure~\ref{twist}.
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(250, 70)
\put(0,0){\includegraphics[width=3.5in]{fig-twist}}
\end{picture}
\caption{\small A crossing change between two edges of a graph.}
\label{twist}
\end{center}
\end{figure}
The linking number between any two disjoint cycles in $G'$ can be computed from their corresponding linking number in $G$
by taking into account the number of twists added
between all (disjoint) pairs of edges
in the two components of the link in question.
In the algorithm, for every 2-component link $L$ in the given embedding $G$, we compute its linking number $\mathrm{lk}_G(L)$.
We then assign a variable to every disjoint pair of edges in $G$, representing the number of twists to be added between the two edges in order to obtain a new embedding $G'$.
We write the linking number $\mathrm{lk}_{G'}(L)$ in terms of $\mathrm{lk}_{G}(L)$ and the variables.
Setting all these linking numbers equal to zero gives us a system of linear equations that has an integer solution if and only if there exists an embedding $G'$ of the given graph such that every link in $G'$ has linking number zero.
By the work of Robertson, Seymour, and Thomas~\cite{RST}, a graph has a linkless embedding if and only if it has an embedding with no odd linking numbers.
So, instead of solving the system of linear equations for integer solutions, we work in $\mathbb{Z}_2$ (which is much faster).
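The remaining step, deciding whether the resulting linear system has a solution over $\mathbb{Z}_2$, is plain Gaussian elimination. A minimal, self-contained sketch of that step (in Python; this is an illustration, not the authors' Mathematica implementation):

```python
import numpy as np

def solvable_mod2(A, b):
    """Decide whether A x = b admits a solution over Z_2.
    A: (equations x variables) 0/1 matrix, b: 0/1 right-hand side."""
    A = np.array(A, dtype=np.uint8) % 2
    b = np.array(b, dtype=np.uint8) % 2
    M = np.column_stack([A, b])            # augmented matrix
    rows = M.shape[0]
    r = 0
    for col in range(M.shape[1] - 1):      # Gauss-Jordan elimination over GF(2)
        piv = next((i for i in range(r, rows) if M[i, col]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(rows):
            if i != r and M[i, col]:
                M[i] ^= M[r]               # row addition = XOR over Z_2
        r += 1
    # inconsistent iff some row reduces to (0 ... 0 | 1)
    return not any(M[i, :-1].sum() == 0 and M[i, -1] for i in range(rows))

# x1+x2 = 1, x2+x3 = 1, x1+x3 = 0 is solvable mod 2; x1+x2 = 1 and x1+x2 = 0 is not
assert solvable_mod2([[1, 1, 0], [0, 1, 1], [1, 0, 1]], [1, 1, 0])
assert not solvable_mod2([[1, 1], [1, 1]], [1, 0])
```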
Lastly, to determine if a given nIL graph $G$ is maxnIL, we simply check if for every pair of nonadjacent vertices $v, w \in G$ the graph $G + vw$ is IL.
Both of these algorithms were implemented in Mathematica. \\
{\it Searching for $K_6$ minors.}
Recall that $H$ is a minor of $G$ if deleting and contracting zero or more edges and deleting zero or more vertices in $G$ yields a graph isomorphic to $H$.
However, to check if a connected graph $G$ has a $K_6$ minor, we only need to check if contracting zero or more edges in $G$ yields $K_6$; deleting edges is not necessary since $K_6$ is a complete graph, and deleting vertices is not necessary since $G$ is assumed to be connected.
Now consider a connected graph $G$ of order $n > 6$. Contracting an edge in $G$ produces a graph with one fewer vertex, so $G$ has a $K_6$ minor if and only if there exists a set of $n-6$ edges in $G$ such that contracting those edges yields a graph isomorphic to $K_6$.
If $G$ has $m$ edges, there are ${m \choose n-6}$ sets of edges to check.
After contracting the edges in each of these sets, checking if the resulting minor is isomorphic to $K_6$ can be done by simply checking if every vertex has degree~5.
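The contraction search described above can be sketched as follows (the authors used Python with NetworkX; this self-contained plain-Python version, with names of our own choosing, tracks the merges with a union-find structure and tests the quotient for completeness — for a quotient on 6 vertex classes, "every vertex has degree 5" is the same as "all 15 pairs of classes are adjacent"):

```python
from itertools import combinations

def has_k6_minor(vertices, edges):
    """Search for a K6 minor in a connected graph: G of order n has a
    K6 minor iff contracting some set of n-6 edges yields K6."""
    n = len(vertices)
    if n < 6:
        return False

    def find(parent, v):             # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for edge_set in combinations(edges, n - 6):
        parent = {v: v for v in vertices}
        for u, v in edge_set:        # contract each chosen edge
            parent[find(parent, u)] = find(parent, v)
        classes = {find(parent, v) for v in vertices}
        if len(classes) != 6:        # a chosen edge lay inside one class
            continue
        # the quotient is K6 iff all 15 pairs of classes are adjacent
        quotient = {frozenset((find(parent, u), find(parent, v)))
                    for u, v in edges if find(parent, u) != find(parent, v)}
        if len(quotient) == 15:
            return True
    return False
```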
This algorithm was implemented in Python, using the NetworkX library.\\
{\bf Acknowledgement:} This work was made possible in part by a grant of high performance computing resources and technical support from the Alabama Supercomputer Authority.
\bibliographystyle{amsplain}
\title{Levinson theorem in two dimensions}
\begin{abstract}
A two-dimensional analogue of Levinson's theorem for nonrelativistic quantum mechanics is established, which relates the phase shift at threshold (zero momentum) for the $m$th partial wave to the total number of bound states with angular momentum $m\hbar$ ($m=0,1,2,\ldots$) in an attractive central field.
\end{abstract}
\section*{\large \S1. Introduction}
In 1949, a theorem in quantum mechanics was established by
Levinson[1],
which relates the phase shift at threshold (zero momentum) for the
$l$th partial wave, $\delta_l(0)$, to the number of bound states
with the same
azimuthal quantum number, $n_l$. This is one of the most interesting
and beautiful results in nonrelativistic quantum theory. The subject
was then studied by many authors; some are listed in the References
[2-5]. The relativistic generalization of Levinson's theorem has also
been well established[6-8]. However, most of these authors deal with
the problem in ordinary three-dimensional space. To our knowledge,
a two-dimensional version of Levinson's theorem has not been presented
in the literature. The purpose of this work is to develop an analogue
of this theorem in two spatial dimensions. It relates the phase shift
at threshold for the $m$th partial wave, $\eta_m(0)$, to the total
number of bound states with angular momentum $m\hbar$, $n_m$ (the
total number of bound states with angular momentum $-m\hbar$ is also
$n_m$):
$$\eta_m(0)=n_m\pi, \quad m=0,1,2,\ldots.$$
This is similar to but slightly simpler than the original one in
three dimensions. In three dimensions Levinson's theorem takes the
same
form with $m$ replaced by $l$. But when $l=0$ the relation must be
modified if there exists a zero-energy resonance (a half-bound state).
The mathematical origin is that the behaviour of the phase shifts
$\delta_l(k)$ near
$k=0$ may be different for $l=0$ and $l\ne0$. In two dimensions
no similar situation occurs. This paper is organized as follows. In
the next section we give a brief formulation of the partial-wave
method for nonrelativistic scattering in
two spatial dimensions. In \S3
we discuss the behaviour of the phase shifts near $k=0$ in some
detail.
In \S4 we establish the Levinson theorem using the Green function
method[2,3,5]. \S5 is devoted to the discussion of some aspects
of the theorem.
\section*{\large \S2. Partial-wave method in two dimensions}
A particle with mass $\mu$ and energy $E$ moving in an external
field $V({\bf r})$ satisfies the stationary Schr\"odinger
equation
\begin{equation}
H\psi=-{\hbar^2\over2\mu}\nabla^2\psi+V({\bf r})\psi=E\psi.
\end{equation}
We use the polar coordinates $(r,\theta)$ as well as the rectangular
coordinates $(x, y)$ in two spatial dimensions. For scattering
problems $E>0$ (we assume that $V\to 0$ more rapidly than $r^{-2}$
when $r\to\infty$).
The incident wave may be chosen as
\begin{equation} \psi_i=e^{ikx} \end{equation}
which solves Eq.(1) when $r\to\infty$ provided that $k=\sqrt{2\mu E/
\hbar^2}$. The scattered wave should have the asymptotic form
\begin{equation}
\psi_s\stackrel{r\to\infty}{\longrightarrow}\sqrt{i\over r}
f(\theta)e^{ikr} \end{equation}
where the factor $\sqrt i=e^{i\pi/4}$ is introduced for later
convenience. This also solves (1) when $r\to \infty$. It is easy to
show that the differential cross section $\sigma(\theta)$ (in two
spatial dimensions the cross section may more appropriately be called
a cross width) is given in terms of the scattering amplitude
$f(\theta)$ by
\begin{equation} \sigma(\theta)=|f(\theta)|^2.\end{equation}
The wave function, comprising (2) and (3), thus takes the following
form at infinity:
\begin{equation} \psi\stackrel{r\to\infty}{\longrightarrow}e^{ikx}+
\sqrt{i\over r} f(\theta)e^{ikr}. \end{equation}
We are interested in central or spherically symmetric
(actually cylindrically
symmetric in two dimensions) potentials $V({\bf r})=V(r)$.
In this paper we deal only with central potentials. Then solutions
of Eq.(1) may be expanded as
\begin{equation}
\psi(r,\theta)=\sum_{m=-\infty}^{+\infty}a_mR_{|m|}(r)
e^{im\theta} \end{equation}
where $R_m(r)$ satisfies the radial equation
\begin{equation}
R''_m+{1\over r}R'_m+\left(k^2-{2\mu\over\hbar^2}V-{m^2\over r^2}
\right)R_m=0,\quad m=0,1,2,\ldots.\end{equation}
If $V=0$, the regular solution of (7) is the Bessel function and may
be taken as $R_m^{(0)}(r)=\sqrt kJ_m(kr)$ and hence has the asymptotic
form
\begin{equation} R_m^{(0)}(r)\stackrel{r\to\infty}{\longrightarrow}
\sqrt{2\over \pi r} \cos\left(kr-{m\pi\over 2}-{\pi\over 4}\right).
\end{equation}
We have assumed that $V(r)\to0$ more rapidly than $r^{-2}$
when $r\to\infty$,
so the solution $R_m(r)$ approaches the form of a linear combination
of the
Bessel function and the Neumann function at large $r$ and may have
the asymptotic form
\begin{equation} R_m(r)\stackrel{r\to\infty}{\longrightarrow}
\sqrt{2\over \pi r} \cos\left[kr-{m\pi\over 2}-{\pi\over 4}
+\eta_m(k)\right]. \end{equation}
Here $\eta_m(k)$ is the phase shift of the $m$th partial wave. It is a
function of $k$. As in three dimensions, all $\eta_m(k)$ are real in a
real potential. Substituting (9) into (6) gives one asymptotic form
for $\psi$, while substitution of the formula
\begin{equation} e^{ikx}=\sum_{m=-\infty}^{+\infty}i^{|m|}J_{|m|}(kr)
e^{im\theta} \end{equation}
and the asymptotic form of the Bessel functions into (5) gives
another. Comparing these two asymptotic forms one finds an expression
for $f(\theta)$ in terms of the phase shifts:
\begin{equation}
f(\theta)=\sum_{m=-\infty}^{+\infty}\sqrt{2\over\pi k}e^{i\eta_{|m|}}
\sin\eta_{|m|}e^{im\theta}. \end{equation}
The total cross section $\sigma_t$ turns out to be
\begin{equation}
\sigma_t=\int_0^{2\pi}d\theta\,\sigma(\theta)={4\over k}
\left(\sin^2\eta_0+2\sum_{m=1}^\infty\sin^2\eta_m\right).
\end{equation}
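For completeness, (12) follows from (4) and (11) via the orthogonality of the angular factors, $\int_0^{2\pi}e^{i(m-m')\theta}\,d\theta=2\pi\delta_{mm'}$:
$$\sigma_t=\int_0^{2\pi}d\theta\,|f(\theta)|^2
=2\pi\sum_{m=-\infty}^{+\infty}{2\over\pi k}\sin^2\eta_{|m|}
={4\over k}\left(\sin^2\eta_0+2\sum_{m=1}^\infty\sin^2\eta_m\right).$$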
From the above relations one easily realizes that all the information
about the scattering process is contained in the phase shifts $\eta_m(k)$.
The latter are determined by solving the radial equation (7) with
the boundary condition (9) and thus depend on the particular form
of $V(r)$. In an attractive field, the number of bound states with
given angular momentum $m\hbar$, denoted by $n_m$ above, also depends
on the particular form of $V(r)$. It will be shown that $n_m$ is
related to $\eta_m(k)$ at threshold. This is similar to Levinson's
theorem in three dimensions. In the next section we first discuss the
behaviour of $\eta_m(k)$ near $k=0$.
\section*{\large \S3. Phase shifts near threshold}
Assuming that $V(r)$ is less singular than $r^{-2}$ when $r\to0$,
the regular solution of the radial equation (7) has the
following power dependence on $r$ near $r=0$:
\begin{equation}
f_m(r,k)\stackrel{r\to0}{\longrightarrow}{r^m\over 2^m m!},
\quad m=0,1,2,\ldots.\end{equation}
Here we denote the regular solution of (7) with the boundary condition
(13) by $f_m(r,k)$. Note that the equation (7) depends on $k$ only
through $k^2$, which is an integral function of $k$, and the boundary
condition (13) is independent of $k$. Thus a theorem of Poincar\'e
tells us that $f_m(r,k)$ is an integral function of $k$ for a fixed
$r$. On the other hand, the solution $R_m(r)$ with the boundary
condition (9), which is proportional to $f_m(r,k)$, need not be an
integral function of $k$. We denote a potential that satisfies
$V(r)=0$ when $r>a>0$ by $V_a(r)$. In such potentials the solution
of (7) when $r>a$ may take the form
\begin{equation}
R_m^+(r)=\sqrt k[\cos\eta_m J_m(kr)-\sin\eta_m N_m(kr)],\quad
m=0,1,2,\ldots \end{equation}
where the superscript ``+'' indicates $r>a$. It is easy to verify
that $R_m^+(r)$ indeed satisfies the boundary condition (9). When
$r<a$ we have
\begin{equation} R_m^-(r)=A_m(k)f_m(r,k),\quad m=0,1,2,\ldots
\end{equation}
where the superscript ``$-$'' indicates $r<a$. In general the
coefficient
$A_m$ depends on $k$, so that the two parts of $R_m(r)$ can be
connected smoothly at $r=a$. This leads to
\begin{equation} \tan\eta_m={\rho J'_m(\rho)-\beta_m(\rho)J_m(\rho)
\over \rho N'_m(\rho)-\beta_m(\rho)N_m(\rho)}\end{equation}
where $\rho=ka$ and
\begin{equation} \beta_m(\rho)={af'_m(a,k)\over f_m(a,k)}
\end{equation}
where the prime indicates differentiation with respect to $r$. As
mentioned above, $f_m(a,k)$ is an integral function of $k$, so is
$f'_m(a,k)$. Moreover, both of them are even functions of $k$ since
Eq.(7) depends only on $k^2$. Therefore when $k\to0$ or $\rho\to0$,
the leading term for $\beta_m(\rho)$ may have one of the following
forms
$$\beta_m(\rho)\to \alpha_m^+\rho^{2l_m^+},$$
$$\beta_m(\rho)\to \alpha_m^-\rho^{-2l_m^-},$$
$$\beta_m(\rho)\to \gamma_m+\alpha_m \rho^{2l_m}$$
where $\alpha_m^\pm$, $\alpha_m$, and $\gamma_m$ are
nonzero constants, while $l_m^\pm$ and $l_m$ are natural numbers.
Using these relations and the leading terms of
$J_m(\rho)$ and $N_m(\rho)$ for $\rho\to0$, the leading term in
$\tan\eta_m$
when $\rho\to0$ can be explicitly worked out. When $\gamma_m=-m$ some
care should be taken. However, careful analysis gives in any case
\begin{equation} \tan\eta_m\to b_m \rho^{2p_m}\quad{\rm or}\quad
{\pi\over 2\ln \rho}\quad(k\to 0) \end{equation}
where $b_m\ne0$ is a constant and $p_m$ is a natural number.
Eq.(18) is important for the development of the Levinson theorem
in the next section.
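For reference, the small-argument leading terms entering this computation are
$$J_m(\rho)\to{\rho^m\over 2^m m!},\qquad
N_0(\rho)\to{2\over\pi}\ln\rho,\qquad
N_m(\rho)\to-{2^m(m-1)!\over\pi\rho^m}\quad(m\ge1),$$
and it is the logarithm in $N_0(\rho)$ that produces the $\pi/(2\ln\rho)$ branch in (18).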
For comparison we give the corresponding results in three dimensions.
The phase shifts are denoted by $\delta_l(k)$. By similar analysis
it can be shown in a potential $V_a(r)$ that
$$\tan\delta_l\to c_l\rho^{2q_l-1},\quad l=0,1,2,\ldots\quad
(k\to 0)\eqno(18')$$
where $c_l\ne0$ is a constant, and $q_l$ is a natural number for
$l\ne 0$, while $q_0$ may be a natural number or zero. We see that
$\delta_l(0)$ generally equals a multiple of $\pi$ for all $l$.
But $\delta_0(0)$ gets an additional $\pi/2$ when $q_0=0$. The latter
case does not occur for any $\eta_m(0)$, which is obvious from (18).
The difference between (18) and ($18'$) comes from the fact that the
Neumann function in the solution (14) involves the logarithmic
function while the spherical Neumann function in the three-dimensional
solution does not. It can be shown that $\delta_0(0)$ gets an
additional $\pi/2$ when there exists a half-bound state (a zero-energy
resonance) in the angular momentum channel $l=0$.
\section*{\large \S4. The Levinson theorem}
Now we proceed to establish the Levinson theorem by the Green
function method. Introduce the retarded Green function
$G({\bf r}, {\bf r'}, E)$ defined by
\begin{equation}
G({\bf r}, {\bf r'}, E)=\sum_\nu{\psi_\nu({\bf r})\psi^*_\nu
({\bf r'})\over E-E_\nu+i\epsilon} \end{equation}
where $\{\psi_\nu({\bf r})\}$ is a complete set of orthonormal
solutions to (1), and $\epsilon=0^+$.
$G({\bf r}, {\bf r'}, E)$ satisfies the equation
\begin{equation}
(E-H+i\epsilon)G({\bf r}, {\bf r'}, E)=\delta({\bf r}-{\bf r'}).
\end{equation}
For a free particle we have a similar definition:
\begin{equation}
G^{(0)}({\bf r}, {\bf r'}, E)=\sum_\nu{\psi^{(0)}_\nu({\bf r})
\psi^{(0)*}_\nu
({\bf r'})\over E-E^{(0)}_\nu+i\epsilon} \end{equation}
where $\{\psi^{(0)}_\nu({\bf r})\}$
is a complete set of orthonormal solutions to Eq.(1) with $V=0$.
$G^{(0)}({\bf r}, {\bf r'}, E)$ satisfies
\begin{equation} (E-H_0+i\epsilon)G^{(0)}({\bf r}, {\bf r'}, E)
=\delta({\bf r}-{\bf r'}) \end{equation}
where $H_0$ is the Hamiltonian of the free particle. We have the
integral equation for $G({\bf r}, {\bf r'}, E)$:
\begin{equation}
G({\bf r}, {\bf r'}, E)-G^{(0)}({\bf r}, {\bf r'}, E)=
\int d{\bf r}''\,G^{(0)}({\bf r}, {\bf r''}, E)V({\bf r''})
G({\bf r''}, {\bf r'}, E).\end{equation}
In a central field $V({\bf r})=V(r)$ (not necessarily $V_a(r)$),
we have
$$ \psi_\nu(r,\theta)=\psi_{m\kappa}(r,\theta)={u_{|m|\kappa}(r)
\over \sqrt r}{e^{im\theta}\over\sqrt{2\pi}},\quad m=0,\pm1,\pm2,
\ldots $$
where $\kappa$ is a quantum number associated with the energy
$E_{m\kappa}$ which is
determined by solving the radial equation
\begin{equation}
u''_{m\kappa}+\left[{2\mu\over\hbar^2}(E_{m\kappa}-V)-{m^2-1/4
\over r^2}
\right]u_{m\kappa}=0,\quad m=0,1,2,\ldots\end{equation}
with appropriate boundary conditions in the radial direction. The
radial wave functions $u_{m\kappa}(r)$ satisfy the orthonormal
condition
\begin{equation}
(u_{m\kappa}, u_{m\kappa'})=\int_0^\infty dr\,u_{m\kappa}^*(r)
u_{m\kappa'}(r)=\delta_{\kappa\kappa'}.\end{equation}
For an attractive field we have a discrete spectrum ($E_{m\kappa}<0$)
as well as a continuous spectrum ($E_{m\kappa}>0$). We can, however,
require the wave functions to vanish at a sufficiently large radius
$R$ ($R\gg a$ for $V_a(r)$) and thus discretize the continuous part of
the spectrum.
In this case the upper limit of the integration in (25) should be
replaced by $R$. It is easy to show that
\begin{equation}
G({\bf r}, {\bf r'}, E)=G_0(r,r',E){1\over 2\pi}+\sum_{m=1}^\infty
G_m(r,r',E){\cos m(\theta-\theta')\over\pi}\end{equation}
where
\begin{equation}
G_m(r,r',E)=\sum_{\kappa}{u_{m\kappa}(r)u_{m\kappa}^*(r')\over
\sqrt{rr'}(E-E_{m\kappa}+i\epsilon)},\quad m=0,1,2,\ldots.
\end{equation}
For a free particle, the following results can be obtained in the same
way.
\begin{equation}
G^{(0)}({\bf r}, {\bf r'}, E)=G_0^{(0)}(r,r',E){1\over 2\pi}+
\sum_{m=1}^\infty G_m^{(0)}(r,r',E){\cos m(\theta-\theta')
\over\pi}\end{equation}
where
\begin{equation}
G_m^{(0)}(r,r',E)=\sum_{\kappa}{u_{m\kappa}^{(0)}(r)u_{m\kappa}
^{(0)*}(r')\over
\sqrt{rr'}(E-E_{m\kappa}^{(0)}+i\epsilon)},\quad m=0,1,2,\ldots
\end{equation}
where $u_{m\kappa}^{(0)}(r)$ satisfies (24) with $V=0$, and the energy
spectrum ($E_{m\kappa}^{(0)}>0$) is discretized according to the
prescription described above. Thus the orthonormal relation for
$u_{m\kappa}^{(0)}(r)$ is similar to (25). Substituting (26) and (28)
into (23) we get an integral equation for $G_m(r,r',E)$:
\begin{equation}
G_m(r,r',E)-G_m^{(0)}(r,r',E)=\int dr''\,r''G_m^{(0)}(r,r'',E)
V(r'')G_m(r'',r',E),\quad m=0,1,2,\ldots.\end{equation}
Using the orthonormal relation (25) it is easy to show that
\begin{equation}
\int dr\,rG_m(r,r,E)=\sum_\kappa{1\over E-E_{m\kappa}+i\epsilon}.
\end{equation}
Employing the mathematical formula
\begin{equation} {1\over x+i\epsilon}=P{1\over x}-i\pi\delta(x)
\end{equation}
and taking the imaginary part of the above equation we have
\begin{equation}
{\rm Im}\int dr\,rG_m(r,r,E)=-\pi\sum_\kappa\delta(E-E_{m\kappa}).
\end{equation}
Integrating this equation over $E$ from $-\infty$ to $0^-$ yields
\begin{equation}
{\rm Im}\int_{-\infty}^{0^-}dE\,\int dr\,rG_m(r,r,E)=-n_m^-\pi
\end{equation}
where $n_m^-$ is the number of bound states with negative energies and
with angular momentum $m\hbar$ (when $m\ne0$ we have the same number of
bound states with angular momentum $-m\hbar$ as well). The possibility
of a zero-energy bound state will be discussed in the next section.
In a similar way one can show that
\begin{equation}
{\rm Im}\int_{-\infty}^{0^-}dE\,\int dr\,rG_m^{(0)}(r,r,E)=0.
\end{equation}
Here and in (34) the integration over $E$ is performed up to the upper
limit $0^-$ instead of 0 so that it suffers no ambiguity.
Combining (34) and (35) we have
\begin{equation} {\rm Im}\int_{-\infty}^{0^-}dE\,\int dr\,r[G_m(r,r,E)
-G_m^{(0)}(r,r,E)]=-n_m^-\pi.
\end{equation}
On the other hand, substituting (27) and (29) into the rhs of (30)
we have
\begin{equation}
\int dr\,r[G_m(r,r,E)-G_m^{(0)}(r,r,E)]=\sum_{\kappa\sigma}
{(u_{m\sigma},u_{m\kappa}^{(0)})(u_{m\kappa}^{(0)}, Vu_{m\sigma})
\over (E-E_{m\kappa}^{(0)}+i\epsilon)(E-E_{m\sigma}+i\epsilon)}.
\end{equation}
Using the radial equations for $u_{m\kappa}^{(0)}(r)$ and
$u_{m\sigma}(r)$ and the boundary condition that these radial wave
functions vanish at $r=0$ and $r=R$, it is not difficult to
show that
\begin{equation}
(u_{m\kappa}^{(0)}, Vu_{m\sigma})=(E_{m\sigma}-E_{m\kappa}^{(0)})
(u_{m\kappa}^{(0)}, u_{m\sigma}).\end{equation}
Substituting this result into (37) and taking the imaginary part we
have
\begin{equation}
{\rm Im}\int dr\,r[G_m(r,r,E)-G_m^{(0)}(r,r,E)]=\pi\sum_{\kappa
\sigma}[\delta(E-E_{m\kappa}^{(0)})-\delta(E-E_{m\sigma})]
|(u_{m\kappa}^{(0)}, u_{m\sigma})|^2.\end{equation}
Integrating this equation over $E$ from $-\infty$ to $+\infty$
it is easy to find that
\begin{equation}
{\rm Im}\int_{-\infty}^{+\infty}dE\,\int dr\,r[G_m(r,r,E)
-G_m^{(0)}(r,r,E)]=0.\end{equation}
Similar to the three-dimensional case, this equation means that the
total number of states in a specific angular momentum channel is not
altered by an attractive field, except that some scattering states
are pulled down into the bound-state region. This result, together
with (36), leads to
\begin{equation} {\rm Im}\int_{0^-}^{+\infty}dE\,\int dr\,r[G_m(r,r,E)
-G_m^{(0)}(r,r,E)]=n_m^-\pi.\end{equation}
We have thereupon finished the first step in our establishment of
the Levinson theorem.
The next step is to calculate the lhs of (41) in another way. In the
above treatment we have discretized the continuous spectrum of
$E_{m\kappa}^{(0)}$ and the continuous part of $E_{m\kappa}$. In the
following we will directly deal with these continuous spectra. We will
denote $u_{m\kappa}^{(0)}(r)$ by $u_{mk}^{(0)}(r)$,
and $u_{m\kappa}(r)$ with continuous $\kappa$ by $u_{mk}(r)$,
whereas those $u_{m\kappa}(r)$
with discrete $\kappa$(bound states) will be denoted by the original
notation.
The notations $E_{m\kappa}^{(0)}$ and $E_{m\kappa}$
with continuous $\kappa$ will also be changed to $E_{mk}^{(0)}$ and
$E_{mk}$(both are equal to $\hbar^2k^2/2\mu$) respectively.
The orthonormal relation for $u_{mk}^{(0)}(r)$ now takes
the form
\begin{equation} (u_{mk}^{(0)}, u_{mk'}^{(0)})=\delta(k-k').
\end{equation}
The orthonormal relation for $u_{mk}(r)$ is similar, while that for
$u_{m\kappa}(r)$ has the same appearance as (25). It is easy to show
that
$$ u_{mk}^{(0)}(r)=\sqrt{kr} J_m(kr),\quad k=\sqrt{2\mu E_{mk}^{(0)}}
/\hbar$$
satisfies the radial equation with $V=0$ and the orthonormal relation
(42). Thus $u_{mk}^{(0)}(r)$ has the asymptotic form
\begin{equation} u_{mk}^{(0)}(r)\stackrel{r\to\infty}{\longrightarrow}
\sqrt{2\over \pi} \cos\left(kr-{m\pi\over 2}-{\pi\over 4}\right)
\end{equation}
corresponding to (8). In an external field, the wave functions are
distorted and thus the asymptotic form for $u_{mk}(r)$ becomes
\begin{equation} u_{mk}(r)\stackrel{r\to\infty}{\longrightarrow}
\sqrt{2\over \pi} \cos\left[kr-{m\pi\over 2}-{\pi\over 4}
+\eta_m(k)\right] \end{equation}
corresponding to (9). Note that the coefficient in the asymptotic
form is the same for $u_{mk}^{(0)}(r)$ and $u_{mk}(r)$. In this
treatment it can be shown that
\begin{equation}
G_m(r,r',E)=\sum_{\kappa}{u_{m\kappa}(r)u_{m\kappa}^*(r')\over
\sqrt{rr'}(E-E_{m\kappa}+i\epsilon)}+
\int dk\,{u_{mk}(r)u_{mk}^*(r')\over
\sqrt{rr'}(E-E_{mk}+i\epsilon)} \end{equation}
and thus
\begin{equation}
{\rm Im}\int dr\,rG_m(r,r,E)=-\pi\sum_\kappa\delta(E-E_{m\kappa})
-\pi\int dk\,\delta(E-E_{mk})(u_{mk}, u_{mk}). \end{equation}
Integrating this equation over $E$ from $0^-$ to $+\infty$ and taking
into account the fact that $E_{m\kappa}<0$ while $E_{mk}\ge0$, we have
\begin{equation} {\rm Im}\int_{0^-}^{+\infty}dE\,\int dr\,rG_m(r,r,E)=
-\pi\int dk\,(u_{mk}, u_{mk}). \end{equation}
There is no ambiguity in the integration over $E$ since the lower
limit is set to be $0^-$ instead of 0 (note that $E_{mk}$ may equal 0).
As before, the possibility of a zero-energy bound state will be
discussed in \S5. In the same way, we have
\begin{equation}
{\rm Im}\int_{0^-}^{+\infty}dE\,\int dr\,rG_m^{(0)}(r,r,E)=
-\pi\int dk\,(u_{mk}^{(0)}, u_{mk}^{(0)}). \end{equation}
It should be pointed out that the integrands in both (47) and (48)
are $\delta(0)$ $(=\infty)$ according to Eq.(42) and a similar equation
for $u_{mk}(r)$. However, there is a subtle difference between
these two $\delta$ functions, and it is this difference that leads
to the Levinson
theorem. Since both integrands are singular, we first evaluate
\begin{equation}
(u_{mk}, u_{ml})_{r_0}-(u_{mk}^{(0)}, u_{ml}^{(0)})_{r_0}\equiv
\int_0^{r_0}dr\,u_{mk}^*(r)u_{ml}(r)-
\int_0^{r_0}dr\,u_{mk}^{(0)*}(r)u_{ml}^{(0)}(r)\end{equation}
where $r_0$ is a large but finite radius, and finally take the limit
$l\to k$ and $r_0\to\infty$. As in the discretized case, we have the
boundary condition
\begin{equation} u_{mk}^{(0)}(0)=0, \quad u_{mk}(0)=0.
\end{equation}
Using the radial equation and this boundary condition it is easy
to show that
\begin{equation}
(k^2-l^2)(u_{mk}, u_{ml})_{r_0}=u_{mk}^*(r_0)u'_{ml}(r_0)
-u_{mk}^{\prime*}(r_0)u_{ml}(r_0).\end{equation}
Since $r_0$ is large, we can use (44) to evaluate the rhs and in the
limit $l\to k$ we get
\begin{equation}
(u_{mk}, u_{mk})_{r_0}={r_0\over \pi}+{1 \over \pi}\eta'_m(k)-
{(-)^m\over 2\pi k}\cos[2kr_0+2\eta_m(k)].\end{equation}
In the same way we have
\begin{equation} (u_{mk}^{(0)}, u_{mk}^{(0)})_{r_0}={r_0\over \pi}-
{(-)^m\over 2\pi k}\cos2kr_0.\end{equation}
Therefore,
\begin{equation}
(u_{mk}, u_{mk})_{r_0}-(u_{mk}^{(0)}, u_{mk}^{(0)})_{r_0}=
{1\over\pi}\eta'_m(k)+{(-)^m\over 2}\delta(k)\sin2\eta_m(k)+
{(-)^m\over\pi k}\cos 2kr_0\sin^2\eta_m(k)\end{equation}
where we have employed the well-known formula
$$ \lim_{r_0\to\infty}{\sin 2kr_0\over \pi k}=\delta(k). $$
So far in this section $V(r)$ need not be $V_a(r)$. In the following
we set $V(r)=V_a(r)$. Then Eq.(18) is valid, and we have
$\sin2\eta_m(0)=0$. Therefore the second
term on the rhs of (54) vanishes. Integrating this result over $k$
(from 0 to $+\infty$), taking the limit $r_0\to \infty$, and
incorporating the results (47), (48) we arrive at
\begin{eqnarray}
&&{\rm Im}\int_{0^-}^{+\infty}dE\,\int dr\,r[G_m(r,r,E)
-G_m^{(0)}(r,r,E)] \nonumber\\
&&=\eta_m(0)-\eta_m(\infty)-(-)^m\lim_
{r_0\to\infty}\int_0^\infty dk\,{\cos 2kr_0\over k}\sin^2\eta_m(k).
\end{eqnarray}
The last term in this equation can be decomposed into two integrals:
the first, from 0 to $\varepsilon=0^+$, can be shown to vanish
on account of (18), while the second, from $\varepsilon$ to $+\infty$,
also vanishes in the limit $r_0\to\infty$ since the factor
$\cos2kr_0$ oscillates very rapidly. Therefore we have
\begin{equation} {\rm Im}\int_{0^-}^{+\infty}dE\,\int dr\,r[G_m(r,r,E)
-G_m^{(0)}(r,r,E)]=\eta_m(0)-\eta_m(\infty).\end{equation}
Combining (56) and (41) we arrive at the Levinson theorem:
\begin{equation}\eta_m(0)-\eta_m(\infty)=n_m^-\pi,\quad m=0,1,2,
\ldots. \end{equation}
In the next section we will discuss some aspects of this theorem,
and get a more general form which takes zero-energy bound states
into consideration.
\section*{\large \S5. Discussions}
In this section we discuss and clarify some aspects of the Levinson
theorem obtained in the form (57) in the last section.
1. {\it On zero-energy bound states}. In \S4 we have not taken
into account
the possible existence of a zero-energy bound state. Indeed, this may
occur for a square well potential with radius $a$ and depth $V_0$ when
$m>1$ and $J_{m-1}(\xi)=0$, where $\xi=k_0a$ and $k_0=\sqrt{2\mu V_0}/
\hbar$. (For $m=0,1$ regular solutions with zero energy may be found,
but they are not normalizable and thus are not bound states;
see below.) The
existence of a zero-energy bound state would not alter the results
(34)-(36) where $n_m^-$ is the number of bound states with negative
energies. Therefore Eq.(41) remains valid in this case. On the other
hand, Eq.(47) becomes
\begin{equation} {\rm Im}\int_{0^-}^{+\infty}dE\,\int dr\,rG_m(r,r,E)=
-\pi-\pi\int dk\,(u_{mk}, u_{mk}) \end{equation}
which is an obvious consequence of (46). Accordingly, the result
(56) becomes
\begin{equation} {\rm Im}\int_{0^-}^{+\infty}dE\,\int dr\,r[G_m(r,r,E)
-G_m^{(0)}(r,r,E)]=\eta_m(0)-\eta_m(\infty)-\pi.\end{equation}
Hence, the Levinson theorem takes in the present case the form
\begin{equation}\eta_m(0)-\eta_m(\infty)=n_m\pi \end{equation}
where $n_m=n_m^-+1$ represents the total number of bound
states, including the one with zero energy. When there is no
zero-energy bound state, $n_m=n_m^-$.
Thus Eq.(60) holds in any case and is the final form of the Levinson
theorem.
To see the difference in zero-energy states between two and three
dimensions, we notice that the two-dimensional radial wave function
$u_{m\kappa}(r)$ satisfies (24). When regarded as a one-dimensional
Schr\"odinger equation, the effective potential reads
$$\tilde V_2(r)=V(r)+{\hbar^2(m^2-1/4)\over 2\mu r^2}$$
where the subscript ``2'' indicates two dimensions. In three
dimensions, with $\psi_{lm\kappa}(r,\theta,\phi)=r^{-1}\chi_{l\kappa}
(r)Y_{lm}(\theta,\phi)$(here $r$, $\theta$ etc. should not be
confused with those in two dimensions), the radial wave function
$\chi_{l\kappa}(r)$ satisfies
$$\chi_{l\kappa}''+\left[{2\mu\over\hbar^2}(E_{l\kappa}-V)-{l(l+1)
\over r^2}\right]\chi_{l\kappa}=0,\quad l=0,1,2,\ldots.
\eqno(24')$$
When this is regarded as a one-dimensional Schr\"odinger equation,
the effective potential is
$$\tilde V_3(r)=V(r)+{\hbar^2 l(l+1)\over 2\mu r^2}$$
which is obviously different from $\tilde V_2(r)$, and as a
consequence
the zero-energy solutions are different from those of (24). More
specifically, with $V(r)=V_a(r)$ and $E_{m0}=0$, the exterior
solution ($r>a$) of (24) reads
$$u_{m0}^+(r)=r^{-m+1/2}.$$
A zero-energy solution exists if $V_a(r)$ is such that the interior
solution $u_{m0}^-(r)$ can be connected with $u_{m0}^+(r)$ at $r=a$
smoothly. This leads to the above mentioned condition for a square
well potential. Here we are concerned with the normalizability of
the solution. It is clear that the above solution can be normalized,
and is thus a bound state, only when $m>1$: indeed
$\int^\infty|u_{m0}^+(r)|^2\,dr=\int^\infty r^{1-2m}\,dr$ converges
at infinity only for $m>1$. We have
$\psi_{10}\stackrel{r\to\infty}{\longrightarrow}0$, and $\psi_{00}$
is finite at infinity though
$u_{00}(r)\stackrel{r\to\infty}{\longrightarrow}\infty$, so both
are regular. But neither of them can be normalized, so neither is
a bound state. In this case $\eta_m(0)$ gets no additional term and
Eq.(57) is not modified. When $m>1$, however, $n_m^-\to n_m^-+1=n_m$
and $\eta_m(0)$ gets an additional $\pi$ if a zero-energy solution
(a bound state) actually exists. In three dimensions,
with $V(r)=V_a(r)$ and $E_{l0}=0$, the exterior
solution of ($24'$) reads
$$\chi_{l0}^+(r)=r^{-l}.$$
This is normalizable when $l>0$. Thus only $l=0$ is distinguished.
When the $l=0$ solution really emerges, $\delta_0(0)$ gets an
additional $\pi/2$ as pointed out in \S3, and the Levinson theorem
is modified by $n_0\to n_0+1/2$. This is different from the case
$m=0,1$ in two dimensions where $\eta_m(0)$ always equals a multiple
of $\pi$. When $l>0$ the case is basically the same as $m>1$
in two dimensions.
2. {\it About $\eta_m(\infty)$}. Write down two radial equations
for $u_{mk}(r)$ and $\tilde u_{mk}(r)$ in the external potentials
$U(r)$ and $\tilde U(r)$ respectively. Using the boundary condition
(50) and the asymptotic form (44) for $u_{mk}(r)$ and similar ones
for $\tilde u_{mk}(r)$, it is easy to show that
\begin{equation}
\sin[\eta_m(k)-\tilde \eta_m(k)]=-{\pi\mu\over\hbar^2k}
\int_0^\infty dr\,[U(r)-\tilde U(r)] u_{mk}^*(r)\tilde u_{mk}(r).
\end{equation}
Now we take $U(r)=V(r)$ (not necessarily $V_a(r)$)
and $\tilde U(r)=0$; it is natural to define
$\tilde\eta_m(k)=0$ in the absence of an external field. Obviously,
$\tilde u_{mk}(r)=u_{mk}^{(0)}(r)$, so (61) becomes
\begin{equation} \sin\eta_m(k)=-{\pi\mu\over\hbar^2k}
\int_0^\infty dr\,u_{mk}^*(r)V(r)u_{mk}^{(0)}(r).\end{equation}
For very large $k$, $V(r)$ can be neglected everywhere in the radial
equation, since it is less singular than $r^{-2}$ near the origin and,
as assumed, well behaved elsewhere (cf. Eq.(24)).
Therefore $u_{mk}(r)$ in (62) can be replaced by $u_{mk}^{(0)}(r)$
in this limit, and we have for very large $k$
\begin{equation} \sin\eta_m(k)=-{\pi\mu\over\hbar^2k}
\int_0^\infty dr\,|u_{mk}^{(0)}(r)|^2V(r).\end{equation}
Substituting $u_{mk}^{(0)}(r)=\sqrt{kr}J_m(kr)$ into this equation,
and using the asymptotic formula for the Bessel functions at large
argument, we have approximately
\begin{equation} \sin\eta_m(k)=-{2\mu\over\hbar^2k}
\int_0^\infty dr\,\cos^2\left(kr-{m\pi\over2}-{\pi\over4}\right)V(r)
.\end{equation}
Further, since $k$ is very large, the cosine oscillates very rapidly
and thus the squared cosine may be replaced by its mean value 1/2.
In this way we arrive at
\begin{equation} \sin\eta_m(k)=-{\mu\over\hbar^2k}
\int_0^\infty dr\,V(r)\end{equation}
for very large $k$, where we assume that the integral exists. This
result has the same form as that in three dimensions.
Of course it holds in the special case $V(r)=V_a(r)$. Obviously,
$\sin\eta_m(k)\to0$ when $k\to\infty$, hence we can freely define
$\eta_m(\infty)=0$. Under this convention the Levinson theorem
takes the form
\begin{equation} \eta_m(0)=n_m\pi,\quad m=0,1,2,\ldots. \end{equation}
This means that the phase shift at threshold serves as a counter for
the bound states. This is similar to the case in three dimensions, but
is somewhat simpler. In three dimensions, the case $l=0$ should be
modified when there exists a zero-energy resonance (a
half-bound state). Here we have no such problem.
3. {\it Extension to more general potentials}. In the above
development
of the Levinson theorem, we have assumed that $V(r)=0$ when $r>a$.
We also assumed that $V(r)$ is less singular than $r^{-2}$ when
$r\to 0$, so that the power dependence in (13) near $r=0$ is valid.
Indeed, the existence of the integral in (65) requires that $V(r)$
is less singular than $r^{-1}$ when $r\to 0$. We will always assume
that $V(r)$ is sufficiently well behaved near the origin such that
the above requirements are all satisfied. On the other hand,
the radius
$a$ beyond which $V(r)$ vanishes is not specified in our discussion.
Though both $\eta_m(0)$ and $n_m$ in (66) depend on the particular
form of $V(r)$ and thus depend on $a$, the equality between them does
not. Hence one expects that (66) remains valid when $V_a(r)$ is varied
continuously to the limit $a\to\infty$ if $n_m$ remains finite in the
process. It seems that the Levinson theorem holds for quite general
potentials (well behaved near the origin, as emphasized above), as long
as they decrease rapidly enough as $r\to \infty$ that the total
number of bound states in a particular angular momentum channel
is finite.
Extension of the present work to relativistic quantum mechanics is
currently in progress.
\vskip 2pc
The author is grateful to Prof. Guang-jiong Ni for useful
communications and for encouragement.
This work was supported by the
National Natural Science Foundation of China.
\newpage
\parindent 0pc
{\large References}\newline
[1] N. Levinson, K. Dan. Vidensk. Selsk. Mat.-Fys. Medd. 25,
No. 9 (1949).\newline
[2] J. M. Jauch, Helv. Phys. Acta 30, 143 (1957).\newline
[3] A. Martin, Nuovo Cimento 7, 607 (1958).\newline
[4] R. G. Newton, J. Math. Phys. 18, 1348; 1582 (1977); \newline
{\it Scattering Theory of Waves and Particles} (McGraw-Hill, New York,
1966).\newline
[5] G.-J. Ni, Phys. Energ. Fort. Phys. Nucl. 3, 432 (1979).\newline
[6] Z.-Q. Ma and G.-J. Ni, Phys. Rev. D 31, 1482 (1985).\newline
[7] N. Poliatzky, Phys. Rev. Lett. 70, 2507 (1993);
Helv. Phys. Acta 66, 241 (1993).\newline
[8] J. Piekarewicz, Phys. Rev. C 48, 2174 (1993).
\end{document}
% arXiv:quant-ph/9806073 (1998), ``Levinson theorem in two dimensions'', Quantum Physics (quant-ph).
% Abstract: A two-dimensional analogue of Levinson's theorem for nonrelativistic quantum mechanics is established, which relates the phase shift at threshold (zero momentum) for the $m$th partial wave to the total number of bound states with angular momentum $m\hbar$ $(m=0,1,2,\ldots)$ in an attractive central field.
% https://arxiv.org/abs/1904.04494
% Residue fixed point index and wildly ramified power series
% Abstract: In this paper, we study power series having a fixed point of multiplier 1. First, we give a closed formula for the residue fixed point index, in terms of the first coefficients of the power series. Then, we use this formula to study wildly ramified power series in positive characteristic. Among power series having a multiple fixed point of small multiplicity, we characterize those having the smallest possible lower ramification numbers in terms of the residue fixed point index. Furthermore, we show that these power series form a generic set, and, in the case of convergent power series, we also give an optimal lower bound for the distance to other periodic points.
\section{Introduction}
Consider an open subset~$U$ of~$\C$ and a holomorphic map~$f \colon U \to \C$.
For a fixed point~$z_0$ of~$f$, the derivative~$f'(z_0)$ is invariant under coordinate changes.
In the case~$z_0$ is isolated as a fixed point of~$f$, a related invariant is defined by the contour integral
\begin{equation}
\label{eq:19}
\ind(f, z_0)
\=
\frac{1}{2\pi i} \oint \frac{\hspace{1pt}\operatorname{d}\hspace{-1pt} z}{z - f(z)},
\end{equation}
where we integrate on a sufficiently small simple closed curve around~$z_0$ that is positively oriented.
The complex number~\eqref{eq:19} is invariant under coordinate changes and is called the \emph{residue fixed point index of~$f$ at~$z_0$}.
Together with the related holomorphic fixed point formula, it is one of the basic tools in complex dynamics, see, \emph{e.g.}, \cite[\S12]{Mil06c} for background, and \cite{BuffEpstein2002, Buff03, BuffXavEcalle2013} for some results where the residue fixed point index plays an important r{\^o}le.
See also~\cite[Exercise~5.10]{Silverman2007} for an extension to an arbitrary ground field.
In the case~$f'(z_0) \neq 1$, a direct computation shows that~\eqref{eq:19} is equal to~$\frac{1}{1 - f'(z_0)}$.
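Indeed, writing $f(z) = z_0 + f'(z_0)(z - z_0) + O((z - z_0)^2)$, we have
\begin{displaymath}
\frac{1}{z - f(z)}
=
\frac{1}{(1 - f'(z_0))(z - z_0)} \left( 1 + O(z - z_0) \right),
\end{displaymath}
so the residue at~$z_0$ is~$\frac{1}{1 - f'(z_0)}$.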
We give a closed formula for~\eqref{eq:19} in the case~$f'(z_0) = 1$, in terms of the first coefficients of the power series expansion of~$f$ about~$z_0$ (Theorem~\ref{thm:closed-formula} in~\S\ref{sec:closed-formula}).
This formula holds for an arbitrary ground field.
We also show that the residue fixed point index is invariant under coordinate changes, and use it to study normal forms.
We also study the behavior of the residue fixed point index under iteration.
In our succeeding results, we restrict to ground fields of positive characteristic and power series having the origin as a fixed point of multiplier~$1$.
Such power series are called \emph{wildly ramified}.\footnote{This terminology arises from the study of field automorphisms.
Every power series~$f$ with coefficients in a field~$\K$ that satisfies~$f(0) = 0$ and~$f'(0) = 1$, defines a field automorphism of~$\K[[t]]$ given by~$g \mapsto g \circ f$.
When~$\K$ is of positive characteristic, this type of field automorphism is traditionally known as \emph{wildly ramified}, due to the behavior of its associated ramification numbers.}
See, \emph{e.g.}, \cite{Sen1969,Keating1992,LaubieSaine1998,Win04} for background on wildly ramified power series, \cite{KallalKirkpatrick2019,LaubieMovahhediSalinier2002,LindahlNordqvist2018,LindahlRiveraLetelier2013,LindahlRiveraLetelier2015,Fransson2017,RiveraLetelier2003} for results related to this paper, and \cite{HermanYoccoz1983, Lindahl2004, LindahlZieve2010, Ruggiero2015} and references therein for local dynamics of analytic germs in positive characteristic.
See also, \emph{e.g.}, \cite{johnson1988,Camina2000} and references therein, for the myriad of group-theoretic results about the ``Nottingham group'', which is the group under composition formed by the wildly ramified power series.
Every wildly ramified power series has an associated sequence of ``lower ramification'' numbers, which encodes the multiplicity of the origin as a fixed point of the iterates of the power series.
We study the lower ramification numbers of power series for which the multiplicity at the origin is small.
First, we characterize those power series having the smallest possible lower ramification numbers.
They are characterized by the nonvanishing of {\'E}calle's ``iterative residue'', which is a dynamical version of the residue fixed point index (Theorem~\ref{thm:q-ramification} in~\S\ref{sec:q-ramification}).
As a consequence, we obtain that these power series form a generic set.
In the case of convergent power series, we also give an optimal lower bound for the distance to other periodic points (Theorem~\ref{thm:lower-bound} in~\S\ref{sec:lower-bound}).
This gives an affirmative solution to~\cite[Conjecture~1.2]{LindahlRiveraLetelier2013}, for generic multiple fixed points of a fixed and small multiplicity, and to~\cite[Conjecture~4.3]{KallalKirkpatrick2019}.
We proceed to describe our results more precisely.
\subsection{Closed formula for the residue fixed point index}
\label{sec:closed-formula}
Our first result is a closed formula for the residue fixed point index of a fixed point of multiplier~$1$.
We allow an arbitrary ground field, and an arbitrary power series about a fixed point.
In particular, we allow non-convergent power series.
To simplify the notation, throughout the rest of the paper we restrict to the case of a power series~$f$ fixing the origin, and denote~$\ind(f, 0)$ by~$\ind(f)$.
\begin{mydef}
Let~$\K$ be a field and~$f$ a power series with coefficients in~$\K$ satisfying~$f(0)=0$ and~$f(z)\neq z$.
The \emph{residue fixed point index of~$f$ at~$0$}, denoted by~$\ind(f)$, is the coefficient of~$\frac{1}{z}$ in the Laurent series expansion about~$0$ of
\[ \frac{1}{z-f(z)}. \]
\end{mydef}
Clearly, this definition agrees with~\eqref{eq:19} in the case where~$\K = \C$, $z_0 = 0$, and~$f$ is holomorphic on a neighborhood of~$0$.
To state our first result, denote by $\mathbb{N}$ the set of nonnegative integers and for an integer~$q \ge 1$ and~$(\MultiInd_0, \ldots, \MultiInd_q)$ in~$\N^{q + 1}$, define
\begin{displaymath}
|(\MultiInd_0, \ldots, \MultiInd_q)| \= \sum_{j = 0}^{q} \MultiInd_j
\text{ and }
\| (\MultiInd_0, \ldots, \MultiInd_q) \| \= \sum_{j = 1}^q j \MultiInd_j.
\end{displaymath}
\begin{thm}[Residue fixed point index formula]
\label{thm:closed-formula}
Let $\K$ be a field, $q\geq 1$ an integer, and~$f$ a power series with coefficients in~$\K$ of the form
\begin{equation}
\label{psform}f(z)
=
z\left(1 + \sum_{j=q}^{+\infty} a_jz^j\right), \text{ with } a_q \neq 0.
\end{equation}
Then we have
\begin{equation}
\label{eq:closed-formula}
\ind(f)
=
- \frac{1}{a_q^{q+1}}\sum_{\substack{\MultiInd \in \N^{q+1} \\ |\MultiInd| = q, \| \MultiInd \| = q }} (-1)^{q-\MultiInd_0}\binom{q-\MultiInd_0}{\MultiInd_{1},\ldots,\MultiInd_{q}}\prod_{j = 0}^q a_{q + j}^{\iota_j}.
\end{equation}
\end{thm}
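As an illustration (ours, not from the paper): for $q = 1$ the only multi-index in the sum of~\eqref{eq:closed-formula} is $\MultiInd = (0, 1)$, and the formula reduces to $\ind(f) = a_2 / a_1^2$. This can be checked against the Laurent-series definition with a short SymPy computation (the truncation of $f$ at degree~$4$ is an arbitrary choice):

```python
import sympy as sp

# Sanity check (ours): for q = 1 the closed formula gives
# ind(f) = a2 / a1^2.  Compare with the Laurent-series definition
# for f(z) = z(1 + a1 z + a2 z^2 + a3 z^3).
z, a1, a2, a3 = sp.symbols('z a1 a2 a3')
f = z * (1 + a1*z + a2*z**2 + a3*z**3)

# coefficient of 1/z in the Laurent expansion of 1/(z - f(z)) about 0
laurent = sp.series(1 / (z - f), z, 0, 2).removeO()
ind = laurent.coeff(z, -1)

print(sp.simplify(ind - a2 / a1**2))  # expected: 0
```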
We also show that the residue fixed point index is invariant under coordinate changes (Proposition~\ref{conj} in~\S\ref{invarresidue}) and use the residue fixed point index to study normal forms (Proposition~\ref{nf} in~\S\ref{sec:nf}).
Both of these results, together with Theorem~\ref{thm:closed-formula}, are used to prove our results below.
In Appendix~\ref{app:resit} we use Theorem~\ref{thm:closed-formula} to study the behavior under iterations of the residue fixed point index, and of the closely related ``iterative residue'' defined below.
\subsection{Wildly ramified power series}
\label{sec:q-ramification}
Let~$\K$ be a field, and~$f$ a power series with coefficients in~$\K$ such that~$f(0) = 0$ and~$f(z) \neq z$.
The \emph{multiplicity of~$0$ as a fixed point of~$f$} is the lowest degree of a nonzero term in $f(z) - z$.
We denote it by~$\mult(f)$.
From now on we assume the characteristic~$p$ of~$\K$ is positive.
The power series~$f$ is \emph{wildly ramified} if~$\mult(f) \ge 2$, or equivalently, if~$0$ is a multiple fixed point of~$f$.
Note that~$f$ is wildly ramified if and only if~$f'(0) = 1$.
For a wildly ramified power series~$f$, the \emph{lower ramification numbers~$\{ i_n(f) \}_{n = 0}^{+\infty}$ of~$f$} are defined by
\[ i_n(f) \= \mult(f^{p^n})-1. \]
See, \emph{e.g.}, \cite{Sen1969,Keating1992,LaubieSaine1998,Win04} and references therein for background on wildly ramified power series and their lower ramification numbers.
Due to their relation to ultrametric dynamics, they have been studied in, \emph{e.g.}, \cite[\S3.2]{RiveraLetelier2003}, \cite{LindahlRiveraLetelier2015,LindahlRiveraLetelier2013,LindahlNordqvist2018}.
Note that the lower ramification numbers are invariant under coordinate changes.
If we put
\begin{displaymath}
q \= \mult(f) - 1 \ge 1,
\end{displaymath}
then the results of Sen in~\cite{Sen1969} imply that, in the case~$q \le p - 1$, for every integer~$n \ge 0$ we have
\begin{equation}
\label{eq:q-minimal}
i_n(f) \ge q (1+p+ \cdots +p^n),
\end{equation}
see Proposition~\ref{prop:qramifisminramif} in~\S\ref{sec:lowramif}.
Following~\cite{Fransson2017}, for an integer~$q \ge 1$ that is not divisible by~$p$, we say that~$f$ is \emph{$q$-ramified} if equality holds in~\eqref{eq:q-minimal} for every~$n$.
In the case~$q = 1$, $1$-ramified power series are also known as ``minimally ramified'' \cite{LaubieMovahhediSalinier2002,LindahlRiveraLetelier2013,LindahlRiveraLetelier2015}.
$q$-Ramified power series appear naturally as reductions of invertible elements of formal groups, see for example~\cite[\emph{Proposition}~4.2]{LaubieMovahhediSalinier2002} for the case~$q = 1$, and~\cite[\emph{Corollaire}~3.12]{LaubieMovahhediSalinier2002} for general~$q$ not divisible by~$p$.
Note that when~$q$ is divisible by~$p$, for every~$n \ge 1$ we have~$i_n(f) = i_0(f)p^n$ \cite{Sen1969}, so we cannot have equality in~\eqref{eq:q-minimal}.
Our next result characterizes $q$-ramified power series when~$q \le p - 1$, and shows that $q$-ramified power series are generic among power series having the origin as a fixed point of multiplicity~$q + 1$.
We restrict to odd~$p$, as the case~$p = 2$ is treated in~\cite{LindahlRiveraLetelier2015,LindahlRiveraLetelier2013}.
As in~\cite[Theorem~E]{LindahlRiveraLetelier2013}, our characterization is best stated in terms of the ``iterative residue'', which is a dynamical variant of the residue fixed point index introduced by {\'E}calle in the complex setting.
For a power series~$f$ satisfying ${f(0) = 0}$ and ${f(z) \neq z}$, the \emph{iterative residue of~$f$} is defined by\footnote{We keep {\'E}calle's notation ``$\resit$'', an abbreviation of the French ``\emph{r{\'e}sidu it{\'e}ratif}''.}
\begin{equation}
\label{eq:17}
\resit(f)
\=
\frac{1}{2}\mult(f) - \ind(f).
\end{equation}
See, \emph{e.g.}, \cite[\S{I}]{Ecalle1975}, or~\cite[\S12]{Mil06c} for background on the iterative residue.
\begin{thm}[$q$-ramified power series]
\label{thm:q-ramification}
Let~$p$ be an odd prime number and $\K$ a field of characteristic~$p$.
Furthermore, let~$q$ be in~$\{1, \ldots, p - 1 \}$, and let~$f$ be a power series with coefficients in~$\K$ satisfying~$\mult(f) = q+1$.
Then~$f$ is $q$-ramified if and only if $\resit(f) \neq 0$.
\end{thm}
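To illustrate the theorem with a concrete example of our own choosing: over~$\F_3$ take $f(z) = z + z^2$, so $q = 1$, $\ind(f) = 0$, and $\resit(f) = \frac{1}{2}\mult(f) - \ind(f) = 1 \neq 0$; the theorem then predicts $i_0(f) = 1$ and $i_1(f) = 1 + 3 = 4$. A brute-force Python sketch with truncated power series composition confirms this:

```python
# Brute-force check (our example): lower ramification numbers of
# f(z) = z + z^2 over F_3, from compositions truncated mod z^N.
p, N = 3, 40                      # characteristic, truncation order

def compose(f, g):
    """Coefficients of f(g(z)) mod p, mod z^N (f[0] = g[0] = 0)."""
    res, pw = [0]*N, [1] + [0]*(N-1)   # pw holds g^j
    for c in f:
        res = [(x + c*t) % p for x, t in zip(res, pw)]
        pw = [sum(pw[i]*g[j-i] for i in range(j+1)) % p for j in range(N)]
    return res

def mult0(g):
    """Multiplicity of 0 as a fixed point: lowest degree of g(z) - z."""
    d = list(g); d[1] = (d[1] - 1) % p
    return next(k for k, c in enumerate(d) if c != 0)

f = [0, 1, 1] + [0]*(N - 3)       # f(z) = z + z^2
it, n_iter = f[:], 1
for n in range(2):
    while n_iter < p**n:          # raise it to the p^n-th iterate
        it, n_iter = compose(it, f), n_iter + 1
    print(n, mult0(it) - 1)       # i_0 = 1, i_1 = 4
```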
Let~$q \ge 1$ be an integer, $x_q$, $x_{q + 1}$, \ldots indeterminates over~$\K$, and consider the generic power series
\begin{displaymath}
f(\zeta) \= \zeta \left( 1 + \sum_{j = q}^{+ \infty} x_j \zeta^j \right).
\end{displaymath}
Then by Theorem~\ref{thm:closed-formula}, $x_q^{q + 1} \resit(f)$ is equal to
\begin{equation}
\label{eq:genericity polynomial}
\left( \frac{q + 1}{2} \right) x_q^{q + 1} + \sum_{\substack{\MultiInd \in \N^{q+1} \\ |\MultiInd| = q, \| \MultiInd \| = q }} (-1)^{q-\MultiInd_0}\binom{q-\MultiInd_0}{\MultiInd_{1},\ldots,\MultiInd_{q}}\prod_{j = 0}^q x_{q + j}^{\iota_j},
\end{equation}
which is a polynomial in~$x_q$, $x_{q + 1}$, \ldots, $x_{2q}$ with coefficients in~$\F_p$.\footnote{Note that this polynomial is isobaric of degree~$q(q + 1)$.}
Thus, the following corollary is a direct consequence of Theorem~\ref{thm:q-ramification}.
\begin{corr}
\label{c:genericity}
Let~$p$ be an odd prime number, $\K$ a field of characteristic~$p$, and~$q$ in~$\{1, \ldots, p - 1 \}$.
Then, among power series with coefficients in~$\K$ for which the origin is a fixed point of multiplicity~$q + 1$, those that are $q$-ramified are generic.
\end{corr}
The following corollary is essentially a reformulation of the previous corollary in terms of the \emph{Nottingham group~$\mathcal{N}(\K)$}, which is the group under composition formed by all wildly ramified power series with coefficients in~$\K$.
Since the work of Johnson~\cite{johnson1988}, this group has been extensively studied for its interesting group-theoretic properties.
See for instance the survey article~\cite{Camina2000}.
Given an integer~$q \ge 1$, consider the subgroup of~$\mathcal{N}(\K)$,
\begin{displaymath}
\mathcal{N}_q(\K)
\=
\{ f \text{ power series with coefficients in~$\K$ satisfying~$\mult(f) \ge q + 1$} \}.
\end{displaymath}
Note that in the case~$q = 1$, we have~$\mathcal{N}_1(\K) = \mathcal{N}(\K)$.
\begin{corr}
Let~$p$ be an odd prime number, $\K$ a field of characteristic~$p$, and~$q$ in~$\{1, \ldots, p - 1 \}$.
Then, an element~$f$ of~$\mathcal{N}_q(\K)$ is $q$-ramified if and only if~$\resit(f) \neq 0$.
In particular, $q$-ramified power series are generic in~$\mathcal{N}_q(\K)$.
\end{corr}
This answers~\cite[Question~1.4]{KallalKirkpatrick2019} for~$q$ in~$\{1, \ldots, p - 1\}$.
In the case $q=1$, Theorem~\ref{thm:q-ramification} was shown by Lindahl and the second named author~\cite[Theorem~E]{LindahlRiveraLetelier2013}.
This last result also applies to the case~$p=2$, and asserts that a power series of the form~\eqref{psform} with~$q = 1$ is $1$-ramified if and only if
\begin{displaymath}
\resit(f) \neq 0
\text{ and }
\resit(f) \neq 1.
\end{displaymath}
In the case $q=2$, Theorem~\ref{thm:q-ramification} was shown by the first named author~\cite[Theorem~1]{Fransson2017}, with~$\resit(f)$ replaced by~\eqref{eq:genericity polynomial}.
In the case~$q = 3$ and~$\K = \F_p$, Theorem~\ref{thm:q-ramification} was shown by Kallal and Kirkpatrick in the first version of~\cite{KallalKirkpatrick2019}, with~$\resit(f)$ replaced by~\eqref{eq:genericity polynomial}.
After a preliminary version of this paper was completed, we received a new version of~\cite{KallalKirkpatrick2019} proving Theorem~\ref{thm:q-ramification} when restricted to those~$q$ satisfying~$q^2 < p$, and with~$\resit(f)$ replaced by~\eqref{eq:genericity polynomial}.
Theorem~\ref{thm:q-ramification} and its corollaries are not expected to extend to the case~$q \ge p + 1$ not divisible by~$p$.
In fact, we give examples showing that the conclusion of Theorem~\ref{thm:q-ramification} is false for~$q = p + 1$, see Example~\ref{ex:p + 1} in~\S\ref{sec:furth-results-exampl}.
About genericity, if~$q \ge p + 1$ is not divisible by~$p$, then the results of Laubie and Sa{\"{\i}}ne in~\cite{LaubieSaine1998} imply that the inequality~\eqref{eq:q-minimal} fails in general, even for~$n = 1$.
Thus, for~$q \ge p + 1$ the $q$-ramified power series are not expected to be generic among power series having~$0$ as a fixed point of multiplicity~$q + 1$.
So, the following question arises naturally.
\begin{question}
Let~$p$ be a prime number, $\K$ a field of characteristic~$p$, and~$q \ge p + 1$ an integer that is not divisible by~$p$.
What are the lower ramification numbers of a generic power series in~$\mathcal{N}_q(\K)$? \footnote{Recently, the first named author answered this question completely in~\cite{Nor1909}.}
\end{question}
In the case~$q = p + 1$, it seems that for a generic power series~$f$ satisfying~$\mult(f) = q + 1$, we have for every~$n \ge 0$
\begin{displaymath}
i_n(f) = 1 + p + \cdots + p^{n + 1}.
\end{displaymath}
See also Example~\ref{ex:p + 1} in~\S\ref{sec:furth-results-exampl}, and the discussion following it.
\subsection{Periodic points of wildly ramified power series}
\label{sec:lower-bound}
Our next result is about the distribution of periodic points of a convergent $q$-ramified power series.
To state it, we introduce some notation.
Given an ultrametric field $(\K,|\cdot|)$, denote by
\begin{displaymath}
\mathcal{O}_{\K}
\=
\{ \zeta \in \K : |\zeta| \le 1 \},
\text{ and }
\mathfrak{m}_{\K}
\=
\{ \zeta \in \K : |\zeta| < 1 \},
\end{displaymath}
the ring of integers of~$\K$ and the maximal ideal of~$\mathcal{O}_{\K}$, respectively.
\begin{thm}[Periodic points lower bound]
\label{thm:lower-bound}
Let~$p$ be an odd prime number, let~$q$ be in~$\{1, \ldots, p - 1 \}$, and let~$(\K, |\cdot|)$ be an ultrametric field of characteristic~$p$.
Furthermore, let~$f$ be a power series with coefficients in~$\mathcal{O}_{\K}$ of the form
\[f(\zeta)
\equiv
\zeta(1 + a\zeta^q) \mod \langle \zeta^{q+2} \rangle, \text{ with } a\neq0.\]
Then, for every fixed point~$\zeta_0$ of~$f$ in~$\mathcal{O}_{\K}$ that is different from~$0$ we have $|\zeta_0|\geq |a|$, and for every periodic point~$\zeta_0$ of~$f$ in~$\mathcal{O}_{\K}$ that is not a fixed point, we have
\begin{equation}\label{normbound}
|\zeta_0|
\ge
|a| \cdot |\resit(f)|^\frac{1}{p}.
\end{equation}
\end{thm}
We give explicit examples for which equality holds in~\eqref{normbound} for every periodic point that is not fixed, when~$q \le p - 3$ (Example~\ref{e:optimality} in~\S\ref{sec:furth-results-exampl}).
We recall that by Theorem~\ref{thm:closed-formula} we can explicitly compute~$\resit(f)$, see also~\eqref{eq:genericity polynomial}, so the lower bound in Theorem~\ref{thm:lower-bound} is effective.
Note also that the lower bound given by Theorem~\ref{thm:lower-bound} is trivial in the case that~$f$ is not $q$-ramified, because by Theorem~\ref{thm:q-ramification} we have $\resit(f) = 0$ in this case.
Note that every convergent power series about~$0$ without constant term is conjugate to a power series with coefficients in~$\mathcal{O}_{\K}$ by a scale change.
So, the following corollary is a direct consequence of Theorem~\ref{thm:lower-bound}.
\begin{corr}
Let~$\K$ be an ultrametric field of positive characteristic, and let~$q \ge 1$ be an integer that is strictly smaller than the characteristic of~$\K$.
Moreover, let~$f$ be a $q$-ramified power series with coefficients in~$\K$ that converges on a neighborhood of the origin.
Then the origin is isolated as a periodic point of~$f$.
\end{corr}
Combined with Corollary~\ref{c:genericity} and~\cite[Theorem~E with $p = 2$]{LindahlRiveraLetelier2013}, the previous corollary implies the following result as a direct consequence.
\begin{corr}
\label{c:generic isolation}
Let~$p$ be a prime number and fix~$m$ in~$\{2, \ldots, p \}$.
Then, over a field of characteristic~$p$, a generic fixed point of multiplicity~$m$ is isolated as a periodic point.
\end{corr}
This corollary solves~\cite[Conjecture~1.2]{LindahlRiveraLetelier2013} in the affirmative, for generic multiple fixed points of a fixed and small multiplicity, as well as~\cite[Conjecture~4.3]{KallalKirkpatrick2019}.
In the case~$m = 2$, Corollary~\ref{c:generic isolation} is~\cite[Main Theorem]{LindahlRiveraLetelier2015}.
In the case~$q = 1$, Theorem~\ref{thm:lower-bound} was shown by Lindahl and the second named author~\cite[Theorem~B]{LindahlRiveraLetelier2015}.
This last result also applies to~$p = 2$.
In the case~${q = 2}$, and for power series with integer coefficients, Theorem~\ref{thm:lower-bound} was shown by Lindahl and the first named author~\cite[Theorem~A]{LindahlNordqvist2018}.
\subsection{Organization}
\label{sec:organziation}
In~\S\ref{sec:invariance} and in Appendix~\ref{app:resit}, we study the residue fixed point index over a field of arbitrary characteristic.
Theorem~\ref{thm:closed-formula} is shown in~\S\ref{sec:proof-of-closed-formula}, the invariance of the residue fixed point index under coordinate changes is shown in~\S\ref{invarresidue}, and in~\S\ref{sec:nf} we study normal forms.
All these results are used in the proof of Theorems~\ref{thm:q-ramification} and~\ref{thm:lower-bound}.
In Appendix~\ref{app:resit}, we study the behavior under iterations of the iterative residue.
In~\S\ref{shortproof} we give a short proof of Theorem~\ref{thm:q-ramification} that relies on a result of Laubie and Sa{\"{\i}}ne in~\cite{LaubieSaine1998}.
After some preliminaries on lower ramification numbers in~\S\ref{sec:lowramif}, this proof is given in~\S\ref{sec:proof of ramification}.
In~\S\ref{s:periodic points} we give a self-contained proof of Theorem~\ref{thm:q-ramification}, and the proof of Theorem~\ref{thm:lower-bound}.
We obtain both of these from our main technical result that we state as the ``Main Lemma'' at the beginning of~\S\ref{s:periodic points}.
The proof of this result occupies~\S\ref{s:proof of Main Lemma}.
In~\S\ref{sec:self-contained-proof}, we use the Main Lemma and the results in~\S\ref{sec:invariance} to obtain more information about the coefficients of the iterates of a wildly ramified power series as in Theorem~\ref{thm:q-ramification}.
This is stated as Proposition~\ref{prop:deltaiterates}, and it implies Theorem~\ref{thm:q-ramification} as a direct consequence.
It is also the main new ingredient in the proof of Theorem~\ref{thm:lower-bound}, which is given in~\S\ref{sec:lowerbound}.
In~\S\ref{sec:furth-results-exampl}, we gather several examples illustrating our results.
\subsection*{Acknowledgments}
We would like to thank the referees for their valuable comments and corrections that helped improve the exposition of the paper.
The first named author acknowledges support from \emph{Kungliga Vetenskapsakademien}, grant MG2018-0011, for his visit to the second named author at University of Rochester. He would also like to thank the second named author for his hospitality and for providing an excellent working environment during said visit. Finally, the first named author would also like to thank his supervisor Karl-Olof Lindahl for fruitful discussions in the early stages of this project.
The second named author acknowledges partial support from NSF grant DMS-1700291.
\section{The residue fixed point index}\label{sec:invariance}
In this section we prove the closed formula (Theorem~\ref{thm:closed-formula}) and the invariance under coordinate changes of the residue fixed point index.
The former is proved in~\S\ref{sec:proof-of-closed-formula}, and the latter is stated and proved in~\S\ref{invarresidue}.
In~\S\ref{sec:nf} we also use the residue fixed point index to study normal forms of wildly ramified power series.
Given a ring~$R$ and elements $a_1, \ldots, a_n$ of~$R$, denote by $\langle a_1,\ldots,a_n \rangle$ the ideal generated by~$a_1, \ldots, a_n$.
Furthermore, denote by~$R[[z]]$ the ring of power series with coefficients in~$R$ in the variable~$z$, and denote by~$\ord_z$ the $z$-adic valuation on~$R[[z]]$, \emph{i.e.}, for a nonzero~$f$ in~$R[[z]]$ the valuation~$\ord_z(f)$ is the unique integer~$j$ such that~$f$ is in~$z^jR[[z]] \setminus z^{j+1}R[[z]]$, and for~$f = 0$ we have $\ord_z(0) \= +\infty$.
\subsection{Closed formula for the residue fixed point index}
\label{sec:proof-of-closed-formula}
In this section we prove Theorem~\ref{thm:closed-formula}, after the following lemma.
\begin{lemma}\label{sumlittlea}
Let $\K$ be a field, $q\geq 1$ an integer, and~$f$ a power series with coefficients in~$\K$ of the form~\eqref{psform}.
Then~$- a_q^{q + 1} \ind(f)$ is equal to the coefficient of~$z^{q}$ in
\begin{equation}
\label{eq:sumlittlea}
\sum_{r=0}^qa_q^r(-1)^{q-r}(a_{q+1}z+\cdots+a_{2q}z^{q})^{q-r}.
\end{equation}
\end{lemma}
\begin{proof}
From the definition, $\ind(f)$ is equal to the coefficient of~$\frac{1}{z}$ in the Laurent series expansion about~$0$ of
\begin{equation}
\label{eq:laurentindf}
\begin{split}
\frac{1}{z-f(z)}
& =
-\frac{1}{a_qz^{q+1}+a_{q+1}z^{q+2}+\cdots+a_{2q}z^{2q+1}+\cdots}
\\ & =
-\frac{1}{a_qz^{q+1}}\cdot\frac{1}{1 + \frac{a_{q+1}}{a_q}z +\frac{a_{q+2}}{a_q}z^2+\cdots}
\\ & =
-\frac{1}{a_q^{q+1}z^{q+1}}\sum_{j=0}^{+ \infty}a_q^{q-j}(-1)^j\left(a_{q+1}z +a_{q+2}z^2+\cdots\right)^j.
\end{split}
\end{equation}
Thus, $-a_q^{q+1}\ind(f)$ is equal to the coefficient of~$z^q$ in the sum in \eqref{eq:laurentindf}.
Note that for $k\geq 2q+1$, the coefficient $a_k$ does not contribute to the coefficient of~$z^q$ in this sum.
Also, for $j>q$, the corresponding term in the sum in~\eqref{eq:laurentindf} contributes nothing to the coefficient of~$z^q$.
Hence, $-a_q^{q+1}\ind(f)$ is equal to the coefficient of~$z^q$ in~\eqref{eq:sumlittlea}, as claimed.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:closed-formula}]
In view of Lemma~\ref{sumlittlea}, it is sufficient to compute the coefficient of~$z^q$ in~\eqref{eq:sumlittlea}.
Using the multinomial theorem and regrouping, \eqref{eq:sumlittlea} is equal to
\begin{multline*}
\sum_{r=0}^qa_q^{r}(-1)^{q-r}\sum_{\substack{(\MultiInd_1, \ldots, \MultiInd_q) \in \N^q \\ \MultiInd_{1}+\ldots+\MultiInd_{q} =q-r}}\binom{q-r}{\MultiInd_{1},\ldots, \MultiInd_{q}}\prod_{j=1}^{q} (a_{q+j} z^j)^{i_{j}}
\\ =
\sum_{\substack{\MultiInd \in \N^{q + 1} \\ |\MultiInd| = q}}(-1)^{q-\MultiInd_0}\binom{q-\MultiInd_0}{\MultiInd_{1},\ldots,\MultiInd_{q}} \left(\prod_{j = 0}^q a_{q + j}^{\iota_j}\right)z^{\|\iota\|}.
\end{multline*}
In the last expression, the term in~$z^{q}$ is given by restricting the sum to those multi-indices~$\MultiInd$ satisfying~$\| \MultiInd \| = q$.
This proves the theorem.
\end{proof}
\subsection{The residue fixed point index is invariant}\label{invarresidue}
This section is devoted to the proof of the following proposition.
\begin{prop}
\label{conj}
Let~$\K$ be a field.
Then, among power series~$f$ with coefficients in~$\K$ and satisfying~$f(0)=0$ and~$f(z) \neq z$, the residue fixed point index is invariant under coordinate changes.
That is, for every power series~$\varphi$ with coefficients in~$\K$ such that~$\varphi(0) = 0$ and~$\varphi'(0) \neq 0$, the power series~$\widehat{f} \= \varphi \circ f \circ \varphi^{-1}$ satisfies
\begin{displaymath}
\ind(\widehat{f}) = \ind(f).
\end{displaymath}
\end{prop}
The proof of this proposition is given after the following lemma.
\begin{lemma}\label{powersarezero}
Let~$\K$ be a field and~$\varphi$ a power series with coefficients in~$\K$ such that~$\varphi(0) = 0$ and~$\varphi'(0) \neq 0$.
Then for every integer $N\geq 1$, the coefficient of~$\frac{1}{z}$ in the Laurent series expansion about~$0$ of
\[\frac{\varphi'(z)}{\varphi(z)^{N+1}}\]
is zero.
\end{lemma}
\begin{proof}
Put~$\varphi(z) = {\displaystyle \sum_{j=0}^{+\infty} a_jz^j}$ and for a field automorphism~$\sigma$ of~$\K$ put
\[\varphi^\sigma(z) \= \sum_{j=0}^{+\infty} \sigma(a_j)z^j.\]
If the characteristic of~$\K$ is zero or if the characteristic of~$\K$ is positive and it does not divide~$N$, then the lemma is clear as
\[\frac{\varphi'(z)}{\varphi(z)^{N+1}} = \left(-\frac{1}{N}\cdot \frac{1}{\varphi(z)^N}\right)'. \]
So we assume~$\K$ is of characteristic $p>0$ and that~$N$ is divisible by~$p$.
Let $\ell\geq 1$ be the largest integer such that $p^\ell \mid N$, and put $n\=p^{-\ell}N$.
Moreover, denote by $\frob\colon \K\to\K$ the Frobenius automorphism, given by $\frob(z) \= z^p$, and put $\sigma\=\frob^\ell$.
Then we have
\begin{equation}\label{eq:frob}
\frac{\varphi'(z)}{\varphi(z)^{N +1}}
=
\frac{(\varphi^\sigma)'(z^{p^\ell})}{\varphi^\sigma(z^{p^\ell})^{n+1}}
\cdot \left( \frac{\varphi^\sigma(z^{p^\ell})}{(\varphi^\sigma)'(z^{p^\ell})} \cdot \frac{\varphi'(z)}{\varphi(z)} \right).
\end{equation}
Since $n$ is not divisible by $p$, the coefficient of~$\frac{1}{z}$ in the Laurent series expansion about~$0$ of $\frac{(\varphi^{\sigma})'(z)}{(\varphi^{\sigma}(z))^{n+1}}$ is zero.
So the coefficient of~$\frac{1}{z^{p^\ell}}$ in the Laurent series expansion about~$0$ of~$\frac{(\varphi^\sigma)'(z^{p^\ell})}{\varphi^\sigma(z^{p^\ell})^{n+1}}$ is zero.
Together with
\[\ord_z\left(\frac{\varphi^{\sigma}(z^{p^\ell})}{(\varphi^\sigma)'(z^{p^\ell})}\cdot \frac{\varphi'(z)}{\varphi(z)}\right) = p^\ell-1,\]
this implies that the coefficient of~$\frac{1}{z}$ in the Laurent series expansion about~$0$ of~$\frac{\varphi'(z)}{\varphi(z)^{N +1}}$ is zero, which is the desired assertion.
\end{proof}
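The lemma can also be checked by direct computation over~$\F_p$. The following Python sketch (ours; the choices $p = 3$, $\varphi(z) = z + z^2$, and $N = 3$, which is divisible by~$p$, are illustrative) computes the coefficient of~$\frac{1}{z}$ in $\varphi'/\varphi^{N+1}$ via power series inversion:

```python
# Numerical sketch (our example): verify the lemma for p = 3,
# phi(z) = z + z^2 and N = 3 (divisible by p), by computing the
# coefficient of 1/z in phi'/phi^{N+1} over F_3.
p, M = 3, 12                      # characteristic, truncation mod z^M

def mul(a, b):
    return [sum(a[i]*b[k-i] for i in range(k+1)) % p for k in range(M)]

def inv(a):
    """Inverse of a unit power series (a[0] invertible) mod z^M over F_p."""
    b = [pow(a[0], -1, p)] + [0]*(M-1)
    for k in range(1, M):
        b[k] = (-b[0] * sum(a[i]*b[k-i] for i in range(1, k+1))) % p
    return b

N = 3
u  = [1, 1] + [0]*(M-2)           # phi(z) = z*u(z) with u(z) = 1 + z
du = [1, 2] + [0]*(M-2)           # phi'(z) = 1 + 2z
# phi'/phi^{N+1} = z^{-(N+1)} * phi'(z) * u(z)^{-(N+1)}, so the
# coefficient of 1/z is the z^N coefficient of phi'(z) * u(z)^{-(N+1)}.
s, ui = du, inv(u)
for _ in range(N + 1):
    s = mul(s, ui)
print(s[N])                       # expected: 0
```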
\begin{proof}[Proof of Proposition~\ref{conj}]
If $f'(0) \neq 1$, then~$\ind(f)$ is equal to $\frac{1}{1-f'(0)}$, which is easily seen to be invariant under coordinate changes.
Assume $f'(0) = 1$, and put
\begin{displaymath}
\Delta(z) \= f(z)-z
\text{ and }
q\=\ord_z(\Delta(z))-1.
\end{displaymath}
Our hypothesis $f(z)\neq z$ implies that $q$ is finite and our assumption $f'(0)=1$ implies that $q\geq 1$.
Let~$\varphi$ be a power series with coefficients in~$\K$ such that $\varphi(0)=0$ and $\varphi'(0) \neq 0$, and put
\begin{displaymath}
\widehat{f}\=\varphi^{-1}\circ f \circ \varphi
\text{ and }
\widehat{\Delta}(z) \= \widehat{f}(z)-z.
\end{displaymath}
Clearly $\widehat{f}'(0) = 1$, so $\ord_z(\widehat{\Delta}(z))\geq 2$.
Moreover,
\begin{equation}
\label{DeltaH}
\begin{split}
\Delta \circ \varphi(z)
& =
\varphi(\widehat{f}(z)) - \varphi(z)
\\ & =
\varphi(z + \widehat{\Delta}(z)) - \varphi(z)
\\ & \equiv
\varphi'(z)\widehat{\Delta}(z) \mod \langle \widehat{\Delta}(z)^2\rangle.
\end{split}
\end{equation}
Since $\ord_z(\Delta) = q+1$ and $\ord_z(\varphi') = 0$, we conclude that
\begin{displaymath}
\ord_z(\Delta\circ \varphi) = q+1
\text{ and }
{\ord_z(\varphi'\cdot \widehat{\Delta})} = \ord_z(\widehat{\Delta}).
\end{displaymath}
On the other hand, by (\ref{DeltaH}) we have $\ord_z(\Delta\circ\varphi-\varphi'\cdot\widehat{\Delta})\geq 2\ord_z(\widehat{\Delta})$ and therefore
\begin{displaymath}
\ord_z(\widehat{\Delta}) = \ord_z(\Delta \circ \varphi) = q+1.
\end{displaymath}
Using (\ref{DeltaH}) again we obtain
\[\Delta \circ \varphi \equiv \varphi'\cdot \widehat{\Delta} \mod \langle z^{2q+2} \rangle,\]
and conclude that~$\ind(\widehat{f})$ is equal to the coefficient of~$\frac{1}{z}$ in the Laurent series expansion about~$0$ of
\[\frac{\varphi'}{\Delta\circ \varphi}. \]
Putting
\[\left(\frac{1}{\Delta}\right)(z) \= \sum_{i = -(q+1)}^{+\infty} a_i z^i,\]
we have
\begin{displaymath}
\left(\frac{\varphi'}{\Delta\circ \varphi}\right)(z)
=
\sum_{N = 0}^{q} a_{- (N + 1)} \frac{\varphi'(z)}{\varphi(z)^{N + 1}}
+
\sum_{i = 0}^{+\infty} a_i \varphi(z)^i \varphi'(z).
\end{displaymath}
By Lemma~\ref{powersarezero}, the coefficient of~$\frac{1}{z}$ in the Laurent series expansion about~$0$ of the right-hand side is equal to that of~$a_{-1} \frac{\varphi'(z)}{\varphi(z)}$, which is clearly equal to $a_{-1}$.
This completes the proof of the proposition.
\end{proof}
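Although it plays no role in the argument, the invariance statement of Proposition~\ref{conj} can be sanity-checked numerically with truncated power series over~$\Q$. The following Python sketch (an illustration of ours, not part of the paper; all helper names are ours) computes the index as the coefficient of~$1/z$ in the Laurent expansion of~$1/(f(z)-z)$ and compares it before and after a coordinate change:

```python
from fractions import Fraction as Fr

N = 14  # truncation order: a series is its list of coefficients of z^0, ..., z^{N-1}

def mul(a, b):
    """Product of two truncated power series over Q."""
    c = [Fr(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] += ai * bj
    return c

def compose(f, g):
    """f(g(z)) mod z^N; g must have zero constant term."""
    res, pw = [Fr(0)] * N, [Fr(1)] + [Fr(0)] * (N - 1)
    for k in range(N):
        if f[k]:
            res = [r + f[k] * w for r, w in zip(res, pw)]
        pw = mul(pw, g)
    return res

def comp_inverse(phi):
    """psi with phi(psi(z)) = z mod z^N, for phi(0) = 0 and phi'(0) != 0."""
    psi = [Fr(0)] * N
    psi[1] = 1 / phi[1]
    for n in range(2, N):
        psi[n] -= compose(phi, psi)[n] / phi[1]
    return psi

def recip(u):
    """1/u mod z^N, for u with u[0] != 0."""
    inv = [Fr(0)] * N
    inv[0] = 1 / u[0]
    for n in range(1, N):
        inv[n] = -sum(u[k] * inv[n - k] for k in range(1, n + 1)) / u[0]
    return inv

def index(f):
    """Coefficient of 1/z in 1/(f(z) - z), for f(z) = z + (order >= 2 terms)."""
    delta = list(f); delta[1] -= 1
    v = next(i for i, c in enumerate(delta) if c)   # v = q + 1
    unit = delta[v:] + [Fr(0)] * v                  # delta(z) = z^v * unit(z)
    return recip(unit)[v - 1]

def series(*coeffs):
    return [Fr(c) for c in coeffs] + [Fr(0)] * (N - len(coeffs))

f = series(0, 1, 1, 3, 1)        # f(z) = z + z^2 + 3z^3 + z^4, so ind(f) = -3
phi = series(0, 1, 2, 0, 5)      # phi(z) = z + 2z^2 + 5z^4
fhat = compose(comp_inverse(phi), compose(f, phi))   # phi^{-1} o f o phi
print(index(f), index(fhat))     # the two indices agree
```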
\subsection{Normal forms in positive characteristic}\label{sec:nf}
Let~$\K$ be a field and~$f$ a power series with coefficients in~$\K$ such that~$q \= \mult(f) - 1$ is finite and satisfies~$q \ge 1$.
In the case of $\K=\C$, or more generally if~$\K$ is of characteristic zero, there exists a (formal) power series conjugating~$f$ to the polynomial
\begin{equation}
\label{eq:22}
z(1+z^q + \ind(f)z^{2q}).
\end{equation}
When~$\K$ is of characteristic zero, this polynomial is called the \emph{normal form of~$f$}.
This statement is false if~$\K$ is of positive characteristic.
Our goal in this section is to prove the following proposition giving a sufficient condition for~$f$ to have the same normal form up to a high order.
\begin{prop}\label{nf}
Let~$p$ be a prime number and~$\K$ a field of characteristic~$p$.
Moreover, let $q$ be in~$\{1, \ldots, p - 1 \}$, and let~$f$ be a power series with coefficients in~$\K$ satisfying~$\mult(f) = q + 1$.
Then $f$ is conjugate to a power series with coefficients in a finite extension of~$\K$, of the form
\begin{equation}
\label{eq:nff}
z(1 + z^q + \ind(f)z^{2q})\mod \langle z^{2q + p + 1} \rangle.
\end{equation}
\end{prop}
The proof of this proposition is given after the following lemma.
\begin{lemma}\label{removeterms}
Let~$\K$ be a field, $q \ge 1$ an integer, and~$f$ a power series with coefficients in~$\K$ of the form
\begin{displaymath}
f(z)
=
z\left(1 + \sum_{j=q}^{+\infty} a_jz^j\right), \text{ with } a_q \neq 0.
\end{displaymath}
Then, for every integer~$k \ge 1$ such that~$a_{q + k} \neq 0$ and~$k\neq q$ in $\K$, there is~$c$ in~$\K$ such that for the polynomial~$\varphi(z) \= z(1 + c z^k)$, we have
\[ \varphi \circ f \circ \varphi^{-1}(z) \equiv z(1 + a_qz^q + \cdots + a_{q+k-1}z^{q+k-1}) \mod \langle z^{q+k+2}\rangle.\]
\end{lemma}
\begin{proof}
Let~$c$ be a constant in~$\K$ to be chosen later, and put
\[ \varphi(z) \= z(1 + cz^k)
\text{ and }
\widehat{f}(z)
\=
\varphi \circ f \circ \varphi^{-1}(z)
= z \left( 1 + \sum_{j = q}^{+ \infty} \widehat{a}_jz^j \right).\]
Then we find
\begin{displaymath}
\begin{split}
\varphi \circ f(z)
& \equiv
z (1 + a_qz^q + \cdots + a_{q+k}z^{q+k})(1 + cz^k(1 + a_qz^q)^k)
\mod \langle z^{q+k+2} \rangle
\\ & \equiv
z(1 + cz^k + a_qz^q + \cdots + a_{q+k-1}z^{q+k-1}
\\&\qquad
+ ((k+1)ca_q + a_{q+k})z^{q+k}) \mod \langle z^{q+k+2} \rangle,
\end{split}
\end{displaymath}
and
\begin{displaymath}
\begin{split}
\widehat{f} \circ \varphi(z)
& \equiv
z (1 + cz^k)(1 + \widehat{a}_qz^q (1 + cz^k)^q + \widehat{a}_{q + 1} z^{q + 1} + \cdots + \widehat{a}_{q+k}z^{q+k})
\\ &
\quad \mod \langle z^{q+k+2} \rangle
\\ & \equiv
z(1 + cz^k + \widehat{a}_qz^q +\cdots + \widehat{a}_{q+k-1}z^{q+k-1}
\\&\qquad
+ ((q+1)c\widehat{a}_q + \widehat{a}_{q + k})z^{q+k}) \mod \langle z^{q+k+2} \rangle.
\end{split}
\end{displaymath}
Equating both expressions yields
\[a_q = \widehat{a}_q,\ldots, a_{q+k-1} = \widehat{a}_{q+k-1},\]
and
\[ \widehat{a}_{q + k} = (k - q)c a_q + a_{q+k}.\]
By our assumption $k\neq q$ in~$\K$, we can take~$c = - \frac{a_{q+k}}{a_q(k-q)}$ to obtain~$\widehat{a}_{q + k} = 0$.
\end{proof}
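As an illustrative sanity check (ours, not part of the proof), the conclusion of Lemma~\ref{removeterms} can be verified with truncated power series arithmetic over~$\F_5$; the compositional inverse of~$\varphi$ is computed coefficient by coefficient:

```python
p, q, k = 5, 1, 2      # the lemma needs k != q in K; here k - q = 1 is invertible mod 5
N = q + k + 2          # work mod z^N = z^5

def mul(a, b):
    """Product of two truncated power series (coefficient lists mod z^N) over F_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] = (c[i + j] + ai * bj) % p
    return c

def compose(f, g):
    """f(g(z)) mod z^N over F_p; g must have zero constant term."""
    res, pw = [0] * N, [1] + [0] * (N - 1)
    for t in range(N):
        if f[t]:
            res = [(r + f[t] * w) % p for r, w in zip(res, pw)]
        pw = mul(pw, g)
    return res

def comp_inverse(phi):
    """psi with phi(psi(z)) = z mod z^N, for phi(z) = z + (higher order)."""
    psi = [0, 1] + [0] * (N - 2)
    for n in range(2, N):
        psi[n] = (psi[n] - compose(phi, psi)[n]) % p
    return psi

a = {1: 2, 2: 4, 3: 3}               # a_q = 2, a_{q+1} = 4, a_{q+k} = 3
f = [0, 1, a[1], a[2], a[3]]         # f(z) = z(1 + 2z + 4z^2 + 3z^3)
inv = pow(a[q] * (k - q) % p, p - 2, p)
c = (-a[q + k] * inv) % p            # c = -a_{q+k} / (a_q (k - q)) in F_5
phi = [0, 1, 0, c, 0]                # phi(z) = z(1 + c z^k)
g = compose(phi, compose(f, comp_inverse(phi)))
print(g)    # the coefficient of z^{q+k+1} vanishes, the lower ones are unchanged
```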
\begin{proof}[Proof of Proposition~\ref{nf}]
Denote by~$a \neq 0$ the coefficient of~$z^{q+1}$ in~$f$, and let~$\gamma$ in a finite extension of~$\K$ be such that~$\gamma^q = a^{-1}$.
Note that the power series~$\widehat{f}(z) \= \gamma^{-1} f(\gamma z)$ satisfies~$\mult(\widehat{f}) = q + 1$ and that the coefficient of~$z^{q + 1}$ in~$\widehat{f}$ is equal to~$1$.
Since by assumption~$q$ is in~$\{1,\ldots, p - 1\}$, we can apply Lemma~\ref{removeterms} successively with~$k = 1, \ldots, q - 1$, to obtain that there is a polynomial~$\varphi$ with coefficients in~$\K[\gamma]$, such that~$\varphi(0) = 0$, $\varphi'(0) = 1$, and
\begin{displaymath}
g(z)
\=
\varphi \circ \widehat{f} \circ \varphi^{-1}(z)
\equiv
z (1 + z^q) \mod \langle z^{2q + 1} \rangle.
\end{displaymath}
Note that by Theorem~\ref{thm:closed-formula} the coefficient of~$z^{2q + 1}$ in~$g$ is equal to~$\ind(g)$ and by Proposition~\ref{conj} we have~$\ind(g) = \ind(\widehat{f}) = \ind(f)$.
Thus,
\begin{displaymath}
g(z)
\equiv
z (1 + z^q + \ind(f) z^{2q}) \mod \langle z^{2q + 2} \rangle.
\end{displaymath}
Finally, we apply Lemma~\ref{removeterms} successively with~$k = q + 1, \ldots, q + p - 1$, to obtain that there is a polynomial~$\phi$ with coefficients in~$\K[\gamma]$, such that~$\phi(0) = 0$, $\phi'(0) = 1$, and
\begin{displaymath}
\phi \circ g \circ \phi^{-1}(z)
\equiv
z (1 + z^q + \ind(f) z^{2q}) \mod \langle z^{2q + p + 1} \rangle.
\qedhere
\end{displaymath}
\end{proof}
\section{$q$-Ramified power series}\label{shortproof}
After some preliminaries on lower ramification numbers in~\S\ref{sec:lowramif}, in~\S\ref{sec:proof of ramification} we give a short proof of Theorem~\ref{thm:q-ramification} that relies on a result of Laubie and Sa{\"{\i}}ne in~\cite{LaubieSaine1998}.
See~\S\ref{sec:self-contained-proof} for a self-contained proof of Theorem~\ref{thm:q-ramification}.
\subsection{Lower ramification numbers}\label{sec:lowramif}
In this section we fix a prime number~$p$ and a field~$\K$ of characteristic~$p$.
Recall that for a power series~$f$ in~$\K[[\zeta]]$ and an integer~$n \ge 1$, the lower ramification number~$i_n(f)$ of~$f$ is
\[i_n(f) = \mult(f^{p^n})-1.\]
Lower ramification numbers have been studied by several authors, \emph{e.g.}, \cite{Sen1969,Keating1992,LaubieSaine1998,LaubieMovahhediSalinier2002}.
A central theorem of Sen~\cite[Theorem~1]{Sen1969} states that if for some $n\geq1$ we have $i_n(f) < +\infty$, then
\[i_n(f) \equiv i_{n-1}(f) \pmod{p^n}.\]
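For illustration (ours, not from the paper), Sen's congruence and the minimality phenomenon can be observed numerically for $f(\zeta) = \zeta + \zeta^2$ over~$\F_3$, which by Theorem~\ref{thm:q-ramification} is $1$-ramified, so that $i_n(f) = 1 + 3 + \cdots + 3^n$:

```python
p, N = 3, 16   # the truncation must exceed i_2(f) + 1 = 14

def mul(a, b):
    """Product of two truncated power series (coefficient lists mod z^N) over F_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] = (c[i + j] + ai * bj) % p
    return c

def compose(f, g):
    """f(g(z)) mod z^N over F_p; g must have zero constant term."""
    res, pw = [0] * N, [1] + [0] * (N - 1)
    for k in range(N):
        if f[k]:
            res = [(r + f[k] * w) % p for r, w in zip(res, pw)]
        pw = mul(pw, g)
    return res

def low_ram(g):
    """ord(g(z) - z) - 1, read off from a truncation mod z^N."""
    d = list(g); d[1] = (d[1] - 1) % p
    return next(i for i, c in enumerate(d) if c) - 1

f = [0, 1, 1] + [0] * (N - 3)        # f(z) = z + z^2 over F_3
f3 = compose(f, compose(f, f))       # f^3
f9 = compose(f3, compose(f3, f3))    # f^9 = (f^3)^3
print(low_ram(f), low_ram(f3), low_ram(f9))   # 1 4 13
```

Here $4 \equiv 1 \pmod{3}$ and $13 \equiv 4 \pmod{9}$, as Sen's theorem predicts.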
The following consequence of Sen's theorem shows that for~$q$ in~$\{1, \ldots, p - 1 \}$, a $q$-ramified power series can be thought of as minimal in the sense that for every integer~$n$ the lower ramification number~$i_n(f)$ is least possible.
\begin{prop}\label{prop:qramifisminramif}
Let~$p$ be a prime number and~$\K$ a field of characteristic $p$.
Then for every~$q$ in $\{1,\ldots,p-1\}$, and every power series~$f$ in~$\K[[\zeta]]$ satisfying $\mult(f) = q+1$, we have for every integer~$n \ge 1$
\begin{equation}
\label{ineq}
i_n(f) \geq q(1 + p + \cdots + p^n).
\end{equation}
\end{prop}
The proof of this proposition is given after the following lemma.
To state this lemma, we introduce some notation.
Let~$R$ be a ring, and~$f$ a power series in~$R[[z]]$ of the form~$f(z) \equiv z \mod \langle z^2 \rangle$.
Following~\cite[\emph{Exemple}~3.19]{RiveraLetelier2003} and~\cite{LindahlRiveraLetelier2013}, define recursively for every integer $m\geq 0$ the power series~$\Delta_m$ by
\begin{equation}
\label{eq:27}
\Delta_0(z) \= z,
\end{equation}
and for~$m \ge 1$ by
\begin{equation}
\label{eq:23}
\Delta_m(z)
\=
\Delta_{m-1}(f(z)) - \Delta_{m-1}(z).
\end{equation}
If~$R$ is of characteristic zero, then for every prime number~$p$ a direct computation shows that we have
\begin{equation}
\label{eq:7}
\Delta_p(z) \equiv f^p(z)-z \mod \langle p \rangle.
\end{equation}
In the case where~$R$ is of characteristic~$p$, we have~$\Delta_p(z) = f^p(z)-z$.
\begin{lemma}\label{lemma:delta}
Let~$p$ be a prime number and~$\K$ a field of characteristic $p$.
Given a wildly ramified power series~$f$ in~$\K[[\zeta]]$, let~$( \Delta_m )_{m = 0}^{+\infty}$ be as above.
Then for every integer~$m \ge 1$ we have
\begin{equation}
\label{eq:24}
\ord_\zeta(\Delta_m)-\ord_\zeta(\Delta_{m-1})
\geq
\ord_\zeta(\Delta_1)-1.
\end{equation}
\end{lemma}
\begin{proof}
Put~$q \= {\ord_\zeta(\Delta_1)-1}$, $f(\zeta) = \zeta\left(1 + \sum_{i = q}^{+ \infty} b_i\zeta^i\right)$, $r \= \ord_{\zeta}(\Delta_m)$, and~$\Delta_m(\zeta)=\sum_{i = r}^{+ \infty} a_i\zeta^i$.
Then
\[\Delta_{m+1}(\zeta)
=
\sum_{i =r}^{+ \infty} a_i\zeta^i\left[(1+b_q\zeta^q+\cdots)^i - 1\right], \]
and therefore~$\ord_\zeta(\Delta_{m+1}) \ge r+q$.
\end{proof}
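The recursion defining~$(\Delta_m)_{m = 0}^{+\infty}$ and the identity $\Delta_p(z) = f^p(z) - z$ in characteristic~$p$ are easy to check by machine. A small Python sketch (ours, purely illustrative), again for $f(\zeta) = \zeta + \zeta^2$ over~$\F_3$:

```python
p, N = 3, 10

def mul(a, b):
    """Product of two truncated power series (coefficient lists mod z^N) over F_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] = (c[i + j] + ai * bj) % p
    return c

def compose(f, g):
    """f(g(z)) mod z^N over F_p; g must have zero constant term."""
    res, pw = [0] * N, [1] + [0] * (N - 1)
    for k in range(N):
        if f[k]:
            res = [(r + f[k] * w) % p for r, w in zip(res, pw)]
        pw = mul(pw, g)
    return res

f = [0, 1, 1] + [0] * (N - 3)      # f(z) = z + z^2 over F_3
deltas = [[0, 1] + [0] * (N - 2)]  # Delta_0(z) = z
for m in range(p):                 # Delta_{m+1} = Delta_m o f - Delta_m
    prev = deltas[-1]
    deltas.append([(x - y) % p for x, y in zip(compose(prev, f), prev)])

f3 = compose(f, compose(f, f))
f3_minus_z = [(x - (1 if i == 1 else 0)) % p for i, x in enumerate(f3)]
ords = [next(i for i, c in enumerate(d) if c) for d in deltas]
print(ords, deltas[p] == f3_minus_z)
```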
\begin{proof}[Proof of Proposition \ref{prop:qramifisminramif}]
We prove~\eqref{ineq} by induction on~$n$.
To prove~\eqref{ineq} for~$n = 1$, let~$(\Delta_m)_{m = 0}^{+ \infty}$ be as in~\eqref{eq:27} and~\eqref{eq:23}.
Then for every integer~$m \ge 1$ we have $\ord_\zeta(\Delta_m) - \ord_\zeta(\Delta_{m-1}) \geq q$ by Lemma~\ref{lemma:delta}.
An induction argument combined with~\eqref{eq:7} gives
\begin{displaymath}
i_1(f) = \ord_\zeta(\Delta_p) - 1 \geq qp = p i_0(f).
\end{displaymath}
But by Sen's theorem we have $i_1(f) \equiv i_0(f) \pmod{p}$, so
\begin{equation}
\label{firstinduc}
i_1(f) \geq qp + q.
\end{equation}
This proves~\eqref{ineq} for~$n = 1$.
Let~$n \ge 1$ be an integer for which~\eqref{ineq} holds, and put $g(\zeta) \= f^{p^n}(\zeta)$.
Let~$(\widehat{\Delta}_m)_{m = 0}^{+ \infty}$ be the sequence~$(\Delta_m)_{m = 0}^{+ \infty}$ given by~\eqref{eq:27} and~\eqref{eq:23} with~$f$ replaced by~$g$.
Then by Lemma~\ref{lemma:delta} for every integer~$m \ge 1$ we have
\[\ord_\zeta(\widehat{\Delta}_m)-\ord_\zeta(\widehat{\Delta}_{m-1})
\geq
\ord_{\zeta}(\widehat{\Delta}_1) - 1
=
i_0(g). \]
An induction argument together with~\eqref{eq:7} implies
\begin{equation}
\label{eq:26}
i_{n + 1}(f)
=
i_1(g)
=
\ord_\zeta(\widehat{\Delta}_p) - 1
\geq
p i_0(g)
=
p i_n(f).
\end{equation}
If the inequality in our induction assumption~\eqref{ineq} is strict, then we have
\begin{displaymath}
i_{n + 1}(f)
\ge
p + p q (1 + p + \cdots + p^n)
>
q(1 + p + \cdots + p^{n + 1}).
\end{displaymath}
If equality holds in~\eqref{ineq}, then by Sen's theorem we have
\begin{displaymath}
i_{n + 1}(f) \equiv q(1 + p + \cdots + p^n) \pmod{p^{n+1}}.
\end{displaymath}
Combined with~\eqref{eq:26}, this implies
\begin{displaymath}
i_{n+1}(f)
\geq
q + p q (1 + p + \cdots + p^n)
=
q(1 + p + \cdots + p^{n + 1}).
\end{displaymath}
In all the cases we obtain~\eqref{ineq} with~$n$ replaced by~$n + 1$.
This completes the proof of the induction step, and of the proposition.
\end{proof}
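The lower bound~\eqref{ineq} can likewise be observed numerically. The following sketch (ours, not part of the proof) checks it for $n = 1, 2$ and the family $f(\zeta) = \zeta + \zeta^2 + a_1\zeta^3$ over~$\F_3$, which includes the case $\resit(f) = 0$, where the inequality can be strict:

```python
p, q, N = 3, 1, 16

def mul(a, b):
    """Product of two truncated power series (coefficient lists mod z^N) over F_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] = (c[i + j] + ai * bj) % p
    return c

def compose(f, g):
    """f(g(z)) mod z^N over F_p; g must have zero constant term."""
    res, pw = [0] * N, [1] + [0] * (N - 1)
    for k in range(N):
        if f[k]:
            res = [(r + f[k] * w) % p for r, w in zip(res, pw)]
        pw = mul(pw, g)
    return res

def p_th_iterate(g):
    r = g
    for _ in range(p - 1):
        r = compose(r, g)
    return r

bounds_ok = True
for a1 in range(p):
    g = [0, 1, 1, a1] + [0] * (N - 4)      # f(z) = z + z^2 + a1*z^3, mult(f) = q + 1
    for n in (1, 2):
        g = p_th_iterate(g)                # now g = f^{p^n}
        d = list(g); d[1] = (d[1] - 1) % p
        bound = q * sum(p**j for j in range(n + 1))   # q(1 + p + ... + p^n)
        # i_n(f) >= bound means every coefficient of f^{p^n}(z) - z below z^{bound+1} vanishes
        bounds_ok = bounds_ok and all(c == 0 for c in d[:bound + 1])
print(bounds_ok)
```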
\subsection{Proof of Theorem~\ref{thm:q-ramification}}
\label{sec:proof of ramification}
In the proof of Theorem~\ref{thm:q-ramification} we use the following result of Laubie and Sa{\"{\i}}ne.
\begin{prop}[\cite{LaubieSaine1998}, Corollary~1]
\label{laubiecorr}
Let~$p$ be a prime number, $\K$ a field of characteristic~$p$, and~$f$ in~$\K[[\zeta]]$ such that $f(0)=0$ and $f'(0)=1$.
If
\begin{displaymath}
p\nmid i_0(f)
\text{ and }
i_1(f) < (p^2-p+1)i_0(f),
\end{displaymath}
then for every integer $n\geq 1$ we have
\[ i_n(f) = i_0(f) + (1 + p + \cdots + p^n)(i_1(f)-i_0(f)). \]
\end{prop}
In view of this result, the proof of Theorem~\ref{thm:q-ramification} reduces to showing that for~$q$ in~$\{1, \ldots, p - 1 \}$ and~$f$ in~$\K[[\zeta]]$ satisfying~$i_0(f) = q$, the conditions
\begin{displaymath}
i_1(f) = q (p + 1)
\text{ and }
\resit(f) \neq 0
\end{displaymath}
are equivalent.
The following is the key ingredient, together with Proposition~\ref{nf} and the invariance of the residue fixed point index under coordinate changes shown in~\S\ref{sec:invariance}.
\begin{prop}\label{prop:deltaiteratesshort}
Let~$p$ be an odd prime number and consider the rings
\begin{displaymath}
\Z_{(p)} \= \left\{\frac{m}{n} \in \Q : m, n \in \Z, p\nmid n \right\},
\end{displaymath}
\begin{displaymath}
F_1 \= \Z_{(p)}[x_0, x_1],
\text { and }
F_\infty \= \Z_{(p)}[x_0, x_1,x_2,\ldots].
\end{displaymath}
Then for each integer~$q \ge 1$ not divisible by~$p$, the power series~$\widehat{f}$ in~$F_\infty[[\zeta]]$ defined by
\[ \widehat{f}(\zeta)
\=
\zeta\left(1 + x_0 \zeta^q + x_1\zeta^{2q} + \zeta^{2q}\sum_{i=1}^{+ \infty} x_{i+1}\zeta^i\right), \]
satisfies
\begin{displaymath}
\widehat{f}^p(\zeta)
\equiv
\zeta \left(1 + x_0^{p - 1} \left( x_0^2 \frac{q + 1}{2} - x_1 \right) \zeta^{q(p + 1)}\right) \mod \langle p, \zeta^{q(p + 1) + 2} \rangle.
\end{displaymath}
\end{prop}
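The congruence of Proposition~\ref{prop:deltaiteratesshort} can be tested exhaustively for small parameters. The sketch below (ours, purely illustrative) takes $p = 3$, $q = 1$ and runs over all specializations of $x_0 \neq 0$, $x_1$, $x_2$, $x_3$ in~$\F_3$:

```python
from itertools import product

p, q, N = 3, 1, 6                 # work mod z^N with N = q(p+1) + 2

def mul(a, b):
    """Product of two truncated power series (coefficient lists mod z^N) over F_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] = (c[i + j] + ai * bj) % p
    return c

def compose(f, g):
    """f(g(z)) mod z^N over F_p; g must have zero constant term."""
    res, pw = [0] * N, [1] + [0] * (N - 1)
    for k in range(N):
        if f[k]:
            res = [(r + f[k] * w) % p for r, w in zip(res, pw)]
        pw = mul(pw, g)
    return res

half = (q + 1) * pow(2, p - 2, p) % p      # (q+1)/2 in F_p
mismatches = []
for x0, x1, x2, x3 in product(range(p), repeat=4):
    if x0 == 0:
        continue
    f = [0, 1, x0, x1, x2, x3]             # z(1 + x0 z + x1 z^2 + x2 z^3 + x3 z^4)
    f3 = compose(f, compose(f, f))
    beta = pow(x0, p - 1, p) * (x0 * x0 * half - x1) % p
    if f3 != [0, 1, 0, 0, 0, beta]:        # f^p = z(1 + beta z^{q(p+1)}) mod z^{q(p+1)+2}
        mismatches.append((x0, x1, x2, x3))
print(mismatches)    # expected: []
```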
The proof of Theorem~\ref{thm:q-ramification} is given at the end of this section, after the proof of this proposition.
To prove this proposition we use the strategy introduced in~\cite[\emph{Exemple}~3.19]{RiveraLetelier2003} and~\cite{LindahlRiveraLetelier2013}, using~\eqref{eq:27} and~\eqref{eq:23}.
We also use the following elementary lemma.
\begin{lemma}\label{wilson}
Let~$p$ be an odd prime number, $a$ and~$b$ in~$\F_p$ such that $a\neq 0$, and let~$w \colon \F_p \to \F_p$ be defined by~$w(n) \= an +b$.
Denoting $s' \= -a^{-1}b$, we have
\begin{displaymath}
\prod_{s \in \F_p \setminus \{s'\}} w(s) = -1
\text{ and }
\sum_{s \in \F_p \setminus\{s'\}} \frac{1}{w(s)} = 0.
\end{displaymath}
\end{lemma}
\begin{proof}
We use the fact that the nonconstant affine map~$w$ is a bijection of~$\F_p$.
Together with Wilson's theorem this implies the first assertion.
The second assertion follows from the fact that, since~$p$ is odd, the sum of all nonzero elements in $\F_p$ is~$0$.
\end{proof}
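Lemma~\ref{wilson} is elementary and quickly checked by machine; the sketch below (ours) verifies both identities for all admissible $(a, b)$ and several odd primes:

```python
fails = []
for p in (3, 5, 7, 11):
    for a in range(1, p):
        for b in range(p):
            s_star = (-b * pow(a, p - 2, p)) % p          # the zero of w(s) = a s + b
            vals = [(a * s + b) % p for s in range(p) if s != s_star]
            prod = 1
            for v in vals:
                prod = prod * v % p
            inv_sum = sum(pow(v, p - 2, p) for v in vals) % p
            if prod != p - 1 or inv_sum != 0:             # -1 = p - 1 in F_p
                fails.append((p, a, b))
print(fails)    # expected: []
```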
\begin{proof}[Proof of Proposition~\ref{prop:deltaiteratesshort}]
Let~$(\Delta_m)_{m = 0}^{+ \infty}$ be given by~\eqref{eq:27} and~\eqref{eq:23}.
For each integer~$m \ge 1$ define~$\alpha_m$ and~$\beta_m$ in the ring~$F_1 \= \Z_{(p)}[x_0, x_1]$ by the recursive relations
\begin{align}
\alpha_{m+1} & \= x_0 (qm+1)\alpha_m, \label{receq1short}
\\
\beta_{m+1} & \= \left[x_0^2 \binom{qm+1}{2} + x_1(qm+1)\right]\alpha_m + x_0 (q(m+1)+1)\beta_m,
\label{receq2short}
\end{align}
with initial conditions $\alpha_1 \= x_0$ and $\beta_1 \= x_1$.
We prove by induction that for every integer~$m \ge 1$ we have
\begin{equation}
\label{deltaclaimshort}
\Delta_m(\zeta)
\equiv
\alpha_m\zeta^{qm+1} + \beta_m\zeta^{q(m+1)+1} \mod \langle \zeta^{q(m + 1) + 2} \rangle.
\end{equation}
For $m=1$ this holds by definition. Assume now that it is valid for some $m\geq1$.
Then
\begin{displaymath}
\begin{split}
\Delta_{m+1}(\zeta)
& = \Delta_m(\widehat{f}(\zeta)) - \Delta_m(\zeta)
\\ &\equiv \alpha_m\zeta^{qm+1}\left[\left(1 + x_0 \zeta^q + x_1\zeta^{2q} + \cdots\right)^{qm+1} - 1\right]\\
&\qquad + \beta_m\zeta^{q(m+1)+1}\left[\left(1 + x_0 \zeta^q + x_1\zeta^{2q} + \cdots\right)^{q(m+1)+1}-1\right]
\\ & \qquad
\mod \langle \zeta^{q(m+2)+2} \rangle
\\
&\equiv \alpha_m\left[ \zeta^{q(m+1)+1}x_0 (qm+1) + \zeta^{q(m+2)+1}\left( x_0^2 \binom{qm+1}{2} + x_1 (qm+1)\right)\right] \\
&\qquad+ \beta_m\zeta^{q(m+2)+1}x_0 (q(m+1)+1) \mod \langle \zeta^{q(m+2)+2} \rangle.
\end{split}
\end{displaymath}
In view of~\eqref{receq1short} and~\eqref{receq2short}, this proves the induction step and~\eqref{deltaclaimshort}.
By~\eqref{eq:7} and~\eqref{deltaclaimshort}, to prove the proposition it is sufficient to prove
\begin{equation}
\label{eq:5short}
\alpha_p \equiv 0 \mod p F_1
\text{ and }
\beta_p \equiv x_0^{p - 1} \left( x_0^2 \frac{q + 1}{2} - x_1 \right) \mod p F_1.
\end{equation}
We do this by explicitly solving the linear recurrences~\eqref{receq1short} and~\eqref{receq2short}.
By telescoping \eqref{receq1short}, we obtain for every~$m \ge 1$ the solution
\begin{equation}
\label{alpham}
\alpha_m = x_0^m \prod_{j=1}^{m-1}(qj+1).
\end{equation}
Taking~$m = p$ we obtain the first congruence in~\eqref{eq:5short}.
On the other hand, inserting \eqref{alpham} in~\eqref{receq2short} yields
\[\beta_{m+1} = \left(x_0^2 \frac{qm}{2} + x_1 \right) x_0^m \prod_{j=1}^m (qj+1) + x_0 (q(m+1)+1)\beta_m.\]
Noting that for every~$j \ge 0$ we have~$qj + 1 > 0$, we utilize the substitution
\[\beta^*_m \= \beta_m \bigg/ \left( x_0^{m - 1} \prod_{j=1}^m (qj+1) \right),\]
which yields
\[\beta^*_{m+1}
=
\beta^*_{m} + \left(x_0^2 \frac{qm}{2} + x_1 \right)\frac{1}{q(m+1)+1}.\]
Using $\beta^*_1 = \frac{x_1}{q + 1}$, we obtain inductively for every~$m \ge 1$
\begin{displaymath}
\beta^*_m
=
\sum_{r=1}^{m} \left(x_0^2 \frac{q(r - 1)}{2} + x_1 \right)\frac{1}{qr + 1}.
\end{displaymath}
Equivalently,
\begin{equation}\label{betam}
\beta_m
=
x_0^{m - 1} \sum_{r=1}^{m}\left[ \left( x_0^2 \frac{q(r - 1)}{2} + x_1 \right) \prod_{j \in \{1, \ldots, m\} \setminus \{ r \}} (qj+1) \right].
\end{equation}
When~$m = p$ every term in the sum above contains a factor~$p$, except for the unique~$r$ in~$\{1, \ldots, p \}$ such that $qr \equiv -1 \pmod{p}$.
Denote by~$r_0$ this value of~$r$.
Then by Lemma~\ref{wilson}, we have
\begin{displaymath}
\begin{split}
\beta_p
&\equiv
x_0^{p - 1} \left(\frac{ x_0^2 q(r_0 - 1)}{2} + x_1\right) \prod_{j \in \{ 1, \ldots, p \} \setminus \{ r_0 \}} (qj+1) \mod p F_1
\\ &\equiv
x_0^{p - 1} \left(x_0^2 \frac{q+1}{2} - x_1\right) \mod p F_1.
\end{split}
\end{displaymath}
This proves the second congruence in~\eqref{eq:5short} and thus the proposition.
\end{proof}
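The closed forms \eqref{alpham} and \eqref{betam}, and the congruences \eqref{eq:5short}, can be double-checked with exact rational arithmetic. The following sketch (ours, not part of the proof) compares the recursions \eqref{receq1short} and \eqref{receq2short} against the closed forms for one sample integer specialization of $x_0$ and $x_1$:

```python
from fractions import Fraction as Fr
from math import comb, prod

p, q = 5, 2                      # q not divisible by p
x0, x1 = Fr(2), Fr(7)            # a sample specialization of x_0, x_1

# the recursions (receq1short) and (receq2short)
alpha, beta = {1: x0}, {1: x1}
for m in range(1, p):
    alpha[m + 1] = x0 * (q*m + 1) * alpha[m]
    beta[m + 1] = (x0**2 * comb(q*m + 1, 2) + x1 * (q*m + 1)) * alpha[m] \
                  + x0 * (q*(m + 1) + 1) * beta[m]

# the closed forms (alpham) and (betam)
def alpha_cf(m):
    return x0**m * prod(q*j + 1 for j in range(1, m))

def beta_cf(m):
    return x0**(m - 1) * sum(
        (x0**2 * Fr(q*(r - 1), 2) + x1)
        * prod(q*j + 1 for j in range(1, m + 1) if j != r)
        for r in range(1, m + 1))

closed_forms_ok = all(alpha[m] == alpha_cf(m) and beta[m] == beta_cf(m)
                      for m in range(1, p + 1))

# the congruences (eq:5short): alpha_p = 0 and beta_p = x0^{p-1}(x0^2 (q+1)/2 - x1) mod p
d = beta[p] - x0**(p - 1) * (x0**2 * Fr(q + 1, 2) - x1)
congruences_ok = (alpha[p].denominator % p != 0 and alpha[p].numerator % p == 0
                  and d.denominator % p != 0 and d.numerator % p == 0)
print(closed_forms_ok, congruences_ok)
```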
\begin{proof}[Proof of Theorem~\ref{thm:q-ramification}]
By Proposition~\ref{nf} and our hypothesis that~$q$ is in~$\{1, \ldots, p-1\}$, we have that~$f$ is conjugate to a power series~$g$ in~$\K[[\zeta]]$ of the form
\[g(\zeta) \equiv \zeta(1 + \zeta^q + \ind(f)\zeta^{2q}) \mod \langle \zeta^{3q+2} \rangle.\]
Since
\begin{displaymath}
i_0(g) = i_0(f) = q
\text{ and }
i_1(g) = i_1(f),
\end{displaymath}
by Proposition~\ref{laubiecorr} the series~$f$ is $q$-ramified if and only if~$i_1(g) = q (p + 1)$.
Let~$\Z_{(p)}$ and~$F_\infty$ be as in Proposition~\ref{prop:deltaiteratesshort}.
Moreover, let~$h \colon F_\infty \to \K$ be the unique ring homomorphism extending the reduction map~$\Z_{(p)} \to \F_p$, such that~$h(x_0) = 1$, $h(x_1) = \ind(f)$, and such that for every~$i \ge 2$ the element~$h(x_i)$ of~$\K$ is the coefficient of~$\zeta^{2q + i}$ in~$g$.
Then~$h$ extends to a ring homomorphism~$F_{\infty}[[ \zeta ]] \to \K [[ \zeta ]]$ that maps~$\widehat{f}$ to~$g$.
So, Proposition~\ref{prop:deltaiteratesshort} implies
\begin{displaymath}
g^{p}(\zeta) - \zeta
\equiv
\resit(f) \zeta^{q(p+1)+1} \mod \langle \zeta^{q(p+1)+2} \rangle.
\end{displaymath}
This proves that~$i_1(g) = q(p + 1)$ if and only if~$\resit(f) \neq 0$ and completes the proof of the theorem.
\end{proof}
\section{Periodic points of $q$-ramified power series}
\label{s:periodic points}
In this section we give a self-contained proof of Theorem~\ref{thm:q-ramification}, and the proof of Theorem~\ref{thm:lower-bound}.
In doing so, we obtain more information about the coefficients of the iterates of a wildly ramified power series as in Theorem~\ref{thm:q-ramification} (Proposition~\ref{prop:deltaiterates} in~\S\ref{sec:self-contained-proof}).
This extra information is used to prove Theorem~\ref{thm:lower-bound} in~\S\ref{sec:lowerbound}.
The main ingredients in the proofs of Theorems~\ref{thm:q-ramification} and~\ref{thm:lower-bound} are the results on the residue fixed point index in~\S\ref{sec:invariance}, and the following result that is proved in~\S\ref{s:proof of Main Lemma}.
\begin{main}
Let~$p$ be an odd prime number, and let~$\Z_{(p)}$, $F_1$ and~$F_{\infty}$ be the rings defined in Proposition~\ref{prop:deltaiteratesshort}.
Moreover, let~$q \ge 1$ be an integer that is not divisible by~$p$, and~$\ell \ge 1$ an integer satisfying
\begin{displaymath}
\ell \equiv q \pmod{p},
\text{ and either }
\ell \le p - 1
\text{ or }
2\ell + 1 \le q.
\end{displaymath}
Then the power series~$\widehat{f}$ in~$F_\infty[[\zeta]]$ defined by
\[\widehat{f}(\zeta) \= \zeta\left(1 + x_0\zeta^{q} + x_1\zeta^{q+\ell} + \zeta^{q+2\ell}\sum_{i=1}^\infty x_{i+1}\zeta^{i}\right), \]
satisfies the following property: There are~$\beta$ and~$\gamma$ in~$F_1$ such that
\begin{align}
\label{e:main beta}
\beta
& \equiv
\begin{cases}
x_0^{p - 1} \left( x_0^2 \frac{q + 1}{2} - x_1 \right) \mod p F_1
& \text{if $q \le p - 1$};
\\
- x_0^{p - 1} x_1 \mod p F_1
& \text{if $q \ge p + 1$},
\end{cases}
\\
\label{e:main gamma}
\gamma
& \equiv
\begin{cases}
- x_0^{p - 2} \left(x_0^2 \frac{q + 1}{2} - x_1 \right)^2 \mod p F_1
& \text{if $q \le p - 1$};
\\
- x_0^{p - 2} x_1^2 \mod p F_1
& \text{if $q \ge p + 1$},
\end{cases}
\intertext{ and }
\widehat{f}^p(\zeta)
& \equiv
\zeta \left(1 + \beta \zeta^{qp + \ell} + \gamma \zeta^{qp + 2\ell} \right) \mod \langle p, \zeta^{qp + 2\ell + 2} \rangle.
\end{align}
\end{main}
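For the case $q \ge p + 1$ of the Main Lemma, an exhaustive small check is again possible. The sketch below (ours, purely illustrative) takes $p = 3$, $q = 4$, $\ell = 1$ and runs over all specializations of $x_0$, $x_1$, $x_2$ in~$\F_3$:

```python
from itertools import product

p, q, l = 3, 4, 1                # q >= p + 1, l = q mod p, l <= p - 1
N = q*p + 2*l + 2                # work mod z^N = z^16 over F_p

def mul(a, b):
    """Product of two truncated power series (coefficient lists mod z^N) over F_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] = (c[i + j] + ai * bj) % p
    return c

def compose(f, g):
    """f(g(z)) mod z^N over F_p; g must have zero constant term."""
    res, pw = [0] * N, [1] + [0] * (N - 1)
    for k in range(N):
        if f[k]:
            res = [(r + f[k] * w) % p for r, w in zip(res, pw)]
        pw = mul(pw, g)
    return res

bad = []
for x0, x1, x2 in product(range(p), repeat=3):
    f = [0] * N
    f[1] = 1; f[q + 1] = x0; f[q + l + 1] = x1; f[q + 2*l + 2] = x2
    fp = compose(f, compose(f, f))                  # f^p
    beta = (-pow(x0, p - 1, p) * x1) % p            # -x0^{p-1} x1
    gamma = (-pow(x0, p - 2, p) * x1 * x1) % p      # -x0^{p-2} x1^2
    expect = [0] * N
    expect[1] = 1; expect[q*p + l + 1] = beta; expect[q*p + 2*l + 1] = gamma
    if fp != expect:
        bad.append((x0, x1, x2))
print(bad)    # expected: []
```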
\subsection{Self-contained proof of Theorem~\ref{thm:q-ramification}}
\label{sec:self-contained-proof}
The goal of this section is to deduce from the Main Lemma the following proposition, which is a more precise version of Theorem~\ref{thm:q-ramification}.
It is also one of the main ingredients of the proof of Theorem~\ref{thm:lower-bound}, which is given in~\S\ref{sec:lowerbound}.
\begin{prop}\label{prop:deltaiterates}
Let~$p$ be an odd prime number and~$\K$ a field of characteristic~$p$.
Furthermore, let~$q$ be in~$\{1, \ldots, p-1 \}$, let~$f$ in~$\K[[\zeta]]$ be of the form
\[f(\zeta)
\equiv
\zeta(1 + a_0\zeta^q + a_1\zeta^{2q}) \mod \langle \zeta^{3q+2}\rangle, \text{ with } a_0\neq 0, \]
and for each integer~$n\geq1$, put
\begin{align*}
\chi_n
& \=
a_0^{\frac{p^{n+1}-1}{p-1}}\left(\frac{q+1}{2}-\frac{a_1}{a_0^2}\right)^{\frac{p^n-1}{p-1}},
\intertext{ and }
\psi_n
& \=
-a_0^{\frac{p^{n+1}-1}{p-1}+1}\left(\frac{q+1}{2}-\frac{a_1}{a_0^2}\right)^{\frac{p^n-1}{p-1}+1}.
\end{align*}
Then we have
\begin{displaymath}
f^{p^n}(\zeta) - \zeta
\equiv
\chi_n\zeta^{q\frac{p^{n+1}-1}{p-1}+1} + \psi_n\zeta^{q\frac{p^{n+1}-1}{p-1}+q+1} \mod \langle \zeta^{q\frac{p^{n+1}-1}{p-1}+q+2} \rangle.
\end{displaymath}
In particular, $f$ is $q$-ramified if and only if
\begin{displaymath}
\resit(f) = \frac{q + 1}{2} - \frac{a_1}{a_0^2} \neq 0.
\end{displaymath}
\end{prop}
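The case $n = 1$ of Proposition~\ref{prop:deltaiterates} can be checked exhaustively for $p = 3$, $q = 1$; the coefficients of~$f$ beyond $\zeta^4$ are unconstrained, so the sketch below (ours, purely illustrative) also varies the $\zeta^5$ coefficient:

```python
p, q, N = 3, 1, 7        # check n = 1: work mod z^7 = z^{q d_1 + q + 2}

def mul(a, b):
    """Product of two truncated power series (coefficient lists mod z^N) over F_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] = (c[i + j] + ai * bj) % p
    return c

def compose(f, g):
    """f(g(z)) mod z^N over F_p; g must have zero constant term."""
    res, pw = [0] * N, [1] + [0] * (N - 1)
    for k in range(N):
        if f[k]:
            res = [(r + f[k] * w) % p for r, w in zip(res, pw)]
        pw = mul(pw, g)
    return res

bad = []
for a0 in (1, 2):
    for a1 in range(p):
        for t in range(p):                   # f is only constrained mod zeta^{3q+2} = z^5
            f = [0, 1, a0, a1, 0, t, 0]      # f(z) = z + a0 z^2 + a1 z^3 + t z^5
            f3 = compose(f, compose(f, f))
            r = (a0*a0 - a1) % p             # a0^2 * resit(f), since (q+1)/2 = 1 in F_3
            chi1 = a0 * a0 * r % p           # chi_1 = a0^2 (a0^2 - a1)
            psi1 = (-a0 * r * r) % p         # psi_1 = -a0 (a0^2 - a1)^2
            if f3 != [0, 1, 0, 0, 0, chi1, psi1]:
                bad.append((a0, a1, t))
print(bad)    # expected: []
```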
The proof of Proposition~\ref{prop:deltaiterates} is given after the following lemma.
\begin{lemma}\label{elimination}
Let~$p$ be an odd prime number, $q$ in~$\{1,\ldots,p-1\}$, and~$d \ge 1$ an integer satisfying $d\equiv 1 \pmod{p}$.
Furthermore, let~$\K$ be a field of characteristic~$p$ and let~$f$ in~$\K[[\zeta]]$ be of the form
\[f(\zeta)
\equiv
\zeta\left(1+ a_0\zeta^{qd} + a_1\zeta^{q(d+1)}\right) \mod \langle \zeta^{q(d+1)+2} \rangle, \text{ with } a_0\neq 0.\]
Then there is a polynomial~$\varphi$ with coefficients in~$\K$ such that~$\mult(\varphi) \ge {q + 2}$, and such that~$\varphi$ conjugates~$f$ to a power series~$g$ satisfying
\begin{align}
\label{eq:12}
g(\zeta)
& \equiv
\zeta\left(1 + a_0\zeta^{qd} + a_1\zeta^{q(d+1)}\right) \mod \langle \zeta^{q(d+1) + p + 1} \rangle,
\intertext{ and }
\notag
g^p(\zeta)
& \equiv f^p(\zeta) \mod \langle \zeta^{i_1(f)+ q + 2} \rangle.
\end{align}
\end{lemma}
\begin{proof}
Noting that~$qd \equiv q \pmod{p}$, we can apply Lemma~\ref{removeterms} successively with~$q$ replaced by~$qd$, and with
\begin{displaymath}
k
=
q + 1, \ldots, q + p - 1,
\end{displaymath}
to obtain a polynomial~$\varphi$ satisfying~$\mult(\varphi) \ge q + 2$, such that~$g \= \varphi \circ f \circ \varphi^{-1}$ satisfies~\eqref{eq:12}.
To prove the second assertion, note that~$\varphi$ also conjugates~$f^p$ to~$g^p$, so by Lemma~\ref{removeterms}
\begin{displaymath}
i_1(f) = i_1(g)
\text{ and }
f^p(\zeta) \equiv g^p(\zeta) \mod \langle \zeta^{i_1(f) + \mult(\varphi)} \rangle.
\end{displaymath}
The desired assertion follows from the inequality~$\mult(\varphi) \ge q + 2$.
This completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:deltaiterates}]
The last assertion is a direct consequence of the first and of~\eqref{eq:genericity polynomial}.
To prove the first assertion, for each integer $n\geq 0$ put $d_n \= 1 + p + \cdots + p^n$, and note that
\begin{displaymath}
d_n \equiv 1 \pmod{p},
\text{ and }
d_np+1 = d_{n + 1}.
\end{displaymath}
We first prove by induction that for every integer~$n \ge 0$ there are~$\chi_n$ and~$\psi_n$ in~$\K$, such that
\begin{equation}
\label{eq:13}
f^{p^n}(\zeta)
\equiv
\zeta\left(1 + \chi_{n}\zeta^{qd_n} + \psi_{n}\zeta^{q(d_n+1)}\right) \mod \langle \zeta^{q(d_n+1)+2} \rangle.
\end{equation}
The case~$n = 0$ is trivial, with
\begin{equation}
\label{eq:10}
\chi_0 = a_0
\text{ and }
\psi_0 = a_1.
\end{equation}
Let~$n \ge 0$ be a given integer, and assume the desired assertion is true for~$n$.
By Lemma~\ref{elimination} there is a power series~$g$ with coefficients in~$\K$ such that
\begin{align}
\notag
g(\zeta)
& \equiv
\zeta\left(1 + \chi_{n}\zeta^{qd_n} + \psi_{n}\zeta^{q(d_n+1)}\right) \mod \langle \zeta^{q(d_n+2)+2} \rangle,
\intertext{ and }
\label{eq:14}
g^p(\zeta)
& \equiv
f^{p^{n + 1}}(\zeta) \mod \langle \zeta^{i_{n + 1}(f) + q + 2} \rangle.
\end{align}
Define $\Z_{(p)}, F_1$ and $F_\infty$ as in Proposition~\ref{prop:deltaiteratesshort}.
Moreover, let $\widehat{g}$ in~$F_\infty[[\zeta]]$ be of the form
\[ \widehat{g}(\zeta)
\=
\zeta\left(1+x_0 \zeta^{qd_n} + x_1 \zeta^{q(d_n + 1)} + \zeta^{q(d_n + 2)}\sum_{j=1}^{+ \infty}x_{j+1} \zeta^j\right), \]
let $h\colon F_\infty \to \K$ be the unique ring homomorphism extending the reduction map $\Z_{(p)} \to \F_p$, such that $h(x_0) = \chi_n$, $h(x_1) = \psi_n$, and such that for every $i\geq 2$ the element $h(x_i)$ of $\K$ is the coefficient of $\zeta^{q(d_n+2)+i}$ in~$g$.
Then~$h$ extends to a ring homomorphism $F_\infty[[\zeta]] \to \K[[\zeta]]$ that maps $\widehat{g}$ to~$g$.
In the case~$n = 0$, note that~$\widehat{f}$ in the Main Lemma is equal to~$\widehat{g}$, so
\begin{multline*}
g^p(\zeta)
\equiv
\zeta \left( 1 + \chi_0^{p+1}\left(\frac{q+1}{2} - \frac{\psi_0}{\chi_0^2}\right)\zeta^{q(p+1)}
\right. \\ \left.
- \chi_0^{p+2} \left(\frac{q+1}{2}-\frac{\psi_0}{\chi_0^2}\right)^2\zeta^{q(p+2)} \right) \mod \langle \zeta^{q(p+2)+2} \rangle.
\end{multline*}
Together with~\eqref{eq:14} with~$n = 0$, this implies
\begin{displaymath}
i_1(f) = i_1(g) \ge q(p + 1) = qd_1,
\end{displaymath}
and~\eqref{eq:13} with~$n = 1$,
\begin{equation}
\label{eq:11}
\chi_1
\=
\chi_0^{p+1}\left(\frac{q+1}{2} - \frac{\psi_0}{\chi_0^2} \right)
\text{ and }
\psi_1
\= - \chi_0^{p+2} \left(\frac{q+1}{2}-\frac{\psi_0}{\chi_0^2}\right)^2.
\end{equation}
In the case~$n \ge 1$, the Main Lemma with~$q$ replaced by~$qd_n$ and~$\ell$ replaced by~$q$, implies
\begin{displaymath}
g^p(\zeta)
\equiv
\zeta \left( 1 - \chi_n^{p - 1} \psi_n \zeta^{q(d_np+1)} - \chi_n^{p - 2} \psi_n^2 \zeta^{q(d_n p + 2)}\right)
\mod \langle \zeta^{q(d_n p+2)+2} \rangle.
\end{displaymath}
Together with~\eqref{eq:14} this implies
\begin{displaymath}
i_{n + 1}(f) = i_1(g) \ge q(d_n p + 1) = q d_{n + 1}
\end{displaymath}
and~\eqref{eq:13} with
\begin{equation}
\label{eq:15}
\chi_{n + 1}
=
- \chi_n^{p - 1} \psi_n
\text{ and }
\psi_{n + 1}
=
- \chi_n^{p - 2} \psi_n^2.
\end{equation}
This completes the proof of the induction step and of~\eqref{eq:13} for every integer~$n \ge 0$.
Then the proposition follows from a direct computation using the recursion~\eqref{eq:15}, together with~\eqref{eq:10} and~\eqref{eq:11}.
\end{proof}
\subsection{Lower bound of the norm of periodic points}
\label{sec:lowerbound}
The goal of this section is to prove Theorem~\ref{thm:lower-bound}.
We first introduce some notation and recall a result from~\cite{LindahlRiveraLetelier2015}.
Let~$(\K, | \cdot |)$ be an ultrametric field, and recall that~$\mathcal{O}_{\K}$ denotes the ring of integers of~$\K$, and~$\mathfrak{m}_{\K}$ the maximal ideal of~$\mathcal{O}_{\K}$.
Denote the residue field of~$\K$ by~$\widetilde{\K} \= \mathcal{O}_{\K} / \mathfrak{m}_{\K}$, and for an element~$a$ of~$\mathcal{O}_{\K}$, denote by~$\widetilde{a}$ its reduction in~$\widetilde{\K}$.
The reduction of a power series~$f$ in~$\mathcal{O}_{\K}[[\zeta]]$ is the power series~$\widetilde{f}$ in~$\widetilde{\K}[[\zeta]]$ whose coefficients are the reductions of the corresponding coefficients of~$f$.
For a power series~$f$ in $\mathcal{O}_{\K}[[\zeta]]$, the \emph{Weierstrass degree} $\wideg(f)$ of~$f$ is the order in $\widetilde{\K}[[\zeta]]$ of the reduction $\widetilde{f}$ of~$f$.
Note that if $\wideg(f)$ is finite, then the number of zeros of~$f$ in~$\mathfrak{m}_{\K}$, counted with multiplicity, is less than or equal to~$\wideg(f)$, see, \emph{e.g.}, \cite[\S{VI}, Theorem~9.2]{Lang2002}.
In the case where the characteristic~$p$ of~$\widetilde{\K}$ is positive and~$f$ is a wildly ramified power series in~$\mathcal{O}_{\K}[[\zeta]]$, it is well known that the minimal period of every periodic point of~$f$ in~$\mathfrak{m}_{\K}$ is a power of~$p$.
\begin{mydef}
Let $p$ be a prime number and $\K$ a field of characteristic $p$.
For a wildly ramified power series~$f$ in $\K[[\zeta]]$, define for each integer $n \geq 0$ the element~$\delta_n(f)$ of $\K$ as follows: Put $\delta_n(f) \= 0$ if $i_n(f) = +\infty$, and otherwise let $\delta_n(f)$ be the coefficient of~$\zeta^{i_n(f)+1}$ in~$f^{p^n}(\zeta)$.
\end{mydef}
\begin{lemma}[Special case of Lemma~2.4 in \cite{LindahlRiveraLetelier2015}]\label{lem24}
Let~$p$ be a prime number and $(\K,|\cdot|)$ an ultrametric field of characteristic $p$.
Then, for every wildly ramified power series~$f$ in $\mathcal{O}_{\K}[[\zeta]]$, the following properties hold.
\begin{enumerate}
\item
Let~$w_0$ in $\mathfrak{m}_{\K}$ be a fixed point of~$f$ different from~$0$.
Then we have
\begin{displaymath}
|w_0|\geq |\delta_0(f)|
\end{displaymath}
with equality if and only if
\begin{displaymath}
\wideg(f(\zeta)-\zeta)= i_0(f)+2.
\end{displaymath}
\item
Let $n\geq1$ be an integer and~$\zeta_0$ in~$\mathfrak{m}_{\K}$ a periodic point of~$f$ of minimal period~$p^n$.
If in addition $i_n(f)<+\infty$, then we have
\begin{displaymath}
|\zeta_0|
\geq
\left|\frac{\delta_n(f)}{\delta_{n-1}(f)}\right|^{\frac{1}{p^n}},
\end{displaymath}
with equality if and only if
\begin{equation}
\label{widegzeta}
\wideg\left(\frac{f^{p^n}(\zeta)-\zeta}{f^{p^{n-1}}(\zeta)-\zeta}\right) = i_n(f)-i_{n-1}(f)+p^n.
\end{equation}
Moreover, if (\ref{widegzeta}) holds, then the cycle containing $\zeta_0$ is the only cycle of minimal period $p^n$ of~$f$ in $\mathfrak{m}_{\K}$, and for every point $\zeta_0'$ in this cycle $|\zeta_0'|=\left|\frac{\delta_n(f)}{\delta_{n-1}(f)}\right|^{\frac{1}{p^n}}$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:lower-bound}]
The assertion about fixed points is a direct consequence of ${\delta_0(f) = a}$ and Lemma~\ref{lem24}(1).
To prove the statement about periodic points that are not fixed, note first that this statement holds trivially in the case~$\resit(f) = 0$.
Thus, we assume that~$\resit(f) \neq 0$, and therefore~$f$ is $q$-ramified by Theorem~\ref{thm:q-ramification}.
In particular, for every integer~$n \ge 1$ we have~$i_n(f) < + \infty$.
On the other hand, by Proposition~\ref{prop:deltaiterates} we have for every integer~$n \ge 1$
\begin{displaymath}
\delta_n(f) = a^{\frac{p^{n + 1} - 1}{p - 1}} \resit(f)^{\frac{p^n-1}{p-1}}.
\end{displaymath}
Hence, by Lemma~\ref{lem24}(2) we have for every periodic point~$\zeta_0$ in~$\mathfrak{m}_{\K}$ of minimal period~$p^n$,
\begin{equation}\label{normbound2}
|\zeta_0|
\geq
\left|\frac{\delta_n(f)}{\delta_{n-1}(f)}\right|^{\frac{1}{p^n}}
=
\left| a^{p^n} \resit(f)^{p^{n-1}} \right|^{\frac{1}{p^n}}
=
|a| \cdot | \resit(f)|^{\frac{1}{p}}.
\end{equation}
This completes the proof of Theorem~\ref{thm:lower-bound}.
\end{proof}
\begin{rem}
As seen in Lemma~\ref{lem24}, equality in \eqref{normbound2} is characterized by a condition on the reduction of~$f$. In the case of equality, for $q$-ramified power series \emph{all} periodic points in the open unit disk that are not fixed by~$f$ in fact lie on the sphere about the origin of radius~$|\delta_0(f)| \cdot |\resit(f)|^{\frac{1}{p}}$; see Example~\ref{e:optimality} in~\S\ref{sec:furth-results-exampl}.
\end{rem}
\section{Proof of the Main Lemma}
\label{s:proof of Main Lemma}
The goal of this section is to prove the Main Lemma.
We use the strategy introduced in~\cite[\S3.2]{RiveraLetelier2003} and~\cite{LindahlRiveraLetelier2013}, using the power series~$(\Delta_m)_{m = 0}^{+ \infty}$ defined by~\eqref{eq:27} and~\eqref{eq:23}.
The proof is naturally divided into the cases~$q \le p - 1$ and~$q \ge p + 1$.
\partn{Case 1, $q \le p - 1$}
Note that in this case we have~$\ell = q$.
For each integer~$m \ge 1$ define~$\alpha_m$, $\beta_m$ and $\gamma_m$ in~$F_1$ by the recursive relations
\begin{align}
\alpha_{m+1} & \= x_0 (qm+1)\alpha_m \label{receq1}\\
\beta_{m+1} & \= \left[ x_0^2 \binom{qm+1}{2} + x_1(qm+1)\right]\alpha_m + x_0 (q(m+1)+1)\beta_m \label{receq2}\\
\gamma_{m+1} & \= \left[ x_0^3 \binom{qm+1}{3} + x_0x_1 qm (qm+1)\right]\alpha_m \label{receq4}\\
& \quad + \left[ x_0^2 \binom{q(m+1)+1}{2}+ x_1(q(m+1)+1)\right]\beta_m \notag
\\ & \quad + x_0 (q(m+2)+1)\gamma_m, \notag
\end{align}
with initial conditions $\alpha_1 \= x_0$, $\beta_1 \= x_1$, and $\gamma_1 \= 0$.
We claim that for every integer~$m \ge 1$ we have
\begin{equation}
\label{deltaclaim}
\Delta_m(\zeta)
\equiv
\alpha_m\zeta^{qm+1} + \beta_m\zeta^{q(m+1)+1} + \gamma_m\zeta^{q(m+2)+1}\mod \langle \zeta^{q(m + 2) + 2} \rangle.
\end{equation}
For $m=1$ this holds by definition.
Assume this is valid for some $m\geq1$.
Then
\begin{displaymath}
\begin{split}
\Delta_{m+1}(\zeta)
& = \Delta_m(\widehat{f}(\zeta)) - \Delta_m(\zeta)\\
&\equiv \alpha_m\zeta^{qm+1}\left[\left(1 + x_0 \zeta^q + x_1\zeta^{2q} + x_2\zeta^{3q+1} + \cdots\right)^{qm+1} - 1\right]\\
&\qquad + \beta_m\zeta^{q(m+1)+1}\left[\left(1 + x_0 \zeta^q + x_1\zeta^{2q} + x_2\zeta^{3q+1} + \cdots\right)^{q(m+1)+1}-1\right]\\
&\qquad + \gamma_m\zeta^{q(m+2)+1}\left[\left(1 + x_0 \zeta^q + x_1\zeta^{2q} + x_2\zeta^{3q+1} + \cdots\right)^{q(m+2) +1}-1\right]\\
&\qquad \mod \langle \zeta^{q(m+3)+2} \rangle\\
&\equiv
\alpha_m\left[ \zeta^{q(m+1)+1}x_0 (qm+1) + \zeta^{q(m+2)+1}\left(x_0^2 \binom{qm+1}{2} + x_1(qm+1)\right)
\right. \\ & \qquad \left.
+ \zeta^{q(m+3)+1}\left( x_0^3 \binom{qm+1}{3} + x_0 x_1 qm (qm+1)\right) \right]
\\
&\qquad+ \beta_m\Bigg[ \zeta^{q(m+2)+1} x_0 (q(m+1)+1)
\\ & \qquad
+ \zeta^{q(m+3)+1}\left( x_0^2 \binom{q(m+1)+1}{2} + x_1(q(m+1)+1)\right) \Bigg]
\\
&\qquad + \gamma_m\zeta^{q(m+3)+1} x_0 (q(m+2)+1) \mod \langle \zeta^{q(m+3)+2} \rangle,
\end{split}
\end{displaymath}
which proves the induction step and~\eqref{deltaclaim}.
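The recursion and the congruence~\eqref{deltaclaim} can be cross-checked by machine for small~$m$. The following standalone Python sketch is ours, not part of the paper: it works over~$\mathbb{Q}$ with generic rational test values in place of $x_0$, $x_1$, $x_2$, takes $q = 2$, and assumes $\Delta_1 = \widehat{f}(\zeta) - \zeta$ and $\Delta_{m+1}(\zeta) = \Delta_m(\widehat{f}(\zeta)) - \Delta_m(\zeta)$, as in the displayed computation. It also confirms that every other coefficient below the cutoff vanishes for these values.

```python
from fractions import Fraction as Fr
from math import comb

q, N = 2, 14            # truncate mod z**N; N = 14 suffices for m <= 4 since q*(m+2)+2 = 14
x0, x1, x2 = Fr(2), Fr(3), Fr(5)   # generic rational test values (our choice)

def mul(a, b):
    """Multiply truncated polynomials given as {exponent: coefficient} dicts."""
    c = {}
    for i, ai in a.items():
        for j, bj in b.items():
            if i + j < N:
                c[i + j] = c.get(i + j, Fr(0)) + ai * bj
    return {e: v for e, v in c.items() if v}

def compose(a, g):
    """a(g(z)) mod z**N, where g has no constant term."""
    out, gk = {}, {0: Fr(1)}
    for e in range(N):
        if e in a:
            for k, v in gk.items():
                out[k] = out.get(k, Fr(0)) + a[e] * v
        gk = mul(gk, g)
    return {e: v for e, v in out.items() if v}

def check_delta_claim(max_m=4):
    f = {1: Fr(1), q + 1: x0, 2*q + 1: x1, 3*q + 2: x2}   # \hat{f}, truncated
    delta = {e: c for e, c in f.items() if e != 1}        # Delta_1 = \hat{f}(z) - z
    alpha, beta, gamma = x0, x1, Fr(0)
    for m in range(1, max_m + 1):
        claim = {q*m + 1: alpha, q*(m+1) + 1: beta, q*(m+2) + 1: gamma}
        for e in range(q*(m + 2) + 2):                    # compare mod z**(q(m+2)+2)
            assert delta.get(e, Fr(0)) == claim.get(e, Fr(0))
        # advance: Delta_{m+1} = Delta_m(\hat{f}(z)) - Delta_m(z), and (receq1)-(receq4)
        comp = compose(delta, f)
        delta = {e: v for e in range(N)
                 if (v := comp.get(e, Fr(0)) - delta.get(e, Fr(0))) != 0}
        alpha, beta, gamma = (
            x0*(q*m + 1)*alpha,
            (x0**2*comb(q*m + 1, 2) + x1*(q*m + 1))*alpha + x0*(q*(m+1) + 1)*beta,
            (x0**3*comb(q*m + 1, 3) + x0*x1*q*m*(q*m + 1))*alpha
            + (x0**2*comb(q*(m+1) + 1, 2) + x1*(q*(m+1) + 1))*beta
            + x0*(q*(m+2) + 1)*gamma)
    return True

check_delta_claim()
```

The truncation order $N = 14$ is enough because a coefficient of~$\Delta_m$ of order~$k$ only influences coefficients of~$\Delta_{m+1}$ of order at least~$k + q$.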
In view of~\eqref{eq:7} and~\eqref{deltaclaim}, to prove the Main Lemma with $q \le p - 1$, it is sufficient to prove
\begin{equation}
\label{e:main alpha}
\alpha_p \equiv 0 \mod p F_1,
\end{equation}
\eqref{e:main beta} with~$\beta = \beta_p$, and~\eqref{e:main gamma} with~$\gamma = \gamma_p$.
The first two are given by Proposition~\ref{prop:deltaiteratesshort}, so we only need to prove the last one.
To do this, we solve~\eqref{receq4} explicitly, utilizing the explicit solutions of~\eqref{receq1} and~\eqref{receq2} given in the proof of Proposition~\ref{prop:deltaiteratesshort}.
Assume first~$q \equiv - 1 \pmod{p}$.
By~\eqref{alpham} and~\eqref{betam} with~$m = p - 1$, we have
\begin{displaymath}
\alpha_{p - 1} \equiv 0 \mod p F_1
\text{ and }
\beta_{p - 1} \equiv - x_0^{p - 2} x_1 \mod p F_1.
\end{displaymath}
Combined with~\eqref{receq4} with~$m = p - 1$, this implies
\begin{displaymath}
\gamma_p
\equiv
- x_0^{p - 2} x_1^2 \mod p F_1.
\end{displaymath}
This proves~\eqref{e:main gamma} with~$\gamma = \gamma_p$, when~$q \equiv - 1 \pmod{p}$.
It remains to prove~\eqref{e:main gamma} with~$\gamma = \gamma_p$, when~$q \not\equiv - 1 \pmod{p}$.
Denote by~$r_0$ the unique~$r$ in~$\{1, \ldots, p - 1 \}$ such that $qr \equiv - 1 \pmod{p}$.
By our assumption~$q \not\equiv - 1 \pmod{p}$, we have~$r_0 \neq 1$ and therefore
\begin{equation}
\label{eq:4}
r_0 \in \{2, \ldots, p - 1 \}.
\end{equation}
Noting that for every~$j \ge 0$ we have~$qj + 1 > 0$, we use the substitution
\[\gamma^*_m \=
\frac{\gamma_mx_0^2}{(q(m+1)+1)(qm+1)\alpha_m}. \]
Note that by~\eqref{alpham} we have
\[\gamma^*_m =
\gamma_m\bigg/ \left( x_0^{m - 2} \prod_{j=1}^{m + 1} (qj+1) \right). \]
On the other hand, by~\eqref{alpham} and~\eqref{betam} we get
\[\frac{\beta_m}{\alpha_m} = \frac{1}{x_0} \sum_{r=1}^m \left(x_0^2\frac{q(r-1)}{2} + x_1\right)\frac{qm+1}{qr+1}.
\]
By plugging these equations into~\eqref{receq4}, we obtain
\begin{displaymath}
\begin{split}
\gamma^*_{m+1}
& =
\gamma^*_m + \frac{qm}{(q(m+1)+1)(q(m+2)+1)} x_0^2 \left( x_0^2 \frac{qm-1}{6} + x_1 \right)
\\
& \quad +\frac{1}{q(m+2)+1} \left( x_0^2 \frac{q(m+1)}{2} + x_1 \right)
\sum_{r=1}^{m} \left(x_0^2 \frac{q(r - 1)}{2} + x_1 \right) \frac{1}{qr + 1}.
\end{split}
\end{displaymath}
Using~$\gamma^*_{1} = 0$ and defining for every integer~$s$
\[H(s) \= x_0^2 \frac{qs}{2}+x_1, \]
we obtain inductively for each $m\geq 1$
\begin{multline*}
\gamma^*_{m}
=
\sum_{s=1}^{m-1} \left[ \frac{qs}{(q(s+1)+1)(q(s+2)+1)} x_0^2 \left(x_0^2 \frac{qs - 1}{6} + x_1 \right)
\right. \\ \left.
+ \frac{H(s+1)}{q(s+2)+1}\sum_{r=1}^{s}\frac{H(r - 1)}{qr + 1} \right].
\end{multline*}
Equivalently,
\begin{multline}
\label{dmexplicit}
\gamma_{m}
=
x_0^{m - 2} \sum_{s=1}^{m-1}\left[ x_0^2 qs \left(x_0^2 \frac{qs - 1}{6} + x_1 \right) \prod_{j\in \{1,\ldots,m+1\}\setminus\{s+1,s+2\}}(qj+1)
\right. \\ \left.
+ H(s+1)\sum_{r=1}^{s}H(r - 1) \prod_{j\in \{1,\ldots,m+1\}\setminus \{r, s+2\}}(qj+1)\right].
\end{multline}
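The closed form~\eqref{dmexplicit} can be checked against the original recursion \eqref{receq1}--\eqref{receq4} in exact rational arithmetic. The following Python sketch is our own sanity check, with arbitrary rational test values substituted for $x_0$, $x_1$ and a sample value of~$q$ (the identity is an identity over~$\mathbb{Q}$):

```python
from fractions import Fraction as Fr
from math import comb

q, x0, x1 = 3, Fr(2), Fr(5)    # arbitrary test values (our choice)

def H(s):
    # H(s) = x0^2 * q*s/2 + x1, as defined in the text
    return x0**2 * Fr(q*s, 2) + x1

def gamma_recursive(m):
    # the recursion (receq1)-(receq4) with alpha_1 = x0, beta_1 = x1, gamma_1 = 0
    a, b, g = x0, x1, Fr(0)
    for k in range(1, m):
        a, b, g = (
            x0*(q*k + 1)*a,
            (x0**2*comb(q*k + 1, 2) + x1*(q*k + 1))*a + x0*(q*(k+1) + 1)*b,
            (x0**3*comb(q*k + 1, 3) + x0*x1*q*k*(q*k + 1))*a
            + (x0**2*comb(q*(k+1) + 1, 2) + x1*(q*(k+1) + 1))*b
            + x0*(q*(k+2) + 1)*g)
    return g

def prod_skip(m, excluded):
    out = Fr(1)
    for j in range(1, m + 2):
        if j not in excluded:
            out *= q*j + 1
    return out

def gamma_closed(m):
    # the explicit formula (dmexplicit)
    total = Fr(0)
    for s in range(1, m):
        total += x0**2 * q*s * (x0**2 * Fr(q*s - 1, 6) + x1) * prod_skip(m, {s+1, s+2})
        total += H(s + 1) * sum(H(r - 1) * prod_skip(m, {r, s+2})
                                for r in range(1, s + 1))
    return x0**(m - 2) * total

for m in range(2, 8):
    assert gamma_recursive(m) == gamma_closed(m)
```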
Setting~$m = p$, for every~$s$ in~$\{1, \ldots, p - 1\}$ we have by Lemma~\ref{wilson}
\begin{displaymath}
\prod_{\substack{j\in \{1,\ldots,p+1\} \\ j \not\in \{s+1,s+2\}}}(qj+1)
\equiv
\begin{cases}
- \frac{q(p+1)+1}{q(r_0 + 1)+1} \equiv -\frac{q+1}{q} \mod p\Z_{(p)}
& \text{if~$s = r_0 - 1$};
\\
- \frac{q(p+1)+1}{q(r_0 - 1)+1} \equiv \frac{q+1}{q} \mod p\Z_{(p)}
& \text{if $s = r_0 - 2$};
\\
0
& \text{otherwise}.
\end{cases}
\end{displaymath}
Analogously, for every~$s$ in~$\{1, \ldots, p - 1 \}$ and~$r$ in~$\{1, \ldots, s\}$, we have
\begin{displaymath}
\prod_{\substack{j\in \{1,\ldots,p+1\} \\ j \not\in \{r, s+2\}}} (qj+1)
\equiv
\begin{cases}
- \frac{q+1}{qr + 1} \mod p\Z_{(p)}
& \text{if $s = r_0 - 2$};
\\
- \frac{q+1}{q(s + 2) + 1} \mod p\Z_{(p)}
& \text{if~$s \ge r_0$ and~$r = r_0$};
\\
0
& \text{otherwise}.
\end{cases}
\end{displaymath}
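Both case distinctions above can be confirmed numerically for specific primes. The following standalone Python check is ours (the sample pairs $(p, q) = (7, 2)$ and $(11, 3)$ are our choices, both with $r_0 \ge 2$); the fractions modulo~$p$ are computed as modular inverses:

```python
def check_products(p, q):
    # r0 is the unique r in {1,...,p-1} with q*r ≡ -1 (mod p)
    r0 = next(r for r in range(1, p) if (q*r + 1) % p == 0)
    assert 2 <= r0 <= p - 1                      # the setting (eq:4)
    inv = lambda a: pow(a % p, p - 2, p)         # inverse mod p (Fermat)

    def prod_skip(excluded):
        out = 1
        for j in range(1, p + 2):
            if j not in excluded:
                out = out * (q*j + 1) % p
        return out

    for s in range(1, p):
        v = prod_skip({s + 1, s + 2})            # first table
        if s == r0 - 1:
            assert v == (-(q + 1) * inv(q)) % p
        elif s == r0 - 2:
            assert v == ((q + 1) * inv(q)) % p
        else:
            assert v == 0
        for r in range(1, s + 1):                # second table
            w = prod_skip({r, s + 2})
            if s == r0 - 2:
                assert w == (-(q + 1) * inv(q*r + 1)) % p
            elif s >= r0 and r == r0:
                assert w == (-(q + 1) * inv(q*(s + 2) + 1)) % p
            else:
                assert w == 0
    return True

check_products(7, 2)
check_products(11, 3)
```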
Combined with~\eqref{dmexplicit} with~$m = p$ and
\begin{equation}
\label{eq:2}
H(r_0 - 1) \equiv - x_0^2 \frac{q + 1}{2} + x_1 \mod p \Z_{(p)},
\end{equation}
these congruences imply
\begin{equation}
\label{eq:3}
\begin{split}
\gamma_p
& \equiv
- x_0^{p} q(r_0 - 1) \left( x_0^2 \frac{q(r_0 - 1) - 1}{6} + x_1 \right) \frac{q + 1}{q}
\\ & \quad
+ x_0^{p} q (r_0 - 2) \left( x_0^2 \frac{q (r_0 - 2) - 1}{6} + x_1 \right) \frac{q + 1}{q}
\\ & \quad
- x_0^{p - 2} H(r_0 - 1) \sum_{r = 1}^{r_0 - 2} H(r - 1) \frac{q + 1}{qr + 1}
\\ & \quad
- x_0^{p - 2} \sum_{s = r_0}^{p - 1} H(s + 1)H(r_0 - 1) \frac{q + 1}{q(s + 2) + 1} \mod p F_1
\\ & \equiv
- x_0^{p} (q + 1) H(r_0 - 1)
\\ & \quad
- x_0^{p - 2} (q + 1) H(r_0 - 1)
\sum_{\substack{r \in \{1, \ldots, p + 1 \} \\ r \not\in \{ r_0 - 1, r_0, r_0 + 1 \}}} \frac{H(r - 1)}{q r + 1}
\mod p F_1.
\end{split}
\end{equation}
By~\eqref{eq:4}, we have
\begin{multline}
\label{eq:5}
\sum_{\substack{r \in \{1, \ldots, p + 1 \} \\ r \not\in \{ r_0 - 1, r_0, r_0 + 1 \}}} \frac{H(r - 1)}{q r + 1}
\\
\begin{aligned}
& \equiv
\sum_{\substack{r \in \{1, \ldots, p + 1 \} \\ r \not\in \{ r_0 - 1, r_0, r_0 + 1 \}}} \left( \frac{x_0^2}{2} + \frac{H(r_0 - 1)}{q r + 1} \right)
\mod p F_1
\\ & \equiv
- x_0^2 + H(r_0 - 1) \sum_{\substack{r \in \{1, \ldots, p + 1 \} \\ r \not\in \{ r_0 - 1, r_0, r_0 + 1 \}}} \frac{1}{q r + 1}
\mod p F_1.
\end{aligned}
\end{multline}
On the other hand, by the second assertion of Lemma~\ref{wilson}, we have
\begin{multline}
\label{eq:25}
\sum_{\substack{r \in \{1, \ldots, p + 1 \} \\ r \not\in \{ r_0 - 1, r_0, r_0 + 1 \}}} \frac{1}{q r + 1}
\\
\begin{aligned}
& \equiv
\frac{1}{q(p + 1) + 1} - \frac{1}{q(r_0 - 1) + 1} - \frac{1}{q(r_0 + 1) + 1}
\mod p \Z_{(p)}
\\ & \equiv
\frac{1}{q + 1}
\mod p \Z_{(p)}.
\end{aligned}
\end{multline}
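The last congruence can also be spot-checked numerically; the following small Python verification is ours (sample pairs with $r_0 \ge 2$):

```python
def sum_check(p, q):
    # sum of 1/(q*r+1) over r in {1,...,p+1} \ {r0-1, r0, r0+1}, computed mod p
    r0 = next(r for r in range(1, p) if (q*r + 1) % p == 0)
    inv = lambda a: pow(a % p, p - 2, p)
    s = sum(inv(q*r + 1) for r in range(1, p + 2)
            if r not in (r0 - 1, r0, r0 + 1)) % p
    return s == inv(q + 1)                       # expected: ≡ 1/(q+1) (mod p)

assert sum_check(7, 2) and sum_check(7, 3) and sum_check(11, 3)
```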
Together with~\eqref{eq:2}, \eqref{eq:3}, and~\eqref{eq:5}, this implies~\eqref{e:main gamma} with~$\gamma = \gamma_p$ and completes the proof of the Main Lemma in the case~$q \le p - 1$.
\partn{Case 2, $q \ge p + 1$}
Note that in this case our hypotheses on~$\ell$ imply that~$q \ge 2 \ell + 1$ in all cases.
For each integer~$m \ge 1$ define~$\widehat{\alpha}_m$, $\widehat{\beta}_m$ and $\widehat{\gamma}_m$ in~$F_1$ by the recursive relations
\begin{align}
& \widehat{\alpha}_{m+1} \= x_0(qm+1)\widehat{\alpha}_m \label{receq1b}\\
&\widehat{\beta}_{m+1} \= x_1(qm+1)\widehat{\alpha}_m + x_0(qm+\ell+1)\widehat{\beta}_m \label{receq2b}\\
& \widehat{\gamma}_{m+1} \= x_1(qm +\ell+1)\widehat{\beta}_m + x_0(qm+2\ell+1)\widehat{\gamma}_m\label{receq4b},
\end{align}
with initial conditions $\widehat{\alpha}_1 \= x_0$, $\widehat{\beta}_1 \= x_1$, and $\widehat{\gamma}_1 \=0$.
We claim that for every integer~$m \ge 1$ we have
\begin{equation}
\label{deltaclaimhat}
\Delta_m(\zeta)
\equiv
\widehat{\alpha}_m\zeta^{qm+1} + \widehat{\beta}_m\zeta^{qm+\ell+1} + \widehat{\gamma}_m\zeta^{qm+2\ell+1}\mod \langle \zeta^{qm+2\ell+2} \rangle.
\end{equation}
For $m=1$ this holds by definition.
Assume now that this is valid for some $m\geq1$.
Then, using $q \ge 2 \ell + 1$, we have
\begin{displaymath}
\begin{split}
\Delta_{m+1}(\zeta)
& = \Delta_{m}(\widehat{f}(\zeta)) - \Delta_{m}(\zeta)\\
& \equiv \widehat{\alpha}_m\zeta^{qm+1}\left[\left(1 + x_0\zeta^{q} + x_1\zeta^{q+\ell} + x_2\zeta^{q+2\ell + 1} + \cdots\right)^{qm+1} - 1\right]\\
&\quad + \widehat{\beta}_m\zeta^{qm+\ell+1}\left[\left(1 +x_0\zeta^{q} + x_1\zeta^{q+\ell} + x_2\zeta^{q+2\ell + 1} + \cdots\right)^{qm + \ell + 1}-1\right]\\
&\quad + \widehat{\gamma}_m\zeta^{qm + 2\ell + 1}\left[\left(1 + x_0\zeta^{q} + x_1\zeta^{q+\ell} + x_2\zeta^{q+2\ell + 1} + \cdots\right)^{qm + 2\ell + 1}-1\right]\\
&\quad \mod \langle \zeta^{q(m + 1) + 2\ell + 2} \rangle
\\
&\equiv\widehat{\alpha}_m\left(\zeta^{q(m+1) + 1}x_0(qm+1) + \zeta^{q(m+1)+\ell+1}x_1(qm+1)\right) \\
&\quad+ \widehat{\beta}_m\left(\zeta^{q(m+1)+\ell+1}x_0(qm+\ell+1) + \zeta^{q(m+1)+2\ell+1}x_1(qm+\ell+1)\right)
\\
& \quad+ \widehat{\gamma}_m\zeta^{q(m+1)+2\ell+1}x_0(qm+2\ell+1) \mod \langle \zeta^{q(m+1) + 2\ell +2} \rangle,
\end{split}
\end{displaymath}
which proves the induction step and the claim \eqref{deltaclaimhat}.
In view of~\eqref{eq:7} and~\eqref{deltaclaimhat}, to complete the proof of the Main Lemma in the case~$q \ge p + 1$, it is sufficient to prove
\begin{equation}
\label{e:main alpha bis}
\widehat{\alpha}_p \equiv 0 \mod p F_1,
\end{equation}
\eqref{e:main beta} with~$\beta = \widehat{\beta}_p$, and~\eqref{e:main gamma} with~$\gamma = \widehat{\gamma}_p$.
The linear recursion described in \eqref{receq1b}, \eqref{receq2b} and \eqref{receq4b} can be solved explicitly.
By telescoping \eqref{receq1b}, we obtain for every~$m \ge 1$ the solution
\begin{equation}
\label{alphamhat}
\widehat{\alpha}_m
=
x_0^{m}\prod_{j=1}^{m-1}(qj+1).
\end{equation}
Taking~$m = p$, this implies~\eqref{e:main alpha bis}.
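Indeed, the product $\prod_{j=1}^{p-1}(qj+1)$ contains the factor $j = r_0$ with $qr_0 \equiv -1 \pmod{p}$, hence is divisible by~$p$ whenever $p \nmid q$. A quick standalone check (ours):

```python
def alpha_hat_p_vanishes(p, q):
    # alpha_hat_p = x0**p * prod_{j=1}^{p-1} (q*j + 1); the integer product is ≡ 0 mod p
    prod = 1
    for j in range(1, p):
        prod *= q*j + 1
    return prod % p == 0

assert all(alpha_hat_p_vanishes(p, q)
           for p in (3, 5, 7, 11, 13)
           for q in range(1, 3*p) if q % p != 0)
```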
On the other hand, inserting \eqref{alphamhat} in~\eqref{receq2b} yields
\[\widehat{\beta}_{m+1} = x_0^{m}x_1\prod_{j=1}^m (qj+1) + x_0(qm+\ell+1)\widehat{\beta}_m.\]
Then, an induction argument shows that for every~$m \ge 1$ we have
\begin{equation}\label{betamhat}
\widehat{\beta}_m \equiv x_0^{m-1}x_1\sum_{r=1}^{m}\prod_{j \in \{1, \ldots, m\} \setminus \{ r \}} (q j+1) \mod p F_1.
\end{equation}
When~$m = p$ every term in the sum above contains a factor~$p$, except for the unique~$r_0$ in~$\{1, \ldots, p - 1 \}$ satisfying $qr_0 \equiv -1 \pmod{p}$.
Then by Lemma~\ref{wilson}, we have
\begin{displaymath}
\begin{split}
\widehat{\beta}_p
&\equiv
x_0^{p-1}x_1\prod_{j \in \{ 1, \ldots, p \} \setminus \{ r_0 \}} (q j+1) \mod p F_1
\\ &\equiv
-x_0^{p-1}x_1 \mod p F_1.
\end{split}
\end{displaymath}
This proves~\eqref{e:main beta} with~$\beta = \widehat{\beta}_p$.
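The arithmetic fact underlying this step, namely that $\sum_{r=1}^{p} \prod_{j \in \{1,\ldots,p\}\setminus\{r\}}(qj+1) \equiv -1 \pmod{p}$ for $p \nmid q$, is easy to verify by machine; the following check is ours:

```python
def beta_hat_sum(p, q):
    # sum_{r=1}^{p} prod_{j in {1..p} \ {r}} (q*j + 1), reduced mod p
    total = 0
    for r in range(1, p + 1):
        t = 1
        for j in range(1, p + 1):
            if j != r:
                t *= q*j + 1
        total += t
    return total % p

assert all(beta_hat_sum(p, q) == p - 1      # i.e. ≡ -1 (mod p)
           for p in (5, 7, 11) for q in range(1, 2*p) if q % p != 0)
```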
To prove~\eqref{e:main gamma} with~$\gamma = \widehat{\gamma}_p$, assume first~$q \equiv - 1 \pmod{p}$.
Then by~\eqref{receq4b} with~$m = p - 1$, \eqref{betamhat}, and Lemma~\ref{wilson} we have
\begin{displaymath}
\begin{split}
\widehat{\gamma}_p
& \equiv
x_1 \widehat{\beta}_{p - 1} \mod p F_1
\\ & \equiv
x_0^{p - 2} x_1^2 \sum_{r = 1}^{p - 1} \prod_{j \in \{1, \ldots, p - 1\} \setminus \{ r \}} (q j + 1) \mod p F_1
\\ & \equiv
x_0^{p - 2} x_1^2 \prod_{j \in \{2, \ldots, p - 1\}} (1 - j) \mod p F_1
\\ & \equiv
- x_0^{p - 2} x_1^2 \mod p F_1.
\end{split}
\end{displaymath}
It remains to prove~\eqref{e:main gamma} with~$\gamma = \widehat{\gamma}_p$ in the case~$q \not\equiv - 1 \pmod{p}$.
Note that in this case $r_0 \neq 1$.
Inserting~\eqref{betamhat} in~\eqref{receq4b}, we obtain
\begin{multline}
\label{eq:9}
\widehat{\gamma}_{m+1} \equiv x_0^{m-1}x_1^2\sum_{r = 1}^{m}\prod_{j \in \{1, \ldots, m+1\} \setminus \{ r \}} (q j+1)
\\
+ x_0(q(m+2)+1)\widehat{\gamma}_m \mod p F_1.
\end{multline}
Define~$\check{\gamma}_m$ in~$F_1$ for every~$m \ge 1$ recursively, by~$\check{\gamma}_1 \= 0$ and, for~$m \ge 1$, by
\begin{equation}
\label{eq:1}
\check{\gamma}_{m + 1}
\=
x_0^{m-1}x_1^2\sum_{r=1}^{m}\prod_{j \in \{1, \ldots, m+1\} \setminus \{ r \}} (q j+1)
+ x_0(q(m+2)+1)\check{\gamma}_m.
\end{equation}
Note that by~\eqref{eq:9} for every~$m \ge 1$ we have~$\check{\gamma}_m \equiv \widehat{\gamma}_m \mod p F_1$.
Using that for every~$j \ge 0$ we have~$q j + 1 > 0$, and the substitution
\[\check{\gamma}^*_m
\=
\check{\gamma}_m \bigg/ \left(x_0^{m-2}x_1^2\prod_{j=1}^{m+1} (q j+1) \right),\]
we obtain
\[\check{\gamma}_{m+1}^*
=
\check{\gamma}_{m}^* + \frac{1}{q(m+2)+1}\sum_{r=1}^{m}\frac{1}{q r + 1}.\]
Inductively we have
\begin{equation}
\label{eq:8}
\check{\gamma}_m^*
=
\sum_{s=1}^{m-1}\frac{1}{q (s+2) + 1}\sum_{r=1}^{s}\frac{1}{q r + 1},
\end{equation}
which is a rational number.
Since~$r_0 \neq 1$, for every~$r$ in~$\{1, \ldots, p + 1 \} \setminus \{ r_0 \}$ we have that~$\frac{1}{q r + 1}$ is in~$\Z_{(p)}$.
Thus, taking~$m = p$ in~\eqref{eq:8}, and using~\eqref{eq:25}, we obtain
\begin{equation*}
\begin{split}
(q r_0 + 1) \check{\gamma}_p^*
& \equiv
\sum_{\substack{r \in \{1, \ldots, p + 1 \} \\ r \not\in \{ r_0 - 1, r_0, r_0 + 1 \}}} \frac{1}{q r + 1} \mod p \Z_{(p)}
\\ & \equiv
\frac{1}{q + 1} \mod p \Z_{(p)}.
\end{split}
\end{equation*}
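This congruence can be confirmed in exact rational arithmetic; the following standalone Python check is ours (sample values $p = 7$, $q = 2$, for which $r_0 = 3$):

```python
from fractions import Fraction as Fr

def gamma_check_star(p, q):
    r0 = next(r for r in range(1, p) if (q*r + 1) % p == 0)
    assert r0 != 1
    # (eq:8) with m = p, as an exact rational number
    g = sum(Fr(1, q*(s + 2) + 1) * sum(Fr(1, q*r + 1) for r in range(1, s + 1))
            for s in range(1, p))
    v = (q*r0 + 1) * g                  # clears the single factor p in each term
    inv = lambda a: pow(a % p, p - 2, p)
    assert v.denominator % p != 0       # so v lies in Z_(p)
    return v.numerator * inv(v.denominator) % p == inv(q + 1)

assert gamma_check_star(7, 2)
```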
Using Lemma~\ref{wilson}, we obtain
\begin{displaymath}
\begin{split}
\check{\gamma}_p
& \equiv
x_0^{p-2}x_1^2 \frac{1}{q + 1} \prod_{j \in \{1, \ldots, p + 1 \} \setminus \{ r_0 \}} (q j + 1) \mod p F_1
\\ & \equiv
- x_0^{p-2}x_1^2 \mod p F_1.
\end{split}
\end{displaymath}
This completes the proof of~\eqref{e:main gamma} with~$\gamma = \widehat{\gamma}_p$ and of the Main Lemma.
\section{Further results and examples}
\label{sec:furth-results-exampl}
In this section we gather several examples illustrating our results and state some further consequences of our main theorems.
\begin{example}
\label{ex:p + 1}
The following example shows that the conclusion of Theorem~\ref{thm:q-ramification} is false when~$q = p + 1$ and~$p$ is odd.
Consider the polynomial with coefficients in~$\F_p$,
\begin{displaymath}
P(\zeta)
\=
\zeta (1 + \zeta^{p + 1} + \zeta^{p + 2} + \zeta^{2(p + 1)}).
\end{displaymath}
A direct computation using~\eqref{eq:genericity polynomial} shows that~$\resit(P) = 1$.
On the other hand, using the Main Lemma with $q = p + 1$, $\ell = 1$, and $x_0=x_1=1$, we have
\begin{displaymath}
i_1(P) = p^2 + p + 1
<
i_0(P)(p + 1),
\end{displaymath}
so~$P$ is not $(p + 1)$-ramified.
\end{example}
There is another natural source of power series~$f$ that satisfy~$i_0(f) = p + 1$ and that are not~$(p + 1)$-ramified.
Let~$g$ in~$\K[[\zeta]]$ be a~$1$-ramified power series, and put~${f \= g^p}$.
Then
\begin{displaymath}
i_0(f)
=
i_1(g)
=
p+1
\text{ and }
i_1(f) = i_2(g) = 1+p+p^2
<
i_0(f) (p+1),
\end{displaymath}
so~$f$ is not $(p+1)$-ramified.
For concreteness, let~$a$ in~$\K$ be different from~$1$, and assume that~$g$ is of the form
\[
g(\zeta)
\equiv
\zeta(1 + \zeta + a \zeta^2) \mod \langle \zeta^4 \rangle.
\]
In view of~\eqref{eq:genericity polynomial}, we have~$\resit(g) = 1 - a \neq 0$, so~$g$ is~$1$-ramified by Theorem~\ref{thm:q-ramification}.
On the other hand, for $p = 3$, $5$ and $7$ a computation shows that $\resit(f) = (1 - a)^p \neq 0$.
Thus, in contrast with the situation for~$q$ in~$\{1, \ldots, p - 1 \}$ in Theorem~\ref{thm:q-ramification}, for~$q = p + 1$ and~$p = 3$, $5$ and $7$ the nonvanishing of the iterative residue does not imply $(p + 1)$\nobreakdash-ramification.\footnote{The situation is now clear form the recent characterization of $(p + 1)$-ramification by the first named author in~\cite{Nor1909}.}
So, the following question arises naturally.
\begin{question}
For which $1$-ramified power series~$g$ in~$\K[[\zeta]]$ do we have~$\resit(g^p) \neq 0$?
\end{question}
\begin{example}
\label{ex:p - 1}
The following example illustrates Theorem~\ref{thm:q-ramification} in the case~$q = p - 1$.
A direct computation shows that for the polynomial~$P(\zeta) \= \zeta + \zeta^p$, we have for every integer~$n \ge 1$
\begin{displaymath}
P^{p^n}(\zeta) = \zeta + \zeta^{p^{p^n}}.
\end{displaymath}
In particular, $i_n(P) = p^{p^n} - 1$, and therefore~$P$ is not $(p - 1)$-ramified.
This is consistent with Theorem~\ref{thm:q-ramification}, since by Theorem~\ref{thm:closed-formula} we have~$\resit(P) = \ind(P) = 0$.
\end{example}
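The closed form $P^{p^n}(\zeta) = \zeta + \zeta^{p^{p^n}}$ is easy to confirm by machine, since over~$\F_p$ raising a polynomial to the $p$-th power simply multiplies exponents by~$p$. A small standalone Python check (ours):

```python
def iterate_P(p, iters):
    # the iterate P^iters of P(z) = z + z**p over F_p, as a sparse {exp: coeff} dict;
    # P(g) = g + g**p, and g**p just scales exponents by p (Frobenius, c**p = c in F_p)
    g = {1: 1, p: 1}
    for _ in range(iters - 1):
        h = dict(g)
        for e, c in g.items():
            h[p*e] = (h.get(p*e, 0) + c) % p
        g = {e: c for e, c in h.items() if c}
    return g

for p, n in ((2, 2), (3, 1), (3, 2), (5, 1)):
    assert iterate_P(p, p**n) == {1: 1, p**(p**n): 1}
```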
\begin{example}
\label{e:optimality}
This example shows that the lower bound~\eqref{normbound} in Theorem~\ref{thm:lower-bound} is optimal for~$p \ge 5$ and~$q \le p - 3$.
Let~$p \ge 3$ be a prime number, $(\K, | \cdot |)$ an ultrametric field of characteristic~$p$, and~$q$ in~$\{1, \ldots, p - 1 \}$.
Furthermore, let~$a$ and~$b$ in~$\K$ be such that $0 < |a| < 1$ and $|b|=1$, and let~$f$ be a power series in~$\K[[z]]$ satisfying
\begin{displaymath}
f(\zeta) \equiv \zeta(1 + a\zeta^q + b\zeta^{q+1}) \mod \langle \zeta^{2q + 4} \rangle.
\end{displaymath}
A direct computation using~\eqref{eq:genericity polynomial} shows that
\begin{displaymath}
\resit(f) = \frac{q + 1}{2} + (-1)^q \frac{b^q}{a^{q + 1}} \neq 0,
\end{displaymath}
so by Theorem~\ref{thm:q-ramification} the series~$f$ is $q$-ramified.
In the case~$q \le p - 2$, by~\eqref{eq:genericity polynomial} the reduction~$\widetilde{f}$ of~$f$ satisfies~$\resit(\widetilde{f}) = \frac{q + 2}{2}$.
Assuming further that~$q \le p - 3$, we have~$\resit(\widetilde{f}) \neq 0$, and we obtain that~$\widetilde{f}$ is ${(q+1)}$\nobreakdash-ramified by Theorem~\ref{thm:q-ramification}.
This implies that~\eqref{widegzeta} in Lemma~\ref{lem24} holds for every integer~$n \ge 1$.
It follows that for every periodic point~$\zeta_0$ of~$f$ in~$\mathfrak{m}_{\K}$ that is not fixed, we have
\[|\zeta_0| = |a| \cdot |\resit(f)|^{\frac{1}{p}}, \]
see the proof of Theorem~\ref{thm:lower-bound}.
\end{example}
The following result is a direct consequence of Theorems~\ref{thm:q-ramification} and~\ref{thm:lower-bound} for fixed points whose multiplier is a root of unity, compare with~\cite[Corollary~C]{LindahlRiveraLetelier2015}.
\begin{corr}
\label{c:higher order}
Let $\K$ be an ultrametric field of odd characteristic, let $\gamma$ in~$\K$ be a root of unity, and denote by~$q \ge 1$ the order of~$\gamma$.
Moreover, let~$f$ be a power series with coefficients in~$\K$ satisfying $f(0)=0$ and $f'(0)=\gamma$.
If
\begin{displaymath}
q' \= \mult(f^q) - 1 \le p - 1
\text{ and }
\resit(f^q) \neq 0,
\end{displaymath}
then $f^q$ is $q'$-ramified.
In particular, if~$f$ converges on a neighborhood of the origin, then the origin is isolated as a periodic point of~$f$.
\end{corr}
\begin{example}
Let~$\K$ be an ultrametric field of characteristic~$7$, and note that~$2$ is a root of unity in~$\K$ of order~$3$.
Let~$f$ be a power series with coefficients in~$\mathcal{O}_{\K}$ such that
\begin{displaymath}
f(\zeta)
\equiv
2\zeta + \zeta^2 \mod \langle \zeta^{13} \rangle.
\end{displaymath}
A direct computation shows that
\begin{displaymath}
f^3(\zeta) \equiv \zeta(1 + \zeta^6 + \zeta^7) \mod \langle \zeta^{13} \rangle.
\end{displaymath}
In particular, $\mult(f^3) - 1 = 6 > 3$, so~$f$ is not minimally ramified in the sense of~\cite{LindahlRiveraLetelier2015}, and we cannot apply Corollary~C of that paper to~$f$.
However, by~\eqref{eq:genericity polynomial} we have $\resit(f^3) \neq 0$, so Corollary~\ref{c:higher order} applies to~$f^3$ and it implies that~$f^3$ is $6$\nobreakdash-ramified and that the origin is isolated as a periodic point of~$f^3$, and hence of~$f$.
\end{example}
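The displayed expansion of~$f^3$ can be reproduced by truncated polynomial arithmetic over~$\F_7$. The sketch below is ours; it takes $f$ to be exactly $2\zeta + \zeta^2$, which is harmless since the omitted higher-order terms of~$f$ do not affect the result mod~$\zeta^{13}$:

```python
M, p = 13, 7    # work mod z**M over F_p

def mul(a, b):
    # product of {exponent: coefficient} dicts, truncated mod z**M, coefficients mod p
    c = {}
    for i, ai in a.items():
        for j, bj in b.items():
            if i + j < M:
                c[i + j] = (c.get(i + j, 0) + ai*bj) % p
    return {e: v for e, v in c.items() if v}

def compose(a, g):
    # a(g(z)) mod z**M, where g has no constant term
    out, gk = {}, {0: 1}
    for e in range(M):
        if e in a:
            for k, v in gk.items():
                out[k] = (out.get(k, 0) + a[e]*v) % p
        gk = mul(gk, g)
    return {e: v for e, v in out.items() if v}

f = {1: 2, 2: 1}                   # f(z) = 2z + z**2
f3 = compose(f, compose(f, f))     # the third iterate f∘f∘f
assert f3 == {1: 1, 7: 1, 8: 1}    # i.e. z(1 + z**6 + z**7) mod z**13
```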
| {
"timestamp": "2020-02-04T02:24:36",
"yymm": "1904",
"arxiv_id": "1904.04494",
"language": "en",
"url": "https://arxiv.org/abs/1904.04494",
"abstract": "In this paper, we study power series having a fixed point of multiplier 1. First, we give a closed formula for the residue fixed point index, in terms of the first coefficients of the power series. Then, we use this formula to study wildly ramified power series in positive characteristic. Among power series having a multiple fixed point of small multiplicity, we characterize those having the smallest possible lower ramification numbers in terms of the residue fixed point index. Furthermore, we show that these power series form a generic set, and, in the case of convergent power series, we also give an optimal lower bound for the distance to other periodic points.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Residue fixed point index and wildly ramified power series",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806546550656,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7077274257241446
} |
https://arxiv.org/abs/1312.0039 | A bound for the error term in the Brent-McMillan algorithm | The Brent-McMillan algorithm B3 (1980), when implemented with binary splitting, is the fastest known algorithm for high-precision computation of Euler's constant. However, no rigorous error bound for the algorithm has ever been published. We provide such a bound and justify the empirical observations of Brent and McMillan. We also give bounds on the error in the asymptotic expansions of functions related to modified Bessel functions. | \section{Introduction}
Brent and McMillan \cite{BrentMcMillan1980,Jackson1996} observed
that Euler's constant
\begin{equation*}
\gamma = \lim_{n \rightarrow \infty } (H_n - \ln(n)) \approx 0.5772156649,
\quad H_n = \sum_{k=1}^n \frac{1}{k}\,\raisebox{2pt}{$,$}
\end{equation*}
can be computed rapidly to high accuracy using the formula
\begin{equation}
\gamma = \frac{S_0(2n) - K_0(2n)}{I_0(2n)} - \ln(n)\,\raisebox{2pt}{$,$}
\label{eq:bessel}
\end{equation}
where $n > 0$ is a free parameter (understood to be an integer),
$K_0(x)$ and $I_0(x)$ denote the usual Bessel functions, and
\begin{equation*}
S_0(x) = \sum_{k=0}^{\infty} \frac{H_k}{(k!)^2} \left(\frac{x}{2}\right)^{2k}.
\end{equation*}
The idea is to choose $n$ optimally so that an asymptotic
series can be used to compute $K_0(2n)$, while $S_0(2n)$ and $I_0(2n)$
are computed using Taylor series.
When all series are evaluated using the \emph{binary splitting}
technique (see \cite[\S4.9]{BrentZimmermann2010}),
the first $d$ digits of $\gamma$
can be computed in essentially optimal time $O(d^{1+\varepsilon})$.
This approach has been used for all recent record
calculations of $\gamma$, including the
current world record of 29,844,489,545 digits
set by A. Yee and R. Chan in 2009 \cite{Yee}.
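As an illustration (our own sketch, not the implementation used for the record computations), formula~\eqref{eq:bessel} can be evaluated in exact rational arithmetic, computing $S_0(2n)$ and $I_0(2n)$ by their Taylor series and $2x\,I_0(x)K_0(x)$ by the asymptotic expansion~\eqref{eq:ikasymp} discussed below; already $n = 5$ gives $\gamma$ to better than $10^{-12}$:

```python
from fractions import Fraction as Fr
from math import log, factorial

n = 5            # free parameter; the final error is O(e**(-8*n))
x = 2*n
N = 60           # Taylor terms for S0 and I0 (comfortably more than needed)
m = 4*n          # asymptotic terms, the recommended choice m ≈ 4n

# Taylor series of I0(x) = sum (x/2)**(2k)/(k!)**2 and S0(x) = sum H_k (x/2)**(2k)/(k!)**2
I0, S0, H, t = Fr(0), Fr(0), Fr(0), Fr(1)   # t = (x/2)**(2k)/(k!)**2, and x/2 = n
for k in range(N):
    I0 += t
    S0 += H * t
    H += Fr(1, k + 1)
    t *= Fr(n*n, (k + 1)**2)

# truncated asymptotic series for 2x*I0(x)*K0(x), with b_k as in (eq:ikasymp)
IK = sum(Fr(factorial(2*k)**3, factorial(k)**4 * 8**(2*k) * x**(2*k))
         for k in range(m // 2))

# gamma = (S0 - K0)/I0 - ln(n), with K0/I0 computed as (I0*K0)/I0**2
gamma = float(S0/I0 - IK/(2*x*I0*I0)) - log(n)
assert abs(gamma - 0.5772156649015329) < 1e-12
```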
Brent and McMillan gave three algorithms (B1, B2 and B3)
to compute $\gamma$ via~\eqref{eq:bessel}.
The most efficient, B3, approximates
$K_0(2n)$ using the asymptotic expansion
\begin{equation}
2x I_0(x) K_0(x) = \sum_{k=0}^{m/2-1} \frac{b_k}{x^{2k}} + T_m(x)\,, \quad
b_k = \frac{[(2k)!]^3}{(k!)^4 8^{2k}}\,\raisebox{2pt}{$,$}
\label{eq:ikasymp}
\end{equation}
where one should take $m \approx 4n$.
The expansion \eqref{eq:ikasymp}
appears as formula 9.7.5 in Abramowitz and Stegun \cite{AbramowitzStegun1964},
and 10.40.6 in the Digital Library of Mathematical Functions \cite{DLMF}.
Unfortunately, neither work gives a proof or reference, and no bound
for the error term $T_m(x)$ is provided. Brent and McMillan
observed empirically that $T_{4n}(2n) = O(e^{-4n})$, which
would give a final error of $O(e^{-8n})$ for~$\gamma$,
but left this as a conjecture.
Brent \cite{Brent2010} recently noted that the error
term can be bounded rigorously,
starting from the individual asymptotic expansions of $I_0(x)$ and $K_0(x)$.
However, he did not present an explicit bound at that time.
In this paper, we calculate an explicit error bound,
allowing the fastest version of the Brent-McMillan algorithm (B3) to be used
for provably correct evaluation of $\gamma$.
To bound the error in the Brent-McMillan algorithm we must bound the
errors in evaluating the transcendental functions $I_0(2n)$, $K_0(2n)$ and
$S_0(2n)$ occurring in~\eqref{eq:bessel} (we ignore the
error in evaluating $\ln(n)$ since this is well-understood). The most
difficult task is to bound the error associated with $K_0(2n)$. For
reasons of efficiency, the algorithm approximates
$I_0(2n)K_0(2n)$ using the asymptotic expansion~\eqref{eq:ikasymp},
and then the term $K_0(2n)/I_0(2n)$ in~\eqref{eq:bessel} is computed from
$I_0(2n)K_0(2n)/I_0(2n)^2$.
Sections~\ref{sec:bounds}--\ref{sec:product} contain
bounds on the size of various error terms
that are needed for the main result.
For example, Lemma~\ref{lemma:Q_mbound} bounds the error in the asymptotic
expansion for $I_0(x)$, which is nontrivial as the terms do not have
alternating signs.
The asymptotic expansion~\eqref{eq:ikasymp} can be obtained formally by
multiplying the asymptotic
expansions (see~\eqref{eq:k0series}--\eqref{eq:i0series} below) for
$K_0$ and $I_0$. To obtain $m$ terms in the asymptotic expansion, we multiply
the polynomials $P_m(-1/z)$ and $P_m(1/z)$ occurring
in~\eqref{eq:k0series}--\eqref{eq:i0series}, then discard half the terms
(here $z = 1/x$ is small when $x \approx 2n$ is large, so we discard the
terms involving high powers of $z$). To bound the error,
we show in Lemma~\ref{lem:T_4n_bd}
that the discarded terms are sufficiently small, and also take into
account the error terms $R_m$ and $Q_m$ in the asymptotic expansions for
$K_0$ and $I_0$.
The main result, Theorem~\ref{thm:Jbound}, is given in
Section~\ref{sec:complete}.
Provided the parameter $N$ (the number of terms used to approximate
$S_0(2n)$ and $I_0(2n)$) is sufficiently large, the error is bounded
by $24e^{-8n}$. Corollary~\ref{cor:simpler_bound} shows that it is sufficient
to take $N \approx 4.971n$.
\section{Bounds for the individual Bessel functions} \label{sec:bounds}
Asymptotic expansions for $I_0(x)$ and $K_0(x)$
are given by Olver~\cite[pp.~266--269]{Olver1997} and
can be found in~\cite[\S10.40]{DLMF}.
They can be written as
\begin{equation}
K_0(x) = e^{-x} \left(\frac{\pi}{2 x}\right)^{1/2} \left( P_m(-x) + R_m(x) \right)
\label{eq:k0series}
\end{equation}
and
\begin{equation}
I_0(x) = \frac{e^x}{(2 \pi x)^{1/2}} \left( P_m(x) + Q_m(x) \right),
\label{eq:i0series}
\end{equation}
where $R_m(x)$ and $Q_m(x)$ denote error terms,
\begin{equation}
P_m(x) = \sum_{k=0}^{m-1} a_k x^{-k},
\;\;\text{and}\;\;
a_k = \frac{[(2k)!]^2}{(k!)^3 32^k}\,\raisebox{2pt}{$.$}
\label{eq:pseries}
\end{equation}
For $n \ge 1$,
\begin{equation} \label{eq:factorial_bounds}
\sqrt{2\pi} n^{n+1/2} e^{-n} \le n! \le e n^{n+1/2} e^{-n},
\end{equation}
so the coefficients $a_k$ in \eqref{eq:pseries} satisfy
\begin{equation}
a_k \le \frac{e^2}{\pi^{3/2} 2^{1/2}} \frac{1}{k^{1/2}} \left(\frac{k}{2e}\right)^k
< \frac{1}{k^{1/2}} \left(\frac{k}{2e}\right)^k
\label{eq:coeffbound}
\end{equation}
for $k \ge 1$ (the first term is $a_0 = 1$).
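The bound~\eqref{eq:coeffbound} is easy to spot-check numerically; the following small verification is ours:

```python
from fractions import Fraction as Fr
from math import e, factorial

# a_k = [(2k)!]^2 / ((k!)^3 * 32^k), compared against k**(-1/2) * (k/(2e))**k
for k in range(1, 51):
    a_k = Fr(factorial(2*k)**2, factorial(k)**3 * 32**k)
    assert float(a_k) < k**-0.5 * (k/(2*e))**k
```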
For $x > 0$, we also have the global bounds
\begin{equation}
0 < K_0(x) < e^{-x} \left(\frac{\pi}{2 x}\right)^{1/2}
\label{eq:globalbound1}
\end{equation}
and
\begin{equation}
I_0(x) > \frac{e^x}{(2 \pi x)^{1/2}}\,\raisebox{2pt}{$.$}
\label{eq:globalbound2}
\end{equation}
Observe that the bound on $K_0(x)$ and equation~\eqref{eq:k0series} imply that
\begin{equation} \label{eq:PR_ineq}
|P_m(-x) + R_m(x)| < 1.
\end{equation}
For $x > 0$, the series~\eqref{eq:k0series} for $K_0(x)$
is alternating, and the remainder satisfies
\begin{equation} \label{eq:R_mbound}
|R_m(x)| \le \frac{a_m}{x^m}
< \frac{1}{m^{1/2}} \left(\frac{m}{2e}\right)^m
\frac{1}{x^m}\,\raisebox{2pt}{$.$}
\end{equation}
The series~\eqref{eq:i0series} for $I_0(x)$ is not alternating.
The following lemma bounds the error $Q_m(x)$.
\begin{lemma} \label{lemma:Q_mbound}
Let $Q_m(x)$ be defined by~\eqref{eq:i0series}. Then for $m \ge 1$ and
real $x \ge 2$ we have
\[|Q_m(x)| \le 4\left(\frac{m}{2ex}\right)^m + e^{-2x}.\]
\end{lemma}
\begin{proof}
The identity
$I_0(x) = i(K_0(-x) - K_0(x))/\pi$
gives
\begin{equation}
Q_m(x) = R_m(-x) - \frac{i}{\pi} \frac{(2\pi x)^{1/2}}{e^x} K_0(x).
\end{equation}
According to Olver~\cite[p.~269]{Olver1997},
\begin{equation}
|R_m(-x)| \le 2 \chi(m) \exp(\tfrac{1}{8} \pi x^{-1}) a_m x^{-m},
\end{equation}
where
\begin{equation}
\chi(m) = \pi^{1/2} \frac{\Gamma(m/2+1)}{\Gamma(m/2+1/2)}
\le \frac{\pi}{2}\,m^{1/2}
\end{equation}
(the bound on $\chi(m)$ follows as $\chi(m)/m^{1/2}$ is monotonic decreasing
for $m \ge 1$).
Since $x \ge 2$, applying \eqref{eq:coeffbound} gives
\begin{equation}
|R_m(-x)| \le \pi e^{\pi/16} \left(\frac{m}{2e}\right)^m \frac{1}{x^m}
< 4 \left(\frac{m}{2ex}\right)^m.
\end{equation}
Combined with the global bound \eqref{eq:globalbound1} for $K_0(x)$, we obtain
\begin{equation} \label{eq:Q_m_bd}
|Q_m(x)| \le |R_m(-x)| + \frac{1}{\pi} \frac{(2\pi x)^{1/2}}{e^x} K_0(x)
\le 4 \left(\frac{m}{2ex}\right)^m + e^{-2x}.
\end{equation}
\end{proof}
\pagebreak[3]
\begin{corollary} \label{cor:IK_upper_bd}
For $x \ge 2$, we have $0 < I_0(x)K_0(x) < 1/x$.
\end{corollary}
\begin{proof}
The first inequality is obvious, since both $I_0(x)$ and $K_0(x)$
are positive.
Also, using~\eqref{eq:i0series} and~\eqref{eq:Q_m_bd} with $m=1$ gives
\[I_0(x) \le \frac{e^x}{(2\pi x)^{1/2}}(1 + e^{-1} + e^{-4}),\]
so from \eqref{eq:globalbound1} we have
\[I_0(x)K_0(x) \le \frac{1 + e^{-1} + e^{-4}}{2x} < \frac{1}{x}\,\raisebox{2pt}{$.$}\]
\end{proof}
\begin{lemma} \label{lemma:R_and_Sbound}
If $R_m(x)$ and $Q_m(x)$ are defined by~\eqref{eq:k0series}
and~\eqref{eq:i0series} respectively, then
\begin{equation}
|R_{4n}(2n)| \le \frac{e^{-4n}}{2 n^{1/2}}
\;\;\text{and}\;\; |Q_{4n}(2n)| \le 5 e^{-4n}.
\label{eq:rsevalbounds}
\end{equation}
\end{lemma}
\begin{proof}
Taking $x = 2n$ and $m = 4n$, the inequality~\eqref{eq:R_mbound}
gives the first inequality, and
Lemma~\ref{lemma:Q_mbound} gives the second inequality.
\end{proof}
We also need the following lemma.
\begin{lemma}
If $P_m(x)$ is defined by~\eqref{eq:pseries}, then
\begin{equation}
|P_{4n}(2n)| < 2\;\;\text{and}\;\; |P_{4n}(-2n)| < 1.
\label{eq:pnbounds}
\end{equation}
\end{lemma}
\begin{proof}
Using~\eqref{eq:pseries} and~\eqref{eq:coeffbound}, we have
\begin{eqnarray*}
P_{4n}(2n) &=& 1 + \sum_{k=1}^{4n-1}\frac{a_k}{(2n)^k}\\
&\le& 1 + \sum_{k=1}^{4n-1}k^{-1/2}\left(\frac{k}{4en}\right)^k\\
&\le& 1 + \sum_{k=1}^{4n-1}e^{-k} < \frac{e}{e-1} < 2.
\end{eqnarray*}
The right inequality in \eqref{eq:pnbounds}
can be proved in a similar manner, taking the sign alternations
into account.
\end{proof}
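Both inequalities of the lemma can be confirmed numerically for moderate~$n$ using exact rational arithmetic; the sketch below is our own check, not part of the paper:

```python
from fractions import Fraction as Fr
from math import factorial

def a(k):
    # the coefficients a_k = [(2k)!]^2 / ((k!)^3 * 32^k) from (eq:pseries)
    return Fr(factorial(2*k)**2, factorial(k)**3 * 32**k)

def P(n, sign):
    # P_{4n}(sign * 2n) = sum_{k=0}^{4n-1} a_k * (sign*2n)**(-k)
    return sum(a(k) * Fr(sign, 1)**k * Fr(1, (2*n)**k) for k in range(4*n))

for n in range(1, 11):
    assert float(P(n, 1)) < 2
    assert abs(float(P(n, -1))) < 1
```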
\section{Bounds for the product} \label{sec:product}
We wish to bound the error term $T_m(x)$ in \eqref{eq:ikasymp}
when evaluated at $x = 2n$, $m = 4n$. The result is given by the
following lemma.
\begin{lemma} \label{lem:T_4n_bd}
If $T_m(x)$ is defined by~$\eqref{eq:ikasymp}$, then
$|T_{4n}(2n)| < 7e^{-4n}$.
\end{lemma}
\begin{proof}
In terms of the
expansions for $I_0(x)$ and $K_0(x)$, we have
\begin{eqnarray}
2x I_0(x) K_0(x) &=& (P_m(-x) + R_m(x)) (P_m(x) + Q_m(x))\nonumber \\
&=& P_m(x) P_m(-x) +\nonumber \\
&& \left[(P_m(-x)+R_m(x)) Q_m(x) + P_m(x) R_m(x)\right].
\label{eq:expanded}
\end{eqnarray}
It follows from \eqref{eq:PR_ineq},
\eqref{eq:rsevalbounds} and \eqref{eq:pnbounds} that
the expression $[\cdots]$ in \eqref{eq:expanded},
evaluated at $x = 2n$, $m = 4n$,
is bounded in absolute value by
\begin{equation}
5 e^{-4n} + e^{-4n}/n^{1/2} \le 6e^{-4n}.
\label{eq:crossbound}
\end{equation}
Next, we rewrite
\begin{equation*}
P_m(x) P_m(-x) = \sum_{i=0}^{m-1} \sum_{j=0}^{m-1} (-1)^i a_i a_j x^{-(i+j)}
\end{equation*}
as $L + U$, where
\begin{equation}
L = \sum_{k=0}^{m-1} \left( \sum_{j=0}^k (-1)^j a_j a_{k-j} \right) x^{-k}
\label{eq:lsum}
\end{equation}
and
\begin{equation}
U = \sum_{k=m}^{2m-2} \left( \sum_{j=k-(m-1)}^{m-1} (-1)^j a_j a_{k-j} \right)
x^{-k}.
\label{eq:usum}
\end{equation}
The ``lower'' sum $L$ is precisely $\sum_{k=0}^{m/2-1} b_k x^{-2k}$.
Replacing $k$ by $2k$ in \eqref{eq:lsum} (as the odd terms
vanish by symmetry), we have to prove
\begin{equation}
\sum_{j=0}^{2k} \frac{(-1)^j [(2j)!]^2 [(4k-2j)!]^2}{(j!)^3 [(2k-j)!]^3 32^{2k}} = \frac{[(2k)!]^3}{(k!)^4 8^{2k}}
\,\raisebox{2pt}{$.$}
\label{eq:hypsum}
\end{equation}
This can be done algorithmically using the creative telescoping
approach of Wilf and Zeilberger. For example, the
implementation in the Mathematica package \texttt{HolonomicFunctions}
by Koutschan \cite{Koutschan2010} can be used.
The command
\begin{verbatim}
a = ((2j)!)^2 / ((j!)^3 32^j);
CreativeTelescoping[(-1)^j a (a /. j -> 2k-j),
{S[j]-1}, S[k]]
\end{verbatim}
outputs the recurrence equation
\begin{equation*}
(8+8 k) b_{k+1} - \left(1+6 k+12 k^2+8 k^3\right) b_k = 0
\end{equation*}
matching the right-hand side of~\eqref{eq:hypsum},
together with a telescoping certificate.
Since the summand in \eqref{eq:hypsum} vanishes
for $j < 0$ and $j > 2k$, no boundary conditions
enter into the telescoping relation,
and checking the initial value ($k = 0$)
suffices to prove the identity.\footnote%
{Curiously, the built-in
\texttt{Sum} function in Mathematica 9.0.1
computes a closed form for the sum \eqref{eq:hypsum},
but returns an answer that is wrong by a factor 2
if the factor $[(4k-2j)!]^2$ in the summand is input as $[(2(2k-j))!]^2$.}
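Independently of any computer-algebra system, the identity \eqref{eq:hypsum} and the telescoped recurrence can be checked for small $k$ with exact rational arithmetic; a minimal sketch:

```python
from fractions import Fraction
from math import factorial

def lhs(k):
    # left-hand side of the identity, summed exactly
    return sum(Fraction((-1) ** j * factorial(2 * j) ** 2 * factorial(4 * k - 2 * j) ** 2,
                        factorial(j) ** 3 * factorial(2 * k - j) ** 3 * 32 ** (2 * k))
               for j in range(2 * k + 1))

def b(k):
    # right-hand side of the identity
    return Fraction(factorial(2 * k) ** 3, factorial(k) ** 4 * 8 ** (2 * k))

for k in range(8):
    assert lhs(k) == b(k)
    # recurrence produced by creative telescoping
    assert (8 + 8 * k) * b(k + 1) == (1 + 6 * k + 12 * k ** 2 + 8 * k ** 3) * b(k)
```

Of course, a finite check is no substitute for the telescoping certificate; it merely guards against transcription errors.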
It remains to bound the ``upper'' sum $U$ given by \eqref{eq:usum}.
The coefficients of $U = \sum_{k=m}^{2m-2} c_k x^{-k}$
can also be written as
\begin{equation}
c_k = \sum_{j=1}^{2m-k-1} (-1)^{j+k+m} a_{k-m+j} a_{m-j}.
\end{equation}
By symmetry, this sum is zero when $k$ is odd, so we only need
to consider the case of $k$ even.
We first note that, if $1 \le i < j$, then $a_i a_j \ge a_{i+1} a_{j-1}$. This
can be seen by observing that the ratio satisfies
\begin{equation}
\frac{a_i a_j}{a_{i+1} a_{j-1}} = \frac{(i+1) (2j-1)^2}{j (2i+1)^2} \ge 1.
\end{equation}
Thus, after combining the duplicated terms, $c_k$ can be written as an alternating sum in which
the terms decrease in magnitude; for example, for $k = m = 12$,
\begin{equation}
-2 a_1 a_{11} + 2 a_2 a_{10} - 2 a_3 a_9 + 2 a_4 a_8 - 2 a_5 a_7 + a_6 a_6,
\end{equation}
and its absolute value can be bounded by that of the first term, $2 a_{1+k-m} a_{m-1}$, giving
\begin{equation}
\left|\sum_{k=m}^{2m-2} \frac{c_k}{x^k} \right| \le \sum_{k=m}^{2m-2} t_k,
\quad t_k = \frac{2 a_{1+k-m} a_{m-1}}{x^k}\,\raisebox{2pt}{$.$}
\end{equation}
Evaluating at $x = 2n, m = 4n$ as usual, the term ratio
\begin{equation}
\frac{t_{k+1}}{t_k} = \frac{(3+2k-8n)^2}{16n(2+k-4n)}
\end{equation}
is bounded by 1 when $4n \le k \le 8n-2$. Therefore,
using~\eqref{eq:coeffbound},
\begin{equation}
\sum_{k=m}^{2m-2} t_k \le (m-1) t_m \le e^{-4n} \frac{(4n-1)^{4n-1/2}}{2^{8n-1} n^{4n}} < e^{-4n}.
\label{eq:tbound}
\end{equation}
Adding \eqref{eq:crossbound} and \eqref{eq:tbound}, we find that
$|T_{4n}(2n)| < 7e^{-4n}$.
\end{proof}
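The bound on the upper sum can also be confirmed directly: the sketch below evaluates $U$ of \eqref{eq:usum} exactly at $x = 2n$, $m = 4n$ and compares it with $e^{-4n}$, again assuming $a_k = [(2k)!]^2/((k!)^3\, 32^k)$ as in the Mathematica snippet above.

```python
from fractions import Fraction
from math import factorial, exp

def a(k):
    return Fraction(factorial(2 * k) ** 2, factorial(k) ** 3 * 32 ** k)

def U(m, x):
    # the "upper" sum from the double product P_m(x) P_m(-x)
    total = Fraction(0)
    for k in range(m, 2 * m - 1):
        c = sum((-1) ** j * a(j) * a(k - j) for j in range(k - (m - 1), m))
        total += c * Fraction(1, x ** k)
    return total

for n in range(1, 6):
    assert abs(U(4 * n, 2 * n)) < exp(-4 * n)
```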
\section{A complete error bound} \label{sec:complete}
We are now equipped to justify Algorithm~B3.
The algorithm computes an approximation $\widetilde{\gamma}$ to $\gamma$.
Theorem~\ref{thm:Jbound} bounds the error
$|\widetilde{\gamma}-\gamma|$ in the algorithm, excluding
rounding errors and any error in the evaluation of $\ln n$.
The finite sums $S$ and $I$ approximate $S_0(2n)$ and
$I_0(2n)$ respectively, while $T$ approximates $I_0(2n)K_0(2n)$.
\begin{theorem} \label{thm:Jbound}
Given an integer $n \ge 1$, let $N \ge 4n$ be an
integer such that
\begin{equation}
\frac{2 n^{2N} H_N}{(N!)^2} < \varepsilon_0,
\label{eq:e0bound}
\end{equation}
where
\begin{equation}
\varepsilon_0 = \frac{e^{-6n}}{(4\pi n)^{1/2} (1+H_N)}\,\raisebox{2pt}{$.$}
\label{eq:eps0_def}
\end{equation}
Let
\begin{equation*}
S = \sum_{k=0}^{N-1} \frac{H_k n^{2k}}{(k!)^2}\,\raisebox{2pt}{$,$}
\quad I = \sum_{k=0}^{N-1} \frac{n^{2k}}{(k!)^2}\,\raisebox{2pt}{$,$}
\quad T = \frac{1}{4n} \sum_{k=0}^{2n-1}
\frac{[(2k)!]^3}{(k!)^4 8^{2k} (2n)^{2k}}\,\raisebox{2pt}{$,$}
\end{equation*}
and
\begin{equation*}
\widetilde{\gamma} = \frac{S}{I} - \frac{T}{I^2} - \ln n\,.
\end{equation*}
Then
\begin{equation}
|\widetilde{\gamma} - \gamma| < 24 e^{-8n}. \label{eq:fullbound}
\end{equation}
\end{theorem}
\begin{proof}
Let
\begin{align*}
\varepsilon_1 = S_0(2n) - S & = \sum_{k=N}^{\infty} \frac{H_k n^{2k}}{(k!)^2}
\,\raisebox{2pt}{$,$} \\
\varepsilon_2 = I_0(2n) - I & = \sum_{k=N}^{\infty} \frac{n^{2k}}{(k!)^2}
\,\raisebox{2pt}{$.$}
\end{align*}
Inspection of the term ratios for $k \ge N$ shows that
$\varepsilon_1$ and $\varepsilon_2$ are bounded by
the left side of~\eqref{eq:e0bound}.
Using~\eqref{eq:globalbound2} to bound $1/I_0(2n)$, it follows that
\begin{align*}
\left| \frac{S+\varepsilon_1}{I+\varepsilon_2} - \frac{S}{I} \right|
& = \left| \frac{\varepsilon_1 I - \varepsilon_2 S}{(I + \varepsilon_2) I} \right| \\
& \le \frac{\varepsilon_0 (I + S)}{(I + \varepsilon_2) I} \\
& = \varepsilon_0 \left( \frac{1}{I_0(2n)} \right) \left( 1 + \frac{S}{I} \right) \\
& < \frac{e^{-6n}}{(4\pi n)^{1/2} (1+H_N)} \left( \frac{(4 \pi n)^{1/2}}{e^{2n}} \right) ( 1 + H_N ) \\
& = e^{-8n}.
\end{align*}
We have $T + \varepsilon_3 = I_0(2n) K_0(2n)$ where,
from Lemma~\ref{lem:T_4n_bd},
$|\varepsilon_3| < 7e^{-4n} / (4n)$.
Thus, from Corollary~\ref{cor:IK_upper_bd},
\[T \le \frac{1}{2n} + \frac{7e^{-4n}}{4n} < \frac{1}{n}\,\raisebox{2pt}{$.$}\]
Therefore, using~\eqref{eq:globalbound2} again,
\begin{align*}
\left| \frac{T+\varepsilon_3}{(I+\varepsilon_2)^2} - \frac{T}{I^2} \right|
& = \left| \frac{\varepsilon_3 I^2 - T \varepsilon_2 (2 I + \varepsilon_2)}{(I + \varepsilon_2)^2 I^2} \right| \\
& \le \frac{|\varepsilon_3|}{(I+\varepsilon_2)^2} + T \varepsilon_2 \frac{(2I+\varepsilon_2)}{(I+\varepsilon_2)^2 I^2} \\
& \le \frac{|\varepsilon_3|}{I_0(2n)^2} + T \varepsilon_2 \frac{3}{I_0(2n)^3} \\
& < 7 \pi e^{-8n} + e^{-8n} \\
& < 23 e^{-8n}.
\end{align*}
Thus, the total error $|\widetilde{\gamma} - \gamma|$
is bounded by $e^{-8n} + 23e^{-8n} = 24 e^{-8n}$.
\end{proof}
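For small $n$ the theorem can be exercised in ordinary double precision (larger $n$, as in Table~\ref{tab:bcomparison}, requires multiple-precision arithmetic). The sketch below checks hypothesis \eqref{eq:e0bound} explicitly before asserting the conclusion; the choice $N = 6n$ is an arbitrary comfortable margin above $\alpha n + 1$ (see the corollary below), and `GAMMA` is Euler's constant to double precision, taken as an external input.

```python
from math import exp, log, pi, factorial

GAMMA = 0.5772156649015329  # Euler's constant to double precision

def gamma_approx(n, N):
    # harmonic numbers H_0, ..., H_N
    H = [0.0] * (N + 1)
    for k in range(1, N + 1):
        H[k] = H[k - 1] + 1.0 / k
    # hypothesis (eq:e0bound) of the theorem
    eps0 = exp(-6 * n) / ((4 * pi * n) ** 0.5 * (1 + H[N]))
    assert 2 * n ** (2 * N) * H[N] / factorial(N) ** 2 < eps0
    S = sum(H[k] * n ** (2 * k) / factorial(k) ** 2 for k in range(N))
    I = sum(n ** (2 * k) / factorial(k) ** 2 for k in range(N))
    T = sum(factorial(2 * k) ** 3 / (factorial(k) ** 4 * 8 ** (2 * k) * (2 * n) ** (2 * k))
            for k in range(2 * n)) / (4 * n)
    return S / I - T / I ** 2 - log(n)

for n in (2, 3):
    assert abs(gamma_approx(n, 6 * n) - GAMMA) < 24 * exp(-8 * n)
```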
\begin{remark} \label{remark:1}
{\rm
We did not try to obtain the best possible constant
in~\eqref{eq:fullbound}. A more detailed analysis shows that we can
reduce the constant $24$
by a factor greater
than two if $n$ is large. See also Remark~\ref{remark:3}.
}
\end{remark}
Since the condition on $N$ in Theorem~\ref{thm:Jbound} is rather
complicated, we give the following corollary.
\begin{corollary} \label{cor:simpler_bound}
Let $\alpha \approx 4.970625759544$ be the
unique positive real solution of $\alpha (\ln \alpha - 1) = 3$.
If $n \ge 138$ and $N \ge \alpha n$ are integers, then the conclusion
of Theorem~$\ref{thm:Jbound}$ holds.
\end{corollary}
\begin{proof}
For $138 \le n \le 214$
we can verify by direct computation that
conditions~\eqref{eq:e0bound}--\eqref{eq:eps0_def}
of Theorem~\ref{thm:Jbound} hold.
Hence, in the following we assume that $n \ge 215$.
Since $N \ge \alpha n$, this implies that
$N \ge \lceil 215\alpha \rceil = 1069$.
Let $\beta = N/n$. Then $\beta \ge \alpha$, so
$\beta(\ln\beta - 1) \ge 3$.
Thus $2n(\beta\ln\beta - \beta - 3) \ge 0$. Taking exponentials
and using $\beta = N/n$, we obtain
\begin{equation} \label{key_ineq}
N^{2N} \ge e^{2N+6n}n^{2N}.
\end{equation}
Define the real analytic function $h(x) := \ln x + \gamma + 1/(2x)$.
The upper bound $H_N \le h(N)$
follows from the Euler-Maclaurin expansion
\[H_N - \ln(N) - \gamma \sim \frac{1}{2N} -
\sum_{k=1}^\infty \frac{B_{2k}}{2k} N^{-2k},
\]
since the terms on the right-hand-side alternate in sign.
Using our assumption that $N \ge 1069$, it is easy to verify that
\begin{equation} \label{eq:N_ineq2}
\sqrt{\pi\alpha N} \ge 2h(N)(h(N)+1).
\end{equation}
Since $\beta \ge \alpha$, it follows from~\eqref{eq:N_ineq2} that
\begin{equation} \label{eq:N_ineq1}
\sqrt{\pi\beta N} \ge 2h(N)(h(N)+1).
\end{equation}
Substituting $\beta = N/n$ in~\eqref{eq:N_ineq1},
it follows that
\begin{equation} \label{eq:Nn_ineq}
\pi N > 2 h(N)(h(N)+1)(\pi n)^{1/2}.
\end{equation}
Using~\eqref{key_ineq}, this gives
\begin{equation} \label{eq:36b}
\pi N^{2N+1} > 2n^{2N}h(N)(h(N)+1)
(\pi n)^{1/2}e^{2N+6n}.
\end{equation}
{From} the first inequality of~\eqref{eq:factorial_bounds} we have
$(N!)^2 \ge {2\pi}N^{2N+1}e^{-2N}$.
Using this and $h(N) \ge H_N$, we see that~\eqref{eq:36b} implies
\begin{equation} \label{eq:e0bound2}
(N!)^2 > 4n^{2N}H_N(1+H_N)(\pi n)^{1/2}e^{6n}.
\end{equation}
However, it is easy to see that~\eqref{eq:e0bound2} is equivalent
to conditions~\eqref{eq:e0bound}--\eqref{eq:eps0_def}
of Theorem~\ref{thm:Jbound}. Hence, the conclusion of
Theorem~\ref{thm:Jbound} holds.
\end{proof}
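The direct computation invoked for $138 \le n \le 214$ can be reproduced in a few lines, working with logarithms (via the log-gamma function) to avoid enormous integers. `ALPHA` below is the decimal approximation of $\alpha$ quoted in the corollary.

```python
from math import lgamma, log, pi, ceil

ALPHA = 4.970625759544  # root of alpha (ln alpha - 1) = 3

def condition_holds(n, N):
    # check (eq:e0bound): 2 n^{2N} H_N / (N!)^2 < eps0, compared in log form
    H = sum(1.0 / k for k in range(1, N + 1))
    lhs = log(2) + 2 * N * log(n) + log(H) - 2 * lgamma(N + 1)
    log_eps0 = -6 * n - 0.5 * log(4 * pi * n) - log(1 + H)
    return lhs < log_eps0

for n in range(138, 215):
    assert condition_holds(n, ceil(ALPHA * n))
```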
\begin{remark} \label{remark:2}
{\rm
If $0 < n < 138$ then Corollary~$\ref{cor:simpler_bound}$ does not apply,
but a numerical computation shows that it is always sufficient to take
$N \ge \alpha n + 1$.
}
\end{remark}
\begin{remark} \label{remark:3}
{\rm
As indicated in Table~\ref{tab:bcomparison}, the bound in \eqref{eq:fullbound}
is nearly optimal for large~$n$.
Our bound $24e^{-8n}$ appears to overestimate the true error
by a factor that grows slightly faster than order $n^{1/2}$,
which is inconsequential for high-precision computation of~$\gamma$.
}
\end{remark}
\begin{table}[ht]
\begin{center}
\begin{tabular}{ c | c | l | l }
$n$ & $N$ & \;\;\;$|\widetilde{\gamma} - \gamma|$ & \;\;\;$24e^{-8n}$\\[2pt]
\hline
&&&\\[-9pt]
10 & 50 & $7.68 \cdot 10^{-38}$ & $4.34 \cdot 10^{-34}$ \\
100 & 498 & $5.32 \cdot 10^{-349}$ & $8.81 \cdot 10^{-347}$ \\
1000 & 4971 & $1.96 \cdot 10^{-3476}$ & $1.06 \cdot 10^{-3473}$ \\
10000 & 49706 & $2.85 \cdot 10^{-34746}$ & $6.64 \cdot 10^{-34743}$
\end{tabular}
\caption{The error $|\widetilde{\gamma} -\gamma|$
compared to the bound \eqref{eq:fullbound}.}
\label{tab:bcomparison}
\end{center}
\end{table}
\bibliographystyle{plain}
\pagebreak[3]
| {
"timestamp": "2013-12-03T02:00:50",
"yymm": "1312",
"arxiv_id": "1312.0039",
"language": "en",
"url": "https://arxiv.org/abs/1312.0039",
"abstract": "The Brent-McMillan algorithm B3 (1980), when implemented with binary splitting, is the fastest known algorithm for high-precision computation of Euler's constant. However, no rigorous error bound for the algorithm has ever been published. We provide such a bound and justify the empirical observations of Brent and McMillan. We also give bounds on the error in the asymptotic expansions of functions related to modified Bessel functions.",
"subjects": "Numerical Analysis (math.NA)",
"title": "A bound for the error term in the Brent-McMillan algorithm",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98058065352006,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.707727424904962
} |
https://arxiv.org/abs/1803.07964 | Stochastic Learning under Random Reshuffling with Constant Step-sizes | In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly. Recent works have pursued justifications for this behavior by examining the convergence rate of the learning process under diminishing step-sizes. This work focuses on the constant step-size case and strongly convex loss function. In this case, convergence is guaranteed to a small neighborhood of the optimizer albeit at a linear rate. The analysis establishes analytically that random reshuffling outperforms uniform sampling by showing explicitly that iterates approach a smaller neighborhood of size $O(\mu^2)$ around the minimizer rather than $O(\mu)$. Furthermore, we derive an analytical expression for the steady-state mean-square-error performance of the algorithm, which helps clarify in greater detail the differences between sampling with and without replacement. We also explain the periodic behavior that is observed in random reshuffling implementations. | \section{Introduction}
\section{MOTIVATION}
We consider minimizing an empirical risk function $J(w)$, which is defined as the sample average of loss values over a possibly large but finite training set:\vspace{-2mm}
\eq{\label{prob-emp-into}
w^\star \;\stackrel{\Delta}{=}\; \argmin_{w\in {\mathbb{R}}^M}\; J(w) \;\stackrel{\Delta}{=}\; \frac{1}{N}\sum_{n=1}^{N}Q(w;x_n)\\[-6mm]\nonumber
}
where $\{x_n\}_{n=1}^N$ denotes the training data samples and the loss function $Q(w;x_n)$ is assumed convex and differentiable. We assume the empirical risk $J(w)$ is strongly convex, which ensures that the minimizer, $w^{\star}$, is unique. Problems of the form (\ref{prob-emp-into}) are common
in many areas of machine learning including linear regression, logistic regression and their regularized versions.
When the size of the dataset $N$ is large, it is impractical to solve \eqref{prob-emp-into} directly via traditional gradient descent by evaluating the full gradient at every iteration. One simple yet powerful remedy is to employ the stochastic gradient descent (SGD) method \cite{bertsekas1989parallel,polyak1992acceleration,polyak1987introduction,bottou2010large,bousquet2008tradeoffs,moulines2011non,zhang2004solving, needell2014stochastic}.
Rather than compute the full gradient ${\nabla}_w J(w)$ over the entire data set, these algorithms pick one index $\boldsymbol{n}_i$ at random at every iteration, and employ ${\nabla}_w Q(w;x_{\boldsymbol{n}_i})$ to approximate ${\nabla}_w J(w)$. Specifically, at iteration $i$, the update for estimating the minimizer is of the form\cite{yuan2016stochastic}:\vspace{-1mm}
\eq{\label{alg:esgd-intr}
{\boldsymbol{w}}_i = {\boldsymbol{w}}_{i-1} - \mu {\nabla}_w Q({\boldsymbol{w}}_{i-1};x_{{\boldsymbol{n}}_i}),
}
where $\mu$ is the step-size parameter.
Note that we are using boldface notation to refer to random variables. Traditionally, the index ${\boldsymbol{n}}_i$ is uniformly distributed over the discrete set $\{1,2,\ldots,N\}$.
It has been noted in the literature \cite{ bottou2009curiously,recht2012toward,gurbuzbalaban2015random, zhang2014note} that incorporating random reshuffling into the gradient descent implementation helps achieve better performance. More broadly than in the case of the pure SGD algorithm, it has also been observed that applying random reshuffling to variance-reduction algorithms, such as SVRG\cite{Johnson2013} and SAGA\cite{defazio2014saga}, can accelerate convergence\cite{de2016efficient,defazio2014finito,ying2017convergence, ying2018convergence}. The reshuffling technique has also been applied in distributed systems to reduce communication and computation costs\cite{lee2018speeding}.
In random reshuffling implementations, the data points are no longer picked independently and uniformly at random. Instead, the gradient descent algorithm is run multiple times over the data, where each run is indexed by $k \geq 1$ and is referred to as an epoch. For each epoch, the original data is first reshuffled and then passed over in order. In this manner, the $i$-th sample used in epoch $k$ is $x_{{\boldsymbol \sigma}^{k}(i)}$, where the symbol ${\boldsymbol \sigma}^k$ represents a uniform random permutation of the indices. We can then express the random reshuffling algorithm for the $k$-th epoch in the following manner:
\begin{equation}
{\boldsymbol{w}}_i^k = {\boldsymbol{w}}_{i-1}^k - \mu{\nabla}_w Q({\boldsymbol{w}}_{i-1}^k;x_{{\boldsymbol \sigma}^k(i)}),\;\;\;\; i = 1, \ldots, N \label{alg.rr}
\end{equation}
with the boundary condition:
\begin{equation}
{\boldsymbol{w}}^k_0= {\boldsymbol{w}}^{k-1}_N\label{bound.condtion}
\end{equation}
In other words, the initial condition for epoch $k$ is the last iterate from epoch $k-1$. The boldface notation for the symbols ${\boldsymbol{w}}$ and $\bm{\sigma}$ in (\ref{alg.rr}) emphasizes the random nature of these variables due to the randomness in the permutation operation. While the samples over one epoch are no longer picked independently from each other, the uniformity of the permutation function implies the following useful properties\cite{ying2017convergence, bertsekas2003convex, horvitz1952generalization}:
\begin{align}
{\boldsymbol \sigma}^k(i) \neq&\, {\boldsymbol \sigma}^k(j),\;\; 1\leq i\neq j \leq N \label{prop1}\\
{\mathbb{P}}[\ {\boldsymbol \sigma}^k(i)=n\ ] =&\, \frac{1}{N},\;\;\;\hspace{3.4mm} 1\leq n\leq N \label{prop2}
\\
{\mathbb{P}}[{\boldsymbol \sigma}^k(i+1)=n \,|\, {\boldsymbol \sigma}^k(1\colon i)] =&\,\left\{
\begin{aligned}
\frac{1}{N-i},\;\; & n \notin {\boldsymbol \sigma}^k(1{:}i)\\
0\;\;\;,\;\; & n\in {\boldsymbol \sigma}^k(1{:} i)
\end{aligned}
\right.\label{prop3}
\end{align}
where ${\boldsymbol \sigma}^k(1{:} i)$ represents the collection of permuted indices for the samples numbered $1$ through $i$.
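To make the recursions \eqref{alg:esgd-intr} and \eqref{alg.rr}--\eqref{bound.condtion} concrete, the sketch below runs both sampling schemes on a toy scalar problem with loss $Q(w;x_n) = \tfrac12(w-x_n)^2$ (a stand-in of our choosing, not taken from the text) and compares the steady-state epoch-end errors; random reshuffling settles in a markedly smaller neighborhood of $w^\star$.

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(50)]   # toy data set
N = len(xs)
wstar = sum(xs) / N   # minimizer of J(w) = (1/2N) sum_n (w - x_n)^2
mu, epochs = 0.005, 4000

def run(reshuffle):
    w, msd, order = 0.0, 0.0, list(range(N))
    for k in range(epochs):
        if reshuffle:
            random.shuffle(order)   # one reshuffled epoch
        else:
            order = [random.randrange(N) for _ in range(N)]  # uniform sampling
        for i in order:
            w -= mu * (w - xs[i])   # grad of Q(w; x_i) is w - x_i
        if k >= epochs // 2:
            msd += (w - wstar) ** 2  # epoch-end squared error, steady state
    return msd / (epochs - epochs // 2)

msd_rr, msd_uni = run(True), run(False)
assert msd_rr < 0.5 * msd_uni   # RR reaches a smaller neighborhood
```

The toy model is deliberately minimal; the constants (`mu`, the data distribution, the number of epochs) are arbitrary choices for illustration only.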
Several recent works \cite{recht2012toward, gurbuzbalaban2015random, Shamir2016Without} have pursued justifications for the enhanced behavior of random reshuffling implementations over independent sampling (with replacement).
{\color{black} The work \cite{gurbuzbalaban2015random} examined the convergence rate of the learning process under diminishing step-sizes, i.e., $\mu(i)=c/i$, where $c$ is some positive constant. It analytically showed that, for strongly convex objective functions, the convergence rate under random reshuffling can be improved from $O(1/i)$ in vanilla SGD\cite{agarwal2009information} to $O(1/i^2)$.
Incremental gradient methods\cite{bertsekas1997new,gurbuzbalaban2015convergence}, which can be viewed as deterministic versions of random reshuffling, share similar conclusions, i.e., reshuffling helps accelerate the convergence rate from $O(1/i)$ to $O(1/i^2)$ under decaying step-sizes.
The work \cite{Shamir2016Without} establishes that random reshuffling will not degrade performance relative to the stochastic gradient descent implementation, provided the number of epochs is not too large.
}
In this work, we focus on a different setting than\cite{recht2012toward, gurbuzbalaban2015random, Shamir2016Without} involving random reshuffling under {\em constant} rather than decaying step-sizes. In this case, convergence
is only guaranteed to a small neighborhood of the optimizer albeit
at a linear rate. The analysis will establish analytically that random reshuffling
outperforms independent sampling (with replacement) by showing that the mean-square-error of the iterate at the end of each run in the random reshuffling strategy will be in the order of $O(\mu^2)$. This is a significant improvement over the performance of traditional stochastic gradient descent, which is $O(\mu)$~\cite{yuan2016stochastic}.
Furthermore, we derive an analytical expression for the steady-state mean-square-error performance of the algorithm, {\color{black} which is exact for quadratic risks and provides a good approximation for general risks. This helps clarify in greater detail the differences between sampling with and without replacement.}
We also explain the periodic behavior that is observed in random reshuffling implementations.
{\color{black}
\subsection{Overview of results}
\begin{itemize}
\item Section \ref{sec.xe.g} provides a stability proof, which shows that, under a constant step-size, random reshuffling converges to a small neighborhood around the minimizer. The radius of this neighborhood improves from $O(\mu)$ under uniform sampling to $O(\mu^2)$ under random reshuffling --- Theorem \ref{lemma.start.pont}.
\item Next, we examine more closely the value of the scaling constant in the $O(\mu^2)$ factor by introducing a long term model and deriving an expression for its mean-square-deviation (MSD) performance --- Theorem \ref{main.theorem}. The theorem reveals how the number of samples $N$, step-size $\mu$, and Hessian of the loss function impact performance.
\item In Theorem \ref{theorem.iterations} we provide an expression for an upper bound for the MSD performance at all points close to steady-state. The result of the theorem helps explain the periodic behavior that is observed in random reshuffling implementations.
\item The mismatch between the original random reshuffling recursion and the long-term model is quantified in Lemma \ref{lemma.mismatch}.
\item Inspired by quadratic risks, we simplify the MSD expressions in Theorems \ref{main.theorem} and \ref{theorem.iterations} by using the hyperbolic tangent function $\tanh(\cdot)$ --- equations \eqref{msd.hyperbolic} and \eqref{23gdsdse}.
\item In equations \eqref{eq.80} -- \eqref{msd.infty}, we show that as the sample size $N$ increases, the MSD expression established in Theorem \ref{main.theorem} reduces to the same expression as in the uniform sampling case.
\end{itemize}
}
\section{ANALYSIS OF THE STOCHASTIC GRADIENT UNDER RANDOM RESHUFFLING} \label{sec.xe.g}
\subsection{Properties of the Gradient Approximation}
We start by examining the properties of the stochastic gradient $\nabla_w Q({\boldsymbol{w}}^k_{i-1};x_{{\boldsymbol \sigma}^k(i)})$ under random reshuffling.
One main source of difficulty that we shall encounter in the analysis of performance under random reshuffling is the fact that a single sample of the stochastic gradient $\nabla_w Q({\boldsymbol{w}}^k_{i-1};x_{{\boldsymbol \sigma}^k(i)})$ is now a {\em biased} estimate of the true gradient and, moreover, it is no longer independent of past selections, ${\boldsymbol \sigma}^k(1: i-1)$. This is in contrast to implementations where samples are picked independently at every iteration. Indeed, note that conditioned on previously picked data and on the previous iterate, we have:
\vspace{1mm}
\begin{align}
&\hspace{-4mm}\mathbb{E}\hspace{0.05cm}\big[\nabla_w Q_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{i-1}^k)\,|\, {\boldsymbol{w}}_{i-1}^k, {\boldsymbol \sigma}^k(1\,{:}\,i-1) \big]\nonumber\\
=&\, \frac{1}{N-i+1} \sum_{n\notin{\boldsymbol \sigma}^k(1\,{:}\,i-1)}\nabla_w Q({\boldsymbol{w}}_{i-1}^k;x_n)
\nonumber\\
\neq&\, \nabla_w J({\boldsymbol{w}}_{i-1}^k)\label{f23r23bg43}
\end{align}
The difference (\ref{f23r23bg43}) is generally nonzero in view of the definition (\ref{prob-emp-into}).
For the first iteration of every epoch however, it can be verified that the following holds:
\begin{align}
\mathbb{E}\hspace{0.05cm} \left[\nabla_w Q_{{\boldsymbol \sigma}^k(1)}({\boldsymbol{w}}_0^k) \,\Big|\,{\boldsymbol{w}}_0^k \right]
\stackrel{(\ref{prop2})}{=}&\; \frac{1}{N} \sum_{n=1}^N \nabla_w Q({\boldsymbol{w}}_0^k;x_n)
\nonumber\\
\stackrel{(\ref{prob-emp-into})}{=}&\; \nabla J({\boldsymbol{w}}_0^k) \label{zero.cross_term}
\end{align}
since at the beginning of one epoch, no data has been selected yet.
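Both observations are easy to confirm by brute force for a tiny data set: averaging the first-pick gradient over all $N!$ permutations recovers the full gradient, while conditioning on an earlier pick biases the later ones. The scalar quadratic loss below is a stand-in of our own choosing.

```python
from itertools import permutations

# toy scalar loss Q(w; x) = (w - x)^2 / 2, so grad Q = w - x (our stand-in)
xs = [1.0, 2.0, 4.0, 8.0]
N, w = len(xs), 0.5
full_grad = sum(w - x for x in xs) / N

# first pick of an epoch: averaging over all N! permutations
# recovers the full gradient, as in the unbiasedness property above
perms = list(permutations(range(N)))
g1 = sum(w - xs[p[0]] for p in perms) / len(perms)
assert abs(g1 - full_grad) < 1e-12

# second pick, conditioned on sigma(1) = 0: the average runs over the
# remaining samples only and is biased away from the full gradient
g2 = sum(w - xs[n] for n in range(1, N)) / (N - 1)
assert abs(g2 - full_grad) > 1e-3
```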
Perhaps surprisingly, we will be showing that the biased construction of the stochastic gradient estimate not only does not hurt the performance of the algorithm, but instead significantly improves it. In large part, the analysis will revolve around considering the accuracy of the gradient approximation over an entire epoch, rather than focusing on single samples at a time. Recall that by construction in random reshuffling, every sample is picked once and only once over one epoch. This means that the {\em sample average} (rather than the true mean) of the gradient noise process is zero since
\eq{
\frac{1}{N}\sum_{i=1}^N {\nabla}_w Q({\boldsymbol{w}}; x_{{\boldsymbol \sigma}^k(i)}) = \nabla J({\boldsymbol{w}}) \label{grad.define}
}
for any ${\boldsymbol{w}}$ and any reshuffling order ${\boldsymbol \sigma}^k$. This property will become key in the analysis.
\subsection{Convergence Analysis}
We can now establish a key convergence and performance property for the random reshuffling algorithm, which provides solid analytical justification for its observed improved performance in practice.
To begin with, we assume that the risk function satisfies the following conditions, which are automatically satisfied by many learning problems of interest, such as mean-square-error or logistic regression analysis and their regularized versions --- see, e.g., \cite{sayed2014adaptation,sayed2014adaptive,bishop2006pattern,hastie2009elements, theo2008}.
\begin{assumption}[\sc Condition on loss function]
\label{assumption.1}
It is assumed that $Q(w;x_n)$ is differentiable and has a $\delta_n$-Lipschitz continuous gradient, i.e., for every $n=1,\ldots,N$ and any $w_1, w_2 \in {\mathbb{R}}^M$: \vspace{-0.3mm}
\eq{\label{eq-ass-cost-lc-e}
\|{\nabla}_w Q(w_1;x_n) - {\nabla}_w Q(w_2;x_n) \| \le \delta_n \|w_1-w_2\|
}
where $\delta_n > 0$. We also assume $J(w)$ is $\nu$-strongly convex:
\eq{\label{eq-ass-cost-sc-e}
\hspace{-1mm}\Big({\nabla}_w J(w_1)- {\nabla}_w J(w_2)\Big)^{\mathsf{T}} (w_1 - w_2) &\ge \nu \|w_1-w_2\|^2
}
\hfill{$\blacksquare$}
\end{assumption}
\vspace{-1mm}
\noindent If we introduce $\delta = \max\{\delta_1, \delta_2, \cdots, \delta_N\}$, then each ${\nabla}_w Q (w;x_n)$ and $\nabla_w J(w)$ are also $\delta$-Lipschitz continuous.
The following theorem focuses on the convergence of the \emph{starting} point of each epoch and establishes in (\ref{lemma.stability2}) that it actually approaches a smaller neighborhood of size $O(\mu^2)$ around $w^{\star}$. Afterwards, using this result, we also show that the same $O(\mu^2)-$performance level holds for {\em all} iterates ${\boldsymbol{w}}_i^k$ and not just for the starting points of the epochs.
To simplify the notation, we introduce the constant ${\mathcal{K}}$, which is the gradient noise variance at the optimal point $w^\star$:
\eq{
{\mathcal{K}} \;\stackrel{\Delta}{=}\; \frac{1}{N} \sum_{n=1}^N\left\|{\nabla}_w Q(w^{\star};x_n)\right\|^2 \label{gradient.noise}
}
\begin{theorem}[\sc Stability of starting points]\label{lemma.start.pont}
{\color{black} Under assumption \ref{assumption.1}, the starting point of each run in \eqref{alg.rr}, i.e., ${\boldsymbol{w}}^k_0$, satisfies}
\eq{
\limsup_{k\to\infty} \mathbb{E}\hspace{0.05cm} \|{\boldsymbol{w}}_0^k - w^\star\|^2 \leq\;& \frac{4\mu^2\delta^2N^2}{\nu^2}{\mathcal{K}}
=O(\mu^2)\label{lemma.stability2}
}
when the step-size is sufficiently small, namely, for $\color{black}\mu \leq \frac{\nu}{3\delta^2 N}$\footnote{\color{black} The proof of this theorem is based on a worst-case argument, which implies that the inequalities hold for any realization. Therefore, the proof is also applicable to the deterministic cyclic sampling case.}. {\color{black} Convergence to the steady-state regime occurs at an exponential rate, dictated by the parameter:
\eq{
\alpha \;\stackrel{\Delta}{=}\; 1- \mu\nu N/2
}}
\vspace{-2mm}
\end{theorem}
\footnotesize \begin{proof}
See Appendix \ref{proof.theorem.1}
\end{proof}
\smallskip
\noindent Having established the stability of the first point of every epoch, we can establish the stability of every point.\vspace{-1mm}
\begin{corollary}[\sc Full Stability] \label{lemma.corollary.1}
Under assumption \ref{assumption.1}, it holds that\eq{
\limsup_{k\to\infty} \mathbb{E}\hspace{0.05cm} \|{\boldsymbol{w}}_{i}^k - w^\star\|^2 =\;&O(\mu^2)\label{lemma.stability3}
}
for all $i$ when the step-size is sufficiently small.
\end{corollary}
\footnotesize \begin{proof}
See Appendix \ref{proof.corollary.1}
\end{proof}
\vspace{2mm}
{\color{black}
\noindent With Theorem \ref{lemma.start.pont} established, it is also straightforward to obtain a convergence result under decaying step-sizes.\vspace{-1mm}
\begin{corollary}[\sc Convergence under decaying step-sizes] \label{lemma.corollary.2}
Under assumption \ref{assumption.1}, if the decaying step-size $\mu(i) = {c}/{(i+1)}$ is employed, then the iterates ${\boldsymbol{w}}^k_i$ converge to the minimizer $w^\star$ as $i\to\infty$ at an $O(1/i^2)$ rate.\vspace{-2mm}
\end{corollary}
\footnotesize \begin{proof}
See Appendix \ref{proof.corollary.2}
\end{proof}
}
\section{ILLUSTRATING BEHAVIOR AND PERIODICITY}\label{sec.illus}
In this section we illustrate the theoretical findings so far by numerical simulations. We consider the following logistic regression problem: \vspace{-3mm}
\eq{\label{xcn23bh}
\min_w\quad J(w) = \frac{1}{N}\sum_{n=1}^{N} Q(w;h_n,\gamma(n)),
}
where $h_n\in{\mathbb{R}}^M$ is the feature vector, $\gamma(n)\in \{\pm1\}$ is the scalar label, and
\eq{\label{xn8}
Q(w;h_n,\gamma(n)) \;\stackrel{\Delta}{=}\; \rho \|w\|^2 + \ln\left(1+\exp(-\gamma(n) h_n^{\mathsf{T}} w)\right).
}
The constant $\rho$ is the regularization parameter. In the first simulation, we compare the performance of the standard stochastic gradient descent (SGD) algorithm (\ref{alg:esgd-intr}) with replacement and the random reshuffling (RR) algorithm \eqref{alg.rr}. We set $N=1000$ and $M=10$. Each
$h_{n}$ is generated from the normal distribution ${\mathcal{N}}(0, \Lambda_M)$, where $\Lambda_M$ is a diagonal matrix with each diagonal entry generated from the uniform distribution ${\mathcal{U}}(1,10)$. To generate $\gamma(n)$, we first generate an auxiliary random vector $w_0\in \mathbb{R}^{M}$ with each entry following ${\mathcal{N}}(0,1)$. Next, we generate ${\boldsymbol{u}}(n)$ from a uniform distribution ${\mathcal{U}}(0,1)$. If ${\boldsymbol{u}}(n) \le 1/(1+\exp(-h_{n}^{\mathsf{T}} w_0))$ then $\gamma(n)$ is set to $+1$; otherwise $\gamma(n)$ is set to $-1$. We set $\rho=0.1$ in all simulations. Figure \ref{fig:sgd_vs_rr_1en2} illustrates {\color{black}the mean-square-deviation (MSD) performance, i.e., $\mathbb{E}\hspace{0.05cm}\|{\boldsymbol{w}}_0^k-w^\star\|^2$,} of the SGD and RR algorithms when $\mu = 0.003$. It is observed that the RR algorithm oscillates during the steady-state regime, and that the MSD at ${\boldsymbol{w}}_0^k$ is the smallest among all iterates $\{{\boldsymbol{w}}_i^k\}_{i=1}^{N-1}$ during epoch $k$. It is also observed that RR has better MSD performance than SGD. Similar observations occur in Fig. \ref{fig:sgd_vs_rr_1en3}, where $\mu=0.0003$. It is worth noting that the gap between SGD and RR is much larger in Fig. \ref{fig:sgd_vs_rr_1en3} than in Fig. \ref{fig:sgd_vs_rr_1en2}.\vspace{-3mm}
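The experiment just described can be sketched in a few dozen lines. The version below is a scaled-down stand-in (our choices: $N=100$, $M=5$, seeded pseudo-random data, a full-batch gradient descent loop to locate $w^\star$); it is meant only to reproduce the qualitative gap between RR and uniform sampling, not the exact figures.

```python
import math, random

random.seed(0)
N, M, rho, mu = 100, 5, 0.1, 0.003

# data generated as described above (diagonal covariance, entries in U(1,10))
lam = [random.uniform(1.0, 10.0) for _ in range(M)]
h = [[random.gauss(0.0, math.sqrt(l)) for l in lam] for _ in range(N)]
w0 = [random.gauss(0.0, 1.0) for _ in range(M)]
y = [1.0 if random.random() <= 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(hn, w0))))
     else -1.0 for hn in h]

def grad(w, n):
    # gradient of Q(w; h_n, gamma(n)) = rho*||w||^2 + ln(1 + exp(-gamma(n) h_n^T w))
    s = y[n] * sum(a * b for a, b in zip(h[n], w))
    c = -y[n] / (1.0 + math.exp(s))
    return [2.0 * rho * wi + c * hi for wi, hi in zip(w, h[n])]

# locate w* by full-batch gradient descent
wstar = [0.0] * M
for _ in range(5000):
    g = [0.0] * M
    for n in range(N):
        g = [a + b for a, b in zip(g, grad(wstar, n))]
    wstar = [wi - 0.1 * gi / N for wi, gi in zip(wstar, g)]

def run(reshuffle, epochs=3000):
    w, msd, order = [0.0] * M, 0.0, list(range(N))
    for k in range(epochs):
        if reshuffle:
            random.shuffle(order)
        else:
            order = [random.randrange(N) for _ in range(N)]
        for n in order:
            w = [wi - mu * gi for wi, gi in zip(w, grad(w, n))]
        if k >= epochs // 2:
            msd += sum((a - b) ** 2 for a, b in zip(w, wstar))
    return msd / (epochs - epochs // 2)

msd_rr, msd_uni = run(True), run(False)
assert msd_rr < msd_uni   # RR settles in a smaller neighborhood
```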
\begin{figure}[htb]
\centering
\includegraphics[scale=0.35]{sgd_vs_rr_stepsize_3en3-eps-converted-to.pdf}\vspace{-2mm}
\caption{\small{RR has better mean-square-deviation (MSD) performance, i.e., $\mathbb{E}\hspace{0.05cm}\|{\boldsymbol{w}}_0^k-w^\star\|^2$, than standard SGD when $\mu = 0.003$. The dotted black curve is drawn by connecting the MSD performance at the starting points of the successive epochs.}}\vspace{-2mm}
\label{fig:sgd_vs_rr_1en2}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.35]{3en4-eps-converted-to.pdf}\vspace{-1mm}
\caption{\small{RR has much better MSD performance, i.e., $\mathbb{E}\hspace{0.05cm}\|{\boldsymbol{w}}_0^k-w^\star\|^2$, than standard SGD when $\mu = 0.0003$. The dotted black curve is drawn by connecting the MSD performance at the starting points of the successive epochs.}}\vspace{-2mm}
\label{fig:sgd_vs_rr_1en3}
\end{figure}
Next, in the second simulation we verify the conclusion that the MSD for the starting point of each epoch for the random reshuffling algorithm, i.e., ${\boldsymbol{w}}_0^k$, can achieve $O(\mu^2)$ instead of $O(\mu)$. We still consider the regularized logistic regression problem \eqref{xcn23bh} and \eqref{xn8}, and the same experimental setting.
Recall that in Theorem \ref{lemma.start.pont}, we proved that
\begin{align}
\limsup_{k\to\infty} \mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{0}^k\|^2
\leq&\,O(\mu^2),
\end{align}
where $\widetilde{{\boldsymbol{w}}}_{0}^k \,\stackrel{\Delta}{=}\, w^\star - {\boldsymbol{w}}_{0}^k$ denotes the error vector. This indicates that when $\mu$ is reduced by a factor of $10$, the MSD performance $\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{0}^k\|^2$ should be improved by at least $20$ dB. We observe a decay of about $20$ dB per decade in Fig. \ref{fig:msd_vs_stepsize_20} for a logistic regression problem with $N=25$ data points and $30$ dB per decade in Fig. \ref{fig:msd_vs_stepsize_30} with $N=1000$.
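The same step-size scaling can be reproduced on a toy model: the sketch below runs random reshuffling on a scalar quadratic loss $Q(w;x_n)=\tfrac12(w-x_n)^2$ (our stand-in, not the logistic setup of the figures) with two step-sizes a factor of $10$ apart and checks that the steady-state epoch-start MSD improves by well over a factor of $20$; as in the figures, the observed slope may exceed $20$ dB per decade.

```python
import random

def rr_msd(mu, epochs, seed=1):
    random.seed(seed)
    xs = [random.gauss(0.0, 1.0) for _ in range(50)]
    N = len(xs)
    wstar = sum(xs) / N
    w, msd, order = wstar, 0.0, list(range(N))  # start at wstar: shorter burn-in
    for k in range(epochs):
        random.shuffle(order)
        for i in order:
            w -= mu * (w - xs[i])
        if k >= epochs // 2:                    # steady-state epoch-end MSD
            msd += (w - wstar) ** 2
    return msd / (epochs - epochs // 2)

m1 = rr_msd(0.004, 2000)
m2 = rr_msd(0.0004, 20000)
assert m2 < m1 / 20   # MSD improves by at least 20x when mu shrinks by 10x
```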
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.36]{MSD_VS_Stepsize_20.pdf}\vspace{-3mm}
\caption{\small{Mean-square-deviation performance at steady-state versus the step size for a logistic problem involving $N=25$ data points. The slope is around $20$ dB per decade.}}\vspace{-3mm}
\label{fig:msd_vs_stepsize_20}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.36]{MSD_VS_Stepsize_30.pdf}\vspace{-3mm}
\caption{\small{Mean-square-deviation performance at steady-state versus the step size for a logistic problem involving $N=1000$ data points. The slope is around $30$ dB per decade.}}
\label{fig:msd_vs_stepsize_30}
\end{figure}
\smallskip
\section{INTRODUCING A LONG-TERM MODEL}
We proved in the earlier sections that the mean-square error under random reshuffling approaches a small $O(\mu^2)-$neighborhood around the minimizer. Our objective now is to assess more accurately the size of the constant that multiplies $\mu^2$ in the $O(\mu^2)$ result, and examine how this constant may depend on various parameters including the amount of data, $N$, and the form of the loss function $Q$.
To do that, we proceed in two steps. First, we introduce an auxiliary long-term model in (\ref{eq.long_term.rec}) below and subsequently determine how far the performance of this model is from the original system described by \eqref{eq.error_expand} further ahead.\vspace{-3mm}
\subsection{Error Dynamics}\label{subsec.error}
In order to quantify the performance of the random reshuffling implementation more accurately than the $O(\mu^2)-$figure obtained earlier, we will need to impose a condition on the smoothness of the Hessian matrix of the risk function.
\begin{assumption}[\sc Hessian is Lipschitz continuous]
\label{assumption.2}
The risk function $J(w)$ has a Lipschitz continuous Hessian matrix, i.e., there exists a constant $\kappa\geq0$, such that
\begin{equation}
\|\nabla^2_w J(w_1) - \nabla^2_w J(w_2) \| \leq \kappa \|w_1-w_2\|
\label{hessian.lipschitz}
\end{equation}\hfill{$\blacksquare$}
\end{assumption}
\vspace{-1mm}
\noindent Under this assumption, the gradient vector $\nabla_w J(w)$ admits the Taylor expansion \cite[p.~378]{sayed2014adaptation}:
\begin{equation}
\nabla_w J(w) = \nabla^2_w J(w^\star) (w-w^\star) + \xi(w),\;\;\;\; \forall w
\end{equation}
where the residual term satisfies:
\begin{equation}
\|\xi(w)\| \leq \frac{\kappa}{2}\|w-w^\star\|^2 \label{3h98u.ni}
\end{equation}
\noindent
As such, we can rewrite algorithm (\ref{alg.rr}) in the form: \begin{align}
\widetilde{\boldsymbol{w}}_i^k =& \widetilde{\boldsymbol{w}}_{i-1}^k + \mu \nabla_w J({\boldsymbol{w}}_{i-1}^k) \nonumber\\
&\;\;+ \mu\Big({\nabla}_w Q({\boldsymbol{w}}_{i-1}^k;x_{{\boldsymbol \sigma}^k(i)})- \nabla_w J({\boldsymbol{w}}_{i-1}^k)\Big)\nonumber\\
=& \widetilde{\boldsymbol{w}}_{i-1}^k - \mu \nabla^2_w J({\boldsymbol{w}}^\star) \widetilde{\boldsymbol{w}}_{i-1}^k + \mu\xi({\boldsymbol{w}}_{i-1}^k)\nonumber\\
&\;\;+\mu\Big({\nabla}_w Q({\boldsymbol{w}}_{i-1}^k;x_{{\boldsymbol \sigma}^k(i)})- \nabla_w J({\boldsymbol{w}}_{i-1}^k)\Big)\label{231398u.123932}
\end{align}
To ease the notation, we introduce the Hessian matrix $H$ and the gradient noise process:
\eq{
H \;\stackrel{\Delta}{=}\;& \nabla^2_w J(w^\star) \nonumber\\
s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{i-1}^k)\;\stackrel{\Delta}{=}\;&{\nabla}_w Q({\boldsymbol{w}}_{i-1}^k;x_{{\boldsymbol \sigma}^k(i)})- \nabla_w J({\boldsymbol{w}}_{i-1}^k)
}
so that \eqref{231398u.123932} is simplified as:
\eq{
\widetilde{\boldsymbol{w}}_i^k =(I-\mu H) \widetilde{\boldsymbol{w}}_{i-1}^k + \mu\xi({\boldsymbol{w}}_{i-1}^k) + \mu s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{i-1}^k)
\label{231398u.1239}
}
Now property \eqref{zero.cross_term} motivates us to expand (\ref{231398u.1239}) into the following error recursion by adding and subtracting the same gradient noise term evaluated at ${\boldsymbol{w}}_0^k$:
\begin{align}
\widetilde{{\boldsymbol{w}}}_i^k =& (I-\mu H)\widetilde{{\boldsymbol{w}}}_{i-1}^k + \mu
s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{0}^k)\nonumber\\ \;\;&\;\;\;{}+\underbrace{\mu \big(
s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{i-1}^k)- s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{0}^k) \big)}_{\rm noise\ mismatch} + \mu\xi({\boldsymbol{w}}_{i-1}^k)
\label{eq.origin.onestep}
\end{align}
Iterating (\ref{eq.origin.onestep}) and using (\ref{bound.condtion}) we can establish the following useful relation, which we call upon in the sequel:
\begin{align}
\widetilde{{\boldsymbol{w}}}_0^{k+1}
=&\;\; (I-\mu H)^N \widetilde{{\boldsymbol{w}}}_{0}^k + \mu \sum^N_{i=1} ( I - \mu H )^{N-i} s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_0^k) \nonumber\\
&\;\; + \mu \sum^N_{i=1} ( I - \mu H )^{N-i} \left(s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{i-1}^k)- s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{0}^k)\right) \nonumber\\
&\;\; + \mu \sum^N_{i=1} ( I - \mu H )^{N-i} \xi({\boldsymbol{w}}_{i-1}^k)
\label{eq.error_expand}
\end{align}
Note that recursion (\ref{eq.error_expand}) relates $\widetilde{{\boldsymbol{w}}}_0^k$ to $\widetilde{{\boldsymbol{w}}}_0^{k+1}$, which are the starting points of two successive epochs. In this way, we have now transformed recursion (\ref{alg.rr}), which runs from one sample to another within the same epoch, into a relation that runs from one starting point to another over two successive epochs.
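The telescoping step from the one-step recursion to the epoch-level relation can be checked numerically. The scalar Python sketch below uses arbitrary placeholder sequences for the noise and residual terms (our own choices, purely for illustration):

```python
mu, h, N = 0.1, 2.0, 8
w0 = 1.7
xi = [0.3 * (-1) ** i / i for i in range(1, N + 1)]   # placeholder residual sequence
s  = [0.5 * (i % 3 - 1) for i in range(1, N + 1)]     # placeholder noise sequence

# iterate the one-step recursion  w~_i = (1 - mu*h) w~_{i-1} + mu*xi_i + mu*s_i
w = w0
for i in range(N):
    w = (1 - mu * h) * w + mu * xi[i] + mu * s[i]

# closed-form epoch relation: (1-mu*h)^N w~_0 + mu * sum_i (1-mu*h)^(N-i)(s_i + xi_i)
closed = (1 - mu * h) ** N * w0 + mu * sum(
    (1 - mu * h) ** (N - 1 - i) * (s[i] + xi[i]) for i in range(N)
)
assert abs(w - closed) < 1e-12
```

The same telescoping argument holds verbatim in the matrix case with $1-\mu h$ replaced by $I-\mu H$.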
To proceed, we will ignore the last two terms in (\ref{eq.error_expand}) and consider the following approximate model, which we shall refer to as a {\em long-term} model.
\begin{equation}
\widetilde{{\boldsymbol{w}}}_0^{\prime k+1}
= (I-\mu H)^N\widetilde{{\boldsymbol{w}}}_{0}^{\prime k} \color{black}+\color{black} \mu\underbrace{\sum_{i=1}^{N}(I-\mu H)^{N-i} s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{0}^k)}_{\;\stackrel{\Delta}{=}\; s'({\boldsymbol{w}}_0^k)} \label{eq.long_term.rec}
\end{equation}
Obviously, the state evolution will be different than (\ref{eq.error_expand}) and is therefore denoted by the prime notation, $\widetilde{{\boldsymbol{w}}}_0^{\prime k}$. Observe, however, that in model (\ref{eq.long_term.rec}) the gradient noise process is still being evaluated at the original state vector, ${\boldsymbol{w}}_0^k$, and not at the new state vector, ${\boldsymbol{w}}_0^{\prime k}$.
\subsection{Performance of the Long-Term Model across Epochs}\label{subsec.perform.epoch}
Note that the gradient noise ${\boldsymbol{s}}'({\boldsymbol{w}}_0^k)$ in (\ref{eq.long_term.rec}) has the form of a weighted sum over one epoch. This noise clearly satisfies the property:
\begin{align}
\mathbb{E}\hspace{0.05cm} [\,s'({\boldsymbol{w}}_0^k) \,|\, {\boldsymbol{w}}_0^k\,]=&\; 0 \label{noise.zero}
\end{align}
We also know that $s'({\boldsymbol{w}}_0^k)$ satisfies a Markov-type property: conditioned on ${\boldsymbol{w}}_0^k$, it is independent of all previous ${\boldsymbol{w}}^{k'}_i$ and ${\boldsymbol \sigma}^{k'}(\cdot)$ with $k'<k$.
To motivate the next lemma consider the following auxiliary setting.
Assume we have a collection of $N$ vectors $\{x_i\}$ in ${\mathbb{R}}^2$ whose sum is zero. We define a random walk over these vectors in the following manner. At each time instant, we select a random vector $x_{{\boldsymbol{n}}_i}$ uniformly and with replacement from this set and move from the current location along the vector $x_{{\boldsymbol{n}}_i}$ to the next location. If we keep repeating this construction, we obtain behavior that is represented by the right plot in Fig. 5. Assume instead that we repeat the same experiment except that now we assume the data $\{x_i\}$ is first reshuffled and then vectors $x_{{\boldsymbol \sigma}(i)}$ are selected uniformly without replacement. Because of the zero sum property, and because sampling is now performed without replacement, we find that in this second implementation we always return to the origin after $N$ selections. This situation is illustrated in the left plot of the same Fig. \ref{fig:random_walk}. The next lemma considers this scenario and provides useful expressions that allow us to estimate the expected location after $1, 2$ or more (unitl $N-1$) movements. These results will be used in the sequel in our analysis of the performance of stochastic learning under RR.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.35]{random_walk.pdf}
\caption{\small{Random walk versus random reshuffling walk. Lines of the same color represent the $i$-th move of the walk across different epochs.}}
\label{fig:random_walk}
\end{figure}
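The zero-sum return property described above is easy to verify numerically. In the Python sketch below, the step vectors are synthetic (drawn at random and re-centered to sum to zero); only the reshuffled walk is guaranteed to return to the origin:

```python
import random

random.seed(1)
N = 10
steps = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N - 1)]
steps.append((-sum(s[0] for s in steps), -sum(s[1] for s in steps)))  # enforce zero sum

# Reshuffled walk: sampling without replacement returns to the origin after N moves
order = list(steps)
random.shuffle(order)
pos = (0.0, 0.0)
for sx, sy in order:
    pos = (pos[0] + sx, pos[1] + sy)
assert abs(pos[0]) < 1e-9 and abs(pos[1]) < 1e-9

# With-replacement walk: endpoints generically stay away from the origin
endpoints = []
for _ in range(5):
    p = (0.0, 0.0)
    for _ in range(N):
        sx, sy = random.choice(steps)
        p = (p[0] + sx, p[1] + sy)
    endpoints.append(abs(p[0]) + abs(p[1]))
assert max(endpoints) > 1e-6
```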
\begin{lemma} \label{lemma.2}
Suppose we have a set of $N$ vectors $X = \{x_i\}^N_{i=1}$ with the constraint
\(
\sum_{i=1}^{N} x_i = 0
\). Assume the elements of $X$ are randomly reshuffled and then selected uniformly without replacement. Let $\beta$ be any nonnegative constant, $B$ be any symmetric positive semi-definite matrix, and introduce \eq{
R_x\;\stackrel{\Delta}{=}\;&\frac{1}{N}\sum_{i=1}^N x_ix_i^{\mathsf{T}}\\
{\rm Var}(X) \;\stackrel{\Delta}{=}\;& \frac{1}{N} \sum_{i=1}^N\|x_i\|^2 = \mbox{\rm {\small Tr}}(R_x)
}
Define the following functions for any $1\leq n\leq N$:
\eq{
f(n;X,\!\beta) \;\stackrel{\Delta}{=}\;& \mathbb{E}\hspace{0.05cm} \left\|\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}\right\|^2\\
F(n;X,B) \;\stackrel{\Delta}{=}\; & \mathbb{E}\hspace{0.05cm} \left[\sum_{j=1}^nB^{n-j} x_{{\boldsymbol \sigma}(j)}\right]\left[\sum_{j=1}^n x_{{\boldsymbol \sigma}(j)}^{\mathsf{T}} B^{n-j}\right]
}
It then holds that \eq{
f(n;X,\!\beta)=&\, \frac{(\sum_{i=0}^{n-1}\beta^{2i})N-(\sum_{i=0}^{n-1}\beta^i)^2}{N-1}{\rm Var}(X) \label{eq.lemma2}\\
F(n;X,B)=& \frac{\left[\displaystyle\sum_{i=0}^{n-1}B^i R_xB^i\right]\!\!N \!-\! \left[\displaystyle\sum_{i=0}^{n-1}B^i\right] \!R_x \!\left[\displaystyle\sum_{i=0}^{n-1}B^i\right] }{N-1}
\label{eq.lemma2.2}
}
\end{lemma}
\begin{proof}
The proof is provided in Appendix \ref{app.noise}.
\end{proof}\vspace{2mm}
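For small $N$, the expectation defining $f(n;X,\beta)$ can be computed exactly by enumerating all permutations, which gives a direct check of \eqref{eq.lemma2}. The Python sketch below does this for a toy zero-sum scalar data set (our own choice of values):

```python
import itertools

# Zero-sum scalar "vectors" (M = 1), small N so we can enumerate
# every permutation and compute the expectation exactly.
X = [1.0, -2.0, 3.0, -2.0]            # sums to zero
N = len(X)
var_x = sum(x * x for x in X) / N     # Var(X) = Tr(R_x)
beta = 0.7

def f_exact(n):
    """E || sum_{j=1}^n beta^(n-j) x_{sigma(j)} ||^2 over all permutations."""
    perms = list(itertools.permutations(X))
    total = 0.0
    for p in perms:
        s = sum(beta ** (n - j) * p[j - 1] for j in range(1, n + 1))
        total += s * s
    return total / len(perms)

def f_formula(n):
    # right-hand side of the lemma's expression for f(n; X, beta)
    a = sum(beta ** (2 * i) for i in range(n))
    b = sum(beta ** i for i in range(n))
    return (a * N - b * b) / (N - 1) * var_x

for n in range(1, N + 1):
    assert abs(f_exact(n) - f_formula(n)) < 1e-9
```

For instance, at $n=1$ both sides equal ${\rm Var}(X)$, and at $n=N$ with $\beta=1$ both sides vanish, reflecting the return-to-origin property.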
We now return to the stochastic gradient implementation under random reshuffling. Recall from \eqref{grad.define} that the stochastic gradient satisfies the zero sample mean property so that
\eq{
\sum_{i=1}^N s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}) = 0
}
at any given point ${\boldsymbol{w}}$. Applying Lemma \ref{lemma.2}, we readily conclude that
\begin{align}
&\hspace{-5mm}\mathbb{E}\hspace{0.05cm} [s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}}\,|\,{\boldsymbol{w}}_0^k]\nonumber\\
=&\; \frac{N\left(\sum_{i=0}^{N-1} (I-\mu H)^iR_s^k(I-\mu H)^i\right)}{N-1}\nonumber\\
&\;\; {}- \frac{\big[\sum_{i=0}^{N-1}(I-\mu H)^i\big] R_s^k \big[\sum_{i=0}^{N-1}(I-\mu H)^i\big]}{N-1} \label{agg.noise.result}
\end{align}
where
\begin{equation}
R_s^k \;\stackrel{\Delta}{=}\;\frac{1}{N} \sum_{n=1}^N {\boldsymbol{s}}_n({\boldsymbol{w}}_0^k) {\boldsymbol{s}}_n({\boldsymbol{w}}_0^k)^{\mathsf{T}}
\end{equation}
Similarly, we conclude for the gradient noise at the optimal $w^\star$:
\eq{
R_s^{\prime \star}\;\stackrel{\Delta}{=}\;&\mathbb{E}\hspace{0.05cm} [s'({\boldsymbol{w}}^\star)s'({\boldsymbol{w}}^\star)^{\mathsf{T}}]\nonumber\\
=&\; \frac{N\left(\sum_{i=0}^{N-1} (I-\mu H)^iR^\star_s(I-\mu H)^i\right)}{N-1}\nonumber\\
&\; {}- \frac{\big[\sum_{i=0}^{N-1}(I-\mu H)^i\big] R^{ \star}_s \big[\sum_{i=0}^{N-1}(I-\mu H)^i\big]}{N-1} \label{eq.r_s.star}
}
where \eq{ R^\star_s = &\; \frac{1}{N}\sum_{n=1}^N \nabla Q(w^{\star}; x_n)\nabla Q(w^{\star}; x_n)^{\mathsf{T}}}
\begin{theorem}[\sc Performance of Long-term Model across Epochs] \label{main.theorem}
Under assumptions \ref{assumption.1} and \ref{assumption.2}, when the step size $\mu$ is sufficiently small, {\color{black} namely, for $\mu\leq{1}/{\delta}$,} the mean-square-deviation (MSD) of the long term model (\ref{eq.long_term.rec}) is given by
\begin{align}
{\rm MSD}_{\rm RR}^{\rm lt}\;\stackrel{\Delta}{=}\;& \limsup_{k\to\infty}\mathbb{E}\hspace{0.05cm}\| {\boldsymbol{w}}^{\prime k}_0 - w^\star\|^2\nonumber\\
=\,& \mu^2 \mbox{\rm {\small Tr}}\left((I- (I-\mu H)^{2N})^{-1}R^{\prime \star}_s\right)+ O(\mu^4)\label{eq.msd.reshuffle}
\end{align}
{\color{black} The convergence to steady-state regime occurs at an exponential rate, dictated by the parameter:
\eq{
\alpha \;\stackrel{\Delta}{=}\; (1 - \mu \lambda_{\min}(H))^{2N} \approx 1 - 2\mu \lambda_{\min}(H) N
}}\vspace{-5mm}
\end{theorem}
\begin{proof}See Appendix \ref{app.main.theorem}.
\end{proof}
The simulations in Fig. \ref{fig:rr_lr} show that the MSD expression (\ref{eq.msd.reshuffle}) fits the performance of the original random reshuffling algorithm well. We will establish this fact analytically in the sequel. For now, the simulation simply confirms that the performance of the long-term model is a good indication of the performance of the original stochastic gradient implementation under RR. \vspace{-3mm}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.37]{reshuffle_lr.pdf}\vspace{-3mm}
\caption{\small{Mean-square-deviation performance of the random reshuffling algorithm for a least-mean-squares cost function.}}\vspace{-3mm}
\label{fig:rr_lr}
\end{figure}
\subsection{Performance of the Long-Term Model over Iterations} \label{subsec.perform.time}
In the previous section we examined the performance of the long-term model at the starting points of successive epochs. In this section, we examine the performance of the same model at any iterate ${\boldsymbol{w}}_i^k$ as time approaches $\infty$. This analysis will help explain the oscillations that are observed in the learning curves in the simulations. {\color{black} First, similar to \eqref{eq.r_s.star}, we need to determine the covariance matrix $R^{\prime\star}_{s,i}$ for any $i$. From Lemma \ref{lemma.2}, we immediately get that }
\eq{
R^{\prime\star}_{s,i}\;\stackrel{\Delta}{=}\;&\mathbb{E}\hspace{0.05cm} s_i'(w^\star) s_i'(w^\star)^{\mathsf{T}} \nonumber\\
=&\; \frac{N\left(\sum_{j=0}^{i-1} (I-\mu H)^jR_s^\star(I-\mu H)^j\right)}{N-1}\nonumber\\
&\;\; {}- \frac{\big[\sum_{j=0}^{i-1}(I-\mu H)^j\big] R_s^\star \big[\sum_{j=0}^{i-1}(I-\mu H)^j\big]}{N-1}
}
{\color{black}
\begin{theorem}[\sc Performance Upper-Bound for Long-Term Model]\label{theorem.iterations}
Under assumptions \ref{assumption.1} and \ref{assumption.2}, when the step size $\mu$ satisfies $\mu\leq\frac{2}{\delta+\nu}$, an upper bound on the mean-square-deviation (MSD) of the long-term model (\ref{eq.long_term.rec}) at all iterations is given by
\eq{
&\hspace{-8mm}\lim_{k\to\infty}\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{i}^{\prime k}\|^2\nonumber\\
\leq&(1-\mu\nu)^{2i}\mu^2\mbox{\rm {\small Tr}}\left(\big(I-(I-\mu H)^{2N}\big)^{-1} R_{s}^{\prime \star}\right)\nonumber\\
& +\Big(1\!-\!(1\!-\!\mu\nu)^{2i}\Big)\mu^2\mbox{\rm {\small Tr}}\left(\big(I\!-\!(I\!-\!\mu\nu)^{2i}\big)^{-1} R_{s,i}^{\prime \star}\right) \label{2389g.cs}\\
\;\stackrel{\Delta}{=}\;& \eta_i {\rm MSD}_{\rm RR}^{\rm lt} + (1-\eta_i) {\rm MSD}_{\rm RR,i}^{\rm lt} \label{23x.gewsd}
}
\end{theorem}
\footnotesize \begin{proof} See Appendix \ref{app.theorem.iterations}.\end{proof}
}
We point out that, unlike \eqref{eq.msd.reshuffle}, expression \eqref{2389g.cs} is an upper bound rather than an exact performance expression. Still, this bound provides useful insight into the periodic behavior observed in the simulations. The right-hand side of \eqref{2389g.cs} is a convex combination of two performance measures, as defined in \eqref{23x.gewsd}, where the second term is always larger than the first but approaches it as $i$ increases towards $N$. This behavior will become clearer later in the context of an example and the hyperbolic representation in Section \ref{subsec.hyper}.
{\color{black}
Before we continue, we comment on the convergence curve under random reshuffling. Unlike the convergence curve under uniform sampling, we observe periodic fluctuations under random reshuffling in Figures \ref{fig:sgd_vs_rr_1en3} and \ref{fig:rr_lr}. The main reason for this behavior is that the gradient noise is no longer i.i.d. in steady-state. Specifically, the noise variance is now a function of the iterate, and it assumes its lowest values at the beginning and end of every epoch. In Lemma \ref{lemma.2}, we derived the variance of the random walk process induced by random reshuffling at each iteration $n$ in Eq. \eqref{eq.lemma2}. We plot this function for $N=20$ and ${\rm Var}(X)=1$ in Fig.~\ref{fig:var_func}. Since the mean-square performance of the algorithm is tied to the variance of the gradient noise, this bell-shaped behavior is reflected in the MSD curve as well, resulting in better performance at the beginning and end of every epoch.
\vspace{-3mm}
\begin{figure}[htb]
\centering
\includegraphics[scale=0.43]{variance_func.pdf}\vspace{-2mm}
\caption{\small{The variance function $f(n;X,\beta)$ from \eqref{eq.lemma2} versus $n$ for different values of $\beta$.}}\vspace{-3mm}
\label{fig:var_func}\vspace{-3mm}
\end{figure}
}
{\color{black}
\subsection{Mismatch Bound}
Now we provide an upper bound on the mismatch between the long-term model (\ref{eq.long_term.rec}) and the original algorithm (\ref{alg.rr}).
\begin{lemma}[\sc Mismatch Bound] \label{lemma.mismatch}
For large enough $k$, i.e., $k\gg 1$, the difference between the long-term model trajectory (\ref{eq.long_term.rec}) and the original trajectory (\ref{alg.rr}) satisfies
\begin{align}
\limsup_{k\to\infty} &\;\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_0^{\prime k} - \widetilde{{\boldsymbol{w}}}_0^{k}\|^2 \leq \frac{4\mu^2\delta^2N^2}{\nu^2(N-1)}{\mathcal{K}} +O(\mu^3)
\label{eq.mismatch}
\end{align}
\end{lemma}
{\bf Proof:}
See Appendix \ref{mismatch.proof}.\hfill \hfill{$\blacksquare$}
}
\smallskip
\section{QUADRATIC RISKS AND HYPERBOLIC REPRESENTATION}
{\color{black} Lastly,} we consider an example involving a quadratic (least-squares) risk to show that, in this case, the long-term model provides the exact MSD for the original algorithm. The analysis will also provide some insights into expression (\ref{eq.msd.reshuffle}). {\color{black} It also motivates a hyperbolic representation for the MSD, which helps provide further insight into the MSD behavior.}
\subsection{Quadratic Risks}
Consider the following quadratic risk function:\vspace{-1mm}
\begin{equation}
\min_{w} J(w) = \frac{1}{2N} \sum_{n=1}^N\|Aw-x_n\|^2
\end{equation}
where $A$ has full column rank. We have:
\eq{
\nabla_w J(w) = &\, A^{\mathsf{T}} A w - A^{\mathsf{T}}\underbrace{\left(\frac{1}{N}\sum_{n=1}^N x_n\right)}_{\;\stackrel{\Delta}{=}\; \bar{x}}\\[-0.5mm]
\nabla_w Q(w; x_n) =&\,A^{\mathsf{T}} A w -A^{\mathsf{T}} x_n\\
\nabla^2 J(w_{i}^k) =&\, A^{\mathsf{T}} A\\
s_n(w)
=&\, A^{\mathsf{T}} (x_n - \bar{x})
}
Since the gradient noise $s_n({\boldsymbol{w}})$ is independent of ${\boldsymbol{w}}$, we have
\eq{
s_n({\boldsymbol{w}}_i^k)-s_n({\boldsymbol{w}}_0^k) \equiv 0
}
Moreover, since the risk is quadratic, it also holds that \begin{equation}\xi(w)\equiv 0\end{equation}
Therefore, the long-term model is exactly the same as the original algorithm. For this example, we can calculate the following quantities:
\eq{
w^\star =&\, (A^{\mathsf{T}} A)^{-1} A^{\mathsf{T}} \bar{x}\\
R_s^\star=&\,A^{\mathsf{T}} \frac{1}{N}\sum_{n=1}^N(x_n - \bar{x})(x_n - \bar{x})^{\mathsf{T}} A=A^{\mathsf{T}} R_{xx} A\\
{\rm Var}(x) = & \frac{1}{N}\sum_{n=1}^N \|x_n -\bar{x}\|^2\,\\
I-\mu H =&\, I - \mu A^{\mathsf{T}} A
}
In the special case where the columns of $A$ are orthonormal, i.e., $A^{\mathsf{T}} A = I$, we can simplify the MSD expression \eqref{eq.msd.reshuffle} by noting that
\eq{
&\hspace{-4mm}R_s^{\prime \star}\nonumber\\[-2mm]
=&\, \frac{1}{N-1} \!\left(\!N\sum_{i=0}^{N-1} (1-\mu)^{2i} - \Big(\sum_{i=0}^{N-1} (1-\mu)^{i}\Big)^2\right)\!A^{\mathsf{T}} R_{xx}A
\nonumber\\
=&\,\frac{1}{N-1}\!\!\left(\!\frac{N(1-(1\!-\!\mu)^{2N})}{2\mu-\mu^2} - \frac{\left(1 -(1\!-\!\mu)^{N}\right)^2}{\mu^2}\right)\!A^{\mathsf{T}} R_{xx}A
}
and, hence,
\eq{
{\rm MSD}_{\rm RR} =&\,\mu^2\mbox{\rm {\small Tr}}\left((1 - (1-\mu )^{2N})^{-1} R^{\prime \star}_s\right)\nonumber\\
=&\, \frac{\mu^2}{N-1}\left(\frac{N}{2\mu-\mu^2}- \frac{(1 -(1-\mu)^{N})^2}{\mu^2(1 -(1-\mu)^{2N})}\right){\rm Var}(x)\nonumber\\
=&\,\frac{\mu^2}{N-1}\left(\frac{N}{2\mu-\mu^2}- \frac{1 -(1-\mu)^{N}}{\mu^2(1 +(1-\mu)^{N})}\right){\rm Var}(x)}
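This closed-form expression can be validated against a direct simulation of random reshuffling, since the long-term model is exact in the quadratic case. The Python sketch below uses a toy scalar data set (scalar $A=1$; the data, step size, and tolerance are our own illustrative choices, not the paper's experimental setup):

```python
import random

random.seed(0)
N, mu = 10, 0.05
data = [float(n) for n in range(N)]        # toy data; w* is the sample mean
wstar = sum(data) / N
var_x = sum((x - wstar) ** 2 for x in data) / N

# closed-form MSD for the quadratic case (scalar, A = 1)
b = 1.0 - mu
msd_theory = (mu ** 2 / (N - 1)) * (
    N / (2 * mu - mu ** 2)
    - (1 - b ** N) ** 2 / (mu ** 2 * (1 - b ** (2 * N)))
) * var_x

# simulate RR and average the squared deviation of the epoch starting points
w, burn_in, K, acc = 0.0, 1000, 100_000, 0.0
for k in range(burn_in + K):
    if k >= burn_in:
        acc += (w - wstar) ** 2            # deviation at the start of epoch k
    random.shuffle(data)
    for x in data:
        w -= mu * (w - x)                  # SGD step on Q(w; x) = (w - x)^2 / 2
msd_sim = acc / K

assert abs(msd_sim - msd_theory) / msd_theory < 0.15
```

The Monte Carlo estimate over $10^5$ epochs agrees with the closed form to within a few percent; the assertion uses a loose tolerance to absorb sampling noise.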
In order to provide further insight into this MSD expression,
we simplify it under a small-$\mu$ assumption. One option is the Taylor series:
\eq{
(1-\mu)^N =&\, 1 - N\mu + O(N^2\mu^2)\label{gh23}
}
However, this approximation can be inaccurate when $N$ is large, which is common in big data applications. Instead, we appeal to:
\eq{
(1-\mu)^{N} = e^{N\ln(1-\mu)} = e^{-\mu N + O(\mu^2 N)} \approx e^{-\mu N} \label{exp.approx}
}
Notice that the error term is $O(\mu^2 N)$ rather than $O(\mu^2 N^2)$, so \eqref{exp.approx} is a tighter approximation than \eqref{gh23} when $N$ is large.
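The comparison between the two approximations is easy to confirm numerically; the short Python check below (with illustrative values of $\mu$ and $N$) also verifies the hyperbolic identity used in the next step:

```python
import math

mu, N = 0.001, 2000                    # small step size, large N
exact  = (1 - mu) ** N
taylor = 1 - N * mu                    # first-order expansion (here: -1, far off)
expo   = math.exp(-mu * N)             # exponential approximation
assert abs(exact - expo) < abs(exact - taylor)

# the same approximation underlies the tanh step that follows:
# (1 - (1-mu)^N) / (1 + (1-mu)^N) ~ tanh(mu*N/2)
lhs = (1 - exact) / (1 + exact)
assert abs(lhs - math.tanh(mu * N / 2)) < 1e-3
```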
Based on this, we further approximate:
\eq{
\frac{1 -(1-\mu)^{N}}{1 +(1-\mu)^{N}} \approx {\tanh}(\mu N /2)
}
and arrive at the simplified expression:
\eq{
{\rm MSD}_{\rm RR} \approx& \frac{\mu}{N-1}\left(\frac{N}{2} - \frac{\tanh(\frac{\mu N}{2})}{\mu}\right) {\rm Var}(x)\nonumber\\
=&\frac{\mu}{2}\frac{N}{N-1} \left(1- \frac{2}{\mu N}\tanh\Big(\frac{\mu N}{2}\Big)\right){\rm Var}(x) \label{simply.msd}
}
For comparison purposes, we know that a simplified expression for MSD under uniform sampling has the following expression\cite{yuan2016stochastic}:
\eq{
{\rm MSD}_{\rm us} =&\frac{\mu}{2}{\rm Var}(x)
}
Hence, the random reshuffling case has an extra multiplicative factor:
\eq{
m_{\rm RR} \;\stackrel{\Delta}{=}\;\frac{N}{N-1} \left(1- \frac{2}{\mu N}\tanh\Big(\frac{\mu N}{2}\Big)\right)
}
We plot $m_{\rm RR}$ versus $\mu N$ in the left plot of Fig.~\ref{fig:rr_factor}, where we ignore the factor $\frac{N}{N-1}$.
It is clear from the figure that the smaller the step size $\mu$ or the sample size $N$, the larger the improvement in performance. In contrast, as $\mu N$ grows to infinity, the factor $m_{\rm RR}$ converges to $1$, i.e., the performance matches that of uniform sampling, which is consistent with the infinite-horizon case.
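These limiting behaviors of $m_{\rm RR}$ can be confirmed numerically; the Python check below evaluates $m_{\rm RR}$ as a function of $t = \mu N$ with the factor $\frac{N}{N-1}$ ignored, as in the plot:

```python
import math

def m_rr(t):
    # m_RR as a function of t = mu*N, ignoring the N/(N-1) factor
    return 1 - (2 / t) * math.tanh(t / 2)

assert m_rr(0.1) < 1e-3                      # small mu*N: large gain over uniform sampling
assert m_rr(1) < m_rr(10) < m_rr(100) < 1    # monotone increase toward 1
assert m_rr(100) > 0.97                      # saturates: gain vanishes as mu*N grows
```

For small $t$, the Taylor expansion of $\tanh$ gives $m_{\rm RR}(t)\approx t^2/12$, consistent with the $O(\mu^2 N^2)$ behavior discussed later.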
Lastly, noting that
\eq{
&\hspace{-4mm}R_{s, i}^{\prime \star}\nonumber\\
=&\, \frac{1}{N-1}\!\! \left(\!N\sum_{j=0}^{i-1} (1-\mu)^{2j} - \Big(\sum_{j=0}^{i-1} (1-\mu)^{j}\Big)^2\right)\!A^{\mathsf{T}} R_{xx}A
\nonumber\\
=&\,\frac{1}{N-1}\!\!\left(\!\frac{N(1-(1-\mu)^{2i})}{2\mu-\mu^2} - \frac{\left(1 -(1-\mu)^{i}\right)^2}{\mu^2}\right)\!A^{\mathsf{T}} R_{xx}A\label{g23g.xc}
}
and using the approximation \eqref{exp.approx}:
\eq{
\hspace{-3mm}R_{s, i}^{\prime \star}\approx\frac{N}{N-1} \left(\frac{1- e^{-2\mu i}}{2\mu} - \frac{(1- e^{-\mu i})^2}{\mu^2 N}\right)A^{\mathsf{T}} R_{xx}A
}
Substituting in \eqref{2389g.cs}, we get for $i\in [1,N]$:
\eq{
&\hspace{-5mm}\lim_{k\to\infty}\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{i}^{\prime k}\|^2\nonumber\\
\approx&
e^{-2\mu i}\frac{\mu}{2}\frac{N}{N-1} \left(1- \frac{2}{\mu N}\tanh\left(\frac{\mu N}{2}\right)\right){\rm Var}(x)\nonumber\\
&\; + (1-e^{-2\mu i})\frac{\mu}{2}\underbrace{\frac{N}{N-1} \left(1- \frac{2}{\mu N}\tanh\left(\frac{\mu i}{2}\right)\right)}_{\;\stackrel{\Delta}{=}\; m_{\rm RR}(i)}{\rm Var}(x) \raisetag{4mm}\label{x2.dsy3g}
}
Since $\tanh(\cdot)$ is monotonically increasing, $m_{\rm RR}(i) \geq m_{\rm RR}$. As $i$ increases, the convex combination gives more weight to the second term, which is larger than the first; this explains the increase of the MSD during the first half of the cycle. As $i$ increases further, $m_{\rm RR}(i)$ decreases back to the level of $m_{\rm RR}$, and hence the MSD decreases again during the second half of the cycle.
The simulation results shown in the right plot of Fig.~\ref{fig:rr_factor} match the theoretical analysis for quadratic risks rather well. \vspace{-2mm}
\begin{figure}[htp]
\centering
\hspace{-2mm}\includegraphics[scale=0.265]{rr_factor.jpg}\vspace{-0mm}\hspace{-1.5mm}
\includegraphics[scale=0.44]{upper_bound_all.pdf}
\caption{\small \color{black}Left: The curve of $m_{\rm RR}$ versus $\mu N$. Right: Mean-square-deviation performance of random reshuffling for a quadratic risk.}\vspace{-5mm}
\label{fig:rr_factor}
\end{figure}
{
\begin{table*}[!htp]
\caption{\color{black}\small Summary of the Results with Random Reshuffling versus Uniform Sampling with Replacement.\vspace*{-3mm}}\label{table.conclusion}
\centering
\footnotesize{\color{black}
\begin{tabular}{|c||c|c|c|}
\cline{1-4}
{\color{white}$\Big($}&Uniform sampling\cite{yuan2016stochastic} \cellcolor[gray]{0.8}& Random Reshuffling (Long-term or Quadratic) \cellcolor[gray]{0.8}&Random Reshuffling \cellcolor[gray]{0.8}\\\hhline{====}
{\color{white}$\Big($}Steady-state &$O(\mu)$&$O(\mu^3)$ --- Eq. \eqref{23gniox}&$O(\mu^2)$ --- Eq. \eqref{lemma.stability2}\\\cline{1-4}
MSD$^{\rm epoch}$&$\frac{\mu}{2}\mbox{\rm {\small Tr}}(H^{-1}R_s)$& $\mu^2 \mbox{\rm {\small Tr}}\Big((I- (I-\mu H)^{2N})^{-1}R^{\prime \star}_s\Big)$ --- Eq. \eqref{eq.msd.reshuffle}
&\eqref{eq.msd.reshuffle} + $O(\mu^2)$ --- Eq.\eqref{eq.mismatch} \\\cline{1-4}
MSD$^{\rm epoch}$ (Hyperbolic)&$\displaystyle\frac{\mu}{2}\mbox{\rm {\small Tr}}\big( \Lambda^{-1} U^{\mathsf{T}} R_s^\star U \big)$&$\displaystyle\frac{\mu}{2}\mbox{\rm {\small Tr}}\Bigg(M_{\rm RR}\Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Bigg)$ --- Eq. \eqref{msd.hyperbolic}&\eqref{msd.hyperbolic}+ $O(\mu^2)$ --- Eq.\eqref{eq.mismatch}\\\cline{1-4}
MSD$^{\rm iteration}$& $\frac{\mu}{2}\mbox{\rm {\small Tr}}(H^{-1}R_s)$&
\hspace{-4mm}$
\begin{array}{ll}
&(1-\mu\nu)^{2i}\mu^2\mbox{\rm {\small Tr}}\left(\big(I-(I-\mu H)^{2N}\big)^{-1} R_{s}^{\prime \star}\right)\nonumber\\
&\;\; +\Big(1\!-\!(1\!-\!\mu\nu)^{2i}\Big)\mu^2\mbox{\rm {\small Tr}}\left(\big(I\!-\!(I\!-\!\mu\nu)^{2i}\big)^{-1} R_{s,i}^{\prime \star}\right)\nonumber
\end{array}
$--- Eq. \eqref{2389g.cs}
&\eqref{2389g.cs} + $O(\mu^2)$ --- Eq.\eqref{eq.mismatch} \\\cline{1-4}
MSD$^{\rm iteration}$ (Hyperbolic)& $\displaystyle\frac{\mu}{2}\mbox{\rm {\small Tr}}\big( \Lambda^{-1} U^{\mathsf{T}} R_s^\star U \big)$
& \hspace{-8mm}$\begin{array}{ll}
&e^{-2\mu i}\frac{\mu}{2}\mbox{\rm {\small Tr}}\Big(M_{RR}\Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Big)\\
&\;\;\;+ (1-e^{-2\mu i})\frac{\mu}{2}\mbox{\rm {\small Tr}}\Big( M_{RR}(i)\Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Big)
\end{array}$
--- Eq. \eqref{23gdsdse}\hspace{-2mm}
&\eqref{23gdsdse} + $O(\mu^2)$ --- Eq.\eqref{eq.mismatch} \\\cline{1-4}
{\color{white}$\Big($}Infinite data&$\frac{\mu}{2}\mbox{\rm {\small Tr}}\big(H^{-1}R_s\big)$&$\frac{\mu}{2}\mbox{\rm {\small Tr}}(H^{-1}R_s)$--- Eq. \eqref{msd.infty}&$\frac{\mu}{2}\mbox{\rm {\small Tr}}(H^{-1}R_s)$ \\\cline{1-4}
{\color{white}$\Big($}Periodic &No& Yes --- Eq. \eqref{2389g.cs} & Yes\\
\cline{1-4}
\end{tabular}\vspace{-3mm}
}
\end{table*}
}
\subsection{Hyperbolic Representation for MSD in Long-term Model}\vspace{-1mm} \label{subsec.hyper}
Motivated by the result for the quadratic risk case, we now derive a similar expression for the MSD in the general case, also in terms of hyperbolic functions. First, we extend result \eqref{exp.approx} to a matrix version. Supposing $\Lambda$ is a positive diagonal matrix and $\mu$ is sufficiently small that $I-\mu\Lambda$ is a stable matrix, we have\vspace{-1mm}
\eq{
(I-\mu \Lambda)^N \approx e^{-\mu N \Lambda}
}
and \vspace{-0.3mm}
\eq{
\sum_{i=0}^{N-1}(I-\mu \Lambda)^i =& \frac{1}{\mu} \Big(I-(I-\mu \Lambda)^N\Big)\Lambda^{-1}
\approx \frac{1}{\mu}\Big(I- e^{-\mu N \Lambda}\Big)\Lambda^{-1} \label{g892.g3}
}
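Because $\Lambda$ is diagonal, both the exact geometric-sum identity and its exponential approximation reduce to scalar statements per eigenvalue, which can be checked numerically (the values of $\mu$, $N$, and the eigenvalues below are our own illustrative choices):

```python
import math

mu, N = 0.001, 1000
for lam in (0.5, 1.0, 3.0):                          # diagonal entries of Lambda
    exact  = sum((1 - mu * lam) ** i for i in range(N))
    closed = (1 - (1 - mu * lam) ** N) / (mu * lam)  # geometric-sum identity
    approx = (1 - math.exp(-mu * N * lam)) / (mu * lam)
    assert abs(exact - closed) < 1e-6 * exact        # identity holds exactly
    assert abs(exact - approx) < 0.01 * exact        # exponential approx within 1%
```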
It follows that
\eq{
&\mbox{\rm {\small Tr}}\left(\!(I- (I\!-\!\mu H)^{2N})^{-1}\left(\sum_{i=0}^{N-1} (I-\mu H)^iR^\star_s(I-\mu H)^i\right)\right)\nonumber\\
&\stackrel{\eqref{eq.weight.solution}}{=}\mbox{\rm {\small Tr}}\left(\sum_{i=0}^{N-1} \sum_{k=0}^\infty(I-\mu H)^i(I-\mu H)^{2kN} (I-\mu H)^i R_s^\star\right)\nonumber\\
&= \mbox{\rm {\small Tr}}\left( \sum_{k=0}^\infty\sum_{i=0}^{N-1}(I-\mu H)^{2(kN+i)} R_s^\star\right)\nonumber\\
&\stackrel{(a)}{=} \mbox{\rm {\small Tr}}\left(\sum_{j=0}^{\infty} (I-\mu H)^{2j} R_s^\star\right)\nonumber\\
&= \mbox{\rm {\small Tr}}\left((I-(I-\mu H)^2)^{-1} R_s^\star \right)\nonumber\\
&\approx \frac{1}{2\mu}\mbox{\rm {\small Tr}} (H^{-1}R_s^\star)=\frac{1}{2\mu} \mbox{\rm {\small Tr}}(\Lambda^{-1} U^{\mathsf{T}} R_s^\star U )
}
where in step (a) we used the fact that every nonnegative integer $j$ can be written uniquely as $j=kN+i$ with $k\geq 0$ and $0\leq i\leq N-1$. To shorten the notation, we let:\vspace{-1mm}
\eq{
\tau \;\stackrel{\Delta}{=}\; \mu N
}
Next, for the second part of \eqref{eq.msd.reshuffle}:
\eq{
&\mbox{\rm {\small Tr}}\left(\!\!(I- (I\!-\!\mu H)^{2N})^{-1}\Big[\sum_{i=0}^{N-1}(I\!-\!\mu H)^i\Big] R^{ \star}_s \Big[\sum_{i=0}^{N-1}(I\!-\!\mu H)^i\Big]\!\right)\nonumber\\
&\stackrel{(a)}{=}\!\frac{1}{\mu^2}\mbox{\rm {\small Tr}}\Big(\!(I\!-\! e^{\!-2\tau\Lambda})^{-1}(I\!-\!e^{\!-\tau \Lambda}) \Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Lambda^{-1} (I\!-\!e^{\!-\tau \Lambda})\!\Big)\nonumber\\
&= \frac{1}{\mu^2}\mbox{\rm {\small Tr}}\Big(\!\Lambda^{-1}(I\!-\!e^{\!-\tau \Lambda})(I\!-\!e^{\!-2\tau\Lambda})^{-1}(I\!-\!e^{\!-\tau \Lambda}) \Lambda^{-1} U^{\mathsf{T}} R_s^\star U \!\Big)\nonumber\\
&\stackrel{(b)}{=}\frac{1}{\mu^2}\mbox{\rm {\small Tr}}\Big(\Lambda^{-1}(I+ e^{-\tau\Lambda})^{-1}(I-e^{-\tau \Lambda}) \Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Big)
\nonumber\\
&= \frac{1}{\mu^2}\mbox{\rm {\small Tr}}\Big(\Lambda^{-1}\tanh(\tau\Lambda/2) \Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Big)\nonumber\\
&= \frac{N}{2\mu}\mbox{\rm {\small Tr}}\Big(2\tau^{-1}\Lambda^{-1}\tanh(\tau\Lambda/2) \Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Big)
}
where step (a) replaces $H$ by its eigendecomposition and uses \eqref{g892.g3}, while step (b) exploits the fact that
\eq{
I-e^{-2\tau \Lambda} = (I+e^{-\tau \Lambda})(I-e^{-\tau \Lambda})
}
Moreover, the tanh notation refers to
\eq{
\tanh{\Lambda} = {\rm diag}\{\tanh(\Lambda_{1,1}), \cdots, \tanh(\Lambda_{M,M})\}
}
Combining the above two results gives
\eq{
&\hspace{-2mm}{\rm MSD}_{\rm RR}^{\rm lt} \nonumber\\
&= \frac{\mu}{2}\mbox{\rm {\small Tr}}\Bigg( \underbrace{ \frac{N}{N-1}\Big[I - \frac{2}{\mu N}\Lambda^{-1}\tanh\Big(\frac{\mu N}{2}\Lambda\Big)\Big]}_{\;\stackrel{\Delta}{=}\; M_{RR}}\Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Bigg)\raisetag{4mm}\label{msd.hyperbolic}
}
Compared with the uniform sampling case:
\eq{
{\rm MSD}_{\rm US} =& \frac{\mu}{2}\mbox{\rm {\small Tr}}\big( H^{-1}R_s^\star \big)
=\frac{\mu}{2}\mbox{\rm {\small Tr}}\big( \Lambda^{-1} U^{\mathsf{T}} R_s^\star U \big)\vspace{-2mm} \label{eq.msd.hyperbolic}
}
Now, it is clear that the diagonal matrix factor $M_{\rm RR}$ serves the same purpose as $m_{\rm RR}$: each entry of this factor matrix captures the improvement of random reshuffling over uniform sampling. Lastly, we examine the order of expression \eqref{msd.hyperbolic}. We know from the Taylor expansion that
\eq{
1 - \frac{1}{x}\tanh(x) = O(x^2)
}
We conclude that
\eq{
M_{\rm RR} = O(\mu^2 N^2)\,\Longrightarrow\, {\rm MSD}_{\rm RR}^{\rm lt} = O(\mu^3)
\label{23gniox}
}
which confirms the $O(\mu^3)$ behavior observed in Fig.~\ref{fig:sgd_vs_rr_1en3}.
\vspace{-0mm}
{\color{black} Lastly, similar to the derivation for the quadratic case \eqref{g23g.xc}--\eqref{x2.dsy3g}, we can establish the hyperbolic representation of the MSD for the general case at all iterations:\small
\eq{
&\hspace{-5mm}\lim_{k\to\infty}\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{i}^{\prime k}\|^2\nonumber\\[-2mm]
\approx&
e^{-2\mu i}\frac{\mu}{2}\mbox{\rm {\small Tr}}\Bigg(\frac{N}{N-1}\Big[I - \frac{2}{\mu N}\Lambda^{-1}\tanh\Big(\frac{\mu N}{2}\Lambda\Big)\Big]\Lambda^{-1} U^{\mathsf{T}} R_s^\star U \Bigg)\nonumber\\
&\; +\! (1\! -\! e^{\! -2\mu i})\frac{\mu}{2}\mbox{\rm {\small Tr}}\Bigg(\!\! \underbrace{ \frac{N}{N\! -\! 1}\Big[I\! -\! \frac{2}{\mu N}\Lambda^{-1}\! \tanh\Big(\frac{\mu i}{2}\Lambda\Big)\Big]}_{\;\stackrel{\Delta}{=}\; M_{RR}(i)}\! \Lambda^{\! -\! 1} U^{\mathsf{T}}\! R_s^\star U\! \Bigg)\label{23gdsdse}\raisetag{4mm}
}
}\normalsize
\vspace{-10mm}
\subsection{Infinite-Horizon Case}
In this work, we are mostly interested in the finite-data case, where the data size is $N$, and the results so far are based on this assumption. It is nevertheless instructive to see how the performance result simplifies if we let $N$ grow to infinity. In that case, we get
\begin{align}
\lim_{N\to\infty} {\rm MSD}^{\rm lt}_{\rm RR} =&\; \mu^2\lim_{N\to\infty}\mbox{\rm {\small Tr}}\left((I- (I-\mu H)^{2N})^{-1}R^{\prime\star}_s\right)
\nonumber\\
=&\;\mu^2\lim_{N\to\infty}\mbox{\rm {\small Tr}}\left(R^{\prime\star}_s\right)\label{eq.80}
\end{align}
since for sufficiently small $\mu$, the matrix $I-\mu H$ is stable. Moreover, observe further that:
\eq{\lim_{N\to\infty} \sum_{i=0}^{N-1}(I-\mu H)^{2i} =\;& \Big(I - (I-\mu H)^2\Big)^{-1}\nonumber\\
=\;&\frac{1}{2\mu}H^{-1}\big(I - \mu H/2\big)^{-1}\nonumber\\
=\;&\frac{1}{2\mu} H^{-1} + O(1)
}
{\color{black} where $O(1)$ denotes a matrix whose entries are all $O(1)$.}
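This matrix series limit can be checked numerically; the following sketch uses an arbitrary small SPD matrix (illustrative values, not from the paper):

```python
import numpy as np

# Check (I - (I - mu H)^2)^{-1} = (1/(2 mu)) H^{-1} (I - mu H/2)^{-1}
# = (1/(2 mu)) H^{-1} + O(1) for a small SPD matrix H and small mu.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
H = A @ A.T + 3 * np.eye(3)          # symmetric positive definite
mu = 1e-4
I = np.eye(3)
lhs = np.linalg.inv(I - (I - mu * H) @ (I - mu * H))
rhs = (1 / (2 * mu)) * np.linalg.inv(H) @ np.linalg.inv(I - mu * H / 2)
assert np.allclose(lhs, rhs)
# After removing the (1/(2 mu)) H^{-1} term, the residual stays O(1):
residual = lhs - (1 / (2 * mu)) * np.linalg.inv(H)
assert np.linalg.norm(residual) < 1.0
```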
Hence,
\begin{align}
\lim_{N\to\infty}\mbox{\rm {\small Tr}}\left(R^{\prime \star}_s\right) =&\; \mbox{\rm {\small Tr}}\left(\lim_{N\to\infty}R^{\prime \star}_s\right)\nonumber\\
=&\;\mbox{\rm {\small Tr}}\left(\lim_{N\to\infty} \sum_{i=0}^N(I-\mu H)^iR^\star_s(I-\mu H)^i \right)\nonumber\\
=&\;\mbox{\rm {\small Tr}}\left( \lim_{N\to\infty}\sum_{i=0}^N(I-\mu H)^{2i}R^\star_s \right)\label{98hj9gh23}\\
=&\; \frac{1}{2\mu} \mbox{\rm {\small Tr}} (H^{-1}R^\star_s) +O(1) \label{89g32.}
\end{align}
Substituting this result back into (\ref{eq.80}), we establish:
\begin{equation}
\lim_{N\to\infty} {\rm MSD}_{\rm RR}^{\rm lt} = \frac{\mu}{2}\mbox{\rm {\small Tr}} (H^{-1}R_s^\star) + O(\mu^2)
\label{msd.infty}\end{equation}
which is exactly the same expression we have in the streaming-data case \cite{sayed2014adaptation}.
{\color{black}
If we examine the hyperbolic approximation of the MSD, performance is proportional to ${\rm tanh}(\mu N)$: the performance degrades as $\mu N$ grows, but saturates once $\mu N$ becomes large. Equation \eqref{msd.infty} shows that the limiting value coincides with that of the uniform-sampling case.
}
\section{Concluding Remarks}
In conclusion, this work studies the performance of stochastic gradient implementations under random reshuffling and provides a detailed analytical justification for the improved performance of these implementations over uniform sampling. The work focuses on constant step-size adaptation, where the
agent is continuously learning. The analysis establishes analytically that random
reshuffling outperforms uniform sampling by showing
that iterates approach a smaller neighborhood of size $O(\mu^2)$
around the minimizer rather than $O(\mu)$. Simulation results
illustrate the theoretical findings.
We also summarize the conclusions in Table \ref{table.conclusion}.
\appendices
\section{Proof of Theorem \ref{lemma.start.pont}}\label{proof.theorem.1}
{\color{black} Note first that
\eq{
{\boldsymbol{w}}_0^{k+1} \!\;\stackrel{\Delta}{=}\;& {\boldsymbol{w}}_{N}^k\nonumber\\
\stackrel{\eqref{alg.rr}}{=}\;&{\boldsymbol{w}}_{N-1}^k - \mu{\nabla}_w Q({\boldsymbol{w}}_{N-1}^k; x_{{\boldsymbol \sigma}^k(N)}) \nonumber\\[-1mm]
& \vdots\nonumber\\[-2mm]
=\;& {\boldsymbol{w}}_{0}^k -\mu \sum_{i=1}^N{\nabla}_w Q({\boldsymbol{w}}_{i-1}^k; x_{{\boldsymbol \sigma}^k(i)})\nonumber\\
\stackrel{(\ref{grad.define})}{=}\;& {\boldsymbol{w}}_{0}^k -\mu N \nabla_w J({\boldsymbol{w}}^k_0) \label{gewg.gwe}\\
&\;- \mu \sum_{i=1}^N\underbrace{\big( {\nabla}_w Q({\boldsymbol{w}}_{i-1}^k; x_{{\boldsymbol \sigma}^k(i)}) -
{\nabla}_w Q({\boldsymbol{w}}_{0}^k; x_{{\boldsymbol \sigma}^k(i)})\big)}_{\;\stackrel{\Delta}{=}\; g_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}^k_{i-1})} \nonumber
}
where we denote by $g_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}^k_{i-1})$ the incremental gradient noise, i.e., the mismatch between the gradient approximations evaluated at ${\boldsymbol{w}}^k_{i-1}$ and ${\boldsymbol{w}}^k_0$.
Next, we introduce the error vector: \eq{\widetilde{{\boldsymbol{w}}}_0^k \;\stackrel{\Delta}{=}\; w^\star - {\boldsymbol{w}}_0^k}
and let $0~<~t~<~1$ be any scalar that we will specify further below. Subtracting $w^\star$ from both sides of (\ref{gewg.gwe}) and squaring, we get:
{\color{black}
\eq{
&\hspace{-5mm}\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^2 \nonumber\\
=& \left\|\widetilde{\boldsymbol{w}}_{0}^k + \mu N \nabla_w J({\boldsymbol{w}}^k_0)+\mu\sum_{i=1}^Ng_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}^k_{i-1})\right\|^2\nonumber\\
\stackrel{(a)}{\leq}& \frac{1}{t}\|\widetilde{\boldsymbol{w}}_{0}^k + \mu N \nabla_w J({\boldsymbol{w}}^k_0) \|^2 +\frac{\mu^2}{1-t}\left\|\sum_{i=1}^Ng_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}^k_{i-1})\right\|^2\nonumber\\
\stackrel{(b)}{\leq}& \frac{1}{t}\left\|\widetilde{\boldsymbol{w}}_{0}^k \!+\! \mu N \nabla_w J({\boldsymbol{w}}^k_0) \right\|^2 \!\!+\!\frac{\mu^2N}{1-t}\left(\sum_{i=1}^N\left\|g_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}^k_{i-1})\right\|^2\right) \label{2h89jbg.sbd}
}
where step (a) exploits Jensen's inequality:
\eq{
\|a+b\|^2 = \left\|t\cdot \frac{a}{t}+ (1-t)\cdot\frac{b}{1-t}\right\|^2 \leq \frac{1}{t}\| a \|^2 + \frac{1}{1-t}\|b\|^2
}
}
and step (b) uses the fact that:
\eq{
\left\|\sum_{i=1}^N x_i\right\|^2=N^2\left\|\sum_{i=1}^N\frac{1}{N} x_i\right\|^2\leq N\sum_{i=1}^N\left\|x_i\right\|^2 \label{eq.jensen}
}
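Inequality \eqref{eq.jensen} is easy to confirm numerically; a minimal sketch with random vectors:

```python
import numpy as np

# Check || sum_i x_i ||^2 <= N * sum_i ||x_i||^2, which follows from
# Jensen's inequality applied to the average (1/N) sum_i x_i.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 4))      # N = 5 vectors in R^4
lhs = np.linalg.norm(X.sum(axis=0)) ** 2
rhs = X.shape[0] * np.sum(np.linalg.norm(X, axis=1) ** 2)
assert lhs <= rhs + 1e-12
```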
We show in Appendix \ref{app.incr.noise} that the rightmost term in (\ref{2h89jbg.sbd}) can be bounded by:
\eq{
\sum_{i=1}^N\left\|g_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}^k_{i-1})\right\|^2
&\leq \frac{\mu^2 \delta^2N^3}{1-2\mu^2\delta^2N^2}\left(2\delta^2\|\widetilde{\boldsymbol{w}}_0^k\|^2 + {\mathcal{K}} \right) \label{fwehig.we}
}
while for the first term in \eqref{2h89jbg.sbd} we have
{\color{black}
\eq{
&\hspace{-4mm}\left\|\widetilde{\boldsymbol{w}}^k_0 + \mu N \nabla J({\boldsymbol{w}}_0^k)\right\|^2\nonumber\\[1mm]
=& \|\widetilde{\boldsymbol{w}}^k_0\|^2 + \mu^2 N^2\| \nabla J({\boldsymbol{w}}_0^k)\|^2+ 2 \mu N(\widetilde{\boldsymbol{w}}^k_0)^{\mathsf{T}} \nabla J({\boldsymbol{w}}_0^k)\nonumber\\[1mm]
\leq& \Big(1-2\mu N\frac{\nu\delta}{\delta+\nu}\Big)\|\widetilde{\boldsymbol{w}}^k_0\|^2 + \mu N(\mu N-\frac{2}{\delta+\nu}) \| \nabla J({\boldsymbol{w}}_0^k)\|^2 \nonumber\\[-1mm]
\label{f9j30.3x}
}
where the inequality follows from the co-coercivity property \cite{nesterov2013introductory}:
\eq{
&(\nabla J(x) - \nabla J(y))^{\mathsf{T}} (x-y)\nonumber\\
&\;\;\;\;\;\geq \frac{\nu\delta}{ \delta+\nu}\|x-y\|^2+\frac{1}{\delta+\nu}\|\nabla J(x)-\nabla J(y)\|^2
}
Next we require the step size to satisfy
\eq{
\mu \leq \frac{2}{(\delta+\nu)N} \label{stepsize.condition1}
}
Then, the coefficient of the last term in \eqref{f9j30.3x} is negative. Combining this with the strong-convexity property $\| \nabla J({\boldsymbol{w}}_0^k) - \nabla J(w^\star)\| \geq \nu\|\widetilde{\boldsymbol{w}}_{0}^k\|$, we have
\eq{
&\hspace{-4mm}\left\|\widetilde{\boldsymbol{w}}^k_0 + \mu N \nabla J({\boldsymbol{w}}_0^k)\right\|^2\nonumber\\[1mm]
\leq&\Big(1-2\mu N\frac{\nu\delta}{\delta+\nu}\Big)\|\widetilde{\boldsymbol{w}}^k_0\|^2 + \mu N\nu^2\Big(\mu N-\frac{2}{\delta+\nu}\Big) \|\widetilde{\boldsymbol{w}}^k_0\|^2\nonumber\\
=&\,(1-\mu \nu N)^2\|\widetilde{\boldsymbol{w}}^k_0\|^2 \label{f9j30.3}
}
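The contraction bound \eqref{f9j30.3} can be illustrated on a simple quadratic risk $J(w) = \frac{1}{2}w^{\mathsf{T}} Hw$, for which $w^\star = 0$ and $\nabla J(w) = Hw$ (a numerical sketch under these assumptions):

```python
import numpy as np

# On a quadratic J(w) = 0.5 w^T H w (so w* = 0 and grad J(w) = H w), verify
# ||w_tilde + mu N grad J||^2 <= (1 - mu N nu)^2 ||w_tilde||^2
# when mu <= 2/((delta + nu) N), with nu, delta the extreme eigenvalues of H.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
H = A @ A.T + np.eye(4)
eigs = np.linalg.eigvalsh(H)
nu, delta = eigs[0], eigs[-1]
N = 10
mu = 2 / ((delta + nu) * N)          # largest allowed step size
for _ in range(100):
    w = rng.standard_normal(4)
    w_tilde = -w                     # w* - w with w* = 0
    lhs = np.linalg.norm(w_tilde + mu * N * (H @ w)) ** 2
    rhs = (1 - mu * N * nu) ** 2 * np.linalg.norm(w_tilde) ** 2
    assert lhs <= rhs + 1e-10
```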
Combining \eqref{fwehig.we} and \eqref{f9j30.3}, we establish:
\eq{
\|\widetilde{\boldsymbol{w}}^{k+1}_0\|^2\leq& \frac{1}{t} \left(1- \mu N \nu \right)^2 \|\widetilde{\boldsymbol{w}}_{0}^k\|^2\nonumber\\
&\;\;{} + \frac{\mu^2N}{1-t} \frac{\mu^2 \delta^2 N^3}{1-2\mu^2\delta^2N^2}\left(2\delta^2\|\widetilde{\boldsymbol{w}}_0^k\|^2 + \mathcal{K}\right)
}
\noindent We are free to choose $t\in(0,1)$. Thus, let $t=1-\mu N \nu$. Then, we conclude that
\eq{
\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^2
\leq& \Bigg(1-\mu N \nu + \frac{2\mu^3\delta^4N^3}{\nu(1-2\mu^2\delta^2N^2)}\Bigg)\|\widetilde{\boldsymbol{w}}^k_0\|^2\nonumber\\
&\;\;+\frac{\mu^3\delta^2N^3\mathcal{K}}{\nu(1-2\mu^2\delta^2N^2)} \label{g32jo.sgw}
}
If we assume $\mu$ is sufficiently small such that
\eq{\label{mu-2}
1-2\mu^2\delta^2N^2 \ge \frac{1}{2},
}
then inequality \eqref{g32jo.sgw} becomes
\eq{\label{xcnweu-2}
\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^2 \leq\;& \Big(1-\mu N \nu + \frac{4\mu^3\delta^4N^3}{\nu}\Big)\|\widetilde{\boldsymbol{w}}^k_0\|^2+\frac{2\mu^3\delta^2N^3}{\nu}\mathcal{K}.
}
If we further assume the step-size $\mu$ is sufficiently small such that
\eq{\label{mu-3}
1-\mu N \nu + \frac{4\mu^3\delta^4N^3}{\nu} \le 1-\frac{1}{2} \mu N \nu
}
}
then inequality \eqref{xcnweu-2} becomes
\eq{
\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^2 \leq\;& \left(1-\frac{1}{2}\mu N \nu \right) \|\widetilde{\boldsymbol{w}}^k_0\|^2+\frac{2\mu^3\delta^2N^3}{\nu}\mathcal{K}
}
Iterating over $k$, we have
\eq{
\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^2\leq\;& \left(1-\frac{1}{2}\mu N \nu \right)^k \|\widetilde{\boldsymbol{w}}^0_0\|^2\nonumber\\
&\;\;{}+\left(\frac{2\mu^3\delta^2N^3}{\nu}\mathcal{K}\right)\sum_{j=1}^{k}\left(1-\frac{1}{2}\mu N \nu \right)^j \nonumber \\
\le\, & \left(1-\frac{1}{2}\mu N \nu \right)^k \|\widetilde{\boldsymbol{w}}^0_0\|^2+\frac{4\mu^2\delta^2N^2}{\nu^2}\mathcal{K}\label{sdnsdn}
}
By taking expectations {\color{black} with respect to the filtration, i.e., the collection of past information,} on both sides, we have
\eq{
\mathbb{E}\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^2 \leq\;& \left(1-\frac{1}{2}\mu N \nu \right)^k \mathbb{E} \|\widetilde{\boldsymbol{w}}^0_0\|^2+\frac{4\mu^2\delta^2N^2}{\nu^2}\mathcal{K}, \label{13g,sdfgew}
}
which implies that
\eq{
\limsup_{k\to \infty} \mathbb{E}\|\widetilde{\boldsymbol{w}}_0^{k}\|^2 = O(\mu^2)
}
Finally we find a sufficient range for $\mu$ for stability. To satisfy \eqref{stepsize.condition1}, \eqref{mu-2} and \eqref{mu-3}, it is enough to set $\mu$ as
\eq{\color{black}
\mu &\color{black} \le \frac{\nu}{3\delta^2 N} \le \min \left\{ \frac{2}{(\delta+\nu)N} , \frac{1}{2\delta N}, \frac{\nu}{\sqrt{8}\delta^2 N} \right\}
}
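The sufficiency of a step size at this bound can be checked numerically; the following sketch (with randomly drawn constants satisfying $\nu \le \delta$) verifies that conditions \eqref{mu-2} and \eqref{mu-3} hold:

```python
import numpy as np

# Illustrative check: mu at the stated bound satisfies both
# 1 - 2 mu^2 delta^2 N^2 >= 1/2 and
# 1 - mu N nu + 4 mu^3 delta^4 N^3 / nu <= 1 - mu N nu / 2.
rng = np.random.default_rng(3)
for _ in range(1000):
    delta = rng.uniform(0.1, 10.0)
    nu = rng.uniform(0.01, 1.0) * delta      # nu <= delta always holds
    N = int(rng.integers(1, 100))
    mu = min(2 / ((delta + nu) * N), 1 / (2 * delta * N),
             nu / (np.sqrt(8) * delta**2 * N))
    assert 1 - 2 * mu**2 * delta**2 * N**2 >= 0.5 - 1e-12
    assert (1 - mu * N * nu + 4 * mu**3 * delta**4 * N**3 / nu
            <= 1 - 0.5 * mu * N * nu + 1e-12)
```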
The argument in this derivation provides a self-contained proof for the convergence result (\ref{lemma.stability2}), which generalizes the approach from \cite{ying2017rr}. There, the bound (\ref{lemma.stability2}) was derived from an intermediate property (23) in \cite{ying2017rr}, which does not always hold. Here, the same result is re-derived and shown to hold irrespective of this property. Consequently, we are now able to obtain Lemma 1 from \cite{ying2017rr} as a corollary to our current result, as shown next.
}
\section{Derivation of \eqref{fwehig.we}}\label{app.incr.noise}
Indeed, from Lipschitz continuity of the gradients, we have
\eq{
\sum_{i=1}^N\left\|g_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}^k_{i-1})\right\|^2 \leq\,& \sum_{i=1}^N\delta^2\left\|{\boldsymbol{w}}^k_{i-1}-{\boldsymbol{w}}^k_0\right\|^2 \nonumber\\
=\,&\delta^2\sum_{i=1}^N\left\|\sum_{j=1}^{i-1}({\boldsymbol{w}}_{j}^k - {\boldsymbol{w}}_{j-1}^k)\right\|^2\nonumber\\
\stackrel{\eqref{eq.jensen}}{\leq}&\delta^2\sum_{i=1}^N(i-1)\sum_{j=1}^{i-1}\|{\boldsymbol{w}}_{j}^k - {\boldsymbol{w}}_{j-1}^k\|^2
}
Using the equivalence relation \eq{
\sum_{i=1}^N\sum_{j=1}^{i-1} a_{ij}\equiv \sum_{j=1}^{N-1}\sum_{i=j+1}^N a_{ij} \label{eq.equival.sum}
}
we obtain
\eq{
\sum_{i=1}^N\left\|g_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}^k_{i-1})\right\|^2\leq& \delta^2\sum_{j=1}^N\sum_{i=j+1}^{N}(i-1)\|{\boldsymbol{w}}_{j}^k - {\boldsymbol{w}}_{j-1}^k\|^2\nonumber\\
\leq&\frac{\delta^2N^2}{2}\sum_{j=1}^N\|{\boldsymbol{w}}_{j}^k - {\boldsymbol{w}}_{j-1}^k\|^2 \label{gwioeg}
}
\noindent where in the second inequality we used the fact that
\eq{
\sum_{i=j+1}^N(i-1)\leq \sum_{i=1}^N(i-1)=\frac{N(N-1)}{2} \leq\frac{N^2}{2},\;\;j=1,2,\ldots,N \label{eq.summation}
}
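Both the index swap \eqref{eq.equival.sum} and the bound \eqref{eq.summation} can be confirmed with a short enumeration (illustrative):

```python
# Check sum_{i=1}^{N} sum_{j=1}^{i-1} a_{ij} == sum_{j=1}^{N-1} sum_{i=j+1}^{N} a_{ij}
# (both enumerate exactly the pairs with j < i), and
# sum_{i=j+1}^{N} (i-1) <= N^2/2 for every j.
N = 7
a = {(i, j): i * 10 + j for i in range(1, N + 1) for j in range(1, N + 1)}
lhs = sum(a[i, j] for i in range(1, N + 1) for j in range(1, i))
rhs = sum(a[i, j] for j in range(1, N) for i in range(j + 1, N + 1))
assert lhs == rhs
for j in range(1, N + 1):
    assert sum(i - 1 for i in range(j + 1, N + 1)) <= N * N / 2
```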
We can recursively bound the difference terms in \eqref{gwioeg} as follows. From \eqref{alg.rr}, we have
\eq{
&\hspace{-5mm}\|{\boldsymbol{w}}_{j}^k - {\boldsymbol{w}}_{j-1}^k\|^2 \nonumber\\
=& \mu^2 \|\nabla_w Q({\boldsymbol{w}}_{j-1}^k; x_{{\boldsymbol \sigma}^k{(j)}})\|^2\nonumber\\
\leq& 2\mu^2 \|\nabla_w Q({\boldsymbol{w}}_{j-1}^k; x_{{\boldsymbol \sigma}^k{(j)}}) - \nabla_w Q(w^\star; x_{{\boldsymbol \sigma}^k(j)})\|^2\nonumber\\
&\;\;{}+2\mu^2 \|\nabla_w Q(w^\star; x_{{\boldsymbol \sigma}^k(j)})\|^2\nonumber\\
\leq& 2\mu^2\delta^2\|\widetilde{\boldsymbol{w}}_{j-1}^k\|^2 + 2\mu^2 \|\nabla_w Q(w^\star; x_{{\boldsymbol \sigma}^k(j)})\|^2\nonumber\\
\leq& 4\mu^2\delta^2\|\widetilde{\boldsymbol{w}}_0^k\|^2 + 4\mu^2\delta^2\|{\boldsymbol{w}}_{j-1}^k -{\boldsymbol{w}}_{0}^k\|^2\nonumber\\
&\;\;{}+2\mu^2 \|\nabla_w Q(w^\star; x_{{\boldsymbol \sigma}^k(j)})\|^2
}
Summing over $j$:
\eq{
&\hspace{-5mm}\sum_{j=1}^{N}\|{\boldsymbol{w}}_{j}^k - {\boldsymbol{w}}_{j-1}^k\|^2 \nonumber\\[-2mm]
\stackrel{\eqref{gradient.noise}}{\leq}\;& 4\mu^2\delta^2N\|\widetilde{\boldsymbol{w}}_0^k\|^2 + 2\mu^2N\mathcal{K} + 4\mu^2\delta^2\sum_{j=1}^N\|{\boldsymbol{w}}_{j-1}^k -{\boldsymbol{w}}_{0}^k\|^2\nonumber\\
=\;& 4\mu^2\delta^2N\|\widetilde{\boldsymbol{w}}_0^k\|^2 \!+\! 2\mu^2N\mathcal{K}\!+ \! 4\mu^2\delta^2\sum_{j=1}^N\left\| \sum_{i=1}^{j-1}({\boldsymbol{w}}_{i}^k -{\boldsymbol{w}}_{i-1}^k)\right\|^2\nonumber\\
\stackrel{\eqref{eq.jensen}}{\leq}\;&\! 4\mu^2\delta^2N\|\widetilde{\boldsymbol{w}}_0^k\|^2 \! +\! 2\mu^2N\mathcal{K} \nonumber\\
&\;\;+ 4\mu^2\delta^2\sum_{j=1}^N\sum_{i=1}^{j-1}(j-1)\| {\boldsymbol{w}}_{i}^k \!-\!{\boldsymbol{w}}_{i-1}^k\|^2\nonumber\\
\stackrel{\eqref{eq.equival.sum}}{=}\;&\! 4\mu^2\delta^2N\|\widetilde{\boldsymbol{w}}_0^k\|^2 + 2\mu^2N\mathcal{K} \nonumber\\
&\;\;+ 4\mu^2\delta^2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}(j-1)\| {\boldsymbol{w}}_{i}^k -{\boldsymbol{w}}_{i-1}^k\|^2\nonumber\\
\stackrel{\eqref{eq.summation}}{\leq}\;&\! 4\mu^2\delta^2N\|\widetilde{\boldsymbol{w}}_0^k\|^2 + 2\mu^2N\mathcal{K} + 2\mu^2\delta^2 N^2\sum_{j=1}^{N-1}\| {\boldsymbol{w}}_{j}^k -{\boldsymbol{w}}_{j-1}^k\|^2\nonumber\\
\leq\;& \!4\mu^2\delta^2N\|\widetilde{\boldsymbol{w}}_0^k\|^2 + 2\mu^2N\mathcal{K} + 2\mu^2\delta^2 N^2\sum_{j=1}^{N}\| {\boldsymbol{w}}_{j}^k -{\boldsymbol{w}}_{j-1}^k\|^2
}
Rearranging the terms, we get
\eq{
(1\!-\!2\mu^2\delta^2N^2)\sum_{j=1}^{N}\|{\boldsymbol{w}}_{j}^k - {\boldsymbol{w}}_{j-1}^k\|^2 \leq 4\mu^2\delta^2N\|\widetilde{\boldsymbol{w}}_0^k\|^2 \!+\! 2\mu^2N\mathcal{K}\raisetag{2mm}
}
After substituting into \eqref{gwioeg} and simplifying, we have \eqref{fwehig.we}.
\section{Proof of Corollary \ref{lemma.corollary.1}}\label{proof.corollary.1}
We have
\eq{
\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_i\|^2 \leq& 2 \mathbb{E}\hspace{0.05cm}\|{\boldsymbol{w}}^k_i - {\boldsymbol{w}}^k_0\|^2 + 2\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2\nonumber\\
\leq& \color{black}2 i\sum_{j=0}^{i-1}\mathbb{E}\hspace{0.05cm}\|{\boldsymbol{w}}^k_{j+1} - {\boldsymbol{w}}^k_j\|^2 + 2\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2\nonumber\\
\leq& 2 \mu^2 i\sum_{j=0}^{i-1}\mathbb{E}\hspace{0.05cm}\|\nabla_w Q({\boldsymbol{w}}^k_j; x_{{\boldsymbol \sigma}^k(j)})\|^2 + 2\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2\nonumber\\
\leq& 2 \mu^2\delta^2i\sum_{j=0}^{i-1}\mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}^k_j\|^2 + 2\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2
}
Summing over $i$:
\eq{
&\hspace{-5mm}\sum_{i=1}^{N-1}\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_i\|^2\nonumber\\
\leq& 2 \mu^2\delta^2\sum_{i=1}^{N-1}\sum_{j=0}^{i-1}i\mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}^k_j\|^2 + 2N\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2\nonumber\\
=& 2 \mu^2\delta^2\sum_{j=0}^{N-1}\sum_{i=j+1}^{N-1}i\mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}^k_j\|^2 + 2N\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2\nonumber\\
\leq& \mu^2\delta^2N^2\sum_{j=0}^{N-1}\mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}^k_j\|^2 + 2N\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2\nonumber\\
=& \mu^2\delta^2N^2\sum_{j=1}^{N-1}\mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}^k_j\|^2 + (2N+\mu^2\delta^2N^2)\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2
}
Rearranging terms, we get
\eq{
\sum_{i=1}^{N-1}\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_i\|^2\leq&\frac{2N+\mu^2\delta^2N^2}{1-\mu^2\delta^2N^2}\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_0\|^2
}
Letting $k\to\infty$, we obtain
\eq{
\limsup_{k\to\infty}\sum_{i=1}^{N-1}\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_i\|^2 = O(\mu^2)
}
Noting that every term in the summation is non-negative, we conclude that for all $1 \le j \le N-1$:
\eq{
\limsup_{k\to\infty}\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_j\|^2 \le \limsup_{k\to\infty}\sum_{i=1}^{N-1}\mathbb{E}\hspace{0.05cm} \|\widetilde{\boldsymbol{w}}^k_i\|^2 = O(\mu^2)
}\vspace{-3mm}
{\color{black}
\section{Proof of Corollary \ref{lemma.corollary.2}}\label{proof.corollary.2}\vspace{-1mm}
For completeness, we show that our derivation can be modified to arrive at a conclusion similar to \cite{Gurbuzbalaban2015} for the diminishing step-size scenario.
Observe that the per-epoch recursion preceding \eqref{sdnsdn} continues to hold with a decaying step-size:
\eq{
\mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^2 \leq\;& \left(1-\frac{1}{2}\mu(k) N \nu \right) \mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}^k_0\|^2+\frac{2\mu(k)^3\delta^2N^3}{\nu}\mathcal{K} \label{xdfe.gwe}
}
For simplicity, we only consider step-size sequences of the form:
\eq{
\mu(k) = \frac{c}{k+1}, \;\;\;\;k\geq 0
}
where $c$ is some positive constant. Then, we can exploit Chung's lemma \cite{polyak1987introduction} or \cite[Lemma F.5]{sayed2014adaptation}
to conclude that the convergence rate of $\mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^2$ is $O(1/k^2)$. Since the number of epochs $k$ and the iteration index $i$ are linearly related, the convergence rate in terms of $i$ is also $O(1/i^2)$.
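A minimal numerical sketch of this rate (with hypothetical constants chosen so that Chung's condition $cN\nu/2>2$ holds): iterating the scalar recursion and checking that $k^2$ times the iterate stays bounded:

```python
# Iterate a_{k+1} = (1 - 0.5 mu_k N nu) a_k + (2 mu_k^3 delta^2 N^3 / nu) K
# with mu_k = c/(k+1); Chung's lemma needs c N nu / 2 > 2 for the O(1/k^2) rate.
N, nu, delta, K = 10, 1.0, 2.0, 1.0   # hypothetical constants
c = 0.5                               # c N nu / 2 = 2.5 > 2
a = 1.0
history = []
for k in range(20000):
    mu = c / (k + 1)
    a = (1 - 0.5 * mu * N * nu) * a + (2 * mu**3 * delta**2 * N**3 / nu) * K
    history.append((k + 1) ** 2 * a)
# k^2 a_k stays bounded (here it approaches q/(p-2) = 2000), i.e. a_k = O(1/k^2):
assert 1000 < history[-1] < 3000
```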
}
\vspace{-4mm}
\section{Proof of Lemma \ref{lemma.2}} \label{app.noise}\vspace{-1mm}
We employ mathematical induction. First, it is easy to verify that $f(1;X,\beta) = {\rm Var}(X)$. Now, assuming (\ref{eq.lemma2}) holds for $n$, we consider the case $n+1$:
\begin{align}
f(n+1;X, \beta)
&= \mathbb{E}\hspace{0.05cm} \Bigg\|\sum_{j=1}^{n+1}\beta^{n+1-j} x_{{\boldsymbol \sigma}(j)}\Bigg\|^2\nonumber\\
&= \mathbb{E}\hspace{0.05cm} \Bigg\|\beta\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}+x_{{\boldsymbol \sigma}(n+1)}\Bigg\|^2\nonumber\\
&= \beta^2\mathbb{E}\hspace{0.05cm}\Bigg \|\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}\Bigg\|^2+\mathbb{E}\hspace{0.05cm} \|x_{{\boldsymbol \sigma}(n+1)}\|^2\nonumber\\
&\hspace{7mm} + 2\beta\mathbb{E}\hspace{0.05cm} \Bigg(\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}\Bigg)^{\mathsf{T}} x_{{\boldsymbol \sigma}(n+1)} \label{gu9ng.xd}
\end{align}
From the uniform random reshuffling property \eqref{prop2}, we know that:
\eq{
\mathbb{E}\hspace{0.05cm}\|x_{{\boldsymbol \sigma}(n+1)}\|^2 = {\rm Var}(X) \label{zdxc.CS}
}
For the cross terms, we exploit the law of total expectation\cite{durrett2010probability}:
\eq{
&\hspace{-2mm}\mathbb{E}\hspace{0.05cm}\Bigg(\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}\Bigg)^{\mathsf{T}} x_{{\boldsymbol \sigma}(n+1)}\nonumber\\
&=\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(1:n)}\!\!\left[\!\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(n+1)}\Bigg(\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}\Bigg)^{\mathsf{T}} x_{{\boldsymbol \sigma}(n+1)}\Big|\,{\boldsymbol \sigma}(1:n)\!\right]\nonumber\\
&\stackrel{\eqref{prop3}}{=}\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(1:n)}\left[\Bigg(\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}\Bigg)^{\mathsf{T}} \left(\frac{1}{N-n}\sum_{j\notin {\boldsymbol \sigma}(1:n)}x_j\right)\right]\nonumber\\
&=-\frac{1}{N-n}\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(1:n)}\left[\Bigg(\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}\Bigg)^{\mathsf{T}}\sum_{j=1}^{n}x_{{\boldsymbol \sigma}(j)}\right]\nonumber\\
&=-\frac{1}{N-n}\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(1:n)}\sum_{j=1}^{n}\beta^{n-j}\|x_{{\boldsymbol \sigma}(j)}\|^2 \nonumber\\
&\;\;\;\;\;\;-\frac{1}{N-n}\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(1:n)}\sum_{i=1}^{n}\beta^{n-i} \left(\sum_{j=1, j\neq i}^{n}x^{\mathsf{T}}_{{\boldsymbol \sigma}(i)}x_{{\boldsymbol \sigma}(j)}\right) \label{gwe090jg2}
}
Without loss of generality, we assume $i<j$ in the following argument; if $i>j$, exchanging the roles of $x_{{\boldsymbol \sigma}(i)}$ and $x_{{\boldsymbol \sigma}(j)}$ leads to the same conclusion:
\begin{align}
\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(1:n)}\big[x^{\mathsf{T}}_{{\boldsymbol \sigma}(i)}x_{{\boldsymbol \sigma}(j)}\big] =\,&\,
\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(i),{\boldsymbol \sigma}(j)}\big[x^{\mathsf{T}}_{{\boldsymbol \sigma}(i)}x_{{\boldsymbol \sigma}(j)}\big]\nonumber\\
=\,&\, \mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(i)}\left\{ x^{\mathsf{T}}_{{\boldsymbol \sigma}(i)}\mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(j)}[ x_{{\boldsymbol \sigma}(j)}\,|\, {\boldsymbol \sigma}(i)]\right\}\nonumber\\
\stackrel{(\ref{prop3})}{=}&\, -\frac{1}{N-1} \mathbb{E}\hspace{0.05cm}_{{\boldsymbol \sigma}(i)}\|x_{{\boldsymbol \sigma}(i)}\|^2\nonumber\\
=&\, -\frac{1}{N-1} {\rm Var}(X) \label{oijw.ni}
\end{align}
Substituting \eqref{oijw.ni} into \eqref{gwe090jg2}, we obtain:
\begin{align}
&\hspace{-5mm}\mathbb{E}\hspace{0.05cm}\Bigg(\sum_{j=1}^{n}\beta^{n-j} x_{{\boldsymbol \sigma}(j)}\Bigg)^{\mathsf{T}} x_{{\boldsymbol \sigma}(n+1)}\nonumber\\
=& -\frac{1}{N-n}\left(\sum_{j=1}^{n}\beta^{n-j} -\sum_{j=1}^n\beta^{n-j}\frac{n-1}{N-1}\right){\rm Var}(X) \nonumber\\
=&\; -\frac{1}{N-1} \sum_{j=1}^n \beta^{j-1} {\rm Var}(X)\label{eq.63}
\end{align}
Combining \eqref{gu9ng.xd}, \eqref{zdxc.CS}, and \eqref{eq.63}, we get:
\begin{align}
&\hspace{-3mm}f(n+1;X, \beta)\nonumber\\
=& \beta^2 f(n;X, \beta) + {\rm Var}(X) - \frac{2}{N-1} \sum_{j=1}^n \beta^{j} {\rm Var}(X)\nonumber\\
=&\left(\beta^2\frac{(\sum_{i=0}^{n-1}\beta^{2i})N\!-\!(\sum_{i=0}^{n-1}\beta^i)^2}{N-1} \!+\! 1 \!-\! \frac{2\sum_{j=1}^n \beta^{j}}{N-1}\right) {\rm Var}(X)
\nonumber\\
=&\frac{(\sum_{i=1}^{n}\beta^{2i})N-(\sum_{i=1}^{n}\beta^i)^2 + (N-1) - 2\sum_{j=1}^n \beta^{j}}{N-1} {\rm Var}(X)
\nonumber\\
=& \frac{(\sum_{i=0}^{n}\beta^{2i})N-(\sum_{i=0}^{n}\beta^i)^2}{N-1}{\rm Var}(X)
\end{align}
Hence, we conclude that (\ref{eq.lemma2}) is valid.
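The closed form \eqref{eq.lemma2} can also be verified independently by exact enumeration over all permutations of a small, centered sample set (an illustrative check; centering enforces the zero-sum property used in the proof):

```python
import itertools
import numpy as np

# Verify f(n; X, beta) = E || sum_{j=1}^n beta^{n-j} x_{sigma(j)} ||^2
# against its closed form, with the expectation computed exactly by
# enumerating all N! permutations of centered samples.
rng = np.random.default_rng(4)
N, n, beta = 5, 3, 0.9
X = rng.standard_normal((N, 2))
X -= X.mean(axis=0)                   # center: sum_n x_n = 0
var = np.mean(np.sum(X**2, axis=1))   # Var(X) = (1/N) sum_n ||x_n||^2
perms = list(itertools.permutations(range(N)))
f = 0.0
for sigma in perms:
    s = sum(beta ** (n - j) * X[sigma[j - 1]] for j in range(1, n + 1))
    f += np.sum(s**2)
f /= len(perms)
closed = ((sum(beta ** (2 * i) for i in range(n)) * N
           - sum(beta**i for i in range(n)) ** 2) / (N - 1)) * var
assert np.isclose(f, closed)
```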
Next, the proof of \eqref{eq.lemma2.2} is similar. It is easy to verify that $F(1;X, B) = R_s$. Assuming (\ref{eq.lemma2.2}) holds for $n$, we consider the case $n+1$:
\begin{align}
&\hspace{-5mm}F(n+1; X, B) \nonumber\\
=\;&\mathbb{E}\hspace{0.05cm} \!\left[\!\sum_{j=1}^{n}B^{n-j}x_{{\boldsymbol \sigma}(j)} + x_{{\boldsymbol \sigma}(n+1)}\!\right]\!\!\left[\sum_{j=1}^{n}x_{{\boldsymbol \sigma}(j)}^{\mathsf{T}} B^{n-j} +x_{{\boldsymbol \sigma}(n+1)}^{\mathsf{T}}\!\right]\nonumber\\
=\,& B F(n;X,B) B + \mathbb{E}\hspace{0.05cm}\sum_{j=1}^{n}B^{n-j}x_{{\boldsymbol \sigma}(j)} x_{{\boldsymbol \sigma}(n+1)}^{\mathsf{T}}\nonumber\\
&\;\; {}+\mathbb{E}\hspace{0.05cm}\sum_{j=1}^{n}x_{{\boldsymbol \sigma}(n+1)} x_{{\boldsymbol \sigma}(j)}^{\mathsf{T}} B^{n-j} + R_s\nonumber\\
\stackrel{(\ref{prop3})}{=}&B F(n;X,B) B - \frac{1}{N-n}\mathbb{E}\hspace{0.05cm}\sum_{j=1}^{n}\sum_{i=1}^{n}B^{n-j} x_{{\boldsymbol \sigma}(j)} x_{{\boldsymbol \sigma}(i)}^{\mathsf{T}}\nonumber\\
&\;\; {}-\frac{1}{N-n}\mathbb{E}\hspace{0.05cm}\sum_{j=1}^{n}\sum_{i=1}^{n} x_{{\boldsymbol \sigma}(i)} x_{{\boldsymbol \sigma}(j)}^{\mathsf{T}} B^{n-j} +R_s
\nonumber\\
\stackrel{(a)}{=}\,&B F(n;X,B)B - \frac{1}{N-1}\sum_{j=1}^{n}B^{n-j}R_s\nonumber\\
&\;\; {}-\frac{1}{N-1}\sum_{j=1}^{n}R_s B^{n-j} + R_s \label{9jgh2.d}
\end{align}
where in step (a) we use the same trick as in (\ref{eq.63}):
\eq{
&\hspace{-5mm}\mathbb{E}\hspace{0.05cm}\sum_{j=1}^{n}\sum_{i=1}^{n}B^{n-j}x_{{\boldsymbol \sigma}(j)}x_{{\boldsymbol \sigma}(i)}^{\mathsf{T}}\nonumber\\
=\,&\mathbb{E}\hspace{0.05cm}\sum_{j=1}^{n}B^{n-j}x_{{\boldsymbol \sigma}(j)}x_{{\boldsymbol \sigma}(j)}^{\mathsf{T}} +
\mathbb{E}\hspace{0.05cm} \sum_{j=1}^{n}\sum_{i\neq j}B^{n-j}x_{{\boldsymbol \sigma}(j)}x_{{\boldsymbol \sigma}(i)}^{\mathsf{T}}\nonumber\\
=\,&\sum_{j=1}^{n}B^{n-j}R_s - \frac{n-1}{N-1}\sum_{j=1}^{n}B^{n-j}R_s
}
Now if we substitute $F(n;X,B)$ according to (\ref{eq.lemma2.2}) into (\ref{9jgh2.d}), we conclude that the form of (\ref{eq.lemma2.2}) remains valid for $F(n+1; X, B)$, which completes the proof.
\section{Proof of Theorem \ref{main.theorem}} \label{app.main.theorem}
We introduce the eigen-decomposition \cite{Horn03}
\eq{
H = U\Lambda U^{\mathsf{T}}
}
where $U$ is orthogonal and $\Lambda$ is diagonal with positive entries. Transforming (\ref{eq.long_term.rec}) into the eigenvector space of $H$, we obtain:
\begin{align}
U^{\mathsf{T}} \widetilde{{\boldsymbol{w}}}_0^{\prime k+1} = (I-\mu\Lambda)^N U^{\mathsf{T}} \widetilde{{\boldsymbol{w}}}_0^{\prime k} - \mu U^{\mathsf{T}} s'({\boldsymbol{w}}_0^k) \label{89g23.ge}
\end{align}
Let
\begin{equation}
\bar {\boldsymbol{w}}_0^{k} \;\stackrel{\Delta}{=}\; U^{\mathsf{T}} \widetilde{{\boldsymbol{w}}}_0^{\prime k}
\end{equation}
and introduce any positive-definite matrix $\Sigma$. Computing the weighted squared norm of both sides of (\ref{89g23.ge}) and taking expectations, we get
\begin{align}
\mathbb{E}\hspace{0.05cm} \|\bar {\boldsymbol{w}}_0^{k+1}\|^2_{\Sigma} \stackrel{\eqref{noise.zero}}{=} \mathbb{E}\hspace{0.05cm} \|(I-\mu\Lambda)^N\bar {\boldsymbol{w}}_0^{k}\|^2_{\Sigma} + \mu^2\mathbb{E}\hspace{0.05cm} \| U^{\mathsf{T}} s'({\boldsymbol{w}}_0^k)\|^2_\Sigma
\end{align}
where $\|x\|^2_\Sigma \;\stackrel{\Delta}{=}\; x^{\mathsf{T}}\Sigma x$ and we are free to choose $\Sigma$. The cross term is canceled thanks to property \eqref{noise.zero}.
Letting $k\to\infty$, we get:
\begin{align}
\lim_{k\to \infty} \mathbb{E}\hspace{0.05cm} \|\bar{{\boldsymbol{w}}}_{0}^k\|^2_{\Sigma - (I-\mu \Lambda)^N\Sigma (I-\mu \Lambda)^N} = \lim_{k\to \infty}\mu^2\mathbb{E}\hspace{0.05cm} \| U^{\mathsf{T}} s'({\boldsymbol{w}}_0^k)\|^2_\Sigma \label{eq.limit}
\end{align}
To recover the mean-square-deviation $\mathbb{E}\hspace{0.05cm} \|\bar{{\boldsymbol{w}}}_{0}^k\|^2$, we choose $\Sigma$ as the solution to the Lyapunov equation:
\begin{equation}
\Sigma - (I-\mu \Lambda)^N\Sigma (I-\mu \Lambda)^N = I\label{8hj9g.3}
\end{equation}
which is given by
\begin{align}
\Sigma^\star = \sum_{k=0}^\infty (I - \mu \Lambda)^{2N k}
=& \left(I - (I-\mu \Lambda)^{2N}\right)^{-1} \label{eq.weight.solution}
\end{align}
{\color{black} where we require $\mu<\frac{1}{\delta}$ for the convergence of the series.}
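The solution \eqref{eq.weight.solution} of the Lyapunov equation \eqref{8hj9g.3} is straightforward to verify numerically (illustrative values for $\Lambda$, $\mu$, and $N$):

```python
import numpy as np

# Check that Sigma* = (I - (I - mu Lambda)^{2N})^{-1} solves
# Sigma - (I - mu Lambda)^N Sigma (I - mu Lambda)^N = I.
Lam = np.diag([0.5, 1.0, 2.0])       # diagonal with positive entries
mu, N = 0.01, 20                     # mu < 1/delta = 0.5
I = np.eye(3)
A = np.linalg.matrix_power(I - mu * Lam, N)
Sigma = np.linalg.inv(I - np.linalg.matrix_power(I - mu * Lam, 2 * N))
assert np.allclose(Sigma - A @ Sigma @ A, I)
```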
The desired MSD is given by:
\begin{align}
{\rm MSD}^{\rm lt}_{\rm RR} \;\stackrel{\Delta}{=}\;\lim_{k\to \infty} \mathbb{E}\hspace{0.05cm} \|\widetilde{{\boldsymbol{w}}}_{0}^{\prime k}\|^2
=\lim_{k\to \infty} \mathbb{E}\hspace{0.05cm} \|\bar{{\boldsymbol{w}}}_{0}^k\|^2
\end{align}
and, hence,
\begin{align}
&\hspace{-5mm}\lim_{k\to \infty} \mathbb{E}\hspace{0.05cm} \|\bar{{\boldsymbol{w}}}_{0}^k\|^2
\nonumber\\
\stackrel{(\ref{eq.limit})}{=}&\,
\lim_{k\to \infty}\mu^2\mathbb{E}\hspace{0.05cm} \| U^{\mathsf{T}} s'({\boldsymbol{w}}^k_0)\|^2_{\Sigma^\star}\nonumber\\
=&\, \lim_{k\to \infty}\mu^2 \mbox{\rm {\small Tr}}\left(U\Sigma^\star U^{\mathsf{T}} \mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k) s'({\boldsymbol{w}}_0^k)^{\mathsf{T}}\right) \nonumber\\
=&\,\lim_{k\to \infty}\mu^2 \mbox{\rm {\small Tr}}\left(U\Sigma^\star U^{\mathsf{T}} \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\right) +
\nonumber\\
&\;\;\lim_{k\to \infty}\mu^2 \mbox{\rm {\small Tr}}\left(U\Sigma^\star U^{\mathsf{T}} \big(\mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}} - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\big)\right)\nonumber\\
=& \mu^2 \mbox{\rm {\small Tr}}\left(U\Sigma^\star U^{\mathsf{T}} \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\right) + O(\mu^4) \label{89jgws.ge}
\end{align}
The proof of the last equality is provided in Appendix \ref{g892.gf3}.
Combining \eqref{eq.weight.solution} and the fact that $U$ is the eigenvector matrix of $H$, we get:
\eq{
&\hspace{-5mm}{\rm MSD}^{\rm lt}_{\rm RR} \nonumber\\
=\,&\mu^2 \mbox{\rm {\small Tr}}\left(U\sum_{k=0}^\infty (I - \mu \Lambda)^{2N k} U^{\mathsf{T}} \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\right) + O(\mu^4)\nonumber\\
=\,&\mu^2 \mbox{\rm {\small Tr}}\left(\sum_{k=0}^\infty (I - \mu H)^{2N k} \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\right) + O(\mu^4)\nonumber\\
=\,&\mu^2 \mbox{\rm {\small Tr}}\left(\big(I-(I-\mu H)^{2N}\big)^{-1} R_s^{\prime \star}\right) + O(\mu^4)\label{msd.lt}
}
{\color{black}
As for the convergence rate, we can follow the same argument in \cite[Chapter 4]{sayed2014adaptation} to get
\eq{
\alpha \;\stackrel{\Delta}{=}\; (1 - \mu \lambda_{\min}(H))^{2N} \approx 1 - 2\mu \lambda_{\min}(H) N
}
}
\vspace{-3mm}
{\color{black}
\section{Proof of Theorem \ref{theorem.iterations}}\label{app.theorem.iterations}
Using a similar approach to \eqref{eq.error_expand}, we have
\eq{
\widetilde{{\boldsymbol{w}}}_{i}^{\prime k}
= (I-\mu H)^i\widetilde{{\boldsymbol{w}}}_{0}^{\prime k} - \mu\underbrace{\sum_{j=1}^{i}(I-\mu H)^{i-j} s_{{\boldsymbol \sigma}^k(j)}({\boldsymbol{w}}_{0}^k)}_{\;\stackrel{\Delta}{=}\; s_i'({\boldsymbol{w}}_0^k)} \label{eq.long_term.i}\raisetag{4mm}
}
where
\eq{
\mathbb{E}\hspace{0.05cm}[s_i'({\boldsymbol{w}}_0^k) | {\boldsymbol{w}}_0^k] = \sum_{j=1}^{i}(I-\mu H)^{i-j} \mathbb{E}\hspace{0.05cm} \big[s_{{\boldsymbol \sigma}^k(j)}({\boldsymbol{w}}_{0}^k) | {\boldsymbol{w}}_0^k\big] =0\nonumber
}
Computing the squared norm and taking expectations we get:
\eq{
\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{i}^{\prime k}\|^2 = \mathbb{E}\hspace{0.05cm}\|(I-\mu H)^i\widetilde{{\boldsymbol{w}}}_{0}^{\prime k}\|^2 + \mu^2\mathbb{E}\hspace{0.05cm}\|s_i'({\boldsymbol{w}}_0^k)\|^2
}
We assume $\mu$ is sufficiently small so that $\|I-\mu H\| \leq 1-\mu\nu$, i.e., we require $\mu \leq \frac{2}{\nu+\delta}$, and let $t = (1-\mu\nu)^i$. Then,
\eq{
\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{i}^{\prime k}\|^2 \leq(1-\mu\nu)^{2i}\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{0}^{\prime k}\|^2 + \mu^2 \mathbb{E}\hspace{0.05cm}\|s_i'({\boldsymbol{w}}_0^k)\|^2
}
From Lemma \ref{lemma.2}, we know that
\eq{
R^{\prime\star}_{s,i}\;\stackrel{\Delta}{=}\;&\mathbb{E}\hspace{0.05cm} s_i'(w^\star) s_i'(w^\star)^{\mathsf{T}} \nonumber\\
=&\; \frac{N\left(\sum_{j=0}^{i-1} (I-\mu H)^jR_s^\star(I-\mu H)^j\right)}{N-1} \\
&\;\; {}- \frac{\big[\sum_{j=0}^{i-1}(I-\mu H)^j\big] R_s^\star \big[\sum_{j=0}^{i-1}(I-\mu H)^j\big]}{N-1} \nonumber
}
and
\eq{
\mathbb{E}\hspace{0.05cm}\|s_i'(w^\star)\|^2 = \mbox{\rm {\small Tr}}\big(\mathbb{E}\hspace{0.05cm} s_i'(w^\star) s_i'(w^\star)^{\mathsf{T}}\big)
}
Letting $k\to\infty$, we obtain:\vspace{0.5mm}
\eq{
\lim_{k\to\infty}\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{i}^{\prime k}\|^2\leq&
(1-\mu\nu)^{2i}{\rm MSD}_{\rm RR}^{\rm lt} + \mu^2 \mbox{\rm {\small Tr}}\big(R^{\prime\star}_{s,i}\big) + O(\mu^4) \label{89n.sdg}
}
where the $O(\mu^4)$ term comes from the same argument in \eqref{89jgws.ge}.
Substituting the result \eqref{msd.lt}, we have
\eq{
\lim_{k\to\infty}\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_{i}^{\prime k}\|^2\leq&(1-\mu\nu)^{2i}\mu^2\mbox{\rm {\small Tr}}\left(\big(I-(I-\mu H)^{2N}\big)^{-1} R_{s}^{\prime \star}\right)\nonumber\\
& +\mu^2\mbox{\rm {\small Tr}}\left( R_{s,i}^{\prime \star}\right) + O(\mu^4)\label{4h3dx}
}
Lastly, we multiply the second term of \eqref{4h3dx} by $\Big(1\!-\!(1\!-\!\mu\nu)^{2i}\Big)$ and its inverse, which results in \eqref{2389g.cs}.
}
\section{Mismatch of Gradient Noise in (\ref{89jgws.ge})}\label{g892.gf3}\vspace{-2mm}
In this appendix, we will show that
\eq{
&\hspace{-3mm}\lim_{k\to \infty}\mu^2 \mbox{\rm {\small Tr}}\left(U\Sigma^\star U^{\mathsf{T}} \big(\mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}} - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\big)\right)\nonumber\\
&= O(\mu^4)
}
which is equivalent to showing
\eq{
&\hspace{-3mm}\lim_{k\to \infty}\mbox{\rm {\small Tr}}\left(U\Sigma^\star U^{\mathsf{T}} \big(\mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}} - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\big)\right)\nonumber\\
&= O(\mu^2)
}
Using the inequality $|\mbox{\rm {\small Tr}}(X)| \leq c\|X\|$, which holds for any square matrix $X$ and some constant $c$ depending only on the dimension,
we can focus on the norm instead of the trace:
\eq{
&\hspace{-4mm}\left\|U\Sigma^\star U^{\mathsf{T}} \big(\mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}} - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\big)\right\|\nonumber\\
&\leq \|U\Sigma^\star U^{\mathsf{T}}\| \left\|\mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}} - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\right\|
\nonumber\\
&= O(1/\mu)\left\|\mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}} - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\right\|
}
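The trace bound invoked above can be sanity-checked numerically. The following minimal sketch assumes $\|\cdot\|$ is the spectral norm, in which case the constant $c$ can be taken to be the dimension $d$ (the test matrices are illustrative, not model quantities):

```python
# Sanity check of |Tr(X)| <= c * ||X||: with the spectral norm, c = d works,
# since |Tr(X)| <= sum of |eigenvalues| <= d * spectral radius <= d * ||X||_2.
import numpy as np

def trace_bound_holds(X):
    d = X.shape[0]
    return abs(np.trace(X)) <= d * np.linalg.norm(X, 2) + 1e-12

rng = np.random.default_rng(0)
assert all(trace_bound_holds(rng.standard_normal((d, d))) for d in (2, 5, 10))
```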
where the last equality is due to
\eq{
\|U\Sigma^\star U^{\mathsf{T}}\|
=&\left\|\big(I -(I-\mu \Lambda)^{2N}\big)^{-1}\right\|\nonumber\\
=&\left\|\big(2N\mu \Lambda + O(\mu^2)\big)^{-1}\right\|\nonumber\\
=& O(1/\mu)
}
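The $O(1/\mu)$ scaling can be checked in the scalar case. The following sketch (the values of $\lambda$ and $N$ are illustrative, not tied to the model) verifies that $\big(1-(1-\mu\lambda)^{2N}\big)^{-1}$ behaves like $1/(2N\mu\lambda)$ for small $\mu$:

```python
# Scalar analogue of ||(I - (I - mu*Lambda)^{2N})^{-1}||: for small mu,
# 1 - (1 - mu*lam)^{2N} = 2*N*mu*lam + O(mu^2), so its inverse is O(1/mu).
def inv_gap(mu, lam=0.5, N=10):
    return 1.0 / (1.0 - (1.0 - mu * lam) ** (2 * N))

# halving mu roughly doubles inv_gap, i.e. it scales like 1/mu
assert abs(inv_gap(1e-4) / inv_gap(2e-4) - 2.0) < 0.01
# and it matches the first-order prediction 1/(2*N*mu*lam)
assert abs(inv_gap(1e-4) * (2 * 10 * 1e-4 * 0.5) - 1.0) < 0.01
```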
This result implies that we now need to show
\eq{
\lim_{k\to \infty}\left\|\mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}} - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\right\| = O(\mu^3)\nonumber
}
Since we have already established an expression for the covariance matrix of the gradient noise in \eqref{agg.noise.result}, we have:
\begin{align}
&\hspace{-5mm}\mathbb{E}\hspace{0.05cm} [s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}}\,|\,{\boldsymbol{w}}_0^k]\nonumber\\
=&\; \frac{N\left(\sum_{i=0}^{N-1} (I-\mu H)^iR_s^k(I-\mu H)^i\right)}{N-1}\nonumber\\
&\;\; {}- \frac{\big[\sum_{i=0}^{N-1}(I-\mu H)^i\big] R_s^k \big[\sum_{i=0}^{N-1}(I-\mu H)^i\big]}{N-1}
\end{align}
Thus,
\eq{
&\hspace{-7mm}\mathbb{E}\hspace{0.05cm} [s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}}\,|\,{\boldsymbol{w}}_0^k] - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}
\nonumber\\
=&\, \frac{N\left(\sum_{i=0}^{N-1} (I-\mu H)^i\widetilde{R}_s^k(I-\mu H)^i\right)}{N-1}\nonumber\\
&\; {}- \frac{\big[\sum_{i=0}^{N-1}(I-\mu H)^i\big] \widetilde{R}_s^k \big[\sum_{i=0}^{N-1}(I-\mu H)^i\big]}{N-1}\label{f32h78fwf}
}
where
\eq{
\widetilde{R}_s^k \;\stackrel{\Delta}{=}\;& R_s^k - R_s^\star \label{Rs_tilde}\\
R_s^k \;\stackrel{\Delta}{=}\;&\frac{1}{N} \sum_{n=1}^N {\boldsymbol{s}}_n({\boldsymbol{w}}_0^k) {\boldsymbol{s}}_n({\boldsymbol{w}}_0^k)^{\mathsf{T}} \label{Rs_k}\\
R_s^\star \;\stackrel{\Delta}{=}\;&\frac{1}{N} \sum_{n=1}^N {\boldsymbol{s}}_n(w^\star) {\boldsymbol{s}}_n(w^\star)^{\mathsf{T}} \label{Rs_star}
}
To simplify the notation, we rewrite the first term as follows:
\eq{
&\hspace{-7mm}N\left(\sum_{i=0}^{N-1} (I-\mu H)^i\widetilde{R}_s^k(I-\mu H)^i\right)\nonumber\\
=&\; \sum_{i=0}^{N-1}\sum_{j=0}^{N-1}(I-\mu H)^i\widetilde{R}_s^k(I-\mu H)^i\label{r32h89.g32}
}
Similarly, the second term:\vspace{-1mm}
\eq{
&\hspace{-7mm}\left[\sum_{i=0}^{N-1}(I-\mu H)^i\right] \widetilde{R}_s^k \left[\sum_{i=0}^{N-1}(I-\mu H)^i\right]\nonumber\\[-1mm]
=&\; \sum_{i=0}^{N-1}\sum_{j=0}^{N-1}(I-\mu H)^i\widetilde{R}_s^k(I-\mu H)^j\label{r2h389.ghj8}
}
Subtracting (\ref{r2h389.ghj8}) from (\ref{r32h89.g32}) and dividing by $N-1$, we obtain (in the following, the notation $O(\mu^m)$ denotes a matrix whose entries can each be bounded by $O(\mu^m)$):
\eq{
&\hspace{-6mm}
\mathbb{E}\hspace{0.05cm} [s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}}\,|\,{\boldsymbol{w}}_0^k] - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\nonumber\\
=&\;\frac{1}{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}(I-\mu H)^i\widetilde{R}_s^k[(I-\mu H)^i - (I-\mu H)^j]\nonumber\\
\stackrel{(a)}{=}&\;\frac{1}{N-1}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}(I-\mu H)^i\widetilde{R}_s^k[\mu (j-i)H + O(\mu^2) ]
\nonumber\\
\stackrel{(b)}{=}&\;\frac{1}{N-1}\mu \sum_{i=0}^{N-1}\sum_{j=0}^{N-1}(I-\mu H)^i\widetilde{R}_s^k (j-i)H + \widetilde{R}_s^kO(\mu^2)
\nonumber\\
\stackrel{(c)}{=} &\;\frac{1}{N-1} \mu \sum_{i=0}^{N-1}\sum_{j=0}^{N-1}\big(I+O(\mu)\big)\widetilde{R}_s^k (j-i)H + \widetilde{R}_s^kO(\mu^2)\nonumber\\
=&\;\frac{1}{N-1}\mu \underbrace{\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}\widetilde{R}_s^k (j-i)}_{=0}H +
O(\mu^2)\widetilde{R}_s^kH+ \widetilde{R}_s^kO(\mu^2)\nonumber\\
=&\;O(\mu^2)\widetilde{R}_s^k H+ \widetilde{R}_s^kO(\mu^2) \label{r31289.g1}
}
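The cancellation of the first-order term above can also be verified numerically. The following sketch uses small random test matrices in place of $H$ and $\widetilde{R}_s^k$ (illustrative only) and checks that the double-sum mismatch scales as $O(\mu^2)$, i.e. it quarters when $\mu$ is halved:

```python
# Numerical check that the first-order terms cancel: the mismatch
# E(mu) = sum_{i,j} (I - mu H)^i Rt [ (I - mu H)^i - (I - mu H)^j ]
# scales like mu^2, so E(mu/2) is about E(mu)/4.
import numpy as np

def mismatch(mu, H, Rt, N):
    A = np.eye(H.shape[0]) - mu * H
    P = [np.linalg.matrix_power(A, i) for i in range(N)]
    return sum(P[i] @ Rt @ (P[i] - P[j]) for i in range(N) for j in range(N))

rng = np.random.default_rng(1)
H = rng.standard_normal((3, 3)); H = H + H.T   # symmetric, like a Hessian
Rt = rng.standard_normal((3, 3))
r = np.linalg.norm(mismatch(1e-4, H, Rt, 5)) / np.linalg.norm(mismatch(5e-5, H, Rt, 5))
assert abs(r - 4.0) < 0.05
```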
where steps (a) and (c) use the binomial expansion, and step (b) assumes the step-size is small enough so that $I-\mu H$ is stable.
Next, we conclude:
\eq{
&\hspace{-4mm}\left\|\mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}} - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}}\right\|\nonumber\\
=\,&\left\|\mathbb{E}\hspace{0.05cm}_{{\boldsymbol{w}}^k_0}\Big[ \mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}}\,|\, {\boldsymbol{w}}_0^k\Big] - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}} \right\|
\nonumber\\
\stackrel{(a)}{\leq}\,&\,\mathbb{E}\hspace{0.05cm}_{{\boldsymbol{w}}^k_0}\left\|\Big[ \mathbb{E}\hspace{0.05cm} s'({\boldsymbol{w}}_0^k)s'({\boldsymbol{w}}_0^k)^{\mathsf{T}}\,|\, {\boldsymbol{w}}_0^k\Big] - \mathbb{E}\hspace{0.05cm} s'(w^\star) s'(w^\star)^{\mathsf{T}} \right\|
\nonumber\\
\stackrel{(\ref{r31289.g1})}{=}&\mathbb{E}\hspace{0.05cm}_{{\boldsymbol{w}}^k_0}\|O(\mu^2)\widetilde{R}_s^k H+ \widetilde{R}_s^kO(\mu^2)\|
\nonumber\\
\leq\,&\, O(\mu^2)\mathbb{E}\hspace{0.05cm}\| \widetilde{R}_s^k\|
}
where step (a) applies Jensen's inequality. Lastly, we prove
\begin{equation}
\lim_{k\to\infty} \mathbb{E}\hspace{0.05cm}\| \widetilde{R}_s^k\| = O(\mu)
\end{equation}
From \eqref{Rs_tilde}-\eqref{Rs_star}, we have
\eq{\label{28hanelkcuys}
\widetilde{R}_s^k =& R_s^k - R_s^\star \nonumber \\
=&\;\frac{1}{N}\sum_{n=1}^{N}\left[ s_n({\boldsymbol{w}}_0^k) s_n({\boldsymbol{w}}_0^k)^{\mathsf{T}} - s_n(w^\star) s_n(w^\star)^{\mathsf{T}} \right] \nonumber \\
=&\; \frac{1}{N}\sum_{n=1}^{N}\left[ s_n({\boldsymbol{w}}_0^k) [s_n({\boldsymbol{w}}_0^k)-s_n(w^\star)]^{\mathsf{T}}\right.
\nonumber \\&\;\hspace{12mm}
\left.{}+[s_n({\boldsymbol{w}}_0^k)- s_n(w^\star)] s_n(w^\star)^{\mathsf{T}}\right]
}
Next, it is easy to verify that $s_{n}(w)$ is also $2\delta$-Lipschitz continuous:
\eq{
&\hspace{-6mm}\|s_{n}({\boldsymbol{w}}_{0}^k)- s_{n}(w^\star)\|
\nonumber\\
\leq&\; \|\nabla J({\boldsymbol{w}}_{0}^k) - \nabla J(w^\star)\|
+ \|\nabla Q({\boldsymbol{w}}_{0}^k; x_{n})-\nabla Q(w^\star; x_{n}) \|
\nonumber\\
\stackrel{(\ref{eq-ass-cost-lc-e})}{\leq}&\; 2\delta \|\widetilde{{\boldsymbol{w}}}^k_0\|\label{noise.lipschitz}
}
Taking the expectation of the norm of \eqref{28hanelkcuys}:
\eq{
\mathbb{E}\hspace{0.05cm}\|\widetilde{R}_s^k\|\leq&\, \frac{1}{N}\sum_{n=1}^{N}\mathbb{E}\hspace{0.05cm}\left\| s_n({\boldsymbol{w}}_0^k) [s_n({\boldsymbol{w}}_0^k)-s_n(w^\star)]^{\mathsf{T}}\right.
\nonumber \\&\;\hspace{12mm}
\left.{}+[s_n({\boldsymbol{w}}_0^k)- s_n(w^\star)] s_n(w^\star)^{\mathsf{T}}\right\|
\nonumber \\
\stackrel{\eqref{noise.lipschitz}}{\leq}&\;\frac{2}{N}\sum_{n=1}^{N}\mathbb{E}\hspace{0.05cm} \left(\|s_n({\boldsymbol{w}}_0^k) \| \delta\|\widetilde{{\boldsymbol{w}}}^k_0\| + \delta\|\widetilde{{\boldsymbol{w}}}^k_0\| \|s_n(w^\star)\|\right)
\nonumber\\
\leq&\frac{2\delta}{N}\!\sum_{n=1}^{N} \!\sqrt{\mathbb{E}\hspace{0.05cm}\! \|s_n({\boldsymbol{w}}_0^k) \|^2\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}^k_0\|^2} \!+\! \sqrt{\mathbb{E}\hspace{0.05cm}\!\|\widetilde{{\boldsymbol{w}}}^k_0\|^2} \|s_n(w^\star)\|\label{389jg4.3}
}
where the last inequality exploits the Cauchy--Schwarz inequality.
Next, as we prove in Theorem \ref{lemma.start.pont}, when $k\gg 1$:
\eq{
\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}^k_0\|^2 \,=&\; O(\mu^2)\\
\mathbb{E}\hspace{0.05cm} \|s_n({\boldsymbol{w}}_0^k) \|^2 \leq\,&\;2\mathbb{E}\hspace{0.05cm} \|s_n({\boldsymbol{w}}_0^k) -s_n(w^\star)\|^2 +2\mathbb{E}\hspace{0.05cm} \|s_n(w^\star)\|^2\nonumber\\
\leq& O(\mu^2)+O(1) = O(1)
}
Substituting the previous results into (\ref{389jg4.3}), we conclude
\eq{
\mathbb{E}\hspace{0.05cm}\|\widetilde{R}_s^k\|\leq&\, \frac{2\delta}{N}\sum_{n=1}^{N}\left(\sqrt{O(\mu^2)O(1)} + \sqrt{O(\mu^2)}O(1)\right)
\nonumber\\
=&\;O(\mu), \;\; k\gg1
}\vspace{-3mm}
\section{Bound on long-term difference}\label{mismatch.proof}\vspace{-2mm}
Subtracting (\ref{eq.error_expand}) from (\ref{eq.long_term.rec}) and then taking the conditional expectation, we obtain:
\begin{align}
&\hspace{-3mm}\mathbb{E}\hspace{0.05cm}\big[\,\|\widetilde{{\boldsymbol{w}}}_0^{k+1} - \widetilde{{\boldsymbol{w}}}_0^{\prime k+1}\|^2 \,|\,\widetilde{{\boldsymbol{w}}}_0^{k} , \widetilde{{\boldsymbol{w}}}_0^{\prime k}\,\big]
\nonumber\\
\leq&\; \frac{1}{t}\|(I-\mu H)^N\|^2 \|\widetilde{{\boldsymbol{w}}}_{0}^k -\widetilde{{\boldsymbol{w}}}_{0}^{\prime k}\|^2\nonumber\\
& +\frac{2\mu^2}{1-t}\! \underbrace{\mathbb{E}\hspace{0.05cm} \!\left\|\sum^N_{i=1} ( I \!-\! \mu H )^{N-i} \left(s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{i-1}^k)\!-\! s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{0}^k)\right)\right\|^2}_{B}
\nonumber\\
& + \frac{2\mu^2}{1-t} \underbrace{\mathbb{E}\hspace{0.05cm}\left\|\sum^N_{i=1} ( I - \mu H )^{N-i} \xi({\boldsymbol{w}}_{i-1}^k)\right\|^2}_{C}\label{h89gh92}
\end{align}
where we exploit Jensen's inequality with $0<t<1$.
In the following, we assume the step size is sufficiently small so that:
\eq{
\|I-\mu H\| \leq 1-\mu \nu
}
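The inequality behind the $1/t$ and $1/(1-t)$ factors above is the weighted bound $\|a+b\|^2\le \frac{1}{t}\|a\|^2+\frac{1}{1-t}\|b\|^2$ for $0<t<1$, a consequence of Young's inequality applied to the cross term. A quick numerical sketch:

```python
# For 0 < t < 1: ||a + b||^2 <= (1/t)||a||^2 + (1/(1-t))||b||^2, which follows
# from 2<a,b> <= ((1-t)/t)||a||^2 + (t/(1-t))||b||^2 (Young's inequality).
import random

def holds(a, b, t):
    lhs = sum((x + y) ** 2 for x, y in zip(a, b))
    rhs = sum(x * x for x in a) / t + sum(y * y for y in b) / (1 - t)
    return lhs <= rhs + 1e-9

random.seed(0)
for _ in range(100):
    a = [random.gauss(0, 1) for _ in range(4)]
    b = [random.gauss(0, 1) for _ in range(4)]
    assert holds(a, b, random.uniform(0.01, 0.99))
```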
\noindent Now, we find a tighter bound on the $B$ term:
\eq{
B\stackrel{(a)}{\leq}&\mathbb{E}\hspace{0.05cm} \!\left(\!\sum^N_{i=1}\left\|\! ( I \!-\! \mu H )^{N-i}\right\|\! \left\|s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{i-1}^k)\!-\! s_{{\boldsymbol \sigma}^k(i)}({\boldsymbol{w}}_{0}^k)\right\|\!\right)^2
\nonumber\\
\stackrel{\eqref{noise.lipschitz}}{\leq}&\,\mathbb{E}\hspace{0.05cm} \left(\sum^N_{i=1}\left\| ( I - \mu H )^{N-i}\right\| 2\delta\left\| {\boldsymbol{w}}_{i-1}^k - {\boldsymbol{w}}_{0}^k\right\|\right)^2
\nonumber\\
=&\,4\delta^2\mu^2\sum_{i=1}^N \sum_{j=1}^N \|I-\mu H\|^{(N-i)+(N-j)}\times
\nonumber\\
&\;\; \mathbb{E}\hspace{0.05cm}\!\left\| \!\sum_{n=1}^{i-1}{\nabla} Q({\boldsymbol{w}}_{n-1}^k;x_{{\boldsymbol \sigma}^k(n)})\right\|\!
\left\| \sum_{n=1}^{j-1}{\nabla}\! Q({\boldsymbol{w}}_{n-1}^k;x_{{\boldsymbol \sigma}^k(n)})\right\|
\nonumber\\
\stackrel{(b)}{\leq}&\,4\delta^2\mu^2\sum_{i=1}^N \sum_{j=1}^N (1-\mu\nu)^{(N-i)+(N-j)}\times
\nonumber\\
& \!\sqrt{\!\mathbb{E}\hspace{0.05cm}\!\left\|\! \sum_{n=1}^{i-1}\hspace{-0.5mm}{\nabla}\hspace{-0.5mm} Q({\boldsymbol{w}}_{n-1}^k;x_{{\boldsymbol \sigma}^k(n)})\right\|^2 \!\hspace{-1.2mm}\mathbb{E}\hspace{0.05cm}\!
\left\|\! \sum_{n=1}^{j-1}\hspace{-0.5mm}{\nabla}\hspace{-0.5mm} Q({\boldsymbol{w}}_{n-1}^k;x_{{\boldsymbol \sigma}^k(n)})\right\|^2}
\nonumber\\
=\,&4\delta^2\mu^2\hspace{-1mm}\left(\hspace{-0.7mm}\sum_{i=1}^N (1\!-\!\mu\nu)^{N-i}\hspace{-1.2mm}
\sqrt{\mathbb{E}\hspace{0.05cm}\Bigg\| \sum_{n=1}^{i-1}{\nabla} Q\big({\boldsymbol{w}}^k_{n-1};x_{{\boldsymbol \sigma}^k(n)}\big)\Bigg\|^2}\right)^2
\nonumber\\
\label{24h89f3.g32}
}
where step (a) exploits the triangle inequality and the sub-multiplicative property of norms, the subsequent equality substitutes ${\boldsymbol{w}}_{i-1}^k-{\boldsymbol{w}}_{0}^k=-\mu\sum_{n=1}^{i-1}\nabla Q({\boldsymbol{w}}_{n-1}^k;x_{{\boldsymbol \sigma}^k(n)})$, and step (b) uses the Cauchy--Schwarz inequality.
Then, we establish the following when $k$ is large enough:
\eq{
&\hspace{-5mm}\mathbb{E}\hspace{0.05cm}\Big\| \sum_{n=1}^{i-1}{\nabla}_w Q\big({\boldsymbol{w}}^k_{n-1};x_{{\boldsymbol \sigma}^k(n)}\big)\Big\|^2 \nonumber\\
=&\,\mathbb{E}\hspace{0.05cm}\Bigg\| \sum_{n=1}^{i-1}\Big({\nabla} Q({\boldsymbol{w}}_0^k;x_{{\boldsymbol \sigma}^k(n)}) - {\nabla} Q(w^\star;x_{{\boldsymbol \sigma}^k(n)}) +\nonumber\\
&\;\;\;\;{\nabla} Q(w^\star;x_{{\boldsymbol \sigma}^k(n)}) \Big)\Bigg\|^2
\nonumber\\
\stackrel{}{\leq}&\,2\mathbb{E}\hspace{0.05cm}\Bigg\| \sum_{n=1}^{i-1}{\nabla}_w Q\big(w^\star;x_{{\boldsymbol \sigma}^k(n)}\big)\Bigg\|^2 \nonumber\\
&\;\;+2\mathbb{E}\hspace{0.05cm}\Bigg\| \sum_{n=1}^{i-1}\left({\nabla} Q({\boldsymbol{w}}_0^k;x_{{\boldsymbol \sigma}^k(n)}) - {\nabla} Q(w^\star;x_{{\boldsymbol \sigma}^k(n)})
\right)\Bigg\|^2
\nonumber\\
=&\,2\frac{(i-1)N-(i-1)^2}{N-1}{\mathcal{K}} + O(\mu^2)
}
where the last equality holds because we already concluded from Lemma \ref{lemma.2} and \eqref{gradient.noise} that
\eq{
\mathbb{E}\hspace{0.05cm}\left\| \sum_{n=1}^{i-1}\hspace{-0.5mm}{\nabla}_w Q(w^\star;x_{{\boldsymbol \sigma}^k(n)})\right\|^2 =& \frac{(i-1)N-(i-1)^2}{N-1}{\mathcal{K}}
}
Moreover, we know that for sufficiently large $k$:
\eq{
&\hspace{-5mm}\mathbb{E}\hspace{0.05cm}\left\| \sum_{n=1}^{i-1}\left({\nabla}_w Q({\boldsymbol{w}}_0^k;x_{{\boldsymbol \sigma}^k(n)}) - {\nabla}_w Q(w^\star;x_{{\boldsymbol \sigma}^k(n)}) \right)\right\|^2
\nonumber\\
\leq&\,(i-1)\mathbb{E}\hspace{0.05cm}\sum_{n=1}^{i-1}\left\| {\nabla}_w Q({\boldsymbol{w}}_0^k;x_{{\boldsymbol \sigma}^k(n)}) - {\nabla}_w Q(w^\star;x_{{\boldsymbol \sigma}^k(n)})\right\|^2
\nonumber\\
\leq&\, \delta^2 (i-1)\mathbb{E}\hspace{0.05cm}\sum_{n=1}^{i-1}\|\widetilde{{\boldsymbol{w}}}_{0}^k\|^2
\nonumber\\
=&\,O(\mu^2)
}
Substituting previous results into (\ref{24h89f3.g32}):
\eq{
B
\leq&4\delta^2\mu^2\Bigg(\sum_{i=1}^N (1-\mu\nu)^{N-i}\times\\
&\hspace{15mm}
\sqrt{2\frac{(i-1)N-(i-1)^2}{N-1}+O(\mu^2)}\Bigg)^2\!{\mathcal{K}}\nonumber
}
We know for any $0\leq i\leq N$
\eq{
\frac{(i-1)N-(i-1)^2}{N-1}\leq \frac{N^2}{4(N-1)}
}
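This is the maximum of the concave quadratic $q(u)=uN-u^2$ at $u=N/2$; a quick check over integer $i$:

```python
# (i-1)*N - (i-1)^2 is maximized over integers 0 <= i <= N near i-1 = N/2,
# where it is at most N^2/4 (with equality for even N).
def max_q(N):
    return max((i - 1) * N - (i - 1) ** 2 for i in range(N + 1))

assert all(max_q(N) <= N * N / 4 for N in (2, 3, 10, 11, 100))
assert max_q(4) == 4  # equality for even N: (N/2)*N - (N/2)^2 = N^2/4
```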
and, hence,
\eq{
B\leq& 4\delta^2\mu^2\left(\frac{N^2}{2(N-1)} + O(\mu^2)\right) \left(\frac{1-(1-\mu\nu)^N}{\mu\nu}\right)^2{\mathcal{K}}\nonumber\\
=&\frac{2\delta^2N^2}{\nu^2(N-1)}(1-(1-\mu\nu)^N)^2{\mathcal{K}} +O(\mu^2)\label{bound.B}
}
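The derivation above uses the geometric-sum identity $\sum_{i=1}^{N}(1-\mu\nu)^{N-i}=\frac{1-(1-\mu\nu)^N}{\mu\nu}$, which is easy to confirm:

```python
# sum_{i=1}^N q^(N-i) = (1 - q^N)/(1 - q) with q = 1 - mu*nu.
def geo_lhs(mu_nu, N):
    return sum((1.0 - mu_nu) ** (N - i) for i in range(1, N + 1))

def geo_rhs(mu_nu, N):
    return (1.0 - (1.0 - mu_nu) ** N) / mu_nu

assert abs(geo_lhs(0.01, 20) - geo_rhs(0.01, 20)) < 1e-9
```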
We can bound the term $C$ when epoch $k$ is sufficiently large:
\eq{
C \leq& N \sum_{i=1}^N\mathbb{E}\hspace{0.05cm}\|(I-\mu H)^{N-i} \xi({\boldsymbol{w}}^k_{i-1})\|^2\nonumber\\
\stackrel{\eqref{3h98u.ni}}{\leq}& \frac{\kappa^2N}{4} \sum_{i=1}^N \mathbb{E}\hspace{0.05cm}\|\widetilde{\boldsymbol{w}}_{i-1}^k\|^4\nonumber\\
=&O(\mu^4) \label{gi.d}
}
where the last equality is due to \eqref{xcnweu-2}:
\eq{
\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^4 \leq\;& \left(\Big(1-\frac{1}{2}\mu N\nu\Big)\|\widetilde{\boldsymbol{w}}^k_0\|^2+\frac{2\mu^3\delta^2N^3}{\nu}\mathcal{K}\right)^2\nonumber\\
\leq\;& \frac{\Big(1-\frac{1}{2}\mu N\nu\Big)^2}{s}\|\widetilde{\boldsymbol{w}}^k_0\|^4+\frac{4\mu^6\delta^4N^6}{(1-s)\nu^2}\mathcal{K}^2
}
Letting $s=1-\frac{1}{2}\mu N\nu$, we obtain:
\eq{
\|\widetilde{\boldsymbol{w}}_0^{k+1}\|^4 \leq\;&\Big(1-\frac{1}{2}\mu N\nu\Big)\|\widetilde{\boldsymbol{w}}^k_0\|^4 + \frac{8\mu^5\delta^4N^5}{\nu^3}\mathcal{K}^2
}
Taking expectations and letting $k\to\infty$, we conclude $\mathbb{E}\hspace{0.05cm} \|\widetilde{{\boldsymbol{w}}}_0^k\|^4=O(\mu^4)$.
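To see why this recursion yields $O(\mu^4)$, note that iterating $a_{k+1}\le \big(1-\tfrac{1}{2}\mu N\nu\big)a_k + \tfrac{8\mu^5\delta^4N^5}{\nu^3}\mathcal{K}^2$ drives $a_k$ toward the fixed point $\tfrac{16\mu^4\delta^4N^4}{\nu^4}\mathcal{K}^2=O(\mu^4)$. A numerical sketch with illustrative constants:

```python
# Iterate a <- (1 - mu*N*nu/2) * a + 8*mu^5*delta^4*N^5*K^2/nu^3 and compare
# the limit with c/(mu*N*nu/2) = 16*mu^4*delta^4*N^4*K^2/nu^4.
def limit_fourth_moment(mu, N=10, nu=0.5, delta=1.0, K=1.0, iters=2000):
    rate = 1.0 - 0.5 * mu * N * nu
    c = 8.0 * mu ** 5 * delta ** 4 * N ** 5 * K ** 2 / nu ** 3
    a = 1.0  # arbitrary initial value; the limit does not depend on it
    for _ in range(iters):
        a = rate * a + c
    return a

mu = 1e-2
predicted = 16.0 * mu ** 4 * 10 ** 4 / 0.5 ** 4  # 16*mu^4*N^4/nu^4 = 0.0256 here
assert abs(limit_fourth_moment(mu) - predicted) < 1e-6
```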
Lastly, choosing $t=(1-\mu\nu)^N$ in \eqref{h89gh92} and combining \eqref{bound.B} and \eqref{gi.d}, we establish:
\eq{
&\hspace{-4mm}\mathbb{E}\hspace{0.05cm}\big[\,\|\widetilde{{\boldsymbol{w}}}_0^{k+1} - \widetilde{{\boldsymbol{w}}}_0^{\prime k+1}\|^2\big]\nonumber\\ \leq&\, (1-\mu\nu)^N\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_0^{k} - \widetilde{{\boldsymbol{w}}}_0^{\prime k}\|^2\nonumber\\
&\;+\frac{2\mu^2}{1-(1-\mu\nu)^N}\frac{2\delta^2N^2}{\nu^2(N-1)}(1-(1-\mu\nu)^N)^2{\mathcal{K}} + O(\mu^4)
}
Letting $k\to \infty$, we conclude
\eq{
\mathbb{E}\hspace{0.05cm}\|\widetilde{{\boldsymbol{w}}}_0^{k} - \widetilde{{\boldsymbol{w}}}_0^{\prime k}\|^2\leq \frac{4\mu^2\delta^2N^2}{\nu^2(N-1)}{\mathcal{K}} +O(\mu^3)
}
\smallskip
\bibliographystyle{IEEEbib}
| {
    "arxiv_id": "1803.07964",
    "url": "https://arxiv.org/abs/1803.07964",
    "title": "Stochastic Learning under Random Reshuffling with Constant Step-sizes",
    "subjects": "Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)",
    "language": "en",
    "abstract": "In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data uniformly. Recent works have pursued justifications for this behavior by examining the convergence rate of the learning process under diminishing step-sizes. This work focuses on the constant step-size case and strongly convex loss function. In this case, convergence is guaranteed to a small neighborhood of the optimizer albeit at a linear rate. The analysis establishes analytically that random reshuffling outperforms uniform sampling by showing explicitly that iterates approach a smaller neighborhood of size $O(\\mu^2)$ around the minimizer rather than $O(\\mu)$. Furthermore, we derive an analytical expression for the steady-state mean-square-error performance of the algorithm, which helps clarify in greater detail the differences between sampling with and without replacement. We also explain the periodic behavior that is observed in random reshuffling implementations."
} |
https://arxiv.org/abs/1604.00845 | Sparse Fourier Transform in Any Constant Dimension with Nearly-Optimal Sample Complexity in Sublinear Time | We consider the problem of computing a $k$-sparse approximation to the Fourier transform of a length $N$ signal. Our main result is a randomized algorithm for computing such an approximation (i.e. achieving the $\ell_2/\ell_2$ sparse recovery guarantees using Fourier measurements) using $O_d(k\log N\log\log N)$ samples of the signal in time domain that runs in time $O_d(k\log^{d+3} N)$, where $d\geq 1$ is the dimensionality of the Fourier transform. The sample complexity matches the lower bound of $\Omega(k\log (N/k))$ for non-adaptive algorithms due to \cite{DIPW} for any $k\leq N^{1-\delta}$ for a constant $\delta>0$ up to an $O(\log\log N)$ factor. Prior to our work a result with comparable sample complexity $k\log N \log^{O(1)}\log N$ and sublinear runtime was known for the Fourier transform on the line \cite{IKP}, but for any dimension $d\geq 2$ previously known techniques either suffered from a polylogarithmic factor loss in sample complexity or required $\Omega(N)$ runtime. |
\section{The algorithm and proof overview}\label{sec:sublinear}
In this section we state our algorithm and give an outline of the analysis. The formal proofs are then presented in the rest of the paper (the organization of the rest of the paper is presented in section~\ref{sec:org}). Our algorithm (Algorithm~\ref{alg:main-sublinear}), at a high level, proceeds as follows.
{\bf Measuring $\widehat{x}$.} The algorithm starts by taking measurements of the signal in lines~5-16. Note that the algorithm selects $O(\log\log N)$ hashings $H_r=(\pi_r, B, F), r=1,\ldots, O(\log\log N)$, where the $\pi_r$ are selected uniformly at random, and for each $r$ selects a set ${\mathcal{A}}_r\subseteq {[n]^d}\times {[n]^d}$ of size $O(\log\log N)$ that determines locations to access in frequency domain. The signal $\widehat{x}$ is accessed via the function \textsc{HashToBins} (see Lemma~\ref{lm:hashing} above for its properties). The function \textsc{HashToBins} accesses filtered versions of $\widehat{x}$ shifted by elements of a randomly selected set (the number of shifts is $O(\log N/\log \log N)$). These shifts are useful for locating `heavy' elements from the output of \textsc{HashToBins}. Note that since each hashing takes $O(B)=O(k)$ samples, the total sample complexity of the measurement step is $O(k\log N \log\log N)$. This is the dominant contribution to the sample complexity, but it is not the only one. The other contribution of $O(k\log N\log\log N)$ comes from invocations of \textsc{EstimateValues} in our $\ell_1$-SNR reduction loop (see below). The loop goes over $O(\log R^*)=O(\log N)$ iterations, and in each iteration \textsc{EstimateValues} uses $O(\log\log N)$ fresh hash functions to keep the number of false positives and the estimation error small.
The location algorithm is Algorithm~\ref{alg:location}. Our main tool for bounding performance of \textsc{LocateSignal} is Theorem~\ref{thm:l1-res-loc}, stated below. Theorem~\ref{thm:l1-res-loc} applies to the following setting. Fix a set $S\subseteq {[n]^d}$ and a set of hashings $H_1,\ldots, H_{r_{max}}$ that encode signal measurement patterns, and let $S^*\subseteq S$ denote the set of elements of $S$ that are not isolated with respect to most of these hashings. Theorem~\ref{thm:l1-res-loc} shows that for any signal $x$ and partially recovered signal $\chi$, if $L$ denotes the output list of an invocation of \textsc{LocateSignal} on the pair $(x, \chi)$ with measurements given by $H_1,\ldots, H_{r_{max}}$ and a set of random shifts, then the $\ell_1$ norm of elements of the residual $(x-\chi)_S$ that are not discovered by \textsc{LocateSignal} can be bounded by a function of the amount of $\ell_1$ mass of the residual that fell outside of the `good' set $S\setminus S^*$, plus the `noise level' $\mu\geq ||x_{{[n]^d}\setminus S}||_\infty$ times $k$.
If we think of applying Theorem~\ref{thm:l1-res-loc} iteratively, we intuitively get that the fixed set of measurements given by the hashings $H_1,\ldots, H_{r_{max}}$ allows us to always reduce the $\ell_1$ norm of the residual $x'=x-\chi$ on the `good' set $S\setminus S^*$ to about the amount of mass that is located outside of this good set (this is exactly how we use \textsc{LocateSignal} in our signal-to-noise ratio reduction loop below). In section~\ref{sec:l1} we prove
\begin{theorem}\label{thm:l1-res-loc}
For any constant $C'>0$ there exist absolute constants $C_1, C_2, C_3>0$ such that for any $x, \chi\in \mathbb{C}^N$, $x'=x-\chi$, any integer $k\geq 1$ and any $S\subseteq {[n]^d}$ such that $||x_{{[n]^d}\setminus S}||_\infty\leq C'\mu$, where $\mu=||x_{{[n]^d}\setminus [k]}||_2/\sqrt{k}$, the following conditions hold if $||x'||_\infty/\mu=N^{O(1)}$.
Let $\pi_r=(\Sigma_r, q_r), r=1,\ldots, r_{max}$ denote permutations, and let $H_r=(\pi_r, B, F)$, $F\geq 2d, F=\Theta(d)$, where $B\geq (2\pi)^{4d\cdot \fc} k/\alpha^d$ for $\alpha\in (0, 1)$ smaller than a constant. Let $S^*\subseteq S$ denote the set of elements that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of hashings $\{H_r\}$. Then if additionally for every $s\in [1:d]$ the sets ${\mathcal{A}}_r\star (\mathbf{1}, \mathbf{e}_s)$ are balanced in coordinate $s$ (as per Definition~\ref{def:balance}) for all $r=1,\ldots, r_{max}$, and $r_{max}, c_{max}\geq (C_1/\sqrt{\alpha})\log\log N$, then
$$
L:=\bigcup_{r=1}^{r_{max}}\textsc{LocateSignal}\left(\chi, k, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{r=1, a\in {\mathcal{A}}_r, \mathbf{w}\in \H}^{r_{max}}\right)
$$
satisfies
$$
||x'_{S\setminus S^*\setminus L}||_1\leq (C_2\alpha)^{d/2} ||x'_S||_1+C_3^{d^2}(||\chi_{{[n]^d}\setminus S}||_1+||x'_{S^*}||_1)+4\mu |S|.
$$
\end{theorem}
\paragraph{\bf Reducing signal to noise ratio.} Once the samples have been taken, the algorithm proceeds to the signal to noise (SNR) reduction loop (lines~17-23). The objective of this loop is to reduce the mass of the top (about $k$) elements in the residual signal to roughly the noise level $\mu\cdot k$ (once this is done, we run a `cleanup' primitive, referred to as \textsc{RecoverAtConstantSNR}, to complete the recovery process -- see below). Specifically, we define the set $S$ of `head elements' in the original signal $x$ as
\begin{equation}\label{eq:s-def-l1}
S=\{i\in {[n]^d}: |x_i|>\mu\},
\end{equation}
where $\mu^2=\err_k^2(x)/k$ is the average tail noise level. Note that we have $|S|\leq 2k$. Indeed, if $|S|>2k$, more than $k$ elements of $S$ belong to the tail, amounting to more than $\mu^2\cdot k=\err_k^2(x)$ tail mass. Ideally, we would like this loop to construct an approximation $\chi^{(T)}$ to $x$ {\em supported only on $S$} such that $||(x-\chi^{(T)})_S||_1=O(\mu k)$, i.e. the $\ell_1$-SNR of the residual signal on the set $S$ of heavy elements is reduced to a constant.
As some false positives will unfortunately occur throughout the execution of our algorithm, due to the weaker sublinear time location and estimation primitives that we use, our SNR reduction loop instead constructs an approximation $\chi^{(T)}$ to $x$ with the somewhat weaker properties that
\begin{equation}\label{eq:294ht943htgrr}
||(x-\chi^{(T)})_S||_1+||\chi^{(T)}_{{[n]^d}\setminus S}||_1=O(\mu k)\text{~~~~~and~~~~}||\chi^{(T)}_{{[n]^d}\setminus S}||_0\ll k.
\end{equation}
Thus, we reduce the $\ell_1$-SNR on the set $S$ of `head' elements to a constant, while at the same time not introducing too many spurious coefficients (i.e. false positives) outside $S$; the coefficients that are introduced do not contribute much $\ell_1$ mass. The SNR reduction loop itself consists of repeated alternating invocations of two primitives, namely \textsc{ReduceL1Norm} and \textsc{ReduceInfNorm}. Of these two, the former performs most of the reduction, while \textsc{ReduceInfNorm} is naturally viewed as a `cleanup' phase that fixes inefficiencies of \textsc{ReduceL1Norm} due to the small number of hash functions (only $O(\log\log N)$, as opposed to $O(\log N)$ in~\cite{IK14a}) that we are allowed to use, as well as some mistakes that our sublinear runtime location and estimation primitives used in \textsc{ReduceL1Norm} might make.
\begin{algorithm}[H]
\caption{Location primitive: given a set of measurements corresponding to a single hash function, returns a list of elements in ${[n]^d}$, one per each hash bucket}\label{alg:location}
\begin{algorithmic}[1]
\Procedure{LocateSignal}{$\chi, H, \{m(\widehat{x}, H, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}, \mathbf{w}\in \H}$}\Comment{$H=(\pi, B, F), B=b^d$}
\State Let $x':=x-\chi$. Compute $\{m(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}, \mathbf{w}\in \H}$ using Corollary~\ref{c:semiequi1} and \textsc{HashToBins}.
\State $L\gets \emptyset$
\For{$j \in [b]^d$}\Comment{Loop over all hash buckets, indexed by $j\in [b]^d$}
\State ${\bf f}\gets {\bf 0}^d$
\For {$s=1$ to $d$}\Comment{Recovering each of $d$ coordinates separately}
\State $\Delta\gets 2^{\lfloor \frac1{2}\log_2 \log_2 n\rfloor}$
\For {$g=1$ to $\log_\Delta n-1$}
\State $\mathbf{w}\gets n\Delta^{-g} \cdot \mathbf{e}_s$ \Comment{Note that $\mathbf{w}\in \H$}
\State {\bf If}~there exists a unique $r\in [0:\Delta-1]$ such that
\State~~~~~$\left|\omega_{\Delta}^{-r\cdot \beta_s}\cdot \omega^{-(n\cdot \Delta^{-g} {\bf f}_s)\cdot \beta_s}\cdot \frac{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{w}))}{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{0}))}-1\right|<1/3$ for at least $3/5$ fraction of $a=(\alpha, \beta)\in {\mathcal{A}}$
\State {\bf then} ${\bf f}\gets {\bf f}+\Delta^{g-1}\cdot r\cdot {\bf e}_s$ ~{\bf else} return {\bf FAIL}
\EndFor
\EndFor
\State $L \gets L \cup \{\Sigma^{-1}{\bf f}\}$ \Comment{Add recovered element to output list}
\EndFor
\State \textbf{return} $L$
\EndProcedure
\end{algorithmic}
\end{algorithm}
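To illustrate the digit-by-digit decoding in the inner loop of \textsc{LocateSignal}, the following simplified sketch recovers a single frequency in one dimension, in the noiseless case, with a single shift ($\beta=1$) and a small fixed base $\Delta$; the parameter choices ($n=\Delta^G$) and helper names are illustrative and not part of the algorithm above:

```python
# Noiseless, 1-sparse, 1-D sketch of the base-Delta location test: the ratio
# m(w)/m(0) for w = n/Delta^g has phase 2*pi*f/Delta^g, so after subtracting
# the digits recovered so far, the test singles out digit g-1 of f in base Delta.
import cmath

n, Delta, G = 1024, 4, 5  # n = Delta**G

def ratio(f_true, g):
    # m(w)/m(0) for a single heavy frequency f_true, with w = n/Delta**g
    return cmath.exp(2j * cmath.pi * f_true * (n // Delta ** g) / n)

def locate(f_true):
    f = 0
    for g in range(1, G + 1):
        m = ratio(f_true, g)
        # pick the residue r making the adjusted ratio closest to 1
        best = min(range(Delta), key=lambda r: abs(
            cmath.exp(-2j * cmath.pi * r / Delta)
            * cmath.exp(-2j * cmath.pi * (n // Delta ** g) * f / n) * m - 1))
        f += Delta ** (g - 1) * best
    return f

assert all(locate(f) == f for f in (0, 1, 5, 999, 1023))
```

In the actual algorithm, the random shifts $a=(\alpha,\beta)$ and the majority vote over $3/5$ of ${\mathcal{A}}$ make this test robust to noise; the sketch above only shows the noiseless mechanics.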
\textsc{ReduceL1Norm} is presented as Algorithm~\ref{alg:l1-norm-reduction} below. The algorithm performs $O(\log\log N)$ rounds of the following process: first, run \textsc{LocateSignal} on the current residual signal, then estimate values of the elements that belong to the list $L$ output by \textsc{LocateSignal}, and {\bf only keep those that are above a certain threshold} (see threshold $\frac1{10000} 2^{-t} \nu+4\mu$ in the call to \textsc{EstimateValues} in line~9 of Algorithm~\ref{alg:l1-norm-reduction}). This thresholding operation is crucial, and allows us to control the number of false positives. In fact, this is very similar to the approach of~\cite{IK14a} of recovering elements starting from the largest. The only difference is that {\bf (a)} our `reliability threshold' is dictated by the $\ell_1$ norm of the residual rather than the $\ell_\infty$ norm, as in~\cite{IK14a}, and {\bf (b)} some false positives can still occur due to our weaker estimation primitives. Our main tool for formally stating the effect of \textsc{ReduceL1Norm} is Lemma~\ref{lm:reduce-l1-norm} below. Intuitively, the lemma shows that \textsc{ReduceL1Norm} reduces the $\ell_1$ norm of the head elements of the input signal $x-\chi$ by a polylogarithmic factor, and does not introduce too many new spurious elements (false positives) in the process. The introduced spurious elements, if any, do not contribute much $\ell_1$ mass to the head of the signal. Formally, we show in section~\ref{sec:rl1n}
\begin{lemma}\label{lm:reduce-l1-norm}
For any $x\in \mathbb{C}^N$, any integer $k\geq 1$, $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$ for $\alpha\in (0, 1]$ smaller than an absolute constant and $F\geq 2d, F=\Theta(d)$ the following conditions hold for the set $S:=\{i\in {[n]^d}: |x_i|>\mu\}$, where $\mu^2:=||x_{{[n]^d}\setminus [k]}||_2^2/k$. Suppose that $||x||_\infty/\mu=N^{O(1)}$.
For any sequence of hashings $H_r=(\pi_r, B, F)$, $r=1,\ldots, r_{max}$, if $S^*\subseteq S$ denotes the set of elements of $S$ that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $H_r, r=1,\ldots, r_{max}$, then for any $\chi\in \mathbb{C}^{[n]^d}$, $x':=x-\chi$, if $\nu\geq (\log^4 N)\mu$ is a parameter such that
\begin{description}
\item[A] $||(x-\chi)_S||_1\leq (\nu +20\mu)k$;
\item[B] $||\chi_{{[n]^d}\setminus S}||_0\leq \frac{1}{\log^{19} N}k$;
\item[C] $||(x-\chi)_{S^*}||_1+||\chi_{{[n]^d}\setminus S}||_1\leq \frac{\nu}{\log^4 N}k$,
\end{description}
the following conditions hold.
If parameters $r_{max}, c_{max}$ are chosen to be at least $(C_1/\sqrt{\alpha})\log\log N$, where $C_1$ is the constant from Theorem~\ref{thm:l1-res-loc} and measurements are taken as in Algorithm~\ref{alg:main-sublinear}, then the output $\chi'$ of the call
$$
\Call{ReduceL1Norm}{\chi, k, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{r=1, a\in {\mathcal{A}}_r, \mathbf{w}\in \H}^{r_{max}}, 4\mu (\log^4 n)^{T-t}, \mu}
$$
satisfies
\begin{enumerate}
\item $||(x'-\chi')_S||_1\leq \frac1{\log^{4} N} \nu k+20\mu k$ ~~~~~~~~($\ell_1$ norm of head elements is reduced by $\approx \log^4 N$ factor)
\item $||(\chi+\chi')_{{[n]^d}\setminus S}||_0\leq ||\chi_{{[n]^d}\setminus S}||_0+ \frac1{\log^{20} N}k$~~~~(few spurious coefficients are introduced)
\item $||(x'-\chi')_{S^*}||_1+||(\chi+\chi')_{{[n]^d}\setminus S}||_1\leq ||x'_{S^*}||_1+||\chi_{{[n]^d}\setminus S}||_1+\frac1{\log^{20} N}\nu k$ ~~~~($\ell_1$ norm of spurious coefficients does not grow fast)
\end{enumerate}
with probability at least $1-1/\log^{2} N$ over the randomness used to take measurements $m$ and by calls to \textsc{EstimateValues}.
The number of samples used is bounded by $2^{O(d^2)}k(\log\log N)^2$, and the runtime is bounded by $2^{O(d^2)} k\log^{d+2} N$.
\end{lemma}
Equipped with Lemma~\ref{lm:reduce-l1-norm} as well as its counterpart Lemma~\ref{lm:linf} that bounds the performance of \textsc{ReduceInfNorm} (see section~\ref{sec:linf}), we are able to prove that the SNR reduction loop indeed achieves its goal, namely~\eqref{eq:294ht943htgrr}. Formally, we prove in section~\ref{sec:snr-loop}
\begin{theorem}\label{thm:l1snr}
For any $x\in \mathbb{C}^N$ and any integer $k\geq 1$, let $\mu^2=\err_k^2(x)/k$ and suppose that $R^*\geq ||x||_\infty/\mu=N^{O(1)}$. Define the set $S:=\{i\in {[n]^d}: |x_i|>\mu\}\subseteq {[n]^d}$.
Then the SNR reduction loop of Algorithm~\ref{alg:main-sublinear} (lines~19-25) returns $\chi^{(T)}$ such that
\begin{equation*}
\begin{split}
&||(x-\chi^{(T)})_S||_1\lesssim \mu\text{~~~~~~~~~~~~~~~~~($\ell_1$-SNR on head elements is constant)}\\
&||\chi^{(T)}_{{[n]^d}\setminus S}||_1\lesssim \mu \text{~~~~~~~~~~~~~~~~~~~~~~~~~~(spurious elements contribute little in $\ell_1$ norm)}\\
&||\chi^{(T)}_{{[n]^d}\setminus S}||_0\lesssim \frac1{\log^{19} N} k\text{~~~~~~~~~~~~~(small number of spurious elements have been introduced)}
\end{split}
\end{equation*}
with probability at least $1-1/\log N$ over the internal randomness used by Algorithm~\ref{alg:main-sublinear}. The sample complexity is $2^{O(d^2)}k\log N(\log\log N)$. The runtime is bounded by $2^{O(d^2)} k \log^{d+3} N$.
\end{theorem}
\paragraph{ Recovery at constant $\ell_1$-SNR.} Once ~\eqref{eq:294ht943htgrr} has been achieved, we run the \textsc{RecoverAtConstantSNR} primitive (Algorithm~\ref{alg:const-snr}) on the residual signal. Adding the correction $\chi'$ that it outputs to the output $\chi^{(T)}$ of the SNR reduction loop gives the final output of the algorithm. We prove in section~\ref{sec:const-snr}
\begin{lemma}\label{lm:const-snr}
For any ${\epsilon}>0$, $\hat x, \chi\in \mathbb{C}^N$, $x'=x-\chi$ and any integer $k\geq 1$, if $||x'_{[2k]}||_1\leq O(||x_{{[n]^d}\setminus [k]}||_2\sqrt{k})$, $||x'_{{[n]^d}\setminus [2k]}||_2^2\leq ||x_{{[n]^d}\setminus [k]}||_2^2$ and $||x||_\infty/\mu=N^{O(1)}$, then the output $\chi'$ of
\Call{RecoverAtConstantSNR}{$\hat x, \chi, 2k, {\epsilon}$} satisfies
$$
||x'-\chi'||^2_2\leq (1+O({\epsilon}))||x_{{[n]^d}\setminus [k]}||_2^2
$$
with at least $99/100$ probability over its internal randomness. The sample complexity is $2^{O(d^2)}\frac1{{\epsilon}} k\log N$, and the runtime complexity is at most $2^{O(d^2)}\frac1{{\epsilon}} k \log^{d+1} N.$
\end{lemma}
We give the intuition behind the proof here, as the argument is somewhat more delicate than the analysis of \textsc{RecoverAtConstSNR} in~\cite{IKP}, due to the $\ell_1$-SNR, rather than $\ell_2$-SNR assumption.
Specifically, if instead of $||(x-\chi)_{[2k]}||_1\leq O(\mu k)$ we had $||(x-\chi)_{[2k]}||_2^2\leq O(\mu^2 k)$, then it would essentially suffice to note that after a single hashing into about $k/({\epsilon}\alpha)$ buckets, for a constant $\alpha\in (0, 1)$, every element $i\in [2k]$ is recovered with probability at least $1-O({\epsilon}\alpha)$, as it is enough to recover (on average) all but about an ${\epsilon}$ fraction of the coefficients. This is not sufficient here, since we only have a bound on the $\ell_1$ norm of the residual, and hence some elements can contribute much more $\ell_2$ norm than others. However, we are able to show that the probability that an element $x'_i$ of the residual signal is not recovered is bounded by
$O(\frac{\alpha {\epsilon} \mu^2}{|x'_i|^2}+\frac{\alpha {\epsilon} \mu}{|x'_i|})$, where the first term corresponds to contribution of tail noise and the second corresponds to the head elements. This bound implies that the total expected $\ell_2^2$ mass in the elements that are not recovered is upper bounded by
$\sum_{i\in [2k]} |x'_i|^2\cdot O(\frac{\alpha {\epsilon} \mu^2}{|x'_i|^2}+\frac{\alpha {\epsilon} \mu}{|x'_i|})\leq O({\epsilon} \mu^2 k+{\epsilon}\mu \sum_{i\in [2k]} |x'_i|)=O({\epsilon} \mu^2 k)$, giving the result.
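This last summation can be sanity-checked numerically: for any residual profile whose $\ell_1$ norm is at most $\mu k$, plugging the per-element failure bound into the sum yields $O({\epsilon}\mu^2 k)$ of expected missed $\ell_2^2$ mass. A minimal sketch (the random residual profiles and the implicit constants in the failure probability are illustrative, not the actual analysis):

```python
import random

def missed_mass_bound(vals, mu, alpha, eps):
    # expected l2^2 mass of unrecovered elements, using the per-element
    # failure probability O(alpha*eps*mu^2/|x'_i|^2 + alpha*eps*mu/|x'_i|),
    # capped at 1 since it is a probability
    total = 0.0
    for v in vals:
        p = min(1.0, alpha * eps * mu**2 / v**2 + alpha * eps * mu / v)
        total += v**2 * p
    return total

random.seed(0)
k, mu, alpha, eps = 100, 1.0, 0.5, 0.1
for _ in range(50):
    vals = [random.random() for _ in range(2 * k)]
    s = sum(vals)
    vals = [v * mu * k / s for v in vals]   # enforce ||x'_{[2k]}||_1 = mu*k
    # sum v^2*p <= alpha*eps*mu^2*(2k) + alpha*eps*mu*sum(v) = 3*alpha*eps*mu^2*k
    assert missed_mass_bound(vals, mu, alpha, eps) <= 3 * alpha * eps * mu**2 * k + 1e-9
```

The two terms of the per-element bound contribute $\alpha{\epsilon}\mu^2\cdot 2k$ and $\alpha{\epsilon}\mu\cdot ||x'||_1$ respectively, which is where the $\ell_1$ assumption enters.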
Finally, putting the results above together, we prove in section~\ref{sec:sfft}
\begin{theorem}\label{thm:main}
For any ${\epsilon}>0$, $x\in \mathbb{C}^{[n]^d}$ and any integer $k\geq 1$, if $R^*\geq ||x||_\infty/\mu=N^{O(1)}$, $\mu^2=O(||x_{{[n]^d}\setminus [k]}||_2^2/k)$ and $\alpha>0$ is smaller than an absolute constant, \textsc{SparseFFT}$(\hat x, k, {\epsilon}, R^*, \mu)$ solves the $\ell_2/\ell_2$ sparse recovery problem using $2^{O(d^2)} (k\log N \log\log N+\frac1{{\epsilon}}k\log N)$ samples and
$2^{O(d^2)} \frac1{{\epsilon}}k \log^{d+3} N$ time with at least $98/100$ success probability.
\end{theorem}
\begin{algorithm}
\caption{\textsc{SparseFFT}($\hat x, k, {\epsilon}, R^*, \mu$)}\label{alg:main-sublinear}
\begin{algorithmic}[1]
\Procedure{SparseFFT}{$\hat x, k, {\epsilon}, R^*, \mu$}
\State $\chi^{(0)} \gets 0$ \Comment{in $\mathbb{C}^N$}
\State $T \gets \log_{(\log^4 N)} R^*$
\State $F\gets 2d$
\State $B\leftarrow (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$, $\alpha>0$ sufficiently small constant
\State $r_{max}\gets (C/\sqrt{\alpha})\log\log N, c_{max}\gets (C/\sqrt{\alpha})\log\log N$ for a sufficiently large constant $C>0$
\State $\H\gets \{\mathbf{0}_d\}$, $\Delta\gets 2^{\lfloor \frac1{2}\log_2 \log_2 n\rfloor}$ \Comment{$\mathbf{0}_d$ is the zero vector in dimension $d$}
\For {$g=1$ to $\lceil \log_\Delta n \rceil$}
\State $\H\gets \H\cup \bigcup_{s=1}^d n \Delta^{-g} \cdot \mathbf{e}_s$ \Comment{$\mathbf{e}_s$ is the unit vector in direction $s$}
\EndFor
\State $G\gets$ filter with $B$ buckets and sharpness $F$, as per Lemma~\ref{lm:filter-prop}
\For {$r=1$ to $r_{max}$}\Comment{Samples that will be used for location}
\State Choose $\Sigma_r\in \mathcal{M}_{d\times d}$, $q_r\in {[n]^d}$ uniformly at random, let $\pi_r:=(\Sigma_r, q_r)$ and let $H_r:=(\pi_r, B, F)$
\State Let ${\mathcal{A}}_r\gets $ $C\log\log N$ elements of ${[n]^d}\times {[n]^d}$ sampled uniformly at random with replacement
\For {$\mathbf{w}\in \H$}
\State $m(\widehat{x}, H_r, a\star(\mathbf{1}, \mathbf{w}))\gets \Call{HashToBins}{\hat x, 0, (H_r, a\star (\mathbf{1}, \mathbf{w}))}$ for all $a\in {\mathcal{A}}_r, \mathbf{w}\in \H$
\EndFor
\EndFor
\For{$t = 0, 1, \dotsc,T-1$}
\State $\chi' \gets \textsc{ReduceL1Norm}\left(\chi^{(t)}, k, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{r=1, a\in {\mathcal{A}}_r, \mathbf{w}\in \H}^{r_{max}}, 4\mu (\log^4 N)^{T-t}, \mu\right)$
\State\Comment{Reduce $\ell_1$ norm of dominant elements in the residual signal}
\State $\nu'\gets (\log^4 N)(4\mu (\log^4 N)^{T-(t+1)}+20\mu)$ \Comment{Threshold}
\State $\chi'' \gets \Call{ReduceInfNorm}{\hat x, \chi^{(t)}+\chi', 4k/(\log^4 N), \nu', \nu'}$
\State\Comment{Reduce $\ell_\infty$ norm of spurious elements introduced by \textsc{ReduceL1Norm}}
\State $\chi^{(t+1)} \gets \chi^{(t)} + \chi'+\chi''$
\EndFor
\State $\chi' \gets \textsc{RecoverAtConstantSNR}(\hat x, \chi^{(T)}, 2k, \epsilon)$
\State \textbf{return} $\chi^{(T)} + \chi'$
\EndProcedure
\end{algorithmic}
\end{algorithm}
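The geometric threshold schedule that drives the SNR reduction loop (lines 3 and 21 of Algorithm~\ref{alg:main-sublinear}) can be sketched as follows. The function name and the concrete values of $N$, $R^*$, $\mu$ below are illustrative:

```python
import math

def snr_schedule(N, R_star, mu):
    # T = log_{log^4 N} R*, with per-round l1-SNR threshold
    # nu_t = 4*mu*(log^4 N)^(T-t): each round shrinks the threshold by a
    # log^4 N factor, reaching O(mu * log^4 N) in the final round
    base = math.log(N)**4
    T = math.ceil(math.log(R_star, base))
    return [4 * mu * base**(T - t) for t in range(T)]

sched = snr_schedule(N=2**20, R_star=2**40, mu=1.0)
base = math.log(2**20)**4
# consecutive thresholds differ by exactly a log^4 N factor
assert all(abs(a / b - base) < 1e-6 for a, b in zip(sched, sched[1:]))
```

Since each round divides the threshold by $\log^4 N$, only $T=O(\log N/\log\log N)$ rounds are needed to bring an $N^{O(1)}$ initial SNR down to a constant.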
\section{Omitted proofs}\label{app:A}
\begin{proofof}{Lemma~\ref{lm:isolated-pi}}
We start with
\begin{equation}\label{eq:lixchbsp}
\begin{split}
{\bf \mbox{\bf E}}_{\Sigma, q}[|\pi(S\setminus \{i\})\cap \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)|]&=\sum_{j\in S\setminus \{i\}} {\bf \mbox{\bf Pr}}_{\Sigma, q}[\pi(j)\in \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)]\\
\end{split}
\end{equation}
Recall that by definition of $h(i)$ one has $||(n/b)\cdot h(i)-\pi(i)||_\infty\leq (n/b)$, so by the triangle inequality
$$
||\pi(j)-\pi(i)||_\infty\leq ||\pi(j)-(n/b)h(i)||_\infty+||\pi(i)-(n/b)h(i)||_\infty,
$$
so
\begin{equation}\label{eq:lixchbs}
\begin{split}
{\bf \mbox{\bf E}}_{\Sigma, q}[|\pi(S\setminus \{i\})\cap \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)|]&\leq \sum_{j\in S\setminus \{i\}} {\bf \mbox{\bf Pr}}_{\Sigma, q}[\pi(j)\in \mathbb{B}^\infty_{\pi(i)}((n/b)\cdot (2^t+1))]\\
&\leq \sum_{j\in S\setminus \{i\}} {\bf \mbox{\bf Pr}}_{\Sigma, q}[\pi(j)\in \mathbb{B}^\infty_{\pi(i)}((n/b)\cdot 2^{t+1})]\\
\end{split}
\end{equation}
Since $\pi_{\Sigma, q}(i)=\Sigma(i-q)$ for all $i\in {[n]^d}$, we have
\begin{equation*}
\begin{split}
{\bf \mbox{\bf Pr}}_{\Sigma, q}[\pi(j)\in \mathbb{B}^\infty_{\pi(i)}((n/b)\cdot 2^{t+1})]&={\bf \mbox{\bf Pr}}_{\Sigma, q}[||\Sigma(j-i)||_\infty\leq (n/b)\cdot 2^{t+1}]\leq 2(2^{t+2}/b)^d,
\end{split}
\end{equation*}
where we used the fact that by Lemma~\ref{lemma:limitedindependence}, for any fixed $i$, $j\neq i$ and any radius $r\geq 0$,
\begin{equation}\label{eq:li}
{\bf \mbox{\bf Pr}}_{\Sigma}[\norm{\infty}{\Sigma(i-j)} \leq r] \leq 2(2r/n)^d
\end{equation}
with $r=(n/b)\cdot 2^{t+1}$.
Putting this together with~\eqref{eq:lixchbs}, we get
\begin{equation*}
\begin{split}
{\bf \mbox{\bf E}}_{\Sigma, q}[|\pi(S\setminus \{i\})\cap \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^t)|]\leq |S|\cdot 2(2^{t+2}/b)^{d}&\leq (|S|/B)\cdot 2^{(t+2)d+1}\\
&\leq \frac1{4}(2\pi)^{-d\cdot \fc}\cdot 64^{-(d+F)}\alpha^d 2^{(t+2)d+1}.
\end{split}
\end{equation*}
Now by Markov's inequality we have that $i$ fails to be isolated at scale $t$ with probability at most
$$
{\bf \mbox{\bf Pr}}_{\Sigma, q}\left[|\pi(S\setminus \{i\})\cap \mathbb{B}^\infty_{\pi(i)}((n/b)\cdot 2^t)|>(2\pi)^{-d\cdot \fc}\cdot 64^{-(d+F)}\alpha^{d/2} 2^{(t+2)d+t+1} \right]\leq \frac1{4}2^{-t} \alpha^{d/2}.
$$
Taking the union bound over all $t\geq 0$, we get
$$
{\bf \mbox{\bf Pr}}_{\Sigma, q}[i~\text{is not isolated}]\leq \sum_{t\geq 0}\frac1{4}2^{-t} \alpha^{d/2} \leq \frac1{2}\alpha^{d/2}\leq \frac1{2}\alpha^{1/2}
$$
as required.
\end{proofof}
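The key spreading estimate~\eqref{eq:li} can be checked exhaustively in a one-dimensional toy setting, with odd multipliers modulo $n=2^m$ standing in for the invertible matrices $\Sigma\in \mathcal{M}_{d\times d}$. This is an illustrative stand-in for $d=1$, not the exact distribution used in the proof:

```python
def check_spread_bound(n):
    # Verify Pr_sigma[ |sigma*delta mod n| <= r (circularly) ] <= 2*(2r/n),
    # i.e. Eq. (li) with d = 1, exhaustively over all odd sigma,
    # all delta != 0 and all dyadic radii r < n/2.
    sigmas = [s for s in range(n) if s % 2 == 1]
    for delta in range(1, n):
        for t in range(n.bit_length() - 1):
            r = 2**t
            cnt = sum(1 for s in sigmas
                      if min((s * delta) % n, n - (s * delta) % n) <= r)
            if cnt / len(sigmas) > 2 * (2 * r / n):
                return False
    return True

assert check_spread_bound(64)
```

The bound is tight for some pairs $(\delta, r)$ (e.g. odd $\delta$ with $r=1$), which is why the factor $2$ in~\eqref{eq:li} cannot be dropped.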
Before giving a proof of Lemma~\ref{lm:hashing}, we state the following lemma, which is immediate from Lemma~\ref{l:hashtobins}:
\begin{lemma}\label{lm:linearity}
Let $x, x^1, x^2, \chi, \chi^1, \chi^2\in \mathbb{C}^N$, $x=x^1+x^2$, $\chi=\chi^1+\chi^2$.
Let $\Sigma\in \mathcal{M}_{d\times d}, q, a\in {[n]^d}$, $B=b^d, b\geq 2$ an integer. Let
\begin{equation*}
\begin{split}
u&=\Call{HashToBins}{\hat x, \chi, (H, a)} \\
u^1&=\Call{HashToBins}{\widehat{x^1}, \chi^1, (H, a)} \\
u^2&=\Call{HashToBins}{\widehat{x^2}, \chi^2, (H, a)}.
\end{split}
\end{equation*}
Then for each $j\in [b]^d$ one has
\begin{equation*}
\begin{split}
|G^{-1}_{o_i(i)} u_j\omega^{-a^T\Sigma i}-(x-\chi)_i|^p&\lesssim |G^{-1}_{o_i(i)} u^1_j\omega^{-a^T\Sigma i}-(x^1-\chi^1)_i|^p+|G^{-1}_{o_i(i)} u^2_j\omega^{-a^T\Sigma i}-(x^2-\chi^2)_i|^p\\
&+N^{-\Omega(c)}
\end{split}
\end{equation*}
for $p\in \{1, 2\}$, where $c$ controls the word precision of our semi-equispaced Fourier transform computations.
\end{lemma}
\begin{proofof}{Lemma~\ref{lm:hashing}}
By Lemma~\ref{lemma:limitedindependence}, for any fixed $i$ and $j$ and any $t\geq 0$,
\[
{\bf \mbox{\bf Pr}}_{\Sigma}[\norm{\infty}{\Sigma(i-j)} \leq t] \leq 2(2t/n)^d.
\]
Per Lemma~\ref{l:hashtobins},
\textsc{HashToBins} computes the vector $u \in \mathbb{C}^B$ given by
\begin{equation}\label{eq:u-delta}
u_{h(i)} - \Delta_{h(i)} = \sum_{j\in {[n]^d}} G_{o_i(j)}x'_j \omega^{a^T \Sigma j}
\end{equation}
for some $\Delta$ with $\norm{\infty}{\Delta}^2 \leq N^{-\Omega(c)}$.
We define the vector $v \in \mathbb{C}^n$ by $v_{\Sigma j} = x'_j G_{o_i(j)}$, so that
\[
u_{h(i)} - \Delta_{h(i)} = \sum_{j\in {[n]^d}} \omega^{a^Tj} v_j = \sqrt{N}\widehat{v}_a
\]
so
\[
u_{h(i)} - \omega^{a^T\Sigma i}G_{o_i(i)}x'_i - \Delta_{h(i)} = \sqrt{N}(\widehat{v_{\overline{\{\Sigma i\}}}})_a.
\]
We have by \eqref{eq:u-delta} and the fact that $(X+Y)^2\leq 2X^2+2Y^2$
\begin{equation*}
\begin{split}
\abs{G_{o_i(i)}^{-1}\omega^{-a^T\Sigma i}u_{h(i)} - x'_i}^2 =G_{o_i(i)}^{-2} \abs{u_{h(i)} - \omega^{a^T\Sigma i}G_{o_i(i)}x'_i}^2\\
\leq 2G_{o_i(i)}^{-2} \abs{u_{h(i)} - \omega^{a^T\Sigma i}G_{o_i(i)}x'_i - \Delta_{h(i)}}^2 + 2G_{o_i(i)}^{-2}\Delta_{h(i)}^2\\
=2G_{o_i(i)}^{-2} \abs{\sum_{j\in {[n]^d}\setminus \{i\}} G_{o_i(j)}x'_j \omega^{a^T \Sigma j}}^2 +2G_{o_i(i)}^{-2} \Delta_{h(i)}^2\\
\end{split}
\end{equation*}
By Parseval's theorem, therefore, we have
\begin{equation}\label{eq:a-est}
\begin{split}
{\bf \mbox{\bf E}}_a[\abs{G_{o_i(i)}^{-1}\omega^{-a^T\Sigma i}u_{h(i)} - x'_i}^2]
&\leq 2G_{o_i(i)}^{-2} {\bf \mbox{\bf E}}_a[\abs{\sum_{j\in {[n]^d}\setminus \{i\}} G_{o_i(j)}x'_j \omega^{a^T \Sigma j}}^2] + 2{\bf \mbox{\bf E}}_a[\Delta_{h(i)}^2]\\
&= 2G_{o_i(i)}^{-2} (\norm{2}{v_{\overline{\{\Sigma i\}}}}^2 + \Delta_{h(i)}^2)\\
&\lesssim N^{-\Omega(c)} + \sum_{j \in {[n]^d} \setminus \{i\}} \abs{x'_j G_{o_i(j)}}^2\\
&\lesssim N^{-\Omega(c)} + \mu_{\Sigma, q}^2(i).\\
\end{split}
\end{equation}
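The Parseval step in~\eqref{eq:a-est} — that averaging over a uniform modulation $a$ turns the squared exponential sum into the $\ell_2$ energy of $v$ — can be verified directly on a small instance (pure-Python DFT; the signal size is illustrative):

```python
import cmath, random

def mean_sq_modulated_sum(v):
    # E_a | sum_j v_j * omega^{a*j} |^2 over uniform a in [n];
    # by Parseval's theorem this equals ||v||_2^2
    n = len(v)
    omega = cmath.exp(2j * cmath.pi / n)
    return sum(abs(sum(v[j] * omega**(a * j) for j in range(n)))**2
               for a in range(n)) / n

random.seed(1)
v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(16)]
energy = sum(abs(x)**2 for x in v)
assert abs(mean_sq_modulated_sum(v) - energy) < 1e-8
```

In the proof, $v$ is the hashed signal restricted to $j\neq i$, so this identity converts the cross-term into $\norm{2}{v_{\overline{\{\Sigma i\}}}}^2$.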
We now prove {\bf (2)}. Recall that the filter $G$ approximates an ideal filter, which would be $1$ inside $\mathbb{B}^\infty_{0}(n/b)$ and $0$ everywhere else. We use the bound on $G_{o_i(j)}=G_{\pi(i)-\pi(j)}$ in terms of $||\pi(i)-\pi(j)||_\infty$ from Lemma~\ref{lm:filter-prop}, (2). In order to leverage the bound, we partition ${[n]^d}=\mathbb{B}^\infty_{(n/b)\cdot h(i)}(n/2)$ as
$$
\mathbb{B}^\infty_{(n/b)\cdot h(i)}(n/2)=\mathbb{B}^\infty_{(n/b)\cdot h(i)}(n/b)\cup \bigcup_{t=1}^{\log_2 (b/2)} \left(\mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)2^{t})\setminus \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)2^{t-1})\right).
$$
For simplicity of notation, let $X_0=\mathbb{B}^\infty_{(n/b)\cdot h(i)}(n/b)$ and $X_t=\mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^{t})\setminus \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^{t-1})$ for $t\geq 1$.
For each $t\geq 1$ we have by Lemma~\ref{lm:filter-prop}, (2)
$$
\max_{\pi(l)\in X_t} |G_{o_i(l)}|\leq \max_{\pi(l)\not \in \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)2^{t-1})} |G_{o_i(l)}| \leq \left(\frac{2}{1+2^{t-1}}\right)^{F}.
$$
Since the rhs is greater than $1$ for $t=0$, the bound holds trivially in that case, and hence we can use it for all $0\leq t\leq \log_2 (b/2)$.
Further, by Lemma~\ref{lemma:limitedindependence} we have for each $j\neq i$ and $t\geq 0$
$$
{\bf \mbox{\bf Pr}}_{\Sigma, q}[\pi(j)\in X_t]\leq {\bf \mbox{\bf Pr}}_{\Sigma, q}[\pi(j)\in \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^{t})]\leq 2(2^{t+1}/b)^{d}.
$$
Putting these bounds together, we get
\begin{equation*}
\begin{split}
{\bf \mbox{\bf E}}_{\Sigma, q}[\mu^2_{\Sigma, q}(i)]&={\bf \mbox{\bf E}}_{\Sigma, q}[\sum_{j \in {[n]^d}\setminus \{i\}} \abs{x'_j G_{o_i(j)}}^2] \\
&\leq \sum_{j\in {[n]^d}\setminus \{i\}} \abs{x'_j}^2 \cdot \sum_{t=0}^{\log_2 (b/2)} {\bf \mbox{\bf Pr}}_{\Sigma, q}[\pi(j)\in X_t]\cdot \max_{\pi(l)\in X_t} |G_{o_i(l)}|\\
&\leq \sum_{j\in {[n]^d}\setminus \{i\}} \abs{x'_j}^2 \cdot \sum_{t=0}^{\log_2 (b/2)} (2^{t+1}/b)^{d}\cdot \left(\frac{2}{1+2^{t-1}}\right)^{F}\\
&\leq \frac{2^F}{B}\sum_{j\in {[n]^d}\setminus \{i\}} \abs{x'_j}^2 \sum_{t=0}^{+\infty} 2^{(t+1)d-F (t-1)}\\
& \leq 2^{O(d)} \frac{\norm{2}{x'}^2}{B}
\end{split}
\end{equation*}
as long as $F\geq 2d$ and $F=\Theta(d)$. Recalling that $G_{o_i(i)}^{-1}\leq (2\pi)^{d\cdot \fc}$ completes the proof of {\bf (2)}.
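The convergence of the filter-leakage sum above is a geometric series whenever $F\geq 2d$, and the resulting constant is $2^{O(d)}$. This can be checked with exact rational arithmetic (the truncation point \texttt{tmax} is illustrative; the infinite tail only shrinks further):

```python
from fractions import Fraction

def leakage_sum(d, F, tmax=100):
    # sum_{t=0}^{tmax-1} 2^{(t+1)d - F(t-1)}, computed exactly;
    # for F = 2d the t-th term is 2^{d(3-t)}, a geometric series
    # whose total stays below 2^{3d+1}
    return sum(Fraction(2)**((t + 1) * d - F * (t - 1)) for t in range(tmax))

for d in range(1, 9):
    assert leakage_sum(d, 2 * d) < Fraction(2)**(3 * d + 1)
```

Using \texttt{Fraction} avoids the floating-point rounding that would otherwise blur the near-tight case $d=1$, where the sum approaches $2^4$.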
The proof of {\bf (1)} is similar. We have
\begin{equation}
\begin{split}
{\bf \mbox{\bf E}}_{\Sigma, q}[\max_{a\in {[n]^d}} |\sum_{j \in {[n]^d}\setminus \{i\}} x'_j G_{o_i(j)} \omega^{a^T\Sigma j}|] &\leq {\bf \mbox{\bf E}}_{\Sigma, q}[\sum_{j \in {[n]^d}\setminus \{i\}} |x'_j G_{o_i(j)}|]+|\Delta_{h(i)}|\\
&\leq |\Delta_{h(i)}|+\sum_{j\in {[n]^d}\setminus \{i\}} \abs{x'_j} \cdot \sum_{t=0}^{\log_2 (b/2)} {\bf \mbox{\bf Pr}}_{\Sigma, q}[\pi(j)\in X_t]\cdot \max_{\pi(l)\in X_t} |G_{o_i(l)}|\\
&\leq |\Delta_{h(i)}|+\sum_{j\in {[n]^d}\setminus \{i\}} \abs{x'_j} \cdot \sum_{t=0}^{\log_2 (b/2)} (2^{t+1}/b)^{d}\cdot \left(\frac{2}{1+2^{t-1}}\right)^{F}\\
&\leq |\Delta_{h(i)}|+\frac{2^F}{B}\sum_{j\in {[n]^d}\setminus \{i\}} \abs{x'_j} \sum_{t=0}^{+\infty} 2^{(t+1)d-F (t-1)}\\
& \leq |\Delta_{h(i)}|+2^{O(d)} \frac{\norm{1}{x'}}{B},\\
\end{split}
\end{equation}
where
\begin{equation*}
\Delta_{h(i)} \lesssim N^{-\Omega(c)}.
\end{equation*}
Recalling that $G_{o_i(i)}^{-1}\leq (2\pi)^{d\cdot \fc}$ and $R^*\leq ||x||_\infty/\mu$
completes the proof of {\bf (1)}.
\end{proofof}
\section{Analysis of \textsc{ReduceL1Norm} and \textsc{SparseFFT}}\label{sec:reduce-l1}
In this section we first give a correctness proof and runtime analysis for \textsc{ReduceL1Norm} (section~\ref{sec:rl1n}), then analyze the SNR reduction loop in \textsc{SparseFFT} (section~\ref{sec:snr-loop}), and finally prove correctness of \textsc{SparseFFT} and provide runtime bounds in section~\ref{sec:sfft}.
\subsection{Analysis of \textsc{ReduceL1Norm}}\label{sec:rl1n}
The main result of this section is Lemma~\ref{lm:reduce-l1-norm} (restated below). Intuitively, the lemma shows that \textsc{ReduceL1Norm} reduces the $\ell_1$ norm of the head elements of the input signal $x-\chi$ by a polylogarithmic factor, and does not introduce too many new spurious elements (false positives) in the process. The introduced spurious elements, if any, do not contribute much $\ell_1$ mass to the head of the signal. Formally, we show
\noindent{\em {\bf Lemma~\ref{lm:reduce-l1-norm}}(Restated)
For any $x\in \mathbb{C}^N$, any integer $k\geq 1$, $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$ for $\alpha\in (0, 1]$ smaller than an absolute constant and $F\geq 2d, F=\Theta(d)$ the following conditions hold for the set $S:=\{i\in {[n]^d}: |x_i|>\mu\}$, where $\mu^2\geq ||x_{{[n]^d}\setminus [k]}||_2^2/k$. Suppose that $||x||_\infty/\mu=N^{O(1)}$.
For any sequence of hashings $H_r=(\pi_r, B, F)$, $r=1,\ldots, r_{max}$, if $S^*\subseteq S$ denotes the set of elements of $S$ that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $H_r, r=1,\ldots, r_{max}$, then for any $\chi\in \mathbb{C}^{[n]^d}$, $x':=x-\chi$, if $\nu\geq (\log^4 N)\mu$ is a parameter such that
\begin{description}
\item[A] $||(x-\chi)_S||_1\leq (\nu +20\mu)k$;
\item[B] $||\chi_{{[n]^d}\setminus S}||_0\leq \frac{1}{\log^{19} N}k$;
\item[C] $||(x-\chi)_{S^*}||_1+||\chi_{{[n]^d}\setminus S}||_1\leq \frac{\nu}{\log^4 N}k$,
\end{description}
the following conditions hold.
If parameters $r_{max}, c_{max}$ are chosen to be at least $(C_1/\sqrt{\alpha})\log\log N$, where $C_1$ is the constant from Theorem~\ref{thm:l1-res-loc} and measurements are taken as in Algorithm~\ref{alg:main-sublinear}, then the output $\chi'$ of the call
$$
\Call{ReduceL1Norm}{\chi, k, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{r=1, a\in {\mathcal{A}}_r, \mathbf{w}\in \H}^{r_{max}}, 4\mu (\log^4 N)^{T-t}, \mu}
$$
satisfies
\begin{enumerate}
\item $||(x'-\chi')_S||_1\leq \frac1{\log^{4} N} \nu k+20\mu k$ ~~~~~~~~($\ell_1$ norm of head elements is reduced by $\approx \log^4 N$ factor)
\item $||(\chi+\chi')_{{[n]^d}\setminus S}||_0\leq ||\chi_{{[n]^d}\setminus S}||_0+ \frac1{\log^{20} N}k$~~~~(few spurious coefficients are introduced)
\item $||(x'-\chi')_{S^*}||_1+||(\chi+\chi')_{{[n]^d}\setminus S}||_1\leq ||x'_{S^*}||_1+||\chi_{{[n]^d}\setminus S}||_1+\frac1{\log^{20} N}\nu k$ ~~~~($\ell_1$ norm of spurious coefficients does not grow fast)
\end{enumerate}
with probability at least $1-1/\log^{2} N$ over the randomness used to take measurements $m$ and by calls to \textsc{EstimateValues}.
The number of samples used is bounded by $2^{O(d^2)}k(\log\log N)^2$, and the runtime is bounded by $2^{O(d^2)} k\log^{d+2} N$.
}
Before giving the proof of Lemma~\ref{lm:reduce-l1-norm}, we prove two simple supporting lemmas.
\begin{lemma}[Few spurious elements are introduced in \textsc{ReduceL1Norm}]\label{lm:small-l0-increment}
For any $x\in \mathbb{C}^N$, any integer $k\geq 1$, $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$ for $\alpha\in (0, 1]$ smaller than an absolute constant and $F\geq 2d, F=\Theta(d)$ the following conditions hold for the set $S:=\{i\in {[n]^d}: |x_i|>\mu\}$, where $\mu^2\geq ||x_{{[n]^d}\setminus [k]}||_2^2/k$.
For any sequence of hashings $H_r=(\pi_r, B, F)$, $r=1,\ldots, r_{max}$, if $S^*\subseteq S$ denotes the set of elements of $S$ that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $H_r, r=1,\ldots, r_{max}$, then for any $\chi\in \mathbb{C}^{[n]^d}$, $x':=x-\chi$ the following conditions hold.
Consider the call
$$
\Call{ReduceL1Norm}{\chi, k, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{r=1, a\in {\mathcal{A}}_r, \mathbf{w}\in \H}^{r_{max}}, 4\mu (\log^4 N)^{T-t}, \mu},
$$
where we assume that measurements of $x$ are taken as in Algorithm~\ref{alg:main-sublinear}. Denote, for each $t=0,\ldots, \log_2(\log^4 N)$, the signal recovered by step $t$ in this call by $\chi^{(t)}$ (see Algorithm~\ref{alg:l1-norm-reduction}). There exists an absolute constant $C>0$ such that if for a parameter $\nu\geq 2^t\mu$ at step $t$
\begin{description}
\item[A] $||(x'-\chi^{(t)})_S||_1\leq (2^{-t}\nu+20\mu) k$;
\item[B] $||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0\leq \frac2{\log^{19} N} k$,
\item[C] $||(x'-\chi^{(t)})_{S^*}||_1+||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_1\leq \frac{2\nu}{\log^4 N} k$,
\end{description}
then with probability at least $1-(\log N)^{-3}$ over the randomness used in \textsc{EstimateValues} at step $t$ one has
$$
||(\chi+\chi^{(t+1)})_{{[n]^d}\setminus S}||_0-||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0\leq \frac{1}{\log^{21} N} k.
$$
\end{lemma}
\begin{proof}
Recall that $L'\subseteq L$ is the list output by \textsc{EstimateValues}. We let
$$
L''=\left\{i\in L: |\chi'_i-x'_i|>\alpha^{1/2}\left(2^{-t}\nu+20\mu\right)\right\}
$$ denote the set of elements of $L$ that failed to be estimated to within an additive $\alpha^{1/2}\left(2^{-t}\nu+20\mu\right)$ error term.
For any element $i\in L$ we consider two cases, depending on whether $i\in L'\setminus L''$ or $i\in L''$.
\paragraph{Case 1:} First suppose that $i\in L'\setminus L''$, i.e. $|x'_i-\chi'_i|<\alpha^{1/2}(2^{-t}\nu+20\mu)$. Then if $\alpha$ is smaller than an absolute constant, we have
$$
|x'_i|>\frac1{1000}\nu 2^{-t}+4\mu-(\alpha^{1/2}(2^{-t}\nu+20\mu))\geq 2\mu,
$$
because only elements $i$ with $|\chi'_i|>\frac1{1000}\nu 2^{-t}+4\mu$ are included in the set $L'$ in the call
$$
\chi' \gets \Call{EstimateValues}{x, \chi^{(t)}, L, k, {\epsilon}, C(\log\log N+d^2+O(\log (B/k))), \frac1{1000}\nu 2^{-t}+4\mu}
$$
due to the pruning threshold of $\frac1{1000}\nu 2^{-t}+4\mu$ passed to \textsc{EstimateValues} in the last argument.
Since $||x_{{[n]^d}\setminus S}||_\infty\leq \mu$ by definition of $S$, this means that either $i\in S$, or $i\in \supp \chi^{(t)}$. In both cases $i$ contributes at most $0$ to
$||(\chi+\chi^{(t+1)})_{{[n]^d}\setminus S}||_0-||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0$.
\paragraph{Case 2:}Now suppose that $i\in L''$, i.e. $|(x'-\chi')_i|\geq \alpha^{1/2}(2^{-t}\nu+20\mu)$. In this case $i$ may contribute $1$ to
$||(\chi+\chi^{(t+1)})_{{[n]^d}\setminus S}||_0-||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0$. However,
the number of elements in $L''$ is small. To show this, we invoke Lemma~\ref{lm:estimate-l1l2} to obtain precision guarantees for the call to \textsc{EstimateValues} on the pair $x, \chi$ and set of `head elements' $S\cup \supp \chi$. Note that $|S|\leq 2k$, as otherwise we would have $||x_{{[n]^d}\setminus [k]}||_2^2> \mu^2\cdot k$, a contradiction. Further, by assumption {\bf B} of the lemma we have $||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0\leq k$, so $|S\cup \supp (\chi+\chi^{(t)})|\leq 4k$. The $\ell_1$ norm of $x'-\chi^{(t)}$ on $S\cup \supp (\chi+\chi^{(t)})$ can be bounded as
\begin{equation*}
\begin{split}
\frac{||(x'-\chi^{(t)})_S||_1+||(x'-\chi^{(t)})_{\text{supp}(\chi+\chi^{(t)})\setminus S}||_1}{4k}&\leq \frac{||(x'-\chi^{(t)})_S||_1+||\chi^{(t)}_{{[n]^d}\setminus S}||_1+||x'_{{[n]^d}\setminus S}||_\infty\cdot |\supp (\chi+\chi^{(t)})|}{4k}\\
&\leq \frac{(2^{-t}\nu+20\mu )k+\frac{2\nu}{\log^4 N} k+\mu\cdot (4k)}{4k}\leq 2^{-t}\nu+20\mu.
\end{split}
\end{equation*}
For the $\ell_2$ bound on the tail of the signal we have
$$
\frac{||(x'-\chi^{(t)})_{{[n]^d}\setminus (S\cup \supp (\chi+\chi^{(t)}))}||_2^2}{4k}\leq \frac{||x_{{[n]^d}\setminus S}||_2^2}{4k}\leq \mu^2.
$$
We thus have by Lemma~\ref{lm:estimate-l1l2}, {\bf (1)} for every $i\in L'$ that the estimate $w_i$ returned by \textsc{EstimateValues} satisfies
$$
{\bf \mbox{\bf Pr}}[|w_i-x'_i|>\alpha^{1/2}(2^{-t}\nu+20\mu)]<2^{-\Omega(r_{max})}.
$$
Since $r_{max}$ is chosen as $r_{max}=C(\log\log N+d^2+\log (B/k))$ for a sufficiently large absolute constant $C>0$, we have
$$
{\bf \mbox{\bf Pr}}[|w_i-x'_i|>\alpha^{1/2}(2^{-t}\nu+20\mu)]<2^{-\Omega(r_{max})}\leq (k/B)\cdot (\log N)^{-25}.
$$
This means that
$$
{\bf \mbox{\bf E}}[|L''|]\leq |L|\cdot (k/B)\cdot (\log N)^{-25}\leq (B\cdot r_{max})(k/B)\cdot (\log N)^{-25}\leq k(\log N)^{-23},
$$
where the expectation is over the randomness used in \textsc{EstimateValues}. We used the facts that $|L'|\leq |L|\leq B\cdot r_{max}$ and that $r_{max}\leq \log^2 N$ to derive the upper bound above. An application of Markov's inequality completes the proof.
\end{proof}
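The final Markov step can be sanity-checked with illustrative parameters (the constant $C=1$ in $r_{max}$ and the ratio $B=16k$ are placeholders, not the values fixed by the analysis):

```python
import math

def spurious_failure_prob(N, k, B, r_max):
    # E[|L''|] <= |L| * (k/B) * (log N)^-25  with  |L| <= B * r_max,
    # then Markov's inequality against the threshold k / log^21 N
    exp_bad = (B * r_max) * (k / B) * math.log(N)**(-25)  # = k*r_max/log^25 N
    return exp_bad / (k / math.log(N)**21)                # = r_max / log^4 N

N, k = 2**20, 1000
r_max = math.ceil(math.log(math.log(N)))   # ~ loglog N (C = 1, illustrative)
# failure probability r_max/log^4 N is comfortably below 1/log^3 N
assert spurious_failure_prob(N, k, 16 * k, r_max) <= 1 / math.log(N)**3
```

The $k$ cancels in the ratio, so the failure probability $r_{max}/\log^4 N\leq 1/\log^3 N$ depends only on $N$, matching the $1-(\log N)^{-3}$ guarantee in the lemma statement.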
\begin{lemma}[Spurious elements do not introduce significant $\ell_1$ error]\label{lm:small-l1-increment}
For any $x\in \mathbb{C}^N$, any integer $k\geq 1$, $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$ for $\alpha\in (0, 1]$ smaller than an absolute constant and $F\geq 2d, F=\Theta(d)$ the following conditions hold for the set $S:=\{i\in {[n]^d}: |x_i|>\mu\}$, where $\mu^2\geq ||x_{{[n]^d}\setminus [k]}||_2^2/k$.
For any sequence of hashings $H_r=(\pi_r, B, F)$, $r=1,\ldots, r_{max}$, if $S^*\subseteq S$ denotes the set of elements of $S$ that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of the hashings $H_r, r=1,\ldots, r_{max}$, then for any $\chi\in \mathbb{C}^{[n]^d}$, $x':=x-\chi$ the following conditions hold.
Consider the call
$$
\Call{ReduceL1Norm}{\chi, k, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{r=1, a\in {\mathcal{A}}_r, \mathbf{w}\in \H}^{r_{max}}, 4\mu (\log^4 N)^{T-t}, \mu},
$$
where we assume that measurements of $x$ are taken as in Algorithm~\ref{alg:main-sublinear}. Denote, for each $t=0,\ldots, \log_2(\log^4 N)$, the signal recovered by step $t$ in this call by $\chi^{(t)}$ (see Algorithm~\ref{alg:l1-norm-reduction}). There exists an absolute constant $C>0$ such that if for a parameter $\nu\geq 2^t \mu$ at step $t$
\begin{description}
\item[A] $||(x'-\chi^{(t)})_S||_1\leq (2^{-t}\nu+20\mu) k$;
\item[B] $||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0\leq \frac2{\log^{19} N} k$;
\item[C] $||(x'-\chi^{(t)})_{S^*}||_1+||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_1\leq \frac{2\nu}{\log^4 N} k$,
\end{description}
then with probability at least $1-(\log N)^{-3}$ over the randomness used in \textsc{EstimateValues} at step $t$ one has
$$
||(x'-\chi^{(t+1)})_{({[n]^d}\setminus S)\cup S^*}||_1-||(x'-\chi^{(t)})_{({[n]^d}\setminus S)\cup S^*}||_1\leq \frac{1}{\log^{21} N}k (\nu+\mu)
$$
\end{lemma}
\begin{proof}
We let $Q:=({[n]^d}\setminus S)\cup S^*$ to simplify notation, and recall that $L'\subseteq L$ is the list output by \textsc{EstimateValues}. We let
$$
L''=\left\{i\in L: |\chi'_i-x'_i|>\alpha^{1/2}\left(2^{-t}\nu+20\mu\right)\right\}
$$
denote the set of elements of $L$ that failed to be estimated to within an additive $\alpha^{1/2}\left(2^{-t}\nu+20\mu\right)$ error term.
We write
\begin{equation}\label{eq:dfskgj}
\begin{split}
||(x'-\chi^{(t+1)})_Q||_1&\leq ||(x'-\chi^{(t+1)})_{Q\setminus L'}||_1+||(x'-\chi^{(t+1)})_{(Q\cap L')\setminus L''}||_1+||(x'-\chi^{(t+1)})_{Q\cap L''}||_1\\
\end{split}
\end{equation}
We first note that $\chi^{(t+1)}_i=\chi^{(t)}_i$ for all $i\not \in L'$, and hence $||(x'-\chi^{(t+1)})_{Q\setminus L'}||_1=||(x'-\chi^{(t)})_{Q\setminus L'}||_1$.
Second, for $i\in (Q\cap L')\setminus L''$ (the second term) one has $|x'_i-\chi_i^{(t+1)}|\leq \sqrt{\alpha}(2^{-t}\nu+20\mu)$. Since only elements $i\in L$ with $|\chi'_i|>\frac1{1000}\nu 2^{-t}+4\mu$ are reported by the threshold setting in \textsc{EstimateValues}, we have
$|x'_i-\chi'_i|\leq \sqrt{\alpha} (2^{-t}\nu+20\mu)\leq |x'_i|$ as long as $\alpha$ is smaller than a constant. We thus get that $||(x'-\chi^{(t+1)})_{(Q\cap L')\setminus L''}||_1\leq ||(x'-\chi^{(t)})_{(Q\cap L')\setminus L''}||_1$.
For the third term, we note that for each $i\in L$ the estimate $w_i$ computed in the call to \textsc{EstimateValues} satisfies
\begin{equation}\label{eq:9FHFJdhdjd}
{\bf \mbox{\bf E}}\left[\left||w_i-x'_i|-\sqrt{\alpha}(2^{-t}\nu+20\mu)\right|_+\right]\leq \sqrt{\alpha}(2^{-t}\nu+\mu)k2^{-\Omega(r_{max})}
\end{equation}
by Lemma~\ref{lm:estimate-l1l2}, {\bf (2)}. Verification of the preconditions of the lemma is identical to Lemma~\ref{lm:small-l0-increment} (note that the assumptions of this lemma and Lemma~\ref{lm:small-l0-increment} are identical) and is hence omitted. Since $r_{max}=C(\log\log N+d^2+\log (B/k))$, the rhs of~\eqref{eq:9FHFJdhdjd} is bounded by $(\log N)^{-25} \sqrt{\alpha}(2^{-t}\nu+\mu)k$ as long as $C>0$ is larger than an absolute constant.
We thus have
\begin{equation*}
\begin{split}
||(x'-\chi^{(t+1)})_{S\cap L''}||_1\leq \sum_{i\in S\cap L''} \left(\sqrt{\alpha}(2^{-t}\nu+20\mu)+\left||w_i-x'_i|-\sqrt{\alpha}(2^{-t}\nu+20\mu)\right|_+\right).
\end{split}
\end{equation*}
Combining~\eqref{eq:9FHFJdhdjd} with the fact that, by Lemma~\ref{lm:estimate-l1l2}, {\bf (1)}, we have for every $i\in L$
$$
{\bf \mbox{\bf Pr}}\left[|w_i-x'_i|>\sqrt{\alpha}(2^{-t}\nu+20\mu)\right]\leq 2^{-\Omega(r_{max})}\leq (k/B)\cdot (\log N)^{-25}
$$
by our choice of $r_{max}$, we get that
$$
||(x'-\chi^{(t+1)})_{S\cap L''}||_1\leq 2\sqrt{\alpha}(2^{-t}\nu+20\mu)\cdot|L|\cdot (k/B)\cdot (\log N)^{-25}.
$$
An application of Markov's inequality then implies, if $\alpha$ is smaller than an absolute constant, that
$$
{\bf \mbox{\bf Pr}}[||(x'-\chi^{(t+1)})_{S\cap L''}||_1> \frac1{\log^{21} N}(\nu+\mu)k]<1/\log^3 N.
$$
Substituting the bounds we just derived into \eqref{eq:dfskgj}, we get
$$
||(x'-\chi^{(t+1)})_Q||_1\leq ||(x'-\chi^{(t)})_Q||_1+\frac1{\log^{21} N}(\nu+\mu)k
$$
as required.
\end{proof}
Equipped with the two lemmas above, we can now give a proof of Lemma~\ref{lm:reduce-l1-norm}:
\begin{proofof}{Lemma~\ref{lm:reduce-l1-norm}}
We prove the result by strong induction on $t=0,\ldots, \log_2 (\log^4 N)$. Specifically, we prove that there exist events ${\mathcal{E}}_t, t=0,\ldots, \log_2 (\log^4 N)$ such that {\bf (a)} ${\mathcal{E}}_t$ depends on the randomness used in the call to \textsc{EstimateValues} at step $t$, ${\mathcal{E}}_t$ satisfies ${\bf \mbox{\bf Pr}}[{\mathcal{E}}_t|{\mathcal{E}}_0\wedge \ldots {\mathcal{E}}_{t-1}]\geq 1-3/\log^2 N$ and {\bf (b)} for all $t$ conditional on ${\mathcal{E}}_0\wedge {\mathcal{E}}_1\wedge\ldots\wedge{\mathcal{E}}_t$ one has
\begin{description}
\item[(1)] $||(x'-\chi^{(t)})_{S\setminus S^*}||_1\leq (2^{-t}\nu+20\mu) k$;
\item[(2)] $||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0\leq ||\chi_{{[n]^d}\setminus S}||_0+\frac{t}{\log^{21} N}k$;
\item[(3)] $||(x'-\chi^{(t)})_{S^*}||_1+||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_1\leq ||x'_{S^*}||_1+||\chi_{{[n]^d}\setminus S}||_1+\frac{t}{\log^{21} N}\nu k$
\end{description}
The {\bf base} is provided by $t=0$ and is trivial since $\chi^{(0)}=0$. We now give the {\bf inductive step}.
We start by proving the inductive step for {\bf (2)} and {\bf (3)}. We will use Lemma~\ref{lm:small-l0-increment} and Lemma~\ref{lm:small-l1-increment}, and hence we start by verifying that their preconditions (which are identical for the two lemmas) are satisfied. Precondition {\bf A} is satisfied directly by inductive hypothesis {\bf (1)}. Precondition {\bf B} is satisfied since
$$
||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0\leq ||\chi_{{[n]^d}\setminus S}||_0+\frac{t}{\log^{21} N}k\leq \frac1{\log^{19} N} k+\frac{\log_2(\log^4 N)}{\log^{21} N}k\leq \frac2{\log^{19} N} k,
$$
where we used assumption {\bf B} of this lemma and inductive hypothesis {\bf (2)}. Precondition {\bf C} is satisfied since
$$
||(x'-\chi^{(t)})_{S^*}||_1+||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_1\leq ||x'_{S^*}||_1+||\chi_{{[n]^d}\setminus S}||_1+\frac{t}{\log^{21} N}\nu k \leq \frac{\nu}{\log^4 N} k
+\frac{t}{\log^{21} N}\nu k\leq \frac{2\nu}{\log^4 N} k,
$$
where we used assumption {\bf C} of this lemma, inductive assumption {\bf (3)} and the fact that $t\leq \log_2 (\log^4 N)\leq \log N$ for sufficiently large $N$.
\paragraph{Proving {\bf (2)}.}
To prove the inductive step for {\bf (2)}, we use Lemma~\ref{lm:small-l0-increment}. Lemma~\ref{lm:small-l0-increment} shows that with probability at least $1-(\log N)^{-2}$ over the randomness used in \textsc{EstimateValues} (denote the success event by ${\mathcal{E}}^1_t$) we have
$$
||(\chi+\chi^{(t+1)})_{{[n]^d}\setminus S}||_0-||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0\leq \frac{1}{\log^{21} N}k,
$$
so $||(\chi+\chi^{(t+1)})_{{[n]^d}\setminus S}||_0\leq ||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_0+\frac{1}{\log^{21} N}k\leq ||\chi_{{[n]^d}\setminus S}||_0+\frac{t+1}{\log^{21} N}k$ as required.
\paragraph{Proving {\bf (3)}.}
At the same time we have by Lemma~\ref{lm:small-l1-increment} that with probability at least $1-(\log N)^{-2}$ (denote the success event by ${\mathcal{E}}^2_t$)
$$
||(x'-\chi^{(t+1)})_{({[n]^d}\setminus S)\cup S^*}||_1-||(x'-\chi^{(t)})_{({[n]^d}\setminus S)\cup S^*}||_1\leq \frac{1}{\log^{21} N}k \nu,
$$
so by combining this with assumption {\bf (3)} of the lemma we get
$$
||(x'-\chi^{(t+1)})_{({[n]^d}\setminus S)\cup S^*}||_1\leq \frac{1}{\log^{20} N}\nu k+\frac{t+1}{\log^{21} N}\nu k
$$
as required.
\paragraph{Proving {\bf (1)}.}
We let $L''\subseteq L$ denote the set of elements in $L$ that fail to be estimated to within a small additive error. Specifically, we let
$$
L''=\left\{i\in L: |\chi'_i-x'_i|>\alpha^{1/2}\left(2^{-t}\nu+20\mu\right)\right\},
$$
where $\chi'$ is the output of \textsc{EstimateValues} in iteration $t$. We bound $||(x'-\chi^{(t+1)})_{S\setminus S^*}||_1$ by splitting this $\ell_1$ norm into three terms, depending on whether the corresponding elements were updated in iteration $t$ and whether they were well estimated. We have
\begin{equation}\label{eq:org-bound-pihwrve33v}
\begin{split}
&||(x'-\chi^{(t+1)})_{S\setminus S^*}||_1=||(x'-(\chi^{(t)}+\chi'))_{S\setminus S^*}||_1\\
&\leq ||(x'-(\chi^{(t)}+\chi'))_{S\setminus (S^*\cup L)}||_1+||(x'-(\chi^{(t)}+\chi'))_{(S\cap L)\setminus L'\setminus L''}||_1+||(x'-(\chi^{(t)}+\chi'))_{(S\cap L')\setminus L''}||_1\\
&+||(x'-(\chi^{(t)}+\chi'))_{(S\cap L)\cap L''}||_1\\
&=||(x'-\chi^{(t)})_{S\setminus (S^*\cup L)}||_1+||(x'-(\chi^{(t)}+\chi'))_{(S\cap L)\setminus L'\setminus L''}||_1+||(x'-(\chi^{(t)}+\chi'))_{(S\cap L')\setminus L''}||_1\\
&+||(x'-(\chi^{(t)}+\chi'))_{(L\cap S)\cap L''}||_1\\
&=:S_1+S_2+S_3+S_4,
\end{split}
\end{equation}
where we used the fact that $\chi'_{S\setminus L}\equiv 0$ to go from the second line to the third. We now bound the four terms.
The {\bf second term} (i.e. $S_2$) captures elements of $S$ that were estimated precisely (hence are not in $L''$), but were not included in $L'$ because they did not pass the threshold test in \textsc{EstimateValues} (i.e. were not estimated as larger than $\frac1{1000}2^{-t}\nu+4\mu$). One thus has
\begin{equation}\label{eq:l2p}
\begin{split}
||(x'-(\chi^{(t)}+\chi'))_{(S\cap L)\setminus L'\setminus L''}||_1&\leq \alpha^{1/2}(2^{-t}\nu+20\mu)\cdot |(S\cap L)\setminus L'\setminus L''|+(\frac1{1000} 2^{-t}\nu+4\mu)\cdot |(S\cap L)\setminus L'\setminus L''|\\
& \leq ((\frac{1}{1000}+\alpha^{1/2})2^{-t}\nu+(4+20\alpha^{1/2})\mu)2k
\end{split}
\end{equation}
since $|S|\leq 2k$ by assumption of the lemma.
The {\bf third term} (i.e. $S_3$) captures elements of $S$ that were reported by \textsc{EstimateValues} (hence belong to $L'$) and were approximated well (hence do not belong to $L''$). One has, by definition of the set $L''$,
\begin{equation}\label{eq:23grgerg}
\begin{split}
||(x'-(\chi^{(t)}+\chi'))_{(S\cap L')\setminus L''}||_1&\leq \alpha^{1/2}(2^{-t}\nu+20\mu)\cdot |(S\cap L')\setminus L''|\\
& \leq 2\alpha^{1/2}(2^{-t}\nu+20\mu)k
\end{split}
\end{equation}
since $|S|\leq 2k$ by assumption of the lemma.
For the {\bf fourth term} (i.e. $S_4$) we have
$$
||(x'-(\chi^{(t)}+\chi'))_{L''}||_1\leq \alpha^{1/2}\left(2^{-t}\nu+20\mu\right)\cdot |L''|+\sum_{i\in S} \left||\chi'_i-x'_i|-\alpha^{1/2}\left(2^{-t}\nu+20\mu\right)\right|_+.
$$
By Lemma~\ref{lm:estimate-l1l2}, {\bf (1)} (invoked on the set $S\cup \supp(\chi+\chi^{(t)}+\chi')$) we have ${\bf \mbox{\bf E}}[|L''|]\leq B\cdot 2^{-\Omega(r_{max})}$, and by Lemma~\ref{lm:estimate-l1l2}, {\bf (2)},
for any $i$ one has
$$
{\bf \mbox{\bf E}}\left[\left||\chi'_i-x'_i|-\alpha^{1/2}\left(2^{-t}\nu+20\mu\right)\right|_+\right]\leq \alpha^{1/2}\left(2^{-t}\nu+20\mu\right) 2^{-\Omega(r_{max})}.
$$
Since the parameter $r_{max}$ in \textsc{EstimateValues} is chosen to be at least $C(\log\log N+d^2+\log (B/k))$ for a sufficiently large constant $C$, and $|L|=O(\log N)B$, we have
\begin{equation*}
{\bf \mbox{\bf E}}\left[||(x'-(\chi^{(t)}+\chi'))_{L''}||_1\right]\leq \alpha^{1/2}\left(2^{-t}\nu+20\mu\right) |L| 2^{-\Omega(r_{max})}\leq \frac1{\log^{25} N}\left(2^{-t}\nu+20\mu\right)k.
\end{equation*}
By Markov's inequality we thus have
\begin{equation}\label{eq:l2p-11}
||(x'-(\chi^{(t)}+\chi'))_{L''}||_1\leq (\log^3 N)\cdot\frac1{\log^{25} N}\left(2^{-t}\nu+20\mu\right)k\leq \frac1{\log^{22} N}\left(2^{-t}\nu+20\mu\right)k
\end{equation}
with probability at least $1-1/\log^3 N$. Denote the success event by ${\mathcal{E}}_t^0$.
Finally, in order to bound the {\bf first term} (i.e. $S_1$), we invoke Theorem~\ref{thm:l1-res-loc} to analyze the call to \textsc{LocateSignal} in the $t$-th iteration.
We note that since $r_{max}, c_{max}\geq (C_1/\sqrt{\alpha})\log\log N$ (where $C_1$ is the constant from Theorem~\ref{thm:l1-res-loc}) by assumption of the lemma, the preconditions of Theorem~\ref{thm:l1-res-loc} are satisfied. By Theorem~\ref{thm:l1-res-loc} together with {\bf (1)} and {\bf (3)} of the inductive hypothesis we have
\begin{equation}\label{eq:l111}
\begin{split}
||(x'-\chi^{(t)})_{S\setminus (S^*\cup L)}||_1&\leq (4C_2\alpha)^{d/2} ||(x'-\chi^{(t)})_{S\setminus S^*}||_1+(4C)^{d^2}( ||(\chi+\chi^{(t)})_{{[n]^d}\setminus S}||_1+||(x'-\chi^{(t)})_{S^*}||_1)+4\mu |S|\\
&\leq O((4C_2\alpha)^{d/2}) (2^{-t}\nu+20\mu)k+(4C)^{d^2}(\frac{2}{\log^{20} N} \nu k)+8\mu k\\
&\leq \frac1{1000} (2^{-t}\nu+20\mu)k+8\mu k\\
\end{split}
\end{equation}
if $\alpha$ is smaller than an absolute constant and $N$ is sufficiently large.
Now substituting bounds on $S_1, S_2, S_3, S_4$ provided by ~\eqref{eq:l111},~\eqref{eq:l2p},~\eqref{eq:23grgerg} and~\eqref{eq:l2p-11} into~\eqref{eq:org-bound-pihwrve33v} we get
\begin{equation*}
\begin{split}
||(x'-\chi^{(t+1)})_{S\setminus S^*}||_1&\leq (\frac{3}{1000}+O(\alpha^{1/2})) 2^{-t}\nu k+(16+O(\alpha^{1/2}))\mu k\\
&\leq 2^{-(t+1)}\nu k+20\mu k\\
\end{split}
\end{equation*}
when $\alpha$ is a sufficiently small constant, as required. This proves the inductive step for {\bf (1)} and completes the proof of the induction.
Let ${\mathcal{E}}_t={\mathcal{E}}^0_t\wedge {\mathcal{E}}^1_t \wedge {\mathcal{E}}^2_t$ denote the success event for step $t$. We have by a union bound ${\bf \mbox{\bf Pr}}[{\mathcal{E}}_t]\geq 1-3t/\log^2 N$ as required.
\paragraph{Sample complexity and runtime} It remains to bound the sampling complexity and runtime. First note that \textsc{ReduceL1Norm} only takes fresh samples in the calls to \textsc{EstimateValues} that it issues.
By Lemma~\ref{lm:estimate-l1l2} each such call uses $2^{O(d^2)} k(\log\log N)$ samples, amounting to $2^{O(d^2)} k(\log\log N)^2$ samples over $O(\log\log N)$ iterations.
By Lemma~\ref{lm:loc} each call to \textsc{LocateSignal} takes $O(B(\log N)^{3/2})$ time. Updating the measurements $m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w})), \mathbf{w}\in \H$ takes
$$
|\H|c_{max} r_{max}\cdot F^{O(d)}\cdot B \log^{d+1} N \log\log N=2^{O(d^2)}\cdot k\log^{d+2} N
$$ time overall.
The runtime complexity of the calls to \textsc{EstimateValues} is $2^{O(d^2)}\cdot k \log^{d+1} N(\log\log N)^2$ time overall.
Thus, the runtime is bounded by $2^{O(d^2)}k \log^{d+2} N$.
\end{proofof}
\subsection{Analysis of SNR reduction loop in \textsc{SparseFFT}}\label{sec:snr-loop}
In this section we prove
\noindent{\em {\bf Theorem~\ref{thm:l1snr}}
For any $x\in \mathbb{C}^N$ and any integer $k\geq 1$, if $\mu^2\geq \err_k^2(x)/k$ and $R^*\geq ||x||_\infty/\mu=N^{O(1)}$, then the following conditions hold for the set $S:=\{i\in {[n]^d}: |x_i|>\mu\}\subseteq {[n]^d}$.
The SNR reduction loop of Algorithm~\ref{alg:main-sublinear} (lines~19-25) returns $\chi^{(T)}$ such that
\begin{equation*}
\begin{split}
&||(x-\chi^{(T)})_S||_1\lesssim \mu k\text{~~~~~~~~~~~~~~~~~($\ell_1$-SNR on head elements is constant)}\\
&||\chi^{(T)}_{{[n]^d}\setminus S}||_1\lesssim \mu k \text{~~~~~~~~~~~~~~~~~~~~~~~~~~(spurious elements contribute little in $\ell_1$ norm)}\\
&||\chi^{(T)}_{{[n]^d}\setminus S}||_0\lesssim \frac1{\log^{19} N} k\text{~~~~~~~~~~~~~(small number of spurious elements have been introduced)}
\end{split}
\end{equation*}
with probability at least $1-1/\log N$ over the internal randomness used by Algorithm~\ref{alg:main-sublinear}. The sample complexity is $2^{O(d^2)}k\log N(\log\log N)$. The runtime is bounded by $2^{O(d^2)} k \log^{d+3} N$.
}
\begin{proof}
We start with correctness. We prove by induction that after the $t$-th iteration one has
\begin{description}
\item[(1)] $||(x-\chi^{(t)})_S||_1\leq 4(\log^4 N)^{T-t} \mu k+20\mu k$;
\item[(2)] $||x-\chi^{(t)}||_\infty=O((\log^4 N)^{T-(t-1)} \mu)$;
\item[(3)] $||\chi^{(t)}_{{[n]^d}\setminus S}||_0\leq \frac{t}{\log^{20} N} k$.
\end{description}
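Note that instantiating claim {\bf (1)} at $t=T$ already yields the first conclusion of the theorem, since
$$
4(\log^4 N)^{T-T}\mu k+20\mu k=24\mu k=O(\mu k).
$$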
The base case is $t=0$, where all claims hold trivially by definition of $R^*$.
We now prove the inductive step. The main tool here is Lemma~\ref{lm:reduce-l1-norm}, applied with $\nu=4(\log^4 N)^{T-t} \mu$, so we start by verifying that its preconditions are satisfied.
First, $|S^*|\leq 2^{-\Omega(r_{max})}|S|\leq 2^{-\Omega(r_{max})}k\leq \frac1{\log^{19} N}k$ with probability at least $1-2^{-\Omega(r_{max})}\geq 1-1/\log N$ by Lemma~\ref{lm:good-prob} and the choice of $r_{max}\geq (C/\sqrt{\alpha})\log\log N$ for a sufficiently large constant $C>0$. Second, by Claim~\ref{cl:balanced}, with probability at least $1-1/\log^{2} N$, for every $s\in [1:d]$ the sets ${\mathcal{A}}_r\star (\mathbf{0}, \mathbf{e}_s)$ are balanced (as per Definition~\ref{def:balance} with $\Delta=2^{\lfloor \frac1{2}\log_2 \log_2 n\rfloor}$, as needed for Algorithm~\ref{alg:location}). Third, by {\bf (2)} of the inductive hypothesis one has $||x-\chi^{(t)}||_\infty/\mu=R^* \cdot O(\log N)=N^{O(1)}$.
Now, assuming the inductive hypothesis {\bf (1)-(3)}, we verify the remaining preconditions. For {\bf (A)}, inductive hypothesis {\bf (1)} gives $||(x-\chi^{(t)})_S||_1\leq 4(\log^4 N)^{T-t} \mu k=\nu k$, as required. We have
\begin{equation}\label{eq:123rergregregvvv}
\begin{split}
||(x-\chi^{(t)})_{S^*}||_1+||\chi^{(t)}_{{[n]^d}\setminus S}||_1&\leq ||x-\chi^{(t)}||_\infty\cdot (||(x-\chi^{(t)})_{S^*}||_0+||\chi^{(t)}_{{[n]^d}\setminus S}||_0)\\
&\leq O(\log^4 N)\cdot \nu \cdot \left(\frac1{\log^{19} N}k+\frac{t}{\log^{20} N}k \right)\leq \frac{16}{\log^{14} N}\nu k
\end{split}
\end{equation}
for sufficiently large $N$. Since the rhs is less than $\frac1{\log^4 N}\nu k$, precondition {\bf (C)} of Lemma~\ref{lm:reduce-l1-norm} is also satisfied. Precondition {\bf (B)} of Lemma~\ref{lm:reduce-l1-norm} is satisfied by inductive hypothesis {\bf (3)}, together with the fact that $T=o(\log R^*)=o(\log N)$.
Thus, all preconditions of Lemma~\ref{lm:reduce-l1-norm} are satisfied. Then by Lemma~\ref{lm:reduce-l1-norm} with $\nu=4(\log^4 N)^{T-t} \mu$
one has with probability at least $1-1/\log^2 N$
\begin{enumerate}
\item $||(x'-\chi^{(t)}-\chi')_S||_1\leq \frac1{\log^4 N}\nu k+20\mu k$;
\item $||(\chi^{(t)}+\chi')_{{[n]^d}\setminus S}||_0-||\chi^{(t)}_{{[n]^d}\setminus S}||_0\leq \frac1{\log^{20} N}k$;
\item $||(x'-(\chi^{(t)}+\chi'))_{S^*}||_1+||(\chi^{(t)}+\chi')_{{[n]^d}\setminus S}||_1\leq ||(x'-\chi^{(t)})_{S^*}||_1+||\chi^{(t)}_{{[n]^d}\setminus S}||_1+\frac1{\log^{20} N}\nu k$.
\end{enumerate}
Combining 1 above with~\eqref{eq:123rergregregvvv} proves {\bf (1)} of the inductive step:
\begin{equation*}
\begin{split}
||(x-\chi^{(t+1)})_S||_1=||(x-\chi^{(t)}-\chi')_S||_1&\leq \frac1{\log^4 N}\nu k+20\mu k=\frac1{\log^4 N}4(\log^4 N)^{T-t} \mu k+20\mu k\\
&=4(\log^4 N)^{T-(t+1)} \mu k+20\mu k.
\end{split}
\end{equation*}
Also, combining 2 above with the fact that $||\chi^{(t)}_{{[n]^d}\setminus S}||_0\leq \frac{t}{\log^{20} N} k$ yields $||\chi^{(t+1)}_{{[n]^d}\setminus S}||_0\leq \frac{t+1}{\log^{20} N} k$ as required.
In order to complete the inductive step it remains to analyze the call to \textsc{ReduceInfNorm}, for which we use Lemma~\ref{lm:linf} with parameter $\tilde k=4k/\log^4 N$. We first verify that the preconditions of the lemma are satisfied. Let $y:=x-(\chi+\chi^{(t)}+\chi')$ to simplify notation. We need to verify that
\begin{equation}\label{eq:head-c-ojg23g2g}
||y_{[\tilde k]}||_1/\tilde k\leq (\log^4 N)\cdot (\frac1{\log^4 N}\nu+20 \mu)
\end{equation}
and
\begin{equation}\label{eq:pih2gg3rg}
||y_{{[n]^d}\setminus [\tilde k]}||_2/\sqrt{\tilde k}\leq (\log^4 N)\cdot (\frac1{\log^4 N}\nu +20\mu ),
\end{equation}
where we recall that $\tilde k=4k/\log^4 N$. The first condition is easy to verify, as we now show. Indeed, we have
\begin{equation*}
\begin{split}
&||y_{[\tilde k]}||_1\leq ||y_S||_1+||y_{\text{supp}(\chi^{(t)}+\chi')\setminus S}||_1+||x_{{[n]^d}\setminus S}||_\infty \cdot \tilde k\\
&\leq ||y_S||_1 + ||(\chi^{(t)}+\chi')_{{[n]^d}\setminus S}||_1+||x_{\text{supp}(\chi^{(t)}+\chi')\setminus S}||_\infty\cdot \tilde k+||x_{{[n]^d}\setminus S}||_\infty\cdot \tilde k\\
&\leq \frac1{\log^4 N}\nu k+20\mu k + \frac1{\log^4 N}\nu k+2\mu \tilde k\leq \frac2{\log^4 N}\nu k+40 \mu k,
\end{split}
\end{equation*}
where we used the triangle inequality to upper bound $||y_{\text{supp}(\chi^{(t)}+\chi')\setminus S}||_1$ by $||(\chi^{(t)}+\chi')_{{[n]^d}\setminus S}||_1+||x_{\text{supp}(\chi^{(t)}+\chi')\setminus S}||_\infty\cdot \tilde k$ to go from the first line to the second. We thus have
$$
||y_{[\tilde k]}||_1/\tilde k\leq (\frac2{\log^4 N}\nu k+40 \mu k)/(4k/\log^4 N)\leq (\log^4 N)\cdot (\frac1{\log^4 N}\nu+20 \mu)
$$
as required. This establishes~\eqref{eq:head-c-ojg23g2g}.
To verify the second condition, we first let $\tilde S:=S\cup \supp(\chi+\chi^{(t)}+\chi')$ to simplify notation. We have
\begin{equation}\label{eq:11111}
\begin{split}
||y_{{[n]^d}\setminus [\tilde k]}||_2^2&=||y_{\tilde S\setminus [\tilde k]}||_2^2+||y_{({[n]^d}\setminus \tilde S)\setminus [\tilde k]}||_2^2\leq ||y_{\tilde S\setminus [\tilde k]}||_2^2+\mu^2 k,
\end{split}
\end{equation}
where we used the fact that $y_{{[n]^d}\setminus \tilde S}=x_{{[n]^d}\setminus \tilde S}$ and hence $||y_{({[n]^d}\setminus \tilde S)\setminus [\tilde k]}||_2^2\leq \mu^2 k$.
We now note that $||y_{\tilde S\setminus [\tilde k]}||_1\leq ||y_{\tilde S}||_1\leq 2(\frac1{\log^4 N}\nu k+20\mu k)$, and so it must be that $||y_{\tilde S\setminus [\tilde k]}||_\infty\leq 2(\frac1{\log^4 N}\nu+20\mu) (k/\tilde k)$, as otherwise each of the top $\tilde k$ elements of $y$ would exceed this value, and they would contribute more than $2(\frac1{\log^4 N}\nu k+20\mu k)$ to $||y_{\tilde S}||_1$, a contradiction. With these constraints $||y_{\tilde S\setminus [\tilde k]}||_2^2$ is maximized when there are $\tilde k$ elements in $y_{\tilde S\setminus [\tilde k]}$, all equal to the maximum possible value, i.e. $||y_{\tilde S\setminus [\tilde k]}||_2^2\leq 4(\frac1{\log^4 N}\nu+20\mu)^2 (k/\tilde k)^2 \tilde k$. Plugging this into~\eqref{eq:11111}, we get $||y_{{[n]^d}\setminus [\tilde k]}||_2^2\leq ||y_{\tilde S\setminus [\tilde k]}||_2^2+\mu^2k\leq 4(\frac1{\log^4 N}\nu+20\mu)^2 (k/\tilde k)^2 \tilde k+\mu^2 k$. This implies that
\begin{equation*}
\begin{split}
||y_{{[n]^d}\setminus [\tilde k]}||_2/\sqrt{\tilde k}&\leq \sqrt{4(\frac1{\log^4 N}\nu+20\mu)^2 (k/\tilde k)^2+\mu^2 (k/\tilde k)}\leq 2 (k/\tilde k)\sqrt{(\frac1{\log^4 N}\nu+20\mu)^2 +\mu^2}\\
&\leq 2((\frac1{\log^4 N}\nu+20\mu)+\mu) (k/\tilde k)\leq (\log^4 N) (\frac1{\log^4 N}\nu+20\mu),
\end{split}
\end{equation*}
establishing ~\eqref{eq:pih2gg3rg}.
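The extremal step above is an instance of the elementary inequality $||v||_2^2\leq ||v||_1\cdot ||v||_\infty$, applied with $v=y_{\tilde S\setminus [\tilde k]}$:
$$
||y_{\tilde S\setminus [\tilde k]}||_2^2\leq ||y_{\tilde S\setminus [\tilde k]}||_1\cdot ||y_{\tilde S\setminus [\tilde k]}||_\infty\leq 2\left(\frac{\nu}{\log^4 N}+20\mu\right)k\cdot 2\left(\frac{\nu}{\log^4 N}+20\mu\right)\frac{k}{\tilde k}=4\left(\frac{\nu}{\log^4 N}+20\mu\right)^2\left(\frac{k}{\tilde k}\right)^2\tilde k.
$$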
Finally, also recall that $||y_{\tilde S\setminus [\tilde k]}||_\infty\leq 2(\frac1{\log^4 N}\nu+20\mu) (k/\tilde k)\leq (\log^4 N)\cdot (\frac1{\log^4 N}\nu+20\mu)$ and $||y_{{[n]^d}\setminus \tilde S}||_\infty =||x_{{[n]^d}\setminus S}||_\infty\leq \mu$.
We thus have that all preconditions of Lemma~\ref{lm:linf} are satisfied for the set of top $\tilde k$ elements of $y$, and hence its output satisfies
$$
||x-(\chi^{(t)}+\chi'+\chi'')||_\infty=O(\log^4 N)\cdot(\frac1{\log^4 N}\nu+20\mu).
$$
Putting these bounds together establishes {\bf (2)}, and completes the inductive step and the proof of correctness.
Finally, taking a union bound over all failure events (each call to \textsc{EstimateValues} succeeds with probability at least $1-\frac1{\log^2 N}$, and with probability at least $1-1/\log^{2} N$ for all $s\in [1:d]$ the set ${\mathcal{A}}_r\star (\mathbf{0}, \mathbf{e}_s)$ is balanced in coordinate $s$) and using the fact that $T=o(\log N)$ and each call to \textsc{LocateSignal} is deterministic, we get that the success probability of the SNR reduction loop is lower bounded by $1-1/\log N$.
\paragraph{Sample complexity and runtime}
The sample complexity is bounded by the sample complexity of the calls to \textsc{ReduceL1Norm} and \textsc{ReduceInfNorm} inside the loop times $O(\log N/\log\log N)$ for the number of iterations. The former is
bounded by $2^{O(d^2)}k (\log\log N)^2$ by Lemma~\ref{lm:reduce-l1-norm}, and the latter is bounded by $2^{O(d^2)}k/\log N$ by Lemma~\ref{lm:linf}, amounting to
at most $2^{O(d^2)} k \log N (\log\log N)$ samples overall. The runtime complexity is at most $2^{O(d^2)} k\log^{d+3} N$ overall for the calls to \textsc{ReduceL1Norm} and no more than $2^{O(d^2)}k \log^{d+3} N$ overall for the calls to \textsc{ReduceInfNorm}.
\end{proof}
\subsection{Analysis of \textsc{SparseFFT}}\label{sec:sfft}
\noindent {\em {\bf Theorem~\ref{thm:main}}
For any ${\epsilon}>0$, $x\in \mathbb{C}^{[n]^d}$ and any integer $k\geq 1$, if $R^*\geq ||x||_\infty/\mu=\text{poly}(N)$, $\mu^2\geq ||x_{{[n]^d}\setminus [k]}||_2^2/k$, $\mu^2=O(||x_{{[n]^d}\setminus [k]}||_2^2/k)$ and $\alpha>0$ is smaller than an absolute constant, \textsc{SparseFFT}$(\hat x, k, {\epsilon}, R^*, \mu)$ solves the $\ell_2/\ell_2$ sparse recovery problem using $2^{O(d^2)} (k\log N \log\log N+\frac1{{\epsilon}}k\log N)$ samples and
$2^{O(d^2)} \frac1{{\epsilon}}k \log^{d+3} N$ time with at least $98/100$ success probability.
}
\begin{proof}
By Theorem~\ref{thm:l1snr} the set $S:=\{i\in {[n]^d}: |x_i|>\mu\}$ satisfies
$$
||(x-\chi^{(T)})_S||_1\lesssim \mu k
$$
and
\begin{equation*}
\begin{split}
||\chi^{(T)}_{{[n]^d}\setminus S}||_1\lesssim \mu k\\
||\chi^{(T)}_{{[n]^d}\setminus S}||_0\lesssim \frac1{\log^{19} N}k\\
\end{split}
\end{equation*}
with probability at least $1-1/\log N$.
We now show that the signal $x':=x-\chi^{(T)}$ satisfies preconditions of Lemma~\ref{lm:const-snr} with parameter $k$. Indeed, letting $Q\subseteq {[n]^d}$ denote the top $2k$ coefficients of $x'$, we have
\begin{equation*}
\begin{split}
||x'_Q||_1&\leq ||x'_{Q\cap S}||_1+||\chi^{(T)}_{(Q\setminus S)\cap \supp \chi^{(T)}}||_1+|Q|\cdot ||x_{{[n]^d}\setminus S}||_\infty\leq O(\mu k).\\
\end{split}
\end{equation*}
Furthermore, since $Q$ is the set of top $2k$ elements of $x'$, we have
\begin{equation*}
\begin{split}
||x'_{{[n]^d}\setminus Q}||_2^2&\leq ||x'_{{[n]^d}\setminus (S\cup \supp \chi^{(T)})}||_2^2\leq ||x_{{[n]^d}\setminus (S\cup \supp \chi^{(T)})}||_2^2\leq ||x_{{[n]^d}\setminus S}||_2^2\\
&\leq \mu^2 |S|+||x_{{[n]^d}\setminus [k]}||_2^2=O(\mu^2 k)
\end{split}
\end{equation*}
as required.
Thus, with at least $99/100$ probability we have by Lemma~\ref{lm:const-snr} that
$$
||x-\chi^{(T)}-\chi'||_2\leq (1+O({\epsilon}))\err_{k}(x).
$$
By a union bound over the $1/\log N$ failure probability of the SNR reduction loop we have that \textsc{SparseFFT} is correct with probability at least $98/100$, as required.
It remains to bound the sample and runtime complexity. The number of samples needed to compute
$$
m(\widehat{x}, H_r, a\star(\mathbf{1}, \mathbf{w}))\gets \Call{HashToBins}{\hat x, 0, (H_r, a\star (\mathbf{1}, \mathbf{w}))}
$$
for all $a\in {\mathcal{A}}_r$, $\mathbf{w}\in \H$ is bounded by $2^{O(d^2)} k\log N (\log\log N)$ by our choice of $B=2^{O(d^2)}k$, $r_{max}=O(\log\log N)$, $|\H|=O(\log N/\log \log N)$ and $|{\mathcal{A}}_r|=O(\log\log N)$, together with Lemma~\ref{l:hashtobins}. This is asymptotically the same as the $2^{O(d^2)} k \log N (\log\log N)$ sample complexity of the $\ell_1$ norm reduction loop by Theorem~\ref{thm:l1snr}. The sampling complexity of the call to \textsc{RecoverAtConstantSNR} is at most $2^{O(d^2)} \frac1{{\epsilon}}k \log N$ by Lemma~\ref{lm:const-snr}, yielding the claimed bound.
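In more detail, the number of \textsc{HashToBins} invocations times the per-call sample cost multiplies out as
$$
\underbrace{O(\log N/\log\log N)}_{|\H|}\cdot \underbrace{O(\log\log N)}_{r_{max}}\cdot \underbrace{O(\log\log N)}_{|{\mathcal{A}}_r|}\cdot \underbrace{2^{O(d^2)}k}_{\text{samples per call}}=2^{O(d^2)}k\log N(\log\log N).
$$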
The runtime of the SNR reduction loop is bounded by $2^{O(d^2)}k \log^{d+3} N$ by Theorem~\ref{thm:l1snr}, and the runtime of \textsc{RecoverAtConstantSNR} is at most $2^{O(d^2)} \frac1{{\epsilon}}k \log^{d+2} N$ by Lemma~\ref{lm:const-snr}.
\end{proof}
\subsection{Recovery at constant SNR}\label{sec:const-snr}
The algorithm is given by
\begin{algorithm}[H]
\caption{\textsc{RecoverAtConstantSNR}($\hat x, \chi, k, {\epsilon}$)}\label{alg:const-snr}
\begin{algorithmic}[1]
\Procedure{RecoverAtConstantSNR}{$\hat x, \chi, k, {\epsilon}$}
\State $B\gets (2\pi)^{4d\cdot \fc}\cdot k/({\epsilon}\alpha^d)$
\State Choose $\Sigma\in \mathcal{M}_{d\times d}$, $q\in {[n]^d}$ uniformly at random, let $\pi:=(\Sigma, q)$ and let $H:=(\pi, B, F)$
\State Let ${\mathcal{A}}\gets $ $C\log\log N$ elements of ${[n]^d}\times {[n]^d}$ sampled uniformly at random with replacement
\State $\H\gets \{\mathbf{0}_d\}$, $\Delta\gets 2^{\lfloor \frac1{2}\log_2 \log_2 n\rfloor}$ \Comment{$\mathbf{0}_d$ is the zero vector in dimension $d$}
\For {$g=1$ to $\lceil \log_\Delta n \rceil$}
\State $\H\gets \H\cup \bigcup_{s=1}^d n \Delta^{-g} \cdot \mathbf{e}_s$ \Comment{$\mathbf{e}_s$ is the unit vector in direction $s$}
\EndFor
\For {$\mathbf{w}\in \H$}
\State $m(\widehat{x}, H, a\star(\mathbf{1}, \mathbf{w}))\gets \Call{HashToBins}{\hat x, 0, (H, a\star (\mathbf{1}, \mathbf{w}))}$ for all $a\in {\mathcal{A}}$
\EndFor
\State $L \gets \textsc{LocateSignal}\left(\chi, k, \{m(\widehat{x}, H, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}, \mathbf{w}\in \H}\right)$
\State $\chi' \gets \Call{EstimateValues}{\hat x, \chi, 2k, {\epsilon}, O(\log N), 0}$
\State $L'\gets $top $4k$ elements of $\chi'$
\State \textbf{return} $\chi+\chi'_{L'}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
Our analysis will use
\begin{lemma}[Lemma~9.1 from~\cite{IKP}]\label{lm:comparison}
Let $x, z\in \mathbb{C}^n$ and $k\leq n$. Let $S$ contain the largest $k$ terms of $x$, and $T$ contain the largest $2k$ terms of $z$. Then $||x-z_T||_2^2\leq ||x-x_S||_2^2+4||(x-z)_{S\cup T}||_2^2$.
\end{lemma}
\noindent{\em {\bf Lemma~\ref{lm:const-snr}}
For any ${\epsilon}>0$, $\hat x, \chi\in \mathbb{C}^N$, $x'=x-\chi$ and any integer $k\geq 1$, if $||x'_{[2k]}||_1\leq O(||x_{{[n]^d}\setminus [k]}||_2\sqrt{k})$, $||x'_{{[n]^d}\setminus [2k]}||_2^2\leq ||x_{{[n]^d}\setminus [k]}||_2^2$ and $||x||_\infty/\mu=N^{O(1)}$, then the output $\chi'$ of
\Call{RecoverAtConstantSNR}{$\hat x, \chi, 2k, {\epsilon}$} satisfies
$$
||x'-\chi'||^2_2\leq (1+O({\epsilon}))||x_{{[n]^d}\setminus [k]}||_2^2
$$
with at least $99/100$ probability over its internal randomness. The sample complexity is $2^{O(d^2)}\frac1{{\epsilon}} k\log N$, and the runtime complexity is at most $2^{O(d^2)}\frac1{{\epsilon}} k \log^{d+1} N.$
}
\begin{remark}
We note that the error bound is in terms of the $k$-term approximation error of $x$ as opposed to the $2k$-term approximation error of $x'=x-\chi$.
\end{remark}
\begin{proof}
Let $S$ denote the top $2k$ coefficients of $x'$. We first derive bounds on the probability that an element $i\in S$ is not located.
Recall that by Lemma~\ref{lm:loc} for any $i\in S$ if
\begin{enumerate}
\item $e^{head}_{i}(H, x', 0)<|x'_i|/20$;
\item $e^{tail}_{i}(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x')< |x'_i|/20$ for all $\mathbf{w}\in \H$;
\item for every $s\in [1:d]$ the set ${\mathcal{A}}\star(\mathbf{0}, \mathbf{e}_s)$ is balanced (as per Definition~\ref{def:balance}),
\end{enumerate}
then $i\in L$, i.e. $i$ is successfully located in \textsc{LocateSignal}.
We now upper bound the probability that an element $i\in S$ is not located. We let $\mu^2:=||x_{{[n]^d}\setminus [k]}||_2^2/k$ to simplify notation.
\paragraph{Contribution from head elements.} We need to bound, for $i\in S$, the quantity
\begin{equation*}
\begin{split}
e^{head}_i(H, x', 0)&=G_{o_i(i)}^{-1}\cdot \sum_{j\in S\setminus \{i\}} G_{o_i(j)}|x'_j|.
\end{split}
\end{equation*}
Recall that $m(\widehat{x}, H, a\star(\mathbf{1}, \mathbf{w}))=\Call{HashToBins}{\hat x, 0, (H, a\star (\mathbf{1}, \mathbf{w}))}$, and let $m:=m(\widehat{x}, H, a\star(\mathbf{1}, \mathbf{w}))$ to simplify notation. By Lemma~\ref{lm:hashing}, {\bf (1)} one has
\begin{equation}\label{eq:const-h}
{\bf \mbox{\bf E}}_H[\max_{a\in {[n]^d}} \abs{G_{o_i(i)}^{-1}\omega^{-a^T\Sigma i}m_{h(i)} - (x'_{S})_i}]\leq (2\pi)^{d\cdot \fc}\cdot C^d ||x'_{S}||_1/B+\mu/N^2
\end{equation}
for a constant $C>0$. This yields
$$
{\bf \mbox{\bf E}}_H[e^{head}_i(H, x', 0)]\leq (2\pi)^{d\cdot \fc} \cdot C^d ||x'_{S}||_1/B\lesssim (2\pi)^{d\cdot \fc} \cdot C^d \mu k/B\lesssim \alpha^d C^d {\epsilon} \mu
$$
by the choice of $B$ in \textsc{RecoverAtConstantSNR}. Now by Markov's inequality we have for each $i\in {[n]^d}$
\begin{equation}\label{eq:const-eh-prob}
{\bf \mbox{\bf Pr}}_H[e^{head}_{i}(H, x', 0)>|x'_i|/20]\lesssim \alpha^d C^d {\epsilon} \mu/|x'_i|\lesssim \alpha {\epsilon} \mu/|x'_i|
\end{equation}
as long as $\alpha$ is smaller than a constant.
\paragraph{Contribution of tail elements}
We restate the definitions of $e^{tail}$ variables here for convenience of the reader (see~\eqref{eq:et-pi}, \eqref{eq:et-pi-a-h}, \eqref{eq:et-pi-a} and \eqref{eq:et}).
We have
\begin{equation*}
e^{tail}_i(H, z, x):=\left|G_{o_i(i)}^{-1}\cdot \sum_{j\in {[n]^d}\setminus S} G_{o_i(j)}x_j \omega^{z^T \Sigma (j-i)}\right|.
\end{equation*}
For any $\mathcal{Z}\subseteq {[n]^d}$ we have
\begin{equation*}
e^{tail}_i(H, \mathcal{Z}, x):=\text{quant}^{1/5}_{z\in \mathcal{Z}} \left|G_{o_i(i)}^{-1}\cdot \sum_{j\in {[n]^d}\setminus S} G_{o_i(j)}x_j \omega^{z^T \Sigma (j-i)}\right|.
\end{equation*}
Note that the algorithm first selects sets ${\mathcal{A}}_r\subseteq {[n]^d}\times {[n]^d}$, and then accesses the signal at locations given by ${\mathcal{A}}_r\star (\mathbf{1}, \mathbf{w}), \mathbf{w}\in \H$ (after permuting input space).
The definition of $e^{tail}_i(H, {\mathcal{A}}_r, x')$ for permutation $\pi=(\Sigma, q)$ allows us to capture the amount of noise that our measurements for locating a specific set of bits of $\Sigma i$ suffer from. Since the algorithm requires all $\mathbf{w}\in \H$ to be not too noisy in order to succeed (see preconditions 2 and 3 of Lemma~\ref{lm:loc}), we have
\begin{equation*}
e^{tail}_i(H, {\mathcal{A}}, x')=40\mu_{H, i}(x')+\sum_{\mathbf{w}\in \H} \left|e^{tail}_i(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x')-40\mu_{H, i}(x')\right|_+
\end{equation*}
where for any $\eta\in \mathbb{R}$ one has $|\eta|_+=\eta$ if $\eta>0$ and $|\eta|_+=0$ otherwise.
For each $i\in S$ we now define an error event ${\mathcal{E}}^*_i$ whose non-occurrence implies location of element $i$, and then show that for each $i\in S$ one has
\begin{equation}\label{eq:g}
{\bf \mbox{\bf Pr}}_{H, {\mathcal{A}}}[{\mathcal{E}}^*_i]\lesssim \frac{\alpha {\epsilon} \mu^2}{|x'_i|^2}.
\end{equation}
Once we have~\eqref{eq:g}, combining it with~\eqref{eq:const-eh-prob} yields the main result of the lemma; in what follows we concentrate on proving~\eqref{eq:g}. Specifically, for each $i\in S$ define
\begin{equation*}
{\mathcal{E}}^*_i=\{(H, {\mathcal{A}}): \exists \mathbf{w}\in \H \text{~s.t.~} e^{tail}_i(H,{\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x')>|x'_i|/20\}.
\end{equation*}
Recall that $e^{tail}_i(H, z, x')=\Call{HashToBins}{\widehat{x_{{[n]^d}\setminus S}}, \chi_{{[n]^d}\setminus S}, (H, z)}$ by definition of the measurements $m$. By Lemma~\ref{lm:hashing}, {\bf (3)} one has, for a uniformly random $z\in {[n]^d}$, that
${\bf \mbox{\bf E}}_z[|e^{tail}_i(H, z, x')|^2]=\mu_{H, i}^2(x')$. By Lemma~\ref{lm:hashing}, {\bf (2)}, we have that
\begin{equation}\label{eq:expect-sigmaq}
{\bf \mbox{\bf E}}_H[\mu^2_{H, i}(x')] \leq (2\pi)^{2d\cdot \fc}\cdot C^d ||(x-\chi)_{{[n]^d}\setminus S}||_2^2/B+\mu^2/N^2\leq \alpha {\epsilon} \mu^2.
\end{equation}
Thus by Markov's inequality
$$
{\bf \mbox{\bf Pr}}_{z}[e^{tail}_{i}(H, z, x')^2> (|x'_i|/20)^2]\leq (\mu_{H, i}(x'))^2/(|x'_i|/20)^2.
$$
Combining this with Lemma~\ref{lm:quant-exp}, we get for all $\tau\leq (1/8)(|x'_i|/20)$ and all $\mathbf{w}\in \H$
\begin{equation}\label{eq:xyz}
{\bf \mbox{\bf Pr}}_{{\mathcal{A}}}[\text{quant}^{1/5}_{z\in {\mathcal{A}}\star (\mathbf{1}, \mathbf{w})} e^{tail}_{i}(H, z, x')> |x'_i|/20| \mu_{H, i}(x')=\tau]<(4\tau/(|x'_i|/20))^{\Omega(|{\mathcal{A}}|)}.
\end{equation}
Equipped with the bounds above, we now bound ${\bf \mbox{\bf Pr}}[{\mathcal{E}}^*_i]$. To that effect, for each $\tau>0$ let the event ${\mathcal{E}}_i(\tau)$ be defined as
${\mathcal{E}}_i(\tau)=\{\mu_{H, i}(x')=\tau\}$. Note that since we assume that we operate on $O(\log n)$ bit integers, $\mu_{H, i}(x')$ takes on a finite number of values, and hence ${\mathcal{E}}_i(\tau)$ is well-defined. It is convenient to bound ${\bf \mbox{\bf Pr}}[{\mathcal{E}}_i^*]$ as a sum of three terms:
\begin{equation*}
\begin{split}
{\bf \mbox{\bf Pr}}_{H, {\mathcal{A}}}[{\mathcal{E}}^*_i]&\leq {\bf \mbox{\bf Pr}}_{H, {\mathcal{A}}}\left[\left.e^{tail}_{i}(H, {\mathcal{A}}, x')> |x'_i|/20\right|\bigcup_{\tau\leq \sqrt{\alpha {\epsilon}}\mu}{\mathcal{E}}_i(\tau)\right] \\
&+\int_{\sqrt{\alpha {\epsilon}}\mu}^{(1/8)(|x'_i|/20)} {\bf \mbox{\bf Pr}}_{H, {\mathcal{A}}}\left[\left.e^{tail}_{i}(H, {\mathcal{A}}, x')> |x'_i|/20\right.|{\mathcal{E}}_i(\tau)\right] {\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)]d\tau\\
&+\int_{(1/8)(|x'_i|/20)}^\infty {\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)]d\tau\\
\end{split}
\end{equation*}
We now bound each of the three terms separately for $i$ such that $|x'_i|/20\geq 2\sqrt{\alpha{\epsilon}} \mu$. This is sufficient for our purposes, as the remaining elements only contribute a small amount of $\ell_2^2$ mass.
\begin{description}
\item[1.] By \eqref{eq:xyz} and a union bound over $\H$ the first term is bounded by
\begin{equation}\label{eq:r-ub-1}
|\H|\cdot (\sqrt{\alpha {\epsilon}}\mu/(|x'_i|/20))^{\Omega(|{\mathcal{A}}|)}\leq \alpha {\epsilon} \mu^2/|x'_i|^2\cdot |\H|\cdot 2^{-\Omega(|{\mathcal{A}}|)}\leq \alpha{\epsilon} \mu^2/|x'_i|^2,
\end{equation}
since $|{\mathcal{A}}|\geq C\log \log N$ for a sufficiently large constant $C>0$ in \textsc{RecoverAtConstantSNR}.
\item[2.] The second term, again by a union bound over $\H$ and using \eqref{eq:xyz}, is bounded by
\begin{equation}\label{eq:const-snr-abc}
\begin{split}
&\int_{\sqrt{\alpha {\epsilon}}\mu}^{(1/8)(|x'_i|/20)} |\H|\cdot (4\tau /(|x'_i|/20))^{\Omega(|{\mathcal{A}}|)} {\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)]d\tau\\
\leq &\int_{\sqrt{\alpha {\epsilon}}\mu}^{(1/8)(|x'_i|/20)} |\H|\cdot (4\tau /(|x'_i|/20))^{\Omega(|{\mathcal{A}}|)-2}\cdot (4\tau /(|x'_i|/20))^{2}{\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)] d\tau.\\
\end{split}
\end{equation}
Since $|{\mathcal{A}}|\geq C\log \log N$ for a sufficiently large constant $C>0$ and $(4\tau /(|x'_i|/20))\leq 1/2$ over the whole range of $\tau$, we have
$$
|\H|\cdot (4\tau /(|x'_i|/20))^{\Omega(|{\mathcal{A}}|)-2}\leq |\H|\cdot (1/2)^{\Omega(|{\mathcal{A}}|)-2}=o(1)
$$
for each $\tau\in [\sqrt{\alpha {\epsilon}}\mu, (1/8)(|x'_i|/20)]$. Thus, \eqref{eq:const-snr-abc} is upper bounded by
\begin{equation*}
\begin{split}
\int_{\sqrt{\alpha {\epsilon}}\mu}^{(1/8)(|x'_i|/20)} (4\tau /(|x'_i|/20))^{2}{\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)] d\tau\\
\lesssim \frac1{(|x'_i|/20)^{2}}\int_{\sqrt{\alpha {\epsilon}}\mu}^{(1/8)(|x'_i|/20)} \tau^2{\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)] d\tau\\
\leq \frac{\alpha {\epsilon} \mu^2}{(|x'_i|/20)^2}\\
\end{split}
\end{equation*}
since
$$
\int_{\sqrt{\alpha {\epsilon}}\mu}^{(1/8)(|x'_i|/20)} \tau^2{\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)] d\tau\leq \int_{0}^{\infty} \tau^2{\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)] d\tau={\bf \mbox{\bf E}}_H[\mu^2_{H, i}(x')]=O(\alpha {\epsilon} \mu^2)
$$
by \eqref{eq:expect-sigmaq}.
\item [3.] For the third term we have
$$
\int_{(1/8)(|x'_i|/20)}^\infty {\bf \mbox{\bf Pr}}[{\mathcal{E}}_i(\tau)]d\tau={\bf \mbox{\bf Pr}}[\mu_{H, i}(x')>(1/8)(|x'_i|/20)]\lesssim \frac{\alpha {\epsilon} \mu^2}{|x'_i|^2}
$$
by Markov's inequality applied to \eqref{eq:expect-sigmaq}.
\end{description}
Putting the three estimates together, we get ${\bf \mbox{\bf Pr}}[{\mathcal{E}}_i^*]=O\left(\frac{\alpha {\epsilon} \mu^2}{|x'_i|^2}\right)$. Together with \eqref{eq:const-eh-prob} this yields, for $i\in S$,
$$
{\bf \mbox{\bf Pr}}[i\not \in L]\lesssim \frac{\alpha {\epsilon} \mu^2}{|x'_i|^2}+\frac{\alpha {\epsilon} \mu}{|x'_i|}.
$$
In particular,
\begin{equation*}
\begin{split}
{\bf \mbox{\bf E}}\left[\sum_{i\in S} |x'_i|^2\cdot \mathbf{1}_{i\in S\setminus L}\right]&\leq \sum_{i\in S} |x'_i|^2 {\bf \mbox{\bf Pr}}[i\not \in L]\\
&\lesssim \sum_{i\in S} |x'_i|^2 \left(\frac{\alpha {\epsilon} \mu}{|x'_i|}+\frac{\alpha {\epsilon} \mu^2}{|x'_i|^2}\right)\lesssim \alpha {\epsilon}\mu^2 k,
\end{split}
\end{equation*}
where we used the assumption of the lemma that $||x'_{[2k]}||_1\leq O(||x_{{[n]^d}\setminus [k]}||_2\sqrt{k})$ and $||x'_{{[n]^d}\setminus [2k]}||_2^2\leq ||x_{{[n]^d}\setminus [k]}||_2^2$ in the last line.
By Markov's inequality we thus have ${\bf \mbox{\bf Pr}}[||x'_{S\setminus L}||_2^2>{\epsilon} \mu^2 k]<1/10$ as long as $\alpha$ is smaller than a sufficiently small absolute constant.
We now upper bound $||x'-\chi'||_2^2$. We apply Lemma~\ref{lm:comparison} to vectors $x'$ and $\chi'$ with sets $S$ and $L'$ respectively, getting
\begin{equation*}
\begin{split}
||x'-\chi'_{L'}||_2^2&\leq ||x'-x'_S||_2^2+4||(x'-\chi')_{S\cup L'}||_2^2\\
&\leq ||x_{{[n]^d}\setminus [k]}||_2^2+4||(x'-\chi')_{S\setminus L}||_2^2+4||(x'-\chi')_{S\cap L}||_2^2\\
&\leq ||x_{{[n]^d}\setminus [k]}||_2^2+4{\epsilon} \mu^2 k+4{\epsilon} \mu^2 |S|\\
&\leq ||x_{{[n]^d}\setminus [k]}||_2^2+O({\epsilon} \mu^2 k),\\
\end{split}
\end{equation*}
where we used the fact that $||(x'-\chi')_{S\cap L}||_\infty\leq \sqrt{{\epsilon}} \mu$ with probability at least $1-1/N$ over the randomness used in \textsc{EstimateValues} by Lemma~\ref{lm:estimate-l1l2}, {\bf (3)}.
This completes the proof of correctness.
\paragraph{Sample complexity and runtime} The number of samples taken is bounded by $2^{O(d^2)}\frac1{{\epsilon}} k\log N$ by Lemma~\ref{l:hashtobins} and the choice of $B$. The sample complexity of the call to \textsc{EstimateValues} is at most $2^{O(d^2)}\frac1{{\epsilon}} k\log N$. The runtime is bounded by $2^{O(d^2)}\frac1{{\epsilon}}k \log^{d+1} N \log\log N$ for computing the measurements $m(\widehat{x}, H, a\star (\mathbf{1}, \mathbf{w}))$ and by $2^{O(d^2)}\frac1{{\epsilon}} k \log^{d+1} N$ for estimation.
\end{proof}
\section{Acknowledgements}
The author would like to thank Piotr Indyk for many useful discussions at various stages of this work.
\section{Introduction}
The Discrete Fourier Transform (DFT) is a fundamental mathematical concept that allows one to represent a discrete signal of length $N$ as a linear combination of $N$ pure harmonics, or frequencies. The development in 1965 of a fast algorithm for the DFT, known as the Fast Fourier Transform (FFT), revolutionized digital signal processing, earning FFT a place among the top 10 most important algorithms of the twentieth century~\cite{citeulike:6838680}. The FFT computes the DFT of a length $N$ signal in time $O(N\log N)$, and finding a faster algorithm for the DFT is a major open problem in theoretical computer science. While the FFT applies to general signals, many of its applications (e.g. image and video compression schemes such as JPEG and MPEG) rely on the fact that the Fourier spectrum of signals that arise in practice can often be approximated very well by only a few of the top Fourier coefficients, i.e. practical signals are often (approximately) {\em sparse} in the Fourier basis.
Besides applications in signal processing, the Fourier sparsity of real-world signals plays an important role in medical imaging, where the cost of {\em measuring a signal}, i.e. the {\em sample complexity}, is often a major bottleneck. For example, an MRI machine effectively measures the Fourier transform of a signal $x$ representing the object being scanned, and the reconstruction problem is exactly the problem of approximately inverting the Fourier transform $\widehat{x}$ of $x$ given a set of measurements. Minimizing the sample complexity of acquiring a signal using Fourier measurements thus translates directly into a reduction of the time the patient spends in the MRI machine~\cite{MRICS} while a scan is being taken. In applications to Computed Tomography (CT), a reduction in measurement cost leads to a reduction in the radiation dose that a patient receives~\cite{CT_CS}. Because of this strong practical motivation,
the problem of computing a good approximation to the FFT of a Fourier-sparse signal fast and using few measurements in the time domain has been the subject of much attention in several communities.
In the area of {\em compressive sensing}~\cite{Don,CTao}, where one studies the task of recovering (approximately) sparse signals from linear measurements, Fourier measurements have been one of the key settings of interest. In particular, the seminal work of~\cite{CTao,RV} has shown that length $N$ signals with at most $k$ nonzero Fourier coefficients can be recovered using only $k \log^{O(1)} N$ samples in time domain. The recovery algorithms are based on linear programming and run in time polynomial in $N$.
A different line of research on the {\em Sparse Fourier Transform} (Sparse FFT), initiated in the fields of computational complexity and learning theory, has been focused on developing algorithms whose sample complexity {\em and} running time scale with the sparsity as opposed to the length of the input signal. Many such algorithms have been proposed in the literature, including \cite{GL,KM,Man,GGIMS,AGS,GMS,Iw,Ak,HIKP,HIKP2,BCGLS,HAKI,pawar2013computing,heidersparse, IKP}. These works show that, for a wide range of signals, both the time complexity and the number of signal samples taken can be significantly sub-linear in $N$, often of the form $k \log^{O(1)} N$.
In this paper we consider the problem of computing a sparse approximation to a signal $x\in \mathbb{C}^N$ given access to its Fourier transform $\widehat{x}\in \mathbb{C}^N$.\footnote{Note that the problem of reconstructing a signal from Fourier measurements is equivalent to the problem of computing the Fourier transform of a signal $x$ whose spectrum is approximately sparse, as the DFT and its inverse differ only by a conjugation.} The best known results obtained in both the compressive sensing literature and the sparse FFT literature on this problem are summarized in Fig.~\ref{fig:1}.
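The equivalence noted in the footnote can be checked directly: the inverse DFT is a conjugated forward DFT scaled by $1/N$. A minimal pure-Python sketch (function names are ours, for illustration only):

```python
import cmath

def dft(x):
    # direct O(N^2) DFT: X_f = sum_j x_j * exp(-2*pi*i*f*j/N)
    N = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * f * j / N) for j in range(N))
            for f in range(N)]

def idft(X):
    # inverse DFT via the identity idft(X) = conj(dft(conj(X))) / N
    N = len(X)
    Y = dft([v.conjugate() for v in X])
    return [y.conjugate() / N for y in Y]
```

Running `idft(dft(x))` recovers `x` up to floating-point error, confirming that a forward-DFT routine plus conjugation suffices for inversion.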
We focus on algorithms that work for worst-case signals and recover $k$-sparse approximations satisfying the so-called $\ell_2/\ell_2$ approximation
guarantee. In this case, the goal of an algorithm is as follows: given $m$ samples of the Fourier transform $\widehat{x}$ of a signal $x$, and the sparsity parameter $k$, output $x'$ satisfying
\begin{equation}
\label{e:l2l2}
\| x-x'\|_2 \le C \min_{k \text{-sparse } y } \|x-y\|_2.
\end{equation}
The algorithms are randomized\footnote{Some of the algorithms~\cite{CTao,RV,CGV} can in
fact be made deterministic, but at the cost of satisfying a somewhat
weaker $\ell_2/\ell_1$ guarantee.}
and succeed with at least constant probability.
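For concreteness, the right-hand side of the guarantee is the `tail' error, attained by keeping the $k$ largest-magnitude entries of $x$. The following small sketch (function names are ours) evaluates the guarantee for a candidate output:

```python
import math

def err_k(x, k):
    # best k-sparse approximation error: the l2 norm of x with its
    # k largest-magnitude entries removed
    tail = sorted((abs(v) for v in x), reverse=True)[k:]
    return math.sqrt(sum(v * v for v in tail))

def satisfies_l2l2(x, xprime, k, C):
    # check the l2/l2 guarantee: ||x - x'||_2 <= C * err_k(x)
    err = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, xprime)))
    return err <= C * err_k(x, k)
```

For example, for $x=(5,4,1,0)$ and $k=2$ the tail error is $1$, so the output $(5,4,0,0)$ meets the guarantee with $C=2$ while the all-zero output does not.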
\begin{figure*}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Reference & Time & Samples & $C$ & Dimension\\
& & & & $d>1$?\\
\hline\hline
\cite{CTao,RV, CGV}& & & &\\
\cite{Bourgain2014, HR16} & $N \times m$ linear program & $O(k \log^2(k) \log(N))$ & $O(1)$ &yes\\
\cite{CP} & $N \times m$ linear program & $O(k \log N)$ & $(\log N)^{O(1)}$ &yes\\
\cite{HIKP2} & $O(k \log(N) \log(N/k))$ & $O(k \log(N) \log(N/k))$ & any &no \\
\cite{IKP} & $k \log^2(N) \log^{O(1)} \log N$ & $k \log(N) \log^{O(1)} \log N$ & any &no\\
\cite{IK14a} & $N \log^{O(1)} N$ & $O(k \log N)$ & any & yes\\
\hline
\cite{DIPW} & & $\Omega(k \log(N/k))$ & O(1) & lower bound\\
\hline
\end{tabular}
\end{center}
\caption{Bounds for the algorithms that recover $k$-sparse Fourier approximations. All algorithms produce an output satisfying Equation~\ref{e:l2l2} with at least constant probability of success. The fourth column specifies constraints on the approximation factor $C$. For example, $C=O(1)$ means that the algorithm can only handle constant $C$ as opposed to any $C>1$. The last column specifies whether the sample complexity bounds are unchanged, up to factors that depend on the dimension $d$ only, for higher-dimensional DFTs.}\label{fig:1}
\end{figure*}
{\bf Higher-dimensional Fourier transform.} While significant attention in the sublinear Sparse FFT literature has been devoted to the basic case of the Fourier transform on the line (i.e. one-dimensional signals), the sparsest signals often occur in applications involving higher-dimensional DFTs. Although a reduction from the DFT on a two-dimensional grid {\em with relatively prime side lengths} $p \times q$ to a one-dimensional DFT of length $pq$ is possible~\cite{GMS,Iw-arxiv}, the reduction does not apply to the most common case, when the side lengths of the grid are equal powers of two. It turns out that most sublinear Sparse FFT techniques developed for the one-dimensional DFT do not extend well to the higher-dimensional setting, suffering from {\em at least a polylogarithmic loss in sample complexity}. Specifically, the only prior sublinear time algorithm that applies to general $m\times m$ grids is due to~\cite{GMS} and has $O(k \log^c N)$ sample and time complexity for a rather large value of $c$. If $N$ is a power of $2$, a two-dimensional adaptation of the~\cite{HIKP2} algorithm (outlined in~\cite{GHIKPS}) has roughly $O(k \log^3 N)$ time and sample complexity, and an adaptation of~\cite{IKP} has $O(k\log^2 N(\log\log N)^{O(1)})$ sample complexity. In general dimension $d\geq 1$ both of these algorithms have sample complexity $\Omega(k\log^d N)$.
Thus, none of the results obtained so far was able to guarantee sparse recovery from high dimensional (any $d\geq 2$) Fourier measurements without suffering at least a polylogarithmic loss in sample complexity, while at the same time achieving sublinear runtime.
{\bf Our results.} In this paper we give an algorithm that achieves the $\ell_2/\ell_2$ sparse recovery guarantee~\eqref{e:l2l2} from $d$-dimensional Fourier measurements using $O_d(k \log N \log \log N)$ samples of the signal, with running time $O_d(k \log^{d+3} N)$. This is the first sublinear time algorithm that comes within a $\text{poly}(\log\log N)$ factor of the sample complexity lower bound of $\Omega(k\log (N/k))$ due to~\cite{DIPW} in any dimension higher than one.
{\bf Sparse Fourier Transform overview.} The overall outline of our algorithm follows the framework of~\cite{GMS, HIKP2,IKP,IK14a}, which adapt the methods of~\cite{CCF,GLPS} from arbitrary linear
measurements to Fourier measurements. The idea is to take, multiple times, a set of $B=O(k)$
linear measurements of the form
\[
\twid{u}_j = \sum_{i : h(i) = j} s_i x_i
\]
for random hash functions $h: [N] \to [B]$ and random sign changes
$s_i$ with $\abs{s_i} = 1$. This corresponds to \emph{hashing} to $B$
\emph{buckets}. With such ideal linear measurements, $O(\log(N/k))$
hashes suffice for sparse recovery, giving an $O(k \log(N/k))$ sample
complexity.
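In code, one round of this ideal bucketing can be sketched as the following toy model (function and variable names are ours; the actual algorithms realize these measurements via Fourier samples and filters, not direct access to $x$):

```python
import random

def hash_to_buckets(x, B, rng):
    # u_j = sum_{i: h(i)=j} s_i * x_i for a random hash h and random signs s
    h = [rng.randrange(B) for _ in range(len(x))]
    s = [rng.choice([-1.0, 1.0]) for _ in range(len(x))]
    u = [0.0] * B
    for i, xi in enumerate(x):
        u[h[i]] += s[i] * xi
    return u, h, s
```

If a heavy coordinate $i$ is isolated in its bucket and the tail is small, $s_i u_{h(i)}$ is a good estimate of $x_i$.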
The sparse Fourier transform algorithms approximate $\twid{u}$ using linear combinations of Fourier samples. Specifically, the coefficients of $x$ are first permuted via a random affine permutation of the input space. Then the coefficients are partitioned into buckets. This step uses a ``filtering'' process that approximately partitions the range of $x$ into intervals (or, in higher dimensions, squares, or $\ell_\infty$ balls) with $N/B$ coefficients each, where each interval corresponds to one bucket. Overall, this ensures that
most of the large coefficients are ``isolated'', i.e., are hashed to unique buckets, and that the contributions from the ``tail'' of the signal $x$ to those buckets are not much greater than the average (the tail of the signal is defined as $\err_k(x)=\min_{k-\text{sparse}~y} ||x-y||_2$). This allows one to mimic the iterative recovery algorithm described for linear measurements above. However, there are several difficulties in making this work using an optimal number of samples.
The hashing of the signal into buckets enables the algorithm to identify the locations of the dominant coefficients and estimate their values, producing a sparse estimate $\chi$ of $x$. To improve this estimate, we repeat the process on $x - \chi$ by subtracting the influence of $\chi$ during hashing, thereby {\em refining} the approximation of $x$ constructed so far. After a few iterations of this refinement process the algorithm obtains a good sparse approximation $\chi$ of $x$.
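The following hand-crafted toy example (hash functions and signs fixed by hand; all names are ours) illustrates why iterating on the residual is self-correcting: a collision in the first round produces a wrong estimate, and a fresh hash applied to the residual repairs it.

```python
def hash_measure(r, h, s, B):
    # ideal bucket measurements u_j = sum_{i: h(i)=j} s_i * r_i of a residual r
    u = [0.0] * B
    for i, ri in enumerate(r):
        u[h[i]] += s[i] * ri
    return u

x = [10.0, 7.0, 0.0, 0.0]   # toy signal with two heavy coordinates
chi = [0.0] * 4             # current sparse estimate

# Round 1: coordinates 0 and 1 collide in bucket 0, so the estimate
# for coordinate 0 absorbs both spikes and is wrong (17 instead of 10).
h1, s1 = [0, 0, 1, 1], [1.0, 1.0, 1.0, 1.0]
u = hash_measure([x[i] - chi[i] for i in range(4)], h1, s1, 2)
chi[0] += s1[0] * u[h1[0]]  # chi is now [17, 0, 0, 0]

# Round 2: a fresh hash separates the two coordinates; because we hash
# the *residual* x - chi, the earlier overestimate is corrected.
h2, s2 = [0, 1, 0, 1], [1.0, 1.0, 1.0, 1.0]
u = hash_measure([x[i] - chi[i] for i in range(4)], h2, s2, 2)
chi[0] += s2[0] * u[h2[0]]  # adds -7, giving 10
chi[1] += s2[1] * u[h2[1]]  # adds +7, giving 7
```

After the second round the estimate matches the signal exactly.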
A major hurdle in implementing this strategy is that any filter that has been constructed in the literature so far is imprecise, in that coefficients contribute (``leak'') to buckets other than the one they are technically mapped into. This contribution, however, is limited and can be controlled by the quality of the filter. The details of the filter choice have played a crucial role in recent developments in Sparse FFT algorithms. For example, the best known runtime for the one-dimensional Sparse FFT, due to~\cite{HIKP}, was obtained by constructing filters that (almost) precisely mimic the ideal hashing process, allowing for a very fast implementation of the process in dimension one. The price to pay for the precision of the filter, however, is that each hashing becomes a $\log^d N$ factor more costly in terms of sample complexity and runtime than in the idealized case. At the other extreme, the algorithm of~\cite{GMS} uses much less precise filters, which only lead to a $C^d$ loss of sample complexity in dimension $d$, for a constant $C>0$. Unfortunately, because of the imprecision of the filters the iterative improvement process becomes quite noisy, requiring $\Omega(\log N)$ iterations of the refinement process above. As~\cite{GMS} use fresh randomness for each such iteration, this results in an $\Omega(\log N)$ factor loss in sample complexity. The result of~\cite{IKP} uses a hybrid strategy, effectively interpolating between~\cite{HIKP} and~\cite{GMS}. This gives the near-optimal $O(k\log N \log^{O(1)}\log N)$ sample complexity in dimension one (i.e. the Fourier transform on the line), but still suffers from a $\log^{d-1} N$ loss in dimension $d$.
{\bf Techniques of~\cite{IK14a}.} The first algorithm to achieve optimal sample complexity was recently introduced in~\cite{IK14a}. The algorithm uses an approach inspired by~\cite{GMS} (and hence uses `crude' filters that do not lose much in sample complexity), but introduces a key innovation enabling optimal sample complexity: the algorithm does {\em not} use fresh hash functions in every repetition of the refinement process. Instead, $O(\log N)$ hash functions are chosen at the beginning of the process, such that each large coefficient is isolated by most of those functions with high probability. The same hash functions are then used throughout the execution of the algorithm. As every hash function requires a separate set of samples to construct the buckets, reusing the hash functions makes the sample complexity {\em independent of the number of iterations}, leading to the optimal bound.
While a natural idea, reusing hash functions creates a major difficulty: if the algorithm identified a non-existent large coefficient (i.e. a false positive) by mistake and added it to $\chi$, this coefficient would be present in the difference vector $x - \chi$ (i.e. residual signal) and would need to be corrected later. As the spurious coefficient depends on the measurements, the `isolation' properties required for recovery need not hold for it as its position is determined by the hash functions themselves, and the algorithm might not be able to correct the mistake. This hurdle was overcome in~\cite{IK14a} by ensuring that no large coefficients are created spuriously throughout the execution process. This is a nontrivial property to achieve, as the hashing process is quite noisy due to use of the `crude' filters to reduce the number of samples (because the filters are quite simple, the bucketing process suffers from substantial leakage). The solution was to recover the large coefficients in decreasing order of their magnitude. Specifically, in each step, the algorithm recovered coefficients with magnitude that exceeded a specific threshold (that decreases at an exponential rate). With this approach the $\ell_\infty$ norm of the residual signal decreases by a constant factor in every round, resulting in the even stronger $\ell_\infty/\ell_2$ sparse recovery guarantees in the end. The price to pay for this strong guarantee was the need for a very strong primitive for locating dominant elements in the residual signal: a primitive was needed that would make mistakes with at most inverse polynomial probability. This was achieved by essentially brute-force decoding over all potential elements in $[N]$: the algorithm loops over all elements $i\in [N]$ and for each $i$ tests, using the $O(\log N)$ measurements taken, whether $i$ is a dominant element in the residual signal. This resulted in $\Omega(N)$ runtime.
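A drastically idealized sketch of the decreasing-threshold schedule of~\cite{IK14a} follows (exact access to the residual stands in for the brute-force location/estimation primitive; all names are ours, purely for illustration). Each round halves the threshold, so the $\ell_\infty$ norm of the residual drops geometrically:

```python
def recover_by_thresholds(x, nu0, rounds):
    # idealized decreasing-threshold recovery: in round t every residual
    # coordinate exceeding nu = nu0 / 2**t is recovered exactly (a stand-in
    # for the brute-force location/estimation step)
    chi = [0.0] * len(x)
    nu = nu0
    for _ in range(rounds):
        for i in range(len(x)):
            r = x[i] - chi[i]
            if abs(r) > nu:
                chi[i] += r
        nu /= 2
    # on exit the residual satisfies ||x - chi||_inf <= 2 * nu
    return chi, nu
```

In the real algorithm each round works with noisy bucket estimates rather than exact residual values, which is what makes avoiding spurious coefficients nontrivial.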
{\bf Our techniques.} In this paper we show how to make the aforementioned algorithm run in sub-linear time, at the price of a slightly increased sample complexity of $O_d(k\log N\log\log N)$. To achieve a sub-linear runtime, we need to replace the loop over all $N$ coefficients by a location primitive (similar to those in prior works) that identifies the position of any large coefficient that is isolated in a bucket in $\log^{O(1)} N$ time per bucket, i.e. without resorting to brute-force enumeration over the domain of size $N$. Unfortunately, the identification step alone increases the sample complexity by $O(\log N)$ per hash function, so unlike~\cite{IK14a}, here we cannot repeat this process with $O(\log N)$ hash functions to ensure that each large coefficient is isolated by one of those functions. Instead, we can only afford $O(\log \log N)$ hash functions overall, which means that a $1/\log^{O(1)} N$ fraction of the large coefficients will not be isolated in most hashings. This immediately precludes the possibility of using the initial samples to achieve $\ell_\infty$ norm reduction as in~\cite{IK14a}. Another problem, however, is that the weaker location primitive that we use may generate {\em spurious coefficients} at every step of the recovery process. These spurious coefficients, together with the $1/\log^{O(1)} N$ fraction of non-isolated elements, contaminate the recovery process and essentially render the original samples useless after a small number of refinement steps. To overcome these hurdles, instead of the $\ell_\infty$ reduction process of~\cite{IK14a} we maintain a weaker invariant on the reduction of mass in the `heavy' elements of the signal throughout our iterative process. Specifically, instead of reducing the $\ell_\infty$ norm of the residual as in~\cite{IK14a}, we give a procedure for reducing the $\ell_1$ norm of the `head' of the signal.
To overcome the contamination coming from non-isolated as well as spuriously created coefficients, we achieve the $\ell_1$ norm reduction by alternating two procedures. The first procedure uses the $O(\log\log N)$ hash functions to reduce the $\ell_1$ norm of `well-hashed' elements in the signal, and the second uses a simple sparse recovery primitive to reduce the $\ell_\infty$ norm of offending coefficients when the first procedure gets stuck. This can be viewed as a signal-to-noise ratio (SNR) reduction step similar in spirit to the one achieved in~\cite{IKP}. The SNR reduction phase is insufficient for achieving the $\ell_2/\ell_2$ sparse recovery guarantee, and hence we need to run a cleanup phase at the end, when the signal-to-noise ratio is constant. It has been observed before (in~\cite{IKP}) that if the signal-to-noise ratio is constant, then recovery can be done using standard techniques with optimal sample complexity. The crucial difference between~\cite{IKP} and our setting is, however, that we only have bounds on the $\ell_1$-SNR, as opposed to the $\ell_2$-SNR in~\cite{IKP}. It turns out that this is not a problem -- we give a stronger analysis of the corresponding primitive from~\cite{IKP}, showing that an $\ell_1$-SNR bound is sufficient.
{\bf Related work on continuous Sparse FFT.} Recently \cite{BCGLS} and \cite{PZ15} gave algorithms for the related problem of computing Sparse FFT in the continuous setting. These results are not directly comparable to ours, and suffer from a polylogarithmic inefficiency in sample complexity bounds.
\section{Organization}\label{sec:org}
The rest of the paper is organized as follows. In section~\ref{sec:l11} we set up the notation necessary for the analysis of \textsc{LocateSignal}, and specifically for a proof of Theorem~\ref{thm:l1-res-loc}, and prove some basic claims. In section~\ref{sec:l1} we prove Theorem~\ref{thm:l1-res-loc}. In section~\ref{sec:reduce-l1} we prove performance guarantees for \textsc{ReduceL1Norm} (Lemma~\ref{lm:reduce-l1-norm}), then combine them with Lemma~\ref{lm:linf} to prove that the main loop in Algorithm~\ref{alg:main-sublinear} reduces the $\ell_1$ norm of the head elements. We then conclude with a proof of correctness for Algorithm~\ref{alg:main-sublinear}. Section~\ref{sec:linf} is devoted to analyzing the \textsc{ReduceInfNorm} procedure, and section~\ref{sec:const-snr} is devoted to analyzing the \textsc{RecoverAtConstantSNR} procedure. Some useful lemmas are gathered in section~\ref{sec:utils}, and section~\ref{sec:semiequi} describes the algorithm for the semi-equispaced Fourier transform that we use to update our samples with the residual signal. Appendix~\ref{app:A} contains proofs omitted from the main body of the paper.
\section{Analysis of \textsc{LocateSignal}: main definitions and basic claims}\label{sec:l11}
In this section we state our main signal location primitive, \textsc{LocateSignal} (Algorithm~\ref{alg:location}). Given a sequence of measurements $\{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}_r, \mathbf{w}\in \H}$, $r=1,\ldots, r_{max}$, of a signal $\widehat{x}\in \mathbb{C}^{[n]^d}$, together with a partially recovered signal $\chi\in \mathbb{C}^{[n]^d}$, \textsc{LocateSignal} outputs a list of locations $L\subseteq {[n]^d}$ that, as we show below in Theorem~\ref{thm:l1-res-loc} (see section~\ref{sec:l1}), contains the elements of $x$ that contribute most of its $\ell_1$ mass. An important feature of \textsc{LocateSignal} is that it is an entirely deterministic procedure, giving recovery guarantees for any signal $x$ and any partially recovered signal $\chi$. As Theorem~\ref{thm:l1-res-loc} shows, however, these guarantees are strongest when most of the mass of the residual $x-\chi$ resides on elements of ${[n]^d}$ that are {\em isolated with respect to most hashings $H_1,\ldots, H_{r_{max}}$} used for measurements. This flexibility is crucial for our analysis, and is exactly what allows us to reuse measurements and thereby achieve near-optimal sample complexity.
In the rest of this section we first state Algorithm~\ref{alg:location}, and then derive a useful characterization of the elements $i$ of the input signal $x-\chi$ that are successfully located by \textsc{LocateSignal}. The main result of this section is Corollary~\ref{cor:loc}. This comes down to bounding, for a given input signal $x$ and partially recovered signal $\chi$, the expected $\ell_1$ norm of the noise contributed to the process of locating heavy hitters in a call to \Call{LocateSignal}{$\widehat{x}, \chi, H, \{m(\widehat{x}, H, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}, \mathbf{w}\in \H}$} by {\bf (a)} the tail of the original signal $x$ (tail noise $e^{tail}$) and {\bf (b)} the heavy hitters and false positives (heavy hitter noise $e^{head}$). It is useful to note that, unlike in~\cite{IK14a}, we cannot expect the tail of the signal to remain unchanged, and instead need to control how it changes.
In what follows we derive useful conditions under which an element $i\in {[n]^d}$ is identified by \textsc{LocateSignal}. Let $S\subseteq {[n]^d}$ be any set of size at most $2k$, and let $\mu$ be such that $||x_{{[n]^d}\setminus S}||_\infty\leq \mu$. Note that this fits the definition of $S$ given in~\eqref{eq:s-def-l1} (but other instantiations are possible, and will be used later in section~\ref{sec:const-snr}).
Consider a call to
$$
\Call{LocateSignal}{\chi, H, \{m(\widehat{x}, H, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}, \mathbf{w}\in \H}}.
$$
For each $a\in {\mathcal{A}}$ and fixed $\mathbf{w}\in \H$ we let $z:=a\star (\mathbf{1}, \mathbf{w})\in {[n]^d}$ to simplify notation. The measurement vectors $m:=m(\widehat{x'}, H, z)$ computed in \textsc{LocateSignal} satisfy, for every $i\in S$ (by Lemma~\ref{l:hashtobins}),
\[
m_{h(i)} = \sum_{j\in {[n]^d}} G_{o_i(j)}x'_j \omega^{z^T \Sigma j}+\Delta_{h(i), z},
\]
where $\Delta$ corresponds to polynomially small estimation noise due to approximate computation of the Fourier transform, and $G_{o_i(j)}$ is the filter corresponding to the hashing $H$. In particular, for each hashing $H$ and parameter $a\in {\mathcal{A}}$ one has:
\begin{equation*}
\begin{split}
G_{o_i(i)}^{-1}m_{h(i)} \omega^{-z^T \Sigma i} = x'_i + G_{o_i(i)}^{-1}\sum_{j\in {[n]^d}\setminus \{i\}} G_{o_i(j)}x'_j \omega^{z^T \Sigma (j-i)}+G_{o_i(i)}^{-1}\Delta_{h(i), z}\omega^{-z^T \Sigma i}
\end{split}
\end{equation*}
It is useful to represent the residual signal $x'=x-\chi$ as a sum of three terms: $x'=(x-\chi)_S-\chi_{{[n]^d}\setminus S}+x_{{[n]^d}\setminus S}$, where the first term is the residual signal coming from the `heavy' elements in $S$, the second corresponds to false positives, i.e. spurious elements discovered and erroneously subtracted by the algorithm, and the third corresponds to the tail of the signal. Correspondingly, we bound the noise contributed to the location process by the first two parts of the residual signal (head elements and false positives) and by the third part (tail noise) separately. For each $i\in S$ we write
\begin{equation}\label{eq:uexp1}
\begin{split}
&G_{o_i(i)}^{-1}m_{h(i)}\omega^{-z^T \Sigma i}=x'_i \\
&+G_{o_i(i)}^{-1}\cdot \left[\sum_{j\in S\setminus \{i\}} G_{o_i(j)}x'_j \omega^{z^T \Sigma (j-i)}-\sum_{j\in {[n]^d}\setminus S} G_{o_i(j)}\chi_j \omega^{z^T \Sigma (j-i)}\right]\text{~~~~(head elements and false positives)}\\
&+G_{o_i(i)}^{-1}\cdot \sum_{j\in {[n]^d}\setminus S} G_{o_i(j)}x_j \omega^{z^T \Sigma (j-i)}\text{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(tail noise)}\\
&+G_{o_i(i)}^{-1}\cdot \Delta_{h(i)}\omega^{-z^T \Sigma i}.
\end{split}
\end{equation}
\paragraph{Noise from heavy hitters.} The first error term in \eqref{eq:uexp1} corresponds to noise from $(x-\chi)_{S\setminus \{i\}}-\chi_{{[n]^d}\setminus (S\setminus \{i\})}$, i.e. noise from heavy hitters and false positives. For every $i\in S$ and hashing $H$ we let
\begin{equation}\label{eq:eh-pi}
e^{head}_{i}(H, x, \chi):=G_{o_i(i)}^{-1}\cdot \sum_{j\in S\setminus \{i\}} G_{o_i(j)} |y_j|,\text{~~~~where~}y=(x-\chi)_{S}-\chi_{{[n]^d}\setminus S}.\\
\end{equation}
We thus get that $e^{head}_i(H, x, \chi)$ upper bounds the absolute value of the first error term in~\eqref{eq:uexp1}. Note that $G\geq 0$ by Lemma~\ref{lm:filter-prop} as long as $F$ is even, which is the setting that we are in. If $e^{head}_{i}(H, x, \chi)$ is large, \textsc{LocateSignal} may not be able to locate $i$ using measurements of the residual signal $x-\chi$ taken with the hashing $H$. However, the noise in other hashings may be smaller, allowing recovery. In order to reflect this fact we define, for a sequence of hashings $H_1,\ldots, H_{r_{max}}$,
\begin{equation}\label{eq:eh}
e^{head}_{i}(\{H_r\}, x, \chi):=\text{quant}^{1/5}_r e^{head}_i(H_r, x, \chi),
\end{equation}
where for a list of reals $u_1,\ldots, u_s$ and a number $f\in (0, 1)$ we let $\text{quant}^{f}(u_1,\ldots, u_s)$ denote the $\lceil f \cdot s\rceil$-th largest element of $u_1,\ldots, u_s$.
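This quantile can be transcribed directly (the function name is ours, for illustration):

```python
import math

def quant(f, us):
    # quant^f(u_1, ..., u_s): the ceil(f * s)-th largest element of the list
    k = math.ceil(f * len(us))
    return sorted(us, reverse=True)[k - 1]
```

For instance, with $f=1/5$ and five values the quantile is simply the maximum, while $f=1/2$ over four values returns the second largest.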
\paragraph{Tail noise.} To capture the second term in \eqref{eq:uexp1} (corresponding to tail noise), we define, for any $i\in S, z\in {[n]^d}, \mathbf{w}\in \H$, permutation $\pi=(\Sigma, q)$ and hashing $H=(\pi, B, F)$
\begin{equation}\label{eq:et-pi}
e^{tail}_i(H, z, x):=\left|G_{o_i(i)}^{-1}\cdot \sum_{j\in {[n]^d}\setminus S} G_{o_i(j)}x_j \omega^{z^T \Sigma (j-i)}\right|.
\end{equation}
With this definition in place, $e^{tail}_i(H, z, x)$ upper bounds the second term in~\eqref{eq:uexp1}. As our algorithm uses several values of $a\in {\mathcal{A}}_r\subseteq {[n]^d}\times {[n]^d}$ to perform location, a more robust version of $e^{tail}_i(H, z, x)$ will be useful. To that effect we let, for any $\mathcal{Z}\subseteq {[n]^d}$ (we will later use $\mathcal{Z}={\mathcal{A}}_r\star (\mathbf{1}, \mathbf{w})$ for various $\mathbf{w}\in \H$),
\begin{equation}\label{eq:et-pi-a-h}
e^{tail}_i(H, \mathcal{Z}, x):=\text{quant}^{1/5}_{z\in \mathcal{Z}} \left|G_{o_i(i)}^{-1}\cdot \sum_{j\in {[n]^d}\setminus S} G_{o_i(j)}x_j \omega^{z^T \Sigma (j-i)}\right|.
\end{equation}
Note that the algorithm first selects sets ${\mathcal{A}}_r\subseteq {[n]^d}\times {[n]^d}$, and then accesses the signal at locations ${\mathcal{A}}_r\star (\mathbf{1}, \mathbf{w})$ for $\mathbf{w}\in \H$.
The definition of $e^{tail}_i(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x)$ for a fixed $\mathbf{w}\in \H$ captures the amount of noise that the measurements taken with $H$ suffer from when locating a specific set of bits of $\Sigma i$. Since the algorithm requires all $\mathbf{w}\in \H$ to be not too noisy in order to succeed (see precondition {\bf 2} of Lemma~\ref{lm:loc}), it is convenient to introduce notation that captures this. We define
\begin{equation}\label{eq:et-pi-a}
e^{tail}_i(H, {\mathcal{A}}, x):=40\mu_{H, i}(x)+\sum_{\mathbf{w}\in \H} \left|e^{tail}_i(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x)-40\mu_{H, i}(x)\right|_+
\end{equation}
where for any $\eta\in \mathbb{R}$ one has $|\eta|_+=\eta$ if $\eta>0$ and $|\eta|_+=0$ otherwise.
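As a small illustration of how \eqref{eq:et-pi-a} combines the per-$\mathbf{w}$ quantities, here is a hedged Python sketch (the function names are ours; the per-$\mathbf{w}$ noise values and $\mu_{H,i}(x)$ are taken as given inputs):

```python
def pos(eta):
    """|eta|_+ : the positive part of a real number."""
    return eta if eta > 0 else 0.0

def etail_aggregate(mu, etail_per_w):
    """Combine per-w tail-noise values as in the definition:
    40*mu plus the excess of each e^tail_i(H, A*(1,w), x) over 40*mu."""
    base = 40.0 * mu
    return base + sum(pos(e - base) for e in etail_per_w)
```

The clipping around the baseline $40\mu_{H,i}(x)$ means that only $\mathbf{w}$'s whose noise exceeds the baseline contribute to the aggregate.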
The following definition is useful for bounding the norm of elements $i\in S$ that are not discovered by several calls to \textsc{LocateSignal} on a sequence of hashings $\{H_r\}$. For a sequence of measurement patterns $\{H_r, {\mathcal{A}}_r\}$ we let
\begin{equation}\label{eq:et}
e^{tail}_i(\{H_r, {\mathcal{A}}_r\}, x):=\text{quant}^{1/5}_r e^{tail}_i(H_r, {\mathcal{A}}_r, x).
\end{equation}
Finally, for any $S\subseteq {[n]^d}$ we let
$$
e^{head}_S(\cdot):=\sum_{i\in S} e^{head}_i(\cdot)\text{~~~and~~~}e^{tail}_S(\cdot):=\sum_{i\in S} e^{tail}_i(\cdot),
$$
where $\cdot$ stands for any set of parameters as above.
Equipped with the definitions above, we now prove the following lemma, which yields sufficient conditions for recovery of elements $i\in S$ in \textsc{LocateSignal} in terms of $e^{head}$ and $e^{tail}$.
\begin{lemma}\label{lm:loc}
Let $H=(\pi, B, F)$ be a hashing, and let ${\mathcal{A}}\subseteq {[n]^d}\times {[n]^d}$. Then for every $S\subseteq {[n]^d}$ and for every $x, \chi\in \mathbb{C}^{{[n]^d}}$ and $x'=x-\chi$, the following conditions hold.
Let $L$ denote the output of
$$
\Call{LocateSignal}{\chi, H, \{m(\widehat{x}, H, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}, \mathbf{w}\in \H}}.
$$
Then for any $i\in S$ such that $|x'_i|>N^{-\Omega(c)}$, if
\begin{enumerate}
\item $e^{head}_{i}(H, x, \chi)<|x'_i|/20$;
\item $e^{tail}_{i}(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x)< |x'_i|/20$ for all $\mathbf{w}\in \H$;
\item for every $s\in [1:d]$ the set ${\mathcal{A}}\star(\mathbf{0}, \mathbf{e}_s)$ is balanced in coordinate $s$ (as per Definition~\ref{def:balance}),
\end{enumerate}
then $i\in L$. The time taken by the invocation of \textsc{LocateSignal} is $O(B\cdot \log^{d+1} N)$.
\end{lemma}
\begin{proof}
We show that each coordinate $s=1,\ldots, d$ of $\Sigma i$ is successfully recovered in \textsc{LocateSignal}. Let $q=\Sigma i$ for convenience.
Fix $s\in [1:d]$. We show by induction on $g=0,\ldots, \log_{\Delta} n-1$ that after the $g$-th iteration of lines~6-10 of Algorithm~\ref{alg:location} we have that ${\bf f}_s$ coincides with ${\bf q}_s$ on the bottom $g\cdot \log_2 \Delta$ bits, i.e. ${\bf f}_s-{\bf q}_s= 0 \mod \Delta^g$ (note that we trivially have ${\bf f}_s< \Delta^g$ after iteration $g$).
The {\bf base} of the induction is trivial and is provided by $g=0$.
We now show the {\bf inductive step}. Assume by the inductive hypothesis that ${\bf f}_s-{\bf q}_s= 0 \mod \Delta^{g-1}$, so that
${\bf q}_s={\bf f}_s+\Delta^{g-1}(r_0+\Delta r_1+\Delta^2 r_2+\ldots)$ for some sequence $r_0,r_1,\ldots$, $0\leq r_j<\Delta$. Thus, $(r_0, r_1,\ldots)$ is the expansion of $({\bf q}_s-{\bf f}_s)/\Delta^{g-1}$ base $\Delta$, and $r_0$ is the least significant digit. We now show that $r_0$ is the unique value of $r$ that satisfies the conditions of lines~8-10 of Algorithm~\ref{alg:location}.
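The arithmetic behind this induction — that peeling the least significant base-$\Delta$ digit of $({\bf q}_s-{\bf f}_s)/\Delta^{g-1}$ in each round reconstructs ${\bf q}_s$ — can be sanity-checked in isolation. The Python sketch below is our own idealized illustration (it reads the digits of $q$ directly, whereas the algorithm extracts them from noisy measurements):

```python
def peel_digits(q, Delta, num_iter):
    """Recover q digit by digit in base Delta, maintaining the invariant
    f == q (mod Delta**g) after iteration g, as in the induction."""
    f = 0
    for g in range(1, num_iter + 1):
        # r0 is the least significant base-Delta digit of (q - f) / Delta**(g-1)
        r0 = ((q - f) // Delta ** (g - 1)) % Delta
        f += r0 * Delta ** (g - 1)
        assert (q - f) % Delta ** g == 0  # inductive invariant
    return f
```

With $\log_\Delta n$ iterations (so that $\Delta^{num\_iter}\geq q$), \texttt{f} recovers \texttt{q} exactly.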
First, by~\eqref{eq:uexp1} together with~\eqref{eq:eh-pi} and~\eqref{eq:et-pi}, one has for each $a\in {\mathcal{A}}$ and $\mathbf{w}\in \H$
\begin{equation*}
\begin{split}
\left|m_{h(i)}(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{w}))- G_{o_i(i)} x'_i \omega^{(a\star(\mathbf{1}, \mathbf{w}))^T {\bf q}}\right|&\leq e^{head}_i(H, x, \chi)+e^{tail}_i(H, a\star(\mathbf{1}, \mathbf{w}), x)+N^{-\Omega(c)}.
\end{split}
\end{equation*}
Since $\mathbf{0}\in \H$, we also have for all $a\in {\mathcal{A}}$
\begin{equation*}
\begin{split}
\left|m_{h(i)}(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{0}))- G_{o_i(i)} x'_i \omega^{(a\star(\mathbf{1}, \mathbf{0}))^T {\bf q}}\right|&\leq e^{head}_i(H, x, \chi)+e^{tail}_i(H, a\star (\mathbf{1}, \mathbf{0}), x)+N^{-\Omega(c)},
\end{split}
\end{equation*}
where the $N^{-\Omega(c)}$ terms correspond to polynomially small error from approximate computation of the Fourier transform via Lemma~\ref{c:semiequi1}.
Let $j:=h(i)$. We will show that $i$ is recovered from bucket $j$. The bounds above imply that
\begin{equation}\label{eq:gergergre}
\begin{split}
\frac{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{w}))}{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{0}))}=\frac{x'_i \omega^{(a\star (\mathbf{1}, \mathbf{w}))^T {\bf q}}+E'}{x'_i \omega^{(a\star (\mathbf{1}, \mathbf{0}))^T {\bf q}}+E''}
\end{split}
\end{equation}
for some $E', E''$ satisfying $|E'|\leq e^{head}_i(H, x, \chi)+e^{tail}_i(H, a\star (\mathbf{1}, \mathbf{w}), x)+N^{-\Omega(c)}$ and $|E''|\leq e^{head}_i(H, x, \chi)+e^{tail}_i(H, a\star (\mathbf{1}, \mathbf{0}), x)+N^{-\Omega(c)}$. For all but a $1/5$ fraction of $a\in {\mathcal{A}}$ we have by definition of $e^{tail}$ (see~\eqref{eq:et-pi-a-h}) that {\bf both}
\begin{equation}\label{eq:etail-bounds-eq-1}
e^{tail}_i(H, a\star (\mathbf{1}, \mathbf{w}), x)\leq e^{tail}_i(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x)\leq |x'_i|/20
\end{equation}
and
\begin{equation}\label{eq:etail-bounds-eq-2}
e^{tail}_i(H, a\star (\mathbf{1}, \mathbf{0}), x)\leq e^{tail}_i(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{0}), x)\leq |x'_i|/20.
\end{equation}
In particular, we can rewrite ~\eqref{eq:gergergre} as
\begin{equation}\label{eq:gergergre-2}
\begin{split}
\frac{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{w}))}{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{0}))}&=\frac{x'_i \omega^{(a\star (\mathbf{1}, \mathbf{w}))^T {\bf q}}+E'}{x'_i \omega^{(a\star (\mathbf{1}, \mathbf{0}))^T {\bf q}}+E''}\\
&=\frac{\omega^{(a\star (\mathbf{1}, \mathbf{w}))^T {\bf q}}}{\omega^{(a\star (\mathbf{1}, \mathbf{0}))^T {\bf q}}}\cdot\xi\text{~~~where~~}\xi=\frac{1+\omega^{-(a\star (\mathbf{1}, \mathbf{w}))^T {\bf q}}E'/x_i'}{1+\omega^{-(a\star (\mathbf{1}, \mathbf{0}))^T{\bf q}} E''/x_i'}\\
&=\omega^{(a\star (\mathbf{1}, \mathbf{w}))^T {\bf q}-(a\star (\mathbf{1}, \mathbf{0}))^T {\bf q}}\cdot\xi\\
&=\omega^{(a\star (\mathbf{0}, \mathbf{w}))^T {\bf q}}\cdot\xi.\\
\end{split}
\end{equation}
Let ${\mathcal{A}}^*\subseteq {\mathcal{A}}$ denote the set of values of $a\in {\mathcal{A}}$ that satisfy the bounds~\eqref{eq:etail-bounds-eq-1} and~\eqref{eq:etail-bounds-eq-2} above.
We thus have for $a\in {\mathcal{A}}^*$, combining ~\eqref{eq:gergergre-2} with assumptions {\bf 1-2} of the lemma, that
\begin{equation}\label{eq:bound-1-oigb344tg32t}
|E'|/|x'_i|\leq (2/20)+N^{-\Omega(c)}\leq 1/8\text{~~~and~~~~}|E''|/|x'_i|\leq (2/20)+N^{-\Omega(c)}\leq 1/8
\end{equation}
for sufficiently large $N$, where $O(c)$ is the word precision of our semi-equispaced Fourier transform computation. Note that we used the assumption that $|x'_i|\geq N^{-\Omega(c)}$.
Writing $a=(\alpha, \beta)\in{[n]^d}\times {[n]^d}$, we have by~\eqref{eq:gergergre-2} that $\frac{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{w}))}{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{0}))}=\omega^{((\alpha, \beta)\star (\mathbf{0}, \mathbf{w}))^T {\bf q}}\cdot\xi$, and since $\mathbf{w}^T{\bf q}=n\Delta^{-g}{\bf q}_s$ when $\mathbf{w}=n \Delta^{-g} {\bf e}_s$ (as in line~8 of Algorithm~\ref{alg:location}), we get
$$
\frac{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{w}))}{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{0}))}=\omega^{(a\star (\mathbf{0}, \mathbf{w}))^T {\bf q}}\cdot\xi=\omega^{n\Delta^{-g} \beta_s {\bf q}_s}\cdot\xi=\omega^{n\Delta^{-g} \beta_s {\bf q}_s}+\omega^{n\Delta^{-g} \beta_s {\bf q}_s}(\xi-1).
$$
We analyze the first term now, and will show later that the second term is small. Since ${\bf q}_s={\bf f}_s+\Delta^{g-1}(r_0+\Delta r_1+\Delta^2 r_2+\ldots)$ by the inductive hypothesis, we have, substituting the first term above into the expression in line~10 of Algorithm~\ref{alg:location},
\begin{equation*}
\begin{split}
\omega_\Delta^{-r\cdot \beta_s}\cdot \omega^{-n\Delta^{-g}{\bf f}_s\cdot \beta_s}\cdot \omega^{n\Delta^{-g} \beta_s {\bf q}_s}&=\omega_\Delta^{-r\cdot \beta_s}\cdot \omega^{n\Delta^{-g}({\bf q}_s-{\bf f}_s)\cdot \beta_s}\\
&=\omega_\Delta^{-r\cdot \beta_s}\cdot \omega^{n\Delta^{-g}(\Delta^{g-1}(r_0+\Delta r_1+\Delta^2 r_2+\ldots))\cdot \beta_s}\\
&=\omega_\Delta^{-r\cdot \beta_s}\cdot \omega^{(n/\Delta)\cdot (r_0+\Delta r_1+\Delta^2 r_2+\ldots)\cdot \beta_s}\\
&=\omega_\Delta^{-r\cdot \beta_s}\cdot \omega_{\Delta}^{r_0\cdot \beta_s}\\
&=\omega_\Delta^{(-r+r_0)\cdot \beta_s}.
\end{split}
\end{equation*}
We used the fact that $\omega^{n/\Delta}=e^{2\pi i (n/\Delta)/n}=e^{2\pi i/\Delta}=\omega_\Delta$ and $(\omega_{\Delta})^\Delta=1$. Thus, we have
\begin{equation}\label{eq:92hg34grggggdds}
\omega_{\Delta}^{-r\cdot \beta_s}\omega^{-(n\Delta^{-g}{\bf f}_s)\cdot \beta_s}\frac{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{w}))}{m_j(\widehat{x'}, H, a\star (\mathbf{1}, \mathbf{0}))}=\omega_\Delta^{(-r+r_0)\cdot \beta_s}+\omega_\Delta^{(-r+r_0)\cdot \beta_s}(\xi-1).
\end{equation}
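The root-of-unity identities $\omega^{n/\Delta}=\omega_\Delta$ and $\omega_\Delta^\Delta=1$ used in this derivation can be sanity-checked numerically (with illustrative values $n=16$, $\Delta=4$; $\Delta$ divides $n$):

```python
import cmath

n, Delta = 16, 4  # illustrative values; Delta divides n
omega = cmath.exp(2j * cmath.pi / n)          # primitive n-th root of unity
omega_Delta = cmath.exp(2j * cmath.pi / Delta)

# omega^(n/Delta) equals omega_Delta, and omega_Delta^Delta = 1
assert abs(omega ** (n // Delta) - omega_Delta) < 1e-12
assert abs(omega_Delta ** Delta - 1) < 1e-12
```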
We now consider two cases. First suppose that $r=r_0$. Then $\omega_\Delta^{(-r+r_0)\cdot \beta_s}=1$, and it remains to note that by~\eqref{eq:bound-1-oigb344tg32t} we have $|\xi-1|\leq \frac{1+1/8}{1-1/8}-1\leq 2/7< 1/3$.
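The bound $|\xi-1|\leq 2/7$ can also be checked numerically: for any complex $a, b$ with $|a|, |b|\leq 1/8$ one has $|(1+a)/(1+b)-1| = |a-b|/|1+b| \leq (1/4)/(7/8) = 2/7$. A randomized Python spot check:

```python
import cmath, math, random

random.seed(0)
for _ in range(10000):
    # random a, b in the complex disk of radius 1/8
    a = (1 / 8) * random.random() * cmath.exp(2j * cmath.pi * random.random())
    b = (1 / 8) * random.random() * cmath.exp(2j * cmath.pi * random.random())
    xi = (1 + a) / (1 + b)
    assert abs(xi - 1) <= 2 / 7 + 1e-12
```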
Thus every $a\in {\mathcal{A}}^*$ passes the test in line~9 of Algorithm~\ref{alg:location}. Since $|{\mathcal{A}}^*|\geq (4/5)|{\mathcal{A}}|>(3/5)|{\mathcal{A}}|$ by the argument above, we have that $r_0$ passes the test in line~9. It remains to show that $r_0$ is the unique element in $0,\ldots, \Delta-1$ that passes this test.
Now suppose that $r\neq r_0$. Then by the assumption that ${\mathcal{A}}\star (\mathbf{0}, \mathbf{e}_s)$ is balanced (assumption {\bf 3} of the lemma), at least a $49/100$ fraction of the values $\omega_\Delta^{(-r+r_0)\cdot \beta_s}$ have negative real part. This means that for at least a $49/100$ fraction of $a\in {\mathcal{A}}$ we have, using the triangle inequality,
\begin{equation*}
\begin{split}
\left|\left[\omega_\Delta^{(-r+r_0)\cdot \beta_s}+\omega_\Delta^{(-r+r_0)\cdot \beta_s}(\xi-1)\right]-1\right|&\geq \left|\omega_\Delta^{(-r+r_0)\cdot \beta_s}-1\right|-\left|\omega_\Delta^{(-r+r_0)\cdot \beta_s}(\xi-1)\right|\\
&\geq \left|\mathbf{i}-1\right|-1/3\\
&\geq \sqrt{2}-1/3> 1/3,
\end{split}
\end{equation*}
so fewer than a $3/5$ fraction of the values $a\in {\mathcal{A}}$ can pass the test, and hence the condition in line~9 of Algorithm~\ref{alg:location} is not satisfied for any $r\neq r_0$. This shows that location is successful and completes the proof of correctness.
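The geometric fact used here — a unit-modulus complex number with non-positive real part lies at distance at least $\sqrt{2}$ from $1$, since $|z-1|^2=2-2\,\mathrm{Re}(z)\geq 2$ — can be verified numerically:

```python
import cmath, math

# For z on the unit circle with Re(z) <= 0, |z - 1|^2 = 2 - 2*Re(z) >= 2.
for k in range(1000):
    theta = math.pi / 2 + math.pi * k / 999  # angles in [pi/2, 3*pi/2]
    z = cmath.exp(1j * theta)
    assert abs(z - 1) >= math.sqrt(2) - 1e-9
```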
Runtime bounds follow by noting that \textsc{LocateSignal} recovers $d$ coordinates with $\log n$ bits per coordinate. Coordinates are recovered in batches of $\log \Delta$ bits, and the time taken is bounded by $B\cdot d(\log_\Delta n)\Delta\leq B (\log N)^{3/2}$. Updating the measurements using semi-equispaced FFT takes $B\log^{d+1} N$ time.
\end{proof}
We also get an immediate corollary of Lemma~\ref{lm:loc}. The corollary is crucial to our proof of Theorem~\ref{thm:l1-res-loc} (the main result about efficiency of \textsc{LocateSignal}) in the next section.
\begin{corollary}\label{cor:loc}
For any integer $r_{max}\geq 1$, for any sequence of $r_{max}$ hashings $H_r=(\pi_r, B, F), r\in [1:r_{max}]$ and evaluation points ${\mathcal{A}}_r\subseteq {[n]^d}\times {[n]^d}$, for every $S\subseteq {[n]^d}$ and for every $x, \chi\in \mathbb{C}^{{[n]^d}}, x':=x-\chi$, the following conditions hold.
If for each $r\in [1:r_{max}]$ we let $L_r\subseteq {[n]^d}$ denote the output of \Call{LocateSignal}{$\chi, H_r, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}_r, \mathbf{w}\in \H}$} and let $L=\bigcup_{r=1}^{r_{max}} L_r$, and if the sets ${\mathcal{A}}_r\star(\mathbf{0}, \mathbf{w})$ are balanced for all $\mathbf{w}\in \H$ and $r\in [1:r_{max}]$, then
\begin{equation}
||x'_{S\setminus L}||_1\leq 20 ||e^{head}_S(\{H_r\}, x, \chi)||_1+20 ||e^{tail}_S(\{H_r, {\mathcal{A}}_r\}, x)||_1+|S|\cdot N^{-\Omega(c)}.\tag{*}
\end{equation}
Furthermore, every element $i\in S$ such that
\begin{equation}
|x'_i|>20 (e^{head}_i(\{H_r\}, x, \chi)+e^{tail}_i(\{H_r, {\mathcal{A}}_r\}, x))+N^{-\Omega(c)}\tag{**}
\end{equation}
belongs to $L$.
\end{corollary}
\begin{proof}
Suppose that $i\in S$ fails to be located in any of the $r_{max}$ calls, and $|x'_i|\geq N^{-\Omega(c)}$. By Lemma~\ref{lm:loc} and the assumption that ${\mathcal{A}}_r\star(\mathbf{0}, \mathbf{w})$ is balanced for all $\mathbf{w}\in \H$ and $r\in [1:r_{max}]$, this means that for at least one half of the values $r\in [1:r_{max}]$ either {\bf (A)} $e^{head}_{i}(H_r, x, \chi)\geq |x'_i|/20$ or {\bf (B)} $e^{tail}_{i}(H_r, {\mathcal{A}}_r\star (\mathbf{1}, \mathbf{w}), x)> |x'_i|/20$ for at least one $\mathbf{w}\in \H$. We consider these two cases separately.
\paragraph{Case (A).} In this case we have $e^{head}_{i}(H_r, x, \chi)\geq |x'_i|/20$ for at least one half of the values $r\in [1:r_{max}]$, so
in particular $e^{head}_i(\{H_r\}, x, \chi)=\text{quant}^{1/5}_r e^{head}_{i}(H_r, x, \chi)\geq |x'_i|/20$.
\paragraph{Case (B).} Suppose that $e^{tail}_{i}(H_r, {\mathcal{A}}_r\star (\mathbf{1}, \mathbf{w}), x)> |x'_i|/20$ for some $\mathbf{w}=\mathbf{w}(r)\in \H$ for at least one half of $r\in [1:r_{max}]$ (denote this set by $Q\subseteq [1:r_{max}]$). We then have
\begin{equation*}
\begin{split}
e^{tail}_i(\{H_r, {\mathcal{A}}_r\}, x)&=\text{quant}^{1/5}_{r\in [1:r_{max}]} e^{tail}_i(H_r, {\mathcal{A}}_r, x)\\
&=\text{quant}^{1/5}_{r\in [1:r_{max}]} \left[40\mu_{H_r, i}(x)+\sum_{\mathbf{w}\in \H} \left|e^{tail}_i(H_r, {\mathcal{A}}_r\star (\mathbf{1}, \mathbf{w}), x)-40\mu_{H_r, i}(x)\right|_+\right]\\
&\geq \min_{r\in Q} \left[40\mu_{H_r, i}(x)+\left|e^{tail}_i(H_r, {\mathcal{A}}_r\star (\mathbf{1}, \mathbf{w}(r)), x)-40\mu_{H_r, i}(x)\right|_+\right]\\
&\geq \min_{r\in Q} e^{tail}_i(H_r, {\mathcal{A}}_r\star (\mathbf{1}, \mathbf{w}(r)), x)\\
&\geq |x'_i|/20
\end{split}
\end{equation*}
as required. This proves {\bf (**)}; since every $i\in S\setminus L$ must then satisfy $|x'_i|\leq 20 (e^{head}_i(\{H_r\}, x, \chi)+e^{tail}_i(\{H_r, {\mathcal{A}}_r\}, x))+N^{-\Omega(c)}$, summing this bound over $i\in S\setminus L$ yields {\bf (*)}.
\end{proof}
\section{Analysis of \textsc{LocateSignal}: bounding $\ell_1$ norm of undiscovered elements}\label{sec:l1}
The main result of this section is Theorem~\ref{thm:l1-res-loc}, which is our main tool for showing efficiency of \textsc{LocateSignal}. Theorem~\ref{thm:l1-res-loc} applies to the following setting. Fix a set $S\subseteq {[n]^d}$ and a set of hashings $H_1,\ldots, H_{r_{max}}$, and let $S^*\subseteq S$ denote the set of elements of $S$ that are not isolated with respect to most of these hashings $H_1,\ldots, H_{r_{max}}$. Theorem~\ref{thm:l1-res-loc} shows that for any signal $x$ and partially recovered signal $\chi$, if $L$ denotes the output list of an invocation of \textsc{LocateSignal} on the pair $(x, \chi)$ with hashings $H_1,\ldots, H_{r_{max}}$, then
the $\ell_1$ norm of elements of the residual $(x-\chi)_S$ that are not discovered by \textsc{LocateSignal} can be bounded by a function of the amount of $\ell_1$ mass of the residual that fell outside of the `good' set $S\setminus S^*$, plus the `noise level' $\mu\geq ||x_{{[n]^d}\setminus S}||_\infty$ times $k$.
If we think of applying Theorem~\ref{thm:l1-res-loc} iteratively, we intuitively get that the fixed set of measurements with hashings $\{H_r\}$ allows us to always reduce the $\ell_1$ norm of the residual $x'=x-\chi$ on the `good' set $S\setminus S^*$ to about the amount of mass that is located outside of this good set.
\noindent{\em {\bf Theorem~\ref{thm:l1-res-loc}}
There exist absolute constants $C_1, C_2, C_3>0$ such that for any $x, \chi\in \mathbb{C}^N$ and residual signal $x'=x-\chi$ the following conditions hold. Let $S\subseteq {[n]^d}, |S|\leq 2k$, be such that $||x_{{[n]^d}\setminus S}||_\infty\leq \mu$. Suppose that $||x||_\infty/\mu\leq N^{O(1)}$.~Let $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$.~ Let $S^*\subseteq S$ denote the set of elements that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of hashings $\{H_r\}_{r=1}^{r_{max}}$. Suppose that for every $s\in [1:d]$ the sets ${\mathcal{A}}_r\star(\mathbf{0}, \mathbf{e}_s)$ are balanced (as per Definition~\ref{def:balance}), $r=1,\ldots, r_{max}$, and the exponent $F$ of the filter $G$ is even and satisfies $F\geq 2d$.
Let
$$
L=\bigcup_{r=1}^{r_{max}}\Call{LocateSignal}{\chi, H_r, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{a\in {\mathcal{A}}_r, \mathbf{w}\in \H_r}}.
$$
Then if $r_{max}, c_{max}\geq (C_1/\sqrt{\alpha})\log\log N$, one has
$$
||x'_{S\setminus S^*\setminus L}||_1\leq (C_2\alpha)^{d/2} ||x'_S||_1+C_3^{d^2}(||\chi_{{[n]^d}\setminus S}||_1+||x'_{S^*}||_1)+4\mu |S|.
$$
}
As we will show later, Theorem~\ref{thm:l1-res-loc} can be used to show that (assuming perfect estimation) invoking \textsc{LocateSignal} repeatedly allows one to reduce the $\ell_1$ norm of the head elements down to essentially
$$
||x'_{S^*}||_1+||\chi_{{[n]^d}\setminus S}||_1,
$$
i.e. the $\ell_1$ norm of the elements that are not well isolated and the set of new elements created by the process due to false positives in location.
In what follows we derive bounds on $||e^{head}||_1$ (in section~\ref{sec:h-noise}) and $||e^{tail}||_1$ (in section~\ref{sec:tail-noise}) that lead to a proof of Theorem~\ref{thm:l1-res-loc}.
\subsection{Bounding noise from heavy hitters}\label{sec:h-noise}
We first derive bounds on the noise from heavy hitters that a single hashing $H$ results in, i.e. $e^{head}(H, x, \chi)$ (see Lemma~\ref{lm:l1b-single-pi}), and then use these bounds to bound $e^{head}(\{H_r\}, x, \chi)$ (see Lemma~\ref{lm:eh-s-sstar}). These bounds, together with upper bounds on the contribution of tail noise from the next section, then lead to a proof of Theorem~\ref{thm:l1-res-loc}.
\begin{lemma}\label{lm:l1b-single-pi}
Let $x, \chi\in \mathbb{C}^N, x'=x-\chi$. Let $S\subseteq {[n]^d}, |S|\leq 2k$, be such that $||x_{{[n]^d}\setminus S}||_\infty\leq \mu$. Suppose that $||x||_\infty/\mu\leq N^{O(1)}$.~Let $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$.~
Let $\pi=(\Sigma, q)$ be a permutation, and let $H=(\pi, B, F)$, $F\geq 2d$, be a hashing into $B$ buckets with filter $G$ of sharpness $F$. Let $S^*_H\subseteq S$ denote the set of elements $i\in S$ that are not isolated under $H$. Then one has, for $e^{head}$ defined with respect to $S$,
$$
||e^{head}_{S\setminus S^*_H}(H, x, \chi)||_1\leq 2^{O(d)} \alpha^{d/2} ||x'_{S\setminus S^*_H}||_1+(2\pi)^{d\cdot \fc} \cdot 2^{O(d)} (||x'_{S^*_H}||_1+||\chi_{{[n]^d}\setminus S}||_1).
$$
Furthermore, if $\chi_{{[n]^d}\setminus S}=0$ and $S^*_H=\emptyset$, then one has $||e^{head}_{S}(H, x, \chi)||_\infty\leq 2^{O(d)}\alpha^{d/2} ||x'_S||_\infty$.
\end{lemma}
\begin{proof}
By \eqref{eq:eh-pi}, for $i\in S\setminus S^*_H$
\begin{equation}\label{eq:asdf}
\begin{split}
e^{head}_{i}(H, x, \chi)&= |G_{o_i(i)}^{-1}|\cdot \sum_{j\in S\setminus S^*_H \setminus \{i\}} |G_{o_i(j)}||x'_j|\text{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(isolated head elements)}\\
&+|G_{o_i(i)}^{-1}|\cdot \left[\sum_{j\in S^*_H} |G_{o_i(j)}||x'_j|+\sum_{j\in {[n]^d}\setminus S} |G_{o_i(j)}||\chi_j|\right]\text{~~~(non-isolated head elements and false positives)}\\
&=|G_{o_i(i)}^{-1}|\cdot (A_1(i)+A_2(i)).
\end{split}
\end{equation}
Let $A_1:=\sum_{i\in S\setminus S^*_H} A_1(i), A_2:=\sum_{i\in S\setminus S^*_H} A_2(i)$.
We bound $A_1$ and $A_2$ separately.
\paragraph{Bounding $A_1$.} We start with a convenient upper bound on $A_1$:
\begin{equation}\label{eq:2pogh}
\begin{split}
A_1=&\sum_{i\in S\setminus S^*_H} \sum_{j\in S\setminus S^*_H\setminus \{i\}} |G_{o_i(j)}||x'_j|~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\text{(recall that $o_i(j)=\pi(j)-(n/b)h(i)$)}\\
& =\sum_{t\geq 0} \sum_{i\in S\setminus S^*_H} \sum_{\substack{j\in S\setminus S^*_H\setminus \{i\}\text{~s.t.~}\\ ||\pi(j)-\pi(i)||_\infty\in (n/b)\cdot [2^t-1, 2^{t+1}-1)}} |G_{o_i(j)}||x'_j|, \text{~~~(consider all scales $t\geq 0$)}\\
& \leq \sum_{t\geq 0} \sum_{i\in S\setminus S^*_H} \max_{||\pi(j)-\pi(i)||_\infty\geq (n/b)\cdot (2^t-1)} G_{o_i(j)}\cdot \sum_{\substack{j\in S\setminus S^*_H\setminus \{i\}\text{~s.t.~}\\ ||\pi(j)-\pi(i)||_\infty\leq (n/b)\cdot (2^{t+1}-1)}} |x'_j|\\
&= \sum_{j\in S\setminus S^*_H} |x'_j|\cdot \sum_{t\geq 0} \max_{\substack{||\pi(j)-\pi(i)||_\infty\geq \\(n/b)\cdot (2^t-1)}} G_{o_i(j)}\cdot \left|\left\{i\in S\setminus S^*_H\setminus \{j\} \text{~s.t.~}||\pi(j)-\pi(i)||_\infty\leq (n/b)\cdot (2^{t+1}-1)\right\}\right|.
\end{split}
\end{equation}
Note that in the first line we summed, over all $i\in S\setminus S^*_H$ (i.e. all isolated $i$), the contributions of all other $i\in S$ to the noise in their buckets. We need to bound the first line in terms of $||x'_{S\setminus S^*_H}||_1$. For that, we first classified all $j\in S\setminus S^*_H$ according to the $\ell_\infty$ distance from $i$ to $j$ (in the second line), then upper bounded the value of the filter $G_{o_i(j)}$ based on the distance $||\pi(i)-\pi(j)||_\infty$, and finally changed order of summation to ensure that the outer summation is a weighted sum of absolute values of $x'_j$ {\em over all $j\in S\setminus S^*_H$}\footnote{We note here that we started by summing over $i$ first and then over $j$, but switched the order of summation to the opposite in the last line. This is because the quantity $G_{o_i(j)}$, which determines contribution of $j\in S$ to the estimation error of $i\in S$ {\em is not symmetric} in $i$ and $j$. Indeed, even though $G$ itself is symmetric around the origin, we have $o_i(j)=\pi(j)-(n/b)h(i)\neq o_j(i)$. }. In order to upper bound $A_1$ it now suffices to upper bound all factors multiplying $x'_j$ in the last line of the equation above. As we now show, a strong bound follows from isolation properties of $i$.
We start by upper bounding $G$ using Lemma~\ref{lm:filter-prop}, {\bf (2)}. We first note that by triangle inequality
$$
||\pi(j)-(n/b)h(i)||_\infty\geq ||\pi(j)-\pi(i)||_\infty-||\pi(i)-(n/b)h(i)||_\infty\geq (n/b)(2^t-1)-(n/b)= (n/b) (2^t-2).
$$
The rhs is positive for all $t\geq 2$, and for $t\geq 3$ it satisfies $2^t-2\geq 2^{t-2}$. We hence get for all $t\geq 3$
\begin{equation}\label{eq:g-ubound}
\begin{split}
\max_{||\pi(j)-\pi(i)||_\infty\geq (n/b)\cdot (2^{t}-1)} G_{o_i(j)}\leq \left(\frac2{1+||\pi(j)-(n/b)h(i)||_\infty}\right)^{F}\leq \left(\frac2{1+2^{t-2}}\right)^{F}\leq 2^{-(t-3)F}.
\end{split}
\end{equation}
We also have the bound $||G||_\infty\leq 1$ from Lemma~\ref{lm:filter-prop}, {\bf (3)}. It remains to bound the last term on the rhs of the last line in~\eqref{eq:2pogh}. We need the fact that for a pair $i, j$ such that $||\pi(j)-\pi(i)||_\infty\leq (n/b)\cdot(2^{t+1}-1)$ we have by the triangle inequality
$$
||\pi(j)-(n/b)h(i)||_\infty\leq ||\pi(j)-\pi(i)||_\infty+||\pi(i)-(n/b)h(i)||_\infty\leq (n/b)(2^{t+1}-1)+(n/b)= (n/b) 2^{t+1}.
$$
Equipped with this bound, we now conclude that
\begin{equation}\label{eq:3hrgrg}
\begin{split}
&\left|\left\{i\in S\setminus S^*_H\setminus \{j\} \text{~s.t.~}||\pi(j)-\pi(i)||_\infty\leq (n/b)\cdot (2^{t+1}-1)\right\}\right|\\
&=|\pi(S\setminus \{i\})\cap \mathbb{B}^\infty_{(n/b) h(i)}((n/b)\cdot 2^{t+1})|\leq (2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} 2^{(t+2)d+1}\cdot 2^t,
\end{split}
\end{equation}
where we used the assumption that $i\in S\setminus S^*_H$ are isolated (see Definition~\ref{def:isolated}). We thus get for any $j\in S\setminus S^*_H$
\begin{equation*}
\begin{split}
\eta_j:=&\sum_{t\geq 0} \max_{\substack{||\pi(j)-\pi(i)||_\infty\geq \\(n/b)\cdot (2^t-1)}} G_{o_i(j)}\cdot \left|\left\{i\in S\setminus S^*_H\setminus \{j\} \text{~s.t.~}||\pi(j)-\pi(i)||_\infty\leq (n/b)\cdot (2^{t+1}-1)\right\}\right|\\
&\leq \sum_{t\geq 0} ((2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} 2^{(t+2)d+1}\cdot 2^{t}) \min\{1, 2^{-(t-3)F}\} \\
&\leq (2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} 2^{2d+1} \sum_{t\geq 0} 2^{t(d+1)}\cdot \min\{1, 2^{-(t-3)F}\}\\
\end{split}
\end{equation*}
We now note that
\begin{equation*}
\begin{split}
\sum_{t\geq 0} 2^{t(d+1)}\cdot \min\{1, 2^{-(t-3)F}\}&=1+2^{d+1}+2^{2(d+1)}+2^{3(d+1)}\sum_{t\geq 3} 2^{(t-3)(d+1)}\cdot \min\{1, 2^{-(t-3)F}\}\\
&=1+2^{d+1}+2^{2(d+1)}+2^{3(d+1)}\sum_{t\geq 3} 2^{(t-3)(d+1-F)}\leq 1+2^{d+1}+2^{2(d+1)}+2^{3(d+1)+1}\leq 2^{4(d+1)+1},\\
\end{split}
\end{equation*}
since $F\geq 2d$ by assumption of the lemma, and hence for all $j\in S\setminus S^*_H$ one has $\eta_j\leq (2\pi)^{-d\cdot \fc}\cdot 2^{O(d)} \alpha^{d/2}$. Combining the estimates above, we now get
\begin{equation*}
\begin{split}
A_1\leq &\sum_{j\in S\setminus S^*_H} |x'_j|\cdot \eta_j\leq ||x'_{S\setminus S^*_H}||_1\cdot(2\pi)^{-d\cdot \fc}\cdot 2^{O(d)}\alpha^{d/2}, \\
\end{split}
\end{equation*}
as required. The $\ell_\infty$ bound for the case $\chi_{{[n]^d}\setminus S}=0$ follows in a similar manner and is hence omitted.
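The geometric-sum estimate $\sum_{t\geq 0} 2^{t(d+1)}\min\{1, 2^{-(t-3)F}\}\leq 2^{4(d+1)+1}$ used above can be spot-checked with exact arithmetic; the sketch below (our own, for illustrative small parameters with $F=2d$ and $d\geq 2$, so that the tail ratio $2^{d+1-F}$ is below $1$) truncates the series at $T=200$ terms:

```python
from fractions import Fraction

def series(d, F, T=200):
    """Partial sum of sum_t 2^{t(d+1)} * min(1, 2^{-(t-3)F}), exactly."""
    total = Fraction(0)
    for t in range(T):
        term = Fraction(2) ** (t * (d + 1))
        if t > 3:  # for t <= 3 the min is 1
            term *= Fraction(1, 2 ** ((t - 3) * F))
        total += term
    return total

for d in range(2, 6):
    F = 2 * d  # smallest even sharpness allowed by the lemma
    assert series(d, F) <= 2 ** (4 * (d + 1) + 1)
```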
We now turn to bounding $A_2$. The bound that we get here is weaker, since $\chi_{{[n]^d}\setminus S}$ is an adversarially placed signal and we do not have isolation properties with respect to it, resulting in a weaker bound on (the equivalent of) $\eta_j$ for $j\in S^*_H$ than we had for $j\in S\setminus S^*_H$. We let $y:=x'_{S^*_H}-\chi_{{[n]^d}\setminus S}$ to simplify notation. We have, as in~\eqref{eq:2pogh},
\begin{equation*}
\begin{split}
A_2&\leq \sum_{j\in {[n]^d}} |y_j|\cdot \kappa_j,\\
& \text{~where~}\\
&\kappa_j=\sum_{t\geq 0} \max_{\substack{||\pi(j)-\pi(i)||_\infty\geq \\(n/b)\cdot (2^t-1)}} G_{o_i(j)}\cdot \left|\left\{i\in S\setminus S^*_H\setminus \{j\} \text{~s.t.~}||\pi(j)-\pi(i)||_\infty\leq (n/b)\cdot (2^{t+1}-1)\right\}\right|.
\end{split}
\end{equation*}
The first term can be upper bounded as before. For the second term, we note that every pair of points $i_1, i_2\in S\setminus S^*_H$ satisfies, by the triangle inequality,
$$
||\pi(i_1)-\pi(i_2)||_\infty\leq ||\pi(i_1)-\pi(j)||_\infty+||\pi(j)-\pi(i_2)||_\infty\leq (n/b)\cdot (2^{t+2}-2)\leq (n/b)\cdot 2^{t+2}.
$$
Since both $i_1$ and $i_2$ are isolated under $\pi$, this means that
$$
\left|\left\{i\in S\setminus S^*_H\setminus \{j\} \text{~s.t.~}||\pi(j)-\pi(i)||_\infty\leq (n/b)\cdot (2^{t+1}-1)\right\}\right|\leq (2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} 2^{(t+3)d}\cdot 2^{t+2}+1,
$$
where we used the bound from Definition~\ref{def:isolated} for $i$, but counted the point $i$ itself (this is what makes the bound on $\kappa_j$ weaker than the bound on $\eta_j$). A similar calculation to the one above for $A_1$ now gives
\begin{equation*}
\begin{split}
\kappa_j:=&\sum_{t\geq 0} \max_{\substack{||\pi(j)-\pi(i)||_\infty\geq \\(n/b)\cdot (2^t-1)}} G_{o_i(j)}\cdot \left|\left\{i\in S\setminus S^*_H\setminus \{j\} \text{~s.t.~}||\pi(j)-\pi(i)||_\infty\leq (n/b)\cdot (2^{t+1}-1)\right\}\right|\\
&\leq \sum_{t\geq 0} ((2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} 2^{(t+3)d}\cdot 2^{t+2}+1) \min\{1, 2^{-(t-3)F}\} \\
&\leq 2^{O(d)}((2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2}+1)= 2^{O(d)}.
\end{split}
\end{equation*}
We thus have
\begin{equation*}
\begin{split}
A_2\leq &\sum_{j\in {[n]^d}} |y_j| \kappa_j\leq 2^{O(d)} ||y||_1.
\end{split}
\end{equation*}
Plugging our bounds on $A_1$ and $A_2$ into \eqref{eq:asdf}, we get
\begin{equation*}
\begin{split}
||e^{head}_{S\setminus S^*_H}(H, x, \chi)||_1&\leq \max_{i\in S\setminus S^*_H}|G_{o_i(i)}^{-1}|\cdot (A_1+A_2)\leq \max_{i\in S\setminus S^*_H}|G_{o_i(i)}^{-1}|\cdot\left(2^{O(d)}(2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} ||x'_S||_1+2^{O(d)}||y||_1\right)\\
&\leq 2^{O(d)}\alpha^{d/2} ||x'_S||_1+(2\pi)^{d\cdot \fc} \cdot 2^{O(d)} ||y||_1\\
\end{split}
\end{equation*}
as required.
\if 0
Now to obtain the $\ell_\infty$ bound for the case when $\chi_{{[n]^d}\setminus S}=0$, we write for each $i\in S\setminus S^*_H$
\begin{equation}\label{eq:odfjgh}
\begin{split}
A_1(i)\leq &\sum_{t\geq 0} \left(\frac2{1+2^t}\right)^{F} \sum_{i'\in S\setminus \{i\}, ||h(i')-h(i)||_\infty\leq 2^t} |x'_{i'}|\\
&\leq ||x'||_\infty\cdot \sum_{t\geq 0} \left(\frac2{1+2^t}\right)^{F} \sum_{i\in S\setminus S^*_H}(|S^\pi\cap \mathbb{B}^\infty_{(n/b)\cdot h(i)}((n/b)\cdot 2^{t+2})|-1)\\
\end{split}
\end{equation}
and use the fact that
$$
|\mathbb{B}^\infty_{h(i)}((n/b)\cdot 2^{t+1})\cap S^\pi|-1\leq (2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} 2^{(t+3)d}\cdot 2^{t}
$$
for every $i\in (S\setminus S^*_H)$ by definition of an isolated element (see Definition~\ref{def:isolated}). Continuing \eqref{eq:odfjgh}, we get
\begin{equation*}
\begin{split}
A_1\leq ||x'||_\infty\cdot \sum_{t\geq 0} \left(\frac2{1+2^t}\right)^{F}(2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} 2^{(t+3)d}\cdot 2^{t}=||x'||_\infty\cdot (2\pi)^{-d\cdot \fc} 2^{O(d)} \alpha^{d/2},\\
\end{split}
\end{equation*}
implying that
$$
||e^{head}_{S\setminus S^*_H}(H, x, \chi)||_\infty\leq |G_{o_i(i)}^{-1}|\cdot A_1\leq 2^{O(d)}\alpha^{d/2} ||x'_S||_\infty
$$
as required.
\fi
\end{proof}
\begin{remark}
The second bound of this lemma will be useful later in section~\ref{sec:linf} for analyzing \textsc{ReduceInfNorm}.
\end{remark}
We now bound the final error induced by head elements, i.e. $e^{head}(\{H_r\}, x, \chi)$:
\begin{lemma}\label{lm:eh-s-sstar}
Let $x, \chi\in \mathbb{C}^{{[n]^d}}$, $x'=x-\chi$. Let $S\subseteq {[n]^d}, |S|\leq 2k$, be such that $||x_{{[n]^d}\setminus S}||_\infty\leq \mu$. Suppose that $||x||_\infty/\mu\leq N^{O(1)}$.~Let $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$.~ Let $\{\pi_r\}_{r=1}^{r_{max}}$ be a set of permutations, and let $H_r=(\pi_r, B, F)$, $F\geq 2d$, be hashings into $B$ buckets with filter $G$ of sharpness $F$. Let $S^*$ denote the set of elements $i\in S$ that are not isolated under at least a $\sqrt{\alpha}$ fraction of the hashings $H_r$. Then one has, for $e^{head}$ defined with respect to $S$,
$$
||e^{head}_{S\setminus S^*}(\{H_r\}, x, \chi)||_1\leq 2^{O(d)}\alpha^{d/2} ||x'_S||_1+(2\pi)^{d\cdot \fc} \cdot 2^{O(d)} ||\chi_{{[n]^d}\setminus S}||_1.
$$
Furthermore, if $\chi_{{[n]^d}\setminus S}=0$, then $||e^{head}_{S\setminus S^*}(\{H_r\}, x, \chi)||_\infty\leq 2^{d/2}\alpha^{d/2} ||x'_S||_\infty$.
\end{lemma}
\begin{proof}
Recall that by \eqref{eq:eh} one has for each $i\in {[n]^d}$ $e^{head}_i(\{H_r\}, x, \chi)=\text{quant}^{1/5}_{r\in [1:r_{max}]} e^{head}_i(H_r, x, \chi)$. This means that for each $i\in S\setminus S^*$ there exist at least $(1/5-\sqrt{\alpha})r_{max}$ values of $r$ such that $i$ is isolated under $H_r$ and
$e^{head}_i(H_r, x, \chi)\geq e^{head}_i(\{H_r\}, x, \chi)$, and hence
$$
||e^{head}_{S\setminus S^*}(\{H_r\}, x, \chi)||_1\leq \frac1{(1/5-\sqrt{\alpha})r_{max}}\sum_{r=1}^{r_{max}} ||e^{head}_{S\setminus S^*_{H_r}}(H_r, x, \chi)||_1.
$$
By Lemma~\ref{lm:l1b-single-pi} one has
$$
||e^{head}_{S\setminus S^*_{H_r}}(H_r, x, \chi)||_1\leq 2^{O(d)}\alpha^{d/2} ||x'_S||_1+(2\pi)^{d\cdot \fc} \cdot 2^{O(d)} ||\chi_{{[n]^d}\setminus S}||_1
$$
for all $r$, implying that
\begin{equation*}
\begin{split}
||e^{head}_{S\setminus S^*}(\{H_r\}, x, \chi)||_1&\leq \frac{1}{(1/5-\sqrt{\alpha})}(2^{O(d)}\alpha^{d/2} ||x'_S||_1+(2\pi)^{d\cdot \fc} \cdot 2^{O(d)} ||\chi_{{[n]^d}\setminus S}||_1)\\
&\leq 2^{O(d)}\alpha^{d/2} ||x'_S||_1+(2\pi)^{d\cdot \fc} \cdot 2^{O(d)} ||\chi_{{[n]^d}\setminus S}||_1
\end{split}
\end{equation*}
as required.
The proof of the second bound follows analogously using the $\ell_\infty$ bound from Lemma~\ref{lm:l1b-single-pi}.
\end{proof}
\begin{remark}
The second bound of this lemma will be useful later in section~\ref{sec:linf} for analyzing \textsc{ReduceInfNorm}.
\end{remark}
\subsection{Bounding effect of tail noise}\label{sec:tail-noise}
\begin{lemma}\label{lm:loc-tail-small-single-H}
For any constant $C'>0$ there exists an absolute constant $C>0$ such that for any $x\in \mathbb{C}^{{[n]^d}}$, any integer $k\geq 1$ and any $S\subseteq {[n]^d}$ such that $||x_{{[n]^d}\setminus S}||_\infty\leq C'||x_{{[n]^d}\setminus [k]}||_2/\sqrt{k}$, and for any integer $B\geq 1$ that is a power of $2^d$, the following conditions hold.
If $(H, {\mathcal{A}})$ are random measurements as in Algorithm~\ref{alg:main-sublinear}, $H=(\pi, B, F)$ satisfies $F\geq 2d$ and $||x_{{[n]^d}\setminus [k]}||_2\geq N^{-\Omega(c)}$, where $c$ governs the word precision of our semi-equispaced Fourier transform computation, then for any $i\in {[n]^d}$ one has, for $e^{tail}$ defined with respect to $S$,
$$
{\bf \mbox{\bf E}}_{H, {\mathcal{A}}}\left[e^{tail}_{i}(H, {\mathcal{A}}, x)\right]\leq (2\pi)^{d\cdot \fc} \cdot C^d(40+|\H|2^{-\Omega(|{\mathcal{A}}|)}) ||x_{{[n]^d} \setminus [k]}||_2/\sqrt{B}.
$$
\end{lemma}
\begin{proof}
Recall that for any $H=(\pi, B, F)$ and any $a, \mathbf{w}$ one has $(e^{tail}_i(H, a\star (\mathbf{1}, \mathbf{w}), x_{{[n]^d}\setminus [k]}))^2=|u_i|^2$, where
$$
u =\Call{HashToBins}{\widehat{x_{{[n]^d}\setminus S}}, 0, (H, a\star (\mathbf{1}, \mathbf{w}))}.
$$
Since the elements of ${\mathcal{A}}$ are selected uniformly at random, $a\star (\mathbf{1}, \mathbf{w})$ is uniformly random in ${[n]^d}$ for any fixed $H$ and $\mathbf{w}$, so by Lemma~\ref{lm:hashing}, {\bf (3)}, we have
\begin{equation}\label{eq:expect-a-0hg443g}
{\bf \mbox{\bf E}}_a[(e^{tail}_i(H, a\star (\mathbf{1}, \mathbf{w}), x))^2]={\bf \mbox{\bf E}}_{a}[\abs{G_{o_i(i)}^{-1}\omega^{-(a\star (\mathbf{1}, \mathbf{w}))^T\Sigma i}u_{h(i)} - x_i}^2]\leq \mu^2_{H, i}(x)+N^{-\Omega(c)},
\end{equation}
where $c>0$ is the large constant that governs the precision of our Fourier transform computations.
By Lemma~\ref{lm:hashing}, {\bf (2)} applied to the pair $(\widehat{x_{{[n]^d}\setminus S}}, 0)$ there exists a constant $C>0$ such that
$$
{\bf \mbox{\bf E}}_H[\mu^2_{H, i}] \leq (2\pi)^{2d\cdot \fc}\cdot C^d ||x_{{[n]^d} \setminus S}||_2^2/B.
$$
We would like to upper bound the rhs in terms of $||x_{{[n]^d}\setminus [k]}||_2^2$ (the tail energy), but this requires an argument since $S$ is not exactly the set of top $k$ elements of $x$. However, since $S$ contains the large coefficients of $x$, a bound is easy to obtain. Indeed, denoting the set of top $k$ coefficients of $x$ by $[k]\subseteq {[n]^d}$ as usual, we get
\begin{equation*}
||x_{{[n]^d} \setminus S}||_2^2\leq ||x_{{[n]^d} \setminus (S\cup [k])}||_2^2+||x_{[k]\setminus S}||_2^2\leq ||x_{{[n]^d} \setminus [k]}||_2^2+k\cdot ||x_{[k]\setminus S}||_\infty^2\leq ((C')^2+1)||x_{{[n]^d} \setminus [k]}||_2^2.
\end{equation*}
Thus, we have
\begin{equation*}
{\bf \mbox{\bf E}}_H[\mu^2_{H, i}(x)+N^{-\Omega(c)}]\leq (2\pi)^{2d\cdot \fc}\cdot ((C')^2+2)C^d ||x_{{[n]^d} \setminus [k]}||_2^2/B,
\end{equation*}
where we used the assumption that $||x_{{[n]^d} \setminus [k]}||_2\geq N^{-\Omega(c)}$.
We now get by Jensen's inequality
\begin{equation}\label{eq:mu-b}
{\bf \mbox{\bf E}}_H[\mu_{H, i}(x)]\leq (2\pi)^{d\cdot \fc}\cdot (C'')^d ||x_{{[n]^d} \setminus [k]}||_2/\sqrt{B}
\end{equation}
for a constant $C''>0$.
By~\eqref{eq:expect-a-0hg443g}, for each $i\in {[n]^d}$, hashing $H$, evaluation point $a\in {[n]^d}\times {[n]^d}$ and direction $\mathbf{w}$ we have
${\bf \mbox{\bf E}}_a[(e^{tail}_i(H, a\star (\mathbf{1}, \mathbf{w}), x))^2]\leq (\mu_{H, i}(x))^2+N^{-\Omega(c)}$. Applying Jensen's inequality and absorbing the polynomially small additive term, we hence get for any $H$ and $\mathbf{w}\in \H$
\begin{equation}
{\bf \mbox{\bf E}}_a[e^{tail}_{i}(H, a\star (\mathbf{1}, \mathbf{w}), x)]\leq \mu_{H, i}(x).
\end{equation}
Applying Lemma~\ref{lm:quant-exp} with $Y=e^{tail}_{i}(H, a\star (\mathbf{1}, \mathbf{w}), x)$ and $\gamma=1/5$ (recall that the definition of $e^{tail}_i(H, z, x)$ involves a $1/5$-quantile over ${\mathcal{A}}$) and using the previous bound, we get, for any fixed $H$ and $\mathbf{w}\in \H$
\begin{equation}\label{eq:e-b-exp}
{\bf \mbox{\bf E}}_{\mathcal{A}}\left[\left|e^{tail}_{i}(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x)-40\cdot \mu_{H, i}(x)\right|_+\right]\leq \mu_{H, i}(x)\cdot 2^{-\Omega(|{\mathcal{A}}|)},
\end{equation}
and hence, summing over all $\mathbf{w}\in \H$ and using linearity of expectation, we have
\begin{equation*}
{\bf \mbox{\bf E}}_{\mathcal{A}}\left[\sum_{\mathbf{w}\in \H}\left|e^{tail}_{i}(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x)-40\cdot \mu_{H, i}(x)\right|_+\right]\leq \mu_{H, i}(x)\cdot |\H|2^{-\Omega(|{\mathcal{A}}|)}.
\end{equation*}
Putting this together with~\eqref{eq:mu-b}, we get
\begin{equation*}
\begin{split}
&{\bf \mbox{\bf E}}_{H, {\mathcal{A}}}\left[e^{tail}_i(H, {\mathcal{A}}, x)\right]\\
&\leq{\bf \mbox{\bf E}}_H\left[{\bf \mbox{\bf E}}_{\mathcal{A}}\left[40\mu_{H, i}(x)+\sum_{\mathbf{w}\in \H}\left|e^{tail}_{i}(H, {\mathcal{A}}\star (\mathbf{1}, \mathbf{w}), x)-40\cdot \mu_{H, i}(x)\right|_+\right]\right]\\
&\leq{\bf \mbox{\bf E}}_H\left[\mu_{H, i}(x)(40+|\H|2^{-\Omega(|{\mathcal{A}}|)})\right]\\
&\leq (2\pi)^{d\cdot \fc} (C'')^d (40+|\H|2^{-\Omega(|{\mathcal{A}}|)}) ||x_{{[n]^d} \setminus [k]}||_2/\sqrt{B}
\end{split}
\end{equation*}
as required.
\end{proof}
\begin{lemma}\label{lm:loc-tail-small}
For any constant $C'>0$ there exists an absolute constant $C>0$ such that for any $x\in \mathbb{C}^{{[n]^d}}$, any integer $k\geq 1$ and $S\subseteq {[n]^d}$ such that $||x_{{[n]^d}\setminus S}||_\infty\leq C'||x_{{[n]^d}\setminus [k]}||_2/\sqrt{k}$, if $B\geq 1$, then the following conditions hold, for $e^{tail}$ defined with respect to $S$.
If hashings $H_r=(\pi_r, B, F), F\geq 2d$ and sets ${\mathcal{A}}_r, |{\mathcal{A}}_r|\geq c_{max}$ for $r=1,\ldots, r_{max}$ are chosen at random, then
\begin{description}
\item[(1)] for every $i\in {[n]^d}$ one has
$$
{\bf \mbox{\bf E}}_{\{(H_r, {\mathcal{A}}_r)\}}\left[e^{tail}_i(\{H_r, {\mathcal{A}}_r\}, x)\right]\leq (2\pi)^{d\cdot \fc} C^d (40+|\H|2^{-\Omega(c_{max})}) ||x_{{[n]^d} \setminus [k]}||_2/\sqrt{B}.
$$
\item[(2)] for every $i\in {[n]^d}$ one has
$$
{\bf \mbox{\bf Pr}}_{\{(H_r, {\mathcal{A}}_r)\}}\left[e^{tail}_i(\{H_r, {\mathcal{A}}_r\}, x)> (2\pi)^{d\cdot \fc} C^d (40+|\H|2^{-\Omega(c_{max})}) ||x_{{[n]^d} \setminus [k]}||_2/\sqrt{B}\right]=2^{-\Omega(r_{max})}
$$
and
\begin{equation*}
\begin{split}
&{\bf \mbox{\bf E}}_{\{(H_r, {\mathcal{A}}_r)\}}\left[\left|e^{tail}_i(\{H_r, {\mathcal{A}}_r\}, x)- (2\pi)^{d\cdot \fc} C^d (40+|\H|2^{-\Omega(c_{max})}) ||x_{{[n]^d} \setminus [k]}||_2/\sqrt{B}\right|_+\right]\\
&=2^{-\Omega(r_{max})}\cdot (2\pi)^{d\cdot \fc} C^d (40+|\H|2^{-\Omega(c_{max})}) ||x_{{[n]^d} \setminus [k]}||_2/\sqrt{B}.
\end{split}
\end{equation*}
\end{description}
\end{lemma}
\begin{proof}
Follows by applying Lemma~\ref{lm:quant-exp} with $Y=e^{tail}_{i}(H_r, {\mathcal{A}}_r, x)$, whose expectation is bounded in Lemma~\ref{lm:loc-tail-small-single-H}.
\end{proof}
\subsection{Putting it together}
The bounds from the previous two sections yield a proof of Theorem~\ref{thm:l1-res-loc}, which we restate here for convenience of the reader:
{\em {\bf Theorem~\ref{thm:l1-res-loc}}
For any constant $C'>0$ there exist absolute constants $C_1, C_2, C_3>0$ such that for any $x\in \mathbb{C}^{[n]^d}$, any integer $k\geq 1$ and any $S\subseteq {[n]^d}$ such that $||x_{{[n]^d}\setminus S}||_\infty\leq C'\mu$, where $\mu=||x_{{[n]^d}\setminus [k]}||_2/\sqrt{k}$, the following conditions hold.
Let $\pi_r=(\Sigma_r, q_r), r=1,\ldots, r_{max}$ denote permutations, and let $H_r=(\pi_r, B, F)$, $F\geq 2d$, where $B\geq (2\pi)^{4d\cdot \fc} k/\alpha^d$ for $\alpha\in (0, 1)$ smaller than a constant. Let $S^*\subseteq S$ denote the set of elements that are not isolated with respect to at least a $\sqrt{\alpha}$ fraction of hashings $\{H_r\}$. Then if $r_{max}, c_{max}\geq (C_1/\sqrt{\alpha})\log\log N$, then with probability at least $1-1/\log^2 N$ over the randomness of the measurements for all $\chi\in \mathbb{C}^{[n]^d}$ such that $x':=x-\chi$ satisfies $||x'||_\infty/\mu\leq N^{O(1)}$ one has
$$
L:=\bigcup_{r=1}^{r_{max}}\textsc{LocateSignal}\left(\chi, k, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{r=1, a\in {\mathcal{A}}_r, \mathbf{w}\in \H}^{r_{max}}\right)
$$
satisfies
$$
||x'_{S\setminus S^*\setminus L}||_1\leq (C_2\alpha)^{d/2} ||x'_S||_1+C_3^{d^2}(||\chi_{{[n]^d}\setminus S}||_1+||x'_{S^*}||_1)+4\mu |S|.
$$
}
\begin{proof}
First note that with probability at least $1-1/(10\log^2 N)$ for every $s\in [1:d]$ the sets ${\mathcal{A}}_r\star (\mathbf{0}, \mathbf{e}_s)$ are balanced (as per Definition~\ref{def:balance}) for all $r=1,\ldots, r_{max}$ and all $\mathbf{w}\in \H$ by Claim~\ref{cl:balanced}.
By Corollary~\ref{cor:loc} applied with $S'=S\setminus S^*$ one has
$$
||(x-\chi)_{(S\setminus S^*)\setminus L}||_1\leq 20 \cdot (||e^{head}_{S\setminus S^*}(\{H_r\}, x')||_1+||e^{tail}(\{H_r, {\mathcal{A}}_r\}, x)||_1)+||x'||_\infty |S|\cdot N^{-\Omega(c)}.
$$
We also have
$$
||e^{head}_{S\setminus S^*}(\{H_r\}, x')||_1\leq 2^{O(d)}\alpha^{d/2} ||x'_S||_1+(2\pi)^{d\cdot \fc} \cdot 2^{O(d)} ||\chi_{{[n]^d}\setminus S}||_1
$$
by Lemma~\ref{lm:eh-s-sstar} and with probability at least $1-1/(10\log^2 N)$
\begin{equation*}
||e^{tail}_{S\setminus S^*}(\{H_r, {\mathcal{A}}_r\}, x)||_1\leq (2\pi)^{d\cdot \fc} C^d (40+|\H|2^{-\Omega(c_{max})}) ||x_{{[n]^d} \setminus [k]}||_2|S|/\sqrt{B}
\end{equation*}
by Lemma~\ref{lm:loc-tail-small}. The rhs of the previous equation is bounded by $|S|\mu$ by the choice of $B$, as long as $\alpha$ is smaller than an absolute constant, as required.
Putting these bounds together and using the fact that $|\H|\leq \log N$ (so that $|\H|\cdot (2^{-\Omega(r_{max})}+2^{-\Omega(c_{max})})\leq 1$), and taking a union bound over the failure events, we get the result.
\end{proof}
\section{$\ell_\infty/\ell_2$ guarantees and constant SNR case}
In this section we state and analyze our algorithm for obtaining $\ell_\infty/\ell_2$ guarantees in $\tilde O(k)$ time, as well as a procedure for recovery under the assumption of bounded $\ell_1$ norm of heavy hitters (which is very similar to the \textsc{RecoverAtConstSNR} procedure used in \cite{IKP}).
\subsection{$\ell_\infty/\ell_2$ guarantees}\label{sec:linf}
The algorithm is given as Algorithm~\ref{alg:inf-norm-reduction}.
\begin{algorithm}
\caption{\textsc{ReduceInfNorm}($\hat x, \chi, k, \nu, R^*, \mu$)}\label{alg:inf-norm-reduction}
\begin{algorithmic}[1]
\Procedure{ReduceInfNorm}{$\hat x, \chi, k, \nu, R^*, \mu$}
\State $\chi^{(0)} \gets 0$ \Comment{in $\mathbb{C}^{{[n]^d}}$}
\State $B\gets (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$ for a small constant $\alpha>0$
\State $T\gets \log_2 R^*$
\State $r_{max}\gets (C/\sqrt{\alpha})\log N$ for sufficiently large constant $C>0$
\State $\H\gets \{\mathbf{0}_d\}$, $\Delta\gets 2^{\lfloor \frac1{2}\log_2 \log_2 n\rfloor}$ \Comment{$\mathbf{0}_d$ is the zero vector in dimension $d$}
\For {$g=1$ to $\lceil \log_\Delta n \rceil$}
\State $\H\gets \H\cup \bigcup_{s=1}^d \{n \Delta^{-g} \cdot \mathbf{e}_s\}$ \Comment{$\mathbf{e}_s$ is the unit vector in direction $s$}
\EndFor
\State $G\gets$ filter with $B$ buckets and sharpness $F$, as per Lemma~\ref{lm:filter-prop}
\For {$r=1$ to $r_{max}$}\Comment{Samples that will be used for location}
\State Choose $\Sigma_r\in \mathcal{M}_{d\times d}$, $q_r\in {[n]^d}$ uniformly at random, let $\pi_r:=(\Sigma_r, q_r)$ and let $H_r:=(\pi_r, B, F)$
\State Let ${\mathcal{A}}_r\gets $ $C\log\log N$ elements of ${[n]^d}\times {[n]^d}$ sampled uniformly at random with replacement
\For {$\mathbf{w}\in \H$}
\State $m(\widehat{x}, H_r, a\star(\mathbf{1}, \mathbf{w}))\gets \Call{HashToBins}{\hat x, 0, (H_r, a\star (\mathbf{1}, \mathbf{w}))}$ for all $a\in {\mathcal{A}}_r$
\EndFor
\EndFor
\For{$t=0$ to $T-1$} \Comment{Locating elements of the residual that pass a threshold test}
\For{$r=1$ to $r_{max}$}
\State $L_r \gets \textsc{LocateSignal}\left(\chi^{(t)}, k, \{m(\widehat{x}, H_r, a\star (\mathbf{1}, \mathbf{w}))\}_{r=1, a\in {\mathcal{A}}_r, \mathbf{w}\in \H}^{r_{max}}\right)$
\EndFor
\State $L\gets \bigcup_{r=1}^{r_{max}} L_r$
\State $\chi' \gets \Call{EstimateValues}{\hat x, \chi^{(t)}, L, k, 1, O(\log n), 5(\nu 2^{T-(t+1)}+\mu), \infty}$
\State $\chi^{(t+1)} \gets \chi^{(t)} + \chi'$
\EndFor
\State \textbf{return} $\chi^{(T)}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{lemma}\label{lm:linf}
For any $x, \chi\in \mathbb{C}^{{[n]^d}}$, $x'=x-\chi$, any integer $k\geq 1$, if parameters $\nu$ and $\mu$ satisfy $\nu\geq ||x'_{[k]}||_1/k$, $\mu^2\geq ||x'_{{[n]^d}\setminus [k]}||_2^2/k$, then the following conditions hold. If $S\subseteq {[n]^d}$ is the set of top $k$ elements of $x'$ in terms of absolute value, and $||x'_{{[n]^d}\setminus S}||_\infty\leq \nu$, then the output $\chi\in \mathbb{C}^{[n]^d}$ of a call to \Call{ReduceInfNorm}{$\widehat{x}, \chi, k, \nu, R^*, \mu$} with probability at least $1-N^{-10}$ over the randomness used in the call satisfies
$$
||x'-\chi||_\infty\leq 8(\nu+\mu)+O(||x'||_\infty/N^c), \text{~~~~~(all elements in $S$ have been reduced to about $\nu+\mu$)},
$$
where the $O(||x'||_\infty/N^c)$ term corresponds to polynomially small error in our computation of the semi-equispaced Fourier transform. Furthermore, we have $\chi_{{[n]^d}\setminus S}\equiv 0$.
The number of samples used is bounded by $2^{O(d^2)} k\log^3 N$. The runtime is bounded by $2^{O(d^2)} k \log^{d+3} N$.
\end{lemma}
\begin{proof}
We prove by induction on $t$ that with probability at least $1-N^{-10}$ one has for each $t=0,\ldots, T-1$
\begin{description}
\item[{\bf (1)}] $||(x'-\chi^{(t)})_S||_\infty\leq 8(\nu 2^{T-t}+\mu)$
\item[{\bf (2)}] $\chi^{(t)}_{{[n]^d}\setminus S}\equiv 0$
\item[{\bf (3)}] $|(x'-\chi^{(t)})_i|\leq |x'_i|$ for all $i\in {[n]^d}$
\end{description}
for all such $t$.
The {\bf base case} $t=0$ holds trivially. We now prove the {\bf inductive step}. First, since $r_{max}=C\log N$ for a constant $C>0$, we have by Lemma~\ref{lm:good-prob} that each $i\in S$ is isolated under at least a $1-\sqrt{\alpha}$ fraction of hashings $H_1,\ldots, H_{r_{max}}$ with probability at least $1-2^{-\Omega(\sqrt{\alpha}r_{max})}\geq 1- N^{-10}$ as long as $C>0$ is sufficiently large. This lets us invoke Lemma~\ref{lm:eh-s-sstar} with $S^*=\emptyset$. We now use Lemma~\ref{lm:eh-s-sstar} to obtain bounds on the functions $e^{head}$ and $e^{tail}$ applied to our hashings $\{H_r\}$ and vector $x'$. Note that $e^{head}$ and $e^{tail}$ are defined in terms of a set $S\subseteq {[n]^d}$ (this dependence is not made explicit to alleviate notation); here we take $S$ to be the set of top $k$ elements of $x'$, as in the statement of the lemma. The inductive hypothesis together with the second part of Lemma~\ref{lm:eh-s-sstar} gives
$$
||e^{head}_S(\{H_r\}, x', \chi^{(t)})||_\infty\leq (C\alpha)^{d/2} ||(x'-\chi^{(t)})_S||_\infty.
$$
To bound the effect of tail noise, we invoke the second part of Lemma~\ref{lm:loc-tail-small}, which states that if $r_{max}=C\log N$ for a sufficiently large constant $C>0$, we have
$||e^{tail}_S(\{H_r, {\mathcal{A}}_r\}, x')||_\infty=O(\sqrt{\alpha} \mu)$.
These two facts together imply by the second claim of Corollary~\ref{cor:loc} that each $i\in S$ such that
$$
|(x'-\chi^{(t)})_i|\geq 20\sqrt{\alpha} ||(x'-\chi^{(t)})_S||_\infty+20\sqrt{\alpha} \mu
$$
is located. In particular, by the inductive hypothesis this means that every $i\in S$ such that
$$
|(x'-\chi^{(t)})_i|\geq 20\sqrt{\alpha} (\nu 2^{T-t}+2\mu)+4\mu
$$
is located and reported in the list $L$. This means that
$$
||(x'-\chi^{(t)})_{{[n]^d}\setminus L}||_\infty\leq 20\sqrt{\alpha} (\nu 2^{T-t}+2\mu)+4\mu,
$$
and hence it remains to show that each such element in $L$ is properly estimated in the call to \textsc{EstimateValues}, and that no elements outside of $S$ are updated.
We first bound estimation quality. Note that by part {\bf (3)} of the inductive hypothesis together with Lemma~\ref{lm:estimate-l1l2}, {\bf (1)}, one has for each $i\in L$
$$
{\bf \mbox{\bf Pr}}[|\chi'_i- (x'-\chi^{(t)})_i|>\sqrt{\alpha}\cdot (\nu+\mu)]<2^{-\Omega(r_{max})}<N^{-10},
$$
as long as $r_{max}\geq C\log N$ for a sufficiently large constant $C>0$.
This means that all elements in the list $L$ are estimated up to an additive $(\nu+\mu)/10\leq (\nu 2^{T-t}+\mu)/10$ term as long as $\alpha$ is smaller than an absolute constant. Putting the bounds above together proves part {\bf (1)} of the inductive step.
To prove parts {\bf (2)} and {\bf (3)} of the inductive step, recall that the only elements $i\in {[n]^d}$ that are updated are those satisfying $|\chi'_i|\geq 5(\nu 2^{T-(t+1)}+\mu)$. By the triangle inequality and the bound on the additive estimation error above, every such element satisfies
$$
|(x'-\chi^{(t)})_i|\geq 5(\nu 2^{T-(t+1)}+\mu)-(\nu +\mu)/10> 4(\nu 2^{T-(t+1)}+\mu)\geq 4(\nu+\mu).
$$
Since $|(x'-\chi^{(t)})_i|\leq |x'_i|$ by part {\bf (3)} of the inductive hypothesis, only elements $i\in {[n]^d}$ with $|x'_i|\geq 4(\nu+\mu)$ are updated, and those belong to $S$ since $||x'_{{[n]^d}\setminus S}||_\infty\leq \nu$ by assumption of the lemma. This proves part {\bf (2)} of the inductive step. Part {\bf (3)} of the inductive step follows since
$|(x'-\chi^{(t)}-\chi')_i|\leq (\nu +\mu)/10$ by the additive error bounds above, and $|(x'-\chi^{(t)})_i|>4(\nu+\mu)$. This completes the proof of the inductive step and the proof of correctness.
\paragraph{Sample complexity and runtime} Since \textsc{HashToBins} uses $B\cdot F^d$ samples by Lemma~\ref{l:hashtobins}, the sample complexity of location is bounded by
$$
B\cdot F^d\cdot r_{max}\cdot c_{max}\cdot |\H|=2^{O(d^2)} k\log^3 N.
$$
Each call to \textsc{EstimateValues} uses $B\cdot F^d\cdot r_{max}$ samples, and there are $O(\log N)$ such calls overall, resulting in sample complexity of
$$
B\cdot F^d \cdot r_{max}\cdot \log N=2^{O(d^2)} k\log^2 N.
$$
Thus, the sample complexity is bounded by $2^{O(d^2)} k\log^3 N$. The runtime bound follows analogously.
\end{proof}
\section{Preliminaries}\label{sec:prelim}
For a positive even integer $a$ we will use the notation $[a]=\{-\frac{a}{2}, -\frac{a}{2}+1, \ldots, -1, 0, 1,\ldots, \frac{a}{2}-1\}$. We will consider signals of length $N=n^d$, where $n$ is a power of $2$ and $d\geq 1$ is the dimension. We use the notation $\omega=e^{2\pi i/n}$ for the root of unity of order $n$. The $d$-dimensional forward and inverse Fourier transforms are given by
\begin{equation}\label{eq:dft-forward}
\hat x_{j}=\frac1{\sqrt{N}}\sum_{i\in {[n]^d}} \omega^{-i^Tj}x_i \text{~~and~~}x_{j}=\frac1{\sqrt{N}}\sum_{i\in {[n]^d}} \omega^{i^Tj}\hat x_i
\end{equation}
respectively, where $j\in {[n]^d}$. We will denote the forward Fourier transform by $\mathcal{F}$.
Note that we use the orthonormal version of the Fourier transform, so that $||\hat x||_2=||x||_2$ for all $x\in \mathbb{C}^N$ (Parseval's identity). We assume that the input signal has entries of polynomial precision and range.
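The normalization convention can be checked directly; the sketch below implements the one-dimensional ($d=1$, so $N=n$) orthonormal DFT from~\eqref{eq:dft-forward} and verifies Parseval's identity on a small illustrative input:

```python
import cmath

# Orthonormal DFT in d = 1: hat_x_j = N^{-1/2} * sum_i w^{-i*j} x_i,
# with w = e^{2 pi i / n}, matching the forward transform in the text.
def dft(x):
    n = len(x)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(x[i] * w ** (-i * j) for i in range(n)) / n ** 0.5
            for j in range(n)]

x = [1.0, 2.0, 0.0, -1.0]
hx = dft(x)
energy_x = sum(abs(v) ** 2 for v in x)    # ||x||_2^2
energy_hx = sum(abs(v) ** 2 for v in hx)  # ||hat x||_2^2, equal by Parseval
```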
Given access to samples of $\widehat{x}$, we recover a signal $z$ such that
\begin{equation*}
||x-z||_2\leq (1+{\epsilon})\min_{k-\text{~sparse~} y} ||x-y||_2.
\end{equation*}
We will use pseudorandom spectrum permutations, which we now define. We write $\mathcal{M}_{d\times d}$ for the set of $d\times d$ matrices over $\mathbb{Z}_n$ with odd determinant.
For $\Sigma\in \mathcal{M}_{d\times d}, q\in {[n]^d}$ and $i\in {[n]^d}$ let
$\pi_{\Sigma, q}(i)=\Sigma(i-q) \mod n$.
Since $\Sigma\in \mathcal{M}_{d\times d}$, this is a permutation. Our algorithm will use $\pi$ to hash heavy hitters into $B$ buckets, where we will choose $B\approx k$. We will often omit the subscript $\Sigma, q$ and simply write $\pi(i)$ when $\Sigma, q$ is fixed or clear from context. For $i, j\in {[n]^d}$ we let $o_i(j)= \pi(j) - (n/b) h(i)$ be the ``offset'' of $j\in {[n]^d}$ relative to $i\in {[n]^d}$, where $h(i)\in [b]^d$ denotes the bucket that $i$ is hashed into (note that this definition is different from the one in~\cite{IK14a}). We will always have $B=b^d$, where $b$ is a power of $2$.
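In dimension $d=1$ the matrix $\Sigma$ is simply an odd scalar, which makes the bijectivity of $\pi_{\Sigma, q}$ easy to see concretely. A minimal sketch (values illustrative):

```python
# Spectrum permutation pi_{Sigma,q}(i) = Sigma (i - q) mod n in d = 1,
# where "odd determinant" just means Sigma is odd.
def perm(i, sigma, q, n):
    return (sigma * (i - q)) % n

n = 16                      # n a power of 2, as in the text
sigma, q = 5, 3             # odd sigma is invertible mod n
image = sorted(perm(i, sigma, q, n) for i in range(n))
# invertibility of sigma mod n makes pi a bijection on [n],
# so the sorted image is exactly 0, 1, ..., n-1
```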
\begin{definition}
Suppose that $\Sigma^{-1}$ exists $\bmod~n$. For $a, q\in {[n]^d}$ we define the
permutation $P_{\Sigma, a, q}$ by $(P_{\Sigma, a, q}\hat x)_i=\hat x_{\Sigma^T(i-a)} \omega^{i^T\Sigma q}$.
\end{definition}
\begin{lemma}\label{lm:perm}
$\mathcal{F}^{-1}({P_{\Sigma, a, q} \hat x})_{\pi_{\Sigma, q}(i)}=x_i \omega^{a^T \Sigma i}$
\end{lemma}
The proof is given in~\cite{IK14a} and we do not repeat it here.
Define
\begin{equation}\label{eq:mu-def}
\begin{split}
\err_k(x)=\min_{k-\text{sparse}~y} ||x-y||_2\text{~~and~~}\mu^2=\err_k^2(x)/k.
\end{split}
\end{equation}
In this paper, we assume knowledge of $\mu$ (a constant factor upper bound on $\mu$ suffices). We also assume that the signal-to-noise ratio is bounded by a polynomial, namely
that $R^*:=||x||_\infty/\mu\leq N^{O(1)}$.
We use the notation $\mathbb{B}^\infty_{r}(x)$ to denote the $\ell_\infty$ ball of radius $r$ around $x$:
$\mathbb{B}^\infty_{r}(x)=\{y\in {[n]^d}: ||x-y||_\infty\leq r\}$, where $||x-y||_\infty=\max_{s\in [d]} ||x_s-y_s||_{\circ}$, and $||x_s-y_s||_{\circ}$ is the circular distance on $\mathbb{Z}_n$.
We will also use the notation $f\lesssim g$ to denote $f=O(g)$. For a real number $a$ we write $|a|_+$ to denote the positive part of $a$, i.e. $|a|_+=a$ if $a\geq 0$ and $|a|_+=0$ otherwise.
We will use the filter $G, \hat G$ constructed in~\cite{IK14a}. The filter is defined by a parameter $F\geq 1$ that governs its decay properties. The filter satisfies $\supp \hat G\subseteq [-F\cdot b, F\cdot b]^d$, and its key properties are summarized in the following lemma.
\begin{lemma}[Lemma~3.1 in~\cite{IK14a}]\label{lm:filter-prop}
One has {\bf (1)} $G_j\in [\frac1{(2\pi)^{F \cdot d}}, 1]$ for all $j\in {[n]^d}$ such that $||j||_\infty\leq \frac{n}{2b}$ and {\bf (2)} $|G_j|\leq \left(\frac2{1+(b/n)||j||_\infty}\right)^{F}$ for all $j\in {[n]^d}$
as long as $b\geq 3$ and {\bf (3)} $G_j\in [0, 1]$ for all $j$ as long as $F$ is even.
\end{lemma}
\begin{remark}
Property {\bf (3)} was not stated explicitly in Lemma~3.1 of~\cite{IK14a}, but follows directly from their construction.
\end{remark}
The properties above imply that most of the mass of the filter is concentrated in a square of side $O(n/b)$, approximating the ``ideal'' filter (whose value would be equal to $1$ for entries within the square and equal to $0$ outside of it).
Note that for each $i\in {[n]^d}$ one has $|G_{o_i(i)}|\geq \frac1{(2\pi)^{d\cdot \fc}}$. We refer to the parameter $F$ as the {\em sharpness} of the filter. Our hash functions are not pairwise independent, but possess a property that still makes hashing using our filters efficient:
\begin{lemma}[Lemma~3.2 in~\cite{IK14a}]\label{lemma:limitedindependence}
Let $i, j\in {[n]^d}$. Let $\Sigma$ be uniformly random with odd determinant. Then for all $t\geq 0$ one has $
\Pr[||\Sigma(i-j)||_\infty \leq t] \leq 2(2t/n)^d$.
\end{lemma}
Pseudorandom spectrum permutations combined with a filter $G$ give us the ability to `hash' the elements of the input signal into a number of buckets (denoted by $B$). We formalize this using the notion of a {\em hashing}. A hashing is a tuple consisting of a pseudorandom spectrum permutation $\pi$, target number of buckets $B$ and a sharpness parameter $F$ of our filter, denoted by $H=(\pi, B, F)$. Formally, $H$ is a function that maps a signal $x$ to $B$ signals, each corresponding to a hash bucket, allowing us to solve the $k$-sparse recovery problem on input $x$ by reducing it to $1$-sparse recovery problems on the bucketed signals. We give the formal definition below.
\begin{definition}[Hashing $H=(\pi, B, F)$]
For a permutation $\pi=(\Sigma, q)$, parameters $b>1$, $B=b^d$ and $F$, a {\em hashing} $H:=(\pi, B, F)$ is a function mapping a signal $x\in \mathbb{C}^{{[n]^d}}$ to $B$ signals $H(x)=(u_s)_{s\in [b]^d}$, where $u_s\in \mathbb{C}^{{[n]^d}}$ for each $s\in [b]^d$, such that for each $i\in {[n]^d}$
$$
u_{s, i} = \sum_{j\in {[n]^d}} G_{\pi(j)-(n/b)\cdot s}x_j \omega^{i^T \Sigma j}\in \mathbb{C},
$$
where $G$ is a filter with $B$ buckets and sharpness $F$ constructed in Lemma~\ref{lm:filter-prop}.
\end{definition}
For a hashing $H=(\pi, B, F), \pi=(\Sigma, q)$ we sometimes write $P_{H, a}, a\in {[n]^d}$ to denote $P_{\Sigma, a, q}$. We will consider hashings of the input signal $x$, as well as the residual signal $x-\chi$.
\begin{definition}[Measurement $m=m(x, H, a)$]
For a signal $x\in \mathbb{C}^{{[n]^d}}$, a hashing $H=(\pi, B, F)$ and a parameter $a\in {[n]^d}$, a {\em measurement} $m=m(x, H, a)\in \mathbb{C}^{[b]^d}$ is the $B$-dimensional complex-valued vector, indexed by $[b]^d$, obtained by evaluating the hashing $H(x)$ at $a\in {[n]^d}$, i.e. for $s\in [b]^d$
$$
m_s = \sum_{j\in {[n]^d}} G_{\pi(j)-(n/b)\cdot s}x_j \omega^{a^T \Sigma j},
$$
where $G$ is a filter with $B$ buckets and sharpness $F$ constructed in Lemma~\ref{lm:filter-prop}.
\end{definition}
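A measurement can be computed directly from this formula. The sketch below does so in $d=1$ for a single-spike signal; the boxcar filter $G$ here is a crude hypothetical stand-in for the filter of Lemma~\ref{lm:filter-prop}, and all parameter values are illustrative:

```python
import cmath

# Measurement m_s = sum_j G_{pi(j) - (n/b) s} x_j w^{a Sigma j} in d = 1.
n, b = 8, 2                  # B = b^d = 2 buckets
sigma, q, a = 1, 0, 5        # permutation and evaluation point (illustrative)
w = cmath.exp(2j * cmath.pi / n)

def G(j):                    # idealized "boxcar" filter: 1 on a window of
    jc = min(j % n, n - j % n)   # circular width ~ n/b around 0, else 0
    return 1.0 if jc <= n // (2 * b) else 0.0

def pi(j):                   # pi_{Sigma,q}(j) = Sigma (j - q) mod n
    return (sigma * (j - q)) % n

x = [0.0] * n
x[3] = 1.0                   # a single spike at j0 = 3

m = [sum(G(pi(j) - (n // b) * s) * x[j] * w ** (a * sigma * j)
         for j in range(n))
     for s in range(b)]
# the spike contributes (with unit magnitude) only to the bucket whose
# window contains pi(3) = 3, i.e. bucket s = 1
```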
\begin{definition}
For any $x\in \mathbb{C}^{[n]^d}$ and any hashing $H=(\pi, B, F)$ define the vector $\mu^2_{H, \cdot}(x)\in \mathbb{R}^{[n]^d}$ by letting for every $i\in {[n]^d}$
$$\mu^2_{H, i}(x):=\abs{G_{o_i(i)}^{-1}}^2\sum_{j \in {[n]^d}\setminus \{i\}} \abs{x_j}^2 \abs{G_{o_i(j)}}^2.$$
\end{definition}
We access the signal $x$ in Fourier domain via the function $\Call{HashToBins}{\hat x, \chi, (H, a)}$, which evaluates the hashing $H$ of residual signal $x-\chi$ at point $a\in {[n]^d}$, i.e. computes the measurement $m(x, H, a)$ (the computation is done with polynomial precision). One can view this function as ``hashing'' $x$ into $B$ bins by convolving it with the filter $G$ constructed above and subsampling appropriately. The pseudocode for this function is given in section~\ref{sec:hash2bins}. In what follows we will use the following properties of \textsc{HashToBins}:
\begin{lemma}\label{lm:hashing}
There exists a constant $C>0$ such that for any dimension $d\geq 1$, any integer $B\geq 1$, any $x, \chi\in \mathbb{C}^{[n]^d}, x':=x-\chi$, if $\Sigma\in \mathcal{M}_{d\times d}, a, q\in {[n]^d}$ are selected uniformly at random, the following conditions hold.
Let $\pi=(\Sigma, q)$, $H=(\pi, B, F)$, where $G$ is the filter with $B$ buckets and sharpness $F$ constructed in Lemma~\ref{lm:filter-prop}, and let
$u =\Call{HashToBins}{\hat x, \chi, (H, a)}$. Then if $F\geq 2d, F=\Theta(d)$, for any $i\in {[n]^d}$
\begin{description}
\item[(1)] For any $H$ one has $\max_{a\in {[n]^d}} \abs{G_{o_i(i)}^{-1}\omega^{-a^T\Sigma i}u_{h(i)} - x'_i}\leq G_{o_i(i)}^{-1}\cdot \sum_{j\in {[n]^d}\setminus \{i\}} G_{o_i(j)} |x'_j|$. Furthermore,
${\bf \mbox{\bf E}}_H[G_{o_i(i)}^{-1}\cdot \sum_{j\in {[n]^d}\setminus \{i\}} G_{o_i(j)} |x'_j|]\leq (2\pi)^{d\cdot \fc}\cdot C^d ||x'||_1/B+N^{-\Omega(c)}$;
\item[(2)] ${\bf \mbox{\bf E}}_H[\mu^2_{H, i}(x')] \leq (2\pi)^{2d\cdot \fc}\cdot C^d \norm{2}{x'}^2/B$,
\end{description}
Furthermore,
\begin{description}
\item[(3)] for any hashing $H$, if $a$ is chosen uniformly at random from ${[n]^d}$, one has
$$
{\bf \mbox{\bf E}}_{a}[\abs{G_{o_i(i)}^{-1}\omega^{-a^T\Sigma i}u_{h(i)} - x'_i}^2]\leq \mu^2_{H, i}(x')+N^{-\Omega(c)}.
$$
\end{description}
Here $c>0$ is an absolute constant that can be chosen arbitrarily large at the expense of a factor of $c^{O(d)}$ in runtime.
\end{lemma}
The proof of Lemma~\ref{lm:hashing} is given in Appendix~\ref{app:A}. We will need several definitions and lemmas from~\cite{IK14a}, which we state here. We sometimes need slight modifications of the corresponding statements from~\cite{IK14a}, in which case we provide proofs in Appendix~\ref{app:A}.
Throughout this paper the main object of our analysis is a properly defined set $S\subseteq {[n]^d}$ that contains the ``large'' coefficients of the input vector $x$. Below we state our definitions and auxiliary lemmas without specifying the identity of this set, and then use specific instantiations of $S$ to analyze outer primitives such as \textsc{ReduceL1Norm}, \textsc{ReduceInfNorm} and \textsc{RecoverAtConstSNR}. This is convenient because the analysis of all of these primitives can then use the same basic claims about estimation and location primitives. The definition of $S$ given in~\eqref{eq:s-def-l1} above is the one we use for analyzing \textsc{ReduceL1Norm} and the SNR reduction loop. Analysis of \textsc{ReduceInfNorm} (section~\ref{sec:linf}) and \textsc{RecoverAtConstSNR} (section~\ref{sec:const-snr}) use different instantiations of $S$, but these are local to the corresponding sections, and hence the definition in~\eqref{eq:s-def-l1} is the best one to have in mind for the rest of this section.
First, we need the definition of an element $i\in {[n]^d}$ being isolated under a hashing $H=(\pi, B, F)$. Intuitively, an element $i\in S$ is isolated under hashing $H$ with respect to set $S$ if not too many other elements $S$ are hashed too close to $i$. Formally, we have
\begin{definition}[Isolated element]\label{def:isolated}
Let $H=(\pi, B, F)$, where $\pi=(\Sigma, q)$, $\Sigma\in \mathcal{M}_{d\times d}, q\in {[n]^d}$. We say that an element $i\in {[n]^d}$ is {\em isolated} under hashing $H$ {\em at scale $t$} if
$$
|\pi(S\setminus \{i\})\cap \mathbb{B}^\infty_{(n/b)\cdot 2^t}((n/b)\cdot h(i))|\leq (2\pi)^{-d\cdot \fc}\cdot \alpha^{d/2} 2^{(t+1)d}\cdot 2^{t}.
$$
We say that $i$ is simply {\em isolated} under hashing $H$ if it is isolated under $H$ at all scales $t\geq 0$.
\end{definition}
The following lemma shows that any element $i\in S$ is likely to be isolated under a random permutation $\pi$:
\begin{lemma}\label{lm:isolated-pi}
For any integer $k\geq 1$ and any $S\subseteq {[n]^d}, |S|\leq 2k$, if $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$ for $\alpha\in (0, 1)$ smaller than an absolute constant, $F\geq 2d$, and a hashing $H=(\pi, B, F)$ is chosen randomly (i.e. $\Sigma\in \mathcal{M}_{d\times d}, q\in {[n]^d}$ are chosen uniformly at random, and $\pi=(\Sigma, q)$), then each $i\in {[n]^d}$ is {\em isolated} under hashing $H$ with probability at least $1-\frac1{2}\sqrt{\alpha}$.
\end{lemma}
The proof of the lemma is very similar to Lemma~5.4 in~\cite{IK14a} (the only difference is that the $\ell_\infty$ ball is centered at the point that $i$ hashes to in Lemma~\ref{lm:isolated-pi}, whereas it was centered at $\pi(i)$ in Lemma~5.4 of~\cite{IK14a}) and is given in Appendix~\ref{app:A} for completeness.
As every element $i\in S$ is likely to be isolated under one random hashing, it is very likely to be isolated under a large fraction of hashings $H_1,\ldots, H_{r_{max}}$:
\begin{lemma}\label{lm:good-prob}
For any integer $k\geq 1$, and any $S\subseteq {[n]^d}, |S|\leq 2k$, if $B\geq (2\pi)^{4d\cdot \fc}\cdot k/\alpha^d$ for $\alpha\in (0, 1)$ smaller than an absolute constant, $F\geq 2d$, and $H_r=(\pi_r, B, F)$, $r=1,\ldots, r_{max}$, is a sequence of random hashings, then every $i\in {[n]^d}$ is isolated with respect to $S$ under at least $(1-\sqrt{\alpha})r_{max}$ hashings $H_r, r=1,\ldots, r_{max}$ with probability at least $1-2^{-\Omega(\sqrt{\alpha}r_{max})}$.
\end{lemma}
\begin{proof}
Follows by an application of Chernoff bounds and Lemma~\ref{lm:isolated-pi}.
\end{proof}
It is convenient for our location primitive (\textsc{LocateSignal}, see Algorithm~\ref{alg:location}) to sample the signal at pairs of locations chosen randomly (but in a correlated fashion). The two points are then combined into one in a linear fashion. We now define notation for this common operation on pairs of numbers in ${[n]^d}$. Note that we are viewing pairs in ${[n]^d}\times {[n]^d}$ as vectors in dimension $2$, and the $\star$ operation below is just the dot product over this two dimensional space. However, since our input space is already endowed with a dot product (for $i, j\in {[n]^d}$ we denote their dot product by $i^Tj$), having special notation here will help avoid confusion.
\paragraph{Operations on vectors in ${[n]^d}$.} For a pair of vectors $(\alpha_1, \beta_1), (\alpha_2, \beta_2)\in {[n]^d}\times {[n]^d}$ we let $(\alpha_1, \beta_1)\star (\alpha_2, \beta_2)$ denote the vector $\gamma\in {[n]^d}$ such that
$$
\gamma_i=(\alpha_1)_i\cdot (\alpha_2)_i+(\beta_1)_i\cdot (\beta_2)_i\text{~~~for all ~}i\in [d].
$$
Note that for any $a, b, c\in {[n]^d}\times {[n]^d}$ one has $a\star b+a\star c=a\star (b+c)$, where addition for elements of ${[n]^d}\times {[n]^d}$ is componentwise.
We write $\mathbf{1}\in {[n]^d}$ for the all ones vector in dimension $d$, and $\mathbf{0}\in {[n]^d}$ for the zero vector. For a set ${\mathcal{A}}\subseteq {[n]^d}\times {[n]^d}$ and a vector $(\alpha, \beta)\in {[n]^d}\times {[n]^d}$ we denote
$$
{\mathcal{A}}\star (\alpha, \beta):=\{a\star(\alpha, \beta): a\in {\mathcal{A}}\}.
$$
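As a sanity check, the $\star$ operation and the distributivity identity above can be transcribed directly (a sketch: arithmetic in ${[n]^d}$ is taken modulo $n$, which the text leaves implicit; $d$-dimensional points are tuples):

```python
n, d = 16, 3

def star(a, b):
    """(alpha1, beta1) * (alpha2, beta2): dot product over the 2-dim pair space,
    applied componentwise in [n]^d."""
    (a1, b1), (a2, b2) = a, b
    return tuple((a1[i] * a2[i] + b1[i] * b2[i]) % n for i in range(d))

def add(u, v):
    """Componentwise addition on [n]^d x [n]^d."""
    return (tuple((u[0][i] + v[0][i]) % n for i in range(d)),
            tuple((u[1][i] + v[1][i]) % n for i in range(d)))

a = ((1, 2, 3), (4, 5, 6))
b = ((7, 8, 9), (10, 11, 12))
c = ((13, 14, 15), (0, 1, 2))
# the identity used in the text: a*b + a*c = a*(b+c)
lhs = tuple((star(a, b)[i] + star(a, c)[i]) % n for i in range(d))
assert lhs == star(a, add(b, c))
```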
\begin{definition}[Balanced set of points]\label{def:balance}
For an integer $\Delta\geq 2$ we say that a (multi)set $\mathcal{Z}\subseteq {[n]^d}$ is {\em $\Delta$-balanced} in coordinate $s\in [1:d]$ if for every $r=1,\ldots, \Delta-1$ at least $49/100$ fraction of elements in the set $\{\omega_{\Delta}^{r\cdot z_s}\}_{z\in \mathcal{Z}}$ belong to the left halfplane $\{u\in \mathbb{C}: \text{Re}(u)\leq 0\}$ in the complex plane, where $\omega_\Delta=e^{2\pi i/\Delta}$ is the $\Delta$-th root of unity.
\end{definition}
Note that if $\Delta$ divides $n$ and $z_s$ is uniformly random in $[n]$, then for every fixed $r=1,\ldots, \Delta-1$ the point $\omega_\Delta^{r\cdot z_s}$ is uniformly distributed over the $\Delta'$-th roots of unity for some $\Delta'$ between $2$ and $\Delta$. Thus for $r\neq 0$ we expect at least half the points to lie in the halfplane $\{u\in \mathbb{C}: \text{Re}(u)\leq 0\}$. A set $\mathcal{Z}$ is balanced if it does not deviate too much from this expected behavior.
The following claim is immediate via standard concentration bounds:
\begin{claim}\label{cl:balanced}
There exists a constant $C>0$ such that the following holds for $n$ a power of $2$ and any power of two $\Delta=\log^{O(1)} n$ with $\Delta<n$. If elements of a (multi)set ${\mathcal{A}}\subseteq {[n]^d}\times {[n]^d}$ of size $C\log\log N$ are chosen uniformly at random with replacement from ${[n]^d}\times {[n]^d}$, then with probability at least $1-1/\log^4 N$ one has that for every $s\in [1:d]$ the set ${\mathcal{A}}\star (\mathbf{0}, \mathbf{e}_s)$ is $\Delta$-balanced in coordinate $s$.
\end{claim}
Since we only use one value of $\Delta$ in the paper (see line~8 in Algorithm~\ref{alg:location}), we will usually say that a set is simply `balanced' to denote the $\Delta$-balanced property for this value of $\Delta$.
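Definition~\ref{def:balance} can be checked in exact integer arithmetic: $\text{Re}(\omega_\Delta^{m})\leq 0$ precisely when the residue $m$ lies in $[\Delta/4, 3\Delta/4]$. A minimal sketch (hypothetical helper name; $d$-dimensional points as tuples):

```python
def is_delta_balanced(Z, Delta, s):
    """Delta-balanced in coordinate s: for every r = 1..Delta-1, at least a
    49/100 fraction of the points w_Delta^{r*z_s} has Re <= 0, i.e. the
    residue m = r*z_s mod Delta satisfies Delta/4 <= m <= 3*Delta/4."""
    for r in range(1, Delta):
        good = sum(1 for z in Z if Delta <= 4 * ((r * z[s]) % Delta) <= 3 * Delta)
        if good < (49 / 100) * len(Z):
            return False
    return True

# d = 1, n = 16: residues of z_s mod 4 are equidistributed, so Z is balanced.
Z = [(z,) for z in range(16)]
assert is_delta_balanced(Z, 4, 0)
assert not is_delta_balanced([(0,)] * 8, 4, 0)   # all points sit at w^0 = 1
```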
\section{Semi-equispaced Fourier Transform}\label{sec:semiequi}
In this section we give an algorithm for computing the semi-equispaced Fourier transform, prove its correctness and give runtime bounds.
\begin{algorithm}[H]
\caption{Semi-equispaced Fourier Transform in $2^{O(d)}k \log^{d} N$ time}\label{alg:semi-fft}
\begin{algorithmic}[1]
\Procedure{SemiEquispacedFFT}{$x$, $c$}\Comment{$x \in \mathbb{C}^{{[n]^d}}$ is $k$-sparse}
\State Let $B\geq 2^d k$ be a power of $2^d$, and let $b=B^{1/d}$
\State $G, \widehat{G'} \gets $ $d$-th tensor powers of the flat window functions of~\cite{HIKP2}, see below
\State $y_i \gets \frac{1}{\sqrt{N}}(x * G)_{i\cdot \frac{n}{2b}}$ for $i \in [2b]^d$.
\State $\widehat{y} \gets \textsc{FFT}(y)$ \Comment{FFT on $[2b]^d$}
\State $\widehat{x}'_i \gets \widehat{y}_i$ for $||i||_\infty \leq b/2$.
\State \Return $\widehat{x}'$
\EndProcedure
\end{algorithmic}
\end{algorithm}
We define filters $G, \widehat{G'}$ as $d$-th tensor powers of the flat window functions of~\cite{HIKP2}, so
that $G_i = 0$ for all $||i||_\infty \gtrsim c (n/b) \log N$,
$\norm{2}{G -G'} \leq N^{-c}$,
\[
\widehat{G'}_i = \left\{
\begin{array}{cl}
1 & \text{ if } ||i||_\infty \leq b/2\\
0 & \text{ if } ||i||_\infty > b\\
\end{array}
\right.,
\]
and $\widehat{G'}_i \in [0, 1]$ everywhere.
The following is similar to results of~\cite{DR93, IKP}.
\begin{lemma}\label{l:semiequi1}
Let $n$ be a power of two, $N=n^d$, $c\geq 2$ a constant. Let $B\geq 1$ be an integer power of $2^d$ and $b=B^{1/d}$. For any $x\in \mathbb{C}^{[n]^d}$ Algorithm~\ref{alg:semi-fft} computes $\widehat{x}'_i$
for all $||i||_\infty \leq b/2$ such that
\[
\abs{\widehat{x}'_i - \widehat{x}_i} \leq \norm{2}{x} / N^c
\]
in $c^{O(d)}||x||_0 \log^{d} N+2^{O(d)}B\log B$ time.
\end{lemma}
\begin{proof}
Define
\[
z = \frac{1}{\sqrt{N}}x * G.
\]
We have that $\widehat{z}_i = \widehat{x}_i \widehat{G}_i$ for all $i\in {[n]^d}$.
Furthermore, because subsampling and aliasing are dual under the
Fourier transform, since $y_i = z_{i\cdot (n/2b)}, i\in [2b]^d$ is a subsampling of $z$ we have for
$i$ such that $||i||_\infty \leq b/2$ that
\begin{align*}
\widehat{x}'_i = \widehat{y}_i &= \sum_{j \in [n/(2b)]^d} \widehat{z}_{i + 2b\cdot j} \\
&= \sum_{j\in [n/(2b)]^d} \widehat{x}_{i + 2b \cdot j}\widehat{G}_{i+2b\cdot j} \\
&= \sum_{j\in [n/(2b)]^d } \widehat{x}_{i + 2b\cdot j}\widehat{G'}_{i+2b\cdot j} + \sum_{j\in [n/(2b)]^d} \widehat{x}_{i + 2b\cdot j}(\widehat{G}_{i+2b\cdot j} - \widehat{G'}_{i+2b\cdot j}).
\end{align*}
For the second term we have using Cauchy-Schwarz
$$
\sum_{j\in [n/(2b)]^d} \widehat{x}_{i + 2b\cdot j}(\widehat{G}_{i+2b\cdot j} -\widehat{G'}_{i+2b\cdot j})\leq ||x||_2 ||\widehat{G} -\widehat{G'}||_2\leq ||x||_2/N^c.
$$
For the first term we have
$$
\sum_{j\in [n/(2b)]^d} \widehat{x}_{i + 2b\cdot j}\widehat{G'}_{i+2b\cdot j}=\widehat{x}_i\cdot \widehat{G'}_{i+2b\cdot 0}=\widehat{x}_i
$$
for all $i\in [2b]^d$ such that $||i||_\infty\leq b/2$, since for any $j\neq 0$ the argument of $\widehat{G'}_{i+2b\cdot j}$ is larger than $b$ in $\ell_\infty$ norm, and hence $\widehat{G'}_{i+2b\cdot j}=0$ for all $j\neq 0$.
Putting these bounds together we get that
\[
\abs{\widehat{x}'_i - \widehat{x}_i} \leq \norm{2}{\widehat{x}}
\norm{2}{\widehat{G}-\widehat{G'}} \leq \norm{2}{x}N^{-c}
\]
as desired.
The time complexity of computing the FFT of $y$ is $2^{O(d)}B\log B$.
The vector $y$ can be constructed in time $c^{O(d)}||x||_0\log^d N$. This is because the support of $G_i$ is localized so that
each nonzero coordinate $i$ of $x$ only contributes to $c^{O(d)}\log^d N$
entries of $y$.
\end{proof}
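The identity at the heart of the proof, that subsampling in time aliases the spectrum, can be verified directly in one dimension. A sketch with illustrative sizes (numpy's unnormalized DFT convention, so the aliased sum carries a factor $m/n$ rather than the paper's $1/\sqrt{N}$ normalization):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 8                       # subsample z at stride n/m = 4
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = z[:: n // m]                   # y_i = z_{i * (n/m)}

# Duality: the DFT of the subsampled signal is the aliased sum of the
# original spectrum, yhat[q] = (m/n) * sum_j zhat[q + m*j].
zhat, yhat = np.fft.fft(z), np.fft.fft(y)
aliased = zhat.reshape(n // m, m).sum(axis=0) * (m / n)
assert np.allclose(yhat, aliased)
```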
We will need the following simple generalization:
\begin{corollary}\label{c:semiequi1}
Let $n$ be a power of two, $N=n^d$, $c\geq 2$ a constant, and $\Sigma\in \mathcal{M}_{d\times d}, q\in {[n]^d}$. Let integer $B\geq 1$ be a power of $2^d$, $b=B^{1/d}$.
Let $S = \{\Sigma(i - q) : i \in \mathbb{Z}^d, ||i||_\infty \leq b/2\}$. Then for any $x\in \mathbb{C}^{[n]^d}$ we can compute $\widehat{x}'_i$ for all
$i \in S$ such that
\[
\abs{\widehat{x}'_i - \widehat{x}_i} \leq \norm{2}{x} / N^c
\]
in $c^{O(d)}||x||_0 \log^d N+2^{O(d)}B\log B$ time.
\end{corollary}
\begin{proof}
Define
$x^*_j = \omega^{j^Tq}x_{\Sigma^{-T}j}$. Then for all $i \in {[n]^d}$,
\begin{align*}
\widehat{x}_{\Sigma(i-q)}
&= \frac{1}{\sqrt{N}}\sum_{j \in {[n]^d}} \omega^{-j^T\Sigma(i-q)}x_j\\
&= \frac{1}{\sqrt{N}}\sum_{j \in {[n]^d}} \omega^{-j^T\Sigma i}\omega^{j^T\Sigma q}x_j\\
&= \frac{1}{\sqrt{N}}\sum_{j'=\Sigma^{T}j \in {[n]^d}} \omega^{-(j')^Ti}\omega^{(j')^Tq}x_{\Sigma^{-T}j'}\\
&= \frac{1}{\sqrt{N}}\sum_{j'=\Sigma^{T}j \in {[n]^d}} \omega^{-(j')^Ti}x^*_{j'}\\
&= \widehat{x}^*_i.
\end{align*}
We can access $\widehat{x}^*_i$ with $O(d^2)$ overhead, so by
Lemma~\ref{l:semiequi1} we can approximate $\widehat{x}_{\Sigma(i-q)} =
\widehat{x}^*_i$ for $||i||_\infty \leq b/2$ in $c^{O(d)}||x||_0 \log^d N+2^{O(d)}B\log B$ time.
\end{proof}
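In one dimension with $\Sigma=1$, the substitution in the corollary reduces to the standard modulation/shift duality: multiplying $x_j$ by a phase shifts the spectrum. A numpy sketch (numpy's $e^{-2\pi i jk/n}$ sign convention, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 16, 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
omega = np.exp(-2j * np.pi / n)          # numpy's DFT kernel is omega^{jk}

# x*_j = omega^{-q j} x_j modulates in time ...
xstar = omega ** (-q * np.arange(n)) * x
# ... which shifts the spectrum: fft(x*)[k] = fft(x)[(k - q) mod n]
assert np.allclose(np.fft.fft(xstar), np.roll(np.fft.fft(x), q))
```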
\section{Utilities}\label{sec:utils}
\subsection{Properties of \textsc{EstimateValues}}
In this section we describe the procedure \textsc{EstimateValues} (see Algorithm~\ref{alg:est}), which, given access to a signal $x$ in frequency domain (i.e. given $\widehat{x}$), a partially recovered signal $\chi$ and a target list of locations $L\subseteq {[n]^d}$,
estimates values of the elements in $L$, and outputs the elements that are above a threshold $\nu$ in absolute value. The SNR reduction loop uses the thresholding function of \textsc{EstimateValues} and passes a nonzero threshold, while \textsc{RecoverAtConstantSNR} uses $\nu=0$.
\begin{algorithm}
\caption{\textsc{EstimateValues}($x, \chi, L, k, {\epsilon}, \nu, r_{max}$)}\label{alg:est}
\begin{algorithmic}[1]
\Procedure{EstimateValues}{$x, \chi, L, k, {\epsilon}, \nu, r_{max}$}\Comment{$r_{max}$ controls estimate confidence}
\State $B\gets (2\pi)^{4d\cdot \fc}\cdot k/({\epsilon} \alpha^{2d})$
\For {$r=0$ to $r_{max}$}
\State Choose $\Sigma_r\in \mathcal{M}_{d\times d}, q_r, z_r\in {[n]^d}$ uniformly at random
\State Let $\pi_r:=(\Sigma_r, q_r)$, $H_r:=(\pi_r, B, F), F= 2d$
\State $u_r \gets \Call{HashToBins}{\hat x, \chi, (H_r, z_r)}$
\State \Comment{Using semi-equispaced Fourier transform (Corollary~\ref{c:semiequi1})}
\EndFor
\State $L'\gets \emptyset$\Comment{Initialize output list to empty}
\For{$f\in L$}
\For{$r=0$ to $r_{max}$}
\State $j\gets h_r(f)$
\State $w^r_f\gets u_{r, j} G^{-1}_{o_f(f)}\omega^{-z_r^T \Sigma_r f}$ \Comment{Estimate $x'_f$ from each measurement}
\EndFor
\State $w_f\gets \text{median}\{w^r_f\}_{r=1}^ {r_{max}}$ \Comment{Median is taken coordinatewise}
\State {\bf If~} $|w_f|>\nu$ {\bf then~} $L'\gets L'\cup \{f\}$
\EndFor
\State \textbf{return} $w_{L'}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{lemma}[$\ell_1/\ell_2$ bounds on estimation quality]\label{lm:estimate-l1l2}
For any ${\epsilon}\in (0, 1]$, any $x, \chi\in \mathbb{C}^{{[n]^d}}, x'=x-\chi$, any $L\subseteq {[n]^d}$, any integer $k$ and any set $S\subseteq {[n]^d}, |S|\leq 2k$ the following conditions hold. If $\nu\geq ||(x-\chi)_S||_1/k$ and $\mu^2\geq ||(x-\chi)_{{[n]^d}\setminus S}||_2^2/k$, then the output $w$ of \Call{EstimateValues}{$\hat x, \chi, L, k, {\epsilon}, \nu, r_{max}$} satisfies the following bounds if $r_{max}$ is larger than an absolute constant.
For each $i\in L$
\begin{description}
\item[(1)] ${\bf \mbox{\bf Pr}}[|w_i- x'_i|>\sqrt{{\epsilon} \alpha}(\nu+\mu)]<2^{-\Omega(r_{max})}$;
\item[(2)] ${\bf \mbox{\bf E}}\left[\left||w_i- x'_i|-\sqrt{{\epsilon} \alpha}(\nu+\mu)\right|_+\right]\leq \sqrt{{\epsilon}\alpha}(\nu+\mu)2^{-\Omega(r_{max})}$;
\item[(3)] ${\bf \mbox{\bf E}}\left[\left||w_i-x'_i|^2-{\epsilon} \alpha(\nu+\mu)^2\right|_+\right]\leq 2^{-\Omega(r_{max})}{\epsilon}(\nu^2+\mu^2)$.
\end{description}
The sample complexity is bounded by $\frac1{{\epsilon}}2^{O(d^2)} k r_{max}$. The runtime is bounded by $2^{O(d^2)}\frac1{{\epsilon}}k \log^{d+1} N r_{max}$.
\end{lemma}
\begin{proof}
We analyze the vector $u_{r} \gets \Call{HashToBins}{\hat x, \chi, (H_r, z_r)}$ using the approximate linearity of \textsc{HashToBins} given by Lemma~\ref{lm:linearity} (see Appendix~\ref{app:A}). Writing $x'=x'_S+x'_{{[n]^d}\setminus S}$, we let
$$
u^{head}_r:=\Call{HashToBins}{\widehat{x_S}, \chi_S, (H_r, z_r)}\text{~~~and~~~}u^{tail}_r:=\Call{HashToBins}{\widehat{x_{{[n]^d}\setminus S}}, \chi_{{[n]^d}\setminus S}, (H_r, z_r)}.
$$
We apply Lemma~\ref{lm:hashing}, {\bf (1)} to the first vector, obtaining
\begin{equation}\label{eq:ig34gwgt43tUf}
{\bf \mbox{\bf E}}_{H_r, z_r}[\abs{G_{o_i(i)}^{-1}\omega^{-z_r^T\Sigma i}u^{head}_{h_r(i)} - (x'_S)_i}]\leq (2\pi)^{d\cdot \fc}\cdot C^d ||x'_S||_1/B+\mu/N^2.
\end{equation}
Similarly, applying Lemma~\ref{lm:hashing}, {\bf (2)} and {\bf (3)} to $u^{tail}_r$, we get
\begin{equation*}
{\bf \mbox{\bf E}}_{H_r, z_r}[\abs{G_{o_i(i)}^{-1}\omega^{-z_r^T\Sigma i}u^{tail}_{h_r(i)} - (x'_{{[n]^d}\setminus S})_i}^2]\leq (2\pi)^{2d\cdot \fc}\cdot C^d \norm{2}{x'_{{[n]^d}\setminus S}}^2/B,
\end{equation*}
which by Jensen's inequality implies
\begin{equation}\label{eq:02ht42jggdsgfgxcc}
\begin{split}
{\bf \mbox{\bf E}}_{H_r, z_r}[\abs{G_{o_i(i)}^{-1}\omega^{-z_r^T\Sigma i}u^{tail}_{h_r(i)} - ((x-\chi)_{{[n]^d}\setminus S})_i}]&\leq (2\pi)^{d\cdot \fc}\cdot C^d \sqrt{\norm{2}{x'_{{[n]^d}\setminus S}}^2/B}\\
&\leq (2\pi)^{d\cdot \fc}\cdot C^d \mu\cdot \sqrt{k/B}.
\end{split}
\end{equation}
Putting~\eqref{eq:ig34gwgt43tUf} and ~\eqref{eq:02ht42jggdsgfgxcc} together and using Lemma~\ref{lm:linearity}, we get
\begin{equation}\label{eq:02ht42jggdsggggrg344c}
{\bf \mbox{\bf E}}_{H_r, z_r}[\abs{G_{o_i(i)}^{-1}\omega^{-z_r^T\Sigma i}u_{h(i)} - (x-\chi)_i}]\leq (2\pi)^{d\cdot \fc}\cdot C^d (||x'_S||_1/B+\mu \cdot \sqrt{k/B}).
\end{equation}
We hence get by Markov's inequality together with the choice $B=(2\pi)^{4d\cdot \fc}\cdot k/({\epsilon} \alpha^{2d})$ in \textsc{EstimateValues} (see Algorithm~\ref{alg:est})
\begin{equation}\label{eq:geropghj3g34t}
{\bf \mbox{\bf Pr}}_{H_r, z_r}[\abs{G_{o_i(i)}^{-1}\omega^{-z_r^T\Sigma i}u_{h(i)} - (x-\chi)_i}>\frac1{2}\sqrt{{\epsilon} \alpha}(\nu+\mu)]\leq (C\alpha)^{d/2}.
\end{equation}
The rhs is smaller than $1/10$ as long as $\alpha$ is smaller than an absolute constant.
Since $w_i$ is obtained by taking the median in real and imaginary components, we get by Lemma~\ref{lm:median-est}
$$
|w_i-x_i'|\leq 2\text{median}(|w^1_i-x'_i|, \ldots, |w^{r_{max}}_i-x'_i|).
$$
By ~\eqref{eq:geropghj3g34t} combined with Lemma~\ref{lm:quant-exp} with $\gamma=1/10$ we thus have
$$
{\bf \mbox{\bf Pr}}_{\{H_r, z_r\}}[|w_i-x'_i|>\sqrt{{\epsilon} \alpha}(\nu+\mu)]<2^{-\Omega(r_{max})}.
$$
This establishes {\bf (1)}. {\bf (2)} follows similarly by applying the first bound from Lemma~\ref{lm:quant-exp} with $\gamma=1/2$ to the random variables $X_r=|w^r_i-x'_i|, r=1,\ldots, r_{max}$ and $Y=|w_i-x'_i|$.
The third claim of the lemma follows analogously.
The sample and runtime bounds follow by Lemma~\ref{l:hashtobins} and Lemma~\ref{l:semiequi1} by the choice of parameters.
\end{proof}
\if 0
\begin{lemma}\label{lm:estimate}
Let $x, \chi\in \mathbb{C}^n, x'=x-\chi$, let $S\subseteq {[n]^d}, |S|\leq 2k/{\epsilon}$ be such that $||x_S||_1\leq \nu\cdot k$, and let $\mu^2={\epsilon} \err_k^2(x)/k$.
Consider a call to \Call{EstimateValues}{$\hat x, \chi, L, k, {\epsilon}, \nu, r_{max}, m$}, and let $w_i$ denote the final output.
If $\nu=\infty$ (no thresholding), then for each $i\in L$ the estimates $w_i$ satisfy the following bounds for sufficiently large $r_{max}$:
\begin{enumerate}
\item ${\bf \mbox{\bf Pr}}[|w_i- x'_i|>\sqrt{\alpha}(\nu+\mu)]<2^{-\Omega(r_{max})}$;
\if 0
\item ${\bf \mbox{\bf E}}[|w_i- x'_i|\cdot \mathbf{1}_{|w_i- x'_i|>\sqrt{\alpha}\mu}]\leq 2^{-\Omega(r_{max})}\mu$;
\item ${\bf \mbox{\bf E}}[|w_i-x'_i|^2| \cdot \mathbf{1}_{| w_i- x'_i|^2>\sqrt{\alpha}\mu^2}]\leq 2^{-\Omega(r_{max})}\mu^2$.
\fi
\end{enumerate}
The sample complexity is bounded by $2^{O(d^2)}\frac1{{\epsilon}} k r_{max}$. The runtime complexity is bounded by $2^{O(d^2)}\frac1{{\epsilon}}k \log^{d+1} N r_{max}$.
\end{lemma}
\begin{proof}
Consider an element $i\in S$. By Lemma~\ref{lm:hashing}, (4) we have
$$
{\bf \mbox{\bf Pr}}_{\Sigma, q, a}[|w_i-x'_i|< \alpha^{d/2}C^d (\mu+{\epsilon}\norm{1}{x'_S}/k)]\geq 1-O(\sqrt{\alpha}),
$$
so for sufficiently small constant $\alpha$ one has
$$
{\bf \mbox{\bf Pr}}_{\Sigma, q, a}[|w_i-x'_i|<O(\sqrt{\alpha})(\nu+\mu)]\geq 3/4.
$$
Since $w_i$ is obtained by taking the median, we get by Lemma~\ref{lm:median-est}
$$
|w_i-x'_i|\leq \text{median}(
$$
Lemma~\ref{lm:quant-01} that
$$
{\bf \mbox{\bf Pr}}_{\Sigma_r, q_r, a_r, r=1,\ldots, r_{max}}[|w_i-x_i|>O(\sqrt{\alpha})(\nu+\mu)]<2^{-\Omega(r_{max})}.
$$
The second and third bounds follow similarly.
The sample and runtime bounds follow by Lemma~\ref{l:hashtobins} and Lemma~\ref{l:semiequi1} by the choice of parameters.
\end{proof}
\fi
\subsection{Properties of \textsc{HashToBins}}\label{sec:hash2bins}
\begin{algorithm}
\caption{Hashing using Fourier samples (analyzed in Lemma~\ref{l:hashtobins})}\label{alg:hash2bins}
\begin{algorithmic}[1]
\Procedure{HashToBins}{$\widehat{x}, \chi, (H, a)$}
\State $G \gets$ filter with $B$ buckets, $F=2d$\Comment{$H=(\pi, B, F), \pi=(\Sigma, q)$}
\State Compute $y'=\hat G\cdot P_{\Sigma, a, q}(\hat x-\hat \chi')$,
for some $\chi'$ with $\norm{\infty}{\widehat{\chi}-\widehat{\chi}'}<N^{-\Omega(c)}$\Comment{$c$ is a large constant}
\State Compute $u_j = \sqrt{N}\mathcal{F}^{-1}(y')_{(n/b)\cdot j}$ for $j \in [b]^d$
\State {\bf return} $u$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{lemma}\label{l:hashtobins}
\Call{HashToBins}{$\widehat{x}, \chi, (H, a)$} computes
$u$ such that for any $i \in {[n]^d}$,
\[
u_{h(i)} = \Delta_{h(i)} + \sum_j G_{o_i(j)}(x - \chi)_j \omega^{a^T\Sigma j}
\]
where $G$ is the filter defined in section~\ref{sec:prelim}, and $\Delta_{h(i)}^2 \leq \norm{2}{\chi}^2
/ ((R^*)^2N^{11})$ is a negligible error term. It takes $O(B F^d)$
samples, and if $\norm{0}{\chi} \lesssim B$, it takes $O(2^{O(d)}\cdot B\log^d N)$ time.
\end{lemma}
\begin{proof}
Let $S = \supp(\widehat{G})$, so $\abs{S} \lesssim (2F)^d\cdot B$ and in fact
$S \subset \mathbb{B}^\infty_{F \cdot B^{1/d}}(0)$.
First, \textsc{HashToBins} computes
\[
y'=\widehat{G} \cdot P_{\Sigma, a, q}\widehat{x - \chi'}=\widehat{G} \cdot P_{\Sigma, a, q}\widehat{x - \chi}+\widehat{G} \cdot P_{\Sigma, a, q}\widehat{\chi - \chi'},
\]
for an approximation $\widehat{\chi}'$ to $\widehat{\chi}$. This is efficient
because one can compute $(P_{\Sigma, a, q}\widehat{x})_S$ with
$O(\abs{S})$ time and samples, and $P_{\Sigma, a, q}\widehat{\chi}'_S$ is
easily computed from $\widehat{\chi}'_T$ for $T = \{\Sigma(j-q) : j \in
S\}$. Since $T$ is an image of an $\ell_\infty$ ball under a linear transformation and $\chi$ is $B$-sparse, by
Corollary~\ref{c:semiequi1}, an approximation $\widehat{\chi}'$ to
$\widehat{\chi}$ can be computed in $O(2^{O(d)}\cdot B\log^d N)$ time such
that $|\widehat{\chi}_i-\widehat{\chi}'_i|<N^{-\Omega(c)}$
for all $i\in T$. Since $\norm{1}{\widehat{G}} \leq
\sqrt{N}\norm{2}{\widehat{G}} = \sqrt{N}\norm{2}{G} \leq N
\norm{\infty}{G} \leq N$ and $\widehat{G}$ is $0$ outside $S$, this
implies that
\begin{equation}\label{eq:gh-norm}
\norm{2}{\widehat{G}\cdot P_{\Sigma, a, q}(\widehat{\chi-\chi'})}\leq \norm{1}{\widehat{G}}\max_{i \in S} \abs{(P_{\Sigma, a, q}(\widehat{\chi-\chi'}))_i} = \norm{1}{\widehat{G}}\max_{i \in T} \abs{(\widehat{\chi-\chi'})_i} \leq N^{-\Omega(c)}
\end{equation}
as long as $c$ is larger than an absolute constant.
Define $\Delta$ by $\widehat{\Delta}=\sqrt{N}\widehat{G}\cdot P_{\Sigma, a, q}(\widehat{\chi-\chi'})$. Then \textsc{HashToBins} computes $u \in
\mathbb{C}^B$ such that for all $i$,
\[
u_{h(i)} =
\sqrt{N}\mathcal{F}^{-1}(y')_{(n/b)\cdot h(i)}=\sqrt{N}\mathcal{F}^{-1}(y)_{(n/b)\cdot h(i)}+\Delta_{(n/b)\cdot h(i)},
\]
for $y = \widehat{G} \cdot P_{\Sigma, a, q}\widehat{x - \chi}$. This
computation takes $O(\norm{0}{y'} + B \log B) \lesssim B \log (N)$
time. We have by the convolution theorem that
\begin{align*}
u_{h(i)} &= \sqrt{N}\mathcal{F}^{-1}(\widehat{G} \cdot P_{\Sigma, a, q}\widehat{(x - \chi)})_{(n/b)\cdot h(i)} + \Delta_{(n/b)\cdot h(i)}\\
&= (G * \mathcal{F}(P_{\Sigma, a, q}\widehat{(x - \chi)}))_{(n/b)\cdot h(i)}+\Delta_{(n/b)\cdot h(i)}\\
&= \sum_{\pi(j) \in {[n]^d}} G_{(n/b)\cdot h(i)-\pi(j)} \mathcal{F}(P_{\Sigma, a, q}\widehat{(x - \chi)})_{\pi(j)}+\Delta_{(n/b)\cdot h(i)}\\
&= \sum_{j \in {[n]^d}} G_{o_i(j)} (x - \chi)_{j} \omega^{a^T\Sigma j}+\Delta_{(n/b)\cdot h(i)}
\end{align*}
where the last step is the definition of $o_i(j)$ and Lemma~\ref{lm:perm}.
Finally, we note that
\[
|\Delta_{(n/b)\cdot h(i)}|\leq \norm{2}{\Delta} =\norm{2}{\widehat{\Delta}}=\sqrt{N}\norm{2}{\widehat{G}\cdot P_{\Sigma, a, q}(\widehat{\chi-\chi'})}\leq N^{-\Omega(c)},
\]
where we used \eqref{eq:gh-norm} in the last step. This completes the proof.
\end{proof}
\if 0
\subsection{Convolution theorem and Parseval's identity}
\begin{fact}[Convolution theorem]\label{f:convolution}
For $f, g\in \mathbb{C}^n$ one has $\widehat{f\cdot g}=\widehat f * \widehat g$.
\end{fact}
\begin{proof}
\begin{equation*}
\begin{split}
\widehat {(f\cdot g)}_{j}&=\frac1{n}\sum_{i\in [n]} \omega^{ij} f(i)g(i)=\frac1{n}\sum_{i\in [n]} \omega^{ij} \left(\sum_{a\in [n]}\hat f(a)\omega^{-ai}\sum_{b\in [n]}\hat g(b)\omega^{-bi}\right)\\
&=\frac1{n}\sum_{i\in [n]} \omega^{ij} \left(\sum_{a, b\in [n]}\hat f(a) \hat g(b)\omega^{-(a+b)i}\right)\\
&=\frac1{n}\left(\sum_{a, b\in [n]}\hat f(a) \hat g(b)\sum_{i\in [n]} \omega^{(j-(a+b))i}\right)\\
&=\sum_{a\in [n]}\hat f(a) \hat g(j-a)=f*g
\end{split}
\end{equation*}
\end{proof}
\begin{fact}[Parseval's identity]\label{f:parseval}
For $f\in \mathbb{C}^n$ one has $||\widehat{f}||_2^2=\frac1{n}||f||_2^2$.
\end{fact}
\begin{proof}
We calculate the norm of the forward Fourier transform matrix. The dot product of two rows is
$$
\sum_{i\in [n]} \left(\frac1{n}\omega^{ia}\right)\left(\frac1{n}\omega^{ib}\right)=\frac1{n}\mathbf{1}_{a=b}.
$$
Thus, the norm is $1/\sqrt{n}$, implying that $||\widehat{f}||_2=(1/\sqrt{n}) ||f||_2$, and hence the result.
\end{proof}
\fi
\subsection{Lemmas on quantiles and the median estimator}
In this section we prove several lemmas useful for analyzing the concentration properties of the median estimate. We will use
\begin{theorem}[Chernoff bound]\label{thm:chernoff}
Let $X_1,\ldots, X_n$ be independent $0/1$ Bernoulli random variables with $\sum_{i=1}^n {\bf \mbox{\bf E}}[X_i]=\mu$. Then for any $\delta>1$ one has ${\bf \mbox{\bf Pr}}[\sum_{i=1}^n X_i>(1+\delta)\mu]<e^{-\delta \mu/3}$.
\end{theorem}
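For intuition, the stated bound can be compared against the exact binomial tail (a sketch with illustrative parameters, not from the paper):

```python
from math import comb, exp

def exact_tail(n, p, thr):
    """Exact P[Bin(n, p) > thr] for n i.i.d. Bernoulli(p) variables."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(thr + 1, n + 1))

n, p, delta = 100, 0.1, 2.0
mu = n * p                                    # sum of the expectations
# Theorem: P[sum X_i > (1 + delta) mu] < e^{-delta mu / 3} for delta > 1
assert exact_tail(n, p, int((1 + delta) * mu)) < exp(-delta * mu / 3)
```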
\if 0
\begin{lemma}\label{lm:median}
For any constant $\mu>0$ and any sequence of nonnegative independent random variables $X_1,\ldots, X_n\geq 0$ with ${\bf \mbox{\bf E}}[X_i] \leq \mu$ for all $i=1,\ldots, n$, the following conditions hold. If $Y:=\text{median} (X_1,\ldots, X_n)$, then for any $t\geq 4$ one has
${\bf \mbox{\bf Pr}}[Y>t \mu]<2^{-\Omega(n)}$. Furthermore, for any $t\geq 4$ one has
${\bf \mbox{\bf E}}[Y\cdot \mathbf{1}_{Y\geq t\cdot \mu}]=O(\mu\cdot 2^{-\Omega(n)})$.
\end{lemma}
\begin{proof}
For any $t\geq 1$ by Markov's inequality ${\bf \mbox{\bf Pr}}[X_i>t \mu]\leq 1/t$. Define indicator random variables $Z_i$ by letting $Z_i=1$ if $X_i>t \mu$ and $Z_i=0$ otherwise.
Then ${\bf \mbox{\bf Pr}}[Y>t \mu]\leq {\bf \mbox{\bf Pr}}[\sum_{i=1}^n Z_i\geq n/2]$. At the same time we have
\begin{equation}\label{eq:vi34g43g}
{\bf \mbox{\bf Pr}}\left[\sum_{i=1}^n Z_i\geq n/2\right]\leq e^{-(t/2-1)n/6}
\end{equation}
by Theorem~\ref{thm:chernoff} invoked with $\delta={\bf \mbox{\bf Pr}}[Z_i=1]^{-1}-1\geq t/2-1$ (note that we required $t$ to be larger than $4$ to be able to use the simplified version of the Chernoff bound provided by Theorem~\ref{thm:chernoff}; one can see that the same result holds for any $t>2$ bounded away from $2$ by a constant). This proves the first claim.
For the second claim we have
\begin{equation*}
\begin{split}
{\bf \mbox{\bf E}}[Y\cdot \mathbf{1}_{Y\geq t_*\cdot \mu}]&\leq \int_{t^*}^\infty t \mu \cdot {\bf \mbox{\bf Pr}}[Y\geq t\cdot \mu]dt\\
&\leq \int_{t^*}^\infty t \mu e^{-(t/2-1)n/6}dt\text{~~~~~~~~~~ (by~\eqref{eq:vi34g43g})}\\
&\leq e^{-n/6}\int_{t^*}^\infty t \mu e^{-(t/2-2)n/6}dt\\
&=O(\mu\cdot e^{-n/6})
\end{split}
\end{equation*}
as required.
\end{proof}
\fi
\if 0
\begin{lemma}\label{lm:quant-01}
Let $X_1,\ldots, X_n\in \{0, 1\}$ be independent random variables with ${\bf \mbox{\bf E}}[X_i] \leq 1/4$ for all $i$. Let $Y:=\text{median} (X_1,\ldots, X_n)$. Then
${\bf \mbox{\bf Pr}}[Y>0]<2^{-\Omega(n)}$.
\end{lemma}
\begin{proof}
Follows directly by Chernoff bounds.
\end{proof}
\fi
\begin{lemma}[Error bounds for the median estimator]\label{lm:median-est}
Let $X_1,\ldots, X_n\in \mathbb{C}$ be independent random variables. Let $Y:=\text{median} (X_1,\ldots, X_n)$, where the median is applied coordinatewise. Then for any $a\in \mathbb{C}$ one has
\if 0
\begin{equation*}
\begin{split}
|Y-a|^2\leq &\text{median} ((\text{re}(X_1)-\text{re}(a))^2,\ldots, (\text{re}(X_n)-\text{re}(a))^2)\\
&+\text{median} ((\text{im}(X_1)-\text{im}(a))^2,\ldots, (\text{im}(X_n)-\text{im}(a))^2)\\
|Y-a|\leq &\text{median}(|\text{re}(X_1)-\text{re}(a)|,\ldots, |\text{re}(X_n)-\text{re}(a)|)\\
&+\text{median} (|\text{im}(X_1)-\text{im}(a)|,\ldots, |\text{im}(X_n)-\text{im}(a)|).
\end{split}
\end{equation*}
\fi
\begin{equation*}
\begin{split}
|Y-a|\leq &2\text{median}(|X_1-a|,\ldots, |X_n-a|)\\
= &2\sqrt{\text{median}(|X_1-a|^2,\ldots, |X_n-a|^2)}.\\
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Let $i, j\in [n]$ be such that $Y=\text{re}(X_i)+\mathbf{i}\cdot \text{im}(X_j)$.
Suppose that $\text{re}(X_i)\geq \text{re}(a)$ (the other case is analogous). Then since $\text{re}(X_i)$ is the median in the list $(\text{re}(X_1),\ldots, \text{re}(X_n))$ by definition of $Y$, we have that at least half of $X_s, s=1,\ldots, n$ satisfy $|\text{re}(X_s)-\text{re}(a)|>|\text{re}(X_i)-\text{re}(a)|$, and hence
\begin{equation}\label{eq:25855ughgf-1}
|\text{re}(X_i)-\text{re}(a)|\leq \text{median}(|\text{re}(X_1)-\text{re}(a)|,\ldots, |\text{re}(X_n)-\text{re}(a)|).
\end{equation}
Since squaring a list of numbers preserves the order, we also have
\begin{equation}\label{eq:25855ughgf-2}
(\text{re}(X_i)-\text{re}(a))^2\leq \text{median}((\text{re}(X_1)-\text{re}(a))^2,\ldots, (\text{re}(X_n)-\text{re}(a))^2).
\end{equation}
A similar argument holds for the imaginary part.
Combining
$$
|Y-a|^2=(\text{re}(a)-\text{re}(X_i))^2+(\text{im}(a)-\text{im}(X_i))^2
$$
with~\eqref{eq:25855ughgf-1} gives
\begin{equation*}
\begin{split}
|Y-a|^2\leq &\text{median} ((\text{re}(X_1)-\text{re}(a))^2,\ldots, (\text{re}(X_n)-\text{re}(a))^2)\\
&+\text{median} ((\text{im}(X_1)-\text{im}(a))^2,\ldots, (\text{im}(X_n)-\text{im}(a))^2)\\
\end{split}
\end{equation*}
Noting that
$$
|Y-a|=((\text{re}(a)-\text{re}(X_i))^2+(\text{im}(a)-\text{im}(X_i))^2)^{1/2}\leq |\text{re}(a)-\text{re}(X_i)|+|\text{im}(a)-\text{im}(X_i)|
$$
and using ~\eqref{eq:25855ughgf-2}, we also get
\begin{equation*}
\begin{split}
|Y-a|\leq &\text{median}(|\text{re}(X_1)-\text{re}(a)|,\ldots, |\text{re}(X_n)-\text{re}(a)|)\\
&+\text{median} (|\text{im}(X_1)-\text{im}(a)|,\ldots, |\text{im}(X_n)-\text{im}(a)|).
\end{split}
\end{equation*}
The results of the lemma follow by noting that $|\text{re}(X)-\text{re}(a)|\leq |X-a|$ and $|\text{im}(X)-\text{im}(a)|\leq |X-a|$.
\end{proof}
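A quick numerical check of the lemma (a sketch: the coordinatewise median is taken with Python's `statistics.median` on real and imaginary parts; odd $n$ so each median is attained by a list entry):

```python
import statistics

def cmedian(xs):
    """Coordinatewise (real/imaginary) median of complex numbers, as in the lemma."""
    return complex(statistics.median(z.real for z in xs),
                   statistics.median(z.imag for z in xs))

# Check the bound |Y - a| <= 2 * median(|X_1 - a|, ..., |X_n - a|)
# on a small fixed sample.
xs = [1 + 1j, 2 - 1j, -1 + 3j, 0.5 + 0j, 4 + 2j]
a = 1 + 0.5j
Y = cmedian(xs)
assert abs(Y - a) <= 2 * statistics.median(abs(z - a) for z in xs)
```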
\begin{lemma}\label{lm:quant-exp}
Let $X_1,\ldots, X_n\geq 0$ be independent random variables with ${\bf \mbox{\bf E}}[X_i]\leq \mu$ for each $i=1,\ldots, n$. Then for any $\gamma\in (0, 1)$, if $Y\leq\text{quant}^{\gamma} (X_1,\ldots, X_n)$ (the $\lceil \gamma n\rceil$-th largest of the $X_i$),
then
$$
{\bf \mbox{\bf E}}[\left|Y-4\mu/\gamma\right|_+]\leq (\mu/\gamma) \cdot 2^{-\Omega(n)}
$$
and
$$
{\bf \mbox{\bf Pr}}[Y\geq 4\mu/\gamma]\leq 2^{-\Omega(n)}.
$$
\end{lemma}
\begin{proof}
For any $t\geq 1$ by Markov's inequality ${\bf \mbox{\bf Pr}}[X_i>t \mu/\gamma]\leq \gamma/t$. Define indicator random variables $Z_i$ by letting $Z_i=1$ if $X_i>t \mu/\gamma$ and $Z_i=0$ otherwise. Note that ${\bf \mbox{\bf E}}[Z_i]\leq \gamma/t$ for each $i$.
Then since $Y$ is bounded above by the $\gamma n$-th largest of $\{X_i\}_{i=1}^n$, we have ${\bf \mbox{\bf Pr}}[Y>t \mu/\gamma]\leq {\bf \mbox{\bf Pr}}[\sum_{i=1}^n Z_i\geq \gamma n]$. As ${\bf \mbox{\bf E}}[Z_i]\leq \gamma/t$, this can only happen if the sum $\sum_{i=1}^n Z_i$ exceeds expectation by a factor of at least $t$. We now apply Theorem~\ref{thm:chernoff} to the sequence $Z_i,i=1,\ldots, n$. We have
\begin{equation}\label{eq:vi34g43qqg}
{\bf \mbox{\bf Pr}}\left[\sum_{i=1}^n Z_i\geq \gamma n\right]\leq e^{-(t-1)\gamma n/3}
\end{equation}
by Theorem~\ref{thm:chernoff} invoked with $\delta=t-1$. The assumptions of Theorem~\ref{thm:chernoff} are satisfied as long as $t> 2$. This proves the second claim, since we take $t=4$ there.
For the first claim we have
\begin{equation*}
\begin{split}
{\bf \mbox{\bf E}}[Y\cdot \mathbf{1}_{Y\geq 4\cdot \mu/\gamma}]&\leq \int_{4}^\infty (t \mu/\gamma) \cdot {\bf \mbox{\bf Pr}}[Y\geq t\cdot \mu/\gamma]dt\\
&\leq \int_{4}^\infty (t \mu/\gamma) e^{-(t-1)\gamma n/3}dt\text{~~~~~~~~~~ (by~\eqref{eq:vi34g43qqg})}\\
&\leq e^{-\gamma n/3}\int_{4}^\infty (t \mu/\gamma) e^{-(t-2)\gamma n/3}dt\\
&=O((\mu/\gamma)\cdot e^{-\gamma n/3})
\end{split}
\end{equation*}
as required.
\end{proof}
{
    "timestamp": "2016-04-05T02:16:12",
    "yymm": "1604",
    "arxiv_id": "1604.00845",
    "language": "en",
    "url": "https://arxiv.org/abs/1604.00845",
    "abstract": "We consider the problem of computing a $k$-sparse approximation to the Fourier transform of a length $N$ signal. Our main result is a randomized algorithm for computing such an approximation (i.e. achieving the $\\ell_2/\\ell_2$ sparse recovery guarantees using Fourier measurements) using $O_d(k\\log N\\log\\log N)$ samples of the signal in time domain that runs in time $O_d(k\\log^{d+3} N)$, where $d\\geq 1$ is the dimensionality of the Fourier transform. The sample complexity matches the lower bound of $\\Omega(k\\log (N/k))$ for non-adaptive algorithms due to \\cite{DIPW} for any $k\\leq N^{1-\\delta}$ for a constant $\\delta>0$ up to an $O(\\log\\log N)$ factor. Prior to our work a result with comparable sample complexity $k\\log N \\log^{O(1)}\\log N$ and sublinear runtime was known for the Fourier transform on the line \\cite{IKP}, but for any dimension $d\\geq 2$ previously known techniques either suffered from a polylogarithmic factor loss in sample complexity or required $\\Omega(N)$ runtime.",
    "subjects": "Data Structures and Algorithms (cs.DS)",
    "title": "Sparse Fourier Transform in Any Constant Dimension with Nearly-Optimal Sample Complexity in Sublinear Time"
}
https://arxiv.org/abs/1402.5133

The Marstrand Theorem in Nonpositive Curvature

Abstract: In a paper from 1954, Marstrand proved that if $K\subset \mathbb{R}^2$ with Hausdorff dimension greater than 1, then its one-dimensional projection has positive Lebesgue measure for almost-all directions. In this article, we show that if $M$ is a simply connected surface with non-positive curvature, then Marstrand's theorem is still valid.

\section{Introduction}\label{IntroMars}
Consider $\mathbb{R}^2$ as a metric space with a metric $d$. If $U$ is a subset of $\mathbb{R}^2$, the diameter of $U$ is $|U|=\sup\{d(x,y):x,y\in U\}$ and, if $\mathcal{U}$ is a family of subsets of $\mathbb{R}^{2}$, the diameter of $\mathcal{U}$ is defined by
$$\left\|\mathcal{U}\right\|=\sup_{U\in\ \mathcal{U}}|U|.$$
Given $s>0$, the Hausdorff $s$-measure of a subset $K$ of $\mathbb{R}^2$ is
$$m_s(K)=\lim_{\epsilon \to 0}\left( \inf_{ \stackrel{\mathcal{U}\ \text{covers} \ K}{\|\mathcal{U}\|<\epsilon}}\sum_{U\in \ \mathcal{U}}{|U|}^{s} \right).$$
In particular, when $d$ is the Euclidean metric and $s=1$, $m=m_1$ is the Lebesgue measure. It is not difficult to show that there exists a unique $s_0\geq 0$ for which $m_s(K)=+\infty$ if $s<s_0$ and $m_s(K)=0$ if $s>s_0$. We define the Hausdorff dimension of $K$ as $HD(K)=s_0$. Also, for each $\theta\in \mathbb{R}$, let $v_\theta=(\cos \theta,\sin \theta)$, let $L_{\theta}$ be the line in $\mathbb{R}^2$ through the origin containing $v_{\theta}$, and let $\pi_{\theta}:\mathbb{R}^2\to L_{\theta}$ be the orthogonal projection.\\
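To illustrate the definitions, a minimal sketch (Euclidean metric; the middle-thirds Cantor set covered by its $2^k$ generation-$k$ intervals of diameter $3^{-k}$): the covering cost $\sum_{U}|U|^s$ stays bounded precisely at the critical exponent $s_0=\log 2/\log 3$, the Hausdorff dimension of that set.

```python
from math import log

def cantor_cover_cost(s, k):
    """Cost sum |U|^s of covering the middle-thirds Cantor set by its
    2^k generation-k intervals, each of diameter 3^{-k}."""
    return 2**k * (3.0 ** -k) ** s

s0 = log(2) / log(3)                     # Hausdorff dimension of the Cantor set
for k in (5, 10, 50):
    assert abs(cantor_cover_cost(s0, k) - 1.0) < 1e-9   # bounded at s = s0
assert cantor_cover_cost(s0 + 0.1, 50) < 1e-2           # cost -> 0 when s > s0
assert cantor_cover_cost(s0 - 0.1, 50) > 1e2            # cost -> infinity when s < s0
```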
In 1954, J. M. Marstrand \cite{Marst} proved the following result on the fractal dimension
of plane sets.\\
\ \\
\textbf{Theorem[Marstrand]:}\textit{\ If $K\subset \mathbb{R}^2$ is such that $HD(K)>1$, then
$m(\pi_{\theta}(K))>0$ for m-almost every $\theta\in \mathbb{R}$.}\\
\ \\
The proof is based on a qualitative characterization of the ``bad" angles $\theta$ for
which the result is not true.\\
\ \\
Many generalizations and simpler proofs have appeared since. One of them
came in 1968 by R. Kaufman, who gave a very short proof of Marstrand's
Theorem using methods of potential theory. See \cite{Kaufman} for his original proof and
\cite{PT}, \cite{Falconer} for further discussion. Another recent proof of the theorem (2011), which uses combinatorial techniques is found in \cite{YG}.\\
\ \\
In this article, we consider $M$ a simply connected surface with a Riemannian metric of non-positive curvature, and using the potential theory techniques of Kaufman \cite{Kaufman}, we show the following more general version of the Marstrand's Theorem.\\
\ \\
\noindent \emph{\bf The Geometric Marstrand Theorem:} \ \ \textit{Let $M$ be a Hadamard surface, and let $K\subset M$ and $p\in M$ be such that $HD(K)>1$. Then, for almost every line $l$ through $p$, the set $\pi_{l}(K)$ has positive Lebesgue measure, where $\pi_l$ is the orthogonal projection onto $l$.}\\
\ \\
\noindent Then, using Hadamard's theorem (cf. \cite{ManP}), the theorem above can be stated as follows:\\
\noindent \emph{\bf Main Theorem:}\ \ \textit{Let ${\mathbb R}^2$ be endowed with a metric $g$ of non-positive curvature, and let $K \subset {\mathbb R}^2$ satisfy $HD(K)> 1$. Then for almost every $\theta \in (-\pi/2 , \pi /2)$ we have $m(\pi_\theta(K))> 0$, \newline where $\pi_{\theta}$ is the orthogonal projection, with respect to the metric $g$, onto the line $l_{\theta}$ with initial velocity $v_{\theta}=(\cos\theta,\sin\theta)\in T_{p}\mathbb{R}^{2}$.}
\section{Preliminaries} \label{Preliminaries}
\noindent Let $M$ be a Riemannian manifold with metric $\left\langle \ , \ \right\rangle$. A line in $M$ is a geodesic defined for all parameter values that minimizes the distance between any two of its points; that is, $\gamma : {\mathbb R} \to M$ is an isometry.
If $M$ is a simply connected manifold of dimension $n$ with non-positive curvature, then the space of lines leaving a point $p$ can be identified with a sphere of dimension $n-1$. So, in the case of surfaces, the set of lines through $p$ agrees with $S^1$ in the tangent space $T_{p}M$ at the point $p$. Therefore, at each point of the surface the set of lines can be oriented, parameterized by $\left(-\frac{\pi}{2} , \frac{\pi}{2}\right]$ and endowed with the Lebesgue measure. Thus, using the previous identification, we can speak of almost every line through a point of $M$ (cf. \cite{Bridson}).
Under the conditions above, Hadamard's theorem states that $M$ is diffeomorphic to $\mathbb{R}^{n}$ (cf. \cite{ManP}).\\
\ \\
Moreover, given a geodesic triangle $\Delta ABC$ with sides $\vec{AB}$, $\vec{BC}$ and $\vec{AC}$, denote by $\angle A$ the angle between the geodesic segments $\vec{AB}$ and $\vec{AC}$; then \textit{the law of cosines} says
$$|\vec{BC}|^{2}\geq|\vec{AB}|^{2}+|\vec{AC}|^2-2|\vec{AB}||\vec{AC}|\cos\angle A,$$
where $|\vec{ij}|$ is the distance between the points $i,j$ for $i,j\in \{A,B,C\}$.\\
\noindent \textbf{Gauss's Lemma:} Let $p\in M$, let $v,w\in B_{\epsilon}(0)\subset T_p M$, identify $T_vT_p M \approx T_p M$, and let $q=exp_pv\in M$. Then,
$$\left\langle d(exp_p)_{v}v,d(exp_p)_{v}w\right\rangle_{q}=\left\langle v,w\right\rangle_{p}.$$
\subsection{Projections}\label{Projections}
Let $M$ be a simply connected manifold of non-positive curvature. Let $C$ be a complete convex set in $M$. \textit{The orthogonal projection} (or simply
`\textit{projection}') is the name given to the map $\pi\colon M\to C$ constructed in the following proposition (cf. \cite[p.~176]{Bridson}).
\begin{Pro}The projection $\pi$ satisfies the following properties:
\begin{enumerate}
\item For any $x\in M$ there is a unique point $\pi(x)\in C$ such that $d(x,\pi(x))=d(x, C) = \inf_{y\in C} d(x,y)$.
\item If $x_0$ is in the geodesic segment $[x, \pi(x)]$, then $\pi(x_0)= \pi(x)$.
\item Given $x \notin C$, $y \in C$ and $y \neq \pi(x)$, then $\angle_{\pi(x)}(x,y) \geq \frac{\pi}{2}$.
\item The map $x \longmapsto \pi(x)$ is a retraction onto $C$.
\end{enumerate}
\end{Pro}
\begin{C}
Let $M$, $C$ be as above and define $d_C(x):=d(x,C)$, then
\begin{enumerate}
\item $d_C$ is a convex function, that is, if $\alpha (t)$ is a geodesic parametrized proportionally to arc length, then
$$d_C(\alpha(t)) \leq (1-t)d_C (\alpha(0)) + td_C (\alpha(1))\ \ \text{for} \ \ t\in [0,1].$$
\item For all $x,y \in M$, we have $\left|d_C(x)- d_C(y)\right|\leq d(x,y)$.
\item The restriction of $d_C$ to the sphere of center $x$ and radius $r\leq d_C(x)$ attains its infimum at a unique point $y$, with
$$d_C(x)=d_C(y) + r.$$
\end{enumerate}
\end{C}
\noindent Here we consider $\mathbb{R}^{2}$ with a Riemannian metric $g$, such that the curvature $K_{\mathbb{R}^{2}}$ is non-positive, \emph{i.e.}, $K_{\mathbb{R}^{2}}\leq 0$.
\noindent Recall that a line $\gamma$ in ${\mathbb{R}}^2$ is a geodesic defined for all parameter values and minimizing the distance between any of its points, that is,
$\gamma \colon {\mathbb{R}} \to {\mathbb{R}}^2$ satisfies $d(\gamma(t),\gamma(s))=|t-s|$, where $d$ is the distance induced by the Riemannian metric $g$; in other words, a parametrization of $\gamma$ is an isometry.
Then, given $x\in \mathbb{R}^2$, there is a unique $\gamma(t_x)$ such that $\pi_\gamma (x)=\gamma(t_x)$; thus, without loss of generality, we may write $\pi_\gamma(x)=t_x$.\\
\ \\
Fix $p \in \mathbb{R}^{2}$ and let $\{e_1,e_2\}$ be a positive orthogonal basis of $T_p\mathbb{R}^2$, \emph{i.e.}, the basis $\{e_1,e_2\}$ has the induced orientation of $\mathbb{R}^{2}$. Then write $v_t=(\cos t,\sin t)$ for the unit vector $(\cos t) e_1+(\sin t) e_2\in T_{p}\mathbb{R}^{2}$. Denote by $l_t$ the line through $p$ with velocity $v_t$, given by $l_t(s)=exp_{p}sv_t$, and by $\pi_t$ the projection onto $l_t$.
Then, given $\theta\in [0,2\pi)$, we can define $\pi\colon [0,2\pi)\times T_{p}\mathbb{R}^2 \to \mathbb{R}$ by letting $\pi(\theta,w)$ be the unique parameter $s$ such that $\pi_{\theta}(exp_{p}w)=exp_{p}s{v_{\theta}}$, \emph{i.e.}, $\pi(\theta,w):=\pi_{\theta}(w)$ and
$$\pi_{\theta}(exp_{p}w)=exp_{p}\pi(\theta, w)v_{\theta}.$$
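As a sanity check (ours, not part of the argument): when $g$ is the flat metric, $exp_p$ is the identity under $T_p\mathbb{R}^2\approx\mathbb{R}^2$, and $\pi(\theta,w)$ reduces to the Euclidean projection:

```latex
% Flat case: exp_p = id, so \pi_\theta(exp_p w) = \langle w, v_\theta\rangle v_\theta, hence
\pi(\theta, w) \;=\; \langle w, v_{\theta}\rangle
\;=\; \left\|w\right\|\cos\big(arg(w)-\theta\big),
% where arg(w) is the angle of w measured from e_1 in the chosen orientation.
```

This Euclidean quantity reappears in Section \ref{BP} as the comparison function $\pi_{l_\theta}$.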
\section{Behavior of the Projection $\pi$}\label{BP}
In this section we will prove some lemmas that will help to understand the projection $\pi$.
\subsection{Differentiability of $\pi$ in $\theta$ and $w$}
\begin{Le}\label{L1M}
The projection $\pi$ is differentiable in $\theta$ and \ $w$.
\end{Le}
\begin{proof}[\bf{Proof}]
Fix $w$ and call $q=exp_{p}{w}$. Let $\alpha_v(t)\in T_q{\mathbb R}^2$ be such that $exp_{q}\alpha_{v}(t)=\gamma_v(t)$, where $\gamma_v$ is the line such that $\gamma_v(0)=p$ and $\gamma_v'(0)=v$. Then, for all $v\in S^1$, there is a unique $t_v$ such that $d(q,\gamma_v({\mathbb R}))=d(q,\gamma_v(t_v))$, and it satisfies
$$\left\langle d(exp_q)_{\alpha_v{(t_v)}} (\alpha_v'(t_v)) , d(exp_q)_{\alpha_v{(t_v)}} (\alpha_v(t_v))\right\rangle=
\left\langle \gamma_v'(t_v),d(exp_q)_{\alpha_v{(t_v)}} (\alpha_v{(t_v)})\right\rangle=0.$$
By Gauss's Lemma, we have
$$\frac{1}{2}\frac{\partial}{\partial t}\left\|\alpha_v(t)\right\|^2(t_v)= \left\langle \alpha_v'(t_v),\alpha_v(t_v) \right \rangle=0.$$
We define the real function
\begin{eqnarray*}
\eta: S^1 &\times & {\mathbb R} \longrightarrow {\mathbb R}\\
\eta(v,t)&=&\frac{1}{2}\frac{\partial}{\partial t}\left\|\alpha_v(t)\right\|^2,
\end{eqnarray*}
\noindent this function is $C^\infty$ and satisfies $\eta(v,t_{v})=0$, also $\frac{\partial}{\partial t} \eta(v,t)= \frac{1}{2}\frac{\partial^2}{\partial t^2} \left\|\alpha_v(t)\right\|^2$.\\
\ \\
\noindent Put $g(t)={\left\| \alpha_{v_{0}}(t)\right\|}^2$; then $\frac{\partial}{\partial t} \eta(v_0,t_{v_0})=\frac{1}{2}g''(t_{v_0})$.
Also, $g(t)={d(q,\gamma_{v_0}(t))}^2$ is differentiable and has a global minimum at $t_{v_0}$; since $K_{\mathbb{R}^2}\leq 0$, $g$ is convex. In fact, for $s\in[0,1]$,
\begin{eqnarray*}
\displaystyle g(sx+(1-s)y)&=&d(q,\gamma_{v_0}(sx+(1-s)y))^2 \leq
\left(sd(q,\gamma_{v_0}(x))+(1-s)d(q,\gamma_{v_0}(y))\right)^2 \\
&\leq& sd(q,\gamma_{v_0}(x))^2+(1-s)d(q,\gamma_{v_0}(y))^2=sg(x)+(1-s)g(y)
\end{eqnarray*}
\noindent by the law of cosines, and using that the angle at the projection point satisfies $\angle_{\gamma_{v_0}(t_{v_0})}\!\left(q,\gamma_{v_0}(t)\right) \geq \frac{\pi}{2}$, we get
\begin{center}
$d(q,\gamma_{v_0}(t_{v_0}))^2 + d(\gamma_{v_0}(t_{v_0}),\gamma_{v_0}(t))^2 \leq d(q,\gamma_{v_0}(t))^2$
\end{center}
equivalently
\begin{center}
$g(t_{v_0})+(t-t_{v_0})^2 \leq g(t)$.
\end{center}
Therefore, as $g'(t_{v_0})=0$, we obtain $g''(t_{v_0}) > 0$.
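The step from the displayed inequality to $g''(t_{v_0})>0$ can be expanded as follows (a short Taylor argument; the explicit constant $2$ is ours and is not used elsewhere):

```latex
% Second-order Taylor expansion of g at t_{v_0}, using g'(t_{v_0}) = 0:
g(t) \;=\; g(t_{v_0}) + \tfrac{1}{2}\, g''(t_{v_0})\,(t-t_{v_0})^2
          + o\!\big((t-t_{v_0})^2\big).
% Comparing with the inequality g(t_{v_0}) + (t-t_{v_0})^2 \le g(t)
% and letting t \to t_{v_0}:
\tfrac{1}{2}\, g''(t_{v_0}) \;\geq\; 1,
\qquad\text{hence}\qquad g''(t_{v_0}) \;\geq\; 2 \;>\; 0 .
```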
This implies that $\frac{\partial \eta}{\partial t}(v_0,t_{v_0})\neq 0$ and, by the Implicit Function Theorem, there are an open set $U$ containing $(v_0,t_{v_0})$, an open set $V \subset S^1$ containing $v_0$, and a $C^\infty$ function $\xi:V \longrightarrow {\mathbb R}$ with $\xi(v_0)=t_{v_0}$ such that
$$\left\{(v,t)\in U: \eta(v,t)=0\right\} = \left\{ (v,\xi(v)) : v\in V \right\}.$$
Since by construction $\eta(v,\xi(v))=0$, we get $\pi(v,q)= \xi(v)$, and therefore $\pi(v,q)$ is differentiable in $v$; in fact, it is $C^\infty$. This shows that $\pi$ is differentiable in $\theta$.\\
\ \\
\noindent Analogously, one proves that $\pi$ is differentiable in $w$.
\end{proof}
\noindent Let $w\in T_{p}\mathbb{R}^2\setminus\{0\}$ and take $\theta_{w}^{\bot}\in [0,2\pi)$ such that $w$ and $v_{\theta_{w}^{\bot}}$ are orthogonal, that is, $\left\langle w , v_{\theta_{w}^{\bot}}\right\rangle=0$, where $\left\langle \ , \ \right\rangle$ is the inner product in $T_{p}\mathbb{R}^{2}$, and such that $\{w,v_{\theta_{w}^{\bot}}\}$ is a positive basis of $T_{p}\mathbb{R}^2$.
\begin{Le}\label{L2M}
The projection $\pi$ satisfies, $$\displaystyle\frac{\partial \pi}{\partial \theta}\left(\theta_{w}^{\bot},w\right)=-\left\|w\right\|.$$
Moreover, there exists $\epsilon>0$ such that, for all $w$
$$-\left\|w\right\|\leq\frac{\partial \pi}{\partial \theta}\left(\theta,w\right)\leq -\frac{1}{2}\left\|w\right\| \ \text{and} \ \ \left|\frac{\partial^{2} \pi}{\partial^{2} \theta}\left(\theta,w\right)\right|\leq \left\|w\right\|,$$ whenever $\left|\theta-\theta_{w}^{\bot}\right|<\epsilon$.
\end{Le}
\noindent Before proving Lemma \ref{L2M} we will seek to understand the function $\pi(\theta,w)$.\\
\ \\
\noindent Let $\pi_{l_\theta}$ be the orthogonal projection onto the line $l_\theta$ generated by the vector $v_{\theta}$ in $T_{p}\mathbb{R}^{2}$;
in this case, $\pi_{l_\theta}(w)=\left\|w\right\|\cos(arg(w)-\theta)$, where $arg(w)$ is the argument of $w$ with respect to $e_1$ and the orientation of the basis $\{e_1,e_2\}$.
\ \\
Now using the law of cosines
\begin{center}
$d(p,\pi_\theta(exp_{p} w))^2+d(exp_{p}w,\pi_\theta(exp_{p} w))^2\leq \left\|w\right\|^2=\pi_{l_\theta}(w)^2+d(\pi_{l_{\theta}}(w)v_\theta,w)^2,$
\end{center}
\noindent Since $K_{\mathbb{R}^{2}}\leq 0$, we have $$d(exp_{p}w,\pi_\theta(exp_{p} w))=d(exp_{p}w,exp_{p}\pi_{\theta}(w)v_{\theta})\geq d(w,\pi_{\theta}(w)v_{\theta})\geq d(w,\pi_{l_{\theta}}(w)v_{\theta}).$$
Joining the previous expressions we obtain
$$d(p,\pi_\theta(exp_{p}w))^2\leq \pi_{l_\theta}(w)^2 \Longleftrightarrow \pi_{\theta}(w)^{2}\leq \pi_{l_{\theta}}(w)^2.$$
\noindent Thus, since $\pi_{\theta}(w)$ has the same sign as $\pi_{l_{\theta}}(w)$, we obtain
\begin{eqnarray}
\pi_\theta(w)\geq 0 &\Longrightarrow & \pi_{\theta}(w)\leq \pi_{l_{\theta}}(w)=\left\|w\right\|\cos(arg(w)-\theta); \label{E1}\\
\pi_{\theta}(w)\leq 0 &\Longrightarrow& \pi_{\theta}(w)\geq \pi_{l_{\theta}}(w)=\left\|w\right\|\cos(arg(w)-\theta). \label{E2}
\end{eqnarray}
\begin{proof}[\bf{Proof of Lemma \ref{L2M}}] \ \\
As $\left\langle w,v_{\theta_{w}^{\bot}}\right\rangle=0$, we have $arg(w)-\theta_{w}^{\bot}=-\pi/2$, thus $\pi(\theta_{w}^{\bot},w)=0=\left\|w\right\|\cos(-\pi/2)$. Moreover, as $\pi(\theta_{w}^{\bot}-h,w)\geq 0$ and $\pi(\theta_{w}^{\bot}+h,w)\leq 0$ for $h>0$ small, we have
$$\frac{\pi(\theta_{w}^{\bot}-h,w)}{h}\leq \frac{\left\|w\right\|\cos\left(arg(w)-(\theta_{w}^{\bot}-h)\right)}{h}$$
and
$$\frac{\pi(\theta_{w}^{\bot}+h,w)}{h}\geq \frac{\left\|w\right\|\cos\left(arg(w)-(\theta_{w}^{\bot}+h)\right)}{h}.$$
Letting $h\to 0^{+}$ in the two previous inequalities, we obtain
$$\frac{\partial \pi}{\partial \theta}(\theta_{w}^{\bot},w)\leq \left\|w\right\|\sin(arg(w)-\theta_{w}^{\bot})=-\left\|w\right\|$$
and
$$\frac{\partial \pi}{\partial \theta}(\theta_{w}^{\bot},w)\geq \left\|w\right\|\sin(arg(w)-\theta_{w}^{\bot})=-\left\|w\right\|.$$
Therefore,
\begin{equation}\label{E3}
\frac{\partial \pi}{\partial \theta}(\theta_{w}^{\bot},w)=-\left\|w\right\|.
\end{equation}
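In the flat case, identity (\ref{E3}) can be checked directly (a consistency check of ours, not needed for the argument):

```latex
% Flat metric: \pi(\theta, w) = ||w|| \cos(arg(w) - \theta), hence
\frac{\partial \pi}{\partial \theta}(\theta,w)
  \;=\; \left\|w\right\| \sin\big(arg(w)-\theta\big),
\qquad\text{and at } \theta=\theta_{w}^{\bot}:\quad
\left\|w\right\| \sin\!\left(-\tfrac{\pi}{2}\right) \;=\; -\left\|w\right\| .
```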
\noindent Moreover, for $h>0$ small and by the equation (\ref{E2}), we have
\begin{eqnarray*}
\pi(\theta_{w}^{\bot}+h,w)&=&\frac{\partial \pi}{\partial \theta}(\theta_{w}^{\bot},w)h+\frac{1}{2}\frac{\partial^{2} \pi}{\partial^{2} \theta}(\theta_{w}^{\bot},w)h^{2}+r(h)\\
&\geq&\left\|w\right\|\cos\left(arg(w)-(\theta_{w}^{\bot}+h)\right)\\
&=&\left\|w\right\|\left(\frac{\partial}{\partial \theta} \cos(\theta-arg(w))|_{\theta_{w}^{\bot}}h+\frac{1}{2}\frac{\partial^{2}}{\partial^{2} \theta} \cos(\theta-arg(w))|_{\theta_{w}^{\bot}}h^{2}+R(h)\right).
\end{eqnarray*}
The above inequality and equation (\ref{E3}) imply that $$\frac{1}{2}\frac{\partial^{2} \pi}{\partial^{2} \theta}(\theta_{w}^{\bot},w)h^{2}+r(h)\geq \left\|w\right\|\left(\frac{1}{2}\frac{\partial^{2}}{\partial^{2} \theta} \cos(\theta-arg(w))|_{\theta_{w}^{\bot}}h^{2}+R(h)\right).$$
Since $\dfrac{\partial^{2}}{\partial^{2} \theta} \cos(\theta-arg(w))|_{\theta_{w}^{\bot}}=0$, then $\dfrac{\partial^{2} \pi}{\partial^{2} \theta}(\theta_{w}^{\bot},w)\geq 0$.
Analogously, using $\pi(\theta_{w}^{\bot}-h,w)$ and equation (\ref{E1}) we have that $\dfrac{\partial^{2} \pi}{\partial^{2} \theta}(\theta_{w}^{\bot},w)\leq 0$. So,
\begin{equation}\label{E4}
\dfrac{\partial^{2} \pi}{\partial^{2} \theta}(\theta_{w}^{\bot},w)=0.
\end{equation}
\noindent Using the third-order Taylor expansion of $\pi(\theta_{w}^{\bot}+h,w)$ for $h>0$, the equations (\ref{E2}), (\ref{E3}), (\ref{E4}), and the fact that
$\dfrac{\partial^{3}}{\partial^{3} \theta} \cos(\theta-arg(w))|_{\theta_{w}^{\bot}}=1$, we obtain
$$\frac{\partial^{3} \pi}{\partial^{3} \theta}(\theta_{w}^{\bot},w)\frac{h^3}{6}+r_{3}(h)\geq \left\|w\right\|\left(\frac{h^3}{6}+R_{3}(h)\right).$$
Thus,
\begin{equation}\label{E5}
\frac{\partial^{3} \pi}{\partial^{3} \theta}(\theta_{w}^{\bot},w)\geq \left\|w\right\|.
\end{equation}
\noindent Equations (\ref{E4}) and (\ref{E5}) imply that, for any $w\in T_{p}\mathbb{R}^2\setminus\{0\}$, the function $\dfrac{\partial \pi}{\partial \theta}(\cdot,w)$ has a local minimum at $\theta=\theta_{w}^{\bot}$; therefore there is $\epsilon_{1}>0$ such that
\begin{equation}\label{E6}
-\left\|w\right\|\leq \frac{\partial \pi}{\partial \theta}(\theta,w) \ \ \text{for all} \ \ \left|\theta-\theta_{w}^{\bot}\right|<\epsilon_1.
\end{equation}
\noindent The lemma will be proved if we show the following statements:
\begin{enumerate}
\item There is $\delta_1>0$, such that for all $\left\|w\right\|\geq 1$, $$\frac{\partial \pi}{\partial \theta}(\theta,w)\leq -\frac{1}{2}\left\|w\right\|, \ \ \text{whenever} \left|\theta-\theta_{w}^{\bot}\right|<\delta_1.$$
In fact: let $0<\beta<1/2$. By continuity of $\dfrac{\partial \pi}{\partial \theta}$, there is $\delta_1>0$ such that $$\text{if} \ \ \left|\theta-\theta_{w}^{\bot}\right|<\delta_1, \ \ \text{then}\ \ \frac{\partial \pi}{\partial \theta}(\theta,w)-\frac{\partial \pi}{\partial \theta}(\theta_{w}^{\bot},w)<\beta.$$
Thus, $\dfrac{\partial \pi}{\partial \theta}(\theta,w)<\beta-\left\|w\right\|<-\frac{1}{2}\left\|w\right\|$ for any $\left\|w\right\|\geq 1$.
\item There is $\epsilon_2>0$, such that for all $\left\|w\right\|=1$ and $t\in[0,1]$
$$\frac{\partial \pi}{\partial \theta}(\theta,tw)\leq -\frac{1}{2}t, \ \ \text{whenever}\ \ \left|\theta-\theta_{w}^{\bot}\right|<\epsilon_2.$$
In fact: suppose by contradiction that for all $n\in \mathbb{N}$ there are $w_n, t_n, \theta_n$ with $\left\|w_n\right\|=1$ such that $\left|\theta_{w_n}^{\bot}-\theta_n\right|<\frac{1}{n}$ and $\frac{\partial \pi}{\partial \theta}(\theta_n,t_n w_n)>-\frac{1}{2}t_n$. Without loss of generality, we can assume that $w_n\to w$, $\theta_n \to \theta_{w}^{\bot}$ and $t_n\to t$. If $t\neq 0$, the above contradicts (\ref{E3}). Thus, suppose that $t=0$ and
consider the $C^{1}$-function $H(\theta,t,w)=\frac{\partial \pi}{\partial \theta}(\theta,tw)$; then $\frac{\partial H}{\partial t}(\theta_{w}^{\bot},0,w)=-\left\|w\right\|=-1$. Since $H$ is $C^1$,
$$\lim_{n\to \infty}\frac{H(\theta_n,t_n,w_n)}{t_n}=\lim_{t\to 0}\frac{H(\theta,t,w)}{t}=-1<-1/2\leq \lim_{n\to \infty}\frac{H(\theta_n,t_n,w_n)}{t_n},$$
which is absurd; this proves assertion 2.
\end{enumerate}
\noindent Take $\epsilon=\min\{\epsilon_1,\epsilon_2,\delta_1\}$; then, by equation (\ref{E6}) and statements 1 and 2, we obtain the second part of Lemma \ref{L2M}. The third part is analogous, using that $\frac{\partial ^{2}\pi}{\partial^2 \theta}(\theta_{w}^{\bot},w)=0$. This concludes the proof of the lemma.
\end{proof}
\begin{Le}\label{L3M}
Let $w\neq 0$ and $\theta \neq \theta_{w}^{\bot}$, then $\displaystyle\lim_{t \to 0^{+}}\frac{\pi_{\theta}(tw)}{t}\neq 0$.
\end{Le}
\begin{proof}[\bf{Proof}]
Suppose that $\lim_{t \to 0^{+}}\frac{\pi_{\theta}(tw)}{t}=0$. Put $w(t)=exp_{p}tw$ and let $v(t)\in T_{w(t)}\mathbb{R}^2$ be the unit vector such that $exp_{w(t)}s(t)v(t)=\pi_{\theta}(exp_{p}tw)$ for some $s(t)\geq 0$. Let $J(t)\in T_{w(t)}\mathbb{R}^2$ be such that $exp_{w(t)}J(t)=p$, that is, $J(t)=-t\,d(exp_{p})_{tw}w$. Let $\alpha(t)$ be the oriented angle between $v(t)$ and $J(t)$ (cf. Figure \ref{fig:MF1}).\\
\begin{figure}[htbp]
\centering
\includegraphics[width=0.40\textwidth]{MF1.pdf}
\caption{Convergence of geodesics}
\label{fig:MF1}
\end{figure}
\noindent By the law of cosines and using that $d(p,w(t))=\left\|J(t)\right\|=t\left\|w\right\|$ for $t>0$, and $\pi_{\theta}(tw)=d(p,\pi_{\theta}(w(t)))$, we obtain
$$\pi_{\theta}(tw)^{2}\geq \left\|J(t)\right\|^{2}+d(w(t),\pi_{\theta}(w(t)))^{2}-2\left\|J(t)\right\|d(w(t),\pi_{\theta}(w(t)))\cos \alpha(t).$$
Put $\displaystyle\lim_{t\to 0^{+}}\frac{d(w(t),\pi_{\theta}(w(t)))}{t}=B$, then dividing by $t^{2}$ and when $t\to 0$ we have
\begin{eqnarray*}
0=\left(\lim_{t \to 0^{+}}\frac{\pi_{\theta}(tw)}{t}\right)^{2}&\geq& \left\|w\right\|^{2}+B^{2}-2\left\|w\right\|B\lim_{t\to 0^{+}}\cos \alpha(t)\\
&\geq &\left\|w\right\|^{2}+B^{2}-2\left\|w\right\|B=(\left\|w\right\|-B)^{2}\geq 0.\\
\end{eqnarray*}
Thus, we conclude that $B=\left\|w\right\|$ and $\displaystyle\lim_{t\to 0^{+}}\cos \alpha(t)=1$. Therefore, $\displaystyle\lim_{t\to 0^{+}}\alpha(t)=0$, which implies the geodesic convergence
$$exp_{w(t)}sv(t)\stackrel{t\to 0^{+}}{\longrightarrow} exp_{p}{s\frac{-w}{\left\|w\right\|}},$$
given that $w(t)\to p$ and $v(t)\to -\frac{w}{\left\|w\right\|}$ when $t\to 0^{+}$.\\
Moreover, by definition of $s(t)$, we have that
$$\left\langle d\left(exp_{w(t)}\right)_{s(t)v(t)}v(t),d\left(exp_{p}\right)_{\pi_{\theta}(tw)v_{\theta}}v_{\theta})\right\rangle_{\pi_{\theta}(tw)v_{\theta}}=0,$$
using the fact that $d(exp_{p})_{0}=I$, where $I$ is the identity of $T_{p}\mathbb{R}^{2}$, and letting $t\to 0^{+}$, we conclude that $\left\langle -\frac{w}{\left\|w\right\|},v_{\theta}\right\rangle=0$, a contradiction since $\theta\neq \theta_{w}^{\bot}$.
\end{proof}
\ \\
\noindent Now we subdivide $T_{p}\mathbb{R}^2$ into three regions. Consider $\epsilon$ given by Lemma \ref{L2M}, and set
\begin{eqnarray*}
R_1&=&\left\{w\in T_{p}\mathbb{R}^2\colon \angle(w,e_1)\leq \frac{\pi}{2}-\frac{3}{2}\epsilon \ \ \text{or} \ \ \angle(w,e_1)\geq \frac{3\pi}{2}+\frac{3}{2}\epsilon\right\};\\
R_2&=&\left\{w\in T_{p}\mathbb{R}^2\colon \frac{\pi}{4}+\frac{3}{2}\epsilon\leq\angle(w,e_1)\leq \frac{5\pi}{4}-\frac{3}{2}\epsilon\right\};\\
R_3&=&\left\{w\in T_{p}\mathbb{R}^2\colon \frac{3\pi}{4}+\frac{3}{2}\epsilon\leq\angle(w,e_1)\leq \frac{7\pi}{4}-\frac{3}{2}\epsilon\right\}.
\end{eqnarray*}
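Note (a quick check of ours, assuming $\epsilon<\pi/12$, with the two conditions defining $R_1$ read as a union of alternatives) that the three cones cover the tangent plane:

```latex
% Angular ranges covered (angles mod 2\pi):
R_1:\ \big[0,\tfrac{\pi}{2}-\tfrac{3}{2}\epsilon\big]
      \cup \big[\tfrac{3\pi}{2}+\tfrac{3}{2}\epsilon, 2\pi\big),
\qquad
R_2:\ \big[\tfrac{\pi}{4}+\tfrac{3}{2}\epsilon, \tfrac{5\pi}{4}-\tfrac{3}{2}\epsilon\big],
\qquad
R_3:\ \big[\tfrac{3\pi}{4}+\tfrac{3}{2}\epsilon, \tfrac{7\pi}{4}-\tfrac{3}{2}\epsilon\big];
% consecutive intervals overlap exactly when 3\epsilon \le \pi/4,
% i.e. when \epsilon \le \pi/12, so R_1 \cup R_2 \cup R_3 = T_p\mathbb{R}^2.
```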
\ \\
\noindent For $w\in T_p\mathbb{R}^2$, put $a_{w}^{\bot}=\theta_{w}^{\bot}-\dfrac{\epsilon}{2}$ and $\tilde{a}_w^{\bot}=\theta_{w}^{\bot}+\dfrac{\epsilon}{2}$, where $\epsilon$ is given in Lemma \ref{L2M}.
\begin{Le}\label{L4M} For the function $\pi_{\theta}(w)$ we have that
\begin{enumerate}
\item There is $C_1>0$ such that for all $w\in R_1$,
\begin{itemize}
\item[$\left(\mathrm{a}\right)$] If $\left\|w\right\|\leq 1$, then
$\pi_{\theta}(w)\geq C_1\left\|w\right\| \ \ \text{for} \ \ \theta\in [0, a_w^{\bot}]\cup [\tilde{a}_w^{\bot},\pi].$
\item[$\left(\mathrm{b}\right)$] If $\left\|w\right\|\geq 1$, then
$\pi_{\theta}(w)\geq C_{1} \ \ \text{for} \ \ \theta\in [0, a_w^{\bot}]\cup [\tilde{a}_w^{\bot},\pi].$
\end{itemize}
\item There is $C_2>0$ such that for all $w\in R_2$,
\begin{itemize}
\item[$\left(\mathrm{a}\right)$] If $\left\|w\right\|\leq 1$, then
$\pi_{\theta}(w)\geq C_2\left\|w\right\| \ \ \text{for} \ \ \theta\in \left[\frac{3}{4}\pi, a_w^{\bot}\right]\cup \left[\tilde{a}_w^{\bot},\frac{7}{4}\pi\right].$
\item[$\left(\mathrm{b}\right)$] If $\left\|w\right\|\geq 1$, then
$\pi_{\theta}(w)\geq C_{2} \ \ \text{for} \ \ \theta\in \left[\frac{3}{4}\pi, a_w^{\bot}\right]\cup \left[\tilde{a}_w^{\bot},\frac{7}{4}\pi\right].$
\end{itemize}
\item There is $C_3>0$ such that for all $w\in R_3$,
\begin{itemize}
\item[$\left(\mathrm{a}\right)$] If $\left\|w\right\|\leq 1$, then
$\pi_{\theta}(w)\geq C_3\left\|w\right\| \ \ \text{for} \ \ \theta\in \left[\frac{5}{4}\pi, a_w^{\bot}\right]\cup \left[\tilde{a}_w^{\bot},\frac{9}{4}\pi\right].$
\item[$\left(\mathrm{b}\right)$] If $\left\|w\right\|\geq 1$, then
$\pi_{\theta}(w)\geq C_{3} \ \ \text{for} \ \ \theta\in \left[\frac{5}{4}\pi, a_w^{\bot}\right]\cup \left[\tilde{a}_w^{\bot},\frac{9}{4}\pi\right].$
\end{itemize}
\end{enumerate}
\end{Le}
\ \\
\noindent We prove part $1$; parts $2$ and $3$ are analogous.
\begin{proof}[\bf{Proof}]
It suffices to prove that there is $C_1>0$ such that for all $w\in R_1$ with $\left\|w\right\|=1$ and all $t\in\left[0,1\right]$ we have
\begin{equation}\label{E6'}
\pi_{\theta}(tw)\geq C_1t \ \ \text{for} \ \ \theta\in [0, a_w^{\bot}]\cup [\tilde{a}_w^{\bot},\pi].
\end{equation}
In fact: by contradiction, suppose that for all $n\in\mathbb{N}$ there are $w_n$ with $\left\|w_n\right\|=1$, $t_n\in [0,1]$ and $\theta_n\in [0, a_{w_n}^{\bot}]\cup [\tilde{a}_{w_{n}}^{\bot},\pi]$ such that $\pi_{\theta_n}(t_nw_n)<\frac{1}{n}t_n$.
We can assume that $w_n\to w$, $\theta_n\to \theta \in [0, a_w^{\bot}]\cup [\tilde{a}_w^{\bot},\pi]$ and $t_n\to t$ when $n\to \infty$.
If $t\neq 0$, then, since $\pi_{\theta}(tw)\geq 0$ for $w\in R_1$ and $\theta\in[0, a_w^{\bot}]\cup [\tilde{a}_w^{\bot},\pi]$, we have
$0\leq \pi_{\theta}(tw)\leq 0$, so $\pi_{\theta}(tw)=0$ and $\theta=\theta_{w}^{\bot}$; this is a contradiction, because $\theta\in [0, a_w^{\bot}]\cup [\tilde{a}_w^{\bot},\pi]$ and $\epsilon$ is fixed.\\
If $t=0$, consider the $C^{1}$-function $F(\theta, t,w)=\pi_{\theta}(tw)$, then
$$0=\lim_{n\to \infty}\frac{F(\theta_n,t_n,w_n)}{t_n}=\lim_{t\to 0}\frac{F(\theta,t,w)}{t},$$
\noindent but by Lemma \ref{L3M} we know that $\displaystyle\lim_{t\to 0}\frac{F(\theta,t,w)}{t}\neq 0$, which contradicts the above; so the claim is proved.\\
\ \\
Now, since $\theta_{w}^{\bot}=\theta_{tw}^{\bot}$ for $t>0$, we have:\\
\textbf{($\mathrm{a}$)} If $\left\|w\right\|\leq1$, then by (\ref{E6'}), $\pi_{\theta}(w)=\pi_{\theta}\left(\left\|w\right\|\dfrac{w}{\left\|w\right\|}\right)\geq C_1\left\|w\right\|$ for $\theta\in [0, a_w^{\bot}]\cup [\tilde{a}_w^{\bot},\pi]$.\\
\ \\
\textbf{($\mathrm{b}$)} Since $\pi_{\theta}(w)\geq \pi_{\theta}(\frac{w}{\left\|w\right\|})$ for $\left\|w\right\|\geq 1$, equation (\ref{E6'}) implies the result.
\end{proof}
\subsection{The Bessel Function Associated to $\pi_{\theta}(w)$}
For $w\in T_{p}\mathbb{R}^{2}$ consider the Bessel function
$$\tilde{J}_{w}(z)=\int_{0}^{2\pi}\cos(z\pi_{\theta}(w))d\theta.$$
Observe that we can consider $\pi_{\theta}(w)$ as a periodic function in $\theta$ of period $2\pi$. Moreover, $\tilde{J}_{w}(z)$ has the following properties:
\begin{enumerate}
\item $\tilde{J}_{w}(z)=\tilde{J}_{w}(-z)$;
\item $\displaystyle\tilde{J}_{w}(z)=\int_{0}^{2\pi}\cos(z\pi_{\theta}(w))d\theta=\int_{t}^{2\pi+t}\cos(z\pi_{\theta}(w))d\theta$ for any $t\in \mathbb{R}$.
\item As $\pi_{\theta+\pi}(\exp_p(w))=-\pi_\theta(\exp_p(w))$, we have
\begin{eqnarray*}
\displaystyle\int^{\pi+t}_{t}\cos(z\pi_{\theta}(w)) d\theta&=&\int^{2\pi+t}_{\pi+t}\cos(z\pi_{\theta-\pi}(w))d\theta=\int^{2\pi+t}_{\pi+t}\cos(-z\pi_{\theta}(w))d\theta\\
&=&\int^{2\pi+t}_{\pi+t}\cos(z\pi_{\theta}(w))d\theta.
\end{eqnarray*}
\end{enumerate}
\noindent Thus,
\begin{equation}\label{E7}
\tilde{J}_w(z)=\displaystyle 2\int^{\pi+t}_{t}\cos(z\pi_\theta(w))d\theta:=2J^{t}_w(z).
\end{equation}
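In the flat case this construction reduces to the classical Bessel function of order zero (a consistency check of ours, not used in the proof): with $\pi_\theta(w)=\left\|w\right\|\cos(arg(w)-\theta)$,

```latex
\tilde{J}_{w}(z)
  \;=\; \int_{0}^{2\pi} \cos\!\big(z\left\|w\right\|\cos(arg(w)-\theta)\big)\, d\theta
  \;=\; 2\pi\, J_{0}\big(z\left\|w\right\|\big),
% and the classical identity \int_0^\infty J_0(t)\,dt = 1 gives, as an
% improper integral,
\int_{-\infty}^{\infty} \tilde{J}_{w}(z)\, dz \;=\; \frac{4\pi}{\left\|w\right\|}
\;<\; \infty ,
```

in agreement with Proposition \ref{P1M}.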
\ \\
\begin{R}\label{R1M}
To fix ideas we consider
\begin{eqnarray*}
t&=&0 \ \ \text{for} \ \ w\in R_1;\\
t&=&\frac{3}{4}\pi \ \ \text{for} \ \ w\in R_2;\\
t&=&\frac{5}{4}\pi \ \ \text{for} \ \ w\in R_3.\\
\end{eqnarray*}
\end{R}
\begin{Pro}\label{P1M} For any $w\in T_{p}\mathbb{R}^2\setminus\{0\}$ we have that
$\displaystyle\int^{\infty}_{-\infty}\tilde{J}_{w}(z)dz<\infty$.
\end{Pro}
\begin{proof}[\bf{Proof}]
We divide the proof in three parts.
\begin{enumerate}
\item If $w\in R_1$, then by Remark \ref{R1M} and equation (\ref{E7}) it suffices to prove the proposition for $J_{w}^{0}(z):=J_{w}(z)$.
\item If $w\in R_2$, then by Remark \ref{R1M} and equation (\ref{E7}) it suffices to prove the proposition for $J_{w}^{{3\pi}/{4}}(z)$.
\item If $w\in R_3$, then by Remark \ref{R1M} and equation (\ref{E7}) it suffices to prove the proposition for $J_{w}^{{5\pi}/{4}}(z)$.
\end{enumerate}
We will prove 1; the proofs of 2 and 3 are analogous. In fact, since $J_{w}(z)=J_{w}(-z)$, we have
$$\displaystyle\int^{\infty}_{-\infty}{J}_{w}(z)dz=2\displaystyle\int^{\infty}_{0}{J}_{w}(z)dz,$$
so the proof reduces to showing that $\displaystyle\int^{\infty}_{0}{J}_{w}(z)dz<\infty.$\\
Let $w\in R_1$ and $x>0$, then
\begin{eqnarray*}
\displaystyle\int^{x}_{0}{J}_w(z)dz&=&\int^{\pi}_{0}\int^{x}_{0}\cos(z\pi_\theta(w))dzd\theta=\int^{\pi}_{0}\frac{\sin(x\pi_\theta(w))}{\pi_\theta(w)}d\theta\\
&=&\int^{\theta_{w}^{\bot}}_{0}\frac{\sin(x\pi_\theta(w))}{\pi_\theta(w)}d\theta+\int^{\pi}_{\theta_{w}^{\bot}}\frac{\sin(x\pi_\theta(w))}{\pi_\theta(w)}d\theta:=I^{1}_{w}(x)+I^{2}_{w}(x).
\end{eqnarray*}
The next step is to estimate $I^{1}_{w}(x)$ and $I^{2}_{w}(x)$.\\
\begin{equation}\label{E8}
\displaystyle I^{1}_{w}(x)=\int^{a_{w}^{\bot}}_{0}\frac{\sin(x\pi_\theta(w))}{\pi_\theta(w)}d\theta+\int^{\theta_{w}^{\bot}}_{a_{w}^{\bot}}\frac{\sin(x\pi_\theta(w))}{\pi_\theta(w)}d\theta,
\end{equation}
where, as above, $a^{\bot}_{w}=\theta_{w}^{\bot}-\frac{\epsilon}{2}$.
\noindent Now, by Lemma \ref{L4M}.1, for $\theta\in [0,a_w^{\bot}]$ we have $\pi_{\theta}(w)\geq C_1\left\|w\right\|$ if $\left\|w\right\|\leq 1$, and $\pi_{\theta}(w)\geq C_1$ if $\left\|w\right\|\geq 1$.\\
\noindent Since $\sin(x\pi_\theta(w))\leq 1$, the first integral on the right side of (\ref{E8}) is bounded in $x$. In fact:
\begin{equation}\label{E9}
\int^{a_{w}^{\bot}}_{0}\frac{\sin(x\pi_\theta(w))}{\pi_\theta(w)}d\theta \leq \left\{ \begin{array}{lll}
\frac{\pi}{C_1\left\|w\right\|}& \mbox{\text{if}\ \ $0<\left\|w\right\|\leq 1$};\\
& \\
\frac{\pi}{C_1}& \mbox{\text{if} \ \ $\left\|w\right\|>1$. \
}\end{array} \right.
\end{equation}
\noindent Now we estimate the second integral on the right side of (\ref{E8}).\\
\ \\
Put $f_w(\theta)=\pi_\theta(w)$; then $f_{w}(\theta_{w}^{\bot})=0$ and $f_{w}(\theta)>0$ for $\theta<\theta_{w}^{\bot}$. Moreover, recall that by Lemma \ref{L2M} we have $\frac{\partial \pi}{\partial \theta}(\theta_{w}^{\bot},w)=-\left\|w\right\|\neq 0$, so $f_w'(\theta_{w}^{\bot})\neq 0$. Put $s=f_{w}(\theta)$. Thus,
\begin{equation}\label{E10}
\displaystyle\int^{\theta_{w}^{\bot}}_{a_w^{\bot}}\frac{\sin(x\pi_\theta(w))}{\pi_\theta(w)}d\theta=-\int^{f_{w}(a_w^{\bot})}_{0}\frac{\sin \left(xs\right)}{sf'_{w}(f_w^{-1}(s))}ds=
-\int^{f_w(a_{w}^{\bot})}_{0}\frac{\sin \left(xs\right)}{sg_w(s)}ds,
\end{equation}
where $g_w(s)=f'_w(f_w^{-1}(s))$ is $C^\infty$.\\
\ \\
Now by definition of $s$, if $s\in [0,f_{w}(a_{w}^{\bot})]$, then $f_{w}^{-1}(s)\in [a_w^{\bot},\theta_{w}^{\bot}]$. Thus by Lemma \ref{L2M} we have
\begin{equation}\label{E11}
-\left\|w\right\|\leq g_w(s)\leq-\frac{1}{2}\left\|w\right\| \ \ \text{for all} \ \ s\in[0,f_{w}(a_{w}^{\bot})].
\end{equation}
For large $x$,
\begin{center}
$\displaystyle-\int^{\frac{2\pi}{x}}_{0}\frac{\sin \left(xs\right)}{sg_w(s)}ds=-\int^{\frac{\pi}{x}}_{0}\frac{\sin \left(xs\right)}{sg_w(s)}ds-\int^{2\pi/x}_{\pi/x}\frac{\sin \left(xs\right)}{sg_w(s)}ds.$
\end{center}
Since $\sin\left(xs\right)\geq0$ in $\left[0,\frac{\pi}{x}\right]$, then
\begin{equation}\label{E12}
\displaystyle-\int^{\frac{\pi}{x}}_{0}\frac{\sin \left(xs\right)}{sg_w(s)}ds\leq\frac{2}{\left\|w\right\|}\int^{\frac{\pi}{x}}_{0}\frac{\sin \left(xs\right)}{s}ds=\frac{2}{\left\|w\right\|}\int^{\pi}_{0}\frac{\sin y}{y}dy.
\end{equation}
\noindent Likewise, $-\sin\left(xs\right)\geq0$ for $s\in\left[\frac{\pi}{x},\frac{2\pi}{x}\right]$, so $\displaystyle-\int^{2\pi/x}_{\pi/x}\frac{\sin\left(xs\right)}{sg_w(s)}ds\leq 0$. So, by (\ref{E12}),
\begin{equation}\label{E13}
\displaystyle-\int^{\frac{2\pi}{x}}_{0}\frac{\sin \left(xs\right)}{sg_w(s)}ds\leq \frac{2}{\left\|w\right\|}\int^{\pi}_{0}\frac{\sin y}{y}dy.
\end{equation}
\ \\
Let $n\in \mathbb{N}$ be such that $\displaystyle n\leq \frac{xf_w(a_w^{\bot})}{2\pi}\leq n+1$; then
\begin{center}
$\displaystyle\int^{f_w(a_w^{\bot})}_{0}\frac{\sin \left(xs\right)}{sg_w(s)}ds=\int^{\frac{2\pi}{x}}_{0}\frac{\sin \left(xs\right)}{sg_w(s)}ds+\sum^{n-1}_{k=1}\int^{\frac{2\pi(k+1)}{x}}_{\frac{2\pi k}{x}}\frac{\sin \left(xs\right)}{sg_w(s)}ds+\int^{f_w(a_w^{\bot})}_{\frac{2\pi n}{x}}\frac{\sin \left(xs\right)}{sg_w(s)}ds.$
\end{center}
If $\frac{2\pi n}{x}\leq f_w(a_w^{\bot})\leq \frac{\pi(2n+1)}{x}$, then $\sin\left(xs\right)\geq 0$ and by Lemma \ref{L2M}, we have
\ \\
$$\displaystyle \frac{\sin\left(xs\right)}{s\left\|w\right\|}\leq -\frac{\sin \left(xs\right)}{sg_w(s)}\leq \frac{2\sin\left(xs\right)}{s\left\|w\right\|} \ \ \text{and} \ \ \displaystyle \frac{2\sin\left(xs\right)}{s\left\|w\right\|} \leq \frac{2x \sin \left(xs\right)}{\left\|w\right\|2\pi n}.$$
This implies
\begin{eqnarray*}
\displaystyle-\int^{f_w(a_w^{\bot})}_{\frac{2\pi n }{x}}\frac{\sin \left(xs\right)}{sg_w(s)}ds&\leq& \int^{f_w(a_w^{\bot})}_{\frac{2\pi n}{x}}\frac{x \sin \left(xs\right)}{\left\|w\right\|\pi n}ds\leq\frac{x}{\left\|w\right\|\pi n}\int^{f_w(a_w^{\bot})}_{\frac{2\pi n}{x}}{\sin \left(xs\right)}ds\\
&\leq& \frac{x}{\left\|w\right\| \pi n}\left(f_w\left(a_w^{\bot}\right)-\frac{2\pi n}{x}\right)=\frac{2}{\left\|w\right\|}\left(\frac{xf_w(a_w^{\bot})}{2\pi n}-1\right) \\
&\leq& \frac{2}{\left\|w\right\|}\left(\frac{2\pi(n+1)}{2\pi n}-1\right)
=\frac{2}{\left\|w\right\|}\frac{1}{n}.
\end{eqnarray*}
In the case that $f_w(a_{w}^{\bot})\geq \frac{\pi(2n+1)}{x}$, then $\displaystyle-\int^{f_w(a_{w}^{\bot})}_{\frac{\pi(2n+1)}{x}}\frac{\sin \left(xs\right)}{sg_w(s)}ds\leq 0$, so
\begin{eqnarray*}
\displaystyle-\int^{f_w(a_{w}^{\bot})}_{\frac{2\pi n}{x}}\frac{\sin \left(xs\right)}{sg_w(s)}ds&\leq& -\int^{\frac{\pi(2n+1)}{x}}_{\frac{2\pi n}{x}}\frac{\sin \left(xs\right)}{sg_w(s)}ds\leq \frac{x}{\left\|w\right\|\pi n }\left(\frac{\pi(2n+1)}{x}-\frac{2\pi n}{x}\right)\\
&=&\frac{1}{\left\|w\right\|}\frac{1}{n}.
\end{eqnarray*}
In any case, we have
\begin{equation}\label{E14}
\displaystyle-\int^{f_w(a_{w}^{\bot})}_{\frac{2\pi n}{x}}\frac{\sin \left(xs\right)}{sg_w(s)}ds \leq \frac{2}{\left\|w\right\|}\frac{1}{n}.
\end{equation}
\ \\
Now we only need to estimate $\displaystyle\sum^{n-1}_{k=1}\int^{\frac{2\pi(k+1)}{x}}_{\frac{2\pi k}{x}}\frac{\sin \left(xs\right)}{sg_w(s)}ds$.\\
Put $s_0=\frac{2\pi k}{x}$, then
\begin{center}
$\displaystyle\int^{\frac{2\pi (k+1)}{x}}_{\frac{2\pi k}{x}}\frac{\sin\left(xs\right)}{sg_w(s)}ds=\int^{\frac{2\pi (k+1)}{x}}_{\frac{2\pi k}{x}}\frac{\sin\left(xs\right)}{s_0g_w(s_0)}ds+\int^{\frac{2\pi (k+1)}{x}}_{\frac{2\pi k}{x}} \sin \left(xs\right) \left(\frac{1}{sg_w(s)}-\frac{1}{s_0g_w(s_0)}\right)ds.$
\end{center}
\ \\
The first integral on the right-hand side of the above equality is zero, since $\sin(xs)$ integrates to zero over a full period.\\
\ \\
Now we estimate the second integral on the right side of the above equation.\\
\ \\
By Lemma \ref{L2M} we have $g_w(s)g_w(s_0)>\dfrac{\left\|w\right\|^{2}}{4}$, also $ss_0\geq \left(\frac{2\pi k}{x}\right)^2$.\\
Thus, $\displaystyle \frac{1}{ss_0g_w(s)g_w(s_0)}<\frac{1}{\left\|w\right\|^2}\frac{x^2}{\pi^2k^2}$. Moreover,
\ \\
\begin{eqnarray*}
\left|s_0g_w(s_0)-sg_w(s)\right|&=&\left|(s_0-s)g_w(s_0)+s(g_w(s_0)-g_w(s))\right|\\
&\leq&\left|s_0-s\right|\left|g_w(s_0)\right|+s\left|g_w(s)-g_w(s_0)\right|\\
&\leq& \left|s-s_0\right|\left(\left|g_w(s_0)\right|+s \sup_{s\in [0,f_w(a_{w}^{\bot})]}\left|g'_w(s)\right|\right) \ \ \downarrow \ \ \text{by Lemma \ref{L2M}} \ \ \\
&\leq& \frac{2\pi}{x}\left(\left\|w\right\|+\frac{2\pi(k+1)}{x}\left\|w\right\|\right)\\
&\leq& \frac{2\pi}{x}\left\|w\right\|\left(1+\frac{2\pi n}{x}\right)\\
&\leq& \frac{2\pi}{x}\left\|w\right\|\left(1+f_{w}(a_{w}^{\bot})\right) \\
&\leq& \frac{2\pi}{x}\left\|w\right\|\left(1+\left\|w\right\|\right)
\end{eqnarray*}
as, $f_{w}(a_{w}^{\bot})\leq \left\|w\right\|$.
Therefore,
\begin{center}
$\displaystyle \left|\frac{1}{sg_w(s)}-\frac{1}{s_0g_w(s_0)}\right|=\left|\frac{sg_w(s)-s_0g_w(s_0)}{ss_0g_w(s)g_w(s_0)}\right|\leq \frac{2\left(1+\left\|w\right\|\right)}{\pi\left\|w\right\|}\left(\frac{x}{k^2}\right).$
\end{center}
Then,
\begin{center}
$\displaystyle\left|\int^{\frac{2\pi (k+1)}{x}}_{\frac{2\pi k}{x}}\frac{\sin\left(xs\right)}{sg_w(s)}ds\right|\leq \frac{2\left(1+\left\|w\right\|\right)}{\pi\left\|w\right\|}\left(\frac{x}{k^2}\right)\left(\frac{2\pi(k+1)}{x}-\frac{2\pi k}{x}\right)=\frac{4\left(1+\left\|w\right\|\right)}{\left\|w\right\|}\left(\frac{1}{k^2}\right)$.
\end{center}
Therefore,
\begin{equation}\label{E15}
\displaystyle\left|\sum_{k=1}^{n-1}\int^{\frac{2\pi (k+1)}{x}}_{\frac{2\pi k}{x}}\frac{\sin\left(xs\right)}{sg_w(s)}ds\right|\leq \sum_{k=1}^{n-1}\left|\int^{\frac{2\pi (k+1)}{x}}_{\frac{2\pi k}{x}}\frac{\sin\left(xs\right)}{sg_w(s)}ds\right|\leq {A(\left\|w\right\|)}\sum_{k=1}^{n-1}\frac{1}{k^2},
\end{equation}
where $A(\left\|w\right\|)=\dfrac{4\left(1+\left\|w\right\|\right)}{\left\|w\right\|}$.\\
\ \\
Put $a:=\sum^{\infty}_{k=1}\frac{1}{k^2}<\infty$ and $b:=\int^{\pi}_{0}\frac{\sin y}{y}\,dy$; then equations (\ref{E13}), (\ref{E14}), and (\ref{E15}) imply
\begin{equation}\label{E16}
-\int^{f_w(a_{w}^{\bot})}_{0}\frac{\sin(xs)}{sg_w(s)}ds\leq \frac{2}{\left\|w\right\|}b+\frac{2}{\left\|w\right\|}\frac{1}{n}+A(\left\|w\right\|)a.
\end{equation}
\noindent Thus, by equations (\ref{E8}), (\ref{E9}), (\ref{E10}), and (\ref{E16}) we have
\begin{equation}\label{E17}
I^{1}_{w}(x) \leq \left\{ \begin{array}{lll}
\displaystyle\frac{\pi}{C_1\left\|w\right\|}+\frac{2}{\left\|w\right\|}b+\frac{2}{\left\|w\right\|}\frac{1}{n}+A(\left\|w\right\|)a & \mbox{\text{if}\ \ $0<\left\|w\right\|\leq 1$};\\
& \\
\displaystyle\frac{\pi}{C_1}+\frac{2}{\left\|w\right\|}b+\frac{2}{\left\|w\right\|}\frac{1}{n}+A(\left\|w\right\|)a& \mbox{\text{if} \ \ $\left\|w\right\|>1$. \
}\end{array} \right.
\end{equation}
\noindent Completely analogously, using $\tilde{a}_{w}^{\bot}$ instead of $a_{w}^{\bot}$ and taking $n'$ such that
$-\frac{\pi(2n'+1)}{x}\leq f_w(\tilde{a}_w^{\bot})\leq -\frac{2\pi n'}{x}$, we obtain
\begin{equation}\label{E18}
I^{2}_{w}(x) \leq \left\{ \begin{array}{lll}
\displaystyle\frac{\pi}{C_1\left\|w\right\|}+\frac{2}{\left\|w\right\|}b+\frac{2}{\left\|w\right\|}\frac{1}{n'}+A(\left\|w\right\|)a & \mbox{\text{if}\ \ $0<\left\|w\right\|\leq 1$};\\
& \\
\displaystyle\frac{\pi}{C_1}+\frac{2}{\left\|w\right\|}b+\frac{2}{\left\|w\right\|}\frac{1}{n'}+A(\left\|w\right\|)a& \mbox{\text{if} \ \ $\left\|w\right\|>1$. \
}\end{array} \right.
\end{equation}
Since $n,n'\to \infty$ as $x\to\infty$, (\ref{E17}) and (\ref{E18}) imply
\begin{equation}\label{E19}
\int^{\infty}_{0}J_w(z)dz\leq \left\{ \begin{array}{lll}
\displaystyle\frac{2\pi}{C_1\left\|w\right\|}+\frac{4}{\left\|w\right\|}b+2A(\left\|w\right\|)a & \mbox{\text{if}\ \ $0<\left\|w\right\|\leq 1$};\\
& \\
\displaystyle\frac{2\pi}{C_1}+\frac{4}{\left\|w\right\|}b+2A(\left\|w\right\|)a& \mbox{\text{if} \ \ $\left\|w\right\|>1$. \
}\end{array} \right.
\end{equation}
\noindent Thus, we conclude the proof of Proposition \ref{P1M}.
\end{proof}
\noindent Put $j_1=0$, $j_2=\dfrac{3\pi}{4}$ and $j_3=\dfrac{5\pi}{4}$, then it is also easy to see that for $w\in R_i$,
\begin{equation}\label{E20}
\int^{\infty}_{0}J^{j_i}_w(z)dz\leq \left\{ \begin{array}{lll}
\displaystyle\frac{2\pi}{C_i\left\|w\right\|}+\frac{4}{\left\|w\right\|}b+2A(\left\|w\right\|)a & \mbox{\text{if}\ \ $0<\left\|w\right\|\leq 1$};\\
& \\
\displaystyle\frac{2\pi}{C_i}+\frac{4}{\left\|w\right\|}b+2A(\left\|w\right\|)a& \mbox{\text{if} \ \ $\left\|w\right\|>1$, \
}\end{array} \right.
\end{equation}
$i=1,2,3$, where $C_i$ are given in Lemma \ref{L4M}.
\section{Proof of the Main Theorem.}
\noindent As in Kaufman's proof of Marstrand's theorem (cf.\ \cite{Kaufman}), we use potential theory.\\
Put $d=HD(K)>1$, assume that $0<m_d(K)<\infty$ and that for some $C>0$ we have
\begin{center}
$m_d(K\cap B_r(x))\leq Cr^d$
\end{center}
for $x\in \mathbb{R}^2$ and $0<r\leq 1$ (cf. \cite{Falconer}). Let $\mu$ be the finite measure on $\mathbb{R}^2$ defined by $\mu(A)=m_d(K \cap A)$, for $A$ a measurable subset of $\mathbb{R}^2$. For $-\frac{\pi}{2}<\theta<\frac{\pi}{2}$, let us denote by $\mu_\theta$ the (unique) measure on $\mathbb{R}$ such that $\int {fd\mu_\theta}=\int{(f\circ\pi_\theta)}d\mu$ for every continuous function $f$. The theorem will follow if we show that the support of $\mu_\theta$ has positive Lebesgue measure for almost all $\theta\in(-\frac{\pi}{2},\frac{\pi}{2})$, since this support is clearly contained in $\pi_\theta(K)$. To do this we use the following fact.\\
\begin{Le}\label{L5M}$\left(\emph{cf. \cite[pg. 65]{PT}}\right)$
Let $\eta$ be a finite measure with compact support on $\mathbb{R}$ and $$\hat{\eta}(p)=\frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}e^{-ixp}d\eta(x),$$ for $p\in\mathbb{R}$ $\left(\hat{\eta} \ \ \text{is the Fourier transform of} \ \ \eta\right)$. If $0<\int^{\infty}_{-\infty}|\hat{\eta}(p)|^2dp<\infty$ then the support of $\eta$ has positive Lebesgue measure.
\end{Le}
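Lemma \ref{L5M} rests on a Plancherel-type energy identity for the Fourier transform. A minimal discrete analogue (the DFT identity $\sum_n|x_n|^2=\frac{1}{N}\sum_k|\hat{x}_k|^2$; the sample data below are ours) can be checked as follows:

```python
import cmath

# Discrete analogue of the Plancherel identity behind Lemma L5M:
# for the DFT, sum(|x_n|^2) equals (1/N) * sum(|X_k|^2).
def dft(xs):
    N = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, x in enumerate(xs)) for k in range(N)]

xs = [0.3, -1.2, 0.5, 2.0, -0.7, 0.1]      # arbitrary sample data
Xs = dft(xs)
energy_time = sum(abs(x) ** 2 for x in xs)
energy_freq = sum(abs(X) ** 2 for X in Xs) / len(xs)
err = abs(energy_time - energy_freq)
```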
\begin{proof}[\bf{Proof of the Main Theorem}] \ \\
We now show that, for almost any $\theta\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, $\hat{\mu}_\theta$ is square-integrable. From the definitions we have
\begin{center}
$|\hat{\mu}_\theta(p)|^2=\displaystyle\frac{1}{2\pi}\int \int e^{i(y-x)p}d\mu_\theta(x)d\mu_\theta(y)=\frac{1}{2\pi}\int\int e^{ip(\pi_\theta(v)-\pi_\theta(u))}d\mu(u)d\mu(v)$
\end{center}
Since $\pi_{\theta+\pi}(u)=-\pi_\theta(u)$, we obtain
\begin{center}
$|\hat{\mu}_\theta(p)|^2+|\hat{\mu}_{\theta+\pi}(p)|^2=\displaystyle \frac{1}{\pi}\int\int \cos(p(\pi_\theta(v)-\pi_\theta(u)))d\mu(u)d\mu(v)$.
\end{center}
Hence
\begin{eqnarray*}
\displaystyle\int^{2\pi}_{0}|\hat{\mu}_\theta(p)|^2d\theta&=&\frac{1}{2\pi}\int^{2\pi}_{0}\int\int \cos(p(\pi_\theta(v)-\pi_\theta(u)))d\mu(u)d\mu(v)d\theta\\
&=&\frac{1}{2\pi}\int\int\left(\int^{2\pi}_{0} \cos(p(\pi_\theta(v)-\pi_\theta(u)))d\theta \right)d\mu(u)d\mu(v).
\end{eqnarray*}
Observe now that for all $x>0$ and for all $u,v$ there exist $L\in \mathbb{N}$ and $w(u,v)$ such that
\begin{center}
$\displaystyle\int^{x}_{0}\int^{2\pi}_{0}\cos(p(\pi_{\theta}(u)-\pi_{\theta}(v)))\,d\theta\, dp
\leq L \left|\int^{x}_{0}\int^{2\pi}_{0}\cos(p\,\pi_{\theta}(w(u,v)))\,d\theta\, dp\right|$,
\end{center}
where $w(u,v)$ can be taken such that $\left\|w(u,v)\right\|=d(u,v)$.
So, we have for $x>0$
\begin{center}
$\displaystyle \int^{x}_{-x}\int^{2\pi}_{0}|\hat{\mu}_\theta(p)|^2d\theta dp \leq \frac{2L}{2\pi}\int\int\left|\int^{x}_{0}\tilde{J}_{w(u,v)}(p)dp\right|d\mu(u)d\mu(v)$.
\end{center}
It follows that
\begin{equation*}
\displaystyle \frac{\pi}{L}\int^{\infty}_{-\infty}\int^{2\pi}_{0}|\hat{\mu}_\theta(p)|^2d\theta dp\leq\int\int\left|\int^{\infty}_{0}\tilde{J}_{w(u,v)}(p)dp\right|d\mu(u)d\mu(v)=
\end{equation*}
\begin{equation*}
=\int\int_{\{\left\|w\right\|>1\}}\left|\int^{\infty}_{0}\tilde{J}_{w(u,v)}(p)dp\right|d\mu(u)d\mu(v)+\int \int_{\left\{\left\|w\right\|\leq1\right\}}\left|\int^{\infty}_{0}\tilde{J}_{w(u,v)}(p)dp\right|d\mu(u)d\mu(v)\\
\end{equation*}
\begin{equation}\label{E21}
=:I+II.
\end{equation}
By (\ref{E7}) and Remark \ref{R1M}
\begin{eqnarray*}
I&=&{\int\int}_{\left\{\left\|w\right\|>1\right\}}\left|\int^{\infty}_{0}\tilde{J}_{w}(p)dp\right|d\mu(u)d\mu(v)=\sum_{i=1}^{3} \int\int_{\{\left\|w\right\|>1\}\cap R_i}\left|\int^{\infty}_{0}\tilde{J}_{w}(p)dp\right|d\mu(u)d\mu(v)\\
&=&2\sum_{i=1}^{3} \int\int_{\{\left\|w\right\|>1\}\cap R_i}\left|\int^{\infty}_{0}{J}^{j_i}_{w}(p)dp\right|d\mu(u)d\mu(v).
\end{eqnarray*}
Now by (\ref{E19}) and (\ref{E20}), we have
\begin{eqnarray*}
I\leq 2\sum_{i=1}^{3}\displaystyle\int\int_{\{\left\|w\right\|>1\}\cap R_i}\left(\frac{2\pi}{C_i}+\frac{4}{\left\|w\right\|}b+2A(\left\|w\right\|)a\right)d\mu(u)d\mu(v).
\end{eqnarray*}
If $\left\|w\right\|>1$, then $\frac{1}{\left\|w\right\|}<1$ and $A(\left\|w\right\|)=\frac{4(1+\left\|w\right\|)}{\left\|w\right\|}<8$; moreover, since the support of the measure $\mu\times \mu$ is contained in the compact set $K \times K$, we obtain
\begin{equation}\label{E22}
I\leq 6\left(2\pi\max\left\{\frac{1}{C_i}\right\}+4b+16a\right)\mu(K)^2.
\end{equation}
We now estimate $II$, in fact: By (\ref{E7}) and Remark \ref{R1M},
\begin{eqnarray*}
II&=&\int \int_{\left\{\left\|w\right\|\leq1\right\}}\left|\int^{\infty}_{0}\tilde{J}_{w}(p)dp\right|d\mu(u)d\mu(v)=\sum_{i=1}^{3}\int \int_{\left\{\left\|w\right\|\leq1\right\}\cap R_i}\left|\int^{\infty}_{0}\tilde{J}_{w}(p)dp\right|d\mu(u)d\mu(v)\\
&=&2\sum_{i=1}^{3} \int\int_{\{\left\|w\right\|\leq1\}\cap R_i}\left|\int^{\infty}_{0}{J}^{j_i}_{w}(p)dp\right|d\mu(u)d\mu(v).
\end{eqnarray*}
Now by (\ref{E19}) and (\ref{E20}), we have
\begin{eqnarray}\label{E23}
II&\leq& 2\sum_{i=1}^{3} \int\int_{\{\left\|w\right\|\leq1\}\cap R_i}\left(\frac{2\pi}{C_i\left\|w\right\|}+\frac{4}{\left\|w\right\|}b+2A(\left\|w\right\|)a\right)d\mu(u)d\mu(v) \nonumber \\
&\leq&6\int\int_{\{\left\|w\right\|\leq1\}}\left(\left(\max\left\{\frac{2\pi}{C_i}\right\}+4b+8a\right)\frac{1}{\left\|w\right\|} +8a\right)d\mu(u)d\mu(v).
\end{eqnarray}
\noindent Recall that $\left\|w(u,v)\right\|=d(u,v)$; then
\begin{center}
$\displaystyle\int\int_{\left\{\left\|w\right\|\leq 1\right\}}\frac{1}{\left\|w\right\|}d\mu(u)d\mu(v)=\int\int_{\left\{d(u,v)\leq 1\right\}}\frac{1}{d(u,v)}d\mu(u)d\mu(v)$.
\end{center}
Now, for some $0<\beta<1$
\begin{eqnarray*}
\displaystyle\int_{\left\{d(u,v)\leq 1\right\}}\frac{1}{d(u,v)}d\mu(v)&=&\sum^{\infty}_{n=1}\int_{{\beta}^n\leq d(u,v)\leq {\beta}^{n-1}}\frac{d\mu(v)}{d(u,v)}\leq \sum^{\infty}_{n=1}{\beta}^{-n}\mu(B_{{\beta}^{n-1}}(u))\\
&\leq& C\sum^{\infty}_{n=1}{\beta}^{-n}({\beta}^{n-1})^d \ \ \ \\
&\leq& C\displaystyle\sum^{\infty}_{n=1}{\beta}^{-d}({\beta}^{d-1})^n \ \text{with} \ \ d>1\\
&=&C{\beta}^{-d}\left(\frac{1}{1-{\beta}^{d-1}}-1\right)=\frac{C}{\beta-{\beta}^{d}}.
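The geometric-series evaluation above can be checked numerically. The following sketch (the function name and the sample values $\beta=0.5$, $d=1.5$ are ours) compares a partial sum of $\sum_{n\geq 1}{\beta}^{-n}({\beta}^{n-1})^{d}$ with the closed form $\frac{1}{\beta-\beta^{d}}$ (taking $C=1$):

```python
# Partial sum of the series above versus its closed form 1/(beta - beta**d),
# with C = 1; the series converges since 0 < beta < 1 and d > 1 give a
# common ratio beta**(d-1) < 1.
def partial_sum(beta, d, N=500):
    return sum(beta ** (-n) * beta ** ((n - 1) * d) for n in range(1, N + 1))

beta, d = 0.5, 1.5
closed = 1.0 / (beta - beta ** d)
err = abs(partial_sum(beta, d) - closed)
```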
\end{eqnarray*}
\noindent Therefore,
\begin{center}
$\displaystyle\int\int_{\left\{\left\|w\right\|\leq 1\right\}}\frac{1}{\left\|w\right\|}d\mu(u)d\mu(v)\leq \mu({\mathbb{R}^2})\frac{C}{\beta-{\beta}^d}$.
\end{center}
\noindent Also, $\displaystyle\int\int_{\left\{\left\|w\right\|\leq 1\right\}} 8a\, d\mu(u)d\mu(v)\leq 8a{\mu(K)}^2<\infty$.\\
Using these last two inequalities together with (\ref{E23}), we obtain
\begin{equation}\label{E24}
II\leq 6\left(\left(\max\left\{\frac{2\pi}{C_i}\right\}+4b+8a\right)\frac{C\,\mu(K)}{\beta-{\beta}^{d}} +8a\,\mu(K)^{2}\right).
\end{equation}
\noindent Using Fubini's theorem together with equations (\ref{E21}), (\ref{E22}) and (\ref{E24}), we have
\begin{center}
$\displaystyle \frac{\pi}{L}\int^{2\pi}_{0}\int^{\infty}_{-\infty}|\hat{\mu}_\theta(p)|^2dpd\theta \leq I+II \leq 6\left(2\pi\max\left\{\frac{1}{C_i}\right\}+4b+16a\right)\mu(K)^2+6\left(\left(\max\left\{\frac{2\pi}{C_i}\right\}+4b+8a\right)\frac{C\,\mu(K)}{\beta-{\beta}^{d}} +8a\,\mu(K)^{2}\right)<\infty$.
\end{center}
Therefore, $\displaystyle\int^{\infty}_{-\infty}|\hat{\mu}_\theta(p)|^2dp<\infty$ for almost all $\theta \in \left(-\frac{\pi}{2},\frac{\pi}{2}\right)$.\\
If there existed $\theta \in \left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ such that $\displaystyle\int^{\infty}_{-\infty}|\hat{\mu}_\theta(p)|^2dp=0$, then $\displaystyle\int^{\infty}_{-\infty}\left|\varphi(x)\right|^2dx=\int^{\infty}_{-\infty}|\hat{\mu}_\theta(p)|^2dp=0$, where $\varphi(x)=\displaystyle\frac{1}{\sqrt{2\pi}}\int^{\infty}_{-\infty}e^{ixp}\hat{\mu}_{\theta}(p)dp$. This implies that $\varphi\equiv 0$ almost everywhere; but $d\mu_{\theta}=\varphi\, dx$, so $\mu_{\theta}(\mathbb{R})=\int^{\infty}_{-\infty}\varphi(x)dx=0$, and hence $\mu(\mathbb{R}^2)=0$, which contradicts the fact that the Hausdorff $d$-measure of $K$ is positive.\\
\ \\
The result then follows from Lemma \ref{L5M} in the case $0<m_d(K)<\infty$. \\
\ \\
In the general case, we take $K'\subset K$ with $0<m_{d'}(K')<\infty$ for some $1<d'<d$ (cf. \cite{Falconer}). Then, by the same argument, $\pi_{\theta}(K')$ has positive measure for almost all $\theta$, and since $\pi_{\theta}(K')\subset \pi_{\theta}(K)$, the same is true for $\pi_{\theta}(K)$.
\end{proof}
\newpage
\section*{Acknowledgments}
The author is thankful to IMPA for the excellent environment during the preparation of this manuscript. The author is also grateful to Carlos Gustavo Moreira for carefully reading a preliminary version of this work and for his comments on it. This work was financially supported by CNPq-Brazil, Capes, and the Palis Balzan Prize.
\bibliographystyle{alpha}
% arXiv:1402.5133 (math.DG), "The Marstrand Theorem in Nonpositive Curvature", https://arxiv.org/abs/1402.5133
% https://arxiv.org/abs/1602.06135
\title{Quantifying noisy attractors: from heteroclinic to excitable networks}
\begin{abstract}
Attractors of dynamical systems may be networks in phase space that can be heteroclinic (where there are dynamical connections between simple invariant sets) or excitable (where a perturbation threshold needs to be crossed to a dynamical connection between ``nodes''). Such network attractors can display a high degree of sensitivity to noise both in terms of the regions of phase space visited and in terms of the sequence of transitions around the network. The two types of network are intimately related---one can directly bifurcate to the other. In this paper we attempt to quantify the effect of additive noise on such network attractors. Noise increases the average rate at which the networks are explored, and can result in ``macroscopic'' random motion around the network. We perform an asymptotic analysis of local behaviour of an escape model near heteroclinic/excitable nodes in the limit of noise $\eta\rightarrow 0^+$ as a model for the mean residence time $T$ near equilibria. We also explore transition probabilities between nodes of the network in the presence of anisotropic noise. For low levels of noise, numerical results suggest that a (heteroclinic or excitable) network can approximately realise any set of transition probabilities and any sufficiently large mean residence times at the given nodes. We show that this can be well modelled in our example network by multiple independent escape processes, where the direction of first escape determines the transition. This suggests that it is feasible to design noisy network attractors with arbitrary Markov transition probabilities and residence times.
\end{abstract}
\section{Introduction}
\label{sec:intro}
It is well known that noise can play a fundamental role in modifying the qualitative behaviour of a dynamical system. This is especially the case for what we term ``network attractors'' that include a number of invariant sets connected in some dynamical way. In this paper we consider the effect of noise on two related types of network attractor: heteroclinic networks (equilibria connected by heteroclinic orbits) and excitable networks (equilibria connected by orbits that start within some distance of the starting equilibrium). As noted in previous work~\cite{AshOroWorTow07,AshPos15}, a bifurcation of the equilibria in a symmetric system may lead to a transition from heteroclinic to excitable attractor.
For heteroclinic cycle attractors, it is well known that addition of noise can cause a non-ergodic attractor to become an approximately periodic ``noisy'' limit cycle \cite{StoHol90,StoArm99}. For excitable systems, the creative properties of noise in a potential landscape have been well studied in the literature on stochastic resonance \cite{Linder_etal_2004,Benzi81} where Kramers' law for escape times near a stable equilibrium coupled with global reconnection can lead to approximately periodic behaviour. Both of these effects can be thought of as a regularizing effect of adding noise.
There is another noise-induced effect that seems to have received less attention (notable exceptions being \cite{ArmStoKir03,Bakhtinb}). If there is more than one outgoing direction for a connection (either heteroclinic, or excitable) from an equilibrium then it is not immediately clear which connection will be followed by the trajectory. On the one hand, there may be one preferred direction corresponding to the most unstable eigenvalue (in the case of a heteroclinic connection) or the shallowest potential saddle (in the case of an excitable connection). On the other hand, if the noise is anisotropic then variations in noise amplitudes in different directions can make one direction preferred over another. In fact, the connection chosen will be the result of a competition between the noise and dynamical processes for a number of possible outcomes. This results in a macroscopically observable randomness in the dynamics, where the noise forces dynamical behaviour to explore the network in a random manner.
The first aim of this paper is to present a qualitative exploration of the effect of additive noise on network attractors. We characterize the scaling of mean residence times near equilibria for both the heteroclinic and excitable cases, using a mixture of asymptotic analysis of a simplified problem and numerical examples. We also study how the noise determines the transition probabilities from a given node.
The second aim of this paper is to present a design principle for noisy networks. For a given (but arbitrary) set of mean residence times and transition probabilities, we argue it is possible to find a network attractor that is well-modelled by a first order Markov process where the mean residence times and transition probabilities are as desired. As previously shown \cite{AshPos13}, in the small noise limit, motion around a noisy network may be modelled as a one-step Markov chain as long as the local values of the eigenvalues do not cause ``lift-off'' and longer time correlations in the trajectory \cite{ArmStoKir03,Bakhtin,Bakhtinb}.
The paper is structured as follows: In Section~\ref{sec:3graph} we give two illustrative examples, where low amplitude noise added to a dynamical system that realises a ``network attractor'' gives rise to a random walk around the ``noisy network''. Section~\ref{sec:general} introduces network attractors for deterministic and noisy systems in general terms, along the lines of \cite{AshPos15}. We quantify the trajectory in terms of random variables for the residence times and transitions between network nodes. The means of these random variables give the mean residence time and the transition probabilities. In section~\ref{subsec:connecting} we give some general hypotheses on the nature of noisy network attractors and, assuming these hypotheses, we conjecture that any set of transition probabilities and sufficiently long mean residence times may be approximately realised by appropriate choice of noise amplitudes.
Section~\ref{sec:residence} models the mean residence times at each node by considering escape from a region near an equilibrium for the case where there is a connection in only one dimension. We find low-noise asymptotic scalings of the mean residence time on both sides of, as well as at the bifurcation between, heteroclinic and excitable connections. These scalings are verified and illustrated in Section~\ref{sec:onedimsims} using numerical simulations for a one dimensional SDE where there is transition from heteroclinic to excitable connection on changing a parameter.
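To give a flavour of the one-dimensional escape computations involved, the following sketch (under our own assumptions, not the system studied below: the drift $f(x)=x(x-0.5)$, the exit level, the step size and the trial counts are all illustrative) uses an Euler--Maruyama scheme to estimate a mean escape time past an excitability threshold:

```python
import math, random

# Euler-Maruyama estimate of the mean escape time past x_exit for
# dx = f(x) dt + eta dw with the illustrative drift f(x) = x(x - 0.5):
# x = 0 is a stable equilibrium and x = 0.5 an excitability threshold.
def mean_escape_time(eta, trials=100, dt=0.01, x_exit=1.0, t_max=2000.0, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, t = 0.0, 0.0
        while x < x_exit and t < t_max:
            x += x * (x - 0.5) * dt + eta * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / trials

# smaller noise => longer mean residence near the excitable equilibrium
m_small, m_large = mean_escape_time(0.15), mean_escape_time(0.3)
```

Consistent with Kramers-type scaling, reducing the noise amplitude sharply increases the estimated mean escape time.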
Section~\ref{sec:switching} examines transition probabilities on a network. We consider a system with a simple (but fully nonlinear) noisy network attractor, adapting an example from our previous work~\cite{AshPos15}. For this example, we show that one can design the transition probabilities and mean residence times for a noisy network attractor by specifying the amplitudes of additive noise within the system. Details of the construction are included in Appendix~\ref{app:bidirthree}.
Although the general problem of relating the transition probabilities to the noise amplitudes seems to be difficult, it seems that the switching can be well-modelled as a competition between two independent escape processes, and we investigate this in Section~\ref{sec:multiplesc}. In the case where the escape distribution is close to exponential we show that one can approximate the transition probability simply from the mean escape times. More generally the transition probability is determined by the distributions of escape times, not just their means.
Finally Section~\ref{sec:discuss} gives a discussion of some implications, possible areas of application, and open questions raised by this work.
\subsection{Example: random walks on three-node networks}
\label{sec:3graph}
In order to motivate the sort of dynamics we are considering, consider the finite graphs shown in Figure~\ref{fig:3graphs}. Appendix~\ref{app:3graphs} describes dynamical systems of the form described in our previous work~\cite{AshPos15} that realise each of the networks shown in Figure~\ref{fig:3graphs}; see equations~\eqref{eq:C3system1} and~\eqref{eq:C3system2} for details. The aim of this paper will be to quantify both the mean residence times and the transition probabilities for such a network attractor in the presence of noise.
\begin{figure}%
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(110,50)(0,0)
\put(5,5){\includegraphics[trim= 0cm 0cm 0cm 0cm,clip=true,width=50mm]{figure1a}}
\put(65,5){\includegraphics[trim= 0cm 0cm 0cm 0cm,clip=true,width=50mm]{figure1b}}
\put(5,45){(a)}
\put(65,45){(b)}
\end{picture}
\end{center}
\caption{Two graphs with three nodes. In (a) transitions can only be made in one direction, and in (b), transitions may be made in both directions. We realise each of these graphs as heteroclinic/excitable network attractors with added noise using the system described in Appendix A. In both cases the mean residence time at equilibria is infinite unless there is noise. For case (a) we add noise of strength $\eta_{cw}$ in the direction of the connections. In case (b) we add noise of strengths $\eta_{cw}$ and $\eta_{acw}$ to transitions in the clockwise and anticlockwise directions, respectively. See Figure~\ref{fig:runs} for example timeseries showing trajectories near noisy network attractors that realise these graphs.}%
\label{fig:3graphs}%
\end{figure}
We show in Figure~\ref{fig:runs} some numerical simulations of typical runs starting at node $\xi_1$. The one dimensional observable $S(t)$ (see Appendix~\ref{app:3graphs}) has the property that $S(t)\approx k$ whenever the trajectory is near the equilibrium $\xi_k$. The components $p_j$ are approximately equal to $1$ when the trajectory is near the equilibrium $\xi_j$, and the components $y_j$ are non-zero during the transitions between equilibria. Here $y_1$ corresponds to the transition from $\xi_1$ to $\xi_2$.
Figures~\ref{fig:runs}(a) and (c) show heteroclinic and excitable realisations, respectively, for the uni-directional cycle shown in Figure~\ref{fig:3graphs}(a).
Figures~\ref{fig:runs}(b) and (d) show heteroclinic and excitable realisations, respectively, for the bi-directional cycle shown in Figure~\ref{fig:3graphs}(b). Here, transitions are possible in both directions.
Close inspection reveals that there is much greater variability in residence times for the excitable realisations than for the heteroclinic realisation. In particular, the time-series for the heteroclinic uni-directional ring (Figure~\ref{fig:runs}(a)) is approximately periodic.
In the noise-free case (not shown), excitable realisations remain at the (now stable) starting state, while heteroclinic realisations exhibit an asymptotic slowing down as they pass between the nodes of the graph.
\begin{figure}%
\begin{center}
\includegraphics[width=17cm,trim={0cm 0cm 0cm 0cm},clip=]{figure2.eps}~
\end{center}
\caption{Timeseries of $S(t)$ (top), and $p_1$, $p_2$ and $y_1$ (bottom) for typical trajectories corresponding to realisations of the graphs shown in Figure~\ref{fig:3graphs} with weak noise; the system and parameters are described in Appendix~\ref{app:3graphs}. If the system state is close to the equilibrium $\xi_k$ that represents the $k$th node in the graph then the observable satisfies $S(t)\approx k$. For these runs we show time series for $\eta_{cw}=\eta_{acw}=0.03$. Panels (a) and (c) are for the uni-directional ring (shown in Figure~\ref{fig:3graphs}(a)), and panels (b) and (d) are for the bi-directional ring (shown in Figure~\ref{fig:3graphs}(b)). Panels (a) and (b) show heteroclinic realisations and panels (c) and (d) show excitable realisations of the graphs in Figure~\ref{fig:3graphs}.}%
\label{fig:runs}%
\end{figure}
\section{Deterministic and noisy network dynamics}
\label{sec:general}
Consider an autonomous ordinary differential equation (ODE)
\begin{equation}
\frac{d}{dt} x= f(x,\nu)
\label{eq:ode}
\end{equation}
on $x\in\mathbb{R}^d$ where $t\geq 0$, $f(x,\nu)$ is a smooth nonlinear function, and $\nu\in\mathbb{R}$ is a bifurcation parameter. We first define more precisely what we mean by heteroclinic and excitable networks in such a deterministic system before considering the statistics of noise-perturbed versions.
\subsection{Networks in phase space}
We say there is a {\em heteroclinic connection} from one equilibrium $\xi_i$ to another $\xi_j$ for (\ref{eq:ode}) if
$$
W^u(\xi_i)\cap W^s(\xi_j)\neq \emptyset.
$$
We say (\ref{eq:ode}) has a {\em heteroclinic network attractor} if there is an asymptotically stable compact connected set $\Sigma\subset\mathbb{R}^d$ such that for some set of saddle equilibria $\{\xi_k\}_{k=1}^{N}$ we have
\begin{equation}
\Sigma= \bigcup_{k=1}^{N} W^u(\xi_k)
\label{eq:hetnet}
\end{equation}
where
$$
W^u(\xi)=\{x~:~\alpha(x)=\{\xi\}\},~~W^s(\xi)=\{x~:~\omega(x)=\{\xi\}\}
$$
(these sets are manifolds if the saddles are hyperbolic). This definition is fairly weak (cf.\ \cite{AshPos15}): we do not necessarily assume hyperbolicity of the saddles or even chain recurrence of the network. However, we assume that the closures of all $W^u(\xi_k)$ are contained within the network (the network is ``clean'' \cite{field_96}), as we will be concerned with behaviour that remains close to the network under stochastic perturbation.
We say the system (\ref{eq:ode}) has an {\em excitable connection for amplitude $\delta>0$} from one equilibrium $\xi_i$ to another $\xi_j$ if
$$
B_{\delta}(\xi_i)\cap W^s(\xi_j)\neq \emptyset.
$$
This connection has {\em threshold} $\delta_{th}$ if
$$
\delta_{th}= \inf \{\delta>0~:~B_{\delta}(\xi_i)\cap W^s(\xi_j)\neq \emptyset\}.
$$
A set $\Sigma$ is an {\em excitable network for amplitude $\delta>0$} \cite{AshPos15} if there is a set of equilibria $\{\xi_i\}$ such that
\begin{equation}
\Sigma=\Sigma_{\mathrm{exc}}(\{\xi_i\},\delta):=\bigcup_{i,j=1}^{n} \{\phi_t(x)~:~x\in B_{\delta}(\xi_i) \mbox{ and }t>0\}\cap W^s(\xi_j)
\label{eq:exnet}
\end{equation}
As noted in \cite{AshPos15}, a heteroclinic connection is also an excitable connection with $\delta_{th}=0$, though the converse is not necessarily true.
An excitable network for amplitude $\delta$ means that we can follow an arbitrary path on the network by a mixture of trajectories and ``jumps'' of size at most $\delta$. In a previous paper \cite{AshPos15} we gave a particular construction of coupled nonlinear systems (\ref{eq:ode}) where an arbitrary network can be constructed as a heteroclinic or as an excitable network in phase space.
\subsection{Noisy network attractors}
For cases where the noise-free system (\ref{eq:ode}) has a network attractor $\Sigma$ we will investigate the associated autonomous stochastic differential equation (SDE)
\begin{equation}
dx= f(x,\nu)dt+\eta dw
\label{eq:sde}
\end{equation}
on $x\in\mathbb{R}^d$ where $t\geq 0$, $w(t)$ is $d$-dimensional Brownian motion, and $\eta=\mbox{diag}(\eta_1,\ldots,\eta_d)$. We are concerned with investigating the influence of noise on the associated noisy system (\ref{eq:sde}) under the assumption that trajectories remain close to the heteroclinic or excitable network attractor. We express this more precisely in Section~\ref{subsec:connecting}.
In what follows we will consider $\vartheta\in \Omega$ where $\Omega$ represents the possible noise trajectories. Formally one can understand the solution of (\ref{eq:sde}) as a ``random dynamical system'' \cite{Arnold98}: the solution $x(t)$ can be viewed as a cocycle over the noise trajectory: we write
\begin{equation}
\begin{array}{rcl}
x(t+s) &=& \varphi(t,\vartheta(s),x(s))\\
\vartheta(t+s) &=& \theta(t,\vartheta(s))
\end{array}
\label{eq:rds}
\end{equation}
for any $t,s\geq 0$: note that $\varphi:\mathbb{R}^+\times\Omega\times \mathbb{R}^d\rightarrow \mathbb{R}^d$ is a cocycle that represents the evolution of the system with noise $\vartheta(s)$ whilst $\theta:\mathbb{R}^+\times \Omega\rightarrow \Omega$ represents the evolution of the noise---typically just a shift in time. We will assume there is a measure $\mu_\Omega$ (such as Wiener measure) on $\Omega$, and we assume $\vartheta$ is chosen from a set of full measure with respect to $\mu_\Omega$. We will also assume that the random dynamical system has an attractor that supports a natural ergodic measure $M$ on $\Omega\times \mathbb{R}^d$ whose projection onto $\Omega$ is $\mu_\Omega$ and whose marginals are absolutely continuous with respect to $d$-dimensional Lebesgue measure on the fibres $\mathbb{R}^d$. For any $A\subset \mathbb{R}^d\times \Omega$ we write $\mathrm{Prob}(A):=M(A)$. In heuristic terms we can think of $\mathrm{Prob}(\cdot)$ as assigning probabilities to possible asymptotic states of the noisy system.
\subsection{Itineraries on attracting networks}
\label{sec:itineraries}
Let us assume that typical trajectories of (\ref{eq:sde}) spend most of their time close to a network $\Sigma$ of the form (\ref{eq:hetnet}) or (\ref{eq:exnet}). We attempt to describe the motion in terms of the itinerary around the network, i.e. the sequence and timing of visits to the equilibrium nodes $\xi_k$.
Fix a tolerance $h>0$ (such that $|\xi_p-\xi_q|>2h$ for all $p\neq q$) and define
$$
K(x) := \left\{\begin{array}{rl}
i & \mbox{ if } |x-\xi_i|\leq h\\
0 & \mbox{ otherwise.}\end{array}\right.
$$
For a trajectory $x(t)$ we define
$$
\tilde{K}(t) = \{ K(s) ~:~ s = \sup\{ s\leq t~:~ K(s)\neq 0\} \}
$$
which gives the ``last visited node'' and if we start near a node this will always be non-zero. if $\tilde{K}(x(t))=i$ we say $x(t)$ is {\em close to the $i$th node}.
For $|\eta|$ small and trajectories that remain close to $\Sigma$ we expect that
$$
\lim_{T\rightarrow \infty} \frac{1}{T} \int_{s=0}^T |K(x(s)) -\tilde{K}(s)|\,ds
$$
to be small, i.e.\ $K(x(t))=\tilde{K}(t)$ most of the time. For a given initial condition $x_0$, noise amplitude, and realisation of the noise $\vartheta$, the trajectory $x(t)$ divides the half-line $t>0$ into an {\em itinerary}. This is the unique sequence of {\em epochs}
$$
\{ (i_j(x_0,\vartheta),s_j(x_0,\vartheta)) ~:~ j\in\mathbb{N}\}
$$
such that $\tilde{K}(t)=i_j$ for the interval $t\in[s_j,s_{j+1})$, and $i_{j+1}\neq i_j$. As in \cite{AshPos13}, the times of entry $s_j$ are increasing while the {\em duration} of the $j$th epoch we define to be
$$
\tau_j(x_0,\vartheta)=s_{j+1}(x_0,\vartheta)-s_j(x_0,\vartheta).
$$
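As a concrete illustration of how an itinerary could be extracted from a sampled trajectory (a sketch; the function name, the node list and the synthetic data are all ours), consider:

```python
# Extract the itinerary (i_j, s_j) from a sampled trajectory: K(x) returns
# the index of the node within distance h, else 0, and the last non-zero
# value is retained, following the definition of the "last visited node".
def itinerary(ts, xs, nodes, h):
    def K(x):
        for i, xi in enumerate(nodes, start=1):
            if abs(x - xi) <= h:
                return i
        return 0
    epochs, last = [], 0
    for t, x in zip(ts, xs):
        k = K(x)
        if k != 0 and k != last:
            epochs.append((k, t))   # entry into a new epoch
            last = k
    return epochs

# synthetic trajectory: near xi_1 = 0, a transit, then near xi_2 = 1
ts = list(range(13))
xs = [0.0] * 5 + [0.5] * 3 + [1.0] * 5
epochs = itinerary(ts, xs, nodes=[0.0, 1.0], h=0.1)   # -> [(1, 0), (2, 8)]
```

The epoch durations $\tau_j=s_{j+1}-s_j$ then follow by differencing the entry times.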
We are interested in various statistics of this itinerary including the distribution of {\em residence times} for the $j$th node:
$$
\rho_j(\tau)=\mathrm{Prob}\left(\{(\tilde{x},\tilde{\vartheta})~:~\tau_\ell=\tau \mbox{ given that }i_\ell(\tilde{x},\tilde{\vartheta})=j\}\right).
$$
The {\em mean residence time} at the $j$th node is the expected value of $\tau_j$, i.e.
\begin{equation}
T_j = \int_{\tau=0}^{\infty} \tau \rho_j(\tau)\,d\tau.
\end{equation}
If the network has several outgoing connections from a node one might expect the addition of noise to enhance random switching; we show that this can, at least in our case, be well modelled as a competition between independent first escape time processes in the different directions, such that the residence time is the minimum first escape time and the transition probability is the direction of first escape.
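This competition picture can be illustrated with two independent exponential escape clocks (an assumption made purely for illustration; for rates $\lambda_a,\lambda_b$ the minimum has mean $1/(\lambda_a+\lambda_b)$ and direction $a$ wins with probability $\lambda_a/(\lambda_a+\lambda_b)$):

```python
import random

# Two independent exponential escape clocks with rates rate_a, rate_b:
# residence time = minimum of the two escape times, and the transition is
# taken in the direction of first escape.
def simulate(rate_a, rate_b, n=100000, seed=2):
    rng = random.Random(seed)
    total, to_a = 0.0, 0
    for _ in range(n):
        ta, tb = rng.expovariate(rate_a), rng.expovariate(rate_b)
        total += min(ta, tb)
        to_a += ta < tb
    return total / n, to_a / n

mean_res, p_a = simulate(1.0, 3.0)   # theory: mean 0.25, P(a) = 0.25
```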
For a given (finite) sequence of nodes $\{j_k~:~k=1,\cdots m\}$ we can examine the probability of seeing this sequence of nodes as
\begin{equation}\label{eq:prob}
\mathcal{P}(j_1,\ldots,j_m) := \mathrm{Prob} \left( \{(\tilde{x},\tilde{\vartheta})~:~i_{\ell}(\tilde{x},\tilde{\vartheta})=j_{\ell} \mbox{ for }\ell=1,\ldots,m\} \right)
\end{equation}
and use this to investigate the asymptotic probability of being at state $j$,
$\pi_j:=\mathcal{P}(j)$ (assuming that $\pi_j>0$). In equation~\eqref{eq:prob}, we can think of the initial condition of the trajectory $\tilde{x}$ as being chosen randomly from the attractor; the probability is then taken with respect to that initial condition and all possible noise trajectories.
More precisely, the {\em transition probability} that the next state is $j_2$ given we are at state $j_1$ is
\begin{equation}
\pi_{j_1,j_2}:=\frac{1}{\pi_{j_1}}\mathcal{P}(j_1,j_2)
\end{equation}
As in \cite{AshPos13} we say the {\em transitions are memoryless} if
\begin{equation}
\mathcal{P}(j_1,\ldots,j_m)=\mathcal{P}(j_1,\ldots,j_{m-1}) \pi_{p,q}
\label{eq:mless}
\end{equation}
for all $p,q$ and any sequence $j_1,\cdots,j_m$ where $j_{m-1}=p$ and $j_m=q$. As noted in \cite{AshPos13}, in many cases we can expect the transitions to be asymptotically memoryless (i.e. (\ref{eq:mless}) holds with an error that goes to zero as the noise goes to zero), in which case the transitions are well modelled by a first order Markov chain where the transition probabilities are $\pi_{p,q}$.
More precisely we say for some $\epsilon>0$ that the transitions are {\em $\epsilon$-memoryless} if
\begin{equation}
|\mathcal{P}(j_1,\ldots,j_m)-\mathcal{P}(j_1,\ldots,j_{m-1}) \pi_{p,q}|<\epsilon
\label{eq:deltamless}
\end{equation}
for all $p,q$ and any sequence $j_1,\cdots,j_m$ where $j_{m-1}=p$ and $j_m=q$.
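The transition probabilities $\pi_{p,q}$ and the smallest $\epsilon$ for which the observed blocks of length $m$ satisfy (\ref{eq:deltamless}) can be estimated empirically from a finite itinerary. The following Python sketch is our own illustration (helper names and parameter values are not from the paper), tested here on a synthetic itinerary generated by a known first-order Markov chain:

```python
import random
from collections import Counter

def transition_probs(itinerary):
    """Estimate pi_{p,q} = P(next node is q | current node is p) from bigram counts."""
    pair_counts = Counter(zip(itinerary, itinerary[1:]))
    state_counts = Counter(itinerary[:-1])
    return {(p, q): c / state_counts[p] for (p, q), c in pair_counts.items()}

def memoryless_error(itinerary, m=3):
    """Smallest epsilon for which the observed length-m blocks satisfy the
    epsilon-memoryless condition |P(j_1..j_m) - P(j_1..j_{m-1}) pi_{p,q}| < epsilon."""
    n = len(itinerary)
    pi = transition_probs(itinerary)
    blocks_m = Counter(tuple(itinerary[i:i + m]) for i in range(n - m + 1))
    blocks_m1 = Counter(tuple(itinerary[i:i + m - 1]) for i in range(n - m + 2))
    err = 0.0
    for blk, count in blocks_m.items():
        p_m = count / (n - m + 1)
        p_m1 = blocks_m1[blk[:-1]] / (n - m + 2)
        err = max(err, abs(p_m - p_m1 * pi[(blk[-2], blk[-1])]))
    return err

# Synthetic itinerary from a known first-order Markov chain on three nodes
# (self-transitions excluded, since consecutive epochs have distinct nodes).
random.seed(0)
P = {0: [0.0, 0.7, 0.3], 1: [0.5, 0.0, 0.5], 2: [0.2, 0.8, 0.0]}
itinerary, state = [], 0
for _ in range(200_000):
    itinerary.append(state)
    state = random.choices([0, 1, 2], weights=P[state])[0]
pi_hat = transition_probs(itinerary)
```

For a genuinely memoryless itinerary the returned error tends to zero as the itinerary length grows.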
\subsection{Connecting microscopic and macroscopic randomness}
\label{subsec:connecting}
An important question that we aim to address in the remainder of this paper is to understand how random variables that determine the itineraries of trajectories of a noisy network attractor for (\ref{eq:sde}) are influenced by the dynamics of the noise-free system (\ref{eq:ode}) and the noise amplitudes. In particular, we are concerned with systems where in the limit of asymptotically low additive noise, all of the mass of the attractor is centred on the network nodes. More precisely, we consider systems of the form (\ref{eq:sde}) such that
\begin{itemize}
\item[(H1)] The noise-free system has a network attractor $\Sigma$ between a finite set of equilibria $\{\xi_i\}_{i=1}^N$, and any connection from $\xi_i$ to $\xi_j$ has added noise of amplitude $\eta_{ij}$.
\item[(H2)] For fixed tolerance $h>0$ and any $\epsilon>0$, there is an $\eta>0$ such that whenever $|\eta_{ij}|<\eta$ for all $i,j$, any typical trajectory $x(t)$ will satisfy
$$
\frac{1}{t} \int_{s=0}^{t} \left(1-\delta_{\tilde{K}(s),K(x(s))}\right) \, ds <\epsilon,
$$
i.e. the proportion of time for which the trajectory is not close to an equilibrium is arbitrarily small.
\item[(H3)] For any $\epsilon>0$ there is an $E$ such that whenever $\eta_{ij}<E$ for all $i,j$ then the transitions are $\epsilon$-memoryless.
\end{itemize}
Making the above assumptions, we conjecture that the (microscopic) noise amplitudes $\eta_{ij}>0$ can be chosen to realise (macroscopic) noisy network dynamics with any given statistics (that is, mean residence times $T_j$ and transition probabilities $\pi_{i,j}$), as long as the residence times are sufficiently long. We believe that (H1)-(H3) are reasonable assumptions to make, and in particular, they can be numerically verified for the example networks we give in Section~\ref{sec:3graph}. Bakhtin has results \cite[Theorem 6.1]{Bakhtinb} for the limiting invariant measure for some heteroclinic cycles that imply (H2). Hypothesis (H3) is discussed in more detail in our previous work~\cite{AshPos13} and also by Bakhtin~\cite[Section 10]{Bakhtinb}. (H3) can be violated for heteroclinic networks if parameters are chosen so that there is `lift-off'~\cite{ArmStoKir03}. This may be the case if there are outgoing eigenvalues that are stronger than the incoming eigenvalues at an equilibrium. For excitable networks we do not expect (H3) to be easily violated.
\begin{conjecture}
\label{conj:main}
Suppose that (\ref{eq:sde}) has a noisy network attractor such that hypotheses (H1)-(H3) hold. Then there is a $\tau>0$ such that for any desired mean residence times $R_j>\tau$ and any desired transition probabilities $\Pi_{ij}>0$ with $\sum_j\Pi_{ij}=1$, there exists a choice of noise amplitudes $\eta_{ij}$ such that
$$
T_j=R_j,~~\pi_{i,j}=\Pi_{i,j}
$$
for all $i,j$.
\end{conjecture}
We present some numerical evidence supporting this in Section~\ref{sec:switching}.
\section{Residence times for noisy heteroclinic and excitable networks}
\label{sec:residence}
For the noisy network dynamics discussed in Section~\ref{sec:itineraries} we study the behaviour near the connections in terms of an escape process near an equilibrium. On entering a neighbourhood of $\xi_1$, the dynamics of those $y_i$ variables that correspond to outgoing directions in the graph will be unstable (for the heteroclinic case) or marginally stable (for the excitable case); we assume all others are strongly stable. Without loss of generality we consider $y=y_1$ corresponding to a connection from $\xi_1$ to $\xi_2$. The mean escape time from a neighbourhood of an equilibrium $\xi_1$ of a network will be approximated using a one-dimensional model of the bifurcation to an excitable connection.
For the excitable case this is the well-studied Kramers escape rate from a local potential well. Although Kramers' result has been known and applied in many areas for a long time, only recently have full mathematical justifications of the asymptotic formulae been available \cite{Friedlinetal2012}, and generalisations to more complex situations including some bifurcation problems have only recently been developed by Bakhtin \cite{Bakhtin}, Berglund, Gentz \cite{Berglund2013,BerglundGentz2008} and others. A related case of escape over a potential maximum that undergoes a supercritical pitchfork bifurcation is analysed in detail by Berglund and Gentz~\cite{BerglundGentz2008}: we treat however the problem of escape from a saddle that becomes a sink at a subcritical pitchfork bifurcation on varying $\nu$.
To this end, consider the one dimensional SDE
\begin{equation}
dx= -V'(x) dt + \eta dw.
\label{eq:sdeescape}
\end{equation}
Kramers' formula is an asymptotic formula for the mean transition time from one minimum, $x_0$, to another minimum, $y_0$, of $V(x)$, passing over the potential barrier at the local maximum $z_0$. It states that
\begin{equation}
T \approx \frac{2\pi}{\sqrt{V''(x_0)|V''(z_0)|}} \exp \left(2\frac{V(z_0)-V(x_0)}{\eta^2}\right)\left(1+O(\eta)\right)
\label{eq:Kramers}
\end{equation}
in the limit $\eta\rightarrow 0^+$ (see \cite{Berglund2013} for a review).
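Kramers' formula (\ref{eq:Kramers}) is straightforward to evaluate numerically for a given potential. The following Python sketch is our own illustration (the finite-difference step and parameter values are arbitrary choices); it compares (\ref{eq:Kramers}), evaluated with numerical second derivatives, against the closed form $\frac{\pi\sqrt{2}}{\nu}\exp(\nu^2/4\eta^2)$ for the truncated potential (\ref{eq:mpotential}) with $\nu>0$. This is the one-barrier transition time; escape over either of the two symmetric barriers halves it, cf.\ (\ref{eq:Kramersnuplus}).

```python
import math

def d2(V, x, h=1e-5):
    """Second derivative of V at x by central differences."""
    return (V(x + h) - 2.0 * V(x) + V(x - h)) / h**2

def kramers_time(V, x0, z0, eta):
    """Kramers mean transition time over the barrier at z0 starting from the
    minimum at x0: 2 pi / sqrt(V''(x0) |V''(z0)|) exp(2 (V(z0) - V(x0)) / eta^2)."""
    prefactor = 2.0 * math.pi / math.sqrt(d2(V, x0) * abs(d2(V, z0)))
    return prefactor * math.exp(2.0 * (V(z0) - V(x0)) / eta**2)

# Truncated potential V(x) = (nu x^2 - x^4)/2 with nu > 0: minimum at x = 0,
# barrier at z0 = sqrt(nu/2), barrier height nu^2/8.
nu, eta = 0.5, 0.1
V = lambda x: (nu * x**2 - x**4) / 2.0
T_numeric = kramers_time(V, 0.0, math.sqrt(nu / 2.0), eta)
T_closed = (math.pi * math.sqrt(2.0) / nu) * math.exp(nu**2 / (4.0 * eta**2))
```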
More precisely, we approximate the mean residence time near a saddle as the mean escape time $T(\nu,\eta)$ from $x=0$ for the one-dimensional problem (\ref{eq:sdeescape}) with potential
\begin{equation}
V(x)=\frac{1}{6} x^6 -\frac{1}{2} x^4+\frac{\nu}{2}x^2
\label{eq:potential}
\end{equation}
from the interval $[-a,a]$ for some fixed $a$ of order one; more precisely, we choose $a>0$ to separate the additional potential wells of (\ref{eq:potential}) from $x=0$. Figure~\ref{fig:potls} illustrates the potential and the choice of $a$: we will be interested in cases where $\nu$ is close to zero, so any additional equilibria lie within $[-a,a]$, and where the noise amplitude is asymptotically small: $\eta\rightarrow 0^+$.
\begin{figure}%
\centerline{
\includegraphics[width=0.8\columnwidth]{figure3}
}%
\caption{Bold lines show the potential $V(x)$ versus $x$ (\ref{eq:potential}) for values of $\nu<0$, $\nu=0$ and $\nu>0$. Note that there are minima at $B+$ and $B-$ for all $\nu$ close enough to zero, while $A$ is a local maximum for $\nu\leq 0$. For $\nu>0$ there are local maxima at $C+$ and $C-$ and $A$ is a local minimum. The fainter lines show the truncated potential (\ref{eq:mpotential}). We will model the transitions from $A$ to $B\pm$ in the full potential as the first passage through the lines $x=\pm a$ for the truncated potential; this gives good predictions for small enough $\sqrt{\nu}\ll a$ in the limit $\eta\rightarrow 0^+$.}%
\label{fig:potls}%
\end{figure}
We can consider the modified potential
\begin{equation}
V(x)= \frac{\nu x^2-x^4}{2}.
\label{eq:mpotential}
\end{equation}
For $\nu<0$, this has a saddle at $x=0$ that is stabilised via a subcritical pitchfork on increasing $\nu$ through zero. For the case $\nu>0$, $V$ has a minimum at $x=0$ and maxima at $x_{\pm}=\pm \sqrt{\nu/2}$, and we assume $x_{\pm}\in[-a,a]$. Using Berglund~\cite[Section 3.1]{Berglund2013} we calculate the mean escape time, $w_{a}(x)$, for solutions of (\ref{eq:sdeescape}) starting at a location $x\in (-a,a)$ out of the interval $[-a,a]$. This is given by solving the Poisson problem
\begin{equation}
\frac{\eta^2}{2} \frac{d^2}{dx^2}w_a(x) - V'(x) \frac{d}{dx} w_a(x)=-1,~~ w_a(-a)=w_a(a)=0.
\label{eq:waitingtime}
\end{equation}
The solution of this can be expressed in integral form as
$$
w_a(s) = \frac{2}{\eta^2} \int_{x=s}^{a} \int_{y=0}^{x} \exp \frac{2(V(x)-V(y))}{\eta^2} \,dy\,dx
$$
For escapes starting at $x$ near the origin, i.e. $0<|x|\ll a$, for the potential (\ref{eq:potential}) this can be approximated by
\begin{equation}
T(\nu,\eta) = \frac{2}{\eta^2} \int_{x=0}^{a} \int_{y=0}^{x} \exp \frac{\nu(x^2-y^2)+(y^4-x^4)}{\eta^2} \,dy\,dx.
\label{eq:escapes}
\end{equation}
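For moderate values of $\eta$ the double integral (\ref{eq:escapes}) can be evaluated directly by quadrature. A minimal Python sketch using the midpoint rule (grid sizes and parameter values are illustrative only, not those used elsewhere in the paper):

```python
import math

def T_escape(nu, eta, a=0.5, n=400):
    """Midpoint-rule quadrature of the double integral for T(nu, eta):
    (2/eta^2) int_0^a int_0^x exp[(nu (x^2-y^2) + (y^4-x^4))/eta^2] dy dx."""
    hx = a / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        hy = x / n          # inner grid scales with the inner integration range
        for j in range(n):
            y = (j + 0.5) * hy
            total += math.exp((nu * (x**2 - y**2) + (y**4 - x**4)) / eta**2) * hy * hx
    return 2.0 * total / eta**2

# Excitable connection (nu > 0): residence time grows as the noise shrinks.
t_coarse = T_escape(0.2, 0.2)
t_fine = T_escape(0.2, 0.1)
```

As expected for an excitable connection, the computed residence time increases as the noise amplitude decreases.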
In the following sections, we compute asymptotics of $T(\nu,\eta)$ for small $\eta$ and $\nu$. In particular, we consider the limit $\eta\rightarrow 0^+$ for three cases: $\nu<0$, $\nu=0$, and $\nu>0$. We begin by finding some bounds on $T(\nu,\eta)$.
\begin{lemma}
With $\alpha=\nu/\eta$, we have
\begin{equation} \label{eq:Tbounds}
\frac{1}{2\eta} \int_{z=0}^{\frac{a^2}{\eta}} \frac{1-\exp(z(\alpha-z))}{z(z-\alpha)} \,dz < T(\nu,\eta) < \frac{1}{\eta} \int_{z=0}^{\frac{2a^2}{\eta}} \frac{1-\exp(2z(\alpha-z))}{2z(z-\alpha)} \,dz
\end{equation}
\end{lemma}
\begin{proof}
Rescale $v:=y/\sqrt{\eta}$, $u=x/\sqrt{\eta}$,
and define
$
\alpha:=\frac{\nu}{\eta}
$
to get
\begin{equation}
T(\nu,\eta) = \frac{2}{\eta} \int_{u=0}^{\frac{a}{\sqrt{\eta}}} \int_{v=0}^{u} \exp \left[\alpha (u^2-v^2)+(v^4-u^4)\right] \,dv\,du.
\end{equation}
Now let us define
$$
p:=u+v,~~q:=u-v.
$$
Changing integration variables from $(u,v)$ to $(p,q)$ we have
\begin{equation}
T(\nu,\eta) = \frac{1}{\eta} \int_{p=0}^{\frac{2a}{\sqrt{\eta}}} \int_{q=0}^{\min(p,\frac{2a}{\sqrt{\eta}}-p)} \exp \left[pq(\alpha -(p^2+q^2)/2)\right] \,dq\,dp.
\label{eq:ovTpq}
\end{equation}
In the region of integration we have $0<q<p$, so $p^2/2<(p^2+q^2)/2<p^2$ and so
$$
pq(\alpha-p^2)<pq\left(\alpha -(p^2+q^2)/2\right)<pq(\alpha- p^2/2).
$$
We can thus find an upper bound to (\ref{eq:ovTpq}) using
\begin{eqnarray*}
T(\nu,\eta) & < &
\frac{1}{\eta} \int_{p=0}^{\frac{2a}{\sqrt{\eta}}} \int_{q=0}^{\min(p,\frac{2a}{\sqrt{\eta}}-p)} \exp \left[q p (\alpha - p^2/2)\right] \,dq\,dp\\
&<& \frac{1}{\eta} \int_{p=0}^{\frac{2a}{\sqrt{\eta}}} \int_{q=0}^{p} \exp \left[q p (\alpha - p^2/2)\right] \,dq\,dp\\
&=&
\frac{1}{\eta} \int_{p=0}^{\frac{2a}{\sqrt{\eta}}} \frac{1-\exp [p^2(\alpha-p^2/2)]}{p(p^2/2-\alpha)} \,dp
\end{eqnarray*}
Changing coordinates to $z=p^2/2$, gives the upper bound.
A lower bound to (\ref{eq:ovTpq}) is given by
\begin{eqnarray*}
T(\nu,\eta) & > &
\frac{1}{\eta} \int_{p=0}^{\frac{a}{\sqrt{\eta}}} \int_{q=0}^{\min(p,\frac{2a}{\sqrt{\eta}}-p)} \exp \left[q p (\alpha - p^2)\right] \,dq\,dp\\
&=& \frac{1}{\eta} \int_{p=0}^{\frac{a}{\sqrt{\eta}}} \int_{q=0}^{p} \exp \left[q p (\alpha - p^2)\right] \,dq\,dp\\
&=&
\frac{1}{\eta} \int_{p=0}^{\frac{a}{\sqrt{\eta}}} \frac{1-\exp [p^2(\alpha-p^2)]}{p(p^2-\alpha)} \,dp
\end{eqnarray*}
Changing coordinates to $z=p^2$ gives the lower bound. \hfill$\square$
\end{proof}
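These bounds are easy to check numerically. The sketch below is our own verification (parameter values illustrative): it evaluates both one-dimensional bound integrals of (\ref{eq:Tbounds}) by the midpoint rule, using \texttt{expm1} so that the removable singularities at $z=0$ and $z=\alpha$ stay well behaved, and compares them with direct quadrature of (\ref{eq:escapes}):

```python
import math

def bound_integral(alpha, zmax, c, n=4000):
    """Midpoint rule for int_0^zmax (1 - exp(c z (alpha - z))) / (c z (z - alpha)) dz;
    c = 1 gives the lower-bound integrand and c = 2 the upper-bound integrand."""
    h = zmax / n
    s = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        s += -math.expm1(c * z * (alpha - z)) / (c * z * (z - alpha)) * h
    return s

def T_quad(nu, eta, a, n=400):
    """Direct midpoint quadrature of the double integral for T(nu, eta)."""
    hx = a / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        hy = x / n
        for j in range(n):
            y = (j + 0.5) * hy
            total += math.exp((nu * (x**2 - y**2) + (y**4 - x**4)) / eta**2) * hy * hx
    return 2.0 * total / eta**2

nu, eta, a = -0.1, 0.3, 0.5     # heteroclinic case, illustrative values
alpha = nu / eta
T_lower = bound_integral(alpha, a**2 / eta, 1) / (2.0 * eta)
T_upper = bound_integral(alpha, 2.0 * a**2 / eta, 2) / eta
T_mid = T_quad(nu, eta, a)
```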
\subsection{Scaling for heteroclinic connections}
Heteroclinic connections in a network correspond to $\nu<0$. In this parameter regime the scaling for $T(\nu,\eta)$ as $\eta$ tends to zero is given as follows:
\begin{lemma}
Suppose $\nu<0$. Pick some $0<\beta<1$. Then in the limit $\eta\rightarrow 0^+$,
\begin{equation}
\beta<\frac{T(\nu,\eta)}{\frac{1}{\nu}\ln \eta} <1.
\label{eq:nultzero}
\end{equation}
\end{lemma}
Observe that the leading order of this scaling is as expected from Stone and Holmes~\cite{StoHol90}.
~
\begin{proof}
We begin by computing the upper bound.
For $\nu<0$ and $\eta>0$ (so that $\alpha=\nu/\eta<0$) note that the integrand in the upper bound in (\ref{eq:Tbounds}), $f(z)=\frac{1-\exp(2z(\alpha-z))}{2z(z-\alpha)}$, satisfies both
\[ f(z)<\frac{1}{2z(z-\alpha)}\quad \forall z>0 \]
and
\[
f(z)<1\quad \forall z>0.
\]
This implies that for some $z^*>0$, we can split the integral into
\begin{eqnarray}
T(\nu,\eta) &<& \frac{1}{\eta} \left[ \int_{z=0}^{z^*} \,dz + \int_{z^*}^\frac{2a^2}{\eta} \frac{1}{2z(z-\alpha)} \,dz\right] \nonumber \\
&=& \frac{z^{*}}{\eta} +\frac{1}{2\nu}\left[-\ln \frac{|\alpha|}{z^*} -\ln \left(1+\frac{z^*}{|\alpha|}\right) + \ln \left(1-\frac{\nu}{2a^2}\right) \right]. \label{eq:upperbd2}
\end{eqnarray}
We then choose $z^*=\dfrac{1}{|\alpha|}=\dfrac{\eta}{|\nu|}=-\dfrac{\eta}{\nu}$; letting $\eta\rightarrow 0^+$, we find
\begin{eqnarray*}
T(\nu,\eta) & < & -\frac{1}{\nu} -\frac{1}{2\nu}\ln \frac{|\nu|^2}{\eta^2}-\frac{1}{2\nu}\ln\left(1+\frac{\eta^2}{|\nu|^2} \right)+\frac{1}{2\nu}\ln\left(1-\frac{\nu}{2a^2} \right) \\
& < & \frac{1}{\nu}\ln\eta +\frac{1}{\nu}\left(-1-\ln|\nu|+\ln\left( 1-\frac{\nu}{2a^2}\right) \right) +O\left(\frac{\eta^2}{\nu^3} \right)
\end{eqnarray*}
so that in the limit $\eta\rightarrow 0^+$ for fixed $\nu<0$ and $a>0$,
\begin{equation}
T(\nu,\eta)< \frac{1}{\nu}\ln \eta + K_1 + O(\eta^2)
\label{eq:nultzeroupper}
\end{equation}
where
$$
K_1= \frac{1}{\nu}\left(-1-\ln|\nu|+\ln\left( 1-\frac{\nu}{2a^2}\right) \right).
$$
We now obtain a lower bound. Let the integrand in the lower bound in (\ref{eq:Tbounds}) be $g(z)=\frac{1-\exp(z(\alpha-z))}{z(z-\alpha)}$ and fix some $0<\beta<1$. It can be shown that for $z^*(\beta)=-\frac{\ln(1-\beta)}{|\alpha|}$, the integrand satisfies
\[
g(z)>\frac{\beta}{z(z-\alpha)}\quad \mathrm{for}\ z>z^*(\beta)
\]
and
\[
g(z)>\frac{\beta}{-2\ln(1-\beta)} \quad \mathrm{for}\ 0<z<z^*(\beta)\quad \mathrm{and}\ |\alpha|>z^*(\beta)
\]
We can thus, for fixed $\beta$ and large enough $|\alpha|$, split the integral into
\begin{eqnarray*}
T(\nu,\eta) &>& \frac{1}{2\eta} \left[ \int_{z=0}^{z^*} \frac{\beta}{-2\ln(1-\beta)} \,dz +\beta \int_{z^*}^\frac{a^2}{\eta} \frac{1}{z(z-\alpha)} \,dz\right]\\
&=& \frac{\beta}{-4\ln (1-\beta) }\frac{z^{*}}{\eta} +\frac{\beta}{2\nu}\left[ -\ln \frac{|\alpha|}{z^*} -\ln \left(1+\frac{z^*}{|\alpha|}\right) + \ln \left(1-\frac{\nu}{a^2}\right) \right].
\end{eqnarray*}
Then, substituting for $z^*(\beta)=-\frac{\ln(1-\beta)}{|\alpha|}=-\ln(1-\beta) \frac{\eta}{|\nu|}$, we find
\begin{eqnarray*}
T(\nu,\eta) & > & -\frac{\beta}{4\nu} -\frac{\beta}{2\nu}\ln\left( \frac{|\nu|^2}{-\ln(1-\beta)\eta^2}\right)-\frac{\beta}{2\nu}\ln\left(1-\ln(1-\beta)\frac{\eta^2}{|\nu|^2} \right)+\frac{\beta}{2\nu}\ln\left(1-\frac{\nu}{a^2} \right) \\
& = & \frac{\beta}{\nu}\ln\eta +\frac{\beta}{\nu}\left(-\frac{1}{4}-\ln|\nu|+\ln(-\ln(1-\beta))+\frac{1}{2}\ln\left( 1-\frac{\nu}{a^2}\right) \right) +O\left(\frac{\eta^2}{\nu^3} \right)
\end{eqnarray*}
so that in the limit $\eta\rightarrow 0^+$, for fixed $\nu<0$, $0<\beta<1$ and $a>0$,
\begin{equation}
T(\nu,\eta)> \frac{\beta}{\nu}\ln \eta + K_2(\beta) + O(\eta^2)
\label{eq:nultzerolower}
\end{equation}
where
\[
K_2(\beta)=\frac{\beta}{\nu}\left(-\frac{1}{4}-\ln|\nu|+\ln(-\ln(1-\beta))+\frac{1}{2}\ln\left( 1-\frac{\nu}{a^2}\right) \right)
\]
\hfill$\square$
\end{proof}
\subsection{Scaling at bifurcation}
For the case $\nu=0$ where there is a bifurcation of the equilibrium at $x=0$ we obtain quite a different scaling. More precisely,
\begin{lemma}
Suppose $\nu=0$. Then in the limit $\eta\rightarrow 0^+$,
\begin{equation}
\frac{\sqrt{\pi}}{2}<\frac{T(\nu,\eta)}{\frac{1}{\eta}} <\sqrt{\frac{\pi}{2}}.
\label{eq:nuzero}
\end{equation}
\end{lemma}
\begin{proof}
Set $\nu=0$ (so that $\alpha=0$), then the estimate (\ref{eq:Tbounds}) gives an upper bound
\begin{eqnarray*}
T(\nu,\eta) &<& \frac{1}{\eta} \int_{z=0}^{\frac{2a^2}{\eta}} \frac{1-\exp(-2z^2)}{2z^2} \,dz\\
& = & \frac{1}{\eta} \left[\frac{1-\exp(-2z^2)}{-2z} \right]_0^{\frac{2a^2}{\eta}}+\frac{2}{\eta}\int_0^{\frac{2a^2}{\eta}}\exp(-2z^2) \, dz \\
& = & -\frac{1}{4a^2}+\frac{\exp(-8a^4/\eta^2)}{4a^2}+\frac{1}{\eta}\sqrt{\frac{\pi}{2}}\left(1-\mathrm{erfc}\left(\frac{2\sqrt{2}a^2}{\eta} \right) \right)
\end{eqnarray*}
where $\mathrm{erfc}$ is the complementary error function. Using the asymptotic expansion for $\mathrm{erfc}$ for large $X$ given by
\[
\mathrm{erfc}(X)=\frac{\exp(-X^2)}{X\sqrt{\pi}}\left(1-\frac{1}{2X^2} + \dots \right)
\]
we find the lowest order terms for $T(\nu,\eta)$ are
\begin{eqnarray*}
T(\nu,\eta) &<& \frac{1}{\eta} \sqrt{\frac{\pi}{2}}-\frac{1}{4a^2}+\frac{\eta^2\exp(-8 a^{4}/\eta^{2})}{64a^6}+O\left(\eta^4\exp(-8a^{4}/\eta^{2})\right)
\end{eqnarray*}
as $\eta \rightarrow 0^+$.
A similar computation for the lower bound gives
\begin{eqnarray*}
T(\nu,\eta) &>& \frac{1}{2\eta} \int_{y=0}^{\frac{a^2}{\eta}} \frac{1-\exp(-y^2)}{y^2} \,dy\\
& = & \frac{1}{2\eta} \left[\frac{1-\exp(-y^2)}{-y} \right]_0^{\frac{a^2}{\eta}}+\frac{1}{\eta}\int_0^{\frac{a^2}{\eta}}\exp(-y^2) \, dy \\
& = & -\frac{1}{2a^2}+\frac{\exp(-a^4/\eta^2)}{2a^2}+\frac{1}{\eta}{\frac{\sqrt{\pi}}{2}}\left(1-\mathrm{erfc}\left(\frac{a^2}{\eta} \right) \right) \\
& = & \frac{1}{\eta}{\frac{\sqrt{\pi}}{2}}-\frac{1}{2a^2}+\frac{\eta^2\exp(-a^{4}/\eta^{2})}{4a^6}+O(\eta^4 \exp(-a^4/\eta^2)).
\end{eqnarray*}
\hfill$\square$
\end{proof}
The estimate (\ref{eq:Tbounds}) also means that we have a particularly tractable scaling if we fix $\alpha$ (so that $\nu=\alpha \eta$) and take $\eta\rightarrow 0^+$:
\begin{equation}
\label{eq:upperbdalpha}
T(\nu,\eta) < \frac{C(\alpha)}{\eta}+O(1)
\end{equation}
where
$$
C(\alpha)=\int_{z=0}^{\infty} \frac{1-\exp(2z(\alpha-z))}{2z(z-\alpha)} \,dz
$$
is a constant that is small for $\alpha<0$ and grows very quickly for $\alpha>0$.
More generally this suggests that
\begin{equation}
\label{eq:scalingalpha}
T(\nu,\eta) \approx \frac{C(\alpha)}{\eta}+O(1)
\end{equation}
for some $C(\alpha)>0$ with $C(\alpha)\rightarrow 0$ as $\alpha\rightarrow -\infty$ and $C(\alpha)\rightarrow \infty$ as $\alpha\rightarrow \infty$. We believe the upper bound is the sharper of the two; numerical evidence (see Figure~\ref{fig:escapes}) suggests that
$$
C(0)\leq \sqrt{\frac{\pi}{2}}=1.253314.
$$
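The constant in the upper bound can be computed by truncating the integral and adding back the $O(1/z^2)$ tail; the Python sketch below is our own (truncation point and grid size are arbitrary choices). Note that the upper-bound constant at $\alpha=0$ is $\int_0^\infty (1-e^{-2z^2})/(2z^2)\,dz=\sqrt{\pi/2}$ exactly, consistent with the numerical value quoted above:

```python
import math

def C(alpha, zmax=50.0, n=50_000):
    """Quadrature of C(alpha) truncated at zmax; the integrand decays like
    1/(2 z^2), so the neglected tail is approximately 1/(2 zmax)."""
    h = zmax / n
    s = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        # expm1 keeps the removable singularities at z = 0 and z = alpha benign
        s += -math.expm1(2.0 * z * (alpha - z)) / (2.0 * z * (z - alpha)) * h
    return s + 1.0 / (2.0 * zmax)
```

The computed values also confirm that $C(\alpha)$ is increasing through $\alpha=0$, small for $\alpha<0$ and large for $\alpha>0$.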
\subsection{Scaling for excitable connections}
For $0<\nu<2a^2$ and $\eta>0$ (so that $\alpha>0$), the limit $\eta\rightarrow 0^+$ is the standard Kramers case. We can compute this directly from~\eqref{eq:escapes}, that is
\begin{align*}
T(\nu,\eta) = & \frac{2}{\eta^2} \int_{x=0}^{a} \int_{y=0}^{x} \exp \frac{\nu(x^2-y^2)+(y^4-x^4)}{\eta^2} \,dy\,dx \\
= & \frac{2}{\eta^2} \int_{x=0}^{a} \exp \frac{\nu x^2-x^4}{\eta^2} \int_{y=0}^{x} \exp \frac{- \nu y^2+ y^4}{\eta^2} \,dy\,dx
\end{align*}
We note that the integrand of the first integral is maximal at $x=\sqrt{\nu/2}\in(0,a)$, and that of the second at $y=0$.
We approximate the significant contribution to the second integral over the range $0<y<\sqrt{\nu}$ and write $\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}} \int_{s=0}^{x} \exp(-s^2)\,ds$ so that
\begin{align}
T(\nu,\eta) \approx & \frac{2}{\eta^2} \int_{x=-\infty}^{\infty} \exp \left( \frac{\nu^2}{4\eta^2} -\frac{2\nu}{\eta^2}\left(x-\sqrt{\frac{\nu}{2}}\right)^2\right) \, dx \int_{y=0}^{\sqrt{\nu}} \exp \frac{-\nu y^2}{\eta^2} \,dy \nonumber \\
\approx& \frac{2}{\eta^2} \exp \left( \frac{\nu^2}{4\eta^2} \right) \sqrt{\frac{\pi \eta^2}{2\nu}} \frac{\sqrt{\pi}}{2} \sqrt{\frac{\eta^2}{\nu}}\mathrm{erf}\left(\frac{\nu}{\eta}\right) \nonumber \\
\approx & \frac{\pi}{\nu\sqrt{2}} \exp \left( \frac{\nu^2}{4\eta^2} \right) \label{eq:Kramersnuplus}
\end{align}
for fixed $\nu>0$ and $\eta\rightarrow 0^+$, which corresponds to the formula (\ref{eq:Kramers}).
In fact, an approximation that is valid over a larger range of $\eta$ can be found as follows, using an explicit lower bound. We write $(x,y)=r(\cos \theta,\sin \theta)$, and $s=r^2$. Then, we assume that $\nu<2a^2$, use the fact that $\exp(a)\leq \exp(b)\leq 1$ if $a\leq b\leq 0$, and the inequalities
$$
-2\theta^2\leq \cos 2\theta-1,\quad \cos 2\theta\leq 1
$$
on $\theta\in[0,\pi/4]$ to show that
\begin{align}
T(\nu,\eta) = & \frac{2}{\eta^2} \int_{x=0}^{a}\int_{y=0}^{x} \exp \left( \frac{\nu(x^2-y^2)+(y^4-x^4)}{\eta^2} \right) \,dy\,dx \nonumber \\
> & \frac{1}{\eta^2}\int_{s=0}^{a^2}\int_{\theta=0}^{\pi/4} \exp \left( \frac{s(\nu-s)\cos 2\theta}{\eta^2}\right) \,\,ds\,d\theta\nonumber
\\
= & \frac{1}{\eta^2} \int_{s=0}^{a^2}\int_{\theta=0}^{\pi/4} \exp \left( \frac{(\nu^2/4-(s-\nu/2)^2)\cos 2\theta}{\eta^2}\right) \,\,ds\,d\theta\nonumber
\\
= & \frac{1}{\eta^2}\exp\left(\frac{\nu^2}{4\eta^2}\right) \int_{s=0}^{a^2}\int_{\theta=0}^{\pi/4} \exp \left( \frac{\nu^2}{4\eta^2}(\cos 2\theta -1)\right) \exp\left(-\frac{(s-\nu/2)^2\cos 2\theta}{\eta^2}\right) \,\,ds\,d\theta\nonumber
\\
> & \frac{1}{\eta^2}\exp\left(\frac{\nu^2}{4\eta^2}\right) \int_{\theta=0}^{\pi/4} \exp \left(- \frac{\nu^2\theta^2}{2\eta^2}\right) \,d\theta \int_{s=0}^{a^2}\exp\left(-\frac{(s-\nu/2)^2}{\eta^2}\right) \,\,ds.\nonumber
\end{align}
Evaluating these integrals we have
\begin{align}
T(\nu,\eta) > &
\frac{\pi\sqrt{2} }{4\nu} \exp \left( \frac{\nu^2}{4\eta^2}\right) \mathrm{erf}\left(\frac{\pi\nu\sqrt{2}}{8\eta}\right)\left[\mathrm{erf}\left(\frac{\nu}{2\eta}\right)+ \mathrm{erf}\left(\frac{2a^2-\nu}{2\eta}\right)\right].
\end{align}
Hence, for fixed $\nu>0$ and $2a^2>\nu$ we have $\mathrm{erf}(\nu/(2\eta))\approx\mathrm{erf}((2a^2-\nu)/(2\eta))\approx 1$ in the limit $\eta\rightarrow 0^+$, and hence
\begin{equation}
T(\nu,\eta) \geq \frac{\pi}{\nu\sqrt{2}} \exp \left( \frac{\nu^2}{4\eta^2} \right).\label{eq:Kramersnu}
\end{equation}
i.e. Kramers' formula (\ref{eq:Kramersnuplus}) is a lower bound in this case. On the other hand, if both $\nu$ and $\eta$ are small, and $\nu/\eta$ is $O(1)$, then $\mathrm{erf}\left(\frac{\pi\nu\sqrt{2}}{8\eta}\right)\approx \frac{\nu\sqrt{2\pi}}{4\eta}$ and so
\begin{align}
T(\nu,\eta) \geq & \frac{\pi}{\nu\sqrt{2}} \exp \left( \frac{\nu^2}{4\eta^2} \right)\mathrm{erf}\left(\frac{\pi\nu\sqrt{2}}{8\eta}\right)\approx
\frac{\pi^{3/2}}{4\eta} \exp \left( \frac{\nu^2}{4\eta^2} \right).
\label{eq:Kramersnuplustwo}
\end{align}
In summary, for small but fixed $\nu>0$ and $\eta\rightarrow 0^+$ we have
$$
T(\nu,\eta) \approx \frac{C_1}{\nu} \exp \left( \frac{\nu^2}{4\eta^2} \right),
$$
while for small $\nu$ and $\eta$ with $\nu/\eta$ of $O(1)$ we have
\begin{equation}
T(\nu,\eta) \approx \frac{C_2}{\eta} \exp \left( \frac{\nu^2}{4\eta^2} \right)
\label{eq:Kramersbigeta}
\end{equation}
for constants $C_1,C_2>0$.
In Figure~\ref{fig:summaryres} we summarise the scalings we have obtained for mean residence time in the low noise limit, near bifurcation from heteroclinic to excitable connections, while in Figure~\ref{fig:escapes} we numerically verify examples of these scalings.
\begin{figure}%
\centerline{
\setlength{\unitlength}{5cm}
\begin{picture}(2,1)
\put(0,0){\includegraphics[height=5cm]{figure4}}
\put(2.05,-0.05){$\nu$}
\put(1,1.05){$\eta$}
\put(0.4,0.2){$\dfrac{\ln\eta}{\nu}$}
\put(0.9,0.6){$\dfrac{1}{\eta}$}
\put(1.5,0.2){$\dfrac{1}{\nu}\exp{\dfrac{\nu^2}{4\eta^2}}$}
\put(1.15,0.6){$\dfrac{1}{\eta}\exp{\dfrac{\nu^2}{4\eta^2}}$}
\end{picture}
}
\caption{Schematic showing the asymptotic scalings of the residence time $T(\nu,\eta)$ at a node in the $(\nu,\eta)$ plane, where $\nu$ is the leading eigenvalue at the node and $\eta$ is the noise strength. In all cases $T$ is finite for $\eta>0$ but $T(\nu,\eta)\rightarrow \infty$ as $\eta\rightarrow 0^+$ for fixed $\nu$; how fast this diverges depends qualitatively on whether the associated connection is heteroclinic ($\nu<0$) or excitable ($\nu>0$).}
\label{fig:summaryres}%
\end{figure}
\subsection{Simulation of escape for a one dimensional SDE}
\label{sec:onedimsims}
To illustrate the above scalings, we consider the SDE (\ref{eq:sdeescape}) for the potential \eqref{eq:potential}, i.e.
\begin{equation}
dy = (-y^4+2y^2-\nu)y dt+\eta dw.
\label{eq:yonly}
\end{equation}
We choose an $a>0$ that is away from all equilibria (typically we use $a=0.5$) and numerically compute the mean escape time
\begin{equation}
T(\nu,\eta) = \langle \{T~:~|y(T)|=a \mbox{ and }|y(t)|<a \mbox{ for all }0<t<T\}\rangle
\end{equation}
where the mean is taken over the distribution of initial $y(0)$ and over realizations of the noise process in (\ref{eq:yonly}). Using a stochastic Euler approximation with timestep $h=0.01$ and $n=1000$ realizations for each calculation gives approximations of $T(\nu,\eta)$ as a function of $\nu$ and $\eta$; see Figure~\ref{fig:escapes}. We verify agreement of the measured mean residence times with the predicted scalings in three cases. For $\nu=-0.01$ we show the best fit (black curve) to $T=A \ln(\eta)+B$ with $A=-96$ and $B=-369$; this compares well with the prediction $A=1/\nu=-100$ and $B=(-1-\ln|\nu|+\ln(1-\nu/(2a^2)))/\nu=-362.5$ from equation \eqref{eq:nultzeroupper}.
For $\nu=0$ we show the best fit (red curve) to $T=A/\eta+B$ with $A=1.152$ and $B=2.378$; again this compares well with the prediction $A=\sqrt{\pi/2}=1.2533$ from equation \eqref{eq:scalingalpha}.
For $\nu=0.01$ we show the best fit (blue curve) to $T=A/\eta\exp(B/\eta^2)$ with $A=1.4424$ and $B=2.027\times 10^{-5}$; compare with $B=\nu^2/4=2.5\times 10^{-5}$ in \eqref{eq:Kramersbigeta}.
In the first two cases we also find good agreement between fitting parameters and predicted values. In the third case we do not have a tight asymptotic fit but nevertheless, empirically there is a good fit to the scaling formula over this range. For the third case, we expect that the Kramers formula is more accurate for the range $2\eta<\nu$, though the timescales become extremely long.
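The simulation just described can be sketched in a few lines of Python; the version below is our own minimal re-implementation (with smaller sample sizes than the $n=1000$ realizations used in the paper), estimating the mean escape time by the Euler--Maruyama method:

```python
import math
import random

def mean_escape_time(nu, eta, a=0.5, dt=0.01, n_runs=200, t_max=1.0e4, seed=1):
    """Euler-Maruyama estimate of the mean first time |y| = a for
    dy = (-y^4 + 2 y^2 - nu) y dt + eta dw, started from y(0) = 0."""
    rng = random.Random(seed)
    sqdt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_runs):
        y, t = 0.0, 0.0
        while abs(y) < a and t < t_max:
            y += (-y**4 + 2.0 * y**2 - nu) * y * dt + eta * sqdt * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_runs

# At bifurcation (nu = 0): escape slows markedly as the noise amplitude shrinks.
T_eta_03 = mean_escape_time(0.0, 0.3)
T_eta_015 = mean_escape_time(0.0, 0.15)
```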
\begin{figure}%
\begin{center}
\includegraphics[width=12cm]{figure5}
\end{center}
\caption{The open circles show numerical estimation of the mean residence time $T$ from $y=0$ in (\ref{eq:yonly}) as a function of noise $\eta$ and parameter $\nu$: black is $\nu=-0.01$, red is $\nu=0$ and blue is $\nu=0.01$. We use first arrival at $|y|=a=0.5$ to detect escape.
For $\nu=-0.01$ we show the best fit (black curve) to $T=A \ln(\eta)+B$.
For $\nu=0$ we show the best fit (red curve) to $T=A/\eta+B$.
For $\nu=0.01$ we show the best fit (blue curve) to $T=A/\eta\exp(B/\eta^2)$: see text for more details.}%
\label{fig:escapes}%
\end{figure}
\section{Transition probabilities and multiple independent escape processes}
\label{sec:switching}
In order to understand transition probabilities we must consider noisy network attractors where there is more than one possible connection from a given node. We start by discussing the bi-directional ring from Section~\ref{sec:3graph}. For small noise, it turns out that the switching can be well-approximated by multiple independent escape processes: see Section~\ref{sec:multiplesc}. This gives evidence supporting Conjecture~\ref{conj:main}.
\subsection{Example: bi-directional ring around three nodes} \label{sec:example}
Consider a noisy network attractor that realises the bi-directional ring shown in Figure~\ref{fig:3graphs}(b). For simplicity we assume there is full permutation symmetry of the three nodes in the noise-free system. This means that there are two independent outgoing connections at each node, and the dynamics on these connections is the same. However, we allow the noise amplitudes $\eta_{cw}>0$ for clockwise (resp. $\eta_{acw}>0$ for anticlockwise) transitions to differ.
Using the system detailed in Appendix~\ref{app:bidirthree}, equation~\eqref{eq:C3system2}, we perform some numerical simulations on varying $\eta_{cw}$, $\eta_{acw}$ and a parameter $\nu$ for the cases where connections in the network are (a) heteroclinic ($\nu<0$), (b) at bifurcation ($\nu=0$) and (c) excitable ($\nu>0$). In each case, we verify that the residence time and transition probabilities for clockwise (resp. anticlockwise) transitions appear to vary continuously and monotonically with the amplitudes $\eta_{cw}$ (resp. $\eta_{acw}$): Figure~\ref{fig:example3bidir} shows this for the bifurcation case (b). Figure~\ref{fig:3plots} (a), (c) and (e) show all three cases.
\begin{figure}
\centerline{
{\includegraphics[width=80mm]{figure6}}
}
\caption{Contours showing the mean residence time (solid lines) and transition probability $\pi_{acw}$ (dashed lines) for anticlockwise transitions for the system described in Section~\ref{sec:example} for the critical (bifurcation) case ($\nu=0$), as a function of the noise amplitudes $\eta_{cw}$ (resp. $\eta_{acw}$) that excite transitions in the clockwise (resp. anticlockwise) directions.
Note that the mean residence times are constant on closed curves around the origin while the lines of constant transition probability are approximately radial. Similar plots are obtained for parameters that give heteroclinic or excitable networks in Figure~\ref{fig:3plots}.}
\label{fig:example3bidir}
\end{figure}
We find that the choice of connection is well modelled by multiple independent escape processes. This gives insight into the more general problem. We note that for a fully nonlinear SDE with a noisy network attractor it may however be very difficult to estimate transition probabilities analytically.
\subsection{Multiple independent escape processes}
\label{sec:multiplesc}
Consider $n$ independent escape processes, where we escape in direction $m=1,\ldots,n$ after a time given by a continuous random variable $T_m>0$ with distribution $\rho_m(T_m)$. A {\em multiple independent escape process} means that the first ``escape'' stops the process and identifies one particular direction of escape. More precisely, we say there is {\em escape in direction $k$ at time $T$} in the case $T=T_k<T_i$ for all $i\neq k$. The first escape time $T$ and the escape direction $k$ are then the random variables
\begin{equation}
T=\min(T_1,\ldots,T_n),~~k=\mathrm{argmin}(T_1,\ldots,T_n).
\label{eq:firstescape}
\end{equation}
Let $\rho$ be the distribution of the random variable $T$ and $\kappa(k)$ the probability that the escape direction is $k$. The first escape time is an order statistic \cite{David03}, and one can find $\rho$ and $\kappa$ from the distributions $\rho_m$ of the individual escape times $T_m$ as follows:
\begin{lemma}
\label{lem:firstescape}
The distribution $\rho(T)$ of first escape times and the probability $\kappa(k)$ are given by
\begin{eqnarray*}
\rho(T) &=& \sum_{k=1}^n \left[\rho_k(T) \prod_{m=1,m\neq k}^{n} \left(1-\int_{t_m=0}^{T} \rho_m(t_m)\,dt_m\right) \right]\\
\kappa(k) &= &\int_{t_k=0}^{\infty} \rho_k(t_k)\prod_{m=1,m\neq k}^{n} \left(1-\int_{t_m=0}^{t_k} \rho_m(t_m)\,dt_m\right) \,dt_k
\end{eqnarray*}
\end{lemma}
\noindent{\bf Proof:}~ This can be seen by noting that the density of $T$ is the sum over $k$ of the densities for the event that the first escape happens in the $k$th direction at time $T$, i.e. that $T_k=T$ and $T_m>T$ for all $m\neq k$.
\hfill{\bf QED}
~
Standard results on order statistics imply that if the $T_k$ are all exponentially distributed as
$$
\rho_k(T_k)=\frac{1}{r_k}\exp(-T_k/r_k)
$$
for $r_k>0$ then
\begin{equation}
\rho(T) = \frac{1}{r}\exp(-T/r)
\label{eq:exptau}
\end{equation}
where $1/r=\sum_{m=1}^{n} (1/r_m)$. In other words, if the $T_m$ are exponentially distributed then so is $T$, and its escape rate $1/r$ is the sum of the escape rates of the individual processes. For this case we can compute
\begin{equation}
\kappa(k) = \frac{1/{r_k}}{\sum_{m=1}^{n} 1/{r_m}}= \frac{r}{r_k}.
\label{eq:expkappa}
\end{equation}
In this case the direction with the shortest mean escape time is the direction in which escapes occur most frequently.
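This exponential special case is easy to check numerically. The following Monte Carlo sketch (my own, with illustrative rates $r_k$; not code from the paper) verifies (\ref{eq:exptau}) and (\ref{eq:expkappa}) by sampling:

```python
import random

random.seed(0)
r = [1.0, 2.0, 4.0]                     # mean escape times r_k for each direction
n_trials = 200_000

first = [0] * len(r)
total_T = 0.0
for _ in range(n_trials):
    # draw independent exponential escape times T_k with mean r_k
    times = [random.expovariate(1.0 / rk) for rk in r]
    T = min(times)
    k = times.index(T)                  # direction of first escape
    first[k] += 1
    total_T += T

rate = sum(1.0 / rk for rk in r)        # 1/r = sum_m 1/r_m
print("mean T   (MC vs r):", total_T / n_trials, 1.0 / rate)
for k, rk in enumerate(r):
    print(f"kappa({k}) (MC vs r/r_k):", first[k] / n_trials, (1.0 / rk) / rate)
```

The empirical mean of $T$ and the empirical direction frequencies match $r$ and $r/r_k$ to within sampling error.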
For more general distributions of the individual processes, even if they remain independent, $\rho$ and $\kappa$ are not usually explicitly computable from the integral forms, and they may be counter-intuitive for some choices of the distributions $\rho_k$, especially if these are multi-modal or their tails have different weights. For example, suppose $n=2$ with $\rho_1(t_1) = 0.9\,\delta(t_1-1)+0.1\,\delta(t_1-100)$ and $\rho_2(t_2)=\delta(t_2-2)$. Then $E(t_1)=10.9$ and $E(t_2)=2$, so the mean escape time in direction 1 is much longer than in direction 2. On the other hand, the probability of the first escape occurring in direction 1 is $0.9$!
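The numbers in this example can be confirmed directly; the short script below (my own) evaluates the mean and the first-escape probability exactly for the atomic distributions above:

```python
# Exact computation for the two-direction example in the text:
# t1 = 1 with probability 0.9, t1 = 100 with probability 0.1; t2 = 2 always.
outcomes_t1 = [(1.0, 0.9), (100.0, 0.1)]   # (value, probability) pairs for t1
t2 = 2.0

E_t1 = sum(v * p for v, p in outcomes_t1)
kappa1 = sum(p for v, p in outcomes_t1 if v < t2)  # P(first escape is direction 1)
print("E[t1]    =", E_t1)     # 10.9, much slower on average than E[t2] = 2
print("kappa(1) =", kappa1)   # 0.9: direction 1 still escapes first most often
```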
For the noise-induced escape processes we consider, the distributions are determined by escapes from potential wells (with exponential tails for low noise) or from near saddles (with possibly faster-decaying tails). In both cases the distributions will not be exponential, although for escape from a potential well there is an exponential tail whose rate corresponds to the Kramers escape rate.
\subsection{Multiple escape times and transition probabilities}
\label{sec:switchingodes}
We illustrate a multiple independent escape process for a system of $n$ SDEs
\begin{equation}
dy_{k} = (-y_{k}^4+2y_{k}^2-\nu_{k})y_{k} dt+\eta_{k} dw_{k}.
\label{eq:yk}
\end{equation}
where $k=1,\ldots,n$, the $w_{k}$ are independent Brownian processes, and $\nu_k,\eta_k$ are parameters; we assume that the process starts at $y(0)=0$; cf (\ref{eq:sdeescape}). We choose a $K>0$ that is away from all equilibria (typically we use $K=0.5$). There is escape in direction $k$ at time $\tau_k$ if
$$
|y_k(\tau_k)|=K,\mbox{ and } |y_{k}(t)|<K\mbox{ for all }0<t<\tau_k
$$
and we define $\tau$ to be the first escape time and $k$ the direction of first escape as in (\ref{eq:firstescape}).
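A minimal Euler--Maruyama sketch of this multiple escape process follows; the parameters, trial count and the helper name first_escape are illustrative choices of mine, much cruder than the settings used for the figures:

```python
import math, random

random.seed(1)

def first_escape(nu, eta, K=0.5, h=0.05, t_max=500.0):
    """Simulate dy_k = (-y_k^4 + 2 y_k^2 - nu_k) y_k dt + eta_k dw_k from y = 0
    and return (tau, k): the first time some |y_k| reaches K, and that direction."""
    n = len(nu)
    y = [0.0] * n
    t = 0.0
    while t < t_max:
        for k in range(n):
            drift = (-y[k] ** 4 + 2.0 * y[k] ** 2 - nu[k]) * y[k]
            y[k] += drift * h + eta[k] * math.sqrt(h) * random.gauss(0.0, 1.0)
            if abs(y[k]) >= K:
                return t + h, k
        t += h
    return t_max, -1   # no escape within t_max

# Estimate mean first escape time and direction probabilities for n = 2.
trials = 200
results = [first_escape(nu=[-0.01, -0.01], eta=[0.2, 0.1]) for _ in range(trials)]
taus = [tau for tau, k in results]
p_dir0 = sum(1 for tau, k in results if k == 0) / trials
print("mean tau:", sum(taus) / trials, "  P(escape in direction 0):", p_dir0)
```

Direction 0 has the larger noise amplitude here, so first escapes should occur in that direction more often, in line with the discussion above.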
Using the multiple independent escape process (\ref{eq:yk}) with $n=2$ we can approximate the behaviour of switching for the example of the bi-directional ring on three nodes discussed in Section~\ref{sec:example}.
Figure~\ref{fig:3plots}, left column (a,c,e), shows the mean residence times and switching probabilities for the noisy network attractor on varying $\eta_1=\eta_{cw}$ and $\eta_2=\eta_{acw}$. The right column (b,d,f) shows the mean first escape times and escape probabilities for the multiple escape process (\ref{eq:yk}) with the corresponding $\eta$. The computations are performed using a stochastic Euler integrator with timestep $h=0.05$. The values of $\eta_k$ are discretized into $29$ steps in each direction.
Contours of mean first escape time $T$ (solid lines) and of the transition probability $\pi_{cw}$ (dashed lines) are shown in Figure~\ref{fig:3plots}. Subfigures (a,b), with $\nu<0$, correspond to $y=0$ being linearly unstable and a noisy heteroclinic connection with two outgoing directions; subfigures (e,f), with $\nu>0$, correspond to $y=0$ being a sink with a small basin and a noisy excitable connection with two outgoing directions; subfigures (c,d) show the bifurcation case $\nu=0$. Observe that there is good quantitative and qualitative agreement in all three cases illustrated in Figure~\ref{fig:3plots}. From these figures, we note the following:
\begin{itemize}
\item
The mean residence time $T$ decreases monotonically as $\eta_{cw}$ or $\eta_{acw}$ increases. Also, $T\rightarrow \infty$ as $\max(\eta_{cw},\eta_{acw})\rightarrow 0^+$ in all cases.
\item
The transition probability $\pi_{cw}$ increases monotonically with $\eta_{cw}$ for fixed $\eta_{acw}$; moreover, $\pi_{cw}\rightarrow 0$ as $\eta_{cw}/\eta_{acw}\rightarrow 0$ and $\pi_{cw}\rightarrow 1$ as $\eta_{cw}/\eta_{acw}\rightarrow \infty$.
\end{itemize}
In summary, the numerical results in the left column of Figure~\ref{fig:3plots} suggest that Conjecture~\ref{conj:main} holds for all three cases of this symmetrised network where only the noise amplitudes break the symmetry. More precisely, by suitable choice of noise amplitudes $\eta_{cw},\eta_{acw}$ one can realise any transition probability $\pi_{cw}\in(0,1)$ and any sufficiently long mean residence time $T>0$. As expected from the discussion in Section~\ref{sec:residence} the scaling properties of $\pi_{cw}$ and $T$ depend strongly on whether the network is heteroclinic or excitable, near the boundaries $\pi_{cw}=0,1$ and $T=\infty$.
\begin{figure}%
\begin{center}
\includegraphics[width=130mm]{figure7}
\end{center}
\caption{(a,c,e): Contours of mean residence time $T$ (solid lines) and transition probability $\pi_{cw}$ (dashed lines) for clockwise motion in the system~\eqref{eq:C3system2} describing the bi-directional ring shown in Figure~\ref{fig:3graphs}(b). (b,d,f): Contours of mean first escape time (solid lines) and probability of first escape in direction $2$ (dashed lines) for the system (\ref{eq:yonly}) on varying $\eta_1=\eta_{cw}$, $\eta_2=\eta_{acw}$ with (a,b) $\nu=-0.01$, (c,d) $\nu=0$ and (e,f) $\nu=0.01$. Note that in all cases there is good agreement. For the excitable case (e,f) there is a steep rise in residence time in the region where $\max\eta_i<\nu/2=0.005$.}%
\label{fig:3plots}%
\end{figure}
\subsection{Approximating multiple escape processes for excitable networks}
The formulae from Lemma~\ref{lem:firstescape} suggest that in general one cannot obtain the mean escape time or direction of escape from a multiple escape process simply from knowledge of the mean escape time of each process: one needs knowledge of the distribution of escape times for the individual processes. However, in the case of an excitable network where there are approximately exponential distributions of residence times, this is possible.
Figure~\ref{fig:expplots}(a) shows the mean residence times and transition probabilities for the excitable case of Figure~\ref{fig:3plots}(f), and Figure~\ref{fig:expplots}(b) shows the corresponding quantities computed from the distributions (\ref{eq:exptau},\ref{eq:expkappa}), using a best fit to the exponential tail of a single escape process. More precisely, we use $T\approx (A/\eta) \exp(B/\eta^2)$ as in (\ref{eq:Kramersbigeta}) for the escape time $T$ in one direction with $\eta=0.01$, using $A=1.4$ and $B=2.430\times 10^{-5}$; cf. Figure~\ref{fig:escapes}.
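The construction just described can be sketched as follows, assuming each direction's escape time is exponential with the fitted Kramers-type mean $(A/\eta)\exp(B/\eta^2)$ and combining the two directions via (\ref{eq:exptau}) and (\ref{eq:expkappa}); the function names are mine:

```python
import math

# Fitted constants quoted in the text (fitted at eta = 0.01).
A, B = 1.4, 2.430e-5

def mean_escape(eta):
    """Kramers-type fit for the mean escape time in one direction."""
    return (A / eta) * math.exp(B / eta ** 2)

def combined(eta_cw, eta_acw):
    """Treat both directions as exponential and combine first escapes."""
    r1, r2 = mean_escape(eta_cw), mean_escape(eta_acw)
    rate = 1.0 / r1 + 1.0 / r2
    T = 1.0 / rate                  # mean first escape time
    kappa_cw = (1.0 / r1) / rate    # probability the first escape is clockwise
    return T, kappa_cw

T, kappa_cw = combined(0.01, 0.02)
print("T =", T, " kappa_cw =", kappa_cw)
```

Evaluating this on a grid of $(\eta_{cw},\eta_{acw})$ values produces contour data of the kind plotted in Figure~\ref{fig:expplots}(b).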
\begin{figure}%
\begin{center}
\includegraphics[width=130mm]{figure8}
\end{center}
\caption{(a) Contours showing mean first escape time $T$ (solid lines) and probability of first escape in direction $2$ (dashed lines) for the system (\ref{eq:yonly}) on varying $\eta_1=\eta_{cw}$, $\eta_2=\eta_{acw}$ with $\nu=0.01$, as in Figure~\ref{fig:3plots}(f). (b) Contours of probabilities of escape in direction $2$ (dashed lines) and mean first escape time (solid lines) for exponential distributions fitted to empirically determined means: see text for details.}
\label{fig:expplots}%
\end{figure}
\section{Discussion}
\label{sec:discuss}
There are a number of subtle effects of noise on heteroclinic networks that have been discussed in previous work~\cite{Bakhtin,ArmStoKir03}. This paper expands and extends this work to noisy excitable networks that are created by bifurcation from heteroclinic networks in the noise-free case. Clearly, the mean properties of the macroscopic randomness (that is, the residence times at nodes and the transition probabilities between nodes) depend on the (anisotropic) noise amplitudes. Conjecture~\ref{conj:main} suggests that, {\em vice versa}, one can select noise amplitudes so that a noisy network approximates a Markov process with given transition probabilities and mean residence times. We verify this for a simple case of a bi-directional ring network; in future work we will explore this for more complex networks in the presence of noise perturbations.
It was noted in \cite{AshPos15} that noisy heteroclinic cycles will have approximately log normal distributions of residence times \cite{StoHol90} while excitable cycles will have exponential tails to the distributions of residence times. We believe that the \emph{distributions} of macroscopic fluctuations are much more difficult to determine from the microscopic noise distributions than the means. Quite complex distributions may result, for example if there are multiple connections between the same pair of nodes.
For the weak noise case, we find numerical evidence that the residence times and transition probabilities can be characterised by modelling the transitions between nodes in the network as multiple independent escape processes. This is not too much of a surprise, at least if the Jacobian is diagonalisable and the principal axes of noise correspond to these directions - an analysis of the general case will probably be much more complicated. In Section~\ref{sec:residence} we use the approximation of a single escape process to obtain some asymptotic scalings of mean residence times - it will be a challenge to find more accurate and justified asymptotic expressions, especially for the excitable case, and to obtain asymptotic expressions for higher moments of the distributions.
We work here with networks where the transition probabilities are memoryless - that is, they are well-modelled by a first order Markov chain. In previous work~\cite{AshPos13} we discussed an example where this is not the case and noise-induced `lift-off'~\cite{ArmStoKir03} causes longer-term correlations in the sequence of nodes visited. It will be a challenge to understand properties of the long-term correlations and, for instance, whether they affect the scalings of residence times at the nodes.
There are many more open problems that deserve a detailed analysis - indeed, an appropriate definition of a noisy network attractor is still debatable. Should this be a statistical attractor whose empirical measures are close to delta functions on the nodes, or is a more stringent definition appropriate? Given a good definition, progress on Conjecture~\ref{conj:main} may be possible in a general setting.
Finally, we mention some potential applications. Heteroclinic network models have been used for modelling cognitive functions \cite{ashwin_karabacak_nowotny_2011,bick_rabinovich_2010,komarov_osipov_suykens_09,neves_timme_12,Rabinovich2001} as they have the ability to perform finite state computations, as well as the capacity to translate microscopic random fluctuations into macroscopic randomness. This randomness is manifested both in terms of the residence times at nodes of the network and in terms of the transition probabilities between nodes and hence choice of possible paths around the network. In this paper, we have highlighted that this work should extend in a natural way to excitable networks. Excitable networks may indeed be a more natural way to understand computations.
\subsection*{Acknowledgments}
We thank many people for stimulating conversations that contributed to the development of this paper: in particular Chris Bick, Nils Berglund, Mike Field, John Terry, Ilze Ziedins. We thank the London Mathematical Society for support of a visit of CMP to Exeter, and the University of Auckland Research Council for supporting a visit of PA to Auckland during the development of this research. PA gratefully acknowledges the financial support of the EPSRC via grant EP/N014391/1.
\newpage
% End of arXiv:1602.06135, ``Quantifying noisy attractors: from heteroclinic to excitable networks'' (nlin.AO, 2016-02-22).
% arXiv:1807.03423, ``Maximal subgroup growth of some metabelian groups''.
\begin{abstract}
Let $m_n(G)$ denote the number of maximal subgroups of $G$ of index $n$. An upper bound is given for the degree of maximal subgroup growth of all polycyclic metabelian groups $G$ (i.e., for $\limsup \frac{\log m_n(G)}{\log n}$, the degree of polynomial growth of $m_n(G)$), and a condition is given for when this upper bound is attained. For $G = \mathbb{Z}^k \rtimes_A \mathbb{Z}$, where $A \in GL(k,\mathbb{Z})$, it is shown that $m_n(G)$ grows like a polynomial of degree equal to the number of blocks in the rational canonical form of $A$; the leading term of this polynomial is the number of distinct roots (in $\mathbb{C}$) of the characteristic polynomial of the smallest block.
\end{abstract}
\section{Introduction}
Let $G$ be a f.g.\ (finitely generated) group, and let $\subgr(G)$ denote the number of subgroups of $G$ of index $n$.
A highlight in subgroup growth is the theorem that gives an algebraic characterization of what it means for
the function $\subgr(G)$ to be bounded above by a polynomial in $n$, the so-called ``PSG Theorem'' (\textbf{p}olynomial
\textbf{s}ubgroup \textbf{g}rowth), which was proved by Lubotzky, Mann, and Segal.
See \cite{Lubotzky2003} and the references there at the end of Chapter 5.
Much progress has been made in the area of subgroup growth, but there is no known general formula for
calculating $\deg(G)$, the degree of polynomial growth of a given PSG (polynomial subgroup growth) group.
In \cite{Shalev_On_the_degree} however, Shalev gives formulas for certain metabelian
groups and also for
all f.g.\ virtually abelian groups. Here,
\[
\deg(G) = \inf \{ \alpha | \subgr(G) \leq n^\alpha \text{ for all large $n$} \} = \limsup \frac{\log \subgr(G)}{\log n}.
\]
When it comes to \emph{maximal} subgroup growth, much progress also has been made. See for example
\cite{Mann1996}, where Mann relates polynomial maximal subgroup growth in profinite groups
to having a positive probability of topologically generating the group
by picking a finite subset at random. See also the more recent \cite{Jaikin-Zapirain2011}, where
Jaikin-Zapirain and Pyber give a ``semi-structural characterization'' of polynomial maximal subgroup growth.
However, just as in subgroup growth, there are only a few groups for which we know the exact degree of maximal subgroup growth.
It \emph{is} known for
free prosolvable groups of finite rank; this was determined by Lucchini, Menegazzo and Morigi
in \cite{Lucchini2006} together with Morigi's work in \cite{Morigi2006}.
Inspired by the progress Shalev made for calculating $\deg(G)$ in \cite{Shalev_On_the_degree}, I have worked on calculating the degree
of maximal subgroup growth. Notation:
\[
\maxsubgr(G) = \text{the number of maximal subgroups of $G$ of index $n$}
\]
\[
\mdeg(G) = \inf \{ \alpha | \maxsubgr(G) \leq n^\alpha \text{ for all large $n$} \} = \limsup \frac{\log \maxsubgr(G)}{\log n}
\]
How can we determine $\mdeg(G)$, for given $G$ in some nice class of groups?
How is $\mdeg(G)$ determined by the algebraic structure of $G$? This paper answers these questions for
certain metabelian groups.
One of the two main results in this paper is the following theorem, which
gives an upper bound for $\mdeg(G)$ for all polycyclic metabelian groups. This theorem also gives a condition for when the upper bound is attained:
\begin{nntheorem}
Let $G$ be a group with f.g.\ abelian normal subgroup $N$. Suppose $G/N$ is an abelian, $\ell_0$-generated group
of torsion-free rank $\ell$. After choosing a generating set for $G/N$, $N$ becomes a $\ensuremath{\mathbb{Z}}[x_1,\ldots,x_{\ell_0}]$-module.
Let $R = \ensuremath{\mathbb{Z}}[x_1,\ldots,x_{\ell_0}]$.
Let $I = (x_1 - 1, x_2 - 1, \ldots, x_{\ell_0} - 1)_{R}$. Let $t$ be
the torsion-free rank of (the abelian group) $N/IN$, and let $d = d_{\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} R}(\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N)$
(the minimal number of generators of $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N$ as a $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} R$-module).
Then
\[
\mdeg(G) \leq \max \{\ell + t - 1, d \},
\]
with equality if both $G \cong N \rtimes G/N$ and $\ell \geq 1$.
\end{nntheorem}
\noindent This is Theorem~\ref{thm:(f.g. abelian) by (f.g. abelian)} below.
Of course, $\mdeg(G)$ is just an approximation of how fast $\maxsubgr(G)$ grows as $n \to \infty$. Sometimes, we can be more precise
than just giving $\mdeg(G)$. For f.g.\ groups of the form
\[
G = \text{(arbitrary abelian)} \rtimes \ensuremath{\mathbb{Z}},
\]
the growth type (see Definition \ref{def:growth type}) of $\maxsubgr(G)$ is given in Proposition \ref{prop:growth type of (arbitrary abelian) by Z}.
When we specialize to groups of the form
\[
G = \text{(f.g.\ abelian)} \rtimes \ensuremath{\mathbb{Z}} = N \rtimes \ensuremath{\mathbb{Z}},
\]
we can be even more precise than giving the growth type of $\maxsubgr(G)$. Note that as $N$ is a normal subgroup of $G$,
$N$ becomes a $\ensuremath{\mathbb{Z}}[x]$-module. So $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N$ is a f.g.\ module over the PID $\ensuremath{\mathbb{Q}}[x]$. In this case,
we have the following theorem, the other main result of this paper:
\begin{nntheorem}
Let $G = N \rtimes \ensuremath{\mathbb{Z}}$, with $N$ f.g.\ as an abelian group. Let
\[
\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N = \bigoplus_{j=1}^d \ensuremath{\mathbb{Q}}[x]/(a_j),
\]
where $a_1 | a_2 | \cdots | a_d$ as provided by the structure theorem of f.g.\ modules over PIDs (so with $a_1$ not a unit).
So $d = d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N)$.
Also, let $\rho_1$ be the number of
(distinct) roots of $a_1$ in $\ensuremath{\mathbb{C}}$. Then
\[\begin{aligned}
\maxsubgr(G) & \leq \rho_1 n^{d} + O(n^{d-1}) & \text{for all large $n$, and}\\
\maxsubgr(G) & \geq \rho_1n^{d} & \text{for infinitely many $n$.}
\end{aligned}
\]
\end{nntheorem}
This is
Theorem~\ref{thm:maximal subgroup growth of (f.g. abelian) by Z}.
The result stated in the second paragraph of the abstract is Corollary~\ref{cor:max subgr growth of Z^k semidir_A Z}.
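As a concrete illustration of that corollary (this example is mine, not taken from the paper), consider the simplest non-trivial case of a single block:

```latex
% Hypothetical example: one rational canonical block, two distinct roots.
Let $A \in GL(2,\mathbb{Z})$ be the companion matrix of $x^2 - 3x + 1$ and let
$G = \mathbb{Z}^2 \rtimes_A \mathbb{Z}$. Then
$\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} \mathbb{Z}^2 \cong \ensuremath{\mathbb{Q}}[x]/(x^2 - 3x + 1)$,
so $d = 1$ and $a_1 = x^2 - 3x + 1$, which has $\rho_1 = 2$ distinct roots
(its discriminant is $5$). The theorem above then gives
\[
\maxsubgr(G) \leq 2n + O(1) \text{ for all large } n,
\qquad
\maxsubgr(G) \geq 2n \text{ for infinitely many } n,
\]
i.e.\ $\maxsubgr(G)$ grows like a linear polynomial with leading coefficient $2$.
```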
The general method used here for finding the maximal subgroup growth of metabelian groups $N \rtimes A$
naturally falls into two parts:
\begin{itemize}
\item find the maximal $\ensuremath{\mathbb{Z}}[A]$-submodules of $N$
\item count derivations (1-cocycles) from $A$ to simple quotients of $N$
\end{itemize}
See Lemma~\ref{lem:abelian by something - counting maximal subgroups by derivations}.
The idea of reducing subgroup growth questions of metabelian groups to commutative algebra is not new. See Chapter 9
in \cite{Lubotzky2003}. Also, submodule growth has been considered
by Segal before in \cite{Segal_growth_of_ideals_and_submodules} and \cite{Segal_polynomial_ring}. Further, the use of
derivations in counting subgroups is well established in subgroup growth; see the first page of Chapter 1 in
\cite{Lubotzky2003} as well as Section 1.3.
Section \ref{sec:notation} gives notation (most but not all standard) which is used throughout the paper.
Section~\ref{sec:preliminary results} shows how derivations can be counted and used for counting maximal subgroups in metabelian groups. It also
contains several miscellaneous results (mostly known) that will be needed later.
The goals of Section~\ref{sec:finitely generated Z[x]-modules} are to describe the maximal submodule growth of
(a) all $\ensuremath{\mathbb{Z}}_D[x]$-modules (with $D$ finite) which are finitely generated as $\ensuremath{\mathbb{Z}}_D$-modules
and (b) all finitely generated $\ensuremath{\mathbb{Z}}[x]$-modules. Section~\ref{sec:fin gen modules over polynomial ring with several variables} shows how to
count the maximal submodules of $\ensuremath{\mathbb{Z}}[x_1, x_2, \ldots, x_\ell]$-modules, which are finitely generated as abelian groups.
Section~\ref{sec:certain metabelian groups} contains the main results of the paper, on the maximal subgroup growth of certain metabelian groups.
It also works out the exact maximal subgroup growth of an example.
Finally, note that most of the work presented in this paper was done while I was a graduate student at Binghamton University and
is from \cite{kelley}, my dissertation.
\subsection{Notation and Terminology}
\label{sec:notation}
\noindent $\subgr(G)$: the number of subgroups of $G$ of index $n$\\
$\maxsubgr(G)$: the number of maximal subgroups of $G$ of index $n$\\
$\maxsubmod(N)$: the number of maximal submodules of $N$ of index $n$\\
$\submodisoto{S}(N)$: See Definition \ref{def:submodisoto - number of submodules with quotient iso to etc.}\\
$\mtriv(N)$, $\mnontriv(N)$: See Definition \ref{def:mtriv and mnontriv}\\
\noindent $\Der(G,A)$: the set of derivations (see below) from $G$ to $A$ \\
\noindent $H \leq_n G$: $H$ is a subgroup of $G$ of index $n$\\
$H \leq_f G$ ($H \ensuremath{\trianglelefteq}_f G$): $H$ is a subgroup of $G$ of finite index (resp.\ and is normal)\\
$I \ideal_{\max} R$: $I$ is a maximal ideal of $R$\\
$M \leq_{\max} N$: $M$ is a maximal submodule\footnote{Occasionally, the symbols `$\leq_{\max}$' will mean `maximal subgroup of'. Hopefully, the
usage will be clear from context.} of $N$\\
$(a_1,\ldots,a_k)_R:$ the ideal of $R$ generated by $a_1,\ldots,a_k \in R$\\
\noindent $\mdeg(G)$: the degree of maximal subgroup growth\footnote{This is exactly what Mann
denotes by $s^*(G)$ on page 449 of \cite{Mann1996}. Assuming
$\maxsubgr(G) \geq 1$ for infinitely many $n$, then this also
equals what Mann denotes by $s(G)$ on page 448 of that paper:
$\limsup ((\log \maxsubgr(G))/\log n) =$ \newline $ \inf \{s | \maxsubgr(G) \leq Cn^s, \text{ for some C} \}$.
Note that this differs from what Lubotzky defines on page 2 of \cite{Lubotzky2002}
as the ``\thinspace`polynomial degree' of the
rate of growth of $\maxsubgr(G)$'':
$\mathcal{M}(G) := \sup_{n \geq 2} ((\log \maxsubgr(G))/ \log n)$.} of a group G:
\[
\mdeg(G) = \inf \{ \alpha | \maxsubgr(G) \leq n^\alpha \text{ for all large $n$} \}.
\]
$\mmoddeg(N)$: the degree of maximal submodule growth\footnote{Of course,
this depends on the ring $R$ (which is implicit, given $N$). Hence, though the notation $\mmoddeg_R(N)$ would be appropriate,
we will not use the subscript $R$, especially since it is understood from the context.} of an $R$-module $N$:
\[
\mmoddeg(N) = \inf \{ \alpha | \maxsubmod(N) \leq n^\alpha \text{ for all large $n$} \}.
\]
\noindent $N \cong_R M$: $N$ and $M$ are isomorphic as $R$-modules\\
$d_R(N)$: the minimal size of an $R$-module generating set for $N$\\
$\ensuremath{\mathbb{Q}} N$: $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N$\\
$\ensuremath{\mathbb{Z}}_D$: the localization of $\ensuremath{\mathbb{Z}}$ at the (finite) set of primes $D$\\
Suppose $G$ acts on the abelian group $A$ on the left. Recall that a \emph{derivation} (also called a 1-cocycle, or crossed homomorphism)
is a function $\delta:G \to A$ that satisfies\footnote{If $A$ is not assumed to be abelian, and if $G$ instead acts on the right,
the condition changes to $\delta(gh) = \delta(g)^h \cdot \delta(h)$ for all $g, h \in G$.}
$\delta(gh) = \delta(g) + g\cdot \delta(h)$ for all $g, h \in G$.
Almost all groups that appear in this paper will be finitely generated (f.g.); the main exception is $\ensuremath{\mathbb{Q}}$, which appears as a field.
In the following definition, the (increasing and eventually positive) function $g$ has domain a subset of
the positive integers of the form $\{k, k+1, k+2, \ldots\}$.
\vspace{-.05in}
\begin{definition}
\label{def:growth type}
Let $f \colon \{1, 2, 3,\ldots \} \longrightarrow \ensuremath{\mathbb{R}}$. We say that $f$ has growth type\ldots
\begin{itemize}
\item[] \ldots at most $g$, if $f(n) = O(g(n))$ (using ``Big O'' notation\footnote{This means that
for some constant $C$, we have $|f(n)| \leq C g(n)$ for all large $n$.}).
\item[] \ldots at least $g$, if $f(n) = \Omega(g(n))$: that is, there exists some constant $C > 0$ such that
$Cg(n) \leq f(n)$ for \textbf{infinitely many} $n$.
\item[] And if $f$ has growth type at most $g$ and at least $g$, we say it has growth type $g$.
\end{itemize}
\end{definition}
Note that just like an analogous definition in \cite{Lubotzky2003} (Section 0.1), this notion of ``has growth type'' is not symmetric.
\section{Preliminary results}
\label{sec:preliminary results}
We begin with an easy observation:
\begin{lemma}
\label{lem:quotient+the_rest}
Let $G$ be a finitely generated group with $N \ensuremath{\trianglelefteq} G$. Then
\[
\maxsubgr(G) = \maxsubgr(G/N) + \text{``the complement type''}
\]
where ``the complement type'' is the number of index $n$ maximal subgroups $M$ of $G$ with $MN = G$.
\end{lemma}
\begin{proof}
Either $M$ contains $N$ or it does not. The former case is equivalent to
$MN = M$, and the latter is equivalent to $MN = G$.
\end{proof}
So how do we count ``the complement type''? It turns out that if $N$ is abelian and itself has a complement in $G$ (a subgroup
$K \leq G$ such that $N \cap K = \{1\}$ and $KN = G$) then the answer to this question
(Lemma~\ref{lem:obtaining maximal submodules}) is particularly nice.
We now recall that a
group $B$ acting on an abelian group $A$ gives us a $\ensuremath{\mathbb{Z}}[B]$-module structure for $A$. As such, $A$ is called
a $B$-module. So for a group $G$ with
an abelian normal subgroup $N$, for any $N_0 \leq N$ with $N_0 \ensuremath{\trianglelefteq} G$ we have that $G$ acts on $N/N_0$ by
conjugation. But since $N$ is abelian, $G/N$ acts on $N/N_0$ by conjugation, and so $N/N_0$ is
a $\ensuremath{\mathbb{Z}}[G/N]$-module.
\begin{lemma}
\label{lem:obtaining maximal submodules}
Let $N$ be an abelian normal subgroup of $G$. Suppose $M$ is a proper subgroup of $G$ with $MN = G$. Then $M \leq_{\max} G$ iff $M \cap N$
is a maximal $\ensuremath{\mathbb{Z}}[G/N]$-submodule of $N$. Also, $[G: M] = [N : M\cap N]$.
\end{lemma}
\begin{proof}
This is just Result 5.4.2 from \cite{Robinson} reworded. Let $N_0 = M \cap N$. Indeed, $N_0$ being a maximal $\ensuremath{\mathbb{Z}}[G/N]$-submodule of $N$
precisely means that $N_0$ is maximal among all the proper subgroups of $N$ which are normal in $G$, and this means that
$N/ N_0$ is a minimal normal subgroup of $G/ N_0$.
\end{proof}
Recall that of course a submodule $N_0$ of $N$ is maximal iff $N/N_0$ is a simple module.
Before continuing, we make another comment about group rings. If $A$ is a free abelian group of rank $\ell$, then the group ring
$\ensuremath{\mathbb{Z}}[A]$ is just the Laurent polynomials in $\ell$ variables
with integer coefficients: $\ensuremath{\mathbb{Z}}[x_1,x_1^{-1},x_2,x_2^{-1},\ldots,x_\ell,x_\ell^{-1}]$.
\subsection{Using derivations}
\label{sec:using derivations}
The following is well known:
\begin{lemma}
\label{lem:complements correspond to derivations}
Suppose
\[
N \hookrightarrow G \overset{\pi}{\twoheadrightarrow} G/N
\]
is exact and that $\sigma$ is a splitting of $\pi$. Then there is a one-to-one correspondence between
the set of complements to $N$ and $\Der(G/N, N)$ where the action of $G/N$ on $N$ is defined
by ${}^{\bar{g}}n := \sigma(\bar{g})n\sigma(\bar{g})^{-1}$.
\end{lemma}
For a proof, see for example Corollary 2.13 in \cite{kelley}.
The idea of using derivations to count subgroups is well established. See \cite{Lubotzky2003}, pages 11, 15.
Another reference is \cite{Shalev_On_the_degree}. In fact, the origin of this section was wondering what Lemma 2.1 (iii) in
\cite{Shalev_On_the_degree} reduced to when counting
maximal subgroups; the analogous result here is
Lemma \ref{lem:abelian by something - counting maximal subgroups by derivations}.
\begin{lemma}
\label{lem:abelian by something - counting maximal subgroups by derivations}
Let $G$ be a f.g.\ group with $N \ensuremath{\trianglelefteq} G$ and $N$ abelian. Then
\[ \tag{*}
\maxsubgr(G) \leq \maxsubgr(G/N) + \sum_{N_0} \lvert\Der(G/N, N/N_0)\rvert
\]
where the sum is taken over all $N_0$ such that $N_0 \ensuremath{\trianglelefteq} G$, $N_0 \leq N$ and such that $N/N_0$ is a simple $\ensuremath{\mathbb{Z}}[G/N]$-module with $|N/N_0| = n$.
When we have $G \cong N \rtimes G/N$, then the inequality in (*) is an equality.
\end{lemma}
\begin{proof}
For the inequality, by Lemma~\ref{lem:quotient+the_rest},
we only need to show that the number of maximal subgroups $M$ of $G$ such that $MN = G$ is bounded above by
$\sum_{N_0} \lvert\Der(G/N, N/N_0)\rvert$.
Let $M \leq_{\max} G$ with $MN = G$ and $[G:M] = n$. Let $N_0 = M\cap N$. Then by Lemma~\ref{lem:obtaining maximal submodules},
$N/N_0$ is a simple $\ensuremath{\mathbb{Z}}[G/N]$-module with $|N/N_0| = n$. We have the exact sequence
\[\tag{*1}
N/N_0 \hookrightarrow G/N_0 \twoheadrightarrow G/N.
\]
By Lemma \ref{lem:complements correspond to derivations}, $M$ is counted by the term $\lvert\Der(G/N, N/N_0)\rvert$, and we have that
distinct $M_1, M_2 \leq_{\max} G$ with $M_i \cap N = N_0$ for $i = 1,2$, correspond to different derivations. This proves (*).
Next, suppose $G = N \rtimes G/N$. Let $N_0$ be a maximal $\ensuremath{\mathbb{Z}}[G/N]$-submodule of $N$ with $|N/N_0| = n$.
Let $\mathcal{M}$ be the set of maximal subgroups $M$ of $G$ (of index $n$) that
have $MN = G$ and $M \cap N = N_0$. By Lemma~\ref{lem:obtaining maximal submodules}, $\mathcal{M}$
``is'' (or rather, corresponds to) the set of complements to $N/N_0$ in $G/N_0$. Because $G/N_0$ is just $N/N_0 \rtimes G/N$,
the short exact sequence (*1) splits. Therefore, by Lemma~\ref{lem:complements correspond to derivations}, $\mathcal{M}$ has
cardinality $\lvert\Der(G/N, N/N_0)\rvert$.
\end{proof}
\subsection{Counting derivations}
\label{sec:counting derivations}
In order to actually use derivations to count maximal subgroups, we need to be able to count derivations.
We begin by stating a slightly weaker version of Lemma 2.5 from \cite{Shalev_On_the_degree}.
In the lemma here, notice that $A$ is a module over the group ring $\ensuremath{\mathbb{Z}}[\langle x \rangle]$.
\begin{lemma}
\label{lem:Shalev's counting derivations out of cyclic groups}
Suppose a cyclic group $\langle x \rangle$ acts on a finite abelian group $A$. Also, suppose
\begin{enumerate}[(i)]
\item $\langle x \rangle$ is the infinite cyclic group, or
\item $x$ has order $k$ and $(1 + x + x^2 + \cdots + x^{k-1}) \cdot a = 0$ for all $a \in A$.
\end{enumerate}
Then $\lvert\Der(\langle x \rangle, A)\rvert = |A|$.
\end{lemma}
Note: In Shalev's paper, instead of $A$,
he has an arbitrary finite group $F$.
The main reason the lemma is not stated in that generality here
is to allow additive notation for $A$. Also, instead of the second point, the lemma could say
\[
\lvert\Der(\langle x \rangle, A)\rvert = \lvert\{ a \in A : (1 + x + x^2 + \cdots + x^{k-1}) \cdot a = 0 \}\rvert.
\]
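As a quick sanity check of Lemma \ref{lem:Shalev's counting derivations out of cyclic groups} (not needed for any proof), the count can be verified by brute force in a small example. The choices below are arbitrary, subject to hypothesis (ii): $A = \ensuremath{\mathbb{Z}}/5$, and $x$ of order $4$ acts as multiplication by $2$.

```python
from itertools import product

p, k, u = 5, 4, 2          # A = Z/5, x of order 4 acting as multiplication by u = 2

# hypothesis (ii): the norm 1 + x + ... + x^{k-1} kills A
norm = sum(pow(u, i, p) for i in range(k)) % p
assert norm == 0

def is_derivation(delta):
    # cocycle condition: delta(x^i x^j) = delta(x^i) + x^i . delta(x^j)
    return all(
        delta[(i + j) % k] == (delta[i] + pow(u, i, p) * delta[j]) % p
        for i in range(k) for j in range(k)
    )

ders = [d for d in product(range(p), repeat=k) if is_derivation(d)]
print(len(ders))   # 5 = |A|, as the lemma predicts
```

The five surviving functions are exactly those obtained by choosing $\delta(x)$ freely; the remaining values are then forced by the cocycle condition.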
At this point, we could state Lemma \ref{lem:abelian by infinite cyclic - exact counting} (and prove it in one line). Readers may
want to read that before continuing this section.
While Lemma \ref{lem:Shalev's counting derivations out of cyclic groups} tells us how to count derivations
when the domain is a cyclic group, we will also need to count derivations when the domain is not cyclic. To do so,
we prove that derivations factor through quotients, just
as homomorphisms do.
Let $G$ be a group acting on the (abelian) group $A$.
Suppose $N \ensuremath{\trianglelefteq} G$ and that $N$ acts trivially\footnote{By this we mean that $g\cdot a = a$ for all $g \in N$ and $a \in A$.}
on $A$.
Recall that this gives us an action of $G/N$ on $A$.
Further, suppose $N$ is normally generated by
$\{a_1, \ldots, a_k\}$.
\begin{lemma}
\label{lem:derivations factor through quotients}
With the above notation, suppose $\delta \colon G \longrightarrow A$ is a derivation and
that $\delta(a_i) = 0$ for all $i$. Then
\begin{enumerate}[(i)]
\item $\delta(g) = 0$ for all $g \in N$, and therefore
\item $\delta$ factors through $G/N$.
\end{enumerate}
\end{lemma}
Notes: (1) This is essentially exercise 4(a) in \cite{Brown1982} (pg.\ 90).
(2) The hypothesis that $A$ is abelian is not needed, but it simplifies the notation slightly; further,
in what follows, the lemma is applied only in the case that $A$ is abelian.
\begin{proof}
We have that $N$ is generated (as a subgroup) by the set of all $ga_i g^{-1}$ such that $g \in G$ and
$1 \leq i \leq k$. It is immediate from the definition of derivations that to prove (i), we only need to
show $\delta(ga_i g^{-1}) = 0$ for all $g$ and $a_i$. So pick $g$ and $a_i$.
We have (explanations following the equations)
\begin{align}
\delta(ga_i g^{-1}) &= \delta(g) + g\delta(a_i g^{-1}) \\
&= \delta(g) + g(\delta(a_i) + a_i\delta(g^{-1})) \\
&= \delta(g) - ga_i g^{-1} \delta(g) \\
&= \delta(g) - \delta(g) = 0
\end{align}
Equations (1) and (2) follow from the definition of derivation; for (1), we associated
$ga_i g^{-1}$ as $g(a_i g^{-1})$. For equation (3), besides distributing $g$,
we are using the hypothesis that $\delta(a_i) = 0$ for all $i$,
and we are also using the general fact\footnote{This fact can be easily checked by applying the definition
of derivation to $\delta(x x^{-1}) = 0$.} that $\delta(x^{-1}) = -x^{-1}\delta(x)$ (where here $x = g$).
Equation (4) follows from (3) by using the fact that $ga_i g^{-1} \in N$, and recalling that
$N$ acts trivially on $A$. Combining (1)--(4) gives $\delta(ga_i g^{-1}) = 0$,
which proves part (i) of this lemma.
For part (ii), we first claim that since $\delta(N) = \{0\}$, we get a well-defined function
$\bar{\delta} \colon G/N \longrightarrow A$ via $\bar{\delta}(gN) = \delta(g)$. Indeed,
take $g \in G$ and $n \in N$. Then $\delta(gn) = \delta(g) + g\delta(n)$, but this equals
$\delta(g)$, since $\delta(n) = 0$ by part (i). What remains to be shown is that the function $\bar{\delta}$
is a derivation.
Take $g, h \in G$. Then $\bar{\delta}(gNhN) = \bar{\delta}(ghN) = \delta(gh)$; since $\delta$ is a derivation,
$\delta(gh) = \delta(g) + g\delta(h) = \bar{\delta}(gN) + gN\bar{\delta}(hN)$, where the last equality holds because the coset $gN$ acts
on $A$ the way $g$ acts on $A$.
\end{proof}
We next prove the universal property of free groups for derivations, analogous to the one for homomorphisms.
Let $F_d$ be the free
group on $X = \{x_1, x_2, \ldots, x_d\}$. ($F_d$ is abelian if and only if $d = 1$.) Suppose $F_d$ acts on $A$. Note
that $A$ is assumed\footnote{The reasons for this are (a) to simplify notation slightly and (b)
because the author intends to use it only in the case that $A$ is abelian.} to be an abelian group.
\begin{lemma}
\label{lem:universal property -- derivations}
With the above notation, any map $\delta \colon \{x_1, x_2, \ldots, x_d\} \longrightarrow A$ extends to a unique derivation
$\delta \colon F_d \longrightarrow A$.
\end{lemma}
Note: This is exercise 3(a) in \cite{Brown1982} (pg.\ 90).
\begin{proof}
Let $x \in X$. Define $\delta(x^{-1}) := -x^{-1}\delta(x)$. Next,
for $y_1, y_2,\ldots,y_k \in X^{\pm1}$, let $y = y_1y_2\cdots y_k$, and assume $y$ is a reduced word. We will
then define
$\delta(y)$ to be $\delta(y_1) + \sum_{j=1}^{k-1} y_1\cdots y_j\delta(y_{j+1})$; written out, this says
\[
\delta(y_1\cdots y_k) := \delta(y_1) + y_1\delta(y_2) + \cdots + y_1y_2\cdots y_{k-1}\delta(y_k).
\]
Let $\epsilon$ denote the identity of $F_d$; so $\epsilon$ is the empty word. So far, we have defined $\delta(y)$
for any $y$ except $\epsilon$. We define $\delta(\epsilon) := 0$. We now have a well-defined function
$\delta \colon F_d \longrightarrow A$, and it is straightforward to check that $\delta$ is indeed a derivation:
Let $y, z \in F_d$. If $y$ or $z$ (or both) are the identity, then $\delta(yz) = \delta(y) + y\delta(z)$. So
suppose that neither $y$ nor $z$ is $\epsilon$, the identity.
\begin{case}{Case 1.}
Suppose that $yz$ is a reduced word.
\end{case}
It is easy to see that $\delta(yz) = \delta(y) + y\delta(z)$; indeed, let
$y = y_1y_2\cdots y_k$ and $z = z_1z_2\cdots z_\ell$, where $y_1, \ldots, y_k, z_1, \ldots, z_\ell \in X^{\pm1}$.
To simplify notation, for $j \in \{1, 2, \ldots, k\}$, let $\hat{y}_j$ denote $y_1y_2\cdots y_j$ and similarly
for $\hat{z}_j$ if $j \in \{1, 2, \ldots, \ell \}$. (So $y = \hat{y}_k$ and $z = \hat{z}_\ell$.) Then
\begin{align*}
\delta(yz) &= \delta(y_1\ldots y_kz_1\ldots z_\ell) \\
&= \delta(y_1) + \cdots +\hat{y}_{k-1}\delta(y_k) + y\delta(z_1) + yz_1\delta(z_2) +
\cdots + y\hat{z}_{\ell - 1}\delta(z_\ell) \\
&= \delta(y) + y(\delta(z_1) + z_1\delta(z_2) + \cdots + \hat{z}_{\ell-1}\delta(z_\ell)) \\
&= \delta(y) + y\delta(z),
\end{align*}
and this is what we wanted to show, finishing this case.
\begin{case}{Case 2.}
There is cancellation in the product $yz$.
\end{case}
To show this case, we use induction on the amount of cancellation. Our base case is the previous case, that
there is no cancellation. Note that $y$ and $z$ are each, individually, assumed still to be reduced words.
Suppose $y = ux$ and $z = x^{-1}w$ for some $x \in X^{\pm1}$ and $u, w \in F_d$. Assume that
$\delta(uw) = \delta(u) + u\delta(w)$. So since $yz = uxx^{-1}w = uw$, by our inductive hypothesis, we need
only show that $\delta(ux) + ux\delta(x^{-1}w) = \delta(u) + u\delta(w)$. We have (explanations following)
\begin{align*}
\delta(ux) + ux\delta(x^{-1}w) &= \delta(u) + u\delta(x) + ux(\delta(x^{-1}) +x^{-1}\delta(w)) \\
&= \delta(u) + u\delta(x) + ux(-x^{-1}\delta(x) + x^{-1}\delta(w)) \\
&= \delta(u) + u\delta(x) -uxx^{-1}\delta(x) + uxx^{-1}\delta(w) \\
&= \delta(u) + u\delta(x) - u\delta(x) + u\delta(w) \\
&= \delta(u) + u\delta(w)
\end{align*}
The first equality is by Case 1 applied to the reduced words $ux$ and $x^{-1}w$. The second equality
just uses our definition of $\delta(x^{-1})$. Besides distributing $ux$, the third
equality follows since the action of $F_d$ on $A$ is, of course, by automorphisms,
and hence we may pull the $-1$ in front. This finishes Case 2 and the lemma.
\end{proof}
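The extension formula in the proof above can be exercised numerically. The following sketch uses arbitrary illustrative choices: $A = \ensuremath{\mathbb{Z}}/7$, the two generators of $F_2$ acting as multiplication by $3$ and $2$, and arbitrary values of $\delta$ on the generators. It builds $\delta$ on reduced words exactly as in the proof and checks the identity $\delta(yz) = \delta(y) + y\delta(z)$ on random, possibly cancelling, products.

```python
import random

p = 7
act = {1: 3, 2: 2}                 # x_i acts on A = Z/7 as multiplication by act[i]
dvals = {1: 4, 2: 1}               # arbitrary chosen values delta(x_i) in A

def reduce_word(w):
    # stack-based free reduction of a word in letters (generator, +-1)
    out = []
    for g, e in w:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

def action(w, a):
    # w . a: letters act by (invertible) multiplications on Z/7, rightmost first
    for g, e in reversed(w):
        m = act[g] if e == 1 else pow(act[g], -1, p)
        a = (m * a) % p
    return a

def delta(w):
    # delta(y_1...y_k) = delta(y_1) + y_1 delta(y_2) + ... + y_1...y_{k-1} delta(y_k),
    # with delta(x^{-1}) = -x^{-1} delta(x), applied to the reduced form of w
    total, prefix = 0, []
    for g, e in reduce_word(w):
        d = dvals[g] if e == 1 else (-pow(act[g], -1, p) * dvals[g]) % p
        total = (total + action(prefix, d)) % p
        prefix.append((g, e))
    return total

random.seed(0)
for _ in range(200):
    y = [(random.choice([1, 2]), random.choice([1, -1])) for _ in range(6)]
    z = [(random.choice([1, 2]), random.choice([1, -1])) for _ in range(6)]
    assert delta(y + z) == (delta(y) + action(y, delta(z))) % p
print("cocycle identity verified")
```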
For the rest of this section, we write $\ensuremath{\mathbb{Z}}^\ell = \langle x_1,\ldots, x_\ell \mid [x_i,x_j] \text{ for all } i, j \rangle$ for the free
abelian group of rank $\ell$ (written multiplicatively).
\begin{lemma}
\label{lem:characterization of derivations from free abelian groups}
Let $S$ be a simple $\ensuremath{\mathbb{Z}}^\ell$-module. There is a one-to-one correspondence between the set $\Der(\ensuremath{\mathbb{Z}}^\ell,S)$ and the
set of functions $\delta\colon \{x_1,\ldots,x_\ell \} \longrightarrow S$ satisfying
\[ \tag{*}
(1-x_i)\delta(x_j) = (1-x_j)\delta(x_i) \text{\hspace{.1in} for all $i,j$.}
\]
\end{lemma}
\begin{proof}
Step 1. Let $\delta\colon \ensuremath{\mathbb{Z}}^\ell \longrightarrow S$ be a derivation. Fix $i,j$. Because $x_ix_j = x_jx_i$, we have $\delta(x_ix_j) = \delta(x_jx_i)$. Therefore,
$\delta(x_i) + x_i\delta(x_j) = \delta(x_j) + x_j\delta(x_i)$. Rearranging and factoring yields (*).
Step 2. Let $\delta\colon \{x_1,\ldots,x_\ell \} \longrightarrow S$ satisfy (*). By Lemma \ref{lem:universal property -- derivations}, we
get a unique derivation $\delta\colon F_\ell \longrightarrow S$, where the action of $F_\ell$ on $S$ is the
induced action.
Fix $i,j$. We claim that $\delta([x_i,x_j]) = 0$. Indeed,
\[\begin{aligned}
\delta(x_ix_jx_i^{-1}x_j^{-1}) &= \delta(x_i) + x_i\delta(x_j) - x_ix_jx_i^{-1}\delta(x_i) - x_ix_jx_i^{-1}x_j^{-1}\delta(x_j)\\
&= \delta(x_i) + x_i\delta(x_j) - x_j\delta(x_i) - \delta(x_j),
\end{aligned}\]
where the last equality is by the induced action.\footnote{Indeed, $x_ix_jx_i^{-1} = x_ix_jx_i^{-1}x_j^{-1}x_j = [x_i,x_j]x_j$. We then twice use
the fact that $[x_i,x_j]$ acts trivially on $S$.} Notice that this last expression is 0 precisely because (*) holds. Therefore,
Lemma \ref{lem:derivations factor through quotients} gives us a derivation from $\ensuremath{\mathbb{Z}}^\ell$ to $S$.
Because Steps 1 and 2 are inverses of each other, we are finished.
\end{proof}
\begin{lemma}
\label{lem:counting derivations from free abelian groups to simple modules}
Let $S$ be a simple $\ensuremath{\mathbb{Z}}^\ell$-module. Then
\[
\lvert \Der(\ensuremath{\mathbb{Z}}^\ell,S) \rvert =
\begin{cases}
|S|^\ell & \text{if the action is trivial} \\
|S| & \text{otherwise.}
\end{cases}
\]
\end{lemma}
\begin{proof}
If the action is trivial, then
$\Der(\ensuremath{\mathbb{Z}}^\ell,S) = \Hom(\ensuremath{\mathbb{Z}}^\ell,S)$.
Assume the action is not trivial, and let $x_i \in \{x_1,\ldots,x_\ell\}$ be a generator\footnote{The free abelian
group is still written multiplicatively.} of $\ensuremath{\mathbb{Z}}^\ell$ that acts non-trivially
on $S$. Then the action of $(1-x_i)$ on $S$ is invertible.\footnote{Of course, $S$ is a module over the ring $R = \ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]$. Since
$S$ is a simple $R$-module, $S$ is a 1-dimensional vector space over the field $R/\Ann_R(S)$. In this case, the function $x_i\cdot$ is just multiplication by some
(non-identity) element of that field.}
Though there is no element $(1-x_i)^{-1}$ in $\ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]$, for $s_0 \in S$, we denote by
$(1-x_i)^{-1}s_0$ the image of $s_0$ under the inverse of the automorphism $(1-x_i)\cdot \in \Aut(S)$.
Fix $j \neq i$. The equation $(1-x_i)\delta(x_j) = (1-x_j)\delta(x_i)$ is equivalent to the equation
$\delta(x_j) = (1-x_i)^{-1}(1-x_j)\delta(x_i)$. Hence, by Lemma \ref{lem:characterization of derivations from free abelian groups}
we may pick a derivation simply by picking $\delta(x_i)$ to be any
element of $S$ and then defining $\delta(x_j)$ to be $(1-x_i)^{-1}(1-x_j)\delta(x_i)$.
\end{proof}
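Lemmas \ref{lem:characterization of derivations from free abelian groups} and \ref{lem:counting derivations from free abelian groups to simple modules} can be checked by brute force for small parameters. The sketch below takes $S = \ensuremath{\mathbb{Z}}/7$ and $\ell = 3$, with each $x_i$ acting as multiplication by an arbitrarily chosen unit, and counts the generator-tuples satisfying the relations (*).

```python
from itertools import product

def count_ders(p, units):
    # |Der(Z^l, S)| for S = Z/p with x_i acting as multiplication by units[i],
    # counted via the relations (1 - x_i) d(x_j) = (1 - x_j) d(x_i) for all i, j
    l = len(units)
    return sum(
        all((1 - units[i]) * d[j] % p == (1 - units[j]) * d[i] % p
            for i in range(l) for j in range(l))
        for d in product(range(p), repeat=l)
    )

print(count_ders(7, (3, 2, 1)))   # 7   = |S|   (non-trivial action)
print(count_ders(7, (1, 1, 1)))   # 343 = |S|^3 (trivial action)
```

In the non-trivial case, $(1-x_1)$ is invertible, so $\delta(x_2)$ and $\delta(x_3)$ are determined by $\delta(x_1)$, exactly as in the proof above.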
Our next goal is Lemma \ref{lem:counting derivations from f.g. abelian groups to simple modules}, which extends
Lemma~\ref{lem:counting derivations from free abelian groups to simple modules} to the case that
the domain is any f.g.\ abelian group.
\begin{lemma}
\label{lem:derivation vanishes on x^n}
Let $S$ be a simple $\ensuremath{\mathbb{Z}}^\ell$-module. Assume that the action is non-trivial. Let $x \in \ensuremath{\mathbb{Z}}^\ell$ be such that the
automorphism $x \cdot \in \Aut(S)$ has finite order dividing some integer $n$. Let $\delta\colon \ensuremath{\mathbb{Z}}^\ell \longrightarrow S$ be a
derivation. Then
\[
\delta(x^n) = 0.
\]
\end{lemma}
\begin{proof}
Let $y \in \ensuremath{\mathbb{Z}}^\ell$ be such that $y \cdot \in \Aut(S)$ is non-trivial.
We know (similarly to Step 1 of Lemma \ref{lem:characterization of derivations from free abelian groups}) that
\[
(1 - y)\delta(x^n) = (1 - x^n)\delta(y).
\]
But because $S$ is a simple module and $y\cdot$ is non-trivial, we get\footnote{Just like $(1 - x_i)$ in the proof of
Lemma~\ref{lem:counting derivations from free abelian groups to simple modules}\ldots} that the endomorphism $(1-y)\cdot$ is invertible. Therefore,
$\delta(x^n) = (1-y)^{-1}(1 - x^n)\delta(y)$, but since the automorphism $x\cdot$ has order dividing $n$, we have that $x^n \cdot$ is the identity
function on $S$. Therefore, $\delta(x^n) = (1-y)^{-1}(1 - x^n)\delta(y) = 0$.
\end{proof}
The following is a generalization of
Lemma~\ref{lem:counting derivations from free abelian groups to simple modules}.
\begin{lemma}
\label{lem:counting derivations from f.g. abelian groups to simple modules}
Let $H$ be a f.g.\ abelian group. Let $S$ be a simple $H$-module. Then
\[
\lvert \Der(H,S) \rvert =
\begin{cases}
\lvert \Hom(H,S)\rvert & \text{if the action is trivial} \\
|S| & \text{otherwise.}
\end{cases}
\]
\end{lemma}
\begin{proof}
If the action is trivial, then $\Der(H,S) = \Hom(H,S)$. So suppose the action is non-trivial.
Let $H$ be $\ell$-generated, and let $G = \ensuremath{\mathbb{Z}}^\ell$, the free abelian group of rank $\ell$. Let the action of $G$ on
$S$ be the induced action. By Lemma~\ref{lem:counting derivations from free abelian groups to simple modules}, we know that
$\lvert \Der(G,S)\rvert = |S|$.
To prove this lemma, it is sufficient to show that each derivation from $G$ to $S$ gives a derivation (via
Lemma \ref{lem:derivations factor through quotients}) from $H$ to $S$.\footnote{The following is clear: Let $\delta_1$
and $\delta_2$ be different derivations from $G$ to $A$ that satisfy the hypotheses of
Lemma~\ref{lem:derivations factor through quotients} (for some given $N \ensuremath{\trianglelefteq} G$). Then the lemma produces different
derivations from $G/N$ to $A$.}
Let $\delta \in \Der(G,S)$. Let $\pi\colon G \longrightarrow H$ be a surjection with kernel $N$. Let $x \in G$
be such that $\pi(x)$ has order $n$.
(So $x^n$ is an arbitrary element of $N$.) In order to apply
Lemma \ref{lem:derivations factor through quotients}, it is sufficient to show that $\delta(x^n) = 0$ (for any such $x^n$).
The automorphism $x \cdot \in \Aut(S)$ has order dividing $n$, since $x^n \in N$ and $N$ acts trivially on $S$. Thus $\delta(x^n) = 0$ by
Lemma \ref{lem:derivation vanishes on x^n}.
\end{proof}
\subsection{Submodules counted by isomorphism type of quotient}
\label{sec:Submodules counted by isomorphism type of quotient}
Let $R$ be a ring, and let $N$ be an $R$-module. It is well known that for every maximal submodule $M$ of $N$, we have
$N/M \cong_R R/I$ for some maximal left ideal $I \ensuremath{\lhd} R$.\footnote{If $R$ is commutative, then $I$ is the annihilator of $N/M$. If $R$ is
not necessarily commutative, then we may take any element $a \in N/M$, with $a \neq 0$. Let $I$ be the kernel of the map $r \mapsto ra$. Since $N/M$ is
simple, the map is surjective (because it is nonzero). We conclude $N/M \cong_R R/I$.}
In order to organize all the maximal submodules
of $N$ of a given index by the $R$-module isomorphism type of the quotient, we give the following definition:
\begin{definition}
\label{def:submodisoto - number of submodules with quotient iso to etc.}
Let $S$ be a (finite) simple $R$-module. Then $$\submodisoto{S}(N)$$ denotes the number of submodules $M$ of $N$ such that
$N/M \cong_R S$.
\end{definition}
We now state the following lemma:
\begin{lemma}
\label{lem:maxsubmod N = sum_S submodisoto_S(N)}
Let $N$ be a f.g.\ $R$-module. Then
\[
\maxsubmod(N) = \sum_S \submodisoto{S}(N),
\]
where the sum is taken over all isomorphism classes of simple $R$-modules of cardinality $n$. If $R$ is commutative, then also
\[
\maxsubmod(N) = \sum_I \submodisoto{R/I}(N)
\]
where the sum is taken over all maximal ideals $I$ of $R$ that have $|R/I| = n$.
\end{lemma}
\begin{proof}
The first equality holds because we can partition the set of maximal submodules by the $R$-module isomorphism type of their quotient.
The second equality then follows by the well-known fact mentioned in the first paragraph of this section together with one other
well-known fact:
Because $R$ is now assumed to be commutative, if $I_1, I_2 \ensuremath{\lhd} R$ are distinct maximal ideals
with $|R/I_1| = |R/I_2|$ finite, then $R/I_1$ and $R/I_2$ are not isomorphic as
$R$-modules,\footnote{This is because their annihilators (namely $I_1$ and $I_2$ respectively) are different.}
even though they are
isomorphic as fields.
\end{proof}
Note: Recall that if $R$ is not commutative, then it is possible for $R/I_1 \cong_R R/I_2$ as $R$-modules, even if $I_1 \neq I_2$.\footnote{For example, let
$R = M_2(\finitefield)$. Let $I_1 = \left( \begin{smallmatrix} 0& * \\ 0& *\end{smallmatrix} \right)$
and $I_2 = \left( \begin{smallmatrix} *& 0 \\ *&0 \end{smallmatrix} \right)$. Then $R/I_1$ and $R/I_2$ are both isomorphic to the unique (up to iso.) simple
$R$-module.}
\subsection{Codimension 1 subspaces}
\label{sec:Codimension 1 subspaces}
Let $R$ be a commutative (unital) ring, and
let $I \ensuremath{\lhd} R$ be maximal with $|R/I| = n$.
\begin{lemma}
\label{lem:R/I of N to R/I of N/IN}
With the notation from Definition \ref{def:submodisoto - number of submodules with quotient iso to etc.},
$$\maxsubmod[R/I](N) = \maxsubmod[R/I](N/IN).$$
\end{lemma}
\begin{proof}
It is immediate that $\maxsubmod[R/I](N) \geq \maxsubmod[R/I](N/IN)$. Let $M$ be a maximal submodule of $N$ with $N/M \cong_R R/I$.
We have $\Ann_R(N/M) = \Ann_R(R/I) = I$. Thus $IN$ is 0 mod $M$, i.e., $IN \subseteq M$. Therefore $\maxsubmod[R/I](N) \leq \maxsubmod[R/I](N/IN)$.
\end{proof}
The following is very well known.
\begin{lemma}
\label{lem:N/IN is iso to R/I tensor N}
With the above notation,
\[
R/I \ensuremath{\otimes}_R N \cong_R N/IN.
\]
\end{lemma}
\begin{lemma}
\label{lem:maxsubmod iso to R/I and dimension of tensor product}
Recall that $n = |R/I|$. We have
\[
\maxsubmod[R/I](N) = 1 + n + n^2 + \cdots + n^{s-1},
\]
where $s = \dim_{R/I}(R/I \ensuremath{\otimes}_R N)$.
\end{lemma}
\begin{proof}
Lemma \ref{lem:R/I of N to R/I of N/IN} gives $\maxsubmod[R/I](N) = \maxsubmod[R/I](N/IN)$, which itself is equal
to \newline $\maxsubmod[R/I](R/I \ensuremath{\otimes} N)$ by
Lemma \ref{lem:N/IN is iso to R/I tensor N}. Note that $R/I \ensuremath{\otimes} N$ is an $R/I$-vector space, and that its maximal submodules
are codimension 1 subspaces, the number of which is the number of dimension 1 subspaces. Thus
\[
\maxsubmod[R/I](R/I \ensuremath{\otimes}_R N) = \frac{n^s - 1}{n - 1},
\]
where $s = \dim_{R/I}(R/I \ensuremath{\otimes}_R N)$ as desired.
\end{proof}
We get the following consequence of Lemma \ref{lem:maxsubmod iso to R/I and dimension of tensor product}:
\begin{corollary}
\label{cor:submodisoto R/I for direct sum of cyclic modules}
Recall $I \ideal_{\max} R$, with $|R/I| = n$. Suppose $N_1,\ldots, N_r$ are cyclic $R$-modules, and let
$s = |\{N_i: \submodisoto{R/I}(N_i) = 1 \}|$. Then
\[
\submodisoto{R/I}(N_1 \oplus N_2 \oplus \cdots \oplus N_r) = 1 + n + n^2 + \cdots + n^{s-1}.
\]
\end{corollary}
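The hyperplane count $(n^s - 1)/(n - 1)$ used in the proof of Lemma \ref{lem:maxsubmod iso to R/I and dimension of tensor product} is easy to confirm computationally: the codimension 1 subspaces of $\mathbb{F}_p^s$ are the kernels of the non-zero linear functionals, and we can simply enumerate the distinct kernels.

```python
from itertools import product

def num_hyperplanes(p, s):
    # codimension-1 subspaces of F_p^s, counted as distinct kernels
    # of the non-zero linear functionals
    vectors = list(product(range(p), repeat=s))
    kernels = {
        frozenset(v for v in vectors if sum(f * x for f, x in zip(phi, v)) % p == 0)
        for phi in vectors if any(phi)
    }
    return len(kernels)

for p, s in [(2, 3), (3, 2), (5, 3)]:
    assert num_hyperplanes(p, s) == (p**s - 1) // (p - 1)
print("matches (n^s - 1)/(n - 1)")
```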
\subsection{Miscellaneous}
\label{sec:miscellaneous}
We collect here a few more results (almost all well known) that we will use later.
How does passing to quotients affect the maximal subgroup growth? The following lemma shows that if we mod out by a finite subgroup,
then the maximal subgroup growth remains unchanged. (The question was inspired by Lemma 2.3 from \cite{Shalev_On_the_degree}.)
\begin{lemma}
\label{lem:mod out by finite subgroup -- maximal subgroup growth is unchanged}
Let $G$ be a f.g.\ group and $F \ensuremath{\trianglelefteq} G$ finite. Let $n \in \ensuremath{\mathbb{Z}}_{\geq 1}$. If $n > |F|$, then
\[
\maxsubgr(G) = \maxsubgr(G/F).
\]
\end{lemma}
\begin{proof}
We will show that if a maximal subgroup does not contain $F$, then it has
index at most $|F|$; consequently, every maximal subgroup of index $n > |F|$ contains $F$, and such subgroups
correspond bijectively to the maximal subgroups of $G/F$ of index $n$. Let $M \leq_n G$ be maximal and suppose
that $F \nsubseteq M$. Since $F \ensuremath{\trianglelefteq} G$, we get that $FM$ is a subgroup of $G$.
Since $FM$ properly contains $M$, we conclude that $FM = G$. Therefore,
\[
[G : M] = [FM: M] = [F : F \cap M] \leq |F|.
\]
\end{proof}
A similar statement works for maximal submodule growth. Let $R$ be a (unital) ring.
\begin{lemma}
\label{lem:mod out by finite submodule -- maximal submodule growth is unchanged}
Let $N$ be an $R$-module and $F \leq N$ a finite submodule. Let $n \in \ensuremath{\mathbb{Z}}_{\geq 1}$. If $n > |F|$, then
\[
\maxsubmod(N) = \maxsubmod(N/F).
\]
\end{lemma}
\begin{proof}
This is similar to our proof of Lemma \ref{lem:mod out by finite subgroup -- maximal subgroup growth is unchanged}. Let $M \leq_n N$ be maximal,
and suppose $F \nsubseteq M$. Then $n \leq |F|$ because
\[
N/M = (M + F)/M \cong_R F/(M\cap F).
\]
\end{proof}
---------------------------------------\\
The following will be used without comment throughout this document. For a proof, see for example Result 5.4.3 (iii) in \cite{Robinson}.
\begin{lemma}
Let $G$ be a solvable group, and let $M$ be a maximal subgroup of $G$ of finite index. Then
$[G:M]$ is a power of a prime.
\end{lemma}
---------------------------------------\\
Let $S$ be a $G$-module. Following \cite{Dummit2004} (page 798), we will denote by $S^G$ the set of all elements of $S$ that are fixed
by $G$: $S^G = \{s \in S: gs = s \text{ for all } g \in G \}$. If $S^G \neq \{0\}$, we say that $S$ has a non-trivial fixed point. We now make an easy observation:
\begin{lemma}
\label{lem:simple module - one fixed point implies trivial and has order a prime}
Let $S$ be a (finite) simple $G$-module with a non-trivial fixed point. Then $S = S^G$ and $|S|$ is prime.
\end{lemma}
\begin{proof}
The set $S^G$ is a submodule of $S$. Since it is non-zero and $S$ is simple, we get $S = S^G$. Since the action is trivial, a simple
$G$-module is the same thing as a simple abelian group, which has prime order.
\end{proof}
---------------------------------------\\
Our next goal is the well-known Lemma \ref{lem:two full lattices - each scales into the other}.
We first prove the main part of that lemma.
\begin{lemma}
\label{lem:vector v gets scaled into a full lattice}
Let $D$ be an integral domain and $F$ its field of fractions. Fix $d \geq 1$. Suppose $A$
is a $D$-submodule of $F^d$ that is isomorphic (as a $D$-module) to $D^d$. Let $v \in F^d$.
Then there exists $c_0 \in D$ with $c_0 \neq 0$ such that $c_0 v \in A$.
\end{lemma}
\begin{proof}
The case when $d = 1$ is clear.
Let $X = \{x_1, \ldots, x_d\}$ be a $D$-module generating set for $A$. We claim that the $F$-span of $X$
is $F^d$. By contradiction, suppose that $X$ is linearly dependent over $F$. So there exist
$a_1, \ldots, a_d \in F$ (not all zero) such that
\[
a_1 x_1 + \cdots + a_d x_d = 0.
\]
By clearing the denominators we get
\[
\tilde{a}_1 x_1 + \cdots + \tilde{a}_d x_d = 0
\]
for some $\tilde{a}_1, \ldots, \tilde{a}_d \in D$ (not all zero), a contradiction since $A \cong D^d$ is free of rank $d$ on $d$ generators; this proves our claim.
The claim tells us that there exist $\alpha_1, \ldots, \alpha_d \in F$ such that
\[
\alpha_1 x_1 + \cdots + \alpha_d x_d = v.
\]
Again, clearing the denominators finishes the proof.
\end{proof}
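The proof of Lemma \ref{lem:vector v gets scaled into a full lattice} is constructive, and the sketch below carries it out for $D = \ensuremath{\mathbb{Z}}$, $F = \ensuremath{\mathbb{Q}}$, $d = 2$, with an arbitrarily chosen basis of $A$ and target vector $v$: solve for the $\alpha_i$ (here by Cramer's rule) and clear denominators.

```python
from fractions import Fraction
from math import lcm

# a basis of the rank-2 lattice A in Q^2, and a target vector v (arbitrary choices)
x1 = (Fraction(1, 2), Fraction(0))
x2 = (Fraction(1, 3), Fraction(1, 5))
v  = (Fraction(7, 4), Fraction(-2, 3))

# solve alpha1*x1 + alpha2*x2 = v by Cramer's rule (the 2x2 case)
det = x1[0] * x2[1] - x1[1] * x2[0]
a1 = (v[0] * x2[1] - v[1] * x2[0]) / det
a2 = (x1[0] * v[1] - x1[1] * v[0]) / det

# clearing denominators: c0*v is then a Z-combination of x1, x2, i.e. c0*v lies in A
c0 = lcm(a1.denominator, a2.denominator)
assert (c0 * a1).denominator == 1 and (c0 * a2).denominator == 1
print(c0)   # 18 for these choices
```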
\begin{lemma}
\label{lem:two full lattices - each scales into the other}
Fix $d \geq 1$. Suppose $A$ and $B$
are $\ensuremath{\mathbb{Z}}$-submodules of $\ensuremath{\mathbb{Q}}^d$ both isomorphic (as $\ensuremath{\mathbb{Z}}$-modules) to $\ensuremath{\mathbb{Z}}^d$. Then there exists
$c \in \ensuremath{\mathbb{Z}}$ such that $cB \leq_f A$.
\end{lemma}
\begin{proof}
Let $Y = \{y_1, \ldots, y_d\}$ be a $\ensuremath{\mathbb{Z}}$-module generating set for $B$. We then apply
Lemma~\ref{lem:vector v gets scaled into a full lattice} to each $y_i$ to get nonzero constants
$c_1, \ldots, c_d \in \ensuremath{\mathbb{Z}}$ such that $c_i y_i \in A$. Then $c = \prod_{i=1}^d c_i$ works.
\end{proof}
\begin{corollary}
\label{cor:two full lattices in Q^d - one contained in the other => the index is finite}
Let $A$, $B$ be f.g.\ subgroups of $\ensuremath{\mathbb{Q}}^d$ such that $B \leq A$ and that $\ensuremath{\mathbb{Q}} B = \ensuremath{\mathbb{Q}}^d$. Then
$[A : B]$ is finite.
\end{corollary}
\begin{proof}
This follows from Lemma~\ref{lem:two full lattices - each scales into the other}. Notice that as $\ensuremath{\mathbb{Z}}$-modules,
$A$ and $B$ are both isomorphic to $\ensuremath{\mathbb{Z}}^d$. We are done because $cB \leq_f A$ for some $c$, and since $cB \leq B \leq A$, this implies $B \leq_f A$ as well.
\end{proof}
---------------------------------------\\
If we
start with a non-constant polynomial $f \in \ensuremath{\mathbb{Z}}[x]$, does $f$ split mod $p$ for infinitely many primes $p$? It turns out
that slightly more is true, as the following lemma states.
\begin{lemma}
\label{lem:roots of non-constant polynomials in F_p and in C}
Let $f \in \mathbb{Z}[x]$ be a non-constant polynomial. Consider $\bar{f} \in \mathbb{F}_p[x].$
Let $\rho_p$ be the number of distinct roots of $\bar{f}$ in $\mathbb{F}_p$, and let $\rho$ be the number of
distinct roots of $f$ in $\mathbb{C}$. Then $\rho_p = \rho$ for infinitely many primes $p$.
\end{lemma}
For a proof, see \cite{MathOverflow_Rivin}, the answer Igor Rivin gave at MathOverflow to the author's question. (Or see Keith Conrad's answer
to the same question.)
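For a concrete illustration of Lemma \ref{lem:roots of non-constant polynomials in F_p and in C}, take $f = x^2 + 1$, which has $\rho = 2$ distinct complex roots; the primes with $\rho_p = 2$ are exactly those congruent to $1 \bmod 4$, and the sketch below lists the first few.

```python
def num_roots_mod_p(coeffs, p):
    # distinct roots in F_p of the integer polynomial with the given
    # coefficients (listed from highest degree down), via Horner evaluation
    def f(x):
        v = 0
        for c in coeffs:
            v = (v * x + c) % p
        return v
    return sum(f(x) == 0 for x in range(p))

def primes_up_to(n):
    return [p for p in range(2, n) if all(p % d for d in range(2, int(p**0.5) + 1))]

coeffs = [1, 0, 1]                 # f = x^2 + 1, with rho = 2 distinct roots in C
good = [p for p in primes_up_to(200) if num_roots_mod_p(coeffs, p) == 2]
print(good[:8])   # [5, 13, 17, 29, 37, 41, 53, 61], the primes p = 1 (mod 4)
```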
\begin{lemma}
\label{lem:num_roots in overline(F_p) of polynomial bounded above by num_roots in C}
Let $f \in \mathbb{Z}[x]$ be a non-constant polynomial. Consider $\bar{f} \in \mathbb{F}_p[x].$
Let $\bar{\rho}_p$ be the number of distinct roots of $\bar{f}$ in $\overline{\mathbb{F}_p}$, and let $\rho$ be the number of
distinct roots of $f$ in $\mathbb{C}$. Then $\bar{\rho}_p \leq \rho$ for all large primes $p$.
\end{lemma}
For a proof, see the answer Eric Wofsey gave to the author's question at \url{https://math.stackexchange.com/q/2753743}.
---------------------------------------\\
\begin{definition}
\label{def:degree of a function}
Let $k \in \ensuremath{\mathbb{Z}}_{\geq 1}$. For a function $$f \colon \{ k, k+1, k+ 2, \ldots \} \to \ensuremath{\mathbb{R}}_{\geq 0}$$
which is bounded above by a polynomial, define
\[
\deg(f) := \inf \{\, \alpha \mid f(n) \leq n^\alpha \text{ for all large } n \}.
\]
\end{definition}
Notes: (1) If $f$ itself is a polynomial, then this agrees with the normal use of the term ``degree''.
(2) We have that $\mdeg(G) = \deg(\maxsubgr(G))$.
\begin{lemma}
\label{lem:degree of a sum is the max of the degrees}
Let $k \in \ensuremath{\mathbb{Z}}_{\geq 1}$, and let
\[
f, g, h \colon \{ k, k+1, k+ 2, \ldots \} \to \ensuremath{\mathbb{R}}_{\geq 0}
\]
each be bounded above by a polynomial. Then
\[
\deg(f + g) = \max\{\deg(f), \deg(g) \}, \text{\quad and}
\]
\[
\deg(f + g + h) = \max\{\deg(f), \deg(g), \deg(h) \}.
\]
\end{lemma}
\begin{proof}
We prove the first equality, and then the second follows by applying the first equality twice.\footnote{We could use induction to prove a more general
lemma about the sum of $n$ functions.}
Certainly $\deg(f+g) \geq \deg(f)$ because $f(n) + g(n) \geq f(n)$ for all $n$. Similarly, $\deg(f+g) \geq \deg(g)$. Hence,
$\deg(f+g) \geq \max\{\deg(f), \deg(g) \}$.
Let $\alpha := \deg(f)$ and $\beta := \deg(g)$. Let $\varepsilon > 0$. Then
\[
f(n) \leq n^{\alpha + \varepsilon/2} \text{\quad and \quad} g(n) \leq n^{\beta + \varepsilon/2} \text{ for all large } n.
\]
Thus for all large $n$,
\[
\begin{aligned}
f(n) + g(n) &\leq n^{\alpha + \varepsilon/2} + n^{\beta + \varepsilon/2}\\
&\leq 2n^{\max\{\alpha, \beta \} + \varepsilon/2 }\\
&\leq n^{\max\{\alpha, \beta \} + \varepsilon},
\end{aligned}
\]
where in the last inequality, $n$ is large enough such that $2 \leq n^{\varepsilon/2}$. The inequalities give us
that $\deg(f+g) \leq \max\{\deg(f), \deg(g) \}$. Hence $\deg(f+g) = \max\{\deg(f), \deg(g) \}$.
\end{proof}
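A numerical illustration of the two bounds used in the proof, with the arbitrary choices $f(n) = n^2 + 5n$, $g(n) = 2n^3$ (so $\deg(f) = 2$, $\deg(g) = 3$) and $\varepsilon = 0.2$:

```python
# example functions with deg(f) = 2 and deg(g) = 3 in the sense of the definition
f = lambda n: n**2 + 5 * n
g = lambda n: 2 * n**3

eps = 0.2
# the proof's bounds: n^max <= f(n) + g(n) <= n^(max + eps) for large n,
# where max = max{deg(f), deg(g)} = 3 and n is large enough that 2 <= n^(eps/2)
for n in range(100, 2000):
    assert f(n) + g(n) >= n**3            # lower bound, giving deg(f+g) >= 3
    assert f(n) + g(n) <= n**(3 + eps)    # upper bound, giving deg(f+g) <= 3 + eps
print("bounds hold on the sampled range")
```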
\section{Finitely generated $\ensuremath{\mathbb{Z}}[x]$-modules}
\label{sec:finitely generated Z[x]-modules}
The goals of this section are to describe the maximal submodule growth of
\begin{itemize}
\item all $\ensuremath{\mathbb{Z}}_D[x]$-modules (with $D$ finite) which are finitely generated as $\ensuremath{\mathbb{Z}}_D$-modules
\item all finitely generated $\ensuremath{\mathbb{Z}}[x]$-modules.
\end{itemize}
For the latter, the cyclic case is about finding maximal ideals in $R = \ensuremath{\mathbb{Z}}[x]$ and in quotients $R/I$ of $R$. The general
case is handled by looking at f.g.\ modules over $\finitefield[x]$ (or over $\ensuremath{\mathbb{Q}}[x])$ and applying the
well known structure theorem for f.g.\ modules over principal ideal domains. At that point, we need only appeal to
\S \ref{sec:Submodules counted by isomorphism type of quotient}.
\subsection{Cyclic $\ensuremath{\mathbb{Z}}[x]$-modules}
Let $R = \ensuremath{\mathbb{Z}}[x]$. As is well known, a cyclic $R$-module is just (isomorphic to) $R/I$ where $I$ is an ideal of $R$; here $I$ is the annihilator of
a chosen generator.
We first review what the maximal ideals of $R$ are:
\begin{lemma}
\label{lem:maximal ideals of integral polynomial ring}
The maximal ideals of $R = \ensuremath{\mathbb{Z}}[x]$ are precisely the ideals of the form $(p, f)$ where $p$ is a prime number and $f \in R$ is a polynomial that is
irreducible mod $p$.
\end{lemma}
\begin{proof}
Though this is very well known, an argument is given here. (A reference is
example 3(d) in \cite{Dummit2004} in the section titled ``The prime spectrum of a ring''.)
Let $I \ensuremath{\lhd} R$ be maximal. Since $R$ itself is not a field, $I$ is not the zero ideal. So there is an $a \in I$ with $a \neq 0$.
We claim that $I$ contains a prime number. Indeed, if $a \in \ensuremath{\mathbb{Z}}$, then the characteristic of the field $R/I$ is positive and hence prime.
On the other hand, if $a$ is a non-constant polynomial, then $R/I$ is finitely generated as an abelian group, and in this case, $\ensuremath{\mathbb{Q}}$ cannot be
a subgroup of $R/I$. So in this case too, the characteristic of $R/I$ is positive. Hence $I$ does contain a prime number.
So since $R/I$ is a quotient of $\finitefield[x]$, the lemma follows.
\end{proof}
We next note that the maximal ideals of $\ensuremath{\mathbb{Z}}[x,x^{-1}]$ correspond exactly to the maximal ideals of $\ensuremath{\mathbb{Z}}[x]$
other than the ideals $(p, x)$, which are not maximal in $\ensuremath{\mathbb{Z}}[x,x^{-1}]$ because $x$ is a unit there.
See for example, Proposition 38 in \cite{Dummit2004} in the section titled ``Localization''. We easily get the following observation:
\begin{lemma}
\label{lem:laurant polynomial ring - maximal ideals like polynomial ring}
We have
\[
\maxsubmod(\ensuremath{\mathbb{Z}}[x,x^{-1}]) = \begin{cases}
\maxsubmod(\ensuremath{\mathbb{Z}}[x]) - 1 & \text{ when $n$ is prime}\\
\maxsubmod(\ensuremath{\mathbb{Z}}[x]) & \text{ when $n$ is not prime.}
\end{cases}
\]
\end{lemma}
In the following well-known result, $\mu$ is the M\"obius function.
\begin{lemma}
\label{lem:mobious function - exact number of irreducibles}
We have
\[
\maxsubmod[p^k](\finitefield[x]) = \frac{1}{k}\sum_{a|k} \mu\left(\frac{k}{a}\right)p^a.
\]
\end{lemma}
For a proof, see for example the last two pages of the section titled ``Finite Fields'' in \cite{Dummit2004}.
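The formula is easy to implement, and it can be cross-checked by brute force in the degrees where irreducibility over $\mathbb{F}_p$ reduces to having no root (degrees $2$ and $3$); a small sketch:

```python
from itertools import product

def mobius(n):
    # Mobius function via trial-division factorization
    primes = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor
            primes.append(d)
        else:
            d += 1
    if n > 1:
        primes.append(n)
    return (-1) ** len(primes)

def num_irreducible(p, k):
    # the lemma's formula: (1/k) * sum_{a | k} mu(k/a) * p^a
    return sum(mobius(k // a) * p**a for a in range(1, k + 1) if k % a == 0) // k

# cross-check for degrees 2 and 3, where a monic polynomial over F_p is
# irreducible if and only if it has no root in F_p
for p in (2, 3, 5):
    for k in (2, 3):
        brute = sum(
            all(sum(c * pow(x, i, p) for i, c in enumerate(rest + (1,))) % p
                for x in range(p))
            for rest in product(range(p), repeat=k)   # low coefficients c_0..c_{k-1}
        )
        assert brute == num_irreducible(p, k)
print("formula agrees with brute force")
```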
\begin{corollary}
\label{cor:mmoddeg of finitefield[x]}
The growth type of $\maxsubmod(\finitefield[x])$ is $n/\log(n)$.
\end{corollary}
Note that $\mmoddeg(\finitefield[x]) = 1$ even though $\maxsubmod(\finitefield[x])$ grows sublinearly.
\begin{lemma}
\label{lem:maximal ideal growth of Z x}
The function $\maxsubmod(\ensuremath{\mathbb{Z}}[x])$ has growth type $n$. In fact, $\maxsubmod(\ensuremath{\mathbb{Z}}[x]) \leq n$ for all $n$,
and $\maxsubmod(\ensuremath{\mathbb{Z}}[x]) = n$ when $n$ is prime.
\end{lemma}
\begin{proof}
We know already that $\maxsubmod[p](\ensuremath{\mathbb{Z}}[x]) = p$, and therefore $\maxsubmod(\ensuremath{\mathbb{Z}}[x])$ has at least linear growth. To show that it has at most linear growth,
we may appeal to Lemma \ref{lem:mobious function - exact number of irreducibles} or make the following simpler observation:
the number of monic polynomials in $\finitefield[x]$ of degree $k$ is exactly $p^k$.
Since $\maxsubmod[p^k](\finitefield[x])$ is the number of \emph{irreducible}, monic polynomials of degree $k$,
we get $\maxsubmod[p^k](\ensuremath{\mathbb{Z}}[x]) = \maxsubmod[p^k](\finitefield[x]) \leq p^k$; as $\maxsubmod(\ensuremath{\mathbb{Z}}[x]) = 0$ whenever $n$ is not a prime power, we conclude that $\maxsubmod(\ensuremath{\mathbb{Z}}[x]) \leq n$ for all $n$.
\end{proof}
Let $R = \ensuremath{\mathbb{Z}}[x]$, and let $I \ensuremath{\lhd} R$, so that $R/I$ is an ``arbitrary'' cyclic $R$-module. Recall that the content of a polynomial in $R$
is the greatest common divisor of its coefficients.
\begin{lemma}
\label{lem:bound for primes not dividing content}
Let $f \in I$ be a non-constant polynomial. Then for every prime $p$ that does not divide $\content(f)$
and every $k$, we have
$$\maxsubmod[p^k](R/I) \leq \deg(f).$$
\end{lemma}
\begin{proof}
Let $p$ be a prime that does not divide $\content(f)$. Then $\bar{f}$ in $\finitefield[x]$ is not zero.
Note that $\maxsubmod[p^k](R/I) = \maxsubmod[p^k](R/(p,I))$. Since $R/(p,I)$ is
a quotient of $\finitefield[x]/(\bar{f})$, we get
that $\maxsubmod[p^k](R/(p,I)) \leq \maxsubmod[p^k](\finitefield[x]/(\bar{f}))$. Next, recall that $\finitefield[x]$ is a PID, and the maximal ideals
of $\finitefield[x]/(\bar{f})$ are exactly the ideals of the form $(g)$, where $g$ is an irreducible factor of $\bar{f}$.
Finally, note that $\bar{f}$ has at most $\deg(\bar{f}) \leq \deg(f)$ distinct irreducible factors.
\end{proof}
\begin{lemma}
\label{lem:upper bound for finite F_p x module}
Fix a prime $p$. Let $J \ensuremath{\lhd} \finitefield[x]$, and let $g \in J$ be nonzero. Then for all $k \geq 1$, we have
\begin{enumerate}[(a)]
\item $\displaystyle{\maxsubmod[p^k](\finitefield[x]/J) \leq \left\lfloor \frac{\deg(g)}{k} \right\rfloor}$.
\item $\displaystyle{\maxsubmod[p^k](\finitefield[x]/J) \leq r}$, where $r$ is the number of distinct roots of $g$ in $\overline{\finitefield}$.
\end{enumerate}
\end{lemma}
\begin{proof}
This is similar to the proof of Lemma \ref{lem:bound for primes not dividing content}. For (a), we simply note that $g$ has at most
$\left\lfloor \frac{\deg(g)}{k} \right\rfloor$ irreducible factors of degree $k$. (If $g$ is constant, then it has 0 irreducible factors.)
For (b), notice that the number of distinct irreducible factors of $g$ is bounded above by $r$.
\end{proof}
Again, let $R = \ensuremath{\mathbb{Z}}[x]$, and let $I \ensuremath{\lhd} R$.
\begin{lemma}
\label{lem:cyclic modules over polynomial ring in 1-var with growth type n / log n}
Let $I \neq \{0\}$. Suppose that for some prime $p$, we have that $I \subseteq (p)$. Then $\maxsubmod(R/I)$ has growth type $n/\log(n)$.
\end{lemma}
\begin{proof}
We get that $\finitefield[x]$ is a quotient of $R/I$. Therefore, by Corollary \ref{cor:mmoddeg of finitefield[x]}, the growth
type of $\maxsubmod(R/I)$ is at least $n/\log(n)$. It remains to prove that the maximal submodule growth can be no larger; this uses
the fact that $I$ must contain a nonzero element.
It is easy to see that $I$ contains a non-constant polynomial: let $0 \neq a \in I$ and let $g(x) \in R$ be non-constant. Then
$ag(x) \in I$ (since $I$ is an ideal of $R$), and $ag(x)$ is non-constant because $R$ is an integral domain. So let $f$ be any
non-constant polynomial in $I$.
Then by Lemma \ref{lem:bound for primes not dividing content},
we get $\maxsubmod(R/I) \leq \deg(f)$ for all $n$ that are powers of some prime $p$ that does not divide $\content(f)$.
And to finish, for the primes $q$ which do divide $\content(f)$, we may yet again appeal to Corollary \ref{cor:mmoddeg of finitefield[x]}.
\end{proof}
\begin{lemma}
\label{lem:cycclic modules over polynomial ring in 1-var with no growth - ie. bounded by a constant}
Suppose that for every prime $p$, we have that $I \nsubseteq (p)$. Then there is a constant $c$ such that $\maxsubmod(R/I) \leq c$ for all $n$.
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{lem:cyclic modules over polynomial ring in 1-var with growth type n / log n},
$I$ must contain a non-constant polynomial $f$ (note that $I \neq \{0\}$,
for otherwise we would have $I \subseteq (p)$ for every prime $p$). For primes not dividing
$\content(f)$, apply Lemma \ref{lem:bound for primes not dividing content}. For the primes dividing $\content(f)$, use other polynomials:
let $X = \{p_1, p_2, \ldots, p_t\}$ be the primes dividing $\content(f)$. Since
$p_iR \nsupseteq I$ for each $i$,
the ideal $I$ contains polynomials $f_1, f_2, \ldots, f_t$ such that $\bar{f_i}$ in $\mathbb{F}_{p_i}[x]$ is not zero. Hence we may apply
Lemma \ref{lem:bound for primes not dividing content} again (once for each $f_i$) to get bounds for the finitely many primes not covered
above. Taking the maximum of all the bounds finishes the proof.
\end{proof}
\begin{corollary}
\label{cor:free abelian cyclic module has no growth - weaker version}
Let $M = \ensuremath{\mathbb{Z}}^d$ be a cyclic $\ensuremath{\mathbb{Z}}[x]$-module. Then $\maxsubmod(M) \leq d$
for all $n$.
\end{corollary}
\begin{proof}
We have that $M \cong_R R/I$ for some $I\ensuremath{\lhd} R$. By the Cayley--Hamilton theorem, $I$ contains the characteristic polynomial of the action of $x$ on $M$ (considered as a $\ensuremath{\mathbb{Q}}$-linear transformation).
The result follows by Lemma \ref{lem:bound for primes not dividing content} since the characteristic polynomial is monic of degree $d$ and hence has content $1$.
\end{proof}
We get the following:
\begin{corollary}
If $N$ is a cyclic $\ensuremath{\mathbb{Z}}[x]$-module and $G = N \rtimes \ensuremath{\mathbb{Z}} = \freebycyclic{}$, then $\maxsubgr(G)$ has growth
type $n$ and hence $\mdeg(G) = 1$.
\end{corollary}
\begin{proof}
Just apply Lemma \ref{lem:abelian by infinite cyclic - exact counting} (which could have been proved in \S \ref{sec:counting derivations})
together with Corollary \ref{cor:free abelian cyclic module has no growth - weaker version}
to get that $\maxsubgr(G)$ has at most linear growth.
For the lower bound, notice that characteristic subgroups of the normal subgroup $N$ of $G$ are necessarily
normal in $G$. In particular, the subgroups $pN$ (writing the group operation in $N$ additively; $N^p$ if it is written
multiplicatively), where $p$ is prime, are characteristic in $N$ and hence normal in $G$. Therefore $\maxsubmod(N) \geq 1$ for infinitely
many $n$, and hence $\maxsubgr(G) \geq n$ for infinitely many $n$ (again by Lemma~\ref{lem:abelian by infinite cyclic - exact counting}).
\end{proof}
\subsection{Finitely generated modules over PIDs}
\label{sec:Finitely generated modules over PIDs}
The PIDs considered in this section are all of the form $\mathbb{F}[x]$, where $\mathbb{F}$ is either $\finitefield$ or
$\ensuremath{\mathbb{Q}}$.
We first outline the main idea of this section. Let $N$ be a f.g.\ $\ensuremath{\mathbb{Z}}[x]$-module. Any maximal submodule of $N$
whose index is a power of a prime $p$ will contain $pN$ and so corresponds to a maximal $\finitefield[x]$-submodule of $N/pN$. But since
$\finitefield[x]$ is a PID, we can apply the structure theorem for f.g.\ modules over PIDs. If we cared only about the
prime $p$ (and no other primes), then we could immediately jump to \S\ref{sec:Direct sums with each term a quotient of the next}.
However, we do not care about only one specific prime; rather, we want to know what happens for all (large) primes.
It would be computationally
advantageous if we did not need to apply the structure theorem infinitely many times---once for each prime. Indeed,
one major goal of \S \ref{sec:global to local: from Q to F_p} is to prove
Lemma \ref{lem:version1_of_global to local - from QN to N/pN}, which
says that for all but finitely many primes, the decomposition of $N/pN$ afforded by the
structure theorem (applied to the PID $\finitefield[x]$) ``comes from'' the decomposition
of $\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N$ as a $\ensuremath{\mathbb{Q}}[x]$-module. The other major goal is to prove Lemma~\ref{lem:global to local - from QN to N/pN - improved},
a slight generalization of Lemma~\ref{lem:version1_of_global to local - from QN to N/pN}.
\subsection{Global to local: From $\ensuremath{\mathbb{Q}}$ to $\finitefield$}
\label{sec:global to local: from Q to F_p}
The goal of this section is to prove Lemma \ref{lem:version1_of_global to local - from QN to N/pN} (and its slight generalization).
It is possible that everything in this section is already known; certainly some of it is.
Until Lemma~\ref{lem:global to local - from QN to N/pN - improved}, let $N$ be a f.g.\ $\ensuremath{\mathbb{Z}}[x]$-module.
Denote $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N$ by \ensuremath{\rationals N}.
Since $\ensuremath{\mathbb{Q}}[x]$ is a PID, we have by the fundamental theorem of f.g.\ modules over PIDs that
\[\tag{*}
\ensuremath{\rationals N} \cong_{\ensuremath{\mathbb{Q}}[x]} \left( \bigoplus_{j=1}^{s_1} \ensuremath{\mathbb{Q}}[x]/(a_j) \right) \oplus \ensuremath{\mathbb{Q}}[x]^{s_2}
\]
for some $a_j \in \ensuremath{\mathbb{Q}}[x]$ that are not units and such that $a_1 \mid a_2 \mid \ldots \mid a_{s_1}$.
Let $a \in \ensuremath{\mathbb{Q}}[x]$. Then for all large primes $p$ we may speak of
$a \bmod p$ and regard $\bar{a}$ as an element of $\finitefield[x]$. Indeed,
there exists a finite set of primes $D$ such that
$a \in \ensuremath{\integers_D[x]}$. Of course, \ensuremath{\integers_D[x]}\ is a subring of $\ensuremath{\mathbb{Q}}[x]$, and for $p \not\in D$ we have the surjection
$\ensuremath{\integers_D[x]} \twoheadrightarrow \finitefield[x]$.
We need to prove the following.
\begin{lemma}
\label{lem:version1_of_global to local - from QN to N/pN}
Suppose $N$ and \ensuremath{\rationals N}\ are as above. Then for all large primes $p$,
\[
\ensuremath{N/pN} \cong_{\finitefield[x]} \left( \bigoplus_{j=1}^{s_1} \finitefield[x]/(\overline{a_j}) \right) \oplus \finitefield[x]^{s_2}.
\]
\end{lemma}
We first give a high-level sketch of the basic idea. Then we state a slight generalization which we will need later.
We then show how to give a proof by using
Lemma \ref{lem:D large enough gives a basis for ker(pi_D)} via
Corollary \ref{cor:decomp_of_localization} (whose proofs are deferred to
the end of this section).
\begin{proof}[Sketch of proof idea.]
When performing the computation that produces the decomposition
of $\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N$, the only thing keeping us from carrying out the same computation on $N$ itself (as a $\ensuremath{\mathbb{Z}}[x]$-module) is that
we may need to divide by finitely many integers.
So if $p$ is large enough, then in $\finitefield$ we can divide by all of those integers (i.e.\ by their residues mod $p$). For such $p$,
the steps of the algorithm are the same for $N/pN$ as for $\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N$. The way we fill in the details
is to first pass from $\ensuremath{\mathbb{Q}}[x]$ to a localization $\ensuremath{\mathbb{Z}}_D[x]$ of $\ensuremath{\mathbb{Z}}[x]$, where $D$ is finite. We then
mod out by $p$.
\end{proof}
\begin{lemma}
\label{lem:global to local - from QN to N/pN - improved}
Suppose $N$ is a f.g.\ $\ensuremath{\mathbb{Z}}_{D_0}[x]$-module, where $D_0$ is a finite set of primes.
Also, suppose that
\[
\ensuremath{\rationals N} \cong_{\ensuremath{\mathbb{Q}}[x]} \left( \bigoplus_{j=1}^{s_1} \ensuremath{\mathbb{Q}}[x]/(a_j) \right) \oplus \ensuremath{\mathbb{Q}}[x]^{s_2}
\]
for some $a_j \in \ensuremath{\mathbb{Q}}[x]$ that are not units and such that $a_1 \mid a_2 \mid \ldots \mid a_{s_1}$.
Then for all large primes $p$,
\[
\ensuremath{N/pN} \cong_{\finitefield[x]} \left( \bigoplus_{j=1}^{s_1} \finitefield[x]/(\overline{a_j}) \right) \oplus \finitefield[x]^{s_2}.
\]
\end{lemma}
We next show how to prove Lemma~\ref{lem:global to local - from QN to N/pN - improved}, quoting a couple results which will be proved later.
Denote $d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\rationals N})$ by $n$. So there
exists a surjection $\pi_\ensuremath{\mathbb{Q}}\colon \ensuremath{\mathbb{Q}}[x]^n \twoheadrightarrow \ensuremath{\rationals N}$ (which is a $\ensuremath{\mathbb{Q}}[x]$-homomorphism).
So $\ker (\pi_\ensuremath{\mathbb{Q}})$ is a
$\ensuremath{\mathbb{Q}}[x]$-submodule of the free module $\ensuremath{\mathbb{Q}}[x]^n$. Because $\ensuremath{\mathbb{Q}}[x]$ is a PID we conclude
that $\ker(\pi_\ensuremath{\mathbb{Q}})$ is a free module, and in fact we know that $\ensuremath{\mathbb{Q}}[x]^n$ has a basis $y_1, y_2, \ldots, y_n$
such that $\ker(\pi_\ensuremath{\mathbb{Q}})$ has basis $b_1y_1, b_2y_2, \ldots, b_my_m$ for some $m \leq n$ such that
$b_1 \mid b_2 \mid \ldots \mid b_{m}$.
Claim 1: No $b_j$ is a unit. Indeed, if some $b_j$ were a unit, then $y_j \in \ker(\pi_\ensuremath{\mathbb{Q}})$, so $\ensuremath{\rationals N}$
would be a quotient of the free module on the remaining $n-1$ basis elements; but because $d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\rationals N}) = n$,
there is no surjective $\ensuremath{\mathbb{Q}}[x]$-module homomorphism from $\ensuremath{\mathbb{Q}}[x]^{n-1}$ to $\ensuremath{\rationals N}$.
Claim 2: We therefore have $m = s_1$, $n-m = s_2$, and for all $j = 1,\ldots,m$, $b_j = u_ja_j$ for $u_j$ a unit. The reason is that we have
\[
\ensuremath{\rationals N} \cong_{\ensuremath{\mathbb{Q}}[x]} \ensuremath{\mathbb{Q}}[x]^n/(b_1y_1,\ldots, b_my_m), \text{ \quad and}
\]
\[
\ensuremath{\mathbb{Q}}[x]^n/(b_1y_1,\ldots, b_my_m) \cong_{\ensuremath{\mathbb{Q}}[x]} \left( \bigoplus_{j=1}^{m} \ensuremath{\mathbb{Q}}[x]/(b_j) \right) \oplus \ensuremath{\mathbb{Q}}[x]^{n-m}.
\]
Claim 2 then follows by the uniqueness of the decomposition afforded by the structure theorem. So from now on, we will write
$a_j$ instead of $b_j$.
We can make $D \supseteq D_0$ large enough (and yet keep it finite) such that $\pi_\ensuremath{\mathbb{Q}}(y_i) \in \ensuremath{N_D}$ for all $i$.
In this case, there is a map
$\pi_D$
satisfying the following commutative diagram
\centerline{\xymatrix{
(\ensuremath{\integers_D[x]})^n \ar@{^{(}->}[d]^{\iota_1} \ar@{->>}[r]^{\pi_D} &\ensuremath{N_D}\ar@{^{(}->}[d]^{\iota_2}\\
\ensuremath{\mathbb{Q}}[x]^n \ar@{->>}[r]^{\pi_\ensuremath{\mathbb{Q}}} &\ensuremath{\rationals N}}}
Note that if we have such a $\pi_D$ and diagram for a given $D$, then the same diagram holds (with the analogous $\pi_D$)
if we make $D$ any larger.
Our main step in proving Lemma \ref{lem:global to local - from QN to N/pN - improved}
is Lemma \ref{lem:D large enough gives a basis for ker(pi_D)}
which gives our main reduction, Corollary \ref{cor:decomp_of_localization}.
\begin{lemma}
\label{lem:D large enough gives a basis for ker(pi_D)}
With the above notation, we can make $D$ large enough yet finite such that
$a_1y_1, a_2y_2, \ldots, a_my_m$ lie in $(\ensuremath{\integers_D[x]})^n$ and form a \ensuremath{\integers_D[x]}-basis of $\ker(\pi_D)$.
\end{lemma}
Once we prove this lemma, we will then get the following corollary,
which tells us that our decomposition
for $\ensuremath{\rationals N}$ given at the beginning of the section passes to a decomposition
of the $\ensuremath{\integers_D[x]}$-module $\ensuremath{N_D}$.
\begin{corollary}
\label{cor:decomp_of_localization}
For the above $D$, we have
\[
\ensuremath{N_D} \cong_{\ensuremath{\integers_D[x]}} \left( \bigoplus_{j=1}^{s_1} \ensuremath{\integers_D[x]}/(a_j) \right) \oplus (\ensuremath{\integers_D[x]})^{s_2}.
\]
\end{corollary}
Once we have this corollary, it will be straightforward to complete the proof of
Lemma~\ref{lem:global to local - from QN to N/pN - improved}. Indeed, let $p \not\in D$. Then
\[
N/pN \cong_{\ensuremath{\integers_D[x]}} D^{-1}(N/pN) \cong_{\ensuremath{\integers_D[x]}} \ensuremath{N_D}/p\ensuremath{N_D},
\]
where the first isomorphism holds because every prime in $D$ is invertible mod $p$ and hence already acts invertibly on $N/pN$.
Let $A$ denote the right-hand side of the isomorphism in Corollary \ref{cor:decomp_of_localization}. We have
\[
\ensuremath{N_D}/p\ensuremath{N_D} \cong_{\ensuremath{\integers_D[x]}} A/pA \cong_{\ensuremath{\integers_D[x]}} \left( \bigoplus_{j=1}^{s_1} \finitefield[x]/(\overline{a_j}) \right) \oplus \finitefield[x]^{s_2}.
\]
Combining the above two sequences of isomorphisms yields
$$N/pN \cong_{\ensuremath{\integers_D[x]}} \left( \bigoplus_{j=1}^{s_1} \finitefield[x]/(\overline{a_j}) \right) \oplus \finitefield[x]^{s_2}$$
which passes to an isomorphism as $\finitefield[x]$-modules, giving
Lemma~\ref{lem:global to local - from QN to N/pN - improved} (and \ref{lem:version1_of_global to local - from QN to N/pN}).
The only thing that remains is to prove Lemma~\ref{lem:D large enough gives a basis for ker(pi_D)}
(and Corollary~\ref{cor:decomp_of_localization}).
\vspace{.1in}
\noindent \textbf{Proof of Lemma \ref{lem:D large enough gives a basis for ker(pi_D)}:}
\vspace{.05in}
\noindent Before giving the proof, we need some preliminaries.
Recall that a norm $\mathscr{N}$ on an integral domain $S$ is
a function $\mathscr{N}\colon S \to \ensuremath{\mathbb{Z}}^{\geq 0}$ with $\mathscr{N}(0) = 0$.
\begin{definition}
Let $S$ be an integral domain with norm $\mathscr{N}$. Let $0 \neq b \in S$. We say that we can always divide by
$b$ in $S$ if for all $a \in S$, there exist $q, r \in S$ such that
\[
a = qb + r \text{\quad with $r = 0$ or $\mathscr{N}(r) < \mathscr{N}(b)$}.
\]
\end{definition}
\begin{lemma}
\label{lem:leading coefficient a unit implies we can divide}
Let $R$ be an integral domain and let $b(x) \in R[x]$, where $R[x]$ carries the degree norm. Then we can always divide by $b(x)$ in $R[x]$ if
$\leadingcoeff(b)^{-1} \in R$.
\end{lemma}
\begin{proof}[Sketch of proof]
This is clear by looking at the division algorithm in $\mathbb{F}[x]$, where $\mathbb{F}$ is the field
of fractions of $R$.
\end{proof}
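To make the division concrete (our illustration, not part of the paper), the following sketch carries out division in $\ensuremath{\mathbb{Z}}[x]$ by a divisor whose leading coefficient is a unit $\pm 1$ of $\ensuremath{\mathbb{Z}}$; polynomials are lists of coefficients from the constant term up.

```python
def polymul(a, b):
    # multiply two polynomials over Z (coefficients low-to-high)
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def polydiv(a, b):
    # Divide a by b in Z[x], assuming leadingcoeff(b) is a unit of Z.
    # Returns (q, r) with a = q*b + r and deg(r) < deg(b).
    assert b[-1] in (1, -1), "leading coefficient must be a unit in Z"
    a = list(a)
    q = [0] * max(len(a) - len(b) + 1, 0)
    for shift in range(len(a) - len(b), -1, -1):
        # b[-1] is its own inverse in Z, so this clears the top coefficient
        coeff = a[shift + len(b) - 1] * b[-1]
        q[shift] = coeff
        for i, bi in enumerate(b):
            a[shift + i] -= coeff * bi
    return q, a[:len(b) - 1]

# divide x^3 + 2x + 7 by the monic x^2 + x + 1
q, r = polydiv([7, 2, 0, 1], [1, 1, 1])
# q = x - 1 and r = 2x + 8, so x^3 + 2x + 7 = (x - 1)(x^2 + x + 1) + 2x + 8
```

With a non-unit leading coefficient (e.g.\ dividing by $2x$ in $\ensuremath{\mathbb{Z}}[x]$) this loop would need fractions, which is exactly why the lemma localizes until the leading coefficient becomes invertible.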
We know that in a Euclidean domain, every ideal is principal. In the process of showing that, we can extract a
little more, namely Lemma~\ref{lem:minimal norm and ability to divide implies principal ideal}.
\begin{lemma}
\label{lem:minimal norm and ability to divide implies principal ideal}
Let $R$ be an integral domain with norm $\mathscr{N}$. Suppose $I \ensuremath{\lhd} R$ and that there exists $d \in I$ such that
\begin{enumerate}
\item $\mathscr{N}(d) = \min_{0 \neq \alpha \in I}\{\mathscr{N}(\alpha) \}$ \text{ and}
\item We can always divide by $d$ in $R$.
\end{enumerate}
Then $I = (d)$.
\end{lemma}
\begin{proof}
We know $I \supseteq (d)$ since $d \in I$.
To show $I \subseteq (d)$, suppose that $a \in I$. Because we can always divide by $d$, there exist
$q, r \in R$ such that $a = qd + r$ with $r = 0$ or $\mathscr{N}(r) < \mathscr{N}(d)$. But $a, d \in I$ implies that $r \in I$ also.
Therefore, by minimality of $\mathscr{N}(d)$, we conclude that $r = 0$. So $a \in (d)$.
\end{proof}
\begin{lemma}
\label{lem:D exists for base case}
Let $y_1, y_2, \ldots, y_n$ be a $\ensuremath{\mathbb{Q}}[x]$-basis of $\ensuremath{\mathbb{Q}}[x]^n$. Then there exists a finite $D$ (containing $D_0$) such that
$y_1, y_2, \ldots, y_n$ form a $\ensuremath{\mathbb{Z}}_D[x]$-basis of $\ensuremath{\mathbb{Z}}_D[x]^n$.
\end{lemma}
\begin{proof}
Let $e_1, e_2, \ldots, e_n$ be the standard basis of $\ensuremath{\mathbb{Z}}_D[x]^n$; it is also a
$\ensuremath{\mathbb{Q}}[x]$-basis of $\ensuremath{\mathbb{Q}}[x]^n$. For $i \in [n]$, let $\pi_i\colon \ensuremath{\mathbb{Q}}[x]^n \to \ensuremath{\mathbb{Q}}[x]$ be the projection onto
the $i$-th coordinate: $\pi_i(\sum_j r_j e_j) := r_i$. Fix $k \in [n]$. Then $y_k \in \ensuremath{\mathbb{Z}}_D[x]^n$ iff
for all $i \in [n]$, we have $\pi_i(y_k) \in \ensuremath{\mathbb{Z}}_D[x]$.
Therefore, there exists a finite $D$ (containing $D_0$) such that $y_k \in \ensuremath{\mathbb{Z}}_D[x]^n$ for all $k$; but this alone is not sufficient.
Now, $y_1, y_2, \ldots, y_n$ is a basis for $\ensuremath{\mathbb{Z}}_D[x]^n$ iff the map sending $e_i \mapsto y_i$ for all $i \in [n]$ extends to an isomorphism.
This map is given by a matrix; indeed, for given $j$, let
$y_j = \sum_{i=1}^n a_{ij} e_i$, and form the $n \times n$ matrix $A := (a_{ij})$. Of course, the entries of $A$ are all
in $\ensuremath{\mathbb{Z}}_D[x]$.
The matrix $A$ is invertible in the ring $M_n(\ensuremath{\mathbb{Z}}_D[x])$ (of all $n \times n$ matrices over $\ensuremath{\mathbb{Z}}_D[x]$)
iff the map ``multiply on the left by $A$'' from $\ensuremath{\mathbb{Z}}_D[x]^n$ to $\ensuremath{\mathbb{Z}}_D[x]^n$ is an isomorphism, and
this map is an isomorphism iff $y_1, y_2, \ldots, y_n$ is a $\ensuremath{\mathbb{Z}}_D[x]$-basis of $\ensuremath{\mathbb{Z}}_D[x]^n$.
Finally, $A$ is an invertible matrix iff $\det(A)$ is a unit in $\ensuremath{\mathbb{Z}}_D[x]$. Because $y_1, y_2, \ldots, y_n$
is a $\ensuremath{\mathbb{Q}}[x]$-basis of $\ensuremath{\mathbb{Q}}[x]^n$, the determinant $\det(A)$ is a unit in $\ensuremath{\mathbb{Q}}[x]$, i.e., a non-zero
rational number. Therefore, we can make $D$
large enough (while keeping it finite) so that $\det(A)^{-1} \in \ensuremath{\mathbb{Z}}_D[x]$.
\end{proof}
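A toy instance of the determinant criterion (our illustration; the specific vectors are made up): take $n = 2$ with $y_1 = e_1 + 2e_2$ and $y_2 = 3e_2$. The change-of-basis matrix has determinant $3$, so $y_1, y_2$ fail to be a basis over $\ensuremath{\mathbb{Z}}[x]$ but become one as soon as $D$ contains the prime $3$:

```python
from fractions import Fraction

# y_1 = e_1 + 2*e_2 and y_2 = 3*e_2, written as the columns of A
A = [[1, 0],
     [2, 3]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det == 3  # a unit in Q[x] (a non-zero rational), but not in Z[x]

# inverse over Q via the adjugate; denominators only involve the prime 3,
# so A becomes invertible over Z_D[x] once 3 is in D
A_inv = [[Fraction(A[1][1], det), Fraction(-A[0][1], det)],
         [Fraction(-A[1][0], det), Fraction(A[0][0], det)]]
denoms = {entry.denominator for row in A_inv for entry in row}
assert all(d in (1, 3) for d in denoms)
```

The same computation with polynomial entries would proceed identically; the point is that the determinant of a $\ensuremath{\mathbb{Q}}[x]$-basis change is a non-zero rational, so only finitely many primes need inverting.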
\begin{lemma}
\label{lem:lem:D large enough gives a basis for ker M_D}
Let \ensuremath{\rationals M}\ denote some $\ensuremath{\mathbb{Q}}[x]$-submodule of $\ensuremath{\mathbb{Q}}[x]^n$, and denote $\ensuremath{\rationals M} \cap \ensuremath{\integers_D[x]}^n$ by \ensuremath{M_D}.
Let $y_i \in \ensuremath{\mathbb{Q}}[x]^n$, $a_i \in \ensuremath{\mathbb{Q}}[x]$ be such that $y_1, y_2, \ldots, y_n$ is a
$\ensuremath{\mathbb{Q}}[x]$-basis of $\ensuremath{\mathbb{Q}}[x]^n$ and $a_1y_1, \ldots, a_m y_m$ is a $\ensuremath{\mathbb{Q}}[x]$-basis of
\ensuremath{\rationals M}.
Then there exists a finite $D \supseteq D_0$ such that
$a_1y_1, \ldots, a_my_m$ is a $\ensuremath{\integers_D[x]}$-basis of \ensuremath{M_D}.
\end{lemma}
\begin{proof}
We will show that there exists a finite $D$ such that for all $c \in \ensuremath{M_D}$, there exist unique $r_1, \ldots, r_m \in \ensuremath{\integers_D[x]}$ such that
$c = r_1 a_1 y_1 + \cdots + r_m a_m y_m$.
By Lemma \ref{lem:D exists for base case}, choose a finite $D \supseteq D_0$ large enough that
$y_1, \ldots, y_n$ is a \ensuremath{\integers_D[x]}-basis of $\ensuremath{\integers_D[x]}^n$. (So $\ensuremath{\integers_D[x]}^n = \bigoplus_{i = 1}^n \ensuremath{\integers_D[x]} y_i$.) For $i \in [n]$,
let $\pi_i\colon \ensuremath{\mathbb{Q}}[x]^n \to \ensuremath{\mathbb{Q}}[x]$ be given by $\pi_i(\sum_j r_j y_j) := r_i$.
We know $\pi_i(\ensuremath{\rationals M}) \ensuremath{\lhd} \ensuremath{\mathbb{Q}}[x]$ is a principal ideal (since $\ensuremath{\mathbb{Q}}[x]$ is a PID);
more precisely, for $i \in [m]$ we have $\pi_i(\ensuremath{\rationals M}) = (a_i)_{\ensuremath{\mathbb{Q}}[x]} :=$ the ideal of $\ensuremath{\mathbb{Q}}[x]$ generated by $a_i$.
If necessary, add finitely many primes to $D$ so that for all $i \in [m]$, $a_i \in \ensuremath{\integers_D[x]}$ and
$\leadingcoeff(a_i)^{-1} \in \ensuremath{\mathbb{Z}}_D$. Consequently, Lemma \ref{lem:leading coefficient a unit implies we can divide}
tells us we can always divide by $a_i$ in \ensuremath{\integers_D[x]}. Since $\pi_i(\ensuremath{\rationals M}) = (a_i)_{\ensuremath{\mathbb{Q}}[x]}$, the polynomial $a_i$ has minimal
degree among the non-zero elements of $\pi_i(\ensuremath{\rationals M})$, and hence $a_i$ also has minimal degree in $\pi_i(\ensuremath{M_D})$. Therefore,
we conclude by Lemma \ref{lem:minimal norm and ability to divide implies principal ideal} that
$\pi_i(\ensuremath{M_D}) = (a_i)_{\ensuremath{\integers_D[x]}} :=$ the ideal of $\ensuremath{\integers_D[x]}$ generated by $a_i$. We now have $D$ picked.
Let $c \in \ensuremath{M_D}$. We know that since $c \in \ensuremath{\rationals M}$ and since $a_1 y_1, \ldots, a_m y_m$ is a $\ensuremath{\mathbb{Q}}[x]$-basis of \ensuremath{\rationals M},
there exist unique $c_1, \ldots, c_m \in \ensuremath{\mathbb{Q}}[x]$ such that
\[\tag{*}
c = c_1 a_1 y_1 + \cdots + c_m a_m y_m.
\]
We have that $\pi_i(c) \in \pi_i(\ensuremath{M_D}) = (a_i)_{\ensuremath{\integers_D[x]}}$. Therefore, for each $i \in [m]$, there exists $d_i \in \ensuremath{\integers_D[x]}$ such that
$\pi_i(c) = d_i a_i$. But we know from (*) that $\pi_i(c) = c_i a_i$. Thus $c_i a_i = d_i a_i$. Since
\ensuremath{\integers_D[x]}\ is an integral domain and $a_i \neq 0$, we conclude that $c_i = d_i$ for all $i$. Hence each $c_i \in \ensuremath{\integers_D[x]}$, and the $c_i$ are unique.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:D large enough gives a basis for ker(pi_D)}]
Notice that in the notation of the commutative diagram preceding
Lemma \ref{lem:D large enough gives a basis for ker(pi_D)} that
$$\ker(\pi_D) = \ker(\iota_2 \circ \pi_D) = \ker(\pi_\ensuremath{\mathbb{Q}} \circ \iota_1) = \ker(\pi_\ensuremath{\mathbb{Q}}) \cap (\ensuremath{\integers_D[x]})^n.$$
Let $M_\ensuremath{\mathbb{Q}} = \ker(\pi_\ensuremath{\mathbb{Q}})$ and $M_D = \ker(\pi_\ensuremath{\mathbb{Q}}) \cap (\ensuremath{\integers_D[x]})^n$, and
apply Lemma \ref{lem:lem:D large enough gives a basis for ker M_D}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:decomp_of_localization}]
We have $\pi_D\colon \ensuremath{\integers_D[x]}^n \twoheadrightarrow N_D$ from the commutative diagram preceding Lemma~\ref{lem:D large enough gives a basis for ker(pi_D)}.
Therefore,
\[\tag{*1}
N_D \cong_{\ensuremath{\integers_D[x]} } \ensuremath{\integers_D[x]}^n/ \ker(\pi_D).
\]
We have by Lemma~\ref{lem:D large enough gives a basis for ker(pi_D)} that
\[\tag{*2}
\ensuremath{\integers_D[x]}^n/ \ker(\pi_D) \cong_{\ensuremath{\integers_D[x]}} \ensuremath{\integers_D[x]}^n/(a_1y_1, \ldots, a_my_m).
\]
Since $y_1, \ldots, y_n$ form a basis of $\ensuremath{\integers_D[x]}^n$ (as Lemma~\ref{lem:D exists for base case} says), we get that
\[\tag{*3}
\ensuremath{\integers_D[x]}^n/(a_1y_1, \ldots, a_my_m) \cong_{\ensuremath{\integers_D[x]} } \left( \bigoplus_{j=1}^{m} \ensuremath{\integers_D[x]}/(a_j) \right) \oplus (\ensuremath{\integers_D[x]})^{n-m}
\]
By Claim 2 (which follows the statement of Lemma~\ref{lem:global to local - from QN to N/pN - improved}) we have $m = s_1$ and $n - m = s_2$.
Thus, combining (*1), (*2), and (*3) gives Corollary~\ref{cor:decomp_of_localization}.
\end{proof}
\noindent This completes our proof of Lemma~\ref{lem:global to local - from QN to N/pN - improved} and hence of
Lemma~\ref{lem:version1_of_global to local - from QN to N/pN} too.
Let $N$ be a f.g.\ $\ensuremath{\mathbb{Z}}_{D_0}[x]$-module (for some finite set of primes $D_0$).
Denote the $\finitefield[x]$-torsion-free rank of $N/pN$ by $r(p)$ and write
\[
\ensuremath{\rationals N} \cong_{\ensuremath{\mathbb{Q}}[x]} \left( \bigoplus_{j=1}^{s(0)} \ensuremath{\mathbb{Q}}[x]/(a_j) \right) \oplus \ensuremath{\mathbb{Q}}[x]^{r(0)}.
\]
\begin{corollary}
\label{cor:p^k approaches infinity in two ways - consequence on module}
With the notation from the previous paragraph, there exists a constant $C$ (depending on $N$) such that $n = p^k > C$ implies
\begin{enumerate}[(a)]
\item $\maxsubmod(N) = \maxsubmod(\finitefield[x]^{r(p)})$ \hspace{.1in} or
\item $\displaystyle{\maxsubmod(N) = \maxsubmod \left( \bigoplus_{j=1}^{s(0)} \finitefield[x]/(\overline{a_{j}}) \oplus \finitefield[x]^{r(0)} \right) }$.
\end{enumerate}
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:global to local - from QN to N/pN - improved}, for all large primes $p$ we have
\[
N/pN = \left(\bigoplus_{j=1}^{s(0)} \finitefield[x]/(\overline{a_{j}}) \right) \oplus \finitefield[x]^{r(0)}.
\]
Let the exceptional primes, if any, be $p_1, p_2, \ldots, p_s$.
Let $p$ be any such exceptional prime. We know that the $\finitefield[x]$-torsion part of $N/pN$ is finite; say its cardinality is $p^{c_p}$.
Thus for $k > c_p$, we have $\maxsubmod[p^k](N) = \maxsubmod[p^k](\finitefield[x]^{r(p)})$.
Let $c = \max \{c_p : p \in \{p_1, p_2, \ldots, p_s \} \}$, and let $C = p_s^{c}$ (assuming $p_s$ is the biggest among $p_1, p_2, \ldots, p_s$). Then this $C$ works. (If there are no exceptional primes, then of course (b) always holds.)
\end{proof}
\subsection{Direct sums with each term a quotient of the next}
\label{sec:Direct sums with each term a quotient of the next}
This subsection is a continuation of Section \ref{sec:Codimension 1 subspaces}. However,
it fits naturally here because a finitely generated module over a PID can be written in the form described in the next paragraph.
Let $R$ be a commutative (unital) ring.
Let $A = A_1 \oplus A_2 \oplus \cdots \oplus A_t$, where each
$A_j$ is a cyclic $R$-module such that $A_j$ is a quotient of $A_{j+1}$ for $j = 1, 2, \ldots, t-1$. Fix a positive integer $n$,
and let $\mathcal{S}\mathcal{Q}^n$ be \emph{any} set of \textbf{s}imple \textbf{q}uotients of $A_1$ of index $n$.
\begin{lemma}
\label{lem:each term quotient of the next -- take some simple quotients of first term}
Using the notation from the preceding paragraph, we have
\[
\sum_{S \in \mathcal{S}\mathcal{Q}^n} \maxsubmod[S](A) = |\mathcal{S}\mathcal{Q}^n|(1 + n + \cdots + n^{t-1}).
\]
\end{lemma}
\begin{proof}
Let $S \in \mathcal{S}\mathcal{Q}^n$. Because $A_1$ is a quotient of $A_j$ for all $j \in \{2, 3, \ldots, t\}$, the simple module $S$ is a quotient of every $A_j$, and we conclude that
$|\{A_j : \maxsubmod[S](A_j) = 1 \}| = t$. Therefore, Corollary \ref{cor:submodisoto R/I for direct sum of cyclic modules} says that
$\maxsubmod[S](A) = 1 + n + \cdots + n^{t-1}$. Summing over $S \in \mathcal{S}\mathcal{Q}^n$ gives the result.
\end{proof}
\begin{corollary}
\label{cor:each term quotient of the next -- take certain simple quotients of j-th term}
Let $A$ be as in Lemma \ref{lem:each term quotient of the next -- take some simple quotients of first term}. Fix $j \in \{1, 2, \ldots, t\}$.
Let $\mathcal{S}\mathcal{Q}_j^n$ be a set of simple quotients of $A_j$ of index $n$ such that
$\maxsubmod[S](A_i) = 0$ for every $S \in \mathcal{S}\mathcal{Q}_j^n$ and all $i < j$. Then
\[
\sum_{S \in \mathcal{S}\mathcal{Q}_j^n} \maxsubmod[S](A) = |\mathcal{S}\mathcal{Q}_j^n|(1 + n + \cdots + n^{t-j}).
\]
\end{corollary}
\begin{proof}
Let $S \in \mathcal{S}\mathcal{Q}_j^n$. Then $\maxsubmod[S](A) = \maxsubmod[S](A_j \oplus A_{j+1} \oplus \cdots \oplus A_t)$. The result then
follows from Lemma \ref{lem:each term quotient of the next -- take some simple quotients of first term} by reindexing (subtracting
$j-1$ from each index in $A_j, A_{j+1}, \ldots, A_t$).
\end{proof}
We fix a little more notation for the following lemma.
Let $A_0$ be the zero $R$-module. For an $R$-module $B$, let $\mathcal{S}\mathcal{Q}(B,n)$ be the set of \emph{all} simple
quotients of $B$ of index $n$.
\begin{corollary}
\label{cor:each term quotient of the next -- exact number of maximal submodules}
Using the notation from the paragraph preceding this corollary and the paragraph before Lemma \ref{lem:each term quotient of the next -- take some simple quotients of first term},
we have
\[
\maxsubmod(A) = \sum_{j=1}^{t} (\maxsubmod(A_j) - \maxsubmod(A_{j-1}))(1 + n + \cdots + n^{t-j}).
\]
\end{corollary}
\begin{proof}
The idea is just to write $\mathcal{S}\mathcal{Q}(A,n)$ as a disjoint union as follows (which we can do
since it is assumed that $A_j$ is a quotient of $A_{j+1}$ for all $j$):
\[ \tag{*}
\mathcal{S}\mathcal{Q}(A,n) = \bigsqcup_{j=1}^{t} (\mathcal{S}\mathcal{Q}(A_j,n) \setminus \mathcal{S}\mathcal{Q}(A_{j-1},n)).
\]
Let $\mathcal{S}\mathcal{Q}_j^n := \mathcal{S}\mathcal{Q}(A_j,n) \setminus \mathcal{S}\mathcal{Q}(A_{j-1},n)$. We have (with explanations following)
\[\begin{aligned}
\maxsubmod(A) &= \sum_{S \in \mathcal{S}\mathcal{Q}(A,n)} \maxsubmod[S](A) \\
&= \sum_{j=1}^{t} \sum_{S \in \mathcal{S}\mathcal{Q}_j^n} \maxsubmod[S](A) \\
&= \sum_{j=1}^{t} (\maxsubmod(A_j) - \maxsubmod(A_{j-1}))(1 + n + \cdots + n^{t-j}).
\end{aligned}
\]
The first equality is by Lemma \ref{lem:maxsubmod N = sum_S submodisoto_S(N)}. The second equality is by equation (*).
For the third equality, recall that in a \emph{cyclic} module $B$, two maximal submodules $M_1$ and $M_2$ of $B$ are equal iff
$B/M_1 \cong_R B/M_2$. In other words, for a cyclic module $B$, we have $|\mathcal{SQ}(B,n)| = \maxsubmod(B)$.
Thus $|\mathcal{S}\mathcal{Q}_j^n| = \maxsubmod(A_j) - \maxsubmod(A_{j-1})$ because each $A_i$ is cyclic (and since
$\mathcal{SQ}(A_{j-1},n) \subseteq \mathcal{SQ}(A_{j},n)$).
Thus the third equality
follows by Corollary \ref{cor:each term quotient of the next -- take certain simple quotients of j-th term}.
\end{proof}
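As a sanity check of the formula (ours, not part of the paper), take $R = \ensuremath{\mathbb{Z}}$, $t = 3$, and $A_j = \ensuremath{\mathbb{Z}}/2^j$, so that each $A_j$ is a quotient of $A_{j+1}$ and each has exactly one simple quotient of index $n = 2$. The corollary then predicts $(1 - 0)(1 + 2 + 4) = 7$ maximal submodules of index $2$, which we can confirm by listing the index-$2$ subgroups of $\ensuremath{\mathbb{Z}}/2 \oplus \ensuremath{\mathbb{Z}}/4 \oplus \ensuremath{\mathbb{Z}}/8$ as kernels of surjective homomorphisms onto $\ensuremath{\mathbb{Z}}/2$:

```python
from itertools import product

p, t = 2, 3
moduli = [p**j for j in range(1, t + 1)]          # Z/2 x Z/4 x Z/8
elements = list(product(*(range(m) for m in moduli)))

kernels = set()
for images in product(range(p), repeat=t):
    if images == (0,) * t:
        continue                     # the zero map is not surjective
    # Sending the j-th generator to images[j] is well defined because p
    # divides every modulus; the kernel is then an index-p (maximal) subgroup.
    ker = frozenset(x for x in elements
                    if sum(a * xi for a, xi in zip(images, x)) % p == 0)
    kernels.add(ker)

predicted = sum(p**i for i in range(t))  # 1 + p + ... + p^(t-1) = 7
assert len(kernels) == predicted
```

Every index-$p$ subgroup arises as such a kernel, and the set dedupes kernels coming from proportional homomorphisms, so the count matches $(p^t - 1)/(p - 1) = 1 + p + \cdots + p^{t-1}$ for any prime $p$.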
\begin{corollary}
\label{cor:each term quotient of the next -- upper and lower bounds}
Using the notation from the paragraph preceding Lemma \ref{lem:each term quotient of the next -- take some simple quotients of first term}, we
have, for all $n$,
\[\begin{aligned}
\maxsubmod(A) & \leq \maxsubmod(A_t)(1 + n + \cdots + n^{t-1}) \text{\hspace{.1in} and }\\
\maxsubmod(A) & \geq \maxsubmod(A_1)(1 + n + \cdots + n^{t-1}).
\end{aligned}
\]
\end{corollary}
\begin{proof}
For the second inequality (the lower bound for $\maxsubmod(A)$), note that the $j=1$ term of the sum in
Corollary \ref{cor:each term quotient of the next -- exact number of maximal submodules} is
$\maxsubmod(A_1)(1 + n + \cdots + n^{t-1})$, and all the other terms in that sum are non-negative.
For the first inequality (the upper bound for $\maxsubmod(A)$), we use Corollary
\ref{cor:each term quotient of the next -- exact number of maximal submodules} again to get
\[\begin{aligned}
\maxsubmod(A) & = \sum_{j=1}^{t} (\maxsubmod(A_j) - \maxsubmod(A_{j-1}))(1 + n + \cdots + n^{t-j}) \\
& \leq \sum_{j=1}^{t} (\maxsubmod(A_j) - \maxsubmod(A_{j-1}))(1 + n + \cdots + n^{t-1}) \\
& = \maxsubmod(A_t)(1 + n + \cdots + n^{t-1}).
\end{aligned}
\]
\end{proof}
Note that this corollary does not give us the maximal submodule growth of such a module $A$ because $A$ itself may be finite; in case $A$ is finite,
we would have
$\maxsubmod(A) = \maxsubmod(A_1) = 0$ for all large $n$.
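As a quick numerical sanity check of Corollary \ref{cor:each term quotient of the next -- upper and lower bounds}, the following sketch (all function names are ours; this is an illustration, not part of the argument) evaluates the telescoping sum from Corollary \ref{cor:each term quotient of the next -- exact number of maximal submodules} for an arbitrary nondecreasing integer sequence standing in for $\maxsubmod(A_1) \leq \cdots \leq \maxsubmod(A_t)$, and verifies that it lies between the two bounds:

```python
# Numeric sanity check of the upper/lower bounds (helper names are ours).
def geometric(n, top):
    """Return 1 + n + ... + n^top."""
    return sum(n**i for i in range(top + 1))

def exact_sum(ms, n):
    """Telescoping sum  sum_j (m_j - m_{j-1}) * (1 + n + ... + n^{t-j}),
    with m_0 = 0, as in the exact-count corollary."""
    t = len(ms)
    prev = 0
    total = 0
    for j, m in enumerate(ms, start=1):
        total += (m - prev) * geometric(n, t - j)
        prev = m
    return total

ms = [2, 5, 5, 9]      # nondecreasing, standing in for maxsubmod(A_j)
n = 3
t = len(ms)
lower = ms[0] * geometric(n, t - 1)    # = 2 * 40 = 80
upper = ms[-1] * geometric(n, t - 1)   # = 9 * 40 = 360
assert lower <= exact_sum(ms, n) <= upper
```

Any nondecreasing sequence and any $n \geq 2$ may be substituted; the assertion reproduces exactly the two inequalities of the corollary.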
\subsection{$\ensuremath{\mathbb{Z}}_D[x]$-modules which are f.g.\ as $\ensuremath{\mathbb{Z}}_D$-modules}
\label{sec:Z_D[x]-modules which are f.g. as Z_D-modules}
Let $R = \ensuremath{\mathbb{Z}}_D[x]$ (for some finite set $D$ of primes), and let $N$ be an $R$-module which is finitely generated as a $\ensuremath{\mathbb{Z}}_D$-module.
Suppose
\[
\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N = \bigoplus_{j=1}^d \ensuremath{\mathbb{Q}}[x]/(a_j),
\]
where $a_1 | a_2 | \cdots | a_d$ as provided by the structure theorem with $a_1$ (and hence each $a_i$) not a unit.
We have then that $d = d_{\ensuremath{\mathbb{Q}} \ensuremath{\otimes} R}(\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N)$ is the minimal size of a $\ensuremath{\mathbb{Q}} \ensuremath{\otimes} R$ generating set for
the module $\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N$. The following corollary
extends Lemma \ref{lem:version1_of_global to local - from QN to N/pN}:
it states how Corollary \ref{cor:p^k approaches infinity in two ways - consequence on module} simplifies
when $N$ is, as assumed here, f.g.\ as a $\ensuremath{\mathbb{Z}}_D$-module.
\begin{corollary}
\label{cor:we can ignore finitely many primes - for f.g. abelian groups}
Using the above notation, there exists a constant $C$ (depending on $N$) such that $n = p^k > C$ implies either
\begin{enumerate}[(a)]
\item $\maxsubmod(N) = 0$ \hspace{.1in} or
\item $\displaystyle{N/pN = \bigoplus_{j=1}^d \finitefield[x]/(\overline{a_j})}$.
\end{enumerate}
\end{corollary}
\begin{proof}
This follows from Corollary \ref{cor:p^k approaches infinity in two ways - consequence on module}: because
$N$ is f.g.\ as a $\ensuremath{\mathbb{Z}}_D$-module, for every prime $p$, $\finitefield[x]$ is \emph{not} a quotient of
$N/pN$.
\end{proof}
Although the following could be taken as a corollary to Theorem \ref{thm:exact growth -with coefficient- of Z x module which is f.g. abelian},
we include a direct proof of this simpler result.
\begin{proposition}
\label{prop:mdeg(N) -actually tilde{m}deg(N)}
With the above notation, $\maxsubmod(N)$ has growth type $n^{d-1}$, where still
$d = d_{\ensuremath{\mathbb{Q}} \ensuremath{\otimes} R}(\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N)$.
\end{proposition}
\begin{proof}
Let $C$ be as in
Corollary \ref{cor:we can ignore finitely many primes - for f.g. abelian groups}. Fix $n = p^k > C$
such that $\maxsubmod(N) \neq 0$. Then by Corollary \ref{cor:we can ignore finitely many primes - for f.g. abelian groups}, we conclude
that $\displaystyle{ N/pN = \bigoplus_{j=1}^d \finitefield[x]/(\overline{a_j})}$.
We show that $\maxsubmod(N) \leq \deg(a_d)(1 + n + \cdots + n^{d-1})$ for all large $n$: by
Corollary \ref{cor:each term quotient of the next -- upper and lower bounds}, we get
\[ \tag{*}
\maxsubmod(N/pN) \leq \maxsubmod(\finitefield[x]/(\overline{a_d}))(1 + n + \cdots + n^{d-1}).
\]
But by Lemma \ref{lem:upper bound for finite F_p x module},
we get that $\maxsubmod(\finitefield[x]/(\overline{a_d})) \leq \deg(\overline{a_d}) \leq \deg(a_d)$. Notice that the constant $\deg(a_d)$
does not depend on which (large) $p$ we pick. This gives us the
desired upper bound.
For the lower bound, by Corollary \ref{cor:each term quotient of the next -- upper and lower bounds} we get:
\[
\maxsubmod(N/pN) \geq \maxsubmod(\finitefield[x]/(\overline{a_1}))(1 + n + \cdots + n^{d-1}).
\]
Conclude by noting that, since $a_1$ is a non-constant polynomial,
we get $\maxsubmod[p^k](\finitefield[x]/(\overline{a_1})) \geq 1$ for some $k$.
\end{proof}
We can be a bit more precise than simply stating the growth type of $\maxsubmod(N)$.
\begin{theorem}
\label{thm:exact growth -with coefficient- of Z x module which is f.g. abelian}
Let $N$, $d$, and $a_j$ be as in the beginning of Section \ref{sec:Z_D[x]-modules which are f.g. as Z_D-modules}, and
let $\rho_j$ be the
number of distinct roots of $a_j$ in $\ensuremath{\mathbb{C}}$. Then
\[\begin{aligned}
\maxsubmod(N) & \leq \rho_1 n^{d-1} + O(n^{d-2}) & \text{for all large $n$, and}\\
\maxsubmod(N) & \geq \rho_1n^{d-1} & \text{for infinitely many $n$.}
\end{aligned}
\]
\end{theorem}
\begin{proof}
Upper bound:
Fix a large $n = p^k$
such that $\maxsubmod(N) \neq 0$; by Corollary~\ref{cor:we can ignore finitely many primes - for f.g. abelian groups}, we conclude
that $$ N/pN = \bigoplus_{j=1}^d \finitefield[x]/(\overline{a_j}).$$
We first show
\[ \tag{*}
\maxsubmod(N) \leq \sum_{j=1}^d \rho_j(1 + n + \cdots + n^{d-j}).
\]
Indeed, we just use
Corollary \ref{cor:each term quotient of the next -- exact number of maximal submodules} together with
Lemma \ref{lem:upper bound for finite F_p x module} part (b): Let
$A_i = \finitefield[x]/(\overline{a_{i}})$. We have,
\[
\maxsubmod(A_j) - \maxsubmod(A_{j-1}) \leq \maxsubmod(A_j),
\]
and by Lemma~\ref{lem:upper bound for finite F_p x module}, $\maxsubmod(A_j)$ is bounded above by
the number of roots of $a_j$ in $\overline{\finitefield}$,
which by Lemma~\ref{lem:num_roots in overline(F_p) of polynomial bounded above by num_roots in C}, is bounded above by $\rho_j$. This
shows (*).
Let `$RHS$' denote the right hand side of (*). Then
\[
\begin{aligned}
RHS &= \sum_{k=0}^{d-1} \left(\sum_{j=1}^{d-k} \rho_j \right) n^k \\
&= \rho_1 n^{d-1} + \sum_{k=0}^{d-2} \left(\sum_{j=1}^{d-k} \rho_j \right) n^k.
\end{aligned}
\]
Combining these equalities with (*) completes the upper bound.
Lower bound:
By Lemma~\ref{lem:roots of non-constant polynomials in F_p and in C},
there are infinitely many primes $p$ such that
$\maxsubmod[p](A_1) = \rho_1$, where $A_1 = \finitefield[x]/(\overline{a_{1}})$ as above. We conclude by using
Corollary \ref{cor:each term quotient of the next -- upper and lower bounds}.
\end{proof}
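The regrouping of $RHS$ into powers of $n$ used in the upper bound can be checked numerically; a minimal sketch (function names are ours):

```python
# Check numerically that  sum_j rho_j (1 + n + ... + n^{d-j})
# equals  sum_{k=0}^{d-1} (rho_1 + ... + rho_{d-k}) n^k.

def rhs_direct(rhos, n):
    """sum_{j=1}^d rho_j (1 + n + ... + n^{d-j}); rhos is 0-indexed."""
    d = len(rhos)
    return sum(r * sum(n**i for i in range(d - j)) for j, r in enumerate(rhos))

def rhs_regrouped(rhos, n):
    """sum_{k=0}^{d-1} (sum_{j=1}^{d-k} rho_j) n^k."""
    d = len(rhos)
    return sum(sum(rhos[:d - k]) * n**k for k in range(d))

for rhos in ([2, 1, 3], [1], [4, 4, 0, 2]):
    for n in (2, 3, 10):
        assert rhs_direct(rhos, n) == rhs_regrouped(rhos, n)
```

The leading coefficient of the regrouped polynomial in $n$ is $\rho_1$, matching the statement of the theorem.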
\subsection{General f.g.\ $\ensuremath{\mathbb{Z}}[x]$-modules}
Again, let $R = \ensuremath{\mathbb{Z}}[x]$. In this subsection, we do \emph{not} assume that our modules are f.g.\ as abelian groups.
We begin by stating a result that could have been given in
Section \ref{sec:Direct sums with each term a quotient of the next}:
\begin{corollary}
\label{cor:maxsubmod A^d = maxsubmod A times 1 + n + cdots + n^ d-1}
Let $A$ be a cyclic $R$-module, and let $d \in \ensuremath{\mathbb{Z}}_{\geq 1}$. Then
\[
\maxsubmod(A^d) = \maxsubmod(A)(1+ n + \cdots + n^{d-1}).
\]
\end{corollary}
\begin{proof}
This follows immediately from Corollary \ref{cor:each term quotient of the next -- exact number of maximal submodules}
(or Corollary \ref{cor:each term quotient of the next -- upper and lower bounds}).
\end{proof}
\begin{corollary}
\label{cor:maxsubmod 'Z x' ^ d has growth type n^d}
Fix $d \in \ensuremath{\mathbb{Z}}_{\geq 1}$; as before, $R = \ensuremath{\mathbb{Z}}[x]$. Then $\maxsubmod(R^d)$ has growth type $n^d$.
In fact, $\maxsubmod(R^d) \leq n^d +n^{d-1} + \cdots + n$ for all $n$, with equality
when $n$ is prime.
\end{corollary}
\begin{proof}
This follows from Lemma \ref{lem:maximal ideal growth of Z x} and Corollary \ref{cor:maxsubmod A^d = maxsubmod A times 1 + n + cdots + n^ d-1}.
\end{proof}
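This corollary admits a direct numerical check, assuming (as Lemma \ref{lem:maximal ideal growth of Z x} suggests) that $\maxsubmod[p^k](\ensuremath{\mathbb{Z}}[x])$ equals the number of monic irreducible polynomials of degree $k$ over $\finitefield$, computed by M\"obius inversion. The helper names below are ours:

```python
# Check maxsubmod_n(Z[x]^d) <= n^d + ... + n, with equality for n prime.
# Assumption (from the cited lemma): maxsubmod_{p^k}(Z[x]) equals the
# number of monic irreducible polynomials of degree k over F_p.

def mobius(m):
    """Mobius function, by trial factorization (fine for small m)."""
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # m is not squarefree
            result = -result
        p += 1
    return -result if m > 1 else result

def num_irreducibles(p, k):
    """Number of monic irreducible degree-k polynomials over F_p."""
    return sum(mobius(k // a) * p**a for a in range(1, k + 1) if k % a == 0) // k

def maxsubmod_Zx_power(p, k, d):
    """maxsubmod_{p^k}(Z[x]^d) via the cyclic-power corollary."""
    n = p**k
    return num_irreducibles(p, k) * sum(n**i for i in range(d))

for p, k, d in [(2, 1, 3), (3, 1, 2), (2, 2, 2), (5, 3, 2)]:
    n = p**k
    bound = sum(n**i for i in range(1, d + 1))   # n + n^2 + ... + n^d
    assert maxsubmod_Zx_power(p, k, d) <= bound
    if k == 1:                                   # n prime: equality
        assert maxsubmod_Zx_power(p, k, d) == bound
```

For $k = 1$ there are exactly $p$ monic linear polynomials over $\finitefield$, which forces the equality at primes.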
We give another consequence of Corollary \ref{cor:maxsubmod A^d = maxsubmod A times 1 + n + cdots + n^ d-1}:
\begin{corollary}
\label{cor:maxsubmod 'finitefield x' ^d has growth type n^d / log n}
Fix $d \in \ensuremath{\mathbb{Z}}_{\geq 1}$. Then
\[
\maxsubmod[p^k](\finitefield[x]^d) = \frac{1}{k}\sum_{a|k} \mu\left(\frac{k}{a}\right)p^a (1 + p^k + p^{2k} + \cdots + p^{(d-1)k}).
\]
Thus $\maxsubmod[p^k](\finitefield[x]^d)$ has growth type $n^d/\log(n)$.
\end{corollary}
\begin{proof}
This follows from Lemma \ref{lem:mobious function - exact number of irreducibles}
and Corollary \ref{cor:maxsubmod A^d = maxsubmod A times 1 + n + cdots + n^ d-1}. (For the ``growth type,''
the reader may also want to recall Corollary \ref{cor:mmoddeg of finitefield[x]}.)
\end{proof}
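For small parameters the irreducible-polynomial count appearing in the displayed formula can be verified by brute force: for degrees $2$ and $3$, irreducibility over $\finitefield$ is equivalent to having no root in $\finitefield$. A sketch (helper names are ours):

```python
from itertools import product

def mobius(m):
    """Mobius function, by trial factorization (fine for small m)."""
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def mobius_count(p, k):
    """(1/k) sum_{a | k} mu(k/a) p^a : monic irreducibles of degree k over F_p."""
    return sum(mobius(k // a) * p**a for a in range(1, k + 1) if k % a == 0) // k

def brute_count(p, k):
    """Count monic degree-k polynomials over F_p with no root in F_p.
    For k = 2 or 3 this is exactly the number of irreducibles."""
    assert k in (2, 3)
    count = 0
    for coeffs in product(range(p), repeat=k):   # coefficients of x^0 .. x^{k-1}
        # f(x) = x^k + coeffs[k-1] x^{k-1} + ... + coeffs[0]
        if all((x**k + sum(c * x**i for i, c in enumerate(coeffs))) % p
               for x in range(p)):
            count += 1
    return count

for p in (2, 3, 5):
    for k in (2, 3):
        assert brute_count(p, k) == mobius_count(p, k)
```

For example, over $\mathbb{F}_2$ the unique irreducible quadratic is $x^2 + x + 1$, matching $\texttt{mobius\_count}(2,2) = 1$.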
Let $N$ be an $R$-module. For any prime $p$, following the notation of
Corollary \ref{cor:p^k approaches infinity in two ways - consequence on module},
we denote by $r(p)$ the $\finitefield[x]$-torsion-free rank of $N/pN$, and recall
\[
\ensuremath{\rationals N} \cong_{\ensuremath{\mathbb{Q}}[x]} \left( \bigoplus_{j=1}^{s(0)} \ensuremath{\mathbb{Q}}[x]/(a_j) \right) \oplus \ensuremath{\mathbb{Q}}[x]^{r(0)}.
\]
\begin{proposition}
\label{prop:growth type of arbitry f.g. Z[x]-module}
With the notation from the previous paragraph, let $d = s(0) + r(0)$ and $r_{\text{max}} = \max_{p} \{r(p) \}$. Then
\[
\maxsubmod(N) \text{ has growth type }
\begin{cases}
n^{d-1} & \text{if } d > r_{\text{max}} \\
n^d & \text{if } d = r_{\text{max}} = r(0) \\
n^{r_{\text{max}}}/\log(n) & \text{otherwise}
\end{cases}
\]
\end{proposition}
\begin{proof}
The basic idea is that for large $n = p^k$, either $k$ or $p$ is large (or both). We then apply
Corollary \ref{cor:p^k approaches infinity in two ways - consequence on module}. The growth type of
$\maxsubmod(N)$ will be controlled by one of two things:
\begin{enumerate}[(i)]
\item Fix $p$ such that $r(p) = r_{\text{max}}$ and let $k$ approach infinity (in $n = p^k$); or
\item keep $k$ small and send $p$ to infinity.
\end{enumerate}
We define three auxiliary functions that will simplify our proof:
\[\begin{aligned}
f_1(n) & = n^{d-1} \\
f_0(n) & = n^{r(0)} \\
g_0(n) &= n^{r_{\text{max}}}/\log(n)
\end{aligned}\]
Also, we will decompose the function $\maxsubmod(N)$ into two parts. Let $C$ be the constant given by
Corollary \ref{cor:p^k approaches infinity in two ways - consequence on module}. Define $f$ and $g$ as follows.
First, $f(n) = 0$ and $g(n) = 0$ if $n$ is not a power of a prime. For a prime power $p^k$,
\[
f(p^k) :=
\begin{cases}
\maxsubmod[p^k](N) & \text{if } p > C \\
0 & \text{otherwise}
\end{cases}
\]
\[
g(p^k) :=
\begin{cases}
\maxsubmod[p^k](N) & \text{if } p \leq C \\
0 & \text{otherwise}
\end{cases}
\]
We have then that $\maxsubmod(N) = f(n) + g(n)$ for all $n$.
\emph{Claim 1}: $g$ has growth type $g_0$.
Indeed, there are only finitely many primes $p$ for which $g(p^k) \neq 0$ for some $k$.
We then apply Corollary \ref{cor:p^k approaches infinity in two ways - consequence on module} (a)
and then Corollary~\ref{cor:maxsubmod 'finitefield x' ^d has growth type n^d / log n} for each prime. This proves Claim 1.
\emph{Claim 2}: $f$ has growth type $f_0$ if $s(0) = 0$ and growth type $f_1$ if $s(0) \geq 1$.
Case $s(0) = 0$: By our choice of $C$, for all primes $p > C$ we have
$N/pN = \finitefield[x]^{r(0)}$. Hence $\maxsubmod[p](N/pN) = \sum_{i=1}^{r(0)} p^{i}$. So
$f$ has growth type at least $f_0$. Also, since $\maxsubmod(\finitefield[x]^{r(0)}) \leq \maxsubmod(\ensuremath{\mathbb{Z}}[x]^{r(0)})$,
Corollary \ref{cor:maxsubmod 'Z x' ^ d has growth type n^d} implies that $f$ has growth type at most
$f_0$. Hence $f$ has growth type $f_0$, finishing the case $s(0) = 0$.
Case $s(0) \geq 1$:
Assume that $p > C$.
Then by Lemma \ref{lem:version1_of_global to local - from QN to N/pN}, we get
\[
N/pN = \left( \bigoplus_{j=1}^{s(0)} \finitefield[x]/(\overline{a_{j,0}}) \right) \oplus (\finitefield[x])^{r(0)}.
\]
Each of the $s(0) + r(0)$ terms in the direct sum decomposition of $N/pN$ is a quotient of the
next one (except for the last
$\finitefield[x]$, which has no term after it). Letting $A_{1,p} = \finitefield[x]/(\overline{a_{1,0}})$,
by Corollary \ref{cor:each term quotient of the next -- upper and lower bounds} we get
\[
\maxsubmod(N/pN) \geq \maxsubmod(A_{1,p})(1 + n + \cdots + n^{d-1}).
\]
Because $a_{1,0}$ is not constant, we get that for some $k$, $\maxsubmod[p^k](A_{1,p}) \geq 1$. Therefore, $f$ has
growth type at least $f_1$.
Also, we have (with explanations following)
\[\begin{aligned}
\maxsubmod(N/pN) & \leq \maxsubmod(A_{1,p} \oplus \finitefield[x]^{d-1}) \\
& \leq \maxsubmod(A_{1,p})(1 + n + \cdots + n^{d-1}) + \maxsubmod(\finitefield[x])(1+ n + \cdots + n^{d-2} ) \\
& \leq (\maxsubmod(A_{1,p})+1)(1 + n + \cdots + n^{d-1})\\
& \leq (\deg(a_{1,0})+1)(1 + n + \cdots + n^{d-1}).
\end{aligned}
\]
The first inequality is because $N/pN$ is a quotient of $A_{1,p} \oplus \finitefield[x]^{d-1}$. The second is
by Corollary \ref{cor:each term quotient of the next -- exact number of maximal submodules}. The third
is because $\maxsubmod(\finitefield[x]) \leq n$. The fourth
is because $\maxsubmod(A_{1,p}) \leq \deg(\overline{a_{1,0}}) \leq \deg(a_{1,0})$. Notice that combining
these inequalities gives a bound for $f(n)$ independent of which large prime $p$ we use. Therefore, $f$ has
growth type at most $f_1$ and therefore has growth type $f_1$. This finishes the case $s(0) \geq 1$ and proves
Claim 2.
Note that for all large $n$, one of $f_0, f_1, g_0$ will be asymptotically at least as big as the other two. Hence we just
need to decide which is biggest given the different cases in this proposition.
Suppose that $d > r_{\text{max}}$. Then $r_{\text{max}} \leq d - 1$. Hence, $g_0(n) \leq f_1(n)$ for $n \geq 2$. Further, we always
have $r(0) \leq r_{\text{max}}$. Combining this with the previous inequality gives $r(0) \leq d-1$. Therefore,
$f_0(n) \leq f_1(n)$ for all $n$. We just showed that $f_1$ is asymptotically largest among $\{f_0, f_1, g_0\}$. Since
$d > r_{\text{max}}$ implies $s(0) \geq 1$, $f$
has growth type $f_1$ by Claim 2 above. We conclude that
$\maxsubmod(N)$ has growth type $f_1(n) = n^{d-1}$.
Next, suppose that $d = r_{\text{max}} = r(0)$. Then $f_1(n) < f_0(n)$ and $g_0(n) \leq f_0(n)$
for $n \geq 2$. So $f_0(n) = n^d$ is asymptotically largest among $\{f_0, f_1, g_0\}$. We observe that
$d = r_{\text{max}}$ implies that $s(0) = 0$. Hence, Claim 2 above shows that $f$ has growth type $f_0$.
Therefore, $\maxsubmod(N)$ has growth type $f_0(n) = n^d$.
Finally, suppose that $d \leq r_{\text{max}}$ and that either $d \neq r_{\text{max}}$ or $d \neq r(0)$. Then
$f_1(n) \leq g_0(n)$ for $n \geq 2$. We show next that $r(0) < r_{\text{max}}$. Indeed,
notice that if $d \neq r_{\text{max}}$, then $d < r_{\text{max}}$ and
hence $r(0) < r_{\text{max}}$. Also, if $d \neq r(0)$, then $s(0) \geq 1$, in which case
$d = r(0) + s(0) \leq r_{\text{max}}$ implies $r(0) < r_{\text{max}}$. So whether $d \neq r_{\text{max}}$
or $d \neq r(0)$, we get $r(0) < r_{\text{max}}$. Hence, $f_0(n) < g_0(n)$ for $n \geq 2$. Combining this with the
second sentence of this paragraph, we see that $g_0$ is largest among $\{f_0, f_1, g_0\}$. So $f$ has growth type
at most $g_0$. Also, Claim 1 says that $g$ has growth type $g_0$. Therefore $\maxsubmod(N)$ has
growth type $g_0(n) = n^{r_{\text{max}}}/\log(n)$.
\end{proof}
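The trichotomy in the proposition can be restated as a small decision procedure. The sketch below (the function name is ours) takes the invariants $d = s(0) + r(0)$, $r_{\text{max}}$, and $r(0)$ and returns the growth type as a string:

```python
def growth_type(d, r_max, r0):
    """Growth type of maxsubmod_n(N) per the proposition, as a string,
    given d = s(0) + r(0), r_max = max_p r(p), and r0 = r(0).
    Assumes the inputs satisfy r0 <= r_max and r0 <= d."""
    if d > r_max:
        return f"n^{d - 1}"
    if d == r_max == r0:
        return f"n^{d}"
    return f"n^{r_max}/log(n)"

# The three regimes of the proposition:
assert growth_type(3, 2, 1) == "n^2"          # d > r_max
assert growth_type(2, 2, 2) == "n^2"          # d = r_max = r(0)
assert growth_type(2, 3, 1) == "n^3/log(n)"   # otherwise
```

Note that $d = r_{\text{max}} = r(0)$ with $s(0) = 0$ is the only way the middle case can occur, mirroring the observation in the proof.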
\section{$\ensuremath{\mathbb{Z}}[x_1, x_2, \ldots, x_\ell]$-modules, f.g.\ as abelian groups}
\label{sec:fin gen modules over polynomial ring with several variables}
Fix a positive integer $\ell$. Let $R = \ensuremath{\mathbb{Z}}[x_1, x_2, \ldots, x_\ell]$. Note that $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} R$ (which we will often
denote as $\ensuremath{\mathbb{Q}} R$)
is just $\ensuremath{\mathbb{Q}}[x_1, x_2, \ldots, x_\ell]$. The difficulty in dealing with f.g.\ $R$-modules is that we now have more than one variable.
Consequently, $\ensuremath{\mathbb{Q}}[x_1, x_2, \ldots, x_\ell]$ is not a
principal ideal domain. Thus, we do not have the nice structure theorem which was so useful to us when we had only one variable. We should
not lose heart, however, since if we restrict to $ \ensuremath{\mathbb{Z}}[x_1, x_2, \ldots, x_\ell]$-modules that are f.g.\ as abelian groups, then
we can basically summarize the action of all $\ell$ variables with a \emph{single} variable. We can then apply the structure theorem as before.
\subsection{Reducing to one variable}
Let $N = \ensuremath{\mathbb{Z}}^k$ be a $\ensuremath{\mathbb{Z}}[x_1,\ldots,x_{\ell}]$-module. Consider $\ensuremath{\mathbb{Q}} N := \ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N = \ensuremath{\mathbb{Q}}^k$.
Then for each $i$, the map $\ensuremath{\mathbb{Q}}^k \to \ensuremath{\mathbb{Q}}^k$ given by $x_i \cdot$ (i.e.\ multiplication by $x_i$)
is a linear transformation. Let $f_i$ be the minimal polynomial of $x_i \cdot$, and let $A = \ensuremath{\mathbb{Q}}[x_1,\ldots,x_{\ell}]/(f_1,\ldots,f_\ell)$.
Let $\mathcal{J}$ be the Jacobson radical of $A$. Fix polynomials $g_1,\ldots, g_s \in \ensuremath{\mathbb{Q}}[x_1,\ldots,x_{\ell}]$ such that
$\mathcal{J} = (g_1,\ldots,g_s)_{A}$.
We may assume that in fact $g_i \in \ensuremath{\mathbb{Z}}[x_1,\ldots,x_{\ell}]$ for all $i$: otherwise, scale each $g_i$ by a suitable integer to clear denominators, which does not change the ideal generated in $A$.
Lemma \ref{lem:a surjection from rationals[x] to A/Jac(A)} was shown to the author by Marcin Mazur.
\begin{lemma}
\label{lem:a surjection from rationals[x] to A/Jac(A)}
There exists a surjection $\pi_\ensuremath{\mathbb{Q}}\colon \ensuremath{\mathbb{Q}}[x] \twoheadrightarrow A/\mathcal{J}$.
\end{lemma}
\begin{proof}
Because $A$ is a finite dimensional algebra over $\ensuremath{\mathbb{Q}}$, we have that $A/\mathcal{J}$ is semisimple.\footnote{See for example,
Lemma~6.3.1 part (2) in \cite{Webb}, which states that if a module $U$ satisfies the descending chain condition on submodules, then
$U/\Rad(U)$ is semisimple. We are dealing with semisimple \emph{algebras}, but that is fine, by Proposition~0.10 in
\url{http://www.ucl.ac.uk/~ucahaya/SemisimpleModules.pdf}}
Since $A$ is commutative, $A/\mathcal{J}$ is a product of fields (each a finite extension of $\ensuremath{\mathbb{Q}}$):
\[\tag{*1}
A/\mathcal{J} \cong \prod_{j = 1}^n F_j.
\]
But not only is every number field a simple extension of $\ensuremath{\mathbb{Q}}$ (by the primitive element theorem),
but for each finite extension $E$ of $\ensuremath{\mathbb{Q}}$ we have that $\{\alpha : \ensuremath{\mathbb{Q}}(\alpha) = E \}$ is infinite.
So choose $\alpha_j$ with $F_j = \ensuremath{\mathbb{Q}}(\alpha_j)$ such that different $\alpha_j$'s have different minimal polynomials.
Let $m_j(x)$ be the minimal polynomial of $\alpha_j$. Thus we can restate (*1) as
\[\tag{*2}
A/\mathcal{J} \cong \prod_{j = 1}^n \ensuremath{\mathbb{Q}}[x]/(m_j(x)).
\]
We may then apply the Chinese remainder theorem and conclude that
\[\tag{*3}
\prod_{j = 1}^n \ensuremath{\mathbb{Q}}[x]/(m_j(x)) \cong \ensuremath{\mathbb{Q}}[x]/(m_1(x)\cdots m_n(x)).
\]
Of course, there is a surjection from $\ensuremath{\mathbb{Q}}[x]$ to $\ensuremath{\mathbb{Q}}[x]/(m_1(x)\cdots m_n(x))$. (This surjection, combined with
(*2) and (*3), finishes the proof.)
\end{proof}
For any finite set $D$ of primes (to be decided later), let
\[
B := \ensuremath{\mathbb{Z}}_D[x_1,\ldots,x_{\ell}]/(f_1,\ldots,f_\ell).
\]
Recall that $\mathcal{J} = (g_1,\ldots,g_s)_{A}$ with each $g_i \in \ensuremath{\mathbb{Z}}[x_1,\ldots,x_{\ell}]$. Let
\[
\mathcal{J}' := (g_1,\ldots,g_s)_{B}.
\]
\vbox{
\begin{lemma}
\label{lem:D exists for a commutative diagram}
With the above notation, there exists a finite $D$ with the following commutative diagram
\end{lemma}
\centerline{\xymatrix{
\ensuremath{\integers_D[x]} \ar@{^{(}->}[d]^{\iota_1} \ar@{->>}[r]^{\pi_D} &B/\mathcal{J}'\ar@{^{(}->}[d]^{\iota_2}\\
\ensuremath{\rationals[x]} \ar@{->>}[r]^{\pi_\ensuremath{\mathbb{Q}}} &A/\mathcal{J}}}
}
\begin{proof}
Let $f \in \ensuremath{\mathbb{Q}}[x_1,\ldots,x_{\ell}]$ be such that $\pi_\ensuremath{\mathbb{Q}}(x) = \bar{f}$.
We can make $D$ large enough (and yet keep it finite) such that $f \in \ensuremath{\mathbb{Z}}_D[x_1,\ldots,x_{\ell}]$. Of course, we then define
$\pi_D \colon \ensuremath{\integers_D[x]} \to B/\mathcal{J}'$ via $\pi_D(x) := \bar{f} \in B/\mathcal{J}'$.
What we need to do next is to make $D$ large enough to ensure that $\pi_D$ is surjective.
Because $\pi_\ensuremath{\mathbb{Q}}$ is surjective, we know that $\bar{f}$ is a generator for $A/\mathcal{J}$.
So for each $\bar{x}_i \in A/\mathcal{J}$,
choose $a_{i,j} \in \ensuremath{\mathbb{Q}}$ such that
\[
\bar{x}_i = \sum_{j=0}^{n_i} a_{i,j} (\bar{f})^j.
\]
Choose $D$ such that $a_{i,j} \in \ensuremath{\mathbb{Z}}_D$ for all $i$ and $j$. This ensures that $\pi_D$ is surjective.
\end{proof}
Let $N$ be the module defined at the beginning of this section, and let $D$
be as in Lemma~\ref{lem:D exists for a commutative diagram}. Let $A$ be from the paragraph before
Lemma~\ref{lem:a surjection from rationals[x] to A/Jac(A)}. Let $B$ and $\mathcal{J}'$ be from the paragraph
before Lemma~\ref{lem:D exists for a commutative diagram}. (So $B = \ensuremath{\mathbb{Z}}_D[x_1,\ldots,x_{\ell}]/(f_1,\ldots,f_\ell)$.)
We are given that $N$ is a $\ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]$-module.
So $N$ is a $\ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]/(f_1,\ldots,f_\ell)$-module.
Thus $\ensuremath{\mathbb{Z}}_DN = \ensuremath{\mathbb{Z}}_D \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N$ is a $B$-module.
\begin{lemma}
\label{lem:maximal submodules contain jrad' Z_D N}
We use the notation from the previous paragraph. Also, let $S = \ensuremath{\mathbb{Z}}_D N$.
Let $M$ be a maximal submodule of $S$.
Then $M$ contains $\mathcal{J}' S$.
\end{lemma}
\begin{proof}
$A$ is a $\ensuremath{\mathbb{Q}}$-algebra of finite dimension over $\ensuremath{\mathbb{Q}}$. Therefore $A$ is Artinian. Consequently, $\mathcal{J}$ is a nilpotent ideal of $A$.
Therefore, $\mathcal{J}'$ is a nilpotent ideal of $B$.
Suppose, for contradiction, that $M$ does not contain $\mathcal{J}' S$. Then $M + \mathcal{J}' S = S$. We show by induction that
$M + (\mathcal{J}')^k S = S$ for all $k \geq 1$: if $M + (\mathcal{J}')^k S = S$ for some $k \geq 1$, then
$S = M + \mathcal{J}' S = M + \mathcal{J}' (M + (\mathcal{J}')^k S) = M + \mathcal{J}' M + (\mathcal{J}')^{k+1} S = M + (\mathcal{J}')^{k+1} S.$
Therefore, $M + (\mathcal{J}')^n S = S$ for all $n \geq 1$. But since $\mathcal{J}'$ is a nilpotent
ideal, taking $n$ large enough gives $M = S$, a contradiction.
\end{proof}
A proof of Lemma~\ref{lem:maximal submodules contain jrad' Z_D N} was shown to the author by Marcin Mazur.
\begin{lemma}
\label{lem:Z_DN/JZ_DN - many variables to one}
Let $N$ be the module defined at the beginning of this section, and let $D$
be as in Lemma~\ref{lem:D exists for a commutative diagram}. Let $S = \ensuremath{\mathbb{Z}}_D N.$ Then $S /\mathcal{J}'S$
is a $\ensuremath{\mathbb{Z}}_D[x]$-module such that if $p \notin D$ and $k \geq 1$, then
\[
\maxsubmod[p^k](S /\mathcal{J}'S) = \maxsubmod[p^k](N).
\]
\end{lemma}
\begin{proof}
As stated in the paragraph before Lemma~\ref{lem:maximal submodules contain jrad' Z_D N}, $S$ is a $B$-module.
Hence, $S /\mathcal{J}'S$ is a $B$-module too, and so $S /\mathcal{J}'S$ is a $B/\mathcal{J}'$-module.
Therefore, $S /\mathcal{J}'S$ is a $\ensuremath{\mathbb{Z}}_D[x]$-module by
Lemma~\ref{lem:D exists for a commutative diagram}.
We have that $\ensuremath{\mathbb{Z}}_D[x]$-submodules of $S /\mathcal{J}'S$ are the same as
$B/\mathcal{J}'$-submodules of $S /\mathcal{J}'S$, and these are the same as $B$-submodules of $S /\mathcal{J}'S$.
Also, $B$-submodules of $S /\mathcal{J}'S$ are in one-to-one correspondence with
$B$-submodules of $S$ that contain $\mathcal{J}'S$, and by Lemma~\ref{lem:maximal submodules contain jrad' Z_D N},
this includes all maximal submodules.
Next, $B$-submodules of $S$ of finite index are in one-to-one correspondence with
$\ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]/(f_1,\ldots,f_\ell)$-submodules of $N$ of index relatively prime to every prime in $D$.
Finally, $\ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]/(f_1,\ldots,f_\ell)$-submodules of $N$ are in one-to-one correspondence with
$\ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]$-submodules of $N$.
Therefore, if $p \notin D$ and $k \geq 1$, then
\[
\maxsubmod[p^k](S /\mathcal{J}'S) = \maxsubmod[p^k](N).
\]
\end{proof}
\begin{lemma}
\label{lem:several variables to one, with a nice direct sum too}
Let $N$ be a $\ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]$-module which is f.g.\ as an abelian group. There exists a finite $D$ and a
module, denoted $\tilde{N}_{D}$, such that
\[
\tilde{N}_{D} = \bigoplus_{i = 1}^{d_0} \ensuremath{\mathbb{Z}}_{D}[x]/(a_i),
\]
for some $a_1 \mid a_2 \mid \cdots \mid a_{d_0}$ ($a_1$ not a unit) and such that for all large $n$,
\[\tag{*}
\maxsubmod(N) = \maxsubmod(\tilde{N}_{D}).
\]
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:mod out by finite submodule -- maximal submodule growth is unchanged}, we may mod out by the finite submodule of
$N$ consisting of its $\ensuremath{\mathbb{Z}}$-torsion. So assume $N = \ensuremath{\mathbb{Z}}^k$.
By Lemma~\ref{lem:Z_DN/JZ_DN - many variables to one}, there exists $N_{D_0}$ with
\[
N_{D_0} = \ensuremath{\mathbb{Z}}_{D_0} N /\mathcal{J}'\ensuremath{\mathbb{Z}}_{D_0}N
\]
such that $\maxsubmod(N_{D_0}) = \maxsubmod(N)$ for all $n$.
By Corollary~\ref{cor:decomp_of_localization}, there exists $D \supseteq D_0$ such that if we localize $N_{D_0}$ by $D$, then
we can write the resulting module as a direct sum.
The final statement in the present lemma follows from
Lemma~\ref{lem:Z_DN/JZ_DN - many variables to one} together with a fact about localization. Since we localize at a larger $D$,
in order for equation (*) to hold, we want the index $n = p^j$ to be
such that $p \notin D$. (If $j$ is large, $\maxsubmod(N) = 0 = \maxsubmod(\tilde{N}_{D})$.)
\end{proof}
\begin{lemma}
\label{lem:size of QR generating set for QN is d_0}
Let $N$, $\tilde{N}_{D}$, and $d_0$ be as in Lemma \ref{lem:several variables to one, with a nice direct sum too}. Let
$R = \ensuremath{\mathbb{Z}}[x_1,\ldots,x_\ell]$. Then
\[
d_{\ensuremath{\mathbb{Q}} R}(\ensuremath{\rationals N}) = d_{\ensuremath{\mathbb{Q}} R}(\ensuremath{\mathbb{Q}} \tilde{N}_D) = d_0.
\]
\end{lemma}
\begin{proof}
For ease of notation let $N' = \tilde{N}_D$. We have that $\ensuremath{\mathbb{Q}} N' = \bigoplus_{i = 1}^{d_0} \ensuremath{\mathbb{Q}}[x]/(a_i)$
for some $a_1 \mid a_2 \mid \cdots \mid a_{d_0}$ ($a_1$ not a unit).
Therefore $d_{\ensuremath{\mathbb{Q}} R}(\ensuremath{\mathbb{Q}} N') = d_0$.
Reviewing the proof of Lemma \ref{lem:several variables to one, with a nice direct sum too} we have that
$N' = \ensuremath{\mathbb{Z}}_D N/\mathcal{J}'\ensuremath{\mathbb{Z}}_D N$ for some finite set of primes $D$. Recall $A$ and $\mathcal{J}$ from the paragraph
before Lemma~\ref{lem:a surjection from rationals[x] to A/Jac(A)}. We have that
$\ensuremath{\mathbb{Q}} N' = \ensuremath{\mathbb{Q}} N/\mathcal{J}' \ensuremath{\mathbb{Q}} N = \ensuremath{\mathbb{Q}} N/\mathcal{J} \ensuremath{\mathbb{Q}} N.$ ($\ensuremath{\mathbb{Q}} N$ is an $A$-module. So
$\ensuremath{\mathbb{Q}} N'$ is too.) We have $d_{\ensuremath{\mathbb{Q}} R}(\ensuremath{\mathbb{Q}} N) = d_A(\ensuremath{\mathbb{Q}} N)$. Also,
$d_{\ensuremath{\mathbb{Q}} R}(\ensuremath{\mathbb{Q}} N') = d_{\ensuremath{\mathbb{Q}} R}(\ensuremath{\mathbb{Q}} N/\mathcal{J} \ensuremath{\mathbb{Q}} N ) = d_A(\ensuremath{\mathbb{Q}} N/\mathcal{J} \ensuremath{\mathbb{Q}} N )$.
So all we need is to show that $d_A(\ensuremath{\mathbb{Q}} N) = d_A(\ensuremath{\mathbb{Q}} N/ \mathcal{J} \ensuremath{\mathbb{Q}} N)$, but this follows from
Nakayama's Lemma.
\end{proof}
\subsection{Isolating the `trivial' part}
In order to state a simple formula for the maximal subgroup growth of groups of the form $\ensuremath{\mathbb{Z}}^k \rtimes \ensuremath{\mathbb{Z}}^\ell$
(and more general semidirect products),
we introduce some notation.\footnote{As it turns out, we will also use this notation for modules arising from virtually abelian groups.}
\begin{definition}
\label{def:mtriv and mnontriv}
Let $G$ be a group (or commutative monoid) and $N$ a f.g.\ $G$-module.
\begin{enumerate}[(a)]
\item $\mtriv(N)$ denotes the number of index $n$ maximal submodules $M$ of $N$ such that the action of
$G$ on $N/M$ is trivial.\footnote{By this, we mean that $g \cdot (n + M) = n + M$ for all $n \in N$ and all $g \in G$.}
\item $\mnontriv(N)$ denotes the number of index $n$ maximal submodules $M$ of $N$ such that the action of
$G$ on $N/M$ is non-trivial.\footnote{By this, we mean that there exists $g \in G$ and $n \in N$ such that $g \cdot (n+M) \neq n + M$.}
\end{enumerate}
\end{definition}
Note that of course, $\mtriv(N) + \mnontriv(N) = \maxsubmod(N)$.
Before continuing, it may be good to point out what $G$ usually is. If $N$ arises as a normal subgroup of a metabelian group, such as the
$\ensuremath{\mathbb{Z}}^k$ in
$\ensuremath{\mathbb{Z}}^k \rtimes \ensuremath{\mathbb{Z}}^\ell$, then $G = \ensuremath{\mathbb{Z}}^\ell$. Similarly, if $N$ is a $\ensuremath{\mathbb{Z}}[x]$-module, then the monoid $G$ is
$\langle x \rangle = \{ x, x^2, x^3, \ldots \}$.
Also, though not used in this paper, if $N$ is an abelian normal subgroup of finite index in a virtually abelian group, then
the $G$ in Definition~\ref{def:mtriv and mnontriv} would be the finite quotient. With this in mind, the reader is encouraged to at least read the statements of
Lemmas~\ref{lem:abelian by free abelian - num of maximal subgroups} and \ref{lem:abelian by f.g. abelian - num of maximal subgroups}
before continuing this section.
\begin{lemma}
\label{lem:mtriv(N) = maxsubmod(N/IN) = maxsubgr(N/IN)}
Let $N$ be a $\ensuremath{\mathbb{Z}}[x_1, x_2, \ldots, x_\ell]$-module which is f.g.\ as a $\ensuremath{\mathbb{Z}}$-module. Let
$I = (x_1 - 1, x_2 - 1, \ldots, x_\ell - 1)$. Then
\[
\mtriv(N) = \maxsubmod(N/IN) = \maxsubgr(N/IN).
\]
\end{lemma}
\begin{proof}
Let $M$ be a maximal submodule of $N$ such that $x_i(n + M) = n + M$ for all $i$ and all $n \in N$. This means that
$(x_i - 1) n \in M$ for all $i$ and $n$. In other words $IN \subseteq M$. Thus $M$ is counted in the term $\mtriv(N)$ iff
$IN \subseteq M$. Therefore, $\mtriv(N) = \maxsubmod(N/IN)$.
The equality $\maxsubmod(N/IN) = \maxsubgr(N/IN)$ follows because the trivial action by all the $x_i$'s implies that
maximal submodules of $N/IN$ are the same thing as maximal subgroups of $N/IN$.
\end{proof}
\begin{corollary}
\label{cor:mtriv(N) is (n^t-1)/(n-1) for n prime}
Let $N$ and $I$ be as in Lemma~\ref{lem:mtriv(N) = maxsubmod(N/IN) = maxsubgr(N/IN)}. Let $t$ be the torsion-free rank of
(the abelian group) $N/IN$. Then for all large $n$,
\[
\mtriv(N) = \maxsubmod(N/IN) =
\begin{cases}
\frac{n^t - 1}{n - 1} &\text{ if $n$ is prime}\\
0 &\text{ otherwise.}
\end{cases}
\]
\end{corollary}
\begin{proof}
This follows from Lemma~\ref{lem:mtriv(N) = maxsubmod(N/IN) = maxsubgr(N/IN)} together with two more facts.
First, by Lemma~\ref{lem:mod out by finite submodule -- maximal submodule growth is unchanged}, we may mod out
by the $\ensuremath{\mathbb{Z}}$-torsion part of $N/IN$ to get
\[
\maxsubgr(N/IN) = \maxsubgr(\ensuremath{\mathbb{Z}}^t) \text{\quad for all large } n.
\]
Second,
\[
\maxsubgr(\ensuremath{\mathbb{Z}}^t) =
\begin{cases}
\frac{n^t - 1}{n - 1} &\text{ if $n$ is prime}\\
0 &\text{ otherwise.}
\end{cases}
\]
\end{proof}
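The second fact can be verified by brute force: index-$p$ maximal subgroups of $\ensuremath{\mathbb{Z}}^t$ correspond to hyperplanes in $\finitefield^t$, i.e.\ kernels of nonzero linear functionals, with proportional functionals giving the same kernel. A sketch (helper names are ours):

```python
from itertools import product

def count_index_p_subgroups(p, t):
    """Count index-p subgroups of Z^t, i.e. hyperplanes in F_p^t,
    realized as kernels of nonzero functionals and deduplicated."""
    kernels = set()
    for a in product(range(p), repeat=t):
        if any(a):                                   # nonzero functional
            ker = frozenset(v for v in product(range(p), repeat=t)
                            if sum(ai * vi for ai, vi in zip(a, v)) % p == 0)
            kernels.add(ker)
    return len(kernels)

# Agrees with (p^t - 1)/(p - 1) for small prime p and rank t:
for p in (2, 3, 5):
    for t in (1, 2, 3):
        assert count_index_p_subgroups(p, t) == (p**t - 1) // (p - 1)
```

For instance, $\ensuremath{\mathbb{Z}}^2$ has $(3^2-1)/(3-1) = 4$ subgroups of index $3$, as the loop confirms.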
\section{Certain metabelian groups}
\label{sec:certain metabelian groups}
\subsection{Semidirect products}
Except for part of Theorem~\ref{thm:(f.g. abelian) by (f.g. abelian)} and two lemmas, the groups that appear in this section are
semidirect products.
In the following lemma, $N$ is a module over the group ring $\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}]$, i.e.\ the Laurent polynomial ring $\ensuremath{\mathbb{Z}}[x,x^{-1}]$,
where multiplication by $x$ (in the module) is conjugation (in $G$) by a chosen generator $x$ of $\ensuremath{\mathbb{Z}}$.
Recall that $\maxsubmod(N)$ denotes the number of maximal submodules of a module $N$ of index $n$.
\begin{lemma}
\label{lem:abelian by infinite cyclic - exact counting}
Let $G = N \rtimes \ensuremath{\mathbb{Z}}$ be a f.g.\ group with $N$ abelian. Then
\[
\maxsubgr(G) = \maxsubgr(\ensuremath{\mathbb{Z}}) + n \cdot \maxsubmod(N).
\]
\end{lemma}
\begin{proof}
This follows immediately upon combining Lemma \ref{lem:abelian by something - counting maximal subgroups by derivations}
and Lemma \ref{lem:Shalev's counting derivations out of cyclic groups}.
\end{proof}
Notes: (1) For any group $G$, if there is a group $N \ensuremath{\trianglelefteq} G$ such that $G/N \cong \ensuremath{\mathbb{Z}}$,
then the extension splits, as
in the hypothesis of Lemma \ref{lem:abelian by infinite cyclic - exact counting}. (2) The function
$\maxsubgr(\ensuremath{\mathbb{Z}})$ is the characteristic function of the prime numbers and hence is always either 1 or 0. As a result,
$\maxsubgr(\ensuremath{\mathbb{Z}})$ does not affect the growth rate of $\maxsubgr(N \rtimes \ensuremath{\mathbb{Z}})$.
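As a hedged sanity check of the lemma's formula (our own illustration), consider the simplest split case $G = \ensuremath{\mathbb{Z}} \times \ensuremath{\mathbb{Z}}$, i.e., $N = \ensuremath{\mathbb{Z}}$ with trivial conjugation action, so that every subgroup of $N$ is a submodule. For $n = p$ prime the formula predicts $\maxsubgr[p](G) = 1 + p \cdot 1 = p + 1$, which we can confirm by enumerating index-$p$ sublattices of $\ensuremath{\mathbb{Z}}^2$ via Hermite normal forms.

```python
# Index-p sublattices of Z^2 correspond to Hermite normal forms
# [[a, b], [0, d]] with a*d = p and 0 <= b < d; we count them and
# compare against maxsubgr_p(Z) + p * maxsubmod_p(N) = 1 + p.
def index_p_subgroups_of_Z2(p):
    count = 0
    for a in range(1, p + 1):
        if p % a:
            continue
        d = p // a
        count += d  # choices of b with 0 <= b < d
    return count

for p in (2, 3, 5, 7, 11):
    m_Z = 1           # maxsubgr_p(Z): the unique subgroup pZ
    maxsubmod = 1     # pZ is the unique index-p submodule of N = Z
    assert index_p_subgroups_of_Z2(p) == m_Z + p * maxsubmod  # = p + 1
```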
\begin{theorem}
\label{thm:maximal subgroup growth of (f.g. abelian) by Z}
Let $G = N \rtimes \ensuremath{\mathbb{Z}}$, with $N$ f.g.\ as an abelian group. Let
\[
\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} N = \bigoplus_{j=1}^d \ensuremath{\mathbb{Q}}[x]/(a_j),
\]
where $a_1 | a_2 | \cdots | a_d$ as provided by the structure theorem (so with $a_1$ not a unit). (So $d = d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} N)$.)
Also, let $\rho_1$ be the number of
(distinct) roots of $a_1$ in $\ensuremath{\mathbb{C}}$. Then
\[\begin{aligned}
\maxsubgr(G) & \leq \rho_1 n^{d} + O(n^{d-1}) & \text{for all large $n$, and}\\
\maxsubgr(G) & \geq \rho_1n^{d} & \text{for infinitely many $n$.}
\end{aligned}
\]
\end{theorem}
\begin{proof}
This follows from Theorem~\ref{thm:exact growth -with coefficient- of Z x module which is f.g. abelian} together with
Lemma~\ref{lem:abelian by infinite cyclic - exact counting}.
\end{proof}
\begin{corollary}
\label{cor:mdeg(N rtimes A) = d(QN)}
Suppose that $N$ is f.g.\ as an abelian group. Then
\[
\mdeg(N \rtimes \ensuremath{\mathbb{Z}}) = d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} N).
\]
\end{corollary}
\begin{proof}
This follows from Theorem~\ref{thm:maximal subgroup growth of (f.g. abelian) by Z}.
\end{proof}
\begin{corollary}
\label{cor:max subgr growth of Z^k semidir_A Z}
Let $G = \ensuremath{\mathbb{Z}}^k \rtimes_A \ensuremath{\mathbb{Z}}$, where $A \in GL(k,\ensuremath{\mathbb{Z}})$. Let
$b$ = the number of
blocks in the rational canonical form of $A$, and let
$\rho_1$ = the number of distinct roots (in $\ensuremath{\mathbb{C}}$) of the characteristic polynomial of the smallest block. Then
\[\begin{aligned}
\maxsubgr(G) & \leq \rho_1 n^{b} + O(n^{b-1}) & \text{for all large $n$, and}\\
\maxsubgr(G) & \geq \rho_1 n^{b} & \text{for infinitely many $n$.}
\end{aligned}
\]
\end{corollary}
\begin{proof}
This follows from Theorem~\ref{thm:maximal subgroup growth of (f.g. abelian) by Z}, since the rational canonical form of $A$
exhibits $\ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} \ensuremath{\mathbb{Z}}^k$ as a direct sum of $b$ cyclic $\ensuremath{\mathbb{Q}}[x]$-modules, so that $b = d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} N)$.
\end{proof}
We next give a few examples of Corollary~\ref{cor:mdeg(N rtimes A) = d(QN)} in the form of another
corollary.
\begin{corollary}
\label{cor:Z rtimes_sigma Z - for arbitrary permutation - mdeg = number of cylces}
Let $\sigma \in \Sym(k)$, and suppose $\sigma$ has $c$ cycles. Then
\[
\mdeg(\ensuremath{\mathbb{Z}}^k \rtimes_\sigma \ensuremath{\mathbb{Z}}) = c.
\]
\end{corollary}
\begin{proof}
By Corollary~\ref{cor:mdeg(N rtimes A) = d(QN)}, all we need to show is that $d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} N) = c$. In the proof, as usual,
we will denote the abelian normal subgroup $\ensuremath{\mathbb{Z}}^k$ by $N$.
We first show that $d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} N) \geq c$. Indeed, let the $c$ cycles of $\sigma$ be of lengths $n_1$,\ldots,$n_c$. Then
\[\tag{*}
\ensuremath{\mathbb{Q}} N \cong \bigoplus_{i=1}^{c} \ensuremath{\mathbb{Q}}[x]/(x^{n_i}-1).
\]
Because $x-1$ divides $x^{n_i} - 1$, we notice that there exists a submodule of $\ensuremath{\mathbb{Q}}^k$ of the form $\ensuremath{\mathbb{Q}}^c$, where
the action of $x$ on $\ensuremath{\mathbb{Q}}^c$ is trivial. Hence $d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} N) \geq \dim_\ensuremath{\mathbb{Q}}(\ensuremath{\mathbb{Q}}^c) = c$.
To see that $d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} N) \leq c$, all we need to note is that (*) says $\ensuremath{\mathbb{Q}} N$ is a direct sum of
$c$ cyclic modules.
\end{proof}
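The lower bound in this proof rests on a $c$-dimensional subspace of $\ensuremath{\mathbb{Q}}^k$ on which $x$ acts trivially. A stdlib-only numerical check (our own illustration): the fixed space of the permutation matrix of $\sigma$, i.e., the kernel of $P - I$ over $\ensuremath{\mathbb{Q}}$, has dimension exactly the number of cycles of $\sigma$. The permutations below are illustrative choices.

```python
# dim ker(P - I) over Q equals the number of cycles of the permutation,
# where P e_j = e_{perm[j]}.  Rank is computed by exact Gaussian
# elimination with fractions.
from fractions import Fraction

def fixed_space_dim(perm):
    k = len(perm)
    # rows of P - I
    M = [[Fraction((1 if perm[j] == i else 0) - (1 if i == j else 0))
          for j in range(k)] for i in range(k)]
    rank, row = 0, 0
    for col in range(k):
        piv = next((r for r in range(row, k) if M[r][col]), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        for r in range(k):
            if r != row and M[r][col]:
                f = M[r][col] / M[row][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[row])]
        rank, row = rank + 1, row + 1
    return k - rank

def cycle_count(perm):
    seen, c = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            c += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return c

# e.g. cycle type (3, 2), the identity on 3 points, and two 2-cycles
for perm in ([1, 2, 0, 4, 3], [0, 1, 2], [1, 0, 3, 2]):
    assert fixed_space_dim(perm) == cycle_count(perm)
```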
\begin{corollary}
For any $k \geq 1$ there exists a f.g.\ group $G$ and a finite index subgroup $H$ such that $\mdeg(G) = 1$ while $\mdeg(H) = k$.
\end{corollary}
\begin{proof}
Let $G = \ensuremath{\mathbb{Z}}^k \rtimes_\sigma \ensuremath{\mathbb{Z}}$, where $\sigma \in \Sym(k)$ is a $k$-cycle. Let
$H = \ensuremath{\mathbb{Z}}^k \rtimes k\ensuremath{\mathbb{Z}}$, which equals $\ensuremath{\mathbb{Z}}^k \times \ensuremath{\mathbb{Z}}$ since $\sigma^k$ is trivial. Then $\mdeg(H) = \mdeg(\ensuremath{\mathbb{Z}}^{k+1}) = k$, but
by Corollary~\ref{cor:Z rtimes_sigma Z - for arbitrary permutation - mdeg = number of cylces}, $\mdeg(G) = 1$.
\end{proof}
Note that this is in stark contrast to what happens when working with \emph{all} subgroups of a group. Theorem 1.1 from
Shalev's \cite{Shalev_On_the_degree} says that if $G$ is a f.g.\ group with $H \leq_f G$, then $\deg(G) \leq \deg(H) +1$.
We would like to give a perhaps more group theoretic interpretation of
Corollary~\ref{cor:mdeg(N rtimes A) = d(QN)}. With this in mind, we make an observation
on Corollary \ref{cor:Z rtimes_sigma Z - for arbitrary permutation - mdeg = number of cylces}.
When considering the group $\ensuremath{\mathbb{Z}}^k \rtimes_\sigma \ensuremath{\mathbb{Z}}$, it is easy to find a set of $c$ elements which normally generate
$\ensuremath{\mathbb{Z}}^k$ (equivalently, which generate $\ensuremath{\mathbb{Z}}^k$ as a $\ensuremath{\mathbb{Z}}[x,x^{-1}]$-module). Indeed, $\sigma$ partitions $[k]$ into $c$ cycles. Let $i_1,\ldots,i_c$ be
a complete set of representatives of the cycles. We already have a basis $e_1,\ldots,e_k$ of $\ensuremath{\mathbb{Q}}^k = \ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} \ensuremath{\mathbb{Z}}^k$ fixed
(which in fact generates $\ensuremath{\mathbb{Z}}^k$ as a $\ensuremath{\mathbb{Z}}$-module).
We conclude
that the elements $e_{i_1},\ldots,e_{i_c}$ normally generate $\ensuremath{\mathbb{Z}}^k$.
\begin{corollary}
Let $G = N \rtimes \ensuremath{\mathbb{Z}}$, with $N$ f.g.\ as an abelian group.
Let $n$ be the minimal number of elements of $N$ whose \textbf{n}ormal closure in $G$ has finite index in $N$. Then
\[
\mdeg(G) = n.
\]
\end{corollary}
\begin{proof}
Let $d = d_{\ensuremath{\mathbb{Q}}[x]}(\ensuremath{\mathbb{Q}} N)$.
Let $B = \{a_1, \ldots, a_k \} \subseteq N$. The normal closure of $B$ (in $G$), denoted
$\overline{\langle B \rangle}_G$, is the \emph{same set} as the $\ensuremath{\mathbb{Z}}[x]$-submodule of $N$ that $B$ generates.
Let $N_0 = \overline{\langle B \rangle}_G$. If $N_0 \leq_f N$, then $\ensuremath{\mathbb{Q}} N_0 = \ensuremath{\mathbb{Q}} N$. Therefore,
$n \geq d$.
To show that $n \leq d$, we will just point out how every set of
$\ensuremath{\mathbb{Q}}[x]$-generators of $\ensuremath{\mathbb{Q}} N$ normally generates a finite index subgroup of $N$.
Indeed, suppose that $a_1,\ldots,a_d \in \ensuremath{\mathbb{Q}} N$ is a
$\ensuremath{\mathbb{Q}}[x]$-generating set for $\ensuremath{\mathbb{Q}} N$.
Let $N_0$ be the $\ensuremath{\mathbb{Z}}[x]$-span of $a_1,\ldots,a_d$. Then $\ensuremath{\mathbb{Q}} N_0 = \ensuremath{\mathbb{Q}} N$, and so by
Corollary~\ref{cor:two full lattices in Q^d - one contained in the other => the index is finite} we conclude that
$N_0 \leq_f N$.
\end{proof}
We next consider groups of the form $N \rtimes \ensuremath{\mathbb{Z}}$, where we do \emph{not} assume that $N$ is f.g.\ as an abelian group.
\begin{proposition}
\label{prop:growth type of (arbitrary abelian) by Z}
Suppose we have a f.g.\ group $G = N \rtimes \ensuremath{\mathbb{Z}}$ with $N$ abelian. Using the notation of
Proposition~\ref{prop:growth type of arbitry f.g. Z[x]-module}, we have
\[
\maxsubgr(G) \text{ has growth type }
\begin{cases}
n^{d} & \text{if } d > r_{\text{max}} \\
n^{d+1} & \text{if } d = r_{\text{max}} = r(0) \\
n^{r_{\text{max}} + 1}/\log(n) & \text{otherwise.}
\end{cases}
\]
\end{proposition}
\begin{proof}
This follows from Proposition~\ref{prop:growth type of arbitry f.g. Z[x]-module}, together with
Lemma~\ref{lem:abelian by infinite cyclic - exact counting}.
\end{proof}
Recall $\mtriv(N)$ and $\mnontriv(N)$ from Definition~\ref{def:mtriv and mnontriv}.
\begin{lemma}
\label{lem:abelian by free abelian - num of maximal subgroups}
Let $G$ be a f.g.\ group with abelian $N \ensuremath{\trianglelefteq} G$ such that $G/N \cong \ensuremath{\mathbb{Z}}^\ell$. Then
\[
\maxsubgr(G) \leq
\begin{cases}
\maxsubgr[p](\ensuremath{\mathbb{Z}}^\ell) + p^\ell \cdot \mtriv[p](N) + p \cdot \mnontriv[p](N) & \text{if $n = p$ is prime} \\
n \cdot \mnontriv(N) & \text{if $n$ is not prime},
\end{cases}
\]
with equality if $G = N \rtimes \ensuremath{\mathbb{Z}}^\ell$.
\end{lemma}
\begin{proof}
Let $R = \ensuremath{\mathbb{Z}}[x_1,x_2,\ldots,x_\ell]$. Lemma \ref{lem:simple module - one fixed point implies trivial and has order a prime}
tells us that any trivial, simple quotient of the $R$-module $N$ has prime order. In other words, $\mtriv(N) = 0$ if $n$ is not
prime. Also, we know that $\maxsubgr(\ensuremath{\mathbb{Z}}^\ell) = 0$ if $n$ is not prime. So what we want to show is that for all $n$,
\[
\maxsubgr(G) \leq \maxsubgr(\ensuremath{\mathbb{Z}}^\ell) + n^\ell \cdot \mtriv(N) + n \cdot \mnontriv(N)
\]
with equality if $G = N \rtimes \ensuremath{\mathbb{Z}}^\ell$. Both the inequality and equality follow from
Lemma \ref{lem:abelian by something - counting maximal subgroups by derivations}, by splitting the summation
and applying Lemma~\ref{lem:counting derivations from free abelian groups to simple modules} to count the derivations.
\end{proof}
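As a hedged arithmetic check of the displayed formula (our own illustration), take the simplest split case $G = \ensuremath{\mathbb{Z}} \times \ensuremath{\mathbb{Z}}^\ell$, i.e., $N = \ensuremath{\mathbb{Z}}$ with trivial action, so $\mtriv[p](N) = 1$ and $\mnontriv[p](N) = 0$. Both sides should then equal the number of index-$p$ subgroups of $\ensuremath{\mathbb{Z}}^{\ell+1}$, namely $(p^{\ell+1}-1)/(p-1)$.

```python
# Check the identity (p^(l+1) - 1)/(p - 1) = (p^l - 1)/(p - 1) + p^l,
# which is the lemma's equality for G = Z x Z^l with trivial action.
def m_p_free_abelian(p, rank):
    # number of index-p subgroups of Z^rank (hyperplanes of F_p^rank)
    return (p**rank - 1) // (p - 1)

for p in (2, 3, 5, 7):
    for l in (1, 2, 3):
        lhs = m_p_free_abelian(p, l + 1)            # maxsubgr_p(G)
        mtriv, mnontriv = 1, 0                      # N = Z, trivial action
        rhs = m_p_free_abelian(p, l) + p**l * mtriv + p * mnontriv
        assert lhs == rhs
```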
\begin{lemma}
\label{lem:abelian by f.g. abelian - num of maximal subgroups}
Let $G$ be a f.g.\ group with abelian $N \ensuremath{\trianglelefteq} G$ such that $G/N \cong A$, for some abelian $A$ of torsion-free rank $\ell$. Then
\[\tag{*}
\maxsubgr(G) \leq
\maxsubgr(A) + \lvert \Hom(A, \ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}})\rvert \cdot \mtriv(N) + n \cdot \mnontriv(N)
\]
which for large $n$ equals
\[
\maxsubgr(\ensuremath{\mathbb{Z}}^\ell) + n^\ell \cdot \mtriv(N) + n \cdot \mnontriv(N).
\]
And if $G = N \rtimes A$, then (*) is an equality.
\end{lemma}
\begin{proof}[Sketch of proof]
This is extremely similar to Lemma \ref{lem:abelian by free abelian - num of maximal subgroups}. Of course, we use here
Lemma~\ref{lem:counting derivations from f.g. abelian groups to simple modules} instead of
Lemma~\ref{lem:counting derivations from free abelian groups to simple modules} for the $n \cdot \mnontriv(N)$ term.
\end{proof}
\begin{theorem}
\label{thm:(f.g. abelian) by (f.g. abelian)}
Let $G$ be a group with f.g.\ abelian normal subgroup $N$. Suppose $G/N$ is an abelian, $\ell_0$-generated group
of torsion-free rank $\ell$. So $N$ is a $\ensuremath{\mathbb{Z}}[x_1,\ldots,x_{\ell_0}]$-module.
Let $I = (x_1 - 1, x_2 - 1, \ldots, x_{\ell_0} - 1)_{\ensuremath{\mathbb{Z}}[x_1,\ldots,x_{\ell_0}]}$. Let $t$ be
the torsion-free rank of (the abelian group) $N/IN$, and let $d = d_{\ensuremath{\mathbb{Q}} R}(\ensuremath{\rationals N})$.
Then
\[
\mdeg(G) \leq \max \{\ell + t - 1, d \},
\]
with equality if both $G \cong N \rtimes G/N$ and $\ell \geq 1$.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:abelian by f.g. abelian - num of maximal subgroups}, for large $n$,
\[\tag{*1}
\maxsubgr(G) \leq \maxsubgr(\ensuremath{\mathbb{Z}}^\ell) + n^\ell \cdot \mtriv(N) + n \cdot \mnontriv(N)
\]
with equality if $G \cong N \rtimes G/N$. To prove this theorem, we will just show that
\[\tag{*2}
\deg(\maxsubgr(\ensuremath{\mathbb{Z}}^\ell) + n^\ell \cdot \mtriv(N) + n \cdot \mnontriv(N)) \leq \max \{\ell + t - 1, d \},
\]
with equality if $\ell \geq 1$.
First, note that $\ell +t - 1$
equals $\deg(\maxsubgr(\ensuremath{\mathbb{Z}}^\ell))$ if $t = 0$ and $\deg(n^\ell \cdot \mtriv(N))$ if $t \neq 0$.
If we could show that
$d$ equals $\deg(n \cdot \mnontriv(N))$ we would be practically done, but this is not quite the case.
Next, note that $\deg(\maxsubgr(\ensuremath{\mathbb{Z}}^\ell)) = \mdeg(\ensuremath{\mathbb{Z}}^\ell) = \ell -1$.
Next, note that
$\deg(\maxsubmod(N)) = \mmoddeg(N)$, which we would like to show is $d - 1$. (This, together with (*1), is the heart of what separates the present theorem
from Corollary~\ref{cor:mdeg(N rtimes A) = d(QN)}.)
Let $\tilde{N}_D$ and $d_0$ be as in
Lemma~\ref{lem:several variables to one, with a nice direct sum too}; by this lemma, $\maxsubmod(N) = \maxsubmod(\tilde{N}_{D})$ for all large $n$.
So $\deg(\maxsubmod(N)) = \deg(\maxsubmod(\tilde{N}_D))$. By Proposition~\ref{prop:mdeg(N) -actually tilde{m}deg(N)}, applied to the module
$\tilde{N}_D$, we get that $\deg(\maxsubmod(\tilde{N}_D)) = d_0 - 1$. And by Lemma~\ref{lem:size of QR generating set for QN is d_0},
$d_0 = d_{\ensuremath{\mathbb{Q}} R}(\ensuremath{\rationals N})$ (which is $d$). Therefore, we have shown that $\mmoddeg(N) = \deg(\maxsubmod(N)) = d-1$.
Suppose $t = 0$. Then $\mtriv(N) = 0$ for all large $n$. Hence $\mnontriv(N) = \maxsubmod(N)$ for large $n$. The previous two
sentences (together with (*1)) imply that
for large $n$,
\[
\maxsubgr(\ensuremath{\mathbb{Z}}^\ell) + n^\ell \cdot \mtriv(N) + n \cdot \mnontriv(N) = \maxsubgr(\ensuremath{\mathbb{Z}}^\ell) + n \cdot \maxsubmod(N).
\]
Recall $t = 0$. We are done by Lemma~\ref{lem:degree of a sum is the max of the degrees}
since we already noted $\deg(\maxsubgr(\ensuremath{\mathbb{Z}}^\ell)) = \ell -1$ and since $\deg(n \cdot \maxsubmod(N)) = 1 + (d - 1) = d$.
For the rest of the proof, suppose $t \neq 0$. Note that Corollary~\ref{cor:mtriv(N) is (n^t-1)/(n-1) for n prime} implies that
$\deg(\mtriv(N)) = t-1$. Hence $\deg(n^\ell \cdot \mtriv(N)) = \ell + t - 1 $. Therefore, by
Lemma~\ref{lem:degree of a sum is the max of the degrees},
\[
\deg(\maxsubgr(\ensuremath{\mathbb{Z}}^\ell) + n^\ell \cdot \mtriv(N)) = \deg(n^\ell \cdot \mtriv(N)) = \ell + t - 1.
\]
Therefore by Lemma~\ref{lem:degree of a sum is the max of the degrees} again,
$\deg(\maxsubgr(\ensuremath{\mathbb{Z}}^\ell) + n^\ell \cdot \mtriv(N) + n \cdot \mnontriv(N)) = $
\[ \tag{*3}
\begin{aligned}
& \deg(n^\ell \cdot \mtriv(N) + n \cdot \mnontriv(N)) \\
&= \max\{\deg(n^\ell \cdot \mtriv(N)), \deg(n \cdot \mnontriv(N)) \}\\
&= \max\{\ell + t - 1, \deg(n \cdot \mnontriv(N)) \},
\end{aligned}
\]
which is bounded above by $ \max\{\ell + t - 1, d \}$ because $\mnontriv(N) \leq \maxsubmod(N)$ implies that
$\deg(n \cdot \mnontriv(N)) \leq \deg(n \cdot \maxsubmod(N)) = 1 +(d-1) = d$. This proves (*2). So to get an equality in (*2), assume
$\ell \geq 1$.
Because $\maxsubmod(N) = \mtriv(N) + \mnontriv(N)$, we know (by Lemma~\ref{lem:degree of a sum is the max of the degrees}) that
\[
\deg(\maxsubmod(N)) = \deg(\mtriv(N)) \text{\quad or}
\]
\[
\deg(\maxsubmod(N)) = \deg(\mnontriv(N)).
\]
\noindent \emph{Case 1.} Assume $\deg(\maxsubmod(N)) = \deg(\mtriv(N))$.
Then
\[
\deg(\mnontriv(N)) \leq \deg(\mtriv(N)).
\]
Hence since we are now assuming $\ell \geq 1$,
\[
\deg(n^\ell \cdot \mtriv(N) + n \cdot \mnontriv(N)) = \deg(n^\ell \cdot \mtriv(N)),
\] which equals $\ell + t - 1 $, which is at least $d$ since $\ell \geq 1$ and $d -1 = \deg(\maxsubmod(N)) = \deg(\mtriv(N)) = t-1$. We are done with this case
by (*3).\\
\noindent \emph{Case 2.} Assume $\deg(\maxsubmod(N)) = \deg(\mnontriv(N))$.
Then $\deg(n\cdot \mnontriv(N)) = \deg(n\cdot \maxsubmod(N)) = 1 + (d -1) = d$.
We are done by (*3).
\end{proof}
Note: Let $m = \max \{\ell + t - 1, d \}$. A few changes to Case 1 in the proof of
Theorem~\ref{thm:(f.g. abelian) by (f.g. abelian)} actually show that if $G/N$ is finite abelian
and $G = N \rtimes G/N$, then $\mdeg(G) = m$ or $m-1$. In fact, if $G/N$ is finite abelian, it turns out that we do
not even need to assume $G = N \rtimes G/N$ to get this, but the latter observation requires additional work not given here.
\subsection{Nilpotent groups}
\label{sec:nilpotent groups}
This section gives a formula for calculating $\mdeg(G)$ for all f.g.\ nilpotent groups $G$.
There are two reasons for doing this. First, at a mathematics conference at Texas~A\&M, Alex Lubotzky kindly suggested this
to the author as ``an easy exercise that definitely should appear in your thesis.''\footnote{More specifically, he suggested giving a formula for
the maximal subgroup growth of f.g.\ nilpotent groups.} Second, we would like to know how
accurate (or not) Lemma \ref{lem:abelian by something - counting maximal subgroups by derivations} is when $G$
is not a semidirect product. In Section \ref{sec:some f.g. metabelian nilpotent groups}, we apply the results of the present section to a class of examples
(certain metabelian nilpotent groups), and these groups show how inaccurate Lemma \ref{lem:abelian by something - counting maximal subgroups by derivations}
can be when applied to groups that are not semidirect products.
Let $G$ be f.g.\ nilpotent. It is well known that a maximal subgroup of $G$ must be normal and hence have prime index.
See for example 5.2.4 in \cite{Robinson} and the comments following.\footnote{Yes, the result itself
has the hypothesis that the group be finite, but notice that finiteness is not used in his ``(i) $\to$ (ii)'' nor in ``(ii) $\to$ (iii)''.}
\begin{definition}
Similar to the Frattini subgroup, we define
\[
\Phi_p(G) = \bigcap_{M \leq_p G} M.
\]
\end{definition}
Recall a familiar argument showing that $G/\Phi_p(G)$ is an elementary abelian $p$-group: Let $M \leq_p G$. Since $G$ is nilpotent, $M$ is normal of prime index, so
$G/M$ is abelian. Therefore $G' \subseteq M$. Hence $G/\Phi_p(G)$ is abelian. Of course, $G/\Phi_p(G)$ has exponent $p$,
and it is finitely generated. Thus $G/\Phi_p(G)$ is in fact a finite dimensional $\finitefield$-vector space.
\begin{definition}
We denote\footnote{See also the same notation in \cite{Lubotzky2003} (page xxii).} by $\ur_p(G)$ the dimension of $G/\Phi_p(G)$ as an $\finitefield$-vector space.
\end{definition}
\begin{lemma}
\label{lem:nilpotent group maximal subgroup growth}
Let $r = \limsup_{p \to \infty} \ur_p(G)$. Then $\mdeg(G) = r - 1$, and in fact,
\[
\maxsubgr[p](G) \leq \frac{p^r - 1}{p - 1} \text{\hspace{.2in} for all large } p,
\]
with equality for infinitely many $p$.
\end{lemma}
\begin{proof}
Because $G/\Phi_p(G)$ is an $\finitefield$-vector space of dimension $\ur_p(G)$, we know that it (and hence $G$)
has $\frac{p^{\ur_p(G)} - 1}{p - 1}$ subgroups of index $p$.
\end{proof}
\subsection{Some f.g.\ metabelian nilpotent groups}
\label{sec:some f.g. metabelian nilpotent groups}
We will next form a class of examples of f.g.\ metabelian nilpotent groups $G_f$ each of which has a
normal subgroup $N$ such that both $N$ and $G_f/N$ are free abelian.
Fix $\ell \geq 2$, and let $k = \binom{\ell}{2}$. Write $\ensuremath{\mathbb{Z}}^k$ multiplicatively, with generating
set $\{y_1, \ldots, y_k\}$. Choose a function $f: \{(i,j) \mid 1 \leq i < j \leq \ell \} \longrightarrow \ensuremath{\mathbb{Z}}^k$.
Let $[k] = \{1, 2, \ldots, k\}$ and similarly $[\ell] = \{1, 2, \ldots, \ell\}$.
Form the group $G_f$, a presentation of which has generating set
$\{x_1, \ldots, x_\ell, y_1, \ldots, y_k \}$ and relations $[x_i, x_j] = f(i,j)$ for $1 \leq i < j \leq \ell$,
$[y_i, y_j] = 1$ for all $i, j \in [k]$, and $[x_i, y_j] = 1$ for all $i \in [\ell], j \in [k]$.
So $G_f$ has the subgroup $A = \langle y_1, \ldots, y_k \rangle = \ensuremath{\mathbb{Z}}^k$ with $A \subseteq Z(G_f)$,
and also $G_f/A \cong \ensuremath{\mathbb{Z}}^\ell$. Thus $G_f$ is nilpotent and metabelian.
Form the (central) subgroup
\[
N = \langle f(i,j) \rangle_{1 \leq i < j \leq \ell}.
\]
Of course, $N \leq A$. Since we are using multiplicative notation, for a given prime $p$, modding out by $p$th powers gives
$N/N^p$, an $\finitefield$-vector space.
\begin{lemma}
\label{lem:ur_p(G_f) = ell + k - dim(N/N^p)}
Fix a prime $p$. Then
\[
\ur_p(G_f) = \ell + k - \dim_{\mathbb{F}_p}(N/N^p).
\]
\end{lemma}
\begin{proof}[Sketch of proof]
Forming $G_f/\Phi_p(G_f)$ is straightforward because $N \subseteq \Phi_p(G_f)$ and also $G_f^p \subseteq \Phi_p(G_f)$.
\end{proof}
Since $N$ is a subgroup of a free abelian group of rank $k$ (namely $A$), we may view $N$ as a subset of
$\ensuremath{\mathbb{Q}}^k = \ensuremath{\mathbb{Q}} \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} A$. The subspace of $\ensuremath{\mathbb{Q}}^k$ spanned by $N$ is
$\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N$. The following is clear:
\begin{lemma}
\label{lem:dim(N/N^p) over F_p = dim(Q tensor N) for almost all primes}
For almost all primes $p$
\[
\dim_{\mathbb{F}_p}(N/N^p) = \dim_\ensuremath{\mathbb{Q}}(\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N).
\]
\end{lemma}
Recall that $k = \binom{\ell}{2}$. Note that by choosing $f$ appropriately, we may pick
$\dim_\ensuremath{\mathbb{Q}}(\ensuremath{\mathbb{Q}} \ensuremath{\otimes} N)$ to be any number in $\{\binom{\ell}{2}, \binom{\ell}{2} - 1, \ldots, 1, 0 \}$.
So by using
Lemmas \ref{lem:nilpotent group maximal subgroup growth}, \ref{lem:ur_p(G_f) = ell + k - dim(N/N^p)}, and
\ref{lem:dim(N/N^p) over F_p = dim(Q tensor N) for almost all primes}, we can make
$\mdeg(G_f)$ any number in $\{\ell - 1, \ell, \ell + 1, \ldots, \ell +\binom{\ell}{2} -1 \}$.
And this tells us how inaccurate Lemma \ref{lem:abelian by something - counting maximal subgroups by derivations} is in general
because that lemma in this situation ends up saying $\maxsubgr(G_f) \leq \maxsubgr(\ensuremath{\mathbb{Z}}^{\ell + k})$,
but $\mdeg(\ensuremath{\mathbb{Z}}^{\ell + k}) = \ell + k - 1 = \ell + \binom{\ell}{2} -1$; specifically, we have groups $G_f$ such that
$\mdeg(G_f) = \ell - 1$ but for which Lemma~\ref{lem:abelian by something - counting maximal subgroups by derivations} only implies that
$\mdeg(G_f) \leq \ell + \binom{\ell}{2} -1$.
\subsection{A concrete example: $ \ensuremath{\mathbb{Z}}^3 \rtimes \ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}$}
In this section, we calculate $\maxsubgr(G)$ exactly for $G = \ensuremath{\mathbb{Z}} \wr \ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}} = \ensuremath{\mathbb{Z}}^3 \rtimes \ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}$.
Let $R = \ensuremath{\mathbb{Z}}[x]/(x^3 - 1)$. As an $R$-module, $\ensuremath{\mathbb{Z}}^3 \ensuremath{\trianglelefteq} G$ is isomorphic to $R$ itself.
So, we need to calculate $\maxsubmod(R)$ for all $n$. Of course,
$x^3 - 1 = (x-1)(x^2 + x + 1)$, and so our goal is to factor $$f(x) = x^2 + x + 1$$ mod $p$ for all primes $p$. If $p = 2$,
then we easily see that $f$ is irreducible mod 2 because it has no roots mod 2. For $p = 3$, we see that
$x^2 + x + 1 \equiv (x-1)^2$.
So far, we've shown that $\maxsubmod[2](R) = 1$, $\maxsubmod[2^2](R) = 1$, $\maxsubmod[3](R) = 1$, and
that $R$ has no other maximal ideal of index a power of 2 or 3.
\begin{lemma}
\label{lem:f is reducible implies - f is irreducible implies}
Let $p \neq 3$. If $f$ is irreducible mod $p$, then $\maxsubmod[p^2](R) = 1$ and $\maxsubmod[p](R) = 1$.
If $f$ is reducible mod $p$, then $\maxsubmod[p](R) = 3$ and $\maxsubmod[p^k](R) = 0$ for $k \geq 2$.
\end{lemma}
\begin{proof}[Sketch of proof]
We have already shown this for $p = 2$.
Of course, the factor $x - 1$ in $x^3 - 1$ is why $\maxsubmod[p](R) \geq 1$ for all primes $p$.
The only other thing about this lemma that may need comment/proof is why $f$ factors into distinct factors mod $p$ if it
is reducible; see the paragraph after Lemma \ref{lem:quadratic reciprocity used to factor x^2 + x + 1}.
\end{proof}
It is well known how to factor $f(x) = x^2 + x + 1$ mod $p$ for all primes $p > 3$, but
we show the computation in detail. We will use the notation $\Legendre{a}{p}$ for the Legendre symbol.
\begin{lemma}
\label{lem:quadratic reciprocity used to factor x^2 + x + 1}
Let $p \neq 3$. Then the above $f$ is reducible mod $p$ if and only if $p \equiv 1 \pmod 3$.
\end{lemma}
\begin{proof}
Recall that the quadratic formula works in $\mathbb{F}[x]$ for any field $\mathbb{F}$ of characteristic different from 2, because completing
the square works. So for $p \neq 2$, $f$ is reducible in $\finitefield[x]$ if and only if $-3$ is the square of some element of $\finitefield$.
The lemma is true for $p = 2$, since $f$ has no roots mod 2. Suppose $p > 3$. Since $\Legendre{\cdot}{p}$ is a homomorphism from $\finitefield^{\neq 0}$ to
$\{1, -1\}$, we get $\Legendre{-3}{p} = \Legendre{-1}{p}\Legendre{3}{p}$.
We finish by using the law of quadratic reciprocity:
$\Legendre{-1}{p} = (-1)^{(p-1)/2}$ and $\Legendre{3}{p} = (-1)^{(3-1)(p-1)/4} \Legendre{p}{3}$. Therefore,
$\Legendre{-3}{p} = ((-1)^{(p-1)/2})^2 \Legendre{p}{3} = \Legendre{p}{3}$. So we see that $-3$ is a square mod $p$
if and only if $p$ is a square mod 3. But 1 is the only nonzero square in $\mathbb{F}_3$, so $p$ is a square mod 3 if and only if
$p \equiv 1 \pmod 3$. Together with the criterion in the first paragraph, this proves the lemma.
\end{proof}
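A brute-force confirmation of the lemma (our own check, not part of the proof): for primes $p \neq 3$, $x^2 + x + 1$ has a root mod $p$ exactly when $p \equiv 1 \pmod 3$, and when it is reducible its two roots are distinct.

```python
# Verify: for primes p != 3, x^2 + x + 1 has a root mod p iff
# p = 1 (mod 3); when roots exist there are exactly two of them.
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, b in enumerate(sieve) if b]

for p in primes_upto(200):
    if p == 3:
        continue
    roots = [a for a in range(p) if (a * a + a + 1) % p == 0]
    assert (len(roots) > 0) == (p % 3 == 1)
    if roots:
        assert len(roots) == 2  # reducible => two distinct linear factors
```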
Recalling Lemma \ref{lem:f is reducible implies - f is irreducible implies},
we can now see why, for $p \neq 3$, if $f$ is reducible mod $p$ then it factors into a product of two distinct linear terms: by the quadratic
formula its roots are $(-1 \pm \sqrt{-3})/2$, which are distinct because $-3 \not\equiv 0 \mod p$; moreover, since $3^2 \not\equiv -3 \mod p$
(note $p \neq 2, 3$ when $f$ is reducible), neither root is congruent to $1 \mod p$.
We can now combine Lemmas \ref{lem:f is reducible implies - f is irreducible implies} and
\ref{lem:quadratic reciprocity used to factor x^2 + x + 1} to get that for $p \neq 3$,
\[\begin{aligned}
\maxsubmod[p](R) &=
\begin{cases}
3 & \text{ if } p \equiv 1 \mod 3 \\
1 & \text{ if } p \not\equiv 1 \mod 3
\end{cases} \\
\maxsubmod[p^2](R) &=
\begin{cases}
0 & \text{ if } p \equiv 1 \mod 3 \\
1 & \text{ if } p \not\equiv 1 \mod 3
\end{cases} \\
\end{aligned}
\]
We have already stated that $\maxsubmod[3](R) = 1$. For all other $n > 1$ not listed, we have $\maxsubmod(R) = 0$.
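The table above can be confirmed by brute force (our own check): maximal ideals of $R = \ensuremath{\mathbb{Z}}[x]/(x^3-1)$ of index $p$ correspond to roots of $x^3 - 1$ mod $p$, and those of index $p^2$ to an irreducible quadratic factor, i.e., to $f = x^2 + x + 1$ having no root mod $p$.

```python
# For primes p != 3: maxsubmod_p(R) = #roots of x^3 - 1 mod p, and
# maxsubmod_{p^2}(R) = 1 exactly when x^2 + x + 1 has no root mod p.
for p in (2, 5, 7, 11, 13, 17, 19, 23):  # primes != 3
    roots = len({a for a in range(p) if (a**3 - 1) % p == 0})
    f_has_root = any((a * a + a + 1) % p == 0 for a in range(p))
    m_p = roots                       # maxsubmod_p(R)
    m_p2 = 0 if f_has_root else 1     # maxsubmod_{p^2}(R)
    if p % 3 == 1:
        assert (m_p, m_p2) == (3, 0)
    else:
        assert (m_p, m_p2) == (1, 1)
```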
Recall that $G = \ensuremath{\mathbb{Z}} \wr \ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}$. We use the first part of
Lemma \ref{lem:abelian by f.g. abelian - num of maximal subgroups} to calculate $\maxsubgr[3](G)$: in the
lemma's notation, $A = \ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}$, and the $R$-module $N$ is itself $R$. So
\[\begin{aligned}
\maxsubgr[3](G) &= \maxsubgr[3](\ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}})
+ \lvert\Hom(\ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}, \ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}})\rvert \cdot \mtriv[3](R) + 3 \cdot \mnontriv[3](R)\\
&= 1 + 3(1) + 3(0) = 4.
\end{aligned}
\]
Also, $\maxsubgr[3^k](G) = 0$ for $k \geq 2$ because (as stated right before
Lemma \ref{lem:f is reducible implies - f is irreducible implies}) $R$ has no maximal
ideals of index $3^k$ (for such $k$).
We next apply Lemma \ref{lem:abelian by f.g. abelian - num of maximal subgroups} again: Let $n$ be a power
of a prime $p \neq 3$. Then $\maxsubgr(\ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}) =0$ and $\lvert \Hom(\ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}, \ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}})\rvert = 1.$
Thus Lemma \ref{lem:abelian by f.g. abelian - num of maximal subgroups} simplifies to the following:
\[
\maxsubgr(G) = \mtriv(R) + n \cdot \mnontriv(R).
\]
For all $n$, if $n$ is not prime, then $\mnontriv(R) = \maxsubmod(R)$ and $\mtriv(R) = 0$. Also, for all primes $p$, $\mtriv[p](R) = 1$, and
\[
\mnontriv[p](R) =
\begin{cases}
2 & \text{ if } p \equiv 1 \mod 3 \\
0 & \text{ if } p \not\equiv 1 \mod 3.
\end{cases}
\]
Combining our work so far gives the following:
\begin{proposition}
Let $G = \ensuremath{\mathbb{Z}} \wr \ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}$. Then $\maxsubgr[3](G) = 4$. Let $p \neq 3$ be prime. Then
\[\begin{aligned}
\maxsubgr[p](G) &=
\begin{cases}
1 + 2p & \text{ if } p \equiv 1 \mod 3 \\
1 & \text{ if } p \not\equiv 1 \mod 3
\end{cases} \\
\maxsubgr[p^2](G) &=
\begin{cases}
0 & \text{ if } p \equiv 1 \mod 3 \\
p^2 & \text{ if } p \not\equiv 1 \mod 3.
\end{cases} \\
\end{aligned}
\]
For all other $n$, we get $\maxsubgr(G) = 0$. So $\maxsubgr(G) \leq 2n + 1$
for all $n$ with equality for infinitely many $n$. In particular, $\mdeg(G) = 1$.
\end{proposition}
\section*{Acknowledgments}
I would like to thank Marcin Mazur, who was my advisor while at Binghamton University. I also want to thank
Alex Lubotzky for a couple of helpful conversations.
| {
"timestamp": "2018-07-11T02:03:42",
"yymm": "1807",
"arxiv_id": "1807.03423",
"language": "en",
"url": "https://arxiv.org/abs/1807.03423",
"abstract": "Let $m_n(G)$ denote the number of maximal subgroups of $G$ of index $n$. An upper bound is given for the degree of maximal subgroup growth of all polycyclic metabelian groups $G$ (i.e., for $\\limsup \\frac{\\log m_n(G)}{\\log n}$, the degree of polynomial growth of $m_n(G)$). A condition is given for when this upper bound is attained.For $G = \\mathbb{Z}^k \\rtimes \\mathbb{Z}$, where $A \\in GL(k,\\mathbb{Z})$, it is shown that $m_n(G)$ grows like a polynomial of degree equal to the number of blocks in the rational canonical form of $A$. The leading term of this polynomial is the number of distinct roots (in $\\mathbb{C}$) of the characteristic polynomial of the smallest block.",
"subjects": "Group Theory (math.GR)",
"title": "Maximal subgroup growth of some metabelian groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806501150427,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7077274224474139
} |
https://arxiv.org/abs/1309.5560 | An Efficient Numerical Scheme for the Biharmonic Equation by Weak Galerkin Finite Element Methods on Polygonal or Polyhedral Meshes | This paper presents a new and efficient numerical algorithm for the biharmonic equation by using weak Galerkin (WG) finite element methods. The WG finite element scheme is based on a variational form of the biharmonic equation that is equivalent to the usual $H^2$-semi norm. Weak partial derivatives and their approximations, called discrete weak partial derivatives, are introduced for a class of discontinuous functions defined on a finite element partition of the domain consisting of general polygons or polyhedra. The discrete weak partial derivatives serve as building blocks for the WG finite element method. The resulting matrix from the WG method is symmetric, positive definite, and parameter free. An error estimate of optimal order is derived in an $H^2$-equivalent norm for the WG finite element solutions. Error estimates in the usual $L^2$ norm are established, yielding optimal order of convergence for all the WG finite element algorithms except the one corresponding to the lowest order (i.e., piecewise quadratic elements). Some numerical experiments are presented to illustrate the efficiency and accuracy of the numerical scheme. | \section{Introduction}
This paper is concerned with new developments of numerical methods
for the biharmonic equation with Dirichlet and Neumann boundary
conditions. The model problem seeks an unknown function $u=u(x)$
satisfying
\begin{equation}\label{0.1}
\begin{split}
\Delta^2u&=f, \quad\text{in}\ \Omega,\\
u&=\xi, \quad\text{on}\ \partial\Omega,\\
\frac{\partial u}{\partial\textbf{n}}&=\nu, \quad \text{on}\ \partial\Omega,\\
\end{split}
\end{equation}
where $\Omega$ is an open bounded domain in $\mathbb{R}^d$ ($d=2,3$)
with a Lipschitz continuous boundary $\partial\Omega$. The functions
$f$, $\xi$, and $\nu$ are given on the domain or its boundary, as
appropriate.
A variational formulation for the biharmonic problem (\ref{0.1}) is
given by seeking $u\in H^2(\Omega)$ satisfying $u|_{\partial
\Omega}=\xi$, $\frac{\partial u}{\partial \textbf{n}}|_{\partial
\Omega}=\nu$ and the following equation
\begin{equation}\label{0.2}
\sum_{i,j=1}^d(\partial^2_{ij}u,\partial^2_{ij}v)=(f,v), \quad
\forall v\in H_0^2(\Omega),
\end{equation}
where $(\cdot,\cdot)$ stands for the usual inner product in
$L^2(\Omega)$, $\partial_{ij}^2$ is the second order partial
derivative in the direction $x_i$ and $x_j$, and $H_0^2(\Omega)$ is
the subspace of the Sobolev space $H^2(\Omega)$ consisting of
functions with vanishing trace for the function itself and its
gradient.
Based on the variational form (\ref{0.2}), one may design various
conforming finite element schemes for (\ref{0.1}) by constructing
finite element spaces as subspaces of $H^2(\Omega)$. Such
$H^2$-conforming methods essentially require $C^1$-continuity for
the underlying piecewise polynomials (known as finite element
functions) on a prescribed finite element partition. The
$C^1$-continuity imposes an enormous difficulty in the construction
of the corresponding finite element functions in practical
computation. Due to the complexity in the construction of
$C^1$-continuous elements, $H^2$-conforming finite element methods
are rarely used in practice for solving the biharmonic equation.
As an alternative approach, nonconforming and discontinuous Galerkin
finite element methods have been developed for solving the
biharmonic equation over the last several decades. The Morley
element \cite{m1968} is a well-known example of nonconforming
element for the biharmonic equation by using piecewise quadratic
polynomials. Recently, a $C^0$ interior penalty method was studied
in \cite{bs2005, eghlmt2002}. In \cite{mb2007}, an $hp$-version
interior-penalty discontinuous Galerkin method was developed for the
biharmonic equation. To avoid the use of $C^1$-elements, mixed
methods have been developed for the biharmonic equation by reducing
the fourth order problem to a system of two second order equations
\cite{ab1985, f1978, gnp2008, m1987, mwy3818}.
Recently, the weak Galerkin (WG) method has emerged as a new finite
element technique for solving partial differential equations. The WG
method refers to numerical techniques for partial differential
equations in which differential operators are interpreted and
approximated as distributions over a set of generalized functions. The method/idea
was first introduced in \cite{wy2013} for second order elliptic
equations, and the concept was further developed in \cite{wy3655,
wy2707, mwy3655}. By design, WG uses generalized and/or
discontinuous approximating functions on general meshes to overcome
the barrier in the construction of ``smooth'' finite element
functions. In \cite{mwy0927}, a WG finite element method was
introduced and analyzed for the biharmonic equation by using
polynomials of degree $k\ge2$ on each element plus polynomials of
degree $k$ and $k-1$ for $u$ and $\frac{\partial u}{\partial{\mathbf{n}}}$ on
the boundary of each element (i.e., elements of type
$P_k/P_{k}/P_{k-1}$). The WG scheme of \cite{mwy0927} is based on
the variational form of $(\Delta u, \Delta v)=(f,v)$.
In this paper, we will develop a highly flexible and robust WG
finite element method for the biharmonic equation by using an
element of type $P_k/P_{k-2}/P_{k-2}$; i.e., polynomials of degree
$k$ on each element and polynomials of degree $k-2$ on the boundary
of the element for $u$ and $\nabla u$. Our WG finite element scheme
is based on the variational form (\ref{0.2}), and has a smaller
number of unknowns than that of \cite{mwy0927} for the same order of
element. Intuitively, our WG finite element scheme for (\ref{0.1})
shall be derived by replacing the differential operator
$\partial_{ij}^2$ in (\ref{0.2}) by a discrete and weak version,
denoted by $\partial_{ij,w}^2$. In general, such a straightforward
replacement may not produce a working algorithm without including a
mechanism that enforces a certain weak continuity of the underlying
approximating functions. Such weak continuity shall be realized by
introducing an appropriately defined stabilizer, denoted by
$s(\cdot,\cdot)$. Formally, our WG finite element method for
(\ref{0.1}) can be described by seeking a finite element function
$u_h$ satisfying
\begin{equation}\label{0.3}
\sum_{i,j=1}^d(\partial_{ij,w}^2u_h,\partial^2_{ij,w}v)_h+s(u_h,v)=(f,v)
\end{equation}
for all testing functions $v$. The main advantage of the present
approach as compared to \cite{mwy0927} lies in the fact that
elements of type $P_k/P_{k-2}/P_{k-2}$ are employed, which greatly
reduces the degrees of freedom and results in a smaller system to
solve. The rest of the paper specifies all the details of
(\ref{0.3}) and rigorously justifies the method by
establishing a mathematical convergence theory.
The paper is organized as follows. In Section
\ref{Section:Preliminaries}, we introduce some standard notations
for Sobolev spaces. Section \ref{Section:Wpartial} is devoted to a
discussion of weak partial derivatives and their discretizations. In
Section \ref{Section:WGFEM}, we present a weak Galerkin algorithm
for the biharmonic equation (\ref{0.1}). In Section
\ref{Section:L2projection}, we introduce some local $L^2$ projection
operators and then derive some approximation properties which are
useful in the convergence analysis. Section
\ref{Section:error-equation} will be devoted to the derivation of an
error equation for the WG finite element solution. In Section
\ref{Section:H2error}, we establish an optimal order of error
estimate for the WG finite element approximation in an
$H^2$-equivalent discrete norm. In Section \ref{Section:L2error}, we
shall derive an error estimate for the WG finite element method
approximation in the usual $L^2$-norm. Finally in Section
\ref{Section:NE}, we present some numerical results to demonstrate
the efficiency and accuracy of our WG method.
\section{Preliminaries and Notations}\label{Section:Preliminaries}
Let $D$ be any open bounded domain with Lipschitz continuous
boundary in $\mathbb{R}^d$, $d=2, 3$. We use the standard definition
for the Sobolev space $H^s(D)$ and the associated inner product
$(\cdot,\cdot)_{s,D}$, norm $\|\cdot\|_{s,D}$, and seminorm
$|\cdot|_{s,D}$ for any $s\geq 0$. For example, for any integer
$s\geq 0$, the seminorm $|\cdot|_{s,D}$ is given by
$$
|v|_{s,D}=\left(\sum_{|\alpha|=s}\int_D|\partial^\alpha
v|^2dD\right)^{\frac{1}{2}}
$$
with the usual notation
$$
\alpha=(\alpha_1,\cdots,\alpha_d),
|\alpha|=\alpha_1+\cdots+\alpha_d,
\partial^\alpha=\prod_{j=1}^d\partial_{x_j}^{\alpha_j}.
$$
The Sobolev norm $\|\cdot\|_{m,D}$ is given by
$$
\|v\|_{m,D}=\Big(\sum_{j=0}^m|v|_{j,D}^2\Big)^{\frac{1}{2}}.
$$
The space $H^0(D)$ coincides with $L^2(D)$, for which the norm and
the inner product are denoted by $\|\cdot\|_D$ and
$(\cdot,\cdot)_D$, respectively. When $D=\Omega$, we shall drop
the subscript $D$ in the norm and inner product notation.
Throughout the paper, the letter $C$ is used to denote a generic
constant independent of the mesh size and functions involved.
\section{Weak Partial Derivatives of Second
Order}\label{Section:Wpartial} For the biharmonic problem
(\ref{0.1}) with variational form (\ref{0.2}), the principal
differential operator is $\partial_{ij}^2$. Thus, we shall define
weak partial derivatives, denoted by $\partial_{ij,w}^2$, for a
class of discontinuous functions. For numerical purposes, we shall
also introduce a discrete version for the weak partial derivative
$\partial_{ij,w}^2$ in polynomial subspaces.
Let $T$ be any polygonal or polyhedral domain with boundary
$\partial T$. By a weak function on the region $T$, we mean a
function $v=\{v_0,v_b,\textbf{v}_g\}$ such that $v_0\in L^2(T)$,
$v_b\in L^{2}(\partial T)$ and $\textbf{v}_g\in [L^{2}(\partial
T)]^d$. The first and second components $v_0$ and $v_b$ can be
understood as the value of $v$ in the interior and on the boundary
of $T$. The third component $\textbf{v}_g$, with components
$v_{gi},\ i=1,\cdots,d$, is intended to represent the gradient
$\nabla v$ on the boundary of $T$. Note that $v_b$ and
$\textbf{v}_g$ may not necessarily be related to the trace of $v_0$
and $\nabla v_0$ on $\partial T$, respectively.
Denote by $W(T)$ the space of all weak functions on $T$; i.e.,
$$
W(T)=\{v=\{v_0,v_b,\textbf{v}_g\}: v_0\in L^2(T), v_b\in
L^{2}(\partial T), \textbf{v}_g\in [L^{2}(\partial T)]^d\}.
$$
Let $\langle\cdot,\cdot\rangle_{\partial T}$ be the inner product in
$L^2(\partial T)$. Define $G(T)$ by
$$
G(T)=\{\varphi: \varphi\in H^2(T)\}.
$$
\begin{defi}\label{defition3.1} The dual of $L^2(T)$ can be identified with
itself by using the standard $L^2$ inner product as the action of
linear functionals. With a similar interpretation, for any $v\in
W(T)$, the weak partial derivative $\partial^2_{ij,w}$ of
$v=\{v_0,v_b,\textbf{v}_g \}$ is defined as a linear functional
$\partial^2_{ij,w} v$ in the dual space of $G(T)$ whose action on
each $\varphi \in G(T)$ is given by
\begin{equation}\label{2.3}
(\partial^2_{ij,w}v,\varphi)_T=(v_0,\partial^2_{ji}\varphi)_T-
\langle v_b n_i,\partial_j\varphi\rangle_{\partial T}+
\langle v_{gi},\varphi n_j\rangle_{\partial T}.
\end{equation}
Here $\textbf{n}$, with components $n_{i}\ (i=1,\cdots,d)$, is the
outward normal direction of $T$ on its boundary.
\end{defi}
Unlike the classical second order derivatives, $\partial_{ij,w}^2v$
is usually different from $\partial_{ji,w}^2v$ when $i\neq j$.
The Sobolev space $H^2(T)$ can be embedded into the space $W(T)$ by
an inclusion map $i_W: H^2(T)\rightarrow W(T)$ defined as follows
$$
i_W(\phi)=\{\phi|_T,\phi|_{\partial T},\nabla \phi|_{\partial
T}\}, \qquad \phi\in H^2(T).
$$
With the help of the inclusion map $i_W$, the Sobolev space
$H^2(T)$ can be viewed as a subspace of $W(T)$ by identifying each
$\phi\in H^2(T)$ with $i_W(\phi)$. Analogously, a weak function
$v=\{v_0,v_b,\textbf{v}_g\}\in W(T)$ is said to be in $H^2(T)$ if
it can be identified with a function $\phi\in H^2(T)$ through the
above inclusion map. It is not hard to see that
$\partial_{ij,w}^2$ is identical with $\partial_{ij}^2$ in
$H^2(T)$; i.e., $\partial_{ij,w}^2v =\partial_{ij}^2v$ for all
functions $v\in H^2(T)$.
Next, for $i,j=1,\cdots,d$, we introduce a discrete version of
$\partial^2_{ij,w}$ by approximating $\partial^2_{ij,w}$ in a
polynomial subspace of the dual of $G(T)$. To this end, for any
integer $r\geq 0$, denote by $P_r(T)$ the set of
polynomials on $T$ with degree no more than $r$. A discrete weak
partial derivative operator $\partial^2_{ij,w,r,T}\ (i,j=1,\cdots,d)$
maps each $v\in W(T)$ to the unique polynomial
$\partial^2_{ij,w,r,T} v\in P_r(T)$ satisfying the following equation
\begin{equation}\label{2.4}
(\partial^2_{ij,w,r,T}v,\varphi)_T=(v_0,\partial^2
_{ji}\varphi)_T-\langle v_b n_i,\partial_j\varphi\rangle_{\partial T}
+\langle v_{gi},\varphi n_j\rangle_{\partial T},\quad \forall \varphi \in
P_r(T).
\end{equation}
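As a concrete illustration of (\ref{2.4}) in the lowest-order case $r=0$: testing against the constant $\varphi\equiv 1$ annihilates the volume term and the $v_b$ term, leaving the constant value $\partial^2_{ij,w,0,T}v = |T|^{-1}\oint_{\partial T} v_{gi}\, n_j\, ds$. The following Python sketch (our own illustration with invented names, not code from the paper) evaluates this constant on a triangle and recovers the exact second derivative of a quadratic embedded via $i_W$.

```python
import numpy as np

def weak_d2_r0(verts, v_g, i, j):
    """Discrete weak derivative d^2_{ij,w,0,T} v on a triangle T for
    r = 0: with phi = 1 in (2.4) only the boundary term with v_gi
    survives, giving the constant (1/|T|) * oint v_gi * n_j ds."""
    e1, e2 = verts[1] - verts[0], verts[2] - verts[0]
    area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
    total = 0.0
    for a, b in [(0, 1), (1, 2), (2, 0)]:
        t = verts[b] - verts[a]                   # edge tangent
        length = np.hypot(t[0], t[1])
        n = np.array([t[1], -t[0]]) / length      # outward normal (ccw verts)
        mid = 0.5 * (verts[a] + verts[b])         # midpoint rule: exact for
        total += v_g(mid)[i] * n[j] * length      # v_gi linear on the edge
    return total / area

# For phi(x, y) = x^2 the embedding i_W(phi) carries the boundary
# gradient data v_g = (2x, 0), and the exact d^2_{11} phi equals 2.
T = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
d2_11 = weak_d2_r0(T, lambda x: (2.0 * x[0], 0.0), 0, 0)
print(d2_11)  # 2.0 up to round-off
```

This confirms numerically that $\partial^2_{ij,w}$ reduces to the classical $\partial^2_{ij}$ on smooth functions, consistent with the discussion following Definition \ref{defition3.1}.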
\section{ Numerical Algorithm by Weak Galerkin}\label{Section:WGFEM}
Let ${\cal T}_h$ be a partition of the domain $\Omega$ into polygons
in 2D or polyhedra in 3D. Assume that ${\cal T}_h$ is shape regular
in the sense as defined in \cite{wy3655}. Denote by ${\mathcal{E}}_h$ the set
of all edges or flat faces in ${\cal T}_h$, and let
${\mathcal{E}}_h^0={\mathcal{E}}_h\setminus\partial\Omega$ be the set of all interior
edges or flat faces.
For any given integer $k\geq 2$, denote by $W_k(T)$ the discrete
weak function space given by
\begin{equation*}
W_k(T)=\big\{\{v_0,v_b,\textbf{v}_g\}: v_0\in P_k(T), v_b\in
P_{k-2}(e),\textbf{v}_g\in [P_{k-2}(e)]^d, e\subset \partial
T\big\}.
\end{equation*}
By patching $W_k(T)$ over all the elements $T\in {\cal T}_h$ through
a common value on the interface ${\mathcal{E}}_h^0$, we arrive at a weak finite
element space $V_h$ defined as follows
$$
V_h=\big\{\{v_0,v_b,\textbf{v}_g\}:\{v_0,v_b,\textbf{v}_g\}|_T\in
W_k(T), \forall T\in {\cal T}_h\big\}.
$$
Denote by $V_h^0$ the subspace of $V_h$ with vanishing trace; i.e.,
$$
V_h^0=\{\{v_0,v_b,\textbf{v}_g\}\in
V_h:\ v_b|_e=0,\ \textbf{v}_g|_e=\textbf{0},\ e\subset \partial T\cap
\partial\Omega\}.
$$
Intuitively, the finite element functions in $V_h$ are piecewise
polynomials of degree $k\ge 2$. The extra value on the boundary of
each element is approximated by polynomials of degree $k-2$ for the
function itself and its gradient. For such functions, we may compute
the weak second order derivative $\partial^2_{ij,w} v$ by using the
formula (\ref{2.3}). For computational purposes, this weak partial
derivative $\partial^2_{ij,w} v$ has to be approximated by
polynomials, preferably of degree $k-2$. Denote by
$\partial^2_{ij, w,k-2}$ the discrete weak partial derivative
computed by using (\ref{2.4}) on each element $T$ for $k\geq 2$;
i.e.,
$$
(\partial^2_{ij, w,k-2} v)|_T=\partial^2_{ij,w,k-2,T}(v|_T), \qquad
v\in V_h.
$$
For simplicity of notation and without confusion, we shall drop the
subscript $k-2$ in the notation $\partial^2_{ij, w,k-2}$. We also
introduce the following notation
$$
(\partial^2_{w}u,\partial^2_{w}v)_h=\sum_{T\in{\cal
T}_h}\sum_{i,j=1}^d (\partial^2_{ij,w}u,\partial^2_{ij,w}v)_T,\quad
\forall u, v\in V_h.
$$
For each element $T$, denote by $Q_0$ the $L^2$ projection onto
$P_k(T)$, $k\geq 2$. For each edge or face $e\subset\partial T$,
denote by $Q_b$ the $L^2$ projection onto $P_{k-2}(e)$ or
$[P_{k-2}(e)]^d$, as appropriate. For any $w\in H^2(\Omega)$, we
define a projection $Q_h w$ into the weak finite element space $V_h$
such that on each element $T$,
$$
Q_hw=\{Q_0w,Q_bw,Q_b(\nabla w)\}.
$$
For any $w=\{w_0,w_b,\textbf{w}_g\}$ and
$v=\{v_0,v_b,\textbf{v}_g\}$ in $V_h$, we introduce a bilinear form
as follows
\begin{equation*}
\begin{split}
s(w,v)=&\sum_{T\in {\cal T}_h} h_T^{-1}\langle Q_b(\nabla
w_0)-\textbf{w}_g, Q_b(\nabla v_0)-
\textbf{v}_g\rangle_{\partial T}\\
&+\sum_{T\in {\cal T}_h} h_T^{-3}\langle Q_b w_0-w_b, Q_b
v_0-v_b\rangle_{\partial T}.
\end{split}
\end{equation*}
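For the lowest-order element ($k=2$), $Q_b$ projects onto $P_0(e)$, i.e.\ it takes edge averages, so each edge contribution to $s(\cdot,\cdot)$ is the squared mismatch of averaged data times the edge length. The following Python sketch (our own hand-rolled illustration, not code from the paper) evaluates $s(v,v)$ restricted to a single triangle and confirms that it vanishes when the edge data are the projected traces of a quadratic $v_0$.

```python
import numpy as np

def simpson_avg(f, a, b):
    """Average of f along the segment [a, b] (exact for cubics)."""
    return (f(a) + 4.0 * f(0.5 * (a + b)) + f(b)) / 6.0

def stabilizer(verts, v0, grad_v0, v_b, v_g, hT):
    """s(v, v) restricted to one triangle for k = 2.  The edge data
    v_b, v_g are constants per edge (P_0), so Q_b is the edge average
    and each edge integral of a constant difference is |e| * diff^2."""
    s = 0.0
    for idx, (a, b) in enumerate([(0, 1), (1, 2), (2, 0)]):
        pa, pb = verts[a], verts[b]
        length = np.hypot(*(pb - pa))
        # gradient term: h_T^{-1} |e| |Q_b(grad v0) - v_g|^2
        gdiff = simpson_avg(grad_v0, pa, pb) - v_g[idx]
        s += length * np.dot(gdiff, gdiff) / hT
        # trace term: h_T^{-3} |e| |Q_b(v0) - v_b|^2
        vdiff = simpson_avg(v0, pa, pb) - v_b[idx]
        s += length * vdiff ** 2 / hT ** 3
    return s

def v0(p):       # a quadratic: its projected traces match exactly
    return p[0] ** 2 + p[1]

def grad_v0(p):
    return np.array([2.0 * p[0], 1.0])

T = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 0)]
v_b = [simpson_avg(v0, T[a], T[b]) for a, b in edges]       # Q_b(v0)
v_g = [simpson_avg(grad_v0, T[a], T[b]) for a, b in edges]  # Q_b(grad v0)
s_exact = stabilizer(T, v0, grad_v0, v_b, v_g, np.sqrt(2.0))
print(s_exact)  # 0.0: matching projected data incur no penalty
```

Perturbing $v_b$ or $\textbf{v}_g$ away from the projected traces makes $s(v,v)>0$, which is exactly the weak continuity the stabilizer is designed to enforce.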
The following is a precise statement of the WG finite element scheme
for the biharmonic equation (\ref{0.1}) based on the variational
formulation (\ref{0.2}).
\begin{algorithm} Find $u_h=\{u_0,u_b,\textbf{u}_g\}\in V_h$
satisfying $u_b=Q_b\xi$, $\textbf{u}_g\cdot \textbf{n}=Q_{b}\nu$,
$\textbf{u}_g\cdot \boldsymbol{\tau}=Q_{b}(\nabla\xi\cdot\boldsymbol{\tau})$ on
$\partial\Omega$ and the following equation:
\begin{equation}\label{2.7}
(\partial_{w}^2u_h,\partial^2_{w}v)_h+s(u_h,v)=(f,v_0), \quad
\forall v=\{v_0,v_b,\textbf{v}_g\}\in V_h^0,
\end{equation}
where $\boldsymbol{\tau}\in \mathbb{R}^d$ is the tangential
direction to the edges/faces on the boundary $\partial\Omega$.
\end{algorithm}
The following is a useful observation concerning the finite element
space $V_h^0$.
\begin{lemma}\label{Lemma4.1} For any $v\in V_h^0$, define ${|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}$ by
\begin{equation}\label{3barnorm}
{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}^2= (\partial^2_{ w}v,\partial^2_{ w}v)_h+s(v,v).
\end{equation}
Then, ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$ is a norm in the linear space $V_h^0$.
\end{lemma}
\begin{proof} We shall only verify the positivity property for
${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$. To this end, assume that ${|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}=0$ for some
$v\in V_h^0$. It follows from (\ref{3barnorm}) that
$\partial^2_{ij,w}v=0$ on $T$, $Q_b(\nabla v_0)=\textbf{v}_g$ and
$Q_bv_0=v_b$ on $\partial T$. We claim that $\partial^2_{ij}v_0=0$
on each element $T$. To this end, for any $\varphi \in P_{k-2}(T)$,
we use $\partial^2_{ij,w}v=0$ and the identity (\ref{A.002}) to
obtain
\begin{equation*}
\begin{split}
0=&(\partial^2_{ij,w}v,\varphi)_T\\
=&(\partial^2_{ij}v_0, \varphi)_T+
\langle v_{gi}-Q_{b}(\partial_i v_0),\varphi\cdot n_j\rangle_{\partial T}
+\langle Q_bv_0-v_b,\partial_j \varphi \cdot n_i\rangle_{\partial T}\\
=&(\varphi,\partial^2_{ij}v_0)_T,
\end{split}
\end{equation*}
which implies that $\partial^2_{ij}v_0=0$ for $i,j=1,\ldots, d$ on
each element $T$. Thus, $v_0$ is a linear function on $T$ and
$\nabla v_0$ is a constant on each element. The condition
$Q_b(\nabla v_0)=\textbf{v}_g$ on $\partial T$ implies that $\nabla
v_0=\textbf{v}_g$ on $\partial T$. Thus, $\nabla v_0$ is continuous
over the whole domain $\Omega$. The fact that $\textbf{v}_g =0$ on
$\partial\Omega$ leads to $\nabla v_0=0$ in $\Omega$ and
$\textbf{v}_g=0$ on each edge/face. Thus, $v_0$ is a constant on
each element $T$. This, together with the fact that $Q_bv_0=v_b$ on
$\partial T$, indicates that $v_0$ is continuous over the whole
domain $\Omega$. It follows from $v_b=0$ on $\partial\Omega$ that
$v_0=0$ everywhere in the domain $\Omega$. Furthermore,
$v_b=Q_b(v_0)=0$ on each edge/face. This completes the proof of the
lemma.
\end{proof}
\begin{lemma}\label{Lemma4.2} The Weak Galerkin Algorithm (\ref{2.7}) has a
unique solution.
\end{lemma}
\begin{proof} Let $u_h^{(1)}$ and $u_h^{(2)}$ be two
solutions of the Weak Galerkin Algorithm (\ref{2.7}). It is clear
that the difference $e_h=u_h^{(1)}-u_h^{(2)}$ is a finite element
function in $V_h^0$ satisfying
\begin{equation}\label{2.10}
(\partial^2_{ w}e_h,\partial^2_{ w}v)_h+s(e_h,v)=0, \quad \forall v \in V_h^0.
\end{equation}
By setting $v=e_h$ in (\ref{2.10}), we obtain
$$
(\partial^2_{w}e_h,\partial^2_{w}e_h)_h+s(e_h,e_h)=0.
$$
From Lemma \ref{Lemma4.1}, we get $e_h\equiv 0$, i.e.,
$u_h^{(1)}=u_h^{(2)}$.
\end{proof}
The rest of the paper will provide a mathematical and computational
justification for the WG finite element method (\ref{2.7}).
\section{$L^2$ Projections and Their
Properties}\label{Section:L2projection}
The goal of this section is to establish some technical results for
the $L^2$ projections. These results are valuable in the error
analysis for the WG finite element method.
\begin{lemma}\label{Lemma5.1} On each element $T\in {\cal T}_h$, let ${\cal
Q}_h$ be the local $L^2$ projection onto $P_{k-2}(T)$. Then, the
$L^2$ projections $Q_h$ and ${\cal Q}_h$ satisfy the following
commutative property:
\begin{equation}\label{l}
\partial^2_{ij,w}(Q_h w)={\cal Q}_h(\partial^2_{ij} w),\qquad \forall
i,j=1,\ldots,d,
\end{equation}
for all $w\in H^2(T)$.
\end{lemma}
\begin{proof} For $\varphi\in P_{k-2}(T)$ and $w\in H^2(T)$, from the definition of $\partial^2_{ij,w}$
and the usual integration by parts, we have
\begin{equation*}
\begin{split}
(\partial^2_{ij,w}(Q_h w),\varphi)_T&=(Q_0
w,\partial^2_{ji}\varphi)_T -\langle Q_b w,\partial_j \varphi\cdot
n_i\rangle_{\partial T}+
\langle Q_{b}(\partial_i w)\cdot n_j,\varphi\rangle_{\partial T}\\
&=(w,\partial^2_{ji}\varphi)_T-\langle w,\partial_j
\varphi\cdot n_i\rangle_{\partial T}+
\langle \partial_i w\cdot n_j,\varphi\rangle_{\partial T}\\
&=(\partial^2_{ij}w,\varphi)_T\\
&=({\cal Q}_h\partial^2_{ij}w,\varphi)_T, \quad \forall
i,j=1,\cdots,d,
\end{split}
\end{equation*}
which completes the proof.
\end{proof}
The commutative property (\ref{l}) indicates that the discrete weak
partial derivative of the $L^2$ projection of a smooth function is a
good approximation of the classical partial derivative of the same
function. This is a nice and useful property of the discrete weak
partial differential operator $\partial^2_{ij,w}$ in application to
algorithm design and analysis.
The following lemma provides some approximation properties for the
projection operators $Q_h$ and ${\cal Q}_h$.
\begin{lemma}\label{Lemma5.2}\cite{wy3655, mwy0927} Let ${\cal T}_h$ be a
finite element partition of $\Omega$ satisfying the shape
regularity assumption as defined in \cite{wy3655}. Then, for any
$0\leq s\leq 2$ and $1\leq m\leq k$, we have
\begin{equation}\label{3.2}
\sum_{T\in {\cal T}_h}h_T^{2s}\|u-Q_0u\|^2_{s,T}\leq Ch^{2(m+1)}\|u\|_{m+1}^2,
\end{equation}
\begin{equation}\label{3.3}
\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^dh_T^{2s}\|\partial^2_{ij}u-{\cal
Q}_h\partial^2_{ij}u\|^2_{s,T}\leq Ch^{2(m-1)}\|u\|_{m+1}^2.
\end{equation}
\end{lemma}
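Estimate (\ref{3.2}) with $s=0$ and $m=k$ predicts $\|u-Q_0u\|=O(h^{k+1})$. As a hedged one-dimensional illustration (the paper works on polygonal/polyhedral meshes; this sketch and all of its names are ours), the following Python code projects $\sin x$ onto piecewise quadratics on a uniform mesh and observes the rate $k+1=3$:

```python
import numpy as np

def l2_projection_error(f, k, n, a=0.0, b=np.pi):
    """L2 error ||f - Q_0 f|| for the piecewise-P_k L2 projection of f
    on a uniform mesh of n intervals, using Gauss quadrature (a 1D
    illustration of estimate (3.2); the paper works in 2D/3D)."""
    x_g, w_g = np.polynomial.legendre.leggauss(10)   # 10-pt Gauss rule
    nodes = np.linspace(a, b, n + 1)
    err2 = 0.0
    for xl, xr in zip(nodes[:-1], nodes[1:]):
        h = xr - xl
        x = xl + 0.5 * h * (x_g + 1.0)               # mapped quad points
        w = 0.5 * h * w_g
        # orthonormal Legendre basis on [xl, xr] up to degree k
        t = 2.0 * (x - xl) / h - 1.0
        V = np.polynomial.legendre.legvander(t, k)
        V = V * np.sqrt((2.0 * np.arange(k + 1) + 1.0) / h)
        coef = V.T @ (w * f(x))                      # projection coefficients
        err2 += np.sum(w * (f(x) - V @ coef) ** 2)
    return np.sqrt(err2)

e1 = l2_projection_error(np.sin, 2, 8)
e2 = l2_projection_error(np.sin, 2, 16)
rate = np.log2(e1 / e2)
print(rate)  # close to k + 1 = 3
```

Halving $h$ divides the error by roughly $2^{k+1}$, matching the $O(h^{m+1})$ factor on the right-hand side of (\ref{3.2}).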
Using Lemma \ref{Lemma5.2} we can prove the following result.
\begin{lemma}\label{Lemma5.3} Let $1\leq m\leq k$ and $u\in
H^{\max\{m+1,4\}}(\Omega)$. There exists a constant $C$ such that
the following estimates hold true:
\begin{equation}\label{3.5}
\Big(\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^dh_T\|\partial^2_{ij}u-{\cal
Q}_h\partial^2_{ij}u\|_{\partial T}^2\Big)^{\frac{1}{2}}\leq
Ch^{m-1}\|u\|_{m+1},
\end{equation}
\begin{equation}\label{3.6}
\Big(\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^dh_T^3\|\partial_j(\partial^2
_{ij}u-{\cal Q}_h\partial^2_{ij}u)\|_{\partial T}^2\Big)^{\frac{1}{2}}\\
\leq Ch^{m-1}(\|u\|_{m+1}+h\delta_{m,2}\|u\|_4),
\end{equation}
\begin{equation}\label{3.7}
\Big(\sum_{T\in {\cal T}_h}h_T^{-1}\|Q_b(\nabla Q_0u)-Q_b(\nabla
u)\|_{\partial T}^2\Big)^{\frac{1}{2}}\leq Ch^{m-1}\|u\|_{m+1},
\end{equation}
\begin{equation}\label{3.8}
\Big(\sum_{T\in {\cal T}_h}h_T^{-3}\|Q_b
(Q_0u)- Q_bu\|_{\partial T}^2\Big)^{\frac{1}{2}}\leq
Ch^{m-1}\|u\|_{m+1}.
\end{equation}
Here $\delta_{i,j}$ is the usual Kronecker delta with value $1$
when $i=j$ and value $0$ otherwise.
\end{lemma}
\begin{proof} To prove (\ref{3.5}), by the trace inequality (\ref{trace-inequality}) and the
estimate (\ref{3.3}), we get
\begin{equation*}
\begin{split}
&\sum_{T\in {\cal T}_h}\sum_{i,j=1}^dh_T\|\partial^2
_{ij}u-{\cal Q}_h\partial^2_{ij}u\|_{\partial T}^2\\
\leq & C\sum_{T\in {\cal T}_h}\sum_{i,j=1}^d\Big(\|\partial^2
_{ij}u-{\cal Q}_h\partial^2
_{ij}u\|_{T}^2+h_T^2|\partial^2_{ij}u-{\cal Q}_h\partial^2_{ij}u|_{1,T}^2\Big)\\
\leq & Ch^{2m-2}\|u\|^2_{m+1}.
\end{split}
\end{equation*}
As to (\ref{3.6}), by the trace inequality (\ref{trace-inequality})
and the estimate (\ref{3.3}), we obtain
\begin{equation*}
\begin{split}
&\sum_{T\in {\cal T}_h}\sum_{i,j=1}^dh_T^3\|\partial_j(\partial^2_{ij}u-{\cal Q}_h\partial^2_{ij}u)\|_{\partial T}^2\\
\leq & C\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^d\Big(h_T^2\|\partial_j(\partial^2_{ij}u-{\cal
Q}_h\partial^2_{ij}u)\|_{T}^2
+h_T^4|\partial_{j}(\partial^2_{ij}u-{\cal Q}_h\partial^2_{ij}u)|_{1,T}^2\Big)\\
\leq & Ch^{2m-2}\big(\|u\|^2_{m+1}+h^2\delta_{m,2}\|u\|_4^2\big).
\end{split}
\end{equation*}
As to (\ref{3.7}), by the trace inequality (\ref{trace-inequality})
and the estimate (\ref{3.2}), we have
\begin{equation*}
\begin{split}
&\sum_{T\in {\cal T}_h}h_T^{-1}\|Q_b(\nabla Q_0u)-Q_b(\nabla u)\|_{\partial T}^2\\
\leq& \sum_{T\in {\cal T}_h}h_T^{-1}\|\nabla Q_0u-\nabla u\|_{\partial T}^2\\
\leq& C\sum_{T\in {\cal T}_h}\Big(h_T^{-2}\|\nabla Q_0u-\nabla u\|_{ T}^2+|\nabla Q_0u-\nabla u|_{1,T}^2\Big)\\
\leq& Ch^{2m-2}\|u\|^2_{m+1}.
\end{split}
\end{equation*}
Finally for (\ref{3.8}), by the trace inequality
(\ref{trace-inequality}) and the estimate (\ref{3.2}), we have
\begin{equation*}
\begin{split}
&\sum_{T\in {\cal T}_h}h_T^{-3}\|Q_b (Q_0u)- Q_bu\|_{\partial T}^2\\
\leq & \sum_{T\in {\cal T}_h}h_T^{-3}\|Q_0u-u\|_{\partial T}^2\\
\leq& C\sum_{T\in {\cal T}_h}\Big(h_T^{-4}\|Q_0u-u\|_{T}^2+h_T^{-2}\|\nabla(Q_0u-u)\|_{T}^2\Big)\\
\leq& Ch^{2m-2}\|u\|^2_{m+1}.
\end{split}
\end{equation*}
This completes the proof of the lemma.
\end{proof}
\section{An Error Equation}\label{Section:error-equation}
Let $u$ and $u_h=\{u_0,u_b,\textbf{u}_g\} \in V_h$ be the solutions
of (\ref{0.1}) and (\ref{2.7}), respectively. Denote by
\begin{equation}\label{error-term}
e_h=Q_hu-u_h
\end{equation}
the error function between the $L^2$ projection of the exact
solution $u$ and its weak Galerkin finite element approximation
$u_h$. An error equation refers to some identity that the error
function $e_h$ must satisfy. The goal of this section is to derive
an error equation for $e_h$. The following is our main result.
\begin{lemma}\label{Lemma6.1} The error function
$e_h$ as defined by (\ref{error-term}) is a finite element function
in $V_h^0$ and satisfies the following equation
\begin{equation}\label{4.1}
(\partial^2_w e_h,\partial^2_w v)_h+s(e_h,v)=\phi_u(v),\qquad
\forall v\in V_h^0,
\end{equation}
where
\begin{equation}\label{phiu}
\begin{split}
\phi_u(v)=
&\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle\partial^2_{ij}u-{\cal Q}_h(\partial^2_{ij}u),(\partial_iv_0-v_{gi})\cdot n_j\rangle_{\partial T}\\
&-\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle\partial_j(\partial^2_{ij}u-{\cal Q}_h\partial^2_{ij}u)\cdot n_i,v_0-v_b\rangle_{\partial T}\\
&+s(Q_hu,v).
\end{split}
\end{equation}
\end{lemma}
\begin{proof} From Lemma \ref{Lemma5.1} we have $\partial_{ij,w}^2 Q_h
u = {\mathcal{Q}}_h (\partial_{ij}^2 u)$. Now using (\ref{A.001}) with
$\varphi=\partial_{ij,w}^2 Q_h u $ we obtain
\begin{equation*}
\begin{split}
(\partial^2_{ij}v,\partial^2_{ij,w}Q_hu)_T=&(\partial^2_{ij}v_0,{\cal
Q}_h(\partial^2_{ij}u))_T+\langle v_0-v_b,\partial_j({\cal
Q}_h(\partial^2_{ij}u))
\cdot n_i\rangle_{\partial T}\\
&-\langle(\partial_i v_0-v_{gi})\cdot n_j,{\cal Q}_h\partial^2_{ij} u\rangle_{\partial T}\\
=&(\partial^2_{ij}v_0, \partial^2_{ij}u)_T+\langle
v_0-v_b,\partial_j({\cal Q}_h(\partial^2_{ij}u))\cdot
n_i\rangle_{\partial T}\\
&-\langle(\partial_i v_0-v_{gi})\cdot n_j,{\cal Q}_h\partial^2_{ij}
u\rangle_{\partial T},
\end{split}
\end{equation*}
which implies that
\begin{equation}\label{4.2}
\begin{split}
(\partial^2_{ij}v_0, \partial^2_{ij}u)_T=&
(\partial^2_{ij,w}Q_hu,\partial^2_{ij,w}v)_T-\langle v_0-v_b,
\partial_j({\cal Q}_h(\partial^2_{ij}u))\cdot n_i\rangle_{\partial T}\\
&+\langle(\partial_i v_0-v_{gi})\cdot n_j,{\cal Q}_h\partial^2_{ij}
u\rangle_{\partial T}.
\end{split}
\end{equation}
We emphasize that (\ref{4.2}) is valid for any $v\in V_h^0$ and any
smooth function $u\in H^r(\Omega),\ r>3$. Next, it follows from the
integration by parts that
\begin{equation*}
(\partial^2_{ij}u, \partial^2_{ij}v_0)_T=
((\partial^2_{ij})^2u,v_0)_T+\langle \partial^2_{ij}u,
\partial_i v_0\cdot n_j\rangle_{\partial T}
-\langle \partial_j(\partial^2_{ij}u)\cdot n_i, v_0\rangle_{\partial
T}.
\end{equation*}
By summing over all $T$ and using the identity $(\Delta^2
u,v_0)=(f,v_0)$, we obtain
\begin{equation*}
\begin{split}
\sum_{T\in {\cal T}_h}\sum_{i,j=1}^d(\partial^2_{ij}u, \partial^2_{ij}v_0)_T=& (f,v_0)+\sum_{T\in {\cal T}_h}\sum_{i,j=1}^d\langle \partial^2_{ij}u,(\partial_i v_0-v_{gi})\cdot n_j\rangle_{\partial T}\\
&-\sum_{T\in {\cal T}_h}\sum_{i,j=1}^d\langle
\partial_j(\partial^2_{ij}u)\cdot n_i, v_0-v_b\rangle_{\partial
T},
\end{split}
\end{equation*}
where we have used the fact that the sums of the terms associated
with $v_{gi}\cdot n_j$ and $v_b n_i$ vanish (note that both
$v_{gi}$ and $v_b$ vanish on $\partial\Omega$). Combining the
above equation with (\ref{4.2}) yields
\begin{equation*}
\begin{split}
(\partial^2_w Q_hu,\partial^2_w v)_h=& (f,v_0)+\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle\partial^2_{ij}u-{\cal Q}_h(\partial^2_{ij}u),(\partial_iv_0-v_{gi})\cdot n_j\rangle_{\partial T}\\
&-\sum_{T\in{\cal
T}_h}\sum_{i,j=1}^d\langle\partial_j(\partial^2_{ij}u-{\cal
Q}_h\partial^2_{ij}u)\cdot n_i,v_0-v_b\rangle_{\partial T}.
\end{split}
\end{equation*}
Adding $s(Q_hu,v)$ to both sides of the above equation gives
\begin{equation}\label{4.3}
\begin{split}
&(\partial^2_w Q_hu,\partial^2_w v)_h+s(Q_hu,v)\\
=& (f,v_0)+\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle\partial^2_{ij}u-{\cal Q}_h(\partial^2_{ij}u),
(\partial_iv_0-v_{gi})\cdot n_j\rangle_{\partial T}\\
&-\sum_{T\in{\cal
T}_h}\sum_{i,j=1}^d\langle\partial_j(\partial^2_{ij}u-{\cal
Q}_h\partial^2_{ij}u)\cdot n_i,v_0-v_b\rangle_{\partial
T}+s(Q_hu,v).
\end{split}
\end{equation}
Subtracting (\ref{2.7}) from (\ref{4.3}) yields the following error
equation
\begin{equation*}
\begin{split}
&(\partial^2_w e_h,\partial^2_w v)_h+s(e_h,v)=\sum_{T\in{\cal
T}_h}\sum_{i,j=1}^d\langle\partial^2_{ij}u-
{\cal Q}_h(\partial^2_{ij}u),(\partial_iv_0-v_{gi})\cdot n_j\rangle_{\partial T}\\
&-\sum_{T\in{\cal
T}_h}\sum_{i,j=1}^d\langle\partial_j(\partial^2_{ij}u-{\cal
Q}_h\partial^2_{ij}u)\cdot n_i,v_0-v_b\rangle_{\partial
T}+s(Q_hu,v),
\end{split}
\end{equation*}
which completes the proof.
\end{proof}
\section{Error Estimates in $H^2$}\label{Section:H2error}
The goal of this section is to derive an error estimate for the
solution of the Weak Galerkin Algorithm (\ref{2.7}). From the error
equation (\ref{4.1}), it suffices to handle the term $\phi_u(v)$
defined by (\ref{phiu}).
Let $w$ be any smooth function in $\Omega$. We rewrite $\phi_w(v)$
as follows:
\begin{equation}\label{phiu-breaks}
\begin{split}
\phi_w(v)=&\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle\partial^2_{ij}w-{\cal Q}_h(\partial^2_{ij}w),(\partial_iv_0-v_{gi})\cdot n_j\rangle_{\partial T}\\
&-\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle\partial_j(\partial^2_{ij}w-{\cal Q}_h\partial^2_{ij}w)\cdot n_i, v_0-v_b\rangle_{\partial T}\\
&+ \sum_{T\in{\cal T}_h} h_T^{-1}\langle Q_b(\nabla Q_0w)-Q_b(\nabla w), Q_b(\nabla v_0)-\textbf{v}_g \rangle_{\partial T}\\
&+\sum_{T\in{\cal T}_h} h_T^{-3}\langle Q_bQ_0w-Q_bw,
Q_bv_0-v_b\rangle_{\partial T}\\
=&I_1(w,v)+I_2(w,v)+I_3(w,v)+I_4(w,v),
\end{split}
\end{equation}
where $I_j(w,v)$ are defined accordingly. Each $I_j(w,v)$ is to be
handled as follows.
\begin{lemma}\label{lemma:IoneItwo}
Assume that $w\in H^{r+1}(\Omega), v\in V_h^0$ with $r\in [2,k]$.
Let $I_1(w,v)$ and $I_2(w,v)$ be given in (\ref{phiu-breaks}). Then,
we have
\begin{eqnarray}
|I_1(w,v)|&\le& Ch^{r-1}\|w\|_{r+1}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|},\label{mmm1}\\
|I_2(w, v)|&\le& Ch^{r-1}(\|w\|_{r+1}+\delta_{k,2}\|w\|_4) {|\hspace{-.02in}|\hspace{-.02in}|}
v{|\hspace{-.02in}|\hspace{-.02in}|}.\label{mmm2}
\end{eqnarray}
\end{lemma}
\begin{proof}
For the term $I_1(w,v)$, we use the Cauchy-Schwarz inequality, the
estimate (\ref{3.5}) with $m=r$, and Lemma \ref{Lemma6.5} to obtain
\begin{equation}\label{4.5}
\begin{split}
|I_1(w,v)|=&\Big|\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle\partial^2_{ij}w-{\cal Q}_h(\partial^2_{ij}w),(\partial_i v_0-v_{gi})\cdot n_j\rangle_{\partial T}\Big|\\
\leq &\Big(\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d h_T\|\partial^2_{ij}w-{\cal Q}_h(\partial^2_{ij}w) \|_{\partial T}^2\Big)^{\frac{1}{2}}\cdot\\
&\Big(\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d h_T^{-1}\|(\partial_i v_0-v_{gi})\cdot n_j\|_{\partial T}^2\Big)^{\frac{1}{2}}\\
\leq &Ch^{r-1}\|w\|_{r+1}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|},
\end{split}
\end{equation}
which verifies (\ref{mmm1}).
As to the term $I_2(w,v)$, for the case of the quadratic element ($k=2$),
we use Lemma \ref{Lemma6.4} to obtain
\begin{equation}\label{before4.6}
\begin{split}
\left|\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^d\langle\partial_j(\partial^2_{ij} w-{\cal
Q}_h\partial^2_{ij} w)\cdot n_i, v_0-v_b\rangle_{\partial
T}\right|\leq Ch \|w\|_4{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{split}
\end{equation}
For $k\ge 3$, we use the Cauchy-Schwarz inequality, the estimate
(\ref{3.6}) with $m=r$, and Lemma \ref{Lemma6.2} to obtain
\begin{equation}\label{4.6}
\begin{split}
&\Big|\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle\partial_j(\partial^2_{ij}w-{\cal Q}_h\partial^2_{ij}w)\cdot n_i,v_0-v_b\rangle_{\partial T}\Big|\\
\leq &\Big(\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d
h_T^3\|\partial_j(\partial^2_{ij}w- {\cal Q}_h\partial^2_{ij}w)
\|_{\partial T}^2\Big)^{\frac{1}{2}}\cdot
\Big(\sum_{T\in{\cal T}_h}h_T^{-3}\|v_0-v_b\|_{\partial T}^2\Big)^{\frac{1}{2}}\\
\leq &Ch^{r-1} \ \|w\|_{r+1}\ {|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{split}
\end{equation}
Combining (\ref{before4.6}) with (\ref{4.6}) yields
\begin{equation}\label{before4.7}
|I_2(w,v)|\leq Ch^{r-1}(\|w\|_{r+1}+\delta_{k,2}\|w\|_4)\ {|\hspace{-.02in}|\hspace{-.02in}|}
v{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{equation}
This completes the proof of the lemma.
\end{proof}
\smallskip
\begin{lemma}\label{lemma:IthreeIfour}
Assume that $w\in H^{r+1}(\Omega), v\in V_h^0$ with $r\in [2,k]$.
Let $I_3(w,v)$ and $I_4(w,v)$ be given in (\ref{phiu-breaks}). Then,
we have
\begin{eqnarray}
|I_3(w,v)|+|I_4(w, v)|\le Ch^{r-1}\|w\|_{r+1} |v|_h,\label{mmm3}
\end{eqnarray}
where
\begin{equation}\label{3bar-half}
|v|_h = s(v,v)^{\frac12}.
\end{equation}
\end{lemma}
\begin{proof}
To estimate the term $I_3(w,v)$, we use the Cauchy-Schwarz inequality
and the estimate (\ref{3.7}) with $m=r$ to obtain
\begin{equation}\label{4.7}
\begin{split}
|I_3(w,v)|=&\left|\sum_{T\in{\cal T}_h}h_T^{-1}\langle \nabla
Q_0w-\nabla
w,Q_b(\nabla v_0)-\textbf{v}_g \rangle_{\partial T}\right|\\
\leq& \Big(\sum_{T\in{\cal T}_h}h_T^{-1}\|\nabla
Q_0w-\nabla w\|^2_{\partial T}\Big)^{\frac{1}{2}}\cdot
\Big(\sum_{T\in{\cal T}_h}h_T^{-1}\|Q_b(\nabla v_0)-\textbf{v}_g\|^2_{\partial T}\Big)^{\frac{1}{2}}\\
\leq& Ch^{r-1}\|w\|_{r+1} \ |v|_h.
\end{split}
\end{equation}
As to the term $I_4(w,v)$, we use the Cauchy-Schwarz inequality and the
estimate (\ref{3.8}) with $m=r$ to obtain
\begin{equation}\label{4.8}
\begin{split}
|I_4(w,v)|=&\left|\sum_{T\in{\cal T}_h}h_T^{-3}\langle Q_0w-w, Q_b v_0-v_b\rangle_{\partial T}\right|\\
\leq& \Big(\sum_{T\in{\cal T}_h}h_T^{-3}\| Q_0w-w\|^2_{\partial T}\Big)^{\frac{1}{2}}\Big(\sum_{T\in{\cal T}_h}h_T^{-3}\|Q_b v_0-v_b\|^2_{\partial T}\Big)^{\frac{1}{2}}\\
\leq& Ch^{r-1}\|w\|_{r+1}\ |v|_h.
\end{split}
\end{equation}
This completes the proof.
\end{proof}
The following result is an estimate for the error function $e_h$ in
the triple-bar norm, which is essentially an $H^2$-equivalent norm on
$V_h^0$.
\begin{theorem}\label{Theorem6.6} Let $u_h\in V_h$ be the weak Galerkin finite
element solution arising from (\ref{2.7}) with finite elements of
order $k\geq 2$. Assume that the exact solution $u$ of (\ref{0.1})
is sufficiently regular such that $u\in H^{\max\{k+1,4\}}(\Omega)$.
Then, there exists a constant $C$ such that
\begin{equation}\label{4}
{|\hspace{-.02in}|\hspace{-.02in}|} u_h-Q_hu{|\hspace{-.02in}|\hspace{-.02in}|}\leq
Ch^{k-1}\Big(\|u\|_{k+1}+\delta_{k,2}\|u\|_4\Big).
\end{equation}
In other words, we have an optimal order of convergence in the $H^2$
norm.
\end{theorem}
\begin{proof} By letting $v=e_h$ in the error equation (\ref{4.1}),
we obtain the following identity
\begin{equation}\label{4.4}
\begin{split}
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2=&\phi_u(e_h)\\
=&I_1(u,e_h)+I_2(u,e_h)+I_3(u,e_h)+I_4(u,e_h).
\end{split}
\end{equation}
Using the estimates (\ref{mmm1}), (\ref{mmm2}), and (\ref{mmm3})
with $w=u$ and $v=e_h$ we arrive at
$$
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2 \leq
Ch^{k-1}\Big(\|u\|_{k+1}+\delta_{k,2}\|u\|_4\Big){|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|},
$$
which implies the desired error estimate (\ref{4}).
\end{proof}
\section{Error Estimates in $L^2$}\label{Section:L2error}
This section shall establish an estimate for the first component of
the error function $e_h$ in the standard $L^2$ norm. To this end, we
consider the following dual problem:
\begin{equation}\label{5.1}
\begin{split}
\Delta^2\psi&=e_0 \qquad \text{in}\ \Omega,\\
\psi&=0 \qquad \text{on}\ \partial\Omega,\\
\frac{\partial \psi}{\partial\textbf{n}}&=0 \qquad \text{on}\
\partial\Omega.
\end{split}
\end{equation}
Assume the above dual problem has the following regularity estimate
\begin{equation}\label{5.2}
\|\psi\|_4\leq C\|e_0\|.
\end{equation}
\begin{theorem}\label{Theorem7.3} Let $u_h\in V_h$ be the solution of
the Weak Galerkin Algorithm (\ref{2.7}) with finite elements of
order $k\geq 2$. Let $t_0=\min\{k,3\}$. Assume that the exact
solution of (\ref{0.1}) is sufficiently regular so that $u\in
H^{4}(\Omega)$ for $k=2$ and $u\in H^{k+1}(\Omega)$ otherwise, and
the dual problem (\ref{5.1}) has the $H^4$ regularity. Then, there
exists a constant $C$ such that
\begin{equation}\label{5.3}
\|Q_0u-u_0\|\leq
Ch^{k+t_0-2}\Big(\|u\|_{k+1}+\delta_{k,2}\|u\|_{4}\Big).
\end{equation}
In other words, we have a sub-optimal order of convergence for $k=2$
and optimal order of convergence for $k\geq 3$.
\end{theorem}
\begin{proof} By testing (\ref{5.1}) against the error function
$e_0$ on each element and using integration by parts, we obtain
\begin{equation*}
\begin{split}
\|e_0\|^2=&(\Delta^2 \psi,e_0)\\
=&\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^d\Big\{(\partial^2_{ij}\psi,\partial^2_{ij}e_0)_T-\langle\partial^2_{ij}\psi,
\partial_ie_0\cdot n_j\rangle_{\partial T}+\langle \partial_j(\partial^2_{ij}\psi)\cdot n_i,e_0\rangle_{\partial T}\Big\}\\
=&\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^d\Big\{(\partial^2_{ij}\psi,\partial^2_{ij}e_0)_T-\langle\partial^2_{ij}\psi,
(\partial_ie_0-e_{gi})\cdot n_j\rangle_{\partial T}\\
&+\langle\partial_j(\partial^2_{ij}\psi)\cdot n_i,
e_0-e_b\rangle_{\partial T}\Big\},
\end{split}
\end{equation*}
where the added terms associated with $e_b$ and $e_{gi}$ vanish due
to the cancellation on interior edges and the fact that $e_b$ and
$e_{gi}$ have zero value on $\partial\Omega$. Using (\ref{4.2}) with
$\psi$ and $e_h$ in the place of $u$ and $v_0$ respectively, we arrive
at
\begin{equation}\label{09-100}
\begin{split}
\|e_0\|^2 =&(\partial^2_w Q_h\psi, \partial^2_w e_h)_h
+\sum_{T\in {\cal T}_h}\sum_{i,j=1}^d\Big\{\langle\partial_j(\partial^2_{ij}\psi-{\cal Q}_h(\partial^2_{ij}\psi))\cdot n_i, e_0-e_b\rangle_{\partial T}\\
&-\langle\partial^2_{ij}\psi-{\cal
Q}_h\partial^2_{ij}\psi,(\partial_ie_0-e_{gi})\cdot
n_j\rangle_{\partial T}\Big\}\\
=&(\partial^2_w Q_h\psi, \partial^2_w
e_h)_h-\phi_\psi(e_h)+s(Q_h\psi,e_h).
\end{split}
\end{equation}
Next, it follows from the error equation (\ref{4.1}) that
\begin{equation}\label{09-101}
(\partial^2_w Q_h\psi,\partial^2_w e_h)_h=\phi_u(Q_h\psi)-s(e_h,Q_h\psi).
\end{equation}
Substituting (\ref{09-101}) into (\ref{09-100}) yields
\begin{equation}\label{5.4}
\|e_0\|^2=\phi_u(Q_h\psi) - \phi_\psi(e_h).
\end{equation}
The term $\phi_\psi(e_h)$ can be handled by using Lemma
\ref{lemma:IoneItwo} and Lemma \ref{lemma:IthreeIfour} with
$r=t_0=\min\{k,3\}$ as follows:
\begin{equation}\label{phi-psi-eh}
\begin{split}
|\phi_\psi(e_h)|&\leq C h^{t_0-1} (\|\psi\|_{t_0+1}+ h\|\psi\|_4){|\hspace{-.02in}|\hspace{-.02in}|}
e_h{|\hspace{-.02in}|\hspace{-.02in}|}\\
&\leq C h^{t_0-1} \|\psi\|_4 {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}\\
&\leq C h^{t_0-1} \|e_0\| {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|},
\end{split}
\end{equation}
where we have used the regularity assumption (\ref{5.2}) in the last
inequality.
It remains to deal with the term $\phi_u(Q_h\psi)$ in (\ref{5.4}).
Note that from (\ref{phiu-breaks}) we have
\begin{equation}\label{raining.100}
\phi_u(Q_h\psi)= \sum_{j=1}^4 I_j(u,Q_h\psi).
\end{equation}
The terms $I_3(u,Q_h\psi)$ and $I_4(u,Q_h\psi)$ can be handled by using Lemma
\ref{lemma:IthreeIfour} with $r=k$ as follows:
\begin{equation}\label{L2-IthreeIfour}
|I_3(u,Q_h\psi)|+|I_4(u,Q_h\psi)|\leq Ch^{k-1}\|u\|_{k+1} |Q_h\psi|_h.
\end{equation}
From the definition (\ref{3bar-half}) we have
\begin{equation*}
\begin{split}
|Q_h\psi|_h^2 &= \sum_{T\in{\mathcal{T}}_h}
\left(h_T^{-3}\|Q_b(Q_0\psi)-Q_b\psi\|_{\partial T}^2 +
h_T^{-1}\|Q_b(\nabla Q_0\psi)-Q_b\nabla\psi\|_{\partial T}^2 \right)\\
&\leq \sum_{T\in{\mathcal{T}}_h} \left(h_T^{-3}\|Q_0\psi-\psi\|_{\partial T}^2 +
h_T^{-1}\|\nabla (Q_0\psi)-\nabla\psi\|_{\partial T}^2 \right).
\end{split}
\end{equation*}
Thus, it follows from the trace inequality (\ref{trace-inequality})
and the error estimate for the projection operator $Q_0$ that
\begin{equation}
|Q_h\psi|_h\leq C h^{t_0-1}\|\psi\|_{t_0+1}\leq C
h^{t_0-1}\|\psi\|_{4} \leq C h^{t_0-1}\|e_0\|.
\end{equation}
Substituting the above estimate into (\ref{L2-IthreeIfour}) yields
\begin{equation}\label{L2-IthreeIfour-new}
|I_3(u,Q_h\psi)|+|I_4(u,Q_h\psi)|\leq Ch^{k+t_0-2}\|u\|_{k+1} \|e_0\|.
\end{equation}
The estimates for $I_1(u,Q_h\psi)$ and $I_2(u,Q_h\psi)$ exploit the
special property of the ``test'' function $Q_h\psi$. To this end, using
the orthogonality property of $Q_b$ and the fact that $\psi=Q_b\psi=0$
on $\partial\Omega$, we obtain
\begin{equation*}
\begin{split}
&\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle
\partial_j(\partial^2_{ij}u-{\cal Q}_h\partial^2_{ij}u)\cdot n_i,
\psi-Q_b\psi\rangle_{\partial T}\\
=& \sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle
\partial_j\partial^2_{ij}u\cdot n_i,\psi-Q_b\psi\rangle_{\partial T}
=0.
\end{split}
\end{equation*}
Thus,
\begin{equation*}
\begin{split}
I_2(u,Q_h\psi)=&-\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle
\partial_j(\partial^2_{ij}u-{\cal Q}_h\partial^2_{ij}u)\cdot n_i,
Q_0\psi-Q_b\psi\rangle_{\partial T}\\
=&-\sum_{T\in{\cal T}_h}\sum_{i,j=1}^d\langle
\partial_j(\partial^2_{ij}u-{\cal Q}_h\partial^2_{ij}u)\cdot n_i,
Q_0\psi-\psi\rangle_{\partial T}.
\end{split}
\end{equation*}
Using the Cauchy-Schwarz inequality and the standard error estimate
for $L^2$ projections, we arrive at
\begin{equation}\label{raining.200}
\begin{split}
|I_2(u,Q_h\psi)|\leq& \sum_{T\in{\cal T}_h}\sum_{i,j=1}^d
\|\partial_j(\partial^2_{ij}u-{\cal
Q}_h\partial^2_{ij}u)\|_{\partial T} \|Q_0\psi-\psi\|_{\partial T}\\
\leq & C h^{k+t_0-2} (\|u\|_{k+1} + \delta_{k,2}\|u\|_4)\
\|\psi\|_{t_0+1} \\
\leq & C h^{k+t_0-2}(
\|u\|_{k+1} + \delta_{k,2}\|u\|_4)\ \|\psi\|_{4}\\
\leq & C h^{k+t_0-2} (\|u\|_{k+1}+\delta_{k,2}\|u\|_4)\|e_0\|.
\end{split}
\end{equation}
A similar argument can be employed to deal with the term $I_1(u,
Q_h\psi)$, yielding
\begin{equation}\label{raining.300}
\begin{split}
|I_1(u,Q_h\psi)|\leq C h^{k+t_0-2} \|u\|_{k+1} \|e_0\|.
\end{split}
\end{equation}
Substituting (\ref{L2-IthreeIfour-new}), (\ref{raining.200}), and
(\ref{raining.300}) into (\ref{raining.100}) we arrive at
\begin{equation}\label{raining.400}
|\phi_u(Q_h\psi)|\leq C h^{k+t_0-2} (\|u\|_{k+1}+\delta_{k,2}\|u\|_4)
\|e_0\|.
\end{equation}
Finally, by inserting (\ref{phi-psi-eh}) and (\ref{raining.400})
into (\ref{5.4}) we obtain
$$
\|e_0\|^2\leq C (h^{t_0-1} {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|} +
h^{k+t_0-2}(\|u\|_{k+1}+\delta_{k,2}\|u\|_4))\|e_0\|,
$$
which, together with the estimate (\ref{4}) in Theorem
\ref{Theorem6.6}, gives rise to the desired $L^2$ error estimate
(\ref{5.3}). This completes the proof of the theorem.
\end{proof}
\medskip
The $H^2$ error estimate (\ref{4}) and the $L^2$ error estimate
(\ref{5.3}) can be used to derive some error estimates for the WG
solution $u_b$ and ${\mathbf{u}}_g$. More precisely, observe that $e_b$ and
${\mathbf{e}}_g$ can be represented by $e_0$ and $\partial_{ij,w}^2e_h$ by
choosing special test functions $v$ in the error equation
(\ref{4.1}). For example, $e_b$ can be represented by $e_0$ and
$\partial_{ij,w}^2e_h$ by selecting $v=\{0, v_b, 0\}$. The
representation is expressed through an equation defined locally on
each edge $e\in{\mathcal{E}}_h^0$. The rest of the analysis is
straightforward; details are omitted for brevity.
\section{Numerical Experiments}\label{Section:NE}
In this section, we present some numerical results for the WG finite
element method analyzed in previous sections. The goal is to
demonstrate the efficiency and the convergence theory established
for the method. For simplicity, we implement the lowest order scheme
for the Weak Galerkin Algorithm (\ref{2.7}). In other words, the
implementation makes use of the following finite element space
$$
\widetilde{V}_h=\{v=\{v_0,v_b,\textbf{v}_g\}:\ v_0\in P_2(T),\ v_b\in
P_0(e),\ \textbf{v}_g\in [P_0(e)]^2,\ T\in {\cal T}_h,\ e\in
{\mathcal{E}}_h \}.
$$
For any given $v=\{v_0,v_b,\textbf{v}_g\}\in \widetilde{V}_h$, the
discrete weak partial derivative $\partial^2_{ij,w,r,T} v$ is
computed as a constant locally on each element $T$ by solving the
following equation
\begin{equation*}
\begin{split}
(\partial^2_{ij,w,r,T}v,\varphi)_T=(v_0,\partial^2
_{ji}\varphi)_T-\langle v_b,\partial_j\varphi\cdot n_i\rangle_{\partial T}
+\langle v_{gi}\cdot n_j,\varphi\rangle_{\partial T},
\end{split}
\end{equation*}
for all $\varphi \in P_0(T)$. Since $\varphi \in P_0(T)$, the above
equation can be simplified as
\begin{equation}
\begin{split}
(\partial^2_{ij,w,r,T}v,\varphi)_T=\langle v_{gi}\cdot n_j,\varphi\rangle_{\partial T},
\qquad \forall \varphi\in P_0(T),\;
i,j=1,2.
\end{split}
\end{equation}
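For illustration, the simplified relation above says that the constant weak second derivative is just a boundary integral of the edge-based gradient data, $\partial^2_{ij,w,r,T}v = |T|^{-1}\langle v_{gi} n_j, 1\rangle_{\partial T}$. The following is a minimal sketch of this computation (not the authors' code; the helper name is hypothetical):

```python
import math

def weak_second_derivative(vertices, vg):
    """Constant weak second derivative on a polygon T for the lowest-order
    element: d2[i][j] = (1/|T|) * sum over edges of vg_i * n_j * |e|.
    vertices: CCW list of (x, y); vg: per-edge constant gradient values
    [(vg1, vg2), ...] matching edges (vertices[k], vertices[k+1])."""
    n = len(vertices)
    # polygon area via the shoelace formula
    area = 0.5 * abs(sum(vertices[k][0] * vertices[(k + 1) % n][1]
                         - vertices[(k + 1) % n][0] * vertices[k][1]
                         for k in range(n)))
    d2 = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(n):
        (x0, y0), (x1, y1) = vertices[k], vertices[(k + 1) % n]
        length = math.hypot(x1 - x0, y1 - y0)
        # outward unit normal for a CCW-oriented polygon
        nrm = ((y1 - y0) / length, -(x1 - x0) / length)
        for i in range(2):
            for j in range(2):
                d2[i][j] += vg[k][i] * nrm[j] * length
    return [[d2[i][j] / area for j in range(2)] for i in range(2)]

# Sanity check: for u = x^2, grad u = (2x, 0); taking edge-midpoint values
# of grad u should reproduce the exact Hessian [[2, 0], [0, 0]].
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
mids = [((tri[k][0] + tri[(k + 1) % 3][0]) / 2,
         (tri[k][1] + tri[(k + 1) % 3][1]) / 2) for k in range(3)]
vg = [(2 * mx, 0.0) for (mx, my) in mids]
d2 = weak_second_derivative(tri, vg)
```

The sanity check reflects the exactness of the scheme for quadratic polynomials reported in Table \ref{NE:TRI:Exact1}.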
The error for the solution of the Weak Galerkin Algorithm
(\ref{2.7}) is measured in four norms or semi-norms defined as
follows:
\begin{equation}\label{z}
\begin{split}
{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}^2=&\sum_{T\in {\cal
T}_h}\Bigg(\sum_{i,j=1}^d\int_T(\partial^2_{ij,w}v_h)^2dx+
h_T^{-1}\int_{\partial T} |Q_b(\nabla
v_0)-\textbf{v}_g|^2ds \\
&+ h_T^{-3}\int_{\partial T}(Q_bv_0-v_b)^2ds\Bigg),\qquad(\text{ A
discrete $H^2$-norm}),
\end{split}
\end{equation}
\begin{equation}\label{z1}
\|v\|^2=\sum_{T\in {\cal T}_h} \int_T v_0^2dx,\qquad(\text{
Element-based $L^2$-norm}),
\end{equation}
\begin{equation}\label{ubinfty}
\|v_b\|_\infty=\max_{e\in{\mathcal{E}}_h} \|v_b\|_{L^\infty(e)}, \qquad(\text{
Edge-based $L^\infty$-norm}),
\end{equation}
\begin{equation}\label{uginfty}
\|{\mathbf{v}}_g\|_\infty=\max_{e\in{\mathcal{E}}_h} \|{\mathbf{v}}_g\|_{L^\infty(e)}, \qquad(\text{
Edge-based $L^\infty$-norm}).
\end{equation}
\smallskip
The numerical experiment is conducted for the biharmonic equation
(\ref{0.1}) on the unit square domain $\Omega=(0,1)^2$. The function
$f=f(x,y)$ and the two boundary conditions are computed to match the
exact solution in each test case. The WG finite element scheme
(\ref{2.7}) was implemented on two types of partitions: (1) uniform
triangular partitions, and (2) uniform rectangular partitions. The
uniform rectangular partition was obtained by partitioning the
domain into $n\times n$ sub-rectangles as tensor products of 1-d
uniform partitions. The triangular meshes are constructed from the
rectangular partition by dividing each square element into two
triangles by the diagonal line with a negative slope. The mesh size
is denoted by $h=1/n$.
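The ``order'' columns reported in the tables below are computed from the errors on two successive meshes via $\mathrm{order}=\log(e_{\mathrm{coarse}}/e_{\mathrm{fine}})/\log(h_{\mathrm{coarse}}/h_{\mathrm{fine}})$. A minimal post-processing sketch (the helper name is hypothetical):

```python
import math

def convergence_orders(hs, errors):
    """Estimated convergence orders from errors on successive meshes.
    hs: decreasing mesh sizes; errors: the corresponding error norms."""
    return [math.log(errors[k - 1] / errors[k]) / math.log(hs[k - 1] / hs[k])
            for k in range(1, len(hs))]

# Example with the first four H2-norm errors from Table \ref{NE:TRI:Case1-1}:
hs = [1.0, 0.5, 0.25, 0.125]
errs = [0.52598, 0.31309, 0.18972, 0.100557]
orders = convergence_orders(hs, errs)   # approximately [0.75, 0.72, 0.92]
```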
Table \ref{NE:TRI:Exact1} demonstrates the performance of the code
when the exact solution is given by $u=x^2+y^2+xy+x+y+1$. In theory,
the WG finite element method is exact for any quadratic polynomial.
The computational results are consistent with the theory, which
indicates that the implementation is correct.
\begin{table}[H]
\begin{center}
\caption{Numerical error for the biharmonic equation with
exact solution $u=x^2+y^2+xy+x+y+1$ on triangular partitions.}\label{NE:TRI:Exact1}
\begin{tabular}{|c|c|c|c|c|}
\hline
$h$ & $\|u_0 -Q_0u\| $ & ${|\hspace{-.02in}|\hspace{-.02in}|} u_h -Q_hu{|\hspace{-.02in}|\hspace{-.02in}|}$ & $\|u_b-Q_b u\|_{\infty}$ & $\|{\mathbf{u}}_g-Q_b(\nabla u)\|_{\infty}$ \\
\hline
1 & 1.73e-014 & 2.03e-014 & 1.60e-014 & 4.44e-016 \\
\hline
5.0000e-01 & 4.35e-014 & 1.88e-013 & 5.82e-014 & 6.66e-015 \\
\hline
2.5000e-01 & 1.64e-013 & 1.60e-012 & 2.86e-013 & 8.44e-014 \\
\hline
1.2500e-01 & 7.68e-013 & 7.68e-013 & 1.48e-012 & 1.19e-012 \\
\hline
6.2500e-02 & 3.65e-012 & 9.92e-011 & 7.71e-012 & 1.34e-011 \\
\hline
3.1250e-02 & 1.43e-011 & 5.20e-010 & 5.19e-010 & 3.13e-011 \\
\hline
1.5625e-02 & 4.85e-011 & 3.40e-009 & 1.15e-010 & 5.98e-010 \\
\hline
\end{tabular}
\end{center}
\end{table}
Tables \ref{NE:TRI:Case1-1} and \ref{NE:TRI:Case1-2} show the
numerical results when the exact solution is given by
$u=x^2(1-x)^2y^2(1-y)^2$. This case has a homogeneous boundary
condition for both Dirichlet and Neumann. It shows that the
convergence rates for the solution of the Weak Galerkin Algorithm in
the $H^2$ and $L^2$ norms are of order $O(h)$ and $O(h^2)$,
respectively. The numerical results are consistent with the theory
for the $L^2$ and $H^2$ norms of the error. For the approximation of
$u$ on the edge set ${\mathcal{E}}_h$, it appears that the $L^\infty$ error is
of order $\mathcal{O}(h^2)$. But the order of convergence for the
approximation of $\nabla u$ on the edge set ${\mathcal{E}}_h$ is hard to
extract from the data. It is interesting to see that the absolute
error for both $u_b$ and ${\mathbf{u}}_g$ is quite small.
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence order for
exact solution $u=x^2(1-x)^2y^2(1-y)^2$ on triangular partitions.}\label{NE:TRI:Case1-1}
\begin{tabular}{|c|c|c|c|c|}
\hline
$h$ & $\|u_0 -Q_0u\| $ & order & ${|\hspace{-.02in}|\hspace{-.02in}|} u_h -Q_hu{|\hspace{-.02in}|\hspace{-.02in}|} $ & order \\
\hline
1 & 0.41325 & & 0.52598 & \\
\hline
5.0000e-01 &0.07371 & 2.49& 0.31309 & 0.75 \\
\hline
2.5000e-01 & 0.019859 & 1.89 & 0.18972 & 0.72 \\
\hline
1.2500e-01 & 0.005176& 1.94 & 0.100557 & 0.92\\
\hline
6.2500e-02 & 0.0013833 & 1.90& 0.05240 & 0.94\\
\hline
3.1250e-02 & 3.7499e-004 & 1.88& 0.02729 & 0.94 \\
\hline
1.5625e-02 & 9.977e-005 & 1.91 & 0.014058 & 0.96 \\
\hline
7.8125e-03& 2.583e-05& 1.95 &0.007145 & 0.98\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence order for
exact solution $u=x^2(1-x)^2y^2(1-y)^2$ on triangular partitions.}\label{NE:TRI:Case1-2}
\begin{tabular}{|c|c|c|c|c|}
\hline
$h$ & $\|u_b-Q_b u\|_\infty$ & order & $\|{\mathbf{u}}_g-Q_b(\nabla u)\|_{\infty}$ & order \\
\hline
1 & 0.41494 & & 8.6485e-018 & \\
\hline
5.0000e-01 & 0.08806 & 2.24& 0.00942& \\
\hline
2.5000e-01 & 0.037013 & 1.25& 0.00491 & 0.94 \\
\hline
1.2500e-01 & 0.01069 &1.79 & 0.00354 & 0.47\\
\hline
6.2500e-02 & 0.00293 & 1.87& 0.00222 & 0.67\\
\hline
3.1250e-02 & 7.935e-004 &1.88 & 0.00102& 1.12 \\
\hline
1.5625e-02 & 2.096e-004 & 1.92 & 3.577e-004& 1.51 \\
\hline
7.8125e-03& 5.401e-05 &1.96 & 1.053e-04 & 1.76\\
\hline
\end{tabular}
\end{center}
\end{table}
Tables \ref{NE:TRI:Case2-1} and \ref{NE:TRI:Case2-2} present some
numerical results when the exact solution is given by $u=\sin(x)\sin
(y)$. We would like to invite the readers to draw conclusions from
these data.
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence order for exact solution
$u=\sin(x)\sin (y)$ on triangular partitions.}\label{NE:TRI:Case2-1}
\begin{tabular}{|c|c|c|c|c|}
\hline
$h$ & $\|u_0 -Q_0u\| $ & order & ${|\hspace{-.02in}|\hspace{-.02in}|} u_h -Q_hu{|\hspace{-.02in}|\hspace{-.02in}|}$ & order \\
\hline
1 & 0.23000 & & 0.37336 & \\
\hline
5.0000e-01 & 0.03575 & 2.68& 0.27641& 0.43 \\
\hline
2.5000e-01 & 0.00684 & 2.38 & 0.21911&0.34 \\
\hline
1.2500e-01 & 0.00147 & 2.21 & 0.17661& 0.31\\
\hline
6.2500e-02 & 4.427e-004 & 1.74 & 0.12349 &0.52\\
\hline
3.1250e-02 & 1.549e-004 &1.52& 0.07290 & 0.76 \\
\hline
1.5625e-02 & 4.658e-005 & 1.73 & 0.03916 & 0.90 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence order for exact solution
$u=\sin(x)\sin (y)$ on triangular partitions.}\label{NE:TRI:Case2-2}
\begin{tabular}{|c|c|c|c|c|} \hline
$h$ & $\|u_b-Q_b u\|_\infty$ & order & $\|{\mathbf{u}}_g-Q_b(\nabla
u)\|_{\infty}$ & order \\
\hline
1 & 0.21688 & & 0.06306 & \\
\hline
5.0000e-01 & 0.05108 & 2.09 & 0.05601 &0.17 \\
\hline
2.5000e-01 & 0.01132 & 2.17& 0.05062 &0.15 \\
\hline
1.2500e-01 & 0.002524 & 2.17 & 0.03606 & 0.49\\
\hline
6.2500e-02 & 8.032e-004 &1.65 & 0.01772& 1.03\\
\hline
3.1250e-02 &3.226e-004 & 1.32 & 0.00590& 1.59 \\
\hline
1.5625e-02 & 1.038e-004& 1.64 & 0.00163 & 1.85 \\
\hline
\end{tabular}
\end{center}
\end{table}
Table \ref{NE:TRI:case4} demonstrates the performance of the WG
finite element method when the exact solution is a biquadratic
polynomial. It shows that the $L^2$ convergence is of order
$\mathcal{O}(h^2)$, and the $H^2$ convergence has a rate
approximately $\mathcal{O}(h)$.
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence rates for the biharmonic
equation with exact solution $u=x(1-x)y(1-y)$ on triangular
meshes.}\label{NE:TRI:case4}
\begin{tabular}{|c|c|c|c|c|} \hline
$h$ & $\|u_0-Q_0u\|$ & order &${|\hspace{-.02in}|\hspace{-.02in}|} u_h-Q_hu{|\hspace{-.02in}|\hspace{-.02in}|}$ & order \\
\hline
1 & 2.05586 & & 4.05772 & \\
\hline
5.0000e-01& 0.32234 &2.67 & 1.59961 &1.34 \\
\hline
2.5000e-01& 0.06654 &2.28 & 0.70890 & 1.17 \\
\hline
1.2500e-01& 0.01588 &2.07 & 0.34325 &1.05\\
\hline
6.2500e-02& 0.00394 & 2.01&0.17416 & 0.98 \\
\hline
3.1250e-02 &9.691e-4 & 2.02 & 0.09025 &0.95 \\
\hline
1.5625e-02 & 2.361e-4 & 2.04 &0.046632& 0.95\\
\hline
\end{tabular}
\end{center}
\end{table}
The rest of the section will present some numerical results on
rectangular meshes. The lowest order WG element on rectangles
consists of quadratic polynomials on each element enriched with
constants on the edge of each element for both $u$ and $\nabla u$.
Therefore, the total number of unknowns on each element is $18$.
Note that all the unknowns corresponding to $u$ on each element can
be eliminated locally, so that the actual number of unknowns on each
element is $12$. Table \ref{NE:REC:Exact1} shows the numerical
solution when the exact solution is a quadratic polynomial. It can
be seen that the numerical solution is numerically the same as the
exact solution, as predicted by the theory.
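The local elimination of the interior unknowns mentioned above is a standard static condensation. The following sketch shows the underlying Schur-complement algebra on a single hypothetical element block system (the block sizes and matrix entries are made up, chosen only to keep the example self-contained):

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# Element system  [A  B; B^T D] [u_int; u_face] = [f_int; f_face]
A = [[4.0, 1.0], [1.0, 3.0]]   # interior-interior block (SPD)
B = [1.0, 2.0]                 # interior-face coupling (one face unknown)
D = 5.0                        # face-face block
f_int, f_face = [1.0, 2.0], 3.0

# Condense: S = D - B^T A^{-1} B,  g = f_face - B^T A^{-1} f_int
AinvB = solve2(A, B)
Ainvf = solve2(A, f_int)
S = D - (B[0] * AinvB[0] + B[1] * AinvB[1])
g = f_face - (B[0] * Ainvf[0] + B[1] * Ainvf[1])
u_face = g / S                                   # reduced (face-only) solve
u_int = solve2(A, [f_int[0] - B[0] * u_face,     # recover interior unknowns
                   f_int[1] - B[1] * u_face])

# The condensed solution satisfies the full 3x3 system: residual is zero.
res = [A[0][0]*u_int[0] + A[0][1]*u_int[1] + B[0]*u_face - f_int[0],
       A[1][0]*u_int[0] + A[1][1]*u_int[1] + B[1]*u_face - f_int[1],
       B[0]*u_int[0] + B[1]*u_int[1] + D*u_face - f_face]
```

In the WG setting the same algebra is applied element-by-element, so only the edge unknowns $u_b$ and ${\mathbf{u}}_g$ enter the global solve.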
\begin{table}[H]
\begin{center}
\caption{Numerical error for the biharmonic equation with
exact solution $u=x^2+y^2+xy+x+y+1$ on rectangular partitions.}\label{NE:REC:Exact1}
\begin{tabular}{|c|c|c|c|c|}
\hline
$h$ & $\|u_0 -Q_0u\| $ & ${|\hspace{-.02in}|\hspace{-.02in}|} u_h -Q_hu{|\hspace{-.02in}|\hspace{-.02in}|}$ & $\|u_b-Q_b u\|_{\infty}$ & $\|{\mathbf{u}}_g-Q_b(\nabla u)\|_{\infty}$ \\
\hline
1 & 2.09e-015 & 0 & 0 & 0 \\
\hline
2.5000e-01 & 4.66e-015 & 2.94e-014 & 6.66e-015 & 3.55e-015 \\
\hline
6.2500e-02 & 9.91e-014 & 2.30e-012 & 1.98e-013 & 2.74e-013 \\
\hline
1.5625e-02 & 5.83e-012 & 2.06e-010 & 1.34e-011 & 4.10e-011 \\
\hline
\end{tabular}
\end{center}
\end{table}
Tables \ref{NE:REC:Case1-1} and \ref{NE:REC:Case1-2} show the
numerical results when the exact solution is given by
$u=x^2(1-x)^2y^2(1-y)^2$. The results are consistent with the
theory.
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence order for
exact solution $u=x^2(1-x)^2y^2(1-y)^2$ on rectangular partitions.}\label{NE:REC:Case1-1}
\begin{tabular}{|c|c|c|c|c|}
\hline
$h$ & $\|u_0 -Q_0u\| $ & order & ${|\hspace{-.02in}|\hspace{-.02in}|} u_h -Q_hu {|\hspace{-.02in}|\hspace{-.02in}|}$ & order \\
\hline
1 & 1.15052 & & 0 & \\
\hline
5.0000e-01 & 0.14880& 2.95 & 0.35 & \\
\hline
2.5000e-01 & 0.03786 & 1.97 & 0.24649&0.52\\
\hline
1.2500e-01 & 0.009724& 1.96 & 0.13593& 0.86\\
\hline
6.2500e-02 & 0.002494 & 1.96 & 0.070216 & 0.95\\
\hline
3.1250e-02 & 6.509e-004 &1.94& 0.035987& 0.96 \\
\hline
1.5625e-02 & 1.709e-004& 1.93& 0.018427& 0.97 \\
\hline 7.8125e-03 & 4.415e-005 &1.95 &0.009357& 0.98 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence order for
exact solution $u=x^2(1-x)^2y^2(1-y)^2$ on rectangular partitions.}\label{NE:REC:Case1-2}
\begin{tabular}{|c|c|c|c|c|}
\hline
$h$ & $\|u_b-Q_b u\|_\infty$ & order & $\|{\mathbf{u}}_g-Q_b(\nabla u)\|_{\infty}$ & order \\
\hline
1 & 0 & & 0 & \\
\hline
5.0000e-01 & 0.15414 & & 0.01343 & \\
\hline
2.5000e-01 & 0.06724& 1.20& 0.008681 &0.6297\\
\hline
1.2500e-01 & 0.01961 & 1.78 & 0.0034078 &1.3490\\
\hline
6.2500e-02 & 0.00518 & 1.92 & 0.0014578 & 1.2251\\
\hline
3.1250e-02 & 0.001359& 1.93& 8.774e-004 & 0.7325 \\
\hline
1.5625e-02 & 3.566e-004&1.93 & 3.788e-004& 1.2116 \\
\hline
7.8125e-03 & 9.195e-005 &1.96 &1.231e-004 &1.6211 \\
\hline
\end{tabular}
\end{center}
\end{table}
Tables \ref{NE:REC:Case2-1} and \ref{NE:REC:Case2-2} present some
results for the exact solution $u=\sin(x)\sin (y)$. Readers are
encouraged to compare the results here with those in Tables
\ref{NE:TRI:Case2-1} and \ref{NE:TRI:Case2-2}.
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence order for exact solution
$u=\sin(x)\sin (y)$ on rectangular partitions.}\label{NE:REC:Case2-1}
\begin{tabular}{|c|c|c|c|c|}
\hline
$h$ & $\|u_0 -Q_0u\| $ & order & ${|\hspace{-.02in}|\hspace{-.02in}|} u_h -Q_hu{|\hspace{-.02in}|\hspace{-.02in}|}$ & order \\
\hline
1 & 0.60602 & & 0 & \\
\hline
5.0000e-01 & 0.08424 & 2.85 & 0.26684 & \\
\hline
2.5000e-01 & 0.01549 & 2.44 & 0.22733 & 0.23\\
\hline
1.2500e-01 & 0.00360 & 2.10 & 0.18593 &0.29\\
\hline
6.2500e-02 &0.00101 & 1.83 & 0.13440 & 0.47\\
\hline
3.1250e-02 & 2.98e-004 & 1.77& 0.081869 & 0.72 \\
\hline 1.5625e-02 & 7.95e-005 & 1.91 & 0.044701 & 0.87 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\caption{Numerical error and convergence order for exact solution
$u=\sin(x)\sin (y)$ on rectangular
partitions.}\label{NE:REC:Case2-2}
\begin{tabular}{|c|c|c|c|c|} \hline
$h$ & $\|u_b-Q_b u\|_\infty$ & order & $\|{\mathbf{u}}_g-Q_b(\nabla
u)\|_{\infty}$ & order \\
\hline
1 & 0 & & 0 & \\
\hline
5.0000e-01 & 0.10202& & 0.06063 & \\
\hline
2.5000e-01 & 0.02488 & 2.04 & 0.051219 &0.24\\
\hline
1.2500e-01 & 0.006110& 2.03 & 0.039518 &0.37\\
\hline
6.2500e-02 & 0.001981 & 1.62 & 0.021362 &0.89\\
\hline
3.1250e-02 & 5.810e-004 & 1.77& 0.007942 & 1.43 \\
\hline
1.5625e-02 & 1.501e-004 & 1.95& 0.002355& 1.75 \\
\hline
\end{tabular}
\end{center}
\end{table}
More numerical experiments should be conducted for the Weak Galerkin
Algorithm (\ref{2.7}), particularly for elements of order higher
than $k=2$. There is also a need to develop fast solution
techniques for the matrix problem arising from the WG finite element
scheme (\ref{2.7}). Numerical experiments on finite element
partitions with arbitrary polygonal elements should be conducted for
a further assessment of the WG method.
\section{Appendix}\label{Section:Appendix}
The goal of this Appendix is to establish some fundamental estimates
useful in the error estimate for general weak Galerkin finite
element methods.
For any $T\in{\mathcal{T}}_h$, let $\varphi$ be a regular function in $H^1(T)$.
The following trace inequality holds true \cite{wy3655}:
\begin{equation}\label{trace-inequality}
\|\varphi\|_e^2 \leq C
(h_T^{-1}\|\varphi\|_T^2+h_T\|\nabla\varphi\|_T^2).
\end{equation}
If $\varphi$ is a polynomial on the element $T\in {\mathcal{T}}_h$, then we
have from the inverse inequality (see also \cite{wy3655}) that
\begin{equation}\label{x}
\|\varphi\|_e^2 \leq C h_T^{-1}\|\varphi\|_T^2.
\end{equation}
Here $e$ is an edge/face on the boundary of $T$.
\begin{lemma}\label{Lemma:A.01}
For the discrete weak partial derivative $\partial^2_{ij,w}$, the
following identity holds true on each element $T\in {\mathcal{T}}_h$:
\begin{equation}\label{A.001}
(\partial^2_{ij,w}v,
\varphi)_T=(\partial^2_{ij}v_0,\varphi)_T+\langle
v_0-v_b,\partial_j\varphi\cdot n_i \rangle_{\partial T}-\langle
\partial_i v_0-v_{gi},\varphi n_j\rangle_{\partial T}
\end{equation}
for all $\varphi\in P_{k-2}(T)$. Consequently, we have
\begin{equation}\label{A.002}
(\partial^2_{ij,w}v,
\varphi)_T=(\partial^2_{ij}v_0,\varphi)_T+\langle
Q_bv_0-v_b,\partial_j\varphi\cdot n_i \rangle_{\partial T}-\langle
Q_b(\partial_i v_0)-v_{gi},\varphi n_j\rangle_{\partial T}.
\end{equation}
\end{lemma}
\begin{proof} From the definition (\ref{2.4}) of the weak partial derivative, we
have
\begin{equation*}
\begin{split}
(\partial^2_{ij,w}v, \varphi)_T
=&(v_0,\partial^2_{ji}\varphi)_T+\langle v_{gi}\cdot
n_j,\varphi\rangle_{\partial T}
-\langle v_b,\partial_j\varphi\cdot n_i\rangle_{\partial T}\\
=&(\partial^2_{ij}v_0,\varphi)_T-\langle \partial_i v_0,\varphi\cdot
n_j\rangle_{\partial T} +\langle v_0,\partial_j\varphi\cdot
n_i\rangle_{\partial
T}\\
&+\langle v_{gi}\cdot n_j,\varphi\rangle_{\partial T}
-\langle v_b,\partial_j\varphi\cdot n_i\rangle_{\partial T}\\
=&(\partial^2_{ij}v_0,\varphi)_T+\langle
v_0-v_b,\partial_j\varphi\cdot n_i \rangle_{\partial T}-\langle
\partial_i v_0-v_{gi},\varphi n_j\rangle_{\partial T}.
\end{split}
\end{equation*}
Here we have used the usual integration by parts in the second line.
The result then follows.
\end{proof}
\begin{lemma} Let $e_h\in V_h^0$ be any finite element function.
Then, there holds
\begin{equation}\label{ap1}
\Big(\sum_{T\in {\cal T}_h}|e_0|_{2,T}^2\Big)^{\frac{1}{2}}\leq C{|\hspace{-.02in}|\hspace{-.02in}|} e_h {|\hspace{-.02in}|\hspace{-.02in}|},
\end{equation}
where, by definition (\ref{3barnorm}),
\begin{equation}\label{a}
\begin{split}
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2=&\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^d(\partial^2_{ij,w}e_h,\partial^2_{ij,w}e_h)_T
+\sum_{T\in {\cal T}_h}h_T^{-3}\langle Q_be_0-e_b,Q_be_0-e_b
\rangle_{\partial T}\\
&+\sum_{T\in {\cal T}_h}h_T^{-1}\langle Q_b(\nabla
e_0)-\textbf{e}_g,Q_b(\nabla e_0)-\textbf{e}_g\rangle_{\partial T}.
\end{split}
\end{equation}
\end{lemma}
\begin{proof} Using (\ref{A.002}) with $v=e_h$ and
$\varphi=\partial^2_{ij}e_0$ we obtain
\begin{equation*}
\begin{split}
(\partial^2_{ij,w}e_h,\partial^2_{ij}e_0)_T =& (\partial^2_{ij}e_0,
\partial^2_{ij}e_0)_T-\langle Q_{b}(\partial_i e_0)-e_{gi},
\partial^2_{ij}e_0\cdot n_j\rangle_{\partial T}\\
&+\langle Q_be_0-e_b,\partial_j(\partial^2_{ij}e_0)\cdot
n_i\rangle_{\partial T}.
\end{split}
\end{equation*}
Thus,
\begin{equation}\label{x1}
\begin{split}
(\partial^2_{ij}e_0, \partial^2_{ij}e_0)_T
=&(\partial^2_{ij,w}e_h,\partial^2_{ij}e_0)_T+\langle
Q_{b}(\partial_i e_0)-e_{gi},\partial^2_{ij}e_0\cdot n_j\rangle_{\partial T}\\
&-\langle Q_be_0-e_b,\partial_j(\partial^2_{ij}e_0)\cdot
n_i\rangle_{\partial T}.
\end{split}
\end{equation}
It then follows from (\ref{x1}), the Cauchy-Schwarz inequality, the
inverse inequality, and (\ref{x}) that
\begin{equation*}
\begin{split}
(\partial^2_{ij}e_0, \partial^2_{ij}e_0)_T\leq &
\|\partial^2_{ij,w}e_h\|_T\|\partial^2_{ij}e_0\|_T+\|Q_{b}(\partial_i
e_0)-e_{gi}\|_{\partial T}\|\partial^2_{ij}e_0\|_{\partial T}\\
&+\|Q_be_0-e_b\|_{\partial T}\|\partial_j(\partial^2_{ij}e_0)\|_{\partial T}\\
\leq
&\|\partial^2_{ij,w}e_h\|_T\|\partial^2_{ij}e_0\|_T+Ch_T^{-\frac{1}{2}}\|Q_{b}
(\partial_i e_0)-e_{gi}\|_{\partial T}\|\partial^2_{ij}e_0\|_{T}\\
&+Ch_T^{-\frac{3}{2}}\|Q_be_0-e_b\|_{\partial T}
\|\partial^2_{ij}e_0\|_{T},
\end{split}
\end{equation*}
which implies
\begin{equation}\label{s}
\|\partial^2_{ij}e_0\|_T^2\leq
\|\partial^2_{ij,w}e_h\|_T^2+Ch_T^{-1}\|Q_{b} (\partial_i
e_0)-e_{gi}\|_{\partial T}^2 +Ch_T^{-3}\|Q_be_0-e_b\|_{\partial
T}^2.
\end{equation}
Summing over $T\in{\mathcal{T}}_h$ completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{Lemma6.2} For any $e_h\in V_h^0$ and $k\geq 3$, there exists a constant
$C$ such that
\begin{equation}\label{a1}
\Big(\sum_{T\in {\cal T}_h}h_T^{-3}\|e_0-e_b\|_{\partial
T}^2\Big)^{\frac{1}{2}}\leq C {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{equation}
\end{lemma}
\begin{proof} By the triangle inequality and the error estimate for the
projection $Q_b$, we have
\begin{equation*}
\begin{split}
h_T^{-3}\|e_0-e_b\|_{\partial T}^2
\leq & 2 h_T^{-3}\Big(\|e_0-Q_be_0\|^2_{\partial T}+\|Q_be_0-e_b\|^2_{\partial T}\Big)\\
\leq & 2h_T^{-3} \big(Ch_T^2|e_0|_{2,\partial
T}\big)^2+2h_T^{-3}\|Q_be_0-e_b\|^2_ {\partial T}\\
\leq & 2Ch_T|e_0|_{2,\partial T}^2+2h_T^{-3}\|Q_be_0-e_b\|^2_
{\partial T}\\
\leq & 2C |e_0|_{2,T}^2+2h_T^{-3}\|Q_be_0-e_b\|^2_
{\partial T}.\\
\end{split}
\end{equation*}
Combining the above with (\ref{ap1}) gives (\ref{a1}).
\end{proof}
\begin{lemma} (Poincar\'e Inequality) There exists a constant $C$ such that
\begin{equation}\label{5.45}
\sum_{T\in {\cal T}_h}\|e_0\|^2_{T}\leq C \Big(\sum_{T\in {\cal
T}_h}\|\nabla e_0\|^2_{T}+\sum_{T\in {\cal T}_h}
h_T^{-1}\|e_0-e_b\|^2_{\partial T}\Big),
\end{equation}
where $e_h\in V_h$ is any finite element function with $e_b=0$.
\end{lemma}
\begin{proof} Consider the Laplace equation:
\begin{equation*}
\begin{array}{cc}
-\Delta \phi =e_0 & \text{in} \ \Omega, \\
\phi=0 & \text{on} \ \partial\Omega.\\
\end{array}
\end{equation*}
Assume that the solution $\phi$ is regular so that
\begin{equation}\label{aaa}
\|\phi\|^2_{2}\leq C\|e_0\|^2.
\end{equation}
The above assumption is always satisfied since otherwise we may
extend the domain $\Omega$ to $\widetilde{\Omega}$ in which the
required regularity is satisfied, with $e_0$ being extended by zero
outside of $\Omega$.
By letting ${\mathbf{w}}=-\nabla \phi$, we have
\begin{equation*}
\begin{split}
\sum_{T\in {\cal T}_h}(e_0,e_0)_T=&\sum_{T\in {\cal
T}_h}(e_0,\nabla\cdot {\mathbf{w}})_T =\sum_{T\in {\cal T}_h}\langle e_0,
{\mathbf{w}}\cdot \textbf{n}\rangle_{\partial T}-\sum_{T\in {\cal T}_h}({\mathbf{w}},
\nabla e_0)_T\\
=&\sum_{T\in {\cal T}_h}\langle (e_0-e_b), {\mathbf{w}}\cdot
\textbf{n}\rangle_{\partial T}-\sum_{T\in {\cal T}_h}({\mathbf{w}},
\nabla e_0)_T\\
\leq &\sum_{T\in {\cal T}_h}\|{\mathbf{w}}\|_{T}\|\nabla
e_0\|_{T}+\sum_{T\in {\cal T}_h}\|{\mathbf{w}}\|_{\partial
T}\|e_0-e_b\|_{\partial T}.
\end{split}
\end{equation*}
The trace inequality (\ref{trace-inequality}) implies
$$
\|{\mathbf{w}}\|_{\partial T}^2\leq C(h_T^{-1}\|{\mathbf{w}}\|_{T}^2+h_T\|\nabla
{\mathbf{w}}\|_{T}^2)\leq Ch_T^{-1}\|{\mathbf{w}}\|_{1,T}^2.
$$
Thus, from Cauchy-Schwarz and the regularity (\ref{aaa}) we obtain
\begin{equation*}
\begin{split}
\sum_{T\in {\cal T}_h}(e_0,e_0)_T
\leq& \sum_{T\in {\cal T}_h}\|{\mathbf{w}}\|_{1,T}\|\nabla
e_0\|_{T}+\sum_{T\in {\cal T}_h}Ch_T^{-\frac12}\|{\mathbf{w}}\|_{1, T}\|e_0-e_b\|_{\partial T}\\
\leq & C\Big(\sum_{T\in {\cal T}_h}\|\nabla
e_0\|^2_{T}+\sum_{T\in {\cal T}_h}h_T^{-1 } \|e_0-e_b\|^2_{\partial
T}\Big)^{\frac12} \|\phi\|_2\\
\leq & C\Big(\sum_{T\in {\cal T}_h}\|\nabla e_0\|^2_{T}+\sum_{T\in
{\cal T}_h}h_T^{-1 } \|e_0-e_b\|^2_{\partial
T}\Big)^{\frac12} \|e_0\|,\\
\end{split}
\end{equation*}
which verifies the estimate (\ref{5.45}).
\end{proof}
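As a quick numerical sanity check (illustrative only, not part of the proof): when $e_b$ agrees with the trace of $e_0$, the jump term in (\ref{5.45}) vanishes and the estimate reduces to the continuous Poincar\'e inequality $\|u\|\leq C\|\nabla u\|$ for $u$ vanishing on the boundary. On $(0,1)$ the sharp constant is $1/\pi$, which the sketch below verifies for a few sample functions.

```python
import numpy as np

# Sanity check of the continuous Poincare inequality ||u|| <= C ||u'||
# on (0,1) for u vanishing at the endpoints; the sharp constant is 1/pi,
# attained by u = sin(pi x).  The sample functions are our own choices.
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
for u in (np.sin(np.pi * x), x * (1 - x), x**2 * (1 - x)):
    du = np.gradient(u, dx)
    # dx cancels in the ratio of the two discrete L2 norms
    ratio = np.sqrt(np.sum(u**2) / np.sum(du**2))
    assert ratio <= 1 / np.pi + 1e-3
```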
The following is another version of the Poincar\'e inequality for
functions in $V_h^0$.
\begin{lemma}\label{Lemma6.3} There exists a constant $C$ such that
\begin{equation}\label{5.44}
\Big(\sum_{T\in {\cal T}_h}\|\nabla e_0\|^2_{T}\Big)^{\frac{1}{2}}\leq C{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}
\end{equation}
for all $e_h\in V_h^0$.
\end{lemma}
\begin{proof} Since $e_h\in V_h^0$, we have $\textbf{e}_g=0$ on $\partial\Omega$.
Thus, an application of (\ref{5.45}) with $e_0$ and $e_b$ replaced by $\nabla
e_0$ and $\textbf{e}_g$, respectively, yields
\begin{equation}\label{33}
\begin{split}
\sum_{T\in {\cal T}_h}\|\nabla e_0\|^2_{T}&\leq C \Big( \sum_{T\in
{\cal T}_h} |e_0|^2_{2,T}+\sum_{T\in {\cal T}_h} h_T^{-1 }\|\nabla
e_0-\textbf{e}_g\|^2_{\partial T}\Big).
\end{split}
\end{equation}
For the second term on the right-hand side of (\ref{33}), we have
\begin{equation}\label{44}
\begin{split}
&\sum_{T\in {\cal T}_h} h_T^{-1 }\|\nabla e_0-\textbf{e}_g\|^2_{\partial T}\\
\leq& 2\sum_{T\in {\cal T}_h} h_T^{-1}\|\nabla e_0-Q_b(\nabla
e_0)\|^2_{\partial T}+2\sum_{T\in {\cal T}_h} h_T^{-1}\|Q_b(\nabla
e_0)-\textbf{e}_g\|^2_{\partial T}.
\end{split}
\end{equation}
Substituting (\ref{44}) into (\ref{33}) yields
\begin{equation*}
\sum_{T\in {\cal T}_h}\|\nabla e_0\|^2_{T} \leq C\sum_{T\in {\cal
T}_h}|e_0|^2_{2,T} + C{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2\leq C{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2,
\end{equation*}
where we have used (\ref{ap1}) in the last inequality.
\end{proof}
\begin{lemma}\label{Lemma6.4}
For the quadratic element ($k=2$), we assume that the exact solution $u$
of (\ref{0.1}) is sufficiently regular such that $u\in H^4(\Omega)$.
There exists a constant $C$ such that the following inequality holds
true:
\begin{equation}\label{Fork=2}
\begin{split}
\left|\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^d\langle\partial_j(\partial^2_{ij} u-{\cal
Q}_h\partial^2_{ij} u)\cdot n_i, e_0-e_b\rangle_{\partial
T}\right|\leq Ch \|u\|_4\, {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{split}
\end{equation}
\end{lemma}
\begin{proof} Since ${\cal Q}_h$ is the local $L^2$ projection onto
$P_{0}(T)$, we have $\partial_j({\cal Q}_h\partial^2_{ij} u)=0$ on each element $T$, so that
\begin{equation}\label{e}
\begin{split}
&\langle\partial_j(\partial^2_{ij}u-{\cal Q}_h\partial^2_{ij} u)\cdot n_i,
e_0-e_b\rangle_{\partial T}\\
=&\langle\partial_j\partial^2_{ij}u\cdot
n_i,e_0-e_b\rangle_{\partial T}
\\
=&\langle\partial_j\partial^2_{ij} u \cdot n_i, e_0-Q_be_0 \rangle_{\partial T}
+\langle\partial_j\partial^2_{ij}u\cdot n_i,
Q_be_0-e_b\rangle_{\partial T}\\
=&\langle(I-Q_b)\partial_j\partial^2_{ij} u \cdot n_i, e_0-Q_be_0
\rangle_{\partial T}
+\langle\partial_j\partial^2_{ij}u\cdot n_i, Q_be_0-e_b\rangle_{\partial
T}\\
=& J_1 + J_2.
\end{split}
\end{equation}
For the second term $J_2$, by using the Cauchy-Schwarz inequality, the trace
inequality (\ref{trace-inequality}), and (\ref{a}), we have
\begin{equation}\label{f}
\begin{split}
&\left|\sum_{T\in {\cal T}_h}\sum_{i,j=1}^d\langle\partial_j\partial^2_{ij}u\cdot n_i, Q_be_0-e_b\rangle_{\partial T}\right|\\
\leq & \Big(\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^dh_T^3\|\partial_j\partial^2_{ij}u\|^2_{\partial
T}\Big)^{\frac{1}{2}}
\Big(\sum_{T\in {\cal T}_h}h_T^{-3}\|Q_be_0-e_b\|^2_{\partial T}\Big)^{\frac{1}{2}}\\
\leq & C\Big(\sum_{T\in {\cal T}_h}h_T^3\big(h_T|u|^2_{4,T}+h_T^{-1}|u|^2_{3,T}\big)\Big)^{\frac{1}{2}}{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}\\
\leq & Ch\big(\|u\|_3+h\|u\|_4\big){|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{split}
\end{equation}
As to the first term $J_1$, by using the Cauchy-Schwarz inequality, the
trace inequality (\ref{trace-inequality}), (\ref{x}), and Lemma
\ref{Lemma6.3}, we arrive at
\begin{equation}\label{bb}
\begin{split}
&\left|\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^d\langle(I-Q_b)\partial_j\partial^2_{ij}u\cdot n_i
,e_0-Q_be_0\rangle_{\partial T}\right|\\
\leq &\Big(\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^d\|(I-Q_b)\partial_j\partial^2_{ij}u\|_{\partial
T}^2\Big)^{\frac{1}{2}}
\Big(\sum_{T\in {\cal T}_h} \|e_0-Q_be_0\|_{\partial T}^2\Big)^{\frac{1}{2}}\\
\leq & C\Big(\sum_{T\in {\cal T}_h}h_T | u|_{4,T}^2\Big)^{\frac{1}{2}}\Big(\sum_{T\in {\cal T}_h}h_T |e_0|_{1, T}^2\Big)^{\frac{1}{2}}\\
\leq & Ch\|u\|_4\Big(\sum_{T\in {\cal T}_h} |e_0|^2_{1, T}\Big)^{\frac{1}{2}}
\leq C h\|u\|_4\, {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{split}
\end{equation}
Combining all the above inequalities gives rise to the desired
estimate (\ref{Fork=2}).
\end{proof}
\begin{lemma}\label{Lemma6.5} There exists a constant $C$ such
that the following inequality holds true:
$$
\Big(\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^dh_T^{-1}\|(\partial_ie_0-e_{gi})\cdot
n_j\|_{\partial T}^2\Big)^{\frac{1}{2}}\leq C {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}
$$
for any $e_h\in V_h^0$.
\end{lemma}
\begin{proof} From the triangle inequality, we have
\begin{equation}
\begin{split}\label{a2}
&\|(\partial_ie_0-e_{gi})\cdot n_j\|_{\partial T}^2
\leq \|\partial_ie_0-e_{gi}\|_{\partial T}^2\\
\leq & 2\Big(\|\partial_ie_0-Q_{b}(\partial_ie_0)\|^2_{\partial T}+
\|Q_{b}(\partial_ie_0)-e_{gi}\|^2_{\partial T}\Big)\\
\leq &
Ch_T|e_0|_{2,T}^2+2\|Q_{b}(\partial_ie_0)-e_{gi}\|^2_{\partial T}.
\end{split}
\end{equation}
Thus,
\begin{equation*}
\begin{split}
\sum_{T\in {\cal
T}_h}\sum_{i,j=1}^dh_T^{-1}\|(\partial_ie_0-e_{gi})\cdot
n_j\|_{\partial T}^2&\leq C \sum_{T\in{\mathcal{T}}_h} \Big(|e_0|_{2,T}^2 +
\sum_{i=1}^d h_T^{-1}\|Q_{b}(\partial_ie_0)-e_{gi}\|^2_{\partial T}\Big)\\
&\leq C{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2.
\end{split}
\end{equation*}
This completes the proof of the lemma.
\end{proof}
% arXiv:1309.5560 -- An Efficient Numerical Scheme for the Biharmonic Equation by Weak Galerkin Finite Element Methods on Polygonal or Polyhedral Meshes (math.NA)
% arXiv:0710.4347 -- Multiplicative bijections of semigroups of interval-valued continuous functions
\begin{abstract}
We characterize all compact and Hausdorff spaces $X$ which satisfy that for every multiplicative bijection $\phi$ on $C(X, I)$, there exist a homeomorphism $\mu : X \to X$ and a continuous map $p: X \to (0, +\infty)$ such that
$$\phi (f) (x) = f(\mu (x))^{p(x)}$$
for every $f \in C(X,I)$ and $x \in X$. This allows us to disprove a conjecture of Marovt (Proc. Amer. Math. Soc. {\bf 134} (2006), 1065-1075). Some related results on other semigroups of functions are also given.
\end{abstract}
\section{Introduction}
For a compact and Hausdorff space $X$, let $C(X, I)$ denote the semigroup of all
continuous functions on $X$ taking values in the closed unit interval $I = [0,1]$. Motivated by a result of L. Moln\'ar (see \cite{Mo}), J. Marovt studied the form of multiplicative bijections on $C(X, I)$
in a special case, namely, when $X$ satisfies the first axiom of countability (see \cite{M}). He proved that
if $\varphi$ is such a map, then there exist a homeomorphism $\mu : X \longrightarrow X$ and a continuous map $p: X \longrightarrow (0, +\infty)$ such that
$$\varphi (f) (x) = f(\mu (x))^{p(x)}$$
for every $f \in C (X, I)$ and $x \in X$.
We call maps of this form {\em standard}. Marovt conjectured that every multiplicative bijection
on $C (X, I)$ was necessarily standard, and that the assumption on $X$ of being first countable could be dropped.
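A standard map is automatically multiplicative, since $(fg)(\mu(x))^{p(x)} = f(\mu(x))^{p(x)}\, g(\mu(x))^{p(x)}$. The sketch below checks this identity pointwise for illustrative choices of $\mu$, $p$, and sample functions on $X=[0,1]$ (all of them our own, not taken from the paper).

```python
import numpy as np

# A "standard" map phi(f)(x) = f(mu(x))**p(x) respects products:
# (f*g)(mu(x))**p(x) = f(mu(x))**p(x) * g(mu(x))**p(x).
# mu, p, f, g below are illustrative choices on X = [0, 1].
x = np.linspace(0.0, 1.0, 101)
mu = lambda t: 1.0 - t               # a homeomorphism of [0, 1]
p = lambda t: 1.0 + t                # continuous and positive
phi = lambda f: lambda t: f(mu(t)) ** p(t)

f = lambda t: 0.5 + 0.4 * np.sin(2 * np.pi * t) ** 2  # values in [0.5, 0.9]
g = lambda t: 0.3 + 0.2 * t                           # values in [0.3, 0.5]
fg = lambda t: f(t) * g(t)                            # still I-valued

assert np.allclose(phi(fg)(x), phi(f)(x) * phi(g)(x))
```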
In this paper we prove that Marovt's conjecture does not hold. In fact we give a characterization of spaces $X$ for which there exists a multiplicative bijection of $C(X,I)$ that is {\em not} standard. Roughly speaking, we prove that this is the case if and only if $X$ is the Stone-\v{C}ech compactification of a proper subset (see Theorem~\ref{sete}).
We study a more general case, as is that of multiplicative bijections between semigroups of functions. Of course, a multiplicative bijection between arbitrary semigroups of $C(X, I)$ need not be of the above form, even if they separate points. A simple example
is the following: take $X$ having only one point, so each semigroup of $C(X, I)$ can be identified with a
semigroup of $I$. Consider for instance $s=1/2$, $t= 1/3$, and $\mathcal{A} := \{s^n t^m : n, m \in \mathbb{N}\}$. It is now clear that the map sending each $s^n t^m$ into $s^m t^n$ is multiplicative and bijective, but cannot be described as above.
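This failure can be checked numerically: the map $s^nt^m \mapsto s^mt^n$ respects products, yet the exponent $r$ that a power form $\gamma \mapsto \gamma^r$ would require at $s$ differs from the one required at $t$. (Note that this bijection is not order preserving.) A small sketch:

```python
import math

s, t = 0.5, 1.0 / 3.0  # as in the example: s = 1/2, t = 1/3

def u(n, m):
    # the element s**n * t**m is sent to s**m * t**n
    return s**m * t**n

# multiplicative: products of exponent pairs map to products of images
for (n1, m1), (n2, m2) in [((1, 2), (3, 1)), ((2, 2), (1, 4))]:
    assert math.isclose(u(n1 + n2, m1 + m2), u(n1, m1) * u(n2, m2))

# not of the form gamma -> gamma**r: the exponent forced by u(s) = t
# differs from the one forced by u(t) = s
r1 = math.log(u(1, 0)) / math.log(s)   # log t / log s ~ 1.585
r2 = math.log(u(0, 1)) / math.log(t)   # log s / log t ~ 0.631
assert not math.isclose(r1, r2)
```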
Nevertheless, our technique can be used for semigroups other than $C(X, I)$. In fact, it works if we just require these
semigroups $\mathcal{A} \subset C(X, I)$ to satisfy the following three properties (where, as usual,
if $f \in C(X, I)$, $\mathrm{coz} \hspace{.02in} f$ denotes the set $\{x \in X: f(x) \neq 0 \}$, and
$\mathrm{supp} \hspace{.02in} f$ denotes its closure).
\begin{description}
\item[Property 1] Given any $x \in X$ and any neighborhood $U$ of $x$,
there exists $f \in \mathcal{A}$ such that $f \equiv 1$ on a neighborhood of $x$ and $\mathrm{supp} \hspace{.02in} f \subset U$.
\item[Property 2] If $f, g \in \mathcal{A}$ satisfy $f (x) \le g (x)$ for every $x \in X$,
and $\mathrm{supp} \hspace{.02in} f \subset \mathrm{coz} \hspace{.02in} g $,
then there exists $k \in \mathcal{A}$ such that
$f=gk$.
\item[Property 3] For each $x \in X$, the set $(0, 1) \cap \{f(x) : f \in \mathcal{A}\} $ is nonempty.
\end{description}
It is easy to see that the image by a standard multiplicative bijection of a semigroup satisfying Properties 1, 2, and 3 is a semigroup that also satisfies them. One such semigroup is, for instance, that of all continuously differentiable functions on $I$ (also taking values in $I$). The general form of multiplicative bijections between semigroups of this kind will be given in Theorem~\ref{derr-laredo}. Finally, we also give an example where Marovt's result does not hold for multiplicative bijections defined between general semigroups of this type, even when the ground spaces are first countable (see
Example~\ref{carras-si-cero}).
\smallskip
Suppose that $\varphi: \cx \ra \cy$ is multiplicative and bijective, where
$\mathcal{A} \subset C(X, I)$ and $\mathcal{B} \subset C(Y, I)$ are
semigroups satisfying Properties 1, 2, and 3. We say that $y \in Y$ is a {\em standard point} for $\varphi$
if there exist a number $p \in (0, + \infty)$ and a point $x \in X$ such that
$\varphi (f) (y) = f(x)^{p}$ for every $f \in \mathcal{A}$.
We denote by $\mathcal{R} (\varphi)$ the set of standard points for $\varphi$.
Throughout we assume that $X$ and $Y$ are compact and Hausdorff spaces. Given a (completely regular) space $Z$,
we denote by $\beta Z$ its Stone-\v{C}ech compactification.
\smallskip
Remark finally that our results are essentially different from those given in \cite{Mi} for multiplicative bijections between spaces
$C(X, \mathbb{R})$ of real-valued continuous functions on $X$. Even so, there is a similarity in that the multiplicative structure of the spaces of functions determines the space
$X$ up to homeomorphism. More recent results in this line, concerning multiplicative maps on $C(X, \mathbb{R})$, are given for instance in \cite{LS1} and \cite{LS2}.
\section{Main results}
We first give a theorem that provides a description of multiplicative bijections between semigroups
satisfying the above properties. Recall that a subset $Z$ of $Y$ is a {\em cozero-set} if there exists a real-valued
continuous function $f$ on $Y$ such that $Z = \{y \in Y: f(y) \neq 0\}$.
\begin{thm}\label{derr-laredo}
Suppose that $\varphi: \cx \ra \cy$ is multiplicative and bijective, where
$\mathcal{A} \subset C(X, I)$ and $\mathcal{B} \subset C(Y, I)$ are
semigroups satisfying Properties 1, 2, and 3. Then
\begin{enumerate}
\item\label{astamarte} there exist a
homeomorphism $\mu$ from $Y$ onto $X$
and a continuous map $p: \mathcal{R} (\varphi) \longrightarrow (0, + \infty)$ such that
$$\varphi(f) (y) = f(\mu(y))^{p(y)}$$
for every $f \in \mathcal{A}$ and $y \in \mathcal{R} (\varphi)$;
\item\label{santamarta} the set $\mathcal{R} (\varphi)$ is a
dense cozero-set in $Y$, and the map $p$ can be extended to a continuous map from $Y$ into $[0, + \infty]$ taking
values in $\{0 , + \infty\}$ at every point of $Y \setminus \mathcal{R} (\varphi)$.
\end{enumerate}
\end{thm}
\begin{rem}\label{negro}
Property 3 is necessary to ensure that Theorem~\ref{derr-laredo} holds. For instance if we take $X=Y=[-1,1]$ and
$\mathcal{A} := \{f \in C(X,I) : f(0) \in \{0,1\}\}$, it is clear that $\mathcal{A}$ satisfies both Properties 1 and 2, and that the map $\varphi: \mathcal{A} \longrightarrow \mathcal{A}$ defined, for every $f \in \mathcal{A}$, as $\varphi (f) (x) := f(x)$ if $x \le 0$, and $\varphi (f) (x) := f(x)^2$ if $x >0$, is a multiplicative bijection. Nevertheless, $\mathcal{R} (\varphi) =X$, so each point is standard for $\varphi$, but we cannot find a continuous map $p$ as given in Theorem~\ref{derr-laredo}.
\end{rem}
We next state the theorem characterizing all spaces on which a multiplicative bijection that is not standard can be defined. Recall that a topological space $Z$ is said to be {\em pseudocompact} if every real-valued continuous map on $Z$ is bounded.
\begin{thm}\label{sete}
There exists a bijective and multiplicative map $\varphi: \dx \ra \dy$
that is not standard if and only if $X$ and $Y$ are homeomorphic and there exists a
subset $Z$ of $Y$, not pseudocompact, such that $Y = \beta Z$.
\end{thm}
Obviously Theorem~\ref{sete} gives an answer in the negative to Marovt's conjecture. It is enough now to take any completely regular space $Z$ (thus ensuring that its Stone-\v{C}ech compactification exists) that is not pseudocompact (as for instance $\mathbb{N}$, $\mathbb{R}$, or any unbounded subset of a normed space), and we have that there are always multiplicative bijections on $C (\beta Z, I)$ that
are not standard.
\section{Some other results and proofs}
The following is a key lemma to prove Theorem~\ref{derr-laredo}.
\begin{lem}\label{azero}
Suppose that
$u: \mathscr{A}_1 \longrightarrow \mathscr{A}_2$ is an order preserving multiplicative bijection,
where $\mathscr{A}_1$ and $ \mathscr{A}_2$ are semigroups contained in $(0,1)$. Then there exists
$p \in (0 , + \infty)$ such that
$u(\gamma) = \gamma^p$ for every $\gamma \in \mathscr{A}_1$.
\end{lem}
\begin{proof}
Suppose on the contrary that there exist $\alpha, \beta \in \mathscr{A}_1$ such that $u( \alpha) = \alpha^p$ and $u(\beta ) =\beta^q$, where $0<p<q$. By taking integer powers of $\alpha$ if necessary, we may assume without loss of generality that $\alpha < \beta$. Define $a := - \log \alpha$ and $b:= - \log \beta$. Obviously $0 < b <a$, and we can find natural numbers $n, m$ such that
$$ \frac{b}{a} < \frac{n}{m} < \frac{q}{p} \frac{b}{a}.$$
Now it is clear that $m b < na$ and $npa <mqb$. These inequalities lead easily to $\alpha^n < \beta^m$ and $\alpha^{np} > \beta^{mq}$, that is, $u(\alpha^n) > u (\beta^m)$, against the fact that $u$ is order preserving.
\end{proof}
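The choice of $n$ and $m$ in the proof can be illustrated concretely (the particular values of $\alpha$, $\beta$, $p$, $q$ below are our own): any rational $n/m$ strictly between $b/a$ and $(q/p)(b/a)$ yields $\alpha^n<\beta^m$ together with $u(\alpha^n)=\alpha^{np}>\beta^{mq}=u(\beta^m)$.

```python
import math
from fractions import Fraction

# Illustration of the contradiction in the proof: suppose u(alpha) = alpha**p
# and u(beta) = beta**q with p < q and alpha < beta (sample values below).
alpha, beta, p, q = 0.2, 0.5, 1.0, 2.0
a, b = -math.log(alpha), -math.log(beta)   # 0 < b < a

lo, hi = b / a, (q / p) * (b / a)
# pick a rational n/m strictly between lo and hi
frac = Fraction((lo + hi) / 2).limit_denominator(100)
n, m = frac.numerator, frac.denominator
assert lo < n / m < hi

# alpha**n < beta**m, yet u(alpha**n) = alpha**(n p) > beta**(m q) = u(beta**m),
# so u cannot be order preserving
assert alpha**n < beta**m
assert alpha**(n * p) > beta**(m * q)
```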
\begin{proof}[Proof of Theorem~\ref{derr-laredo}]
(\ref{astamarte})
For each $x \in X$, let $\mathscr{U}_x$ be the set of all $f \in \mathcal{A}$ for which there exists a neighborhood of $x$ where $f \equiv 1 $.
Let us see that $\mathscr{C}_x := \{y \in Y: \varphi (f) (y) =1 \hspace{.03in} \forall f \in \mathscr{U}_x\}$ is nonempty.
First notice that, given any $f \in \mathscr{U}_x$, the set $\varphi (f)^{-1} (\{1\})$ is compact.
On the other hand, given $f_1, \ldots, f_n \in \mathscr{U}_x$, we can find $f_0 \in \mathcal{A}$, $f_0 \neq 0$, such that
$\mathrm{supp} \hspace{.02in} f_0 \subset \{z \in X: \prod_{i=1}^n f_i (z) =1\}$. This obviously implies that
$f_0 f_i =f_0$ for each $i$, so $\varphi(f_0) \varphi(f_i) = \varphi (f_0) \neq 0$. We deduce that
$\bigcap_{i=1}^n \varphi (f_i)^{-1} (\{1\}) \neq \emptyset$. Then $\{ \varphi (f)^{-1} (\{1\}) : f \in \mathscr{U}_x \}$ satisfies
the finite intersection property, and we conclude that
$\mathscr{C}_x $ is nonempty.
On the other hand, $\varphi^{-1}$ is also multiplicative, so given any $y \in \mathscr{C}_x$, we have that the set $\mathscr{C}_y$ (defined in a similar way as $\mathscr{C}_x$) is nonempty. Let us see that $\mathscr{C}_y =\{x\}$.
Suppose that there exists $z \in \mathscr{C}_y$, $z \neq x$. Then we can find $f_z \in \mathscr{U}_x$ such that $z \notin \mathrm{supp} \hspace{.02in} f_z $. Since $z \in \mathscr{C}_y$
and $\varphi (f_z) (y) =1$,
then we can find $k \in \mathcal{B}$ with $\mathrm{supp} \hspace{.02in} k \subset \mathrm{coz} \hspace{.02in} \varphi (f_z)$
and $\varphi^{-1} (k) (z) \neq 0$. Clearly,
if we now take $g \in \mathscr{U}_z$ with $f_z g=0$, then $g \varphi^{-1} (k) \neq 0$, but $\varphi (g) k =0$, which is impossible.
The above process lets us define a map $\mu : Y \longrightarrow X$, which turns out to be bijective, such that $\mu (y)$ is the only point in $\mathscr{C}_y$, and $y$ is the only point
in $\mathscr{C}_{\mu (y)}$, for each $y \in Y$.
We prove next that $\mu$ is continuous at every point of $Y$ (and is consequently a homeomorphism). Take any $y \in Y$, and let $U$ be an open
neighborhood of $\mu (y)$. We will see that, if $f \in \mathscr{U}_{\mu (y)}$ and $\mathrm{supp} \hspace{.02in} f \subset U $,
then $\mu (\mathrm{coz} \hspace{.02in} \varphi (f))$ is contained in $U$. Otherwise
there exists $z \in Y$ such that $\varphi (f) (z) \neq 0$ and $\mu (z) \notin \mathrm{supp} \hspace{.02in} f$,
so we can take $k \in \mathscr{U}_{\mu (z)}$ such that $kf =0$. Obviously
$\varphi (k) (z) \varphi (f) (z) \neq 0$, which
is absurd.
Finally, we have that, by definition, if $y \in \mathcal{R} (\varphi)$, then there exist $p(y) \in (0, + \infty)$
and $x \in X$ such that $\varphi(f) (y) = f (x)^{p(y)}$ for every $f \in \mathcal{A}$. It is easy to check that $x = \mu (y)$.
As for the map $p: \mathcal{R} (\varphi) \longrightarrow (0, + \infty)$, we have that for each $y \in \mathcal{R} (\varphi)$,
$$p(y) = \frac{\log \varphi (f) (y)}{\log f \left( \mu \left( y \right) \right)}$$for every $f \in \mathcal{A}$ with
$f \left( \mu \left( y \right) \right) \neq 0,1$. This implies that $p$ is continuous at $y$, and consequently on $\mathcal{R} (\varphi)$.
\medskip
(\ref{santamarta})
For each $y \in Y$, let $\mathcal{B}_y := \{g (y) : g \in \mathcal{B}\} \cap (0,1)$, and define $\mathcal{A}_{x}$ for each $x \in X$ in a similar way. Consider also
the set $\mathcal{R}_1 (\varphi)$ of all $y \in Y$ such that
$\varphi (f) (y) \neq 0, 1$ whenever $f \in \mathcal{A}$ satisfies $f (\mu (y)) \neq 0, 1$. We need the following claim.
\smallskip
{\bf Claim.} {\em Let $y \in \mathcal{R}_1 (\varphi)$. If $f, g \in \mathcal{A}$ satisfy
$g(\mu (y)) \le f(\mu (y))$, then $\varphi (g) (y) \le \varphi (f) (y) $. Moreover $\mu (y) $ belongs to $\mathcal{R}_1 \left( \varphi^{-1} \right)$.}
\smallskip
Suppose first that $g(\mu (y)) < f(\mu (y))$, and take a neighborhood $U$ of $\mu (y)$ with $ g(x) < f(x)$ for every $x \in U$. We pick $f_0, g_0 \in \mathscr{U}_{\mu (y)}$
such that $\mathrm{supp} \hspace{.02in} f_0 \subset U$, and
such that $\mathrm{supp} \hspace{.02in} g_0 \subset \{ x \in X : f_0 (x) =1\}$, respectively. Since $\mathcal{A}$
satisfies Property 2, then
there exists $k \in \mathcal{A}$ such that $(ff_0)k= gg_0$. Also $k(\mu(y)) \in (0,1)$, and consequently $$\varphi (g) (y) = \varphi (gg_0) (y) = \varphi (ff_0) (y) \varphi (k) (y) < \varphi (ff_0) (y) = \varphi (f) (y).$$
We now prove that $\mu (y)$ belongs to $\mathcal{R}_1 (\varphi^{-1})$. Let $h \in \mathcal{B}$ be such that $ h (y) \neq 0,1$.
Suppose that $\varphi^{-1} (h) (\mu (y)) = 0$ and take any $l \in \mathcal{A}$ with $l (\mu (y)) \neq 0, 1$, and
$n \in \mathbb{N}$ such that $\left( \varphi(l) (y) \right)^n < h(y)$. We then have that $ \varphi^{-1} (h)(\mu (y)) < l^n (\mu (y))$ and
$h (y) > \varphi(l^n) (y)$, which contradicts what we have proved above. We deduce that $\varphi^{-1} (h) (\mu (y)) \neq 0$. In a similar way we can deduce that $\varphi^{-1} (h) (\mu (y)) \neq 1$. Thus, $\mu (y)$ belongs
to $\mathcal{R}_1 \left( \varphi^{-1} \right)$.
Now, working with $\varphi^{-1}$, it is clear that if $g(\mu (y)) \le f(\mu (y))$, then we cannot get $\varphi (g) (y) > \varphi (f) (y) $.
The claim is proved.
\smallskip
For any $y \in \mathcal{R}_1 (\varphi)$, we may define a map $\varphi_y : \mathcal{A}_{\mu(y)} \longrightarrow \mathcal{B}_y$ in
the following way. Given $\alpha \in \mathcal{A}_{\mu (y)}$, there exists $f \in \mathcal{A}$ such that
$f(\mu(y)) = \alpha$. Then define $\varphi_y (\alpha) := \varphi (f) (y)$. It is clear by the above claim that
$\varphi_y$ is well defined, and obviously it is multiplicative, order preserving, and bijective.
Also, we have that $\varphi (f) (y) =1 $ whenever $f(\mu (y))= 1$, and $\varphi (f) (y) =0 $ whenever $f(\mu (y))= 0$,
$f \in \mathcal{A}$.
Consequently, by
Lemma~\ref{azero}, we have that $\mathcal{R}_1 (\varphi ) \subset \mathcal{R} (\varphi)$. The other inclusion is immediate, so
$\mathcal{R}_1 (\varphi) = \mathcal{R} (\varphi) $.
\medskip
We prove that $\mathcal{R} (\varphi) $
is dense in $Y$.
Let $W_0 \subset Y$ be open (and nonempty). Pick
$y \in W_0$ and assume that $y \notin \mathcal{R} (\varphi) = \mathcal{R}_1 (\varphi)$, so
there exists
$f_0 \in \mathcal{A}$ such that $f_0 (\mu (y)) \neq 0, 1$ and $ \varphi (f_0) (y) \in \{0, 1\}$. Let $V:= \{x \in X : 0<f_0 (x) <1\}$.
We next see that $y$ does not belong to the interior of the set $W_1 := \{ z \in W_0 : \varphi (f_0) (z) =0\}$. Otherwise, we can find
$g_0 \in \mathscr{U}_{y}$ with $g_0 \varphi (f_0) =0$. This implies that $\varphi^{-1} (g_0) f_0 =0$, against the fact that $\varphi^{-1} (g_0) (\mu (y)) =1$ and $f_0 (\mu (y)) \neq 0$. On the other hand,
$y$ does not belong to the interior of the set $W_2 := \{ z \in W_0 : \varphi (f_0) (z) =1\}$, because $\varphi (f_0) \notin \mathscr{U}_y$. Consequently, due to the form of $W_1$ and $W_2$, we have that $y$ belongs to the closure of $Y \setminus \left( W_1 \cup W_2 \right)$. This means that
$$W:= \mu^{-1} \left( V \right) \cap \left( W_0 \setminus \left( W_1 \cup W_2 \right) \right) \neq \emptyset.$$
As in the proof of the claim, it is clear that given any point in $Y$, and $f , g \in \mathcal{A}$ with $g(\mu (y)) < f(\mu (y))$, then $\varphi (g) (y) \le \varphi (f) (y)$. Now it is straightforward to check that every point in $W$ belongs to $\mathcal{R}_1 (\varphi) = \mathcal{R} (\varphi)$.
\medskip
We finally see that $\mathcal{R} (\varphi)$ is a cozero-set.
Suppose that $y \in Y \setminus \mathcal{R} (\varphi)$. Take any net $\left( z_{\alpha} \right)_{\alpha \in \Lambda}$ in $\mathcal{R} (\varphi)$
converging to $y$.
Clearly, since $y \notin \mathcal{R}_1 (\varphi)$, then there exists $f \in \mathcal{A}$ with $f (\mu (y)) \neq 0, 1$ and
either $\varphi (f) (y) =0 $ or $1$.
Also
$$p(z_{\alpha}) = \frac{\log \varphi (f) (z_{\alpha})}{\log f \left( \mu \left( z_{\alpha} \right) \right)}$$
for every $\alpha \in \Lambda$.
The conclusion that $\lim_{\alpha} p (z_{\alpha}) \in \{0, + \infty\}$ follows easily. Finally, it is clear that
$\mathcal{R} (\varphi)$ coincides with the cozero-set of the continuous function $Y \longrightarrow \mathbb{R}$ given by
$$ y \mapsto p(y) \left( p \left( y \right) ^2 +1 \right)^{-1} $$
if $y \in \mathcal{R} (\varphi )$,
and $y \mapsto 0$ otherwise.
\end{proof}
\begin{rem}\label{cerotres}
If in Theorem~\ref{derr-laredo} we assume that the semigroups satisfy just Properties 1 and 2, a similar proof
ensures the existence of the homeomorphism $\mu$ between $Y$ and $X$, even if the theorem does not hold (see Remark~\ref{negro}).
\end{rem}
In what follows, when we want to specify the homeomorphism $\mu$ and the map $p$ corresponding to a multiplicative and bijective map $\varphi$, as given in Theorem~\ref{derr-laredo}, we will write $\varphi [\mu, p]$.
It is obvious that if $\varphi = \varphi [\mu, p]$, then $\varphi^{-1} = \varphi^{-1}[\mu^{-1}, q]$,
where $q = 1/\left( p\circ \mu^{-1} \right)$.
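This inversion formula can be sanity-checked numerically (with illustrative choices of $\mu$ and $p$ on $X=Y=[0,1]$, our own): composing $\varphi[\mu,p]$ with $\varphi^{-1}[\mu^{-1},1/(p\circ\mu^{-1})]$ recovers the identity in both orders.

```python
import numpy as np

# Check of the inversion formula: if phi = phi[mu, p], then
# phi^{-1} = phi^{-1}[mu^{-1}, q] with q = 1/(p o mu^{-1}).
# mu, p, f below are illustrative choices on X = Y = [0, 1].
y = np.linspace(0.0, 1.0, 101)
mu = lambda t: t**2                 # a homeomorphism of [0, 1]
mu_inv = lambda t: np.sqrt(t)
p = lambda t: 1.0 + t
q = lambda t: 1.0 / p(mu_inv(t))

phi = lambda f: lambda t: f(mu(t)) ** p(t)
phi_inv = lambda g: lambda t: g(mu_inv(t)) ** q(t)

f = lambda t: 0.25 + 0.5 * t        # values in [0.25, 0.75], a subset of I
assert np.allclose(phi(phi_inv(f))(y), f(y))
assert np.allclose(phi_inv(phi(f))(y), f(y))
```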
\begin{cor}\label{paragogo}
Let $\varphi: \cx \ra \cy$ be multiplicative and bijective,
where
$\mathcal{A} \subset C(X, I)$ and $\mathcal{B} \subset C(Y, I)$ are
semigroups satisfying Properties 1, 2, and 3. If $\mathcal{R} (\varphi)$
is pseudocompact, then $\mathcal{R} (\varphi) = Y$.
\end{cor}
\begin{proof}
It is obvious that if $\varphi = \varphi [\mu, p]$ and $\varphi^{-1} = \varphi^{-1}[ \mu^{-1}, q]$, then both $p$ and $q$ are bounded, and consequently neither $+ \infty$ nor
$ 0$ are limit points of $p (\mathcal{R} (\varphi ))$. The conclusion follows from Theorem~\ref{derr-laredo}.
\end{proof}
\begin{prop}\label{dismas}
Let $\varphi: \dx \ra \dy$ be multiplicative and bijective. Then $Y = \beta (\mathcal{R} (\varphi))$.
\end{prop}
\begin{proof}
We follow the notation given after
Remark~\ref{cerotres}.
Suppose that $\varphi = \varphi [\mu, p]$.
It is clear that if $\mathbf{1}$ is the map
constantly equal to $1$ on $X$, and we consider the multiplicative bijection
$\psi = \psi[\mu^{-1}, \mathbf{1}] : C (Y, I) \longrightarrow C (X, I)$,
then the composition $\varphi \circ \psi : C (Y, I) \longrightarrow C (Y, I) $ is $\varphi \circ \psi [\mathbf{id}_Y, p] $ (where $\mathbf{id}_Y$ is the identity map on $Y$). Also, $\mathcal{R}_1 (\varphi) = \mathcal{R}_1 (\varphi \circ \psi)$,
so $\mathcal{R} (\varphi) = \mathcal{R} (\varphi \circ \psi)$ (see Proof of Theorem~\ref{derr-laredo}). Consequently we can assume without loss of
generality that $X=Y$ and $\mu = \mathbf{id}_Y$.
Notice that if we consider the continuous extensions of $p$ and $q$ to maps from $Y$ to $[0, + \infty]$,
as seen in Theorem~\ref{derr-laredo}, then $p^{-1} (\{0\}) = q^{-1} (\{ + \infty\}) $
and $p^{-1} (\{+ \infty\}) = q^{-1} (\{ 0\}) $. Thus, assuming that $\beta (\mathcal{R} (\varphi)) \neq Y$, at least one of the sets $ p^{-1} (\{0\})$, $q^{-1} (\{ 0\})$
is nonempty. We conclude that there is a continuous map
from $Y \setminus p^{-1} (\{0\})$, or from $Y \setminus q^{-1} (\{ 0\})$, to $I$ which does not admit
a continuous extension to $Y$.
Taking into account that $\mathcal{R} (\varphi) = \mathcal{R} \left( \varphi^{-1} \right)$, we
assume without loss of generality that
there exists a continuous function $g_0 :Y \setminus p^{-1} (\{0\}) \longrightarrow I$ that
cannot be continuously extended to $Y$. We consider the map $h \in C (Y, I)$ whose image by $\varphi$ is the constant function $1/e$. It is easily seen that,
since $p(y) \log h (y) = \log \varphi (h) (y)$ for every $y \in \mathcal{R} (\varphi)$, then
$$h (y)= \exp \left( - \frac{1}{p(y)} \right)$$
if $y \in Y \setminus p^{-1} (\{0\})$, and $h \equiv 0$ on $ p^{-1} (\{0\})$.
Since $p^{-1} (\{0\})$ is a zero-set, then there exists a sequence $\left( K_n \right)$ of compact subsets of $Y$ with
$K_{n+1}$ contained in the interior of $K_n$ for each $n \in \mathbb{N}$, and such that
$p^{-1} (\{0\}) = \bigcap_{n=1}^{\infty} K_n$. For each $n \in \mathbb{N}$, we take $g_n \in C (Y, I)$ satisfying $g_n \equiv g_0$ on
$Y \setminus K_n$. We denote by $f_n$ its counterimage by $\varphi$.
Notice that if we define $f_0 (z) = f_n (z)$ whenever $z \notin K_n$, then
$f_0 :
Y \setminus p^{-1} (\{0\}) \longrightarrow I$ is continuous. It is also obvious that $f_0 h$ (defined
as $0$ on $p^{-1} (\{0\})$) belongs to $C(Y, I)$. On the other hand, we have that $f_0 h \equiv f_n h$ outside $K_n$ for each
$n \in \mathbb{N}$, and consequently $$\varphi (f_0 h) \equiv \varphi (f_n h) \equiv g_n \varphi (h) \equiv g_0 \varphi (h)$$
outside each $K_n$. This implies that $\varphi (f_0 h) \equiv g_0 \varphi (h) $ on $ Y \setminus p^{-1} (\{0\}) $. Now,
since $\varphi (h) \equiv 1/e$ on $Y$, the function $\varphi (f_0 h) \in C(Y, I)$ would provide a continuous extension of $g_0 /e$, and hence of $g_0$, to $Y$, which is absurd.
\end{proof}
\begin{rem}\label{rosales}
We deduce in particular that if the set $Y \setminus \mathcal{R} (\varphi)$ is nonempty, then its cardinality is at least $2^{\mathfrak{c}}$. Also, if it is endowed with the restricted topology, then no point in
$Y \setminus \mathcal{R} (\varphi)$ is a $G_{\delta}$ (see \cite[Chapter 9]{GJ}).
\end{rem}
\begin{rem}\label{azulbelga}
By Remark~\ref{rosales}, we have that each point of $Y$ having a countable base of neighborhoods belongs
to $\mathcal{R} (\varphi)$ for every $\varphi$. In particular, if $Y$ is first countable, then $\mathcal{R} (\varphi) = Y$, which is essentially Marovt's result. But there can be spaces which are not first countable, and for which every point is standard. As an easy example,
consider a bijective and multiplicative map $\varphi : C([0, \omega_1], I) \longrightarrow C([0, \omega_1], I)$, where
$\omega_1$ denotes the first uncountable ordinal. Since each point of $[0, \omega_1)$ has a countable base of
neighborhoods, we deduce that $[0, \omega_1] \setminus \mathcal{R} (\varphi) \subset \{\omega_1\}$, and again by Remark~\ref{rosales}, we cannot have $[0, \omega_1] \setminus \mathcal{R} (\varphi) = \{\omega_1\}$. We conclude that $\mathcal{R} (\varphi) = [0, \omega_1]$.
\end{rem}
\begin{rem}\label{nikdezir}
The conclusion given in Remark~\ref{azulbelga} is not true for more general semigroups, that is, not every point having a countable base of neighborhoods necessarily belongs to $\mathcal{R} (\varphi)$ (see Example~\ref{carras-si-cero}).
\end{rem}
Using Theorem~\ref{derr-laredo}, it is easy to see that every isolated point is standard for every multiplicative bijection. More generally, the next result allows us to identify some points that belong to $\mathcal{R} (\varphi)$ for every $\varphi$.
\begin{cor}\label{afeite}
Let $y \in Y$ be such that $\beta \left( Y \setminus \{y\} \right) \neq Y$.
Then $y \in \mathcal{R} (\varphi)$ for every multiplicative bijection $\varphi: \dx \ra \dy$.
\end{cor}
\begin{proof}
Suppose that $y \notin \mathcal{R} (\varphi)$. Thus, taking into account Proposition~\ref{dismas} and the fact that $\mathcal{R} (\varphi) \subset Y \setminus \{y\}$, we deduce that $\beta \left( Y \setminus \{y\} \right) = Y$, and we
are finished.
\end{proof}
\begin{rem}
Obviously, the converse of Corollary~\ref{afeite} is in general not true. For an easy example, consider for instance
the point $\omega_1$ in Remark~\ref{azulbelga}, and take into account that $\beta [0, \omega_1) = [0, \omega_1]$.
\end{rem}
\begin{rem}
Marovt's result does not extend in general to the kind of semigroups we are dealing with, even if the ground spaces are assumed to be first countable. That is, it is possible that $\mathcal{R} (\varphi) \neq X$ for a multiplicative bijection $\varphi : \mathcal{A} \longrightarrow \mathcal{B}$, even in the case when $\mathcal{A}, \mathcal{B} \subset C (X, I)$ satisfy Properties 1, 2, and 3, and the
space $X$ is first countable. We next include an example of this fact.
\end{rem}
\begin{ex}\label{carras-si-cero}
For a topological space $Z$, denote by $C(Z, \mathbb{R}_{\ge 0})$ the set of all continuous maps from $Z$ to $[0, +\infty)$. Let $\mathbb{N} \cup \{ \infty \} $ be the one-point compactification of $\mathbb{N}$ and let
$\varphi : C( \mathbb{N} \cup \{ \infty \} , \mathbb{R}_{\ge 0} ) \longrightarrow C ( \mathbb{N} , \mathbb{R}_{\ge 0} )$ be defined, for each $f \in C( \mathbb{N} \cup \{ \infty \} , \mathbb{R}_{\ge 0}) $, as $\varphi (f) (n) := f(n)^n$ for every $n \in \mathbb{N}$.
It is clear that $\varphi$ is multiplicative and injective.
Since it also preserves order, the image of each $f \in C(\mathbb{N} \cup \{ \infty \} , I)$ takes values in $I$.
Consider now the subset $\mathcal{A}_1$ of $C(\mathbb{N} \cup \{ \infty \} , I )$
of all characteristic functions which are continuous, and let $f_0 \in C( \mathbb{N} \cup \{ \infty \} , I)$ be the preimage under $\varphi$ of the constant function equal to $1/2$. Define $\mathcal{A}$ as the set of all functions of the form
$f = \alpha g f_0^m$ for some $\alpha \ge 0$, $g \in \mathcal{A}_1$, and $m \in \mathbb{Z}$, such that $f(n) \in I$ for every $n \in \mathbb{N}$.
It is easy to see that if $f \in \mathcal {A}$, then $\lim_{n \longrightarrow \infty} \varphi (f) (n)$ exists, so in a natural way
we may define a map $\varphi : \mathcal{A} \longrightarrow C( \mathbb{N} \cup \{ \infty \} , I) $.
Put $\mathcal{B} := \varphi (\mathcal{A})$. It is easy to see that
$\mathcal{A}$ and $\mathcal{B}$ satisfy Properties 1, 2, and 3. On the other hand, every point in $\mathbb{N}$ is
isolated, and consequently $\mathbb{N}$ is contained in $\mathcal{R} (\varphi)$. Nevertheless, $\infty$ does not
belong to $\mathcal{R} (\varphi)$.
\end{ex}
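As a numerical illustration of the example above (and of the surjectivity argument used later in the proof of Theorem~\ref{sete}, here with $u(n)=n$), the following Python sketch checks the round trip $\varphi(\exp L_k)=k$ for the constant function $k \equiv 1/2$. The helper names are ours, not from the paper.

```python
import math

def phi(f, n):
    # Example map from the text: phi(f)(n) = f(n)^n for f in C(N ∪ {∞}, I)
    return f(n) ** n

def phi_preimage(k, n):
    # Surjectivity construction with u(n) = n: L_k(n) = log(k(n)) / n,
    # with the convention exp(-inf) = 0 when k(n) = 0
    if k(n) == 0.0:
        return 0.0
    return math.exp(math.log(k(n)) / n)

# Round trip phi(exp(L_k)) = k for the constant function k = 1/2:
k = lambda n: 0.5
g = lambda n: phi_preimage(k, n)   # g(n) = 2^(-1/n), and g(n) -> 1 as n -> ∞
```

Note that $g$ tends to $1$ at infinity, so it extends continuously to $\mathbb{N}\cup\{\infty\}$, as the construction of $f_0$ requires.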
\begin{proof}[Proof of Theorem~\ref{sete}]
Suppose that there exists a multiplicative bijection $\varphi: \dx \ra \dy$ that is not standard. First, by
Theorem~\ref{derr-laredo}, $X$ and $Y$ must be homeomorphic. Also, calling $Z := \mathcal{R} (\varphi)$, we have
by Corollary~\ref{paragogo} that $Z$ is not pseudocompact, and
by Proposition~\ref{dismas} that
$Y = \beta Z$.
Conversely, suppose that $X$ and $Y$ are homeomorphic, and that there exists a proper subset $Z$ of $Y$ which is not pseudocompact, and such that
$Y= \beta Z$. It is clear that without loss of generality we may assume that $X=Y$.
Since $Z$ is not pseudocompact, there exists an unbounded continuous function $u: Z \longrightarrow [1 , + \infty)$. We define a map $\varphi: C (Z, I) \longrightarrow C (Z, I)$ as $\varphi (g) (z) := g (z)^{u(z)}$ for each $g \in C (Z, I)$ and $z \in Z$.
It is easy to check that $\varphi$ is multiplicative and injective.
Let us see that $\varphi$ is surjective. To this end, we take any $k \in C (Z, I)$, and consider the map
$L_k : Z \longrightarrow [- \infty, 0]$
defined as $L_k (z) := (\log k(z)) / u(z)$ if $k (z) \neq 0$, and $L_k (z) := -\infty$ if $k(z) =0$. It is easy to
check that $L_k$ is continuous and that $\varphi (\exp L_k) = k$ (assuming $\exp (- \infty) =0$).
Obviously $\varphi$ can be seen as a multiplicative bijection on $C(\beta Z, I)$. On the other hand, if
for some homeomorphism $\mu: \beta Z \longrightarrow \beta Z$ and a continuous map
$v: \beta Z \longrightarrow (0, + \infty)$,
$\varphi (f) (x) = f(\mu (x))^{v (x)}$ for every $f$, then $\mu$ must be the identity on $\beta Z$, and $v(z) = u(z)$ for every $z \in Z$.
This obviously implies that $v$ must attain the value $+ \infty$ on some points of $\beta Z \setminus Z$, which is
absurd.
\end{proof}
\section{Acknowledgements}
The author wishes to thank the referee for his/her valuable remarks and, also, Ana M. R\'odenas for drawing his attention to this subject.
| {
"timestamp": "2007-12-13T21:28:08",
"yymm": "0710",
"arxiv_id": "0710.4347",
"language": "en",
"url": "https://arxiv.org/abs/0710.4347",
"abstract": "We characterize all compact and Hausdorff spaces $X$ which satisfy that for every multiplicative bijection $\\phi$ on $C(X, I)$, there exist a homeomorphism $\\mu : X \\to X$ and a continuous map $p: X \\to (0, +\\infty)$ such that $$\\phi (f) (x) = f(\\mu (x))^{p(x)}$$ for every $f \\in C(X,I)$ and $x \\in X$. This allows us to disprove a conjecture of Marovt (Proc. Amer. Math. Soc. {\\bf 134} (2006), 1065-1075). Some related results on other semigroups of functions are also given.",
"subjects": "Functional Analysis (math.FA); General Topology (math.GN)",
"title": "Multiplicative bijections of semigroups of interval-valued continuous functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806557900711,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7077274206743568
} |
https://arxiv.org/abs/2211.02312 | Covering of high-dimensional sets | Let $(\mathcal{X},\rho)$ be a metric space and $\lambda$ be a Borel measure on this space defined on the $\sigma$-algebra generated by open subsets of $\mathcal{X}$; this measure $\lambda$ defines volumes of Borel subsets of $\mathcal{X}$. The principal case is where $\mathcal{X} = \mathbb{R}^d$, $\rho $ is the Euclidean metric, and $\lambda$ is the Lebesgue measure. In this article, we are not going to pay much attention to the case of small dimensions $d$ as the problem of construction of good covering schemes for small $d$ can be attacked by the brute-force optimization algorithms. On the contrary, for medium or large dimensions (say, $d\geq 10$), there is little chance of getting anything sensible without understanding the main issues related to construction of efficient covering designs. | \subsection*{Introduction}
Let $(\mathcal{X},\rho)$ be a metric space and $\lambda$ be a Borel measure on this space defined on the $\sigma$-algebra generated by open subsets of $\mathcal{X}$; this measure $\lambda$ defines volumes of Borel subsets of $\mathcal{X}$. The principal case is where $\mathcal{X} = \mathbb{R}^d$, $\rho $ is the Euclidean metric, and $\lambda$ is the Lebesgue measure. In this article, we are not going to pay much attention to the case of small dimensions $d$ as the problem of construction of good covering schemes for small $d$ can be attacked by the brute-force optimization algorithms. On the contrary, for medium or large dimensions (say, $d\geq 10$), there is little chance of getting anything sensible without understanding the main issues related to construction of efficient covering designs.
\subsection*{Optimal covering }
Let $\textbf{X} $ be a compact subset of $\mathcal{X}$ with $0< {\rm vol}(\textbf{X})<\infty$; in order to avoid unnecessary technical difficulties, we assume that $\textbf{X}$ is convex.
Consider $X_n=\{x_1, \ldots, x_n\} $, a set of $n$ points in $\mathcal{X}$; we will
call $X_n$ an $n$-point design. The number of points $n$ can either be fixed or determined in the course of computations. In the latter case, the designs $X_n$ are incremental (nested).
The covering radius of $\textbf{X}$ for the design $X_n$ is
\begin{eqnarray} \label{eq:CR}
{\rm CR} (X_n) := \max_{x\in\textbf{X}} \rho (x,X_n) \, ,
\end{eqnarray}
where
\begin{eqnarray} \label{eq:CR5}
\rho (x,X_n)= \min_{x_j\in X_n} \rho (x,x_j)\,
\end{eqnarray}
is the distance from the point $x \in \mathcal{X}$ to the design $X_n$.
Covering radius is also
the smallest $r \geq 0$ such that the union of the balls with centers at $x_j \in X_n$ and radius~$r$ fully covers $\textbf{X}$:
\begin{eqnarray} \label{eq:CR1}
{\rm CR} (X_n)= \mbox{$\min_{ {r>0} }$} \; \mbox{ such that } \textbf{X} \subseteq {\cal B} (X_n,r)\, ,
\end{eqnarray}
where ${\cal B} (X_n,r)= \bigcup_{j=1}^n {\cal B} (x_j,r)$ and
\begin{eqnarray} \label{eq:CR2} {\cal B} (x,{ r })= \{ z \in \mathcal{X} :\; \rho (x,z) \leq { r } \}\end{eqnarray}
is the ball of radius $r$ and centre $x\in \mathcal{X}$.
{\rm CR}-optimal design $X_n^{({\rm CR})}$
is the $n$-point design such that \begin{eqnarray*} {\rm CR}(X_n^{({\rm CR})})=\min_{X_n} {\rm CR}(X_n). \end{eqnarray*}
Other common names for the covering radius are: fill distance (in approximation theory; see \cite{schaback2006kernel,wendland2004scattered}), dispersion (in Quasi Monte Carlo; see \cite[Ch. 6]{Niederreiter}),
minimax-distance criterion (in computer experiments; see \cite{pronzato2012design,santner2003design}) and
coverage threshold (in probability theory; see \cite{penrose2021random}).
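The definitions above translate directly into a Monte Carlo procedure. The following sketch (our illustrative code, not from the paper; the design is arbitrary) estimates a lower bound on ${\rm CR}(X_n)$ for $\textbf{X}=[0,1]^d$ by maximizing $\rho(x,X_n)$ over random test points:

```python
import numpy as np

rng = np.random.default_rng(0)

def covering_radius_lb(X_n, d, n_test=200_000):
    # Monte Carlo lower bound on CR(X_n) for X = [0,1]^d:
    # the largest distance from a random test point to its nearest design point
    U = rng.random((n_test, d))
    d2 = ((U[:, None, :] - X_n[None, :, :]) ** 2).sum(axis=-1)
    return float(np.sqrt(d2.min(axis=1)).max())

# 1-d example: for the design {0.2, 0.6, 1.0} in [0,1], CR = 0.2 exactly
X3 = np.array([[0.2], [0.6], [1.0]])
cr_est = covering_radius_lb(X3, d=1)   # slightly below 0.2
```

Since the maximum is taken over a finite sample rather than over all of $\textbf{X}$, the estimate approaches ${\rm CR}(X_n)$ from below.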
Point sets with small covering radius are very desirable in theory and practice of global optimization and many branches of numerical mathematics.
In particular, the celebrated results of A.~G.~Sukharev imply that any {\rm CR}-optimal design $X_n^{({\rm CR})}$ provides the following:
(a)
min-max $n$-point global optimization method in the set of all adaptive $n$-point optimization strategies, see \cite{Sukh1} and \cite[Ch.4,Th.2.1]{sukharev2012minimax},
(b) worst-case $n$-point multi-objective global optimization method in the set of all adaptive $n$-point algorithms, see
\cite{vzilinskas2013worst}, and
(c) the $n$-point min-max optimal quadrature, see \cite[Ch.3,Th.1.1]{sukharev2012minimax}.
In all three cases, the class of (objective) functions is the class of Lipschitz functions, and
the optimality of the design is independent of the value of the Lipschitz constant. Sukharev's results on $n$-point min-max optimal quadrature formulas have been generalized in \cite{pages1998space}
for functional classes different from the class of Lipschitz functions; see also formula (2.3) in \cite{du1999centroidal}.
If $\textbf{X}$ is compact then choosing points outside $\textbf{X}$ cannot improve best covering designs, see Proposition 3.2.3 in \cite{borodachov2019discrete} for the case $\mathcal{X}=\mathbb{R}^d$, and therefore without loss of generality we can assume $x_j \in \textbf{X}$ for all $j$.
In the case $\textbf{X}=[0,1]$ with the Euclidean metric, the $n$-point design $X_n^{({\rm CR})}= \{ x_1^*, \ldots, x_n^*\}$ minimizing ${\rm CR} (X_n)$ consists of the points $x_j^*=(2j-1)/(2n)$, $j=1, \ldots, n$, giving ${\rm CR}(X_n^{({\rm CR})})=1/(2n)$.
The asymptotically {\rm CR}-optimal sequence of nested designs $X_n$ for $\textbf{X}=[0,1]$ can be constructed with the so-called Ruzsa points (see \cite[p.~154]{Niederreiter}).
In the case when $\textbf{X}$ is a $d$-dimensional sphere, {\rm CR}-optimal $n$-point designs (for specific values of $n$) and algorithms for the numerical construction of $n$-point designs minimizing ${\rm CR} (X_n)$ (for the Euclidean metric) are provided in \cite{borodachov2019discrete}.
For $d=2$, $\textbf{X}=[0,1]^2$ and certain small values of $n$, either {\rm CR}-optimal or close to {\rm CR}-optimal designs are given in \cite{brass2005research,boroczky2004finite}. For $d\geq4$ and $n=4$, optimal constructions can be found in \cite{kuperbergball}.
Numerical construction of $n$-point {\rm CR}-optimal designs $X_n^{({\rm CR})}$ is notoriously difficult, especially when $d$ is not too small and $\textbf{X}$ has a boundary (for example, $\textbf{X}$ is neither a sphere nor a torus). This is related to the complexity of the problem of minimizing \eqref{eq:CR} with respect to $X_n$: the optimization problem is a min-max problem in the very high-dimensional space $\mathcal{X}^n$.
Rather than constructing {\rm CR}-optimal designs, it is practically easier to regularize the optimization problem by replacing the criterion \eqref{eq:CR} with an easier one
in such a way that the solution of a regularized problem stays close to the solution of the original problem. Both $\min$ in \eqref{eq:CR} and $\max$ in \eqref{eq:CR5} can be regularized; for example, by means of approximating the $L_\infty$-norm by a suitable $L_p$-norm.
For examples of applications of this approach, see \cite{borodachov2019discrete} and \cite{pronzato2019measures}.
The problems of quantization and weak covering considered below are based on two very natural relaxations of the {\rm CR} criterion.
\subsection*{Quantization and approximate covering }
The problem of covering is a particular instance of the problem of space-filling, and other space-filling criteria, such as various discrepancies, the separation (or packing) radius, and the spacing radius, can also be considered. However, most of these criteria (unless a specially constructed discrepancy is used) cannot be regarded as regularized covering radius, and therefore designs optimal with respect to such criteria can be very poor with respect to the {\rm CR}-criterion; see, e.g., Section 1.3 in \cite{zhigljavsky2021bayesian} and \cite{pronzato2020bayesian}.
In what follows, it is convenient to use the intersection of the ball \eqref{eq:CR2} and $\textbf{X}$:
\begin{eqnarray} \label{eq:CR3} {B} (x,{ r })={\cal B} (x,{ r })\cap \textbf{X}= \{ z \in \textbf{X} :\; \rho (x,z) \leq { r } \}\, .\end{eqnarray}
The intersection of the union of the $n$ balls ${\cal B} (x_j,r)$ ($x_j \in X_n$) with $\textbf{X}$ is therefore
\begin{eqnarray*}
{B} (X_n,r)=
{\cal B} (X_n,r) \cap \textbf{X}= \bigcup_{x_j \in X_n} { B} (x_j,r) = \{ x \in \textbf{X}:\; \rho (x, X_n) \leq r\}\, ,
\end{eqnarray*}
where $\rho (x,X_n) $ is defined in \eqref{eq:CR5}.
The following cdf (cumulative distribution function) is of prime importance in understanding covering properties of the design $X_n$:
\begin{eqnarray} \label{eq:CR7}
F (r,X_n)=\mbox{vol}( {B} (X_n,r))/ \mbox{vol}( \textbf{X})\, \;\; (0 \leq r \leq {\rm CR}(X_n))\, .
\end{eqnarray}
For given $r\geq 0$, $F (r,X_n)$ is
the proportion of $\textbf{X}$ covered by the balls ${\cal B} (X_n,r)$. For any compact $\textbf{X}$ and any point set $X_n$, the distribution with cdf $F (\cdot,X_n)$ is absolutely continuous on $[0,{\rm CR}(X_n)]$, so that
$F (0,X_n)=0$, $F ({\rm CR}(X_n),X_n)=1$, and the cdf $F (r,X_n)$ itself is a strictly increasing continuous function on $[0,{\rm CR}(X_n)]$.
Let $\xi=\xi (X_n)$ be the r.v. (random variable) with cdf $F (r,X_n)$ defined by \eqref{eq:CR7}. The essential supremum ${\rm ess\, sup} \xi$ of $\xi$ is the covering radius ${\rm CR}(X_n)$. For any real $p>0$, the $p$-th moment of $\xi$ is the so-called {\it quantization error} of order $p$:
\begin{eqnarray}
\label{eq:errorQ}
\theta_p(X_n)=\int_0^{{\rm CR}(X_n)} r^p d F (r,X_n) = \mathbb{E}_U \rho ^p(U,X_n)\, ,
\end{eqnarray}
where $U$ is the r.v. with uniform distribution $P_{un}$ on $\textbf{X}$; that is, for any Borel subset $A$ of $\textbf{X}$,
\begin{eqnarray*}
{\rm Prob}\{U \in A\}={\rm vol}(A)/{\rm vol}(\textbf{X})=P_{un}(A)\,.
\end{eqnarray*}
As the cdf $F (r,X_n)$ is strictly increasing on $[0,{\rm CR}(X_n)]$ and ${\rm ess\, sup}\, \xi= {\rm CR}(X_n)$, we have $\theta_p(X_n)^{1/p} \to {\rm CR}(X_n)$ as $p \to \infty$ for any point set $X_n$; for details, see Section 10 in \cite{graf2007foundations}. Therefore, the
$p$-th order quantization error with large $p$ can be considered as a regularized covering radius.
In general, quantization does not have to be associated with uniform distribution $P_{un}$; any other probability distribution can be used in its place.
Quantization error is a very important concept with a long and rich history and very important practical implications.
Quantization error is easier to optimize numerically than the covering radius: there is, for example, the celebrated Lloyd's algorithm, one of the main tools in computational data science, mostly used for clustering.
For details on the theory of quantization and construction of efficient quantizers, we refer to the excellent book \cite{graf2007foundations}.
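The quantization error \eqref{eq:errorQ} is straightforward to estimate by Monte Carlo. In the illustrative sketch below (our code; the one-dimensional design is arbitrary), $\theta_p(X_n)^{1/p}$ visibly approaches ${\rm CR}(X_n)=0.2$ as $p$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantization_error(X_n, p, d, n_mc=200_000):
    # Monte Carlo estimate of theta_p(X_n) = E_U rho^p(U, X_n), U ~ Unif([0,1]^d)
    U = rng.random((n_mc, d))
    rho = np.sqrt(((U[:, None, :] - X_n[None, :, :]) ** 2).sum(axis=-1)).min(axis=1)
    return float(np.mean(rho ** p))

# For the design {0.2, 0.6, 1.0} in [0,1], CR(X_n) = 0.2 and
# theta_p^{1/p} increases towards 0.2 as p grows
X3 = np.array([[0.2], [0.6], [1.0]])
t2 = quantization_error(X3, p=2, d=1) ** (1 / 2)     # ≈ 0.115
t50 = quantization_error(X3, p=50, d=1) ** (1 / 50)  # ≈ 0.185, close to 0.2
```

For this design $\rho(U,X_n)$ is uniform on $[0,0.2]$, so the values above can also be checked analytically: $\theta_p^{1/p} = 0.2\,(p+1)^{-1/p}$.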
Another important concept related to covering is the concept of {\it weak (or approximate) covering} introduced in \cite{us,second_paper} and defined through quantiles of the cdf \eqref{eq:CR7} as follows. For any $\gamma \in (0,1)$ and a radius $r=r_{1-\gamma}>0$, we say that the union of $n$ balls ${ B}(X_n,r)$ makes a
$(1-\gamma)$-covering of $\textbf{X}$ if
$
F (r,X_n)=1-\gamma \,.
$
Complete (full) covering corresponds to $\gamma=0$. As the cdf $F (\cdot,X_n)$ is continuous, $r_{1-\gamma} \to {\rm CR}(X_n)$ as $\gamma\to 0$ and therefore the problem of $(1-\gamma)$-covering with small $\gamma$ can also be considered as a regularized version of the problem of full covering. Furthermore, numerical checking of weak covering (with an approximate value of $\gamma$) is straightforward while numerical checking of the full covering is practically impossible, if $d$ is large enough.
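Estimating $r_{1-\gamma}$ numerically amounts to taking an empirical quantile of $\rho(U,X_n)$. The following sketch (our code, with an arbitrary one-dimensional design) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(2)

def weak_covering_radius(X_n, gamma, d, n_mc=200_000):
    # r_{1-gamma}: the (1-gamma)-quantile of rho(U, X_n) for U ~ Unif([0,1]^d),
    # so that (approximately) F(r_{1-gamma}, X_n) = 1 - gamma
    U = rng.random((n_mc, d))
    rho = np.sqrt(((U[:, None, :] - X_n[None, :, :]) ** 2).sum(axis=-1)).min(axis=1)
    return float(np.quantile(rho, 1.0 - gamma))

# For the design {0.2, 0.6, 1.0} in [0,1]: F(r, X_n) = 5r for r <= 0.2,
# hence r_{0.9} = 0.18 while the full covering radius is r_1 = 0.2
X3 = np.array([[0.2], [0.6], [1.0]])
r90 = weak_covering_radius(X3, gamma=0.1, d=1)
```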
\subsection*{Covering of high dimensional sets }
We will now demonstrate three interesting phenomena of covering high-dimensional sets; these properties are consequences of the papers \cite{us,second_paper,noonan2022efficient}. The first phenomenon is that asymptotic properties (as $n\rightarrow \infty$) are extremely far from being reached in a (realistic) finite-$n$ regime. Consequently, the asymptotic results produce poor approximations in high dimensions even for large $n$ like $n=100{,}000$. Let us demonstrate this now. For $X_n$ a collection of $n$ i.i.d. uniform points in $\textbf{X}$, the asymptotic behaviour of $F (\cdot,X_n)$ as $n\rightarrow \infty$ was the focus of study in \cite{us}.
In particular, we have the following asymptotic result:
\begin{eqnarray}
\label{eq:Zador1}
F_{n,d}(t) \rightarrow F_d(t):= 1-\exp(-t^{d}) \;\; \mbox{ as $n\rightarrow \infty$}\,,
\end{eqnarray}
where $F_{n,d}(t):={\rm Pr}( n^{1/d} V_d^{1/d} \rho (U,X_n) \leq t ) $ and $V_d= {\pi}^{d/2} / \left[\Gamma (d/2\!+\!1)\right]\,$
is the volume of the unit Euclidean ball ${\cal B}(0,1)$.
For the popular scenario $\textbf{X}=[0,1]^d$, in Figure~\ref{key_figure1} we depict, for $n=1{,}000$ (blue pluses) and $n=10{,}000$ (black circles), the value $F (r_{asy},X_n)$ as a function of $d$, where $r_{asy}$ is the asymptotic radius obtained from \eqref{eq:Zador1} to achieve 0.9-covering. We see that very quickly, and for $n$ that would be deemed large, the true weak covering is significantly less than 0.9 and quickly tends to zero as $d$ grows. The big difference between the asymptotic and finite regimes is further illustrated in Figure~\ref{key_figure2}, where a solid black line depicts $F (r,X_n)$ as a function of $r$ with $d=20$ and $n=10{,}000$. In this figure, the dashed red line is the approximation obtained from the asymptotic result \eqref{eq:Zador1}, that is, the approximation $F(r,X_n) \approx F_d(n^{1/d}V_d^{1/d}r)$.
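Inverting the limit \eqref{eq:Zador1} gives an explicit expression for the asymptotic radius: for a target covering level $1-\gamma$, $F_d(t)=1-\gamma$ yields $t=(-\ln\gamma)^{1/d}$ and hence $r_{asy} = \left(-\ln\gamma / (nV_d)\right)^{1/d}$. A minimal sketch (our code):

```python
import math

def unit_ball_volume(d):
    # V_d = pi^(d/2) / Gamma(d/2 + 1), the volume of the unit Euclidean ball
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def asymptotic_radius(n, d, level=0.9):
    # Invert the limit F_d(t) = 1 - exp(-t^d) at level 1 - gamma:
    # t = (-log(1 - level))^(1/d), then r_asy = t / (n V_d)^(1/d)
    t = (-math.log(1.0 - level)) ** (1.0 / d)
    return t / (n * unit_ball_volume(d)) ** (1.0 / d)
```

For instance, $d=20$ and $n=10{,}000$ give $r_{asy}\approx 0.79$ for the 0.9-covering level.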
For $\textbf{X}=[-1,1]^d$, the weak covering properties of a number of random and deterministic designs were studied in \cite{us} and \cite{second_paper}. The second phenomenon of weak covering of high-dimensional sets is the so-called `$\delta$-effect'.
The $\delta$-effect is a high-dimensional phenomenon leading to the recommendation to restrict the design $X_n$ to the cube $[-\delta,\delta]^d$ with $0<\delta<1$, instead of $[-1,1]^d$. An example of the $\delta$-effect is demonstrated in Figure~\ref{Main_pic1}. Here, for $d=50$, $X_n$ is a sample of $n$ i.i.d. uniform random vectors from the $\delta$-cube $[-\delta,\delta]^d$. The $x$-axis corresponds to the value of $\delta$ and the $y$-axis corresponds to the proportion of $\textbf{X}$ covered. For $n=1000$ (solid black), $10000$ (dashed blue), and $100000$ (dotted green), the value of $r$ has been selected to ensure that the covering at the optimal $\delta$ is 0.9. From this picture it is clear that in high dimensions, even with a large number of design points, it is beneficial for weak covering to choose the design in a $\delta$-cube with $\delta \ll 1$. The case of $\delta=1$ leads to poor weak covering.
\begin{figure}[!h]
\centering
\begin{minipage}{.5\textwidth}
\includegraphics[width=1\linewidth]{Covering_using_asymptotic.png}
\caption{Covering using asymptotic \\radius: $n=1000,10000$. }
\label{key_figure1}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{Covering_n_1000_d_20.png}
\caption{Weak covering and its asymptotic approximation: $d=20, n=10000$ }
\label{key_figure2}
\end{minipage}
\end{figure}
In Figure~\ref{Main_pic2} we demonstrate another phenomenon of covering the high-dimensional cube $\textbf{X}=[-1,1]^d$ (along with the $\delta$-effect). The third phenomenon highlights the difficulty and excessive nature of requiring the full covering of $\textbf{X}$. Motivated by the numerical results of \cite{us}, in \cite{noonan2022efficient} a theoretical investigation into a $2^{d-1}$ design of maximum resolution concentrated at the
points
$(\pm 1/2, \ldots, \pm 1/2) \in \mathbb{R}^d$ was performed. For this design, the cdf $F (\cdot,X_n)$ is shown with a black line, and we also indicate the locations of $r_{1}={\rm CR}(X_n)$ and $r_{0.999}$ by vertical red and green lines, respectively. In this figure, we take $d=10$ and therefore $n=512$.
It is very easy to compute the covering radius analytically (for any $d>2$):
${\rm CR}(X_n)=\sqrt{d+8}/2$; for $d=10$ this gives
${\rm CR}(X_n) \simeq 2.1213$.
The value of $r_{0.999}$ satisfies $r_{0.999}(X_n) \simeq 1.3465$. This value has been computed using very accurate approximations developed in \cite{noonan2022efficient}; we claim 3 correct decimal places in $r_{0.999}$. This figure illustrates Theorem 3.4 in \cite{noonan2022efficient}, which states that for any $0<\gamma<1$ and for this special construction $X_n$, ${r_{1-\gamma}/r_1 \rightarrow 1/\sqrt{3}}$ as $d\rightarrow \infty$. So in large dimensions, covering, for example, $99.9\%$ of $\textbf{X}$ can be achieved with a radius approximately $1/\sqrt{3} \simeq 0.577$ times the full covering radius.
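The value ${\rm CR}(X_n)=\sqrt{d+8}/2$ can be confirmed numerically: for this design the maximum of $\rho(\cdot,X_n)$ over $\textbf{X}$ is attained at cube vertices with an odd number of negative coordinates, where the nearest design point differs in sign at one coordinate, giving squared distance $(d-1)/4 + 9/4 = (d+8)/4$. A sketch (our code):

```python
import itertools
import math
import numpy as np

def max_resolution_design(d):
    # the 2^(d-1) points (±1/2, ..., ±1/2) with an even number of minus signs
    pts = [p for p in itertools.product((-0.5, 0.5), repeat=d)
           if sum(c < 0 for c in p) % 2 == 0]
    return np.array(pts)

d = 10
X_n = max_resolution_design(d)                                # 512 design points
V = np.array(list(itertools.product((-1.0, 1.0), repeat=d)))  # 2^d cube vertices
rho = np.sqrt(((V[:, None, :] - X_n[None, :, :]) ** 2).sum(axis=-1)).min(axis=1)
cr = float(rho.max())   # attained at odd-parity vertices; equals sqrt(d + 8)/2
```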
\begin{figure}[!h]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{delta_d_50_gamma_09.png}
\caption{Weak covering of $\textbf{X}=[-1,1]^d$:\\ $d=50$, $n=1000,10000,100000$. }
\label{Main_pic1}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{main_pic_099_fract_fact.png}
\caption{$F(r,X_{n})$ for $2^{d-1}$ design with $r_{0.999}$ and $r_{1}$: $d=10$}
\label{Main_pic2}
\end{minipage}
\end{figure}
\subsection*{Conclusion}
The problem of covering a given compact set $\textbf{X} \subset \mathbb{R}^d$ is formulated and several issues occurring in the case when $d$ is large are discussed. It is demonstrated that common covering schemes like uniform random or pseudo-random sampling in $\textbf{X}$ provide poor covering, but there are ways of improving such coverings.
The results discussed have implications for devising exploration strategies in a wide variety of global optimization algorithms in high dimensions.
\paragraph*{Related entries from within the Encyclopedia of Optimization:}
Random search for global optimization; Random search methods; Convergence of global random search algorithms.
\bibliographystyle{unsrt}
| {
"timestamp": "2022-11-07T02:08:13",
"yymm": "2211",
"arxiv_id": "2211.02312",
"language": "en",
"url": "https://arxiv.org/abs/2211.02312",
"abstract": "Let $(\\mathcal{X},\\rho)$ be a metric space and $\\lambda$ be a Borel measure on this space defined on the $\\sigma$-algebra generated by open subsets of $\\mathcal{X}$; this measure $\\lambda$ defines volumes of Borel subsets of $\\mathcal{X}$. The principal case is where $\\mathcal{X} = \\mathbb{R}^d$, $\\rho $ is the Euclidean metric, and $\\lambda$ is the Lebesgue measure. In this article, we are not going to pay much attention to the case of small dimensions $d$ as the problem of construction of good covering schemes for small $d$ can be attacked by the brute-force optimization algorithms. On the contrary, for medium or large dimensions (say, $d\\geq 10$), there is little chance of getting anything sensible without understanding the main issues related to construction of efficient covering designs.",
"subjects": "Optimization and Control (math.OC)",
"title": "Covering of high-dimensional sets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806552225684,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7077274202647655
} |
https://arxiv.org/abs/2209.08281 | Improved Generalization Bound and Learning of Sparsity Patterns for Data-Driven Low-Rank Approximation | Learning sketching matrices for fast and accurate low-rank approximation (LRA) has gained increasing attention. Recently, Bartlett, Indyk, and Wagner (COLT 2022) presented a generalization bound for the learning-based LRA. Specifically, for rank-$k$ approximation using an $m \times n$ learned sketching matrix with $s$ non-zeros in each column, they proved an $\tilde{\mathrm{O}}(nsm)$ bound on the \emph{fat shattering dimension} ($\tilde{\mathrm{O}}$ hides logarithmic factors). We build on their work and make two contributions.1. We present a better $\tilde{\mathrm{O}}(nsk)$ bound ($k \le m$). En route to obtaining this result, we give a low-complexity \emph{Goldberg--Jerrum algorithm} for computing pseudo-inverse matrices, which would be of independent interest.2. We alleviate an assumption of the previous study that sketching matrices have a fixed sparsity pattern. We prove that learning positions of non-zeros increases the fat shattering dimension only by ${\mathrm{O}}(ns\log n)$. In addition, experiments confirm the practical benefit of learning sparsity patterns. | \section{INTRODUCTION}\label{sec:introduction}
Low-rank approximation (LRA) has played a crucial role in analyzing matrix data.
Although the singular value decomposition (SVD) provides an optimal LRA, it is too costly when the data size is huge.
To overcome this limitation, researchers have developed fast LRA methods with \emph{sketching}, whose basic form is as follows: given an input matrix $A \in \mathbb{R}^{n \times d}$ and a target low rank $k$, choose a sketching matrix $S \in \mathbb{R}^{m \times n}$ with $k \le m \le \min\set{n, d}$ and compute an LRA matrix for $SA \in \mathbb{R}^{m \times d}$.
If $S$ is drawn from an appropriate distribution, the resulting matrix is a good LRA of $A$ with high probability \citep{Sarlos2006-ob,Clarkson2009-yb,Clarkson2017-jq}.
This randomized sketching paradigm has led to various time- and space-efficient algorithms in numerical linear algebra.
We refer the reader to \citep{Woodruff2014-ya,Martinsson2020-mn} for more details of this area.
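As a concrete illustration of the sketching paradigm, the following sketch-and-solve variant uses a CountSketch-style sparse matrix with $s=1$ non-zero per column and projects $A$ onto the top-$k$ right singular subspace of $SA$. This is our illustrative code, one generic variant of the scheme, not the specific learned method studied below.

```python
import numpy as np

rng = np.random.default_rng(0)

def countsketch(m, n):
    # sparse sketching matrix S in R^{m x n}: one ±1 non-zero per column (s = 1)
    S = np.zeros((m, n))
    S[rng.integers(0, m, size=n), np.arange(n)] = rng.choice((-1.0, 1.0), size=n)
    return S

def sketched_lra(A, k, m):
    # sketch-and-solve LRA: compute the small matrix SA, take its top-k right
    # singular vectors V_k, and return the projection A V_k V_k^T
    S = countsketch(m, A.shape[0])
    _, _, Vt = np.linalg.svd(S @ A, full_matrices=False)
    Vk = Vt[:k].T
    return (A @ Vk) @ Vk.T
```

When $A$ has exact rank $k$ and $SA$ retains that rank (which holds generically), the row space of $SA$ coincides with that of $A$ and the output recovers $A$ exactly.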
While such LRA methods with randomized sketching enjoy rigorous guarantees even for worst-case input matrices, a recent line of work \citep{Indyk2019-cn,Liu2020-hu,Indyk2021-yn} suggests that \emph{learning-based} LRA methods can attain significantly smaller approximation errors when we can use past data to better handle future data.
They have achieved fast and more accurate LRA by learning sketching matrices $S$ to minimize approximation errors over past data.
As for the theoretical side of learning-based LRA, \citet{Bartlett2022-mu} recently presented generalization bounds for learning sketching matrices.
Specifically, they proved an $\tilde\mathrm{O}(nsm)$\footnote{We use $\tilde\mathrm{O}$ and $\tilde\Omega$ to hide logarithmic factors.} upper bound on the \emph{fat shattering dimension} for learning an $m \times n$ sketching matrix with $s$ non-zeros at \emph{fixed} positions in each column.
They also showed an $\Omega(ns)$ lower bound.
We give an overview of their work in \cref{subsec:bartlett-overview}.
Their study has raised some natural questions.
For example, can we narrow the $\tilde\mathrm{O}(m)$ gap between the upper and lower bounds?
Moreover, generalization bounds for learning-based LRA with \emph{changeable} sparsity patterns are awaited since learning positions of non-zeros is considered to be a promising direction \citep{Indyk2021-yn} and its effectiveness has been partly confirmed \citep{Liu2020-hu}.
\subsection{Our Contribution}
Building on \citep{Bartlett2022-mu}, we address the aforementioned questions and make two contributions.
First, we improve the previous $\tilde\mathrm{O}(nsm)$ upper bound by replacing the $\mathrm{O}(m)$ factor with $\mathrm{O}(\log m)$, which leads to a better $\tilde\mathrm{O}(nsk)$ bound ($k \le m$).
Although the \emph{sketching dimension}, $m$, is often set to, for example, $4k$ in practice, there is no such theoretical relation as $m = \mathrm{O}(k)$.
Thus, our bound indeed improves the previous one.
We take the same proof strategy as \citep{Bartlett2022-mu} and represent computational procedures of a loss function by a \emph{Goldberg--Jerrum (GJ) algorithm}. Our technical contribution is to develop a new GJ algorithm for computing pseudo-inverse matrices with a smaller \emph{predicate complexity} than the previous one, which would be of independent interest. To demonstrate its usefulness, we also give a generalization bound for a learning-based Nystr\"om method by using our GJ algorithm.
Second, we give a generalization bound for learning-based LRA with changeable sparsity patterns. Supposing we can learn both positions and values of $ns$ non-zeros in a sketching matrix $S$, we prove that our $\tilde\mathrm{O}(nsk)$ upper bound on the fat shattering dimension increases only by $\mathrm{O}(ns\log n)$, despite the presence of exponentially many possible sparsity patterns in $ns$.
Hence, the bound remains $\tilde\mathrm{O}(nsk)$ (ignoring $\mathrm{O}(\log n)$) even when the sparsity pattern can change.
Also, experiments show that a recent efficient learning-based LRA method \citep{Indyk2021-yn}, which used fixed sparsity patterns, can achieve higher accuracy with changeable sparsity patterns, suggesting the practical benefit of our result.
\subsection{Related Work}
The most relevant study to ours is \citep{Bartlett2022-mu}.
They proved generalization bounds for learning-based LRA and other methods in numerical linear algebra.
Other theoretical results related to learning-based LRA include \emph{safeguard} guarantees \citep{Indyk2019-cn} and \emph{consistency} \citep{Indyk2021-yn}, which are different from generalization guarantees, as mentioned in \citep[Section 2.5]{Bartlett2022-mu}.
\citet{Gupta2017-ng} initiated the study of a PAC-learning approach to algorithm configuration, which is also called \emph{data-driven algorithm design} \citep{Balcan2021-fy}.
Recent studies have presented generalization bounds for various learning-based algorithms, e.g., integer programming methods \citep{Balcan2018-pe,Balcan2021-kv,Balcan2022-em}, clustering \citep{Balcan2020-im}, and heuristic search \citep{Sakaue2022-nt}.
\citet{Balcan2021-jv} presented a general theory for deriving generalization bounds based on piecewise structures of \emph{dual} function classes.
Their idea, however, does not lead to strong guarantees in learning-based LRA, as discussed in \citep[Appendix E]{Bartlett2022-mu}.
As with \citep{Bartlett2022-mu}, we consider a class of \emph{proxy loss} functions to obtain a generalization bound.
This idea has a slight connection to \citep{Balcan2020-gm}, which approximates dual functions with simpler ones, while the technical details are different.
\section{BACKGROUND}
For any positive integer $n$, let $[n] = \set*{1,\dots,n}$.
Let $\sign(\cdot)$ be the sign function that takes $x \in \mathbb{R}$ as input and returns $+1$ if $x>0$, $-1$ if $x<0$, or $0$ if $x=0$.
We define the degree of a polynomial by its total degree.
The degree of a rational function refers to the maximum of its numerator's and denominator's degrees, where the fraction is reduced to the lowest terms.
Let $\rank$, $\tr$, and $\det$ denote the rank, trace, and determinant.
Let $\norm{A}_F = \sqrt{\tr(A^\top A)}$ denote the Frobenius norm of a matrix $A$.
The Moore--Penrose pseudo-inverse of a matrix $A$ is denoted by $A^\dagger$.
SVD refers to the compact singular value decomposition, i.e., for $A \in \mathbb{R}^{n \times d}$ with $\rank(A) = r$, SVD computes $U\in \mathbb{R}^{n \times r}$, $\Sigma \in \mathbb{R}^{r \times r}$, and $V \in \mathbb{R}^{d \times r}$ with $A = U \Sigma V^\top$.
For any vector $x \in \mathbb{R}^n$, let $\supp(x) \subseteq [n]$ denote the set of indices of non-zeros.
A \emph{sparsity pattern}, $J \subseteq [n]$, of $x$ indicates that $x_i$ is \emph{allowed} to be non-zero if and only if $i \in J$, hence $\supp(x) \subseteq J$.
\subsection{Learning Theory}\label{subsec:learning-theory}
Let $\mathcal{X}$ be a domain of inputs, $\mathcal{D}$ a distribution over $\mathcal{X}$, and $\mathcal{L} \subseteq [0,1]^\mathcal{X}$ a class of loss functions.
In our case, $\mathcal{X}$ is a class of input matrices, and each $L \in \mathcal{L}$ measures the approximation error of LRA and is parametrized by a sketching matrix (see \cref{subsec:bartlett-overview}).
For $\delta \in (0,1)$ and $\varepsilon > 0$,
we say $\mathcal{L}$ admits $(\varepsilon, \delta)$-\emph{uniform convergence} with $N$ samples if for i.i.d.\ draws $\tilde{X} = \set{x_1,\dots,x_N} \sim \mathcal{D}^N$, it holds that
\[
\Pr_{\tilde{X}}
\brc*{
\forall L \in \mathcal{L}, \,
\abs*{\frac{1}{N}\sum_{i=1}^N L(x_i) - \mathop{\mathbb{E}}_{x\sim\mathcal{D}}[L(x)]} \le \varepsilon
}
\ge 1-\delta.
\]
If such a uniform bound over $\mathcal{L}$ holds, we can bound the gap between the empirical and expected losses regardless of how sketching matrices are learned (e.g., manual or automatic).
The following \emph{pseudo-} and \emph{fat shattering dimensions} are fundamental notions of the complexity of function classes.
\begin{definition}[Pseudo- and fat shattering dimensions]\label{def:dim}
Let $\mathcal{L} \subseteq [0,1]^\mathcal{X}$ be a class of functions.
We say an input set $\set*{x_1,\dots,x_N}\subseteq \mathcal{X}$ is \textit{(pseudo) shattered} by $\mathcal{L}$ if there exist threshold values, $t_1,\dots,t_N \in \mathbb{R}$, satisfying the following condition:
for every $I \subseteq [N]$, there exists $L \in \mathcal{L}$ such that
\begin{equation}\label{eq:shatter}
i \in I \Leftrightarrow L(x_i) > t_i.
\end{equation}
For $\gamma > 0$, we say $\set*{x_1,\dots,x_N}\subseteq \mathcal{X}$ is \textit{$\gamma$-fat shattered} by $\mathcal{L}$ if the above condition holds with \eqref{eq:shatter} replaced by
\begin{equation}
i \in I \Rightarrow L(x_i) > t_i + \gamma
\quad \text{and} \quad
i \notin I \Rightarrow L(x_i) < t_i - \gamma.
\end{equation}
The \emph{pseudo-dimension}, $\mathrm{pdim}(\mathcal{L})$, and \emph{$\gamma$-fat shattering dimension}, $\mathrm{fatdim}_\gamma(\mathcal{L})$, are the maximum sizes of sets that are pseudo-shattered and $\gamma$-fat shattered by $\mathcal{L}$, respectively.
\end{definition}
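As a concrete illustration of \cref{def:dim}, the following NumPy sketch (our own, purely illustrative; the function name and toy loss table are not from the original development) brute-forces the pseudo-shattering condition for a finite table of loss values:

```python
import itertools
import numpy as np

def is_pseudo_shattered(loss_values, thresholds):
    """Check whether inputs x_1..x_N are pseudo-shattered by a finite class.

    loss_values: array of shape (|L|, N); row j holds (L_j(x_1), ..., L_j(x_N)).
    thresholds: witness thresholds t_1, ..., t_N.
    Returns True iff for every subset I of [N] some L_j realizes
    {i : L_j(x_i) > t_i} = I, as in the definition above.
    """
    loss_values = np.asarray(loss_values, dtype=float)
    t = np.asarray(thresholds, dtype=float)
    realized = {tuple(bool(b) for b in row > t) for row in loss_values}
    n = loss_values.shape[1]
    return all(tuple(p) in realized
               for p in itertools.product([False, True], repeat=n))

# Four loss functions realizing all 4 subsets of 2 inputs: shattered.
L = [[0.1, 0.1], [0.1, 0.9], [0.9, 0.1], [0.9, 0.9]]
print(is_pseudo_shattered(L, [0.5, 0.5]))       # True
print(is_pseudo_shattered(L[:3], [0.5, 0.5]))   # False: subset {1, 2} missing
```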
It is well-known that $N = \Omega(\varepsilon^{-2}\cdot(\mathrm{pdim}(\mathcal{L}) + \log\delta^{-1}))$ samples are sufficient for ensuring $(\varepsilon, \delta)$-uniform convergence, and a similar guarantee holds if $\mathrm{fatdim}_\gamma(\mathcal{L})$ with $\gamma = \Omega(\varepsilon)$ is bounded.
We refer the reader to \citep[Theorems 19.1 and 19.2]{Anthony1999-mm} for details.
\subsection{Low-Rank Approximation}\label{subsec:lra}
Let $A \in \mathbb{R}^{n\times d} $ be an input matrix with $n \ge d$.
We assume $\rank(A) > 0$ and $\norm{A}_F^2 = 1$ by normalization.
For $k \in [d]$, we consider computing a rank-$k$ approximation of $A$.
Let $\brc{A}_k \in \mathbb{R}^{n \times d}$ denote an optimal rank-$k$ approximation, i.e.,
\[
\brc{A}_k \in \argmin\Set*{\norm{A - X}_F^2}{X \in \mathbb{R}^{n \times d}, \rank(X) = k}.
\]
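The optimality of the SVD truncation is the Eckart--Young theorem: keeping the $k$ largest singular values minimizes the Frobenius error, and the optimal error equals the tail of the spectrum. A minimal NumPy sketch (ours, for illustration):

```python
import numpy as np

def best_rank_k(A, k):
    """Optimal rank-k approximation [A]_k via truncated SVD (Eckart--Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
A /= np.linalg.norm(A)          # normalize so that ||A||_F = 1, as in the text
Ak = best_rank_k(A, 2)
# The optimal error is the tail of the spectrum: ||A - [A]_k||_F^2 = sum_{i>k} s_i^2
err = np.linalg.norm(A - Ak)**2
```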
Although we can compute $\brc{A}_k$ with SVD in $\mathrm{O}(nd^2)$ time \citep[Section 8.6.3]{Golub2013-xr}, this approach is time- and space-consuming when $A$ is huge.
\Cref{alg:scw} presents an efficient LRA algorithm with a sketching matrix $S \in \mathbb{R}^{m \times n}$ \citep{Sarlos2006-ob,Clarkson2009-yb,Clarkson2017-jq}, which is called the SCW algorithm after the authors' initials.
\Cref{alg:scw} is more efficient than computing $\brc{A}_k$ if we set the sketching dimension, $m$, to a much smaller value than $d$, whereas we need $m \ge k$ to get a rank-$k$ approximation.
Let $\mathrm{SCW}_k(S, A)$ denote the output of \cref{alg:scw} with a sketching matrix $S$ and an input matrix $A$.
It is known that for $\alpha > 0$, sketching matrices with $m = \tilde\Omega(k/\alpha)$ drawn from an appropriate distribution satisfy $\norm{A - \mathrm{SCW}_k(S, A)}_F \le (1+\alpha)\norm{A - \brc*{A}_k}_F$ with high probability (e.g., \citep[Section 4.1]{Woodruff2014-ya}).
\citet{Indyk2019-cn} showed that machine-learned sketching matrices often enable more accurate LRA than random ones in practice.
Given a training dataset $\mathcal{A}_{\text{train}} \subseteq \mathbb{R}^{n \times d}$ of input matrices, they proposed to learn $S$ by minimizing the empirical risk $\frac{1}{|\mathcal{A}_{\text{train}}|}\sum_{A\in\mathcal{A}_{\text{train}}} \norm{A - \mathrm{SCW}_k(S, A)}_F^2$.
Specifically, they learned sparse $S$ with the stochastic gradient descent method (SGD) by regarding non-zeros in $S$ at fixed positions as tunable parameters (where the sparsity of $S$ makes $\mathrm{SCW}_k$ efficient).
Later, researchers further studied learning-based LRA methods \citep{Liu2020-hu,Ailon2021-ia,Indyk2021-yn}, which we will overview in \cref{subsec:experiment-background}.
\begin{algorithm}[tb]
\caption{$\mathrm{SCW}_k(S, A)$}
\label{alg:scw}
\begin{algorithmic}[1]
\State Compute $SA$
\If{$SA$ is a zero matrix}
\State \Return an $n\times d$ zero matrix
\EndIf
\State $U$, $\Sigma$, $V$ $\gets \mathrm{SVD}(SA)$ \Comment{$SA = U\Sigma V^\top$}
\State Compute $AV$
\State \Return $\brc{AV}_k V^\top$
\end{algorithmic}
\end{algorithm}
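For concreteness, \cref{alg:scw} can be sketched in NumPy as follows (our illustration; the rank cutoff `1e-12` is a numerical tolerance standing in for the exact rank used by the compact SVD):

```python
import numpy as np

def scw(S, A, k):
    """NumPy sketch of SCW_k(S, A) from Algorithm 1."""
    SA = S @ A
    if not SA.any():                        # SA is a zero matrix
        return np.zeros_like(A)
    U, sig, Vt = np.linalg.svd(SA, full_matrices=False)
    r = int(np.sum(sig > 1e-12))            # compact SVD: keep rank(SA) factors
    V = Vt[:r].T                            # SA = U Sigma V^T
    AV = A @ V
    Uk, sk, Wt = np.linalg.svd(AV, full_matrices=False)
    AVk = (Uk[:, :k] * sk[:k]) @ Wt[:k, :]  # [AV]_k via truncated SVD
    return AVk @ V.T
```

With the trivial sketch $S = I_n$ (so $m = n$), the output coincides with the optimal $\brc{A}_k$, since $V$ then spans the whole row space of $A$.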
\subsection{Overview of \texorpdfstring{\citep{Bartlett2022-mu}}{Bartlett et al., 2022}}\label{subsec:bartlett-overview}
\citet{Bartlett2022-mu} formally studied learning-based LRA as a statistical learning problem.
Let $\mathcal{A} \subseteq \mathbb{R}^{n\times d}$ be a class of input matrices and $\mathcal{S}\subseteq \mathbb{R}^{m\times n}$ a class of sketching matrices, where every $S \in \mathcal{S}$ has up to $s$ non-zeros in each column and the sparsity pattern is identical for all $S \in \mathcal{S}$.
Define a loss function $L:\mathcal{S}\times\mathcal{A}\to[0, 1]$\footnote{$L(S, A)$ is at most $\norm{A}_F^2 = 1$, as in \citep{Bartlett2022-mu}.} based on $\mathrm{SCW}_k$ as
\begin{equation}\label{eq:scw-loss}
L(S, A) = \norm{A - \mathrm{SCW}_k(S, A)}_F^2.
\end{equation}
Let $\mathcal{L} = \set{L(S, \cdot)}_{S \in \mathcal{S}} \subseteq {[0, 1]}^\mathcal{A}$ be the class of loss functions, where each $L(S, \cdot) \in \mathcal{L}$ is specified by $ns$ tunable parameters (non-zeros of $S$) and measures the approximation error of $\mathrm{SCW}_k(S, \cdot)$.
The authors presented the following $\tilde\mathrm{O}(nsm)$ bound on the $\varepsilon$-fat shattering dimension of $\mathcal{L}$.
\begin{theorem}[{\citet[Theorem 2.2]{Bartlett2022-mu}}]\label{thm:bartlett-fatdim}
For sufficiently small $\varepsilon > 0$, the $\varepsilon$-fat shattering dimension of $\mathcal{L}$ is bounded as
\[
\mathrm{fatdim}_\varepsilon(\mathcal{L}) = \mathrm{O}(ns \cdot (m + k\log(d/k) + \log(1/\varepsilon))).
\]
\end{theorem}
Intuitively, we can bound $\mathrm{fatdim}_\varepsilon(\mathcal{L})$ by assessing the complexity of computational procedures for evaluating $L(S, A)$.
In the LRA setting, however, directly bounding $\mathrm{fatdim}_\varepsilon(\mathcal{L})$ is not easy since $\mathrm{SCW}_k$ makes black-box use of SVD.
The authors overcame this difficulty by considering a class $\hat\mathcal{L}_\varepsilon$ of appropriate \emph{proxy loss} functions, which we can evaluate with relatively simple computational procedures, and by bounding its pseudo-dimension, $\mathrm{pdim}(\hat\mathcal{L}_\varepsilon)$.
As in the following definition, each $\hat{L}_\varepsilon(S, \cdot) \in \hat\mathcal{L}_\varepsilon$ is evaluated with a \emph{power-method}-based procedure so that $\hat{L}_\varepsilon(S, A)$ gives a sufficiently accurate approximation of $L(S, A)$.
\begin{definition}[Proxy loss]\label{def:proxy}
For any $A \in \mathcal{A}$, $S \in \mathcal{S}$, and $\varepsilon>0$, the proxy loss $\hat{L}_\varepsilon(S, A)$ is computed as follows:
\begin{enumerate}
\item Compute $B = A{(SA)}^\dagger(SA)$.{\label[step]{item:pinv-comp}}
\item For all possible $P_i \in \mathbb{R}^{d \times k}$ ($i = 1,\dots,\binom{d}{k}$) whose columns are $k$ distinct standard vectors in $\mathbb{R}^d$, compute $Z_i = {(BB^\top)}^qBP_i$, where $q = \mathrm{O}(\varepsilon^{-1}\log(d/\varepsilon))$.{\label[step]{item:for-all-Pi}}
\item Choose $Z = Z_i$ that minimizes $\norm{B - Z_iZ_i^\dagger B}_F^2$.{\label[step]{item:choose-Z}}
\item $\hat{L}_\varepsilon(S, A) = \norm{A - ZZ^\dagger B}_F^2$.
\end{enumerate}
Given the class $\mathcal{S}$ of sketching matrices, the class of proxy loss functions is defined as $\hat\mathcal{L}_\varepsilon = \set{\hat{L}_\varepsilon(S, \cdot)}_{S \in \mathcal{S}}$.
\end{definition}
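The procedure of \cref{def:proxy} can be restated as executable NumPy code (our sketch). Since \cref{item:for-all-Pi} enumerates all $\binom{d}{k}$ matrices $P_i$, this is feasible only for tiny $d$ and $k$ and is meant purely as an executable restatement of the definition, not as a practical algorithm:

```python
import itertools
import numpy as np

def proxy_loss(S, A, k, q):
    """Power-method-based proxy loss; only viable for tiny d and k."""
    n, d = A.shape
    SA = S @ A
    B = A @ np.linalg.pinv(SA) @ SA                          # Step 1
    best_err, best_Z = None, None
    for cols in itertools.combinations(range(d), k):         # Step 2
        P = np.eye(d)[:, cols]    # columns are k distinct standard vectors
        Z = np.linalg.matrix_power(B @ B.T, q) @ B @ P
        err = np.linalg.norm(B - Z @ np.linalg.pinv(Z) @ B)**2   # Step 3
        if best_err is None or err < best_err:
            best_err, best_Z = err, Z
    return np.linalg.norm(A - best_Z @ np.linalg.pinv(best_Z) @ B)**2  # Step 4
```

With the trivial sketch $S = I_n$ we get $B = A$, so the proxy loss lies between the optimal rank-$k$ error and $\norm{A}_F^2$.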
As discussed in \citep[Section 5.3]{Bartlett2022-mu}, it holds that $\mathrm{fatdim}_\varepsilon(\mathcal{L}) \le \mathrm{pdim}(\hat\mathcal{L}_\varepsilon)$.
Therefore, an upper bound on $\mathrm{pdim}(\hat\mathcal{L}_\varepsilon)$ immediately implies that on $\mathrm{fatdim}_\varepsilon(\mathcal{L})$.
A benefit of considering $\hat\mathcal{L}_\varepsilon$ is that analyzing its complexity is easier than $\mathcal{L}$.
The authors upper bounded $\mathrm{pdim}(\hat\mathcal{L}_\varepsilon)$ by modeling the computational procedure of $\hat{L}_\varepsilon$ as a \emph{Goldberg--Jerrum algorithm} \citep{Goldberg1995-af}.\footnote{Such a notion is often called the \emph{algorithmic computation tree}. Still, we here call it a GJ algorithm to be consistent with \citep{Bartlett2022-mu}. Although their original definition does not contain the equality condition in branch nodes, dealing with equalities is easy due to \citep[Corollary 2.1]{Goldberg1995-af}.}
\begin{definition}[Goldberg--Jerrum algorithm]\label{def:gj}
A GJ algorithm $\Gamma$ takes real values as input, and its procedure is represented by a binary tree with the following two types of nodes:
\begin{itemize}
\item Computation node that executes an arithmetic operation $v^{\prime \prime} = v \odot v^\prime$, where $\odot \in \set*{+, -, \times, \div}$.
\item Branch node with an out-degree of $2$, where branching is specified by the evaluation of a condition of the form $v \ge 0$ ($v \le 0$) or $v = 0$.
\end{itemize}
In both cases, $v$ and $v^\prime$ are either inputs or values computed at ancestor nodes.
Once input values are given, $\Gamma$ proceeds along a root--leaf path on the tree and sequentially performs operations specified by nodes on the path.
\end{definition}
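As a toy instance of \cref{def:gj} (our own example, for illustration only), the following function is a GJ algorithm with input variables $\rho = (x, y)$ that decides whether $xy/(x+y) > t$; every branch tests the sign of a polynomial of degree at most $2$ in $(x, y)$, so no division node is needed:

```python
def gj_decide(rho, t):
    """Toy GJ algorithm deciding x*y/(x+y) > t for rho = (x, y).
    Branch polynomials: x + y (equality node) and x*y - t*(x + y)."""
    x, y = rho
    v = x + y                    # computation node
    if v == 0:                   # branch node (equality test); treat as "false"
        return False
    w = x * y                    # computation node
    # x*y/(x+y) > t  iff  sign(x*y - t*(x+y)) agrees with sign(x+y)
    return (w - t * v > 0) if v > 0 else (w - t * v < 0)
```

Here the degree is $2$ and the predicate complexity is $2$ (two distinct branch polynomials), i.e., a $(2, 2)$-GJ algorithm in the terminology defined next.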
Then, they defined two notions, the \emph{degree} and \emph{predicate complexity}, to measure the complexity of GJ algorithms.
\begin{definition}[Degree and predicate complexity]
The \emph{degree} of a GJ algorithm is the maximum degree of any rational function of input variables it computes.
The \emph{predicate complexity} of a GJ algorithm is the number of distinct rational functions that appear at its branch nodes.
If a GJ algorithm has the degree and predicate complexity of at most $\Delta$ and $p$, respectively, we call it a $(\Delta, p)$-GJ algorithm.
\end{definition}
The following theorem says that if we can check whether a loss function value exceeds a threshold value or not using a $(\Delta, p)$-GJ algorithm with small $\Delta$ and $p$, the class of such loss functions has a small pseudo-dimension.
\begin{theorem}[{\citet[Theorem 3.3]{Bartlett2022-mu}}]\label{thm:bartlett-main}
Let $\mathcal{X}$ be an input domain and $\mathcal{L} = \Set{L_\rho:\mathcal{X}\to\mathbb{R}}{\rho \in \mathbb{R}^\nu}$ a class of functions parameterized by $\rho \in \mathbb{R}^\nu$.
Assume that for every $x \in \mathcal{X}$ and $t \in \mathbb{R}$, there is a $(\Delta, p)$-GJ algorithm $\Gamma_{x,t}$ that takes $\rho \in \mathbb{R}^\nu$ as input and returns ``true'' if $L_\rho(x) > t$ and ``false'' otherwise.
Then, it holds that
\[
\mathrm{pdim}(\mathcal{L}) = \mathrm{O}(\nu \log(p\Delta)).
\]
\end{theorem}
The authors proved that for any $A \in \mathcal{A}$ and $t \in \mathbb{R}$, whether $\hat{L}_\varepsilon(S, A) > t$ or not can be checked by a $(\Delta, p)$-GJ algorithm $\Gamma_{A, t}$ with $\Delta = \mathrm{O}(mk\varepsilon^{-1}\log(d/\varepsilon))$ and
\begin{equation}\label{eq:kappa-factors}
p = 2^m\cdot 2^{\mathrm{O}(k)}\cdot {(d/k)}^{3k},
\end{equation}
where input variables are $ns$ non-zeros of $S$, i.e., $\nu = ns$.
Therefore, \cref{thm:bartlett-main} implies
\begin{equation}\label{eq:bartlett-pdim-upper}
\mathrm{pdim}(\hat\mathcal{L}_\varepsilon) = \mathrm{O}(ns \cdot (m + k\log(d/k) + \log(1/\varepsilon))).
\end{equation}
The same bound applies to $\mathrm{fatdim}_\varepsilon(\mathcal{L})$ ($\le \mathrm{pdim}(\hat\mathcal{L}_\varepsilon)$), obtaining \cref{thm:bartlett-fatdim}.
They also gave an $\Omega(ns)$ lower bound on $\mathrm{fatdim}_\varepsilon(\mathcal{L})$; hence it is tight up to an $\tilde\mathrm{O}(m)$ factor.
\subsection{Warren's Theorem}\label{subsec:warren}
Warren's theorem \citep{Warren1968-hp} is a useful tool to evaluate the complexity of a class of polynomials.
The following extended version that allows the sign to be zero is presented in \citep[Corollary 2.1]{Goldberg1995-af}.
\begin{theorem}[Warren's theorem]\label{thm:warren}
Let $\set*{f_1,\dots,f_N}$ be a set of $N$ polynomials of degree at most $\Delta$ in $\nu$ real variables $\rho \in \mathbb{R}^\nu$.
If $N \ge \nu$, there are at most ${(8\mathrm{e} N\Delta/\nu)}^\nu$ distinct tuples of $(\sign(f_1(\rho)),\dots,\sign(f_N(\rho))) \in \set{-1, 0, +1}^N$.
\end{theorem}
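A quick empirical sanity check of \cref{thm:warren} (our own illustration): sample many $\rho \in \mathbb{R}^2$ and count the distinct sign patterns of $N$ random quadratics; the count never exceeds ${(8\mathrm{e} N\Delta/\nu)}^\nu$, typically with much slack.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, N, Delta = 2, 6, 2
# N random quadratics a + b*x + c*y + d*x^2 + e*x*y + f*y^2 in rho = (x, y)
coeffs = rng.standard_normal((N, 6))

def sign_pattern(rho):
    x, y = rho
    feats = np.array([1.0, x, y, x * x, x * y, y * y])
    return tuple(int(v) for v in np.sign(coeffs @ feats))

patterns = {sign_pattern(rng.standard_normal(2) * 3) for _ in range(20000)}
bound = (8 * np.e * N * Delta / nu) ** nu
assert len(patterns) <= bound   # Warren's bound holds
```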
This theorem is a key to proving \cref{thm:bartlett-main}, and we will also use it in \cref{sec:sparsity-pattern}.
To familiarize ourselves with the theorem, we give a proof sketch of \cref{thm:bartlett-main}. From the statement assumption in \cref{thm:bartlett-main}, whether $L_\rho(x) > t$ is determined by sign patterns of $p$ polynomials of degree at most $\Delta$ in $\rho\in\mathbb{R}^\nu$ that appear at the branch nodes of the GJ algorithm, $\Gamma_{x, t}$.
Thus, when $x_1,\dots,x_N \in \mathcal{X}$ and $t_1,\dots,t_N \in \mathbb{R}$ are given, the number of distinct outcomes (or tuples of $N$ Booleans) of GJ algorithms $\Gamma_{x_1, t_1}, \dots, \Gamma_{x_N, t_N}$, which take common $\rho$ as input, is bounded by the number of all possible sign patterns of $Np$ polynomials of degree at most $\Delta$ in $\rho$.
From Warren's theorem, the number of such sign patterns is at most ${(8\mathrm{e} Np\Delta/\nu)}^\nu$, which must be at least $2^N$ to shatter $x_1,\dots,x_N$.
The largest $N$ with ${(8\mathrm{e} Np\Delta/\nu)}^\nu \ge 2^N$ gives the $\mathrm{O}(\nu \log(p\Delta))$ bound on $\mathrm{pdim}(\mathcal{L})$, as in \cref{thm:bartlett-main}.
\section{IMPROVED UPPER BOUND}\label{sec:improved-upper}
We obtain an $\tilde\mathrm{O}(nsk)$ bound on $\mathrm{fatdim}_\varepsilon(\mathcal{L})$ by replacing the $\mathrm{O}(m)$ factor in \eqref{eq:bartlett-pdim-upper} with $\mathrm{O}(\log m)$.
To this end, we reduce the $2^m$ factor in the predicate complexity \eqref{eq:kappa-factors} to $m$.
Note that, although concatenating random matrices with $m = \tilde\mathrm{O}(k/\alpha)$ rows guarantees the $(1+\alpha)$-approximation as mentioned in \cref{subsec:lra} (known as safeguard guarantees), our improvement remains meaningful since $m$ can be much larger than $k$.
For example, even if we tolerate errors of $\mathrm{SCW}_k$ relative to $\brc{A}_k$ up to a factor of $1+\alpha$ with $\alpha \approx \varepsilon$, $m = \tilde\mathrm{O}(k/\alpha) \simeq \tilde\mathrm{O}(k/\varepsilon)$ does not imply $m = \mathrm{O}(k \log (d/k) + \log(1/\varepsilon))$; hence $\tilde\mathrm{O}(nsk)$ can be significantly smaller than $\tilde\mathrm{O}(nsm)$.
\subsection{Previous Approach}\label{subsec:pinv-previous}
We first explain where the $2^m$ factor comes from in \citep{Bartlett2022-mu}.
By carefully expanding the proof of \citep[Lemma 5.6]{Bartlett2022-mu}, one can confirm that it is caused by Step~\ref{item:pinv-comp} in \cref{def:proxy}, where a GJ algorithm computes $A(SA)^\dagger(SA)$.
For this step, they used an $(\mathrm{O}(m), 2^m)$-GJ algorithm that computes $Z^\dagger Z$ for an input matrix $Z$ with $m$ rows \citep[Lemma 5.2]{Bartlett2022-mu}. We describe their GJ algorithm below, as it will be useful later.
In what follows, let $I_r$ denote the $r \times r$ identity matrix for any $r \in \mathbb{Z}_{>0}$.
An essential tool for obtaining the GJ algorithm is the matrix inversion formula by the Cayley--Hamilton theorem.\footnote{\citet{Bartlett2022-mu} alternatively used a recursive formula of \citep{Csanky1976-kp}. This difference does not affect the conclusion.}
\begin{proposition}\label{prop:csanky}
Let $M$ be an $r \times r$ real matrix and
\[
\det(\lambda I_r - M) = \lambda^r + c_1\lambda^{r-1} + \cdots + c_r
\]
the characteristic polynomial of $M$.
If $M$ is invertible, we have $c_r = {(-1)}^r \det(M) \neq 0$ and
\[
M^{-1} = -\frac{1}{c_r} \cdot (M^{r-1} + c_1 \cdot M^{r-2} + \dots + c_{r-1}\cdot I_r).
\]
\end{proposition}
Let $Z$ be an input matrix with $m$ rows of rank $r \le m$.
Their GJ algorithm computes $Z^\dagger Z$ as follows.
It first finds a matrix $Y$ with $r$ linearly independent rows selected from the rows of $Z$.
Since $Y$ spans the row space of $Z$, it holds $Z^\dagger Z = Y^\top{(YY^\top)}^{-1}Y$; their algorithm computes this using \cref{prop:csanky} with $M = YY^\top$.
Note that we have
\begin{align}
c_i = {(-1)}^i \sum_{S: |S| = i} \det M[S]
\qquad \text{for $i = 1,\dots,r$,}
\end{align}
where $M[S]$ is the principal submatrix of $M$ with row and column indices in $S \subseteq [r]$;
hence, if we regard the entries of $M$ as variables, $c_1,\dots,c_r$ are polynomials of degree at most $r$.
Thus, regarding entries of $Z$ as variables, every rational function that appears in the above procedure has a degree of $\mathrm{O}(m)$.
What remains to be discussed is how to find $Y$ of full row rank.
To achieve this, their GJ algorithm goes over the rows of $Z$ and sequentially adds appropriate rows to $Y$ in a greedy fashion.
Whenever adding a new row, it checks whether the resulting $Y$ has full row rank by examining whether $\det(YY^\top) \neq 0$ or not.
This procedure involves polynomials of degree $\mathrm{O}(m)$, and the number of branch nodes is up to $2^m$ depending on which rows of $Z$ are selected, resulting in the $2^m$ predicate complexity.
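The greedy row selection can be mimicked in NumPy as follows (our sketch; the tolerance stands in for the exact test $\det(YY^\top) \neq 0$). Note how the subset of rows kept so far determines which branch is taken next, which is exactly the source of the $2^m$ predicate complexity.

```python
import numpy as np

def greedy_row_basis(Z, tol=1e-10):
    """Greedily select rows of Z forming a full-row-rank Y: tentatively add
    each row and keep it iff det(Y Y^T) != 0 still holds."""
    rows = []
    for z in Z:
        cand = rows + [z]
        Y = np.array(cand)
        if abs(np.linalg.det(Y @ Y.T)) > tol:
            rows = cand
    return np.array(rows)

Z = np.array([[1., 0., 0.],
              [2., 0., 0.],   # dependent on the first row: skipped
              [0., 1., 0.]])
Y = greedy_row_basis(Z)
# Y spans the row space of Z, so Z^+ Z = Y^T (Y Y^T)^{-1} Y
proj = Y.T @ np.linalg.inv(Y @ Y.T) @ Y
```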
\subsection{Our Result}\label{subsec:pinv-ours}
We present an $(\mathrm{O}(m), m)$-GJ algorithm for computing $Z^\dagger$ (right-multiplying $Z$ only increases the degree by one).
\begin{lemma}\label{lem:better-gj-alg}
Let $Z$ be an input matrix with $m$ rows.
There is an $(\mathrm{O}(m), m)$-GJ algorithm that computes $Z^\dagger$.
\end{lemma}
Our key idea is to begin by determining $r = \rank(ZZ^\top)$ with $m$ branch nodes, instead of branching to determine the choice of rows of $Z$.
Once $r$ is fixed, we can calculate $Z^\dagger$ without branching by the following formula.
\begin{proposition}[{\citet[Theorem 3]{Decell1965-iz}}]\label{prop:pinv}
Let $Z$ be a matrix with $m$ rows and $c_1,\dots,c_m$ the coefficients of the characteristic polynomial of $M = ZZ^\top \in \mathbb{R}^{m \times m}$, i.e.,
\[
\det(\lambda I_m - M) = \lambda^m + c_1\lambda^{m-1} +\dots+ c_m.
\]
If $r \ge 1$ is the largest index with $c_r\neq 0$, we have
\[
Z^\dagger = -\frac{1}{c_r} \cdot Z^\top
\prn*{
M^{r-1} + c_1 \cdot M^{r-2} +\dots+ c_{r-1} \cdot I_m
}.
\]
If $c_1 = \dots = c_m = 0$, $Z^\dagger$ is a zero matrix.
\end{proposition}
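The formula of \cref{prop:pinv} is directly executable. The sketch below (ours) computes the characteristic-polynomial coefficients with the Faddeev--LeVerrier recursion (our choice; any method producing $c_1,\dots,c_m$ works) and then applies the formula, using a numerical tolerance in place of the exact zero tests of a GJ algorithm:

```python
import numpy as np

def charpoly_coeffs(M):
    """Faddeev--LeVerrier recursion: coefficients [1, c_1, ..., c_m] of
    det(lambda I - M) = lambda^m + c_1 lambda^{m-1} + ... + c_m."""
    m = M.shape[0]
    c = [1.0]
    N = np.zeros_like(M)
    for i in range(1, m + 1):
        N = M @ N + c[-1] * np.eye(m)
        c.append(-np.trace(M @ N) / i)
    return c

def pinv_decell(Z, tol=1e-9):
    """Z^+ via the characteristic polynomial of M = Z Z^T (Proposition above);
    only m zero-tests on c_1, ..., c_m are needed."""
    m = Z.shape[0]
    M = Z @ Z.T
    c = charpoly_coeffs(M)
    nz = [i for i in range(1, m + 1) if abs(c[i]) > tol]
    if not nz:                    # c_1 = ... = c_m = 0, i.e. Z = 0
        return np.zeros_like(Z.T)
    r = max(nz)                   # largest index with c_r != 0
    acc = np.zeros_like(M)        # Horner: M^{r-1} + c_1 M^{r-2} + ... + c_{r-1} I
    for i in range(r):
        acc = acc @ M + c[i] * np.eye(m)
    return -(1.0 / c[r]) * Z.T @ acc
```

The result matches `np.linalg.pinv` for full-rank, rank-deficient, and zero inputs alike, with no branching on row subsets.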
By using this formula in lieu of \cref{prop:csanky}, we can obtain an $(\mathrm{O}(m), m)$-GJ algorithm that computes $Z^\dagger Z$.
\begin{proof}[Proof of \cref{lem:better-gj-alg}]
We give a concrete GJ algorithm.
Let $M = ZZ^\top$.
First, we compute the coefficients $c_1,\dots,c_m$ of $\det(\lambda I_m - M)$, which are polynomials of degree $\mathrm{O}(m)$ in the entries of $Z$.
Then, check whether $c_i \neq 0$ in decreasing order of $i$.
Once we find $c_i \neq 0$, set $r = i$ as the largest index $r$ with $c_r \neq 0$.
Note that this requires only $m$ branch nodes.
If $c_m = \dots = c_1 = 0$, let $Z^\dagger$ be a zero matrix.
Otherwise, we compute $Z^\dagger$ as in \cref{prop:pinv}.
Every rational function in the above calculation has a degree of $\mathrm{O}(m)$ in $Z$.
Thus, we obtain a desired $(\mathrm{O}(m),m)$-GJ algorithm.
\end{proof}
By performing Step~\ref{item:pinv-comp} in \cref{def:proxy} with our GJ algorithm, we can replace the $\mathrm{O}(m)$ factor in the upper bound \eqref{eq:bartlett-pdim-upper} with $\mathrm{O}(\log m)$, thus improving \cref{thm:bartlett-fatdim} as follows.
\begin{proposition}\label{prop:fat-dim-improved}
For sufficiently small $\varepsilon > 0$, the $\varepsilon$-fat shattering dimension of $\mathcal{L}$ is bounded as
\[
\mathrm{fatdim}_\varepsilon(\mathcal{L}) = \mathrm{O}(ns \cdot (\log m + k\log(d/k) + \log(1/\varepsilon))).
\]
\end{proposition}
\subsection{Application to the Nystr\"om Method}\label{subsec:nystrom}
We briefly digress to demonstrate the usefulness of our GJ algorithm (\cref{lem:better-gj-alg}).
We here consider the classical Nystr\"om method \citep{Nystrom1930-dx}.
The method takes a positive semidefinite matrix $A \in \mathbb{R}^{n \times n}$ as input and computes its rank-$r$ approximation as $AS{(S^\top AS)}^\dagger {(AS)}^\top$, where $S \in \mathbb{R}^{n \times r}$ is a sketching matrix.
Unlike the SCW algorithm (\cref{alg:scw}), it does not involve SVD and is hence more efficient.
Thus, it is a popular choice when handling large Laplacian and kernel matrices \citep{Gittens2016-pp}.
As with learning-based LRA methods discussed so far, we can naturally combine the Nystr\"om method with learning of sketching matrices.
Specifically, defining a loss function as
\begin{equation}\label{eq:nystrom-loss}
L(S, A) = \norm{A - AS{(S^\top AS)}^\dagger {(AS)}^\top}_F^2,
\end{equation}
we can learn high-performing sketching matrices from past data of $A$ by minimizing the empirical risk.
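A minimal NumPy sketch of the Nystr\"om loss \eqref{eq:nystrom-loss} (our illustration; the function name is ours):

```python
import numpy as np

def nystrom_loss(S, A):
    """Squared error of the Nystrom rank-<=r approximation
    A S (S^T A S)^+ (A S)^T of a PSD matrix A."""
    AS = A @ S
    return np.linalg.norm(A - AS @ np.linalg.pinv(S.T @ A @ S) @ AS.T)**2

rng = np.random.default_rng(0)
G = rng.standard_normal((6, 6))
A = G @ G.T                       # positive semidefinite input
A /= np.linalg.norm(A)            # normalize so that ||A||_F = 1
S = rng.standard_normal((6, 3))   # sketching matrix with r = 3 columns
loss = nystrom_loss(S, A)
```

Since the approximation has rank at most $r$, the loss is always at least the optimal rank-$r$ error, and (for PSD $A$) at most $\norm{A}_F^2$.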
When it comes to generalization guarantees, we are interested in the pseudo-dimension of $\mathcal{L} = \set{L(S, \cdot)}_{S \in \mathcal{S}}$ with $L$ defined as in \eqref{eq:nystrom-loss}, where we let $\mathcal{S} \subseteq \mathbb{R}^{n \times r}$ be a class of sketching matrices with $\nu$ non-zeros at fixed positions.
We analyze the pseudo-dimension of $\mathcal{L}$ by modeling the computational procedure of $L(S, A)$ defined in \eqref{eq:nystrom-loss} as a GJ algorithm.
We first compute ${(S^\top A S)}^\dagger$ with our GJ algorithm (\cref{lem:better-gj-alg}), whose degree and predicate complexity are $\mathrm{O}(r)$ and $r$, respectively, where entries of $S$ are variables.
Other operations for computing $L(S, A)$ require no branch nodes, and the degree remains $\mathrm{O}(r)$.
Consequently, we can compute $L(S, A)$ with an $(\mathrm{O}(r), r)$-GJ algorithm, and thus \cref{thm:bartlett-main} implies the following bound on $\mathrm{pdim}(\mathcal{L})$.
\begin{proposition}
For the class of $\mathcal{L}$ of loss functions \eqref{eq:nystrom-loss}, each of which is parameterized by an $n \times r$ sketching matrix $S \in \mathcal{S}$ with $\nu$ non-zeros at fixed positions, it holds that
\[
\mathrm{pdim}(\mathcal{L}) = \mathrm{O}(\nu\log r).
\]
\end{proposition}
We can also deal with changeable sparsity patterns by using \cref{thm:sparse-main}, which we will show in \cref{sec:sparsity-pattern}.
In this case, it will immediately follow that $\mathrm{pdim}(\mathcal{L}) = \mathrm{O}(\nu\log(nr))$.
Note that if we compute ${(S^\top A S)}^\dagger$ with the previous GJ algorithm described in \cref{subsec:pinv-previous}, its predicate complexity is $2^r$, resulting in $\mathrm{pdim}(\mathcal{L}) = \mathrm{O}(\nu r\log r)$.
Thus, this example suggests that our GJ algorithm can yield much better generalization bounds for classes of functions involving pseudo-inverse computation.
\section{LEARNING SPARSITY PATTERNS}\label{sec:sparsity-pattern}
This section studies generalization bounds when sparsity patterns of sketching matrices can change.
We show that even if the class $\mathcal{S}$ of sketching matrices contains all $m \times n$ matrices with $ns$ non-zeros, the fat shattering dimension of $\mathcal{L} = \set{L(S, \cdot)}_{S \in \mathcal{S}}$ increases only by $\mathrm{O}(ns\log n)$.
\subsection{General Result}
To deal with changeable sparsity patterns, we first present an extended version of \cref{thm:bartlett-main}.
\begin{theorem}\label{thm:sparse-main}
Let $\mathcal{X}$ be an input domain and $\mathcal{L} \subseteq \mathbb{R}^\mathcal{X}$ a class of functions with $\ell $ parameters $\rho \in \mathbb{R}^\ell $ that is $\nu$-sparse, i.e.,
\[
\mathcal{L} = \Set{L_\rho:\mathcal{X}\to\mathbb{R}}{\rho \in \mathbb{R}^\ell , |\supp(\rho)| \le \nu}.
\]
Assume that for every $x \in \mathcal{X}$ and $t \in \mathbb{R}$, there is a $(\Delta, p)$-GJ algorithm, $\Gamma_{x,t}$, that takes a $\nu$-sparse variable vector $\rho \in \mathbb{R}^\ell $ as input and returns ``true'' if $L_\rho(x) > t$ and ``false'' otherwise.
Then, we have
\[
\mathrm{pdim}(\mathcal{L}) = \mathrm{O}(\nu\log(\ell p\Delta)).
\]
\end{theorem}
Compared with \cref{thm:bartlett-main}, there are $\ell $ ($\ge \nu$) parameters, which are restricted to be $\nu$-sparse.
If we naively use \cref{thm:bartlett-main} without taking the sparsity into account, the pseudo-dimension bound turns out to be $\tilde\mathrm{O}(\ell )$, even though every $L_\rho$ has only $\nu$ tunable non-zero parameters.
Our \cref{thm:sparse-main} provides a refined bound that grows only logarithmically with $\ell $ and keeps the linear dependence on $\nu$.
The following proof idea comes from a PAC approach to one-bit compressed sensing \citep{Ahsen2019-yj}, but the way we use the idea differs significantly; indeed, the previous study does not combine it with Warren's theorem.
\begin{proof}[Proof of \cref{thm:sparse-main}]
The proof proceeds similarly to that of \citep[Theorem 3.3]{Bartlett2022-mu} (sketched in \cref{subsec:warren}), but we must take changeable sparsity patterns into account.
We arbitrarily fix $N$ pairs, $(x_1, t_1), \dots, (x_N, t_N)$, of an input and a threshold value.
We upper bound the number of all possible tuples of $N$ Booleans (or outcomes) returned by the $N$ GJ algorithms, $\Gamma_{x_1, t_1},\dots,\Gamma_{x_N, t_N}$, whose input variable $\rho \in \mathbb{R}^\ell $ is any $\nu$-sparse vector.
By the definition of the pseudo-dimension (see \cref{def:dim}), we need at least $2^N$ outcomes to shatter $\set{x_1,\dots,x_N}$, and thus the largest such $N$ gives an upper bound on $\mathrm{pdim}(\mathcal{L})$.
First, we fix a sparsity pattern $J \subseteq [\ell ]$ with $|J| = \nu$ and let
\[
\mathcal{L}_J = \Set*{L_\rho: \mathcal{X} \to \mathbb{R}}{\rho \in \mathbb{R}^\ell ,\ \supp(\rho) \subseteq J}.
\]
Note that we have $\mathcal{L} = \bigcup_{J\subseteq [\ell ]: |J| = \nu}\mathcal{L}_J$.
From the statement assumption, there is a $(\Delta, p)$-GJ algorithm $\Gamma_{x,t}$ that can check whether $L_\rho(x) > t$ or not.
That is, for any $(x,t)$, whether $L_\rho(x) > t$ or not is determined by sign patterns of $p$ polynomials of degree at most $\Delta$ in $\rho \in \mathbb{R}^\ell$.
Moreover, since $\supp(\rho) \subseteq J$, $\Gamma_{x ,t}$ takes up to $\nu$ variables as input.
Thus, once $J$ is fixed, outcomes of $\Gamma_{x_1, t_1},\dots,\Gamma_{x_N, t_N}$ are determined by sign patterns of $Np$ polynomials of degree at most $\Delta$ in $\nu$ variables.
The number of such sign patterns is at most ${(8\mathrm{e} Np\Delta/\nu)}^\nu$ by Warren's theorem (\cref{thm:warren}).
Next, we consider changing sparsity patterns.
As discussed above, a fixed sparsity pattern $J$ yields up to ${(8\mathrm{e} Np\Delta/\nu)}^\nu$ outcomes of $\Gamma_{x_1, t_1},\dots,\Gamma_{x_N, t_N}$.
If we feed $\rho \in \mathbb{R}^\ell$ with a new sparsity pattern $J^\prime$ of size $\nu$ to $\Gamma_{x_1, t_1},\dots,\Gamma_{x_N, t_N}$, then $Np$ polynomials that appear in the GJ algorithms may exhibit up to ${(8\mathrm{e} Np\Delta/\nu)}^\nu$ new sign patterns, which lead to at most that many new outcomes. Thus, when the sparsity pattern of $\rho$ can be any size-$\nu$ subset of $[\ell]$, the number of all possible outcomes of $\Gamma_{x_1, t_1},\dots,\Gamma_{x_N, t_N}$ is at most
\[
\text{``the number of sparsity patterns''} \times {(8\mathrm{e} Np\Delta/\nu)}^\nu.
\]
Since there are up to $\binom{\ell }{\nu} \le \ell ^\nu$ sparsity patterns, the number of all possible outcomes of $\Gamma_{x_1,t_1},\dots,\Gamma_{x_N,t_N}$ is at most $(8\mathrm{e} \ell Np\Delta/\nu)^\nu$.
In order for $\mathcal{L}$ to shatter $\set{x_1,\dots,x_N}$,
\[
2^N \le (8\mathrm{e} \ell Np\Delta/\nu)^\nu \Leftrightarrow N \le \nu \log_2(8\mathrm{e} \ell Np\Delta/\nu)
\]
must hold.
Since $\log_2y \le \frac23y$ for $y > 0$, the right-hand side is bounded from above as
\[
\nu\log_2(8\mathrm{e} \ell p\Delta) + \nu\log_2(N/\nu) \le \nu\log_2(8\mathrm{e} \ell p\Delta) + \frac23N.
\]
Rearranging the terms, we obtain $N \le 3\nu\log_2(8\mathrm{e} \ell p\Delta)$, hence $\mathrm{pdim}(\mathcal{L}) = \mathrm{O}(\nu \log(\ell p\Delta))$.
\end{proof}
\subsection{Result on Learning-Based LRA}\label{subsec:sparsity-lra}
We now return to the LRA setting and discuss the pseudo-dimension bound for the case of changeable sparsity patterns.
In this setting, we have $\ell = mn$ and $\nu = ns$ since every sketching matrix $S$ is of size $m\times n$ and has up to $ns$ non-zeros.
Furthermore, from the discussion in \cref{subsec:bartlett-overview,sec:improved-upper}, for any input $A \in \mathcal{A}$ and threshold value $t \in \mathbb{R}$, we can check whether the proxy loss value, $\hat{L}_\varepsilon(S, A)$, exceeds $t$ or not by using a $(\Delta, p)$-GJ algorithm with
\begin{align}
\Delta = \mathrm{O}(mk\varepsilon^{-1}\log(d/\varepsilon))
\quad\text{and}\quad
p = m\cdot 2^{\mathrm{O}(k)}\cdot (d/k)^{3k}.
\end{align}
Thus, from \cref{thm:sparse-main}, for the class $\hat\mathcal{L}_\varepsilon = \set{\hat{L}_\varepsilon(S, \cdot)}_{S \in \mathcal{S}}$ of proxy loss functions where $\mathcal{S}$ consists of sketching matrices with $ns$ non-zeros at any positions, it holds that
\[
\mathrm{pdim}(\hat\mathcal{L}_\varepsilon) = \mathrm{O}(ns \cdot (\log(mn) + k\log(d/k) + \log(1/\varepsilon))).
\]
The right-hand side is larger than the bound in \cref{prop:fat-dim-improved} only by $\mathrm{O}(ns\log n)$.
Note that narrowing the class $\mathcal{S}$ only decreases $\mathrm{pdim}(\hat\mathcal{L}_\varepsilon)$; hence, the bound remains true when each $S \in \mathcal{S}$ is restricted to have $s$ non-zeros in each column.
Since we have $\mathrm{fatdim}_\varepsilon(\mathcal{L}) \le \mathrm{pdim}(\hat\mathcal{L}_\varepsilon)$ as discussed in \cref{subsec:bartlett-overview}, we obtain the following result.
\begin{proposition}\label{prop:sparsity-lra}
Let $\mathcal{L} = \set{L(S, \cdot)}_{S \in \mathcal{S}}$ be the class of loss functions defined by \eqref{eq:scw-loss} where $\mathcal{S}$ contains sketching matrices with any sparsity patterns of size $ns$.
For sufficiently small $\varepsilon > 0$, the $\varepsilon$-fat shattering dimension of $\mathcal{L}$ is bounded as
\[
\mathrm{fatdim}_\varepsilon(\mathcal{L}) = \mathrm{O}(ns \cdot (\log(mn) + k\log(d/k) + \log(1/\varepsilon))).
\]
\end{proposition}
\section{EXPERIMENTS}\label{sec:experiment}
We confirm that learning sparsity patterns can improve the empirical accuracy of learning-based LRA methods.
Note that the uniform bound discussed in \cref{subsec:learning-theory} is agnostic to learning methods; therefore, we can use \cref{prop:sparsity-lra} to obtain generalization bounds for any methods to learn sparse sketching matrices.
\subsection{Background and Learning Methods}\label{subsec:experiment-background}
Let us first overview existing methods for learning sketching matrices.
\citet{Indyk2019-cn} initiated the study of learning-based LRA, as mentioned in \cref{subsec:lra}.
Assuming fixed sparsity patterns, they learned sketching matrices by applying SGD to the SCW-based loss \eqref{eq:scw-loss}, where gradients are computed via backpropagation through differentiable SVD.
\citet{Liu2020-hu} enhanced the previous method by first learning sparsity patterns with a greedy algorithm and then learning non-zeros via SGD.
A drawback of those two methods is that backpropagating through SVD is computationally expensive.
\citet{Indyk2021-yn} has overcome this issue by developing an efficient learning method based on a surrogate loss function.
While their method again assumes fixed sparsity patterns, we can naturally extend it to changeable sparsity patterns, as detailed later.
Another related work is \citep{Ailon2021-ia}, which proposed representing the linear layers of neural networks as products of sparse matrices, as in butterfly networks.
Although their idea is applicable to LRA, it requires sketching matrices with complicated structures; thus, we do not consider it below for simplicity.
Given the above background, a natural next direction is to extend the efficient method of \citep{Indyk2021-yn} to changeable sparsity patterns.
In \citep{Indyk2021-yn}, two kinds of methods are studied, one-shot and few-shot methods.
We focus on the latter and present how to modify it to learn both positions and values of non-zeros.
Their basic idea is to minimize the following surrogate loss instead of the SCW-based loss \eqref{eq:scw-loss}:
\begin{equation}\label{eq:surrogate-loss}
\tilde{L}(S, A) = \norm{U_k^\top S^\top S U - I_0}_F^2,
\end{equation}
where $U \in \mathbb{R}^{n \times d}$ is the column-orthogonal matrix computed by SVD of $A$ (assuming $\rank(A) = d$), $U_k \in \mathbb{R}^{n \times k}$ consists of the first $k$ columns of $U$, corresponding to the largest $k$ singular values, and $I_0 = [I_k, \bm{0}_{k, d-k}] \in \mathbb{R}^{k \times d}$ is the concatenation of the $k\times k$ identity matrix and the $k \times (d-k)$ zero matrix.
Unlike the SCW-based loss, differentiating the surrogate loss $\tilde{L}(S, A)$ with respect to $S$ does not require backpropagation through SVD and is hence more efficient.
Moreover, \citep[Theorem 2.2]{Indyk2021-yn} ensures the consistency of the surrogate loss, i.e., $\tilde{L}(S, A) \le \varepsilon$ implies $\norm{A - \mathrm{SCW}_k(S, A)}_F^2 \le (1 + \mathrm{O}(\varepsilon)) \norm{A - \brc{A}_k}_F^2$.
By minimizing the empirical surrogate loss via SGD, they learned non-zeros of sketching matrices at fixed positions.
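As an illustration, the surrogate loss \eqref{eq:surrogate-loss} is straightforward to evaluate directly from the definition. The following is only a minimal NumPy sketch (the original implementation uses JAX, and the function name is ours):

```python
import numpy as np

def surrogate_loss(S, A, k):
    """Surrogate loss ||U_k^T S^T S U - I_0||_F^2, assuming rank(A) = d."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)  # U: n x d, ordered by singular value
    d = U.shape[1]
    U_k = U[:, :k]                                   # top-k left singular vectors
    I_0 = np.eye(k, d)                               # [I_k, 0_{k, d-k}]
    M = U_k.T @ S.T @ S @ U - I_0
    return float(np.sum(M * M))                      # squared Frobenius norm
```

When $S^\top S = I_n$ (e.g., $S = I_n$ with $m = n$), we have $U_k^\top U = I_0$, so the loss vanishes, reflecting that such a sketch preserves the top-$k$ subspace exactly.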
To learn both positions and values of non-zeros based on the above idea, we use the projected gradient descent method, sometimes called iterative hard thresholding (IHT) in non-convex sparse optimization \citep{Jain2017-wl}.
The method works iteratively as with SGD.
In each iteration, given an input matrix $A \in \mathcal{A}_\text{train}$ in a training dataset, we update the sketching matrix as $S \gets \Pi_{s}(S - \eta \nabla \tilde{L}(S, A))$, where $\eta>0$ is a step size, $\nabla \tilde{L}(S, A)$ is the gradient with respect to $S$, and $\Pi_s$ is a projection operator that preserves the largest $s$ entries in absolute value in each column and sets the others to zero.
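The column-wise hard-thresholding projection $\Pi_s$ and a single IHT update can be sketched as follows (a minimal NumPy version, not the implementation used in the experiments; function names are ours):

```python
import numpy as np

def project_columns(S, s):
    """Pi_s: keep the s largest-magnitude entries of each column of S, zero the rest."""
    keep = np.argsort(-np.abs(S), axis=0)[:s, :]   # row indices of the s largest per column
    S_proj = np.zeros_like(S)
    cols = np.arange(S.shape[1])
    S_proj[keep, cols] = S[keep, cols]
    return S_proj

def iht_step(S, grad, s, eta=0.1):
    """One projected-gradient (IHT) update: S <- Pi_s(S - eta * grad)."""
    return project_columns(S - eta * grad, s)
```

Note that the projection may move non-zeros to new positions between iterations, which is exactly what allows the sparsity pattern itself to be learned.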
In the following experiments, we refer to this method, which learns positions of non-zeros, as \textbf{Learn}\xspace and compare it with two baselines: \textbf{Fix}\xspace and \textbf{Dense}\xspace.
\textbf{Fix}\xspace is the method studied in \citep{Indyk2021-yn}, which learns non-zeros at fixed positions via SGD.
\textbf{Dense}\xspace learns values of all entries via SGD.
Note that although \textbf{Dense}\xspace naturally attains the best accuracy among them, it results in dense sketching matrices, which cannot benefit from the efficiency of sparse matrix multiplication and cause longer runtime of $\mathrm{SCW}_k$ when deployed for future data.
\begin{figure*}[t]
\centering
\begin{minipage}[t]{.32\textwidth}
\includegraphics[width=1.0\textwidth]{fig/train_loss_s1.pdf}
\subcaption*{Surrogate loss, $s=1$}
\end{minipage}
\centering
\begin{minipage}[t]{.32\textwidth}
\includegraphics[width=1.0\textwidth]{fig/train_loss_s3.pdf}
\subcaption*{Surrogate loss, $s=3$}
\end{minipage}
\centering
\begin{minipage}[t]{.32\textwidth}
\includegraphics[width=1.0\textwidth]{fig/train_loss_s5.pdf}
\subcaption*{Surrogate loss, $s=5$}
\end{minipage}
\centering
\begin{minipage}[t]{.32\textwidth}
\includegraphics[width=1.0\textwidth]{fig/train_error_s1.pdf}
\subcaption*{SCW loss, $s=1$}
\end{minipage}
\centering
\begin{minipage}[t]{.32\textwidth}
\includegraphics[width=1.0\textwidth]{fig/train_error_s3.pdf}
\subcaption*{SCW loss, $s=3$}
\end{minipage}
\centering
\begin{minipage}[t]{.32\textwidth}
\includegraphics[width=1.0\textwidth]{fig/train_error_s5.pdf}
\subcaption*{SCW loss, $s=5$}
\end{minipage}
\caption{
Surrogate and SCW-based loss values on training datasets.
The x-axis indicates the number of iterations of SGD (for \textbf{Fix}\xspace and \textbf{Dense}\xspace) or stochastic IHT (for \textbf{Learn}\xspace).
The error band indicates the standard deviation over the $30$ random trials.
}
\label{fig:train}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{fig/test.pdf}
\caption{
SCW-based loss values on test datasets. The error bar shows the standard deviation over the $30$ random trials.
}
\label{fig:test}
\end{figure}
\subsection{Settings and Results}
Experiments were conducted on a macOS machine with Apple M2 CPU and 24 GB RAM.
We implemented the methods in Python 3.9.12 and used JAX 0.3.15 \citep{Bradbury2018-xq} to compute gradients.
When performing SVD, we regarded singular values smaller than $10^{-8}$ as zero.
Let $n = 100$, $d = 50$, $m = 10$, and $k = 5$.
We made a rank-$k$ matrix $A_\text{true} \in \mathbb{R}^{n \times d}$ by multiplying $n\times k$ and $k\times d$ matrices whose entries were drawn from the uniform distribution on $[0, 1]$.
We then let $A = A_\text{true} + 0.1\times A_\text{noise}$, where the entries of $A_\text{noise}$ were drawn from the standard normal distribution, and normalized $A$ so that $\norm{A}_F = 1$ holds.
By drawing $300$ noise terms independently, we created a dataset of $300$ input matrices $A$.
We split them into training and test datasets of sizes $200$ and $100$, respectively.
We made $30$ random training/test splits to calculate the average and standard deviation over the $30$ random trials.
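The data-generation procedure above can be sketched as follows (a NumPy sketch under the stated parameters; function and variable names are ours):

```python
import numpy as np

def make_input(rng, n=100, d=50, k=5, noise=0.1):
    """One synthetic input matrix: rank-k product plus Gaussian noise, normalized."""
    A_true = rng.uniform(size=(n, k)) @ rng.uniform(size=(k, d))  # rank-k, entries from U[0, 1]
    A = A_true + noise * rng.standard_normal((n, d))
    return A / np.linalg.norm(A)                                  # enforce ||A||_F = 1

rng = np.random.default_rng(0)
train_set = [make_input(rng) for _ in range(200)]
test_set = [make_input(rng) for _ in range(100)]
```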
We learn sketching matrices $S \in \mathbb{R}^{m \times n}$ by minimizing the empirical surrogate loss \eqref{eq:surrogate-loss} on a training dataset.
\textbf{Fix}\xspace and \textbf{Learn}\xspace learn $S$ with $s=1$, $3$, or $5$ non-zeros in each column; since $m=10$, the $s$ values mean that $10\%$, $30\%$, or $50\%$ of entries can be non-zero, respectively.
Initial sketching matrices were obtained by setting $s$ random entries in each column to $-1$ or $+1$, each with probability $0.5$, and the others to zero; we then normalized them to satisfy $\norm{S}_F = 1$ for numerical stability.
We set the step size, $\eta$, to $0.1$.
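The random sparse initialization just described can be sketched in NumPy as follows (the function name is ours):

```python
import numpy as np

def init_sketch(rng, m=10, n=100, s=3):
    """s random entries per column set to -1 or +1 (prob. 0.5 each), then normalized."""
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s distinct positions in column j
        S[rows, j] = rng.choice([-1.0, 1.0], size=s)
    return S / np.linalg.norm(S)                      # ||S||_F = 1 for numerical stability
```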
\cref{fig:train} shows curves of surrogate \eqref{eq:surrogate-loss} and SCW-based \eqref{eq:scw-loss} loss values in the training phase.
As $s$ increased, the performances of \textbf{Fix}\xspace and \textbf{Learn}\xspace became closer to that of \textbf{Dense}\xspace.
Regarding the surrogate loss, \textbf{Learn}\xspace achieved smaller values than \textbf{Fix}\xspace, implying that \textbf{Learn}\xspace could go beyond local optima into which \textbf{Fix}\xspace fell.
As for the SCW-based loss, \textbf{Learn}\xspace slightly outperformed \textbf{Fix}\xspace for $s=1$ and $3$, and both achieved almost as small values as \textbf{Dense}\xspace when $s=5$.
\cref{fig:test} shows the SCW-based loss values on test datasets.
As with the training SCW-based loss values (\cref{fig:train}), the gap between \textbf{Fix}\xspace and \textbf{Learn}\xspace was evident with $s=1$ and $3$, while both achieved as small losses as \textbf{Dense}\xspace with $s=5$.
To conclude, \textbf{Learn}\xspace achieved smaller SCW-based loss values than \textbf{Fix}\xspace particularly when $s$ was small, suggesting that learning sparsity patterns enables more accurate learning-based LRA when we need to learn highly sparse sketching matrices for the sake of the efficiency of $\mathrm{SCW}_k$.
As for training times, \textbf{Learn}\xspace took about $8\%$ longer than \textbf{Fix}\xspace, although our main focus is accuracy and the implementations are not intended to be fast.
\section{CONCLUSION AND DISCUSSION}
Building on \citep{Bartlett2022-mu}, we have studied generalization bounds for learning-based LRA.
We have improved their $\tilde\mathrm{O}(nsm)$ bound on the fat shattering dimension to $\tilde\mathrm{O}(nsk)$ by developing an $(\mathrm{O}(m), m)$-GJ algorithm that computes a pseudo-inverse of a matrix with $m$ rows.
We have also demonstrated its usefulness by applying it to the learning-based Nystr\"om method.
Then, we have shown that learning both positions and values of non-zeros of sketching matrices increases the fat-shattering-dimension bound only by $\mathrm{O}(ns\log n)$.
Experiments have confirmed that the efficient learning method of \citep{Indyk2021-yn} can achieve higher empirical accuracy with changeable sparsity patterns.
A notable open problem is to close the $\tilde\mathrm{O}(k)$ gap between the $\tilde\mathrm{O}(nsk)$ upper and $\Omega(ns)$ lower bounds.
Note that merely applying our GJ algorithm to \cref{item:choose-Z} in \cref{def:proxy} does not remove the $\tilde\mathrm{O}(k)$ factor;
the more essential obstacle lies in \cref{item:for-all-Pi}, where we must avoid using exponentially many (in $k$) matrices $P_i$ to remove the $\tilde\mathrm{O}(k)$ factor.
To improve the $\Omega(ns)$ lower bound, we would need to shatter more than $\Omega(ns)$ instances, where $ns$ is the number of tunable parameters.
Although obtaining a greater lower bound than the number of tunable parameters is typically challenging, such lower bounds have been obtained for neural networks using the \emph{bit extraction} technique \citep{Bartlett1998-qs}.
We expect that a similar idea would help obtain a tighter lower bound.
\section*{Acknowledgements}
This work was supported by JST ERATO Grant Number JPMJER1903 and JSPS KAKENHI Grant Number JP22K17853.
\bibliographystyle{abbrvnat}
| {
"timestamp": "2022-10-14T02:11:31",
"yymm": "2209",
"arxiv_id": "2209.08281",
"language": "en",
"url": "https://arxiv.org/abs/2209.08281",
"abstract": "Learning sketching matrices for fast and accurate low-rank approximation (LRA) has gained increasing attention. Recently, Bartlett, Indyk, and Wagner (COLT 2022) presented a generalization bound for the learning-based LRA. Specifically, for rank-$k$ approximation using an $m \\times n$ learned sketching matrix with $s$ non-zeros in each column, they proved an $\\tilde{\\mathrm{O}}(nsm)$ bound on the \\emph{fat shattering dimension} ($\\tilde{\\mathrm{O}}$ hides logarithmic factors). We build on their work and make two contributions.1. We present a better $\\tilde{\\mathrm{O}}(nsk)$ bound ($k \\le m$). En route to obtaining this result, we give a low-complexity \\emph{Goldberg--Jerrum algorithm} for computing pseudo-inverse matrices, which would be of independent interest.2. We alleviate an assumption of the previous study that sketching matrices have a fixed sparsity pattern. We prove that learning positions of non-zeros increases the fat shattering dimension only by ${\\mathrm{O}}(ns\\log n)$. In addition, experiments confirm the practical benefit of learning sparsity patterns.",
"subjects": "Machine Learning (cs.LG); Data Structures and Algorithms (cs.DS)",
"title": "Improved Generalization Bound and Learning of Sparsity Patterns for Data-Driven Low-Rank Approximation"
} |
https://arxiv.org/abs/1609.06342 | Finding Linear-Recurrent Solutions to Hofstadter-Like Recurrences Using Symbolic Computation | The Hofstadter Q-sequence, with its simple definition, has defied all attempts at analyzing its behavior. Defined by a simple nested recurrence and an initial condition, the sequence looks approximately linear, though with a lot of noise. But, nobody even knows whether the sequence is infinite. In the years since Hofstadter published his sequence, various people have found variants with predictable behavior. Oftentimes, the resulting sequence looks complicated but provably grows linearly. Other times, the sequences are eventually linear recurrent. Proofs describing the behaviors of both types of sequence are inductive. In the first case, the inductive hypotheses are fairly ad-hoc, but the proofs in the second case are highly automatable. This suggests that a search for more sequences like these may be fruitful. In this paper, we develop a step-by-step symbolic algorithm to search for these sequences. Using this algorithm, we determine that such sequences come in infinite families that are themselves plentiful. In fact, there are hundreds of easy-to-describe families based on the Hofstadter Q-recurrence alone. |
\section{Introduction}
The Hofstadter $Q$-sequence, which is defined by the recurrence $Q\p{n}=Q\p{n-Q\p{n-1}}+Q\p{n-Q\p{n-2}}$ and the initial conditions $Q\p{1}=Q\p{2}=1$, was first introduced by Douglas Hofstadter in the 1960s~\cite{geb}. The first fifteen terms increase monotonically, but thereafter the sequence rapidly devolves into chaos. On the macro level, $Q\p{n}$ seems to oscillate around $\frac{n}{2}$, but nobody has been able to prove anything more than the statement that if
\[
\limi{n}{\frac{Q\p{n}}{n}}
\]
exists, it must equal one half~\cite{golomb}. Furthermore, nobody has been able to prove that $Q\p{n}$ even \emph{exists} for all $n$. If $Q\p{n-1}\geq n$ for some $n$, then $Q\p{n}$ would be defined in terms of $Q\p{k}$ for some $k\leq 0$. But, $Q$ is not defined on nonpositive inputs, so $Q\p{n}$ would fail to exist. All subsequent terms would also fail to exist, so the sequence would be finite in this scenario. If a sequence is finite because of this sort of happenstance, we say that the sequence \emph{dies}. We do not know whether the Hofstadter $Q$-sequence dies, but we do know that it exists for at least $10^{10}$ terms~\cite[A005185]{oeis}.
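For readers who wish to experiment, the $Q$-sequence is easy to generate. The following short Python sketch makes no guard against the sequence dying (i.e., against an argument becoming nonpositive), which for the actual $Q$-sequence is not an issue within any range one can feasibly compute:

```python
def hofstadter_q(n_max):
    """First n_max terms of Hofstadter's Q-sequence (no guard against dying)."""
    Q = [0, 1, 1]                                   # Q[0] is a dummy; Q(1) = Q(2) = 1
    for n in range(3, n_max + 1):
        Q.append(Q[n - Q[n - 1]] + Q[n - Q[n - 2]])
    return Q[1:]
```

The first ten terms are $1, 1, 2, 3, 3, 4, 5, 5, 6, 6$, matching OEIS A005185.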
More recently, Conolly examined the recurrence $C\p{n}=C\p{n-C\p{n-1}}+C\p{n-1-C\p{n-2}}$ with $C\p{1}=C\p{2}=1$ as the initial condition~\cite{con}. This setup looks quite similar to the $Q$-recurrence; the only difference is the $-1$ in the second term. But, the resulting sequence grows monotonically and satisfies
\[
\limi{n}{\frac{C\p{n}}{n}}=\frac{1}{2}.
\]
In particular, $C\p{n}-C\p{n-1}$ is always either $0$ or $1$, a property commonly known as \emph{slow}. Similar sequences have been studied by other authors~\cite{tanny}. Most of the sequences they have analyzed use recurrences with shifts similar to the one in Conolly's recurrence, though they also analyze the Hofstadter $V$-Sequence, given by $V\p{n}=V\p{n-V\p{n-1}}+V\p{n-V\p{n-4}}$ with $V\p{1}=V\p{2}=V\p{3}=V\p{4}=1$, which is also slow~\cite{hofv}. Another example of a slow sequence is the Hofstadter-Conway \$10000 Sequence, given by $A\p{n}=A\p{A\p{n-1}}+A\p{n-A\p{n-1}}$ with $A\p{1}=A\p{2}=1$. Conway famously offered a \$10000 prize for a sufficient analysis of the behavior of this sequence, which was claimed by Colin Mallows a few years later~\cite{mallows}.
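Conolly's sequence can likewise be generated, and its slowness checked empirically, with a few lines of Python:

```python
def conolly(n_max):
    """Conolly's sequence: C(n) = C(n-C(n-1)) + C(n-1-C(n-2)), C(1) = C(2) = 1."""
    C = [0, 1, 1]                                         # C[0] is a dummy
    for n in range(3, n_max + 1):
        C.append(C[n - C[n - 1]] + C[n - 1 - C[n - 2]])
    return C[1:]

seq = conolly(1000)
is_slow = all(b - a in (0, 1) for a, b in zip(seq, seq[1:]))  # consecutive differences 0 or 1
```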
There are two methods commonly used to prove that Hofstadter-like sequences are slow. In some cases, there are combinatorial interpretations for slow sequences involving counting leaves in nested tree structures~\cite{isgur2}. The sequence counting the leaves is obviously slow; the main difficulty comes in showing that the nested recurrence also describes the same structure. For some slow sequences, though, there is no known combinatorial interpretation. The other proofs of slowness usually go by induction with complicated inductive hypotheses. For a sequence $\pb{a_n}$, one would love to work with just the inductive hypothesis that $a_m-a_{m-1}\in\st{0,1}$ for all $m<n$, but this is never enough. Instead, additional inductive hypotheses are required to handle certain cases. These extra hypotheses strongly depend on the sequence in question. While the \quot{shifted} sequences have similar proofs to each other, the proof for the Hofstadter $V$-sequence uses a different sort of inductive hypothesis. One would like to try to automate these proofs, but this would require some method of determining the appropriate inductive hypotheses.
While the study of slow sequences has been fruitful~\cite{erickson,isgur1,isgur2,tanny}, other predictable Hofstadter-like sequences have been found. Whereas slow sequences often result from varying the recurrence while retaining the initial condition of the $Q$-sequence, sequences from this other class often result from varying the initial condition while retaining the original $Q$-recurrence. The first such example comes from Golomb~\cite{golomb}, who replaces the initial conditions of the $Q$-sequence with the new values $Q\p{1}=3$, $Q\p{2}=2$, $Q\p{3}=1$. (We will use the shorthand $\bk{3,2,1}$ to describe this initial condition, and we will use similar notation going forward.) The first few terms of the resulting sequence are $3,2,1,3,5,4,3,8,7,3,11,10,\ldots$, and the pattern continues forever, giving the linear-recurrent (in fact, quasilinear) sequence defined by
\[
\begin{cases}
Q\p{3k}=3k-2\\
Q\p{3k+1}=3\\
Q\p{3k+2}=3k+2.
\end{cases}
\]
In our current setup, the only linear-recurrent solutions we can possibly obtain to the Hofstadter $Q$-recurrence would be quasilinear, since we require $Q\p{n}\leq n$ for all $n$. For our purposes, this condition is overly restrictive. To allow ourselves more flexibility, we will instead say that $Q\p{n}=0$ whenever $n\leq0$. The original question of the existence of the Hofstadter $Q$-sequence can still be asked: \quot{Is $Q\p{n}\leq n$ for all $n$?} Note that it is still possible for a sequence to die in this new setting. For example, in the Hofstadter $Q$-sequence under this convention, if it happens that \emph{both} $Q\p{n-1}\geq n$ and $Q\p{n-2}\geq n$, then we obtain $Q\p{n}=0$. As a result, $Q\p{n+1}$ would be defined in terms of itself, and hence it would be undetermined.
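Under this convention, the $Q$-recurrence with an arbitrary initial condition can be sketched in Python as follows; the helper raises a \texttt{KeyError} precisely when the sequence becomes undetermined in the sense just described. Applied to Golomb's initial condition $\bk{3,2,1}$, it reproduces the quasilinear pattern above:

```python
def q_with_ic(ic, n_max):
    """Q-recurrence with initial condition ic, under the convention Q(n) = 0 for n <= 0.

    A KeyError signals that the sequence becomes undetermined (it dies)."""
    Q = {i: v for i, v in enumerate(ic, start=1)}
    get = lambda i: Q[i] if i >= 1 else 0
    for n in range(len(ic) + 1, n_max + 1):
        Q[n] = get(n - get(n - 1)) + get(n - get(n - 2))
    return [Q[n] for n in range(1, n_max + 1)]

golomb = q_with_ic([3, 2, 1], 60)   # 3, 2, 1, 3, 5, 4, 3, 8, 7, 3, 11, 10, ...
```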
This more general setting allows us to find linear-recurrent solutions to the Hofstadter $Q$-recurrence that were not possible before. One of these was found by Ruskey, who used the initial condition
$\bk{3,6,5,3,6,8}$
to embed the Fibonacci numbers in the Hofstadter $Q$-recurrence~\cite{rusk}. This sequence takes values
\[
\begin{cases}
Q\p{3k}=F_{k+4}\\
Q\p{3k+1}=3\\
Q\p{3k+2}=6.
\end{cases}
\]
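Ruskey's solution can be checked numerically in the same spirit (a self-contained Python sketch using the convention $Q\p{n}=0$ for $n\leq0$; here \texttt{fib[i]} holds $F_{i+1}$ with $F_1=F_2=1$):

```python
def q_with_ic(ic, n_max):
    """Q-recurrence with initial condition ic and Q(n) = 0 for n <= 0."""
    Q = {i: v for i, v in enumerate(ic, start=1)}
    get = lambda i: Q[i] if i >= 1 else 0
    for n in range(len(ic) + 1, n_max + 1):
        Q[n] = get(n - get(n - 1)) + get(n - get(n - 2))
    return [Q[n] for n in range(1, n_max + 1)]

ruskey = q_with_ic([3, 6, 5, 3, 6, 8], 60)
fib = [1, 1]                                   # fib[i] = F_{i+1}
while len(fib) < 25:
    fib.append(fib[-1] + fib[-2])
```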
The solutions of Golomb and Ruskey both satisfy linear recurrences ($a_n=2a_{n-3}-a_{n-6}$ in the case of Golomb; $a_n=2a_{n-3}-a_{n-9}$ in the case of Ruskey). We would like to develop a general method for finding more solutions like these. One might hope for a method that, given a linear recurrence and a Hofstadter-like recurrence, determines whether there is a sequence that eventually satisfies both of them. Unfortunately, this is quite a lofty goal. Rather, we exploit a deeper structure of these sequences and generalize that. Golomb's sequence is a quasipolynomial with period $3$. Ruskey's sequence, while not a quasipolynomial, is also structured as an interleaving of three simpler sequences. We will describe an automatic way of searching for such interleaved solutions, where the following items are allowed to vary:
\begin{itemize}
\item The recurrence under consideration
\item The number of interleaved sequences
\item The growth rate of each of the interleaved sequences.
\end{itemize}
The methods of this paper apply to \emph{linear} nested recurrences. These are recurrences of the form
\[
Q\p{n}=P\p{n}+\sum_{i=1}^{d}\alpha_i Q\p{E_i},
\]
where $P\p{n}$ is an explicit expression in $n$, $d$ is a nonnegative integer, each $\alpha_i$ is an integer, and each $E_i$ is an expression of the same form as the generic formula for $Q\p{n}$ (thereby allowing for arbitrarily many nesting levels). The methods will not apply completely in all cases; certain positivity conditions will need to be satisfied. Most commonly, we will have $P\p{n}=0$, each $\alpha_i$ positive, and each $E_i$ of the form $n-\beta_i-Q\p{n-\gamma_i}$ for some nonnegative integer $\beta_i$ and positive integer $\gamma_i$. (This is the case for the Hofstadter $Q$-recurrence, where $P\p{n}=0$, $d=2$, $\alpha_1=\alpha_2=1$, $E_1=n-Q\p{n-1}$, and $E_2=n-Q\p{n-2}$.) We will call recurrences satisfying this condition about the $E_i$'s \emph{basic}, and our methods will always apply fully to basic recurrences.
In Section~\ref{sec:prs}, we will introduce a formalism that encapsulates the notion of an interleaving of simple sequences. Then, in Section~\ref{sec:soft}, we will describe our algorithm that finds these special solutions. Finally, in Sections~\ref{sec:find} and~\ref{sec:fut}, we will describe some notable sequences found using the methods in this paper, and we will discuss some future extensions.
A Maple package implementing all of the algorithms in this paper, as well as some related procedures, can be found at \texttt{http://math.rutgers.edu/$\sim$nhf12/nicehof.txt}. Generally speaking, the procedures in this package offer more general versions of the algorithms described in this paper. For example, the code allows the user to modify the \quot{default} initial conditions (for when $n\leq0$).
\section{Positive Recurrence Systems}\label{sec:prs}
In the case of Ruskey's solution, each of the three interleaved sequences can be described by a homogeneous linear recurrence with nonnegative coefficients:
\[
\begin{cases}
a_{k}=a_{k-1} & a_0=3\\
b_k=b_{k-1} & b_0=6\\
c_k=c_{k-1}+c_{k-2} & c_1=5, c_2=8.
\end{cases}
\]
(Note that these recurrences are not unique.)
Golomb's solution cannot be expressed in this way,
but each of its interleaved sequences can be described by a nonhomogeneous linear recurrence
with nonnegative coefficients:
\[
\begin{cases}
a_k=3+a_{k-1} & a_0=3\\
b_k=3\\
c_k=3+c_{k-1} & c_1=1.
\end{cases}
\]
In both of these cases, we have a system of three nonhomogeneous linear recurrences where all coefficients are nonnegative. Here, none of the recurrences refer to each other in their definitions, but theoretically this should be possible.
This leads to the following generalization:
\begin{defin}\label{def:prs}
A \emph{positive recurrence system} is a system of $m$ nonhomogeneous linear recurrences of the form
\[
\begin{cases}
\forab{1}{k}=P_1\p{k}+\sum\limits_{\ell=1}^{d}\sum\limits_{j=1}^m\alpha_{1,\ell,j} \forab{j}{k-\ell}\\
\forab{2}{k}=P_2\p{k}+\sum\limits_{\ell=1}^{d}\sum\limits_{j=1}^m\alpha_{2,\ell,j} \forab{j}{k-\ell}\\
\vdots\\
\forab{m}{k}=P_m\p{k}+\sum\limits_{\ell=1}^{d}\sum\limits_{j=1}^m\alpha_{m,\ell,j} \forab{j}{k-\ell}
\end{cases}
\]
satisfying the following conditions:
\begin{itemize}
\item $d$ is a nonnegative integer.
\item $P_1$ through $P_m$ are eventually nonnegative integer-valued polynomials.
\item Each $\alpha_{i,\ell,j}$ is a nonnegative integer.
\end{itemize}
\end{defin}
Note that, for convenience, we may sometimes have a recurrence system where $\forab{i}{k}$ refers to $\forab{j}{k}$ for some $j<i$. This is permissible, as we can just replace $\forab{j}{k}$ with its right-hand side in order to conform to Definition~\ref{def:prs}.
The solutions to Hofstadter-like recurrences that we seek will
eventually be interleavings of sequences that together satisfy a positive recurrence system. What follows is a formalization of this notion:
\begin{defin}
Let $m$ be a positive integer. The sequence $\pb{a_k}_{k\geq1}$ is \emph{positive-recurrent} with period $m$ if there exists an integer $K$ such that the sequences $\st{\pb{a_{ms+r}}_{s\geq K}: 0\leq r<m}$ satisfy a positive recurrence system.
\end{defin}
Observe that eventually nonnegative polynomials are trivially {positive-recurrent} with period $1$, as we can take $d=0$. Also, any sequence satisfying a homogeneous linear recurrence with nonnegative coefficients is {positive-recurrent} with period $1$, as we can take $P_1$ identically $0$.
Generally speaking, any positive-recurrent sequence is eventually linear recurrent, as are each of the component sequences. This is true because a positive recurrence system can be converted into a linear system of equations for the generating functions of the component sequences. Each generating function is therefore a rational function.
We will be concerned with determining the rate of growth of each sequence
in a solution to a
positive recurrence system.
In order for things to be well-defined and easy to analyze, we will need the following technical definition.
\begin{defin}
An initial condition of length $N$ to a positive recurrence system is called \emph{eventually positive} if the following conditions hold:
\begin{itemize}
\item If $k\geq N$, then $P_r\p{k}\geq0$ for all $r$.
\item For all $0\leq i\leq d$, $\forab{r}{N-i}>0$ for all $r$.
\end{itemize}
\end{defin}
Any long enough positive initial condition is eventually positive. But, this definition allows for some nonpositive values early in the initial condition, so long as those values are never used in calculating recursively defined terms. Furthermore, we require all of the polynomials to be nonnegative when calculating recursive terms. This will be useful in our analysis, though it is not strictly necessary. (A much more complicated, weaker condition would suffice, and, in that case, we would not even need all of the polynomials to be eventually nonnegative.)
In the case where we have a solution to a positive recurrence system given by an eventually positive initial condition, the following algorithm determines the order of growth of each component sequence.
\begin{enumerate}
\item \label{it:G}Define a weighted directed graph $G$ as follows:
\begin{itemize}
\item The vertices of $G$ are the integers $\st{1,\ldots,m}$.
\item There is an arc from $i$ to $j$ if and only if, for some $\ell$, $\alpha_{i,\ell,j}>0$.
\item The weight of the arc from $i$ to $j$ is
\[
\sum_{\ell=1}^{d}\alpha_{i,\ell,j}.
\]
\end{itemize}
\item Initialize variables $d_1, d_2,\ldots, d_m$ so that $d_i$ equals the degree of $P_i$.
\item\label{st:inf} Let $W$ denote the set of vertices $v$ in $G$ satisfying one of the following:
\begin{itemize}
\item $v$ is in a directed circuit with at least one arc having weight greater than $1$.
\item $v$ is in more than one directed circuit that does not contain $v$ as an intermediate vertex.
\end{itemize}
For each $v\in W$, set $d_v$ to $\infty$ and delete any outgoing arc from $v$ in $G$ that is part of a cycle. Call the resulting graph $G'$. (We can actually delete \emph{all} outgoing arcs from $v$, but the form we have stated here will be more useful when we prove this algorithm's correctness.)
\item Define the following relation $\sim$ on $\st{1,2,\ldots,m}$:
\[
i\sim j\text{ if and only if }\p{i=j}\text{ or }\p{i\text{ and }j\text{ are in a cycle together in }G'}.
\]
It is easy to check that $\sim$ is an equivalence relation. Each equivalence class is either a single vertex or a cycle. (Every vertex in a cycle is in exactly one cycle, or else we would have removed all of its cycle-making outgoing arcs in step~\ref{st:inf}, and all such vertices are equivalent to each other.)
\item Define a directed graph $H$ as follows:
\begin{itemize}
\item The vertices of $H$ are the equivalence classes of $\st{1,2,\ldots,m}$ under $\sim$.
\item There is an arc from class $I$ to class $J$ if and only if there is an arc in $G'$ from some $i\in I$ to some $j\in J$.
\end{itemize}
If $H$ contains a directed cycle $I_1,I_2,\ldots,I_q$, then for each $1\leq h<q$, there is an arc in $G'$ from some $i^{out}_h\in I_h$ to some $i^{in}_{h+1}\in I_{h+1}$. Also, there is an arc in $G'$ from some $i^{out}_q\in I_q$ to some $i^{in}_1\in I_1$. Furthermore, by the definition of $\sim$, for each $h$, there is a (possibly trivial) directed path from $i^{in}_h$ to $i^{out}_h$ within $I_h$. Concatenating all of these arcs together gives a cycle in $G'$ that includes elements of multiple equivalence classes, which contradicts the definition of $G'$.
So, we can conclude that $H$ contains no directed cycles.
\item For each vertex $I$ of $H$, initialize a variable $d_I=\max\limits_{i\in I}d_i$.
\item\label{st:ts} Topologically sort the vertices of $H$. Consider the vertices $I$ from last to first:
\begin{itemize}
\item If $I$ is a cycle in $G'$ (including a single vertex with a loop), set $d_I$ to $d_I+1$.
\item For all $J$ with an arc from $J$ to $I$, set $d_J=\max\pb{d_J, d_I}$.
\end{itemize}
\end{enumerate}
At the end of this process, we have values $d_I$ for each equivalence class $I$. In general, for an integer $r$, let $\bar{r}$ denote its equivalence class under $\sim$. We now make the following claim:
\begin{restatable}{claim}{clmain}
\label{cl:main}
Suppose we have a positive recurrence system with $m$ component sequences, along with an eventually positive initial condition. Let $1\leq r\leq m$ be an integer.
If $d_{\bar{r}}<\infty$, then $\forab{r}{k}=\Theta\pb{k^{d_{\bar{r}}}}$. If $d_{\bar{r}}=\infty$, then $\forab{r}{k}$ grows exponentially.
\end{restatable}
The proof of this claim is uninteresting, so it has been relegated to Appendix~\ref{sec:pfclmain}. The general strategy is to inductively substitute expressions around cycles in order to obtain recurrences for sequences $\forab{r}{k}$ that refer only to earlier versions of themselves. Then, we use the general theory of linear recurrences to make claims about their asymptotics.
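As a small concrete illustration of the algorithm, consider the special case of a single component sequence that refers only to itself, so that the graph $G$ has one vertex whose loop carries total weight $\sum_\ell \alpha_\ell$. The sketch below adopts the convention that the identically-zero polynomial has degree $-1$; this convention is our assumption, not stated in the paper:

```python
def growth_order(alphas, p_degree):
    """Growth of a single self-referential recurrence
    f(k) = P(k) + sum_l alphas[l] * f(k - l), with nonnegative integer alphas.
    One-vertex special case of the graph algorithm; deg(0) := -1 by convention."""
    w = sum(alphas)                        # total weight of the loop at the vertex
    if w >= 2:                             # a cycle of weight > 1: d becomes infinite
        return "exponential"
    if w == 1:                             # a weight-1 cycle bumps the degree by one
        return f"Theta(k^{p_degree + 1})"
    return f"Theta(k^{p_degree})"          # no cycle: the growth of P itself
```

For example, Golomb's $a_k = 3 + a_{k-1}$ (with $\deg P = 0$ and loop weight $1$) is classified as $\Theta(k)$, while Ruskey's $c_k = c_{k-1} + c_{k-2}$ (loop weight $2$) is classified as exponential, matching the linear and Fibonacci growth seen earlier.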
\section{The Algorithm}\label{sec:soft}
What follows is a description of a generic run of our algorithm for finding positive-recurrent solutions to nested recurrences. We will walk through the steps of searching for period-$m$ solutions to a linear nested recurrence $Q\p{n}$.
A dominant term in the complexity of our algorithm is that it is exponential in $m$, so the first thing we do is fix $m$.
We will follow the steps of the algorithm by applying it to a running example: searching for period-$4$ solutions to the recurrence
\[
R\p{n}=R\p{n-R\p{n-1}}+R\p{n-R\p{n-2}}+R\p{n-R\p{n-3}}.
\]
We choose this particular example because it will illustrate many facets of the algorithm without becoming too unwieldy.
\subsection{Fixing the Behavior of the Subsequences}
A positive-recurrent solution to $Q\p{n}$ with period $m$ has the form
\[
\begin{cases}
Q\p{mk}=\fora{0}{k}\\
Q\p{mk+1}=\fora{1}{k}\\
Q\p{mk+2}=\fora{2}{k}\\
\hspace{0.5in}\vdots\\
Q\p{mk+\p{m-1}}=\fora{m-1}{k}
\end{cases}
\]
for some sequences $\fa{0}$ through $\fa{m-1}$.
(For convenience, we index the interleaved sequences from zero in this context.)
For convenience, we define the following growth properties that these component sequences may have:
\begin{defin}\hspace*{\fill}
\begin{itemize}
\item We say $\fa{r}$ is \emph{constant} if, for sufficiently large $k$, $\fora{r}{k}=A$ for some constant $A$.
\item We say $\fa{r}$ is \emph{linear} if, for sufficiently large $k$, $\fora{r}{k}=Ak+B$ for some constants $A$ and $B$.
\item We say $\fa{r}$ is \emph{superlinear} if $\fora{r}{k}=\omega\p{k}$.
\item We say that $\fa{r}$ is \emph{standard linear} if $\fora{r}{k}=mk+B$ for some constant $B$.
\item We say that $\fa{r}$ is \emph{steep linear} if $\fora{r}{k}=Ak+B$ for some constants $A$ and $B$ with $A>m$.
\item We say that $\fa{r}$ is \emph{steep} if $\fa{r}$ is either steep linear or superlinear.
\end{itemize}
\end{defin}
To start the algorithm, we need to decide, for each of the $m$ component sequences, whether we are looking for a solution where that subsequence is \emph{constant}, \emph{standard linear}, or \emph{steep}.
(We do not concern ourselves with component sequences intermediate in growth between constant and standard linear, as they seem to be uncommon in this context and can be harder to analyze. But, the Maple package can sometimes handle these, as long as the user explicitly asks it to.)
To keep track of our choices, the algorithm stores variables $A_0,A_1,\ldots,A_{m-1}$.
We set $A_r=0$ if we decide that $\fa{r}$ is to be constant, $A_r=m$ if $\fa{r}$ is standard linear, and $A_r=\infty$ if $\fa{r}$ is steep. Observe that if $\fa{r}$ is not steep, then $\fora{r}{k}=A_rk+B_r$ for some constant $B_r$.
In general, to perform an exhaustive search for positive-recurrent solutions, we iterate through the $3^m$ possible overall behaviors.
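Enumerating the $3^m$ behavior vectors is immediate; a sketch in Python (`inf` stands for our $\infty$ marker):

```python
from itertools import product
from math import inf

def behavior_vectors(m):
    """All 3^m ways to assign each of the m component sequences a growth
    class: 0 (constant), m (standard linear), or inf (steep)."""
    return list(product([0, m, inf], repeat=m))

choices = behavior_vectors(4)
print(len(choices))               # 81 = 3^4
print((inf, 4, 0, 0) in choices)  # the case treated in the running example: True
```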
In our running example, we seek a solution of the form
\[
\begin{cases}
R\p{4k}=\fora{0}{k}\\
R\p{4k+1}=\fora{1}{k}\\
R\p{4k+2}=\fora{2}{k}\\
R\p{4k+3}=\fora{3}{k}.
\end{cases}
\]
Going forward, we will assume we are treating the case $A_0=\infty$, $A_1=4$, $A_2=0$, and $A_3=0$. That is, we are seeking a solution with $\fa{0}$ steep, $\fa{1}$ standard linear, and the other two sequences constant.
\subsection{Unpacking the Recurrence}
Now that we have fixed the forms of the $\fa{r}$, we can use these forms inductively to \quot{unpack} the recurrent form for each $Q\p{mk+r}$. (The base case of the induction is covered by the fact that we allow an arbitrarily long initial condition.) We start from the innermost calls to $Q$ (i.e., calls without another $Q$ inside them) and work our way outward, eliminating (nearly) all of the $Q$'s with $k$'s inside them. For an expression of the form $Q\p{mk+r}$, we replace it by the expression $A_rk+B_r$, where $A_r$ is the variable tracking the growth of $\fa{r}$ and $B_r$ is a symbol.
Each step after rewriting the inner calls involves looking at an expression of the form $Q\p{c}$, $Q\p{-\infty k-c}$, or $Q\p{mk-c}$ for some, possibly symbolic, constant $c$. Expressions of the first type are constant; expressions of the second type are zero (by our convention that evaluation at nonpositive indices gives zero). In both of these cases, we can immediately move outward. Expressions of the third type equal
$\forab{r}{k-t}$ for $r=\pb{-c}\modd m$ and $t=\ceil{\frac{c}{m}}$.
This all depends on the congruence class of $c$ mod $m$, since
\[
\ceil{\frac{c}{m}}=\begin{cases}
\frac{c}{m} & c\equiv0\pb{\modd m}\\
\frac{c+m-\pb{c\modd m}}{m} & \text{otherwise}.
\end{cases}
\]
So, at this step, we iterate through all possible congruence classes of these constants.
Once we do this, we replace $Q\p{mk-c}$ by $A_r\p{k-t}+B_r$ (with $r=\pb{-c}\modd m$ and $t=\ceil{\frac{c}{m}}$) for a symbolic constant $B_r$. In the case that $\fa{r}$ is steep, this replacement will ensure that the coefficient on $k$ in $mk-Q\p{mk-c}$ will be $-\infty$. If $\fa{r}$ is not steep, this simply substitutes the formula for $\fa{r}$ into the recurrence.
In the case that we are considering an outermost $Q$ and $\fa{r}$ is steep, we do not replace the call to $\fa{r}$; we will leave it unevaluated.
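The index bookkeeping in this step can be sketched as follows (the function name is ours):

```python
def reduce_call(m, c):
    """Rewrite Q(mk - c) as the component subsequence value a_r(k - t),
    where r = (-c) mod m and t = ceil(c / m)."""
    r = (-c) % m
    t = -(-c // m)  # ceiling division for positive integers
    return r, t

# From the running example (m = 4): R(4k - B_3) with B_3 = 3 becomes
# a_1(k - 1), and R(4k - B_2) with B_2 = 4 becomes a_0(k - 1).
print(reduce_call(4, 3))  # (1, 1)
print(reduce_call(4, 4))  # (0, 1)
```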
If $Q\p{n}$ is basic, then the congruence choices we will have to make in this step are easily predetermined.
\begin{prop}\label{prop:stdb}
If $Q\p{n}$ is basic, then the $c$ values appearing in expressions $Q\p{mk-c}$ that appear at the outer level when unpacking the recurrence are precisely determined by fixing the values of the $B_r$'s for which $\fa{r}$ is constant.
\end{prop}
\begin{proof}
If the recurrence for $Q\p{n}$ is basic, it is of the form
\[
Q\p{n}=P\p{n}+\sum_{i=1}^d\alpha_iQ\p{n-\beta_i-Q\p{n-\gamma_i}}.
\]
Substituting $n=mk+r$, we obtain
\[
Q\p{mk+r}=P\p{mk+r}+\sum_{i=1}^d\alpha_iQ\p{mk+r-\beta_i-Q\p{mk+r-\gamma_i}}.
\]
The innermost expressions are the expressions $Q\p{mk+r-\gamma_i}$. For notational ease, let $r_i=\p{r-\gamma_i}\modd m$. There are three cases to consider:
\begin{description}
\item[$Q\p{mk+r-\gamma_i}$ is constant:] In this case, $Q\p{mk+r-\gamma_i}=B_{r_i}$. We then see the expression $Q\p{mk-\beta_i-B_{r_i}}$ at the outer level. Since $\beta_i$ is a predetermined constant, the constant $\beta_i+B_{r_i}$ that appears here is completely determined by fixing $B_{r_i}$. Furthermore, $\fa{r_i}$ is constant; it eventually equals $B_{r_i}$.
\item[$Q\p{mk+r-\gamma_i}$ is standard linear:] In this case, $Q\p{mk+r-\gamma_i}=mk+B_{r_i}$. This leads to the expression $Q\p{-\beta_i-B_{r_i}}$ at the outer level, which is a constant.
\item[$Q\p{mk+r-\gamma_i}$ is steep:] In this case, the expression that appears at the outer level in this term is automatically zero.
\end{description}
In summary, only one case results in an outer expression of the form $Q\p{mk-c}$, and the value of $c$ in that case is completely determined by the value of $B_r$ for an $r$ with $\fa{r}$ constant, as required.
\end{proof}
The net result of this step is that we can now read off a recurrence system for the component sequences. If the recurrence was basic, we claim that this will be a positive recurrence system. First, all coefficients in the homogeneous part of the system will be nonnegative, because the coefficients on the recursive calls in a basic recurrence are all positive. Also, all constant component sequences will be positive, since we cannot have a solution with infinitely many nonpositive entries (or else we would not be able to explicitly calculate terms). Beyond this, nonhomogeneous parts are built out of recursive calls, which will inductively result in eventually nonnegative polynomials.
In general, in order to guarantee the correctness of our algorithm, we need the nested recurrence to be such that we obtain a positive recurrence system in this step. We stated earlier that the eventual nonnegativity condition on the nonhomogeneous parts is overly restrictive for what we want to do with positive recurrence systems, so it makes sense to continue to run the rest of this algorithm even without a positive recurrence system. The algorithm may still succeed, but we can no longer guarantee that it will succeed.
In our running example, we are concerned with the congruence classes of $B_2$ and $B_3$ mod $4$. There are eight cases to check. Going forward, we will assume we are looking at the case where $B_2\equiv0\pb{\modd 4}$ and $B_3\equiv3\pb{\modd 4}$. Under these assumptions, here is how the recurrence unpacks:
{\allowdisplaybreaks
\begin{align*}
R\p{4k}&=R\p{4k-R\p{4k-1}}+R\p{4k-R\p{4k-2}}+R\p{4k-R\p{4k-3}}\\
&=R\!\pb{4k-\fora{3}{k-1}}+R\!\pb{4k-\fora{2}{k-1}}+R\!\pb{4k-\fora{1}{k-1}}\\
&=R\p{4k-0\pb{k-1}-B_3}+R\p{4k-0\pb{k-1}-B_2}+R\p{4k-4\pb{k-1}-B_1}\\
&=R\p{4k-B_3}+R\p{4k-B_2}+R\p{4-B_1}\\
&=\fora{1}{k-\frac{B_3+1}{4}}+\fora{0}{k-\frac{B_2}{4}}+R\p{4-B_1}\\
&=4\pb{k-\frac{B_3+1}{4}}+B_1+\fora{0}{k-\frac{B_2}{4}}+R\p{4-B_1}\\
&=4k-B_3-1+B_1+\fora{0}{k-\frac{B_2}{4}}+R\p{4-B_1}.
\end{align*}
\begin{align*}
R\p{4k+1}&=R\p{4k+1-R\p{4k}}+R\p{4k+1-R\p{4k-1}}+R\p{4k+1-R\p{4k-2}}\\
&=R\!\pb{4k+1-\fora{0}{k}}+R\!\pb{4k+1-\fora{3}{k-1}}+R\!\pb{4k+1-\fora{2}{k-1}}\\
&=R\p{4k+1-\infty k-B_0}+R\p{4k+1-0\pb{k-1}-B_3}+R\p{4k+1-0\pb{k-1}-B_2}\\
&=R\p{-\infty k+1-B_0}+R\p{4k+1-B_3}+R\p{4k+1-B_2}\\
&=0+\fora{2}{k-\frac{B_3+1}{4}}+\fora{1}{k-\frac{B_2}{4}}\\
&=0\pb{k-\frac{B_3+1}{4}}+B_2+4\pb{k-\frac{B_2}{4}}+B_1\\
&=B_2+4k-B_2+B_1\\
&=4k+B_1.
\end{align*}
\begin{align*}
R\p{4k+2}&=R\p{4k+2-R\p{4k+1}}+R\p{4k+2-R\p{4k}}+R\p{4k+2-R\p{4k-1}}\\
&=R\!\pb{4k+2-\fora{1}{k}}+R\!\pb{4k+2-\fora{0}{k}}+R\!\pb{4k+2-\fora{3}{k-1}}\\
&=R\p{4k+2-4k-B_1}+R\p{4k+2-\infty k-B_0}+R\p{4k+2-0\pb{k-1}-B_3}\\
&=R\p{2-B_1}+R\p{-\infty k+2-B_0}+R\p{4k+2-B_3}\\
&=R\p{2-B_1}+0+\fora{3}{k-\frac{B_3+1}{4}}\\
&=R\p{2-B_1}+0\pb{k-\frac{B_3+1}{4}}+B_3\\
&=R\p{2-B_1}+B_3.
\end{align*}
\begin{align*}
R\p{4k+3}&=R\p{4k+3-R\p{4k+2}}+R\p{4k+3-R\p{4k+1}}+R\p{4k+3-R\p{4k}}\\
&=R\!\pb{4k+3-\fora{2}{k}}+R\!\pb{4k+3-\fora{1}{k}}+R\!\pb{4k+3-\fora{0}{k}}\\
&=R\p{4k+3-0k-B_2}+R\p{4k+3-4k-B_1}+R\p{4k+3-\infty k-B_0}\\
&=R\p{4k+3-B_2}+R\p{3-B_1}+R\p{-\infty k+3-B_0}\\
&=\fora{3}{k-\frac{B_2}{4}}+R\p{3-B_1}+0\\
&=0\pb{k-\frac{B_2}{4}}+B_3+R\p{3-B_1}\\
&=B_3+R\p{3-B_1}.
\end{align*}}
\subsection{Checking for Structural Consistency}
The previous step has given us a positive recurrence system that is eventually satisfied by the $\fa{r}$'s. But, this system may not be consistent with the choices we have made.
First, we check the following things:
\begin{itemize}
\item If $\fa{r}$ is constant, our expression for $Q\p{mk+r}$ should consist only of constants.
\item If $\fa{r}$ is standard linear, our expression for $Q\p{mk+r}$ should be of the form $mk+c$ for some (possibly complicated) constant $c$.
\item If $\fa{r}$ is steep, our expression for $Q\p{mk+r}$ should have at least one of the following:
\begin{itemize}
\item A term $dk$ with $d>m$.
\item A reference to some steep $\fa{r'}$.
\end{itemize}
\end{itemize}
If any of these is violated for any $r$, there is no solution with the given $A$ values and congruence conditions.
If all of the above conditions are satisfied, we must determine the nature of each steep $\fa{r}$.
In particular, we need to determine if each one is steep linear or superlinear. As a bonus, we will be able to determine the degree of $\fa{r}$ if it is a polynomial.
Since the expressions we have for the $\fa{r}$'s form a positive recurrence system, we can use the algorithm from Section~\ref{sec:prs} to accomplish precisely this task, provided we start with an eventually positive initial condition. (We will construct this initial condition later.) In running this algorithm, we may find that we actually do not have a solution, as the third case above includes expressions like $\forab{r}{k}=\forab{r}{k-1}$, which do not result in steep sequences.
In our example, we verify successfully that $\fa{1}$ is standard linear (the expression we obtained for $R\p{4k+1}$ is $4k+B_1$) and that $\fa{2}$ and $\fa{3}$ are constant (expressions $R\p{2-B_1}+B_3$ and $B_3+R\p{3-B_1}$ respectively).
We now run our algorithm on the positive recurrence system we obtained:
\[
\begin{cases}
\fora{0}{k}=4k-B_3-1+B_1+\fora{0}{k-\frac{B_2}{4}}+R\p{4-B_1}\\
\fora{1}{k}=4k+B_1\\
\fora{2}{k}=R\p{2-B_1}+B_3\\
\fora{3}{k}=B_3+R\p{3-B_1}.
\end{cases}
\]
The graph $G$ consists of four vertices. Vertex $0$ has a loop with weight $1$; the other three vertices are isolated. We initialize $d_0=1$, $d_1=1$, $d_2=0$, and $d_3=0$. Step~\ref{st:inf} doesn't affect any of the vertices, so $G'=G$. Similarly, $\sim$ has no nontrivial relations, so $H\cong G$ via the isomorphism $i\leftrightarrow\st{i}$. When we process vertex $\st{0}$ in $H$, we set $d_{\st{0}}=2$, and this is the only change made in Step~\ref{st:ts}. So, we obtain that $\fa{0}$ is quadratic.
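The conclusion that $\fa{0}$ is quadratic can be double-checked numerically: once the constants are specialized to the satisfying values found later in the example ($B_1=0$, $B_3=3$, $B_2=4$, $R\p{4-B_1}=4$), the self-loop recurrence becomes $\fora{0}{k}=\fora{0}{k-1}+4k$, whose second differences are constant. A sketch (any other satisfying constants would behave the same way):

```python
# a0(k) = a0(k-1) + 4k with a0(1) = R(4) = 4, using the constants from
# the example's eventual solution
a0 = {1: 4}
for k in range(2, 20):
    a0[k] = a0[k - 1] + 4 * k

first = [a0[k + 1] - a0[k] for k in range(1, 19)]
second = [first[i + 1] - first[i] for i in range(len(first) - 1)]
print(set(second))  # {4}: constant second differences, so a0 is quadratic
```

In fact these values close to $a_0(k)=2k(k+1)$, matching the quadratic label the graph algorithm assigns to vertex $\st{0}$.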
\subsection{Building a Constraint Satisfaction Problem}\label{subsec:csp}
If our parameters produce a solution, we now know precisely what the structure of that solution must be. At this point, we need to see if a solution can actually be realized.
In
order to have a solution, we must check the following:
\begin{itemize}
\item If $\fa{r}$ is constant, we must have $B_r>0$. Otherwise, our solution would have infinitely many nonpositive values. This is not allowed, as we would then not be able to explicitly calculate terms of the sequence.
\item If $\fa{r}$ is constant, $B_r$ must equal our expression for $Q\p{mk+r}$.
\item If $\fa{r}$ is
standard
linear, $B_r$ must equal the constant term in our expression for $Q\p{mk+r}$.
\item If $\fa{r}$ is steep linear, we may need a steepness constraint. This constraint is somewhat more complicated; we describe it below.
\item Any constant that we have forced to have a certain congruence mod $m$ must actually have that congruence.
\item For any two constants of the form $Q\p{c}$ and $Q\p{d}$ that appear, if $c=d$ then $Q\p{c}$ must equal $Q\p{d}$.
\item If constant $Q\p{c}$ appears, then if $c\leq0$ we must have $Q\p{c}=0$.
\end{itemize}
The last two of these restrictions give a set of conditional constraints to check; the rest of the constraints are unconditional constraints.
As mentioned above, constraining steep linear $\fa{r}$'s to actually be steep requires a more complicated constraint. This stems from the fact that steep linear $\fa{r}$'s can arise in three different ways.
The steep linear $\fa{r}$'s are a subset of the linear $\fa{r}$'s. In terms of the positive recurrence system algorithm, the linear $\fa{r}$'s are the ones whose vertices are labeled $1$ when the algorithm terminates. The following are all the ways this could happen, in terms of the graphs $G'$ and $H$ in that algorithm.
\begin{enumerate}
\item\label{it:l1} The vertex $r$ could be labeled $1$ because the expression for $Q\p{mk+r}$ is a degree-$1$ polynomial.
\item\label{it:l2} The vertex $r$ could be labeled $1$ because it is not in a cycle in $G'$ and, when it came time to assign $\bar{r}$ its final label, the largest label in $H$ it pointed to was a $1$.
\item\label{it:l3} The vertex $r$ could be labeled $1$ because it is in a cycle in $G'$ and, when it came time to assign $\bar{r}$ its final label, the largest label in $H$ it pointed to was a $0$ (or it pointed to no other vertices in $H$).
\end{enumerate}
In Case~\ref{it:l1}, $\fa{r}$ is steep linear if and only if the leading coefficient of that polynomial is greater than $m$. We already checked this in our structural consistency check, so if $\fa{r}$ is linear because of Case~\ref{it:l1}, we need no steepness constraint. In Case~\ref{it:l2}, we have that $r$ is pointing to something else linear. But, our unpacking step would have removed all references to standard linear $\fa{r'}$'s. So, in Case~\ref{it:l2}, every $\fa{r'}$ that $\fa{r}$ still refers to must be steep linear. This immediately forces $\fa{r}$ itself to be steep linear without imposing any extra constraints.
This leaves only Case~\ref{it:l3}. In this case, $r$ is in a directed cycle in $G'$, say $r=r_0,r_1,r_2,\ldots,r_{t-1},r_t=r$.
Each of the corresponding sequences has an expression of the form
\[
\fora{r_i}{k}=c_i+\fora{r_{i+1}}{k-e_i}.
\]
Repeated substitution yields the formula
\[
\fora{r}{k}=\sum_{i=0}^{t-1}c_i+\fora{r}{k-\sum_{i=0}^{t-1}e_i}
\]
for some constants $e_i$. We require that $\fa{r}$ be steep. This will be accomplished if we have that
\[
\sum_{i=0}^{t-1}c_i>m\sum_{i=0}^{t-1}e_i.
\]
So, this is the steepness constraint we add in Case~\ref{it:l3}. In particular, we arrive at the same constraint for all functions in a given equivalence class.
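To see why this inequality is the right condition: telescoping the cycle gives $\fora{r}{k}\approx\p{C/E}k$ where $C=\sum c_i$ and $E=\sum e_i$, so $\fa{r}$ outpaces $mk$ exactly when $C>mE$. A quick numerical sanity check with toy values of our own choosing:

```python
# a(k) = a(k - E) + C with E = 2, C = 9; slope C/E = 4.5 exceeds m = 4,
# and indeed C = 9 > m * E = 8, satisfying the steepness constraint
E, C, m = 2, 9, 4
a = {0: 1, 1: 1}  # arbitrary positive initial values
for k in range(2, 402):
    a[k] = a[k - E] + C

slope = (a[400] - a[0]) / 400
print(slope > m)  # True: the sequence is steep linear
```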
In our example, we obtain the following constraints:
\begin{itemize}
\item $\fa{0}$ is superlinear, so there are no constraints associated to it.
\item $\fa{1}$ is standard linear, and we have $R\p{4k+1}=4k+B_1$.
This gives us the constraint $B_1=B_1$. (This constraint is tautological, but this is okay.)
\item $\fa{2}$ is constant, and we have $R\p{4k+2}=R\p{2-B_1}+B_3$. This gives us the following constraints:
\begin{itemize}
\item $B_2>0$
\item $B_2=R\p{2-B_1}+B_3$.
\end{itemize}
\item $\fa{3}$ is constant, and we have $R\p{4k+3}=B_3+R\p{3-B_1}$. This gives us the following constraints:
\begin{itemize}
\item $B_3>0$
\item $B_3=B_3+R\p{3-B_1}$.
\end{itemize}
\item Our congruence constraints are
\begin{itemize}
\item $B_2\equiv0\pb{\modd 4}$
\item $B_3\equiv3\pb{\modd 4}$.
\end{itemize}
\item Our conditional constraints are
\begin{itemize}
\item If $2-B_1=3-B_1$, then $R\p{2-B_1}$ must equal $R\p{3-B_1}$.
\item If $2-B_1\leq0$, then $R\p{2-B_1}=0$.
\item If $3-B_1\leq0$, then $R\p{3-B_1}=0$.
\end{itemize}
\end{itemize}
\subsection{Solving the Constraint Satisfaction Problem}
The recurrence solution we are seeking should exist if and only if this constraint system is satisfiable. The system of unconditional constraints is almost an integer program. The following modifications can turn it into an integer program:
\begin{itemize}
\item Since all variables are integers, strict inequalities of the form $x>y$ can be made loose by replacing them by the equivalent inequality $x\geq y+1$.
\item Congruence constraints can be converted to equality constraints via the introduction of auxiliary variables. Namely, $x\equiv y\pb{\modd m}$ is the same constraint as $x=Km+y$, where $K$ is a new auxiliary variable.
\end{itemize}
Furthermore, the conditional constraints can be incorporated into the integer program. For each constraint of the form $\pb{c=d}\Rightarrow \pb{e=f}$, we consider three cases. (If one fails, we try the next one.)
\begin{itemize}
\item Add the constraints $c=d$ and $e=f$.
\item Add the constraint $c\leq d-1$.
\item Add the constraint $c\geq d+1$.
\end{itemize}
And, for each constraint of the form $\pb{c\leq d}\Rightarrow \pb{e=f}$, we consider two cases.
\begin{itemize}
\item Add the constraints $c\leq d$ and $e=f$.
\item Add the constraint $c\geq d+1$.
\end{itemize}
Maple has a built-in procedure for satisfying \emph{linear} integer programs. Since we are only considering linear nested recurrences,
the program we obtain is, in fact, linear.
Integer linear programming is an \textsc{np}-hard problem~\cite{karp}. But, experimentally, the instances that arise in this context seem not to be very hard. Heuristically guessing $B_r$ values that are far apart seems to make satisfying the constraints be a quick process. In particular, the constraints we obtain are typically satisfiable unless there is some obvious reason why they should not be.
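Indeed, the example system is small enough to solve by exhaustive search instead of integer programming. A sketch (the search ranges are ours, and `q2`, `q3` stand for the unknown constants $R\p{2-B_1}$ and $R\p{3-B_1}$):

```python
from itertools import product

solutions = []
for B1, B2, B3, q2, q3 in product(range(6), range(1, 13), range(1, 13),
                                  range(6), range(6)):
    unconditional = (
        B2 > 0 and B3 > 0                # constant subsequences are positive
        and B2 == q2 + B3                # B_2 = R(2-B_1) + B_3
        and B3 == B3 + q3                # forces R(3-B_1) = 0
        and B2 % 4 == 0 and B3 % 4 == 3  # congruence constraints
    )
    # conditional constraints; 2-B1 = 3-B1 is impossible, so that one is vacuous
    conditional = (2 - B1 > 0 or q2 == 0) and (3 - B1 > 0 or q3 == 0)
    if unconditional and conditional:
        solutions.append((B1, B2, B3, q2, q3))

print((0, 4, 3, 1, 0) in solutions)  # True: one satisfying assignment
```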
The following assignments satisfy our example constraint system:
\begin{itemize}
\item $B_1=0$
\item $B_2=4$
\item $B_3=3$
\item $R\p{2}=1\pb{=R\p{2-B_1}}$
\item $R\p{3}=0\pb{=R\p{3-B_1}}$
\end{itemize}
This means we have the following eventual solution:
\[
\begin{cases}
R\p{4k}=4k-3-1+0+\fora{0}{k-1}+R\p{4}=R\p{4k-4}+4k+R\p{4}-4\\
R\p{4k+1}=4k\\
R\p{4k+2}=4\\
R\p{4k+3}=3.
\end{cases}
\]
Note that these constraints have other satisfying assignments, and each such assignment leads to another eventual solution to the recurrence.
\subsection{Constructing an Initial Condition}
We now have an eventual solution to our recurrence. But, it is quite likely that there will need to be some initial values to make the solution exist. These initial value requirements can arise from a number of sources:
\begin{itemize}
\item An expression of the form $Q\p{n-c}$ in the recurrence probably requires that the initial condition have length at least $c$. (The only case where it would not require this is if the solution we found has $Q\p{n-c}=0$ for $n\leq c$.)
\item There are sometimes equality constraints involving $Q\p{c}$ for some constant $c$. Unless the value required of $Q\p{c}$ is what it must be anyway in the eventual solution, the initial condition must have length at least $c$.
\item In constructing our solution, we used the fact that steep sequences $\fa{r}$ are eventually greater than $k+c$ for any constant $c$. But, for fixed $c$ they can be smaller for small $k$. The initial condition must be long enough to allow all the steep sequences to become sufficiently large.
\end{itemize}
The question now is, when does the initial condition end? We try to find as generic an initial condition as possible. This means that we try to minimize the length of the initial condition while including as many free parameters as we can. As a result, the initial condition we find may include symbols, and some of these symbols may have (obviously satisfiable) constraints placed on them. (To find an explicit initial condition, replace each symbol by a value satisfying any constraints on it.) Here is an outline of a procedure for finding an initial condition. This is a more streamlined version of the procedure in the Maple package. That version is more complicated, as it tries to use various tricks to shorten the initial condition.
\begin{enumerate}
\item Look for the largest $c_0$ such that $Q\p{c_0}$ appears in a constraint from the constraint satisfaction problem. Initialize the list $L$ to the symbolic list $\bk{Q\p{1},Q\p{2},\ldots,Q\p{c_0}}$.
\item The constraints involving expressions of the form $Q\p{c}$ constitute a linear system. Solve this system for as many constants $Q\p{c}$ as possible. (If the system is underdetermined, these solutions may be in terms of others of these constants.) Substitute these solution values into $L$ for their respective constants.
\item\label{st:nm} Denote by $\gamma$ the maximum number such that $Q\p{n-\gamma}$ appears in the recurrence. Generate the next $\max\pb{m,\gamma}$ terms of the recurrence $Q$ with the initial condition $L$. If $Q$ is ever evaluated at a symbolic index, assume that this index is nonpositive
and record the resulting constraint on the symbols.
(If the sequence dies during this computation, go to step~\ref{st:bad}.)
\item\label{st:4} If $\max\pb{m,\gamma}$ terms were generated, look at the terms in the steep positions and determine which future terms directly depend on them. If the computations involving the steep terms used to compute the later terms all result in $0$, then the terms in the steep positions are sufficiently large. (If these terms are symbolic, this may result in inequality constraints on some constants.) Otherwise, the steep terms are not sufficiently large, so go to Step~\ref{st:bad}.
\item If the non-steep terms agree with their eventual values, then the initial condition is complete. In this case, return $L$ along with the constraints that were imposed on the symbols when generating the $m$ terms.
\item\label{st:bad} If not enough terms were generated, terms were not large enough, or some term did not agree with its eventual value, then extend $L$ by one term. In that position, put the value of the eventual solution. (If that term is a member of a steep sequence, keep that term symbolic.) Then, forget any constraints imposed on symbols in step~\ref{st:4} and return to Step~\ref{st:nm}.
\end{enumerate}
This algorithm generates an initial condition for the eventual solution found in the previous steps.
For our example, the largest $c$ such that $R\p{c}$ appears in a constraint is $c=3$. So, we initialize $L=\bk{R\p{1},R\p{2},R\p{3}}$. We see that $R\p{2}=1$ and $R\p{3}=0$, so this changes $L$ to $\bk{R\p{1},1,0}$. In this case $m=4$, and $\gamma=3$ since $R\p{n-3}$ appears in the recurrence. We now start extending $L$:
\begin{itemize}
\item We try to compute $R\p{4}$:
\begin{align*}
R\p{4}&=R\p{4-R\p{3}}+R\p{4-R\p{2}}+R\p{4-R\p{1}}\\
&=R\p{4-0}+R\p{4-1}+R\p{4-R\p{1}}\\
&=R\p{4}+R\p{3}+R\p{4-R\p{1}}.
\end{align*}
We see that $R\p{4}$ is defined in terms of $R\p{4}$, which is not allowed. So, the sequence dies immediately. We failed to generate $4$ terms, so we extend $L$. The fourth term would be part of the quadratic (steep) sequence, so we keep $R\p{4}$ symbolic. We now have $L=\bk{R\p{1},1,0, R\p{4}}$.
\item The next two times we try to compute a term of $R$, the sequence will die immediately. This is the case because the $0$ that killed our sequence in the $R\p{4}$ case is referenced again when computing $R\p{5}$ and when computing $R\p{6}$. The predicted values for $R\p{5}$ and $R\p{6}$ are both $4$. So, $L$ will be extended to $\bk{R\p{1},1,0, R\p{4}, 4}$ and then to $\bk{R\p{1},1,0, R\p{4}, 4, 4}$.
\item We try to compute $R\p{7}$:
\begin{align*}
R\p{7}&=R\p{7-R\p{6}}+R\p{7-R\p{5}}+R\p{7-R\p{4}}\\
&=R\p{7-4}+R\p{7-4}+R\p{7-R\p{4}}\\
&=R\p{3}+R\p{3}+R\p{7-R\p{4}}\\
&=0+0+R\p{7-R\p{4}}\\
&=R\p{7-R\p{4}}.
\end{align*}
In order to evaluate $R\p{7-R\p{4}}$, we assume that $R\p{4}\geq7$, so this term is zero. This gives $R\p{7}=0$. Then, when we try to compute $R\p{8}$, we will fail. The predicted value for $R\p{7}$ is $3$, so we extend $L$ to $\bk{R\p{1},1,0, R\p{4}, 4, 4, 3}$.
\item We try to compute $R\p{8}$:
\begin{align*}
R\p{8}&=R\p{8-R\p{7}}+R\p{8-R\p{6}}+R\p{8-R\p{5}}\\
&=R\p{8-3}+R\p{8-4}+R\p{8-4}\\
&=R\p{5}+R\p{4}+R\p{4}\\
&=4+2R\p{4}.
\end{align*}
This is a symbolic answer, but that is permitted. The term $R\p{8}$ would be in the quadratic sequence, so this agrees with the eventual solution provided $R\p{4}$ is sufficiently large. We now try to compute $R\p{9}$:
\begin{align*}
R\p{9}&=R\p{9-R\p{8}}+R\p{9-R\p{7}}+R\p{9-R\p{6}}\\
&=R\p{9-4-2R\p{4}}+R\p{9-3}+R\p{9-4}\\
&=R\p{5-2R\p{4}}+R\p{6}+R\p{5}\\
&=R\p{5-2R\p{4}}+4+4\\
&=8+R\p{5-2R\p{4}}.
\end{align*}
If $2R\p{4}\geq5$, we obtain $R\p{9}=8$, which is its value in the eventual solution. If we compute $R\p{10}$, we find that it equals its predicted value of $4$ provided that $2R\p{4}\geq6$. Then, if we compute $R\p{11}$, we find that it equals its predicted value of $3$ provided that $2R\p{4}\geq7$. Combining all of this, we find that, for all $R\p{1}$ and all $R\p{4}\geq4$, $\bk{R\p{1},1,0, R\p{4}, 4, 4, 3}$ is an initial condition that yields the eventual solution we found in the previous step.
\end{itemize}
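The entire derivation can be confirmed by direct computation: iterate the recurrence on the initial condition $\bk{R\p{1},1,0,R\p{4},4,4,3}$ with the concrete choices $R\p{1}=1$ and $R\p{4}=4$, and compare against the eventual solution (which, with these values, closes to $R\p{4k}=2k\p{k+1}$). A sketch (the helper name is ours):

```python
def run_R(init, N):
    """R(n) = R(n-R(n-1)) + R(n-R(n-2)) + R(n-R(n-3)); R(j) = 0 for j <= 0."""
    R = {i + 1: v for i, v in enumerate(init)}
    for n in range(len(init) + 1, N + 1):
        # with this initial condition every positive index referenced here is
        # already computed, so .get's default of 0 only fires for indices <= 0
        R[n] = sum(R.get(n - R[n - d], 0) for d in (1, 2, 3))
    return R

R = run_R([1, 1, 0, 4, 4, 4, 3], 100)   # R(1) = 1 and R(4) = 4 chosen concretely
ok = all(R[4 * k] == 2 * k * (k + 1) and R[4 * k + 1] == 4 * k
         and R[4 * k + 2] == 4 and R[4 * k + 3] == 3
         for k in range(1, 25))
print(ok)  # True
```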
\section{Some Findings}\label{sec:find}
As our running example illustrates, it is possible to obtain quasi-quadratic solutions to Hofstadter-like recurrences. This raises the question of whether higher degree quasipolynomials are possible. If they are, the algorithm will find them. It turns out that there are quasipolynomial solutions to the Hofstadter $Q$-recurrence of every positive degree~\cite{gengol}. In addition, there are solutions that include both quadratic and exponential subsequences, such as the sequence obtained from the Hofstadter $Q$-recurrence with $\bk{9,0,0,0,7,9,9,10,4,9,9,3}$ as the initial condition~\cite[A275153]{oeis}.
It is likely that a construction similar to the one for arbitrary degree quasipolynomials~\cite{gengol} will also lead to examples including higher degree polynomials along with exponentials. There are also solutions to the Hofstadter $Q$-recurrence with linear subsequences with slopes greater than $1$, and such subsequences can be obtained in any of the three ways mentioned in Subsection~\ref{subsec:csp}:
\begin{itemize}
\item The length-$45$ initial condition
\begin{align*}
[&0, 4, -40, -9, 8, -8, 7, 1, 5, 13, -24, -1, 8, 8, 8, 1, 5, 13, -8, 7, 8, 8,
23, 1, 5, 13, 8, 15, \\
&8, 16, 31, 1, 5, 13, 24, 23, 8, 24, 39, 1, 5, 13, 40, 31,
8]
\end{align*}
leads to a period-$8$ solution with $Q\p{8n+3}=16n-40$. This is the case because unpacking $Q\p{8n+3}$ involves adding two standard linear terms together~\cite[A275361]{oeis}.
\item The length-$16$ initial condition
\[
[-9, 2, 9, 2, 0, 7, 9, 10, 3, 0, 2, 9, 2, 9, 9, 9]
\]
leads to a period-$9$ solution where $Q\p{9n+2}$ and $Q\p{9n+8}$ both have slope $10$. But, $Q\p{9n+2}$ has slope $10$ because unpacking it yields $Q\p{9n-1}$ plus a constant~\cite[A275362]{oeis}.
\item In the previous example, $Q\p{9n+8}$ has slope $10$ because unpacking it results in $10+Q\p{9n-1}$. This appears to be, by far, the most common way steep linear solutions arise.
\end{itemize}
Our algorithm was also used to examine what sorts of recurrences can be satisfied by the exponential subsequences. This led to the observation that any homogeneous linear recurrence with positive coefficients that sum to at least $2$ can be realized~\cite{genrusk}.
We have used our algorithm to fully explore positive recurrent solution families to the Hofstadter $Q$-recurrence with small periods.
Given a solution to the Hofstadter $Q$-recurrence, any shift of this solution is also a solution, since the recurrence only depends on the relative indices of the terms (and not the absolute indices). So, solution families found by the algorithm can be considered as-is or modulo this shifting operation. There are no period-$1$ positive recurrent solutions to the Hofstadter $Q$-recurrence, and there are two families (one modulo shifting) of period~$2$ solutions. These solutions consist of one constant sequence interleaved with one standard linear sequence. For example, the initial condition $\bk{2,2}$ gives rise to the sequence $2,2,4,2,6,2,8,2,\ldots$~\cite[A275365]{oeis}. There are $12$ period~$3$ solution families ($4$ modulo shifting). One of these families includes Golomb's solution, and another includes Ruskey's sequence. The other two families consist of eventually quasilinear solutions with two constant sequences and one standard linear sequence (and appear to be related to each other). One of these families includes sequence A264756~\cite{oeis}. There are $12$ period~$4$ families ($5$ modulo shifting), all of which are quasilinear with constant and standard linear sequences. Some of these families include the period~$2$ solutions, but each such family also includes additional solutions that do not have period~$2$. There are $35$ period~$5$ families ($7$ modulo shifting). Again, all of these are quasilinear. But, one of these families has a steep linear subsequence~\cite[A269328]{oeis}. There is a lot more variety beginning at period~$6$. There are $294$ solution families ($86$ modulo shifting) in this case. These solutions include quadratics as well as mixing of exponentials with steep linears. A file containing information on all of the solution families examined, modulo shifting, can be found at \texttt{http://www.math.rutgers.edu/{$\sim$}nhf12/research/hof\_small\_periods.txt}.
In addition, we found positive recurrent solutions to other recurrences, including the Conolly recurrence~\cite[A275363]{oeis} and the Hofstadter-Conway recurrence~\cite[A052928]{oeis}. This second case is notable because the solution has period~$2$ with both subsequences linear. This can happen because the Hofstadter-Conway recurrence is not basic. As a result, the congruence classes of the constant terms in the linear polynomials end up determining much of the behavior.
\section{Future Work}\label{sec:fut}
This algorithm has various positivity requirements in order to ensure that it runs correctly. Nested recurrences are allowed to have negative terms; these would inherently violate our algorithm's conditions. It would be useful to have a version of this algorithm that can handle these cases. Also, we are not currently allowing implicit solutions. Allowing these would give a method of handling sequences with infinitely many nonpositive entries, which may or may not be helpful.
In addition, the algorithm only works for linear recurrences. But, it makes perfect sense to look for solutions to nonlinear nested recurrences that consist of simpler interleaved sequences. Most of the steps of this algorithm still work when the recurrences can be nonlinear. Whether that means that $Q$ terms are multiplied by each other or raised to powers, the only thing that changes significantly is the algorithm for determining the orders of growth of the subsequences. Nonlinearity introduces some extra complications. For example, the recurrence $C\p{n}=C\p{n-C\p{n-1}}^2-2C\p{n-C\p{n-1}}+2$ has constant solutions (constant $1$ and constant $2$), but it also has a solution
\[
\begin{cases}
C\p{2k}=2^{2^{k-1}}+1\\
C\p{2k+1}=2.
\end{cases}
\]
The distinction here comes from the fact that repeated squaring causes numbers to rapidly grow, unless the initial number was $0$ or $1$. Such concerns do not arise when dealing with only linear recurrences. Hence, it would be useful to have a version of the algorithm that can handle nonlinear recurrences. Of course, we would have to modify what we are looking for, since the example here includes a doubly exponential subsequence, which cannot possibly be a component of a positive-recurrent sequence.
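The claimed doubly exponential solution is easy to verify by direct computation under the same convention that nonpositive indices evaluate to zero. In the sketch below, the initial condition $\bk{2,3}$ is our own choice of a consistent start (the recurrence itself fixes everything afterward):

```python
def run_C(N):
    """C(n) = C(n-C(n-1))^2 - 2*C(n-C(n-1)) + 2, with C(j) = 0 for j <= 0."""
    C = {1: 2, 2: 3}  # assumed initial condition realizing the mixed solution
    for n in range(3, N + 1):
        x = C.get(n - C[n - 1], 0)
        C[n] = x * x - 2 * x + 2
    return C

C = run_C(12)
print(all(C[2 * k] == 2 ** (2 ** (k - 1)) + 1 for k in range(1, 7)))  # True
print(all(C[2 * k + 1] == 2 for k in range(1, 6)))                    # True
```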
| {
"timestamp": "2016-09-22T02:00:40",
"yymm": "1609",
"arxiv_id": "1609.06342",
"language": "en",
"url": "https://arxiv.org/abs/1609.06342",
"abstract": "The Hofstadter Q-sequence, with its simple definition, has defied all attempts at analyzing its behavior. Defined by a simple nested recurrence and an initial condition, the sequence looks approximately linear, though with a lot of noise. But, nobody even knows whether the sequence is infinite. In the years since Hofstadter published his sequence, various people have found variants with predictable behavior. Oftentimes, the resulting sequence looks complicated but provably grows linearly. Other times, the sequences are eventually linear recurrent. Proofs describing the behaviors of both types of sequence are inductive. In the first case, the inductive hypotheses are fairly ad-hoc, but the proofs in the second case are highly automatable. This suggests that a search for more sequences like these may be fruitful. In this paper, we develop a step-by-step symbolic algorithm to search for these sequences. Using this algorithm, we determine that such sequences come in infinite families that are themselves plentiful. In fact,there are hundreds of easy to describe families based on the Hofstadter Q-recurrence alone.",
"subjects": "Number Theory (math.NT); Combinatorics (math.CO)",
"title": "Finding Linear-Recurrent Solutions to Hofstadter-Like Recurrences Using Symbolic Computation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98058065352006,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7077274190359917
} |
https://arxiv.org/abs/0712.0966 | On the Dirichlet problem for prescribed mean curvature equation over general domains | We study and solve the Dirichlet problem for graphs of prescribed mean curvature in $\mathbb R^{n+1}$ over general domains $\Omega$ without requiring a mean convexity assumption. By using pieces of nodoids as barriers we first give sufficient conditions for the solvability in case of zero boundary values. Applying a result by Schulz and Williams we can then also solve the Dirichlet problem for boundary values satisfying a Lipschitz condition. | \subsection{Introduction}
In this paper we study and solve the Dirichlet problem for
$n$-dimensional graphs of prescribed mean curvature in $\mathbb R^{n+1}$:
Given a domain $\Omega\subset\mathbb R^n$ and
Dirichlet boundary values $g\in C^0(\partial\Omega,\mathbb R)$
we want to find a solution $f\in C^2(\Omega,\mathbb R)\cap C^0(\overline\Omega,\mathbb R)$ of
\begin{equation}
\diver\frac{\nabla f}{\sqrt{1+|\nabla f|^2}}
=n H(x,f) \quad \mbox{in}\; \Omega \label{l1} \; , \;
f=g \quad \mbox{on}\; \partial\Omega \; .
\end{equation}
The given function $H:\overline\Omega\times\mathbb R\to\mathbb R$
is called the prescribed mean curvature.
At each point $x\in\Omega$ the geometric mean curvature of
the graph $f$, defined as the average of the principal curvatures,
is equal to the value $H(x,f(x))$, thus a solution
$f$ is also called a graph of prescribed mean curvature $H$. \\ \\
For the minimal surface case, i.e. $H\equiv 0$, it is known that the
mean convexity of the domain $\Omega$ yields a necessary and sufficient condition
for the Dirichlet problem to be solvable for all Dirichlet boundary values
(see \cite{jenkins}). Here, mean convexity means that $\hat H(x)\geq 0$
for the mean curvature of $\partial\Omega$ w.r.t. the inner normal.
For the prescribed mean curvature case,
a stronger assumption is needed on the domain $\Omega$
in order to solve the boundary value problem for all Dirichlet boundary values $g$.
A necessary condition on the domain $\Omega$ and the prescribed
mean curvature $H$ is
\begin{equation}\label{a1}
|H(x,z)|\leq \frac{n-1}{n} \hat H(x) \quad
\mbox{for}\; (x,z)\in\partial\Omega\times\mathbb R
\end{equation}
(see \cite[Corollary 14.13]{gilbarg}).
Additionally requiring a smallness condition
on $H$ implying the existence of a $C^0$-estimate (such as \cite[(10.32)]{gilbarg})
Gilbarg and Trudinger \cite[Theorem 16.9]{gilbarg}
could then solve the Dirichlet problem in case $H=H(x)$. \\ \\
It is now a natural question to ask if we can
relax the mean convexity assumption (\ref{a1}) if we only consider
certain boundary values, for example zero boundary values. This is indeed possible,
as our first existence result demonstates.
\begin{theorem}\label{t1}
Assumptions:
\begin{itemize}
\item[a)] Let the bounded $C^{2+\alpha}$-domain $\Omega\subset\mathbb R^n$
satisfy a uniform exterior sphere condition of radius $r>0$
and be included in the annulus $\{x\in\mathbb R^n\; : \; r<|x|<r+d\}$
for some constant $d>0$.
\item[b)] Let the prescribed mean curvature $H=H(x,z)\in C^{1+\alpha}
(\overline\Omega\times\mathbb R,\mathbb R)$
satisfy $H_z\geq 0$ and the smallness assumption
\begin{equation}\label{l70}
h:=\sup_{x\in\Omega} |H(x,0)|<\frac{2 (2r)^{n-1}}{(2r+d)^n-(2r)^n} \; .
\end{equation}
\end{itemize}
Then the Dirichlet problem (\ref{l1}) has a unique
solution $f\in C^{2+\alpha}(\overline\Omega,\mathbb R)$
for zero boundary values.
\end{theorem}
For dimension $n=2$ and constant mean curvature,
similar existence theorems, again for zero boundary values,
can be found in \cite{lopez2}, \cite{lopez} or \cite{ripoll1}.
Note that Theorem \ref{t1} can be applied in particular to the
annulus $\Omega:=\{x\in\mathbb R^n\, : \, r<|x|<r+d\}$ which
does not satisfy the mean convexity assumption (\ref{a1}).
Given any bounded $C^2$-domain $\Omega$ we can find
constants $r>0$ and $d>0$ such that assumption a) of Theorem \ref{t1}
is satisfied for a suitable translation of $\Omega$. \\ \\
The uniqueness part of Theorem \ref{t1} follows directly from
the assumption $H_z\geq 0$ together with the maximum principle.
However, $H_z\geq 0$ is not only needed for the uniqueness but
also for the existence of a solution. More precisely, it
is needed to obtain a global gradient estimate for solutions
of Dirichlet problem (\ref{l1}) (see Theorem \ref{tn}). \\ \\
The smallness condition (\ref{l70}) is required for two
reasons: first to obtain an estimate of the $C^0$-norm
of the solution and secondly to obtain a boundary gradient
estimate (see Theorem \ref{t3}). Other smallness conditions
assuring the existence a $C^0$-estimate are given in \cite{gilbarg}, such as
\begin{equation}\label{l240}
h<\Big(\frac{\omega_n}{|\Omega|}\Big)^{1/n} \; .
\end{equation}
These two assumptions (\ref{l70}) and are (\ref{l240}) quite different as they involve different quantities:
(\ref{l70}) contains the numbers $r$ and $d$ while
(\ref{l240}) contains the volume $|\Omega|$ of $\Omega$.
Additionally, assumption (\ref{l240}) does not imply a boundary gradient estimate
while (\ref{l70}) does. We also want to remark that there are certain
domains for which (\ref{l70}) is satified and not (\ref{l240})
while for certain other domains (\ref{l240}) is satisfied but (\ref{l70}) is not. \\ \\
Note that some kind of smallness assumption on $h$
in Theorem \ref{t1} is needed since there exists
the following necessary condition:
If there exists a graph of constant mean curvature
$h>0$ over a domain $\Omega$ containing a disc
of radius $\varrho>0$, then we have necessarily
$h\leq \frac{1}{\varrho}$. This follows
from a comparision with spherical caps of constant mean curvature $\frac{1}{\varrho}$
together with the maximum principle.
Consequently, the smallness condition on $h$
in Theorem \ref{t1} cannot solely depend
on the radius $r$ of the exterior sphere condition. \\
Furthermore, the smallness condition on $h$ also cannot solely depend
on the diameter of the domain: Consider the annulus
$\Omega=\{x\in\mathbb R^n \; : \; \varepsilon<|x|<1\}$ for some
$0<\varepsilon<1$ with $\mbox{diam}(\Omega)=2$.
In Lemma \ref{lemma_neu} we show that a graph of constant mean
curvature $h>0$ having zero boundary values does not exist
if one chooses $\varepsilon>0$ sufficiently small. \\ \\
Theorem \ref{t1} specifically applies to convex domains.
Note that a convex domain satisfies
a uniform exterior sphere condition of any radius $r>0$.
By letting $r\to+\infty$, we then obtain the following
corollary, which for dimension $n=2$ and constant mean curvature
can also be found in \cite[Corollary 3]{ripoll1} or \cite[Theorem 1.4]{lopez2}.
\begin{corollary}\label{c1}
Let a bounded convex $C^{2+\alpha}$-domain $\Omega\subset\mathbb R^n$
be given such that $\overline\Omega$ is included
within the strip $\{x\in\mathbb R^n\; | \; 0<x_1<d\}$ of width $d>0$.
Let the prescribed mean curvature $H\in C^{1+\alpha}(\overline\Omega\times\mathbb R,\mathbb R)$
satisfy $H_z\geq 0$ as well as
$$ h:=\sup\limits_{\Omega}|H(x,0)|<\frac{2}{nd} \; . $$
Then the Dirichlet problem (\ref{l1}) has a unique
solution $f\in C^{2+\alpha}(\overline\Omega,\mathbb R)$
for zero boundary values.
\end{corollary}
Note that in Corollary \ref{c1} the diameter
of the domain $\Omega$ can be arbitrarily large, while
in Theorem~\ref{t1} the diameter is bounded by $2(r+d)$.
Additionally, we can choose the volume $|\Omega|$
of the domain $\Omega$ arbitrarily large
so that the smallness assumption (\ref{l240}) will not be satisfied. \\ \\
In case of arbitrary boundary values $g$,
Williams \cite{williams} could show that
the Dirichlet problem (\ref{l1}) for $H\equiv 0$ is still solvable
over domains not being mean convex domains,
if one requires certain smallness assumptions on $g$. More precisely he showed:
For any Lipschitz constant $0\leq L<\frac{1}{\sqrt{n-1}}$
there exists some $\varepsilon=\varepsilon(L,\Omega)>0$ such that the Dirichlet
problem (\ref{l1}) is solvable for the minimal surface equation if the
boundary values $g$ satisfy
\begin{equation}\label{la}
|g(x)-g(y)|\leq L|x-y|\quad \mbox{for}\; x,y\in\partial\Omega
\quad \mbox{and} \quad |g(x)|\leq \varepsilon \quad \mbox{for}\; x\in\partial\Omega \; .
\end{equation}
Note that the boundary values are only required to be Lipschitz
continuous and they are not of class $C^{2+\alpha}$. Hence, also the
solution will be at most Lipschitz continuous up to the boundary.
For the proof Williams first considers weak solutions of the minimal
surface equation. Constructing suitable barriers he then
shows that these weak solutions are continuous up to the
boundary and that the Dirichlet boundary values are attained. \\ \\
Schulz and Williams \cite{schulz}
generalised the result of Williams \cite{williams}
from the minimal surface case to the prescribed mean curvature case $H=H(x,z)$.
However, two more assumptions are needed there:
As in Theorem \ref{t1}, the prescribed mean curvature function $H$ must
satisfy the monotonocity assumption $H_z\geq 0$.
This assumption is needed for the existence of weak
solutions (see \cite{miranda}).
Moreover, they require the existence of an initial solution
$f_0\in C^2(\Omega,\mathbb R)\cap C^1(\overline\Omega,\mathbb R)$
for Dirichlet boundary values $g_0$, which must be Lipschitz
continuous with a Lipschitz constant smaller than $\frac{1}{\sqrt{n-1}}$. \\ \\
Using our solution of Theorem \ref{t1}
and Corollary \ref{c1} as an initial solution with zero
boundary values, we can apply the result of Schulz and Williams
to solve the Dirichlet problem for
Lipschitz continuous boundary values as well:
\begin{theorem}\label{t2}
Let the assumptions of Theorem \ref{t1} or Corollary
\ref{c1} be satisfied. Then for any Lipschitz
constant $0\leq L<\frac{1}{\sqrt{n-1}}$ there
exists some $\varepsilon=\varepsilon(\Omega,H,L)>0$ such that
the Dirichlet problem (\ref{l1}) has a
solution $f\in C^{2+\alpha}(\Omega,\mathbb R)\cap C^{0}(\overline\Omega,\mathbb R)$
for all Lipschitz continuous boundary values
$g:\partial\Omega\to\mathbb R$ satisfying assumption (\ref{la}).
\end{theorem}
As demonstrated in \cite{schulz}, the smallness assumption
on the Lipschitz constant $L$ is sharp.
In case of the minimal surface equation,
Theorem \ref{t2} will be false for any
Lipschitz constant $L>\frac{1}{\sqrt{n-1}}$ and any
domain $\Omega$ which is not mean convex (see \cite[Theorem 4]{williams}). \\ \\
This paper is organized as follows:
In Section 2 we first we show that solutions
satisfy a height as well as a boundary gradient estimate.
As barriers we use a piece of a rotationally symmetric
surface of constant mean curvature $h$, a so-called Delaunay nodoid. This surface
is constructed in Proposition \ref{p1} by solving
an ordinary differential equation. There we need a smallness assumption
on $h$ corresponding to assumption (\ref{l70}) of Theorem \ref{t1}.
In Section 3 we first give a global gradient estimate
in terms of the boundary gradient (see Theorem \ref{tn}).
The monotonocity assumption $H_z\geq 0$ plays an important role there.
We then give the proof of Theorem \ref{t1} and Corollary \ref{c1}
using the Leray-Schauder method from \cite{gilbarg}.
\subsection{Estimates of the height and the boundary gradient}
To obtain a priori $C^0$ estimates as well as
boundary gradient estimates for solutions of problem (\ref{l1}),
it is essential to have certain
super and subsolutions at hand serving us upper and lower barriers.
In this paper we will use a rotationally symmetric surface
of constant mean curvature $h$, a so-called Denaunay surface as barrier.
For $h=0$ we have the family of catenoids and for $h\neq 0$
a family consisting of two types of surfaces:
the embedded unduloids and the immersed nodoids
(see \cite{hsiang}; \cite{kenmotsu} for $n=2$).
We will now construct a piece of the $n$-dimensional
catenoid (if $h=0$) and $n$-dimensional nodoid (if $h\neq 0$)
which is given as a graph defined over the annulus
$$ \{x\in\mathbb R^n\; | \; r\leq |x|\leq R\} \; . $$
It can be represented almost explicitely
by solving a second order ordinary differential equation.
\begin{proposition}\label{p1}
Let the numbers $r>0$, $h\geq 0$ and $R>r$ be given satisfying
\begin{equation}\label{l6}
h<\frac{{2 (2r)^{n-1}}}{(R+r)^n-(2r)^n} \; .
\end{equation}
Then there exists a function $p\in C^2([r,R],[0,+\infty))$
with $p(r)=0$ and $p(t)>0$ for $t\in (r,R]$ such that the rotationally symmetric graph
$f(x):=p(|x|)$ defined on the annulus $r\leq |x|\leq R$ has constant mean curvature $-h$.
Furthermore, there exists some $t_0\in (r,R]$ such that
$p(t)$ is increasing for $t\in [r,t_0]$ and decreasing
for $t\in [t_0,R]$.
\end{proposition}
{\it Proof:}
\begin{itemize}
\item[1.)] Inserting $p(|x|)=f(x)$ into
the mean curvature equation
$$ \diver \frac{\nabla f}{\sqrt{1+|\nabla f|^2}}=-n h $$
we obtain for $p$ the second order differential equation
$$ \frac{p''}{(1+p'^2)^{\frac{3}{2}}}+\frac{(n-1) p'}{t (1+p'^2)^{\frac{1}{2}}}=-n h \; . $$
Multiplying this equation by $t^{n-1}$ and integrating
this yields the first order differential equation
\begin{equation}\label{l2}
\frac{t^{n-1}p'}{\sqrt{1+p'^2}}=c-ht^n
\end{equation}
where $c\in\mathbb R$ is some integration constant serving as a parameter.
We focus here on the case $c>0$, corresponding to the
choice of a nodoid. The case $c=0$ yields a sphere and $c<0$
an unduloid. Solving equation (\ref{l2}) for $p'$ we obtain
\begin{equation}\label{l3}
p'(t)=\frac{c-ht^n}{\sqrt{t^{2n-2}-(c-h t^n)^2}} \; .
\end{equation}
Clearly, (\ref{l3}) is only well defined for those $t\in (0,+\infty)$
for which the term under the root in the denominator is positive.
We will later determine for which $t$ this is the case.
Integrating (\ref{l3}) we can now define
\begin{equation}\label{l250}
p(t):=\int\limits_r^t \frac{c-hs^n}{\sqrt{s^{2n-2}-(c-h s^n)^2}} ds
\end{equation}
with $p(r)=0$.
\item[2.)] Let us first study the case $h=0$.
The denominator of (\ref{l3}) has exactly one zero
$a>0$ given as solution of $a^{n-1}=c$ and $p'(t)$ is defined
for all $t\in (a,+\infty)$. For the integral
(\ref{l250}) to be defined, we need to have that $r\in (a,+\infty)$, which is equivalent
to $c<r^{n-1}$. For example, we can set $c:=\frac{1}{2} r^{n-1}$.
The function $p(t)$ is now defined for
all $t\in [r,+\infty)$ and also $p'(t)>0$ for all $t\in [r,+\infty)$.
The claim of the proposition now follows with $t_0=R$.
\item[3.)] In case $h>0$, the denominator of (\ref{l3})
has precisely two positive zeros
$0<a<b$ given as solutions of the equations
$$ h a^n+a^{n-1}=c \quad , \quad hb^n-b^{n-1}=c \; . $$
Now $p'(t)$ is defined for all $t\in (a,b)$
and formally we have $p'(a)=+\infty$, $p'(b)=-\infty$.
Note that for
$$ t_0:=\Big(c\, h^{-1}\Big)^{\frac{1}{n}}\in (a,b) $$
we have
$$ p'(t_0)=0 \quad , \quad p'(t)>0 \quad \mbox{for}\; t\in (a,t_0)
\quad \mbox{and} \quad p'(t)<0 \quad \mbox{for}\; t\in (t_0,b) \; , $$
as desired. Now for the integral (\ref{l250})
to be defined, we need to have $a<r<t_0$, which is
equivalent to restricting the parameter $c$ such that
\begin{equation}\label{l270}
h r^n<c<h r^n+r^{n-1} \; .
\end{equation}
We then obtain $p\in C^2([r,b),\mathbb R)$.
\item[4.)] We will now show the inequality
\begin{equation}\label{l5}
p'(t_0-s)>|p'(t_0+s)| \quad \mbox{for all}\; s\in (0,t_0-a) \; .
\end{equation}
Together with $p(r)=0$ this will yield $p(t)>0$ for all $t\in (r,r+2(t_0-r)]$.
In fact, after some computation (\ref{l5})
turns out to be equivalent to
$$ q(t_0-s)+q(t_0+s)>0 \quad \mbox{for}\; s\in (0,t_0-a) $$
for the function $q(t):=(c-ht^n) t^{1-n}=c t^{1-n}-h t$.
This however is a direct consequence of the inequality
$$ c(t_0+s)^{1-n}+c(t_0-s)^{1-n}>2h t_0 $$
which holds for all $s\in (0,t_0)$, proving (\ref{l5}).
\item[5.)] We now set
$$ R'=R'(c):=r+2(t_0-r)=2t_0-r=2\Big(c h^{-1}\Big)^{\frac{1}{n}}-r<b \; . $$
From 4.) we conclude the positivity $p(t)>0$ for all $t\in (r,R']$.
Keeping in mind the restriction (\ref{l270})
on $c$ we obtain the limit
$$ R'(c)\to 2\big(r^n+ h^{-1} r^{n-1}\Big)^{\frac{1}{n}}-r
=2r\Big(1+h^{-1} r^{-1}\Big)^{\frac{1}{n}}-r $$
if we let $c\to h r^n+r^{n-1}$.
This proves the claim of the proposition whenever
$$ R<2r\Big(1+h^{-1} r^{-1}\Big)^{\frac{1}{n}}-r $$
is satisfied. An easy computation, however, asserts that
this inequality is indeed equivalent to assumption (\ref{l6}) . \hfill $\Box$
\end{itemize}
The following picture shows the graph of the function $p(t)$
for $n=2$, $h=\frac{1}{3}$, $a=1$ and $b=4$.
\psfrag{r}{$r$}
\psfrag{a}{$a$}
\psfrag{R}{$R$}
\psfrag{b}{$b$}
\psfrag{t0}{$t_0$}
\includegraphics[scale=0.88]{nodoid1.eps}\\ \noindent
{\it Remarks:}
\begin{itemize}
\item[a)] For $h=0$ and $n=2$ the function
$p(t)$ has the explicit form
$p(t)=c\, \mbox{arcosh}(t/c)$, i.e. the well known catenary.
If either $h>0$ or $n\geq 3$ the function $p(t)$
can only be represented by the elliptic integral given in the
proof of Proposition \ref{p1}.
\item[b)] In the case $h=0$ we obtain the $n$-dimensional
catenoid, a rotationally symmetric minimal surface.
The generating function is defined for all $t\in [r,+\infty)$.
In case $n=2$ we have $p(t)\to\infty$ as $t\to\infty$.
However, for $n\geq 3$ the function
$p(t)$ is uniformly bounded by some constant.
\item[c)] In case $h>0$, the maximal domain of definition
of the function $p(t)$ is the interval $(a,b)$. In case $n=2$ one can show that the
length $b-a$ of this interval is given by $b-a=\frac{1}{h}$,
in particular the length does not depend on the parameter $c$.
This is no longer the case for dimension $n\geq 3$
where $b-a$ depends on both $h$ and $c$.
\end{itemize}
At this point let us prove the following nonexistence result
which we already claimed in the introduction.
\begin{lemma}\label{lemma_neu}
For $0<\varepsilon<1$ consider the annulus $\Omega:=\{x\in\mathbb R^n\, : \, \varepsilon<|x|<1\}$.
Then given any constant $h>0$ there exists some $\varepsilon=\varepsilon(h)\in (0,1)$, such that
a graph $f\in C^2(\Omega,\mathbb R)\cap C^0(\overline\Omega,\mathbb R)$
of constant mean $h$ with zero boundary values does not exist.
\end{lemma}
{\it Proof:}
We will show that such a graph of constant mean curvature $-h$ does not exist
for sufficiently small $\varepsilon>0$. By a reflection argument, then a graph of
constant mean curvature $h$ does not exist either.
Assume to the contrary that a graph $f=f_\varepsilon$ does exist for each $\varepsilon>0$.
Because $f_\varepsilon$ has constant mean curvature $-h<0$ and zero boundary values,
the maximum principle yields $f_\varepsilon(x)\geq 0$ for $x\in\Omega$.
Now note that the domain $\Omega$ and the boundary values of $f_\varepsilon$ are rotationally symmetric.
Hence, the solution $f_\varepsilon$ is also rotationally symmetric, following from the uniqueness of
the Dirichlet problem. But then we can write $f_\varepsilon(x)=p_\varepsilon(|x|)$ where $p_\varepsilon(t)$ satisfies
$p_\varepsilon(t)\geq 0$ for $t\in [\varepsilon,1]$ and $p_\varepsilon(\varepsilon)=p_\varepsilon(1)=0$. From (\ref{l3}) we conclude
$$ p_\varepsilon(t)=\int\limits_\varepsilon^t \frac{c-hs^n}{\sqrt{s^{2n-2}-(c-h s^n)^2}} ds $$
where $c=c(\varepsilon)\in\mathbb R$ is a suitable constant.
We set $k:=c-h\varepsilon^n$ and claim $k\geq 0$. Otherwise $p_\varepsilon'(t)<0$ would hold for
all $t\in (\varepsilon,1)$, contradicting $p_\varepsilon(\varepsilon)=p_\varepsilon(1)=0$.
Note that the expression under
the root must be nonnegative for all $s\in [\varepsilon,1]$, in particular for $s=\varepsilon$ we get
$$ \varepsilon^{2n-2}-(c-h \varepsilon^n)^2=\varepsilon^{2n-2}-k^2\geq 0 $$
or equivalently $k^{-2} \varepsilon^{2n-2}\geq 1$. For any $t\in [\varepsilon,1]$ we now estimate
\begin{eqnarray}
p_\varepsilon(t)&=&\int_\varepsilon^t \frac{c-hs^n}{\sqrt{s^{2n-2}-(c-h s^n)^2}} ds
\leq\int_\varepsilon^t \frac{c-h\varepsilon^n}{\sqrt{s^{2n-2}-(c-h \varepsilon^n)^2}} ds \nonumber \\
&=&\int_1^{t/\varepsilon} \frac{k}{\sqrt{(\varepsilon \tau)^{2n-2}-k^2}} \varepsilon d\tau
=\varepsilon \int_1^{t/\varepsilon} \frac{1}{\sqrt{k^{-2}\varepsilon^{2n-2} \tau^{2n-2}-1}}d\tau \nonumber \\
&\leq&\varepsilon \int_1^{1/\varepsilon}\frac{1}{\sqrt{\tau^{2n-2}-1}}d\tau \nonumber \; .
\end{eqnarray}
In case of dimension $n\geq 3$ we conclude that $\lim\limits_{\varepsilon\to 0} p_\varepsilon(t)=0$,
which follows from
$$ \int_1^{+\infty}\frac{1}{\sqrt{\tau^{2n-2}-1}}d\tau<+\infty $$
for dimension $n\geq 3$. In case of $n=2$, the above integral is infinite. However, the explicit computation
$$ p_\varepsilon(t)\leq\varepsilon \int_1^{1/\varepsilon}\frac{1}{\sqrt{\tau^2-1}} d\tau
=\varepsilon \Big[\mbox{arcosh}(t)\Big]_1^{1/\varepsilon}=\varepsilon\mbox{arcosh}(1/\varepsilon) $$
again shows $\lim\limits_{\varepsilon\to 0} p_\varepsilon(t)=0$.
This implies that the family $f_\varepsilon(x)=p_\varepsilon(|x|)$ converges uniformly
to $f_0(x)\equiv 0$ on every compact subset of $\{x\in\mathbb R^n \, : \, 0<|x|\leq 1\}$.
Then, after extracting some subsequence, all first and second derivatives of $f_\varepsilon$ will converge to zero
by interior gradient estimates for the constant mean curvature equation.
Hence, also the mean curvature of $f_\varepsilon$ must converge to zero.
This yields a contradiction as the mean curvature of $f_\varepsilon$ is $-h$ for
each $\varepsilon>0$. \hfill $\Box$\\ \\
We can now show the a priori estimates
of the height and boundary gradient of solutions of (\ref{l1}).
\begin{theorem}\label{t3}
Assumptions:
\begin{itemize}
\item[a)] Let the bounded $C^2$-domain $\Omega\subset\mathbb R^n$
satisfy a uniform exterior sphere condition of radius $r>0$
and be included in the annulus $\{x\in\mathbb R^n\; : \; r<|x|<r+d\}$
for some constant $d>0$.
\item[b)] Let the prescribed mean curvature $H=H(x,z)\in C^1
(\overline\Omega\times\mathbb R,\mathbb R)$
satisfy $H_z\geq 0$ in $\Omega\times\mathbb R$
as well as the smallness assumption $|H(x,0)|\leq h$
for some constant
$$ h<\frac{2 (2r)^{n-1}}{(2r+d)^n-(2r)^n} \; . $$
\item[c)] Let $f\in C^2(\overline\Omega,\mathbb R)$
be a solution of problem (\ref{l1}) for
zero boundary values.
\end{itemize}
Then there exists a constant $C=C(h,r,d)$ such that $f$
satisfies the estimates
$$ ||f||_{C^0(\Omega)}\leq C \quad \mbox{and} \quad
\sup\limits_{\partial\Omega} |\nabla f(x)|\leq C \; . $$
\end{theorem}
{\it Proof:}
\begin{itemize}
\item[1.)] We first show the $C^0$-estimate.
Because of $\Omega\subset \{x\in\mathbb R^n \, : \, r<|x|<r+d\}$
the rotationally symmetric graph $\eta(x):=p(|x|)$
is well defined and has constant mean curvature $-h$.
Here, $p(t)$ is the function defined by Proposition \ref{p1} for $R:=r+d$.
From $|H(x,0)|\leq h$ together with $H_z\geq 0$ we conclude
\begin{equation}\label{l120}
H(x,z)\geq -h \quad \mbox{for} \; x\in\Omega \; , \, z\geq 0
\quad \mbox{and} \quad
H(x,z)\leq h \quad \mbox{for} \; x\in\Omega \; , \, z\leq 0 \; .
\end{equation}
We now choose $c\geq 0$ minimal such that $f(x)\leq \eta(x)+c$
holds in $\overline\Omega$. We claim that $c=0$. Otherwise
there would be a point $x_0\in\Omega$ with $f(x_0)=\eta(x_0)+c>0$.
From (\ref{l120}) together with the strong maximum principle
we then would have $f(x)\equiv \eta(x)+c$ in $\Omega$,
contradicting $f(x)=0$ on $\partial\Omega$.
Hence we have shown $f(x)\leq \eta(x)$ in $\Omega$. Similary, we obtain
$f(x)\geq -\eta(x)$. Combining these estimates we have
$$ ||f||_{C^0(\Omega)}=\sup\limits_{\Omega} |f(x)|\leq
\sup\limits_{\Omega} |\eta(x)|
\leq \sup\limits_{r\leq t\leq r+d} |p(t)|=p(t_0)=: C_1 \; . $$
Here, $t_0$ defined by Proposition \ref{p1} is
the argument for which the function $p$ achieves its maximum.
Note that $p$ only depends on $r,d$ and $h$ and hence $C_1=C_1(r,d,h)$.
\item[2.)] Given some point $x_0\in\partial\Omega$ we show the
boundary gradient estimate at $x_0$.
Since $\Omega$ satisfies a uniform exterior sphere condition of radius $r$,
we may assume that
$$ \Omega\cap B_r(0)=\emptyset \quad \mbox{and}
\quad x_0\in\partial B_r(0)\cap\partial\Omega $$
holds after a suitable translation.
We define the annulus $U:=\{x\in\mathbb R^n\, :\, r<|x|<t_0\}$
and consider the graph
$$ \eta\in C^2(\overline U,\mathbb R) \quad , \quad \eta(x):=p(|x|) \quad \mbox{for}\;
x\in\overline U\; . $$
From $f(x)=0$ on $\partial\Omega$ together with
$f(x)\leq p(t_0)=\eta(x)$ for $|x|=t_0$
we conclude $f(x)\leq \eta(x)$ on $\partial(\Omega\cap U)$.
As in part 1.), the maximum principle gives
$f(x)\leq \eta(x)$ in $\Omega\cap U$ as well as
$f(x)\geq -\eta(x)$ in $\Omega\cap U$.
From $x_0\in\partial(\Omega\cap U)$
and $f(x_0)=\eta(x_0)$ we obtain
$$ |\nabla f(x_0)|=\Big|\frac{\partial}{\partial \nu} f(x_0)\Big|
\leq \Big|\frac{\partial}{\partial \nu} \eta(x_0)\Big|=|p'(r)|=: C_2 \; , $$
where $\nu$ is the outward normal to $\partial\Omega$ at $x_0$. \hfill $\Box$
\end{itemize}
{\it Remark:} A closer inspection of the proof shows that
Theorem \ref{t3} also holds without the assumption
$H_z\geq 0$ if one requires $|H(x,z)|\leq h$
in $\Omega\times\mathbb R$ instead of $|H(x,0)|\leq h$ in $\Omega$.
However, we will essentially need the assumption $H_z\geq 0$ in the next section to
prove a global gradient estimate.
\subsection{Global gradient estimate and the proof of Theorem \ref{t1}}
In the previous section we have shown
a $C^0$-estimate together with a boundary
gradient estimate, thus we can assume
\begin{equation}\label{l55}
|f(x)|\leq M \quad \mbox{in}\; \overline\Omega
\end{equation}
for a given solution $f\in C^{2+\alpha}(\overline\Omega,\mathbb R)$
of problem (\ref{l1}).
It now remains to establish a global gradient estimate
in terms of the $C^0$-norm and the boundary gradient.
Such a global gradient estimate is derived in \cite[Theorem 15.2]{gilbarg}
for a fairly large class of quasilinear elliptic equations.
This includes the prescribed mean curvature equation in case of $H=H(x)$,
as verified in example (ii) after \cite[Theorem 15.2]{gilbarg}. We will show that
\cite[Theorem 15.2]{gilbarg} continues to hold in case $H=H(x,z)$, if we assume the
monotonocity condition $H_z\geq 0$.
Let us first write the prescribed mean curvature equation in the form
$$ \triangle f-\sum_{i,j=1}^n \frac{\partial_i f\partial_j f}{1+|\nabla f|^2} \partial_{ij} f- nH(x,f) \sqrt{1+|\nabla f|^2}=0 \; . $$
Now quantities $\alpha,\beta,\gamma$ are defined by \cite[(15.27)]{gilbarg}, which in our case are
\begin{eqnarray}
&&\alpha=-1+\frac{1}{1+|p|^2} \quad , \quad \beta=\frac{n H(x,z) \sqrt{1+|p|^2}}{|p|^2} \nonumber \\
&&\gamma=-n \frac{(1+|p|^2)^{3/2}}{|p|^2}
\Big[H_z(x,z)+\sum_{i=1}^n \frac{p_i}{|p|^2} H_{x_i}(x,z)\Big] \quad
\mbox{for}\; x\in\Omega \; , \; |z|\leq M \; , \; p\in\mathbb R^n \nonumber
\end{eqnarray}
(compare with example (ii) in chapter 15.2 of \cite{gilbarg}).
We now compute the limits
\begin{eqnarray}
&&a:=\limsup\limits_{|p|\to\infty} \alpha=-1 \quad , \quad
b:=\limsup\limits_{|p|\to\infty} \beta\leq n \sup\limits_{\Omega\times [-M,M]} |H(x,z)| \nonumber \\
&& c:=\limsup\limits_{|p|\to\infty} \gamma\leq n \sup\limits_{\Omega\times [-M,M]} |\nabla H(x,z)| \label{l27}
\end{eqnarray}
using $H_z\geq 0$ for the last limit. Because of $a=-1$ together with $b,c<+\infty$ we may apply
\cite[Theorem 15.2]{gilbarg} to obtain
\begin{theorem}\label{tn}
Let the prescribed mean curvature
$H\in C^1(\overline\Omega\times\mathbb R,\mathbb R)$ satisfy
$$ H_z(x,z)\geq 0 \quad , \quad |H(x,z)|+|\nabla H(x,z)|\leq h_0 \quad \mbox{for}\; x\in\Omega \; , \; |z|\leq M \; . $$
Let $f\in C^2(\overline\Omega,\mathbb R)$ be a solution Dirichlet problem (\ref{l1})
satisfying $||f||_{C^0(\Omega)}\leq M$.
Then the estimate
$$ \sup\limits_{x\in\Omega} |\nabla f(x)|\leq C $$
holds with a constant $C$ depending only on $n$, $h_0$, $M$, $\Omega$
and $\sup_{\partial\Omega} |\nabla f|$.
\end{theorem}
{\it Remark:} If we do not assume $H_z\geq 0$, then we will obtain
$c=+\infty$ in (\ref{l27}) and \cite[Theorem 15.2]{gilbarg} will not be applicable.
In fact, the following example shows that a gradient estimate is false if one
does not require $H_z\geq 0$.
\begin{example}
Given some parameter $\varepsilon>0$ let $\beta(z):=z^3+\varepsilon z$ for $z\in I:=[-1,1]$.
Noting $\beta'(z)=3z^2+\varepsilon>0$ in $I$, there exists a smooth inverse
$\beta^{-1}:I\to\mathbb R$. From $\beta(-1)\leq -1$ and $\beta(1)\geq 1$ we
conclude $\beta^{-1}:I\to I$. We now consider the one-dimensional graph
$f_\varepsilon(x):=\beta^{-1}(x)$ for $x\in I$ with its parametrisation $X(x)=(x,f_\varepsilon(x))$.
Substituting $z=f_\varepsilon(x)$ we obtain the reparametrisation $\tilde X(z)=(\beta(z),z)$
and we can compute the curvature $H=H(z)$ by
$$ H(z):=H_\varepsilon(z)=-\frac{\beta''}{\Big(1+(\beta')^2\Big)^{3/2}}=-\frac{6z}{\Big(1+(3z^2+\varepsilon)^2\Big)^{3/2}} \; . $$
Hence, $f_\varepsilon$ is a graph of prescribed mean curvature $H_\varepsilon(z)$.
We can find a constant $C$ such that
$$ |H_\varepsilon(z)|+|\nabla H_\varepsilon(z)|\leq C \quad \mbox{for all} \; z\in [-1,1] \; , \; 0< \varepsilon\leq 1 \; . $$
Additionally, we have the $C^0$-estimate and boundary gradient estimate
$$ |f_\varepsilon(x)|\leq 1 \quad \mbox{for}\; x\in I \quad \mbox{and}\quad |\nabla f_\varepsilon(x)|\leq 1 \quad
\mbox{for}\; x\in\partial I=\{-1,1\} \; . $$
However, there is no uniform gradient bound for $f_\varepsilon$ in $I$ because
$$ |\nabla f_\varepsilon(0)|=|f_\varepsilon'(0)|=\frac{1}{|\beta'(0)|}=\frac{1}{\varepsilon}\to \infty \quad \mbox{if}\; \varepsilon\to 0 \; . $$
In this example, all of the assumptions of Theorem \ref{tn} are satisfied except for $H_z\geq 0$.
Even though this example was purely one-dimensional, a generalisation to
higher dimensions $n\geq 2$ is easily possible.
\end{example}
We can now give the\\ \\
{\it Proof of Theorem \ref{c1}:} \\
For $t\in [0,1]$ consider the family of Dirichlet problems
\begin{equation}
f\in C^{2+\alpha}(\overline\Omega,\mathbb R) \quad , \quad
\diver\frac{\nabla f}{\sqrt{1+|\nabla f|^2}}
=t\,n H(x,f) \quad \mbox{in}\; \Omega \quad \mbox{and}\quad
f=0 \quad \mbox{on}\; \partial\Omega\label{l11} \; .
\end{equation}
Let $f$ be such a solution
for some $t\in [0,1]$. By Theorems \ref{t3} and \ref{tn} we have the estimate
$$ ||f||_{C^1(\Omega)}\leq C $$
with some constant $C$ independet of $t$.
The Leray-Schauder method \cite[Theorem 13.8]{gilbarg}
yields a solution of the Dirichlet problem (\ref{l11}) for each $t\in [0,1]$.
For $t=1$ we obtain the desired solution of (\ref{l1}). \hfill $\Box$ \\ \\
{\it Proof of Corollary \ref{c1}:} \\
Corollary \ref{c1} is obtained as the limit case
of Theorem \ref{t1} by increasing the radius $r$ of
the exterior sphere condition to infinity.
First, since $\overline\Omega$ is bounded and included within the strip
$\{x\in\mathbb R^n\; : \; 0<x_1<d\}$, after a suitable
translation it will also be included within the
annulus $\{x\in\mathbb R^n\; : \; r<|x|<r+d\}$ for sufficiently
large $r>0$. To show which smallness condition on $h$ is required
in order to apply Theorem \ref{c1} we have to compute the limit
\begin{equation}\label{l57}
\lim\limits_{r\to\infty} \frac{2 (2r)^{n-1}}{(2r+d)^n-(2r)^n} \; .
\end{equation}
To do this, we calculate
$$
\lim\limits_{r\to\infty} \frac{(2r+d)^n-(2r)^n}{2 (2r)^{n-1}}
=\lim\limits_{r\to\infty}
\frac{(2r)^n+n (2r)^{n-1} d +O(r^{n-2})-(2r)^n}{2 (2r)^{n-1}}
=\frac{nd}{2} \; . $$
We see that the limit in (\ref{l57}) is equal
to $\frac{2}{n d}$ and hence the smallness condition
$h<\frac{2}{n d}$ is required. Alternatively we could prove
Corollary \ref{c1} also directly, by proving an analogue
result to Theorem \ref{t3} for convex domains.
Instead of using the nodoid we would then use a cylinder
as barrier whose axis is lying in the $x_1,\dots,x_n$ hyperplane.
Note that the cylinder $\{x\in\mathbb R^{n+1}\; : x_1^2+\dots+x_n^2=(\frac{d}{2})^2\}$
has constant mean curvature $h=\frac{2}{n d}$, corresponding to the
smallness condition from above. \hfill $\Box$ \\ \\
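As a quick numerical sanity check (ours, not part of the paper), the limit computed in the proof above can be confirmed by evaluating the ratio at a large radius for a few concrete dimensions $n$:

```python
# Sanity check of the limit from the proof of Corollary c1:
# 2*(2r)^(n-1) / ((2r+d)^n - (2r)^n)  ->  2/(n*d)  as r -> infinity.
def ratio(r, n, d):
    return 2 * (2 * r) ** (n - 1) / ((2 * r + d) ** n - (2 * r) ** n)

d = 1.7  # an arbitrary strip width, chosen purely for illustration
for n in [2, 3, 4, 5]:
    assert abs(ratio(1e6, n, d) - 2 / (n * d)) < 1e-4
```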
{\it Remarks:}
\begin{itemize}
\item[a)] Using the methods from \cite{bergner1}, it is also
possible to generalise Theorem \ref{t1} and Corollary \ref{c1}
to the case of prescribed anisotropic mean curvature
$$ \mbox{div}\frac{\nabla f}{\sqrt{1+|\nabla f|^2}}=n H(x,f,N) \quad \mbox{in}\; \Omega \; . $$
Here, the prescribed mean curvature depends not only on the point $(x,f(x))$ in
space but also on the normal $N(x)$ of the graph.
In this situation, the assumption $H_z\geq 0$
can be relaxed to a weaker one, allowing for non-uniqueness of solutions.
\item[b)] The results can also be generalised in another direction:
Define the boundary part
$$ \Gamma_+:=\Big\{x\in\partial\Omega \; : \;
|H(x,z)|\leq \frac{n-1}{n} \hat H(x) \; \mbox{for all}\; z\in\mathbb R\Big\} $$
where $\hat H(x)$ is the mean curvature of $\partial\Omega$ at $x$ w.r.t.
the inner normal. Now choose a subset $\Gamma\subset \Gamma_+$ such that
$\mbox{dist}(\Gamma,\partial\Omega\backslash\Gamma_+)>0$.
On $\Gamma$ we can use the standard boundary gradient estimate
(see \cite[Corollary 14.8]{gilbarg}) and prescribe $C^{2+\alpha}$ boundary values $g$
there. Our boundary gradient estimate of Theorem \ref{t3}, requiring
zero boundary values, is then only needed on $\partial\Omega\backslash\Gamma$.
This way, Theorem \ref{t1} and Corollary \ref{c1} also hold
for Dirichlet boundary values $g\in C^{2+\alpha}(\partial\Omega,\mathbb R)$
with $g(x)=0$ on $\partial\Omega\backslash\Gamma$ and $|g(x)|\leq \varepsilon$,
where $\varepsilon=\varepsilon(\Omega,\Gamma,H)>0$ is some constant determined
by the height of the nodoid constructed in Proposition \ref{p1}.
\end{itemize}
| {
"timestamp": "2007-12-06T16:30:12",
"yymm": "0712",
"arxiv_id": "0712.0966",
"language": "en",
"url": "https://arxiv.org/abs/0712.0966",
"abstract": "We study and solve the Dirichlet problem for graphs of prescribed mean curvature in $\\mathbb R^{n+1}$ over general domains $\\Omega$ without requiring a mean convexity assumption. By using pieces of nodoids as barriers we first give sufficient conditions for the solvability in case of zero boundary values. Applying a result by Schulz and Williams we can then also solve the Dirichlet problem for boundary values satisfying a Lipschitz condition.",
"subjects": "Differential Geometry (math.DG); Analysis of PDEs (math.AP)",
"title": "On the Dirichlet problem for prescribed mean curvature equation over general domains",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806523850543,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7077274182168091
} |
https://arxiv.org/abs/1711.03895 | Group Connectivity: $\mathbb Z_4$ v. $\mathbb Z_2^2$ | We answer a question on group connectivity suggested by Jaeger et al. [Group connectivity of graphs -- A nonhomogeneous analogue of nowhere-zero flow properties, JCTB 1992]: we find that $\mathbb Z_2^2$-connectivity does not imply $\mathbb Z_4$-connectivity, neither vice versa. We use a computer to find the graphs certifying this and to verify their properties using non-trivial enumerative algorithm. While the graphs are small (the largest has 15 vertices and 21 edges), a computer-free approach remains elusive. | \section{Introduction}
A \emph{flow} in a digraph~$G=(V,E)$ is an assignment of values of some abelian group~$\Gamma$ to
edges of~$G$ such that Kirchhoff's law is valid at every vertex.
Formally, $\varphi\colon E \to \Gamma$ satisfies
$$
\sum_{\hbox{\footnotesize $e$ ends at~$v$}} \varphi(e) = \sum_{\hbox{\footnotesize $e$ starts at~$v$}} \varphi(e)
$$
for every vertex~$v \in V$.
We say a flow is \emph{nowhere-zero} if it does not use value~$0$ at any edge.
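As a small illustration (ours, not from the paper), checking whether a given assignment is a nowhere-zero flow in ${\mathbb Z}_k$ is mechanical: avoid the value $0$ and verify Kirchhoff's law at every vertex.

```python
# Illustrative check (ours, not from the paper) of the flow definition above.
def is_nowhere_zero_flow(vertices, edges, phi, k):
    """edges: list of arcs (u, v); phi: list of Z_k values, one per arc."""
    if any(x % k == 0 for x in phi):
        return False  # the assignment uses the value 0 somewhere
    for v in vertices:
        incoming = sum(x for x, (u, w) in zip(phi, edges) if w == v)
        outgoing = sum(x for x, (u, w) in zip(phi, edges) if u == v)
        if (incoming - outgoing) % k != 0:
            return False  # Kirchhoff's law fails at v
    return True

# On a directed triangle, the constant assignment 1 is a nowhere-zero Z_3-flow,
# while the assignment 1, 2, 1 violates Kirchhoff's law at a vertex.
V, E = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
assert is_nowhere_zero_flow(V, E, [1, 1, 1], 3)
assert not is_nowhere_zero_flow(V, E, [1, 2, 1], 3)
```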
Tutte~\cite{Tutte} started the study of nowhere-zero flows by observing that
a plane digraph~$G$ has a nowhere-zero flow in~${\mathbb Z}_k$ if and only if its plane dual~$G^*$ is $k$-colorable
(we do not consider orientation of the edges for the coloring).
This motivated several famous conjectures; we mention just the 5-flow conjecture (due to Tutte):
every bridgeless graph has a nowhere-zero flow in~${\mathbb Z}_5$.
A motivating feature of the theory of nowhere-zero flows is a number of nice properties, starting
with the ones discovered by Tutte. In particular:
\begin{theorem}[Tutte~'54~\cite{Tutte}] \label{Tuttegroup}
Let $\Gamma$ be an abelian group with $k$ elements. Then for every digraph
the existence of a nowhere-zero $\Gamma$-flow is equivalent to the existence of
a nowhere-zero ${\mathbb Z}_k$-flow.
\end{theorem}
\begin{theorem}[Tutte~'54~\cite{Tutte}] \label{Tutteint}
The existence of a nowhere-zero ${\mathbb Z}_k$-flow is equivalent to the existence of a nowhere-zero integer flow that uses only the values
$\pm 1$, $\pm 2$, \dots, $\pm (k-1)$.
\end{theorem}
Jaeger et~al.~\cite{JLPT} introduced a variant of nowhere-zero flows called \emph{group connectivity}.
A digraph~$G = (V,E)$ is \emph{$\Gamma$-connected} if for every mapping $h \colon E \to \Gamma$
there is a $\Gamma$-flow~$\varphi$ on~$G$ that satisfies $\varphi(e) \ne h(e)$ for every edge~$e \in E$.
As we may choose the ``forbidden values'' $h \equiv 0$, every $\Gamma$-connected digraph has a nowhere-zero $\Gamma$-flow;
the converse is false, however. While the notion of group connectivity is stronger than
the existence of nowhere-zero flows, it is also more versatile, in particular the notion lends itself
more easily to proofs by induction. This is a consequence of an alternative definition of group connectivity:
instead of looking for a flow, we may check existence of a mapping $E \to \Gamma$ that has prespecified
surplus at each vertex.
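To make the surplus formulation concrete, here is a brute-force sketch (ours; exponential in the number of edges, so only for tiny examples). It uses the standard equivalence that a graph is ${\mathbb Z}_k$-connected iff every zero-sum prescription of vertex surpluses is realized by some nowhere-zero mapping on the edges.

```python
from itertools import product

def is_zk_connected_via_surpluses(vertices, edges, k):
    """Brute-force Z_k-connectivity via the surplus formulation (tiny graphs only)."""
    m = len(edges)
    achievable = set()
    for vals in product(range(1, k), repeat=m):  # nowhere-zero mappings only
        surplus = tuple(
            (sum(x for x, (u, w) in zip(vals, edges) if w == v)
             - sum(x for x, (u, w) in zip(vals, edges) if u == v)) % k
            for v in vertices)
        achievable.add(surplus)
    zero_sum = {b for b in product(range(k), repeat=len(vertices))
                if sum(b) % k == 0}
    return zero_sum <= achievable

# The directed triangle turns out to be Z_4-connected but not Z_3-connected.
V, E = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
assert is_zk_connected_via_surpluses(V, E, 4)
assert not is_zk_connected_via_surpluses(V, E, 3)
```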
It is easy to see that both the existence of a nowhere-zero $\Gamma$-flow
and $\Gamma$-connectivity do not change when we reverse the orientation of an edge of the digraph
(we only need to change the corresponding flow value from~$x$ to $-x$).
Thus, we will say that an undirected graph~$G$ has a nowhere-zero $\Gamma$-flow (is $\Gamma$-connected)
if some (equivalently every) orientation of~$G$ has a nowhere-zero $\Gamma$-flow (is $\Gamma$-connected).
Also, using the definition of group connectivity working with vertex surpluses, we observe that
group connectivity is monotone with respect to edge addition -- if $G$ is $\Gamma$-connected then
$G + e$ is $\Gamma$-connected for any edge $e$.
Some results on nowhere-zero flows extend to the stronger notion of group connectivity.
A celebrated recent example of this is the solution of Jaeger's
conjecture by Thomassen et al.~\cite{LTWZ}, but there are many more.
Thus, it is worthwhile to understand the properties of group connectivity in more detail.
\begin{figure}[h]
\centering
\includegraphics{graphs/graph_z5y_z6n.pdf}
\caption{A graph which is ${\mathbb Z}_5$ but not ${\mathbb Z}_6$-connected\label{fig:5y6n}}
\end{figure}
However, some nice properties of group-valued flows are not shared by group connectivity.
In particular, Jaeger et al.~\cite{JLPT} showed that there is a graph (Figure~\ref{fig:5y6n})
that is ${\mathbb Z}_5$-connected, but not ${\mathbb Z}_6$-connected.
This contrasts with the situation for flows: Suppose $G$ has a nowhere-zero flow in ${\mathbb Z}_5$ but not
in~${\mathbb Z}_6$. Using Theorem~\ref{Tutteint} twice, we find that $G$ has a nowhere-zero integer flow with values bounded in
absolute value by~$4$, but not one bounded by~$5$, a clear contradiction.
An analogue of Theorem~\ref{Tuttegroup} is more subtle. Indeed, in Section~3.1 of~\cite{JLPT} the authors mention:
``\dots\ we do not know of any ${\mathbb Z}_4$-connected graph which is not ${\mathbb Z}_2\times{\mathbb Z}_2$-connected, or vice versa.
Neither can we prove that such graphs do not exist.''
Our main result is the resolution to this natural question.
\begin{theorem}\label{main}~
\begin{enumerate}
\item
There is a graph that is ${\mathbb Z}_2^2$-connected but not ${\mathbb Z}_4$-connected.
\item
There is a graph that is ${\mathbb Z}_4$-connected but not ${\mathbb Z}_2^2$-connected.
\end{enumerate}
\end{theorem}
Because our result is computer-aided, we do not present a proof in the classical sense.
Instead, we present an overview of our approach and examples of graphs proving Theorem~\ref{main}
in the next section. In Section~\ref{sec:algo} we describe the algorithm we used to test group
connectivity, and we add some implementation notes in Section~\ref{sec:impl}.
\section{The Group Connectivity Conjecture and Results}
When looking for graphs certifying Theorem~\ref{main}, we only need to consider
graphs that do have a nowhere-zero ${\mathbb Z}_2^2$-flow (equivalently, by Theorem~\ref{Tuttegroup}, a
nowhere-zero ${\mathbb Z}_4$-flow). It is natural to examine cubic graphs (and their subdivisions) due to
the following theorem:
\begin{theorem}[Jaeger et al. \cite{JLPT}]
Let $G$ be a 4-edge-connected graph. Then $G$ is both ${\mathbb Z}_2^2$- and ${\mathbb Z}_4$-connected.
\end{theorem}
Contrary to the usual case, however, we
are not interested in snarks (cubic graphs that fail to be edge 3-colorable), as those
do not have a nowhere-zero ${\mathbb Z}_2^2$-flow.
We note that subdividing an edge has no effect on the existence of a nowhere-zero flow
(the new edge can have the same flow value as before). It does, however, make group connectivity
a stronger requirement -- in effect, we are forbidding one more value on an edge.
This suggests the following strategy:
\begin{enumerate}
\item pick an arbitrary\,/\,random 3-regular graph and
\item repeatedly subdivide an edge and check ${\mathbb Z}_2^2$- and ${\mathbb Z}_4$-connectivity.
\end{enumerate}
\begin{figure}[h]
\begin{center}
\includegraphics[width=6cm]{graphs/cube_sub_forb}
\caption{A subdivision of the cube that is ${\mathbb Z}_4$-connected but not ${\mathbb Z}_2^2$-connected, together with a
forbidden assignment for which no satisfying ${\mathbb Z}_2^2$-flow exists and names for the hypothetical flow values.}
\label{fig:cube_sub}
\label{fig:cube_sub_forb}
\end{center}
\end{figure}
This procedure yielded the graph in Figure~\ref{fig:cube_sub},
which appeared in the master's thesis of the second author~\cite{Lysi}.
This graph is ${\mathbb Z}_4$- but not ${\mathbb Z}_2^2$-connected.
Later, with a more efficient implementation (see the next section) by the first author,
we found graphs that are ${\mathbb Z}_2^2$- but not ${\mathbb Z}_4$-connected.
The smallest among them are (threefold) subdivisions of cubic graphs on 12 vertices
(for an example see Figure~\ref{fig:graphs}).
We also include a proof, not computer-aided, that the graph in Figure~\ref{fig:cube_sub}
is not ${\mathbb Z}_2^2$-connected:
\begin{theorem} \label{thm:non-conn}
The subdivision of the cube shown in Figure~\ref{fig:cube_sub} is not ${\mathbb Z}_2^2$-connected.
\end{theorem}
\begin{figure}[h]
\begin{center}
\includegraphics[width=6cm]{graphs/cube_sub_case_01}
\hskip 0pt plus 2fil
\includegraphics[width=6cm]{graphs/cube_sub_case_11}
\caption{Cases $\alpha = (0, 1)$ and $\alpha = (1,1)$ with fragments of hypothetical flows.}
\label{fig:cube_flows}
\end{center}
\end{figure}
\begin{proof}
We will show that for the assignment of forbidden values in Figure~\ref{fig:cube_sub_forb} there exists
no satisfying ${\mathbb Z}_2^2$-flow. First observe that the values $\alpha$ and $\beta$ are of the form $(.,1)$,
which implies $\mu_3 = (.,0)$. So $\mu_3$ is always $(0,0)$ and $\alpha = \beta$. Also $\mu_1 = (1, .)$.
The propagation of the flow values in the case $\alpha = (0,1)$ is shown in Figure~\ref{fig:cube_flows}, on the left.
As $\mu_2 \neq (0,0)$, we have $\delta = (1, .)$.
The value
$x+y$ is $1$ because $\gamma$ is forbidden to be $(0,0)$; but this forces $\mu_4 = (0,0)$, which is also forbidden.
In the case $\alpha=(1,1)$ (Figure~\ref{fig:cube_flows}, on the right),
we again combine the forbidden values to obtain the possible forms of $\mu_2$ and $\delta$, and also of $\omega_3$ and
$\omega_4$. In particular $\omega_3 = \omega_4 \not\in \set{ (0,0), (1,1)}$, so we may write
$\omega_3 = (u, u+1)$.
The edge $\gamma$ forbids $x = y = 0$ and the edge $\mu_4$
forbids $x=1$, $y=0$, so $y = 1$ and $\mu_4 = (x+1, x)$. So either $\omega_1 = (u+x+1, u+x+1)$
or $\omega_2 = (u + x, u + x)$ equals $(0,0)$. Hence no satisfying flow exists.
\end{proof}
\begin{figure}[h]
\begin{center}
\begin{minipage}{.45\textwidth}
\center
\small ${\mathbb Z}_4$: NO \hfil ${\mathbb Z}_2^2$: YES \\
\includegraphics[width=.95\textwidth]{graphs/graph_4n_22y_nice}
\end{minipage}
\hfil\hfil
\begin{minipage}{.45\textwidth}
\center
\small ${\mathbb Z}_4$: YES \hfil ${\mathbb Z}_2^2$: NO \\
\includegraphics[width=.95\textwidth]{graphs/graph_4y_22n_nice}
\end{minipage}
\end{center}
\caption{Graphs proving Theorem~\ref{main} \label{fig:graphs}}
\end{figure}
\FloatBarrier
\section{Group connectivity testing} \label{sec:algo}
We fix a digraph~$G=(V,E)$. We let $n$ be the number of vertices and
$m$ the number of edges of~$G$.
\begin{notation}
We say that a flow $\varphi\colon E \to \Gamma$ {\em satisfies} a mapping of forbidden values
$h\colon E \to \Gamma$ if for every $e \in E$ it holds $h(e) \neq \varphi(e)$.
\end{notation}
The most straightforward way of testing whether a graph is $\Gamma$-connected
is to use the definition: we enumerate all assignments $h\colon E \to \Gamma$ of
forbidden values and for each of them (try to) find a satisfying flow. Finding
a satisfying flow by itself is a hard problem: a cubic graph has a nowhere-zero
${\mathbb Z}_4$-flow (equivalently, a ${\mathbb Z}_2^2$-flow) if and only if it has an edge 3-coloring.
Testing the edge 3-colorability of cubic graphs was shown to be NP-complete by Holyer~\cite{Ho81}.
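The straightforward definition-based test just described can be sketched as follows (our illustration, enumerating all $|\Gamma|^m$ forbidden assignments by brute force; usable only for very small graphs):

```python
from itertools import product

def all_flows(vertices, edges, k):
    """All Z_k flows of a digraph, by brute force (exponential; demo only)."""
    return [vals for vals in product(range(k), repeat=len(edges))
            if all((sum(x for x, (u, w) in zip(vals, edges) if w == v)
                    - sum(x for x, (u, w) in zip(vals, edges) if u == v)) % k == 0
                   for v in vertices)]

def is_zk_connected(vertices, edges, k):
    flows = all_flows(vertices, edges, k)
    m = len(edges)
    # For every assignment h of forbidden values, some flow must avoid h everywhere.
    return all(any(all(f[i] != h[i] for i in range(m)) for f in flows)
               for h in product(range(k), repeat=m))

# Two vertices joined by three parallel arcs are Z_4-connected,
# while a directed 2-cycle is not Z_2-connected.
assert is_zk_connected([0, 1], [(0, 1), (0, 1), (0, 1)], 4)
assert not is_zk_connected([0, 1], [(0, 1), (1, 0)], 2)
```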
An easy observation about the structure of forbidden assignments is:
\begin{obs}\label{obs:forb_assgn}
Let $h, h'\colon E \to \Gamma$ be assignments of the forbidden values such that
$h' - h = \Delta$ is a flow. Then $h$ is satisfied by a flow $\varphi$ if and only
if $h'$ is satisfied by $\varphi + \Delta$.
\end{obs}
\begin{dfn}
We say that assignments of forbidden values $h, h'\colon E \to \Gamma$
are flow-equivalent, denoted $h \sim_f h'$, if and only if $h' - h$ is a flow.
\end{dfn}
Hence we can split all assignments of the forbidden values into equivalence classes
of $\sim_f$ and test the existence of a satisfying flow only for one member of each class.
This improves the algorithm from finding
$\abs{\Gamma}^{m}$ flows to finding $\abs{\Gamma}^{n - 1}$ flows
(because every equivalence class is uniquely determined by an assignment of forbidden values
which is $0$ outside of some fixed spanning tree).
A somewhat smarter algorithm -- used to find ${\mathbb Z}_2^2$-connected graphs which are not
${\mathbb Z}_4$-connected -- can be obtained by looking
at Observation~\ref{obs:forb_assgn} the other way around. It follows that each
equivalence class of $\sim_f$ is exactly a coset, obtained by adding some fixed member of it to all flows.
Therefore, if an equivalence class $[x]_{\sim_f}$ is satisfied, then for every flow $\varphi$ there is $h \in [x]_{\sim_f}$
such that $\varphi$ satisfies $h$.
\begin{theorem}
Fix a digraph $G$ and an abelian group $\Gamma$. Let $x\colon E \to \Gamma$ be
a forbidden mapping. The following statements are equivalent:
\begin{enumerate}
\item The forbidden mapping $x$ is satisfied.
\item Every $y \in [x]_{\sim_f}$ is satisfied.
\item For every flow $\varphi$, there exists $y \in [x]_{\sim_f}$ satisfied by $\varphi$.
\end{enumerate}
\end{theorem}
\begin{proof}
The equivalence of the first two statements is Observation~\ref{obs:forb_assgn}. For the third item, we fix a flow $\varphi_x$
satisfying $x$. Then the flow $\varphi$ satisfies the forbidden mapping $x - \varphi_x + \varphi$. Vice versa,
if $\varphi$ satisfies $y$, then $x$ is satisfied by $\varphi - y + x$.
\end{proof}
So we can fix a flow -- the constant-zero flow being the obvious candidate -- and
for each equivalence class we test whether one of its members is satisfied by it. This
increases the number of tests back to $\abs{\Gamma}^{m}$ but now each test is
just a simple comparison instead of an NP-complete problem.
We can also trade some
space for time: We keep a table of all equivalence classes, and instead of enumerating members
of all equivalence classes, we enumerate all assignments of forbidden values that are satisfied
by the given flow. For each of them we determine its equivalence class and mark that class as satisfied.
After enumerating them all we just check whether every equivalence class is satisfied.
This decreases the number of enumerated elements to $(\abs{\Gamma} - 1)^m$ but
consumes extra $2^{n - 1}$ bits of memory.
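In code, the class-marking idea looks as follows (our illustration: the canonical representative of a $\sim_f$ class is computed here by brute force over all flows, whereas the actual implementation uses the spanning-tree canonical form described below):

```python
from itertools import product

def is_zk_connected_by_classes(vertices, edges, k):
    """Sketch of the class-marking test, with the zero flow as the fixed flow."""
    m = len(edges)
    flows = [vals for vals in product(range(k), repeat=m)
             if all((sum(x for x, (u, w) in zip(vals, edges) if w == v)
                     - sum(x for x, (u, w) in zip(vals, edges) if u == v)) % k == 0
                    for v in vertices)]

    def rep(h):  # canonical member of the coset h + {flows}
        return min(tuple((h[i] + f[i]) % k for i in range(m)) for f in flows)

    all_classes = {rep(h) for h in product(range(k), repeat=m)}
    # Assignments satisfied by the zero flow are exactly those nonzero on every edge.
    marked = {rep(h) for h in product(range(1, k), repeat=m)}
    return marked == all_classes

# The directed triangle is Z_4-connected but not Z_3-connected.
V, E = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
assert is_zk_connected_by_classes(V, E, 4)
assert not is_zk_connected_by_classes(V, E, 3)
```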
Because we were testing subdivisions of cubic graphs, we would like to optimize the
cases of once- and twice-subdivided edges. Without any additional optimization, each
subdivision of an edge increases the number of edges by one and hence slows down
the described method by a factor of $\abs{\Gamma} - 1$. But
a subdivision creates an edge 2-cut.
Without loss of generality we may assume that edges of a 2-cut -- denote them $e_1$ and
$e_2$ -- are oriented in opposite
directions. The value of any flow must be the same on both of them. Hence swapping the forbidden
values for edges $e_1$ and $e_2$ does not change the set of satisfying flows. Moreover,
we may assume that forbidden values for $e_1$ and $e_2$ are different because it
is more restrictive than the case when they are the same. This reduces the number of
cases from $\abs{\Gamma}^2$ to ${\abs{\Gamma} \choose 2}$ (i.\,e.\ from 16 to 6 for
groups of order four).
A double subdivision is in our case even simpler: we have
three forbidden values, and again the most restrictive case is when they are all distinct.
So such a doubly subdivided edge has only one possible value (in our case, where $|\Gamma|=4$).
Now we need to plug
these observations into the above-described algorithm.
Observe that the equivalence classes used in the algorithm do not have
to be the equivalence classes of $\sim_f$:
we can use the classes of any equivalence $\sim$ which is a congruence with respect
to satisfiability and which is coarser than $\sim_f$. Being a congruence with
respect to satisfiability means that either all elements of an equivalence class
are satisfiable or none of them is. Being coarser than $\sim_f$ ensures
that $[x]_{\sim_f} \subseteq [x]_{\sim}$, and so if the class $[x]_{\sim}$ is
satisfiable, then for every flow $\varphi$ there is some $y \in [x]_{\sim}$
satisfied by $\varphi$.
Moreover, we can throw away equivalence classes that are satisfied whenever some other class is satisfied
(of course, without creating cyclic dependencies).
E.g.~if we have a 2-cut with both forbidden values equal to $1$, then this case is implied
by the case with value $1$ and any other value.
\begin{notation}
We let $[A \to B]$ denote the set of all functions from $A$ to $B$.
\end{notation}
We summarize our approach in Algorithm~\ref{alg} and Theorem~\ref{thm:alg}.
We also need to work with equivalence classes in the algorithm, so we represent the equivalence
with a throw-away class as a function $$\mathcal C\colon [E \to \Gamma] \to X \uplus \set{\ensuremath{\text{\tt NULL}}}$$
which assigns to each forbidden mapping an object representing its class (in a practical
implementation the elements of $X$ are just small integers), with $\ensuremath{\text{\tt NULL}}$ representing the throw-away class.
The function $\mathcal C$ we used is obtained from $\sim_f$ by the following modifications:
for each 2-cut we remove all classes (i.\,e.\ we set the values of their elements to \ensuremath{\text{\tt NULL}}) that forbid the same value
on both edges of the cut, and merge classes which differ only by swapping the
values on the edges of the cut. For doubly subdivided edges we remove all classes that
do not forbid three different values on each doubly subdivided edge and then merge all classes
that differ only by the order of the forbidden values on the given subdivided edge. We note that the optimization
for doubly subdivided edges is essentially equivalent to removing the given subdivided edge:
\begin{obs}
If a graph $G$ contains an edge subdivided $\abs{\Gamma}$ times, it cannot be $\Gamma$-connected.
If it contains an edge $e$ subdivided $\abs{\Gamma}-1$ times, then it is $\Gamma$-connected if and only if
$G - e$ is $\Gamma$-connected.
\end{obs}
\def\ensuremath{\text{\tt true}}{\ensuremath{\text{\tt true}}}
\def\ensuremath{\text{\tt false}}{\ensuremath{\text{\tt false}}}
\begin{algorithm}
\KwIn{graph $G$}
\KwOut{{\tt YES} if $G$ is $\Gamma$-connected, and {\tt NO} otherwise}
\BlankLine
Pick a flow $\varphi_0$\;
Create array $a$ indexed by elements of $X$\;
$a[*] \leftarrow \ensuremath{\text{\tt false}}$\;
\BlankLine
\For{$\forall h$ {\rm satisfied by} $\varphi_0$ {\rm such that} $\mathcal C(h) \neq \ensuremath{\text{\tt NULL}}$}{
$a[\mathcal C(h)] \leftarrow \ensuremath{\text{\tt true}}$\;
}
\BlankLine
\For{$\forall x \in X$}{
\lIf{$a[x] = \ensuremath{\text{\tt false}}$}{\Return{{\tt NO}}}
}
\BlankLine
\Return{{\tt YES}}\;
\caption{Group connectivity testing \label{alg}}
\end{algorithm}
\begin{theorem} \label{thm:alg}
Fix an abelian group $\Gamma$, a digraph $G$, and a function
$\mathcal C\colon [E \to \Gamma] \to X \uplus \set{\ensuremath{\text{\tt NULL}}}$ such that:
\begin{enumerate}
\item $X \subseteq \mathcal C [[E \to \Gamma]]$, \label{item:dom}
\item for all $h\colon E \to \Gamma$, if $\mathcal C(h) = \ensuremath{\text{\tt NULL}}$, then there exists $h'\colon E \to \Gamma$
such that if $h'$ is satisfied then $h$ is also satisfied
and $\mathcal C(h') \neq \ensuremath{\text{\tt NULL}}$, \label{item:almost_equiv}
\item for all $h, h'\colon E \to \Gamma$ if $\mathcal C(h) = \mathcal C(h')$ then either both are satisfied
or none of them is, and \label{item:congruence}
\item for all $h\colon E \to \Gamma$ and all $\Gamma$-flows $\varphi$, it holds that
$\mathcal C(h) = \mathcal C(h + \varphi)$.
\label{item:coarser}
\end{enumerate}
Then Algorithm~\ref{alg} correctly decides whether $G$ is $\Gamma$-connected.
\end{theorem}
\begin{proof}
Obviously, Algorithm~\ref{alg} terminates.
First we prove that if the graph is $\Gamma$-connected, then the
algorithm outputs {\tt YES}. By contradiction, let $x \in X$ be the element that forced the algorithm to output
{\tt NO}. Let $P = \mathcal C^{-1}(x)$ be the set of preimages of $x$. It is nonempty due to Assumption~\ref{item:dom},
so we can fix some $p \in P$. The mapping $p$ is satisfied by some flow $\varphi_p$ because
$G$ is $\Gamma$-connected. The mapping $p' = p - \varphi_p + \varphi_0$ is satisfied by the flow $\varphi_0$
(Observation~\ref{obs:forb_assgn}). Also $\mathcal C(p') = \mathcal C(p) = x$ (Assumption~\ref{item:coarser}),
so the mapping $p'$ was enumerated by the algorithm, which set $a[x]$ to \ensuremath{\text{\tt true}}, a contradiction.
Now we prove that if the algorithm outputs {\tt YES}, the graph $G$ is $\Gamma$-connected.
By contradiction, let $p\colon E \to \Gamma$ be a mapping witnessing that $G$ is not $\Gamma$-connected.
If $\mathcal C(p) = \ensuremath{\text{\tt NULL}}$, Assumption~\ref{item:almost_equiv} gives us $p'$ which is also unsatisfied and
satisfies $\mathcal C(p') \neq \ensuremath{\text{\tt NULL}}$; otherwise we take $p' = p$.
Because $\mathcal C(p') \neq \ensuremath{\text{\tt NULL}}$, none of the mappings in the set $\mathcal C^{-1}(\mathcal C(p'))$ is satisfied
(Assumption~\ref{item:congruence}). Hence $a[\mathcal C(p')]$ was never set to $\ensuremath{\text{\tt true}}$, and the algorithm must have returned
{\tt NO}, a contradiction.
\end{proof}
\section{Implementation notes} \label{sec:impl}
Because a large part of our work consisted of creating programs for testing group connectivity, we would like
to add some implementation notes. Readers interested only in the theoretical results may safely skip
this section.
Our first implementation of the straightforward algorithm
was written by the second author during her master's thesis work. It consisted of
a C++ implementation that was highly specialized for the graphs tested (subdivisions of the cube),
and a CSP implementation in SICStus Prolog to double-check the results. Both of these implementations
required preprocessed input, which made them less than ideal to work with, and they were also not
fast enough for searching through larger graphs.
Hence we have written a new implementation based on Algorithm~\ref{alg}
in Python~2, built on the Sage libraries, which already contain
a lot of tools for working with general graphs.\footnote{
We used version 2 of Python because Sage was not yet ported to Python 3.
} Because Python is an interpreted language and as such is slower,
we chose to implement the performance-critical parts of the code in C++, binding them into Python using
Cython.\footnote{Not to be confused with CPython: CPython is the reference implementation
of the Python interpreter, whereas Cython is an optimizing compiler for Python which compiles
Python into C (or C++) and then into native code using a standard compiler such as gcc.}
At the end of the previous section we described the function $\mathcal C$
that we use, but we did not specify
how to calculate it. The main idea is to fix a spanning tree and transform any forbidden mapping
into an equivalent one which is zero outside this tree. To do so, we keep a precalculated list of
elementary flows. We also need to take care of the merged classes created by (doubly) subdivided edges.
To doubly subdivided edges we always assign the only interesting forbidden values
(and remove them from the generation of forbidden mappings). For single subdivisions we keep a list
of six interesting assignments and assign subdivided edges only values from this list.
The effect of these optimizations is shown in Table~\ref{fig:timeit}.
\begin{table}
\centering
\caption{Time required to test cube subdivided on 2 edges (all 9 possibilities).}
\label{fig:timeit}
\smallskip
\begin{tabular}{lc} \hline
Algorithm & Time [s] \\ \hline
Simple (in Python) & 48.8 \\
Smart (in C++) & 3.65 \\
Smart with subdivision optimization & 0.229 \\ \hline
\end{tabular}
\smallskip
\small Measured on Intel i5 5257U.
\end{table}
To double-check our results we also implemented the straightforward algorithm in pure Python; it is called the Simple algorithm in Table~\ref{fig:timeit}. It
simply checks the definition: for every forbidden assignment (fixed outside of
a spanning tree) it tries to find a satisfying flow (from a precomputed list of flows).
A repository with both implementations may be found at our department's GitLab
$$\text{\url{https://gitlab.kam.mff.cuni.cz/radek/group-connectivity-pub}.}$$
\section{Conclusions and open problems}
We have found graphs that show that ${\mathbb Z}_2^2$- and ${\mathbb Z}_4$-connectivity are independent notions.
All of the graphs that we have found to certify this have vertices of degree~2. Therefore, it is natural to ask
whether such graphs exist that are 3-edge-connected (both in the cubic and in the general case).
Another challenging task is to find a proof that does not use computers. The main obstacle is to find
efficient techniques to show that a particular graph is $\Gamma$-connected. Proving the negation
is much easier: we guess forbidden values~$h\colon E \to \Gamma$ and then show the non-existence of a satisfying flow
(see Theorem~\ref{thm:non-conn}).
Our final question is the complexity of testing group connectivity. The algorithm we have
developed is fast enough for our purposes; the required time is exponential, however.
To test for group connectivity seems harder than to test for the existence of a nowhere-zero flow, which suggests
the problem is NP-hard. In fact, we believe it is $\Pi^p_2$-complete.
Circumstantial evidence for
$\Pi^p_2$-completeness comes from the somewhat dual notions of choosability and group list-coloring.
Both of these problems are known to be $\Pi^p_2$-complete -- proved by Erd\H{o}s et al.~\cite{ERT}
for choosability, and by Kráľ~\cite{Dan05,Dan04} for group list-colorings.
Of those two, group list-coloring is a closer match to the dual of group connectivity, but the graphs used in
Kráľ's proofs are non-planar, and we found no way to work around this.
So for testing group connectivity we do not know of any hardness results.
\section{Acknowledgements}
The first and the last author were partially supported by GAČR grant 16-19910S.
The first author was partially supported by the Charles University, project GA UK No.~926416.
\bibliographystyle{amsplain}
| {
"timestamp": "2017-11-13T02:10:46",
"yymm": "1711",
"arxiv_id": "1711.03895",
"language": "en",
"url": "https://arxiv.org/abs/1711.03895",
"abstract": "We answer a question on group connectivity suggested by Jaeger et al. [Group connectivity of graphs -- A nonhomogeneous analogue of nowhere-zero flow properties, JCTB 1992]: we find that $\\mathbb Z_2^2$-connectivity does not imply $\\mathbb Z_4$-connectivity, neither vice versa. We use a computer to find the graphs certifying this and to verify their properties using non-trivial enumerative algorithm. While the graphs are small (the largest has 15 vertices and 21 edges), a computer-free approach remains elusive.",
"subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "Group Connectivity: $\\mathbb Z_4$ v. $\\mathbb Z_2^2$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806512500485,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7077274173976263
} |
https://arxiv.org/abs/2301.06242 | Periodic dimensions and some homological properties of eventually periodic algebras | For an eventually periodic module, we have the degree and the period of its first periodic syzygy. This paper studies the former under the name \lq\lq periodic dimension\rq\rq. We give a bound for the periodic dimension of an eventually periodic module with finite Gorenstein projective dimension. We also provide a method of computing the Gorenstein projective dimension of an eventually periodic module under certain conditions. Besides, motivated by recent results of Dotsenko, Gélinas and Tamaroff and of the author, we determine the bimodule periodic dimension of an eventually periodic Gorenstein algebra. Another aim of this paper is to obtain some of the basic homological properties of eventually periodic algebras. We show that a lot of homological conjectures hold for this class of algebras. As an application, we characterize eventually periodic Gorenstein algebras in terms of bimodules Gorenstein projective dimensions. | \section{Introduction}
Throughout this paper, all rings are assumed to be associative and unital, and $k$ denotes a field. By a module, we mean a left module unless otherwise stated.
Over a left Noetherian semiperfect ring $R$, any finitely generated module $M$ admits a minimal projective resolution
\begin{align*}
\cdots \rightarrow P_n \xrightarrow{d_n} P_{n-1} \xrightarrow{d_{n-1}} P_{n-2} \rightarrow \cdots \rightarrow P_0 \xrightarrow{d_0} M \rightarrow 0
\end{align*} with each $P_i$ finitely generated projective.
For each $n \geq 0$, the {\it $n$-th syzygy} $\Omega_{R}^{n}(M)$ of $M$ is defined by the kernel of the differential $d_{n-1} : P_{n-1} \rightarrow P_{n-2}$.
It is understood that $\Omega_{R}^{0}(M)=M$.
Recall that $M$ is {\it periodic} if $\Omega_R^{p}(M)$ is isomorphic to $M$ as $R$-modules for some $p>0$.
The least such $p$ is called the {\it period} of $M$.
We say that $M$ is {\it eventually periodic} if $\Omega_R^{n}(M)$ is periodic for some $n \geq 0$.
For an eventually periodic $R$-module $M$, we obtain the degree $n$ and the period $p$ of its first periodic syzygy $\Omega_R^{n}(M)$.
When $R$ is a (both left and right) Noetherian local ring and $M$ has finite virtual projective dimension, Avramov \cite[Theorem 4.4]{Avramov_1989} gave an upper bound for the degree $n$ and showed that the period $p$ is either $1$ or $2$.
On the other hand, using the notion of Tate cohomology, the author \cite[Theorem 2.4]{Usui_2022} obtained a result on the period $p$ without additional assumptions on $R$ and $M$; however, it turns out that Tate cohomology gives no information on the degree $n$.
Moreover, there are further results on the values of $n$ and $p$: for example, \cite[Theorem 1.6]{Avramov_1989_proceedings}, \cite[the proof of Corollary 6.4]{Dotsenko-Gelinas-Tamaroff_2023}, \cite[Theorem 1.2]{Gasharov-Peeva_1990}
and \cite[Proposition 4.3]{Usui21_No02}.
We note that eventually periodic modules are examined in the literature such as \cite{Bergh_2006,Croll_2013,kupper2010two,Eisenbud_1980}.
In this paper, we explore the degrees of the first periodic syzygies of (not necessarily finitely generated) eventually periodic modules over a left perfect ring $R$ (see Definition \ref{def_1} for the definition of the modules).
To do this, we will introduce the notion of {\it periodic dimensions}.
The periodic dimension $\operatorname{per.dim}_R M$ of an $R$-module $M$ is defined as the infimum of the degrees $n$ of periodic syzygies $\Omega_R^{n}(M)$ of $M$.
It is obvious that $M$ is eventually periodic if and only if $\operatorname{per.dim}_R M < \infty$.
First, we discuss the behavior of periodic dimension with respect to direct sums.
Let $\{ M_i \}_{i \in I}$ be a family of $R$-modules. It then turns out that the following equality does not hold in general:
\[
\operatorname{per.dim}_R \bigoplus_{i\in I} M_i = \sup\{\, \operatorname{per.dim}_R M_i \mid i \in I\, \}.
\]
In view of this, we give a condition under which the equality does hold (see Corollaries \ref{claim_47} and \ref{claim_48}).
Next, we use Gorenstein projective dimensions to study periodic dimensions.
As the first main result of this paper, we show that the periodic dimension of an eventually periodic module $M$ of finite Gorenstein projective dimension $r$ equals either $r$ or $r+1$ (see Theorem \ref{claim_6}).
Moreover, we determine when the former case occurs under the additional assumption that $M$ is finitely generated over a left artin ring (see Corollary \ref{claim_49}).
Also, in the case of a Noetherian semiperfect ring, we give a result analogous to the above main result (see Theorem \ref{claim_50}).
Finally, we investigate the bimodule periodic dimension of a finite dimensional eventually periodic algebra $\L$ (see Definition \ref{def_4} for the definition of $\L$).
To start with, applying the results in the preceding paragraph, we give the second main result of this paper, which determines the bimodule periodic dimension of $\L$ in case $\L$ is Gorenstein (see Theorem \ref{claim_10}).
It is worth noting that our third main result stated below illustrates why we impose such a condition on $\L$ (cf.~Remark \ref{remark_1}).
We also exhibit that if $\L$ is Gorenstein, and if the semisimple quotient $\L/J(\L)$ is separable, where $J(\L)$ denotes the Jacobson radical of $\L$, then the bimodule periodic dimension of $\L$ can be written as the periodic dimension of $\L/J(\L)$ as a left and as a right $\L$-module (see Theorem \ref{claim_45}).
This paper also focuses on a homological aspect of finite dimensional eventually periodic algebras.
It turns out that many homological conjectures such as the periodicity conjecture, the finitistic dimension conjecture, the Gorenstein symmetric conjecture and the Auslander conjecture hold for this class of algebras (see Propositions \ref{claim_3}, \ref{claim_18}, \ref{claim_19} and \ref{claim_28}).
This enables us to obtain the third main result of this paper that a finite dimensional eventually periodic algebra is Gorenstein if and only if its bimodule Gorenstein projective dimension is finite (see Theorem \ref{claim_38}).
We point out that there is another characterization of finite dimensional eventually periodic Gorenstein algebras (see Proposition \ref{claim_39}).
This paper is organized as follows.
In Section \ref{Preliminaries}, we recall the definitions and related facts that are used in this paper.
In Section \ref{Periodic dimension}, we define and study the periodic dimensions for modules.
In Section \ref{Homological properties of eventually periodic algebras}, we examine finite dimensional eventually periodic algebras from a homological point of view.
\subsection*{Conventions and notation}
Let $R$ be a ring and $M$ an $R$-module.
We denote by $R\mbox{-}\mathrm{Mod}$ (resp.\ $R\mod$) the category of (resp.\ finitely generated) $R$-modules, by $\operatorname{gl.dim} R$ the global dimension of $R$, and by $\operatorname{proj.dim}_{R}M$ (resp.\ $\operatorname{inj.dim}_{R}M$) the projective (resp.\ injective) dimension of $M$.
We define four full subcategories of $R\mbox{-}\mathrm{Mod}$ as follows:
\begin{align*}
&R\mbox{-}\mathrm{Proj} := \left\{\, M \in R\mbox{-}\mathrm{Mod} \mid M \mbox{ is projective} \,\right\}; \\
&R\mbox{-}\mathrm{Fpd} := \left\{\, M \in R\mbox{-}\mathrm{Mod} \mid \operatorname{proj.dim}_R M < \infty \,\right\}; \\
&R\mbox{-}\mathrm{Mod}_{\mathcal{P}} := \left\{\, M \in R\mbox{-}\mathrm{Mod} \mid M \mbox{ has no non-zero direct summand in $R\mbox{-}\mathrm{Proj}$} \,\right\}; \\
&R\mbox{-}\mathrm{Mod}_{\rm fpd} := \left\{\, M \in R\mbox{-}\mathrm{Mod} \mid M \mbox{ has no non-zero direct summand in $R\mbox{-}\mathrm{Fpd}$} \,\right\}.
\end{align*}
Similarly, one defines the four full subcategories $R\mbox{-}\mathrm{proj},$ $R\mbox{-}\mathrm{fpd},$ $R\mbox{-}\mathrm{mod}_{\mathcal{P}}$ and $R\mbox{-}\mathrm{mod}_{\rm fpd}$ of $R\mod$.
For a collection $\mathcal{X}$ of $R$-modules, we denote by ${}^{\perp}\mathcal{X}$ the full subcategory of $R\mbox{-}\mathrm{Mod}$ given by
\begin{align*}
{}^{\perp}\mathcal{X} := \left\{\, M \in R\mbox{-}\mathrm{Mod} \mid \operatorname{Ext}\nolimits_{R}^{i}(M, X) = 0 \mbox{ for all } i > 0 \mbox{ and all } X \in \mathcal{X} \,\right\}.
\end{align*}
By a complex, we mean a chain complex
\begin{align*}
X_\bullet : \quad \cdots \rightarrow
X_{i+1} \xrightarrow{d_{i+1}}
X_{i} \xrightarrow{d_{i}}
X_{i-1} \xrightarrow{d_{i-1}}
X_{i-2} \rightarrow \cdots.
\end{align*}
For each $i \in \mathbb{Z}$, we denote by $\Omega_{i}(X_\bullet)$ the cokernel of the differential $d_{i+1}: X_{i+1} \rightarrow X_{i}$.
\section{Preliminaries} \label{Preliminaries}
In this section, we recall some basic facts related to Gorenstein projective modules, Gorenstein projective dimensions, and Gorenstein rings.
\subsection{Gorenstein projective modules}
Let $R$ be a ring.
Recall that an acyclic complex $T_\bullet$ of projective $R$-modules is {\it totally acyclic} if $\operatorname{Hom}\nolimits_{R}(T_\bullet, Q)$ is acyclic for any $Q \in R\mbox{-}\mathrm{Proj}$.
An $R$-module $M$ is called {\it Gorenstein projective} \cite{Enochs-Jenda_1995_MathZ} if there exists a totally acyclic complex $T_\bullet$ such that $\Omega_{0}(T_\bullet) \cong M$ in $R\mbox{-}\mathrm{Mod}$.
For example, projective modules are Gorenstein projective.
As will be seen in the next subsection, if $R$ is a Noetherian ring, then finitely generated Gorenstein projective $R$-modules are precisely {\it totally reflexive $R$-modules} in the sense of \cite{Avramov-Martsinkovsky_2002}.
Let $n$ be a positive integer.
Following \cite[Definition 2.1]{Bennis-Mahdou_2009}, we say that the $R$-module $M$ is {\it $n$-strongly Gorenstein projective} if there exists an exact sequence of $R$-modules
\begin{align*}
0 \rightarrow M \rightarrow P_{n-1} \rightarrow \cdots \rightarrow P_{0} \rightarrow M \rightarrow 0
\end{align*}
with each $P_{i}$ projective such that $\operatorname{Hom}\nolimits_{R}(-, Q)$ leaves the sequence exact whenever $Q$ is a projective $R$-module.
Recall that the {\it stable category} $R\mbox{-}\underline{\mathrm{Mod}}$ of $R\mbox{-}\mathrm{Mod}$ is the category whose objects are the same as $R\mbox{-}\mathrm{Mod}$ and whose morphisms are given by $\underline{\operatorname{Hom}\nolimits}_{R}(M, N) := \operatorname{Hom}\nolimits_{R}(M, N)/\mathcal{P}(M, N)$,
where $\mathcal{P}(M, N)$ is the group of morphisms from $M$ to $N$ factoring through a projective module.
We then observe that $M$ is $n$-strongly Gorenstein projective if and only if $\Omega_{n}(P_\bullet) \cong M$ in $R\mbox{-}\underline{\mathrm{Mod}}$ for some (hence for any) projective resolution $P_\bullet \rightarrow M$ of $M$, and $\operatorname{Ext}\nolimits_{R}^{i}(M, P) = 0$ for all $i$ with $1 \leq i \leq n$ and all $P \in R\mbox{-}\mathrm{Proj}$ (cf.~\cite[Proposition 2.2.17]{X-WChen_2017}).
The category $R\mbox{-}\mathrm{GProj}$ of Gorenstein projective $R$-modules is a Frobenius category whose projective objects are precisely projective $R$-modules, so that the stable category $R\mbox{-}\underline{\mathrm{GProj}}$ of $R\mbox{-}\mathrm{GProj}$ carries a structure of a triangulated category (cf.~\cite[Proposition 2.1.11]{X-WChen_2017}).
If $\Sigma$ denotes the shift functor on $R\mbox{-}\underline{\mathrm{GProj}}$, then any totally acyclic complex $T_\bullet$ associated with a Gorenstein projective $R$-module $M$ has the property that $\Sigma^{i}M = \Omega_{-i}(T_\bullet)$ for all $i \in \mathbb{Z}$.
Moreover, we know by \cite[Lemma 2.3.4]{Veliche_2006} that $\Sigma^{-i}M = \Omega_{i}(P_\bullet)$ for any $i \geq 0$ and any projective resolution $P_\bullet \rightarrow M$ of $M$.
On the other hand, one observes that $R\mbox{-}\mathrm{GProj} \subseteq{{}^{\perp}(R\mbox{-}\mathrm{Proj})} = {}^{\perp}(R\mbox{-}\mathrm{Fpd})$.
We denote by $n\mbox{-}R\mbox{-}\mathrm{SGProj}$ the category of $n$-strongly Gorenstein projective $R$-modules.
It follows from \cite[Proposition 2.5]{Bennis-Mahdou_2009} that the following inclusions hold for each $n>0$:
\begin{align*}
R\mbox{-}\mathrm{Proj} \subseteq{1\mbox{-}R\mbox{-}\mathrm{SGProj}} \subseteq{n\mbox{-}R\mbox{-}\mathrm{SGProj}} \subseteq{R\mbox{-}\mathrm{GProj}}.
\end{align*}
In case $R$ is a Noetherian ring, one can deduce results analogous to the above for $R\mbox{-}\mathrm{Gproj}$ and $n\mbox{-}R\mbox{-}\mathrm{SGproj}$, where $R\mbox{-}\mathrm{Gproj}$ (resp.~$n\mbox{-}R\mbox{-}\mathrm{SGproj}$) stands for the category of finitely generated Gorenstein projective (resp.~$n$-strongly Gorenstein projective) $R$-modules.
\subsection{Gorenstein projective dimensions} \label{GorensteinProjectiveDimensions}
Let $R$ be a ring.
Following \cite[Definition 2.8]{HenrikHolm04}, we define the {\it Gorenstein projective dimension} $\operatorname{Gpd}_{R}M$ of an $R$-module $M$ to be the infimum of the lengths $n$ of exact sequences of $R$-modules
\begin{align*}
0 \rightarrow G_n \rightarrow \cdots \rightarrow G_1 \rightarrow G_0 \rightarrow M \rightarrow 0
\end{align*}
with each $G_i$ Gorenstein projective.
From the definition, there is an inequality
\begin{align*}
\operatorname{Gpd}_R M \leq \operatorname{proj.dim}_R M.
\end{align*}
We know from \cite[Proposition 2.27]{HenrikHolm04} that the equality holds if $M$ has finite projective dimension.
Moreover, it was proved in \cite[Theorem 2.20]{HenrikHolm04} that if $M$ has finite Gorenstein projective dimension, then we have
\begin{align} \label{eq_2}
\operatorname{Gpd}_{R}M = \sup \left\{ i \geq 0 \mid \operatorname{Ext}\nolimits_{R}^{i}(M, Q) \not= 0 \mbox{ for some } Q \in R\mbox{-}\mathrm{Proj} \right\}.
\end{align}
Now, suppose that $R$ is a Noetherian ring.
Recall from \cite{Avramov-Martsinkovsky_2002} that the {\it Gorenstein dimension} $\operatorname{G\mbox{-}dim}_{R}M$ of a finitely generated $R$-module $M$ is defined to be the infimum of the length $n$ of an exact sequence of finitely generated $R$-modules \[ 0 \rightarrow X_n \rightarrow \cdots \rightarrow X_1 \rightarrow X_0 \rightarrow M \rightarrow 0 \] with each $X_i$ totally reflexive.
It was observed in \cite[2.4.1]{Veliche_2006} that
\begin{align*}
\operatorname{G\mbox{-}dim}_{R}M = \operatorname{Gpd}_{R}M
\end{align*}
for any $M\in R\mod$.
\subsection{Gorenstein rings} \label{Preliminaries_GorensteinRings}
A Noetherian ring $R$ is called {\it Gorenstein} (or {\it Iwanaga-Gorenstein}) if $R$ has finite injective dimension as a left and as a right $R$-module (cf.~\cite{Iwanaga_1980,Buch86}).
It follows from \cite[Lemma A]{Zaks69} that any Gorenstein ring $R$ satisfies $\operatorname{inj.dim}_{R}R = \operatorname{inj.dim}_{R^{\rm op}}R$.
We hence call a Gorenstein ring $R$
with $\operatorname{inj.dim}_{R}R = d$ a {\it $d$-Gorenstein} ring.
Note that $0$-Gorenstein rings are just self-injective rings.
The following two results are due to Veliche \cite[2.4.2]{Veliche_2006} and Dotsenko, G\'{e}linas and Tamaroff \cite[Proposition 2.4]{Dotsenko-Gelinas-Tamaroff_2023}.
\begin{proposition}[Veliche] \label{Veliche_2006_2.4.2}
Let $R$ be a Noetherian ring and $d$ a non-negative integer.
Then the following conditions are equivalent.
\begin{enumerate}
\item $\operatorname{inj.dim}_{R}R \leq d$ and $\operatorname{inj.dim}_{R^{\rm op}}R \leq d$.
\item $\operatorname{Gpd}_{R} M \leq d$ for any $R$-module $M$.
\item $\operatorname{G\mbox{-}dim}_{R} M \leq d$ for any finitely generated $R$-module $M$.
\end{enumerate}
\end{proposition}
\begin{proposition}[Dotsenko-Gélinas-Tamaroff]
\label{Dotsenko-Gelinas-Tamaroff_2023_Proposition 2.4}
Let $\L$ be an artin algebra with Jacobson radical $J(\L)$.
Then $\L$ is a Gorenstein algebra if and only if $\L/J(\L)$ has finite Gorenstein dimension as a $\L$-module.
In this case, we have $\operatorname{G\mbox{-}dim}_{\L}\L/J(\L) = \operatorname{inj.dim}_{\L}\L$.
\end{proposition}
It follows from Proposition \ref{Veliche_2006_2.4.2} that for a self-injective ring $R$, we have
\begin{align*}
R\mbox{-}\mathrm{GProj} = R\mbox{-}\mathrm{Mod} \mbox{\quad and \quad} R\mbox{-}\mathrm{Gproj} = R\mod.
\end{align*}
Let $\L$ be a finite dimensional $d$-Gorenstein algebra (over the field $k$).
Then \cite[Lemma 6.1]{benson_iyengar_krause_pevtsova_2020} implies that the enveloping algebra $\Lambda^\textrm{e} := \L \otimes_k \Lambda^{\rm op}$ of $\L$ is a finite dimensional $(2d)$-Gorenstein algebra.
We note that for a finite dimensional algebra $\L$, $\Lambda^\textrm{e}$-modules can be identified with $\L$-bimodules on which the ground field $k$ acts centrally.
\section{Periodic dimensions} \label{Periodic dimension}
This section is divided into two subsections.
In the first, we introduce the periodic dimension of a module and present some of its basic properties.
Moreover, we inspect the periodic dimension of an eventually periodic module having finite Gorenstein projective dimension.
In the second, we work with finite dimensional eventually periodic algebras and examine their bimodule periodic dimensions.
\subsection{The case of modules}
Throughout this subsection, let $R$ be a left perfect ring unless otherwise stated.
This subsection is devoted to defining and studying the periodic dimensions for $R$-modules.
We start with a quick review of syzygies.
Recall that the {\it syzygy} $\Omega_{R}(M)$ of an $R$-module $M$ is the kernel of a projective cover $P \rightarrow M$ with $P\in R\mbox{-}\mathrm{Proj}$.
We put $\Omega_{R}^{0}(M) := M$ and $\Omega_{R}^{n}(M) := \Omega_{R}(\Omega_{R}^{n-1}(M))$, called the {\it $n$-th syzygy} of $M$, for $n > 0$.
Observe that for any family $\{ M_i \}_{i \in I}$ of $R$-modules, there exists an isomorphism in $R\mbox{-}\mathrm{Mod}$
\begin{align} \label{eq_9}
\Omega_R\left(\bigoplus_{i\in I} M_i\right) \cong \bigoplus_{i\in I} \Omega_R(M_i).
\end{align}
On the other hand, there exists a well-defined functor $\Omega_{R} : R\mbox{-}\underline{\mathrm{Mod}}\rightarrow R\mbox{-}\underline{\mathrm{Mod}}$ sending a module $M$ to its syzygy $\Omega_{R}(M)$.
If $R$ is a left perfect ring that is left Noetherian, then $\Omega_R$ restricts to an endofunctor on the stable category $R\mbox{-}\underline{\mathrm{mod}}$ of $R\mod$.
We now define eventually periodic modules.
\begin{definition} \label{def_1}
An $R$-module $M$ is said to be {\it periodic} if there exists an integer $p>0$ such that $\Omega_{R}^{p}(M) \cong M$ in $R\mbox{-}\mathrm{Mod}$.
The smallest $p>0$ with this property is called the {\it period} of $M$.
We call $M$ {\it eventually periodic} if there exists an integer $n \geq 0$ such that $\Omega_{R}^{n}(M)$ is periodic.
\end{definition}
In this paper, an {\it $(n, p)$-eventually periodic module} means an eventually periodic module whose $n$-th syzygy is the first periodic syzygy of period $p$. If $n=0$, such an eventually periodic module is called {\it $p$-periodic}.
For example, modules of finite projective dimension $n$ are $(n+1, 1)$-eventually periodic.
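To make this claim explicit, here is a short verification using only the definitions above.
If $\operatorname{proj.dim}_R M = n$, then a minimal projective resolution of $M$ has length $n$, so that
\begin{align*}
\Omega_R^{n+1}(M) = 0 \mbox{\quad and \quad} \Omega_R^{1}\!\left(\Omega_R^{n+1}(M)\right) = 0 \cong \Omega_R^{n+1}(M),
\end{align*}
that is, $\Omega_R^{n+1}(M)$ is $1$-periodic.
On the other hand, for $0 \leq i \leq n$, the syzygy $\Omega_R^{i}(M)$ is non-zero of projective dimension $n-i$, and $\Omega_R^{q}\!\left(\Omega_R^{i}(M)\right) = \Omega_R^{q+i}(M)$ has projective dimension $n-i-q$ for $0 < q \leq n-i$ and vanishes for $q > n-i$; hence no earlier syzygy is periodic.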
We now provide an example of $(n, p)$-eventually periodic modules.
\begin{example} \label{example_3}
Fix two integers $n \geq 0$ and $p>0$, and consider the truncated algebra $\L = kQ/R^2$, where $Q$ is the following quiver:
\vspace{5.5mm}
\begin{align*}
\xymatrix{
n \ar[r] & n-1 \ar[r] & \cdots \ar[r] & 1 \ar[r] & 0 \ar[r] & -1 \ar[r] & \cdots \ar[r] & -p+1 \ar@/_20pt/[lll]}
\end{align*}
and $R$ is the arrow ideal of the path algebra $k Q$.
We denote by $S_i$ the simple $\L$-module associated with the vertex $i$.
A direct calculation shows that $S_i$ is $(i, p)$-eventually periodic if $1 \leq i \leq n$ and is $p$-periodic if $-p+1 \leq i \leq 0$.
In particular, $S_n$ is $(n, p)$-eventually periodic.
\end{example}
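Here is a sketch of the computation in Example \ref{example_3}.
Since the arrow ideal of $kQ$ squares to zero in $\L$, the syzygy of a simple module is the radical of its projective cover, namely the direct sum of the simples at the targets of the arrows starting at the given vertex.
In the quiver above, every vertex has exactly one outgoing arrow, so
\begin{align*}
\Omega_\L(S_i) \cong S_{i-1} \mbox{ for } -p+2 \leq i \leq n \mbox{\quad and \quad} \Omega_\L(S_{-p+1}) \cong S_0.
\end{align*}
Hence the simples $S_0, S_{-1}, \ldots, S_{-p+1}$ form a single syzygy orbit of size $p$, so each of them is $p$-periodic, whereas for $1 \leq i \leq n$ we have $\Omega_\L^{j}(S_i) \cong S_{i-j}$ for $0 \leq j \leq i$, and $S_{i-j}$ is periodic precisely when $i-j \leq 0$; thus $S_i$ is $(i, p)$-eventually periodic.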
It is easy to see that if $M$ is a periodic module, then all its syzygies are periodic and have the same period as $M$.
This implies that the class of periodic modules is closed under taking syzygies.
Therefore, it is natural to introduce the following notion.
\begin{definition} \label{def_2}
The {\it periodic dimension} of an $R$-module $M$ is defined by
\[
\operatorname{per.dim}_{R}M := \inf\left\{\,n \geq 0 \mid \Omega_{R}^{n}(M) \mbox{\rm \ is periodic}\,\right\}.
\]
\end{definition}
By definition, $M$ is eventually periodic if and only if $\operatorname{per.dim}_{R}M < \infty$.
In this case, $\operatorname{per.dim}_R M = \operatorname{proj.dim}_R M +1$ if $M$ has finite projective dimension.
Otherwise, $\operatorname{per.dim}_R M$ is equal to the degree $n$ of the first periodic syzygy $\Omega_R^{n}(M)$ of $M$.
Also, if $M$ has finite periodic dimension $n$, then we have
\begin{align*}
\operatorname{per.dim}_R \Omega_R^{i}(M) =
\begin{cases}
n-i & \mbox{ if $0 \leq i \leq n$,} \\
0 & \mbox{ if $i > n$.}
\end{cases}
\end{align*}
Moreover, for any family $\{ M_i \}_{i \in I}$ of $R$-modules, the isomorphism (\ref{eq_9}) yields an inequality
\begin{align} \label{eq_7}
\operatorname{per.dim}_R \bigoplus_{i\in I} M_i \leq \sup\{\, \operatorname{per.dim}_R M_i \mid i \in I\,\}.
\end{align}
As the following example shows, the equality does not hold in general.
\begin{example} \label{example_2}
Let $Q$ be the following quiver:
\begin{align*}
\xymatrix{
5 \ar@<0.6ex>[r] & 4 \ar@<0.6ex>[l] \ar[r] & 3 \ar[r] & 2 \ar[r] & 1 \ar[r] & 0 }
\end{align*}
and consider $\L = kQ/R^2$.
A direct calculation shows that
\begin{align*}
\operatorname{proj.dim}_{\L}S_i =
\begin{cases}
i &
\mbox{ if $ 0 \leq i \leq 3$, }\\
\infty & \mbox{ if $4 \leq i \leq 5$,}
\end{cases}
\mbox{\quad and\quad}
\operatorname{per.dim}_{\L}S_i =
\begin{cases}
i+1 &
\mbox{ if $ 0 \leq i \leq 3$, }\\
i-1 & \mbox{ if $4 \leq i \leq 5$,}
\end{cases}
\end{align*}
and that $\Omega_\L^{3}(S_4) = S_1 \oplus S_3 \oplus S_5$.
We then have that
\begin{align*}
\operatorname{per.dim}_\L S_4 = 3 < 4 = \max\{\, \operatorname{per.dim}_\L S_i \mid i = 1, 3, 5 \,\}.
\end{align*}
\end{example}
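For the reader's convenience, here is a sketch of the direct calculation.
Since the arrow ideal squares to zero in $\L$, the syzygy of each simple module is the direct sum of the simples at the targets of the arrows starting at the corresponding vertex:
\begin{align*}
\Omega_\L(S_5) \cong S_4, \qquad \Omega_\L(S_4) \cong S_5 \oplus S_3, \qquad \Omega_\L(S_i) \cong S_{i-1} \mbox{ for } 1 \leq i \leq 3, \qquad \Omega_\L(S_0) = 0.
\end{align*}
Iterating yields
\begin{align*}
\Omega_\L^{2}(S_4) \cong S_4 \oplus S_2, \qquad
\Omega_\L^{3}(S_4) \cong S_5 \oplus S_3 \oplus S_1, \qquad
\Omega_\L^{4}(S_4) \cong S_4 \oplus S_2 \oplus S_0, \qquad
\Omega_\L^{5}(S_4) \cong \Omega_\L^{3}(S_4),
\end{align*}
so that the syzygies of $S_4$ alternate between the last two displayed modules from degree $3$ on; in particular, $\Omega_\L^{3}(S_4)$ is $2$-periodic, while no earlier syzygy of $S_4$ recurs, whence $\operatorname{per.dim}_\L S_4 = 3$.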
Example \ref{example_2} shows that direct summands of finitely generated periodic modules are not necessarily periodic.
The following observation shows that such direct summands are at least eventually periodic.
\begin{proposition} \label{claim_46}
Let $R$ be a left artin ring and $M$ a finitely generated periodic $R$-module. Then the following statements hold.
\begin{enumerate}
\item Any indecomposable direct summand of $M$ is eventually periodic.
\item Every indecomposable direct summand of $M$ is periodic if and only if $M$ has no non-zero direct summand with finite projective dimension.
\end{enumerate}
\end{proposition}
\begin{proof}
Suppose that $M$ is $p$-periodic and that
\begin{align} \label{eq_8}
M = L_1 \oplus \cdots \oplus L_r \oplus N_1 \oplus \cdots \oplus N_s \oplus N_{s+1} \oplus \cdots \oplus N_t
\end{align}
is a decomposition of indecomposable $R$-modules such that
$\operatorname{proj.dim}_{R} L_i = \infty$ for $1 \leq i \leq r$,
such that $p \leq \operatorname{proj.dim}_{R} N_i < \infty$ for $1 \leq i \leq s$, and such that $\operatorname{proj.dim}_{R} N_i < p$ for $s+1 \leq i \leq t$.
For (1), it is enough to show that each $L_i$ is eventually periodic.
Since $\Omega_R^p(M) \cong M$, we have an isomorphism in $R\mod$
\begin{align*}
&\Omega_R^{p}(L_1) \oplus \cdots \oplus \Omega_R^{p}(L_r) \oplus \Omega_R^{p}(N_1) \oplus \cdots \oplus \Omega_R^{p}(N_s) \\[2mm]
\cong &\
L_1 \oplus \cdots \oplus L_r \oplus N_1 \oplus \cdots \oplus N_t.
\end{align*}
Since $\operatorname{proj.dim}_R \Omega_R^{p}(L_i) = \infty$ and $\operatorname{proj.dim}_R \Omega_R^{p}(N_i) < \infty$, the Krull-Schmidt theorem implies that there exists a bijection $\sigma : \{1, \ldots, r\} \rightarrow \{1, \ldots, r\}$ such that
\begin{align*}
\Omega_R^{p}(L_i) \cong L_{\sigma(i)} \oplus N_i^\prime
\end{align*}
in $R\mod$ for each $i$, where $N_i^\prime := \bigoplus_{j \in I(i)}N_j$ for some index set $I(i) \subseteq{\{1, \ldots, t\}}$.
Applying $\Omega_R^{lp}$ with $l:=r!$ to the above isomorphism, we have the following isomorphisms in $R\mod$:
\begin{align*}
\Omega_R^{(l+1)p}(L_i)
&\cong \Omega_R^{lp}\!\left(L_{\sigma(i)}\right) \oplus \Omega_R^{lp}\!\left(N_i^\prime\right) \nonumber
\\[1.5mm]
&\cong \Omega_R^{(l-1)p}\!\left(L_{\sigma^2(i)}\right) \oplus \Omega_R^{(l-1)p}\!\left(N_{\sigma(i)}^\prime\right)\oplus \Omega_R^{lp}\!\left(N_i^\prime\right) \nonumber
\\
&\ \,\vdots\nonumber
\\
&\cong L_{\sigma^{l+1}(i)} \oplus N_{\sigma^l(i)}^\prime \oplus \left( \bigoplus_{j=1}^{l} \Omega_R^{j p}\!\left(N_{\sigma^{l-j}(i)}^\prime\right) \right) \nonumber
\\[1.5mm]
&\cong \Omega_R^{p}(L_i) \oplus \left(\bigoplus_{j=1}^{l} \Omega_R^{j p}\!\left(N_{\sigma^{l-j}(i)}^\prime \right) \right).
\end{align*}
Since the direct summand
\begin{align*}
\bigoplus_{j=1}^{l} \Omega_R^{j p}\!\left(N_{\sigma^{l-j}(i)}^\prime \right)
\end{align*}
has finite projective dimension, say $d_i$,
we deduce that
\begin{align*}
\Omega_R^{(p+d_i)+lp}(L_i) = \Omega_R^{(l+1)p+d_i}(L_i) \cong \Omega_R^{p+d_i}(L_i)
\end{align*}
in $R\mod$.
This means that the periodic dimension of $L_i$ is finite and at most $p + d_i$.
For (2), it suffices to show the\ \lq\lq if\ \!\rq\rq\ part.
When $M$ is in $R\mbox{-}\mathrm{mod}_{\rm fpd}$, or equivalently, $t=0$ in the decomposition (\ref{eq_8}), one gets a bijection $\sigma : \{1, \ldots, r\} \rightarrow \{1, \ldots, r\}$ such that $\Omega_R^{p}(L_i) \cong L_{\sigma(i)}$ for each $i$.
It then follows that $\Omega_R^{l p}(L_i) \cong L_{\sigma^{l}(i)} = L_{i}$.
\end{proof}
We have the following consequence of Proposition \ref{claim_46}.
\begin{corollary} \label{claim_47}
Let $\{ M_i \}_{i \in I}$ be a finite set of finitely generated modules over a left artin ring $R$.
Assume that $M := \bigoplus_{i\in I} M_i$ is $(n, p)$-eventually periodic with $\Omega_R^{n}(M)$ in $R\mbox{-}\mathrm{mod}_{\rm fpd}$. Then $\Omega_R^{n}(M_i)$ is periodic for all $i \in I$, and we have
\begin{align*}
\operatorname{per.dim}_R M = \max\left\{\, \operatorname{per.dim}_R M_i \mid i \in I\,\right\}.
\end{align*}
\end{corollary}
\begin{proof}
We know from Proposition \ref{claim_46} (2) that each indecomposable direct summand of $\Omega_R^{n}(M)$ is periodic.
Since a direct sum of periodic modules is again periodic, we have that each $\Omega_R^{n}(M_i)$ is periodic.
This implies that $\operatorname{per.dim}_R M_i \leq \operatorname{per.dim}_R M$ for each $i \in I$, which completes the proof.
\end{proof}
Let $M$ be an $R$-module, and let $n$ be a positive integer.
One easily observes that if $\operatorname{Ext}\nolimits_{R}^{n}(M, X)= 0$ for all $X \in R\mbox{-}\mathrm{Fpd}$, then $\Omega_R^{n}(M)$ is in $R\mbox{-}\mathrm{Mod}_{\rm fpd}$.
Next, we treat eventually periodic modules of finite Gorenstein projective dimension.
We begin with the following lemma.
\begin{lemma} \label{claim_1}
Let $M$ be an $R$-module such that $\Omega_{R}^{n+p}(M) \cong \Omega_{R}^{n}(M)$ in $R\mbox{-}\underline{\mathrm{Mod}}$ for some $n \geq 0$ and $p>0$.
Then we have $\operatorname{per.dim}_{R}M \leq n+1$.
Moreover, the period of the first periodic syzygy of $M$ divides $p$.
\end{lemma}
\begin{proof}
By \cite[Proposition 1.44]{AusBri69}, there exist two projective $R$-modules $P$ and $Q$ such that $\Omega_{R}^{n+p}(M) \oplus P \cong \Omega_{R}^{n}(M) \oplus Q$ in $R\mbox{-}\mathrm{Mod}$.
Taking their syzygies, we obtain an isomorphism $\Omega_{R}^{n+p+1}(M) \cong \Omega_{R}^{n+1}(M)$ in $R\mbox{-}\mathrm{Mod}$.
\end{proof}
We are now ready to give the main result of this subsection, which says that the periodic dimension is almost equal to the Gorenstein projective dimension when both dimensions are finite.
\begin{theorem} \label{claim_6}
Let $M$ be an eventually periodic $R$-module of finite Gorenstein projective dimension $r$.
Then we have
\begin{align*}
r \,\leq\, \operatorname{per.dim}_R M \,\leq\, r+1.
\end{align*}
Moreover, there exists an isomorphism in $R\mbox{-}\underline{\mathrm{Mod}}$
\begin{align*}
\Omega_{R}^{r+p}(M) \cong \Omega_{R}^{r}(M),
\end{align*}
where $p$ denotes the period of the first periodic syzygy of $M$.
\end{theorem}
\begin{proof}
Suppose that $M$ is $(n, p)$-eventually periodic.
We first show that $r \leq n \leq r+1$.
Fix a minimal projective resolution $P_\bullet \rightarrow M$ of $M$.
The inequality $r \leq n$ follows from the fact that $\Omega_{R}^{n}(M) \cong \Omega_{R}^{n+ip}(M)$ for all $i \geq 0$: choosing $i$ so that $n+ip \geq r$ shows that $\Omega_{R}^{n}(M)$ is Gorenstein projective, whence $r = \operatorname{Gpd}_R M \leq n$.
On the other hand, splicing the periodic part
\[ 0 \rightarrow \Omega_{R}^{n+p}(M) \rightarrow P_{n+p-1} \rightarrow \cdots \rightarrow P_{n} \rightarrow \Omega_{R}^{n}(M) \rightarrow 0\]
repeatedly, we can construct an acyclic complex $T_\bullet$ of projective $R$-modules such that $\Omega_0(T_\bullet) = \Omega_{R}^{n}(M)$.
Since $\Omega_{R}^{i}(M)$ is Gorenstein projective for any $i \geq n$, \cite[Lemma 2.3.3]{Veliche_2006} implies that the acyclic complex $T_\bullet$ is totally acyclic.
Hence we have that $\Sigma^{i}(\Omega_{R}^{n}(M))= \Sigma^{i}(\Omega_0(T_\bullet))= \Omega_{-i}(T_\bullet)$ for all $i \in \mathbb{Z}$, where $\Sigma$ denotes the shift functor on $R\mbox{-}\underline{\mathrm{GProj}}$.
Since $\Sigma^{-1} = \Omega_R$, there exist isomorphisms in $R\mbox{-}\underline{\mathrm{GProj}}$
\begin{align*}
\Omega_{R}^{r}(M)
\cong \Sigma^{n-r} \Sigma^{r-n}(\Omega_{R}^{r}(M))
\cong \Sigma^{n-r}(\Omega_{R}^{n}(M))
\cong \Omega_{R}^{n+l}(M)
\end{align*}
for some $l$ with $0 \leq l < p$.
Applying $\Omega_R^{p}$ to the above, we obtain the following isomorphisms in $R\mbox{-}\underline{\mathrm{GProj}}$:
\begin{align*}
\Omega_{R}^{r+p}(M)
\cong \Omega_{R}^{n+l+p}(M)
\cong \Omega_{R}^{n+l}(M)
\cong \Omega_{R}^{r}(M).
\end{align*}
Thus Lemma \ref{claim_1} shows that $n \leq r+1$.
This completes the proof.
\end{proof}
As in Theorem \ref{claim_6}, one can prove a similar result for a Noetherian semiperfect ring.
We state it without proof.
\begin{theorem} \label{claim_50}
Let $R$ be a Noetherian semiperfect ring and $M$ a finitely generated $(n,p)$-eventually periodic $R$-module with $\operatorname{G\mbox{-}dim}_R M = r < \infty$.
Then we have
\begin{align*}
r \,\leq\, n \,\leq\, r+1.
\end{align*}
Moreover, there exists an isomorphism in $R\mbox{-}\underline{\mathrm{mod}}$
\begin{align*}
\Omega_{R}^{r+p}(M) \cong \Omega_{R}^{r}(M).
\end{align*}
\end{theorem}
\begin{remark} \label{remark_3}
Let $R$ be a Gorenstein local ring.
Then Theorem \ref{claim_50} can be used to improve results related to eventually periodic $R$-modules such as \cite[Theorem 1.6]{Avramov_1989_proceedings}, \cite[Theorem 1.2]{Gasharov-Peeva_1990} and \cite[Theorem 4.1]{Eisenbud_1980}.
\end{remark}
We end this subsection with three corollaries of Theorem \ref{claim_6}.
First, we refine the theorem.
\begin{corollary} \label{claim_49}
Let $R$ be a left artin ring and $M$ a finitely generated $(n,p)$-eventually periodic $R$-module with $\operatorname{Gpd}_R M = r < \infty$.
Then we have
\begin{align*} r \leq n \leq r+1. \end{align*}
Moreover, there exists an isomorphism of $R$-modules
\begin{align*} \Omega_{R}^{r+p}(M) \oplus P \cong \Omega_{R}^{r}(M) \end{align*}
for some $P \in R\mbox{-}\mathrm{proj}$.
In particular, $n=r$ if and only if $\Omega_{R}^{r}(M)$ is in $R\mbox{-}\mathrm{mod}_{\mathcal{P}}$.
\end{corollary}
\begin{proof}
We need only observe that $\Omega_{R}^{r+p}(M) \oplus P \cong \Omega_{R}^{r}(M)$ in $R\mod$ for some $P \in R\mbox{-}\mathrm{proj}$.
We know from Theorem \ref{claim_6} that $\Omega_{R}^{r+p}(M) \cong \Omega_{R}^{r}(M)$ in $R\mbox{-}\underline{\mathrm{Gproj}}$.
Since $\operatorname{Ext}\nolimits_{R}^{i}(M, R)= 0$ for all $i>r$, it follows that $\Omega_R^{r+p}(M)$ has no non-zero projective direct summand.
Consequently, the Krull-Schmidt theorem yields the desired isomorphism.
\end{proof}
Next, we consider two extreme cases for eventually periodic modules having finite Gorenstein projective dimension.
\begin{corollary} \label{claim_7}
The following statements hold for any $R$-module $M$.
\begin{enumerate}
\item If $M$ is $p$-periodic and has finite Gorenstein projective dimension, then $M$ is $p$-strongly Gorenstein projective.
\item If $M$ is $(n,p)$-eventually periodic and is Gorenstein projective, then $M$ is $p$-strongly Gorenstein projective.
\end{enumerate}
\end{corollary}
\begin{proof}
It is a direct consequence of Theorem \ref{claim_6}.
\end{proof}
Finally, we give a useful property of periodic dimensions.
\begin{corollary} \label{claim_48}
Let $\{ M_i \}_{i \in I}$ be a finite set of finitely generated modules over a left artin ring $R$.
If $M := \bigoplus_{i\in I} M_i$ has finite Gorenstein projective dimension, then we have
\begin{align*}
\operatorname{per.dim}_R M = \sup\left\{\, \operatorname{per.dim}_R M_i \mid i \in I\,\right\}.
\end{align*}
\end{corollary}
\begin{proof}
From the inequality (\ref{eq_7}) and Proposition \ref{claim_46} (1), we see that
$\operatorname{per.dim}_R M = \infty$ if and only if $\operatorname{per.dim}_R M_i = \infty$ for some $i \in I$.
Thus it remains to obtain the desired equality in the case where $\operatorname{per.dim}_R M = n < \infty$.
In this case, the first periodic syzygy $\Omega_R^n(M) \in R\mod$ is Gorenstein projective by Theorem \ref{claim_6}, and hence belongs to $R\mbox{-}\mathrm{mod}_{\rm fpd}$; here, we use the fact that $R\mbox{-}\mathrm{GProj} \subseteq {}^{\perp}(R\mbox{-}\mathrm{Proj}) = {}^{\perp}(R\mbox{-}\mathrm{Fpd})$.
Then Corollary \ref{claim_47} completes the proof.
\end{proof}
\subsection{The case of regular bimodules} \label{The_case_of_algebras}
In the rest of this paper, an algebra will mean a finite dimensional $k$-algebra.
In this subsection, we investigate the bimodule periodic dimensions of algebras.
In particular, we determine that of eventually periodic Gorenstein algebras.
We start with the definition of eventually periodic algebras.
\begin{definition} \label{def_4}
An algebra $\L$ is called {\it eventually periodic} if the regular $\L$-bimodule $\L$ is eventually periodic.
If $\L$ is periodic as a $\L$-bimodule, $\L$ is said to be {\it periodic}.
\end{definition}
Throughout this paper, an {\it $(n,p)$-eventually periodic algebra} will mean an algebra $\L$ that is $(n,p)$-eventually periodic over $\Lambda^\textrm{e}$.
We now make a brief note on eventually periodic algebras:
as pointed out in \cite[Section 2]{Usui_2022}, eventually periodic algebras are not Gorenstein in general.
This may be surprising since periodic algebras are self-injective algebras (\cite[Proposition IV.11.18]{Skowronski-Yamagara_book_I}).
Motivated by the observation, we will characterize eventually periodic Gorenstein algebras (see Proposition \ref{claim_39} and Theorem \ref{claim_38}).
We will also show that eventually periodic algebras are at least both left and right weakly Gorenstein (see Proposition \ref{claim_40}).
On the other hand, the class of eventually periodic algebras includes monomial Gorenstein algebras (\cite[the proof of Corollary 6.4]{Dotsenko-Gelinas-Tamaroff_2023}) and monomial Nakayama algebras (\cite[Section 3.2]{Usui_2022}).
It is not difficult to check that this is a consequence of the following result due to K{\" u}pper \cite[Corollary 2.10 (1) and (2)]{kupper2010two}.
\begin{proposition}[K{\" u}pper] \label{claim_51}
Let $\L$ be a monomial algebra.
Then $\L$ is an eventually periodic algebra if and only if every simple $\L$-module is eventually periodic.
\end{proposition}
We now turn to the bimodule periodic dimensions of eventually periodic algebras.
Dotsenko, G\'{e}linas and Tamaroff showed in \cite[the proof of Corollary 6.4]{Dotsenko-Gelinas-Tamaroff_2023} that $\operatorname{per.dim}_{\Lambda^\textrm{e}} \L \leq d+1$ for any monomial $d$-Gorenstein algebra $\L$;
the author proved in \cite[Proposition 4.3]{Usui21_No02} that the tensor product $\L \otimes_k \Gamma$ of a periodic algebra $\L$ and an algebra $\Gamma$ with $\operatorname{proj.dim}_{\Gamma^\textrm{e}}\Gamma = d < \infty$ is a $d$-Gorenstein algebra with $\operatorname{per.dim}_{(\L \otimes_k \Gamma)^{\rm e}} \L \otimes_k \Gamma = d$.
These facts lead to the main result of this subsection, which shows that the bimodule periodic dimension of an eventually periodic $d$-Gorenstein algebra equals either $d$ or $d+1$.
To this end, we now calculate the bimodule Gorenstein dimension for an arbitrary Gorenstein algebra.
Let $\L$ be an algebra.
We see from \cite[Lemma 8.2.4]{Witherspoon_book} that for any finitely generated $\L$-modules $M$ and $N$, there exists an isomorphism of graded vector spaces
\begin{align*}
\operatorname{Ext}\nolimits_\L^{\bullet}(M, N) \cong \operatorname{Ext}\nolimits_{\Lambda^\textrm{e}}^{\bullet}(\L, \operatorname{Hom}\nolimits_k(M, N)).
\end{align*}
Let $D$ denote the $k$-duality $\operatorname{Hom}\nolimits_k(-, k)$.
Then the isomorphism of $\Lambda^\textrm{e}$-modules
\begin{align*}
\Lambda^\textrm{e} = {}_{\L}\L \otimes_k \L_\L \cong \operatorname{Hom}\nolimits_k(D(\L_\L), {}_{\L}\L)
\end{align*}
induces the following isomorphism of graded vector spaces:
\begin{align} \label{eq_6}
\operatorname{Ext}\nolimits_\L^{\bullet}(D(\L), \L) \cong \operatorname{Ext}\nolimits_{\Lambda^\textrm{e}}^{\bullet}(\L, \Lambda^\textrm{e}).
\end{align}
The following proposition extends \cite[Proposition 5.6]{Shen_2019} to the higher-dimensional case.
\begin{proposition} \label{claim_44}
Let $\L$ be a Gorenstein algebra. Then we have
\[\operatorname{G\mbox{-}dim}_{\Lambda^\textrm{e}} \L= \operatorname{inj.dim}_{\L}\L.\]
\end{proposition}
\begin{proof}
Assume that $\L$ is $d$-Gorenstein.
Since the enveloping algebra $\Lambda^\textrm{e}$ is $(2d)$-Gorenstein, it follows that $\operatorname{G\mbox{-}dim}_{\Lambda^\textrm{e}} \L \leq 2d$.
Moreover, we obtain that
\begin{align*}
d = \operatorname{inj.dim}_{\L^{\rm op}}\L = \operatorname{proj.dim}_\L D(\L) = \operatorname{G\mbox{-}dim}_\L D(\L).
\end{align*}
Hence the isomorphism (\ref{eq_6}) implies that $\operatorname{G\mbox{-}dim}_{\Lambda^\textrm{e}} \L = \operatorname{G\mbox{-}dim}_\L D(\L) = d$.
\end{proof}
We are now able to prove the main result of this subsection.
\begin{theorem} \label{claim_10}
Let $\L$ be an $(n, p)$-eventually periodic $d$-Gorenstein algebra.
Then we have
\begin{align*}
d \leq n \leq d+1.
\end{align*}
Moreover, there exists an isomorphism in $\Lambda^\textrm{e}\mod$
\begin{align*}
\Omega_{\Lambda^\textrm{e}}^{d+p}(\L) \oplus P \cong \Omega_{\Lambda^\textrm{e}}^d(\L)
\end{align*}
for some $P$ in $\Lambda^\textrm{e}\mbox{-}\mathrm{proj}$.
In particular, $n = d$ if and only if $\Omega_{\Lambda^\textrm{e}}^d(\L)$ has no non-zero projective direct summand.
\end{theorem}
\begin{proof}
The finitely generated eventually periodic $\Lambda^\textrm{e}$-module $\L$ satisfies $\operatorname{G\mbox{-}dim}_{\Lambda^\textrm{e}} \L= d$ by Proposition \ref{claim_44}.
Thus Corollary \ref{claim_49} completes the proof.
\end{proof}
\begin{remark} \label{remark_1}
It is possible to describe the bimodule periodic dimension of an eventually periodic algebra with finite bimodule Gorenstein dimension.
Actually, such an algebra is Gorenstein as will be seen in Theorem \ref{claim_38}.
\end{remark}
\begin{remark} \label{remark_2}
The bound given in Theorem \ref{claim_10} is the best possible.
Indeed, as mentioned above, there are $d$-Gorenstein algebras of bimodule periodic dimension $d$. Besides, Proposition \ref{claim_52} and Examples \ref{example_4} and \ref{example_1} below exhibit examples of $d$-Gorenstein algebras of bimodule periodic dimension $d+1$.
\end{remark}
We now briefly recall some basic facts on projective resolutions over an algebra $\L$.
Let $P_\bullet \xrightarrow{\varepsilon} \L$ be a projective resolution of $\L$ over $\Lambda^\textrm{e}$.
Then any $\L$-module $M$ admits a projective resolution of the form
\begin{align*}
P_{\bullet} \otimes_{\L} M \xrightarrow{\varepsilon \otimes_{\L} \operatorname{id}_M} \L \otimes_{\L} M.
\end{align*}
In particular, $\Omega_{i}(P_{\bullet} \otimes_{\L} M) = \Omega_{i}(P_{\bullet}) \otimes_{\L} M$ for all $i \geq 0$.
Similar projective resolutions can be constructed for any $\L^{\rm op}$-modules.
Therefore, one gets an inequality
\begin{align*}
\operatorname{gl.dim} \L \leq \operatorname{proj.dim}_{\Lambda^\textrm{e}}\L.
\end{align*}
It is known that the equality holds if the semisimple quotient $\L/J(\L)$ is separable.
The following observation gives another condition under which the equality holds.
\begin{proposition} \label{claim_52}
Let $\L$ be an algebra with finite bimodule projective dimension $d$.
Then $\L$ is a $(d+1, 1)$-eventually periodic $d$-Gorenstein algebra.
Moreover, we have $\operatorname{gl.dim} \L = \operatorname{proj.dim}_{\Lambda^\textrm{e}}\L$.
\end{proposition}
\begin{proof}
We show that $\L$ is $d$-Gorenstein.
Since $\operatorname{gl.dim} \L \leq \operatorname{proj.dim}_{\Lambda^\textrm{e}}\L < \infty$, it follows that $\L$ is Gorenstein with $\operatorname{inj.dim}_\L \L = \operatorname{gl.dim}\L$.
Now, one computes
\begin{align*}
d = \operatorname{proj.dim}_{\Lambda^\textrm{e}}\L = \operatorname{G\mbox{-}dim}_{\Lambda^\textrm{e}} \L = \operatorname{inj.dim}_\L \L ,
\end{align*}
where the last equality follows from Proposition \ref{claim_44}. This completes the proof.
\end{proof}
Next, we have the following description of the bimodule periodic dimensions for eventually periodic Gorenstein algebras.
In what follows, we set $\mathbbm{k} := \L/J(\L)$ for an algebra $\L$.
\begin{theorem} \label{claim_45}
Let $\L$ be an eventually periodic $d$-Gorenstein algebra.
If $\mathbbm{k}$ is a separable algebra, then we have
\begin{align*}
\operatorname{per.dim}_{\Lambda^\textrm{e}} \L = \operatorname{per.dim}_{\L}\mathbbm{k} = \operatorname{per.dim}_{\L^{\rm op}}\mathbbm{k},
\end{align*}
where the common value is either $d$ or $d+1$.
Moreover, the following conditions are equivalent.
\begin{enumerate}
\setlength{\itemsep}{.5mm}
\item The bimodule periodic dimension of $\L$ is equal to $d$.
\item The $d$-th syzygy of ${}_{\Lambda^\textrm{e}}\L$ is in $\Lambda^\textrm{e}\mbox{-}\mathrm{mod}_{\mathcal{P}}$.
\item The $d$-th syzygy of ${}_{\L}\mathbbm{k}$ is in $\L\mbox{-}\mathrm{mod}_{\mathcal{P}}$.
\item The $d$-th syzygy of ${}_{\L^{\rm op}}\mathbbm{k}$ is in $\L^{\rm op}\mbox{-}\mathrm{mod}_{\mathcal{P}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
We need only verify that $\operatorname{per.dim}_{\Lambda^\textrm{e}} \L = \operatorname{per.dim}_{\L}\mathbbm{k} = \operatorname{per.dim}_{\L^{\rm op}}\mathbbm{k}$.
It follows from Propositions \ref{Veliche_2006_2.4.2} and \ref{claim_44} that $\operatorname{G\mbox{-}dim}_{\Lambda^\textrm{e}}\L = \operatorname{G\mbox{-}dim}_{\L}\mathbbm{k} = \operatorname{G\mbox{-}dim}_{\L^{\rm op}}\mathbbm{k} = d$.
Moreover, we see that $\operatorname{per.dim}_{\Lambda^\textrm{e}} \L$ is finite by definition and that $\operatorname{per.dim}_{\L}\mathbbm{k}$ and $\operatorname{per.dim}_{\L^{\rm op}}\mathbbm{k}$ are both finite by Lemma \ref{claim_42} below.
Thus Corollary \ref{claim_49} implies that
\begin{align*}
d \ \leq \ \operatorname{per.dim}_{\Lambda^\textrm{e}} \L, \ \operatorname{per.dim}_{\L}\mathbbm{k}, \ \operatorname{per.dim}_{\L^{\rm op}}\mathbbm{k} \ \leq \ d+1.
\end{align*}
We claim that $\operatorname{per.dim}_{\Lambda^\textrm{e}} \L = d+1$ implies $\operatorname{per.dim}_\L \mathbbm{k}= d+1 =\operatorname{per.dim}_{\L^{\rm op}} \mathbbm{k}$.
Since $\mathbbm{k}$ is separable, one has that
\begin{align*}
J(\Lambda^\textrm{e}) = J(\L)\otimes_k \L^{\rm op} + \L \otimes_k J(\L^{\rm op}) \mbox{\quad and \quad } \Lambda^\textrm{e} /J(\Lambda^\textrm{e}) \cong \mathbbm{k}\otimes_k \mathbbm{k}^{\rm op}.
\end{align*}
Hence if $P_\bullet \rightarrow \L$ is a minimal projective resolution of $\L$ over $\Lambda^\textrm{e}$, then the following complex induced by the tensor functor $\Lambda^\textrm{e} /J(\Lambda^\textrm{e}) \otimes_{\Lambda^\textrm{e}} -$ has trivial differentials:
\begin{align*}
\mbox{$\mathbbm{k} \otimes_{\L} P_\bullet \otimes_{\L} \mathbbm{k} \rightarrow \mathbbm{k} \otimes_{\L} \L \otimes_{\L} \mathbbm{k}$.}
\end{align*}
This implies that the projective resolutions
\begin{align}\label{eq_10}
\mbox{
$P_{\bullet} \otimes_{\L} \mathbbm{k} \rightarrow \L \otimes_{\L} \mathbbm{k} = {}_{\L}\mathbbm{k}$ \quad and \quad $\mathbbm{k} \otimes_{\L} P_{\bullet} \rightarrow \mathbbm{k} \otimes_{\L} \L = {}_{\L^{\rm op}}\mathbbm{k}$}
\end{align}
are both minimal.
We thus conclude that if $\Omega_{\Lambda^\textrm{e}}^d(\L)$ has a non-zero projective direct summand, then so do
\begin{align*}
\Omega_{\L}^d(\mathbbm{k}) = \Omega_{\Lambda^\textrm{e}}^{d}(\L) \otimes_{\L} \mathbbm{k} \quad \mbox{ and } \quad \Omega_{\L^{\rm op}}^d(\mathbbm{k}) = \mathbbm{k} \otimes_{\L} \Omega_{\Lambda^\textrm{e}}^{d}(\L).
\end{align*}
Corollary \ref{claim_49} enables us to obtain the desired statement.
To complete the proof, it is enough to check that $\operatorname{per.dim}_{\Lambda^\textrm{e}} \L = d$ implies $\operatorname{per.dim}_{\L}\mathbbm{k} = d = \operatorname{per.dim}_{\L^{\rm op}} \mathbbm{k}$.
However, this is trivial because of the minimal projective resolutions (\ref{eq_10}).
\end{proof}
Note that $\mathbbm{k}$ is separable when $\L$ is an algebra over a perfect field or a bound quiver algebra over a field.
We end this subsection with examples of $d$-Gorenstein algebras $\L$ satisfying $\operatorname{per.dim}_{\Lambda^\textrm{e}} \L = d+1$ and $\operatorname{proj.dim}_{\Lambda^\textrm{e}} \L = \infty$.
\begin{example} \label{example_4}
Consider the following disconnected quiver $Q$:
\begin{align*}
\xymatrix{0 \ar@(ul,dl)_-{\beta} & -1
}
\end{align*}
Let $I$ be the ideal of $kQ$ generated by $\beta^2$, and let $\L = kQ/I$.
Then $\L$ is isomorphic to the product of the periodic algebra $k[x]/(x^2)$ and the simple self-injective algebra $k$ as algebras.
Consequently, the monomial algebra $\L$ is self-injective and hence eventually periodic.
Recall that we denote by $S_i$ the simple $\L$-module corresponding to the vertex $i$.
Since
\begin{align*}
\operatorname{proj.dim}_{\L}S_i =
\begin{cases}
0 &
\mbox{ if $i = -1$, }\\
\infty & \mbox{ if $i = 0$,}
\end{cases}
\qquad \mbox{and} \qquad
\operatorname{per.dim}_{\L}S_i =
\begin{cases}
1 &
\mbox{ if $i = -1$, }\\
0 & \mbox{ if $i = 0$},
\end{cases}
\end{align*}
we have that
\begin{align*}
\operatorname{per.dim}_{\Lambda^\textrm{e}} \L = \operatorname{per.dim}_{\L}\mathbbm{k} =
\max\left\{\, \operatorname{per.dim}_{\L}S_i \,\mid\, i = -1, 0 \,\right\}
= 1,
\end{align*}
where the first and second equalities follow from Theorem \ref{claim_45} and Corollary \ref{claim_48}, respectively.
\end{example}
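Since every finite-dimensional module over $\L \cong k[x]/(x^2) \times k$ decomposes as a direct sum of copies of $S_0$, of the projective module $k[x]/(x^2)$, and of $S_{-1}$, the syzygy computations in this example can be mimicked by a short script. The encoding below (a triple counting the three indecomposables) and the function names are illustrative choices, not taken from the paper; the only homological input is $\Omega_\L(S_0) \cong S_0$ and the vanishing of syzygies of projective modules.

```python
# Modules over L = k[x]/(x^2) x k, encoded as triples (a, b, c):
#   a copies of the simple S_0, b copies of the projective k[x]/(x^2),
#   c copies of the simple projective S_{-1}.
def syzygy(m):
    a, b, c = m
    # Omega(S_0) = S_0 (from 0 -> (x) -> k[x]/(x^2) -> S_0 -> 0, with (x) = S_0);
    # projective summands contribute zero syzygy.
    return (a, 0, 0)

def is_periodic(m, bound=10):
    cur = m
    for _ in range(bound):
        cur = syzygy(cur)
        if cur == m:
            return True
    return False

def per_dim(m, bound=10):
    # least n such that the n-th syzygy is periodic
    for n in range(bound + 1):
        if is_periodic(m):
            return n
        m = syzygy(m)
    return None

S0, S_minus1 = (1, 0, 0), (0, 0, 1)
print(per_dim(S0), per_dim(S_minus1))  # 0 1
```

The run reproduces the displayed case distinction $\operatorname{per.dim}_{\L}S_0 = 0$ and $\operatorname{per.dim}_{\L}S_{-1} = 1$, hence the maximum $1$.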
The next example is inspired by \cite[Section 2.3]{Dotsenko-Gelinas-Tamaroff_2023}.
\begin{example} \label{example_1}
For any positive integer $d$, we consider the following quiver $Q$:
\begin{align*}
\xymatrix{
d \ar@(ul,dl)_-{\beta} \ar[r]^-{\alpha_{d}} & d-1 \ar[r]^-{\alpha_{d-1}}& d-2 \ar[r] & \cdots \ar[r] & 1\ar[r]^-{\alpha_{1}} & 0
}
\end{align*}
Let $I$ be the ideal of $kQ$ generated by $\{ \beta^2, \alpha_{i-1}\alpha_{i} \mid 2 \leq i \leq d \}$, and let $\Gamma = kQ/I$.
Thanks to \cite[Theorem 2.9]{Dotsenko-Gelinas-Tamaroff_2023}, it follows that the monomial algebra $\Gamma$ is $d$-Gorenstein and hence eventually periodic.
Moreover, one has that
\begin{align*}
\operatorname{proj.dim}_{\Gamma}S_i =
\begin{cases}
i &
\mbox{ if $0 \leq i \leq d-1$, }\\
\infty & \mbox{ if $i = d$,}
\end{cases}
\end{align*}
and
\begin{align*}
\operatorname{per.dim}_{\Gamma}S_i =
\begin{cases}
i+1 &
\mbox{ if $0 \leq i \leq d-1$, }\\
d+1 & \mbox{ if $i = d$}.
\end{cases}
\end{align*}
Note that the simple projective $\Gamma$-module $S_0 ( = \Gamma \alpha_1)$ is a non-zero projective direct summand of $\Omega_{\Gamma}^{d}(S_d)$.
As in Example \ref{example_4}, one can conclude that
\begin{align*}
\operatorname{per.dim}_{\Gamma^\textrm{e}}\Gamma = \operatorname{per.dim}_{\Gamma}\mathbbm{k} = \max\left\{\, \operatorname{per.dim}_{\Gamma}S_i \mid 0 \leq i \leq d\,\right\}= d+1.
\end{align*}
\end{example}
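The two displayed formulas can be reproduced mechanically. From the relations one checks directly that $\Omega_\Gamma(S_i) \cong S_{i-1}$ for $1 \leq i \leq d-1$, that $S_0$ is projective, and that $\Omega_\Gamma(S_d) \cong X \oplus S_{d-1}$, where $X := \Gamma\beta$ satisfies $\Omega_\Gamma(X) \cong X$. These syzygy rules are our own verification rather than statements from the paper, and the sketch below simply iterates them (for the illustrative choice $d = 3$).

```python
d = 3  # illustrative choice of d

def syzygy(mods):
    # mods: sorted tuple of indecomposables; an integer i stands for S_i,
    # the string 'X' for the 2-dimensional periodic module Gamma*beta.
    out = []
    for m in mods:
        if m == 'X':
            out.append('X')               # Omega(X) = X
        elif m == 0:
            pass                          # S_0 is projective
        elif m < d:
            out.append(m - 1)             # Omega(S_i) = S_{i-1}
        else:
            out.extend(['X', d - 1])      # Omega(S_d) = X + S_{d-1}
    return tuple(sorted(out, key=str))

def proj_dim(simple, bound=20):
    mods = (simple,)
    for n in range(bound + 1):
        if all(m == 0 for m in mods):     # only projective summands left
            return n
        mods = syzygy(mods)
    return float('inf')

def per_dim(simple, bound=20):
    mods = (simple,)
    for n in range(bound + 1):
        cur, periodic = mods, False
        for _ in range(bound):
            cur = syzygy(cur)
            if cur == mods:
                periodic = True
                break
        if periodic:
            return n
        mods = syzygy(mods)
    return float('inf')

print([proj_dim(i) for i in range(d + 1)])  # [0, 1, 2, inf]
print([per_dim(i) for i in range(d + 1)])   # [1, 2, 3, 4]
```

The output matches the two case distinctions above: $\operatorname{proj.dim}_{\Gamma}S_i = i$ for $i < d$ and $\infty$ for $i = d$, while $\operatorname{per.dim}_{\Gamma}S_i = i+1$ for $i \leq d-1$ and $d+1$ for $i = d$; one also sees the projective summand $S_0$ appear in $\Omega_{\Gamma}^{d}(S_d)$.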
\section{Homological properties of eventually periodic algebras} \label{Homological properties of eventually periodic algebras}
This section reveals some basic homological properties of eventually periodic algebras.
We show that several homological conjectures hold for this class of algebras.
Moreover, as promised in Subsection \ref{The_case_of_algebras}, we characterize eventually periodic Gorenstein algebras and show that eventually periodic algebras are both left and right weakly Gorenstein.
First of all, we focus on the {\it periodicity conjecture}, which states that an algebra must be periodic if all its simple modules are periodic.
We refer to \cite[Section 1]{Erdmann-Skowronski_2015} for more information on the conjecture.
\begin{proposition} \label{claim_3}
An eventually periodic connected algebra $\L$ is periodic if and only if all the simple $\L$-modules are periodic.
\end{proposition}
\begin{proof}
It suffices to show the\,\lq\lq if\ \!\rq\rq\,part.
By \cite[Theorem 1.4]{Green-Snashall-Solberg_2003}, the algebra $\L$ satisfying the required condition is self-injective.
Applying Corollary \ref{claim_49} to the indecomposable Gorenstein projective $\Lambda^\textrm{e}$-module $\L$, we conclude that $\L$ is periodic.
\end{proof}
We now prepare the following easy lemma, which will be used frequently in what follows.
\begin{lemma} \label{claim_42}
Let $\L$ be an $(n,p)$-eventually periodic algebra. Then we have the following statements.
\begin{enumerate}
\item The endofunctor $\Omega_{\L}$ on $\L\mbox{-}\underline{\mathrm{Mod}}$ satisfies that $\Omega_{\L}^{n+p} \cong \Omega_{\L}^{n}$.
\item The endofunctor $\Omega_{\Lambda^{\rm op}}$ on $\Lambda^{\rm op}\mbox{-}\underline{\mathrm{Mod}}$ satisfies that $\Omega_{\Lambda^{\rm op}}^{n+p} \cong \Omega_{\Lambda^{\rm op}}^{n}$.
\end{enumerate}
In particular, for any $\L$-module $M$ (resp. $\Lambda^{\rm op}$-module $N$), we have
\begin{align*}
\operatorname{per.dim}_{\L} M \leq n+1 \quad \left(\mbox{resp. } \operatorname{per.dim}_{\L^{\rm op}} N \leq n+1 \right).
\end{align*}
Moreover, the period of the first periodic syzygy of $M$ (resp. $N$) divides $p$.
\end{lemma}
\begin{proof}
We only prove (1); the proof of (2) is similar.
Since $\Omega_\L^i \cong \Omega_{\Lambda^\textrm{e}}^{i}(\L) \otimes_\L -$ as endofunctors on $\L\mbox{-}\underline{\mathrm{Mod}}$ for every $i \geq 0$,
there are isomorphisms of endofunctors on $\L\mbox{-}\underline{\mathrm{Mod}}$ \[\Omega_\L^{n+p} \cong \Omega_{\Lambda^\textrm{e}}^{n+p}(\L) \otimes_\L - \cong \Omega_{\Lambda^\textrm{e}}^{n}(\L) \otimes_\L - \cong \Omega_\L^{n}.\]
The last statement is a consequence of Lemma \ref{claim_1}.
\end{proof}
The lemma enables us to decide whether an eventually periodic algebra is Gorenstein or not.
\begin{proposition} \label{claim_39}
Let $\L$ be an eventually periodic algebra.
Then the following conditions are equivalent.
\begin{enumerate}
\item $\L$ is a Gorenstein algebra.
\item A finitely generated $\L$-module $M$ is periodic if and only if $M$ is Gorenstein projective without non-zero projective direct summands.
\end{enumerate}
\end{proposition}
\begin{proof}
We first prove that (1) implies (2).
It follows from Proposition \ref{Veliche_2006_2.4.2} and Lemmas \ref{claim_1} and \ref{claim_42} that every finitely generated $\L$-module $M$ satisfies $\operatorname{G\mbox{-}dim}_\L M < \infty$ and $\operatorname{per.dim}_\L M < \infty$.
Therefore, the desired equivalence is a consequence of Corollary \ref{claim_49}.
Conversely, suppose that the equivalence in (2) holds.
Since we know by Lemma \ref{claim_42} that $\operatorname{per.dim}_\L \L/J(\L) < \infty$, the equivalence implies that $\operatorname{G\mbox{-}dim}_\L \L/J(\L) < \infty$.
Thus Proposition \ref{Dotsenko-Gelinas-Tamaroff_2023_Proposition 2.4} finishes the proof.
\end{proof}
Recall that the {\it big finitistic dimension} of an algebra $\L$ is defined as
\[\operatorname{Fin.dim} \L := \sup\{\operatorname{proj.dim}_{\L}M \mid M \in \L\mbox{-}\mathrm{Mod} \ \mbox{ and }\ \operatorname{proj.dim}_{\L}M <\infty \}\]
and the {\it little finitistic dimension} of $\L$ is defined to be
\[\operatorname{fin.dim} \L := \sup\{\operatorname{proj.dim}_{\L}M \mid M \in \L\mod \ \mbox{ and }\ \operatorname{proj.dim}_{\L}M <\infty \}.\]
It is conjectured that the little finitistic dimension of an arbitrary algebra is finite.
This is known as the {\it finitistic dimension conjecture} and is still open.
See \cite{Smalo_2000,Yamagata_book_1996,Zimmermann-Huisgen_1992} for more information on this and related homological conjectures.
We now observe that the finitistic dimension conjecture holds for eventually periodic algebras and their opposite algebras.
\begin{proposition}\label{claim_18}
Let $\L$ be an $(n,p)$-eventually periodic algebra.
Then
\begin{align*}
\operatorname{Fin.dim} \L \leq n \quad \mbox{ and } \quad \operatorname{Fin.dim} \Lambda^{\rm op} \leq n.
\end{align*}
\end{proposition}
\begin{proof}
We only show that $\operatorname{Fin.dim} \L \leq n$; the other is similarly proved.
Let $M$ be a $\L$-module of finite projective dimension.
Lemma \ref{claim_42} implies that $\Omega_{\L}^{n}(M) \cong \Omega_{\L}^{n+ip}(M)$ in $\L\mbox{-}\underline{\mathrm{Mod}}$ for all $i \geq 0$. Since $\operatorname{proj.dim}_{\L}M < \infty$ forces $\Omega_{\L}^{m}(M)$ to be projective for $m \gg 0$, it follows that $\Omega_{\L}^{n}(M)$ is projective, and hence $\operatorname{proj.dim}_{\L}M \leq n$.
\end{proof}
The following consequence of Proposition \ref{claim_18} says that the {\it Gorenstein symmetric conjecture} \cite{Beligiannis_2005} holds for eventually periodic algebras.
\begin{proposition} \label{claim_19}
Let $\L$ be an eventually periodic algebra.
Then $\operatorname{inj.dim}_{\L}\L < \infty$ if and only if $\operatorname{inj.dim}_{\L^{\rm op}}\L < \infty$.
\end{proposition}
\begin{proof}
It is a consequence of Proposition \ref{claim_18} and \cite[Proposition 6.10]{AusRei_1991_AdvMath}.
\end{proof}
We say that an algebra $\L$ {\it satisfies ${(\rm AC)}$} if the following condition is satisfied:
\begin{enumerate}
\item[(AC)] For a finitely generated $\L$-module $M$, there exists an integer $b_{M} \geq 0$ such that if a finitely generated $\L$-module $N$ satisfies that $\operatorname{Ext}\nolimits_{\L}^{\gg 0}(M, N) = 0$, then $\operatorname{Ext}\nolimits_{\L}^{> b_M}(M, N) = 0$.
\end{enumerate}
See \cite{Celikbas-Takahashi_2013,Christensen-Holm_2010} for more information on this condition and related homological problems.
\begin{proposition} \label{claim_28}
An eventually periodic algebra and its opposite algebra satisfy {\rm (AC)}.
\end{proposition}
\begin{proof}
We only prove that an $(n, p)$-eventually periodic algebra $\L$ satisfies {\rm (AC)}; the proof for the opposite algebra $\Lambda^{\rm op}$ is similar.
Since $\operatorname{fin.dim} \L < \infty$ by Proposition \ref{claim_18}, it suffices to consider finitely generated $\L$-modules $M$ with $\operatorname{proj.dim}_{\L}M = \infty$.
It follows from Lemma \ref{claim_42} that $\operatorname{Ext}\nolimits_{\L}^{i}(M, N) \cong \operatorname{Ext}\nolimits_{\L}^{i+p}(M, N)$ for all $i > n$ and all $N \in \L\mod$.
Taking $b_M := n$ will complete the proof.
\end{proof}
Thanks to Christensen and Holm \cite[Theorem A]{Christensen-Holm_2010}, we know that the following condition holds for an algebra $\L$ satisfying (AC):
\begin{enumerate}
\item[(ARC)] Let $M$ be a finitely generated $\L$-module. If $\operatorname{Ext}\nolimits_{\L}^{i}(M, M) = 0 = \operatorname{Ext}\nolimits_{\L}^{i}(M, \L)$ for all $i >0$, then $M$ is projective.
\end{enumerate}
This condition is the key to proving the following main result of this section.
\begin{theorem} \label{claim_38}
Let $\L$ be an eventually periodic algebra.
Then $\L$ is a Gorenstein algebra if and only if the Gorenstein dimension of the regular $\L$-bimodule $\L$ is finite.
In this case, we have $\operatorname{G\mbox{-}dim}_{\Lambda^\textrm{e}} \L= \operatorname{inj.dim}_{\L}\L$.
\end{theorem}
\begin{proof} It is sufficient to show the\ \lq\lq if\ \!\rq\rq\ part.
Suppose that $\L$ is $(n, p)$-eventually periodic with $\operatorname{G\mbox{-}dim}_{\Lambda^\textrm{e}}\L = r < \infty$.
Then the isomorphism (\ref{eq_6}) and Corollary \ref{claim_49} imply that $\operatorname{Ext}\nolimits_\L^{i}(D(\L), \L) = 0$ for all $i >r$ and that $r\leq n$, respectively.
Moreover, we see from Lemma \ref{claim_42} that there exists an isomorphism
\begin{align*}
\operatorname{Ext}\nolimits_{\L}^{i}\!\left(\Omega_{\L}^{n}(D(\L)), N\right) \cong \operatorname{Ext}\nolimits_{\L}^{i+p}\!\left(\Omega_{\L}^{n}(D(\L)), N\right)
\end{align*}
for all $i > 0$ and all $N \in \L\mod$.
As a result, letting $m$ be an integer divisible by $p$ with $m > n$, we have the following isomorphisms
\begin{align*}
\operatorname{Ext}\nolimits_\L^{i}\!\left( \Omega_{\L}^{n}(D(\L)), \Omega_{\L}^{n}(D(\L)) \right)
&\ \cong\ \operatorname{Ext}\nolimits_\L^{i+p}\left( \Omega_{\L}^{n}(D(\L)), \Omega_{\L}^{n}(D(\L)) \right) \\
&\ \ \, \vdots \\
&\ \cong\ \operatorname{Ext}\nolimits_\L^{i+m}\!\left( \Omega_{\L}^{n}(D(\L)), \Omega_{\L}^{n}(D(\L)) \right) \\
&\ \cong\ \operatorname{Ext}\nolimits_\L^{i+m-n}\!\left( \Omega_{\L}^{n}(D(\L)), D(\L) \right) \\
&\ = 0
\end{align*}
for all $i>0$.
Hence the fact that $\L$ satisfies (ARC) implies that $\operatorname{inj.dim}_{\L^{\rm op}}\L = \operatorname{proj.dim}_{\L}D(\L) \leq n$, so that
$\L$ is Gorenstein by Proposition \ref{claim_19}.
The last statement follows from Proposition \ref{claim_44}.
\end{proof}
Now, we focus on Gorenstein projective modules over an eventually periodic algebra.
Recall from \cite{Saito_2021} that a triangulated category $\mathcal{T}$ with shift functor $\Sigma$ is {\it periodic} if $\Sigma^{m} \cong \mathrm{Id}_{\mathcal{T}}$ for some $m>0$.
The smallest such $m$ is called the {\it period} of $\mathcal{T}$.
We then have the following observation.
\begin{proposition} \label{claim_41}
Let $\L$ be an $(n,p)$-eventually periodic algebra.
Then the following statements hold.
\begin{enumerate}
\setlength{\itemsep}{1.5mm}
\item $\L\mbox{-}\underline{\mathrm{GProj}}$ and $\L\mbox{-}\underline{\mathrm{Gproj}}$ are periodic of period dividing $p$.
\item $\L\mbox{-}\mathrm{GProj} = p\mbox{-}\L\mbox{-}\mathrm{SGProj}$ and $\L\mbox{-}\mathrm{Gproj} = p\mbox{-}\L\mbox{-}\mathrm{SGproj}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\Sigma$ denote the shift functor on $\L\mbox{-}\underline{\mathrm{GProj}}$.
Since $\Omega_\L^{n+p} \cong \Omega_\L^{n}$ as endofunctors on $\L\mbox{-}\underline{\mathrm{Mod}}$ by Lemma \ref{claim_42}, the fact that $\Sigma^{-1} = \Omega_\L$ implies that $\Sigma^{-n-p} \cong\Sigma^{-n}$ and hence $\Sigma^{p} \cong \mathrm{Id}$.
Since $\Sigma$ restricts to the shift functor on $\L\mbox{-}\underline{\mathrm{Gproj}}$, we conclude that $\L\mbox{-}\underline{\mathrm{GProj}}$ and $\L\mbox{-}\underline{\mathrm{Gproj}}$ are both periodic.
Now, (2) immediately follows from (1).
\end{proof}
\begin{remark}
The statements of Proposition \ref{claim_41} also hold for $\Lambda^{\rm op}\mbox{-}\mathrm{GProj}$ and $\Lambda^{\rm op}\mbox{-}\mathrm{Gproj}$. We leave it to the reader to state and show the analogous result.
\end{remark}
We end this section by showing that eventually periodic algebras are both left and right weakly Gorenstein.
Although this is a consequence of Proposition \ref{claim_28} and \cite[Theorem C]{Christensen-Holm_2010}, we give another proof.
Recall that an algebra $\L$ is {\it left weakly Gorenstein} if $\L\mbox{-}\mathrm{Gproj} ={}^{\perp}\L$, where ${}^{\perp}\L$ is the full subcategory of $\L\mod$ given by
\[{}^{\perp}\L := \left\{ M \in \L\mod \mid \operatorname{Ext}\nolimits_{\L}^{i}(M, \L) = 0 \mbox{ for all } i > 0 \right \}. \]
Also, the algebra $\L$ is called {\it right weakly Gorenstein} if $\Lambda^{\rm op}$ is left weakly Gorenstein.
See \cite{Marczinzik_2019,Ringel-Zhang_2020} for more details.
Moreover, thanks to Chen \cite[page 16]{X-WChen_2017}, we have the following equality for any algebra $\L$:
\begin{align} \label{eq_5}
{}^{\perp}(\L\mbox{-}\mathrm{Proj}) = \left\{ M \in \L\mbox{-}\mathrm{Mod} \mid \operatorname{Ext}\nolimits_{\L}^{i}(M, \L) = 0 \mbox{ for all } i > 0 \right\}.
\end{align}
\begin{proposition} \label{claim_40}
Let $\L$ be an eventually periodic algebra.
Then the following statements hold.
\begin{enumerate}
\setlength{\itemsep}{1mm}
\item $\L\mbox{-}\mathrm{GProj} = {}^{\perp}(\L\mbox{-}\mathrm{Proj})$ and $\Lambda^{\rm op}\mbox{-}\mathrm{GProj} = {}^{\perp}(\Lambda^{\rm op}\mbox{-}\mathrm{Proj})$.
\item $\L$ is both left and right weakly Gorenstein.
\end{enumerate}
\end{proposition}
\begin{proof}
Assume that $\L$ is $(n,p)$-eventually periodic. We only prove that $\L\mbox{-}\mathrm{GProj} = {}^{\perp}(\L\mbox{-}\mathrm{Proj})$ and that $\L$ is left weakly Gorenstein; the proof for the others is similar.
For the former,
it suffices to show the inclusion $(\supseteq)$.
For any $M \in {}^{\perp}(\L\mbox{-}\mathrm{Proj})$, its $n$-th syzygy $\Omega_{\L}^{n}(M)$ is $p$-strongly Gorenstein projective since $\Omega_{\L}^{n+p}(M) \cong \Omega_{\L}^{n}(M)$ in $\L\mbox{-}\underline{\mathrm{Mod}}$ by Lemma \ref{claim_42}, and since
\begin{align*}
\operatorname{Ext}\nolimits_\L^{i}\!\left(\Omega_{\L}^{n}(M), \L\right) \cong \operatorname{Ext}\nolimits_\L^{i+n}(M, \L) =0
\end{align*}
for all $i>0$.
This implies that $\operatorname{Gpd}_\L M \leq n < \infty$.
In fact, $\operatorname{Gpd}_\L M = 0$ because $M$ lies in ${}^{\perp}(\L\mbox{-}\mathrm{Proj})$.
We have thus proved that $\L\mbox{-}\mathrm{GProj} = {}^{\perp}(\L\mbox{-}\mathrm{Proj})$ as claimed.
On the other hand, the latter follows from the following equality
\begin{align*}
\L\mbox{-}\mathrm{Gproj} = \L\mbox{-}\mathrm{GProj} \cap \L\mod = {}^{\perp}(\L\mbox{-}\mathrm{Proj}) \cap \L\mod = {}^{\perp}\L,
\end{align*}
where the last equality follows from the formula (\ref{eq_5}).
\end{proof}
\section*{Acknowledgments}
The author would like to thank Takahiro Honma and Ryotaro Koshio for helpful conversations.
\bibliographystyle{plain}
| {
"timestamp": "2023-01-18T02:16:34",
"yymm": "2301",
"arxiv_id": "2301.06242",
"language": "en",
"url": "https://arxiv.org/abs/2301.06242",
"abstract": "For an eventually periodic module, we have the degree and the period of its first periodic syzygy. This paper studies the former under the name \\lq\\lq periodic dimension\\rq\\rq. We give a bound for the periodic dimension of an eventually periodic module with finite Gorenstein projective dimension. We also provide a method of computing the Gorenstein projective dimension of an eventually periodic module under certain conditions. Besides, motivated by recent results of Dotsenko, Gélinas and Tamaroff and of the author, we determine the bimodule periodic dimension of an eventually periodic Gorenstein algebra. Another aim of this paper is to obtain some of the basic homological properties of eventually periodic algebras. We show that a lot of homological conjectures hold for this class of algebras. As an application, we characterize eventually periodic Gorenstein algebras in terms of bimodules Gorenstein projective dimensions.",
"subjects": "Representation Theory (math.RT); Rings and Algebras (math.RA)",
"title": "Periodic dimensions and some homological properties of eventually periodic algebras"
} |
https://arxiv.org/abs/1709.08318 | Hodge decomposition and the Shapley value of a cooperative game | We show that a cooperative game may be decomposed into a sum of component games, one for each player, using the combinatorial Hodge decomposition on a graph. This decomposition is shown to satisfy certain efficiency, null-player, symmetry, and linearity properties. Consequently, we obtain a new characterization of the classical Shapley value as the value of the grand coalition in each player's component game. We also relate this decomposition to a least-squares problem involving inessential games (in a similar spirit to previous work on least-squares and minimum-norm solution concepts) and to the graph Laplacian. Finally, we generalize this approach to games with weights and/or constraints on coalition formation. | \section{Introduction}
In cooperative game theory, one of the central questions is that of
fair division: if players form a coalition to achieve a common goal,
how should they split the profits (or costs) of that achievement among
themselves? (We restrict our attention to transferable utility games,
also called TU games, whose total value may be freely divided and
distributed among the players.) \citet{Shapley1953} introduced one of
the classical solution concepts to this problem, now known as the
\emph{Shapley value}, which he proved to be the unique allocation that
satisfies certain axioms.
In this paper, we show that a cooperative game may be decomposed into
a sum of component games, one for each player, where these components
are uniquely defined in terms of the combinatorial Hodge decomposition
on a hypercube graph associated with the game. (That is, the value is
apportioned among the players for each possible coalition, not just
the grand coalition consisting of all players.) We prove that the
Shapley value is precisely the value of the grand coalition in each
player's component game.
This characterization of the game components and the Shapley value
also implies two equivalent characterizations: one in terms of the
least-squares solution to a linear problem, whose solution is exact if
and only if the game is inessential; the other in terms of the graph
Laplacian. The first of these two characterizations is related to the
least-square and minimum-norm solution concepts of \citet{RuVaZa1998}
and \citet{KuSa2007}.
Furthermore, since the combinatorial Hodge decomposition holds for
arbitrary weighted graphs, this decomposition of cooperative games
also generalizes to cases where edges of the hypercube graph are
weighted or removed altogether. This may be seen as modeling variable
willingness or unwillingness of players to join certain coalitions, as
in some models of restricted cooperation. In the latter case, we
compare the resulting solution concepts with other ``Shapley values''
for games with cooperation restrictions, such as the precedence
constraints of \citet{FaKe1992} and the even more general digraph
games of \citet{KhSeTa2016}.
We note that the combinatorial Hodge decomposition has recently been
used to decompose noncooperative games \citep{CaMeOzPa2011} and has
also been applied to other problems in economics, such as ranking of
social preferences \citep{JiLiYaYe2011,HiKaWa2011}. Here, we show
that it can also lend insight to cooperative game theory.
\section{Preliminaries}
\subsection{Cooperative games and the Shapley value}
\label{sec:introGames}
A \emph{cooperative game} consists of a finite set $N$ of players and
a function $ v \colon 2 ^N \rightarrow \mathbb{R} $, which assigns a
value $ v (S) $ to each coalition $ S \subset N $, such that
$ v ( \emptyset ) = 0 $. Assuming that all players cooperate (forming
the ``grand coalition'' $N$), the question of interest is how to split
the value $ v (N) $ among the players.
The Shapley value $ \phi _i (v) $ allocated to player $ i \in N $ is
based entirely on the marginal value
$ v \bigl( S \cup \{ i \} \bigr) - v (S) $ the player contributes when
joining each coalition $ S \subset N \setminus \{ i \} $. It is
uniquely defined according to the following theorem.
\begin{theorem}[\citet{Shapley1953}]
\label{thm:shapley}
There exists a unique allocation
$ v \mapsto \bigl( \phi _i (v) \bigr) _{ i \in N } $ satisfying the
following conditions:
\begin{enumerate}[label=(\alph*)]
\item efficiency: $ \sum _{ i \in N } \phi _i (v) = v (N) $.
\item null-player property: If
$ v \bigl( S \cup \{ i \} \bigr) - v (S) = 0 $ for all
$ S \subset N \setminus \{ i \} $, then $ \phi _i (v) = 0 $.
\item symmetry: If
$ v \bigl( S \cup \{ i \} \bigr) = v \bigl( S \cup \{ j \} \bigr)
$ for all $ S \subset N \setminus \{ i, j \} $, then
$ \phi _i (v) = \phi _j (v) $.
\item linearity: If $ v, v ^\prime $ are two games with the same set
of players $N$, then
$ \phi _i ( \alpha v + \alpha ^\prime v ^\prime ) = \alpha \phi _i
(v) + \alpha ^\prime \phi _i ( v ^\prime ) $ for all
$ \alpha, \alpha ^\prime \in \mathbb{R} $.
\end{enumerate}
Moreover, this allocation is given by the following explicit formula:
\begin{equation}
\label{eqn:shapley}
\phi _i (v) = \sum _{ S \subset N \setminus \{ i \} } \frac{ \lvert S \rvert ! \bigl( \lvert N \rvert - 1 - \lvert S \rvert \bigr) ! }{ \lvert N \rvert ! } \Bigl( v \bigl( S \cup \{ i \} \bigr) - v (S) \Bigr) .
\end{equation}
\end{theorem}
The conditions (a)--(d) listed above are often called the
\emph{Shapley axioms}. Simply stated, they say that (a) the value
obtained by the grand coalition is fully distributed among the
players, (b) a player who contributes no marginal value to any
coalition receives nothing, (c) equivalent players receive equal
amounts, and (d) the allocation is linear in the game values.
The formula \eqref{eqn:shapley} has the following useful
interpretation. Suppose the players form the grand coalition by
joining one at a time in the order defined by a permutation $\sigma$
of $N$. That is, player $i$ joins immediately after the coalition
$ S _{ \sigma, i } = \bigl\{ j \in N : \sigma (j) < \sigma (i) \bigr\}
$ has formed, contributing marginal value
$ v \bigl( S _{ \sigma, i } \cup \{ i \} \bigr) - v (S _{ \sigma, i }
) $. Then $ \phi _i (v) $ is the average marginal value contributed by
player $i$ over all $ \lvert N \rvert ! $ permutations $\sigma$, i.e.,
\begin{equation}
\label{eqn:shapleyPermutation}
\phi _i (v) = \frac{ 1 }{ \lvert N \rvert ! } \sum _\sigma \Bigl( v \bigl( S _{ \sigma, i } \cup \{ i \} \bigr) - v (S _{ \sigma, i } ) \Bigr) .
\end{equation}
The equivalence of \eqref{eqn:shapley} and
\eqref{eqn:shapleyPermutation} is due to the fact that
$ \lvert S \rvert ! \bigl( \lvert N \rvert - 1 - \lvert S \rvert
\bigr) ! $ is precisely the number of permutations $\sigma$ for which
$ S = S _{ \sigma, i } $, since there are $ \lvert S \rvert ! $ ways
to permute the preceding players and
$ \bigl( \lvert N \rvert - 1 - \lvert S \rvert \bigr) ! $ ways to
permute the succeeding players.
For purposes of computation, of course, \eqref{eqn:shapley} is
preferable to \eqref{eqn:shapleyPermutation}, since it contains
$ 2 ^{ \lvert N \rvert - 1 } $ terms rather than $ \lvert N \rvert ! $
terms. Computing the Shapley value is \#P-complete (\citet{DePa1994}),
although some recent work has explored polynomial algorithms for
obtaining approximations to the Shapley value
(\citet{CaGoTe2009,CaGoMoTe2017}).
\begin{example}
\label{ex:introGlove}
The ``glove game'' is a classic illustrative example of a
cooperative game. Let $ N = \{ 1, 2, 3 \} $, and suppose that player
$1$ has a left-hand glove, while players $2$ and $3$ each have a
right-hand glove. The players wish to put together a pair of gloves,
which can be sold for value $1$, while unpaired gloves have no
value. That is, $ v (S) = 1 $ if $S \subset N $ contains both a left
and a right glove (i.e., player $1$ and at least one of players $2$
or $3$) and $ v (S) = 0 $ otherwise. The Shapley values for this
game are
\begin{equation*}
\phi _1 (v) = \frac{ 2 }{ 3 } , \qquad \phi _2 (v) = \phi _3 (v) = \frac{ 1 }{ 6 } .
\end{equation*}
This is perhaps easiest to interpret from the
``average-over-permutations'' perspective: player $1$ contributes
marginal value $0$ when joining the coalition first (2 of 6
permutations) and marginal value $1$ otherwise (4 of 6
permutations), so $ \phi _1 (v) = \frac{ 2 }{ 3 } $. Efficiency and
symmetry immediately give
$ \phi _2 (v) = \phi _3 (v) = \frac{ 1 }{ 6 } $.
\end{example}
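For concreteness, the two formulas above can be checked numerically. The following stdlib-Python sketch (our own illustration, not part of the paper's formal development) evaluates both the subset formula and the permutation-average formula with exact rational arithmetic and confirms that they agree on the glove game above, with the players relabelled $1 \mapsto 0$, $2 \mapsto 1$, $3 \mapsto 2$:

```python
from fractions import Fraction
from itertools import combinations, permutations
from math import factorial

def shapley(n_players, v):
    """Shapley value via the subset formula (sum over coalitions S not containing i)."""
    N = frozenset(range(n_players))
    phi = {}
    for i in N:
        others = sorted(N - {i})
        total = Fraction(0)
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = Fraction(
                    factorial(len(S)) * factorial(n_players - 1 - len(S)),
                    factorial(n_players))
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

def shapley_perm(n_players, v):
    """Shapley value by averaging marginal contributions over all orderings."""
    phi = {i: Fraction(0) for i in range(n_players)}
    for sigma in permutations(range(n_players)):
        S = frozenset()
        for i in sigma:
            phi[i] += Fraction(v(S | {i}) - v(S), factorial(n_players))
            S = S | {i}
    return phi

def glove(S):  # player 0: left glove; players 1 and 2: right gloves
    return 1 if 0 in S and (1 in S or 2 in S) else 0

phi = shapley(3, glove)
assert phi == shapley_perm(3, glove)
```

Both routines return the values $\frac{2}{3}$, $\frac{1}{6}$, $\frac{1}{6}$ computed above.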
\subsection{Combinatorial Hodge decomposition and the graph Laplacian}
\label{sec:introHodge}
This section briefly reviews a kind of ``discrete calculus'' on
graphs, which can be used to orthogonally decompose spaces of
functions on vertices and edges. This is a special case of the
\emph{combinatorial Hodge decomposition} on simplicial complexes
(\citet{Eckmann1945}, \citet{Dodziuk1974}), which is itself a discrete
version of the Hodge decomposition on manifolds
(\citet{Hodge1934,Hodge1935,Hodge1936,Hodge1941,Kodaira1949}). While
Hodge theory is a deep subject, connecting algebraic topology with
geometry and elliptic PDE theory, the case of graphs requires nothing
more than finite-dimensional linear algebra. Therefore, we take an
elementary approach here, and we refer readers interested in the more
general theory to the preceding references.
Let $ G = ( V, E ) $ be an oriented graph, where $V$ is the set of
vertices and $E \subset V \times V $ is the set of edges. By
``oriented,'' we mean that at most one of $ ( a, b ) $ and
$ ( b, a ) $ is in $E$ for $ a, b \in V $. If
$ f \colon E \rightarrow \mathbb{R} $ and $ (a,b) \in E $, we define
$ f(b,a) \coloneqq - f (a,b) $ for the reverse-oriented edge. Denote
by $ \ell ^2 (V) $ the space of functions
$ V \rightarrow \mathbb{R} $, equipped with the $ \ell ^2 $ inner
product
\begin{equation*}
\langle u , v \rangle \coloneqq \sum _{ a \in V } u (a) v (a) .
\end{equation*}
Similarly, denote by $ \ell ^2 (E) $ the space of functions
$E \rightarrow \mathbb{R} $ with inner product
\begin{equation*}
\langle f, g \rangle \coloneqq \sum _{ (a,b) \in E } f (a,b) g (a,b) .
\end{equation*}
With respect to the standard bases defined by $V$ and $E$, we can
identify these spaces with $ \mathbb{R} ^{ \lvert V \rvert } $ and
$ \mathbb{R} ^{ \lvert E \rvert } $, each equipped with the Euclidean
dot product.
We next define a linear operator
$ \mathrm{d} \colon \ell ^2 (V) \rightarrow \ell ^2 (E) $ and its
adjoint
$ \mathrm{d} ^\ast \colon \ell ^2 (E) \rightarrow \ell ^2 (V) $, which
are discrete analogs of the gradient and (negative) divergence of
vector calculus (or, more generally, the differential and
codifferential of exterior calculus). Define $ \mathrm{d} $ by
\begin{equation*}
\mathrm{d} u ( a, b ) \coloneqq u (b) - u (a) .
\end{equation*}
With respect to the standard bases, the matrix of $ \mathrm{d} $ is
just the transpose of the oriented incidence matrix of $G$. The
adjoint is defined by
$ \langle u , \mathrm{d} ^\ast f \rangle \coloneqq \langle \mathrm{d}
u , f \rangle $; explicitly, this is given by
\begin{equation*}
(\mathrm{d} ^\ast f) (a) = \sum _{ b \sim a } f(b,a),
\end{equation*}
where $ b \sim a $ denotes that $ (a,b) \in E $ or $ ( b, a ) \in E
$. Again, with respect to the standard bases, the matrix of
$ \mathrm{d} ^\ast $ is just the transpose of that for $ \mathrm{d} $,
i.e., the oriented incidence matrix.
By the fundamental theorem of linear algebra (\citet{Strang1993}), we
may orthogonally decompose $ \ell ^2 (V) $ and $ \ell ^2 (E) $ as
\begin{equation}
\label{eqn:graphHodgeDecomp}
\ell ^2 (V) = \mathcal{R} ( \mathrm{d} ^\ast ) \oplus \mathcal{N} ( \mathrm{d} ) , \qquad \ell ^2 (E) = \mathcal{R} ( \mathrm{d} ) \oplus \mathcal{N} ( \mathrm{d} ^\ast ) ,
\end{equation}
where $ \mathcal{R} ( \cdot ) $ and $ \mathcal{N} ( \cdot ) $ denote
range and kernel (nullspace). We call \eqref{eqn:graphHodgeDecomp} the
\emph{combinatorial Hodge decomposition} of $ \ell ^2 (V) $ and
$ \ell ^2 (E) $.
Finally, the \emph{graph Laplacian} is defined by
$ L \coloneqq \mathrm{d} ^\ast \mathrm{d} \colon \ell ^2 (V)
\rightarrow \ell ^2 (V) $. This is identical to the usual graph
Laplacian encountered in, e.g., spectral graph theory
(\citet{Chung1997}), usually expressed as $ L = {D} - A $, where
$ {D} $ is the \emph{degree matrix} and $A$ is the (unsigned)
\emph{adjacency matrix} of the graph $G$. These expressions for $L$
are seen to be identical by observing that
\begin{equation*}
(\mathrm{d} ^\ast \mathrm{d} u ) (a)
= \sum _{ b \sim a } \mathrm{d} u ( b,a )
= \sum _{ b \sim a } \bigl( u(a) - u(b) \bigr)
= \operatorname{deg}(a) u (a) - \sum _{ b \sim a } u (b).
\end{equation*}
The graph Laplacian is intimately related to the Hodge decomposition
in several ways. Of particular interest to us, suppose we wish to
compute the decomposition of $ f \in \ell ^2 (E) $ as
$ \mathrm{d} u + p = f $, where $ u \in \ell ^2 (V) $ and
$ p \in \mathcal{N} ( \mathrm{d} ^\ast ) $. Taking
$ \mathrm{d} ^\ast $ of both sides of this equation gives
$ L u = \mathrm{d} ^\ast f $, so a solution yields the Hodge
decomposition of $f$. This may also be seen as the system of
\emph{normal equations} for the least-squares approximation of $f$ by
$ \mathrm{d} u \in \mathcal{R} (\mathrm{d})$.
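The discrete operators above can be made concrete with a minimal stdlib-Python sketch (the three-vertex path graph and the function values are our own illustrative choices, not from the paper); it implements $\mathrm{d}$, $\mathrm{d}^\ast$, and $L$, and checks both the adjointness identity and $L = D - A$:

```python
# Oriented path graph on vertices {0, 1, 2} with edges (0,1) and (1,2).
V = [0, 1, 2]
E = [(0, 1), (1, 2)]

def d(u):
    """Discrete gradient: (du)(a, b) = u(b) - u(a)."""
    return [u[b] - u[a] for (a, b) in E]

def d_star(f):
    """Adjoint (negative divergence), defined by <du, f> = <u, d*f>."""
    out = [0] * len(V)
    for (a, b), val in zip(E, f):
        out[b] += val   # the edge points into b
        out[a] -= val   # and out of a
    return out

def laplacian(u):
    return d_star(d(u))

# Adjointness check: <du, f> = <u, d*f>.
u, f = [2, 0, 1], [1, 4]
assert sum(x * y for x, y in zip(d(u), f)) == sum(x * y for x, y in zip(u, d_star(f)))

# L = D - A check: degrees are [1, 2, 1] on this path graph.
w = [3, 5, 7]
deg = [1, 2, 1]
adj = {0: [1], 1: [0, 2], 2: [1]}
assert laplacian(w) == [deg[a] * w[a] - sum(w[b] for b in adj[a]) for a in V]
```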
\section{Decomposition of cooperative games}
\label{sec:decomp}
\subsection{Cooperative games and the hypercube graph}
Given the set of players $N$, define the oriented graph
$ G = ( V, E ) $ by
\begin{equation*}
V = 2 ^N , \qquad E = \bigl\{ \bigl( S, S \cup \{ i \} \bigr) \in V \times V : S \subset N \setminus \{ i \} ,\ i \in N \bigr\} .
\end{equation*}
This is precisely the $ \lvert N \rvert $-dimensional \emph{hypercube
graph}, where each vertex corresponds to a coalition $ S \subset N$,
and where each edge corresponds to the addition of a single player
$i \notin S $ to $S$, oriented in the direction of the inclusion
$ S \hookrightarrow S \cup \{ i \} $.
With respect to this graph, a cooperative game is precisely a function
$ v \in \ell ^2 (V) $ such that $ v ( \emptyset ) = 0 $. Furthermore,
$ \mathrm{d} v \in \ell ^2 (E) $ gives the marginal value on each
edge, i.e., $ \mathrm{d} v \bigl( S, S \cup \{ i \} \bigr) $ is the
marginal value contributed by player $i$ joining coalition
$S \subset N \setminus \{ i \} $. In order to talk about the marginal
contributions of an individual player $ i \in N $, ignoring those of
the other players $ j \neq i $, we introduce the following collection
of ``partial differentiation'' operators.
\begin{definition}
\label{def:di}
For each $ i \in N $, let
$ \mathrm{d} _i \colon \ell ^2 (V) \rightarrow \ell ^2 (E) $ be the
operator
\begin{equation*}
\mathrm{d} _i u \bigl( S , S \cup \{ j \} \bigr) =
\begin{cases}
\mathrm{d} u \bigl( S , S \cup \{ i \} \bigr) & \text{if } i = j, \\
0 & \text{if } i \neq j .
\end{cases}
\end{equation*}
\end{definition}
Therefore, $ \mathrm{d} _i v \in \ell ^2 (E) $ encodes the marginal
value contributed by player $i$ to the game $v$. For any permutation
$\sigma$ of $N$, which defines a path from $ \emptyset $ to $N$, the
marginal value contributed by player $i$ along this path is
\begin{equation*}
\sum _{ j \in N } \mathrm{d} _i v \bigl( S _{ \sigma , j } , S _{ \sigma, j } \cup \{ j \} \bigr) = \mathrm{d} v \bigl( S _{ \sigma , i } , S _{ \sigma, i } \cup \{ i \} \bigr) = v \bigl( S _{ \sigma, i } \cup \{ i \} \bigr) - v (S _{ \sigma, i } ) ,
\end{equation*}
which can be interpreted as a discrete ``line integral'' of
$ \mathrm{d} _i v $ along the path.
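The hypercube construction and the operators $\mathrm{d}_i$ can be illustrated with a short stdlib-Python sketch (our own example, using the three-player glove game with players relabelled $0,1,2$); it verifies the discrete "line integral" identity above for every permutation:

```python
from itertools import combinations, permutations

N = (0, 1, 2)
subsets = [frozenset(c) for r in range(len(N) + 1) for c in combinations(N, r)]
# Hypercube edges, oriented along the inclusion S -> S ∪ {i}.
edges = [(S, S | {i}) for S in subsets for i in N if i not in S]

def glove(S):  # player 0: left glove; players 1 and 2: right gloves
    return 1 if 0 in S and (1 in S or 2 in S) else 0

def d_i(v, i, S, T):
    """(d_i v) on the oriented edge (S, T): the marginal value if T = S ∪ {i}, else 0."""
    return v(T) - v(S) if T - S == {i} else 0

# The "line integral" of d_i v along the path of a permutation sigma
# equals player i's marginal contribution in that ordering.
for sigma in permutations(N):
    path = [(frozenset(sigma[:k]), frozenset(sigma[:k + 1])) for k in range(len(N))]
    for i in N:
        S_sigma_i = frozenset(sigma[:sigma.index(i)])
        assert sum(d_i(glove, i, S, T) for (S, T) in path) \
            == glove(S_sigma_i | {i}) - glove(S_sigma_i)
```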
\subsection{Decomposition of inessential games}
From \autoref{def:di}, we immediately see that
$ \mathrm{d} = \sum _{ i \in N } \mathrm{d} _i $. However, in general,
$ \mathcal{R} ( \mathrm{d} _i ) \not\subset \mathcal{R} ( \mathrm{d} )
$. To see this, observe that for any permutation $\sigma$,
\begin{equation*}
\sum _{ j \in N } \mathrm{d} u \bigl( S _{ \sigma , j } , S _{ \sigma, j } \cup \{ j \} \bigr) = \sum _{ j \in N } \Bigl( u \bigl( S _{ \sigma, j } \cup \{ j \} \bigr) - u \bigl( S _{ \sigma , j } \bigr) \Bigr)= u (N) - u ( \emptyset ) ,
\end{equation*}
since the sum telescopes. This value is the same for every permutation
$\sigma$, which is a discrete analog of the fact that the line
integral of a conservative vector field depends only on the endpoints,
not the particular path chosen. Contrast this with the previous
expression: we have already seen that
$ v \bigl( S _{ \sigma, i } \cup \{ i \} \bigr) - v (S _{ \sigma, i }
) $ may be different, depending on the permutation $\sigma$, as in the
glove game of \autoref{ex:introGlove}. The question of when
$ \mathrm{d} _i v \in \mathcal{R} ( \mathrm{d} ) $ is related to the
notion of \emph{inessential games}, as we now show.
\begin{definition}
The game $v$ is \emph{inessential} if
$ v (S) = \sum _{ i \in S } v \bigl( \{ i \} \bigr) $ for all
$ S \subset N $. That is, each coalition obtains the same value
working together as its individual members would obtain working
separately.
\end{definition}
\begin{proposition}
\label{prop:inessential}
The game $v$ is inessential if and only if
$ \mathrm{d} _i v \in \mathcal{R} ( \mathrm{d} ) $ for all
$ i \in N $.
\end{proposition}
\begin{proof}
If $ \mathrm{d} _i v \in \mathcal{R} ( \mathrm{d} ) $, then from the
calculation above, it follows that the marginal value
$ v \bigl( S \cup \{ i \} \bigr) - v ( S ) $ is the same for all
coalitions $ S \subset N \setminus \{ i \} $. Taking
$ S = \emptyset $, we see that this value is precisely
$ v \bigl( \{ i \} \bigr) $. If this holds for all players
$ i \in N $, then we conclude that $v$ is inessential.
Conversely, suppose that $v$ is inessential, and define the game
\begin{equation*}
v _i (S) =
\begin{cases}
v \bigl( \{ i \} \bigr) & \text{if $ i \in S $,}\\
0 & \text{if $ i \notin S $.}
\end{cases}
\end{equation*}
It follows immediately that
$ \mathrm{d} _i v = \mathrm{d} v _i \in \mathcal{R} ( \mathrm{d} )
$, which completes the proof.
\end{proof}
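As a quick check of the proposition, the following stdlib-Python sketch (the additive game with our own hand-picked singleton worths is purely illustrative) verifies that for an inessential game the marginal values of $\mathrm{d}_i v$ agree edge-by-edge with those of $\mathrm{d} v_i$, and that $v = \sum_i v_i$:

```python
from itertools import combinations

N = (0, 1, 2)
worth = {0: 5, 1: -2, 2: 3}  # illustrative singleton worths

def v(S):
    """An inessential (additive) game: v(S) is the sum of singleton worths."""
    return sum(worth[i] for i in S)

def v_i(i, S):
    """The component game from the proof: worth of i if i is in S, else 0."""
    return worth[i] if i in S else 0

subsets = [frozenset(c) for r in range(len(N) + 1) for c in combinations(N, r)]
for S in subsets:
    for i in N:
        if i not in S:
            # d_i v = d v_i on every edge in direction i
            assert v(S | {i}) - v(S) == v_i(i, S | {i}) - v_i(i, S)
    assert v(S) == sum(v_i(i, S) for i in N)
```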
Therefore, inessential games have the decomposition
$ v = \sum _{ i \in N } v _i $, where $ v _i $ is the game described
in the proof above. In the next section, we show how this
decomposition generalizes to arbitrary games, and how the Shapley
value naturally arises from the generalized decomposition.
\subsection{Decomposition of arbitrary games and the Shapley value}
For an arbitrary game $v$, we cannot hope to find games $ v _i $ such
that $ \mathrm{d} _i v = \mathrm{d} v _i $ (unless $v$ is inessential,
as shown in \autoref{prop:inessential}). However, the Hodge
decomposition \eqref{eqn:graphHodgeDecomp} ensures that we \emph{can}
write $ \mathrm{d} _i v \in \ell ^2 (E) $ as the sum of some
$ \mathrm{d} v _i \in \mathcal{R} ( \mathrm{d} ) $ and an element of
$ \mathcal{N} ( \mathrm{d} ^\ast ) $. Moreover, since $G$ is
connected, $ \mathcal{N} ( \mathrm{d} ) \cong \mathbb{R} $, so
$ v _i $ is uniquely determined by the condition
$ v _i ( \emptyset ) = 0 $, i.e., that $v _i $ is itself a game.
The main result of this section, \autoref{thm:gameDecomp}, proves that
these component games $ v _i $ satisfy certain efficiency,
null-player, symmetry, and linearity properties. This means, in
particular, that $ v _i (N) $ satisfies the Shapley axioms, so it must
coincide with the Shapley value $ \phi _i (v) $. In addition to this
axiomatic argument, we will also show directly, in
\autoref{rmk:shapleyFormula}, that $ v _i (N) $ agrees with the
Shapley formula \eqref{eqn:shapley}.
Let $ P \colon \ell ^2 (E) \rightarrow \mathcal{R} ( \mathrm{d} ) $
denote orthogonal projection onto $ \mathcal{R} ( \mathrm{d} ) $, so
that $ P \mathrm{d} _i v $ is the $ \mathcal{R} ( \mathrm{d} ) $
component in the Hodge decomposition of $ \mathrm{d} _i v $. Note
that this projection is introduced as a theoretical tool; in practice,
we will never need to explicitly construct a matrix for $P$. Instead,
as in \autoref{sec:introHodge}, we will compute the Hodge
decomposition by solving a problem involving the graph Laplacian,
which will be discussed in \autoref{sec:laplacian}.
\begin{theorem}
\label{thm:gameDecomp}
For each $ i \in N $, let $ v _i \in \ell ^2 (V) $ with
$ v _i ( \emptyset ) = 0 $ be the unique game such that
$ \mathrm{d} v _i = P \mathrm{d} _i v $. Then the games $ v _i $
satisfy the following:
\begin{enumerate}[label=(\alph*)]
\item $ \displaystyle\sum _{ i \in N } v _i = v $.
\item If $ v \bigl( S \cup \{ i \} \bigr) - v (S) = 0 $ for all
$ S \subset N \setminus \{ i \} $, then $ v _i = 0 $.
\item If $ \sigma $ is a permutation of $N$ and $ \sigma ^\ast v $
is the game
$ ( \sigma ^\ast v ) (S) = v \bigl( \sigma (S) \bigr) $, then
$ ( \sigma ^\ast v ) _i = \sigma ^\ast ( v _{ \sigma (i) } ) $. In
particular, if $ \sigma $ is the permutation swapping $i$ and $j$,
and if $ \sigma ^\ast v = v $, then
$ v _i = \sigma ^\ast (v _j) $.
\item For any two games $ v , v ^\prime $ and
$ \alpha, \alpha ^\prime \in \mathbb{R} $,
$ ( \alpha v + \alpha ^\prime v ^\prime ) _i = \alpha v _i +
\alpha ^\prime v ^\prime _i $.
\end{enumerate}
Consequently, $ v _i (N) = \phi _i (v) $ is the Shapley value for
each player $i \in N $.
\end{theorem}
\begin{proof}
First, since $ \mathrm{d} = \sum _{ i \in N } \mathrm{d} _i $, we
have
\begin{equation*}
\mathrm{d} \sum _{ i \in N } v _i = \sum _{ i \in N } \mathrm{d} v _i = \sum _{ i \in N } P \mathrm{d} _i v = P \sum _{ i \in N } \mathrm{d} _i v = P \mathrm{d} v = \mathrm{d} v .
\end{equation*}
Since $G$ is connected, it follows that $ \sum _{ i \in N } v _i $
and $v$ differ by a constant. But this constant is
$ v ( \emptyset ) - \sum _{ i \in N } v _i (\emptyset) = 0 $, which
proves (a).
Next, if $ v \bigl( S \cup \{ i \} \bigr) - v (S) = 0 $ for all
$ S \subset N \setminus \{ i \} $, then $ \mathrm{d} _i v = 0 $. It
follows that $ \mathrm{d} v _i = P \mathrm{d} _i v = 0 $. Hence,
$ v _i $ is constant, but since $ v _i ( \emptyset ) = 0 $, we
conclude that $ v _i = 0 $, which proves (b).
Next, if $\sigma$ is a permutation of $N$, then direct calculation
shows that $ \mathrm{d} \sigma ^\ast = \sigma ^\ast \mathrm{d} $ and
$ \mathrm{d} _i \sigma ^\ast = \sigma ^\ast \mathrm{d} _{ \sigma (i)
} $. Furthermore, $\sigma$ preserves counting measure and hence the
$ \ell ^2 $ inner product, so $ P \sigma ^\ast = \sigma ^\ast P
$. Thus,
\begin{equation*}
\mathrm{d} ( \sigma ^\ast v ) _i = P \mathrm{d} _i (\sigma ^\ast v ) = P \sigma ^\ast ( \mathrm{d} _{ \sigma (i) } v ) = \sigma ^\ast ( P \mathrm{d} _{ \sigma (i) } v ) = \sigma ^\ast ( \mathrm{d} v _{ \sigma (i) } ) = \mathrm{d} \sigma ^\ast ( v _{ \sigma (i) } ) ,
\end{equation*}
so $ ( \sigma ^\ast v ) _i $ and
$ \sigma ^\ast ( v _{ \sigma (i) } ) $ differ by a constant. But
this constant is
$ ( \sigma ^\ast v ) _i ( \emptyset ) - \sigma ^\ast ( v _{ \sigma
(i) } ) ( \emptyset ) = v _i ( \emptyset ) - v _{ \sigma (i) } (
\emptyset ) = 0 $, which proves (c).
Next, since $ \mathrm{d} $, $ \mathrm{d} _i $, and $P$ are all linear maps,
\begin{multline*}
\mathrm{d} ( \alpha v + \alpha ^\prime v ^\prime ) _i = P \mathrm{d} _i ( \alpha v + \alpha ^\prime v ^\prime ) = \alpha P \mathrm{d} _i v + \alpha ^\prime P \mathrm{d} _i v ^\prime \\
= \alpha \mathrm{d} v _i + \alpha ^\prime \mathrm{d} v _i ^\prime
= \mathrm{d} ( \alpha v _i + \alpha ^\prime v _i ^\prime ).
\end{multline*}
Hence, the games $ ( \alpha v + \alpha ^\prime v ^\prime ) _i $ and
$ ( \alpha v _i + \alpha ^\prime v _i ^\prime ) $ differ by a
constant---but just as above, this constant must be zero, which
proves (d).
Finally, having shown (a)--(d), it follows that the allocation
$ v \mapsto \bigl( v _i (N) \bigr) _{ i \in N } $ satisfies the
Shapley axioms of \autoref{thm:shapley}. Indeed, condition (a)
implies efficiency, (b) implies the null-player property, (c)
implies symmetry, and (d) implies linearity. Hence, by the
uniqueness property of the Shapley value, we must have that
$ v _i (N) = \phi _i (v) $ for all $ i \in N $, which completes the
proof.
\end{proof}
\begin{remark}
An immediate corollary of \autoref{prop:inessential} and
\autoref{thm:gameDecomp} is the standard result that
$ \phi _i (v) = v \bigl( \{ i \} \bigr) $ for all $ i \in N $
whenever $v$ is inessential.
\end{remark}
\begin{remark}
\label{rmk:leastSquares}
Since $P$ is orthogonal projection onto
$ \mathcal{R} ( \mathrm{d} ) $, we may also view the game $ v _i $
as the \emph{least-squares solution} to
$ \mathrm{d}v _i = \mathrm{d} _i v $, which has an exact
solution only when $v$ is inessential. That is, we have
\begin{equation*}
v _i = \operatorname{argmin}\displaylimits_{\substack{u \in \ell ^2 (V) \\ u ( \emptyset ) = 0 }} \lVert \mathrm{d} u - \mathrm{d} _i v \rVert _{ \ell ^2 (E) } .
\end{equation*}
This is similar in spirit to the work of \citet{KuSa2007} on
minimum-norm solution concepts, including the least-square values of
\citet{RuVaZa1998}. Specifically, \citet{KuSa2007} consider the
projection of $v$ itself onto the subspace of inessential games in
$ \ell ^2 (V) $. By contrast, the projection in our approach is
performed on $ \ell ^2 (E) $.
\end{remark}
\begin{remark}
\label{rmk:shapleyFormula}
The decomposition of \autoref{thm:gameDecomp} also gives a
straightforward way to derive the Shapley formula
\eqref{eqn:shapley}. By linearity, this formula is equivalent to the
statement that, if
$ \chi _{ ( S, S \cup \{ i \} ) } \in \ell ^2 (E) $ is the indicator
function equal to $1$ on $ \bigl( S, S \cup \{ i \} \bigr) $ and $0$
on all other edges, and if $ u \in \ell ^2 (V) $ is the solution to
$ \mathrm{d} u = P \chi _{ ( S, S \cup \{ i \}) } $ with
$ u (\emptyset) = 0 $, then
$ u (N) = \frac{ \lvert S \rvert ! ( \lvert N \rvert - 1 - \lvert S
\rvert ) ! }{ \lvert N \rvert ! } $.
To prove this, we begin by observing that $ u (N) $ depends only on
$ \lvert S \rvert $, not on the particular choice of $S$ and
$i$. Indeed, let $ \lvert T \rvert = \lvert S \rvert $ with
$ j \notin T $, and let $\sigma$ be a permutation of $N$ such that
$ \sigma (S) = T $ and $ \sigma (i) = j $. Then
\begin{equation*}
\mathrm{d} ( \sigma ^\ast u ) = \sigma ^\ast ( \mathrm{d} u ) = \sigma ^\ast ( P \chi _{ ( S, S \cup \{ i \} ) } ) = P \sigma ^\ast \chi _{ ( S, S \cup \{ i \} ) } = P \chi _{ ( T , T \cup \{ j \} ) },
\end{equation*}
and $ ( \sigma ^\ast u ) (N) = u \bigl( \sigma (N) \bigr) = u (N)
$. Hence, we get the same value for $ u (N) $ whether we choose the
edge $ \bigl( S, S \cup \{ i \} \bigr) $ or
$ \bigl( T , T \cup \{ j \} \bigr) $.
Next, consider the game
\begin{equation*}
v (T) \coloneqq
\begin{cases}
1 & \text{if } \lvert T \rvert > \lvert S \rvert, \\
0 & \text{if } \lvert T \rvert \leq \lvert S \rvert,
\end{cases}
\end{equation*}
so that
\begin{equation*}
\mathrm{d} v = \sum _{ \substack{\lvert T \rvert = \lvert S \rvert \\ j \notin T } } \chi _{ ( T, T \cup \{ j \} ) } .
\end{equation*}
This sum contains
$ \binom{\lvert N \rvert }{\lvert S \rvert } \bigl( \lvert N \rvert
- \lvert S \rvert \bigr) = \frac{ \lvert N \rvert ! }{ \lvert S
\rvert ! ( \lvert N \rvert - 1 - \lvert S \rvert ) ! } $ terms,
and the symmetry argument above shows that they all contribute
equally to $ v (N) $, so
\begin{equation*}
v (N) = \frac{ \lvert N \rvert ! }{ \lvert S
\rvert ! ( \lvert N \rvert - 1 - \lvert S \rvert ) ! } u (N) .
\end{equation*}
Finally, since $ v (N) = 1 $, we obtain the claimed expression for
$ u (N) $.
\end{remark}
\begin{example}
\label{ex:decompGlove}
We now illustrate the decomposition of \autoref{thm:gameDecomp} by
applying it to the glove game introduced in
\autoref{ex:introGlove}. Since $ \lvert N \rvert = 3 $, the graph
$G$ consists of vertices and edges of the ordinary,
three-dimensional cube.
\autoref{tab:glove} contains the values of $v$ and the component
games $ v _1 $, $ v _2 $, and $ v _3 $. Several of the properties
proved in \autoref{thm:gameDecomp} are immediately apparent. In
particular, we have the decomposition $ v = v _1 + v _2 + v _3 $,
while $ v _1 (N) = \frac{ 2 }{ 3 } $ and
$ v _2 (N) = v _3 (N) = \frac{ 1 }{ 6 } $ are the Shapley values
previously obtained in \autoref{ex:introGlove}. Furthermore, the
symmetry of players $2$ and $3$ is evident in all three component
games, not just $ v _2 $ and $ v _3 $. Indeed, if $ \sigma $ is the
permutation swapping players $2$ and $3$, then
$ \sigma ^\ast v = v $, so we have
\begin{equation*}
v _1 = \sigma ^\ast v _1 , \qquad v _2 = \sigma ^\ast v _3 , \qquad v _3 = \sigma ^\ast v _2 ,
\end{equation*}
which can be observed in \autoref{tab:glove}. We also point out
that, although $ v \geq 0 $, the component games
$ v _1 , v _2 , v _3 $ may take negative values.
Note also that $ \mathrm{d} v _i \neq \mathrm{d} _i v $,
corresponding to the fact that the glove game is not inessential.
\begin{table}
\centering
\renewcommand\arraystretch{1.4}
\begin{tabular}{crrrr}
$S$ & $v $ & $ v _1 $ & $ v _2 $ & $ v _3 $ \\
\hline
$ \emptyset $ & $0$ & $0$ & $0$ & $0 $ \\
$ \{ 1 \} $ & $0$ & $ \frac{5}{12} $ & $ - \frac{ 5 }{ 24 } $ & $ - \frac{ 5 }{ 24 } $ \\
$ \{ 2 \} $ & $0$ & $ - \frac{5}{24} $ & $ \frac{ 1 }{ 6 } $ & $ \frac{ 1 }{ 24 } $ \\
$ \{ 3 \} $ & $0$ & $ - \frac{5}{24} $ & $ \frac{ 1 }{ 24 } $ & $ \frac{ 1 }{ 6 } $ \\
$ \{ 1, 2 \} $ & $1$ & $ \frac{ 5 }{ 8 } $ & $ \frac{ 3 }{ 8 } $ & $0$ \\
$ \{ 1, 3 \} $ & $1$ & $ \frac{ 5 }{ 8 } $ & $0$ & $ \frac{ 3 }{ 8 } $ \\
$ \{ 2, 3 \} $ & $0$ & $ - \frac{ 1 }{ 4 } $ & $ \frac{ 1 }{ 8 } $ & $ \frac{ 1 }{ 8 } $ \\
$ \{ 1,2,3 \} $ & $1$ & $ \boldsymbol{ \frac{ 2 }{ 3 } } $ & $ \boldsymbol{ \frac{ 1 }{ 6 } } $ & $ \boldsymbol{ \frac{ 1 }{ 6 } } $
\end{tabular}
\bigskip
\caption{Decomposition of the three-player glove game as
$ v = v _1 + v _2 + v _3 $, following
\autoref{thm:gameDecomp}. The Shapley values of
$ \frac{ 2 }{ 3 } $, $ \frac{ 1 }{ 6 } $, $ \frac{ 1 }{ 6 } $
appear in bold on the last line, corresponding to the value of the
grand coalition in each component game.\label{tab:glove}}
\end{table}
\end{example}
\subsection{Decomposition via the hypercube graph Laplacian}
\label{sec:laplacian}
We now briefly show how the component games $ v _i $ may be computed
using the graph Laplacian $ L = \mathrm{d} ^\ast \mathrm{d} $, without
having to explicitly compute the orthogonal projection operator
$P$. Denote $ L _i \coloneqq \mathrm{d} ^\ast \mathrm{d} _i $; this is
in fact a weighted graph Laplacian, where the edge
$ \bigl( S , S \cup \{ j \} \bigr) $ has weight $1$ if $ i = j $ and
$0$ otherwise. (We will say more about weighted graph Laplacians in
\autoref{sec:weightedDecomp}.)
\begin{proposition}
\label{prop:laplace}
For each $ i \in N $, the component game $ v _i $ of
\autoref{thm:gameDecomp} is the unique solution to
$ L v _i = L _i v $ such that $ v _i ( \emptyset ) = 0 $.
\end{proposition}
\begin{proof}
Since
$ ( \mathrm{d} v _i - \mathrm{d} _i v ) \in \mathcal{N} ( \mathrm{d}
^\ast ) $, we immediately have
\begin{equation*}
0 = \mathrm{d} ^\ast ( \mathrm{d} v _i - \mathrm{d} _i v ) = \mathrm{d} ^\ast \mathrm{d} v _i - \mathrm{d} ^\ast \mathrm{d} _i v = L v _i - L _i v ,
\end{equation*}
so $ L v _i = L _i v $ as claimed. To show uniqueness, suppose that
$ v _i ^\prime $ is another solution. Then
$ L ( v _i - v _i ^\prime ) = 0 $, and since the hypercube graph is
connected, we must have $ v _i - v _i ^\prime $ constant. But
$ v _i ( \emptyset ) = v _i ^\prime ( \emptyset ) $, so it follows
that $ v _i = v _i ^\prime $.
\end{proof}
\begin{remark}
Equivalently, recall from \autoref{rmk:leastSquares} that $ v _i $
may also be seen as the least-squares solution to
$ \mathrm{d} v _i = \mathrm{d} _i v $. From this point of view,
$ \mathrm{d} ^\ast \mathrm{d} v _i = \mathrm{d} ^\ast \mathrm{d} _i
v $ is precisely the system of \emph{normal equations} corresponding
to this least-squares problem, as previously discussed in
\autoref{sec:introHodge}.
\end{remark}
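The proposition yields a direct computation of the component games. The following stdlib-Python sketch (our own implementation; coalitions are encoded as bitmasks, and the linear system is solved by exact Gauss--Jordan elimination over the rationals) solves $L v_i = L_i v$ for the glove game and recovers the values from the worked example, including the Shapley values $\frac{2}{3}, \frac{1}{6}, \frac{1}{6}$:

```python
from fractions import Fraction

n = 3
size = 1 << n  # 8 coalitions as bitmasks; bit j set means player j+1 is present

def glove(S):
    # player 1 (bit 0): left glove; players 2 and 3 (bits 1, 2): right gloves
    return 1 if (S & 0b001) and (S & 0b110) else 0

def component_game(v, i):
    """Solve L v_i = L_i v with v_i(empty) = 0, using exact rational arithmetic."""
    A = [[Fraction(0)] * size for _ in range(size)]
    b = [Fraction(0)] * size
    for S in range(size):
        A[S][S] = Fraction(n)              # (L u)(S) = n u(S) - sum_j u(S xor j)
        for j in range(n):
            A[S][S ^ (1 << j)] -= 1
        b[S] = Fraction(v(S) - v(S ^ (1 << i)))   # (L_i v)(S)
    # Pin v_i(empty) = 0 in place of the (redundant) first equation.
    A[0] = [Fraction(int(c == 0)) for c in range(size)]
    b[0] = Fraction(0)
    for col in range(size):                # Gauss-Jordan elimination
        piv = next(r for r in range(col, size) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        scale = A[col][col]
        A[col] = [x / scale for x in A[col]]
        b[col] /= scale
        for r in range(size):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

v1, v2, v3 = (component_game(glove, i) for i in range(3))
assert all(v1[S] + v2[S] + v3[S] == glove(S) for S in range(size))
```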
\subsection{Explicit decomposition via discrete Green's functions}
In \autoref{prop:laplace}, we characterized the component game
$ v _i $ implicitly, as the solution to $ L v _i = L _i v $ with
$ v _i ( \emptyset ) = 0 $. In fact, it is possible to obtain an
\emph{explicit} formula for $ v _i $, in terms of the \emph{discrete
Green's function} for the hypercube graph Laplacian (\citet[Example
4]{ChYa2000}). For an arbitrary game $ v $ and coalition
$ S \subset N $, the formula for $ v _i (S) $ is a bit unwieldy, but
we also present two situations in which it simplifies nicely. First,
when $ S = N $, we show that the formula for $ v _i (N) $ yields the
Shapley formula \eqref{eqn:shapley}, giving an alternative proof that
$ v _i (N) = \phi _i (v) $. Second, when $v$ is a \emph{pure
bargaining game}, where $ v (S) = 0 $ for $ S \neq N $, we give an
explicit formula for $ v _i (S) $.
Let $ y \in \mathcal{R} ( \mathrm{d} ^\ast ) $, and consider the
problem $ L u = y $. Rather than imposing the condition
$ u ( \emptyset ) = 0 $, we instead seek the unique solution
$u \in \mathcal{R} ( \mathrm{d} ^\ast ) $.\footnote{Since
$ \mathcal{R} ( \mathrm{d} ^\ast ) $ is the orthogonal complement of
$ \mathcal{N} ( \mathrm{d} ) \cong \mathbb{R} $, this is equivalent
to saying that $y$ and $u$ are orthogonal to constant functions,
i.e., that
$ \sum _{ S \subset N } y (S) = \sum _{ S \subset N } u (S) = 0 $.}
The solution vanishing on $ \emptyset $ is then simply
$ u - u ( \emptyset ) $. Since the restriction of $L$ to
$ \mathcal{R} ( \mathrm{d} ^\ast ) $ is symmetric positive definite,
it has a symmetric positive definite inverse $K$, and the solution is
$ u = K y $. Writing out the matrix multiplication explicitly, we have
\begin{equation*}
u (S) = \sum _{ T \subset N } K ( S, T ) y (T) ,
\end{equation*}
where $ K ( \cdot , \cdot ) $ is called the \emph{discrete Green's
function}. \citet[Example 4]{ChYa2000} showed that, for the
hypercube graph Laplacian with $ \lvert N \rvert = n $, this is given
by the formula
\begin{multline*}
K ( S, T ) = \frac{ 1 }{ n 2 ^{ 2 n } } \biggl( - \sum _{ j < k }
\frac{ \bigl[ \binom{n}{0} + \cdots + \binom{n}{j}\bigr] \bigl[
\binom{n}{j+1} + \cdots + \binom{n}{n} \bigr] }{ \binom{n-1}{j} }\\
+ \sum _{ k \leq j } \frac{ \bigl[ \binom{n}{j+1} + \cdots +
\binom{n}{n} \bigr] ^2 }{ \binom{n-1}{j} } \biggr),
\end{multline*}
where $k = \lvert S \mathbin{\triangle} T \rvert $ is the distance
between $S$ and $T$ in the graph; here,
$ S \mathbin{\triangle} T \coloneqq ( S \cup T ) \setminus ( S \cap T
) $ is the symmetric difference of $S$ and $T$. (We remark that this
differs from the formula in \citet{ChYa2000} by a factor of
$\frac{1}{n}$, since they consider the normalized graph Laplacian
$ \frac{1}{n} L $.)
Given a game $v$, we may therefore use this approach to calculate
$ u _i = K L _i v $ explicitly and then take
$ v _i = u _i - u _i ( \emptyset ) $. Fortunately, the fact that
$ y = L _i v $ simplifies the formulas substantially. Given
$ T \subset N \setminus \{ i \} $,
\begin{equation*}
( L _i v ) \bigl( T \cup \{ i \} \bigr) = v \bigl( T \cup \{ i \} \bigr) - v (T) = - ( L _i v ) (T) ,
\end{equation*}
and therefore,
\begin{equation*}
u _i (S) = \sum _{ T \subset N \setminus \{ i \} } \Bigl( K \bigl( S, T \cup \{ i \} \bigr) - K ( S, T ) \Bigr) \Bigl( v \bigl( T \cup \{ i \} \bigr) - v (T) \Bigr) .
\end{equation*}
Furthermore, if $ S \subset N \setminus \{ i \} $ is distance $k$ from
$T$, then it is distance $ k + 1 $ from $ T \cup \{ i \} $. Therefore,
all but the $ j = k $ terms in
$ K \bigl( S, T \cup \{ i \} \bigr) - K ( S, T ) $ cancel, leaving
\begin{multline*}
K \bigl( S , T \cup \{ i \} \bigr) - K ( S, T )\\
\begin{aligned}
&= - \frac{ 1 }{ n 2 ^{ 2 n } } \frac{ \bigl[ \binom{n}{0} + \cdots + \binom{n}{k}\bigr] \bigl[
\binom{n}{k+1} + \cdots + \binom{n}{n} \bigr] + \bigl[ \binom{n}{k+1} + \cdots + \binom{n}{n} \bigr] ^2 }{ \binom{n-1}{k} }\\
&= - \frac{ 1 }{ n 2 ^{ 2 n } } \frac{ \bigl[ \binom{n}{0} + \cdots + \binom{n}{k} + \binom{n}{k+1} + \cdots + \binom{n}{n} \bigr] \bigl[ \binom{n}{k+1} + \cdots + \binom{n}{n} \bigr] }{ \binom{n-1}{k} } \\
&= - \frac{ 1 }{ n 2 ^n} \frac{ \binom{n}{k+1} + \cdots + \binom{n}{n} }{ \binom{n-1}{k} } .
\end{aligned}
\end{multline*}
Similarly, $ S \cup \{ i \} $ is distance $k$ from $ T \cup \{ i \} $
and distance $ k + 1 $ from $ T $, so
\begin{equation*}
K \bigl( S \cup \{ i \} , T \cup \{ i \} \bigr) - K \bigl( S \cup \{ i \} , T \bigr) = \frac{ 1 }{ n 2 ^n} \frac{ \binom{n}{k+1} + \cdots + \binom{n}{n} }{ \binom{n-1}{k} }.
\end{equation*}
We have just proved the following theorem.
\begin{theorem}
\label{thm:explicit}
For each $ i \in N $, the component game $ v _i $ of
\autoref{thm:gameDecomp} is given by
$ v _i = u _i - u _i ( \emptyset ) $, where, for
$ S \subset N \setminus \{ i \} $,
\begin{equation*}
u _i \bigl( S \cup \{ i \} \bigr) = \frac{ 1 }{ \lvert N \rvert 2 ^{\lvert N \rvert }} \sum _{ T \subset N \setminus \{ i \} } \frac{ \binom{\lvert N \rvert}{\lvert S \mathbin{\triangle} T \rvert +1} + \cdots + \binom{\lvert N \rvert }{\lvert N \rvert } }{ \binom{\lvert N \rvert -1}{\lvert S \mathbin{\triangle} T \rvert } } \Bigl( v \bigl( T \cup \{ i \} \bigr) - v (T) \Bigr),
\end{equation*}
and $ u _i (S) = - u _i \bigl( S \cup \{ i \} \bigr) $. In particular,
\begin{equation*}
u _i ( \emptyset ) = - \frac{ 1 }{ \lvert N \rvert 2 ^{\lvert N \rvert } } \sum _{ T \subset N \setminus \{ i \} } \frac{ \binom{\lvert N \rvert }{\lvert T \rvert +1} + \cdots + \binom{\lvert N \rvert }{ \lvert N \rvert } }{ \binom{\lvert N \rvert -1}{ \lvert T \rvert } } \Bigl( v \bigl( T \cup \{ i \} \bigr) - v (T) \Bigr).
\end{equation*}
\end{theorem}
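As a sanity check on the explicit formula (outside the formal development), one can evaluate it in exact rational arithmetic. The following Python sketch, with helper names of our own choosing, computes $ u _i $ for the three-player glove game used in the examples and recovers the Shapley values $ ( \frac{2}{3}, \frac{1}{6}, \frac{1}{6} ) $ via $ v _i (N) = u _i (N) - u _i ( \emptyset ) $ and $ u _i ( \emptyset ) = - u _i \bigl( \{ i \} \bigr) $.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def u_component(v, n, i, S):
    """u_i(S union {i}) via the explicit formula, for S a subset of N minus {i}.
    v is a function on frozensets of players 0..n-1."""
    others = [j for j in range(n) if j != i]
    total = Fraction(0)
    for k in range(n):
        for T in combinations(others, k):
            T = frozenset(T)
            d = len(S ^ T)  # |S symmetric-difference T|, inside N minus {i}
            num = sum(comb(n, j) for j in range(d + 1, n + 1))
            total += Fraction(num, comb(n - 1, d)) * (v(T | {i}) - v(T))
    return total / (n * 2 ** n)

# Glove game: player 0 holds a left glove, players 1 and 2 right gloves.
glove = lambda S: 1 if 0 in S and (1 in S or 2 in S) else 0

n = 3
# v_i(N) = u_i(N) - u_i(empty), with u_i(empty) = -u_i({i}).
values = [u_component(glove, n, i, frozenset(j for j in range(n) if j != i))
          + u_component(glove, n, i, frozenset())
          for i in range(n)]
```

Exact arithmetic avoids any floating-point ambiguity in the binomial ratios.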
\begin{remark}
This immediately gives us an explicit formula for the projection
$ P \mathrm{d} _i v = \mathrm{d} v _i = \mathrm{d} u _i \in
\mathcal{R} ( \mathrm{d} ) $.
\end{remark}
We now use this formula to get a new proof that
$ v _i (N) = \phi _i (v) $ is the Shapley value. Observe that the
distance from $ S = N \setminus \{ i \} $ to
$ T \subset N \setminus \{ i \} $ is just
$ \lvert N \rvert - \lvert T \rvert - 1 $, so \autoref{thm:explicit} says
that
\begin{align*}
u _i (N)
&= \frac{ 1 }{ \lvert N \rvert 2 ^{\lvert N \rvert } } \sum _{ T \subset N \setminus \{ i \} } \frac{ \binom{\lvert N \rvert }{\lvert N \rvert - \lvert T \rvert } + \cdots + \binom{\lvert N \rvert }{ \lvert N \rvert } }{ \binom{\lvert N \rvert -1}{\lvert N \rvert - \lvert T \rvert - 1 } } \Bigl( v \bigl( T \cup \{ i \} \bigr) - v (T) \Bigr) \\
&= \frac{ 1 }{ \lvert N \rvert 2 ^{\lvert N \rvert } } \sum _{ T \subset N \setminus \{ i \} } \frac{ \binom{\lvert N \rvert }{ 0 }+ \cdots + \binom{\lvert N \rvert }{\lvert T \rvert } }{ \binom{\lvert N \rvert -1}{\lvert T \rvert } } \Bigl( v \bigl( T \cup \{ i \} \bigr) - v (T) \Bigr).
\end{align*}
Subtracting $ v _i (N) = u _i (N) - u _i ( \emptyset ) $ and using
$ \binom{\lvert N \rvert }{0} + \cdots + \binom{\lvert N \rvert }{
\lvert N \rvert } = 2 ^{ \lvert N \rvert } $ gives
\begin{align*}
v _i (N) &= \frac{ 1 }{ \lvert N \rvert } \sum _{ T \subset N \setminus \{ i \} } \frac{ 1 }{ \binom{\lvert N \rvert -1}{\lvert T \rvert } } \Bigl( v \bigl( T \cup \{ i \} \bigr) - v (T) \Bigr) \\
&= \sum _{ T \subset N \setminus \{ i \} } \frac{ \lvert T \rvert ! \bigl( \lvert N \rvert - 1 - \lvert T \rvert \bigr) ! }{ \lvert N \rvert ! } \Bigl( v \bigl( T \cup \{ i \} \bigr) - v (T) \Bigr),
\end{align*}
which is precisely the Shapley formula \eqref{eqn:shapley} for
$ \phi _i (v) $.
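The agreement between this subset form of the Shapley formula and the average over permutations (cf.\ \eqref{eqn:shapleyPermutation}) can also be confirmed numerically. The sketch below, an illustration of our own rather than anything from the development above, compares the two expressions on a randomly generated three-player game.

```python
from fractions import Fraction
from itertools import combinations, permutations
from math import factorial
import random

def shapley_subsets(v, n, i):
    """Shapley value of player i via the subset (coefficient) formula."""
    total = Fraction(0)
    for k in range(n):
        for T in combinations([j for j in range(n) if j != i], k):
            T = frozenset(T)
            weight = Fraction(factorial(k) * factorial(n - 1 - k), factorial(n))
            total += weight * (v(T | {i}) - v(T))
    return total

def shapley_permutations(v, n, i):
    """Shapley value of player i as an average of marginal contributions."""
    total = 0
    for order in permutations(range(n)):
        pre = frozenset(order[:order.index(i)])  # players preceding i
        total += v(pre | {i}) - v(pre)
    return Fraction(total, factorial(n))

random.seed(1)
table = {frozenset(T): random.randint(0, 9)
         for k in range(4) for T in combinations(range(3), k)}
table[frozenset()] = 0  # normalize v(empty) = 0
v = lambda S: table[frozenset(S)]
```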
Finally, we apply \autoref{thm:explicit} to the case where $v$ is a
pure bargaining game, i.e., where $ v (S) = 0 $ for $ S \neq N $.
\begin{theorem}
Let $v$ be a pure bargaining game, $ i \in N $, and
$ S \subset N \setminus \{ i \} $. Then the component game $ v _i $
is given by
\begin{align*}
v _i \bigl( S \cup \{ i \} \bigr) &= \frac{ 1 }{ \lvert N \rvert 2 ^{ \lvert N \rvert } } \Biggl( 1 + \frac{ \binom{\lvert N \rvert }{0} + \cdots + \binom{\lvert N \rvert }{ \lvert S \rvert } }{ \binom{\lvert N \rvert - 1 }{ \lvert S \rvert } } \Biggr) v (N), \\
v _i (S) &= \frac{ 1 }{ \lvert N \rvert 2 ^{ \lvert N \rvert } } \Biggl( 1 - \frac{ \binom{\lvert N \rvert }{0} + \cdots + \binom{\lvert N \rvert }{ \lvert S \rvert } }{ \binom{\lvert N \rvert - 1 }{ \lvert S \rvert } } \Biggr) v (N) .
\end{align*}
\end{theorem}
\begin{proof}
Observe that $ v \bigl( T \cup \{ i \} \bigr) - v (T) $ vanishes for
$ T \subsetneq N \setminus \{ i \} $ and equals $ v (N) $ for
$ T = N \setminus \{ i \} $. The distance between
$S \subset N \setminus \{ i \} $ and $ T = N \setminus \{ i \} $ is
just $ \lvert N \rvert - \lvert S \rvert - 1 $, so applying
\autoref{thm:explicit} gives
\begin{align*}
u _i \bigl( S \cup \{ i \} \bigr)
&= \frac{ 1 }{ \lvert N \rvert 2 ^{\lvert N \rvert } } \frac{ \binom{\lvert N \rvert }{\lvert N \rvert - \lvert S \rvert } + \cdots + \binom{\lvert N \rvert }{ \lvert N \rvert } }{ \binom{\lvert N \rvert -1}{\lvert N \rvert - \lvert S \rvert - 1 } } v (N) \\
&= \frac{ 1 }{ \lvert N \rvert 2 ^{\lvert N \rvert } } \frac{ \binom{\lvert N \rvert }{ 0 } + \cdots + \binom{\lvert N \rvert }{ \lvert S \rvert } }{ \binom{\lvert N \rvert -1}{\lvert S \rvert } } v (N) ,
\end{align*}
and $ u _i (S) = - u _i \bigl( S \cup \{ i \} \bigr) $. In
particular, for $ S = \emptyset $, we get
\begin{equation*}
u _i ( \emptyset ) = - \frac{ 1 }{ \lvert N \rvert 2 ^{\lvert N \rvert } } v (N),
\end{equation*}
and the result follows immediately from
$ v _i = u _i - u _i ( \emptyset ) $.
\end{proof}
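The two closed-form expressions for the bargaining components can be checked directly against the efficiency property $ \sum _{ i \in N } v _i = v $ and the equal split $ v _i (N) = v (N) / \lvert N \rvert $ forced by symmetry. A small Python sketch in exact arithmetic (our own illustration; the helper name is hypothetical):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def bargaining_component(n, vN, i, C):
    """v_i(C) for the pure bargaining game, per the theorem's two formulas.
    With S = C minus {i}, the sign is + if i is in C and - otherwise."""
    C = frozenset(C)
    s = len(C - {i})
    ratio = Fraction(sum(comb(n, j) for j in range(s + 1)), comb(n - 1, s))
    sign = 1 if i in C else -1
    return Fraction(vN, n * 2 ** n) * (1 + sign * ratio)

n, vN = 3, 1
coalitions = [frozenset(C) for k in range(n + 1)
              for C in combinations(range(n), k)]
```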
\section{Weighted decompositions and restricted cooperation}
\subsection{Decomposition of cooperative games with weighted edges}
\label{sec:weightedDecomp}
Suppose that each edge $ \bigl( S, S \cup \{ i \} \bigr) \in E $ of
the hypercube graph is assigned a weight
$ w \bigl( S, S \cup \{ i \} \bigr) > 0 $. We define
$ \ell ^2 _w (E) $ to be the space of functions
$E \rightarrow \mathbb{R} $ equipped with the weighted $ \ell ^2 $
inner product,
\begin{equation*}
\langle f, g \rangle _w \coloneqq \sum _{ ( S, S \cup \{ i \} ) \in E } w \bigl( S, S \cup \{ i \} \bigr) f \bigl( S, S \cup \{ i \} \bigr) g \bigl( S, S \cup \{ i \} \bigr).
\end{equation*}
The setting of \autoref{sec:decomp} corresponds to the special case
where $ w = 1 $ on every edge. (Equivalently, $w$ may be any positive
constant, not necessarily $1$.)
The weighted inner product affects the Hodge decomposition as
follows. Although
$ \mathrm{d} \colon \ell ^2 (V) \rightarrow \ell ^2 _w (E) $ is
unchanged,
$ \mathrm{d} ^\ast _w \colon \ell ^2 _w (E) \rightarrow \ell ^2 (V) $
is now the adjoint with respect to the weighted inner product. We then
have the combinatorial Hodge decomposition
\begin{equation*}
\ell ^2 _w (E) = \mathcal{R} (\mathrm{d}) \oplus \mathcal{N} ( \mathrm{d} _w ^\ast ) ,
\end{equation*}
where the direct sum is $ \ell ^2 _w $-orthogonal rather than
$ \ell ^2 $-orthogonal.
Denote by
$ P _w \colon \ell ^2 _w (E) \rightarrow \mathcal{R} ( \mathrm{d} ) $
the $ \ell ^2 _w $-orthogonal projection onto
$ \mathcal{R} ( \mathrm{d} ) $. The decomposition of cooperative
games in \autoref{thm:gameDecomp} may then be generalized as follows.
\begin{theorem}
\label{thm:weightedDecomp}
For each $ i \in N $, let $ v _{i, w} \in \ell ^2 (V) $ with
$ v _{i,w} ( \emptyset ) = 0 $ be the unique game such that
$ \mathrm{d} v _{i,w} = P _w \mathrm{d} _i v $. Then:
\begin{enumerate}[label=(\alph*)]
\item $ \displaystyle\sum _{ i \in N }v _{i,w} = v $.
\item If $ v \bigl( S \cup \{ i \} \bigr) - v (S) = 0 $ for all
$ S \subset N \setminus \{ i \} $, then $ v _{i,w} = 0 $.
\item If $ \sigma $ is a permutation of $N$, then
$ ( \sigma ^\ast v ) _{ i , \sigma ^\ast w } = \sigma ^\ast ( v _{
\sigma (i) , w } ) $. In particular, if $\sigma$ is the
permutation swapping $i$ and $j$, and if $ \sigma ^\ast v = v $
and $ \sigma ^\ast w = w $, then $ v _{i,w} = \sigma ^\ast ( v _{j,w} ) $.
\item For any two games $ v, v ^\prime $ and $ \alpha , \alpha ^\prime \in \mathbb{R} $, $ ( \alpha v + \alpha ^\prime v ^\prime ) _{ i , w } = \alpha v _{ i , w } + \alpha ^\prime v ^\prime _{ i , w } $.
\end{enumerate}
\end{theorem}
\begin{proof}
The proofs of (a), (b), and (d) are just as in
\autoref{thm:gameDecomp}, since the weighted projection $ P _w $ is
still linear and equal to the identity on
$ \mathcal{R} ( \mathrm{d} ) $.
For (c), we can no longer assume that a permutation preserves the
$ \ell ^2 _w $ inner product. However, we do have
$ \langle f, g \rangle _w = \langle \sigma ^\ast f , \sigma ^\ast g
\rangle _{ \sigma ^\ast w } $, which implies
$ P _{ \sigma ^\ast w } \sigma ^\ast = \sigma ^\ast P _w $. Therefore,
\begin{multline*}
\mathrm{d} ( \sigma ^\ast v ) _{ i , \sigma ^\ast w } = P _{ \sigma ^\ast w } \mathrm{d} _i ( \sigma ^\ast v ) = P _{ \sigma ^\ast w } \sigma ^\ast ( \mathrm{d} _{ \sigma (i) } v ) \\
= \sigma ^\ast ( P _w \mathrm{d} _{ \sigma (i) } v ) = \sigma ^\ast ( \mathrm{d} v _{ \sigma (i) , w } ) = \mathrm{d} \sigma ^\ast ( v _{ \sigma (i) , w } ),
\end{multline*}
and the rest of the argument proceeds as in the proof of
\autoref{thm:gameDecomp}.
\end{proof}
\begin{corollary}
\label{cor:weightedShapley}
If $ \sigma ^\ast w = w $ for all permutations $\sigma$, then
$ \phi _i (v) = v _{ i , w } (N) $.
\end{corollary}
\begin{proof}
If $w$ is invariant under permutations, then (a)--(d) imply that
$ v _{ i , w } (N) $ satisfies the Shapley axioms, so it must be the
Shapley value $ \phi _i (v) $.
\end{proof}
As in \autoref{rmk:leastSquares}, we may view the component $ v _i $
as a \emph{weighted} least-squares solution to
$ \mathrm{d} v _i = \mathrm{d} _i v $, in the sense that
\begin{equation*}
v _i = \operatorname{argmin}\displaylimits_{\substack{u \in \ell ^2 (V) \\ u ( \emptyset ) = 0 }} \lVert \mathrm{d} u - \mathrm{d} _i v \rVert _{ \ell ^2 _w (E) } .
\end{equation*}
We can also cast this in terms of the weighted graph Laplacians
$ L _w \coloneqq \mathrm{d} ^\ast _w \mathrm{d} $ and
$ L _{ w _i } \coloneqq \mathrm{d} ^\ast _w \mathrm{d} _i =
\mathrm{d} ^\ast _{ w _i } \mathrm{d} $, where the weight function
$ w _i $ is defined by
\begin{equation*}
w _i \bigl( S, S \cup \{ j \} \bigr) \coloneqq
\begin{cases}
w \bigl( S, S \cup \{ i \} \bigr) & \text{if } i = j ,\\
0 & \text{if } i \neq j .
\end{cases}
\end{equation*}
The following generalization of \autoref{prop:laplace} is stated
without proof, since the proof is essentially identical. It can also
be seen as an expression of the normal equations for the weighted
least-squares problem.
\begin{proposition}
\label{prop:weightedLaplace}
For each $ i \in N $, the component game $ v _i $ of
\autoref{thm:weightedDecomp} is the unique solution to
$ L _w v _i = L _{w _i} v $ such that $ v _i ( \emptyset ) = 0 $.
\end{proposition}
One consequence of \autoref{cor:weightedShapley} is that, although
the Shapley value is unique, the decomposition
$ v = \sum _{ i \in N } v _i $ of \autoref{thm:gameDecomp} generally
is not. Indeed, any totally symmetric weight function $w$ will yield
a decomposition satisfying the conditions of
\autoref{thm:gameDecomp}, and these will generally not agree with
one another---except at $N$, where they all give the Shapley
value. This is illustrated in the following example.
\begin{example}
\label{ex:symmetricGlove}
In \autoref{ex:decompGlove}, we decomposed the glove game with
respect to the $ \ell ^2 $ inner product, corresponding to the
constant weight function $ w \equiv 1 $. Suppose instead that we
take $ w \bigl( S, S \cup \{ i \} \bigr) = \lvert S \rvert + 1 $,
which is totally symmetric but not constant. The resulting decomposition is
shown in \autoref{tab:symmetricGlove}, and is distinct from that
obtained in \autoref{ex:decompGlove} and shown in
\autoref{tab:glove}. However, due to the symmetry of $w$, the
Shapley values are again recovered as the value of the grand
coalition in each component game. Note that the symmetry of players
$2$ and $3$ is still apparent in the component games.
\begin{table}
\centering
\renewcommand\arraystretch{1.4}
\begin{tabular}{crrrr}
$S$ & $v $ & $ v _1 $ & $ v _2 $ & $ v _3 $ \\
\hline
$ \emptyset $ & $0$ & $0$ & $0$ & $0 $ \\
$ \{ 1 \} $ & $0$ & $ \frac{ 16 }{ 31 } $ & $ -\frac{ 8 }{ 31 } $ & $ -\frac{ 8 }{ 31 } $ \\
$ \{ 2 \} $ & $0$ & $ -\frac{ 8 }{ 31 } $ & $ \frac{ 6 }{ 31 } $ & $ \frac{ 2 }{ 31 } $ \\
$ \{ 3 \} $ & $0$ & $ - \frac{ 8 }{ 31 } $ & $ \frac{ 2 }{ 31 } $ & $ \frac{ 6 }{ 31 } $ \\
$ \{ 1, 2 \} $ & $1$ & $ \frac{ 20 }{ 31 } $ & $ \frac{ 21 }{ 62 } $ & $ \frac{ 1 }{ 62 } $ \\
$ \{ 1, 3 \} $ & $1$ & $ \frac{ 20 }{ 31 } $ & $ \frac{ 1 }{ 62 } $ & $ \frac{ 21 }{ 62 } $ \\
$ \{ 2, 3 \} $ & $0$ & $ -\frac{ 9 }{ 31 } $ & $ \frac{ 9 }{ 62 } $ & $ \frac{ 9 }{ 62 } $ \\
$ \{ 1,2,3 \} $ & $1$ & $ \boldsymbol{ \frac{ 2 }{ 3 } } $ & $ \boldsymbol{ \frac{ 1 }{ 6 } } $ & $ \boldsymbol{ \frac{ 1 }{ 6 } } $
\end{tabular}
\bigskip
\caption{%
Decomposition of the three-player glove game as
$ v = v _1 + v _2 + v _3 $, following
\autoref{thm:weightedDecomp}, with weight function
$ w \bigl( S, S \cup \{ i \} \bigr) = \lvert S \rvert + 1 $. The Shapley values of
$ \frac{ 2 }{ 3 } $, $ \frac{ 1 }{ 6 } $, $ \frac{ 1 }{ 6 } $
appear in bold on the last line, since the weight function is
totally symmetric, but the components are elsewhere distinct from those in
\autoref{tab:glove}.\label{tab:symmetricGlove} }
\end{table}
\end{example}
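The table above can be reproduced by solving the weighted least-squares problem directly. The following Python sketch (assuming NumPy; the function name is ours) assembles the matrix of $ \sqrt{w} \, \mathrm{d} $ with the column for $ \emptyset $ dropped, which pins $ v _i ( \emptyset ) = 0 $, and solves for each weighted component of the glove game, with players indexed $0$, $1$, $2$ in place of $1$, $2$, $3$.

```python
import itertools
import numpy as np

def weighted_decomposition(n, v, w):
    """Components v_i with d v_i = P_w d_i v and v_i(empty) = 0, computed as
    weighted least squares on the hypercube graph.  v: dict over frozensets;
    w: dict keyed by (S, i) with i not in S, for the edge (S, S union {i})."""
    verts = [frozenset(T) for k in range(n + 1)
             for T in itertools.combinations(range(n), k)]
    idx = {S: j for j, S in enumerate(verts)}
    edges = [(S, i) for S in verts for i in range(n) if i not in S]
    A = np.zeros((len(edges), len(verts)))
    for r, (S, i) in enumerate(edges):
        sw = np.sqrt(w[(S, i)])
        A[r, idx[S | {i}]] = sw
        A[r, idx[S]] = -sw
    A = A[:, 1:]  # drop the empty-set column (verts[0]) to pin u(empty) = 0
    comps = []
    for i in range(n):
        b = np.array([np.sqrt(w[(S, j)]) * (v[S | {j}] - v[S]) if j == i else 0.0
                      for (S, j) in edges])
        u = np.linalg.lstsq(A, b, rcond=None)[0]
        comps.append({S: (0.0 if not S else u[idx[S] - 1]) for S in verts})
    return comps

# Glove game with weight w(S, S union {i}) = |S| + 1.
v = {frozenset(T): (1 if 0 in T and (1 in T or 2 in T) else 0)
     for k in range(4) for T in itertools.combinations(range(3), k)}
w = {(S, i): len(S) + 1 for S in v for i in range(3) if i not in S}
comps = weighted_decomposition(3, v, w)
```

Since the weight is totally symmetric, the values at the grand coalition are the Shapley values, while the interior values match the table entries such as $ v _1 \bigl( \{ 1 \} \bigr) = \frac{16}{31} $.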
\begin{example}
Again, consider the glove game, but take the weight function to be
$ w \bigl( \emptyset , \{ 1 \} \bigr) = \frac{1}{2} $ and $ w = 1 $
otherwise. This may be interpreted as player $1$ being reluctant
(but not totally unwilling) to be the first player to join the
coalition. The resulting decomposition is shown in
\autoref{tab:reluctantGlove}. Unlike in the previous examples, this
$w$ is not totally symmetric, and consequently the values
$ v _1 (N) = \frac{ 13 }{ 17 } $ and
$ v _2 (N) = v _3 (N) = \frac{ 2 }{ 17 } $ no longer agree with the
Shapley values. Since player $1$ is less willing to join the
coalition first (i.e., to contribute zero marginal value), the
payoff to player $1$ is increased from $ \frac{ 2 }{ 3 } $ to
$ \frac{ 13 }{ 17 } $ at the expense of players $2$ and $3$, the
payoff to each of whom is reduced from $ \frac{ 1 }{ 6 } $ to
$ \frac{ 2 }{ 17 } $. Note that the symmetry of players $2$ and $3$
is still maintained.
\begin{table}
\centering
\renewcommand\arraystretch{1.4}
\begin{tabular}{crrrr}
$S$ & $v $ & $ v _1 $ & $ v _2 $ & $ v _3 $ \\
\hline
$ \emptyset $ & $0$ & $0$ & $0$ & $0 $ \\
$ \{ 1 \} $ & $0$ & $ \frac{ 10 }{ 17 } $ & $ - \frac{ 5 }{ 17 } $ & $ - \frac{ 5 }{ 17 } $ \\
$ \{ 2 \} $ & $0$ & $ - \frac{ 5 }{ 34 } $ & $ \frac{ 37 }{ 272 } $ & $ \frac{ 3 }{ 272 } $ \\
$ \{ 3 \} $ & $0$ & $ - \frac{ 5 }{ 34 } $ & $ \frac{ 3 }{ 272 } $ & $ \frac{ 37 }{ 272 } $ \\
$ \{ 1, 2 \} $ & $1$ & $ \frac{ 25 }{ 34 } $ & $ \frac{ 87 }{ 272 } $ & $ - \frac{ 15 }{ 272 } $\\
$ \{ 1, 3 \} $ & $1$ & $ \frac{ 25 }{ 34 } $ & $ - \frac{ 15 }{ 272 } $ & $ \frac{ 87 }{ 272 } $ \\
$ \{ 2, 3 \} $ & $0$ & $ - \frac{ 3 }{ 17 } $ & $ \frac{ 3 }{ 34 } $ & $ \frac{ 3 }{ 34 } $ \\
$ \{ 1,2,3 \} $ & $1$ & $ \boldsymbol{ \frac{ 13 }{ 17 } } $ & $ \boldsymbol{ \frac{ 2 }{ 17 } } $ & $ \boldsymbol{ \frac{ 2 }{ 17 } } $
\end{tabular}
\bigskip
\caption{ Decomposition of the three-player glove game as
$ v = v _1 + v _2 + v _3 $, following
\autoref{thm:weightedDecomp}, with weight function
$ w \bigl( \emptyset , \{ 1 \} \bigr) = \frac{1}{2} $ and
$ w = 1 $ otherwise. The bold values $ v _i (N) $ on the last line
no longer correspond to the Shapley values, since $w$ is not
totally symmetric.\label{tab:reluctantGlove} }
\end{table}
\end{example}
\begin{remark}
The weighted Shapley value of \citet{Shapley1953a} and
\citet{KaSa1987} also models asymmetry between players, but it does
so in a fundamentally different way from the approach considered
here. In \citet{Shapley1953a} each \emph{player} is assigned a
weight, while \citet{KaSa1987} generalized these player weights to
``weight systems.'' It would be interesting to investigate whether
these approaches can be related by using player weights (or weight
systems) to construct corresponding edge weights.
\end{remark}
\subsection{Decomposition of games with restricted cooperation}
The framework discussed in the preceding sections, as in
\citet{Shapley1953}, assumes that every player $ i \in N $ is willing
to join every coalition $ S \subset N $, so every such coalition may
be feasibly formed \emph{en route} to the grand coalition. In models
of restricted cooperation, however, this is not the case. The
\emph{precedence constraints} of \citet{FaKe1992} impose a partial
ordering on $N$, so that some players are constrained to join the
coalition prior to others. \citet{KhSeTa2016} have recently
generalized this to so-called digraph games, where precedence is
determined by a digraph on $N$ that (unlike the partial order of \citet{FaKe1992}) may
contain cycles; a player $i$ may be required to precede another player
$j$ in some coalitions but not others. (For another recent model of
restricted cooperation, see \citet{KoSuTa2017}.)
The constraints above all correspond to situations where a player $i$
is forbidden to join a coalition $ S \subset N \setminus \{ i \} $. In
this case, we say that the edge $ \bigl( S , S \cup \{ i \} \bigr) $
is \emph{infeasible}, and we remove it from the hypercube graph. If we
continue in this manner, removing all edges and vertices that are
incompatible with the constraints, then we arrive at a graph
$ G = ( V , E ) $ which is a \emph{subgraph} of the hypercube
graph. Here, $V$ contains the so-called \emph{feasible coalitions}
that are compatible with the constraints.
Assume that $G$ is connected and that $ \emptyset , N \in V $, so that
a coalition is feasible if and only if it can be formed starting from
$ \emptyset $, and the grand coalition is feasible. Since the Hodge
decomposition may be defined on any graph---in particular, on the
subgraph $G$ of the hypercube graph---we again obtain a decomposition
$ v = \sum _{ i \in N } v _i $, defined by
$ \mathrm{d} v _i = P \mathrm{d} _i v $ with
$ v _i ( \emptyset ) = 0 $ for $ i \in N $. Since $G$ is connected, we
again have $ \mathcal{N} ( \mathrm{d} ) \cong \mathbb{R} $, so the
decomposition is unique; moreover, it satisfies conditions (a)--(d) of
\autoref{thm:weightedDecomp}, if we interpret the missing edges as
having weight zero.
\begin{example}
In the glove game, suppose that player $1$ refuses to join the
coalition first, so that $ \{ 1 \} $ and all its incident edges are
removed from the graph. The resulting decomposition is shown in
\autoref{tab:holdout1Glove}. Note that $ v _1 = v $ and
$ v _2 = v _3 = 0 $, so that player $1$ captures \emph{all} of the
value of the game. The reason for this is that, by removing the only
edges on which players $2$ and $3$ contribute marginal value, the
constraints have turned players $2$ and $3$ into null
players. Observe also that $ \mathrm{d} v _i = \mathrm{d} _i v $ for
all $ i \in N $, so the constraints have effectively made the game
inessential.
\begin{table}
\centering
\renewcommand\arraystretch{1.4}
\begin{tabular}{crrrr}
$S$ & $v $ & $ v _1 $ & $ v _2 $ & $ v _3 $ \\
\hline
$ \emptyset $ & $0$ & $0$ & $0$ & $0 $ \\
$ \{ 2 \} $ & $0$ & $0$ & $0$ & $0$\\
$ \{ 3 \} $ & $0$ & $0$ & $0$ & $0$ \\
$ \{ 1, 2 \} $ & $1$ & $1$ & $0$ & $0$ \\
$ \{ 1, 3 \} $ & $1$ & $1$ & $0$ & $0$ \\
$ \{ 2, 3 \} $ & $0$ & $0$ & $0$ & $0$ \\
$ \{ 1,2,3 \} $ & $1$ & $ \boldsymbol{ 1 } $ & $ \boldsymbol{ 0 } $ & $ \boldsymbol{ 0 } $
\end{tabular}
\bigskip
\caption{Decomposition of the three-player glove game as
$ v = v _1 + v _2 + v _3 $, where $ \{ 1 \} $ and its incident edges
are removed from the hypercube graph. This causes players $2$ and $3$ to become null players, so player $1$ receives all the value, and the game becomes inessential.
\label{tab:holdout1Glove}}
\end{table}
\end{example}
\begin{example}
\label{ex:holdout2Glove}
On the other hand, suppose that player $2$ refuses to join
the coalition first, so that $ \{ 2 \} $ and all its incident edges
are removed from the graph. The resulting decomposition is shown in
\autoref{tab:holdout2Glove}. Unlike in the previous example, all
three players still contribute marginal value on some of the
remaining feasible edges. However, removing an edge on which player
$2$ contributes zero marginal value causes the payoff to player $2$
to increase from $ \frac{ 1 }{ 6 } $ to $ \frac{ 3 }{ 10 }
$. Interestingly, player $3$ also receives a slightly increased
payoff, from $ \frac{ 1 }{ 6 } $ to $ \frac{ 1 }{ 5 } $, since
player $3$ contributes zero marginal value to any coalition that
already contains player $2$, and one such coalition has been removed
from consideration. Both players $2$ and $3$ benefit at the expense
of player $1$, whose payoff is decreased from $ \frac{ 2 }{ 3 } $ to
$ \frac{1}{2} $.
\begin{table}
\centering
\renewcommand\arraystretch{1.4}
\begin{tabular}{crrrr}
$S$ & $v $ & $ v _1 $ & $ v _2 $ & $ v _3 $ \\
\hline
$ \emptyset $ & $0$ & $0$ & $0$ & $0 $ \\
$ \{ 1 \} $ & $0$ & $ \frac{ 3 }{ 10 } $ & $ - \frac{ 1 }{ 10 } $ & $ - \frac{ 1 }{ 5 } $ \\
$ \{ 3 \} $ & $0$ & $ - \frac{ 3 }{ 10 } $ & $ \frac{ 1 }{ 10 } $ & $ \frac{ 1 }{ 5 } $ \\
$ \{ 1, 2 \} $ & $1$ & $ \frac{ 2 }{ 5 } $ & $ \frac{ 3 }{ 5 } $ & $0$ \\
$ \{ 1, 3 \} $ & $1$ & $ \frac{1}{2} $ & $ \frac{ 1 }{ 10 } $ & $ \frac{ 2 }{ 5 } $ \\
$ \{ 2, 3 \} $ & $0$ & $ - \frac{ 2 }{ 5 } $ & $ \frac{ 1 }{ 5 } $ & $ \frac{ 1 }{ 5 } $ \\
$ \{ 1,2,3 \} $ & $ 1 $ & $ \boldsymbol{ \frac{1}{2} } $ & $ \boldsymbol{ \frac{ 3 }{ 10 } } $ & $ \boldsymbol{ \frac{ 1 }{ 5 } } $
\end{tabular}
\bigskip
\caption{Decomposition of the three-player glove game as
$ v = v _1 + v _2 + v _3 $, where $ \{ 2 \} $ and its incident edges
are removed from the hypercube graph.\label{tab:holdout2Glove} }
\end{table}
\end{example}
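This example can be verified numerically as well. The sketch below (our own illustration, assuming NumPy; players are indexed $0$, $1$, $2$, so the refusing player $2$ has index $1$) removes the infeasible vertex and its incident edges, solves the unweighted least-squares problem on the subgraph with $ v _i ( \emptyset ) = 0 $, and reads off the payoffs $ ( \frac{1}{2}, \frac{3}{10}, \frac{1}{5} ) $ at the grand coalition.

```python
import itertools
import numpy as np

def subgraph_decomposition(n, v, infeasible):
    """Hodge decomposition of the game v on the hypercube subgraph obtained
    by deleting the given infeasible coalitions and their incident edges."""
    verts = [frozenset(T) for k in range(n + 1)
             for T in itertools.combinations(range(n), k)
             if frozenset(T) not in infeasible]
    idx = {S: j for j, S in enumerate(verts)}
    edges = [(S, i) for S in verts for i in range(n)
             if i not in S and (S | {i}) in idx]
    A = np.zeros((len(edges), len(verts)))
    for r, (S, i) in enumerate(edges):
        A[r, idx[S | {i}]] = 1.0
        A[r, idx[S]] = -1.0
    A = A[:, 1:]  # drop the empty-set column to pin v_i(empty) = 0
    comps = []
    for i in range(n):
        b = np.array([(v[S | {j}] - v[S]) if j == i else 0.0
                      for (S, j) in edges])
        u = np.linalg.lstsq(A, b, rcond=None)[0]
        comps.append({S: (0.0 if not S else u[idx[S] - 1]) for S in verts})
    return comps

v = {frozenset(T): (1 if 0 in T and (1 in T or 2 in T) else 0)
     for k in range(4) for T in itertools.combinations(range(3), k)}
comps = subgraph_decomposition(3, v, {frozenset({1})})  # player 2 won't go first
```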
\begin{remark}
\label{rmk:precedence}
The values obtained in \autoref{ex:holdout2Glove} are different
from the Shapley values with precedence constraints in
\citet{FaKe1992,KhSeTa2016}, which are
$ ( \frac{1}{2}, \frac{ 1 }{ 4 } , \frac{ 1 }{ 4 } ) $. These take
the approach of averaging over feasible permutations, a
generalization of \eqref{eqn:shapleyPermutation}. In this case,
there are four feasible permutations---$ ( 1, 2, 3 ) $,
$ ( 1, 3, 2 ) $, $ ( 3, 1, 2 ) $, and $ ( 3, 2, 1 ) $---and player
$1$ contributes marginal value $1$ in two of these, while players
$2$ and $3$ each contribute marginal value $1$ in one
permutation. This illustrates that, unlike the case in
\autoref{sec:decomp}, these solution concepts are generally
distinct.
(We also note that, as pointed out in \autoref{sec:introGames}, it
is computationally undesirable to average over permutations, since
this is factorial in $ \lvert N \rvert $, whereas solving a
$ \lvert V \rvert \times \lvert V \rvert $ linear system is only
exponential in $ \lvert N \rvert $.)
\end{remark}
Finally, we note that we may also consider \emph{weighted} subgraphs
of the hypercube graph, combining the approach above with that of
\autoref{sec:weightedDecomp}, in the obvious way. The next example
illustrates that if we weight the edges according to the degrees of
the incident vertices---which are generally non-constant once edges
have been removed---we may obtain different decompositions of the
game than if we used a constant edge weight.
\begin{example}
Consider again the restricted glove game of
\autoref{ex:holdout2Glove}, where player $2$ refuses to join the
coalition first, so that $ \{ 2 \} $ and its incident edges are
removed from the hypercube graph. Instead of taking the edge
weights $ w \equiv 1 $, suppose we take
$ w ( a, b ) = \mathrm{deg}(a) \mathrm{deg}(b) $. This weight
function is related to a graph Laplacian with vertex weights
considered by \citet{Lovasz1979}, as observed by \citet{ChLa1996}.
The resulting decomposition is shown in
\autoref{tab:holdout2GloveWeight}. In this case, the players
receive the payoffs
$ ( \frac{1}{2} , \frac{ 1 }{ 4 } , \frac{ 1 }{ 4 } ) $, which is
distinct from the constant-weight payoffs
$ ( \frac{1}{2} , \frac{ 3 }{ 10 } , \frac{ 2 }{ 5 } ) $ of
\autoref{ex:holdout2Glove}. (This also happens to agree with the
Shapley values with precedence constraints in
\autoref{rmk:precedence}, although this need not be the case for an
arbitrary game.) Note that, although $ v _2 (N) = v _3 (N) $, the
component games $ v _2 $ and $ v _3 $ display asymmetry elsewhere:
e.g.,
$ v _2 \bigl( \{ 2 , 3 \} \bigr) \neq v _3 \bigl( \{ 2, 3 \} \bigr)
$. This is due to the asymmetry between players $2$ and $3$ in the
graph.
\begin{table}
\centering
\renewcommand\arraystretch{1.4}
\begin{tabular}{crrrr}
$S$ & $v $ & $ v _1 $ & $ v _2 $ & $ v _3 $ \\
\hline
$ \emptyset $ & $0$ & $0$ & $0$ & $0 $ \\
$ \{ 1 \} $ & $0$ & $ \frac{ 1 }{ 3 } $ & $ - \frac{ 1 }{ 12 } $ & $ - \frac{ 1 }{ 4 } $ \\
$ \{ 3 \} $ & $0$ & $ - \frac{ 1 }{ 3 } $ & $ \frac{ 1 }{ 12 } $ & $ \frac{ 1 }{ 4 } $ \\
$ \{ 1, 2 \} $ & $1$ & $ \frac{ 5 }{ 12 } $ & $ \frac{ 7 }{ 12 } $ & $0$ \\
$ \{ 1, 3 \} $ & $1$ & $ \frac{1}{2} $ & $ \frac{ 1 }{ 12 } $ & $ \frac{ 5 }{ 12 } $ \\
$ \{ 2, 3 \} $ & $0$ & $ - \frac{ 5 }{ 12 } $ & $ \frac{ 1 }{ 6 } $ & $ \frac{ 1 }{ 4 } $ \\
$ \{ 1,2,3 \} $ & $ 1 $ & $ \boldsymbol{ \frac{1}{2} } $ & $ \boldsymbol{ \frac{ 1 }{ 4 } } $ & $ \boldsymbol{ \frac{ 1 }{ 4 } } $
\end{tabular}
\bigskip
\caption{Decomposition of the three-player glove game as
$ v = v _1 + v _2 + v _3 $, where $ \{ 2 \} $ and its incident
edges are removed from the hypercube graph, and where each edge
is weighted by the product of the degrees of its incident
vertices.\label{tab:holdout2GloveWeight} }
\end{table}
\end{example}
\section{Conclusion}
We have used the combinatorial Hodge decomposition on a hypercube
graph to show that any cooperative game may be decomposed into a sum
of component games, one component for each player, so that this
decomposition satisfies appropriate efficiency, null-player,
symmetry, and linearity properties. This yields a new
characterization of the Shapley value as the value of the grand
coalition in each player's component game.
We have also shown that this game decomposition may be understood in
terms of the least-squares solution to a linear problem, where the
solution is exact if and only if the game is inessential. In this
sense, our decomposition may be considered as an edge-based (rather
than vertex-based) variant of the least-squares and minimum-norm
solution concepts of \citet{RuVaZa1998} and \citet{KuSa2007}. The
normal equations for this linear problem yield another, equivalent
characterization of the game decomposition in terms of the
well-studied graph Laplacian. This allowed us to obtain an explicit
formula for each component game by applying the discrete Green's
function for the hypercube graph Laplacian.
Finally, we have shown how this decomposition may be generalized, in
a natural way, using the combinatorial Hodge decomposition for
weighted graphs and subgraphs of the hypercube graph. These
generalized decompositions preserve the efficiency, null-player,
symmetry (in an appropriate sense, modulo the symmetry of the weights
and the subgraph), and linearity properties obtained earlier. This
yields a family of decompositions, and corresponding solution
concepts, for problems where players exhibit variable willingness or
unwillingness to join certain coalitions, and we have compared and
contrasted these solution concepts with those of
\citet{FaKe1992,KhSeTa2016} for certain models of restricted
cooperation.
\subsection*{Acknowledgments}
Ari Stern was supported in part by a grant from the Simons Foundation
(\#279968). Alexander Tettenhorst was supported in part by the ARTU
research fellowship program in the Department of Mathematics at
Washington University in St.~Louis.
\section{Introduction}
We consider finite simple undirected graphs and identify edges with two-element subsets of the vertex set. We call a graph \emph{non-trivial} if it has at least one edge. Let $G=(V,E)$ be a graph with vertex set $V$ and edge set $E$. For $W\subseteq V$, the \emph{(vertex) induced subgraph} $G[W]$ is the graph
\[
G[W] = (W, \{\{u,v\}\in E: u,v\in W\}).
\]
We denote the number of isolated vertices in $G$ by $\mathrm{iso}(G)$. The \emph{open neighborhood} $N_{G}(v)$ of a vertex
$v\in V$ is the set of all vertices that are adjacent to $v$ in $G$. Analogously, we define
\[
N_{G}(W)=\bigcup_{v\in W}N_{G}(v)
\]
for any vertex subset $W\subseteq V$. The \emph{closed neighborhood} $N_{G}[W]$ of a vertex subset $W\subseteq V$ is simply the set $N_{G}(W)\cup W$. If the graph is known from the context, we write $N(v)$ and $N(W)$ instead of $N_{G}(v)$ and $N_{G}(W)$, respectively. The maximum degree $\Delta(G)$ of the graph $G$ is defined as $\max_{v\in V}{|N(v)|}$.
A vertex set $W\subseteq V$ is called an \emph{independent dominating set} of $G$ if
$N_{G}[W]=V$ and the induced subgraph has only isolated vertices, in other words $\mathrm{iso}(G[W])=|W|$. The \emph{independent domination polynomial} of a graph $G$ is the
ordinary generating function for the number of independent dominating sets of $G$:
\[
\id(G,x)=\sum_{\substack{W\subseteq V \\N_{G}[W]=V\\\mathrm{iso}(G[W])=|W|}}x^{|W|}.
\]
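By way of illustration (not part of the original text), the polynomial can be computed for small graphs by direct enumeration of vertex subsets; the following Python sketch tests each subset for independence and for the closed-neighborhood condition $N_G[W]=V$. For the path $P_3$ this gives $\id(P_3,x)=x+x^2$, coming from the two independent dominating sets $\{2\}$ and $\{1,3\}$.

```python
from itertools import combinations

def id_poly(n, edges):
    """Coefficient list of the independent domination polynomial:
    entry k counts the independent dominating sets of size k."""
    adj = {u: set() for u in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    coeff = [0] * (n + 1)
    for k in range(n + 1):
        for W in combinations(range(n), k):
            W = set(W)
            independent = all(adj[u].isdisjoint(W) for u in W)
            dominating = all(u in W or adj[u] & W for u in range(n))  # N[W] = V
            if independent and dominating:
                coeff[k] += 1
    return coeff
```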
The independent domination number of the graph is a well known graph parameter (see \cite{Goddard2013} for an overview). But little is known about counting independent dominating sets. The independent domination polynomial can be obtained from the trivariate total domination polynomial (see \cite{Dod2014a}).
The paper is organized as follows. In the following section we introduce some basic properties of the independent domination polynomial. In Section \ref{section::recurrenceEquations} we prove some basic recurrence equations and in Section \ref{section::id-specialgraphclasses} we give equations for some special graph classes.
\section{The independent domination polynomial}
Like many other graph polynomials, the independent domination polynomial is multiplicative with respect to the components of the graph (which also follows from the connection to the trivariate total domination polynomial \cite{Dod2014a}).
\begin{lemma}\label{lemma::id-twocomponents}
Let $G=(V,E)$ be a graph with two components $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$. Then
\begin{equation*}
\id(G,x) = \id(G_1,x) \id(G_2,x).
\end{equation*}
\end{lemma}
\begin{proof}
The proof of the lemma follows directly from the definition of the polynomial.
\end{proof}
The join of two graphs was first defined by Harary in 1969.
\begin{definition}\cite{Harary1969}
The join $G \join H$ of two graphs $G=(V(G),E(G))$ and $H=(V(H),E(H))$ is the graph union $G\cup H$ together with all the edges joining $V(G)$ and $V(H)$.
\end{definition}
\begin{theorem}
Let $G=(V(G),E(G))$ and $H=(V(H),E(H))$ be two non-trivial graphs. Then
\begin{equation*}
\id(G \join H,x) = \id(G,x)+\id(H,x).
\end{equation*}
\end{theorem}
\begin{proof}
It suffices to observe that any independent dominating set of $G \join H$ is completely contained in exactly one of the vertex sets $V(G)$ or $V(H)$.
\end{proof}
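The join identity can be confirmed numerically; the sketch below (ours) builds the join and compares coefficient lists, reusing a brute-force routine for $\id$.

```python
from itertools import combinations

def id_coeffs(verts, edges):
    # brute-force coefficients of id(G, x)
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (len(verts) + 1)
    for k in range(len(verts) + 1):
        for W in combinations(verts, k):
            S = set(W)
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in verts):
                c[k] += 1
    return c

def join(vg, eg, vh, eh):
    """Join of G and H: disjoint union plus every edge between V(G)
    and V(H); the vertices of H are shifted to keep labels disjoint."""
    off = max(vg) + 1
    vh2 = [v + off for v in vh]
    return (vg + vh2,
            eg + [(u + off, w + off) for u, w in eh]
               + [(u, w) for u in vg for w in vh2])

# id(P_3 join P_2, x) = id(P_3, x) + id(P_2, x) = 3x + x^2
vj, ej = join([0, 1, 2], [(0, 1), (1, 2)], [0, 1], [(0, 1)])
print(id_coeffs(vj, ej))  # [0, 3, 1, 0, 0, 0]
```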
Frucht and Harary introduced the corona of two graphs in 1970.
\begin{definition}\cite{Frucht1970}
Let $G=(V(G),E(G))$ and $H=(V(H),E(H))$ be graphs. Then the corona of $G$ and $H$ is the graph $G\corona H$ which is the disjoint union of $G$ and $|V(G)|$ copies of $H$ and every vertex $v$ of $G$ is adjacent to every vertex in the corresponding copy of $H$.
\end{definition}
The independence polynomial is the ordinary generating function for the number of independent sets in the graph and is denoted by $\ind(G,x)$ (see \cite{Levit2005}).
\begin{theorem}
Let $G=(V(G),E(G))$ and $H=(V(H),E(H))$ be graphs, $|V(G)|\geq 1$, $|V(H)|\geq 1$ and $n=|V(G)|$. Then
\begin{equation*}
\id(G \corona H,x) = \id(H,x)^{n} \ind\left(G,\frac{x}{\id(H,x)}\right).
\end{equation*}
\end{theorem}
\begin{proof}
Every independent vertex subset of $G$ can be expanded to an independent dominating set in $G \corona H$. Let $S\subseteq V(G)$ be such an independent set. Then $S$, together with an independent dominating set chosen in the copy of $H$ attached to each vertex of $V(G)-S$, forms an independent dominating set of $G\corona H$, and every independent dominating set of $G\corona H$ arises this way. Precisely, if $|S|=k$, then an independent dominating set must be chosen in each of the $(n-k)$ copies of $H$ attached to the vertices of $V(G)-S$. Let $i_k$ be the coefficient of $x^k$ in $\ind(G,x)$. Then
\begin{align*}
\id(G \corona H,x) =& \sum \limits_{k=0}^{n} i_k x^k \id(H,x)^{n-k}\\
=& \id(H,x)^{n} \sum \limits_{k=0}^{n}i_k x^{k}\id(H,x)^{-k}\\
=& \id(H,x)^{n} \ind\left(G,\frac{x}{\id(H,x)}\right)
\end{align*}
and therefore the theorem follows.
\end{proof}
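Clearing denominators, the identity reads $\id(G \corona H,x)=\sum_{k=0}^n i_k x^k \id(H,x)^{n-k}$, which the following sketch (ours) verifies on a small example against a brute-force count on the corona graph.

```python
from itertools import combinations

def id_coeffs(verts, edges):
    # brute-force coefficients of id(G, x)
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (len(verts) + 1)
    for k in range(len(verts) + 1):
        for W in combinations(verts, k):
            S = set(W)
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in verts):
                c[k] += 1
    return c

def ind_coeffs(verts, edges):
    # i[k] = number of independent sets of size k
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    return [sum(1 for W in combinations(verts, k)
                if all(not (adj[u] & set(W)) for u in W))
            for k in range(len(verts) + 1)]

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def strip(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def corona(vg, eg, vh, eh):
    verts, edges, off = list(vg), list(eg), max(vg) + 1
    for v in vg:
        copy = [off + u for u in vh]
        verts += copy
        edges += [(off + a, off + b) for a, b in eh]
        edges += [(v, u) for u in copy]
        off += len(vh)
    return verts, edges

vg, eg = [0, 1], [(0, 1)]                      # G = H = P_2
n, i, h = len(vg), ind_coeffs(vg, eg), id_coeffs(vg, eg)
rhs = [0]
for k in range(n + 1):
    p = [0] * k + [i[k]]                       # i_k x^k
    for _ in range(n - k):
        p = pmul(p, h)                         # times id(H, x)^(n - k)
    rhs = [a + b for a, b in zip(rhs + [0] * len(p), p + [0] * len(rhs))]
lhs = id_coeffs(*corona(vg, eg, vg, eg))
print(strip(lhs), strip(rhs))                  # both [0, 0, 8]: 8x^2
```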
\begin{corollary}
Let $G=(V,E)$ be a graph with $n$ vertices and $E_r$ an edgeless graph with $r$ ($r>0$) vertices. Then
\begin{equation*}
\id(G \corona E_r,x) = x^{rn} \ind(G,x^{1-r}).
\end{equation*}
\end{corollary}
\begin{definition}\cite{Goddard2013}
Let $G=(V,E)$ be a graph. Then the r-expansion $\exp(G,r)$ is the graph obtained from $G$ by replacing every vertex $v\in V$ with an independent set $I_v$ of size $r$ and replacing every edge $uv\in E$ with a complete bipartite graph with the bipartite sets $I_u$ and $I_v$.
\end{definition}
\begin{theorem}
Let $G=(V,E)$ be a graph and $\exp(G,r)$ its r-expansion. Then
\begin{equation*}
\id(\exp(G,r),x)=\id(G,x^r)
\end{equation*}
\end{theorem}
\begin{proof}
Let $W\subseteq V$ be an independent dominating set in $G$. Then in $\exp(G,r)$ all vertices in $I_w$, for $w\in W$, must be dominating and all vertices in $I_u$, for $u\in V-W$, are non-dominating (because of the complete bipartite graphs between vertices in $I_w$ and $I_u$). Therefore, every independent dominating set in $G$ can be expanded to exactly one independent dominating set in $\exp(G,r)$ and vice versa.
\end{proof}
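A quick computational check (ours): the $r$-expansion is built literally from the definition, and its coefficient list must be that of $\id(G,x)$ with every exponent multiplied by $r$.

```python
from itertools import combinations

def id_coeffs(verts, edges):
    # brute-force coefficients of id(G, x)
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (len(verts) + 1)
    for k in range(len(verts) + 1):
        for W in combinations(verts, k):
            S = set(W)
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in verts):
                c[k] += 1
    return c

def expansion(verts, edges, r):
    """Replace each vertex by an independent set of size r and each
    edge by a complete bipartite graph between the two sets."""
    ev = [(v, i) for v in verts for i in range(r)]
    ee = [((u, i), (w, j)) for u, w in edges
          for i in range(r) for j in range(r)]
    return ev, ee

g = id_coeffs([0, 1, 2], [(0, 1), (1, 2)])            # id(P_3) = x + x^2
e = id_coeffs(*expansion([0, 1, 2], [(0, 1), (1, 2)], 2))
print(g)  # [0, 1, 1, 0]
print(e)  # [0, 0, 1, 0, 1, 0, 0] = id(P_3, x^2) = x^2 + x^4
```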
Kotek et al. proved in \cite{Kotek2013a} that the alternating sum over the domination polynomials of the vertex induced subgraphs equals $1+(-x)^n$. In contrast to this result, for connected graphs with at least two vertices the alternating sum over the independent domination polynomials equals one.
\begin{theorem}\label{theorem::id-suminducedsubgraphs}
Let $G=(V,E)$ be a connected graph with at least two vertices. Then
\begin{equation*}
\sum \limits_{W\subseteq V} (-1)^{|W|} \id(G[W],x) = 1.
\end{equation*}
\end{theorem}
\begin{proof}
First of all we insert the definition of the independent domination polynomial in the equation and change the order of the summation.
\allowdisplaybreaks{
\begin{align}
\sum \limits_{W\subseteq V} (-1)^{|W|} \id(G[W],x) =& \sum \limits_{W\subseteq V} (-1)^{|W|} \sum \limits_{\substack{U\subseteq W\\N_{G[W]}[U]=W\\\iso(G[U])=|U|}} x^{|U|}\notag\\
=& \sum \limits_{\substack{U\subseteq V\\\iso(G[U])=|U|}} x^{|U|} \sum \limits_{\substack{W: U\subseteq W\\N_{G[W]}[U]=W}} (-1)^{|W|}\notag\\
=& \sum \limits_{\substack{U\subseteq V\\\iso(G[U])=|U|}} x^{|U|} \sum \limits_{W: U\subseteq W \subseteq N_{G[W]}[U]} (-1)^{|W|}\label{eq::int1}\\
=& \sum \limits_{\substack{U\subseteq V\\\iso(G[U])=|U|}} x^{|U|} \sum \limits_{W: U\subseteq W \subseteq N_{G}[U]} (-1)^{|W|}\label{eq::int2}\\
=& \sum \limits_{\substack{U\subseteq V\\\iso(G[U])=|U|}} (-x)^{|U|} \sum \limits_{Y \subseteq N_{G}(U)} (-1)^{|Y|}\notag\\
=& 1\notag.
\end{align}}
In Equation (\ref{eq::int1}) we sum over all vertex subsets $W$ such that $U$ is an independent dominating set in $G[W]$. The condition $W \subseteq N_{G[W]}[U]$ in Equation (\ref{eq::int1}) guarantees that we sum only over subsets $W$ such that $U$ is a dominating set in $G[W]$. Hence, $W$ can be any subset of $N_G[U]$ that contains $U$, which yields Equation (\ref{eq::int2}). Since $U$ is included in every subset $W$ of the inner sum, the summation runs over the subsets $Y=W\backslash U$ of $N_G(U)$, and $(-1)^{|U|}$ is factored out of the inner sum. The inner sum vanishes unless $N_G(U)=\emptyset$; as $G$ is connected with at least two vertices, this happens only for $U=\emptyset$, whose term equals one, and the theorem follows.
\end{proof}
\begin{remark}\label{remark::id-suminducedsubgraphs}
Let $G=(\{v\},\emptyset)$ be a graph with one vertex. Then
\begin{equation*}
\sum \limits_{W\subseteq V} (-1)^{|W|} \id(G[W],x) = 1-x.
\end{equation*}
\end{remark}
If the graph has more than one component we get as a consequence of Theorem \ref{theorem::id-suminducedsubgraphs} and Remark \ref{remark::id-suminducedsubgraphs} the following corollary.
\begin{corollary}\label{corollary::id-suminducedsubgraphs}
Let $G=(V,E)$ be a graph. Then
\begin{equation*}
\sum \limits_{W\subseteq V} (-1)^{|W|} \id(G[W],x) = (1-x)^{\iso(G)}.
\end{equation*}
\end{corollary}
Applying M\"obius inversion to Corollary \ref{corollary::id-suminducedsubgraphs} gives the next corollary.
\begin{corollary}\label{corollary::id-alternatingsum}
Let $G=(V,E)$ be a graph. Then
\begin{equation*}
\id(G,x) = \sum \limits_{W\subseteq V} (-1)^{|W|} (1-x)^{\iso(G[W])}.
\end{equation*}
\end{corollary}
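The corollary is easy to test mechanically; the sketch below (ours) expands $(1-x)^{\iso(G[W])}$ with binomial coefficients and compares against a brute-force count.

```python
from itertools import combinations
from math import comb

def id_coeffs(verts, edges):
    # brute-force coefficients of id(G, x)
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (len(verts) + 1)
    for k in range(len(verts) + 1):
        for W in combinations(verts, k):
            S = set(W)
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in verts):
                c[k] += 1
    return c

def alt_sum(verts, edges):
    """sum over W of (-1)^|W| (1-x)^{iso(G[W])} as a coefficient list."""
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    n = len(verts)
    c = [0] * (n + 1)
    for k in range(n + 1):
        for W in combinations(verts, k):
            S = set(W)
            m = sum(1 for u in S if not (adj[u] & S))  # iso(G[W])
            for j in range(m + 1):                     # expand (1-x)^m
                c[j] += (-1) ** k * comb(m, j) * (-1) ** j
    return c

p4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])
print(id_coeffs(*p4), alt_sum(*p4))   # both [0, 0, 3, 0, 0]: id(P_4) = 3x^2
```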
The previous corollary gives us a formula to calculate the coefficients of the independent domination polynomial.
\begin{corollary}
Let $G=(V,E)$ be a graph with $n$ vertices. Then
\begin{equation*}
\id(G,x) = \sum \limits_{k=0}^{n} x^k \sum \limits_{\substack{W\subseteq V\\\iso(G[W])\geq k}} (-1)^{|W|+k} \binom{\iso(G[W])}{k}.
\end{equation*}
\end{corollary}
\begin{proof}
Using the Corollary \ref{corollary::id-alternatingsum}, we get
\begin{align*}
\id(G,x) &= \sum \limits_{W\subseteq V} (-1)^{|W|} (1-x)^{\iso(G[W])}\\
&= \sum \limits_{W\subseteq V} (-1)^{|W|} \sum \limits_{k=0}^{\iso(G[W])} \binom{\iso(G[W])}{k} (-x)^k\\
&= \sum \limits_{k=0}^{n} (-x)^k \sum \limits_{W\subseteq V} (-1)^{|W|} \binom{\iso(G[W])}{k}\\
&= \sum \limits_{k=0}^{n} x^k \sum \limits_{\substack{W\subseteq V\\\iso(G[W])\geq k}} (-1)^{|W|+k} \binom{\iso(G[W])}{k}.
\end{align*}
\end{proof}
We can use these results to prove a theorem which offers a fast way to calculate the independent domination polynomial. This theorem uses the \emph{i}-essential sets of a graph. The concept of essential sets of a graph was introduced by Kotek, Preen, and Tittmann \cite{Kotek2013a} for the calculation of the domination polynomial.
\begin{definition}
Let $G=(V,E)$ be a graph and $W$ a vertex subset of the graph. The set $W$ is called \emph{i}-essential if $W$ contains the open neighborhood of at least one vertex of $V\backslash W$. We denote by $\essi(G)$ the family of \emph{i}-essential sets of $G$, in formula:
\begin{equation*}
\essi(G) = \{X\subseteq V: \exists v\in V\backslash X: N(v) \subseteq X\}.
\end{equation*}
\end{definition}
An open problem concerning \emph{i}-essential sets is: Can the number of \emph{i}-essential sets of a given graph be calculated, without calculating the sets themselves? The next lemma gives two basic properties of the \emph{i}-essential sets, which may be helpful to get a better understanding of these sets.
\begin{lemma}
Let $G=(V,E)$ be a graph. Then
\begin{equation*}
\min_{W\in \essi(G)} \{|W|\} = \delta(G)
\end{equation*}
and
\begin{equation*}
N(v) \in \essi(G) \quad \forall v \in V.
\end{equation*}
\end{lemma}
\begin{remark}
If $W$ is an independent dominating set of the graph $G$ and $W\subset U$, then $U$ is not an independent dominating set. Let $S$ be the partially ordered set $(\mathcal{P}(V), \subseteq)$; then the set of the independent dominating sets of $G$ is an anti-chain in $S$.
\end{remark}
\begin{theorem}
Let $G=(V,E)$ be a graph with $n$ vertices, then
\begin{equation*}
\id(G,x) = (-1)^n \sum \limits_{U\in \essi(G)} (-1)^{|U|} \left((1-x)^{|\{v\in V\backslash U:N_G(v)\subseteq U\}|}-1\right).
\end{equation*}
\end{theorem}
\begin{proof}
We obtain by Corollary \ref{corollary::id-alternatingsum}:
\begin{align*}
\id(G,x) =& \sum \limits_{W\subseteq V} (-1)^{|W|} (1-x)^{\iso(G[W])}\\
=& \sum \limits_{U\subseteq V} (-1)^{|V\backslash U|} (1-x)^{\iso(G[V\backslash U])}\\
=& (-1)^n \sum \limits_{U\subseteq V} (-1)^{|U|} (1-x)^{|\{v\in V\backslash U:N_G(v)\subseteq U\}|}\\
=& (-1)^n \sum \limits_{U\in \essi(G)} (-1)^{|U|} \left((1-x)^{|\{v\in V\backslash U:N_G(v)\subseteq U\}|}-1\right).
\end{align*}
The second factor in the sum vanishes if and only if $\{v\in V\backslash U:N_G(v)\subseteq U\}=\emptyset$; subtracting $1$ from each summand changes nothing, since $\sum_{U\subseteq V}(-1)^{|U|}=0$ for $n\geq 1$. Consequently, only \emph{i}-essential sets contribute to the sum.
\end{proof}
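The theorem restricts the alternating sum to \emph{i}-essential sets; the sketch below (ours) checks this on $P_4$.

```python
from itertools import combinations
from math import comb

def id_coeffs(verts, edges):
    # brute-force coefficients of id(G, x)
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (len(verts) + 1)
    for k in range(len(verts) + 1):
        for W in combinations(verts, k):
            S = set(W)
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in verts):
                c[k] += 1
    return c

def essential_sum(verts, edges):
    """(-1)^n * sum over i-essential U of (-1)^|U| ((1-x)^{e(U)} - 1),
    where e(U) = |{v not in U : N(v) subset of U}|."""
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    n = len(verts)
    c = [0] * (n + 1)
    for k in range(n + 1):
        for U in combinations(verts, k):
            S = set(U)
            e = sum(1 for v in verts if v not in S and adj[v] <= S)
            if e == 0:
                continue                                 # U is not i-essential
            s = (-1) ** (n + k)
            for j in range(e + 1):                       # (1-x)^e
                c[j] += s * comb(e, j) * (-1) ** j
            c[0] -= s                                    # minus 1
    return c

p4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])
print(essential_sum(*p4))   # [0, 0, 3, 0, 0] = id(P_4, x)
```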
\section{Recurrence equations}\label{section::recurrenceEquations}
In this section we prove several recurrence equations for the independent domination polynomial. We need the following six graph operations:
\begin{itemize}
\item $G-v$ denotes the graph obtained from $G$ by removing the vertex $v\in V$ and all edges incident with $v$.
\item $G/v$ denotes the graph obtained from $G$ by removing the vertex $v\in V$ and adding edges between any pair of non-adjacent neighbors of $v$.
\item $G\odot v$ denotes the graph obtained from $G$ by removing all edges joining two neighbors of $v\in V$.
\item $G\circ v$ denotes the graph obtained from $G$ by removing $v$ and adding a loop to every neighbor of $v$.
\item $G-N[v]$ denotes the graph obtained from $G$ by deleting all vertices in the closed neighborhood $N_G[v]$ of $v$ together with the edges incident to them.
\item $G-e$ denotes the graph obtained from $G$ by removing the edge $e\in E$.
\end{itemize}
A loop at a vertex $u$ means that $u\in N(u)$; hence $u$ cannot be contained in any independent dominating set, while $u$ still has to be dominated by one of its neighbors. Consequently, $\id(G\circ v,x)$ is the independent domination polynomial of the graph $G-v$ under the condition that no vertex in $N(v)$ is dominating.
\begin{remark}
Let $G=(V,E)$ be a graph and $v\in V$. Then
\begin{equation*}
\id((G\odot v)\circ v,x) = \id(G\circ v,x).
\end{equation*}
\end{remark}
\begin{remark}\cite{Kotek2012}
Let $G=(V,E)$ be a graph and $v\in V$. Then
\begin{equation}\label{eqn::basics-operations-facts}
(G\odot v) - N[v] \cong G-N[v].
\end{equation}
\end{remark}
\begin{theorem}\label{theorem::id-recurrence}
Let $G=(V,E)$ be a graph and $v$ a vertex of the graph. Then
\begin{equation*}
\id(G,x) = \id(G-v,x) - \id(G\circ v,x) + x \id(G-N[v],x).
\end{equation*}
\end{theorem}
\begin{proof}
If the vertex $v$ is dominating, then it dominates all vertices in its neighborhood and these vertices cannot be dominating. This case is counted by $x \id(G-N[v],x)$. If the vertex $v$ is not dominating, then at least one vertex in $N(v)$ must be dominating. The polynomial $\id(G-v,x)$ counts these independent dominating sets, but it also counts those sets in which no vertex of $N(v)$ is dominating; these are exactly the sets counted by $\id(G\circ v,x)$, which we subtract to obtain the theorem.
\end{proof}
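The recurrence can be verified mechanically once loops are modelled: a looped vertex may not be chosen but still has to be dominated by a neighbour. The sketch below (ours) checks the identity for every vertex of $P_4$.

```python
from itertools import combinations

def id_coeffs(verts, edges, loops=frozenset()):
    """id coefficients; vertices in `loops` may not be chosen
    but must still be dominated by a neighbour."""
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (len(verts) + 1)
    for k in range(len(verts) + 1):
        for W in combinations(verts, k):
            S = set(W)
            if S & set(loops):
                continue
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in verts):
                c[k] += 1
    return c

def sub(verts, edges, removed):
    R = set(removed)
    return ([v for v in verts if v not in R],
            [(u, w) for u, w in edges if u not in R and w not in R])

verts, edges = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]     # P_4
adj = {v: {w for e in edges for w in e if v in e and w != v} for v in verts}
L = len(verts) + 2
pad = lambda p: p + [0] * (L - len(p))
lhs = pad(id_coeffs(verts, edges))
for v in verts:
    gv, ge = sub(verts, edges, {v})
    a = pad(id_coeffs(gv, ge))                            # id(G - v)
    b = pad(id_coeffs(gv, ge, loops=adj[v]))              # id(G o v)
    d = pad(id_coeffs(*sub(verts, edges, adj[v] | {v})))  # id(G - N[v])
    rhs = [a[i] - b[i] + (d[i - 1] if i > 0 else 0) for i in range(L)]
    assert rhs == lhs
print("id(G,x) = id(G-v,x) - id(G o v,x) + x id(G-N[v],x) holds on P_4")
```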
The next corollary follows directly from the last theorem and gives a recurrence equation under the condition that the neighborhood of the vertex $v$ has some special properties.
\begin{corollary}
Let $G=(V,E)$ be a graph, $u,v\in V$, $u\neq v$ and $N(u) = N(v)$. Then
\begin{equation*}
\id(G,x) = \id(G-v,x) + (x^2 -x) \id(G-N[v]-u,x).
\end{equation*}
\end{corollary}
\begin{proof}
Let $u$ and $v$ be two vertices of the graph with $N(u) = N(v)$. Then in the graph $G-N[v]$ the vertex $u$ is isolated. Hence, $u$ is included in every independent dominating set of $G-N[v]$. The polynomial $x\id(G-N[v]-u,x)$ counts this case and therefore $\id(G\circ v,x) = x\id(G-N[v]-u,x)$.
\end{proof}
If we use the $\odot$-operation we can prove the following theorem.
\begin{theorem}\label{theorem::id-recurrence2}
Let $G=(V,E)$ be a graph and $v\in V$. Then
\begin{equation*}
\id(G,x) = \id(G-v,x) + \id(G\odot v,x) - \id(G\odot v -v,x).
\end{equation*}
\end{theorem}
\begin{proof}
Applying the identity $\id((G\odot v)\circ v,x)=\id(G\circ v,x)$ to Theorem \ref{theorem::id-recurrence} gives
\begin{align}
\id(G,x) - \id(G-v,x) =& x \id(G-N[v],x) - \id(G\circ v,x) \notag\\
=& x \id(G-N[v],x)- \id((G\odot v) \circ v,x)\label{eqn::id-proof-rec1}.
\end{align}
Now we apply Theorem \ref{theorem::id-recurrence} to the graph $G\odot v$
\begin{align}
\id(G\odot v,x) - \id((G\odot v)-v,x) =& x \id((G\odot v)-N[v],x) - \id((G\odot v)\circ v,x)\label{eqn::id-proof-rec2}.
\end{align}
Subtracting Equation (\ref{eqn::id-proof-rec2}) from Equation (\ref{eqn::id-proof-rec1}) and using Equation (\ref{eqn::basics-operations-facts}) gives the theorem.
\end{proof}
It is also possible to prove a theorem which gives a recurrence equation for the deletion of an edge in the graph.
\begin{theorem}
Let $G=(V,E)$ be a graph and $e=\{u,v\}\in E$. Then
\begin{align*}
\id(G,x) =& \id(G-e,x) - x^2 \id(G-N[u,v],x)\\
&+ x \id(G\circ v - N[u],x) + x \id(G\circ u - N[v],x).
\end{align*}
\end{theorem}
\begin{proof}
Every independent dominating set of $G$ is independent in $G-e$, and it is dominating in $G-e$ unless one of $u,v$ is dominated only via the edge $e$. Conversely, $G-e$ has additional independent dominating sets containing both $u$ and $v$, which is impossible in $G$; these are counted by $x^2 \id(G-N[u,v],x)$ and must be subtracted. The sets in which $u$ is dominating while no vertex of $N(v)$ except $u$ is dominating are counted in $G$ but not in $G-e$; they are counted by $x \id(G\circ v - N[u],x)$, and symmetrically with the roles of $u$ and $v$ exchanged. Adding these two polynomials, the theorem follows.
\end{proof}
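As with the vertex recurrence, this identity can be checked mechanically with loops modelled as vertices that may not be chosen but must still be dominated. The sketch below (ours) verifies it for every edge of $P_4$.

```python
from itertools import combinations

def id_coeffs(verts, edges, loops=frozenset()):
    # brute force; looped vertices are excluded from the set but must be dominated
    adj = {v: set() for v in verts}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (len(verts) + 1)
    for k in range(len(verts) + 1):
        for W in combinations(verts, k):
            S = set(W)
            if S & set(loops):
                continue
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in verts):
                c[k] += 1
    return c

def sub(verts, edges, removed):
    R = set(removed)
    return ([v for v in verts if v not in R],
            [(u, w) for u, w in edges if u not in R and w not in R])

verts, edges = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]    # P_4
nbr = {v: {w for e in edges for w in e if v in e and w != v} for v in verts}
L = len(verts) + 3
pad = lambda p: p + [0] * (L - len(p))
lhs = pad(id_coeffs(verts, edges))

def circ_minus(x, y):
    # id(G o y - N[x]): remove y, put loops on N(y), delete N[x]
    vv, ee = sub(verts, edges, {y})
    vv, ee = sub(vv, ee, (nbr[x] - {y}) | {x})
    return id_coeffs(vv, ee, loops=nbr[y] - {x})

for (u, v) in edges:
    ge = [e for e in edges if e != (u, v)]
    a = pad(id_coeffs(verts, ge))                                     # id(G - e)
    b = pad(id_coeffs(*sub(verts, edges, nbr[u] | nbr[v] | {u, v})))  # id(G - N[u,v])
    c1, c2 = pad(circ_minus(u, v)), pad(circ_minus(v, u))
    rhs = [a[i] - (b[i - 2] if i >= 2 else 0)
           + (c1[i - 1] if i >= 1 else 0)
           + (c2[i - 1] if i >= 1 else 0) for i in range(L)]
    assert rhs == lhs
print("edge recurrence verified on P_4")
```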
\begin{corollary}
Let $G=(V,E)$ be a graph, $e=\{u,v\}$ be an edge of the graph and $N[u]=N[v]$. Then
\begin{equation*}
\id(G,x) = \id(G-e,x) + (2x-x^2)\id(G-N[u],x).
\end{equation*}
\end{corollary}
\section{Special graph classes}\label{section::id-specialgraphclasses}
In general, the coefficients of the independent domination polynomial are given by a $\#P$ counting problem \cite{Goddard2013}, but for some special graph classes we can prove recurrence or closed-form equations. For the edgeless graph $E_n$ the independent domination polynomial is simply $x^n$. In the complete graph every independent dominating set has size one and therefore
\begin{equation}\label{eqn::id-completeGraph}
\id(K_n,x) = nx.
\end{equation}
\begin{theorem}\label{theorem::id-completebipartite}
Let $K_{pq}=(V_1\cup V_2,E)$ be the complete bipartite graph with $|V_1|=p\geq 1$ and $|V_2|=q\geq 1$. Then
\begin{equation*}
\id(K_{pq},x) = x^p+x^q.
\end{equation*}
\end{theorem}
\begin{proof}
If at least one vertex of $V_1$ is dominating, then all vertices of $V_2$ are dominated and, by independence, no vertex of $V_2$ can be dominating. Since there are no edges inside $V_1$, all vertices of $V_1$ must then be dominating in order to dominate $V_1$ itself. The same argument holds if at least one vertex of $V_2$ is dominating, which yields the two summands $x^p$ and $x^q$.
\end{proof}
\begin{theorem}
Let $G=(V,E)$ be the path $P_n$ with at least four vertices. Then
\begin{equation*}
\id(P_n,x) = x\id(P_{n-2},x)+x\id(P_{n-3},x),
\end{equation*}
with the initial conditions
\begin{equation*}
\id(P_1,x) = x,\textnormal{ } \id(P_2,x) = 2x \textnormal{ and } \id(P_3,x) = x^2+x.
\end{equation*}
\end{theorem}
\begin{proof}
If the first vertex of the path is dominating, then the second vertex is dominated and cannot be dominating, so the remaining $n-2$ vertices must carry an independent dominating set of $P_{n-2}$. This case is counted by $x\id(P_{n-2},x)$. If the first vertex is not dominating, then the second vertex must be dominating and the third vertex is excluded; the remaining $n-3$ vertices give $x\id(P_{n-3},x)$ and the theorem follows.
\end{proof}
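The recurrence and its initial conditions can be cross-checked against brute force (our sketch):

```python
from itertools import combinations

def id_path_bruteforce(n):
    edges = [(i, i + 1) for i in range(n - 1)]
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (n + 1)
    for k in range(n + 1):
        for W in combinations(range(n), k):
            S = set(W)
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in range(n)):
                c[k] += 1
    return c

def id_path(n):
    """id(P_n) via id(P_n) = x id(P_{n-2}) + x id(P_{n-3})."""
    p = {1: [0, 1], 2: [0, 2], 3: [0, 1, 1]}
    for m in range(4, n + 1):
        a, b = p[m - 2], p[m - 3]
        L = max(len(a), len(b)) + 1
        p[m] = [(a[i - 1] if 0 < i <= len(a) else 0)
                + (b[i - 1] if 0 < i <= len(b) else 0) for i in range(L)]
    return p[n]

def strip(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

for n in range(1, 11):
    assert strip(id_path(n)) == strip(id_path_bruteforce(n))
print(id_path(7))   # [0, 0, 0, 6, 1] = 6x^3 + x^4
```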
Moreover, we can prove an explicit formula for the independent domination polynomial of the path $P_n$.
\begin{theorem}\label{theorem::path}
Let $G=(V,E)$ be the path $P_n$ with $n\geq 2$. Then
\begin{equation}\label{eqn::id-path}
\id(P_n,x)=\sum_{k=1}^{\lfloor (n+3)/2\rfloor} \binom{k+1}{n-2k+1}x^{k}.
\end{equation}
\end{theorem}
\begin{proof}
Let $p(n, k)$ be the number of independent dominating sets $W$ of $P_n$ with exactly $k$ vertices. By independence, between two consecutive vertices of $W$ along the path there has to be at least one vertex not in $W$; this accounts for $k-1$ vertices. The remaining $n - k - (k - 1) = n-2k+1$ vertices not in $W$ must be placed so that no vertex is left undominated: at most one of them may lie before the first vertex of $W$, at most one behind the last vertex of $W$, and at most one more between two consecutive vertices of $W$ (so that altogether at most two separate them). Hence there are $k + 1$ possible positions, each of which can take at most one extra vertex, and it follows that $p(n,k) = \binom{k+1}{n-2k+1}$. Consequently,
\begin{equation*}
\id(P_n,x)=\sum_{k=1}^{\lfloor (n+3)/2\rfloor} p(n,k) x^k =\sum_{k=1}^{\lfloor (n+3)/2\rfloor} \binom{k+1}{n-2k+1} x^k.
\end{equation*}
\end{proof}
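The coefficients $p(n,k)=\binom{k+1}{n-2k+1}$ can be compared with brute-force counts (our sketch; note that \texttt{math.comb} needs a guard, since it rejects negative arguments):

```python
from itertools import combinations
from math import comb

def p_formula(n, k):
    b = n - 2 * k + 1
    return comb(k + 1, b) if b >= 0 else 0   # comb already returns 0 for b > k+1

def p_bruteforce(n, k):
    edges = [(i, i + 1) for i in range(n - 1)]
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    return sum(1 for W in combinations(range(n), k)
               if all(not (adj[u] & set(W)) for u in W)
               and all(u in set(W) or (adj[u] & set(W)) for u in range(n)))

for n in range(2, 11):
    for k in range(n + 1):
        assert p_formula(n, k) == p_bruteforce(n, k)
print("p(n, k) = C(k+1, n-2k+1) verified for n = 2, ..., 10")
```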
We can use the polynomial of the path $P_n$ to prove a theorem for the cycle $C_n$.
\begin{theorem}
Let $G=(V,E)$ be the cycle $C_n$ ($n\geq 7$). Then
\begin{equation*}
\id(C_n,x) = 2x\id(P_{n-3},x)+x^2\id(P_{n-6},x).
\end{equation*}
\end{theorem}
\begin{proof}
Number the vertices of the cycle from $1$ to $n$. If the vertex $1$ is dominating, then its two neighbors $2$ and $n$ are dominated and cannot be dominating; the remaining vertices form a path on $n-3$ vertices, so this case is counted by $x\id(P_{n-3},x)$. If the vertex $1$ is not dominating, then one of its neighbors must be dominating. If the vertex $2$ is dominating, then the vertices $1$ and $3$ are excluded and the vertices $4,\ldots,n$ again form a path on $n-3$ vertices; this case is counted by $x\id(P_{n-3},x)$. If neither the vertex $1$ nor the vertex $2$ is dominating, then the vertices $3$ and $n$ must be dominating; this gives the last part of the sum and the theorem is proved.
\end{proof}
Using Equation (\ref{eqn::id-path}) gives
\begin{equation*}
\id(C_n,x) = \sum_{k=0}^{\lfloor \frac{n-2}{2}\rfloor }\left( 2 \binom{k+2}{n-2k-4}+\binom{k+1}{n-2k-5}\right) x^{k+2}.
\end{equation*}
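This closed form can likewise be checked against brute force on small cycles (our sketch; binomials with negative lower index are treated as zero):

```python
from itertools import combinations
from math import comb

def id_cycle_bruteforce(n):
    edges = [(i, (i + 1) % n) for i in range(n)]
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    c = [0] * (n + 1)
    for k in range(n + 1):
        for W in combinations(range(n), k):
            S = set(W)
            if all(not (adj[u] & S) for u in S) and \
               all(u in S or (adj[u] & S) for u in range(n)):
                c[k] += 1
    return c

def id_cycle_formula(n):
    c = [0] * (n + 1)
    for k in range((n - 2) // 2 + 1):
        t = 0
        if n - 2 * k - 4 >= 0:
            t += 2 * comb(k + 2, n - 2 * k - 4)
        if n - 2 * k - 5 >= 0:
            t += comb(k + 1, n - 2 * k - 5)
        c[k + 2] += t
    return c

for n in range(7, 12):
    assert id_cycle_formula(n) == id_cycle_bruteforce(n)
print(id_cycle_formula(7))   # [0, 0, 0, 7, 0, 0, 0, 0]: id(C_7) = 7x^3
```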
\section{Conclusion and open problems}
In the previous sections we obtained product theorems for the join, the corona and the $r$-expansion. Is it possible to find similar results for other graph products, for instance the Cartesian product $G\Box H$?
We introduced the $\circ$-operation for vertices of the graph and obtained a recurrence equation with respect to this operation. Is it possible to prove similar recurrence equations for the domination polynomial or the total domination polynomial?
\section*{Acknowledgement}
The author would like to thank Peter Tittmann for very helpful ideas and discussions which improved the paper.
Moreover, the author would like to express his gratitude to Manja Reinwardt and Ester Then for their careful reading and helpful comments.
\nocite{*}
| {
"timestamp": "2016-02-29T02:06:46",
"yymm": "1602",
"arxiv_id": "1602.08250",
"language": "en",
"url": "https://arxiv.org/abs/1602.08250",
"abstract": "A vertex subset $W\\subseteq V$ of the graph $G=(V,E)$ is an independent dominating set if every vertex in $V\\backslash W$ is adjacent to at least one vertex in $W$ and the vertices of $W$ are pairwise non-adjacent. The independent domination polynomial is the ordinary generating function for the number of independent dominating sets in the graph. We investigate in this paper properties of the independent domination polynomial and some interesting connections to well known counting problems.",
"subjects": "Combinatorics (math.CO)",
"title": "The Independent Domination Polynomial",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806518175514,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7077274119382474
} |
https://arxiv.org/abs/1311.3679 | Partitions of unity and coverings | The aim of this paper is to prove all well-known metrization theorems using partitions of unity. To accomplish this, we first discuss sufficient and necessary conditions for existence of $\mathcal{U}$-small partitions of unity (partitions of unity subordinated to an open cover $\mathcal{U}$ of a topological space $X$). | \section{Introduction}
The purpose of this paper is to further establish and demonstrate the utility of the calculus of partitions of unity. The setting we chose for this was to prove all the well-known metrization theorems in a coherent way that is accessible to students. The coherence stems from the utility of forming partitions of unity on collections of discretely distributed sets. Partitions of unity, viewed as functions to $l_1(S)$ or to simplicial complexes, offer both a geometric and an analytical standpoint when working with covers.
It is well known that an open covering $\mathcal{U}$ of a topological space $X$ can be intimately associated with simplicial complexes via mapping $X$ to the nerve $\mathcal{N}(\mathcal{U})$ of the covering. The goal of this paper is to display how fundamental this association is. In particular, we stress the relationship of barycentric subdivisions to star refinements, and of covers with $\sigma$-discrete refinements to countable joins of simplicial complexes. It seems to the authors that the correspondence between joins and discreteness of covers is an essential viewpoint when viewing the metrization theorems.
Traditionally partitions of unity are thought of as a collection of maps $\{f_s:X\to [0,1]: s \in S\}$ so that $\sum\limits_{s\in S}f_s(x)=1$ for every $x\in X$. There is another way to view partitions of unity that proves convenient in many proofs and fundamental in certain cases.
Let $S$ be a nonempty set. $\Delta(S)$ is defined to be the subspace of $l_1(S)$ of functions $f:S\to [0,1]$ such that $\| f \|_{l_1} = 1$. A partition of unity can be defined as a map $f:X\to l_1(S)$ for which the image of $X$ is contained in $\Delta(S)$. This definition is motivated by the observation that a partition of unity $\{f_s:X\to [0,1]:s\in S\}$ determines a unique function $f:X \to \Delta(S)$ where $x \mapsto f_x$ and $f_x:S\to [0,1]$ is defined by $f_x(s) = f_s(x)$.
One way of creating partitions of unity on a space $X$ is by normalizing families of non-negative functions whose sum is a positive, finite and continuous function $g: X \to (0,\infty)$.
\begin{Definition}\label{NormalizationDef}
Suppose $\{f_s\}_{s\in S}$ is a family of functions $f_s:X\to [0,\infty)$ such that $g=\sum\limits_{s\in S}f_s$ maps $X$ to $(0,\infty)$ and is continuous. The \textbf{normalization} of $\{f_s\}_{s\in S}$ is the partition of unity $\{\frac{f_s}{g}\}_{s\in S}$.
\end{Definition}
\begin{Definition}
Suppose $\mathcal{U}$ is a cover of a topological space $X$. A partition of unity $f=\{f_s\}_{s\in S}$
is called \textbf{$\mathcal{U}$-bounded} (or \textbf{$\mathcal{U}$-small}) if for each $s\in S$
the carrier $f_s^{-1}(0,1]$ of $f_s$ is contained in an element of $\mathcal{U}$. The carriers $f_s^{-1}(0,1]$ are sometimes referred to as the \textbf{cozero sets} of the partition.
Given $s \in S$ define the star of $s$ as $st(s) = \{f\in l_1(S):f(s) \neq 0\}$. An alternative definition of a partition of unity $f:X \to l_1(S)$ being \textbf{$\mathcal{U}$-bounded} (or \textbf{$\mathcal{U}$-small}) is that for each $s\in S$
the point inverse $f^{-1}(st(s))$ of the star is contained in an element of $\mathcal{U}$.
\end{Definition}
For a given space $X$ the existence of $\mathcal{U}$-small partitions of unity for coverings $\mathcal{U}$ is an essential question in geometric topology. In particular, the existence of extensions of partitions of unity defined on closed subspaces is fundamental in general topology, see \cite{DyExtTheory}.
We are grateful to Szymon Dolecki for asking which metrization theorems can be derived from partitions of unity.
\section{A Unifying Concept}
The purpose of this section is to introduce how we use partitions of unity to unify discreteness and joins of complexes. A consequence is an alternative formulation and proof of Urysohn's Metrization Theorem. The authors feel that the alternate formulation and proof offer more geometric intuition for metric spaces and, in particular, for the very important metric space $l_1$. Methods in this section foreshadow those used throughout the exposition. Consider the classical formulation of Urysohn's Metrization Theorem:
\begin{Theorem}[Urysohn's Metrization Theorem Version 1]
A second countable space is metrizable if and only if it is normal.
\end{Theorem}
A traditional method of proof for this theorem is through embedding into a product of intervals using Tychonoff's Theorem. We offer a different approach using partitions of unity in a way that acts as a prelude to Dydak's Metrization Theorem \ref{DydMetrThm} (see also the book \cite{Dol} of Sz.Dolecki).
\begin{Theorem}[Urysohn's Metrization Theorem Version 2]\label{Version 2}
A second countable space $X$ embeds into $l_1({\mathbb N})$ if and only if $X$ is normal.
\end{Theorem}
\begin{proof}[Proof of Version 2]
Notice that $X$ being normal implies there exists a collection of maps $\Delta$ from $X$ into $[0,1]$ so that $\{f^{-1}((0,1]): f \in \Delta\}$ forms a basis for $X$. Notice also that $\Delta$ can be chosen to be countable since $X$ is second countable. Denote the elements of $\Delta$ by $f_1,f_2,f_3,\ldots$
Consider the function $f:X \to l_1({\mathbb N})$ by $x \mapsto f_x$ where $f_x(n) = \frac{f_n(x)}{2^n}$.
To see that $f$ is continuous, fix $x\in X$ and $\epsilon >0$. Since $\sum\limits_{n \ge 1}2^{-n}=1$, there exists a natural number $M>0$ so that $\sum\limits_{n>M}2^{-n} < \frac{\epsilon}{4}$ and hence $\sum\limits_{n>M}\frac{f_n(y)}{2^n} < \frac{\epsilon}{4}$ for all $y \in X$. Each $f_n$ with $n\leq M$ is continuous, so there exists an open neighbourhood $U$ of $x$ such that $|f_n(x)-f_n(y)|< \frac{\epsilon}{2M}$ for all $y\in U$ and all $n\leq M$. For each $y \in U$ we get $\|f_x-f_y\|_{l_1} = \sum\limits_{n \ge 1} \frac{|f_n(x) - f_n(y)|}{2^n} \leq \sum\limits_{n=1}^{M} \frac{|f_n(x) - f_n(y)|}{2^n}+\sum\limits_{n > M} \frac{f_n(x) + f_n(y)}{2^n} < \sum\limits_{n=1}^M\frac{\epsilon}{2M} + 2\cdot \frac{\epsilon}{4} = \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$.
To see that this map is one-to-one, note that for $x \neq y$ there must exist, by Hausdorff considerations, $n\ge1$ so that $f_n^{-1}((0,1])$ contains $x$ but not $y$. Note that $f_y(n)= 0 \neq f_x(n)$.
The map $f$ is an embedding: it is continuous, one-to-one, and the sets $f^{-1}(st(n))=f_n^{-1}((0,1])$ form a basis of $X$ by assumption, so $f$ maps open sets to open subsets of its image.
\end{proof}
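As a finite numeric illustration (ours; the family of tent functions below is hand-picked and the series is truncated, so this is only a sanity check of the construction in the proof, not part of the argument):

```python
# tent functions f_n with carriers the open intervals (p, q) below;
# each f_n is 1-Lipschitz, so the weighted l_1 distance is controlled
intervals = [(i / 16, (i + 3) / 16) for i in range(14)]

def f(n, x):
    p, q = intervals[n]
    return max(0.0, min(x - p, q - x))

def embed(x):
    # truncation of f_x(n) = f_n(x) / 2^n
    return [f(n, x) / 2.0 ** (n + 1) for n in range(len(intervals))]

def l1(a, b):
    return sum(abs(s - t) for s, t in zip(a, b))

print(l1(embed(0.2), embed(0.8)) > 0)         # True: distinct points separate
print(l1(embed(0.3), embed(0.30001)) < 1e-3)  # True: nearby points stay close
```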
Notice that $X$ can inherit a metric when considered as a subspace of $l_1(S)$ defined as $d(x,y) = ||f_x-f_y||= \sum\limits_{s\in S}|f_x(s)-f_y(s)|$. The second author noticed that this same technique could be used to define metrics on spaces given a partition of unity whose carriers form a basis. Urysohn's Metrization Theorem could be proven in a class setting and stand as motivation to prove the following metrization theorem which was introduced in \cite{DyPU}.
\begin{Theorem}[Dydak's Metrization Theorem \cite{DyPU}]\label{DydMetrThm}
A space $X$ is metrizable if and only if there exists a continuous partition of unity $f:X\to l_1(V)$ so that $\{f^{-1}(st(v)): v \in V\}$ forms a basis for $X$.
\end{Theorem}
The proof of Version 2 of Urysohn's Metrization Theorem can be modified to give a different classification of metric spaces.
\begin{Corollary}
A space $X$ is metrizable if and only if there exists a set $V$ and an embedding of $X$ into $l_1(V)$, so that each element in the image has norm 1.
\end{Corollary}
\begin{proof}
Suppose $X$ is metrizable. By Dydak's Metrization Theorem \ref{DydMetrThm}, there exists a set $V$ and a continuous partition of unity $f:X\to l_1(V)$ so that $\{f^{-1}(st(v)): v \in V\}$ forms a basis for $X$. This $f$ must be one-to-one by Hausdorff considerations. It is an embedding since the sets $f^{-1}(st(v))$, $v\in V$, form a basis for $X$, so $f$ maps open sets to open subsets of its image. Conversely, a subspace of the metric space $l_1(V)$ is metrizable.
\end{proof}
\section{Star refinements}
This section is devoted to creation of partitions of unity using star refinements. Our major application is the Birkhoff Metrization Theorem for Groups \ref{BirkhoffMetrThm}.
\begin{Definition}
A cover $\mathcal{V} $ of a set $X$ is a \textbf{star-refinement} of a cover $\mathcal{U} $ if for each $x\in X$ the \textbf{star} $st(x,\mathcal{V} )=\bigcup \{V\in \mathcal{V} | x\in V\}$ is contained in an element of $\mathcal{U} $.
\end{Definition}
Notice that for a simplicial complex $K$ with vertex set $S$, the cover by open stars of the vertices in $S$ has a natural star refinement given by the open stars of the vertices of the barycentric subdivision $K'$ of $K$. In \cite{DyPU}, the second author shows how to use barycentric subdivisions of simplicial complexes to find star refinements of covers via the concept of derivatives of partitions of unity.
\begin{Definition}
Let $X$ be a space, $S \neq \emptyset$, and $\{f_s\}_{s\in S}$ be a partition of unity on $X$. The $\textbf{derivative}$ of $\{f_s\}_{s\in S}$ is a partition of unity $\{f'_T\}_{T \subset S}$ indexed by all finite subsets of $S$ so that
1) $f_s=\sum\limits_{T \ni s}\frac{f'_T}{|T|}$ for each $s \in S$
2) $f'_T(x) \neq 0$ and $f'_F(x) \neq 0$ implies $T \subset F$ or $F \subset T$.
\end{Definition}
The definition above is modeled after the special case for a continuous partition of unity $f:X \to |K|$ where the derivative is the induced map to the barycentric subdivision of $K$. See \cite{DyPU} for a constructive proof that derivatives of arbitrary partitions of unity always exist and are unique. The following proposition, proved in \cite{DyPU}, shows how to obtain star refinements from partitions of unity.
\begin{Proposition}[\cite{DyPU}]\label{ExistenceOfStarRef}
Suppose $f: X \to l_1(S)$ is a partition of unity and $f':X \to l_1(\{T \subset S:|T|<\infty\})$ is its derivative. The open cover $\{(f')^{-1}(st(T)) | T \subset S:|T|<\infty\}$ of $X$ is a star refinement of the cover $\{f^{-1}(st(s)) | s\in S\}$ of $X$.
\end{Proposition}
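Pointwise, the derivative admits an explicit description: sorting the barycentric coordinates of $f(x)$ locates the point in a simplex of the barycentric subdivision. The sketch below is ours (the sorted-differences formula is our reconstruction from the two defining conditions, not taken from \cite{DyPU}):

```python
def derivative_at_point(t):
    """Given barycentric coordinates t = {s: t_s} with sum 1, return
    {T: f'_T}: the coordinates of the same point in the barycentric
    subdivision.  The supports T form a chain by construction."""
    items = sorted(t.items(), key=lambda kv: -kv[1])
    out = {}
    for i, (_, ti) in enumerate(items):
        tnext = items[i + 1][1] if i + 1 < len(items) else 0.0
        if ti > tnext:                     # skip zero coefficients (ties)
            T = frozenset(s for s, _ in items[:i + 1])
            out[T] = (i + 1) * (ti - tnext)
    return out

d = derivative_at_point({'a': 0.5, 'b': 0.3, 'c': 0.2})
# condition 1: each t_s is recovered as the sum over T containing s of f'_T / |T|
for s, ts in {'a': 0.5, 'b': 0.3, 'c': 0.2}.items():
    rec = sum(v / len(T) for T, v in d.items() if s in T)
    assert abs(rec - ts) < 1e-12
print(sorted(d.values()))   # approximately [0.2, 0.2, 0.6], summing to 1
```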
The following Theorem exhibits that star refinements of covers can be used to construct partitions of unity.
\begin{Theorem}\label{StarTheorem}
Suppose $\mathcal{U} $ is an open cover of a topological space $X$. A $\mathcal{U}$-small partition of unity exists if and only if
there is a sequence of open covers $\mathcal{U}_n$ of $X$ such that $\mathcal{U}_1=\mathcal{U}$ and $\mathcal{U}_{n+1}$ is a star refinement of $\mathcal{U}_n$ for each $n$.
\end{Theorem}
\begin{proof}
One implication follows from Proposition \ref{ExistenceOfStarRef} by iterating the derivative construction.
Suppose there is a sequence of open covers $\mathcal{U}_n$ of $X$ such that $\mathcal{U}_1=\mathcal{U}$ and $\mathcal{U}_{n+1}$ is a star refinement of $\mathcal{U}_n$ for each $n$. By picking only odd-numbered covers we may assume that for each $U\in \mathcal{U}_{n+1}$ the \textbf{star} $st(U,\mathcal{U}_{n+1} )=\bigcup \{V\in \mathcal{U}_{n+1} : U\cap V\ne\emptyset\}$ is contained in an element of $\mathcal{U}_n$.
Our strategy is to find a metric $d$ on a quotient space $X/{\sim}$ of $X$ such that the projection $p:X\to X/{\sim}$ is continuous and there is an open cover $\mathcal{V} $ of $X/{\sim}$ with the property that $p^{-1}(\mathcal{V} )$ refines $\mathcal{U} $. As there is a $\mathcal{V} $-small partition of unity $g:X/{\sim}\to l_1(S)$, $g\circ p$ is a $\mathcal{U} $-small partition of unity on $X$.
\par The relation $x\sim y$ is defined by the property that for each $n$ there is $U\in \mathcal{U}_n$ containing both $x$ and $y$. It is an equivalence relation. Our first approximation of a metric on $X/{\sim}$ is $\rho(x,y)$ defined
as infimum of $2^{-n}$ such that there is $U\in \mathcal{U}_n$ containing both $x$ and $y$.
$d(x,y)$ is defined as the infimum of all sums $\sum\limits_{i=1}^k \rho(x_i,x_{i+1})$, where $x_1=x$ and $x_{k+1}=y$.
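Since on a finite point set the infimum over chains is exactly an all-pairs shortest-path distance, the passage from $\rho$ to $d$ can be illustrated numerically. The following sketch (the point set and the $\rho$-values are made up for illustration) computes $d$ by the Floyd--Warshall algorithm:

```python
import itertools

def chain_metric(points, rho):
    """d(x,y): infimum over chains x = x_1, ..., x_{k+1} = y of
    sum rho(x_i, x_{i+1}).  On a finite set this is the all-pairs
    shortest-path distance, computed here by Floyd-Warshall."""
    d = {(x, y): rho[(x, y)] for x in points for y in points}
    for z, x, y in itertools.product(points, repeat=3):
        d[(x, y)] = min(d[(x, y)], d[(x, z)] + d[(z, y)])
    return d

# A made-up symmetric rho taking values 2^{-n} (with rho(x, x) = 0).
pts = ["a", "b", "c", "d"]
vals = {("a", "b"): 0.5, ("b", "c"): 0.25, ("c", "d"): 0.25,
        ("a", "c"): 1.0, ("a", "d"): 1.0, ("b", "d"): 1.0}
rho = {(x, x): 0.0 for x in pts}
for (x, y), v in vals.items():
    rho[(x, y)] = rho[(y, x)] = v

d = chain_metric(pts, rho)
# d(a, c) drops to 0.75 via the chain a-b-c, and d never exceeds rho.
```

Claim 1 of the proof is the analogous statement for $\rho$ itself: a chain of small total length forces the one-step distance $\rho(x,y)$ to be small.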
\textbf{Claim 1}: If $\sum\limits_{i=1}^k \rho(x_i,x_{i+1}) < 2^{-n}$ for some chain of points where $x_1=x$, $x_{k+1}=y$, and $n\ge 1$,
then $\rho(x,y)\leq 2^{-n}$.
\par \textbf{Proof of Claim 1}: We may assume that $k \ge 1$ is the smallest natural number such that there is a chain of $k$ links joining $x$ and $y$ for which the sum $\sum\limits_{i=1}^k \rho(x_i,x_{i+1})$ is smaller than $2^{-n}$. Notice that the lengths $\rho(x_i,x_{i+1})$ and $\rho(x_{i+1},x_{i+2})$ of adjacent links must be different. Otherwise, if there were some $m$ such that $\rho(x_i,x_{i+1}) = \rho(x_{i+1},x_{i+2}) = 2^{-m}$, then there would be elements of $\mathcal{U}_m$ containing the pairs $\{x_i,x_{i+1}\}$ and $\{x_{i+1},x_{i+2}\}$. Since $\mathcal{U}_{m}$ star refines $\mathcal{U}_{m-1}$, this means $\rho(x_i,x_{i+2}) \leq 2^{-m+1} = \rho(x_i,x_{i+1}) + \rho(x_{i+1},x_{i+2})$, and so we could drop the middle term $x_{i+1}$ without increasing the sum.
Notice also that we cannot have three consecutive links such that the lengths of the first and the third are equal and bigger than the length of the middle one. To see this, let $m\ge 1$ be the largest natural number so that the first pair and the last pair are contained in some element of $\mathcal{U}_m$. Then each of the three pairs of points is contained in some element of $\mathcal{U}_m$. Let $U$ be the element containing the middle pair. $\mathcal{U}_{m}$ star refines $\mathcal{U}_{m-1}$, so $st(U,\mathcal{U}_{m})$ is contained in an element of $\mathcal{U}_{m-1}$, which means the distance between the first and the last point is at most $2^{-m+1}$ and so is strictly less than the sum of the three distances.
\newline
\indent
The claim is clear for $k\leq 3$. We proceed by induction. Pick $m < k$ such that $\sum\limits_{i=1}^m \rho(x_i,x_{i+1}) < 2^{-n-1}$ but $\sum\limits_{i=1}^{m+1} \rho(x_i,x_{i+1}) \ge 2^{-n-1}$. Notice
$\sum\limits_{i=m+1}^k \rho(x_i,x_{i+1}) < 2^{-n-1}$. By induction, $\rho(x,x_m)\leq 2^{-n-1}$ and $\rho(x_{m+1},y)\leq 2^{-n-1}$. Since there is $U\in \mathcal{U}_{n+1}$ containing $x_m$ and $x_{m+1}$, the star $st(U,\mathcal{U}_{n+1} )$ contains both $x$ and $y$; as that star is contained in an element of $\mathcal{U}_n$, we conclude $\rho(x,y)\leq 2^{-n}$.
\textbf{Claim 2}: $p:X\to X/{\sim}$ is continuous.
\par \textbf{Proof of Claim 2}: If $d(x,y) < r$ pick $n\ge 1$ such that $r-d(x,y) > 2^{-n}$.
Choose $U\in \mathcal{U}_n$ containing $y$ and notice $d(x,z) < r$ for any $z\in U$. That means $p^{-1}(B(x,r))$ is open in $X$ and $p$ is continuous.
\textbf{Claim 3}: The cover $\{p^{-1}(B(x,1/2))\}_{x\in X}$ refines $\mathcal{U}$.
\par \textbf{Proof of Claim 3}: Choose $V\in\mathcal{U}_2$ containing $x\in X$. If $d(x,y) < 1/2$, then Claim 1
says $\rho(x,y)\leq 1/2$ and there is $W\in \mathcal{U}_2$ containing $x$ and $y$.
Therefore $B(x,1/2)\subset st(V,\mathcal{U}_2)$ which is contained in an element of $\mathcal{U}$.
\end{proof}
\begin{Corollary}
If $U$ is a non-empty open subset of a topological group $G$, then the cover $\mathcal{U}=\{g\cdot U\}_{g\in G}$
has a $\mathcal{U}$-small partition of unity.
\end{Corollary}
\begin{proof}
We may assume $U$ contains $1_G\in G$. Pick a sequence $\{U_n\}_{n\ge 1}$ of symmetric open neighborhoods of $1_G\in G$ such that $U_{n+1}\cdot U_{n+1}\subset U_n\subset U$ for all $n\ge 1$.
Consider the coverings $\mathcal{U}_n=\{g\cdot U_n\}_{g\in G}$. $ \mathcal{U}_{n+1}$ is a star refinement of $ \mathcal{U}_{n-1}$. To see this, it suffices to show that $st(g \cdot U_{n+1},\mathcal{U}_{n+1}) \subset g \cdot U_{n-1}$ for each $g \in G$. Let $g$ and $h$ be in $G$ with $g \cdot U_{n+1} \cap h \cdot U_{n+1} \neq \emptyset$. Then there are $u$ and $v$ in $U_{n+1}$ so that $g\cdot u = h\cdot v$. This implies $g^{-1}h \in U_{n+1}\cdot U_{n+1} \subset U_n$ and so $g^{-1}\cdot h\cdot U_{n+1} \subset g^{-1}\cdot h\cdot U_{n} \subset U_{n-1}$. It follows that $h\cdot U_{n+1} \subset g\cdot U_{n-1}$, which proves $st(g \cdot U_{n+1},\mathcal{U}_{n+1}) \subset g \cdot U_{n-1}$. So, using every other cover, we have a $\mathcal{U}$-small partition of unity $f$ on $G$ by Theorem \ref{StarTheorem}.
\end{proof}
\begin{Corollary}[Birkhoff Metrization Theorem for Groups \cite{Nag}: page 225] \label{BirkhoffMetrThm}
A topological group $G$ is metrizable if and only if it is first countable.
\end{Corollary}
\begin{proof}
Every metrizable space is first countable, so assume $G$ is first countable and pick a basis $\{U_n\}_{n\ge 1}$ of open neighborhoods of $1_G\in G$.
Let $\mathcal{U}_n=\{g\cdot U_n\}_{g\in G}$. There are $ \mathcal{U}_n$-small partitions of unity $\{f_{n,g}\}_{g \in G}$ on $G$. This leads to a partition of unity $\{\frac{f_{n,g}}{2^n}:g \in G$ and $n \in \mathbb{N}\}$ on $G$ whose carriers form a basis of $G$. Dydak's Metrization Theorem \ref{DydMetrThm} says $G$ is metrizable if it is Hausdorff.
\end{proof}
\section{Sigma Refinements}
In this section we show how to construct partitions of unity using $\sigma$-discrete refinements of open covers.
For a normal space $X$ and a finite open cover $\mathcal{U}$, a $\mathcal{U}$-small partition of unity can be obtained as follows: Choose a closed subset $B_U$ of each $U \in \mathcal{U}$ so that the sets $B_U$ cover $X$, and use Urysohn's Lemma to get a map $f_U : X\to [0,1]$ where $f_U(B_U) \subset \{1\}$ and $f_U(X \setminus U) \subset \{0\}$. The family $\{f_U\}_{U\in\mathcal{U}}$ is then a partition of the continuous function $g=\sum\limits_{U \in \mathcal{U}}f_U: X \to (0,\infty)$, and $\{\frac{f_U}{g}\}_{U \in \mathcal{U}}$ is a $\mathcal{U}$-small partition of unity on $X$.
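As a concrete (made-up) instance of this finite construction, take $X=[0,1]$ covered by $[0,0.6)$ and $(0.4,1]$, with the distance to the complement of each cover element playing the role of the Urysohn function:

```python
# Urysohn-type bumps for the two cover elements of X = [0, 1]:
f1 = lambda x: max(0.0, 0.6 - x)   # vanishes outside [0, 0.6)
f2 = lambda x: max(0.0, x - 0.4)   # vanishes outside (0.4, 1]

# g > 0 everywhere because the two sets cover [0, 1] (here g >= 0.2).
g = lambda x: f1(x) + f2(x)

# Normalizing yields a partition of unity subordinated to the cover.
p1 = lambda x: f1(x) / g(x)
p2 = lambda x: f2(x) / g(x)
```

The countable case below is the same idea, with the weights $2^{-n}$ guaranteeing that the infinite sum stays finite.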
Suppose now that $\mathcal{U}$ is a countable open cover of $X$. $\mathcal{U}$ may be written as $\bigcup\limits_{n \ge1} \mathcal{U}_n $ where $\mathcal{U}_n$ is a finite subset of $\mathcal{U}$ for $n \ge 1$. For each $n \ge 1$ we can find a collection of functions $\{f_U\}_{U \in \mathcal{U}_n}$ using Urysohn's Lemma as done above. Notice that $\{\frac{f_U}{2^n}: U \in \mathcal{U}_n$ and $n \ge1 \}$ is a partition of a positive and finite function $h:X\to (0,\infty)$. Normalizing this family yields a $\mathcal{U}$-small partition of unity on $X$.
The previous two methods can be summarized as follows: Given a cover $\mathcal{U}$ of a space $X$, find countably many partitions $\{f_s\}_{s\in S_n}$ of positive and finite functions on $X$ so that $\{f_s^{-1}(0,1]: s \in S_n\text{ and }n \ge1\}$ refines $\mathcal{U}$. These collections induce the partition $\{\frac{f_s}{2^n}:$$s \in S_n$ and $n\ge 1 \}$ of a positive and finite function. Normalizing this collection yields the desired $\mathcal{U}$-small partition of unity.
Recall that a space $X$ is \textbf{collectionwise normal} if every discrete collection of closed subsets $\{A_s\}_{s\in S}$ has a discrete open coarsening $\{U_s\}_{s\in S}$ where $A_s \subset U_s$ for all $s\in S$.
\begin{Proposition}\label{SigmaDiscreteOpenRefinementProp}
A $\sigma$-discrete open cover $\mathcal{U} = \{U_s\}_{s\in S}$ of a normal space $X$ admits a $\mathcal{U}$-small partition of unity if and only if it has a $\sigma$-discrete closed refinement.
\end{Proposition}
\begin{proof} $(\Leftarrow)$
$\mathcal{U} = \bigcup\limits_{n \ge 1} \mathcal{U}_n$ where $\mathcal{U}_n = \{U_{n,s} \subset U_s:s\in S\}$ is a discrete open collection. By intersecting $\mathcal{U}_n$ with the families in the $\sigma$-discrete closed refinement and taking closures, we may assume that each $\mathcal{U}_n$ has a $\sigma$-discrete closed refinement $\mathcal{D}_n = \bigcup\limits_{m\ge 1}\{D_{n,s,m} \subset U_{n,s}:s\in S\}$. For each $\mathcal{U}_n$ we can obtain a partition of a finite function whose cozero sets refine it as follows. For each $s\in S$ and $m \ge 1$ use Urysohn's Lemma to get a function $f_{n,s,m}:X \to [0,1]$ mapping $D_{n,s,m}$ to $1$ and $X \setminus U_{n,s}$ to $0$. Consider the family $\{ g_{n,s} = \sum\limits_{m\ge 1}\frac{f_{n,s,m}}{2^m}:s \in S\}$. This is a partition of a finite function whose cozero sets refine $\mathcal{U}_n$.
Consider the family $\{h_s = \sum\limits_{n\ge 1} \frac{g_{n,s}}{2^n}:s\in S\}$. This is a $\mathcal{U}$-small partition of a positive and finite function. Normalizing this family yields a $\mathcal{U}$-small partition of unity.
$(\Rightarrow)$ See Proposition \ref{ExistenceSigmaDiscreteRefinementProp} for the converse.
\end{proof}
\begin{Corollary}\label{SigmaDiscreteRefinementProp}
Assume $X$ is collectionwise normal. An open cover $\mathcal{U} = \{U_s\}_{s\in S}$ of $X$ admits a $\mathcal{U}$-small partition of unity if and only if it has a $\sigma$-discrete closed refinement.
\end{Corollary}
\begin{proof}
$(\Leftarrow)$ Choose a $\sigma$-discrete closed refinement $\mathcal{V} = \bigcup\limits_{n=1}^{\infty}\mathcal{V}_n$ of $\mathcal{U}$ where each $\mathcal{V}_n = \{F_{s}\}_{s \in S_n}$ is a discrete collection of closed subsets of $X$. Let $\{U_{s}\}_{s\in S_n}$ be a discrete open collection such that $F_{s} \subset U_{s}$ and $U_{s}$ lies in some element of $\mathcal{U}$ for each $s \in S_n$. Using Urysohn's Lemma, for each $s \in S_n$ there is a function $f_{s}:X \to [0,1]$ where $f_{s} (X \setminus U_{s}) \subset \{0\}$ and $f_{s}(F_s) \subset \{1\}$. The collection of functions $\bigcup\limits_{n=1}^{\infty}\{\frac{f_{s}}{2^n}:s \in S_n\}$ forms a partition of a positive and finite function $g:X \to (0,\infty)$. Normalizing the family yields a partition of unity on $X$ that is $\mathcal{U}$-small.
$(\Rightarrow)$ See Proposition \ref{ExistenceSigmaDiscreteRefinementProp} for the converse.
\end{proof}
\begin{Proposition} \label{Collnorm-sigmarefinementsTheorem}
Assume $X$ is collectionwise normal. An open cover $\mathcal{U}$ of $X$ admits a $\mathcal{U}$-small partition of unity if and only if there is an open refinement of $\mathcal{U}$ that is point-finite.
\end{Proposition}
\begin{proof}
Choose a point-finite open refinement $\mathcal{V}=\{V_s\}_{s\in S}$ of $\mathcal{U}$. Let $S(n)$ be the family of subsets of $S$
consisting of exactly $n$ elements.
For each $n\ge 1$ we will construct
two discrete closed shrinkings $\mathcal{D}_n=\{D_{T}\}_{T\in S(n)}$ and $\mathcal{D}^\ast_n=\{D^\ast_{T}\}_{T\in S(n)}$ of $\mathcal{V}$ such that $\bigcup \mathcal{D}^\ast_n$
is a cover of $X$.
$D_{s}$ is the set of points that belong to $V_s$ only: $D_{s}:=V_s\setminus \bigcup\limits_{t\ne s}V_t$.
It is a closed subset of $V_s$ and the only element of $\mathcal{V}$ that may intersect it is $V_s$, so $\{D_{s}\}_{s\in S}$ is a discrete closed family. We extend $\{D_{s}\}_{s\in S}$ to a discrete family $\{D^\ast_{s}\}_{s\in S}$
such that $D_s\subset int(D^\ast_s)\subset D_s^\ast \subset V_s$ for all $s\in S$.
Suppose $\mathcal{D}_n$ and $\mathcal{D}^\ast_n$ are known for all $n < k$.
Given $T\in S(k)$ define $D_{T,k}$ as the set of points belonging to $\bigcap\limits_{t\in T} V_t$
and not belonging to $\bigcup\limits_{t\notin T}V_t\cup \bigcup\limits_{n < k}\bigcup\limits_{F\in S(n)}int(D^\ast_F)$. Let $\mathcal{D}_k$ be the collection of $D_{T,k}$ where $T$ ranges over $S(k)$.
It is a discrete closed collection that we can enlarge to a discrete family $\mathcal{D}^\ast_k$ as before. Notice that all the points contained in at most $n$ elements of the cover are contained in $\bigcup\limits_{k \leq n}\bigcup\mathcal{D}^\ast_k$. Therefore $\bigcup\limits_{n\ge 1}\bigcup \mathcal{D}^\ast_n$ is indeed a cover of $X$, as $\mathcal{V}$ is point-finite. We are done by Corollary \ref{SigmaDiscreteRefinementProp}.
\end{proof}
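The combinatorial heart of this construction is sorting points by the exact set $T$ of cover elements containing them (the closures and the enlargements $D^\ast$ then handle the topology). A toy sketch with a made-up point-finite interval cover:

```python
from collections import defaultdict

# A made-up point-finite cover of a finite sample of [0, 1].
cover = {"V1": (0.0, 0.5), "V2": (0.3, 0.8), "V3": (0.6, 1.0)}
points = [i / 100 for i in range(101)]

def members(x):
    # T(x): the exact (finite) set of cover elements containing x.
    return frozenset(s for s, (a, b) in cover.items() if a <= x <= b)

layers = defaultdict(set)   # layers[T] plays the role of D_T
for x in points:
    layers[members(x)].add(x)

# Distinct index sets T give disjoint pieces, and the pieces with
# |T| = n together catch every point lying in exactly n cover elements.
```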
Recall the notion of weak paracompactness.
\begin{Definition}
A topological space $X$ is \textbf{weakly paracompact} if for every open cover $\mathcal{U} = \{U_s:s\in S\}$ of $X$ there exists a point-finite partition of unity $f:X\to l_1(S)$ on $X$. This means that for each $x \in X$ the support of $f(x)$ is finite, i.e.\ the set $\{s\in S:f(x)(s) \neq 0\}$ is finite.
\end{Definition}
\begin{Corollary} [Michael-Nagami Theorem \cite{En}]
Assume $X$ is collectionwise normal. $X$ is paracompact if and only if it is weakly paracompact.
\end{Corollary}
\section{Discretization of covers}
The purpose of this section is to broaden the scope of the previous section and illuminate how partitions of unity can be used to unify all metrization theorems.
In this section, star refinements are used to discretize open covers $\mathcal{U}$ into disjoint closed collections and disjoint open coarsenings of those collections. $\mathcal{U}$-small partitions of unity can then be constructed analogously to those in section 3.
Our first construction is that of a discrete shrinking of a given well-ordered open cover $\mathcal{V}=\{V_s\}_{s\in S}$
to a discrete family $\{D_s\}_{s\in S}$ of closed sets so that each element of a given open cover $\mathcal{U}$
intersects at most one element of $\{D_s\}_{s\in S}$.
\begin{Definition}
Suppose $\mathcal{U}$ is an open cover of a topological space $X$ and $\mathcal{V}=\{V_s\}_{s\in S}$ is a well-ordered collection of open subsets of $X$ (i.e. the index set $S$ is well-ordered). Let $\bar S=S\cup \{\infty\}$ be the well-ordered set
obtained from $S$ by adding an element $\infty$ that is larger than all elements of $S$.
Define $\alpha:\mathcal{U}\to \bar S$ as follows: $\alpha(U)=\infty$ if there is no $s\in S$ with $U \subset V_s$.
Otherwise $\alpha(U)$ is the smallest $s\in S$ satisfying $U \subset V_s$.
The
\textbf{discretization} $D(\mathcal{V},\mathcal{U})=\{D(\mathcal{V},\mathcal{U})_s\}_{s\in S}$ of $\mathcal{V}$
with respect to $\mathcal{U}$ is defined as follows: $D(\mathcal{V},\mathcal{U})_s=X\setminus \bigcup\limits_{\alpha(U)\ne s}\{U: U \in \mathcal{U}\}$.
\end{Definition}
The following result confirms that the
discretization $D(\mathcal{V},\mathcal{U})$ is indeed a discrete closed shrinking of $\mathcal{V}$.
\begin{Lemma}\label{CNDiscClosedLemma}
$D(\mathcal{V},\mathcal{U})_s\subset V_s$ and $U\cap D(\mathcal{V},\mathcal{U})_s\ne\emptyset$ implies $\alpha(U)=s$ for each $s\in S$.
\end{Lemma}
\begin{proof} If $\alpha(U)\ne s$, then $U\subset X\setminus D(\mathcal{V},\mathcal{U})_s$, so $U\cap D(\mathcal{V},\mathcal{U})_s\ne\emptyset$ implies $\alpha(U)=s$. Also, any $W\in\mathcal{U}$ containing a point outside of $V_s$ has $\alpha(W)\ne s$, resulting in
$W\subset X\setminus D(\mathcal{V},\mathcal{U})_s$. Thus, $D(\mathcal{V},\mathcal{U})_s\subset V_s$.
\end{proof}
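The definition and Lemma \ref{CNDiscClosedLemma} are easy to check on a small made-up example in which the space, the cover $\mathcal{U}$ and the collection $\mathcal{V}$ are finite sets of integers:

```python
X = set(range(6))
V = [{0, 1, 2, 3}, {2, 3, 4, 5}]        # well-ordered collection, indexed 0 < 1
U = [{0, 1}, {2, 3}, {3, 4}, {4, 5}]    # the cover U

INF = len(V)                            # plays the role of the element "infinity"

def alpha(u):
    # smallest index s with u contained in V_s, or "infinity" if none exists
    return next((s for s, vs in enumerate(V) if u <= vs), INF)

def D(s):
    # D(V, U)_s = X minus the union of all u in U with alpha(u) != s
    return X - set().union(*(u for u in U if alpha(u) != s))

Ds = [D(s) for s in range(len(V))]
```

Here $D_0=\{0,1,2\}\subset V_0$ and $D_1=\{4,5\}\subset V_1$, and each $u\in U$ meets at most one $D_s$, as the lemma predicts. Note that the union of the $D_s$ misses the point $3$: a single discretization need not cover $X$, which is why Lemma \ref{CNDiscToRefinementLemma} works with a whole sequence of covers.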
\begin{Lemma}\label{CNDiscToRefinementLemma}
Suppose $\mathcal{V}=\{V_s\}_{s\in S}$ is a cover of $X$ and $ \{\mathcal{U}_n\}_{n\ge 1}$ is a sequence of open covers of $X$. If for any $x\in V_s\in \mathcal{V}$ there is $n\ge 1$ satisfying $st(x,\mathcal{U}_n)\subset V_s$, then the union of discretizations
$\bigcup D(\mathcal{V},\mathcal{U}_n)$ is a cover of $X$.
\end{Lemma}
\begin{proof}
Given $x\in X$, let $s\in S$ be the smallest element so that $x\in V_s$. There is $n$ such that $st(x,\mathcal{U}_n)\subset V_s$. Now, $x$ belongs to the element of $D(\mathcal{V},\mathcal{U}_n)$ indexed by $s$.
Indeed, $\alpha(U)=s$ for any $U\in \mathcal{U}_n$ containing $x$.
\end{proof}
\begin{Proposition}\label{ExistenceSigmaDiscreteRefinementProp}
If an open cover $\mathcal{U}$ of a topological space $X$ has a $\mathcal{U}$-small partition of unity, then $\mathcal{U}$ admits a $\sigma$-discrete closed refinement.
\end{Proposition}
\begin{proof}
Choose a $\mathcal{U}$-small partition of unity $f:X \to l_1(S)$. Set $\mathcal{V} =\{ st(s): s \in S \}$ and notice that $\mathcal{V}$ is a cover of the metrizable space $l_1(S) \setminus \{0\}$. By using Theorem \ref{StarTheorem}, $l_1(S) \setminus \{0\}$ has a sequence of open covers $\mathcal{U}_n$ such that for any $x \in U \in \mathcal{V}$ there is $n \ge 1$ satisfying $st(x,\mathcal{U}_n)\subset U$. Pulling these covers back to $X$, we get a sequence of open covers $\mathcal{V}_n$ such that for any $y \in V \in \mathcal{U}$ there is $n \ge 1$ so that $st(y,\mathcal{V}_n) \subset V$. Using Lemma \ref{CNDiscToRefinementLemma} and Lemma \ref{CNDiscClosedLemma} we can use discretizations to get a $\sigma$-discrete closed refinement of $\mathcal{U}$.
\end{proof}
\begin{Theorem}\label{SubmainDiscreteTheorem}
Suppose $X$ is collectionwise normal and $\mathcal{U}$ is an open cover of $X$. $\mathcal{U}$-small partitions of unity exist if there is a sequence of open covers $\mathcal{U}_n$ such that for any $x\in U\in \mathcal{U}$ there is $n\ge 1$ satisfying $st(x,\mathcal{U}_n)\subset U$.
\end{Theorem}
\begin{proof}
($\Rightarrow$) Well order the cover $\mathcal{U} = \{U_s\}_{s\in S}$. Using Lemmas \ref{CNDiscClosedLemma} and \ref{CNDiscToRefinementLemma}, $\mathcal{U}$ has a $\sigma$-discrete closed refinement $\bigcup\limits_{n\in {\mathbb N}} D(\mathcal{U},\mathcal{U}_n)$. Using collectionwise normality, $D(\mathcal{U},\mathcal{U}_n)$ has a discrete open coarsening $\mathcal{V}_n$. Apply Proposition \ref{SigmaDiscreteOpenRefinementProp}.
($\Leftarrow$) This direction follows from Theorem \ref{StarTheorem}.
\end{proof}
\begin{Corollary}[Bing Criterion \cite{En}] \label{BingCriterion}
Suppose $\{\mathcal{U}_n\}_{n\ge 1}$ is a sequence of open covers of $X$ such that $\{st(x,\mathcal{U}_n)\}$ is a basis at $x$ for each $x\in X$. If $X$ is collectionwise normal, then it is metrizable.
\end{Corollary}
\begin{proof}
This follows from Theorem \ref{SubmainDiscreteTheorem} and Dydak's Metrization Theorem \ref{DydMetrThm}. Indeed, in this case any open cover of $X$ admits a partition of unity; in particular, each $\mathcal{U}_n$ admits a partition of unity $f_n$. Combining these into a single partition of unity $f$, the preimages under $f$ of stars of vertices form a basis of $X$. By Dydak's Metrization Theorem \ref{DydMetrThm}, $X$ is metrizable.
\end{proof}
Given an open cover $\mathcal{U}$ of $X$ let $st(\mathcal{U})$ be the cover $\{st(x,\mathcal{U})\}_{x\in X}$ of $X$ by stars of $\mathcal{U}$.
\begin{Theorem}\label{MainDiscretTheorem}
Suppose $X$ is normal and $\mathcal{U}$ is an open cover of $X$. $\mathcal{U}$-small partitions of unity exist if and only if there is a sequence of open covers $\mathcal{U}_n$ such that for any $x\in X$ and any open neighborhood $U\in \mathcal{U}$ of $x$ there is $n\ge 1$ satisfying $st(x,st(\mathcal{U}_n))\subset U$.
\end{Theorem}
\begin{proof}
The main idea is to discretize the cover $\mathcal{U}$ into countably many discrete collections. Well order the cover $\mathcal{U} = \{U_s:s\in S\}$.
Let $\mathcal{V}_n=\{st(U,\mathcal{U}_n)\}_{U\in \mathcal{U}_n}$.
Consider discretizations $D(\mathcal{U},\mathcal{V}_n)$. First of all we need $\bigcup\limits_{n=1}^\infty D(\mathcal{U},\mathcal{V}_n)$ to be a cover of $X$.
Indeed, given $x\in X$ choose the smallest $s\in S$ such that $x\in U_s$. Pick $n\ge 1$ satisfying $st(x,st(\mathcal{U}_n))\subset U_s$.
That implies $st(V,st(\mathcal{U}_n))\subset U_s$ for every $V\in \mathcal{U}_n$ containing $x$ and $s$ is the smallest element of $S$ so that
$st(V,st(\mathcal{U}_n))$ is contained in $U_s$. Thus $x\in D(\mathcal{U},\mathcal{V}_n)_s$.
To complete the proof using Proposition \ref{SigmaDiscreteOpenRefinementProp} it suffices to show that the stars of elements of $D(\mathcal{U},\mathcal{V}_n)$ with respect
to $\mathcal{U}_n$ form a discrete family. Indeed, suppose $U\in \mathcal{U}_n$ intersects two different sets $st(D(\mathcal{U},\mathcal{V}_n)_s,\mathcal{U}_n)$
and $st(D(\mathcal{U},\mathcal{V}_n)_t,\mathcal{U}_n)$. That means the element $st(U,\mathcal{U}_n)$ of $\mathcal{V}_n$ intersects both $D(\mathcal{U},\mathcal{V}_n)_s$
and $D(\mathcal{U},\mathcal{V}_n)_t$, which is impossible by Lemma \ref{CNDiscClosedLemma}.
$(\Leftarrow)$ This direction follows from Theorem \ref{StarTheorem}.
\end{proof}
The following is a version of Bing Criterion \ref{BingCriterion} for $T_0$ spaces.
\begin{Theorem}\label{AStarStarMetrTheorem}
A $T_0$ space $X$ is metrizable if and only if there is a sequence of open covers $\mathcal{U}_n$ such that for any $x\in X$
the sets $\{st(x,st(\mathcal{U}_n))\}_{n\ge 1}$ form a basis at $x$.
\end{Theorem}
\begin{proof}
If $X$ is metrizable, then $\mathcal{U}_n$ consisting of open balls of radius $\frac{1}{n}$ gives the desired sequence of covers.
Once normality of $X$ is established, $X$ is metrizable by Theorem \ref{MainDiscretTheorem}. Indeed, in that case any open cover of $X$ admits a partition of unity; in particular, each $\mathcal{U}_n$ admits a partition of unity $f_n$. Combining these into a single partition of unity $f$, the preimages under $f$ of stars of vertices form a basis of $X$. By Dydak's Metrization Theorem \ref{DydMetrThm}, $X$ is metrizable.
We may assume $\mathcal{U}_{n+1}$ is a refinement of $\mathcal{U}_n$ by taking intersections of the first $n$ covers.
Let us show that for any closed set $A$ of $X$ and any $x\notin A$ there is $n$ such that $st(x,\mathcal{U}_n)$ is disjoint with $st(A,\mathcal{U}_n)$.
That is accomplished by choosing $n$ so that $st(x,st(\mathcal{U}_n))\subset X\setminus A$.
Notice $X$ is Hausdorff. Indeed, for any two points $x\ne y$ in $X$ there is a neighborhood $U$ containing only one of them, say $x$.
Apply the above observation to $A=X\setminus U$. Hence $X$ is regular as well.
To prove normality of $X$ assume $A$ and $B$ are disjoint closed sets. For each $x\in X$ choose $n(x)\ge 1$ so that
$st(x,\mathcal{U}_{n(x)})$ is contained in either $X\setminus st(A,\mathcal{U}_{n(x)})$ or in $X\setminus st(B,\mathcal{U}_{n(x)})$. Let $V_x$ be an element of $\mathcal{U}_{n(x)}$ containing $x$. Put $\mathcal{V}=\{V_x\}_{x\in X}$. We claim $st(A,\mathcal{V})\cap st(B,\mathcal{V})=\emptyset$. Assume $z\in st(A,\mathcal{V})\cap st(B,\mathcal{V})$. Then there are $x\in A$ and $y\in B$ so that
$z\in V_y\cap V_x$. Without loss of generality assume $n(x)\ge n(y)$. Since $\mathcal{U}_{n(x)}$ refines $\mathcal{U}_{n(y)}$, in this case $z\in st(y,\mathcal{U}_{n(y)})\cap st(A,\mathcal{U}_{n(y)})$,
a contradiction.
\end{proof}
\begin{Corollary}[Moore Metrization Theorem \cite{En}]
A $T_0$ space $X$ is metrizable if and only if there is a sequence of open covers $\mathcal{U}_n$ such that for any $x\in X$ and any open neighborhood $U$ of $x$ there is $n\ge 1$ and a neighborhood $V$ of $x$ satisfying $st(V,\mathcal{U}_n)\subset U$.
\end{Corollary}
\begin{proof}
If $X$ is metrizable, then $\mathcal{U}_n$ consisting of open balls of radius $\frac{1}{n}$ gives the desired sequence of covers.
Given such a sequence of covers one readily sees that for any $x\in X$
the sets $\{st(x,st(\mathcal{U}_n))\}_{n\ge 1}$ form a basis at $x$. Apply Theorem \ref{AStarStarMetrTheorem}.
\end{proof}
\begin{Corollary}
[Arkhangelskii Metrization Theorem \cite{En}]
A space $X$ is metrizable if and only if it is $T_1$ and has a basis $\mathcal{B}$ with the property that
for any $x\in X$ and any open neighborhood $U$ of $x$ there is a neighborhood $V$ of $x$ in $U$
so that only finitely many elements of $\mathcal{B}$ intersect both $V$ and $X\setminus U$.
\begin{proof}
($\Rightarrow$) Let $\mathcal{V}_n$ be a locally finite open refinement of the cover of $X$ by $2^{-n}$-balls. Observe that $\mathcal{B} =\bigcup\limits_{n=1}^{\infty} \mathcal{V}_n$ is a basis for $X$. Let $x \in X$ and let $U$ be an open neighborhood of $x$. Choose $n \ge 1$ with $B(x,2^{-n+2}) \subset U$. Any element of $\mathcal{V}_m$ with $m \ge n$ that meets $B(x,2^{-n})$ has diameter at most $2^{-m+1}$ and is therefore contained in $B(x,2^{-n+2}) \subset U$, so it misses $X \setminus U$. By the local finiteness of the covers $\mathcal{V}_1,\ldots,\mathcal{V}_{n-1}$ we can find an open set $V \subset B(x,2^{-n})$ so that only finitely many elements of $\bigcup\limits_{k=1}^{n-1}\mathcal{V}_k$ intersect $V$. Then only finitely many elements of $\mathcal{B}$ intersect both $V$ and $X \setminus U$.
($\Leftarrow$) Choose a basis $\mathcal{B}$ with the property that for any $x\in X$ and any open neighborhood $U$ of $x$ there is a neighborhood $V$ of $x$ in $U$ so that only finitely many elements of $\mathcal{B}$ intersects both $V$ and $X\setminus U$. We will build a sequence of covers $\mathcal{U}_n$ in order to apply Moore's Metrization Theorem.
Let the cover $\mathcal{U}_1$ be all the maximal elements of $\mathcal{B}$. Inductively we can build a cover $\mathcal{U}_n$ by first removing all the elements of $\mathcal{U}_k$ for $k < n$ from $\mathcal{B}$ and setting $\mathcal{U}_n$ to be all the maximal elements in the remaining collection.
Notice that for any $x \in X$ and each neighborhood $U$ of $x$ there is an element $V \in \mathcal{B}$ so that $x \in V \subset U$ and only finitely many elements of $\mathcal{B}$ intersect $V$ and $X\setminus U$. Choose $n \ge 1$ large enough so that no element of $\mathcal{U}_n$ intersects both $V$ and $X \setminus U$ and notice that $st(V,\mathcal{U}_n) \subset U$.
\end{proof}
\end{Corollary}
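Both this proof and the next peel a basis $\mathcal{B}$ into successive families of inclusion-maximal elements. For a finite family of sets (where maximal elements always exist; the family below is made up for illustration) the layering looks as follows:

```python
def maximal_layers(family):
    """Repeatedly extract the inclusion-maximal members of a finite family
    of sets; layer n corresponds to the cover U_n of the proof."""
    remaining = [frozenset(b) for b in family]
    layers = []
    while remaining:
        # keep b when no other remaining set properly contains it
        layer = [b for b in remaining
                 if not any(b < other for other in remaining)]
        layers.append(layer)
        remaining = [b for b in remaining if b not in layer]
    return layers

B = [{1, 2, 3, 4}, {1, 2}, {3, 4}, {2, 3}, {2}, {3}]
layers = maximal_layers(B)
# layers[0] = [{1,2,3,4}], layers[1] = [{1,2},{3,4},{2,3}], layers[2] = [{2},{3}]
```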
\begin{Corollary}
[Alexandroff Criterion \cite{En}]
A space $X$ is metrizable if and only if it is collectionwise normal and has a basis $\mathcal{B}$ with the property that
for any $x\in X$ and any open neighborhood $U$ of $x$ only finitely many elements of $\mathcal{B}$ intersect $X\setminus U$ and contain $x$.
\end{Corollary}
\begin{proof}
($\Leftarrow$) Choose a basis $\mathcal{B}$ with the property that for any $x\in X$ and any open neighborhood $U$ of $x$ only finitely many elements of $\mathcal{B}$ intersect $X\setminus U$ and contain $x$. We will build a sequence of covers $\mathcal{U}_n$ to which we can apply Proposition \ref{Collnorm-sigmarefinementsTheorem} for each $n\ge 1$.
Let the cover $\mathcal{U}_1$ be all the maximal elements of $\mathcal{B}$. Inductively we can build a cover $\mathcal{U}_n$ by first removing all the elements of $\mathcal{U}_k$ for $k < n$ from $\mathcal{B}$ and setting $\mathcal{U}_n$ to be all the maximal elements in the remaining collection.
Observe that $\mathcal{U}_n$ is point-finite. Indeed, if $U$ and $V$ are distinct elements of $\mathcal{U}_n$ containing $x$, then by maximality $U \cap (X\setminus V) \neq \emptyset$ and $V \cap (X\setminus U) \neq \emptyset$, and by assumption there are only finitely many such elements.
By Proposition \ref{Collnorm-sigmarefinementsTheorem} there is a $\mathcal{U}_n$-small partition of unity $\{f_s:X\to [0,1]\}_{s\in S_n}$. Observe $\bigcup\limits_{n=1}^{\infty}\mathcal{U}_n = \mathcal{B}$, so $\{\frac{f_s}{2^n}:X\to [0,1]:s\in S_n$ and $n \in \mathbb{N}\}$ forms a partition of unity whose carriers form a basis for $X$. By Dydak's Metrization Theorem \ref{DydMetrThm}, $X$ is metrizable.
($\Rightarrow$) This direction follows from Arkhangelskii's Metrization Theorem.
\end{proof}
% ---------------------------------------------------------------------------
% End of ``Partitions of unity and coverings'' (arXiv:1311.3679, math.GN):
% ``The aim of this paper is to prove all well-known metrization theorems
% using partitions of unity. To accomplish this, we first discuss sufficient
% and necessary conditions for existence of $\mathcal{U}$-small partitions of
% unity (partitions of unity subordinated to an open cover $\mathcal{U}$ of a
% topological space $X$).''
%
% The text below is from a different paper:
% ``Synchronous and Asynchronous Recursive Random Scale-Free Nets''
% (arXiv:cond-mat/0508317): ``We investigate the differences between
% scale-free recursive nets constructed by a synchronous, deterministic
% updating rule (e.g., Apollonian nets), versus an asynchronous, random
% sequential updating rule (e.g., random Apollonian nets). We show that the
% dramatic discrepancies observed recently for the degree exponent in these
% two cases result from a biased choice of the units to be updated
% sequentially in the asynchronous version.''
% ---------------------------------------------------------------------------
\section*{Synchronous Updating}
Consider the deterministic tree of Fig.~\ref{trees}a, obtained by the following procedure~\cite{Bollt05}: Starting from K$_2$ at generation $n=0$, construct successive generations by attaching nodes of degree one to the endpoints of each existing link (Fig.~\ref{trees}b). The tree that emerges in generation $n$ has two hubs (the nodes of highest degree) of degree $2^n$. An alternative way for constructing the tree consists of doubling the degree of each existing node, from $k$ to $2k$, by attaching to it $k$ single-degree nodes (Fig.~\ref{trees}c). Yet a third method, which highlights the self-similarity of the tree, consists of producing $3$ replicas of generation $n$ and joining them at the hubs (Fig.~\ref{trees}d).
\begin{figure}[ht]
\vspace*{0.cm}
\includegraphics*[width=0.40\textwidth]{trees}
\caption{Synchronous recursive scale-free tree and methods of construction.
(a)~Generations $n=0,1,2,3$ of the tree.
(b)~First method of construction: to each of the endpoints of every link in generation $n$
connect a node of degree one.
(c)~Second method: to each node of degree $k$ in generation $n$
add $k$ new
nodes of degree one.
(d)~Third method: to obtain generation $n+1$, join three copies of generation $n$ at
the hubs (the nodes of highest degree).
}
\label{trees}
\end{figure}
It is clear that all nodes have degrees that are powers of $2$. Let $N_n(m)$ be the number of nodes of degree $2^m$ in generation $n$. Let $N_n=\sum_mN_n(m)$ be the total number of nodes (the order) in generation $n$. Let $M_n$ be the number of links (the size) in generation $n$. We have,
\begin{equation}
M_n=3M_{n-1}\,,\qquad M_0=1\;,
\end{equation}
(seen most easily by the first method of construction),
from which follows that
\begin{equation}
M_n=3^n\;.
\end{equation}
Since the graph is a tree,
\begin{equation}
N_n=M_n+1=3^n+1\;.
\end{equation}
Also,
\begin{equation}
N_n(m)=N_{n-1}(m-1)+2\cdot3^{n-1}\delta_{m,0}\;,
\end{equation}
leading to
\begin{equation}
\label{eq5}
N_n(m)=
\begin{cases}
2\cdot3^{n-m-1}, & m<n\;,\\
2, & m=n\;,\\
0, & m>n\;.
\end{cases}
\end{equation}
This corresponds to a scale-free degree distribution of degree exponent $\gamma=1+\ln3/\ln 2$.
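The counts above are easy to confirm computationally; the sketch below grows the tree by the first method of construction (attaching a degree-one node to each endpoint of every link) and tallies degrees:

```python
from collections import Counter

def next_generation(edges, n_nodes):
    """Attach a node of degree one to each endpoint of every existing link
    (first method of construction)."""
    new_edges = list(edges)
    for u, v in edges:
        new_edges.append((u, n_nodes)); n_nodes += 1
        new_edges.append((v, n_nodes)); n_nodes += 1
    return new_edges, n_nodes

edges, n_nodes = [(0, 1)], 2            # K_2 at generation n = 0
gens = 6
for _ in range(gens):
    edges, n_nodes = next_generation(edges, n_nodes)

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
N = Counter(deg.values())               # N[2**m] = number of nodes of degree 2**m
```

For $n=6$ this reproduces $M_n=3^n$, $N_n=3^n+1$ and $N_n(m)=2\cdot3^{n-m-1}$ for $m<n$, with the two hubs of degree $2^n$.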
\section*{Asynchronous Updating}
Let us now explore the consequences of asynchronous, or random sequential updating, and how it differs from synchronous updating. Based on the first method of construction (Fig.~\ref{trees}b), at each time step, $t$, we choose a link, randomly, and connect a new node to each of its endpoints. We intend to show that the differences between synchronous and sequential updating arise because of biases in the selection rule of the link to be updated. To this end we consider the following two rules: (a)~Select one of the $M(t)$ links in the net, randomly, with equal probability, or (b)~Select a node (among the $N(t)$ nodes of the net), randomly, then select one of its neighbors, at random, and pick the link that connects the two nodes. A detailed analysis of these two rules and how they differ in the selection of specific links is given in Appendix~\ref{howtopick}.
Consider how the degree of node $i$, $k_i(t)$, changes with time, by method (a). Since each of the $k_i$ links leading to node $i$ is selected with probability $1/M(t)$, the probability that $k_i\to k_i+1$ in the next time step is $k_i(t)/M(t)$. At each time step we add two links to the tree, so $M(t)=2t-1$ (we begin with a single link at time step $t=1$). Hence, in the long time asymptotic limit changes in $k_i$ are given by
\begin{equation}
\frac{dk_i}{dt}=\frac{k_i}{2t}\;.
\end{equation}
The initial condition for this equation is $k_i(t_i)=1$, that is, we assume that the node was introduced to the tree at time $t_i$ as a node of degree one (like all newly introduced nodes). The solution is then
\begin{equation}
\label{ki}
k_i=\sqrt{\frac{t}{t_i}}\;.
\end{equation}
It follows from (\ref{ki}) that the probability that $k_i$ is larger than $k$, is
\begin{equation}
\chi(k)\equiv{\rm Pr}(k_i>k)={\rm Pr}\left(t_i<\frac{t}{k^2}\right)\;.
\end{equation}
However, since node $i$ could be introduced in any of the $t$ steps with equal probability, the probability that $t_i<T$ is $T/t$, so that $\chi(k)=1/k^2$.
The degree distribution for large $k$ then follows:
\begin{equation}
P_a(k)=-\frac{d}{dk}\chi(k)\sim k^{-3}\;.
\end{equation}
We conclude that the random sequential construction of the tree by method~(a) leads to a scale-free degree distribution of degree exponent $\gamma=3$, different from $\gamma=1+\ln3/\ln2$ of the deterministic tree.
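A minimal simulation of the random sequential rule~(a); the bookkeeping identities $M(t)=2t-1$ and $N(t)=2t$ are exact, while the exponent $\gamma=3$ itself can only be checked statistically against large runs such as those reported below (the seed and run length are arbitrary choices):

```python
import random

random.seed(7)                          # arbitrary seed for reproducibility
edges = [(0, 1)]                        # t = 1: a single link
n_nodes = 2
T = 5000
for t in range(2, T + 1):
    u, v = random.choice(edges)         # rule (a): pick a link uniformly
    edges.append((u, n_nodes)); n_nodes += 1
    edges.append((v, n_nodes)); n_nodes += 1
# Each step adds two links and two nodes, so M(t) = 2t - 1, N(t) = 2t,
# and the graph stays a tree: N(t) = M(t) + 1.
```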
But what if we select the links by method~(b)? In this case node $i$ may be the first of the two nodes to be selected in step $t$. This may happen with probability $1/N(t)$. Node $i$ may also be the second node selected. The probability that we reach $i$ through a randomly selected node is $k_i/N\av{k}$. The degree of node $i$ increases from $k_i$ to $k_{i}+1$ regardless of whether it is picked first or second. That is, the rate of increase is $(1/N)(1+k_i/\av{k})$. But $N(t)=2t$, while $\av{k}=2M/N=(4t-2)/(2t)\to2$, as $t\to\infty$.
It follows that in the long time asymptotic limit
\begin{equation}
\frac{dk_i}{dt}=\frac{1}{2t}\left(1+\frac{k_i}{2}\right)\;.
\end{equation}
From here we proceed exactly as for method~(a), this time obtaining
\begin{equation}
P_b(k)\sim k^{-5}\;,
\end{equation}
for large $k$. Once again the degree distribution is scale-free. The degree exponent $\gamma=5$ is not only different from that of the deterministic tree but also differs from $\gamma=3$ obtained by method~(a).
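For completeness, the steps compressed into "we proceed exactly as for method~(a)" can be written out; integrating the rate equation with the initial condition $k_i(t_i)=1$ gives

```latex
k_i(t)=3\left(\frac{t}{t_i}\right)^{1/4}-2\;,
\qquad
\chi(k)={\rm Pr}(k_i>k)={\rm Pr}\!\left(t_i<t\left(\frac{3}{k+2}\right)^{4}\right)
=\left(\frac{3}{k+2}\right)^{4}\;,
```

and hence $P_b(k)=-d\chi/dk=4\cdot3^4\,(k+2)^{-5}\sim k^{-5}$ for large $k$.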
\begin{figure}[ht]
\vspace*{0.cm}\includegraphics*[width=0.40\textwidth]{Pk}
\caption{
Random sequential trees constructed by
selecting links at random ($\times$), or by picking the link between a randomly selected node
and a neighbor ($\circ$). The trees consist of $10^7$ nodes.
The straight lines of slope 3 and 5 are shown for comparison.
}
\label{Pk}
\end{figure}
In Fig.~\ref{Pk} we show the degree distribution of random sequential trees constructed by the two methods.
The simulation data are consistent with the degree exponents predicted above, though one could argue that larger simulations are needed for the distributions to converge to their long time asymptotic limit~\cite{remark1}. At any rate, the simulations demonstrate our point that a bias in the selection of the updated units is responsible for the differences between different kinds of random sequential nets, and
between random sequential nets and synchronous nets. The differences from the deterministic tree, in our case,
result from the fact that picking links at random (by either of the two methods considered here) favors links with endpoints of higher degree.
A third method for building the random sequential tree is based on the recursive technique of Fig.~\ref{trees}c, where at each iteration the degree of all existing nodes is doubled by attaching to them new nodes of degree one. In the random sequential version the degree of only one randomly selected node is doubled at each time step. The node to be updated is picked with equal probability from among the $N(t)$ nodes of the net. We now show that a random net constructed by this method achieves the same degree exponent as the deterministic net.
We cannot resort to the same technique employed for the previous two random sequential constructs because
now $k_i\to2k_i$ in a single step: the jump is comparable to $k_i$ itself, so a continuous approximation is not justified even for large $k_i$. Instead, we resort to discrete rate equations.
The degrees in the random tree are powers of 2, just as in the deterministic tree. Let $N_t(m)$ be the number of nodes of degree $2^m$ at time $t$, and let $N_t=\sum_mN_t(m)$ be the
order of the tree, then
\begin{eqnarray}
\label{eq6}
\begin{aligned}
&N_{t+1}(m)= N_t(m)+\frac{N_t(m-1)}{N_t}-\frac{N_t(m)}{N_t}\;,\quad m\geq1,\\
&N_{t+1}(0)= N_t(0)-\frac{N_t(0)}{N_t}+\sum_{m\geq0}2^m\frac{N_t(m)}{N_t}\;.
\end{aligned}
\end{eqnarray}
The long time asymptotic limit can be obtained by making the ansatz: $N_t(m)\to b_mt$
and $N_t\to Bt$, as $t\to\infty$ ($B=\sum_mb_m$).
Substituting in the first of Eqs.~(\ref{eq6}), we learn that $b_m=b_{m-1}/(1+B)$, leading to
$b_m=b_0/(1+B)^m$. The second equation tells us that $B=2$, and the remaining $b_0$ is obtained from the constraint $\sum_mb_m=B$. We get
\begin{equation}
N_t(m)\to \frac{4}{3^{m+1}}\,t\;.
\end{equation}
This is essentially the same distribution as~(\ref{eq5}), corresponding to $\gamma=1+\ln3/\ln2$. This shows that there exist ways of choosing the units to be updated such that the differences between random sequential nets and deterministic nets are minimal.
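The fixed point of the rate equations can be confirmed independently with exact arithmetic. The sketch below (our own naming) substitutes the ansatz $N_t(m)=b_mt$ with $b_m=4/3^{m+1}$ and $B=2$ into Eqs.~(\ref{eq6}), replacing the infinite geometric tails by their exact closed-form sums.

```python
from fractions import Fraction

# Exact fixed-point check: substituting the ansatz N_t(m) = b_m * t with
# b_m = 4 / 3**(m+1) and B = sum_m b_m = 2, the first of the rate equations,
# N_{t+1}(m) = N_t(m) + (N_t(m-1) - N_t(m))/N_t, reduces to
# b_m = (b_{m-1} - b_m)/B, which we verify with exact rational arithmetic.
MMAX = 60
b = [Fraction(4, 3 ** (m + 1)) for m in range(MMAX + 1)]
B = Fraction(2)  # exact value of the infinite geometric sum of the b_m

for m in range(1, MMAX + 1):
    assert b[m] == (b[m - 1] - b[m]) / B

# m = 0 equation: b_0 = -b_0/B + (sum_m 2**m b_m)/B.  The infinite sum is
# (4/3) * sum (2/3)**m = 4; the truncated sum differs by 4*(2/3)**(MMAX+1).
gain = Fraction(4)
partial = sum(2 ** m * b[m] for m in range(MMAX + 1))
assert gain - partial == gain * Fraction(2, 3) ** (MMAX + 1)
assert b[0] == -b[0] / B + gain / B
```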
An interesting question concerns the differences between the deterministic and random scale-free nets, even when the random construction is unbiased.
It is evident, from our last example, that differences do exist. For instance, in the deterministic tree there are exactly two hubs (the highest degree nodes), always connected to one another. This need not be the case in the corresponding random tree. More generally, let $M_n(k,l)$ be the number
of links with endpoints of degree $2^k$, $2^l$ for generation $n$ of the deterministic tree. Then
\begin{equation}
\begin{aligned}
&M_n(k,l)=M_{n-1}(k-1,l-1)\;, \qquad 0<k\leq l\;,\\
&M_n(0,l)=2^{l-1}N_{n-1}(l-1)\;,
\end{aligned}
\end{equation}
leading to
\begin{equation}
M_n(k,l)=\begin{cases}
2^{l-k}3^{n-l-1}, & k<l<n,\\
2^{n-k}, & k\leq l=n.
\end{cases}
\end{equation}
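These closed forms are easy to check by brute force. The sketch below (hypothetical function names; we take the single initial link as generation $0$, consistent with the closed form above) builds the deterministic tree, counts links by the degrees of their endpoints, and compares with the formula.

```python
from collections import Counter
from math import log2

def deterministic_tree(n):
    """Generation-n recursive tree: start from a single link (generation 0);
    at each generation attach deg(v) new leaves to every existing node,
    doubling all degrees."""
    degree = {0: 1, 1: 1}
    links = [(0, 1)]
    for _ in range(n):
        for v in list(degree):         # snapshot: skip this generation's leaves
            d = degree[v]
            for _ in range(d):
                w = len(degree)
                degree[w] = 1
                links.append((v, w))
            degree[v] = 2 * d
    return degree, links

def link_profile(n):
    """M_n(k, l): number of links whose endpoints have degrees 2**k, 2**l."""
    degree, links = deterministic_tree(n)
    cnt = Counter()
    for u, v in links:
        k, l = sorted((int(log2(degree[u])), int(log2(degree[v]))))
        cnt[(k, l)] += 1
    return cnt

def closed_form(n, k, l):
    """The closed form for M_n(k, l) stated in the text (k <= l)."""
    if k < l < n:
        return 2 ** (l - k) * 3 ** (n - l - 1)
    if k <= l == n:
        return 2 ** (n - k)
    return 0
```

With this indexing, $M_n(n,n)=1$ recovers the observation that the two hubs are always joined by exactly one link.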
This can be compared to $M_t(k,l)$ of random trees of an equivalent size (Fig.~\ref{correlations}).
The differences seem to be trivial, leaving us with the question whether significant discrepancies show with respect to any other measure of structure.
\begin{figure}[ht]
\vspace*{0.cm}\includegraphics*[width=0.40\textwidth]{correlations}
\caption{
Degree-degree correlations in deterministic and random trees.
Shown is $M(4,m)/\sum_kM(4,k)$ as a function of $m$ for deterministic trees of generation $n=8$ ($\bullet$), compared to random trees of equal size ($\circ$). The random results were obtained as an average over 1000 realizations. Error bars are smaller than the size of the symbols.
}
\label{correlations}
\end{figure}
\acknowledgments
DbA gratefully acknowledges partial support from NSF award PHY0140094.
Support for F.C. was provided by the Secretaria de Estado de Universidades
e Investigaci\'on (Ministerio de Educaci\'on y Ciencia), Spain, and the
European Regional Development Fund (ERDF) under project TIC2002-00155.
| {
"timestamp": "2005-08-12T17:30:09",
"yymm": "0508",
"arxiv_id": "cond-mat/0508317",
"language": "en",
"url": "https://arxiv.org/abs/cond-mat/0508317",
"abstract": "We investigate the differences between scale-free recursive nets constructed by a synchronous, deterministic updating rule (e.g., Apollonian nets), versus an asynchronous, random sequential updating rule (e.g., random Apollonian nets). We show that the dramatic discrepancies observed recently for the degree exponent in these two cases result from a biased choice of the units to be updated sequentially in the asynchronous version.",
"subjects": "Statistical Mechanics (cond-mat.stat-mech)",
"title": "Synchronous and Asynchronous Recursive Random Scale-Free Nets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806495475397,
"lm_q2_score": 0.7217431943271998,
"lm_q1q2_score": 0.7077274102998817
} |
https://arxiv.org/abs/1311.0757 | Computations with modified diagonals | Beauville and Voisin proved that the third modified diagonal of a complex K3 surface X represents a torsion class in the Chow group of X^3. Motivated by this result and by conjectures of Beauville and Voisin on the Chow ring of hyperkaehler varieties we prove some results on modified diagonals of projective varieties and we formulate a conjecture. | \section{Introduction}
Let $X$ be an $n$-dimensional variety over a field ${\mathbb K}$ and $a\in X({\mathbb K})$. For $I\subset\{1,\ldots,m\}$ we let
\begin{equation}\label{diagtwist}
\Delta^m_I(X;a):=\{(x_1,\ldots,x_m)\in X^m \mid \text{$x_i=x_j$ if $i,j\in I$ and $x_i=a$ if $i\notin I$}\}.
\end{equation}
The \emph{$m$-th modified diagonal cycle associated to $a$} is the $n$-cycle on $X^m$ given by
\begin{equation}\label{eccogamma}
\Gamma^m(X;a):=\sum\limits_{\emptyset\not= I\subset \{1,2,\ldots,m\}}(-1)^{m-|I|}\Delta^m_I(X;a)
\end{equation}
if $n>0$, and equal to $0$ if $n=0$. Gross and Schoen~\cite{groscho} proved that if $X$ is a (smooth projective) hyperelliptic curve and $a$ is a fixed point of a hyperelliptic involution then
$\Gamma^3(X;a)$ represents a torsion class in the Chow group of $X^3$. On the other hand it is known that if $X$ is a generic complex smooth plane curve and $m$ is small compared to its genus then $\Gamma^m(X;a)$ is \emph{not} algebraically equivalent to $0$, whatever $a$ is,
see~\cite{voisinf} (for the link between vanishing of $\Gamma^m(X;a)$ and Voisin's result on the Beauville decomposition of the Abel-Jacobi image of a curve see the proof of Prop.~4.3 of~\cite{beauvoisin}).
Let $X$ be a complex projective $K3$ surface: Beauville and Voisin~\cite{beauvoisin} have proved that there exists $c\in X$ such that the
rational equivalence class of $\Gamma^3(X;c)$ is torsion.
A natural question arises: under which hypotheses does a modified diagonal cycle on a projective variety represent a torsion class in the Chow group? We should point out that such a vanishing can entail unexpected geometric properties: if $X$ is a smooth projective variety of dimension $n$ and $\Gamma^{n+1}(X;a)$ is torsion in the Chow group then the intersection of arbitrary divisor classes $D_1,\ldots,D_{n}$ on $X$ is rationally equivalent to a multiple of $a$. A set of conjectures put forth by Beauville~\cite{beauconj} and Voisin~\cite{voisinhk} predicts exactly such a degenerate behaviour for the intersection product of divisors on hyperk\"ahler varieties, i.e.~complex smooth projective varieties which are simply connected and carry a holomorphic symplectic form whose cohomology class spans $H^{2,0}$ (see~\cite{liefu,shenvial} for more results on those conjectures). Our interest in modified diagonals has been motivated by the desire to prove the conjecture on hyperk\"ahler varieties stated below. From now on the notation $A\equiv B$ for cycles $A,B$ on a variety $X$ means that for some integer $d\not=0$ the cycle $dA$ is rationally equivalent to $dB$, i.e.~we will work with the rational Chow group $\CH(X)_{{\mathbb Q}}:=\CH(X)\otimes_{{\mathbb Z}}{\mathbb Q}$.
\begin{cnj}\label{cnj:diaghk}
Let $X$ be a Hyperk\"ahler variety of dimension $2n$. Then there exists $a\in X$ such that $\Gamma^{2n+1}(X;a)\equiv 0$.
\end{cnj}
In the present paper we will \emph{not} prove~\Ref{cnj}{diaghk}, instead we will establish a few basic results on modified diagonals.
Below is our first result, see~\Ref{sec}{prodotti}.
\begin{prp}\label{prp:protokunn}
Let $X,Y$ be smooth projective varieties. Suppose that there exist $a\in X({\mathbb K})$, $b\in Y({\mathbb K})$ such that $\Gamma^m(X;a)\equiv 0$ and $\Gamma^n(Y;b)\equiv 0$. Then $\Gamma^{m+n-1}(X\times Y;(a,b))\equiv 0$.
\end{prp}
We will apply the above proposition in order to show that if $T$ is a complex abelian surface and $a\in T$ then $\Gamma^5(T;a)\equiv 0$. Notice that if $E$ is an elliptic curve and $a\in E$ then $\Gamma^3(E;a)\equiv 0$ by Gross and Schoen~\cite{groscho}. These results are particular instances of a Theorem of Moonen and Yin~\cite{moy} which asserts that $\Gamma^{2g+1}(A;p)\equiv 0$ for $A$ an abelian variety of dimension $g$ and $p\in A({\mathbb K})$ (and more generally for an abelian scheme of relative dimension $g$).
A word about the relation between Moonen--Yin's result and~\Ref{cnj}{diaghk}. Beauville and Voisin proved that the relation $\Gamma^3(X;c)\equiv 0$ for $X$ a complex projective $K3$ surface (and a certain $c\in X$) follows from the existence of an elliptic surface $Y$ dominating $X$ and the relation $\Gamma^3(E_t;a)\equiv 0$ for the fibers of the elliptic fibration on $Y$. We expect that the theorem of Moonen and Yin can be used to prove
that~\Ref{cnj}{diaghk} holds for Hyperk\"ahler varieties which are covered generically by abelian varieties; this is the subject of work in progress. (It is hard to believe that every Hyperk\"ahler variety of dimension greater than $2$ is covered generically by abelian varieties, but certainly there are interesting codimension-$1$ families which have this property, viz.~Lagrangian fibrations and Hilbert schemes of $K3$ surfaces; moreover Lang's conjectures on hyperbolicity would give that a hyperk\"ahler variety is generically covered by varieties birational to abelian varieties.) In~\Ref{sec}{fibrazioni} we will prove that, in a certain sense, \Ref{prp}{protokunn} holds also for ${\mathbb P}^r$ fibrations over smooth projective varieties if certain hypotheses are satisfied; we will then apply the result to prove vanishing of classes of modified diagonals of symmetric products of curves of genus at most $2$. In~\Ref{sec}{scoppio} we will prove
the following result.
\begin{prp}\label{prp:blowdel}
Let $Y$ be a smooth projective variety and $V\subset Y$ be a smooth subvariety of codimension $e$. Suppose that there exists $b\in V({\mathbb K})$ such that $\Gamma^{n+1}(Y;b)\equiv 0$ and $\Gamma^{n-e+1}(V;b)\equiv 0$. Let $X\to Y$ be the blow-up of $V$ and $a\in X({\mathbb K})$ such that $f(a)=b$. Then $\Gamma^{n+1}(X;a)\equiv 0$.
\end{prp}
We will apply~\Ref{prp}{blowdel} and~\Ref{prp}{protokunn} in order to show that~\Ref{cnj}{diaghk} holds for $S^{[n]}$ where $S$ is a complex $K3$ surface and $n=2,3$, see~\Ref{prp}{diaghilbk3}. In~\Ref{sec}{rivdop} we will consider double covers $f\colon X\to Y$ where $X$ is a projective variety. We will prove that if $a\in X({\mathbb K})$ is a ramification point and $\Gamma^m(Y;f(a))\equiv 0$ then $\Gamma^{2m-1}(X;a)\equiv 0$, provided $m=2,3$. The proof for $m=2$ is the proof, given by Gross and Schoen, that if $X$ is a hyperelliptic curve then
$\Gamma^3(X;a)\equiv 0$ for $a\in X({\mathbb K})$ a fixed point of a hyperelliptic involution; we expect that our extension will work for arbitrary $m$ but we have not been able to carry out the necessary linear algebra computations. The result for $m=3$ allows us to give another proof that
$\Gamma^5(T;a)\equiv 0$ for a complex abelian surface $T$: the equality $\Gamma^5(T;a)\equiv 0$ follows from our result on double covers and the equality $\Gamma^3(T/\langle -1\rangle;c)\equiv 0$ proved by Beauville and Voisin~\cite{beauvoisin}.
\subsection{Conventions and notation}
Varieties are defined over a base field $\mathbb K$. A \emph{point of $X$} is an element of $X({\mathbb K})$.
We denote the small diagonal $\Delta^m_{\{1,\ldots,m\}}(X;a)$ by $\Delta^m(X)$ and we let $\pi^m_i\colon X^m\to X$ be the $i$-th projection - we will drop the superscript $m$ if there is no potential for confusion. We let $X^{(n)}$ be the $n$-th symmetric product of $X$ i.e.~$X^{(n)}:=X^n/{\mathcal S}_n$ where ${\mathcal S}_n$ is the symmetric group on $n$ elements.
\subsection{Acknowledgments}
It is a pleasure to thank Lie Fu, Ben Moonen and Charles Vial for the interest they took in this work.
\section{Preliminaries}
\subsection{}\label{subsec:significato}
\setcounter{equation}{0}
Let $X$ be an $n$-dimensional projective variety over a field ${\mathbb K}$, $a\in X({\mathbb K})$ and $h$ a hyperplane class on $X$. Let $\iota\colon\Delta^m(X)\hookrightarrow X^m$ be the inclusion map. If $m\le n$ then
\begin{equation}\label{iperpiano}
\Gamma^{m}(X;a)\cdot\pi_1^{*}(h)\cdot\pi_2^{*}(h)\cdot\ldots\cdot\pi_{m-1}^{*}(h)\cdot\pi_{m}^{*}(h^{n-m+1})=
\iota_{*}(h^n).
\end{equation}
Since $\deg\iota_{*}(h^n)\not=0$ it follows that $\Gamma^{m}(X;a)\not\equiv 0$ if $m\le n$. Now suppose that $\Gamma^{n+1}(X;a)\equiv 0$. Let $D_1,\ldots,D_n$
be \emph{Cartier} divisors on $X$: then
\begin{equation}\label{tuttimulti}
0=\pi_{n+1,*}(\Gamma^{n+1}(X;a)\cdot\pi_1^{*}D_1\cdot\ldots\cdot \pi_n^{*}D_n)=D_1\cdot\ldots\cdot D_n-\deg(D_1\cdot\ldots\cdot D_n)a
\end{equation}
in $\CH_0(X)_{{\mathbb Q}}$.
\begin{rmk}\label{rmk:unicopunto}
Equation~\eqref{tuttimulti} shows that if $\Gamma^{n+1}(X;a)\equiv 0$ and $\Gamma^{n+1}(X;b)\equiv 0$ then $a\equiv b$.
\end{rmk}
\begin{expl}\label{expl:spapro}
The intersection product between cycle classes of complementary dimension defines a perfect pairing on $\CH(({\mathbb P}^n)^m)$. Let $a\in{\mathbb P}^n$: since $\Gamma^{n+1}({\mathbb P}^n;a)$ pairs to $0$ with any class of complementary dimension it follows that $\Gamma^{n+1}({\mathbb P}^n;a)\equiv 0$.
\end{expl}
\subsection{}\label{subsec:homcomp}
\setcounter{equation}{0}
In the present subsection we will assume that $X$ is a complex smooth projective variety of dimension $n$. Let $a\in X$. Let $\alpha_1,\ldots,\alpha_m\in H_{DR}(X)$ be De Rham homogeneous cohomology classes such that $\sum_{i=1}^m\deg\alpha_i=2n$. Thus it makes sense to integrate $\pi_1^{*}\alpha_1\wedge\ldots\wedge\pi_m^{*}\alpha_m$ on $\Gamma^m(X;a)$. Let
\begin{equation}\label{eccoesse}
s:=|\{1\le i\le m \mid \deg\alpha_i=0\}|.
\end{equation}
A straightforward computation gives that
\begin{equation}\label{integrale}
\int_{\Gamma^m(X;a)} \pi_1^{*}\alpha_1\wedge\ldots\wedge\pi_m^{*}\alpha_m =
\sum_{\ell=0}^{m-1}(-1)^{\ell}{{s}\choose{\ell}}\int_X \alpha_1\wedge\ldots\wedge\alpha_m.
\end{equation}
\begin{prp}\label{prp:banale}
Let $X$ be a smooth complex projective variety and $a\in X$. Let $n$ be the dimension of $X$ and $d$ be its Albanese dimension. The homology class of $\Gamma^{m}(X;a)$ is torsion if and only if $m>(n+d)$.
\end{prp}
\begin{proof}
If $n=0$ the result is obvious. From now on we assume that $n>0$. By~\eqref{iperpiano} we may assume that $m>n$.
The homology class of $\Gamma^{m}(X;a)$ is torsion if and only if the left-hand side of~\eqref{integrale} vanishes for every choice of homogeneous $\alpha_1,\ldots,\alpha_m\in H_{DR}(X)$ such that $\sum_{i=1}^m\deg\alpha_i=2n$.
Suppose first that $n<m\le(n+d)$ and let $m=n+e$: thus $0< e\le d$. Choose a point of $X$ and let $\alb_X\colon X\to\Alb(X)$ be the associated Albanese map.
Let $\theta$ be a K\"ahler form on $\Alb(X)$: by hypothesis $\dim(\im \alb_X)=d$ and hence there exist holomorphic $1$-forms $\psi_1,\ldots,\psi_e$ on $\Alb(X)$ such that
\begin{equation}\label{positivo}
\int_{\im(\alb_X)}\psi_1\wedge\ldots\wedge\psi_e\wedge\overline{\psi}_1\wedge\ldots\wedge\overline{\psi}_e\wedge\theta^{d-e}>0.
\end{equation}
For $i=1,\ldots,e$ let $\phi_i:=\alb_X^{*}\psi_i$ and $\eta:=\alb_X^{*}\theta$. Let $\omega\in H^2_{DR}(X)$ be a K\"ahler class. Equations~\eqref{integrale} and~\eqref{positivo} give that
\begin{equation}
\scriptstyle
\int_{\Gamma^m(X;a)} \pi_1^{*}\phi_1\wedge\ldots\wedge\pi_{e}^{*}\phi_e\wedge
\pi_{e+1}^{*}\overline{\phi}_1\wedge\ldots\wedge\pi_{2e}^{*}\overline{\phi}_e\wedge
\pi_{2e+1}^{*}\eta\wedge\ldots\wedge\pi_{e+d}^{*}\eta
\wedge\pi_{e+d+1}^{*}\omega\wedge\ldots\wedge\pi_{m}^{*}\omega
=\int_{X} \phi_1\wedge\ldots\wedge\phi_e\wedge
\overline{\phi}_1\wedge\ldots\wedge\overline{\phi}_e\wedge
\eta^{d-e}\wedge\omega^{n-d}>0
\end{equation}
It follows that the homology class of $\Gamma^{m}(X;a)$ is not torsion.
Lastly suppose that $m>(n+d)$.
Let $s$ be given by~\eqref{eccoesse}: then $s\le (m-1)$ because $n>0$. It follows that if $s>0$ the right-hand side of~\eqref{integrale} vanishes (by the binomial formula). Now assume that $s=0$: by~\eqref{integrale} we have that
\begin{equation}\label{lazio}
\int_{\Gamma^m(X;a)} \pi_1^{*}\alpha_1\wedge\ldots\wedge\pi_m^{*}\alpha_m =\int_X \alpha_1\wedge\ldots\wedge\alpha_m.
\end{equation}
Let
\begin{equation}\label{eccoti}
t:=|\{1\le i\le m \mid \deg\alpha_i=1\}|.
\end{equation}
If $t>2d$ then the right-hand side of~\eqref{lazio} vanishes because every class in $H^1_{DR}(X)$ is represented by the pull-back of a closed $1$-form on $\Alb(X)$ via the Albanese map and by hypothesis $\dim(\im \alb_X)=d$. Now suppose that $t\le 2d$. Then
\begin{equation}
\deg(\pi_1^{*}\alpha_1\wedge\ldots\wedge\pi_m^{*}\alpha_m)\ge t+2(m-t)>2n+2d-t\ge 2n
\end{equation}
and hence the right-hand side of~\eqref{lazio} vanishes because the integrand is identically zero. This proves that if $m>(n+d)$ the homology class of $\Gamma^{m}(X;a)$ is torsion.
\end{proof}
\subsection{}\label{subsec:gradofin}
\setcounter{equation}{0}
Let $f\colon X\to Y$ be a map of finite non-zero degree between projective varieties. Let $a\in X$ and $b:=f(a)$. Then $f_{*}\Gamma^m(X;a)=(\deg f)\Gamma^m(Y;b)$. It follows that if $\Gamma^m(X;a)\equiv 0$ then $\Gamma^m(Y;b)\equiv 0$.
\section{Products}\label{sec:prodotti}
We will prove~\Ref{prp}{protokunn} and then we will prove that if $T$ is a complex abelian surface then $\Gamma^5(T;a)\equiv 0$ for any $a\in T$.
\subsection{Preliminary computations}
\setcounter{equation}{0}
Let $X$ and $Y$ be projective varieties and $a\in X$, $b\in Y$.
Let $\emptyset\not=I\subset \{1,\ldots,r\}$ and $\emptyset\not=J\subset \{1,\ldots,s\}$. Thus $\Delta^r_{I}(X;a)\subset X^r$ and $\Delta^s_{J}(Y;b)\subset Y^s$: we let
\begin{equation}\label{doppia}
\Delta^{r,s}_{I,J}(X,Y;a,b):=\Delta^r_{I}(X;a)\times \Delta^s_{J}(Y;b)\subset X^r\times Y^s.
\end{equation}
We let $\Delta^{r,s}(X,Y)=\Delta^{r,s}_{\{1,\ldots,r\},\{1,\ldots,s\}}(X,Y;a,b)$. For the remainder of the present section we let
\begin{equation}\label{semplifica}
e:=m+n-1.
\end{equation}
We will constantly make the identification
\begin{equation}\label{scambio}
\begin{matrix}
(X\times Y)^{e} & \overset{\sim}{\longrightarrow} & X^{e}\times Y^{e} \\
((x_1,y_1),\ldots,(x_{e},y_{e})) & \mapsto & (x_1,\ldots,x_{e},y_1,\ldots,y_{e})
\end{matrix}
\end{equation}
With the above notation~\Ref{prp}{protokunn} is equivalent to the following rational equivalence:
\begin{equation}\label{diagprod}
\sum\limits_{\emptyset\not=I\subset \{1,\ldots,e\}}(-1)^{e-|I|}\Delta^{e,e}_{I,I}(X,Y;a,b)\equiv 0.
\end{equation}
\begin{prp}\label{prp:delsup}
Let $X$ be a smooth projective variety and $a\in X$. Suppose that $\Gamma^m(X;a)\equiv 0$. Then
\begin{equation}\label{delsup}
\Delta^{m+r}(X)\equiv \sum_{1\le |J|\le (m-1)}(-1)^{m-1-|J| }{m+r-1-|J| \choose r}\Delta^{m+r}_J(X;a)
\end{equation}
for every $r\ge 0$.
\end{prp}
\begin{proof}
By induction on $r$. If $r=0$ then~\eqref{delsup} is equivalent to $\Gamma^m(X;a)\equiv 0$. Let's prove the inductive step.
Since $\Gamma^m(X;a)\equiv 0$ we have that
\begin{multline}
\Delta^{m+r+1}(X)\equiv \pi_{1,\ldots,m+r}^{*}\Delta^{m+r}(X)\cdot \pi_{m+r,m+r+1}^{*}\Delta^2(X)\equiv \\
\equiv \pi_{1,\ldots,m+r}^{*}\left(\sum_{\substack{J\subset\{1,\ldots,m+r\} \\ 1\le |J|\le(m-1)}}(-1)^{m-1-|J| }{m+r-1-|J| \choose r}\Delta^{m+r}_J(X;a)\right)\cdot \pi_{m+r,m+r+1}^{*}\Delta^2(X).
\end{multline}
Next notice that
\begin{equation}\label{duecasi}
\pi_{1,\ldots,m+r}^{*}\Delta^{m+r}_J(X;a)\cdot \pi_{m+r,m+r+1}^{*}\Delta^2(X)\equiv
\begin{cases}
\Delta^{m+r+1}_J(X;a) & \text{if $(m+r)\notin J$,}\\
\Delta^{m+r+1}_{J\cup\{m+r+1\}}(X;a) & \text{if $(m+r)\in J$,}
\end{cases}
\end{equation}
Thus $\Delta^{m+r+1}(X)$ is rationally equivalent to a linear combination of cycles $\Delta^{m+r+1}_J(X;a)$ with $|J|\le(m-1)$ and of cycles $\Delta^{m+r+1}_{K}(X;a)$ where
\begin{equation}\label{kappacond}
|K|=m,\qquad \{m+r,m+r+1\}\subset K.
\end{equation}
Let $K$ be such a subset and write $K=\{i_1,\ldots,i_m\}$ where $i_1<\ldots <i_m$.
Let $\iota\colon X^{m}\to X^{m+r+1}$ be the map which composed with the $j$-th projection of $X^{m+r+1}$ is equal to the constant map to $a$ if $j\notin K$, and is equal to the $l$-th projection of $X^{m}$ if $j=i_l$. Then $\Delta^{m+r+1}_K(X;a)=\iota_{*}\Delta^{m}(X)$ and hence
the equivalence $\Gamma^m(a)\equiv 0$ gives that
\begin{equation}
\Delta^{m+r+1}_{K}(X;a)\equiv \sum_{\substack{J\subset K \\ 1\le |J|\le(m-1)}}(-1)^{m-1-|J|}\Delta_J^{m+r+1}(X;a).
\end{equation}
Putting everything together we get an equivalence
\begin{equation}\label{caino}
\Delta^{m+r+1}(X)\equiv \sum_{1\le |J|\le (m-1)}(-1)^{m-1-|J| }c_J\Delta^{m+r+1}_J(X;a)
\end{equation}
In order to prove that $c_J={m+r-|J| \choose r+1}$ we distinguish four cases: they are indexed by the intersection
\begin{equation}\label{preiti}
J\cap\{m+r,m+r+1\}.
\end{equation}
Suppose that~\eqref{preiti} is empty. We get a contribution (to $c_J$) of ${m+r-1-|J| \choose r}$ from the first case in~\eqref{duecasi}, and a contribution of
\begin{equation}
\scriptstyle
|\{(J\cup\{m+r,m+r+1\})\subset K \subset\{1,\ldots,m+r+1\} \mid |K|=m\}|=
{m+r-1-|J|\choose m-2-|J|}={m+r-1-|J|\choose r+1}
\end{equation}
from the subsets $K$ satisfying~\eqref{kappacond}. This proves that $c_J={m+r-|J| \choose r+1}$ in this case. The proof in the other three cases is similar.
\end{proof}
\begin{crl}\label{crl:delsup}
Let $X$ be a smooth projective variety and $a\in X$. Suppose that $\Gamma^m(X;a)\equiv 0$. Let $s\ge 0$ and $I\subset \{1,\ldots,m+s\}$ be a subset of cardinality at least
$m$. Then
\begin{equation}\label{delsupdue}
\Delta^{m+s}_I(X;a)\equiv \sum_{\substack{J\subset I \\ 1\le |J|\le(m-1)}}(-1)^{m-1-|J|}{|I|-|J| -1\choose |I|-m}\Delta^{m+s}_J(X;a).
\end{equation}
\end{crl}
\begin{proof}
Let $q:=|I|$ and $I=\{i_1,\ldots,i_q\}$ where $i_1<\ldots <i_q$. Let $\iota\colon X^{q}\to X^{m+s}$ be the map which composed with the $j$-th projection of $X^{m+s}$ is equal to the constant map to $a$ if $j\notin I$, and is equal to the $l$-th projection of $X^{q}$ if $j=i_l$.
Then $\Delta^{m+s}_I(X;a)=\iota_{*}\Delta^{q}(X)$ and one gets~\eqref{delsupdue} by invoking~\Ref{prp}{delsup}.
\end{proof}
\begin{crl}\label{crl:emmabon}
Let $X$, $Y$ be smooth projective varieties and $a\in X$, $b\in Y$. Suppose that $\Gamma^m(X;a)\equiv 0$ and $\Gamma^n(Y;b)\equiv 0$.
Assume that $m\le n$. Let $I\subset \{1,\ldots,e\}$ (recall that $e=m+n-1$).
\begin{enumerate}
\item
If $n\le |I|$ then
\begin{equation}\label{moavero}
\Delta^{e,e}_{I,I}(X,Y;a,b)\equiv \sum_{\substack{(J\cup K)\subset I \\ 1\le |J|\le(m-1) \\ 1\le |K|\le (n-1)}}(-1)^{m+n-|J|-|K|}
{|I|-|J| -1\choose m-|J|-1}{|I|-|K|-1\choose n-|K| -1}\Delta^{e,e}_{J,K}(X,Y;a,b).
\end{equation}
\item
If $m\le |I|<n$ then
\begin{equation}\label{saccomanni}
\Delta^{e,e}_{I,I}(X,Y;a,b)\equiv \sum_{\substack{J\subset I \\ 1\le |J|\le(m-1)}}(-1)^{m-1-|J|}
{|I|-|J| -1\choose m-|J|-1}\Delta^{e,e}_{J,I}(X,Y;a,b).
\end{equation}
\end{enumerate}
\end{crl}
\begin{proof}
By definition $\Delta^{e,e}_{I,I}(X,Y;a,b)=\Delta^{e}_{I}(X;a)\times \Delta^{e}_{I}(Y;b)$. Now suppose that $n\le |I|$. By~\Ref{crl}{delsup} the first factor is rationally equivalent to a linear combination of $\Delta_J^e(X;a)$'s with $J\subset I$ and $1\le |J|\le(m-1)$, the second factor is rationally equivalent to a linear combination of $\Delta_K^e(Y;b)$'s with $K\subset I$ and $1\le |K|\le(n-1)$: writing out the product one gets~\eqref{moavero}. The proof of~\eqref{saccomanni} is similar.
\end{proof}
\subsection{Linear relations between binomial coefficients.}\label{subsec:coeffbin}
\setcounter{equation}{0}
The following fact will be useful:
\begin{equation}\label{combcomb}
\sum_{t=0}^n(-1)^t p(t){n\choose t}=0\qquad \forall p\in{\mathbb Q}[x]\text{ such that $\deg p<n$.}
\end{equation}
In order to prove~\eqref{combcomb} let $d<n$: then we have
\begin{equation}
\sum_{t=0}^n(-1)^t {t\choose d}{n\choose t}={n\choose d}\sum_{t=d}^n(-1)^t {n-d\choose t-d}=(1-1)^{n-d}=0.
\end{equation}
Since $\{{x\choose 0},{x\choose 1},\ldots,{x\choose n-1}\}$ is a basis of the vector space of polynomials of degree at most $(n-1)$ Equation~\eqref{combcomb} follows.
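Identity~\eqref{combcomb} can be machine-checked on the basis $\{{x\choose d}\}_{d<n}$; the following short sketch (function names ours) does exactly that, using the falling-factorial expression of ${x\choose d}$ so that negative integer arguments are handled correctly.

```python
from math import comb, factorial

def binom_poly(x, d):
    """The polynomial C(x, d) = x(x-1)...(x-d+1)/d!, for any integer x
    (the product of d consecutive integers is always divisible by d!)."""
    num = 1
    for i in range(d):
        num *= x - i
    return num // factorial(d)

def alternating_sum(n, d):
    """sum_{t=0}^{n} (-1)**t * C(t, d) * C(n, t); vanishes whenever d < n."""
    return sum((-1) ** t * binom_poly(t, d) * comb(n, t) for t in range(n + 1))

# Checking the basis elements C(x, 0), ..., C(x, n-1) checks the statement
# for every polynomial of degree < n.
for n in range(1, 12):
    for d in range(n):
        assert alternating_sum(n, d) == 0
```

For $d=n$ the sum equals $(-1)^n$ rather than $0$, so the bound $\deg p<n$ is sharp.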
\subsection{Proof of the main result.}\label{subsec:combinatorica}
\setcounter{equation}{0}
We will prove~\Ref{prp}{protokunn}.
As noticed above it suffices to prove that~\eqref{diagprod} holds.
Without loss of generality we may assume that $m\le n$. \Ref{crl}{emmabon} gives that for each $1\le t\le e$ and $J,K\subset \{1,\ldots,e\}$ with $|J|\le(m-1)$, $|K|\le(n-1)$ there exists $c_{J,K}(t)$ such that
\begin{equation}\label{}
\sum_{|I|=t}\Delta^{e,e}_{I,I}(X,Y;a,b)\equiv \sum_{\substack{J,K\subset \{1,\ldots,e\} \\ 1\le |J|\le(m-1) \\ 1\le |K|\le (n-1)}}
c_{J,K}(t)\Delta^{e,e}_{J,K}(X,Y;a,b).
\end{equation}
It will suffice to prove that for each $J,K$ as above we have
\begin{equation}\label{jannacci}
\sum_{t=1}^{e}(-1)^{t} c_{J,K}(t)=0.
\end{equation}
Equations~\eqref{moavero} and~\eqref{saccomanni} give that $c_{J,K}(t)=0$ if $t<|J\cup K|$ and that
\begin{equation}\label{cikappati}
c_{J,K}(t)= (-1)^{m+n-|J|-|K|}
{t-|J| -1\choose m-|J|-1}{t-|K|-1\choose n-|K|-1}{e-|J\cup K| \choose t-|J\cup K| },\quad \max\{|J\cup K|,n\}\le t\le e.
\end{equation}
We distinguish between the four cases:
\begin{enumerate}
\item
$J\not\subset K$.
\item
$J\subset K$ and $m\le|K|$.
\item
$J\subset K$, $J\not=K$ and $|K|<m$.
\item
$J= K$ and $|K|<m$.
\end{enumerate}
Suppose that~(1) holds.
Then~\Ref{crl}{emmabon} gives that $c_{J,K}(t)=0$ if $t<n$. Let $p\in{\mathbb Q}[x]$ be given by
%
\begin{equation}\label{bersani}
p:=(-1)^{m+n-|J|-|K|}{x-|J|-1\choose m-|J|-1}{x-|K|-1\choose n-|K|-1}.
\end{equation}
We must prove that
\begin{equation}\label{serenella}
\sum_{t=\max\{|J\cup K|,n\}}^e (-1)^t p(t){e-|J\cup K| \choose t-|J\cup K| }=0.
\end{equation}
If $n\le |J\cup K|$ then~\eqref{serenella} follows at once from~\eqref{combcomb} (notice that $\deg p<(e-|J\cup K|)$);
if $|J\cup K|<n$ then~\eqref{serenella} follows from~\eqref{combcomb} and the fact that $p(i)=0$ for $|J\cup K|\le i\le(n-1)$. This proves~\eqref{jannacci} if Item~(1) above holds. Now let's assume that Item~(2) above holds. Then $|J\cup K|=|K|<n$: it follows that if $n\le t$ then $c_{J,K}(t)$ is given by~\eqref{cikappati}. On the other hand~\Ref{crl}{emmabon} gives that if $t<n$ and $t\not=|K|$ then $c_{J,K}(t)=0$, and
%
\begin{equation}
c_{J,K}(|K|)=(-1)^{m-1-|J|}{|K|-|J|-1\choose m-|J|-1}.
\end{equation}
%
Thus we must prove that
%
\begin{equation}\label{marolla}
(-1)^{|K|}(-1)^{m-1-|J|}{|K|-|J|-1\choose m-|J|-1}+\sum_{t=n}^e (-1)^t p(t){e-|J\cup K| \choose t-|J\cup K| }=0
\end{equation}
where $p$ is given by~\eqref{bersani}. Now notice that $0=p(|K|+1)=\ldots=p(n-1)$: thus~\eqref{combcomb} gives that
%
\begin{equation*}
\scriptstyle
\sum_{t=n}^e (-1)^t p(t){e-|J\cup K| \choose t-|J\cup K| }=-(-1)^{|K|}p(|K|){e-|K|\choose 0}=(-1)^{m+n-1-|J|}
{|K|-|J|-1\choose m-|J|-1}{-1\choose n-|K|-1}=(-1)^{m-|J|-|K|}{|K|-|J|-1\choose m-|J|-1}.
\end{equation*}
This proves that~\eqref{marolla} holds. If Item~(3) above holds one proves~\eqref{jannacci} arguing as in Item~(1), if Item~(4) holds the argument is similar to that given if Item~(2) holds.
\qed
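Since the four cases reduce to identities between binomial coefficients, they can be spot-checked numerically. The sketch below (our own naming; $j=|J|$, $k=|K|$, $u=|J\cup K|$, so that $J\not\subset K$ forces $u\ge k+1$) verifies~\eqref{serenella} and~\eqref{marolla} over a range of parameters.

```python
from math import comb, factorial

def gbinom(x, d):
    """C(x, d) as a polynomial in x; x may be any integer, possibly negative."""
    num = 1
    for i in range(d):
        num *= x - i
    return num // factorial(d)

def p(t, j, k, m, n):
    """The polynomial of eq. (bersani), evaluated at t, with j = |J|, k = |K|."""
    return ((-1) ** (m + n - j - k)
            * gbinom(t - j - 1, m - j - 1)
            * gbinom(t - k - 1, n - k - 1))

def lhs_case1(j, k, u, m, n):
    """Left-hand side of eq. (serenella); u = |J ∪ K| with J ⊄ K."""
    e = m + n - 1
    return sum((-1) ** t * p(t, j, k, m, n) * comb(e - u, t - u)
               for t in range(max(u, n), e + 1))

def lhs_case2(j, k, m, n):
    """Left-hand side of eq. (marolla); J ⊂ K with m <= k = |K| < n."""
    e = m + n - 1
    boundary = (-1) ** k * (-1) ** (m - 1 - j) * comb(k - j - 1, m - j - 1)
    return boundary + sum((-1) ** t * p(t, j, k, m, n) * comb(e - k, t - k)
                          for t in range(n, e + 1))

for m in range(2, 5):
    for n in range(m, 6):
        e = m + n - 1
        for j in range(1, m):
            for k in range(1, n):
                for u in range(max(j, k + 1), min(j + k, e) + 1):
                    assert lhs_case1(j, k, u, m, n) == 0
            for k in range(m, n):
                assert lhs_case2(j, k, m, n) == 0
```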
\subsection{Stability.}\label{subsec:stabilizza}
\setcounter{equation}{0}
We will prove a result that will be useful later on.
\begin{prp}\label{prp:stabile}
Let $X$ be a smooth projective variety and $a\in X$. Suppose that $\Gamma^m(X;a)\equiv 0$. If $s\ge 0$ then
$\Gamma^{m+s}(X;a)\equiv 0$.
\end{prp}
\begin{proof}
If $\dim X=0$ the result is trivial. Assume that $\dim X>0$. By definition
%
\begin{equation}
\Gamma^{m+s}(X;a):=\sum\limits_{\emptyset\not= I\subset \{1,2,\ldots,m+s\}}(-1)^{m+s-|I|}\Delta^{m+s}_I(X;a).
\end{equation}
Replacing $\Delta^{m+s}_I(X;a)$ for $m\le |I|\le(m+s)$ by the right-hand side of~\eqref{delsupdue} we get that
\begin{equation}
\Gamma^{m+s}(X;a)\equiv\sum_{1\le\ell\le(m-1)}c_{\ell}
\left(\sum\limits_{| I |=\ell}\Delta^{m+s}_I(X;a)\right)
\end{equation}
where
\begin{equation}
c_{\ell}=\sum_{r=0}^s(-1)^{m-\ell-1+s-r}{m-\ell-1+r\choose m-\ell-1}{m+s-\ell\choose s-r}+(-1)^{m+s-\ell}.
\end{equation}
Thus it suffices to prove that $c_{\ell}=0$ for $1\le \ell\le(m-1)$. Letting $t=s-r$ we get that
\begin{multline}
(-1)^{m-\ell-1}c_{\ell}=\sum_{t=0}^s(-1)^{t}{m-\ell-1+s-t\choose m-\ell-1}{m+s-\ell\choose t}+(-1)^{s-1}=\\
=\sum_{t=0}^{m+s-\ell}(-1)^{t}{m-\ell-1+s-t\choose m-\ell-1}{m+s-\ell\choose t}=0
\end{multline}
where the last equality follows from~\eqref{combcomb}.
\end{proof}
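The vanishing of the coefficients $c_\ell$ can also be confirmed numerically; a brief sketch (function name ours) evaluating the formula for $c_\ell$ directly:

```python
from math import comb

def c(ell, m, s):
    """The coefficient c_ell from the proof of the stability proposition."""
    total = sum((-1) ** (m - ell - 1 + s - r)
                * comb(m - ell - 1 + r, m - ell - 1)
                * comb(m + s - ell, s - r)
                for r in range(s + 1))
    return total + (-1) ** (m + s - ell)

# c_ell must vanish for all 1 <= ell <= m-1 and all s >= 0.
for m in range(2, 8):
    for s in range(0, 6):
        for ell in range(1, m):
            assert c(ell, m, s) == 0
```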
\subsection{Applications}\label{subsec:supab}
\setcounter{equation}{0}
\begin{prp}\label{prp:jaciper}
Suppose that $C$ is a smooth projective curve of genus $g$ and that there exists a degree-$2$ map $f\colon C\to{\mathbb P}^1$ ramified at $p\in C$. Then
\begin{enumerate}
\item
$\Gamma^{2g+1}(C^g;(p,\ldots,p))\equiv 0$,
\item
$\Gamma^{2g+1}(C^{(g)};gp)\equiv 0$, and
\item
$\Gamma^{2g+1}(\Pic^0(C);a)\equiv 0$ for any $a\in \Pic^0(C)$.
\end{enumerate}
\end{prp}
\begin{proof}
By Proposition~4.8 of~\cite{groscho} we have $\Gamma^3(C;p)\equiv 0$. Repeated application of~\Ref{prp}{protokunn} gives the first item. The quotient map $C^g\to C^{(g)}$ is finite and the image of $(p,\ldots,p)$ is $gp$:
thus Item~(2) follows from Item~(1) and~\Ref{subsec}{gradofin}.
Let $u_g\colon C^{(g)}\to \Pic^0(C)$ be the map $D\mapsto [D-gp]$: since $u_g$ is birational Item~(2) and~\Ref{subsec}{gradofin} give that $\Gamma^{2g+1}(\Pic^0(C);{\bf 0})\equiv 0$ where ${\bf 0}$ is the origin of $\Pic^0(C)$. Acting by translations we get that $\Gamma^{2g+1}(\Pic^0(C);a)\equiv 0$ for any $a\in \Pic^0(C)$.
\end{proof}
\begin{crl}\label{crl:jaciper}
If $T$ is a complex abelian surface then $\Gamma^5(T;a)\equiv 0$ for any $a\in T$.
\end{crl}
\begin{proof}
There exists a principally polarized abelian surface $J$ and an isogeny $J\to T$. By~\Ref{subsec}{gradofin} it suffices to prove that $\Gamma^5(J;b)\equiv 0$ for any $b\in J$. The surface $J$ is either a product of two elliptic curves $E_1,E_2$ or
the Jacobian of a smooth genus-$2$ curve $C$. Suppose that the former holds. Write $b=(p_1,p_2)$ where $p_i\in E_i$ for $i=1,2$. Then $\Gamma^3(E_i;p_i)\equiv 0$ by Proposition~4.8 of~\cite{groscho} and hence~\Ref{prp}{protokunn} gives that $\Gamma^5(E_1\times E_2;(p_1,p_2))\equiv 0$. If $J$ is
the Jacobian of a smooth genus-$2$ curve $C$ the corollary follows at once from~\Ref{prp}{jaciper}.
\end{proof}
\section{${\mathbb P}^r$-fibrations}\label{sec:fibrazioni}
Let $Y$ be a smooth projective variety. Let ${\mathscr F}$ be a locally-free sheaf of rank $(r+1)$ on $Y$ and $X:={\mathbb P}({\mathscr F})$. Thus the structure map $\rho\colon X\to Y$ is a ${\mathbb P}^r$-fibration. Let $Z:=c_1({\mathscr O}_X(1))\in\CH^1(X)$. Suppose that there exists $b\in Y$ such that $\Gamma^m(Y;b)\equiv 0$ and let $a\in\rho^{-1}(b)$. If ${\mathbb P}({\mathscr F})$ is trivial then
\begin{equation}\label{traslo}
\Gamma^{m+r}(X;a)\equiv 0
\end{equation}
by~\Ref{expl}{spapro} and~\Ref{prp}{protokunn}. In general~\eqref{traslo} does not hold. In fact suppose that
$Y$ is a $K3$ surface; then $\Gamma^3(Y;b)\equiv 0$ where $b$ is a point lying on a rational curve~\cite{beauvoisin}. If $\Gamma^{3+r}(X;a)\equiv 0$ then the top self-intersection of any divisor class on $X$ is a multiple of $[a]$, see~\Ref{subsec}{significato}: considering $Z^{r+2}$ we get that $c_2({\mathscr F})$ is a multiple of $[b]$.
We will prove the following results.
\begin{prp}\label{prp:rigcurva}
Keep notation as above and suppose that $\dim Y=1$. If $\Gamma^m(Y;b)\equiv 0$ then
$\Gamma^{m+r}(X;a)\equiv 0$.
\end{prp}
\begin{prp}\label{prp:rigsuperficie}
Keep notation as above and suppose that $\dim Y=2$. If either $\Gamma^{m-1}(Y;b)\equiv 0$, or $\Gamma^{m}(Y;b)\equiv 0$ and both $c_1({\mathscr F})^2$ and $c_2({\mathscr F})$ are multiples of $[b]$, then $\Gamma^{m+r}(X;a)\equiv 0$.
\end{prp}
As an application we will prove the following.
\begin{prp}\label{prp:prodsim}
Suppose that $C$ is a smooth projective curve of genus $g\le 2$ over an algebraically closed field ${\mathbb K}$ and that $p\in C$ is such that $\dim|{\mathscr O}_C(2p)| \ge 1$. Then $\Gamma^{d+g+1}(C^{(d)};dp)\equiv 0$ for any $d\ge 0$.
\end{prp}
\subsection{Comparing diagonals}
\setcounter{equation}{0}
Let $\rho^n\colon X^n\to Y^n$ be the $n$-th cartesian product of $\rho$. Let $\pi_i\colon X^n\to X$ be the $i$-th projection and $Z_i:=\pi_i^{*}Z$. Given a multi-index $E=(e_1,\ldots,e_n)$ with $0\le e_i$ for $1\le i\le n$ we let $Z^E:=Z_1^{e_1}\cdot\ldots\cdot Z^{e_n}_n$. We let
\begin{equation}
\max E:=\max\{e_1,\ldots,e_n\},\qquad |E|:=e_1+\ldots+e_n.
\end{equation}
Let $d:=\dim Y$ and $[\Delta^n(X)]\in\CH_{d+r}(X^n)$ be the class of the (smallest) diagonal. Since $\rho^n$ is a $({\mathbb P}^r)^n$-fibration we may write
\begin{equation}\label{darisolvere}
[\Delta^n(X)]=\sum_{\max E\le r}(\rho^n)^{*}(w_E({\mathscr F}))\cdot Z^E,\qquad w_E({\mathscr F})\in \CH_{|E|+d-r(n-1)}(Y^n).
\end{equation}
%
In order to describe the classes $w_E$ we let $\delta^n_Y\colon Y\hookrightarrow Y^n$ and $\delta^n_X\colon X\hookrightarrow X^n$ be the diagonal embeddings.
\begin{prp}\label{prp:coefficienti}
Let $r\ge 0$ and $E=(e_1,\ldots,e_n)$ be a multi-index. There exists a universal polynomial $P_E\in{\mathbb Q}[x_1,\ldots,x_q]$, where $q:=(r(n-1)-|E|)$, such that the following holds. Let ${\mathscr F}$ be a locally-free sheaf of rank $(r+1)$ on $Y$: then (notation as above) $w_E({\mathscr F})=\delta^n_{Y,*}(P_E(c_1({\mathscr F}),\ldots,c_q({\mathscr F})))$.
\end{prp}
\begin{proof}
Let $s_i({\mathscr F})$ be the $i$-th Segre class of ${\mathscr F}$ and $E^{\vee}:=(r-e_1,\ldots,r-e_n)$. Then
\begin{equation}\label{kyenge}
\rho^n_{*}([\Delta^n(X)]\cdot Z^{E^{\vee}})=\delta_{Y,*}^n(s_{|E^{\vee}|-r}({\mathscr F})).
\end{equation}
(By convention $s_{i}({\mathscr F})=0$ if $i<0$.)
On the other hand let $J=(j_1,\ldots,j_n)$ be a multi-index: then
\begin{equation}\label{ogbonna}
\rho^n_{*}\left(\left(\sum_{\max H\le r}(\rho^n)^{*}(w_H({\mathscr F}))\cdot Z^H\right)\cdot Z^{J}\right)=
\sum_{\max H\le r} w_{H}({\mathscr F})\cdot \pi_1^{*}(s_{h_1+j_1-r})\cdot\ldots\cdot \pi_n^{*}(s_{h_n+j_n-r}).
\end{equation}
Equations~\eqref{kyenge} and~\eqref{ogbonna} give that
\begin{multline}\label{esame}
\delta_{Y,*}^n(s_{|E^{\vee}|-r}({\mathscr F}))=\rho^n_{*}([\Delta^n(X)]\cdot Z^{E^{\vee}})=
\sum_{\max H\le r} w_{H}({\mathscr F})\cdot \pi_1^{*}(s_{h_1-e_1})\cdot\ldots\cdot \pi_n^{*}(s_{h_n-e_n})= \\
=w_{E}({\mathscr F})+\sum_{\substack{|H|>|E| \\ r\ge \max H}} w_{H}({\mathscr F})\cdot \pi_1^{*}(s_{h_1-e_1})\cdot\ldots\cdot \pi_n^{*}(s_{h_n-e_n}).
\end{multline}
Starting from the highest possible value of $|E|$, i.e.~$rn$, and descending through the values of $|E|$, one gets the proposition.
\end{proof}
\begin{rmk}\label{rmk:esplicito}
The proof of~\Ref{prp}{coefficienti} gives an iterative algorithm for the computation of $w_E({\mathscr F})$. A straightforward computation gives the formulae
\begin{equation*}\label{esplicito}
w_E({\mathscr F})=
\begin{cases}
0 & \text{if $|E|>r(n-1)$}, \\
[\Delta^n(Y)] & \text{if $|E|=r(n-1)$}, \\
(\lambda_E(1)-1)\delta^n_{Y,*}( c_1({\mathscr F}))& \text{if $|E|=r(n-1)-1$}, \\
\frac{1}{2}(\lambda_E(1) -1)(\lambda_E(1) -2)\delta^n_{Y,*}( c_1({\mathscr F})^2) +
(\lambda_E(2) -1 ) \delta^n_{Y,*}(c_2({\mathscr F}))& \text{if $|E|=r(n-1)-2$},
\end{cases}
\end{equation*}
where
\begin{equation}
\lambda_E(p):=|\{ 1\le i\le n \mid e_i+p\le r \}|.
\end{equation}
\end{rmk}
\subsection{Comparing modified diagonals}
\setcounter{equation}{0}
We will compare $\Gamma^{m+r}(X;a)$ and $\Gamma^{m+r}(Y;b)$. In the present subsection
$\emptyset\not=I\subset\{1,\ldots,m+r\}$ and $I^c:=(\{1,\ldots,m+r\}\setminus I)$; we let $\pi_I\colon X^{m+r}\to X^{|I|}$ be the projection determined by $I$. We also let $H=(h_1,\ldots,h_{m+r})$ be a multi-index. If $\max H\le r$ we let $\Top H:=\{1\le i\le m+r \mid h_i=r\}$. Applying~\Ref{prp}{coefficienti} and~\Ref{rmk}{esplicito} we get that
\begin{multline}\label{barvitali}
\Delta^{m+r}_I(X;a)=
(\rho^{m+r})^{*}(\Delta^{m+r}_I(Y;{b}))\cdot \sum_{\substack{\max H\le r \\ |H|=r(m+r-1) \\ I^c\subset\Top H }} Z^H+ \\
+(\rho^{m+r})^{*}\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)
\cdot\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-1 \\ I^c\subset\Top H }} (\lambda_H(1)-1) Z^H+ \\
+(\rho^{m+r})^{*}\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F})^2)\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)
\cdot\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-2 \\ I^c\subset\Top H}} \frac{1}{2}(\lambda_H(1)-1)(\lambda_H(1)-2) Z^H+\\
+(\rho^{m+r})^{*}\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_2({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)
\cdot\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-2 \\ I^c\subset\Top H}} (\lambda_H(2)-1) Z^H+{\mathscr R}
\end{multline}
%
where
\begin{equation}\label{brasile}
{\mathscr R}=\sum_{\substack{\max H\le r \\ |H|<r(m+r-1)-2}}Q_H Z^H
\end{equation}
and each $Q_H$ appearing in~\eqref{brasile} vanishes if the Chern classes of ${\mathscr F}$ of degree higher than $2$ are zero.
It follows that
\begin{multline}\label{barsport}
\Gamma^{m+r}(X;a)=
\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)}}(\rho^{m+r})^{*}\left(\sum_{I^c\subset\Top H}(-1)^{m+r-|I|}
\Delta^{m+r}_I(Y;{b})\right)\cdot Z^H+ \\
+\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-1}}(\rho^{m+r})^{*}\left(\sum_{I^c\subset\Top H}(-1)^{m+r-|I|}
\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)\right)
\cdot \epsilon_H Z^H+ \\
+\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-2}}(\rho^{m+r})^{*}\left(\sum_{I^c\subset\Top H}(-1)^{m+r-|I|}
\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F})^2)\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)\right)
\cdot \mu_H Z^H+\\
+\sum_{\substack{\max H\le r \\ |H|=r(m+r-1)-2}}(\rho^{m+r})^{*}\left(\sum_{I^c\subset\Top H}(-1)^{m+r-|I|}
\left(\pi_I^{*}\delta^{|I|}_{Y,*}(c_2({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{{b}\times\ldots\times{b}}_{|I^c|})\right)\right)
\cdot \nu_H Z^H+{\mathscr T}
\end{multline}
%
where $\epsilon_H:=(\lambda_H(1)-1)$, $\mu_H:=(\lambda_H(1)-1)(\lambda_H(1)-2)/2$, $\nu_H:=(\lambda_H(2)-1)$, and ${\mathscr T}$ has an expansion similar to that of ${\mathscr R}$, see~\eqref{brasile} and the comment following it.
\begin{rmk}\label{rmk:svanisce}
Suppose that $\Gamma^m(Y;b)=0$. Then the first addend on the right-hand side of~\eqref{barsport} vanishes. In fact it is clearly independent of the rank-$(r+1)$ locally-free sheaf ${\mathscr F}$, and it is $0$ for trivial ${\mathscr F}$ by~\Ref{prp}{protokunn}: it follows that it vanishes in general.
\end{rmk}
\subsection{${\mathbb P}^r$-bundles over curves}\label{subsec:rigcurva}
\setcounter{equation}{0}
We will prove~\Ref{prp}{rigcurva}. We start with an auxiliary result.
\begin{clm}\label{clm:sommalt}
Let $Y$ be a smooth projective variety and $b\in Y$. Suppose that $\Gamma^m(Y;b)=0$. Let $\mathfrak{z}\in\CH(Y)$: then
\begin{equation}\label{sommalt}
\sum_{I\subset\{1,\ldots,(m-1)\}}(-1)^{|I|}\pi_I^{*}\delta_{Y,*}^{|I|}(\mathfrak{z})\times\pi_{I^c}^{*}(\underbrace{b,\ldots,b}_{|I^c|})=0.
\end{equation}
%
\end{clm}
\begin{proof}
Let $\pi_{\{1,\ldots,(m-1)\}}\colon Y^m\to Y^{m-1}$ be the projection to the first $(m-1)$ coordinates. Then
\begin{equation}\label{sgargiante}
\pi_{\{1,\ldots,(m-1)\},*}(\Gamma^m(Y;b)\cdot\pi_m^{*}\mathfrak{z})=0.
\end{equation}
%
The claim follows because the left-hand side of~\eqref{sgargiante} equals the left-hand side of~\eqref{sommalt} multiplied by $(-1)^m$.
\end{proof}
By~\eqref{barsport} and~\Ref{rmk}{svanisce} we must prove that if $H=(h_1,\ldots,h_{m+r})$ is a multi-index such that $\max H\le r$ and $|H|=r(m+r-1)-1$ then
\begin{equation}\label{mammamia}
\sum_{I^c\subset\Top H}(-1)^{m+r-|I|}
\pi_I^{*}\delta^{|I|}_{Y,*}(c_1({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{b,\ldots,b}_{|I^c|})=0.
\end{equation}
A straightforward computation shows that $|\Top H|\ge(m-1)$: thus~\eqref{mammamia} holds by~\Ref{clm}{sommalt}.
\qed
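The inequality $|\Top H|\ge(m-1)$ invoked above can also be confirmed by brute force (a sanity check, not part of the argument): enumerate all multi-indices $H$ of length $m+r$ with $\max H\le r$ and $|H|=r(m+r-1)-1$, for small $m$ and $r$.

```python
from itertools import product

def check(m, r):
    """Verify |Top H| >= m-1 for every H of length m+r with
    max H <= r and |H| = r(m+r-1) - 1."""
    n = m + r
    target = r * (n - 1) - 1
    return all(
        sum(1 for h in H if h == r) >= m - 1
        for H in product(range(r + 1), repeat=n)
        if sum(H) == target
    )

assert all(check(m, r) for m in range(2, 5) for r in range(1, 4))
```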
\subsection{${\mathbb P}^r$-bundles over surfaces}
\setcounter{equation}{0}
We will prove~\Ref{prp}{rigsuperficie}. Notice that $\Gamma^m(Y;b)=0$: it holds by hypothesis in the second case, and it follows from~\Ref{prp}{stabile} if $\Gamma^{m-1}(Y;b)=0$.
Moreover~\eqref{mammamia} holds in this case as well; the argument is the one given in~\Ref{subsec}{rigcurva}. Thus~\eqref{barsport} and~\Ref{rmk}{svanisce} give that we must prove the following: if $H=(h_1,\ldots,h_{m+r})$ is a multi-index such that $\max H\le r$ and $|H|=r(m+r-1)-2$ then
\begin{equation}\label{papas}
\sum_{I^c\subset\Top H}(-1)^{m+r-|I|}
\left(\pi_I^{*}\delta^{|I|}_{Y,*}(\mu_H c_1({\mathscr F})^2+\nu_H c_2({\mathscr F}))\times\pi_{I^c}^{*}(\underbrace{b,\ldots,b}_{|I^c|})\right)=0.
\end{equation}
A straightforward computation shows that $|\Top H|\ge(m-2)$ and that equality holds if and only if $(r-1)\le h_i\le r$ for all $1\le i\le (m+r)$ (and thus the set of indices $i$ such that $h_i=(r-1)$ has cardinality $(r+2)$). If $\Gamma^{m-1}(Y;b)=0$ then~\eqref{papas} holds by~\Ref{clm}{sommalt}. If both $c_1({\mathscr F})^2$ and $c_2({\mathscr F})$ are multiples of $[b]$ then each term in the summation on the left-hand side of~\eqref{papas}
is a multiple of the class of the point $(b,\ldots,b)$ and the coefficients sum to $0$.
\qed
\subsection{Symmetric products of curves}
\setcounter{equation}{0}
If the genus of $C$ is $0$ then $C^{(d)}\cong{\mathbb P}^d$ and hence the result holds trivially, see~\Ref{expl}{spapro}. Suppose that the genus of $C$ is $1$. If $d=1$ then $\Gamma^3(C;p)\equiv 0$ by~\cite{groscho}. Let $d>1$ and let $u_d\colon C^{(d)}\to \Pic^0(C)$ be the map sending $D$ to $[D-d p]$. Since $u_d$ is a ${\mathbb P}^{d-1}$-fibration we get that $\Gamma^{d+2}(C^{(d)};dp)\equiv 0$ by~\Ref{prp}{rigcurva} and the equivalence $\Gamma^3(C;p)\equiv 0$. Lastly suppose that the genus of $C$ is $2$. If $d=1$ then $\Gamma^3(C;p)\equiv 0$ by~\cite{groscho} and if $d=2$ then $\Gamma^5(C^{(2)};2p)\equiv 0$ by~\Ref{prp}{jaciper}. Now assume that $d>2$ and let $u_d\colon C^{(d)}\to \Pic^0(C)$ be the map sending $D$ to $[D-d p]$. Then $u_d$ is a ${\mathbb P}^{d-2}$-fibration and we may write $C^{(d)}\cong{\mathbb P}({\mathscr E}_d)$ where ${\mathscr E}_d$ is a locally-free sheaf on $\Pic^0(C)$ such that
\begin{equation}
c_1({\mathscr E}_d)=-[\{[x-p] \mid x\in C\}],\qquad c_2({\mathscr E}_d)=[{\bf 0}],
\end{equation}
see Example~4.3.3 of~\cite{fulton}. By~\Ref{prp}{jaciper} we have $\Gamma^5(\Pic^0(C);{\bf 0})\equiv 0$; since $c_1({\mathscr E}_d)^2=2[{\bf 0}]$ and $c_2({\mathscr E}_d)=[{\bf 0}]$ we get that $\Gamma^{d+3}(C^{(d)};dp)\equiv 0$ by~\Ref{prp}{rigsuperficie}.
\section{Blow-ups}\label{sec:scoppio}
We will prove~\Ref{prp}{blowdel}. First a comment regarding its hypotheses.
Let $Y$ be a complex $K3$ surface and $X\to Y$ be the blow-up of $y\in Y$. We know (Beauville and Voisin) that there exists $c\in Y$ such that $\Gamma^3(Y;c)\equiv 0$, but if $y$ is not rationally equivalent to $c$ then there exists no $a\in X$ such that
$\Gamma^3(X;a)\equiv 0$; this follows from~\Ref{rmk}{unicopunto}. If $e=0,1$ then~\Ref{prp}{blowdel} is trivial, hence we will assume that $e\ge 2$. We let $f\colon X\to Y$ be the blow-up of $V$ and $E\subset X$ the exceptional divisor of $f$. Thus $a\in E$. Let $g\colon E\to V$ be defined by the restriction of $f$ to $E$, and let $(E/V)^{t}$ be the $t$-th fibered product of $g\colon E\to V$. The following commutative diagram will play a r\^ole in the proof of~\Ref{prp}{blowdel}:
\begin{equation}
\xymatrix{ (E/V)^{t} \ar_{\gamma_{t}}[r] \ar@/{}_{-1pc}/^{\alpha_{t}}[urrd] \ar_{}[d] & E^{t} \ar_{\beta_{t}}[r] \ar_{g^{t}}[d] & X^{t} \ar_{f^{t}}[d] \\
\Delta^{t}(V) \ar^{}[r] & V^{t} \ar^{}[r] & Y^{t} }
\end{equation}
(The maps which haven't been defined are the natural ones.) Whenever there is no danger of confusion we denote $\alpha_{t}((E/V)^{t})$ by $(E/V)^{t}$.
\subsection{Pull-back of the modified diagonal.}
\setcounter{equation}{0}
On $E$ we have an exact sequence of locally-free sheaves:
\begin{equation}
0\longrightarrow{\mathscr O}_E(-1)\longrightarrow g^{*}N_{V/Y}\longrightarrow Q\longrightarrow 0.
\end{equation}
For $i=1,\ldots,t$ let $Q_i(t)$ be the pull-back of $Q$ to $E^{t}$ via the $i$-th projection $E^{t}\to E$: thus $Q_i(t)$ is locally-free of rank $(e-1)$.
\begin{prp}\label{prp:eccesso}
Keep notation as above and let $d({t}):=({t}-1)(e-1)-1$. We have the following equalities in $\CH_{\dim X}(X^{t})$:
\begin{equation}\label{eccesso}
(f^{t})^{*}\Delta^{t}(Y)=
\begin{cases}
\Delta^{t}(X) & \text{if ${t}=1$,} \\
\Delta^{t}(X)+\beta_{{t},*}((g^{t})^{*}(\Delta^{t}(V))\cdot c_{d({t})}(\oplus_{j=1}^{t} Q_j({t}))) & \text{if ${t}> 1$.}
\end{cases}
\end{equation}
\end{prp}
\begin{proof}
The equality of schemes $f^{-1}\Delta^1(Y)=\Delta^1(X)$ gives~\eqref{eccesso} for ${t}=1$. Now let's assume that ${t}>1$. The closed set $(f^{t})^{-1}\Delta^{t}(Y)$ has the following decomposition into irreducible components:
\begin{equation}
(f^{t})^{-1}\Delta^{t}(Y)=\Delta^{t}(X)\cup (E/V)^{t}.
\end{equation}
The dimension of $(E/V)^{t}$ is equal to $(\dim X+({t}-1)(e-1)-1)$ and hence is larger than the expected dimension unless $2={t}=e$. It follows that if ${t}=2$ and $e=2$ then $(f^2)^{*}\Delta^2(Y)=\lambda\Delta^2(X)+\mu (E/V)^2$: one checks easily that $\lambda=\mu=1$ and hence~\eqref{eccesso} holds if ${t}=2$ and $e=2$. Now suppose that ${t}>1$ and $({t},e)\not=(2,2)$. Let $U:=(X^{t}\setminus(\Delta^{t}(X)\cap (E/V)^{t}))$ and ${\mathscr Z}:= (E/V)^{t} \cap U=(E/V)^{t} \setminus \Delta^{t}(X)$. Notice that $(E/V)^{t}$ is smooth and hence the open subset ${\mathscr Z}$ is smooth as well. Let $\iota\colon {\mathscr Z}\hookrightarrow U$ be the inclusion. The restriction of $(f^{t})^{*}\Delta^{t}(Y)$ to $U$ is equal to
\begin{equation}
[\Delta^{t}(X)\cap U]+\iota_{*}(c_{d({t})}({\mathscr N}))
\end{equation}
where ${\mathscr N}$ is the obstruction bundle (see~\cite{fulton}, Cor.~8.1.2 and Prop.~6.1(a)). One easily identifies ${\mathscr N}$ with the restriction of $\oplus_{j=1}^{t} Q_j({t})$ to ${\mathscr Z}$. It follows that the restrictions to $U$ of the left and right hand sides of~\eqref{eccesso} are equal. The proposition follows because the dimension of $(X^{t}\setminus U)=\Delta^{t}(X)\cap (E/V)^{t}$ is equal to $(\dim X-1)$, which is strictly smaller than $\dim X$.
\end{proof}
\begin{crl}\label{crl:eccesso}
Keep notation and assumptions as above. Let $I\subset\{1,\ldots,(n+1)\}$ be non-empty and $I^c:=(\{1,\ldots,(n+1)\}\setminus I)$. Let $Q_j$ denote $Q_j(n+1)$ and let ${t}:=| I |$. Then
\begin{equation}
\scriptstyle
(f^{n+1})^{*}\Delta_I(Y;b)=
\begin{cases}
\scriptstyle \Delta_I(X;a) & \scriptstyle \text{if $| I |=1$,} \\
\scriptstyle \Delta_I(X;a)+\beta_{n+1,*}((g^{n+1})^{*}\Delta_I(V;b)\cdot c_{d({t})}\left(\bigoplus\limits_{j\in I} Q_j\right)\cdot \prod_{j\in I^c} c_{e-1}(Q_j)) & \scriptstyle\text{if $| I |> 1$.}
\end{cases}
\end{equation}
\end{crl}
\begin{proof}
For $1\le i\le(n+1)$ let $\rho_{i}\colon X^{n+1}\to X$ be the $i$-th projection. Let $J=\{j_1,\ldots,j_{t}\}$ where $1\le j_1<\ldots < j_{t}\le(n+1)$; in particular ${t}=|J|$. We let $\pi_{J}\colon X^{n+1}\to X^{t}$ be the map such that the composition of the $i$-th projection $X^{t}\to X$ with $\pi_J$ is equal to $\rho_{j_i}$. The two maps $\pi_I\colon X^{n+1}\to X^{{t}}$ and $\pi_{I^c}\colon X^{n+1}\to X^{n+1-{t}}$ define an isomorphism $\Lambda_I\colon X^{n+1}\overset{\sim}{\longrightarrow} X^{t}\times X^{n+1-{t}}$. We have
\begin{equation}
(f^{n+1})^{*}\Delta_I(Y;b)=\Lambda_I^{*}((f^{t})^{*}\Delta^{t}(Y)\times (f^{n+1-{t}})^{*}(\{\underbrace{(b,\ldots,b)}_{n+1-{t}}\})).
\end{equation}
(Here $\times$ denotes the exterior product of cycles, see 1.10 of~\cite{fulton}.) An obstruction bundle computation gives that
\begin{equation}
(f^{n+1-{t}})^{*}(\{\underbrace{(b,\ldots,b)}_{n+1-{t}}\})=\beta_{n+1-{t},*}\left(\prod_{1\le j\le (n+1-{t})} c_{e-1}(Q_j(n+1-{t}))\right).
\end{equation}
The corollary follows from the above equations and~\Ref{prp}{eccesso}.
\end{proof}
Let $I\subset\{1,\ldots,(n+1)\}$ be non-empty and let $t:=| I |$. We let $\Omega_I\in \CH_{\dim X}(E^{n+1})$ be given by
\begin{equation}\label{eccomega}
\scriptstyle
\Omega_I:=
\begin{cases}
\scriptstyle 0 & \scriptstyle \text{if $| I |=1$,} \\
\scriptstyle (g^{n+1})^{*}\Delta_I(V;b)\cdot c_{d({t})}\left(\bigoplus\limits_{j\in I} Q_j\right)\cdot \prod_{j\in I^c} c_{e-1}(Q_j) & \scriptstyle\text{if $| I |> 1$.}
\end{cases}
\end{equation}
By~\Ref{crl}{eccesso} we have $(f^{n+1})^{*}\Delta_I(Y;b)= \Delta_I(X;a)+\beta_{n+1,*}(\Omega_I)$ and hence
\begin{equation}\label{solgam}
(f^{n+1})^{*}(\Gamma^{n+1}(Y;b))=\Gamma^{n+1}(X;a)+\beta_{n+1,*}\left(\sum_{1\le| I |\le(n+1)}(-1)^{n+1-| I |}\Omega_I\right).
\end{equation}
\subsection{The proof.}
\setcounter{equation}{0}
By~\eqref{solgam} it suffices to prove that the following equality holds in $\CH_{\dim X}(E^{n+1})_{{\mathbb Q}}$:
\begin{equation}\label{altome}
\sum_{1\le| I |\le(n+1)}(-1)^{| I |}\Omega_I=0.
\end{equation}
Let $I\subset\{1,\ldots,(n+1)\}$ be of cardinality strictly greater than $(n-e)$: \Ref{crl}{delsup} allows us to express the class of $\Delta_I(V;b)$ as a linear combination of the $\Delta_J(V;b)$'s with $J\subset I$ of cardinality at most $(n-e)$. Moreover Whitney's formula allows us to write the Chern class appearing in the definition of $\Omega_I$ as a sum of products of Chern classes of the $Q_j$'s.
It follows that for each $I\subset\{1,\ldots,(n+1)\}$ we may express the class of $\Omega_I$ as a
linear combination of the classes
\begin{equation}
(g^{n+1})^{*}\Delta_J(V;b)\cdot \prod_{s=1}^{n+1}c_{k_s}(Q_s),\quad 1\le |J|\le(n-e),\quad k_1+\ldots+k_{n+1}=d(n+1)=n(e-1)-1.
\end{equation}
\begin{dfn}
$\cP_n(e)$ is the set of $(n+1)$-tuples $k_1,\ldots,k_{n+1}$ of natural numbers $0\le k_s\le (e-1)$ whose sum equals $d(n+1)$.
\end{dfn}
Summing over all $I\subset\{1,\ldots,(n+1)\}$ of a given cardinality $t$ we get the following.
\begin{clm}
Let $1\le t\le(n+1)$. There exists an integer $c_{J,K}(t)$ for each couple $(J,K)$ with $\emptyset\not=J\subset\{1,\ldots,(n+1)\}$ of cardinality at most $(n-e)$ and $K\in\cP_n(e)$ such that
\begin{equation}
\sum_{| I |=t}\Omega_I=\sum_{\substack{1\le | J |\le (n-e) \\ K\in\cP_n(e)}}c_{J,K}(t)(g^{n+1})^{*}\Delta_J(V;b)\cdot \prod_{s=1}^{n+1}c_{k_s}(Q_s).
\end{equation}
\end{clm}
It will be convenient to set $c_{J,K}(0)=0$. We will prove that
\begin{equation}\label{incredibile}
\sum_{t=0}^{n+1}(-1)^{t} c_{J,K}(t)=0.
\end{equation}
That will prove Equation~\eqref{altome} and hence also~\Ref{prp}{blowdel}. Applying~\Ref{crl}{delsup} to $(V,b)$ we get the following result.
\begin{clm}\label{clm:ritrito}
Let $I\subset\{1,\ldots,n+1\}$ be of cardinality $t\ge (n+1-e)$. Then
\begin{equation}
\Delta^{n+1}_I(V;b)\equiv \sum_{\substack{J\subset I \\ 1\le |J|\le(n-e)}}(-1)^{n-e-|J|}{t-|J| -1\choose t-n-1+e}\Delta^{n+1}_J(V;b).
\end{equation}
\end{clm}
Given $K\in\cP_n(e)$ we let
\begin{equation}
T(K):=\{1\le i\le(n+1) \mid k_i=(e-1)\}.
\end{equation}
Since the entries of $K$ sum to $d(n+1)=n(e-1)-1=(n+1)(e-1)-e$, at most $e$ of them are smaller than $(e-1)$, and hence
\begin{equation}\label{tikapineq}
(n+1-e)\le | T(K)|.
\end{equation}
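Inequality~\eqref{tikapineq} can also be checked by brute-force enumeration of $\cP_n(e)$ for small $n$ and $e$ (a sanity check, not part of the argument):

```python
from itertools import product

def P(n, e):
    """The set P_n(e): (n+1)-tuples with entries in [0, e-1] whose
    sum equals d(n+1) = n(e-1) - 1."""
    target = n * (e - 1) - 1
    return [K for K in product(range(e), repeat=n + 1) if sum(K) == target]

def T(K, e):
    """Indices where K attains the maximal value e-1."""
    return [i for i, k in enumerate(K) if k == e - 1]

# Check (n+1-e) <= |T(K)| for every K in P_n(e), small n and e.
for n in range(3, 7):
    for e in range(2, n):
        assert all(n + 1 - e <= len(T(K, e)) for K in P(n, e))
```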
\begin{prp}
Let $\emptyset\not=J\subset\{1,\ldots,(n+1)\}$ be of cardinality at most $(n-e)$, let $K\in\cP_n(e)$ and $0\le t\le(n+1)$. Then
\begin{equation}\label{cigeikap}
c_{J,K}(t)=(-1)^{n-|J|-e}{t-|J| -1\choose n-|J|-e}{|T(K)\cap J^c|\choose n+1-t}
\end{equation}
\end{prp}
\begin{proof}
Suppose first that $0\le t\le(n-e)$. Then $c_{J,K}(t)=0$ unless $|J|=t$ and $J^c\subset T(K)$: if the latter holds then $c_{J,K}(t)=1$. Assume that the right-hand side of~\eqref{cigeikap}
is non-zero: then the first binomial coefficient is non-zero and hence $t\le|J|$. Of course also the second binomial coefficient is non-zero: it follows that
\begin{equation}
(n+1-t)\le | T(K)\cap J^c| \le |J^c|=n+1-|J|.
\end{equation}
Since $t\le|J|$ it follows that $|J|=t$ and hence $| T(K)\cap J^c| = |J^c|$ i.e.~$J^c\subset T(K)$: a straightforward computation gives that under these assumptions the right-hand side of~\eqref{cigeikap} equals $1$.
It remains to prove that~\eqref{cigeikap} holds for $(n+1-e)\le t\le(n+1)$. Looking at~\eqref{eccomega} and~\Ref{clm}{ritrito} we get that
\begin{equation}\label{lucca}
c_{J,K}(t)=(-1)^{n-e-|J|}{t-|J| -1\choose t-n-1+e}|\{I\subset\{1,\ldots,(n+1)\} \mid I^c\subset(T(K)\cap J^c), \quad |I|=t\}|.
\end{equation}
Since the right-hand side of~\eqref{lucca} is equal to the right-hand side of~\eqref{cigeikap} this finishes the proof.
\end{proof}
Let
\begin{equation}
p(x):={n-|J| -x\choose n-|J|-e}.
\end{equation}
Then $\deg p<|T(K)\cap J^c|$ because $\deg p=(n-|J|-e)$ and because~\eqref{tikapineq} gives that
\begin{equation}
|T(K)\cap J^c|\ge (n+1-e)+(n+1-|J|)-(n+1)=n-|J|-e+1.
\end{equation}
Thus~\eqref{combcomb} and~\eqref{cigeikap} give that
\begin{equation}
\scriptstyle
0=\sum_{s=0}^{n+1}(-1)^s p(s) {|T(K)\cap J^c|\choose s} =(-1)^{n+1}\sum_{t=0}^{n+1}(-1)^t {t-|J| -1\choose n-|J|-e}{|T(K)\cap J^c|\choose n+1-t}=
(-1)^{1-e-|J|}\sum_{t=0}^{n+1}(-1)^t c_{J,K}(t).
\end{equation}
This finishes the proof of~\Ref{prp}{blowdel}.
\qed
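The final alternating-sum identity~\eqref{incredibile} can be confirmed numerically from the closed formula~\eqref{cigeikap} (a sanity check, not part of the argument); the binomial with possibly negative top is computed with the polynomial convention used throughout.

```python
from math import comb, factorial

def gbinom(a, k):
    """Binomial coefficient with arbitrary integer top a and k >= 0
    (polynomial convention, so the top may be negative)."""
    num = 1
    for i in range(k):
        num *= a - i
    return num // factorial(k)  # product of k consecutive ints is divisible by k!

def c_JK(t, n, e, j, T):
    """Right-hand side of the closed formula for c_{J,K}(t), where
    j = |J| and T is the cardinality of T(K) intersected with J^c."""
    return (-1) ** (n - j - e) * gbinom(t - j - 1, n - j - e) * comb(T, n + 1 - t)

# The alternating sum over t vanishes; check small parameters, with T
# running over the range allowed by the inequalities above.
for n in range(3, 8):
    for e in range(2, n):
        for j in range(1, n - e + 1):
            for T in range(n - j - e + 1, n + 2 - j):
                assert sum((-1) ** t * c_JK(t, n, e, j, T)
                           for t in range(n + 2)) == 0
```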
\subsection{Application to Hilbert schemes of $K3$'s}
Let $S$ be a complex $K3$ surface. By Beauville and Voisin~\cite{beauvoisin} there exists $c\in S$ such that $\Gamma^3(S;c)\equiv 0$. We let $S^{[n]}$ be the Hilbert scheme parametrizing length-$n$ subschemes of $S$; Beauville~\cite{beau} proved that $S^{[n]}$ is a hyperk\"ahler variety.
\begin{prp}\label{prp:diaghilbk3}
Keep notation as above and assume that $n=2,3$. Let $a_n\in S^{[n]}$ represent a scheme supported at $c$. Then $\Gamma^{2n+1}(S^{[n]};a_n)\equiv 0$.
\end{prp}
\begin{proof}
First assume that $n=2$. Let $\pi_1\colon X\to S\times S$ be the blow-up of the diagonal $\Delta$ and $\rho_2\colon X\to S^{(2)}$ the composition of $\pi_1$ and the quotient map $S\times S\to S^{(2)}$. There is a degree-$2$ map $\phi_2\colon X\to S^{[2]}$ fitting into a commutative diagram
\begin{equation}\label{equivoci}
\xymatrix{ X \ar^{\phi_2}[r] \ar^{\rho_2}[dr] & S^{[2]} \ar^{\gamma_2}[d] \\
& S^{(2)} }
\end{equation}
where $\gamma_2([Z])=\sum_{p\in S}\ell({\mathscr O}_{Z,p})\,p$ is the Hilbert-Chow morphism. Let $x\in X$ be such that $\phi_2(x)=a_2$; by~\Ref{subsec}{gradofin} it suffices to prove that $\Gamma^5(X;x)\equiv 0$. By commutativity of~\eqref{equivoci} we have $\pi_1(x)=(c,c)$. Now $\Gamma^5(S\times S;(c,c))\equiv 0$ by~\Ref{prp}{protokunn}, and since $\cod(\Delta,S\times S)=2$ it follows from~\Ref{prp}{blowdel} that $\Gamma^5(X;x)\equiv 0$. Next assume that $n=3$. Let $\pi_2\colon Y\to S^{[2]}\times S$ be the blow-up with center the tautological subscheme ${\mathscr Z}_2\subset S^{[2]}\times S$ and $\rho_3\colon Y\to S^{(3)}$ the composition of $\pi_2$ and the natural map $S^{[2]}\times S\to S^{(3)}$. There is a degree-$3$ map $\phi_3\colon Y\to S^{[3]}$ fitting into a commutative diagram
\begin{equation}\label{commedia}
\xymatrix{ Y \ar^{\phi_3}[r] \ar^{\rho_3}[dr] & S^{[3]} \ar^{\gamma_3}[d] \\
& S^{(3)} }
\end{equation}
where $\gamma_3$ is the Hilbert-Chow morphism. (See for example Proposition~2.2 of~\cite{ellstrom}.) On the other hand let $p_1\colon S\times S\to S$ be projection to the first factor; the map
\begin{equation}\label{nicolapiazza}
(\phi_2,p_1\circ \pi_1)\colon X\to S^{[2]}\times S
\end{equation}
is an isomorphism onto ${\mathscr Z}_2$. Let $y\in Y$ be such that $\phi_3(y)=a_3$; by~\Ref{subsec}{gradofin} it suffices to prove that $\Gamma^7(Y;y)\equiv 0$. Notice that $\pi_2(y)=(a_2,c)$ where $a_2\in S^{[2]}$ is supported at $c$. By the case $n=2$ (that we just proved) and~\Ref{prp}{protokunn} we have $\Gamma^7(S^{[2]}\times S;(a_2,c))\equiv 0$. Let $x\in X$ be such that $\phi_2(x)=a_2$. In the proof for the case $n=2$ we showed that $\Gamma^5(X;x)\equiv 0$; since~\eqref{nicolapiazza} is an isomorphism it follows that $\Gamma^5({\mathscr Z}_2;(a_2,c))\equiv 0$. Since $\Gamma^7(S^{[2]}\times S;(a_2,c))\equiv 0$ and ${\mathscr Z}_2$ is smooth of codimension $2$, we get
$\Gamma^7(Y;y)\equiv 0$ by~\Ref{prp}{blowdel}.
\end{proof}
Let ${\mathscr Z}_n\subset S^{[n]}\times S$ be the tautological subscheme. The blow-up of $S^{[n]}\times S$ with center ${\mathscr Z}_n$ has a natural regular map of finite (non-zero) degree to $S^{[n+1]}$ and in turn ${\mathscr Z}_n$ may be described starting from the tautological subscheme ${\mathscr Z}_{n-1}\subset S^{[n-1]}\times S$. Thus one may hope to prove by induction on $n$ that $\Gamma^{2n+1}(S^{[n]};a)\equiv 0$ for any $n$: the problem is that starting with ${\mathscr Z}_{3}$ the tautological subscheme is singular.
\section{Double covers}\label{sec:rivdop}
In the present section we will assume that $X$ is a projective variety over a field ${\mathbb K}$ and that $\iota\in\Aut(X)$ is a (non-trivial) involution. We let $Y:=X/\langle\iota\rangle$ and
$f\colon X\to Y$ be the quotient map.
We assume that there exists $a\in X({\mathbb K})$ which is fixed by $\iota$ and we let $b:=f(a)$.
\begin{cnj}\label{cnj:raddoppia}
Keep hypotheses and notation as above and
suppose that $\Gamma^{m}(Y;b)\equiv 0$. Then $\Gamma^{2m-1}(X;a)\equiv 0$.
\end{cnj}
The above conjecture was proved for $m=2$ by Gross and Schoen, see Prop.~4.8 of~\cite{groscho}. We will propose a strategy of proof for~\Ref{cnj}{raddoppia} and show that it succeeds for $m=2,3$. Of course the proof for $m=2$ is that of Gross and Schoen (with the symmetric cube of the curve replaced by the cartesian cube).
\subsection{A modest proposal}\label{subsec:insintesi}
\setcounter{equation}{0}
There is a well-defined pull-back homomorphism
\begin{equation}
(f^q)^{*}\colon Z_{*}(Y^q)_{{\mathbb Q}}\to Z_{*}(X^q)_{{\mathbb Q}}
\end{equation}
compatible with rational equivalence (see Ex.~1.7.6 of~\cite{fulton}): thus we have an induced homomorphism $(f^q)^{*}\colon\CH_{*}(Y^q)_{{\mathbb Q}}\to \CH_{*}(X^q)_{{\mathbb Q}}$. Let $n:=\dim X$ and let $\Xi_m\in Z_{n}(X^m)_{{\mathbb Q}}$ be the cycle defined by
\begin{equation}\label{dramper}
\Xi_m:=(f^m)^{*}\Gamma^m(Y;b).
\end{equation}
We will show that $\Xi_m$ is a linear combination of cycles of the type
\begin{equation}\label{tipi}
\{(x,\ldots,\iota(x),\ldots,x,\ldots,x,a,\ldots,\iota(x),\ldots,a,\ldots) \mid x\in X\}.
\end{equation}
Notice that the $\Delta_I(X;a)$'s are of this type.
Consider the inclusions of $X^m$ in $X^{2m-1}$ which map $(x_1,\ldots,x_{m})$ to $(x_1,\ldots,x_{m},\nu(1),\ldots,\nu(m-1))$ where $\nu\colon\{1,\ldots,(m-1)\}\to\{a,x_1,\ldots,x_{m},\iota(x_1),\ldots,\iota(x_{m})\}$ is an arbitrary list.
Let $\Phi_{\nu}(\Xi_m)$ be the symmetrized image of $\Xi_m$ in $Z_{n}(X^{2m-1})$ for the inclusion determined by $\nu$: it is a linear combination of cycles~\eqref{tipi}.
By hypothesis $\Xi_m\equiv 0$ and hence any linear combination of the cycles $\Phi_{\nu}(\Xi_m)$ is rationally equivalent to $0$. One gets the proof if a suitable linear combination of the $\Phi_{\nu}(\Xi_m)$'s is a linear combination of the $\Delta_I(X;a)$'s with the appropriate coefficients (so that it is equal to a non-zero multiple of $\Gamma^{2m-1}(X;a)$). We will carry out the proof for $m=2,3$.
\subsection{Preliminaries}\label{subsec:giocomega}
\setcounter{equation}{0}
Since the involution of $X$ is non-trivial, the dimension of $X$ is strictly positive, i.e.~$n>0$. Let $\mu\colon\{1,\ldots,q\}\to\{a,x,\iota(x)\}$. If $\mu$ is \emph{not} the sequence $\mu(1)=\ldots=\mu(q)=a$ we let
\begin{equation}
\Omega(\mu(1),\ldots,\mu(q)):=\{(x_1,\ldots,x_{q})\in X^{q} \mid x_i=\mu(i),\quad x\in X\},
\end{equation}
and we let $\Omega(a,\ldots,a):=0$.
Thus $\Omega(\mu(1),\ldots,\mu(q))$ is an $n$-cycle on $X^{q}$.
For example $\Omega(x,\ldots,x)$ is the small diagonal in $X^{q}$. Let ${\mathscr S}_{q}$ be the symmetric group on $\{ 1,\ldots,q\}$: of course it acts on $X^{q}$. For $r+s+t=q$ let
\begin{equation}
\overline{\Omega}(r,s,t):=\sum_{\sigma\in{\mathscr S}_{q}}\sigma(\Omega(\underbrace{a,\ldots,a}_{r},\underbrace{x,\ldots,x}_{s},
\underbrace{\iota(x),\ldots,\iota(x)}_{t})).
\end{equation}
Thus $\overline{\Omega}(r,s,t)$ is an $n$-cycle on $X^q$ invariant under the action of ${\mathscr S}_{q}$. Notice that
\begin{equation}\label{esseti}
\overline{\Omega}(r,s,t)=\overline{\Omega}(r,t,s).
\end{equation}
With this notation
\begin{equation}\label{altdiag}
\Gamma^q(X;a)=\sum_{\substack{ 0\le r,s \\ r+s=q}} \frac{(-1)^r}{r! s!}\overline{\Omega}(r,s,0).
\end{equation}
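For concreteness, the instance $q=3$ of~\eqref{altdiag} reads
\begin{equation*}
\Gamma^3(X;a)=\frac{1}{6}\overline{\Omega}(0,3,0)-\frac{1}{2}\overline{\Omega}(1,2,0)+\frac{1}{2}\overline{\Omega}(2,1,0),
\end{equation*}
the term with $(r,s)=(3,0)$ vanishing because $\Omega(a,\ldots,a):=0$.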
Let $\Xi_m$ be the cycle on $X^m$ given by~\eqref{dramper}. A straightforward computation gives that
\begin{equation}\label{tirodiag}
2\Xi_m=\sum_{\substack{ 0\le r,s,t \\ r+s+t=m}}\frac{(-2)^r}{r! s! t!}\overline{\Omega}(r,s,t).
\end{equation}
(Equality~\eqref{esseti} is the reason for the factor of $2$ in front of $\Xi_m$.)
For
\begin{equation*}
\nu\colon\{1,\ldots,(m-1)\}\to\{a,x_1,\ldots,x_{m},\iota(x_1),\ldots,\iota(x_{m})\}
\end{equation*}
we let
\begin{equation}
\begin{matrix}
X^{m}& \overset{j_{\nu}}\longrightarrow & X^{2m-1} \\
(x_1,\ldots,x_{m}) & \mapsto & (x_1,\ldots,x_{m},\nu(1),\ldots,\nu(m-1))
\end{matrix}
\end{equation}
and $\Phi_{\nu}\colon Z_n(X^{m})\to Z_n(X^{2m-1})$ be the homomorphism
\begin{equation}
\Phi_{\nu}(\gamma):=\sum_{\sigma\in{\mathscr S}_{2m-1}}\sigma_{*}(j_{\nu,*}(\gamma)).
\end{equation}
Notice that $\Phi_{\nu}$ does not change if we reorder the sequence $\nu$.
\subsection{The case $m=2$}\label{subsec:emmedue}
\setcounter{equation}{0}
A straightforward computation (recall~\eqref{esseti}) gives that
\begin{eqnarray}
\Phi_a(\Xi_2) & = & \overline{\Omega}(1,2,0)-4\overline{\Omega}(2,1,0)+\overline{\Omega}(1,1,1), \\
\Phi_{x_1}(\Xi_2) & = & \overline{\Omega}(0,3,0)-2\overline{\Omega}(1,2,0)-2\overline{\Omega}(2,1,0)+\overline{\Omega}(0,2,1), \\
\Phi_{\iota(x_1)}(\Xi_2) & = & -2\overline{\Omega}(2,1,0)-2\overline{\Omega}(1,1,1)+2\overline{\Omega}(0,2,1).
\end{eqnarray}
Thus
\begin{equation}
0\equiv -2\Phi_a(\Xi_2)+2\Phi_{x_1}(\Xi_2)-\Phi_{\iota(x_1)}(\Xi_2)=2\overline{\Omega}(0,3,0)-6\overline{\Omega}(1,2,0)+6\overline{\Omega}(2,1,0)=12\Gamma^3(X;a).
\end{equation}
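The linear combination above can be checked mechanically by treating each class $\overline{\Omega}(r,s,t)$ as a formal basis symbol; a minimal sketch in Python with sympy (the symbol names are ours):

```python
from sympy import symbols, expand

# Formal basis symbols for the classes \bar\Omega(r,s,t) occurring for m = 2
O120, O210, O111, O030, O021 = symbols('O120 O210 O111 O030 O021')

# Coefficients of Phi_a(Xi_2), Phi_{x_1}(Xi_2), Phi_{iota(x_1)}(Xi_2) as in the text
Phi_a  = O120 - 4*O210 + O111
Phi_x1 = O030 - 2*O120 - 2*O210 + O021
Phi_i1 = -2*O210 - 2*O111 + 2*O021

combo = expand(-2*Phi_a + 2*Phi_x1 - Phi_i1)
# The Omega(1,1,1) and Omega(0,2,1) terms cancel
assert expand(combo - (2*O030 - 6*O120 + 6*O210)) == 0
```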
\subsection{The case $m=3$}\label{subsec:emmetre}
\setcounter{equation}{0}
For every $\nu\colon\{1,2\}\to \{a,x_1,x_2,x_3,\iota(x_1),\iota(x_2),\iota(x_3)\}$ the cycle $\Phi_\nu(\Xi_3)$ is equal to the linear combination of the classes listed in the first column of Table~\ref{coordinate} with coefficients the numbers in the corresponding column of Table~\ref{coordinate}. For such a $\nu$ let $i(\nu)$ be its position in the first row of Table~\ref{coordinate}: thus $i((a,a))=1$,..., $i((\iota(x_1),\iota(x_2)))=9$. Table~\ref{coordinate} allows us to rewrite
\begin{equation}\label{califano}
\sum_{\nu} \lambda_{i(\nu)}\Phi_{\nu}(\Xi_3)
\end{equation}
as an integral linear combination of the classes listed in the first column of Table~\ref{coordinate}, with coefficients $F_1,\ldots,F_9$ which are linear functions of $\lambda_1,\ldots,\lambda_9$. Imposing that $0=F_1=\ldots=F_6$ and solving the corresponding linear system, we get that
%
\begin{eqnarray}
\lambda_1 & = & \frac{1}{3}(-8\lambda_6-2\lambda_7-8\lambda_8-8\lambda_9),\\
\lambda_2 & = & \frac{1}{3}(14\lambda_6+8\lambda_7+14\lambda_8+20\lambda_9),\\
\lambda_3 & = & \frac{1}{3}(-6\lambda_6-6\lambda_7-6\lambda_8-12\lambda_9),\\
\lambda_4 & = & \frac{1}{3}(\lambda_6-2\lambda_7+\lambda_8+4\lambda_9),\\
\lambda_5 & = & \frac{1}{3}(-5\lambda_6-2\lambda_7-5\lambda_8-8\lambda_9).
\end{eqnarray}
For such a choice of coefficients $\lambda_1,\ldots,\lambda_9$ we have that
\begin{equation}\label{pasqua}
0\equiv \sum\limits_{\nu} \lambda_{i(\nu)}\Phi_{\nu}(\Xi_3)=
-\frac{4}{3}(\lambda_6+\lambda_7+\lambda_8+\lambda_9)(\overline{\Omega}(0,5,0)-5\overline{\Omega}(1,4,0)+10\overline{\Omega}(2,3,0)
-10\overline{\Omega}(3,2,0)+5\overline{\Omega}(4,1,0)).
\end{equation}
Choosing integers $\lambda_6,\ldots,\lambda_9$ such that $(\lambda_6+\lambda_7+\lambda_8+\lambda_9)=-3$ we get that
\begin{equation}\label{eccoci}
0\equiv \sum\limits_{\nu} \lambda_{i(\nu)}\Phi_{\nu}(\Xi_3)=4\cdot 5!\,\Gamma^5(X;a).
\end{equation}
This concludes the proof of~\Ref{cnj}{raddoppia} for $m=3$.
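As a sanity check, the linear system $0=F_1=\ldots=F_6$ can be solved symbolically from the first six rows of Table~\ref{coordinate}; a minimal sketch with sympy, with the matrix entries transcribed from the table:

```python
from sympy import Matrix, Rational, symbols, solve, simplify

lam = symbols('lam1:10')  # lambda_1, ..., lambda_9

# First six rows of the coordinate table: coefficients of the classes
# Omega(3,1,1), Omega(2,2,1), Omega(1,3,1), Omega(1,2,2), Omega(0,4,1), Omega(0,3,2)
M = Matrix([
    [-6, -2,  2, -2,  0, -2,  4, -2,  8],
    [ 3, -4, -8,  0, -4,  4, -6,  4, -8],
    [ 0,  2,  2, -4,  0, -4, -4, -4,  0],
    [ 0,  1,  2,  0, -2, -4,  0, -4, -4],
    [ 0,  0,  0,  2,  1,  1,  2,  1,  0],
    [ 0,  0,  0,  1,  2,  3,  2,  3,  4],
])

# Impose F_1 = ... = F_6 = 0 and solve for lambda_1, ..., lambda_5
sol = solve(list(M * Matrix(lam)), lam[:5])

# Compare with the closed forms in the text
l6, l7, l8, l9 = lam[5:]
assert simplify(sol[lam[0]] - Rational(1, 3)*(-8*l6 - 2*l7 - 8*l8 - 8*l9)) == 0
assert simplify(sol[lam[1]] - Rational(1, 3)*(14*l6 + 8*l7 + 14*l8 + 20*l9)) == 0
assert simplify(sol[lam[4]] - Rational(1, 3)*(-5*l6 - 2*l7 - 5*l8 - 8*l9)) == 0
```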
%
\begin{table}[tbp]\tiny
\caption{Coordinates of $\Phi_{\nu}(\Xi)$ for $\nu=(a,a),\ldots,(\iota(x_1),\iota(x_2))$.}\label{coordinate}
\vskip 1mm
\centering
\renewcommand{\arraystretch}{1.60}
\begin{tabular}{rrrrrrrrrr}
\toprule
& $ (a,a)$ & $ (a,x_1)$ & $ (a,\iota(x_1))$ & $(x_1,x_1)$ & $(x_1,x_2)$ &
$(x_1,\iota(x_1))$ & $(x_1,\iota(x_2))$ & $ (\iota(x_1),\iota(x_1))$ & $ (\iota(x_1),\iota(x_2))$ \\
\midrule
$\overline{\Omega}(3,1,1)$ & -6 & -2 & 2 & -2 & 0 & -2 & 4 & -2 & 8 \\
\midrule
$\overline{\Omega}(2,2,1)$ & 3 & -4 & -8 & 0 & -4 & 4 & -6 & 4 & -8 \\
\midrule
$\overline{\Omega}(1,3,1)$ & 0 & 2 & 2 & -4 & 0 & -4 & -4 & -4 & 0 \\
\midrule
$\overline{\Omega}(1,2,2)$ & 0 & 1 & 2 & 0 &-2 &-4 & 0 & -4 & -4 \\
\midrule
$\overline{\Omega}(0,4,1)$ & 0 & 0 & 0 & 2 & 1 &1 & 2 & 1 & 0 \\
\midrule
$\overline{\Omega}(0,3,2)$ & 0 & 0 & 0 & 1 & 2 &3 & 2 & 3 & 4 \\
\toprule
$\overline{\Omega}(0,5,0)$ & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
\midrule
$\overline{\Omega}(1,4,0)$ & 0 & 1 & 0 & -4 & -2 & 0 & 0 & 0 & 0 \\
\midrule
$\overline{\Omega}(2,3,0)$ & 1 & -4 & 0 & 4 & -4 & 0 & -2 & 0 & 0 \\
\midrule
$\overline{\Omega}(3,2,0)$ & -6 & 2 & -2 & -2 & 8 & -2 & 4 & -2 & 0 \\
\midrule
$\overline{\Omega}(4,1,0)$ & 12 & 8 & 8 & 8 & 4 & 8 & 4 & 8 & 4 \\
\bottomrule
%
\end{tabular}
\end{table}
| {
"timestamp": "2014-08-29T02:09:33",
"yymm": "1311",
"arxiv_id": "1311.0757",
"language": "en",
"url": "https://arxiv.org/abs/1311.0757",
"abstract": "Beauville and Voisin proved that the third modified diagonal of a complex K3 surface X represents a torsion class in the Chow group of X^3. Motivated by this result and by conjectures of Beauville and Voisin on the Chow ring of hyperkaehler varieties we prove some results on modified diagonals of projective varieties and we formulate a conjecture.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Computations with modified diagonals",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806478450307,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7077274090711076
} |
https://arxiv.org/abs/2112.12295 | From Finite Vector Field Data to Combinatorial Dynamical Systems in the Sense of Forman | The main goal is to construct a combinatorial dynamical system in the sense of Forman from finite vector field data. We use a linear minimization problem with binary variables and linear equality constraints. The solution of the minimization problem induces an admissible matching for the combinatorial dynamical system. There are three main steps in the method: construct a simplicial complex, compute a vector for each simplex, and solve the minimization problem to obtain the induced matching. We demonstrate the effectiveness of the method by testing it on the Lotka-Volterra model and the Lorenz attractor model. We are able to retrieve the cyclic behaviour in the Lotka-Volterra model and the chaotic behaviour of the Lorenz attractor. Two extensions of the algorithm are shown. We use the barycentric subdivision to obtain a better resolution, and we add conditions to the minimization problem to obtain a solution which induces a gradient matching. | \section{Introduction}
The concept of combinatorial vector fields, introduced by Robin Forman \cite{RobForMorse} \cite{RobCVF} \cite{RobForUser}, is a useful tool to discretize continuous problems in mathematics \cite{thMorseClas}, in imaging \cite{arRobJohShe}, and in the computation of homology \cite{arDisMorTheAlgo}. We are interested in combinatorial dynamical systems because they give a qualitative approach to the study of dynamical systems: we study the global dynamics by approximating the underlying dynamical system with a combinatorial vector field. Further developments in \cite{towFormalTie} show that classical dynamical systems and combinatorial dynamical systems are similar at the level of the Conley index. In recent years, a generalization of combinatorial dynamical systems, the combinatorial multivector field, has been proposed in \cite{arMultiVector}. A reason for this generalization is that Forman's original definition of a combinatorial vector field is hard to implement. We show that it is possible to construct a combinatorial dynamical system in the sense of Forman from finite vector field data.
There are algorithms to construct a multivector field \cite{arPerHomMD}, and a combinatorial gradient vector field in the sense of Forman from a simplicial complex with values on the vertices in $ \mathbb{R} $ \cite{arRobJohShe} \cite{arKinKnuMra} and in $ \mathbb{R}^d $ \cite{multiDimDisMor}. But there is no algorithm to construct a Forman combinatorial dynamical system with cyclic behaviour. There are multiple results that can be applied to a combinatorial dynamical system: for example, one can construct a semi-flow \cite{semiFlowMMTW}, a cell decomposition \cite{towFormalTie} and a Morse decomposition \cite{linkCombClass2} for combinatorial dynamical systems.
By means of an example, it has been argued in \cite{arMultiVector} that combinatorial dynamical systems in the sense of Forman might not be best suited for applications; the article \cite{arMultiVector} generalizes them to combinatorial multivector fields. For more information on combinatorial vector fields, we refer the reader to \cite{arMulVecConMorFor}. In this paper, we argue that we can obtain a combinatorial dynamical system in the sense of Forman from data, and that our method finds an approximation of the global behaviour of the dynamical system.
Let us take the same example as in Section 3.4 of \cite{arMultiVector} and apply our method to it. We have the following equations:
\begin{equation}\label{eqEx1}
\begin{cases}
\frac{dx}{dt} = -y + x(x^2 + y^2 - 4)(x^2+y^2-1) \\
\frac{dy}{dt} = x + y(x^2 + y^2 - 4)(x^2+y^2-1)
\end{cases}.
\end{equation}
The dynamical system (\ref{eqEx1}) has a repulsive stationary point at $ (0,0) $. It also has an attracting periodic orbit of radius $1$ centered at $(0, 0)$ and a repulsive periodic orbit of radius $2$ centered at $(0, 0)$. For the dataset, we take the points $ X = \{ (0.22 + 0.44i, 0.22 + 0.44j) \mid i=-8, -7, -6, \ldots, 6, 7 \text{ and } j = -8, -7, -6, \ldots, 6, 7 \} $ and construct the cubical complex of this dataset; the side length of a square is $ 0.44 $. Applying our algorithm, we obtain Figure \ref{figIntroEx}. In the middle we obtain a critical square that represents a repulsive behaviour. The yellow arrows are all part of an attracting periodic orbit, the purple arrows are all part of a repulsive periodic orbit, and the blue arrows represent the gradient behaviour. We obtain the expected results: a repulsive stationary point, an attracting periodic orbit and a repulsive periodic orbit.
\begin{figure}
\center
\includegraphics[height=10cm, width=10cm, scale=1.00, angle=0 ]{Figure/multivecCe.png}
\caption{The combinatorial dynamical system obtained by applying our algorithm to the equations (\ref{eqEx1}) with the parameter $ \alpha = 0.90 $.}
\label{figIntroEx}
\end{figure}
The main result is the construction of a combinatorial dynamical system from finite vector field data. The data consist of a set of points $ X $ and the set $ \dot{X} $ of vectors associated to these points. Let $ X = \{x_1, x_2, \ldots, x_N\}$ with $ x_i \in \mathbb{R}^d $, and let $ \dot{X} = \{ \dot{x}_1, \dot{x}_2, \ldots, \dot{x}_N \} $, where each $ \dot{x}_i \in \mathbb{R}^d $ is the vector associated to the point $ x_i$. The algorithm proceeds in three steps. First, we construct a simplicial complex; for the purpose of this paper, we use a method based on the Delaunay complex and the Dowker complex, but we can also use a cubical complex. Next, we assign a vector to each simplex in the simplicial complex; the assignment differs depending on whether we have a Delaunay complex or a Dowker complex. Finally, we match each $n$-simplex with a $(n+1)$-simplex or with itself so as to satisfy the definition of a combinatorial dynamical system. We use a linear minimization with binary variables and linear equality constraints. Let $K$ be a simplicial complex, $N$ the number of simplices in $K$ and $ x_i, x_j \in K$. Let $ A \in M_N(\{0,1 \}) $ be a matrix with binary entries. The matrix $ A $ is called the matching matrix and induces a matching: if $ a_{ij} = 1 $, then we match the simplex $ x_i $ to $ x_j $; if $ a_{ij} = 0$, then we do not. To each $ a_{ij} $ we associate a cost $ c_{ij} $: a high cost if the matching is not admissible or is bad, and a low cost if the matching is good. If $ i = j $, then we set the cost to a fixed parameter $ \alpha $. The constraints guarantee that each simplex is matched exactly once. We obtain the following minimization problem:
\begin{equation}\label{introEq}
\begin{aligned}
& \underset{A \in M_N(\{0,1\}) }{\text{minimize}}
& & f(A) := \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} a_{ij} c_{ij} \\
& \text{subject to}
& & \sum_{i=0}^{N-1} a_{ik} + \sum_{j=0}^{N-1} a_{kj} - a_{kk} = 1 , \quad k =0, 1, \ldots, N-1
\end{aligned}
\end{equation}
This is an integer linear program (ILP). We obtain our main result.
\begin{theorem}
If $ A$ is a global minimum of the ILP above, then $ A $ induces a matching that satisfies the definition of a combinatorial dynamical system.
\end{theorem}
The article is organized as follows. In Section 2, we recall basic definitions related to simplicial complexes and combinatorial dynamical systems. In Section 3, we show how to construct a simplicial complex by using the Delaunay complex or the Dowker complex. In Section 4, we assign a vector to each simplex; the assignment differs depending on whether we use a Delaunay complex or a Dowker complex. In Section 5, we define the minimization problem and show that its global minimum induces a matching that satisfies the definition of a combinatorial dynamical system. In Section 6, we apply our method to the Lotka-Volterra model and the Lorenz attractor model. In Section 7, we show two extensions of the algorithm: we can apply a barycentric subdivision to the simplicial complex before solving the optimization problem, and we can obtain a combinatorial dynamical system which is a gradient field, either by choosing the parameter $\alpha$ so that the solution of the minimization problem is gradient, or by adding constraints to the minimization problem that remove cycles from the solution.
\section{Preliminaries}
In this section, we discuss simplicial complexes and combinatorial dynamical systems in the sense of Forman. For more information, we refer the reader to the book \cite{AlgTopo} for simplicial complexes and to the papers \cite{towFormalTie} \cite{RobCVF} for combinatorial dynamical systems.
An abstract simplicial complex is a set $K$ of finite non-empty sets such that if $ A \in K $, then every non-empty subset of $ A $ is also in $K$. We also use geometric simplices, defined as follows. A geometric $ n $-simplex is the convex hull of a geometrically independent set $ \{ v_0, v_1, \ldots, v_n \} \subset \mathbb{R}^d $, that is, the set of points $ x = \sum_{i=0}^{n} t_i v_i $ with $ \sum_{i=0}^n t_i = 1 $ and $ t_i \geq 0 $; the $t_i$ are called barycentric coordinates. We denote by $ [ v_0, v_1, \ldots, v_n ] $ the $n$-simplex spanned by the vertices $ v_0, v_1, \ldots, v_n $. Any simplex spanned by a subset of $ \{ v_0, v_1, \ldots, v_n \} $ is called a face of $ \sigma $; we write $ \tau \leq \sigma $. If $ \tau \leq \sigma $ and $ \tau \neq \sigma $, then we say that $ \tau $ is a proper face of $ \sigma $ and write $ \tau < \sigma $. A simplicial complex $K$ is a collection of simplices such that for all $ \sigma \in K $, if $ \tau \leq \sigma $ then $ \tau \in K $, and the intersection $ \sigma_1 \cap \sigma_2 $ of two simplices of $K$ is either empty or a face of both $ \sigma_1 $ and $ \sigma_2 $. We denote by $ K_n $ the set of all $n$-simplices in $K$. The closure of $ \sigma $, denoted $ \Cl \sigma$, is the union of all faces of $ \sigma $; the boundary of $ \sigma $, denoted $ \Bd \sigma $, is the union of all proper faces of $ \sigma $.
A relation $ f \subset X \times Y $ is a partial map if $ (x, y), (x, y') \in f $ implies $ y = y' $. We write $ f: X \nrightarrow Y $. We define the domain and the image of $ f $ as follows:
\begin{itemize}
\item $ \Dom f := \{ x \in X \mid \exists y \in Y, (x, y) \in f \} $;
\item $ \Ima f := \{ y \in Y \mid \exists x \in X, (x, y) \in f \} $.
\end{itemize}
A partial map $f$ is injective if, for $ x, x' \in \Dom f $, $ f(x) = f(x') $ implies $ x = x' $. We can now define a combinatorial dynamical system. We use the same definition as \cite{towFormalTie}.
\begin{definition}\label{dfnSDC}
Let $ K $ be a simplicial complex. A partial injective map $ \mathcal{V} : K \nrightarrow K $ is a combinatorial dynamical system if it satisfies these conditions:
\begin{itemize}
\item For all $ \sigma \in \Dom \mathcal{V} $, either $ \mathcal{V}(\sigma) = \sigma $ or $ \mathcal{V}(\sigma) = \tau $ with $ \tau > \sigma $ and $ \dim \sigma + 1 = \dim \tau $.
\item $ \Dom \mathcal{V} \cup \Ima \mathcal{V} = K $.
\item $ \Dom \mathcal{V} \cap \Ima \mathcal{V} = \Crit \mathcal{V} $, where $ \Crit \mathcal{V} = \{ \sigma \in K \mid \mathcal{V}(\sigma) = \sigma \} $ is the set of critical simplices.
\end{itemize}
\end{definition}
This definition is equivalent to the definition of a combinatorial dynamical system in the sense of Forman \cite{RobCVF} \cite{RobForUser}, but it is phrased with a partial map. The first condition defines the combinatorial vectors of $ \mathcal{V}$. The second condition says that every simplex belongs to a matching. The third condition says that only critical simplices are in both the image and the domain of $ \mathcal{V} $.
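For a finite complex, the three conditions of Definition \ref{dfnSDC} are easy to check by machine; a minimal sketch in Python, where a partial map is stored as a dictionary and simplices as sorted tuples of vertices (the helper name is ours):

```python
def is_cds(V, K):
    """Check the three conditions of a combinatorial dynamical system
    for a partial map V: K -/-> K given as a dict simplex -> simplex."""
    dom, ima = set(V), set(V.values())
    crit = {s for s in V if V[s] == s}
    injective = len(ima) == len(V)
    # condition 1: V(s) = s, or V(s) is a coface of s of dimension dim(s) + 1
    arrows_ok = all(t == s or (set(s) < set(t) and len(t) == len(s) + 1)
                    for s, t in V.items())
    return (injective and arrows_ok
            and dom | ima == set(K)      # condition 2
            and dom & ima == crit)       # condition 3

K = {('a',), ('b',), ('a', 'b')}
assert is_cds({('a',): ('a', 'b'), ('b',): ('b',)}, K)           # valid
assert not is_cds({('a',): ('a', 'b'), ('b',): ('a', 'b')}, K)   # not injective
```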
To visualize a combinatorial dynamical system, we draw a vector from the barycenter of $ \sigma $ to the barycenter of $\mathcal{V}(\sigma) $ for each $ \sigma \in \Dom \mathcal{V} \setminus \Crit \mathcal{V} $. If $ \sigma \in \Crit \mathcal{V} $, then we color the critical simplex in red. Figure \ref{figSdcEx} shows some examples of combinatorial dynamical systems, and Figure \ref{figSdcCe} some counter-examples.
\begin{definition}\label{dfnMatchAdmi}
Let $ \sigma, \tau \in K $ with $ \sigma \neq \tau $. We say that the pair $(\sigma, \tau)$ is an admissible matching if $ \sigma < \tau $ and $ \dim \sigma + 1 = \dim \tau $.
\end{definition}
This definition is useful in our minimization problem.
\begin{figure}
\center
\subfigure[ $ \mathcal{V} $ is not injective. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/ce1SDC.png}\label{subFigSdcCe1}
}
\,
\subfigure[ The third condition is not satisfied. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/ce2SDC.png}\label{subFigSdcCe2}
}
\qquad \qquad \qquad
\subfigure[ The first condition is not satisfied. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/ce3SDC.png}\label{subFigSdcCe3}
}
\,
\subfigure[ The second condition is not satisfied. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/ce4SDC.png}
\label{subFigSdcCe4}
}
\qquad
\subfigure[ $ \mathcal{V} $ is not a partial map. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/ce5SDC.png}
\label{subFigSdcCe5}
}
\caption{Counter-examples of combinatorial dynamical systems.}
\label{figSdcCe}
\end{figure}
\begin{figure}
\center
\subfigure[ A combinatorial dynamical system in $ \mathbb{R}^2 $.]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/ex1_CDS.png}\label{subFigSdcEx1}
}
\,
\subfigure[ A combinatorial version of the Lorenz attractor in $ \mathbb{R}^3 $. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/ex2_CDS.png}\label{subFigSdcEx2}
}
\caption{Two examples of combinatorial dynamical system.}
\label{figSdcEx}
\end{figure}
A multivalued map is a map $ F: X \to P(X) $, where $P(X)$ is the power set of $X$. We denote a multivalued map by $ F : X \multimap X $.
\begin{definition}
The combinatorial multi-flow associated with a combinatorial dynamical system $ \mathcal{V} $ is the multivalued map $ \Pi_{\mathcal{V}} : K \multimap K $ given by:
\begin{equation}
\Pi_{\mathcal{V}}(\sigma):=
\begin{cases}
\Cl \sigma & \If \, \sigma \in \Crit \mathcal{V} \\
\Bd \sigma \setminus \{ \mathcal{V}^{-1}(\sigma) \} & \If \, \sigma \in \Ima \mathcal{V} \setminus \Crit \mathcal{V} \\
\{ \mathcal{V}(\sigma) \} & \If \, \sigma \in \Dom \mathcal{V} \setminus \Crit \mathcal{V}. \\
\end{cases}
\end{equation}
\end{definition}
$ \Pi_{\mathcal{V}} $ induces the dynamics in combinatorial dynamical systems.
\begin{definition}
A solution of the combinatorial multi-flow $ \Pi_{\mathcal{V}} $ of a combinatorial dynamical system $ \mathcal{V} $ is a function $ \varrho : I \to K $, where $ I $ is an interval in $ \mathbb{Z} $ and $ \varrho(i + 1) \in \Pi_{\mathcal{V}}(\varrho(i)) $ for all $ i \in I $. We say it is a full solution if $ I = \mathbb{Z} $.
\end{definition}
We define a cycle to be a solution $ \varrho$ with $ I = [0, n] \cap \mathbb{Z} $ such that $ \varrho(0) = \varrho(n)$. We say a cycle is elementary if the simplices $ \varrho(i) $ for $ i = 1,2, \ldots, n-1 $ are pairwise distinct. The image of a cycle consists of $n$-simplices and $(n+1)$-simplices, or of a critical simplex, because the only way to go up in dimension is from a simplex in $ \Dom \mathcal{V} \setminus \Crit \mathcal{V} $, and a solution cannot pass through two simplices of $ \Dom \mathcal{V} \setminus \Crit \mathcal{V} $ in a row. We call $ \varrho $ a $d$-cycle, where $d$ is the dimension of the simplices in $ \Dom \mathcal{V} \cap \Ima \varrho $, when $ \Crit \mathcal{V} \cap\Ima \varrho = \emptyset $. We can use the strongly connected components of the directed graph of $ \Pi_{\mathcal{V}} $ to understand the recurrent dynamics of $ \mathcal{V} $; this will help us study the method in the experiments. We say that a cycle $\varrho$ self-intersects at $ \tau $ if $ \card(\Ima \varrho \cap \Pi_{\mathcal{V}}(\tau)) > 1$.
\section{Construction of a Simplicial Complex}
In this section, we construct a simplicial complex. Recall that the dataset is $ X \subset \mathbb{R}^d $, and that each $x \in X$ has an associated vector $ \dot{x} \in \mathbb{R}^d $. In this article, we show two different ways to construct a simplicial complex: the Delaunay complex \cite{boCompTopo} and the Dowker complex \cite{rGhristEAT}.
\begin{definition}
The Voronoi cell of a point $u \in S$ is the set of points $V_u = \{ x \in \mathbb{R}^d \mid \| x- u \| \leq \| x - v \| \text{ for all } v \in S \}$. The Voronoi diagram of $S$ is the collection of the Voronoi cells of its points.
\end{definition}
\begin{definition}
The Delaunay complex $K$ of a finite set $ S \subset \mathbb{R}^d $ is isomorphic to the nerve of the Voronoi diagram,
\begin{equation}
K = \{ \sigma \subseteq S \mid \cap_{u \in \sigma} V_u \neq \emptyset \}.
\end{equation}
\end{definition}
The Delaunay complex works better if the sampling of the data is uniform. When the sampling is not uniform, there is a great variation in the volume of the $n$-simplices, which makes the data harder to analyze. The Dowker complex is a solution for a non-uniform sampling of finite vector field data.
\begin{definition}
Let $X$ and $Y$ be sets and let $R \subset Y \times X$ be a relation. The Dowker complex $K$ is given by:
\begin{equation*}
K := \{[y_0, y_1, y_2, \ldots, y_n] \mid \text{ there exists an } x \in X \text{ such that } y_i R x \text{ for all } i= 0, 1, \ldots, n \}.
\end{equation*}
\end{definition}
For the purpose of this paper, $X$ stores the data, and we need to choose the set $ Y $ and a relation $R$ between $Y$ and $X$.
\begin{example}\label{exDowCmp}
Let $X = \{ (-1.2, 0), (0, 0.5), (0, -0.5), (0.5, 0) \} \subset \mathbb{R}^2$ and $Y = \{ y_0, y_1, y_2, y_3, y_4\}$, where $ y_0 = (0, 0), y_1 = (-1, -1), y_2 = (1, 1), y_3 =(-1, 1), y_4 = (1, -1) $. A pair $y \in Y $, $x \in X $ is in the relation if $ \| y - x \|_2 < 1.2$. The resulting Dowker complex is $ \{ [y_0, y_1, y_4], [y_0, y_2, y_4], [y_0, y_2, y_3], [y_1, y_3] \} $, together with all the faces of these simplices.
\begin{figure}
\center
\includegraphics[height=3.75in, width=5in, scale=1.00, angle=0 ]{Figure/DowkerCS.png}
\caption{The Dowker complex obtained in Example \ref{exDowCmp}.}
\label{imgDowkerComplex}
\end{figure}
\end{example}
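The complex in Example \ref{exDowCmp} can be recomputed directly from the definition; a minimal sketch in Python, with the point coordinates as in the example:

```python
from itertools import combinations

# Witness points X and landmark points Y from the example
X = [(-1.2, 0.0), (0.0, 0.5), (0.0, -0.5), (0.5, 0.0)]
Y = {"y0": (0, 0), "y1": (-1, -1), "y2": (1, 1), "y3": (-1, 1), "y4": (1, -1)}

def related(y, x, r=1.2):
    return sum((a - b) ** 2 for a, b in zip(y, x)) ** 0.5 < r

# For each x, the landmarks it witnesses span a simplex, together with all faces
simplices = set()
for x in X:
    witnesses = tuple(sorted(name for name, y in Y.items() if related(y, x)))
    for k in range(1, len(witnesses) + 1):
        simplices.update(combinations(witnesses, k))

maximal = {s for s in simplices
           if not any(set(s) < set(t) for t in simplices)}
# maximal simplices: (y0,y1,y4), (y0,y2,y3), (y0,y2,y4), (y1,y3)
```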
\section{Compute a Vector for each Simplex}
In this section, we define two ways of assigning a vector to each simplex; that is, we construct a map $ V : K \to \mathbb{R}^d $.
Let $K$ be a Delaunay complex. We compute the vector of a simplex by averaging the vectors of its vertices. More precisely, for $ \sigma = [ x_0, x_1, \ldots, x_n ] $:
\begin{equation}
V(\sigma) := \frac{\sum_{i=0}^n \dot{x}_i}{n+1}
\end{equation}
Let $K$ be a Dowker complex, with $ X $ and $\dot{X}$ from the data, $ Y $ a set and $ R $ a relation between $Y$ and $X$. We define a map $ w : K \to P(\dot{X}) $ by $w(\sigma) = \lbrace \dot{x} \in \dot{X} \mid x R y \text{ for all } y \in \sigma \rbrace$.
We then assign a vector to each simplex $ \sigma \in K $:
\begin{equation}
V(\sigma) := \frac{\sum_{\dot{x}\in w(\sigma)} \dot{x} }{ \card(w(\sigma))}
\end{equation}
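For a Delaunay complex the assignment is just a vertex average; a minimal sketch in Python (the sample vectors are ours):

```python
import numpy as np

# Hypothetical data vectors xdot_i attached to vertices 0, 1, 2
xdot = {0: np.array([1.0, 0.0]),
        1: np.array([0.0, 1.0]),
        2: np.array([-1.0, 0.0])}

def V(simplex):
    """V(sigma): average of the data vectors over the vertices of sigma."""
    return sum(xdot[v] for v in simplex) / len(simplex)

V((0, 1, 2))  # average of the three vertex vectors: (0, 1/3)
```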
\section{The Matching Algorithm}
In this section, we describe the matching algorithm. We minimize a linear program with binary variables and linear equality constraints.
First, let us define the variables. Let $N$ be the number of simplices and let $ x_i $, $0 \leq i < N$, denote the simplices. We define a matrix $ A \in M_{N}(\{ 0,1 \}) $ of dimensions $N \times N$ with binary entries, called the matching matrix. The matching is read off as follows:
\begin{itemize}
\item If $ a_{ij} = 1 $, match $ x_i $ to $x_j$ $(\mathcal{V}(x_i) = x_j)$
\item If $ a_{ij} = 0 $, do not match $ x_i $ to $x_j $ $ (\mathcal{V}(x_i) \neq x_j) $
\end{itemize}
Now, let us define the matrix $ C \in M_{N}(\mathbb{R}) $, called the cost matrix. Let $ V : K \to \mathbb{R}^d $ be the map that takes a simplex and returns its vector value from the data. Let $b : K \to \mathbb{R}^d $ be the map sending each simplex to its barycenter, and let $ W : K \times K \to \mathbb{R}^d $ be given by $ W(x_i, x_j) = b(x_j) - b(x_i) $. Then $ W(x_i, x_j) $ is the vector that starts at the barycenter of $ x_i $ and ends at the barycenter of $x_j$. The main idea is to compare the vector $ V(x_i) $ to the vector $ W(x_i, x_j) $ when the matching is admissible: given $ x_i \neq x_j $, the matching is admissible if $ x_i < x_j $ and $ \dim x_i + 1 = \dim x_j $. We use the cosine distance to compare them, defined as follows:
\begin{equation}
d(x, y) = 1 - \cos(\theta) = 1 - \frac{x \cdot y}{ \| x\| \| y \|},
\end{equation}
where $ \theta $ is the angle between $ x $ and $ y$. If $ d(x,y) = 0 $, then $ x $ and $y$ are parallel with the same direction; if $ d(x,y) = 1 $, then $ x $ and $y$ are perpendicular; if $ d(x,y) = 2 $, then $ x $ and $y$ are parallel with opposite directions. We set $ c_{ij} = d(V(x_i), W(x_i, x_j)) $ when $ (x_i, x_j) $ is an admissible matching and $ V(x_i) \neq 0 $.
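The cosine distance and its three landmark values can be sketched directly; a minimal example in Python:

```python
import numpy as np

def cosine_distance(x, y):
    """d(x, y) = 1 - cos(theta), where theta is the angle between x and y."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

assert cosine_distance([1, 0], [2, 0]) == 0.0   # same direction
assert cosine_distance([1, 0], [0, 1]) == 1.0   # perpendicular
assert cosine_distance([1, 0], [-3, 0]) == 2.0  # opposite directions
```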
If $ V(x_i) = 0 $ and $ (x_i, x_j) $ is an admissible matching, then $ c_{ij} = 2$. Indeed, $ d(x, 0) = 1 $ for every $ x \in \mathbb{R}^d $, which would mean that all vectors are perpendicular to $ \vec{0} $; but $x_i$ should rather be a critical simplex. By setting a higher value for this cost, the solution is more likely to make this simplex critical.
If $ i = j $, then we set the value $ c_{ii} $ to $ \alpha \in [0, 2] $, where $ \alpha $ is a parameter chosen by the user. If the value of $ \alpha $ is low, then we get more critical simplices; if it is high, then we get fewer critical simplices.
Let us explain the geometric interpretation of the parameter $ \alpha $. There exists $ \beta \in [-1, 1] $ such that $ \alpha = 1 - \beta $. Fix a simplex $ x_i $ and consider all the admissible matchings $ (x_i, x_j) $. Suppose that $ c_{ij} > \alpha $ for all of them, and let $ \theta_j $ be the angle between $ V(x_i) $ and $ W(x_i, x_j) $. We obtain:
\begin{align*}
& 1 - \beta < d(V(x_i), W(x_i, x_j)) \\
& \implies 1-\beta <1 - \cos(\theta_j) \\
& \implies \beta > \cos(\theta_j)\\
& \implies \arccos(\beta) < \theta_j,
\end{align*}
%
where $ \arccos(\beta) $ represents the critical angle. If $ \theta_j > \arccos(\beta) $ for all $j$, then the minimization problem (\ref{eqPrOp}) makes $ x_i $ a critical simplex or matches it with another simplex of lower dimension.
If the matching is not admissible and $ i \neq j $, then we set $ c_{ij} = \max(2\alpha + 1, 3) $.
Finally, we obtain the following expression for the entries of the matrix $ C $:
\begin{equation}
c_{ij} = \begin{cases}
d(V(x_i), W(x_i,x_j)) & \text{if } (x_i, x_j) \text{ is admissible and } V(x_i) \neq \vec{0} \\
2 & \text{if } (x_i, x_j) \text{ is admissible and } V(x_i) = \vec{0} \\
\alpha & \text{if } i = j \\
\max(2\alpha + 1, 3) & \text{otherwise}
\end{cases}.
\end{equation}
\begin{figure}
\center
\includegraphics[height=3in, width=4in, scale=1.00, angle=0 ]{Figure/comp_V_W.png}
\caption{We compare $V([v_0, v_1])$ in green and $ W([v_0, v_1], [v_0, v_1, v_2]) $ in blue with the cosine distance.}
\label{compVW}
\end{figure}
Finally, we obtain the following minimization linear problem with binary variables and linear equality constraints:
\begin{equation}\label{eqPrOp}
\begin{aligned}
& \underset{A \in M_N(\{0,1\}) }{\text{minimize}}
& & f(A) := \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} a_{ij} c_{ij} \\
& \text{subject to}
& & \sum_{i=0}^{N-1} a_{ik} + \sum_{j=0}^{N-1} a_{kj} - a_{kk} = 1 , \quad k =0, 1, \ldots, N-1
\end{aligned}
\end{equation}
Let us explain the constraint for a fixed $k$. The sum $ \sum_{i=0}^{N-1} a_{ik} $ counts the number of arrows going into $ x_k $, i.e.\ the number of $i$ with $ \mathcal{V}(x_i) = x_k $. The sum $ \sum_{j=0}^{N-1} a_{kj} $ counts the number of arrows going out of $ x_k $, i.e.\ the number of $j$ with $ \mathcal{V}(x_k) = x_j $. We subtract the term $ a_{kk} $ because it is counted twice.
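On a tiny complex the ILP can be solved directly, e.g.\ with \texttt{scipy.optimize.milp}; a minimal sketch under our own toy costs (one edge $[v_0,v_1]$ whose vertex $x_0$ points towards the edge, $\alpha = 0.9$):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy complex: x0 = [v0], x1 = [v1], x2 = [v0, v1]; alpha = 0.9
alpha = 0.9
C = np.full((3, 3), 3.0)        # max(2*alpha + 1, 3) for inadmissible pairs
np.fill_diagonal(C, alpha)      # c_kk = alpha
C[0, 2] = 0.0                   # V(x0) aligned with W(x0, x2)
C[1, 2] = 2.0                   # V(x1) opposite to W(x1, x2)

n = 3
# One equality constraint per simplex k: sum_i a_ik + sum_j a_kj - a_kk = 1
A = np.zeros((n, n * n))
for k in range(n):
    for i in range(n):
        A[k, i * n + k] += 1    # arrows into x_k
        A[k, k * n + i] += 1    # arrows out of x_k
    A[k, k * n + k] -= 1        # a_kk was counted twice

res = milp(c=C.ravel(), integrality=np.ones(n * n),
           bounds=Bounds(0, 1),
           constraints=LinearConstraint(A, lb=1, ub=1))
Aopt = np.round(res.x).astype(int).reshape(n, n)
# Optimal matching: x0 -> x2 (a_02 = 1) and x1 critical (a_11 = 1)
```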
\begin{theorem}\label{thGloMinOpt}
If $ A$ is a global minimum of the ILP (\ref{eqPrOp}), then $ A $ induces a matching that satisfies the definition of a combinatorial dynamical system.
\end{theorem}
\begin{proof}
Let us first show that the optimization problem (\ref{eqPrOp}) has a global minimum. Set $ a_{kk} = 1 $ for all $ k =0, 1, \ldots, N-1 $ and $ a_{ij} = 0 $ for $ i \neq j $. This matrix satisfies the constraints of (\ref{eqPrOp}) and induces the matching in which all simplices are critical, so the feasible set of (\ref{eqPrOp}) is non-empty. Since the variables $ a_{ij} $ are binary, the feasible set is also finite, and therefore (\ref{eqPrOp}) has a global minimum.
Let us show by contraposition that if the matrix $ A$ does not induce a matching compatible with Definition \ref{dfnSDC}, then it does not satisfy the constraints of the minimization problem (\ref{eqPrOp}). The possible failures correspond to the cases shown in Figure \ref{figSdcCe}. If there are two arrows going out of $ x_k$, then there exist $i \neq j $ such that $ a_{ki} = 1 $ and $ a_{kj} = 1 $, and the constraint for $k$ is not satisfied. If there are two arrows going into $ x_k $, then there exist $i \neq j $ such that $ a_{ik} = 1$ and $ a_{jk} = 1 $, and again the constraint is not satisfied. If there is an arrow going into $ x_k $ and another arrow going out of $ x_k $, with $x_k$ non-critical, then there exist $i,j $ such that $ a_{ik} = 1 $ and $ a_{kj} = 1 $, and the constraint is not satisfied. We conclude that $ \Dom \mathcal{V} \cap \Ima \mathcal{V} = \Crit \mathcal{V} $. Moreover, for every $ x_k \in K $ there exists $i$ such that $ a_{ik} = 1 $ or $ a_{ki} = 1 $, which means that $ x_k \in \Dom \mathcal{V} $ or $ x_k \in \Ima \mathcal{V} $; hence $\Dom \mathcal{V} \cup \Ima \mathcal{V} = K $.
It remains to check that if $ A $ contains an inadmissible matching, then $A$ is not a global minimum. Suppose by contradiction that $ A$ is a global minimum of (\ref{eqPrOp}) and $ a_{ij} = 1 $, where $( x_i , x_j )$ is an inadmissible matching with $ i \neq j $. Then $ c_{ij} = \max(2\alpha + 1, 3) $. If we instead set $ a_{ii} = 1, a_{jj} = 1$ and $ a_{ij} = 0 $, the constraints are still satisfied, and since $ c_{ii} + c_{jj} = 2\alpha < 2\alpha +1 \leq c_{ij} $, we obtain a smaller value of the objective function of (\ref{eqPrOp}). Hence $A$ is not a global minimum, a contradiction.
\end{proof}
If $A$ satisfies the constraints of (\ref{eqPrOp}) but does not induce an admissible matching, we can always find a matrix $B$ that satisfies the constraints with $ f(B) < f(A) $.
\begin{proposition}\label{propNotAdmi}
If $ A $ is not an admissible matching for a combinatorial dynamical system and satisfies the constraints of the minimization problem (\ref{eqPrOp}), then we can find $B$ such that $B$ is an admissible matching and $ f(B) < f(A) $.
\end{proposition}
\begin{proof}
If $ A $ is not an admissible matching and satisfies the constraints of (\ref{eqPrOp}), then there exists a pair $ (i,j) $ such that $ a_{ij} = 1 $ and $(x_i,x_j) $ is not an admissible matching for the definition of a combinatorial dynamical system.
Let us construct the matrix $B \in M_{N}(\{0, 1 \})$. If $ (x_i, x_j) $ is an admissible matching, then $ b_{ij} = a_{ij} $. If $ (x_i, x_j) $ is not an admissible matching and $ a_{ij} = 1 $, then we set $ b_{ii} = b_{jj} = 1 $ and $ b_{ij} = 0 $. We have $ c_{ii} = c_{jj} = \alpha $ and $ c_{ij} = \max(2\alpha + 1, 3) $, so $ c_{ii} + c_{jj} = 2 \alpha < c_{ij} $.
Finally, $B$ still satisfies the constraints of (\ref{eqPrOp}), and $ f(B) < f(A) $.
\end{proof}
We can use Proposition \ref{propNotAdmi} to transform $A$ into an admissible matching with a simple procedure: set the simplices involved in inadmissible matchings to critical.
Let us explain the meaning of the value $f(A)$ in the problem (\ref{eqPrOp}). Let $A$ be a solution of (\ref{eqPrOp}) and suppose that $V(x_i) \neq \vec{0}$ for all $i$. Let $\mathcal{I}$ be the set of pairs $(i, j)$ with $i \neq j$ such that $a_{ij} = 1$, and let $\mathcal{K}$ be the set of indices $k$ such that $a_{kk} = 1$.
\begin{align*}\label{eqMinValueF}
f(A) &:= \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} c_{ij}a_{ij} = \sum_{(i,j) \in \mathcal{I}} c_{ij} + \sum_{k \in \mathcal{K}} c_{kk} \\
&= \sum_{(i, j) \in \mathcal{I}} \left(1 - \frac{V(x_i) \cdot W(x_i, x_j)}{\| V(x_i) \| \| W(x_i, x_j) \|} \right) + \sum_{k \in \mathcal{K}} \alpha \\
&= | \mathcal{I} | - \sum_{(i,j) \in \mathcal{I}} \frac{V(x_i) \cdot W(x_i, x_j)}{\| V(x_i)\| \| W(x_i, x_j) \| } + | \mathcal{K} | \alpha
\end{align*}
%
where $|\mathcal{I}|$ is the number of matchings, $|\mathcal{K}|$ is the number of critical simplices, and the middle sum is the sum of the cosines of the angles between $V(x_i)$ and $W(x_i, x_j)$. If $\alpha$ is high, the minimization favours more admissible matchings; if $\alpha$ is low, it produces more critical simplices.
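As a quick numerical check of this decomposition, we can evaluate it on the toy example of the next section (three matchings, each with cosine $1/\sqrt{2}$, one critical simplex, $\alpha = 0.75$; those values are taken from that example):

```python
import math

# Check of f(A) = |I| - sum of cosines + |K| * alpha on the toy solution
# below: matchings a_03, a_15, a_24 all have cosine 1/sqrt(2), and the
# 2-simplex x_6 is critical.
cosines = [1 / math.sqrt(2)] * 3
alpha, n_matchings, n_critical = 0.75, 3, 1
f = n_matchings - sum(cosines) + n_critical * alpha
print(round(f, 2))  # 1.63
```

This agrees with the objective value $3 \times 0.29 + 0.75 \approx 1.62$ obtained by summing the rounded cost matrix entries of the toy example.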
If our problem has many simplices, we can restrict attention to the variables $a_{ij}$ such that $(x_i, x_j)$ is an admissible matching. This greatly reduces the number of variables of the minimization problem (\ref{eqPrOp}). Let us define the minimization problem with reduced binary variables.
Let $z$ be a vector such that each $z_k \in \{0, 1\}$ is assigned to an admissible matching $(x_i, x_j)$, and let $m$ be the number of variables in $z$. Let $c$ be the vector with $c_k = c_{ij}$ for the variable $z_k$. Let $D \in M_{N, m}(\{0,1\})$ be the constraint matrix, whose entries are defined as follows. Let $x_i$ be a simplex and let $z_j$ be assigned to the matching $x_a \to x_b$. Then $D_{i,j} = 1$ if $i = a$ or $i = b$, and $D_{i,j} = 0$ otherwise. We obtain the new minimization problem:
\begin{equation}\label{eqPrRedOpt}
\begin{aligned}
& \underset{z_k \in \{0,1\} }{\text{minimize}}
& & f(z) = c \cdot z \\
& \text{subject to}
& & Dz = \vec{\mathbf{1}}_m
\end{aligned}\qquad .
\end{equation}
%
The number of variables of the minimization problem (\ref{eqPrRedOpt}) can be bounded from below and above, as stated in the following lemma.
\begin{lemma}\label{lemNbVar}
Let $m$ be the number of variables and $N$ the number of simplices. Then $N \leq m \leq N^2$.
\end{lemma}
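The reduced-variable construction can be sketched concretely on the toy complex of the next section (the simplex listing and the inclusion of one diagonal variable per simplex, so that critical simplices remain representable, are our assumptions for illustration; they are consistent with the lemma's lower bound $N \leq m$):

```python
import numpy as np

# Toy complex of the next section, simplices given as vertex tuples.
simplices = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
N = len(simplices)

# One variable z_k per admissible matching (codimension-1 face inclusion),
# plus one diagonal variable per simplex for the critical case.
pairs = [(i, j) for i, s in enumerate(simplices)
                for j, t in enumerate(simplices)
                if len(t) == len(s) + 1 and set(s) < set(t)]
pairs += [(k, k) for k in range(N)]
m = len(pairs)

# D[i, k] = 1 iff simplex x_i takes part in the matching encoded by z_k.
D = np.zeros((N, m), dtype=int)
for col, (a, b) in enumerate(pairs):
    D[a, col] = 1
    D[b, col] = 1

print(m)  # 16 variables: 9 admissible matchings + 7 diagonal entries
```

Here $N = 7$ and $m = 16$, which indeed satisfies $N \leq m \leq N^{2}$.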
\section{Examples}
\subsection{A Toy Example}
We discuss here in detail a simple example aimed at understanding how the algorithm works.
Let us define the data. We have $v_0 = (0,0)$, $v_1 = (1,1)$ and $v_2 = (2,0)$ with their associated vectors $\dot{v_0} = (0,1)$, $\dot{v_1} = (1,0)$ and $\dot{v_2} = (-1, -1)$.
We construct the Delaunay complex from the points $ v_0, v_1 $ and $ v_2 $ and obtain the simplicial complex $ K = \{ [v_0, v_1, v_2], [v_0, v_1], [v_1, v_2], [v_0, v_2], [v_0], [v_1], [v_2] \} $.
Let us compute the map $V: K \to \mathbb{R}^2$. For the $0$-simplices, we have $V([v_0]) = \dot{v_0} = (0, 1)$, $V([v_1]) = \dot{v_1} = (1, 0)$ and $V([v_2]) = \dot{v_2} = (-1, -1)$. For the other simplices, we take the average of the vectors on the vertices of the simplex:
\begin{align*}
V([v_0, v_1]) = \frac{V([v_0]) + V([v_1])}{2} = \frac{(0,1) + (1,0)}{2} =(\frac{1}{2}, \frac{1}{2})\\
V([v_0, v_2]) = \frac{V([v_0]) + V([v_2])}{2} = \frac{(0,1) + (-1, -1)}{2} =(-\frac{1}{2}, 0) \\
V([v_1, v_2]) = \frac{V([v_1]) + V([v_2]) }{2} = \frac{(1,0) + (-1,-1)}{2} =(0, - \frac{1}{2}) \\
V([v_0, v_1, v_2]) = \frac{V([v_0]) + V([v_1]) + V([v_2])}{3} = \frac{(0,1)+(1,0)+(-1,-1)}{3} =(0,0)
\end{align*}
We index the simplices as follows.
\begin{equation*}
x_0 = [v_0], x_1 = [v_1], x_2 = [v_2], x_3 = [v_0, v_1], x_4 = [v_0, v_2], x_5 = [v_1, v_2] \text{ and } x_6 = [v_0, v_1, v_2]
\end{equation*}
We construct the matrix $C$ with $ \alpha = 0.75$. We have $ c_{ii} = \alpha = 0.75 $ for all $ i$. For the entries $c_{ij}$, where $ (x_i, x_j) $ is not an admissible matching, we have:
\begin{align*}
c_{01} = c_{02} = c_{05} = c_{06} = c_{10} = c_{12} = c_{14} = c_{16} = c_{20} = \\
c_{21} = c_{23} = c_{26} = c_{30} = c_{31} = c_{32} = c_{34} = c_{35} = c_{40} =\\
c_{41} = c_{42} = c_{43} = c_{45} = c_{50} = c_{51} = c_{52} = c_{53} = c_{54} = \\
c_{60} = c_{61} = c_{62} = c_{63} = c_{64} = c_{65} = \max(2\alpha+1, 3) = 3.
\end{align*}
Let us compute $c_{ij}$ where $(x_i, x_j)$ is an admissible matching. The admissible matchings correspond to the variables $\{ a_{03}, a_{04}, a_{13}, a_{15}, a_{24}, a_{25}, a_{36}, a_{46}, a_{56} \}$. Let us compute $c_{03}$ and $c_{46}$ as examples. For $c_{03}$, we need to compute $W([v_0],[v_0, v_1])$:
\begin{align*}
W([v_0], [v_0, v_1]) = b([v_0, v_1]) - b([v_0]) = (\frac{1}{2}, \frac{1}{2}) - (0,0) = (\frac{1}{2}, \frac{1}{2})
\end{align*}
Then,
\begin{align*}
c_{03} = d(V([v_0]), W([v_0], [v_0, v_1])) = 1 - \frac{V([v_0]) \cdot W([v_0], [v_0, v_1])}{\| V([v_0]) \| \cdot \| W([v_0],[v_0, v_1]) \|} \\
= 1 - \frac{(0,1)\cdot(\frac{1}{2}, \frac{1}{2})}{\| (0,1)\| \cdot \| (\frac{1}{2}, \frac{1}{2}) \| } = 1 - \frac{\sqrt{2}}{2} \approx 0.29
\end{align*}
For $ c_{46} $, we need to compute $ W([v_0, v_2],[v_0, v_1, v_2]) $.
\begin{align*}
W([v_0, v_2], [v_0, v_1, v_2]) = b([v_0, v_1, v_2]) - b([v_0, v_2]) \\
= \frac{(0,0) + (1,1) + (2,0)}{3} - \frac{(0,0) + (2,0)}{2} = (0, \frac{1}{3})
\end{align*}
Then,
\begin{align*}
c_{46} = d(V([v_0, v_2]), W([v_0, v_2], [v_0, v_1, v_2])) \\
= 1 - \frac{V([v_0, v_2]) \cdot W([v_0, v_2], [v_0, v_1, v_2])}{\| V([v_0, v_2]) \| \cdot \| W([v_0, v_2], [v_0, v_1, v_2]) \|} \\
= 1 - \frac{(-\frac{1}{2}, 0) \cdot (0, \frac{1}{3})}{\|(-\frac{1}{2}, 0)\| \cdot \| (0, \frac{1}{3}) \|} = 1.00
\end{align*}
Finally, we obtain the matrix $C$, with entries rounded to two decimals.
\begin{equation}
C = \begin{bmatrix}
0.75 & 3 & 3 & 0.29 & 1 & 3 & 3 \\
3 & 0.75 & 3 & 1.71 & 3 & 0.29 & 3 \\
3 & 3 & 0.75 & 3 & 0.29 & 1 & 3 \\
3 & 3 & 3 & 0.75 & 3 & 3 & 0.55 \\
3 & 3 & 3 & 3 & 0.75 & 3 & 1 \\
3 & 3 & 3 & 3 & 3 & 0.75 & 0.86 \\
3 & 3 & 3 & 3 & 3 & 3 & 0.75
\end{bmatrix}
\end{equation}
Now we have everything set up to solve the minimization problem (\ref{eqPrOp}). We use our favourite solver and obtain the following matching matrix $A$:
\begin{equation}\label{eqMatToyEx}
A = \begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\end{equation}
The matrix $A$ induces the matching:
\begin{align*}
a_{03} &= 1 \implies \mathcal{V}(x_0) = x_3 \implies \mathcal{V}([v_0]) = [v_0, v_1] \\
a_{15} &= 1 \implies \mathcal{V}(x_1) = x_5 \implies \mathcal{V}([v_1]) = [v_1, v_2] \\
a_{24} &= 1 \implies \mathcal{V}(x_2) = x_4 \implies \mathcal{V}([v_2]) = [v_0, v_2] \\
a_{66} &= 1 \implies \mathcal{V}(x_6) = x_6 \implies \mathcal{V}([v_0, v_1, v_2]) = [v_0, v_1, v_2].
\end{align*}
We see that $ [ v_0, v_1, v_2 ] $ is a critical simplex.
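The whole toy example can be reproduced end to end with an off-the-shelf ILP solver. The paper does not prescribe a solver; the sketch below uses \texttt{scipy.optimize.milp} as one possible choice:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy data: vertices, sampled vectors, and the Delaunay simplices.
verts = {0: np.array([0., 0.]), 1: np.array([1., 1.]), 2: np.array([2., 0.])}
vecs  = {0: np.array([0., 1.]), 1: np.array([1., 0.]), 2: np.array([-1., -1.])}
simplices = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
N = len(simplices)

bary = lambda s: sum(verts[v] for v in s) / len(s)   # barycentre b(sigma)
V    = lambda s: sum(vecs[v] for v in s) / len(s)    # averaged vector field

# Cost matrix: alpha on the diagonal, max(2*alpha+1, 3) for inadmissible
# pairs, and 1 - cos(angle) for admissible matchings (face -> coface).
alpha = 0.75
C = np.full((N, N), max(2 * alpha + 1, 3.0))
np.fill_diagonal(C, alpha)
for i, s in enumerate(simplices):
    for j, t in enumerate(simplices):
        if len(t) == len(s) + 1 and set(s) < set(t):
            v, w = V(s), bary(t) - bary(s)
            C[i, j] = 1 - v @ w / (np.linalg.norm(v) * np.linalg.norm(w))

# One constraint per simplex: sum_i a_ik + sum_j a_kj - a_kk = 1.
rows = []
for k in range(N):
    row = np.zeros((N, N))
    row[:, k] += 1
    row[k, :] += 1
    row[k, k] -= 1
    rows.append(row.ravel())

res = milp(c=C.ravel(), integrality=np.ones(N * N),
           constraints=LinearConstraint(np.array(rows), lb=1, ub=1),
           bounds=Bounds(0, 1))
A = np.round(res.x).reshape(N, N).astype(int)
matches = sorted((int(i), int(j)) for i, j in zip(*np.nonzero(A)))
print(matches)  # [(0, 3), (1, 5), (2, 4), (6, 6)]
```

The nonzero entries recover exactly the matching listed above, with $x_6 = [v_0, v_1, v_2]$ critical.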
\begin{figure}
\center
\includegraphics[height=3in, width=4in, scale=1.00, angle=0 ]{Figure/toyExample.png}
\caption{The combinatorial dynamical system induced by the matching matrix $A$ from (\ref{eqMatToyEx}).}
\label{imgGrpNonOri}
\end{figure}
\subsection{Data Generated from Classical Dynamical Systems}
\subsubsection{Lotka-Volterra}
Consider the data points $ X = \{ [10i, 10j ] \mid i = 0, 1, \ldots, 8,\ j = 0, 1, \ldots, 8 \} $. We compute $ \dot{x} $ with the Lotka-Volterra equations:
\begin{equation}\label{eqLoVolt}
\begin{cases}
\frac{dx}{dt} = (0.4 - 0.01y)x\\
\frac{dy}{dt} = (0.005x -0.3)y
\end{cases}.
\end{equation}
The dynamical system (\ref{eqLoVolt}) has equilibrium points at $(60, 40)$ and $(0,0)$, and infinitely many closed orbits. We construct the Delaunay complex from the data and compute the optimal solution of the reduced minimization problem (\ref{eqPrRedOpt}) with $\alpha = 0.95$. In this problem, we have $642$ simplices and $1881$ admissible matchings. The optimal solution $A$ induces the combinatorial dynamical system shown in Figure \ref{imgLotkVolt}.
\begin{figure}
\center
\includegraphics[height=5.5in, width=5.5in, scale=1.00, angle=0 ]{Figure/LotkaVolt_CDS.png}
\caption{The combinatorial dynamical system obtained from the data with the Lotka-Volterra equations (\ref{eqLoVolt}).}
\label{imgLotkVolt}
\end{figure}
We obtain three critical $0$-simplices and two critical $1$-simplices. A critical $1$-simplex behaves like a saddle point, although the classical Lotka-Volterra dynamical system has no saddle points; these critical simplices are nevertheless needed to satisfy the definition of a combinatorial dynamical system.
As for $i$-cycles, we have two $0$-cycles and two $1$-cycles. We find some recurrent behaviour, as expected. We also have some errors on the boundary of the combinatorial dynamical system, so we need to be careful when analyzing the data on the boundary of the simplicial complex. Some critical simplices are caused by the finiteness of the data. For example, the bottom right critical $0$-simplex in Figure \ref{imgLotkVolt} should not be there: the classical dynamical system (\ref{eqLoVolt}) has no stationary fixed point at that location.
\subsubsection{Lorenz Attractor}
We take a linear approximation of the trajectory with initial value $x_0 = (0.00, 1.00, 1.05)$. Let $x_{i+1} = x_i + \Delta t \; \dot{x_i}$ with $\Delta t = 0.2$ and $i = 0, 1, \ldots, 999$. We obtain $1000$ data points. We compute $\dot{x}$ with the Lorenz attractor equations:
\begin{equation}\label{eqLorAtt}
\begin{cases}
\frac{dx}{dt} = 10(y-x) \\
\frac{dy}{dt} = 28x -xz - y\\
\frac{dz}{dt} = xy-\frac{8}{3}z
\end{cases}.
\end{equation}
We want to capture the dynamics of the chaotic attractor of (\ref{eqLorAtt}). We construct the Delaunay complex from the data points and obtain $8113$ $3$-simplices. We solve the problem (\ref{eqPrRedOpt}) to reduce the number of variables. We obtain $0$ critical $0$-simplices, $20$ critical $1$-simplices, $49$ critical $2$-simplices and $28$ critical $3$-simplices. We compute the strongly connected components of the directed graph of the multivalued map $\Pi_{\mathcal{V}}$ to find the recurrent behaviour, which is a union of $i$-cycles. We have $4$ $0$-cycles, $6$ $1$-cycles and $1$ $2$-cycle. In Figure \ref{imgLorAtt}, we see the biggest strongly connected component: a $1$-cycle with $3528$ simplices and $421$ self-intersections. The dynamics of this attractor are very complex, which is caused by the chaotic behaviour of the Lorenz attractor.
\begin{figure}
\center
\includegraphics[height=9cm, width=14cm, scale=1.00, angle=0 ]{Figure/cycle_LAtt.png}
\caption{The biggest strongly connected component of the combinatorial dynamical system obtained from the data of the Lorenz attractor equations. This is a $1$-cycle. It has $3538$ simplices, including $421$ self-intersections, coloured in purple.}
\label{imgLorAtt}
\end{figure}
\section{Extensions to the Algorithm}
\subsection{Barycentric Subdivision}
We can apply a barycentric subdivision before constructing the minimization problem (\ref{eqPrOp}). Let us start with a simple example showing why this is useful.
Let $x_0$ be a $2$-simplex and let $x_1, x_2$ and $x_3$ be $1$-simplices that are faces of $x_0$. Suppose that $c_{01} = c_{02} = c_{03} < \alpha$. By the constraints of (\ref{eqPrOp}), the solution can match only one $1$-simplex with $x_0$, even though $x_1, x_2$ and $x_3$ are all good candidates for matching because they have low costs. By applying a barycentric subdivision, we can match each of $x_1, x_2$ and $x_3$ with a $2$-simplex in the interior of $x_0$.
Let us define the barycentric subdivision. Let $K$ be the simplicial complex and let $K'$ be the simplicial complex resulting from the barycentric subdivision of $K$. We proceed by induction on the dimension of $K$. For each simplex $\sigma \in K_n$, we add the barycentre of $\sigma$ as a new vertex of $K'$ and connect it with the barycentres of the faces on the boundary of $\sigma$ to construct the new $n$-simplices of $K'$.
Let $v : K \to \mathbb{R}^{2}$ be the vector associated to each simplex and let us construct $v' : K' \to \mathbb{R}^{2}$. For each $\sigma \in K_0$, we also have $\sigma \in K_0'$ and we set $v'(\sigma) = v(\sigma)$. For any other simplex $\sigma \in K'$, there exists a simplex $\tau \in K$ such that $\sigma$ lies in the interior of $\tau$; we then put $v'(\sigma) = v(\tau)$.
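The inductive construction above can be sketched via the standard flag description of the barycentric subdivision (each $n$-simplex of $K'$ corresponds to a chain $\sigma_{0} < \sigma_{1} < \cdots < \sigma_{n}$ of faces of $K$, each vertex of $K'$ being the barycentre of some $\sigma_{i}$); here is a minimal sketch on a single $2$-simplex:

```python
from itertools import combinations

def faces(simplex):
    """All nonempty faces of a simplex given as a vertex tuple."""
    s = tuple(simplex)
    return [c for k in range(1, len(s) + 1) for c in combinations(s, k)]

def top_flags(top):
    """Maximal chains sigma_0 < sigma_1 < ... of faces of `top`."""
    fs = faces(top)
    stack = [[f] for f in fs if len(f) == 1]  # maximal chains start at a vertex
    out = []
    while stack:
        chain = stack.pop()
        ext = [f for f in fs
               if len(f) == len(chain[-1]) + 1 and set(chain[-1]) < set(f)]
        if not ext:
            out.append(chain)
        else:
            stack.extend(chain + [f] for f in ext)
    return out

tris = top_flags((0, 1, 2))
print(len(tris))  # 6: each new triangle is (vertex, edge barycentre, face barycentre)
```

The subdivision of a triangle thus consists of $6$ new triangles, matching the picture of one $2$-simplex placed in the interior of each corner-edge pair.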
We try this approach on an example. Let $x = \{ (-1, -1), (1, -1), (0, 2) \}$ and $x' = \{ (1, 1), (-1, 1),$ $(0, -2) \}$. We have a vector field sampled from the dynamical system:
\begin{equation*}
\begin{cases}
\frac{dx}{dt} = -x \\
\frac{dy}{dt} = -y
\end{cases}.
\end{equation*}
We use the Delaunay complex to construct the simplicial complex. We should obtain a fixed point at $(0,0)$, but there is no $0$-simplex at $(0,0)$. The solution of the minimization problem will still mark some vertex as a critical $0$-simplex, which does not represent the finite vector field data correctly. If we apply a barycentric subdivision, a $0$-simplex is added at the centre of the $2$-simplex, close to the real fixed point.
\begin{figure}
\center
\subfigure[ The matching from the solution obtained by solving the minimization problem without applying a barycentric subdivision. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/before_barySub.png}\label{subBefBaryCDS}
}
\,
\subfigure[ The matching from the solution obtained by solving the minimization problem with a barycentric subdivision applied to the data. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/after_barySub.png}\label{subAftBaryCDS}
}
\caption{An example of applying a barycentric subdivision before the matching algorithm; it gives a better result.}
\label{figsubBaryCDS}
\end{figure}
In summary, applying a barycentric subdivision helps add more detail to the combinatorial dynamical system. The drawback is that it adds more simplices to the simplicial complex, so computing the solution takes more time.
\subsection{Gradient Combinatorial Dynamical System}
In this subsection, we demonstrate two different methods to obtain a gradient combinatorial dynamical system with an approach similar to the one above. The first method is to choose $\alpha$ small enough that the solution $A$ of the minimization problem (\ref{eqPrOp}) induces a matching of a gradient combinatorial dynamical system. The second method is to add constraints to the minimization problem (\ref{eqPrOp}) so as to remove all cycles from the solution. We say that a combinatorial dynamical system is gradient if there is no elementary cycle of length greater than $1$; in this case, the image of an elementary cycle contains only one critical simplex.
For the first method, if we choose $\alpha = 0$, then the minimum of the problem (\ref{eqPrOp}) is obtained by setting $a_{ii} = 1$ for all $i = 0, 1, \ldots, N-1$. This makes all simplices critical and yields a gradient combinatorial dynamical system.
\begin{lemma}
Let $ c_{ij} > 0 $ for all $ i \neq j $ and $ l = \min_{i \neq j}\{ c_{ij} \} $. If $ \alpha < \frac{l}{2} $, then the global minimum of (\ref{eqPrOp}) is $ a_{ii} = 1 $ for all $ i = 0, 1, \ldots, N-1 $ and $ a_{ij} = 0 $ for $ i \neq j $.
\end{lemma}
\begin{proof}
We argue by contradiction. Let $A$ be a global minimum of (\ref{eqPrOp}) and suppose that there exist $i \neq j$ such that $a_{ij} = 1$. Note that $c_{ii} + c_{jj} = 2\alpha < l \leq c_{ij}$. We construct $B$ by setting $b_{km} = a_{km}$ for all $k, m = 0, 1, \ldots, N-1$, except that $b_{ij} = 0$, $b_{ii} = 1$ and $b_{jj} = 1$. We have $f(B) < f(A)$, a contradiction; hence the global minimum is $a_{ii} = 1$ for all $i = 0, 1, \ldots, N-1$ and $a_{ij} = 0$ for $i \neq j$.
\end{proof}
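A minimal illustration of this lemma, on a hand-made $3\times 3$ cost matrix (the off-diagonal values are arbitrary, chosen only so that $l = 0.4$ and $\alpha = 0.15 < l/2$), using the same constraints as (\ref{eqPrOp}):

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Hand-made costs: l = min off-diagonal cost = 0.4, alpha = 0.15 < l/2.
alpha, N = 0.15, 3
C = np.array([[alpha, 0.4, 3.0],
              [3.0, alpha, 0.5],
              [3.0, 3.0, alpha]])

# Constraints of the minimization problem: sum_i a_ik + sum_j a_kj - a_kk = 1.
rows = []
for k in range(N):
    row = np.zeros((N, N))
    row[:, k] += 1
    row[k, :] += 1
    row[k, k] -= 1
    rows.append(row.ravel())

res = milp(c=C.ravel(), integrality=np.ones(N * N),
           constraints=LinearConstraint(np.array(rows), lb=1, ub=1),
           bounds=Bounds(0, 1))
A = np.round(res.x).reshape(N, N).astype(int)
print(np.array_equal(A, np.eye(N, dtype=int)))  # True: every simplex critical
```

As predicted, the global minimum is the identity matrix: every simplex is critical, since any off-diagonal matching of cost $c_{ij} \geq l$ is beaten by the two diagonal entries of total cost $2\alpha < l$.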
The main idea of this method is to choose the greatest $\alpha \in [0, 2]$ such that the solution obtained from (\ref{eqPrOp}) induces a gradient combinatorial vector field. The advantage of this approach is that it is fast to compute, but it gives more critical simplices than the second method.
For the second method, we add a new set of constraints to the integer linear problem (\ref{eqPrOp}) so that the solution contains no cycle. The new minimization problem is:
\begin{equation}\label{eqPrOpGradient}
\begin{aligned}
& \underset{A \in M_N(\{0,1\}) }{\text{minimize}}
& & f(A) := \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} a_{ij} c_{ij} \\
& \text{subject to}
& & \sum_{i=0}^{N-1} a_{ik} + \sum_{j=0}^{N-1} a_{kj} - a_{kk} = 1 , \quad k =0, 1, \ldots, N-1 \\
& & & \sum_{a_{ij} \in c} a_{ij} < | c |, \quad c \in C
\end{aligned}
\end{equation}
where $C$ is the set of all possible elementary cycles of length greater than $1$ in $K$. Let $K$ be a simplicial complex and $d$ the dimension of $K$. The elementary cycles can be decomposed into $i$-cycles:
\begin{equation*}
C = \bigcup_{i = 0}^{d-1} \{ \text{all possible elementary } i\text{-cycles} \}.
\end{equation*}
\begin{theorem}\label{thGloMinOptGrad}
If $A$ is a global minimum of the ILP (\ref{eqPrOpGradient}), then $A$ induces a matching that satisfies the conditions of a gradient combinatorial dynamical system.
\end{theorem}
The proof is similar to that of Theorem \ref{thGloMinOpt}, but with the new set of constraints the solution $A$ of (\ref{eqPrOpGradient}) induces a matching with no cycle. If we set $\alpha$ high, this approach yields fewer critical simplices than the first method, but there are many more constraints to compute.
We end with an example comparing the two methods. Let $X = \{ (0,0), (1,1), (2,0) \}$ and $\dot{X} = \{ (0.05, 1), (1,0), (-1,-1) \}$. With this dataset, unless $\alpha$ is low, the solution $A$ induces a cycle in the combinatorial dynamical system. We choose $\alpha = 0.14$ to obtain an admissible matching with no cycle. For the second method, we need to add two constraints to remove all possible cycles. The comparison of the methods is displayed in Figure \ref{figGradCDS}.
\begin{figure}
\center
\subfigure[ The solution from the minimization problem with $ \alpha = 0.15 $. The combinatorial dynamical system has a $1$-cycle. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/grad_CDS_cyclic.png}\label{subGradCDS1}
}
\,
\subfigure[ The solution from the minimization problem with $ \alpha = 0.14 $. The combinatorial dynamical system is gradient. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/grad_CDS_lowAlpha.png}\label{subGradCDS2}
}
\qquad
\subfigure[ The solution from the minimization problem with two constraints added to remove all cyclic solutions. The combinatorial dynamical system is gradient. ]{
\includegraphics[height=5.715cm, width=7.62cm, scale=1.00, angle=0 ]{Figure/grad_CDS_constraint.png}\label{subFigGradCDS3}
}
\caption{Comparison of the different methods to make sure that the combinatorial dynamical system is gradient.}
\label{figGradCDS}
\end{figure}
\clearpage
% "From Finite Vector Field Data to Combinatorial Dynamical Systems in the Sense of Forman" (https://arxiv.org/abs/2112.12295)
\title{Improvement and generalisation of Papasoglu's lemma}
% https://arxiv.org/abs/1708.09623
\begin{abstract}
We improve an isoperimetric inequality due to Panos Papasoglu. We also generalize this inequality to the Finsler case by proving an optimal Finsler version of Besicovitch's lemma which holds for any notion of Finsler volume.
\end{abstract}
\section{Introduction}
In \cite{Pa} (Proposition 2.3), Panos Papasoglu proves the following lemma.
\begin{lemme}
Let $(\mathbb{S}^{2},g)$ be a Riemannian two-sphere and denote by $\mathcal{A}$ its Riemannian area. Then for any $\varepsilon>0$ there exists a closed curve $\gamma$ dividing $(\mathbb{S}^{2},g)$ into two disks $D_{1}$ and $D_{2}$ of area at least $\frac{\mathcal{A}(\mathbb{S}^{2},g)}{4}$ and whose length satisfies
\begin{equation*}
\textup{length} (\gamma ) \leq 2\sqrt{3\mathcal{A}(\mathbb{S}^{2},g)} + \varepsilon.
\end{equation*}
\end{lemme}
This lemma has several deep consequences in metric geometry: using it, P. Papasoglu gives estimates of the Cheeger constant of surfaces; Y. Liokumovich, A. Nabutovsky and R. Rotman use it to answer a question asked by S. Frankel, M. Katz and M. Gromov in \cite{Lio1}; and F. Balacheff uses it to estimate the width of 2-spheres in \cite{Bal2}. In this article, we give two different ways to improve Papasoglu's estimate: first by a factor $\sqrt{2}$, by using the coarea formula directly instead of Besicovitch's lemma; then by a factor $2\sqrt{\frac{2}{\pi}}$, by using an argument suggested by an anonymous reviewer and already used by Gromov to compute the filling radius of $\mathbb{S}^{1}$ in the simply connected case: Pu's inequality. This automatically gives better estimates: for instance, in \cite{Lio2}, the constants 52 and 26 given by Y. Liokumovich in the abstract could be divided by $2\sqrt{\frac{2}{\pi}}$, thus \emph{there exists a Morse function $f: M \rightarrow \mathbb{R}$, which is constant on each connected component of a Riemannian 2-sphere with $k\geq 0$ holes $M$ and has fibers of length no more than $26\sqrt{\frac{\pi}{2}\mathcal{A}(M)}$} and \emph{on every 2-sphere there exists a simple closed curve of length $\leq 13\sqrt{\frac{\pi}{2}\mathcal{A}(\mathbb{S}^{2})}$ subdividing the sphere into two discs of area $\geq \frac{1}{3}\mathcal{A}(\mathbb{S}^{2})$}.
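For the record, the improvement factors quoted above can be checked numerically:

```python
import math

# Papasoglu's constant 2*sqrt(3) against the constants of the two
# propositions below: sqrt(6) (coarea formula) and sqrt(3*pi/2) (Pu).
papasoglu = 2 * math.sqrt(3)
coarea = math.sqrt(6)
pu = math.sqrt(3 * math.pi / 2)
print(round(papasoglu / coarea, 6))  # 1.414214, i.e. sqrt(2)
print(round(papasoglu / pu, 6))      # 1.595769, i.e. 2*sqrt(2/pi)
```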
Besicovitch's lemma asserts that, given a parallelotope $P\subset \mathbb{R}^{n}$ endowed with a Riemannian metric $g$, we have
\begin{equation*}
v(P,g) \geq \prod_{i=1}^{n}d_{i}
\end{equation*}
where $v$ denotes the Riemannian volume of $(P,g)$ and the $d_{i}$ denote the Riemannian distances between the pairs of opposite sides of $P$ (see for instance \cite{Gr}, section 4.28). It was used by P. Papasoglu in the proof of his lemma. In this article, we give a natural generalisation of Besicovitch's lemma, extending it to Finsler parallelotopes -- that is, parallelotopes continuously endowed with a norm at each of their points. As there is no single natural definition of volume for such a manifold, we prove an optimal inequality satisfied by any Finsler volume in the sense of \cite{BBI} (paragraph 5.5.3), such as the Busemann-Hausdorff and Holmes-Thompson volumes. Our proof is based on Gromov's proof given in \cite{Gr}. We then use it to extend Papasoglu's lemma to Finsler 2-spheres, although the Holmes-Thompson and Busemann-Hausdorff cases could still be improved.
\section{Improvements of Papasoglu's lemma}
The Riemannian case of Papasoglu's isoperimetric inequality can be improved by using the coarea formula directly instead of Besicovitch's lemma:
\begin{prop} \label{papRiem}
Let $(\mathbb{S}^{2},g)$ be a Riemannian two-sphere and denote by $\mathcal{A}$ its Riemannian area. Then for any $\varepsilon>0$ there exists a closed curve $\gamma$ dividing $(\mathbb{S}^{2},g)$ into two disks $D_{1}$ and $D_{2}$ of area at least $\frac{\mathcal{A}(\mathbb{S}^{2},g)}{4}$ and whose length satisfies
\begin{equation*}
\textup{length} (\gamma ) \leq \sqrt{6\mathcal{A}(\mathbb{S}^{2},g)} + \varepsilon.
\end{equation*}
\end{prop}
\begin{proof}
Let $\Gamma$ be the set of simple closed curves dividing $(\mathbb{S}^{2},g)$ into two disks of area $\geq\frac{\mathcal{A}(\mathbb{S}^{2},g)}{4}$, and let $L=\inf_{\gamma\in \Gamma}\textup{length} (\gamma)$. If we fix $\varepsilon > 0$, we can take $\gamma\in \Gamma$ such that $\textup{length} (\gamma) < L+\varepsilon$ and denote by $D_{1}$ and $D_{2}$ the two disks bounded by $\gamma$ with $\mathcal{A}(D_{1})\geq \mathcal{A}(D_{2})$ (which implies $\mathcal{A}(D_{1})\geq \frac{\mathcal{A} (\mathbb{S}^{2},g)}{2}$).
Then $\gamma$ admits no $\varepsilon$-shortcut in $D_{1}$ -- that is, there does not exist any curve $\delta\subset D_{1}$ joining two points $a$ and $b$ of $\gamma$ with $\textup{length} (\delta) < \textup{length} (\gamma_{1}) - \varepsilon$, where $\gamma_{1}\subset \gamma$ is the shortest arc between $a$ and $b$ on $\gamma$. Otherwise, either $\delta\cup\gamma_{1}$ or $\delta\cup\gamma_{2}$ (writing $\gamma_{2} = \gamma\setminus\gamma_{1}$) would bound a disk of area $\geq \frac{\mathcal{A} (\mathbb{S}^{2},g)}{4}$ with boundary length $< L$, a contradiction.
\begin{figure}
\begin{center}
\includegraphics[scale =0.8]{raccourci.png}
\caption{$\delta$ can't be an $\varepsilon$-shortcut}
\label{fig:raccourci}
\end{center}
\end{figure}
Now fix $\varepsilon > 0$ and rather take $\gamma\in \Gamma$ a curve of length $\textup{length} (\gamma)<L+\frac{\varepsilon}{L}$ (taking $\varepsilon$ small enough to have $\textup{length} (\gamma)<2L$). In $D_{1}$, the disk of greatest area, there is no $\frac{\varepsilon}{L}$-shortcut between two points of $\gamma$. Fix any point $A$ of $\gamma$ and denote, for every $r\geq 0$, $F_{r}:=\{ m\in \overline{D_{1}}\ |\ d(A,m)=r \}$. As $d(A,\cdot)$ is a Lipschitz continuous function, it is differentiable almost everywhere and, according to Sard's lemma, $F_{r}$ is a submanifold for almost every $r$; we restrict ourselves to such $r$. Let $u(r)$ and $v(r)$ be the two points of $\gamma$ at distance $r$ from $A$, for $r<\frac{\textup{length}(\gamma)}{2}$. As $F_{r}$ is a submanifold in $D_{1}$, which is a submanifold with boundary in $D_{1}\cup\gamma$, there is a path $\delta_{r}$ of $F_{r}$ connecting $u(r)$ to $v(r)$. Then, since there is no $\frac{\varepsilon}{L}$-shortcut,
\begin{equation*}
\textup{length} (\delta_{r}) \geq \left\{
\begin{array}{c c c}
2r-\frac{\varepsilon}{L} & \text{ if }& r\leq \frac{\textup{length} (\gamma)}{4}\\
\textup{length} (\gamma) - 2r-\frac{\varepsilon}{L} & \text{ if }& \frac{\textup{length} (\gamma)}{4}\leq r\leq \frac{\textup{length} (\gamma)}{2}.
\end{array}
\right.
\end{equation*}
\begin{figure}
\begin{center}
\includegraphics[scale =0.8]{coaire.png}
\caption{The loci $F_{r}$ and $\delta_{r}$}
\label{fig:coaire}
\end{center}
\end{figure}
Then, using the coarea formula:
\begin{eqnarray*}
\mathcal{A}(D_{1}) &=& \int_{0}^{+\infty} \textup{length} (F_{r})\,\mathrm{d} r \\
&\geq& \int_{0}^{\frac{\textup{length}(\gamma)}{2}} \textup{length} (\delta_{r})\,\mathrm{d} r \\
&\geq& 2\int_{0}^{\frac{\textup{length}(\gamma)}{4}} \left(2r-\frac{\varepsilon}{L}\right)\,\mathrm{d} r \\
&\geq& \frac{\textup{length}(\gamma)^{2}}{8} -\varepsilon.
\end{eqnarray*}
Combining this with $\frac{3}{4}\mathcal{A}(\mathbb{S}^{2})\geq \mathcal{A}(D_{1})$, and since the inequality holds for every sufficiently small $\varepsilon >0$, we can conclude.
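Explicitly, the last two inequalities combine into
\begin{equation*}
\textup{length}(\gamma)^{2} \leq 8\left(\mathcal{A}(D_{1}) + \varepsilon\right) \leq 6\,\mathcal{A}(\mathbb{S}^{2},g) + 8\varepsilon,
\end{equation*}
which gives $\textup{length}(\gamma) \leq \sqrt{6\mathcal{A}(\mathbb{S}^{2},g)} + \varepsilon'$ with $\varepsilon' \rightarrow 0$ as $\varepsilon \rightarrow 0$.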
\end{proof}
An anonymous reviewer suggested another way to improve it, using an argument given by Gromov in \cite{fill} (section 5.5.B', item (e)):
\begin{prop} \label{papRiem2}
Let $(\mathbb{S}^{2},g)$ be a Riemannian two-sphere and denote by $\mathcal{A}$ its Riemannian area. Then for any $\varepsilon>0$ there exists a closed curve $\gamma$ dividing $(\mathbb{S}^{2},g)$ into two disks $D_{1}$ and $D_{2}$ of area at least $\frac{\mathcal{A}(\mathbb{S}^{2},g)}{4}$ and whose length satisfies
\begin{equation*}
\textup{length} (\gamma ) \leq \sqrt{\frac{3\pi}{2}\mathcal{A}(\mathbb{S}^{2},g)} + \varepsilon.
\end{equation*}
\end{prop}
\begin{proof}
We follow the same approach as in the previous proof, taking $\gamma\in \Gamma$ a curve of length $\textup{length} (\gamma)<L+\varepsilon$ dividing $\mathbb{S}^{2}$ into two disks $D_{1}$ and $D_{2}$ satisfying the same conditions.
As there is no $\varepsilon$-shortcut, any curve joining two antipodal points of $\partial D_{1}$ has length at least $\frac{\textup{length} (\gamma )}{2} - \varepsilon$. Identifying these antipodal points, $D_{1}$ gives a projective plane of systole at least $\frac{\textup{length} (\gamma )}{2} - \varepsilon$; thus, applying Pu's systolic inequality,
\begin{equation*}
\mathcal{A}(D_{1}) \geq \frac{2}{\pi}\left(\frac{\textup{length} (\gamma )}{2} - \varepsilon\right)^{2}.
\end{equation*}
As $\frac{3}{4}\mathcal{A}(\mathbb{S}^{2})\geq \mathcal{A}(D_{1})$, we then conclude.
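Explicitly, the two inequalities give
\begin{equation*}
\left(\frac{\textup{length}(\gamma)}{2} - \varepsilon\right)^{2} \leq \frac{\pi}{2}\,\mathcal{A}(D_{1}) \leq \frac{3\pi}{8}\,\mathcal{A}(\mathbb{S}^{2},g),
\end{equation*}
hence $\textup{length}(\gamma) \leq \sqrt{\frac{3\pi}{2}\mathcal{A}(\mathbb{S}^{2},g)} + 2\varepsilon$.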
\end{proof}
\begin{rmq} \label{optimal}
The equality case of Pu's theorem tells us about the (non-)optimality of this inequality. Precisely, there is no Riemannian 2-sphere $(\mathbb{S}^{2},g)$ whose minimal closed curve $\gamma$ satisfying Papasoglu's hypothesis has length:
\begin{equation*}
\textup{length} (\gamma ) = \sqrt{\frac{3\pi}{2}\mathcal{A}(\mathbb{S}^{2},g)}.
\end{equation*}
Indeed, this would imply the equality cases $\frac{3}{4}\mathcal{A}(\mathbb{S}^{2},g) = \mathcal{A}(D_{1})$ and $\mathcal{A}(D_{1}) = \frac{2}{\pi}\left(\frac{\textup{length} (\gamma )}{2}\right)^{2}$. By Pu's theorem, $D_{1}$ is then a hemisphere of the round sphere of radius $\frac{\textup{length} (\gamma )}{2\pi}$.
Let us show that if $D_{1}$ is a hemisphere of a round 2-sphere of radius $r$, then $(\mathbb{S}^{2},g)$ is the round sphere of radius $r$. This will conclude the argument, because $\gamma$ would then be an equator, which is obviously not minimal for Papasoglu's lemma.
To prove it, we apply Pu's theorem to $D_{2}$, so that $D_{2}$ would also be a round hemisphere of radius $r$. Let us show that any curve $\delta_{2}\subset D_{2}$ joining two antipodal points $N$ and $S$ of $\partial D_{2}$ has length at least $\frac{\textup{length} (\gamma )}{2} = \pi r$. Suppose the contrary for some $\delta_{2}$ in $D_{2}$ joining $N$ and $S$; then, gluing this curve with any meridian $\delta_{1}$ of the hemisphere $D_{1}$ joining $N$ and $S$, we obtain a simple closed curve $\delta = \delta_{1}\cdot\delta_{2}$. As the meridians of a round hemisphere of radius $r$ have length $\pi r$, we get $\textup{length} (\delta) < \textup{length} (\gamma)$. But, according to the intermediate value theorem, there exists a meridian $\delta_{1}$ such that $\delta$ divides $(\mathbb{S}^{2},g)$ into two disks of the same area, contradicting the minimality of $\gamma$.
\begin{figure}
\begin{center}
\includegraphics[scale =0.7]{unoptimal.png}
\caption{$\delta_{2}$ glued with a meridian $\delta_{1}$}
\label{fig:unoptimal}
\end{center}
\end{figure}
\end{rmq}
\section{Besicovitch's lemma for Finsler manifolds}
In this section, we extend Papasoglu's lemma to Finsler manifolds for any good notion of Finsler area. For this, we first give a natural generalisation of Besicovitch's lemma.
\subsection{Length metric and volume on Finsler manifolds}
The manifolds used here will be closed and connected. See \cite{BBI} for details and motivations about the results of this section.
Recall that a \emph{continuous Finsler metric} on a manifold $M$ is a continuous function $\Phi : TM\rightarrow [0,+\infty[$ whose restriction to every tangent space is an asymmetric norm. Such a manifold is called a \emph{Finsler manifold} $(M,\Phi)$. If $\Phi(-v_{x}) = \Phi(v_{x})$ for all tangent vectors, we say that $\Phi$ is a \emph{reversible} continuous Finsler metric.
We can then define a \emph{length metric} $d_{\Phi}$ on $M$ by:
\begin{equation*}
\forall x,y\in M, \quad d_{\Phi}(x,y) = \inf_{\gamma : x\leadsto y} \textup{length}_{\Phi}(\gamma)
\end{equation*}
where the infimum is taken on the piecewise-$\mathcal{C}^{1}$ curves $\gamma: [0,1]\rightarrow M$ joining $x$ to $y$ and
\begin{equation*}
\textup{length}_{\Phi}(\gamma) := \int_{0}^{1}{\Phi(\gamma'(t))\,\mathrm{d} t}.
\end{equation*}
We will restrict ourselves to the case of reversible continuous Finsler metrics.
Contrary to the Riemannian case, there is no single natural way to define a volume on a Finsler manifold. We will give two natural definitions. The Busemann--Hausdorff volume can be defined, for any open subset $U\subset (M,\Phi)$, as:
\begin{equation*}
v_{BH}(U) := \int_{U} \frac{|B_{g}|}{|B_{\Phi}|}v_{g}
\end{equation*}
where $g$ is an auxiliary Riemannian metric, $v_{g}$ is its associated volume, and, for every $p\in M$, $|A|$ denotes the $g_{p}$-normalised Lebesgue measure of $A\subset T_{p}M$, while $B_{g}$ and $B_{\Phi}$ are the unit balls of $T_{p}M$ endowed with the norms induced by $g_{p}$ and $\Phi_{p}$ respectively. This definition does not depend on $g$ and boils down to normalising the volume of the unit ball of each tangent space $(T_{p}M,\Phi_{p})$.
The Holmes--Thompson volume is defined, for any open subset $U\subset (M,\Phi)$, as:
\begin{equation*}
v_{HT}(U) := \int_{U} \frac{|B_{\Phi}^{*}|}{|B_{g}|}v_{g}
\end{equation*}
where, for all $p\in M$ and every convex body $K$ of the Euclidean space $(T_{p}M,g_{p})$, $K^{*} := \{ u \in T_{p}M \ |\ \forall w\in K,\ g_{p}(u,w) \leq 1 \}$ is the dual convex body of $K$. Compared to the Busemann--Hausdorff volume, here we normalise the dual unit ball.
In the case of a Riemannian manifold $(M,g)$, the Busemann--Hausdorff and Holmes--Thompson volumes coincide: $v_{BH}=v_{HT}=v_{g}$.
These two volumes are \emph{monotone}: a Finsler volume $v$ is monotone if, for every short map between Finsler manifolds $f:(M,\Phi) \rightarrow (N,\Psi)$,
\begin{equation*}v(f(M),\Psi)\leq v(M,\Phi).\end{equation*}
We refer to paragraph 5.5.3 of \cite{BBI} for an in-depth analysis of the general notion of Finsler volumes (which includes these two).
\subsection{Finsler Besicovitch's Lemma}
We show that the proof given in section 4.28 of \cite{Gr} yields a more general statement of Besicovitch's lemma:
\begin{prop}[Finsler Besicovitch's lemma] \label{prop:besicovitch}
Let $P\subset \mathbb{R}^{n}$ be an $n$-dimensional parallelotope endowed with a reversible continuous Finsler metric $\Phi$. If $(F_i,G_i)$ (with $1\leq i\leq n$) denote its pairs of opposite faces and $d_{i} := d_{\Phi}(F_{i},G_{i})$, then, for any Finsler volume $v$,
\begin{equation*}
v(P,\Phi) \geq v\left(\prod_{i=1}^{n}[0,d_{i}],\|\cdot\|_{\infty}\right).
\end{equation*}
\end{prop}
\begin{proof}
Let $f$ be the continuous function
\begin{equation*}
f:\left\{
\begin{array}{c c c}
P &\rightarrow& \mathbb{R}^{n}\\
x &\mapsto& \left(d_{\Phi}(x,F_{i})\right)_{1\leq i\leq n}
\end{array}\right. .
\end{equation*}
As for all points $x$ and $y$ in $P$,
\begin{equation} \label{ineqbes}
|d_{\Phi}(x,F_{i})-d_{\Phi}(y,F_{i})|\leq d_{\Phi}(x,y)
\end{equation}
for all $i$; taking the maximum over $i$, one sees that $f:(P,\Phi)\rightarrow (\mathbb{R}^{n},\|\cdot\|_{\infty})$ is short. Thus, proving $f(P)\supset \prod_{i=1}^{n}[0,d_{i}]=:C$ is enough to obtain the inequality.
Note that the boundary of $P$ is mapped outside the interior of $C$; more precisely, writing $f=(f^{1},\ldots ,f^{n})$, $f^{i}(F_{i}) = 0$ whereas $f^{i}(G_{i})\subset [d_{i},+\infty[$. From the definition of $P$, there exists a homeomorphism $h:P\rightarrow C$ mapping each face onto a face (with the obvious choice). So $f_{t} = tf_{|\partial P}+(1-t)h_{|\partial P}$ defines a homotopy from $h_{|\partial P}$ to $f_{|\partial P}$ with values in $\mathbb{R}^{n}\setminus \overset{\circ}{C}$. If there existed $y\in \overset{\circ}{C}\setminus f(P)$, then $f_{|\partial P}$ would be null-homotopic in $\mathbb{R}^{n}\setminus \{y\} \supset \mathbb{R}^{n}\setminus \overset{\circ}{C}$, so $h_{|\partial P}$ would also be null-homotopic in $\mathbb{R}^{n}\setminus \{y\}$, a contradiction since $h(P)\ni y$.
\begin{figure}
\begin{center}
\includegraphics[scale =0.7]{besicovitch.png}
\caption{Scheme of the proof}
\label{fig:besicovitch}
\end{center}
\end{figure}
As $v$ is monotonous and $f$ is short, one has the chain of inequalities
\begin{equation*}
v(P,\Phi)\geq v\left(f(P),\|\cdot\|_{\infty}\right)\geq v(C,\|\cdot\|_{\infty}).
\end{equation*}
\end{proof}
\begin{rmq}
The proof provides some information about the equality case. As $f(P)\supset C$, in order to have $v\left(f(P),\|\cdot\|_{\infty}\right)= v(C,\|\cdot\|_{\infty})$, $f(P)$ and $C$ must differ only by a Lebesgue-negligible subset of $\mathbb{R}^{n}$. As $f:(P,\Phi)\rightarrow (\mathbb{R}^{n},\|\cdot\|_{\infty})$ is short, in order to have $v(P,\Phi)=v(f(P),\|\cdot\|_{\infty})$, $f$ must be locally isometric almost everywhere, meaning that $\,\mathrm{d} f_{x}$, which is defined for almost every $x$, has norm $1$ almost everywhere. Finally, $v(P,\Phi)=v(C,\|\cdot\|_{\infty})$ implies that $(P,\Phi)$ is locally isometric almost everywhere to $\left(\widetilde{C},\|\cdot\|_{\infty}\right)\supset \left(C,\|\cdot\|_{\infty}\right)$ with $\widetilde{C}\setminus C$ negligible in $\mathbb{R}^{n}$.
\end{rmq}
\begin{exs} \label{ex:besicovitch}\
\begin{itemize}
\item For $v=v_{BH}$, it gives the sharp inequality:
\begin{equation*}
v_{BH}(P,\Phi) \geq \frac{b_{n}}{2^{n}}\prod_{i=1}^{n}d_{i}
\end{equation*}
where $b_{n}=\frac{\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}+1\right)}$ denotes the volume of the standard Euclidean unit ball.
\item For $v=v_{HT}$, it gives the sharp inequality:
\begin{equation*}
v_{HT}(P,\Phi) \geq \frac{2^{n}}{n! b_{n}}\prod_{i=1}^{n}d_{i}.
\end{equation*}
\end{itemize}
\end{exs}
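These constants follow from a direct flat computation, which we sketch for the reader's convenience. In each tangent space of $\left(\prod_{i=1}^{n}[0,d_{i}],\|\cdot\|_{\infty}\right)$ the unit ball is the cube $[-1,1]^{n}$, of Lebesgue measure $2^{n}$, and its dual convex body is the cross-polytope $B_{\|\cdot\|_{1}}$, of measure $\frac{2^{n}}{n!}$. The definitions of the two volumes then give
\begin{equation*}
v_{BH}\left(\prod_{i=1}^{n}[0,d_{i}],\|\cdot\|_{\infty}\right) = \frac{b_{n}}{2^{n}}\prod_{i=1}^{n}d_{i}
\qquad\text{and}\qquad
v_{HT}\left(\prod_{i=1}^{n}[0,d_{i}],\|\cdot\|_{\infty}\right) = \frac{2^{n}}{n!\,b_{n}}\prod_{i=1}^{n}d_{i}.
\end{equation*}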
The symmetry of $d_{\Phi}$ is key to obtaining inequality (\ref{ineqbes}), so we cannot directly extend this proof to the asymmetric Finsler case. Nevertheless, in the case of the Holmes--Thompson volume, the Rogers--Shephard inequality allows us to assert the following.
\begin{prop} \label{asym}
Let $P\subset \mathbb{R}^{n}$ be an $n$-dimensional parallelotope endowed with an asymmetric continuous Finsler metric $\Phi$. If $(F_i,G_i)$ (with $1\leq i\leq n$) denote its pairs of opposite faces, then,
\begin{equation*}
v_{HT}(P,\Phi) \geq \frac{n!}{(2n)!}\frac{2^{n}}{b_{n}}\prod_{i=1}^{n}\left(d_{\Phi}(F_{i},G_{i})+d_{\Phi}(G_{i},F_{i})\right).
\end{equation*}
\end{prop}
\begin{proof}
Following the proof of theorem 4.13 of \cite{Bal}, we consider the symmetrized Finsler metric $\Psi$ defined by
\begin{equation*}
\forall u\in TP,\quad \Psi(u):= \Phi(u)+\Phi(-u)
\end{equation*}
so that, for every curve $\gamma$,
\begin{equation*}
\quad \textup{length}_{\Psi}(\gamma) = \textup{length}_{\Phi}(\gamma) + \textup{length}_{\Phi}(\check{\gamma}),
\end{equation*}
where $\check{\gamma}$ denotes the time-reversed curve. Thus, for all $x,y\in P$, $d_{\Psi}(x,y)\geq d_{\Phi}(x,y)+d_{\Phi}(y,x)$, hence
\begin{equation*}
\forall i,\quad d_{\Psi}(F_{i},G_{i})\geq d_{\Phi}(F_{i},G_{i})+d_{\Phi}(G_{i},F_{i}).
\end{equation*}
On the other hand, at every $p\in P$ the dual unit balls satisfy $B_{\Psi_{p}}^{*}=B_{\Phi_{p}}^{*}-B_{\Phi_{p}}^{*}$; thus, applying the Rogers--Shephard inequality in every cotangent space, we have that
\begin{equation*}
v_{HT}(P,\Phi) \geq \frac{(n!)^{2}}{(2n)!}v_{HT}(P,\Psi).
\end{equation*}
The inequality then follows from proposition \ref{prop:besicovitch} applied to $(P,\Psi)$.
\end{proof}
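For the reader's convenience, we recall the Rogers--Shephard inequality invoked above: for every convex body $K\subset\mathbb{R}^{n}$, the difference body satisfies
\begin{equation*}
|K-K| \leq \binom{2n}{n}|K|.
\end{equation*}
Applied fibrewise to the dual unit balls, it yields $v_{HT}(P,\Psi)\leq\binom{2n}{n}v_{HT}(P,\Phi)$, and $\binom{2n}{n}=\frac{(2n)!}{(n!)^{2}}$ accounts for the constant appearing in the proof.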
\begin{rmq}
We cannot hope for such an inequality for the Busemann--Hausdorff volume in the asymmetric case. Here $d_{i}$ will denote $\min(d(F_{i},G_{i}),d(G_{i},F_{i}))$. To see this in $\mathbb{R}^{2}$, take $P = [0,1]^2$ and define an asymmetric norm $\Phi$ on $\mathbb{R}^{2}$ by its unit ball $B_{\Phi}$. Let $a=\left(-\frac{1}{2},0\right)$, $b=a+h\left(\frac{2}{3},1\right)$ and $c=a+h\left(\frac{2}{3},-1\right)$ where $h>\frac{3}{2}$; we define $B_{\Phi}$ as the triangle $abc$. As $h$ tends to infinity, $d_{1} \sim \frac{1}{h}$, $d_{2} = 1$ and $v_{BH}(P,\Phi) = \frac{3\pi}{2h^{2}}$, thus
\begin{equation*}
\frac{v_{BH}(P,\Phi)}{d_{1}d_{2}} \underset{h\to+\infty}{\longrightarrow} 0.
\end{equation*}
However, in the asymmetric flat case, we still have the weaker (sharp) inequality:
\begin{equation} \label{eq:flatBH}
v_{BH}(P,\Phi) \geq \frac{b_{n}}{2^{n}}\left(\min_{1\leq i\leq n}d_{i}\right)^{n}.
\end{equation}
As a matter of fact, taking $P=[0,1]^{n}$ without loss of generality,
\begin{equation*}
d :=\min_{1\leq i\leq n}d_{i} = \inf \{ \alpha>0,\ \alpha B_{\Phi}\cap\partial [-1,1]^{n} \neq \emptyset \},
\end{equation*}
thus for all $\alpha < d$, $\alpha B_{\Phi} \subset [-1,1]^{n}$, so $|dB_{\Phi}| \leq 2^{n}$ (where $|\cdot |$ denotes the standard Lebesgue measure of $\mathbb{R}^{n}$), which is equivalent to (\ref{eq:flatBH}).
We can also show, by duality, the Holmes--Thompson analogue of this last inequality: for any flat metric,
\begin{equation*}
v_{HT}(P,\Phi) \geq \frac{2^{n}}{n!b_{n}}\left(\min_{1\leq i\leq n}d_{i}\right)^{n}.
\end{equation*}
As a matter of fact, with the notation above, for all $\alpha > 0$,
\[
\alpha B_{\Phi}\cap\partial [-1,1]^{n} \neq \emptyset \Leftrightarrow \exists i,\ \alpha B_{\Phi}\cap\{x,\ \left\langle e_{i},x\right\rangle =\pm 1 \}\neq \emptyset
\]
\[
\Leftrightarrow \exists i,\ e_{i}\not\in (\alpha B_{\Phi})^{*}\ \text{ or } -e_{i}\not\in (\alpha B_{\Phi})^{*}
\]
where $(e_{j})$ is the canonical basis of $\mathbb{R}^{n}$. Thus for all $\alpha < d$, $(\alpha B_{\Phi})^{*} \supset B_{\|\cdot\|_{1}}$, the convex hull of the $\pm e_{j}$, so $|(dB_{\Phi})^{*}|\geq \frac{2^{n}}{n!}$.
\end{rmq}
\subsection{Finsler Papasoglu's lemma}
We can now extend the original proof of Papasoglu to Finsler 2-spheres.
\begin{prop}
Let $(\mathbb{S}^{2},\Phi)$ be a reversible Finsler two-sphere, let $\mathcal{A}$ be any Finsler volume, and let $c>0$ be such that $\mathcal{A}([0,d_{1}]\times [0,d_{2}],\|\cdot\|_{\infty}) = cd_{1}d_{2}$ for all $d_{i}>0$. Then for any $\varepsilon>0$ there exists a closed curve $\gamma$ dividing $(\mathbb{S}^{2},\Phi)$ into two disks $D_{1}$ and $D_{2}$ of area at least $\frac{\mathcal{A}(\mathbb{S}^{2},\Phi)}{4}$ and whose length satisfies
\begin{equation}\label{eq:fpap}
\textup{length} (\gamma ) \leq 2\sqrt{\frac{3}{c}\mathcal{A}(\mathbb{S}^{2},\Phi)} + \varepsilon.
\end{equation}
\end{prop}
\begin{proof}
We follow the same approach as in the Riemannian proof, taking $\gamma\in \Gamma$ a curve of length $\textup{length} (\gamma)<L+\varepsilon$ dividing $\mathbb{S}^{2}$ into two disks $D_{1}$ and $D_{2}$ satisfying the same conditions.
Divide $\gamma$ into four arcs $\gamma = \alpha_{1}\cup\alpha_{2}\cup\alpha_{3}\cup\alpha_{4}$ of the same length $\frac{\textup{length}(\gamma)}{4}$. As there is no $\varepsilon$-shortcut, we have
\begin{equation*}
d_{\Phi}(\alpha_{1},\alpha_{3}),d_{\Phi}(\alpha_{2},\alpha_{4}) \geq \frac{\textup{length}(\gamma)}{4} -\varepsilon.
\end{equation*}
Hence, by Besicovitch's lemma and the example \ref{ex:besicovitch},
\begin{equation*}
\mathcal{A} (D_{1}) \geq c\frac{(\textup{length}(\gamma)-4\varepsilon)^{2}}{16}.
\end{equation*}
But $\mathcal{A} (D_{1})\leq \frac{3}{4}\mathcal{A} (\mathbb{S}^{2},\Phi)$, thus
\begin{equation*}
\textup{length} (\gamma ) \leq 2\sqrt{\frac{3}{c}\mathcal{A}(\mathbb{S}^{2},\Phi)} + 4\varepsilon.
\end{equation*}
\begin{figure}
\begin{center}
\includegraphics[scale =0.8]{papasoglu.png}
\caption{Cutting $\gamma = \partial D_{1}$ in 4 curves of the same length}
\label{fig:papasoglu}
\end{center}
\end{figure}
\end{proof}
\begin{exs}\
\begin{itemize}
\item For $\mathcal{A}=v_{BH}$, $c=\frac{\pi}{4}$ and (\ref{eq:fpap}) becomes:
\begin{equation*}
\textup{length} (\gamma ) \leq 4\sqrt{\frac{3}{\pi}\mathcal{A}(\mathbb{S}^{2},\Phi)} + \varepsilon.
\end{equation*}
\item For $\mathcal{A}=v_{HT}$, $c=\frac{2}{\pi}$ and (\ref{eq:fpap}) becomes:
\begin{equation*}
\textup{length} (\gamma ) \leq \sqrt{6\pi\mathcal{A}(\mathbb{S}^{2},\Phi)} + \varepsilon.
\end{equation*}
\end{itemize}
\end{exs}
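The values of $c$ above come from the flat computation in dimension two: the unit ball of $\|\cdot\|_{\infty}$ in $\mathbb{R}^{2}$ is the square $[-1,1]^{2}$, of area $4$, whose dual convex body is the $\ell^{1}$ ball, of area $2$. Since $b_{2}=\pi$,
\begin{equation*}
v_{BH}\left([0,d_{1}]\times[0,d_{2}],\|\cdot\|_{\infty}\right)=\frac{\pi}{4}\,d_{1}d_{2}
\qquad\text{and}\qquad
v_{HT}\left([0,d_{1}]\times[0,d_{2}],\|\cdot\|_{\infty}\right)=\frac{2}{\pi}\,d_{1}d_{2}.
\end{equation*}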
Nevertheless, the proof of proposition \ref{papRiem2} gives a better estimate in these two special cases:
\begin{prop}
Let $(\mathbb{S}^{2},\Phi)$ be a reversible Finsler two-sphere. Then, for $\mathcal{A}=v_{HT}$ or $v_{BH}$, there exists $\gamma$ such that
\begin{equation*}
\textup{length} (\gamma ) \leq \sqrt{\frac{3\pi}{2}\mathcal{A}(\mathbb{S}^{2},\Phi)} + \varepsilon,
\end{equation*}
with the same hypotheses on $\gamma$ and $\varepsilon$ as in the previous proposition.
\end{prop}
\begin{proof}
According to theorems 3 and 4 of Ivanov \cite{Ivanov2011}, Pu's systolic inequality remains true for these two volumes and any reversible Finsler metric $\Phi$. Thus, the proof of proposition \ref{papRiem2} remains valid in this case.
\end{proof}
\begin{rmq}
The optimality issue discussed in remark \ref{optimal} still applies in the Busemann--Hausdorff case, according to Ivanov's theorem.
\end{rmq}
\bibliographystyle{alpha}
\title{On the Bergman kernels of holomorphic vector bundles}
\begin{abstract}
Consider a very ample line bundle $E \to X$ over a compact complex manifold, endowed with a hermitian metric of curvature $-i \omega$, and the space $\mathcal{O}(E)$ of its holomorphic sections. The Fubini--Study map associates with positive definite inner products $\langle \, , \rangle$ on $\mathcal{O}(E)$ functions $\mathrm{FS}(\langle \, ,\rangle) \in \mathcal{H}_{\omega}=\{u \in C^{\infty}(X):\omega +i\partial\overline{\partial} u >0\}$. We prove that FS is an injective immersion, but its image in general is not closed in $\mathcal{H}_{\omega}$. To obtain a closed range, FS has to be extended to certain degenerate inner products. This we do by associating Bergman kernels with general inner products on the dual $\mathcal{O}(E)^*$, and the paper describes some simple properties of this association.
\end{abstract}
\section{Introduction}
There are several ways to associate Bergman kernels/sections/functions with a holomorphic vector bundle $E \to X$ over a compact base. Suppose $E \to X$ is a very ample line bundle with a hermitian metric $h$, and we are given a positive definite inner product $\langle \, , \rangle$ on the space $\mathcal{O}(E)$ of its sections. The formula
\begin{equation}\label{unouno}
\kappa(x)=\sup\Big\{\frac{h(s(x),s(x))}{\langle s,s\rangle}: s \in \mathcal{O}(E)\setminus \{0\}\Big\}, \qquad x \in X,
\end{equation}
defines an analog of Bergman's function. Bergman's original construction corresponds to $X$ not a compact manifold but an open subset of $\mathbb{C}^{n}$, $E=X \times \mathbb{C} \to X$ the trivial line bundle, $h$ the trivial metric, and $\langle s,s\rangle= \int_X h(s,s)$, the integral with respect to Lebesgue measure. As there, in our case too, the sup in (\ref{unouno}) can be described in terms of an orthonormal basis of $(\mathcal{O}(E),\langle \, , \rangle)$.
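Explicitly (a standard Cauchy--Schwarz computation, recorded here for convenience): if $s_1,\dots,s_d$ form an orthonormal basis of $(\mathcal{O}(E),\langle \, , \rangle)$, then the sup in (\ref{unouno}) equals
\begin{equation*}
\kappa(x)=\sum_{j=1}^{d} h(s_{j}(x),s_{j}(x)), \qquad x \in X.
\end{equation*}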
Positive definite inner products form an open cone $\mathcal{H}_{E}$ in the finite dimensional real vector space of all inner products (by which we mean hermitian symmetric sesquilinear maps
$\mathcal{O}(E) \times \mathcal{O}(E) \to \mathbb{C}$). One version of the Fubini--Study map
FS:$\mathcal{H}_{E} \to C^{\infty}(X)$ is defined by associating with $\langle\,\, ,\, \rangle \in \mathcal{H}_E$ the logarithm of
$\kappa$ in (\ref{unouno}). Here $C^{\infty} (X)$ stands for the space of smooth functions $X \to \mathbb{R}$.
In 1987 Yau suggested studying a variant of this map when $E=K^{\otimes N}_X$ is a tensor power of the canonical bundle of $X$ and $N \to \infty$. Such asymptotic questions have received much attention in the greater generality where $E=L^{\otimes N}$ is a power of any positive line bundle, in the works of Catlin, Tian, Zelditch, and many others; a very recent contribution is by Wu,
[Ca, T, W, Y, p. 139]. However, this paper is not about asymptotics, but properties of one fixed map FS:$\mathcal{H}_E \to C^{\infty}(X)$ and of related maps.
In fact, FS maps into the open subset
$$
\mathcal{H}_{\omega} = \{u \in C^{\infty}(X): \omega + i\partial \overline{\partial} u >0 \} \subset C^{\infty}(X),
$$
where $-i\omega$ is the curvature (1,1) form of $h$.
\begin{thm} If $E \to X$ is a very ample line bundle over a compact base, then FS$(\mathcal{H}_E) \subset \mathcal{H}_{\omega}$
is a not necessarily closed submanifold, and FS is a diffeomorphism between $\mathcal{H}_E$ and its image.\footnote{That (a
different variant of) FS is injective is already stated in \cite[Theorem 1.1]{Hs}. Yet in email correspondence with the author we
realized that the relevant Lemma 3.1 of that paper is not quite correct; the norm $\|\,\|_{op}$ there needs to be defined with more care.}
\end{thm}
In Section 2 we will recall the relevant notions of infinite dimensional geometry. That
FS$(\mathcal{H}_E) \subset \mathcal{H}_{\omega}$ may fail to be closed we will see in Example 6.2.
The question arises then, what is its closure? We give an answer in Section 6. The closure can be described in terms of
degenerations of positive definite inner products on $\mathcal{O}(E)$. The degenerations correspond to certain semidefinite inner products on the dual space $\mathcal{O}(E)^*$. It turns out that one can very naturally associate Bergman kernels and sections
with {\sl any} inner product on $\mathcal{O}(E)^*$, for any holomorphic vector bundle $E\to X$; and this association has neat properties, some undoubtedly familiar, that we describe in Sections 3, 4, 5. In different problems
in \cite{DW, W}, Darvas and Wu have already associated Bergman-type functions with certain inner
products/norms on ${\mathcal O}(E)^*$, and studied the resulting `dual' Fubini--Study maps. Rather than sampling from the findings of
this paper, we introduce here one result that can be viewed as generalizing Theorem 1.1 to an arbitrary holomorphic vector bundle $E \to X$ over a compact base. We will be brief here, and give the details in Section 3.
Our $E$ has a conjugate bundle $\overline{E} \to X$, a complex vector bundle. As a real vector bundle it is the same as $E$, but multiplication by $\lambda \in \mathbb{C}$ in the fibers of $\overline{E}$ corresponds to multiplication by $\overline{\lambda}$ in $E$. Keep in mind that $\overline{E} \to X$ is not a holomorphic vector bundle. We denote the space of smooth sections of $E \otimes \overline{E}$ (tensor product over $\mathbb{C}$) by $C^{\infty}(E \otimes \overline{E})$.
We continue to denote the cone of positive definite inner products on ${\mathcal O}(E)$ by ${\mathcal H}_E$. Any
$\langle\, , \rangle \in \mathcal{H}_E$ determines a smooth section $k \in C^{\infty}(E \otimes \overline{E})$, its Bergman section. If $s_1, \dots,s_d$ form an orthonormal basis of $(\mathcal{O}(E), \langle\, , \rangle), $
\begin{equation}\label{unodos}
k(x)= \sum_{j=1}^d s_j(x) \otimes s_j(x), \qquad x \in X.
\end{equation}
Thus there is a map
\begin{equation}\label{unotres}
b:\mathcal{H}_E \to C^{\infty}(E \otimes \overline E)
\end{equation}
that associates with $\langle\, , \rangle \in \mathcal{H}_E$ the section $k$ in (\ref{unodos}).
\begin{thm}For any holomorphic vector bundle over a compact base $b(\mathcal{H}_E) \subset C^{\infty}(E \otimes \overline{E})$ is a submanifold, and $b$ is a diffeomorphism between $\mathcal{H}_E$ and its image.
\end{thm}
\section{Basics of Fr\'echet manifolds}
Our reference for this material is Hamilton's paper \cite{Hm} (of which we will need very little). Suppose $F,G$ are Fr\'echet spaces over $\mathbb{R}$, and $U \subset F$ is open. A map $f: U \to G$ is of class $C^1$ if the directional derivatives
\begin{equation}\label{dosuno}
df(x,\xi)= \lim_{t \to 0} \frac{f(x+t\xi)-f(x)}{t}
\end{equation}
exist for all $x\in U$, $\xi \in F$, and define a continuous function $df: U \times F \to G$, called the differential of $f$. If $df$ is $C^1$, we say $f$ itself is $C^2$, and so on. The map $f$ is smooth if it is of class $C^k$ for all $k=1, 2,\dots$
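For instance (a simple illustration, not needed in what follows), let $X$ be a compact manifold, $F=G=C^{\infty}(X)$, and $f(u)=e^{u}$. Then
\begin{equation*}
df(u,\xi)=\lim_{t \to 0} \frac{e^{u+t\xi}-e^{u}}{t}=e^{u}\xi,
\end{equation*}
which is continuous on $C^{\infty}(X) \times C^{\infty}(X)$; iterating, one checks that $f$ is smooth in the above sense.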
Fr\'echet manifolds are Hausdorff topological spaces, glued together from open subsets of Fr\'echet spaces by smooth diffeomorphisms. One defines smoothness of maps between Fr\'echet manifolds by pulling back to open subsets of Fr\'echet spaces. The tangent bundle $TM$ of a Fr\'echet manifold $M$ is also a Fr\'echet manifold, in fact a Fr\'echet bundle over $M$. A smooth map $F:M \to N$ of Fr\'echet manifolds induces a smooth homomorphism $TM \to TN$.
If $M$ is a Fr\'echet manifold, a subset $N \subset M$ is called a submanifold if the following holds: every $x \in N$ has a neighborhood $U \subset M$, there are a Fr\'echet space $F$ and a closed subspace $G \subset F$, and there is a diffeomorphism $\Phi$ of $U$ onto a neighborhood $V$ of $ 0 \in F$ such that $\Phi (U \cap N)=V \cap G$. The restrictions to $N$ of all such $\Phi$ define a manifold structure on $N$. Here our terminology deviates from Hamilton's. First, he requires that $N \subset M$ be a closed set, while we do not. Accordingly, we call $N \subset M$ a closed submanifold if it is a closed subset and a submanifold. Second, in his definition Hamilton stipulates that the subspace $G \subset F$ have a closed complement. If this also holds, we call $N$ a direct submanifold; but the distinction will be of no consequence, as all submanifolds in this paper will be finite dimensional, hence automatically direct.
\section{Bergman Kernels}
Fix a holomorphic vector bundle (of finite rank!) $E \to X$ over a compact base. In this section we will introduce two notions of Bergman kernels, the full Bergman kernel and its restriction to the diagonal that we call Bergman section.
If $Y$ is any complex manifold, we will write $\overline{Y}$ for the complex manifold with the same underlying differential manifold but opposite complex structure. Thus, a function $f$ on an open subset of $\overline{Y}$ is holomorphic if $\overline{f}$, viewed as a function on $Y$, is holomorphic. If $Y$ happens to be a complex vector space as well, then $\overline{Y}$ too acquires a complex vector space structure: As real vector spaces $\overline{Y}=Y$, but multiplication with $\lambda \in \mathbb{C}$ in $\overline{Y} $ is the same as multiplication with the complex conjugate $\overline{\lambda}$ in $Y$. Applying this construction in families, we see that if $\pi: E \to X$ is a holomorphic vector bundle, $\pi: \overline{E} \to \overline{X}$ becomes a holomorphic vector bundle. Keep in mind that $\mathcal{O}(E) = \mathcal{O}(\overline{E})$ as real vector spaces, but again, the complex structures are opposite. It follows that their duals are not equal, but there is a real linear isomorphism
${\mathcal O}(E)^*\to{\mathcal O}(\overline E)^*$
that associates with a linear form $\sigma$ on ${\mathcal O}(E)$ its conjugate $\overline\sigma$, a linear form on ${\mathcal O}(\overline E)$.
Similarly, if $x\in X$, there is a real linear isomorphism $E_x^*\to \overline E_x^*$ that sends a linear form
$\xi\in E_x^*$ to its conjugate $\overline\xi\in\overline E_x^*$.
Occasionally we will view $\overline E$ as a bundle over $X$, not $\overline X$. When we do so, we will write
$\overline E\to X$; note that this is just a smooth complex vector bundle.
(Full) Bergman kernels of $E \to X$ are holomorphic sections of the external tensor product
$E \boxtimes \overline E \to X \times \overline{X}$. Here
\begin{equation}\nonumber
E \boxtimes \overline E=\coprod_{x \in X, y \in \overline{X}} E_x \otimes \overline{E}_y,
\end{equation}
on which the obvious projection to $X \times \overline{X}$ and the local trivializations of $E$, $\overline{E}$ induce the structure of a holomorphic vector bundle. Commonly, Bergman kernels are associated with positive definite inner products on the space
$\mathcal{O}(E)$ of sections. It will be more profitable, though, to associate them with inner products on the dual space
$\mathcal{O}(E)^*$. Of course, positive definite (or nondegenerate) inner products on $\mathcal{O}(E)$
and $\mathcal{O}(E)^*$ are in bijective correspondence. But we will need to deal with degenerations, and they are easier to describe on $\mathcal{O}(E)^*$ than on $\mathcal{O}(E)$; they will be certain semidefinite inner products on $\mathcal{O}(E)^*$.
At the same time it turns out that Bergman kernels can be associated with arbitrary inner products on $\mathcal{O}(E)^*$.
We write $\mathcal{I}_E$ for the real vector space of inner products on $\mathcal{O}(E)^*$. Elements
$\langle \langle \, , \rangle \rangle \in \mathcal{I}_E $ are in bijective correspondence with pairs $(V, \langle \, , \rangle)$ of subspaces $V \subset \mathcal{O}(E)$ and non-degenerate inner products $\langle \, , \rangle$ on $V$ as follows. A pair
$(V,\langle \, ,\rangle)$ induces an inner product $\langle \, , \rangle'$ on the dual space $V^*$, a quotient of $\mathcal{O}(E)^*$. Pulling back $\langle \, , \rangle'$ by the quotient map $\mathcal{O}(E)^* \to V^*$ gives rise to the corresponding
$\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$.
We write
\begin{equation}\label{tresuno}
\langle \langle \, , \rangle \rangle= \delta(V, \langle \, , \rangle) \qquad \text{ and } \qquad \langle \langle \, , \rangle \rangle =\delta (\langle \, , \rangle) \quad\text{if } V=\mathcal{O}(E).
\end{equation}
When $V=\mathcal{O}(E),$ $\delta( \langle \, , \rangle)$ is just the dual inner product on $\mathcal{O}(E)^*$. Conversely, given $\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$, let
$$
N=\{\sigma \in \mathcal{O}(E)^*: \langle \langle \sigma, \cdot \rangle\rangle=0\}
$$
denote its kernel, and $V \subset \mathcal{O}(E)$ the annihilator of $N$,
$$
V=\{s \in \mathcal{O}(E): \sigma(s)=0 \text{ for all } \sigma \in N\}.
$$
The inner product $\langle \langle \, , \rangle \rangle$ descends to a nondegenerate inner product $\langle \, , \rangle '$ on
$\mathcal{O}(E)^* / N$. Any $s\in V$ induces a linear form $\sigma \mapsto \sigma(s)$ on $\mathcal{O}(E)^*$, that factors through
$\mathcal{O}(E)^* \to \mathcal{O}(E)^* / N$. Hence it can be represented as
$\sigma \mapsto \langle \langle \sigma, \tau \rangle \rangle$ with some $ \tau= \tau_s \in \mathcal{O}(E)^*$ determined modulo
$N$. The nondegenerate inner product on $V$ given by $\langle s, t \rangle = \langle \langle \tau_t, \tau_s\rangle\rangle$ then
satisfies $\delta( V, \langle \, , \rangle)=\langle\langle \, ,\rangle \rangle$.
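A simple special case may help fix ideas: if $V=\mathbb{C}s$ is spanned by a single section $s$ with $\langle s,s\rangle=1$, then
\begin{equation*}
\delta(V,\langle \, ,\rangle)=\langle\langle \, ,\rangle\rangle, \qquad \langle\langle \sigma,\tau\rangle\rangle=\sigma(s)\overline{\tau(s)}, \quad \sigma,\tau\in\mathcal{O}(E)^{*},
\end{equation*}
a positive semidefinite inner product whose kernel $N$ consists of the linear forms vanishing on $s$.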
We are ready to define the Bergman kernel of $\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$. If ev$_x$ denotes the evaluation map
$$
\textnormal{ev}_x: \mathcal{O}(E) \ni s \mapsto s(x) \in E_x,\qquad x \in X,
$$
the Bergman kernel in question is a section $K$ of $E \boxtimes \overline{E}$ with the property that
\begin{equation}\label{tresdos}
\langle\langle \xi \textnormal{ev}_x, \eta \,\textnormal{ev}_y \rangle\rangle=(\xi \otimes \overline{\eta}) K(x,y) \qquad \textnormal{for all } x,y \in X, \, \xi \in E_{x}^*, \,\eta \in E_y^*.
\end{equation}
One way to see that such $K(x,y)$ exists is to note first that elements $\sum a_j \otimes b_j$ of a tensor product
$A \otimes B$ determine a linear form $\varphi$ on $A^*\otimes B^*$,
$$
\varphi \left(\sum \alpha_i \otimes \beta_i \right)= \sum_{i, j} \alpha_i(a_j)\beta_i(b_j), \qquad \alpha_i \in A^*, \beta_i \in B^*,
$$
and when $A,B$ are finite dimensional, all linear forms $\varphi$ on $A^*\otimes B^*$ arise in this way; second, that for
fixed $x,y$, the left hand side of \eqref{tresdos} determines a linear form on $E_x^* \otimes \overline{E}^*_y$. Clearly, $K(x,y)$ is unique.
Since the left hand side of \eqref{tresdos} depends holomorphically on $\xi\in E^*,\overline\eta\in\overline E^*$, it follows that
$K$ is a holomorphic section of $E \boxtimes \overline{E}$. If $\langle \langle \, , \rangle \rangle =\delta (V, \langle \, ,\rangle)$, then $K$ can be represented in terms of a signed orthonormal basis of $(V, \langle \, , \rangle)$. Suppose a basis
$s_1,\dots,s_{p+q} \in V$ satisfies
\[
\langle s_i, s_j \rangle= \begin{cases}
\delta_{ij} & \textnormal{ if } i \leq p \\
-\delta_{ij} & \textnormal{ if } i > p
\end{cases}
\quad(\textnormal{Kronecker symbol}).
\]
Then
\begin{equation}\label{trestres}
K(x,y)= \sum_{j=1}^p s_j (x) \otimes s_j(y) - \sum_{j=p+1}^{p+q} s_j(x) \otimes s_j(y).
\end{equation}
Indeed, choose $\sigma_1,\dots, \sigma_{p+q} \in \mathcal{O}(E)^*$ so that their classes $[\sigma_j] \in V^*$ form a dual basis; and complete them by elements of the kernel $N$ of $\langle\langle\,, \rangle\rangle$
to a basis $ \sigma_1,\dots, \sigma_r$ of $\mathcal{O}(E)^*$. Write
$$
\xi \textnormal{ev}_x=\sum_1^r \alpha_i \sigma_i, \quad \eta \,\textnormal{ev}_y= \sum_1^r \beta_i \sigma_i, \qquad \alpha_i, \beta_i \in \mathbb{C}.
$$
We have $\xi s_j(x)=\xi \textnormal{ev}_x s_j=\alpha_j$ for $j=1,\dots,p+q$. Hence
$$
\langle \langle \xi \textnormal{ev}_x, \eta \,\textnormal{ev}_y \rangle \rangle=\langle \langle \sum_1^r \alpha_j \sigma_j, \sum_1^r \beta_j \sigma_j \rangle \rangle = \sum_1^p \alpha_j \overline{\beta}_j - \sum_{p+1}^{p+q} \alpha_j \overline{\beta}_j=(\xi \otimes \overline{\eta}) K(x,y)
$$
if $K$ is given by \eqref{trestres}.
The familiar reproducing property of $K$ carries over to the generality we consider here. If $\eta\in E_y^*$, write
$\overline\eta K$ for $(\text{id}_E\otimes\overline\eta) K(\cdot,y)\in V$.
\begin{lem}
If $s\in V$ and $\eta\in E_y^*$, then
\begin{equation}
\eta s(y)=\langle s,\overline\eta K\rangle.
\end{equation}
\end{lem}
\begin{proof}
With $s_j$ as in (3.3) and $a_j\in\mathbb C$ let $s=\sum_j a_j s_j$. The right hand side of (3.4) is
$$
\sum_{j=1}^{p+q}a_j\Big\langle s_j,\sum_{i=1}^p s_i\overline\eta s_i(y)-\sum_{i=p+1}^{p+q}s_i\overline\eta s_i(y)\Big\rangle=
\sum_{j=1}^{p+q} a_j\eta s_j(y)= \eta s(y),
$$
as claimed.
\end{proof}
The restriction of $K$ to the diagonal
\begin{equation}\label{trescuatro}
k(x)= K(x,x)=\sum_1^p s_j(x) \otimes s_j(x) -\sum_{p+1}^{p+q} s_j(x) \otimes s_j(x) \in E_x \otimes \overline{E}_x
\end{equation}
is also of interest. It is a smooth section of $E \otimes \overline{E} \to X$ that we call the Bergman section. By pairing with
$E^*\otimes \overline E^*$ it induces a
not necessarily positive definite hermitian metric on the latter. If this induced metric is nondegenerate, by duality it further
induces a nondegenerate, but again not necessarily definite, hermitian metric on $E\otimes\overline E$. One version of the
Fubini--Study map, different from the map introduced in Section 1, associates with a positive definite inner product
$\langle\,, \rangle\in{\mathcal H}_E$ on ${\mathcal O}(E)$ the Bergman section $k$ of the dual inner product
$\langle\langle\,, \rangle\rangle$ on ${\mathcal O}(E)^*$. Often it is convenient to
define the Fubini--Study map to send $\langle\,, \rangle$ to the hermitian metric of $E\otimes\overline E$ induced by $k$, necessarily
positive definite this time. This has the advantage over the notion defined in the Introduction, through (1.1), that it is independent of
the choice of any hermitian metric $h$ on $E$, and works not only for line bundles. Nonetheless, in this paper we will stick
with the map FS as defined in the Introduction, which is more natural for our purposes. (5.1) and (5.8) will clarify how the two variants of
the Fubini--Study map (i.e. the two variants of the function $\kappa$ in (1.1) and (5.1)) are related.
\section{Basic properties}
Both the Bergman kernel $K$ and the Bergman section $k$ have an obvious symmetry property. There is an involutive
$\mathbb{R}$--homomorphism $\iota:E \boxtimes \overline{E} \to E \boxtimes \overline{E}$ covering the map
$X\times\overline X\ni(x,y)\to(y,x)\in X\times\overline X$,
determined by
\begin{equation}
\iota(e \otimes f)= f \otimes e.
\end{equation}
We write
${\mathcal O}(E\boxtimes\overline E)^\text{herm}$ for those sections $L\in{\mathcal O}(E\boxtimes\overline E)$ that satisfy $\iota L(x,y)=L(y,x)$.
(3.3) shows that $K$ is in this space. Furthermore, $\iota$ leaves invariant the bundle
$E\otimes\overline E\to X$, the restriction of
$E\boxtimes\overline E$ to the diagonal; its fixed points form a real subbundle
$(E \otimes \overline{E})^\text{herm} \subset E \otimes \overline{E}$. In a
local frame $s_1,\dots,s_r$ of $E$, elements of $E_x \otimes \overline{E}_y$ are of the form
$$
t=\sum_{i,j}(a_{ij}s_i(x))\otimes s_j(y)=\sum_{i,j} s_i(x) \otimes (\overline{a_{ij}} s_j(y)), \qquad a_{ij} \in \mathbb{C}.
$$
The involution corresponds to $a_{ij} \mapsto \overline{a_{ji}}$, and elements of $(E \otimes \overline{E})^{\text{herm}}_x$ to hermitian matrices $(a_{ij})$. It follows that $(E \otimes \overline{E})^{\textnormal{herm}}$ is indeed a subbundle, locally isomorphic to the trivial bundle over $X$ whose fibers consist of
$r \times r$ hermitian matrices. We write $C^{\infty}(E \otimes \overline{E})$ and $C^{\infty}(E \otimes \overline{E})^{\textnormal{herm}}$ for the Fr\'echet spaces of smooth sections of $E \otimes \overline{E}$, respectively, $(E \otimes \overline{E})^{\textnormal{herm}}$. \eqref{trescuatro} shows that $k \in C^{\infty}(E \otimes \overline{E})^{\textnormal{herm}}$.
Define linear maps
\begin{equation}
B:\mathcal I_E\to {\mathcal O}(E\boxtimes\overline E)^\text{herm}\qquad\text {and}\qquad
\beta: \mathcal{I}_E \to C^{\infty}(E \otimes \overline{E})^{\textnormal{herm}}
\end{equation}
by sending $\langle\langle \, , \rangle \rangle \in \mathcal{I}_E$ to its Bergman kernel $K$,
respectively Bergman section $k$, of (3.2), (3.5).
\begin{thm}
(a) $B$ is an isomorphism of topological vector spaces and $\beta$ is an isomorphism between the topological vector spaces
$\mathcal{I}_E$ and $\beta(\mathcal{I}_E)\subset C^{\infty}(E \otimes \overline{E})^{\textnormal{herm}}$.
(b) $\langle \langle \, , \rangle \rangle\in\mathcal I_E$ is positive semidefinite if and only if with $K=B(\langle \langle \, , \rangle \rangle)$
and arbitrary choice of finitely many $x_j\in X$ and $\xi_j\in E_{x_j}^*$ the hermitian matrix
\begin{equation*}
\big((\xi_i\otimes\overline{\xi_j})K(x_i,x_j)\big)_{i,j}
\end{equation*}
is positive semidefinite.
\end{thm}
\begin{proof}
(a) It will suffice to prove that $B,\beta$ are vector space isomorphisms, since on a finite dimensional $\mathbb R$--vector space
there is only one separated vector space topology.
We start with $B$. If $\dim{\mathcal O}(E)=d$, the space $\mathcal I_E$ is $d^2$ real dimensional. But so is the space
${\mathcal O}(E\boxtimes\overline E)^\text{herm}$, since with a basis $s_1,\dots,s_d$ of ${\mathcal O}(E)$ its elements can be written
$\sum_{i,j} a_{ij} s_i(x)\otimes s_j(y)$, where $a_{ij}=\overline{a_{ji}}$. Hence to prove isomorphism it suffices to check that
$\text{Ker} \,B=0$. If $K=B(\langle\langle\, , \rangle\rangle)=0$, (3.2) gives
\begin{equation}
\langle\langle \xi \textnormal{ev}_x, \eta\, \textnormal{ev}_y \rangle\rangle=0.
\end{equation}
Now the linear forms
$\xi \textnormal{ev}_x \in \mathcal{O}(E)^*$ for all $x, \xi$ span $\mathcal{O}(E)^*$, because if $s \in \mathcal{O}(E)$ annihilates all of them, then $s=0$. Therefore (4.3) implies $\langle \langle \, , \rangle \rangle=0$, and $B$ is indeed an isomorphism.
As to $\beta$, the point again is that it is injective. For suppose that $k=\beta (\langle \langle \, , \rangle \rangle)=0$. Thus the
Bergman kernel $K \in \mathcal{O}(E \otimes \overline{E})$ of $\langle \langle \, , \rangle \rangle$
vanishes when restricted to the maximally real submanifold
$\{(x,x): x \in X\} \subset X \times \overline{X}$.
This implies that $K=B(\langle \langle \, , \rangle \rangle) = 0$ and $\langle \langle \, , \rangle \rangle=0$ by what we have already proved.
(b) If $a_j\in\mathbb C$, by the definition of $K$, (3.2),
\[
\langle \langle \sum_j a_j\xi_j\text{ev}_{x_j} , \sum_j a_j\xi_j\text{ev}_{x_j}\rangle \rangle=
\sum_{i,j}a_i\overline {a_j}(\xi_i\otimes\overline{\xi_j})K(x_i,x_j).
\]
As we saw in the first part of the proof, $\xi\text{ev}_x$ for all $x,\xi$ span ${\mathcal O}(E)^*$, whence the claim follows.
\end{proof}
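Part (b) can be illustrated in the scalar case, where the matrices in question are of the Gram type $S\,\mathrm{diag}(\varepsilon)\,S^*$ with $S_{ij}=s_j(x_i)$. The following numpy sketch (the monomial sections and sample points are our own choices) confirms that the matrix is positive semidefinite exactly for the definite choice of signs:

```python
import numpy as np

# Theorem 4.1(b), scalar case: the matrix ( K(x_i, x_k) )_{i,k} equals
# M = S diag(eps) S^*, where S_{ij} = s_j(x_i) and eps_j are the signs of the
# inner product.  It is positive semidefinite for every choice of points
# exactly when all eps_j = +1, i.e. when <<,>> is positive semidefinite.
def bergman_matrix(points, eps):
    S = np.array([[z**j for j in range(len(eps))] for z in points])  # s_j = z^j
    return S @ np.diag(eps) @ S.conj().T

pts = np.array([0.0, 0.5, 1.0, 1.0 + 1.0j])
M_pos = bergman_matrix(pts, [1, 1, 1])       # signature (3,0): semidefinite
M_ind = bergman_matrix(pts, [1, 1, -1])      # signature (2,1): indefinite

eig_pos = np.linalg.eigvalsh(M_pos)
eig_ind = np.linalg.eigvalsh(M_ind)
assert eig_pos.min() > -1e-8          # psd Gram matrix
assert eig_ind.min() < -1e-8          # a negative eigenvalue appears
```

By Sylvester's law of inertia, since $S$ has full column rank for distinct points, the inertia of $M$ is that of $\mathrm{diag}(\varepsilon)$ padded with zeros, which is what the two assertions record.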
\begin{proof}[Proof of Theorem 1.2]
The map $b: \mathcal{H}_E \to C^{\infty} (E \otimes \overline{E})$ of the theorem can be expressed through $\beta$ and the duality map $\delta$, see (3.1), as $b=\beta \circ \delta | \mathcal{H}_E$. Indeed, if $\langle \, , \rangle \in \mathcal{H}_E$, a
positive definite inner product on $\mathcal{O}(E)$, and $\langle \langle \, , \rangle \rangle= \delta (\langle \, , \rangle)$ is the dual positive definite inner product on $\mathcal{O}(E)^*$, comparing \eqref{unodos} with \eqref{trescuatro} shows that the Bergman
section $k$ associated with $\langle \, , \rangle$ is the same as the one associated with $\langle \langle \, , \rangle \rangle$. Since
$\delta | \mathcal{H}_E$ is a diffeomorphism on its image, which in turn is an open subset of $\mathcal{I}_E$, Theorem 4.1(a) implies
that $b(\mathcal{H}_E)$ is indeed a submanifold of $C^{\infty}(E \otimes \overline{E})^{\textnormal{herm}}$, and $b$ is a
diffeomorphism between $\mathcal{H}_E$ and $b(\mathcal{H}_E)$.
\end{proof}
\section{The variational approach}
Just as in more traditional setups, Bergman sections $k$ can be related to solutions of variational problems, and this will bring us into contact with the Fubini--Study map of the Introduction. Consider a hermitian holomorphic vector bundle
$(E,h) \to X$, with $X$ still compact. As a fiberwise bilinear map, $h:E \oplus \overline{E} \to \mathbb{C}$ induces a fiberwise linear map
$$
H: E \otimes \overline{E} \to \mathbb{C}, \qquad H \big(\sum_j e_j \otimes f_j \big)=\sum h(e_j, f_j).
$$
If $k=\beta (\langle \langle \, , \rangle \rangle) \in C^{\infty}(E \otimes \overline{E})^{\textnormal{herm}}$ is the Bergman section of $\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$, we define a smooth function
\begin{equation}\label{cincouno}
\kappa=H \circ k: X \to \mathbb{R},
\end{equation}
and call it the Bergman function of $\langle \langle \, , \rangle \rangle$.
By associating $\kappa$ with $\langle \langle \, , \rangle \rangle$ we obtain a map
\begin{equation}\label{cincodos}
L:\mathcal{I}_E \to C^{\infty} (X).
\end{equation}
\begin{lem} $L$ is continuous and linear. It is also injective if $E$ is a line bundle. \end{lem}
\begin{proof}
The first statement is immediate from Theorem 4.1. As to the second, suppose $E$ is a line bundle and
$L(\langle \langle \, , \rangle \rangle)=\kappa=0$. By \eqref{trescuatro} the Bergman section is
$k=\sum_{1}^{p} s_j \otimes s_j-\sum_{p+1}^{p+q}s_j \otimes s_j$, with suitable linearly independent $s_j\in \mathcal{O}(E)$. Over some open $U$ fix a nonvanishing holomorphic section $e$ of $E|U$ and write $s_j=\varphi_j e,\, \varphi_j \in \mathcal{O}(U)$. On $U$
$$
0=\kappa=H\circ k= h(e,e) \Big ( \sum_{1}^{p}|\varphi_j|^2 - \sum_{p+1}^{p+q}|\varphi_j|^2 \Big).
$$
This implies $k=(e \otimes e) \big( \sum_{1}^{p}|\varphi_j|^2 - \sum_{p+1}^{p+q}|\varphi_j|^2 \big)=0$ on $U$, hence on $X$. By
Theorem 4.1 $\langle \langle \, , \rangle \rangle=0$ follows.
\end{proof}
Consider now an $\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$, that we write as $\delta(V, \langle \, , \rangle)$, see $(3.1)$. Thus $\langle \, ,\rangle$ is a nondegenerate inner product on $V\subset \mathcal{O}(E)$, say, of signature ($p,q$). Define for
$x \in X$ and for $l=1, 2, \dots,p$
\begin{equation}\label{cincotres}
\kappa_l(x)=\min_{W}\max_{s \in W\setminus\{0\}} \frac{h(s(x),s(x))}{\langle s, s \rangle} \geq 0,
\end{equation}
the minimum taken over $l$ dimensional subspaces $W \subset V$ on which $\langle \, , \rangle$ is positive definite; and for $l=p+1,\dots, p+q$
$$
\kappa_l(x)=\max_W \min_{s \in W\setminus \{0\}} \frac{h(s(x),s(x))}{\langle s, s \rangle} \leq 0,
$$
the maximum taken over $l -p$ dimensional subspaces $W \subset V$ on which $\langle \, , \rangle$ is negative definite. Since the rank of the hermitian form
\begin{equation}
h_x:V \times V\ni (s,t) \mapsto h(s(x),t(x))
\end{equation}
is $\le\text{rk}\, E$, when $l \leq p-\text{rk}\, E$ we can choose $W$ in (5.3) contained in the kernel of $h_x$, which shows
\begin{equation}\label{cincocuatro}
\kappa_l=0 \quad\textnormal{if}\quad 1 \leq l \leq p-\text{rk}\, E, \quad\textnormal{and similarly if} \quad 1 \leq l-p \leq q-\text{rk}\, E.
\end{equation}
\begin{thm} With notation as above,
\begin{equation}\label{cincocinco}
\kappa=H \circ k = \sum_{1}^{p+q} \kappa_l.
\end{equation}
In particular, suppose $E$ is a line bundle. If $\langle \langle \, , \rangle \rangle$ is indefinite,
\begin{equation}\label{cincoseis}
\kappa=\kappa_p+\kappa_{p+q};
\end{equation}
if $\langle \langle \, , \rangle \rangle$ is positive semidefinite (but not identically 0), then
\begin{equation}\label{cincosiete}
\kappa(x)=\max_{s \in V \setminus \{0\}} \frac{h(s(x),s(x))}{ \langle s, s \rangle}.
\end{equation}
\end{thm}
Thus $\kappa$ as introduced in (1.1) is the special case of $\kappa$ of (5.1), when $E$ is a very ample line bundle and
$\langle\,, \rangle\in{\mathcal H}_E$ (equivalently, its dual $\langle \langle \, , \rangle \rangle$ is positive definite).
\begin{proof} Choose a basis $s_1,\dots,s_{p+q} \in V$ that diagonalizes both $\langle \, ,\rangle$ and the positive semidefinite inner product $h_x$ of (5.4). By scaling and reordering we can arrange that
\[
\langle s_i, s_j \rangle= \begin{cases}
\delta_{ij} & \textnormal{ if } i=1,\dots, p \\
-\delta_{i j} & \textnormal{ if } i =p+1,\dots, p+q
\end{cases}
\quad \textnormal{ and}\quad h(s_i(x),s_j(x))=c_i \delta_{ij},
\]
with $0 \leq c_1 \leq\dots \leq c_p$, and $0 \leq c_{p+1} \leq \dots \leq c_{p+q}$. As
$k=\sum_1^p s_j \otimes s_j -\sum_{p+1}^{p + q} s_j \otimes s_j$,
\begin{equation} \label{cincoocho}
\kappa(x)=\sum_1^p h(s_j(x), s_j(x))-\sum_{p+1}^{p+q} h(s_j(x), s_j (x))=\sum_1^p c_j -\sum_{p+1}^{p+q}c_j.
\end{equation}
We will next show that $c_l=\pm\kappa_l$. If $\langle\,, \rangle$ is positive definite, this follows from Courant's minimax theorem
\cite{Co}. In general it can be derived from Phillips's generalization of the minimax theorem \cite[Theorem 1]{P}; but since a direct
proof following Courant's is quite straightforward, we just write it out.
Thus, suppose $W \subset V$ is $l$ dimensional and $\langle \, ,\rangle | W$ is positive definite. Since the span of $s_j$, for
$j \geq l$, has codimension $l-1$ in $V$, it contains a nonzero vector
$s=\sum_l^{p+q} \alpha_j s_j$ that is also in $W$. Thus
\begin{equation*}
\frac{h(s(x),s(x))}{\langle s, s \rangle}= \frac{\sum_l^{p+q}c_j|\alpha_j|^2}{\sum_l^p |\alpha_j|^2-\sum_{p+1}^{p+q}|\alpha_j|^2} \geq \frac{c_l \sum_l^p|\alpha_j|^2}{\sum_l^p |\alpha_j|^2}=c_l
\end{equation*}
(note that $\sum_l^p|\alpha_j|^2 \geq \langle s, s \rangle > 0)$, and $\max_{W\setminus\{0\}} h(s(x), s(x)) / \langle s, s \rangle \geq c_l$.
But if $W$ is the span of $s_j, j \leq l$, writing a general element of $W$ as $s= \sum_1^p \beta_j s_j \neq 0$,
$$\max_{s \in W\setminus\{0\}} \frac{h(s(x),s(x))}{\langle s, s \rangle}=\max_{\beta_j} \frac{\sum_1^{l}c_j|\beta_j|^2}{\sum_1^l |\beta_j|^2}= c_l.$$
We conclude that $\kappa_l(x)=c_l$ if $1 \leq l \leq p$. One shows similarly that $\kappa_l(x)=-c_l$ if $p+1 \leq l \leq p+q$
(or with $- \langle \langle\, , \rangle \rangle \in \mathcal{I}_E$ one applies what has already been proved). In light of \eqref{cincoocho}, \eqref{cincocinco} follows.
If $E$ is a line bundle, because of \eqref{cincocuatro} $\kappa_l\neq 0$ only if $l=p$ or $l=p+q$. For this reason, when
$\langle \langle \, , \rangle \rangle$ is indefinite, so that $p,q \geq 1$, \eqref{cincocinco} reduces to \eqref{cincoseis}. When
$\langle \langle \, , \rangle \rangle \neq 0$ is positive semidefinite, $q=0$, and the right hand side of \eqref{cincocinco} consists of
a single term $\kappa_p$. For $l=p$, in \eqref{cincotres} the only choice for $W$ is $V$ itself, and \eqref{cincocinco} specializes to \eqref{cincosiete}.
\end{proof}
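In the positive definite case the identification $\kappa_l(x)=c_l$ is Courant's minimax theorem, which is easy to test numerically. The sketch below (random data, our own choices) verifies that the minimum in the minimax formula is attained on the span of the lowest eigenvectors, that every other subspace of the same dimension does no better, and that a rank-deficient evaluation form forces the lowest values $\kappa_l$ to vanish, as in \eqref{cincocuatro}:

```python
import numpy as np

# Courant minimax, positive definite case: in a <,>-orthonormal basis the
# numbers kappa_l(x) are the increasingly ordered eigenvalues c_1 <= ... <= c_p
# of the psd matrix H with H_{ij} = h(s_i(x), s_j(x)).
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 5))      # evaluation form of rank <= 2 (mimics rk E = 2)
H = A.T @ A                      # psd, p = 5
c, U = np.linalg.eigh(H)         # eigenvalues sorted increasingly

def max_rayleigh(W):
    # max of <Hs, s>/<s, s> over the column span of W
    Q, _ = np.linalg.qr(W)
    return np.linalg.eigvalsh(Q.T @ H @ Q).max()

# rank H <= 2 forces the three smallest eigenvalues (kappa_1..kappa_3) to vanish
assert np.allclose(c[:3], 0, atol=1e-8)

l = 4
assert np.isclose(max_rayleigh(U[:, :l]), c[l - 1])   # minimum attained here
for _ in range(50):                                   # other subspaces: no better
    W = rng.normal(size=(5, l))
    assert max_rayleigh(W) >= c[l - 1] - 1e-9
```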
\begin{cor}The only semidefinite $\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$ for which $\kappa=0$ is the zero inner product.
\end{cor}
\begin{proof} Say $\langle \langle \, , \rangle \rangle$ is positive semidefinite, and nonzero, so that the signature of the corresponding $\langle \, , \rangle$ on $V$ is $(p,0)$, $p\geq1$. Each $\kappa _l \geq 0, \, l=1,\dots, p,$ and $\kappa_p(x)=\max_{s \in V\setminus\{0\}} h(s(x), s(x))/\langle s, s \rangle$.
This cannot be identically zero since $V$ contains a nonzero $s$. Hence $\kappa=\sum \kappa_l \neq 0$.
\end{proof}
\section{The Fubini--Study map}
We now specialize to a very ample hermitian holomorphic line bundle $(E,h) \to X$, and denote by $-i\omega$ the curvature form of $h$. The dual of an $\langle \, , \rangle \in \mathcal{H}_E$ is a positive definite inner product
$\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$. If we first associate with $\langle \, , \rangle$ the Bergman section
$k=\beta (\langle \langle \, , \rangle \rangle)$, then $\kappa=H \circ k$ as in (5.1), the correspondence
$\mathcal{H}_E \ni \langle \, , \rangle \mapsto \kappa\in C ^{\infty}(X)$ will be smooth by Lemma 5.1. Using \eqref{cincosiete} to compute $\kappa$ we see that $\kappa$ is everywhere positive. The Fubini--Study map FS: $\mathcal{H}_E \to C^{\infty}(X)$ sends $\langle \, , \rangle$ to $\log \kappa$. It is standard that FS maps into $\mathcal{H}_{\omega}=\{u \in C^{\infty}(X): \omega +i\partial \overline{\partial} u > 0 \}$, an open subset of $C^{\infty}(X)$. Indeed, with a nonvanishing holomorphic local section $e$ of $E$, writing an orthonormal basis of $(\mathcal{O}(E), \langle \, , \rangle)$, as $s_j= \varphi_j e$, $j=1,\dots, d$, \eqref{cincouno} gives $\kappa(x)=h(e(x), e(x)) \sum |\varphi_j(x)|^2$. Hence
\begin{equation}\label{seisuno}
\omega+i \partial \overline \partial \log \kappa= \omega + i \partial \overline{\partial} \log h (e, e)+ i\partial \overline{\partial} \log \sum | \varphi_j|^2=i \partial\overline{\partial} \log \sum |\varphi_j|^2.
\end{equation}
For fixed $x \in X$ we can choose the basis $s_j$ so that $s_1$ is orthogonal to the subspace of sections vanishing at $x$. Then $\varphi_j(x)=0$ for $j \geq 2$ (and $\varphi_1(x) \neq 0$). At $x$ we obtain
\begin{equation}\label{seisdos}
i\partial \overline{\partial} \log \sum | \varphi_j|^2= i \sum_{j\ge 2} \partial \varphi_j \wedge\overline{\partial \varphi_j} / |\varphi_1|^2.
\end{equation}
This latter $(1,1)$ form is positive: as $E$ is very ample, for any nonzero tangent vector $v \in T^{10}_x X$ there is an
$s\in \mathcal{O}(E)$ that vanishes at $x$, to first order in the direction $v$, i.e., if $s=\varphi e$ then $\varphi(x)=0$,
$\partial \varphi (v) \neq 0$; and this $s$, resp. $\varphi$, must be in the span of $s_j$, resp. $\varphi_j$, $j \geq 2$.
This computation proves something more general. Let us say that a subspace $V \subset \mathcal{O}(E)$ is rather ample if for every $x \in X$ and nonzero $v \in T_x^{10} X$ there are sections $s, t \in V$ such that $s(x) \neq 0$ but $t$ vanishes at $x$, to first order in the direction $v$. Geometrically this means that $X \ni x \mapsto V \cap \text{Ker}\,\text{ev}_x \in \mathbb{P}(V)$ defines an immersion into the projective space of hyperplanes in $V$.
\begin{lem} Consider a subspace $V \subset \mathcal{O}(E)$, and a positive definite inner product $\langle \, , \rangle$ on it. The Bergman function $\kappa$ associated with $\langle \langle \, , \rangle \rangle=\delta (V, \langle \, , \rangle)$, see \eqref{cincouno}, satisfies $\log \kappa \in \mathcal{H}_{\omega}$ if and only if $V$ is rather ample.
\end{lem}
\begin{proof}
If $V$ is rather ample, then $\kappa >0$, and the computation \eqref{seisuno}, \eqref{seisdos} above, with $\mathcal{O}(E)$ replaced by $V$, gives $\log \kappa \in \mathcal{H}_{\omega}$. Conversely, if $s_j$ form an orthonormal basis of $(V, \langle \, , \rangle)$ and $\kappa(x)= \sum h(s_j(x), s_j(x))>0$ for all $x \in X$,
then for every $x \in X$ there is an $s=s_j \in V$ such that $s_j(x) \neq 0$. Suppose furthermore $\log \kappa \in \mathcal{H}_{\omega}$. Given $x \in X$, if the orthonormal basis is chosen so that $s_j(x)=0$ for $j \geq 2,$ then \eqref{seisuno}, \eqref{seisdos} show that for every nonzero $v \in T_x^{10} X$ one of the $s_j$, $j \geq 2$, will vanish to first order in the direction of $v$.
\end{proof}
Let us return to the Fubini--Study map, and prove that for a very ample line bundle $(E,h) \to X$ it is a diffeomorphism between $\mathcal{H}_E$ and $\text{FS}(\mathcal{H}_E)$, a submanifold of $\mathcal{H}_{\omega} \subset C^{\infty}(X):$
\begin{proof}[Proof of Theorem 1.1]
In Lemma 5.1 we associated with any $\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$ its Bergman function
$L(\langle \langle \, , \rangle \rangle) = \kappa \in C^{\infty}(X)$, and saw that $L$ was continuous, linear, and injective, hence an isomorphism of topological vector spaces $\mathcal{I}_E \to L(\mathcal{I}_E)$. Now FS can be obtained from $L$ by precomposing it with $\delta | \mathcal{H}_E$ (see (3.1)) and by postcomposing with $\log$. As $\delta | \mathcal{H}_E$ is a diffeomorphism to its image, an open subset of $\mathcal{I}_E$, the claim follows if we take into account Lemma 6.1.
\end{proof}
It is noteworthy, though, that in general FS: $\mathcal{H}_E \to \mathcal{H}_{\omega}$ is not proper, and FS$(\mathcal{H}_E) \subset \mathcal{H}_{\omega}$ is not a closed submanifold.
\begin{example} If $X=\mathbb{P}_1$, $E \to X$ is the $d$-th power of the hyperplane section bundle, $d \geq 4$, and $h$ is any hermitian metric on $E$, then FS$(\mathcal{H}_E)$ is not a relatively closed subset of ${\mathcal H}_{\omega}$.
\end{example}
\begin{proof} In the standard trivialization of $E$ over $\mathbb{C}=\mathbb{P}_1 \setminus \{\infty\}$, sections of $E$
correspond to polynomials of degree $\leq d$, and $h$ can be expressed with a suitable $a\in C^\infty(\mathbb C)$ as
$$
h(\zeta_1, \zeta_2)= a(z)\zeta_1 \overline{\zeta_2}, \qquad z\in\mathbb C,\quad\zeta_1, \zeta_2 \in E_z \approx \mathbb{C}.
$$
With $t \in (0,\infty)$ define $ \langle \, , \rangle_t \in \mathcal{H}_E$ so that $1, z,tz^2, z^3,\dots, z^d$ form an orthonormal basis.
The corresponding Bergman section is $k_t(z)=\sum' z^j \otimes z^j+ t^2 z^2 \otimes z^2$, where $\sum'$ refers to summation over
$j$ omitting $j=2$; and
$\kappa_t(z)=a(z)\big(\sum'|z|^{2j}+ t^2 |z|^4\big)$. As $t \to 0$, the limit of FS$(\langle \, ,\rangle_t)=\log \kappa_t$ is
$$
\log \kappa_0(z)=\log a(z)+\log {\sum}' |z|^{2j}.
$$
Since $z^j$ for $j\neq 2$ span a rather ample subspace $V \subset \mathcal{O}(E)$,
Lemma $6.1$ implies $\log \kappa_0 \in \mathcal{H}_{\omega}$. It is also in the closure of FS$({\mathcal H}_E)$.
However, $\log\kappa_0 \notin \textnormal{FS}(\mathcal{H}_E)$. Indeed, on the one hand, $L:\mathcal{I}_E \to C^{\infty}(X)$ is
injective by Lemma 5.1. On the other, if $V \subset \mathcal{O}(E)$ is the span of $z^j, j \neq 2$, an inner product is defined on it
by $\langle z^i, z^j \rangle= \delta_{ij}$, and $\delta(V, \langle \, , \rangle)=\langle \langle \, , \rangle \rangle$, then
$\kappa_0=L(\langle \langle \, , \rangle \rangle)$. Since this
$\langle \langle \, , \rangle \rangle$ is not positive definite, it is not in $\delta(\mathcal{H}_E)$. Hence
$\kappa_0 \notin L(\delta(\mathcal{H}_E))$ and $\log \kappa_0 \notin \text{FS} (\mathcal{H}_E)$.
\end{proof}
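The computation in the proof is easy to reproduce numerically. In the sketch below we take $d=4$ and, as a simplifying assumption of ours, $a\equiv 1$ on the chart (a different positive $a$ only shifts $\log\kappa$ by the fixed function $\log a$ and does not affect the convergence):

```python
import numpy as np

# Example 6.2, numerically: with d = 4 and a = 1 on the chart,
# kappa_t(z) = sum_{j != 2} |z|^{2j} + t^2 |z|^4.  As t -> 0 this converges,
# uniformly on compacts, to kappa_0(z) = sum_{j != 2} |z|^{2j}, which is still
# everywhere positive: log kappa_0 lies in H_omega, but not in FS(H_E).
d = 4
z = np.linspace(0.0, 2.0, 201)                       # sample radii |z|

def kappa(t):
    return sum(z**(2 * j) for j in range(d + 1) if j != 2) + t**2 * z**4

kappa0 = kappa(0.0)
assert kappa0.min() > 0                              # kappa_0 stays positive
for t in [1.0, 0.1, 0.01]:
    err = np.abs(np.log(kappa(t)) - np.log(kappa0)).max()
assert err < 1e-3                                    # sup-norm gap at t = 0.01
```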
Nonetheless, it is not hard to describe, for a general very ample hermitian line bundle $(E,h) \to X$, the closure of
FS$(\mathcal{H}_E) \subset \mathcal{H}_{\omega}$. This can be done most conveniently through the `dual' Fubini--Study map
$\Phi$, defined on certain positive semidefinite $\langle \langle \, , \rangle \rangle \in \mathcal{I}_E$.
\begin{defn} Let $\mathcal{A}_E \subset \mathcal{I}_E$ consist of positive semidefinite inner products
$\langle \langle \, , \rangle \rangle=\delta(V, \langle \, , \rangle)$ on $\mathcal{O}(E)^*$ for which $V$ is rather ample. If
$\langle \langle \, , \rangle \rangle \in \mathcal{A}_E$, define
$\Phi(\langle \langle \, , \rangle \rangle)= \log L(\langle \langle \, , \rangle \rangle)$.
\end{defn}
Thus $\Phi(\langle \langle \, , \rangle \rangle)=\log \kappa$, if $\kappa \in C^{\infty}(X)$ is the Bergman function of
$\langle \langle \, , \rangle \rangle$. By Lemma 6.1, $\kappa$ is positive and $\log \kappa \in \mathcal{H}_{\omega}$. We call
$\Phi : \mathcal{A}_E \to \mathcal{H}_{\omega}$ the dual Fubini--Study map, for
\begin{equation}\label{seistres}
\textnormal{FS}=\Phi \circ \delta | \mathcal{H}_E.
\end{equation}
\begin{thm}The dual Fubini--Study map $\Phi$ is proper and injective, and the closure of FS$(\mathcal{H}_E)$ in $\mathcal{H}_{\omega}$ is $\Phi(\mathcal{A}_E)$.
\end{thm}
\begin{proof} Since by Lemma 5.1 $L$ is injective, so is $\Phi$. To prove it is proper, take
$\langle \langle \, , \rangle \rangle_j \in \mathcal{A}_E, j \in \mathbb{N}$, and suppose
$\lim_{j \to \infty} \Phi (\langle \langle \, , \rangle \rangle_j)=\varphi \in \mathcal{H}_{\omega}$. Then
$\lim_{j \to \infty} L(\langle \langle \, , \rangle \rangle_j)=e^{\varphi} $ in $C^{\infty}(X)$. But $L(\mathcal{I}_E) \subset C^{\infty}(X)$ is a finite dimensional subspace, hence closed; and $L^{-1}: L(\mathcal{I}_E) \to \mathcal{I}_E$ is continuous. Therefore $e^{\varphi} \in L(\mathcal{I}_E)$, and $\langle \langle \, , \rangle \rangle_j$ converges to
$\langle \langle \, , \rangle \rangle= L^{-1}(e^{\varphi})\in \mathcal{I}_E$, necessarily positive semidefinite. Thus
$\kappa=e^{\varphi}= L(\langle \langle \, , \rangle \rangle)$, and $\log \kappa \in \mathcal{H}_{\omega}$.
Suppose $\langle \langle \, , \rangle \rangle= \delta(V, \langle \, , \rangle)$. Lemma 6.1 gives that $V$ is rather ample, whence
$\langle \langle \, , \rangle \rangle \in \mathcal{A}_E$ and $\varphi \in \Phi(\mathcal{A}_E)$. This shows that
$\Phi (\mathcal{A}_{E})$ is closed in $\mathcal{H}_{\omega}$ and $\Phi$ is proper.
By \eqref{seistres} the closure of FS$(\mathcal{H}_E)$ is contained in $\Phi (\mathcal{A}_E)$.
Conversely, suppose $\langle \langle \, , \rangle \rangle \in \mathcal{A}_E$. If $\langle \langle \, , \rangle \rangle_0 \in \mathcal{I}_E$ is positive definite, then $\langle \langle \, , \rangle \rangle + t \langle \langle \, , \rangle \rangle_0 \in \mathcal{I}_E$ is positive definite for all $t >0$, and of form $\delta (\langle \, , \rangle_t)$ with some $\langle \, , \rangle_t \in \mathcal{H}_E$. It follows that
$$\Phi (\langle \langle \, , \rangle \rangle)= \lim_{ t \to 0} \Phi (\langle \langle \, , \rangle \rangle + t \langle \langle \, , \rangle \rangle_0)=\lim_{t \to 0} \text{FS}(\langle \, , \rangle_t)$$
is in the closure of FS$(\mathcal{H}_E)$, and this closure is $\Phi(\mathcal{A}_E)$.
\end{proof}
Of course, in general $\Phi(\mathcal{A}_E) \subset \mathcal{H}_{\omega}$, even if closed, is not a submanifold. Still, FS$(\mathcal{H}_E)$ can be continued analytically to a closed submanifold of $\mathcal{H}_{\omega}$. For this one considers $\mathcal{P}_E \subset \mathcal{I}_E$ consisting of not necessarily semidefinite inner products $\langle \langle \, , \rangle \rangle$ whose Bergman function $\kappa =L(\langle \langle \, , \rangle \rangle)$ is everywhere positive and satisfies $\log \kappa \in \mathcal{H}_{\omega}$. This is the open subset of $\mathcal{I}_E$ that $\log L$ maps into $\mathcal{H}_{\omega}$; and $\log L(\mathcal{P}_E)$ is a closed submanifold (connected and analytic) of $\mathcal{H}_{\omega}$, containing FS$(\mathcal{H}_E)$ as an open subset.
% Source: https://arxiv.org/abs/2101.02037

\title{Matrix Differential Operator Method of Finding a Particular Solution to a Nonhomogeneous Linear Ordinary Differential Equation with Constant Coefficients}

\begin{abstract}
The article presents a matrix differential operator and a pseudoinverse matrix differential operator for finding a particular solution to nonhomogeneous linear ordinary differential equations (ODE) with constant coefficients with special types of the right-hand side. The calculation requires the determination of an inverse or pseudoinverse matrix. If the matrix is singular, the Moore--Penrose pseudoinverse matrix is used, which is simply calculated as the inverse of a submatrix of the considered matrix. It is shown that block matrices are effectively used to calculate a particular solution.
\end{abstract}

\section{Introduction}
The idea of representing the processes of calculus, differentiation and integration, as operators has a long history that goes back to the prominent German polymath G.~W. Leibniz. The French mathematician L.~F.~A. Arbogast was the first to separate the symbols of operation from those of quantity, systematically introducing the operator notation $DF$ for the derivative of a function. This approach was further developed by F.~J. Servois, who devised convenient notations. He was followed by a school of British and Irish mathematicians, among them R.~B. Carmichael and G. Boole, who described the application of operator methods to ordinary and partial differential equations. Other prominent personalities who contributed to the development of operational calculus were, for example, the physicist O. Heaviside and the mathematicians T.~J. Bromwich, J.~R. Carson, J. Mikusi{\'n}ski, N. Wiener, and others \cite{W}.
Although the problem of solving nonhomogeneous linear ordinary differential equations with constant coefficients is generally well understood, we deal here with a method of finding a particular solution of such an equation using a matrix differential operator. This method, like the method of undetermined coefficients, can be used only in special cases, namely when the right-hand side of the differential equation is of a special type, i.e.\ a constant, a polynomial function, an exponential function $e^{\alpha x}$, the sine or cosine functions $\sin{\beta x}$ or $\cos {\beta x}$, or finite sums and products of these functions ($\alpha$, $\beta$ constants). In some cases the method can be used as a supporting method for determining the particular solution of a differential equation by the method of undetermined coefficients or by the differential operator method, or for the evaluation of indefinite integrals.
We introduce the differential operator notation.
It is sometimes convenient to adopt the notation
$Dy$, $D^{2}y$, $D^{3}y,\cdots, D^{n}y$ to denote $\dfrac{dy}{dx}$, $ \dfrac{d^{2}y}{dx^{2}}$, $\dfrac{d^{3}y}{dx^{3}},\cdots,\dfrac{d^{n}y}{dx^{n}}$. The symbols $Dy$, $D^{2}y,\ldots$ are called {\it{differential operators}}
\cite{Ch} and have properties analogous to those of algebraic quantities \cite{H}. \\
\begin{tabular}{l}
Table 1: Operator Techniques \cite{Ch}, \cite{S}\\
\end{tabular}\\
\begin{tabular}{|l|l|}
\hline
\textbf{A.} & \\
\qquad $\phi(D)\sum\limits_{k=0}^nc_kR_k(x)$&
$\sum\limits_{k=0}^nc_k\phi(D)R_k(x)$\\
\hline
\textbf{B.} & \\
\qquad $\phi(D) \cdot \psi(D)$ & $\psi(D) \cdot \phi(D)$ \\
& $\phi(D),\, \psi(D)$ {\small polynomial operators in $D$}\\
\hline
\textbf{C.} & \\
\qquad $\phi(D)(xf(x))$ & $x\phi(D)f(x) + \phi^\prime(D)f(x)$\\
\hline
\textbf{D.} & \\
\qquad$D^ne^{\alpha x}$& $\alpha^ne^{\alpha x}$, {\small $n$ is non-negative integer}\\
\hline
\textbf{E.} & \\
\qquad$D^{n}\left\lbrack e^{\alpha x}R(x) \right\rbrack$ &
$e^{\alpha x}\left( D + \alpha \right)^{n}R(x)$\\
&{\small $n$ is non-negative integer}\\
\hline
\textbf{F.}&\\
\qquad$D^{2n}\sin{\beta x}$&$\left(-\beta^2\right)^n\sin{\beta x}$\\
\qquad $D^{2n}\cos{\beta x}$&$\left(-\beta^2\right)^n\cos{\beta x}$\\
\hline
\end{tabular}
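Rules A--F are identities of polynomial operators and can be machine-checked on polynomial inputs. The following self-contained sketch (the helper names are our own) represents polynomials by coefficient lists and verifies rule C, $\phi(D)\big(xf(x)\big) = x\,\phi(D)f(x) + \phi^\prime(D)f(x)$:

```python
# Pure-Python check of rule C from Table 1 on polynomials, represented by
# coefficient lists [c0, c1, ...] where c_k is the coefficient of x^k.
# The operator phi(D) is given by its own coefficient list in D.
def deriv(p):                       # D p, also phi -> phi' as a list in D
    return [k * p[k] for k in range(1, len(p))] or [0]

def times_x(p):                     # p(x) -> x p(x)
    return [0] + p

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def apply_op(phi, f):               # phi(D) f = sum_k phi_k D^k f
    out, g = [0], f
    for coeff in phi:
        out, g = add(out, [coeff * a for a in g]), deriv(g)
    return out

def trim(p):                        # drop trailing zero coefficients
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

phi = [5, 0, 3, 2]                  # phi(D) = 2D^3 + 3D^2 + 5
phi_prime = deriv(phi)              # phi'(D) = 6D^2 + 6D
f = [1, -4, 0, 2, 7]                # f(x) = 7x^4 + 2x^3 - 4x + 1
lhs = apply_op(phi, times_x(f))                              # phi(D)(x f)
rhs = add(times_x(apply_op(phi, f)), apply_op(phi_prime, f)) # x phi(D)f + phi'(D)f
assert trim(lhs) == trim(rhs)
```

Rule B (commutativity of polynomial operators) can be checked the same way, since `apply_op(phi, apply_op(psi, f))` computes the composition $\phi(D)\psi(D)f$.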
\par
\vspace{3mm}
\noindent Using the operator notation, we shall agree to write the differential equation
\begin{equation}\label{DR}
a_{n}y^{(n)} + a_{n - 1}y^{(n - 1)} + \ldots + a_{1}y^{\prime} + a_{0}y = f(x),\quad a_n \neq 0,\ a_i\in \mathbb{R}
\end{equation}
as
\begin{equation}\label{DRD}
\left(a_{n}D^{n} + a_{n - 1}D^{n - 1} + \cdots + a_{1}D + a_{0}\right)y = f(x)
\end{equation}
or briefly
\begin{equation}\label{DRDp}
\phi(D)y = f(x),
\end{equation}
where
\begin{equation}\label{DO}
\phi(D) = a_nD^n + a_{n - 1}D^{n - 1} + \cdots + a_{1}D + a_{0}
\end{equation}
is called an \emph{operator polynomial} in \(D.\)
If we want to emphasize the degree of the polynomial operator \eqref{DO}, we shall
write it in the form \(\phi_{n}( D )\).
\vspace{5mm}
The use of the operator calculus for solving linear differential equations is well dealt with in several publications, e.g. \cite{H}, \cite{Ch}, \cite{S}.
\section{Matrix differential operator}
\begin{definition}
If \(S = \{\mbf{v}_{1},\mbf{v}_{2},\ldots,\mbf{v}_{n}\}\)
is a set of vectors in a vector space \(V,\) then the set of all linear
combinations of
\(\mbf{v}_{1},\mbf{v}_{2}, \ldots,\mbf{v}_{n}\) is called the \emph{span} of
\(\mbf{v}_{1},\mbf{v}_{2},\ldots,\mbf{v}_{n}\) and is denoted by
\(\text{span}(\mbf{v}_{1},\mbf{v}_{2},\ldots,\mbf{v}_{n})\)
or \text{span}(S).
\end{definition}
Let \(G\) be the vector space of all differentiable functions. Consider the
subspace \(V \subset G\) given by
\begin{equation}\label{spanfi}
V=\text{span}\left( f_{1}(x),f_{2}(x),\cdots,f_{n}(x) \right),
\end{equation}
where we assume that the functions \(f_{1}(x),f_{2}(x),\cdots,f_{n}(x)\)
are linearly independent. Since the set
\(B = \{ f_{1}(x),f_{2}(x),\cdots,f_{n}(x)\}\) is linearly independent, it is a~basis for \(V\).\newline
The coordinate vectors of the functions \(f_{i}(x),\ i = 1,2,\ldots,n\) with respect to the basis \(B\) are usually written
\[\left\lbrack f_{1}(x)\right\rbrack_{B} = \begin{bmatrix}
1 \\
0 \\
\vdots \\
0 \\
\end{bmatrix},\ \left\lbrack f_{2}(x)\right\rbrack_{B} = \begin{bmatrix}
0 \\
1 \\
\vdots \\
0 \\
\end{bmatrix},\cdots,\left\lbrack f_{n}(x) \right\rbrack_{B} = \begin{bmatrix}
0 \\
0 \\
\vdots \\
1 \\
\end{bmatrix}\]
The vector \(\left\lbrack f_{i}(x) \right\rbrack_{B}\) has a \(1\) in the \emph{i}-th row and \(0\) elsewhere.
Further, assume that the differential operator \(D\) maps \(V\) into itself.\\
Let
\[D( f_{i}( x) )= \sum_{j = 1}^{n}c_{ij}f_{j}( x),\quad i = 1,2,\cdots,n,\]
where \(c_{ij} \in \mathbb{R}\), \(i,j = 1,2,\ldots,n\) are constants.
Then
\[\left\lbrack D\left( f_{i}(x) \right) \right\rbrack_{B} = \begin{bmatrix}
c_{i1} \\
c_{i2} \\
\vdots \\
c_{in} \\
\end{bmatrix},\quad i = 1,2,\ldots,n \]
and (see \cite{P})
\begin{equation}\label{Poole}
\left\lbrack D \right\rbrack_{B} = \left\lbrack \left\lbrack D\left( f_{1}(x) \right) \right\rbrack_{B} \Shortstack{. . . .}\left\lbrack D\left( f_{2}(x) \right) \right\rbrack_{B} \Shortstack{. . . .}\cdots \Shortstack{. . . .}\left\lbrack D\left( f_{n}(x) \right) \right\rbrack_{B} \right\rbrack = \begin{bmatrix}
c_{11} & c_{21} & \cdots & c_{n1} \\
c_{12} & c_{22} & \cdots & c_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
c_{1n} & c_{2n} & \cdots & c_{nn} \\
\end{bmatrix}
\end{equation}
If
\[f(x) = \sum_{i = 1}^{n}\alpha_{i}f_{i}(x),\quad \alpha_{i} \in \mathbb{R},\quad i = 1,2,\cdots,n,\quad f(x) \in V \]
then
\[\left\lbrack f(x) \right\rbrack_{B} = \begin{bmatrix}
\alpha_{1} \\
\alpha_{2} \\
\vdots \\
\alpha_{n} \\
\end{bmatrix}\]
We now express the derivative of the function \(f(x)\):
\[Df(x) = \sum_{i = 1}^{n}\alpha_{i}Df_{i}(x) = \sum_{i = 1}^{n}\alpha_{i}\sum_{j = 1}^{n}c_{ij}f_{j}(x) = \sum_{j = 1}^{n}\left(\sum_{i = 1}^{n}c_{ij}\alpha_{i}\right)f_{j}(x)\]
respectively
\[\left\lbrack D( f(x)) \right\rbrack_{B}= \begin{bmatrix}
\sum\limits_{i=1}^n c_{i1}\alpha_{i} \\
\sum\limits_{i=1}^n c_{i2}\alpha_{i} \\
\vdots \\
\sum\limits_{i=1}^n c_{in}\alpha_{i} \\
\end{bmatrix}=\begin{bmatrix}
c_{11} & c_{21} & \cdots & c_{n1} \\
c_{12} & c_{22} & \cdots & c_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
c_{1n} & c_{2n} & \cdots & c_{nn} \\
\end{bmatrix}\begin{bmatrix}
\alpha_{1} \\
\alpha_{2} \\
\vdots \\
\alpha_{n} \\
\end{bmatrix}\]
Let us simply denote \(\left\lbrack D \right\rbrack_{B}\) by \(\mathcal{D}_B\).
The matrix \(\mathcal{D}_B\) will be called the \emph{matrix differential operator corresponding to the vector space} \(V\) \emph{with the considered basis} \(B\).
Denote
\[\left\lbrack\mathcal{D}(f(x)) \right\rbrack_{B}=\mbf{f}_B^\prime = \begin{bmatrix}
\beta_{1} \\
\beta_{2} \\
\mbf{\vdots} \\
\beta_{n} \\
\end{bmatrix}\,\,\text{and\ \ }\left\lbrack f(x) \right\rbrack_{B} = \mbf{f}_B\]
then
\[\mbf{f}_B^\prime=\mathcal{D}_B\mbf{f}_B\]
Note that the matrix transformation \(\mathcal{D}_B:V \rightarrow V\) defined by
\[\mathcal{D}_B\mbf{f}_B = \mbf{f}_B^\prime\]
is a linear transformation.
As mentioned in \eqref{Poole}, the \emph{i}-th \(\left( i = 1,2,\cdots,n \right)\) column of the matrix \(\mathcal{D}_B\) expresses the derivative of the function \(f_{i}(x).\)
The following properties of the matrix differential operator are stated without proof.
\begin{theorem}\label{1} Let \(\mathcal{D}_B\) be the matrix differential
operator of a vector space \(V\) with a basis \(B\). Then
\begin{itemize}
\item
\(\mathcal{D}_B\left(\mbf{f}_B+\mbf{g}_B\right)=\mathcal{D}_B\mbf{f}_B+\mathcal{D}_B\mbf{g}_B\)
\item
\(\mathcal{D}_B\left( c\mbf{f}_B \right)=c\mathcal{D}_B\left(\mbf{f}_B \right), \quad c\in\mathbb{R}\)
\item
\(\mbf{f}_B^{\prime\prime}= \mathcal{D}_B\left(\mathcal{D}_B\mbf{f}_B\right)= \mathcal{D}_B^{2}\mbf{f}_B\)
\item
\(\mbf{f}_B^{(n)}=\mathcal{D}_B\mbf{f}_B^{( n - 1) }, \quad
\mbf{f}_B^{( 0 )}=\mbf{f}_B\)
\item
\(\mbf{f}_B^{( n )}=\mathcal{D}_B^{{n}}\mbf{f}_B\)
\item
the \emph{i}-th \(\left( i = 1,2,\cdots,n \right)\) column of the matrix
\(\mathcal{D}_B^{n}\) expresses the \(n\)-th derivative of the function \(f_{i}( x).\)
\end{itemize}
\end{theorem}
\begin{example}\label{exam1}
Let us consider
\begin{equation}\label{eq1}
V = \text{span}(xe^{2x}\sin{3x}, xe^{2x}\cos{3x},e^{2x}\sin{3x},e^{2x}\cos{3x})
\end{equation}
The Wronskian
\(W(xe^{2x}\sin{3x}, xe^{2x}\cos{3x},e^{2x}\sin{3x},e^{2x}\cos{3x}) = 324 \cdot e^{8x} \neq 0.\)
It follows that the functions in \eqref{eq1} are linearly independent. Then the set
\(B = \{xe^{2x}\sin{3x}, xe^{2x}\cos{3x},e^{2x}\sin{3x},e^{2x}\cos{3x} \}\)
is a basis for V.
Applying the differential operator \(D\) to a general element of
\(V\), we see that
\[\aligned
D&\left({ax}e^{2x}\sin{3x} + bxe^{2x}\cos{3x} +ce^{2x}\sin{3x} + de^{2x}\cos{3x} \right) \\
&=( 2a - 3b )xe^{2x}\sin{3x} + ( 3a + 2b)xe^{2x}\cos{3x}+(a + 2c - 3d)e^{2x}\sin{3x} \\
&+(b + 3c + 2d)e^{2x}\cos{3x},
\endaligned\]
which is again in \emph{V.}
We have thus found the corresponding matrix differential operator \(\mathcal{D}_B\)
in the vector space \(V\) with the basis \(B\). The construction of
\(\mathcal{D}_B\) is shown schematically
\end{example}
\parbox{\textwidth}{
\hspace{6.6cm}\rotatebox{90}{\small $D\left(xe^{2x}\sin{3x}\right)$}
\hspace{0.05cm}
\rotatebox{90}{\small $D\left(xe^{2x}\cos{3x}\right)$}
\hspace{0.03cm}
\rotatebox{90}{\small $D\left(e^{2x}\sin{3x}\right)$}
\hspace{0.01cm}
\rotatebox{90}{\small$D\left(e^{2x}\cos{3x}\right)$}
\vspace{-0.1cm}
\begin{equation}\label{schemat}
\mathcal{D}_B=\begin{matrix}
xe^{2x}\sin{3x}\\
xe^{2x}\cos{3x}\\
e^{2x}\sin{3x}\\
e^{2x}\cos{3x}\\
\end{matrix}
\begin{bmatrix}
2 & -3 & 0 & 0\\
3 & 2 & 0 & 0\\
1 & 0 & 2 & -3\\
0 & 1 & 3 & 2\\
\end{bmatrix}
\end{equation}
}%
Let us calculate the first and second derivative of the function
\[f(x) = xe^{2x}\sin{3x} + 2xe^{2x}\cos{3x} - 2e^{2x}\cos{3x} \]
using the matrix differential operator \eqref{schemat}.
We have
\[\left\lbrack\mathcal{D}f(x)\right\rbrack_{B}=\mathcal{D}_{B}\left\lbrack f(x) \right\rbrack_{B} = \begin{bmatrix}
2 & - 3 & 0 & 0 \\
3 & 2 & 0 & 0 \\
1 & 0\ & 2 & - 3 \\
0 & 1 & 3 & 2 \\
\end{bmatrix}\begin{bmatrix}
1 \\
2 \\
0 \\
- 2 \\
\end{bmatrix} = \begin{bmatrix}
- 4 \\
7 \\
7 \\
- 2 \\
\end{bmatrix}\]
So that
\[Df(x) = - 4xe^{2x}\sin{3x} + 7xe^{2x}\cos{3x} + 7e^{2x}\sin{3x} - 2e^{2x}\cos{3x}\]
The second derivative
\[\mbf{f}_B^{\prime\prime}=\mathcal{D}_B\mbf{f}_B^\prime =\begin{bmatrix}
2 & - 3 & 0 & 0 \\
3 & 2 & 0 & 0 \\
1 & 0\ & 2 & - 3 \\
0 & 1 & 3 & 2 \\
\end{bmatrix}\begin{bmatrix}
- 4 \\
7 \\
7 \\
- 2 \\
\end{bmatrix} = \begin{bmatrix}
- 29 \\
2 \\
16 \\
24 \\
\end{bmatrix}\]
\[D^{2}f(x) = - 29xe^{2x}\sin{3x} + 2xe^{2x}\cos{3x} + 16e^{2x}\sin{3x} + 24e^{2x}\cos{3x}\]
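The two matrix-vector products above are easy to reproduce mechanically. A minimal sketch in Python (the names `D_B`, `matvec` and `f_B` are ours, not part of the text):

```python
# Matrix differential operator from Example 1; column i holds the
# coordinates of D(f_i) in the basis
# B = {x e^{2x} sin 3x, x e^{2x} cos 3x, e^{2x} sin 3x, e^{2x} cos 3x}.
D_B = [[2, -3, 0,  0],
       [3,  2, 0,  0],
       [1,  0, 2, -3],
       [0,  1, 3,  2]]

def matvec(M, v):
    """Multiply the matrix M by the coordinate vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# f(x) = x e^{2x} sin 3x + 2 x e^{2x} cos 3x - 2 e^{2x} cos 3x
f_B = [1, 2, 0, -2]
f1_B = matvec(D_B, f_B)    # coordinates of f'(x)
f2_B = matvec(D_B, f1_B)   # coordinates of f''(x)
print(f1_B)  # [-4, 7, 7, -2]
print(f2_B)  # [-29, 2, 16, 24]
```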
For calculating higher powers of the matrix \(\mathcal{D}_B\) it is sometimes advantageous to express it as a block matrix (see example 11) or to use the Jordan decomposition.
The point of a matrix differential operator is not that this method is easier than direct differentiation. Indeed, once the matrix $\mathcal{D}_B$ has been established, the derivatives can be found with little effort. What is significant is that matrix methods can be used at all in what appears, on the surface, to be a~calculus problem.
\pagebreak
\section{Some properties of block matrices and pseudoinverse matrix}
\begin{theorem}\label{T2} \cite{Kl}
Let \(\mbf{A}\) be a~matrix partitioned into four blocks
\[\mbf{A} = \begin{bmatrix}
\mbf{P} & \mbf{Q} \\
\mbf{R} & \mbf{S} \\
\end{bmatrix}\]
where \(\mbf{R}\) is a regular matrix of order $r$. Then the rank of the matrix \(\mbf{A}\) is equal to $r$ if and only if
\[\mbf{Q} = \mbf{P}\mbf{R}^{- 1}\mbf{S}.\]
\end{theorem}
\begin{proof}
To the first block row of the matrix \(\mbf{A}\) we add
the second block row multiplied from the left by the matrix
\(- \mbf{P}\mbf{R}^{- 1}\). This operation adds linear
combinations of the rows of \(( \mbf{R}\quad \mbf{S})\) to
the remaining rows, so the rank does not change. After this adjustment, we get the matrix
\begin{equation}\label{rank}
\begin{bmatrix}
\mbf{0} & \mbf{Q - P}\mbf{R}^{- 1}\mbf{S} \\
\mbf{R} & \mbf{S} \\
\end{bmatrix}.
\end{equation}
Since the matrix \eqref{rank} has the same rank as \(\mbf{A}\), the rank is equal to $r$ if and only if the
matrix \(\mbf{Q - P}\mbf{R}^{- 1}\mbf{S}\) equals
\(\mbf{0}.\) It follows that
\(\mbf{Q = P}\mbf{R}^{- 1}\mbf{S}.\)
\end{proof}
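The elimination step of this proof can be checked on a concrete instance with exact rational arithmetic. A small Python sketch (the particular blocks `P`, `R`, `S` are an arbitrary illustrative choice of ours):

```python
from fractions import Fraction

def matmul(A, B):
    """Exact matrix product over the rationals."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

F = Fraction
# An arbitrary illustrative choice of blocks with R regular of order 2.
P = [[F(1), F(2)], [F(0), F(1)]]
R = [[F(2), F(0)], [F(0), F(3)]]
R_inv = [[F(1, 2), F(0)], [F(0), F(1, 3)]]
S = [[F(1), F(1)], [F(4), F(0)]]

# Choosing Q = P R^{-1} S forces rank(A) = 2 by the theorem.
Q = matmul(matmul(P, R_inv), S)

# Elimination step of the proof: the first block row (P  Q) is exactly
# P R^{-1} times the second block row (R  S), so it adds no new rank.
assert matmul(matmul(P, R_inv), R) == P
assert matmul(matmul(P, R_inv), S) == Q
```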
\begin{definition} Let \(\mbf{A}\) be an \(m \times n\) matrix. A matrix \(\mbf{X}\) for which
\[\mbf{AXA} = \mbf{A}\]
is called a \emph{pseudoinverse matrix} of the matrix \(\mbf{A}\). A pseudoinverse
matrix of a matrix \(\mbf{A}\) is denoted \(\mbf{A}^{-}.\)
\end{definition}
\begin{theorem}\label{T3}
Let \(\mbf{A}\) be an \(m \times n\) matrix partitioned into four blocks
\[\mbf{A} = \begin{bmatrix}
\mbf{P} & \mbf{Q} \\
\mbf{R} & \mbf{S} \\
\end{bmatrix}\]
and let~the rank of the matrix \(\mbf{A}\) be $r$. If the matrix \(\mbf{R}\) is regular
of order $r$, then the \(n \times m\) matrix
\[\mbf{A}^{-} = \begin{bmatrix}
\mbf{0} & \mbf{R}^{- 1} \\
\mbf{0} & \mbf{0} \\
\end{bmatrix}\]
is the pseudoinverse of the matrix \(\mbf{A}.\)
\end{theorem}
\begin{proof} Compute
\[\mbf{A}\mbf{A}^{-}\mbf{A} = \begin{bmatrix}
\mbf{P} & \mbf{Q} \\
\mbf{R} & \mbf{S} \\
\end{bmatrix}\begin{bmatrix}
\mbf{0} &\mbf{R}^{- 1} \\
\mbf{0} & \mbf{0} \\
\end{bmatrix}\begin{bmatrix}
\mbf{P} & \mbf{Q} \\
\mbf{R} & \mbf{S} \\
\end{bmatrix} = \begin{bmatrix}
\mbf{P} & \mbf{P}\mbf{R}^{- 1}\mbf{S} \\
\mbf{R} & \mbf{S} \\
\end{bmatrix} = \begin{bmatrix}
\mbf{P} & \mbf{Q} \\
\mbf{R} & \mbf{S} \\
\end{bmatrix} = \mbf{A}\]
We have used the consequence of Theorem \ref{T2}, i.e.
\(\mbf{Q} =\mbf{P}\mbf{R}^{- 1}\mbf{S}.\)
\end{proof}
\begin{definition}
The pseudoinverse matrix \(\mbf{A}^{-}\) of the matrix \(\mbf{A}\) satisfying all of the following four conditions
\begin{itemize}
\item[(i)]
\(\mbf{A}\mbf{A}^{-}\mbf{A} = \mbf{A}\),
(\(\mbf{A}^{-}\) is the pseudoinverse matrix of the matrix
\(\mbf{A})\);
\item[(ii)]
\(\mbf{A}^{-}\mbf{A}\mbf{A}^{-} = \mbf{A}^{-}\),
(\(\mbf{A}\) is the pseudoinverse matrix of the matrix
\(\mbf{A}^{-})\);
\item[(iii)]
\(\left( \mbf{A}\mbf{A}^{-} \right)^{T}=\mbf{A}\mbf{A}^{-},\) (\(\mbf{A}\mbf{A}^{-}\) is a symmetric matrix, \(\left( \cdot \right)^{T}\) means a transposed matrix);
\item[(iv)]
\(\left( \mbf{A}^{-}\mbf{A} \right)^{T}=\mbf{A}^{-}\mbf{A}\),
(\(\mbf{A}^{-}\mbf{A}\) is a symmetric matrix)
\end{itemize}
is called the Moore-Penrose pseudoinverse of the matrix \(\mbf{A}\),
denoted by \(\mbf{A}^{+}.\)
\end{definition}
\begin{theorem}\label{T33}
Let \(\mbf{A}\) be an \(m \times n\) matrix
partitioned into four blocks
\[\mbf{A} = \begin{bmatrix}
\mbf{0} & \mbf{0} \\
\mbf{R} & \mbf{0} \\
\end{bmatrix}\]
and let the rank of matrix \(\mbf{A}\) be \(r.\) If matrix
\(\mbf{R}\) is regular of order \(r\), then
\[\mbf{A}^{-} = \begin{bmatrix}
\mbf{0} & \mbf{R}^{- 1} \\
\mbf{0} & \mbf{0} \\
\end{bmatrix}\]
of type \(n \times m\) is the Moore-Penrose pseudoinverse matrix of the matrix
\(\mbf{A}\), thus \linebreak \(\mbf{A}^{-}=\mbf{A}^{+}.\)
\end{theorem}
\begin{proof} We will verify all four conditions of Definition 2.\vspace{2mm}
\begin{itemize}
\item[(i)]
\(\mbf{A}\mbf{A}^{-}\mbf{A =}\begin{bmatrix} \mbf{0} & \mbf{0} \\ \mbf{R} & \mbf{0} \\ \end{bmatrix}\begin{bmatrix} \mbf{0} & \mbf{R}^{- 1} \\ \mbf{0} & \mbf{0} \\ \end{bmatrix}\begin{bmatrix} \mbf{0} & \mbf{0} \\ \mbf{R} & \mbf{0} \\ \end{bmatrix} = \begin{bmatrix} \mbf{0} & \mbf{0} \\ \mbf{R} & \mbf{0} \\ \end{bmatrix} = \mbf{A}\)\\
Condition (i) is met.\vspace{2mm}
\item[(ii)]
The pseudoinverse matrix of the matrix
\[\mbf{A}^{-} = \begin{bmatrix}
\mbf{0} & \mbf{R}^{- 1} \\
\mbf{0} & \mbf{0} \\
\end{bmatrix}\]
is the matrix
\[\mbf{A} = \begin{bmatrix}
\mbf{0} & \mbf{0} \\
\mbf{R} & \mbf{0} \\
\end{bmatrix},\]
because
\[\mbf{A}^{-}\mbf{A}\mbf{A}^{-}=\begin{bmatrix}
\mbf{0} & \mbf{R}^{- 1} \\
\mbf{0} & \mbf{0} \\
\end{bmatrix}\begin{bmatrix}
\mbf{0} & \mbf{0} \\
\mbf{R} & \mbf{0} \\
\end{bmatrix}\begin{bmatrix}
\mbf{0} & \mbf{R}^{- 1} \\
\mbf{0} & \mbf{0} \\
\end{bmatrix} = \begin{bmatrix}
\mbf{0} & \mbf{R}^{- 1} \\
\mbf{0} & \mbf{0} \\
\end{bmatrix} = \mbf{A}^{-}\]
We have also proved the validity of condition (ii).
\item[(iii)]
\(\left( \mbf{A}\mbf{A}^{-} \right)^{T}=\left\lbrack \begin{bmatrix}
\mbf{0} & \mbf{0} \\ \mbf{R} & \mbf{0} \\ \end{bmatrix}\begin{bmatrix} \mbf{0} & \mbf{R}^{- 1} \\ \mbf{0} & \mbf{0} \\ \end{bmatrix} \right\rbrack^{T} = \begin{bmatrix} \mbf{0} & \mbf{0} \\ \mbf{0} & \mbf{I} \\ \end{bmatrix}^{T} = \begin{bmatrix} \mbf{0} & \mbf{0} \\ \mbf{0} & \mbf{I} \\ \end{bmatrix}\)\\
We have shown that condition (iii) is met.\vspace{2mm}
\item[(iv)]
\(\left( \mbf{A}^{-}\mbf{A} \right)^{T}=\left( \begin{bmatrix} \mbf{0} & \mbf{R}^{- 1} \\ \mbf{0} & \mbf{0} \\ \end{bmatrix}\begin{bmatrix} \mbf{0} & \mbf{0} \\ \mbf{R} & \mbf{0} \\ \end{bmatrix} \right)^{T} = \begin{bmatrix} \mbf{I} & \mbf{0} \\ \mbf{0} & \mbf{0} \\ \end{bmatrix}^{T} = \begin{bmatrix} \mbf{I} & \mbf{0} \\ \mbf{0} & \mbf{0} \\ \end{bmatrix}\)\\
We have shown that condition (iv) is also met.
\end{itemize}
\end{proof}
\begin{corollary}
For the matrices \(\mbf{A}\) and \(\mbf{A}^{-}\) in Theorem \ref{T33}, the products
\(\mbf{A}\mbf{A}^{-}\) and \(\mbf{A}^{-}\mbf{A}\) are incomplete identity matrices.
\end{corollary}
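The four Moore-Penrose conditions for a matrix of this block form can also be verified numerically. A Python sketch with exact rational arithmetic (the helper names and the particular choice $R = \mathrm{diag}(2,1)$ are ours):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

F = Fraction
# A has the block form [[0, 0], [R, 0]] with R = diag(2, 1) regular of
# order 2; A_plus carries R^{-1} in its upper-right block.
A      = [[F(0), F(0), F(0)],
          [F(2), F(0), F(0)],
          [F(0), F(1), F(0)]]
A_plus = [[F(0), F(1, 2), F(0)],
          [F(0), F(0),    F(1)],
          [F(0), F(0),    F(0)]]

# The four Moore-Penrose conditions:
assert matmul(matmul(A, A_plus), A) == A                   # (i)
assert matmul(matmul(A_plus, A), A_plus) == A_plus         # (ii)
assert transpose(matmul(A, A_plus)) == matmul(A, A_plus)   # (iii)
assert transpose(matmul(A_plus, A)) == matmul(A_plus, A)   # (iv)
```

Note that the products $\mbf{A}\mbf{A}^{-}$ and $\mbf{A}^{-}\mbf{A}$ computed here are indeed incomplete identity matrices, as the corollary states.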
\begin{theorem}\label{SLR} Let
\begin{equation}\label{slr}
\mbf{A}\mbf{x}=\mbf{b}
\end{equation}
be a solvable system of linear equations and \(\mbf{A}^{-}\) be a pseudoinverse of the matrix \(\mbf{A},\) then
\begin{equation}\label{rslr}
\mbf{x} = \mbf{A}^{-}\mbf{b}
\end{equation}
is a solution of \eqref{slr}.
\end{theorem}
\begin{proof} Assume that the system of linear equations \eqref{slr} is
solvable; then a vector \(\mbf{y}\) exists such that
\[\mbf{A}\mbf{y}=\mbf{b}.\]
We show that the vector in \eqref{rslr} is a solution of the system of linear equations \eqref{slr}.
Indeed,
\[\mbf{A}\mbf{x}=\mbf{A}\left(\mbf{A}^{-}\mbf{b}\right)=
\mbf{A}\mbf{A}^{-}\left(\mbf{A}\mbf{y}\right)=
\left(\mbf{A}\mbf{A}^{-}\mbf{A}\right)\mbf{y}=\mbf{A}\mbf{y}=\mbf{b}\]
\end{proof}
\begin{remark}\label{oMP}
If the system of linear equations \eqref{slr} is solvable and its solution is expressed in the form
\(\mbf{x} = \mbf{A}^{-}\mbf{b}\), where \(\mbf{A}^{-}\) is the Moore-Penrose pseudoinverse of the matrix \(\mbf{A}\), then this solution minimizes the Euclidean norm \(\left\| \mbf{Ax} - \mbf{b} \right\|\), and among all \(n\)-dimensional vectors
\(\mbf{x}\) that minimize this norm it has the least norm. Such a solution of the system of linear equations is called the \emph{least squares solution of minimum norm} or \emph{the best approximate solution of the system}.
\end{remark}
Note that in our considerations, when solving a solvable system of linear equations \eqref{slr}, we will consider the solution \eqref{rslr} expressed in the form
\[\mbf {x} = \mbf {A }^+ \mbf {b}.\]
A solution of \eqref{slr} exists if and only if $\mbf{A}\mbf{A}^{+}\mbf{b}=\mbf{b}$ \cite{James}.
\section{Pseudoinverse matrix differential operator}
We shall introduce some properties of the differential and inverse differential operators.
The inverse differential operator is defined as follows \cite{S}: if \(y\) is a particular solution of \eqref{DRDp}, we write \(y = \dfrac1{\phi(D)}f(x)\) and call \(\dfrac1{\phi(D)}\) \emph{the inverse differential operator}.
\par We will prove relation I of Table 2 below. \\*
\proof
By relation C of Table 1,
\begin{equation}\label{aad}
\phi(D)(xf(x)) = x\phi(D)f(x) + \phi^\prime(D)f(x)
\end{equation}
Let
\[\phi(D)f(x) = R(x)\]
Then
\[\frac{1}{\phi(D)}\phi(D)f(x) = \frac{1}{\phi(D)}R(x)\]
\begin{equation}\label{aae}
f(x) = \frac{1}{\phi(D)}R(x)
\end{equation}
Substituting \eqref{aae} into relationship \eqref{aad}, we get
\[\aligned
\phi(D)\left( x\frac{1}{\phi(D)}R(x) \right) &= x\phi(D)\frac{1}{\phi(D)}R(x) + \phi^\prime(D)\frac{1}{\phi(D)}R(x)\\
\phi(D)\left( x\frac{1}{\phi(D)}R(x) \right) &= x R(x) + \phi^\prime(D)\frac{1}{\phi(D)}R(x)\\
\endaligned\]
Multiplying the previous equation from the left by \(\dfrac{1}{\phi(D)}\) gives
\[x\frac{1}{\phi(D)}R(x) = \frac{1}{\phi(D)} x R(x) + \frac{1}{\phi(D)}\phi^\prime(D)
\frac{1}{\phi(D)}R(x)\]
from which
\[\frac{1}{\phi(D)} x R(x) = x\frac{1}{\phi(D)}R(x) - \frac{1}{\phi(D)}\phi^\prime(D)
\frac{1}{\phi(D)}R(x)\]
\qed
\pagebreak
\begin{tabular}{l}
Table 2: Inverse Operator Techniques \cite{Ch}, \cite{S}\\
\end{tabular}\\
\begin{tabular}{|l|l|}
\hline
\textbf{A.} & \\
\qquad $\frac1{\phi(D)}\sum\limits_{k=0}^nc_kR_k(x)$&
$\sum\limits_{k=0}^nc_k\frac1{\phi(D)}R_k(x)$\\
\hline
\textbf{B.} & \\
\(\qquad\frac1{D-m}R(x)\) & \(e^{mx}\int e^{-mx}R(x)dx\) \\
& \\
\hline
\textbf{C.} & \\
& $e^{m_1x}\int e^{-m_1x}\,e^{m_2x}\int e^{-m_2x}\dots$\\
\quad\( \frac1{(D-m_1)(D-m_2)\dots (D-m_n) }\)& $e^{m_nx}\int e^{-m_nx}R(x)dx^n$\\
& {\small This can also be evaluated by expanding} \\
& {\small the inverse operator into partial fractions}\\
& {\small and then using B.} \\
\hline
\textbf{D.} & \\
&$\frac{e^{px}}{\phi(p)}$\, {\small if} \, $\phi(p)\ne 0$\\
\hspace{1cm}$\frac1{\phi(D)}e^{px}$ & $\frac{x^ke^{px}}{\phi^{(k)}(p)}$\,{\small if}\,
$\textstyle{\phi(p)=\phi^\prime(p)=\dots \phi^{(k-1)}(p)=0}$\\
&\hspace{3.9cm} {\small but} \, $\phi^{(k)}(p)\ne 0$\\
\hline
\textbf{E.}&\\
\qquad$\frac1{\phi(D^2)}\cos px$&\qquad $\dfrac{\cos px}{\phi(-p^2)}$\\
&\hspace{3cm}{\small if}\quad $\phi(-p^2)\ne 0$\\
\qquad$\frac1{\phi(D^2)}\sin px$&\qquad $\dfrac{\sin px}{\phi(-p^2)}$\\
\hline
\textbf{F.}&\\
\qquad $\frac1{\phi(D)}\cos px$
& \qquad $Re\left\{\frac{e^{ipx}}{\phi(ip)}\right\}$\\
$\qquad \frac1{\phi(D)}\sin px$
& \qquad $Im\left\{\frac{e^{ipx}}{\phi(ip)}\right\}$\\
& {\small If $\phi(ip)\ne 0$, otherwise use D.}\\
\hline
\textbf{G.}&\\
\qquad $\frac1{\phi(D)}x^p$ &
\qquad $\left(\sum\limits_{k=0}^p c_kD^k\right)x^p$\\
{\small by expanding $\frac1{\phi(D)}$ in powers of $D$}
& {\small since $D^{p+n}x^p=0$ for $n>0.$}\\
\hline
\textbf{H.}&\\
\qquad $\frac1{\phi(D)}e^{px}R(x) $&\quad $e^{px}\frac1{\phi(D+p)}R(x)$\\
& {\small called the ``operator shift theorem''}.\\
\hline
\textbf{I.}&\\
\qquad $\frac1{\phi(D)}xR(x)$ &
$x\frac1{\phi(D)}R(x) - \frac1{\phi(D)}\phi^\prime(D)\frac1{\phi(D)}R(x)$=\\
&$\left(x-\frac1{\phi(D)}\phi^\prime(D)\right)\frac1{\phi(D)}R(x)$\\
&\\
\hline
\textbf{J.}&{\small using substitute}\\
\quad$\dfrac1{\phi(D)}\left( e^{\alpha x} P_{m}(x)\cos{\beta x} +\right.$
& $\cos{\beta x} = \dfrac{e^{i\beta x} + e^{- i\beta x}}{2}$\\
\qquad\quad$\left. Q_{n}(x)\sin{\beta x} \right)$&
$\sin{\beta x} = \dfrac{e^{i\beta x} - e^{- i\beta x}}{2i}$ \\
&{\small and next to use an operator shift theorem}\\
\hline
\end{tabular}
\\
Proofs of many of the statements in Table 2 can be found, for example, in \cite{Ch} or \cite{S}.
Note that all analytical relations derived for the differential operator have an analogous expression for the matrix differential operator.
In order to define a pseudoinverse matrix differential operator, we first consider a simple form of the differential equation \eqref{DR}, i.e.
\begin{equation}\label{sdr}
y^\prime=f(x)
\end{equation} \par
Let us assume that the function \(f(x)\) is differentiable and every derivative of the function \(f(x)\) can be expressed as a finite linear combination of linearly independent functions $f_1(x),f_2(x),\dots,f_n(x).$
Let's consider the vector space
\[V=\text{span}\left(f_1(x),f_2(x),\dots,f_n(x)\right)\]
with the basis \(B=\{f_1(x),f_2(x),\dots,f_n(x)\}\).
\begin{itemize}
\item[(\textbf{A})]
Let us assume that the particular solution of the differential equation \eqref{sdr} belongs to the vector space \(V\) with the basis $B$. Let \(\mathcal{D}_B\) be a matrix differential operator corresponding to the basis $B$. Then the equation
\[ \mathcal{D}_B\mbf{y}_B=\mbf{f}_B,\]
where $\mbf{f}_B=[f(x)]_B$, is solvable. Due to Theorem \ref{SLR},
\begin{equation}\label{byMPP}
\mbf{y}_B=\mathcal{D}_B^{+}\mbf{f}_B
\end{equation}
is the solution of the differential equation \eqref{sdr} expressed in the basis $B$ where
$\mathcal{D}_B^{+}$ is the Moore-Penrose pseudoinverse of the matrix
$\mathcal{D}_B.$\\*
Thus $\mathcal{D}_B^{+}$ is the pseudoinverse matrix differential operator to the operator $\mathcal{D}_B$.
(The unique solution \eqref{byMPP} of the differential equation \eqref{sdr} does not contain a component from the kernel of the matrix differential operator $\mathcal{D}_B$. This also applies in the following cases.)
\item[(\textbf{B})]
Let us assume that the particular solution of the differential equation \eqref{sdr} does not belong to the vector space \(V\) and let \(\mathcal{D}_B\) be a matrix differential operator with the considered basis B. Then the equation
\[ \mathcal{D}_B\mbf{y}_B=\mbf{f}_B\]
is not solvable. Let's create a new system of functions
\[\aligned
B_1&=\{f_1(x),f_2(x),\dots,f_n(x)\}\cup\{xf_1(x),xf_2(x),\dots,xf_n(x)\}\\
&=\{f_{11}(x),f_{12}(x),\dots,f_{1m}(x)\}\\
\endaligned \]
where $m>n$.
Let the functions $f_{11}(x),f_{12}(x),\dots,f_{1m}(x)$ be linearly independent and let
\[V_1=\text{span}(f_{11}(x),f_{12}(x),\dots,f_{1m}(x))\]
with the basis $B_1$.
Let a particular solution of the differential equation \eqref{sdr} belong to the vector space \(V_1\) and let \(\mathcal{D}_{B_1}\) be a matrix differential operator with the considered basis $B_1$. Then the equation
\begin{equation}\label{ssdr}
\mathcal{D}_{B_1}\mbf{y}_{B_1}=\mbf{f}_{B_1}
\end{equation}
is solvable, where $\mbf{f}_{B_1}=[f(x)]_{B_1}$.
Due to Theorem \ref{SLR}
\[\mbf{y}_{B_1}=\mathcal{D}_{B_1}^{+}\mbf{f}_{B_1}\]
is the solution of the differential equation \eqref{sdr} expressed in the basis $B_1$
where $\mathcal{D}_{B_1}^{+}$ is the Moore-Penrose pseudoinverse of the matrix
$\mathcal{D}_{B_1}.$ Thus $\mathcal{D}_{B_1}^{+}$ is the pseudoinverse matrix differential operator to the operator $\mathcal{D}_{B_1}$.
\item[(\textbf{C})] If a solution of the differential equation \eqref{sdr} does not belong to the vector space \(V_1\), we will create a new system of vectors in a manner similar to $B_1$ and analyse the solvability of the differential equation.
In general, it is difficult to prove that the solution lies in a vector space of the kind constructed above; this will be illustrated in the following examples.
\end{itemize}
For completeness, we still have to consider a differential equation in the form \eqref{DR}. Also in this case, we assume that the function \(f(x)\) is differentiable and every derivative of the function \(f(x)\) can be expressed as a finite linear combination of linearly independent functions $f_1(x),f_2(x),\dots,f_n(x).$
We are considering a vector space
\[V=\text{span}(f_1(x),f_2(x),\dots,f_n(x))\]
with the basis \(B=\{f_1(x),f_2(x),\dots,f_n(x)\}\)
and we discuss the existence of a solution to the equation
\begin{equation}\label{MDRD}
\left(a_{n}\mathcal{D}_B^{n} + a_{n - 1}\mathcal{D}_B^{n - 1} + \cdots + a_{1}\mathcal{D}_B + a_{0}\mbf{I}\right)\mbf{y}_B = \mbf{f}_B
\end{equation}
or briefly
\[\phi(\mathcal{D}_B)\mbf{y}_B =\mbf{f}_B,\]
where \(\mbf{I}\) is the identity matrix.
The discussion is analogous to the previous cases.\vspace{2mm}
Now let us present some elementary examples; we will treat this issue in more detail below. Let us start with a very simple one.
\begin{example}
Determine using a matrix differential operator the particular solution of the equation
\begin{equation}\label{xx1}
y^{\prime}=x.
\end{equation}
\end{example}
\noindent
\emph{Solution.} $f(x)=x$ and each derivative of $f(x)$ can be expressed as a linear combination of
$\{x, 1\}$. Let $V=\text{span}(x,1)$ with the basis $B=\{x,1\}$. Then
\[\mathcal{D}_B=\begin{bmatrix}
{0} & {0} \\
{1} & {0} \\
\end{bmatrix}.\]
The equation
\[\mathcal{D}_B\mbf{y}_B=[x]_B \]
\[
\begin{bmatrix}
{0} & {0} \\
{1} & {0} \\
\end{bmatrix}
\begin{bmatrix}
{y}_1 \\
{y}_2 \\
\end{bmatrix}=
\begin{bmatrix}
1 \\
0 \\
\end{bmatrix}\]
has no solution. It follows that a particular solution of \eqref{xx1} does not belong to $V$. Let us create a new system of functions $B_1=\{x^2,x\}\cup\{x,1\}=\{x^2,x,1\}$, which is linearly independent.
Then
$$V_1=\text{span}(x^2,x,1),\quad \mathcal{D}_{B_1}=
\begin{bmatrix}
0 & 0 & 0\\
2 & 0 & 0 \\
0 & 1 & 0\\
\end{bmatrix}
$$
The matrix equation
\[\mathcal{D}_{B_1}\mbf{y}_{B_1}=[x]_{B_1} \]
$$\left[\begin{array}{cc|c}
0 & 0 & 0\\ \hline
2 & 0 & 0 \\
0 & 1 & 0\\
\end{array}\right]\begin{bmatrix}
y_1 \\
y_2\\
y_3\\
\end{bmatrix}=\begin{bmatrix}
0 \\
1\\
0\\
\end{bmatrix}$$
has a solution and (Theorem \ref{SLR})
$$\mbf{y}_{B_1}=\begin{bmatrix}
y_1 \\
y_2\\
y_3\\
\end{bmatrix}=\mathcal{D}_{B_1}^{+}\begin{bmatrix}
0 \\
1\\
0\\
\end{bmatrix}$$
According to Theorem \ref{T33}
$$\mathcal{D}_{B_1}^{+}=\left[\begin{array}{c|cc}
0 & \dfrac12 & 0\\
0 & 0 & 1 \\ \hline
0 & 0 & 0\\
\end{array}\right]$$
Thus
$$\mbf{y}_{B_1}=
\begin{bmatrix}
0 & \dfrac12 & 0\\
0 & 0 & 1 \\
0 & 0 & 0\\
\end{bmatrix}\begin{bmatrix}
0 \\
1\\
0\\
\end{bmatrix}=\begin{bmatrix}
\dfrac12 \\
0\\
0\\
\end{bmatrix}
$$
which, in turn, implies the particular solution
$$y=\frac12x^2$$
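This computation can be cross-checked with exact rational arithmetic. A Python sketch (the names `matvec`, `D_B1` and `D_B1_plus` are ours):

```python
from fractions import Fraction

def matvec(M, v):
    """Multiply the matrix M by the coordinate vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

F = Fraction
# D_{B_1} for the basis B_1 = {x^2, x, 1} and its Moore-Penrose
# pseudoinverse, as computed in the example above.
D_B1      = [[F(0), F(0), F(0)],
             [F(2), F(0), F(0)],
             [F(0), F(1), F(0)]]
D_B1_plus = [[F(0), F(1, 2), F(0)],
             [F(0), F(0),    F(1)],
             [F(0), F(0),    F(0)]]

x_B1 = [F(0), F(1), F(0)]          # [x]_{B_1}
y_B1 = matvec(D_B1_plus, x_B1)     # coordinates of the particular solution

assert y_B1 == [F(1, 2), F(0), F(0)]   # y = x^2 / 2
assert matvec(D_B1, y_B1) == x_B1      # sanity check: y' = x
```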
\begin{example}\label{inbase}
Determine the particular solutions of the equations
\begin{itemize}
\item[(a)] \quad $y^{\prime\prime}+3y^\prime-4y=x e^{2x}$
\item[(b)] \quad $y^{\prime\prime}+3y^\prime-4y=2x e^{2x}-3 e^{2x}$
\end{itemize}
\end{example}
\noindent
\textit{Solution.} Every derivative of the functions on the right-hand sides of the equations is a linear combination of the functions $x e^{2x}, e^{2x}$, which are linearly independent. Let us assume that a particular solution of the differential equations belongs to the vector space
$V=\textrm{span}(x e^{2x}, e^{2x})$ with the basis $B=\{x e^{2x}, e^{2x}\}$. The matrix differential operator is
\[\mathcal{D}_B=\begin{bmatrix}
{2} & {0} \\
{1} & {2} \\
\end{bmatrix}\]
\begin{itemize}
\item[(a)]
\[\left(\mathcal{D}_B^2+3\mathcal{D}_B-4\mbf{I}_2\right)
\mbf{y}_B=[x e^{2x}]_B\]
\[\left({\begin{bmatrix}
{2} & {0} \\
{1} & {2} \\
\end{bmatrix}}^2+3\begin{bmatrix}
{2} & {0} \\
{1} & {2} \\
\end{bmatrix}-4\begin{bmatrix}
{1} & {0} \\
{0} & {1} \\
\end{bmatrix}
\right)\mbf{y}_B=\begin{bmatrix}
{1} \\
{0} \\
\end{bmatrix}\]
\[\begin{bmatrix}
{6} & {0} \\
{7} & {6} \\
\end{bmatrix}\mbf{y}_B=\begin{bmatrix}
{1} \\
{0} \\
\end{bmatrix}\]
\[\mbf{y}_B=\begin{bmatrix}
{6} & {0} \\
{7} & {6} \\
\end{bmatrix}^{-1}\begin{bmatrix}
{1} \\
{0} \\
\end{bmatrix}
\renewcommand{\arraystretch}{2}
=\begin{bmatrix}
{\dfrac16} & {0} \\
-\dfrac{7}{36} & \dfrac{1}{6} \\
\end{bmatrix}\begin{bmatrix}
{1} \\
{0} \\
\end{bmatrix}=
\begin{bmatrix}
{\dfrac16} \\
{-\dfrac{7}{36}} \\
\end{bmatrix}\]
\vspace{2mm}
We have found the particular solution of the differential equation in (a)
\[y=\frac16xe^{2x}-\frac{7}{36}e^{2x}.\]
\item[(b)]
\[\mbf{y}_B
\renewcommand{\arraystretch}{2}
=\begin{bmatrix}
{\dfrac16} & {0} \\
-\dfrac{7}{36} & \dfrac{1}{6} \\
\end{bmatrix}\begin{bmatrix}
{2} \\
{-3} \\
\end{bmatrix}=
\begin{bmatrix}
{\dfrac13} \\
{-\dfrac89} \\
\end{bmatrix}\]
\end{itemize}
\[y=\frac13xe^{2x}-\frac89e^{2x}\]
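Both particular solutions can be cross-checked by exact rational arithmetic; substituting them back into the differential equation confirms them. A Python sketch (helper names ours):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

F = Fraction
D = [[F(2), F(0)], [F(1), F(2)]]   # D_B for the basis B = {x e^{2x}, e^{2x}}
I = [[F(1), F(0)], [F(0), F(1)]]

D2 = matmul(D, D)
phi = [[D2[i][j] + 3 * D[i][j] - 4 * I[i][j] for j in range(2)]
       for i in range(2)]           # phi(D_B) = D_B^2 + 3 D_B - 4 I
det = phi[0][0] * phi[1][1] - phi[0][1] * phi[1][0]
phi_inv = [[ phi[1][1] / det, -phi[0][1] / det],
           [-phi[1][0] / det,  phi[0][0] / det]]

y_a = matvec(phi_inv, [F(1), F(0)])    # right-hand side x e^{2x}
y_b = matvec(phi_inv, [F(2), F(-3)])   # right-hand side 2x e^{2x} - 3 e^{2x}

assert phi == [[F(6), F(0)], [F(7), F(6)]]
assert y_a == [F(1, 6), F(-7, 36)]     # y = (1/6) x e^{2x} - (7/36) e^{2x}
assert y_b == [F(1, 3), F(-8, 9)]      # y = (1/3) x e^{2x} - (8/9) e^{2x}
```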
\begin{example}\label{2times}
Find the particular solution of the differential equation
\begin{equation}\label{p2times}
y^{IV}+2y^{\prime\prime}+y=2\sin x-4\cos x
\end{equation}
\end{example}
\noindent
\textit{Solution.} It is easy to verify that for the particular solution $y$ of the differential equation \eqref{p2times}
\[\aligned
y\notin V_0&=\text{span}(\sin x, \cos x)\\
y\notin V_1&=\text{span}(x\sin x, x\cos x, \sin x, \cos x)\\
\endaligned \]
Let's assume that $y\in V=\text{span}(x^2\sin x, x^2\cos x, x\sin x, x\cos x, \sin x, \cos x)$. Since the set of functions
\[B=\{x^2\sin x, x^2\cos x, x\sin x, x\cos x, \sin x, \cos x\}\]
is linearly independent, it is a basis for $V$. The corresponding matrix differential operator
\[\mathcal{D}_B=\begin{bmatrix}
0 &-1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0\\
2 & 0 & 0 &-1 & 0 & 0 \\
0 & 2 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 &-1\\
0 & 0 & 0 & 1 & 1& 0 \\
\end{bmatrix}\]
Then
\[\aligned
\left(\mathcal{D}_B^4+2\cdot\mathcal{D}_B^2+\mbf{I}_6\right)\mbf{y}_B&=
[2\sin x-4\cos x]_B\\
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
-8 & 0 & 0 & 0 & 0 & 0\\
0 & -8 & 0 & 0 & 0 & 0\\
\end{bmatrix}\mbf{y}_B&=\begin{bmatrix}
0\\
0\\
0\\
0\\
2\\
-4\\
\end{bmatrix}\\
\mbf{y}_B=\left[\begin{array}{cc|cccc}
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\ \hline
-8 & 0 & 0 & 0 & 0 & 0\\
0 & -8 & 0 & 0 & 0 & 0\\
\end{array}\right]^{+}&\begin{bmatrix}
0\\
0\\
0\\
0\\
2\\
-4\\
\end{bmatrix}\\
\mbf{y}_B=\left[\begin{array}{cccc|cc}
0 & 0 & 0 & 0 & -\frac{1}{8} & 0\\
0 & 0 & 0 & 0 & 0 & -\frac{1}{8}\\ \hline
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
\end{array}\right]
\begin{bmatrix}
0\\
0\\
0\\
0\\
2\\
-4\\
\end{bmatrix}
&=\begin{bmatrix}
-\frac{1}{4}\\
\frac{1}{2}\\
0\\
0\\
0\\
0\end{bmatrix}
\endaligned \]
The Moore-Penrose pseudoinverse of the matrix $\left(\mathcal{D}_B^4+2\,\mathcal{D}_B^2+\mbf{I}_6\right)$ was found using Theorem \ref{T33}. Then
\[y=-\frac14x^2\sin x+\frac12x^2\cos x\]
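The $6\times6$ computation in this example can be reproduced programmatically with exact arithmetic. A Python sketch (helper names ours):

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

F = Fraction
# D_B for the basis B = {x^2 sin x, x^2 cos x, x sin x, x cos x, sin x, cos x}
D = [[F(c) for c in row] for row in [
    [0, -1, 0,  0, 0,  0],
    [1,  0, 0,  0, 0,  0],
    [2,  0, 0, -1, 0,  0],
    [0,  2, 1,  0, 0,  0],
    [0,  0, 1,  0, 0, -1],
    [0,  0, 0,  1, 1,  0]]]
I = [[F(1) if i == j else F(0) for j in range(6)] for i in range(6)]

D2 = matmul(D, D)
D4 = matmul(D2, D2)
phi = [[D4[i][j] + 2 * D2[i][j] + I[i][j] for j in range(6)]
       for i in range(6)]   # phi(D_B) = D_B^4 + 2 D_B^2 + I

# phi is zero except for the -8 entries in rows 5 and 6, so its
# pseudoinverse has -1/8 in rows 1 and 2 (columns 5 and 6).
expected = [[F(0)] * 6 for _ in range(6)]
expected[4][0] = expected[5][1] = F(-8)
assert phi == expected

phi_plus = [[F(0)] * 6 for _ in range(6)]
phi_plus[0][4] = phi_plus[1][5] = F(-1, 8)

y = matvec(phi_plus, [F(0), F(0), F(0), F(0), F(2), F(-4)])
assert y == [F(-1, 4), F(1, 2), F(0), F(0), F(0), F(0)]
# i.e. y = -(1/4) x^2 sin x + (1/2) x^2 cos x
```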
\section{Matrix differential operator and the method of undetermined coefficients}
In order to describe the algorithm for the finding of a particular solution of
an ordinary nonhomogeneous linear differential equation with constant
coefficients \eqref{DRD} using matrix differential operator, we use knowledge of the algorithm for finding a particular solution by an
undetermined coefficients method.
Let us consider a differential equation \eqref{DRD} with a characteristic
equation
\begin{equation}\label{chro}
a_{n}k^{n} + a_{n - 1}k^{n - 1} + \ldots + a_{1}k + a_{0} = 0
\end{equation}
First, we make two conventions to simplify expressing the root
multiplicity of the characteristic equation and the value of the
function \(f(x) = x^{0}\) at the discontinuity point.
\begin{itemize}
\item[1.] If the number \(\alpha\) is not the root of the characteristic
equation \eqref{chro}, we will also say that it is a 0-fold root of the characteristic equation;\newline if the number \(\alpha\) is a simple root of the characteristic equation \eqref{chro}, we will also say that it is a 1-fold root of the characteristic equation, and so on.
\item[2.] Given the removable discontinuity of the function
\(f(x) = x^{0}\) at the point $x = 0$, we define the function at this point as \(f(0) = 1.\)
\end{itemize}
Let us consider two cases of the right-hand side of the differential equation
\eqref{DRD}. We will describe the algorithm for finding a particular solution
using a matrix differential operator:
\begin{itemize}
\item[a)] with the right-hand side \eqref{DRD}
\begin{equation}\label{epx}
f(x) = e^{\alpha x}P_{m}(x)
\end{equation}
where \(\alpha\in \mathbb{R}\) and \(P_{m}(x)\) is a polynomial of degree \(m\). The algorithm to determine the particular solution of that differential equation, by the method of undetermined coefficients, says:\newline
If $\alpha$ is the \(k\)-fold root ($k = 0,1, 2, \ldots,$) of the characteristic equation \eqref{chro} of the differential equation \eqref{DRD} with the right-hand side \eqref{epx}, then the particular solution of this equation is of the form
\begin{equation}\label{repx}
y = x^{k}e^{\alpha x}Q_{m}(x),
\end{equation}
where
\(Q_{m}(x) = A_{m}x^{m} + A_{m - 1}x^{m - 1} + \ldots + A_{1}x + A_{0}\)
is a polynomial of degree \(m\) with undetermined coefficients.
The values of these undetermined coefficients can be determined by substituting \eqref{repx} for \(y\) into the equation \eqref{DRD} and then comparing the coefficients for the same functions on the right-hand and left-hand sides of the equation. The algorithm for determining the particular solution of the differential equation \eqref{DRD} with the right-hand side \eqref{repx} by the method of
undetermined coefficients implies that this particular solution will be
in the vector space
\[V = \text{span}( x^{k + m}e^{\alpha x},x^{k + m - 1}e^{\alpha x},\ldots,
xe^{\alpha x},e^{\alpha x})\]
with the basis for \(V\)
\begin{equation}\label{bepq}
B = \left\{ x^{k + m}e^{\alpha x},x^{k + m - 1}e^{\alpha x},\ldots,
xe^{\alpha x},e^{\alpha x} \right\}
\end{equation}
The relevant matrix differential operator \(\mathcal{D}_B\) in the vector
space \(V\) with the basis \(B\) is a matrix of the type
\(( k + m + 1 ) \times ( k + m + 1 ).\)
\item[b)] with the right-hand side
\begin{equation}\label{espx}
f(x) = e^{\alpha x}\left(P_{r}(x)\sin{\beta x} + Q_{s}(x)\cos{\beta x} \right),
\end{equation}
where \(\alpha,\ \beta\) are real numbers, \(P_{r}(x)\) is a polynomial of degree \(r\),
\(Q_{s}(x)\) is a polynomial of degree \(s\).\newline
If \(\alpha + \beta i \) is a \(k\)-fold root ($k = 0,1, 2, \ldots,$) of
the characteristic equation \eqref{chro} of the differential equation \eqref{DRD} with
the right-hand side \eqref{espx}, then the particular solution of this equation has
the form
\begin{equation}\label{respx}
y = x^{k}e^{\alpha x}\left( U_{m}(x)\sin{\beta x} + V_{m}(x)\cos{\beta x}\right)
\end{equation}
where
\(m = \max{\{ r,s\}}\),
\(U_{m}(x) = A_{m}x^{m} + A_{m - 1}x^{m - 1} + \ldots + A_{1}x + A_{0}\),
\(V_{m}(x) = B_{m}x^{m} + B_{m - 1}x^{m - 1} + \ldots + B_{1}x + B_{0}\)
are polynomials of degree \(m\) with undetermined coefficients. The values of these undetermined coefficients can be determined by substituting \eqref{respx} for \(y\) into the equation \eqref{DRD} and then comparing the coefficients for the same functions on the right-hand and left-hand sides of the equation. The algorithm for determining the particular solution of the differential equation \eqref{DRD} with the right-hand side \eqref{respx} by the method of undetermined coefficients implies that this particular solution will be in the vector space
\[\begin{array}{l}
V = \text{span}( x^{k + m}e^{\alpha x}\sin{\beta x},x^{k + m}e^{\alpha x}\cos{\beta x},x^{k + m - 1}e^{\alpha x}\sin{\beta x},\\ x^{k + m - 1}e^{\alpha x}\cos{\beta x},\ldots,
xe^{\alpha x}\sin{\beta x}, xe^{\alpha x}\cos{\beta x},e^{\alpha x}\sin{\beta x}, \\
e^{\alpha x}\cos{\beta x})
\end{array}\]
with the basis for \(V\)
\begin{equation}\label{brespx}
\begin{array}{l}
B = \left\{ x^{k + m}e^{\alpha x}\sin{\beta x},x^{k + m}e^{\alpha x}\cos{\beta x},x^{k + m - 1}e^{\alpha x}\sin{\beta x},\right.\\ x^{k + m - 1}e^{\alpha x}\cos{\beta x},\ldots,
xe^{\alpha x}\sin{\beta x}, xe^{\alpha x}\cos{\beta x},e^{\alpha x}\sin{\beta x},\\
\left.e^{\alpha x}\cos{\beta x}\right\}
\end{array}
\end{equation}
The corresponding matrix differential operator \(\mathcal{D}_B\) in the vector space \(V\) with the basis \(B\) is a matrix of size \( 2(k + m + 1) \times 2(k + m + 1) .\)
\end{itemize}
Subsequently, in both cases a) and b), we create a matrix equation
\begin{equation}
\left( a_{n}\mathcal{D}_B^{n} + a_{n - 1}\mathcal{D}_B^{n - 1} + \cdots + a_{1}\mathcal{D}_B + a_{0}\mbf{I} \right)\mbf{y}_B = \mbf{f}_B
\end{equation}
where \(\mbf{f}_B=[f(x)]_B\) and $\mbf{y}_B=[y(x)]_B$, where $y(x)$ is a particular solution of the differential equation \eqref{DRD}.
\vspace{2mm}
If the right-hand side of the differential equation \eqref{DR} is the sum of several functions, then the principle of superposition can be used to solve it. \emph{The principle of superposition of solutions} says that if $y_i$ $(i=1,2,\dots m)$ is a solution of the differential equation
$$a_{n}y^{(n)} + a_{n - 1}y^{(n - 1)} + \cdots + a_{1}y^{\prime} + a_{0}y = f_i(x)$$
$(i=1,2, \dots m)$, then for any constants $k_1,k_2,\dots,k_m$, the function
$$y=k_1y_1+k_2y_2+\cdots+k_my_m$$ is a solution to the differential equation \eqref{DR} with
$$f(x)=k_1f_1(x)+k_2f_2(x)+\cdots+k_mf_m(x)$$
In the special case when $f_1(x),f_2(x),\dots,f_m(x)$ form the basis $B$ of the vector space $V=\text{span}(f_1(x), f_2(x),\dots,f_m(x))$, it is enough to solve only the equation
$$\left( a_{n}\mathcal{D}_B^{n} + a_{n - 1}\mathcal{D}_B^{n - 1} + \cdots + a_{1}\mathcal{D}_B + a_{0}\mbf{I} \right)\mbf{y}_B = \begin{bmatrix}
k_1\\
k_2\\
\vdots\\
k_m
\end{bmatrix}_B$$
\vspace{2mm}
In the next example, we want to show how the matrix differential operator can be used to support the finding of a particular solution of the differential equation by the method of undetermined coefficients.
\begin{example}\label{aaf} Determine the particular solution of the differential equation
\begin{equation}\label{raaf}
(D-2)^2(D+4)^2y=3e^{2x}
\end{equation}
\end{example}
\noindent\textit{Solution.} First we solve the equation
\begin{equation}\label{aag}
(D+4)^2y=3e^{2x}
\end{equation}
Since $\alpha=2$ is not a root of the characteristic equation $(k+4)^2=0$, it follows that the solution of \eqref{aag} belongs to the vector space $V=\text{span}(e^{2x})$ with the basis $B_1=\{e^{2x}\}$. Then the matrix differential operator is
\[\mathcal{D}_{B_1}=[2]\]
and
\[\aligned
(\mathcal{D}_{B_1}+4\mbf{I}_1)^2\mbf{{y}_1}_{B_1}&=[3e^{2x}]_{B_1}\\
([2]+4[1])^2\mbf{{y}_1}_{B_1}&=[3]\\
[6]^2\mbf{{y}_1}_{B_1}&=[3]\\
\mbf{{y}_1}_{B_1}&=[6]^{-2}[3]\\
\mbf{{y}_1}_{B_1}&=\left[\frac1{12}\right]\\
y_1&=\frac1{12}e^{2x}\\
\endaligned\]
Now we have to determine the particular solution of the differential equation
\begin{equation}\label{aah}
(D-2)^2y=\frac1{12}e^{2x}
\end{equation}
Because $\alpha=2$ is a 2-fold root of the characteristic equation $(k-2)^2=0$, it follows that the solution of \eqref{aah} belongs to the vector space\linebreak
$V=\text{span}(x^2e^{2x}, xe^{2x}, e^{2x})$ with the basis
$B=\{x^2e^{2x}, xe^{2x}, e^{2x}\}$.
Then the matrix differential operator is
$$\mathcal{D}_B=\begin{bmatrix}
2&0&0\\
2&2&0\\
0&1&2\\
\end{bmatrix} $$
and
$$\aligned
(\mathcal{D}_B-2\mbf{I}_3)^2\mbf{y}_B&=\left[\frac1{12}e^{2x}\right]_B\\
\left[\begin{array}{c|cc}
0&0&0\\
0&0&0\\\hline
2&0&0\\
\end{array}
\right]\mbf{y}_B&=\begin{bmatrix}
0\\
0\\
\dfrac1{12}\\
\end{bmatrix}\\
\mbf{y}_B&=
\left[\begin{array}{c|cc}
0&0&0\\
0&0&0\\\hline
2&0&0\\
\end{array}
\right]^+\begin{bmatrix}
0\\
0\\
\dfrac1{12}\\
\end{bmatrix}\\
\endaligned$$
\[\mbf{y}_B=
\left[\begin{array}{cc|c}
0&0&\dfrac12\\\hline
0&0&0\\
0&0&0\\
\end{array}
\right]\begin{bmatrix}
0\\
0\\
\dfrac1{12}\\
\end{bmatrix}=\begin{bmatrix}
\dfrac1{24}\\
0\\
0\\
\end{bmatrix}\]
The particular solution of the differential equation \eqref{raaf} is
\[y=\frac1{24}x^2e^{2x}\]
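The two-stage computation of Example \ref{aaf} can also be checked in one shot. The sketch below (our own verification, assuming NumPy is available) builds the full operator $(D-2)^2(D+4)^2$ in coordinates on $\{x^2e^{2x}, xe^{2x}, e^{2x}\}$ and applies the Moore-Penrose pseudoinverse to the resulting singular matrix:

```python
import numpy as np
from numpy.linalg import matrix_power, pinv

# Operator of d/dx on the basis {x^2 e^{2x}, x e^{2x}, e^{2x}}
DB = np.array([[2., 0, 0], [2, 2, 0], [0, 1, 2]])
I = np.eye(3)

# (D-2)^2 (D+4)^2 in coordinates; the right-hand side 3 e^{2x} is [0, 0, 3]
A = matrix_power(DB - 2 * I, 2) @ matrix_power(DB + 4 * I, 2)
yB = pinv(A) @ np.array([0., 0, 3])
print(yB)  # first coordinate is 1/24, i.e. y = (1/24) x^2 e^{2x}
```

The first coordinate comes out as $1/24$, confirming the particular solution above.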
\begin{example} Determine the particular solution of the differential equation
\begin{equation}\label{aab}
y^{\prime\prime} - 4y^\prime + 13y = 2xe^{2x}\cos{3x}
\end{equation}
using a pseudoinverse matrix differential operator.
\end{example}
\noindent\textit{Solution.} Because $2 +3i$ is a simple (1-fold) root of the characteristic equation\linebreak
$k^2-4k+13=0$, it follows that the particular solution of \eqref{aab} will be (due to \eqref{espx},\eqref{respx}) in the form
\[y = xe^{2x}\left( (Ax + B)\sin{3x} + (Cx + D)\cos{3x} \right)\]
In other words, the particular solution \(y\) will be in the vector space
\[V = \text{span}(x^{2}e^{2x}\sin{3x},x^{2}e^{2x}\cos{3x},xe^{2x}\sin{3x},xe^{2x}\cos{3x},e^{2x}\sin{3x}, e^{2x}\cos{3x})\]
with the basis
\[B = \left\{ x^{2}e^{2x}\sin{3x},x^{2}e^{2x}\cos{3x},xe^{2x}\sin{3x},xe^{2x}\cos{3x},e^{2x}\sin{3x}, e^{2x}\cos{3x} \right\}\]
The matrix differential operator
\[\mathcal{D}_B=\begin{bmatrix}
2 & - 3 & 0 & 0 & 0 & 0 \\
3 & 2 & 0 & 0 & 0 & 0 \\
2 & 0 & 2 & - 3 & 0 & 0 \\
0 & 2 & 3 & 2 & 0 & 0 \\
0 & 0 & 1 & 0 & 2 & - 3 \\
0 & 0 & 0 & 1 & 3 & 2 \\
\end{bmatrix}\]
\[\left( \mathcal{D_B}^{2} - 4\mathcal{D}_B +13\mbf{I}_{6} \right)\mbf{y}_B =[2xe^{2x}\cos{3x}]_B\]
After simplifying the previous equation, we get
\[
\left[\begin{array}{cccc|cc}
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\\hline
0 & -12 & 0 & 0 & 0 & 0\\
12 & 0 & 0 & 0 & 0 & 0\\
2 & 0 & 0 & -6 & 0 & 0\\
0 & 2 & 6 & 0 & 0 & 0\\
\end{array}\right]
\mbf{y}_B=\begin{bmatrix}
0 \\
0 \\
0 \\
2 \\
0 \\
0 \\
\end{bmatrix}\]
\[\mbf{y}_B=\left[\begin{array}{cccc|cc}
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\\hline
0 & -12 & 0 & 0 & 0 & 0\\
12 & 0 & 0 & 0 & 0 & 0\\
2 & 0 & 0 & -6 & 0 & 0\\
0 & 2 & 6 & 0 & 0 & 0\\
\end{array}\right]^{+}
\begin{bmatrix}
0 \\
0 \\
0 \\
2 \\
0 \\
0 \\
\end{bmatrix}\]
In accordance with Theorem \ref{T33} we determine the Moore-Penrose pseudoinverse. Then
\[\mbf{y}_B=
\renewcommand{\arraystretch}{1.9}
\left[\begin{array}{cc|cccc}
0 & 0 & 0 & \dfrac{1}{12} & 0 & 0\\
0 & 0 & -\dfrac{1}{12} & 0 & 0 & 0\\
0 & 0 & \dfrac{1}{36} & 0 & 0 & \dfrac{1}{6}\\
0 & 0 & 0 & \dfrac{1}{36} & -\dfrac{1}{6} & 0\\\hline
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
\end{array}\right]
\begin{bmatrix}
0 \\
0 \\
0 \\
2 \\
0 \\
0 \\
\end{bmatrix}=\begin{bmatrix}
\dfrac{1}{6}\\
0\\
0\\
\dfrac{1}{18}\\
0\\
0\\
\end{bmatrix}\]
So, the particular solution is
\[y_{p} = \frac{1}{6}x^{2}e^{2x}\sin{3x} + \frac{1}{18} xe^{2x}\cos{3x}\qed \]
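This $6\times 6$ pseudoinverse computation is easy to reproduce numerically; the sketch below (our own check, assuming NumPy) lets \texttt{pinv} do the work instead of partitioning the matrix by hand:

```python
import numpy as np

# Operator of d/dx on {x^2 e^{2x} sin 3x, x^2 e^{2x} cos 3x, ..., e^{2x} cos 3x}
DB = np.array([
    [2, -3, 0,  0, 0,  0],
    [3,  2, 0,  0, 0,  0],
    [2,  0, 2, -3, 0,  0],
    [0,  2, 3,  2, 0,  0],
    [0,  0, 1,  0, 2, -3],
    [0,  0, 0,  1, 3,  2]], dtype=float)

A = DB @ DB - 4 * DB + 13 * np.eye(6)          # D^2 - 4D + 13 in coordinates
b = np.array([0, 0, 0, 2, 0, 0], dtype=float)  # [2 x e^{2x} cos 3x]_B
yB = np.linalg.pinv(A) @ b
print(yB)  # [1/6, 0, 0, 1/18, 0, 0]
```

The nonzero coordinates $1/6$ and $1/18$ match the particular solution $y_p$ above.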
\begin{example} Find the integral
\[\int\left( 13xe^{2x}\sin{3x} - 13xe^{2x}\cos{3x} + 5e^{2x}\sin{3x} - 4e^{2x}\cos{3x} \right)dx\]
\end{example}
\noindent\textit{Solution}. We want to solve the differential equation
\[Dy = 13xe^{2x}\sin{3x} - 13xe^{2x}\cos{3x} + 5e^{2x}\sin{3x} - 4e^{2x}\cos{3x}\]
The solution will be in the vector space
\[V = \text{span}(xe^{2x}\sin{3x}, xe^{2x}\cos{3x}, e^{2x}\sin{3x}, e^{2x}\cos{3x})\]
with the basis
\[B = \left\{ xe^{2x}\sin{3x},xe^{2x}\cos{3x},e^{2x}\sin{3x},e^{2x}\cos{3x} \right\}\]
In \eqref{schemat} we calculated
\[\mathcal{D}_B =\begin{bmatrix}
2 & - 3 & 0 & 0 \\
3 & 2 & 0 & 0 \\
1 & 0 & 2 & - 3 \\
0 & 1 & 3 & 2 \\
\end{bmatrix}\]
We have to solve the matrix equation
\[\mathcal{D}_B\mbf{y}_B=\begin{bmatrix}
13 \\
- 13 \\
5 \\
- 4 \\
\end{bmatrix}\]
Then
\[\mbf{y}_B=\mathcal{D}_B^{-1}\begin{bmatrix}
13 \\
- 13 \\
5 \\
- 4 \\
\end{bmatrix} =
\renewcommand{\arraystretch}{1.9}
\left[\begin{array}{cccc}
\dfrac{2}{13} & \dfrac{3}{13} & 0 & 0 \\
- \dfrac{3}{13} & \dfrac{2}{13} & 0 & 0 \\
\dfrac{5}{169} & - \dfrac{12}{169} & \dfrac{2}{13} & \dfrac{3}{13} \\
\dfrac{12}{169} & \dfrac{5}{169} & - \dfrac{3}{13} & \dfrac{2}{13} \\
\end{array}\right]\begin{bmatrix}
13 \\
- 13 \\
5 \\
- 4 \\
\end{bmatrix} = \begin{bmatrix}
- 1 \\
- 5 \\
\dfrac{15}{13} \\
- \dfrac{16}{13} \\
\end{bmatrix}\]
We have calculated that
\[\aligned
\int{\left( 13xe^{2x}\sin{3x} - 13xe^{2x}\cos{3x} + 5e^{2x}\sin{3x} - 4e^{2x}\cos{3x} \right)dx} \\
=- xe^{2x}\sin{3x} - 5xe^{2x}\cos{3x} + \frac{15}{13}e^{2x}\sin{3x} - \frac{16}{13}e^{2x}\cos{3x} + C
\endaligned\]\qed
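Since $\mathcal{D}_B$ is invertible here, the coordinate vector of the antiderivative is just the solution of a linear system; the sketch below (our own check, assuming NumPy) avoids computing $\mathcal{D}_B^{-1}$ explicitly:

```python
import numpy as np

# d/dx on {x e^{2x} sin 3x, x e^{2x} cos 3x, e^{2x} sin 3x, e^{2x} cos 3x}
DB = np.array([
    [2, -3, 0,  0],
    [3,  2, 0,  0],
    [1,  0, 2, -3],
    [0,  1, 3,  2]], dtype=float)

f = np.array([13, -13, 5, -4], dtype=float)  # coordinates of the integrand
yB = np.linalg.solve(DB, f)                  # D_B is invertible here
print(yB)  # [-1, -5, 15/13, -16/13]
```

The result $[-1,\,-5,\,15/13,\,-16/13]$ agrees with the antiderivative computed above.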
Sometimes it is useful to combine a differential operator with a matrix differential operator.
Relationship I in Table 2 allows us to reduce a matrix differential operator by two rows and two columns. In the following we will give an example of this.
\begin{example}{\label{domo}} Find a particular solution of
\begin{equation}\label{edomo}
(D^{2} - 5D + 16)y=xe^{2x}\sin{3x}
\end{equation}
\end{example}
\noindent\emph{Solution.} Using relationship I in Table 2, we have
\begin{equation}\label{rdomo}
\begin{aligned}
y=&\frac1{D^{2} - 5D + 16}xe^{2x}\sin{3x} \\
&=\left( x - \frac1{D^{2} - 5D + 16}(2D - 5) \right)\frac1{D^{2} - 5D + 16}e^{2x}\sin{3x}\\
\end{aligned}
\end{equation}
All we need to do is to create a $2\times 2$ matrix differential operator instead of a $4\times 4$ one. The particular solution of \eqref{edomo} belongs to the vector space \linebreak
\(V = \text{span}\left( xe^{2x}\sin{3x}, xe^{2x}\cos{3x},e^{2x}\sin{3x}, e^{2x}\cos{3x}\right)\). Using \eqref{rdomo}, the computation can be reduced to the vector space
\(V' = \text{span}\left(e^{2x}\sin{3x}, e^{2x}\cos{3x}\right)\) with the basis
\(B = \{e^{2x}\sin{3x}, e^{2x}\cos{3x}\}\). \par
Then
\[\mathcal{D}_B =\begin{bmatrix}
2 & - 3 \\
3 & 2 \\
\end{bmatrix}\]
\[\mathcal{D}_B^{2}- 5\mathcal{D}_B+16\mbf{I}_{2}=\begin{bmatrix}
1 & 3 \\
- 3 & 1 \\
\end{bmatrix}\]
\[\left(\mathcal{D}_B^{2}- 5\mathcal{D}_B+16\mbf{I}_{2}\right)^{-1}=
\renewcommand{\arraystretch}{2}
\begin{bmatrix}
\dfrac{1}{10} & - \dfrac{3}{10} \\
\dfrac{3}{10} & \dfrac{1}{10} \\
\end{bmatrix}\]
\[2\mathcal{D}_B - 5\mbf{I}_{2}=\begin{bmatrix}
- 1 & - 6 \\
6 & - 1 \\
\end{bmatrix}\]
where \(\mbf{I}_{2}\) is the $2\times 2$ identity matrix.
The right-hand side of the relationship \eqref{rdomo} expressed by the differential operator is
\[\left( x\mbf{I}_{2} - \left( \mathcal{D}_B^{2}- 5\mathcal{D}_B+16\mbf{I}_{2}
\right)^{- 1}\left(2\mathcal{D}_B - 5\mbf{I}_{2}\right) \right)\left( \mathcal{D}_B^{2}- 5
\mathcal{D}_B+16\mbf{I}_{2} \right)^{- 1}\begin{bmatrix}
1 \\
0 \\
\end{bmatrix}=\]
\[\left( x \renewcommand{\arraystretch}{2}
\begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix} - \begin{bmatrix}
\dfrac{1}{10} & - \dfrac{3}{10} \\
\dfrac{3}{10} & \dfrac{1}{10} \\
\end{bmatrix}\begin{bmatrix}
- 1 & - 6 \\
6 & - 1 \\
\end{bmatrix} \right) \renewcommand{\arraystretch}{2}
\begin{bmatrix}
\dfrac{1}{10} & - \dfrac{3}{10} \\
\dfrac{3}{10} & \dfrac{1}{10} \\
\end{bmatrix}\begin{bmatrix}
1 \\
0 \\
\end{bmatrix}=\]
\[\left( \begin{bmatrix}
x & 0 \\
0 & x \\
\end{bmatrix} - \renewcommand{\arraystretch}{2}
\begin{bmatrix}
\dfrac{1}{10} & - \dfrac{3}{10} \\
\dfrac{3}{10} & \dfrac{1}{10} \\
\end{bmatrix}\begin{bmatrix}
- 1 & - 6 \\
6 & - 1 \\
\end{bmatrix} \right)\renewcommand{\arraystretch}{2}
\begin{bmatrix}
\dfrac{1}{10} \\
\dfrac{3}{10} \\
\end{bmatrix} =\]
\[\renewcommand{\arraystretch}{2}
\begin{bmatrix}
\dfrac{1}{10}x \\
\dfrac{3}{10}x \\
\end{bmatrix} -
\begin{bmatrix}\renewcommand{\arraystretch}{2}
\dfrac{1}{10} & - \dfrac{3}{10} \\
\dfrac{3}{10} & \dfrac{1}{10} \\
\end{bmatrix}\renewcommand{\arraystretch}{2}
\begin{bmatrix}
- \dfrac{19}{10} \\
\dfrac{3}{10} \\
\end{bmatrix} = \renewcommand{\arraystretch}{2}
\begin{bmatrix}
\dfrac{1}{10}x + \dfrac{7}{25} \\
\dfrac{3}{10}x + \dfrac{27}{50} \\
\end{bmatrix}\]
We can rewrite this using functions as
\[\frac{1}{D^2 - 5D + 16}xe^{2x}\sin{3x} = \left( \frac{1}{10}x + \frac{7}{25} \right)e^{2x}\sin{3x} + \left( \frac{3}{10}x + \frac{27}{50} \right)e^{2x}\cos{3x}\]
so the particular solution of the differential equation \eqref{edomo} is
\[y=\left( \frac{1}{10}x + \frac{7}{25} \right)e^{2x}\sin{3x} + \left( \frac{3}{10}x + \frac{27}{50} \right)e^{2x}\cos{3x}\] \qed
In the previous example, the exponent arising from the right-hand side of the differential equation \eqref{edomo} was not a root of the corresponding characteristic equation of \eqref{edomo}.
In the next example it will be.
\begin{example}\label{coi} Determine the~particular solution of the equation
\begin{equation}\label{ecoi}
\left( D^{2} + 1 \right)y = x\cos x
\end{equation}
\end{example}
\emph{Solution.} The complex number $ i$ is a simple root of the characteristic equation \( k^{2} + 1 = 0\), therefore the particular solution of the differential equation
\eqref{ecoi} will be in the vector space
\[V =\textrm{span}\left(x^{2}\sin{x}, x^{2}\cos{x}, x\sin x, x\cos{x},\sin x,\cos{x}\right)\]
with the basis for $V$
\[B = \{x^2\sin{x}, x^{2}\cos x, x\sin x, x\cos{x},\sin x,\cos x\}\]
The pseudoinverse matrix differential operator will be a $6\times 6$ matrix.
We will present three methods of solution: using a matrix differential operator, a combined method, and using a complex variable.\\
a) using a matrix differential operator\\
We have the matrix differential operator
\[\mathcal{D}_B=\begin{bmatrix}
0 & -1 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0\\
2 & 0 & 0 & -1 & 0 & 0\\
0 & 2 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & -1\\
0 & 0 & 0 & 1 & 1 & 0\end{bmatrix}\]
\[(\mathcal{D}_B^2+\mbf{I}_6)\mbf{y}_B=[x\cos x]_B\]
\[\left[\begin{array}{cccc|cc}
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\\hline
0 & -4 & 0 & 0 & 0 & 0\\
4 & 0 & 0 & 0 & 0 & 0\\
2 & 0 & 0 & -2 & 0 & 0\\
0 & 2 & 2 & 0 & 0 & 0
\end{array}\right]\mbf{y}_B=
\begin{bmatrix}0\\
0\\
0\\
1\\
0\\
0\end{bmatrix}\]
Using a Moore-Penrose pseudoinverse matrix (Theorems \ref{T33} and \ref{SLR}) we have
\[\mbf{y}_B=(\mathcal{D}_B^2+\mbf{I}_6)^+[x\cos x]_B\]
\[\mbf{y}_B=
\left[\begin{array}{cc|cccc}
0 & 0 & 0 & \frac{1}{4} & 0 & 0\\
0 & 0 & -\frac{1}{4} & 0 & 0 & 0\\
0 & 0 & \frac{1}{4} & 0 & 0 & \frac{1}{2}\\
0 & 0 & 0 & \frac{1}{4} & -\frac{1}{2} & 0\\\hline
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]
\begin{bmatrix}0\\
0\\
0\\
1\\
0\\
0\end{bmatrix}=
\begin{bmatrix}\frac{1}{4}\\
0\\
0\\
\frac{1}{4}\\
0\\
0\end{bmatrix}\]
So the particular solution of \eqref{ecoi} is
\[y=\frac14x^2\sin x+\frac14x\cos x\]
b) We can reduce the size of the matrix $\mathcal{D}_B$ by using relationship I in Table 2.
\[\frac{1}{D^{2} + 1}x\cos x = x\frac{1}{D^{2} + 1}\cos x - \frac{1}{D^{2} + 1}2D
\frac{1}{D^{2} + 1}\cos x \]
Let's solve \(\dfrac{1}{D^{2} + 1}\cos x\) which corresponds to the particular solution of the differential equation $(D^2+1)y=\cos x$. This particular solution belongs to the vector space
\[V = \text{span}(x\sin x, x\cos x, \sin x, \cos x) \]
with the basis
\(B = \{ x\sin x, x\cos x, \sin x, \cos x\} \)
Then
\[\mathcal{D}_{B}=\begin{bmatrix}
0 & - 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
1 & 0 & 0 & - 1 \\
0 & 1 & 1 & 0 \\
\end{bmatrix}\]
\[\aligned
(\mathcal{D}^2_{B}+\mbf{I}_4)\mbf{y}_B&=[\cos x]_B\\
\left[\begin{array}{cc|cc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\\hline
0 & - 2 & 0 & 0 \\
2 & 0 & 0 & 0 \\
\end{array}\right]\mbf{y}_B&=\begin{bmatrix}
0 \\
0 \\
0 \\
1 \\
\end{bmatrix}
\endaligned\]
\[\aligned
\mbf{y}_B&=\left[\begin{array}{cc|cc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\\hline
0 & - 2 & 0 & 0 \\
2 & 0 & 0 & 0 \\
\end{array}\right]^+\begin{bmatrix}
0 \\
0 \\
0 \\
1 \\
\end{bmatrix}\\
\mbf{y}_B&=\left[\begin{array}{cc|cc}
0 & 0 & 0 & \frac{1}{2} \\
0 & 0 & - \frac{1}{2} & 0 \\\hline
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}\right]\begin{bmatrix}
0 \\
0 \\
0 \\
1 \\
\end{bmatrix}=
\begin{bmatrix}
\frac{1}{2} \\
0 \\
0 \\
0 \\
\end{bmatrix}
\endaligned \]
We have calculated that
\[\frac{1}{D^{2} + 1}\cos x = \frac{x\sin x}{2}\]
Similarly
\[\left(\frac{1}{D^{2} + 1}\sin x\right)_B =
\left[\begin{array}{cc|cc}
0 & 0 & 0 & \frac{1}{2} \\
0 & 0 & - \frac{1}{2} & 0 \\\hline
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}\right]\begin{bmatrix}
0 \\
0 \\
1 \\
0 \\
\end{bmatrix}=
\begin{bmatrix}
0 \\
-\frac12 \\
0 \\
0 \\
\end{bmatrix}\]
So
\[\frac{1}{D^{2} + 1}\sin x=-\frac{x\cos x}{2}\]
Let's continue to calculate
\[\frac{1}{D^{2} + 1}x\cos x = x\frac{x\sin x}{2}- \frac{1}{D^{2} + 1}2D
\frac{1}{D^{2} + 1}\cos x \]
\[\frac{1}{D^{2} + 1}x\cos x = x\frac{x\sin x}{2}- \frac{1}{D^{2} + 1}(\sin x +x\cos x) \]
\[\frac{1}{D^{2} + 1}x\cos x = x\frac{x\sin x}{2}+\frac{x\cos x}2 -\frac{1}{D^{2} + 1}x\cos x\]
From this, by expressing $\dfrac{1}{D^{2} + 1}x\cos x$, we have
\[\frac{1}{D^{2} + 1}x\cos x = \frac{x^2\sin x}{4}+\frac{x\cos x}4 \]
\noindent c) using a complex variable. Here we use relationship J in Table 2; see also \cite{Ch}.
\[\aligned
y &= \frac{1}{D^{2} + 1}x\cos x =\frac{1}{D^{2} + 1}\left( x\frac{e^{ix} + e^{- ix}}2 \right) \\
&=\frac{1}{2}\left( e^{ix}\frac{1}{( D + i )^{2} + 1}x + e^{- ix}\frac{1}{( D - i )^{2} + 1}x \right) \\
&=\frac{1}{2}\left( e^{ix}\frac{1}{D( D + 2i)}x + e^{- ix}\frac{1}{D( D - 2i)}x \right)\\
&=\frac{1}{2}\left( e^{ix}\frac{1}{D}\left( - \frac{i}{2} + \frac{D}{4} \right)x + e^{- ix}
\frac{1}{D}\left( \frac{i}{2} + \frac{D}{4} \right)x \right)\\
&= \frac{1}{2}\left( e^{ix}\frac{1}{D}\left( - \frac{ix}{2} + \frac{1}{4} \right) + e^{- ix}
\frac{1}{D}\left( \frac{ix}{2} + \frac{1}{4}\right) \right)\\
&=\frac{1}{2}\left( e^{ix}\left( - \frac{ix^{2}}{4} + \frac{x}{4} \right) + e^{- ix}
\left( \frac{ix^{2}}{4} + \frac{x}{4} \right) \right) \\
&=\frac{ix^2}{4}\frac{- e^{ix} + e^{- ix}}{2} + \frac{x}{4}\frac{e^{ix} + e^{- ix}}{2}\\
&=\frac{x^2}{4}\frac{e^{ix} - e^{- ix}}{2i} + \frac{x}{4}\frac{e^{ix} + e^{- ix}}{2} = \frac{x^2\sin x}{4} + \frac{x\cos x}{4}
\endaligned \] \qed \par
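Method a) above is again easy to reproduce mechanically; the sketch below (our own check, assuming NumPy) builds $D^2+1$ in coordinates on the six-element basis and applies the pseudoinverse:

```python
import numpy as np

# d/dx on {x^2 sin x, x^2 cos x, x sin x, x cos x, sin x, cos x}
DB = np.array([
    [0, -1, 0,  0, 0,  0],
    [1,  0, 0,  0, 0,  0],
    [2,  0, 0, -1, 0,  0],
    [0,  2, 1,  0, 0,  0],
    [0,  0, 1,  0, 0, -1],
    [0,  0, 0,  1, 1,  0]], dtype=float)

A = DB @ DB + np.eye(6)                        # D^2 + 1 in coordinates
yB = np.linalg.pinv(A) @ np.array([0, 0, 0, 1, 0, 0], dtype=float)
print(yB)  # [1/4, 0, 0, 1/4, 0, 0]
```

All three methods thus agree on $y = \frac{x^2\sin x}{4} + \frac{x\cos x}{4}$.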
In the previous example we compared our solution method with other methods. In the following example, we return to the differential operator for solving a linear ODE whose right-hand side is a polynomial. In this case, it is proposed to expand the operator $\frac1{\phi (D)}$ in powers of $D$. For example, in \cite{Ch} for
\[\frac1{\phi(D)}f(x)=\frac1{a_nD^n+a_{n-1}D^{n-1}+\ldots+a_1D+a_0}f(x)\]
the expansion for $a_0\ne 0$ is
\begin{equation}\label{aai}
y = \frac{1}{\phi(D)}f(x) =
\sum_{k = 0}^{\infty}(-1)^{k}\frac{1}{a_0}\left( \frac{a_n}{a_0}D^n + \ldots +
\frac{a_1}{a_0}D \right)^{k}f(x)
\end{equation}
In the case
\(a_{0} = a_{1} = \ldots = a_{k - 1} = 0\) \((1 \leq k \leq n)\) and \(a_{k} \neq 0\), we have
\begin{equation}\label{aaj}
y =\frac{1}{\phi(D)}f(x) = D^{- k}\frac{1}{a_{n}D^{n - k} + a_{n - 1}D^{n - k - 1} + \ldots + a_{k}}f(x),
\end{equation}
where we would expand an inverse operator as a Maclaurin series in the sense
of \eqref{aai}.\\
The disadvantage of the Maclaurin expansion \eqref{aai} is the computation of the powers
$\left( \dfrac{a_n}{a_0}D^n + \ldots + \dfrac{a_1}{a_0}D \right)^{k}$, even though not all terms of these powers are actually needed. We will show a different approach to this problem.
\begin{theorem}\label{mcr}
Let \(a_{0} \neq 0\), then the Maclaurin expansion
\[\frac{1}{a_{n}D^{n} + a_{n - 1}D^{n - 1} + \cdots + a_{1}D + a_{0}} = c_{0} + c_{1}D + c_{2}D^{2} + \ldots\]
where \(c_{0} = \dfrac{1}{a_{0}}\) and, for \(k = 1,2,\ldots\),
\(c_{k} = \mbf{q}\cdot ( c_{k - n},c_{k - n + 1},\ldots,c_{k - 1})\)
is the dot product of the vectors
\(\mbf{q} =\dfrac{1}{a_{0}}(-a_{n}, - a_{n - 1},\ldots,-a_1)\) and \( (c_{k - n},c_{k - n + 1},\ldots,c_{k - 1})\), with the convention \(c_{k} = 0\) for \(k < 0\).
\end{theorem}
\begin{proof}
The statement follows from the identity
\begin{equation}\label{rmcr}
1 = \left( a_{n}D^{n} + a_{n - 1}D^{n - 1} + \cdots + a_{1}D + a_{0} \right)\left( c_{0} + c_{1}D + c_{2}D^{2} + \ldots \right)
\end{equation}
After expanding the right-hand side of \eqref{rmcr} and comparing the coefficients of the same powers of \(D\), we get
\[c_{0} = \frac{1}{a_{0}}.\]
The coefficient of \(D^{k},\ k = 1,2,3,\ldots,\) gives the equation
\[c_{k}a_{0} + c_{k - 1}a_{1} + c_{k - 2}a_{2} + \ldots + c_{k - n + 1}a_{n - 1} + c_{k - n}a_{n} = 0,\]
where \(c_{k} = 0\) for \(k < 0.\)
From this, Theorem \ref{mcr} follows immediately.
\end{proof}
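The recursion of Theorem \ref{mcr} is straightforward to implement; the sketch below (the function name is our own) computes the coefficients $c_k$ from the list $[a_0, a_1, \ldots, a_n]$:

```python
def maclaurin_coeffs(a, K):
    # a = [a_0, a_1, ..., a_n]; returns [c_0, ..., c_K] with
    # 1/(a_n D^n + ... + a_1 D + a_0) = c_0 + c_1 D + c_2 D^2 + ...
    # using c_k = -(a_1 c_{k-1} + ... + a_n c_{k-n}) / a_0 and c_j = 0 for j < 0.
    n = len(a) - 1
    c = [1.0 / a[0]]
    for k in range(1, K + 1):
        s = sum(a[j] * c[k - j] for j in range(1, min(k, n) + 1))
        c.append(-s / a[0])
    return c

# phi(D) = D^3 - D^2 + 2D + 1, i.e. a = [1, 2, -1, 1]
print(maclaurin_coeffs([1, 2, -1, 1], 4))  # [1.0, -2.0, 5.0, -13.0, 33.0]
```

For $\phi(D) = D^3 - D^2 + 2D + 1$ this reproduces the coefficients $1, -2, 5, -13, 33$ computed by hand in the next example.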
\begin{example}
Find a particular solution of the equation
\begin{equation}\label{aak}
(D^3-D^2+2D+1)y=x^3+2x^2+3x
\end{equation}
using the Maclaurin expansion of the inverse differential operator.
\end{example}
\noindent\emph{Solution.}
Compute the coefficients of the Maclaurin expansion
\[ \aligned
c_{0} &= 1\\
c_{1} &= ( -1,1, -2)\cdot( c_{- 2},c_{- 1},c_{0}) =( -1,1, -2)\cdot( 0,0,1) = - 2\\
c_{2} &= ( -1,1, -2)\cdot( c_{- 1},c_{0},c_{1}) = ( -1,1, -2)\cdot(0,1,-2) = 5\\
c_{3} &= ( -1,1, -2 )\cdot( c_{0},c_{1},c_{2} ) = ( -1,1, -2 )\cdot( 1, -2,5) = -13\\
c_{4} &=( -1,1, -2)\cdot( c_{1},c_{2,}c_{3}) =(-1,1, -2)\cdot( -2, 5, - 13 ) = 33\\
\vdots&\\
\endaligned\]
Then we have
\[\aligned
&\frac1{D^3-D^2+2D+1}(x^3+2x^2+3x)\\
&=(1-2D+5D^2-13D^3+33D^4+\cdots)(x^3+2x^2+3x)\\
&=(1-2D+5D^2-13D^3)x^3+(1-2D+5D^2)2x^2+(1-2D)3x\\
&=(x^3-6x^2+30x-78)+(2x^2-8x+20)+(3x-6)\\
&=x^3-4x^2+25x-64\\
\endaligned\]
The particular solution of the differential equation \eqref{aak} is
\[y=x^3-4x^2+25x-64\]\par
We will also present a shortened calculation using a matrix differential ope\-rator.
The solution of \eqref{aak} belongs to the vector space \(V=\textrm{span}(x^3,x^2,x,1)\) with
basis \(B=\{x^3,x^2,x,1\}\). Then
\[\mathcal{D }_B=\begin{bmatrix}
0&0&0&0\\
3&0&0&0\\
0&2&0&0\\
0&0&1&0\\
\end{bmatrix},\quad
\mathcal{D }_B^2=\begin{bmatrix}
0&0&0&0\\
0&0&0&0\\
6&0&0&0\\
0&2&0&0\\
\end{bmatrix},\quad
\mathcal{D }_B^3=\begin{bmatrix}
0&0&0&0\\
0&0&0&0\\
0&0&0&0\\
6&0&0&0\\
\end{bmatrix}\]
\[\mathcal{D }_B^n=\begin{bmatrix}
0&0&0&0\\
0&0&0&0\\
0&0&0&0\\
0&0&0&0\\
\end{bmatrix},\quad \textrm{for}\quad n>3 \]
\[\aligned
\mbf{y}_B&=(\mbf{I}_4-2\mathcal{D }_B+5\mathcal{D }_B^2-13\mathcal{D }_B^3)
[x^3+2x^2+3x]_{B}\\
&=\begin{bmatrix}1 & 0 & 0 & 0\\
-6 & 1 & 0 & 0\\
30 & -4 & 1 & 0\\
-78 & 10 & -2 & 1\end{bmatrix}
\begin{bmatrix}
1\\
2\\
3\\
0\\
\end{bmatrix}=\begin{bmatrix}
1\\
-4\\
25\\
-64
\end{bmatrix}\\
\endaligned\]
The particular solution is
\[y=x^3-4x^2+25x-64\] \qed
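The shortened matrix calculation exploits the nilpotency of $\mathcal{D}_B$ on polynomial bases; the sketch below (our own check, assuming NumPy) verifies both the nilpotency and the final coordinate vector:

```python
import numpy as np
from numpy.linalg import matrix_power

# d/dx on {x^3, x^2, x, 1}; it is nilpotent: D_B^4 = 0
DB = np.array([
    [0, 0, 0, 0],
    [3, 0, 0, 0],
    [0, 2, 0, 0],
    [0, 0, 1, 0]], dtype=float)
assert np.allclose(matrix_power(DB, 4), 0)

# Truncated Maclaurin series applied to [x^3 + 2x^2 + 3x]_B = [1, 2, 3, 0]
op = np.eye(4) - 2 * DB + 5 * matrix_power(DB, 2) - 13 * matrix_power(DB, 3)
yB = op @ np.array([1, 2, 3, 0], dtype=float)
print(yB)  # [1, -4, 25, -64]
```

Because $\mathcal{D}_B^4 = \mbf{0}$, truncating the Maclaurin series after $D^3$ loses nothing here.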
\begin{example}\label{blc}Using block matrices determine a particular solution of the differential equation
\begin{equation}\label{rblc}
(D^2 + 6D + 13){y} = 2xe^{- 3x}\sin{2x} - 4xe^{- 3x}\cos{2x}
\end{equation}
\end{example}
\noindent\emph{Solution}. Since the roots of the characteristic equation
are \(- 3 + 2i, - 3 - 2i\) and they match the exponential and trigonometric factors on the right-hand side of the
differential equation, it follows that the particular solution will be in the vector space
\[\aligned
V =& \textrm{span}\left(x^{2}e^{- 3x}\sin{2x},x^{2}e^{- 3x}\cos{2x},xe^{- 3x}\sin{2x},
xe^{- 3x}\cos{2x},\right. \\
&\left.e^{- 3x}\sin{2x}, e^{- 3x}\cos{2x}\right)\\
\endaligned \]
with the basis $B$ of $V$
\[\aligned
B =& \left\{x^{2}e^{- 3x}\sin{2x},x^{2}e^{- 3x}\cos{2x},xe^{- 3x}\sin{2x},
xe^{- 3x}\cos{2x},\right. \\
&\left.e^{- 3x}\sin{2x}, e^{- 3x}\cos{2x}\right\}\\
\endaligned \]
\[\mathcal{D }_B=
\left[\begin{array}{cc|cc|cc}
- 3 & - 2 & 0 & 0 & 0 & 0 \\
2 & - 3 & 0 & 0 & 0 & 0 \\\hline
2 & 0 & - 3 & - 2 & 0 & 0 \\
0 & 2 & 2 & - 3 & 0 & 0 \\\hline
0 & 0 & 1 & 0 & - 3 & - 2 \\
0 & 0 & 0 & 1 & 2 & - 3 \\
\end{array}\right]\]
We express this matrix as a block matrix
\[\mathcal{D}_B=\begin{bmatrix}
\mbf{C} & \mbf{0} & \mbf{0} \\
\mbf{2}\mbf{I} & \mbf{C} & \mbf{0} \\
\mbf{0} & \mbf{I} & \mbf{C} \\
\end{bmatrix}\]
where
\[\mbf{C }=\begin{bmatrix} - 3 & - 2 \\ 2 & - 3 \\ \end{bmatrix},\quad
\mbf{I} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix},\quad
\mbf{0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ \end{bmatrix}\]
Using mathematical induction it can be proved that
\begin{equation}\label{mobl}
\mathcal{D}_B^{n}=\begin{bmatrix}
\mbf{C}^{n} & \mbf{0} & \mbf{0} \\
2n\mbf{C}^{n - 1} & \mbf{C}^{n} & \mbf{0} \\
{n(n - 1)\mbf{C}}^{n - 2} & n\mbf{C}^{n - 1} & \mbf{C}^{n} \\
\end{bmatrix},\quad n = 0,1,2,\ldots
\end{equation}
It is easy to verify that equation \eqref{mobl} also holds for $n=-1,-2,\ldots$, i.e.
\[\mathcal{D}_B^{- n}=\begin{bmatrix}
\mbf{C}^{- n} & \mbf{0} & \mbf{0} \\
- 2n\mbf{C}^{- n - 1} & \mbf{C}^{- n} & \mbf{0} \\
n(n + 1)\mbf{C}^{- n - 2} & - n\mbf{C}^{- n - 1} & \mbf{C}^{- n} \\
\end{bmatrix}\quad n = 1,2,3,\ldots\]
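The block-power formula \eqref{mobl} can also be spot-checked numerically; the sketch below (our own verification, assuming NumPy) compares a direct matrix power against the block expression for one value of $n$:

```python
import numpy as np
from numpy.linalg import matrix_power

C = np.array([[-3., -2], [2, -3]])
I2, Z = np.eye(2), np.zeros((2, 2))
DB = np.block([[C, Z, Z], [2 * I2, C, Z], [Z, I2, C]])

n = 5
expected = np.block([
    [matrix_power(C, n), Z, Z],
    [2 * n * matrix_power(C, n - 1), matrix_power(C, n), Z],
    [n * (n - 1) * matrix_power(C, n - 2), n * matrix_power(C, n - 1),
     matrix_power(C, n)]])
print(np.allclose(matrix_power(DB, n), expected))  # True
```

The formula holds because $\mathcal{D}_B$ splits as a block-diagonal part and a commuting nilpotent part, so the binomial expansion of $\mathcal{D}_B^{n}$ terminates after three terms.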
Then
{\small \[\aligned
&\mathcal{D}_B^{2}+6\mathcal{D}_B+13\begin{bmatrix}
\mbf{I} & \mbf{0} & \mbf{0} \\
\mbf{0} & \mbf{I} & \mbf{0} \\
\mbf{0} & \mbf{0} & \mbf{I} \\
\end{bmatrix}=\begin{bmatrix}
\mbf{C}^{2} & \mbf{0} & \mbf{0} \\
4\mbf{C} & \mbf{C}^{2} & \mbf{0} \\
2\mbf{I} & 2\mbf{C} & \mbf{C}^{2} \\
\end{bmatrix}+6\begin{bmatrix}
\mbf{C} & \mbf{0} & \mbf{0} \\
\mbf{2}\mbf{I} & \mbf{C} & \mbf{0} \\
\mbf{0} & \mbf{I} & \mbf{C} \\
\end{bmatrix}+13\begin{bmatrix}
\mbf{I} & \mbf{0} & \mbf{0} \\
\mbf{0} & \mbf{I} & \mbf{0} \\
\mbf{0} & \mbf{0} & \mbf{I} \\
\end{bmatrix}\\
&=\begin{bmatrix}
\mbf{C}^{2}+6\mbf{C} +13\mbf{I} & \mbf{0} & \mbf{0} \\
4\mbf{C} +12\mbf{I} & \mbf{C}^{2}+6\mbf{C} +13\mbf{I} & \mbf{0} \\
2\mbf{I} & 2\mbf{C }+6\mbf{I} & \mbf{C}^{2}+6\mbf{C }+13\mbf{I} \\
\end{bmatrix}=\begin{bmatrix}
\mbf{0} & \mbf{0} & \mbf{0} \\
4\mbf{C} +12\mbf{I} & \mbf{0} & \mbf{0} \\
2\mbf{I} & 2\mbf{C} +6\mbf{I} & \mbf{0} \\
\end{bmatrix}\\
\endaligned \]}
because
\[\mbf{C}^{2}+6\mbf{C} +13\mbf{I }=\begin{bmatrix}
5 & 12 \\
- 12 & 5 \\
\end{bmatrix} + 6\begin{bmatrix}
- 3 & - 2 \\
2 & - 3 \\
\end{bmatrix} + 13\begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix} = \begin{bmatrix}
0 & 0 \\
0 & 0 \\
\end{bmatrix} = \mbf{0}\]
Next
\[2\mbf{C} +6\mbf{I} =\begin{bmatrix}
0 & - 4 \\
4 & 0 \\
\end{bmatrix}\]
\[4\mbf{C} +12\mbf{I }=2\left( 2\mbf{C} +6\mbf{I} \right) =
\begin{bmatrix}
0 & - 8 \\
8 & 0 \\
\end{bmatrix}\]
\[2\mbf{I} =\begin{bmatrix}
2 & 0 \\
0 & 2 \\
\end{bmatrix}\]
Dividing the matrix into $2\times 2$ blocks, we have
\[\left[\begin{array}{cc|c}
\mbf{0} & \mbf{0} & \mbf{0} \\\hline
4\mbf{C} +12\mbf{I} & \mbf{0} & \mbf{0} \\
2\mbf{I} & 2\mbf{C} +6\mbf{I} & \mbf{0} \\
\end{array}\right]\]
From Theorem \ref{T33}, we have
\begin{equation}\label{aac}
\aligned
\mbf{y}_B &= \begin{bmatrix}
\mbf{0} & \mbf{0} & \mbf{0} \\
4\mbf{C} +12\mbf{I} & \mbf{0} & \mbf{0} \\
2\mbf{I} & 2\mbf{C} +6\mbf{I} & \mbf{0} \\
\end{bmatrix}^{\mbf{+}}\begin{bmatrix}
\mbf{b}_{1} \\
\mbf{b}_{2} \\
\mbf{b}_{3} \\
\end{bmatrix}\\&=\begin{bmatrix}
\mbf{0} & \frac{1}{2}( 2\mbf{C} +6\mbf{I})^{-1} & \mbf{0} \\
\mbf{0} & - ( 2\mbf{C}+6\mbf{I})^{-2} & ( 2\mbf{C} +6\mbf{I})^{-1} \\
\mbf{0} & \mbf{0} & \mbf{0} \\
\end{bmatrix}\begin{bmatrix}
\mbf{b}_{1} \\
\mbf{b}_{2} \\
\mbf{b}_{3} \\
\end{bmatrix}\\&=\begin{bmatrix}
\frac{1}{2}( 2\mbf{C} +6\mbf{I})^{-1}\mbf{b}_{2} \\
-( 2\mbf{C} +6\mbf{I})^{-2}\mbf{b}_{2} \\
\mbf{b}^{*} \\
\end{bmatrix}\\
\endaligned
\end{equation}
where
\(\mbf{b}_{1}=\begin{bmatrix} 0 \\ 0 \\ \end{bmatrix}\),
\(\mbf{b}_{2}=\begin{bmatrix} 2 \\ - 4 \\ \end{bmatrix}\),
\(\mbf{b}_{3}=\begin{bmatrix} 0 \\ 0 \\ \end{bmatrix},\)
\(\mbf{b}^{*}=\begin{bmatrix} 0 \\ 0 \\ \end{bmatrix}\).\\
Let us verify that the product of the following block matrices is the identity matrix
\[\aligned &\begin{bmatrix}
4\mbf{C} +12\mbf{I} & \mbf{0} \\
2\mbf{I} & 2\mbf{C} +6\mbf{I} \\
\end{bmatrix}\begin{bmatrix}
\dfrac{1}{2}(2\mbf{C} +6\mbf{I})^{-1} & \mbf{0} \\
-( 2\mbf{C}+6\mbf{I})^{- 2} & (2\mbf{C} +6\mbf{I})^{- 1} \\
\end{bmatrix} \\&= \begin{bmatrix}
(4\mbf{C} +12\mbf{I})\dfrac{1}{2}( 2\mbf{C}+6\mbf{I})^{-1} & \mbf{0} \\
2\mbf{I}\dfrac{1}{2}( 2\mbf{C}+6\mbf{I})^{-1} - (2\mbf{C} +6\mbf{I})
( 2\mbf{C} +6\mbf{I})^{- 2} & (2\mbf{C}+6\mbf{I})(2\mbf{C}
+6\mbf{I})^{- 1} \end{bmatrix} \\&=
\begin{bmatrix}
\mbf{I} & \mbf{0} \\
( 2\mbf{C}+6\mbf{I})^{- 1} - (2\mbf{C} +6\mbf{I})^{-1} & \mbf{I} \\
\end{bmatrix} = \begin{bmatrix}
\mbf{I} & \mbf{0} \\
\mbf{0} & \mbf{I} \\
\end{bmatrix}
\endaligned\]
The same holds when the matrices are multiplied in the reverse order.
Let us compute the elements of the last matrix in \eqref{aac}
\[\frac{1}{2}( 2\mbf{C }+6\mbf{I})^{-1}\mbf{b}_{2}=\dfrac{1}{2}
\begin{bmatrix}
0 & - 4 \\
4 & 0 \\
\end{bmatrix}^{- 1}\begin{bmatrix}
2 \\
- 4 \\
\end{bmatrix} = \dfrac{1}{2}\begin{bmatrix}
0 & \dfrac{1}{4} \\
- \dfrac{1}{4} & 0 \\
\end{bmatrix}\begin{bmatrix}
2 \\
- 4 \\
\end{bmatrix} = \begin{bmatrix}
- \dfrac{1}{2} \\
- \dfrac{1}{4} \\
\end{bmatrix}\]
\[\aligned
-( 2\mbf{C} +6\mbf{I})^{-2}\mbf{b}_{2}&= -\begin{bmatrix}
0 & - 4 \\
4 & 0 \\
\end{bmatrix}^{-2}\begin{bmatrix}
2 \\
- 4 \\
\end{bmatrix}= -\begin{bmatrix}
0 & \dfrac{1}{4} \\
- \dfrac{1}{4} & 0 \\
\end{bmatrix}^{2}\begin{bmatrix}
2 \\
- 4 \\
\end{bmatrix}\\& = - \begin{bmatrix}
- \dfrac{1}{16} & 0 \\
0 & - \dfrac{1}{16} \\
\end{bmatrix}\begin{bmatrix}
2 \\
- 4 \\
\end{bmatrix} = \begin{bmatrix}
\dfrac{1}{8} \\
- \dfrac{1}{4} \\
\end{bmatrix}
\endaligned\]
\[\mbf{0}\mbf{b}_{1}+\mbf{0}\mbf{b}_{2}+\mbf{0}\mbf{b}_{3}=\begin{bmatrix}
0 \\
0 \\
\end{bmatrix}\]
From this we can find the particular solution
\[\mbf{y}_{B} = \begin{bmatrix}
\dfrac{1}{2}( 2\mbf{C} +6\mbf{I})^{-1}\mbf{b}_{2} \\
-( 2\mbf{C} +6\mbf{I})^{-2}\mbf{b}_{2} \\
\mbf{b}^{*} \\
\end{bmatrix}=
\renewcommand{\arraystretch}{2.1}\left[
\begin{array}{c}
- \dfrac{1}{2} \\
- \dfrac{1}{4} \\\hline
\dfrac{1}{8} \\
- \dfrac{1}{4} \\\hline
0 \\
0 \\
\end{array}\right]
=\begin{bmatrix}
- \dfrac{1}{2} \\
- \dfrac{1}{4} \\
\dfrac{1}{8} \\
- \dfrac{1}{4} \\
0 \\
0 \\
\end{bmatrix}\]
or
\[y = - \frac{1}{2}x^{2}e^{-3x}\sin{2x} - \frac{1}{4}\ x^{2}e^{-3x}\cos{2x} + \frac{1}{8} xe^{- 3x}\sin{2x} - \frac{1}{4}xe^{- 3x}\cos{2x}\]
Although the original matrix \(\mathcal{D}_B\) was $6\times 6$, during the
calculation of the particular solution of the differential equation \eqref{rblc}
we only had to invert $2\times 2$ matrices.
\qed
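As a cross-check of the block computation, the sketch below (our own verification, assuming NumPy) solves the same example with a single $6\times 6$ pseudoinverse instead of the $2\times 2$ blocks:

```python
import numpy as np

# d/dx on {x^2 e^{-3x} sin 2x, x^2 e^{-3x} cos 2x, ..., e^{-3x} cos 2x}
DB = np.array([
    [-3, -2,  0,  0,  0,  0],
    [ 2, -3,  0,  0,  0,  0],
    [ 2,  0, -3, -2,  0,  0],
    [ 0,  2,  2, -3,  0,  0],
    [ 0,  0,  1,  0, -3, -2],
    [ 0,  0,  0,  1,  2, -3]], dtype=float)

A = DB @ DB + 6 * DB + 13 * np.eye(6)            # D^2 + 6D + 13 in coordinates
b = np.array([0, 0, 2, -4, 0, 0], dtype=float)   # coordinates of the right-hand side
yB = np.linalg.pinv(A) @ b
print(yB)  # [-1/2, -1/4, 1/8, -1/4, 0, 0]
```

The result matches the coordinate vector $\mbf{y}_B$ obtained with the block method.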
\section{Conclusion}
The operational method is a fast and universal mathematical tool for
obtaining solutions of differential equations. Determining particular solutions of ordinary nonhomogeneous linear differential equations with constant coefficients using the method of undetermined coefficients and the differential operator method is generally known. In particular, distinct from the differential operator method introduced in the literature, we propose and highlight utilizing the pseudoinverse matrix differential operator to determine a
particular solution of differential equations. This method is simple to understand and to apply, compared to some cases of the differential operator method. However, it requires the determination of an inverse or pseudoinverse matrix, which generally has a special type. If the matrix is
singular, then we take a pseudoinverse matrix, namely the Moore--Penrose pseudoinverse, instead of the inverse matrix. The paper shows that for its determination, we need to calculate only the inverse of a submatrix of the considered matrix. Finally, the technique of calculating a particular solution by expressing a matrix differential operator as a block matrix is illustrated. The combination of the operational method, the method of undetermined coefficients, and the matrix differential operator method provides a powerful instrument for determining particular solutions of differential equations.
| {
"timestamp": "2021-01-07T02:17:21",
"yymm": "2101",
"arxiv_id": "2101.02037",
"language": "en",
"url": "https://arxiv.org/abs/2101.02037",
"abstract": "The article presents a matrix differential operator and a pseudoinverse matrix differential operator for finding a particular solution to nonhomogeneous linear ordinary differential equations (ODE) with constant coefficients with special types of the right-hand side. Calculation requires the determination of an inverse or pseudoinverse matrix. If the matrix is singular, the Moore-Penrose pseudoinverse matrix is used for the calculation, which is simply calculated as the inverse submatrix of the considered matrix. It is shown that block matrices are effectively used to calculate a particular solution.",
"subjects": "General Mathematics (math.GM)",
"title": "Matrix Differential Operator Method of Finding a Particular Solution to a Nonhomogeneous Linear Ordinary Differential Equation with Constant Coefficients",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105287006255,
"lm_q2_score": 0.7248702880639792,
"lm_q1q2_score": 0.7076984941791182
} |
https://arxiv.org/abs/1510.03564 | Linear-Vertex Kernel for the Problem of Packing $r$-Stars into a Graph without Long Induced Paths | Let integers $r\ge 2$ and $d\ge 3$ be fixed. Let ${\cal G}_d$ be the set of graphs with no induced path on $d$ vertices. We study the problem of packing $k$ vertex-disjoint copies of $K_{1,r}$ ($k\ge 2$) into a graph $G$ from parameterized preprocessing, i.e., kernelization, point of view. We show that every graph $G\in {\cal G}_d$ can be reduced, in polynomial time, to a graph $G'\in {\cal G}_d$ with $O(k)$ vertices such that $G$ has at least $k$ vertex-disjoint copies of $K_{1,r}$ if and only if $G'$ has. Such a result is known for arbitrary graphs $G$ when $r=2$ and we conjecture that it holds for every $r\ge 2$. | \section{Introduction}\label{sec:intro}
For a fixed graph $H$, the problem of deciding whether a graph $G$ has $k$ vertex-disjoint copies of $H$ is called {\sc $H$-Packing}. The problem has many applications (see, e.g., \cite{BYHNSS02,BKGS01,KiHe78}), but unfortunately it is almost always intractable. Indeed, Kirkpatrick and Hell \cite{KiHe78} proved that if $H$ contains a component with at least three vertices then {\sc $H$-Packing} is NP-complete. Thus, approximation, parameterized, and exponential algorithms have been studied for {\sc $H$-Packing} when $H$ is a fixed graph, see, e.g., \cite{BYHNSS02,Fel+11,Fel+05,PrSl06,WaNiFeCh08}.
In this note, we consider {\sc $H$-Packing} when $H=K_{1,r}$ and study {\sc $K_{1,r}$-Packing} from the parameterized preprocessing, i.e., kernelization, point of view.\footnote{We provide basic definitions on parameterized algorithms and kernelization in the next section; for recent monographs, see \cite{CFLMPPS15,DF13}; \cite{Kra2014,LMS2012} are recent survey papers on kernelization.} Here $k$ is the parameter. As a parameterized problem, {\sc $K_{1,r}$-Packing} was first considered by Prieto and Sloper \cite{PrSl06}, who obtained an $O(k^2)$-vertex kernel for each $r\ge 2$ and a kernel with at most $15k$ vertices for $r=2$. (Since the case $r=1$ is polynomial-time solvable, we may restrict ourselves to $r\ge 2$.) The same result for $r=2$ was proved by Fellows {\em et al.} \cite{Fel+11}, and it was improved to $7k$ by Wang {\em et al.} \cite{WaNiFeCh08}.
Fellows {\em et al.} \cite{Fel+11} note that, using their approach, the bound of \cite{PrSl06} on the number of vertices in a kernel for any $r\ge 3$ can likely be improved to subquadratic. We believe that, in fact, there is a linear-vertex kernel for every $r\ge 3$, and we prove Theorem \ref{th:ker} to support our conjecture. A path $P$ in a graph $G$ is called {\em induced} if it is an induced subgraph of $G$. For an integer $d\ge 3$, let ${\cal G}_d$ denote the set of all graphs with no induced path on $d$ vertices.
\begin{theorem}\label{th:ker}
Let integers $r\ge 2$ and $d\ge 3$ be fixed. Then {\sc $K_{1,r}$-Packing} restricted to graphs in ${\cal G}_d$, has a kernel with $O(k)$ vertices.
\end{theorem}
Since $d$ can be an arbitrary integer larger than two, Theorem \ref{th:ker} applies to an ever-increasing class of graphs which, in the ``limit", coincides with the class of all graphs. To show that Theorem \ref{th:ker} is, in a sense, an optimal\footnote{If {\sc $K_{1,r}$-Packing} was polynomial-time solvable, then it would have a kernel with $O(1)$ vertices.} result, we prove that {\sc $K_{1,r}$-Packing} restricted to graphs in ${\cal G}_d$ is ${\cal NP}$-hard already for $d=5$ and every fixed $r\ge 3$:
\begin{theorem}\label{th:NP}
Let $r \geq 3$. It is ${\cal NP}$-hard to decide if the vertex set of a graph in ${\cal G}_5$ can be partitioned into vertex-disjoint copies of $K_{1,r}$.
\end{theorem}
We cannot replace ${\cal G}_5$ by ${\cal G}_4$ (unless ${\cal NP}={\cal P}$) due to the following assertion, whose proof is given in the Appendix.
\begin{theorem}\label{th:poly}
Let $r \geq 3$ and $G\in {\cal G}_4$. We can find the maximum number of vertex-disjoint copies of $K_{1,r}$ in $G$ in polynomial time.
\end{theorem}
\section{Terminology and Notation}
For a graph $G$, $V(G)$ ($E(G)$, respectively) denotes the vertex set (edge set, respectively) of $G$, $\Delta(G)$ denotes the maximum degree of $G$ and $n$ its number of vertices. For a vertex $u$ and a vertex set $X$ in $G$, $N(u)=\{v: uv\in E(G)\}$, $N[u] = N(u)\cup \{u\},$ $d(u)=|N(u)|$, $N_X(u) = N(u) \cap X$, $d_X(u)=|N_X(u)|$ and $G[X]$ is the subgraph of $G$ induced by $X$. We call $K_{1,r}$ an {\em $r$-star}.
We say a star {\em intersects} a vertex set if the star uses a vertex in the set.
We use $(G,k,r)$ to denote an instance of the $r$-star packing problem. If there are $k$ vertex-disjoint $r$-stars in $G$, we say $(G,k,r)$ is a {\sc Yes}-instance, and we write $G \in \star(k,r)$.
Given disjoint vertex sets $S,T$ and integers $s,r$, we say that $S$ has $s$ $r$-{\em stars in} $T$ if there are $s$ vertex-disjoint $r$-stars with centers in $S$ and leaves in $T$.
A \emph{parameterized problem} is a subset $L\subseteq \Sigma^* \times
\mathbb{N}$ over a finite alphabet $\Sigma$. A parameterized problem $L$ is
\emph{fixed-parameter tractable} if the membership of an instance
$(I,k)$ in $\Sigma^* \times \mathbb{N}$ can be decided in time
$f(k)|I|^{O(1)}$ where $f$ is a computable function of the
{\em parameter} $k$ only.
Given a parameterized problem $L$,
a \emph{kernelization of $L$} is a polynomial-time
algorithm that maps an instance $(x,k)$ to an instance $(x',k')$ (the
\emph{kernel}) such that $(x,k)\in L$ if and only if
$(x',k')\in L$ and $k'+|x'|\leq g(k)$ for some
function $g$.
It is well-known that a decidable parameterized problem $L$ is fixed-parameter
tractable if and only if it has a kernel. Kernels of small size are of
main interest, due to applications.
\section{Proof of Theorem \ref{th:ker}}
Note that the $1$-star packing problem is the classic maximum matching problem and if $k = 1$, the $r$-star packing problem is equivalent to deciding whether $\Delta(G) \ge r$.
Both of these problems can be solved in polynomial time. Henceforth, we assume $r,k>1$.
A vertex $u$ is called a \textit{small vertex} if $\max \{d(v):v\in N[u]\} < r$. A graph without a small vertex is a \textit{simplified graph}.
We now give two reduction rules for an instance $(G,k,r)$ of {\sc $K_{1,r}$-Packing}.
\begin{krule}\label{rule:small-vertices}
If graph $G$ contains a small vertex $v$, then return the instance $(G - v, k,r)$.
\end{krule}
It is easy to observe that Reduction Rule~\ref{rule:small-vertices} can be applied in polynomial time.
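The exhaustive application of Reduction Rule~\ref{rule:small-vertices} can be sketched in Python (adjacency-dict representation; the encoding is our own, not from the paper):

```python
def remove_small_vertices(adj, r):
    """Exhaustively apply Reduction Rule 1: delete every 'small' vertex u,
    i.e. one with max(d(v) for v in N[u]) < r.  `adj` maps each vertex to
    its set of neighbours; the dict is modified in place and returned."""
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            # re-check membership: u may have been deleted this pass
            if u in adj and all(len(adj[v]) < r for v in adj[u] | {u}):
                for v in adj[u]:          # remove u from its neighbours
                    adj[v].discard(u)
                del adj[u]
                changed = True
    return adj
```

For example, with $r=2$ a single edge is deleted entirely (both endpoints are small), while a star $K_{1,2}$ survives untouched (its centre has degree $2$).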
\begin{krule}\label{rule:obstacle-removal}
Let $G = (V,E)$ be a graph and let $C, L$ be two vertex-disjoint subsets of $V$. The pair $(C, L)$ is called a \textit{constellation} if $G[C \cup L] \in \star(|C|,r)$ and there is no star $K_{1,r}$ intersecting $L$ in the graph $G[V \setminus C]$. If $(C, L)$ is a constellation, return the instance $(G[V\setminus (C\cup L)], k-|C|, r)$.
\end{krule}
It is easy to observe that Reduction Rule~\ref{rule:obstacle-removal} can be applied in polynomial time, provided we are given a suitable constellation.
\begin{lemma} Reduction~Rules~\ref{rule:small-vertices} and~\ref{rule:obstacle-removal} are safe.\end{lemma}
\begin{proof}
Clearly, a small vertex $v$ cannot appear in any $r$-star.
Therefore Reduction Rule \ref{rule:small-vertices} is safe, as $G$ and $G-v$ contain the same number of vertex-disjoint $r$-stars.
To see that Reduction Rule~\ref{rule:obstacle-removal} is safe, it is sufficient to show
that $G \in \star(k,r)$ if and only if $G[V\setminus(C\cup L)] \in \star(k-|C|,r)$.
On the one hand, if $G[V\setminus(C\cup L)] \in \star(k-|C|,r)$, the hypothesis $G[C \cup L] \in \star(|C|,r)$ implies $G \in \star(k,r)$. On the other hand, there are at most $|C|$ vertex-disjoint stars intersecting $C$. But by hypothesis, every star intersecting $L$ also intersects $C$. We deduce that there are at most $|C|$ stars intersecting $C \cup L$, and so if $G \in \star(k,r)$, there are at least $k-|C|$ stars in $G[V-(C\cup L)]$: $G[V\setminus(C\cup L)] \in \star(k-|C|,r)$.
\end{proof}
Note that as both rules modify a graph by deleting vertices, any graph $G'$ that is derived from a graph $G \in {\cal G}_d$ by an application of Rules~\ref{rule:small-vertices} or~\ref{rule:obstacle-removal} is also in ${\cal G}_d$.
Recall the Expansion Lemma, which is a generalization of the well-known Hall's theorem.
\begin{lemma}(\textbf{Expansion Lemma})\cite{FoLoMiPhSa11}
Let $r$ be a positive integer, and let $m$ be the size of the maximum matching in a bipartite graph $G$ with vertex bipartition
$X\cup Y$. If $|Y|>rm$, and there are no isolated vertices in $Y,$ then there exist nonempty vertex sets
$S\subseteq X, T \subseteq Y$ such that $S$ has $|S|$ $r$-stars in $T$ and no vertex in $T$ has a neighbor outside $S$.
Furthermore, the sets $S, T$ can be found in polynomial time in the size of $G$.
\end{lemma}
Henceforth, we will use the following modified version of the expansion lemma.
\begin{lemma}(\textbf{Modified Expansion Lemma})
Let $r$ be a positive integer, and let $m$ be the size of the maximum matching in a bipartite graph $G$ with vertex bipartition $X\cup Y$. If there are no isolated vertices in $Y$, then there exists a polynomial (in the size of $G$) algorithm which returns a partition $X=A_1\cup B_1$, $Y=A_2\cup B_2$, such that
$B_1$ has $|B_1|$ $r$-stars in $B_2$,
$E(A_1,B_2)=\emptyset$, and $|A_2|\leq r|A_1|$.
\end{lemma}
\begin{proof}
If $|Y|\leq rm$, then we may return $A_1 = X$, $A_2=Y$, $B_1=B_2=\emptyset$, as $m \leq |X|$ and hence $|Y|\leq r|X|$.
Otherwise, apply the Expansion Lemma to get nonempty vertex sets
$S\subseteq X, T \subseteq Y$ such that $S$ has $|S|$ $r$-stars in $T$ and no vertex in $T$ has a neighbor outside $S$.
Let $X'=X\setminus S$ and $Y'=Y \setminus T$. If $G[X' \cup Y']$ has isolated vertices in $Y'$, move all of them from $Y'$ to $T$.
If $|Y'|\leq r|X'|$, we may return $A_1 = X'$, $A_2=Y'$, $B_1=S,$ and $B_2=T$.
So now assume $|Y'|> r|X'|$.
In this case, apply the algorithm recursively on $G[X' \cup Y']$ to get a partition $X'=A_1'\cup B_1', Y' = A_2'\cup B_2'$, such that $B_1'$ has $|B_1'|$ stars in $B_2'$, $E(A_1',B_2')=\emptyset$, and $|A_2'|\leq r|A_1'|$.
Then return $A_1 = A_1'$, $B_1 = B_1' \cup S$, $A_2 = A_2'$, $B_2 = B_2' \cup T$. Observe that $B_1$ has $|B_1'|+|S| = |B_1|$ stars in $B_2$, $E(A_1, B_2) \subseteq E(A_1',B_2') \cup E(X\setminus S,T) = \emptyset$, and $|A_2| = |A_2'| \leq r|A_1'| = r|A_1|$, as required.
As each iteration reduces $|X|$ by at least $1$, there are at most $|X|$ iterations, each of which uses at most one application of the Expansion Lemma, and so the algorithm runs in polynomial time.
\end{proof}
\noindent{\bf Proof of Theorem \ref{th:ker}.}
By exhaustively applying Reduction Rule~\ref{rule:small-vertices}, we may assume we have a simplified graph.
Let $G$ be a simplified graph in ${\cal G}_d$.
Now find a maximal $r$-star packing of the graph $G$ with $q$ stars.
We may assume $q < k$ as otherwise we have a trivial {\sc Yes}-instance.
Let $S$ be the set of vertices in this packing, and let $D = V(G)\setminus S$.
For any $u \in D$, let $D[u]$ be
the set of vertices $v \in D$ for which there is a path from $v$ to $u$ using only vertices in $D$;
that is, $D[u]$ is the set of vertices in the component of $G[D]$ containing $u$.
As our star-packing is maximal, $d_D(v) < r$ for every $v \in D$.
As $G \in {\cal G}_d$,
every $v \in D[u]$ has a path to $u$ in $G[D]$ with at most $d-1$ vertices
(as otherwise the shortest path in $G[D]$ from $v$ to $u$ is an induced path on at least $d$ vertices).
It follows that $|D[u]| \le 1 + r + r^2 + \dots + r^{d-1} \le r^{d}$.
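The counting argument above can be checked by a breadth-first search. The following sketch (our own encoding; the asserts encode the two hypotheses, namely $d_D(v) < r$ and BFS distance at most $d-2$) returns the component size together with the bound $1 + r + \dots + r^{d-1}$:

```python
from collections import deque

def component_and_bound(adj, u, r, d):
    """BFS computing |D[u]|, the size of the component of u in G[D]
    (adjacency dict `adj`), together with the bound 1 + r + ... + r^(d-1).
    Assumes d_D(v) < r for every v and that G has no induced path on d
    vertices, so every BFS distance from u is at most d - 2."""
    seen = {u}
    queue = deque([(u, 0)])
    while queue:
        v, depth = queue.popleft()
        assert len(adj[v]) < r and depth <= d - 2
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append((w, depth + 1))
    return len(seen), sum(r ** i for i in range(d))
```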
We will now find a partition of $S$ into $Big(S) \cup Small(S)$, and $D$ into $B(D) \cup U(D)$, such that $|B(D)| \le r^{d+1}|Small(S)|$, and either $Big(S)=U(D) = \emptyset$ or $(Big(S), U(D))$ is a constellation.
As $|Small(S)|\le |S| \le (r+1)k$, it follows that either $|V(G)|\le (r+1)k + (r+1)r^{d+1}k$, or we can apply Reduction Rule \ref{rule:obstacle-removal} on $(Big(S), U(D))$.
We will construct $Big(S), Small(S),B(D),U(D)$ algorithmically as described below.
Throughout, we will preserve the properties that %
\begin{enumerate}
\item $|B(D)|\le|Small(S)|r^{d+1}$,
\item $U(D)$ has no neighbors in $Small(S) \cup B(D)$.
\end{enumerate}
Initially, set $Big(S)=S, U(D)=D, Small(S)=B(D)=\emptyset$.
While $|U(D) \cap N(Big(S))| > r|Big(S)|$, do the following.
If there is a vertex $u \in Big(S)$ such that $|N(u)\cap U(D)| < r$, let $X = \bigcup\{D[v]: v \in N(u) \cap U(D)\}$. Observe that as $|D[v]| \le r^{d}$ for all $v \in D$, $|X|< r^{d+1}$.
Now set $Small(S) = Small(S) \cup \{u\}$, $Big(S)=Big(S)\setminus \{u\}$, $B(D) = B(D) \cup X$, $U(D) = U(D)\setminus X$.
It follows that Property 1 is preserved.
Note that no vertex in the new $U(D)$ has a neighbor in $X$ (as all neighbors of $X$ in $D$ lie in $X$).
Similarly no vertex in the new $U(D)$ is adjacent to $u$ (as such a vertex would be in the old $U(D)$ and so would have been added to $X$).
Therefore there are still no edges between the new $U(D)$ and the new $Small(S) \cup B(D)$,
and so Property 2 is preserved.
Otherwise (if every vertex $u \in Big(S)$ has $|N(u)\cap U(D)| \ge r$), let $H$ denote the maximal bipartite subgraph of $G$ with vertex partition $Big(S)\cup (U(D) \cap N(Big(S)))$,
and apply the Modified Expansion Lemma to $H$. We will get a partition $Big(S) = A_1 \cup B_1$ and $U(D) \cap N(Big(S)) = A_2 \cup B_2$ such that
$E(A_1,B_2) = \emptyset, |A_2| \le r|A_1|$
and $B_1$ has $|B_1|$ $r$-stars in $B_2$.
If the Modified Expansion Lemma returns $B_1=Big(S)$, then we claim that $(Big(S), U(D))$ is a constellation.
To see this, firstly note that $Big(S)$ has $|Big(S)|$ $r$-stars in $U(D)$.
Secondly, note that since we chose the vertices of a maximal star packing for $S$, there is no $r$-star contained in $G[U(D)]$.
As $U(D)$ has no neighbors in $Small(S) \cup B(D)$, it follows that there is no $r$-star intersecting $U(D)$ in $G \setminus Big(S)$.
Thus $(Big(S), U(D))$ is a constellation, and the claim is proved.
In this case the algorithm stops.
So now assume that the Modified Expansion Lemma returns $Big(S)=A_1 \cup B_1$ with $A_1 \neq \emptyset$.
Let $X = \bigcup \{D[v]: v \in N(A_1) \cap U(D)\}$.
Note that as $E(A_1,B_2) = \emptyset$ and $|A_2|\le r|A_1|$,
we have $|X| \le |\bigcup \{D[v]: v \in A_2\}| \le |A_2|r^{d} \le |A_1|r^{d+1}$.
Then let $Small(S) = Small(S) \cup A_1$, $Big(S) = Big(S) \setminus A_1$, $B(D) = B(D) \cup X$, $U(D) = U(D) \setminus X$.
Note that after this move, we still have that $|B(D)|\le |Small(S)|r^{d+1}$,
and $U(D)$ has no neighbors in $Small(S) \cup B(D)$.
Note that in either case, $|Big(S)|$ strictly decreases, so the algorithm must eventually terminate,
either because $(Big(S), U(D))$ is a constellation, or because $|U(D) \cap N(Big(S))| \leq r|Big(S)|$.
If $(Big(S), U(D))$ is a constellation,
apply Reduction Rule~\ref{rule:obstacle-removal} using $(Big(S), U(D))$. This gives us a partition in which $Big(S)=U(D)=\emptyset$.
Thus in either case, we have that $|U(D) \cap N(Big(S))| \leq r|Big(S)|$.
Note that every vertex $u \in U(D)$ is in $D[v]$ for some $v \in N(S)$ (as otherwise, either $\max \{d(v):v\in N[u]\} < r$ or $G[D]$ contains an $r$-star, a contradiction in either case). Moreover such a $v$ must be in $U(D) \cap N(Big(S))$, as there are no edges between $U(D)$ and $Small(S) \cup B(D)$.
Thus $|U(D)|\le r^d|U(D) \cap N(Big(S))| \leq r^{d+1}|Big(S)|$.
Then we have $|V(G)|=|S|+|U(D)|+|B(D)| \leq |S| + r^{d+1}|Big(S)| + r^{d+1}|Small(S)| \le (r^{d+1}+1)|S| \le (k-1)(r+1)(r^{d+1}+1) = O(k)$.~\qed
\section{Proof of Theorem \ref{th:NP}}
A {\em split graph} is a graph where the vertex set can be partitioned into a clique and an independent set.
An instance of the well-known ${\cal NP}$-hard problem {\sc $3$-Dimensional Matching} contains a vertex set that can be partitioned into three equally large sets $V_1,V_2,V_3$ (also called partite sets).
Let $k$ denote the size of each of $V_1,V_2,V_3$.
It furthermore contains a number of $3$-sets containing exactly one vertex from each $V_i$, $i=1,2,3$.
The problem is to decide if there exists a set of $k$ vertex disjoint $3$-sets (which would then cover all vertices).
Such a set of $k$ vertex disjoint $3$-sets is called a perfect matching.
The $3$-sets are also called {\em edges} (or {\em hyperedges}).
\begin{theorem} \label{thm1}
Let $r \geq 3$. It is ${\cal NP}$-hard to decide if the vertex set of a split graph can be partitioned into vertex disjoint copies of $K_{1,r}$.
\end{theorem}
\begin{proof}
We will reduce from {\sc $3$-Dimensional Matching}. Let ${\cal I}$ be an instance of $3$-dimensional matching.
Let $V_1,V_2,V_3$ denote the three partite sets of ${\cal I}$ and let $E$ denote the set of
edges in ${\cal I}$. Let $m = |E|$ and $k=|V_1|=|V_2|=|V_3|$.
We will build a split graph $G_{\cal I}$ as follows.
Let $V = V_1 \cup V_2 \cup V_3$ be the vertices of ${\cal I}$.
Let $X_1$ be a set of $m$ vertices and $X_2$ be a set of $m-k$ vertices and let $X = X_1 \cup X_2$.
Let $Y$ be a set of $(m-k)(r-1)$ vertices and let $W$ be a set of $k(r-3)$ vertices (if $r=3$ then $W$ is empty).
Let the vertex set of $G_{\cal I}$ be $V \cup X \cup Y \cup W$.
Add edges such that $X$ becomes a clique in $G_{\cal I}$. Let each vertex in $X_1$ correspond to a distinct edge in $E$ and connect that vertex with the $3$ vertices in $V$ which
belong to the corresponding edge in $E$. Furthermore, add all edges from $X_1$ to $W$. Finally, for each vertex in $X_2$ add $r-1$ edges to $Y$ in such a way that
each vertex in $Y$ ends up with degree one in $G_{\cal I}$. This completes the construction of $G_{\cal I}$.
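The construction of $G_{\cal I}$ can be sketched in Python. The encoding of the {\sc $3$-Dimensional Matching} instance below is hypothetical (partite sets are `range(k)` tagged by part, hyperedges are triples), and we assume $m \ge k$:

```python
import itertools

def build_split_instance(k, triples, r):
    """Build the split graph G_I from a 3DM instance with partite sets of
    size k and hyperedges `triples` = [(a, b, c), ...]; assumes m >= k."""
    m = len(triples)
    V  = [(i, p) for p in (1, 2, 3) for i in range(k)]
    X1 = [('x1', j) for j in range(m)]          # one vertex per hyperedge
    X2 = [('x2', j) for j in range(m - k)]
    Y  = [('y', j) for j in range((m - k) * (r - 1))]
    W  = [('w', j) for j in range(k * (r - 3))]
    edges = set()
    for u, v in itertools.combinations(X1 + X2, 2):   # X is a clique
        edges.add((u, v))
    for j, (a, b, c) in enumerate(triples):           # X1 -> its hyperedge
        for vert in ((a, 1), (b, 2), (c, 3)):
            edges.add((('x1', j), vert))
    for u in X1:                                      # all X1-W edges
        for w in W:
            edges.add((u, w))
    for j, u in enumerate(X2):                        # r-1 private Y-leaves
        for t in range(r - 1):
            edges.add((u, ('y', j * (r - 1) + t)))
    return V + X1 + X2 + Y + W, edges
```

A quick sanity check: the total number of vertices is $3k + m + (m-k) + (m-k)(r-1) + k(r-3) = m(r+1)$, as used later in the proof.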
Clearly $G_{\cal I}$ is a split graph as $X$ is a clique and $V \cup Y \cup W$ is an independent set. We will now show that the vertex set of $G_{\cal I}$ can be partitioned into
vertex disjoint copies of $K_{1,r}$ if and only if ${\cal I}$ has a perfect matching.
First assume that ${\cal I}$ has a perfect matching. Let $E' \subseteq E$ denote the edges of the perfect matching. For the vertices in $X_1$ that correspond to the edges in $E'$
we include the three edges from each such vertex to $V$ as well as $r-3$ edges to $W$. This can be done such that we obtain $k$ vertex disjoint copies of $K_{1,r}$ covering all of $V$ and $W$ as well as
$k$ vertices from $X_1$. Now for each vertex in $X_2$ include the $r-1$ edges to $Y$ as well as one edge to an unused vertex in $X_1$. This can be done such that
we obtain an additional $m-k$ vertex disjoint copies of $K_{1,r}$. We have now constructed $m$ vertex disjoint copies of $K_{1,r}$ which cover all the vertices in $G_{\cal I}$, as required.
Now assume that the vertex set of $G_{\cal I}$ can be partitioned into vertex disjoint copies of $K_{1,r}$.
As $|V \cup W \cup Y \cup X| = m(r+1)$ we note that we have $m$ vertex disjoint copies of $K_{1,r}$, which we will denote by ${\cal K}$.
As all vertices in $Y$ need to be included in such copies we note that
every vertex of $X_2$ is the center vertex of a $K_{1,r}$. Let ${\cal K}'$ denote these $m-k$ copies of $K_{1,r}$.
Each $K_{1,r}$ in ${\cal K}'$ must include $1$ edge from $X_2$ to $X_1$. These $m-k$ edges form a matching, implying
that $m-k$ vertices of $X_1$ also belong to the copies of $K_{1,r}$ in ${\cal K}'$.
This leaves $k$ vertices in $X_1$ that are uncovered and $rk$ vertices in $V \cup W$ that are uncovered.
Furthermore, as $V \cup W$ is an independent set, each copy of $K_{1,r}$ in ${\cal K} \setminus {\cal K}'$ must contain a vertex of $X_1$.
As $|{\cal K} \setminus {\cal K}'| = k$ we note that
the $k$ copies of $K_{1,r}$ in ${\cal K} \setminus {\cal K}'$ must include exactly one vertex from $X_1$.
Also, as each vertex in $X_1$ has exactly three neighbours in $V$, each such
$K_{1,r}$ also contains $3$ vertices from $V$ (as $V$ needs to be covered) and therefore $r-3$ vertices from $W$.
Therefore the $k$ vertices in $X_1$ that belong to copies of $K_{1,r}$ in ${\cal K} \setminus {\cal K}'$ correspond
to $k$ edges in $E$ which form a perfect matching in ${\cal I}$.
This completes the proof as we have shown that $G_{\cal I}$ can be partitioned into
vertex disjoint copies of $K_{1,r}$ if and only if ${\cal I}$ has a perfect matching.
\end{proof}
The following lemma is known. We give the simple proof for completeness.
\begin{lemma} \label{lem1}
No split graph contains an induced path on 5 vertices.
\end{lemma}
\begin{proof}
Assume $G$ is a split graph where $V(G)$ is partitioned into an independent set $I$ and a clique $C$.
For the sake of contradiction assume that $P= p_0 p_1 p_2 p_3 p_4$ is an induced $P_5$ in $G$.
As $I$ is independent we note that $\{p_0,p_1\} \cap C \not= \emptyset$ and
$\{p_3,p_4\} \cap C \not= \emptyset$. As $C$ is a clique there is therefore an edge from
a vertex in $\{p_0,p_1\}$ to a vertex in $\{p_3,p_4\}$. This edge implies that $P$ is not an induced $P_5$ in $G$, a contradiction.
\end{proof}
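Lemma~\ref{lem1} is easy to verify by brute force on small examples. The following sketch (our own adjacency-set encoding) exploits the fact that a connected $5$-vertex graph with $4$ edges and degree sequence $1,1,2,2,2$ is exactly a $P_5$:

```python
import itertools

def is_induced_p5(sub, adj):
    """Does the 5-tuple `sub` induce a path P5?"""
    deg = {v: sum(1 for w in sub if w in adj[v]) for v in sub}
    if sum(deg.values()) // 2 != 4 or sorted(deg.values()) != [1, 1, 2, 2, 2]:
        return False
    stack, seen = [sub[0]], {sub[0]}
    while stack:                      # connectivity check inside sub
        v = stack.pop()
        for w in sub:
            if w in adj[v] and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == 5

def p5_free(vertices, adj):
    """Brute force over all 5-subsets: no induced P5 at all?"""
    return not any(is_induced_p5(s, adj)
                   for s in itertools.combinations(vertices, 5))
```

On a split graph (clique $\{0,1,2\}$, independent set $\{3,4\}$) every candidate $P_5$ is killed by a chord inside the clique, in line with the lemma.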
\noindent{\bf Proof of Theorem \ref{th:NP}.}
By Lemma~\ref{lem1}, ${\cal G}_5$ contains all split graphs.
The result now follows immediately from Theorem~\ref{thm1}.~\qed
\vspace{3mm}
\noindent{\bf Acknowledgment.} Research of GG was partially supported by Royal Society Wolfson Research Merit Award. Research of BS was supported by China Scholarship Council.
| {
"timestamp": "2015-10-14T02:07:23",
"yymm": "1510",
"arxiv_id": "1510.03564",
"language": "en",
"url": "https://arxiv.org/abs/1510.03564",
"abstract": "Let integers $r\\ge 2$ and $d\\ge 3$ be fixed. Let ${\\cal G}_d$ be the set of graphs with no induced path on $d$ vertices. We study the problem of packing $k$ vertex-disjoint copies of $K_{1,r}$ ($k\\ge 2$) into a graph $G$ from parameterized preprocessing, i.e., kernelization, point of view. We show that every graph $G\\in {\\cal G}_d$ can be reduced, in polynomial time, to a graph $G'\\in {\\cal G}_d$ with $O(k)$ vertices such that $G$ has at least $k$ vertex-disjoint copies of $K_{1,r}$ if and only if $G'$ has. Such a result is known for arbitrary graphs $G$ when $r=2$ and we conjecture that it holds for every $r\\ge 2$.",
"subjects": "Data Structures and Algorithms (cs.DS); Computational Complexity (cs.CC)",
"title": "Linear-Vertex Kernel for the Problem of Packing $r$-Stars into a Graph without Long Induced Paths",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105266327962,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7076984926802101
} |
https://arxiv.org/abs/1708.09686 | Structural properties of biclique graphs and the distance formula | A \textit{biclique} is a maximal induced complete bipartite subgraph of $G$. The \textit{biclique graph} of a graph $G$, denoted by $KB(G)$, is the intersection graph of the family of all bicliques of $G$. In this work we study some structural properties of biclique graphs which are necessary conditions for a graph to be a biclique graph. In particular, we prove that for biclique graphs that are neither a $K_3$ nor a \textit{diamond}, the number of vertices of degree $2$ is less than half the number of vertices in the graph. Also, we present forbidden structures. For this, we introduce a natural definition of the distance between bicliques in a graph. We give a formula that relates the distance between bicliques in a graph $G$ and the distance between their respective vertices in $KB(G)$. Using these results, we can prove not only this new necessary condition involving the degree, but also that some graphs are not biclique graphs. For example, we show that the \textit{crown} is the smallest graph that is not a biclique graph although the known necessary condition for biclique graphs holds, answering an open problem about biclique graphs. Finally, we present some interesting related conjectures and open problems. | \section{Introduction}
Intersection graphs of certain special subgraphs of a general graph have been studied
extensively. Among them we can mention line graphs (intersection graphs of the edges of a graph), interval graphs (intersection graphs
of intervals of a line) and, in particular, clique graphs (intersection graphs of the cliques of a graph)
\cite{BoothLuekerJCSS1976,BrandstadtLeSpinrad1999,EscalanteAMSUH1973,FulkersonGrossPJM1965, GavrilJCTSB1974, LehotJA1974,McKeeMcMorris1999}.
The \emph{clique graph} of $G$, denoted by $K(G)$, is the intersection graph of the family of all cliques of $G$.
Clique graphs were introduced by Hamelink in~\cite{HamelinkJCT1968} and characterized in~\cite{RobertsSpencerJCTSB1971}. It was proved in~\cite{Alc'onFariaFigueiredoGutierrez2006}
that the clique graph recognition problem is NP-Complete.
The \emph{biclique graph} of a graph $G$, denoted by $KB(G)$, is the intersection graph of the family of all bicliques of $G$.
It was defined and characterized in~\cite{GroshausSzwarcfiterJGT2010}. However, no polynomial time algorithm is known for recognizing biclique graphs.
Bicliques were studied in many contexts. Depending on the context and the author, bicliques are defined in different ways: induced or not induced subgraphs, maximal or not, etc.
All of them are rather natural and clearly justified. In our work, we consider bicliques as being maximal bipartite complete induced subgraphs.
Bicliques have applications in various fields, for example, biology: protein-protein interaction networks~\cite{Bu01052003},
social networks: web community discovery~\cite{Kumar}, genetics~\cite{Atluri}, medicine~\cite{Niranjan}, information theory~\cite{Haemers200156}.
More applications (including some of these) can be found in~\cite{blablamec}.
In this work we define the distance between bicliques in a graph. Previous related work in the context of cliques can be found in~\cite{Balakrishnan,hedman,hedman2,peyrat,PizanaDAM2004}. We prove some properties for the bicliques using that concept.
Also, we present some applications of the distance concept. In~\cite{GroshausSzwarcfiterJGT2010} it was given a necessary condition for a graph to be a biclique graph. It was an open problem whether this condition was sufficient.
We give a different proof of that necessary condition using the formula of distances. We remark that, although the original proof is short, the new proof shows how to use the distance formula to solve problems in biclique graphs. Moreover, we prove that this necessary condition is not sufficient; that is, we present some structural properties which allow us to identify graphs that verify the condition but are not biclique graphs.
Next, given a biclique graph that is neither a $K_3$ nor a \textit{diamond}, we prove that the number of vertices of degree two is less than half of its vertices. Consequently, we give some forbidden structures.
We hope that these tools shed some light on the main open problem in the context of biclique graphs, that is, the recognition of this class. In the appendix, we present a complete list of all biclique graphs with up to $6$ vertices.
This work is organized as follows. In Section $2$ the notation is given. Section $3$ contains some preliminary and general properties. In Section $4$ we present the relation between distances in graphs and biclique graphs and some structural properties of biclique graphs. Section $5$ contains a bound on the number of vertices of degree two for biclique graphs and forbidden structures.
In the last two sections we present some open and interesting related problems, and the conclusions respectively.
\section{Preliminaries}
Along the paper we restrict to undirected simple graphs. Let $G=(V,E)$ be a graph with vertex set $V(G)$ and edge set $E(G)$, and
let $n=|V(G)|$ and $m=|E(G)|$. A \emph{subgraph} $G'$ of $G$ is a graph $G'=(V',E')$, where $V'\subseteq V$ and
$E'\subseteq (V'\times V') \cap E$. When $E'= (V'\times V') \cap E$, say that $V'$ \emph{induces} the subgraph $G'=(V',E')$, that is, $G'$ is an \emph{induced subgraph} of $G$.
A graph $G=(V,E)$ is \emph{bipartite} when $V= U \cup W$, $U \cap W = \emptyset$, and $E \subseteq U \times W$. Say that $G$ is a
\emph{complete graph} when every possible edge belongs to $E$. A complete graph of $n$ vertices is denoted $K_{n}$.
A \emph{clique} of $G$ is a maximal complete induced subgraph, while a \emph{biclique} is a maximal bipartite complete induced subgraph of $G$.
The \emph{open neighborhood} of a vertex $v \in V(G)$, denoted $N(v)$, is the set of vertices adjacent to $v$.
The \emph{closed neighborhood} of a vertex $v \in V(G)$, denoted $N[v]$, is the set $N(v) \cup \{v\}$.
The \emph{degree} of a vertex $v$, denoted by $d(v)$, is defined as $d(v) = |N(v)|$.
A vertex $v\in V(G)$ is \emph{universal} if it is adjacent to all of the other vertices in $V(G)$. A \emph{path} on $k$ vertices, denoted by $P_{k}$, is a sequence of vertices
$v_{1}v_{2}\ldots v_{k} \in V(G)$ such that $v_{i} \neq v_{j}$ for all $1 \leq i \neq j \leq k$ and $v_{i}$ is adjacent to $v_{i+1}$
for all $1 \leq i \leq k-1$. A graph is \emph{connected} if there exists a path between each pair of vertices. The distance between two vertices $v,w \in V(G)$ is defined
as the number of edges in a shortest path between them and is denoted by $d_G(v,w)$. Whenever no confusion arises, we will simply write $d(v,w)$ instead of $d_G(v,w)$.
We assume that all the graphs of this paper are connected.
A \textit{diamond} is a complete graph with $4$ vertices minus an edge. A \textit{gem} is an induced path with $4$ vertices plus a
universal vertex.
Given a family of sets $\mathcal{H}$, the \emph{intersection graph} of $\mathcal{H}$ is a graph that has the members of
$\mathcal{H}$ as vertices and there is an edge between two sets $E,F\in\mathcal{H}$ when $E$ and $F$ have non-empty intersection.
A graph $G$ is an \emph{intersection graph} if there exists a family of sets $\mathcal{H}$ such that $G$ is the intersection
graph of $\mathcal{H}$. We remark that every graph is an intersection graph \cite{Szpilrajn-MarczewskiFM1945}.
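The definition of an intersection graph translates directly into code; a minimal sketch (indices stand for the members of $\mathcal{H}$):

```python
import itertools

def intersection_graph(family):
    """Intersection graph of a finite family of sets: vertex i stands for
    the i-th member; i and j are adjacent when the members intersect."""
    members = list(family)
    edges = {(i, j)
             for i, j in itertools.combinations(range(len(members)), 2)
             if members[i] & members[j]}
    return list(range(len(members))), edges
```

For example, the family $\{\{1,2\},\{2,3\},\{4\}\}$ yields a single edge between the first two members.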
\section{General properties}\label{generalProps}
In this section we present some properties of the biclique graph related to connectivity.
First we recall the theorem in~\cite{GroshausSzwarcfiterJGT2010} that gives a necessary condition for a graph to be a biclique graph.
\begin{theorem}[\cite{GroshausSzwarcfiterJGT2010}]\label{tMarina}
Let $G$ be a graph such that $G=KB(H)$, for some graph $H$. Then every induced $P_3$ of $G$ is contained in an induced \textit{diamond} or an induced \textit{gem} of $G$ as shown in Figure~\ref{FigtMarina}.
\end{theorem}
\begin{figure}[ht!]
\centering
\includegraphics[scale=.5]{diam_gem}
\caption{Induced $P_3$ in bold edges contained in a \textit{diamond} and in a \textit{gem} respectively.}
\label{FigtMarina}
\end{figure}
In Section~\ref{sectDist} we give a different proof of Theorem~\ref{tMarina}.
One question that arises from Theorem~\ref{tMarina} is the following: given a graph $G$ such that every induced $P_3$ is contained in
a \textit{diamond} or in a \textit{gem}, is $G=KB(H)$ for some graph $H$? In Section~\ref{sectDist} we show that the answer is negative
by proving a result that allows us to construct graphs that have every induced $P_3$ in
a \textit{diamond} or in a \textit{gem} although they are not biclique graphs.
Next we show the connectivity relation between $G$ and $KB(G)$.
\begin{proposition}\label{p1}
Let $G$ be a graph. $G$ is connected if and only if $KB(G)$ is connected.
\end{proposition}
\begin{proof} $\Rightarrow )$ Suppose $G$ is connected. Let $B$ and $B'$ be bicliques of $G$.
If $B$ intersects $B'$, then these bicliques are adjacent as vertices of $KB(G)$.
If they do not intersect, since $G$ is connected, there is a path between each vertex of $B$ and each vertex of $B'$.
Let $b \in B$ and $b' \in B'$ be such that $d(b,b') = \min \{d(v,w) : v \in B,\, w \in B'\}$.
Let $k=d(b,b')$. Clearly $k>0$, so take a path $P=bv_1\ldots v_{k-1}b'$ of length $k$ between $b$ and $b'$.
Now, each triple of consecutive vertices of $P$ is contained in a biclique, since the endpoints of each triple
are not adjacent. Finally, taking bicliques containing the triples
$\{b,v_1,v_2\}, \{v_2,v_3,v_4\},\ldots,\{v_{k-2},v_{k-1},b'\}$, we have that each biclique intersects only the previous and the following
one; therefore, as vertices of $KB(G)$, they form a path between $B$ and $B'$. Hence $KB(G)$ is connected.
The converse is clear.
\end{proof}
The following result is a direct consequence of Theorem~\ref{tMarina}.
\begin{lemma}\label{l2}
Let $G$ be a graph such that $G=KB(H)$, for some graph $H$. Then $G$ is $2$-connected. Moreover, if $G$ has at least $3$ vertices
then $d(v) \geq 2$ for all $v \in V(G)$.
\end{lemma}
\begin{lemma}[\cite{marinayo}]\label{l3}
If $G$ is an induced subgraph of $H$, then $KB(G)$ is a subgraph (not necessarily induced) of $KB(H)$.
\end{lemma}
\section{Distances in $G$ and $KB(G)$}\label{sectDist}
In this section we define the distance between bicliques in a graph. We also study the relation between the distance
between bicliques in a graph $G$ and the distance between their respective vertices in $KB(G)$.
We define the distance between bicliques as follows:
\begin{definition}\label{defDist}
Let $G$ be a graph and let $B, B'$ be bicliques of $G$. We define the \emph{distance} between $B$ and $B'$ as
$d(B,B') = \min \{d(b,b') : b \in B,\, b' \in B'\}$.
\end{definition}
The next formula states the relationship between the distances of $G$ and $KB(G)$.
This result is useful for giving a simpler proof of Theorem~\ref{tMarina} and also to show that the condition of Theorem~\ref{tMarina} is not sufficient.
\begin{lemma}\label{l4}
Let $G$ be a graph and let $B, B'$ be two bicliques of $G$. Then
$d_{KB(G)}(B,B') = \big\lfloor\frac{d_G(B,B') + 1}{2} \big\rfloor + 1$.
\end{lemma}
\begin{proof} Let $v_0,v_k$ be vertices of $G$ such that $v_0 \in B$, $v_k \in B'$ and $d(v_0, v_k) = d_G(B, B') = k$.
If $k=0$ then $B$ and $B'$ intersect in $G$, so they are adjacent as vertices in $KB(G)$. Therefore
$d_{KB(G)}(B, B') = \big\lfloor\frac{0 + 1}{2} \big\rfloor + 1 = 1$. Suppose now that $k>0$. Let $P_1=v_0 v_1\ldots v_k$
be a path in $G$ between $B$ and $B'$ of length $k$. Take $B_i \in V(KB(G))$ such that
$\{v_i, v_{i+1}, v_{i+2}\} \subseteq B_i$ in $G$ for $i=0,\ldots,k-2$. These bicliques $B_i$ of $G$ exist since
$v_i v_{i+2} \notin E(G)$, otherwise there would be a path of length less than $k$ between $B$ and $B'$.
Then $B B_0 B_2 B_4\ldots B_{2j} \ldots B'$ is a path in $KB(G)$ between $B$ and $B'$ of length
$\big\lfloor\frac{k + 1}{2} \big\rfloor + 1$ and therefore, as $d_G(B,B')=k$, we have that
$d_{KB(G)}(B,B')$ $\leq \big\lfloor\frac{d_{G}(B,B') + 1}{2} \big\rfloor + 1$.
This situation can be observed in Figure~\ref{Fig20}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.5]{dist1}
\caption{First inequality.}
\label{Fig20}
\end{figure}
Now let $P_2=B_0 B_1\ldots B_s$ be a path of minimum length in $KB(G)$ between $B=B_0$ and $B'=B_s$ (Fig.~\ref{Fig21}).
Then $d_{KB(G)}(B, B') = s > 0$. For $i=0,\ldots,s-1$, let $v_{2i} \in B_i \cap B_{i+1}$ be a vertex of $G$.
In this way we obtain vertices $v_0, v_2,\ldots,v_{2s-2}$ of $G$. Now, for $i=1,\ldots,s-1$, either $v_{2i-2}$ is adjacent
to $v_{2i}$ or it is not. If they are not adjacent, then there exists a vertex $v_{2i-1}$ adjacent
to both, since $v_{2i-2}$ and $v_{2i}$ belong to the biclique $B_{i}$ of $G$. Hence the longest path between $v_0$ and
$v_{2s-2}$ occurs when every such pair of consecutive vertices is non-adjacent. In this situation, adding the common neighbour
between each pair, we obtain a path $v_0v_1\ldots v_{2s-2}$ in $G$ between $B$ and $B'$ of length $2s-2$. Now, depending on whether
$B'$ contains the vertex $v_{2s-3}$, the length of the path is $2s-2+t$ for $t \in \{-1,0\}$. Then,
\begin{displaymath}
d_G(B,B') \leq 2s-2+t, \quad t \in \{-1,0\}
\end{displaymath}
\begin{displaymath}
\frac{d_G(B,B')+2+t}{2} \leq s = d_{KB(G)}(B,B'), \quad t \in \{0,1\}
\end{displaymath}
\begin{displaymath}
\frac{d_G(B,B')+t}{2} + 1 \leq d_{KB(G)}(B,B'), \quad t \in \{0,1\}
\end{displaymath}
Finally, it follows that
\begin{displaymath}
\Big\lfloor\frac{d_G(B,B') + 1}{2} \Big\rfloor + 1 \leq d_{KB(G)}(B,B')
\end{displaymath}
Combining both inequalities we obtain the desired result.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.57]{dist2}
\caption{Second inequality.}
\label{Fig21}
\end{figure}
\end{proof}
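The formula of Lemma~\ref{l4} can be checked by brute force on small graphs. The following Python sketch (an illustration of ours, not part of the paper; the choice of $P_6$ and all function names are our own) enumerates the bicliques of the path on six vertices, builds $KB(G)$ as the intersection graph of the bicliques, and verifies the formula for every pair of bicliques.

```python
from itertools import combinations
from collections import deque

def is_complete_bipartite(adj, S):
    """True if S induces a complete bipartite subgraph (both sides non-empty)."""
    S = set(S)
    v = next(iter(S))
    side_b = {u for u in S if u in adj[v]}  # in K_{m,n}, N(v) is the whole opposite side
    side_a = S - side_b
    if not side_b:
        return False
    return (all(b in adj[a] for a in side_a for b in side_b)
            and all(y not in adj[x] for x, y in combinations(side_a, 2))
            and all(y not in adj[x] for x, y in combinations(side_b, 2)))

def bicliques(adj):
    """All bicliques (maximal induced complete bipartite subgraphs), brute force."""
    V = list(adj)
    cbs = [frozenset(S) for r in range(2, len(V) + 1)
           for S in combinations(V, r) if is_complete_bipartite(adj, S)]
    return [S for S in cbs if not any(S < T for T in cbs)]

def bfs_dist(adj, src):
    """Distances from src by breadth-first search."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

# G = P6, the path on six vertices; its bicliques are the four induced P3's.
n = 6
adj = {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
Bs = bicliques(adj)
kb_adj = {B: {C for C in Bs if C != B and B & C} for B in Bs}  # KB(G)
dG = {v: bfs_dist(adj, v) for v in adj}
for B, C in combinations(Bs, 2):
    d_bic = min(dG[b][c] for b in B for c in C)             # d_G(B, C)
    assert bfs_dist(kb_adj, B)[C] == (d_bic + 1) // 2 + 1   # the Lemma's formula
```

For $P_6$ the check covers both cases $d_G(B,C)=0$ (intersecting bicliques, adjacent in $KB$) and $d_G(B,C)=1$ (the two end triples, at distance $2$ in $KB$).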
Now, based on the distance between two bicliques of a graph $G$, we can guarantee the existence of other bicliques ``between them''.
That is, if the distance between the bicliques $B$ and $B'$ of $G$ is $k$, then there exist other bicliques at distance at most
$k-1$ from each of $B$ and $B'$.
This result will be very useful for proving not only Theorem~\ref{tMarina} but also that the condition of that
theorem is not sufficient.
\begin{theorem}\label{teodistk}
Let $G$ be a graph and let $B, B'$ be bicliques of $G$ such that $d_G(B, B')=k > 0$. Then there exist at least $k+1$ bicliques in $G$ that are at distance at most $k-1$ from each of $B$ and $B'$.
\end{theorem}
\begin{proof} We prove the theorem in two parts: first when $d_G(B, B')=1$ and then when $d_G(B, B')=k > 1$.
We remark that the proof could be done by induction on the distance $k$ between $B$ and $B'$, but it would be longer,
since we would have to settle the base cases $k=1,2$ before the inductive step.
Suppose first that $d_G(B, B')=1$. Let $v \in B$ and $w \in B'$ be adjacent vertices. Let $x \in B$ be a vertex adjacent to $v$ and $y \in B'$ adjacent to $w$ (Fig.~\ref{Fig22}).
\begin{figure}[ht!]
\centering
\includegraphics[scale=.3]{distentre2}
\caption{Bicliques $B$ and $B'$ are at distance $1$.}
\label{Fig22}
\end{figure}
We have the following cases:
\begin{itemize}
\item If $xw, xy, vy \notin E(G)$, then the sets of vertices $\{x,v,w\}$ and $\{v,w,y\}$ are contained in two bicliques of $G$ different from $B$ and $B'$.
\item If $xw \in E(G)$, $xy, vy \notin E(G)$, then $\{x,w,y\}$ and $\{v,w,y\}$ are contained in two bicliques of $G$ different from $B$ and $B'$. The case
$vy \in E(G)$, $xy, xw \notin E(G)$ is similar.
\item If $xw, yv \in E(G)$, $xy \notin E(G)$, then $\{x,v,y\}, \{x,w,y\}$ and $\{v,w\}$ are contained in three bicliques of $G$ different from $B$ and $B'$.
The cases $xw, xy \in E(G)$, $vy \notin E(G)$ and $xy, vy \in E(G)$, $xw \notin E(G)$ are similar.
\item If $xw, vy, xy \in E(G)$, then $\{x,y\}, \{v,y\}, \{v,w\}, \{x,w\}$ are contained in four bicliques of $G$ different from $B$ and $B'$.
\item If $xy \in E(G)$, $xw, vy \notin E(G)$, then the set of vertices $\{x,v,w,y\}$ is contained in a biclique of $G$ different from $B$ and $B'$.
Now, as $w \notin B$, there exists a vertex $z \in B$ either adjacent to $v$ and $w$ (or to $x$ and $y$) or adjacent to $v$ and not adjacent to $y$
(or adjacent to $x$ and not adjacent to $w$). If $z$ is adjacent to $v$ and $w$, we are in one of the previous cases.
If $z$ is adjacent to $v$ and not adjacent to $y$, we can assume that $z$ is also not adjacent to $w$, otherwise we are again in one of the previous cases.
Therefore $\{z,v,w\}$ is contained in another biclique of $G$, different from $B$, $B'$ and the one containing $\{x,v,w,y\}$.
Analogously, taking a vertex $u \in B'$, we obtain a third biclique in a similar way.
\end{itemize}
In all cases there are at least two different bicliques that intersect $B$ and $B'$, that is, they are at distance $k-1=0$ from each of them.
Suppose now that $d_G(B,B')=k>1$ and let $v_0 \in B$ and $v_k \in B'$ be vertices such that $d_G(v_0,v_k)=d_G(B,B')=k$ (Fig.~\ref{Fig23}).
\begin{figure}[ht!]
\centering
\includegraphics[trim = 10mm 0mm 0mm 0mm, scale=.47]{distentrek}
\caption{Bicliques $B$ and $B'$ are at distance $k$.}
\label{Fig23}
\end{figure}
Then, there exists a path $P=v_0v_1\ldots v_k$ of length $k$ between $B$ and $B'$ (in fact between $v_0$ and $v_k$).
Clearly $v_i$ is not adjacent to $v_j$ for $0 \leq i < j \leq k$ with $j\neq i+1$, since otherwise there would be a shorter path between $B$ and $B'$.
Then, each triple $\{v_{i},v_{i+1},v_{i+2}\}$ is contained in a different biclique of $G$ for $i=0,\ldots,k-2$.
Therefore we obtain $k-1$ bicliques that are at distance at most $k-1$ from each of $B$ and $B'$. We obtain the two remaining bicliques
as follows. As $B$ is a biclique, there exists a vertex $x\in B$ such that $xv_0 \in E(G)$. If $xv_1 \notin E(G)$, then $\{x,v_0,v_1\}$ is contained in a biclique of $G$ different from
$B$; if $xv_1 \in E(G)$, then $\{x,v_1,v_2\}$ is contained in a biclique of $G$ different from $B$. It is worth mentioning that
$xv_i \notin E(G)$ for $i\geq 2$, since otherwise a path of length less than $k$ would exist between $B$ and $B'$. Finally,
we obtain the remaining biclique in the same way, taking a vertex $y \in B'$ such that $yv_k \in E(G)$.
\end{proof}
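As a concrete check of the case $k=1$ (our own example, not from the text): in the path on six vertices $0,\dots,5$, the bicliques are the four induced $P_3$'s, and the two end bicliques are disjoint but joined by the edge $23$, so the theorem guarantees $k+1=2$ bicliques intersecting both.

```python
# Bicliques of the path 0-1-2-3-4-5 (the four induced P3's).
Bs = [frozenset(s) for s in ({0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4, 5})]
B, Bp = Bs[0], Bs[3]              # disjoint bicliques; d_G(B, B') = 1 via the edge 2-3
between = [C for C in Bs if C not in (B, Bp) and C & B and C & Bp]
assert len(between) >= 2          # at least k+1 = 2 bicliques at distance 0 from both
```

Here the two intermediate triples $\{1,2,3\}$ and $\{2,3,4\}$ play the roles of the bicliques guaranteed by the theorem.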
As an immediate consequence of Theorem~\ref{teodistk} for $k=1$, we obtain the following corollary.
\begin{corollary}\label{coroLoco}
Let $G$ be a graph and let $B, B'$ be bicliques of $G$ such that $B \cap B' = \emptyset$. Suppose that there exists an edge $e$
with one endpoint in $B$ and the other in $B'$. If $B_1$ is a biclique that contains the edge $e$, then there exists another
biclique $B_2 \neq B,B',B_{1}$ such that $B_2$ intersects $B$, $B'$ and $B_1$.
\end{corollary}
Now we give the proof of Theorem~\ref{tMarina} based on distances between bicliques.
\vspace*{3mm}
\noindent\prooff{~\ref{tMarina}} Let $uvw$ be an induced $P_{3}$ in $G$ and let $U$, $V$ and $W$ be the bicliques of $H$ associated to the vertices
$u$, $v$ and $w$ of $G$. As $d_{G}(u,w)=2$, by Lemma~\ref{l4} we have $d_{H}(U,W)=2$ or $d_{H}(U,W)=1$.
\begin{itemize}
\item Case $d_{H}(U,W)=2$. In this case there is no edge between the bicliques $U$ and $W$. Now, as $V$ intersects both $U$ and $W$, we have that
$V$ contains a $P_{3}=abc$ such that $a \in U$ and $c \in W$ (Fig.~\ref{FigExtraExtra1}).
Since $d_{H}(U,W)=2$ and the path joining $U$ and $W$ uses edges of $V$, by Theorem~\ref{teodistk} there are
three bicliques ``between'' $U$ and $W$ in $H$. Furthermore, one of these bicliques is $V$. Let $Z_1$ and $Z_2$ be the other two bicliques in $H$ and
$z_1$ and $z_2$ their associated vertices in $G$. Following the proof of the second part of Theorem~\ref{teodistk}, we can see that $Z_1$ intersects $U$, $V$ and $Z_2$, and
$Z_2$ intersects $V$, $W$ and $Z_1$. Now, depending on the intersections between $U,Z_2$ and $W,Z_1$, we conclude that either
$\{u,v,w,z_{2}\}$ or $\{u,v,w,z_{1}\}$ induces a \textit{diamond}, or $\{u,v,w,z_{1},z_{2}\}$ induces a \textit{gem} in $G$ that contains the $P_{3}$.
\item Case $d_{H}(U,W)=1$. In this case there is at least one edge between $U$ and $W$. Now, if some such edge, say $e$, satisfies $e \cap V \neq \emptyset$, then
by Theorem~\ref{teodistk}, there are two intersecting bicliques (at least one of which must contain $e$) between $U$ and $W$ that also intersect $U$ and $W$.
If $V$ is one of these two (if not, $V$ intersects at least one of them, since $e \cap V \neq \emptyset$), calling $Z$ the biclique different from $V$, we have that $\{u,v,w,z\}$
induces a \textit{diamond} that contains the $P_3$ in $G$.
Finally, assume that every edge $e$ between $U$ and $W$ satisfies $e \cap V = \emptyset$. Then, as in the first case,
$V$ contains a $P_{3}$ with one endpoint in $U$ and the other in $W$. Now, as no edge between $U$ and $W$ intersects $V$,
using the edges of that $P_3$ in $V$ we obtain the same conclusion as in the previous case: two different bicliques $Z_1$ and $Z_2$ in $H$ such
that either $\{u,v,w,z_{2}\}$ or $\{u,v,w,z_{1}\}$ induces a \textit{diamond}, or $\{u,v,w,z_{1},z_{2}\}$ induces a \textit{gem} in $G$ that contains the $P_{3}$.
\end{itemize}
\begin{figure}[ht!]
\centering
\includegraphics[scale=.4]{ULTI_2}
\caption{Bicliques $U,V$ and $W$ of the graph $H$.}
\label{FigExtraExtra1}
\end{figure}
Since all cases are covered, the result holds.
\qed
\vspace*{5mm}
Now we show that although every induced $P_{3}$ of the \textit{crown} (Fig.~\ref{FigExtra3}) is contained in an induced \textit{diamond}, the \textit{crown} is not a biclique graph. This gives a counterexample showing that the condition of Theorem~\ref{tMarina} is not sufficient.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.25]{crown2}
\caption{The \textit{crown} is not a biclique graph but has every $P_{3}$ in a \textit{diamond}.}
\label{FigExtra3}
\end{figure}
Indeed, we will prove a more general result implying that not only the \textit{crown} but also many other graphs are not biclique graphs. We remark that the \textit{crown} is the smallest graph that satisfies the condition of Theorem~\ref{tMarina} but is not a biclique graph.
\begin{proposition}\label{pULTI}
Let $G=KB(H)$ for some graph $H$ where $G \neq diamond$. Then, there do not exist $v_1,v_2 \in V(G)$ such that
$N(v_1)=N(v_2)$ and their neighborhood induces a $K_2$.
\end{proposition}
\begin{proof} Suppose that there exist $v_1,v_2 \in V(G)$ such that $N(v_1)=N(v_2)$ and their neighborhood induces a $K_2$. Then,
$d_{G}(v_1,v_2)=2$ and therefore, if $B$ is the biclique of $H$ that corresponds to $v_1$ and $B'$ is the biclique of $H$
that corresponds to $v_2$, by Lemma~\ref{l4}, $d_{H}(B,B')=2$ or $d_{H}(B,B')=1$. We will analyse each case.
\begin{itemize}
\item Case $d_{H}(B,B')=2$. In this case $H$ must contain a subgraph as depicted in Figure~\ref{FigUltiPosta}. We show that this leads to a contradiction.
Suppose first that one of the two dotted edges does not exist, say $vy$. Then $\{v,x,y\}$ is contained in a biclique that does not intersect
$B'$, a contradiction, since $N(v_1)=N(v_2)$ in $G$.
Suppose next that both dotted edges $vy$ and $vy'$ exist. In this case we also arrive at a contradiction, since in $H$ there are at least four bicliques
that intersect both $B$ and $B'$: we obtain one for each choice of a vertex in $\{x,y\}$ and a vertex in $\{x',y'\}$, together with $v$.
Therefore $N(v_1)=N(v_2)$ does not induce a $K_2$, which is a contradiction.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.3]{nokbultima3}
\caption{Graph $H$ when $d_{H}(B,B')=2$.}
\label{FigUltiPosta}
\end{figure}
\item Case $d_{H}(B,B')=1$.
In this case, by Theorem~\ref{teodistk}, there exist at least two bicliques $B_{1},B_{2}$ in $H$ that intersect both $B$ and
$B'$. Clearly there must be exactly two, otherwise $N(v_1)=N(v_2)$ would not induce a $K_{2}$. Now, by the first part of the proof of Theorem~\ref{teodistk}, only in the first two cases are there exactly two bicliques intersecting both $B$ and $B'$
in $H$. Figure~\ref{FigDOSCASOS} shows both possible options (without the vertices $z$, $z_{1}$ and $z_{2}$). The labels of
Theorem~\ref{teodistk} are kept as in Fig.~\ref{Fig22}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.5]{casoscorona}
\caption{Unique two options for $H$ with two bicliques that intersect $B$ and $B'$.}
\label{FigDOSCASOS}
\end{figure}
\begin{itemize}
\item Case \textbf{(a)}: As $B'$ is a biclique in $H$, there must exist a vertex $z$ adjacent to $y$ and not adjacent
to $w$. Now, as we can observe in Figure~\ref{FigDOSCASOS}\textbf{a}, $H$ has four bicliques that induce a
\textit{diamond} in $G$. As $G \neq diamond$, there must exist another biclique in $H$ that intersects neither $B$ nor $B'$.
This new biclique must be formed by edges in $B_{1} \cup B_{2}$ or by new vertices adjacent to $B_{1}$ or to $B_{2}$
(Fig.~\ref{FigDOSCASOS}\textbf{a}). In both cases, we can see that this new biclique intersects $B$, $B'$ or both,
since $B$ and $B'$ have vertices in $B_{1}$ and in $B_{2}$. Then $v_1$ or $v_2$ has an open neighborhood of more than two vertices, a contradiction.
\item Case \textbf{(b)}: As $B$ is a biclique in $H$, there must exist a vertex $z_{1}$ adjacent to $x$. Analogously,
as $B'$ is a biclique in $H$, there must exist another vertex $z_{2}$ adjacent to $y$ (Fig.~\ref{FigDOSCASOS}\textbf{b}).
As in case \textbf{(a)}, $H$ has four bicliques that induce a \textit{diamond} in $G$. Then there must exist another
biclique in $H$, different from these four, that intersects neither $B$ nor $B'$. Finally, by the
same argument as in the previous case, if this new biclique exists, then it intersects $B$ or $B'$, which is again a contradiction.
\end{itemize}
\end{itemize}
As no cases remain, there do not exist $v_{1},v_{2} \in V(G)$ such that $N{(v_{1})}=N{(v_{2})}$ and their neighborhood induces a $K_2$, which completes the proof.
\end{proof}
Figure~\ref{FigExtraExtra3} shows some examples of graphs that are not biclique graphs although every induced $P_{3}$ is contained in a \textit{diamond}.
\begin{figure}[ht!]
\centering
\includegraphics[scale=.5]{nokbultima}
\caption{Graphs that are not biclique graphs by Proposition~\ref{pULTI}.}
\label{FigExtraExtra3}
\end{figure}
\section{Vertices of degree two in biclique graphs}
In this section we give a strong property for biclique graphs that have an induced $P_3$ contained in a \textit{gem} and not in
a \textit{diamond}. Also, we show some forbidden structures.
These properties give more tools to recognize graphs that are not biclique graphs.
The next result implies that the \textit{Haj\'os graph}, the \textit{rising sun} and the $X_1$ graph (see Fig.~\ref{hajosandsun}) are not biclique graphs by giving a forbidden structural property.
\begin{proposition}\label{B3}
Let $G=KB(H)$ for some graph $H$ and let $v_1v_2v_3$ be an induced $P_3$ of $G$ that is not contained in a \textit{diamond}. Let
$v_4, v_5$ be the vertices of $G$ such that $\{v_1,v_2,v_3,v_4,v_5\}$ induces a \textit{gem} (where $v_1,v_4$ and $v_3,v_5$ are adjacent).
If $v$ is another vertex that is not adjacent to $v_1$ and does not belong to a \textit{diamond} with $v_1$, then $v$ is not adjacent to $v_4$.
\end{proposition}
\begin{proof}
Let $B_1,\ldots,B_5,B$ be the bicliques of $H$ corresponding to the vertices $v_1,\ldots,v_5,v$ of $G$.
Let $xy$ be an edge that belongs to $B_1\cap B_4$. Note that this edge exists, as the $P_3$ in $G$ is not contained in a \textit{diamond}
(second part of the proof of Theorem~\ref{teodistk}). By contradiction, if $B$ and $B_4$ intersect (i.e., $v$ is adjacent to $v_4$
in $G$), then either $x$ is adjacent to some vertex $z$ of $B$, or $y$ is adjacent to some vertex $z$ of $B$.
In either case, there is an edge between a vertex of $B_1$ and a vertex of $B$, which is a contradiction by
Theorem~\ref{teodistk}, as $v$ and $v_1$ would belong to a \textit{diamond} in $G$.
\end{proof}
\begin{corollary}
The Haj\'os graph, the rising sun and the $X_1$ graph are not biclique graphs (Fig.~\ref{hajosandsun}).
\end{corollary}
Moreover, these three graphs give forbidden structures for biclique graphs.
\begin{corollary}\label{coroHajos}
Let $G$ be a graph that contains the \textit{Haj\'os graph}, the \textit{rising sun} or the $X_1$ graph as an induced subgraph, where the vertices of degree two in the subgraph also have degree two in $G$. Then $G$ is not a biclique graph.
\end{corollary}
\begin{figure}[h]
\centering
\includegraphics[scale=.3]{hajosandsun}
\caption{The \textit{Haj\'os graph}, the \textit{rising sun} and the $X_1$ graph.}
\label{hajosandsun}
\end{figure}
Next we present the theorem that gives an upper bound on the number of vertices of degree two in a biclique graph.
\begin{theorem}\label{teogrado2}
Let $G=KB(H)$ for some graph $H$ where $G \neq K_3,diamond$ and $|V(G)|=n$. Then, the number of vertices of degree two in $G$ is strictly less than $n/2$.
\end{theorem}
\begin{proof}
Let $V_2 = \{ v \in V(G) : d(v) = 2\}$. First we show that there are no edges between the vertices of $V_2$.
Suppose to the contrary that $v_i,v_j\in V_2$ are adjacent. If they have a common neighbor, say $w$, then since $G\neq K_3$, both
$\{v_i,w,w'\}$ and $\{v_j,w,w'\}$, where $w'$ is any other neighbor of $w$, induce a $P_3$ that is not contained in a \textit{diamond}. Otherwise, if a vertex $w$ is adjacent to $v_i$ and not adjacent to $v_j$, then $\{w,v_i,v_j\}$ induces a $P_3$
that is not contained in a \textit{diamond}, since no other vertex is adjacent to $v_i$. In both cases we arrive at a contradiction by Theorem~\ref{tMarina}.
Next we show that for each $v_i$, there exists a vertex $w_i \in N(v_i)$ such that $w_i \notin N(v_j)$, for all $v_j \in V_2$, $j \neq i$. Clearly, $w_i\notin V_2$.
By contradiction, suppose that there is a vertex $v_i \in V_2$ such that each of
its two neighbors, say $v', v''$, are adjacent to some other vertex of $V_2$. Note again that by Theorem~\ref{tMarina}, $v'$ and $v''$ are adjacent. Consider the following cases:
\begin{itemize}
\item $N(v_i) = N(v_j) = \{v',v''\}$, for some $v_j \in V_2$. Since $G \neq diamond$, by Proposition~\ref{pULTI} we arrive at a contradiction.
\item $v' \in N(v_j)$ and $v'' \in N(v_k)$, for some $v_j, v_k \in V_2$, $j \neq k$. Let $v'''$ be the other vertex adjacent to
$v_j$. Now, as $\{v_i,v',v_j\}$ induces a $P_3$, by Theorem~\ref{tMarina} the set $\{v_i,v',v_j,v'',v'''\}$ induces a \textit{gem} containing that $P_3$, since $v',v'''$ and $v'',v'''$ must be adjacent. Finally, if the vertices
$v_i,v',v_j,v'',v''',v_k$ are respectively called $v_1,v_2,\ldots,v_5,v$, then by Proposition~\ref{B3} we obtain a contradiction, since $v_k$ cannot be adjacent to $v''$. Note that, depending on the other neighbor of $v_k$ and its adjacencies, the
\textit{Haj\'os graph}, the \textit{rising sun} or the $X_1$ graph appears.
\end{itemize}
Therefore, to each vertex of degree two we can associate a unique neighbor of degree
greater than two that is not adjacent to any other vertex of degree two.
Since at least one neighbor of a vertex of $V_2$ is not associated to any vertex and has degree greater than two, we have $n \geq 2|V_2|+1$, that is, $|V_2| < n/2$.
\end{proof}
As an application of Theorem~\ref{teogrado2}, the first and third graphs of Figure~\ref{FigExtraExtra3} have more vertices of degree two than vertices of degree greater than two; therefore, they are not biclique graphs. Two other examples are shown in Figure~\ref{ejteogrado2}.
\begin{figure}[h]
\centering
\includegraphics[scale=.3]{ejteogrado2}
\caption{Graphs that are not biclique graphs by Theorem~\ref{teogrado2}.}
\label{ejteogrado2}
\end{figure}
As another application of Proposition~\ref{B3}, we obtain the following result.
For this, we need the following definition. A family of sets $\mathcal{A}$ is \textit{Helly} when
every subfamily of pairwise intersecting sets has a non-empty intersection.
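The Helly condition is easy to test by brute force on small families. The sketch below is our own illustration of the definition; the non-Helly example is the family of closed neighbourhoods of the three degree-two vertices of the \textit{Haj\'os graph} (labelled $x,y,z$, with $a,b,c$ the triangle), which is exactly the obstruction used in Proposition~\ref{2-helly}.

```python
from itertools import combinations

def is_helly(family):
    """True iff every pairwise-intersecting subfamily has a common element."""
    for r in range(2, len(family) + 1):
        for sub in combinations(family, r):
            pairwise = all(a & b for a, b in combinations(sub, 2))
            if pairwise and not frozenset.intersection(*map(frozenset, sub)):
                return False
    return True

# Closed neighbourhoods of the degree-two vertices of the Hajos graph:
# pairwise intersecting, yet with empty common intersection.
hajos_neighbourhoods = [{'x', 'a', 'b'}, {'y', 'b', 'c'}, {'z', 'a', 'c'}]
assert not is_helly(hajos_neighbourhoods)
assert is_helly([{1, 2}, {2, 3}, {1, 2, 3}])
```

The exhaustive check is exponential in the size of the family, which is acceptable here only because the families of interest are small.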
\begin{proposition}\label{2-helly}
Let $G$ be a biclique graph and let $A = \{ N[v] : v \in V(G)$ and $d(v) = 2\}$. Then $A$ is Helly.
\end{proposition}
\begin{proof}
Suppose to the contrary that $A'$ is a minimal non-Helly subfamily of $A$. Since $A'$ is minimal and each $N[v_i] \in A'$ induces
a $K_3$, we have that $|A'|=3$. Moreover, as $A'$ is non-Helly, it induces the \emph{Haj\'os graph}, where the vertices of degree two in the subgraph also have degree two in $G$. Therefore, by Corollary~\ref{coroHajos}, $G$ is not a biclique graph,
a contradiction. Hence $A$ is a Helly family.
\end{proof}
\section{Open problems}
In this section, we present some conjectures. We look for proofs or counterexamples.
We propose first the following conjecture that generalizes Proposition~\ref{2-helly}.
\begin{conjecture}
Let $G$ be a biclique graph and let $A = \{ N[v] : v\in V(G)$ and $v$ is simplicial$\}$. Then $A$ is Helly.
\end{conjecture}
Proposition~\ref{pULTI} can be extended leading to the following conjecture.
\begin{conjecture}\label{nokb_general}
Let $G=KB(H)$ for some graph $H$ where $G \neq diamond$. Then, there do not exist $v_{1},v_{2},\ldots,v_{i} \in V(G)$ such that
$N{(v_{1})}=N{(v_{2})}=\ldots =N{(v_{i})}$ and their neighborhood is contained in a $K_{i}$ for $i \geq 2$.
\end{conjecture}
The truth of Conjecture~\ref{nokb_general} would imply interesting results; for instance, if it holds, we can give a proof of the following one.
\begin{conjecture}\label{hamil}
Let $G=KB(H)$ for some graph $H$. Then $G$ has a Hamiltonian cycle.
\end{conjecture}
\section{Conclusions}
In this work we give a formula relating distances between vertices in the biclique graph $KB(G)$ to distances between the bicliques of $G$. This is a useful tool for proving structural properties of biclique graphs. In particular, it allows us to give a different proof of the necessary condition for a graph to be a biclique graph given
in~\cite{GroshausSzwarcfiterJGT2010}. It is also used to answer (negatively) the question of whether that condition is sufficient.
Finally, we give an upper bound on the number of vertices of degree two in a biclique graph, and we present some forbidden structures that are useful for recognizing graphs which are not biclique graphs.
https://arxiv.org/abs/1407.1916 | A restriction estimate using polynomial partitioning | If $S$ is a smooth compact surface in $\mathbb{R}^3$ with strictly positive second fundamental form, and $E_S$ is the corresponding extension operator, then we prove that for all $p > 3.25$, $\| E_S f\|_{L^p(\mathbb{R}^3)} \le C(p,S) \| f \|_{L^\infty(S)}$. The proof uses polynomial partitioning arguments from incidence geometry. | \subsection{Background on incidence geometry}
Incidence geometry studies the possible intersection patterns of simple geometric objects, such as lines or circles. Suppose that $\frak L$ is a set of lines in $\mathbb{R}^n$. We let $P_r(\frak L)$ be the set of $r$-rich points of $\frak L$: the set of points that lie in at least $r$ lines of $\frak L$. The most fundamental questions of incidence geometry asks, ``For given numbers $L$ and $r$, what is the maximum possible number of $r$-rich points that can be formed by a set of $L$ lines?'' Szemer\'edi and Trotter solved this problem up a constant factor in \cite{SzTr}. Other problems in incidence geometry involve sets of lines with extra conditions, other types of curves, and so on.
Polynomial partitioning is an important recent technique for attacking this type of problem. Partitioning is a divide-and-conquer approach. We pick a (non-zero) polynomial $P$, and consider its zero set $Z(P) \subset \mathbb{R}^n$. The complement $\mathbb{R}^n \setminus Z(P)$ is a union of connected components $O_i$, often called cells. To estimate the size of $P_r(\frak L)$, we can estimate the number of $r$-rich points in each cell $O_i$ and the number of $r$-rich points on the surface $Z(P)$. One crucial observation is that a line can cross $Z(P)$ at most $\Deg P$ times, and so it can enter at most $1 + \Deg P$ of the cells. Depending on the choice of $P$, $\mathbb{R}^n \setminus Z(P)$ can have as many as $\sim (\Deg P)^n$ cells. If there are $\sim (\Deg P)^n$ cells, then each line enters only a small fraction of the cells.
For this divide-and-conquer approach to be effective, we would like the points of $P_r(\frak L)$ to be evenly divided among the cells $O_i$. The following partitioning theorem deals with this issue. The partitioning theorem is a topological result, closely connected to the ham sandwich theorem proven by Stone and Tukey in \cite{ST}.
\begin{theorem} \label{partcomb} (Theorem 4.1 in \cite{GK}) Suppose that $X \subset \mathbb{R}^n$ is a finite set. For any $D \ge 1$, there is a polynomial $P$ of degree at most $D$ so that each component of $\mathbb{R}^n \setminus Z(P)$ contains at most $C_n D^{-n} |X|$ points of $X$.
\end{theorem}
If none of the points of $X$ are in $Z(P)$, then the points have to be quite evenly distributed among the components of $\mathbb{R}^n \setminus Z(P)$. We know that there are $\lesssim D^n$ components in total, and each component contains $\lesssim D^{-n} |X|$ points of $X$. However, it may happen that some or all of the points of $X$ lie in $Z(P)$. Theorem \ref{partcomb} really gives a kind of dichotomy: either the points cluster on a low degree surface, or else they can be evenly divided by a low degree surface.
Polynomial partitioning is used in incidence geometry roughly as follows. If the points of $P_r(\frak L)$ are evenly divided among the cells $O_i$, then we can do a divide-and-conquer argument, estimating the number of $r$-rich points in a typical cell. For a typical cell $O_i$, the number of lines intersecting $O_i$ is only a small fraction of the $L$ lines. Then we can estimate the number of $r$-rich points in $O_i$ either directly or by induction. On the other hand, if the points of $P_r(\frak L)$ cluster on a low-degree surface $Z(P)$, then there is some kind of special structure, and perhaps the original problem reduces to a lower-dimensional problem.
Polynomial partitioning was introduced in \cite{GK}, where it was applied to some problems about lines in $\mathbb{R}^3$. In \cite{KMS}, Kaplan, Matousek, and Sharir used polynomial partitioning to give new proofs of some classical results in incidence geometry, including the Szemer\'edi-Trotter theorem. Polynomial partitioning has been refined and applied to other problems by Solymosi and Tao \cite{SolTao}, Sharir and Solomon \cite{SS}, and others.
The proof of Theorem \ref{mainintro} uses ideas from these papers, especially the inductive setup introduced in \cite{SolTao}. In the next subsection, we will give some background on the restriction problem and explain how it connects with incidence geometry.
\subsection{Background on restriction}
One important example of a positively curved surface $S$ is the truncated paraboloid, defined by $\omega_3 = \omega_1^2 + \omega_2^2, \omega_1^2 + \omega_2^2 \le 1$. For the rest of the introduction, we focus on this example.
In \cite{B1}, Bourgain introduced the idea of studying $E_S f$ by breaking it into wave packets. For a large radius $R$, and for some exponent $p$, we would like to estimate $\int_{B_R} |E_S f|^p$. We first divide $S$ into caps $\theta$ of radius $\sim R^{-1/2}$. For each $\theta$, $E_S (f \chi_\theta)$ breaks into pieces supported on tubes. We let $\mathbb{T}(\theta)$ be a collection of finitely overlapping tubes covering $B_R$, pointing in the direction of the normal vector to $S$ at $\theta$, with length $\sim R$ and radius roughly $R^{1/2}$. We can then break $f \chi_{\theta}$ into pieces $f_T$, $T \in \mathbb{T}(\theta)$, so that $E_S f_T$ is essentially supported on $T$, $f_T$ is essentially supported on $\theta$, and the functions $f_T$ are essentially orthogonal. For each $T \in \mathbb{T}(\theta)$, $E_S f_T$ on $B_R$ is morally well-approximated by the following model:
$$\textrm{For } x \in B_R, \ E_S f_T(x) \textrm{ is approximately } a_T \chi_T e^{i \omega_{\theta} \cdot x} ,$$
\noindent where $\omega_{\theta}$ is the center of the cap $\theta$, and $a_T$ is a complex number with $|a_T| \sim R^{-1/2} \| f_T \|_{L^2(\theta)}$.
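The size of $|a_T|$ in this model is consistent with a Cauchy-Schwarz bound (a heuristic check, writing $E_S f_T$, up to normalization, as an integral over the cap $\theta$):
$$ |E_S f_T(x)| = \Big| \int_{\theta} f_T(\omega) e^{i \omega \cdot x} \, d\omega \Big| \le |\theta|^{1/2} \| f_T \|_{L^2(\theta)} \sim R^{-1/2} \| f_T \|_{L^2(\theta)}, $$
\noindent since the cap $\theta$ has radius $\sim R^{-1/2}$ and hence measure $|\theta| \sim R^{-1}$. So $|a_T| \sim R^{-1/2} \| f_T \|_{L^2(\theta)}$ is the largest amplitude that $|E_S f_T|$ can attain.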
Without significant loss of generality, one can imagine that $a_T = 0$ for some tubes $T$ and that $|a_T|$ is constant on all the other tubes. In this case, $\int_{B_R} |E_S f|^p$ is related to the combinatorics of how the tubes (with $a_T \not= 0$) overlap. Bourgain \cite{B1} proved combinatorial estimates about overlapping tubes pointing in different directions. Applying these estimates to the wave packets, he gave new estimates on the restriction problem.
Wolff (see \cite{W3}) observed that these problems about overlapping tubes have a similar flavor to the problems in incidence geometry we discussed in the last subsection. He was able to adapt arguments from incidence geometry to prove estimates in analysis. Using the partitioning argument from \cite{CEGSW}, he proved a Kakeya-type result involving circles \cite{W5} and a local smoothing estimate for the wave equation \cite{W4}. Following this philosophy, we will adapt the polynomial partitioning approach from incidence geometry to control the wave packets above.
Before turning to polynomial partitioning, we also need to introduce the idea of broad points.
We pick a large constant $K$ and we divide $S$ into $K^2$ caps $\tau$, each of diameter $\sim K^{-1}$, and we write $f_\tau$ for $f \chi_{\tau}$. For a real number $\alpha \in (0,1)$, we say that $x$ is $\alpha$-broad for $Ef$ if
$$ \max_{\tau} | E f_{\tau} (x)| \le \alpha |Ef(x)|. $$
We define $\Br_\alpha Ef(x)$ to be $|Ef(x)|$ if $x$ is $\alpha$-broad for $Ef$ and zero otherwise.
From this definition, we see that
\begin{equation} \label{broadnarrow} |E f(x) | \le \max \left( \Br_\alpha Ef (x), \alpha^{-1} \max_\tau |E f_\tau(x)| \right). \end{equation}
The broad contribution is the hardest to estimate, and the second term can be handled by induction as we explain below. Our strongest result is the following estimate about the broad points:
\begin{theorem} \label{introthm2} If $S$ is the truncated paraboloid, and $\epsilon > 0$, then there is a large constant $K = K(\epsilon)$ so that for any radius $R$
$$ \| \Br_{K^{-\epsilon}} E_S f \|_{L^{3.25}(B^3_R)} \le C_\epsilon R^{\epsilon} \| f \|_{L^2(S)}^{12/13} \| f \|_{L^\infty(S)}^{1/13}. $$
\end{theorem}
We now briefly explain how Theorem \ref{introthm2} implies Theorem \ref{mainintro}. As an immediate corollary of Theorem \ref{introthm2} we get the estimate:
\begin{equation} \label{broadlinfty} \| \Br_{K^{-\epsilon}} Ef \|_{L^{3.25}(B^3_R)} \le C_\epsilon R^{\epsilon} \| f \|_{L^\infty(S)}. \end{equation}
Following ideas from \cite{BG}, this estimate implies that
\begin{equation} \label{linfty} \| Ef \|_{L^{3.25}(B^3_R)} \le C_\epsilon R^\epsilon \| f \|_{L^\infty(S)} \end{equation}
\noindent Here is a quick sketch of the argument. The idea is to prove Inequality \ref{linfty} by induction on the radius. By Inequality \ref{broadnarrow}, we see that for any $p$,
$$ \int_{B_R} |E f|^p \le \int_{B_R} \Br_{K^{-\epsilon}} Ef^p + K^{p \epsilon} \sum_\tau \int_{B_R} |Ef_\tau|^p. $$
\noindent The broad term on the right-hand side is controlled by Inequality \ref{broadlinfty}. On the other hand, each integral $\int_{B_R} | E f_\tau |^p$ can be controlled by induction: after a change of variables, it can be controlled using Inequality \ref{linfty} on a smaller ball. The contributions from the $| E f_\tau |$ terms turn out to be dominated by the contribution from the broad term, and so the induction closes.
This observation is in a similar spirit to the bilinear approach to the restriction problem from \cite{TVV}.
Finally, by the $\epsilon$-removal theorem in \cite{T1}, Inequality \ref{linfty} in turn implies Theorem \ref{mainintro} for the paraboloid.
We also remark that the exponent $3.25$ is the sharp exponent in Theorem \ref{introthm2}, given the right-hand side. In order to control $L^p$ norms for $p < 3.25$, we would have to weight $\| f\|_\infty$ more and $\| f \|_2$ less.
\subsection{Examples}
We now give some examples of functions $f$ to illustrate Theorem \ref{introthm2}. These examples are supposed to give some sense of the theorem, and also to start to illustrate the connection between this theorem and incidence geometry questions.
The first example is a planar example. In this case, $E_S f$ is essentially supported in a planar slab of dimensions $R^{1/2} \times R \times R$. There are $\sim R^{1/2}$ caps $\theta \subset S$ for which the normal vector lies within an angle $\sim R^{-1/2}$ of the plane. For each of these $R^{1/2}$ caps $\theta$, there are $\sim R^{1/2}$ tubes $T \in \mathbb{T}(\theta)$ that lie in the planar slab. We pick a number $B$ between 1 and $R^{1/2}$, and for each of the $R^{1/2}$ caps $\theta$, we randomly pick $B$ tubes of $\mathbb{T}(\theta)$ that lie in our planar slab. We have now picked $\sim B R^{1/2}$ tubes $T$. An average point of the planar slab lies in $\sim B$ of our tubes. Since the tubes were selected randomly, most points of the planar slab lie in $\sim B$ of our tubes.
For each of our chosen tubes $T$ we choose $f_T$ so that $ |E_S f_T(x) | \gtrsim \chi_T$, and $\| f_T \|_2 \sim R^{1/2}$ and $\| f_T \|_\infty \sim R$. Now we let $f$ be a sum with random signs: $f = \sum_T \pm f_T$. Because of the random signs, $|Ef(x)| \gtrsim B^{1/2}$ on most points in the planar slab. Since the planar slab has volume $\sim R^{5/2}$, $ \| E_S f\|_{L^p(B_R)} \gtrsim B^{1/2} R^{\frac{5}{2p}}. $
Moreover, a typical point lies in $B$ different tubes in random directions (within the plane). If $B \ge K^{10 \epsilon}$, then almost every point will be $K^{-\epsilon}$-broad. Therefore, we get:
$$ \| \Br_{K^{-\epsilon}} Ef \|_{L^p(B_R)} \gtrsim B^{1/2} R^{\frac{5}{2p}}. $$
On the other hand, we estimate $\| f \|_2$ and $\| f \|_\infty$. Since the $f_T$ are essentially orthogonal and $f$ is a sum of $B R^{1/2}$ functions $f_T$, and $\| f_T \|_2^2 \sim R$, we get
$$ \| f \|_2 \sim B^{1/2} R^{3/4}. $$
Also,
$$ \| f \|_\infty \le B \max_T \|f_T \|_\infty \sim B R. $$
The most interesting case for the moment is $B \sim K^{10 \epsilon}$. In this case, $B$ is a constant independent of $R$.
If $ \| \Br_{K^{-\epsilon}} Ef \|_{L^p(B_R)} \le C_\epsilon R^{\epsilon} \| f\|_2^{12/13} \| f \|_\infty^{1/13}$, then a direct computation shows that $p \ge 13/4 = 3.25$. This shows that the exponent $3.25$ in Theorem \ref{introthm2} is sharp, given the right-hand side in the inequality.
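To spell out the direct computation: substituting the lower bound for $\| \Br_{K^{-\epsilon}} Ef \|_{L^p(B_R)}$ and the values of $\| f \|_2$ and $\| f \|_\infty$ into the proposed inequality gives
$$ B^{1/2} R^{\frac{5}{2p}} \lesssim C_\epsilon R^{\epsilon} \left( B^{1/2} R^{3/4} \right)^{12/13} \left( B R \right)^{1/13}. $$
Since $B$ is a constant, letting $R \rightarrow \infty$ and then $\epsilon \rightarrow 0$, the powers of $R$ force
$$ \frac{5}{2p} \le \frac{3}{4} \cdot \frac{12}{13} + \frac{1}{13} = \frac{10}{13}, \qquad \textrm{i.e. } \ p \ge \frac{13}{4} = 3.25. $$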
It might be possible to get a smaller exponent $p$ by weighting $\| f \|_\infty$ more heavily. For instance, the following estimate is consistent with the planar example and appears plausible to me:
$$ \| \Br_{K^{-\epsilon}} Ef \|_{L^3(B_R)} \le C_\epsilon R^{\epsilon} \| f\|_2^{2/3} \| f \|_\infty^{1/3}. $$
A second example involves a degree 2 algebraic surface called a regulus. This example was pointed out to me by Joshua Zahl. An example of a regulus is the surface $z = xy$. The key feature of a regulus is that it is doubly ruled, meaning that every point lies in two lines in the surface. The surface $z=xy$ contains two families of lines: ``vertical lines'' of the form $x=a$, $z = ay$; and ``horizontal lines'' of the form $y=b$, $z = b x$. Each point of the regulus lies in one line from each family. If we want to work in a ball of radius $R$, it is natural to consider a rescaled surface defined by $z/R = (x/R) (y/R)$. Instead of a planar slab, we consider the $R^{1/2}$-neighborhood of this surface in $B_R$. This neighborhood contains two families of tubes, corresponding to the horizontal and vertical lines. We can take $R^{1/2}$ ``horizontal tubes'', and $R^{1/2}$ ``vertical tubes'', all of radius $R^{1/2}$ and length $R$, so that each point lies in at least one horizontal tube and at least one vertical tube. For each tube $T$, we choose $f_T$ as above so that $|E_S f_T| \gtrsim 1$ on $T$, and so that $\| f_T \|_2 \sim R^{1/2}$ and $\| f_T \|_\infty \sim R$, and we choose $f = \sum_T f_T$.
The computations of $\| E_S f\|_{L^p(B_R)}$ and $\| f \|_2$ and $\| f \|_\infty$ are all the same as in the planar example. The points in the slab around the regulus are approximately $1/2$-broad. Because $1/2$ is larger than $K^{-\epsilon}$, this example is not directly relevant to Theorem \ref{introthm2}, but I think it is morally relevant. (It is a sharp example for the bilinear restriction estimate in \cite{T1}.)
These two examples may hint that low-degree polynomial surfaces are relevant to the restriction problem and that if $\| E_S f \|_{L^p(B_R)}$ is large, then there should be a low degree surface where many of the wave packets $E_S f_T$ cluster. These two low degree examples - planes and reguli - are also relevant in some incidence geometry problems about lines in $\mathbb{R}^3$. We consider one such problem in the next subsection and show how to study it using polynomial partitioning.
\subsection{Polynomial partitioning in incidence geometry}
In this section, we demonstrate how polynomial partitioning works in incidence geometry by proving a simple theorem. This proof will serve as a model for the proof of Theorem \ref{introthm2}.
Let us first formulate a question about lines in $\mathbb{R}^3$. We start with a naive question: how many 2-rich points can be formed by $L$ lines in $\mathbb{R}^3$? The answer is ${L \choose 2}$, which can happen if all the lines lie in a plane. What if we forbid this simple answer by adding a rule that at most $10$ of the $L$ lines lie in any plane? Can we still have $\sim L^2$ 2-rich points, or does the number drop off sharply? The answer is that there can still be $\sim L^2$ 2-rich points: all the lines may lie in a regulus, such as the surface $z= xy$ discussed in the last subsection. Taking $L/2$ vertical lines and $L/2$ horizontal lines, we get $L^2/4$ 2-rich points. What if we forbid this example also by adding a rule that not too many lines lie in any plane or degree 2 surface? We have now arrived at the following question:
If $\frak L$ is a set of $L$ lines in $\mathbb{R}^3$ with at most $S$ lines in any plane or degree 2 algebraic surface, how big can $|P_2(\frak L)|$ be?
Katz and the author solved this problem in the range $S \ge L^{1/2}$ in \cite{GK}. (The problem is still open for small values of $S$, such as $S = 10$.)
\begin{theorem} (See Theorem 2.10 in \cite{GK}) If $\frak L$ is a set of $L$ lines in $\mathbb{R}^3$ with at most $S$ lines in any plane or degree 2 algebraic surface, then
$$ |P_2(\frak L) | \lesssim S L + L^{3/2}. $$
\end{theorem}
\noindent (If $S \ge L^{1/2}$, then the $SL$ term dominates. It is possible to get $\sim S L$ 2-rich points by choosing $L / S$ planes, and putting $S$ lines in each plane. If $S \le L^{1/2}$, then the $L^{3/2}$ term dominates. It is unknown whether this estimate is sharp.)
In order to explain how to use polynomial partitioning, we prove a weak version of this theorem.
\begin{theorem} \label{incgeomintro} For any $\epsilon > 0$, there is a degree $D$ so that the following holds.
Suppose that $\frak L$ is a set of $L$ lines in $\mathbb{R}^3$ with at most $S$ lines in any algebraic surface of degree at most $D$. Then
$$ |P_2(\frak L)| \le C(\epsilon, S) L^{(3/2) + \epsilon}.$$
\end{theorem}
\noindent (This theorem is mostly interesting for small $S$. In this case, the final estimate is nearly as good as the best known estimate.)
\begin{proof}
The proof goes by induction on $L$. We apply the polynomial partitioning theorem, Theorem \ref{partcomb}, to the set $P_2(\frak L)$, using polynomials of degree at most $D$. (We will choose the value of $D = D(\epsilon)$ below.) We let $O_i$ be the components of $\mathbb{R}^3 \setminus Z(P)$. Each $O_i$ contains $\lesssim D^{-3} |P_2(\frak L)|$ points of $P_2(\frak L)$. By a classical theorem of Milnor, \cite{M}, the number of cells $O_i$ is $\lesssim D^3$.
If at least half of the points of $P_2(\frak L)$ are in the union of the cells, then we will use induction to study the contribution of each cell. In this case, there must be $\sim D^3$ cells $O_i$ each containing $\sim D^{-3} |P_2(\frak L)|$ points of $P_2(\frak L)$. A crucial fact about polynomials that makes them useful in this setting is that a line can intersect $Z(P)$ in at most $D$ points, unless it lies in $Z(P)$. Therefore, each line of $\frak L$ can enter at most $D+1$ of the cells $O_i$. Since the total number of (line, cell) incidences is then at most $(D+1) L$, we can find a cell $O_i$ that intersects $\lesssim D^{-2} L$ lines and contains $\sim D^{-3} |P_2(\frak L)|$ points. Let $\frak L_i$ be the set of lines of $\frak L$ that enter this cell $O_i$. Applying induction to bound the 2-rich points of $\frak L_i$, we get the following estimates:
$$ | P_2 (\frak L) | \lesssim D^3 | P_2 (\frak L_i) | \le D^3 C(\epsilon, S) | \frak L_i| ^{(3/2) + \epsilon} \lesssim C(\epsilon, S) D^3 ( D^{-2} L)^{(3/2) + \epsilon}.$$
Because of the exponent $(3/2) + \epsilon$, the total power of $D$ is $D^{3} \cdot (D^{-2})^{(3/2) + \epsilon} = D^{-2 \epsilon}$. In total we get:
$$ |P_2(\frak L)| \le (C D^{-2 \epsilon}) C(\epsilon, S) L^{(3/2) + \epsilon}, $$
\noindent where $C$ is an absolute constant. We now choose $D = D(\epsilon)$ sufficiently large so that $C D^{-2 \epsilon} < 1$, and the induction closes.
If the majority of the points of $P_2(\frak L)$ lie in $Z(P)$, then we estimate $|P_2(\frak L)|$ directly.
We let $\frak L_Z \subset \frak L$ be the set of lines of $\frak L$ that are contained in $Z(P)$. Each line of $\frak L \setminus \frak L_Z$ intersects $Z(P)$ in at most $D$ points.
Therefore, there are at most $DL $ points of $P_2(\frak L) \cap Z(P)$ that involve a line from $\frak L \setminus \frak L_Z$. Finally we have to estimate $| P_2(\frak L_Z)|$. By assumption, any algebraic surface of degree at most $D$ contains at most $S$ lines of $\frak L$, and so $| \frak L_Z| \le S$. Therefore, $| P_2(\frak L_Z)| \le S^2$. If the majority of the points of $P_2(\frak L)$ lie in $Z(P)$, then we have $|P_2(\frak L)| \le 2 (DL + S^2)$. By choosing $C(\epsilon, S)$ sufficiently large, this is at most $C(\epsilon, S) L^{(3/2) + \epsilon}$. \end{proof}
To summarize, we estimate $|P_2(\frak L)|$ by breaking it into three contributions: the contributions from the cells $O_i$, the contributions from lines passing through $Z(P)$, and the contribution of lines in $Z(P)$. We bound the contribution of the cells by induction, using that each cell contributes roughly equally and that each line can enter at most $\sim D$ cells.
We bound the contribution of lines passing through $Z(P)$ using the fact that each line can only intersect $Z(P)$ in at most $D$ points. Finally, we bound the contribution of lines lying in $Z(P)$ using the assumption that not too many lines lie in $Z(P)$. We also note that this last contribution is a 2-dimensional problem, which makes it simpler than the original problem.
In the next subsection, we will explain how to apply the polynomial partitioning approach to the restriction problem, and we will again see these three contributions.
\subsection{Polynomial partitioning and the restriction problem}
Now we're ready to start discussing polynomial partitioning and the restriction problem. We will use the following version of polynomial partitioning, which is also a direct corollary of the Stone-Tukey ham sandwich theorem:
\begin{theorem} \label{partintro} Suppose that $W \ge 0$ is a (non-zero) $L^1$ function on $\mathbb{R}^n$. Then for any degree $D \ge 1$, we can find a non-zero polynomial $P$ of degree at most $D$ so that $\mathbb{R}^n \setminus Z(P)$ is a union of $\sim_n D^n$ disjoint cells $O_i$, and so that all the integrals $\int_{O_i} W$ are equal.
\end{theorem}
We will give a detailed sketch of the proof of Theorem \ref{introthm2}. In the introduction, we will write $\Br Ef$ for $\Br_\alpha Ef$ where $\alpha$ is approximately $K^{-\epsilon}$ but may change a little during the argument.
We want to estimate the integral $\int_{B_R} \Br Ef^{3.25}$ for a large radius $R$. We apply the partitioning theorem to the function $\chi_{B_R} \Br Ef^{3.25}$, with a degree $D$ that we will choose below. By Theorem \ref{partintro}, we can find a polynomial $P$ of degree at most $D$ so that $\mathbb{R}^3 \setminus Z(P)$ is a disjoint union of $\sim D^3$ cells $O_i$, and for each $i$
\begin{equation} \label{equationequi}
\int_{B_R \cap O_i} \Br Ef^{3.25} \sim D^{-3} \int_{B_R} \Br Ef^{3.25}.
\end{equation}
In the combinatorial setting, it was crucial to observe that each line can enter at most $D+1$ cells $O_i$. In some sense, the tubes $T$ are analogous to lines, but since the tubes have some finite width, it may happen that a tube $T$ enters far more than $D$ cells - a tube $T$ may even enter all of the cells. Let $W$ be the neighborhood of $Z(P)$ with thickness equal to the radius of a tube $T$. Define $O_i' := (O_i \cap B_R) \setminus W$. If a tube $T$ enters $O_i'$, then the central line of $T$ must enter $O_i$. Therefore, each tube $T$ intersects $O_i'$ for at most $D+1$ values of $i$.
We now break the integral that we care about, $\int_{B_R} \Br Ef^{3.25}$, into pieces coming from the cells $O_i'$ and a piece coming from the cell wall $W$. (This decomposition is analogous to considering the points of $P_2(\frak L)$ in the cells $O_i$ and the points in $Z(P)$.) Suppose first that the contribution from the cells dominates the integral. In this case, there must be $\sim D^3$ cells $O_i'$ so that for each of them
\begin{equation} \label{equationequi'}
\int_{B_R \cap O_i'} \Br Ef^{3.25} \sim D^{-3} \int_{B_R} \Br Ef^{3.25}.
\end{equation}
Since $Ef_T$ decays very sharply outside of $T$, on the set $O_i'$, $Ef$ is essentially equal to the sum of $Ef_T$ over all the $T$ that intersect $O_i'$. We let $\mathbb{T}_i$ be the set of all the tubes $T$ (in any $\mathbb{T}(\theta)$) which intersect $O_i'$, and we define $f_i = \sum_{T \in \mathbb{T}_i} f_T$. On $O_i'$, we essentially have $Ef = Ef_i$. We also essentially have $\Br Ef = \Br Ef_i$. Now we would like to estimate $\int_{O_i'} \Br Ef_i^{3.25}$ by using induction.
To set up the induction, we have to consider what we know about $f_i$. Theorem \ref{introthm2} involves $\| f \|_\infty$, but $\| f_i \|_\infty$ is not very well behaved. We don't have any way to show that $\| f_i \|_\infty$ is significantly smaller than $\| f \|_\infty$, and I think it may even be larger. Because the functions $f_T$ are essentially orthogonal, we get the following estimate about $f_i$: for each $\theta$, and each $i$,
\begin{equation} \label{f_Torthog}
\int_\theta |f_i|^2 \lesssim \int_\theta |f|^2.
\end{equation}
\noindent Moreover, because each tube enters $\lesssim D$ cells $O_i'$, the orthogonality of $f_T$ implies that
\begin{equation}
\sum_i \int_S |f_i|^2 \lesssim D \int_S |f|^2.
\end{equation}
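\noindent This follows from the (essential) orthogonality of the $f_T$ by counting tube-cell incidences:
$$ \sum_i \int_S |f_i|^2 \sim \sum_i \sum_{T \in \mathbb{T}_i} \| f_T \|_2^2 = \sum_T \# \{ i : T \in \mathbb{T}_i \} \cdot \| f_T \|_2^2 \lesssim D \sum_T \| f_T \|_2^2 \sim D \int_S |f|^2, $$
\noindent where the inequality in the middle uses the fact that each tube intersects $O_i'$ for at most $D+1$ values of $i$.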
To make the induction work, we need to prove a stronger theorem that involves $\max_\theta \| f \|_{L^2(\theta)}$ instead of $\| f \|_\infty$. It's convenient to write our inequality in terms of the average of $|f|^2$ over a cap $\theta$, which we write as $\oint_{\theta} |f|^2$.
\begin{theorem} \label{introbroad12/13} Let $S$ be the truncated paraboloid.
For any $\epsilon > 0$, there is a large constant $K = K(\epsilon)$ so that for every radius $R$ the following holds. If $f: S \rightarrow \mathbb{C}$, and for every $R^{-1/2}$-cap $\theta$,
\begin{equation} \label{avgtheta1}
\oint_\theta |f|^2 \le 1,
\end{equation}
then
\begin{equation} \label{12/13equation}
\int_{B_R} \Br Ef^{3.25} \le C_\epsilon R^{\epsilon} \left( \int_S |f|^2 \right)^{(3/2) + \epsilon}.
\end{equation}
\end{theorem}
This theorem implies Theorem \ref{introthm2} by a direct computation.
(Recall that in the introduction $\Br Ef$ stands for $\Br_\alpha Ef$ with $\alpha \sim K^{- \epsilon}$.)
Theorem \ref{introbroad12/13} is slightly stronger than Theorem \ref{introthm2}, because it can happen that $\max_\theta \oint_\theta |f|^2 $ is much smaller than $\| f \|_\infty^2$. In particular, the planar example in the Examples section is sharp for Theorem \ref{introbroad12/13} with any value of $B \ge K^{10 \epsilon}$.
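To spell out the direct computation behind this implication (a sketch, with harmless factors absorbed by adjusting $\epsilon$ and $C_\epsilon$): set $M := \max_\theta \left( \oint_\theta |f|^2 \right)^{1/2}$ and apply Theorem \ref{introbroad12/13} to $f/M$, which obeys the normalization in equation \ref{avgtheta1}. Since $\Br E(f/M) = M^{-1} \Br Ef$, this gives
$$ \int_{B_R} \Br Ef^{3.25} \le C_\epsilon R^{\epsilon} M^{3.25} \left( M^{-2} \int_S |f|^2 \right)^{(3/2) + \epsilon} = C_\epsilon R^{\epsilon} M^{\frac{1}{4} - 2 \epsilon} \left( \int_S |f|^2 \right)^{(3/2) + \epsilon}. $$
Taking $3.25$-th roots and using $M \le \| f \|_{L^\infty(S)}$, together with $3/3.25 = 12/13$ and $(1/4)/3.25 = 1/13$, we recover the estimate of Theorem \ref{introthm2}.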
We will see in the proof that the exponent $(3/2) + \epsilon$ appears here for the same reason that it appeared in the incidence geometry theorem from the last subsection. Once we have fixed the exponent $(3/2) + \epsilon$ on the right-hand side, 3.25 is the smallest possible exponent on the left-hand side, because of the planar example.
We now sketch the proof of Theorem \ref{introbroad12/13}. To estimate $\int_{B_R} \Br Ef^{3.25}$, we break $B_R$ into cells as above. Suppose that the integral is dominated by the contribution from the cells. Then we have $\sim D^3$ cells $O_i'$ so that
$$ \int_{B_R} \Br Ef^{3.25} \lesssim D^3 \int_{O_i'} \Br Ef_i^{3.25}. $$
We can choose one of these cells $O_i'$ so that $\int_S |f_i|^2 \lesssim D^{-2} \int_S |f|^2$, and $\max_\theta \oint_\theta |f_i|^2 \lesssim \max_\theta \oint_\theta |f|^2 \le 1$. By induction, we can assume that Theorem \ref{introbroad12/13} holds for $f_i$, giving
$$ \int_{B_R} \Br Ef^{3.25} \le C D^3 C_\epsilon R^{\epsilon} \left( D^{-2} \int_S |f|^2 \right)^{(3/2) + \epsilon} = C D^{-2 \epsilon} \cdot (\textrm{Right-hand side of equation \ref{12/13equation}}). $$
\noindent We choose $D$ large enough that $C D^{- 2 \epsilon} \le 1$ and the induction closes.
Next we consider the case when our integral is dominated by the contribution from $W$, the region near the algebraic surface $Z(P)$. As in the combinatorial case, there are two kinds of tubes: tubes that pass through $W$ transversally and tubes that lie in $W$. Roughly, we will show that a tube $T$ can only pass through $W$ transversally in $\lesssim \Poly(D)$ places, and we will use this estimate to bound the transverse tubes using induction. The tubes that lie in $W$ over a long stretch will be called tangential tubes. The contribution of the tangential tubes is morally a 2-dimensional problem - similar to the restriction problem in $\mathbb{R}^2$. We will bound the tangential contribution by using C\'ordoba's $L^4$ argument from \cite{C}.
Here is a little bit more detail. We pick a small parameter $\delta$ so that $R^\delta$ is much bigger than $\Poly(D)$ but still small compared to $R^\epsilon$. Now we divide $B_R$ into $\sim R^{3 \delta}$ smaller balls $B_j$ of radius $\sim R^{1 - \delta}$. For each $j$, we define $\mathbb{T}_{j, trans}$ to be the set of tubes $T$ that intersect $W \cap B_j$ ``transversally''. We let $\mathbb{T}_{j, tang}$ be the set of tubes $T$ that intersect $W \cap B_j$ ``tangentially''. We will postpone the precise definition to the body of the paper.
To bound the transverse tubes, we first show that any tube $T$ lies in $\mathbb{T}_{j, trans}$ for at most $\Poly(D)$ different balls $B_j$. Note that the tube $T$ intersects $\sim R^\delta$ balls $B_j$, and $R^\delta$ is far larger than $\Poly(D)$. We define $f_{j, trans} = \sum_{T \in \mathbb{T}_{j, trans}} f_T$. If the transverse terms dominate, then
$$ \int_{B_R} \Br Ef^{3.25} \lesssim \sum_j \int_{B_j} \Br Ef_{j, trans}^{3.25}. $$
\noindent Since $B_j$ is smaller than $B_R$, we can assume by induction on the radius that Theorem \ref{introbroad12/13} holds for each integral on the right-hand side. The average of $|f_{j, trans}|^2$ on a cap of radius $(R^{1 - \delta})^{-1/2}$ is $\lesssim$ the maximum of $\oint_{\theta} |f_{j,trans}|^2$ on a $R^{-1/2}$-cap $\theta$, and $\max_\theta \oint_\theta |f_{j, trans}|^2 \lesssim \max_\theta \oint_\theta |f|^2 \le 1$.
Moreover, since each tube $T$ lies in only $\Poly(D)$ sets $\mathbb{T}_{j, trans}$, we get that $\sum_j \int_S |f_{j, trans}|^2 \le \Poly(D) \int_S |f|^2$. Plugging this in, we get
$$ \int_{B_R} \Br Ef^{3.25} \le \Poly(D) C_\epsilon (R^{1-\delta})^\epsilon \left( \int_S |f|^2 \right)^{(3/2) + \epsilon} = $$
$$ = \Poly(D) R^{-\delta \epsilon} \cdot (\textrm{Right-hand side of equation \ref{12/13equation}}). $$
\noindent As long as $\Poly(D) R^{-\delta \epsilon} \le 1$, the induction closes. We can assume that $R$ is very large, and we choose $D, \delta$ in such a way that this factor is at most 1. This method of dealing with the transverse tubes is based on the ``induction-on-scales'' argument from \cite{W1} and \cite{T2}.
Finally, we discuss the contribution of the tangential tubes. We estimate this contribution directly without using induction. It might be helpful for the reader to imagine the planar example during this discussion. In the planar example, the contribution of the tangential tubes would dominate the integral, and the bounds that we prove are all sharp in the planar example.
One key point is that the set of tangential tubes $\mathbb{T}_{j, tang}$ cannot contain tubes of $\mathbb{T}(\theta)$ for every cap $\theta$. In fact, $\mathbb{T}_{j, tang}$ can only include contributions from roughly $R^{1/2}$ out of the $R$ caps $\theta$, as in the planar example.
Because we are estimating the broad part of $Ef$, we can reduce the tangential contribution to a bilinear-type estimate. We can choose $K^{-1}$-separated $K^{-1}$-caps $\tau_1$ and $\tau_2$, and it suffices to bound an integral of the form
\begin{equation}
\int_{W \cap B_j} |E f_{\tau_1, j, tang}|^{p/2} |Ef_{\tau_2, j, tang}|^{p/2},
\end{equation}
\noindent where $f_{\tau_1, j, tang}$ is the sum of $f_T$ where $T \in \mathbb{T}_{j, tang}$ and $\supp f_T \subset \tau_1$. The motivation for introducing broad points is to get a bilinear integral at this stage of the argument, instead of the linear integral $\int_{W \cap B_j} | E f_{j, tang}|^p$. Given our control of $f$, there are much better estimates for the bilinear integral than the linear one.
We are ultimately interested in $p = 3.25$, but we first prove bounds for $p=2$ and $p=4$ and then interpolate between them. When $p=2$ the estimate basically boils down to Plancherel. For $p=4$ we proceed as follows.
We divide $W \cap B_j$ into cubes $Q$ of side length $\sim R^{1/2}$. For each cube $Q$, the tubes in $\mathbb{T}_{j, tang}$ that go through $Q$ lie very close to a plane -- the plane is the tangent plane $T_x Z(P)$ for a point $x \in Z(P)$ near to $Q$. The angle between the tubes $T$ and the plane is roughly $R^{-1/2}$. Once we have reduced to the contributions of these coplanar tubes, the problem is essentially 2-dimensional. As observed in \cite{T2}, the integral $\int_{Q} |E f_{\tau_1, j, tang}|^{2} |Ef_{\tau_2, j, tang}|^{2}$ can be controlled by the $L^4$ argument from \cite{C}.
C\'ordoba's argument gives a square root cancellation estimate. Recall that $|Ef_T|$ is morally well-modelled by $R^{-1/2} \| f_T\|_2 \chi_T$. The $L^4$ argument gives the following inequality:
\begin{equation}
\int_{Q} |E f_{\tau_1, j, tang}|^{2} |Ef_{\tau_2, j, tang}|^{2} \lesssim \int_Q \left( \sum_{T_1 \in \mathbb{T}_{\tau_1, j, tang}} R^{-1} \| f_{T_1} \|^2_2 \chi_{T_1} \right) \left( \sum_{T_2 \in \mathbb{T}_{\tau_2, j, tang}} R^{-1} \|f_{T_2} \|_2^2 \chi_{T_2} \right) .
\end{equation}
\noindent Summing over $Q$, it's now straightforward to get a bound for $\int_{W \cap B_j} |E f_{\tau_1, j, tang}|^{2} |Ef_{\tau_2, j, tang}|^{2}$ and then for $ \int_{W \cap B_j} |E f_{\tau_1, j, tang}|^{p/2} |Ef_{\tau_2, j, tang}|^{p/2}$ with any $2 \le p \le 4$. At this stage, we can use the fact that $\mathbb{T}_{j, tang}$ only includes tubes from roughly $R^{1/2}$ caps $\theta$. The resulting estimates all match the planar example, so they are sharp.
\subsection{Outline of the paper}
In Section 1, we review polynomial partitioning, deducing the partitioning theorem that we need from the Borsuk-Ulam theorem in topology. In Section 2, we review background related to the restriction problem. In particular we review wave packet decompositions and parabolic scaling. In this section, we also review the idea of broad points and explain how to deduce $L^p$ estimates for $Ef$ from $L^p$ estimates for the broad part of $Ef$. In Sections 3 and 4, we prove our main theorem. Section 3 contains the harmonic analysis part of the argument. We also need some geometric estimates about the way tubes interact with an algebraic surface. We prove these estimates in Section 4, using some simple algebraic geometry and differential geometry.
\section{Review of polynomial partitioning}
In this section, we review polynomial partitioning and prove the result that we use. We will need modifications of the results in the literature, so we give self-contained proofs. Polynomial partitioning is based on the Stone-Tukey ham sandwich theorem from topology, and we begin by recalling it.
For any function $f$, we write $Z(f)$ for the zero-set of $f$: $Z(f) := \{ x | f(x) = 0 \}$.
\begin{theorem} \label{STham} (Stone-Tukey, \cite{ST}) Suppose that $V$ is a vector space of continuous functions on $\mathbb{R}^n$. Suppose that for each non-zero element $f \in V$, the set $Z(f) \subset \mathbb{R}^n$ has measure zero.
Let $W_1, ..., W_N$ be $L^1$-functions on $\mathbb{R}^n$, and suppose that $N < \Dim V$. Then there exists a non-zero function $v \in V$ so that for each $W_j$, $j= 1, ..., N$,
$$ \int_{\{ v > 0 \} } W_j = \int_{ \{ v < 0 \} } W_j. $$
\end{theorem}
In our application, $V$ will be the vector space of polynomials on $\mathbb{R}^n$ of degree at most $D$. The dimension of this space is ${D + n \choose n} \sim_n D^n$. It's straightforward to check that for any non-zero polynomial $P$, $Z(P)$ has measure zero. Therefore, Theorem \ref{STham} has the following corollary:
\begin{cor} \label{polyham} (Polynomial ham sandwich theorem) If $W_1, ..., W_N$ are $L^1$-functions on $\mathbb{R}^n$, then there exists a non-zero polynomial $P$ of degree $\le C_n N^{1/n}$ so that for each $W_j$,
$$ \int_{\{ P > 0 \} } W_j = \int_{ \{ P < 0 \} } W_j. $$
\end{cor}
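For concreteness, here is the standard dimension count behind the degree bound:
$$ \Dim V = { D + n \choose n } = \prod_{j=1}^{n} \frac{D+j}{j} \ge \left( \frac{D}{n} \right)^n, $$
\noindent so choosing $D$ to be the smallest integer with $(D/n)^n > N$ gives $D \le C_n N^{1/n}$ and $N < \Dim V$, and Theorem \ref{STham} applies to the space of polynomials of degree at most $D$.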
The proof of Theorem \ref{STham} is an elegant application of the Borsuk-Ulam theorem, which we now recall.
\begin{theorem} (Borsuk-Ulam) If $F: S^N \rightarrow \mathbb{R}^N$ is a continuous function obeying the antipodal condition $F(- v) = - F(v)$, then there exists a $v \in S^N$ with $F(v) = 0$.
\end{theorem}
The reader can find a proof of the Borsuk-Ulam theorem in \cite{GP} or \cite{Ma}.
We give the proof of Theorem \ref{STham} using the Borsuk-Ulam theorem:
\begin{proof}
Without loss of generality, we can assume that $\Dim V = N+1$, and we can identify $V$ with $\mathbb{R}^{N+1}$, so that $S^N \subset V \setminus \{ 0 \}$. We define a function $F: V \setminus \{ 0 \} \rightarrow \mathbb{R}^N$ by setting the $j^{th}$ coordinate to
$$ F_j(v) := \int_{ \{ v > 0 \} } W_j - \int_{ \{ v < 0 \} } W_j . $$
It follows immediately that $F(-v) = - F(v)$. Moreover, if $F(v) = 0$, then $v$ obeys the conclusion of the ham sandwich theorem. It is also true that the function $F$ is continuous, which we will check below. Then the Borsuk-Ulam theorem implies that there exists a $v \in S^N \subset V \setminus \{ 0 \}$ so that $F(v) = 0$.
It just remains to check the continuity of the functions $F_j$ on $V \setminus \{ 0 \}$. This is a measure theory exercise. Suppose that $v_k \rightarrow v$ in $V \setminus \{ 0 \}$. Let $A_k \subset \mathbb{R}^n$ be the set of points where the sign of $v_k$ is different from the sign of $v$. Then
$$ | F_j(v_k) - F_j (v) | \le \int_{A_k} |W_j|. $$
We know that $v_k \rightarrow v$ pointwise. Therefore, $\cap_{k_0} \cup_{k > k_0} A_k \subset Z(v)$. By the dominated convergence theorem,
$$ \lim_{k_0 \rightarrow \infty} \int_{\cup_{k > k_0} A_k } |W_j| \le \int_{Z(v)} |W_j| = 0. $$
This proves that $\lim_{k \rightarrow \infty} |F_j (v_k) - F_j(v)| = 0$, showing that $F_j$ is continuous on $V \setminus \{ 0 \}$. \end{proof}
Polynomial partitioning is a corollary of the ham sandwich theorem. It was proven in \cite{GK} in a discrete setting. Here we give the same argument in a continuous setting.
\begin{theorem} \label{polypart} Suppose that $W \ge 0$ is a (non-zero) $L^1$ function on $\mathbb{R}^n$. Then for each $D$ there is a non-zero polynomial $P$ of degree at most $D$ so that $\mathbb{R}^n \setminus Z(P)$ is a union of $\sim D^n$ disjoint open sets $O_i$, and the integrals $\int_{O_i} W$ are all equal.
\end{theorem}
\begin{proof} Using Corollary \ref{polyham}, we construct a polynomial $P_1$ so that
$$ \int_{\{ P_1 > 0 \} } W = \int_{ \{ P_1 < 0 \} } W = 2^{-1} \int W. $$
Next we let $W_+ = \chi_{ \{ P_1 > 0 \} } W$ and $W_- = \chi_{ \{ P_1 < 0 \} } W$, and we use Corollary \ref{polyham} to find a polynomial $P_2$ so that for $j = + $ or $-$,
$$ \int_{\{ P_2 > 0 \} } W_j = \int_{ \{ P_2 < 0 \} } W_j = 2^{-2} \int W. $$
We have now cut $\mathbb{R}^n$ into four cells determined by the signs of $P_1$ and $P_2$. The integral of $W$ on each cell is equal to $2^{-2} \int W$. We next construct a polynomial $P_3$ that bisects $W$ restricted to each of these four cells.
Continuing inductively, we construct polynomials $P_1, ..., P_s$, for a number $s$ that we choose below. We let $P = \prod P_k$. The sign conditions of the polynomials cut $\mathbb{R}^n \setminus Z(P)$ into $2^s$ cells, $O_i$. The integral of $W$ on each of these cells is equal to $2^{-s} \int W$. Corollary \ref{polyham} tells us that the degree of $P_k$ is $\lesssim_n 2^{k/n}$. Therefore, the degree of $P$ is $\le C_n 2^{s/n}$. Now we choose $s$ so that $C_n 2^{s/n} \in [D/2, D]$, guaranteeing that the degree of $P$ is at most $D$. The number of cells $O_i$ is $2^s \sim_n D^n$.
\end{proof}
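The degree bookkeeping in this proof rests on the fact that the geometric series $\sum_{k \le s} 2^{k/n}$ is dominated by a constant multiple of its last term. A quick numerical check (with illustrative constants, not from the paper):

```python
# Degree bookkeeping from the proof: deg(P_k) <= C * 2^(k/n), so
# deg(P) = sum_k deg(P_k) is a geometric series dominated by its last
# term, giving deg(P) <= C_n * 2^(s/n).
n, C = 3, 1.0

def total_degree(s):
    return sum(C * 2 ** (k / n) for k in range(1, s + 1))

for s in (10, 20, 30):
    last_term = C * 2 ** (s / n)
    print(s, round(total_degree(s), 1), round(total_degree(s) / last_term, 3))
# The ratio approaches 1 / (1 - 2^(-1/n)), a constant depending only on n.
```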
We say that a polynomial $P$ is non-singular if $\nabla P (x) \not= 0$ for each point $x \in Z(P)$. If $P$ is non-singular, then it follows that $Z(P)$ is a smooth hypersurface. For technical reasons, it is helpful in our arguments later to use non-singular polynomials. We next prove versions of the ham sandwich theorem and the partitioning theorem with non-singular polynomials. We recall the standard fact that non-singular polynomials are dense. More precisely, if $\Poly_D(\mathbb{R}^n)$ denotes the vector space of polynomials on $\mathbb{R}^n$ of degree at most $D$, then
\begin{lemma} Non-singular polynomials are dense in $\Poly_D(\mathbb{R}^n)$ for any $D, n$. Moreover, the singular polynomials have measure zero.
\end{lemma}
\begin{proof} Consider the map $E: \mathbb{R}^n \times \Poly_D(\mathbb{R}^n) \rightarrow \mathbb{R} \times \Poly_D(\mathbb{R}^n)$, given by $E(x, Q) = (Q(x), Q)$. The map $E$ is $C^\infty$ smooth, and so by Sard's theorem, the critical values of $E$ have measure zero.
Suppose that $(h, Q)$ is a regular value of $E$. Then we claim that $Q - h$ is a non-singular polynomial. Note that $(Q-h)(x) = 0$ if and only if $(x, Q) \in E^{-1}(h, Q)$. Since $(h, Q)$ is a regular value, we know that $dE_{x, Q}$ is surjective. But $dE_{x, Q} = (\nabla Q, id)$, where $id: \Poly_D(\mathbb{R}^n) \rightarrow \Poly_D(\mathbb{R}^n)$ is the identity map. Therefore, if $(Q-h)(x) = 0$, then $\nabla (Q-h)(x) = \nabla Q (x) \not= 0 $.
We have seen that for almost every $(h, Q)$, $Q-h$ is non-singular. By Fubini's theorem it follows that the set of singular polynomials has measure zero in $\Poly_D(\mathbb{R}^n)$, and so the non-singular polynomials are dense.
\end{proof}
Using the density of non-singular polynomials, we can prove a version of the polynomial ham sandwich theorem with non-singular polynomials, weakening perfect bisections to approximate bisections.
\begin{cor} \label{polyhamns} Suppose that $W_1, ..., W_N \ge 0$ are non-zero functions in $L^1(\mathbb{R}^n)$. Then for any $\delta > 0$, there is a non-singular polynomial $P$ so that for each $W_j$
$$(1 - \delta) \int_{ \{ P < 0 \} } W_j \le \int_{ \{ P > 0 \} }W_j \le (1 + \delta) \int_{ \{ P < 0 \}} W_j. $$
\end{cor}
\begin{proof} Let $P_0$ be a non-zero polynomial with $ \int_{\{ P_0 > 0 \} } W_j = \int_{ \{ P_0 < 0 \} } W_j. $ Then let $P_k$ be a sequence of non-singular polynomials approaching $P_0$. By the continuity argument in the proof of Theorem \ref{STham}, we have $\lim_{k \rightarrow \infty} \int_{ \{ P_k > 0 \} } W_j = \int_{ \{ P_0 > 0 \} } W_j$, and so for large $k$, $P_k$ obeys the desired inequality.
\end{proof}
Finally, using Corollary \ref{polyhamns} in place of Corollary \ref{polyham} in the proof of Theorem \ref{polypart}, we get a partitioning result involving non-singular polynomials.
\begin{cor} \label{partitns} Let $W$ be a non-negative $L^1$ function on $\mathbb{R}^n$. Then for any $D$, there is a non-zero polynomial $P$ of degree at most $D$ so that $\mathbb{R}^n \setminus Z(P)$ is a disjoint union of $\sim D^n$ cells $O_i$, and the integrals $\int_{O_i} W$ agree up to a factor of 2. Moreover, the polynomial $P$ is a product of non-singular polynomials.
\end{cor}
\section{Preliminaries}
\subsection{Statement of results}
We will work with surfaces $S$ that are nearly paraboloids. The basic example is the truncated paraboloid defined by the equation $\omega_3 = \omega_1^2 + \omega_2^2$, $(\omega_1, \omega_2) \in B^2_1(0)$. The reader may want to focus on this example throughout.
Suppose that $S \subset \mathbb{R}^3$ is a smooth compact surface given as the graph of a function $h: B^2_1(0) \rightarrow \mathbb{R}$ which satisfies the following conditions for some large $L$:
\begin{cond} \label{CondSnice}
\begin{enumerate}
\item $0 < 1/2 \le \partial^2 h \le 2$.
\item $0 = h(0) = \partial h(0)$.
\item $h$ is $C^L$, and
\item for $3 \le l \le L$, $\| \partial^l h \|_{C^0} \le 10^{-9}$.
\end{enumerate}
\end{cond}
\begin{theorem} \label{mainthm} For any $\epsilon > 0$, there is some $L$ so that if $S$ obeys Conditions \ref{CondSnice} with $L$ derivatives, then for any radius $R$, the extension operator $E_S$ obeys the inequality
$$ \| E_S f \|_{L^{3.25}(B_R)} \le C_\epsilon R^\epsilon \| f \|_\infty. $$
\end{theorem}
By Tao's $\epsilon$-removal theorem \cite{T1}, we get the following corollary:
\begin{cor} \label{maincor} If $S$ obeys Conditions \ref{CondSnice}, then for all $p > 3.25$,
$$ \| E_S f \|_{L^p(\mathbb{R}^3)} \le C(p) \| f \|_\infty. $$
\end{cor}
A little later, at the end of Subsection \ref{parscal}, we will see that
the case of a general compact surface with positive second fundamental form can be reduced to the case of a surface obeying Conditions \ref{CondSnice}, so that Theorem \ref{mainintro} follows quickly from Corollary \ref{maincor}.
In coordinates, we have $\omega_3 = h(\omega_1, \omega_2) = h(\vec \omega)$. We write $\vec \omega \in \mathbb{R}^2$ for the first two coordinates of $\omega \in \mathbb{R}^3$.
\subsection{Broad points}
Let $S$ be as above. We divide $S$ into $\sim K^2$ caps $\tau$ of diameter $\sim K^{-1}$. Let $f_\tau$ denote the restriction of $f$ to $\tau$.
For $\alpha \in (0,1)$, we say that $x$ is $\alpha$-broad for $Ef$ if:
$$ \max_{\tau} | Ef_\tau (x)| \le \alpha | E{f}(x)|. $$
\noindent We define $\Br_\alpha Ef (x)$ to be $|Ef(x)|$ if $x$ is $\alpha$-broad for $Ef$ and zero otherwise. We remark that the definition of $\Br_\alpha Ef(x)$ depends on $K$ and on the choice of the caps $\tau$. Roughly speaking, if a point $x$ is not broad, then $|Ef(x)|$ is comparable to $|Ef_\tau(x)|$ for some cap $\tau$, and we can deal with these points separately, by some induction on the size of caps.
We will prove the following estimate about $L^p$ norms of the broad part of $Ef$.
\begin{theorem} \label{broad12/13} For any $\epsilon > 0$, there exists $K = K(\epsilon), L = L(\epsilon)$ so that if
$S$ obeys conditions \ref{CondSnice} with $L$ derivatives, then for any radius $R$,
$$ \| \Br_{K^{- \epsilon}} Ef \|_{L^{3.25} (B_R)} \le C_\epsilon R^\epsilon \| f \|_2^{12/13} \| f \|_\infty^{1/13}. $$
\noindent Also, $\lim_{\epsilon \rightarrow 0} K(\epsilon) = + \infty$.
\end{theorem}
We can deduce Theorem \ref{mainthm} from Theorem \ref{broad12/13} using a parabolic scaling argument from \cite{BG} that we explain in the next subsection.
\subsection{Parabolic scaling} \label{parscal}
Suppose that $B^2_r(\vec \omega_0) \subset B^2_1$. We let $S_0 \subset S$ be the graph of $h$ over $B^2_r(\vec \omega_0)$. We can reduce the behavior of the operator $E_{S_0}$ on $B_R$ to the behavior of $E_{S_1}$ on a smaller ball, for a surface $S_1$ which is similar to the original $S$. If $S$ is a truncated paraboloid $\omega_3 = |\vec \omega|^2$, then $S_1$ will be a truncated paraboloid as well. This argument involves a change of coordinates which is essentially a parabolic rescaling.
We describe this change of coordinates. First we define $\tilde h$ to be $h$ minus its first-order Taylor expansion at $\vec \omega_0$:
\begin{equation} \label{tildeh}
\tilde h(\vec \omega) = h(\vec \omega) - (\vec \omega - \vec \omega_0) \cdot \partial h(\vec \omega_0) - h(\vec \omega_0).
\end{equation}
Next we parametrize $B^2_r(\vec \omega_0)$ by a coordinate $\vec \eta \in B^2(1)$:
$$\vec \omega =\vec \omega_0 + r \vec \eta. $$
Now we define the function $h_1$ by
\begin{equation} \label{h_1}
h_1 (\vec \eta) = r^{-2} \tilde h(\vec \omega) = r^{-2} \tilde h(\vec \omega_0 + r \vec \eta).
\end{equation}
We let $S_1$ be the graph of $h_1$. The surface $S_1$ maintains the good properties of $S$. If $h(\vec \omega) = | \vec \omega|^2$, then $h_1(\vec \eta) = |\vec \eta|^2$. If $h$ obeys Conditions \ref{CondSnice} with $L$ derivatives, then so does $h_1$. By equation \ref{tildeh}, we can check that
$$ 0 = \tilde h(\vec \omega_0) = \partial \tilde h(\vec \omega_0); \qquad \partial^2 \tilde h(\vec \omega) = \partial^2 h(\vec \omega). $$
Now using equation \ref{h_1}, we see that $0 = h_1(0) = \partial h_1(0)$. Also, because of the parabolic rescaling, we have for any indices $i,j$, $\partial^2_{ij} h_1 (\vec \eta) = \partial^2_{ij} h(\vec \omega_0 + r \vec \eta)$. In particular for all $\vec \eta \in B^2_1$,
$$ 1/2 \le \partial^2 h_1 \le 2. $$
The function $h_1$ is as smooth as $h$, and another nice feature is that for $l \ge 3$, the $l^{th}$ derivatives of $h_1$ are smaller than those of $h$. In particular, a direct calculation shows that for all $l \ge 2$,
$$ \| \partial^l h_1 \|_{C^0} = r^{l-2} \| \partial^l h \|_{C^0}. $$
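As a sanity check of these rescaling identities (a one-dimensional analogue with an invented profile $h$, not part of the argument), one can verify numerically that $h_1$ vanishes to second order at $0$ and that second derivatives are preserved:

```python
# One-dimensional analogue (illustrative) of the parabolic rescaling:
# tilde h subtracts the first-order Taylor expansion at w0, and
# h_1(eta) = r^{-2} * tilde h(w0 + r * eta).
r, w0 = 0.1, 0.3
h  = lambda w: w**2 + 0.01 * w**4                  # invented profile with h(0) = h'(0) = 0
dh = lambda w: 2 * w + 0.04 * w**3                 # its derivative
ht = lambda w: h(w) - (w - w0) * dh(w0) - h(w0)    # tilde h
h1 = lambda e: ht(w0 + r * e) / r**2               # rescaled profile on the unit ball

def d2(f, x, s=1e-4):
    """Central second difference."""
    return (f(x + s) - 2 * f(x) + f(x - s)) / s**2

# h_1 vanishes to second order at 0, and h_1''(eta) = h''(w0 + r*eta):
print(h1(0.0), d2(h1, 0.5), d2(h, w0 + r * 0.5))
```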
The following lemma connects the behavior of $E_{S_0}$ on $B_R$ to the behavior of $E_{S_1}$ on a smaller ball.
\begin{lemma} \label{parabrescal} Suppose that $h$ obeys Conditions \ref{CondSnice}. Let $S_0$ and $S_1$ be as above: $S_0$ is the graph of $h$ over a ball of radius $r$, and $S_1$ is its parabolic rescaling. If $E_{S_1}$ obeys the inequality
$$ \| E_{S_1} g \|_{L^p(B_{10 r R})} \le M \| g \|_{L^\infty(S_1)}, $$
then $E_{S_0}$ obeys the inequality
$$ \| E_{S_0} f \|_{L^p(B_R)} \le C r^{2 - \frac{4}{p}} M \| f \|_{L^\infty(S_0)}. $$
\end{lemma}
\begin{proof} Let $f \in L^\infty(S_0)$. We will express $E_{S_0} f$ using $E_{S_1}$.
$$ |E_{S_0} f(x) | = \left| \int_{S_0} e^{i \omega x} f(\omega) \dvol_{S_0} \right| . $$
Recall that we write $\vec \omega \in \mathbb{R}^2$ for the first two coordinates of $\omega \in \mathbb{R}^3$. Expressing the last integral in these coordinates, we get
$$ = \left| \int_{B^2_r(\vec \omega_0)} e^{i \vec \omega \cdot \vec x} e^{i h(\vec \omega) x_3} f |Jh| d \vec \omega \right|, $$
\noindent where $|Jh|$ is the Jacobian factor $(1 + |\nabla h|^2)^{1/2}$. Also, we write $\vec x$ for $(x_1, x_2)$. We rewrite this expression using $\tilde h$ and then using $h_1$.
$$ = \left| \int_{B^2_r(\vec \omega_0)} e^{i \vec \omega \cdot (\vec x + \partial h(\vec \omega_0) x_3)} e^{i \tilde h(\vec \omega) x_3} f |Jh| d \vec \omega \right| =$$
$$ = \left| \int_{B^2_1} e^{i \vec \eta \cdot r (\vec x + \partial h(\vec \omega_0) x_3)} e^{i h_1(\vec \eta) r^2 x_3} f |Jh| r^2 d \vec \eta \right|. $$
This expression is equal to $| E_{S_1} g(\bar x)|$ where
\begin{equation} \label{gdef}
g(\vec \eta) = f(\vec \omega_0 + r \vec \eta) r^2 |J h| |J h_1|^{-1},
\end{equation}
\begin{equation} \label{barxdef}
\bar x = (r x_1 + r \partial_1 h(\vec \omega_0) x_3, r x_2 + r \partial_2 h(\vec \omega_0) x_3, r^2 x_3).
\end{equation}
Since $\nabla h, \nabla h_1$ vanish at zero, and since $|\nabla^2 h|$ and $|\nabla^2 h_1|$ are at most 2, we know that $|\nabla h|$ and $|\nabla h_1|$ are at most 2 on the unit disk. Therefore the Jacobian factors $|Jh|$ and $|Jh_1|$ are $\lesssim 1$. Hence, we see from Equation \ref{gdef} that
$$ \| g \|_{L^\infty(S_1)} \lesssim r^{2} \| f \|_{L^\infty(S_0)}. $$
Since $|\partial h(\vec \omega_0)| \le 2$, we see from Equation \ref{barxdef} that if $x \in B_R$, then $\bar x \in B_{10 r R}$. If we let $\Phi$ be the linear change of coordinates with $\bar x = \Phi (x)$, then the determinant of $\Phi$ is $r^{4}$. Therefore, we have
$$ \| E_{S_0} f \|_{L^p(B_R)} \le r^{-4/p} \| E_{S_1} g \|_{L^p(B_{ 10 r R})} \le $$
$$ \le r^{-4/p} M \| g \|_{L^\infty(S_1)} \lesssim r^{2 - \frac{4}{p}} M \| f \|_{L^\infty(S_0)}. $$
\end{proof}
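The determinant computation for $\Phi$ at the end of the proof can be checked concretely. The following sketch (illustrative, with invented values of $r$ and $\partial h(\vec \omega_0)$) confirms that the matrix of $\Phi$ is upper triangular with determinant $r^4$:

```python
# The linear map Phi from the proof sends
#   (x1, x2, x3) -> (r x1 + r a1 x3, r x2 + r a2 x3, r^2 x3),
# where a_j = partial_j h(omega_0).  Its matrix is upper triangular, so
# det(Phi) = r * r * r^2 = r^4, independent of a1 and a2.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

r, a1, a2 = 0.25, 1.3, -0.7    # illustrative values
Phi = [[r, 0, r * a1],
       [0, r, r * a2],
       [0, 0, r**2]]
print(det3(Phi), r**4)
```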
Using parabolic rescaling, we now prove Theorem \ref{mainthm} from Theorem \ref{broad12/13}.
\begin{proof} We will prove the inequality by induction on the radius $R$. We would like to prove that $\| Ef \|_{L^{3.25}(B_R)} \le \bar C_\epsilon R^\epsilon \| f \|_\infty$ for some constant $\bar C_\epsilon$ independent of $R$. We know that $\| \Br_{K^{-\epsilon}} Ef\|_{L^{3.25}(B_R)} \le C_\epsilon R^\epsilon \| f \|_\infty$.
We wish to bound $\int_{B_R} | Ef(x)|^{3.25} dx$.
If $x$ is $K^{- \epsilon}$-broad, then $|Ef(x)| = \Br_{K^{-\epsilon}} Ef(x)$. If not, then there exists some $K^{-1}$-cap $\tau$ so that $|Ef(x)| \le K^\epsilon | Ef_\tau(x)|$. Therefore,
\begin{equation} \label{broad+caps}
\int_{B_R} |Ef|^{3.25} \le \int_{B_R} \Br_{K^{- \epsilon}} Ef^{3.25} + K^{O(\epsilon)} \sum_\tau \int_{B_R} |Ef_\tau|^{3.25}.
\end{equation}
The contribution of the broad term is bounded by Theorem \ref{broad12/13}. It is at most
$$( C_\epsilon R^\epsilon \| f \|_\infty )^{3.25}. $$
We have to prove the same bound for the $Ef_\tau$ terms. We bound each term using Lemma \ref{parabrescal}. We let $\tau$ be the graph of $h$ over $B^2_{K^{-1}} (\omega_0)$, and we let $S_1$ be the corresponding surface. We know that $S_1$ obeys Conditions \ref{CondSnice}. We can assume that $K$ is large enough so that $10 K^{-1} R < R/2$. Using induction on $R$ and applying Lemma \ref{parabrescal} with $r = K^{-1}$, we see that
$$ \int_{B_R} |Ef_{\tau}|^{3.25} \le C K^{-2.5} (\bar C_{\epsilon} R^{\epsilon} \| f_\tau \|_\infty)^{3.25}. $$
Since there are $\sim K^2$ caps $\tau$, their total contribution to the right-hand side of Equation \ref{broad+caps} is
$$ \le C K^{-(1/2) + O(\epsilon)} ( \bar C_\epsilon R^\epsilon \| f \|_\infty )^{3.25}. $$
We also know that $\lim_{\epsilon \rightarrow 0} K(\epsilon) = \infty$. If $\epsilon$ is small enough, then $C K^{-(1/2) + O(\epsilon)} \le 1/100$. Now choosing $\bar C_\epsilon = 10 C_\epsilon$, the induction closes. \end{proof}
Using parabolic rescaling, we can also deduce Theorem \ref{mainintro} from Corollary \ref{maincor}.
We just sketch the argument, which is standard. If $S$ is a compact $C^\infty$ surface with strictly positive second fundamental form, then we can divide $S$ into $C(S)$ pieces so that each piece is contained in the graph of a smooth function. In appropriate orthonormal coordinates, each graph has the form $\omega_3 = h(\vec \omega)$ for $\vec \omega$ contained in a ball of radius $\sim_S 1$. We can assume that $0 = h(0) = \partial h(0)$. Because of the positive second fundamental form of $S$, we know that $0 < \lambda \le \partial^2 h \le \Lambda$, and we know that $h$ is $C^\infty$ smooth. For any $L$, we can do parabolic rescaling with caps of radius $r = r(\lambda, \Lambda, \| h \|_{C^L})$ so that the function $h_1$ will have $| \partial^l h_1 | \le 10^{- 9}$ for all $3 \le l \le L$. We can do another change of coordinates so that $\partial^2 h_1(0)$ is the identity matrix. This coordinate change may increase the higher derivatives of $h$, but if we follow by more parabolic rescaling, we are reduced to functions $h$ obeying Conditions \ref{CondSnice}. The total number of pieces in this decomposition is a constant depending only on $S$. Applying Corollary \ref{maincor} to each piece and summing, we get Theorem \ref{mainintro}.
\subsection{Wave packet decomposition}
In this subsection, we decompose $Ef$ on $B_R$ into wave packets in a basically standard way.
First we decompose $S$ into $R^{-1/2}$-caps $\theta$. We let $\omega_\theta$ be a point near the center of $S \cap \theta$, and we let $v_\theta$ denote the unit normal vector to $S$ at $\omega_\theta$.
Let $\delta > 0$ be a small parameter. For each cap $\theta$, we let $\mathbb{T}(\theta)$ be a set of cylindrical tubes parallel to $v_\theta$, with radius $R^{(1/2) + \delta}$ and length $\sim R$, covering $B_R$. We choose the tubes with radius a little bigger than $R^{1/2}$ so that the wave packets decay very sharply outside of the tubes. For each $\theta$, each point $x \in B_R$ lies in $O(1)$ tubes $T \in \mathbb{T}(\theta)$.
We let $\mathbb{T} = \cup_\theta \mathbb{T}(\theta)$.
For any cap $\theta$, we let $3 \theta$ be a larger cap containing $\theta$. If $\theta$ is the graph of $h$ over a ball $B^2_r(\vec \omega_\theta)$, then we can take $3 \theta$ to be the graph of $h$ over $B^2_{3r} (\vec \omega_\theta)$.
If $T$ is a tube in $\mathbb{T}(\theta)$, we let $v(T) = v_\theta$ be the direction of the tube.
We can now state our result about wave packet decompositions.
\begin{prop} \label{wavepack} Suppose that $S$ obeys Conditions \ref{CondSnice}. Let $\mathbb{T}$ be as above, with $\delta > 0$. Suppose that $R$ is sufficiently large, depending on $\delta$. If $f$ is a function in $L^2(S)$, then for each $T \in \mathbb{T}$, we can choose a function $f_T$ so that the following holds:
\begin{enumerate}
\item If $T \in \mathbb{T}(\theta)$, then $\supp f_T \subset 3 \theta$.
\item If $x \in B_R \setminus T$, then $|E{f_T}(x)| \le R^{-1000} \| f \|_{L^2}$.
\item For any $x \in B_R$, $| E{f}(x) - \sum_{T \in \mathbb{T}} E{f_T}(x) | \le R^{-1000} \| f \|_{L^2}$.
\item (essential orthogonality) If $T_1, T_2 \in \mathbb{T}(\theta)$ and $T_1, T_2$ are disjoint, then $\int f_{T_1} \bar f_{T_2} \le R^{-1000} \int_\theta |f|^2$.
\item $\sum_{T \in \mathbb{T}(\theta)} \int_S |f_T|^2 \lesssim \int_\theta |f|^2$.
\end{enumerate}
\end{prop}
\begin{proof} Fix $\theta$. We define $f_\theta$ to be $f \chi_{\theta}$.
For each $\theta$ we choose orthonormal coordinates $\omega_1, ..., \omega_3$ so that $5 \theta$ is given by the graph of a function $h$:
$$ \omega_3 = h(\omega_1, \omega_2) = h(\vec \omega). $$
The domain of $h$ is a ball of radius $\sim R^{-1/2}$. We can choose the coordinates so that $h$ and $\partial h$ vanish at the center of the ball. Given Conditions \ref{CondSnice}, this function $h$ must obey the following inequalities on the ball:
\begin{equation}
|h| \lesssim R^{-1}; |\nabla h| \lesssim R^{-1/2}; |\nabla^l h| \lesssim_l 1 \textrm{ for all } l \ge 2.
\end{equation}
\noindent We let $(x_1, ..., x_3) = (\vec x, x_3)$ be the dual coordinates to $(\omega_1, ..., \omega_3) = (\vec \omega, \omega_3)$.
Now we define the tubes of $\mathbb{T}(\theta)$. We cover $\mathbb{R}^{2}$ with finitely overlapping balls $B$ of radius $R^{(1/2) + \delta}$. For each ball $B$, we let $T$ be the set of points $x = (\vec x, x_3)$ with $\vec x \in B$.
We let $\mathbb{T}(\theta)$ be the set of tubes corresponding to balls $B$ that cover $B^{2}(R)$, and we let $\tilde{\mathbb{T}}(\theta)$ be an infinite set of tubes corresponding to balls $B$ that cover $\mathbb{R}^{2}$.
We let $\phi_T$ be a partition of unity on $\mathbb{R}^2$ subordinate to the covering by balls $B$.
In fact, we make the slightly stronger assumption that the support of $\phi_T(\vec \omega)$ is contained in $(3/4) B$. We can also think of $\phi_T$ as a partition of unity on $\mathbb{R}^3$, subordinate to the covering by tubes $T$, where each function $\phi_T(x_1, x_2, x_3)$ is independent of $x_3$.
We can assume that $| \nabla^l \phi_T | \lesssim_l (R^{(1/2)+ \delta})^{-l}$, and so the Fourier transform obeys the estimate:
$$ | \hat \phi_T (\vec \omega) | \lesssim \Area B \left( 1 + R^{(1/2) + \delta} |\vec \omega| \right)^{-10^6}. $$
We let $\psi_\theta$ be a smooth function which is 1 on $2 \theta$ and has support in $3 \theta$. We can also think of $\psi_\theta(\vec \omega)$ as a function on $\mathbb{R}^2$. We can assume that $| \nabla^l \psi_\theta| \lesssim_l R^{l/2}$.
We let $J$ denote the Jacobian factor $( 1 + | \nabla h|^2)^{1/2}$, and we define $F_\theta = J f_\theta$ so that
$$ F_\theta(\vec \omega) \domega = f_\theta(\omega) \dvol_S. $$
We can think of $F_\theta$ either as a function on $\mathbb{R}^2$ or as a function on $\theta$. Thinking of $F_\theta(\vec \omega)$ as a function on $\mathbb{R}^2$, we can define the convolution $\hat \phi_T * F_\theta$. Now we can define $F_T$ by:
$$ F_T (\vec \omega) := \psi_\theta (\vec \omega) \cdot ( \hat \phi_T * F_\theta) (\vec \omega). $$
We remark that in this formula, the $\psi_\theta$ has a very small effect. The convolution $\hat \phi_T * F_\theta$ is essentially supported in a small neighborhood of $\theta$, because $\hat \phi_T(\vec \omega)$ decays rapidly for $| \vec \omega| \ge R^{-(1/2) - \delta}$ and $F_\theta$ is supported on $\theta$. However, $\hat \phi_T * F_\theta$ does have a small tail, which we cut off by multiplying by $\psi_\theta$, so that $F_T$ is supported in $3 \theta$.
Finally, we define $f_T$ by $F_T = J f_T$ so that
$$ F_T(\vec \omega) \domega = f_T(\omega) \dvol_S. $$
We have now defined $f_T$ and we have to check that it obeys Properties 1-5.
Since $F_T = \psi_{\theta} \cdot ( \hat \phi_T * F_\theta)$, and $\supp \psi_\theta \subset 3 \theta$, it follows that $\supp f_T \subset 3 \theta$, which proves Property 1.
The proof of Property 2 is probably the most important. Let $T \in \tilde{\mathbb{T}}(\theta)$.
We write $Ef_T(x)$ as
$ \int e^{i \omega x} f_T(\omega) \dvol_S = \int e^{i \vec \omega \cdot \vec x} e^{i h(\vec \omega) x_3} F_T(\vec \omega) \domega$. Then we plug in that $F_T = \psi_\theta \cdot (\hat \phi_T * F_\theta)$ and group terms to get
\begin{equation} \label{expandingEf_T}
Ef_T(x) = \int e^{i \vec \omega \cdot \vec x} (e^{i h(\vec \omega) x_3} \psi_\theta) (\hat \phi_T * F_\theta) \domega.
\end{equation}
Let $G_{x_3}(\vec \omega) = e^{i h(\vec \omega) x_3} \psi_\theta$. If we interpret the right-hand side of Equation \ref{expandingEf_T} as an inverse Fourier transform, then intertwining multiplication and convolution, we get:
$$Ef_T(x_1, x_2, x_3) = \left[ G_{x_3}^\vee * (\phi_T \cdot \check{F}_\theta) \right] (\vec x). $$
Since $x \in B_R$, $|x_3| \le R$. It then follows that $| \nabla^l G_{x_3}| \lesssim_l R^{l/2}$, and so
$$ |G_{x_3}^\vee(\vec x)| \lesssim \Area \theta \left(1 + |\vec x| R^{-1/2} \right)^{- 10^6 \delta^{-1} }. $$
Since $x \notin T$, the distance from $x$ to $\supp \phi_T$ is $\ge (1/10) R^{(1/2) + \delta}$. Finally $| \check{F}_\theta| \lesssim \| f \|_{L^2(\theta)}$. Plugging these estimates into the convolution, we see that
$$ |Ef_T(x)| \le R^{-10000} \| f \|_{L^2(\theta)}. $$
This proves Property 2, but for the future we also note a slightly stronger estimate:
\begin{equation} \label{strongprop2}
|Ef_T(x)| \le R^{-10000} \| f \|_{L^2(\theta)} ( 1 + \Dist(x, T))^{-100}.
\end{equation}
Now we are ready to prove Property 3. We write $Ef(x)$ as
$$\sum_\theta E f_\theta(x) = \sum_\theta \int e^{i \omega x} F_\theta (\vec \omega) \domega.$$
Since $\psi_\theta$ is identically 1 on $\supp F_\theta \subset \theta$, we can rewrite this as
$$ = \sum_\theta \int e^{i \omega x} \psi_\theta F_\theta (\vec \omega) \domega.$$
Now the infinite sum $\psi_\theta \sum_{T \in \tilde{\mathbb{T}}(\theta)} \hat \phi_T * F_\theta$ converges to $\psi_\theta F_\theta$ in $L^2$ and hence in $L^1$ since the functions are all supported in $3 \theta$. Therefore, we can write $Ef(x)$ as a convergent infinite sum:
$$ Ef(x) = \sum_\theta \sum_{T \in \tilde{\mathbb{T}}(\theta)} \int e^{i \omega x} \psi_\theta (\hat \phi_T * F_\theta) \domega = \sum_{\theta} \sum_{T \in \tilde{\mathbb{T}}(\theta)} Ef_T(x). $$
Finally we want to prune the last sum by including only tubes $T$ in $\mathbb{T}(\theta)$ -- in other words, only the tubes $T$ that actually intersect $B_R$. Since $x \in B_R$, the tubes we remove are all disjoint from $x$. We bound their total contribution using the strong version of Property 2 in equation \ref{strongprop2}. This proves Property 3.
We prove Property 4 using Plancherel's theorem as follows. Suppose that $T_1, T_2$ are disjoint tubes in $\mathbb{T}(\theta)$. Expanding the definition of $f_{T_1}$ and $f_{T_2}$, we get
\begin{equation} \label{innerprodf_1f_2}
\int f_{T_1} \overline{ f_{T_2}} \dvol_S = \int J \psi_\theta (\hat \phi_{T_1} * F_\theta) \bar \psi_\theta \overline{(\hat \phi_{T_2} * F_\theta)} \domega = \int \left(J |\psi_\theta|^2 \cdot (\hat \phi_{T_1} * F_\theta) \right) \overline{(\hat \phi_{T_2} * F_\theta)} \domega.
\end{equation}
Let $G = J |\psi_\theta|^2$. Applying Plancherel, our integral is equal to:
\begin{equation} \label{towardsProp4}
\int \left( \check{G} * (\phi_{T_1} \check{F}_\theta) \right) \cdot \overline{ \phi_{T_2} \check{F}_\theta} dx_1 dx_2.
\end{equation}
Since $T_1$ and $T_2$ are disjoint, $\Dist(\supp \phi_{T_1}, \supp \phi_{T_2}) \ge (1/4) R^{(1/2) + \delta}$. But on the other hand, $G$ obeys $| \nabla^l G | \lesssim_l R^{l/2}$, and so
$$ | \check{G}(\vec x)| \lesssim \Area \theta \left( 1 + |\vec x| R^{-1/2} \right)^{- 10^6 \delta^{-1}}. $$
Also $| \check{F}_\theta(x_1, x_2)| \lesssim \| f \|_{L^2(\theta)}$. Plugging these bounds into equation \ref{towardsProp4}, we get
$$ \int f_{T_1} \overline{ f_{T_2}} \dvol_S \lesssim R^{-10^5} \| f \|_{L^2(\theta)}^2. $$
This proves Property 4.
Finally, we turn to Property 5. Using Equation \ref{innerprodf_1f_2}, we see
$$ \sum_{T \in \mathbb{T}(\theta)} \int |f_T|^2 \dvol_S = \sum_{T \in \mathbb{T}(\theta)} \int |\psi_\theta|^2 J |\hat \phi_T * F_\theta|^2 \domega.$$
Since $1 \le J \le 2$, this last integral is
\begin{equation} \label{withpsiintegral}
\lesssim \sum_{T \in \mathbb{T}(\theta)} \int |\psi_\theta|^2 |\hat \phi_T * F_\theta|^2 \domega.
\end{equation}
Since $F_\theta$ is supported in $\theta$ and $\hat \phi_T$ decays rapidly, $\psi_\theta (\hat \phi_T * F_\theta)$ is almost equal to $(\hat \phi_T * F_\theta)$. In quantitative terms, since $| \hat \phi_T (\vec \omega)|$ decays rapidly for $| \vec \omega | \ge R^{-(1/2) - \delta}$, we get
$$ \psi_\theta (\hat \phi_T * F_\theta) (\vec \omega) = (\hat \phi_T * F_\theta) (\vec \omega) + O( R^{-10^5} (1 + | \vec \omega|)^{-10} \| f_\theta \|_2). $$
Using this estimate, we see that line \ref{withpsiintegral} is
$$ \le \sum_{T \in \mathbb{T}(\theta)} \int |\hat \phi_T * F_\theta|^2 \domega + O(R^{-10^5} \| f \|_{L^2(\theta)}^2). $$
We can evaluate the last integral by Plancherel, giving
$$ \sum_{T \in \mathbb{T}(\theta)} \int |\phi_T|^2 |\check{F_\theta}|^2 dx_1 dx_2. $$
But since $\phi_T$ form a partition of unity, $\sum_{T \in \mathbb{T}(\theta)} |\phi_T|^2 \le 1$, and so the last line is bounded by
$$ \int |\check{F_\theta}|^2 dx_1 dx_2 = \int |F_\theta|^2 \domega \lesssim \int_\theta |f|^2. $$
This proves Property 5. \end{proof}
We will usually apply Proposition \ref{wavepack} to the functions $f_\tau$. By Property 1, if $f_\tau$ is supported in $\tau$, then for every $T$, $f_{\tau, T}$ is supported in a $O(R^{-1/2})$ neighborhood of $\tau$.
Suppose that $\mathbb{T}_i \subset \mathbb{T}$ are subsets. For each $\tau$ and for each subset, we can define a corresponding function $f_{\tau, i}$:
$$ f_{\tau, i} := \sum_{T \in \mathbb{T}_i} f_{\tau, T}. $$
\begin{lemma} \label{wavepack1} Consider some subsets $\mathbb{T}_i \subset \mathbb{T}$ indexed by $i \in I$. If each tube $T$ belongs to at most $\mu$ of the subsets $\{ \mathbb{T}_i \}_{i \in I}$, then for every $\theta$,
$$ \sum_{i \in I} \int_{3 \theta} | f_{\tau, i} |^2 \lesssim \mu \int_{10 \theta} |f_\tau|^2. $$
Also,
$$ \sum_{i \in I} \int_S | f_{\tau, i} |^2 \lesssim \mu \int_S |f_\tau|^2. $$
\end{lemma}
\begin{proof} Each $f_{\tau, i} = \sum_{T \in \mathbb{T}_i} f_{\tau, T}$. If $T \in \mathbb{T}(\theta')$, then $\supp f_{\tau, T} \subset 3 \theta'$. So in the integral on the left-hand side, we only need to include the tubes in $\mathbb{T}(\theta')$ for the $O(1)$ caps $\theta'$ lying in $10 \theta$.
We define $\mathbb{T}_i(\theta') := \mathbb{T}_i \cap \mathbb{T}(\theta')$, and $f_{\tau, i, \theta'} = \sum_{T \in \mathbb{T}_{i}(\theta')} f_{\tau, T}$.
$$ \int_{3 \theta} | f_{\tau, i} |^2 \lesssim \sum_{3\theta' \cap 3 \theta \not= \emptyset} \int |f_{\tau, i, \theta'}|^2 .$$
For each $\theta'$, we expand $f_{\tau, i, \theta'}$ to get
$$\sum_i \int |\sum_{T \in \mathbb{T}_i(\theta')} f_{\tau, T} |^2 = \sum_i \sum_{T_1, T_2 \in \mathbb{T}_i(\theta')} \int f_{\tau, T_1} \overline{f_{\tau, T_2}}. $$
We control the terms where $T_1$ and $T_2$ are disjoint using Property 4 above. Each tube $T_1 \in \mathbb{T}(\theta')$ intersects at most $O(1)$ other tubes $T_2 \in \mathbb{T}(\theta')$. Therefore, the last expression is bounded by:
$$ \lesssim \sum_i \left( \sum_{T \in \mathbb{T}_i(\theta')} \int |f_{\tau, T}|^2 + O( |\mathbb{T}_i(\theta')|^2 R^{-1000} \| f_{\tau, \theta'} \|_2^2) \right). $$
The big $O$ term contributes at most $|I| R^{-950} \| f_{\tau, \theta'} \|_2^2 \le \mu R^{-900} \| f_{\tau, \theta'} \|_2^2$, which is easily controlled by the right-hand-side. Using Property 5, the main term is bounded by
$$ \le \mu \sum_{T \in \mathbb{T}(\theta')} \int |f_{\tau, T}|^2 \lesssim \mu \int_{\theta'} |f_\tau|^2. $$
This proves that $\sum_{i \in I} \int_{3 \theta} | f_{\tau, i} |^2 \lesssim \mu \int_{10 \theta} |f_\tau|^2$, giving the first inequality in the conclusion. Finally, if we sum this inequality over all the caps $\theta \subset S$, we get the second inequality.
\end{proof}
As a special case, applying the lemma above to a single subset $\mathbb{T}_i \subset \mathbb{T}$, we get the following:
\begin{lemma} \label{wavepack2} If $\mathbb{T}_i \subset \mathbb{T}$, then for any cap $\theta$, and any $\tau$,
$$ \int_{3 \theta} |f_{\tau, i}|^2 \lesssim \int_{10 \theta} |f_\tau|^2. $$
\end{lemma}
\section{The harmonic analysis part of the proof}
In this section, we give the heart of the proof of Theorem \ref{broad12/13}. This section contains the proof except for the proofs of some geometric lemmas about how tubes intersect algebraic varieties. The geometric lemmas have a different flavor, and we prove them in the next section.
\subsection{The inductive setup}
We will prove Theorem \ref{broad12/13} by an inductive argument. In order to do the induction, we need to set up the Theorem in a slightly more general way.
Instead of taking $f_\tau$ to be $f$ restricted to $\tau$ and taking the caps $\tau$ disjoint, we need to allow the caps $\tau$ to overlap. Suppose that each $\tau$ is the graph of $h$ over a ball $B^2(\vec \omega_{\tau}, r)$, and that the union of $\tau$ is $S$. We consider a decomposition $f = \sum_\tau f_\tau$, where $\supp f_\tau \subset \tau$. We define $\alpha$-broad as before: $x$ is $\alpha$-broad for $Ef$ if $\max_\tau |Ef_\tau(x)| \le \alpha |Ef(x)|$.
We assume that the centers $\{ \vec \omega_{\tau} \} \subset B^2(1)$ are $K^{-1}$ separated. We define the multiplicity $\mu$ of the covering by saying that the radius $r$ for each cap $\tau$ lies in the range $[K^{-1}, \mu^{1/2} K^{-1}]$. Using the radius condition and the separation condition, it follows easily that any point lies in $O(\mu)$ different caps $\tau$.
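To spell out the overlap count: if a point $\omega \in S$ lies in a cap $\tau$, then $|\vec \omega - \vec \omega_\tau| \le \mu^{1/2} K^{-1}$, where $\vec \omega$ denotes the projection of $\omega$ to $B^2(1)$. So the centers of all caps containing $\omega$ lie in $B^2(\vec \omega, \mu^{1/2} K^{-1})$, and since the centers are $K^{-1}$-separated, a volume comparison gives
$$ \# \{ \tau : \omega \in \tau \} \lesssim \left( \frac{\mu^{1/2} K^{-1} + K^{-1}}{K^{-1}} \right)^2 \lesssim \mu. $$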
\begin{theorem} \label{maintech} For any $\epsilon > 0$, there exists $K, L$ and a small $\delta_{trans} \in (0, \epsilon)$, depending only on $\epsilon$, so that the following holds.
Suppose that $S$ is the graph of a function $h$ obeying Conditions \ref{CondSnice} for $L$ derivatives.
Suppose that the caps $\tau$ cover $S$ as described above, with multiplicity at most $\mu$, and suppose that $\alpha \ge K^{-\epsilon}$.
If for any $\tau$ and any $\omega \in S$,
$$\oint_{B(\omega, R^{-1/2}) \cap S} |f_\tau|^2 \le 1,$$
then
$$ \int_{B_R} \Br_\alpha Ef^{3.25} \le C_\epsilon R^\epsilon \left(\sum_\tau \int_S |f_\tau|^2 \right)^{(3/2) + \epsilon} R^{\delta_{trans} \log (K^\epsilon \alpha \mu)}. $$
\noindent Moreover, $\lim_{\epsilon \rightarrow 0} K(\epsilon) = + \infty$.
\end{theorem}
We can easily recover Theorem \ref{broad12/13} from Theorem \ref{maintech}. Fix an $\epsilon > 0$. By scaling $f$, we can suppose that $\| f\|_\infty = 1$. We divide $S$ into a disjoint union of $K^{-1}$-caps $\tau$. The multiplicity of this cover is $\mu \lesssim 1$. We take $f_\tau = f \chi_\tau$. So $\sum_\tau \int_S |f_\tau|^2 = \int_S |f|^2$. Since $\| f \|_\infty = 1$, we see that the average value of $|f_\tau|^2$ on any region is at most 1. We take $\alpha = K^{-\epsilon}$. The last factor $R^{\delta_{trans} \log (K^\epsilon \alpha \mu)}$ is $\le R^{C \delta_{trans}} \le R^{O(\epsilon)}$. Now we can apply Theorem \ref{maintech}, and we see that $\int_{B_R} \Br_\alpha Ef^{3.25} \lesssim C_\epsilon R^{O(\epsilon)} (\int_S |f|^2)^{(3/2) + \epsilon}$. Since $\| f \|_\infty =1$, this last expression is bounded by $C_\epsilon R^{O(\epsilon)} \| f \|_2^{3} \| f \|_{\infty}^{1/4}$. Raising both sides to the power $(3.25)^{-1} = 4/13$, we get $\| \Br_\alpha Ef \|_{L^{3.25}(B_R)} \le C_\epsilon R^{O(\epsilon)} \| f \|_2^{12/13} \| f \|_{\infty}^{1/13}$. Since $\epsilon > 0$ is arbitrary, we recover Theorem \ref{broad12/13}.
There are several parameters to keep track of. For reference later, we list them here and say how they are related. We will take $\delta_{trans} = \epsilon^6$ and $K = e^{\epsilon^{-10}}$. We also introduce two other small parameters: $\delta = \epsilon^2$ and $\delta_{deg} = \epsilon^4$. We will have tubes of thickness $R^{(1/2) + \delta}$, and in the next section we will choose a degree $D = R^{\delta_{deg}}$. The key facts about the small parameters are
$$ \delta_{trans} \ll \delta_{deg} \ll \delta \ll \epsilon. $$
\noindent Also, we need $K$ very large compared to $\delta_{trans}$, so that $R^{\delta_{trans} \log (10^{-6} K^\epsilon) } \ge R^{1000}$.
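For the record, this requirement is consistent with the choices above: since $\log K = \epsilon^{-10}$,
$$ \delta_{trans} \log (10^{-6} K^\epsilon) = \epsilon^6 \left( \epsilon^{-9} - 6 \log 10 \right) = \epsilon^{-3} - 6 \epsilon^6 \log 10, $$
which exceeds $1000$ once $\epsilon$ is sufficiently small.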
During the proof of Theorem \ref{maintech}, we write $A \lesssim B$ for $A \le C(\epsilon) B$. For example, since $K$ is a constant depending on $\epsilon$, we have $K \lesssim 1$ and $\alpha \gtrsim 1$.
\subsection{Polynomial partitioning}
We will prove Theorem \ref{maintech} using polynomial partitioning. We pick a degree $D = R^{\delta_{deg}}$ with $\delta_{deg} = \epsilon^4$. Then we apply polynomial partitioning with this degree to the function $\chi_{B_R} \Br_\alpha Ef^{3.25}$. Corollary \ref{partitns} tells us that there exists a non-zero polynomial $P$ of degree at most $D$ so that $\mathbb{R}^n \setminus Z(P)$ is a disjoint union of $\sim D^3$ cells $O_i$, and so that for each $i$,
$$\int_{O_i \cap B_R} \Br_\alpha Ef ^{3.25} \sim D^{-3} \int_{B_R} \Br_\alpha Ef^{3.25}. $$
\noindent Moreover, we can assume that $P$ is a product of non-singular polynomials. This is a minor technical point that will help with the proofs of the Lemmas below.
We define $W := N_{R^{(1/2) + \delta}} Z(P)$, and we let $O_i' := (O_i \cap B_R) \setminus W$. Then we define $\mathbb{T}_i \subset \mathbb{T}$ as:
$$ \mathbb{T}_i := \{ T \in \mathbb{T} \textrm{ so that } T \cap O_i' \not= \emptyset \}. $$
We define $f_{\tau, i} = \sum_{T \in \mathbb{T}_i} f_{\tau, T}$. We define $f_i = \sum_\tau f_{\tau, i}$.
We remark that if $T \in \mathbb{T}_i$, then $T \cap O_i'$ is non-empty, and so the core line of $T$ must intersect $O_i$. Since a line can cross $Z(P)$ at most $D$ times, we see that each tube $T \in \mathbb{T}$ intersects at most $D+1$ of the $O_i'$. We state this estimate as a lemma.
\begin{lemma} Each tube $T \in \mathbb{T}$ lies in at most $D+1$ of the sets $\mathbb{T}_i$.
\end{lemma}
The integral of $\Br_\alpha Ef^{3.25}$ on a cell $O_i'$ will be controlled using induction. We also have to control the integral of $\Br_\alpha Ef^{3.25}$ on $W$.
We cover $B_R$ with $\sim R^{3 \delta}$ balls $B_j$ of radius $R^{1 - \delta}$. If $B_j \cap W$ is non-empty, then we note which tubes of $\mathbb{T}$ are tangent to $Z(P)$ in $B_j$ and which tubes of $\mathbb{T}$ are transverse to $Z(P)$ in $B_j$.
\begin{definition} \label{deftang}
$\mathbb{T}_{j, tang}$ is the set of all $T \in \mathbb{T}$ obeying the following two conditions:
\begin{itemize}
\item $T \cap W \cap B_j \not= \emptyset$.
\item If $z$ is any non-singular point of $Z(P)$ lying in $2 B_j \cap 10 T$, then
$$\Angle(v(T), T_z Z) \le R^{-(1/2) + 2 \delta}.$$
\end{itemize}
\end{definition}
(Recall that $v(T)$ is the unit vector in the direction of the tube $T$.)
\begin{definition} \label{deftrans}
$\mathbb{T}_{j, trans}$ is the set of all $T \in \mathbb{T}$ obeying the following two conditions:
\begin{itemize}
\item $T \cap W \cap B_j \not= \emptyset$.
\item There exists a non-singular point $z$ of $Z(P)$ lying in $2 B_j \cap 10 T$, so that
$$\Angle(v(T), T_z Z) > R^{-(1/2) + 2 \delta}.$$
\end{itemize}
\end{definition}
We claim that any tube $T \in \mathbb{T}$ that intersects $W \cap B_j$ lies in exactly one of $\mathbb{T}_{j,tang}$ and $\mathbb{T}_{j, trans}$. Looking at the definitions, the only thing that we need to check is that if $T$ intersects $W \cap B_j$, then there is a non-singular point of $Z(P)$ in $10 T \cap 2 B_j$. We recall that $W$ is the $R^{(1/2)+\delta}$ neighborhood of $Z(P)$, and that $R^{(1/2)+\delta}$ is also the radius of each tube $T$. Therefore, if $x \in T \cap W \cap B_j$, then there is a point $z \in Z(P)$ with $\Dist(x,z) \le R^{(1/2) + \delta}$. This point $z$ lies in $10 T \cap 2 B_j$. Also, since $P$ is a product of non-singular polynomials, the non-singular points are dense in $Z(P)$ and we can assume that $z$ is a non-singular point.
There are two important geometric lemmas about $\mathbb{T}_{j, tang}$ and $\mathbb{T}_{j, trans}$ that we use in our estimates. We state them here and prove them in the next section. The proofs use a little algebraic geometry and a little differential geometry. They have a different flavor from the harmonic analysis arguments we have been discussing, and so we put them in their own section which concentrates on those ideas.
We begin with an estimate about the transverse tubes.
\begin{lemma} \label{transbound} Each tube $T \in \mathbb{T}$ belongs to at most $\Poly(D) = R^{O(\delta_{deg})}$ different sets $\mathbb{T}_{j, trans}$.
\end{lemma}
We remark that a tube $T$ intersects $R^\delta$ different balls $B_j$. We chose $\delta_{deg} = \epsilon^4$ much smaller than $\delta = \epsilon^2$. So $T$ belongs to $\mathbb{T}_{j, trans}$ for only a tiny fraction of these balls. Using this estimate and induction we can control the contribution from the transverse tubes. It might also be worth noting the following. A line can transversely intersect $Z(P)$ in at most $D$ points. Lemma \ref{transbound} is an analogous estimate with a tube in place of a line. We get a weaker quantitative bound: polynomial in $D$ instead of linear in $D$. This is good enough for our purposes, but it would be interesting to understand the worst-case behavior.
Next we give an estimate for the tangential tubes.
\begin{lemma} \label{tangbound} For each $j$, the number of different $\theta$ so that $\mathbb{T}_{j, tang} \cap \mathbb{T}(\theta) \not= \emptyset$ is at most $R^{(1/2) + O(\delta)}$.
\end{lemma}
There are $\sim R$ different caps $\theta \subset S$. The lemma says that only on the order of $R^{1/2}$ of these caps can contribute to $\mathbb{T}_{j, tang}$. For instance, if $Z(P)$ is a plane, then only the directions tangent to the plane can appear in $\mathbb{T}_{j, tang}$.
We let $f_{\tau, j, tang} := \sum_{T \in \mathbb{T}_{j, tang}} f_{\tau, T}$ and $f_{j, tang} = \sum_\tau f_{\tau, j, tang}$ and similarly for $f_{\tau, j, trans}$ and $f_{j, trans}$.
\subsection{The inductive step}
In this subsection, we break $\int_{B_R} \Br_\alpha Ef^{3.25}$ into pieces coming from the $f_i$, the $f_{j, trans}$, and the $f_{j, tang}$. We call these the cellular pieces, the transverse pieces, and the tangential pieces. We will bound the tangential pieces directly, and we will bound the other pieces by induction. In this subsection, we explain how to break the integral into pieces, we state the bound for the tangential pieces, and we explain how the induction works. We will come back to prove the bound for the tangential pieces in the next subsection.
Throughout the arguments, we will assume that $\epsilon$ is sufficiently small and $R$ is sufficiently large.
If $x \in O_i'$, then $E{f_\tau}(x)$ is almost equal to $Ef_{\tau, i}(x)$ for each $\tau$. We also want to think about how the
$\alpha$-broad part of $E{f}(x)$ relates to the $\alpha$-broad part of $E{f_i}(x)$.
\begin{lemma} \label{declemma1} If $x \in O_i'$ and $R$ is large enough, then
$$\Br_\alpha Ef(x) \le 2 \Br_{2 \alpha} E{f}_i(x) + R^{-900} \sum_\tau \| f_\tau \|_2. $$
\end{lemma}
\begin{proof} By Proposition \ref{wavepack}, we know that
$$ Ef_\tau(x) = \sum_{T \in \mathbb{T}} Ef_{\tau, T}(x) + O( R^{-1000} \| f_\tau \|_2). $$
If $x \in T$, then $T$ must intersect $O_i'$ so $T \in \mathbb{T}_i$. If $x \notin T$, then Proposition \ref{wavepack} gives us the bound $|Ef_{\tau, T}(x)| \le R^{-1000} \| f_\tau \|_2$. The total contribution of these $T \notin \mathbb{T}_i$ is small, leaving
\begin{equation} \label{EftalmostEft_i}
Ef_\tau(x) = Ef_{\tau, i}(x) + O(R^{-990} \| f_\tau \|_2).
\end{equation}
Summing over $\tau$, we get
\begin{equation} \label{EfalmostEf_i}
Ef(x) = Ef_i(x) + O(R^{-990} \sum_\tau \| f_\tau \|_2).
\end{equation}
Now we have to deal with the $\alpha$-broad issue. We can assume that $|Ef(x)| \ge R^{-900} \sum_\tau \|f_\tau \|_2$ and hence $|Ef_i(x)| \ge (1/2) R^{-900} \sum_\tau \| f_\tau \|_2$. We can also assume that $x$ is $\alpha$-broad for $Ef$. Under these assumptions, it remains to show that $x$ is $2\alpha$-broad for $Ef_i$. In other words, we have to show that for each $\tau$,
$$ |Ef_{\tau, i}(x)| \le 2 \alpha |E f_i(x)|. $$
Using Equations \ref{EftalmostEft_i} and \ref{EfalmostEf_i}, we see that
$$ |Ef_{\tau, i}(x)| \le |Ef_\tau(x)| + O(R^{-990} \| f_\tau \|_2) \le \alpha |Ef(x)| + O(R^{-990} \| f_\tau \|_2) \le $$
$$\le \alpha |Ef_i(x)| + O(R^{-990} \sum_\tau \| f_\tau \|_2) \le 2 \alpha |Ef_i(x)|.$$
\end{proof}
If $x \in W \cap B_j$, then the situation is more complicated.
$E{f}(x)$ is almost equal to $E{f}_{j, trans}(x) + E{f}_{j, tang}(x)$.
But in order for the $\alpha$-broad parts to behave well, we will need to use not only $E{f}_{j, trans}$ but some other related functions.
Recall that $S$ is divided into $\sim K^2$ caps $\tau$ of diameter $K^{-1}$.
If $I$ is any subset of these caps, we let $f_{I, j, trans} = \sum_{\tau \in I} f_{\tau, j, trans}$. The function $f_{I, j, trans}$ comes with a natural decomposition: if $\tau \in I$, we let $f_{\tau, I, j, trans} = f_{\tau, j, trans}$, and if $\tau \notin I$, then $f_{\tau, I, j, trans} = 0$.
Eventually we will estimate the terms involving $f_i$ or $f_{j, trans}$ by induction. On the other hand, we will estimate the terms involving $f_{j, tang}$ by a direct computation. For this computation, we will use a bilinear version of $f_{j, tang}$ which we now define. We say that two caps $\tau_1, \tau_2$ are non-adjacent if the distance between them is $\ge K^{-1}$.
$$ \Bil (E{f}_{j, tang}) := \sum_{\tau_1, \tau_2 \textrm{ non-adjacent}} | E{f}_{\tau_1, j, tang} |^{1/2} | E{f}_{\tau_2, j, tang} |^{1/2}. $$
With these definitions in hand, we can now state our lemma connecting $\Br_\alpha Ef$ with $f_i, f_{j, trans}$, and $f_{j, tang}$.
\begin{lemma} \label{declemma2}
If $x \in B_j \cap W$ and $\alpha \mu \le 10^{-5}$, then
$$ \Br_\alpha Ef(x) \le 2 \left( \sum_I \Br_{2 \alpha} Ef_{I, j, trans}(x) + K^{100} \Bil (E{f}_{j, tang}) (x) + R^{-900} \sum_\tau \| f_\tau \|_2 \right). $$
\end{lemma}
Remark. In Lemma \ref{declemma2}, when we sum over $I$, we are summing over the roughly $2^{K^2}$ subsets of the set of caps $\tau$. Since $K$ is a constant depending on $\epsilon$, this large-sounding number will turn out to be minor.
\begin{proof}
Suppose $x \in B_j \cap W$. We can assume that $x$ is $\alpha$-broad for $E{f}$ and that $|Ef(x)| \ge R^{-900} \sum_\tau \| f_\tau \|_2$.
Let $I$ be the set of $K^{-1}$-caps $\tau$ so that $ | Ef_{\tau, j, tang} (x)| \le K^{-100} |Ef (x) |$ . In other words, $I^c$ is the set of caps $\tau$ so that $ | Ef_{\tau, j, tang} (x)| \ge K^{-100} |Ef (x) |$ . If $I^c$ contains two non-adjacent caps, then $ |Ef (x) | \le K^{100} \Bil (E{f}_{j, tang}) (x)$, and so the conclusion holds.
If $I^c$ does not contain two non-adjacent caps, then $I^c$ consists of at most $10^4 \mu$ caps, because the centers of the caps are $K^{-1}$ separated, and the radius of each cap is at most $\mu^{1/2} K^{-1}$.
Since $x$ is $\alpha$-broad for $Ef$, and $\alpha \mu \le 10^{-5}$, we have
$$ \sum_{\tau \in I^c} |Ef_{\tau}(x)| \le 10^4 \mu \alpha |Ef(x)| \le (1/10) | E f(x) |. $$
Therefore, $| Ef_I(x) | \ge (9/10) |Ef(x) |$. Next, we break up $Ef_I$ into tangential and transverse contributions.
If $T \in \mathbb{T}$ and $T$ intersects $B_j \cap W$, then $T$ belongs to $\mathbb{T}_{j, trans}$ or $\mathbb{T}_{j, tang}$. On the other hand, if $T$ does not intersect $B_j \cap W$, then $|f_{\tau, T}(x)| = O(R^{-1000} \| f_\tau \|_2)$. Therefore,
for any cap $\tau$, we have
\begin{equation} \label{eqdeclemmab}
|Ef_{\tau}(x)| \le |Ef_{\tau, j, trans}(x)| + |Ef_{\tau, j, tang}(x)| + O( R^{-990} \| f_\tau \|_2 ).
\end{equation}
Summing over $\tau \in I$, we see that
$$ |Ef_I(x)| \le |Ef_{I, j, trans}(x)| + \left( \sum_{\tau \in I} |Ef_{\tau, j, tang}(x)| \right) + O( R^{-990} \sum_\tau \| f_\tau \|_2 ). $$
But for each cap $\tau \in I$, $|Ef_{\tau, j, tang}(x)| \le K^{-100} |E f(x)|$, and so $ \sum_{\tau \in I} |Ef_{\tau, j, tang}| \le K^{-98} |Ef(x)|. $ Plugging this in and using that $|Ef_I(x)| \ge (9/10) |Ef(x)|$, we get:
$$ (9/10) |Ef (x)| \le |Ef_{I, j, trans}(x)| + K^{-98} |Ef(x)| + O(R^{-980} \sum_\tau \| f_\tau \|_2). $$
Since $|Ef(x)| \ge R^{-900} \sum_\tau \| f_\tau \|_2$, we see that
\begin{equation} \label{Ef_Itransbig}
|Ef(x)| \le (3/2) |E f_{I, j, trans}(x)|.
\end{equation}
In this case, it remains to prove that $x$ is $2 \alpha$-broad for $Ef_{I,j, trans}$. Given Equation \ref{Ef_Itransbig}, it suffices to prove that for each $\tau \in I$,
$$ |Ef_{\tau, j, trans}(x)| \le (1.1) \alpha |Ef(x)|. $$
From equation \ref{eqdeclemmab} above, we see that
$$ |Ef_{\tau, j, trans}(x)| \le |Ef_\tau(x)| + |Ef_{\tau, j, tang}(x)| + O(R^{-990} \| f_\tau \|_2). $$
Since $\tau \in I$, $|Ef_{\tau, j, tang}(x)| \le K^{-100} |Ef(x)|$. Therefore, we have
$$ |Ef_{\tau, j, trans}(x)| \le \alpha |Ef(x)| + K^{-100} |Ef(x)| + O(R^{-990} \| f_\tau \|_2). $$
Because $|Ef(x)| \ge R^{-900} \sum_\tau \|f_\tau \|_2$ and $\alpha \ge K^{-\epsilon}$, we have
$$ |Ef_{\tau, j, trans}(x)| \le (1.1) \alpha |Ef(x)| . $$
Hence the point $x$ is $2 \alpha$-broad for $Ef_{I, j, trans}$. \end{proof}
We can now state our estimate for the tangential terms.
\begin{prop} \label{tangtermbound}
$$\int_{B_j} \Bil (E{f}_{j, tang}) ^{3.25} \lesssim R^{O(\delta)} \left(\sum_\tau \int |f_\tau|^2 \right)^{3/2}. $$
\end{prop}
We will prove Proposition \ref{tangtermbound} in the next subsection. The argument is basically standard, but the proof is important: it involves the key moment where we use that the exponent is $3.25$ and not smaller.
Now we use induction to prove Theorem \ref{maintech}. We do induction on the radius $R$. For each radius $R$, we also induct on $\sum_\tau \int |f_\tau|^2$. As the base of the induction, the theorem is true when $R=1$ or when $\sum_\tau \int |f_\tau|^2 \le R^{-1000}$. For $R=1$ the theorem is trivial. If $\sum_\tau \int |f_\tau|^2 \le R^{-1000}$, the theorem follows from observing that $\sup |\Br_\alpha E f| \le \sum_\tau \int_S |f_\tau| \le C R^{O(\epsilon)} \left( \sum_\tau \int_S |f_\tau|^2 \right)^{1/2}$. Therefore,
$$ \int_{B_R} |\Br_\alpha E f |^{3.25} \le C R^3 \left( \sum_\tau \int_S |f_\tau| \right)^{3.25} \le
C R^4 \left( \sum_\tau \int_S |f_\tau|^2 \right)^{(3/2) + (1/8)} \le $$
$$ \le C R^{-100} \left( \sum_\tau \int_S |f_\tau|^2 \right)^{(3/2) + \epsilon}. $$
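To unpack the pointwise bound used above: $|\Br_\alpha Ef(x)| \le |Ef(x)| \le \sum_\tau \int_S |f_\tau|$, and by Cauchy--Schwarz on each cap and then over the $O(K^2)$ caps $\tau$ (using $|\tau| \lesssim 1$),
$$ \sum_\tau \int_S |f_\tau| \le \sum_\tau |\tau|^{1/2} \left( \int_S |f_\tau|^2 \right)^{1/2} \lesssim K \left( \sum_\tau \int_S |f_\tau|^2 \right)^{1/2}, $$
which is $\le C R^{O(\epsilon)} \left( \sum_\tau \int_S |f_\tau|^2 \right)^{1/2}$ since $K$ depends only on $\epsilon$.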
So we can assume Theorem \ref{maintech} holds for radii $\le R/2$ or for functions $g$ with $\sum_\tau \int |g_\tau|^2 \le (1/2) \sum_\tau \int |f_\tau|^2$. If $\mu \alpha \ge 10^{-6}$, the conclusion of Theorem \ref{maintech} is also trivial, because the factor $R^{\delta_{trans} \log(K^{\epsilon} \alpha \mu)}$ is so large. We chose $K(\epsilon) = e^{\epsilon^{-10}}$, and so the exponent $\epsilon^6 \log (K^\epsilon 10^{-6}) \gtrsim \epsilon^{-3}$. If $\epsilon$ is small enough, the factor $R^{\delta_{trans} \log(K^{\epsilon} \alpha \mu)}$ is at least $R^{1000}$, and then the bound is trivially true. So we can also assume that $\mu \alpha \le 10^{-6}$.
We decompose our main integral into pieces in the cells and a piece coming from the walls between cells:
$$\int_{B_R} \Br_\alpha Ef^{3.25} = \sum_i \int_{B_R \cap O_i'} \Br_\alpha Ef^{3.25} + \int_{B_R \cap W} \Br_\alpha Ef^{3.25}.$$
If the cellular term dominates, then we proceed as follows. Since $\int_{B_R \cap O_i} \Br_\alpha Ef^{3.25}$ is essentially independent of $i$, there must be $\sim D^3$ different cells $O_i'$ so that
\begin{equation} \label{signifi}
\int_{B_R \cap O_i'} \Br_\alpha Ef^{3.25} \sim D^{-3} \int_{B_R} \Br_\alpha Ef^{3.25}.
\end{equation}
For each such $i$, applying Lemma \ref{declemma1}, we see that
$$ \int_{B_R} \Br_\alpha Ef^{3.25} \lesssim D^3 \int_{B_R \cap O_i'} \Br_\alpha Ef^{3.25} \lesssim D^3 \int_{B_R} \Br_{2 \alpha} Ef_i^{3.25} + R^{-1000} \sum_\tau \| f_{\tau} \|_2^{3.25}. $$
(The last term is a minor error term coming from Lemma \ref{declemma1}. If that term dominates, then we get the desired bound for $\int_{B_R} \Br_\alpha Ef^{3.25}$ immediately.)
Next we consider $\sum_\tau \int |f_{\tau, i}|^2$. We noted above that each tube $T$ lies in $\mathbb{T}_i$ for at most $D+1$ values of $i$. By Lemma \ref{wavepack1}, we know that $\sum_i \int |f_{\tau, i}|^2 \lesssim D \int |f_\tau|^2$. Now we can choose a particular $i$ which obeys equation \ref{signifi} and so that
$$ \sum_\tau \int |f_{\tau, i}|^2 \lesssim D^{-2} \sum_\tau \int |f_\tau|^2. $$
We claim that we can apply Theorem \ref{maintech} to $f_i = \sum_\tau f_{\tau, i}$. By Proposition \ref{wavepack}, we know that $\supp f_{\tau, i}$ lies in a tiny neighborhood of $\tau$. Therefore, the new multiplicity is only slightly larger than $\mu$; it is certainly at most $2 \mu$. By Lemma \ref{wavepack2}, we know that for any $\omega \in S$,
$$\oint_{B(\omega, R^{-1/2}) \cap S} |f_{\tau, i}|^2 \lesssim \oint_{B(\omega, 10 R^{-1/2}) \cap S} |f_\tau|^2 \lesssim 1. $$
\noindent Therefore, after multiplying $f_i$ by a constant, it obeys all the assumptions of Theorem \ref{maintech}. Moreover, $\sum_\tau \int |f_{\tau, i}|^2 \le (1/2) \sum_\tau \int |f_\tau|^2$. By induction on $\sum_\tau \int |f_\tau|^2$, we can apply Theorem \ref{maintech} to $f_i$. When we do so, we get the following bound:
$$ \int_{B_R} \Br_\alpha Ef^{3.25} \lesssim D^3 \int_{B_R} \Br_{2 \alpha} Ef_i^{3.25} \lesssim $$
$$ D^3 C_\epsilon R^\epsilon R^{\delta_{trans} \log (4 \alpha \mu K^\epsilon) } \left( \sum_\tau \int |f_{\tau, i}|^2 \right)^{(3/2) + \epsilon} .$$
Since $\sum_\tau \int |f_{\tau, i}|^2 \lesssim D^{-2} \sum_\tau \int |f_\tau|^2$, putting everything together we get:
$$ \int_{B_R} \Br_\alpha Ef^{3.25} \le \left(C D^{- 2 \epsilon} R^{C \delta_{trans} } \right) C_\epsilon R^\epsilon R^{\delta_{trans} \log (\alpha \mu K^\epsilon) } \left(\sum_\tau \int |f_\tau|^2 \right)^{(3/2) + \epsilon}. $$
To close the induction, it just suffices to prove that the term in parentheses is $\le 1$. This term is at most $R^{- \delta_{deg} \epsilon + C \delta_{trans}}$. Since $\delta_{deg} = \epsilon^4$ and $\delta_{trans} = \epsilon^6$, the exponent of $R$ is negative and the induction closes.
Returning to the decomposition $\int_{B_R} \Br_\alpha Ef^{3.25} = \sum_i \int_{B_R \cap O_i'} \Br_\alpha Ef^{3.25} + \int_{B_R \cap W} \Br_\alpha Ef^{3.25}$, let us now suppose that the contribution from the cell walls dominates. By Lemma \ref{declemma2}, we now have
$$ \int_{B_R} \Br_\alpha Ef^{3.25} \lesssim $$
$$\sum_{j, I} \int_{B_j} \Br_{2 \alpha} Ef_{I, j, trans}^{3.25} + \sum_j K^{100} \int_{B_j} \Bil (E{f}_{j, tang}) ^{3.25} + O(R^{-1000} \sum_\tau \| f_\tau \|_2^{3.25}). $$
If the final $O$-term dominates, then the conclusion holds trivially, using the fact that $\sum_\tau \| f_\tau \|_2^2 \lesssim 1$.
By Proposition \ref{tangtermbound}, we know that the tangential term is bounded by $R^{O(\delta)} (\sum_\tau \int |f_\tau|^2)^{3/2} \le R^\epsilon (\sum_\tau \int |f_\tau|^2)^{3/2}$. So if the tangential term dominates we are also done. Therefore, we are left with the case where
\begin{equation} \label{transversecase}
\int_{B_R} \Br_\alpha Ef^{3.25} \lesssim \sum_{j, I} \int_{B_j} \Br_{2 \alpha} E f_{I, j, trans}^{3.25}.
\end{equation}
We claim that we can apply Theorem \ref{maintech} to each integral on the right-hand side. The ball $B_j$ has radius $R^{1-\delta}$, so by induction on the radius Theorem \ref{maintech} applies. We have to check that $f_{I, j, trans}$ satisfies the hypotheses. By Proposition \ref{wavepack}, $\supp f_{\tau, I, j, trans}$ lies in a small neighborhood of $\tau$, a slightly larger cap. As above, the multiplicity of the new covering with slightly larger caps is at most $2 \mu$. By Lemma \ref{wavepack2}, we have for any $\omega \in S$,
$$ \oint_{B(\omega, R^{-1/2}) \cap S} |f_{\tau, I, j, trans}|^2 \lesssim \oint_{B(\omega, 10 R^{-1/2}) \cap S} |f_\tau|^2 \lesssim 1. $$
\noindent Therefore, we may apply Theorem \ref{maintech} to each of the integrals on the right-hand side of Equation \ref{transversecase}. We get the following upper bound:
$$ \int_{B_j} \Br_{2 \alpha} Ef_{I, j, trans}^{3.25} \lesssim C_\epsilon R^{(1 - \delta) \epsilon} R^{\delta_{trans} \log (4 \alpha \mu K^\epsilon)} (\sum_{\tau \in I} \int |f_{\tau, j, trans}|^2)^{(3/2) + \epsilon}.$$
To bound $\int_{B_R} \Br_\alpha Ef^{3.25}$, we have to sum over all $j, I$. Now the crucial point is Lemma \ref{transbound}, which tells us that a given tube $T$ lies in $\mathbb{T}_{j, trans}$ for at most $\Poly(D)$ values of $j$. (The number of different values of $I$ is only a constant depending on $\epsilon$.) Therefore, by Lemma \ref{wavepack1},
$$ \sum_{j} \int |f_{\tau, j, trans}|^2 \lesssim \Poly(D) \int |f_\tau|^2, $$
and hence
$$ \sum_{j, I} (\sum_{\tau \in I} \int |f_{\tau, j, trans}|^2)^{(3/2) + \epsilon} \lesssim \Poly(D) (\sum_\tau \int |f_\tau|^2)^{(3/2) + \epsilon}. $$
Summing over $j, I$ and plugging this in, we get the following bound:
$$ \int_{B_R} \Br_\alpha Ef^{3.25} \le \Poly(D) C_\epsilon R^{(1 - \delta) \epsilon} R^{\delta_{trans} \log (4 \alpha \mu K^{\epsilon})} (\sum_\tau \int |f_\tau|^2 )^{(3/2) + \epsilon} = $$
$$ = \left( C \Poly(D) R^{-\delta \epsilon} R^{C \delta_{trans}} \right) C_\epsilon R^{\epsilon} R^{\delta_{trans} \log( \alpha \mu K^\epsilon)} (\sum_\tau \int |f_\tau|^2 )^{(3/2) + \epsilon}. $$
To close the induction, we just have to check that the term in parentheses is less than 1. For sufficiently large $R$, this term is at most $R^{C \delta_{deg} - \delta \epsilon + C \delta_{trans} }$. Since $\delta = \epsilon^2$, $\delta_{deg} = \epsilon^4$, and $\delta_{trans} = \epsilon^6$, the exponent of $R$ is negative and the induction closes.
We have now finished carrying out the induction. It only remains to prove the bound for the tangential terms in Proposition \ref{tangtermbound}.
\subsection{The estimate for the tangential terms}
In this subsection, we prove Proposition \ref{tangtermbound}. In other words, we have to prove the following estimate:
$$\int_{B_j \cap W} \Bil (E{f}_{j, tang}) ^{3.25} \lesssim R^{O(\delta)} \left(\sum_\tau \int |f_\tau|^2 \right)^{3/2}. $$
Cover $B_j \cap W$ with cubes $Q$ of side length $R^{1/2}$. For each cube $Q$, we let $\mathbb{T}_{j, tang, Q}$ be the set of tubes in $\mathbb{T}_{j, tang}$ that intersect $Q$. On $Q$, we have
$$ Ef_{\tau, j, tang} = \sum_{T \in \mathbb{T}_{j, tang, Q}} Ef_{\tau, T} + O(R^{-990} \| f_{\tau} \|_2). $$
The terms of the form $O(R^{-990} \| f_\tau \|_2)$ are always negligible in our calculations, and in this subsection, we will abbreviate them by writing
\begin{equation} \label{Ef_tau}
Ef_{\tau, j, tang} = \sum_{T \in \mathbb{T}_{j, tang, Q}} Ef_{\tau, T} + \neglig.
\end{equation}
Because of the definition of $\mathbb{T}_{j, tang}$, Definition \ref{deftang}, we claim that all the tubes in $\mathbb{T}_{j, tang, Q}$ are nearly coplanar. Since $Q \cap W$ is non-empty, there must be a point $z \in Z(P)$ in the $R^{(1/2)+\delta}$-neighborhood of $Q$. For any $T \in \mathbb{T}_{j,tang, Q}$, $z \in 10T \cap 2B_j \cap Z(P)$. Also, since $P$ is a product of non-singular polynomials, the non-singular points are dense in $Z(P)$, and so we can assume that $z$ is non-singular. Now by Definition \ref{deftang}, the angle between $v(T)$ and $T_z Z(P)$ is $\le R^{-(1/2) + 2 \delta} \le R^{-(1/2) + O(\delta)}$.
Using this observation and the C\'ordoba $L^4$ argument, we get a bilinear estimate on $Q$:
\begin{lemma} \label{locbilinear} If $\tau_1$ and $\tau_2$ are non-adjacent caps, then
$$ \int_{Q} |Ef_{\tau_1, j, tang}|^2 |Ef_{\tau_2, j, tang}|^2 \lesssim R^{O(\delta)} R^{-1/2} (\sum_{T_1 \in \mathbb{T}_{j, tang, Q}} \| f_{\tau_1, T_1} \|_2^2 ) (\sum_{T_2 \in \mathbb{T}_{j, tang, Q}} \| f_{\tau_2, T_2} \|_2^2 ) + \neglig . $$
\end{lemma}
\begin{proof}
On $Q$, we have
$$Ef_{\tau, j, tang} = \sum_{T \in \mathbb{T}_{j, tang, Q}} Ef_{\tau, T} + \neglig. $$
We let $\eta_Q$ be a smooth bump function which is equal to 1 on $Q$ and with support in $10 Q$. (We can assume that $| \hat \eta_Q (\omega) | \lesssim \Vol(Q) (1 + |\omega| R^{1/2})^{-10^6 \delta^{-1}}. $) Now we can bound
$$ \int_Q |Ef_{\tau_1, j, tang}|^2 |Ef_{\tau_2, j, tang}|^2 \le
\sum_{T_1, \bar T_1, T_2, \bar T_2 \in \mathbb{T}_{j, tang, Q}} \int \eta_Q Ef_{\tau_1, T_1} \overline{Ef_{\tau_1, \bar T_1}} Ef_{\tau_2, T_2} \overline{Ef_{\tau_2, \bar T_2}} + \neglig. $$
Each of the summands on the right-hand side we can evaluate with Plancherel, giving
\begin{equation} \label{cordobasum}
\sum_{T_1, \bar T_1, T_2, \bar T_2 \in \mathbb{T}_{j, tang, Q}} \int_{\mathbb{R}^3} (\hat \eta_Q * f_{\tau_1, T_1} \dvol_S * f_{\tau_2, T_2} \dvol_S) \overline{(f_{\tau_1, \bar T_1} \dvol_S * f_{\tau_2, \bar T_2} \dvol_S )} .
\end{equation}
Only very few of these terms are significant. For each tube $T$, let $\theta(T)$ denote the cap $\theta$ so that $T \in \mathbb{T}(\theta)$, and let $\omega(T)$ be the center of $\theta(T)$. The measure $f_{\tau, T} \dvol_S$ is supported on $3 \theta(T)$, and so the support lies in the $O(R^{1/2})$-neighborhood of $\omega(T)$. Because of the rapid decay of $\hat \eta_Q$, a term in the sum above is negligible unless
\begin{equation} \label{T_1+T_2=bar}
\omega(T_1) + \omega(T_2) = \omega(\bar T_1) + \omega (\bar T_2) + O(R^{- (1/2) + \delta}).
\end{equation}
Next we claim that equation \ref{T_1+T_2=bar} forces $\omega(T_1)$ to be $O(R^{-(1/2) + \delta})$ close to $\omega(\bar T_1)$, and the same for $\omega(T_2)$ and $\omega(\bar T_2)$. By the near-coplanarity observation above, the directions $v(T_i)$ and $v(\bar T_i)$ all lie within an angle $R^{-(1/2) + O(\delta)}$ of a common plane $\pi(Q)$. Recall that $v(T_i)$ is essentially the unit normal vector to $S$ at $\omega(T_i)$. Therefore, up to small errors, at each point $\vec \omega(T_i), \vec \omega(\bar T_i) \in B^2(1)$, $\nabla h$ satisfies a linear equation:
\begin{equation}
m \cdot \nabla h(\vec \omega) + b = 0,
\end{equation}
\noindent for a vector $m \in \mathbb{R}^2$ with $|m| \le 1$, and a number $b$ with $|b| \lesssim 1$.
This equation defines a curve in $B^2(1)$. If $h$ were exactly quadratic, then this curve would be a straight line. Since $S$ satisfies Conditions \ref{CondSnice}, we know that $S$ is almost quadratic: the Hessian of $h$ obeys $1/2 \le \partial^2 h \le 2$, and the third derivative obeys $| \partial^3 h | \le 10^{-9}$ pointwise. Therefore, this curve is almost a straight line. After rotating in the $\omega_1, \omega_2$ plane, it can be given as a graph $\omega_2 = g(\omega_1)$, where $| \nabla g|, | \nabla^2 g|$ are at most $10^{-6}$.
Next we write $j(\omega_1) = h( \omega_1, g(\omega_1))$. Because $\partial_1^2 h \ge 1/2$ and $|\nabla g|, |\nabla^2 g|$ are small, it is straightforward to check with the chain rule that
$$ \partial^2 j \ge 1/4. $$
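To spell out the chain rule computation: writing $g' = \partial g / \partial \omega_1$,
$$ j''(\omega_1) = \partial_{11} h + 2 \partial_{12} h \, g' + \partial_{22} h \, (g')^2 + \partial_{2} h \, g''. $$
The first term is at least $1/2$; since $|\partial^2 h| \le 2$, $|\partial_2 h| \lesssim 1$ on $B^2(1)$, and $|g'|, |g''| \le 10^{-6}$, the remaining three terms have total size at most $10^{-5}$, so $\partial^2 j \ge 1/4$.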
Let $\omega_1(T_i)$ be the $\omega_1$-coordinate of $\omega(T_i)$. Equation \ref{T_1+T_2=bar} is equivalent to the following pair of equations:
\begin{equation} \label{sumsequal}
\omega_1(T_1) + \omega_1(T_2) = \omega_1(\bar T_1) + \omega_1(\bar T_2) + O(R^{-(1/2) + \delta}).
\end{equation}
\begin{equation} \label{jsumsequal}
j(\omega_1(T_1)) + j(\omega_1(T_2)) = j(\omega_1(\bar T_1)) + j(\omega_1(\bar T_2)) + O(R^{-(1/2) + \delta}).
\end{equation}
Equation \ref{sumsequal} implies that the $\omega_1(T_i)$ and the $\omega_1(\bar T_i)$ have essentially the same midpoint. Without loss of generality, we can assume that
$\omega_1(\bar T_1) < \omega_1(T_1) < \omega_1(T_2) < \omega_1(\bar T_2)$.
Also, since $\omega(T_1)$ lies in (or very near) $\tau_1$, and $\omega(T_2)$ lies in or very near $\tau_2$, $|\omega_1(T_1) - \omega_1(T_2)| \gtrsim K^{-1}$. Let $I_1$ be the interval $[\omega_1(\bar T_1), \omega_1(T_1)]$ and $I_2$ be the interval $[\omega_1(T_2), \omega_1(\bar T_2)]$. By Equation \ref{sumsequal}, the lengths of $I_1$ and $I_2$ are equal up to an error of $O(R^{-(1/2)+\delta})$.
Because of the bound $j'' \ge 1/4$, we see that for any $s_1 \in I_1$ and $s_2 \in I_2$, $j'(s_2) - j'(s_1) \ge (1/4) K^{-1}$. Using this bound and the fundamental theorem of calculus, we estimate that
$$ |I_1| + |I_2| \lesssim (\int_{I_2} j') - (\int_{I_1} j') + O(R^{-(1/2) + \delta}) = $$
$$ = j(\omega_1(\bar T_2)) - j(\omega_1(T_2)) - j(\omega_1(T_1)) + j(\omega_1(\bar T_1)) + O(R^{-(1/2) + \delta}) = O(R^{-(1/2) + \delta}). $$
This finishes the proof that $|\omega(T_i) - \omega(\bar T_i)| \lesssim R^{-(1/2)+\delta}$ for $i = 1,2$.
Next we observe that for each $\theta$, there are only $O(1)$ tubes of $\mathbb{T}(\theta)$ that intersect $Q$, and so there are only $O(1)$ tubes of $\mathbb{T}(\theta)$ in $\mathbb{T}_{j, tang, Q}$. Therefore, line \ref{cordobasum} is bounded by
\begin{equation} \label{cordobasum2}
R^{O(\delta)} \sum_{T_1, T_2 \in \mathbb{T}_{j, tang, Q}} \int | f_{\tau_1, T_1} \dvol_S * f_{\tau_2, T_2} \dvol_S |^2.
\end{equation}
Since $\theta(T_1)$ lies in $\tau_1$ and $\theta(T_2)$ lies in $\tau_2$, the angle between the tangent space of $S$ on $\theta(T_1)$ and on $\theta(T_2)$ is $\gtrsim K^{-1}$. We claim that this angle bound leads to the following inequality:
\begin{equation} \label{convolbound}
\int_{\mathbb{R}^3} | f_{\tau_1, T_1} \dvol_S * f_{\tau_2, T_2} \dvol_S |^2 \lesssim R^{-1/2} \| f_{\tau_1, T_1} \|_2^2 \|f_{\tau_2, T_2} \|_2^2.
\end{equation}
We sketch the proof of the claim. Let us abbreviate $f_{\tau_1, T_1} \dvol_S$ by $f_1 \dvol_{S_1}$ and $f_{\tau_2, T_2} \dvol_S$ by $f_2 \dvol_{S_2}$, where $S_i$ is a cap containing $\supp f_i$ with radius $\sim R^{-1/2}$. Because of the angle condition between $S_1$ and $S_2$, we can foliate $S_1$ by curves $\gamma_s$, $s \in [0, R^{-1/2}]$ so that the tangent direction of $\gamma_s$ is quantitatively transverse to the tangent plane of $S_2$, and so that $\dvol_{S_1} = J \cdot \dvol_{\gamma_s} ds$ for a Jacobian factor $J \sim 1$.
We can expand our original function $f_1 \dvol_{S_1} * f_2 \dvol_{S_2}$ as an integral:
$$ f_1 \dvol_{S_1} * f_2 \dvol_{S_2} = \int_0^{R^{-1/2}} (J f_1 \dvol_{\gamma_s} * f_2 \dvol_{S_2} ) ds. $$
Now by Minkowski's inequality and Cauchy-Schwarz,
\begin{equation} \label{minkow}
\| f_1 \dvol_{S_1} * f_2 \dvol_{S_2} \|_2^2 \le \left( \int_0^{R^{-1/2}} \| J f_1 \dvol_{\gamma_s} * f_2 \dvol_{S_2} \|_2 ds \right)^2 \le R^{-1/2} \int_0^{R^{-1/2}} \| J f_1 \dvol_{\gamma_s} * f_2 \dvol_{S_2} \|_2^2 ds.
\end{equation}
By a change of coordinates argument,
\begin{equation} \label{changecoord}
\int_{\mathbb{R}^3} |J f_1 \dvol_{\gamma_s} * f_2 \dvol_{S_2} |^2 \sim \int_{\gamma_s} |f_1|^2 \int_{S_2} |f_2|^2.
\end{equation}
Plugging Equation \ref{changecoord} into Equation \ref{minkow}, we get
\begin{equation} \| f_1 \dvol_{S_1} * f_2 \dvol_{S_2} \|_2^2 \le
R^{-1/2} \int_0^{R^{-1/2}} (\int_{\gamma_s} |f_1|^2) ds \int_{S_2} |f_2|^2 \lesssim
R^{-1/2} \int_{S_1} |f_1|^2 \int_{S_2} |f_2|^2.
\end{equation}
This finishes the proof of Equation \ref{convolbound}. Now using Equation \ref{convolbound} to bound line \ref{cordobasum2}, we see that
$$ \int_{Q} |Ef_{\tau_1, j, tang}|^2 |Ef_{\tau_2, j, tang}|^2 \lesssim R^{O(\delta)} R^{-1/2} (\sum_{T_1 \in \mathbb{T}_{j, tang, Q}} \| f_{\tau_1, T_1} \|_2^2 ) (\sum_{T_2 \in \mathbb{T}_{j, tang, Q}} \| f_{\tau_2, T_2} \|_2^2 ) + \neglig . $$
\end{proof}
Next we give an interpretation of Lemma \ref{locbilinear}. We would like to think of $|Ef_{\tau, T}|$ as well approximated by $\chi_T \| f_{\tau, T} \|_1 \lesssim \chi_T R^{-1/2} \| f_{\tau, T} \|_2$. Let $S_{\tau, j, tang}$ be a corresponding square function defined as follows:
$$ S_{\tau, j, tang} := \left( \sum_{T \in \mathbb{T}_{j, tang}} (\chi_T R^{-1/2} \| f_{\tau, T} \|_2)^2 \right)^{1/2}. $$
Lemma \ref{locbilinear} immediately implies that our integral over $Q$ is controlled by the integral with the corresponding square functions:
\begin{equation}
\int_{Q} |Ef_{\tau_1, j, tang}|^2 |Ef_{\tau_2, j, tang}|^2 \lesssim R^{O(\delta)} \int_Q S_{\tau_1, j, tang}^2 S_{\tau_2, j, tang}^2+ \neglig
\end{equation}
Summing over all $Q \subset B_j \cap W$, we get the following bound:
$$ \int_{B_j \cap W} |Ef_{\tau_1, j, tang}|^2 |Ef_{\tau_2, j, tang}|^2 \lesssim R^{O(\delta)} \int_{B_j \cap W} S_{\tau_1, j, tang}^2 S_{\tau_2, j, tang}^2+ \neglig. $$
The last integral involving square functions is easy to bound. Expanding the definition of the square function, we get:
$$\le \sum_{T_1, T_2 \in \mathbb{T}_{j, tang}} R^{-2} \| f_{\tau_1, T_1} \|_2^2 \|f_{\tau_2, T_2} \|_2^2 \int \chi_{T_1} \chi_{T_2}. $$
Since $T_1$ comes from $\tau_1$ and $T_2$ comes from $\tau_2$, the angle between $v(T_1)$ and $v(T_2)$ is $\gtrsim K^{-1}$, and so the last integral is $\lesssim K R^{3/2}$. Therefore, the last sum is
$$ \lesssim R^{-1/2} (\sum_{T_1 \in \mathbb{T}_{j, tang}} \| f_{\tau_1, T_1} \|_2^2) (\sum_{T_2 \in \mathbb{T}_{j, tang}} \| f_{\tau_2, T_2} \|_2^2). $$
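The intersection bound used here is the standard volume estimate for transverse tubes. As a sketch (up to $R^{O(\delta)}$ factors, with $r \sim R^{1/2}$ the tube radius): if the directions of $T_1$ and $T_2$ make an angle $\theta \gtrsim K^{-1}$, then $T_1 \cap T_2$ is contained in a box with dimensions roughly $r \times r \times r \theta^{-1}$, and so
$$ \int \chi_{T_1} \chi_{T_2} = | T_1 \cap T_2 | \lesssim r^2 \cdot r \theta^{-1} \lesssim K R^{3/2}. $$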
Using Proposition \ref{wavepack}, the functions $\{ f_{\tau, T} \}_{T \in \mathbb{T}}$ are almost orthogonal, and we see
$$ \sum_{T \in \mathbb{T}_{j, tang}} \| f_{\tau, T} \|_2^2 \lesssim \| f_{\tau, j, tang} \|_2^2 + \neglig. $$
Altogether, we have the bound:
\begin{equation*}
\int_{B_j \cap W} |Ef_{\tau_1, j, tang}|^2 |Ef_{\tau_2, j, tang}|^2 \lesssim R^{O(\delta)} R^{-1/2} \| f_{\tau_1, j, tang} \|_2^2 \| f_{\tau_2, j, tang} \|_2^2 + \neglig.
\end{equation*}
This implies the following $L^4$-bound on the bilinear term:
\begin{equation}
\| \Bil (E{f}_{j, tang}) \|_{L^4(B_j \cap W)} \lesssim R^{O(\delta)} R^{-1/8} (\sum_\tau \| f_{\tau, j, tang} \|^2_2)^{1/2} + \neglig.
\end{equation}
On the other hand, we can easily get an $L^2$ bound and then interpolate to obtain bounds for the $L^p$ norm for any $2 \le p \le 4$. A standard estimate says that
$$ \| Ef \|_{L^2(B_R)} \lesssim R^{1/2} \| f \|_2. $$
(See for instance Lemma 2.1 in Lecture Notes 7 in \cite{Tnotes}.)
From this it easily follows that
\begin{equation}
\| \Bil (E{f}_{j, tang}) \|_{L^2(B_j \cap W)} \lesssim R^{1/2} (\sum_\tau \| f_{\tau, j, tang} \|^2_2)^{1/2}
\end{equation}
Interpolating between these using H\"older's inequality, we get for all $2 \le p \le 4$,
\begin{equation} \label{bilinbounda}
\int_{B_j \cap W} | \Bil (E{f}_{j, tang}) |^p \lesssim R^{O(\delta)} R^{\frac{5}{2} - \frac{3}{4}p} (\sum_\tau \| f_{\tau, j, tang} \|_2^2)^{p/2}.
\end{equation}
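The exponent in Equation \ref{bilinbounda} comes from a routine H\"older computation, which we record for the reader. Abbreviate $F = \Bil(E f_{j, tang})$ and $\Sigma = \sum_\tau \| f_{\tau, j, tang} \|_2^2$. H\"older's inequality gives $\| F \|_p \le \| F \|_2^{\theta} \| F \|_4^{1-\theta}$ with $\frac{1}{p} = \frac{\theta}{2} + \frac{1-\theta}{4}$, so $\theta = \frac{4}{p} - 1$, $p \theta = 4 - p$, and $p (1 - \theta) = 2p - 4$. The two bounds above then give, up to factors of $R^{O(\delta)}$ and negligible terms,
$$ \int_{B_j \cap W} |F|^p \le \| F \|_{L^2(B_j \cap W)}^{p \theta} \| F \|_{L^4(B_j \cap W)}^{p(1-\theta)} \lesssim \left( R^{1/2} \right)^{4-p} \left( R^{-1/8} \right)^{2p-4} \Sigma^{p/2} = R^{\frac{5}{2} - \frac{3}{4} p} \Sigma^{p/2}, $$
since $\frac{4-p}{2} - \frac{2p-4}{8} = \frac{5}{2} - \frac{3}{4} p$.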
Next we consider $\| f_{\tau, j, tang} \|_2$. On the one hand, by Lemma \ref{wavepack2}, we know that $\| f_{\tau, j, tang} \|_2 \lesssim \| f_\tau \|_2$. We can get a different bound by taking advantage of the small number of directions of tubes in $\mathbb{T}_{j, tang}$.
Lemma \ref{tangbound} tells us that $\mathbb{T}_{j, tang}$ contains tubes in only $R^{O(\delta)} R^{1/2}$ different directions. Therefore, each function $f_{\tau, j, tang}$ is supported on $R^{O(\delta)} R^{1/2}$ caps $\theta$. On each cap, Lemma \ref{wavepack2} gives the bound
$$\oint_\theta | f_{\tau, j, tang} |^2 \lesssim \oint_{10 \theta} |f_\tau|^2 \lesssim 1. $$
\noindent Adding the contribution of $R^{(1/2) + O(\delta)}$ caps, we get the bound $\int |f_{\tau, j, tang}|^2 \lesssim R^{O(\delta)} R^{-1/2}$. Combining these two bounds for $\| f_{\tau, j, tang} \|_2$, we get for $p \ge 3$:
$$ (\sum_\tau \| f_{\tau, j, tang} \|_2^2)^{p/2} \le R^{O(\delta)} R^{\frac{3}{4} - \frac{p}{4}} (\sum_\tau \| f_{\tau, j, tang} \|_2^2)^{3/2}. $$
Substituting this bound into Equation \ref{bilinbounda}, we get:
$$ \int_{B_j \cap W} | \Bil (E{f}_{j, tang}) |^p \lesssim R^{O(\delta)} R^{\frac{13}{4} - p} (\sum_\tau \| f_{\tau} \|_2^2)^{3/2}. $$
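As a check on the bookkeeping: the power of $R$ in the last display is the sum of the exponents from the two previous bounds, $\left( \frac{5}{2} - \frac{3}{4} p \right) + \left( \frac{3}{4} - \frac{p}{4} \right) = \frac{13}{4} - p$, which vanishes exactly at $p = 13/4$, leaving only the $R^{O(\delta)}$ loss.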
Taking $p = 3.25 = 13/4$, this estimate is the bound in Proposition \ref{tangtermbound}.
\section{Estimates about the geometry of tubes and algebraic surfaces}
In this section, we prove Lemmas \ref{transbound} and \ref{tangbound}. These Lemmas estimate how tubes interact with an algebraic surface. Each Lemma generalizes a simple statement about lines intersecting an algebraic surface.
A line can transversally intersect a degree $D$ surface $Z(P)$ in at most $D$ points. Lemma \ref{transbound} says that a tube $T$ can belong to $\mathbb{T}_{j, trans}$ for at most $\Poly(D)$ values of $j$: there are $\le \Poly(D)$ balls $B_j$ where $T$ passes through $W$ transversally.
The directions of the lines in an algebraic surface $Z(P)$ all lie in an algebraic curve. Let $\mathbb{RP}^2$ denote the points at infinity in $\mathbb{R}^3$ -- also the set of directions of lines in $\mathbb{R}^3$. The projective closure of $Z(P)$ intersects $\mathbb{RP}^2$ in an algebraic curve. If a line $l$ lies in $Z(P)$, then the direction of the line must lie in this curve in $\mathbb{RP}^2$. Lemma \ref{tangbound} says that the tubes of $\mathbb{T}_{j, tang}$ contain tubes from at most roughly $R^{1/2}$ of the $R$ caps $\theta$. If $S$ is a sphere, this is roughly the number of caps that would intersect an algebraic curve of degree $D$ in $S$.
Transferring ideas from lines to tubes is sometimes straightforward and sometimes hard. Some of the methods that we use here come from the paper \cite{Gu2}.
\subsection{Bounding transversal intersections}
We begin with the estimate for transversal tubes, Lemma \ref{transbound}. Suppose that $T \in \mathbb{T}$. Recall from Definition \ref{deftrans} that if $T \in \mathbb{T}_{j, trans}$, then there is a non-singular point $z \in 10 T \cap 2 B_j \cap Z(P)$ so that $\Angle( v(T), T_z Z) > R^{-(1/2) + 2 \delta}$. We have to prove that any tube $T \in \mathbb{T}$ lies in $\mathbb{T}_{j, trans}$ for $\le \Poly(D)$ values of $j$. We state a slightly more general result.
\begin{lemma} \label{geomtransbound} Suppose that
\begin{itemize}
\item $T$ is a finite cylinder in $\mathbb{R}^3$ with radius $\rho$ and arbitrary length.
\item $a \in (0, 1/10)$ denotes an angle.
\item $T$ is subdivided into tube segments of length $\ge \rho a^{-1}$.
\item $Q$ is a non-singular polynomial of degree $D$.
\item $Z_{\ge a}(Q) := \{ z \in Z(Q) | \Angle(v(T), T_z Z(Q)) \ge a \}. $
\end{itemize}
Then $Z_{\ge a}(Q) \cap T$ is contained in $\lesssim D^3$ of the tube segments of $T$.
\end{lemma}
The reader may want to imagine $\rho =1$ and $a = 1/10$. (The general case can be reduced to this case by a change of coordinates. On the other hand, it is just as easy to prove the lemma for all $\rho$ and $a$ as stated, so we give the proof in the general case.)
To see that this Lemma implies Lemma \ref{transbound}, we first note that $P$ is a product of non-singular irreducible polynomials. For each of these polynomials, we apply the Lemma above to $10T$, taking $\rho = 10 R^{(1/2)+\delta}$, $a = R^{-(1/2)+2 \delta}$, and the length of the segments $\rho a^{-1} = 10 R^{1 - \delta}$. So each segment intersects $O(1)$ balls $B_j$. (This step motivates the choice of angle $R^{-(1/2) + 2 \delta}$ in the definitions of $\mathbb{T}_{j, tang}$ and $\mathbb{T}_{j, trans}$.)
There is probably a version of this lemma in any number of dimensions, but we will focus on 3 dimensions. In fact, we'll warm up by proving a 2-dimensional version of the lemma, and then go on to the more difficult 3-dimensional case. We begin with a lemma that holds in any number of dimensions.
If $T$ is a tube in $\mathbb{R}^n$ in direction $v(T)$, and $Q$ is a non-singular polynomial on $\mathbb{R}^n$, then we define $Z_{=a}(Q)$ as follows:
$$ Z_{=a}(Q) := \{ z \in Z(Q) | \Angle(v(T), T_z Z(Q)) = a \}. $$
Recall that we defined a polynomial $P$ on $\mathbb{R}^n$ to be non-singular if $\nabla P(x) \not= 0$ for each point $x \in Z(P)$. There is an analogous definition for varieties defined by several polynomials. Suppose that $Q_1, ..., Q_k$ are polynomials on $\mathbb{R}^n$. We say that $Z(Q_1, ..., Q_k)$ is a transverse complete intersection if for each point $x \in Z(Q_1, ..., Q_k)$, the gradients $\nabla Q_1(x), ..., \nabla Q_k(x)$ are linearly independent. In particular, a transverse complete intersection is always a smooth submanifold of dimension $n-k$.
\begin{lemma} \label{Z_=a} Suppose $Q$ is a non-singular polynomial on $\mathbb{R}^n$. For any $a$, $Z_{=a}(Q)$ is a variety $Z(Q, Q_1)$
where $Q_1$ is a polynomial (depending on $Q$ and $a$) of degree $\lesssim \Deg(Q)$. For almost every $a$, $Z(Q, Q_1)$ is a transverse complete intersection. \end{lemma}
\begin{proof}
Suppose $x \in Z(Q)$. Since $Q$ is non-singular, $\nabla Q(x) \not= 0$. The unit normal to $Z(Q)$ at $x$ is given by $\pm \frac{\nabla Q} {|\nabla Q|}$. Therefore, $x \in Z_{=a}(Q)$ if and only if
$$ \frac{\nabla Q} {|\nabla Q|} \cdot v(T) = \pm \sin a. $$
This holds if and only if
$$0 =(\nabla Q \cdot v(T))^2 - \sin^2 (a) |\nabla Q|^2 =: Q_1. $$
We see that $Q_1$ is a polynomial and that $Z_{=a}(Q) = Z(Q, Q_1)$.
Next we want to see that for almost every $a$, for each point $x \in Z_{=a}(Q)$, $\nabla Q$ and $\nabla Q_1$ are linearly independent.
Define a smooth function $f: Z(Q) \rightarrow \mathbb{R}$ by
$$ f = \frac{(\nabla Q \cdot v(T) )^2}{|\nabla Q|^2}. $$
We note that $|\nabla Q|$ never vanishes on $Z(Q)$, so $f$ is $C^\infty$ smooth. Also $f(x) = \sin^2(a)$ if and only if $x \in Z_{=a}(Q)$.
Fix any value of $a$. If $x_0 \in Z_{=a}(Q)$, and $Q_1$ is defined as above, then we claim that $\nabla Q$ and $\nabla Q_1$ are linearly dependent at $x_0$ if and only if $\nabla f(x_0) = 0$. We can see this as follows. Along the manifold $Z(Q)$, the polynomial $Q_1$ is equal to
$$Q_1 (x) = |\nabla Q|^2 ( f(x) - \sin^2(a) ). $$
\noindent At the point $x_0$, $f(x_0) - \sin^2 (a) = 0$. So when we differentiate, we see that
$$ \nabla Q_1(x_0) = |\nabla Q(x_0)|^2 \nabla f(x_0). $$
\noindent We have $\nabla Q(x_0), \nabla Q_1(x_0)$ linearly independent as vectors in $\mathbb{R}^n$ if and only if the restriction of $\nabla Q_1(x_0)$ to $T_{x_0} Z(Q)$ is non-zero, if and only if $\nabla f(x_0) \not= 0$.
Now by Sard's theorem, the set of critical values of $f$ has measure zero. Therefore, for almost every $a$, $\sin^2(a)$ is a regular value of $f$. For any such $a$, $\nabla Q$ and $\nabla Q_1$ are linearly independent at every point of $Z_{=a}(Q) = Z(Q, Q_1)$.
\end{proof}
In this section we will also use Bezout's theorem. We use the following version -- see Theorem 5.2 in \cite{bezout?} for a clean and well-written proof.
\begin{theorem} \label{bezout} If $Z(Q_1, ..., Q_n)$ is a transverse complete intersection in $\mathbb{R}^n$, then the number of points in $Z(Q_1, ..., Q_n)$ is at most $\Deg(Q_1) ... \Deg(Q_n)$.
\end{theorem}
Now we can prove a 2-dimensional version of Lemma \ref{geomtransbound}.
\begin{lemma} \label{geomtransbound2d} Suppose that
\begin{itemize}
\item $T$ is a rectangle in $\mathbb{R}^2$ with width $2 \rho$ and arbitrary length.
\item $a \in (0, 1/10)$ denotes an angle.
\item $T$ is subdivided into rectangular segments of length $\ge \rho a^{-1}$.
\item $Q$ is a non-singular polynomial of degree $D$.
\item $Z_{\ge a}(Q) := \{ z \in Z(Q) | \Angle(v(T), T_z Z(Q)) \ge a \}. $
\end{itemize}
Then $Z_{\ge a}(Q) \cap T$ is contained in $\lesssim D^2$ of the tube segments of $T$.
\end{lemma}
\begin{proof} Using Lemma \ref{Z_=a}, we choose a generic $b \in [(9/10) a, a]$ so that $Z_{=b}$ is a transverse complete intersection. Since we are working in 2 dimensions, $Z_{=b}$ is a set of $\lesssim D^2$ points.
We choose coordinates $x_1, x_2$ so that $T$ is defined by $|x_2| \le \rho$. The $x_1$-axis is parallel to the long side of $T$, so $v(T) = \partial_1$. We say that a point $x \in Z(Q)$ is vertical if $T_x Z(Q)$ is parallel to the $x_2$-axis, or equivalently if $\partial_2 Q = 0$. By making a tiny perturbation of $T$, we can assume that $Z(Q, \partial_2 Q)$ is also a transverse complete intersection, and so consists of $\le D^2$ points.
We divide $2T$ into tube segments corresponding to the original tube segments. We label a tube segment bad if it lies within $10 \rho a^{-1}$ of a vertical point or a point of $Z_{=b}$. The total number of bad segments is $\lesssim D^2$.
Suppose that $x \in Z_{\ge a}(Q) \cap T$ and that $x$ is not in any of the bad tube segments. We consider the connected component of $Z(Q) \cap (2 T \setminus \textrm{ bad segments})$ that contains $x$ -- call this component $Z_{comp}$. The curve $Z_{comp}$ contains no vertical points. Therefore, it is defined as a graph $x_2 = h(x_1)$ for a smooth function $h: I \rightarrow \mathbb{R}$ on some interval $I$. Also, $Z_{comp}$ does not contain any points of $Z_{=b}(Q)$. Since $x \in Z_{\ge a}(Q) \subset Z_{\ge b}(Q)$, we see that $Z_{comp} \subset Z_{> b} (Q)$. Therefore, $|\nabla h| \ge \sin b \ge (1/2) a$ at every point of the interval $I$. Since $\nabla h$ is continuous, its sign must be constant. Therefore the length of $I$ is $\le 10 \rho a^{-1}$, and $Z_{comp}$ can be covered by $\lesssim 1$ tube segments.
It remains to prove that all these components $Z_{comp}$ can be covered by $\lesssim D^2$ tube segments. Some of the components $Z_{comp}$ have boundary $\partial Z_{comp}$ intersecting the boundary of a bad tube segment. Since there are $\lesssim D^2$ bad tube segments, all such components can be covered by $\lesssim D^2$ tube segments. For the other components, $\partial Z_{comp}$ does not intersect the boundary of a bad tube segment. In this case, the two boundary points of $Z_{comp}$ must lie on the top and bottom of the rectangle $T$, so $Z_{comp}$ ``goes across'' the rectangle $T$. For $|h| < \rho$, the line $x_2 = h$ must intersect $Z_{comp}$, and for almost every $h$, it must intersect $Z_{comp}$ transversally. Since any line has at most $D$ transverse intersections with $Z(Q)$, the number of such components is at most $D$. \end{proof}
Our 3-dimensional result, Lemma \ref{geomtransbound}, is more complicated than this 2-dimensional model. In 2 dimensions, $Z_{=b}(Q)$ was a set of points of controlled cardinality. But in 3 dimensions, $Z_{=b}(Q)$ will be a curve. The next step in approaching our 3-dimensional Lemma is to prove a result about algebraic curves in a 3-dimensional tube. We will use this result to control the curve $Z_{=b}(Q)$ (and some other curves).
If $Y = Z(Q_1, Q_2) \subset \mathbb{R}^3$ is a transverse complete intersection, then we define
$$ Y_{\ge a} := \{ y \in Y | \Angle (v(T), T_y Y) \ge a \}. $$
\begin{lemma} \label{geomtransboundcurve3d} Suppose that
\begin{itemize}
\item $T$ is a finite cylinder in $\mathbb{R}^3$ with radius $\rho$ and arbitrary length.
\item $a \in (0, 1/10)$ denotes an angle.
\item $T$ is subdivided into tube segments of length $\ge \rho a^{-1}$.
\item $Y = Z(Q_1, Q_2)$ is a transverse complete intersection.
\item $Q_1$ and $Q_2$ have degree at most $D$.
\end{itemize}
Then $Y_{\ge a} \cap T$ is contained in $\lesssim D^3$ of the tube segments of $T$.
\end{lemma}
We start by studying $Y_{=a}$ and proving a version of Lemma \ref{Z_=a} for curves in $\mathbb{R}^3$.
\begin{lemma} \label{Y_=a} Suppose that $Y = Z(Q_1, Q_2)$ is a transverse complete intersection in $\mathbb{R}^3$ and that $Q_1$ and $Q_2$ have degree at most $D$. Then $Y_{=a}$ is an algebraic variety of the form $Z(Q_1, Q_2, Q_a)$, where $Q_a$ is a polynomial (depending on $Q_1, Q_2$, and $a$) of degree $\lesssim D$. Moreover, for almost every $a$, $Z(Q_1, Q_2, Q_a)$ is a transverse complete intersection. In particular, $Y_{=a}$ consists of $\lesssim D^3$ points.
\end{lemma}
\begin{proof} If $y \in Y = Z(Q_1, Q_2)$, then the vector $\nabla Q_1(y) \times \nabla Q_2(y)$ spans $T_y Y$. Therefore, we have $\Angle ( v(T), T_y Y) = a$ if and only if
$$ 0= \left( (\nabla Q_1 \times \nabla Q_2) \cdot v(T) \right)^2 - \cos^2 (a) | \nabla Q_1 \times \nabla Q_2|^2 =: Q_a. $$
This proves the first claim. Now we argue as in the proof of Lemma \ref{Z_=a}. We define
a function $f: Y \rightarrow \mathbb{R}$ by
$$ f = \frac{\left( (\nabla Q_1 \times \nabla Q_2) \cdot v(T) \right)^2 }{ | \nabla Q_1 \times \nabla Q_2|^2 }, $$
\noindent so that $f(y) = \cos^2 (a)$ if and only if $y \in Y_{=a}$. Fix $a$ and suppose that $y_0 \in Y_{=a}$. We can write $Q_a$ as
$$ Q_a(y) = | \nabla Q_1 \times \nabla Q_2|^2 \left( f(y) - \cos^2 (a) \right). $$
\noindent Since $f(y_0) - \cos^2 (a) = 0$, we see that $\nabla Q_1, \nabla Q_2, \nabla Q_a$ are linearly independent at $y_0$ if and only if $\nabla f(y_0) \not= 0$, where $\nabla f$ is considered as a vector field on $Y$. By Sard's theorem, the critical values of $f$ have measure 0. For almost every $a$, $\cos^2(a)$ is a regular value of $f$, and so $Z(Q_1, Q_2, Q_a)$ is a transverse complete intersection.
If $Z(Q_1, Q_2, Q_a)$ is a transverse complete intersection, then Bezout's theorem implies that it consists of $\lesssim D^3$ points.
\end{proof}
Now we can begin the proof of Lemma \ref{geomtransboundcurve3d}.
\begin{proof} By Lemma \ref{Y_=a}, we can choose $b \in [(9/10) a, a]$ so that $Y_{=b}$ consists of $\lesssim D^3$ points.
Choose coordinates $x_1, x_2, x_3$ so that $T$ is given by the equation $(x_2, x_3) \in B^2(0, \rho)$. In these coordinates $v(T) = \partial_1$. Define $Y_{e_i^\perp}$ to be the set of points $y \in Y$ where $T_y Y \subset e_i^\perp$. $Y_{e_i^\perp}$ is a variety: it is equal to $Z(Q_1, Q_2, (\nabla Q_1 \times \nabla Q_2) \cdot e_i )$. After a small generic rotation of $T$ (and hence the coordinates), we can assume that it is a transverse complete intersection and so it consists of $\lesssim D^3$ points.
We divide $2T$ into tube segments corresponding to the original tube segments. We label a tube segment bad if it lies within $10 \rho a^{-1}$ of a point of $Y_{=b}$ or $Y_{e_i^\perp}$. The total number of bad segments is $\lesssim D^3$.
Suppose that $y \in Y_{\ge a} \cap T$ and that $y$ is not in any of the bad tube segments. We consider the connected component of $Y \cap (2 T \setminus \textrm{ bad segments})$ that contains $y$ -- call this component $Y_{comp}$. The curve $Y_{comp}$ contains no points of $Y_{e_1^\perp}$. Therefore, it is defined as a graph $(x_2, x_3) = (h_2(x_1), h_3(x_1))$ for a smooth function $h = (h_2, h_3): I \rightarrow \mathbb{R}^2$ on some interval $I$. Since $Y_{comp}$ contains no points of $Y_{e_2^\perp}$ or $Y_{e_3^\perp}$, the sign of $\frac{dh_2}{dx_1}$ is constant and the sign of $\frac{dh_3}{dx_1}$ is constant.
Also, $Y_{comp}$ does not contain any points of $Y_{=b}$. Since $y \in Y_{\ge a}$, we see that $Y_{comp} \subset Y_{> b}$. Therefore, at every point of the interval $I$,
\begin{equation} \label{curveangle}
\left| \frac{dh_2}{dx_1} \right| + \left| \frac{dh_3}{dx_1} \right| \ge (1/10) a.
\end{equation}
\noindent Therefore the length of $I$ is $\le 100 \rho a^{-1}$, and $Y_{comp}$ can be covered by $\lesssim 1$ tube segments.
We have to prove that the set of such $Y_{comp}$ can be covered by $\lesssim D^3$ tube segments. Some of the $Y_{comp}$ have a boundary point in the boundary of a bad tube segment. The set of all such $Y_{comp}$ can be covered by $\lesssim D^3$ tube segments.
We consider components $Y_{comp}$ with no boundary point in a bad segment. Recall that $Y_{comp}$ contains a point of $T$, and the boundary of $Y_{comp}$ must lie in $\partial (2T)$. Let $I = (s_1, s_2)$. Then either $| h_2(s_1) - h_2(s_2) | \ge \rho$ (type 2) or $|h_3(s_1) - h_3(s_2)| \ge \rho$ (type 3).
Each component of type 2 intersects many planes of the form $x_2 = h$. For each type 2 component, the plane $x_2 = h$ intersects $Y_{comp}$ transversely for $h$ in a subinterval of $[- 2 \rho, 2 \rho]$ of measure at least $\rho$. By Bezout's theorem, there are at most $D^2$ points where $Y$ intersects a plane transversely, and so the total number of type 2 components is at most $4 D^2$. The number of type 3 components is also at most $4 D^2$.
\end{proof}
Now we can begin the proof of the main result of this subsection, Lemma \ref{geomtransbound}.
\begin{proof} By Lemma \ref{Z_=a}, we can choose an angle $b \in [(9/10)a, a]$ so that $Z_{=b}$ is a transverse complete intersection of polynomials of degree $\lesssim D$.
We remark that if $x \in Z_{=b}$, then $\Angle( v(T), T_x Z_{=b}) \ge b$. We state this as a general observation. Suppose that $Y \subset Z$ is a smooth curve and $x \in Y$. Recall that the angle $\Angle (v(T), T_x Z)$ is defined to be $\min_{0 \not= w \in T_x Z} \Angle( v(T), w)$. Since $T_x Y \subset T_x Z$, we get
\begin{equation} \label{anglecomparison}
\Angle (v(T), T_x Y) \ge \Angle( v(T), T_x Z).
\end{equation}
\noindent In particular, if $x \in Z_{=b}$, we see that $\Angle( v(T), T_x Z_{=b}) \ge \Angle( v(T), T_x Z) = b$. So if $Y = Z_{=b}$, then $Y_{\ge b}$ is all of $Y$.
Now by Lemma \ref{geomtransboundcurve3d}, $Z_{=b} \cap 10T$ can be covered by $\lesssim D^3$ tube segments.
Next we consider some other curves in $Z$. For any non-zero vector $w$, we define
$\Tan_w \subset Z$ by
$$ \Tan_w := \{ x \in Z | w \in T_x Z \} = Z(Q, \nabla Q \cdot w). $$
For almost every $w$, $\Tan_w = Z(Q, \nabla Q \cdot w)$ is a transverse complete intersection. We let $W$ be a set of $O(1)$ unit vectors, including the coordinate vectors $e_1, e_2, e_3$, forming a $1/1000$-net on $S^2$. We will say more about the choice of $W$ below. After a tiny rotation of coordinates, we can assume that $\Tan_w$ is a transverse complete intersection for every $w \in W$. By Lemma \ref{geomtransboundcurve3d}, the $|W|$ curves $(\Tan_w)_{\ge b} \cap 10T$ can be covered by $\lesssim D^3$ tube segments.
We divide $10T$ into tube segments corresponding to the original tube segments. We label a tube segment bad if it lies within $100 \rho a^{-1}$ of a point of $Z_{=b}$ or $(\Tan_w)_{\ge b}$ for some $w \in W$. The total number of bad segments is $\lesssim D^3$.
Suppose that $x \in Z_{\ge a} \cap T$ and that $x$ is not in any of the bad tube segments. We consider the connected component of $Z \cap 2 T \cap B(x, 20 \rho a^{-1})$
that contains $x$ -- call this component $Z_{comp}$. We know that $Z_{comp}$ contains no point of $Z_{=b}$, and so $Z_{comp} \subset Z_{> b}$.
We also know that $Z_{comp}$ contains no point of $(\Tan_w)_{\ge b}$. We claim that $Z_{comp}$ contains no point of $\Tan_w$. Suppose that $x \in Z_{comp} \cap \Tan_w$. Since $x \in Z_{comp}$, we have just seen that $\Angle(v(T), T_x Z) > b$. But by Equation \ref{anglecomparison}, we know that
$$\Angle ( v(T), T_x (\Tan_w)) \ge \Angle (v(T), T_x Z) > b. $$
\noindent Therefore, we would have $x \in (\Tan_w)_{\ge b}$. So we conclude that $Z_{comp}$ contains no point of $\Tan_w$.
Since $W$ includes a $(1/1000)$-net of unit vectors, and $Z_{comp}$ does not intersect $\cup_{w \in W} \Tan_w$, it follows that the tangent plane $T_z Z$ is almost constant as $z$ varies in $Z_{comp}$: the tangent plane can only vary by an angle at most $1/100$.
To finish the proof of Lemma \ref{geomtransbound}, we have to prove the following intersection estimate for $Z_{comp}$. Consider lines parallel to the $x_1$-axis of the form $x_2 = h_2, x_3 =h_3$ with $(h_2, h_3) \in B^2(2 \rho)$. We want to prove that for a subset of $B^2(2 \rho)$ with area $\ge \rho^2$, the corresponding line intersects $Z_{comp}$.
Suppose for a moment that we have such an intersection estimate. We claim that there are at most $4 \pi D$ points of $Z_{\ge a} \cap T$ that lie outside of the bad segments and are pairwise separated by $100 \rho a^{-1}$. To prove the claim, suppose that we had more than $4 \pi D$ such points. Consider the surface $Z_{comp}$ around each of the points -- because the points are separated, these surfaces are disjoint patches of $Z$. By an averaging argument, we can find $(h_2, h_3) \in B^2(2 \rho)$ so that the line $x_2 = h_2$, $x_3 = h_3$ intersects more than $D$ of the surfaces $Z_{comp}$. Also, the set of $(h_2, h_3)$ so that the line $x_2 = h_2$, $x_3 = h_3$ intersects $Z$ non-transversally has measure 0, so we can assume that our line intersects $Z$ transversally at more than $D$ points. This gives a contradiction, proving our claim.
Given this claim, the portion of $Z_{\ge a} \cap T$ outside of the bad segments can be covered by $\lesssim D$ tube segments. Since there are $\lesssim D^3$ bad tube segments, $Z_{\ge a} \cap T$ can be covered by $\lesssim D^3$ tube segments in total. So it only remains to prove the intersection estimate.
Recall that the tangent plane of $Z_{comp}$ is nearly constant. In the main case, $\Angle( v(T), T_z Z) \le (1/10)$ for all $z \in Z_{comp}$. Let us first handle this case. Because $Z_{comp}$ does not intersect $\Tan_{e_3}$, at each point $z \in Z_{comp}$, the tangent plane $T_z Z$ can be given as a graph of the form $x_3 = L_z (x_1, x_2)$. Because $Z_{comp} \subset Z_{\ge b}$, we know that $(9/10)a \le \Angle(v(T), T_z Z)$. Being in the main case, we have also assumed that $\Angle( v(T), T_z Z) \le 1/10$. Therefore, we get the following inequalities about $L_z$:
$$ a/2 \le |L_z(1,0)| \le 1/10.$$
We would also like to know something about $L_z (0,1)$. The tangent plane $T_z Z$ is almost constant on $Z_{comp}$, so if there happens to be a single point $z_0 \in Z_{comp}$ where $|L_{z_0}(0,1)| \le 1/2$, then $|L_z(0,1)| \le 1$ for all $z \in Z_{comp}$. We can arrange this by performing a rotation in the $x_2-x_3$ plane by an angle which is a multiple of $\pi/10$. These rotations generate a finite group, so we can also assume that $W$ is invariant with respect to any of these rotations. After the rotation, we still have $(9/10) a \le \Angle(v(T), T_z Z) \le (1/10)$ for all $z \in Z_{comp}$. Therefore, without loss of generality we can arrange that for every $z \in Z_{comp}$, $L_z$ obeys the bounds
$$ a/2 \le |L_z(1,0)| \le 1/10; |L_z(0,1)| \le 1. $$
Recall that $Z_{comp}$ was defined around an original point $x \in Z_{\ge a} \cap T$. We let $\pi$ be a plane through $x$, perpendicular to $v(T)$. We can assume without loss of generality that the $x_1$ coordinate of the original point $x$ is zero, so that the plane $\pi$ is defined by $x_1 = 0$. The intersection $\pi \cap T$ is a disk of radius $\rho$ centered at $x$, and $\pi \cap T \cap Z_{comp}$ is a smooth curve in this disk. (Since $\Tan_{e_2} \cap Z_{comp}$ is empty, $Z_{comp}$ is transverse to $\pi$.) We look at the component of this curve containing the point $x$. Because of the bound $|L_z(0,1)| \le 1$, this component can be given by a graph of the form $x_3 = g(x_2)$, for a function $g$ with $|\nabla g| \le 1$. The function $g$ is defined on an interval containing $[-\rho/2, \rho/2] =: I_2$. On $I_2$, we have $|g(x_2)| \le \rho/2$.
For each $b_2 \in I_2$, consider the intersection of $Z_{comp}$ with the plane $x_2 = b_2$. Since $Z_{comp}$ is disjoint from $\Tan_{e_3}$, the intersection is a smooth curve. Consider the connected component of this intersection which contains the point $(0, b_2, g(b_2))$. Since $a/2 \le |L_z(1,0)| \le 1/10$, this connected component is given by a graph of the form $x_2 = b_2$, $x_3 = j_{b_2}(x_1)$, where $|\nabla j| \ge a/2$. By continuity, the sign of $\frac{dj}{dx_1}$ must be constant. We also know that $|j(0)| = |g(b_2)| \le \rho/2$, and $|b_2| \le \rho/2$. The function $j$ is defined on an interval $I_1(b_2)$. Let $t_1$ be the positive endpoint of $I_1(b_2)$. Recalling the definition of $Z_{comp}$, we see that either $(b_2, j_{b_2}(t_1)) \in \partial B^2(2 \rho)$ or else $t_1 \ge 2 \rho a^{-1}$. In either case, the image of $j_{b_2}$ must cover an interval $I_3(b_2)$ of length $\ge \rho$. In the first case, we have $|j_{b_2}(t_1)| \ge (3/2) \rho$ and $|j_{b_2}(0)| \le (1/2) \rho$. In the second case, since $| \nabla j| \ge a/2$ and $j$ is defined on $[0, 2 \rho a^{-1}]$, the image of $j$ must again cover an interval of length $\rho$.
We have seen that $Z_{comp}$ intersects the line $x_2 =b_2$, $x_3 = b_3$ whenever $b_2 \in I_2$ and $b_3 \in I_3(b_2)$. The total area of this region is $\ge \rho^2$. This completes the proof of the intersection estimate in the main case that $\Angle(v(T), T_z Z) \le (1/10)$ for all $z \in Z_{comp}$.
Next we consider the minor case that $\Angle( v(T), T_z Z) \ge (1/20)$ for all $z \in Z_{comp}$. Since the tangent plane varies by an angle of at most $1/100$ over $Z_{comp}$, one of the two cases must occur. In the minor case, $T_z Z$ is a graph of the form $x_1 = \bar L_z (x_2, x_3)$ where $\bar L_z$ is a linear function obeying
$$ |\bar L_z (x_2, x_3) | \le 40 | (x_2, x_3)|. $$
Then $Z_{comp}$ is a graph of the form $x_1 = h(x_2, x_3)$ over the disk $B^2(2 \rho)$ in the $x_2-x_3$ plane with $| \nabla h| \le 40$, and so $Z_{comp}$ intersects every line of the form $x_2 = b_2, x_3 = b_3$ with $(b_2, b_3) \in B^2(2 \rho)$. This finishes the proof of the intersection estimate and hence the proof of Lemma \ref{geomtransbound}.
\end{proof}
\subsection{Directions of tangential tubes}
In this section, we prove Lemma \ref{tangbound}. The main tool in the proof is a theorem of Wongkew \cite{W} on the volumes of neighborhoods of real algebraic varieties. Here is a special case of the theorem.
\begin{theorem} \label{Wongkew} (Wongkew) If $P$ is a non-zero polynomial of degree $D$ on $\mathbb{R}^n$, then
$$\Vol \left( B(L) \cap N_\rho Z(P) \right) \le C_n D \rho L^{n-1}. $$
\end{theorem}
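The reader can sanity-check the scaling in Theorem \ref{Wongkew} on a simple example. For the degree-$1$ polynomial $P = x_1$, the neighborhood $N_\rho Z(P)$ is the slab $\{ |x_1| \le \rho \}$, so
$$ \Vol \left( B(L) \cap N_\rho Z(P) \right) \sim \rho L^{n-1} \qquad \textrm{for } \rho \le L, $$
matching the theorem with $D = 1$. Taking $P$ to be a product of $D$ linear factors, so that $Z(P)$ is a union of $D$ parallel hyperplanes that are well-separated inside $B(L)$, shows that the factor of $D$ is sharp.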
Remark. Recently, Zhang gave an application of Wongkew's theorem in incidence geometry \cite{Z}.
We need a minor generalization where the ball $B(L)$ is replaced by a rectangular region.
\begin{theorem} \label{Wongkew'} Suppose that $R$ is an $n$-dimensional rectangular grid of unit cubes with dimensions $R_1 \times ... \times R_n$, where $1 \le R_1 \le ... \le R_n$. Suppose that $P$ is a non-zero polynomial of degree $D$. Then the number of cubes of the grid that intersect $Z(P)$ is at most $C_n D \prod_{j=2}^n R_j$.
\end{theorem}
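Before giving the proof, we note that the bound in Theorem \ref{Wongkew'} is sharp up to constants. If $D \le R_1$ and $P = \prod_{i=1}^D (x_1 - c_i)$ for distinct integers $0 \le c_i \le R_1$, then $Z(P)$ is a union of $D$ hyperplanes perpendicular to $e_1$, and each hyperplane meets $\sim \prod_{j=2}^n R_j$ cubes of the grid, so $Z(P)$ meets $\sim D \prod_{j=2}^n R_j$ cubes in total.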
The proof of this theorem is a minor modification of Wongkew's proof.
\begin{proof} The proof is by induction on $n$. When $n=1$, the theorem reduces to the fact that a degree $D$ polynomial in one variable has at most $D$ zeroes.
By a theorem of Oleinik-Petrovskii, Milnor, and Thom \cite{M}, the number of connected components of $Z(P)$ is $\le C_n D^n$. Therefore, the number of cubes that contain a connected component of $Z(P)$ is $\lesssim D^n$. If a cube intersects $Z(P)$ and does not contain a component of $Z(P)$, then one of its boundary faces intersects $Z(P)$. We will count cubes of this type by induction on the dimension.
Now we want to count $(n-1)$-faces of the grid that intersect $Z(P)$.
Consider all the $(n-1)$-dimensional rectangular grids formed from $R$ by fixing one of the coordinates to an integer value. For each $j$, there are $R_j + 1$ such rectangular grids formed by intersecting $R$ with planes of the form $x_j = h_j$, $h_j = 0, ..., R_j$. The polynomial $P$ may vanish on at most $D$ of these $(n-1)$-planes, contributing at most $D \prod_{j=2}^n R_j$ $(n-1)$-dimensional faces. If $P$ does not vanish on one of these $(n-1)$-dimensional rectangular grids, then we can use induction to bound the number of $(n-1)$-faces of this $(n-1)$-dimensional grid that intersect $Z(P)$.
For $j \not=1$, there are $\lesssim R_j$ rectangular grids in the $e_j^\perp$ direction. In each of these grids, $Z(P)$ may intersect at most $C_{n-1} D R_j^{-1} \prod_{j'=2}^n R_{j'}$ $(n-1)$-faces. Altogether, this contributes $\lesssim_n D \prod_{j=2}^n R_j$ $(n-1)$-faces.
For $j=1$, the bound is even better. There are $\lesssim R_1$ rectangular grids in the $e_1^\perp$ direction. In each of these grids, $Z(P)$ may intersect at most $C_{n-1} D \prod_{j'=3}^n R_{j'}$ $(n-1)$-faces. So the total number of $(n-1)$-faces of this orientation is $\lesssim_n D R_1 R_3 R_4 ... R_n \le D \prod_{j=2}^n R_j$.
\end{proof}
Now we set up Lemma \ref{tangbound} in a slightly more general way.
Suppose that $B= B^3(L)$ is a 3-dimensional ball of radius $L$. Let $P$ be a product of non-singular polynomials of degree at most $D$, and let $Z = Z(P)$. Let $\mathbb{T}$ be a set of cylindrical tubes $T$ of thickness $\rho$. We say that $T \in \mathbb{T}$ lies in $\mathbb{T}_{tang}$ if $2 T \cap Z \cap (1.1) B \not= \emptyset$ and for each non-singular point $x \in 10 T \cap Z \cap 2 B$,
$$ \Angle( v(T), T_x Z) \le \rho / L . $$
We say that two tubes $T_1, T_2 \in \mathbb{T}$ point in different directions if the angle between $v(T_1)$ and $v(T_2)$ is at least $\rho / L$.
\begin{lemma} \label{geomtangbound} If $\mathbb{T}' \subset \mathbb{T}_{tang}$ are tubes pointing in pairwise different directions, then
$$ | \mathbb{T}' | \lesssim D^2 \log^2 (L/ \rho) L / \rho. $$
\end{lemma}
(To recover Lemma \ref{tangbound}, we take $B = B_j$, $L = R^{1 - \delta}$, and $\rho = R^{(1/2) + \delta}$. Therefore, if $\mathbb{T}' \subset \mathbb{T}_{j, tang}$ consists of tubes that point in $R^{-(1/2) + 2 \delta}$-separated directions, then Lemma \ref{geomtangbound} guarantees that $|\mathbb{T}'| \lesssim R^{(1/2) + O(\delta)}$. Next if we let $\mathbb{T}''$ be a subset of $\mathbb{T}_{j, tang}$ consisting of tubes that point in $R^{-1/2}$-separated directions, then the bound for $|\mathbb{T}''|$ is at most $R^{O(\delta)}$ times larger than the bound for $|\mathbb{T}'|$. In conclusion, the number of different directions of tubes in $\mathbb{T}_{j, tang}$ is $\lesssim R^{(1/2) + O(\delta)}$.)
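For the record, the angle threshold in the definition of $\mathbb{T}_{tang}$ matches this choice of parameters:
$$ \frac{\rho}{L} = \frac{R^{(1/2) + \delta}}{R^{1 - \delta}} = R^{-(1/2) + 2 \delta}, $$
so tubes that point in $R^{-(1/2)+2\delta}$-separated directions point in pairwise different directions in the sense above.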
\begin{proof}
By scaling we can assume that $\rho = 1$.
If $T \in \mathbb{T}_{tang}$, then we claim that $T \cap (3/2) B$ is contained in the $10$-neighborhood of $Z(P)$. By assumption, there is a non-singular point $z_0 \in 2T \cap (1.1) B \cap Z(P)$. Since $P$ is a product of non-singular polynomials, $Z(P)$ is a union of smooth varieties $Z(P_l)$, and $z_0$ is in exactly one of them -- say $z_0 \in Z(P_l)$.
We draw a curve in $Z(P_l)$ starting at $z_0$ and trying to stay as close as possible to the core line of $T$. We choose coordinates $x_1, x_2, x_3$ where $T$ is given by $x_2^2 + x_3^2 \le 1$ and where the center of $B$ has $x_1$-coordinate 0. Now at each point $z$ of $Z(P_l) \cap 10 T \cap 2 B$, $\Angle (v (T), T_z Z(P_l)) \le \rho / L = 1 / L$. Therefore, we can parametrize a curve in $Z(P_l)$ starting at $z_0$, given by a graph $(x_2, x_3) = g(x_1)$ with $|\nabla g| \le 1 / L$, for $|x_1| \le (3/2)L$. This curve lies in $Z(P)$, and $T \cap (3/2) B$ lies in the $10$-neighborhood of the curve.
The rest of the proof is a hairbrush argument, following Wolff's hairbrush idea from \cite{W2}. Suppose that $|\mathbb{T}'| = \beta L $. We will prove that
$\beta \lesssim D^2 \log^2 L$. For each tube $T$, $\Vol (B \cap T) \sim L$. We cover $N_{10} Z(P) \cap B$ with cubes $Q$ of side length $1$. By Theorem \ref{Wongkew}, the number of cubes $Q$ is $\lesssim D L^2$.
Each tube $T \in \mathbb{T}'$ intersects $\gtrsim L$ cubes. So on average, each cube intersects at least $\beta D^{-1}$ tubes.
Consider triples $(Q, T_1, T_2)$ with $Q$ in our set of cubes and $T_1, T_2 \in \mathbb{T}'$. Assuming that $\beta$ is significantly larger than $D$, a Cauchy-Schwarz argument implies that the number of triples is at least $(\beta/D)^2 D L^2 = \beta^2 D^{-1} L^2$.
We group the triples in dyadic blocks according to the size of $\Angle ( v(T_1), v(T_2))$. If $T_1 \not= T_2$, then this angle is between $1/L$ and $\pi / 2$, so we get $\sim \log L$ dyadic blocks. We pick a popular dyadic block with angle range $[\theta, 2 \theta]$, where $1/L \le \theta \le 2$. The number of triples with $\Angle (v(T_1), v(T_2)) \in [\theta, 2 \theta]$ is $\gtrsim \beta^2 D^{-1} L^2 ( \log L)^{-1}. $
There are $\beta L$ tubes in $\mathbb{T}'$. By the pigeonhole principle, one of these tubes $T_1$ must appear in $\gtrsim \beta D^{-1} L (\log L)^{-1}$ triples with $\Angle (v(T_1), v(T_2)) \in [\theta, 2 \theta]$.
We let $H$ (for hairbrush) denote the union of all these tubes, intersected with the ball $(3/2) B$.
Given the angle condition $\Angle(v(T_1), v(T_2)) \sim \theta$, each pair $T_1, T_2$ can appear in $\lesssim \theta^{-1}$ triples, and so the number of tubes $T_2$ in the hairbrush obeys
$$(\# \textrm{ of tubes } T_2 \textrm{ in } H) \gtrsim \beta D^{-1} \theta L (\log L)^{-1}. $$
We will get a lower bound on the volume of $H$ from Wolff's hairbrush argument, and we will get an upper bound on the volume of $H$ from Wongkew's theorem. Playing these bounds against each other, we will get the desired upper bound $\beta \lesssim D^2 \log^2 L$.
The tubes in the hairbrush $H$ are morally disjoint. We can divide the hairbrush into $\sim \theta L$ planar slabs of thickness 1. Outside of the $(\theta/10)L$-neighborhood of the core line of $T_1$, any point lies in $\lesssim 1$ of the planar slabs. Because of the angle condition, the tubes in each planar slab have angle separation $\gtrsim L^{-1}$. By a standard argument, the volume of their union is at least $(\log L)^{-1}$ times the sum of their volumes. (See for example Theorem 1.3 in Lecture Notes 6 of \cite{Tnotes}.) Therefore, we see that
$$ \Vol H \gtrsim (\log L)^{-1} (\# \textrm{ of tubes } T_2 \textrm{ in } H) L \gtrsim \beta D^{-1} \theta L^2 (\log L)^{-2}. $$
On the other hand, the hairbrush $H$ is contained in a cylinder around the core line of $T_1$ with radius $\theta L$. This cylinder is approximately a rectangle of dimensions $\theta L \times \theta L \times L$. The hairbrush $H$ is contained in the $O(1)$-neighborhood of $Z(P)$ inside this rectangle. Theorem \ref{Wongkew'} gives the following upper bound on the volume of $H$.
$$ \Vol H \lesssim D \theta L^2. $$
Combining the last two inequalities, we see that $\beta \lesssim D^2 (\log L)^2$.
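Explicitly, the two volume bounds give
$$ \beta D^{-1} \theta L^2 (\log L)^{-2} \lesssim \Vol H \lesssim D \theta L^2, $$
and cancelling the common factor $\theta L^2$ leaves $\beta \lesssim D^2 (\log L)^2$, which is the bound $|\mathbb{T}'| \lesssim D^2 \log^2 (L/\rho)\, L/\rho$ after undoing the scaling $\rho = 1$.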
\end{proof}
% arXiv:1407.1916, "A restriction estimate using polynomial partitioning" (math.CA)
% https://arxiv.org/abs/2003.13542
\title{Bisimulation as a Logical Relation}
\begin{abstract}
We investigate how various forms of bisimulation can be characterised using the technology of logical relations. The approach taken is that each form of bisimulation corresponds to an algebraic structure derived from a transition system, and the general result is that a relation $R$ between two transition systems on state spaces $S$ and $T$ is a bisimulation if and only if the derived algebraic structures are in the logical relation automatically generated from $R$. We show that this approach works for the original Park-Milner bisimulation and that it extends to weak bisimulation, and branching and semi-branching bisimulation. The paper concludes with a discussion of probabilistic bisimulation, where the situation is slightly more complex, partly owing to the need to encompass bisimulations that are not just relations.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
This work forms part of a programme to view logical relations as a structure that arises naturally
from interpretations of logic and type theory and to expose the possibility of their use as a
wide-ranging framework for
formalising links between instances of mathematical structures. See
\cite{DBLP:journals/entcs/HermidaRR14} for an introduction to this.
The purpose of this paper is to show
how several notions of bisimulation (strong, weak, branching and probabilistic) can be viewed as instances of the use of logical relations.
It is not to prove new facts in process algebra. Indeed the work we produce is based on concrete
facts, particularly about weak bisimulation, that have long been known in the process algebra
community. What we do is look at them in a slightly different light.
We see two advantages in this. The first is that the concept of bisimulation is
incorporated as a formal instance of a framework that also includes other traditional mathematical
structures, such as group homomorphisms.
Formally speaking, the theory of groups is standardly presented as an algebraic theory with
operations of multiplication ($.$), inverse ($(\ )^{-1}$) and a constant ($e$) giving the identity of the
multiplication operation. A group is a set equipped with interpretations of these operations under
which they satisfy certain equations. We will not need to bother with the equations here. If $G$ and
$H$ are groups, then a group homomorphism $\theta: G\to H$ is a function $G\to H$ between the
underlying sets that respects the group operations. We will consider the graph of this function as a
relation between $G$ and $H$. We abuse notation to conflate the function with its graph, and write
$\theta\subseteq G\times H$ for the relation $\{(g,\theta g) \mid g \in G\}$.
Logical relations give a formal way of extending relations to higher types. In particular, the type for
multiplication is $(X\times X)\to X$, and the recipe for $(\theta\times \theta)\to \theta$ tells us that
$(._G, ._H) \in (\theta\times \theta)\to \theta$ if and only if for all $g_1,g_2\in G$ and $h_1,h_2\in H$,
if $(g_1,h_1)\in\theta$ and $(g_2,h_2)\in\theta$, then $(g_1._G g_2, h_1._H h_2)\in\theta$.
Rewriting this back into the standard functional style, this says precisely that
$\theta (g_1._G g_2) = (\theta g_1)._H (\theta g_2)$, the part of the standard requirements for a
group homomorphism relating to multiplication. In other words, this tells us that a relation $\theta$
is a group homomorphism between $G$ and $H$ if and only if the operations are in the appropriate
logical relations for their types and $\theta$ is functional:
\begin{itemize}
\item $(._G, ._H) \in (\theta\times \theta)\to \theta$
\item $((\ )^{-1}_G, (\ )^{-1}_H)\in (\theta\to \theta)$
\item $(e_G,e_H)\in\theta$, and
\item $\theta$ is functional and total.
\end{itemize}
We get an equivalent characterisation of (strong) bisimulation. We can take a labelled transition
system (with labels $A$ and state space $S$) to be an operation of type $A\times S \to \pow S$,
or equivalently $A\to [S\to \pow S]$. Let $F$ and $G$ be two such (with the same set of labels, but
state spaces $S$ and $T$), then we show that $R\subset S\times T$ is a bisimulation if and only if
the transition operations are in the appropriate logical relation:
\begin{itemize}
\item $(F,G) \in A\times R \to \pow R$, or equivalently
\item $(F,G) \in A\to [R \to \pow R]$
\end{itemize}
Since {{\sf Rel}} is a cartesian closed category it does not matter which of these presentations we use;
the requirement on $R$ will be the same.
In order to do this we need to account for the interpretation of $\pow$ on relations and this leads us
into a slightly more general discussion of monadic types. This includes some results about monads
on {{\sf Set}} that we believe are new, or at least are not widely known.
Weak and branching bisimulation can be made to follow. It is widely known that weak bisimulation can be reduced to
the strong bisimulation of related systems, and we follow this approach. The interest for us is the
algebraic nature of the construction of the related system, and we give two such, one of which
explicitly includes $\tau$ actions and the other does not. In this case we get results of the form:
$R\subset S\times T$ is a weak bisimulation if and only if
the derived transition operations $\sat F$ and $\sat G$ are in the appropriate logical relation:
\begin{itemize}
\item $(\sat F,\sat G) \in A\to [R \to \pow R]$
\end{itemize}
This seems something of a cheat but there is an issue here. The notion of weak bisimulation is built
on a transition system that includes $\tau$ actions. These actions form a formal part of the semantic
structure, but are not supposed to be visible. You can argue that this is also cheating, and that you would
really like a semantic structure that does not include mention of $\tau$, and that is what our
second construction does.
Branching and semi-branching bisimulations were introduced to deal with
perceived deficiencies in weak bisimulation. We show that they arise
naturally out of a variant of the notion of transition system in which
the system moves first by internal computations to a synchronisation
point, and then by the appropriate action to a new state.
Bisimulations between probabilistic systems are a little more problematic.
They don't quite fit the paradigm because, in the continuous case, we
have a Markov kernel rather than transitions between particular states.
Secondly, there are different approaches to bisimilarity. We investigate
these and show that the logical relations approach can still be extended
to this setting, and that when we do so there are strong links with these
approaches to bisimilarity.
The notion of \emph{probabilistic bisimulation} for discrete probabilistic systems is due to Larsen and Skou~\cite{larsen_bisimulation_1991} and was later further studied by Van Glabbeek, Smolka and Steffen~\cite{van_glabbeek_reactive_1995}. The continuous case was instead discussed first by Desharnais, Edalat and Panangaden in~\cite{desharnais_bisimulation_2002}, where bisimulation is described as a span of \emph{zig-zag morphisms} between probabilistic transition systems, there called \emph{labelled Markov processes} (LMP), whose set of states is an analytic space. The hypothesis of analyticity is sufficient in order to prove that bisimilarity is a transitive relation, hence an equivalence relation.
In~\cite{panangaden2009labelled}, the author defined instead the notion of probabilistic bisimulation on a LMP (again with an analytic space of states) as an equivalence relation satisfying a property similar to Larsen and Skou's discrete case. For two LMPs with different sets of states, $S$ and $S'$ say, one can consider equivalence relations on $S+S'$.
Here we follow de Vink and Rutten's modus operandi of~\cite{de_vink_bisimulation_1999}, where they showed the connections between Larsen and Skou's definition in the discrete case and the \emph{coalgebraic} approach of the ``transition-systems-as-coalgebras paradigm'' described at length in~\cite{rutten_universal_2000}; then they used the same approach to give a notion of probabilistic bisimulation in the continuous case of transition systems whose set of states constitutes an ultrametric space. In this paper we see LMPs as coalgebras for the Giry functor $\Giry \colon \sf Meas \to \sf Meas$ (hence we consider arbitrary measurable spaces) and a probabilistic bisimulation is defined as a $\Giry$-\emph{bisimulation}: a span in the category of $\Giry$-coalgebras. At the same time, we define a notion of logical relation for two such coalgebras $F \colon S \to \Giry S$ and $G \colon T \to \Giry T$ as a relation $R \subseteq S \times T$ such that $(F,G) \in [R \to \Giry R]$, for an appropriately defined relation $\Giry R$. It is easy to see that if $S=T$ and if $R$ is an equivalence relation, then the definitions of logical relation and bisimulation of~\cite{panangaden2009labelled} coincide. What is not straightforward is the connection between the definition of $\Giry$-bisimulation and of logical relation in the general case: here we present some sufficient hypotheses for them to coincide, obtaining a result similar to de Vink and Rutten's, albeit the state spaces are not necessarily ultrametric.
Returning to why we are taking this approach, the second benefit we mentioned is that placing these constructions in this context opens up the possibility of
applying them in more general settings than {{\sf Set}}. The early work of Hermida
\cite{hermida1993fibrations,hermida1999some} shows that logical predicates can be obtained from
quite general interpretations of logic, and more recent work of the authors of this paper shows how
to extend this to general logical relations. The interpretation of covariant powerset given here is via
an algebraic theory of complete sup-lattices opening up the possibility of also extending it to more
general settings (though there will be design decisions about the indexing structures allowed). The
derived structures used to model weak bisimulation are defined through reflections, and so can be
interpreted in categories with the correct formal properties. All of this gives, we hope, a framework
that can be used flexibly in a wide range of settings.
As we have indicated, much of this is based on material well-known to the process algebra
community. We will not attempt to give a full survey of sources here.
The results on monadic types are also related to work by Goubault-Larrecq, Lasota and Nowak
\cite{goubault2008logical}, though our contributions concentrate on specific properties of {{\sf Set}}, which
enable their framework of image factorisation to be used.
The authors would like to thank Matthew Hennessy for suggesting that weak bisimulation would be
a reasonable challenge for assessing the strength of this technology.
\section{Bisimulation}
\label{sec:bisimulation}
The notion of bisimulation was introduced by Park for automata \cite{park1981concurrency},
extended by Milner to processes and then further modified to allow internal actions of those processes
\cite{milner1989communication}. The classical notion is {\em strong bisimulation}, defined as a relation
between labelled transition systems.
\begin{definition}
A {\em transition system} consists of a set $S$, together with a function $f:S\longrightarrow \pow S$. We view elements $s\in S$ as states of the system, and read $f(s)$ as the set of states to which $s$ can evolve in a single step. A {\em labelled transition system} consists of a set $A$ of labels (or actions), a set $S$ of states, and a function $F: A \longrightarrow [S\longrightarrow \pow S]$. For $a\in A$ and $s\in S$ we read $Fas$ as the set of states to which $s$ can evolve in a single step by performing action $a$. We usually write $\trans sa{s'}$ for $s'\in Fas$, using different arrows to represent different $F$'s.
\end{definition}
This definition characterises a labelled transition system as a function from labels to unlabelled transition systems. For each label we get the transition system of actions with that label. By uncurrying $F$ we get an equivalent definition as a function $A\times S \to \pow S$.
We can now define bisimulation.
\begin{definition}
Let $S$ and $T$ be labelled transition systems for the same set of labels, $A$. Then a relation $R\subseteq S\times T$ is a {\em strong bisimulation} iff for all $a\in A$, whenever $sRt$
\begin{itemize}
\item[-] for all $\trans sa{s'}$, there is $t'$ such that $\trans ta{t'}$ and $s'Rt'$
\item[-] and for all $\trans ta{t'}$, there is $s'$ such that $\trans sa{s'}$ and $s'Rt'$.
\end{itemize}
\end{definition}
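On finite systems this definition can be checked by direct enumeration. The following sketch is illustrative only (the encoding of $F$ and $G$ as dictionaries, and all names, are ours rather than the paper's):

```python
def is_strong_bisimulation(R, F, G, labels):
    """Check both clauses of the definition of strong bisimulation.

    F, G map (label, state) pairs to sets of successor states;
    R is a set of (s, t) pairs.
    """
    for (s, t) in R:
        for a in labels:
            # every a-step of s must be matched by an a-step of t
            for s1 in F.get((a, s), set()):
                if not any((s1, t1) in R for t1 in G.get((a, t), set())):
                    return False
            # and symmetrically for t
            for t1 in G.get((a, t), set()):
                if not any((s1, t1) in R for s1 in F.get((a, s), set())):
                    return False
    return True

# s0 -a-> s1 on the left; t0 -a-> t1 and t0 -a-> t2 on the right
F = {('a', 's0'): {'s1'}}
G = {('a', 't0'): {'t1', 't2'}}
R = {('s0', 't0'), ('s1', 't1'), ('s1', 't2')}
```

Here `is_strong_bisimulation(R, F, G, {'a'})` holds, while the restriction of $R$ to the single pair $(s_0, t_0)$ fails the first clause.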
\section{Logical Relations}
\label{sec:logical-relations}
The idea behind logical relations is to take relations on base types, and extend them to relations on higher types in a structured way. The relations usually considered are binary, but they do not have to be. Even the apparently simple unary logical relations (logical predicates) are a useful tool. In this paper we will be considering binary relations except for a few throwaway remarks. We will also keep things simple by just working with sets.
As an example, suppose we have a relation $R_0\subseteq S_0 \times T_0$ and a relation $R_1\subseteq S_1 \times T_1$, then we can construct a relation $[R_0\rightarrow R_1]$ between the function spaces $[S_0\rightarrow S_1]$ and $[T_0\rightarrow T_1]$. If $f:S_0\longrightarrow S_1$ and
$g:T_0\longrightarrow T_1$, then $f [R_0\rightarrow R_1] g$ iff for all $s$, $t$ such that $s R_0 t$, we have $f(s) R_1 g(t)$.
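For finite carriers this recipe is directly computable; a minimal sketch (our own hypothetical encoding of relations as sets of pairs, not from the paper):

```python
def fun_rel(R0, R1, f, g, S0, T0):
    """f [R0 -> R1] g: R0-related inputs are sent to R1-related outputs."""
    return all((f(s), g(t)) in R1
               for s in S0 for t in T0 if (s, t) in R0)

# the identity relation on {0, 1}, lifted to successor functions
R0 = {(0, 0), (1, 1)}
R1 = {(1, 1), (2, 2)}
```

With these sets, two copies of the successor function are related in $[R_0 \rightarrow R_1]$, while a pair of functions that disagree on related inputs is not.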
The significance of this definition for us is that it arises naturally out of a broader view of the structure. We consider categories of predicates and relations.
\begin{definition}\label{def:Pred-1}
The objects of the category {{\sf Pred}} are pairs $(P,A)$ where $A$ is a set and $P$ is a subset of $A$. A
morphism $(P,A)\to (Q,B)$ is a function $f: A\to B$ such that
$\forall a\in A. a\in P \implies f(a) \in Q$. Identities and composition are inherited from {\sf Set}.
\end{definition}
{{\sf Pred}} also has a logical reading. We can take $(P,A)$ as a predicate on the type $A$, and
associate it
with a judgement of the form $a:A \vdash P(a)$ (read ``in the context $a:A$, $P(a)$ is a proposition'').
A morphism
$t \colon (a:A\vdash P(a)) \to (b:B\vdash Q(b))$
has two parts: a substitution $b\mapsto t(a)$, and the logical consequence $P(a) \Rightarrow Q(t(a))$
(read ``whenever $P(a)$ holds, then so does $Q(t(a))$'').
\begin{definition}\label{def:Rel-1}
The objects of the category {{\sf Rel}} are triples $(R,A_1,A_2)$ where $A_1$ and $A_2$ are sets and
$R$ is a subset of $A_1\times A_2$ (a relation between $A_1$ and $A_2$). A morphism
$(R,A_1,A_2)\to (S,B_1,B_2)$ is a pair of functions $f_1: A_1\to B_1$
and $f_2: A_2\to B_2$ such that
$\forall a_1\in A_1, a_2\in A_2. (a_1,a_2)\in R \implies (f_1(a_1),f_2(a_2)) \in S$.
Identities and composition are inherited from ${\sf Set}\times{\sf Set}$.
\end{definition}
\begin{center}
\begin{tikzcd}
P \ar[r, dashed] \ar[d] & Q\ar[d] & R \ar[r, dashed]\ar[d] & S\ar[d]\\
A \ar[r, "f"] & B & A_1\times A_2 \ar[r, "f_1\times f_2"] & B_1\times B_2
\end{tikzcd}
\end{center}
${\sf Rel}_n$ is the obvious generalisation of ${\sf Rel}$ to $n$-ary relations.
{\sf Pred}{} has a forgetful functor $p:{\sf Pred}\rightarrow{\sf Set}$, $p(P,A) = A$, and similarly
{{\sf Rel}} has a forgetful functor
$q:{\sf Rel}\rightarrow{\sf Set}\times{\sf Set}$, $q(R,A_1,A_2) = (A_1,A_2)$. These functors carry a good deal of
structure and are critical to a deeper understanding of the constructions.
Moreover, both {{\sf Pred}} and {{\sf Rel}} are cartesian closed categories.
\begin{lemma} {{\sf Pred}} is cartesian closed and the forgetful functor $p:{\sf Pred}\to{\sf Set}$ preserves that structure. {{\sf Rel}} is also cartesian closed and the two projection functors preserve that structure. Moreover the function space in {{\sf Rel}} is given as in the example above.
\end{lemma}
So the definition we gave above to extend relations to function spaces can be motivated as the description of the function space in a category of relations.
\section{Covariant Powerset}
\label{sec:covariant-powerset}
We can do similar things with other type constructions. In particular we can extend relations to relations between powersets.
\begin{definition}\label{def-collection-rel}
Let $R\subseteq S\times T$ be a relation between sets $S$ and $T$. Then we define $\pow R\subseteq \pow S\times \pow T$ by: \\
$ U [\pow R] V $ iff
\begin{itemize}
\item[-] for all $u\in U$, there is a $v\in V$ such that $uRv$
\item[-] and for all $v\in V$, there is a $u\in U$ such that $uRv$
\end{itemize}
\end{definition}
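On finite sets this lifting is computable, and its agreement with the subset characterisation of lemma \ref{lemma:egli} below can be confirmed by brute force; the following sketch uses our own (hypothetical) names:

```python
from itertools import chain, combinations

def pow_rel(R, U, V):
    """The Egli-Milner lifting: U [pow R] V."""
    return (all(any((u, v) in R for v in V) for u in U)
            and all(any((u, v) in R for u in U) for v in V))

def via_subset(R, U, V):
    """(U, V) arise as the two projections of some subset of R."""
    subsets = chain.from_iterable(combinations(sorted(R), k)
                                  for k in range(len(R) + 1))
    return any({u for u, _ in S} == set(U) and {v for _, v in S} == set(V)
               for S in subsets)

R = {(0, 'x'), (0, 'y'), (1, 'y')}
```

Checking every $U \subseteq \{0,1\}$ against every $V \subseteq \{x, y\}$ confirms that the two formulations agree on this example.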
Again this arises naturally out of the lifting of a construction on {{\sf Set}} to a construction on {{\sf Rel}}. In this case we have the covariant powerset monad, in which the unit $\eta: S \longrightarrow \pow S$ is $\eta s = \{ s\}$, and the multiplication $\mu: \pow{}^2 S \longrightarrow \pow S$ is $\mu X = \Union X$.
There are two ways to motivate the definition we have just given. They both arise out of constructions for general monads, and in the case of monads on {{\sf Set}} they coincide.
In {{\sf Pred}} our powerset operator sends $(Q,A)$ to $(\pow{Q}, \pow A)$ with the obvious inclusion.
In {{\sf Rel}} it almost sends $(R, A_1, A_2)$ to
$(\pow R,\pow{A_1}, \pow{A_2})$, where the ``relation'' is as follows: if $U\subseteq R$ (i.e.\
$U\in\pow R$) then $U$ projects onto $\mathop{\pi_1} U$ and $\mathop{\pi_2} U$. So for example, if
$R$ is the total relation on $\{0,1,2\}$ and $U=\{(0,1),(1,2)\}$, then $U$ projects onto $\{0,1\}$ and
$\{1,2\}$.
The issue is that there are other subsets that project onto the same elements, e.g.\
$U'=\{(0,1),(1,1),(1,2)\}$, and hence this association does not give a monomorphic embedding of
$\pow R$ into
$\pow{A_1}\times \pow{A_2}$.
\begin{lemma}\label{lemma:egli}
If $R$ is a relation between sets $A_1$ and $A_2$, $P_1\subseteq A_1$ and $P_2\subseteq A_2$,
then the following are equivalent:
\begin{enumerate}
\item there is $U\subseteq R$ such that $\mathop{\pi_1} U = P_1$ and $\mathop{\pi_2} U = P_2$
\item for all $a_1\in P_1$ there is an $a_2\in P_2$ such that $a_1 R a_2$ and for all $a_2\in P_2$
there is an $a_1\in P_1$ such that $a_1 R a_2$.
\end{enumerate}
\end{lemma}
The latter is the Egli-Milner condition arising in the ordering on the Plotkin powerdomain
\cite{plotkin1976powerdomain}.
Thus for {{\sf Rel}} we take the powerset of $(R,A_1,A_2)$ to be $(\pow R, \pow{A_1},\pow{A_2})$, where
$P_1 [\pow R] P_2$ iff $P_1$ and $P_2$ satisfy the equivalent conditions of lemma \ref{lemma:egli}.
\paragraph{Covariant powerset as the algebraic theory of complete $\vee$-semilattices.}
This form of powerset does not characterise predicates on our starting point. Rather it characterises
arbitrary collections of elements of it. To make this precise, consider the following formalisation of the
theory of complete sup-semilattices. For each set $X$ we have an operation
$\bigvee_X : L^X \rightarrow L$. These operations satisfy the following equations:
\begin{enumerate}
\item\label{sup-eq-1} given a surjection $f: X\rightarrow Y$, $\bigvee_X \circ L^f = \bigvee_Y$.
\item\label{sup-eq-2} given an arbitrary function $f: X\rightarrow Y$,
$\bigvee_Y \circ (\lambda {y\in Y}. \bigvee_{f^{-1}\{y\}}\circ L^{i_y}) = \bigvee_X$, where
$i_y : f^{-1}\{y\} \to X$ is the inclusion of $f^{-1}\{y\}$ in $X$.
\end{enumerate}
The first axiom generalises idempotence and commutativity of the $\vee$-operator. The second says
that if we have a collection of sets of elements, take their $\bigvee$'s, and take the $\bigvee$ of the
results, then we get the same result by taking the union of the collection and taking the $\bigvee$ of
that. A particular case is that $\bigvee_\emptyset$ is the inclusion of a bottom element.
The fact that this theory includes a proper class of operators and a proper class of equations does not
cause significant problems.
\begin{lemma}
In the category of sets, $\pow A$ is the free complete sup-semilattice on $A$.
\end{lemma}
\begin{proof}
(Sketch) Interpreting the $\bigvee$ operators as unions, it is clear that $\pow A$ is a model of our
theory of complete sup-semilattices.
Suppose now that $f: A \rightarrow B$ and $B$ is a complete sup-semilattice. Then we have a map
$f^{\ast} : \pow A \rightarrow B$ defined by
$f^{\ast} (X) = \bigvee_X (\lambda x\in X. f(x))$. Equation (\ref{sup-eq-1}) tells us that the operators
$\bigvee_X$ are stable under isomorphisms of $X$, and hence we do not need to be concerned
about that level of detail. Equation (\ref{sup-eq-2}) now tells us that $f^{\ast}$ is a homomorphism.
Moreover, if $X\subseteq A$ then in $\pow A$, $X = \bigvee_X (\lambda x\in X. \{ x \})$. Hence $f^{\ast}$
is the only possible homomorphism extending $f$. This gives the free property for $\pow A$.
\end{proof}
\begin{lemma}
In {{\sf Pred}}, $(\pow P,\pow A)$ is the free complete sup-semilattice on $(P,A)$ and in {{\sf Rel}},
$(\pow R, \pow{A_1},\pow{A_2})$ is the free complete sup-semilattice on $(R,A_1,A_2)$.
\end{lemma}
\begin{proof}
We start with {{\sf Pred}}. For any set $X$, $(X,X)$ is the coproduct in {{\sf Pred}} of $X$ copies of $(1,1)$,
and $(Q^X,B^X)$ is the product of $X$ copies of $(Q,B)$. $X$-indexed union in the two components
gives a map
$\bigcup_X : ((\pow{P})^X,(\pow{A})^X) \rightarrow (\pow P,\pow A)$.
Since this works component-wise, these operators satisfy the axioms in the same way as in {\sf Set}.
$(\pow P,\pow A)$ is thus a complete sup-semilattice.
Moreover, if $f: (P,A) \rightarrow (Q,B)$ where $(Q,B)$ is a complete sup-semilattice, then we have
$f^{\ast}: \pow A \rightarrow B$ and (the restriction of) $f^{\ast}$ also maps $\pow P \rightarrow Q$. The proof
is now essentially as in {\sf Set}.
The proof in {{\sf Rel}} is similar.
\end{proof}
This type constructor has notable differences from a standard powerset. It (obviously) supports
collecting operations of union, including a form of quantifier: $\bigcup : \pow \pow X \to \pow X$.
However it does not support either intersection or a membership operator.
\begin{lemma}
\begin{enumerate}
\item $\cap : \pow X \times \pow X \to \pow X$ is not parametric.
\item $\in : X \times \pow X \to 2 = \{\top,\bot\}$ is not parametric.
\end{enumerate}
\end{lemma}
\begin{proof}
Consider sets $A$ and $B$ and a relation $R$ in which $aRb$ and $aRb'$ where $b\neq b'$.
\begin{enumerate}
\item $\{a\} [\pow R] \{b\}$ and $\{a\} [\pow R] \{b'\}$, but $\{a\}\cap\{a\} = \{a\}$, while
$\{b\}\cap\{b'\} = \emptyset$, and it is not the case that $\{a\} [\pow R] \emptyset$.
\item $aRb'$ and $\{a\} [\pow R] \{b\}$, so $(a,\{a\})$ and $(b',\{b\})$ are related in $R\times \pow R$, but applying $\in$ to the two components gives different results: \\
$\in (a,\{a\}) = \top$, while $\in (b',\{b\}) = \bot$.
\end{enumerate}
Hence $\cap$ and $\in$ are not parametric.
\end{proof}
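Both counterexamples are small enough to replay mechanically; a sketch (the lifting restated inline; the names are ours):

```python
def pow_rel(R, U, V):
    """The Egli-Milner lifting of R to sets."""
    return (all(any((u, v) in R for v in V) for u in U)
            and all(any((u, v) in R for u in U) for v in V))

# a is R-related to two distinct elements b1 and b2
R = {('a', 'b1'), ('a', 'b2')}

# intersection: the inputs are pow R-related pointwise ...
lhs = {'a'} & {'a'}      # = {'a'}
rhs = {'b1'} & {'b2'}    # = set(), and {'a'} is not pow R-related to set()

# membership: (a, b2) in R and {a} [pow R] {b1},
# yet a in {a} is True while b2 in {b1} is False
```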
Despite the lack of these operations, this type constructor is useful to model non-determinism.
\paragraph{Covariant powerset in {{\sf Rel}} using image factorisation.}
Suppose $Q\subseteq A$, then $\pow Q\subseteq \pow A$, and hence we can easily extend $\pow$ to
{{\sf Pred}}. However, if $R\subseteq A\times B$, then $\pow R$ is a subset of $\pow (A\times B)$, not
$\pow A \times \pow B$. The consequence is that $\pow$ does not automatically extend to {{\sf Rel}} in the same way.
The second way to get round this is to note that we have projection maps $R\to A$ and $R\to B$.
Applying the covariant $\pow$ we get $\pow R \to \pow A$ and $\pow R\to \pow B$, and hence
a map
$\phi: \pow R\to (\pow A \times \pow B)$. $\phi$ sends $U\subseteq R$ to
\[(\pi_A (U), \pi_B (U)) = (\{a\in A\ |\ \exists b\in B.\ (a,b)\in U\},\{b\in B\ |\ \exists a\in A.\ (a,b)\in U\})\]
This map is not necessarily monic.
\begin{example}
Let $A=\{0,1\}$, $B=\{x,y\}$, $R=A\times B$. Let $U=\{(0,x),(1,y)\}$, and $V=\{(0,y),(1,x)\}$. Then
$\phi U = \phi V = \phi R = A\times B$.
\end{example}
We therefore take its image factorization:
\[\begin{tikzcd}
\pow R \ar[r, two heads] & \overline{\pow R} \ar[r, tail] & \pow A \times \pow B
\end{tikzcd}\]
Using this definition, $\overline{\pow R}$ is
\[\{ (U,V) \in \pow A \times \pow B \ |\ \exists S\subseteq R.\ U=\pi_A S \wedge V = \pi_B S \}\]
Now by lemma \ref{lemma:egli} we have that this gives the same extension of covariant powerset to
relations as the algebraic approach.
\begin{lemma}
The following are equivalent:
\begin{enumerate}
\item $ U [\pow R] V $
\item there is $S\subseteq R$ such that $\mathop{\pi_A} S = U$ and $\mathop{\pi_B} S = V$
\item for all $a \in U$ there is a $b\in V$ such that $a R b$ and for all $b \in V$
there is an $a \in U$ such that $a R b$.
\end{enumerate}
\end{lemma}
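For finite sets, the equivalence of the last two characterisations can be verified by brute force. The following sketch (function names are ours) enumerates all pairs $(U,V)$ for a small relation:

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def via_projections(R, U, V):
    """(2): some S contained in R projects onto exactly U and V."""
    return any(frozenset(a for a, _ in S) == U and
               frozenset(b for _, b in S) == V
               for S in subsets(R))

def egli_milner(R, U, V):
    """(3): every a in U reaches V through R and every b in V is reached from U."""
    return (all(any((a, b) in R for b in V) for a in U) and
            all(any((a, b) in R for a in U) for b in V))

# Brute-force check of the equivalence on a small relation.
A, B = {0, 1}, {'x', 'y'}
R = {(0, 'x'), (0, 'y'), (1, 'y')}
for U in subsets(A):
    for V in subsets(B):
        assert via_projections(R, U, V) == egli_milner(R, U, V)
```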
\section{Strong bisimulation via logical relations}
This now gives us the ingredients to introduce the notion of a logical relation between transition systems.
\begin{definition}
Suppose $f : S \longrightarrow \pow S$ and $g: T \longrightarrow \pow T$ are two transition systems. Then we say that $R\subseteq S\times T$ is a {\em logical relation of transition systems} if $(f,g)$ is in the relation $[R\rightarrow \pow R]$. Similarly, if $A$ is a set of labels and $F: A \longrightarrow [S\rightarrow \pow S]$ and $G: A \longrightarrow [T\rightarrow \pow T]$ are labelled transition systems, then we say that $R\subseteq S\times T$ is a {\em logical relation of labelled transition systems} if $(Fa,Ga)$ is in the relation $[R\rightarrow \pow R]$ for all $a\in A$.
\end{definition}
The following lemma is trivial to prove, but shows that we could take our uniform approach a step further:
\begin{lemma}
$R$ is a logical relation of labelled transition systems iff $(F,G)$ is in the relation
$[\mbox{\rm Id}_A \rightarrow [R\rightarrow \pow R]]$.
\end{lemma}
And the following lemma shows that we have rediscovered the notion of strong bisimulation as an
example of a general structural congruence.
\begin{lemma}
If $F: A \longrightarrow [S\rightarrow \pow S]$ and $G: A \longrightarrow [T\rightarrow \pow T]$ are two labelled transition systems, then $R\subseteq S\times T$ is a logical relation of labelled transition systems iff it is a strong bisimulation.
\end{lemma}
\begin{proof}
The proof is simply to expand the definition of what it means to be a logical relation of labelled transition
systems. If $R$ is a logical relation, and $sRt$ then, applying the definition of logical relation for
function space twice, $\{ s' | \trans sa{s'} \} \pow R \{ t' | \trans ta{t'} \}$. So if $\trans sa{s'}$, then
$s' \in \{ s' | \trans sa{s'} \}$. Hence, by definition of $\pow R$ there is a $t' \in \{ t' | \trans ta{t'} \}$
such that $s' R t'$. In other words, $\trans ta{t'}$ and $s'Rt'$.
Conversely, if $R$ is a strong bisimulation,
then $\lambda as.\ \{s' | \trans sa{s'} \}$ and $\lambda at.\ \{t' | \trans ta{t'} \}$ are in the relation
$[\mbox{\rm Id}_A \rightarrow [R\rightarrow \pow R]]$. We have to check that if $a \mbox{\rm Id}_A a'$ and $sRt$ then
$\{s' | \trans sa{s'} \} \pow R \{t' | \trans t{a'}{t'} \}$. But if $a \mbox{\rm Id}_A a'$, then $a=a'$, so this reduces to
$\{s' | \trans sa{s'} \} \pow R \{t' | \trans ta{t'} \}$. Now definition \ref{def-collection-rel} says that we need
to verify that:
\begin{itemize}
\item[-] for all $\trans sa{s'}$, there is $t'$ such that $\trans ta{t'}$ and $s'Rt'$
\item[-] and for all $\trans ta{t'}$, there is $s'$ such that $\trans sa{s'}$ and $s'Rt'$.
\end{itemize}
This is precisely the bisimulation condition.
\end{proof}
This means that we have rediscovered bisimulation as the specific notion of congruence for transition systems arising out of a more general theory of congruences between typed structures.
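As a sanity check, the two characterisations can be run side by side on a small example. In the sketch below (names are ours; transition systems are modelled as functions from a label and a state to the set of successor states), the predicate obtained by unfolding the logical-relation condition agrees, as the lemma shows, with the textbook bisimulation clauses:

```python
# Sketch: the logical-relation condition [R -> P R], applied pointwise,
# coincides with the usual strong-bisimulation clauses.

def em(R, U, V):
    """Egli-Milner lifting: U [P R] V."""
    return (all(any((u, v) in R for v in V) for u in U) and
            all(any((u, v) in R for u in U) for v in V))

def is_logical_relation(F, G, R, labels):
    """(F a, G a) in [R -> P R] for every label a."""
    return all(em(R, F(a, s), G(a, t)) for a in labels for (s, t) in R)

def is_strong_bisim(F, G, R, labels):
    """The textbook two-clause definition."""
    for (s, t) in R:
        for a in labels:
            if not all(any((s2, t2) in R for t2 in G(a, t)) for s2 in F(a, s)):
                return False
            if not all(any((s2, t2) in R for s2 in F(a, s)) for t2 in G(a, t)):
                return False
    return True

# A one-state loop related to a two-state loop: a strong bisimulation.
Fmap = {('a', 'p'): {'p'}}
Gmap = {('a', 'u'): {'u', 'v'}, ('a', 'v'): {'u'}}
F = lambda a, s: Fmap.get((a, s), set())
G = lambda a, t: Gmap.get((a, t), set())
R = {('p', 'u'), ('p', 'v')}
assert is_logical_relation(F, G, R, {'a'})
assert is_strong_bisim(F, G, R, {'a'})
```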
\section{A digression on Monads}
The covariant powerset functor is an example of a monad, and the two
approaches given to extend it to {{\sf Rel}} at the end of
section \ref{sec:covariant-powerset} extend to
general monads. In the case of monads on {{\sf Set}} they are equivalent.
{{\sf Set}} satisfies the Axiom Schema of Separation:
\[ \forall u. \forall v. \exists w. \forall x. [ x\in w \leftrightarrow x\in v \wedge \phi (x,u) ] \]
This restricted form of comprehension says that for any predicate $\phi$ on a set $v$, there is a
subset of $v$ containing exactly the elements of $v$ that satisfy $\phi$. Since this is a set, we can
apply functors to it.
Moreover, classical sets have the property that any monic whose domain is a non-empty set has
a retraction. It follows that if $m$ is such a monic, then $Fm$ is also monic, where $F$ is any functor.
\begin{lemma}\label{lemma:monads-preserve-monics}
\begin{enumerate}
\item Let $F:{\sf Set}\to {\sf Set}$ be a functor, and $i: A\rightarrowtail B$ a monic, where $A\neq \emptyset$, then $Fi$ is also monic.
\item Let $M:{\sf Set}\to{\sf Set}$ be a monad, and $i:A\rightarrowtail B$ any monic, then $Mi$ is also monic.
\item Let $M:{\sf Set}\to{\sf Set}$ be a monad, then $M$ extends to a functor ${\sf Pred}\to {\sf Pred}$ over {\sf Set}.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item $i$ has a retraction which is preserved by $F$.
\item If $A$ is non-empty, then this follows from the previous remark. If $A$ is empty, then there are
two cases. If $M\emptyset = \emptyset$, then $Mi : \emptyset = M\emptyset = M A \rightarrow MB$ is
automatically monic. If $M\emptyset \neq \emptyset$, then let $r$ be any map
$B\rightarrow M\emptyset$. $MB$ is the free $M$-algebra on $B$, and therefore there is a unique
$M$-algebra homomorphism $r^{\ast} : MB\rightarrow M\emptyset$ extending $r$. $Mi$ is also an $M$-algebra homomorphism, and hence so is the composite $r^{\ast} (Mi) : M\emptyset \to M\emptyset$. Since $M\emptyset$ is
the initial $M$-algebra, this composite must be the identity; thus $Mi$ is split monic, and in particular monic.
\item Immediate. \qed
\end{enumerate}
\renewcommand{\endproof}{}
\end{proof}
This means that we can make logical predicates work for monads on {{\sf Set}}, though there are
limitations we
will not go into here. We cannot necessarily do the same for monads on arbitrary categories, and we
have already seen that this approach does not work for logical relations. In order to extend to logical
relations we have our algebraic and image factorisation approaches.
It is widely known that a large class of monads, namely those whose underlying functor preserves filtered
(or, more generally, $\alpha$-filtered) colimits, corresponds to algebraic theories. It is less commonly
understood that arbitrary monads can also be presented by operations and equations, and
that the preservation property on the functor is only used to cut the collection of operations and equations
down from a proper class to a set.
Let $M$ be an arbitrary monad on {{\sf Set}}, and $\theta: MB \to B$ be an $M$-algebra. Let $A$ be an
arbitrary set, then any element of $MA$ gives rise to an $A$-ary operation on $B$. Specifically, let
$t$ be an element of $MA$. An $A$-tuple of elements of $B$ is given by a function $e: A\to B$; we
apply $t$ to $e$ by composing $\theta$ with $Me$ and evaluating at $t$: $(\theta\circ (Me))(t)$.
The monad multiplication can be interpreted as a mechanism for applying terms to terms, and we get
equations from the functoriality of $M$ and this interpretation of the monad operation.
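For a concrete instance of this, take $M$ to be the list monad, whose algebras are monoids. The following sketch (names are ours) uses the algebra $(\mathbb{Z},+,0)$, with structure map $\theta = $ sum:

```python
# Illustrative sketch with M the list monad on Set, whose algebras are
# monoids; here the algebra is (int, +, 0) with theta = sum.

theta = sum                       # an M-algebra structure on int

def operation(t):
    """An element t of M A (a list over A) induces an A-ary operation
    on the algebra: e |-> theta(M e (t)) = theta(map(e, t))."""
    return lambda e: theta([e(a) for a in t])

t = ['x', 'x', 'y']               # t in M{x, y}: the term x + x + y
op = operation(t)
e = {'x': 2, 'y': 5}.__getitem__  # an A-tuple of elements of the algebra
assert op(e) == 2 + 2 + 5 == 9
```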
We can look at models of this algebraic theory in the category {{\sf Rel}} and interpret $MR$ as the free
model of this theory on $R$. That is the algebraic approach we followed for the covariant powerset $\pow$.
Alternatively we can follow the second approach and use image factorisation.
\[
\begin{tikzcd}[sep=large]
M R \ar[r, two heads]
\ar[d]
& \overline{M R}
\ar[d,tail]\\
M(A\times B)
\ar[r, "\langle M\pi_A{,}M\pi_B \rangle"] &
MA\times MB
\end{tikzcd}
\]
Because of the particular properties of {{\sf Set}}, monads preserve image factorisation.
\begin{lemma}\label{lemma:monads-pres-fact}
Let $M$ be a monad on {{\sf Set}}.
\begin{enumerate}
\item $M$ preserves surjections: if $f: A\twoheadrightarrow B$ is a surjection from $A$ onto $B$, then $Mf$ is also a surjection.
\item $M$ preserves image factorisations: if
\begin{tikzcd}[cramped] A \ar[r, twoheadrightarrow,"p"] & P \ar[r,tail,"i"] & B\end{tikzcd}
is the image factorisation of $f = i\circ p$, then
\begin{tikzcd}[cramped] MA \ar[r, twoheadrightarrow,"Mp"] & MP \ar[r,tail,"Mi"] & MB\end{tikzcd} is the image factorisation of $Mf$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Any surjection in {{\sf Set}} is split. The splitting is preserved by functors, and hence surjections are preserved by all functors.
\item By lemma \ref{lemma:monads-preserve-monics}, $M$ preserves both surjections and monics, hence it preserves image factorisations. \qed
\end{enumerate}
\renewcommand{\endproof}{}
\end{proof}
Given any monad $M$
on {{\sf Set}}, $MA\times MB$ is automatically an $M$-algebra with operation
$\langle \mu_A\circ (M\pi_{MA}), \mu_B\circ (M\pi_{MB}) \rangle: M(MA\times MB) \to MA\times MB$.
Moreover, $\overline{M R}$ is also an $M$-algebra.
\begin{lemma}
$\overline{MR}$ is the smallest $M$ sub-algebra of $MA\times MB$ containing the image of $R$.
\end{lemma}
\begin{proof}
This follows immediately from the fact that $\overline{M R}$ is an $M$ sub-algebra of
$MA\times MB$.
\[
\begin{tikzcd}
M(M R)
\ar[r, two heads]
\ar[d, "\mu_R"] &
M(\overline{M R})
\ar[r, tail]
\ar[d, dashed] &
M(MA\times MB)
\ar[d, "\langle \mu_A\circ (M\pi_{MA}){,}\mu_B\circ (M\pi_{MB}) \rangle" ] \\
M R
\ar[r, two heads] &
\overline{M R}
\ar[r, tail] &
MA\times MB
\end{tikzcd}
\]
In the diagram above, the bottom horizontal composite is $\langle M\pi_A,M\pi_B\rangle$, and the top
composite is $M$ applied to this. By lemma \ref{lemma:monads-pres-fact}, $M$ preserves the image
factorization in the bottom composite. It is easy to see that the outer rectangle commutes. It follows
that there is a unique map across the centre making both squares commute, and hence that
$\overline{MR}$ is an $M$ sub-algebra of $MA\times MB$.
\end{proof}
The immediate consequence of this is that $\overline{MR}$ is the free $M$-algebra
on $R$ in {{\sf Rel}}, and hence the two constructions, by free algebra and by direct
image, coincide in the case of monads on {{\sf Set}}.
\section{Monoids}\label{sec:monoids}
Bisimulation is only one of the early characterisations of equivalence for labelled transition systems.
Another was trace equivalence. That talks overtly about possible sequences
of actions in a way that bisimulation
does not. However, the sequences are buried in the recursive nature of the
definition.
We extend our notion of transition from $A$ to $A^{\ast}$, in the usual way. The following is a simple
induction:
\begin{lemma}
If $S$ and $T$ are two labelled transition systems, then $R\subseteq S\times T$ is a bisimulation iff for
all $w\in A^{\ast}$, whenever $sRt$
\begin{itemize}
\item[-] for all $\trans sw{s'}$, there is $t'$ such that $\trans tw{t'}$ and $s'Rt'$
\item[-] and for all $\trans tw{t'}$, there is $s'$ such that $\trans sw{s'}$ and $s'Rt'$.
\end{itemize}
\end{lemma}
In other words, we could have used sequences instead of single actions, and we would have got the
same notion of bisimulation (but we would have had to work harder to use it).
Another way of looking at this is to observe that the set of transition systems on $S$,
$[S\rightarrow \pow S]$, carries a monoid structure. One way of seeing that is to note that
$[S\rightarrow \pow S]$ is equivalent to the set of $\Union$-preserving endofunctions on $\pow S$.
Another is that it is the set of endofunctions on $S$ in the Kleisli category for $\pow$.
More concretely, the unit of the monoid is $\mbox{\rm id} = \eta = \lambda s. \{s\}$, and the product is
Kleisli composition: $f_0\cdot f_1 = \lambda s. \Union_{s'\in f_0(s)} f_1(s')$.
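A quick executable check of this monoid structure (representation choices are ours: transition functions are Python functions into sets):

```python
# The monoid on [S -> P S]: unit eta and Kleisli-style product, checked
# against the monoid laws on small nondeterministic transition functions.

def eta(s):
    return {s}

def prod(f0, f1):
    """f0 . f1 = lambda s. union over s' in f0(s) of f1(s')."""
    return lambda s: set().union(*(f1(s1) for s1 in f0(s)))

S = {0, 1, 2}
f = lambda s: {(s + 1) % 3, s}          # some transition functions on S
g = lambda s: {(s + 2) % 3}
h = lambda s: {s, (s * 2) % 3}

for s in S:
    assert prod(eta, f)(s) == f(s)      # left unit
    assert prod(f, eta)(s) == f(s)      # right unit
    assert prod(prod(f, g), h)(s) == prod(f, prod(g, h))(s)   # associativity
```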
Unsurprisingly, since this structure is essentially obtained from the monad, for any
$R\subseteq S\times T$, $[R\rightarrow \pow R]$ also carries the structure of a monoid, and the
projections to $[S\rightarrow \pow S]$ and $[T\rightarrow \pow T]$ are monoid homomorphisms. This
means that we could characterise strong bisimulations as relations $R$ for which the monoid
homomorphisms giving the transition systems lift to a monoid homomorphism into the relation.
\section{Weak bisimulation}
The need for a different form of bisimulation arises when modelling processes. Processes can perform
internal computations that do not correspond to actions that can be observed directly or synchronised
with. In essence,
the state of the system can evolve on its own. This is modelled by incorporating a silent $\tau$ action
into the set of labels to represent this form of computation. Strong bisimulation is then too restrictive
because it requires a close correspondence in the structure of the internal computations.
In order to remedy this, Milner introduced a notion of ``weak'' bisimulation. We follow the account given
in his book \cite{milner1989communication}, in which he refers to this notion just as ``bisimulation''.
We write $A$ for the set of possible actions including $\tau$ and $L$ for the actions not including
$\tau$.
So $L = A - \{\tau\}$ and $A = L + \{\tau\}$. If $w \in A^{\ast}$, then we write $\hat{w}$ for the sequence
obtained from $w$ by deleting all occurrences of $\tau$. So $\hat{w}\in L^{\ast}$. For example,
if $w=\tau a_0 a_1 \tau \tau a_0 \tau$, then $\hat{w} = a_0 a_1 a_0$, and if $w' = \tau\tau\tau$, then
$\hat{w'} = \epsilon$, the empty string.
\begin{definition}
Let $S$ be a labelled transition system for $A$, and $v\in L^{\ast}$, then
\[ \mbox{$\Trans sv{s'}$ iff there is a $w\in A^{\ast}$ such that $v=\hat{w}$ and $\trans sw{s'}$.} \]
We can type $\Trans{}{}{}$ as $\Trans{}{}{}: L^{\ast} \to [S\to\pow S]$, and we refer to it as the
{\em system derived from $\trans{}{}{}$.}
\end{definition}
Observe that $\Trans{}{}{}$ is not quite a transition system in the sense previously defined.
If $S$ is a labelled transition system for $A$, then the extension of $\trans{}{}{}$ to
$A^{\ast}$ gives a monoid homomorphism $A^{\ast} \to [S\to\pow S]$. However $\Trans{}{}{}$ preserves
composition but not the identity. We have therefore only a semigroup homomorphism
$L^{\ast} \to [S\to\pow S]$. This prompts the definition of a lax labelled transition system
(definition \ref{def:lax-transition-system}).
We now return to the classical definition of weak bisimulation.
\begin{definition}
If $S$ and $T$ are two labelled transition systems for $A$, then a relation
$R\subseteq S\times T$ is a {\em weak bisimulation} iff for all $a\in A$, whenever $sRt$
\begin{itemize}
\item[-] for all $\trans sa{s'}$, there is $t'$ such that $\Trans t{a}{t'}$ and $s'Rt'$
\item[-] and for all $\trans ta{t'}$, there is $s'$ such that $\Trans s{a}{s'}$ and $s'Rt'$.
\end{itemize}
\end{definition}
The combination of two different transition relations in this definition is ugly, but fortunately it is well known that we can clean it up by just using the derived relation.
\begin{lemma}\label{lemma:weak-derived}
$R$ is a weak bisimulation iff for all $a\in A$, whenever $sRt$
\begin{itemize}
\item[-] for all $\Trans s{\overline{a}}{s'}$, there is $t'$ such that $\Trans t{\overline{a}}{t'}$ and $s'Rt'$
\item[-] and for all $\Trans t{\overline{a}}{t'}$, there is $s'$ such that $\Trans s{\overline{a}}{s'}$ and $s'Rt'$
\end{itemize}
where for $x \in A$, $\overline x$ denotes $\hat{x}$: the empty word if $x = \tau$, and the one-letter word ``$x$'' otherwise.
\end{lemma}
We can now extend as before to words in $L^{\ast}$.
\begin{lemma}
$R$ is a weak bisimulation iff for all $v\in L^{\ast}$, whenever $sRt$
\begin{itemize}
\item[-] for all $\Trans sv{s'}$, there is $t'$ such that $\Trans tv{t'}$ and $s'Rt'$
\item[-] and for all $\Trans tv{t'}$, there is $s'$ such that $\Trans sv{s'}$ and $s'Rt'$.
\end{itemize}
\end{lemma}
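The derived relation and the resulting weak-bisimulation check can be sketched as follows (all names are ours; by the lemmas above it suffices to check the empty word and the single-letter words):

```python
# Sketch: the derived relation s =v=> s' (v in L*), obtained by saturating
# with tau-steps, and a weak-bisimulation check phrased entirely in terms
# of it.

def tau_star(step, s):
    """All states reachable from s by zero or more tau steps."""
    seen, todo = {s}, [s]
    while todo:
        x = todo.pop()
        for y in step('tau', x):
            if y not in seen:
                seen.add(y)
                todo.append(y)
    return seen

def derived(step, v, s):
    """States reachable from s by the word v in L*, interleaved with taus."""
    current = tau_star(step, s)
    for a in v:
        nxt = set()
        for x in current:
            for y in step(a, x):
                nxt |= tau_star(step, y)
        current = nxt
    return current

def is_weak_bisim(stepF, stepG, R, L):
    words = [()] + [(a,) for a in L]   # empty word and single letters suffice
    for (s, t) in R:
        for v in words:
            if not all(any((s2, t2) in R for t2 in derived(stepG, v, t))
                       for s2 in derived(stepF, v, s)):
                return False
            if not all(any((s2, t2) in R for s2 in derived(stepF, v, s))
                       for t2 in derived(stepG, v, t)):
                return False
    return True

# a.0 versus tau.a.0: weakly bisimilar, although not strongly.
Fm = {('a', 'p'): {'p0'}}
Gm = {('tau', 'q'): {'q1'}, ('a', 'q1'): {'q0'}}
stepF = lambda a, s: Fm.get((a, s), set())
stepG = lambda a, t: Gm.get((a, t), set())
R = {('p', 'q'), ('p', 'q1'), ('p0', 'q0')}
assert is_weak_bisim(stepF, stepG, R, ['a'])
```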
This now looks very similar to the situation for strong bisimulation. But as we have noted above,
there is a difference. Previously
our transition system was given by a monoid homomorphism $A^{\ast} \rightarrow [S\rightarrow \pow S]$.
Here the identity is not preserved and we only have a homomorphism of semigroups.
\begin{lemma}
If $S$ is a labelled transition system for $A$, then for all $v_0,v_1 \in L^{\ast}$,
$\Trans {}{v_0 v_1}{} = \ \Trans{}{v_0}{}\cdot \Trans{}{v_1}{}$.
\end{lemma}
In the following sections we present different approaches to understanding weak transition
systems.
\section{Weak bisimulation through saturation}\label{sec:weak bisim through saturation}
For this section we enrich our setting. For any $S$, $\pow S$ has a natural partial order, and hence so
do the transition systems on any set $S$, given by the inherited partial order on
$A \to [S\to \pow S]$.
\begin{definition}
Given transition systems $F: A\to [S\to\pow S]$ and $G: A\to [T\to\pow T]$, we say that $F\leq G$
iff $S=T$ and $\forall a\in A. \forall s\in S. Fas \leq Gas$. This gives a partial order {\mbox{$A${\sf{}-TS}}} that we can
view as a category.
If $A=L+\{\tau\}$, where $\tau$ is an internal (silent) action, then we shall refer to these as {\em labelled transition systems with internal action} and write the partial order as {\mbox{$(L{+}\tau)${\sf{}-TS}}}.
\end{definition}
The notion of weak bisimulation applies to transition systems with internal action, while strong
bisimulation applies to arbitrary transition systems. Our aim is to find a systematic way of deriving the notion of weak bisimulation from strong.
In the following definition we make use of the fact that $[S\to \pow S]$ is a monoid, as noted in section
\ref{sec:monoids}.
\begin{definition}
\label{def:saturated}
Let $F: (L+\{\tau\}) \to [S\to \pow S]$ be a transition system with internal action. We say that $F$ is
{\em saturated} if
\begin{enumerate}
\item\label{sat:one} $\mbox{\rm id} \leq F(\tau)$ and $F(\tau).F(\tau) \leq F(\tau)$ and
\item\label{sat:two} for all $a\in L$, $F(\tau).F(a).F(\tau) \leq F(a)$.
\end{enumerate}
We write {\mbox{$L${\sf{}-Sat-TS}}} for the full subcategory of saturated transition systems
with internal actions.
\end{definition}
These conditions are purely algebraic, and so can easily be interpreted in more general settings than
{{\sf Set}}.
Note that some of the inequalities are, in fact, equalities:
\[
F(\tau) = F(\tau) . \mbox{\rm id} \leq F(\tau).F(\tau) \leq F(\tau)
\]
hence $F(\tau).F(\tau) = F(\tau)$.
Similarly $F(a) = \mbox{\rm id}.F(a).\mbox{\rm id} \leq F(\tau).F(a).F(\tau) \leq F(a)$, therefore $F(\tau).F(a).F(\tau)=F(a)$.
Moreover, if we look at the partial order consisting of unlabelled transition systems on a set $S$, then
the fact that the monoid multiplication preserves the partial order means that
$([S\to \pow S], . , \mbox{\rm id})$ is a monoidal category. Condition \ref{def:saturated}.\ref{sat:one} says precisely
that $F(\tau)$ is a monoid in this monoidal category, and condition \ref{def:saturated}.\ref{sat:two}
that $F(a)$ is an $(F(\tau),F(\tau))$-bimodule.
The notions of weak and strong bisimulation coincide for saturated transition systems.
\begin{proposition}
Suppose $F:(L+\{\tau\})\to [S\to\pow S]$ and $G:(L+\{\tau\})\to [T\to\pow T]$ are saturated transition systems with internal actions, then $R\subseteq S\times T$ is a weak bisimulation between the systems if and only if it is a strong bisimulation between them.
\end{proposition}
\begin{proof}
In one direction, any strong bisimulation is also a weak one. In the other, suppose $R$ is a weak bisimulation, that $sRt$, and that $\trans{s}{a}{s'}$. Then by definition of weak bisimulation there is
$\Trans{t}{a}{t'}$ where $s'Rt'$. We show that $\trans{t}{a}{t'}$. There are two cases:
\begin{itemize}
\item[] $a\neq\tau$: Then, by definition of $\Trans{}a{}$, we have
$t (\trans{}\tau{})^{\ast} \trans{}a{} (\trans{}\tau{})^{\ast} t'$. But since $G$ is saturated, this implies
$\trans{t}{a}{t'}$ as required.
\item[] $a=\tau$: Then $t (\trans{}\tau{})^{\ast} t'$, and again since $G$ is saturated, this implies
$\trans{t}{\tau}{t'}$.
\end{itemize}
Hence we have $\trans{t}{a}{t'}$ and $s'Rt'$. The symmetric case is identical, and so $R$ is a strong
bisimulation.
\end{proof}
Given any transition system with internal action, there is a least saturated transition system containing
it.
\begin{proposition}
The inclusion ${\mbox{$L${\sf{}-Sat-TS}}}\hookrightarrow{\mbox{$(L{+}\tau)${\sf{}-TS}}}$ has a reflection: $\sat{(\cdot)}$.
\end{proposition}
\begin{proof}
Suppose $F:(L+\{\tau\})\to [S\to\pow S]$ is a transition system with internal action.
Then $F$ is saturated
if and only if $F(\tau)$ is a monoid, and each $F(a)$ is an $(F(\tau),F(\tau))$-bimodule.
So we construct the adjoint by
taking $\sat{F}(\tau)$ to be the free monoid on $F(\tau)$ and each $\sat{F}(a)$ to be the free
$(\sat{F}(\tau),\sat{F}(\tau))$-bimodule on $F(a)$. This construction works in settings other than {{\sf Set}}, but in {{\sf Set}} we can give a concrete construction:
\begin{itemize}
\item[] $\sat{F}(\tau) = F(\tau)^{\ast}$
\item[] $\sat{F}(a) = \sat{F}(\tau).F(a).\sat{F}(\tau)$ ($a\neq\tau$) \qed
\end{itemize}
\renewcommand{\endproof}{}
\end{proof}
\begin{proposition}
Suppose $F:(L+\{\tau\})\to [S\to\pow S]$ and $G:(L+\{\tau\})\to [T\to\pow T]$ are transition systems
with internal actions (not necessarily saturated), then $R\subseteq S\times T$ is a weak bisimulation
between $F$ and $G$ if and only if it is a strong bisimulation between $\sat{F}$ and $\sat{G}$.
\end{proposition}
\begin{proof}
This is a direct consequence of the concrete construction of the saturated reflection. It follows from
Lemma \ref{lemma:weak-derived}, since the transition relation on the saturation is the derived
transition relation on the
original transition system: $\trans{s}{a}{s'}$ in $\sat{F}$ if and only if $\Trans{s}{\overline{a}}{s'}$ with respect
to $F$ (and similarly for $G$).
\end{proof}
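The concrete construction of the saturation, and the proposition above, can be illustrated as follows (a hypothetical sketch in our notation; the example is $a.0$ versus $\tau.a.0$):

```python
# Sketch of the concrete saturation: sat F(tau) = F(tau)^* and
# sat F(a) = F(tau)^* . F(a) . F(tau)^*, followed by a check that the given
# relation is a strong bisimulation between the saturations even though it
# is not one between the original systems.

def tau_star(step, s):
    """States reachable from s by zero or more tau steps."""
    seen, todo = {s}, [s]
    while todo:
        x = todo.pop()
        for y in step('tau', x):
            if y not in seen:
                seen.add(y); todo.append(y)
    return seen

def saturate(step):
    def sat(a, s):
        if a == 'tau':
            return tau_star(step, s)
        out = set()
        for x in tau_star(step, s):
            for y in step(a, x):
                out |= tau_star(step, y)
        return out
    return sat

def is_strong_bisim(F, G, R, labels):
    return all(
        all(any((s2, t2) in R for t2 in G(a, t)) for s2 in F(a, s)) and
        all(any((s2, t2) in R for s2 in F(a, s)) for t2 in G(a, t))
        for (s, t) in R for a in labels)

# a.0 versus tau.a.0 again.
Fm = {('a', 'p'): {'p0'}}
Gm = {('tau', 'q'): {'q1'}, ('a', 'q1'): {'q0'}}
stepF = lambda a, s: Fm.get((a, s), set())
stepG = lambda a, t: Gm.get((a, t), set())
R = {('p', 'q'), ('p', 'q1'), ('p0', 'q0')}
assert not is_strong_bisim(stepF, stepG, R, ['a', 'tau'])
assert is_strong_bisim(saturate(stepF), saturate(stepG), R, ['a', 'tau'])
```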
\begin{corollary}
Suppose $F:(L+\{\tau\})\to [S\to\pow S]$ and $G:(L+\{\tau\})\to [T\to\pow T]$ are transition systems
with internal actions, and $R\subseteq S\times T$. Then the following are equivalent:
\begin{enumerate}
\item $R$ is a weak bisimulation between $F$ and $G$
\item $\sat{F}$ and $\sat{G}$ are in the appropriate logical relation:
$(\sat{F},\sat{G})\in\mbox{\rm Id}_{L+\{\tau\}}\to [R\to\pow R]$
\item $R$ is the state space of a saturated transition system in {{\sf Rel}} whose first projection is
$\sat{F}$ and whose second is $\sat{G}$.
\end{enumerate}
\end{corollary}
The consequence of this is that we now have two separate ways of giving semantics to transition
systems with inner actions. Given $F:(L+\tau)\to[S\to\pow S]$, we can just take $F$ as a transition
system. If we then apply the standard logical relations framework to this definition we get that two
such, $F$ and $G$, are related by the logical relation $\mbox{\rm Id}_{(L+\tau)} \to [R\to\pow R]$ if and only if
$R$ is a strong bisimulation between $F$ and $G$. If instead we take the semantics to be $\sat{F}$,
typed as $\sat{F}:(L+\tau)\to[S\to\pow S]$, then $\sat{F}$ and $\sat{G}$ are related by the logical
relation $\mbox{\rm Id}_{(L+\tau)} \to [R\to\pow R]$ if and only if
$R$ is a weak bisimulation between $F$ and $G$.
\section{Lax transition systems}\label{sec:lax transition systems}
Saturated transition systems still include explicit $\tau$-actions even though these are supposed to be
internal actions only indirectly observable. We can however avoid $\tau$'s appearing explicitly in the
semantics by giving a relaxed variant of the monoid semantics.
We recall that for an arbitrary set of action labels $A$, the set of $A$-labelled transition systems
$A\to[S\to\pow S]$ is isomorphic to the set of monoid homomorphisms $A^{\ast}\to[S\to\pow S]$, and
moreover that for any transition systems $F$ and $G$ and relation $R\subseteq S\times T$,
$F$ is related to $G$ by ${\mbox{\rm Id}_A}\to[R\to\pow R]$ iff $F$ is related to $G$ as a
monoid homomorphism by ${\mbox{\rm Id}_{A^{\ast}}}\to[R\to\pow R]$ iff $R$ is a strong bisimulation between $F$
and $G$.
We can model transition systems with internal actions similarly, by saying what transitions correspond
to sequences of visible actions. The price we pay is that, since $\tau$ is not visible, we have genuine
state transitions corresponding to the empty sequence. We no longer have a monoid homomorphism.
\begin{definition}
\label{def:lax-transition-system}
A {\em lax transition system} on an alphabet $L$ (not including an internal action $\tau$) is a
function $F:L^{\ast} \to [S\to\pow S]$ such that:
\begin{enumerate}
\item $\mbox{\rm id}\leq F(\epsilon)$ (reflexivity)
\item $F(vw) = F(v).F(w)$ (composition)
\end{enumerate}
\end{definition}
\begin{definition}
Let $F:(L+\{\tau\})\to [S\to\pow S]$ be a transition system with internal action, then its {\em laxification}
$\lax{F}:L^{\ast}\to [S\to\pow S]$ is the lax transition system defined by:
\begin{enumerate}
\item $\lax{F} (\epsilon) = F(\tau)^{\ast}$
\item $\lax{F} (a) = F(\tau)^{\ast}. F(a). F(\tau)^{\ast}$, for any $a\in L$.
\item $\lax{F} (vw) = \lax{F} (v) . \lax{F} (w)$.
\end{enumerate}
\end{definition}
It is straightforward to check the following.
\begin{lemma}
If $F:(L+\{\tau\})\to [S\to\pow S]$ is a transition system with internal action, then its laxification
$\lax{F}:L^{\ast}\to [S\to\pow S]$ is a lax transition system.
\end{lemma}
This laxification reproduces the derived transition system $\Trans{}{}{}$ of $F$.
Note that if $G$ is a lax transition system, then $G(w)$ depends only on $G(\epsilon)$ and the $G(a)$,
all other values are determined by composition. Note also that if $F$ is saturated, then
$\lax{F}(\epsilon) = F(\tau)$ and $\lax{F}(a) = F(a)$.
We can also go the other way. Given a lax transition system, $F:L^{\ast}\to [S\to\pow S]$, then we can
define a transition system with inner action:
$\inner{F}:(L+\{\tau\})\to [S\to\pow S]$ where
\begin{itemize}
\item $\inner{F} (\tau) = F(\epsilon)$
\item $\inner{F} (a) = F(a)$
\end{itemize}
\begin{lemma}
If $F:(L+\{\tau\})\to [S\to\pow S]$ is a transition system with internal action, then its saturation
$\sat{F}$ can be constructed as $\inner{\lax{F}}$.
\end{lemma}
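A sketch of laxification (our encoding: a lax system is a function taking a tuple of visible labels), checking the values that $\inner{(\cdot)}$ reads off and the composition law:

```python
# Sketch of laxification: lax F(epsilon) = F(tau)^*, lax F(a) saturates a
# single visible action, and longer words are forced by composition.

def tau_star(step, s):
    """States reachable from s by zero or more tau steps."""
    seen, todo = {s}, [s]
    while todo:
        x = todo.pop()
        for y in step('tau', x):
            if y not in seen:
                seen.add(y); todo.append(y)
    return seen

def lax(step):
    def run(v, s):
        current = tau_star(step, s)
        for a in v:
            current = set().union(
                *(tau_star(step, y) for x in current for y in step(a, x)))
        return current
    return run

Fm = {('tau', 'q'): {'q1'}, ('a', 'q1'): {'q0'}, ('b', 'q0'): {'q'}}
step = lambda a, s: Fm.get((a, s), set())
L = lax(step)

# inner(lax F)(tau) = lax F(epsilon) = F(tau)^*:
assert L((), 'q') == {'q', 'q1'}
# inner(lax F)(a) = lax F(a), the saturated a-step:
assert L(('a',), 'q') == {'q0'}
# lax F is compositional on words: lax F(vw) = lax F(v) . lax F(w):
assert L(('a', 'b'), 'q') == set().union(*(L(('b',), x) for x in L(('a',), 'q')))
```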
One way of looking at this is that a lax transition system is just a saturated one in thin disguise. But
from our perspective it gives us a different algebraic semantics for transition systems with inner
action that can also be made to account for weak bisimulation, and this time the $\tau$ actions do
not appear in the formal statement.
\begin{lemma}
Suppose $F:(L+\{\tau\})\to [S\to\pow S]$ and $G:(L+\{\tau\})\to [T\to\pow T]$ are transition systems
with internal actions, and $R\subseteq S\times T$. Then the following are equivalent:
\begin{enumerate}
\item $R$ is a weak bisimulation between $F$ and $G$
\item $(\lax{F},\lax{G})\in\mbox{\rm Id}_{L^{\ast}}\to [R\to\pow R]$
\item $R$ is the state space of a lax transition system in {{\sf Rel}} whose first projection is
$\lax{F}$ and whose second is $\lax{G}$.
\end{enumerate}
\end{lemma}
\section{(Semi-)Branching bisimulations}
In this section, we shall always consider two labelled transition systems $F \colon (L + \{\tau\}) \to [ S \to \pow S]$ and $G \colon (L + \{\tau\}) \to [T \to \pow T]$ with an internal action $\tau$. We begin by introducing the following notation: we say that $x \overset{\tau^*}{\to} y$, for $x$ and $y$ in $S$ (or in $T$) if and only if there is a finite, possibly empty, sequence of $\tau$ actions
\[
x \overset \tau \to \cdots \overset \tau \to y;
\]
if the sequence is empty, then we require $x = y$.
We now recall the notion of \emph{branching} bisimulation, which was introduced in~\cite{van_glabbeek_branching_1996}.
\begin{definition}\label{def:branching bisimulation}
A relation $R \subseteq S \times T$ is called a \emph{branching bisimulation} if and only if whenever $s R t$:
\begin{itemize}
\item $s \overset{a}{\to} {s'}$ implies $\bigl( (\exists t_1,t_2 \in T \ldotp t \overset {\tau^*} \to {t_1} \overset a \to {t_2} \land s R t_1 \land s' R t_2) \text{ or } ( a = \tau \land s' R t) \bigr)$,
\item $t \overset{a}{\to} {t'}$ implies $\bigl( (\exists s_1,s_2 \in S \ldotp s \overset {\tau^*} \to {s_1} \overset a \to {s_2} \land s_1 R t \land s_2 R t') \text{ or } ( a = \tau \land s R t') \bigr)$.
\end{itemize}
\end{definition}
\begin{remark}\label{rem:branching tau-case}
In particular, if $R$ is a branching bisimulation, $s R t$ and $s \overset \tau \to {s'}$ then there exists $t' \in T$ such that $t \overset {\tau^*} \to {t'}$ and $s' R t'$.
\end{remark}
We now show that branching bisimulation is also an instance of a logical relation between appropriate derived versions of $F$ and $G$.
\begin{definition}\label{def:branching saturation}
The \emph{branching saturation} of $F$, denoted by $\bsat F$, is a function
\[
\bsat F \colon (L+\{\tau\}) \to [ S \to \pow {(S \times S)}]
\]
defined as follows. Given $s \in S$ and $a \in L + \{\tau\}$,
\[
\bsat F a s = \{ (s_1,s_2) \in S \times S \mid (s \overset {\tau^*} \to s_1 \overset a \to s_2) \text{ or } (a = \tau \text{ and } s=s_1=s_2) \}.
\]
\end{definition}
\begin{theorem}\label{thm:branching iff logical relation}
Let $R \subseteq S \times T$. Then $R$ is a branching bisimulation if and only if $(\bsat F, \bsat G) \in \mbox{\rm Id}_{L+\{\tau\}} \to [R \to \pow{(R \times R)}]$.
\end{theorem}
\begin{proof}
Let us unpack the definition of the relation $\mbox{\rm Id}_{L+\{\tau\}} \to [R \to \pow{(R \times R)}]$. We have that $(\bsat F, \bsat G) \in \mbox{\rm Id}_{L+\{\tau\}} \to [R \to \pow{(R \times R)}]$ if and only if for all $a \in L+\{\tau\}$ and for all $s \in S$ and $t \in T$ such that $s R t$ we have $(\bsat F a s) [\pow{(R \times R)}] (\bsat G a t)$. By definition of $\pow{(R \times R)}$, this means that for all $(s_1,s_2) \in \bsat F a s$ there exists $(t_1,t_2)$ in $\bsat G a t$ such that $s_1 R t_1$ and $s_2 R t_2$, and, symmetrically, for all $(t_1,t_2) \in \bsat G a t$ there exists $(s_1,s_2) \in \bsat F a s$ such that $s_1 R t_1$ and $s_2 R t_2$.
Suppose then that $R$ is a branching bisimulation, consider $s R t$ and take $(s_1,s_2) \in \bsat F a s$. We have two possible cases to discuss: $a = \tau \text{ and } s=s_1=s_2$, or $s \overset {\tau^*} \to s_1 \overset a \to s_2$. In the first case, consider the pair $(t,t)$: this clearly belongs to $\bsat G a t$. In the second case, we are in the following situation:
\[
\begin{tikzcd}
s \ar[r,-,"R"] \ar[d,"\tau^*"'] & t \\
s_1 \ar[d,"a"'] \\
s_2
\end{tikzcd}
\]
If $\tau^*$ is the empty list, then $s=s_1$, hence $s_1 R t$. By definition of branching bisimulation, either $a=\tau$ and $s_2 R t$, in which case the pair $(t,t) \in \bsat G \tau t$ witnesses the condition, or there are $t_1$ and $t_2$ such that:
\[
\begin{tikzcd}[row sep=1em]
& t \ar[dd,"\tau^*"] \\
s \ar[ur,-,"R"] \ar[dr,-,"R"] \ar[dd,"a"']\\
& t_1 \ar[d,"a"] \\
s_2 \ar[r,-,"R"] & t_2
\end{tikzcd}
\]
hence $(t_1,t_2) \in \bsat G a t$. If $\tau^*=\tau^n$, with $n \ge 1$, then by Remark~\ref{rem:branching tau-case} applied to every $\tau$ in the list $\tau^*$, there exists $t'$ in $T$ such that $t \overset{\tau^*}{\to} t'$ and $s_1 R t'$. Now apply again the definition of branching bisimulation to $s_1 R t'$: either $a=\tau$ and $s_2 R t'$, in which case $(t',t') \in \bsat G \tau t$ is the required pair, or there are $t_1$ and $t_2$ in $T$ such that:
\[
\begin{tikzcd}[row sep=1em]
& t \ar[dd,"\tau^*"] \\
s \ar[ur,-,"R"] \ar[dd,"\tau^*"'] \\
& t' \ar[dd,"\tau^*"] \\
s_1 \ar[dr,-,"R"] \ar[dd,"a"'] \ar[ur,-,"R"] \\
& t_1 \ar[d,"a"] \\
s_2 \ar[r,-,"R"] & t_2
\end{tikzcd}
\]
hence $(t_1,t_2)\in \bsat G a t$. The symmetric half of the Egli--Milner condition, producing a pair in $\bsat F a s$ for each pair in $\bsat G a t$, follows in the same way from the second clause of Definition~\ref{def:branching bisimulation}. This proves that if $R$ is a branching bisimulation, then $(\bsat F, \bsat G) \in \mbox{\rm Id}_{L+\{\tau\}} \to [R \to \pow{(R \times R)}]$.
Conversely, suppose $(\bsat F, \bsat G) \in \mbox{\rm Id}_{L+\{\tau\}} \to [R \to \pow{(R \times R)}]$ and that we are in the following situation:
\[
\begin{tikzcd}
s \ar[r,-,"R"] \ar[d,"a"'] & t \\
s'
\end{tikzcd}
\]
Then we have $(s,s') \in \bsat F a s$, because indeed $s \overset {\tau^*} \to s \overset a \to s'$. By definition of the relation $\pow{(R \times R)}$, there exists $(t_1,t_2) \in \bsat G a t$ such that $s R t_1$ and $s' R t_2$. This is precisely the condition required by Definition~\ref{def:branching bisimulation}, hence $R$ is in fact a branching bisimulation.
\end{proof}
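The theorem can be machine-checked on small systems. The sketch below (all function names are ours) implements the branching saturation, the Egli--Milner lifting of $R \times R$, and the textbook branching clauses, and compares them on a relation that is a branching bisimulation and on one that is not:

```python
# Hypothetical check: branching bisimulation coincides with the
# logical-relation condition on the branching saturations.

def tau_star(step, s):
    """States reachable from s by zero or more tau steps."""
    seen, todo = {s}, [s]
    while todo:
        x = todo.pop()
        for y in step('tau', x):
            if y not in seen:
                seen.add(y); todo.append(y)
    return seen

def bsat(step):
    """Branching saturation: pairs (s1, s2) with s =tau*=> s1 -a-> s2,
    plus (s, s) when a = tau."""
    def sat(a, s):
        pairs = {(s1, s2) for s1 in tau_star(step, s) for s2 in step(a, s1)}
        if a == 'tau':
            pairs.add((s, s))
        return pairs
    return sat

def rel2(R, p, q):
    """(s1, s2) [R x R] (t1, t2)."""
    (s1, s2), (t1, t2) = p, q
    return (s1, t1) in R and (s2, t2) in R

def logical(bF, bG, R, labels):
    """Egli-Milner condition on the branching saturations."""
    return all(
        all(any(rel2(R, p, q) for q in bG(a, t)) for p in bF(a, s)) and
        all(any(rel2(R, p, q) for p in bF(a, s)) for q in bG(a, t))
        for (s, t) in R for a in labels)

def branching(stepF, stepG, R, labels):
    """The textbook two-clause definition of branching bisimulation."""
    def clause1(s, t):
        for a in labels:
            for s2 in stepF(a, s):
                ok = a == 'tau' and (s2, t) in R
                for t1 in tau_star(stepG, t):
                    for t2 in stepG(a, t1):
                        ok = ok or ((s, t1) in R and (s2, t2) in R)
                if not ok:
                    return False
        return True
    def clause2(s, t):
        for a in labels:
            for t2 in stepG(a, t):
                ok = a == 'tau' and (s, t2) in R
                for s1 in tau_star(stepF, s):
                    for s2 in stepF(a, s1):
                        ok = ok or ((s1, t) in R and (s2, t2) in R)
                if not ok:
                    return False
        return True
    return all(clause1(s, t) and clause2(s, t) for (s, t) in R)

# s -tau-> s' against an inert t: branching iff s' is also related to t.
Fm = {('tau', 's'): {"s'"}}
stepF = lambda a, x: Fm.get((a, x), set())
stepG = lambda a, x: set()
labels = ['a', 'tau']
good = {('s', 't'), ("s'", 't')}
bad = {('s', 't')}
for R in (good, bad):
    assert branching(stepF, stepG, R, labels) == \
           logical(bsat(stepF), bsat(stepG), R, labels)
assert branching(stepF, stepG, good, labels)
assert not branching(stepF, stepG, bad, labels)
```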
A weaker notion of branching bisimulation was also introduced in~\cite{van_glabbeek_branching_1996}, which we recall now.
\begin{definition}
A relation $R \subseteq S \times T$ is called a \emph{semi-branching bisimulation} if and only if whenever $sRt$:
\begin{itemize}
\item $s \overset{a}{\to} {s'}$ implies $\bigl( (\exists t_1,t_2 \in T \ldotp t \overset {\tau^*} \to {t_1} \overset a \to {t_2} \land s R t_1 \land s' R t_2)$ or $( a = \tau \land \exists t' \in T \ldotp {t \overset {\tau^*} \to t'} \land s R t' \land s' R t') \bigr)$,
\item $t \overset{a}{\to} {t'}$ implies $\bigl( (\exists s_1,s_2 \in S \ldotp s \overset {\tau^*} \to {s_1} \overset a \to {s_2} \land s_1 R t \land s_2 R t')$ or $( a = \tau \land \exists s' \in S \ldotp {s \overset {\tau^*} \to s'} \land s' R t \land s' R t') \bigr)$.
\end{itemize}
\end{definition}
Every branching bisimulation is also semi-branching, but the converse is not true. The difference between branching and semi-branching bisimulation is in what is allowed to happen in the $\tau$-case. Indeed, if $s \overset{\tau}{\to} s'$ and $sRt$, in the branching case it must be that either also $s'Rt$, or $t$ can ``evolve'' into $t_1$, for $sRt_1$, by means of zero or more $\tau$ actions, and then $t_1$ has to evolve into a $t_2$ via a $\tau$ action with $s' R t_2$. In the semi-branching case, $t$ is always allowed to evolve into $t'$ with zero or more $\tau$ steps, as long as $s$ is still related to $t'$, as well as $s' R t'$. Figure~\ref{fig:branching vs semibranching} shows this in graphical terms.
\begin{figure}
\centering
\begin{tikzcd}
s \ar[d,"\tau"'] \ar[r,"R",-] & t \\
s' \ar[ur,-,"R"']
\end{tikzcd}
\qquad
\begin{tikzcd}
s \ar[d,"\tau"'] \ar[r,"R",-] \ar[dr,"R",-] & t \ar[d,"\tau^*"] \\
s' \ar[r,-,"R"] & t'
\end{tikzcd}
\caption{Difference between branching (left) and semi-branching (right) case for $\tau$ actions.}
\label{fig:branching vs semibranching}
\end{figure}
We can prove a result analogous to Theorem~\ref{thm:branching iff logical relation} for semi-branching bisimulations. To do so, we introduce an appropriate derived version of a labelled transition system $F \colon (L+\{\tau\}) \to [S \to \pow{(S\times S)}]$.
\begin{definition}
The \emph{semi-branching saturation} of $F$, denoted by $\sbsat F$, is a function
\[
\sbsat F \colon (L+\{\tau\}) \to [ S \to \pow{(S \times S)}]
\]
defined as follows. Given $s \in S$ and $a \in L+\{\tau\}$,
\[
\sbsat F a s = \{ (s_1,s_2) \in S \times S \mid (s \overset {\tau^*} \to s_1 \overset a \to s_2) \text{ or } (a = \tau \text{ and } s_1 = s_2 \text{ and } s \overset {\tau^*} \to s_1) \}.
\]
\end{definition}
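On a finite labelled transition system the semi-branching saturation is directly computable. The following Python sketch uses an encoding of our own choosing (transitions as a set of \texttt{(source, action, target)} triples), not part of the formal development:

```python
TAU = "tau"  # the silent action

def tau_star(transitions, s):
    """All states reachable from s by zero or more tau steps."""
    seen, frontier = {s}, [s]
    while frontier:
        x = frontier.pop()
        for (p, a, q) in transitions:
            if p == x and a == TAU and q not in seen:
                seen.add(q)
                frontier.append(q)
    return seen

def sbsat(transitions, a, s):
    """Semi-branching saturation: pairs (s1, s2) with s -tau*-> s1 -a-> s2,
    together with the diagonal pairs (s1, s1) for s -tau*-> s1 when a = tau."""
    pairs = set()
    for s1 in tau_star(transitions, s):
        for (p, b, s2) in transitions:
            if p == s1 and b == a:
                pairs.add((s1, s2))
        if a == TAU:
            pairs.add((s1, s1))
    return pairs

# Example: s -tau-> u -b-> v
trans = {("s", TAU, "u"), ("u", "b", "v")}
print(sbsat(trans, "b", "s"))   # {('u', 'v')}
print(sbsat(trans, TAU, "s"))   # {('s', 'u'), ('s', 's'), ('u', 'u')}
```

Note how, unlike the branching saturation, the diagonal pairs appear at every $\tau^*$-reachable state, not only at $s$ itself.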
Notice that Remark~\ref{rem:branching tau-case} continues to hold for semi-branching bisimulations too.
\begin{theorem}
Let $R \subseteq S \times T$ be a relation. Then $R$ is a semi-branching bisimulation if and only if $(\sbsat F, \sbsat G) \in \mbox{\rm Id}_{L+\{\tau\}} \to [R \to \pow{(R \times R)}]$.
\end{theorem}
\begin{proof}
The same argument as in the proof of Theorem~\ref{thm:branching iff logical relation} applies.
\end{proof}
\section{The almost-monad}
In Section~\ref{sec:monoids}, we observed that $[S \to \pow{S}]$ enjoys a monoid structure inherited from the monad structure of the covariant powerset functor $\pow{}$. Sadly, we cannot say quite the same for $[S \to \pow{(S \times S)}]$. Indeed, consider the functor $T(A)=\pow{(A \times A)}$:
\[
\begin{tikzcd}[arrow style=tikz,row sep=0em,column sep=3em]
{\sf Set} \ar[r,"- \times -"] \ar[rr,bend angle=30,bend left,"T"] & {\sf Set} \ar[r,"\pow{}"] & {\sf Set} \\
A \ar[r,|->] \ar[d,"f"'] & A \times A \ar[r,|->] \ar[d,"f \times f"] & \pow{(A \times A)} \ar[d,"\pow{(f\times f)}"] \\[2em]
B \ar[r,|->] & B \times B \ar[r,|->] & \pow{(B \times B)}
\end{tikzcd}
\]
where $\pow{(f\times f)}(S) = \{\bigl(f(x),f(y)\bigr) \mid (x,y) \in S\}$. We can define two natural transformations $\eta \colon \mbox{\rm Id}_{\sf Set} \to T$ and $\mu \colon T^2 \to T$ as follows: $\eta_A (a) = \{(a,a)\}$ and
\[
\begin{tikzcd}[row sep=0em,arrow style=tikz]
\pow{}\bigl( \pow{(A \times A)} \times \pow{(A \times A)} \bigr) \ar[r,"\mu_A"] & \pow{(A \times A)} \\
U \ar[r,|->] & \bigcup\limits_{(V,W) \in U} (V \cup W)
\end{tikzcd}
\]
It is not difficult to see that $\eta$ and $\mu$ are indeed natural, and that the following square commutes for every set $A$:
\[
\begin{tikzcd}[arrow style=tikz]
T^3 A \ar[r,"\mu_{TA}"] \ar[d,"T\mu_A"'] & T^2 A \ar[d,"\mu_A"] \\
T^2 A \ar[r,"\mu_A"'] & TA
\end{tikzcd}
\]
However, although the left triangle in the following diagram commutes, the right one fails to do so in general:
\[
\begin{tikzcd}[arrow style=tikz]
TA \ar[r,"\eta_{TA}"] \ar[dr,"\mbox{\rm id}_{TA}"'] & T^2 A \ar[d,"\mu_A"] & TA \ar[l,"T\eta_A"'] \ar[dl,"\mbox{\rm id}_{TA}"] \\
& TA
\end{tikzcd}
\]
Indeed, given $S \subseteq A \times A$, it is true that $S \cup S = S$, but
\[
\mu_A\bigl(T\eta_A(S)\bigr) = \mu_A\Bigl( \left\{ \bigl( \{(x,x)\}, \{(y,y)\} \bigr) \mid (x,y) \in S \right\} \Bigr) = \bigcup_{(x,y)\in S} \bigl( \{(x,x)\} \cup \{(y,y)\} \bigr) \ne S.
\]
This means that $(T,\eta,\mu)$ falls short of being a monad: it is only a ``left-semi-monoid'' in the category of endofunctors and natural transformations on ${\sf Set}$, in the sense that $\eta$ is only a left unit for the multiplication $\mu$.
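The failure of the right unit law is easy to witness on a concrete element. A small Python sketch (the finite encoding of elements of $T(A)$ as frozensets of pairs is ours):

```python
def eta(a):
    """eta_A(a) = {(a, a)}"""
    return frozenset({(a, a)})

def mu(U):
    """mu_A: union of V cup W over all pairs (V, W) in U."""
    out = set()
    for (V, W) in U:
        out |= set(V) | set(W)
    return frozenset(out)

def T_eta(S):
    """T(eta_A): apply eta componentwise to each pair in S."""
    return frozenset((eta(x), eta(y)) for (x, y) in S)

S = frozenset({(1, 2)})
assert mu(frozenset({(S, S)})) == S   # left unit law: mu . eta_{TA} = id
print(mu(T_eta(S)))                   # frozenset({(1, 1), (2, 2)}), not S
```

The off-diagonal pair $(1,2)$ is destroyed by $T\eta_A$ followed by $\mu_A$, exactly as in the displayed computation above.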
One can go further, and build up the ``Kleisli non-category'' associated to $(T,\eta,\mu)$, following the usual definition for Kleisli category of a (proper) monad, where morphisms $A \to B$ are functions $A \to \pow{(B \times B)}$, and composition of $f \colon A \to \pow{(B \times B)}$ and $g \colon B \to \pow{(C \times C)}$ is the composite in ${\sf Set}$:
\[
\begin{tikzcd}[row sep=0em,arrow style=tikz]
A \ar[r,"f"] & TB \ar[r,"Tg"] & T^2 C \ar[r,"\mu_C"] & TC \\
a \ar[r,|->] & f(a) \ar[r,|->] & \{ \bigl(g(x),g(y)\bigr) \mid (x,y) \in f(a) \} \ar[r,|->] & \bigcup\limits_{(x,y)\in f(a)} (g(x) \cup g(y))
\end{tikzcd}
\]
This composition law has $\eta$ as a left-but-not-right identity. Whereas the set of endomorphisms on $A$ in the Kleisli category of a proper monad is always a monoid with the multiplication defined as the composition above, here we get that $[A \to \pow{(A \times A)}]$ is only a left-semi-monoid.
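The one-sidedness of the unit can also be checked concretely. A Python sketch of the composition law (our own encoding of morphisms $A \to \pow{(A \times A)}$ as functions returning frozensets of pairs):

```python
def eta(a):
    """The would-be identity: eta(a) = {(a, a)}."""
    return frozenset({(a, a)})

def then(f, g):
    """The Kleisli-style composite mu . Tg . f from the text:
    (f ; g)(a) = union over (x, y) in f(a) of g(x) cup g(y)."""
    def h(a):
        out = set()
        for (x, y) in f(a):
            out |= g(x) | g(y)
        return frozenset(out)
    return h

f = lambda a: frozenset({(a + 1, a + 2)})
assert then(eta, f)(0) == f(0)   # eta is a left identity
print(then(f, eta)(0))           # frozenset({(1, 1), (2, 2)}): not f(0),
                                 # so eta fails to be a right identity
```

Post-composing with $\eta$ again flattens every pair onto the diagonal, which is why only a left-semi-monoid survives.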
We can define a partial order on $[A \to \pow{(A \times A)}]$ in a canonical way, by setting $f \le g$ if and only if $f(a) \subseteq g(a)$ for all $a \in A$; by doing so, we can regard $[A \to \pow{(A \times A)}]$ as a category. The multiplication $f \cdot g \colon A \to \pow{(A \times A)}$, defined as $f \cdot g (a)=\bigcup_{(x,y)\in f(a)} \bigl(g(x) \cup g(y)\bigr)$, preserves the partial order, therefore $[A \to \pow{(A \times A)}]$ is a ``left-semi-monoidal'' category.
\section{Branching and semi-branching saturated systems}
In this section we investigate the properties of $\bsat F(\tau)$ and $\sbsat F(\tau)$ as objects of $[S \to \pow{(S \times S)}]$, for $F \colon (A+\{\tau\}) \to [S \to \pow{(S \times S)}]$, to explore whether it is possible to define an appropriate notion of branching or semi-branching saturated system, in which strong and branching (or semi-branching) bisimulations coincide; cf.\ the weak case in Sections~\ref{sec:weak bisim through saturation} and~\ref{sec:lax transition systems}.
\begin{lemma}
$\eta_S \le \bsat F (\tau)$, but $\bsat F(\tau) \cdot \bsat F(\tau) \nleq \bsat F(\tau)$ in general.
\end{lemma}
\begin{proof}
By definition, the pair $(s,s)$, for $s \in S$, belongs to $\bsat F \tau (s)$, hence $\eta_S \le \bsat F (\tau)$.
Let now $(x,y) \in (\bsat F(\tau) \cdot \bsat F(\tau))(s) = \bigcup_{(s_1,s_2) \in \bsat F \tau (s)} \bigl( \bsat F\tau (s_1) \cup \bsat F \tau(s_2) \bigr)$: we want to check whether $(x,y) \in \bsat F \tau (s)$. Suppose that $(x,y) \in \bsat F \tau (s_1)$ for some $(s_1,s_2) \in \bsat F \tau (s)$. Then we are in one of the following four situations:
\begin{enumerate}
\item
$
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,"\tau"] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,"\tau"] & y
\end{tikzcd}
$
\item
$
\begin{tikzcd}
s \ar[r,equal] & s_1 \ar[r,equal] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,"\tau"] & y
\end{tikzcd}
$
\item
$
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,"\tau"] \ar[d,equal] & s_2 \\
& x \ar[r,equal] & y
\end{tikzcd}
$
\item
$
\begin{tikzcd}[row sep=1em,column sep=1em]
s \ar[r,equal] & s_1 \ar[r,equal] \ar[d,equal] & s_2 \\
& x \ar[r,equal] & y
\end{tikzcd}
$
\end{enumerate}
In cases 1 and 2, we can conclude that
$
\begin{tikzcd}[cramped,sep=small]
s \ar[r,"\tau^*"] & x \ar[r,"\tau"] & y
\end{tikzcd}
$, while in case 4 we get $s=x=y$, hence $(x,y) \in \bsat F \tau (s)$. In case 3, however, we have $x=y=s_1$, so whenever $s \ne s_1$ and $s_1$ has no $\tau$-loop we get $(x,y)\notin \bsat F \tau (s)$, as it is neither the case that $s=x=y$ nor
$
\begin{tikzcd}[cramped,sep=small]
s \ar[r,"\tau^*"] & x \ar[r,"\tau"] & y.
\end{tikzcd}
$
\end{proof}
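The failing case in the proof needs nothing more than a single silent step with $s \ne s_1$. A Python sketch of the counterexample (the encoding of transitions as triples is ours):

```python
TAU = "tau"

def tau_star(transitions, s):
    """States reachable from s by zero or more tau steps."""
    seen, frontier = {s}, [s]
    while frontier:
        x = frontier.pop()
        for (p, a, q) in transitions:
            if p == x and a == TAU and q not in seen:
                seen.add(q)
                frontier.append(q)
    return seen

def bsat(transitions, a, s):
    """Branching saturation: pairs (x, y) with s -tau*-> x -a-> y,
    plus the single diagonal pair (s, s) when a = tau."""
    pairs = set()
    for x in tau_star(transitions, s):
        for (p, b, y) in transitions:
            if p == x and b == a:
                pairs.add((x, y))
    if a == TAU:
        pairs.add((s, s))
    return pairs

def then(f, g):
    """The multiplication of the previous section."""
    def h(s):
        out = set()
        for (x, y) in f(s):
            out |= g(x) | g(y)
        return out
    return h

trans = {("s", TAU, "s1")}              # one tau step, no loops
f = lambda s: bsat(trans, TAU, s)
print(then(f, f)("s") - f("s"))         # {('s1', 's1')} escapes bsat F tau (s)
```

The diagonal pair at $s_1$ appears in the composite but not in $\bsat F \tau (s)$, witnessing $\bsat F(\tau) \cdot \bsat F(\tau) \nleq \bsat F(\tau)$.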
It turns out, however, that the semi-branching saturation of $F$ behaves much better than $\bsat F$.
\begin{lemma}
$\sbsat F (\tau)$ is a left-semi-monoid in $[S \to \pow{(S \times S)}]$, and $\sbsat F (a)$ is a left $\sbsat F (\tau)$-module for all $a \in A$.
\end{lemma}
\begin{proof}
Again, it is immediate to see that $\eta_S \le \sbsat F(\tau)$, because
$
\begin{tikzcd}[cramped,sep=small]
s \ar[r,"\tau^*"] & s
\end{tikzcd}
$
for any $s$, given that $\tau^*$ can be the empty list of $\tau$'s.
Now we prove that $\sbsat F (\tau) \cdot \sbsat F (\tau) \le \sbsat F(\tau)$. Let $s \in S$ and $(x,y) \in (\sbsat F (\tau) \cdot \sbsat F (\tau))(s)$. Then there exists a pair $(s_1,s_2) \in \sbsat F \tau (s)$ such that $(x,y) \in \sbsat F \tau (s_1)$ or $(x,y) \in \sbsat F \tau (s_2)$. Suppose that $(x,y) \in \sbsat F \tau (s_1)$, then we are in one of the four following cases:
\begin{enumerate}
\item
$
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,"\tau"] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,"\tau"] & y
\end{tikzcd}
$
\item
$
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,equal] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,"\tau"] & y
\end{tikzcd}
$
\item
$
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,"\tau"] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,equal] & y
\end{tikzcd}
$
\item
$
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,equal] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,equal] & y
\end{tikzcd}
$
\end{enumerate}
In every case, we can conclude that $(x,y) \in \sbsat F \tau (s)$. Thus $\sbsat F(\tau)$ is a left-semi-monoid.
Finally, we show that $\sbsat F (\tau) \cdot \sbsat F (a) \le \sbsat F(a)$ for all $a \in A$. Let $s \in S$ and consider $(x,y) \in (\sbsat F \tau \cdot \sbsat F a)(s)$. Then $(x,y) \in \sbsat F a (s_1)$ or $(x,y) \in \sbsat F a (s_2)$ for some $(s_1,s_2) \in \sbsat F \tau (s)$. In the first case (and similarly for the second), it is
\[
\text{either }
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,"\tau"] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,"a"] & y
\end{tikzcd}
\text{ or }
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,equal] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,"a"] & y
\end{tikzcd}
\]
and in both cases we have $(x,y) \in \sbsat F a (s)$, as required.
\end{proof}
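Both inequalities of the lemma can be checked exhaustively on any finite system. A Python sketch doing so on a small LTS with a $\tau$-cycle (the encoding is ours; the assertions are instances of the lemma, not its proof):

```python
TAU = "tau"

def tau_star(transitions, s):
    seen, frontier = {s}, [s]
    while frontier:
        x = frontier.pop()
        for (p, a, q) in transitions:
            if p == x and a == TAU and q not in seen:
                seen.add(q)
                frontier.append(q)
    return seen

def sbsat(transitions, a, s):
    """Semi-branching saturation, as defined above."""
    pairs = set()
    for s1 in tau_star(transitions, s):
        for (p, b, s2) in transitions:
            if p == s1 and b == a:
                pairs.add((s1, s2))
        if a == TAU:
            pairs.add((s1, s1))
    return pairs

def then(f, g):
    def h(s):
        out = set()
        for (x, y) in f(s):
            out |= g(x) | g(y)
        return out
    return h

trans = {("s", TAU, "a"), ("a", TAU, "b"), ("b", TAU, "a"),
         ("a", "act", "c"), ("b", "act", "d")}
states = {"s", "a", "b", "c", "d"}
f_tau = lambda s: sbsat(trans, TAU, s)
f_act = lambda s: sbsat(trans, "act", s)
# left-semi-monoid law, left-module law, and the unit bound, state by state:
ok = all(then(f_tau, f_tau)(s) <= f_tau(s) and
         then(f_tau, f_act)(s) <= f_act(s) and
         (s, s) in f_tau(s)
         for s in states)
print(ok)   # True
```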
\begin{remark}
It is not true, in general, that $\sbsat F a \cdot \sbsat F \tau \le \sbsat F a$. Indeed, consider $s \in S$ and $(x,y) \in (\sbsat F a \cdot \sbsat F \tau)(s)=\bigcup_{(s_1,s_2) \in \sbsat F a (s)} \bigl(\sbsat F \tau (s_1) \cup \sbsat F \tau (s_2)\bigr)$. Then the following is one of four possible scenarios:
\[
\begin{tikzcd}
s \ar[r,"\tau^*"] & s_1 \ar[r,"a"] \ar[dr,"\tau^*"] & s_2 \\
& & x \ar[r,"\tau"] & y
\end{tikzcd}
\]
where it is clear that $(x,y)\notin \sbsat F a (s)$.
\end{remark}
\section{The category {\sf Meas}}
Our next goal is to discuss bisimulation for continuous Markov processes (see \cite{panangaden2009labelled,de_vink_bisimulation_1999}). In order to do this we need to step cautiously out of the world of sets and functions, and into that of measurable spaces and measurable functions.
We recall that a measurable space $(X,\Sigma)$ is a set $X$ equipped with a $\sigma$-algebra, $\Sigma$, the algebra of measurable sets. A measurable function $f \colon (X,\Sigma_X)\to (Y,\Sigma_Y)$ is a function $f\colon X\to Y$ such that if $U$ is a measurable set of $(Y,\Sigma_Y)$, then $f^{-1} U$ is a measurable set of $(X,\Sigma_X)$. Together these form a category, {\sf Meas}.
\begin{lemma}
{\sf Meas} has all finite limits and $\Gamma = {\sf Meas}(1,-) \colon {\sf Meas}\to{\sf Set}$ preserves them.
\end{lemma}
\begin{proof}
Let $F \colon D \to {\sf Meas}$ be a functor from a finite category $D$. Then
$\varprojlim F$
is the measurable space on the set $\varprojlim (\Gamma F)$
equipped with the least
$\sigma$-algebra making the projections $\varprojlim (F)\to F d$ measurable.
\end{proof}
\begin{lemma}
{\sf Meas} has coequalisers. If
\begin{tikzcd}
(X,\Sigma_X) \ar[r, "f", shift left]
\ar[r, "g" below, shift right] & (Y,\Sigma_Y)
\end{tikzcd}
is a pair of parallel measurable functions, then their coequaliser is
$e \colon (Y,\Sigma_Y)\to (Y/{\sim},\overline\Sigma)$, where $\sim$ is the
equivalence relation on $Y$ generated by $fx\sim gx$, and
$\overline\Sigma$ is the largest $\sigma$-algebra on $Y/{\sim}$ making $Y\to Y/{\sim}$ measurable, {i.e.}
$\overline\Sigma = \{ V \ |\ e^{-1} V \in \Sigma_Y \}$.
\end{lemma}
\begin{corollary}
A morphism $e:(Y,\Sigma_Y) \to (Z,\Sigma_Z)$ in {\sf Meas} is a regular epi if and only if $\Gamma e$ is a surjection in {{\sf Set}}, and $U\in\Sigma_Z$ iff
$e^{-1}U\in\Sigma_Y$.
\end{corollary}
\begin{corollary}
Any morphism in {\sf Meas} factors essentially uniquely as a regular epi followed by a monomorphism.
\end{corollary}
However, {\sf Meas} is not regular, because the pullback of a regular epi need not be a regular epi, as the following counterexample shows:
\begin{example}
Let $(Y,\Sigma_Y)$ be the measurable space on $Y=\{a_0,a_1,b_0,b_1\}$ with
$\Sigma_Y$ generated by the sets $\{a_0,b_0\}$ and $\{a_1,b_1\}$. Let
$(Z,\Sigma_Z)$ be the measurable space on $Z=\{a'_0,a'_1,b'\}$, where the
only measurable sets are $\emptyset$ and $Z$. Let $e: Y\to Z$ be given by
$e(a_i)= a'_i$, and $e(b_i) = b'$. Then $e$ is a regular epi. Now let
$(X,\Sigma_X)$ be the measurable space on
$X=\{a'_0,a'_1\}$ where $\Sigma_X = \{\emptyset, X\}$, and let $i:X\to Z$
be the inclusion of $X$ in $Z$. Then $i^{*}Y = \{a_0,a_1\}$ with
$\sigma$-algebra generated by the singletons, but $i^{*}e$ is not regular
epi because $(i^{*}e)^{-1}\{a'_0\}=\{a_0\}$ is measurable,
but $\{a'_0\}$ is not.
\end{example}
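Since everything here is finite, the example can be verified mechanically. A Python sketch (our own encoding of finite $\sigma$-algebras as sets of frozensets; note that the generators $\{a_0,b_0\}$ and $\{a_1,b_1\}$, pairing points across the fibres of $e$, are the ones that make $e$ a regular epi onto the trivial $\sigma$-algebra):

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def gen_sigma(universe, gens):
    """Sigma-algebra generated on a finite universe: close the generators
    under complement and binary union."""
    U = frozenset(universe)
    sets = {frozenset(), U} | {frozenset(g) for g in gens}
    changed = True
    while changed:
        changed = False
        for A in list(sets):
            for B in [U - A] + [A | C for C in list(sets)]:
                if B not in sets:
                    sets.add(B)
                    changed = True
    return sets

Y = {"a0", "a1", "b0", "b1"}
Sigma_Y = gen_sigma(Y, [{"a0", "b0"}, {"a1", "b1"}])
e = {"a0": "A0", "a1": "A1", "b0": "B", "b1": "B"}
Z = {"A0", "A1", "B"}
preim = lambda V: frozenset(y for y in e if e[y] in V)
# e is a regular epi onto the trivial sigma-algebra on Z: the only V
# with measurable preimage are the empty set and Z itself.
assert {frozenset(V) for V in powerset(Z) if preim(V) in Sigma_Y} \
       == {frozenset(), frozenset(Z)}
# Restricting Sigma_Y to the pullback {a0, a1} makes the singletons
# measurable, so the pulled-back map fails the regular-epi test:
# {a0} is measurable although {A0} is not.
restricted = {S & {"a0", "a1"} for S in Sigma_Y}
print(frozenset({"a0"}) in restricted)   # True
```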
The consequence of this is that {\sf Meas} has all the apparatus to construct a relational calculus, but that calculus does not have all the properties we expect. Specifically it is not an allegory.
\section{Probabilistic bisimulation}
We follow the standard approach by defining a continuous Markov process to
be a coalgebra for the Giry functor. For simplicity we will work with
unlabelled processes.
\begin{definition}[Giry monad]
Let $(X,\Sigma_X)$ be a measurable space. The Giry functor, $\Giry$,
is defined as follows,
$\Giry (X,\Sigma_X) = (\Giry X, \Giry\Sigma_X)$:
\begin{itemize}
\item $\Giry X$ is the set of sub-probability measures on $(X,\Sigma_X)$.
\item $\Giry\Sigma_X$ is the least $\sigma$-algebra on $\Giry X$ such that for every $U\in\Sigma_X$, $\lambda\pi. \pi(U)$ is
measurable.
\end{itemize}
If $f: (X,\Sigma_X)\to (Y,\Sigma_Y)$ is a measurable function, then
$\Giry f (\pi) = \lambda V\in\Sigma_Y. \pi (f^{-1} V)$. $\Giry$ forms part of a monad in which the unit maps a point $x$ to the Dirac measure for $x$, and the multiplication is defined by integration~\cite{giry_categorical_1982}.
\end{definition}
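In the discrete, finite case the definition can be sketched directly in Python. The encoding is our own (a sub-probability measure on a finite set as a dict from points to masses, total at most $1$); the monad structure then specialises as follows:

```python
def giry_map(f, pi):
    """(Giry f)(pi) = pushforward along f: (Giry f)(pi)(V) = pi(f^{-1} V)."""
    out = {}
    for x, p in pi.items():
        out[f(x)] = out.get(f(x), 0.0) + p
    return out

def unit(x):
    """The unit of the monad: the Dirac measure at x."""
    return {x: 1.0}

def mult(weighted):
    """The multiplication, which in the discrete case is a finite mixture:
    given pairs (w_i, pi_i), return sum_i w_i * pi_i."""
    out = {}
    for w, pi in weighted:
        for x, p in pi.items():
            out[x] = out.get(x, 0.0) + w * p
    return out

# pushforward and flattening on small examples
assert giry_map(str.upper, {"a": 0.5, "b": 0.25}) == {"A": 0.5, "B": 0.25}
assert mult([(0.5, unit("x")), (0.25, {"x": 0.5, "y": 0.5})]) \
       == {"x": 0.625, "y": 0.125}
```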
\begin{definition}[continuous Markov process]
A {\em continuous Markov process} is a coalgebra in {\sf Meas} for the Giry functor, {i.e.} a continuous Markov process with state space
$(S,\Sigma_S)$ is a measurable function $F : (S,\Sigma_S)\to \Giry (S,\Sigma_S)$. A {\em homomorphism of continuous Markov processes} is simply a homomorphism of coalgebras.
\end{definition}
There are now two similar, but slightly different approaches to defining the notion of a probabilistic bisimulation. Panangaden \cite{panangaden2009labelled} follows Larsen and Skou's original definition for the discrete case. This begins by enabling a state space reduction for a single process and generates a notion of bisimulation between processes as a by-product. The second is the standard notion of bisimulation of coalgebras, as described by Rutten \cite{rutten_universal_2000}.
We begin with Panangaden's extension of the original definition of
Larsen and Skou \cite{panangaden2009labelled,larsen_bisimulation_1991}.
\begin{definition}[Strong probabilistic bisimulation]\label{def:strong-prob-bisim-1}
Suppose $F: S \to \Giry S$ is a continuous Markov process, then an equivalence relation $R$ on $S$ is a {\em (strong probabilistic) bisimulation} if and only if whenever $sRs'$, then for all $R$-closed measurable sets $U\in\Sigma_S$, $F s U = F s' U$.
\end{definition}
We note that the $R$-closed measurable sets are exactly those inducing the
$\sigma$-algebra on $S/R$, and hence that this definition of equivalence
corresponds to the ability to quotient the state space to give a continuous Markov process on $S/R$.
\begin{lemma}
An equivalence relation $R$ on $(X,\Sigma_X)$ is a strong probabilistic bisimulation relation if and only if when we equip $X/R$ with the largest $\sigma$-algebra such that $X\to X/R$ is measurable, $X/R$ carries the structure of a Giry coalgebra and the quotient is a coalgebra homomorphism in {\sf Meas}.
\end{lemma}
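For a finite process in which all subsets are measurable, the definition can be checked exhaustively: the $R$-closed measurable sets are exactly the unions of equivalence classes. A Python sketch (our own encoding of a kernel as a dict of dicts; the example process anticipates the coin toss used later in the text):

```python
from itertools import combinations

def mass(F, s, U):
    """F s U for a finite kernel F: state -> dict of successor masses."""
    return sum(p for x, p in F[s].items() if x in U)

def is_strong_bisimulation(F, partition):
    classes = [frozenset(c) for c in partition]
    # R-closed sets = unions of equivalence classes
    closed = [frozenset().union(*combo)
              for r in range(len(classes) + 1)
              for combo in combinations(classes, r)]
    return all(abs(mass(F, s1, U) - mass(F, s2, U)) < 1e-12
               for c in classes for s1 in c for s2 in c for U in closed)

# Single toss of a fair coin: Start, Head, Tail
F = {"S": {"H": 0.5, "T": 0.5}, "H": {"H": 1.0}, "T": {"T": 1.0}}
print(is_strong_bisimulation(F, [{"S"}, {"H", "T"}]))   # True
print(is_strong_bisimulation(F, [{"S", "H"}, {"T"}]))   # False: S and H
                                                        # disagree on {T}
```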
This definition assumes that $R$ is total. However that is not essential.
We could formulate it for relations that are symmetric and transitive, but
not necessarily total (partial equivalence relations). In this case we
have a correspondence with subquotients of the coalgebra. We do, however, have to be careful that the domain of $R$ is a well-defined sub-algebra.
Panangaden goes on to define a bisimulation between two coalgebras. We
simplify his definition as we do not consider specified initial states.
Given a binary relation $R$ between $S$ and $T$, we extend $R$ to a binary relation on the single set $S+T$. In order to apply the previous definition, we will want the equivalence relation on $S+T$ generated by $R$.
Now $(S+T)\times (S+T) = (S\times S) + (S\times T) + (T\times S) + (T\times T)$, and each of these components has a simple relation derived from $R$, specifically $R\Op R$, $R$, $\Op R$ and $\Op R R$.
\begin{definition}[z-closed]
$R \subseteq S\times T$ is {\em z-closed} iff $R\Op R R \subseteq R$, in other words, iff whenever $sRt \wedge s_1Rt \wedge s_1Rt_1$ then $sRt_1$.
\end{definition}
\begin{lemma}
$R \subseteq S \times T$ is z-closed if and only if
$R^{\ast} = R\Op R + R + \Op R + \Op R R$ is transitive as a
relation on
$(S+T)\times (S+T)$. Since $R^{\ast}$ is clearly symmetric,
$R$ is z-closed iff $R^{\ast}$ is a partial equivalence relation.
\end{lemma}
Secondly, given continuous Markov processes $F$ on $S$ and $G$ on $T$ we can define
their sum $F+G$ on $S+T$:
\begin{center}
\begin{tabular}{rclcl}
$(F+G) x U$ & $=$ & $F x (U\cap S)$ & if & $x\in S$\\
& $=$ & $G x (U\cap T)$ & if & $x\in T$
\end{tabular}
\end{center}
We can now make a definition that seems to us to contain the essence of
Panangaden's approach:
\begin{definition}[strong probabilistic bisimulation between
processes]\label{def:strong-prob-bisim-2}
$R$ is a strong probabilistic bisimulation between the continuous
Markov processes $F$ on $S$ and $G$ on $T$ iff
$R^{\ast} = R\Op R + R + \Op R + \Op R R$ is a strong probabilistic bisimulation as defined in Definition \ref{def:strong-prob-bisim-1}
on the sum process $F+G$ on $S+T$.
\end{definition}
Note that any such relation will be z-closed. Given that $R^{\ast}$ must
be total, it also induces an isomorphism between quotients of the
continuous Markov processes.
This definition corresponds exactly to what we get by taking the obvious
logical relations approach.
\paragraph{Logical relations of continuous Markov Processes}
Given a measurable space $(S,\Sigma_S)$, we treat $\Sigma_S$ as a subset
of the function space $S\to 2$, and use the standard mechanisms of
logical relations to extend a relation
$R\subseteq S\times T$ between two measurable spaces to a relation
$R_\Sigma$ between $\Sigma_S$ and $\Sigma_{T}$: $UR_\Sigma V$ if and
only if $\forall s,t. sRt \implies (s\in U \iff t\in V)$.
\begin{lemma}\begin{enumerate}
\item If $R$ is an equivalence relation then $UR_\Sigma V$ iff $U=V$ and $U$ is $R$-closed.
\item If $R$ is z-closed, then $UR_\Sigma V$ iff $U+V$ is an $R^{\ast}$-closed subset of $S+T$.
\item If $R$ is the graph of a function $f:S\to T$, then
$UR_\Sigma V$ iff $U=f^{-1}V$.
\end{enumerate}
\end{lemma}
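In the finite case the extension $R_\Sigma$ is a one-liner, and part 3 of the lemma is easy to observe. A Python sketch (encodings ours):

```python
def r_sigma(R, U, V):
    """U R_Sigma V iff every related pair agrees: s in U <=> t in V."""
    return all((s in U) == (t in V) for (s, t) in R)

# R as the graph of the function f: 0 -> "a", 1 -> "b"
R = {(0, "a"), (1, "b")}
assert r_sigma(R, {0}, {"a"})        # U = f^{-1} V
assert not r_sigma(R, {0}, {"b"})    # U != f^{-1} V
```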
Unpacking the definition of the Giry functor, a coalgebra structure now has type $S\to (S\to 2)\to [0,1]$. We again apply the standard
machinery to this.
\begin{definition}[logical relation of continuous Markov processes]
If $R\subseteq S\times T$ is a relation between the state spaces of
continuous Markov processes $F: S\to\Giry S$ and $G: T\to\Giry T$,
then $R$ is a {\em logical relation of continuous Markov processes} iff
whenever $sRt$ and $UR_\Sigma V$, $F s U = G t V$.
\end{definition}
The following lemmas follow readily from the definitions.
\begin{lemma}
If $R\subseteq S\times T$ is a total and onto z-closed relation between continuous Markov processes $F: S\to\Giry S$ and
$G: T\to\Giry T$, then $R$ is a logical relation of continuous Markov processes if and only if $R$ is a strong probabilistic
bisimulation.
\end{lemma}
\begin{lemma}
If $R\subseteq S\times T$ is the graph of a measurable function $f$ between continuous Markov processes $F: S\to\Giry S$ and
$G: T\to\Giry T$, then $R$ is a logical relation of continuous Markov processes if and only if $f$ is a homomorphism of continuous Markov processes.
\end{lemma}
\begin{proof}
Observe that $f$ is a homomorphism if and only if for all $s\in S$ and
$V\in\Sigma_{T}$, $G (fs) V = F s (f^{-1}V)$.
\end{proof}
So logical relations capture both the concept of strong probabilistic
bisimulation (given that the candidate relations are restricted in
nature), and the concept of homomorphism of systems. But they do not
capture everything.
\paragraph{{$\Giry$}-bisimulation}
Recall from~\cite{rutten_universal_2000} that for a functor $H \colon \sf C \to \sf C$ and two $H$-coalgebras $f \colon A \to HA$ and $g \colon B \to HB$, an \emph{$H$-bisimulation} between $f$ and $g$ is a $H$-coalgebra $h \colon C \to HC$ together with two coalgebra-homomorphisms $l \colon C \to A$ and $r \colon C \to B$, that is, it is a span in the category of coalgebras for $H$:
\[
\begin{tikzcd}
A \ar[d,"f"'] & \ar[l,"l"'] C \ar[r,"r"] \ar[d,"h"] & B \ar[d,"g"] \\
HA & \ar[l,"Hl"'] HC \ar[r,"Hr"] & HB
\end{tikzcd}
\]
where the above diagram is required to be commutative.
\begin{definition}
A {$\Giry$}-bisimulation is simply an $H$-bisimulation in the category {\sf Meas} where the functor $H$ is $\Giry$.
\end{definition}
It is implicit in this definition that a bisimulation includes a coalgebra structure, and is not simply a relation. Where the functor $H$ corresponds to a traditional algebra generated by first-order terms and equations, the algebraic structure on the relation is unique. But that is not the case here.
\begin{example}\label{example:algebra-not-unique}
Consider a continuous Markov process
$F: { S}\to \Giry { S}$, then ${S}\times{ S}$ typically carries a number of continuous Markov process structures for
which both projections are homomorphisms. For example:
\begin{enumerate}
\item a ``two independent copies'' structure given by:
\[FF (s,s') (U,U') = (F s U) \times (F s' U')\]
\item a ``two observations of a single copy'' structure
given by:
\[\begin{array}{rcll}
F^2 (s,s') (U,U') & = & F s (U\cap U') & \mbox{ if $s=s'$}\\
& = & F s U \times F s' U' & \mbox{ if $s\neq s'$}
\end{array} \]
\end{enumerate}
\end{example}
\begin{example}
More specifically, consider the process $t$ modelling a single toss of a
fair coin. This can be modelled as a process with three states,
$C=\{S,H,T\}$: Start (S), Head tossed (H) and Tail tossed (T). From S we
move randomly to one of H and T and then stay there. The transition matrix
is given below. This is a discrete process, and we take all subsets to be
measurable.
\[t\mbox{ is given by}
\begin{array}{c|ccc}
& S & H & T \\
\hline
S & 0 & 0.5 & 0.5 \\
H & 0 & 1 & 0\\
T & 0 & 0 & 1
\end{array}
\]
Now consider the state space $C\times C$. We define two different process
structures on this. The first, $t^{*}$, is simply the product of the two
copies of $C$. The transition matrix for this is the tensor of the
transition matrix for $C$ with itself: the pairwise product of the
entries. This represents the process of two independent tosses of a coin.
\[t^{*}\mbox{ is given by }
\begin{array}{c|*{9}{c}}
& SS & HH & TT & HT & TH & SH & HS & ST & TS \\
\hline
SS & 0 & 0.25 & 0.25 & 0.25 & 0.25 & 0 & 0 & 0 & 0 \\
HH & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
TT & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
HT & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
\ldots\\
TS & 0 & 0 & 0.5 & 0 & 0.5 & 0 & 0 & 0 & 0
\end{array}
\]
The second, $t^{+}$ is identical except for the first row:
\[
t^{+}\mbox{ is given by }
\begin{array}{c|*{9}{c}}
& SS & HH & TT & HT & TH & SH & HS & ST & TS \\
\hline
SS & 0 & 0.5 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 \\
\ldots
\end{array}
\]
This is motivated by the process of two observers watching a single toss
of a coin.
The projections are homomorphisms for both these structures. For example,
the first projection is a homomorphism for $t^{+}$ because for each
$I$, $J$, $K$:
\[ t I \{K\} = \sum_L t^{+} IJ \{KL\} \]
\end{example}
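The homomorphism condition for the first projection can be checked by summing out the second component. A Python sketch of $t$ and the ``two observers'' structure $t^{+}$ (the encoding is ours):

```python
C = ["S", "H", "T"]
t = {"S": {"H": 0.5, "T": 0.5}, "H": {"H": 1.0}, "T": {"T": 1.0}}

def t_plus(s1, s2):
    """Two observers of a single toss: on the diagonal both observers see
    the same outcome; off the diagonal, behave like the product."""
    if s1 == s2:
        return {(x, x): p for x, p in t[s1].items()}
    return {(x, y): t[s1][x] * t[s2][y] for x in t[s1] for y in t[s2]}

def marginal1(dist):
    """Sum out the second component: the image under the first projection."""
    out = {}
    for (x, _), p in dist.items():
        out[x] = out.get(x, 0.0) + p
    return out

# t I {K} = sum_L t+ (I,J) {(K,L)} for all I, J, K
print(all(marginal1(t_plus(i, j)).get(k, 0.0) == t[i].get(k, 0.0)
          for i in C for j in C for k in C))   # True
```

The same check with \texttt{t\_plus} replaced by the independent product verifies that both structures make the projections homomorphisms.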
This means that in order to establish that a relation is a
$\Giry$-bisimulation, we have to define a coalgebra structure and prove
that the legs are homomorphisms, rather than simply validate some closure conditions.
Moreover, in contrast to the case for first-order theories, this
non-uniqueness of algebra structures implies that we cannot
always reduce spans of homomorphisms to relations.
\begin{example}
Consider the sum of the two algebra structures from example
\ref{example:algebra-not-unique} as an algebra $t^{*} + t^{+}$
on
$(C\times C)+(C\times C)$. This is a $\Giry$-bisimulation from $C$ to
itself in which the legs of the span are the co-diagonal, $\nabla$,
followed by the
projections. The co-diagonal maps $(C\times C)+(C\times C)$ to its
relational image, but is not an algebra homomorphism for any algebra
structure on
$C\times C$. If there were an algebra homomorphism, for an algebra
structure $\delta$, say, then we would have that both
$(t^{*}+t^{+})(\inl SS) (\nabla^{-1}\{HT\}) = t^{*} (SS) \{HT\}$
and
$(t^{*}+t^{+})(\inr SS) (\nabla^{-1}\{HT\}) = t^{+} (SS) \{HT\}$
would be equal to
$\delta (SS) \{HT\}$. But the first is $t^{*} (SS) \{HT\} = 0.25$,
and the second is $t^{+} (SS) \{HT\} = 0$.
\end{example}
We now show that, despite these issues, $\Giry$-bisimulations give rise to
logical relations.
\begin{theorem}\label{thm:pi bisimulation implies logical relation}
Suppose
\[
\begin{tikzcd}
S \ar[d,"F"'] & \ar[l,"l"'] P \ar[r,"r"] \ar[d,"H"] & T \ar[d,"G"] \\
\Giry S & \ar[l,"\Giry l"'] \Giry P \ar[r,"\Giry r"] & \Giry T
\end{tikzcd}
\]
is a $\Giry$-bisimulation between the continuous Markov processes
$F$ and $G$. Let $R \subseteq S \times T$ be the relation which is
the image of $\langle l,r\rangle : P\to S\times T$, {i.e.} $sRt$ iff
$\exists p. lp=s \wedge rp=t$.
Then $R$ is a logical relation between $F$ and $ G$.
\end{theorem}
\begin{proof}
Suppose $sRt$ and $UR_\Sigma V$ for $U\in\Sigma_S$ and $V\in\Sigma_T$.
We must show that $F(s)(U) = G(t)(V)$.
We begin by showing that $l^{-1}U = r^{-1}V$. Suppose $p\in P$, then
$(lp)R(rp)$, and hence $p\in l^{-1} U$ iff $lp\in U$ iff $rp\in V$ (since
$UR_\Sigma V$) iff $p\in r^{-1}V$.
Hence $l^{-1}U = r^{-1}V$, as required.
Now, since $sRt$, there is a $p$ such that $lp=s$ and $rp=t$. Then
\begin{align*}
F(s)(U)
&= H p (l^{-1} U) &\text{because $l$ is a $\Giry$-homomorphism}\\
&= H p (r^{-1} V) &\text{because $l^{-1}U = r^{-1}V$}\\
&= G(t)(V) &\text{because $r$ is a $\Giry$-homomorphism}
\end{align*}
as required.
\end{proof}
Establishing a converse is more problematic. There are a number of issues.
One is that $\Giry$-bisimulations of necessity work on spans not relations.
Another is that there might not
be much coherence between the relation $R$ and the $\sigma$-algebras
$\Sigma_S$ and $\Sigma_T$. And a third is the fact that in order to define
a $\Giry$-algebra structure $H$ on $R$, we have to define $H (s,t) W$,
where $W$ is an element of the $\sigma$-algebra generated by the sets
$R\cap (U\times V)$, where $U\in\Sigma_S$ and $V\in\Sigma_T$. It is not
clear that such an extension will always exist, and example
\ref{example:algebra-not-unique} shows that there is no canonical way to
construct it.
Nevertheless we can show that a logical relation gives rise to a
$\Giry$-bisimulation, unfortunately not on the original algebras, but on
others with the same state space but a cruder measure structure.
The following lemma is immediate.
\begin{lemma}
Suppose $F: (S,\Sigma_S) \to \Giry (S,\Sigma_S)$ is a continuous Markov
process. Suppose also that $\Sigma'$ is a sub-$\sigma$-algebra of $\Sigma_S$, then $F$ restricts to a continuous Markov process $F'$ on
$(S,\Sigma')$, and $1_S: (S,\Sigma_S)\to (S,\Sigma')$ is a homomorphism.
\end{lemma}
If $R$ is a logical relation between continuous Markov processes $F$ on
$S$ and $G$ on $T$, then $R$ only gives us information about the
measurable sets included in $R_\Sigma$. The following lemmas are immediate
from the definitions.
\begin{lemma}
If $R\subseteq S\times T$ is a relation between the state spaces of
two continuous Markov processes $F$ and $G$ and $\proj 1 \colon R \to S$, $\proj 2 \colon R \to T$ are the two projections, then the following are
equivalent for $U\subseteq S$ and $V\subseteq T$:
\begin{enumerate}
\item $U [R\to\{0,1\}] V$
\item $U$ is closed under $R\Op R$, and
$UR = V\cap \mathop{\mbox{cod}} R$
\item $\invproj 1 U = \invproj 2 V$.
\end{enumerate}
\end{lemma}
\begin{lemma} If $R\subseteq S\times T$ is a relation between the
state spaces of
two continuous Markov processes $F$ and $G$, then the sets linked by
$[R\to\{0,1\}]$ have the following closure properties:
\begin{enumerate}
\item If $U [R\to\{0,1\}] V$ then $\compl U \ [R\to\{0,1\}]\ \compl V$
\item If for all $\alpha\in A$, $U_\alpha [R\to\{0,1\}] V_\alpha$ then $\Union_{\alpha\in A} U_\alpha\ [R\to\{0,1\}]\ \Union_{\alpha\in A} V_\alpha$.
\end{enumerate}
\end{lemma}
\begin{corollary}
The {\em measurable} subsets linked by $[R\to\{0,1\}]$ have the same closure properties and hence the following are $\sigma$-algebras:
\begin{enumerate}
\item $\SRC S = \{ U\in \Sigma_S | \exists V\in\Sigma_T. UR_\Sigma V\}$
\item $\SRC T = \{ V\in \Sigma_T | \exists U\in\Sigma_S. UR_\Sigma V\}$
\item $\begin{array}[t]{cl}
\Sigma_R & = \{ W\subseteq R | \exists U\in\Sigma_S,
V\in\Sigma_T.\ UR_\Sigma V \wedge W=\invproj 1 U \}\\
&= \{ W\subseteq R | \exists U\in\Sigma_S,
V\in\Sigma_T.\ UR_\Sigma V \wedge W=\invproj 1 U = \invproj 2 V\}.
\end{array}$
\end{enumerate}
\end{corollary}
\begin{theorem}\label{thm:logical relation implies pi bisimulation}
Suppose $R\subseteq S\times T$ is a relation between the
state spaces of two continuous Markov processes $F$ and $G$. If $R$ is a
logical relation then there is a $\Giry$-bisimulation:
\[
\begin{tikzcd}
(S,\SRC S) \ar[d,"F"'] & \ar[l,"\proj 1 {}"'] (R, \Sigma_R) \ar[r,"\proj 2 {}"] \ar[d,"H"] & (T, \SRC T) \ar[d,"G"] \\
\Giry (S, \SRC S) & \ar[l,"\Giry {\proj 1 {}}"'] \Giry (R, \Sigma_R) \ar[r,"\Giry {\proj 2 {}}"] & \Giry (T, \SRC T)
\end{tikzcd}
\]
\end{theorem}
\begin{proof}
Suppose $(s,t)\in R$ and $W\in\Sigma_R$, then we need to define
$H (s,t) W$. Suppose $U\in\SRC S$, $V\in\SRC T$, such that
$W=\invproj 1 U = \invproj 2 V$ and $UR_\Sigma V$. Then, since $R$ is a
logical relation, $F(s)(U) = G(t)(V)$.
We claim that this is independent
of the choice of $U$ and $V$.
Suppose $U'\in\SRC S$, $V'\in\SRC T$, such that
$W=\invproj 1 U' = \invproj 2 V'$ and $U'R_\Sigma V'$.
Then $\invproj 1 U' = \invproj 2 V = W$, and hence $U'R_\Sigma V$, so
$F(s)(U')=G(t)(V)=F(s)(U)$.
We now define $H(s,t)(W) = F(s)(U)$.
We need to show that this is a $\Giry$-algebra structure.
First, we show that $H(s,t)$ is a sub-probability measure. We use a slightly non-standard characterisation of measures:
\begin{enumerate}
\item Since $\emptyset\in\SRC S$, $H(s,t)\emptyset=F(s)\emptyset = 0$.
\item For $W$, $W'$ in $\Sigma_R$, let $U$ and $U'$ be in $\SRC S$
such that
$\invproj 1 U = W$ and $\invproj 1 U' = W'$. Then, since $F(s)$ is a measure:
$F(s)(U) + F(s)(U') = F(s)(U\cup U') + F(s)(U\cap U')$. Now, since
$\invproj 1 {}$ preserves unions and intersections,
$H(s,t)(W) + H(s,t)(W') = H(s,t)(W\cup W') + H(s,t)(W\cap W')$.
\item If $W_i$ is an increasing chain of elements of $\Sigma_R$, then
let $U_i$ be elements of $\SRC S$ such that $\invproj 1 (U_i) = W_i$;
replacing each $U_i$ by $\bigcup_{j\leq i}U_j$ if necessary, we may
assume the $U_i$ form an increasing chain. Then $H(s,t)(\Union W_i) = F(s)(\Union U_i) = \lim F(s)(U_i) = \lim H(s,t)(W_i)$.
\end{enumerate}
To complete the proof it suffices to show that for each $W\in\Sigma_R$,
$H(-)(W)$ is a measurable function. Choose $U\in\SRC S$ and $V\in\SRC T$
such that $UR_\Sigma V$ and $W=\invproj 1 U = \invproj 2 V$. Now, given
$q\in [0,1]$, let $U_q = \{s\in S \mid F(s)(U)\leq q\}$ and
$V_q = \{t\in T \mid G(t)(V)\leq q\}$. Now suppose $sRt$, then, since $R$ is
a logical relation, $F(s)(U)=G(t)(V)$, hence $s\in U_q$ iff $t\in V_q$.
Therefore $U_q R_\Sigma V_q$. Moreover, $H(s,t)(W)=F(s)(U)=G(t)(V)$, and
hence $H(s,t)(W)\leq q$ iff $s\in U_q$ iff $t\in V_q$. It follows that
$\{(s,t) | H(s,t)(W)\leq q \}\in\Sigma_R$, and hence that $H(-)(W)$ is
measurable as required.
\end{proof}
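For finite state spaces, the construction of $H$ in this proof can be checked mechanically. The following Python sketch is purely illustrative: the state set, the relation, the kernel `F` (a coin flip with absorbing outcomes), and the hand-listed `R_sig` pairs are assumptions in the spirit of the coin example later in the paper, not data taken from the formal development.

```python
def pi1_inv(U, R):
    # The preimage of U under the first projection, as a subset of R.
    return frozenset((s, t) for (s, t) in R if s in U)

C = {'S', 'H', 'T'}
R = {('S', 'S'), ('H', 'H'), ('T', 'T'), ('H', 'T'), ('T', 'H')}
# Illustrative kernel: from S flip to H or T; H and T are absorbing.
F = {'S': {'H': 0.5, 'T': 0.5}, 'H': {'H': 1.0}, 'T': {'T': 1.0}}
# Hand-computed R_Sigma-related pairs for this relation.
R_sig = [(frozenset(), frozenset()),
         (frozenset({'S'}), frozenset({'S'})),
         (frozenset({'H', 'T'}), frozenset({'H', 'T'})),
         (frozenset(C), frozenset(C))]

def H(s, t, W):
    # H(s,t)(W) = F(s)(U) for any U with pi1^{-1}(U) = W; the theorem
    # shows the value does not depend on the choice of U.
    for (U, V) in R_sig:
        if pi1_inv(U, R) == W:
            return sum(p for x, p in F[s].items() if x in U)
    raise ValueError("W is not in Sigma_R")
```

For instance, on $W=\invproj 1(\{H,T\})$ both $H(H,T)(W)$ and $H(S,S)(W)$ evaluate to $F(\cdot)(\{H,T\})=1$, reflecting the well-definedness argument above.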
Putting this together we see that if we have a logical relation between
$F$ and $G$, then we get the following diagram, in which the non-horizontal maps in the top section are identities on state spaces:
\[
\begin{tikzcd}
(S,\Sigma_S) \ar[d,"F"] \ar[dr] & &
\ar[ll,"\proj 1 {}"] (R,\Sigma_S\times\Sigma_T\upharpoonright R)
\ar [d] \ar[rr,"\proj 2 {}"]
& & \ar[dl] (T,\Sigma_T) \ar[d,"G"] \\
\Giry (S,\Sigma_S) \ar[dr] & (S,\SRC S) \ar[d,"F"'] & \ar[l,"\proj 1 {}"'] (R, \Sigma_R) \ar[r,"\proj 2 {}"] \ar[d,"H"] & (T, \SRC T) \ar[d,"G"]
& \ar[dl] \Giry{(T,\Sigma_T)}\\
& \Giry (S, \SRC S) & \ar[l,"\Giry {\proj 1 {}}"'] \Giry (R, \Sigma_R) \ar[r,"\Giry {\proj 2 {}}"] & \Giry (T, \SRC T)
\end{tikzcd}
\]
We can view Theorem \ref{thm:logical relation implies pi bisimulation}
as saying that we may be given too fine a measure structure on $S$ and $T$
for a logical relation to generate a $\Giry$-bisimulation, but we can
always get a $\Giry$-bisimulation with a coarser structure. Just how
coarse and how useful this structure might be depends on the logical
relation and its relationship with the original $\sigma$-algebras on
the state spaces.
\begin{example}
\begin{itemize}
\item In the contrived examples of \ref{example:algebra-not-unique}, we have taken the relation $R$ to be the whole of $C\times C$ and in effect used the algebra structure to restrict the effect of this. However, since $R=C\times C$, $\SRC C$ contains only the empty set and the whole of $C$. As a result, the continuous Markov process we get is not useful: the probability of evolving into the empty set is always 0, and the probability of evolving into something is always 1.
\item In the same examples we can restrict the state spaces for
$t^{*}$ and $t^{+}$. For $t^{*}$ we take
$R^{*} = \{SS,HH,TT,HT,TH\}$, reflecting the states accessible
from $SS$. In this case
$\SRC C = \{ \emptyset, \{S\}, \{H,T\}, \{S,H,T\}\}$.
For $t^{+}$ we take
$R^{+} = \{SS,HH,TT\}$, and $\SRC C$ contains all the subsets of $C$.
\end{itemize}
\end{example}
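For finite state spaces, $R$-closedness can be checked by brute force. The following Python sketch is illustrative only; it assumes the coin example has state set $\{S,H,T\}$ and encodes $R^{*}$ and $R^{+}$ as above.

```python
from itertools import combinations

def subsets(xs):
    # All subsets of a finite set, as frozensets.
    xs = sorted(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def is_r_closed(A, R):
    # A is R-closed iff {x | exists a in A and y with aRy and xRy} <= A.
    return all(x in A
               for (x, y) in R
               for (a, b) in R
               if a in A and b == y)

C = {'S', 'H', 'T'}
R_star = {('S', 'S'), ('H', 'H'), ('T', 'T'), ('H', 'T'), ('T', 'H')}
R_plus = {('S', 'S'), ('H', 'H'), ('T', 'T')}

closed_star = {A for A in subsets(C) if is_r_closed(A, R_star)}
closed_plus = {A for A in subsets(C) if is_r_closed(A, R_plus)}
```

For $R^{*}$ this recovers exactly $\{\emptyset,\{S\},\{H,T\},\{S,H,T\}\}$, and for $R^{+}$ all eight subsets, matching the example.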
\refs
\bibitem[de~Vink and Rutten, 1999]{de_vink_bisimulation_1999}
de~Vink, E.~P. and Rutten, J. J. M.~M. (1999).
\newblock Bisimulation for probabilistic transition systems: a coalgebraic
approach.
\newblock {\em Theoretical Computer Science}, 221(1):271--293.
\bibitem[Desharnais et~al., 2002]{desharnais_bisimulation_2002}
Desharnais, J., Edalat, A., and Panangaden, P. (2002).
\newblock Bisimulation for {Labelled} {Markov} {Processes}.
\newblock {\em Information and Computation}, 179(2):163--193.
\bibitem[Giry, 1982]{giry_categorical_1982}
Giry, M. (1982).
\newblock A categorical approach to probability theory.
\newblock In Banaschewski, B., editor, {\em Categorical {Aspects} of {Topology}
and {Analysis}}, Lecture {Notes} in {Mathematics}, pages 68--85, Berlin,
Heidelberg. Springer.
\bibitem[Goubault-Larrecq et~al., 2002]{goubault2008logical}
Goubault-Larrecq, J., Lasota, S., and Nowak, D. (2002).
\newblock Logical {Relations} for {Monadic} {Types}.
\newblock In Bradfield, J., editor, {\em Computer {Science} {Logic}}, Lecture
{Notes} in {Computer} {Science}, pages 553--568, Berlin, Heidelberg.
Springer.
\bibitem[Hermida, 1999]{hermida1999some}
Hermida, C. (1999).
\newblock Some properties of {Fib} as a fibred 2-category.
\newblock {\em Journal of Pure and Applied Algebra}, 134(1):83--109.
\bibitem[Hermida et~al., 2014]{DBLP:journals/entcs/HermidaRR14}
Hermida, C., Reddy, U.~S., and Robinson, E.~P. (2014).
\newblock Logical {Relations} and {Parametricity} -- {A} {Reynolds}
{Programme} for {Category} {Theory} and {Programming} {Languages}.
\newblock {\em Electronic Notes in Theoretical Computer Science}, 303:149--180.
\bibitem[Hermida, 1993]{hermida1993fibrations}
Hermida, C.~A. (1993).
\newblock Fibrations, {Logical} {Predicates} and {Indeterminates}.
\newblock {\em DAIMI Report Series}, (462).
\bibitem[Larsen and Skou, 1991]{larsen_bisimulation_1991}
Larsen, K.~G. and Skou, A. (1991).
\newblock Bisimulation through probabilistic testing.
\newblock {\em Information and Computation}, 94(1):1--28.
\bibitem[Milner, 1989]{milner1989communication}
Milner, R. (1989).
\newblock {\em Communication and concurrency}.
\newblock Prentice-Hall, Inc., USA.
\bibitem[Park, 1981]{park1981concurrency}
Park, D. (1981).
\newblock Concurrency and automata on infinite sequences.
\newblock In Deussen, P., editor, {\em Theoretical {Computer} {Science}},
Lecture {Notes} in {Computer} {Science}, pages 167--183, Berlin, Heidelberg.
Springer.
\bibitem[Panangaden, 2009]{panangaden2009labelled}
Panangaden, P. (2009).
\newblock {\em Labelled Markov Processes}.
\newblock Imperial College Press, GBR.
\bibitem[Plotkin, 1976]{plotkin1976powerdomain}
Plotkin, G.~D. (1976).
\newblock A powerdomain construction.
\newblock {\em SIAM Journal on Computing}, 5(3):452--487.
\bibitem[Rutten, 2000]{rutten_universal_2000}
Rutten, J. J. M.~M. (2000).
\newblock Universal coalgebra: a theory of systems.
\newblock {\em Theoretical Computer Science}, 249(1):3--80.
\bibitem[Van~Glabbeek et~al., 1995]{van_glabbeek_reactive_1995}
Van~Glabbeek, R.~J., Smolka, S.~A., and Steffen, B. (1995).
\newblock Reactive, {Generative}, and {Stratified} {Models} of {Probabilistic}
{Processes}.
\newblock {\em Information and Computation}, 121(1):59--80.
\bibitem[Van Glabbeek and Weijland, 1996]{van_glabbeek_branching_1996}
Van Glabbeek, R.~J. and Weijland, W.~P. (1996).
\newblock Branching {Time} and {Abstraction} in {Bisimulation} {Semantics}.
\newblock {\em J. ACM}, 43(3):555--600.
\endrefs
\end{document}
The coalgebra structure for a Pi-bisimulation is not necessarily unique. This is in contrast to what happens in the first-order case. It also means that to prove that a relation is a Pi-bisimulation, we need to first find a candidate structure and then test whether it works; if the structure were uniquely determined by the relation, the task would be easier. It is unlikely that one can have a general recipe for deciding whether a given relation is a Pi-bisimulation.
Because there is more than one structure, one can construct counterexamples out of spans. Suppose you have a bisimulation defined via a span. Is there a bisimulation structure on the relation that the span generates? Not necessarily: there might be a coalgebra structure on the relation, but the projection need not be a coalgebra homomorphism from the span to it.
That is, we can take a span that has a coalgebra structure, which projects onto a relational span that also carries a coalgebra structure, but such that the projection is not a coalgebra homomorphism.
The definition we gave of Pi-bisimulation is ``a span in the category of coalgebras'', because there is a general definition of H-bisimulation for any endofunctor H on any category C, with C not necessarily a concrete category. But to prove our theorems about the connection with logical relations, we have only referred to Pi-bisimulations whose set of points is a relation between the sets of points of the transition systems.
===================================================================
The category $\sf Meas$ has binary products: the measurable space
$\mathcal S \times \mathcal T$ consists of the cartesian product of the
two underlying sets $S \times T$ together with the smallest
$\sigma$-algebra $\Sigma_{S \times T}$ which makes the projections
measurable; equivalently, $\Sigma_{S \times T}$ is the $\sigma$-algebra
generated by the sets $U \times V$, with
$U \in \Sigma_S$ and $V \in \Sigma_T$. If $R \subseteq S \times T$ is a
relation (that is, an ordinary subset of the cartesian product), $R$ can
be endowed with a natural $\sigma$-algebra that makes it a sub-measurable
space of $S \times T$:
\[
\Sigma_{R \subseteq S \times T} = \{R \cap A \mid A \in \Sigma_{S \times T}\}.
\]
We shall define $R$ to be a logical relation between $F$ and $G$ if
$(F,G) \in [R \to \Giry R]$, for an appropriate relation $\Giry R$. In
order to do so, we consider a sub-class of measurable sets of $S$ and $T$,
those that are $R$-\emph{closed}.
\begin{definition}
Let $R \subseteq X \times Y$ be a relation of sets. A subset
$A \subseteq X$ is said to be $R$-\emph{closed} if and only if
\[
\{ x \in X \mid \exists a \in A \ldotp \exists y \in Y \ldotp aRy \land xRy \} \subseteq A
\]
and similarly $B \subseteq Y$ is $R$-\emph{closed} if and only if
\[
\{ y \in Y \mid \exists b \in B \ldotp \exists x \in X \ldotp xRb \land xRy \} \subseteq B.
\]
If $R \subseteq S \times T$, with $(S,\Sigma_S)$ and $(T,\Sigma_T)$ the measurable spaces fixed above, we denote the sets of $R$-closed measurable sets of $S$ and $T$ by $\SRC S$ and $\SRC T$, respectively.
\end{definition}
\begin{remark}
Panangaden's definition of $R$-closed set~\cite{panangaden2009labelled} coincides with ours in the special case where $X=Y$ and $R$ is an equivalence relation.
\end{remark}
\begin{remark}
Let $A \subseteq S$. Then $A$ is $R$-closed if and only if $R \Op R (A) \subseteq A$. If $R$ is total, then $A$ is $R$-closed if and only if $A = R\Op R (A)$. Similarly if $R$ is surjective, then $B \subseteq T$ is $R$-closed if and only if $B = \Op R R(B)$.
\end{remark}
We are now ready to define logical relations between probabilistic transition systems.
\begin{definition}
Let $R \subseteq S \times T$ for $\mathcal S$ and $\mathcal T$ fixed as above and let $\proj 1 \colon R \to S$, $\proj 2 \colon R \to T$ be the two projections. Identify the measurable subsets $U$ of $S$ and $V$ of $T$ with their characteristic function $\chi_U$ and $\chi_V$ and define
\[
R_\Sigma = \{ (U,V) \in \Sigma_S \times \Sigma_T \mid \chi_U [R \to \{0,1\}] \chi_V \}.
\]
For $F \colon \mathcal S \to \Giry \mathcal S$ and $ G \colon \mathcal T \to \Giry \mathcal T$, we say that $R$ is a \emph{logical relation} between $F$ and $ G$ if and only if $(F, G) \in [R \to \Giry R]$, where $\Giry R = [R_\Sigma \to [0,1]]$. Equivalently, $R$ is a logical relation between $F$ and $ G$ if and only if
\[
\forall s \in S \ldotp \forall t \in T \ldotp \forall U \in \Sigma_S \ldotp \forall V \in \Sigma_T \ldotp {sRt} \land {U R_\Sigma V} \implies F(s)(U) = G(t)(V).
\]
\end{definition}
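On finite state spaces, where every subset is measurable and a sub-probability measure is determined by a mass function, this definition can be tested exhaustively. The following is a minimal Python sketch; the kernel `F` (a coin flip with absorbing outcomes) is an illustrative assumption, not data taken from the text.

```python
from itertools import combinations

def subsets(xs):
    xs = sorted(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def r_sigma(R, S, T):
    # (U, V) is in R_Sigma iff chi_U [R -> {0,1}] chi_V, i.e.
    # s in U iff t in V for every (s, t) in R.
    return [(U, V) for U in subsets(S) for V in subsets(T)
            if all((s in U) == (t in V) for (s, t) in R)]

def measure(m, U):
    # The measure of U under a mass function m on a finite space.
    return sum(p for x, p in m.items() if x in U)

def is_logical_relation(R, F, G, R_sig):
    # R is a logical relation between F and G iff
    # sRt and U R_Sigma V imply F(s)(U) = G(t)(V).
    return all(abs(measure(F[s], U) - measure(G[t], V)) < 1e-9
               for (s, t) in R for (U, V) in R_sig)

C = {'S', 'H', 'T'}
R = {('S', 'S'), ('H', 'H'), ('T', 'T'), ('H', 'T'), ('T', 'H')}
# Illustrative kernel: from S flip to H or T; H and T are absorbing.
F = {'S': {'H': 0.5, 'T': 0.5}, 'H': {'H': 1.0}, 'T': {'T': 1.0}}
R_sig = r_sigma(R, C, C)
```

Here `R` relates heads and tails in both directions, so $R_\Sigma$ contains only four pairs of sets, and `F` is a logical relation with itself; redirecting one absorbing state to `S` breaks the condition.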
\begin{remark}\label{rem:pi^-1(u)=pi^-1(V) iff (s in U iff t in V)}
If $R \subseteq S \times T$, $U \subseteq S$ and $V \subseteq T$, it is easy to see that $U R_\Sigma V$ implies $U$ and $V$ are $R$-closed and, moreover, that $U R_\Sigma V$ iff $\invproj1 (U) = \invproj2(V)$, as
\[
{\invproj1 (U) = \invproj2(V)} \iff \bigl( \forall (s,t)\in R \ldotp (s \in U \iff t \in V) \bigr).
\]
\end{remark}
Notice that if $\mathcal S = \mathcal T$ and $R$ is an equivalence relation, then $\invproj1 (U) = \invproj2(V)$ implies $U=V$ for any $U,V \in \Sigma_S$: the definition of logical relation above and that of probabilistic bisimulation in~\cite[Definition 7.2]{panangaden2009labelled} coincide.
\begin{theorem}\label{thm:pi bisimulation implies logical relation}
Let $R \subseteq S \times T$ be a relation. If $(R,\Sigma_{R \subseteq S \times T})$ is a $\Giry$-bisimulation between $F$ and $ G$, then $R$ is a logical relation between $F$ and $ G$.
\end{theorem}
\begin{proof}
To save space, let us write within this proof $\Sigma$ instead of $\Sigma_{R \subseteq S \times T}$. By assumption, there is a measurable function $\gamma \colon (R,\Sigma) \to \bigl(\Giry (R,\Sigma), \Sigma_{\Giry(R,\Sigma)}\bigr)$ making the following diagram commute:
\begin{equation}\label{eqn:R pi-bisimulation between phi and psi}
\begin{tikzcd}
(S,\Sigma_S) \ar[d,"F"'] & \ar[l,"\proj 1"'] (R,\Sigma) \ar[r,"\proj 2"] \ar[d,"\gamma"] & (T,\Sigma_T) \ar[d," G"] \\
\bigl( \Giry(S,\Sigma_S), \Sigma_{\Giry(S,\Sigma_S)} \bigr) & \ar[l,"\Giry\proj 1"'] \bigl( \Giry(R,\Sigma), \Sigma_{\Giry(R,\Sigma)} \bigr) \ar[r,"\Giry \proj 2"] & \bigl( \Giry(T,\Sigma_T), \Sigma_{\Giry(T,\Sigma_T)} \bigr)
\end{tikzcd}
\end{equation}
We prove that $R$ is a logical relation between $F$ and $ G$. Let $s\in S$, $t \in T$, $U \in \Sigma_S$, $V \in \Sigma_T$ such that $sRt$ and $U R_\Sigma V$. Then
\begin{align*}
F(s)(U)
&= \Giry\proj 1 \bigl(\gamma(s,t)\bigr)(U) &\text{because (\ref{eqn:R pi-bisimulation between phi and psi}) commutes}\\
&= \gamma(s,t)\bigl( \invproj 1 (U) \bigr) &\text{by definition of $\Giry\proj1$}\\
&= \gamma(s,t)\bigl( \invproj 2 (V) \bigr) &\text{because $U R_\Sigma V$}\\
&= \Giry\proj 2 \bigl(\gamma(s,t)\bigr)(V) &\text{by definition of $\Giry\proj2$}\\
&= G (t) (V) &\text{because (\ref{eqn:R pi-bisimulation between phi and psi}) commutes.} \tag*{\endproofbox}
\end{align*}
\renewcommand{\endproof}{}
\end{proof}
The opposite direction of the implication of Theorem~\ref{thm:pi bisimulation implies logical relation} requires more work, and will use a different $\sigma$-algebra for $R$ than that of sub-measurable space: we will introduce it in Proposition~\ref{prop:R_Sigma is a sigma algebra on R}. First, we note a crucial property of $R$-closed subsets of $S$ and $T$, and then we prove that the collection of $R$-closed measurable sets of $S$ and $T$, $\SRC S$ and $\SRC T$, are in fact $\sigma$-algebras on $S$ and $T$.
\begin{remark}\label{rem:U R-closed iff pi^-1(u)=pi^-1(UR)}
Let $R \subseteq S \times T$ and $U \subseteq S$. If $U$ is $R$-closed, then so is
\begin{equation}\label{eqn:UR}
UR = \{t \in T \mid \exists u \in U \ldotp u R t\}.
\end{equation}
Moreover, $U$ is $R$-closed if and only if
\[
\invproj 1 (U) = \invproj 2 (UR),
\]
because $\invproj 1 (U) = \{ (s,t)\in R \mid s \in U \} = R \cap (U \times UR)$, and saying that $U$ is $R$-closed is equivalent to saying that $R \cap (U \times UR) = \{ (s,t) \in R \mid \exists u \in U \ldotp u R t \} (= \invproj 2 (UR))$, the non-trivial inclusion being $\subseteq$. The same can be said about $V \subseteq T$: if $V$ is $R$-closed, then $RV=\{ s \in S \mid \exists v \in V \ldotp sRv \}$ is $R$-closed, and $V$ is $R$-closed if and only if $\invproj1(RV)=\invproj2(V)$.
\end{remark}
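A quick finite sanity check of this remark; the Python below is illustrative, using the relation of the coin example (where $\{H,T\}$ is $R$-closed but $\{H\}$ is not).

```python
def pi1_inv(U, R):
    # Preimage of U under the first projection, as a subset of R.
    return {(s, t) for (s, t) in R if s in U}

def pi2_inv(V, R):
    # Preimage of V under the second projection, as a subset of R.
    return {(s, t) for (s, t) in R if t in V}

def UR(U, R):
    # UR = {t | exists u in U with u R t}, as in the displayed equation.
    return {t for (u, t) in R if u in U}

R = {('S', 'S'), ('H', 'H'), ('T', 'T'), ('H', 'T'), ('T', 'H')}
```

The preimage criterion holds for the $R$-closed set $\{H,T\}$ and fails for $\{H\}$, as the remark predicts.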
\begin{proposition}\label{prop: SRC S is a sigma-algebra}
Let $R \subseteq S \times T$ and $(X,\Sigma_X)$ be either $(S,\Sigma_S)$ or $(T,\Sigma_T)$. Then
\[
\SRC X = \{ E \in \Sigma_X \mid \text{$E$ is $R$-closed} \} \subseteq \Sigma_X
\]
is a $\sigma$-algebra on $X$.
\end{proposition}
\begin{proof}
First of all, we have that
\[
\{ s \in S \mid \exists u \in \emptyset, t \in T \ldotp sRt \land uRt\}=\emptyset
\]
hence $\emptyset$ is $R$-closed.
Next, let $U \in \SRC X$. We show that $\compl U$ is $R$-closed, that is,
\[
\{ s \in S \mid \exists u \in \compl U, t \in T \ldotp sRt \land uRt\} \subseteq \compl U.
\]
Consider then $s \in S$, $u \in \compl U$, $t \in T$ such that $sRt$ and $uRt$. Since $U$ is $R$-closed, we have by Remark~\ref{rem:U R-closed iff pi^-1(u)=pi^-1(UR)} that $\invproj1 (U) = \invproj2 (UR)$, hence $t \notin UR$ (because of Remark~\ref{rem:pi^-1(u)=pi^-1(V) iff (s in U iff t in V)} and the fact that $u R t$). Since $s R t$ and $t \notin UR$, we have $s \notin U$, thus $\compl U$ is $R$-closed as well.
It is straightforward to see that countable unions of measurable, $R$-closed sets are again $R$-closed. The case $X=T$ is symmetric.
\end{proof}
\begin{proposition}\label{prop:R_Sigma is a sigma algebra on R}
Let $R \subseteq S \times T$. Then
\[
\Sigma_R = \{ R \cap (U \times V) \mid U \in \SRC S, V \in \SRC T, \invproj 1 (U) = \invproj 2 (V) \} \subseteq \pow{(R)}
\]
is a $\sigma$-algebra on $R$.
\end{proposition}
\begin{proof}
We have that $\emptyset = R \cap (\emptyset \times \emptyset)$, where $\emptyset$ is measurable and $R$-closed and, of course, $\invproj1(\emptyset)=\emptyset=\invproj2(\emptyset)$, hence $\emptyset \in \Sigma_R$.
Next, let $E\in\Sigma_R$: then $E=R \cap (U \times V)$ for $U$, $V$ measurable, $R$-closed and such that $\invproj1(U)=\invproj2(V)$. We have
\begin{align*}
\compl E &= R \setminus E \\
&= R \setminus (U \times V) \\
&= \{(s,t)\in R \mid s \notin U \lor t \notin V \} \\
&= \{(s,t)\in R \mid s \notin U \land t \notin V \} \\
&= R \cap (\compl U \times \compl V).
\end{align*}
Notice that here $\compl U = S \setminus U$ and $\compl V = T \setminus V$. The only non-trivial equality in the above chain is the fourth, which holds because of Remark~\ref{rem:pi^-1(u)=pi^-1(V) iff (s in U iff t in V)}. Of course $\compl U$ and $\compl V$ are measurable and by Proposition~\ref{prop: SRC S is a sigma-algebra} they are also $R$-closed; moreover, again because of Remark~\ref{rem:pi^-1(u)=pi^-1(V) iff (s in U iff t in V)}, we have:
\[
\invproj1(S \setminus U) = \{(s,t)\in R \mid s \notin U\} = \{(s,t)\in R \mid t \notin V\} = \invproj2(T \setminus V).
\]
Finally, let $E_i = R \cap (U_i \times V_i) \in \Sigma_R$, with $\invproj1(U_i)=\invproj2(V_i)$ for all $i\in\mathbb N$. We have that $\bigcup_{i \in \mathbb N} U_i$ and $\bigcup_{i \in \mathbb N} V_i$ are measurable and $R$-closed (Proposition~\ref{prop: SRC S is a sigma-algebra}); also:
\[
\bigcup_{i\in\mathbb N}E_i=\bigcup_{i\in\mathbb N} R \cap (U_i \times V_i) = R \cap \Bigl(\bigcup_{i\in\mathbb N}U_i \times \bigcup_{i\in\mathbb N}V_i\Bigr).
\]
We now prove the last equality. The ``$\subseteq$'' inclusion is trivial. Suppose $(s,t) \in R \cap \Bigl(\bigcup_{i\in\mathbb N}U_i \times \bigcup_{i\in\mathbb N}V_i\Bigr)$: then there are $i,j\in\mathbb N$ such that $s \in U_i$ and $t \in V_j$. But $s \in U_i$ if and only if $t \in V_i$ (Remark~\ref{rem:pi^-1(u)=pi^-1(V) iff (s in U iff t in V)}), so in fact $t \in V_i$. Hence $(s,t) \in \bigcup_{i\in\mathbb N} R \cap (U_i \times V_i)$. It is straightforward to see that $\invproj1 \bigl(\bigcup_{i\in\mathbb N}U_i\bigr) = \invproj2 \bigl( \bigcup_{i \in \mathbb N} V_i \bigr)$, which concludes the proof.
\end{proof}
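For a finite example one can enumerate $\Sigma_R$ and confirm the closure properties directly. In the Python sketch below the relation and the $R_\Sigma$-related pairs are those of the coin example, listed by hand; this is an illustration, not part of the proof.

```python
def sigma_R(R, R_sig):
    # Sigma_R = { R ∩ (U x V) : (U, V) an R_Sigma-related pair }.
    return {frozenset((s, t) for (s, t) in R if s in U and t in V)
            for (U, V) in R_sig}

R = frozenset({('S', 'S'), ('H', 'H'), ('T', 'T'), ('H', 'T'), ('T', 'H')})
# Hand-computed R_Sigma-related pairs for this relation.
R_sig = [(frozenset(), frozenset()),
         (frozenset({'S'}), frozenset({'S'})),
         (frozenset({'H', 'T'}), frozenset({'H', 'T'})),
         (frozenset({'S', 'H', 'T'}), frozenset({'S', 'H', 'T'}))]

SR = sigma_R(R, R_sig)
```

On this example $\Sigma_R$ has four elements and is closed under complement (within $R$) and union, as the proposition asserts.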
Now, if $U$ is any measurable subset of $S$, then $
\invproj 1 (U) = R \cap (U \times UR)
$
in general does not belong to $\Sigma_R$ because $UR$
is not necessarily measurable or even $R$-closed. This means that the projections $\proj1 \colon (R,\Sigma_R) \to (S,\Sigma_S)$ and $\proj2 \colon (R,\Sigma_R) \to (T,\Sigma_T)$ need not be measurable. However, thanks to Remark~\ref{rem:U R-closed iff pi^-1(u)=pi^-1(UR)}, by endowing $S$ with the $\sigma$-algebra $\SRC S$ and $T$ with $\SRC T$, and assuming that $R$ is suitably compatible with $\Sigma_S$ and $\Sigma_T$, we do obtain measurability of the projections, as the following lemma states.
\begin{lemma}
Let $R \subseteq S \times T$ be a relation such that $UR \in \Sigma_T$ for all $U \in \SRC S$ and $RV \in \Sigma_S$ for all $V \in \SRC T$. Then $\proj 1 \colon (R,\Sigma_R) \to (S,\SRC S)$ and $\proj 2 \colon (R,\Sigma_R) \to (T,\SRC T)$ are measurable.
\end{lemma}
\begin{proof}
Let $U \in \SRC S$. We have that $\invproj 1 (U) = R \cap (U \times UR)$, where $U$ and $UR$ are measurable, $R$-closed and such that $\invproj1 (U) = \invproj2 (UR)$ (see Remark~\ref{rem:U R-closed iff pi^-1(u)=pi^-1(UR)}). Hence $\invproj1 (U) \in \Sigma_R$. Similarly one proves that $\proj2$ is measurable.
\end{proof}
However, now we have to restrict our $F$ and $ G$ to $R$-closed measurable subsets only.
\begin{proposition}\label{prop:barphi is measurable}
Let $R \subseteq S \times T$ be a logical relation between $F$ and $ G$ such that $UR \in \Sigma_T$ for all $U \in \SRC S$ and $RV \in \Sigma_S$ for all $V \in \SRC T$. Then $F$ and $ G$ induce functions
\[
\begin{tikzcd}[row sep=0em]
(S,\SRC S) \ar[r,"\bar F"] & \bigl(\Giry(S,\SRC S),\Sigma_{\Giry(S,\SRC S)}\bigr) \\
s \ar[r,|->] & \restr{F(s)}{\SRC S}
\end{tikzcd}
\]
and
\[
\begin{tikzcd}[row sep=0em]
(T,\SRC T) \ar[r,"\bar G"] & \bigl(\Giry(T,\SRC T),\Sigma_{\Giry(T,\SRC T)}\bigr) \\
t \ar[r,|->] & \restr{ G(t)}{\SRC T}
\end{tikzcd}
\]
which are also measurable, where $\restr{F(s)}{\SRC S}$ and $\restr{ G(t)}{\SRC T}$ denote the restriction of $F(s) \colon \Sigma_S \to [0,1]$ and $ G(t) \colon \Sigma_T \to [0,1]$ to the sub-$\sigma$-algebra $\SRC S$ and $\SRC T$, respectively.
\end{proposition}
\begin{proof}
$\Sigma_{\Giry(S,\SRC S)}$ is the $\sigma$-algebra generated by the sets
\[
E_{A,r} = \{ \nu \colon \SRC S \to [0,1] \text{ sub-probability measure} \mid \nu(A)>r \}
\]
for all $A \in \SRC S$, $r \in [0,1]$. Let $A \in \SRC S$ and $r \in [0,1]$: we check that $\bar F^{-1}(E_{A,r}) \in \SRC S$.
\begin{align*}
\bar F^{-1}(E_{A,r}) &= \{s \in S \mid \restr{F(s)}{\SRC S}(A)>r \} \\
&= \{s \in S \mid F(s)(A)>r \} \\
&= F^{-1}\bigl( \{\mu \colon \Sigma_S \to [0,1] \text{ sub-probability measure} \mid \mu(A)>r\} \bigr) \in \Sigma_S.
\end{align*}
Similarly one proves that $\bar G^{-1}(E_{A,r}) \in \Sigma_T$. We show that $\bar F^{-1}(E_{A,r})$ is $R$-closed: we want to prove that
\[
\{s \in S \mid \exists s' \in S, t \in T \ldotp F(s')(A)>r \land sRt \land s'Rt\} \subseteq \bar F^{-1}(E_{A,r}).
\]
Let then $s \in S$, $s' \in S$, $t \in T$ be such that $sRt$, $s' R t$, $F(s')(A)>r$. We aim to prove that $F(s)(A)>r$ as well. Since $A$ is $R$-closed, the set $AR = \{t \in T \mid \exists a \in A \ldotp a R t\}$ is $R$-closed by Remark~\ref{rem:U R-closed iff pi^-1(u)=pi^-1(UR)} and measurable by the assumption on $R$, so $AR\in\SRC T$; since $R$ is a logical relation between $F$ and $ G$, we have that
\[
F(s)(A)= G(t)(AR)=F(s')(A) >r\tag*{\endproofbox}
\]
\renewcommand{\endproof}{}
\end{proof}
\begin{theorem}
Let $R \subseteq S \times T$ be a total relation such that for all $U \in \SRC S$ and for all $V \in \SRC T$ we have
\begin{align*}
UR = \{t \in T \mid \exists u \in U \ldotp u R t\} \in \Sigma_T, \\
RV = \{s \in S \mid \exists v \in V \ldotp s R v\} \in \Sigma_S.
\end{align*}
If $R$ is a logical relation between $F \colon \mathcal S \to \Giry{\mathcal S}$ and $ G \colon \mathcal T \to \Giry{\mathcal T}$, then $(R,\Sigma_R)$ is a $\Giry$-bisimulation between $\bar F$ and $\bar G$ as defined in Proposition~\ref{prop:barphi is measurable}.
\end{theorem}
\begin{proof}
Suppose that $R$ is a logical relation between $F$ and $ G$: we have to show that $(R,\Sigma_R)$ is a $\Giry$-coalgebra such that the two projections $\proj 1 \colon (R,\Sigma_R) \to (S,\SRC S)$ and $\proj 2 \colon (R,\Sigma_R) \to (T,\SRC T)$ are coalgebra-homomorphisms, that is, that
\begin{equation}\label{eqn:Giry-bisimulation}
\begin{tikzcd}[font=\small,column sep=1.5em]
(S,\SRC S) \ar[d,"\bar F"'] & \ar[l,"\proj 1"'] (R,\Sigma_R) \ar[r,"\proj 2"] \ar[d,"\gamma"] & (T,\SRC T) \ar[d,"\bar G"] \\
\bigl( \Giry(S,\SRC S), \Sigma_{\Giry(S,\SRC S)} \bigr) & \ar[l,"\Giry\proj 1"'] \bigl( \Giry(R,\Sigma_R), \Sigma_{\Giry(R,\Sigma_R)} \bigr) \ar[r,"\Giry \proj 2"] & \bigl( \Giry(T,\SRC T), \Sigma_{\Giry(T,\SRC T)} \bigr)
\end{tikzcd}
\end{equation}
commutes. For $(s,t) \in R$, define
\[
\begin{tikzcd}[row sep=0em]
\Sigma_R \ar[r,"{\gamma(s,t)}"] & {[0,1]} \\
R \cap (U \times V) \ar[r,|->] & F(s)(U) = G(t)(V)
\end{tikzcd}
\]
(if $R\cap( U \times V)$ is in $\Sigma_R$, then $U R_\Sigma V$, hence by assumption $F(s)(U)= G(t)(V)$). We prove that $\gamma (s,t)$ so defined is indeed a sub-probability measure, that $\gamma \colon (R,\Sigma_R) \to \bigl( \Giry(R,\Sigma_R), \Sigma_{\Giry(R,\Sigma_R)} \bigr)$ is measurable and that (\ref{eqn:Giry-bisimulation}) commutes.
We have first of all that
\begin{align*}
\gamma(s,t)(\emptyset) = \gamma(s,t)(R \cap (\emptyset \times \emptyset)) = F(s)(\emptyset) = 0, \\
\gamma(s,t)(R) = \gamma(s,t)(R \cap (S \times T)) = F(s)(S) \le 1.
\end{align*}
Next, let $E_i = R \cap (U_i \times V_i) \in \Sigma_R$ for all $i \in \mathbb N$, such that $E_i \cap E_j = \emptyset$ when $i \ne j$. Then, if $i \ne j$,
\[
\emptyset = R \cap (U_i \times V_i) \cap (U_j \times V_j) = R \cap \bigl( (U_i \cap U_j) \times (V_i \cap V_j) \bigr)
\]
from which we deduce that if $x \in U_i \cap U_j$ then there is no $y \in T$ such that $x R y$. By assumption of totality of $R$, this implies that $U_i \cap U_j = \emptyset$ whenever $i \ne j$, therefore:
\begin{align*}
\gamma(s,t)\Bigl(\Union_{i \in \mathbb N} E_i\Bigr) &= \gamma(s,t)\biggl( R \cap \Bigl(\Union_{i \in \mathbb N} U_i \times \Union_{i \in \mathbb N} V_i \Bigr) \biggr) \\
&= F(s)\Bigl(\Union_{i \in \mathbb N} U_i \Bigr) \\
&= \sum_{i \in \mathbb N} F(s)(U_i) \\
&= \sum_{i \in \mathbb N} \gamma(s,t) \bigl(R \cap (U_i \times V_i) \bigr) \\
&= \sum_{i \in \mathbb N} \gamma(s,t) (E_i)
\end{align*}
where we used the $\sigma$-additivity of $F(s)$ as a sub-probability measure. Hence $\gamma(s,t)$ is itself a sub-probability measure.
We now show that $\gamma$ is a measurable function. $\Sigma_{(\Giry(R,\Sigma_R))}$ is the smallest $\sigma$-algebra making the evaluation maps $e_A \colon \Giry(R,\Sigma_R) \to [0,1]$ measurable for all $A \in \Sigma_R$; equivalently, it is the $\sigma$-algebra generated by the sets
\[
E_{A,r} = \{ \nu \colon \Sigma_R \to [0,1] \text{ sub-probability measure} \mid \nu(A) > r \}
\]
with $A \in \Sigma_R$ and $r \in [0,1]$. Let then $A = R \cap (U \times V) \in \Sigma_R$ and $r \in [0,1]$:
\begin{align*}
\gamma^{-1}(E_{A,r}) &= \{(s,t)\in R \mid \gamma(s,t)(A)>r \} \\
&= \{(s,t)\in R \mid F(s)(U)= G(t)(V) > r \} \\
&= R \cap \Bigl( \bigl( F(-)(U) \bigr)^{-1}\bigl((r,1]\bigr) \times \bigl( G(-)(V) \bigr)^{-1}\bigl((r,1]\bigr) \Bigr)
\end{align*}
where $X=\bigl(F(-)(U)\bigr)^{-1}\bigl((r,1]\bigr)$ and $Y= \bigl( G(-)(V)\bigr)^{-1}\bigl((r,1]\bigr)$ are both measurable sets: since $F$ and $ G$ are measurable, then by definition of $\Sigma_{\Giry(S,\Sigma_S)}$ and $\Sigma_{\Giry(T,\Sigma_T)}$ we have
\[
X=\{s \in S \mid F(s)(U) > r \} = F^{-1}\bigl( \{ \mu \colon \Sigma_S \to [0,1] \text{ sub-prob.\ meas.} \mid \mu(U) > r \} \bigr) \in \Sigma_S
\]
and similarly $Y \in \Sigma_T$. Moreover,
\[
\invproj1 (X) = \{(s,t)\in R \mid F(s)(U) >r\} = \{(s,t)\in R \mid G(t)(V) >r\} = \invproj2(Y).
\]
We show that $X$ and $Y$ are $R$-closed as well, thus proving that $\gamma^{-1}(E_{A,r}) \in \Sigma_R$. Now, $X$ is $R$-closed if and only if, by definition,
\[
\{ s \in S \mid \exists u \in X \ldotp \exists t \in T \ldotp {sRt} \land {uRt} \} \subseteq X.
\]
Let then $s \in S$, $u \in S$, $t \in T$ be such that $F(u)(U) > r$, $s R t$, $u R t$. We aim to prove that $F(s)(U)>r$ as well. From the fact that $u R t$ and $\invproj 1 (U) = \invproj2 (V)$ we have that $F(u)(U) = G(t)(V)$, and since $s R t$, we have that $ G(t)(V)=F(s)(U)$. Hence $r < F(u)(U) = F(s)(U)$, as required. Similarly one can prove that $Y$ is $R$-closed too.
Finally, we prove the commutativity of (\ref{eqn:Giry-bisimulation}). Let $(s,t) \in R$ and $U \in \SRC S$:
\begin{align*}
\Giry\proj1(\gamma(s,t))(U) &= \gamma(s,t)(\invproj1 (U)) \\
&= \gamma(s,t)(R \cap (U \times UR)) \\
&= F(s)(U) \\
&=\bar F(s)(U),
\end{align*}
hence the left square in (\ref{eqn:Giry-bisimulation}) commutes; in the same way one proves the commutativity of the right square.
\end{proof}
| {
"timestamp": "2020-03-31T02:33:07",
"yymm": "2003",
"arxiv_id": "2003.13542",
"language": "en",
"url": "https://arxiv.org/abs/2003.13542",
"abstract": "We investigate how various forms of bisimulation can be characterised using the technology of logical relations. The approach taken is that each form of bisimulation corresponds to an algebraic structure derived from a transition system, and the general result is that a relation $R$ between two transition systems on state spaces $S$ and $T$ is a bisimulation if and only if the derived algebraic structures are in the logical relation automatically generated from $R$. We show that this approach works for the original Park-Milner bisimulation and that it extends to weak bisimulation, and branching and semi-branching bisimulation. The paper concludes with a discussion of probabilistic bisimulation, where the situation is slightly more complex, partly owing to the need to encompass bisimulations that are not just relations.",
"subjects": "Logic in Computer Science (cs.LO)",
"title": "Bisimulation as a Logical Relation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105287006256,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7076984883763414
} |
https://arxiv.org/abs/1712.03497 | The minimum stretch spanning tree problem for typical graphs | With applications in distribution systems and communication networks, the minimum stretch spanning tree problem is to find a spanning tree T of a graph G such that the maximum distance in T between two adjacent vertices is minimized. The problem has been proved to be NP-hard and fixed-parameter polynomial algorithms have been obtained for some special classes of graphs. In this paper, we concentrate on the optimality characterizations for typical classes of graphs. We determine the exact optimality representations for Petersen graph, the complete k-partite graphs, split graphs, generalized convex graphs, and several planar grids, including rectangular grids, triangular grids, and triangular-rectangular grids. | \section{Introduction }
\hspace*{0.5cm} Since the work of Peleg et al.~\cite{Peleg89} in 1989, a series
of tree spanner problems has arisen in connection with applications in
distribution systems and communication networks (see the survey
\cite{Lieb08}). A basic decision version of the tree spanner
problems for a graph $G$ is as follows: For a given integer $k$, is
there a spanning tree $T$ of $G$ (called a tree $k$-spanner) such
that the distance in $T$ between every pair of vertices is at most
$k$ times their distance in $G$? The corresponding optimization
version of the problem is to find the minimum $k$ such that there
exists a tree $k$-spanner of $G$. This spanning tree optimization
problem is referred to as {\it the minimum stretch spanning tree
problem} (MSST for short) \cite{Brand04,Cai95,Fekete01,Madan96}.
We formulate the problem formally. Let $G$ be a simple connected
graph with vertex set $V(G)$ and edge set $E(G)$. Given a spanning
tree $T$ of $G$, for $uv\in E(G)$, let $d_T(u,v)$ denote the
distance between $u$ and $v$ in $T$, that is, the length of the
unique $u$-$v$ path in $T$. Then the {\it stretch} of a spanning
tree $T$ is defined by
\begin{equation}
\sigma_T(G,T):=\max_{uv\in E(G)}\,d_T(u,v).
\end{equation}
Furthermore, the minimum stretch spanning tree problem is to
determine
\begin{equation}
\sigma_T(G):=\min \{\sigma_T(G,T): T \mbox{ is a spanning tree of }
G\}.
\end{equation}
This gives rise to a graph invariant $\sigma_T(G)$, called {\it the
tree-stretch} of $G$. Here, we follow the notation $\sigma_T(G)$ in
\cite{Fekete01}.
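For small graphs, $\sigma_T(G,T)$ can be computed directly from definition (1): take the maximum, over the edges of $G$, of the tree distance between their endpoints. The following Python sketch is an illustration on an assumed example, the cycle $C_4$ with a Hamiltonian-path spanning tree, where every spanning tree is a path and the stretch is $n-1=3$.

```python
from collections import deque

def tree_dist(adj, u, v):
    # BFS distance between u and v in the spanning tree T.
    seen, q = {u: 0}, deque([u])
    while q:
        x = q.popleft()
        if x == v:
            return seen[x]
        for y in adj[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                q.append(y)
    raise ValueError("v not reachable in T")

def stretch(G_edges, T_edges):
    # sigma_T(G, T) = max over edges uv of G of d_T(u, v).
    adj = {}
    for (u, v) in T_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    return max(tree_dist(adj, u, v) for (u, v) in G_edges)

# C4 with a Hamiltonian-path spanning tree.
G_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
T_edges = [(0, 1), (1, 2), (2, 3)]
```

Here the only non-tree edge $(3,0)$ closes a fundamental cycle of length $4$, so the stretch is $3$, one less than the cycle length.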
For an edge $e=uv$ not in $T$, the unique cycle in $T+e$ is called
the {\it fundamental cycle} with respect to $e$. So, the above
problem is equivalent to finding a spanning tree such that the
length of a maximum fundamental cycle is minimized, where the
tree-stretch $\sigma_T(G)$ is one less than the length of this
cycle. This is precisely {\it the shortest maximal fundamental cycle
problem} proposed by Galbiati \cite{Galb03}. As is well known, all
fundamental cycles with respect to a spanning tree $T$ constitute a
basis of the {\it cycle space} of $G$ \cite{Bondy08}. Thus we have
an optimal basis problem in the cycle space.
In the dual point of view, for each $e\in T$, the edge-cut between
two components of $T-e$ is a fundamental edge-cut (cocycle). Let
$X_e$ be the vertex set of one of these components. Write $\partial
(X_e):=\{uv\in E(G):u\in X_e, v\notin X_e\}$. Then $\partial (X_e)$
is the {\it fundamental edge-cut} with respect to $e$, and
$|\partial (X_e)|$ is called the {\it congestion} of edge $e$. The
minimum congestion spanning tree problem, proposed by Ostrovskii
\cite{Ostrov04} in 2004, is to determine
$$c_T(G):=\min \{\max_{e\in T}\,|\partial(X_e)|:T \mbox{ is a spanning tree of } G\}.$$
This graph invariant $c_T(G)$ is called {\it the tree-congestion} of
$G$.
Admittedly, the tree-congestion $c_T(G)$ is a variant of the
cutwidth $c(G)$ of $G$ and the tree-stretch $\sigma_T(G)$ is a
variant of the bandwidth $B(G)$ of $G$ (see surveys
\cite{Chung88,Diaz02}). In the circuit layout of VLSI designs and
network communication, the quality of an embedding is usually
evaluated by two parameters, namely, the dilation and the
congestion. The dilation motivates the bandwidth problem and the
congestion leads to the cutwidth problem.
So far, the main concern with tree spanner problems has been
algorithmic aspects, including the NP-hardness \cite{Brand04,
Brand07,Cai95,Fekete01,Galb03}, fixed-parameter polynomial
algorithms \cite{Brand04,Brand07,Fekete01,Fomin11}, and
approximability \cite{Galb03}. Moreover, for the characterization
problem, it is known that determining whether $\sigma_T(G)\leq 2$ is
polynomially solvable \cite{Cai95}, while determining whether
$\sigma_T(G)\leq k$ for $k\geq 4$ is NP-complete. A long-standing
open problem is to characterize the graphs with $\sigma_T(G)=3$. In
this respect, it is significant to determine the exact value of
$\sigma_T(G)$ for typical classes of graphs.
The minimum congestion spanning tree problem has been studied
extensively in the literature. On the complexity aspect, the
NP-hardness even for chain graphs or split graphs was shown in
\cite{Okamoto11}. Linear time algorithms for fixed parameter $k$ and
for planar graphs, bounded-degree graphs and treewidth bounded
graphs were presented in \cite{Bodlae12}. Additionally, determining
the exact values of $c_T(G)$ for special graphs has attracted
increasing interest during the last decade, for example:\\
{\indent} $\bullet$ The complete graphs $K_n$, the complete
bipartite graphs $K_{m,n}$, and the planar grids $P_m\times P_n$
\cite{Caste09,Hruska08}.\\
{\indent} $\bullet$ The complete $k$-partite graphs
$K_{n_1,n_2,\ldots,n_k}$ and the torus grids $C_m\times C_n$
\cite{Caste09,Kozawa09}. \\
{\indent} $\bullet$ The triangular grids $T_n$ \cite{Ostrov10}. \\
{\indent} $\bullet$ The $k$-outerplanar graphs \cite{Bodlae11}.
Motivated by the above results on $c_T(G)$, our goal is to
investigate the dual invariant $\sigma_T(G)$ for some basic families
of graphs. The main results are parallel to those for $c_T(G)$.
The remainder of the paper is organized as follows. In Section 2, we
present a basic lower bound in terms of the girth and derive the
exact values for $K_n$, $C_n$, $K_{m,n}$, the Petersen graph, etc. In
Section 3, we characterize $K_{n_1,n_2,\ldots, n_k}$, split graphs
and generalized convex graphs. Section 4 is devoted to exact
results for a class of plane graphs, including the rectangular
grids $P_m\times P_n$, the triangular grids $T_n$, and the
triangulated-rectangular grids $T_{m,n}$.
\section{Elementary properties}
\hspace*{0.5cm} We shall follow the graph-theoretic terminology and
notation of \cite{Bondy08}. Let $G$ be a simple connected graph on
$n$ vertices with vertex set $V(G)$ and edge set $E(G)$. For a
subset $S\subseteq V(G)$, the {\it neighbor set} of $S$ is defined
by $N_G(S):=\{v\in V(G)\setminus S: uv\in E(G) \mbox{ for some } u\in S\}$. We
abbreviate $N_G(\{v\})$ to $N_G(v)$ for a vertex $v\in V(G)$. For
$S\subseteq V(G)$, we denote by $G[S]$ the subgraph induced by $S$.
For an edge $e\in E(G)$, denote by $G-e$ the graph obtained from $G$
by deletion of $e$. For an edge $e$ not in $E(G)$, denote by $G+e$
the graph obtained from $G$ by addition of $e$.
Let $T$ be a spanning tree of $G$. As usual, the spanning tree $T$
is regarded as a set of edges. The {\it cotree} $\bar T$ of $T$ is
defined as the complement of $T$ in $E(G)$, namely $\bar T=
E(G)\setminus T$. For each $e\in \bar T$, the unique cycle in $T+e$
is a fundamental cycle, determined by the cotree edge $e$. The
tree-stretch $\sigma_T(G)$ is the minimum $\sigma_T(G,T)$ over all
spanning trees $T$ of $G$, and a spanning tree $T$ that minimizes
$\sigma_T(G,T)$ is called an {\it optimal tree}.
Let $P_n,C_n,K_n$ denote the path, the cycle, the complete graph,
respectively, on $n$ vertices. The {\it join} of two graphs $G$ and
$H$, denoted $G\vee H$, is obtained from the disjoint union of $G$
and $H$ by adding edges from every vertex of $G$ to every vertex of
$H$. For example, $W_n=K_1\vee C_{n-1}$ is the wheel on $n$ vertices,
and $K_{m,n}=\bar K_m\vee \bar K_n$ is the complete bipartite graph
with $(m,n)$ partition. The {\it Cartesian product} of two graphs $G$
and $H$, denoted $G\times H$, is the graph with vertex set
$V(G)\times V(H)$ in which two vertices $(u,v)$ and $(u',v')$ are
adjacent if and only if either $[u=u'\, \mbox{and}\, vv'\in E(H)]$ or
$[v=v'\, \mbox{and}\, uu'\in E(G)]$. For example, $P_m\times P_n$ is
the rectangular grid and $C_m\times C_n$ is the torus grid.
A {\it block} of $G$ is a subgraph of $G$ which contains no cut
vertices and is maximal with respect to this property. Two blocks
of $G$ have at most one vertex (a cut vertex) in common. As each
fundamental cycle is contained in a block, we have the following.
{\bf Proposition 2.1.} \ If $G$ has blocks $G_1,G_2,\ldots,G_k$,
then
$$\sigma_T(G)=\max_{1\leq i\leq k}\sigma_T(G_i).$$
So, we may assume that $G$ is itself a block, that is, a 2-connected
graph (for $n\geq 3$). It is trivial that $\sigma_T(G)=1$ iff $G$ is
a tree. The {\it girth} of $G$ is the length of a shortest cycle in
$G$. By definition, we have a lower bound as follows.
{\bf Proposition 2.2.} \ Let $g(G)$ be the girth of $G$. Then
$\sigma_T(G)\geq g(G)-1$.
Several graphs attain this lower bound with suitable spanning
trees. The following are some examples (see Figure 1, in which the
spanning trees are depicted by solid lines, while the cotrees by
dotted lines).
{\bf Proposition 2.3.} \ The following graphs have $\sigma_T(G)=g(G)-1$:\\
\indent (1) $\sigma_T(K_n)=2$ for the complete graphs $K_n$ ($n\geq 3$).\\
\indent (2) $\sigma_T(C_n)=n-1$ for the cycles $C_n$ ($n\geq 3$).\\
\indent (3) $\sigma_T(W_n)=2$ for the wheels $W_n=C_{n-1}\vee K_1$ ($n\geq 4$).\\
\indent (4) $\sigma_T(D_n)=2$ for the diamonds $D_n=K_2\vee \bar K_{n-2}$ ($n\geq 4$).\\
\indent (5) $\sigma_T(K_{m,n})=3$ for the complete bipartite graphs $K_{m,n}$ ($m,n\geq 2$).\\
\indent (6) $\sigma_T(P_3\times P_n)=3$ for special planar grids
$P_3\times P_n$ ($n\geq 2$).\\
\indent (7) $\sigma_T(G)=4$ for the Petersen graph $G$.
\begin{center}
\setlength{\unitlength}{0.4cm}
\begin{picture}(31,11)
\multiput(0,7)(1.5,0){2}{\circle*{0.3}}
\multiput(3.5,7)(1.5,0){2}{\circle*{0.3}}
\multiput(2.5,4)(0,6){2}{\circle*{0.3}}
\multiput(8,4)(0,3){3}{\circle*{0.3}}
\multiput(11,4)(0,3){3}{\circle*{0.3}}
\multiput(15,4)(2,0){4}{\circle*{0.3}}
\multiput(15,7)(2,0){4}{\circle*{0.3}}
\multiput(15,10)(2,0){4}{\circle*{0.3}}
\put(2.5,4){\line(-5,6){2.5}} \put(2.5,4){\line(-1,3){1}}
\put(2.5,4){\line(0,1){6}}\put(2.5,4){\line(1,3){1}}
\put(2.5,4){\line(5,6){2.5}} \put(8,4){\line(1,0){3}}
\put(8,4){\line(1,1){3}} \put(8,4){\line(1,2){3}}
\put(11,4){\line(-1,1){3}} \put(11,4){\line(-1,2){3}}
\put(15,4){\line(0,1){6}} \put(17,4){\line(0,1){6}}
\put(19,4){\line(0,1){6}} \put(21,4){\line(0,1){6}}
\put(15,7){\line(1,0){6}}
\bezier{12}(0,7)(1.25,8.5)(2.5,10) \bezier{12}(1.5,7)(2,8.5)(2.5,10)
\bezier{12}(3.5,7)(3,8.5)(2.5,10) \bezier{12}(5,7)(3.75,8.5)(2.5,10)
\bezier{12}(8,7)(9.5,7)(11,7) \bezier{12}(8,10)(9.5,8.5)(11,7)
\bezier{12}(8,7)(9.5,8.5)(11,10) \bezier{12}(8,10)(9.5,10)(11,10)
\bezier{30}(15,4)(18,4)(21,4) \bezier{30}(15,10)(18,10)(21,10)
\multiput(25.3,4)(4.4,0){2}{\circle*{0.3}}
\multiput(26.1,5.2)(2.8,0){2}{\circle*{0.3}}
\multiput(25.4,7.5)(4.2,0){2}{\circle*{0.3}}
\multiput(24,8)(7,0){2}{\circle*{0.3}}
\multiput(27.5,9)(1,0){1}{\circle*{0.3}}
\multiput(27.5,10.5)(1,0){1}{\circle*{0.3}}
\put(27.5,9){\line(0,1){1.5}} \qbezier(24,8)(25.75,9.25)(27.5,10.5)
\qbezier(27.5,10.5)(29.25,9.25)(31,8)
\qbezier(26.1,5.2)(26.8,7.1)(27.5,9)
\qbezier(28.9,5.2)(28.2,7.1)(27.5,9)
\qbezier(24,8)(24.7,7.75)(25.4,7.5)
\qbezier(29.6,7.5)(30.3,7.75)(31,8)
\qbezier(25.3,4)(25.7,4.6)(26.1,5.2)
\qbezier(28.9,5.2)(29.3,4.6)(29.7,4)
\bezier{15}(25.3,4)(27.5,4)(29.7,4)
\bezier{15}(25.4,7.5)(28,7.5)(29.6,7.5)
\bezier{15}(25.4,7.5)(27.15,6.35)(28.9,5.2)
\bezier{15}(26.1,5.2)(27.85,6.35)(29.6,7.5)
\bezier{15}(25.3,4)(24.65,6)(24,8)
\bezier{15}(31,8)(30.35,6)(29.7,4)
\put(-0.5,2.2){\makebox(1,0.5)[l]{\small (a) Diamond $D_6$}}
\put(8,2.2){\makebox(1,0.5)[l]{\small (b) $K_{3,3}$}}
\put(15.5,2.2){\makebox(1,0.5)[l]{\small (c) $P_3\times P_4$}}
\put(23.5,2.2){\makebox(1,0.5)[l]{\small (d) Petersen graph}}
\put(8,0.5){\makebox(1,0.5)[l]{\small Figure 1. Examples in
Proposition 2.3.}}
\end{picture}
\end{center}
{\bf Proof.} \ (1) The complete graph $K_n$ ($n\geq 3$) has girth
$g(K_n)=3$ and a star $K_{1,n-1}$ is an optimal tree. (2) For the cycle
$C_n$ ($n\geq 3$), the unique fundamental cycle is $C_n$ itself. (3) The
wheel $W_n=C_{n-1}\vee K_1$ has girth $3$ and the star $K_{1,n-1}$
is an optimal tree. (4) The diamond $D_n$ has girth $3$ and the star
$K_{1,n-1}$ is an optimal tree (see Figure 1(a)). (5) Let $G$ be a
complete bipartite graph $K_{m,n}$ with bipartition $(X,Y)$ where
$|X|=m,|Y|=n$ ($m,n\geq 2$). Then $G$ has girth $4$. We can
construct a spanning tree $T$ by taking a star $K_{1,n}$ with center
$x\in X$ and a star $K_{1,m}$ with center $y\in Y$ (which is called
a {\it double star} with diameter three, see Figure 1(b)). Then each
fundamental cycle with respect to $T$ has length $4$, and thus $T$
is optimal. (6) For the planar grid $P_3\times P_n$, the girth is 4
and the `caterpillar' with leaves on the boundary of outer face is
an optimal tree (see Figure 1(c)). (7) For the Petersen graph $G$,
the girth is 5 and we take the spanning tree $T$ as shown in Figure
1(d). Then every fundamental cycle with respect to $T$ has length 5.
This completes the proof. $\Box$
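The small cases in Proposition 2.3 can also be checked exhaustively. The following brute-force sketch (our own illustration, feasible only for very small graphs; the names are ours) minimizes the stretch over all spanning trees:

```python
from collections import deque
from itertools import combinations

def sigma_T(nodes, edges):
    """Brute-force tree-stretch: minimum of sigma_T(G, T) over all spanning trees T."""
    best = None
    for cand in combinations(edges, len(nodes) - 1):
        adj = {v: [] for v in nodes}
        for u, v in cand:
            adj[u].append(v)
            adj[v].append(u)
        # a set of |V|-1 edges is a spanning tree iff it is connected
        seen, queue = {nodes[0]}, deque([nodes[0]])
        while queue:
            for y in adj[queue.popleft()]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        if len(seen) != len(nodes):
            continue

        def tdist(u, v):
            # distance in the current tree, by breadth-first search
            dist, q = {u: 0}, deque([u])
            while q:
                x = q.popleft()
                for y in adj[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        q.append(y)
            return dist[v]

        s = max(tdist(u, v) for u, v in edges)
        best = s if best is None else min(best, s)
    return best

# Two instances from Proposition 2.3: sigma_T(K_4) = 2 and sigma_T(C_5) = 4.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
c5 = [(i, (i + 1) % 5) for i in range(5)]
```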
It is interesting to characterize the graphs satisfying Proposition
2.3, namely, those graphs having a spanning tree such that every
fundamental cycle is a shortest cycle. We shall see more examples in
the next section.
\section{Characterization of low stretch graphs}
\hspace*{0.5cm} This section is intended to approach the open
problem of characterizing $\sigma_T(G)=3$. Madanlal et al.
\cite{Madan96} showed that $\sigma_T(G)\leq 3$ for all interval and
permutation graphs, and that a regular bipartite graph $G$ has
$\sigma_T(G)\leq 3$ if and only if it is complete. Moreover,
Brandst\"adt et al. \cite{Brand07} showed $\sigma_T(G)=3$ for
bipartite ATE-free graphs and convex graphs. Here, an ATE
(asteroidal triple of edges) in a graph $G$ is a set $A$ of three
edges such that for any two edges $e_1,e_2\in A$, there is a path from
$e_1$ to $e_2$ that avoids the neighborhood of the third edge $e_3$
(the neighborhood of an edge $uv$ is $N_G(u)\cup N_G(v)$). An ATE-free
(asteroidal-triple-edge-free) graph is one which does not contain
any ATE. The bipartite convex graphs form a special class of
bipartite ATE-free graphs. A bipartite graph $G$ with bipartition
$(X,Y)$ is said to be {\it convex} if $Y$ can be ordered as
$Y=\{y_1,y_2,\ldots,y_n\}$ such that the neighbor set $N_G(x_i)$ is
a consecutive sequence in $Y$ for each $x_i\in X$. We present more
results in this context.
\subsection{Complete $k$-partite graphs}
\hspace*{0.5cm} Let $\{V_1,V_2,\ldots, V_k\}$ be a partition of
$V(G)$ with $n_i=|V_i|$ ($1\leq i\leq k$). The complete $k$-partite
graph $K_{n_1,n_2,\ldots,n_k}$ with $k\geq 2$ is the graph such that
$uv\in E(G)$ if and only if $u\in V_i$ and $v\in V_j$ for some $i\neq j$.
{\bf Theorem 3.1.} \ Suppose that $n_1\leq n_2\leq\cdots \leq n_k$
and $k\geq 3$. Then
$$\sigma_T(K_{n_1,n_2,\ldots,n_k})=\begin{cases}
2,& \mbox{if}\,\, n_1=1\\
3,& \mbox{otherwise}.
\end{cases}$$
{\bf Proof.} \ Let $G=K_{n_1,n_2,\ldots,n_k}$ ($k\geq 3$).
Obviously, the girth of $G$ is 3. When $n_1=1$, we can construct a
spanning tree as a star centered at the unique vertex of $V_1$. Then
all fundamental cycles are triangles, and thus $\sigma_T(G)=2$. When
$n_1\geq 2$, we will show that for any spanning tree $T$,
$\sigma_T(G,T)\geq 3$. Letting $X=V_2\cup\cdots \cup V_k$, we obtain
the complete bipartite spanning subgraph $G'$ with bipartition
$(V_1,X)$. There
are two cases to consider.\\
{\indent}(i) The spanning tree $T$ contains no edges between
vertices in $X$. Then $T$ is a spanning tree of $G'$, and a
fundamental cycle with respect to $T$ in $G'$ is also one in $G$. As
$G'$ is bipartite, a fundamental cycle in $G'$ has length at least
$4$, whence $\sigma_T(G,T)\geq 3$.\\
{\indent}(ii) The spanning tree $T$ contains some edges between
vertices in $X$. Suppose that $xy\in T$ with $x\in V_i$ and $y\in
V_j$ ($2\leq i<j\leq k$). Let $u\in V_1$ be such that
$d_T(u,x)<d_T(u,y)$. If $d_T(u,x)\geq 2$, then $d_T(u,y)\geq 3$,
thus $\sigma_T(G,T)\geq 3$. Otherwise $ux\in T$. Take $z\in
V_i,z\neq x$. Then $d_T(x,z)\geq 2$. If the path $P_{xz}$ in $T$
contains $u$, then $d_T(y,z)\geq 3$. Otherwise $d_T(u,z)\geq 3$,
whence $\sigma_T(G,T)\geq 3$.\\
{\indent} On the other hand, we can construct a spanning tree $T$ in
the complete bipartite graph $G'$ as a double star (as in
Proposition 2.3(5)). Then for an edge between the vertices of $V_1$
and $X$, the fundamental cycle has length four, while for an edge
between the vertices of $X$, the fundamental cycle has length three.
Thus $\sigma_T(G,T)=3$. This completes the proof. $\Box$
\subsection{Split graphs}
\hspace*{0.5cm}A graph $G$ is a {\it split graph} if its vertex set
$V(G)$ can be partitioned into a clique $X$ of $G$ and an
independent set $Y$ of $G$. For split graphs, \cite{Okamoto11}
showed that the spanning tree congestion problem is NP-complete.
However, the dual problem is easy. It is known from
\cite{Brand04,Venka97} that $\sigma_T(G)\leq 3$ for split graphs
$G$. Here we give a precise characterization.
{\bf Theorem 3.2.} \ For a split graph $G$ (apart from a tree),
$\sigma_T(G)=2$ if and only if there exists a vertex $x_0\in X$ such
that every vertex $y\in Y\setminus N_G(x_0)$ is a pendant vertex (of
degree one). Otherwise $\sigma_T(G)=3$.
{\bf Proof.} \ If there exists a vertex $x_0\in X$ such that every
vertex $y\in Y\setminus N_G(x_0)$ is pendant, then we can construct
a spanning tree $T^*$ by the star with edges from $x_0$ to
$N_G(x_0)$, and by joining each remaining vertex $y\in Y\setminus
N_G(x_0)$ to its unique neighbor in $X$. Then for any $x,x'\in X$,
we have $d_{T^*}(x,x')\leq 2$. For any edge $xy\in E(G)$ with $x\in
X$ and $y\in N_G(x_0)$, the path between $x$ and $y$ in $T^*$ is
either $x_0y$ or $xx_0y$, thus $d_{T^*}(x,y)\leq 2$. For any edge
$xy\in E(G)$ with $x\in X$ and $y\in Y\setminus N_G(x_0)$, we have
$x\neq x_0$. Then $x$ is the unique neighbor of $y$, thus
$d_{T^*}(x,y)=1$. Therefore $\sigma_T(G,T^*)=2$ and so
$\sigma_T(G)=2$.
Conversely, if $\sigma_T(G)=2$, then there is a spanning tree $T$
such that $\sigma_T(G,T)=2$. This spanning tree $T$ restricted to
$G[X]$ must be a star with center $x_0$, for otherwise there would
be $x,x'\in X$ such that $d_T(x,x')\geq 3$. If a vertex $y\in Y
\setminus N_G(x_0)$ is adjacent to two vertices $x_1,x_2\in X$
(where $yx_1\in T$), then the fundamental cycle $yx_1x_0x_2$ has
length four, which contradicts $\sigma_T(G,T)=2$.
Furthermore, we show that $\sigma_T(G)\leq 3$ in any case. To this
end, we construct a spanning tree $T$ as follows. We choose a vertex
$x_0\in X$ arbitrarily and take the star from $x_0$ to $N_G(x_0)$,
and join each vertex $y\in Y\setminus N_G(x_0)$ to a neighbor in
$X$. For any $x,x'\in X$, we have $d_{T}(x,x')\leq 2$. For any edge
$xy\in \bar T$ with $x\in X$ and $y\in N_G(x_0)$, the path between
$x$ and $y$ in $T$ is $xx_0y$, thus $d_{T}(x,y)=2$. If there is an
edge $xy\in \bar T$ with $x\in X$ and $y\in Y\setminus N_G(x_0)$,
and $yx'\in T$, then the path between $x$ and $y$ in $T$ is
$xx_0x'y$. Thus $d_{T}(x,y)=3$. Therefore, $\sigma_T(G,T)\leq 3$ and
so $\sigma_T(G)\leq 3$. This completes the proof. $\Box$
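The spanning trees used in the proof of Theorem 3.2 are easy to build explicitly. A minimal sketch (the adjacency-dictionary encoding and the function name are our own assumptions):

```python
def split_tree(Y, adj, x0):
    """Spanning tree T* from the proof of Theorem 3.2: a star from x0 to
    N(x0) (this already covers the clique X), plus an edge joining each
    remaining vertex y in Y to one of its neighbours in X."""
    tree = [(x0, v) for v in adj[x0]]
    tree += [(next(iter(adj[y])), y) for y in Y if y not in adj[x0]]
    return tree

# Split graph with clique X = {1, 2, 3} and independent set Y = {4, 5},
# where 4 ~ 1 and 5 ~ 3.  Vertex 5 is pendant, so with x0 = 1 the
# characterization of Theorem 3.2 gives sigma_T(G) = 2.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 5}, 4: {1}, 5: {3}}
T = split_tree([4, 5], adj, x0=1)
```

Here the only cotree edge is $23$, whose endpoints are at tree distance $2$ through $x_0=1$.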
\subsection{Generalized convex graphs}
\hspace*{0.5cm} A bipartite graph $G$ with bipartition $(X,Y)$ is a
{\it chain graph} if there is an order $x_1,x_2,\ldots,x_m$ in $X$
such that $N_G(x_1)\subseteq N_G(x_2) \subseteq \cdots \subseteq
N_G(x_m)$. Previously, \cite{Okamoto11} showed that the minimum
congestion spanning tree problem is NP-hard even for chain graphs.
However, the counterpart in the tree-stretch problem is quite easy,
since a chain graph is a special convex graph and $\sigma_T(G)\leq
3$ is known from \cite{Brand07}.
Now we consider a generalization of convex graphs. A subset family
${\cal F}$ is called {\it laminar} (or {\it nested}) if for any two
sets $A,B\in {\cal F}$, at least one of $A\setminus B,B\setminus A,
A\cap B$ is empty, that is, $A\cap B\neq \emptyset \Rightarrow
A\subseteq B \, \mbox{or}\, B\subseteq A$.
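Laminarity is straightforward to test directly; a small sketch (the function name is ours):

```python
def is_laminar(family):
    """True iff any two members of the family are disjoint or nested."""
    sets = [frozenset(s) for s in family]
    for i, a in enumerate(sets):
        for b in sets[i + 1:]:
            # a proper overlap (nonempty intersection, neither contained
            # in the other) violates laminarity
            if a & b and not (a <= b or b <= a):
                return False
    return True

# A nested chain together with a disjoint set is laminar;
# two properly overlapping sets are not.
nested = [{1, 2, 3, 4}, {2, 3}, {5, 6}]
crossing = [{1, 2}, {2, 3}]
```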
{\bf Definition.} \ A bipartite graph $G$ with bipartition $(X,Y)$
is a {\it generalized convex graph} if there exists a tree $\tau
(Y)$ on the vertex set $Y$ such that for each $x_i\in X$, the
neighbor set $Y_i=N_G(x_i)$ induces a subpath in $\tau (Y)$ and the
subset family $\Sigma=\{Y_i: x_i\in X\}$ satisfies the following \\
{\indent} {\bf Laminar property} \ For each maximal subset $Y_0\in
\Sigma$ (there exists no $Y_i\in \Sigma$ such that $Y_0\subset
Y_i$), the subset family $\{Y_i\setminus Y_0: Y_i\cap Y_0 \neq
\emptyset,Y_i\in \Sigma\}$ is laminar.
For a convex graph $G$, $\tau (Y)$ is itself a path and the subset
family $\Sigma=\{Y_i: x_i\in X\}$ can be regarded as a set of
intervals on the line of $\tau (Y)$. For each maximal interval
$Y_0\in \Sigma$, if $Y_i\cap Y_0 \neq \emptyset$ and $Y_j\cap Y_0 \neq
\emptyset$, then $Y_i\setminus Y_0$ and $Y_j\setminus Y_0$ are either
disjoint or one is contained in the other. Hence the subset family
$\{Y_i\setminus Y_0: Y_i\cap Y_0 \neq \emptyset,Y_i\in \Sigma\}$ is
laminar. Thus the above definition is indeed a generalization of
that of convex graphs. Moreover, a generalized convex graph is not
necessarily an ATE-free graph. For example, when $\tau (Y)$ is not a
path, let $y_1,y_2,y_3$ be three leaves (pendant vertices) of $\tau
(Y)$ in different branches such that there is a path from $y_i$ to
$y_j$ that avoids the neighborhood of $y_k$ (for $\{i,j,k\}=
\{1,2,3\}$). Then the three edges $e_1,e_2,e_3$ incident with
$y_1,y_2,y_3$, respectively, in $G$ constitute an ATE.
We are going to show that $\sigma_T(G)=3$ for generalized convex
graphs. Since a bipartite graph (apart from a tree) has girth
$g(G)=4$, we have $\sigma_T(G)\geq 3$. It suffices to construct an
optimal spanning tree with stretch three.
Let $\Sigma=\{Y_1,Y_2,\ldots,Y_m\}$ be the family of neighbor sets,
where $Y_i=N_G(x_i)$ for $x_i\in X$ ($1\leq i\leq m$). By
assumption, we are given a tree $\tau (Y)$ on $Y$ such that each $Y_i$
induces a subpath of it ($1\leq i\leq m$). Suppose that $Y_1$
contains a leaf (pendant vertex) of $\tau (Y)$ and it is maximal in
$\Sigma$ in the sense of inclusion. We consider this leaf as the
root of the tree. Starting with $Y_1$, we define the {\it level
sets} $L_k$ in $\Sigma$ by the following procedure:\\
{\indent} (i) Define $L_1:=\{Y_1\}$. Set $\Sigma:=\Sigma\setminus
L_1$ and $k:=1$. \\
{\indent} (ii) For each $Y_i\in L_k$, if $Y_j\in \Sigma$ satisfies
that $Y_i\cap Y_j\neq \emptyset$, $Y_j\setminus Y_i\neq \emptyset$,
and $Y_i\cup Y_j$ is maximal (i.e., there is no other $Y_l$ such
that $Y_i\cap Y_l\neq \emptyset$, $Y_l\setminus Y_i\neq \emptyset$,
and $Y_i\cup Y_j \subset Y_i\cup Y_l$), then $Y_j$ is called a {\it
successor} of $Y_i$ (and $Y_i$ is the {\it predecessor} of $Y_j$).
Let $L_{k+1}$ be the set of successors of $Y_i$ for all $Y_i\in L_k$. \\
{\indent} (iii) Set $\Sigma:=\Sigma\setminus L_{k+1}$ and $k:=k+1$.
If $\bigcup_{Y_i\in L_1\cup L_2\cup \cdots\cup L_k}Y_i=Y$, then let
$h:=k$ and stop, else go to (ii).
By this procedure, we construct the level sets $L_1,L_2,\ldots,L_h$.
Let $\Sigma^*:=\bigcup_{1\leq k\leq h}L_k$, which is a subfamily of
$\Sigma$. For all neighbor sets $Y_i$ in $\Sigma^*$, no one is
contained in another, and they constitute a cover of $Y$. Also, they
can be regarded as a directed tree rooted at $Y_1$ and running down
level by level. If $Y_j$ and $Y_l$ are successors of $Y_i$ in this
directed tree, then by the laminar property, we see that
$(Y_j\setminus Y_i)\cap (Y_l\setminus Y_i)=\emptyset$. Also, for
$Y_i\in L_{k-1}$ and $Y_j\in L_{k+1}$, we have $Y_i\cap
Y_j=\emptyset$.
In this situation, there may be some neighbor sets $Y_q\in \Sigma
\setminus \Sigma^*$, which are discarded in the above procedure. For
each $Y_q\in \Sigma \setminus \Sigma^*$, there must be a $Y_i\in
L_k$ and its successor $Y_j\in L_{k+1}$ such that $Y_i\cap Y_q\neq
\emptyset$ and $Y_i\cup Y_q \subseteq Y_i\cup Y_j$, for otherwise
$Y_q$ would have been chosen in the above procedure.
By means of the level structure $\{L_1,L_2,\ldots,L_h\}$, we
construct the spanning tree $T$ by the following algorithm.
\vskip 3mm {\bf Construction Algorithm}\\
\hspace*{0.2cm}\line(1,0){170}\\
{\indent} (1) For $L_1=\{Y_1\}$, construct a star $T_1$ with center
$x_1$ and all leaves $y\in Y_1$. Set $T:=T_1$ and $k:=1$.\\
{\indent} (2) For each neighbor set $Y_i\in L_k$, consider a
successor $Y_j\in L_{k+1}$, and construct a star $T_j$ with center
$x_j$ and all leaves $y\in (Y_j\setminus Y_i)\cup \{\bar y\}$, where
$\bar y$ is the last vertex in $Y_i\cap Y_j$ (according to the order
of the path of $Y_i$). Set $T:=T\cup T_j$. Repeat this step for all
successors of $Y_i$ and all $Y_i\in L_k$. \\
{\indent} (3) Set $k:=k+1$. If $k<h$, then go to (2). \\
{\indent} (4) For each neighbor set $Y_q\in \Sigma\setminus
\Sigma^*$, suppose that $Y_i\cap Y_q\neq \emptyset$ and
$Y_q\subseteq Y_i\cup Y_j$ for some $Y_i\in L_k$ and its successor
$Y_j\in L_{k+1}$. Let $\bar y$ be the last vertex in $Y_i\cap Y_q$.
Then set $T:=T\cup \{x_q\bar y\}$.\\
\hspace*{0.2cm}\line(1,0){170}\\
We claim that the output $T$ of the above algorithm is indeed a
spanning tree of $G$. In fact, we first construct a star $T_1$ with
center $x_1$ for level $L_1$. When $Y_i\in L_k$ has been considered,
we have a star $T_i$ with center $x_i$. Then we consider a successor
$Y_j\in L_{k+1}$ of $Y_i$ and add a star $T_j$. Since the stars
$T_i$ and $T_j$ have only one leaf in common, $T_i\cup T_j$ is
connected and contains no cycles, and thus is a tree. If $Y_i$ has
another successor $Y_l$, then by the laminar property, we have
$(Y_j\setminus Y_i)\cap (Y_l\setminus Y_i)=\emptyset$. Then $T_l$
and $T_i$ have one leaf in common, $T_l$ and $T_j$ have at most one
leaf in common (if the $\bar y\in Y_i$ is the same for $T_l$ and
$T_j$). Hence $T_i\cup T_j\cup T_l$ is also a tree. In this way, we
construct a set of stars in which any two stars have at most one
leaf in common. So we obtain a tree $T$ in Steps (1)-(3). In Step
(4), we add more pendant edges (with new leaves $x_q$) to $T$.
Additionally, all vertices of $G$ are considered when the algorithm
terminates. Therefore $T$ is finally a spanning tree.
{\bf Theorem 3.3.} \ For a generalized convex graph $G$ (apart from
a tree), it holds that $\sigma_T(G)=3$.
{\bf Proof.} \ We proceed to show that the spanning tree $T$
constructed by the above algorithm has stretch three. For each
cotree-edge $e\in \bar T$, there are two cases to consider:
{\bf Case 1:} \ $e=x_jy$ with $Y_j\in L_k$ for some $L_k$ in
$\Sigma^*$. Let $Y_i\in L_{k-1}$ be the predecessor of $Y_j$. Then
$y\in Y_i\cap Y_j$. Thus $x_iy,x_i\bar y$ and $x_j\bar y$ are
contained in $T$ (where $\bar y$ is the last vertex in $Y_i\cap
Y_j$). Hence $e=x_jy$ and these three edges in $T$ constitute the
fundamental cycle with respect to $e$, which has length four.
{\bf Case 2:} \ $e=x_qy$ with $Y_q\in \Sigma\setminus \Sigma^*$.
Then there is some $Y_i\in L_k$ and its successor $Y_j\in L_{k+1}$
such that $Y_i\cap Y_q\neq \emptyset$ and $Y_q\subseteq Y_i\cup
Y_j$. If $y\in Y_i$ (say $Y_q\subseteq Y_i$), then $x_iy,x_i\bar y,
x_q\bar y\in T$ (where $\bar y$ is the last vertex in $Y_i\cap
Y_q$). Thus $e=x_qy$ and these three edges in $T$ constitute the
fundamental cycle with respect to $e$, which has length four. If
$y\in Y_j\setminus Y_i$, then by the laminar property, $Y_q\setminus
Y_i\subseteq Y_j\setminus Y_i$. Let $\bar y$ be the last vertex in
$Y_i\cap Y_j$. Then $\bar y\in Y_q$ and $x_jy,x_j\bar y,x_q\bar y\in
T$. Thus these three edges in $T$ and $e=x_qy\notin T$ also yield a
length four fundamental cycle.
To summarize, for every cotree-edge $e\in \bar T$, the fundamental
cycle with respect to $e$ has length four. Therefore,
$\sigma_T(G,T)=3$ and the theorem is proved. $\Box$
\section{Planar grids}
\hspace*{0.5cm} It is known that the minimum stretch spanning tree
problem is NP-hard for planar graphs in general \cite{Fekete01}. We
discuss some planar grids in this section.
Let $G$ be a simple connected planar graph. Suppose that we have a
planar embedding of $G$ on the plane so that it is a {\it plane
graph}. For a face $f$ of $G$, the degree of $f$, denoted by $d(f)$,
is the number of edges in its boundary. Our approach is based on the
spanning trees of the dual graph. The dual graph $G^*$ of $G$ is
defined as follows. Each face $f$ of $G$ (including the outer face)
corresponds to a vertex $f^*$ in $G^*$, and each edge $e$ of $G$
corresponds to an edge $e^*$ of $G^*$ in such a way that two
vertices $f^*$ and $g^*$ are joined by an edge $e^*$ in $G^*$ if and
only if their corresponding faces $f$ and $g$ are separated by the
edge $e$ in $G$. We may place each vertex $f^*$ in the face $f$ of
$G$ and draw each edge $e^*$ to cross the edge $e$ of $G$ exactly
once. This dual graph $G^*$ is also a plane graph.
A prominent property of duality is: a cycle $C$ of $G$ corresponds
to an edge-cut (cocycle) $C^*$ of $G^*$, and an edge-cut $B$ of $G$
corresponds to a cycle $B^*$ of $G^*$. In particular, for a spanning
tree $T$ of $G$, the cotree $\bar T$ corresponds to a spanning tree
$\bar T^*$ of $G^*$. A fundamental cycle with respect to $T$ in $G$
corresponds to a fundamental edge-cut with respect to $\bar T^*$ in
$G^*$ (see \cite{Bondy08} for details). For example, the cube $Q_3$
is shown in Figure 2(a) and a spanning tree $T$ with solid lines in
Figure 2(b). Meanwhile, the spanning tree $\bar T^*$ with dotted
lines of the dual graph $G^*$ is also drawn in Figure 2(b), in which
the vertices of faces are represented by small circles and the
vertex of outer face is denoted by $O$.
\begin{center}
\setlength{\unitlength}{0.3cm}
\begin{picture}(32,16)
\multiput(1,4)(10,0){2}{\circle*{0.3}}
\multiput(4,7)(4,0){2}{\circle*{0.3}}
\multiput(4,11)(4,0){2}{\circle*{0.3}}
\multiput(1,14)(10,0){2}{\circle*{0.3}}
\multiput(16,4)(10,0){2}{\circle*{0.3}}
\multiput(19,7)(4,0){2}{\circle*{0.3}}
\multiput(19,11)(4,0){2}{\circle*{0.3}}
\multiput(16,14)(10,0){2}{\circle*{0.3}}
\multiput(17.5,9)(7,0){2}{\circle{0.3}}
\multiput(21,5.5)(0,3.5){3}{\circle{0.3}}
\multiput(30,9)(3,0){1}{\circle{0.3}}
\put(1,4){\line(1,0){10}} \put(4,7){\line(1,0){4}}
\put(4,11){\line(1,0){4}}\put(1,14){\line(1,0){10}}
\put(1,4){\line(0,1){10}} \put(4,7){\line(0,1){4}}
\put(8,7){\line(0,1){4}} \put(11,4){\line(0,1){10}}
\put(1,4){\line(1,1){3}} \put(8,11){\line(1,1){3}}
\put(1,14){\line(1,-1){3}} \put(8,7){\line(1,-1){3}}
\put(19,7){\line(1,0){4}} \put(19,11){\line(1,0){4}}
\put(19,7){\line(0,1){4}} \put(16,4){\line(1,1){3}}
\put(23,11){\line(1,1){3}} \put(16,14){\line(1,-1){3}}
\put(23,7){\line(1,-1){3}}
\bezier{30}(21.2,9)(25,9)(29.8,9)
\bezier{40}(21,12.6)(24,20)(30,9.1)
\bezier{40}(21,5.4)(25,-2)(30,8.9) \bezier{25}(17.4,9)(13,11)(15,14)
\bezier{55}(15,14)(25,21)(30,9.1)
\put(5.6,5.2){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(2.2,8.5){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(5.6,8.5){\makebox(1,0.5)[l]{\footnotesize $2$}}
\put(9.2,8.5){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(5.6,12.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(31,8.5){\makebox(1,0.5)[l]{\footnotesize $O$}}
\put(1.5,0.8){\makebox(1,0.5)[l]{\small (a) The cube $Q_3$}}
\put(13,0.8){\makebox(1,0.5)[l]{\small (b) Spanning trees of $Q_3$
and its dual}} \put(5,-1){\makebox(1,0.5)[l]{\small Figure 2. The
cube $Q_3$ and its spanning trees.}}
\end{picture}
\end{center}
For a face $f$ of plane graph $G$ (a vertex of $G^*$), we define the
{\it level} of $f$, denoted by $\lambda(f)$, to be the length of a
shortest path from the vertex $f$ to the vertex $O$ of the outer face
in $G^*$. We denote by $L_i$ the set of faces having level $i$ ($i=0,
1,\ldots$). Then the levels can be determined by the following
procedure:\\
{\indent} (i) Let $\lambda(O)=0$ and $L_0=\{O\}$.\\
{\indent} (ii) If $L_i$ has been defined, then for each face $f$
whose level $\lambda(f)$ is not yet defined and which is \\
{\indent} $\quad$ adjacent to a face $g\in L_i$, set $\lambda(f)=i+1$; these faces constitute $L_{i+1}$.\\
For example, the levels of the faces in $Q_3$ are shown in Figure
2(a) by the number in each face (except $O$ with level $0$), where
$|L_1|=4,|L_2| =1$. Here, we first consider the outer vertex $O$ as
the root. Then, all vertices in $L_1$ have the same {\it
predecessor} $O$. In general, when $\lambda(f)=i+1$ is defined in
terms of an adjacent vertex $g\in L_i$, $g$ is the predecessor of
$f$. Thus, a rooted tree (called {\it search tree}) is obtained
level by level. In this respect, we define the {\it maximum level}
of $G$ by
$$\lambda_{\max}(G):=\max_{f\in F}\lambda(f),$$ where $F$ is the set
of the faces of $G$. This is the height of the search tree.
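The levels $\lambda(f)$ are just breadth-first-search distances from $O$ in the dual graph. A minimal sketch on the dual of the cube $Q_3$ from Figure 2, which is the octahedron (the face names are our own labels):

```python
from collections import deque

def face_levels(dual_adj, outer="O"):
    """lambda(f) for every face f: BFS distance to the outer face in the dual."""
    level, queue = {outer: 0}, deque([outer])
    while queue:
        f = queue.popleft()
        for g in dual_adj[f]:
            if g not in level:
                level[g] = level[f] + 1
                queue.append(g)
    return level

# Dual of the cube Q_3 (Figure 2): "O" is the outer face, "a".."d" the four
# side faces, and "z" the face opposite to "O"; each face misses only its
# antipodal face in the octahedron.
q3_dual = {
    "O": ["a", "b", "c", "d"],
    "z": ["a", "b", "c", "d"],
    "a": ["O", "z", "b", "d"],
    "b": ["O", "z", "a", "c"],
    "c": ["O", "z", "b", "d"],
    "d": ["O", "z", "a", "c"],
}
levels = face_levels(q3_dual)
```

The output matches Figure 2(a): four faces at level $1$, one at level $2$, so $\lambda_{\max}(Q_3)=2$.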
\subsection{Rectangular grids}
\hspace*{0.5cm} First, we consider the rectangular grids $G=
P_m\times P_n$ ($2\leq m\leq n$) on the plane. Let $V(G):=
\{(i,j):1\leq i\leq m,1\leq j\leq n\}$ denote the vertex set of $G$,
and $(i,j)$ is adjacent to $(i',j')$ if $|i-i'|+|j-j'|=1$ (see Figure
3(a)). In analogy with matrix notation, we may call $R_i:=
\{(i,j):1\leq j\leq n\}$ the $i$-th row, and $Q_j:=\{(i,j): 1\leq
i\leq m\}$ the $j$-th column. The edges in the rows are called {\it
horizontal edges}. The edges in the columns are called {\it vertical
edges}.
\begin{center}
\setlength{\unitlength}{0.32cm}
\begin{picture}(36,15.5)
\multiput(1,4)(3,0){5}{\circle*{0.3}}
\multiput(1,7)(3,0){5}{\circle*{0.3}}
\multiput(1,10)(3,0){5}{\circle*{0.3}}
\multiput(1,13)(3,0){5}{\circle*{0.3}}
\multiput(19,4)(3,0){5}{\circle*{0.3}}
\multiput(19,7)(3,0){5}{\circle*{0.3}}
\multiput(19,10)(3,0){5}{\circle*{0.3}}
\multiput(19,13)(3,0){5}{\circle*{0.3}}
\multiput(20.5,5.5)(3,0){4}{\circle{0.3}}
\multiput(20.5,8.5)(3,0){4}{\circle{0.3}}
\multiput(20.5,11.5)(3,0){4}{\circle{0.3}}
\multiput(36,8.5)(3,0){1}{\circle{0.3}}
\put(1,4){\line(1,0){12}} \put(1,7){\line(1,0){12}}
\put(1,10){\line(1,0){12}}\put(1,13){\line(1,0){12}}
\put(1,4){\line(0,1){9}} \put(4,4){\line(0,1){9}}
\put(7,4){\line(0,1){9}} \put(10,4){\line(0,1){9}}
\put(13,4){\line(0,1){9}}
\put(19,7){\line(1,0){12}} \put(19,4){\line(0,1){3}}
\put(19,10){\line(0,1){3}} \put(19,10){\line(1,0){3}}
\put(22,4){\line(0,1){9}} \put(25,4){\line(0,1){9}}
\put(28,4){\line(0,1){3}} \put(28,10){\line(1,0){3}}
\put(28,10){\line(0,1){3}} \put(25,10){\line(1,0){3}}
\put(28,13){\line(1,0){3}} \put(28,4){\line(1,0){3}}
\bezier{30}(20.5,8.5)(16,9.5)(18,13)
\bezier{50}(18,13)(21,18)(31,15.5)
\bezier{40}(31,15.5)(34,15)(36,8.5)
\bezier{13}(23.5,8.5)(23.5,10.2)(23.5,11.5)
\bezier{13}(26.5,8.5)(28,8.5)(29.5,8.5)
\bezier{25}(29.5,8.5)(32,8.5)(36,8.5)
\bezier{50}(20.5,5.5)(21,1.5)(31,2)
\bezier{43}(23.5,5.5)(24,2)(31,2.5)
\bezier{36}(26.5,5.5)(27,2.5)(31,3)
\bezier{28}(29.5,5.5)(32,5.5)(36,8.5)
\bezier{50}(20.5,11.5)(21,17)(31,15)
\bezier{43}(23.5,11.5)(24,16)(31,14.5)
\bezier{36}(26.5,11.5)(27,15)(31,14)
\bezier{28}(29.5,11.5)(32,11.5)(36,8.5)
\bezier{35}(31,1.98)(34.8,3.2)(36,8.5)
\bezier{34}(31,2.5)(34.4,4)(36,8.5)
\bezier{33}(31,3)(34,4.3)(36,8.5)
\bezier{35}(31,15)(34,14.4)(36,8.5)
\bezier{34}(31,14.5)(34,14)(36,8.5)
\bezier{33}(31,14)(33.5,13.5)(36,8.5)
\put(2.3,11.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(5.3,11.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(8.3,11.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(11.3,11.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(2.3,8.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(11.3,8.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(2.3,5.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(5.3,5.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(8.3,5.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(11.3,5.3){\makebox(1,0.5)[l]{\footnotesize $1$}}
\put(5.3,8.3){\makebox(1,0.5)[l]{\footnotesize $2$}}
\put(8.3,8.3){\makebox(1,0.5)[l]{\footnotesize $2$}}
\put(0,13.5){\makebox(1,0.5)[l]{\footnotesize $(1,1)$}}
\put(12.2,13.5){\makebox(1,0.5)[l]{\footnotesize $(1,5)$}}
\put(0,3.1){\makebox(1,0.5)[l]{\footnotesize $(4,1)$}}
\put(12.2,3.1){\makebox(1,0.5)[l]{\footnotesize $(4,5)$}}
\put(36.6,8.3){\makebox(1,0.5)[l]{\footnotesize $O$}}
\put(2,0.8){\makebox(1,0.5)[l]{\small (a) Grid $P_4\times P_5$}}
\put(15,0.8){\makebox(1,0.5)[l]{\small (b) Spanning trees of
$P_4\times P_5$ and its dual}} \put(7,-1){\makebox(1,0.5)[l]{\small
Figure 3. Grid $P_4\times P_5$ and spanning trees.}}
\end{picture}
\end{center}
Hruska \cite{Hruska08} proved the following formula for the
tree-congestion ($m\leq n$):
$$c_T(P_m\times P_n)=\begin{cases}
m, & \mbox{if } m=n \mbox{ or } m \mbox{ odd}\\
m+1, & \mbox{otherwise}.
\end{cases}$$
In the following we derive a similar formula for the tree-stretch:
{\bf Theorem 4.1.} \ For the rectangular grids $P_m\times P_n$ with
$2\leq m\leq n$, we have
$$\sigma_T(P_m\times P_n)=2\left\lfloor\frac{m}{2}\right\rfloor+1.$$
{\bf Proof.} \ Let $G=P_m\times P_n$ ($2\leq m\leq n$). We first
show that
$$\lambda_{\max}(G)=\left\lfloor\frac{m}{2}\right\rfloor.$$
By induction on $m$. When $m=2,3$, all faces have level $1$, so
$\lambda_{\max}(G)=1$ and the assertion holds. Assume that $m\geq 4$
and the assertion holds for smaller $m$. We delete the boundary of
the outer face of $G$ (all vertices and edges on this
boundary are deleted). The remaining graph is then $G'=P_{m-2}\times
P_{n-2}$. In this transformation, all faces with level $1$ are
removed. Therefore $\lambda_{\max}(G)=\lambda_{\max}(G')+1$. By
induction hypothesis, $\lambda_{\max}(G')=\lfloor (m-2)/2\rfloor$.
Hence
$$\lambda_{\max}(G)=\left\lfloor\frac{m-2}{2}\right\rfloor+1=
\left\lfloor\frac{m}{2}\right\rfloor.$$ For example, the levels of
$P_4\times P_5$ are shown in Figure 3(a) and $\lambda_{\max}=\lfloor
m/2\rfloor=2$.
We next show the lower bound
\begin{equation}
\sigma_T(G)\geq 2\lambda_{\max}+1.
\end{equation}
In fact, let $T$ be any given spanning tree of $G$. Then the cotree
$\bar T$ determines a spanning tree $\bar T^*$ in $G^*$. Suppose
that $f_0$ is a face with the maximum level $\lambda_{\max}$. For
brevity, we still denote its vertex in $G^*$ by $f_0$ and write
$\lambda=\lambda_{\max}$. Then the distance between $f_0$ and $O$ in
$\bar T^*$ is at least $\lambda$. Let $P^*$ be the path from $f_0$
to $O$ in $\bar T^*$ with the last edge $e^*_0$ incident with $O$.
The tree-edge $e^*_0$ in the spanning tree $\bar T^*$ determines a
fundamental edge-cut $C^*=\partial(X_{e_0})$, where $\bar T^*-e^*_0$
has two components and $X_{e_0}$ is the vertex set of the component
containing $P^*$. Then this fundamental edge-cut $C^*$ with respect
to $\bar T^*$ in $G^*$ corresponds to a fundamental cycle $C$ with
respect to $T$ in $G$. So, this fundamental cycle $C$ is determined
by the cotree edge $e_0$ on the boundary of the outer face that
corresponds to the edge $e^*_0$ in $P^*$. Note that all faces in
$P^*$ (with labels $1,2,\ldots,\lambda$) are contained in the region
surrounded by $C$. Without loss of generality, assume that $e_0$ is
on the row $R_1$. We draw $\lambda$ horizontal straight lines
passing through the centers of square faces of $P^*$. Then each of
these straight lines intersects $C$ at two vertical edges. Besides,
$C$ must have at least two more horizontal edges. Hence $C$ has
length at least $2\lambda +2$. Consequently, for any spanning tree
$T$, we find a fundamental cycle $C$ with length at least $2\lambda
+2$. By the arbitrariness of $T$, the lower bound (3) is proved.
Conversely, we can construct a spanning tree $T^*$ by taking all
columns together with the middle row $R_{\lceil m/2\rceil}$. Then the
maximal fundamental cycles have length $2\lfloor m/2\rfloor+2$, so the
stretch of $T^*$ is $2\lfloor m/2\rfloor+1$ and $T^*$ attains the
lower bound (3). This completes the proof. $\Box$
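Theorem 4.1 can be sanity-checked by exhaustive search on small grids. The following sketch (our own illustration, not part of the original argument; the function names are our own) enumerates all spanning trees of $P_m\times P_n$ and minimizes the maximum tree-distance between adjacent vertices:

```python
from itertools import combinations
from collections import deque

def grid_graph(m, n):
    """Rectangular grid P_m x P_n on vertices (i, j), 1 <= i <= m, 1 <= j <= n."""
    V = [(i, j) for i in range(1, m + 1) for j in range(1, n + 1)]
    E = [(u, v) for u in V for v in V
         if u < v and abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1]
    return V, E

def tree_stretch(V, E):
    """sigma_T(G): minimum, over all spanning trees T, of the maximum
    T-distance between the endpoints of an edge of G (brute force)."""
    best = None
    for T in combinations(E, len(V) - 1):
        adj = {v: [] for v in V}
        for u, v in T:
            adj[u].append(v)
            adj[v].append(u)
        dist = {}
        for s in V:                      # BFS from every vertex
            d = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        q.append(w)
            dist[s] = d
        if len(dist[V[0]]) < len(V):     # edge subset is not a spanning tree
            continue
        best = min(best or 10**9, max(dist[u][v] for u, v in E))
    return best

print(tree_stretch(*grid_graph(3, 3)))   # -> 3 = 2*floor(3/2)+1
```

This brute force is only feasible for very small $m,n$, but it confirms the formula of Theorem 4.1 on those instances.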
\subsection{Triangular grids}
\hspace*{0.5cm} We next consider the triangular grids $T_n$, which
are defined as follows. The vertex set can be represented as
$\{(x,y)\in {\mathbf Z}^2: x+y\leq n,x,y\geq 0\}$ on the plane, and
two vertices $(x,y)$ and $(x',y')$ are joined by an edge if
$|x-x'|+|y-y'|=1$ or $|x-x'|+|y-y'|=2$ and $x+y=x'+y'$ (refer to
\cite{Lin14}). For example, $T_4$ is shown in Figure 4, and $T_1$ (a
triangle), $T_2$ and $T_3$ are shown in Figure 5. In this plane
embedding of $T_n$, the straight-lines $\{(x,y)\in {\mathbf R}^2:
y=k\}$ ($0\leq k\leq n-1$) are called {\it horizontal lines} and the
edges on them are called {\it horizontal edges}. Symmetrically, the
straight-lines $\{(x,y)\in {\mathbf R}^2: x=k\}$ ($0\leq k\leq n-1$)
are called {\it vertical lines} and the edges on them are called
{\it vertical edges}. In addition, there are slant edges.
\begin{center}
\setlength{\unitlength}{0.45cm}
\begin{picture}(26,12)
\multiput(1,3)(2,0){5}{\circle*{0.3}}
\multiput(1,5)(2,0){4}{\circle*{0.3}}
\multiput(1,7)(2,0){3}{\circle*{0.3}}
\multiput(1,9)(2,0){2}{\circle*{0.3}}
\multiput(1,11)(2,0){1}{\circle*{0.3}}
\put(1,3){\line(1,0){8}} \put(1,5){\line(1,0){6}}
\put(1,7){\line(1,0){4}} \put(1,9){\line(1,0){2}}
\put(1,3){\line(0,1){8}} \put(3,3){\line(0,1){6}}
\put(5,3){\line(0,1){4}} \put(7,3){\line(0,1){2}}
\put(1,5){\line(1,-1){2}} \put(1,7){\line(1,-1){4}}
\put(1,9){\line(1,-1){6}} \put(1,11){\line(1,-1){8}}
\put(1.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(3.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(5.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(7.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(2.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(4.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(6.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(1.4,5.5){\makebox(1,0.5)[l]{\small $1$}}
\put(5.4,5.5){\makebox(1,0.5)[l]{\small $1$}}
\put(3.4,5.5){\makebox(1,0.5)[l]{\small $3$}}
\put(2.1,6.2){\makebox(1,0.5)[l]{\small $2$}}
\put(4.1,6.2){\makebox(1,0.5)[l]{\small $2$}}
\put(1.4,7.5){\makebox(1,0.5)[l]{\small $1$}}
\put(3.4,7.5){\makebox(1,0.5)[l]{\small $1$}}
\put(2.1,8.2){\makebox(1,0.5)[l]{\small $2$}}
\put(1.4,9.5){\makebox(1,0.5)[l]{\small $1$}}
\multiput(14,3)(2,0){5}{\circle*{0.3}}
\multiput(14,5)(2,0){4}{\circle*{0.3}}
\multiput(14,7)(2,0){3}{\circle*{0.3}}
\multiput(14,9)(2,0){2}{\circle*{0.3}}
\multiput(14,11)(2,0){1}{\circle*{0.3}}
\multiput(14.8,3.7)(2,0){4}{\circle{0.3}}
\multiput(15.4,4.4)(2,0){3}{\circle{0.3}}
\multiput(14.8,5.7)(2,0){3}{\circle{0.3}}
\multiput(15.4,6.4)(2,0){2}{\circle{0.3}}
\multiput(14.8,7.7)(2,0){2}{\circle{0.3}}
\multiput(15.4,8.4)(2,0){1}{\circle{0.3}}
\multiput(14.8,9.7)(2,0){1}{\circle{0.3}}
\multiput(21,10)(2,0){1}{\circle{0.3}}
\put(14,5){\line(1,0){6}} \put(14,7){\line(1,0){4}}
\put(14,9){\line(1,0){2}} \put(20,3){\line(1,0){2}}
\put(14,3){\line(0,1){2}} \put(14,9){\line(0,1){2}}
\put(16,3){\line(0,1){6}} \put(18,3){\line(0,1){2}}
\put(20,3){\line(0,1){2}}
\bezier{5}(14.9,3.8)(15,4)(15.3,4.3)
\bezier{5}(16.9,3.8)(17,4)(17.3,4.3)
\bezier{5}(18.9,3.8)(19,4)(19.3,4.3)
\bezier{5}(14.9,5.8)(15,6)(15.3,6.3)
\bezier{5}(16.9,5.8)(17,6)(17.3,6.3)
\bezier{8}(17.5,6.4)(18,6)(18.7,5.7)
\bezier{5}(14.9,7.8)(15,8)(15.3,8.3)
\bezier{40}(14.8,3.5)(16,1.5)(22,2)
\bezier{35}(16.8,3.5)(18,1.8)(22,2.3)
\bezier{30}(18.8,3.5)(19.6,2.1)(22,2.6)
\bezier{40}(14.7,5.7)(10.5,8.5)(13,11)
\bezier{30}(14.7,7.7)(11.8,9.5)(13.5,11)
\bezier{45}(22,2)(27.1,3.5)(21,10)
\bezier{40}(22,2.3)(26.1,4)(21,10)
\bezier{35}(22,2.6)(25,4.5)(21,10)
\bezier{45}(13,11)(15.5,13.3)(21,10)
\bezier{40}(13.5,11)(16,13)(21,10)
\bezier{40}(14.8,9.7)(17,9.8)(21,10)
\bezier{30}(16.8,7.7)(19,9)(21,10)
\bezier{30}(18.8,5.7)(20,7.5)(21,10)
\bezier{40}(20.8,3.7)(20.9,6.7)(21,10)
\put(1.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(3.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(5.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(7.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(2.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(4.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(6.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(1.4,5.5){\makebox(1,0.5)[l]{\small $1$}}
\put(5.4,5.5){\makebox(1,0.5)[l]{\small $1$}}
\put(3.4,5.5){\makebox(1,0.5)[l]{\small $3$}}
\put(2.1,6.2){\makebox(1,0.5)[l]{\small $2$}}
\put(4.1,6.2){\makebox(1,0.5)[l]{\small $2$}}
\put(1.4,7.5){\makebox(1,0.5)[l]{\small $1$}}
\put(3.4,7.5){\makebox(1,0.5)[l]{\small $1$}}
\put(2.1,8.2){\makebox(1,0.5)[l]{\small $2$}}
\put(1.4,9.5){\makebox(1,0.5)[l]{\small $1$}}
\put(21.5,9.8){\makebox(1,0.5)[l]{\small $O$}}
\put(1,0.8){\makebox(1,0.5)[l]{\small (a) Triangular grid $T_4$}}
\put(12,0.8){\makebox(1,0.5)[l]{\small (b) Spanning trees of $T_4$
and its dual}} \put(4,-1){\makebox(1,0.5)[l]{\small Figure 4.
Triangular grid $T_4$ and spanning trees.}}
\end{picture}
\end{center}
Ostrovskii \cite{Ostrov10} developed an approach, called {\it
center-tail system}, to deal with the spanning tree congestion
problem for planar graphs, and obtained the result for triangular
grids as follows:
$$c_T(T_n)=\begin{cases}
4k, & \mbox{if } n=3k \\
4k, & \mbox{if } n=3k+1 \\
4k+2, & \mbox{if } n=3k+2
\end{cases}$$
(note that the $T_n$ of \cite{Ostrov10} is our $T_{n-1}$). We obtain
the corresponding result for the tree-stretch as follows.
{\bf Theorem 4.2.} \ For the triangular grids $T_n$, we have
$$\sigma_T(T_n)=\left\lceil\frac{2n}{3}\right\rceil+1.$$
{\bf Proof.} \ For the triangular grids $T_n$, we first show that
$$\lambda_{\max}(T_n)=\left\lceil\frac{2n}{3}\right\rceil.$$
We use induction on $n$. When $1\leq n\leq 3$, the levels of faces
for $T_1,T_2,T_3$ are shown in Figure 5, in which
$\lambda_{\max}(T_1)=1,\lambda_{\max}(T_2)=\lambda_{\max}(T_3)=2$.
Hence the assertion holds for $1\leq n\leq 3$.
\begin{center}
\setlength{\unitlength}{0.45cm}
\begin{picture}(21,9)
\multiput(1,3)(3,0){2}{\circle*{0.3}}
\multiput(1,6)(2,0){1}{\circle*{0.3}}
\multiput(7,3)(2,0){3}{\circle*{0.3}}
\multiput(7,5)(2,0){2}{\circle*{0.3}}
\multiput(7,7)(2,0){1}{\circle*{0.3}}
\multiput(15,3)(2,0){4}{\circle*{0.3}}
\multiput(15,5)(2,0){3}{\circle*{0.3}}
\multiput(15,7)(2,0){2}{\circle*{0.3}}
\multiput(15,9)(2,0){1}{\circle*{0.3}}
\put(1,3){\line(1,0){3}} \put(1,3){\line(0,1){3}}
\put(1,6){\line(1,-1){3}}
\put(7,3){\line(1,0){4}} \put(7,5){\line(1,0){2}}
\put(7,3){\line(0,1){4}} \put(9,3){\line(0,1){2}}
\put(7,5){\line(1,-1){2}} \put(7,7){\line(1,-1){4}}
\put(15,3){\line(1,0){6}} \put(15,5){\line(1,0){4}}
\put(15,7){\line(1,0){2}} \put(15,3){\line(0,1){6}}
\put(17,3){\line(0,1){4}} \put(19,3){\line(0,1){2}}
\put(15,5){\line(1,-1){2}} \put(15,7){\line(1,-1){4}}
\put(15,9){\line(1,-1){6}}
\put(1.7,3.8){\makebox(1,0.5)[l]{\small $1$}}
\put(7.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(9.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(7.4,5.5){\makebox(1,0.5)[l]{\small $1$}}
\put(8.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(15.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(17.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(19.4,3.5){\makebox(1,0.5)[l]{\small $1$}}
\put(16.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(18.1,4.2){\makebox(1,0.5)[l]{\small $2$}}
\put(15.4,5.5){\makebox(1,0.5)[l]{\small $1$}}
\put(17.4,5.5){\makebox(1,0.5)[l]{\small $1$}}
\put(16.1,6.2){\makebox(1,0.5)[l]{\small $2$}}
\put(15.4,7.5){\makebox(1,0.5)[l]{\small $1$}}
\put(1,1){\makebox(1,0.5)[l]{\small (a) $T_1$}}
\put(8,1){\makebox(1,0.5)[l]{\small (b) $T_2$}}
\put(17,1){\makebox(1,0.5)[l]{\small (c) $T_3$}}
\put(3,-1){\makebox(1,0.5)[l]{\small Figure 5. The levels of faces
for $T_1,T_2,T_3$.}}
\end{picture}
\end{center}
Assume that $n\geq 4$ and the assertion holds for smaller $n$. We
delete the boundary of the outer face of $T_n$ (all vertices and
edges on this boundary are deleted). The resulting graph is then
$T_{n-3}$. In this transformation, all faces with levels $1$ and $2$
are removed. Therefore $\lambda_{\max}(T_n)=\lambda_{\max}(T_{n-3})
+2$. By induction hypothesis, we have
$$\lambda_{\max}(T_n)=\left\lceil\frac{2(n-3)}{3}\right\rceil+2=
\left\lceil\frac{2n}{3}\right\rceil.$$ For example,
$\lambda_{\max}(T_4)=\lambda_{\max}(T_1)+2=3$, as shown in Figure
4(a).
We next show the lower bound
\begin{equation}
\sigma_T(G)\geq \lambda_{\max}+1.
\end{equation}
In fact, let $T$ be any given spanning tree of $G$. Then the cotree
$\bar T$ determines a spanning tree $\bar T^*$ in $G^*$. Similar to
the previous case, suppose that $f_0$ is a face with the maximum
level $\lambda=\lambda_{\max}$. Then the distance between $f_0$ and
$O$ in $\bar T^*$ is at least $\lambda$. Let $P^*$ be the path from
$f_0$ to $O$ in $\bar T^*$ with the last edge $e^*_0$ incident with
$O$. The tree-edge $e^*_0$ in the spanning tree $\bar T^*$
determines a fundamental edge-cut $C^*$, which corresponds to a
fundamental cycle $C$ with respect to $T$ in $G$. This fundamental
cycle $C$ is determined by the cotree edge $e_0$ on the boundary of
the outer face that corresponds to the edge $e^*_0$ in $P^*$.
Without loss of generality, assume that $e_0$ is on the horizontal
line $R_0=\{(x,y)\in {\mathbf R}^2: y=0\}$. Let $P^0$ be the
shortest path from $f_0$ to $O$ passing through $R_0$ in $G^*$.
Suppose that $e'_0$ is the edge in $R_0$ that corresponds to the last
edge of $P^0$. Denote by $\partial(P^0)$ the boundary of the region
composed of the $\lambda$ faces of $P^0$. Then $\partial(P^0)$ is a
$(1\times \tfrac 12\lambda)$ rectangle (if $\lambda$ is even) or a
$(1\times \tfrac 12(\lambda-1))$ rectangle plus a triangle of $f_0$
at the top (if $\lambda$ is odd). It can be seen that the triangle
face at the top (or at the bottom) of $\partial(P^0)$ has two
boundary edges, and each of the other triangle faces has one
boundary edge. Hence the length of $\partial(P^0)$ is $\lambda+2$.
We draw $\lceil \tfrac12 \lambda\rceil$ horizontal straight lines
passing through the midpoints of the boundary edges in
$\partial(P^0)$. Then each of these straight lines intersects the
cycle $C$ twice. When $\lambda$ is even, the $\tfrac 12\lambda$
straight lines intersect the cycle $C$ at $\lambda$ edges. Besides,
$C$ must have at least two more horizontal edges (one is $e_0$ and
one in $f_0$). Hence the length of $C$ is at least $\lambda+2$. When
$\lambda$ is odd, the $\tfrac 12(\lambda+1)$ straight lines
intersect the cycle $C$ at $\lambda+1$ edges. And $C$ has one more
horizontal edge $e_0$. Thus the length of $C$ is at least
$\lambda+2$. Therefore, for any spanning tree $T$, we find a
fundamental cycle $C$ with length at least $\lambda+2$. By the
arbitrariness of $T$, the above lower bound (4) is proved.
Conversely, we can construct an optimal spanning tree $T$ as follows:\\
{\indent} (1) Take a face $f_0$ with the maximum level
$\lambda_{\max}$. \\
{\indent} (2) Take the horizontal line $H$ containing the horizontal
edge of $f_0$, and take the vertical line $V$ containing the
vertical edge of $f_0$.\\
{\indent} (3) In the part below $H$, take every vertical line
intersecting $H$; in the part above $H$, take every horizontal line
intersecting $V$. \\
{\indent} (4) In the remaining part of the lower right corner, take
all horizontal lines; in the remaining part of the upper left
corner, take all vertical lines. \\
An example for $T_4$ is shown in
Figure 4(b). It is easy to check that this spanning tree attains the
lower bound (4). This completes the proof. $\Box$
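As with the rectangular case, Theorem 4.2 can be checked by brute force on the smallest triangular grids. In the sketch below (our own illustration; the function names are our own), `triangular_grid` builds the edge set exactly as defined above, and the exhaustive search reproduces $\sigma_T(T_n)=\lceil 2n/3\rceil+1$:

```python
from itertools import combinations
from collections import deque

def triangular_grid(n):
    """T_n: vertices (x, y) with x, y >= 0 and x + y <= n; two vertices are
    adjacent if |dx|+|dy| = 1, or |dx|+|dy| = 2 with x + y constant (slants)."""
    V = [(x, y) for x in range(n + 1) for y in range(n + 1 - x)]
    E = []
    for i, u in enumerate(V):
        for v in V[i + 1:]:
            d = abs(u[0] - v[0]) + abs(u[1] - v[1])
            if d == 1 or (d == 2 and u[0] + u[1] == v[0] + v[1]):
                E.append((u, v))
    return V, E

def tree_stretch(V, E):
    """Brute-force sigma_T: minimise, over spanning trees, the maximum
    tree distance between the endpoints of a graph edge."""
    best = None
    for T in combinations(E, len(V) - 1):
        adj = {v: [] for v in V}
        for u, v in T:
            adj[u].append(v)
            adj[v].append(u)
        dist = {}
        for s in V:
            d = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        q.append(w)
            dist[s] = d
        if len(dist[V[0]]) < len(V):
            continue  # not a spanning tree
        best = min(best or 10**9, max(dist[u][v] for u, v in E))
    return best

print(tree_stretch(*triangular_grid(2)))  # -> 3 = ceil(2*2/3)+1
```

Only $n\leq 2$ or so is practical this way ($T_3$ already has $\binom{18}{9}$ edge subsets), but the small cases agree with the theorem.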
\subsection{Triangulated-rectangular grids}
\hspace*{0.5cm} Finally, we consider the triangulated-rectangular
grids by the same method. A triangulated-rectangular grid $T_{m,n}$
is defined as follows: the vertex set $V(T_{m,n})$ is $\{(x,y)\in
{\mathbf Z}^2: 0\leq y\leq m-1, 0\leq x\leq n-1\}$, and two vertices
$(x,y)$ and $(x',y')$ are joined by an edge if $|x-x'|+|y-y'|=1$ or
$|x-x'|+|y-y'|=2$ and $x+y=x'+y'$, as shown in Figure 6. Clearly,
$T_{m,n}$ can be obtained from the rectangular grids $P_m\times P_n$
by adding slant edges.
\begin{center}
\setlength{\unitlength}{0.45cm}
\begin{picture}(12,11)
\multiput(1,3)(2,0){6}{\circle*{0.3}}
\multiput(1,5)(2,0){6}{\circle*{0.3}}
\multiput(1,7)(2,0){6}{\circle*{0.3}}
\multiput(1,9)(2,0){6}{\circle*{0.3}}
\multiput(1,11)(2,0){6}{\circle*{0.3}}
\put(1,3){\line(1,0){10}} \put(1,5){\line(1,0){10}}
\put(1,7){\line(1,0){10}} \put(1,9){\line(1,0){10}}
\put(1,11){\line(1,0){10}} \put(1,3){\line(0,1){8}}
\put(3,3){\line(0,1){8}} \put(5,3){\line(0,1){8}}
\put(7,3){\line(0,1){8}} \put(9,3){\line(0,1){8}}
\put(11,3){\line(0,1){8}} \put(1,5){\line(1,-1){2}}
\put(1,7){\line(1,-1){4}} \put(1,9){\line(1,-1){6}}
\put(1,11){\line(1,-1){8}} \put(3,11){\line(1,-1){8}}
\put(5,11){\line(1,-1){6}} \put(7,11){\line(1,-1){4}}
\put(9,11){\line(1,-1){2}}
\put(1.4,3.4){\makebox(1,0.5)[l]{\small $1$}}
\put(3.4,3.4){\makebox(1,0.5)[l]{\small $1$}}
\put(5.4,3.4){\makebox(1,0.5)[l]{\small $1$}}
\put(7.4,3.4){\makebox(1,0.5)[l]{\small $1$}}
\put(9.4,3.4){\makebox(1,0.5)[l]{\small $1$}}
\put(1.4,5.4){\makebox(1,0.5)[l]{\small $1$}}
\put(1.4,7.4){\makebox(1,0.5)[l]{\small $1$}}
\put(1.4,9.4){\makebox(1,0.5)[l]{\small $1$}}
\put(2.1,10.1){\makebox(1,0.5)[l]{\small $1$}}
\put(4.1,10.1){\makebox(1,0.5)[l]{\small $1$}}
\put(6.1,10.1){\makebox(1,0.5)[l]{\small $1$}}
\put(8.1,10.1){\makebox(1,0.5)[l]{\small $1$}}
\put(10.1,10.1){\makebox(1,0.5)[l]{\small $1$}}
\put(10.1,8.1){\makebox(1,0.5)[l]{\small $1$}}
\put(10.1,6.1){\makebox(1,0.5)[l]{\small $1$}}
\put(10.1,4.1){\makebox(1,0.5)[l]{\small $1$}}
\put(2.1,4.1){\makebox(1,0.5)[l]{\small $2$}}
\put(4.1,4.1){\makebox(1,0.5)[l]{\small $2$}}
\put(6.1,4.1){\makebox(1,0.5)[l]{\small $2$}}
\put(8.1,4.1){\makebox(1,0.5)[l]{\small $2$}}
\put(2.1,6.1){\makebox(1,0.5)[l]{\small $2$}}
\put(2.1,8.1){\makebox(1,0.5)[l]{\small $2$}}
\put(3.4,9.4){\makebox(1,0.5)[l]{\small $2$}}
\put(5.4,9.4){\makebox(1,0.5)[l]{\small $2$}}
\put(7.4,9.4){\makebox(1,0.5)[l]{\small $2$}}
\put(9.4,9.4){\makebox(1,0.5)[l]{\small $2$}}
\put(9.4,7.4){\makebox(1,0.5)[l]{\small $2$}}
\put(9.4,5.4){\makebox(1,0.5)[l]{\small $2$}}
\put(3.4,5.4){\makebox(1,0.5)[l]{\small $3$}}
\put(5.4,5.4){\makebox(1,0.5)[l]{\small $3$}}
\put(7.4,5.4){\makebox(1,0.5)[l]{\small $3$}}
\put(3.4,7.4){\makebox(1,0.5)[l]{\small $3$}}
\put(4.1,8.1){\makebox(1,0.5)[l]{\small $3$}}
\put(6.1,8.1){\makebox(1,0.5)[l]{\small $3$}}
\put(8.1,8.1){\makebox(1,0.5)[l]{\small $3$}}
\put(8.1,6.1){\makebox(1,0.5)[l]{\small $3$}}
\put(4.1,6.1){\makebox(1,0.5)[l]{\small $4$}}
\put(6.1,6.1){\makebox(1,0.5)[l]{\small $4$}}
\put(5.4,7.4){\makebox(1,0.5)[l]{\small $4$}}
\put(7.4,7.4){\makebox(1,0.5)[l]{\small $4$}}
\put(-1,1){\makebox(1,0.5)[l]{\small Figure 6. Triangulated-rectangular grid $T_{5,6}$}}
\end{picture}
\end{center}
{\bf Theorem 4.3.} \ For the triangulated-rectangular grids
$T_{m,n}$ with $2\leq m\leq n$, we have
$$\sigma_T(T_{m,n})=m.$$
{\bf Proof.} \ Let $G=T_{m,n}$ ($2\leq m\leq n$). We first claim
that
$$\lambda_{\max}(G)=m-1.$$
We proceed by induction on $m$. When $m=2$, all faces have level $1$, so
$\lambda_{\max}(G)=1$; when $m=3$, it is also evident that
$\lambda_{\max}(G)=2$. Assume that $m\geq 4$ and the claim holds for
smaller $m$. We delete the boundary of the outer face of $G$, so
that the remaining graph is $G'=T_{m-2,n-2}$. In this
transformation, all faces with levels $1$ and $2$ are removed.
Therefore $\lambda_{\max}(G)=\lambda_{\max}(G')+2$. By induction
hypothesis, $\lambda_{\max}(G')=m-2-1=m-3$. Hence $\lambda_{\max}(G)
=m-3+2=m-1$ and the claim follows.
Moreover, by the same method of the previous case we obtain the
lower bound
\begin{equation}
\sigma_T(G)\geq \lambda_{\max}+1.
\end{equation}
Conversely, we can construct an optimal tree $T$ as follows: if $m$ is
odd, take all columns together with the horizontal edges of the middle
row $R_{(m+1)/2}$; if $m$ is even, take all columns together with the
slant edges from $(m/2,j)$ to $(m/2+1,j+1)$ for $1\leq j\leq n-1$ (see
Figure 7). It is easy to check that this spanning tree attains the
lower bound (5). The proof is complete. $\Box$
\begin{center}
\setlength{\unitlength}{0.45cm}
\begin{picture}(20,10)
\multiput(1,4)(2,0){4}{\circle*{0.3}}
\multiput(1,6)(2,0){4}{\circle*{0.3}}
\multiput(1,8)(2,0){4}{\circle*{0.3}}
\put(1,4){\line(0,1){4}} \put(3,4){\line(0,1){4}}
\put(5,4){\line(0,1){4}} \put(7,4){\line(0,1){4}}
\put(1,6){\line(1,0){6}}
\bezier{30}(1,4)(4,4)(7,4) \bezier{30}(1,8)(4,8)(7,8)
\bezier{15}(1,6)(2,5)(3,4) \bezier{30}(1,8)(3,6)(5,4)
\bezier{30}(3,8)(5,6)(7,4) \bezier{15}(5,8)(6,7)(7,6)
\multiput(12,4)(2,0){5}{\circle*{0.3}}
\multiput(12,6)(2,0){5}{\circle*{0.3}}
\multiput(12,8)(2,0){5}{\circle*{0.3}}
\multiput(12,10)(2,0){5}{\circle*{0.3}}
\put(12,4){\line(0,1){6}} \put(14,4){\line(0,1){6}}
\put(16,4){\line(0,1){6}} \put(18,4){\line(0,1){6}}
\put(20,4){\line(0,1){6}} \put(12,8){\line(1,-1){2}}
\put(14,8){\line(1,-1){2}} \put(16,8){\line(1,-1){2}}
\put(18,8){\line(1,-1){2}}
\bezier{30}(12,4)(16,4)(20,4) \bezier{30}(12,6)(16,6)(20,6)
\bezier{30}(12,8)(16,8)(20,8) \bezier{30}(12,10)(16,10)(20,10)
\bezier{10}(12,6)(13,5)(14,4) \bezier{10}(14,6)(15,5)(16,4)
\bezier{10}(12,10)(13,9)(14,8) \bezier{10}(16,6)(17,5)(18,4)
\bezier{10}(14,10)(15,9)(16,8) \bezier{10}(18,6)(19,5)(20,4)
\bezier{10}(16,10)(17,9)(18,8) \bezier{10}(18,10)(19,9)(20,8)
\put(1.5,2.2){\makebox(1,0.5)[l]{\small (a) $m$ is odd}}
\put(13,2.2){\makebox(1,0.5)[l]{\small (b) $m$ is even}}
\put(4,0.5){\makebox(1,0.5)[l]{\small Figure 7. Optimal trees of
$T_{m,n}$.}}
\end{picture}
\end{center}
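Theorem 4.3 can likewise be checked by brute force on small instances. In the sketch below (our own illustration; the function names are our own), `tri_rect_grid` adds one slant edge per unit square, following the definition of $T_{m,n}$:

```python
from itertools import combinations
from collections import deque

def tri_rect_grid(m, n):
    """T_{m,n}: vertices (x, y), 0 <= x <= n-1, 0 <= y <= m-1; grid edges
    plus one slant per unit square, joining vertices with |dx|+|dy| = 2
    and x + y constant."""
    V = [(x, y) for x in range(n) for y in range(m)]
    E = []
    for i, u in enumerate(V):
        for v in V[i + 1:]:
            d = abs(u[0] - v[0]) + abs(u[1] - v[1])
            if d == 1 or (d == 2 and u[0] + u[1] == v[0] + v[1]):
                E.append((u, v))
    return V, E

def tree_stretch(V, E):
    """Exhaustive search over spanning trees for sigma_T."""
    best = None
    for T in combinations(E, len(V) - 1):
        adj = {v: [] for v in V}
        for u, v in T:
            adj[u].append(v)
            adj[v].append(u)
        dist = {}
        for s in V:
            d = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        q.append(w)
            dist[s] = d
        if len(dist[V[0]]) == len(V):   # spanning tree found
            best = min(best or 10**9, max(dist[u][v] for u, v in E))
    return best

print(tree_stretch(*tri_rect_grid(2, 3)))  # -> 2 (= m)
```

The slant edges allow every non-tree edge to close a very short cycle, which is why $\sigma_T(T_{m,n})=m$ is smaller than the rectangular-grid value $2\lfloor m/2\rfloor+1$ for large $m$.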
| {
"timestamp": "2017-12-12T02:08:49",
"yymm": "1712",
"arxiv_id": "1712.03497",
"language": "en",
"url": "https://arxiv.org/abs/1712.03497",
"abstract": "With applications in distribution systems and communication networks, the minimum stretch spanning tree problem is to find a spanning tree T of a graph G such that the maximum distance in T between two adjacent vertices is minimized. The problem has been proved to be NP-hard and fixed-parameter polynomial algorithms have been obtained for some special classes of graphs. In this paper, we concentrate on the optimality characterizations for typical classes of graphs. We determine the exact optimality representations for Petersen graph, the complete k-partite graphs, split graphs, generalized convex graphs, and several planar grids, including rectangular grids, triangular grids, and triangular-rectangular grids.",
"subjects": "Combinatorics (math.CO)",
"title": "The minimum stretch spanning tree problem for typical graphs"
} |
https://arxiv.org/abs/1406.7638 | Direct Density-Derivative Estimation and Its Application in KL-Divergence Approximation | Estimation of density derivatives is a versatile tool in statistical data analysis. A naive approach is to first estimate the density and then compute its derivative. However, such a two-step approach does not work well because a good density estimator does not necessarily mean a good density-derivative estimator. In this paper, we give a direct method to approximate the density derivative without estimating the density itself. Our proposed estimator allows analytic and computationally efficient approximation of multi-dimensional high-order density derivatives, with the ability that all hyper-parameters can be chosen objectively by cross-validation. We further show that the proposed density-derivative estimator is useful in improving the accuracy of non-parametric KL-divergence estimation via metric learning. The practical superiority of the proposed method is experimentally demonstrated in change detection and feature selection. | \section{Introduction}
%
Derivatives of probability density functions play key roles in various
areas of statistical data analysis. For example:
\begin{itemize}
\item \emph{Mean shift} clustering seeks modes of the data
densities~\cite{fukunaga1975estimation,cheng1995mean,comaniciu2002mean,sasaki2014clustering},
where the first-order density derivative is the key ingredient.
\item The optimal bandwidth of kernel density estimation
depends on the second-order density derivative~\cite{silverman1986density}.
\item The bias of nearest-neighbor Kullback-Leibler divergence
estimation is governed by the second-order density derivative~\cite{noh2014bias}.
\item More applications in statistical problems such as
regression, Fisher information estimation, parameter estimation,
and hypothesis testing are discussed in~\cite{singh1977applications}.
\end{itemize}
Given such a wide range of applications, accurately estimating the
density derivatives from data has been a challenging research topic in
statistics and machine learning.
A naive approach to density-derivative estimation from samples
$\{x_i\}_{i=1}^n$ following probability density $p(x)$ on $\R{}$ is to perform density
estimation and then compute its derivatives. For example, suppose that
kernel density estimation (KDE) is used for density estimation:
\begin{align*}
\widehat{p}(x)\propto
\sum_{i=1}^n K\left(\frac{x-x_i}{h}\right),
\end{align*}
where $K$ is a kernel function (such as the Gaussian kernel) and $h>0$
is the bandwidth. Then the first-order density derivative is estimated
as follows \cite{bhattacharya1967estimation, schuster1969estimation}:
\begin{align*}
\widehat{p}'(x)\propto
\sum_{i=1}^n K'\left(\frac{x-x_i}{h}\right).
\end{align*}
A cross-validation method for selecting the bandwidth $h$ was proposed
in \cite{hardle1990bandwidth}. However, since a good density estimator
is not always a good density-derivative estimator, this approach is not
necessarily reliable; this problem becomes more critical if higher-order
density derivatives are estimated:
\begin{align*}
\widehat{p}^{(j)}(x)\propto
\sum_{i=1}^n K^{(j)}\left(\frac{x-x_i}{h}\right).
\end{align*}
A more direct approach that performs kernel density estimation for the
density derivative itself was proposed in \cite{singh1977improvement}:
\begin{align*}
\widehat{p}^{(j)}(x)\propto
\sum_{i=1}^n K\left(\frac{x-x_i}{h}\right).
\end{align*}
However, this method suffers from the bandwidth selection problem,
because the optimal bandwidth depends on density derivatives of order
higher than the one being estimated \cite{singh1981exact}.
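The bandwidth mismatch between density and density-derivative estimation can be illustrated numerically. In the following sketch (our own illustration; the sample size and bandwidth grid are arbitrary choices), we compare the integrated squared errors of the plug-in Gaussian-KDE estimators of $p$ and $p'$ for standard normal data; the error for $p'$ is typically minimized at a larger bandwidth than the error for $p$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)

t = np.linspace(-5, 5, 2001)          # evaluation grid
dt = t[1] - t[0]
p_true = np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
dp_true = -t * p_true                  # p'(t) = -t p(t) for N(0,1)

def kde_and_derivative(h):
    """Gaussian KDE of p and its plug-in derivative on the grid t."""
    Z = (t[:, None] - x[None, :]) / h
    K = np.exp(-Z**2 / 2) / np.sqrt(2 * np.pi)
    p_hat = K.mean(axis=1) / h
    dp_hat = (-Z * K).mean(axis=1) / h**2   # since K'(z) = -z K(z)
    return p_hat, dp_hat

bandwidths = np.linspace(0.05, 1.2, 47)
ise_p, ise_dp = [], []
for h in bandwidths:
    p_hat, dp_hat = kde_and_derivative(h)
    ise_p.append(np.sum((p_hat - p_true) ** 2) * dt)
    ise_dp.append(np.sum((dp_hat - dp_true) ** 2) * dt)

h_density = bandwidths[np.argmin(ise_p)]
h_derivative = bandwidths[np.argmin(ise_dp)]
print(h_density, h_derivative)
```

The run illustrates the point made above: the bandwidth that is good for the density itself is generally too small for its derivative.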
In this paper, we propose a novel density-derivative estimator that
directly minimizes the mean integrated square error (MISE) to the true
density derivative.
The proposed method, which we call \emph{MISE for derivatives} (MISED),
possesses various useful properties:
\begin{itemize}
\item Density derivatives are directly estimated without going through
density estimation.
\item The solution can be computed analytically and efficiently.
\item All tuning parameters can be objectively optimized by cross-validation.
\item Multi-dimensional density derivatives can be directly estimated.
\item Higher-order density derivatives can be directly estimated.
\end{itemize}
Through experiments on change detection and feature selection,
we demonstrate the usefulness of the proposed MISED method.
\section{Direct Density-Derivative Estimation}
In this section, we describe our proposed MISED method.
\subsection{Problem Formulation}
Suppose that independent and identically
distributed samples $\mathcal{X}=\{\vector{x}_i\}_{i=1}^n$ from unknown
density $p(\vector{x})$ on $\R{d}$ are available. Our goal is to
estimate the $k$-th order (partial) derivative of $p(\vector{x})$,
\begin{align}
p_{k,\vector{j}}(\vector{x})=\frac{\partial^k}{\partial
x_1^{j_1}\partial x_2^{j_2}\dots\partial x_d^{j_d}} p(\vector{x}),
\label{defi:der}
\end{align}
where $j_1+j_2+\dots+j_d=k$ for $j_i\in\{0,1,\dots,k\}$ and
$\vector{j}=(j_1,j_2,\dots,j_d)$. When $k=1$ (or $k=2$),
$p_{k,\vector{j}}(\vector{x})$ corresponds to a single element in the
gradient vector (or the Hessian matrix) of $p(\vector{x})$.
%
\subsection{MISE for Density Derivatives}
%
Let $g_{k,\vector{j}}(\vector{x})$ be a model of
$p_{k,\vector{j}}(\vector{x})$ (its specific form will be introduced
later). We learn $g_{k,\vector{j}}(\vector{x})$ to minimize the MISE
to $p_{k,\vector{j}}(\vector{x})$:
\begin{align}
J_{\vector{j}}(g_{k,\vector{j}})&=\int
\left\{g_{k,\vector{j}}(\vector{x})-p_{k,\vector{j}}(\vector{x})\right\}^2
\mathrm{d}\vector{x} \nonumber \\ &=\int
\left\{g_{k,\vector{j}}(\vector{x})\right\}^2\mathrm{d}\vector{x} -2\int
g_{k,\vector{j}}(\vector{x})p_{k,\vector{j}}(\vector{x})\mathrm{d}\vector{x}
+\underbrace{\int
\left\{p_{k,\vector{j}}(\vector{x})\right\}^2\mathrm{d}\vector{x}}_{C},
\label{obj}
\end{align}
where $C$ is a constant that does not depend on
$g_{k,\vector{j}}(\vector{x})$ and thus can be safely ignored.
The first term in (\ref{obj}) is accessible since
$g_{k,\vector{j}}(\vector{x})$ is a model specified by the user. The
second term in (\ref{obj}) seems intractable at first glance, but
\emph{integration by parts} allows us to transform it as
\begin{align*}
\int
g_{k,\vector{j}}(\vector{x})p_{k,\vector{j}}(\vector{x})\mathrm{d}\vector{x}
&= \int g_{k,\vector{j}}(\vector{x}) \frac{\partial^k}{\partial
x_1^{j_1}\partial x_2^{j_2}\dots\partial x_d^{j_d}} p(\vector{x})
\mathrm{d}\vector{x}, \\ &= \int \left[g_{k,\vector{j}}(\vector{x})
\frac{\partial^{k-1}}{\partial x_1^{j_1-1}\partial
x_2^{j_2}\dots\partial x_d^{j_d}}
p(\vector{x})\right]_{x_1=-\infty}^{x_1=\infty}\mathrm{d}\vector{x}_{\setminus
x_1}\\ &\phantom{=}-\int \frac{\partial}{\partial
x_1}g_{k,\vector{j}}(\vector{x}) \frac{\partial^{k-1}}{\partial
x_1^{j_1-1}\partial x_2^{j_2}\dots\partial x_d^{j_d}}
p(\vector{x})\mathrm{d}\vector{x},
\end{align*}
where $\mathrm{d}\vector{x}_{\setminus x_1}$ denotes integration with
respect to all variables except $x_1$. The first term in the last equation vanishes under a
mild assumption on the tails of $g_{k,\vector{j}}(\vector{x})$ and
$\frac{\partial^{k-1}}{\partial x_1^{j_1-1}\partial
x_2^{j_2}\dots\partial x_d^{j_d}} p(\vector{x})$. By repeatedly
applying integration by parts $k$ times, we arrive at
\begin{align*}
J_{\vector{j}}(g_{k,\vector{j}})&=\int
\left\{g_{k,\vector{j}}(\vector{x})\right\}^2\mathrm{d}\vector{x} -2(-1)^k\int
\left\{\frac{\partial^k}{\partial x_1^{j_1}\partial
x_2^{j_2}\dots\partial x_d^{j_d}}
g_{k,\vector{j}}(\vector{x})\right\} p(\vector{x})\mathrm{d}\vector{x}+C.
\end{align*}
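As a numerical sanity check of this identity (our own illustration, not part of the paper), one can verify $\int g\,p^{(k)}\,\mathrm{d}x=(-1)^k\int g^{(k)}p\,\mathrm{d}x$ in one dimension, e.g.\ for $k=2$ with a Gaussian model $g$ and a Gaussian density $p$:

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
mu, s = 0.3, 0.8
p = np.exp(-(x - mu) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
p2 = p * ((x - mu) ** 2 / s**4 - 1.0 / s**2)   # p'' computed analytically
g = np.exp(-x**2 / 2)                          # Gaussian "model" g
g2 = (x**2 - 1.0) * g                          # its second derivative
lhs = np.sum(g * p2) * dx                      # ∫ g p'' dx
rhs = np.sum(g2 * p) * dx                      # (-1)^2 ∫ g'' p dx
print(lhs, rhs)                                # the two integrals agree
```

Both integrands decay rapidly, so the boundary terms vanish exactly as assumed in the derivation.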
Ignoring the constant $C$ and approximating the expectation by the
sample average gives
\begin{align}
\int \left\{g_{k,\vector{j}}(\vector{x})\right\}^2\mathrm{d}\vector{x}
-\frac{2(-1)^k}{n} \sum_{i=1}^n \frac{\partial^k}{\partial
x_1^{j_1}\partial x_2^{j_2}\dots\partial x_d^{j_d}}
g_{k,\vector{j}}(\vector{x}_i). \label{MISE-accessible}
\end{align}
\subsection{Analytic Solution for Gaussian Kernels}
As a density-derivative model $g_{k,\vector{j}}$, we use the Gaussian
kernel model\footnote{ If $n$ is too large, we may only use a subset
of data samples as kernel centers. }:
\begin{align*}
g_{k,\vector{j}}(\vector{x})&= \sum_{i=1}^n\theta_{\vector{j},i}
\underbrace{\exp\left(-\frac{\|\vector{x}-\vector{x}_i\|^2}{2\sigma^2}\right)}_{\psi_i(x)}
=\vector{\theta}_{\vector{j}}^{\top}\vector{\psi}(\vector{x}),
\end{align*}
for which the $k$-th derivative is given by
\begin{align*}
\frac{\partial^k}{\partial x_1^{j_1}\partial x_2^{j_2}\dots\partial
x_d^{j_d}} g_{k,\vector{j}}(\vector{x})
&=\sum_{i=1}^n\theta_{\vector{j},i}
\underbrace{\frac{\partial^k}{\partial x_1^{j_1}\partial
x_2^{j_2}\dots\partial x_d^{j_d}}
\exp\left(-\frac{\|\vector{x}-\vector{x}_i\|^2}{2\sigma^2}\right)}_{\varphi_{\vector{j},i}(\vector{x})}
=\vector{\theta}_{\vector{j}}^{\top}\vector{\varphi}_{\vector{j}}(\vector{x}).
\end{align*}
Substituting these formulas into the objective function
\eqref{MISE-accessible} and adding the $\ell_2$-regularizer, we
obtain a practical objective function:
\begin{align}
\widetilde{J}_{\vector{j}} (\vector{\theta_j})=
\vector{\theta}_{\vector{j}}^{\top}\mathbf{G}\vector{\theta_j}
-2(-1)^{k}\vector{\theta_j}^{\top}\vector{h_j}
+\lambda\vector{\theta_j}^{\top}\vector{\theta_j}, \label{sample}
\end{align}
where
\begin{align*}
[\mathbf{G}]_{ij}&=\int\psi_i(\vector{x})\psi_j(\vector{x})\mathrm{d}\vector{x}
=(\pi\sigma^2)^{d/2}
\exp\left(-\frac{\|\vector{x}_i-\vector{x}_j\|^2}{4\sigma^2}\right)
\quad\mbox{and}\quad \vector{h}_{\vector{j}}
=\frac{1}{n}\sum_{i=1}^n\vector{\varphi}_{\vector{j}}(\vector{x}_i).
\end{align*}
The minimizer of (\ref{sample}) is given analytically as
\begin{align}
\widehat{\vector{\theta}}_{\vector{j}}=
\arg\min_{\vector{\theta_j}}\widetilde{J}_{\vector{j}}(\vector{\theta_j})
=(-1)^{k}\left(\mathbf{G}+\lambda\I\right)^{-1}\vector{h}_{\vector{j}},
\end{align}
where $\I$ denotes the identity matrix. Finally, a density-derivative
estimator is obtained as
\begin{align*}
\widehat{g}_{k,\vector{j}}(\vector{x})&=
\widehat{\vector{\theta}}_{\vector{j}}^{\top}\vector{\psi}(\vector{x}).
\end{align*}
We call this method the \emph{mean integrated square error for
derivatives} (MISED) estimator, which can be regarded as an extension
of \emph{score matching} for density estimation
\cite{hyvarinen2005estimation}, \emph{least-squares density-difference}
for density-difference estimation
\cite{IEEE-PAMI:Kim+Scott:2010,NC:Sugiyama+etal:2013}, and
\emph{least-squares log-density gradients} for log-density gradient
estimation \cite{sasaki2014clustering} to higher-order derivatives.
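The analytic solution above is straightforward to implement. The following Python sketch (an illustration, not the authors' code) estimates the first derivative ($k=1$) of a one-dimensional standard normal density; the sample size, bandwidth $\sigma$, and regularization $\lambda$ are arbitrary illustrative choices, and the coefficients solve the regularized normal equations $(\mathbf{G}+\lambda \mathbf{I})\vector{\theta}=(-1)^k\vector{h}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, lam, k = 500, 0.5, 0.1, 1     # 1-D, first-order derivative (k = 1)
x = rng.standard_normal(n)              # samples from the standard normal

# Gram matrix [G]_il = (pi sigma^2)^{1/2} exp(-(x_i - x_l)^2 / (4 sigma^2))
G = np.sqrt(np.pi) * sigma * np.exp(-(x[:, None] - x[None, :]) ** 2 / (4 * sigma ** 2))

def phi(t):
    """varphi_i(t): first derivative of the Gaussian kernel centered at x_i."""
    d = t[:, None] - x[None, :]
    return -(d / sigma ** 2) * np.exp(-d ** 2 / (2 * sigma ** 2))

h = phi(x).mean(axis=0)                 # h = (1/n) sum_i varphi(x_i)
theta = (-1) ** k * np.linalg.solve(G + lam * np.eye(n), h)

def g_hat(t):
    """Estimated density derivative g_hat(t) = theta^T psi(t)."""
    d = t[:, None] - x[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2)) @ theta
```

The estimate should track the true derivative $p'(x)=-x\,\varphi(x)$ up to smoothing and shrinkage from the bandwidth and the regularizer.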
\subsection{Model Selection by Cross-Validation}
\label{ssec:CV}
%
The performance of the MISED method depends on the choice of
model parameters (the Gaussian width
$\sigma$ and the regularization $\lambda$ in the current setup).
Below, we describe a method
to optimize the model by cross-validation,
which essentially follows the same line as \cite{hardle1990bandwidth}
for kernel density estimation:
\begin{enumerate}
\item Divide the sample $\mathcal{X}=\{\vector{x}_i\}_{i=1}^{n}$ into
$T$ disjoint subsets $\{\mathcal{X}_t\}_{t=1}^{T}$.
\item The estimator $\widehat{g}^{(t)}_{k,\vector{j}}(\vector{x})$ is
obtained using $\mathcal{X}\setminus\mathcal{X}_t$,
and then the hold-out MISE to
$\mathcal{X}_t$ is computed as
\begin{align}
\text{CV}(t)&=
\int \left\{\widehat{g}^{(t)}_{k,\vector{j}}(\vector{x})\right\}^2\mathrm{d}\vector{x}
-\frac{2(-1)^{k}}{|\mathcal{X}_t|}
\sum_{\vector{x}\in\mathcal{X}_t} \frac{\partial^k}{\partial
x_1^{j_1}\partial x_2^{j_2}\cdots\partial x_d^{j_d}}
\widehat{g}^{(t)}_{k,\vector{j}}(\vector{x}), \label{CV}
\end{align}
where $|\mathcal{X}_t|$ denotes the number of elements in
$\mathcal{X}_t$.
\item The model that minimizes
$\text{CV}=\frac{1}{T}\sum_{t=1}^T\text{CV}(t)$ is chosen.
\end{enumerate}
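The three steps above can be sketched compactly in Python for the one-dimensional, first-derivative case; the candidate grids mirror the ranges used in the experiments below, while the data and fold counts are arbitrary illustrative choices. Note that $\int \widehat{g}^2\mathrm{d}x=\vector{\theta}^\top\mathbf{G}\vector{\theta}$ is available analytically for Gaussian kernels.

```python
import numpy as np

def fit_theta(xtr, sigma, lam, k=1):
    """MISED coefficients for the first derivative (1-D Gaussian kernels)."""
    G = np.sqrt(np.pi) * sigma * np.exp(-(xtr[:, None] - xtr[None, :]) ** 2 / (4 * sigma ** 2))
    d = xtr[:, None] - xtr[None, :]
    h = (-(d / sigma ** 2) * np.exp(-d ** 2 / (2 * sigma ** 2))).mean(axis=0)
    return (-1) ** k * np.linalg.solve(G + lam * np.eye(len(xtr)), h)

def holdout_score(xtr, xte, sigma, lam, k=1):
    """CV(t): integral of ghat^2 minus 2(-1)^k times the hold-out mean of ghat's derivative."""
    theta = fit_theta(xtr, sigma, lam, k)
    G = np.sqrt(np.pi) * sigma * np.exp(-(xtr[:, None] - xtr[None, :]) ** 2 / (4 * sigma ** 2))
    d = xte[:, None] - xtr[None, :]
    dghat = (-(d / sigma ** 2) * np.exp(-d ** 2 / (2 * sigma ** 2))) @ theta
    return theta @ G @ theta - 2 * (-1) ** k * dghat.mean()

rng = np.random.default_rng(1)
x = rng.standard_normal(300)
folds = np.array_split(rng.permutation(len(x)), 5)          # step 1: T = 5 folds
sigmas, lams = np.logspace(-0.3, 1.0, 9), np.logspace(-1.0, 1.0, 9)
scores = np.array([[np.mean([holdout_score(np.delete(x, f), x[f], s, l)
                             for f in folds])                # step 2: hold-out MISE
                    for l in lams] for s in sigmas])
best_s, best_l = np.unravel_index(scores.argmin(), scores.shape)  # step 3
```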
\subsection{Numerical Examples}
\begin{figure}[!t]
\centering \includegraphics[width=\textwidth]{ProfileMSE.eps}
\caption{\label{fig:DerEsti} Estimation of the derivatives of the
standard normal distribution. (a) and (b): First-order and
second-order derivative estimation. (c): Density estimation (only by
KDE). (d) and (e): Normalized mean squared error (MSE) for first-order
and second-order derivative estimation as functions of the
dimensionality of data.}
\end{figure}
Let us illustrate the behavior of MISED using $n=500$ samples drawn from
the standard normal distribution. The Gaussian bandwidth $\sigma$ and
the regularization parameter $\lambda$ included in MISED are chosen by
5-fold cross-validation from the nine candidates,
$\sigma\in\left\{10^{-0.3}, 10^{-0.1375},\dots,10^{1}\right\}$ and
$\lambda\in\{10^{-1},10^{-0.75},\ldots,10^{1}\}$. For comparison, we
also test the Gaussian KDE where the Gaussian bandwidth $h$ is also
chosen from the same candidate values by 5-fold cross-validation to
minimize the hold-out MISE (\ref{CV}).
Figures~\ref{fig:DerEsti} (a) and (b) depict the estimation results of
the first-order and second-order density derivatives. The derivatives
estimated by KDE are less accurate than those by MISED, in particular for the
second-order derivative, although the density itself is reasonably
approximated as shown in Figure~\ref{fig:DerEsti} (c). This result
substantiates that a good density estimator is not necessarily a good
density-derivative estimator.
Next, we evaluate how the performance is affected when the
dimensionality of the standard normal distribution is increased. We use
the common $\sigma$ (and $\lambda$ for MISED) for all $\vector{j}$ and
the summation of the hold-out MISE for all $\vector{j}$ is used as the
cross-validation score. Figures~\ref{fig:DerEsti} (d) and (e) show that
the increase of the normalized mean squared errors (MSE)\footnote{The
normalized MSE in this paper is defined by
$\frac{\frac{1}{n}\sum_{i=1}^n\sum_{\vector{j}}
(\widehat{g}_{k,\vector{j}}(\vector{x_i})-p_{k,\vector{j}}(\vector{x}_i))^2}{\sqrt{\frac{1}{n}\sum_{i=1}^n\sum_{\vector{j}}
\widehat{g}_{k,\vector{j}}(\vector{x_i})^2\frac{1}{n}\sum_{i=1}^n\sum_{\vector{j}}
p_{k,\vector{j}}(\vector{x_i})^2}}$.} for the MISED method is much
milder than that for KDE, illustrating the high reliability of MISED in
high-dimensional problems.
\section{Application to Kullback-Leibler (KL) Divergence Approximation}
\label{ssec:KL}
In this section, we apply density-derivative estimation to KL-divergence
approximation.
\subsection{Nearest-Neighbor KL-Divergence Approximation}
The KL-divergence from one density $p_1(\vector{x})$ to another density
$p_2(\vector{x})$, defined as
\begin{align*}
\mathrm{KL}(p_1\|p_2)=\int p_1(\vector{x})
\log\frac{p_1(\vector{x})}{p_2(\vector{x})}\mathrm{d}\vector{x},
\end{align*}
is useful for various purposes such as two-sample homogeneity testing
\cite{IEEE-IT:Kanamori+etal:2012}, feature selection \cite{Gavin09ANew},
and change detection \cite{NN:Liu+etal:2013}. Here, we consider the
KL-divergence approximator based on \emph{nearest-neighbor density
estimation} (NNDE) \cite{Wang06anearest-neighbor} from two sets of
samples $\mathcal{X}_1=\{\vector{x}_i\}_{i=1}^{n_1}$ and
$\mathcal{X}_2=\{\vector{x}_i\}_{i=n_1+1}^{n_1+n_2}$ following
$p_1(\vector{x})$ and $p_2(\vector{x})$ on $\R{d}$:
\begin{align*}
\widehat{\mathrm{KL}}(p_1\|p_2)=\frac{1}{n_1}\sum_{i=1}^{n_1}
\log\frac{(n_1-1)\mathrm{dist}_1(\vector{x}_i)^d}{n_2\mathrm{dist}_2(\vector{x}_i)^d},
\end{align*}
where $\mathrm{dist}_1(\vector{x})$ and $\mathrm{dist}_2(\vector{x})$
denote the distance from $\vector{x}$ to the nearest samples in
$\mathcal{X}_1$ and $\mathcal{X}_2$, respectively.
\subsection{Metric Learning for NNDE-Based KL-Divergence Approximation}
Although the KL-divergence itself is metric-invariant, the NNDE-based
KL-divergence approximator is metric-dependent. Indeed, it was shown in
\cite{noh2014bias} that the bias of the NNDE-based KL-divergence
approximator at $\vector{x}$ is approximately proportional to
\begin{align*}
\frac{\tr(\nabla\nabla p_1)}{((n_1-1)p_1)^{2/d}p_1}
-\frac{\tr(\nabla\nabla p_2)}{(n_2p_2)^{2/d}p_2},
\end{align*}
where $\nabla\nabla p_1$ and $\nabla\nabla p_2$
are the Hessian matrices which are metric-dependent.
Therefore, changing the metric in the input space
is expected to reduce the bias.
It was shown in \cite{noh2014bias} that the best local Mahalanobis
metric
$(\vector{x}-\vector{x}')^{\top}\widehat{\A}(\vector{x}-\vector{x}')$
for point $\vector{x}$ that minimizes the above approximate bias is
given by
\begin{align*}
\widehat{\A} \propto \left[
\begin{array}{cc}
\U_+ & \U_- \\
\end{array}
\right]
\left(
\begin{array}{cc}
d_+\Lambda_+ & 0 \\
0 & -d_-\Lambda_- \\
\end{array}
\right)
\left[
\begin{array}{cc}
\U_+ & \U_- \\
\end{array}
\right]^\top ,
\end{align*}
where $\Lambda_+\in\R{d_+\times d_+}$ and
$\Lambda_-\in\R{d_-\times d_-}$ are the diagonal matrices containing
$d_+$ positive and $d_-$ negative eigenvalues of $\mathbf{B}$, respectively:
\begin{align*}
\mathbf{B}&=\frac{1}{((n_1-1)p_1)^{2/d}}
\frac{\nabla\nabla p_1}{p_1}
-\frac{1}{(n_2p_2)^{2/d}}\frac{\nabla\nabla p_2}{p_2}.
\end{align*}
The matrices $\widehat{\A}$ and $\mathbf{B}$ share the same eigenvectors,
and $\U_+\in\R{d\times d_+}$ and $\U_-\in\R{d\times d_-}$
are collections of eigenvectors that
correspond to the eigenvalues in $\Lambda_+$ and $\Lambda_-$, respectively.
In~\cite{noh2014bias}, the authors assumed that $p_1$ and $p_2$ are both
nearly Gaussian, and estimated densities $p_1$ and $p_2$ as well as their Hessian matrices
$\nabla\nabla p_1$ and $\nabla\nabla p_2$ from the Gaussian models with
maximum likelihood estimation. It was demonstrated that the accuracy of
NNDE-based KL-divergence approximation is significantly improved when $p_1$ and $p_2$
are nearly Gaussian.
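Given any symmetric matrix $\mathbf{B}$, the metric $\widehat{\A}$ above can be assembled directly from an eigendecomposition; the sketch below is a direct transcription of the formula (positive eigenvalues scaled by $d_+$, negative ones flipped and scaled by $d_-$), so the result is symmetric positive definite whenever both signs occur.

```python
import numpy as np

def metric_from_B(B):
    """Build the local Mahalanobis matrix A-hat from the eigendecomposition of symmetric B."""
    w, U = np.linalg.eigh(B)
    pos = w > 0
    d_plus, d_minus = int(pos.sum()), int((w < 0).sum())
    # d+ * Lambda+ on the positive part, -d- * Lambda- on the negative part
    scaled = np.where(pos, d_plus * w, -d_minus * w)
    return (U * scaled) @ U.T      # U diag(scaled) U^T
```

For example, a matrix with eigenvalues $\{2,-1,3\}$ ($d_+=2$, $d_-=1$) yields a metric with eigenvalues $\{4,1,6\}$ and the same eigenvectors.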
\subsection{Applying MISED to Metric Learning for NNDE-Based KL-Divergence Approximation}
However, the above method does not work well if $p_1$ and $p_2$ are
far from Gaussian. Here, we propose to use our non-parametric
density-derivative estimator in metric learning for NNDE-based
KL-divergence approximation.
Since the scale of $\mathbf{B}$ is arbitrary, let us use the following
rescaled matrix $\widetilde{\mathbf{B}}$ instead:
\begin{align}
\widetilde{\mathbf{B}}&=\frac{1}{(n_1-1)^{2/d}}
\left\{\frac{p_2}{p_1}\right\}^{2/d+1}\nabla\nabla p_1
-\frac{1}{n_2^{2/d}}\nabla\nabla p_2.
\label{B}
\end{align}
We estimate the Hessian matrices $\nabla\nabla p_1$ and $\nabla\nabla
p_2$ by the proposed MISED method, and the density ratio $p_2/p_1$ by the unconstrained
least-squares density-ratio estimator \cite{kanamori2009least} that
directly estimates the density ratio in a non-parametric manner without
estimating each density. By this, we can perform metric learning in a
non-parametric way without explicitly estimating the densities $p_1$ and
$p_2$.
\begin{figure}[!t]
\centering \includegraphics[width=\textwidth]{KLdivArt.eps}
\caption{\label{fig:KL} KL-divergence estimation for (a) super-Gaussian,
(b) Gaussian and (c) sub-Gaussian data as a function of sample size $n$.
}
\end{figure}
\subsection{Numerical Examples}
We experimentally compare the behavior of the NNDE-based KL-divergence
approximator with MISED-based metric learning (MISED),
that without metric learning (NN) \cite{Wang06anearest-neighbor},
that with Gaussian-based metric learning (NNG) \cite{noh2014bias},
the density-ratio-based non-parametric KL-divergence estimator (Ratio)
\cite{IEEE-IT:Nguyen+etal:2010},
the risk-based nearest-neighbor KL-divergence estimator (fRisk)
\cite{ICML:Garcia-Garcia+etal:2011},
and the Gaussian parametric KL-divergence estimator with maximum likelihood estimation (GP).
We generate data samples from the generalized Gaussian distribution:
\begin{align*}
p_\mathrm{GG}(x;\mu,\beta,\rho)&=\frac{\beta^{1/2}}{2\Gamma(1+1/\rho)}
\exp\left(-\beta^{\rho/2}|x-\mu|^{\rho}\right),
\end{align*}
where $\mu\in\mathbb{R}$ denotes the mean, $\beta>0$ controls the variance,
and $\rho>0$ controls the Gaussianity:
$\rho<2$, $\rho=2$, and $\rho>2$ correspond to
super-Gaussian, Gaussian, and sub-Gaussian distributions, respectively.
For $\vector{x}=(x^{(1)},\ldots,x^{(d)})^\top$ with $d=5$, we set
\begin{align*}
p_1(\vector{x})&=\prod_{j=1}^dp_\mathrm{GG}(x^{(j)};0,\beta,\rho),\\
p_2(\vector{x})&=p_\mathrm{GG}(x^{(1)};2,\beta,\rho)\prod_{j=2}^dp_\mathrm{GG}(x^{(j)};0,\beta,\rho),
\end{align*}
where the value of $\beta$ is selected so that the variance is one.
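For the density $p_\mathrm{GG}$ above, a direct calculation gives variance $\Gamma(3/\rho)/(\beta\,\Gamma(1/\rho))$, so $\beta=\Gamma(3/\rho)/\Gamma(1/\rho)$ yields unit variance; samples can be drawn via a Gamma transform of $|x-\mu|$. The sketch below is our own illustration of this setup (the sampling route is one standard construction, not specified in the text).

```python
import numpy as np
from math import gamma

def beta_unit_var(rho):
    """beta for which p_GG(.; mu, beta, rho) has unit variance."""
    return gamma(3.0 / rho) / gamma(1.0 / rho)

def p_gg(x, mu, beta, rho):
    """Generalized Gaussian density as defined above."""
    return (np.sqrt(beta) / (2 * gamma(1 + 1 / rho))
            * np.exp(-beta ** (rho / 2) * np.abs(x - mu) ** rho))

def sample_gg(n, mu, beta, rho, rng):
    """Draw samples: |x - mu| = Gamma(1/rho, 1)^(1/rho) / sqrt(beta), with a random sign."""
    r = rng.gamma(1.0 / rho, 1.0, size=n) ** (1.0 / rho) / np.sqrt(beta)
    return mu + rng.choice([-1.0, 1.0], size=n) * r
```

With $\rho=2$ and $\beta=1/2$ this reduces to the standard normal, and with $\rho=1$ to a unit-variance Laplace distribution.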
We evaluate the performance of each method when sample size $n$ and
Gaussianity $\rho$ are changed.
The experimental results for $\rho=1,2,3$ and $n=500,1000,1500,2000$ are
presented in Figure~\ref{fig:KL}. The proposed MISED outperforms the
plain NN (without metric learning) for all three cases, and it
outperforms NNG and GP for the super-Gaussian and sub-Gaussian cases.
GP and NNG work the best for the Gaussian case as expected, but MISED
also still works reasonably well. fRisk is better than MISED for
the sub-Gaussian case, but it largely overestimates for the other two
cases. Ratio is a completely non-parametric method, but it
systematically underestimates for all three cases.
\subsection{Experiments on Distributional Change Detection}
\label{ssec:CDec}
\begin{figure}[!t]
\centering
\subfigure[Example 1]{
\begin{tabular}{@{}c@{}}
\includegraphics[width=0.47\textwidth]{CDInput1.eps}\\
\includegraphics[width=0.47\textwidth]{CDKL1.eps}\\
\end{tabular}
}
\subfigure[Example 2]{
\begin{tabular}{@{}c@{}}
\includegraphics[width=0.47\textwidth]{CDInput2.eps}\\
\includegraphics[width=0.47\textwidth]{CDKL2.eps}\\
\end{tabular}
}
\caption{\label{fig:chdec} HASC time series data (top) and
the KL-divergence estimated by MISED (bottom).
Green symbols represent the true change points.}
\end{figure}
\begin{table}[t]
\caption{\label{tab:AUC} Means and standard deviations of the area
under the ROC curve (AUC) over $10$ runs. The best method and
methods comparable to the best one in terms of the mean AUC by the
one-tailed Welch's t-test with significance level $5\%$ are
highlighted in boldface.} \centering
\vspace*{2mm}
\begin{tabular}{c|c|c|c}
GP & NNG~\cite{noh2014bias} & fRisk~\cite{ICML:Garcia-Garcia+etal:2011} & MISED \\
\hline
0.747(0.050) & 0.822(0.030) & {\bf 0.858(0.022)} & {\bf 0.839(0.028)}
\end{tabular}
\end{table}
The goal of change detection is to find abrupt changes in time-series
data. We use an $m$-dimensional real vector $\vector{y}(t)$
to represent a segment of time series at time $t$, and a
collection of $r$ such vectors is obtained from a sliding window:
\[
\vector{Y}(t) := \{ \vector{y}(t), \vector{y}(t+1),
\ldots,\vector{y}(t+r-1)\}.
\]
Following \cite{NN:Liu+etal:2013},
we consider an underlying density function that generates $r$
retrospective vectors in $\vector{Y}(t)$. We measure the KL-divergence
between the underlying density functions of the two sets,
$\vector{Y}(t)$ and $\vector{Y}(t + r + m)$ for every $t$,
and determine a point $t_0 + r + m$ as a change point if the KL-divergence for
$\vector{Y}(t_0)$ and $\vector{Y}(t_0 + r + m)$ is greater than a
predefined threshold. In the experiment, we set $r=3$ and $m=100$.
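The sliding-window scheme can be sketched as follows on a synthetic series with one mean shift. Everything here is illustrative: the window sizes differ from the paper's $r=3$, $m=100$, and the change score is a simple mean-shift surrogate standing in for the NNDE-based KL estimate.

```python
import numpy as np

r, m = 20, 10          # illustrative sizes (the paper uses r = 3, m = 100)
rng = np.random.default_rng(5)
series = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 300)])  # change at t = 300

def window_set(t):
    """Y(t): r consecutive m-dimensional sliding-window vectors."""
    return np.stack([series[t + i: t + i + m] for i in range(r)])

def change_score(t):
    # Simplified stand-in for the KL-based score: mean shift between Y(t) and Y(t + r + m).
    return np.linalg.norm(window_set(t).mean(axis=0) - window_set(t + r + m).mean(axis=0))

ts = np.arange(len(series) - 2 * (r + m) + 1)
scores = np.array([change_score(t) for t in ts])
```

The score should peak when $\vector{Y}(t)$ lies entirely before the change and $\vector{Y}(t+r+m)$ entirely after it, flagging a change point near $t_0+r+m$.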
We use the \emph{Human Activity Sensing Consortium (HASC) Challenge
2011} collection\footnote{\url{http://hasc.jp/hc2011/}}, which provides human
activity information collected by a portable three-axis
accelerometer. Our task is to segment different activities such as
``stay'', ``walk'', ``jog'', and ``skip''. Because the orientation of
the accelerometer is not necessarily fixed, we took the $\ell_2$-norm of
3-dimensional accelerometer data and obtained one-dimensional data,
following \cite{NN:Liu+etal:2013}.
Figure~\ref{fig:chdec} depicts examples of time-series data and their
KL-divergences (which are regarded as change scores). These graphs show
that the change scores tend to be large at the true change points.
Next, we more systematically evaluate the performance of change
detection using the AUC (area under the ROC curve) scores. The results
are summarized in Table~\ref{tab:AUC}, showing that the proposed MISED
outperforms GP and NNG, and is comparable to fRisk. In the experiments
in Figure~\ref{fig:KL}, fRisk gave similar values for different
distributions even when the true KL-divergence was large. This is a drawback
for KL-divergence approximation, but the same property seems to act as a
``regularizer'' that stabilizes the change score and avoids large errors.
Similar tendencies were also reported in the previous work
\cite{noh2014bias}.
\subsection{Experiments on Information-Theoretic Feature Selection}
Finally, KL-divergence approximation is applied to selecting relevant
features for classification. The \emph{Jensen-Shannon (JS) divergence}
is an information-theoretic measure between labels $y\in\{1,2\}$ and
features $\vector{x}\in\mathbb{R}^d$:
\begin{align*}
\mathrm{JS}(\mathcal{X};\vector{y})
&= -\sum_{y = 1}^2\int p(\vector{x},y)\log\frac{p(\vector{x})p(y)}{p(\vector{x},y)}
\mathrm{d}\vector{x}
\\
&=
p(y=1) \mathrm{KL}(p(\vector{x}|y=1)\|p(\vector{x}))\\
&\phantom{=}+ p(y=2) \mathrm{KL}(p(\vector{x}|y=2)\|p(\vector{x})),
\end{align*}
where $p(\vector{x}) = p(y=1) p(\vector{x}|y=1) + p(y=2)
p(\vector{x}|y=2)$.
\begin{figure}[!t] \centering \includegraphics[width=3.5in]{GeneFig2.eps}
\caption{Gene expression classification with feature selection. The best
method and methods comparable to the best one in terms of the mean AUC
by the one-tailed Welch's t-test with significance level $5\%$ are
highlighted by the asterisks.} \label{fig:geneSelection}
\end{figure}
We use two gene expression datasets of breast cancer prognosis studies:
``SMK-CAN-187'' \cite{Freije2004Gene} and ``VANTVEER''
\cite{Spira2007Airway}. The SMK-CAN-187 dataset contains 90 positive
(alive) and 97 negative (dead after 5 years) samples with 19993
features. We use 65 randomly selected samples per class for training
and use the rest for evaluating the test classification performance.
The VANTVEER dataset contains 46 positive and 51 negative samples with
24481 features. We use 35 randomly selected data per class for training
and use the rest for evaluating the test classification performance.
We choose $20$ features based on the forward selection strategy and
compare the AUC of classification. The results are summarized in
Figure~\ref{fig:geneSelection}, showing that the proposed method works
reasonably well.
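The forward selection strategy can be sketched as a greedy loop; here a between-class mean-separation criterion is a hedged stand-in for the JS-divergence score (which would be computed with the NNDE-based KL approximator in the actual experiments), and the synthetic data are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 200, 20
y = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, d))
X[y == 1, 0] += 3.0       # only features 0 and 1 carry class information
X[y == 1, 1] -= 3.0

def separation(S):
    """Stand-in score for JS(X_S; y): between-class mean separation of feature subset S."""
    return np.linalg.norm(X[np.ix_(y == 0, S)].mean(axis=0)
                          - X[np.ix_(y == 1, S)].mean(axis=0))

selected = []
for _ in range(5):        # forward selection: greedily add the best remaining feature
    rest = [j for j in range(d) if j not in selected]
    selected.append(max(rest, key=lambda j: separation(selected + [j])))
```

On this toy data the two informative features should be picked first.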
\section{Conclusion}
We proposed a method to directly estimate density derivatives. The
proposed estimator, called MISED, was shown to possess various useful
properties, e.g., analytic and computationally efficient estimation
of multi-dimensional high-order density derivatives is possible and all
hyper-parameters can be chosen objectively by cross-validation. We
further proposed a MISED-based metric learning method to improve the
accuracy of nearest-neighbor KL-divergence approximation, and its practical
usefulness was experimentally demonstrated on change detection and
feature selection.
Estimation of density derivatives is versatile and useful in
various machine learning tasks beyond KL-divergence approximation.
In our future work, we will explore more applications
based on the proposed MISED method.
\bibliographystyle{unsrt}
| {
"timestamp": "2014-07-01T02:11:51",
"yymm": "1406",
"arxiv_id": "1406.7638",
"language": "en",
"url": "https://arxiv.org/abs/1406.7638",
"abstract": "Estimation of density derivatives is a versatile tool in statistical data analysis. A naive approach is to first estimate the density and then compute its derivative. However, such a two-step approach does not work well because a good density estimator does not necessarily mean a good density-derivative estimator. In this paper, we give a direct method to approximate the density derivative without estimating the density itself. Our proposed estimator allows analytic and computationally efficient approximation of multi-dimensional high-order density derivatives, with the ability that all hyper-parameters can be chosen objectively by cross-validation. We further show that the proposed density-derivative estimator is useful in improving the accuracy of non-parametric KL-divergence estimation via metric learning. The practical superiority of the proposed method is experimentally demonstrated in change detection and feature selection.",
"subjects": "Machine Learning (stat.ML)",
"title": "Direct Density-Derivative Estimation and Its Application in KL-Divergence Approximation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.976310526632796,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7076984868774332
} |
https://arxiv.org/abs/1910.05584 | A convergent numerical method to recover the initial condition of nonlinear parabolic equations from lateral Cauchy data | We propose a new numerical method for the solution of the problem of the reconstruction of the initial condition of a quasilinear parabolic equation from the measurements of both Dirichlet and Neumann data on the boundary of a bounded domain. Although this problem is highly nonlinear, we do not require an initial guess of the true solution. The key in our method is the derivation of a boundary value problem for a system of coupled quasilinear elliptic equations whose solution is the vector function of the spatially dependent Fourier coefficients of the solution to the governing parabolic equation. We solve this problem by an iterative method. The global convergence of the system is rigorously established using a Carleman estimate. Numerical examples are presented. | \section{Introduction}
Let $d \geq 1$ be the spatial dimension and $T > 0$.
Let $q: \mathbb{R} \to \mathbb{R}$ and $c: \mathbb{R}^d \to \mathbb{R}$ be two smooth functions in the class $C^1$.
Assume that $c({\bf x}) \geq c_0$ for some $c_0 > 0.$
Consider the problem
\begin{equation}
\left\{
\begin{array}{rcll}
c({\bf x})u_t({\bf x}, t) &=& \Delta u({\bf x}, t) + q(u({\bf x}, t)) &{\bf x} \in \mathbb{R}^d, t \in (0, T)\\
u({\bf x},0) &=& p({\bf x}) & {\bf x} \in \mathbb{R}^d,
\end{array}
\right.
\label{main eqn}
\end{equation}
where $p$ is a source function compactly supported in an open and bounded domain $\Omega$ of $\mathbb{R}^d$ with smooth boundary $\partial \Omega.$
We briefly discuss the unique solvability and some regularity properties of \eqref{main eqn}.
Assume that the initial condition $p$ is in $H^{2 + \beta}(\mathbb{R}^d)$ for some $\beta \in [0, 1 + 4/d]$ and has compact support.
Assume further that
\begin{equation}
|q(s)| \leq C(1 + |s|) \quad \mbox{for all } s \in \mathbb{R}
\label{Lady}
\end{equation} for some constant $C > 0$.
Then \eqref{main eqn} has a unique solution with $|u({\bf x}, t)| \leq M$ and
$u \in H^{2 + \beta, 1 + \beta/2} (\mathbb{R}^d \times [0, T])$ for some constant $M > 0$.
These unique solvability and regularity properties can be obtained by applying Theorem 6.1 in \cite[Chapter 5, \S 6]{LadyZhenskaya:ams1968} and Theorem 2.1 in \cite[Chapter 5, \S 2]{LadyZhenskaya:ams1968}.
We are interested in the following problem.
\begin{problem}[Inverse Source Problem]
Assume that there is a number $M > 0$ such that $|u({\bf x}, t)| \leq M$ for all ${\bf x} \in \overline \Omega,$ $t \in [0, T]$.
Given the lateral Cauchy data
\begin{equation}
f({\bf x}, t) = u({\bf x}, t)
\quad
\mbox{and }
\quad
g({\bf x}, t) = \partial_\nu u({\bf x}, t)
\label{data}
\end{equation}
for ${\bf x} \in \partial \Omega$, $t \in [0, T]$,
determine the function $u({\bf x}, 0) = p({\bf x}), {\bf x} \in \Omega.$
\label{ISP}
\end{problem}
Problem \ref{ISP} arises from the problem of recovering the initial condition $p({\bf x})$ of parabolic equation \eqref{main eqn} from the lateral Cauchy data.
It has many real-world applications, e.g.,
determination of the spatially distributed temperature inside a solid from boundary measurements of the heat and heat flux in the time domain \cite{Klibanov:ip2006};
identification of pollution on the surface of rivers or lakes \cite{BadiaDuong:jiip2002};
effective monitoring of heat conduction processes in the steel industry, glass and polymer forming, and nuclear power stations \cite{LiYamamotoZou:cpaa2009}.
When the nonlinear term $q(u)$ takes the form $u(1 - u)$ (or $q(u) = u(1 - |u|^{\alpha})$) for some $\alpha > 0$, the parabolic equation in (\ref{main eqn}) is called the high dimensional version of the well-known Fisher (or Fisher-Kolmogorov) equation \cite{Fisher:ae1937}.
Although the nonlinearity $q$ does not satisfy condition \eqref{Lady}, we do not experience any difficulty in numerical computations of the forward problem.
It is worth mentioning that the Fisher equation occurs in ecology, physiology, combustion, crystallization, plasma physics, and in general phase transition problems, see \cite{Fisher:ae1937}.
Due to its realistic applications, the problem of determining the initial conditions of parabolic equations has been studied intensively. However, to the best of our knowledge, numerical solutions have been computed only in the case when the nonlinearity is absent; see, e.g., \cite{LiNguyen:IPSE2019}.
The uniqueness of Problem \ref{ISP} is well-known assuming that the nonlinearity $q$ is in class $C^1$, see \cite{Lavrentiev:AMS1986}.
On the other hand, the logarithmic stability results were rigorously proved in \cite{Klibanov:ip2006, LiYamamotoZou:cpaa2009}.
For completeness, we briefly recall the logarithmic stability of Problem \ref{ISP} in this paper.
The natural approach to solve this problem is the optimal control method; that is, minimizing some mismatch functional.
However, since the known stability is logarithmic \cite{Klibanov:ip2006, LiYamamotoZou:cpaa2009}, the optimal control approach might not give good numerical results, especially when the initial guess, if provided, is far from the true solution.
A more important reason for us not to use the optimal control method is that the cost functional is nonconvex and, therefore, might have multiple local minima.
We draw the reader's attention to the convexification methods, see \cite{KlibanovIoussoupova:SMA1995, Klibanov:sjma1997, Klibanov:nw1997, KlibanovNik:ra2017, Klibanov:ip2015, KlibanovKolesov:cma2019, KlibanovLiZhang:ip2019, KhoaKlibanovLoc:arxiv2019}, which convexify the cost functional and therefore avoid the difficulty caused by the lack of a good initial guess.
Applying the convexification method to numerically solve Problem \ref{ISP} will be studied in a future project.
In this paper, rather than working on the convexification method, we propose a numerical method for Problem \ref{ISP} that combines the contraction principle with a new Carleman estimate, similarly to \cite{BAUDOUIN:SIAMNumAna:2017, Boulakia:preprint2019}, in which the authors successfully solved a coefficient inverse problem for a hyperbolic equation and an inverse source problem for a parabolic equation.
The convergence of our method is proved via the contraction principle using a new Carleman estimate.
The latter is similar to the idea of \cite{BAUDOUIN:SIAMNumAna:2017, Baudouin:preprint2019, Boulakia:preprint2019}.
As mentioned, since a good initial guess of the true solution of Problem \ref{ISP} is not always available, the optimal control method, which is widely used in the scientific community, might not be applicable.
To overcome this difficulty, we propose to solve Problem \ref{ISP} in the Fourier domain.
More precisely, we derive a system of elliptic PDEs whose solution consists of a finite number of the Fourier coefficients of the solution to the parabolic equation (\ref{main eqn}).
The solution of this system directly yields the knowledge of the function $u({\bf x}, t)$, from which the solution to our inverse problem follows.
We numerically solve this nonlinear system by an iterative process.
The initial approximation is computed by solving the system obtained by removing the nonlinear term.
Then, we approximate the nonlinear system by evaluating the nonlinearity at the solution obtained in the previous step.
Solving this approximate system, we find an updated solution.
Continuing this process, we obtain a sequence that converges quickly to the desired function.
The convergence of this iterative procedure is rigorously proved by using a new Carleman estimate and the standard arguments of the contraction principle.
The fast convergence will be shown in both analytic and numerical senses.
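As a sanity check of this linearize-and-iterate strategy, the following Python sketch applies the same idea to a toy one-dimensional two-point boundary value problem $u'' = \sin(u) + f$ with homogeneous Dirichlet data (our own model problem, not the paper's coupled elliptic system): at each step, the nonlinearity is frozen at the previous iterate and a linear problem is solved.

```python
import numpy as np

M = 199                                  # interior grid points on (0, 1)
h = 1.0 / (M + 1)
x = np.linspace(h, 1 - h, M)
# Standard second-difference Laplacian with homogeneous Dirichlet conditions
A = (np.diag(np.full(M, -2.0)) + np.diag(np.ones(M - 1), 1)
     + np.diag(np.ones(M - 1), -1)) / h ** 2
f = 10 * np.sin(2 * np.pi * x)

u = np.linalg.solve(A, f)                # step 0: drop the nonlinear term
diffs = []
for _ in range(15):                      # freeze sin(.) at the previous iterate
    u_new = np.linalg.solve(A, f + np.sin(u))
    diffs.append(np.max(np.abs(u_new - u)))
    u = u_new
```

Since the inverse Laplacian has norm at most $1/\pi^2$ and $\sin$ is $1$-Lipschitz, the iteration is a contraction and the successive differences decay geometrically, mirroring the fast convergence claimed above.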
Two papers closely related to the current one are \cite{Boulakia:preprint2019} and \cite{LiNguyen:IPSE2019}. In \cite{Boulakia:preprint2019}, a source term for a nonlinear parabolic equation is computed and in \cite{LiNguyen:IPSE2019}, the second author and his collaborator computed the initial condition of the linear parabolic equation from the lateral Cauchy data.
On the other hand, the coefficient inverse problem for parabolic equations is also very interesting and studied intensively.
We draw the reader's attention to \cite{Borceaetal:ip2014, CaoLesnic:nmpde2018, CaoLesnic:amm2019, KeungZou:ip1998, Nguyen:arxiv2019, Nguyens:jiip2019, YangYuDeng:amm2008} for important numerical methods and good numerical results.
Besides, the problem of recovering the initial conditions for the hyperbolic equation is very interesting since it arises in many real-world applications.
For instance, the problems of thermo- and photo-acoustic tomography play key roles in biomedical imaging.
We refer the reader to some important works in this field \cite{LiuUhlmann:ip2015, KatsnelsonNguyen:aml2018, HaltmeierNguyen:SIAMJIS2017}.
Applying the Fourier transform, one can reduce the problem of reconstructing the initial conditions for hyperbolic equations to some inverse source problems for the Helmholtz equation, see \cite{NguyenLiKlibanov:IPI2019, WangGuoZhangLiu:ip2017, WangGuoLiLiu:ip2017, LiLiuSun:IPI2018, ZhangGuoLiu:ip2018} for some recent results.
The paper is organized as follows.
In Section \ref{sec method}, we derive a nonlinear system of elliptic PDEs, which leads to a numerical method to solve Problem \ref{ISP}.
This nonlinear system is solved by an iterative scheme. The proof of the convergence of this iteration is based on the contraction principle.
In Section \ref{Sec Carleman}, we establish and prove a Carleman estimate. This estimate plays an important role in the proof of Theorem \ref{minimizer}, which guarantees the existence and uniqueness of the least-squares solution to over-determined elliptic systems.
In Section \ref{sec convergence}, we prove the convergence of the iterative sequence.
In Section \ref{sec num}, we discuss the implementation of our method and show several numerical results.
Section \ref{sec remarks} is for concluding remarks.
\section{A numerical method to solve Problem \ref{ISP} } \label{sec method}
The main aims of this section are to derive a system of nonlinear elliptic equations, whose solutions directly yield the solutions to Problem \ref{ISP}, and then propose a method to solve it.
\subsection{A system of nonlinear elliptic equations} \label{sec 2.1}
Let $\{\Psi_n\}_{n \geq 1}$ be an orthonormal basis of $L^2(0, T).$
For each point ${\bf x} \in \Omega$, we can approximate $u({\bf x}, t)$, $t \in [0, T],$ as
\begin{equation}
u({\bf x}, t) = \sum_{n = 1}^\infty u_n({\bf x}) \Psi_n(t) \simeq \sum_{n = 1}^N u_n({\bf x}) \Psi_n(t)
\label{Fourier u}
\end{equation}
where
\begin{equation}
u_n({\bf x}) = \int_0^T u({\bf x}, t) \Psi_n(t) dt \quad n \geq 1.
\label{Fourier coefficient}
\end{equation}
\begin{remark}
Replacing $\simeq$ in \eqref{Fourier u} by ``$=$'' forms our approximate mathematical model.
We cannot prove the convergence of the model as $N \to \infty$. Indeed, such a result is very hard to prove due to both the nonlinearity and the ill-posedness of our inverse problem.
Therefore, our goal below is to find spatially dependent Fourier coefficients $u_n$ defined in \eqref{Fourier coefficient}. The number $N$ should be chosen numerically.
In fact, in Section \ref{sec num}, we verify that with appropriate values of $N$, the error caused by \eqref{Fourier u} is small; see also Figure \ref{fig choose N}.
\end{remark}
Due to (\ref{Fourier u}), the function $u_t({\bf x}, t)$ is approximated by
\begin{equation}
u_t({\bf x}, t) \simeq \sum_{n = 1}^N u_n({\bf x}) \Psi_n'(t)
\quad {\bf x} \in \Omega, t \in [0, T].
\label{Fourier ut}
\end{equation}
From now on, we replace the approximation ``$\simeq$'' by equality.
The resulting approximation error will be examined numerically in Remark \ref{rem N} and Figure \ref{fig choose N}.
Plugging (\ref{Fourier u}) and (\ref{Fourier ut}) into the governing equation in (\ref{main eqn}), we obtain \begin{equation}
c({\bf x})\sum_{n = 1}^N u_n({\bf x}) \Psi_n'(t) =
\sum_{n = 1}^N \Delta u_n({\bf x}) \Psi_n(t)
+ q\Big(\sum_{n = 1}^N u_n({\bf x}) \Psi_n(t)\Big) \label{3.3}
\end{equation}
for all ${\bf x} \in \Omega.$
For each $m = 1, \dots, N$, multiply both sides of (\ref{3.3}) by $\Psi_m(t)$ and then integrate the resulting equation with respect to $t$ over $[0, T]$.
For all ${\bf x} \in \Omega,$ we have
\begin{multline}
c({\bf x})\sum_{n = 1}^N u_n({\bf x}) \int_0^T\Psi_n'(t) \Psi_m(t)dt
\\
=
\sum_{n = 1}^N \Delta u_n({\bf x}) \int_0^T\Psi_n(t) \Psi_m(t)dt
+ \int_0^Tq\Big(\sum_{n = 1}^N u_n({\bf x}) \Psi_n(t)\Big)\Psi_m(t)dt.
\label{3.4}
\end{multline}
The system (\ref{3.4}) with $m = 1, \dots, N$ can be rewritten as
\begin{equation}
c({\bf x}) \sum_{n = 1}^N s_{mn} u_n({\bf x}) = \Delta u_m({\bf x}) + q_m(u_1({\bf x}), u_2({\bf x}), \dots, u_N({\bf x}))
\label{system U}
\end{equation}
where
\[ s_{mn} = \int_0^T \Psi'_n(t) \Psi_m(t)dt
\]
and
\begin{equation}
q_m(u_1({\bf x}), u_2({\bf x}), \dots, u_N({\bf x})) = \int_{0}^T q\Big(\sum_{n = 1}^N u_n({\bf x}) \Psi_n(t)\Big) \Psi_m(t)dt.
\label{eqnqm}
\end{equation}
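The matrix $(s_{mn})$ can be assembled by numerical quadrature. The following Python sketch is illustrative only: $L^2(0,T)$-orthonormal shifted Legendre polynomials stand in for the paper's basis $\{\Psi_n\}$, and indices run from $0$ here instead of $1$.

```python
import numpy as np

# Assemble s_{mn} = \int_0^T Psi_n'(t) Psi_m(t) dt by Gauss-Legendre
# quadrature, with orthonormal shifted Legendre polynomials as an
# illustrative stand-in for {Psi_n}.
T, N = 1.0, 5
nodes, wts = np.polynomial.legendre.leggauss(64)
t = 0.5 * T * (nodes + 1.0)          # quadrature nodes on [0, T]
w = 0.5 * T * wts
leg = np.polynomial.legendre

def Psi(n, t):
    coef = np.zeros(n + 1); coef[n] = 1.0
    return np.sqrt((2 * n + 1) / T) * leg.legval(2 * t / T - 1.0, coef)

def dPsi(n, t):
    # derivative via legder, with the chain-rule factor 2/T
    coef = np.zeros(n + 1); coef[n] = 1.0
    return np.sqrt((2 * n + 1) / T) * (2.0 / T) * leg.legval(2 * t / T - 1.0, leg.legder(coef))

S = np.array([[np.sum(w * dPsi(n, t) * Psi(m, t)) for n in range(N)]
              for m in range(N)])
```

For this particular basis the diagonal entries vanish, since $s_{nn} = \frac{1}{2}\big(\Psi_n^2(T) - \Psi_n^2(0)\big)$ and $P_n(\pm 1)^2 = 1$; this is a property of the illustrative basis, not of the paper's.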
Due to (\ref{Fourier coefficient}), each function $u_m$, $m = 1, \dots, N$, satisfies the Cauchy boundary conditions
\begin{equation}
\left\{
\begin{array}{rl}
u_m({\bf x}) &= f_m({\bf x})
= \displaystyle\int_0^T f({\bf x}, t) \Psi_m(t)dt
\\
\partial_\nu u_m({\bf x}) &= g_m({\bf x})
=\displaystyle\int_0^T g({\bf x}, t) \Psi_m(t)dt
\end{array}
\right.
\label{boundary conditions}
\end{equation}
for all ${\bf x} \in \partial \Omega$, $m = 1, \dots, N.$ Here, $f({\bf x}, t)$ and $g({\bf x}, t)$ are the given data.
\begin{remark}
Problem \ref{ISP} becomes the problem of finding all functions $u_m({\bf x})$, ${\bf x} \in \Omega$, $m = 1, \dots, N$, satisfying (\ref{system U}) and the Cauchy boundary conditions (\ref{boundary conditions}).
In fact, if all of those functions are known, we can compute the function $u({\bf x}, t)$, ${\bf x} \in \Omega$, $t \in [0, T]$ via (\ref{Fourier u}).
Then, the initial condition $p({\bf x})$ is given by the function $u({\bf x}, 0).$
\end{remark}
\begin{remark}
From now on, we consider the values of $f_m({\bf x})$ and $g_m({\bf x})$ on $\partial \Omega$, $m = 1, \dots, N$, as the ``indirect data", see (\ref{boundary conditions}).
Denote by $f_m^*({\bf x})$ and $g_m^*({\bf x})$ the noiseless data. In numerical study, we set the noisy data as
\[
f_m^{\delta} = f_m^*(1 + \delta(-1 + 2{\rm rand})), \quad
g_m^{\delta} = g_m^*(1 + \delta(-1 + 2{\rm rand}))
\] on $\partial \Omega$, $1 \leq m \leq N$, where $\delta > 0$ is the noise level and ${\rm rand}$ generates uniformly distributed random numbers in the range $[0, 1]$.
In our numerical study, $\delta = 20\%.$
\end{remark}
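The noise model above can be sketched in a few lines of Python; the array `f_star` below is a hypothetical stand-in for the noiseless trace $f_m^*$ on $\partial\Omega$.

```python
import numpy as np

# Multiplicative uniform noise of relative level delta applied to the
# noiseless indirect data (array contents are illustrative).
rng = np.random.default_rng(0)
delta = 0.20
f_star = np.linspace(1.0, 2.0, 50)               # stand-in noiseless data
noise = -1.0 + 2.0 * rng.random(f_star.shape)    # uniform in [-1, 1]
f_delta = f_star * (1.0 + delta * noise)
```

By construction the relative perturbation never exceeds $\delta$ pointwise.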
\subsection{An iterative procedure to solve the system \eqref{system U}--\eqref{boundary conditions}} \label{sec solve nonlinear system}
We propose a procedure to compute $u_1({\bf x})$, $ \dots,$ $u_N({\bf x})$.
We first approximate (\ref{system U})--(\ref{boundary conditions}) by solving the following over-determined problem
\begin{equation}
\left\{
\begin{array}{rcll}
c({\bf x}) \sum_{n = 1}^N s_{mn} u_n^{(0)}({\bf x}) &=& \Delta u_m^{(0)}({\bf x})
&{\bf x} \in \Omega,\\
u_m^{(0)}({\bf x}) &=& f_m({\bf x}) &{\bf x} \in \partial \Omega,\\
\partial_{\nu} u_m^{(0)}({\bf x}) &=& g_m({\bf x}) &{\bf x} \in \partial \Omega
\end{array}
\right.
\quad m = 1, 2, \dots, N
\label{U0}
\end{equation}
for a vector-valued function $(u^{(0)}_1, \dots, u^{(0)}_N)$.
Then, assuming by induction that $(u^{(k-1)}_1, \dots, u^{(k-1)}_N)$, $k \geq 1$, is known, we find $(u^{(k)}_1, \dots, u^{(k)}_N)$ by solving
\begin{equation}
\left\{
\begin{array}{ll}
c({\bf x}) \sum_{n = 1}^N s_{mn} u^{(k)}_n({\bf x}) = \Delta u^{(k)}_m({\bf x})
\\
\hspace{3cm}+ q_m[P (u^{(k-1)}_1({\bf x})), \dots, P(u_N^{(k-1)}({\bf x}))]
&{\bf x} \in \Omega,\\
u_m^{(k)}({\bf x}) = f_m({\bf x}) &{\bf x} \in \partial \Omega,\\
\partial_{\nu} u_m^{(k)}({\bf x}) = g_m({\bf x}) &{\bf x} \in \partial \Omega
\end{array}
\right.
\label{Up}
\end{equation}
where $q_m$ is defined in (\ref{eqnqm})
for $m = 1, 2, \dots, N.$
Here,
\begin{equation}
P(s) = \left\{
\begin{array}{ll}
M\sqrt{T} & s \in (M\sqrt{T}, \infty),\\
s & s \in [-M\sqrt{T}, M\sqrt{T}],\\
-M\sqrt{T}& s \in (-\infty, -M\sqrt{T}]
\end{array}
\right.
\quad \mbox{for all } s \in \mathbb{R}
\label{cutoff function}
\end{equation}
serves as a cut-off function, where $M > \|u^*\|_{L^{\infty}(\Omega \times [0, T])}$ is a fixed constant.
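The cut-off map amounts to clamping values to $[-M\sqrt{T}, M\sqrt{T}]$; a minimal Python sketch (the values of $M$ and $T$ below are illustrative) is:

```python
import numpy as np

# The cut-off map P in (cutoff function): clamp to [-M*sqrt(T), M*sqrt(T)].
# M and T are illustrative here; in the paper M bounds the sup-norm of
# the true solution.
M, T = 2.0, 1.0
bound = M * np.sqrt(T)

def P(s):
    # np.clip realizes the three-branch definition of P in one call
    return np.clip(s, -bound, bound)
```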
In practice, since both Dirichlet and Neumann boundary conditions are imposed, problems (\ref{U0}) and (\ref{Up}) might have no solution.
However, since these two problems are linear, we can use the linear least-squares method to find the ``best fit" solutions.
In order to guarantee the convergence of the method, we include a Carleman weight function in the linear least-squares functional.
Define the set of admissible solutions
\[
H = \{(u_m)_{m = 1}^N \in H^2(\Omega)^N: u_m|_{\partial \Omega} = f_m \mbox{ and } \partial_{\nu} u_m|_{\partial \Omega} = g_m, 1 \leq m \leq N\}.
\]
Throughout the paper, we assume that the set $H$ is nonempty.
In the analysis, we will need the following subspace of $H^2(\Omega)^N$
\begin{equation}
H_0 = \left\{
(v_1, \dots, v_N) \in H^2(\Omega)^N:
v_m|_{\partial \Omega} = \partial_{\nu} v_m|_{\partial \Omega} = 0, \; 1 \leq m \leq N
\right\}.
\label{H0}
\end{equation}
Let ${\bf x}_0$ be a point in $\mathbb{R}^d \setminus \overline \Omega$ such that $\min\{r({\bf x}): {\bf x} \in \overline \Omega\} > 1$, and let $b > \max\{r({\bf x}): {\bf x} \in \overline \Omega\}$, where
\[
r({\bf x}) = |{\bf x} - {\bf x}_0| \quad \mbox{for all } {\bf x} \in \mathbb{R}^d.
\]
To find $u^{(0)}$, we minimize the functional
$
J^{(0)}: H \to \mathbb{R}
$ with
\begin{equation}
J^{(0)}(u_1, \dots, u_N)
=
\sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})} \Big|\Delta u_m - c({\bf x}) \sum_{n = 1}^N s_{mn} u_n\Big|^2 d{\bf x}
\label{J0}
\end{equation}
where $\lambda$ and $\beta$ are the numbers as in Corollary \ref{Col 3.1}.
The obtained minimizer $(u_m^{(0)})_{m = 1}^N$ $\in$ $H$ is called the regularized solution to (\ref{U0}).
Next, assuming by induction that $(u_m^{(k-1)})_{m = 1}^N$, $k \geq 1$, is known, we set $(u_m^{(k)})_{m = 1}^N$ as the minimizer of
$
J^{(k)}: H \to \mathbb{R}
$ defined as
\begin{multline}
J^{(k)}(u_1, \dots, u_N) = \sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})}\Big|\Delta u_m - c({\bf x})\sum_{n = 1}^N s_{mn} u_n
\\
+ q_m(P(u_1^{(k - 1)}), \dots, P(u_N^{(k-1)}))\Big|^2 d{\bf x}.
\label{Jk}
\end{multline}
\begin{remark}
The function $e^{2\lambda b^{-\beta} r^\beta({\bf x})}$ in \eqref{J0} and \eqref{Jk} is called the Carleman weight function.
Its presence is very helpful to prove the existence and uniqueness of the minimizers for the functionals $J^{(k)}$, $k \geq 0,$ see Theorem \ref{minimizer}.
On the other hand, this Carleman weight function and the Carleman estimate (see Theorem \ref{Carleman}) play important roles for us to prove the convergence of our method, see Theorem \ref{main thm}.
\end{remark}
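As a quick sanity check on this weight, the following Python snippet (illustrative parameter values only) evaluates $e^{2\lambda b^{-\beta} r^\beta({\bf x})}$ on a grid over a sample domain $\Omega = (0,1)^2$ and confirms that $r > 1$ on $\overline \Omega$ and that the weight increases with $r$:

```python
import numpy as np

# Evaluate the Carleman weight exp(2*lam*b**(-beta)*r**beta) on a grid
# over Omega = (0,1)^2.  All parameter values are illustrative; x0 is
# placed outside Omega so that r > 1 on the closure, and b exceeds max r.
lam, beta = 5.0, 10.0
x0 = np.array([-1.0, -0.5])
xx, yy = np.meshgrid(np.linspace(0.0, 1.0, 21), np.linspace(0.0, 1.0, 21))
r = np.sqrt((xx - x0[0]) ** 2 + (yy - x0[1]) ** 2)
b = 1.1 * r.max()
weight = np.exp(2.0 * lam * (r / b) ** beta)
```

Note that with $r/b < 1$ the exponent stays moderate even for fairly large $\lambda$, in line with the remark that this weight does not decay the way the one in \cite{NguyenLiKlibanov:IPI2019} does.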
The following result guarantees the existence and uniqueness of the minimizers of the functionals $J^{(k)}$, $k \geq 0$, that is, of the regularized solutions to (\ref{U0}) and (\ref{Up}).
\begin{Theorem}
Assume that $f_m$ and $g_m$ are in $L^2(\partial \Omega)$, $m = 1, 2, \dots, N$ and assume that $H$ is nonempty.
Then, each functional $J^{(k)}$, $k \geq 0$, has a unique minimizer provided that both $\lambda$ and $\beta$ are sufficiently large.
\label{minimizer}
\end{Theorem}
\begin{proof}
We prove Theorem \ref{minimizer} only in the case $k \geq 1$; the case $k = 0$ is similar.
Since $H$ is nonempty, we can find a vector valued function $(\varphi_m)_{m = 1}^N \in H.$
Define
\begin{equation}
v_m({\bf x}) = u_m({\bf x}) - \varphi_m({\bf x}) \quad {\bf x} \in \Omega, m = 1, \dots, N.
\label{3939}
\end{equation}
We minimize
\[
I^{(k)}(v_1, \dots, v_N) = J^{(k)}(v_1 + \varphi_1, \dots, v_N + \varphi_N)
\] where $(v_m)_{m = 1}^N$ varies in $H_0$, defined in (\ref{H0}).
If $(v_m)_{m = 1}^N$ minimizes $I^{(k)},$ then by the variational principle,
\begin{eqnarray}
\hspace{-2cm}
\sum_{m = 1}^N\Big \langle
e^{2\lambda b^{-\beta} r^\beta({\bf x})} \Big(\Delta v_m - c({\bf x})\sum_{n = 1}^N s_{mn} v_n
+ \Delta \varphi_m - c({\bf x})\sum_{n = 1}^N s_{mn} \varphi_n
\nonumber
\\
+ q_m(P(u_1^{(k - 1)}), \dots, P(u_N^{(k-1)}))\Big),
\Delta h_m - c({\bf x})\sum_{n = 1}^N s_{mn} h_n \Big \rangle_{L^2(\Omega)} = 0
\label{39}
\end{eqnarray}
for all $(h_m)_{m = 1}^N \in H_0$.
The identity (\ref{39}) is equivalent to
\begin{multline}
\sum_{m = 1}^N\Big \langle
e^{2\lambda b^{-\beta} r^\beta({\bf x})} \Big(\Delta v_m - c({\bf x})\sum_{n = 1}^N s_{mn} v_n\Big),
\Delta h_m - c({\bf x})\sum_{n = 1}^N s_{mn} h_n \Big \rangle_{L^2(\Omega)}
\\
=-\sum_{m = 1}^N\Big \langle
e^{2\lambda b^{-\beta} r^\beta({\bf x})} \Big( \Delta \varphi_m - c({\bf x})\sum_{n = 1}^N s_{mn} \varphi_n + q_m(P(u_1^{(k - 1)}), \dots, P(u_N^{(k-1)}))\Big),
\\
\Delta h_m - c({\bf x})\sum_{n = 1}^N s_{mn} h_n \Big \rangle_{L^2(\Omega)}.
\label{40}
\end{multline}
The left-hand side of (\ref{40}) defines a bilinear form $\{\cdot, \cdot\}$ of a pair $((v_m)_{m = 1}^N, (h_m)_{m = 1}^N)$ in $H_0$.
We claim that $\{\cdot, \cdot\}$ is coercive; that is, \[
\{(v_m)_{m = 1}^N, (v_m)_{m = 1}^N\} \geq C\|(v_m)_{m = 1}^N\|_{H^2(\Omega)^N}^2
\] for some constant $C > 0$.
In fact, using the inequality $(x - y)^2 \geq x^2/2 - y^2$, which follows from $2xy \leq x^2/2 + 2y^2$, we have
\begin{multline*}
\sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})}\Big|\Delta v_m - c({\bf x})\sum_{n = 1}^N s_{mn} v_n\Big|^2 d{\bf x}
\geq \sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v_m|^2 d{\bf x}
\\
-\sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})}\Big| c({\bf x})\sum_{n = 1}^N s_{mn} v_n\Big|^2 d{\bf x}.
\end{multline*}
Applying the Carleman estimate (\ref{33}), which will be proved in Section \ref{Sec Carleman}, for the function $v_m$ for each $m \in \{1, \dots, N\},$ we have
\begin{multline*}
\sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})}\Big|\Delta v_m - c({\bf x})\sum_{n = 1}^N s_{mn} v_n\Big|^2 d{\bf x}
\\
\geq \sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})}\Big[\frac{C\sum_{i, j = 1}^d|\partial^2_{x_i x_j} v_m|^2}{\lambda} +
C\lambda |\nabla v_m|^2 + C\lambda^3|v_m|^2
\Big]d{\bf x}
\\
-\sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})}\Big| c({\bf x})\sum_{n = 1}^N s_{mn} v_n\Big|^2 d{\bf x}.
\end{multline*}
Since $c({\bf x})$ is bounded and the numbers $s_{mn}$ are finite, we can choose $\lambda$ sufficiently large such that
\begin{equation*}
\sum_{m = 1}^N\int_{\Omega} e^{2\lambda b^{-\beta} r^\beta({\bf x})}\Big|\Delta v_m - c({\bf x})\sum_{n = 1}^N s_{mn} v_n\Big|^2 d{\bf x}
\geq
C\min_{{\bf x} \in \overline \Omega} \{e^{2\lambda b^{-\beta} r^\beta({\bf x})}\} \lambda^{-1}\sum_{m = 1}^N
\|v_m\|^2_{H^2(\Omega)}.
\end{equation*}
Applying the Lax-Milgram theorem, we can find a unique vector-valued function $(v_m)_{m = 1}^N$ satisfying (\ref{40}).
The vector-valued function $(u_m)_{m = 1}^N$ can be found via (\ref{3939}).
\end{proof}
Denote by $\{(u_1^{(k)}, \dots, u_N^{(k)})\}$ the sequence of minimizers of $J^{(k)}$, $k \geq 0$.
We claim that this sequence converges to the true solution of (\ref{system U}) and (\ref{boundary conditions}) in $L^2(\Omega)^N$ as $k \to \infty$.
The proof of this fact is based on the contraction principle, in which the Carleman estimate of Section \ref{Sec Carleman} plays an important role.
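To make one regularized solve concrete, here is a minimal 1-D finite-difference sketch in Python of a Carleman-weighted least-squares step of the same flavor as minimizing $J^{(0)}$. Everything here is our simplification for illustration: the model is the scalar equation $u'' = c\,s\,u$ on $(0,1)$ with Cauchy data at both endpoints, the Cauchy data are enforced by a penalty rather than by working in the set $H$, and all parameter values are arbitrary.

```python
import numpy as np

# 1-D illustration (not the authors' code): recover u on (0,1) solving
# u'' = c*s*u with Cauchy data at both ends, via a Carleman-weighted
# least-squares (quasi-reversibility) formulation.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
c_coef, s_coef = 1.0, 0.5
k = np.sqrt(c_coef * s_coef)       # u*(x) = cosh(k x) solves u'' = c*s*u
u_true = np.cosh(k * x)

# interior residual rows: (u_{i-1} - 2 u_i + u_{i+1})/h^2 - c*s*u_i
A = np.zeros((n - 2, n))
for i in range(1, n - 1):
    A[i - 1, i - 1] = 1.0 / h**2
    A[i - 1, i] = -2.0 / h**2 - c_coef * s_coef
    A[i - 1, i + 1] = 1.0 / h**2

# Carleman weight exp(2*lam*(r/b)^beta) with r(x) = |x - x0|, x0 = -1
x0, b, beta, lam = -1.0, 2.5, 10.0, 5.0
r = np.abs(x - x0)
W = np.exp(2.0 * lam * (r[1:-1] / b) ** beta)
Aw = W[:, None] * A

# Cauchy data rows (Dirichlet + one-sided Neumann at both endpoints),
# enforced by a large penalty instead of restricting to the set H
Pen = 1.0e4
B = np.zeros((4, n)); d = np.zeros(4)
B[0, 0] = 1.0;                      d[0] = u_true[0]
B[1, -1] = 1.0;                     d[1] = u_true[-1]
B[2, 0], B[2, 1] = -1.0/h, 1.0/h;   d[2] = (u_true[1] - u_true[0]) / h
B[3, -2], B[3, -1] = -1.0/h, 1.0/h; d[3] = (u_true[-1] - u_true[-2]) / h

M = np.vstack([Aw, Pen * B])
rhs = np.concatenate([np.zeros(n - 2), Pen * d])
u_ls, *_ = np.linalg.lstsq(M, rhs, rcond=None)
err = np.max(np.abs(u_ls - u_true))
```

With exact Cauchy data generated from $u^*(x) = \cosh(\sqrt{cs}\,x)$, the weighted least-squares solution recovers $u^*$ up to discretization error.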
\section{A new Carleman estimate} \label{Sec Carleman}
In this section, we establish a new Carleman estimate, which was used in the proof of Theorem \ref{minimizer} to guarantee the existence and uniqueness of the minimizer of the functional $J^{(k)}$ in \eqref{Jk}, $k \geq 1$.
This estimate and its corollary, Corollary \ref{Col 3.1}, play a crucial role in the proof of our main result, see Theorem \ref{main thm} which guarantees the convergence of our numerical method.
\begin{Theorem}[Carleman estimate]
Let ${\bf x}_0$ be a point in $\mathbb{R}^d \setminus \overline \Omega$ such that $r({\bf x}) = |{\bf x} - {\bf x}_0| > 1$ for all ${\bf x} \in \Omega$.
Let $b > \max_{{\bf x} \in \overline \Omega} r({\bf x})$ be a fixed constant.
There exists a positive constant $\beta_0$ depending only on $b$, ${\bf x}_0$, $\Omega$ and $d$ such that
for all functions $v \in C^2(\overline \Omega)$ satisfying
\[
v({\bf x}) = \partial_{\nu} v({\bf x}) = 0 \quad \mbox{for all } {\bf x} \in \partial \Omega,
\]
the following estimate holds true
\begin{multline}
\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}|\Delta v({\bf x})|^2 d{\bf x}
\geq
\frac{C}{\lambda \beta^{7/4} b^{-\beta}}\sum_{i, j = 1}^d \int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} r^{2\beta}({\bf x})|\partial^2_{ x_i x_j}v({\bf x})|^2 d{\bf x}
\\
+ C \lambda^3 \beta^4 b^{-3 \beta} \int_{\Omega} r^{2\beta}({\bf x}) e^{2\lambda b^{-\beta} r^{\beta}}|v({\bf x})|^2 d{\bf x}
\\
+ C \lambda \beta^{1/2} b^{-\beta}\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
\label{Car est}
\end{multline}
for $\beta \geq \beta_0$ and $\lambda \geq \lambda_0$.
Here, $\lambda_0 = \lambda_0(b, \Omega, d, {\bf x}_0) > 1$ is a number with $\lambda_0 b^{-\beta} \gg 1$, and $C = C(b, \Omega, d, {\bf x}_0) > 1$ is a constant; both depend only on the listed parameters.
\label{Carleman}
\end{Theorem}
\begin{remark}
The Carleman estimate in Theorem \ref{Carleman} is more complicated than the version in \cite{NguyenLiKlibanov:IPI2019}.
The reason for us to establish this new estimate is that the Carleman weight function in \cite{NguyenLiKlibanov:IPI2019} decays fast when $\lambda \gg 1$, causing poor numerical results.
Unlike this, the Carleman estimate in Theorem \ref{Carleman} allows us to choose large $\lambda$ in implementation, making the theory and the computational codes more consistent.
\end{remark}
\begin{remark}
A new feature of Theorem \ref{Carleman} is the presence of all second derivatives of the function $v$ on the right-hand side of \eqref{Car est}.
This makes it more convenient for us to prove the existence and uniqueness of the regularized solutions to a system of nonlinear elliptic equations appearing in our analysis in Section \ref{sec method}, see Theorem \ref{minimizer}.
\end{remark}
We split the proof of Theorem \ref{Carleman} into four lemmas, Lemma \ref{lem 3.4}--Lemma \ref{lem 3.3}.
\begin{Lemma} Let $v$ be the function as in Theorem \ref{Carleman}.
There exists a positive constant $\beta_0$ depending only on $b$, ${\bf x}_0$, $\Omega$ and $d$ such that
\begin{multline}
\int_{\Omega}\frac{e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2}{4 \lambda \beta b^{-\beta} r^{\beta - 2}({\bf x})} d{\bf x}
\geq
C\lambda^2 \beta^3 b^{-2\beta} \int_{\Omega} r^{2\beta}({\bf x}) e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}|v({\bf x})|^2 d{\bf x}
\\
- C \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
\label{4.14}
\end{multline}
for all $\beta \geq \beta_0$ and $\lambda \geq \lambda_0$. Here, $\lambda_0$ is a constant such that $\lambda_0 b^{-\beta} \gg 1$.
\label{lem 3.4}
\end{Lemma}
\begin{proof}
By changing variables, if necessary, we can assume that ${\bf x}_0 = 0.$
Define the function
\begin{equation}
w({\bf x}) = e^{\lambda b^{-\beta} r^\beta({\bf x})} v({\bf x})
\quad \mbox{or} \quad
v({\bf x}) = e^{-\lambda b^{-\beta} r^\beta({\bf x})} w({\bf x})
\label{5.11}
\end{equation}
for all ${\bf x} \in \Omega$.
Since $v$ vanishes on $\partial \Omega$, so does $w$.
On the other hand, by the product rule in differentiation, for all ${\bf x} \in \Omega$,
\begin{equation}
\nabla v({\bf x})= e^{-\lambda b^{-\beta} r^\beta({\bf x})} \nabla w({\bf x}) -\beta \lambda b^{-\beta} r^{\beta - 2}({\bf x}) e^{-\lambda b^{-\beta} r^\beta({\bf x})} w({\bf x}){\bf x}.
\label{5.1}
\end{equation}
Since $v = w = 0$ and $\partial_\nu v = 0$ on $\partial \Omega$, it follows that
\[
e^{-\lambda b^{-\beta} r^\beta({\bf x})} \nabla w({\bf x}) \cdot \nu = \nabla v({\bf x})\cdot \nu
+ \beta\lambda b^{-\beta} r^{\beta - 2}({\bf x}) e^{-\lambda b^{-\beta} r^\beta({\bf x})} w({\bf x})\, {\bf x} \cdot \nu = 0
\]
for all ${\bf x} \in \partial \Omega$.
We thus obtain
$
w({\bf x}) = \partial_\nu w({\bf x}) = 0
$ for all ${\bf x} \in \partial \Omega$.
Hence, from now on, whenever we apply the integration by parts formula to $v$ and $w$, the integrals over $\partial \Omega$ vanish.
We next compute the Laplacian of $v$ in terms of $w$.
For all ${\bf x} \in \Omega,$
\begin{align*}
\Delta v({\bf x}) &= e^{-\lambda b^{-\beta} r^\beta({\bf x})} \Delta w({\bf x})
+ 2 \nabla e^{-\lambda b^{-\beta} r^\beta({\bf x})} \cdot \nabla w({\bf x})
+ w({\bf x}) \Delta (e^{-\lambda b^{-\beta} r^\beta({\bf x})})
\\
& = e^{-\lambda b^{-\beta} r^\beta({\bf x})} \Big[
\Delta w({\bf x})
- 2 \lambda \beta b^{-\beta} r^{\beta - 2}({\bf x})\nabla w({\bf x}) \cdot
{\bf x}
\\
& \hspace{6cm}
+e^{\lambda b^{-\beta} r^\beta({\bf x})} \Delta (e^{-\lambda b^{-\beta} r^\beta({\bf x})}) w({\bf x})
\Big].
\end{align*}
Using the inequality $(a - b + c)^2 \geq -2ab - 2bc$, which follows from $(a - b + c)^2 = (a + c)^2 + b^2 - 2ab - 2bc$, we have
\begin{multline}
|\Delta v({\bf x})|^2 \geq -4 \lambda \beta b^{-\beta} r^{\beta - 2}({\bf x}) e^{-2\lambda b^{-\beta} r^\beta({\bf x})}
\Big[
\Delta w({\bf x})\nabla w({\bf x}) \cdot {\bf x}
\\
+ e^{\lambda b^{-\beta} r^\beta({\bf x})} \Delta (e^{-\lambda b^{-\beta} r^\beta({\bf x})})w({\bf x}) \nabla w({\bf x}) \cdot {\bf x}
\Big]
\label{4.4}
\end{multline}
for all ${\bf x} \in \Omega.$
By a straightforward computation, for ${\bf x} \in \Omega,$
\[
\Delta(e^{-\lambda b^{-\beta} r^\beta({\bf x})}) = - \lambda \beta b^{-\beta} e^{-\lambda b^{-\beta} r^{\beta}({\bf x})} r^{\beta - 2}({\bf x}) \big[
(\beta - 2 + d) - \lambda b^{-\beta} \beta r^\beta({\bf x})
\big].
\]
Plugging this into (\ref{4.4}) gives
\begin{multline*}
|\Delta v({\bf x})|^2 \geq -4 \lambda \beta b^{-\beta} r^{\beta - 2}({\bf x}) e^{-2\lambda b^{-\beta} r^\beta({\bf x})}
\Big[
\Delta w({\bf x})\nabla w({\bf x}) \cdot {\bf x}
\\
-\lambda \beta b^{-\beta} r^{\beta - 2}({\bf x}) \big[
(\beta - 2 + d) - \lambda \beta b^{-\beta} r^\beta({\bf x})
\big] w({\bf x})\nabla w({\bf x}) \cdot {\bf x}
\Big]
\end{multline*}
for all ${\bf x} \in \Omega.$
Hence,
\begin{equation}
\int_{\Omega}\frac{e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2}{4 \lambda \beta b^{-\beta} r^{\beta - 2}({\bf x})} d{\bf x}
\geq I_1 + I_2 + I_3
\label{4.5}
\end{equation}
where
\begin{eqnarray}
I_1 &=& -\int_{\Omega}\Delta w({\bf x}) \nabla w({\bf x}) \cdot {\bf x} d{\bf x}, \label{4.6}
\\
I_2 &=& \lambda \beta b^{-\beta} (\beta - 2 + d) \int_{\Omega}r^{\beta - 2}({\bf x}) w({\bf x}) \nabla w({\bf x}) \cdot {\bf x} d{\bf x}, \label{4.7}
\\
I_3 & =& - \lambda^2 b^{-2\beta} \beta^2 \int_{\Omega}r^{2\beta - 2}({\bf x}) w({\bf x}) \nabla w({\bf x}) \cdot {\bf x} d{\bf x}. \label{4.8}
\end{eqnarray}
\noindent{\bf Estimate $I_1$.}
Write ${\bf x} = (x_1, \dots, x_d)$ and integrate $I_1$ by parts.
It follows from (\ref{4.6}) that $I_1$ is equal to
\begin{align*}
\int_{\Omega} \nabla w({\bf x}) &\cdot \nabla [\nabla w({\bf x}) \cdot {\bf x}] d{\bf x}
\\
&
= \sum_{i, j = 1}^d\int_{\Omega} \partial_{x_i} w({\bf x})\, \partial_{x_i} \big(x_j\partial_{x_j} w({\bf x})\big)\, d{\bf x}
\\
& = \sum_{i, j = 1}^d \int_{\Omega} \partial_{x_i} w({\bf x}) [\partial_{x_j} w({\bf x}) \delta_{ij} + x_j \partial_{x_i x_j} w({\bf x})] d{\bf x}
\\
& = \sum_{i = 1}^d \int_{\Omega} |\partial_{x_i} w({\bf x})|^2 d{\bf x}
+ \sum_{i, j = 1}^d \int_{\Omega} x_j\partial_{x_i} w({\bf x}) \partial_{x_j x_i} w({\bf x}) d{\bf x}.
\end{align*}
Using the identity $\phi({\bf x}) \partial_{x_j} \phi({\bf x}) = \frac{1}{2} \partial_{x_j} (\phi({\bf x})^2)$ with $\phi({\bf x}) = \partial_{x_i}w({\bf x})$ gives
\begin{eqnarray*}
I_1 &=& \int_{\Omega} |\nabla w({\bf x})|^2 d{\bf x} + \frac{1}{2} \sum_{i, j = 1}^d \int_{\Omega} x_j\partial_{x_j} (\partial_{x_i} w({\bf x}))^2 d{\bf x}
\\
&=& \int_{\Omega} |\nabla w({\bf x})|^2 d{\bf x} - \frac{1}{2} \sum_{i, j = 1}^d \int_{\Omega} (\partial_{x_i} w({\bf x}))^2\partial_{x_j} x_j d{\bf x}.
\end{eqnarray*}
Hence,
\begin{equation}
I_1 = \Big(1 - \frac{d}{2}\Big) \int_{\Omega}|\nabla w({\bf x})|^2d{\bf x}.
\label{4.9}
\end{equation}
\noindent{\bf Estimate $I_2$.}
We apply the identity $w \nabla w = \frac{1}{2} \nabla |w|^2$ to get from (\ref{4.7})
\begin{eqnarray*}
I_2 &=& \frac{\lambda \beta b^{-\beta} (\beta - 2 + d)}{2} \int_{\Omega}r^{\beta - 2}({\bf x}) \nabla |w({\bf x})|^2 \cdot {\bf x} d{\bf x}\\
&=& -\frac{\lambda \beta b^{-\beta} (\beta - 2 + d)}{2} \int_{\Omega} |w({\bf x})|^2 {\rm div} (r^{\beta - 2}({\bf x}) {\bf x}) d{\bf x}.
\end{eqnarray*}
Here, the integration by parts formula was used.
We therefore obtain
\begin{equation}
I_2 = -\frac{\lambda \beta b^{-\beta} (\beta - 2 + d)^2}{2} \int_{\Omega} r^{\beta - 2}({\bf x}) |w({\bf x})|^2 d{\bf x}.
\label{4.10}
\end{equation}
\noindent{\bf Estimate $I_3.$} Using integration by parts formula again, by (\ref{4.8}),
\begin{eqnarray*}
I_3 &=& - \frac{\lambda^2 \beta^2 b^{-2\beta}}{2} \int_{\Omega}r^{2\beta - 2}({\bf x}) \nabla |w({\bf x})|^2 \cdot {\bf x} d{\bf x}
\\
&=& \frac{\lambda^2 \beta^2 b^{-2\beta}}{2} \int_{\Omega} |w({\bf x})|^2 {\rm div}[r^{2\beta - 2}({\bf x}){\bf x}] d{\bf x}.
\end{eqnarray*}
Hence,
\begin{eqnarray}
I_3 &=& \frac{\lambda^2 \beta^2(2\beta - 2 + d) b^{-2\beta}}{2} \int_{\Omega} |w({\bf x})|^2 r^{2\beta -2 }({\bf x}) d{\bf x}
\nonumber
\\
&\geq& C \lambda^2 \beta^3 b^{-2\beta}\int_{\Omega} r^{2\beta}({\bf x})|w({\bf x})|^2 d{\bf x}.
\label{4.11}
\end{eqnarray}
Combining (\ref{4.5}), (\ref{4.9}), (\ref{4.10}) and (\ref{4.11}) and using the fact that $\lambda_0 b^{-\beta} \gg 1$ (which implies $\lambda b^{-\beta} \gg 1$), we get
\begin{equation}
\int_{\Omega}\frac{e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2}{4\lambda \beta b^{-\beta} r^{\beta - 2}({\bf x})} d{\bf x}
\geq
C\lambda^2 \beta^3 b^{-2\beta} \int_{\Omega} r^{2\beta}({\bf x}) |w({\bf x})|^2 d{\bf x} - C \int_{\Omega} |\nabla w({\bf x})|^2 d{\bf x}.
\label{4.12}
\end{equation}
Recall (\ref{5.11}) that $w = e^{\lambda b^{-\beta} r^{\beta}} v$.
We have for all ${\bf x} \in \Omega$,
\begin{equation}
\nabla w({\bf x})
= e^{\lambda b^{-\beta} r^\beta({\bf x})} [\nabla v({\bf x}) + \lambda b^{-\beta} \beta r^{\beta - 2}({\bf x})v({\bf x}) {\bf x}].
\label{4.13}
\end{equation}
It follows from (\ref{5.11}), (\ref{4.12}), (\ref{4.13}), the triangle inequality and the fact $\beta^3 \gg \beta^2$ that
\begin{equation*}
\int_{\Omega}\frac{e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2}{4 \lambda \beta b^{-\beta} r^{\beta - 2}({\bf x})} d{\bf x}
\geq
C\lambda^2 \beta^3 b^{-2\beta} \int_{\Omega} r^{2\beta}({\bf x}) e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}|v({\bf x})|^2 d{\bf x}
- C \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}.
\end{equation*}
We have obtained the desired inequality (\ref{4.14}).
\end{proof}
\begin{Lemma}
Let $v$ be a function satisfying all hypotheses of Theorem \ref{Carleman}.
There exist positive constants $\beta_0$ and $\lambda_0$ depending only on $b$, ${\bf x}_0$, $\Omega$ and $d$ such that
\begin{multline}
-\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} v({\bf x}) \Delta v({\bf x}) d{\bf x}
\geq
C \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
\\
- C\lambda^2 \beta^2 b^{-2\beta}\int_{\Omega} e^{2\lambda b^{-\beta}r^{\beta}({\bf x})} r^{2\beta}({\bf x})|v({\bf x})|^2d{\bf x}
\label{4.15}
\end{multline}
for all $\beta \geq \beta_0$ and $\lambda \geq \lambda_0.$
\label{lem 3.2}
\end{Lemma}
\begin{proof}
By integrating by parts, we have
\begin{align}
\int_{\Omega} &e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} v({\bf x}) \Delta v({\bf x}) d{\bf x}
\nonumber
\\
&=
\int_{\Omega} \nabla v({\bf x}) \cdot \nabla \big( e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} v({\bf x}) \big)d{\bf x}
\nonumber
\\
&= \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
+ \int_{\Omega} v({\bf x})\nabla v({\bf x}) \cdot
\nabla \big( e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}\big)d{\bf x}.
\label{27}
\end{align}
The absolute value of the second integral on the right-hand side of (\ref{27}) can be estimated as
\begin{align}
\Big|\int_{\Omega} & v({\bf x})\nabla v({\bf x}) \cdot
\nabla \big( e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}\big)d{\bf x}\Big| \nonumber
\\
&\leq 2\lambda \beta b^{-\beta}\int_{\Omega} r^{\beta - 1}({\bf x})e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}|v({\bf x})||\nabla v({\bf x})|
d{\bf x}
\nonumber
\\
&\leq
C\lambda^2 \beta^2 b^{-2\beta}\int_{\Omega} e^{2\lambda b^{-\beta}r^{\beta}({\bf x})} r^{2\beta}({\bf x})|v({\bf x})|^2d{\bf x}
+ \frac{1}{2} \int_{\Omega} e^{2\lambda b^{-\beta}r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}.
\label{2828}
\end{align}
This, (\ref{27}) and (\ref{2828}) imply
\begin{multline*}
-\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} v({\bf x}) \Delta v({\bf x}) d{\bf x}
\geq
C \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
\\
-C\lambda^2 \beta^2 b^{-2\beta}\int_{\Omega} e^{2\lambda b^{-\beta}r^{\beta}({\bf x})} r^{2\beta}({\bf x})|v({\bf x})|^2d{\bf x}.
\end{multline*}
The lemma is proved.
\end{proof}
\begin{Lemma}
Let $v$ be the function satisfying all hypotheses of Theorem \ref{Carleman}.
There exists a positive constant $\beta_0$ depending only on $b$, ${\bf x}_0$, $\Omega$ and $d$ such that
\begin{multline}
\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}|\Delta v({\bf x})|^2 d{\bf x}
\geq
C \lambda^3 \beta^4 b^{-3 \beta} \int_{\Omega} r^{2\beta}({\bf x}) e^{2\lambda b^{-\beta} r^{\beta}}|v({\bf x})|^2 d{\bf x}
\\
+ C \lambda \beta^{1/2} b^{-\beta}\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
\label{29}
\end{multline}
for all $\beta \geq \beta_0$ and $\lambda \geq \lambda_0.$
Here $\lambda_0$ is a constant satisfying $\lambda_0 b^{-\beta} > 1.$
\label{lem 3.5}
\end{Lemma}
\begin{proof}
Multiplying both sides of (\ref{4.15}) by $\beta^{1/2}$ and then applying the inequality $-ab \leq a^2/2 + b^2/2$ to the left-hand side, we have
\begin{multline*}
\int_{\Omega} \lambda \beta^{3/2} b^{-\beta}e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} r^{\beta - 2}({\bf x}) |v({\bf x})|^2 d{\bf x}
+
\int_{\Omega} \frac{e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}}{4\lambda b^{-\beta} \beta r^{\beta - 2}({\bf x})} |\Delta v({\bf x})|^2 d{\bf x}
\\
\geq
C \beta^{1/2} \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
\\
- C\lambda^2 \beta^{5/2} b^{-2\beta}\int_{\Omega}
r^{2\beta}({\bf x})
e^{2\lambda b^{-\beta}r^{\beta}({\bf x})}|v({\bf x})|^2d{\bf x}.
\end{multline*}
Since $r({\bf x}) > 1$, we have $\beta^{3/2} r^{\beta - 2}({\bf x}) \ll r^{2\beta}({\bf x})$ for $\beta$ sufficiently large, and therefore
\begin{multline}
\int_{\Omega} \frac{e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}}{4\lambda \beta b^{-\beta} r^{\beta - 2}({\bf x})} |\Delta v({\bf x})|^2 d{\bf x}
\geq
C \beta^{1/2} \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
\\
- C\lambda^2 \beta^{5/2} b^{-2\beta}\int_{\Omega}
r^{2\beta}({\bf x})
e^{2\lambda b^{-\beta}r^{\beta}({\bf x})}|v({\bf x})|^2d{\bf x}.
\label{3030}
\end{multline}
Here, we have used the fact that $\lambda b^{-\beta} \gg 1.$
Adding (\ref{3030}) and (\ref{4.14}) together, we obtain
\begin{multline*}
\int_{\Omega} \frac{e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}}{\lambda \beta b^{-\beta} r^{\beta - 2}({\bf x})} |\Delta v({\bf x})|^2 d{\bf x}
\geq C \lambda^2 \beta^3 b^{-2 \beta} \int_{\Omega} r^{2\beta}({\bf x}) e^{2\lambda b^{-\beta} r^{\beta}}|v({\bf x})|^2 d{\bf x}
\\
+ C \beta^{1/2} \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x},
\end{multline*}
which implies (\ref{29}).
\end{proof}
\begin{Lemma}
Let $v$ be the function satisfying all hypotheses of Theorem \ref{Carleman}.
There exist positive constants $\beta_0$ and $\lambda_0$ depending only on $b$, ${\bf x}_0$, $\Omega$ and $d$ such that
\begin{multline}
\frac{1}{\lambda \beta^{7/4} b^{-\beta}}\int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2d{\bf x}
\\
\geq
\frac{C}{\lambda \beta^{7/4} b^{-\beta}}\sum_{i, j = 1}^d \int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} r^{2\beta}({\bf x})|\partial^2_{ x_i x_j}v({\bf x})|^2 d{\bf x}
\\
- C \lambda \beta^{1/4} b^{-\beta} \int_\Omega e^{2\lambda b^{-\beta} r^\beta({\bf x})} |\nabla v({\bf x})|^2d{\bf x}
\label{28}
\end{multline}
for all $\beta \geq \beta_0$ and $\lambda \geq \lambda_0.$
\label{lem 3.3}
\end{Lemma}
\begin{proof}
By a density argument, we can assume that $v \in C^3(\overline \Omega).$
Write ${\bf x} = (x_1, \dots, x_d)$.
We have
\begin{align*}
\int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2d{\bf x}
&=
\sum_{i, j = 1}^d\int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} \partial^2_{x_i x_i} v({\bf x}) \partial^2_{x_j x_j} v({\bf x}) d{\bf x}
\\
&=
\sum_{i, j = 1}^d
\int_{\Omega}\partial_{x_j}\Big[e^{2\lambda b^{-\beta} r^\beta({\bf x})} \partial^2_{x_i x_i} v({\bf x}) \partial_{x_j} v({\bf x}) \Big]d{\bf x}
\\
&\hspace{1cm}
- \sum_{i, j = 1}^d
\int_{\Omega} \partial_{x_j} v({\bf x})\partial_{x_j}\Big[e^{2\lambda b^{-\beta} r^\beta({\bf x})} \partial^2_{x_i x_i} v({\bf x}) \Big]d{\bf x}.
\end{align*}
The first integral on the right-hand side above vanishes due to the divergence theorem.
Hence
\begin{multline}
\int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2d{\bf x}
=
-\sum_{i, j = 1}^d \int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} \partial_{x_j} v({\bf x}) \partial^3_{x_i x_i x_j}v({\bf x}) d{\bf x}
\\
-\sum_{i, j = 1}^d \int_{\Omega}\partial_{x_j}(e^{2\lambda b^{-\beta} r^\beta({\bf x})}) \partial_{x_j} v({\bf x}) \partial^2_{x_i x_i}v({\bf x}) d{\bf x}.
\label{3.18}
\end{multline}
The first term in the right hand side of (\ref{3.18}) is rewritten as
\begin{align*}
-\sum_{i, j = 1}^d &\int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} \partial_{x_j} v({\bf x}) \partial^3_{x_i x_i x_j}v({\bf x}) d{\bf x}
\\
&=
\sum_{i, j = 1}^d \int_{\Omega}\partial_{x_i}(e^{2\lambda b^{-\beta} r^\beta({\bf x})} \partial_{x_j} v({\bf x}) ) \partial^2_{ x_i x_j}v({\bf x}) d{\bf x}\\
&= \sum_{i, j = 1}^d \int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} |\partial^2_{ x_i x_j}v({\bf x})|^2 d{\bf x}
\\
&\hspace{3cm}
+\sum_{i, j = 1}^d \int_{\Omega}\partial_{x_j} v({\bf x})\partial_{x_i}(e^{2\lambda b^{-\beta} r^\beta({\bf x})} ) \partial^2_{ x_i x_j}v({\bf x}) d{\bf x}.
\end{align*}
Combining this and (\ref{3.18}), we have
\begin{multline*}
\int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2d{\bf x}
=\sum_{i, j = 1}^d \int_{\Omega}e^{2\lambda b^{-\beta}r^\beta({\bf x})} |\partial^2_{ x_i x_j}v({\bf x})|^2 d{\bf x}
\\
+ \sum_{i, j = 1}^d \int_{\Omega}
\Big[
\partial_{x_j} v({\bf x})\partial_{x_i}(e^{2\lambda b^{-\beta} r^\beta({\bf x})} ) \partial^2_{ x_i x_j}v({\bf x})
\\
-\partial_{x_j}(e^{2\lambda b^{-\beta} r^\beta({\bf x})}) \partial_{x_j} v({\bf x}) \partial^2_{x_i x_i}v({\bf x})
\Big] d{\bf x}.
\end{multline*}
Hence,
\begin{multline*}
\int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})}|\Delta v({\bf x})|^2d{\bf x}
\geq \sum_{i, j = 1}^d \int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} |\partial^2_{ x_i x_j}v({\bf x})|^2 d{\bf x}
\\
- 2\sum_{i, j = 1}^d \int_{\Omega}
|\partial_{x_j} v({\bf x})|\, |\partial_{x_i}(e^{2\lambda b^{-\beta}r^\beta({\bf x})} )|\, |\partial^2_{ x_i x_j}v({\bf x})|\, d{\bf x}.
\end{multline*}
Note that for all $i = 1, \dots, d$,
\[
\partial_{x_i}(e^{2\lambda b^{-\beta} r^\beta({\bf x})}) = 2 \lambda b^{-\beta} \beta r^{\beta - 2} e^{2\lambda b^{-\beta} r^\beta({\bf x})}x_i \quad \mbox{for all } {\bf x} \in \Omega.
\]
Using the inequality $ab \leq a^2/2 + b^2/2$, we obtain (\ref{28}).
\end{proof}
We now prove Theorem \ref{Carleman}.
\begin{proof}[Proof of Theorem \ref{Carleman}]
Adding (\ref{29}) and (\ref{28}) together, we obtain
\begin{multline*}
\Big(1 + \frac{1}{\lambda \beta^{7/4} b^{-\beta}}\Big)\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}|\Delta v({\bf x})|^2 d{\bf x}
\\
\geq
\frac{C}{\lambda \beta^{7/4} b^{-\beta}}\sum_{i, j = 1}^d \int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} r^{2\beta}({\bf x})|\partial^2_{ x_i x_j}v({\bf x})|^2 d{\bf x}
\\
+ C \lambda^3 \beta^4 b^{-3 \beta} \int_{\Omega} r^{2\beta}({\bf x}) e^{2\lambda b^{-\beta} r^{\beta}}|v({\bf x})|^2 d{\bf x}
\\
+ C \lambda \beta^{1/2} b^{-\beta}\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}.
\end{multline*}
Since $\lambda b^{-\beta} \gg 1$ and $\beta \geq \beta_0$, the factor on the left-hand side is bounded by a constant, and the estimate (\ref{Car est}) follows.
\end{proof}
\begin{Corollary}
Let $\beta_0$ be as in Theorem \ref{Carleman} and fix $\beta = \beta_0$; the constant $C$ below is allowed to depend on ${\bf x}_0,$ $\Omega,$ $d$ and $\beta$.
There exists a constant $\lambda_0$ depending only on ${\bf x}_0,$ $\Omega,$ $d$ and $\beta$ such that
for all functions $v \in H^2(\Omega)$ with
\[
v({\bf x}) = \partial_{\nu} v({\bf x}) = 0
\quad \mbox{ on } \partial \Omega,
\]
we have
\begin{multline}
\int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})}|\Delta v({\bf x})|^2 d{\bf x}
\geq
C\lambda^{-1}\sum_{i, j = 1}^d \int_{\Omega}e^{2\lambda b^{-\beta} r^\beta({\bf x})} |\partial^2_{ x_i x_j}v({\bf x})|^2 d{\bf x}
\\
+ C \lambda^3 \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}}|v({\bf x})|^2 d{\bf x}
+ C \lambda \int_{\Omega} e^{2\lambda b^{-\beta} r^{\beta}({\bf x})} |\nabla v({\bf x})|^2 d{\bf x}
\label{33}
\end{multline}
for all $\lambda \geq \lambda_0$.
\label{Col 3.1}
\end{Corollary}
\begin{remark}
Although many versions of the Carleman estimate are available, they are either too complicated, not suited to proving Theorem \ref{minimizer} and Theorem \ref{main thm}, or do not perform well in computations.
The main ideas of the proof follow from \cite{BeilinaKlibanovBook, BukhgeimKlibanov:smd1981, Klibanov:jiipp2013, Protter:1960AMS, MinhLoc:tams2015}.
\end{remark}
\begin{remark}
The presence of the second derivatives on the right-hand side of (\ref{33}) is a new feature of our Carleman estimate.
The presence of those second derivatives allows us to prove the existence and uniqueness of the minimizers of the cost functionals in Section \ref{sec solve nonlinear system}.
\end{remark}
\section{The convergence analysis} \label{sec convergence}
In this section, we prove a theorem that guarantees that the sequence of vector-valued functions, proposed in Section \ref{sec solve nonlinear system}, converges to the true solution to \eqref{system U}--\eqref{boundary conditions}.
This convergence implies that Algorithm \ref{alg} rigorously provides good numerical solutions to Problem \ref{ISP}.
\begin{Theorem}
Assume that problem (\ref{system U})--(\ref{boundary conditions}) has a unique solution $(u_m^*)_{m = 1}^N$.
Then, there is a constant $\lambda$ depending only on $\Omega$, $T$, $d$ and $N$
such that
\begin{equation}
\sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}(u_m^{(k)} - u_m^*)\Big\|^2_{L^2(\Omega)}
\leq
\Big[\frac{C}{ \lambda^3}\Big]^{k - 1} \sum_{m = 1}^N\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})} (u_m^{(1)} - u_m^*) \Big\|_{L^2(\Omega)}^2
\label{main est}
\end{equation}
for $k = 1, 2, \dots$ where $C$ is a constant depending only on $\Omega$, $T$, $M$, $d$, $N$ and $\|q\|_{C^1(\overline \Omega)}$.
In particular, if $\lambda$ is large enough that $0 < C/\lambda^3 < 1,$ then the sequence $\{u_m^{(k)}\}_{k \geq 1}$ converges to $u_m^{*}$ exponentially for each $m = 1, \dots, N$.
Moreover, with such a $\lambda$, the sequence $(p^{(k)})_{k \geq 1}$ obtained in Step \ref{Step 8} of Algorithm \ref{alg} converges to the true function $p^* = u^*({\bf x}, 0)$ given by \eqref{Fourier u} with $t = 0$.
\label{main thm}
\end{Theorem}
\begin{proof}
In the proof, $C$ is a generous constant that might change from estimate to estimate.
\noindent {\bf Step 1.}{\it\, Establish an a priori bound.}
Recall $H_0$ as in (\ref{H0}).
Since $(u_1^{(k)}, \dots, u_N^{(k)})$ is the minimizer of $J^{(k)}$, by the variational principle, for all $h \in H_0$
\begin{multline}
\sum_{m = 1}^N
\Big\langle e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[ \Delta u_m^{(k)} - c({\bf x})\sum_{n = 1}^N s_{mn} u_n^{(k)} + q_m(P(u_1^{(k-1)}), \dots, P(u_N^{(k-1)}))\Big],
\\
e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta h_m - c({\bf x})\sum_{n = 1}^Ns_{mn} h_n\Big] \Big\rangle_{L^2(\Omega)}
= 0.
\label{3.6}
\end{multline}
On the other hand, since $(u_1^*, \dots, u_N^*)$ solves (\ref{system U})--(\ref{boundary conditions}),
\begin{multline}
\sum_{m = 1}^N
\Big\langle e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[ \Delta u_m^* - c({\bf x}) \sum_{n = 1}^N s_{mn} u_n^* + q_m(u_1^*, \dots, u_N^*)\Big],
\\
e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta h_m - c({\bf x})\sum_{n = 1}^Ns_{mn} h_n\Big] \Big\rangle_{L^2(\Omega)}
= 0.
\label{3.7}
\end{multline}
It follows from (\ref{3.6}) and (\ref{3.7}) that
\begin{multline}
\sum_{m = 1}^N
\Big\langle e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta (u_m^{(k)} - u_m^*) - c({\bf x})\sum_{n = 1}^N s_{mn} (u_n^{(k)} - u_n^*)
\\
+ q_m(P(u_1^{(k-1)}), \dots, P(u_N^{(k-1)})) - q_m(u_1^*, \dots, u_N^*)\Big],
\\
e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta h_m - c({\bf x})\sum_{n = 1}^Ns_{mn} h_n \Big]
\Big\rangle_{L^2(\Omega)}
= 0.
\label{3.8}
\end{multline}
Using the test function $h_m = u_m^{(k)} - u^*_m$, $m = 1, \dots, N$, in (\ref{3.8}) and using H\"older's inequality, we have
\begin{multline}
\sum_{m = 1}^N
\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta (u_m^{(k)} - u_m^*) - c({\bf x})\sum_{n = 1}^N s_{mn} (u_n^{(k)} - u_n^*)\Big]\Big\|^2_{L^2(\Omega)}
\\
\leq
\sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[q_m(P(u_1^{(k-1)}), \dots, P(u_N^{(k-1)})) - q_m(u_1^*, \dots, u_N^*)\Big]\Big\|_{L^2(\Omega)}
\\
\times
\Big\|e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta (u_m^{(k)} - u^*_m) - c({\bf x})\sum_{n = 1}^Ns_{mn} (u_n^{(k)} - u^*_n) \Big]
\Big\|_{L^2(\Omega)} .
\label{3.7777}
\end{multline}
Using the inequality $\sum_{m = 1}^N a_mb_m \leq (\sum_{m = 1}^N a^2_m)^{1/2}( \sum_{m = 1}^N b^2_m)^{1/2}$ for the right hand side of (\ref{3.7777}) and simplifying the result, we get
\begin{multline}
\sum_{m = 1}^N
\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta (u_m^{(k)} - u_m^*) - c({\bf x})\sum_{n = 1}^N s_{mn} (u_n^{(k)} - u_n^*)\Big]\Big\|_{L^2(\Omega)}^2
\\
\leq
\sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[q_m(P(u_1^{(k-1)}), \dots, P(u_N^{(k-1)}))
- q_m(u_1^*, \dots, u_N^*)\Big]\Big\|_{L^2(\Omega)}^2.
\label{3.9}
\end{multline}
\noindent{\bf Step 2.}{\it\, Estimate the right hand side of (\ref{3.9}).}
Since $\|u^*\|_{L^{\infty}} \leq M$, we have
\begin{align*}
|u_m^*({\bf x})| &= \Big|\int_0^T u^*({\bf x}, t) \Psi_m(t) dt\Big|
\leq \|u^*({\bf x}, \cdot)\|_{L^2(0, T)} \|\Psi_m\|_{L^2(0, T)}
\\
&= \Big(\int_0^T |u^*({\bf x}, t)|^2 dt\Big)^{1/2}
\leq
M \sqrt{T}
\end{align*}
for $m = 1, \dots, N,$ since $\|\Psi_m\|_{L^2(0, T)} = 1.$
Therefore, by the mean value theorem,
\[
\Big|q_m(P(u_1^{(k-1)}), \dots, P(u_N^{(k-1)})) - q_m(u_1^*, \dots, u_N^*)\Big|
\leq A_m \sum_{n = 1}^N\big|u_n^{(k-1)} - u_n^* \big|
\]
where
\[
A_m = \max\Big\{|\nabla q_m(s_1, \dots, s_N)|: |s_i| \leq M\sqrt{T}, i = 1, \dots, N\Big\}
\quad \mbox{for } m = 1, \dots, N.
\]
Set $A = \sum_{m = 1}^N A_m$.
The right hand side of (\ref{3.9}) is bounded from above by
\begin{align}
\sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})} &\Big[q_m(P(u_1^{(k-1)}), \dots, P(u_N^{(k-1)})) - q_m(u_1^*, \dots, u_N^*)\Big]\Big\|_{L^2(\Omega)}^2
\nonumber
\\
&\leq
A \sum_{m = 1}^N\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\big|P(u_m^{(k - 1)}) - u_m^* \big|\Big\|_{L^2(\Omega)}^2
\nonumber
\\
&\leq
A \sum_{m = 1}^N\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\big|u_m^{(k - 1)} - u_m^* \big|\Big\|_{L^2(\Omega)}^2.
\label{3.10}
\end{align}
Combining (\ref{3.9}) and (\ref{3.10}) gives
\begin{multline}
\sum_{m = 1}^N
\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta (u_m^{(k)} - u_m^*) - c({\bf x})\sum_{n = 1}^N s_{mn} (u_n^{(k)} - u_n^*)\Big]\Big\|^2_{L^2(\Omega)}
\\
\leq
A \sum_{m = 1}^N\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\big|u_m^{(k-1)} - u_m^* \big|\Big\|_{L^2(\Omega)}^2.
\label{3.1212}
\end{multline}
\noindent {\bf Step 3.} {\it Estimate the left hand side of (\ref{3.1212}).}
Using the inequality $(a - b)^2 \geq a^2/2 - 2 b^2,$ we have
\begin{multline}
\sum_{m = 1}^N
\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Big[\Delta (u_m^{(k)} - u_m^*) - c({\bf x})\sum_{n = 1}^N s_{mn} (u_n^{(k)} - u_n^*)\Big]\Big\|^2_{L^2(\Omega)}
\geq
\sum_{m = 1}^N\frac{1}{2}\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Delta (u_m^{(k)} - u_m^*)\Big\|^2_{L^2(\Omega)}
\\
-2 \sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}c({\bf x})\sum_{n = 1}^N s_{mn} (u_n^{(k)} - u_n^*)\Big\|^2_{L^2(\Omega)}.
\label{3.11}
\end{multline}
Applying the Carleman estimate (\ref{33}) in Corollary \ref{Col 3.1} to the functions $u_m^{(k)} - u_m^*$, $m = 1, \dots, N$, we obtain
\begin{equation}
\sum_{m = 1}^N\frac{1}{2}\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Delta (u_m^{(k)} - u_m^*)\Big\|^2_{L^2(\Omega)}
\geq
C \lambda^3 \sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}(u_m^{(k)} - u_m^*)\Big\|^2_{L^2(\Omega)}.
\label{3.12}
\end{equation}
Fix $\lambda \geq \lambda_0$ where $\lambda_0$ is as in Corollary \ref{Col 3.1}.
It follows from (\ref{3.11}) and (\ref{3.12}) that
\begin{multline}
\sum_{m = 1}^N\frac{1}{2}\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}\Delta (u_m^{(k)} - u_m^*)\Big\|^2_{L^2(\Omega)}
-2 \sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}c({\bf x})\sum_{n = 1}^N s_{mn} (u_n^{(k)} - u_n^*)\Big\|^2_{L^2(\Omega)}
\\
\geq
C \lambda^3 \sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta}r^\beta({\bf x})}(u_m^{(k)} - u_m^*)\Big\|^2_{L^2(\Omega)}.
\label{3.13}
\end{multline}
Combining (\ref{3.1212}), (\ref{3.11}) and (\ref{3.13}) gives
\[
\sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}(u_m^{(k)} - u_m^*)\Big\|^2_{L^2(\Omega)}
\leq
\frac{A}{C \lambda^3} \sum_{m = 1}^N\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}(u_m^{(k - 1)} - u_m^*) \Big\|_{L^2(\Omega)}^2.
\]
By induction,
we have
\begin{equation*}
\sum_{m = 1}^N \Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}(u_m^{(k)} - u_m^*)\Big\|^2_{L^2(\Omega)}
\leq
\Big[\frac{A}{C \lambda^{3}}\Big]^{k - 1} \sum_{m = 1}^N\Big\| e^{\lambda b^{-\beta} r^\beta({\bf x})}(u_m^{(1)} - u_m^*) \Big\|_{L^2(\Omega)}^2.
\end{equation*}
Replacing $A/C$ by the generous constant $C$, we have proved the estimate \eqref{main est}.
The convergence of $p^{(k)}$ to $p^*$ as $k \to \infty$ follows directly from \eqref{main est}.
\end{proof}
\begin{remark}
The technique of using the Carleman estimate to prove Theorem \ref{main thm} is similar to the one in \cite{BAUDOUIN:SIAMNumAna:2017}, in which a coefficient inverse problem for hyperbolic equations was considered.
This technique is also applicable to an inverse source problem for nonlinear parabolic equations \cite{Boulakia:preprint2019} with boundary and additional internal measurements.
\end{remark}
\begin{remark}
The convergence of $\{p^{(k)}\}_{k \geq 1}$ to the true solution to the inverse problem in Theorem \ref{main thm} is numerically confirmed in Section \ref{sec num}. See also Figures \ref{fig m1 error}--\ref{fig m4 error}.
\end{remark}
\section{Numerical implementation} \label{sec num}
For simplicity, we solve the inverse problem in the case $d = 2$.
\subsection{The forward problem}
We solve the forward problem of Problem \ref{ISP} as follows.
Let $R_1 > R > 0$ be two positive numbers.
Define the domains
\[
\Omega_1 = (-R_1, R_1)^2
\quad \mbox{and }
\quad \Omega = (-R, R)^2.
\]
We approximate (\ref{main eqn}), defined on $\mathbb{R}^d \times (0, T)$, by the following problem defined on $\Omega_1 \times (0, T)$:
\begin{equation}
\left\{
\begin{array}{rcll}
c({\bf x})u_t({\bf x}, t) &=& \Delta u({\bf x}, t) + q(u({\bf x}, t)) &{\bf x} \in \Omega_1, t \in (0, T),\\
u({\bf x},0) &=& p({\bf x}) & {\bf x} \in \Omega_1,\\
u({\bf x}, t) &=& 0 & {\bf x} \in \partial \Omega_1, t \in [0, T].
\end{array}
\right.
\label{main eqn in G}
\end{equation}
In our numerical tests,
the function $c$ is given by
\begin{multline*}
c(x, y) = 1 + \frac{1}{30}
\Big[3(1-3x)^2e^{-9x^2 - (3y+1)^2}
\\
- 10(3x/5 - 27x^3 - 243y^5)e^{-9x^2-9y^2}
- \frac{1}{3}e^{-(3x+1)^2 - 9y^2}\Big]
\label{ctrue}
\end{multline*}
for ${\bf x} = (x, y) \in \Omega.$
The range of $c$ is $[0.8, 1.25]$; in particular, $c$ is not a small perturbation of the constant function $1$.
We solve (\ref{main eqn in G}) by the finite difference method using the explicit scheme.
The data $f({\bf x}, t) = u({\bf x}, t)$ and $g({\bf x}, t) = \partial_{\nu} u({\bf x}, t)$ on $\partial \Omega \times [0, T]$ can be extracted easily.
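As an illustration, the explicit scheme described above can be sketched as follows. This is a minimal sketch, not the authors' code: the functions \verb|c|, \verb|q|, \verb|p| are supplied by the user, the grid parameters are illustrative choices, and the time step must satisfy the usual stability restriction $\Delta t \lesssim \min(c)\, h^2/4$.

```python
import numpy as np

def solve_forward(c, q, p, R1=6.0, T=0.1, nx=41, nt=101):
    """Explicit finite-difference scheme for c(x, y) u_t = Laplacian(u) + q(u)
    on Omega_1 = (-R1, R1)^2 with zero Dirichlet boundary data.
    Grid sizes are illustrative; dt must satisfy dt <= min(c) h^2 / 4."""
    h = 2 * R1 / (nx - 1)
    dt = T / (nt - 1)
    xs = np.linspace(-R1, R1, nx)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    C = c(X, Y)       # samples of the coefficient c
    u = p(X, Y)       # initial condition u(., 0) = p
    for _ in range(nt - 1):
        lap = np.zeros_like(u)
        # 5-point Laplacian at interior nodes
        lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:]
                           + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / h**2
        u = u + dt * (lap + q(u)) / C
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0  # Dirichlet data
    return u
```

With this in hand, the boundary data are read off by restricting \verb|u| to the boundary nodes at each time step.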
In the next subsection, we discuss our choice of $\{\Psi_n\}_{n \geq 1}$ and the number $N$ in Section \ref{sec 2.1} and the truncation in \eqref{Fourier u}.
\subsection{A special orthonormal basis $\{\Psi_n\}_{n \geq 1}$ of $L^2(0, T)$ and the choice of the cut-off number $N$} \label{basis MK}
We will employ a special basis of $L^2(0, T)$.
For each $n = 1, 2, \dots,$ set $\phi_n(t) = (t-T/2)^{n - 1}\exp(t-T/2)$.
The set $\{\phi_n\}_{n = 1}^{\infty}$ is complete in $L^2(0, T).$
Applying the Gram--Schmidt orthonormalization process to this set, we obtain an orthonormal basis of $L^2(0, T)$, denoted by $\{\Psi_n\}_{n = 1}^{\infty}$.
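A discretized version of this construction can be sketched as follows. This is a sketch under the assumption that the $L^2(0, T)$ inner product is approximated by trapezoidal quadrature; the QR factorization of the weighted samples performs the Gram--Schmidt step.

```python
import numpy as np

def special_basis(T=1.5, N=10, nt=2001):
    """Discrete Gram-Schmidt orthonormalization of
    phi_n(t) = (t - T/2)^(n-1) * exp(t - T/2), n = 1, ..., N, in L^2(0, T).
    Returns grid t, quadrature weights w, and samples Psi (nt x N)."""
    t = np.linspace(0.0, T, nt)
    w = np.full(nt, T / (nt - 1))
    w[0] *= 0.5
    w[-1] *= 0.5  # trapezoidal quadrature weights
    Phi = np.stack([(t - T / 2)**n * np.exp(t - T / 2) for n in range(N)],
                   axis=1)
    # QR of sqrt(W) Phi is Gram-Schmidt in the weighted inner product
    Q, _ = np.linalg.qr(np.sqrt(w)[:, None] * Phi)
    Psi = Q / np.sqrt(w)[:, None]  # samples of the orthonormal Psi_n
    return t, w, Psi
```

By construction, the columns of \verb|Psi| are orthonormal in the discrete inner product $\langle f, g\rangle \approx \sum_k w_k f(t_k) g(t_k)$.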
This basis was originally introduced to solve the electrical impedance tomography problem with partial data in \cite{Klibanov:jiip2017}.
Since then, this basis has been widely used to solve a variety of inverse problems.
For instance, in \cite{Nguyens:jiip2019}, we employ this basis to solve an inverse source problem and a coefficient inverse problem for linear parabolic equations;
in \cite{NguyenLiKlibanov:IPI2019}, this special basis was used to solve an inverse source problem for elliptic equations;
in \cite{KlibanovNguyen:ip2019}, we solve the problem of inverting the Radon transform with incomplete data;
in \cite{KlibanovAlexeyNguyen:SISC2019}, we solve an inverse source problem for the full radiative transport equation.
The paper most closely related to the current one is \cite{LiNguyen:IPSE2019}, in which the second author and his collaborator employed this basis to recover the initial condition for linear parabolic equations.
We next discuss the choice of $N$ in \eqref{Fourier u}.
Fix a positive integer $N_{\bf x}$.
On $\overline \Omega = [-R, R]^2,$ we arrange an $N_{\bf x} \times N_{\bf x}$ uniform grid
\[
\mathcal{G} = \Big\{(x_i, y_j): x_i = -R + (i - 1)h, y_j = -R + (j - 1)h, 1 \leq i, j \leq N_{\bf x}\Big\}
\]
where $h = 2R/(N_{\bf x} - 1)$ is the step size.
In our computations, we set $R_1 = 6$, $R = 1$, $T = 1.5$ and $N_{\bf x} = 80$.
To solve Problem \ref{ISP}, we need to compute the discrete values of the function $u$ on the grid $\mathcal{G}.$
\begin{figure}[h!]
\begin{center}
\subfloat[$N = 15$]{\includegraphics[width=0.3\textwidth]{ChooseN15}} \hfill
\subfloat[$N = 25$]{\includegraphics[width=0.3\textwidth]{ChooseN25}} \hfill
\subfloat[$N = 35$]{\includegraphics[width=0.3\textwidth]{ChooseN35}}
\subfloat[$N = 15$]{\includegraphics[width=0.3\textwidth]{ChooseN15CrossSection}} \hfill
\subfloat[$N = 25$]{\includegraphics[width=0.3\textwidth]{ChooseN25CrossSection}} \hfill
\subfloat[$N = 35$]{\includegraphics[width=0.3\textwidth]{ChooseN35CrossSection}}
\caption{\label{fig choose N}
The comparison of $f(x, y = R, t)$ and its partial Fourier sum $\sum_{n = 1}^N f_n(x, y = R)\Psi_n(t)$ on $\{(x, y = R) \in \partial \Omega\}.$
The first row displays the graphs of the absolute differences of $f(x, R, t)$ and $\sum_{n = 1}^N f_n(x, R) \Psi_n(t)$.
The horizontal axis indicates $x$ and the vertical axis indicates $t$.
It is evident that the larger $N$ is, the smaller the difference becomes.
The second row shows the true data $f(x, y = R, T)$ (solid line) and its approximation $\sum_{n = 1}^N f_n(x, y = R) \Psi_n(T)$ (dash--dot line).
We observe that when $N = 35$, the two curves coincide.
}
\end{center}
\end{figure}
The first step in our method is to find an appropriate cut-off number $N$.
We do so as follows.
Consider the data on $\{(x, y = R) \in \partial \Omega\}$, the top part of $\partial \Omega$; that is, $f(x, y = R, t) = u_{\rm true}(x, y = R, t)$ in Test 1 in Section \ref{sec num example}.
We compare the function $f(x, y = R, t)$ with the partial sum $\sum_{n = 1}^N f_n(x, y= R) \Psi_n(t)$, where $f_n(x, y=R)$ is computed by (\ref{boundary conditions}).
We then choose $N$ such that the function \[e_N(x, t) = \Big|f(x, y = R, t) - \sum_{n = 1}^N f_n(x, y = R) \Psi_n(t)\Big|\] is sufficiently small.
We use the same number $N$ for all numerical tests.
In this paper, $N = 35$; see Figure \ref{fig choose N} for an illustration.
\begin{remark}
In our computations, when the cut-off number $N$ is $ 15$ or $25$, the quality of the numerical results is poor.
When $N = 35$, we obtain good numerical results.
Increasing $N$ beyond $35$ does not noticeably improve the quality of the computed results.
\label{rem N}
\end{remark}
\begin{remark}
In this numerical section, we choose the Carleman weight function $e^{\lambda b^{-\beta} |{\bf x} - {\bf x}_0|^{\beta}}$ when defining $J^{(k)}$, $k \geq 0$, where $\lambda = 40$ and $\beta = 10$. The point ${\bf x}_0$ is $(0, 1.5)$ and $b = 5$.
These values conflict with the theoretical requirement that $\lambda b^{-\beta}$ be large.
However, in practice, the Carleman weight function with these values of $\lambda$ and $\beta$ already helps provide good numerical solutions to Problem \ref{ISP}.
We numerically observe that the weight function blows up when $\lambda b^{-\beta} \gg 1$, causing unnecessary numerical difficulties.
\end{remark}
We next present the key step in the implementation of the inverse problem.
\subsection{Computing the vector-valued function $(u_m)_{m = 1}^N$}
Recall that $(u_m^{(0)}(x, y))_{m = 1}^N$ minimizes $J^{(0)}$ on $H$. Similarly to the argument in the first step of the proof of Theorem \ref{main thm}, for all $h \in H_0$, see the definition of $H_0$ in (\ref{H0}), by the variational principle, we have
\begin{equation}
\sum_{m = 1}^N
\Big\langle e^{\lambda b^{-\beta} r^{\beta}({\bf x})}\Big[ \Delta u_m^{(0)} - c({\bf x})\sum_{n = 1}^N s_{mn} u_n^{(0)} \Big],
e^{\lambda b^{-\beta} r^{\beta}({\bf x})}\Big[ \Delta h_m - c({\bf x})\sum_{n = 1}^Ns_{mn} h_n\Big] \Big\rangle_{L^2(\Omega)}
= 0.
\label{5.2}
\end{equation}
For any $u \in H$,
we associate the values $\{u_m(x_i, y_j): 1 \leq m \leq N, 1 \leq i, j \leq N_{\bf x}\}$ with an $N_{\bf x}^2 N$ dimensional vector $\frak{u}$ whose entries are given by
\begin{equation}
\frak{u}_\frak{i} = u_m(x_i, y_j)
\label{u lineup}
\end{equation}
where
\begin{equation}
\frak{i} = (i - 1)N_{\bf x} N + (j - 1) N + m
\quad \mbox{for all } 1 \leq i, j \leq N_{\bf x}, 1 \leq m \leq N.
\label{lineup}
\end{equation}
The range of the index $\frak{i}$ is $\{1, \dots, N_{\bf x}^2 N\}.$
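The map (\ref{lineup}) and its inverse can be sketched as follows, with 1-based indices as in the text; the helper names are hypothetical.

```python
def lineup_index(i, j, m, Nx, N):
    """The 1-based flat index (i-1)*Nx*N + (j-1)*N + m of the triple
    (i, j, m), as in the displayed formula."""
    return (i - 1) * Nx * N + (j - 1) * N + m

def inverse_lineup(fi, Nx, N):
    """Recover the triple (i, j, m) from the flat index fi."""
    r = fi - 1
    m = r % N + 1            # mode index varies fastest
    j = (r // N) % Nx + 1    # then the second spatial index
    i = r // (N * Nx) + 1    # the first spatial index varies slowest
    return i, j, m
```

The two functions are mutually inverse bijections between $\{1, \dots, N_{\bf x}\}^2 \times \{1, \dots, N\}$ and $\{1, \dots, N_{\bf x}^2 N\}$.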
The ``line-up" finite difference form of (\ref{5.2}) is
\begin{equation}
\langle (\mathcal L - \mathcal S) \frak{u}^{(0)}, (\mathcal L - \mathcal S) \frak{h} \rangle = 0
\label{5.5}
\end{equation}
where $\frak{u}^{(0)}$ and $\frak{h}$ are the line-up versions of $(u_m^{(0)})_{m = 1}^N$ and $(h_m)_{m = 1}^N$ respectively.
Here, $\langle \cdot, \cdot \rangle$ is the classical Euclidean inner product.
In (\ref{5.5})
\begin{enumerate}
\item the $N_{\bf x}^2 N \times N_{\bf x}^2 N$ matrix $\mathcal L$ is defined as
\begin{enumerate}
\item $(\mathcal L)_{\frak{i} \frak{i}} = -\frac{4 e^{\lambda b^{-\beta} r^{\beta}(x_i, y_j)}}{h^2}$ for $\frak{i}$ as in (\ref{lineup}), $2 \leq i, j \leq N_{\bf x} - 1$, $1 \leq m \leq N$;
\item $(\mathcal L)_{\frak{i} \frak{j}} = \frac{e^{\lambda b^{-\beta} r^{\beta}(x_i, y_j)}}{h^2}$ for $\frak{j} = (i \pm 1 - 1)N_{\bf x} N + (j - 1) N + m$ or $\frak{j} = (i - 1)N_{\bf x} N + (j \pm 1 - 1) N + m$, for $2 \leq i, j \leq N_{\bf x} - 1$, $1 \leq m \leq N$;
\item the other entries are $0$.
\end{enumerate}
\item the $N_{\bf x}^2 N \times N_{\bf x}^2 N$ matrix $\mathcal S$ is defined as
$(\mathcal S)_{\frak{i}\frak{j}} = e^{\lambda b^{-\beta} r^{\beta}(x_i, y_j)}c(x_i, y_j) s_{m n}$ for $\frak{i}$ as in (\ref{lineup}) and $\frak{j} = (i - 1)N_{\bf x} N + (j - 1) N + n$ for $2 \leq i, j \leq N_{\bf x} - 1$, $1 \leq m, n \leq N$.
The other entries are $0$.
\end{enumerate}
On the other hand, since $(u^{(0)}_m)_{m = 1}^N$ satisfies the boundary constraints (\ref{boundary conditions}), we have
\begin{equation}
\mathcal D \frak{u}^{(0)} = \frak{f} \quad
\mbox{and }
\quad \mathcal N \frak{u}^{(0)} = \frak{g}
\label{5.7}
\end{equation}
where
\begin{enumerate}
\item The $N_{\bf x}^2 N \times N_{\bf x}^2 N$ matrix $\mathcal D$ is defined as $\mathcal D_{\frak{i}\frak{i}} = 1$ for $\frak{i}$ as in (\ref{lineup}), $i \in \{1, N_{\bf x}\},$ $1 \leq j \leq N_{\bf x}$ or $2 \leq i \leq N_{\bf x} - 1$, $j \in \{1, N_{\bf x}\}.$
The other entries are $0$.
\item The $N_{\bf x}^2 N \times N_{\bf x}^2 N$ matrix $\mathcal N$ is defined as
\begin{enumerate}
\item $\mathcal N_{\frak{i}\frak{i}} = \frac{1}{h}$ for $\frak{i}$ as in (\ref{lineup}), $i \in \{1, N_{\bf x}\},$ $1 \leq j \leq N_{\bf x}$ or $2 \leq i \leq N_{\bf x} - 1$, $j \in \{1, N_{\bf x}\}$, $1 \leq m \leq N;$
\item $\mathcal N_{\frak{i}\frak{j}} = -\frac{1}{h}$ for $\frak{i}$ as in (\ref{lineup}) and $\frak{j} = (i + 1- 1)N_{\bf x} N + (j - 1) N + m$, $i = 1,$ $1 \leq j \leq N_{\bf x}$, $1 \leq m \leq N;$
\item $\mathcal N_{\frak{i}\frak{j}} = -\frac{1}{h}$ for $\frak{i}$ as in (\ref{lineup}) and $\frak{j} = (i - 1- 1)N_{\bf x} N + (j - 1) N + m$, $i = N_{\bf x},$ $1 \leq j \leq N_{\bf x}$, $1 \leq m \leq N;$
\item $\mathcal N_{\frak{i}\frak{j}} = -\frac{1}{h}$ for $\frak{i}$ as in (\ref{lineup}) and $\frak{j} = (i - 1)N_{\bf x} N + (j + 1 - 1) N + m$, $1 \leq i \leq N_{\bf x}$, $j = 1$, $1 \leq m \leq N;$
\item $\mathcal N_{\frak{i}\frak{j}} = -\frac{1}{h}$ for $\frak{i}$ as in (\ref{lineup}) and $\frak{j} = (i - 1)N_{\bf x} N + (j - 1 - 1) N + m$, $2 \leq i \leq N_{\bf x} -1 $, $j = N_{\bf x},$ $1 \leq m \leq N;$
\item The other entries are $0$.
\end{enumerate}
\item The $N_{\bf x}^2 N$ dimensional vector $\frak{f}$ is defined as $\frak{f}_{\frak{i}} = f_m(x_i, y_j)$ for $\frak{i}$ as in (\ref{lineup}), $i \in \{1, N_{\bf x}\},$ $1 \leq j \leq N_{\bf x}$, $1 \leq m \leq N$ or $2 \leq i \leq N_{\bf x} - 1$, $j \in \{1, N_{\bf x}\}.$
\item The $N_{\bf x}^2 N$ dimensional vector $\frak{g}$ is defined as $\frak{g}_{\frak{i}} = g_m(x_i, y_j)$ for $\frak{i}$ as in (\ref{lineup}), $i \in \{1, N_{\bf x}\},$ $1 \leq j \leq N_{\bf x}$, $1 \leq m \leq N$ or $2 \leq i \leq N_{\bf x} - 1$, $j \in \{1, N_{\bf x}\}.$
\end{enumerate}
Solving (\ref{5.5})--(\ref{5.7}) by the least squares method with the Matlab routine ``lsqlin", we obtain the vector $\frak{u}^{(0)}$ and hence the initial solution $(u_m^{(0)}(x_i, y_j))_{m = 1}^N$ for $1 \leq i, j \leq N_{\bf x}$.
\begin{remark}
In computation, storing the matrices above as dense arrays is impractical due to their large size, $N_{\bf x}^2 N \times N_{\bf x}^2 N$ with $N_{\bf x} = 80$ and $N = 35$.
Since most of their entries are 0, we store them as sparse matrices instead.
Using sparse matrices also significantly reduces the computational time.
\end{remark}
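To illustrate the sparse assembly, the following sketch builds a weighted 5-point Laplacian in coordinate (COO) format, in the spirit of the matrix $\mathcal L$. It is a simplified sketch: the Fourier-mode index $m$ is omitted, and \verb|weight| stands in for the Carleman weight $e^{\lambda b^{-\beta} r^{\beta}(x_i, y_j)}$.

```python
import numpy as np
from scipy.sparse import coo_matrix

def weighted_laplacian(Nx, h, weight):
    """Sparse 5-point Laplacian on an Nx x Nx grid with a per-node weight,
    with rows only for interior nodes; 0-based flat index k = i*Nx + j."""
    rows, cols, vals = [], [], []
    for i in range(1, Nx - 1):
        for j in range(1, Nx - 1):
            k = i * Nx + j
            wk = weight[i, j]
            rows += [k] * 5
            # center, left/right neighbors (j +- 1), up/down neighbors (i +- 1)
            cols += [k, k - 1, k + 1, k - Nx, k + Nx]
            vals += [-4.0 * wk / h**2] + [wk / h**2] * 4
    n = Nx * Nx
    return coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
```

Only the $5$ nonzeros per interior row are stored, instead of $N_{\bf x}^2$ entries per row in a dense format.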
We next compute the vector-valued function $(u_m^{(k)})_{m = 1}^N$, $k \geq 1$, assuming by induction that $(u_m^{(k-1)})_{m = 1}^N$ is known.
By an argument very similar to the derivation of (\ref{5.5})--(\ref{5.7}), the vector $\frak{u}^{(k)}$, the line-up version of $(u_m^{(k)}(x_i, y_j))_{m = 1}^N$ with $1 \leq i, j \leq N_{\bf x}$, satisfies the equations
\begin{equation}
(\mathcal L - \mathcal S)^T(\mathcal L - \mathcal S) \frak{u}^{(k)} = -(\mathcal L - \mathcal S)^T\frak{q}^{(k-1)}.
\label{5.8}
\end{equation}
and
\begin{equation}
\mathcal D \frak{u}^{(k)} = \frak{f} \quad
\mbox{and }
\quad \mathcal N \frak{u}^{(k)} = \frak{g}
\label{5.9}
\end{equation}
where $\frak{q}^{(k-1)}$ is the line-up version of $\big(q_m(P(u_1^{(k - 1)}(x, y)), \dots, P(u_N^{(k - 1)}(x, y)))\big)_{m = 1}^N$.
To find $\frak{u}^{(k)}$, we solve (\ref{5.8})--(\ref{5.9}) by the least squares method with the Matlab routine ``lsqlin".
The values $(u_m^{(k)}(x_i, y_j))_{m = 1}^N$ follow.
We then find $u(x, y, t)$ via (\ref{Fourier u}).
The desired solution $p(x, y)$ to Problem \ref{ISP} is set to be $u(x, y, 0)$.
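The structure of this iteration can be sketched as follows. This is a simplified sketch, not the authors' solver: Matlab's ``lsqlin" is replaced by stacking all equations into a single unweighted least-squares system, and the helper names are hypothetical.

```python
import numpy as np

def iterate_lineup(L, S, D, Nm, f, g, q_of_u, u0, iters=5):
    """Fixed-point iteration: at each step solve, in the least-squares sense,
    the stacked system (L - S) u = -q(u_prev), D u = f, N u = g.
    In practice the boundary rows may need to be weighted or imposed
    as hard constraints, as lsqlin does."""
    A = L - S
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        M = np.vstack([A, D, Nm])
        rhs = np.concatenate([-q_of_u(u), f, g])
        u, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return u
```

When the underlying map is a contraction, as guaranteed by Theorem \ref{main thm} for $\lambda$ large, the iterates converge exponentially.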
\begin{remark}
In theory, we need to apply the cut-off function $P$, see (\ref{cutoff function}); this is needed only to prove Theorem \ref{main thm}.
However, in computation, we obtain good numerical results without applying the cut-off technique.
This can be explained by taking $M$ sufficiently large, so that the cut-off is never active.
\end{remark}
We summarize the procedure to find $p$ in Algorithm \ref{alg}.
\begin{algorithm}[h!]
\caption{\label{alg}The procedure to solve Problem \ref{ISP}}
\begin{algorithmic}[1]
\State\,
Compute $\{\Psi_n\}_{n = 1}^N$ as in Section \ref{basis MK}.
Choose $N = 35$, see Figure \ref{fig choose N} and Remark \ref{rem N}.
\State\, Compute matrices $\mathcal{L}, \mathcal S, \mathcal D$ and $\mathcal N.$ Find the line up versions $\frak{f}$ and $\frak{g}$ of the data $f_m(x_i, y_j)$ and $g_m(x_i, y_j)$ for $(x_i, y_j) \in \mathcal G \cap \partial \Omega$, $1 \leq m \leq N$.
\State\, \label{Step 3} Solve (\ref{5.5})--(\ref{5.7}) by the least squares method. The solution is denoted by $\frak{u}^{(0)}$.
Compute $u^{(0)}_m(x_i, y_j)$, $1 \leq i, j \leq N_{\bf x}$, $1 \leq m \leq N$ using
$
u^{(0)}_m(x_i, y_j) = (\frak{u}^{(0)})_{\frak{i}}
$ with $\frak{i}$ as in (\ref{lineup}).
\State\, Set the initial solution $p^{(0)} = \sum_{n = 1}^N u^{(0)}_n(x_i, y_j)\Psi_n(0).$
\For{$k = 1$ to $5$}
\State Find $\frak{q}^{(k-1)}$, the line-up version of $\big(q_m(P(u_1^{(k-1)}(x_i, y_j)), \dots, P(u_N^{(k-1)}(x_i, y_j)))\big)_{m = 1}^N$, $1 \leq i, j \leq N_{\bf x}$, in the same manner as (\ref{u lineup}) and (\ref{lineup}).
\State\, Solve (\ref{5.8})--(\ref{5.9}) by the least squares method. The solution is denoted by $\frak{u}^{(k)}$.
Compute $u^{(k)}_m(x_i, y_j)$, $1 \leq i, j \leq N_{\bf x}$, $1 \leq m \leq N$ using
$
u^{(k)}_m(x_i, y_j) = (\frak{u}^{(k)})_{\frak{i}}
$ with $\frak{i}$ as in (\ref{lineup}).
\State\, \label{Step 8} Set the updated solution $p^{(k)} = \sum_{n = 1}^N u^{(k)}_n(x_i, y_j)\Psi_n(0).$
\State\, \label{error estimate} Define the recursive error at step $k$ as $\|p^{(k)} - p^{(k - 1)}\|_{L^\infty(\Omega)}$.
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{remark}
We numerically observe that $\|p^{(5)} - p^{(4)}\|_{L^\infty(\Omega)}$ is sufficiently small in all tests in Section \ref{sec num example};
i.e., our iterative scheme converges fast.
Iterating the loop in Algorithm \ref{alg} five (5) times is enough to obtain good numerical results.
Therefore, we stop the iterative process when $k = 5.$
\end{remark}
\subsection{Numerical examples} \label{sec num example}
In this section, we show four (4) numerical results.
\noindent{\bf Test 1.}
The true source function is given by
\[
p_{\rm true} = \left\{
\begin{array}{ll}
8 & x^2 + (y - 0.3)^2 < 0.45^2,\\
0 & \mbox{otherwise.}
\end{array}
\right.
\]
The nonlinearity $q$ is given by
\[
q(s) = s(1 - s) \quad s \in \mathbb{R}.
\]
In this case, the parabolic equation in (\ref{main eqn}) is the Fisher equation.
The true and computed source functions $p$ are displayed in Figure \ref{example 1}.
The graph of this source function exhibits one large inclusion with contrast $8$.
\begin{figure}[h!]
\begin{flushleft}
\subfloat[]{\includegraphics[width = 0.3\textwidth]{u_true_1}}
\quad
\subfloat[\label{m1 init20}]{\includegraphics[width = 0.3\textwidth]{U_init_Noise20_1}} ~ \quad
\subfloat[\label{m1 p comp}]{\includegraphics[width = 0.3\textwidth]{U_comp_Noise20_1}}\quad
\subfloat[\label{m1 cross}]{\includegraphics[width = 0.3\textwidth]{CrossSection_Noise20_1}}
\quad
\subfloat[\label{fig m1 error}]{\includegraphics[width = 0.3\textwidth]{error_Noise20_1}}
\caption{\label{example 1} Test 1. The reconstruction of the source function. (a) The function $p_{\rm true}$
(b) The initial solution $p^{(0)}$ obtained by Step \ref{Step 3} in Algorithm \ref{alg}.
(c) The function $p^{(5)}$ obtained by Step \ref{Step 8} in Algorithm \ref{alg}.
(d) The true source function (solid), the initial solution in (b) (dot) and the computed source function (dash-dot) along the vertical line shown in (c).
(e) The curve $\|p^{(k)} - p^{(k - 1)}\|_{L^{\infty}(\Omega)},$ $k = 1, \dots, 5.$
The noise level of the data in this test is $20\%$.
}
\end{flushleft}
\end{figure}
Our method to find the initial solution works very well in this case.
One can see in Figure \ref{m1 init20} that by solving the system (\ref{5.5})--(\ref{5.7}), we obtain the initial solution that clearly indicates the position of the inclusion.
The value of the reconstructed function inside the inclusion is somewhat acceptable and will improve after several iterations, see Figure \ref{m1 cross}.
The reconstructed function $p_{\rm comp} = p^{(5)}$ is a good approximation of the true function $p_{\rm true}$, see Figures \ref{m1 p comp} and \ref{m1 cross}.
It is evident from Figure \ref{fig m1 error} that our method converges fast.
The reconstructed maximal value inside the inclusion is 7.202 (relative error 9.98\%).
\noindent{\bf Test 2.} We test the case of multiple inclusions, each of which has a different value.
The true source function $p_{\rm true}$ is given by
\[
p_{\rm true}(x, y) = \left\{
\begin{array}{ll}
12 & (x - 0.5)^2 + (y - 0.5)^2 < 0.35^2,\\
10& (x + 0.5)^2 + (y + 0.5)^2 < 0.35^2,\\
14& (x - 0.5)^2 + (y + 0.5)^2 < 0.35^2,\\
9& (x + 0.5)^2 + (y - 0.5)^2 < 0.35^2,\\
0 &\mbox{otherwise.}
\end{array}
\right.
\]
In this test, the nonlinearity $q$ is given by
\[
q(s) = -s(1 - \sqrt{|s|}) \quad s \in \mathbb{R}.
\]
The true and computed source functions $p$ are displayed in Figure \ref{example 2}.
\begin{figure}[h!]
\begin{flushleft}
\subfloat[]{\includegraphics[width = 0.3\textwidth]{u_true_2}}
\quad
\subfloat[\label{m2 init20}]{\includegraphics[width = 0.3\textwidth]{U_init_Noise20_2}} ~ \quad
\subfloat[\label{m2 p comp}]{\includegraphics[width = 0.3\textwidth]{U_comp_Noise20_2}}\quad
\subfloat[\label{m2 cross}]{\includegraphics[width = 0.3\textwidth]{CrossSection_Noise20_2}}
\quad
\subfloat[\label{fig m2 error}]{\includegraphics[width = 0.3\textwidth]{error_Noise20_2}}
\caption{\label{example 2} Test 2. The reconstruction of the source function. (a) The function $p_{\rm true}$
(b) The initial solution $p^{(0)}$ obtained by Step \ref{Step 3} in Algorithm \ref{alg}.
(c) The function $p^{(5)}$ obtained by Step \ref{Step 8} in Algorithm \ref{alg}.
(d) The true source function (solid), the initial solution (dot) and the computed source function (dash-dot) along the diagonal line shown in (c).
(e) The curve $\|p^{(k)} - p^{(k - 1)}\|_{L^{\infty}(\Omega)},$ $k = 1, \dots, 5.$
The noise level of the data in this test is $20\%$.
}
\end{flushleft}
\end{figure}
In this test, we successfully recover all four inclusions.
On the other hand, the value of $p$ in each inclusion is high, making the true solution far from the constant background $p_0 = 0$.
Hence, $p_0 = 0$ might not serve as a good initial guess.
Our method to find the initial solution in Step \ref{Step 3} in Algorithm \ref{alg} is somewhat effective, see Figure \ref{m2 init20}.
The computed images of the initial solution do not completely separate the inclusions.
Both computed values and images of the inclusions improve with iterations.
The computed source function $p_{\rm comp} = p^{(5)}$ is acceptable, see Figure \ref{m2 p comp}.
Figure \ref{m2 cross} shows that the constructed values in the inclusions are good. The procedure converges very fast, see Figure \ref{fig m2 error}.
The true maximal value of the upper left inclusion is 9 and the computed one is 8.992 (relative error 0.0\%).
The true maximal value of the upper right inclusion is 12 and the computed one is 13.4 (relative error 11.67\%).
The true maximal value of the lower left inclusion is 10 and the computed one is 10.13 (relative error 1.3\%).
The true maximal value of the lower right inclusion is 14 and the computed one is 14.86 (relative error 6.14\%).
\noindent {\bf Test 3.}
The true source function is given by
\[
p_{\rm true} = \left\{
\begin{array}{ll}
1 & 0.2^2 < x^2 + y^2 < 0.8^2,\\
0 & \mbox{otherwise.}
\end{array}
\right.
\]
The nonlinearity is given by
\[
q(s) = s^2 \quad s \in \mathbb{R}.
\]
The support of the function $p_{\rm true}$ is ring-like.
This test is interesting due to the presence of the void and the fast growth of the nonlinearity.
The true and computed source functions $p$ are displayed in Figure \ref{example 3}.
\begin{figure}[h!]
\begin{flushleft}
\subfloat[]{\includegraphics[width = 0.3\textwidth]{u_true_3}}
\quad
\subfloat[\label{m3 init20}]{\includegraphics[width = 0.3\textwidth]{U_init_Noise20_3}} \quad
\subfloat[\label{m3 p comp}]{\includegraphics[width = 0.3\textwidth]{U_comp_Noise20_3}}\quad
\subfloat[\label{m3 cross}]{\includegraphics[width = 0.3\textwidth]{CrossSection_Noise20_3}}
\quad
\subfloat[\label{fig m3 error}]{\includegraphics[width = 0.3\textwidth]{error_Noise20_3}}
\caption{\label{example 3} Test 3. The reconstruction of the source function. (a) The function $p_{\rm true}$
(b) The initial solution $p^{(0)}$ obtained by Step \ref{Step 3} in Algorithm \ref{alg}.
(c) The function $p^{(5)}$ obtained by Step \ref{Step 8} in Algorithm \ref{alg}.
(d) The true (solid), initial (dotted), and computed (dash-dot) source functions on the horizontal line in (c).
(e) The curve $\|p^{(k)} - p^{(k - 1)}\|_{L^{\infty}(\Omega)},$ $k = 1, \dots, 5.$ The noise level of the data in this test is $20\%$.
}
\end{flushleft}
\end{figure}
In this test, our method to find the initial solution in Step \ref{Step 3} in Algorithm \ref{alg} is only moderately effective.
The void in the initial solution $p^{(0)}$ is barely visible, see Figure \ref{m3 init20}.
Both the contrast and the void improve with the iterations.
The final reconstructed source function $p^{(5)}$ is satisfactory, see Figures \ref{m3 p comp} and \ref{m3 cross}.
The computed maximal value inside the ring is 1.094 (relative error = 9.4\%).
\noindent{\bf Test 4.} In this test, we identify two high contrast ``lines".
The true source function is given by
\[
p_{\rm true} = \left\{
\begin{array}{ll}
10 & \max\{|x|/4,\, 4|y - 0.6|\} < 0.9 \mbox{ and } |x| < 0.8,\\
8 & \max\{|x|/4,\, 4|y + 0.6|\} < 0.9 \mbox{ and } |x| < 0.8,\\
0 &\mbox{otherwise.}
\end{array}
\right.
\]
The nonlinearity is given by
\[
q(s) = - s^2, \quad s \in \mathbb{R}.
\]
The true and computed source functions $p$ are displayed in Figure \ref{example 4}.
\begin{figure}[h!]
\begin{flushleft}
\subfloat[]{\includegraphics[width = 0.3\textwidth]{u_true_4}}
\quad
\subfloat[\label{m4 init20}]{\includegraphics[width = 0.3\textwidth]{U_init_Noise20_4}} \quad
\subfloat[\label{m4 p comp}]{\includegraphics[width = 0.3\textwidth]{U_comp_Noise20_4}}\quad
\subfloat[\label{m4 cross}]{\includegraphics[width = 0.3\textwidth]{CrossSection_Noise20_4}}
\quad
\subfloat[\label{fig m4 error}]{\includegraphics[width = 0.3\textwidth]{error_Noise20_4}}
\caption{\label{example 4} Test 4. The reconstruction of the source function. (a) The function $p_{\rm true}$
(b) The initial solution $p^{(0)}$ obtained by Step \ref{Step 3} in Algorithm \ref{alg}.
(c) The function $p^{(5)}$ obtained by Step \ref{Step 8} in Algorithm \ref{alg}.
(d) The true and computed source functions on the line (dash-dot) in (c).
(e) The curve $\|p^{(k)} - p^{(k - 1)}\|_{L^{\infty}(\Omega)},$ $k = 1, \dots, 5.$
The noise level of the data in this test is $20\%$.
}
\end{flushleft}
\end{figure}
It is evident that Algorithm \ref{alg} provides a good computed source function.
The initial solution by Step \ref{Step 3} in Algorithm \ref{alg} is quite good although there is a ``negative" artifact between the two detected lines, see Figure \ref{m4 init20}. This artifact is reduced significantly with iteration.
We observe that the shape and contrasts of two lines are reconstructed very well, see Figures \ref{m4 p comp} and \ref{m4 cross}.
Our method converges fast, see Figure \ref{fig m4 error}.
The true maximal value of the source function in the upper line is 10 and the computed one is 9.714 (relative error 2.8\%).
The true maximal value of the source function in the lower line is 8 and the computed one is 8.041 (relative error 0.51\%).
\section{Concluding remarks} \label{sec remarks}
In this paper, we analytically and numerically solve the problem of recovering the initial condition of nonlinear parabolic equations.
The first step in our method is to derive a system of nonlinear elliptic PDEs whose solutions are the Fourier coefficients of the solution to the governing nonlinear parabolic equation.
We propose an iterative scheme to solve the system above.
Finding the initial solution for this iterative process is a part of our algorithm.
The convergence of this iterative method is rigorously proved.
We present several numerical results that confirm the theoretical analysis.
\noindent{\bf Acknowledgment:}
The authors are sincerely grateful to Michael V. Klibanov for many fruitful discussions that significantly improved the mathematical results and the presentation of this paper.
The work of the second author was supported by US Army Research Laboratory and US Army Research
Office grant W911NF-19-1-0044.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
| {
"timestamp": "2020-09-29T02:19:20",
"yymm": "1910",
"arxiv_id": "1910.05584",
"language": "en",
"url": "https://arxiv.org/abs/1910.05584",
"abstract": "We propose a new numerical method for the solution of the problem of the reconstruction of the initial condition of a quasilinear parabolic equation from the measurements of both Dirichlet and Neumann data on the boundary of a bounded domain. Although this problem is highly nonlinear, we do not require an initial guess of the true solution. The key in our method is the derivation of a boundary value problem for a system of coupled quasilinear elliptic equations whose solution is the vector function of the spatially dependent Fourier coefficients of the solution to the governing parabolic equation. We solve this problem by an iterative method. The global convergence of the system is rigorously established using a Carleman estimate. Numerical examples are presented.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "A convergent numerical method to recover the initial condition of nonlinear parabolic equations from lateral Cauchy data",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.976310526632796,
"lm_q2_score": 0.7248702821204019,
"lm_q1q2_score": 0.7076984868774331
} |
https://arxiv.org/abs/0706.4082 | Inf-sup estimates for the Stokes problem in a periodic channel | We derive estimates of the Babu\u{s}ka-Brezzi inf-sup constant $\beta$ for two-dimensional incompressible flow in a periodic channel with one flat boundary and the other given by a periodic, Lipschitz continuous function $h$. If $h$ is a constant function (so the domain is rectangular), we show that periodicity in one direction but not the other leads to an interesting connection between $\beta$ and the unitary operator mapping the Fourier sine coefficients of a function to its Fourier cosine coefficients. We exploit this connection to determine the dependence of $\beta$ on the aspect ratio of the rectangle. We then show how to transfer this result to the case that $h$ is $C^{1,1}$ or even $C^{0,1}$ by a change of variables. We avoid non-constructive theorems of functional analysis in order to explicitly exhibit the dependence of $\beta$ on features of the geometry such as the aspect ratio, the maximum slope, and the minimum gap thickness (if $h$ passes near the substrate). We give an example to show that our estimates are optimal in their dependence on the minimum gap thickness in the $C^{1,1}$ case, and nearly optimal in the Lipschitz case. | \section{Introduction}
Many problems of industrial and biological importance involve fluid
flow in narrow channels with moving boundaries
\cite{langlois,poz:intro}. Examples include the flow of oil in
journal bearings or between moving machine parts, the flow of air
between disk drive platters and read-write heads, or the flow of mucus
under a crawling gastropod \cite{snail}. A primary objective in all
these problems is to solve for the pressure required to maintain
incompressibility. Indeed, it is the pressure that determines the load
sustainable by a journal bearing, and that provides propulsion against
viscous drag forces in peristaltic locomotion. However, only the
\emph{gradient} of pressure enters directly into the Stokes or
Navier-Stokes equations;
thus, regardless of the method used to solve the
equations, the pressure must be determined via its gradient.
The fundamental fact that makes it possible to extract $p$ from
$\nabla p$ is that the gradient is an isomorphism from
$L^2_\#(\Omega)$, the space of mean-zero square integrable functions,
onto the subspace of linear functionals in $H^{-1}(\Omega)^2$ that
annihilate the divergence free vector fields $\mb{u}\in
H^1_0(\Omega)^2$; see Section~\ref{sec:prelim} below. The
inf-sup constant $\beta$ (or rather, its inverse)
gives a bound on the norm of the inverse of this operator. Thus the
magnitude of $p$ (and our ability to estimate errors in $p$) depends
to a large extent on the size of~$\beta^{-1}$. However, to the
author's knowledge, every existing proof (e.g.~\cite{duvaut,necas})
that $\beta^{-1}$ is finite relies on Rellich's compactness theorem
to extract a subsequence whose lower order derivatives converge,
making it impossible to determine how large $\beta^{-1}$ might be or
how it depends on~$\Omega$.
The proof in \cite{duvaut} also uses the closed graph theorem, which,
like Rellich's theorem, leads to constants that depend on $\Omega$ in
an uncontrollable way.
These proofs are appropriate for pathological domains with bulbous
regions connected by thin, circuitous pathways; however, for ``nice''
domains, it should be possible to obtain better estimates of the
constants, and for such domains the existing theorems are of limited
practical use.
In this paper, we derive explicit estimates of the inf-sup constant
$\beta$ for two-dimensional incompressible flow in a periodic channel
with one flat boundary and the other given by a periodic, Lipschitz
continuous function $h(x)$. Our goal is to determine how $\beta^{-1}$
depends on features of the geometry such as the aspect ratio, the
maximum slope, and the minimum gap thickness (if $h$ passes near the
substrate).
Although these requirements on $\Omega$ are fairly restrictive, such
geometries do cover a wide range of interesting applications.
Our interest in this problem arose in the course of deriving a-priori
error estimates for Reynolds' lubrication approximation
(and its higher order corrections)
with constants that depend on $\Omega$ in an
explicit, intuitive way; see \cite{rle:conv2} and also
\cite{langlois,Oron:97,poz:intro} for background on lubrication
theory. These a-priori estimates were used by the author and
A.~E.~Hosoi to monitor errors in the lubrication approximation while
studying shape optimization of swimming sheets over thin liquid films;
see~\cite{snail}.
\section{Preliminaries} \label{sec:prelim}
In this section we briefly review the weak formulation of the Stokes
equations, emphasizing the role played by the Babu\u{s}ka-Brezzi
inf-sup condition; see e.g.~\cite{braess,daVeiga,girault} for a more
detailed account.
Consider the two-dimensional, $x$-periodic Lipschitz domain $\Omega$
shown in Figure~\ref{fig:geom}:
\begin{equation}
\Omega=\{(x,y)\;:\;x\in T,\;\;0<y<h(x) \}, \qquad
h\in C^{0,1}(T), \qquad T=[0,L]_p.
\end{equation}
The case of non-zero Dirichlet boundary conditions may be reduced
to the homogeneous case by subtracting off an appropriate function
to transfer the inhomogeneity from the boundary conditions to
the body force $\mb{f}$; see e.g.~\cite{braess}.
We treat $\Omega$ and $T$ as $C^\infty$ manifolds by identifying the
points
\begin{equation}
\begin{aligned}
\Omega:& & (0,y)&\sim(L,y) \qquad 0<y<h(0), \\
T:& & 0&\sim L
\end{aligned}
\end{equation}
and adding a coordinate chart to each that ``wraps around''. In
particular: a function in $C^k(\Omega)$ or $C^k(T)$ is understood to
have $k$ continuous periodic derivatives;
$\partial\Omega=\Gamma_0\cup\Gamma_1$;
$\partial T=\varnothing$; the support of a function $\phi\in
C^k_c(\Omega)$ vanishes near $\Gamma_0$ and $\Gamma_1$ but not
necessarily at $x=0$ and $x=L$; and the Sobolev spaces $H^k(\Omega)$
and $H^k_0(\Omega)$ are the completions of $C^k(\overline\Omega)$ and
$C^k_c(\Omega)$ in the $\|\cdot\|_k$ norm, and thus contain only
$x$-periodic functions with appropriate smoothness at $x=0,L$.
\begin{figure}[t]
\begin{center}
\psdraft
\includegraphics[height=1.2in]{figs/geom}
\qquad \parbox[b][1.2in][c]{1.5in}{$$\begin{aligned}
h_1 &= \max_{0\le x\le L} h(x) \\
h_0 &= \min_{0\le x\le L} h(x)>0
\end{aligned}$$}
\psfull
\caption{ Two dimensional Stokes flow in a periodic channel. The
left and right boundaries have been identified and are
considered to be part of the interior of the domain. }
\label{fig:geom}
\end{center}
\end{figure}
In the weak formulation of the Stokes equations, we seek
the velocity $\mb{u}$ and pressure $p$ in the spaces
\begin{equation}
X=H^1_0(\Omega)^2, \qquad M=L^2_\#(\Omega)=\Big\{p\in L^2(\Omega)\;:\;
\int_\Omega p\,dA=0\Big\},
\end{equation}
respectively, such that
\begin{subequations}
\label{eqn:weak:stokes}
\begin{alignat}{2}
\label{eqn:weak:stokes1}
&a(\mb{u},\mb{v}) + b(\mb{v},p) & &=
\brak{\mb{f},\mb{v}} \\
\label{eqn:weak:stokes2}
&b(\mb{u},q) & &= 0
\end{alignat}
\end{subequations}
for all $\mb{v}\in X$ and $q\in M$, where the body force
$\mb{f}$ may be any linear functional in the dual space
$X'=H^{-1}(\Omega)^2$ and
\begin{equation}
a(\mb{u},\mb{v})=\int_\Omega \nabla\mb{u}:\nabla\mb{v}\,dA, \qquad
b(\mb{u},p)=-\int_\Omega p\,\nabla\cdot\mb{u}\,dA.
\end{equation}
We endow $M$ with the $L^2$ norm $\|\cdot\|_0$ and
$X$ with the energy norm (i.e.~the $H^1$ semi-norm)
$\|\mb{u}\|_a=\sqrt{a(\mb{u},\mb{u})}$,
which is equivalent to the $H^1$ norm $\|\mb{u}\|_1=\sqrt{\|\mb{u}\|_0^2
+ \|\mb{u}\|_a^2}$ due to the Poincar\'{e}-Friedrichs inequality
(see Lemma~\ref{lem:pf}):
\begin{equation}
\label{eqn:pf:ineq}
\|\mb{u}\|_0 \le \frac{h_1}{\sqrt{8}} \|\mb{u}\|_a, \qquad
(\mb{u}\in X), \qquad\qquad h_1 = \max_{0\le x\le L} h(x).
\end{equation}
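As a quick numerical sanity check of the bound (\ref{eqn:pf:ineq}), one can evaluate both norms for a simple admissible field; the test field $u(x,y)=\sin(\pi y/h_1)$ (constant in $x$, vanishing on both walls) and the channel height below are our choices for this illustration:

```python
import numpy as np

# Sanity check of the Poincare-Friedrichs bound ||u||_0 <= (h1/sqrt(8))||u||_a
# on a flat channel, using u(x, y) = sin(pi*y/h1), which is constant in x and
# zero on both walls; the height h1 = 0.7 is an arbitrary choice.
h1 = 0.7
m = 200000
y = (np.arange(m) + 0.5) / m * h1          # midpoints in (0, h1)
dy = h1 / m
u = np.sin(np.pi * y / h1)
uy = (np.pi / h1) * np.cos(np.pi * y / h1)
ratio = np.sqrt(np.sum(u**2) * dy) / np.sqrt(np.sum(uy**2) * dy)
print(ratio, h1 / np.sqrt(8))              # ratio equals h1/pi, below the bound
```

For this field the ratio is exactly $h_1/\pi$, which lies below $h_1/\sqrt{8}$ since $\pi>\sqrt{8}$.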
Next we define the operators $B:X\rightarrow M'$
and $B':M\rightarrow X'$
via
\begin{equation}
\label{eqn:B:def}
\langle B\mb{u},p\rangle = b(\mb{u},p) =
\langle B'p,\mb{u}\rangle, \qquad (B=\operatorname{div}, \;\; B'=\operatorname{grad}).
\end{equation}
$B$ and $B'$ are clearly bounded and satisfy
\begin{equation}
\|B\|=\|B'\| = \sup_{p\in\dot{M}}\sup_{\mb{u}\in\dot{X}}
\frac{|b(\mb{u},p)|}{\|p\|_0\,\|\mb{u}\|_a}\le\sqrt{2},
\end{equation}
where $\dot{M}=M\setminus\{0\}$ and $\dot{X}=X\setminus\{\mb{0}\}$.
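The constant $\sqrt{2}$ follows from the Cauchy--Schwarz inequality and the elementary bound $(s+t)^2\le 2(s^2+t^2)$ applied pointwise to the divergence (a step left implicit above):

```latex
|b(\mb{u},p)| \le \|p\|_0\,\|\nabla\cdot\mb{u}\|_0, \qquad
\|\nabla\cdot\mb{u}\|_0^2 = \int_\Omega (\partial_x u_1 + \partial_y u_2)^2\,dA
\le 2\int_\Omega |\nabla\mb{u}|^2\,dA = 2\,\|\mb{u}\|_a^2,
```

so that $|b(\mb{u},p)|\le\sqrt{2}\,\|p\|_0\,\|\mb{u}\|_a$.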
We note that if $\mb{u}\in X$ then $\nabla\cdot\mb{u}\in M$, i.e.~the
divergence of $\mb{u}$ has zero mean;
hence,
\begin{equation}
V := \ker B = \{\mb{u}\in X\;:\;\nabla\cdot\mb{u}=0\}.
\end{equation}
The Babu\u{s}ka-Brezzi $\inf$-$\sup$ condition
\begin{equation}
\label{eqn:inf:sup}
\exists \; \beta>0 \quad \text{such that} \quad
\inf_{p\in \dot{M}}\;\sup_{\mb{u}\in\dot{X}}
\frac{|b(\mb{u},p)|}{\|p\|_0\,\|\mb{u}\|_a}\ge\beta
\end{equation}
is precisely the condition required for $B'$ to be an isomorphism onto
its range with inverse bounded by $\|(B')^{-1}\|\le\beta^{-1}$. Once
we know the range of $B'$ is closed, we may take the polar
of the equation\, $\operatorname{ran}(B')^0=\ker(B)=V$ to conclude
\begin{equation}
\operatorname{ran}(B') = V^0 = \{\mb{f}\in X'\;:\;\brak{\mb{f},\mb{u}}=0
\text{ whenever } \mb{u}\in V\}.
\end{equation}
As $V^0$ is naturally isomorphic to $(X/V)'$, we see that
$\widetilde{B}:X/V\rightarrow M':(\mb{u}+V)\mapsto B\mb{u}$ is the
adjoint of
the composite map $M\overset{B'}{\longrightarrow}V^0\overset{\cong}{
\longrightarrow}(X/V)'$,
and is therefore itself an
isomorphism with the same bound on the inverse. Identifying
$X/V$ with
\begin{equation}
V^\perp=\{\mb{u}\in X\;:\;a(\mb{u},\mb{v})=0
\text{ whenever } \mb{v}\in V\},
\end{equation}
we learn that the restriction of $B$ to $V^\perp$ is an isomorphism
onto $M'$, which would be essential to the analysis of the Stokes
equations if the right hand side of (\ref{eqn:weak:stokes2}) were
inhomogeneous.
Other interesting solutions of $B\mb{u}=\varphi$ with $\varphi\in M'$
(requiring e.g.~$\mb{u}\in L^\infty(\Omega)^2\cap X$ or
$\nabla\times\mb{u}=0$ rather than $\mb{u}\in V^\perp$) are studied in
\cite{bourgain}.
Finally, we define $A:X\rightarrow X'$ and $\tilde{A}:V\rightarrow V'$
via
\begin{equation}
\brak{A\mb{u},\mb{v}} = a(\mb{u},\mb{v}), \quad
(\mb{u},\mb{v}\in X), \qquad
\brak{\tilde{A}\mb{u},\mb{v}} = a(\mb{u},\mb{v}), \quad
(\mb{u},\mb{v}\in V).
\end{equation}
Both are isometric isomorphisms in the $\|\cdot\|_a$ norm.
The weak solution $(\mb{u},p)$ of (\ref{eqn:weak:stokes})
must satisfy
$\mb{u}\in V$ so that $B\mb{u}=0$. But then $A\mb{u}+B'p=\mb{f}$
requires
\begin{equation}
(*)\;\; \tilde{A}\mb{u}=\tilde{\mb{f}}, \qquad
(\dagger)\;\; B'p = \mb{f}-A\mb{u},
\end{equation}
where $\tilde{\mb{f}}=\mb{f}\vert_V\in V'$ and we note that
$(\mb{f}-A\mb{u})\in\operatorname{range}(B')=V^0$ iff $\mb{u}$ satisfies $(*)$.
Since $\tilde{A}$ and $B'$ are isomorphisms onto their ranges, a
unique solution of (\ref{eqn:weak:stokes})
exists and we have the estimates
\begin{equation}
\|\mb{u}\|_a=\|\tilde{\mb{f}}\|_{V'}\le\|\mb{f}\|_{X'}=
\sup_{\mb{u}\in\dot{X}}\frac{|\brak{\mb{f},\mb{u}}|}{\|\mb{u}\|_a}, \qquad
\|p\|_0 \le 2\beta^{-1}\|\mb{f}\|_{X'}.
\end{equation}
In summary, the inf-sup condition (\ref{eqn:inf:sup}) is the key to
analyzing the weak formulation of the Stokes equations --- it is
equivalent to the assertion that the gradient $B'$ is an isomorphism
from $M=L^2_\#(\Omega)$ onto the polar set $V^0$ of linear functionals
in $X'$ that annihilate the divergence free vector fields $\mb{u}\in
V$.
It is instructive to compare the inf-sup condition written
in the form
\begin{equation}
\label{eqn:inf:sup:iso}
\beta\|p\|_0 \le \|B'p\|_{X'} =
\|\nabla p\|_{-1} \le \sqrt{2}\|p\|_0 \qquad
(p\in L^2_\#(\Omega)),
\end{equation}
to the Poincar\'{e}-Friedrichs inequality for mean-zero functions:
\begin{equation}
\label{eqn:pf:mz}
\|p\|_0\le C\|\nabla p\|_0 \quad \Rightarrow \quad
(1+C^2)^{-1/2}\|p\|_1 \le
\|\nabla p\|_0 \le \|p\|_1 \qquad
(p\in H^1_\#(\Omega)).
\end{equation}
Whereas (\ref{eqn:pf:mz}) is easy to prove for $p\in H^1_0(\Omega)$
(with $C=\frac{1}{\sqrt{8}}h_1$ in our case), it is more challenging
to prove for mean zero functions $p\in H^1_\#(\Omega)$. The usual
proof \cite{braess,evans} relies on Rellich's theorem that
$H^1(\Omega)$ is compactly embedded in $L^2(\Omega)$. As a result,
the proof does not tell us how large the constant $C$ might be or how
it depends on $\Omega$. Similarly, the usual proof \cite{duvaut}
of (\ref{eqn:inf:sup:iso}) makes use of Rellich's theorem that
$L^2(\Omega)$ is compactly embedded in $H^{-1}(\Omega)=H^1_0(\Omega)'$;
however, there is an added complication not present in proving
(\ref{eqn:pf:mz}): it must first be established that
\begin{equation}
\label{eqn:p0:duvaut}
\|p\|_0 \le C(\|p\|_{-1} + \|\nabla p\|_{-1}), \qquad
(p\in L^2(\Omega)).
\end{equation}
This can be done in our case (if $h\in C^{1,1}(T)$) by flattening out
the boundary and constructing appropriate extension operators from
$H^{-1}(\Omega)$ to $H^{-1}(T\times\mbb{R})$ to reduce the problem to
a case that can be solved using the Fourier transform; see Duvaut and
Lions \cite{duvaut} and also Nitsche \cite{nitsche}, who used a
similar technique to prove Korn's inequality. In this paper, we show
how to bypass (\ref{eqn:p0:duvaut}) and prove (\ref{eqn:inf:sup:iso})
directly \emph{without invoking Rellich's theorem}, which allows us to
determine how the constant $\beta$ depends on $\Omega$. We present
two versions of the proof: one assuming $h\in C^{1,1}(T)$, and the
other assuming only that $h\in C^{0,1}(T)$, i.e.~that $h$ is a
periodic, Lipschitz continuous function. Our proof does rely on the
boundary of $\Omega$ being the graph of a function $h(x)$; however, we
feel this is a sufficiently important case to warrant a separate
analysis. We sketch a proof of (\ref{eqn:pf:mz}) that avoids
Rellich's theorem in Appendix~\ref{sec:pf:mz} for comparison.
\section{A rectangular channel} \label{sec:R}
In the following theorem, we prove that $B'$ in (\ref{eqn:B:def}) is
an isomorphism onto its range \big(with
$\beta=\frac{1}{3}\min(1,4\frac{H}{L})$\big) when $\Omega$ is the
$x$-periodic rectangle $R=T\times(0,H)$ of height $H$. In
Sections~\ref{sec:curved} and~\ref{sec:lip}, we will transfer this
result to a general $x$-periodic domain $\Omega$ by a change of
variables. It is useful in this change of variables to know that the
constant $C_2$ in Theorem~\ref{thm:brezzi:R} (and especially in
Corollary~\ref{cor2:brezzi:R}) does not diverge as $H$ approaches
zero.
The periodicity of the domain in one direction but not the other leads
to an interesting relationship between the inf-sup condition and the
unitary operator mapping the Fourier sine coefficients of a function
of one variable to its Fourier cosine coefficients. By studying this
operator, we can obtain explicit estimates of $\beta$ and its
dependence on $L/H$.
Recall that every
$u\in H^1_0(R)$ must be zero (in the trace sense) on the top and
bottom walls but not necessarily on the side walls, where it is only
required to be periodic. Such a function can be expanded in a sine or
cosine series in the $y$-direction and differentiated term by term.
(If $u\in H^1(R)$ is not zero on the top and bottom walls, only the
cosine series can be differentiated term by term).
\begin{theorem} \label{thm:brezzi:R}
For all $q\in L^2_\#(R)$,
\begin{equation}
\|q\|_0^2\le C_1 \|\partial_x q\|^2_{-1}
+ C_2 \|\partial_y q\|^2_{-1},
\end{equation}
where
$C_1=\max\left(9,\frac{9}{16}\frac{L^2}{H^2}\right)$, $C_2=9$,
and $\|f\|_{-1}=\sup_{u\in H^1_0(R)}\frac{|\Brak{f,u}|}{\|u\|_a}$.
\end{theorem}
\begin{proof}
We may expand any $q\in L^2_\#(R)$ and $u\in H^1_0(R)$ in Fourier
series
\begin{align*}
q(x,y) &= \sum_{n\in\mbb{Z}}\bigg(a_{n0} +
\sum_{j=1}^\infty a_{nj}\sqrt{2}\cos \frac{\pi j y}{H}\bigg)
e^{\textstyle\frac{2\pi i nx}{L}}
= \sum_{n\in\mbb{Z}}\bigg(
\sum_{j=1}^\infty b_{nj}\sqrt{2}\sin \frac{\pi j y}{H}\bigg)
e^{\textstyle\frac{2\pi i nx}{L}},
\\ u(x,y) &= \sum_{n\in\mbb{Z}}\bigg(c_{n0} +
\sum_{j=1}^\infty c_{nj}\sqrt{2}\cos \frac{\pi j y}{H}\bigg)
e^{\textstyle\frac{2\pi i nx}{L}}
= \sum_{n\in\mbb{Z}}\bigg(
\sum_{j=1}^\infty d_{nj}\sqrt{2}\sin \frac{\pi j y}{H}\bigg)
e^{\textstyle\frac{2\pi i nx}{L}}
\end{align*}
so that
\begin{alignat}{2}
\|q\|_0^2 &= \sum_{\mbb{Z}\times\mbb{N}_0} LH|a_{nj}|^2 &
&= \sum_{\mbb{Z}\times\mbb{N}} LH|b_{nj}|^2, \\
\|u\|_{a}^2 &= \sum_{\mbb{Z}\times\mbb{N}_0}
LH \Big[\Big(\frac{2\pi n}{L}\Big)^2 +
\Big(\frac{\pi j}{H}\Big)^2\Big]|c_{nj}|^2
& &= \sum_{\mbb{Z}\times\mbb{N}}
LH \Big[\Big(\frac{2\pi n}{L}\Big)^2 +
\Big(\frac{\pi j}{H}\Big)^2\Big]|d_{nj}|^2.
\end{alignat}
Here $\mbb{N}_0=\{0\}\cup\mbb{N}$ and the sums are over ordered pairs
$(n,j)$. Let us denote
$(\mbb{Z}\times\mbb{N}_0)' =\mbb{Z}\times\mbb{N}_0
\setminus\{(0,0)\}$.
We claim that
\begin{equation}
\label{eqn:q:cd}
\begin{array}{ccccc}
& & A_1 & A_2 & \|\partial_y q\|^2_{-1} \\[4pt]
& & \downarrow & \hspace*{15pt} \searrow \hspace*{-15pt}
& \,\,\jr{=} \\[3pt]
\|q\|_0^2 & = &
\displaystyle \!\!\!\sum_{(\mbb{Z}\times\mbb{N}_0)'} \!
\frac{LH(2\pi n/L)^2|a_{nj}|^2}{(2\pi n/L)^2+(\pi j/H)^2} & + &
\displaystyle \sum_{\mbb{Z}\times\mbb{N}}
\frac{LH(\pi j/H)^2|a_{nj}|^2}{(2\pi n/L)^2+(\pi j/H)^2} \\[16pt]
& & \,\,\jr{\,\,\le} & & \,\,\jr{\,\,\ge} \\
\|q\|_0^2 & = & \displaystyle \sum_{\mbb{Z}\times\mbb{N}}
\frac{LH(2\pi n/L)^2|b_{nj}|^2}{(2\pi n/L)^2+(\pi j/H)^2}
& + & \displaystyle \sum_{\mbb{Z}\times\mbb{N}}
\frac{LH(\pi j/H)^2|b_{nj}|^2}{(2\pi n/L)^2+(\pi j/H)^2} \\[12pt]
&& \,\,\jr{=} & \hspace*{-15pt} \nwarrow \hspace*{15pt}
& \uparrow \\[4pt]
& & \|\partial_x q\|_{-1}^2 & B_1 & B_2
\end{array}
\end{equation}
Here $A_1$, $A_2$, $B_1$ and $B_2$ are labels to represent the
indicated sums. The horizontal assertions clearly hold (since $q\in
L^2_\#(R)\Rightarrow a_{00}=0$) while the vertical assertions follow
from the Cauchy-Schwarz inequality and a particular choice of $u$ to
show that two of the upper bounds are least upper bounds:
\begin{alignat}{2}
\notag
\langle \partial_xq,u\rangle = \int_R (q)(-\partial_xu)\,dA &=
\sum_{\mbb{Z}\times\mbb{N}}
\,\, LH \, b_{-n,j}\Big(-\frac{2\pi in}{L} d_{nj}\Big) & &
\hspace*{-12pt}\le B_1^{1/2}\|u\|_{a}, \\
\notag
\langle \partial_xq,u\rangle
&= \!\!\!\sum_{(\mbb{Z}\times\mbb{N}_0)'}
\!\! LH \, a_{-n,j}\Big(-\frac{2\pi in}{L} c_{nj}\Big) & &
\hspace*{-12pt}\le A_1^{1/2}\|u\|_{a}, \\
\notag
\langle \partial_yq,u\rangle = \int_R (q)(-\partial_yu)\,dA &=
\sum_{\mbb{Z}\times\mbb{N}}
\,\, LH \, a_{-n,j}\Big(-\frac{\pi j}{H} d_{nj}\Big) & &
\hspace*{-12pt}\le A_2^{1/2}\|u\|_{a}, \\
\notag
\langle \partial_yq,u\rangle
&= \sum_{\mbb{Z}\times\mbb{N}}
\,\, LH \, b_{-n,j}\Big(\frac{\pi j}{H} c_{nj}\Big) & &
\hspace*{-12pt}\le B_2^{1/2}\|u\|_{a}, \\
\label{eqn:uqx}
d_{nj} = \frac{(2\pi i n/L)\,\bar{b}_{-n,j}}
{(2\pi n/L)^2+(\pi j/H)^2} \;\; &\Rightarrow \;\;
\|u\|_a = B_1^{1/2}, \quad \langle \partial_xq, u\rangle = B_1, \\
\label{eqn:uqy}
d_{nj} = \frac{-(\pi j/H)\,\bar{a}_{-n,j}}
{(2\pi n/L)^2+(\pi j/H)^2} \;\; &\Rightarrow \;\;
\|u\|_a = A_2^{1/2}, \quad \langle \partial_yq, u\rangle = A_2.
\end{alignat}
The choices of $c_{nj}$ analogous to (\ref{eqn:uqx}) and
(\ref{eqn:uqy}) do not generally lead to functions $u$ that satisfy
the boundary conditions on the top and bottom walls; hence, we cannot
replace the inequalities in (\ref{eqn:q:cd}) by equalities.
The theorem will be proved if we can show that
\begin{equation}
\label{eqn:Ai:Bi:ineq}
\theta(A_1+A_2) + (1-\theta)(B_1+B_2) \le C_1B_1 + C_2A_2
\end{equation}
for some $\theta\in[0,1]$. The result (\ref{eqn:alpha:C2}) below
turns out to be independent of $\theta$, so we set $\theta=1$ here
for simplicity.
We will prove (\ref{eqn:Ai:Bi:ineq}) by slicing the lattices
$(\mbb{Z}\times\mbb{N}_0)'$ and $\mbb{Z}\times\mbb{N}$ into vertical
strips and showing that
\begin{equation}
\label{eqn:Ai:Bi:ineq2}
A_{1,n}\le C_1B_{1,n}+(C_2-1)A_{2,n}, \qquad (n\in\mbb{Z}),
\end{equation}
where the subscript $n$ indicates that only the terms in strip $n$
should be included in the sum, e.g.~$A_{1,3}=\sum_{j=0}^\infty
LH(6\pi/L)^2|a_{3j}|^2/[(6\pi/L)^2+(\pi j/H)^2]$. Since $A_{1,0}=0$,
the $n=0$ case holds trivially. If we freeze
$n\in\mbb{Z}\setminus\{0\}$, we find that
\begin{equation}
\frac{1}{L}\int_0^L q(x,y)e^{-\textstyle\frac{2\pi inx}{L}}\,dx
\;=\;
a_{n0} + \sum_{k=1}^\infty a_{nk}\sqrt{2}\cos \frac{\pi k y}{H} \;=\;
\sum_{j=1}^\infty b_{nj}\sqrt{2}\sin \frac{\pi j y}{H}.
\end{equation}
Thus, the coefficients $a_{nk}$ and $b_{nj}$ are related to each
other by a unitary transformation
\begin{equation}
a_{nk} = \sum_{j=1}^\infty E_{k j}b_{nj}, \qquad
(n\in\mbb{Z},\;k\ge0).
\end{equation}
The entries of $E$ can be computed explicitly: for $j\ge1$ we have
\begin{equation}
\label{eqn:U:def}
E_{k j} = \left\{\begin{aligned}
&\textstyle\int_0^1 \sqrt{2}\sin (\pi j\eta)\,d\eta, & &k=0 \\
&\textstyle\int_0^1 2\sin(\pi j\eta)\cos(\pi k \eta)\,d\eta, & &k\ge1
\end{aligned}\right\}
= \begin{cases}
\;\, 2\sqrt{2}/(j\pi),
& k=0,\;\; j\text{ odd} \\[4pt]
\displaystyle
\frac{4j}{(j^2-k^2)\pi},
& k>0,\;\; j-k\text{ odd} \\[3pt]
\qquad 0, & \text{otherwise}
\end{cases}
\end{equation}
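The closed-form entries of $E$ can be checked against direct quadrature of the defining integrals; in the following sketch the quadrature rule and the truncation ranges are our choices:

```python
import numpy as np

# Check the closed-form entries E_{kj} against midpoint-rule quadrature of
# the defining integrals on (0, 1); rule and index ranges are choices made here.
m = 20000
eta = (np.arange(m) + 0.5) / m

def E_quad(k, j):
    f = np.sqrt(2) * np.sin(np.pi * j * eta) if k == 0 \
        else 2 * np.sin(np.pi * j * eta) * np.cos(np.pi * k * eta)
    return f.mean()          # midpoint rule on the unit interval

def E_formula(k, j):
    if k == 0:
        return 2 * np.sqrt(2) / (j * np.pi) if j % 2 == 1 else 0.0
    if (j - k) % 2 == 1:
        return 4 * j / ((j**2 - k**2) * np.pi)
    return 0.0

err = max(abs(E_quad(k, j) - E_formula(k, j))
          for k in range(6) for j in range(1, 7))
col1 = sum(E_formula(k, 1)**2 for k in range(2000))   # column j=1 has unit norm
print(err, col1)
```

The unit column norm reflects the fact that $E$ is unitary, i.e.\ each sine mode is carried to a unit vector of cosine coefficients.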
Keeping $n\in\mbb{Z}\setminus\{0\}$ frozen and dividing
(\ref{eqn:Ai:Bi:ineq2}) by $LH$, we must show that
\begin{equation*}
\textstyle
\sum_{k=0}^\infty \frac{(2\pi n/L)^2|a_{nk}|^2}{
(2\pi n/L)^2 + (\pi k/H)^2} \le
C_1\sum_{j=1}^\infty \frac{(2\pi n/L)^2|b_{nj}|^2}{
(2\pi n/L)^2 + (\pi j/H)^2} +
(C_2-1)\sum_{k=1}^\infty \frac{(\pi k/H)^2|a_{nk}|^2}{
(2\pi n/L)^2 + (\pi k/H)^2}.
\end{equation*}
This is accomplished via the following lemma using $\nu=2|n|H/L$
and $\nu_0=2H/L$.
\end{proof}
\vspace*{5pt}
\begin{lemma} Suppose $b\in \ell^2(\mbb{N})$ and let
$a=Eb\in \ell^2(\mbb{N}_0)$,
where $E$ maps the Fourier sine coefficients of a function to its
Fourier cosine coefficients; see (\ref{eqn:U:def}) above. Then for
$\nu>0$ there holds
\begin{equation}
\label{eqn:nu:ak}
\sum_{k=0}^\infty\frac{\nu^2}{\nu^2+k^2}|a_k|^2
\le C_1\sum_{j=1}^\infty\frac{\nu^2}{\nu^2+j^2}|b_j|^2 +
(C_2-1)\sum_{k=1}^\infty\frac{k^2}{\nu^2+k^2}|a_k|^2,
\end{equation}
with $C_1 = \max\left(9,\frac{9}{4}\nu^{-2}\right)$ and $C_2=9$. If
$\nu\ge\nu_0>0$, $C_1=\max\left(9,\frac{9}{4}\nu_0^{-2}\right)$ also
works.
\end{lemma}
\begin{proof}
It suffices to show that (\ref{eqn:nu:ak}) holds whenever $b$ is a
unit vector in $\ell^2(\mbb{N})$. The general case follows by re-scaling
this result. We will split each sum into terms of low and high index
and use different arguments to handle the two cases.
Let $k_0\ge0$, $j_0\ge1$, $k_1=k_0+1$ and $j_1=j_0+1$.
If we discard terms on the right hand side with $j\ge j_1$ and $k\le
k_0$,
we obtain a sufficient condition for (\ref{eqn:nu:ak}) to hold.
Also, on the left hand side,
$\sum_{k=k_1}^\infty
\frac{\nu^2}{\nu^2+k^2}|a_k|^2 \le
\frac{\nu^2}{k_1^2}\sum_{k=k_1}^\infty \frac{k^2}{\nu^2+k^2}|a_k|^2$,
so it suffices to show that
\begin{equation}
\label{eqn:nu:ak2}
\sum_{k=0}^{k_0}\frac{\nu^2}{\nu^2+k^2}|a_k|^2
\le C_1\sum_{j=1}^{j_0}\frac{\nu^2}{\nu^2+j^2}|b_j|^2 +
\left(C_2-1-\frac{\nu^2}{k_1^2}\right)
\sum_{k=k_1}^\infty\frac{k^2}{\nu^2+k^2}|a_k|^2.
\end{equation}
Next, we see that (\ref{eqn:nu:ak2}) will hold if we can show that
\begin{equation}
\label{eqn:alpha:C}
\alpha^2 \le
\frac{C_1\nu^2}{\nu^2+{j_0^2}}\beta^2 +
\frac{(C_2-1-\nu^2/k_1^2)k_1^2}{\nu^2+k_1^2}(1-\alpha^2),
\end{equation}
where $\alpha^2=\sum_{k=0}^{k_0}|a_k|^2$,
$\beta^2=\sum_{j=1}^{j_0}|b_j|^2$, and $1-\alpha^2 =
\sum_{k=k_1}^{\infty}|a_k|^2$. Note that $\beta$ here is not
the $\inf$-$\sup$ constant $\beta$, but rather a measure of
the relative weight of low frequency modes in comparison to high
frequency modes in a sine series expansion.
Solving (\ref{eqn:alpha:C}) for
$\alpha^2$, we require
\begin{equation}
\label{eqn:alpha:C2}
\alpha^2 \le \frac{C_1}{C_2}
\frac{1 + \nu^2/k_1^2}{1 + j_0^2/\nu^2}\beta^2 +
1 - \frac{1+\nu^2/k_1^2}{C_2}.
\end{equation}
Our goal is to show that for each $\nu>0$ there is a choice of
$j_0\ge1$, $k_1\ge1$, $C_1\le\max(9,(9/4)\nu^{-2})$ and $C_2\le9$ such
that (\ref{eqn:alpha:C2}) and consequently (\ref{eqn:nu:ak}) holds for
all unit vectors $b\in\ell^2(\mbb{N})$; ($b$ determines $a$, $\alpha$
and $\beta$). $C_1$ and $C_2$ can then be
increased if necessary to the values stated in the lemma without
violating (\ref{eqn:nu:ak}).
We now use the fact that $a$ and $b$ are unit vectors related by a
known unitary transformation to obtain a bound on $\alpha$ in terms of
$\beta$. Let $S$, $T$, $x$, $y$, $z$ be the sub-matrices and
sub-vectors
\begin{equation}
\begin{aligned}
&S = E(0\!:\!k_0,\,1\!:\!j_0), \qquad
T = E(0\!:\!k_0,\,j_1\!:\!\infty), \qquad
E(0\!:\!k_0,\,:) = [S,T]. \\
&z=a(0\!:\!k_0), \qquad
x=b(1\!:\!j_0), \qquad
y=b(j_1\!:\!\infty), \qquad
z=Sx+Ty.
\end{aligned}
\end{equation}
We have $\alpha=\|z\|$ and $\beta=\|x\|=\sqrt{1-\|y\|^2}$.
Since $\|S\|\le1$ and $\|y\|\le1$, the estimate
$\|z\|\le\|Sx\|+\|Ty\|$ gives
\begin{equation}
\alpha\le\beta+t, \qquad\qquad t=\|T\|\le1.
\end{equation}
If $t<1$,
this can be used to derive a bound on $\alpha^2$ of the form
(\ref{eqn:alpha:C2}). However, we can obtain a sharper estimate as
follows. First, we compute the singular value decomposition
$S=U\Sigma V^*$ and rotate the rows of $T$ by a unitary operator $Q$
such that
\begin{equation}
U^*[S,T]\begin{bmatrix}V & 0 \\ 0 & Q\end{bmatrix} =
\left(\begin{array}{cccccc|ccccc}
\sigma_0 & & 0 & 0 & \cdots & 0 & t_0 & & 0 & 0 & \cdots \\
& \ddots & & \vdots & & \vdots & & \ddots & & \vdots & \\
0 & & \sigma_{k_0} & 0 & \cdots & 0 & 0 & & t_{k_0} & 0 & \cdots
\end{array}\right),
\end{equation}
where $\sigma_k^2+t_k^2=1$ for $0\le k\le k_0$. We assume here that
$j_0\ge k_0+1$; otherwise we will not be able to derive a sufficient
condition for (\ref{eqn:alpha:C2}) to hold, for if $S$ has more rows
than columns, we can produce a unit vector $a=[z;0]$ with $S^*z=0$ so
that $b=E^*a$ yields $\alpha=1$ and $\beta=0$.
Next we define
$\tilde{z}=U^*z$, $\tilde{x}=V^*x$, $\tilde{y}=Q^*y$ so that
\begin{equation}
\alpha^2 = \sum_{k=0}^{k_0}|\tilde{z}_k|^2, \quad
\beta^2 = \sum_{j=1}^{j_0}|\tilde{x}_j|^2, \quad
1-\beta^2 = \sum_{j=1}^\infty|\tilde{y}_j|^2, \qquad
\tilde{z}_k = \sigma_k\tilde{x}_{k+1} +
t_k\tilde{y}_{k+1}
\end{equation}
and, by Lemma~\ref{lem:sumsq} below,
\begin{equation}
|\tilde{z}_k|^2 \le \frac{1}{1-t_k}|\sigma_k\tilde{x}_{k+1}|^2
+ \frac{1}{t_k}|t_k\tilde{y}_{k+1}|^2
= (1+t_k)|\tilde{x}_{k+1}|^2 + t_k|\tilde{y}_{k+1}|^2.
\end{equation}
Hence, majorizing $t_k$ by
$\|T\|=t_\text{max}=\sqrt{1-\sigma_\text{min}^2}$
and summing over $k$, we obtain
\begin{equation}
\alpha^2\le (1+t)\beta^2 + t(1-\beta^2) = \beta^2 + t,
\qquad t=\|T\|\le1.
\end{equation}
Thus, (\ref{eqn:alpha:C2}) holds if we define $C_1$ and $C_2$ via
$\left(1-\frac{1+\nu^2/k_1^2}{C_2}\right)=t$ and
$\left(\frac{C_1}{C_2}\frac{1+\nu^2/k_1^2}{1+j_0^2/\nu^2}\right)=1$:
\begin{equation}
\label{eqn:C1:C2:def}
C_1(\nu) = \frac{1+j_0^2/\nu^2}{1-t}, \qquad
C_2(\nu) = \frac{1+\nu^2/k_1^2}{1-t}.
\end{equation}
Next we look for choices of $k_1$ and $j_0$ that lead to a
window of values of $\nu$ over which $C_1$ and $C_2$ remain small.
We need enough such windows to cover the positive real line $\nu>0$.
The trade-off is that choosing $j_0\gg k_1$ makes $t$ small but
also makes one of the numerators in (\ref{eqn:C1:C2:def}) large.
We consider three cases:
\noindent
$\bullet$
\emph{Case 1:} $(0<\nu\le1/2)$. We set $k_1=j_0=1$ so that
$t=\sqrt{1-E_{01}^2}=.4352$ and
\begin{equation}
\begin{aligned}
C_2(\nu)&=(1+\nu^2)/(1-t)\le(5/4)/(1-t)=2.2133\le9/4, \\
C_1(\nu)&=(1+\nu^{-2})/(1-t)=C_2\nu^{-2}\le
(9/4)\nu^{-2},
\end{aligned}\qquad (0<\nu\le1/2).
\end{equation}
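As a sanity check (our own, not part of the original argument), the constants in Case 1 can be reproduced numerically: the entry $E_{01}$ satisfies $|E_{01}|^2=8/\pi^2$, which can be read off from the Frobenius-norm computation in Case 3 below.

```python
import math

# Case 1 constants: t = sqrt(1 - E_01^2) with |E_01|^2 = 8/pi^2
# (from |E_{0j}|^2 = (8/pi^2)/j^2 for odd j, as in the Case 3 computation).
E01 = math.sqrt(8) / math.pi
t = math.sqrt(1 - E01**2)
C2_max = (5/4) / (1 - t)   # sup of C_2(nu) over 0 < nu <= 1/2

print(round(t, 4))         # 0.4352
print(round(C2_max, 4))    # 2.2133
assert C2_max <= 9/4
```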
\noindent
$\bullet$ \emph{Case 2:} $(1/2\le\nu\le100)$. We wrote a program to
compute the singular value decomposition of $S$ for all pairs of small
integers $k_1$ and $j_0$ satisfying $k_1\le j_0\le 4k_1\le400$ to
determine $t=\sqrt{1-\sigma_\text{min}^2}$ for each pair. We
then choose a threshold $C_\text{thresh}$ and find the values
$\nu_\text{min}$ and $\nu_\text{max}$ such that
$C_1(\nu_\text{min})=C_\text{thresh}$ and
$C_2(\nu_\text{max})=C_\text{thresh}$. We then discard all
cases with $\nu_\text{max}<\nu_\text{min}$ and sort the remaining
intervals $[\nu_\text{min},\nu_\text{max}]$ by their first entry.
Finally, we discard all intervals for which $\nu_\text{min}$ of
the next interval is smaller than $\nu_\text{max}$ of the previous
interval (to avoid redundancy). The results with
$C_\text{thresh}=5.9$ and $C_\text{thresh}=8.9$ are
shown in Figure~\ref{fig:k1:j0}. The method breaks down
(i.e.~there are gaps between some of the windows) for
$C_\text{thresh}<5.83$.
\begin{figure}[t]
\begin{center}
\begin{minipage}{.37\linewidth}
\begin{center}
\captionof{table}{
Parameters used to construct $C_1(\nu)$ and $C_2(\nu)$ with
$C_\text{thresh}=8.9$. The corresponding table with
$C_\text{thresh}=5.9$ has 32 lines corresponding to the
smaller windows shown in Figure~\ref{fig:k1:j0}.}
\label{tbl:k1:j0}
\scriptsize
\begin{tabular}{rrcrr}
\hline
$k_1$ & $j_0$ & $t$ & $\nu_\text{min}$ & $\nu_\text{max}$ \\
\hline \\[-8pt]
1 & 1 & .43524 & 0.498 & 2.007 \\
3 & 3 & .57904 & 1.810 & 4.972 \\
6 & 8 & .54892 & 4.608 & 10.42 \\
13 & 17 & .58222 & 10.31 & 21.43 \\
25 & 37 & .54766 & 21.27 & 43.49 \\
50 & 76 & .54321 & 43.41 & 87.54 \\
99 & \hspace*{-5pt} 155 & .53535 & 87.54 & 175.3 \\
\hline
\end{tabular}
\end{center}
\end{minipage}
\hfill
\begin{minipage}{.57\linewidth}
\includegraphics[width=\linewidth]{figs/infsup1}
\caption{Plot of $C_1(\nu)$ and $C_2(\nu)$ over the range
$0.2\le\nu\le600$. Each criss-cross corresponds to a
different window $\nu_\text{min}\le\nu\le
\nu_\text{max}$ in Table~\ref{tbl:k1:j0}.}
\label{fig:k1:j0}
\end{minipage}
\end{center}
\end{figure}
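The interval-selection step of this procedure can be sketched as follows (a Python illustration of ours; the values of $t$ are taken from Table~\ref{tbl:k1:j0} rather than recomputed from the SVD of $S$):

```python
import math

# (k1, j0, t) from Table 1; t = sqrt(1 - sigma_min^2) comes from the SVD of S.
rows = [(1, 1, .43524), (3, 3, .57904), (6, 8, .54892), (13, 17, .58222),
        (25, 37, .54766), (50, 76, .54321), (99, 155, .53535)]
C_thresh = 8.9

def window(k1, j0, t):
    # Solve C1(nu_min) = C_thresh and C2(nu_max) = C_thresh using the
    # definitions C1 = (1 + j0^2/nu^2)/(1-t), C2 = (1 + nu^2/k1^2)/(1-t).
    s = math.sqrt(C_thresh * (1 - t) - 1)
    return j0 / s, k1 * s

windows = [window(*r) for r in rows]
# Consecutive windows must overlap so that 1/2 <= nu <= 100 is covered.
for (_, hi), (lo, _) in zip(windows, windows[1:]):
    assert lo <= hi
print([(round(lo, 3), round(hi, 3)) for lo, hi in windows[:2]])
# [(0.498, 2.007), (1.81, 4.972)]
```

The endpoints reproduce the $\nu_\text{min}$ and $\nu_\text{max}$ columns of Table~\ref{tbl:k1:j0} to the displayed precision.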
\noindent
$\bullet$ \emph{Case 3:} $(\nu\ge100)$. We set $k_1=\lfloor
\nu/\sqrt{3} \rfloor$, $j_0=3k_1$ and bound $t$ by the Frobenius norm:
\begin{align}
t^2 & \le\|T\|_F^2 =
\sum_{k=0}^{k_0}\sum_{j=j_1}^\infty |E_{kj}|^2 =
\frac{8}{\pi^2}\sum_{j=j_1}^\infty\frac{\delta_{j,\tem{odd}}}{j^2} +
\frac{16}{\pi^2}\sum_{k=1}^{k_0}\sum_{j=j_1}^\infty
\frac{j^2\delta_{j-k,\tem{odd}}}{(j^2-k^2)^2} \\
\notag
&\le \frac{4}{\pi^2}\int_{j_0-1}^\infty \frac{1}{x^2}\,dx
\; + \;
\frac{8}{\pi^2}k_0 \int_{j_0-1}^\infty \frac{x^2}{(x^2-k_0^2)^2}
\,dx, \qquad (j_0-1=j_1-2) \\
\notag
&= \frac{4}{\pi^2 (j_0-1)} +
\frac{4}{\pi^2}\left[\frac{\kappa}{\kappa^2-1}
+ \frac{1}{2}\log\frac{\kappa+1}{\kappa-1}\right],
\qquad
\left(\kappa=\frac{j_0-1}{k_0}>\frac{j_0}{k_1}=3\right) \\
\notag
& \le \frac{4}{170\pi^2}+\frac{4}{\pi^2}
\left[\frac{3}{8} + \frac{1}{2}\log2\right]
= (.54298)^2,
\qquad \left(
k_1\ge\left\lfloor\frac{100}{\sqrt{3}}\right\rfloor=57,\;
j_0\ge171\right).
\end{align}
Here we represent sums of decreasing functions
sampled at even or odd integers by staircases of width two
and half the height of the function at the right endpoint.
Each choice of $k_1$ and $j_0$ will cover the range $\sqrt{3}k_1\le\nu
<\sqrt{3}(k_1+1)$; over this range, we have
\begin{equation}
\frac{\nu}{k_1}\le\left(\frac{k_1+1}{k_1}\right)
\left(\frac{\nu}{k_1+1}\right)\le
\frac{58}{57}\sqrt{3}, \qquad
\frac{j_0}{\nu}\le\left(\frac{j_0}{k_1}\right)
\left(\frac{k_1}{\nu}\right)\le
(3)\frac{1}{\sqrt{3}}=\sqrt{3}
\end{equation}
and we learn that $C_1$ and $C_2$ are bounded by
$\frac{1+3(58/57)^2}{1-.54298} = 8.985$.
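The final numbers in Case 3 can be verified directly (again our own numerical check):

```python
import math

j0 = 171   # smallest admissible j0 in Case 3 (k1 = 57, j0 = 3*k1)
t_sq = 4/(math.pi**2*(j0 - 1)) + (4/math.pi**2)*(3/8 + 0.5*math.log(2))
t = math.sqrt(t_sq)
C_bound = (1 + 3*(58/57)**2) / (1 - t)

print(round(t, 5))         # 0.54298
print(round(C_bound, 3))   # 8.985
assert C_bound < 9
```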
Thus, for all $\nu>0$ we have
$C_1 \le \max(9,(9/4)\nu^{-2})$ and
$C_2 \le 9$,
as claimed.
\end{proof}
\vspace*{5pt}
\begin{corollary} \label{cor:brezzi:R}
For all $q\in L^2_\#(R)$, $\|q\|_{-1}^2\le\frac{L^2}{4\pi^2}
\big\|\partial_x q\big\|_{-1}^2 + \frac{H^2}{\pi^2}
\big\|\partial_y q\big\|_{-1}^2$.
\end{corollary}
\begin{proof}
Arguing as in (\ref{eqn:q:cd})--(\ref{eqn:uqy}), it is readily
shown that
\begin{equation*}
\|q\|_{-1}^2=\sum_{\mbb{Z}\times\mbb{N}}
\frac{LH|b_{nj}|^2}{(2\pi n/L)^2 + (\pi j/H)^2} \le
\frac{L^2}{4\pi^2}\sum_{\mbb{Z}\times\mbb{N}}
\frac{LH(2\pi n/L)^2|b_{nj}|^2}{(2\pi n/L)^2 + (\pi j/H)^2} +
\sum_{j=1}^\infty \frac{LH|b_{0j}|^2}{(\pi j/H)^2}.
\end{equation*}
The first term on the right hand side is simply
$\frac{L^2}{4\pi^2}\big\|\partial_x q\big\|_{-1}^2$ while the second
satisfies
\begin{equation}
\sum_{j=1}^\infty \frac{LH|b_{0j}|^2}{(\pi j/H)^2} \le
\frac{H^2}{\pi^2}\sum_{j=1}^\infty LH|b_{0j}|^2 =
\frac{H^2}{\pi^2}\sum_{j=1}^\infty LH|a_{0j}|^2 \le
\frac{H^2}{\pi^2}\big\|\partial_y q\big\|_{-1}^2,
\end{equation}
where the middle equality follows from the fact that $a_{00}=0$.
\end{proof}
\begin{corollary} \label{cor2:brezzi:R}
Suppose $q\in L^2_\#(R)$ and $\zeta\in L^\infty(T)$. Then
for any aspect ratio $H/L$, we have
\begin{equation}
\|\partial_y(\zeta q)\|_{-1}^2 \le C_2M^2\left(\|\partial_xq\|_{-1}^2
+ \|\partial_yq\|_{-1}^2\right),
\end{equation}
where $C_2=9$ and $M=\|\zeta\|_\infty$.
\end{corollary}
\begin{proof}
Since $\zeta$ does not depend on $y$, the Fourier coefficients of
$\tilde{q}=\zeta q$ are related to those of $q$ via column-by-column
convolution with the Fourier coefficients of $\zeta$:
\begin{equation}
\tilde{a}_{nk} = \sum_{m\in\mbb{Z}}\hat{\zeta}_{n-m}a_{mk},
\qquad
\tilde{b}_{nj} = \sum_{m\in\mbb{Z}}\hat{\zeta}_{n-m}b_{mj}, \qquad
(n\in\mbb{Z}, \; k\ge0, \; j>0).
\end{equation}
Since multiplication by $\zeta$ is bounded in $L^2(T)$ by $M$,
convolution with $\hat{\zeta}$ is bounded in $\ell^2(\mbb{Z})$ by $M$.
Thus, by (\ref{eqn:q:cd}), we have
\begin{equation}
\label{eqn:zeta:q}
\|\partial_y(\zeta q)\|_{-1}^2 \le LH\sum_{k>0}\sum_{n\in\mbb{Z}}
|\tilde{a}_{nk}|^2 \le M^2 \left(LH
\sum_{k>0}\sum_{n\in\mbb{Z}} |a_{nk}|^2\right).
\end{equation}
The key point is that entries $a_{nk}$ with $k=0$ are absent from the
right hand side. The quantity in parentheses may be written as
$A_1+A_2$ just as in (\ref{eqn:q:cd}), but omitting the $k=0$ terms
from $A_1$. Thus, it suffices to show that (\ref{eqn:nu:ak}) holds
with $C_1$ replaced by $C_2$ if the $k=0$ term is omitted from the sum
on the left. For $\nu\ge1$, the result has already been proved
without omitting this term. But for $\nu<1$, we see that $C_1=0$ and
$C_2=2$ suffice (since $\nu^2\le k^2$ for $k\ge1$). Thus $C_1=C_2=9$
works for all $\nu>0$, as claimed.
\end{proof}
\section{Curved boundaries} \label{sec:curved}
We now perform a change of variables to transfer the result of
Theorem~\ref{thm:brezzi:R} from the rectangle $R$ to a domain $\Omega$
bounded on one side by a periodic, Lipschitz continuous function
$h\in C^{0,1}(T)$:
\begin{gather}
\notag
\parbox[b][.8in][c]{2.2in}{
\includegraphics[width=\linewidth]{figs/chgvar}
}
\quad
\parbox[b][.8in][c]{2.7in}{
$$\begin{aligned}
x &= \xi, & \;
y &= \frac{h(\xi)}{H}\eta, & \;
dx\,dy &= \frac{h(\xi)}{H}\,d\xi\,d\eta, \\[5pt]
\xi &= x, &
\eta &= \frac{H}{h(x)}y, &
d\xi\,d\eta &= \frac{H}{h(x)}\,dx\,dy,
\end{aligned}$$
} \\
\label{eqn:coord1}
\der{}{x} = \der{}{\xi} - \frac{\eta}{h}h_x\der{}{\eta}, \qquad
\der{}{y} = \frac{H}{h}\der{}{\eta}, \qquad
\der{}{\xi} = \der{}{x} + \frac{y}{h}h_x\der{}{y}, \qquad
\der{}{\eta} = \frac{h}{H}\der{}{y}.
\end{gather}
The main challenges involve avoiding lower order terms that have to be
dealt with using Rellich's compactness theorem, balancing the sources
of error to avoid excessive overestimation of the constants in the
error bounds, and dealing with various subtleties of the dual space
$H^{-1}(\Omega)$ such as the fact that if $p\in L^2(\Omega)$ and
$\zeta\in L^\infty(\Omega)$ then $\|\zeta p\|_{-1}$ need not be
smaller than $\|\zeta\|_\infty \|p\|_{-1}$.
For clarity, we postpone the case that $h$ is only Lipschitz
continuous to Section~\ref{sec:lip} and begin with the simplifying
assumption $h\in C^{1,1}(T)$. The aspect ratio of the rectangle
$R$ plays an essential role in the Lipschitz case but only a minor
role (improving our estimate of $\beta$) here.
\begin{theorem} \label{thm:brezzi:omega}
Suppose $h\in C^{1,1}(T)$ and $0<h_0\le h(x)\le h_1$ for $0\le x\le L$.
Then for every $p\in L^2_\#(\Omega)$ we have
\begin{equation} \label{eqn:thm:brezzi:omega}
\big\|p\big\|_{0,\Omega}\le
\beta^{-1}\big\|\nabla p\big\|_{-1,\Omega}, \qquad
\beta^{-1}=\frac{9}{4}\big(1+M^2\big)
\left(\frac{h_1}{h_0}\right)^{1/2}
\max\left(4,\frac{L}{h_0},\frac{h_1}{h_0}\right),
\end{equation}
where $M^2=\max\Big(\big\|h_x\big\|_\infty^2,
\big\|\frac{1}{2}hh_{xx}\big\|_\infty\Big)$.
\end{theorem}
\begin{remark} \upshape The quantity $\frac{1}{2}hh_{xx}$ arises naturally
in the study of Reynolds' lubrication approximation and its higher
order corrections on a periodic domain \cite{rle:conv2}.
\end{remark}
\begin{remark} \upshape In many practical applications, the aspect
ratio $L/h_0$ is large while $M\ll1$ and $h_1/h_0\approx1$; in this
regime, (\ref{eqn:thm:brezzi:omega}) shows that $\beta^{-1}$ scales
linearly with $L/h_0$. If the geometry has a narrow gap so that
$h_1/h_0\gg1$, we learn that $\beta^{-1}$ depends on the gap size as
$h_0^{-3/2}$. This dependence is shown to be optimal in
Example~\ref{exa:opt} below. We do not know if the quadratic
dependence on $M$ is optimal; it seems to be an unavoidable artifact
of changing variables to a rectangular geometry.
\end{remark}
\begin{proofof}{\it of Theorem~\ref{thm:brezzi:omega}}.
The coordinate transformation $(x,y)=F(\xi,\eta)$ defined in
(\ref{eqn:coord1})
provides a one-to-one correspondence between functions $p\in L^2(\Omega)$,
$u\in H^1_0(\Omega)$ and their counterparts $\tilde{p}=p\circ F\in
L^2(R)$, $\tilde{u}=u\circ F\in H^1_0(R)$:
\begin{equation}
\label{eqn:coord2}
\tilde{p}(\xi,\eta) = p\left(\xi,\frac{h(\xi)}{H}\eta\right), \qquad
\tilde{u}(\xi,\eta) = u\left(\xi,\frac{h(\xi)}{H}\eta\right).
\end{equation}
$F$ does not map $L^2_\#(\Omega)$ to $L^2_\#(R)$; however, the norm
of $p\in L^2_\#(\Omega)$ does not decrease if we add a constant
to $p$ to enforce $\int_\Omega h^{-1}p\,dA=0$ instead of
$\int_\Omega p\,dA=0$.
By Theorem~\ref{thm:brezzi:R}, this new $p$ satisfies
\begin{equation}
\label{eqn:p0:omega}
\big\|p\big\|_{0,\Omega}^2 \le
\Big\|\Big(\frac{h_1}{h}\Big)^{1/2}p\hspace{1pt}\Big\|_{0,\Omega}^2 =
\frac{h_1}{H}\big\|\tilde{p}\big\|_{0,R}^2\le
C_1\frac{h_1}{H}\big\|\partial_\xi\tilde{p}\big\|_{-1,R}^2 +
C_2\frac{h_1}{H}\big\|\partial_\eta\tilde{p}\big\|_{-1,R}^2,
\end{equation}
where $C_1=\frac{9}{16}\max(16,\frac{L^2}{H^2})$ and $C_2=9$.
But since the right hand side does not change when a constant
is added to $\tilde{p}$, the original $p$ also satisfies this
equation (dropping the intermediate inequalities).
We can relate the action of
$\nabla_\xi\tilde{p}$ on $\tilde{\mb{u}}$ to that of
$\nabla_x p$ on $\mb{u}$:
\begin{align}
\label{eqn:gradx:util}
\Brak{\partial_\xi\tilde{p},\tilde{u}}_R =
\Brak{\frac{H}{h}p,
\left(-\partial_x-\frac{y}{h}h_x\partial_y\right)u}_\Omega
&= H\Brak{\partial_xp,h^{-1}u}_\Omega +
H\Brak{\partial_yp,h_x\frac{y}{h^2}u}_\Omega, \\
\label{eqn:grady:util}
\Brak{\partial_\eta\tilde{p},\tilde{v}}_R =
\Brak{\frac{H}{h}p,-\frac{h}{H}v_y}_\Omega &=
\Brak{\partial_yp,v}_\Omega,
\end{align}
where we used $\partial_x (h^{-1}) + \partial_y(yh^{-2}h_x)=0$ in
(\ref{eqn:gradx:util}). If we had not introduced the factor of
$h^{-1/2}$ in (\ref{eqn:p0:omega}), this cancellation would not have
occurred and the proof would become much more complicated;
see Remark~\ref{rk:rellich} below.
It will be shown
in Lemmas~\ref{lem:Lhu}, \ref{lem:Lhu2} and~\ref{lem:Lhu3}
that
\begin{alignat}{2}
\label{eqn:Lhu:bound}
H\big\|h^{-1}u\big\|_{a,\Omega} &\le
C_3\big\|\tilde{u}\big\|_{a,R}, & \qquad
C_3^2 &= \max\bigg(3\frac{H}{h_0},\,
\big(1+3M^2\big)\frac{H^3}{h_0^3}\,\bigg),\\
\label{eqn:Lhu:bound2}
H\Big\|h_x\frac{y}{h^2}u\Big\|_{a,\Omega} &\le
C_4\big\|\tilde{u}\big\|_{a,R}, & \qquad
C_4^2 &= \max\bigg(8M^2\frac{H}{h_0},\,
\big(2M^2 + 6M^4\big)\frac{H^3}{h_0^3}\,\bigg),\\
\label{eqn:Lhu:bound3}
\big\|v\big\|_{a,\Omega} &\le
C_5\big\|\tilde{v}\big\|_{a,R}, & \qquad
C_5^2 &= \max\bigg(2\frac{h_1}{H},\,
\big(1+2M^2\big)\frac{H}{h_0}\,\bigg).
\end{alignat}
If $h$ only belongs to $C^{0,1}(T)$, then (\ref{eqn:Lhu:bound2})
does not hold and
we have to replace the last term
in (\ref{eqn:gradx:util}) by $\Brak{\partial_y(h_xp),Hyh^{-2}u}_\Omega$,
which requires a more difficult analysis; see Section~\ref{sec:lip} below.
Combining (\ref{eqn:gradx:util})--(\ref{eqn:Lhu:bound3}), we obtain
\begin{equation}
\begin{gathered}
\big|\langle\partial_\xi\tilde{p},\tilde{u}\rangle_R\big| \le
\Big(C_3\big\|\partial_xp\big\|_{-1,\Omega} +
C_4\big\|\partial_yp\big\|_{-1,\Omega}\Big)\big\|\tilde{u}\big\|_{a,R},
\\
\big|\langle\partial_\eta\tilde{p},\tilde{v}\rangle_R\big| \le
C_5\big\|\partial_yp\big\|_{-1,\Omega}\big\|\tilde{v}\big\|_{a,R}.
\end{gathered}
\end{equation}
It follows that
\begin{equation}
\big\|\partial_\xi\tilde{p}\big\|_{-1,R}^2 \le
3C_3^2\big\|\partial_xp\big\|_{-1,\Omega}^2 +
\frac{3}{2}C_4^2\big\|\partial_yp\big\|_{-1,\Omega}^2, \quad
\big\|\partial_\eta\tilde{p}\big\|_{-1,R}^2 \le
C_5^2\big\|\partial_yp\big\|_{-1,\Omega}^2,
\end{equation}
which, together with (\ref{eqn:p0:omega}), gives
\begin{equation}
\notag
\big\|p\big\|_{0,\Omega}^2 \le
\beta^{-2}\Big(\big\|\partial_xp\big\|_{-1,\Omega}^2 +
\big\|\partial_yp\big\|_{-1,\Omega}^2\Big), \quad
\beta^{-2} = \frac{h_1}{H}\max\left(
3 C_1C_3^2,\, \frac{3}{2}C_1C_4^2 +
C_2C_5^2\right).
\end{equation}
Next, we choose $H=h_0$ so that
\begin{equation*}
3C_1C_3^2 \le 9\big(1+M^2\big)C_1, \quad
\frac{3}{2}C_1C_4^2\le\left(12M^2+9M^4\right)C_1, \quad
C_2C_5^2 \le 18\frac{h_1}{h_0}+18M^2.
\end{equation*}
Finally, we observe that $\frac{h_1}{h_0}\le\frac{1}{4}\max\left(
16,\frac{h_1^2}{h_0^2}\right)$ regardless of whether
$\frac{h_1}{h_0}\ge4$. As a result, $C_2C_5^2\le (8+2M^2)\frac{9}{16}
\max\left(16,\frac{h_1^2}{h_0^2}\right)$ and
\begin{equation}
\beta^{-2} \le \frac{h_1}{h_0}\max\left\{ 9(1+M^2), \,
8 + 14M^2 + 9M^4\right\}\frac{9}{16}\max\left(
16,\frac{L^2}{h_0^2},\frac{h_1^2}{h_0^2}\right),
\end{equation}
which yields (\ref{eqn:thm:brezzi:omega}) when we majorize
the terms in braces by $9(1+M^2)^2$.
\end{proofof}
\begin{remark} \label{rk:rellich} \upshape
One might hope to improve (\ref{eqn:thm:brezzi:omega}) by
working directly
with $\|p\|_0$ in (\ref{eqn:p0:omega}) instead of via
$\big\|h^{-1/2}p\big\|_0$. The main difference is that
(\ref{eqn:gradx:util}) acquires a lower order term
\begin{equation*}
\bbrak{\big}{\partial_\xi(h^{1/2}\tilde{p}),\tilde{u}}_R =
H\bbrak{\big}{\partial_xp,h^{-1/2}u}_\Omega +
H\bbrak{\big}{\partial_yp,yh_xh^{-3/2}u}_\Omega +
\frac{H}{2}\bbrak{\big}{p,h_xh^{-3/2}u}_\Omega
\end{equation*}
that would normally be dealt with by invoking a compactness argument
to bound $\|p\|_{-1,\Omega}$ by a constant times $\|\nabla
p\|_{-1,\Omega}$. This is not acceptable in the current
calculation as this constant depends on $\Omega$, and hence $h$. It
is possible to bound $\|p\|_{-1,\Omega}$ in terms of
$\|\tilde{p}\|_{-1,R}$ and then use Corollary~\ref{cor:brezzi:R}.
But the final step of bounding
$\|\nabla_\xi\tilde{p}\|_{-1,R}$ by $\|\nabla_x p\|_{-1,\Omega}$
brings us back to the proof given above.
The following example shows that the power of
$h_0^{-3/2}$ in the formula (\ref{eqn:thm:brezzi:omega}) for
$\beta^{-1}$ is the best possible.
\end{remark}
\begin{example} \label{exa:opt} \upshape
Suppose $0<h_0<1$ and consider a periodic function $h(x)$ that
transitions smoothly and symmetrically between $h_0$ for
$x\in[3/8,1/2]\cup[7/8,1]$ and $1$ for $x\in[1/8,1/4]\cup [5/8,3/4]$.
Let $\Omega_1$, $\Omega_2$, $\Omega_3$, and $\Omega_4$ be the regions
under the curve $h$ with $x\in [0,3/8]$, $x\in [3/8,1/2]$,
$x\in [1/2,7/8]$ and $x\in [7/8,1]$, respectively. Let $p(x,y)$ be the
continuous, piecewise linear function that equals $-1$
on $\Omega_1$, $1$ on $\Omega_3$, and satisfies $p_x=\pm16$,
$p_y=0$ on $\Omega_2$ and $\Omega_4$.
\begin{equation*}
\includegraphics[scale=.55]{figs/inf_sup_ex}
\end{equation*}
Then for any $u\in H^1_0(\Omega)$, we have
$\big|\langle \partial_y p,u\rangle\big| = 0$ and
\begin{align}
\notag
\big|\langle \partial_x p,u\rangle\big| &\le
\int_{\Omega_2\cup\Omega_4} 16|u(x,y)|\,dA \le
16\sqrt{\operatorname{area}(\Omega_2\cup\Omega_4)}\,
\|u\|_{0,\Omega_2\cup\Omega_4} \\
&\le 8h_0^{1/2}\big(h_0/\sqrt{8}\big)
\|u_y\|_{0,\Omega_2\cup\Omega_4} \le
\sqrt{8}h_0^{3/2}\|u\|_{a,\Omega},
\end{align}
where we used the Cauchy-Schwarz and Poincar\'e-Friedrichs
inequalities; see Lemma \ref{lem:pf}. Thus $\big\|\nabla
p\big\|_{-1,\Omega}\le\sqrt{8}h_0^{3/2}$ while
$\|p\|_{0,\Omega}\ge1/2$, showing that $\beta^{-1}$ in
(\ref{eqn:thm:brezzi:omega}) must be at least
$\big(2\sqrt{8}\big)^{-1}h_0^{-3/2}$, i.e.~the power $h_0^{-3/2}$ is
optimal.
\end{example}
\section{Lipschitz boundaries} \label{sec:lip}
In this section we show how to modify the proof of
Theorem~\ref{thm:brezzi:omega} to handle the case that $h$ only
belongs to $C^{0,1}(T)$. The main difference is that $yh^{-2}h_xu$ no
longer belongs to $H^1_0(\Omega)$ in (\ref{eqn:gradx:util}), so a
different strategy is required to deal with the term
$\Brak{\partial_yp,Hyh^{-2}h_xu}_\Omega$. The idea is to show that
when $h^{-2}h_x$ is grouped with~$p$, this term can be made small in
comparison to the other two terms in (\ref{eqn:gradx:util}) by
choosing the aspect ratio of the rectangle $R$ small enough.
The loss
of a power of $h_0^{1/2}$ in the estimate of $\beta^{-1}$ when $M$ is
not small is discussed in Remark~\ref{rk:h0:degrade} below.
\begin{theorem} \label{thm:lip}
Suppose $h\in C^{0,1}(T)$ and $0<h_0\le h(x)\le h_1$ for $0\le x\le L$.
Then for every $p\in L^2_\#(\Omega)$ we have
\begin{equation} \label{eqn:thm:lip}
\big\|p\big\|_{0,\Omega}\le
\beta^{-1}\big\|\nabla p\big\|_{-1,\Omega}, \qquad
\beta^{-1}=2\max\left(4,\frac{L}{\sqrt{h_0h_1}},
8\frac{L}{h_0}M\right)\max(1,8M)\frac{h_1}{h_0},
\end{equation}
where $M=\|h_x\|_\infty$.
\end{theorem}
\begin{proof}
As before, (\ref{eqn:p0:omega}) holds for all $p\in L^2_\#(\Omega)$:
\begin{equation}
\label{eqn:p0:omega1}
\big\|p\big\|_{0,\Omega}^2 \le
C_1\frac{h_1}{H}\big\|\partial_\xi\tilde{p}\big\|_{-1,R}^2 +
C_2\frac{h_1}{H}\big\|\partial_\eta\tilde{p}\big\|_{-1,R}^2, \qquad
C_1=\frac{9}{16}\max\left(16,\frac{L^2}{H^2}\right), \;\; C_2=9.
\end{equation}
We now transform the problematic term in (\ref{eqn:gradx:util})
back to the $\xi,\eta$ coordinate system:
\begin{alignat}{2}
\label{eqn:gradx:util1}
\Brak{f,\tilde{u}}_R -
\Brak{g_1,\tilde{u}}_R &:=
\Brak{\partial_\xi\tilde{p},\tilde{u}}_R -
\Brak{\partial_\eta(h^{-1}h_x\tilde{p}),\eta\tilde{u}}_R & &=
\Brak{\partial_xp,Hh^{-1}u}_\Omega, \\
\label{eqn:grady:util1}
\Brak{g,\tilde{v}}_R &:=
\Brak{\partial_\eta\tilde{p},\tilde{v}}_R & &=
\Brak{\partial_yp,v}_\Omega.
\end{alignat}
So we can bound $\|p\|_{0,\Omega}$ in terms of $f$ and $g$ and we can
bound $(f-g_1)$ and $g$ in terms of $\|\nabla_xp\|_{-1,\Omega}$; thus,
we need a bridge from $f$ to $(f-g_1)$ and $g$. By
Corollary~\ref{cor2:brezzi:R} and Lemma~\ref{lem:eta:u:bound},
\begin{gather}
\notag
|\Brak{g_1,\tilde{u}}_R|
\le \|\partial_\eta(h^{-1}h_x\tilde{p})\|_{-1,R}\|\eta\tilde{u}\|_{a,R}
\le \Big(3Mh_0^{-1}\|\nabla_\xi\tilde{p}\|_{-1,R}\Big)
\left(\frac{4}{3}H\|\tilde{u}\|_{a,R}\right), \\
\label{eqn:g1:bound}
\Rightarrow \qquad \|g_1\|_{-1,R}^2 \le \theta^2
\Big(\|f\|_{-1,R}^2 + \|g\|_{-1,R}^2\Big), \qquad
\theta = 4\frac{H}{h_0}M.
\end{gather}
As a result,
$\|f\|^2 \le 2\|f-g_1\|^2 + 2\theta^2(\|f\|^2 + \|g\|^2)$,
which implies
\begin{equation}
\label{eqn:f:bound}
\|f\|^2 \le 4\|f-g_1\|^2 + 4\theta^2\|g\|^2, \qquad (\theta^2\le 1/4).
\end{equation}
Equation (\ref{eqn:p0:omega1}) now becomes
\begin{equation}
\label{eqn:p0:omega2}
\|p\|_{0,\Omega}^2 \le 4C_1\frac{h_1}{H}\|f-g_1\|_{-1,R}^2
+ (4\theta^2C_1 + C_2)\frac{h_1}{H}\|g\|_{-1,R}^2, \qquad
(HM\le h_0/8).
\end{equation}
From (\ref{eqn:gradx:util1}) and (\ref{eqn:grady:util1}) we see that
\begin{equation}
|\Brak{f-g_1,\tilde{u}}| \le
\|\partial_xp\|_{-1,\Omega}\|Hh^{-1}u\|_{a,\Omega}, \qquad
|\Brak{g,\tilde{v}}| \le
\|\partial_yp\|_{-1,\Omega}\|v\|_{a,\Omega}.
\end{equation}
By Lemmas~\ref{lem:Lhu} and \ref{lem:Lhu3} below, we then have
\begin{alignat}{2}
\label{eqn:Lhu2:bound}
\|f-g_1\|_{-1,R} &\le C_3\|\partial_xp\|_{-1,\Omega}, & \qquad
&C_3^2 = \max\left(\frac{9}{8}\frac{H}{h_0},(1+16M^2)
\frac{H^3}{h_0^3}\right), \\
\label{eqn:Lhu2:bound3}
\|g\|_{-1,R} &\le C_5\|\partial_yp\|_{-1,\Omega}, & \qquad
&C_5^2 = \max\left(\frac{9}{8}\frac{h_1}{H},(1+9M^2)
\frac{H}{h_0}\right).
\end{alignat}
It follows from (\ref{eqn:p0:omega2}) that
\begin{equation}
\notag
\big\|p\big\|_{0,\Omega}^2 \le
\beta^{-2}\Big(\big\|\partial_xp\big\|_{-1,\Omega}^2 +
\big\|\partial_yp\big\|_{-1,\Omega}^2\Big), \quad
\beta^{-2} = \frac{h_1}{H}\max\left(
4 C_1C_3^2,\, (4\theta^2C_1+C_2)C_5^2\right).
\end{equation}
Finally, we choose $H=\min\left(h_0,\frac{1}{8M}h_0\right)$ so that
if $M\ge1/8$ we have
\footnotesize
\begin{equation*}
4\frac{h_1}{H}C_3^2 \le \max
\left(\frac{9}{2},\frac{4}{64M^2}+1\right)\frac{h_1}{h_0}
\le 5\frac{h_1}{h_0},
\quad
\frac{h_1}{H}C_5^2 \le \max\left(72\frac{h_1^2}{h_0^2}M^2,
(1+9M^2)\frac{h_1}{h_0}\right)
\le 73M^2\frac{h_1^2}{h_0^2}
\end{equation*}
\normalsize
and if $M\le1/8$ we have
\footnotesize
\begin{equation*}
4\frac{h_1}{H}C_3^2 \le \max
\left(\frac{9}{2},4+64M^2\right)\frac{h_1}{h_0}
\le 5\frac{h_1}{h_0},
\quad
\frac{h_1}{H}C_5^2 \le \max\left(\frac{9}{8}\frac{h_1^2}{h_0^2},
(1+9M^2)\frac{h_1}{h_0}\right)
\le \frac{73}{64}\frac{h_1^2}{h_0^2}.
\end{equation*}
\normalsize
Moreover, $C_1 = \max\left(9,\frac{9}{16}\frac{L^2}{h_0^2},
36\frac{L^2}{h_0^2}M^2\right)$ and $4\theta^2C_1+C_2\le
2\max\left(9,36\frac{L^2}{h_0^2}M^2\right)$ regardless
of whether $M\le1/8$. Combining these results, we obtain
\begin{equation}
\begin{aligned}
4C_1\frac{h_1}{H}C_3^2 &\le 5\max\left(9,\frac{9}{16}\frac{L^2}{h_0^2},
36\frac{L^2}{h_0^2}M^2\right)\frac{h_1}{h_0}, \\
(4\theta^2C_1+C_2)\frac{h_1}{H}C_5^2 &\le
\frac{73}{32}\max\left(9,36\frac{L^2}{h_0^2}M^2\right)\max(1,64M^2)
\frac{h_1^2}{h_0^2}.
\end{aligned}
\end{equation}
Formula (\ref{eqn:thm:lip}) for $\beta^{-1}$ follows by taking the
square root of the maximum of these expressions after increasing the
constants and consolidating terms.
\end{proof}
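The elementary max-inequalities used above to consolidate the constants in the two branches $M\ge1/8$ and $M\le1/8$ can be spot-checked on a grid (a Python sanity check of ours, not part of the proof; $r$ stands for $h_1/h_0$):

```python
# Check both displayed consolidation steps on a grid of M and r = h1/h0 >= 1.
Ms_big = [0.125 + 0.01*i for i in range(200)]      # branch M >= 1/8
Ms_small = [0.125 - 0.001*i for i in range(125)]   # branch M <= 1/8
rs = [1 + 0.1*j for j in range(100)]

for M in Ms_big:
    assert max(9/2, 4/(64*M*M) + 1) <= 5 + 1e-12
    for r in rs:
        assert max(72*r*r*M*M, (1 + 9*M*M)*r) <= 73*M*M*r*r + 1e-9
for M in Ms_small:
    assert max(9/2, 4 + 64*M*M) <= 5 + 1e-12
    for r in rs:
        assert max((9/8)*r*r, (1 + 9*M*M)*r) <= (73/64)*r*r + 1e-9
print("all branch inequalities hold on the grid")
```

Note that equality occurs at $M=1/8$, $r=1$, so the constant $73$ cannot be lowered in this argument.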
\begin{remark} \upshape
Inequality (\ref{eqn:g1:bound})
is the key to this proof. For fixed $u$, both $\Brak{f,\tilde{u}}$
and $\Brak{g_1,\tilde{u}}$ in (\ref{eqn:gradx:util1}) scale like $H$
while $\Brak{g,\tilde{u}}$ in (\ref{eqn:grady:util1}) is independent
of $H$. Because of the way $\|\tilde{u}\|_{a,R}$ depends on $H$, it
follows that if $R_1=T\times H_1$, $R_2=T\times H_2$, and $H_1<H_2$,
then
\begin{equation*}
\|g_1\|_{-1,R_1}^2 \le \frac{H_1}{H_2}\|g_1\|_{-1,R_2}^2, \quad
\|f\|_{-1,R_1}^2 \le \frac{H_1}{H_2}\|f\|_{-1,R_2}^2, \quad
\|g\|_{-1,R_1}^2 \le \frac{H_2}{H_1}\|g\|_{-1,R_2}^2.
\end{equation*}
Thus, $\|g_1\|^2$ and $\theta^2\|g\|^2$ are both $O(H)$ quantities and
the surprising aspect of (\ref{eqn:g1:bound}) is that the $O(H^3)$
term $\theta^2\|f\|^2$ is sufficient to help $\theta^2\|g\|^2$
bound $\|g_1\|^2$.
\end{remark}
\begin{remark} \label{rk:h0:degrade} \upshape
We believe the optimal bound in the Lipschitz case should scale like
$\beta^{-1}\sim h_0^{-3/2}$, just as in the $C^{1,1}$ case; however,
proving this would require eliminating (or at least finding a better
bound for) the cross term $4\theta^2C_1\frac{h_1}{H}\|g\|_{-1,R}^2$ in
(\ref{eqn:p0:omega2}). As it stands, $\theta^2C_1$ and
$H^{-1}\|g\|^2$ each contribute a factor of $h_0^{-2}$ to this cross
term due to the requirement $HM\le h_0/8$, which yields
$\beta^{-2}\sim h_0^{-4}$. We suspect that the functions $p$ that
require $C_1$ to diverge as $H\rightarrow0$ are distinct from the
functions $p$ for which $\|f-g_1\|\ll\|f\|$ in (\ref{eqn:f:bound})%
, but we have not found a way to make this idea rigorous.
\end{remark}
https://arxiv.org/abs/2009.02179

Eigenpolytopes, Spectral Polytopes and Edge-Transitivity

Abstract: Starting from a finite simple graph $G$, for each eigenvalue $\theta$ of its adjacency matrix one can construct a convex polytope $P_G(\theta)$, the so-called $\theta$-eigenpolytope of $G$. For some polytopes this technique can be used to reconstruct the polytope from its edge-graph. Such polytopes (we shall call them spectral) are still badly understood. We give an overview of the literature on eigenpolytopes and spectral polytopes. We introduce a geometric condition by which to prove that a given polytope is spectral (more exactly, $\theta_2$-spectral). We apply this criterion to the edge-transitive polytopes. We show that every edge-transitive polytope is $\theta_2$-spectral, is uniquely determined by its edge-graph, and realizes all its symmetries. We give a complete classification of the distance-transitive polytopes.
\section{Implementation in Mathematica}
\label{sec:appendix_mathematica}
The following short Mathematica script takes as input a graph $G$ (in the example below, this is the edge-graph of the dodecahedron), and an index $k$ of an eigenvalue.
It then computes the $v_i$ (or \texttt{vert} in the code), \shortStyle{i.e.,}\ the vertex-coordinates of the $\theta_k$-eigenpolytope.
If the dimension turns out to be appropriate, the spectral embedding of the graph, as well as the eigenpolytope are plotted.
\vspace{0.5em}
\begin{lstlisting}
(* Input:
* the graph G, and
* the index k of an eigenvalue (k = 1 being the largest eigenvalue).
*)
G = GraphData["DodecahedralGraph"];
k = 2;
(* Computation of vertex coordinates 'vert' *)
n = VertexCount[G];
A = AdjacencyMatrix[G];
eval = Tally[Sort@Eigenvalues[A//N], Round[#1-#2,0.00001]==0 &];
d = eval[[-k,2]]; (* dimension of the eigenpolytope *)
vert = Transpose@Orthogonalize@
NullSpace[eval[[-k,1]] * IdentityMatrix[n] - A];
(* Output:
* the graph G,
* its eigenvalues with multiplicities,
* the spectral embedding, and
* its convex hull (the eigenpolytope).
*)
G
Grid[Join[{{$\theta$,"mult"}}, eval], Frame$\to$All]
Which[
d<2 , Print["Dimension too low, no plot generated."],
d==2, GraphPlot[G, VertexCoordinates$\to$vert],
d==3, GraphPlot3D[G, VertexCoordinates$\to$vert],
d>3 , Print["Dimension too high, 3-dimensional projection is plotted."];
GraphPlot3D[G, VertexCoordinates$\to$vert[[;;,1;;3]] ]
]
If[d==2 || d==3,
Region`Mesh`MergeCells[ConvexHullMesh[vert]]
]
\end{lstlisting}
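For readers without Mathematica, the same computation can be sketched in Python (our own translation, assuming NumPy is available; for brevity the input here is the 3-cube graph $Q_3$, whose $\theta_2$-eigenpolytope is the cube itself, rather than the dodecahedral graph):

```python
import numpy as np

# Adjacency matrix of the 3-cube graph Q3: vertices are 3-bit strings,
# adjacent iff they differ in exactly one bit.
n = 8
A = np.zeros((n, n))
for i in range(n):
    for b in range(3):
        A[i, i ^ (1 << b)] = 1

# Eigenvalues grouped into (theta, multiplicity), largest first.
w, V = np.linalg.eigh(A)
vals, counts = np.unique(np.round(w, 5), return_counts=True)
eval_table = list(zip(vals[::-1], counts[::-1]))   # [(3,1), (1,3), (-1,3), (-3,1)]

# Vertex coordinates of the theta_k-eigenpolytope for k = 2
# (row i of 'vert' is the coordinate vector v_i of vertex i).
theta = eval_table[1][0]
vert = V[:, np.isclose(w, theta)]
print(float(eval_table[1][0]), int(eval_table[1][1]))   # 1.0 3
```

Here \texttt{vert} has shape $8\times3$, \shortStyle{i.e.,}\ the $\theta_2$-eigenpolytope of $Q_3$ is 3-dimensional, matching the role of \texttt{vert} in the Mathematica script above.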
\section{Motivation}
\label{sec:motivation}
We dive into three curious topics in the interplay between graphs and polytopes, all of which will be addressed in this paper.
\subsection{Reconstructing polytopes from the edge-graph}
\label{sec:reconstructing_polytopes}
Since Steinitz \cite{...} it is known that the edge-graphs of polyhedra are exactly the 3-connected planar graphs, and that any such graph belongs to a unique combinatorial type of polyhedra.
On the other hand, the study of higher-dimensional polytopes taught us that the edge-graph carries very little information about its polytope if $d\ge 4$.
For example, the edge-graph of any simplex is the complete graph.
But so is the edge-graph of every \emph{neighborly polytope}, and so, given only the edge-graph, we will never know from which polytope it was obtained initially.
The same example shows that in many cases not even the dimension of the polytope can be reconstructed.
As a light in the dark, we have the result of Blind and Mani \cite{...} (with its modern proof from Kalai \cite{kalai1988simple}), which states that the combinatorial type of a \emph{simple polytope} can be reconstructed from its edge-graph.
One can then ask what other classes of polytopes allow such a unique reconstruction.
For example, it might be that every \enquote{sufficiently symmetric} polytope is reconstructable in this sense.
\begin{question}\label{q:symmetrc_polytopes_with_same_edge_graph}
Can there be two distinct sufficiently symmetric polytopes with the same edge-graph?
\end{question}
One might ask \enquote{what counts as sufficiently symmetric?}, and this paper shall answer this question partially.
For example, vertex-transitivity is not sufficient:
every~even-dimen\-sional \emph{cyclic polytope} $C(n,2d)$ is neighborly and can be realized in a vertex-transitive way: set the coordinates of the $i$-th vertex to be
$$\big(\sin(t_i),\cos(t_i);\sin(2t_i),\cos(2t_i);...;\sin(dt_i),\cos(dt_i)\big)\in\RR^{2d}$$
with $t_i=2\pi i/n$. The edge-graph is $K_n$, independent of $d$.
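One can check numerically that these coordinates are indeed vertex-transitive: a single block-diagonal rotation sends vertex $i$ to vertex $i+1$ (a small Python check of ours, illustrated for $n=7$, $d=2$):

```python
import math

n, d = 7, 2   # C(7,4): a 4-dimensional cyclic polytope on 7 vertices

def vertex(i):
    t = 2*math.pi*i/n
    return [f(j*t) for j in range(1, d+1) for f in (math.sin, math.cos)]

def rotate(v):
    # Block-diagonal rotation: block j rotates by j*2*pi/n, so the pair
    # (sin(j*t), cos(j*t)) goes to (sin(j*(t+2*pi/n)), cos(j*(t+2*pi/n))).
    out = []
    for j in range(1, d+1):
        s, c = v[2*(j-1)], v[2*(j-1)+1]
        phi = j*2*math.pi/n
        out += [s*math.cos(phi) + c*math.sin(phi),
                c*math.cos(phi) - s*math.sin(phi)]
    return out

for i in range(n):
    assert all(abs(a - b) < 1e-12 for a, b in zip(rotate(vertex(i)), vertex(i + 1)))
print("the rotation permutes the vertices cyclically")
```

Since the rotation is orthogonal and cyclically permutes the vertex set, the symmetry group of the realization acts transitively on the vertices.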
The next candidate in line, \emph{edge-transitivity}, seems more promising.
And while a complete classification of edge-transitive polytopes might be possible (though none is currently known), a more satisfying outcome would be a result independent of any such classification.
\subsection{Realizing symmetries of the edge-graph}
\label{sec:realizing_symmetries}
Polytopes are at most as symmetric as their edge-graph.
Any symmetry $T\in\Aut(P)\subset\Ortho(\RR^d)$ of a polytope $P\subset\RR^d$ induces a symmetry $\phi\in\Aut(G_P)\subseteq\Sym(n)$ of the edge-graph.
And thus $\Aut(P)\le\Aut(G_P)$ as abstract groups.
For general polytopes, one cannot expect the polytope to show many, or indeed any, of the symmetries of its edge-graph.
However, one can always try to find a sufficiently symmetric realization of the polytope, so that a maximum of combinatorial symmetries of the edge-graph is realized as geometric symmetries of the polytope.
One can even hope to realize \emph{all} the symmetries of the edge-graph.
In the case of polyhedra, the circle packing version of the Steinitz theorem provides a way to realize all the symmetries of the edge-graph.
Again, higher dimensions tell a different story: there exist 4-dimensional polytopes whose edge-graphs cannot be realized with all their symmetries (consider \shortStyle{e.g.}\ the dual of the polytope constructed in \cite{bokowski1984combinatorial}).
Again, it seems that assuming sufficient symmetry of the polytope ensures that the polytope has already \emph{all} the symmetries of its edge-graph.
We can actually prove this in the case of so-called \emph{distance-transitive} symmetry, and we hope to find the same in the case of merely edge-transitive polytopes.
\begin{question}
Does every sufficiently symmetric polytope realize all the symmetries of its edge-graph?
\end{question}
\subsection{Rigidity and perfect polytopes}
\label{sec:rigid_perfect}
Every polytope can be continuously deformed while preserving its combinatorial type.
In the case of highly symmetric polytopes, one might additionally be interested in preserving the symmetry properties during the deformation.
A polytope that cannot be deformed without losing any of its symmetries is called \emph{rigid} or, occasionally, a \emph{perfect polytope}.
All regular polytopes are examples of such rigid polytopes, and there exist many more.
While quite a few examples are known, no general construction procedure has been described.
Again, one might hope to show that all sufficiently symmetric polytopes are rigid, in particular, the edge-transitive polytopes.
\begin{question}
Is every sufficiently symmetric polytope rigid?
\end{question}
\subsection{A motivating example}
\label{sec:motivating_example}
Consider your favorite highly symmetric polytope $P\subset\RR^d$.
For demonstration, we choose $P$ to be the \emph{regular dodecahedron}.\footnote{Any other Platonic solid (or regular polytope) would do the trick; ideally, your polytope is transitive on vertices and edges.}
Here are some observations:
\begin{myenumerate}
\item $P$ is highly symmetric (it is a regular polyhedron). All its symmetries must also be reflected in the edge-graph $G_P$, since any automorphism of $P$ restricts to an automorphism of $G_P$.
However, it is far from evident whether any particular symmetry of $G_P$ is reflected in $P$ as well.
In the case of the dodecahedron, every combinatorial symmetry of the edge-graph is indeed a geometric symmetry of $P$.
\item The study of higher-dimensional polytopes teaches us that the edge-graph of a polytope usually carries very little information about the polytope itself.
For example, the edge-graph of a simplex is a complete graph.
But so is the edge-graph of every \emph{neighborly polytope}.
It was shown in \cite{kalai1988simple} that this reconstruction problem does not occur for \emph{simple polytopes}, and it is our goal to show that it is also not to be expected for sufficiently symmetric polytopes.
\item A polytope is called \emph{perfect} if its combinatorial type has a unique geometric realization of highest symmetry.
Many highly symmetric polytopes that come to mind have this property. But many others do not.
We show that being edge-transitive seems to be the right assumption to ensure being perfect.
\end{myenumerate}
\section{Eigenpolytopes and spectral polytopes}
One way to connect spectral graph theory to polytope theory is via the construction of the so-called \emph{eigenpolytopes}.
In this section, we will survey this construction and its application primarily to the study of highly symmetric graphs.
From this section on, let $P\subset\RR^d$ denote a $d$-dimensio\-nal convex polytope with $d\ge 2$.
If not stated otherwise, we shall~assume that $P$ is \emph{full-dimensional}, that is, that $P$ is not contained in any proper subspace of $\RR^d$.
Let $\F_\delta(P)$ denote the set of \emph{$\delta$-dimensional faces} of $P$.
Fix some bijection $v\: V\to \F_0(P)$ between $V=\{1,...,n\}$ and the vertices of $P$.
Let $G_P=(V,E)$ be the graph with $ij\in E$ if and only if $\conv\{v_i,v_j\}$ is an edge of $P$.
$G_P$ is called \emph{edge-graph} of $P$, and $v$ the \emph{skeleton} of $P$.
Note that we can consider $v$ as a realization of $G_P$.
\subsection{Eigenpolytopes}
\begin{definition}
Given a graph $G$ and an eigenvalue $\theta\in \Spec(G)$ of multiplicity $d$. If $v$ is the $\theta$-realization of $G$, then
%
$$P_G(\theta):=\conv\{v_i\mid i\in V\}\subset\RR^d$$
%
is called \emph{$\theta$-eigenpolytope} of $G$, or just \emph{eigenpolytope} if $\theta$ is not specified.
\end{definition}
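As a concrete illustration (a Python/numpy sketch, assuming that the $\theta$-realization sends vertex $i$ to the $i$-th row of a matrix whose columns span the $\theta$-eigenspace; this convention and the code are ours, not part of the definition above), the $\theta_2$-eigenpolytope of the cycle $C_5$ is a regular pentagon:

```python
import numpy as np

# adjacency matrix of the cycle C_5
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

evals, evecs = np.linalg.eigh(A)          # eigenvalues in ascending order
theta2 = 2 * np.cos(2 * np.pi / 5)        # second largest eigenvalue of C_5

# basis of the theta_2-eigenspace (multiplicity 2); rows are the points v_i
mask = np.abs(evals - theta2) < 1e-9
V = evecs[:, mask]
assert V.shape == (5, 2)                  # the eigenpolytope is 2-dimensional

# all vertices lie on a common circle and all edges have equal length,
# independently of the chosen orthonormal eigenbasis
norms = np.linalg.norm(V, axis=1)
edges = [np.linalg.norm(V[i] - V[(i + 1) % n]) for i in range(n)]
assert np.allclose(norms, norms[0])
assert np.allclose(edges, edges[0])
```

The invariance under the choice of eigenbasis holds because $VV^\top$ is the projection onto the eigenspace, which for a circulant matrix is itself circulant.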
The notion of the eigenpolytope goes back to Godsil \cite{godsil1978graphs}, and can~be under\-stood~as a geometric way to study the combinatorial, algebraic~and~spectral properties of a graph.
Several authors investigated the eigenpolytopes of famous graphs or graph families.
For example, Powers \cite{powers1986petersen} gave a description for the face structure of~the~different eigenpolytopes of the Petersen graph, one of which will reappear as distance-transitive polytope in \cref{sec:distance_transitive_polytopes}.
Mohri \cite{mohri1997theta_1} analysed the geometry of what he called \emph{Hamming polytopes}, the $\theta_2$-eigenpolytopes of the Hamming graphs.
As it turns out (though he did not note this himself), these polytopes are simply the Cartesian products of regular simplices.
These, as well, are distance-transitive~\mbox{polytopes}. Powers \cite{powers1988eigenvectors} took a first look at the eigenpolytopes of the \emph{distance-regular} graphs.
Those especially will be discussed in greater detail below.
Finally, Padrol and Pfeifle \cite{padrol2010graph} investigated graph operations and their effect on the associated eigenpolytopes.
Before we proceed, recall the polytope terminology fixed at the beginning of this section.
\subsection{Spectral polytopes}
Most graphs are not edge-graphs of polytopes (so-called \emph{polytopal} graphs), but eigenpolytopes provide an operation that assigns to every graph (and an eigenvalue thereof) such a polytopal graph (the edge-graph of the eigenpolytope).
This operation is most interesting in the case of vertex-transitive graphs.
If $G$ is a vertex-transitive graph on $n$ vertices, every vertex of $G$ becomes a vertex of $P_G(\theta)$ (though possibly several vertices are assigned to the same point).
It seems interesting to ask when this operation is a fixed point, that is, when the edge-graph of the eigenpolytope is isomorphic to the original graph.
We are specifically interested in one particular phenomenon associated with eigenpolytopes, that can best be observed in our running example.
\begin{definition}
\quad
\begin{myenumerate}
\item A polytope $P$ is called \emph{balanced} resp.\ \emph{spectral}, if this is the case for its skeleton.
\item A graph is called \emph{spectral} if it is the edge-graph of a spectral polytope.
\end{myenumerate}
\end{definition}
For a polytope, being spectral seems to be a quite exceptional property.
For example, being spectral implies that the polytope realizes all the symmetries of its edge-graph.
So, no neighborly polytope except for the simplex can be spectral.
In contrast, all (centered) neighborly polytopes are balanced, and many neighborly polytopes are eigenpolytopes (\shortStyle{e.g.}\ one of the Petersen-polytopes is neighborly).
If a graph is spectral with eigenvalue $\theta$, then it is isomorphic to the edge-graph of its $\theta$-eigenpolytope.
However, the converse is not true: the $\theta_3$-realization of the pentagon-graph is a pentagram, whose convex hull is a pentagon. But the pentagon-graph is not $\theta_3$-spectral.
\begin{center}
\includegraphics[width=0.7\textwidth]{img/convex_hull_5_gon}
\end{center}
\begin{example}
Every neighborly polytope has eigenvalue $-1$ (if centered appropriately), but none of these is spectral, except for the simplices.
\end{example}
\begin{example}
It is an easy exercise to show that the $d$-cubes and $d$-cross-polytopes are spectral with eigenvalue $\theta_2$.
In fact, it was shown by Licata and Powers \cite{licata1986surprising} that also the regular polygons, all simplices, the regular dodecahedron and the regular icosahedron are spectral in this sense.
The only regular polytopes which they were not able to prove spectral were the 24-cell, 120-cell and 600-cell.
However, they conducted numerical experiments hinting in that direction.
\end{example}
Note that the $\theta_3$-realization of the cube-graph is a polytope (the tetrahedron) and its edges correspond to edges of the graph, but the realization is not faithful.
In a slightly more general setting, Izmestiev \cite{izmestiev2010colin} found that every polytope is spectral if we are allowed to add appropriate edge-weights to the edge-graph.
This results in an adjacency matrix whose entries can be arbitrary non-negative real numbers instead of just zero and one.
This however might break the symmetry in a combinatorially highly symmetric graph and is therefore not our preferred perspective.
First results on this topic were obtained by Licata and Powers \cite{licata1986surprising}, who identified several polytopes as spectral.
Among these are all regular polytopes except for the exceptional 4-dimensional ones: the 24-cell, 120-cell and 600-cell.
Still kind of \enquote{unexplained} is their finding that the \emph{rhombic dodecahedron} is spectral as well.
Since the edge graph of this polyhedron is not regular, this observation hinges critically on the fact that we consider eigenpolytopes constructed from the adjacency matrix, rather than, say, the Laplace matrix.
Godsil \cite{godsil1998eigenpolytopes} classified all spectral graphs among the distance-regular graphs (for a definition, see \cref{sec:distance_transitive_realizations}).
He found that these are the following:
\begin{theorem}[Godsil, \cite{...}]
\label{res:spectral_distance_regular_graphs}
Let $G$ be a distance-regular graph.
If $G$ is $\theta_2$-spectral,~then $G$ is one of the following:
\begin{enumerate}[label=$(\text{\roman*}\,)$]
\item a cycle graph $C_n,n\ge 3$,
\item the edge-graph of the dodecahedron,
\item the edge-graph of the icosahedron,
\item the complement of a disjoint union of edges,
\item a Johnson graph $J(n,k)$,
\item a Hamming graph $H(d,q)$,
\item a halved $n$-cube $\nicefrac12 Q_n$,
\item the Schläfli graph, or
\item the Gosset graph.
\end{enumerate}
\end{theorem}
One remarkable fact about this list, which seems to have gone unnoticed, is that all these graphs are actually \emph{distance-transitive}.
In general, not a single polytope (or graph) has been found that is $\theta_i$-spectral for some $i\not=2$ (assuming that the graph has at least one edge).
It is thus not implausible to conjecture that skeleta of polytopes can only have eigenvalue $\theta_2$, if any.
This intuition comes from the study of so-called \emph{nodal domains}.
There is a remarkable construction due to Izmestiev \cite{izmestiev2010colin}.
\begin{theorem}[Izmestiev \cite{izmestiev2010colin}, Theorem 2.4]\label{res:izmestiev}
Let $P\subset\RR^d$ be a polytope, $G_P$ its~edge-graph and $v\:V\to\RR^d$ its skeleton. For $c=(c_1,...,c_n)\in\RR^n$ define
$$P^\circ(c):=\{x\in\RR^d\mid\<x,v_i\>\le c_i\text{ for all $i\in V$}\}.$$
Note that $P^\circ=P^\circ(1,...,1)$ is the polar dual of $P$.
The matrix $X\in\RR^{n\x n}$ with~compo\-nents
$$X_{ij}:=-\frac{\partial^2 \vol(P^\circ(c))}{\partial c_i\partial c_j}\Big|_{c=(1,...,1)}$$
has the following properties:
\begin{myenumerate}
%
\item $X_{ij}< 0$ whenever $ij\in E(G_P)$,
\item $X_{ij}=0$ whenever $ij\not\in E(G_P)$,
\item $XM=0$, where $M$ is the arrangement matrix of $v$,
\item $X$ has a unique negative eigenvalue, and this eigenvalue is simple,
\item $\dim\ker X=d$.
\end{myenumerate}
\end{theorem}
One can view the matrix $X$ (or better, $-X$) as kind of an adjacency matrix of~a vertex- and edge-weighted version of $G_P$.
Part $(iii)$ states that $v$ is balanced~with eigenvalue zero.
Since $\rank M=d$, part $(v)$ states that the arrangement space~$U:=$ $\Span M$ is already the whole eigenspace.
And part $(iv)$ states that zero is the second largest eigenvalue of $-X$.
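Properties $(i)$--$(v)$ can be checked by hand in a small example. The following Python sketch is illustrative only: the polytope $P=\conv\{\pm e_1,\pm e_2\}$ and the closed-form dual volume are our choices, not part of the theorem. It computes $X$ by finite differences:

```python
import numpy as np

# P = conv{(1,0),(0,1),(-1,0),(0,-1)}; then P°(c) = {x : <x, v_i> <= c_i}
# is the rectangle [-c3, c1] x [-c4, c2], so its volume has a closed form.
V = np.array([(1, 0), (0, 1), (-1, 0), (0, -1)], dtype=float)  # arrangement matrix M
def vol(c):
    return (c[0] + c[2]) * (c[1] + c[3])

# X_ij = -d^2 vol / dc_i dc_j at c = (1,1,1,1), via central differences
h, c0 = 1e-4, np.ones(4)
X = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        ei, ej = np.eye(4)[i], np.eye(4)[j]
        X[i, j] = -(vol(c0 + h*ei + h*ej) - vol(c0 + h*ei - h*ej)
                    - vol(c0 - h*ei + h*ej) + vol(c0 - h*ei - h*ej)) / (4*h*h)
X = np.round(X, 6)

# the edge-graph of P is the 4-cycle; check properties (i)-(v):
assert X[0, 1] < 0 and X[1, 2] < 0          # (i)  negative on edges
assert X[0, 2] == 0 and X[1, 3] == 0        # (ii) zero on non-edges
assert np.allclose(X @ V, 0)                # (iii) XM = 0
evals = np.linalg.eigvalsh(X)
assert (evals < -1e-9).sum() == 1           # (iv) unique, simple negative eigenvalue
assert (np.abs(evals) < 1e-9).sum() == 2    # (v)  dim ker X = d = 2
```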
\begin{theorem}\label{res:implies_spectral}
Let $P\subset\RR^d$ be a polytope, and $X\in\RR^{n\x n}$ the associated matrix~defined as in \cref{res:izmestiev}. If we have that
\begin{myenumerate}
\item $X_{ii}$ is independent of $i\in V(G_P)$, and
\item $X_{ij}$ is independent of $ij\in E(G_P)$,
\end{myenumerate}
then $P$ is spectral with eigenvalue $\theta_2$.
\begin{proof}
By assumption there are $\alpha,\beta\in\RR$, $\beta>0$, so that $X_{ii}=\alpha$ for all vertices~$i\in$ $V(G_P)$, and $X_{ij}=-\beta$ for all edges $ij\in E(G_P)$.
We then find
$$X=\alpha \Id - \beta A\quad\implies\quad (*)\;A=\frac\alpha\beta \Id-\frac1\beta X,$$
where $A$ is the adjacency matrix of $G_P$.
By \cref{res:izmestiev} $(iv)$ and $(v)$, the matrix $X$ has second smallest eigenvalue zero of multiplicity $d$.
By \cref{res:izmestiev} $(iii)$, the columns of $M$ are the corresponding eigenvectors.
Since $\rank M=d$ we find that these are all the eigenvectors, in particular, that the arrangement space $U:=\Span M$ is the 0-eigenspace of $X$.
By $(*)$, $A$ (that is, $G_P$) has then second \emph{largest} eigenvalue $\theta_2=\alpha/\beta$ of multiplicity $d$ with eigenspace $U$.
The skeleton of $P$ is then the $\theta_2$-realization of $G_P$ and $P$ is spectral.
\end{proof}
\end{theorem}
\begin{corollary}
If $P\subset\RR^d$ is simultaneously vertex- and edge-transitive, then $P$ is spectral with eigenvalue $\theta_2$.
\end{corollary}
It seems tempting to conjecture that the spectral polytopes are \emph{exactly} the polytopes that satisfy the conditions $(i)$ and $(ii)$ in \cref{res:implies_spectral}.
This would give an interesting geometric characterization of eigenpolytopes.
Izmestiev proved the following:
for $ij\in E$ let $\sigma_i,\sigma_j\in\F_{d-1}(P^\circ)$ be the facets dual to the vertices $v_i$ and $v_j$ of $P$. Then
$$X_{ij}= -\frac{\vol_{d-2}(\sigma_i\cap\sigma_j)}{\|v_i\|\|v_j\|\sin\angle(v_i,v_j)}.$$
There seems to be no equally elementary formula for $X_{ii},i\in V$, but its value can be derived from $XM=0$, which implies
$$X_{ii} v_i = -\sum_{\mathclap{j\in N(i)}} X_{ij} v_j.$$
A proof of this characterization, however, seems currently out of reach.
There are many balanced polytopes that are not spectral (\shortStyle{e.g.}\ all appropriately centered neighborly polytopes that are not simplices).
Further, there are polytopes for which being spectral appears completely exceptional, \shortStyle{e.g.}\ the \emph{rhombic dodecahedron} (already mentioned by Licata and Powers \cite{licata1986surprising}).
\subsection{Applications}
Spectral polytopes turn out to be exactly the kind of polytopes we need to address the problems listed in \cref{sec:motivation}.
For example, for fixed $i\in\NN$ consider the set of all $\theta_i$-spectral polytopes.
Given the edge-graph $G_P$ of some polytope in this set, the polytope is uniquely determined, simply because it can be reconstructed via the $\theta_i$-realization of $G_P$.
In other~words, the $\theta_i$-spectral polytopes are uniquely determined by their edge-graph.
Also, each $\theta_i$-spectral polytope realizes all the symmetries of its edge-graph by \cref{res:spectral_implies_symmetric}.
The question then is how large the class of these polytopes is, and whether their members are easily recognized.
For example, does every family of vertex-transitive polytopes with the same edge-graph have a spectral representative?
If so, this would imply that every vertex-transitive edge-graph can be realized as a polytope with all its symmetries.
\section{Edge-transitive polytopes}
\subsection{Arc-transitive polytopes}
\label{sec:arc_transitive_polytopes}
Our final goal is to establish that every arc-transitive polytope is spectral, or more precisely, $\theta_2$-spectral.
To anticipate the outcome: we were not able to prove this in full generality.
However, we observe that the statement is true for \emph{all} known arc-transitive polytopes.
In the last section we have learned that the skeleton of an arc-transitive polytope is of full local dimension, hence irreducible, spherical, rigid and has an eigenvalue.
\begin{definition}
Given a polytope $P\subset\RR^d$.
\begin{myenumerate}
\item $P$ is called \emph{rigid} if it cannot be continuously deformed without changing its combinatorial type or symmetry group.
\item $P$ is called \emph{perfect} if any essentially distinct but combinatorially equivalent polytope $Q\subset\RR^d$ has a smaller symmetry group than $P$, that is $\Aut(Q)\subset\Aut(P)$.
\end{myenumerate}
\end{definition}
\enquote{Essentially distinct} in $(ii)$ means that $P$ and $Q$ are not just reoriented or~rescaled versions of one another.
If a polytope is perfect, then it is rigid.
It seems to be open whether the converse holds in general.
In theory, it could happen that a combinatorial type has two \enquote{most symmetric} realizations as a polytope that cannot be continuously deformed into each other.
\begin{theorem}
An arc-transitive polytope $P\subset\RR^d$ is $\theta_2$-spectral.
\begin{proof}
The skeleton of $P$ is an arc-transitive realization of $G_P$ of full local dimension, hence has an eigenvalue $\theta\in\Spec(G)$ according to \cref{res:...}.
The tricky part is to show that $\theta=\theta_2$.
For this, we use the following fact:
Let $v$ be the skeleton of $P$ and define
$$P^\circ(\mathbf c) = P^\circ(c_1,...,c_n):=\{x\in\RR^d\mid \<x,v_i\>\le c_i\text{ for all $i\in V$}\}.$$
Finally, consider the matrix
$$X_{ij}:=-\frac{\partial^2\mathrm{vol}(P^\circ(\mathbf c))}{\partial c_i \partial c_j}\Big|_{\mathbf c=(1,...,1)}.$$
Since $P$ is edge-transitive, $X_{ij}$ has the same value for all $ij\in E$.
Since $P$ is vertex-transitive, $X_{ii}$ does not depend on $i\in V$.
Finally, it is proven in \cite{...} that $X_{ij}=0$ whenever $ij\not\in E$, and that $X$ has a single negative eigenvalue which is simple.
Consequently, there are $\alpha,\beta\in\RR$ so that $X=\alpha A+\beta\Id$, where $A$ is the adjacency matrix of $G_P$.
\end{proof}
\end{theorem}
\begin{theorem}
If $P\subset\RR^d$ is an arc-transitive polytope, then
\begin{myenumerate}
\item $P$ is rigid,
\item $\Aut(P)$ is irreducible,
\item any projection $\pi_U(P)$ onto a subspace $U\subseteq\RR^d$ is either not arc-transitive~or has a different edge-graph than $P$.
\end{myenumerate}
Furthermore, there is an eigenvalue $\theta\in\Spec(G_P)$ of the edge-graph $G_P$ resp.\ its~associated Laplacian eigenvalue $\lambda=\deg(G_P)-\theta$, so that the following holds:
\begin{myenumerate}
\setcounter{enumi}{3}
\item if $P$ has edge length $\ell$ and circumradius $r$, then
%
$$\frac\ell r=\sqrt{\frac{2\lambda}{\deg(G_P)}}.$$
\item if the polar polytope $P^\circ$ has dihedral angle $\alpha$, then
%
$$\cos\alpha=-\frac{\theta}{\deg(G_P)}.$$
\end{myenumerate}
\end{theorem}
\begin{conjecture}\label{conj:arc_transitve}
Each arc-transitive polytope $P\subset\RR^d$ is $\theta_2$-spectral, and satisfies
\begin{myenumerate}
\item $P$ is perfect,
\item $P$ realizes all the symmetries of its edge-graph,
\item $P$ is the only arc-transitive polytope (up to reorientation and rescaling) with edge-graph $G_P$.
\end{myenumerate}
\end{conjecture}
Probably, in all these results, the eigenvalue $\theta$ resp.\ $\lambda$ can be replaced with $\theta_2$ resp.\ $\lambda_2$, but we were not able to prove that.
However, it holds in every single case of an arc-transitive polytope that we have checked.
\begin{example}
There are many ways to compute the circumradius of an $n$-simplex with unit edge-length, and here is another one:
The complete graph $K_n$ is arc-transitive and has spectrum $\{(-1)^{n-1},(n-1)^1\}$. The second largest eigenvalue $\theta_2=-1$ gives rise to an $(n-1)$-dimensional polytope with edge-graph $K_n$, that is, the $(n-1)$-dimensional simplex.
Assuming edge length $\ell=1$, we can compute its circumradius:
$$r=\sqrt{\frac{\deg(G)}{2\lambda_2}} = \sqrt{\frac{\deg(G)}{2(\deg(G)-\theta_2)}} = \sqrt{\frac{n-1}{2(n-1-(-1))}} = \sqrt{\frac12\Big(1-\frac1n\Big)}.$$
\end{example}
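This computation can be cross-checked against explicit coordinates. The following Python sketch (illustrative only; the simplex with vertices $e_1,\dots,e_n\in\RR^n$ is our choice of model) compares the circumradius of the regular simplex with the value predicted by the relation $\ell/r=\sqrt{2\lambda_2/\deg(G)}$ and the numerically computed spectrum of $K_n$:

```python
import numpy as np

def simplex_circumradius_ratio(n):
    """r / l for the regular simplex with vertices e_1, ..., e_n in R^n."""
    V = np.eye(n)
    V -= V.mean(axis=0)               # center at the origin
    r = np.linalg.norm(V[0])          # circumradius
    l = np.linalg.norm(V[0] - V[1])   # edge length
    return r / l

for n in range(2, 8):
    # spectrum of K_n: theta_1 = n-1 (simple), theta_2 = -1
    A = np.ones((n, n)) - np.eye(n)
    theta2 = np.sort(np.linalg.eigvalsh(A))[-2]
    deg = n - 1
    lam2 = deg - theta2                      # Laplacian eigenvalue, equals n
    predicted = np.sqrt(deg / (2 * lam2))    # from l/r = sqrt(2*lam2/deg)
    assert abs(simplex_circumradius_ratio(n) - predicted) < 1e-9
```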
\begin{example}
As an exercise, compute the dihedral angles of the rhombic dodecahedron and of the rhombic triacontahedron; both are polar duals of arc-transitive polytopes (the cuboctahedron resp.\ the icosidodecahedron).
\end{example}
But this technique can also be used the other way around: from known metric properties of a polytope and the eigenvalues of its edge-graph, one can conclude that the polytope is spectral.
\begin{example}
We show that all regular polytopes are in fact $\theta_2$-spectral, as was already conjectured by Licata and Powers \cite{licata1986surprising}.
The essential facts we are using are some tabulated metric and spectral properties of these polytopes, \shortStyle{e.g.}\ found in \cite{buekenhout1998number}.
\begin{itemize}
\item
Remarkably, the 24-cell has edge-length $\ell$ equal to its circumradius $r$.
This determines the Laplacian eigenvalue of its skeleton:
$$1=\frac\ell r=\sqrt{\frac{2\lambda}{\deg(G)}} \quad\implies\quad \lambda=\frac12\deg(G) = 4.$$
One checks that $4$ is exactly the second smallest Laplacian eigenvalue of its edge-graph, and that this eigenvalue has multiplicity four.
\item
Similar computations determine the corresponding eigenvalues for the 600-cell and the 120-cell; the metric and spectral data of all three cases is collected in the table below.
\end{itemize}
\begin{center}
\begin{tabular}{c|rrrr}
polytope & $\ell$ & $r$ & $\theta_2$ & $\deg(G_P)$ \\
\hline
$\overset{\phantom.}24$-cell & $1$ & $1$ & $4$ & $8$ \\
$120$-cell & $3-\sqrt 5$ & $2\sqrt 2$ & $3\phi-1$ & $4$ \\
$600$-cell & $2/\phi$ & $2$ & $3(1+\sqrt 5)$ & $12$
\end{tabular}
\end{center}
\end{example}
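The spectral data for the 24-cell can be verified by direct computation. A Python sketch (illustrative only; it uses the standard coordinates, all permutations of $(\pm1,\pm1,0,0)$, and builds the edge-graph from the pairs at minimal distance):

```python
import itertools
import numpy as np

# the 24 vertices: all permutations of (+-1, +-1, 0, 0)
verts = sorted({p for s in itertools.product([1, -1], repeat=2)
                  for p in itertools.permutations((s[0], s[1], 0, 0))})
V = np.array(verts, dtype=float)
assert V.shape == (24, 4)

# edges connect vertices at the minimal distance sqrt(2),
# which equals the circumradius: l = r = sqrt(2)
D2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
assert np.allclose(np.linalg.norm(V, axis=1), np.sqrt(2))
A = (np.abs(D2 - 2) < 1e-9).astype(float)   # adjacency matrix
assert (A.sum(axis=1) == 8).all()           # the edge-graph is 8-regular

# adjacency spectrum: theta_2 = 4 with multiplicity 4,
# i.e. Laplacian eigenvalue lambda_2 = deg - theta_2 = 4
evals = np.round(np.linalg.eigvalsh(A)).astype(int)
assert evals.max() == 8                     # theta_1 = deg = 8
assert (evals == 4).sum() == 4              # multiplicity of theta_2 = 4
assert sorted(set(evals))[-2] == 4          # 4 is the second largest eigenvalue
```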
\begin{theorem}
If a polytope has a connected, full-dimensional and spanning edge-orbit, then it is perfect.
\end{theorem}
\newpage
\section{Edge- and vertex-transitive polytopes}
The goal of this section is to show that every polytope $P\subset\RR^d$ that is simultane\-ously vertex- and edge-transitive is the eigenpolytope of its edge-graph for the second largest eigenvalue $\theta_2$.
Note that being only edge-transitive is just slightly weaker than being vertex- and edge-transitive, as it was shown in \cite{winter2020polytopes} that edge-transitivity of convex polytopes in dimension $d\ge 4$ implies vertex-transitivity anyway.
\begin{theorem}\label{res:vertex_edge_transitive_is_theta2_realization}
If $P\subset\RR^d$ is vertex-transitive and edge-transitive, then it is the~$\theta_2$-eigenpolytope of its edge-graph.
\begin{proof}
Let $X\in\RR^{n\x n}$ be the matrix defined in \cref{res:izmestiev}.
Since $P$ is transitive on vertices, there is a number $\alpha\in\RR$ with $X_{ii}=\alpha$ for all $i\in V$.
Since $P$ is edge-transitive, there is a number $\beta>0$ so that $X_{ij}=-\beta$ for all $ij\in E(G_P)$.
Since we also have $X_{ij}=0$ for all $ij\not\in E(G_P)$, we find that $X$ must be of the form
$$X= \alpha \Id - \beta A \quad\implies\quad A=\frac\alpha\beta\Id-\frac1\beta X,$$
where $A$ is the adjacency matrix of $G_P$.
$X$ has a single negative eigenvalue, which is simple, and $\dim\ker X=d$.
In other words, its second smallest eigenvalue is zero and has multiplicity $d$.
Consequently, $A$ has second \emph{largest} eigenvalue $\theta_2=\alpha/\beta$ of multiplicity $d$ with the same eigenvectors as $X$.
Let $v$ be the skeleton of $P$ with arrangement matrix $M$ and arrangement space $U$.
Since $XM=0$ and $\rank M=d$, we find that $U=\Span M$ must be the $0$-eigenspace of $X$, hence the $\theta_2$-eigenspace of $A$.
Thus, $v$ is the $\theta_2$-realization of $G_P$ and $P$ is spectral.
\end{proof}
\end{theorem}
The argument of \cref{res:vertex_edge_transitive_is_theta2_realization} cannot easily be generalized to weaker forms of symmetry.
For example, one would like to prove that every spectral polytope has eigenvalue $\theta_2$, or more generally, that every balanced polytope has eigenvalue $\theta_2$.
\begin{corollary}
If $P\subset\RR^d$ is vertex- and edge-transitive, then the following hold:
\begin{myenumerate}
\item $\Aut(P)$ is irreducible,
\item $P$ is the unique simultaneously vertex- and edge-transitive polytope for which the edge-graph is isomorphic to $G_P$,
\item $P$ realizes all symmetries of its edge-graph,
\item $P$ is a perfect (rigid) polytope,
\item $P$ is the unique most symmetric realization of its combinatorial type,
\item if $P$ has edge-length $\ell$ and circumradius $r$, then
%
$$\Big[\frac \ell r \Big]^2 = \frac{2\lambda_2(G_P)}{\deg(G_P)} = 2\Big(1-\frac{\theta_2(G_P)}{\deg(G_P)}\Big).$$
%
\item if the polar dual $P^\circ$ has dihedral angle $\alpha$, then
%
$$\cos(\alpha)=-\frac{\theta_2(G_P)}{\deg(G_P)}.$$
\end{myenumerate}
\end{corollary}
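The last property can be sanity-checked on the cube, which is vertex- and edge-transitive: its edge-graph is the cube graph with $\theta_2=1$ and degree 3, so the polar dual, the octahedron, should have dihedral angle $\arccos(-1/3)$. A Python sketch (illustrative only; the octahedron face normals are chosen by hand):

```python
import numpy as np

# adjacency matrix of the cube graph Q_3 (vertices = 3-bit strings)
n = 8
A = np.zeros((n, n))
for u in range(n):
    for b in range(3):
        A[u, u ^ (1 << b)] = 1            # flip one bit = one edge

theta2 = np.sort(np.linalg.eigvalsh(A))[-2]  # second largest eigenvalue
deg = 3
predicted = -theta2 / deg                    # predicted cos(alpha) = -1/3

# dihedral angle of the octahedron conv{+-e_1, +-e_2, +-e_3}:
# the adjacent faces x+y+z=1 and x+y-z=1 meet along the edge e_1 e_2;
# the interior dihedral angle satisfies cos(alpha) = -<n1,n2>/(|n1||n2|)
n1 = np.array([1.0, 1.0, 1.0])
n2 = np.array([1.0, 1.0, -1.0])
cos_alpha = -n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2))

assert abs(theta2 - 1) < 1e-9
assert abs(cos_alpha - predicted) < 1e-9     # both equal -1/3
```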
\section{Distance-transitive polytopes}
\subsection{Distance-transitive polytopes}
\label{sec:distance_transitive_polytopes}
In general, there is no apparent reason why a spectral polytope must be $\theta_2$-spectral.
In the case of distance-transitive polytopes, however, there is an easy argument involving the cosine sequence of the skeleton.
The argument is based on the number of sign changes in $u_\delta$ and the so-called \emph{two-piece property} for embedded graphs.
\begin{definition}
A realization $v$ is said to have the \emph{two-piece property} (or TPP),~if every hyperplane cuts it into at most two pieces.
More formally, given any vector $x\in \RR^d$ and scalar $c\in\RR$, the induced subgraphs
$$
G_+:=G[\,i\in V\mid \<x,v_i\>\ge c\,]
\quad\text{and}\quad
G_-:=G[\,i\in V\mid \<x,v_i\>\le c\,]
$$
must be connected (if non-empty).
\end{definition}
The image below shows two symmetric realizations of the cycle graph $C_5$, one~of which has the TPP (left), and one of which has not (right).
We are going to use the following fact:
\begin{theorem}[\cite{...}, Theorem xx]
The skeleton of a polytope has the TPP.
\begin{proof}[Sketch of proof.]
Let $v$ be the skeleton of a polytope, $x\in\RR^d$ and $c\in \RR$.
Let $i_+\in V$ be the vertex which maximizes the functional $\<v_\cdot,x\>$.
By the simplex algorithm, every vertex on the \enquote{positive} side of the hyperplane $\<\cdot,x\>=c$ can reach $v_{i_+}$ by a path in the edge-graph that is non-decreasing in $\<v_\cdot,x\>$.
Hence the positive side is connected if non-empty.
Equivalently for the negative side.
\end{proof}
\end{theorem}
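The contrast between the two realizations of $C_5$ mentioned above can be made concrete in code. A Python sketch (illustrative only; the cutting hyperplane is our choice) counts the pieces on the positive side of a cut for the pentagon ($\theta_2$-) and the pentagram ($\theta_3$-) realization:

```python
import math

n = 5
def neighbors(i):                    # the cycle C_5
    return [(i + 1) % n, (i - 1) % n]

def realization(k):
    """Vertex i of C_5 maps to angle 2*pi*k*i/5 on the unit circle:
    k = 1 gives the pentagon (theta_2), k = 2 the pentagram (theta_3)."""
    return [(math.cos(2 * math.pi * k * i / n),
             math.sin(2 * math.pi * k * i / n)) for i in range(n)]

def pieces(points, x, c):
    """Connected components of the subgraph induced on {i : <v_i, x> >= c}."""
    keep = {i for i, (px, py) in enumerate(points) if px * x[0] + py * x[1] >= c}
    seen, comps = set(), 0
    for s in keep:
        if s in seen:
            continue
        comps, stack = comps + 1, [s]
        while stack:                 # depth-first search inside `keep`
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack += [w for w in neighbors(v) if w in keep]
    return comps

cut = ((1.0, 0.0), 0.1)                    # the hyperplane x >= 0.1
assert pieces(realization(1), *cut) == 1   # pentagon: one piece (TPP holds)
assert pieces(realization(2), *cut) == 2   # pentagram: two pieces (TPP fails)
```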
\begin{theorem}
A distance-transitive polytope $P$ is $\theta_2$-spectral.
\begin{proof}
Let $v$ be the skeleton of $P$, a full-dimensional distance-transitive realization of $G_P$.
Since it is in particular arc-transitive, it has an eigenvalue (by \cref{cref:...}).
By \cref{res:...} it must then be the $\theta_k$-realization of $G_P$ for some $k$.
Now consider cutting $v$ by the hyperplane $v_i^\bot$ for some $i\in V$.
The number of components left after the cut is exactly one more than the number of sign changes of the cosine sequence $u_\delta$.
From \cref{res:...} we know that there are exactly $k-1$ sign changes for the $\theta_k$-realization, hence $k$ pieces after the cut.
Since $v$ is the skeleton of a polytope, we have the TPP, and $k\le 2$ pieces.
Since $v$ cannot be the trivial $\theta_1$-realization of $G_P$, it must be the $\theta_2$-realization, and $P$ the $\theta_2$-eigenpolytope.
\end{proof}
\end{theorem}
\begin{corollary}
Let $P$ be a distance-transitive polytope.
\begin{myenumerate}
\item Among distance-transitive polytopes, $P$ is uniquely determined by its edge-graph.
\item $P$ realizes all the symmetries of its edge-graph.
\item $P$ is the unique most symmetric realization of its combinatorial type.
\end{myenumerate}
\end{corollary}
Godsil's classification of the $\theta_2$-spectral distance-regular graphs (see \cref{res:spectral_distance_regular_graphs}; all of which~turned out to be distance-transitive) allows a complete classification of distance-tran\-sitive polytopes.
\begin{theorem}\label{res:distance_transitive_classification}
If $P\subset\RR^d$ is a distance-transitive polytope, then it is one of the following:
\begin{enumerate}[label=$(\text{\roman*}\,)$]
\item a regular polygon $(d=2)$,
\item the regular dodecahedron $(d=3)$,
\item the regular icosahedron $(d=3)$,
\item a cross-polytopes, that is, $\conv\{\pm e_1,...,\pm e_d\}$ where $\{e_1,...,e_d\}\subset\RR^d$ is the standard basis of $\RR^d$,
\item a hyper-simplex $\Delta(n,k)$, that is, the convex hull of all vectors $v\in\{0,1\}^{n}$ with exactly $k$ 1-s,
\item a cartesian power of a regular simplex (this includes hypercubes),
\item a demi-cube, that is, the convex hull of all vectors $v\in\{-1,1\}^d$ with~an~even number of 1-s,
\item the $2_{21}$-polytope, also called Gosset-polytope $(d=6)$,
\item the $3_{21}$-polytope, also called Schläfli-polytope $(d=7)$.
\end{enumerate}
The order of this list agrees with the list of graphs in \cref{res:spectral_distance_regular_graphs}.
\end{theorem}
Note that the list in \cref{res:distance_transitive_classification} contains many polytopes that are not regular, and contains all regular polytopes except for the 24-cell, 120-cell and 600-cell.
The distance-transitive polytopes thus form a distinct class of remarkably symmetric polytopes which is not immediately related to the class of regular polytopes.
It is noteworthy, however, that all the distance-transitive polytopes are \emph{Wythoffian poly\-topes}, that is, orbit polytopes of finite reflection groups.
\Cref{fig:distance_transitive_Coxeter} shows the Coxeter-Dynkin diagrams of these polytopes.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/distance_transitive_Coxeter}
\caption{Coxeter-Dynkin diagrams of distance-transitive polytopes.}
\label{fig:distance_transitive_Coxeter}
\end{figure}
\section{Conclusion and open questions}
\label{sec:future}
In this paper we have studied \emph{eigenpolytopes} and \emph{spectral polytopes}.
The former are polytopes constructed from a graph and one of its eigenvalues.
A polytope is spectral if it is the eigenpolytope of its edge-graph.
These are of interest because spectral graph theory then ensures a strong interplay between the combinatorial properties of the edge-graph and the geometric properties of the polytope.
The study of eigenpolytopes and spectral polytopes has left us with many open questions.
Most notably: how can spectral polytopes be detected purely from their geome\-try?
We introduced a tool (\cref{res:implies_spectral}) which was sufficient to prove that (most) edge-transitive polytopes are spectral.
We do not know how much more generally it can be applied.
\begin{question}
\label{q:characterization}
Does \cref{res:implies_spectral} already characterize $\theta_2$-spectral polytopes (or even spectral polytopes in general)?
\end{question}
If the answer is affirmative, this would provide a geometric characterization of polytopes that are otherwise defined purely in terms of spectral graph theory.
The result of Izmestiev suggests that polytopes with sufficiently regular geometry are $\theta_2$-spectral: the entry of the matrix $X$ in \cref{res:izmestiev} at index $ij\in E$ can be~expressed as
$$X_{ij}=-\frac{\vol_{d-2}(\sigma_i\cap\sigma_j)}{\|v_i\|\|v_j\|\sin\angle(v_i,v_j)},$$
where $\sigma_i$ and $\sigma_j$ are the facets of the polar dual $P^\circ$ that correspond to the vertices $v_i,v_j\in\F_0(P)$.
Because of this formula, it might actually be easier to classify the polar duals of $\theta_2$-spectral polytopes.
An affirmative answer to \cref{q:characterization} would also mean a negative answer to the following:
\begin{question}
\label{q:not_theta_2}
Is there a $\theta_k$-spectral polytope/graph for some $k\not=2$?
\end{question}
The answer is known to be negative for edge-transitive polytopes/graphs (see \cref{res:edge_transitive_spectral_graph}), but unknown in general.
The second-largest eigenvalue $\theta_2$ is special for other reasons too.
Even if a graph is not $\theta_2$-spectral, it seems to still imprint its adjacency information onto the edge-graph of its $\theta_2$-eigenpolytope.
\begin{question}
\label{q:realizing_edges}
Given an edge $ij\in E$ of $G$, if $v_i$ and $v_j$ (as defined in \cref{def:eigenpolytope}) are distinct vertices of the $\theta_2$-eigenpolytope $P_G(\theta_2)$, is then also $\conv\{v_i,v_j\}$ an edge of $P_G(\theta_2)$?
\end{question}
This was proven for distance-regular graphs in \cite{godsil1998eigenpolytopes}, and is not necessarily true for eigenvalues other than $\theta_2$.
All known spectral polytopes are exceptionally symmetric.
It is unclear whether this is true in general.
\begin{question}
\label{q:trivial_symmetry}
Are there spectral polytopes with trivial symmetry group?
\end{question}
An example for \cref{q:trivial_symmetry} must be asymmetric, yet have reasonably large eigenspaces.
Such graphs exist among the distance-regular graphs, but all spectral distance-regular graphs were determined in \cite{godsil1998eigenpolytopes} (see also \cref{res:spectral_distance_regular_graphs}) and turned out to be distance-transitive, \shortStyle{i.e.,}\ highly symmetric.
A clear connection between being spectral and being symmetric is missing.
To emphasize our ignorance, we ask the following:
\begin{question}
\label{q:spectral_non_vertex_transitive}
Can we find more spectral polytopes that are \emph{not vertex-transitive}?
What characterizes them?
\end{question}
The single known spectral polytope that is \emph{not} vertex-transitive is the \emph{rhombic dodecahedron} (see \cref{fig:edge_transitive}).
Its being spectral appears purely accidental: there seems to be no structural reason for it, except that we can verify it by explicit computation. For comparison, the closely related \emph{rhombic triacontahedron} is not spectral.
On the other hand, vertex-transitive spectral polytopes might be quite common.
\begin{question}
\label{q:specific_instance}
Let $P\subset\RR^d$ be a polytope with the following properties:
\begin{myenumerate}
\item $P$ is vertex-transitive,
\item $P$ realizes all the symmetries of its edge-graph, and
\item $\Aut(P)$ is irreducible.
\end{myenumerate}
Is $P$ (combinatorially equivalent to) a spectral polytope?
\end{question}
No condition in \cref{q:specific_instance} can be dropped.
If we drop vertex-transitivity, we could take some polytope whose edge-graph has trivial symmetry and only small eigenspaces. Dropping $(ii)$ leaves the vertex-transitive neighborly polytopes, which are mostly not spectral (except for the simplex).
Dropping $(iii)$ leaves us with the prisms and anti-prisms, whose edge-graphs rarely have eigenspaces of dimension greater than two.
Finally, we wonder whether these spectral techniques can be of any help in classifying the edge-transitive polytopes.
\begin{question}
Can we classify the edge-transitive graphs that are spectral, and~by this, the edge-transitive polytopes?
\end{question}
\begin{question}
Can the existence of half-transitive polytopes be excluded by using spectral graph theory (see \cref{sec:arc_transitive})?
\end{question}
\section{Introduction}
\label{sec:introduction}
\emph{Eigenpolytopes} are a construction in the intersection of combinatorics and geometry, using techniques from spectral graph theory.
Eigenpolytopes provide a way to associate several polytopes to a finite simple graph, one for each eigenvalue of its adjacency matrix.
A formal definition can be found in \cref{sec:def_eigenpolytope}.
\vspace{0.5em}
\begin{center}
\includegraphics[width=0.7\textwidth]{img/cube_intro_pic}
\end{center}
\vspace{0.5em}
Eigenpolytopes can be applied from two directions:
for the first, one starts~from a given graph, computes its eigenpolytopes, and tries to deduce, from the geometry and combinatorics of these polytopes, something about the original~graph.
For the other direction, one starts with a polytope, asks whether it is an eigenpolytope, and if so, for which graphs, which eigenvalues, and how these relate to the original polytope.
Eigenpolytopes have several interesting geometric and algebraic properties, and establishing that a family of polytopes consists of eigenpolytopes opens up their study to the techniques of spectral graph theory.
For some graphs the connection to their eigenpolytopes is especially strong: it can happen that a graph is the edge-graph of one of its eigenpolytopes, or equivalently, that a polytope is an eigenpolytope of its edge-graph.
Such graphs/polytopes are quite special and we shall call them \emph{spectral}.
For example, all regular polytopes are spectral, but there are many others.
Their properties are not well-understood.
We survey the literature on eigenpolytopes and spectral polytopes.
We establish a technique for proving that certain polytopes are spectral, and we apply it to \emph{edge-transitive} polytopes, that is, polytopes for which the Euclidean symmetry group $\Aut(P)\subset\Ortho(\RR^d)$ acts transitively on the edge set $\F_1(P)$.
As we shall explain, this characterization suffices to prove that an edge-transitive polytope is uniquely determined by its edge-graph, and also realizes all its combinatorial symmetries.
A complete classification of edge-transitive polytopes is not yet known.
However, using results on eigenpolytopes, we are able to give a complete classification of a sub-class of the edge-transitive polytopes, namely, the \emph{distance-transitive polytopes}.
\subsection{Outline of the paper}
\cref{sec:eigenpolytopes} starts with a motivating example that directs the reader towards the definition of the \emph{eigenpolytope} as well as the phenomenon of \emph{spectral} graphs and polytopes.
We include a literature overview for~\mbox{eigenpolytopes} and spectral polytopes.
In \cref{sec:balanced_spectral} we give a first rigorous definition for the notion \enquote{spectral polytope} via \emph{balanced polytopes}, a notion related to rigidity theory.
In \cref{sec:izmestiev} we introduce what is currently the most powerful tool for proving that certain polytopes are spectral.
In the final section, \cref{sec:edge_transitive}, we apply this result to \emph{edge-transitive} polytopes.
It is a simple corollary of the previous section that these are $\theta_2$-spectral.
We explore the implications of this finding: edge-transitive polytopes (in dimension $d\ge 4$) are uniquely determined by the edge-graph and realize all of its symmetries.~%
We~discuss sub-classes, such as the arc-, half- and distance-transitive polytopes.
We close with a complete classification of the latter (based on a result of Godsil).
\section{Eigenpolytopes and spectral polytopes}
\label{sec:eigenpolytopes}
\subsection{A motivating example}
\label{sec:example}
Let $G=(V,E)$ be the edge-graph of the cube, with vertex set $V=\{1,...,8\}$,~num\-bers assigned to the vertices as in the figure below.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{img/cube}
\end{figure}
\noindent
The spectrum of that graph (\shortStyle{i.e.,}\ of its adjacency matrix) is $\{(-3)^1,(-1)^3,1^3,3^1\}$.
Most often, one denotes the largest eigenvalue by $\theta_1$, the second-largest by $\theta_2$, and so on.
In spectral graph theory, there exists the general rule of thumb that the most exciting eigenvalue of a graph is not its largest, but its \emph{second-largest} eigenvalue $\theta_2$ (which is related to the \emph{algebraic connectivity} of $G$).
For the edge-graph of the cube, we have $\theta_2=1$,~of multiplicity \emph{three}.
And here are three linearly independent eigenvectors to $\theta_2$:
\vspace{0.5em}
\begin{center}
\raisebox{-3.3em}{\includegraphics[width=0.23\textwidth]{img/cube_eigenvector}}
\qquad
$
u_1 = \begin{pmatrix}
\phantom+1\\ \phantom+1 \\ \phantom+1 \\ \phantom+1 \\ -1 \\ -1 \\ -1 \\ -1
\end{pmatrix},\quad
u_2 = \begin{pmatrix}
\phantom+1\\ \phantom+1 \\ -1 \\ -1 \\ \phantom+1 \\ \phantom+1 \\ -1 \\ -1
\end{pmatrix},\quad
u_3 = \begin{pmatrix}
\phantom+1\\ -1 \\ \phantom+1 \\ -1 \\ \phantom+1 \\ -1 \\ \phantom+1 \\ -1
\end{pmatrix}.
$
\end{center}
\vspace{0.7em}
\noindent
We can write these more compactly in a single matrix $\Phi\in\RR^{8\x 3}$:
\vspace{0.4em}
\begin{center}
$\Phi= \;\begin{blockarray}{(lll)r}
\phantom+1 & \phantom+1 & \phantom+1\;\; &\text{\quad \footnotesize $\leftarrow v_1$}\\
\phantom+1 & \phantom+1 & -1 & \text{\quad \footnotesize $\leftarrow v_2$} \\
\phantom+1 & -1 & \phantom+1 & \text{\quad \footnotesize $\leftarrow v_3$} \\
\phantom+1 & -1 & -1 & \text{\quad \footnotesize $\leftarrow v_4$} \\
-1 & \phantom+1 & \phantom+1 & \text{\quad \footnotesize $\leftarrow v_5$} \\
-1 & \phantom+1 & -1 & \text{\quad \footnotesize $\leftarrow v_6$} \\
-1 & -1 & \phantom+1 & \text{\quad \footnotesize $\leftarrow v_7$} \\
-1 & -1 & -1 & \text{\quad \footnotesize $\leftarrow v_8$} \\
\end{blockarray}.$
\qquad
\raisebox{-4.2em}{\includegraphics[width=0.4\textwidth]{img/cube_coordinates}}
\end{center}
We now take a look at the rows of that matrix, of which it has exactly eight.~These rows are naturally assigned to the vertices of $G$ (assign $i\in V$ to the $i$-th row of~$\Phi$), and each row can be interpreted as~a~vec\-tor in $\RR^3$.
If we place each vertex $i\in V$ at the position $v_i\in\RR^3$ given by the $i$-th row of $\Phi$, we find that this embeds the graph $G$ \emph{exactly} as the skeleton of a cube (see the figure above).
In other words: if we compute the convex hull of the $v_i$, we get back the polyhedron from which we have started.
Quite a coincidence, isn't it?
This example was specifically chosen for its nice numbers, but in fact, the same works out as well for many other polytopes, including all the regular polytopes in all dimensions.
One probably learns to appreciate this magic when suddenly in need of the vertex coordinates of some not-so-nice polytope, say, the regular dodecahedron or the 120-cell.
With this technique in the toolbox, these coordinates are just one eigenvector-computation away (we included a short Mathematica script in \cref{sec:appendix_mathematica}).
Note also that we never specified the dimension of the embedding; it just so happened that the second-largest eigenvalue has the right multiplicity.
This phenomenon definitely deserves an explanation.
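For readers who want to reproduce this computation, here is a minimal Python sketch (the paper's appendix, \cref{sec:appendix_mathematica}, provides a Mathematica script; the numpy version below, including the bit-string vertex labeling, is our own and not part of the paper):

```python
import numpy as np

# Edge-graph of the cube: vertices are the 3-bit strings 0..7,
# two vertices are adjacent iff their labels differ in exactly one bit.
n = 8
A = np.array([[1.0 if bin(i ^ j).count("1") == 1 else 0.0
               for j in range(n)] for i in range(n)])

# A is symmetric, so eigh returns real eigenvalues (in ascending order)
# together with an orthonormal eigenbasis.  Spectrum: -3, -1 (x3), 1 (x3), 3.
vals, vecs = np.linalg.eigh(A)

# The second-largest eigenvalue theta_2 = 1 has multiplicity three;
# the corresponding eigenvectors form the columns of Phi.
Phi = vecs[:, np.isclose(vals, 1.0)]      # shape (8, 3)

# Placing vertex i at the i-th row of Phi embeds the graph as the skeleton
# of a (possibly reoriented) cube: all eight points lie on a common sphere.
norms = np.linalg.norm(Phi, axis=1)
```

Since `eigh` already returns an orthonormal basis, the resulting cube differs from the one in the figure only by an orthogonal transformation.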
\subsubsection*{On the choice of eigenvectors}
\label{sec:choice_of_eigenvectors}
One might object that the chosen eigenvectors $u_1, u_2$ and $u_3$ look suspiciously cherry-picked, and that we might not get such a nice result had we chosen arbitrary eigenvectors.
And this is true.
For an appropriate choice of these vectors we can get, instead of a cube, a cuboid or a parallelepiped.
In fact, we can obtain any \emph{linear} transformation of the cube.
\emph{But}, we can also get \emph{only} linear transformations, and nothing else.
The reason is the following well-known fact from linear algebra:
\begin{theorem}
\label{res:same_column_span}
Two matrices $\Phi,\Psi\in\RR^{n\x d}$ have the same column span, \shortStyle{i.e.,}~$\Span \Phi=\Span \Psi$, if and only if their rows are related by an invertible linear transformation, \shortStyle{i.e.,}\ $\Phi=\Psi T$ for some $T\in\GL(\RR^d)$.
\end{theorem}
\noindent
In our case, the column span is the $\theta_2$-eigenspace, and the rows are the coordinates of the $v_i$.
We say that any two polytopes constructed in this way are \emph{linearly equivalent}.
The only notable property of the chosen basis in the example is that the vectors $u_1, u_2$ and $u_3$ are orthogonal and of the same length.
Any other choice of such a basis of~the~\mbox{$\theta_2$-eigen}\-space (\shortStyle{e.g.}\ an orthonormal basis) would also have given a cube, but reoriented, rescaled and probably with less nice coordinates.
For details on how this choice relates to the orientation, see \shortStyle{e.g.}\ \cite[Theorem 3.2]{winter2019geometry}.
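\cref{res:same_column_span} is easy to confirm numerically; the following sketch (our own illustration, not from the paper) builds two matrices with the same column span and recovers the invertible map $T$ relating their rows:

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((8, 3))    # generic, hence full column rank
T = rng.standard_normal((3, 3))      # generic, hence invertible
Psi = Phi @ T                        # same column span as Phi

# Same column span: stacking the columns side by side does not raise the rank.
rank = np.linalg.matrix_rank(np.hstack([Phi, Psi]))

# Recover T by least squares (exact here, since Psi = Phi T holds exactly).
T_rec, *_ = np.linalg.lstsq(Phi, Psi, rcond=None)
```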
\subsection{Eigenpolytopes}
\label{sec:def_eigenpolytope}
We compile our example into a definition.
\begin{definition}\label{def:eigenpolytope}
Start with a graph $G=(V,E)$, an eigenvalue $\theta\in\Spec(G)$ thereof, as well as an orthonormal basis $\{u_1,...,u_d\}\subset\RR^n$ of the $\theta$-eigenspace.
We define the \emph{eigenpolytope matrix} $\Phi\in\RR^{n\x d}$ as the matrix in which the $u_i$ are the columns:\phantom{mm}
\begin{equation}
\label{eq:eigenpolytope_matrix}
\Phi :=\begin{pmatrix}
\mid & & \mid \\
u_1 & \!\!\cdots\!\!\! & u_d \\
\mid & & \mid
\end{pmatrix}=
\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_1^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\; \\
& \vdots & \\[0.4ex]
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_n^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\;
\end{pmatrix}.
\end{equation}
Let $v_i\in\RR^d$ denote the $i$-th row of $\Phi$.
The polytope
$$P_G(\theta):=\conv\{v_i\mid i\in V\}\subset\RR^d$$
is called \emph{$\theta$-eigenpolytope} (or just \emph{eigenpolytope}) of $G$.
\end{definition}
For later use we define the \emph{eigenpolytope map}
\begin{equation}
\label{eq:eigenpolytope_map}
\phi:V\ni i\mapsto v_i\in\RR^d
\end{equation}
which assigns to each vertex $i\in V$ the $i$-th row of the eigenpolytope matrix.
Note that the basis $\{u_1,...,u_d\}\subset\Eig_G(\theta)$ in \cref{def:eigenpolytope} is explicitly chosen~to be an \emph{orthonormal basis}.
This is not strictly necessary, but this choice is convenient from a geometric point of view:
a different choice for this basis gives the same~poly\-tope, but with a different orientation rather than, say, transformed~by~a~general linear transformation.
This preserves metric properties and is closer to how polytopes are usually considered up to rigid motions.
We can also reasonably speak of \emph{the} $\theta$-eigenpolytope, as any two differ only by orientation.
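A direct transcription of \cref{def:eigenpolytope} into Python might look as follows (a sketch under our own naming; `eigh` already returns an orthonormal eigenbasis, so the rows come out correct up to orientation):

```python
import numpy as np

def eigenpolytope_matrix(A, theta, tol=1e-8):
    """Return the eigenpolytope matrix Phi whose i-th row is v_i = phi(i).

    A must be the symmetric adjacency matrix of the graph and theta an
    eigenvalue of A.  The columns of Phi are an orthonormal basis of the
    theta-eigenspace, so any two outputs differ only by an orthogonal map.
    """
    vals, vecs = np.linalg.eigh(A)
    mask = np.abs(vals - theta) < tol
    if not mask.any():
        raise ValueError("theta is not an eigenvalue of A")
    return vecs[:, mask]    # shape (n, d), d = multiplicity of theta

# Example: the (-1)-eigenpolytope of K_4 is a regular tetrahedron
# (compare \cref{ex:neighborly_1}).
A = np.ones((4, 4)) - np.eye(4)
Phi = eigenpolytope_matrix(A, -1.0)   # shape (4, 3)
```

The convex hull of the rows of `Phi` is then the $\theta$-eigenpolytope $P_G(\theta)$.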
With this terminology in place, our observation in the example of \cref{sec:example}~can be summarized as \enquote{the cube is the $\theta_2$-eigenpolytope of its edge-graph}, or alternatively as \enquote{the cube-graph is the edge-graph of its $\theta_2$-eigenpolytope}.
Here is~a~depic\-tion of all the eigenpolytopes of the cube-graph, one for each eigenvalue:
\vspace{0.5em}
\begin{center}
\includegraphics[width=0.7\textwidth]{img/cube_eigenpolytopes}
\end{center}
\noindent
We observe that the phenomenon from \cref{sec:example} only happens for $\theta_2$.
In general, the $\theta_1$-eigenpolytope of a regular graph will always be a single point (which is why we rarely care about the largest eigenvalue).
Also, whenever a graph is bipartite, the eigenpolytope to the smallest eigenvalue is 1-dimensional, hence a line segment.
We are now free to compute the eigenpolytopes of all kinds of graphs, \mbox{including} graphs which are not the edge-graph of any polytope (so-called \emph{non-polytopal} graphs).
It is then hardly surprising that, for such a graph, no eigenpolytope has the original graph as its edge-graph.
But even if we start from a polytopal graph, we are not guaranteed to find an eigen\-polytope that has the initial graph as its edge-graph (\shortStyle{e.g.}\ the edge-graph of the triangular prism has no eigenvalue of multiplicity three, hence no eigenpolytope of dimension three, see also \cref{ex:prism}).
Equivalently, if one starts with a polytope, it~is~not guaranteed that this polytope is the eigenpolytope of its edge-graph (or even combinatorially equivalent to it).
\begin{example}
\label{ex:neighborly_1}
A \emph{neighborly polytope} is a polytope whose edge-graph is the complete graph $K_n$.
The spectrum of $K_n$ is $\{(-1)^{n-1},(n-1)^1\}$.
One checks that~the~eigenpolytopes are a single point (for $\theta_1=n-1$) and the regular simplex of dimension $n-1$ (for $\theta_2=-1$).
Consequently, no neighborly polytope other than a simplex is combinatorially equivalent to an eigenpolytope of its edge-graph.
\end{example}
That a graph and its eigenpolytope translate into each other as smoothly as in the case of the cube in \cref{sec:example} is a very special phenomenon, to which we shall give a name:
a polytope (or graph) for which this happens, will be called \emph{spectral}\footnote{There was at least one previous attempt to give a name to this phenomenon, namely, in \cite{licata1986surprising}, where it was called \emph{self-reproducing}.}.
We cannot formalize this definition right away, as there is some subtlety we have to discuss first (we give a formal definition in \cref{sec:balanced_spectral}, see \cref{def:spectral}).
\begin{example}
\label{ex:pentagon}
The image below shows two spectral realizations of the 5-cycle $C_5$\footnote{Spectral realizations are essentially defined like eigenpolytopes, assigning coordinates $v_i\in\RR^d$ to each vertex $i\in V$ (as in \cref{def:eigenpolytope}), but without taking the convex hull. Instead, one draws the edges between adjacent vertices.}.
\begin{center}
\includegraphics[width=0.4\textwidth]{img/C5}
\end{center}
The left image~shows the realization to the second-largest eigenvalue $\theta_2$, the right image shows the realization to the smallest eigenvalue $\theta_3$.
In both cases, the convex hull (the actual eigenpolytope) is a regular pentagon, whose edge-graph is $C_5$ again.
But we see that only in the case of $\theta_2$ the edges of the graphs get properly mapped into the edges of the pentagon.
While it is true that the 5-cycle $C_5$ is the edge-graph of its $\theta_3$-eigenpolytope, the adjacency information gets scrambled in the process:
while, say, vertex 1 and 2 are adjacent in $C_5$, their images $v_1$ and $v_2$ do not form an edge in the $\theta_3$-eigenpolytope.
We do not want to call this \enquote{spectral}, as the adjacency information is not preserved.
The same can happen in higher dimensions too, \shortStyle{e.g.}\ with $G$ being the edge-graph of the dodecahedron:
\begin{center}
\includegraphics[width=0.55\textwidth]{img/dodecahedron_eigenpolytopes_2}
\end{center}
\end{example}
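The difference between the two realizations of $C_5$ can be quantified: in the $\theta_2$-realization the image of a graph edge is a \emph{shortest} pairwise distance (a pentagon edge), while in the $\theta_3$-realization it is a \emph{longest} one (a pentagon diagonal, yielding the pentagram drawing). A small numerical check (ours, not the paper's):

```python
import numpy as np

# C_5: cyclic adjacency on 5 vertices.
n = 5
A = np.array([[1.0 if (i - j) % n in (1, n - 1) else 0.0
               for j in range(n)] for i in range(n)])
vals, vecs = np.linalg.eigh(A)

theta2 = 2 * np.cos(2 * np.pi / 5)   # second-largest eigenvalue (mult. 2)
theta3 = 2 * np.cos(4 * np.pi / 5)   # smallest eigenvalue (mult. 2)

def edge_image_vs_extremes(theta):
    V = vecs[:, np.isclose(vals, theta)]       # rows are v_0, ..., v_4
    d_edge = np.linalg.norm(V[0] - V[1])       # image of the edge {0,1}
    d_all = [np.linalg.norm(V[0] - V[j]) for j in range(1, n)]
    return d_edge, min(d_all), max(d_all)

d2_edge, d2_min, d2_max = edge_image_vs_extremes(theta2)
d3_edge, d3_min, d3_max = edge_image_vs_extremes(theta3)
```

Since pairwise distances depend only on the projection onto the eigenspace, the outcome does not depend on the particular orthonormal basis returned by `eigh`.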
\begin{observation}
\label{res:theta_2_observations}
From studying many examples, there are two interesting observations to be made, both concerning $\theta_2$, neither of which is rigorously proven:
\begin{myenumerate}
\item
It appears as if only $\theta_2$ can give rise to spectral polytopes/graphs.
At~least, all known examples are $\theta_2$-spectral (see also \cref{q:not_theta_2}). Some considerations on nodal domains make this plausible, but no proof is known in the general case (a proof is known in certain special cases, see \cref{res:edge_transitive_spectral_graph}).
\item
If $i\in V$ is a vertex of $G$, then $v_i$ is not necessarily a vertex of every~eigenpolytope ($v_i$ might end up in the interior of $P_G(\theta)$ or one of its faces).
And even if $v_i,v_j\in\F_0(P_G(\theta))$ are distinct vertices and $ij\in E$ is an edge of $G$, it is still not necessarily true that $\conv\{v_i,v_j\}$ is also an edge of the~eigen\-polytope (as seen in \cref{ex:pentagon}).
However, this seems to be no concern in the case $\theta_2$.
It appears as if all edges of $G$ become edges of the $\theta_2$-eigenpolytope, even if $G$ is not spectral (under mild assumptions on the end vertices of the edge).
In other words, the adjacency information of $G$ gets imprinted on the edge-graph of the $\theta_2$-eigenpolytope, whether $G$ is spectral or not.
This is known to be true only in the case of distance-regular graphs \cite[Theorem 3.3 (b)]{godsil1998eigenpolytopes}, but unproven~in general (see also \cref{q:realizing_edges}).
\end{myenumerate}
\end{observation}
\subsection{Literature}
Eigenpolytopes were first introduced by Godsil \cite{godsil1978graphs} in 1978.
Godsil proved the existence of a group homomorphism $\Aut(G)\to\Aut(P_G(\theta))$, \shortStyle{i.e.,}\, any combinatorial symmetry of the graph translates into a Euclidean symmetry of the polytope.
From that, he deduces results about the combinatorial symmetry group of the original graph.
We can say more about this group homomorphism: for every $\theta\in\Spec(G)$, the following holds.
\begin{theorem}[\!\cite{godsil1978graphs}, Theorem 2.2]
\label{res:realizing_symmetries}
If $\sigma\in\Aut(G)\subseteq\Sym(n)$ is a symmetry of $G$, and $\Pi_\sigma\in\Perm(\RR^n)$ is the associated permutation matrix, then
$$T_\sigma:=\Phi^{\Tsymb} \Pi_\sigma \Phi \;\in\; \Ortho(\RR^d),\qquad (\text{$\Phi$ is the eigenpolytope matrix})$$
is a Euclidean symmetry of the eigenpolytope $P_G(\theta)$ that also permutes the $v_i$ as~prescribed by $\sigma$, \shortStyle{i.e.,}\ $T_\sigma\circ \phi = \phi\circ\sigma$, or $T_\sigma v_i = v_{\sigma(i)}$ for all $i\in V$.
\end{theorem}
This result is also proven (more generally for spectral graph realizations) in \cite[Corollary 2.9]{winter2020symmetric}.
\cref{res:realizing_symmetries} explicitly uses that eigenpolytopes are defined using an \emph{orthonormal} basis rather than an arbitrary basis of the eigenspace, to conclude that the symmetries $T_\sigma$ are \emph{orthogonal} matrices.
Also, the statement of \cref{res:realizing_symmetries} is not too satisfying in general, as it can happen that non-trivial~symmetries of $G$ are mapped to the identity transformation.
In particular, we do not necessarily have $\Aut(G)\cong\Aut(P_G(\theta))$.
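\cref{res:realizing_symmetries} can be verified numerically, \shortStyle{e.g.}\ for the cube graph. The sketch below is our own; note that with the row-permutation convention chosen here, the identity $T_\sigma v_i = v_{\sigma(i)}$ reads $\Phi T = \Pi_\sigma\Phi$ (the theorem's convention for $\Pi_\sigma$ may be the transpose of ours):

```python
import numpy as np

# Cube graph Q_3: vertex i is the 3-bit string of i, adjacency = one bit flip.
n = 8
A = np.array([[1.0 if bin(i ^ j).count("1") == 1 else 0.0
               for j in range(n)] for i in range(n)])
vals, vecs = np.linalg.eigh(A)
Phi = vecs[:, np.isclose(vals, 1.0)]   # theta_2-eigenpolytope matrix

# An automorphism of Q_3: XOR every vertex label with 011
# (it preserves Hamming distance, hence adjacency).
sigma = [i ^ 0b011 for i in range(n)]
Pi = np.eye(n)[sigma]                  # (Pi @ Phi)[k] is the row v_{sigma(k)}

T = Phi.T @ Pi @ Phi                   # the matrix T_sigma of the theorem
```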
Several authors construct the eigenpolytopes of certain famous graphs or graph families.
Powers \cite{powers1986petersen} computed the eigenpolytopes of the \emph{Petersen graph}, which he termed the \emph{Petersen polytopes} (one of which will appear as a distance-transitive polytope in \cref{sec:distance_transitive}).
The same author also investigated eigenpolytopes of general distance-regular graphs in \cite{powers1988eigenvectors}.
In \cite{mohri1997theta_1}, Mohri described the face structure of the \emph{Hamming polytopes}, the $\theta_2$-eigenpolytopes of the Hamming graphs.
Seemingly unknown to the author, these polytopes can also be described as Cartesian powers of regular simplices (also distance-transitive, see \cref{sec:distance_transitive}).
There exists a wonderful enumeration of the eigenpolytopes (actually, spectral realizations) of the edge-graphs of all uniform polyhedra in \cite{blueSpectral}. Sadly, this write-up was never published formally.
This provides empirical evidence that every uniform polyhedron
has a spectral realization.
The same question might then be asked for uniform polytopes in higher dimensions.
Rooney \cite{rooney2014spectral} used the combinatorial structure of eigenpolytopes (the sizes of their facets) to deduce statements about the size of cocliques in a graph.
In \cite{padrol2010graph}, the authors investigate how common graph operations translate to operations on their eigenpolytopes.
Particular attention was given to the eigenpolytopes of distance-regular graphs \cite{powers1988eigenvectors,godsil1998eigenpolytopes,godsil1995euclidean}.
It was shown that in a $\theta_2$-eigenpolytope of a distance-regular graph~$G$,~every edge of $G$ corresponds to an edge of the eigenpolytope \cite{godsil1998eigenpolytopes}.
Consequently, $G$~is a spanning subgraph of the edge-graph of the eigenpolytope.
It remains open if the same holds for less regular graphs, \shortStyle{e.g.}\ 1-walk regular graphs or arc-transitive graphs (see also \cref{q:realizing_edges}).
The observation that some polytopes are the eigenpolytopes
of their edge-graph (\shortStyle{i.e.,}\ they are \emph{spectral} in our terminology) was made repeatedly, \shortStyle{e.g.}\ in \cite{godsil1995euclidean} and \cite{licata1986surprising}.
In the latter, this was shown for all regular polytopes, excluding the exceptional 4-dimensional polytopes, the 24-cell, 120-cell and 600-cell.
This gap was filled in \cite{winter2020symmetric} via general considerations concerning spectral realizations of arc-transitive graphs.
In sum, all regular polytopes are known to be $\theta_2$-spectral.
The next major result for spectral polytopes was obtained by Godsil in \cite{godsil1998eigenpolytopes}, where he was able to classify all $\theta_2$-spectral distance-regular graphs (see also \cref{sec:distance_transitive}):
\begin{theorem}[\!\cite{godsil1998eigenpolytopes}, Theorem 4.3]
\label{res:spectral_distance_regular_graphs}
Let $G$ be distance-regular.
If $G$ is $\theta_2$-spectral,~then $G$ is one of the following:
\begin{enumerate}[label=$(\text{\roman*}\,)$]
\item a cycle graph $C_n,n\ge 3$,
\item the edge-graph of the dodecahedron,
\item the edge-graph of the icosahedron,
\item the complement of a disjoint union of edges,
\item a Johnson graph $J(n,k)$,
\item a Hamming graph $H(d,q)$,
\item a halved $n$-cube $\nicefrac12 Q_n$,
\item the Schläfli graph, or
\item the Gosset graph.
\end{enumerate}
\end{theorem}
A second look at this list reveals a remarkable \enquote{coincidence}: while the generic distance-regular graph has few or no symmetries, all the graphs in this list are highly symmetric, in fact, \emph{distance-transitive} (a definition will be given in \cref{sec:distance_transitive}).
It is a wide open question whether being spectral is a property solely reserved for highly symmetric graphs and polytopes (see also \cref{q:trivial_symmetry}).
There is only a single known spectral polytope that is not vertex-transitive (see also \cref{rem:edge_not_vertex} and \cref{q:spectral_non_vertex_transitive}).
\section{Balanced and spectral polytopes}
\label{sec:balanced_spectral}
In this section we give a second approach to \emph{spectral polytopes} that circumvents the mentioned subtleties.
For the rest of the paper, let $P\subset\RR^d$ denote a full-dimensional polytope in~dimen\-sion $d\ge 2$ with vertices $v_1,...,v_n\in\F_0(P)$.
We distinguish the \emph{skeleton} of $P$, which is the graph with vertex set $\F_0(P)$ and edge set $\F_1(P)$, from the~\mbox{\emph{edge-graph}}~$G_P=(V,E)$ of $P$, which is isomor\-phic to the skeleton, but has vertex set $V=\{1,...,n\}$. The isomorphism will be denoted
\begin{equation}
\label{eq:vertex_map}
\psi:V\ni i\mapsto v_i\in\F_0(P),
\end{equation}
and we call it the \emph{skeleton map}.
\subsection{Balanced polytopes}
\begin{definition}
The polytope $P$ is called \emph{$\theta$-balanced} (or just \emph{balanced}) for some~real number $\theta\in\RR$, if
\begin{equation}
\label{eq:balanced}
\sum_{\mathclap{j\in N(i)}} v_j = \theta v_i,\quad\text{for all $i\in V$},
\end{equation}
where $N(i):=\{j\in V\mid ij\in E\}$ denotes the \emph{neighborhood} of a vertex $i\in V$.
\end{definition}
One way to interpret the balancing condition \eqref{eq:balanced} is as a kind of self-stress~con\-dition on the skeleton of $P$ (the term \enquote{balanced} is motivated by this).
For each edge $ij\in E$, the vector $v_j-v_i$ is parallel to the edge $\conv\{v_i,v_j\}$.
If $P$ is $\theta$-balanced, at each vertex $i\in V$ we have the equation
$$\sum_{\mathclap{j\in N(i)}} (v_j-v_i) = \sum_{\mathclap{j\in N(i)}} v_j - \deg(i) v_i = \big(\theta-\deg(i)\big)v_i.$$
This equation can be interpreted as two forces that cancel each other out: on the left, a contracting force along each edge (proportional to the length of that edge), and on the right, a force repelling each vertex away from the origin (proportional to the distance of that vertex from the origin, and to $\theta-\deg(i)$).
A second interpretation of \eqref{eq:balanced} is via spectral graph theory.
Define the matrix
\begin{equation}
\label{eq:arrangement_matrix}
\Psi :=
\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_1^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\; \\
& \vdots & \\[0.4ex]
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_n^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\;
\end{pmatrix}
\end{equation}
in which the $v_i$ are the rows.
This matrix will be called the \emph{arrangement matrix} of $P$.
Note that the skeleton map $\psi$ assigns $i\in V$ to the $i$-th row of $\Psi$.
Since $P\subset\RR^d$ is full-dimensional, we have $\rank \Psi=d$.
\begin{observation}\label{res:eigenvalue}
Suppose that $P$ is $\theta$-balanced.
The defining equation \eqref{eq:balanced} can be equivalently written as the matrix equation $A\Psi=\theta \Psi$.
In this form,~it~is~apparent that $\theta$ is an eigenvalue of the adjacency matrix $A$, and the columns of $\Psi$ are $\theta$-eigenvectors, or $\Span\Psi\subseteq\Eig_{G_P}(\theta)$.
\end{observation}
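As a quick numerical sanity check, consider a toy example of our own (not taken from the text): the regular hexagon, with vertices on the unit circle and edge-graph the $6$-cycle, is $1$-balanced, and \eqref{eq:balanced} indeed takes the matrix form $A\Psi=\theta\Psi$.

```python
import numpy as np

# Regular hexagon: vertices v_k on the unit circle, edge-graph the cycle C6.
n = 6
Psi = np.array([[np.cos(2*np.pi*k/n), np.sin(2*np.pi*k/n)] for k in range(n)])
A = np.zeros((n, n))
for k in range(n):
    A[k, (k+1) % n] = A[k, (k-1) % n] = 1.0   # adjacency matrix of C6

theta = 1.0                       # since v_{k-1} + v_{k+1} = 2*cos(60deg) * v_k
for k in range(n):                # per-vertex balancing condition (eq:balanced)
    assert np.allclose(Psi[(k-1) % n] + Psi[(k+1) % n], theta * Psi[k])
assert np.allclose(A @ Psi, theta * Psi)   # matrix form: A.Psi = theta.Psi
print("the hexagon is 1-balanced")
```

Here the columns of $\Psi$ span a $2$-dimensional subspace of the $\theta$-eigenspace, exactly as \cref{res:eigenvalue} predicts.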
We have seen that for a balanced polytope, the columns of $\Psi$ must be eigenvectors.
But they are not necessarily a complete set of~$\theta$-eigen\-vectors, \shortStyle{i.e.,}\ they do not necessarily span the whole eigenspace.
\begin{example}
\label{ex:neighborly}
Every centered neighborly polytope $P$ is balanced, but unless it is a simplex, it is not spectral (the latter was shown in \cref{ex:neighborly_1}).
Centered~means that
$$\sum_{i\in V} v_i = 0.$$
Since $P$ is neighborly, we have $G_P=K_n$ and $N(i)=V\setminus\{i\}$ for all $i\in V$. Therefore
$$\sum_{\mathclap{j\in N(i)}} v_j = \sum_{\mathclap{j\in V}} v_j - v_i = -v_i,\quad\text{for all $i\in V$}.$$
And indeed, $K_n$ has spectrum $\{(-1)^{n-1},(n-1)^1\}$.
So $P$ is $(-1)$-balanced.
\end{example}
The last example shows that every neighborly polytope can be made balanced by merely translating it.
More generally, many polytopes have a realization (of~their combinatorial type) that is balanced.
But other polytopes do not:
\begin{example}
\label{ex:prism}
Let $P\subset\RR^3$ be a triangular prism.
The spectrum of the edge-graph of $P$ is $\{(-2)^2,0^2,1^1,3^1\}$.
Note that there~is~no eigenvalue of multiplicity greater than~two.
In particular, we cannot choose three linearly independent eigenvectors to a common eigenvalue.
But if $P$ were balanced, then \cref{res:eigenvalue} tells us that the columns of the arrangement matrix $\Psi$ would be three eigenvectors to the same eigenvalue (linearly independent, since $\rank \Psi=3$), which is not possible.
And so, no realization of $P$ can be balanced.
\end{example}
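The claimed spectrum is quickly verified numerically (a sketch of our own; the vertex labelling is an assumption):

```python
import numpy as np

# Edge-graph of the triangular prism: triangles {0,1,2} and {3,4,5}
# joined by the perfect matching 0-3, 1-4, 2-5.
A = np.zeros((6, 6))
for i, j in [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (0,3), (1,4), (2,5)]:
    A[i, j] = A[j, i] = 1.0

spec = np.round(np.linalg.eigvalsh(A), 6)   # eigenvalues in ascending order
print(spec)   # -2 (twice), 0 (twice), 1, 3 -- largest multiplicity is two
```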
\subsection{Spectral graphs and polytopes}
In the extreme case, when the columns of $\Psi$ span the whole eigenspace, we can finally give a compact definition of what we want~to~consider as \emph{spectral}:
\begin{definition}
\label{def:spectral}\quad
\begin{myenumerate}
\item A polytope $P$ is called \emph{$\theta$-spectral} (or just \emph{spectral}), if its arrangement matrix $\Psi$ satisfies $\Span \Psi=\Eig_{G_P}(\theta)$.
\item A graph is said to be \emph{$\theta$-spectral} (or just \emph{spectral}) if it is (isomorphic to) the edge-graph of~a $\theta$-spectral polytope.
\end{myenumerate}
\end{definition}
This definition is now perfectly compatible with our initial motivation for the~term \enquote{spectral} in \cref{sec:def_eigenpolytope}.
\begin{lemma}\quad
\label{res:naive}
\begin{myenumerate}
\item If a polytope $P$ is $\theta$-spectral, then $P$ is linearly equivalent to the $\theta$-eigenpoly\-tope of its edge-graph (see also \cref{res:naive_polytope}).
\item If a graph $G$ is $\theta$-spectral, then $G$ is (isomorphic to) the edge-graph of its~$\theta$-eigen\-polytope (see also \cref{res:naive_graph}).
\end{myenumerate}
\end{lemma}
In both cases, the converse is \emph{not} true.
This is intentional, to avoid the problems mentioned in \cref{ex:pentagon}.
Both statements will be proven below by formulating a more technical condition that is then actually equivalent to being spectral.
\begin{proposition}
\label{res:naive_polytope}
A polytope $P$ is $\theta$-spectral if and only if it is linearly equivalent to the $\theta$-eigenpolytope of its edge-graph via some linear map $T\in\GL(\RR^d)$ for which the following diagram commutes:
\begin{equation}
\label{eq:diagram_polytope}
\begin{tikzcd}
P \arrow[r, "T"] & P_{G_P}(\theta) \\
& G_P \arrow[lu, "\psi"] \arrow[u, "\phi"']
\end{tikzcd}
\end{equation}
where $\phi$ and $\psi$ denote the eigenpolytope map and skeleton map respectively.
\end{proposition}
\begin{proof}
By definition, the $\theta$-eigenpolytope of $G_P$ satisfies $\Span \Phi=\Eig_{G_P}(\theta)$, where $\Phi$ is the corresponding eigenpolytope matrix.
Now, by definition, $P$ is $\theta$-spectral if and only if $\Span \Psi = \Eig_{G_P}(\theta)$, where $\Psi$ is its arrangement matrix.
But by \cref{res:same_column_span}, $\Phi$ and $\Psi$ have the same span if and only if their rows are related by some invertible linear map $T\in\GL(\RR^d)$, that is, $\Psi T=\Phi$, or $T\circ \psi=\phi$. The latter expresses exactly that \eqref{eq:diagram_polytope} commutes.
\end{proof}
This also proves \cref{res:naive} $(i)$.
\begin{proposition}
\label{res:naive_graph}
A graph $G$ is $\theta$-spectral if and only if the eigenpolytope map~$\phi\:$ $V(G)\to \RR^d$ provides an isomorphism between $G$ and the skeleton of its $\theta$-eigenpoly\-tope $P_G(\theta)$.
\begin{proof}
Suppose first that $G$ is $\theta$-spectral.
Then there is a $\theta$-spectral polytope $Q$~with edge-graph~$G_Q=G$ and skeleton map $\psi\:V(G_Q)\to\F_0(Q)$.
By \cref{res:naive} $(i)$, $Q$ is linearly equivalent to $P_G(\theta)$ via some linear map $T\in\GL(\RR^d)$.
By \cref{res:naive_polytope}, the eigenpolytope map satisfies $\phi=T\circ \psi$.
Since $T$ induces an isomorphism between the skeleta of $Q$ and $P_G(\theta)$, and $\psi$ is an isomorphism between $G$ and the skeleton of $Q$, we find that $\phi$ must be an isomorphism between $G$ and the skeleton of $P_G(\theta)$.
This shows one direction.
For the converse, suppose that $\phi$ is an isomorphism.
Set $P:=P_G(\theta)$ and let~$G_P$ be its edge-graph with skeleton map $\psi\:V(G_P)\to \F_0(P)$.
Then $\sigma:=\psi^{-1}\circ\phi$ is a graph isomorphism between $G$ and $G_P$.
So, since $G\cong G_P$, each eigenpolytope of $G$ is also an eigenpolytope of $G_P$.
We can therefore choose $P_{G_P}(\theta)=P_G(\theta)$, with corresponding eigenpolytope map $\phi':=\sigma^{-1}\circ\phi$.
In sum, the outer square in the following diagram commutes:
\begin{center}
\begin{tikzcd}
G \arrow[r, "\sigma"] \arrow[d, "\phi"'] & G_P \arrow[d, "\phi'"] \arrow[ld, "\psi"'] \\
\mathllap{P:=\,}P_G(\theta) \arrow[r, "\Id"'] & P_{G_P}(\theta)
\end{tikzcd}
\end{center}
Also, by construction of $\sigma$, the upper triangle commutes.
In conclusion, the lower triangle must commute as well, which is exactly \eqref{eq:diagram_polytope} with $T=\Id$. This proves that $P$~is $\theta$-spectral via \cref{res:naive_polytope}.
Since $G$ is isomorphic to $G_P$, $G$ is $\theta$-spectral.
\end{proof}
\end{proposition}
\noindent
This also proves \cref{res:naive} $(ii)$.
\iffalse
\begin{proposition}
\label{res:naive_graph}
A graph $G$ is $\theta$-spectral if and only if it is isomorphic to the edge-graph~of its $\theta$-eigenpolytope $P:=P_G(\theta)$ via some isomorphism $\sigma\:V(G)\to V(G_P)$ for which the following diagram commutes:
\begin{equation}
\label{eq:diagram}
\begin{tikzcd}
G \arrow[rd, "\phi"'] \arrow[r, "\sigma"] & G_P \arrow[d, "\psi"] \\
& P_G(\theta)\mathrlap{\,:=P }
\end{tikzcd}
\end{equation}
\begin{proof}
First, suppose that $G$ is $\theta$-spectral.
By definition, there exists a $\theta$-spectral polytope $Q$ with edge-graph $G_Q=G$, and let $\psi_Q\:V(G_Q)\to\F_0(Q)$ be the map \eqref{eq:vertex_map} (we use subscripts since there will be further such maps).
Since $Q$ is $\theta$-spectral, \cref{res:naive} $(i)$ states that $Q$ and $P_{G_Q}(\theta)=P_G(\theta)= P$ are linearly equivalent via some map $T\in\GL(\RR^d)$.
Let $\psi_P\:V(G_P)\to\F_0(P)$ be~the corresponding map \eqref{eq:vertex_map} for $P$.
We can then define the isomorphism as $\sigma:=\psi_Q\circ T\circ \psi_P^{-1}$ (all involved maps are invertible and preserve adjacency).
The following diagram then commutes:
\begin{center}
\begin{tikzcd}
\mathllap{G=\,}G_Q \arrow[r, "\sigma"] \arrow[d, "\psi_Q"'] & G_P \arrow[d, "\psi_P"] \\
Q \arrow[r, "T"'] & P\mathrlap{\,=P_G(\theta)}
\end{tikzcd}
\end{center}
We can now insert the diagonal arrow $\phi\: V(G)\to\RR^d\supseteq P_G(\theta)$ given by \eqref{eq:eigenpolytope_map}.
\begin{center}
\begin{tikzcd}
\mathllap{G=\,}G_Q \arrow[r, "\sigma"] \arrow[d, "\psi_Q"'] \arrow[rd, "\phi"] & G_P \arrow[d, "\psi_P"] \\
Q \arrow[r, "T"'] & P\mathrlap{\,:= P_G(\theta)}
\end{tikzcd}
\end{center}
Since $Q$ is spectral, the the lower triangle commutes by \cref{res:naive_polytope}.
Since also the outer square commutes, we find that the upper triangle (which is exactly \eqref{eq:diagram}) commutes as well, and we are done.
For the other direction, suppose that there is an isomorphism $\sigma:V(G)\to V(G_P)$ so that \eqref{eq:diagram} commutes.
Since $G$ and $G_P$ are isomorphic, $P=P_G(\theta)$ can be consider as a $\theta$-eigenpolytope of $G_P$ as well, thus $P=P_{G_P}(\theta)$.
If $\phi$ is the map \eqref{eq:eigenpolytope_map} for the eigenpolytope of $G$, then because $G_P$ uses the same eigenpolytope, $\phi\circ\sigma^{-1}$ is the respective map \eqref{eq:eigenpolytope_map} for the eigenpolytope of $G_P$.
The following diagram then commutes:
\begin{center}
\begin{tikzcd}
G \arrow[rd, "\phi"'] \arrow[r, "\sigma"] & G_P \arrow[rd, "\phi\circ\sigma^{-1}"] & \\
& \mathllap{P=:\,}P_G(\theta) \arrow[r, "\Id"'] & P_{G_P}(\theta)
\end{tikzcd}
\end{center}
We can now insert the vertical arrow $\psi\: V(G_P)\to \F_0(P)$ given by \eqref{eq:vertex_map}.
\begin{center}
\begin{tikzcd}
G \arrow[rd, "\phi"'] \arrow[r, "\sigma"] & G_P \arrow[rd, "\phi\circ\sigma^{-1}"] \arrow[d, "\psi"] & \\
& \mathllap{P=:\,}P_G(\theta) \arrow[r, "\Id"'] & P_{G_P}(\theta)
\end{tikzcd}
\end{center}
The left triangle is exactly \eqref{eq:diagram}, hence commutes by assumption.
The outer parallelogram commutes by construction, and so we find that the right triangle commutes as well.
This triangle is exactly \eqref{eq:diagram_polytope} from \cref{res:naive_polytope} with $T=\Id$.
This shows that $P
$ is $\theta$-spectral, and since $G\cong G_P$ is the edge-graph of $P$, we found that $G$ is $\theta$-spectral, and we are done.
\end{proof}
\end{proposition}
\fi
\iffalse
\begin{lemma}
If $P$ is linearly equivalent to the $P_{G_P}(\theta)$, and this linear map $T\in\GL(\RR^d)$ satisfies $\Phi=\Psi T$, then $P$ is $\theta$-spectral.
\end{lemma}
\begin{lemma}
If $G$ is isomorphic to the edge-graph of $P_G(\theta)=:Q$, and the isomorphism $\sigma\:V(G)\to V(G_Q)$ saitsies $\Phi=\Pi_\sigma \Psi$, then $G$ is $\theta$-spectral.
\end{lemma}
\begin{lemma}
\label{res:naive_polytope}
If $P$ is $\theta$-spectral, then $P$ is linearly equivalent to the $\theta$-eigenpolytope of its edge-graph. The converse is not true.
However, $P$ is $\theta$-spectral if and only if $P$ is linearly equivalent to $P_{G_P}(\theta)$ via~some map $T\in\GL(\RR^d)$, and $T$ additionally satisfies $T \phi(i) = \psi(i)$ for all $i\in V$, where~$\phi\:$ $V(G_P)\to\RR^d$ and $\psi\:V(G_P)\to\F_0(P)$ are the maps defined in \eqref{eq:eigenpolytope_map} resp.\ \eqref{eq:vertex_map}.
\begin{proof}
By definition, the $\theta$-eigenpolytope of $G_P$ satisfies $\Span \Phi=\Eig_{G_P}(\theta)$, where $\Phi$ is as defined in \eqref{eq:eigenpolytope_matrix}.
Now, by definition, $P$ is $\theta$-spectral if and only if $\Span \Psi = \Eig_{G_P}(\theta)$, where $\Psi$ is its arrangement matrix \eqref{eq:arrangement_matrix}.
But by \cref{res:same_column_span}, $\Phi$ and $\Psi$ can have the same span if and only of their rows are related by some invertible linear map $T\in\GL(\RR^d)$, that is, $T\phi(i)=\psi(i)$ for all $i\in V$.
A counterexample for the initial converse was given in \cref{ex:pentagon}.
\end{proof}
\end{lemma}
\begin{corollary}
If $P$ is $\theta$-spectral, then so is $G_P$ and $P_{G_P}(\theta)$.
\end{corollary}
The respective statement for graphs is the following:
\begin{lemma}
\label{res:naive_graph}
If $G$ is $\theta$-spectral, then $G$ is isomorphic to the edge-graph of its $\theta$-eigenpolytope. The converse is not true.
However, $G$ is $\theta$-spectral if and only if it is isomorphic to the edge-graph of its $\theta$-eigenpolytope, and the isomorphism is given by $\phi\:V\to\RR^d$~as~de\-fined in \eqref{eq:eigenpolytope_map}.
\begin{proof}
Let $Q:=P_G(\theta)$ be the $\theta$-eigenpolytope of $G$.
Then we try to prove $G\cong G_Q$.
If $G$ is $\theta$-spectral, then there is a $\theta$-spectral polytope $P$ with edge-graph $G_P\cong G$.
By this, $P_{G_P}(\theta)\cong P_G(\theta)=Q$.
By \cref{res:naive_polytope}, $P$ is linearly~equivalent to~$P_{G_P}(\theta)$, hence linearly equivalent to $Q$.
In particular, $G_P\cong G_Q$, and so we found $G\cong G_P\cong G_Q$.
A counterexample for the initial converse was given in \cref{ex:pentagon}.
\end{proof}
\end{lemma}
\fi
\iffalse
\begin{corollary}
\quad
\begin{myenumerate}
\item If a polytope $P$ is $\theta$-spectral, then it is the $\theta$-eigenpolytope of its edge-graph (up to some invertible linear transformation).
\item If a graph $G$ is $\theta$-spectral, then it is (isomorphic to) the edge-graph of its $\theta$-eigenpolytope.
\item If a graph $G$ is $\theta$-spectral, then its $\theta$-eigenpolytope is $\theta$-spectral.
\end{myenumerate}
\end{corollary}
\begin{corollary}
\label{res:naive_definition}
If $P$ is $\theta$-spectral, then it is the $\theta$-eigenpolytope of its edge-graph (up to invertible linear transformation)
\begin{proof}
If $P$ is $\theta$-spectral with arrangement matrix $M'$, then by \mbox{\cref{def:spectral} $(i)$} we have $\Span M'=\Eig_{G_P}(\theta)$.
Let further $M$ be the matrix \eqref{eq:eigenpolytope_matrix} used in \cref{def:eigenpolytope} to define the $\theta$-eigenpolytope of $G_P$.
By definition we have $\Span M=\Eig_{G_P}(\theta)$.
By \cref{res:same_column_span}, since $M$ and $M'$ have the same column span, their rows are related by an invertible linear transformation.
These rows are the vertices of $P$ and $P_{G_P}(\theta)$ respectively, finishing the proof.
\end{proof}
\end{corollary}
The converse of \cref{res:naive_definition} is not true.
This is intentional, since we do not want confiurations as in \cref{ex:pentagon} to be called \enquote{spectral}.
\begin{corollary}
\label{res:spectral_graph_naive}
If a graph $G$ is $\theta$-spectral, then $\phi\:V\ni i\mapsto v_i\in\RR^d$ (as \mbox{introduced} in \cref{def:eigenpolytope}) is an isomorphism between $G$ and the edge-graph of $P_G(\theta)$.
That includes
\begin{myenumerate}
\item for all $i\in V$ holds $v_i\in\F_0(P_G(\theta))$.
\item for distinct $i,j\in V$ holds $v_i\not= v_j$.
\item $ij\in E$ if and only if $\conv\{v_i,v_j\}\in\F_1(P_G(\theta))$.
\end{myenumerate}
\begin{proof}
Since $G$ is $\theta$-spectral, it is (isomorphic to) the edge-graph of some $\theta$-spectral polytope $P$.
Recall that $n$ denotes the number of vertices of $G$.
Since $P_G(\theta)$ is the convex hull of $v_1,...,v_n$, $P_G(\theta)$ has at most $n$ vertices, and $\phi$ is surjective.
Since $P_G(\theta)$ is combinatorially equivalent to $P$ (by \cref{res:naive_definition}), $P_G(\theta)$ has exactly $n$ vertices, and $\phi$ must also be injective.
This also shows $(i)$ and $(ii)$.
\end{proof}
\end{corollary}
Note that not every graph that is isomorphic to the edge-graph of its $\theta$-eigenpoly\-tope is also $\theta$-spectral (again, consider \cref{ex:pentagon}).
However, if the isomorphism is the one from \cref{res:spectral_graph_naive}, then this conclusion is valid.
\iffalse
\begin{lemma}
If $P\subset\RR^d$ is a (full-dimensional) polytope, then the following are~equivalent:
\begin{myenumerate}
\item $P$ is $\theta$-spectral,
\item $P$ is $\theta$-balanced and the eigenvalue $\theta$ has multiplicity $d$.
\end{myenumerate}
If $G$ is a graph, then the following are equivalent:
\begin{myenumerate}
\setcounter{enumi}{2}
\item $G$ is $\theta$-spectral,
\item the $\theta$-eigenpolytope of $G$ is $\theta$-spectral with edge-graph $G$,
\item $G$ is isomorphic to the edge-graph of its $\theta$-eigenpolytope, with isomorphism $V\ni i\mapsto v_i\in\RR^d$ (as constructed in \cref{def:eigenpolytope}).
\end{myenumerate}
\begin{proof}
By \cref{res:eigenvalue}, the balancing equation \eqref{eq:balanced} is equivalent to $\Span M\subseteq\Eig_G(\theta)$.
Recall that the rank~of~$M$~is~$d$.
Now, $\theta$ having multiplicity $d$ (\shortStyle{i.e.,}\ $(ii)$) is equivalent to $\dim\Eig_G(\theta)=d=\rank M$. This is then equivalent to $M=\Eig_G(\theta)$ which is $(i)$.
If $G$ is $\theta$-spectral then it is (isomorphic to) the edge-graph of \emph{some} $\theta$-spectral polytope $P$.
But \cref{res:naive_definition} states that $P$ is (up to linear transformation) the $\theta$-eigenpolytope of its edge-graph, or, since isomorphic, the $\theta$-eigenpolytope of $G$.
\end{proof}
\end{lemma}
\fi
\fi
It is also possible to give a definition of spectral graphs purely in terms of graph theory, without any explicit reference to polytopes:
\begin{lemma}
\label{res:spectral_2}
A graph $G$ is $\theta$-spectral if and only if it satisfies both of the following:
\begin{myenumerate}
\item for each vertex $i\in V$ exists a $\theta$-eigenvector $u=(u_1,...,u_n)\in\Eig_G(\theta)$~whose single largest component is $u_i$, or equivalently,
$$\Argmax_{k\in V} u_k = \{i\}.$$
\item any two vertices $i,j\in V$ form an edge $ij\in E$ in $G$ if and only~if there is a $\theta$-eigenvector $u=(u_1,...,u_n)\in\Eig_G(\theta)$ whose only two largest components are $u_i$ and $u_j$, or equivalently,
$$\Argmax_{k\in V} u_k = \{i,j\}.$$
\end{myenumerate}
\end{lemma}
This characterization of spectral graphs can be interpreted as follows: a spectral graph can be reconstructed from knowing a single eigenspace, rather than, say, all eigenspaces and their associated eigenvalues.
\begin{proof}[Proof of \cref{res:spectral_2}]
Let $P_G(\theta)\subset\RR^d$ be the $\theta$-eigenpolytope of $G$ with eigenpolytope matrix $\Phi$ and eigenpolytope map $\phi\:V\ni i\mapsto v_i\in\RR^d$.
Since $\Span\Phi=\Eig_G(\theta)$, the eigenvectors $u=(u_1,...,u_n)\in\Eig_G(\theta)$ are exactly the vectors that can be written as $u=\Phi x$ for some $x\in\RR^d$.
If then $e_k\in\RR^n$ denotes the $k$-th standard basis vector, we have
$$u_k = \<u,e_k\> = \<\Phi x, e_k\> = \<x,\Phi^{\Tsymb}\! e_k\> = \<x, v_k\>.$$
Therefore, there is a $\theta$-eigenvector $u=(u_1,...,u_n)\in\Eig_G(\theta)$ with
$\Argmax_{k\in V} u_k = \{i_1,...,i_m\}$
if and only if there is a vector $x\in\RR^d$ with
$$\Argmax_{k\in V} \<x,v_k\> = \{i_1,...,i_m\}.$$
But this last line is exactly what it means for $\conv\{v_{i_1},...,v_{i_m}\}$ to be a face of $P_G(\theta)$ $=\conv\{v_1,...,v_n\}$ (and $x$ is a normal vector of that face).
In this light, we can interpret $(i)$ as stating that $v_1,...,v_n$ form $n$ distinct vertices of $P_G(\theta)$, and $(ii)$ as stating that $\conv\{v_i,v_j\}$ is an edge of $P_G(\theta)$ if and only if $ij\in E$.
And this means exactly that $\phi$ is a graph isomorphism between $G$ and the skeleton of $P_G(\theta)$.
By \cref{res:naive_graph}, this is equivalent to $G$ being $\theta$-spectral.
\end{proof}
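Conditions $(i)$ and $(ii)$ of \cref{res:spectral_2} can be exhibited numerically on an example of our own: the $6$-cycle with $\theta=1$, whose $\theta$-eigenpolytope is the regular hexagon. Since eigenvectors are exactly the vectors $u=\Phi x$ with $u_k=\<x,v_k\>$, a suitable $x$ certifies each condition (the converse direction of $(ii)$, the \emph{non}-existence of such $u$ for non-edges, is not checked by this sketch).

```python
import numpy as np

# C6 with theta = 1: the 1-eigenspace is spanned by the columns of Psi,
# whose rows are the hexagon vertices; eigenvectors are u = Psi @ x.
n = 6
Psi = np.array([[np.cos(2*np.pi*k/n), np.sin(2*np.pi*k/n)] for k in range(n)])

def argmax_set(u, tol=1e-9):
    """Indices where u attains its maximum (up to a numerical tolerance)."""
    return set(np.flatnonzero(u > u.max() - tol))

for i in range(n):
    # condition (i): x = v_i gives an eigenvector largest exactly at i
    assert argmax_set(Psi @ Psi[i]) == {i}
    # condition (ii): x = v_i + v_j, for the edge j = i+1, gives an
    # eigenvector whose two largest entries are exactly u_i and u_j
    assert argmax_set(Psi @ (Psi[i] + Psi[(i+1) % n])) == {i, (i+1) % n}
print("conditions (i) and (ii) hold for C6 with theta = 1")
```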
\iffalse
If the graph satisfies $(i)$ and $(ii)$, then the map $\sigma:=\psi^{-1}\circ\phi:V(G)\to V(G_P)$ is bijective because of $(i)$, and a graph isomorphism because of $(ii)$.
This isomorphism makes \eqref{eq:diagram} commute by definition, and so $G$ is $\theta$-spectral by \cref{res:naive_graph}.
Conversely, if $G$ is $\theta$-spectral, then it is isomorphic to the edge-graph $G_P$ of its $\theta$-eigenpolytope via some isomorphism $\sigma\: V(G)\to V(G_P)$ that makes \eqref{eq:diagram} commute.
We first note that that implies that $P_G(\theta)$ has exactly the vertices $v_1,...,v_n$, which means that $G$ satisfies $(i)$.
Since $\sigma$ and $\psi$ are graph isomorphisms, so is $\psi\circ\sigma$, and since \eqref{eq:diagram} commutes, also $\phi$.
This means $ij\in E$ if and only if $\conv\{\phi(i),\phi(k)\}=\conv\{v_i,v_j\}\in P_G(\theta)$.
This is equivalent to $(ii)$, and we are done.
Suppose that $G$ is $\theta$-spectral, then
We show first that there is a $\theta$-eigenvectors $u=(u_1,...,u_n)\in\Eig_G(\theta)$ with
$$\Argmax_{k\in V} u_k=\{i_1,...,i_m\}$$
if and only if $\{v_{i_1},...,v_{i_m}\}$ (as defined in \cref{def:eigenpolytope}) is the vertex set of a face~of the $\theta$-eigenpolytope of $G$.
Recall that a set $\{v_{i_1},...,v_{i_m}\}\subseteq \F_0(P)$ of vertices is the vertex set of a face of $P$ if and only if there is a vector $x\in\RR^d$ (a normal vector) with
$$\Argmax_{k\in V} \<x,v_k\> = \{i_1,...,i_m\}.$$
Let $M$ be the arrangement matrix of $P$, set $u=(u_1,...,u_n):=Mx$, and let $e_k\in\RR^n$ be the $k$-th standard basis vector.
Then
$$\<x,v_k\> = \<x, M^{\Tsymb}\! e_k\> = \<M x, e_k\> =\<u,e_k\> = u_k.$$
By $u=Mx$, the $x\in\RR^d$ are in one-to-one correspondence with the $u\in\Span M=\Eig_G(\theta)$, which are exactly the $\theta$-eigenvectors of $G$.
So $\{v_{i_1},...,v_{i_m}\}$ forms a face of $P$ if and only if
$$(*)\quad \Argmax_{k\in V} u_k = \Argmax_{k\in V} \<u,v_k\> = \{i_1,...,i_m\},$$
Condition $(i)$ and $(ii)$ are now the version of $(*)$ for vertices and edges of $P$, which correspond to vertices and edges of $G$ via the isomorphism $\phi$.
\fi
In practice, to reconstruct a spectral graph from an eigenspace, the steps could be the following: given a subspace $U\subseteq\RR^n$ (the claimed eigenspace), then
\begin{myenumerate}
\item choose any basis $u_1,...,u_d\in\RR^n$ of $U$,
\item build the matrix $\Phi=(u_1,...,u_d)\in\RR^{n\x d}$ in which the $u_i$ are the columns,
\item define $v_i$ as the $i$-th \emph{row} of $\Phi$,
\item define $P:=\conv\{v_1,...,v_n\}\subset\RR^d$ as the convex hull of the $v_i$,
\item the reconstructed graph $G=G_P$ is then the edge-graph of $P$.
\end{myenumerate}
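These steps can be sketched as follows (the specific example is our own assumption, not from the text): take $U$ to be the $0$-eigenspace of the octahedron graph $K_{2,2,2}$, with the antipodal, non-adjacent vertex pairs labelled $(0,1)$, $(2,3)$, $(4,5)$. For step (v) we use the face criterion from the proof of \cref{res:spectral_2}: a pair $\{i,j\}$ forms an edge if and only if some linear functional is maximized exactly on $\{v_i,v_j\}$; for this centrally symmetric example the candidate $x=v_i+v_j$ happens to decide every pair, which need not hold in general.

```python
import numpy as np

# (i)+(ii): a basis of the claimed eigenspace U as the columns of Phi.
# (iii): the points v_i are the rows of Phi (here: the octahedron vertices).
Phi = np.array([[ 1, 0, 0], [-1, 0, 0],
                [ 0, 1, 0], [ 0,-1, 0],
                [ 0, 0, 1], [ 0, 0,-1]], dtype=float)
n = len(Phi)

def argmax_set(u, tol=1e-9):
    return set(np.flatnonzero(u > u.max() - tol))

# (iv)+(v): edges of P = conv{v_1,...,v_n}, tested via the certificate
# x = v_i + v_j (sufficient here; a general polytope needs a face-lattice
# or convex-hull computation instead).
edges = {(i, j) for i in range(n) for j in range(i+1, n)
         if argmax_set(Phi @ (Phi[i] + Phi[j])) == {i, j}}
print(len(edges))   # 12 edges: all pairs except (0,1), (2,3), (4,5)
```

The reconstructed graph is $K_{2,2,2}$, as expected for the $0$-eigenpolytope of the octahedron.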
\subsection{Properties of spectral polytopes}
We discuss two properties of spectral polytopes that make them especially interesting in polytope theory.
\subsubsection*{Reconstruction from the edge-graph}
\label{sec:spectral_reconstruction}
The edge-graph of a general polytope carries little information about that polytope, \shortStyle{i.e.,}\ given only its edge-graph, we often cannot reconstruct the polytope (up to combinatorial equivalence).
Often, one cannot even deduce the dimension of the polytope from its edge-graph.
Reconstruction might be possible in certain special cases, as \shortStyle{e.g.}\ for 3-dimensional polyhedra, simple polytopes or zonotopes.
The spectral polytopes provide another such class.
\begin{theorem}\label{res:reconstruction}
A $\theta_k$-spectral polytope is uniquely~determined by its edge-graph up to invertible linear transformations.
\end{theorem}
The proof is simple:
every $\theta_k$-spectral polytope is linearly equivalent to the~$\theta_k$-eigenpolytope of its edge-graph (by \cref{res:naive} $(i)$).
Our definition of the \mbox{$\theta_k$-eigen}\-polytope already suggests an explicit procedure to construct it (a script for this~is included in \cref{sec:appendix_mathematica}).
This property of spectral polytopes appears more exciting when applied to graph classes that are not obviously spectral (see \cref{sec:edge_transitive}).
\subsubsection*{Realizing symmetries of the edge-graph}
\label{sec:spectral_symmetries}
Every Euclidean symmetry of a polytope induces a combinatorial symmetry on its edge-graph.
The converse is far from true.
Consider, for example, a rectangle that is not a square.
Even worse, it can happen that a polytope does not even have a realization that realizes all the symmetries of its edge-graph (\shortStyle{e.g.}\ the polytope constructed in \cite{bokowski1984combinatorial}).
We have previously discussed (in \cref{res:realizing_symmetries}) the existence of a homomorphism $\Aut(G)\to\Aut(P_G(\theta))$ between the symmetries of a graph $G$ and the symmetries of its eigenpolytopes.
There are two caveats:
\begin{myenumerate}
\item this is not necessarily an isomorphism, and
\item it says nothing about the symmetries of the edge-graph of $P_G(\theta)$, as this need not be isomorphic to $G$.
\end{myenumerate}
Still, it suffices to make statements of the following form:
if $G$ is vertex-transitive, then so are all its eigenpolytopes.
This might fail for other transitivities, such as edge-transitivity.
This is no concern for spectral graphs/polytopes:
\begin{theorem} \quad
\label{res:symmetries}
\begin{myenumerate}
\item If $G$ is $\theta$-spectral, then $P_G(\theta)$ realizes all its symmetries, which includes
$$\Aut(G)\cong\Aut(P_G(\theta))$$
%
via the map $\sigma\mapsto T_\sigma$ given in \cref{res:realizing_symmetries}, as well as the fact that $T_\sigma$ permutes the vertices and edges of $P_G(\theta)$ exactly as $\sigma$ permutes the vertices and edges of the graph $G$.
\item If $P$ is $\theta$-spectral, then $P$ has a realization that realizes all the symmetries~of its edge-graph, namely, the $\theta$-eigenpolytope of its edge-graph.
\end{myenumerate}
%
\end{theorem}
This is mostly straightforward, with large parts already addressed in \mbox{\cref{res:realizing_symmetries}}.
The major difference is that for a spectral graph $G$ the eigenpolytope has exactly the $n$ distinct vertices $v_1,...,v_n\in\RR^d$.
The statement from \cref{res:realizing_symmetries} that $T_\sigma$~per\-mutes the $v_i$ as prescribed by $\sigma$ then becomes that $T_\sigma$ permutes the \emph{vertices} as prescribed by $\sigma$, and hence also the edges.
Also, since the $v_i$ are distinct, no non-trivial symmetry $\sigma$ can result in trivial $T_\sigma$, making $\sigma\mapsto T_\sigma$ into a group \emph{isomorphism}.
For part $(ii)$ merely recall that the eigenpolytope $P_{G_P}(\theta)$ is indeed a realization of $P$ by \cref{res:naive} $(i)$.
The major consequence is that for spectral graphs/polytopes, more complicated types of symmetry, \shortStyle{e.g.}\ edge-transitivity, also translate between a polytope and its edge-graph (see also \cref{sec:edge_transitive}).
\iffalse
But, we have to be careful: we cannot expect a general statement about symmetries of spectral polytopes, as \shortStyle{e.g.}\ a rectangle and a square are both spectral (since they are linear transformations of each other), but one is apparently more symmetric, if by symmetry we understand a rigid motion.
There are two solutions: either, generalizing symmetries to general invertible linear transformations, or, restricting the polytopes about which we talk.
Both perspectives are equivalent, and we decided to choose the second.
Let us call a polytope \emph{normalized} if $M^{\Tsymb}\! M=\Id$ for its arrangement matrix $M$, that is, the columns of $M$ form a orthonormal basis of $\Span M$.
By \cref{res:same_column_span}, every polytope is just one invertible linear transformation away from being normalized.
Also, the way in which we defined eigenpolytopes in \cref{def:eigenpolytope} ensures that they are normalized.
We have the following:
\begin{theorem}
\label{res:realizing_symmetries}
A normalized spectral polytope realizes all the combinatorial symmetries of its edge-graph as Euclidean symmetries.
More precisely, we have the following: if $\sigma\in\Aut(G_P)\subseteq\Sym(V)$ is a combinatorial symmetry of the edge-graph, and $\Pi_\sigma\in\Perm(\RR^n)$ is the associated permutation matrix, then
$$T_\sigma:=M^{\Tsymb} \Pi_\sigma M \;\in\; \Ortho(\RR^d)$$
is a Euclidean symmetry of $P$ that permutes its vertices exactly as $\sigma$ permutes the vertices of $G_P$ (that is $T_\sigma v_i = v_{\sigma(i)}$).
\end{theorem}
We should mention that this result is more generally true for all eigenpolytopes of a graph and was already proven in \cite[Theorem 2.2]{godsil1978graphs}.
Another proof in the context of spectral graph realizations can be found \cite[Corollary 2.9]{winter2020symmetric}.
As a general consequence we have that if $G$ is vertex-transitive, then so are all its eigenpolytopes.
This does not extend to other kinds of symmetries, \shortStyle{e.g.}\ edge-transitivity.
However, if $G$ is $\theta$-spectral, then the symmetries of $G$ and its~$\theta$-eigenpolytope translate into each other one-to-one.
For example, $G$ is edge-transitive if and only if $P_G(\theta)$ is.
We make use of this in later sections.
\fi
\section{The Theorem of Izmestiev}
\label{sec:izmestiev}
We introduce our most powerful tool so far for proving that certain polytopes are $\theta_2$-spectral.
For this, we make use of a more general theorem by Izmestiev \cite{izmestiev2010colin}, first proven in the context of the Colin de Verdière graph invariant.
The proof of this theorem requires techniques from convex geometry, most notably mixed volumes, which we do not address here.
We need to introduce some terminology.
As before, let $P\subset\RR^d$ denote a full-dimensional polytope of dimension $d\ge 2$, with~edge-graph $G_P=(V,E),V=\{1,...,n\}$ and vertices $v_i\in \F_0(P),i\in V$.
Recall, that the \emph{polar dual} of $P$ is the polytope
$$P^\circ:=\{x\in\RR^d\mid \<x,v_i\>\le 1\text{ for all $i\in V$}\}.$$
We can replace the $1$-s in this definition by variables $c=(c_1,...,c_n)$ to obtain
$$P^\circ(c):=\{x\in\RR^d\mid\<x,v_i\>\le c_i\text{ for all $i\in V$}\}.$$
The usual polar dual is then $P^\circ=P^\circ(1,...,1)$.
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{img/P_dual_2}
\caption{Visualization of $P^\circ(c)$ for different values of $c\in\RR^n$.}
\end{figure}
In the following, $\vol(\free)$ denotes the volume of convex sets in $\RR^d$ (\shortStyle{w.r.t.}\ the usual Lebesgue measure).
Note that the function $\vol(P^\circ(c))$ is differentiable in $c$, and so we can compute partial derivatives \shortStyle{w.r.t.}\ the components of $c$.
\begin{theorem}[Izmestiev \cite{izmestiev2010colin}, Theorem 2.4]\label{res:izmestiev}
Define a matrix $X\in\RR^{n\x n}$ with~compo\-nents
$$X_{ij}:=-\frac{\partial^2 \vol(P^\circ(c))}{\partial c_i\partial c_j}\Big|_{c=(1,...,1)}.$$
The matrix $X$ has the following properties:
\begin{myenumerate}
%
\item $X_{ij}< 0$ whenever $ij\in E(G_P)$,
\item $X_{ij}=0$ whenever $ij\not\in E(G_P)$,
\item $X\Psi=0$ (where $\Psi$ is the arrangement matrix of $P$),
\item $X$ has a unique negative eigenvalue, and this eigenvalue is simple,
\item $\dim\ker X=d$.
\end{myenumerate}
\end{theorem}
One can view the matrix $X$ as some kind of adjacency matrix of~a vertex- and edge-weighted version of $G_P$.
Part $(iii)$ states that the vertices $v_i$ satisfy a weighted form of the balancing condition \eqref{eq:balanced} with eigenvalue zero.
Since $\rank \Psi=d$, part $(v)$ states that $\Span \Psi$ is already the whole 0-eigenspace.
And part $(iv)$ states that zero is the second smallest eigenvalue of $X$.
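To make \cref{res:izmestiev} concrete, here is a small numerical experiment of our own (an illustration, not part of \cite{izmestiev2010colin}): for the square $P=\conv\{(\pm1,\pm1)\}$, the combinatorics of $P^\circ(c)$ is constant near $c=(1,...,1)$, so its volume can be computed by intersecting consecutive constraint lines and applying the shoelace formula; the matrix $X$, obtained by finite differences, then exhibits exactly the listed properties.

```python
import numpy as np

# Vertices of the square P, in cyclic order (edge-graph C4: 01, 12, 23, 30).
V = np.array([[1., 1.], [-1., 1.], [-1., -1.], [1., -1.]])
n = len(V)

def dual_volume(c):
    # For c near (1,...,1) the vertices of the dual P°(c) are the
    # intersections of consecutive constraint lines <v_i, x> = c_i.
    pts = np.array([np.linalg.solve(V[[i, (i+1) % n]], c[[i, (i+1) % n]])
                    for i in range(n)])
    x, y = pts[:, 0], pts[:, 1]
    return 0.5*abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# X_ij = -d^2 vol(P°(c)) / dc_i dc_j at c = (1,...,1), central differences.
h, c0 = 1e-3, np.ones(n)
X = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = np.eye(n)[i]*h, np.eye(n)[j]*h
        X[i, j] = -(dual_volume(c0+ei+ej) - dual_volume(c0+ei-ej)
                    - dual_volume(c0-ei+ej) + dual_volume(c0-ei-ej))/(4*h*h)

print(np.round(X, 4))        # approx. -0.5 on edges of C4, approx. 0 otherwise
print(np.abs(X @ V).max())   # approx. 0: property (iii), X.Psi = 0
print(np.linalg.eigvalsh(X)) # one simple negative eigenvalue; kernel of dim 2
```

Here $X_{ii}$ and $X_{ij}$ are constant on vertices and edges respectively, so \cref{res:implies_spectral} below applies, with $\theta_2=\alpha/\beta=0$ matching the second-largest eigenvalue of $C_4$.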
\begin{theorem}\label{res:implies_spectral}
Let $X\in\RR^{n\x n}$ be the matrix defined in \cref{res:izmestiev}.
If we have
\begin{myenumerate}
\item $X_{ii}$ is independent of $i\in V(G_P)$, and
\item $X_{ij}$ is independent of $ij\in E(G_P)$,
\end{myenumerate}
then $P$ is $\theta_2$-spectral.
\begin{proof}
By assumption there are $\alpha,\beta\in\RR$ so that $X_{ii}=\alpha$ for all vertices~$i\in$ $V(G_P)$ and $X_{ij}=\beta$ for all edges $ij\in E(G_P)$; note that $\beta<0$ by \cref{res:izmestiev} $(i)$.
We can write this as
$$X=\alpha \Id + \beta A\quad\implies\quad (*)\;A=\frac\alpha\beta \Id+\frac1\beta X,$$
where $A$ is the adjacency matrix of $G_P$.
By \cref{res:izmestiev} $(iv)$ and $(v)$, the matrix $X$ has second smallest eigenvalue zero of multiplicity $d$.
By \cref{res:izmestiev} $(iii)$, the columns of $\Psi$ are the corresponding eigenvectors.
Since $\rank \Psi=d$ we find that these are all the eigenvectors and $\Span \Psi$ is the 0-eigenspace of $X$.
By $(*)$ the eigenvalues of $A$ are the eigenvalues of $X$, but scaled by $1/\beta$ and shifted by $\alpha/\beta$. Since $1/\beta <0$, the second-\emph{smallest} eigenvalue of $X$ gets mapped onto the second-\emph{largest} eigenvalue of $A$.
Therefore, $A$ (and also $G_P$) has second-largest eigenvalue $\theta_2=\alpha/\beta$ of multiplicity $d$, and $\Span \Psi$ is the corresponding eigenspace.
By definition, $P$ is then the $\theta_2$-eigenpolytope of $G_P$ and is therefore~$\theta_2$-spectral.
\end{proof}
\end{theorem}
It is unclear whether \cref{res:implies_spectral} already characterizes $\theta_2$-spectral polytopes, or even spectral polytopes in general (see also \cref{q:characterization}).
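To make the notion of a $\theta_2$-spectral polytope concrete, here is a small numerical sketch (added for illustration, assuming Python with NumPy; it is not part of the original argument). The edge-graph of the 3-cube has second-largest adjacency eigenvalue $\theta_2=1$ of multiplicity $3$, and the corresponding eigenvectors realize the graph as the skeleton of a cube:

```python
import itertools
import numpy as np

# Edge-graph of the 3-cube: vertices are 0/1-vectors, edges join
# vectors at Hamming distance 1.
verts = list(itertools.product([0, 1], repeat=3))
n = len(verts)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if sum(a != b for a, b in zip(verts[i], verts[j])) == 1:
            A[i, j] = 1

eigvals, eigvecs = np.linalg.eigh(A)            # eigenvalues in ascending order
theta2 = np.unique(np.round(eigvals, 8))[-2]    # second-largest eigenvalue
mask = np.isclose(eigvals, theta2)
M = eigvecs[:, mask]                            # arrangement matrix: one row per vertex

print(theta2, mask.sum())                       # 1.0 with multiplicity 3
# The rows of M satisfy the balancing condition A M = theta2 * M ...
assert np.allclose(A @ M, theta2 * M)
# ... and all rows have the same norm: a vertex-transitive realization.
assert np.allclose(np.linalg.norm(M, axis=1), np.linalg.norm(M[0]))
```

The eight rows of $M$ are the vertex coordinates of a (rotated, rescaled) cube, in line with the fact that the cube, being vertex- and edge-transitive, is $\theta_2$-spectral.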
\section{Edge-transitive polytopes}
\label{sec:edge_transitive}
We apply \cref{res:implies_spectral} to edge-transitive polytopes, that is, to polytopes for~which the Euclidean symmetry group $\Aut(P)\subset\Ortho(\RR^d)$ acts transitively on the edge set $\F_1(P)$.
No classification of edge-transitive polytopes is known.
Some \mbox{edge-transitive} polytopes are listed in \cref{sec:classification}.
Despite the name of this section, we are actually going to address polytopes that are simultaneously vertex- and edge-transitive.
This is not a huge deviation from the title: as shown in \cite{winter2020polytopes}, edge-transitive polytopes in dimension $d\ge 4$ are always also vertex-transitive, and the exceptions in lower dimensions are few (a continuous family of $2n$-gons for each $n\ge 2$, and two exceptional polyhedra).
\cref{res:implies_spectral} can be directly applied to simultaneously vertex- and edge-transitive polytopes, and so we have
\begin{corollary}
\label{res:vertex_edge_transitive_cor}
A simultaneously vertex- and edge-transitive polytope is $\theta_2$-spectral.
\end{corollary}
We collect all the notable consequences in the following theorem:
\begin{theorem}\label{res:edge_vertex_transitive}
If $P\subset\RR^d$ is simultaneously vertex- and edge-transitive, then
\begin{myenumerate}
\item $\Aut(P)\subset\Ortho(\RR^d)$ is irreducible as a matrix group.
\item $P$ is uniquely determined by its edge-graph up to scale and orientation.\footnote{This shows that $P$ is \emph{perfect}, \shortStyle{i.e.,}\ is the unique maximally symmetric realization of its combinatorial type. See \cite{gevay2002perfect} for an introduction to perfect polytopes.}
\item $P$ realizes all the symmetries of its edge-graph.
\item if $P$ has edge length $\ell$ and circumradius $r$, then
%
\begin{equation}
\label{eq:circumradius}
\frac{\ell}r = \sqrt{\frac{2\lambda_2}{\deg(G_P)}} = \sqrt{2\Big(1-\frac{\theta_2}{\deg(G_P)}\Big)},
\end{equation}
%
where $\deg(G_P)$ is the vertex degree of $G_P$, and $\lambda_2=\deg(G_P)-\theta_2$ denotes its second smallest Laplacian eigenvalue.
\item if $\alpha$ is the dihedral angle of the polar dual $P^\circ$, then
%
\begin{equation}
\label{eq:dihedral_angle}
\cos(\alpha)=-\frac{\theta_2}{\deg(G_P)}.
\end{equation}
\end{myenumerate}
\begin{proof}
The complete proof of $(i)$ and $(ii)$ has to be postponed until \cref{sec:rigidity} (see \cref{res:edge_transitive_rigid}).
Concerning $(ii)$, it already follows from \cref{res:vertex_edge_transitive_cor} and \cref{res:reconstruction} that $P$ is determined by its edge-graph up to \emph{invertible linear transformations}, but not necessarily only up to scale and orientation.
Part $(iii)$ follows from \cref{res:symmetries}.
Part $(iv)$ and $(v)$ were proven (in a more general setting) in \cite[Proposition 4.3]{winter2020symmetric}.
This applies literally to $(iv)$.
For $(v)$, note the following: if $\sigma_i\in\F_{d-1}(P^\circ)$ is the facet of the polar dual $P^\circ$ that corresponds to the vertex $v_i\in\F_0(P)$, then the dihedral angle between $\sigma_i$ and $\sigma_j$ is $\pi-\angle(v_i,v_j)$.
The latter expression was proven in \cite{winter2020symmetric} to agree with \eqref{eq:dihedral_angle}.
\end{proof}
\end{theorem}
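Formulas \eqref{eq:circumradius} and \eqref{eq:dihedral_angle} can be checked numerically on the 3-cube, whose edge-graph has $\theta_2=1$ and vertex degree $3$ (an added Python sketch assuming NumPy; not part of the original text):

```python
import itertools
import numpy as np

theta2, deg = 1.0, 3.0   # second-largest eigenvalue / degree of the cube graph

# Cube with vertices (+-1, +-1, +-1): edge length 2, circumradius sqrt(3).
verts = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
ell = 2.0
r = np.linalg.norm(verts[0])
lhs = ell / r
rhs = np.sqrt(2 * (1 - theta2 / deg))
print(lhs, rhs)                        # both ~ 1.1547

# The polar dual of the cube is the octahedron conv{+-e_1, +-e_2, +-e_3}.
# Two adjacent faces have outward normals (1,1,1)/sqrt(3) and (1,1,-1)/sqrt(3);
# the dihedral angle is pi minus the angle between these normals.
n1 = np.array([1, 1, 1]) / np.sqrt(3)
n2 = np.array([1, 1, -1]) / np.sqrt(3)
alpha = np.pi - np.arccos(n1 @ n2)
print(np.cos(alpha), -theta2 / deg)    # both ~ -1/3
```

Indeed $\cos(\alpha)=-1/3$ is the well-known dihedral angle $\approx 109.47^\circ$ of the regular octahedron.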
It is worth emphasizing that large parts of \cref{res:edge_vertex_transitive} do not apply to polytopes of a weaker symmetry, as \shortStyle{e.g.}\ vertex-transitive polytopes.
Prisms are counterexamples to both $(i)$ and $(ii)$.
There are vertex-transitive neighborly polytopes (other than simplices) and they are counterexamples to $(ii)$ and $(iii)$.
\iffalse
A simultaneously vertex- and edge-transitive polytope can be neither deformed nor projected while keeping its edge-graph and symmetry.
The fact that $P$ is perfect can already be proven
from classical rigidity results and thus can be formulated for a much larger class of polytopes:
\begin{theorem}[Alexandrov, \cite{...}]
\label{res:alexandrov_rigidity}
A polyope is uniquely determined (up to orientation) by its combinatorial type and the shape of its facets.
\end{theorem}
\begin{theorem}
An inscribed polytope with all edges of length $\ell$ is uniquely determined (up to orientation) by its combinatorial type.
\begin{proof}
The proof is by induction.
A 2-dimensional inscribed polytope with all edges of length $\ell$ is clearly a regular polygon, hence of unique shape.
Now let $P$ be a $d$-dimensional inscribed polytope with all edges of length $\ell$.
Then also each facet of $P$ is inscribed with this edge-length.
Also, the combinatorial type of each facet is determined by the combinatorial type of $P$.
By induction assumption, the shape of each facet is hence uniquely determined.
By \cref{res:alexandrov_rigidity} this uniquely determines the shape of $P$.
\end{proof}
\end{theorem}
\begin{corollary}
A simultaneously vertex- and edge-transitive polytope is perfect.
\end{corollary}
\fi
\begin{remark}
\label{rem:edge_not_vertex}
There are two edge-transitive polyhedra that are not vertex-transitive: the \emph{rhombic dodecahedron} and the \emph{rhombic triacontahedron} (see also \cref{fig:edge_transitive}).~Only the former is $\theta_2$-spectral, and the latter is not spectral for any eigenvalue (this was already mentioned in \cite{licata1986surprising}).
Since the rhombic dodecahedron is not vertex-transitive, nothing of this follows from \cref{res:vertex_edge_transitive_cor}.
However, this polytope satisfies the conditions of \cref{res:implies_spectral}, which seems purely accidental.
It is the only known spectral polytope that is not vertex-transitive.
\end{remark}
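That the rhombic dodecahedron is $\theta_2$-spectral can be verified directly (an added Python sketch assuming NumPy; not part of the original text). Its edge-graph is the vertex--facet incidence graph of the cube, and its second-largest eigenvalue is $2$ with multiplicity $3$, the dimension of the polytope:

```python
import itertools
import numpy as np

# Vertices of the rhombic dodecahedron: the 8 cube vertices (+-1,+-1,+-1)
# together with the 6 points +-2 e_k.
cube = [np.array(v, dtype=float) for v in itertools.product([-1, 1], repeat=3)]
centers = [2 * s * np.eye(3)[k] for k in range(3) for s in (-1, 1)]
verts = cube + centers
n = len(verts)                          # 14 vertices

# Each edge joins a cube vertex v to a point f with <v, f> = 2.
A = np.zeros((n, n))
for i in range(8):
    for j in range(8, n):
        if np.isclose(verts[i] @ verts[j], 2):
            A[i, j] = A[j, i] = 1

eigvals = np.linalg.eigvalsh(A)
theta2 = np.unique(np.round(eigvals, 8))[-2]
mult = np.isclose(eigvals, theta2).sum()
print(theta2, mult)                     # 2.0 with multiplicity 3
```

Note that the graph has vertices of degree $3$ and of degree $4$, reflecting that the polytope is not vertex-transitive.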
\subsection{Rigidity and irreducibility}
\label{sec:rigidity}
The goal of this section is to prove the missing part of \cref{res:edge_vertex_transitive}:
\begin{theorem}
\label{res:edge_transitive_rigid}
If $P\subset\RR^d$ is simultaneously vertex- and edge-transitive, then
\begin{myenumerate}
\item $\Aut(P)\subset\Ortho(\RR^d)$ is irreducible as a matrix group, and
\item $P$ is determined by its edge-graph up to scale and orientation.
\end{myenumerate}
\end{theorem}
To prove \cref{res:edge_transitive_rigid}, we make use of \emph{Cauchy's rigidity theorem} for polyhedra (with its beautiful proof listed in \cite[Section 12]{aigner2010proofs}).
It states that every~polyhedron is uniquely determined by its combinatorial type and the shape of its faces.
This was generalized by Alexandrov to general dimensions $d\ge 3$ (proven \shortStyle{e.g.}\ in \cite[Theorem 27.2]{pak2010lectures}):
\begin{theorem}[Alexandrov]
\label{res:alexandrov}
Let $P_1,P_2\subset\RR^d, d\ge 3$ be two polytopes, so that
\begin{myenumerate}
\item $P_1$ and $P_2$ are combinatorially equivalent via a face lattice~isomorphism~$\phi:\F(P_1)\to\F(P_2)$, and
\item each facet $\sigma\in\F_{d-1}(P_1)$ is congruent to the facet $\phi(\sigma)\in\F_{d-1}(P_2)$.
\end{myenumerate}
Then $P_1$ and $P_2$ are congruent, \shortStyle{i.e.,}\ are the same up to orientation.
\end{theorem}
\begin{proposition}
\label{res:regular_rigid}
Let $P_1,P_2\subset\RR^d$ be two combinatorially equivalent polytopes,~each of which has
\begin{myenumerate}
\item all vertices on a common sphere (\shortStyle{i.e.,}\ is inscribed), and
\item all edges of the same length $\ell_i$.
\end{myenumerate}
Then $P_1$ and $P_2$ are the same up to scale and orientation.
\begin{proof}[Proof.\!\!]\footnote{This proof was proposed by the user \emph{Fedor Petrov} on MathOverflow \cite{petrovMO}.}
W.l.o.g.\ assume that $P_1$ and $P_2$ have the same circumradius, otherwise~re\-scale $P_2$.
It then suffices to show that $P_1$ and $P_2$ are the same up to orientation.
We proceed with induction by the dimension $d$.
The induction base is given by $d=2$, which is trivial, since any two inscribed polygons with constant edge length are regular and thus completely determined (up to scale and orientation) by their number of vertices.
Suppose now that $P_1$ and $P_2$ are combinatorially equivalent polytopes of dimension $d\ge 3$ that satisfy $(i)$ and $(ii)$.
Let $\phi$ be the face lattice isomorphism between them.
Let $\sigma\in\F_{d-1}(P_1)$ be a facet of $P_1$, and $\phi(\sigma)$ the corresponding facet in $P_2$.
In particular, $\sigma$ and $\phi(\sigma)$ are combinatorially equivalent.
Furthermore, both $\sigma$ and $\phi(\sigma)$ are of dimension $d-1$ and satisfy $(i)$ and $(ii)$. This is obvious for $(ii)$, and for $(i)$ recall that facets of inscribed polytopes are also inscribed.
By induction hypothesis, $\sigma$ and $\phi(\sigma)$ are then congruent.
Since this holds for all facets $\sigma\in\F_{d-1}(P_1)$, \cref{res:alexandrov} tells us that $P_1$ and $P_2$ are congruent, that is, the same up to orientation.
\end{proof}
\end{proposition}
We can now prove the main theorem of this section:
\begin{proof}[Proof of \cref{res:edge_transitive_rigid}]
By \cref{res:edge_vertex_transitive} the combinatorial type of $P$ is determined by its edge-graph.
By vertex-transitivity, all vertices are on a sphere.
By edge-transitivity, all edges are of the same length.
We can then apply \cref{res:regular_rigid} to obtain that $P$ is unique up to scale and orientation.
This proves $(ii)$.
Suppose now that $\Aut(P)$ is not irreducible, but that $\RR^d$ decomposes as $\RR^d=W_1\oplus W_2$ into non-trivial orthogonal $\Aut(P)$-invariant subspaces.
Let $T_\alpha\in\GL(\RR^d)$ be the linear map that acts as identity on $W_1$, but as $\alpha\Id$ on $W_2$ for some $\alpha >1$.
Then $T_\alpha P$ is a non-orthogonal linear transformation of $P$ (in particular, combinatorially equivalent), on which $\Aut(P)$ still acts vertex- and edge-transitively.
By $(ii)$, this cannot be. Hence $\Aut(P)$ must be irreducible, which proves $(i)$.
\end{proof}
\subsection{A word on classification}
\label{sec:classification}
Despite the simple appearance of the definition of an edge-transitive polytope,
no classification has been obtained so far.
There exists a classification of the 3-dimensional edge-transitive polyhedra: besides the Platonic solids, these are the ones shown in \cref{fig:edge_transitive} (nine in total).
\begin{figure}[h!]
\includegraphics[width=0.75\textwidth]{img/edge_transitive_polyhedra}
\caption{From left to right, these are: the cuboctahedron, the icosido\-decahedron, the rhombic dodecahedron, and the rhombic triacontahedron.}
\label{fig:edge_transitive}
\end{figure}
There are many known edge-transitive polytopes in dimension $d\ge 4$ (so we~are not talking about a class as restricted as the regular polytopes).
There are 15 known edge-transitive 4-polytopes (and an infinite family of duoprisms\footnote{The $(n,m)$-duoprism is the cartesian product of a regular $n$-gon and a regular $m$-gon. Those are edge-transitive if and only if $n=m$. Technically, the 4-cube is the $(4,4)$-duoprism but is usually not counted as such, because of its exceptionally large symmetry group.}), but already here, no classification is known.
It is known that the number of irreducible\footnote{Not being the cartesian product of lower-dimensional edge-transitive polytopes.} edge-transitive polytopes grows at least linearly with the dimension.
For example, there are $\lfloor d/2\rfloor$ \emph{hyper-simplices} in dimension $d$.
These are edge-transitive (even distance-transitive, see \cref{sec:distance_transitive}).
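For instance, the edge-graph of the hyper-simplex $\Delta(4,2)$ is the Johnson graph $J(5,2)$, and one can check numerically (an added Python sketch assuming NumPy; not part of the original text) that its second-largest eigenvalue has multiplicity $4$, the dimension of $\Delta(4,2)$:

```python
import itertools
import numpy as np

# Edge-graph of the hyper-simplex Delta(4,2): vertices are the 0/1-vectors
# of length 5 with exactly two 1-entries; two vertices are adjacent iff they
# differ in exactly two coordinates (this is the Johnson graph J(5,2)).
verts = [v for v in itertools.product([0, 1], repeat=5) if sum(v) == 2]
n = len(verts)                           # 10 vertices
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if sum(a != b for a, b in zip(verts[i], verts[j])) == 2:
            A[i, j] = 1

eigvals = np.linalg.eigvalsh(A)
theta2 = np.unique(np.round(eigvals, 8))[-2]
mult = np.isclose(eigvals, theta2).sum()
print(theta2, mult)                      # 1.0 with multiplicity 4
```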
It is the hope of the author that the classification of the edge-transitive polytopes can be obtained using their spectral properties.
Their classification can now be stated purely as a problem in spectral graph theory:
the classification of the edge-transitive polytopes (in dimension $d\ge 4$) is equivalent to the classification of $\theta_2$-spectral edge-transitive graphs, and, by \cref{res:spectral_2}, we have a completely graph-theoretic characterization of spectral graphs.
\begin{theorem}
\label{res:edge_transitive_spectral_graph}
Let $G$ be an edge-transitive graph. If $G$ is $\theta_k$-spectral, then
\begin{myenumerate}
\item $k=2$, and
\item if $G$ is \ul{not} vertex-transitive, then $G$ is the edge-graph of the rhombic dodecahedron (see \cref{fig:edge_transitive}).
\end{myenumerate}
\begin{proof}
We first prove $(ii)$.
As shown in \cite{winter2020polytopes} all edge-transitive polytopes in dimen\-sion $d\ge 4$ are vertex-transitive.
If $G$ is edge-transitive, not vertex-transitive and $\theta_k$-spectral, then its $\theta_k$-eigenpolytope is also edge-transitive but not vertex-transitive, hence of dimension $d\le 3$.
One checks that the 2-dimensional spectral polytopes are regular polygons, hence vertex-transitive.
The remaining polytopes are polyhedra, and we mentioned in \cref{rem:edge_not_vertex} that among these, only the rhombic dodecahedron is spectral, in fact $\theta_2$-spectral.
This proves $(ii)$.
If, on the other hand, $G$ is both vertex- and edge-transitive, then so is its eigenpolytope.
By \cref{res:vertex_edge_transitive_cor} this is a $\theta_2$-eigenpolytope.
Together with part $(ii)$, we find $k=2$~in~all cases, which proves $(i)$.
\end{proof}
\end{theorem}
\subsection{Arc- and half-transitive polytopes}
\label{sec:arc_transitive}
In a graph or polytope, an \emph{arc} is~an incident vertex-edge-pair.
A graph or polytope is called \emph{arc-transitive} if its symmetry group acts transitively on the arcs.
Being arc-transitive implies being both vertex-transitive and edge-transitive.
In addition, in an arc-transitive graph every edge can be mapped not only onto every other edge, but also onto itself with flipped orientation.
There exist graphs that are simul\-taneously vertex- and edge-transitive, but not arc-transitive.
Those are called \emph{half-transitive} graphs, and are comparatively rare.
The smallest one has $27$ vertices and is known as the \emph{Holt graph} (see \cite{bouwer1970vertex,holt1981graph}).
For polytopes, on the other hand, it is unknown whether there exists a distinction between being arc-transitive and being simultaneously vertex- and edge-transitive.
No \emph{half-transitive polytope} is known.
Because of \cref{res:edge_vertex_transitive} $(i)$, we know that the edge-graph of a half-transitive polytope must itself be half-transitive.
Since such graphs are rare, the existence of half-transitive polytopes seems unlikely.
\begin{example}
The Holt graph is not the edge-graph of a half-transitive polytope: the Holt graph is of degree four, and its second-largest eigenvalue is of multiplicity six, giving rise to a 6-dimensional $\theta_2$-eigenpolytope.
But a 6-dimensional polytope must have an edge-graph of degree at least six, and so the Holt graph is not spectral.
\end{example}
The lack of examples of half-transitive polytopes means that all known edge-transitive polytopes in dimension $d\ge 4$ are in fact arc-transitive.
Likewise, a~classification of arc-transitive polytopes is not known.
\iffalse
\subsection{Half-transitive polytopes}
\label{sec:half_transitive}
Arc-transitivity implies vertex- and edge-transi\-ti\-vity, but for graphs, the converse is not true.
A graph (and we shall adopt this terminology for polytopes) is called \emph{half-transitive}, if it is both vertex- and edge-transitive, but not arc-transitive.
In a half-transitive graph, each edge can be mapped onto any other edge, but not onto itself with flipped orientation.
The existence of half-transitive graphs was proven first by Holt \cite{holt1981graph}.
Such graphs are relatively rare. The smallest half-transitive graph is now known as the Holt graph and has 27 vertices.
It is not known whether there are any \emph{half-transitive polytopes}.
However, by \cref{res:...} we can now conclude that they are at least as rare as the half-transitive graphs.
\begin{corollary}
The edge-graph of half-transitive polytope must be half-transitive.
\end{corollary}
It is known that the Holt graph is not the edge-graph of a half-transitive polytope.
\fi
\subsection{Distance-transitive polytopes}
\label{sec:distance_transitive}
Our previous results about edge-transitive polytopes already allow for a complete classification of a particular subclass, namely, the \emph{distance-transitive polytopes}, thereby also providing a list of examples of edge-transitive polytopes in higher dimensions.
The distance-transitive symmetry is usually only considered for graphs, and the distance-transitive graphs form a subclass of the distance-regular graphs.
The usual reference for these is the classic monograph by Brouwer, Cohen and Neumaier \cite{brouwer1989distance}.
For any two vertices $i,j\in V$ of a graph $G$, let $\dist(i,j)$ denote the graph-theoretic \emph{distance} between those vertices, that is, the length of the shortest path connecting them.
The \emph{diameter} $\diam(G)$ of $G$ is the largest distance between any two vertices in $G$.
\begin{definition}
A graph is called \emph{distance-transitive} if $\Aut(G)$ acts transitively on each of the sets
$$D_{\delta}:=\{(i,j)\in V\times V \mid \dist(i,j)=\delta\},\quad\text{for all $\delta\in\{0,...,\diam(G)\}$}.$$
Analogously, a polytope $P\subset\RR^d$ is said to be \emph{distance-transitive}, if its Euclidean symmetry group $\Aut(P)$ acts transitively on each of the sets
$$D_{\delta}:=\{(v_i,v_j)\in \F_0(P)\times \F_0(P) \mid \dist(i,j)=\delta\},\quad\text{for all $\delta\in\{0,...,\diam(G_P)\}$}.$$
Note that the distance between the vertices is still measured along the edge-graph rather than via the Euclidean distance.
\end{definition}
Being arc-transitive is equivalent to being transitive on the set $D_1$.
Hence, distance-transitivity implies arc-transitivity, thus edge-transitivity.
By our considerations in the previous sections, we know that the \mbox{classification} of distance-transitive polytopes is equivalent~to the classification of the $\theta_2$-spectral distance-transitive graphs.
Those were classified by Godsil (see \cref{res:spectral_distance_regular_graphs}).
In the following theorem we translated each such $\theta_2$-spectral distance-transitive graph into its respective eigenpolytope.
This gives a complete classification of the distance-transitive polytopes.
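As a small sanity check (an added Python sketch assuming NumPy; not part of the original classification argument), the $\theta_2$-eigenpolytope of the octahedron graph $K_{2,2,2}$ indeed recovers the cross-polytope:

```python
import numpy as np

# Octahedron edge-graph K_{2,2,2}: 6 vertices in 3 antipodal pairs (i, i+3);
# every vertex is adjacent to all others except its partner.
n = 6
A = np.ones((n, n)) - np.eye(n)
for i in range(3):
    A[i, i + 3] = A[i + 3, i] = 0

eigvals, eigvecs = np.linalg.eigh(A)
theta2 = np.unique(np.round(eigvals, 8))[-2]
mask = np.isclose(eigvals, theta2)
M = eigvecs[:, mask]                  # theta2-realization: one row per vertex
print(theta2, mask.sum())             # 0.0 with multiplicity 3

# The six rows form three antipodal pairs of equal norm: a cross-polytope.
for i in range(3):
    assert np.allclose(M[i], -M[i + 3])
assert np.allclose(np.linalg.norm(M, axis=1), np.linalg.norm(M[0]))
```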
\begin{theorem}\label{res:distance_transitive_classification}
If $P\subset\RR^d$ is distance-transitive, then it is one of the following:
\begin{enumerate}[label=$(\text{\roman*}\,)$]
\item a regular polygon $(d=2)$,
\item the regular dodecahedron $(d=3)$,
\item the regular icosahedron $(d=3)$,
\item a cross-polytope, that is, $\conv\{\pm e_1,...,\pm e_d\}$, where $\{e_1,...,e_d\}\subset\RR^d$ is the standard basis of $\RR^d$,
\item a hyper-simplex $\Delta(d,k)$, that is, the convex hull of all vectors $v\in\{0,1\}^{d+1}$ with exactly $k$ 1-entries,
\item a cartesian power of a regular simplex (also known as the Hamming polytopes; this includes regular simplices and hypercubes),
\item a demi-cube, that is, the convex hull of all vectors $v\in\{-1,1\}^d$ with~an~even number of 1-entries,
\item the $2_{21}$-polytope, also called Gosset-polytope $(d=6)$,
\item the $3_{21}$-polytope, also called Schläfli-polytope $(d=7)$.
\end{enumerate}
The ordering of the polytopes in this list agrees with the ordering of graphs in the list in \cref{res:spectral_distance_regular_graphs}.
The latter two polytopes were first constructed by Gosset in \cite{gosset1900regular}.
\end{theorem}
We observe that the list in \cref{res:distance_transitive_classification} contains many polytopes that are not regular, and contains all regular polytopes excluding the 4-dimensional exceptions, the 24-cell, 120-cell and 600-cell.
The distance-transitive polytopes thus form a distinct class of remarkably symmetric polytopes which is not immediately related to the class of regular polytopes.
Another noteworthy observation is that all the distance-transitive polytopes are \emph{Wythoffian poly\-topes}, that is, they are orbit polytopes of finite reflection groups.
\Cref{fig:distance_transitive_Coxeter} shows the Coxeter-Dynkin diagrams of these polytopes.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/distance_transitive_Coxeter}
\caption{Coxeter-Dynkin diagrams of distance-transitive polytopes.}
\label{fig:distance_transitive_Coxeter}
\end{figure}
\iffalse
Finally, we can show that \cref{res:distance_transitive_classification} is the final word on distance-transitive (actually distance-regular) graphs that are spectral: we can show that such a graph cannot be $\theta_k$-spectral for any $k\not=2$.
\begin{theorem}
\label{res:distance_regular_theta_k}
If $G$ is distance-transitive (actually distance-regular) and $\theta_k$-spectral, then $k=2$.
\end{theorem}
The proof uses some special properties of distance-transitive (actually distance-regular graphs) that we are citing in place.
In general, we recommend the chapters 11 and 13 of \cite{godsil1993algebraic}, the latter of which contains all the tools we need.
\begin{proof}[Proof of \cref{res:distance_regular_theta_k}]
The $\theta_1$-eigenpolytope of any graphs is a single point.
Since we generally assumed that our polytopes are non-trivial, \shortStyle{i.e.,}\ of dimension $d\ge 2$, we from now on assume $k\ge 2$.
Let~$v_i\in$ $\RR^d$ be the coordinates assigned to the vertices $i\in V$ of $G$ as given~in~\cref{def:eigenpolytope}.
We know that the arrangement of the points $v_i$ in $\RR^d$ is as symmetric as the underlying graph, which in the case of distance-transitive graphs means that the value of $\<v_i,v_j\>$ depends only on the distance $\dist(i,j)$ (more generally, the same holds for distance-regular graphs).
We can then define the so-called \emph{cosine sequence} $u_\delta:=\<v_i,v_j\>$ whenever $\dist(i,j)$ $=\delta$.
A well-known result for distance-transitive (actually distance-regular) graphs is that the sequence $u_0,...,u_{\diam(G)}$ has \emph{exactly} $k-1$ sign changes (see \cite{godsil1993algebraic}, Chapter 13, Lemma 2.1).
It remains to show that if the $v_i$ embed $G$ as the skeleton of a polytope, then there can be at most one sign change, hence $k=2$.
\begin{center}
\includegraphics[width=0.4\textwidth]{img/5_gon_cut}
\end{center}
This property is occasionally known as the \emph{tightness} of the skeleton of a polytope.
It says that the skeleton cannot be cut into more than two pieces by any hyperplane.
We give a formal proof below.
Suppose that there are at least two sign changes in $u_\delta$. This means there exist $\delta>\delta'>0$ with $u_0u_{\delta'}<0$ and $u_{\delta'}u_{\delta}<0$, or equivalently $u_0 > u_\delta > 0 > u_{\delta'}$.
Suppose that the $v_i$ embed $G$ as the skeleton of a polytope $P$.
Consider the linear optimization problem maximizing the linear functional $\<v_i,\free\>$ over $P$, which clearly attains its maximum in $v_i$.
Let $j\in V$ be a vertex with $\dist(i,j)=\delta$.
Applying the simplex algorithm yields a monotone path in the skeleton from $v_j$ to $v_i$.
But any such path must pass through a vertex $j'\in V$ with $\dist(i,j')=\delta'<\delta$. But we know that $\<v_i,v_{j'}\> = u_{\delta'} < 0 < u_{\delta}$.
So the path cannot have been monotone.
This is a contradiction, showing that there cannot have been two sign changes.
\end{proof}
Whether a similar argument is possible for larger classes of graphs is unknown.
\fi
\iffalse
\hrulefill
Let $G=(V,E)$ be a (simple, undirected) graph on the vertex set $V=\{1,...,n\}$.
From any eigenvalue $\theta\in\Spec(G)$ of $G$ (\shortStyle{i.e.,}\ of its adjacency matrix $A$) of multiplicity $d$, we can construct a so-called \emph{spectral realization} (or \emph{$\theta$-realization}) $v\: V\to\RR^d$ of $G$ as follows.
Let $\{u_1,...,u_d\}\subset\RR^n$ be an orthonormal basis of the $\theta$-eigenspace of $G$ and define the \emph{arrangement matrix} $M:=(u_1,...,u_d)\in\RR^{n\x d}$ in which the $u_i$ are the columns.
This matrix has one row per vertex, and so we can obtain a realization $v$ by assigning $i\in V$ to the $i$-th row of $M$:
$$M :=
\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_1^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\; \\
& \vdots & \\[0.4ex]
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_n^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\;
\end{pmatrix}\mathrlap{\in\RR^{n\times d}.}$$
The realization $V\ni i\mapsto v_i$ is called $\theta$-realization of $G$. The $\theta$-realization is unique up to re-orientation (depending only on the choice of the basis $\{u_1,...,u_d\}$).
\begin{definition}
The \emph{$\theta$-eigenpolytope} $P_G(\theta)$ of a graph $G$ is the convex hull of the $\theta$-realization $v$ of $G$:
$$P_G(\theta):=\conv\{v_i\mid i\in V\}.$$
\end{definition}
\fi
\iffalse
\section{The setting}
\label{sec:setting}
Throughout the paper, $P\subset\RR^d$ denotes a $d$-dimensional polytope of full dimen\-sion, that is, $P$ is the convex hull of finitely many points $v_1,...,v_n\in\RR^d$ (its \emph{vertices}), and $P$ is not contained in a proper affine subspace of $\RR^d$.
In particular, we assume the vertices to be labeled with the numbers $1$ to $n$.
By $\F_\delta(P)$, we denote the $\delta$-dimensional faces of $P$, in particular, its vertex set $\F_0(P)$, and its edge set $\F_1(P)$.
By $G_P=(V,E)$ we denote the \emph{edge-graph} of $P$, with vertex set $V=\{1,...,n\}$, so that $i\in V$ corresponds to the vertex $v_i\in \F_0(P)$, and each $ij\in E$ correspond to the edge $\conv\{v_i,v_j\}\in\F_1(P)$.
The term vertex (resp.\ edge) will be used for the elements of $V$ and $\F_0(P)$ (resp.\ $E$ and $\F_1(P)$) likewise, but no confusion is to be expected as these are in a one-to-one correspondence.
Finally, there are two matrices of major importance: for every polytope $P\subset\RR^d$ with labeled vertices $v_1,...,v_n$, the matrix
$$M :=
\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_1^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\; \\
& \vdots & \\[0.4ex]
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_n^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\;
\end{pmatrix}\mathrlap{\in\RR^{n\times d}}$$
is called \emph{arrangement matrix} of $P$ (the vertices $v_i$ are the rows of $M$).
The matrix $A\in\{0,1\}^{n\x n}$ with
$$A_{ij}=\begin{cases} 1 & \text{if $ij\in E$}\\0&\text{if $ij\not\in E$}\end{cases}$$
is the usual \emph{adjacency matrix} of $G_P$.
\section{Two open questions}
Most of this book talks about the relation between polytopes and their edge-graphs.
It is well known that not every graph is the edge-graph of a polytope.
Since Steinitz, we know that the edge-graphs of polyhedra (3-dimensional polytopes) are exactly the 3-connected planar graphs, and each such graph determines a single combinatorial type of polytopes.
In contrast, no characterization of the edge-graphs of polytopes in dimension $d\ge 4$ is known, and even worse, if a graph is an edge-graph, then in general, it tells us little about the polytope.
That means, there can be many combinatorially distinct polytopes with the same edge-graph, even of the same dimension.
As a light in the dark, we have the result of Blind \& Mani (later refined by Kalai), that every \emph{simple} polytope (that is, a $d$-dimensional polytope whose edge-graph is of degree $d$) is uniquely determined by its edge-graph.
Similar results are known for other classes of polytopes, \shortStyle{e.g.}\ zonotopes.
In this paper, we add another class of polytopes to that list: \emph{spectral polytopes}.
Given the definition of spectral polytope, this is not too surprising (it sounds like an algorithm to compute the polytope from its edge-graph), but the interesting bit is in the graphs that turn out to be spectral and which would not have been expected to be uniquely determined otherwise.
For example, consider highly symmetric polytopes.
Each regular polytope is uniquely determined (among regular polytopes) by its edge-graph. In other words, there are no two regular polytopes with the same edge-graph.
On the other hand, there are many distinct vertex-transitive polytopes with the same edge-graph (\shortStyle{e.g.}\ any two vertex-transitive $n$-vertex neighborly polytopes).
So regularity is too strong to be interesting, but vertex-transitivity is too weak.
In this paper, we show that edge-transitivity also already gives unique reconstruction from the edge-graph, since they are spectral.
\fi
\iffalse
\section{Balanced and spectral polytopes}
\label{sec:balanced}
We now come back to the curious observations, that sometimes, a polytope is the eigenpolytope of its edge-graph.
A polytope with this property will be called \emph{spectral}.
However, for this section, we decided to go a different path, and introduce spectral polytopes from a different perspective: via balanced polytopes.
The overall goal of this section is to investigate how one can find out whether a given polytope already is spectral.
The precursor to spectral polytopes are the balanced polytopes that we introduce now.
\begin{definition}
$P\subset\RR^d$ is called \emph{$\theta$-balanced} (or just \emph{balanced})~for~a~num\-ber $\theta\in\RR$ if
\begin{equation}\label{eq:balanced}
\sum_{\mathclap{j\in N(i)}} v_j=\theta v_i,\quad \text{for all $i\in V$}.
\end{equation}
The value $\theta$ is actually an eigenvalue of $G$ (see \cref{res:eigenvalue} below).
If the~multi\-plicity of this eigenvalue $\theta$ is exactly $d$, then $P$ is called \emph{$\theta$-spectral} (or just \emph{spectral}).
\end{definition}
\begin{example}
All neighborly polytopes are balanced (if the barycenter of their vertices is the origin).
We can see this as follows:
$$\sum_{\mathclap{j\in N(i)}} v_j = -v_i + \underbrace{\sum_{\mathclap{j\in V}} v_j}_{=0} = -v_i.$$
Note that the spectrum of $K_n$ is $\{(-1)^{n-1},(n-1)^1\}$, and so $-1$ is indeed the second largest eigenvalue.
\end{example}
\begin{example}
All regular polytopes are $\theta_2$-spectral.
This was first shown by Licata and Powers \cite{licata1986surprising} for all regular polytopes excluding the 4-dimensional exceptions (the 24-cell, 120-cell and 600-cell).
For the remaining polytopes this was proven in \cite{winter...} (Theorem ??).
Alternatively, the same follows from \cref{res:edge_vertex_transitive}.
Other polytopes that were recognized as spectral by Licata and Powers were the
\emph{rhombic dodecahedron}, ... \msays{TODO}
\end{example}
\begin{example}
Probably all uniform polytopes have a realization that is spectral.
\end{example}
Note that all these polytopes are spectral to the \emph{second-largest eigenvalue $\theta_2$}.
This is plausible for different reasons, but no proof of this fact is currently known.
\begin{observation}
Nodal domains ...
\end{observation}
Being balanced can be interpreted in at least two different ways.
One of which is in terms of rigidity theory (and applies more generally to graph realizations rather than just to skeleta of polytopes).
The other is purely in terms of spectral graph theory.
\begin{remark}
Being $\theta$-balanced can also be interpreted in the context of rigidity theory.
For an edge $ij\in E$, the vector $v_j-v_i$ is pointing from the vertex $v_i$ along the edge $ij$ and can be interpreted as a force pulling $v_i$ along this edge.
At each vertex, the sum of these forces behaves as follows:
\begin{equation}\label{eq:self_stress}
\sum_{\mathclap{j\in N(i)}} (v_j-v_i) = \sum_{\mathclap{j\in N(i)}} v_j - \deg(G)v_i =\big(\theta-\deg(G)\big) v_i.
\end{equation}
In the language of rigidity theory: if we put the ...
Equation \eqref{eq:self_stress} can also be written as $LM=(\deg(G)-\theta)M$, where $L$ is the Laplacian of $G$.
\end{remark}
\begin{remark}
Let us call a graph \emph{$\theta$-spectral} if it is the edge-graph of a $\theta$-spectral polytope.
We can characterize such graphs purely in the language of spectral graph theory.
A graph is $\theta$-spectral if and only if the following holds: $i,j\in V$ are adjacent in $G$ precisely when there exists a $\theta$-eigenvector $u\in\RR^n$ that is maximized exactly on $i$ and $j$:
$$\Argmax_{k\in V} u_k = \{i,j\}.$$
\end{remark}
Now the central observation concerning balanced polytopes, paving the way to spectral polytopes, is the following:
\begin{observation}\label{res:eigenvalue}
The defining equation \eqref{eq:balanced} can be written as $AM=\theta M$, where the matrices $A\in\RR^{n\x n}$ and $M$ are defined as in \cref{sec:setting}.
In this form, it is apparent that $\theta$ is an eigenvalue of $A$, and the columns of $M$ are the corresponding eigenvectors.
\end{observation}
Since $P$ is assumed to have full dimension, the matrix $M$ has rank $d$, \shortStyle{i.e.,}\ its columns are linearly independent eigenvectors.
But they do not necessarily span the whole eigenspace.
If they do, the polytope is called \emph{spectral}:
\begin{definition}
A $\theta$-balanced polytope $P\subset\RR^d$ is called \emph{$\theta$-spectral} (or just \emph{spectral}), if any of the following equivalent conditions is satisfied:
\begin{myenumerate}
\item $\theta\in\Spec(G_P)$ has multiplicity $d$ (the dimension of $P$),
\item the columns of $M$ span the $\theta$-eigenspace of $G_P$.
\end{myenumerate}
\end{definition}
\subsection{Applications}
There are at least two interesting facts about spectral polytopes.
\begin{corollary}
A spectral polytope realizes all the symmetries of its edge-graph.
\end{corollary}
\begin{corollary}
A $\theta_2$-spectral polytope is uniquely determined by its edge-graph (among $\theta_2$-spectral polytopes).
\end{corollary}
Of course, $\theta_2$ can be replaced by any $\theta_i$.
\fi
\section{Implementation in Mathematica}
\label{sec:appendix_mathematica}
The following short Mathematica script takes as input a graph $G$ (in the example below, this is the edge-graph of the dodecahedron), and an index $k$ of an eigenvalue.
It then computes the $v_i$ (or \texttt{vert} in the code), \shortStyle{i.e.,}\ the vertex coordinates of the $\theta_k$-eigenpolytope.
If the dimension turns out to be appropriate, the spectral embedding of the graph, as well as the eigenpolytope, are plotted.
\vspace{0.5em}
\begin{lstlisting}
(* Input:
* the graph G, and
* the index k of an eigenvalue (k = 1 being the largest eigenvalue).
*)
G = GraphData["DodecahedralGraph"];
k = 2;
(* Computation of vertex coordinates 'vert' *)
n = VertexCount[G];
A = AdjacencyMatrix[G];
eval = Tally[Sort@Eigenvalues[A//N], Round[#1-#2,0.00001]==0 &];
d = eval[[-k,2]]; (* dimension of the eigenpolytope *)
vert = Transpose@Orthogonalize@
NullSpace[eval[[-k,1]] * IdentityMatrix[n] - A];
(* Output:
* the graph G,
* its eigenvalues with multiplicities,
* the spectral embedding, and
* its convex hull (the eigenpolytope).
*)
G
Grid[Join[{{$\theta$,"mult"}}, eval], Frame$\to$All]
Which[
d<2 , Print["Dimension too low, no plot generated."],
d==2, GraphPlot[G, VertexCoordinates$\to$vert],
d==3, GraphPlot3D[G, VertexCoordinates$\to$vert],
d>3 , Print["Dimension too high, 3-dimensional projection is plotted."];
GraphPlot3D[G, VertexCoordinates$\to$vert[[;;,1;;3]] ]
]
If[d==2 || d==3,
Region`Mesh`MergeCells[ConvexHullMesh[vert]]
]
\end{lstlisting}
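For readers without access to Mathematica, the same computation can be sketched in Python with numpy. This is our illustrative translation, not part of the paper; as a stand-in for \texttt{GraphData} we hard-code the $6$-cycle $C_6$.

```python
import numpy as np

# Illustrative numpy translation of the Mathematica script above.
# Input: adjacency matrix A and the index k of an eigenvalue
# (k = 1 being the largest). Output: vertex coordinates of the
# theta_k-eigenpolytope. Stand-in graph: the 6-cycle C_6.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
k = 2

evals, evecs = np.linalg.eigh(A)                # eigenvalues in ascending order
distinct = np.unique(np.round(evals, 5))[::-1]  # distinct eigenvalues, descending
theta_k = distinct[k - 1]

cols = np.isclose(evals, theta_k, atol=1e-5)
d = int(np.sum(cols))         # dimension of the eigenpolytope
vert = evecs[:, cols]         # row i = coordinates of vertex v_i

assert d == 2                 # theta_2 = 1 has multiplicity 2 for C_6
```

The rows of \texttt{vert} play the role of the Mathematica variable of the same name; plotting and convex-hull steps are omitted.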
\section{Motivation}
\label{sec:motivation}
We dive into three curious topics in the interplay between graphs and polytopes, all of which will be addressed in this paper.
\subsection{Reconstructing polytopes from the edge-graph}
\label{sec:reconstructing_polytopes}
Since the work of Steinitz \cite{...} it is known that the edge-graphs of polyhedra are exactly the 3-connected planar graphs, and that any such graph belongs to a unique combinatorial type of polyhedra.
On the other hand, the study of higher-dimensional polytopes taught us that the edge-graph carries very little information about its polytope if $d\ge 4$.
For example, the edge-graph of any simplex is the complete graph.
But so is the edge-graph of every \emph{neighborly polytope}, and so, given only the edge-graph, we will never know from which polytope it was obtained initially.
The same example shows that in many cases not even the dimension of the polytope can be reconstructed.
As a light in the dark, we have the result of Blind and Mani \cite{...} (with its modern proof by Kalai \cite{kalai1988simple}), which states that the combinatorial type of a \emph{simple polytope} can be reconstructed from its edge-graph.
One can then ask what other classes of polytopes allow such a unique reconstruction.
For example, it might be that every \enquote{sufficiently symmetric} polytope is reconstructable in this sense.
\begin{question}\label{q:symmetrc_polytopes_with_same_edge_graph}
Can there be two distinct sufficiently symmetric polytopes with the same edge-graph?
\end{question}
One might ask \enquote{what counts as sufficiently symmetric?}, and this paper shall answer this question partially.
For example, vertex-transitivity is not sufficient:
every~even-dimen\-sional \emph{cyclic polytope} $C(n,2d)$ is neighborly and can be realized in a vertex-transitive way: set the coordinates of the $i$-th vertex to be
$$\big(\sin(t_i),\cos(t_i);\sin(2t_i),\cos(2t_i);...;\sin(dt_i),\cos(dt_i)\big)\in\RR^{2d}$$
with $t_i=2\pi i/n$. The edge-graph is $K_n$, independent of $d$.
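The trigonometric coordinates above are easy to check numerically. The following numpy sketch (ours, purely illustrative) verifies that all vertices of this realization of $C(n,2d)$ have squared norm $d$, i.e.\ lie on a common sphere, as a vertex-transitive realization requires.

```python
import numpy as np

# Trigonometric coordinates of the cyclic polytope C(n, 2d) as in the text:
# the i-th vertex is (sin t_i, cos t_i; ...; sin d*t_i, cos d*t_i),
# with t_i = 2*pi*i/n.  (Illustrative check, not code from the paper.)
n, d = 7, 2  # C(7, 4)

t = 2 * np.pi * np.arange(n) / n
verts = np.column_stack(
    [f(k * t) for k in range(1, d + 1) for f in (np.sin, np.cos)]
)

# Each pair (sin k*t_i, cos k*t_i) contributes 1 to the squared norm,
# so every vertex has squared norm d: all vertices lie on one sphere.
norms_sq = np.sum(verts**2, axis=1)
assert np.allclose(norms_sq, d)
```

Rotating every $t_i$ by $2\pi/n$ permutes the vertices cyclically, which is the geometric source of the vertex-transitivity claimed in the text.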
The next candidate in line, \emph{edge-transitivity}, seems more promising.
And while a complete classification of edge-transitive polytopes might be possible (though, none is currently known), a more satisfying outcome would be a result independent of any such classification.
\subsection{Realizing symmetries of the edge-graph}
\label{sec:realizing_symmetries}
Polytopes are at most as symmetric as their edge-graph.
Any symmetry $T\in\Aut(P)\subset\Ortho(\RR^d)$ of a polytope $P\subset\RR^d$ induces a symmetry $\phi\in\Aut(G_P)\subseteq\Sym(n)$ of the edge-graph.
And thus $\Aut(P)\le\Aut(G_P)$ as abstract groups.
For general polytopes, it cannot be expected that the polytope shows many, or just any symmetries of the edge-graph.
However, one can always try to find a sufficiently symmetric realization of the polytope, so that a maximum of combinatorial symmetries of the edge-graph is realized as geometric symmetries of the polytope.
One can even hope to realize \emph{all} the symmetries of the edge-graph.
In the case of polyhedra, the circle packing version of the Steinitz theorem provides a way to realize all the symmetries of the edge-graph.
Again, higher dimensions tell a different story: there exist 4-dimensional polytopes the edge-graphs of which cannot be realized with all their symmetries (consider \shortStyle{e.g.}\ the dual of the polytope constructed in \cite{bokowski1984combinatorial}).
Again, it seems that assuming sufficient symmetry of the polytope ensures that the polytope has already \emph{all} the symmetries of its edge-graph.
We can actually prove this in the case of so-called \emph{distance-transitive} symmetry, and we hope to find the same in the case of merely edge-transitive polytopes.
\begin{question}
Does every sufficiently symmetric polytope realize all the symmetries of its edge-graph?
\end{question}
\subsection{Rigidity and perfect polytopes}
\label{sec:rigid_perfect}
Every polytope can be continuously deformed while preserving its combinatorial type.
In the case of highly symmetric polytopes, one might additionally be interested in keeping the symmetry properties during the deformation.
A polytope that cannot be deformed without losing some of its symmetries is called \emph{rigid} or, occasionally, a \emph{perfect polytope}.
All regular polytopes are examples of such rigid polytopes, and there exist many more.
While quite a few examples are known, no general construction procedure has been described.
Again, one might hope to show that all sufficiently symmetric polytopes are rigid, in particular, the edge-transitive polytopes.
\begin{question}
Is every sufficiently symmetric polytope rigid?
\end{question}
\subsection{A motivating example}
\label{sec:motivating_example}
Consider your favorite highly symmetric polytope $P\subset\RR^d$.
For demonstration, we choose $P$ to be the \emph{regular dodecahedron}.\footnote{Any other platonic solid (or regular polytope) might do the trick. Ideally, your polytope is transitive on vertices and edges.}
Here are some observations:
\begin{myenumerate}
\item $P$ is really symmetric (it is a regular polyhedron). All its symmetries must also be reflected in the edge-graph $G_P$, since any automorphism of $P$ restricts to an automorphism of $G_P$.
However, it is far from evident whether any particular symmetry of $G_P$ is reflected in $P$ as well.
In the case of the dodecahedron, every combinatorial symmetry of the edge-graph is indeed a geometric symmetry of $P$.
\item The study of higher-dimensional polytopes teaches that the edge-graph of a polytope usually carries very little information about the polytope itself.
For example, the edge-graph of a simplex is a complete graph.
But so are the edge-graphs of all \emph{neighborly polytopes}.
It was shown in \cite{kalai1988simple} that this problem of reconstruction does not occur for \emph{simple polytopes}, and it is our goal to show that it is also not to be expected for sufficiently symmetric polytopes.
\item A polytope is called \emph{perfect} if its combinatorial type has a unique geometric realization of highest symmetry.
Many highly symmetric polytopes one immediately thinks of have this property. But many others do not.
We show that being edge-transitive seems to be the right assumption to ensure being perfect.
\end{myenumerate}
\section{Eigenpolytopes and spectral polytopes}
One way to connect spectral graph theory to polytope theory is via the construction of the so-called \emph{eigenpolytopes}.
In this section, we will survey this construction and its application primarily to the study of highly symmetric graphs.
From this section on, let $P\subset\RR^d$ denote a $d$-dimensio\-nal convex polytope with $d\ge 2$.
If not stated otherwise, we shall~assume that $P$ is \emph{full-dimensional}, that is, that $P$ is not contained in any proper subspace of $\RR^d$.
Let $\F_\delta(P)$ denote the set of \emph{$\delta$-dimensional faces} of $P$.
Fix some bijection $v\: V\to \F_0(P)$ between $V=\{1,...,n\}$ and the vertices of $P$.
Let $G_P=(V,E)$ be the graph with $ij\in E$ if and only if $\conv\{v_i,v_j\}$ is an edge of $P$.
$G_P$ is called \emph{edge-graph} of $P$, and $v$ the \emph{skeleton} of $P$.
Note that we can consider $v$ as a realization of $G_P$.
\subsection{Eigenpolytopes}
\begin{definition}
Let $G$ be a graph and $\theta\in \Spec(G)$ an eigenvalue of multiplicity $d$. If $v$ is the $\theta$-realization of $G$, then
%
$$P_G(\theta):=\conv\{v_i\mid i\in V\}\subset\RR^d$$
%
is called the \emph{$\theta$-eigenpolytope} of $G$, or just an \emph{eigenpolytope} if $\theta$ is not specified.
\end{definition}
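As a concrete illustration (a numerical sketch we add here, not part of the paper's formal development), the $\theta_2$-eigenpolytope of the 5-cycle $C_5$ can be computed with a few lines of numpy; the eigenvalue $\theta_2=2\cos(2\pi/5)$ has multiplicity $2$, and the resulting eigenpolytope is a regular pentagon.

```python
import numpy as np

# Minimal computation of a theta_2-eigenpolytope, using the 5-cycle C_5.
# (Illustrative sketch, not code from the paper.)
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

evals, evecs = np.linalg.eigh(A)                   # ascending eigenvalues
theta2 = np.max(evals[evals < evals[-1] - 1e-9])   # second largest eigenvalue
idx = np.where(np.isclose(evals, theta2))[0]
assert len(idx) == 2                               # multiplicity d = 2

# Rows of the eigenvector block are the vertex coordinates v_i in R^2.
V = evecs[:, idx]

# The vertices all lie on a common circle around the origin,
# as expected for the regular pentagon.
radii = np.linalg.norm(V, axis=1)
assert np.allclose(radii, radii[0])
assert np.isclose(theta2, 2 * np.cos(2 * np.pi / 5))
```

The equal radii are no accident: the orthogonal projector onto the eigenspace of a circulant matrix has constant diagonal, so the rows of any orthonormal eigenbasis have equal norm.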
The notion of the eigenpolytope goes back to Godsil \cite{godsil1978graphs}, and can~be under\-stood~as a geometric way to study the combinatorial, algebraic~and~spectral properties of a graph.
Several authors investigated the eigenpolytopes of famous graphs or graph families.
For example, Powers \cite{powers1986petersen} gave a description for the face structure of~the~different eigenpolytopes of the Petersen graph, one of which will reappear as distance-transitive polytope in \cref{sec:distance_transitive_polytopes}.
Mohri \cite{mohri1997theta_1} analysed the geometry of what he called the \emph{Hamming polytopes}, the $\theta_2$-eigenpolytopes of the Hamming graphs.
As it turns out (though he did not realize this himself), these polytopes are merely the cartesian products of regular simplices.
These, as well, are distance-transitive~\mbox{polytopes}. Powers \cite{powers1988eigenvectors} took a first look at the eigenpolytopes of the \emph{distance-regular} graphs.
Those especially will be discussed in greater detail below.
Finally, Padrol and Pfeifle \cite{padrol2010graph} investigated graph operations and their effect on the associated eigenpolytopes.
Before we proceed, we recall the relevant terminology for polytopes.
\subsection{Spectral polytopes}
Most graphs are not edge-graphs of polytopes (so-called \emph{polytopal} graphs), but eigenpolytopes provide an operation that assigns to every graph (and an eigenvalue thereof) such a polytopal graph (the edge-graph of the eigenpolytope).
This operation is most interesting in the case of vertex-transitive graphs.
If $G$ is a vertex-transitive graph on $n$ vertices, every vertex of $G$ becomes a vertex of $P_G(\theta)$ (though maybe several vertices are assigned to the same point).
It seems interesting to ask when this operation is a fixed point, that is, when the edge-graph of the eigenpolytope is isomorphic to the original graph.
We are specifically interested in one particular phenomenon associated with eigenpolytopes, that can best be observed in our running example.
\begin{definition}
\quad
\begin{myenumerate}
\item A polytope $P$ is called \emph{balanced} resp.\ \emph{spectral}, if this is the case for its skeleton.
\item A graph is called \emph{spectral} if it is the edge-graph of a spectral polytope.
\end{myenumerate}
\end{definition}
For a polytope, being spectral seems to be a quite exceptional property.
For example, being spectral implies that the polytope realizes all the symmetries of its edge-graph.
So, no neighborly polytope except for the simplex can be spectral.
In contrast, all (centered) neighborly polytopes are balanced, and many neighborly polytopes are eigenpolytopes (\shortStyle{e.g.}\ one of the Petersen-polytopes is neighborly).
If a graph is spectral with eigenvalue $\theta$, then it is isomorphic to the edge-graph of its $\theta$-eigenpolytope.
However, the converse is not true: the $\theta_3$-realization of the pentagon-graph is a pentagram, whose convex hull is a pentagon. But the pentagon-graph is not $\theta_3$-spectral.
\begin{center}
\includegraphics[width=0.7\textwidth]{img/convex_hull_5_gon}
\end{center}
\begin{example}
Every neighborly polytope has eigenvalue $-1$ (if centered appropriately), but none of these is spectral, except for the simplices.
\end{example}
\begin{example}
It is an easy exercise to show that the $d$-cubes and $d$-crosspolytopes are spectral with eigenvalue $\theta_2$.
In fact, it was shown by Licata and Powers \cite{licata1986surprising} that also the regular polygons, all simplices, the regular dodecahedron and the regular icosahedron are spectral in this sense.
The only regular polytopes which they were unable to prove spectral were the 24-cell, the 120-cell and the 600-cell.
However, they conducted numerical experiments hinting in that direction.
\end{example}
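The claim for the $d$-cubes in the example above is easy to verify computationally. The following numpy sketch (our addition, not from the paper) checks that the edge-graph $Q_d$ of the $d$-cube has second largest adjacency eigenvalue $\theta_2=d-2$ with multiplicity $d$, matching the dimension, as $\theta_2$-spectrality requires.

```python
import numpy as np

# Sanity check: the hypercube graph Q_d has eigenvalues d - 2k with
# multiplicity binom(d, k); in particular theta_2 = d - 2 has
# multiplicity d.  (Illustrative, not code from the paper.)
d = 4
n = 2 ** d
A = np.zeros((n, n))
for u in range(n):
    for k in range(d):
        A[u, u ^ (1 << k)] = 1   # vertices of Q_d differ in exactly one bit

evals = np.sort(np.linalg.eigvalsh(A))[::-1]   # descending
mult = int(np.sum(np.isclose(evals, d - 2)))

assert np.isclose(evals[0], d)   # largest eigenvalue = degree d
assert mult == d                 # theta_2 = d - 2 has multiplicity d
```

The same style of check works for the crosspolytopes; only the adjacency matrix changes.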
\msays{The $\theta_3$-realization of the cube-graph is a polytope (the tetrahedron) and its edges correspond to edges of the graph, but it is not faithful.}
In a slightly more general setting, Izmestiev \cite{izmestiev2010colin} found that every polytope is~spectral, if we are allowed to add appropriate edge-weights to the edge-graph.
This~results in an adjacency matrix whose entries can be any non-negative real number instead of just zero and one.
This however might break the symmetry in a combinatorially highly symmetric graph and is therefore not our preferred perspective.
First results on this topic were obtained by Licata and Powers \cite{licata1986surprising}, who identified several polytopes as being spectral.
Among these are all regular polytopes excluding the exceptional 4-dimensional ones: the 24-cell, the 120-cell and the 600-cell.
Still kind of \enquote{unexplained} is their finding that the \emph{rhombic dodecahedron} is spectral as well.
Since the edge graph of this polyhedron is not regular, this observation hinges critically on the fact that we consider eigenpolytopes constructed from the adjacency matrix, rather than, say, the Laplace matrix.
Godsil \cite{godsil1998eigenpolytopes} classified all spectral graphs among the distance-regular graphs (for a definition, see \cref{sec:distance_transitive_realizations}).
He found that these are the following:
\begin{theorem}[Godsil, \cite{...}]
\label{res:spectral_distance_regular_graphs}
Let $G$ be a distance-regular graph.
If $G$ is $\theta_2$-spectral,~then $G$ is one of the following:
\begin{enumerate}[label=$(\text{\roman*}\,)$]
\item a cycle graph $C_n,n\ge 3$,
\item the edge-graph of the dodecahedron,
\item the edge-graph of the icosahedron,
\item the complement of a disjoint union of edges,
\item a Johnson graph $J(n,k)$,
\item a Hamming graph $H(d,q)$,
\item a halved $n$-cube $\nicefrac12 Q_n$,
\item the Schläfli graph, or
\item the Gosset graph.
\end{enumerate}
\end{theorem}
One remarkable fact about this list, which seems to have gone unnoticed, is that all these graphs are actually \emph{distance-transitive}.
To this date, not a single polytope (or graph) has been found that is $\theta_i$-spectral for some $i\not=2$ (assuming that the graph has at least one edge).
It is not implausible to conjecture that skeleta of polytopes can be spectral, if at all, only with eigenvalue $\theta_2$.
This comes from the study of so-called \emph{nodal domains}.
There is a remarkable construction due to Izmestiev \cite{izmestiev2010colin}.
\begin{theorem}[Izmestiev \cite{izmestiev2010colin}, Theorem 2.4]\label{res:izmestiev}
Let $P\subset\RR^d$ be a polytope, $G_P$ its~edge-graph and $v\:V\to\RR^d$ its skeleton. For $c=(c_1,...,c_n)\in\RR^n$ define
$$P^\circ(c):=\{x\in\RR^d\mid\<x,v_i\>\le c_i\text{ for all $i\in V$}\}.$$
Note that $P^\circ=P^\circ(1,...,1)$ is the polar dual of $P$.
The matrix $X\in\RR^{n\x n}$ with~compo\-nents
$$X_{ij}:=-\frac{\partial^2 \vol(P^\circ(c))}{\partial c_i\partial c_j}\Big|_{c=(1,...,1)}$$
has the following properties:
\begin{myenumerate}
%
\item $X_{ij}< 0$ whenever $ij\in E(G_P)$,
\item $X_{ij}=0$ whenever $ij\not\in E(G_P)$,
\item $XM=0$, where $M$ is the arrangement matrix of $v$,
\item $X$ has a unique negative eigenvalue, and this eigenvalue is simple,
\item $\dim\ker X=d$.
\end{myenumerate}
\end{theorem}
One can view the matrix $X$ (or better, $-X$) as a kind of adjacency matrix of~a vertex- and edge-weighted version of $G_P$.
Part $(iii)$ states that $v$ is balanced~with eigenvalue zero.
Since $\rank M=d$, part $(v)$ states that the arrangement space~$U:=$ $\Span M$ is already the whole eigenspace.
And part $(iv)$ states that zero is the second largest eigenvalue of $-X$.
\begin{theorem}\label{res:implies_spectral}
Let $P\subset\RR^d$ be a polytope, and $X\in\RR^{n\x n}$ the associated matrix~defined as in \cref{res:izmestiev}. If we have that
\begin{myenumerate}
\item $X_{ii}$ is independent of $i\in V(G_P)$, and
\item $X_{ij}$ is independent of $ij\in E(G_P)$,
\end{myenumerate}
then $P$ is spectral with eigenvalue $\theta_2$.
\begin{proof}
By assumption there are $\alpha,\beta\in\RR$, $\beta>0$, so that $X_{ii}=\alpha$ for all vertices~$i\in$ $V(G_P)$, and $X_{ij}=-\beta$ for all edges $ij\in E(G_P)$.
We then find
$$X=\alpha \Id - \beta A\quad\implies\quad (*)\;A=\frac\alpha\beta \Id-\frac1\beta X,$$
where $A$ is the adjacency matrix of $G_P$.
By \cref{res:izmestiev} $(iv)$ and $(v)$, the matrix $X$ has second smallest eigenvalue zero of multiplicity $d$.
By \cref{res:izmestiev} $(iii)$, the columns of $M$ are the corresponding eigenvectors.
Since $\rank M=d$ we find that these are all the eigenvectors, in particular, that the arrangement space $U:=\Span M$ is the 0-eigenspace of $X$.
By $(*)$, $A$ (that is, $G_P$) has then second \emph{largest} eigenvalue $\theta_2=\alpha/\beta$ of multiplicity $d$ with eigenspace $U$.
The skeleton of $P$ is then the $\theta_2$-realization of $G_P$ and $P$ is spectral.
\end{proof}
\end{theorem}
\begin{corollary}
If $P\subset\RR^d$ is simultaneously vertex- and edge-transitive, then $P$ is spectral with eigenvalue $\theta_2$.
\end{corollary}
It seems tempting to conjecture that the spectral polytopes are \emph{exactly} the polytopes that satisfy the conditions $(i)$ and $(ii)$ in \cref{res:implies_spectral}.
This would give an interesting geometric characterization of eigenpolytopes.
Izmestiev proved the following:
for $ij\in E$, let $\sigma_i,\sigma_j\in\F_{d-1}(P^\circ)$ be the facets of $P^\circ$ dual to the vertices $v_i$ and $v_j$ of $P$. Then
$$X_{ij}= \frac{\vol_{d-2}(\sigma_i\cap\sigma_j)}{\|v_i\|\|v_j\|\sin\angle(v_i,v_j)}.$$
There seems to be no equally elementary formula for $X_{ii},i\in V$, but its value can be derived from $XM=0$, which implies
$$X_{ii} v_i = -\sum_{\mathclap{j\in N(i)}} X_{ij} v_j.$$
Proving this conjectured characterization, however, seems currently out of reach.
There are many balanced polytopes that are not spectral (\shortStyle{e.g.}\ all appropriately centered neighborly polytopes that are not simplices).
Further, there are polytopes for which being spectral appears completely exceptional, \shortStyle{e.g.}\ the \emph{rhombic dodecahedron} (already mentioned by Licata and Powers \cite{licata1986surprising}).
\subsection{Applications}
Spectral polytopes turn out to be exactly the kind of polytopes we need to address the problems listed in \cref{sec:motivation}.
For example, for fixed $i\in\NN$ consider the set of all $\theta_i$-spectral polytopes.
Given the edge-graph $G_P$ of some polytope in this set, the polytope is uniquely determined, simply because it can be reconstructed via the $\theta_i$-realization of $G_P$.
In other~words, the $\theta_i$-spectral polytopes are uniquely determined by their edge-graph.
Also, each $\theta_i$-spectral polytope realizes all the symmetries of its edge-graph by \cref{res:spectral_implies_symmetric}.
The question then is how large the class of these polytopes is, and whether their members are easily recognized.
For example, does every family of vertex-transitive polytopes with the same edge-graph have a spectral representative?
If so, this would imply that every vertex-transitive edge-graph can be realized as a polytope with all its symmetries.
\section{Edge-transitive polytopes}
\subsection{Arc-transitive polytopes}
\label{sec:arc_transitive_polytopes}
Our final goal would be to establish that every arc-transitive polytope is spectral, or more precisely, $\theta_2$-spectral.
To anticipate the outcome: we were not able to prove this.
However, we also observe that this statement is true for \emph{all} known arc-transitive polytopes.
In the last section we have learned that the skeleton of an arc-transitive polytope is of full local dimension, hence irreducible, spherical, rigid and has an eigenvalue.
\begin{definition}
Given a polytope $P\subset\RR^d$.
\begin{myenumerate}
\item $P$ is called \emph{rigid} if it cannot be continuously deformed without changing its combinatorial type or symmetry group.
\item $P$ is called \emph{perfect} if any essentially distinct but combinatorially equivalent polytope $Q\subset\RR^d$ has a smaller symmetry group than $P$, that is $\Aut(Q)\subset\Aut(P)$.
\end{myenumerate}
\end{definition}
\enquote{Essentially distinct} in $(ii)$ means that $P$ and $Q$ are not just reoriented or~rescaled versions of one another.
If a polytope is perfect, then it is rigid.
It seems to be open whether the converse holds in general.
In theory it could happen that a combinatorial type has two \enquote{most symmetric} realizations as a polytope, which cannot be continuously deformed into each other.
\begin{theorem}
An arc-transitive polytope $P\subset\RR^d$ is $\theta_2$-spectral.
\begin{proof}
The skeleton of $P$ is an arc-transitive realization of $G_P$ of full local dimension, hence has an eigenvalue $\theta\in\Spec(G)$ according to \cref{res:...}.
The tricky part is to show that $\theta=\theta_2$.
For this, we use the following fact:
Let $v$ be the skeleton of $P$ and define
$$P^\circ(\mathbf c) = P^\circ(c_1,...,c_n):=\{x\in\RR^d\mid \<x,v_i\>\le c_i\text{ for all $i\in V$}\}.$$
Finally, consider the matrix
$$X_{ij}:=-\frac{\partial^2\mathrm{vol}(P^\circ(\mathbf c))}{\partial c_i \partial c_j}\Big|_{\mathbf c=(1,...,1)}.$$
Since $P$ is edge-transitive, $X_{ij}$ takes the same value for all $ij\in E$.
Since $P$ is vertex-transitive, $X_{ii}$ does not depend on $i\in V$.
Finally, it is proven in \cite{...} that $X_{ij}=0$ whenever $ij\not\in E$, and that $X$ has a single negative eigenvalue, which is simple.
Consequently, there are $\alpha,\beta\in\RR$ so that $X=\alpha\Id-\beta A$, where $A$ is the adjacency matrix of $G_P$.
\end{proof}
\end{theorem}
\begin{theorem}
If $P\subset\RR^d$ is an arc-transitive polytope, then
\begin{myenumerate}
\item $P$ is rigid,
\item $\Aut(P)$ is irreducible,
\item any projection $\pi_U(P)$ onto a subspace $U\subseteq\RR^d$ is either not arc-transitive~or has a different edge-graph than $P$.
\end{myenumerate}
Furthermore, there is an eigenvalue $\theta\in\Spec(G_P)$ of the edge-graph $G_P$ resp.\ its~associated Laplacian eigenvalue $\lambda=\deg(G_P)-\theta$, so that the following holds:
\begin{myenumerate}
\setcounter{enumi}{3}
\item if $P$ has edge length $\ell$ and circumradius $r$, then
%
$$\frac\ell r=\sqrt{\frac{2\lambda}{\deg(G_P)}}.$$
\item if the polar polytope $P^\circ$ has dihedral angle $\alpha$, then
%
$$\cos\alpha=-\frac{\theta}{\deg(G_P)}.$$
\end{myenumerate}
\end{theorem}
\begin{conjecture}\label{conj:arc_transitve}
Each arc-transitive polytope $P\subset\RR^d$ is $\theta_2$-spectral, and satisfies
\begin{myenumerate}
\item $P$ is perfect,
\item $P$ realizes all the symmetries of its edge-graph,
\item $P$ is the only arc-transitive polytope (up to reorientation and rescaling) with edge-graph $G_P$.
\end{myenumerate}
\end{conjecture}
Probably, in all these results, the eigenvalue $\theta$ resp.\ $\lambda$ can be replaced with $\theta_2$ resp.\ $\lambda_2$, but we were not able to prove that.
However, it holds in every single case of an arc-transitive polytope that we have checked.
\begin{example}
There are many ways to compute the circumradius of an $n$-simplex with unit edge-length, and here is another one:
The complete graph $K_n$ is arc-transitive and has spectrum $\{(-1)^{n-1},(n-1)^1\}$. The second largest eigenvalue $\theta_2=-1$ gives rise to an $(n-1)$-dimensional polytope with edge-graph $K_n$, that is, the $(n-1)$-dimensional simplex.
Assuming edge length $\ell=1$, we can compute its circumradius:
$$r=\sqrt{\frac{\deg(G)}{2\lambda_2}} = \sqrt{\frac{\deg(G)}{2\big(\deg(G)-\theta_2\big)}} = \sqrt{\frac{n-1}{2\big(n-1-(-1)\big)}} = \sqrt{\frac{n-1}{2n}}.$$
\end{example}
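The circumradius computation in the example above can be cross-checked numerically. The following numpy sketch (our illustration, not part of the paper) builds the $(n-1)$-simplex as the $\theta_2$-eigenpolytope of $K_n$ and verifies the relation $\ell/r=\sqrt{2\lambda_2/\deg(G)}$ from the theorem.

```python
import numpy as np

# Numerical cross-check of (l / r)^2 = 2 * lambda_2 / deg(G) for the
# (n-1)-simplex, realized as the theta_2-eigenpolytope of K_n.
# (Illustrative sketch, not code from the paper.)
n = 6
A = np.ones((n, n)) - np.eye(n)     # adjacency matrix of K_n

evals, evecs = np.linalg.eigh(A)    # ascending: -1 (n-1 times), then n-1
V = evecs[:, :-1]                   # theta_2-eigenspace; rows = simplex vertices

r = np.linalg.norm(V[0])            # circumradius (polytope is centered at 0)
l = np.linalg.norm(V[0] - V[1])     # edge length (any pair: K_n is complete)
deg = n - 1
lam2 = deg - (-1)                   # Laplacian eigenvalue for theta_2 = -1

assert np.isclose((l / r) ** 2, 2 * lam2 / deg)
assert np.isclose(r / l, np.sqrt((n - 1) / (2 * n)))
```

All rows of \texttt{V} have equal norm (the eigenspace projector of $K_n$ has constant diagonal), so any row norm serves as the circumradius.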
\begin{example}
Compute the dihedral angle of the rhombic dodecahedron and the rhombic triacontahedron.
\end{example}
But this technique can also be used the other way around: from the metric properties of a polytope and the eigenvalues of its edge-graph one can conclude that it is spectral.
\begin{example}
We show that all regular polytopes are in fact $\theta_2$-spectral, as was already conjectured by Licata and Powers \cite{licata1986surprising}.
The essential facts we are using are some tabulated metric and spectral properties of these polytopes, \shortStyle{e.g.}\ found in \cite{buekenhout1998number}.
\begin{itemize}
\item
Remarkably, the 24-cell has edge length $\ell$ equal to its circumradius $r$.
This determines the Laplacian eigenvalue of its skeleton:
$$1=\frac\ell r=\sqrt{\frac{2\lambda}{\deg(G)}} \quad\implies\quad \lambda=\frac12\deg(G) = 4.$$
One checks that $4$ is exactly the second smallest Laplacian eigenvalue of its edge-graph, and that this eigenvalue has multiplicity four.
\item
600-cell: $3(1+\sqrt{5})$
\item
120-cell: $2\phi-1$
\end{itemize}
\begin{center}
\begin{tabular}{c|rrrr}
polytope & $\ell$ & $r$ & $\lambda_2$ & $\deg(G_P)$ \\
\hline
$\overset{\phantom.}24$-cell & $1$ & $1$ & $4$ & $8$ \\
$120$-cell & $3-\sqrt 5$ & $2\sqrt 2$ & $2\phi-1$ & $4$ \\
$600$-cell & $2/\phi$ & $2$ & $3(1+\sqrt 5)$ & $12$
\end{tabular}
\end{center}
\end{example}
\begin{theorem}
If a polytope has a connected, full-dimensional and spanning edge-orbit, then it is perfect.
\end{theorem}
\newpage
\section{Edge- and vertex-transitive polytopes}
The goal of this section is to show that every polytope $P\subset\RR^d$ that is simultaneously vertex- and edge-transitive is the eigenpolytope to the second largest eigenvalue $\theta_2$ of its edge-graph.
Note that being only edge-transitive is just slightly weaker than being vertex- and edge-transitive, as it was shown in \cite{winter2020polytopes} that edge-transitivity of convex polytopes in dimension $d\ge 4$ implies vertex-transitivity anyway.
\begin{theorem}\label{res:vertex_edge_transitive_is_theta2_realization}
If $P\subset\RR^d$ is vertex-transitive and edge-transitive, then it is the~$\theta_2$-eigenpolytope of its edge-graph.
\begin{proof}
Let $X\in\RR^{n\x n}$ be the matrix defined in \cref{res:izmestiev}.
Since $P$ is transitive on vertices, there is a number $\alpha\in\RR$ with $X_{ii}=\alpha$ for all $i\in V$.
Since $P$ is edge-transitive, there is a number $\beta>0$ so that $X_{ij}=-\beta$ for all $ij\in E(G_P)$.
Since we also have $X_{ij}=0$ for all $ij\not\in E(G_P)$, we find that $X$ must be of the form
$$X= \alpha \Id - \beta A \quad\implies\quad A=\frac\alpha\beta\Id-\frac1\beta X,$$
where $A$ is the adjacency matrix of $G_P$.
$X$ has a single negative eigenvalue, which is simple, and $\dim\ker X=d$.
In other words, its second smallest eigenvalue is zero and has multiplicity $d$.
Consequently, $A$ has second \emph{largest} eigenvalue $\theta_2=\alpha/\beta$ of multiplicity $d$ with the same eigenvectors as $X$.
Let $v$ be the skeleton of $P$ with arrangement matrix $M$ and arrangement space $U$.
Since $XM=0$ and $\rank M=d$, we find that $U=\Span M$ must be the $0$-eigenspace of $X$, hence the $\theta_2$-eigenspace of $A$.
Thus, $v$ is the $\theta_2$-realization of $G_P$ and $P$ is spectral.
\end{proof}
\end{theorem}
The argument of \cref{res:vertex_edge_transitive_is_theta2_realization} cannot be easily generalized to any weaker form of symmetry.
For example, one would like to prove that every spectral polytope has eigenvalue $\theta_2$, or more generally, that every balanced polytope has eigenvalue $\theta_2$.
\begin{corollary}
If $P\subset\RR^d$ is vertex- and edge-transitive, then the following hold:
\begin{myenumerate}
\item $\Aut(P)$ is irreducible,
\item $P$ is the unique simultaneously vertex- and edge-transitive polytope for which the edge-graph is isomorphic to $G_P$,
\item $P$ realizes all symmetries of its edge-graph,
\item $P$ is a perfect (rigid) polytope,
\item $P$ is the unique most symmetric realization of its combinatorial type,
\item if $P$ has edge-length $\ell$ and circumradius $r$, then
%
$$\Big[\frac \ell r \Big]^2 = \frac{2\lambda_2(G_P)}{\deg(G_P)} = 2\Big(1-\frac{\theta_2(G_P)}{\deg(G_P)}\Big).$$
%
\item if the polar dual $P^\circ$ has dihedral angle $\alpha$, then
%
$$\cos(\alpha)=-\frac{\theta_2(G_P)}{\deg(G_P)}.$$
\end{myenumerate}
\end{corollary}
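Items $(vi)$ and $(vii)$ of the corollary are easy to cross-check on a small case. The following numpy sketch (our illustration, not from the paper) does so for the 3-cube, whose edge-graph $Q_3$ has $\deg(G_P)=3$ and $\theta_2=1$; its polar dual is the regular octahedron with dihedral angle $\arccos(-1/3)$.

```python
import numpy as np

# Cross-check of items (vi) and (vii) for the 3-cube with vertices in
# {-1, +1}^3.  (Illustrative sketch, not code from the paper.)
verts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
n = len(verts)

# Adjacency: two cube vertices are joined iff they differ in one coordinate.
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if np.sum(verts[i] != verts[j]) == 1:
            A[i, j] = 1

evals = np.sort(np.linalg.eigvalsh(A))[::-1]   # descending: 3, 1, 1, 1, ...
deg = 3
theta2 = evals[1]                              # second largest eigenvalue = 1
lam2 = deg - theta2                            # Laplacian eigenvalue = 2

l = 2.0                                        # edge length of the +-1 cube
r = np.linalg.norm(verts[0])                   # circumradius sqrt(3)
assert np.isclose((l / r) ** 2, 2 * lam2 / deg)   # item (vi): 4/3

# Item (vii): the polar dual of the cube is the regular octahedron,
# whose dihedral angle alpha satisfies cos(alpha) = -1/3.
assert np.isclose(-theta2 / deg, -1 / 3)
```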
\section{Distance-transitive polytopes}
\subsection{Distance-transitive polytopes}
\label{sec:distance_transitive_polytopes}
In general, there is no apparent reason why a spectral polytope must be $\theta_2$-spectral.
In the case of distance-transitive polytopes, an easy argument can be given involving the cosine sequence of the skeleton.
The argument is based on the number of sign changes in $u_\delta$ and the so-called \emph{two-piece property} for embedded graphs.
\begin{definition}
A realization $v$ is said to have the \emph{two-piece property} (or TPP),~if every hyperplane cuts it into at most two pieces.
More formally, given any vector $x\in \RR^d$ and scalar $c\in\RR$, the induced subgraphs
$$
G_+:=G[\,i\in V\mid \<x,v_i\>\ge c\,]
\quad\text{and}\quad
G_-:=G[\,i\in V\mid \<x,v_i\>\le c\,]
$$
must be connected (if non-empty).
\end{definition}
The image below shows two symmetric realizations of the cycle graph $C_5$, one~of which has the TPP (left), and one of which does not (right).
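These two realizations can also be tested programmatically. The sketch below is our own illustrative code (the helper `violates_tpp` checks a single hyperplane, so it can refute the TPP but not certify it): the pentagram realization of $C_5$ fails the TPP for an explicit cut, while the convex pentagon passes a sample of random cuts.

```python
import numpy as np

def components(vertices, edges):
    """Connected components of the induced subgraph on `vertices`."""
    vs, comps = set(vertices), []
    while vs:
        stack, comp = [vs.pop()], set()
        while stack:
            i = stack.pop()
            comp.add(i)
            for j in list(vs):
                if (i, j) in edges or (j, i) in edges:
                    vs.remove(j)
                    stack.append(j)
        comps.append(comp)
    return comps

def violates_tpp(points, edges, x, c):
    """True if the hyperplane <x,.> = c cuts the realization into > 2 pieces."""
    plus = [i for i in range(len(points)) if points[i] @ x >= c]
    minus = [i for i in range(len(points)) if points[i] @ x <= c]
    return len(components(plus, edges)) > 1 or len(components(minus, edges)) > 1

E = {(i, (i + 1) % 5) for i in range(5)}   # the 5-cycle C5
pentagon = np.array([[np.cos(2 * np.pi * k / 5), np.sin(2 * np.pi * k / 5)] for k in range(5)])
pentagram = np.array([[np.cos(4 * np.pi * k / 5), np.sin(4 * np.pi * k / 5)] for k in range(5)])

# the vertical line <x,.> = 0.1 separates vertex 0 from the edge {2,3}
assert violates_tpp(pentagram, E, np.array([1.0, 0.0]), 0.1)

# the convex pentagon survives a sample of random cuts
rng = np.random.default_rng(1)
for _ in range(200):
    x, c = rng.standard_normal(2), rng.uniform(-1, 1)
    assert not violates_tpp(pentagon, E, x, c)
```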
We are going to use the following fact:
\begin{theorem}[\cite{...}, Theorem xx]
The skeleton of a polytope has the TPP.
\begin{proof}[Sketch of proof.]
Let $v$ be the skeleton of a polytope, $x\in\RR^d$ and $c\in \RR$.
Let $i_+\in V$ be a vertex that maximizes the functional $\<v_\cdot,x\>$.
By the simplex algorithm, every vertex on the \enquote{positive} side of the hyperplane $\<\cdot,x\>=c$ can reach $i_+$ by a path in the edge-graph that is non-decreasing in $\<v_\cdot,x\>$.
Hence the positive side is connected if non-empty.
The argument for the negative side is analogous.
\end{proof}
\end{theorem}
\begin{theorem}
A distance-transitive polytope $P$ is $\theta_2$-spectral.
\begin{proof}
Let $v$ be the skeleton of $P$, which is a full-dimensional and distance-transitive realization of $G_P$.
Since it is also arc-transitive, it has an eigenvalue (by \cref{cref:...}).
By \cref{res:...} it must then be the $\theta_k$-realization of $G_P$ for some $k$.
Now consider cutting $v$ by the hyperplane $v_i^\bot$ for some $i\in V$.
The number of components left after the cut is exactly one more than the number of sign changes of the cosine sequence $u_\delta$.
From \cref{res:...} we know that there are exactly $k-1$ sign changes for the $\theta_k$-realization, hence $k$ pieces after the cut.
Since $v$ is the skeleton of a polytope, we have the TPP, and $k\le 2$ pieces.
Since $v$ cannot be the trivial $\theta_1$-realization of $G_P$, it must be the $\theta_2$-realization, and $P$ the $\theta_2$-eigenpolytope.
\end{proof}
\end{theorem}
\begin{corollary}
Let $P$ be a distance-transitive polytope.
\begin{myenumerate}
\item Among distance-transitive polytopes, $P$ is uniquely determined by its edge-graph.
\item $P$ realizes all the symmetries of its edge-graph.
\item $P$ is the unique most symmetric realization of its combinatorial type.
\end{myenumerate}
\end{corollary}
Godsil's classification of the $\theta_2$-spectral distance-regular graphs (see \cref{res:spectral_distance_regular_graphs}; all of which~turned out to be distance-transitive) allows a complete classification of distance-tran\-sitive polytopes.
\begin{theorem}\label{res:distance_transitive_classification}
If $P\subset\RR^d$ is a distance-transitive polytope, then it is one of the following:
\begin{enumerate}[label=$(\text{\roman*}\,)$]
\item a regular polygon $(d=2)$,
\item the regular dodecahedron $(d=3)$,
\item the regular icosahedron $(d=3)$,
\item a cross-polytope, that is, $\conv\{\pm e_1,...,\pm e_d\}$, where $\{e_1,...,e_d\}\subset\RR^d$ is the standard basis of $\RR^d$,
\item a hyper-simplex $\Delta(d,k)$, that is, the convex hull of all vectors $v\in\{0,1\}^{d+1}$ with exactly $k$ 1-s,
\item a cartesian power of a regular simplex (this includes hypercubes),
\item a demi-cube, that is, the convex hull of all vectors $v\in\{-1,1\}^d$ with~an~even number of 1-s,
\item the $2_{21}$-polytope, also called Schläfli polytope $(d=6)$,
\item the $3_{21}$-polytope, also called Gosset polytope $(d=7)$.
\end{enumerate}
The order of this list agrees with the list of graphs in \cref{res:spectral_distance_regular_graphs}.
\end{theorem}
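Two families from the list are easy to generate explicitly. The sketch below (function names are our own) produces the vertex sets of the hyper-simplices and demi-cubes, following the conventions stated above, and checks that $\Delta(d,1)$ is a regular simplex:

```python
import numpy as np
from itertools import combinations, product
from math import comb

def hypersimplex_vertices(d, k):
    """Vertices of Delta(d,k): all 0/1-vectors of length d+1 with exactly k ones."""
    return [v for v in product([0, 1], repeat=d + 1) if sum(v) == k]

def demicube_vertices(d):
    """Vertices of the demi-cube: (+-1)-vectors of length d with an even number of 1s."""
    return [v for v in product([-1, 1], repeat=d) if v.count(1) % 2 == 0]

# vertex counts: binomial(d+1, k) and 2^(d-1)
assert len(hypersimplex_vertices(4, 2)) == comb(5, 2)
assert len(demicube_vertices(4)) == 2 ** 3

# Delta(d,1) is a regular d-simplex: all vertices pairwise equidistant
V = np.array(hypersimplex_vertices(3, 1), dtype=float)
dists = {round(float(np.linalg.norm(u - v)), 9) for u, v in combinations(V, 2)}
assert len(dists) == 1
```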
Note that the list in \cref{res:distance_transitive_classification} contains many polytopes that are not regular, and contains all regular polytopes except for the 24-cell, 120-cell and 600-cell.
The distance-transitive polytopes thus form a distinct class of remarkably symmetric polytopes which is not immediately related to the class of regular polytopes.
Noteworthy however, is that all the distance-transitive polytopes are \emph{Wythoffian poly\-topes}, that is, they are orbit polytopes of finite reflection groups.
\Cref{fig:distance_transitive_Coxeter} shows the Coxeter-Dynkin diagrams of these polytopes.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/distance_transitive_Coxeter}
\caption{Coxeter-Dynkin diagrams of distance-transitive polytopes.}
\label{fig:distance_transitive_Coxeter}
\end{figure}
\section{Conclusion and open questions}
\label{sec:future}
In this paper we have studied \emph{eigenpolytopes} and \emph{spectral polytopes}.
The former are polytopes constructed from a graph and one of its eigenvalues.
A polytope is spectral if it is an eigenpolytope of its edge-graph.
These are of interest because spectral graph theory then ensures a strong interplay between the combinatorial properties of the edge-graph and the geometric properties of the polytope.
The study of eigenpolytopes and spectral polytopes has left us with many open questions.
Most notably, how to detect spectral polytopes purely from their geome\-try.
We introduced a tool (\cref{res:implies_spectral}), which was sufficient to prove that (most) edge-transitive polytopes are spectral.
We do not know how much more generally it can be applied.
\begin{question}
\label{q:characterization}
Does \cref{res:implies_spectral} already characterize $\theta_2$-spectral polytopes (or even spectral polytopes in general)?
\end{question}
If the answer is affirmative, this would provide a geometric characterization of polytopes that are otherwise defined purely in terms of spectral graph theory.
The result of Izmestiev suggests that polytopes with sufficiently regular geometry are $\theta_2$-spectral: the entry of the matrix $X$ in \cref{res:izmestiev} at index $ij\in E$ can be~expressed as
$$X_{ij}=\frac{\vol(\sigma_i\cap\sigma_j)}{\|v_i\|\|v_j\|\sin\angle(v_i,v_j)},$$
where $\sigma_i$ and $\sigma_j$ are the facets of the polar dual $P^\circ$ that correspond to the vertices $v_i,v_j\in\F_0(P)$.
Because of this formula, it might be actually easier to classify the polar duals of $\theta_2$-spectral polytopes.
An affirmative answer to \cref{q:characterization} would also mean a negative answer to the following:
\begin{question}
\label{q:not_theta_2}
Is there a $\theta_k$-spectral polytope/graph for some $k\not=2$?
\end{question}
The answer is known to be negative for edge-transitive polytopes/graphs (see \cref{res:edge_transitive_spectral_graph}), but unknown in general.
The second-largest eigenvalue $\theta_2$ is special for other reasons too.
Even if a graph is not $\theta_2$-spectral, it seems to still imprint its adjacency information onto the edge-graph of its $\theta_2$-eigenpolytope.
\begin{question}
\label{q:realizing_edges}
Given an edge $ij\in E$ of $G$, if $v_i$ and $v_j$ (as defined in \cref{def:eigenpolytope}) are distinct vertices of the $\theta_2$-eigenpolytope $P_G(\theta_2)$, is then also $\conv\{v_i,v_j\}$ an edge of $P_G(\theta_2)$?
\end{question}
This was proven for distance-regular graphs in \cite{godsil1998eigenpolytopes}, and is not necessarily true for eigenvalues other than $\theta_2$.
All known spectral polytopes are exceptionally symmetric.
It is unclear whether this is true in general.
\begin{question}
\label{q:trivial_symmetry}
Are there spectral polytopes with trivial symmetry group?
\end{question}
An example for \cref{q:trivial_symmetry} must be asymmetric, yet have reasonably large eigenspaces.
Such graphs exist among the distance-regular graphs, but all spectral distance-regular graphs were determined in \cite{godsil1998eigenpolytopes} (see also \cref{res:spectral_distance_regular_graphs}) and turned out to be distance-transitive, \shortStyle{i.e.,}\ highly symmetric.
A clear connection between being spectral and being symmetric is missing.
To emphasize our ignorance, we ask the following:
\begin{question}
\label{q:spectral_non_vertex_transitive}
Can we find more spectral polytopes that are \emph{not vertex-transitive}?
What characterizes them?
\end{question}
The single known spectral polytope that is \emph{not} vertex-transitive is the \emph{rhombic dodecahedron} (see \cref{fig:edge_transitive}).
The fact that it is spectral appears purely accidental: there seems to be no reason for it to be spectral, except that we can explicitly check that it is. For comparison, the closely related \emph{rhombic triacontahedron} is not spectral.
On the other hand, vertex-transitive spectral polytopes might be quite common.
\begin{question}
\label{q:specific_instance}
Let $P\subset\RR^d$ be a polytope with the following properties:
\begin{myenumerate}
\item $P$ is vertex-transitive,
\item $P$ realizes all the symmetries of its edge-graph, and
\item $\Aut(P)$ is irreducible.
\end{myenumerate}
Is $P$ (combinatorially equivalent to) a spectral polytope?
\end{question}
No condition in \cref{q:specific_instance} can be dropped.
If we drop vertex-transitivity,~we could take some polytope whose edge-graph has trivial symmetry and only small eigenspaces. Dropping $(ii)$ leaves the vertex-transitive neighborly polytopes, which are mostly not spectral (except for the simplex).
Dropping $(iii)$ leaves us with the prisms and anti-prisms, whose edge-graphs rarely have eigenspaces of dimension greater than two.
Finally, we wonder whether these spectral techniques can be any help in classifying the edge-transitive polytopes.
\begin{question}
Can we classify the edge-transitive graphs that are spectral, and~by this, the edge-transitive polytopes?
\end{question}
\begin{question}
Can the existence of half-transitive polytopes be excluded by using spectral graph theory (see \cref{sec:arc_transitive})?
\end{question}
\section{Introduction}
\label{sec:introduction}
\emph{Eigenpolytopes} are a construction in the intersection of combinatorics and geometry, using techniques from spectral graph theory.
Eigenpolytopes provide a way~to associate several polytopes to a finite simple graph, one for each eigenvalue of~its adjacency matrix.
A formal definition can be found in \cref{sec:def_eigenpolytope}.
\vspace{0.5em}
\begin{center}
\includegraphics[width=0.7\textwidth]{img/cube_intro_pic}
\end{center}
\vspace{0.5em}
Eigenpolytopes can be applied from two directions:
for the first, one starts~from a given graph, computes its eigenpolytopes, and tries to deduce, from the geometry and combinatorics of these polytopes, something about the original~graph.
For the other direction, one starts with a polytope, asks whether it is an eigenpolytope, and if so, for which graphs, which eigenvalues, and how these relate to the original polytope.
Eigenpolytopes have several interesting geometric and algebraic properties, and establishing that a family of polytopes consists of eigenpolytopes opens up their study to the techniques of spectral graph theory.
For some graphs the connection to their eigenpolytopes is especially strong: it can happen that a graph is the edge-graph of one of its eigenpolytopes, or equivalently, that a polytope is an eigenpolytope of its edge-graph.
Such graphs/polytopes are quite special and we shall call them \emph{spectral}.
For example, all regular polytopes are spectral, but there are many others.
Their properties are not well-understood.
We survey the literature on eigenpolytopes and spectral polytopes.
We establish a technique with which to prove that certain polytopes are spectral polytopes, and we apply it to \emph{edge-transitive} polytopes, that is, polytopes for which the Euclidean symmetry group $\Aut(P)\subset\Ortho(\RR^d)$ acts transitively on the edge set $\F_1(P)$.
As we shall explain, this suffices to prove that an edge-transitive polytope is uniquely determined by its edge-graph and realizes all of its combinatorial symmetries.
A complete classification of the edge-transitive polytopes is not yet known.
However, using results on eigenpolytopes, we are able to give a complete classification of a sub-class of the edge-transitive polytopes, namely, the \emph{distance-transitive polytopes}.
\subsection{Outline of the paper}
\cref{sec:eigenpolytopes} starts with a motivating example, directing the reader towards the definition of the \emph{eigenpolytope} as well as the phenomenon of \emph{spectral} graphs and polytopes.
We include a literature overview for~\mbox{eigenpolytopes} and spectral polytopes.
In \cref{sec:balanced_spectral} we give a first rigorous definition of the notion \enquote{spectral polytope} via \emph{balanced polytopes}, a notion related to rigidity theory.
In \cref{sec:izmestiev} we introduce what is, as of yet, the most powerful tool for proving that~certain polytopes are spectral.
In the final section, \cref{sec:edge_transitive}, we apply this result to \emph{edge-transitive} polytopes.
It is a simple corollary of the previous section that these are $\theta_2$-spectral.
We explore the implications of this finding: edge-transitive polytopes (in dimension $d\ge 4$) are uniquely determined by the edge-graph and realize all of its symmetries.~%
We~discuss sub-classes, such as the arc-, half- and distance-transitive polytopes.
We close with a complete classification of the latter (based on a result of Godsil).
\section{Eigenpolytopes and spectral polytopes}
\label{sec:eigenpolytopes}
\subsection{A motivating example}
\label{sec:example}
Let $G=(V,E)$ be the edge-graph of the cube, with vertex set $V=\{1,...,8\}$,~num\-bers assigned to the vertices as in the figure below.
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{img/cube}
\end{figure}
\noindent
The spectrum of that graph (\shortStyle{i.e.,}\ of its adjacency matrix) is $\{(-3)^1,(-1)^3,1^3,3^1\}$.
Most often, one denotes the largest eigenvalue by $\theta_1$, the second-largest by $\theta_2$, and so on.
In spectral graph theory, there exists the general rule of thumb that the most exciting eigenvalue of a graph is not its largest, but its \emph{second-largest} eigenvalue $\theta_2$ (which is related to the \emph{algebraic connectivity} of $G$).
For the edge-graph of the cube, we have $\theta_2=1$,~of multiplicity \emph{three}.
And here are three linearly independent eigenvectors to $\theta_2$:
\vspace{0.5em}
\begin{center}
\raisebox{-3.3em}{\includegraphics[width=0.23\textwidth]{img/cube_eigenvector}}
\qquad
$
u_1 = \begin{pmatrix}
\phantom+1\\ \phantom+1 \\ \phantom+1 \\ \phantom+1 \\ -1 \\ -1 \\ -1 \\ -1
\end{pmatrix},\quad
u_2 = \begin{pmatrix}
\phantom+1\\ \phantom+1 \\ -1 \\ -1 \\ \phantom+1 \\ \phantom+1 \\ -1 \\ -1
\end{pmatrix},\quad
u_3 = \begin{pmatrix}
\phantom+1\\ -1 \\ \phantom+1 \\ -1 \\ \phantom+1 \\ -1 \\ \phantom+1 \\ -1
\end{pmatrix}.
$
\end{center}
\vspace{0.7em}
\noindent
We can write these more compactly in a single matrix $\Phi\in\RR^{8\x 3}$:
\vspace{0.4em}
\begin{center}
$\Phi= \;\begin{blockarray}{(lll)r
\phantom+1 & \phantom+1 & \phantom+1\;\; &\text{\quad \footnotesize $\leftarrow v_1$}\\
\phantom+1 & \phantom+1 & -1 & \text{\quad \footnotesize $\leftarrow v_2$} \\
\phantom+1 & -1 & \phantom+1 & \text{\quad \footnotesize $\leftarrow v_3$} \\
\phantom+1 & -1 & -1 & \text{\quad \footnotesize $\leftarrow v_4$} \\
-1 & \phantom+1 & \phantom+1 & \text{\quad \footnotesize $\leftarrow v_5$} \\
-1 & \phantom+1 & -1 & \text{\quad \footnotesize $\leftarrow v_6$} \\
-1 & -1 & \phantom+1 & \text{\quad \footnotesize $\leftarrow v_7$} \\
-1 & -1 & -1 & \text{\quad \footnotesize $\leftarrow v_8$} \\
\end{blockarray}.$
\qquad
\raisebox{-4.2em}{\includegraphics[width=0.4\textwidth]{img/cube_coordinates}}
\end{center}
We now take a look at the rows of that matrix, of which it has exactly eight.~These rows are naturally assigned to the vertices of $G$ (assign $i\in V$ to the $i$-th row of~$\Phi$), and each row can be interpreted as~a~vec\-tor in $\RR^3$.
If we place each vertex $i\in V$ at the position $v_i\in\RR^3$ given by the $i$-th row of $\Phi$, we find that this embeds the graph $G$ \emph{exactly} as the skeleton of a cube (see the figure above).
In other words: if we compute the convex hull of the $v_i$, we get back the polyhedron from which we started.
What a coincidence, isn't it?
This example was specifically chosen for its nice numbers, but in~fact,~the same works out for many other polytopes, inclu\-ding all the regular polytopes~in all dimensions.
One probably learns to appreciate this magic when suddenly in~need of the vertex coordinates of some not so nice polytope, say, the regular dodecahedron or the 120-cell.
With this technique in the toolbox, these coordinates are just one eigenvector-computation away (we included a short Mathematica script in \cref{sec:appendix_mathematica}).
Note also that we never specified~the~dim\-en\-sion of~the~embed\-ding; it just so happened that the second-largest eigenvalue has the right multiplicity.
This phenomenon definitely deserves an explanation.
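The whole computation takes only a few lines. The following numpy sketch (an independent illustration, separate from the Mathematica script in \cref{sec:appendix_mathematica}) recovers the cube, up to orientation, from its edge-graph:

```python
import numpy as np
from itertools import combinations

# Edge-graph of the cube: vertices 0..7 as bitstrings, adjacent iff they
# differ in exactly one bit.
n = 8
A = np.array([[1.0 if bin(i ^ j).count("1") == 1 else 0.0
               for j in range(n)] for i in range(n)])

theta, U = np.linalg.eigh(A)            # ascending eigenvalues, orthonormal columns
assert np.allclose(sorted(theta), [-3, -1, -1, -1, 1, 1, 1, 3])

Phi = U[:, np.isclose(theta, 1.0)]      # eigenpolytope matrix for theta_2 = 1  (8 x 3)
v = Phi                                  # row i = coordinates v_i of vertex i

# The 8 points are the vertices of a cube: the shortest pairwise distances
# occur exactly between graph-adjacent vertices.
d = {(i, j): np.linalg.norm(v[i] - v[j]) for i, j in combinations(range(n), 2)}
dmin = min(d.values())
assert {(i, j) for (i, j), t in d.items() if np.isclose(t, dmin)} \
       == {(i, j) for i, j in combinations(range(n), 2) if A[i, j] == 1}
```

Any orthonormal eigenbasis works: different choices rotate or reflect the resulting point set, but never distort it.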
\subsubsection*{On the choice of eigenvectors}
\label{sec:choice_of_eigenvectors}
One might object that the chosen eigenvectors $u_1, u_2$ and $u_3$ look suspiciously cherry-picked, and that we might not get such a nice result if we chose just any eigenvectors.
And this is true.
For an appropriate choice of these vectors we can obtain, instead of a cube, a cuboid or a parallelepiped.
In fact, we can obtain any \emph{linear} transformation of the cube.
\emph{But} we can also get \emph{only} linear transformations, and nothing else.
The reason is the following well-known fact from linear algebra:
\begin{theorem}
\label{res:same_column_span}
Two matrices $\Phi,\Psi\in\RR^{n\x d}$ have the same column span, \shortStyle{i.e.,}~$\Span \Phi=\Span \Psi$, if and only if their rows are related by an invertible linear transformation, \shortStyle{i.e.,}\ $\Phi=\Psi T$ for some $T\in\GL(\RR^d)$.
\end{theorem}
\noindent
In our case, the column span is the $\theta_2$-eigenspace, and the rows are the coordinates of the $v_i$.
We say that any two polytopes constructed in this way are \emph{linearly equivalent}.
The only notable property of the chosen basis in the example is that the vectors $u_1, u_2$ and $u_3$ are orthogonal and of the same length.
Any other choice of such a basis of~the~\mbox{$\theta_2$-eigen}\-space (\shortStyle{e.g.}\ an orthonormal basis) would also have given a cube, but reoriented, rescaled and probably with less nice coordinates.
For details on how this choice relates to the orientation, see \shortStyle{e.g.}\ \cite[Theorem 3.2]{winter2019geometry}.
\subsection{Eigenpolytopes}
\label{sec:def_eigenpolytope}
We compile our example into a definition.
\begin{definition}\label{def:eigenpolytope}
Start with a graph $G=(V,E)$, an eigenvalue $\theta\in\Spec(G)$ thereof, as well as an orthonormal basis $\{u_1,...,u_d\}\subset\RR^n$ of the $\theta$-eigenspace.
We define the \emph{eigenpolytope matrix} $\Phi\in\RR^{n\x d}$ as the matrix in which the $u_i$ are the columns:\phantom{mm}
\begin{equation}
\label{eq:eigenpolytope_matrix}
\Phi :=\begin{pmatrix}
\mid & & \mid \\
u_1 & \!\!\cdots\!\!\! & u_d \\
\mid & & \mid
\end{pmatrix}=
\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_1^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\; \\
& \vdots & \\[0.4ex]
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_n^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\;
\end{pmatrix}.
\end{equation}
Let $v_i\in\RR^d$ denote the $i$-th row of $\Phi$.
The polytope
$$P_G(\theta):=\conv\{v_i\mid i\in V\}\subset\RR^d$$
is called \emph{$\theta$-eigenpolytope} (or just \emph{eigenpolytope}) of $G$.
\end{definition}
For later use we define the \emph{eigenpolytope map}
\begin{equation}
\label{eq:eigenpolytope_map}
\phi:V\ni i\mapsto v_i\in\RR^d
\end{equation}
that assigns to each vertex $i\in V$ the $i$-th row of the eigenpolytope matrix.
Note that the basis $\{u_1,...,u_d\}\subset\Eig_G(\theta)$ in \cref{def:eigenpolytope} is explicitly chosen~to be an \emph{orthonormal basis}.
This is not strictly necessary, but it is convenient from a geometric point of view:
a different choice of basis gives the same~poly\-tope, but with a different orientation rather than, say, transformed~by~a~general linear transformation.
This preserves metric properties and is closer to how polytopes are usually considered up to rigid motions.
We can also reasonably speak of \emph{the} $\theta$-eigenpolytope, as any two differ only by orientation.
With this terminology in place, our observation in the example of \cref{sec:example}~can be summarized as \enquote{the cube is the $\theta_2$-eigenpolytope of its edge-graph}, or alternatively as \enquote{the cube-graph is the edge-graph of its $\theta_2$-eigenpolytope}.
Here is~a~depic\-tion of all the eigenpolytopes of the cube-graph, one for each eigenvalue:
\vspace{0.5em}
\begin{center}
\includegraphics[width=0.7\textwidth]{img/cube_eigenpolytopes}
\end{center}
\noindent
We observe that the phenomenon from \cref{sec:example} only happens for $\theta_2$.
In general, the $\theta_1$-eigenpolytope of a regular graph will always be a single point (which is why we rarely care about the largest eigenvalue).
Also, whenever a graph is bipartite, the eigenpolytope to the smallest eigenvalue is 1-dimensional, hence a line segment.
We are now free to compute the eigenpolytopes of all kinds of graphs, \mbox{including} graphs which are not the edge-graph of any polytope (so-called \emph{non-polytopal} graphs).
It is then hardly surprising that no eigenpolytope of such a graph has the original graph as its edge-graph.
But even if we start from a polytopal graph, we are not guaranteed to find an eigen\-polytope that has the initial graph as its edge-graph (\shortStyle{e.g.}\ the edge-graph of the triangular prism has no eigenvalue of multiplicity three, hence no eigenpolytope of dimension three; see also \cref{ex:prism}).
Equivalently, if we start with a polytope, it~is~not guaranteed that this polytope is the eigenpolytope of its edge-graph (or even combinatorially equivalent to it).
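The claim about the triangular prism is quickly verified: its edge-graph (two triangles joined by three vertical edges) has spectrum $\{3^1,1^1,0^2,(-2)^2\}$, so no eigenvalue has multiplicity three. A short sketch (the vertex numbering is our own choice):

```python
import numpy as np
from collections import Counter

# Edge-graph of the triangular prism: triangles {0,1,2} and {3,4,5},
# joined by the vertical edges i -- i+3.
n = 6
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3), (1, 4), (2, 5)]:
    A[i, j] = A[j, i] = 1

theta = np.linalg.eigvalsh(A)
mult = Counter(round(t, 9) for t in theta)   # eigenvalue -> multiplicity
assert max(mult.values()) == 2               # no eigenvalue of multiplicity 3
```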
\begin{example}
\label{ex:neighborly_1}
A \emph{neighborly polytope} is a polytope whose edge-graph is the complete graph $K_n$.
The spectrum of $K_n$ is $\{(-1)^{n-1},(n-1)^1\}$.
One checks that~the~eigenpolytopes are a single point (for $\theta_1=n-1$) and the regular simplex of dimension $n-1$ (for $\theta_2=-1$).
Consequently, no neighborly polytope other than a simplex is combinatorially equivalent to an eigenpolytope of its edge-graph.
\end{example}
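The computation in \cref{ex:neighborly_1} can be replayed numerically for $K_5$: the $\theta_2$-eigenpolytope consists of five pairwise equidistant points, a regular $4$-simplex. A minimal sketch:

```python
import numpy as np
from itertools import combinations

n = 5
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K_5
theta, U = np.linalg.eigh(A)             # ascending eigenvalues
assert np.allclose(theta, [-1, -1, -1, -1, 4])

Phi = U[:, np.isclose(theta, -1.0)]      # theta_2 = -1, multiplicity 4
v = Phi                                   # 5 points in R^4

# all pairwise distances agree: a regular 4-simplex
d = [np.linalg.norm(v[i] - v[j]) for i, j in combinations(range(n), 2)]
assert np.allclose(d, d[0])
```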
That a graph and its eigenpolytope translate into each other as nicely as in the case of the cube in \cref{sec:example} is a very special phenomenon, to which we shall give a name:
a polytope (or graph) for which this happens, will be called \emph{spectral}\footnote{There was at least one previous attempt to give a name to this phenomenon, namely, in \cite{licata1986surprising}, where it was called \emph{self-reproducing}.}.
We cannot formalize this definition right away, as there is some subtlety we have to discuss first (we give a formal definition in \cref{sec:balanced_spectral}, see \cref{def:spectral}).
\begin{example}
\label{ex:pentagon}
The image below shows two spectral realizations of the 5-cycle $C_5$\footnote{Spectral realizations are essentially defined like eigenpolytopes, assigning coordinates $v_i\in\RR^d$ to each vertex $i\in V$ (as in \cref{def:eigenpolytope}), but without taking the convex hull. Instead, one draws the edges between adjacent vertices.}.
\begin{center}
\includegraphics[width=0.4\textwidth]{img/C5}
\end{center}
The left image~shows the realization to the second-largest eigenvalue $\theta_2$, the right image shows the realization to the smallest eigenvalue $\theta_3$.
In both cases, the convex hull (the actual eigenpolytope) is a regular pentagon, whose edge-graph is $C_5$ again.
But we see that only in the case of $\theta_2$ are the edges of the graph properly mapped onto edges of the pentagon.
While it is true that the 5-cycle $C_5$ is the edge-graph of its $\theta_3$-eigenpolytope, the adjacency information gets scrambled in the process:
while, say, vertex 1 and 2 are adjacent in $C_5$, their images $v_1$ and $v_2$ do not form an edge in the $\theta_3$-eigenpolytope.
We do not want to call this \enquote{spectral}, as the adjacency information is not preserved.
The same can happen in higher dimensions too, \shortStyle{e.g.}\ with $G$ being the edge-graph of the dodecahedron:
\begin{center}
\includegraphics[width=0.55\textwidth]{img/dodecahedron_eigenpolytopes_2}
\end{center}
\end{example}
\begin{observation}
\label{res:theta_2_observations}
From studying many examples, there are two interesting observations to be made, both concerning $\theta_2$, neither of which is rigorously proven:
\begin{myenumerate}
\item
It appears as if only $\theta_2$ can give rise to spectral polytopes/graphs.
At~least, all known examples are $\theta_2$-spectral (see also \cref{q:not_theta_2}). Some considerations on nodal domains make this plausible, but no proof is known in the general case (a proof is known in certain special cases, see \cref{res:edge_transitive_spectral_graph}).
\item
If $i\in V$ is a vertex of $G$, then $v_i$ is not necessarily a vertex of every~eigenpolytope ($v_i$ might end up in the interior of $P_G(\theta)$ or one of its faces).
And even if $v_i,v_j\in\F_0(P_G(\theta))$ are distinct vertices and $ij\in E$ is an edge of $G$, it is still not necessarily true that $\conv\{v_i,v_j\}$ is also an edge of the~eigen\-polytope (as seen in \cref{ex:pentagon}).
However, this seems to be no concern in the case $\theta_2$.
It appears as if all edges of $G$ become edges of the $\theta_2$-eigenpolytope, even if $G$ is not spectral (under mild assumptions on the end vertices of the edge).
In other words, the adjacency information of $G$ gets imprinted on the edge-graph of the $\theta_2$-eigenpolytope, whether $G$ is spectral or not.
This is known to be true only in the case of distance-regular graphs \cite[Theorem 3.3 (b)]{godsil1998eigenpolytopes}, but unproven~in general (see also \cref{q:realizing_edges}).
\end{myenumerate}
\end{observation}
\subsection{Literature}
Eigenpolytopes were first introduced by Godsil \cite{godsil1978graphs} in 1978.
Godsil proved the existence of a group homomorphism $\Aut(G)\to\Aut(P_G(\theta))$, \shortStyle{i.e.,}\ any combinatorial symmetry of the graph translates into a Euclidean symmetry of the polytope.
From that, he deduces results about the combinatorial symmetry group of the original graph.
We can say more about this group homomorphism: for every $\theta\in\Spec(G)$ we have the following.
\begin{theorem}[\!\cite{godsil1978graphs}, Theorem 2.2]
\label{res:realizing_symmetries}
If $\sigma\in\Aut(G)\subseteq\Sym(n)$ is a symmetry of $G$, and $\Pi_\sigma\in\Perm(\RR^n)$ is the associated permutation matrix, then
$$T_\sigma:=\Phi^{\Tsymb} \Pi_\sigma \Phi \;\in\; \Ortho(\RR^d),\qquad (\text{$\Phi$ is the eigenpolytope matrix})$$
is a Euclidean symmetry of the eigenpolytope $P_G(\theta)$ that also permutes the $v_i$ as~prescribed by $\sigma$, \shortStyle{i.e.,}\ $T_\sigma\circ \phi = \phi\circ\sigma$, or $T_\sigma v_i = v_{\sigma(i)}$ for all $i\in V$.
\end{theorem}
This result is also proven (more generally for spectral graph realizations) in \cite[Corollary 2.9]{winter2020symmetric}.
\cref{res:realizing_symmetries} explicitly uses that eigenpolytopes are defined using an \emph{orthonormal} basis rather than an arbitrary basis of the eigenspace, to conclude that the symmetries $T_\sigma$ are \emph{orthogonal} matrices.
Also, the statement of \cref{res:realizing_symmetries} is not too satisfying in general, as it can happen that non-trivial~symmetries of $G$ are mapped to the identity transformation.
We do not necessarily have $\Aut(G)\cong\Aut(P_G(\theta))$.
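\cref{res:realizing_symmetries} can be checked numerically. The following is a minimal sketch (not part of the original text), using \texttt{numpy} and choosing, ad hoc, the cycle $C_5$, its second-largest eigenvalue $\theta_2$, and the rotation $\sigma(i)=i+1 \pmod 5$:

```python
import numpy as np

# Numerical sketch of the symmetry theorem for C5 and theta_2.
n = 5
A = np.zeros((n, n))
for i in range(n):                          # adjacency matrix of the 5-cycle
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1

evals, evecs = np.linalg.eigh(A)            # orthonormal eigenvectors
theta2 = sorted(set(np.round(evals, 8)))[-2]
Phi = evecs[:, np.isclose(evals, theta2)]   # eigenpolytope matrix (n x d), d = 2

# sigma: rotation i -> i+1 (mod n); permutation matrix with Pi e_i = e_{sigma(i)}
Pi = np.zeros((n, n))
for i in range(n):
    Pi[(i + 1) % n, i] = 1

T = Phi.T @ Pi @ Phi                        # T_sigma = Phi^T Pi_sigma Phi
print(np.allclose(T.T @ T, np.eye(2)))      # T_sigma is orthogonal -> True
print(all(np.allclose(T @ Phi[i], Phi[(i + 1) % n])
          for i in range(n)))               # T_sigma v_i = v_{sigma(i)} -> True
```

Note that the check is basis-independent: since $\Pi_\sigma$ commutes with $A$, it also commutes with the projection $\Phi\Phi^{\Tsymb}$ onto the eigenspace, which is all the computation uses.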
Several authors construct the eigenpolytopes of certain famous graphs or graph families.
Powers \cite{powers1986petersen} computed the eigenpolytopes of the \emph{Petersen graph}, which he termed the \emph{Petersen polytopes} (one of which will appear as a distance-transitive polytope in \cref{sec:distance_transitive}).
The same author also investigates eigenpolytopes of general distance-regular graphs in \cite{powers1988eigenvectors}.
In \cite{mohri1997theta_1}, Mohri described the face structure of the \emph{Hamming polytopes}, the $\theta_2$-eigenpolytopes of the Hamming graphs.
Seemingly unknown to the author, these polytopes can also be described as the Cartesian powers of regular simplices (also distance-transitive, see \cref{sec:distance_transitive}).
There exists a wonderful enumeration of the eigenpolytopes (actually, spectral realizations) of the edge-graphs of all uniform polyhedra in \cite{blueSpectral}. Sadly, this write-up was never published formally.
This provides empirical evidence that every uniform polyhedron
has a spectral realization.
The same question might then be asked for uniform polytopes in higher dimensions.
Rooney \cite{rooney2014spectral} used the combinatorial structure of the eigenpolytope (the size of their facets) to deduce statements about the size of cocliques in a graph.
In \cite{padrol2010graph}, the authors investigate how common graph operations translate to operations on their eigenpolytopes.
Particular attention was given to the eigenpolytopes of distance-regular graphs \cite{powers1988eigenvectors,godsil1998eigenpolytopes,godsil1995euclidean}.
It was shown that in a $\theta_2$-eigenpolytope of a distance-regular graph~$G$,~every edge of $G$ corresponds to an edge of the eigenpolytope \cite{godsil1998eigenpolytopes}.
Consequently, $G$~is a spanning subgraph of the edge-graph of the eigenpolytope.
It remains open if the same holds for less regular graphs, \shortStyle{e.g.}\ 1-walk regular graphs or arc-transitive graphs (see also \cref{q:realizing_edges}).
The observation that some polytopes are the eigenpolytopes
of their edge-graph (\shortStyle{i.e.,}\ they are \emph{spectral} in our terminology) was made repeatedly, \shortStyle{e.g.}\ in \cite{godsil1995euclidean} and \cite{licata1986surprising}.
In the latter, this was shown for all regular polytopes, excluding the exceptional 4-dimensional polytopes, the 24-cell, 120-cell and 600-cell.
This gap was filled in \cite{winter2020symmetric} via general considerations concerning spectral realizations of arc-transitive graphs.
In sum, all regular polytopes are known to be $\theta_2$-spectral.
The next major result for spectral polytopes was obtained by Godsil in \cite{godsil1998eigenpolytopes}, where he was able to classify all $\theta_2$-spectral distance-regular graphs (see also \cref{sec:distance_transitive}):
\begin{theorem}[\!\cite{godsil1998eigenpolytopes}, Theorem 4.3]
\label{res:spectral_distance_regular_graphs}
Let $G$ be distance-regular.
If $G$ is $\theta_2$-spectral,~then $G$ is one of the following:
\begin{enumerate}[label=$(\text{\roman*}\,)$]
\item a cycle graph $C_n,n\ge 3$,
\item the edge-graph of the dodecahedron,
\item the edge-graph of the icosahedron,
\item the complement of a disjoint union of edges,
\item a Johnson graph $J(n,k)$,
\item a Hamming graph $H(d,q)$,
\item a halved $n$-cube $\nicefrac12 Q_n$,
\item the Schläfli graph, or
\item the Gosset graph.
\end{enumerate}
\end{theorem}
A second look at this list reveals a remarkable \enquote{coincidence}: while the generic distance-regular graph has few or no symmetries, all the graphs in this list are highly symmetric, in fact, \emph{distance-transitive} (a definition will be given in \cref{sec:distance_transitive}).
It is a wide-open question whether being spectral is a property reserved solely for highly symmetric graphs and polytopes (see also \cref{q:trivial_symmetry}).
There is only a single known spectral polytope that is not vertex-transitive (see also \cref{rem:edge_not_vertex} and \cref{q:spectral_non_vertex_transitive}).
\section{Balanced and spectral polytopes}
\label{sec:balanced_spectral}
In this section we give a second approach to \emph{spectral polytopes} that circumvents the mentioned subtleties.
For the rest of the paper, let $P\subset\RR^d$ denote a full-dimensional polytope in~dimen\-sion $d\ge 2$ with vertices $v_1,...,v_n\in\F_0(P)$.
We distinguish the \emph{skeleton} of $P$, which is the graph with vertex set $\F_0(P)$ and edge set $\F_1(P)$, from the~\mbox{\emph{edge-graph}}~$G_P=(V,E)$ of $P$, which is isomor\-phic to the skeleton, but has vertex set $V=\{1,...,n\}$. The isomorphism will be denoted
\begin{equation}
\label{eq:vertex_map}
\psi:V\ni i\mapsto v_i\in\F_0(P),
\end{equation}
and we call it the \emph{skeleton map}.
\subsection{Balanced polytopes}
\begin{definition}
The polytope $P$ is called \emph{$\theta$-balanced} (or just \emph{balanced}) for some~real number $\theta\in\RR$, if
\begin{equation}
\label{eq:balanced}
\sum_{\mathclap{j\in N(i)}} v_j = \theta v_i,\quad\text{for all $i\in V$},
\end{equation}
where $N(i):=\{j\in V\mid ij\in E\}$ denotes the \emph{neighborhood} of a vertex $i\in V$.
\end{definition}
One way to interpret the balancing condition \eqref{eq:balanced} is as a kind of self-stress~con\-dition on the skeleton of $P$ (the term \enquote{balanced} is motivated by this).
For each edge $ij\in E$, the vector $v_j-v_i$ is parallel to the edge $\conv\{v_i,v_j\}$.
If $P$ is $\theta$-balanced, at each vertex $i\in V$ we have the equation
$$\sum_{\mathclap{j\in N(i)}} (v_j-v_i) = \sum_{\mathclap{j\in N(i)}} v_j - \deg(i) v_i = \big(\theta-\deg(i)\big)v_i.$$
This equation can be interpreted as two forces that cancel each other out: on the left, a contracting force along each edge (proportional only to the length of that edge), and on the right, a force repelling each vertex away from the origin (proportional to the distance of that vertex from the origin, and proportional to $\theta-\deg(i)$).
A second interpretation of \eqref{eq:balanced} is via spectral graph theory.
Define the matrix
\begin{equation}
\label{eq:arrangement_matrix}
\Psi :=
\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_1^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\; \\
& \vdots & \\[0.4ex]
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_n^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\;
\end{pmatrix}
\end{equation}
in which the $v_i$ are the rows.
This matrix will be called the \emph{arrangement matrix} of $P$.
Note that the skeleton map $\psi$ assigns to $i\in V$ the $i$-th row of $\Psi$.
Since $P\subset\RR^d$ is full-dimensional,~we have $\rank \Psi=d$.
\begin{observation}\label{res:eigenvalue}
Suppose that $P$ is $\theta$-balanced.
The defining equation \eqref{eq:balanced} can be equivalently written as the matrix equation $A\Psi=\theta \Psi$.
In this form,~it~is~apparent that $\theta$ is an eigenvalue of the adjacency matrix $A$, and the columns of $\Psi$ are $\theta$-eigenvectors, or $\Span\Psi\subseteq\Eig_{G_P}(\theta)$.
\end{observation}
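As an illustrative sanity check of \cref{res:eigenvalue} (not part of the original text), consider the square centered at the origin: its edge-graph is $C_4$, and it is $0$-balanced. A minimal \texttt{numpy} verification:

```python
import numpy as np

# The square with vertices (±1,0), (0,±1): rows of the arrangement matrix Psi.
Psi = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])

A = np.zeros((4, 4))
for i in range(4):                  # C4: vertex i adjacent to i±1 (mod 4)
    A[i, (i + 1) % 4] = A[i, (i - 1) % 4] = 1

theta = 0.0
# The balancing condition as the matrix equation A Psi = theta Psi:
print(np.allclose(A @ Psi, theta * Psi))         # -> True
# 0 is indeed an eigenvalue of C4 (spectrum {-2, 0, 0, 2}):
print(np.round(np.linalg.eigvalsh(A), 8))
```

Here each row of $A\Psi$ is the sum of the two neighboring vertices, which vanishes for the centered square, matching $\theta=0$.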
We have seen that for a balanced polytope, the columns of $\Psi$ must be eigenvectors.
But they are not necessarily a complete set of~$\theta$-eigen\-vectors, \shortStyle{i.e.,}\ they do not necessarily span the whole eigenspace.
\begin{example}
\label{ex:neighborly}
Every centered neighborly polytope $P$ is balanced, but unless it is a simplex, it is not spectral (the latter was shown in \cref{ex:neighborly_1}).
Centered~means that
$$\sum_{i\in V} v_i = 0.$$
Since $P$ is neighborly, we have $G_P=K_n$ and $N(i)=V\setminus\{i\}$ for all $i\in V$. Therefore
$$\sum_{\mathclap{j\in N(i)}} v_j = \sum_{\mathclap{j\in V}} v_j - v_i = -v_i,\quad\text{for all $i\in V$}.$$
And indeed, $K_n$ has spectrum $\{(-1)^{n-1},(n-1)^1\}$.
So $P$ is $(-1)$-balanced.
\end{example}
The last example shows that every neighborly polytope can be made balanced merely by translating it.
More generally, many polytopes have a realization (of~their combinatorial type) that is balanced.
But other polytopes do not:
\begin{example}
\label{ex:prism}
Let $P\subset\RR^3$ be a triangular prism.
The spectrum of the edge-graph of $P$ is $\{(-2)^2,0^2,1^1,3^1\}$.
Note that there~is~no eigenvalue of multiplicity greater than~two.
In particular, we cannot choose three linearly independent eigenvectors to a common eigenvalue.
But if $P$ were balanced, then \cref{res:eigenvalue} tells us that the columns of the arrangement matrix $\Psi$ would be three eigenvectors to the same eigenvalue (linearly independent, since $\rank \Psi=3$), which is not possible.
And so, no realization of $P$ can be balanced.
\end{example}
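The spectrum claimed in \cref{ex:prism} is easy to confirm numerically; the following sketch (not part of the original text, labels of the prism vertices chosen ad hoc) checks that no eigenvalue has multiplicity three or more:

```python
import numpy as np

# Edge-graph of the triangular prism: two triangles {0,1,2}, {3,4,5},
# joined by the vertical edges (0,3), (1,4), (2,5).
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3),
         (0, 3), (1, 4), (2, 5)]
for i, j in edges:
    A[i, j] = A[j, i] = 1

evals = np.round(np.linalg.eigvalsh(A), 8)
vals, mult = np.unique(evals, return_counts=True)
print(dict(zip(vals, mult)))   # eigenvalue -> multiplicity, i.e. {(-2)^2, 0^2, 1^1, 3^1}
print(mult.max() < 3)          # no eigenspace can host a rank-3 arrangement matrix
```

Since $\rank\Psi = 3$ would require a $3$-dimensional eigenspace, the maximal multiplicity $2$ rules out any balanced realization, exactly as argued above.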
\subsection{Spectral graphs and polytopes}
In the extreme case, when the columns of $\Psi$ span the whole eigenspace, we can finally give a compact definition of what we want~to~consider as \emph{spectral}:
\begin{definition}
\label{def:spectral}\quad
\begin{myenumerate}
\item A polytope $P$ is called \emph{$\theta$-spectral} (or just \emph{spectral}), if its arrangement matrix $\Psi$ satisfies $\Span \Psi=\Eig_{G_P}(\theta)$.
\item A graph is said to be \emph{$\theta$-spectral} (or just \emph{spectral}) if it is (isomorphic to) the edge-graph of~a $\theta$-spectral polytope.
\end{myenumerate}
\end{definition}
This definition is now perfectly compatible with our initial motivation for the~term \enquote{spectral} in \cref{sec:def_eigenpolytope}.
\begin{lemma}\quad
\label{res:naive}
\begin{myenumerate}
\item If a polytope $P$ is $\theta$-spectral, then $P$ is linearly equivalent to the $\theta$-eigenpoly\-tope of its edge-graph (see also \cref{res:naive_polytope}).
\item If a graph $G$ is $\theta$-spectral, then $G$ is (isomorphic to) the edge-graph of its~$\theta$-eigen\-polytope (see also \cref{res:naive_graph}).
\end{myenumerate}
\end{lemma}
In both cases, the converse is \emph{not} true.
This is intentional, to avoid the problems mentioned in \cref{ex:pentagon}.
Both statements will be proven below by formulating a more technical condition that is then actually equivalent to being spectral.
\begin{proposition}
\label{res:naive_polytope}
A polytope $P$ is $\theta$-spectral if and only if it is linearly equivalent to the $\theta$-eigenpolytope of its edge-graph via some linear map $T\in\GL(\RR^d)$ for which the following diagram commutes:
\begin{equation}
\label{eq:diagram_polytope}
\begin{tikzcd}
P \arrow[r, "T"] & P_{G_P}(\theta) \\
& G_P \arrow[lu, "\psi"] \arrow[u, "\phi"']
\end{tikzcd}
\end{equation}
where $\phi$ and $\psi$ denote the eigenpolytope map and skeleton map respectively.
\end{proposition}
\begin{proof}
By definition, the $\theta$-eigenpolytope of $G_P$ satisfies $\Span \Phi=\Eig_{G_P}(\theta)$, where $\Phi$ is the corresponding eigenpolytope matrix.
Now, by definition, $P$ is $\theta$-spectral if and only if $\Span \Psi = \Eig_{G_P}(\theta)$, where $\Psi$ is its arrangement matrix.
But by \cref{res:same_column_span}, $\Phi$ and $\Psi$ have the same span if and only if their rows are related by some invertible linear map $T\in\GL(\RR^d)$, that is, $\Psi T=\Phi$, or $T\circ \psi=\phi$. The latter expresses exactly that \eqref{eq:diagram_polytope} commutes.
\end{proof}
This also proves \cref{res:naive} $(i)$.
\begin{proposition}
\label{res:naive_graph}
A graph $G$ is $\theta$-spectral if and only if the eigenpolytope map~$\phi\:$ $V(G)\to \RR^d$ provides an isomorphism between $G$ and the skeleton of its $\theta$-eigenpoly\-tope $P_G(\theta)$.
\begin{proof}
Suppose first that $G$ is $\theta$-spectral.
Then there is a $\theta$-spectral polytope $Q$~with edge-graph~$G_Q=G$ and skeleton map $\psi\:V(G_Q)\to\F_0(Q)$.
By \cref{res:naive} $(i)$, $Q$ is linearly equivalent to $P_G(\theta)$ via some linear map $T\in\GL(\RR^d)$.
By \cref{res:naive_polytope}, the eigenpolytope map satisfies $\phi=T\circ \psi$.
Since $T$ induces an isomorphism between the skeleta of $Q$ and $P_G(\theta)$, and $\psi$ is an isomorphism between $G$ and the skeleton of $Q$, we find that $\phi$ must be an isomorphism between $G$ and the skeleton of $P_G(\theta)$.
This shows one direction.
For the converse, suppose that $\phi$ is an isomorphism.
Set $P:=P_G(\theta)$ and let~$G_P$ be its edge-graph with skeleton map $\psi\:V(G_P)\to \F_0(P)$.
Then $\sigma:=\psi^{-1}\circ\phi$ is a graph isomorphism between $G$ and $G_P$.
So, since $G\cong G_P$, each eigenpolytope of $G$ is also an eigenpolytope of $G_P$.
We can therefore choose $P_{G_P}(\theta)=P_G(\theta)$, with corresponding eigenpolytope map $\phi':=\sigma^{-1}\circ\phi$.
In sum, the outer square in the following diagram commutes:
\begin{center}
\begin{tikzcd}
G \arrow[r, "\sigma"] \arrow[d, "\phi"'] & G_P \arrow[d, "\phi'"] \arrow[ld, "\psi"'] \\
\mathllap{P:=\,}P_G(\theta) \arrow[r, "\Id"'] & P_{G_P}(\theta)
\end{tikzcd}
\end{center}
Also, by construction of $\sigma$, the upper triangle commutes.
In conclusion, the lower triangle must commute as well, which is exactly \eqref{eq:diagram_polytope} with $T=\Id$. This proves that $P$~is $\theta$-spectral via \cref{res:naive_polytope}.
Since $G$ is isomorphic to $G_P$, $G$ is $\theta$-spectral.
\end{proof}
\end{proposition}
\noindent
This also proves \cref{res:naive} $(ii)$.
\iffalse
\begin{proposition}
\label{res:naive_graph}
A graph $G$ is $\theta$-spectral if and only if it is isomorphic to the edge-graph~of its $\theta$-eigenpolytope $P:=P_G(\theta)$ via some isomorphism $\sigma\:V(G)\to V(G_P)$ for which the following diagram commutes:
\begin{equation}
\label{eq:diagram}
\begin{tikzcd}
G \arrow[rd, "\phi"'] \arrow[r, "\sigma"] & G_P \arrow[d, "\psi"] \\
& P_G(\theta)\mathrlap{\,:=P }
\end{tikzcd}
\end{equation}
\begin{proof}
First, suppose that $G$ is $\theta$-spectral.
By definition, there exists a $\theta$-spectral polytope $Q$ with edge-graph $G_Q=G$, and let $\psi_Q\:V(G_Q)\to\F_0(Q)$ be the map \eqref{eq:vertex_map} (we use subscripts since there will be further such maps).
Since $Q$ is $\theta$-spectral, \cref{res:naive} $(i)$ states that $Q$ and $P_{G_Q}(\theta)=P_G(\theta)= P$ are linearly equivalent via some map $T\in\GL(\RR^d)$.
Let $\psi_P\:V(G_P)\to\F_0(P)$ be~the corresponding map \eqref{eq:vertex_map} for $P$.
We can then define the isomorphism as $\sigma:=\psi_Q\circ T\circ \psi_P^{-1}$ (all involved maps are invertible and preserve adjacency).
The following diagram then commutes:
\begin{center}
\begin{tikzcd}
\mathllap{G=\,}G_Q \arrow[r, "\sigma"] \arrow[d, "\psi_Q"'] & G_P \arrow[d, "\psi_P"] \\
Q \arrow[r, "T"'] & P\mathrlap{\,=P_G(\theta)}
\end{tikzcd}
\end{center}
We can now insert the diagonal arrow $\phi\: V(G)\to\RR^d\supseteq P_G(\theta)$ given by \eqref{eq:eigenpolytope_map}.
\begin{center}
\begin{tikzcd}
\mathllap{G=\,}G_Q \arrow[r, "\sigma"] \arrow[d, "\psi_Q"'] \arrow[rd, "\phi"] & G_P \arrow[d, "\psi_P"] \\
Q \arrow[r, "T"'] & P\mathrlap{\,:= P_G(\theta)}
\end{tikzcd}
\end{center}
Since $Q$ is spectral, the the lower triangle commutes by \cref{res:naive_polytope}.
Since also the outer square commutes, we find that the upper triangle (which is exactly \eqref{eq:diagram}) commutes as well, and we are done.
For the other direction, suppose that there is an isomorphism $\sigma:V(G)\to V(G_P)$ so that \eqref{eq:diagram} commutes.
Since $G$ and $G_P$ are isomorphic, $P=P_G(\theta)$ can be consider as a $\theta$-eigenpolytope of $G_P$ as well, thus $P=P_{G_P}(\theta)$.
If $\phi$ is the map \eqref{eq:eigenpolytope_map} for the eigenpolytope of $G$, then because $G_P$ uses the same eigenpolytope, $\phi\circ\sigma^{-1}$ is the respective map \eqref{eq:eigenpolytope_map} for the eigenpolytope of $G_P$.
The following diagram then commutes:
\begin{center}
\begin{tikzcd}
G \arrow[rd, "\phi"'] \arrow[r, "\sigma"] & G_P \arrow[rd, "\phi\circ\sigma^{-1}"] & \\
& \mathllap{P=:\,}P_G(\theta) \arrow[r, "\Id"'] & P_{G_P}(\theta)
\end{tikzcd}
\end{center}
We can now insert the vertical arrow $\psi\: V(G_P)\to \F_0(P)$ given by \eqref{eq:vertex_map}.
\begin{center}
\begin{tikzcd}
G \arrow[rd, "\phi"'] \arrow[r, "\sigma"] & G_P \arrow[rd, "\phi\circ\sigma^{-1}"] \arrow[d, "\psi"] & \\
& \mathllap{P=:\,}P_G(\theta) \arrow[r, "\Id"'] & P_{G_P}(\theta)
\end{tikzcd}
\end{center}
The left triangle is exactly \eqref{eq:diagram}, hence commutes by assumption.
The outer parallelogram commutes by construction, and so we find that the right triangle commutes as well.
This triangle is exactly \eqref{eq:diagram_polytope} from \cref{res:naive_polytope} with $T=\Id$.
This shows that $P
$ is $\theta$-spectral, and since $G\cong G_P$ is the edge-graph of $P$, we found that $G$ is $\theta$-spectral, and we are done.
\end{proof}
\end{proposition}
\fi
\iffalse
\begin{lemma}
If $P$ is linearly equivalent to the $P_{G_P}(\theta)$, and this linear map $T\in\GL(\RR^d)$ satisfies $\Phi=\Psi T$, then $P$ is $\theta$-spectral.
\end{lemma}
\begin{lemma}
If $G$ is isomorphic to the edge-graph of $P_G(\theta)=:Q$, and the isomorphism $\sigma\:V(G)\to V(G_Q)$ saitsies $\Phi=\Pi_\sigma \Psi$, then $G$ is $\theta$-spectral.
\end{lemma}
\begin{lemma}
\label{res:naive_polytope}
If $P$ is $\theta$-spectral, then $P$ is linearly equivalent to the $\theta$-eigenpolytope of its edge-graph. The converse is not true.
However, $P$ is $\theta$-spectral if and only if $P$ is linearly equivalent to $P_{G_P}(\theta)$ via~some map $T\in\GL(\RR^d)$, and $T$ additionally satisfies $T \phi(i) = \psi(i)$ for all $i\in V$, where~$\phi\:$ $V(G_P)\to\RR^d$ and $\psi\:V(G_P)\to\F_0(P)$ are the maps defined in \eqref{eq:eigenpolytope_map} resp.\ \eqref{eq:vertex_map}.
\begin{proof}
By definition, the $\theta$-eigenpolytope of $G_P$ satisfies $\Span \Phi=\Eig_{G_P}(\theta)$, where $\Phi$ is as defined in \eqref{eq:eigenpolytope_matrix}.
Now, by definition, $P$ is $\theta$-spectral if and only if $\Span \Psi = \Eig_{G_P}(\theta)$, where $\Psi$ is its arrangement matrix \eqref{eq:arrangement_matrix}.
But by \cref{res:same_column_span}, $\Phi$ and $\Psi$ can have the same span if and only of their rows are related by some invertible linear map $T\in\GL(\RR^d)$, that is, $T\phi(i)=\psi(i)$ for all $i\in V$.
A counterexample for the initial converse was given in \cref{ex:pentagon}.
\end{proof}
\end{lemma}
\begin{corollary}
If $P$ is $\theta$-spectral, then so is $G_P$ and $P_{G_P}(\theta)$.
\end{corollary}
The respective statement for graphs is the following:
\begin{lemma}
\label{res:naive_graph}
If $G$ is $\theta$-spectral, then $G$ is isomorphic to the edge-graph of its $\theta$-eigenpolytope. The converse is not true.
However, $G$ is $\theta$-spectral if and only if it is isomorphic to the edge-graph of its $\theta$-eigenpolytope, and the isomorphism is given by $\phi\:V\to\RR^d$~as~de\-fined in \eqref{eq:eigenpolytope_map}.
\begin{proof}
Let $Q:=P_G(\theta)$ be the $\theta$-eigenpolytope of $G$.
Then we try to prove $G\cong G_Q$.
If $G$ is $\theta$-spectral, then there is a $\theta$-spectral polytope $P$ with edge-graph $G_P\cong G$.
By this, $P_{G_P}(\theta)\cong P_G(\theta)=Q$.
By \cref{res:naive_polytope}, $P$ is linearly~equivalent to~$P_{G_P}(\theta)$, hence linearly equivalent to $Q$.
In particular, $G_P\cong G_Q$, and so we found $G\cong G_P\cong G_Q$.
A counterexample for the initial converse was given in \cref{ex:pentagon}.
\end{proof}
\end{lemma}
\fi
\iffalse
\begin{corollary}
\quad
\begin{myenumerate}
\item If a polytope $P$ is $\theta$-spectral, then it is the $\theta$-eigenpolytope of its edge-graph (up to some invertible linear transformation).
\item If a graph $G$ is $\theta$-spectral, then it is (isomorphic to) the edge-graph of its $\theta$-eigenpolytope.
\item If a graph $G$ is $\theta$-spectral, then its $\theta$-eigenpolytope is $\theta$-spectral.
\end{myenumerate}
\end{corollary}
\begin{corollary}
\label{res:naive_definition}
If $P$ is $\theta$-spectral, then it is the $\theta$-eigenpolytope of its edge-graph (up to invertible linear transformation)
\begin{proof}
If $P$ is $\theta$-spectral with arrangement matrix $M'$, then by \mbox{\cref{def:spectral} $(i)$} we have $\Span M'=\Eig_{G_P}(\theta)$.
Let further $M$ be the matrix \eqref{eq:eigenpolytope_matrix} used in \cref{def:eigenpolytope} to define the $\theta$-eigenpolytope of $G_P$.
By definition we have $\Span M=\Eig_{G_P}(\theta)$.
By \cref{res:same_column_span}, since $M$ and $M'$ have the same column span, their rows are related by an invertible linear transformation.
These rows are the vertices of $P$ and $P_{G_P}(\theta)$ respectively, finishing the proof.
\end{proof}
\end{corollary}
The converse of \cref{res:naive_definition} is not true.
This is intentional, since we do not want confiurations as in \cref{ex:pentagon} to be called \enquote{spectral}.
\begin{corollary}
\label{res:spectral_graph_naive}
If a graph $G$ is $\theta$-spectral, then $\phi\:V\ni i\mapsto v_i\in\RR^d$ (as \mbox{introduced} in \cref{def:eigenpolytope}) is an isomorphism between $G$ and the edge-graph of $P_G(\theta)$.
That includes
\begin{myenumerate}
\item for all $i\in V$ holds $v_i\in\F_0(P_G(\theta))$.
\item for distinct $i,j\in V$ holds $v_i\not= v_j$.
\item $ij\in E$ if and only if $\conv\{v_i,v_j\}\in\F_1(P_G(\theta))$.
\end{myenumerate}
\begin{proof}
Since $G$ is $\theta$-spectral, it is (isomorphic to) the edge-graph of some $\theta$-spectral polytope $P$.
Recall that $n$ denotes the number of vertices of $G$.
Since $P_G(\theta)$ is the convex hull of $v_1,...,v_n$, $P_G(\theta)$ has at most $n$ vertices, and $\phi$ is surjective.
Since $P_G(\theta)$ is combinatorially equivalent to $P$ (by \cref{res:naive_definition}), $P_G(\theta)$ has exactly $n$ vertices, and $\phi$ must also be injective.
This also shows $(i)$ and $(ii)$.
\end{proof}
\end{corollary}
Note that not every graph that is isomorphic to the edge-graph of its $\theta$-eigenpoly\-tope is also $\theta$-spectral (again, consider \cref{ex:pentagon}).
However, if the isomorphism is the one from \cref{res:spectral_graph_naive}, then this conclusion is valid.
\iffalse
\begin{lemma}
If $P\subset\RR^d$ is a (full-dimensional) polytope, then the following are~equivalent:
\begin{myenumerate}
\item $P$ is $\theta$-spectral,
\item $P$ is $\theta$-balanced and the eigenvalue $\theta$ has multiplicity $d$.
\end{myenumerate}
If $G$ is a graph, then the following are equivalent:
\begin{myenumerate}
\setcounter{enumi}{2}
\item $G$ is $\theta$-spectral,
\item the $\theta$-eigenpolytope of $G$ is $\theta$-spectral with edge-graph $G$,
\item $G$ is isomorphic to the edge-graph of its $\theta$-eigenpolytope, with isomorphism $V\ni i\mapsto v_i\in\RR^d$ (as constructed in \cref{def:eigenpolytope}).
\end{myenumerate}
\begin{proof}
By \cref{res:eigenvalue}, the balancing equation \eqref{eq:balanced} is equivalent to $\Span M\subseteq\Eig_G(\theta)$.
Recall that the rank~of~$M$~is~$d$.
Now, $\theta$ having multiplicity $d$ (\shortStyle{i.e.,}\ $(ii)$) is equivalent to $\dim\Eig_G(\theta)=d=\rank M$. This is then equivalent to $M=\Eig_G(\theta)$ which is $(i)$.
If $G$ is $\theta$-spectral then it is (isomorphic to) the edge-graph of \emph{some} $\theta$-spectral polytope $P$.
But \cref{res:naive_definition} states that $P$ is (up to linear transformation) the $\theta$-eigenpolytope of its edge-graph, or, since isomorphic, the $\theta$-eigenpolytope of $G$.
\end{proof}
\end{lemma}
\fi
\fi
It is also possible to give a definition of spectral graphs purely in terms of graph theory, without any explicit reference to polytopes:
\begin{lemma}
\label{res:spectral_2}
A graph $G$ is $\theta$-spectral if and only if it satisfies both of the following:
\begin{myenumerate}
\item for each vertex $i\in V$ there exists a $\theta$-eigenvector $u=(u_1,...,u_n)\in\Eig_G(\theta)$~whose single largest component is $u_i$, or equivalently,
$$\Argmax_{k\in V} u_k = \{i\}.$$
\item any two vertices $i,j\in V$ form an edge $ij\in E$ of $G$ if and only~if there is a $\theta$-eigenvector $u=(u_1,...,u_n)\in\Eig_G(\theta)$ whose two largest components are exactly $u_i$ and $u_j$, or equivalently,
$$\Argmax_{k\in V} u_k = \{i,j\}.$$
\end{myenumerate}
\end{lemma}
This characterization of spectral graphs can be interpreted as follows: a spectral graph can be reconstructed from knowing a single eigenspace, rather than, say, all eigenspaces and their associated eigenvalues.
\begin{proof}[Proof of \cref{res:spectral_2}]
Let $P_G(\theta)\subset\RR^d$ be the $\theta$-eigenpolytope of $G$ with eigenpolytope matrix $\Phi$ and eigenpolytope map $\phi\:V\ni i\mapsto v_i\in\RR^d$.
Since $\Span\Phi=\Eig_G(\theta)$, the eigenvectors $u=(u_1,...,u_n)\in\Eig_G(\theta)$ are exactly the vectors that can be written as $u=\Phi x$ for some $x\in\RR^d$.
If then $e_k\in\RR^n$ denotes the $k$-th standard basis vector, we have
$$u_k = \<u,e_k\> = \<\Phi x, e_k\> = \<x,\Phi^{\Tsymb}\! e_k\> = \<x, v_k\>.$$
Therefore, there is a $\theta$-eigenvector $u=(u_1,...,u_n)\in\Eig_G(\theta)$ with
$\Argmax_{k\in V} u_k = \{i_1,...,i_m\}$
if and only if there is a vector $x\in\RR^d$ with
$$\Argmax_{k\in V} \<x,v_k\> = \{i_1,...,i_m\}.$$
But this last line is exactly what it means for $\conv\{v_{i_1},...,v_{i_m}\}$ to be a face of $P_G(\theta)$ $=\conv\{v_1,...,v_n\}$ (and $x$ is a normal vector of that face).
In this light, we can interpret $(i)$ as stating that $v_1,...,v_n$ form $n$ distinct vertices of $P_G(\theta)$, and $(ii)$ as stating that $\conv\{v_i,v_j\}$ is an edge of $P_G(\theta)$ if and only if $ij\in E$.
And this means exactly that $\phi$ is a graph isomorphism between $G$ and the skeleton of $P_G(\theta)$.
By \cref{res:naive_graph}, this is equivalent to $G$ being $\theta$-spectral.
\end{proof}
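Both conditions of \cref{res:spectral_2} can be tested numerically. The following sketch (not part of the original text) does so for the $\theta_2$-spectral cycle $C_5$, using the identity $u_k=\langle x,v_k\rangle$ from the proof with the choices $x=v_i$ resp.\ $x=v_i+v_j$:

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):                          # adjacency matrix of C5
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
evals, evecs = np.linalg.eigh(A)
theta2 = sorted(set(np.round(evals, 8)))[-2]
Phi = evecs[:, np.isclose(evals, theta2)]   # rows are the points v_i

def argmax_set(u, tol=1e-9):
    """Indices of the (numerically) largest components of u."""
    return frozenset(np.flatnonzero(u > u.max() - tol))

# (i): the eigenvector u = Phi v_i has its unique largest entry at position i.
print(all(argmax_set(Phi @ Phi[i]) == {i} for i in range(n)))   # -> True

# (ii): among all pairs {i,j}, only the edges of C5 occur as a two-element
# Argmax of some eigenvector u = Phi (v_i + v_j).
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}
found = {argmax_set(Phi @ (Phi[i] + Phi[j]))
         for i in range(n) for j in range(i + 1, n)}
print({s for s in found if len(s) == 2} == edges)               # -> True
```

For a non-edge $\{i,i+2\}$ of $C_5$ the maximum is attained at the single intermediate vertex $i+1$, so the pair itself never appears, in accordance with condition $(ii)$.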
\iffalse
If the graph satisfies $(i)$ and $(ii)$, then the map $\sigma:=\psi^{-1}\circ\phi:V(G)\to V(G_P)$ is bijective because of $(i)$, and a graph isomorphism because of $(ii)$.
This isomorphism makes \eqref{eq:diagram} commute by definition, and so $G$ is $\theta$-spectral by \cref{res:naive_graph}.
Conversely, if $G$ is $\theta$-spectral, then it is isomorphic to the edge-graph $G_P$ of its $\theta$-eigenpolytope via some isomorphism $\sigma\: V(G)\to V(G_P)$ that makes \eqref{eq:diagram} commute.
We first note that that implies that $P_G(\theta)$ has exactly the vertices $v_1,...,v_n$, which means that $G$ satisfies $(i)$.
Since $\sigma$ and $\psi$ are graph isomorphisms, so is $\psi\circ\sigma$, and since \eqref{eq:diagram} commutes, also $\phi$.
This means $ij\in E$ if and only if $\conv\{\phi(i),\phi(k)\}=\conv\{v_i,v_j\}\in P_G(\theta)$.
This is equivalent to $(ii)$, and we are done.
Suppose that $G$ is $\theta$-spectral, then
We show first that there is a $\theta$-eigenvectors $u=(u_1,...,u_n)\in\Eig_G(\theta)$ with
$$\Argmax_{k\in V} u_k=\{i_1,...,i_m\}$$
if and only if $\{v_{i_1},...,v_{i_m}\}$ (as defined in \cref{def:eigenpolytope}) is the vertex set of a face~of the $\theta$-eigenpolytope of $G$.
Recall that a set $\{v_{i_1},...,v_{i_m}\}\subseteq \F_0(P)$ of vertices is the vertex set of a face of $P$ if and only if there is a vector $x\in\RR^d$ (a normal vector) with
$$\Argmax_{k\in V} \<x,v_k\> = \{i_1,...,i_m\}.$$
Let $M$ be the arrangement matrix of $P$, set $u=(u_1,...,u_n):=Mx$, and let $e_k\in\RR^n$ be the $k$-th standard basis vector.
Then
$$\<x,v_k\> = \<x, M^{\Tsymb}\! e_k\> = \<M x, e_k\> =\<u,e_k\> = u_k.$$
By $u=Mx$, the $x\in\RR^d$ are in one-to-one correspondence with the $u\in\Span M=\Eig_G(\theta)$, which are exactly the $\theta$-eigenvectors of $G$.
So $\{v_{i_1},...,v_{i_m}\}$ forms a face of $P$ if and only if
$$(*)\quad \Argmax_{k\in V} u_k = \Argmax_{k\in V} \<u,v_k\> = \{i_1,...,i_m\},$$
Condition $(i)$ and $(ii)$ are now the version of $(*)$ for vertices and edges of $P$, which correspond to vertices and edges of $G$ via the isomorphism $\phi$.
\fi
In practice, to reconstruct a spectral graph from an eigenspace, one can proceed as follows: given a subspace $U\subseteq\RR^n$ (the claimed eigenspace),
\begin{myenumerate}
\item choose any basis $u_1,...,u_d\in\RR^n$ of $U$,
\item build the matrix $\Phi=(u_1,...,u_d)\in\RR^{n\x d}$ in which the $u_i$ are the columns,
\item define $v_i$ as the $i$-th \emph{row} of $\Phi$,
\item define $P:=\conv\{v_1,...,v_n\}\subset\RR^d$ as the convex hull of the $v_i$,
\item the reconstructed graph $G=G_P$ is then the edge-graph of $P$.
\end{myenumerate}
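The steps above can be sketched in a few lines of \texttt{numpy} (this sketch is not part of the original text; we choose $C_5$ with its $\theta_2$-eigenspace, where the reconstruction must return $C_5$ itself, and exploit that in this $2$-dimensional example the convex-hull ordering of points on a circle is just their angular ordering):

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):                          # adjacency matrix of C5
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
evals, evecs = np.linalg.eigh(A)
theta2 = sorted(set(np.round(evals, 8)))[-2]

U = evecs[:, np.isclose(evals, theta2)]     # (i)+(ii): basis of U as columns of Phi
V = U                                       # (iii): the points v_i are the rows
order = np.argsort(np.arctan2(V[:, 1], V[:, 0]))  # (iv): hull = angular order here
reconstructed = {frozenset((order[k], order[(k + 1) % n]))   # (v): edge-graph of P
                 for k in range(n)}
print(reconstructed == {frozenset((i, (i + 1) % n)) for i in range(n)})  # -> True
```

In higher dimensions step $(v)$ requires a genuine convex-hull computation (\shortStyle{e.g.}\ \texttt{scipy.spatial.ConvexHull}), but the pipeline is the same.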
\subsection{Properties of spectral polytopes}
We discuss two properties of spectral polytopes that make them especially interesting in polytope theory.
\subsubsection*{Reconstruction from the edge-graph}
\label{sec:spectral_reconstruction}
The edge-graph of a general polytope carries little information about that polytope, \shortStyle{i.e.,}\ given only the edge-graph, one often cannot reconstruct the polytope from it (up to combinatorial equivalence).
Often, one cannot even deduce the dimension of the polytope from its edge-graph.
Reconstruction might be possible in certain special cases, as \shortStyle{e.g.}\ for 3-dimensional polyhedra, simple polytopes or zonotopes.
The spectral polytopes provide another such class.
\begin{theorem}\label{res:reconstruction}
A $\theta_k$-spectral polytope is uniquely~determined by its edge-graph up to invertible linear transformations.
\end{theorem}
The proof is simple:
every $\theta_k$-spectral polytope is linearly equivalent to the~$\theta_k$-eigenpolytope of its edge-graph (by \cref{res:naive} $(i)$).
Our definition of the \mbox{$\theta_k$-eigen}\-polytope already suggests an explicit procedure to construct it (a script for this~is included in \cref{sec:appendix_mathematica}).
This property of spectral polytopes appears more exciting when applied to graph classes that are not obviously spectral (see \cref{sec:edge_transitive}).
\subsubsection*{Realizing symmetries of the edge-graph}
\label{sec:spectral_symmetries}
Every Euclidean symmetry of a polytope induces a combinatorial symmetry on its edge-graph.
The converse is far from true.
Think, for example, about a rectangle that is not a square.
Even worse, it can happen that a polytope does not even have a realization that realizes all the symmetries of its edge-graph (\shortStyle{e.g.}\ the polytope constructed in \cite{bokowski1984combinatorial}).
We have previously discussed (in \cref{res:realizing_symmetries}) the existence of a homomorphism $\Aut(G)\to\Aut(P_G(\theta))$ between the symmetries of a graph $G$ and the symmetries of its eigenpolytopes.
There are two caveats:
\begin{myenumerate}
\item this is not necessarily an isomorphism, and
\item it says nothing about the symmetries of the edge-graph of $P_G(\theta)$, as this graph need not be isomorphic to $G$.
\end{myenumerate}
Still, it suffices to make statements of the following form:
if $G$ is vertex-transitive, then so are all its eigenpolytopes.
This might not work for other transitivities, such as edge-transitivity.
This is no concern for spectral graphs/polytopes:
\begin{theorem} \quad
\label{res:symmetries}
\begin{myenumerate}
\item If $G$ is $\theta$-spectral, then $P_G(\theta)$ realizes all its symmetries, which includes
$$\Aut(G)\cong\Aut(P_G(\theta))$$
%
via the map $\sigma\mapsto T_\sigma$ given in \cref{res:realizing_symmetries}, as well as that $T_\sigma$ permutes the vertices and edges of $P_G(\theta)$ exactly as $\sigma$ permutes the vertices and edges of the graph $G$.
\item If $P$ is $\theta$-spectral, then $P$ has a realization that realizes all the symmetries~of its edge-graph, namely, the $\theta$-eigenpolytope of its edge-graph.
\end{myenumerate}
%
\end{theorem}
This is mostly straightforward, with large parts already addressed in \mbox{\cref{res:realizing_symmetries}}.
The major difference is that for a spectral graph $G$ the points $v_1,...,v_n\in\RR^d$ are pairwise distinct and are exactly the vertices of the eigenpolytope.
The statement from \cref{res:realizing_symmetries} that $T_\sigma$~per\-mutes the $v_i$ as prescribed by $\sigma$ then becomes the statement that $T_\sigma$ permutes the \emph{vertices} as prescribed by $\sigma$, and hence also the edges.
Also, since the $v_i$ are distinct, no non-trivial symmetry $\sigma$ can result in a trivial $T_\sigma$, which makes $\sigma\mapsto T_\sigma$ a group \emph{isomorphism}.
For part $(ii)$ merely recall that the eigenpolytope $P_{G_P}(\theta)$ is indeed a realization of $P$ by \cref{res:naive} $(i)$.
The major consequence is that for spectral graphs/polytopes also more complicated types of symmetry, such as edge-transitivity, translate between a polytope and its edge-graph (see also \cref{sec:edge_transitive}).
\iffalse
But, we have to be careful: we cannot expect a general statement about symmetries of spectral polytopes, as \shortStyle{e.g.}\ a rectangle and a square are both spectral (since they are linear transformations of each other), but one is apparently more symmetric, if by symmetry we understand a rigid motion.
There are two solutions: either, generalizing symmetries to general invertible linear transformations, or, restricting the polytopes about which we talk.
Both perspectives are equivalent, and we decided to choose the second.
Let us call a polytope \emph{normalized} if $M^{\Tsymb}\! M=\Id$ for its arrangement matrix $M$, that is, the columns of $M$ form a orthonormal basis of $\Span M$.
By \cref{res:same_column_span}, every polytope is just one invertible linear transformation away from being normalized.
Also, the way in which we defined eigenpolytopes in \cref{def:eigenpolytope} ensures that they are normalized.
We have the following:
\begin{theorem}
\label{res:realizing_symmetries}
A normalized spectral polytope realizes all the combinatorial symmetries of its edge-graph as Euclidean symmetries.
More precisely, we have the following: if $\sigma\in\Aut(G_P)\subseteq\Sym(V)$ is a combinatorial symmetry of the edge-graph, and $\Pi_\sigma\in\Perm(\RR^n)$ is the associated permutation matrix, then
$$T_\sigma:=M^{\Tsymb} \Pi_\sigma M \;\in\; \Ortho(\RR^d)$$
is a Euclidean symmetry of $P$ that permutes its vertices exactly as $\sigma$ permutes the vertices of $G_P$ (that is $T_\sigma v_i = v_{\sigma(i)}$).
\end{theorem}
We should mention that this result is more generally true for all eigenpolytopes of a graph and was already proven in \cite[Theorem 2.2]{godsil1978graphs}.
Another proof in the context of spectral graph realizations can be found \cite[Corollary 2.9]{winter2020symmetric}.
As a general consequence we have that if $G$ is vertex-transitive, then so are all its eigenpolytopes.
This does not extend to other kinds of symmetries, \shortStyle{e.g.}\ edge-transitivity.
However, if $G$ is $\theta$-spectral, then the symmetries of $G$ and its~$\theta$-eigenpolytope translate into each other one-to-one.
For example, $G$ is edge-transitive if and only if $P_G(\theta)$ is.
We make use of this in later sections.
\fi
\section{The Theorem of Izmestiev}
\label{sec:izmestiev}
We introduce our most powerful tool so far for proving that certain polytopes are $\theta_2$-spectral.
For this, we make use of a more general theorem of Izmestiev \cite{izmestiev2010colin}, first proven in the context of the Colin de Verdière graph invariant.
The proof of this theorem requires techniques from convex geometry, most notably mixed volumes, which we do not address here.
We need to introduce some terminology.
As before, let $P\subset\RR^d$ denote a full-dimensional polytope of dimension $d\ge 2$, with~edge-graph $G_P=(V,E),V=\{1,...,n\}$ and vertices $v_i\in \F_0(P),i\in V$.
Recall that the \emph{polar dual} of $P$ is the polytope
$$P^\circ:=\{x\in\RR^d\mid \<x,v_i\>\le 1\text{ for all $i\in V$}\}.$$
We can replace the $1$-s in this definition by variables $c=(c_1,...,c_n)$ to obtain
$$P^\circ(c):=\{x\in\RR^d\mid\<x,v_i\>\le c_i\text{ for all $i\in V$}\}.$$
The usual polar dual is then $P^\circ=P^\circ(1,...,1)$.
\begin{figure}[h!]
\includegraphics[width=0.85\textwidth]{img/P_dual_2}
\caption{Visualization of $P^\circ(c)$ for different values of $c\in\RR^n$.}
\end{figure}
In the following, $\vol(\free)$ denotes the volume of convex sets in $\RR^d$ (\shortStyle{w.r.t.}\ the usual Lebesgue measure).
Note that the function $\vol(P^\circ(c))$ is differentiable in $c$, and so we can compute partial derivatives \shortStyle{w.r.t.}\ the components of $c$.
\begin{theorem}[Izmestiev \cite{izmestiev2010colin}, Theorem 2.4]\label{res:izmestiev}
Define a matrix $X\in\RR^{n\x n}$ with~compo\-nents
$$X_{ij}:=-\frac{\partial^2 \vol(P^\circ(c))}{\partial c_i\partial c_j}\Big|_{c=(1,...,1)}.$$
The matrix $X$ has the following properties:
\begin{myenumerate}
%
\item $X_{ij}< 0$ whenever $ij\in E(G_P)$,
\item $X_{ij}=0$ whenever $ij\not\in E(G_P)$,
\item $X\Psi=0$ (where $\Psi$ is the arrangement matrix of $P$),
\item $X$ has a unique negative eigenvalue, and this eigenvalue is simple,
\item $\dim\ker X=d$.
\end{myenumerate}
\end{theorem}
One can view the matrix $X$ as some kind of adjacency matrix of~a vertex- and edge-weighted version of $G_P$.
Part $(iii)$ states that the vertices $v_i$ satisfy a weighted form of the balancing condition \eqref{eq:balanced} with eigenvalue zero.
Since $\rank \Psi=d$, part $(v)$ states that $\Span \Psi$ is already the whole 0-eigenspace.
And part $(iv)$ states that zero is the second smallest eigenvalue of $X$.
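For intuition, $X$ can be approximated by central finite differences of $\vol(P^\circ(c))$. The following sketch (our illustration, not the procedure of \cite{izmestiev2010colin}) does this for the square with vertices $(\pm 1,\pm 1)$, whose edge-graph is the $4$-cycle; for $c$ near $(1,...,1)$ the vertices of $P^\circ(c)$ are the intersections of consecutive facet lines $\<x,v_i\>=c_i$, and properties $(i)$--$(v)$ can then be checked numerically:

```python
import numpy as np

# Square P with vertices in cyclic order; for c near (1,...,1) the dual
# P°(c) is the quadrilateral bounded by the four lines <x, v_i> = c_i.
V = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
n = len(V)

def dual_area(c):
    # vertex of P°(c) between consecutive facets i and i+1 (Cramer's rule)
    pts = []
    for i in range(n):
        v, w = V[i], V[(i + 1) % n]
        det = v[0] * w[1] - v[1] * w[0]
        pts.append(((c[i] * w[1] - c[(i + 1) % n] * v[1]) / det,
                    (v[0] * c[(i + 1) % n] - w[0] * c[i]) / det))
    # shoelace formula for the area of the quadrilateral
    return 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                         - pts[(i + 1) % n][0] * pts[i][1] for i in range(n)))

def X_entry(i, j, h=1e-3):
    # X_ij = -d^2 vol(P°(c)) / dc_i dc_j at c = (1,...,1), via central differences
    def f(di, dj):
        c = np.ones(n)
        c[i] += di
        c[j] += dj
        return dual_area(c)
    if i == j:
        return -(f(h, 0) - 2 * f(0, 0) + f(-h, 0)) / h**2
    return -(f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h * h)

X = np.array([[X_entry(i, j) for j in range(n)] for i in range(n)])
```

For this example one finds $X=-\tfrac12 A$, with $A$ the adjacency matrix of the $4$-cycle: negative on edges, zero on non-edges, $X\Psi=0$, one simple negative eigenvalue, and a $2$-dimensional kernel.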
\begin{theorem}\label{res:implies_spectral}
Let $X\in\RR^{n\x n}$ be the matrix defined in \cref{res:izmestiev}.
If we have
\begin{myenumerate}
\item $X_{ii}$ is independent of $i\in V(G_P)$, and
\item $X_{ij}$ is independent of $ij\in E(G_P)$,
\end{myenumerate}
then $P$ is $\theta_2$-spectral.
\begin{proof}
By assumption there are $\alpha,\beta\in\RR$ so that $X_{ii}=\alpha$ for all vertices~$i\in$ $V(G_P)$, and $X_{ij}=\beta$ for all edges $ij\in E(G_P)$ (we have $\beta<0$ by \cref{res:izmestiev} $(i)$).
We can write this as
$$X=\alpha \Id + \beta A\quad\implies\quad (*)\;A=\frac\alpha\beta \Id+\frac1\beta X,$$
where $A$ is the adjacency matrix of $G_P$.
By \cref{res:izmestiev} $(iv)$ and $(v)$, the matrix $X$ has second smallest eigenvalue zero of multiplicity $d$.
By \cref{res:izmestiev} $(iii)$, the columns of $\Psi$ are eigenvectors for this eigenvalue.
Since $\rank \Psi=d$, these span the whole eigenspace, and so $\Span \Psi$ is the 0-eigenspace of $X$.
By $(*)$ the eigenvalues of $A$ are the eigenvalues of $X$, but scaled by $1/\beta$ and shifted by $\alpha/\beta$. Since $1/\beta <0$, the second-\emph{smallest} eigenvalue of $X$ gets mapped onto the second-\emph{largest} eigenvalue of $A$.
Therefore, $A$ (and also $G_P$) has second-largest eigenvalue $\theta_2=\alpha/\beta$ of multiplicity $d$, and $\Span \Psi$ is the corresponding eigenspace.
By definition, $P$ is then the $\theta_2$-eigenpolytope of $G_P$ and is therefore~$\theta_2$-spectral.
\end{proof}
\end{theorem}
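The eigenvalue bookkeeping in this proof is easy to check numerically. In the sketch below we take the $4$-cycle together with the toy values $\alpha=0$, $\beta=-\tfrac12$ (our choice for illustration) and verify that $(*)$ maps the second-smallest eigenvalue of $X$ to the second-largest eigenvalue $\theta_2=\alpha/\beta$ of $A$, with matching multiplicities:

```python
import numpy as np

n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0   # adjacency matrix of the 4-cycle

alpha, beta = 0.0, -0.5        # assumed toy values; beta < 0 as in the proof
X = alpha * np.eye(n) + beta * A

x_vals = np.sort(np.linalg.eigvalsh(X))   # ascending: -1, 0, 0, 1
a_vals = np.sort(np.linalg.eigvalsh(A))   # ascending: -2, 0, 0, 2

theta2 = a_vals[-2]            # second-largest eigenvalue of A
```

Since $\beta<0$, the affine map $\theta\mapsto\alpha+\beta\theta$ reverses the order of the spectrum, which is exactly why the second-\emph{smallest} eigenvalue of $X$ lands on the second-\emph{largest} eigenvalue of $A$.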
It is unclear whether \cref{res:implies_spectral} already characterizes $\theta_2$-spectral polytopes, or even spectral polytopes in general (see also \cref{q:characterization}).
\section{Edge-transitive polytopes}
\label{sec:edge_transitive}
We apply \cref{res:implies_spectral} to edge-transitive polytopes, that is, to polytopes for~which the Euclidean symmetry group $\Aut(P)\subset\Ortho(\RR^d)$ acts transitively on the edge set $\F_1(P)$.
No classification of edge-transitive polytopes is known.
Some \mbox{edge-transitive} polytopes are listed in \cref{sec:classification}.
Despite the name of this section, we are actually going to address polytopes that are simultaneously vertex- and edge-transitive.
This is not a huge deviation from the title: as shown in \cite{winter2020polytopes}, edge-transitive polytopes in dimension $d\ge 4$ are always also vertex-transitive, and the exceptions in lower dimensions are few (a continuous family of $2n$-gons for each $n\ge 2$, and two exceptional polyhedra).
\cref{res:implies_spectral} can be directly applied to simultaneously vertex- and edge-transitive polytopes, and so we have:
\begin{corollary}
\label{res:vertex_edge_transitive_cor}
A simultaneously vertex- and edge-transitive polytope is $\theta_2$-spectral.
\end{corollary}
We collect all the notable consequences in the following theorem:
\begin{theorem}\label{res:edge_vertex_transitive}
If $P\subset\RR^d$ is simultaneously vertex- and edge-transitive, then
\begin{myenumerate}
\item $\Aut(P)\subset\Ortho(\RR^d)$ is irreducible as a matrix group.
\item $P$ is uniquely determined by its edge-graph up to scale and orientation.\footnote{This shows that $P$ is \emph{perfect}, \shortStyle{i.e.,}\ is the unique maximally symmetric realization of its combinatorial type. See \cite{gevay2002perfect} for an introduction to perfect polytopes.}
\item $P$ realizes all the symmetries of its edge-graph.
\item if $P$ has edge length $\ell$ and circumradius $r$, then
%
\begin{equation}
\label{eq:circumradius}
\frac{\ell}r = \sqrt{\frac{2\lambda_2}{\deg(G_P)}} = \sqrt{2\Big(1-\frac{\theta_2}{\deg(G_P)}\Big)},
\end{equation}
%
where $\deg(G_P)$ is the vertex degree of $G_P$, and $\lambda_2=\deg(G_P)-\theta_2$ denotes its second smallest Laplacian eigenvalue.
\item if $\alpha$ is the dihedral angle of the polar dual $P^\circ$, then
%
\begin{equation}
\label{eq:dihedral_angle}
\cos(\alpha)=-\frac{\theta_2}{\deg(G_P)}.
\end{equation}
\end{myenumerate}
\begin{proof}
The complete proof of $(i)$ and $(ii)$ has to be postponed until \cref{sec:rigidity} (see \cref{res:edge_transitive_rigid}).
Concerning $(ii)$, \cref{res:vertex_edge_transitive_cor} and \cref{res:reconstruction} already imply that $P$ is determined by its edge-graph up to \emph{invertible linear transformations}, but not necessarily only up to scale and orientation.
Part $(iii)$ follows from \cref{res:symmetries}.
Parts $(iv)$ and $(v)$ were proven (in a more general setting) in \cite[Proposition 4.3]{winter2020symmetric}.
This applies literally to $(iv)$.
For $(v)$, note the following: if $\sigma_i\in\F_{d-1}(P^\circ)$ is the facet of the polar dual $P^\circ$ that corresponds to the vertex $v_i\in\F_0(P)$, then the dihedral angle between $\sigma_i$ and $\sigma_j$ is $\pi-\angle(v_i,v_j)$.
The latter expression was proven in \cite{winter2020symmetric} to agree with \eqref{eq:dihedral_angle}.
\end{proof}
\end{theorem}
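Both formulas can be sanity-checked on the cube, whose edge-graph $Q_3$ has $\deg(G_P)=3$ and $\theta_2=1$ (a small sketch; the coordinates $\{\pm1\}^3$ are our choice):

```python
import itertools
import math
import numpy as np

# the cube [-1,1]^3 and its edge-graph Q_3 (vertices adjacent iff they
# differ in exactly one coordinate)
verts = [np.array(v) for v in itertools.product((-1, 1), repeat=3)]
n = len(verts)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if np.sum(verts[i] != verts[j]) == 1:
            A[i, j] = 1.0

deg = 3
theta2 = np.sort(np.linalg.eigvalsh(A))[-2]     # second-largest eigenvalue: 1

# (iv): predicted ell/r vs. the actual cube, which has ell = 2 and r = sqrt(3)
ratio_formula = math.sqrt(2 * (1 - theta2 / deg))
ratio_geom = 2 / math.sqrt(3)

# (v): predicted dihedral angle of the polar dual, the regular octahedron
alpha_deg = math.degrees(math.acos(-theta2 / deg))
```

Indeed, $\ell/r=2/\sqrt3=\sqrt{4/3}$, and the dihedral angle of the regular octahedron is $\arccos(-1/3)\approx 109.47^\circ$, matching \eqref{eq:circumradius} and \eqref{eq:dihedral_angle}.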
It is worth emphasizing that large parts of \cref{res:edge_vertex_transitive} do not apply to polytopes of a weaker symmetry, as \shortStyle{e.g.}\ vertex-transitive polytopes.
Prisms are counterexamples to both $(i)$ and $(ii)$.
There are vertex-transitive neighborly polytopes (other than simplices) and they are counterexamples to $(ii)$ and $(iii)$.
\iffalse
A simultaneously vertex- and edge-transitive polytope can neither deformed nor projected to keep its edge-graph and symmetry.
The fact that $P$ is perfect can already be proven
from classical rigidity results and thus can be formulated for a much larger class of polytopes:
\begin{theorem}[Alexandrov, \cite{...}]
\label{res:alexandrov_rigidity}
A polyope is uniquely determined (up to orientation) by its combinatorial type and the shape of its facets.
\end{theorem}
\begin{theorem}
An inscribed polytope with all edges of length $\ell$ is uniquely determined (up to orientation) by its combinatorial type.
\begin{proof}
The proof is by induction.
A 2-dimensional inscribed polytope with all edges of length $\ell$ ic clearly a regular polygon, hence of unique shape.
Now consider $P$ is a $d$-dimensional inscribed polytope with all edges of length $\ell$.
Then also each facet of $P$ is inscribed with this edge-length.
Also, the combinatorial type of each facet is determined by the combinatorial type of $P$.
By induction assumption, the shape of each facet is hence uniquely determined.
By \cref{res:alexandrov_rigidity} this uniquely determines the shape of $P$.
\end{proof}
\end{theorem}
\begin{corollary}
A simultaneously vertex- and edge-transitive polytope is perfect.
\end{corollary}
\fi
\begin{remark}
\label{rem:edge_not_vertex}
There are two edge-transitive polyhedra that are not vertex-transitive: the \emph{rhombic dodecahedron} and the \emph{rhombic triacontahedron} (see also \cref{fig:edge_transitive}).~Only the former is $\theta_2$-spectral, and the latter is not spectral for any eigenvalue (this was already mentioned in \cite{licata1986surprising}).
Since the rhombic dodecahedron is not vertex-transitive, nothing of this follows from \cref{res:vertex_edge_transitive_cor}.
However, this polytope satisfies the conditions of \cref{res:implies_spectral}, which seems purely accidental.
It is the only known spectral polytope that is not vertex-transitive.
\end{remark}
\subsection{Rigidity and irreducibility}
\label{sec:rigidity}
The goal of this section is to prove the missing part of \cref{res:edge_vertex_transitive}:
\begin{theorem}
\label{res:edge_transitive_rigid}
If $P\subset\RR^d$ is simultaneously vertex- and edge-transitive, then
\begin{myenumerate}
\item $\Aut(P)\subset\Ortho(\RR^d)$ is irreducible as a matrix group, and
\item $P$ is determined by its edge-graph up to scale and orientation.
\end{myenumerate}
\end{theorem}
To prove \cref{res:edge_transitive_rigid}, we make use of \emph{Cauchy's rigidity theorem} for polyhedra (with its beautiful proof listed in \cite[Section 12]{aigner2010proofs}).
It states that every~polyhedron is uniquely determined by its combinatorial type and the shape of its faces.
This was generalized by Alexandrov to general dimensions $d\ge 3$ (proven \shortStyle{e.g.}\ in \cite[Theorem 27.2]{pak2010lectures}):
\begin{theorem}[Alexandrov]
\label{res:alexandrov}
Let $P_1,P_2\subset\RR^d, d\ge 3$ be two polytopes, so that
\begin{myenumerate}
\item $P_1$ and $P_2$ are combinatorially equivalent via a face lattice~isomorphism~$\phi:\F(P_1)\to\F(P_2)$, and
\item each facet $\sigma\in\F_{d-1}(P_1)$ is congruent to the facet $\phi(\sigma)\in\F_{d-1}(P_2)$.
\end{myenumerate}
Then $P_1$ and $P_2$ are congruent, \shortStyle{i.e.,}\ are the same up to orientation.
\end{theorem}
\begin{proposition}
\label{res:regular_rigid}
Let $P_1,P_2\subset\RR^d$ be two combinatorially equivalent polytopes,~each of which has
\begin{myenumerate}
\item all vertices on a common sphere (\shortStyle{i.e.,}\ is inscribed), and
\item all edges of the same length $\ell_i$.
\end{myenumerate}
Then $P_1$ and $P_2$ are the same up to scale and orientation.
\begin{proof}[Proof.\!\!]\footnote{This proof was proposed by the user \emph{Fedor Petrov} on MathOverflow \cite{petrovMO}.}
W.l.o.g.\ assume that $P_1$ and $P_2$ have the same circumradius, otherwise~re\-scale $P_2$.
It then suffices to show that $P_1$ and $P_2$ are the same up to orientation.
We proceed with induction by the dimension $d$.
The induction base is given by $d=2$, which is trivial, since any two inscribed polygons with constant edge length are regular and thus completely determined (up to scale and orientation) by their number of vertices.
Suppose now that $P_1$ and $P_2$ are combinatorially equivalent polytopes of dimension $d\ge 3$ that satisfy $(i)$ and $(ii)$.
Let $\phi$ be the face lattice isomorphism between them.
Let $\sigma\in\F_{d-1}(P_1)$ be a facet of $P_1$, and $\phi(\sigma)$ the corresponding facet in $P_2$.
In particular, $\sigma$ and $\phi(\sigma)$ are combinatorially equivalent.
Furthermore, both $\sigma$ and $\phi(\sigma)$ are of dimension $d-1$ and satisfy $(i)$ and $(ii)$. This is obvious for $(ii)$, and for $(i)$ recall that facets of inscribed polytopes are also inscribed.
By induction hypothesis, $\sigma$ and $\phi(\sigma)$ are then congruent.
Since this holds for all facets $\sigma\in\F_{d-1}(P_1)$, \cref{res:alexandrov} tells us that $P_1$ and $P_2$ are congruent, that is, the same up to orientation.
\end{proof}
\end{proposition}
We can now prove the main theorem of this section:
\begin{proof}[Proof of \cref{res:edge_transitive_rigid}]
By \cref{res:edge_vertex_transitive} the combinatorial type of $P$ is determined by its edge-graph.
By vertex-transitivity, all vertices are on a sphere.
By edge-transitivity, all edges are of the same length.
We can then apply \cref{res:regular_rigid} to obtain that $P$ is unique up to scale and orientation.
This proves $(ii)$.
Suppose now that $\Aut(P)$ is not irreducible, but that $\RR^d$ decomposes as $\RR^d=W_1\oplus W_2$ into non-trivial orthogonal $\Aut(P)$-invariant subspaces.
Let $T_\alpha\in\GL(\RR^d)$ be the linear map that acts as identity on $W_1$, but as $\alpha\Id$ on $W_2$ for some $\alpha >1$.
Then $T_\alpha P$ is a non-orthogonal linear transformation of $P$ (in particular, combinatorially equivalent), on which $\Aut(P)$ still acts vertex- and edge-transitively.
By $(ii)$, this cannot be. Hence $\Aut(P)$ must be irreducible, which proves $(i)$.
\end{proof}
\subsection{A word on classification}
\label{sec:classification}
Despite the simple appearance of the definition of an edge-transitive polytope,
no classification has been obtained so far.
A classification is known in dimension three: besides the Platonic solids, the edge-transitive polyhedra are exactly the ones shown in \cref{fig:edge_transitive} (nine in total).
\begin{figure}[h!]
\includegraphics[width=0.75\textwidth]{img/edge_transitive_polyhedra}
\caption{From left to right, these are: the cuboctahedron, the icosido\-decahedron, the rhombic dodecahedron, and the rhombic triacontahedron.}
\label{fig:edge_transitive}
\end{figure}
There are many known edge-transitive polytopes in dimension $d\ge 4$ (so we~are not talking about a class as restricted as the regular polytopes).
There are 15 known edge-transitive 4-polytopes (and an infinite family of duoprisms\footnote{The $(n,m)$-duoprism is the cartesian product of a regular $n$-gon and a regular $m$-gon. Those are edge-transitive if and only if $n=m$. Technically, the 4-cube is the $(4,4)$-duoprism but is usually not counted as such, because of its exceptionally large symmetry group.}), but already here, no classification is known.
It is known that the number of irreducible\footnote{That is, not being the cartesian product of lower-dimensional edge-transitive polytopes.} edge-transitive polytopes grows at least linearly with the number of dimensions.
For example, there are $\lfloor d/2\rfloor$ \emph{hyper-simplices} in dimension $d$.
These are edge-transitive (even distance-transitive, see \cref{sec:distance_transitive}).
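For illustration, the hyper-simplices are easy to generate and to check for two necessary conditions of vertex- and edge-transitivity, namely being inscribed and equilateral (a sketch for $\Delta(4,2)$, where $\Delta(d,k)$ is the convex hull of all $0/1$-vectors in $\RR^{d+1}$ with exactly $k$ ones; we use here that the edges of $\Delta(d,k)$ are exactly the minimal-distance pairs of vertices):

```python
import itertools

d, k = 4, 2
# vertices of the hyper-simplex Delta(d,k): 0/1-vectors of length d+1 with k ones
verts = [v for v in itertools.product((0, 1), repeat=d + 1) if sum(v) == k]

# squared circumradii and squared pairwise distances
norms2 = {sum(x * x for x in v) for v in verts}
dists2 = {sum((x - y) ** 2 for x, y in zip(u, v))
          for u, v in itertools.combinations(verts, 2)}
```

All vertices lie on a sphere of radius $\sqrt k$, and all edges (the pairs at squared distance $2$) have the same length $\sqrt2$.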
It is the hope of the author that the classification of the edge-transitive polytopes can be obtained using their spectral properties.
Their classification can now be stated purely as a problem in spectral graph theory:
the classification of the edge-transitive polytopes (in dimension $d\ge 4$) is equivalent to the classification of $\theta_2$-spectral edge-transitive graphs, and by \cref{res:spectral_2} we have a completely graph-theoretic characterization of spectral graphs.
\begin{theorem}
\label{res:edge_transitive_spectral_graph}
Let $G$ be an edge-transitive graph. If $G$ is $\theta_k$-spectral, then
\begin{myenumerate}
\item $k=2$, and
\item if $G$ is \ul{not} vertex-transitive, then $G$ is the edge-graph of the rhombic dodecahedron (see \cref{fig:edge_transitive}).
\end{myenumerate}
\begin{proof}
We first prove $(ii)$.
As shown in \cite{winter2020polytopes} all edge-transitive polytopes in dimen\-sion $d\ge 4$ are vertex-transitive.
If $G$ is edge-transitive, not vertex-transitive and $\theta_k$-spectral, then its $\theta_k$-eigenpolytope is also edge-transitive but not vertex-transitive, hence of dimension $d\le 3$.
One checks that the 2-dimensional spectral polytopes are regular polygons, hence vertex-transitive.
The remaining polytopes are polyhedra, and we mentioned in \cref{rem:edge_not_vertex} that among these, only the rhombic dodecahedron is spectral, in fact $\theta_2$-spectral.
This proves $(ii)$.
For $(i)$, it remains to consider the case that $G$ is both vertex- and edge-transitive; then so is its eigenpolytope.
By \cref{res:vertex_edge_transitive_cor} this is a $\theta_2$-eigenpolytope.
Together with part $(ii)$, we find $k=2$~in~all cases, which proves $(i)$.
\end{proof}
\end{theorem}
\subsection{Arc- and half-transitive polytopes}
\label{sec:arc_transitive}
In a graph or polytope, an \emph{arc} is~an incident vertex-edge-pair.
A graph or polytope is called \emph{arc-transitive} if its symmetry group acts transitively on the arcs.
Being arc-transitive implies being both vertex-transitive and edge-transitive.
In addition to that, in an arc-transitive graph, every edge can be mapped, not only onto every other edge, but also onto itself with flipped orientation.
There exist graphs that are simul\-taneously vertex- and edge-transitive, but not arc-transitive.
Those are called \emph{half-transitive} graphs, and are comparatively rare.
The smallest one has $27$ vertices and is known as the \emph{Holt graph} (see \cite{bouwer1970vertex,holt1981graph}).
For polytopes on the other hand, it is unknown whether there exists a distinction between being arc-transitive and being simultaneously vertex- and edge-transitive.
No \emph{half-transitive polytope} is known.
Because of \cref{res:edge_vertex_transitive} $(i)$, we know that the edge-graph of a half-transitive polytope must itself be half-transitive.
Since such graphs are rare, the existence of half-transitive polytopes seems unlikely.
\begin{example}
The Holt graph is not the edge-graph of a half-transitive polytope: the Holt graph is of degree four, and its second-largest eigenvalue is of multiplicity six, giving rise to a 6-dimensional $\theta_2$-eigenpolytope.
But a 6-dimensional polytope must have an edge-graph of degree at least six, and so the Holt graph is not spectral.
\end{example}
The lack of examples of half-transitive polytopes means that all known edge-transitive polytopes in dimension $d\ge 4$ are in fact arc-transitive.
Likewise, a~classification of arc-transitive polytopes is not known.
\iffalse
\subsection{Half-transitive polytopes}
\label{sec:half_transitive}
Arc-transitivity implies vertex- and edge-transi\-ti\-vity, but for graphs, the converse is not true.
A graph (and we shall adopt this terminology for polytopes) is called \emph{half-transitive}, if it is both vertex- and edge-transitive, but not arc-transitive.
In a half-transitive graph, each edge can be mapped onto any other edge, but not onto itself with flipped orientation.
The existence of half-transitive graphs was proven first by Holt \cite{holt1981graph}.
Such graphs are relatively rare. The smallest half-transitive graph is now known as the Holt graph and has 27 vertices.
It is not known whether there are any \emph{half-transitive polytopes}.
However, by \cref{res:...} we can now conclude that they are at least as rare as the half-transitive graphs.
\begin{corollary}
The edge-graph of half-transitive polytope must be half-transitive.
\end{corollary}
It is known that the Holt graph is not the edge-graph of a half-transitive polytope.
\fi
\subsection{Distance-transitive polytopes}
\label{sec:distance_transitive}
Our previous results about edge-transitive polytopes already allow for a complete classification of a particular subclass, namely, the \emph{distance-transitive polytopes}, thereby also providing a list of examples of edge-transitive polytopes in higher dimensions.
The distance-transitive symmetry is usually only considered for graphs, and the distance-transitive graphs form a subclass of the distance-regular graphs.
The usual reference for these is the classic monograph by Brouwer, Cohen and Neumaier \cite{brouwer1989distance}.
For any two vertices $i,j\in V$ of a graph $G$, let $\dist(i,j)$ denote the graph-theoretic \emph{distance} between those vertices, that is, the length of the shortest path connecting them.
The \emph{diameter} $\diam(G)$ of $G$ is the largest distance between any two vertices in $G$.
\begin{definition}
A graph is called \emph{distance-transitive} if $\Aut(G)$ acts transitively on each of the sets
$$D_{\delta}:=\{(i,j)\in V\times V \mid \dist(i,j)=\delta\},\quad\text{for all $\delta\in\{0,...,\diam(G)\}$}.$$
Analogously, a polytope $P\subset\RR^d$ is said to be \emph{distance-transitive}, if its Euclidean symmetry group $\Aut(P)$ acts transitively on each of the sets
$$D_{\delta}:=\{(v_i,v_j)\in \F_0(P)\times \F_0(P) \mid \dist(i,j)=\delta\},\quad\text{for all $\delta\in\{0,...,\diam(G_P)\}$}.$$
Note that the distance between the vertices is still measured along the edge-graph rather than via the Euclidean distance.
\end{definition}
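The distance classes $D_\delta$ of an edge-graph are computable by breadth-first search. As a sketch (our code; it only computes the partition into classes, transitivity of the symmetry group on each class is not checked), for the cube graph with $\diam(G)=3$ the ordered vertex pairs split as $|D_0|,\dots,|D_3|=8,24,24,8$:

```python
import itertools
from collections import deque

# edge-graph of the cube: vertices {0,1}^3, adjacent iff one coordinate differs
verts = list(itertools.product((0, 1), repeat=3))
adj = {v: [w for w in verts if sum(a != b for a, b in zip(v, w)) == 1]
       for v in verts}

def bfs_dist(src):
    # graph-theoretic distances from src via breadth-first search
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

# sizes of the distance classes D_delta (ordered pairs of vertices)
sizes = {}
for v in verts:
    for w, delta in bfs_dist(v).items():
        sizes[delta] = sizes.get(delta, 0) + 1
```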
Being arc-transitive is equivalent to being transitive on the set $D_1$.
Hence, distance-transitivity implies arc-transitivity, and thus also edge-transitivity.
By our considerations in the previous sections, we know that the \mbox{classification} of distance-transitive polytopes is equivalent~to the classification of the $\theta_2$-spectral distance-transitive graphs.
Those were classified by Godsil (see \cref{res:spectral_distance_regular_graphs}).
In the following theorem we translate each such $\theta_2$-spectral distance-transitive graph into its respective eigenpolytope.
This gives a complete classification of the distance-transitive polytopes.
\begin{theorem}\label{res:distance_transitive_classification}
If $P\subset\RR^d$ is distance-transitive, then it is one of the following:
\begin{enumerate}[label=$(\text{\roman*}\,)$]
\item a regular polygon $(d=2)$,
\item the regular dodecahedron $(d=3)$,
\item the regular icosahedron $(d=3)$,
\item a cross-polytope, that is, $\conv\{\pm e_1,...,\pm e_d\}$ where $\{e_1,...,e_d\}\subset\RR^d$ is the standard basis of $\RR^d$,
\item a hyper-simplex $\Delta(d,k)$, that is, the convex hull of all vectors $v\in\{0,1\}^{d+1}$ with exactly $k$ 1-entries,
\item a cartesian power of a regular simplex (also known as the Hamming polytopes; this includes regular simplices and hypercubes),
\item a demi-cube, that is, the convex hull of all vectors $v\in\{-1,1\}^d$ with~an~even number of 1-entries,
\item the $2_{21}$-polytope, also called Gosset-polytope $(d=6)$,
\item the $3_{21}$-polytope, also called Schläfli-polytope $(d=7)$.
\end{enumerate}
The ordering of the polytopes in this list agrees with the ordering of graphs in the list in \cref{res:spectral_distance_regular_graphs}.
The latter two polytopes were first constructed by Gosset in \cite{gosset1900regular}.
\end{theorem}
We observe that the list in \cref{res:distance_transitive_classification} contains many polytopes that are not regular, and contains all regular polytopes excluding the 4-dimensional exceptions, the 24-cell, 120-cell and 600-cell.
The distance-transitive polytopes thus form a distinct class of remarkably symmetric polytopes which is not immediately related to the class of regular polytopes.
Another noteworthy observation is that all the distance-transitive polytopes are \emph{Wythoffian poly\-topes}, that is, they are orbit polytopes of finite reflection groups.
\Cref{fig:distance_transitive_Coxeter} shows the Coxeter-Dynkin diagrams of these polytopes.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/distance_transitive_Coxeter}
\caption{Coxeter-Dynkin diagrams of distance-transitive polytopes.}
\label{fig:distance_transitive_Coxeter}
\end{figure}
\iffalse
Finally, we can show that \cref{res:distance_transitive_classification} is the final word on distance-transitive (actually distance-regular) graphs that are spectral: we can show that such a graph cannot be $\theta_k$-spectral for any $k\not=2$.
\begin{theorem}
\label{res:distance_regular_theta_k}
If $G$ is distance-transitive (actually distance-regular) and $\theta_k$-spectral, then $k=2$.
\end{theorem}
The proof uses some special properties of distance-transitive (actually distance-regular graphs) that we are citing in place.
In general, we recommend the chapters 11 and 13 of \cite{godsil1993algebraic}, the latter of which contains all the tools we need.
\begin{proof}[Proof of \cref{res:distance_regular_theta_k}]
The $\theta_1$-eigenpolytope of any graphs is a single point.
Since we generally assumed that our polytopes ar non-trivial, \shortStyle{i.e.,}\ of dimension $d\ge 2$, we from now on assume $k\ge 2$.
Let~$v_i\in$ $\RR^d$ be the coordinates assigned to the vertices $i\in V$ of $G$ as given~in~\cref{def:eigenpolytope}.
We know that the arrangement of the points $v_i$ in $\RR^d$ is as symmetric as the underlying graph, which in the case of distance-transitive graphs means that the value of $\<v_i,v_j\>$ depends only on the distance $\dist(i,j)$ (more generally, the same holds for distance-regular graphs).
We can then define the so-called \emph{cosine sequence} $u_\delta:=\<v_i,v_j\>$ whenever $\dist(i,j)$ $=\delta$.
A well-known result for distance-transitive (actually distance-regular) graphs is, that the sequence $u_0,...,u_{\diam(G)}$ has \emph{exactly} $k-1$ sign changes (see \cite{godsil1993algebraic}, Chapter 13, Lemma 2.1).
It remains to show, that if the $v_i$ embedd $G$ as the skeleton of a polytope, then there can be at most one signs change, hence $k=2$.
\begin{center}
\includegraphics[width=0.4\textwidth]{img/5_gon_cut}
\end{center}
This property is occasionally known as the \emph{tightness} of the skeleton of a polytope.
It says that the skeleton cannot be cut into more than two pieces by any hyperplane.
We give a formal proof below.
Suppose that there are at least two signs changes in $u_\delta$. This means, there exist $\delta>\delta'>0$ with $u_0u_{\delta'}<0$ and $u_{\delta'}u_{\delta}<0$, or equivalently $u_0 > u_\delta > 0 > u_{\delta'}$.
Suppose that the $v_i$ embedd $G$ as the skeleton of a polytope $P$.
Consider the linear optimization problem maximizing the linear functional $\<v_i,\free\>$ over $P$, which clearly attains its maximum in $v_i$.
Let $j\in V$ be a vertex with $\dist(i,j)=\delta$.
Applying the simplex algorithm yields a monotone path in the skeleton from $v_j$ to $v_i$.
But any such path must pass though a vertex $j'\in V$ with $\dist(i,j')=\delta'<\delta$. But we know that $\<v_i,v_{j'}\> = u_{\delta'} < 0 < u_{\delta}$.
So the path cannot have been monotone.
This is a contradiction, showing that there cannot have been two sign changes.
\end{proof}
Whether a similar argument is possible for larger classes of graphs is unknown.
\fi
\iffalse
\hrulefill
Let $G=(V,E)$ be a (simple, undirected) graph on the vertex set $V=\{1,...,n\}$.
From any eigenvalue $\theta\in\Spec(G)$ of $G$ (\shortStyle{i.e.,}\ of its adjacency matrix $A$) of multiplicity $d$, we can construct a so-called \emph{spectral realization} (or \emph{$\theta$-realization}) $v\: V\to\RR^d$ of $G$ as follows.
Let $\{u_1,...,u_d\}\subset\RR^n$ be an orthonormal basis of the $\theta$-eigenspace of $G$ and define the \emph{arrangement matrix} $M:=(u_1,...,u_d)\in\RR^{n\x d}$ in which the $u_i$ are the columns.
This matrix has one row per vertex, and so we can obtain a realization $v$ by assigning $i\in V$ to the $i$-th row of $M$:
$$M :=
\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_1^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\; \\
& \vdots & \\[0.4ex]
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_n^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\;
\end{pmatrix}\mathrlap{\in\RR^{n\times d}.}$$
The realization $V\ni i\mapsto v_i$ is called $\theta$-realization of $G$. The $\theta$-realization is unique up to re-orientation (depending only on the choice of the basis $\{u_1,...,u_d\}$).
\begin{definition}
The \emph{$\theta$-eigenpolytope} $P_G(\theta)$ of a graph $G$ is the convex hull of the $\theta$-realization $v$ of $G$:
$$P_G(\theta):=\conv\{v_i\mid i\in V\}.$$
\end{definition}
\fi
\iffalse
\section{The setting}
\label{sec:setting}
Throughout the paper, $P\subset\RR^d$ denotes a $d$-dimensional polytope of full dimen\-sion, that is, $P$ is the convex hull of finitely many points $v_1,...,v_n\in\RR^d$ (its \emph{vertices}), and $P$ is not contained in a proper affine subspace of $\RR^d$.
In particular, we assume the vertices to be labeled with the numbers $1$ to $n$.
By $\F_\delta(P)$, we denote the $\delta$-dimensional faces of $P$, in particular, its vertex set $\F_0(P)$, and its edge set $\F_1(P)$.
By $G_P=(V,E)$ we denote the \emph{edge-graph} of $P$, with vertex set $V=\{1,...,n\}$, so that $i\in V$ corresponds to the vertex $v_i\in \F_0(P)$, and each $ij\in E$ corresponds to the edge $\conv\{v_i,v_j\}\in\F_1(P)$.
The term vertex (resp.\ edge) will be used for the elements of $V$ and $\F_0(P)$ (resp.\ $E$ and $\F_1(P)$) alike, but no confusion should arise, as these are in one-to-one correspondence.
Finally, there are two matrices of major importance: for every polytope $P\subset\RR^d$ with labeled vertices $v_1,...,v_n$, the matrix
$$M :=
\begin{pmatrix}
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_1^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\; \\
& \vdots & \\[0.4ex]
\;\rule[.5ex]{2.5ex}{0.4pt}\!\!\!\! & v_n^{\Tsymb} & \!\!\!\!\rule[.5ex]{2.5ex}{0.4pt}\;\;
\end{pmatrix}\mathrlap{\in\RR^{n\times d}}$$
is called \emph{arrangement matrix} of $P$ (the vertices $v_i$ are the rows of $M$).
The matrix $A\in\{0,1\}^{n\x n}$ with
$$A_{ij}=\begin{cases} 1 & \text{if $ij\in E$}\\0&\text{if $ij\not\in E$}\end{cases}$$
is the usual \emph{adjacency matrix} of $G_P$.
\section{Two open questions}
Most of this paper is about the relation between polytopes and their edge-graphs.
It is well known that not every graph is the edge-graph of a polytope.
Since Steinitz, we know that the edge-graphs of polyhedra ($3$-dimensional polytopes) are exactly the $3$-connected planar graphs, and that each such graph determines a single combinatorial type of polytope.
In contrast, no characterization of the edge-graphs of polytopes in dimension $d\ge 4$ is known; even worse, if a graph is an edge-graph, it in general tells us little about the polytope.
That is, there can be many combinatorially distinct polytopes with the same edge-graph, even of the same dimension.
As a light in the dark, we have the result of Blind \& Mani (later refined by Kalai), that every \emph{simple} polytope (that is, a $d$-dimensional polytope whose edge-graph is of degree $d$) is uniquely determined by its edge-graph.
Similar results are known for other classes of polytopes, \shortStyle{e.g.}\ zonotopes.
In this paper, we add another class of polytopes to that list: \emph{spectral polytopes}.
Given the definition of a spectral polytope, this is not too surprising (the definition reads like an algorithm for computing the polytope from its edge-graph); the interesting part is which graphs turn out to be spectral, many of which would not otherwise have been expected to determine their polytope uniquely.
For example, consider highly symmetric polytopes.
Each regular polytope is uniquely determined (among regular polytopes) by its edge-graph. In other words, no two regular polytopes have the same edge-graph.
On the other hand, there are many distinct vertex-transitive polytopes with the same edge-graph (\shortStyle{e.g.}\ any two vertex-transitive $n$-vertex neighborly polytopes).
So regularity is too strong to be interesting, but vertex-transitivity is too weak.
In this paper, we show that edge-transitivity also already gives unique reconstruction from the edge-graph, since they are spectral.
\fi
\iffalse
\section{Balanced and spectral polytopes}
\label{sec:balanced}
We now come back to the curious observations, that sometimes, a polytope is the eigenpolytope of its edge-graph.
A polytope with this property will be called \emph{spectral}.
However, for this section, we decided to go a different path, and introduce spectral polytopes from a different perspective: via balanced polytopes.
The overall goal of this section is to investigate how one can decide whether a given polytope is spectral.
The precursor to spectral polytopes are the balanced polytopes that we introduce now.
\begin{definition}
$P\subset\RR^d$ is called \emph{$\theta$-balanced} (or just \emph{balanced})~for~a~num\-ber $\theta\in\RR$ if
\begin{equation}\label{eq:balanced}
\sum_{\mathclap{j\in N(i)}} v_j=\theta v_i,\quad \text{for all $i\in V$}.
\end{equation}
The value $\theta$ is actually an eigenvalue of $G$ (see \cref{res:eigenvalue} below).
If the~multi\-plicity of this eigenvalue $\theta$ is exactly $d$, then $P$ is called \emph{$\theta$-spectral} (or just \emph{spectral}).
\end{definition}
\begin{example}
All neighborly polytopes are balanced (provided the barycenter of their vertices is the origin).
We can see this as follows:
$$\sum_{\mathclap{j\in N(i)}} v_j = -v_i + \underbrace{\sum_{\mathclap{j\in V}} v_j}_{=0} = -v_i.$$
Note that the spectrum of $K_n$ is $\{(-1)^{n-1},(n-1)^1\}$, and so $-1$ is indeed the second largest eigenvalue.
\end{example}
\begin{example}
All regular polytopes are $\theta_2$-spectral.
This was first shown by Licata and Powers \cite{licata1986surprising} for all regular polytopes excluding the 4-dimensional exceptions (the 24-cell, 120-cell and 600-cell).
For the remaining polytopes this was proven in \cite{winter...} (Theorem ??).
Alternatively, the same follows from \cref{res:edge_vertex_transitive}.
Other polytopes that were recognized as spectral by Licata and Powers were the
\emph{rhombic dodecahedron}, ... \msays{TODO}
\end{example}
\begin{example}
Probably all uniform polytopes have a realization that is spectral.
\end{example}
Note that all these polytopes are spectral to the \emph{second-largest eigenvalue $\theta_2$}.
This is plausible for different reasons, but no proof of this fact is currently known.
\begin{observation}
Nodal domains ...
\end{observation}
Being balanced can be interpreted in at least two different ways.
One of which is in terms of rigidity theory (and applies more generally to graph realizations rather than just to skeleta of polytopes).
The other is purely in terms of spectral graph theory.
\begin{remark}
Being $\theta$-balanced can also be interpreted in the context of rigidity theory.
For an edge $ij\in E$, the vector $v_j-v_i$ is pointing from the vertex $v_i$ along the edge $ij$ and can be interpreted as a force pulling $v_i$ along this edge.
At each vertex, the sum of these forces behaves as follows:
\begin{equation}\label{eq:self_stress}
\sum_{\mathclap{j\in N(i)}} (v_j-v_i) = \sum_{\mathclap{j\in N(i)}} v_j - \deg(G)v_i =\big(\theta-\deg(G)\big) v_i.
\end{equation}
In the language of rigidity theory: if we put the ...
Equation \eqref{eq:self_stress} can also be written as $LM=\big(\deg(G)-\theta\big)M$, where $L$ is the Laplacian of $G$.
\end{remark}
\begin{remark}
Let us call a graph \emph{$\theta$-spectral} if it is the edge-graph of a $\theta$-spectral polytope.
We can characterize such graphs purely in the language of spectral graph theory.
A graph is $\theta$-spectral if, for all $i,j\in V$, the vertices $i$ and $j$ are adjacent in $G$ if and only if there exists a $\theta$-eigenvector $u\in\RR^n$ that is maximized exactly on $i$ and $j$:
$$\Argmax_{k\in V} u_k = \{i,j\}.$$
\end{remark}
Now the central observation concerning balanced polytopes, paving the way to spectral polytopes, is the following:
\begin{observation}\label{res:eigenvalue}
The defining equation \eqref{eq:balanced} can be written as $AM=\theta M$, where the matrices $A\in\RR^{n\x n}$ and $M\in\RR^{n\x d}$ are defined as in \cref{sec:setting}.
In this form, it is apparent that $\theta$ is an eigenvalue of $A$, and the columns of $M$ are the corresponding eigenvectors.
\end{observation}
Since $P$ is assumed to have full dimension, the matrix $M$ has rank $d$, \shortStyle{i.e.,}\ its columns are linearly independent eigenvectors.
But they do not necessarily span the whole eigenspace.
If they do, the polytope is called spectral:
\begin{definition}
A $\theta$-balanced polytope $P\subset\RR^d$ is called \emph{$\theta$-spectral} (or just \emph{spectral}), if any of the following equivalent conditions is satisfied:
\begin{myenumerate}
\item $\theta\in\Spec(G_P)$ has multiplicity $d$ (the dimension of $P$),
\item the columns of $M$ span the $\theta$-eigenspace of $G_P$.
\end{myenumerate}
\end{definition}
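As a sanity check of these definitions (our own toy example, not from the text; \texttt{numpy} assumed available): the unit square with edge-graph $C_4$ is $0$-balanced, and $\theta=0$ has multiplicity $2=d$ in $\Spec(C_4)$, so the square is $0$-spectral.

```python
import numpy as np

# Hedged toy check: the square is 0-balanced and 0-spectral.
M = np.array([[ 1.0,  1.0],    # v_1  (rows of M are the vertices)
              [ 1.0, -1.0],    # v_2
              [-1.0, -1.0],    # v_3
              [-1.0,  1.0]])   # v_4
A = np.array([[0, 1, 0, 1],    # edge-graph C_4 in cyclic vertex order
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

theta = 0.0
assert np.allclose(A @ M, theta * M)   # balanced: neighbour sums = theta * v_i
eigenvalues = np.linalg.eigvalsh(A)    # spectrum of C_4: {-2, 0, 0, 2}
multiplicity = int(np.isclose(eigenvalues, theta).sum())
assert multiplicity == M.shape[1]      # multiplicity 2 = d, hence 0-spectral
```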
\subsection{Applications}
There are at least two interesting facts about spectral polytopes.
\begin{corollary}
A spectral polytope realizes all the symmetries of its edge-graph.
\end{corollary}
\begin{corollary}
A $\theta_2$-spectral polytope is uniquely determined by its edge-graph (among $\theta_2$-spectral polytopes).
\end{corollary}
Of course, $\theta_2$ can be replaced by any $\theta_i$.
\fi
% Source metadata (arXiv:2009.02179, Metric Geometry (math.MG), 2020-09-07):
% "Eigenpolytopes, Spectral Polytopes and Edge-Transitivity", https://arxiv.org/abs/2009.02179
% Abstract: Starting from a finite simple graph $G$, for each eigenvalue $\theta$ of its adjacency matrix one can construct a convex polytope $P_G(\theta)$, the so-called $\theta$-eigenpolytope of $G$. For some polytopes this technique can be used to reconstruct the polytope from its edge-graph. Such polytopes (we shall call them spectral) are still badly understood. We give an overview of the literature on eigenpolytopes and spectral polytopes. We introduce a geometric condition by which to prove that a given polytope is spectral (more exactly, $\theta_2$-spectral). We apply this criterion to the edge-transitive polytopes. We show that every edge-transitive polytope is $\theta_2$-spectral, is uniquely determined by its edge-graph, and realizes all its symmetries. We give a complete classification of the distance-transitive polytopes.
% Source metadata (arXiv:1410.6954): "Alternative links, homogeneous links, and graphs", https://arxiv.org/abs/1410.6954
% Abstract: In this paper we review the definitions of homogeneous and alternative links. We also give two new characterizations of an alternative link diagram, one within the context of the enhanced checkerboard graph and another from the labeled Seifert graph.
\section{Introduction}
In \cite{LK} Kauffman introduced the class of alternative links and conjectured that the class of alternative links is identical to the class of pseudoalternating links introduced by Murasugi and Mayland in \cite{EM}. It was known that all alternative links are pseudoalternating, and it was shown in \cite{MS} that all pseudoalternating links of genus one are alternative, but that this fails in general for higher genera, thus resolving Kauffman's conjecture in the negative. This was done by studying the intermediary class of homogeneous links, which were introduced in \cite{PC}. The remaining question is then:
\begin{question}
Are the classes of alternative and homogeneous links identical?
\end{question}
Studying these two classes is interesting because both have the notable property of possessing a minimal genus Seifert surface that can be realized by Seifert's algorithm (\cite{LK}, \cite{PC}). The difficulty in answering our question lies in the fact that the alternativity of a link is difficult to determine. In this paper we review homogeneous and alternative links and then give two new characterizations of an alternative diagram. First, in sections 4 and 5 we explain how to construct the enhanced checkerboard graph and the labeled Seifert graph, respectively. In section 6 the definition of a homogeneous link is discussed. The definition of alternative links and our new characterizations of an alternative diagram are discussed in section 7. We conclude with some open questions in section 8. A basic knowledge of knot theory and graph theory is assumed throughout; however, some preliminary concepts are covered in sections 2 and 3. To avoid confusion, we will refer to a regular projection of an oriented link into the plane as a diagram.
\section{Preliminaries}
The Seifert Algorithm is a procedure by which one can obtain a surface bounded by a link $L$ from one of its diagrams, \cite{LK}. Seifert's algorithm proceeds as follows. Given a diagram $D$ we smooth all of the crossings according to the following scheme.
\begin{figure}[H]
\centering
\includegraphics{smoothings}
\end{figure}
Following this procedure produces a collection of oriented circles in the plane bounded by discs. These circles are called Seifert circles. A diagram of $10_{138}$ before and after smoothing the crossings is displayed below.
\begin{figure}[H]
\centering
\caption{}
\includegraphics{seifertalgorithm}
\end{figure}
To create a Seifert surface from these circles, attach a twisted band to the discs bounding the circles in place of each crossing. The band is twisted in accordance with the smoothed crossing between the two circles. If one circle is properly contained in another, lift the innermost circle from the plane so that it lies above the plane containing the circle that properly contained it. If, instead of creating a surface, we place the appropriate positive or negative site markers as in \cite{LK}, we obtain a diagram of Seifert circles with site markers between them instead of crossings. These site markers are shown below.
\begin{figure}[H]
\centering
\caption{Top: positive marker Bottom: negative marker}
\includegraphics{sitemarkerscheme}
\end{figure}
The components of the complement of the union of Seifert circles are referred to in \cite{LK} as \emph{spaces}. We will refer to the union of circles created by replacing the crossings of a diagram $D$ with site markers as the Seifert diagram, denoted $D_{s}$. Similarly we will denote the collection of spaces in the complement of the union of Seifert circles as $\hat{D}_{s}$. Pictured below is the Seifert diagram of the diagram previously shown.
\begin{figure}[H]
\centering
\caption{$D_{s}$}
\includegraphics{seifertdiagramprime}
\end{figure}
In this paper we will often refer to the \emph{height} of a Seifert circle as defined in \cite{PM}:
\begin{definition}
Given a Seifert circle $C$ of a Seifert diagram $D_{s}$, the height of $C$, denoted $h(C)$, is defined as the number of Seifert circles in $D_{s}$ that properly contain $C$.
\end{definition}
We will also, on occasion, refer to the height of a space bounded by a Seifert circle. In this case we are referring to a bounded space in the plane properly contained between Seifert circles, or to the large unbounded space surrounding the diagram. More formally:
\begin{definition}
Given a Seifert diagram $D_{s}$, a space $A\subseteq \hat{D}_{s}$ is said to be of height $k$ if and only if it is bounded by a single Seifert circle of height $k$, or by a Seifert circle of height $k$ and one or more Seifert circles of height $k+1$. A space of the Seifert diagram is said to be of height $-1$ if and only if it is unbounded.
\end{definition}
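Since $h(C)$ simply counts the circles properly containing $C$, the height is immediate once the containment relation of the Seifert circles is known. A minimal sketch with hypothetical circle names (not the $10_{138}$ data from the figures):

```python
# Hedged sketch: heights of Seifert circles from a containment relation.
contains = {            # contains[c] = circles properly containing circle c
    "C1": set(),        # an outermost circle
    "C2": set(),        # another outermost circle
    "C3": {"C1"},       # nested once, inside C1
    "C4": {"C1", "C3"}, # nested inside both C1 and C3
}

def height(circle):
    """h(C): number of Seifert circles properly containing C."""
    return len(contains[circle])

assert height("C1") == 0 and height("C2") == 0
assert height("C3") == 1
assert height("C4") == 2
```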
A diagram divides the plane into several regions, including the large outside region surrounding the diagram. We will say that two regions $R_{1}$ and $R_{2}$ of a diagram are touching if they meet at a crossing and we will say that the crossing is incident to both $R_{1}$ and $R_{2}$. For example, the regions $A$, $B$, $C$ and $D$ in the figure below are touching.
\begin{figure}[H]
\centering
\includegraphics{touchingregions}
\end{figure}
Observe that in the above diagram two arcs begin bordering $A$ and stop bordering $D$ at the crossing according to the orientation. Similarly one arc begins bordering $B$ and $C$ and stops bordering $B$ and $C$. We will say that if an arc begins bordering a region $R$ at a crossing $c$ then that arc is \emph{going into} $R$. Similarly, if an arc ceases to border a region $R$ at a crossing $c$ then the arc is said to be \emph{going out} of $R$. We then define the following:
\begin{definition}
Let $c$ be a crossing incident to a region $R$ then $c$ is:\\
\mbox{ }\\
a. Crossing into $R$ if and only if two arcs begin bordering $R$ according to the orientation at $c$.\\
\mbox{ }\\
b. Crossing out of $R$ if and only if two arcs stop bordering $R$ according to the orientation at $c$.\\
\mbox{ }\\
c. Sideswiping $R$ if and only if one arc begins bordering $R$ and one arc stops bordering $R$ according to the orientation at $c$.
\end{definition}
In the previous diagram the crossing shown is crossing into $A$, out of $D$, and sideswiping $B$ and $C$.
\begin{lemma}
The number of arcs going into a region $R$ is equal to the number of arcs leaving $R$.
\end{lemma}
\begin{proof}
Every arc going into a region $R$ must connect to a unique arc going out of $R$: otherwise a single arc into $R$ would attach to multiple arcs leaving $R$, or two arcs into $R$ would connect to a single arc going out of $R$, both of which are impossible. It follows that every arc going into $R$ connects to one and only one arc going out of $R$, and every arc going out of $R$ comes from one and only one arc going into $R$. Hence the number of arcs going into $R$ equals the number going out of $R$.
\end{proof}
Note that if $R_{1}$ and $R_{2}$ are two regions of a link diagram with a crossing $c$ that is going out of $R_{1}$ and into $R_{2}$, then, upon smoothing the crossings of the diagram according to the Seifert algorithm, $R_{1}$ and $R_{2}$ will be joined in the same space of the Seifert diagram. Similarly, if there is a crossing out of $R_{2}$ into a region distinct from $R_{1}$, say $R_{3}$, then upon smoothing the crossings $R_{1}$, $R_{2}$, and $R_{3}$ will all be joined in the same space of the Seifert diagram. We then introduce the following relation:
\begin{definition}
Given a diagram $D$ with regions $R_{0},R_{1},\dots, R_{n}$, we will say that two regions $R$ and $R^{\prime}$ are spatially connected, denoted $R\sim R^{\prime}$, if and only if there is a sequence of regions $R_{1},\dots,R_{n}$ with $R=R_{1}$ and $R^{\prime}=R_{n}$ such that for every pair of consecutive regions $R_{i}$ and $R_{i+1}$ there is a crossing $c$ that is either crossing out of $R_{i}$ and into $R_{i+1}$, or otherwise crossing into $R_{i}$ and out of $R_{i+1}$.
\end{definition}
\section{Graph Theory}
A basic knowledge of graph theory is assumed throughout this paper; however, some terms are used frequently enough to merit a restatement of their definitions, as taken from \cite{JA}:
\begin{definition}
Given a graph $G$ with vertex set $V$, a vertex $v\in V$ is a cut vertex if $G\setminus v$ has more connected components than $G$.
\end{definition}
\begin{definition}
A separation of a connected graph is a decomposition of the graph into two nonempty connected subgraphs which have just one vertex in common. This common vertex is called a separating vertex of the graph. A graph is said to be separable if it contains a separating vertex.
\end{definition}
Pictured below is a graph and its smallest nontrivial nonseparable subgraphs:
\begin{figure}[H]
\centering
\includegraphics{nonseparable}
\end{figure}
\begin{definition}
A block $B$ of a finite graph $G$ is a subgraph which is nonseparable and is maximal with respect to this property.
\end{definition}
An example of a graph and its blocks is displayed below:
\begin{figure}[H]
\centering
\includegraphics{blocks}
\end{figure}
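Cut vertices, and hence the block structure, can be found naively by deleting each vertex in turn and counting connected components. The following sketch (a toy graph of our own, not the one in the figure) implements this brute-force test:

```python
# Hedged sketch: brute-force detection of cut (separating) vertices.
def components(vertices, edges):
    """Number of connected components of the graph (vertices, edges)."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for v in vertices:
        if v not in seen:
            count += 1
            stack = [v]
            while stack:                 # depth-first search from v
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return count

def cut_vertices(vertices, edges):
    """Vertices whose deletion increases the number of components."""
    base = components(vertices, edges)
    cuts = set()
    for v in vertices:
        rest = [u for u in vertices if u != v]
        kept = [(a, b) for a, b in edges if v not in (a, b)]
        if components(rest, kept) > base:
            cuts.add(v)
    return cuts

# Two triangles glued at vertex 3: vertex 3 is the unique cut vertex,
# and the two triangles are the blocks.
V = [1, 2, 3, 4, 5]
E = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)]
assert cut_vertices(V, E) == {3}
```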
\iffalse
Recall that a planar graph is a graph that can be embedded into the plane such that no two edges intersect. A planar graph embedded in this way is called a plane graph. We can then define the \emph{dual} of a plane graph:
\begin{definition}
Given a plane graph $G$ we define the planar dual of $G$ denoted $G^{*}$ as the graph obtained in the following way. For every face $f$ of $G$ there is a corresponding vertex $f^{*}$ of $G^{*}$, and corresponding to each edge $e$ of $G$ there is an edge $e^{*}$ of $G^{*}$. Two vertices $f^{*},g^{*}\in G^{*}$ are joined by the edge $e^{*}$ if and only if $f$ and $g$ share $e$ as a common bordering edge in $G$.
\end{definition}
An example of plane graph with labeled faces and it's dual with correspondingly labeled vertices are shown below:
\begin{figure}[H]
\centering
\includegraphics{dualgraph}
\end{figure}
\fi
Given a digraph $G$, we denote the indegree of a vertex $v$ by $\delta^{-}(v)$, the outdegree of $v$ by $\delta^{+}(v)$, and the degree of $v$ by $d(v)=\delta^{-}(v)+\delta^{+}(v)$.
We will also use the following theorem from section 2.4 of \cite{JA}:
\begin{theorem}
A graph $G$ has a cycle decomposition if and only if it is an even graph.
\end{theorem}
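One direction of this theorem can be made concrete by greedily peeling cycles off an even graph: walk along unused edges until a vertex repeats, cut out the closed sub-walk, and repeat. A sketch on a toy multigraph of our own (written for clarity, not efficiency):

```python
# Hedged sketch: greedy cycle decomposition of an even graph.
def cycle_decomposition(edges):
    """Split the edge multiset of an even graph into a list of cycles."""
    remaining = list(edges)
    cycles = []
    while remaining:
        path = [remaining[0][0]]
        while True:
            u = path[-1]
            # u has positive even remaining degree, so an unused edge exists.
            edge = next(e for e in remaining if u in e)
            remaining.remove(edge)
            a, b = edge
            path.append(b if a == u else a)
            if path[-1] in path[:-1]:          # a vertex repeated: cycle found
                i = path.index(path[-1])
                cycles.append(path[i:])        # the closed sub-walk is a cycle
                # edges walked before the cycle go back for a later pass
                for x, y in zip(path[:i], path[1:i + 1]):
                    remaining.append((x, y))
                break
    return cycles

# Two triangles glued at a vertex: an even graph, decomposing into 2 cycles.
E = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 3)]
cycles = cycle_decomposition(E)
assert len(cycles) == 2
assert sum(len(c) - 1 for c in cycles) == len(E)   # every edge used exactly once
```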
\section{The Enhanced Checkerboard Graph}
Let $D$ be a diagram. Checkerboard color the diagram as in \cite{LK}. We can construct a signed planar digraph $\Phi(D)$ called the enhanced checkerboard graph from this diagram in the following way. Assign a black vertex to each black region and a white vertex to each white region including the large ``outside" region of the diagram. Two vertices are connected by a directed edge labeled either $+$ or $-$ according to the scheme shown below. A clarification of how loops are handled is also shown.
\begin{figure}[H]\label{crossingscheme1}
\centering
\includegraphics{digraphadjacencyprime}
\end{figure}
To more clearly illustrate how $\Phi(D)$ is constructed a colored diagram and the corresponding $\Phi(D)$ of $10_{138}$ are shown below. Note that, by construction $\Phi(D)$ is necessarily planar. This is due to the fact that our diagrams come from regular projections which only admit single and double points.
\begin{figure}[H]\label{loops}
\includegraphics{exampledigraph}
\end{figure}
A similar graph construction is explained in \cite{PM} from which we borrow the $\Phi$ notation.
\begin{theorem}
For every vertex $v\in \Phi(D)$, $\delta^{+}(v)=\delta^{-}(v)$.
\end{theorem}
\begin{proof}
Let $v$ be an arbitrary vertex of $\Phi(D)$. Assume for the sake of contradiction that $\delta^{-}(v)\neq\delta^{+}(v)$. We then have that the number of crossings into the region corresponding to $v$, say $R_{v}$, is not equal to the number of crossings out of $R_{v}$. This would imply that the number of arcs into $R_{v}$ is not equal to the number of arcs leaving $R_{v}$, because the only other crossings with arcs going into $R_{v}$ are sideswiping crossings, which contribute one arc into $R_{v}$ and one arc out. We know that the number of arcs going into $R_{v}$ must equal the number leaving, so we have a contradiction, from which we conclude that $\delta^{-}(v)=\delta^{+}(v)$.
\end{proof}
In other words, $\Phi(D)$ is an even graph and has a cycle decomposition; that is to say, every edge is part of a cycle.
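This balance condition is easy to check computationally: representing $\Phi(D)$ as a list of signed directed edges, every vertex should have equal in- and outdegree. The edge list below is hypothetical, not the $10_{138}$ graph from the figure:

```python
# Hedged sketch: verify delta^+(v) = delta^-(v) for a toy signed digraph.
edges = [("b1", "w1", "+"), ("w1", "b1", "+"),   # (tail, head, sign)
         ("b1", "w2", "-"), ("w2", "b1", "-")]

outdeg, indeg = {}, {}
for tail, head, _sign in edges:                  # signs play no role here
    outdeg[tail] = outdeg.get(tail, 0) + 1
    indeg[head] = indeg.get(head, 0) + 1

for v in set(outdeg) | set(indeg):
    assert outdeg.get(v, 0) == indeg.get(v, 0)   # the theorem's balance condition
```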
\begin{theorem}
Two regions $R_{1}$ and $R_{n}$ of a diagram $D$ are part of the same space in the Seifert diagram $\hat{D}_{s}$ if and only if there is a (not necessarily directed) path between their corresponding vertices in $\Phi(D)$.
\end{theorem}
\begin{proof}
($\Rightarrow$) Let $R_{1}$ and $R_{n}$ be two regions that are part of the same space in the Seifert diagram. Then $R_{1}$ and $R_{n}$ are spatially connected, which means there is a sequence of regions $R_{1},\dots,R_{n}$ such that for $1\leq i< n$ there is a crossing $c$ in $D$ that is crossing out of $R_{i}$ and into $R_{i+1}$, or otherwise into $R_{i}$ and out of $R_{i+1}$. In either case, the vertices corresponding to $R_{i}$ and $R_{i+1}$ are joined by an edge in $\Phi(D)$. We may then conclude that there is a path in $\Phi(D)$ from the vertex corresponding to $R_{1}$ to that of $R_{n}$.\\
($\Leftarrow$) Let $v_{0}$ and $v_{n}$ be distinct vertices of $\Phi(D)$ such that there is a (not necessarily directed) path $P=v_{0}, v_{1},v_{2},\dots,v_{n}$ between them. Then for $0\leq i< n$ there is a crossing $c$ in $D$ that is either crossing out of $R_{i}$ and into $R_{i+1}$ or vice versa, which gives us that $R_{0}$ and $R_{n}$ are spatially connected and are therefore in the same space of $\hat{D}_{s}$. This completes the proof.
\end{proof}
We can then view a component of $\Phi(D)$ as a characterization of the site markers in a particular space of a given $\hat{D}_{s}$.
\section{The Seifert Graph and the Labeled Seifert Graph}
As shown in \cite{PC}, we can construct the Seifert graph of a diagram from its corresponding Seifert diagram. This is done by assigning a vertex to each Seifert circle and a signed edge, labeled either $+$ or $-$, between two vertices for every site marker between their corresponding Seifert circles. The sign of each edge is determined by the marker it corresponds to: a negative site marker corresponds to a negatively signed edge and a positive site marker to a positively signed edge. For clarity we will represent positive edges as solid and negative edges as hashed. We will say that two edges are of the same type if they are both solid or both hashed. An example of a Seifert graph from the same diagram of $10_{138}$ used in previous sections is displayed below.
\begin{figure}[H]
\centering
\caption{}
\includegraphics{seifertgraph}
\end{figure}
Here we introduce an extension of the Seifert graph, which we call the labeled Seifert graph, defined in the following way.
\begin{definition}
The class of labeled Seifert graphs $\mathscr{LS}$ is the class containing the graphs $G$ that meet the following conditions:\\
\begin{tabular}{ll}
i. & The vertices of $G$ are labeled with nonnegative integers.\\
ii. & The edges of $G$ are labeled $+$ or $-$.
\end{tabular}
\end{definition}
When we consider a labeled Seifert graph constructed from a particular diagram $D$, we denote that graph by $G(D)$. $G(D)$ is constructed in the same way as the original Seifert graph, with the only addition being that the vertices are labeled with the height of the Seifert circle they correspond to. This is to say, if $C$ is a Seifert circle with $h(C)=i$ then the corresponding vertex in $G(D)$ is labeled with $i$. Displayed below is the same Seifert graph as before, only now as a labeled Seifert graph:
\begin{figure}[H]
\centering
\caption{}
\includegraphics{labeledseifertgraph}
\end{figure}
\iffalse
\begin{figure}[H]\label{crossingsigns}
\centering
\caption{}
\end{figure}
\begin{figure}[H]\label{seifertgraph}
\centering
\caption{}
\end{figure}
\fi
Consider an arbitrary labeled Seifert graph $G$ and let $G_{i}$ be the subgraph of $G$ constructed in the following way: delete all vertices $v$ such that $h(v)< i-1$ or $h(v)>i$, and then delete all edges between vertices of height $i-1$. This potentially disconnected subgraph is $G_{i}$. We will refer to the $G_{i}$'s as \emph{height subgraphs}. Observe that $G=\bigcup_{i=0}^{n}G_{i}$, where $n$ is the maximum height of a circle in the Seifert diagram. For clarity, when discussing a labeled Seifert graph constructed from a diagram $D$, we will denote the height subgraphs by $G_{i}(D)$. For example, the figure below shows the $G_{i}(D)$'s for the same projection of $10_{138}$ we have been using.
\begin{figure}[H]
\centering
\caption{$G_{0}(D)$ and $G_{1}(D)$}
\includegraphics{gis}
\end{figure}
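The construction of the height subgraphs $G_i$ is a simple filtering of the labeled Seifert graph. The following sketch uses a hypothetical labeled graph (not the $10_{138}$ one from the figure):

```python
# Hedged sketch: height subgraphs G_i of a toy labeled Seifert graph.
height = {"a": 0, "b": 0, "c": 1, "d": 1}         # vertex -> label h(vertex)
edges = [("a", "b", "+"), ("a", "c", "-"), ("c", "d", "-"), ("b", "d", "+")]

def G(i):
    """Keep vertices at heights i-1 and i; drop edges between two
    height-(i-1) vertices."""
    keep = {v for v, h in height.items() if h in (i - 1, i)}
    return [(u, v, s) for u, v, s in edges
            if u in keep and v in keep
            and not (height[u] == i - 1 and height[v] == i - 1)]

# G_0 is the height-0 layer; G_1 drops the edge between the two height-0 circles.
assert G(0) == [("a", "b", "+")]
assert G(1) == [("a", "c", "-"), ("c", "d", "-"), ("b", "d", "+")]
assert sorted(G(0) + G(1)) == sorted(edges)       # the G_i cover all of G
```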
\section{Homogeneous Links}
Homogeneous links were defined in \cite{PC} in the following way.
\begin{definition}
Given a diagram $D$ and Seifert graph $G$ we say that a block $B_{i}$ of $G$ is homogeneous if all edges in $B_{i}$ are of the same type. We say that $D$ is homogeneous if all blocks in $G$ are homogeneous. We say that $L$ is homogeneous if it has a homogeneous diagram.
\end{definition}
Note the following result from \cite{PM}:
\begin{theorem}\label{block}
Let $D$ be a diagram, $F$ its corresponding Seifert surface, and $G$ the corresponding Seifert graph. Then all the Seifert circles associated to a block of $G$ have the same height, except possibly one of them, which contains all the others and whose height is one less.
\end{theorem}
Given that our labeled Seifert graph is an extension of the Seifert graph, this result holds for labeled Seifert graphs as well. From this we may conclude the following:
\begin{corollary}
Let $B$ be a block of some labeled Seifert graph $G(D)$. Then there is some $i$ such that $B\subset G_{i}(D)$.
\end{corollary}
\section{Alternative Links}
Recall that in the construction of the Seifert diagram crossings are replaced with positive or negative site markers. Two markers are said to be of the \emph{same type} if they are both positive or are both negative. A space in the Seifert diagram may contain one, two or zero marker types. We then have the following definition from \cite{LK}:
\begin{definition}
A diagram $D$ is alternative if and only if each space in its corresponding Seifert diagram $D_{s}$ contains at most one marker type. A link $L$ is said to be alternative if it has an alternative diagram.
\end{definition}
We can recharacterize this definition in the context of the enhanced checkerboard graph $\Phi(D)$ of a diagram, but first we will need the following definition.
\begin{definition}
Given an enhanced checkerboard graph $\Phi(D)$, we will say that $\Phi(D)$ is alternative if and only if it does not contain a walk with both positive and negative edges.
\end{definition}
\begin{theorem}
Let $D$ be a diagram, then $D$ is alternative if and only if $\Phi(D)$ is alternative.
\end{theorem}
\begin{proof}
($\Rightarrow$) Let $D$ be an alternative diagram and consider $\Phi(D)$. Each crossing of $D$ corresponds to an edge in $\Phi(D)$. Recall that two edges in a connected component of $\Phi(D)$ correspond to markers in the same space of the Seifert diagram of $D$. Because $D$ is alternative, each space contains only one marker type, and hence each connected component of $\Phi(D)$ has only one edge type; that is, there is no walk in $\Phi(D)$ containing both negative and positive edges.\\
($\Leftarrow$) Let $\Phi(D)$ be the enhanced checkerboard graph of the oriented link diagram $D$, and assume that $\Phi(D)$ is alternative. Then there is no walk in $\Phi(D)$ which contains both negative and positive edges. As we have noted, the edges of a connected component of $\Phi(D)$ correspond to the markers in a space of the Seifert diagram of $D$. Because there is no walk containing both edge types in $\Phi(D)$, there is no space in the Seifert diagram of $D$ that has more than one marker type, which is to say that $D$ is alternative. This completes the proof.
\end{proof}
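This characterization turns alternativity of $\Phi(D)$ into a purely combinatorial test: a walk with both signs exists exactly when some (weakly) connected component carries both $+$ and $-$ edges. A sketch with hypothetical edge data, using a small union-find to track components:

```python
# Hedged sketch: test alternativity of a signed digraph Phi(D).
def is_alternative(edges):
    """edges: (u, v, sign) triples; ignores direction, groups by component."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:              # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in edges:                  # weakly connect endpoints
        parent[find(u)] = find(v)
    signs = {}
    for u, _v, s in edges:                 # collect signs per component
        signs.setdefault(find(u), set()).add(s)
    return all(len(s) == 1 for s in signs.values())

# One single-sign component per walk -> alternative; mixed signs -> not.
assert is_alternative([(1, 2, "+"), (2, 3, "+"), (4, 5, "-")])
assert not is_alternative([(1, 2, "+"), (2, 3, "-")])
```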
We can also recharacterize the definition of an alternative diagram in terms of the labeled Seifert graph.
\begin{definition} Given a labeled Seifert graph $G$ we will say that a subgraph $G_{i}$ is alternative if and only if each connected component of $G_{i}$ has at most one edge type. The graph $G$ is alternative if and only if each $G_{i}$ is alternative.
\end{definition}
We may then present our main theorem, a recharacterization of an alternative diagram in the context of labeled Seifert graphs.
\begin{theorem}
Let $D$ be a diagram and $G(D)$ the labeled Seifert graph of $D$, then $D$ is alternative if and only if $G(D)$ is alternative.
\end{theorem}
\begin{proof}
($\Rightarrow$) Let $D$ be an alternative diagram and consider $G(D)$. Assume for the sake of contradiction that $G(D)$ is not alternative. Then there is some $i$ such that $G_{i}(D)$ is not alternative, which is to say some connected component of $G_{i}(D)$ has two edges of different types. Let this component be denoted $H$, let $v_{0}$ be the single vertex in $H$ at height $i-1$, and let all other vertices be denoted $w_{1},\dots,w_{n}$. There are then three cases to check. In the first case, two edges incident to $v_{0}$ are of different types. These edges correspond to site markers of different types in the space associated to $v_{0}$, which would give that $D$ is not alternative. In the second case, two edges amongst the vertices $w_{1},\dots,w_{n}$ are of different types. All edges between the $w_{j}$'s correspond to site markers in the space associated to $v_{0}$, so if any two of these are of different types then $D$ is not alternative. In the final case, an edge incident to $v_{0}$ is of a different type than an edge between two $w_{j}$'s. Edges incident to $v_{0}$ in $G_{i}(D)$ and edges amongst the $w_{j}$'s both correspond to site markers in the space associated to $v_{0}$, so differing types between these edges again imply that $D$ is not alternative. In every case we reach a contradiction, so all $G_{i}(D)$ must be alternative, thereby making $G(D)$ alternative by definition.\\
($\Leftarrow$) Let $G(D)$ be an alternative labeled Seifert graph for some diagram $D$. We wish to show that no space in the Seifert diagram of $D$ contains more than one type of site marker. Assume, for the sake of contradiction, that there is such a space. If this space is the larger outside space, then some Seifert circle of height $0$ is connected to other distinct circles of height $0$ by different site markers, which would produce edges of differing types in the subgraph $G_{0}(D)$, a contradiction. Otherwise, consider a space between circles of height $k$ and height $k+1$. If this space carries differing site markers, then the circle of height $k$ must contain circles of height $k+1$; if these circles have crossings of different types among them, those crossings appear as edges of different types in $G_{k}(D)$. The remaining two possible cases are handled similarly. All of these contradict the assumption that $G(D)$ is alternative, so $D$ must be alternative.
\end{proof}
It was noted in \cite{PC} that all alternative link diagrams are homogeneous. This is easy to see: if a labeled Seifert graph is alternative, then all of its subgraphs $G_{i}(D)$ are alternative, which implies that the blocks contained in each $G_{i}(D)$ are homogeneous. However, not all homogeneous diagrams are alternative. Below are two diagrams of the knot $9_{43}$ with their respective labeled Seifert graphs. The second diagram is alternative and homogeneous, while the first is homogeneous but not alternative.
\begin{figure}[H]
\caption{A homogeneous diagram that is not alternative}
\includegraphics{9431}
\end{figure}
\begin{figure}[H]
\caption{A diagram that is both homogeneous and alternative}
\includegraphics{9432}
\end{figure}
Similarly, not all minimal crossing diagrams of a homogeneous link are homogeneous, as can be seen in the infamous ``Perko Pair", which answers Question 1 from \cite{PC}. This example also shows that minimal crossing diagrams of alternative links are not necessarily alternative. Note that $10_{161}$ might not be the smallest homogeneous link that has a nonhomogeneous minimal crossing diagram. All minimal crossing diagrams of alternating links are alternating, and therefore homogeneous and alternating, so such a minimal example must be nonalternating. The smallest nonalternating homogeneous link is $8_{19}$ \cite{PC}. The sequence of Reidemeister moves between the two diagrams of $10_{161}$, with the corresponding spatial and labeled Seifert graphs, is included in the appendix.
\begin{figure}[H]
\caption{A minimal crossing homogeneous diagram and a minimal crossing nonhomogeneous diagram of $10_{161}$}
\includegraphics{perko1}
\end{figure}
\section{Questions}
In \cite{PC} it was asked whether every homogeneous link has a homogeneous minimal crossing diagram. It is natural to ask the same of alternative links.
\begin{question}
Does every alternative link have an alternative minimal crossing diagram?
\end{question}
\begin{question}
Given the labeled Seifert graph $G(D)$ of a diagram $D$, is there a sequence of Reidemeister moves that can be performed on $D$ to obtain a diagram $D^{\prime}$ whose labeled Seifert graph $G(D^{\prime})$ has different heights but is otherwise isomorphic to $G(D)$?
\end{question}
\begin{figure}[H]
\caption{$G(D)$ and $G(D^{\prime})$}
\includegraphics{figure9}
\end{figure}
\begin{question}
Is there a homogeneous link that is not alternative?
\end{question}
There is a complete classification of homogeneous knots up to 9 crossings in \cite{PC}. All homogeneous links up to $10_{134}$ can be seen to be alternative from their minimal crossing diagrams given in \cite{JC}. The first homogeneous knot listed that is not immediately identifiable as alternative is $10_{138}$.
\begin{figure}[H]
\caption{Does $10_{138}$ admit an alternative diagram?}
\includegraphics{figure10}
\end{figure}
\bigskip
\noindent\emph{Alternative links, homogeneous links, and graphs}. arXiv:1410.6954 [math.GT], October 2014. https://arxiv.org/abs/1410.6954.

\noindent Abstract: In this paper we review the definitions of homogeneous and alternative links. We also give two new characterizations of an alternative link diagram, one within the context of the enhanced checkerboard graph and another from the labeled Seifert graph.
\bigskip
\noindent\textbf{Theoretical Analysis of the Advantage of Deepening Neural Networks}\\
arXiv:2009.11479. https://arxiv.org/abs/2009.11479

\noindent\textbf{Abstract.} We propose two new criteria for understanding the advantage of deepening neural networks. To understand this advantage, it is important to know the expressivity of the functions computable by deep neural networks: unless deep neural networks have enough expressivity, they cannot perform well even when learning is successful. In this situation, the proposed criteria contribute to understanding the advantage of deepening neural networks, since they evaluate expressivity independently of the efficiency of learning. The first criterion measures the approximation accuracy of deep neural networks to the target function, reflecting the fact that the goal of deep learning is to approximate the target function by deep neural networks. The second criterion measures a property of the linear regions of functions computable by deep neural networks, reflecting the fact that deep neural networks with piecewise linear activation functions are themselves piecewise linear. Furthermore, using the two criteria, we show that increasing layers is more effective than increasing units at each layer for improving the expressivity of deep neural networks.

\section{Introduction}
\label{secintro}
In recent research on machine learning, deep neural networks have shown very good predictive performance. For example, several papers report that increasing the number of layers in neural networks improves the accuracy of image recognition, e.g., \cite{Simonyan, Veit}, and others report the same for speech recognition, e.g., \cite{Hochreiter, Hochreiter97}. Neural networks are getting deeper and deeper with each passing year.
In this situation, many researchers are interested in the relation between the depth of neural networks and their good performance. In order to show the advantage of deepening neural networks, the following two aspects should be distinguished.
\begin{enumerate}
\item the expressivity of deep neural networks as functions which express the relation between explanatory variables and objective variables.
\item the efficiency of learning by the gradient method.
\end{enumerate}
The prediction accuracy after learning depends on both the expressivity and the efficiency of learning. Unless deep neural networks have enough expressivity, they cannot perform well even when learning is successful.
Therefore, some studies evaluate deep neural networks while distinguishing between the expressivity and the efficiency of learning, e.g., \cite{Yarotsky18, Telgarsky16}. Some of them assume that the data are scattered around a target function and investigate the approximation accuracy achieved when approximating the target function by deep neural networks. They consider the distance between deep neural networks and the target function, and regard that the shorter this distance is, the more expressivity the deep neural networks have. For example, some studies ask how many layers and units are necessary for the distance between a deep neural network and the target function to be less than $\epsilon$, for any $\epsilon>0$ and any target function, e.g., \cite{Cybenko, Hornik}. Others ask how the lower bound on this distance depends on the depth and the number of units, e.g., \cite{Mhaskar, Yarotsky}.
On the other hand, other studies, likewise distinguishing the expressivity from the efficiency of learning, investigate the linear regions of functions computable by deep neural networks. Since deep neural networks whose activation functions are piecewise linear are themselves piecewise linear, studying their linear regions is a good way to understand the shape of the functions drawn by these networks. For example, some studies count the number of linear regions of deep neural networks \cite{Montufar, Pascanu, Serra}, regarding a larger number of linear regions as more expressivity. Others compute the volume of the boundary between two linear regions of deep neural networks \cite{Hanin}, regarding a larger boundary volume as more expressivity.
In this paper, we propose two criteria to evaluate the expressivity of deep neural networks independently of the efficiency of learning, in the spirit of the studies above. The first criterion evaluates the approximation accuracy of deep neural networks to the target function: it is the ratio of parameter values that make the distance between the network and the target function small. We regard a larger ratio as more expressivity. The novelty of this criterion is that it quantifies the volume of parameter values yielding functions close to the target function, showing not only that appropriate parameter values exist but also how plentiful they are.
The second criterion evaluates the linear regions of functions computable by deep neural networks with piecewise linear activation functions: it is the size of the maximal linear region. We regard a smaller maximal linear region as more expressivity. The novelty of this criterion is that it measures how finely the least flexible part of the input domain is divided, revealing whether the linear regions are scattered finely and uniformly.
The above two criteria are related. A small maximal linear region is a requirement for a good approximation to the target function when the target function is smooth, as illustrated in Fig. \ref{figlinearappro}: unless the size of the maximal linear region of a deep neural network is small, a gap between the network and the target function remains no matter how the parameters are chosen.
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[height=3.5cm,width=\linewidth]{linear_region_approximation.png}}
\caption{The relation between the size of the maximal linear regions of deep neural networks and the approximation to the target function by deep neural networks. If the size of the maximal linear region of deep neural networks is large, a gap inevitably occurs between deep neural networks and the target function, as the left figure. On the other hand, if the size of the maximal linear region of deep neural networks is small, there is a possibility that the target function is approximated by deep neural networks, as the right figure.}
\label{figlinearappro}
\end{center}
\end{figure}
Moreover, using the two criteria, we show that increasing layers is more effective than increasing units at each layer for improving the expressivity of neural networks. We support this claim with both theoretical proofs and computer simulations.
In summary, our contributions are as follows.
\begin{enumerate}
\item We propose new criteria to evaluate the expressivity of deep neural networks independently of the efficiency of learning.
\item Using the new criteria, we show that increasing layers is more effective than increasing units at each layer for improving the expressivity of neural networks.
\end{enumerate}
\begin{comment}
In this paper, we explain the architecture of deep neural networks in Section \ref{secdeep}. In Section \ref{secdef1}, we define the first criterion. By the first criterion, we show that increasing layers is more effective than increasing units at each layer on improving the expressivity of neural networks, in Section \ref{secexex}. In Section \ref{secdef}, we define the second criterion. By the second criterion, we show the same fact in Section \ref{seceva}. Section \ref{secdef} includes the proofs of lemmas which are needed for the proofs of the theorem shown in Section \ref{seceva}.
\end{comment}
\section{Definitions and Theoretical Facts}
\subsection{Deep Neural Networks}
\label{secdeep}
We define a neural network as a composite function $\bm{F}_{\bm{\theta}} : \mathbb{R}^{n_0}\to\mathbb{R}^{n_L}$ parametrized by $\bm{\theta}$, which alternately computes linear functions $\bm{f}^{(l)}: \mathbb{R}^{n_{l-1}}\to\mathbb{R}^{n_l}\ (l=1,\cdots,L)$ and non-linear activation functions $\bm{g}^{(l)} : \mathbb{R}^{n_l}\to\mathbb{R}^{n_l}\ (l=1,\cdots,L-1)$ such as
\begin{align}
\bm{F}_{\bm{\theta}}=\bm{f}^{(L)}\circ \bm{g}^{(L-1)}\circ \bm{f}^{(L-1)}\circ\cdots\circ \bm{g}^{(1)}\circ \bm{f}^{(1)}. \nonumber
\end{align}
The linear function $\bm{f}^{(l)}$ is given by
\begin{align}
f^{(l)}_j(\bm{x})=\sum^{n_{l-1}}_{i=1}w^{(l)}_{ji}x_i+b^{(l)}_j\ (j=1,\cdots,n_l), \nonumber
\end{align}
where $x_i$ is the $i$-th coordinate of $\bm{x}$ and $f^{(l)}_j$ is the $j$-th coordinate of $\bm{f}^{(l)}$. The parameter $\bm{\theta}$ is a vector which consists of $w^{(l)}_{ji}\in\mathbb{R}$ and $b^{(l)}_j\in\mathbb{R}$ for each $i\in\{1,2,\cdots,n_{l-1}\},\ j\in\{1,2,\cdots,n_l\}$ and $l\in\{1,2,\cdots,L\}$. We define $\bm{\Theta}\subset\mathbb{R}^M$ as the set of $\bm{\theta}$, where $M$ is the number of parameters of the neural network. We call $\bm{g}^{(l)}\circ\bm{f}^{(l)}$ the $l$-th layer and call the $j$-th coordinate of $\bm{g}^{(l)}\circ \bm{f}^{(l)}$ the $j$-th unit of the $l$-th layer. Note that the $0$-th layer, called the input layer, is not counted in the number of layers.
Deep neural networks are used for supervised learning: if the values of $\bm{\theta}$ are set appropriately, they can express the relation between explanatory variables and objective variables.
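The composite function defined above can be sketched in a few lines. The following Python/NumPy code is an illustrative implementation, with ReLU chosen as the piecewise linear activation; the layer sizes and random parameters are arbitrary assumptions for the example, not values from this paper.

```python
# Minimal sketch of the composite function F_theta from this section,
# alternating linear maps f^(l) and activations g^(l); the last layer
# is linear (no activation), as in the definition.
import numpy as np

def forward(x, weights, biases):
    relu = lambda z: np.maximum(0.0, z)   # illustrative choice of g^(l)
    for l, (W, b) in enumerate(zip(weights, biases)):
        x = W @ x + b                     # f^(l)
        if l < len(weights) - 1:          # g^(l) only for l = 1, ..., L-1
            x = relu(x)
    return x

rng = np.random.default_rng(0)
sizes = [3, 4, 4, 1]                      # n_0 = 3, two hidden layers, n_L = 1
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
bs = [rng.standard_normal(m) for m in sizes[1:]]
y = forward(rng.standard_normal(3), Ws, bs)
```

With $L=3$ here, the parameter vector $\bm{\theta}$ consists of all entries of the `Ws` and `bs` lists.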
\subsection{The Ratio of the Desired Parameters}
\label{secdef1}
In this section, we assume a probability distribution for the objective variables in supervised learning and propose a new criterion to evaluate the expressivity of functions computable by deep neural networks under that assumption. For simplicity, we restrict the dimension of the objective variables to $1$. Let the distribution of the objective variables be the normal distribution
\begin{align}
y = F^*(\bm{x})+\varepsilon\ \ (\varepsilon\sim N(0, \sigma^2)), \label{eqnorm}
\end{align}
where $F^* : \mathbb{R}^{n_0}\to\mathbb{R}$ is a continuous function. The goal of deep learning is to express the relation between explanatory variables and objective variables by approximating $F^*$ with a deep neural network $F_{\bm{\theta}}$ from the set $\{F_{\bm{\theta}}\ |\ \bm{\theta}\in\bm{\Theta}\}$. Therefore, we call $F^*$ the target function in this paper.
When approximating the target function $F^*$ by deep neural networks $F_{\bm{\theta}}$, it is desirable for the set $\{F_{\bm{\theta}}\ |\ \bm{\theta}\in\bm{\Theta}\}$ to contain many functions close to $F^*$. However, few studies have investigated the volume of the functions close to $F^*$ in $\{F_{\bm{\theta}}\ |\ \bm{\theta}\in\bm{\Theta}\}$, although some prove that at least one function close to $F^*$ exists in $\{F_{\bm{\theta}}\ |\ \bm{\theta}\in\bm{\Theta}\}$, e.g., \cite{Cybenko, Yarotsky}. Therefore, we define a new criterion to investigate this volume independently of the efficiency of learning, as follows.
\begin{defi}\label{defibetterparam}
Let $F_{\bm{\theta}} : \mathbb{R}^{n_0}\to\mathbb{R}$ be a deep neural network. Let $F^* : \mathbb{R}^{n_0}\to\mathbb{R}$ be the target function defined as (\ref{eqnorm}). Moreover, we assume that the explanatory variables $\bm{x}\in\mathbb{R}^{n_0}$ are random variables with a probability density function $p(\bm{x})$. Then we define \emph{the ratio of the desired parameters} between the set $\mathcal{F}:=\{F_{\bm{\theta}}\ |\ \bm{\theta}\in\bm{\Theta}\}$ and $F^*$ as
\begin{align}
R_{\epsilon}(\mathcal{F}, F^*):=\frac{\int_{\bm{\theta}\in\bm{\Theta}_{\epsilon}(\mathcal{F}, F^*)}d\bm{\theta}}{\int_{\bm{\theta}\in\bm{\Theta}}d\bm{\theta}}, \label{eqR}
\end{align}
where
\begin{align}
d(F_{\bm{\theta}}, F^*)&:=\int_{\bm{x}\in\mathbb{R}^{n_0}}(F_{\bm{\theta}}(\bm{x})-F^*(\bm{x}))^2p(\bm{x})d\bm{x}, \label{eqd}\\
\bm{\Theta}_{\epsilon}(\mathcal{F}, F^*)&:=\{\bm{\theta}\in\bm{\Theta}\ |\ d(F_{\bm{\theta}}, F^*)\leq\epsilon\}. \nonumber
\end{align}
\end{defi}
We can say that $d(F_{\bm{\theta}}, F^*)$ represents the distance between the deep neural network $F_{\bm{\theta}}$ and the target function $F^*$, and that $R_{\epsilon}(\mathcal{F}, F^*)$ is the ratio of the values of $\bm{\theta}$ which make that distance small. Therefore, we regard that the larger $R_{\epsilon}(\mathcal{F}, F^*)$ is, the more functions close to the target function the set $\mathcal{F}$ contains.
The ratio of the desired parameters quantifies the volume of parameter values which give functions close to the target function; hence it tells us not only that appropriate parameter values exist but also how plentiful they are. In the context of learning with the gradient method, the ratio of the desired parameters may also indicate the number of local solutions of the loss function.
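For a concrete feel for Definition \ref{defibetterparam}, the ratio $R_{\epsilon}(\mathcal{F}, F^*)$ can be estimated by sampling $\bm{\theta}$ uniformly from $\bm{\Theta}$. The sketch below uses a toy one-parameter model $F_{\theta}(x)=\theta x$ with target $F^*(x)=x$, $p(x)$ uniform on $[0,1]$, and $\bm{\Theta}=[-2,2]$; all of these choices are illustrative assumptions, not from this paper.

```python
# Toy Monte Carlo estimate of the ratio of the desired parameters.
import numpy as np

rng = np.random.default_rng(0)
eps = 0.03
thetas = rng.uniform(-2.0, 2.0, size=200_000)   # uniform samples from Theta
xs = rng.uniform(0.0, 1.0, size=5_000)          # samples from p(x)

# d(F_theta, F*) = E_x[(theta*x - x)^2] = (theta - 1)^2 * E[x^2]
m2 = np.mean(xs ** 2)                           # estimate of E[x^2] = 1/3
d = (thetas - 1.0) ** 2 * m2
R_eps = np.mean(d <= eps)
# Exact value in this toy case: |{theta : (theta-1)^2/3 <= eps}| / 4
#   = 2*sqrt(3*eps) / 4 = 0.15 for eps = 0.03.
```

The estimate concentrates around the exact ratio as the sample sizes grow, which is all the criterion requires in practice.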
\subsection{Deep Neural Networks and Linear Regions}
\label{secdef}
\subsubsection{ReLU Neural Networks}
We can use several kinds of non-linear functions for the activation function $\bm{g}^{(l)}$. One example is the ReLU function, defined as $g(x)=\max\{0, x\}$. ReLU neural networks have the following special property.
\begin{lem}[\cite{Yarotsky}\ ]\label{lem1}
Let $\mathcal{D}$ be a bounded subset of $\mathbb{R}^{n_0}$. Let $\bm{F}_{\bm{\theta}} : \mathcal{D}\to\mathbb{R}^{n_L}$ be a ReLU neural network with $L$ layers, $M$ parameters and $S$ units in total. Then there exists a neural network $\tilde{\bm{F}}_{\bm{\theta}} : \mathcal{D}\to\mathbb{R}^{n_L}$ with any continuous piecewise linear activation function, having $L$ layers, $4M$ parameters and $2S$ units in total, and that computes the same function as $\bm{F}_{\bm{\theta}}$.
\end{lem}
Lemma \ref{lem1} states that deep neural networks with any piecewise linear activation function can represent any ReLU neural network. In the proof in \cite{Yarotsky}, two units of $\tilde{\bm{F}}_{\bm{\theta}}$ are assigned to each unit of $\bm{F}_{\bm{\theta}}$ when replacing $\bm{F}_{\bm{\theta}}$ with $\tilde{\bm{F}}_{\bm{\theta}}$; hence the number of units at the $l$-th layer increases from $n_l$ to $2n_l$. Lemma \ref{lem1} is used in the proof of the theorem in Section \ref{secbene}.
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[height=4.5cm,width=7cm]{Figure5.png}}
\caption{The examples of linear regions of piecewise linear functions.}
\label{fig5}
\end{center}
\end{figure}
\subsubsection{Fineness of Linear Regions}
The ReLU function $g(x)=\max\{0, x\}$ is a piecewise linear function whose slope changes at the origin. If a neural network has piecewise linear activation functions like this, then $\bm{f}^{(l)}\ (l=1,2,\cdots,L)$ are linear functions and $\bm{g}^{(l)}\ (l=1,2,\cdots,L-1)$ are piecewise linear functions, so the network itself is a piecewise linear function. Based on this fact, we introduce the notion of a \emph{linear region} to consider the flexibility of deep neural networks with piecewise linear activation functions. It is defined as follows.
\begin{defi}[\cite{Montufar}\ ]\label{defi1}
Let $\mathcal{D}$ be a bounded subset of $\mathbb{R}^{n}$. Let $\bm{F} : \mathcal{D}\to\mathbb{R}^{m}$ be a piecewise linear function. Let $U$ be an $n$-dimensional interval in $\mathcal{D}$. Let $\bm{F}|_U : U\to\mathbb{R}^{m}$ be the function given by restricting the input domain of $\bm{F}$ from $\mathcal{D}$ to $U$. We say that $U$ is a \emph{linear region} of $\bm{F}$ if the following hold.
\begin{itemize}
\item $\bm{F}|_U$ is linear.
\item For any subset $V\subset\mathcal{D}$ such that $V\supsetneq U$, $\bm{F}|_V$\ is not linear.
\end{itemize}
\end{defi}
\begin{ex}
For example, let $F : [0,1]\to\mathbb{R}$ be the function in Fig. \ref{fig5}. We can say that $[0, \frac{1}{4}]$, $[\frac{1}{4}, \frac{2}{3}]$ and $[\frac{2}{3}, 1]$ are linear regions of $F$. $[\frac{1}{3}, \frac{1}{2}]$ is not a linear region.
\end{ex}
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[height=3.7cm,width=9.5cm]{Figure7.png}}
\caption{The problem of evaluating the flexibility of deep neural networks by the number of linear regions. We cannot distinguish between $F_1$ and $F_2$ when we use the number of linear regions. Therefore, we propose a new criterion in this paper.}
\label{fig7}
\end{center}
\end{figure}
In this paper, the set of linear regions of $\bm{F}$ is represented by $L(\bm{F})$. We note that
\begin{align}
\bigcup_{U\in L(\bm{F})}U=\mathcal{D},\ \ \ \ \ \sum_{U\in L(\bm{F})}\int_{\bm{x}\in U}d\bm{x}=\int_{\bm{x}\in\mathcal{D}}d\bm{x} \label{eqpiece}
\end{align}
when $\bm{F} : \mathcal{D}\to\mathbb{R}^{m}$ is a piecewise linear function.
There have been some studies that evaluate the flexibility of deep neural networks by the number of linear regions \cite{Pascanu, Montufar, Serra}. However, there is room for improvement when we want to know whether the linear regions are scattered uniformly: the number of linear regions cannot ensure that many linear regions do not cluster in one part of the input domain. This problem appears in Fig. \ref{fig7}: $F_2$ loses its flexibility in the left-side area, yet $F_1$ and $F_2$ are judged to have similar flexibility by the number of linear regions. The same problem arises when evaluating flexibility by the volume of the boundary between two linear regions, as proposed in \cite{Hanin}. Therefore, we propose a new criterion to evaluate the flexibility of deep neural networks. It is defined as follows.
\begin{defi}\label{defi2}
Let $\mathcal{D}$ be a bounded subset of $\mathbb{R}^{n}$. We define the \emph{fineness} of linear regions of a piecewise linear function $\bm{F} : \mathcal{D}\to\mathbb{R}^{m}$ such as
\begin{align}
I(\bm{F}):=\max_{U\in L(\bm{F})}\frac{\int_{\bm{x}\in U}d\bm{x}}{\int_{\bm{x}\in \mathcal{D}}d\bm{x}}
\label{eqif}
\end{align}
\end{defi}
\begin{ex}
We consider the example function $F : [0,1]\to\mathbb{R}$ in Fig. \ref{fig5}. In this example, the fineness of linear regions $I(F)$ is computed as follows.
\begin{align}
I(F)&=\max_{U\in\left\{\left[0, \frac{1}{4}\right], \left[\frac{1}{4}, \frac{2}{3}\right], \left[\frac{2}{3}, 1\right]\right\}}\frac{\int_{x\in U}dx}{\int_{x\in [0,1]}dx}=\int^{\frac{2}{3}}_{\frac{1}{4}}dx=\frac{5}{12}. \nonumber
\end{align}
\end{ex}
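In one dimension, the computation in this example amounts to taking the largest gap between consecutive breakpoints. A small sketch (representing a piecewise linear function on $[0,1]$ by its breakpoint list is an assumption made for illustration):

```python
# Fineness I(F) of a 1-D piecewise linear function on [a, b],
# given the interior breakpoints separating its linear regions.
def fineness(breakpoints, a=0.0, b=1.0):
    """Length of the largest linear region divided by the domain length."""
    pts = [a] + sorted(breakpoints) + [b]
    return max(q - p for p, q in zip(pts, pts[1:])) / (b - a)

I_F = fineness([1 / 4, 2 / 3])   # the function of Fig. 5: regions of
                                 # length 1/4, 5/12, 1/3, so I(F) = 5/12
```

A function with no breakpoints is linear on the whole domain and has fineness $1$, the worst possible value.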
Note that $0\leq I(\bm{F})\leq 1$. We regard that the smaller the value of $I(\bm{F})$ is, the finer the linear regions of $\bm{F}$ are. The fineness of linear regions gives the size of the maximal linear region of a piecewise linear function, and thus measures how finely the least flexible area of the input domain is divided. By this criterion, unlike by the number of linear regions, we can tell whether the linear regions are scattered finely and uniformly over the input domain. In the case of a neural network $\bm{F}_{\bm{\theta}}$ whose activation functions are piecewise linear, the fineness of linear regions $I(\bm{F}_{\bm{\theta}})$ changes with the values of $\bm{\theta}$. Therefore, we regard that the smaller the value of $\min_{\bm{\theta}\in\bm{\Theta}}I(\bm{F}_{\bm{\theta}})$ is, the more flexibility the set $\{\bm{F}_{\bm{\theta}}\ |\ \bm{\theta}\in\bm{\Theta}\}$ has.
For readers interested in the relation between the number of linear regions and the fineness of linear regions, we note the following fact.
\begin{coro}\label{coro1}
Let $\mathcal{D}$ be a bounded subset of $\mathbb{R}^{n}$. Let $\bm{F} : \mathcal{D}\to\mathbb{R}^{m}$ be a piecewise linear function. The fineness of linear regions $I(\bm{F})$ and the number of linear regions $|L(\bm{F})|$ satisfy
\begin{align}
|L(\bm{F})|\geq \frac{1}{I(\bm{F})}. \nonumber
\end{align}
\end{coro}
\begin{proofc}
From (\ref{eqpiece}),
\begin{align}
\int_{\bm{x}\in\mathcal{D}}d\bm{x}&=\sum_{U\in L(\bm{F})}\int_{\bm{x}\in U}d\bm{x} \nonumber\\
&\leq |L(\bm{F})|\times \max_{U\in L(\bm{F})}\int_{\bm{x}\in U}
d\bm{x}. \nonumber
\end{align}
Therefore,
\begin{align}
|L(\bm{F})|\geq \frac{\int_{\bm{x}\in\mathcal{D}}d\bm{x}}{\max_{U\in L(\bm{F})}\int_{\bm{x}\in U}d\bm{x}}=\frac{1}{I(\bm{F})}\ \ \ \ (\because\ (\ref{eqif})). \nonumber
\end{align}
\qed
\end{proofc}
\subsubsection{The Partial Order of Piecewise Linear Functions}
Next, we define a partial order between two piecewise linear functions as follows.
\begin{defi}\label{defi3}
Let $\mathcal{D}$ be a bounded subset of $\mathbb{R}^{n}$. Let $\bm{F}, \bm{G} : \mathcal{D}\to\mathbb{R}^{m}$ be two piecewise linear functions. We say that $\bm{G}\preceq_r \bm{F}$, where $0\leq r\leq 1$, if the following statements hold.
\begin{itemize}
\item For any linear region $U\in L(\bm{F})$, there exist $s\in\mathbb{N}\ (s\geq1)$ and some linear regions $V_i\in L(\bm{G})\ (i=1,2,\cdots,s)$ such that
\begin{align}
U=\bigcup^{s}_{i=1}V_i. \label{equ}
\end{align}
\item For any linear region $U\in L(\bm{F})$ and $V\in\{V'\in L(\bm{G})\ |\ V'\subset U\}$,
\begin{align}
\int_{\bm{x}\in V}d\bm{x}\leq r\int_{\bm{x}\in U}d\bm{x}. \label{eqr}
\end{align}
\end{itemize}
\end{defi}
\begin{ex}
For example, let $F, G : [0,1]\to\mathbb{R}$ be the functions in Fig. \ref{fig5}. We can say that $G\preceq_{\frac{5}{9}} F$ because $r$ is computed such as
\begin{align}
r=\frac{\frac{1}{4}-\frac{1}{9}}{\frac{1}{4}-0}=\frac{5}{9}. \nonumber
\end{align}
\end{ex}
Equation (\ref{equ}) states that the linear regions of $\bm{G}$ refine those of $\bm{F}$, and (\ref{eqr}) states that each linear region of $\bm{G}$ occupies at most an $r$-fraction of the linear region of $\bm{F}$ that contains it. In fact, we can say the following.
\begin{lem}\label{lem4}
Let $\mathcal{D}$ be a bounded subset of $\mathbb{R}^{n}$. Let $\bm{F}, \bm{G} : \mathcal{D}\to\mathbb{R}^{m}$ be two piecewise linear functions. We can say that
\begin{align}
\bm{G}\preceq_r \bm{F}\ \Rightarrow\ I(\bm{G})\leq rI(\bm{F}), \nonumber
\end{align}
where $0\leq r\leq 1$.
\end{lem}
\setcounter{proofl}{1}
\begin{proofl}
\begin{align}
I(\bm{G})&=\max_{V\in L(\bm{G})}\frac{\int_{\bm{x}\in V}d\bm{x}}{\int_{\bm{x}\in\mathcal{D}}d\bm{x}}\ \ \ \ (\because (\ref{eqif})) \nonumber\\
&=\max_{U\in L(\bm{F})}\max_{V\in\{V'\in L(\bm{G})\ |\ V'\subset U\}}\frac{\int_{\bm{x}\in V}d\bm{x}}{\int_{\bm{x}\in\mathcal{D}}d\bm{x}}\ \ \ \ (\because (\ref{equ})) \nonumber\\
&\leq \max_{U\in L(\bm{F})}r\frac{\int_{\bm{x}\in U}d\bm{x}}{\int_{\bm{x}\in\mathcal{D}}d\bm{x}}\ \ \ \ (\because (\ref{eqr}))\nonumber\\
&=rI(\bm{F}) \ \ \ \ (\because (\ref{eqif})).\nonumber
\end{align}
\qed
\end{proofl}
Lemma \ref{lem4} will be used for the proof of the theorem shown in Section \ref{secbene}.
\subsubsection{Linear Regions and Identification of Input Subsets}
Montufar et al. \cite{Montufar} define the identification of input subsets as follows.
\begin{defi}[\cite{Montufar}\ ]\label{defi4}
We say that a map $\bm{g} : \mathbb{R}^n\to\mathbb{R}^{m}$ \emph{identifies} $K(\in\mathbb{N})$ subsets $U_1,\cdots,U_K(\subset\mathbb{R}^n)$ onto $V(\subset\mathbb{R}^m)$ if it maps them to a common image $V=\bm{g}(U_1)=\cdots=\bm{g}(U_K)$ in $\mathbb{R}^{m}$. We also say that $U_1,\cdots,U_K$ are \emph{identified} onto $V$ by $\bm{g}$.
\end{defi}
\begin{ex}
For example, the four quadrants of two-dimensional Euclidean space are identified onto $[0,\infty)^2$ by the function $\bm{g} : \mathbb{R}^2\to[0, \infty)^2$ given by $\bm{g}(x_1, x_2)=(|x_1|, |x_2|)^T$.
\end{ex}
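This example can be verified numerically: applying $\bm{g}$ to the same magnitudes with all four sign patterns yields one common image. A small sketch:

```python
# Definition 4 in action: g(x1, x2) = (|x1|, |x2|) identifies the four
# quadrants onto [0, inf)^2 -- every sign pattern maps the same
# magnitudes to the same image.
import numpy as np

def g(x):
    return np.abs(x)

mags = np.array([[0.3, 1.2], [2.0, 0.1]])        # sample points in (0, inf)^2
signs = [(1, 1), (-1, 1), (1, -1), (-1, -1)]     # the four quadrants
images = [g(mags * np.array(s)) for s in signs]
same = all(np.allclose(images[0], im) for im in images[1:])
```

In the language of (\ref{eqid}), the preimage $\bm{g}^{-1}(V)$ of any $V\subset[0,\infty)^2$ is exactly the union of its four sign-flipped copies.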
If $U_1,\cdots,U_K(\subset\mathbb{R}^n)$ are all of the subsets identified onto $V(\subset\mathbb{R}^{m})$ by $\bm{g} : \mathbb{R}^n\to\mathbb{R}^{m}$, we can say that
\begin{align}
\bm{g}^{-1}(V)=\bigcup^K_{k=1}U_k. \label{eqid}
\end{align}
Montufar et al. \cite{Montufar} interpret (\ref{eqid}) as meaning that the common subset $V$ is replicated to $U_1,\cdots,U_K$.
We can prove the following lemma related to Definition \ref{defi4}.
\begin{lem}\label{lem3}
Let $\mathcal{D}_1$ be a bounded subset of $\mathbb{R}^{n}$. Let $\mathcal{D}_2$ be a bounded subset of $\mathbb{R}^{n'}$. Let $\bm{g} : \mathcal{D}_1\to\mathcal{D}_2$ be a piecewise linear function identifying all its linear regions onto $\mathcal{D}_2$. Let $\bm{f} : \mathcal{D}_2\to\mathbb{R}^{m}$ be a piecewise linear function. The following holds.
\begin{align}
\bm{f}\circ \bm{g}\preceq_{I(\bm{f})}\bm{g}. \nonumber
\end{align}
\end{lem}
\begin{proofl}
For any linear region $U\in L(\bm{g})$ and $V\in L(\bm{f})$,\ $\bm{g}|^{-1}_U(V)$ is a linear region of $\bm{f} \circ \bm{g}$ and
\begin{align}
U&=\bm{g}|_U^{-1}(\mathcal{D}_2)\ \ \ \ (\because \bm{g}(U)=\mathcal{D}_2)\nonumber\\
&=\bm{g}|_U^{-1}\left(\bigcup_{V\in L(\bm{f})}V\right)\ \ \ \ (\because (\ref{eqpiece})) \nonumber\\
&=\bigcup_{V\in L(\bm{f})}\bm{g}|_U^{-1}(V). \nonumber
\end{align}
Since $\bm{g}|_U^{-1}$ is linear, its Jacobian determinant satisfies $\left|J_{\bm{g}|_U^{-1}}\right|\neq0$. Therefore, by integration by substitution with $\bm{x}=\bm{g}|_U^{-1}(\bm{y})$,
\begin{align}
\int_{\bm{x}\in U}d\bm{x}&=\int_{\bm{x}\in \bm{g}|_U^{-1}(\mathcal{D}_2)}d\bm{x}=\left|J_{\bm{g}|_U^{-1}}\right|\int_{\bm{y}\in\mathcal{D}_2}d\bm{y}. \label{eqk}
\end{align}
From (\ref{eqk}),
\begin{align}
\left|J_{\bm{g}|_U^{-1}}\right|=\frac{\int_{\bm{x}\in U}d\bm{x}}{\int_{\bm{y}\in\mathcal{D}_2}d\bm{y}}. \label{eqj}
\end{align}
Moreover,
\begin{align}
\int_{\bm{x}\in\bm{g}|_U^{-1}(V)}d\bm{x}&=\left|J_{\bm{g}|_U^{-1}}\right|\int_{\bm{y}\in V}d\bm{y} \nonumber\\
&=\frac{\int_{\bm{x}\in U}d\bm{x}}{\int_{\bm{y}\in\mathcal{D}_2}d\bm{y}}\int_{\bm{y}\in V}d\bm{y}\ \ \ \ (\because \mbox{(\ref{eqj})}) \nonumber\\
&\leq I(\bm{f})\int_{\bm{x}\in U}d\bm{x}\ \ \ \ (\because (\ref{eqif})). \nonumber
\end{align}
\qed
\end{proofl}
\subsubsection{The Relation Between the Depth of Neural Networks and the Fineness of Linear Regions}
\label{secbene}
We can show that increasing layers is more effective than increasing units at each layer for improving the expressivity of neural networks by proving the following theorem.
\begin{theo}\label{theo}
Let $\tilde{\bm{F}}_{\bm{\theta}} : [0,1]^{n_0}\to\mathbb{R}^{n_L}$ be a neural network
with any piecewise linear activation function, having $L$ layers and $n_l$ units at the $l$-th layer, where $n_l\geq 2n_0$ for any $l\in\{1,2,\cdots,L-1\}$. We can say that
\begin{align}
^\exists\bm{\theta}\in\bm{\Theta}\ \ \mathrm{s.t.}\ \ I(\tilde{\bm{F}}_{\bm{\theta}})\leq \prod^{L-1}_{l=1}\left\lfloor\frac{n_l}{2n_0}\right\rfloor^{-n_0}. \label{eqtheo}
\end{align}
\end{theo}
If $n_l=n(\geq2n_0)$ for any $l\in\{1,2,\cdots,L-1\}$, we can say that
\begin{align}
^\exists\bm{\theta}\in\bm{\Theta}\ \ \mathrm{s.t.}\ \ I(\tilde{\bm{F}}_{\bm{\theta}})\leq O\left(\left(\frac{n}{2n_0}\right)^{-n_0(L-1)}\right) \nonumber
\end{align}
from Theorem \ref{theo}. In other words, $\tilde{\bm{F}}_{\bm{\theta}}$ can express functions whose fineness of linear regions is $O((\frac{n}{2n_0})^{-n_0(L-1)})$ if $\bm{\theta}$ takes appropriate values. Therefore, we consider that the flexibility of the set of deep neural networks $\{\tilde{\bm{F}}_{\bm{\theta}}\ |\ \bm{\theta}\in\bm{\Theta}\}$ grows exponentially in $L$ and polynomially in $n$.
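As a quick numerical illustration (our own sketch, not part of the paper), the right-hand side of (\ref{eqtheo}) can be evaluated for concrete configurations; the two networks of Table \ref{tabmodel} below give exactly the values quoted later in the experiments:

```python
import math

def fineness_bound(n0, widths):
    """Right-hand side of the theorem's bound: prod_l floor(n_l / (2*n0)) ** (-n0)."""
    bound = 1.0
    for n_l in widths:
        bound *= math.floor(n_l / (2 * n0)) ** (-n0)
    return bound

# Network 1 in Table 1: n0 = 1, five hidden layers of width 4.
print(fineness_bound(1, [4] * 5))   # 2^-5 = 0.03125
# Network 2 in Table 1: n0 = 1, one hidden layer of width 20.
print(fineness_bound(1, [20]))      # 10^-1 = 0.1
```

The bound shrinks geometrically in the number of layers but only polynomially in the layer width, which is the claim made above.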
\begin{comment}
Although this result seems to be similar to the result in \cite{Montufar}, there exist important differences as follows.
\begin{enumerate}
\item We evaluate the flexibility of deep neural networks by the fineness of linear regions, not the number of linear regions.
\item We claim that the flexibility grows exponentially in the depth when using any piecewise linear activation function, not a certain activation function.
\end{enumerate}
\end{comment}
We use the following lemma in order to prove Theorem \ref{theo}.
\begin{lem}\label{lem2}
Let $\bm{F}_{\bm{\theta}} : [0,1]^{n_0}\to\mathbb{R}^{n_L}$ be a ReLU neural network with $L$ layers and $n_l$ units at the $l$-th layer, where $n_l\geq n_0$ for all $l\in\{1,2,\cdots,L-1\}$. Then
\begin{align}
^\exists{\bm{\theta}}\in\bm{\Theta}\ \ \mathrm{s.t.}\ \ I(\bm{F}_{\bm{\theta}})\leq \prod^{L-1}_{l=1}\left\lfloor\frac{n_l}{n_0}\right\rfloor^{-n_0}. \nonumber
\end{align}
\end{lem}
\setcounter{proofl}{3}
\begin{prooft}
Let $\bm{F}_{\bm{\theta}} : [0,1]^{n_0}\to\mathbb{R}^{n_L}$ be a ReLU neural network with $L$ layers and $\frac{n_l}{2}$ units (where $n_l\geq 2n_0$) at the $l$-th layer for each $l\in\{1,2,\cdots,L-1\}$. From Lemma \ref{lem2}, there exists $\hat{\bm{\theta}}\in\bm{\Theta}$ such that
\begin{align}
I(\bm{F}_{\hat{\bm{\theta}}})\leq \prod^{L-1}_{l=1}\left\lfloor\frac{n_l}{2n_0}\right\rfloor^{-n_0}. \nonumber
\end{align}
Moreover, from Lemma \ref{lem1}, there exists a neural network $\tilde{\bm{F}}_{\bm{\theta}} : [0,1]^{n_0}\to\mathbb{R}^{n_L}$ with any piecewise linear activation function that has $L$ layers and $n_l$ units at the $l$-th layer and computes the same function as $\bm{F}_{\hat{\bm{\theta}}}$. It satisfies
\begin{align}
I(\tilde{\bm{F}}_{\bm{\theta}})\leq \prod^{L-1}_{l=1}\left\lfloor\frac{n_l}{2n_0}\right\rfloor^{-n_0}. \nonumber
\end{align}
\qed
\end{prooft}
Now, we prove Lemma \ref{lem2}.
\begin{proofl}
We prove Lemma \ref{lem2} in the following two steps.
\begin{description}
\item[Step 1.]\ Construct a specific ReLU neural network $\bm{F}_{\hat{\bm{\theta}}}$ which recursively applies functions with shared parameters.
\item[Step 2.]\ Show that the fineness of linear regions of the network $\bm{F}_{\hat{\bm{\theta}}}$ satisfies $I(\bm{F}_{\hat{\bm{\theta}}})\leq\prod^{L-1}_{l=1}\lfloor\frac{n_l}{n_0}\rfloor^{-n_0}$ by applying Lemma \ref{lem4} and Lemma \ref{lem3} inductively.
\end{description}
\textbf{\underline{Step 1}.} Let an integer $n\in\mathbb{N}$ satisfy $n\geq n_0$ and set $p:=\lfloor\frac{n}{n_0}\rfloor$. We define a $pn_0$-dimensional function $\tilde{\bm{f}} : [0,1]^{n_0}\to\mathbb{R}^{pn_0}$ as
\begin{align}
&\tilde{f}_{(i-1)n_0+j}(\bm{x}):=\left\{\begin{array}{ll}
px_j&(i=1)\\
2px_j-2(i-1)&(i=2,\cdots,p)
\end{array}\right. \nonumber \\
&\hspace{5cm} (j=1,2,\cdots,n_0), \nonumber
\end{align}
where $\tilde{f}_{(i-1)n_0+j}$ is the $((i-1)n_0+j)$-th coordinate of $\tilde{\bm{f}}$ and $x_j$ is the $j$-th coordinate of $\bm{x}$. We define an $n_0$-dimensional function $\tilde{\tilde{\bm{f}}} : \mathbb{R}^{pn_0}\to[0,1]^{n_0}$ as
\begin{align}
\tilde{\tilde{f}}_j(\bm{x}):=\sum^p_{i=1}(-1)^{i-1}x_{(i-1)n_0+j}\ (j=1,2,\cdots,n_0), \nonumber
\end{align}
where $\tilde{\tilde{f}}_j$ is the $j$-th coordinate of $\tilde{\tilde{\bm{f}}}$. Furthermore, we define an $n_L$-dimensional function $\tilde{\tilde{\tilde{\bm{f}}}} : [0,1]^{n_0}\to\mathbb{R}^{n_L}$ as
\begin{align}
\tilde{\tilde{\tilde{f}}}_j(\bm{x}):=\sum^{n_0}_{i=1}w_{ji}x_i+b_j\ (w_{ji}>0;\ j=1,2,\cdots,n_L), \nonumber
\end{align}
where $\tilde{\tilde{\tilde{f}}}_j$ is the $j$-th coordinate of $\tilde{\tilde{\tilde{\bm{f}}}}$. We construct the neural network using these functions as follows.
First, let the number of units at the $l$-th layer be $n\ (\geq n_0)$ for every $l\in\{1,2,\cdots,L-1\}$. We separate the set of $n$ units at the $l$-th layer $(l=1,2,\cdots,L-1)$ into $p\ (=\lfloor\frac{n}{n_0}\rfloor)$ subsets and a remainder, where each of the $p$ subsets contains $n_0$ units. Let the parameters of the remaining $n-pn_0$ units be $0$. For every $l\in\{1,2,\cdots,L-1\}$, let the non-linear activation function $\bm{g}^{(l)} : \mathbb{R}^{pn_0}\to\mathbb{R}^{pn_0}$ be
\begin{align}
g^{(l)}_j(\bm{x})=\max\{0, x_j\}\ (j=1,2,\cdots,pn_0), \nonumber
\end{align}
where $g^{(l)}_j$ is the $j$-th coordinate of $\bm{g}^{(l)}$.
Next, we consider the linear functions $\bm{f}^{(l)}(l=1,2,\cdots,L)$. When $l=1$, for the $j$-th unit of the $i$-th subset, ($j = 1, 2, \dots , n_0$, $i = 1, 2, \dots , p$), let the linear function $f^{(1)}_{(i-1)n_0+j} : [0,1]^{n_0}\to\mathbb{R}$ be
\begin{align}
f^{(1)}_{(i-1)n_0+j}=\tilde{f}_{(i-1)n_0+j}. \nonumber
\end{align}
When $l=2, 3, \dots , L-1$, for the $j$-th unit of the $i$-th subset, ($j = 1, 2, \dots , n_0$, $i = 1, 2, \dots , p$), let the linear function $f^{(l)}_{(i-1)n_0 + j} : \mathbb{R}^{pn_0}\to\mathbb{R}$ be
\begin{align}
f_{(i-1)n_0 + j}^{(l)}=\tilde{f}_{(i-1)n_0 + j}\circ \tilde{\tilde{\bm{f}}}. \nonumber
\end{align}
When $l=L$, let the linear function $\bm{f}^{(L)} : \mathbb{R}^{pn_0}\to\mathbb{R}^{n_L}$ be
\begin{align}
\bm{f}^{(L)}=\tilde{\tilde{\tilde{\bm{f}}}}\circ \tilde{\tilde{\bm{f}}}.\nonumber
\end{align}
The constructed network is then represented as follows. Since $\bm{g}^{(1)}, \bm{g}^{(2)}, \cdots, \bm{g}^{(L-1)}$ are identical, we write all of them as $\bm{g}$.
\begin{align}
\bm{F}_{\hat{\bm{\theta}}}&=\bm{f}^{(L)}\circ \bm{g}^{(L-1)}\circ \bm{f}^{(L-1)}\circ\cdots\circ \bm{g}^{(1)}\circ \bm{f}^{(1)} \nonumber \\
&=\tilde{\tilde{\tilde{\bm{f}}}} \circ \tilde{\tilde{\bm{f}}} \circ \bm{g} \circ \tilde{\bm{f}} \circ \cdots \circ \tilde{\tilde{\bm{f}}} \circ \bm{g} \circ \tilde{\bm{f}} \circ \tilde{\tilde{\bm{f}}} \circ \bm{g} \circ \tilde{\bm{f}}. \nonumber
\end{align}
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[height=5cm,width=9cm]{Figure1_ver2.png}}
\caption{The graph of $h_j\ (j=1,\cdots,n_0)$, where $p$ is an even number.}
\label{fig1}
\end{center}
\end{figure}
\textbf{\underline{Step 2}.} Let $\bm{h} := \tilde{\tilde{\bm{f}}} \circ \bm{g} \circ \tilde{\bm{f}}$. We can re-represent our network as
\begin{align}
\bm{F}_{\hat{\bm{\theta}}}=\tilde{\tilde{\tilde{\bm{f}}}} \circ \underbrace{\bm{h} \circ \bm{h} \circ \cdots \circ \bm{h}}_{L-1}. \nonumber
\end{align}
We consider the fineness of linear regions $I(\tilde{\tilde{\tilde{\bm{f}}}})$ and $I(\bm{h})$ as well as the identified regions of $\bm{h}$ in order to apply Lemma \ref{lem4} and Lemma \ref{lem3}. Since $\tilde{\tilde{\tilde{\bm{f}}}}$ is linear on $[0,1]^{n_0}$,
\begin{align}
I(\tilde{\tilde{\tilde{\bm{f}}}})=1. \nonumber
\end{align}
On the other hand, the $j$-th coordinate of $\bm{h} : [0,1]^{n_0}\to[0,1]^{n_0}$ is described such as
\begin{align}
h_j(\bm{x})&=\max\left\{0,\ px_{j}\right\}-\max\left\{0,\ 2px_{j}-2\right\}+\cdots \nonumber\\
&\hspace{1cm}\cdots+(-1)^{p-1}\max\left\{0,\ 2px_{j}-2(p-1)\right\} \nonumber\\
&\hspace{4cm}(j=1,2,\cdots,n_0). \nonumber
\end{align}
Figure \ref{fig1} shows the graph of $h_j$. From Fig. \ref{fig1}, we can see that $h_j$ identifies its linear regions
\begin{align}
\left[\frac{t}{p}, \frac{t+1}{p}\right]\ (t=0,1,\cdots,p-1) \nonumber
\end{align}
onto $[0,1]$. In other words, $\bm{h}$ identifies these linear regions on each coordinate $h_1,h_2,\cdots,h_{n_0}$. Therefore, it identifies its linear regions
\begin{align}
\prod^{n_0}_{j=1}\left[\frac{t_j}{p}, \frac{t_j+1}{p}\right]\ (t_j=0,1,\cdots,p-1) \nonumber
\end{align}
onto $[0,1]^{n_0}$. Since these linear regions are $n_0$-dimensional hypercubes with side length $p^{-1}$ for all $t_j\in\{0,1,\cdots,p-1\}$, the fineness of linear regions $I(\bm h)$ satisfies
\begin{align}
I(\bm{h})=p^{-n_0}. \nonumber
\end{align}
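The coordinate function $h_j$ can be checked numerically; the following pure-Python sketch (our own) evaluates the sawtooth formula above and confirms that each interval $[t/p,(t+1)/p]$ is mapped onto $[0,1]$, alternating between the peaks and zeros shown in Fig. \ref{fig1}:

```python
def h(x, p):
    """Sawtooth h_j(x) = max(0, p*x) + sum_{i=2}^{p} (-1)^(i-1) * max(0, 2*p*x - 2*(i-1))."""
    value = max(0.0, p * x)
    for i in range(2, p + 1):
        value += (-1) ** (i - 1) * max(0.0, 2 * p * x - 2 * (i - 1))
    return value

p = 4
# h rises from 0 to 1 on [0, 1/p], falls back to 0 on [1/p, 2/p], and so on.
print([round(h(t / p, p), 6) for t in range(p + 1)])
# → [0.0, 1.0, 0.0, 1.0, 0.0]
```

Each linear piece has length $1/p$, which is exactly why $I(\bm{h})=p^{-n_0}$ in $n_0$ dimensions.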
Hence
\begin{align}
\tilde{\tilde{\tilde{\bm f}}}\circ \bm{h}\preceq_{I(\tilde{\tilde{\tilde{\bm{f}}}})}\bm{h} \nonumber
\end{align}
from Lemma \ref{lem3}. Therefore, from Lemma \ref{lem4},
\begin{align}
I(\tilde{\tilde{\tilde{\bm{f}}}}\circ \bm{h})\leq I(\tilde{\tilde{\tilde{\bm{f}}}})I(\bm{h})=p^{-n_0}. \label{eqiii}
\end{align}
In a similar manner, from Lemma \ref{lem3},
\begin{align}
(\tilde{\tilde{\tilde{\bm{f}}}}\circ \bm{h})\circ \bm{h}\preceq_{I(\tilde{\tilde{\tilde{\bm{f}}}}\circ \bm{h})}\bm{h}. \nonumber
\end{align}
Therefore, from Lemma \ref{lem4} and (\ref{eqiii}),
\begin{align}
I(\tilde{\tilde{\tilde{\bm{f}}}}\circ \bm{h}\circ \bm{h})\leq I(\tilde{\tilde{\tilde{\bm{f}}}}\circ \bm{h})I(\bm{h})
\leq I(\tilde{\tilde{\tilde{\bm{f}}}})I(\bm{h})I(\bm{h})
=p^{-2n_0}. \nonumber
\end{align}
Repeating these operations inductively, we obtain the following inequality.
\begin{align}
I(\bm{F}_{\hat{\bm{\theta}}})&=I(\tilde{\tilde{\tilde{\bm{f}}}} \circ \underbrace{\bm{h} \circ \bm{h} \circ \cdots \circ \bm{h}}_{L-1}) \nonumber\\
&\leq I(\tilde{\tilde{\tilde{\bm{f}}}})\underbrace{I(\bm{h})I(\bm{h})\cdots I(\bm{h})}_{L-1}=p^{-n_0(L-1)}. \nonumber
\end{align}
If the numbers of units differ across layers and satisfy $n_l\geq n_0\ (l=1,2,\cdots,L-1)$, we simply replace $p=\lfloor\frac{n}{n_0}\rfloor$ with
\begin{align}
p_l:=\left\lfloor\frac{n_l}{n_0}\right\rfloor\ (l=1,2,\cdots,L-1). \nonumber
\end{align}
In this case, the fineness of linear regions $I(\bm{F}_{\hat{\bm{\theta}}})$ satisfies
\begin{align}
I(\bm{F}_{\hat{\bm{\theta}}})\leq\prod^{L-1}_{l=1}p_l^{-n_0}. \nonumber
\end{align}
\qed
\end{proofl}
\section{Experiments}
\begin{table}[t]
\caption{The neural networks compared in the computer simulations.}
\label{tabmodel}
\centering
\begin{tabular}{|c|cc|}\hline
$l$&Network 1 ($L=6$)&Network 2 ($L=2$)\\\hline
0&$n_0=1$&$n_0=1$\\
1&$n_1=4$&$n_1=20$\\
2&$n_2=4$&$n_2=1$\\
3&$n_3=4$&\\
4&$n_4=4$&\\
5&$n_5=4$&\\
6&$n_6=1$&\\\hline
\end{tabular}
\end{table}
We compared the two types of neural networks shown in Table \ref{tabmodel} in terms of expressivity. The activation function in both networks is the ReLU function. Although their depths differ, both networks have 22 units in total. We approximately calculated the ratio of the desired parameters and the fineness of linear regions by computer simulations in order to evaluate the expressivity of the two neural networks.
\subsection{The Calculation of the Ratio of the Desired Parameters}
\label{secexex}
We calculated the ratio of the desired parameters by computer simulations, restricting the input and output dimensions to one for simplicity. Since it is difficult to calculate (\ref{eqd}) by computer simulations, we instead calculated
\begin{align}
\hat{d}(F_{\bm{\theta}}, F^*):=\frac{1}{N}\sum^N_{n=1}(F_{\bm{\theta}}(x_n)-F^*(x_n))^2, \nonumber
\end{align}
where $x_n\in\mathbb{R}\ (n=1,2,\cdots,N)$ are finite samples. For the target function $F^*$, we adopted two kinds of functions: the sine function $F^*(x)=\sin(4\pi x)$ and the Weierstrass function shown in Fig. \ref{figweier}. The samples $x_n\ (n=1,2,\cdots,N)$ were set as $x_n=(n-1)\times 10^{-4}$ with $N=10^4$.
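The paper does not specify the parameters of its Weierstrass function, so the following sketch (our own) uses the classical form $W(x)=\sum_{n} a^n\cos(b^n\pi x)$ with assumed values $a=1/2$ and $b=3$; any choice with $a\in(0,1)$, odd $b$, and $ab>1+\frac{3}{2}\pi$ gives a continuous, nowhere-differentiable target:

```python
import math

def weierstrass(x, a=0.5, b=3, terms=30):
    """Partial sum of the Weierstrass function W(x) = sum_n a^n * cos(b^n * pi * x).
    The parameters a and b are assumed here; the paper does not specify its choice."""
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

# At x = 0 every cosine equals 1, so W(0) is the geometric series 1/(1 - a) = 2.
print(round(weierstrass(0.0), 6))   # → 2.0
```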
Moreover, it is also difficult to calculate (\ref{eqR}) by computer simulations, so we approximated it by randomly sampling the values of $\bm{\theta}$. We generated the values of $\bm{\theta}$ by random sampling from the standard normal distribution and calculated $\hat{d}(F_{\bm{\theta}}, F^*)$ with the generated values of $\bm{\theta}$, repeating this $2\times 10^4$ times. Then we checked whether each $\hat{d}(F_{\bm{\theta}}, F^*)$ was less than $\epsilon_k\ (k=1,2,\cdots,10^4)$. When the target function was $\sin(4\pi x)$, $\epsilon_k=0.4+4(k-1)\times10^{-5}$; when the target function was the Weierstrass function, $\epsilon_k=0.6+2(k-1)\times10^{-5}$. We incremented a counter whenever $\hat{d}(F_{\bm{\theta}}, F^*)\leq\epsilon_k$ and obtained the ratio of $\bm{\theta}$ satisfying the inequality by dividing the final count by $2\times 10^4$. The following summarizes the algorithm which approximately calculates $R_{\epsilon}(\mathcal{F}, F^*)$.
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[height=4.5cm,width=7.5cm]{weierstrass_rough.png}}
\caption{The Weierstrass function we used in the computer simulations.}
\label{figweier}
\end{center}
\end{figure}
\begin{algorithmic}
\STATE $\epsilon_k\ \leftarrow\ 0.4+4(k-1)\times10^{-5}$ or $0.6+2(k-1)\times10^{-5}\ (k=1,2,\cdots,10^4)$
\STATE $t_{k}\ \leftarrow\ 0\ (k=1,2,\cdots,10^4)$
\STATE $x_n\ \leftarrow\ (n-1)\times 10^{-4}\ (n=1,2,\cdots,10^4)$
\STATE $F^*(x_n)\ \leftarrow\ \sin(4\pi x_n)$ or the Weierstrass function\ $(n=1,2,\cdots,10^4)$
\FOR{$j=1$ {\bfseries to} $2\times 10^4$}
\STATE Generate $\bm{\theta}$ with random sampling of standard normal distribution
\STATE Compute a neural network $F_{\bm{\theta}}(x_n)\ (n=1, 2, \cdots, 10^4)$
\STATE $\hat{\mu}\ \leftarrow\ \frac{1}{10^4}\sum^{10^4}_{n=1}F_{\bm{\theta}}(x_n)$
\STATE $\hat{\sigma}\ \leftarrow\ \sqrt{\frac{1}{10^4}\sum^{10^4}_{n=1}\left(F_{\bm{\theta}}(x_n)-\hat{\mu}\right)^2}$
\STATE $F_{\bm{\theta}}(x_n)\ \leftarrow\ \frac{F_{\bm{\theta}}(x_n)-\hat{\mu}}{\hat{\sigma}}\ (n=1,2,\cdots,10^4)$
\STATE $\hat{d}(F_{\bm{\theta}}, F^*)\ \leftarrow\ \frac{1}{10^4}\sum^{10^4}_{n=1}(F_{\bm{\theta}}(x_n)-F^*(x_n))^2$
\FOR{$k=1$ {\bfseries to} $10^4$}
\IF{$\hat{d}(F_{\bm{\theta}}, F^*)\leq \epsilon_k$}
\STATE $t_{k}\ \leftarrow\ t_{k}+1$
\ENDIF
\ENDFOR
\ENDFOR
\STATE $\hat{R}_{\epsilon_k}(\mathcal{F}, F^*)\ \leftarrow\ \frac{t_{k}}{2\times 10^4}\ (k=1,2,\cdots,10^4)$
\RETURN $\hat{R}_{\epsilon_k}(\mathcal{F}, F^*)\ (k=1,2,\cdots,10^4)$
\end{algorithmic}
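The pseudocode above can be turned into a small pure-Python sketch (our own, heavily scaled down: a one-hidden-layer ReLU network stands in for $F_{\bm{\theta}}$, the normalization step is omitted, and far fewer samples and trials are used than in the paper):

```python
import math, random

def relu_net(x, theta):
    """One-hidden-layer ReLU network; theta = (w1, b1, w2, b2) with H hidden units."""
    w1, b1, w2, b2 = theta
    return sum(w2[h] * max(0.0, w1[h] * x + b1[h]) for h in range(len(w1))) + b2

def desired_ratio(target, eps, hidden=20, trials=500, samples=200, seed=0):
    """Fraction of randomly drawn parameter vectors whose empirical distance
    d_hat to the target is at most eps (a scaled-down version of the algorithm)."""
    rng = random.Random(seed)
    xs = [n / samples for n in range(samples)]
    hits = 0
    for _ in range(trials):
        theta = ([rng.gauss(0, 1) for _ in range(hidden)],
                 [rng.gauss(0, 1) for _ in range(hidden)],
                 [rng.gauss(0, 1) for _ in range(hidden)],
                 rng.gauss(0, 1))
        d_hat = sum((relu_net(x, theta) - target(x)) ** 2 for x in xs) / samples
        hits += d_hat <= eps
    return hits / trials

ratio = desired_ratio(lambda x: math.sin(4 * math.pi * x), eps=0.5)
print(0.0 <= ratio <= 1.0)   # → True
```

The returned ratio is the Monte Carlo estimate $\hat{R}_{\epsilon}$; sweeping `eps` over a grid reproduces the curves of Fig. \ref{figresult1} in miniature.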
We ran the above algorithm using the neural networks in Table \ref{tabmodel} as $F_{\bm{\theta}}$. Figure \ref{figresult1} and Fig. \ref{figresult2} show the results of the calculation. From these results, we can see that Network 1 can express more functions close to $\sin(4\pi x)$ and the Weierstrass function than Network 2. This suggests that increasing the number of layers improves the expressivity of neural networks more effectively than increasing the number of units at each layer.
\subsection{The Calculation of the Fineness of Linear Regions}
In order to confirm Theorem \ref{theo}, we approximately calculated the minimum value of the fineness of linear regions $\min_{\bm{\theta}\in\bm{\Theta}}I(\bm{F}_{\bm{\theta}})$ by computer simulations. To obtain the fineness of linear regions $I(\bm{F}_{\bm{\theta}})$, we must detect the linear regions in the input domain of the deep neural network. We detected them by calculating the change in the slope of the function computed by the network.
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[height=5cm,width=7cm]{sin_norm_nu.png}}
\caption{The result of approximately calculating the ratio of the desired parameters with the sine function. Since the ratio of the desired parameters of Network 1 exceeds that of Network 2, the figure indicates that increasing the number of layers improves the approximation accuracy more effectively than increasing the number of units at each layer.}
\label{figresult1}
\end{center}
\end{figure}
We executed the calculation restricting the input and output dimensions to one for simplicity. We generated an arithmetic sequence $x_n=(n-1)\times10^{-5}\ (n=1,2,\cdots,10^5)$ and calculated the slope of the line connecting $(x_n, F_{\bm{\theta}}(x_n))$ and $(x_{n+1}, F_{\bm{\theta}}(x_{n+1}))$,
\begin{align}
g_n:=\frac{F_{\bm{\theta}}(x_{n+1})-F_{\bm{\theta}}(x_n)}{x_{n+1}-x_n}. \nonumber
\end{align}
Then we calculated the change in $g_{n}$,
\begin{align}
d_{n+1}:=|g_{n+1}-g_{n}|. \nonumber
\end{align}
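As a minimal illustration (our own, not from the paper) of how $d_{n+1}$ locates a boundary between linear regions, a single ReLU kink at $x=0.5$ is recovered exactly from the finite-difference slopes:

```python
def slope_changes(f, n_samples=1000):
    """Finite-difference slopes g_n on a uniform grid and their changes d_{n+1}."""
    step = 1.0 / n_samples
    xs = [n * step for n in range(n_samples)]
    g = [(f(xs[n + 1]) - f(xs[n])) / step for n in range(n_samples - 1)]
    return [abs(g[n + 1] - g[n]) for n in range(len(g) - 1)]

# f(x) = max(0, x - 0.5): the slope jumps from 0 to 1 at x = 0.5.
d = slope_changes(lambda x: max(0.0, x - 0.5))
boundary = max(range(len(d)), key=lambda n: d[n])
print(round((boundary + 1) / 1000, 3))   # → 0.5
```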
We regarded $x_{n+1}$ as a boundary between linear regions if $d_{n+1}$ was larger than a threshold value, and took the size of the maximal interval between two boundaries as the fineness of linear regions $I(F_{\bm{\theta}})$. As mentioned in Section \ref{secintro}, we must calculate it independently from the efficiency of learning. Therefore, we searched for the minimum value of $I(F_{\bm{\theta}})$ by generating the values of $\bm{\theta}$ randomly and repeatedly, not by learning. The following summarizes the algorithm to approximately calculate $\min_{\bm{\theta}\in\bm{\Theta}}I(F_{\bm{\theta}})$.
\begin{algorithmic}
\STATE $x_n\ \leftarrow\ (n-1)\times 10^{-5}\ (n=1,2,\cdots,10^5)$
\STATE $fineness\_list\ \leftarrow\ [\ ]$
\FOR{$i=1$ {\bfseries to} $10^3$}
\STATE Generate the value of $\bm{\theta}$ by random sampling from the uniform distribution $U(0, 1)$
\STATE $count\ \leftarrow\ 0$
\STATE $count\_list\ \leftarrow\ [\ ]$
\FOR{$n=1$ {\bfseries to} $10^5-2$}
\FOR{$k=0,1,2$}
\STATE Compute the output of a neural network $F_{\bm{\theta}}(x_{n+k})$
\ENDFOR
\STATE $g_n\ \leftarrow\ \frac{F_{\bm{\theta}}(x_{n+1})-F_{\bm{\theta}}(x_{n})}{x_{n+1}-x_n}$
\STATE $g_{n+1}\ \leftarrow\ \frac{F_{\bm{\theta}}(x_{n+2})-F_{\bm{\theta}}(x_{n+1})}{x_{n+2}-x_{n+1}}$
\STATE $d_{n+1}\ \leftarrow\ |g_{n+1}-g_n|$
\IF{$d_{n+1} < 0.5$}
\STATE $count\ \leftarrow\ count + 1$
\ELSE
\STATE Append $count$ to $count\_list$
\STATE $count\ \leftarrow\ 0$
\ENDIF
\ENDFOR
\STATE Append $count$ to $count\_list$
\STATE $count\_max\ \leftarrow\ $ the maximum value in $count\_list$
\STATE $fineness\ \leftarrow\ \frac{count\_max}{10^5}$
\STATE Append $fineness$ to $fineness\_list$
\ENDFOR
\STATE $fineness\_min\ \leftarrow\ $ the minimum value in $fineness\_list$
\RETURN $fineness\_min$
\end{algorithmic}
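The procedure above can be checked on a function whose linear regions are known exactly (our own sketch; a triangle wave with linear pieces of length $0.25$ stands in for the neural network, and fewer grid points are used than in the paper):

```python
def fineness(f, n_samples=10**4, threshold=0.5):
    """Approximate fineness of linear regions: longest run of grid points over
    which the finite-difference slope of f stays (nearly) constant, as a fraction
    of the domain [0, 1]."""
    step = 1.0 / n_samples
    xs = [n * step for n in range(n_samples)]
    g = [(f(xs[n + 1]) - f(xs[n])) / step for n in range(n_samples - 1)]
    count, counts = 0, []
    for n in range(len(g) - 1):
        if abs(g[n + 1] - g[n]) < threshold:
            count += 1
        else:                      # x_{n+1} is a boundary between linear regions
            counts.append(count)
            count = 0
    counts.append(count)
    return max(counts) / n_samples

# Triangle wave: linear on [0, .25], [.25, .5], [.5, .75], [.75, 1].
tri = lambda x: abs((4 * x) % 2 - 1)
print(round(fineness(tri), 2))   # → 0.25
```

Replacing `tri` by a randomly parameterized network and taking the minimum over many draws gives the quantity reported for the two networks below.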
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[height=5.8cm,width=9.5cm]{weierstrass_norm_nu.png}}
\caption{The result of approximately calculating the ratio of the desired parameters with the Weierstrass function. Since the ratio of the desired parameters of Network 1 exceeds that of Network 2, the figure indicates that increasing the number of layers improves the approximation accuracy more effectively than increasing the number of units at each layer.}
\label{figresult2}
\end{center}
\end{figure}
In the above algorithm, the threshold value for the slope change $d_{n+1}$ is set to 0.5: if $d_{n+1}\geq 0.5$, $x_{n+1}$ is regarded as a boundary between linear regions. The variable $count$ increases while the network stays linear and resets to 0 at each boundary. The list $count\_list$ stores the sizes of the linear regions, and the maximum value in $count\_list$ yields the fineness of linear regions; it must be divided by $10^5$ because the spacing between $x_n$ and $x_{n+1}$ is $10^{-5}$. We also calculated the value of $\prod^{L-1}_{l=1}\lfloor\frac{n_l}{2n_0}\rfloor^{-n_0}$, the right-hand side of (\ref{eqtheo}).
We ran the above algorithm using the neural networks in Table \ref{tabmodel} as $F_{\bm{\theta}}$. The results of the simulations agree with Theorem \ref{theo}. The minimum value of the fineness of linear regions of Network 1 is 0.00004 and that of Network 2 is 0.06345, where the values of $\bm{\theta}$ are restricted to the finite samples. On the other hand, the value of $\prod^{L-1}_{l=1}\lfloor\frac{n_l}{2n_0}\rfloor^{-n_0}$ is 0.03125 for Network 1 and 0.1 for Network 2. These results confirm Theorem \ref{theo} and indicate that increasing the number of layers is more effective than increasing the number of units at each layer when we want neural networks to be more flexible.
\section{Conclusion}
In this paper, we proposed two new criteria for evaluating the expressivity of functions computable by deep neural networks independently from the efficiency of learning. The first criterion, the ratio of the desired parameters, shows the approximation accuracy of deep neural networks to the target function. The second criterion, the fineness of linear regions, shows the property of linear regions of functions computable by deep neural networks. Furthermore, using the two criteria, we showed that increasing the number of layers improves the expressivity of deep neural networks more effectively than increasing the number of units at each layer. We hope that our studies will contribute to a better understanding of the advantage of deepening neural networks.
\section*{Acknowledgment}
This research was supported by JSPS Grants-in-Aid for Scientific Research JP17K00316, JP17K06446, JP18K11585, JP19K04914.
% arXiv:2009.11479, ``Theoretical Analysis of the Advantage of Deepening Neural Networks''
% https://arxiv.org/abs/1906.06634
% On stabilizer-free weak Galerkin finite element methods on polytopal meshes
% Abstract: A stabilizing/penalty term is often used in finite element methods with discontinuous approximations to enforce connection of discontinuous functions across element boundaries. Removing stabilizers from discontinuous finite element methods will simplify the formulations and reduce programming complexity significantly. The goal of this paper is to introduce a stabilizer free weak Galerkin finite element method for second order elliptic equations on polytopal meshes in 2D or 3D. This new WG method keeps a simple symmetric positive definite form and can work on polygonal/polyhedral meshes. Optimal order error estimates are established for the corresponding WG approximations in both a discrete $H^1$ norm and the $L^2$ norm. Numerical results are presented verifying the theorem.
\section{Introduction}\label{Section:Introduction}
We consider the Poisson equation with a homogeneous Dirichlet boundary condition in $d$ dimensions
as our model problem for the sake of clear presentation.
The stabilizer-free weak Galerkin method can also be applied to
other partial differential equations.
The Poisson problem seeks an unknown function $u$ satisfying
\begin{eqnarray}
-\Delta u&=&f\quad \mbox{in}\;\Omega,\label{pde}\\
u&=&0\quad\mbox{on}\;\partial\Omega,\label{bc}
\end{eqnarray}
where $\Omega$ is a polytopal domain in $\mathbb{R}^d$.
The weak form of the problem (\ref{pde})-(\ref{bc}) is to find $u\in H^1_0(\Omega)$
such that
\begin{eqnarray}
(\nabla u,\nabla v)=(f,v)\quad \forall v\in
H_0^1(\Omega).\label{weakform}
\end{eqnarray}
The $H^1$ conforming finite element method for the problem (\ref{pde})-(\ref{bc}) keeps the same simple form as in (\ref{weakform}): find $u_h\in V_h\subset H^1_0(\Omega)$
such that
\begin{eqnarray}
(\nabla u_h,\nabla v)=(f,v)\quad \forall v\in V_h,\label{cfe}
\end{eqnarray}
where $V_h$ is a finite dimensional subspace of $H_0^1(\Omega)$.
The functions in $V_h$ are required to be continuous, which makes the
classic finite element formulation (\ref{cfe}) less flexible
in element constructions and in mesh generations.
In contrast, finite element methods using discontinuous approximations
have two advantages: (1) high order elements are easy to construct,
and one avoids building special elements such as $C^1$ conforming elements;
(2) they work easily on general meshes.
For these reasons, discontinuous finite element methods have been among the most
active research areas within finite element methods over the past two decades.
Discontinuous approximations were first used in finite element procedures as early as the
1970s \cite{Babu73, DoDu76,ReHi73, Whee78}.
Local discontinuous Galerkin methods were introduced in \cite{cs1998}.
Then a paper \cite{abcm} in 2002 provided a unified analysis of discontinuous Galerkin
finite element methods for the Poisson equation.
More discontinuous finite element methods have since been developed, such as the
hybridizable discontinuous Galerkin method \cite{cgl},
the mimetic finite difference method \cite{Lipnikov2011},
the hybrid high-order method \cite{de},
and the weak Galerkin method \cite{wy}; see also the references therein.
One obvious disadvantage of discontinuous finite element methods is their rather complex
formulations, which are often necessary to enforce weak continuity of discontinuous solutions
across element boundaries.
Most discontinuous finite element methods have one or more stabilizing terms
to guarantee stability and convergence.
The presence of stabilizing terms further complicates the formulations.
This complexity makes discontinuous finite element methods difficult
to implement and to analyze.
The purpose of this paper is to obtain a finite element
formulation close to its original PDE weak form (\ref{weakform})
for discontinuous polynomials.
We believe that finite element formulations for discontinuous approximations
can be as simple as follows:
\begin{equation}\label{dfe}
(\nabla_w u_h,\nabla_w v)=(f,v),
\end{equation}
if $\nabla_w$, an approximation of gradient, is appropriately defined.
The formulation (\ref{dfe}) can be viewed as the counterpart of (\ref{weakform}) for discontinuous approximations.
In fact such an ultra simple formulation (\ref{dfe}) has been achieved
for one kind of WG method in \cite{wy},
and for the conforming DG methods in \cite{cdg1,cdg2}.
The lowest order WG method developed in \cite{wy} has been improved
in \cite{liu} for convex polygonal meshes,
in which non-polynomial functions are used for computing weak gradient.
In this paper, we develop a WG finite element method that
has an ultra simple formulation (\ref{dfe}) and can work on polytopal meshes for any polynomial degree $k\ge 1$.
The idea is to raise the degree of the polynomials used to compute the weak gradient $\nabla_w$. Using higher degree polynomials in the computation of the weak gradient changes neither
the size nor the global sparsity of the stiffness matrix. On the other hand, the simple formulation (\ref{dfe}) of the stabilizer-free WG method reduces programming complexity significantly.
Optimal order error estimates are established for the corresponding
WG approximations in both a discrete $H^1$ norm and the $L^2$ norm.
Numerical results are presented verifying the theorem.
\section{Weak Galerkin Finite Element Schemes}\label{Section:wg-fem}
Let ${\cal T}_h$ be a partition of the domain $\Omega$ consisting of
polygons in two dimensions or polyhedra in three dimensions satisfying
a set of conditions specified in \cite{wymix}. Denote by ${\cal E}_h$
the set of all edges or flat faces in ${\cal T}_h$, and let ${\cal
E}_h^0={\cal E}_h\backslash\partial\Omega$ be the set of all
interior edges or flat faces. For every element $T\in {\mathcal T}_h$, we
denote by $h_T$ its diameter and mesh size $h=\max_{T\in{\mathcal T}_h} h_T$
for ${\cal T}_h$.
We start by introducing the weak function $v=\{v_0,v_b\}$ on an element $T\in{\mathcal T}_h$ such that
$$
v=
\left\{
\begin{array}{l}
\displaystyle
v_0\quad {\rm in}\; T,
\\ [0.08in]
\displaystyle
v_b\quad {\rm on}\;\partial T.
\end{array}
\right.
$$
If $v$ is continuous on $\Omega$, then $v=\{v,v\}$.
For a given integer $k \ge 1$, let $V_h$ be the weak Galerkin finite
element space associated with ${\mathcal T}_h$ defined as follows
\begin{equation}\label{vhspace}
V_h=\{v=\{v_0,v_b\}:\; v_0|_T\in P_k(T),\ v_b|_e\in P_{k}(e),\ e\subset{\partial T}, T\in {\mathcal T}_h\}
\end{equation}
and its subspace $V_h^0$ is defined as
\begin{equation}\label{vh0space}
V^0_h=\{v: \ v\in V_h,\ v_b=0 \mbox{ on } \partial\Omega\}.
\end{equation}
We would like to emphasize that any function $v\in V_h$ has a single
value $v_b$ on each edge $e\in{\mathcal E}_h$.
For given $T\in{\mathcal T}_h$ and $v=\{v_0,v_b\}\in V_h+H^1(\Omega)$, a weak gradient $\nabla_w v \in [P_j(T)]^d$ ($j > k$) is defined as the unique polynomial satisfying
\begin{equation}\label{d-d}
(\nabla_w v, {\bf q})_T = -(v_0, \nabla\cdot {\bf q})_T+ \langle v_b, {\bf q}\cdot{\bf n}\rangle_{\partial T}\qquad
\forall {\bf q}\in [P_j(T)]^d,
\end{equation}
where $j$ will be specified later.
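To make the definition (\ref{d-d}) concrete, consider a toy example of our own (it is not taken from the paper): let $d=1$, $T=(0,1)$, and take the weak function $v=\{v_0,v_b\}$ with $v_0\equiv 1$ in $T$ and $v_b=0$ at both endpoints; for illustration we compute the weak gradient in $P_1(T)$. Writing $\nabla_w v=a+bx$ and testing (\ref{d-d}) with ${\bf q}=1$ and ${\bf q}=x$ (the boundary term vanishes since $v_b=0$) gives
\begin{align*}
\int_0^1 (a+bx)\,dx &= -\int_0^1 1\cdot 0\,dx = 0,\\
\int_0^1 (a+bx)\,x\,dx &= -\int_0^1 1\cdot 1\,dx = -1,
\end{align*}
that is, $a+\frac{b}{2}=0$ and $\frac{a}{2}+\frac{b}{3}=-1$, so $\nabla_w v=6-12x$. Although $v_0$ is constant, the jump $v_0-v_b$ across $\partial T$ produces a non-zero weak gradient; this is the mechanism by which (\ref{dfe}) senses discontinuities without a stabilizer.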
Let $Q_0$ and $Q_b$ be the two element-wise defined $L^2$ projections onto $P_k(T)$ and $P_k(e)$ with $e\subset\partial T$ on $T$ respectively. Define $Q_hu=\{Q_0u,Q_bu\}\in V_h$. Let ${\mathbb Q}_h$ be the element-wise defined $L^2$ projection onto $[P_{j}(T)]^d$ on each element $T$.
For simplicity, we adopt the following notations,
\begin{eqnarray*}
(v,w)_{{\mathcal T}_h} &=&\sum_{T\in{\mathcal T}_h}(v,w)_T=\sum_{T\in{\mathcal T}_h}\int_T vw d{\bf x},\\
{\langle} v,w{\rangle}_{\partial{\mathcal T}_h}&=&\sum_{T\in{\mathcal T}_h} {\langle} v,w{\rangle}_{\partial T}=\sum_{T\in{\mathcal T}_h} \int_{\partial T} vw ds.
\end{eqnarray*}
\begin{algorithm}
A numerical approximation for (\ref{pde})-(\ref{bc}) can be
obtained by seeking $u_h=\{u_0,u_b\}\in V_h^0$
satisfying the following equation:
\begin{equation}\label{wg}
(\nabla_wu_h,\nabla_wv)_{{\mathcal T}_h}=(f,\; v_0) \quad\forall v=\{v_0,v_b\}\in V_h^0.
\end{equation}
\end{algorithm}
\begin{lemma}
Let $\phi\in H^1(\Omega)$, then on any $T\in{\mathcal T}_h$,
\begin{equation}\label{key}
\nabla_w\phi ={\mathbb Q}_h\nabla\phi.
\end{equation}
\end{lemma}
\begin{proof}
Using (\ref{d-d}) and integration by parts, we have that for
any ${\bf q}\in [P_{j}(T)]^d$
\begin{eqnarray*}
(\nabla_w \phi,{\bf q})_T &=& -(\phi,\nabla\cdot{\bf q})_T
+\langle \phi,{\bf q}\cdot{\bf n}\rangle_{{\partial T}}\\
&=&(\nabla \phi,{\bf q})_T=({\mathbb Q}_h\nabla\phi,{\bf q})_T,
\end{eqnarray*}
which implies the desired identity (\ref{key}).
\end{proof}
\section{Well Posedness}
For any $v\in V_h+H^1(\Omega)$, let
\begin{equation}\label{3barnorm}
{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}^2=(\nabla_wv,\nabla_wv)_{{\mathcal T}_h}.
\end{equation}
We introduce a discrete $H^1$ semi-norm as follows:
\begin{equation}\label{norm}
\|v\|_{1,h} = \left( \sum_{T\in{\mathcal T}_h}\left(\|\nabla
v_0\|_T^2+h_T^{-1} \| v_0-v_b\|^2_{\partial T}\right) \right)^{\frac12}.
\end{equation}
It is easy to see that $\|v\|_{1,h}$ defines a norm in $V_h^0$. The following lemma indicates that $\|\cdot\|_{1,h}$ is equivalent
to ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$ defined in (\ref{3barnorm}).
\begin{lemma} There exist two positive constants $C_1$ and $C_2$ such
that for any $v=\{v_0,v_b\}\in V_h$, we have
\begin{equation}\label{happy}
C_1 \|v\|_{1,h}\le {|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|} \leq C_2 \|v\|_{1,h}.
\end{equation}
\end{lemma}
\medskip
\begin{proof}
For any $v=\{v_0,v_b\}\in V_h$, it follows from the definition of
weak gradient (\ref{d-d}) and integration by parts that
\begin{eqnarray}\label{n-1}
(\nabla_wv,{\bf q})_T=(\nabla v_0,{\bf q})_T+{\langle} v_b-v_0,
{\bf q}\cdot{\bf n}{\rangle}_{\partial T},\quad \forall {\bf q}\in [P_{j}(T)]^d.
\end{eqnarray}
By letting ${\bf q}=\nabla_w v$ in (\ref{n-1}) we arrive at
\begin{eqnarray*}
(\nabla_wv,\nabla_w v)_T=(\nabla v_0,\nabla_w v)_T+{\langle} v_b-v_0,
\nabla_w v\cdot{\bf n}{\rangle}_{\partial T}.
\end{eqnarray*}
From the trace inequality (\ref{trace}) and the inverse inequality
we have
\begin{eqnarray*}
\|\nabla_wv\|^2_T &\le& \|\nabla v_0\|_T \|\nabla_w v\|_T+ \|
v_0-v_b\|_{\partial T} \|\nabla_w v\|_{\partial T}\\
&\le& \|\nabla v_0\|_T \|\nabla_w v\|_T+ Ch_T^{-1/2}\|
v_0-v_b\|_{\partial T} \|\nabla_w v\|_T,
\end{eqnarray*}
which implies
$$
\|\nabla_w v\|_T \le C \left(\|\nabla v_0\|_T +h_T^{-1/2}\|v_0-v_b\|_{\partial T}\right),
$$
and consequently
$${|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|} \leq C_2 \|v\|_{1,h}.$$
Next we will prove $C_1 \|v\|_{1,h}\le {|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|} $.
For $v\in V_h$ and ${\bf q}\in [P_j(T)]^d$, by \eqref{d-d} and integration by parts, we have
\begin{equation}\label{n2}
(\nabla_w v,{\bf q})_T=(\nabla v_0,{\bf q})_T+{\langle} v_b-v_0, {\bf q}\cdot{\bf n}{\rangle}_{\partial T}.
\end{equation}
Let $n$ be the number of edges/faces of a polygon/polyhedron. It has been proved in \cite{cdg2} that there exists ${\bf q}_0\in [P_j(T)]^d$, $j=n+k-1$, such that
\begin{equation}\label{2e}
(\nabla v_0,{\bf q}_0)_T=0, \ \ {\langle} v_b-v_0,{\bf q}_0\cdot{\bf n}{\rangle}_{{\partial T}\setminus e}=0,
\ \ {\langle} v_b-v_0, {\bf q}_0\cdot{\bf n}{\rangle}_e=\|v_0-v_b\|_e^2,
\end{equation}
and
\begin{equation}
\|{\bf q}_0\|_T \le C h_T^{1/2} \| v_b-v_0 \|_e.\label{22e}
\end{equation}
Substituting ${\bf q}_0$ into (\ref{n2}), we get
\begin{equation}\label{n3}
(\nabla_wv,{\bf q}_0)_T=\|v_b-v_0\|^2_e.
\end{equation}
It follows from Cauchy-Schwarz inequality and (\ref{22e}) that
\[
\|v_b-v_0\|^2_e\le C\|\nabla_w v\|_T\|{\bf q}_0\|_T
\le Ch_T^{1/2}\|\nabla_w v\|_T\|v_0-v_b\|_e,
\]
which implies
\begin{equation}\label{n4}
h_T^{-1/2}\|v_0-v_b\|_{\partial T}\le C\|\nabla_w v\|_T.
\end{equation}
It follows from the trace inequality, the inverse inequality, and (\ref{n4}) that
$$
\|\nabla v_0\|_T^2 \leq \|\nabla_w v\|_T \|\nabla v_0\|_T
+Ch_T^{-1/2}\| v_0-v_b\|_{\partial T} \|\nabla v_0\|_T\le C\|\nabla_w v\|_T \|\nabla v_0\|_T.
$$
Combining the above estimate with (\ref{n4}) and the definition \eqref{norm}
proves the lower bound of (\ref{happy}), which completes the proof of the lemma.
\end{proof}
\medskip
\begin{lemma}
The weak Galerkin finite element scheme (\ref{wg}) has a unique
solution.
\end{lemma}
\smallskip
\begin{proof}
If $u_h^{(1)}$ and $u_h^{(2)}$ are two solutions of (\ref{wg}), then
$\varepsilon_h=u_h^{(1)}-u_h^{(2)}\in V_h^0$ would satisfy the following equation
$$
(\nabla_w \varepsilon_h,\nabla_w v)=0,\qquad\forall v\in V_h^0.
$$
Then by letting $v=\varepsilon_h$ in the above
equation we arrive at
$$
{|\hspace{-.02in}|\hspace{-.02in}|} \varepsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}^2 = (\nabla_w \varepsilon_h,\nabla_w \varepsilon_h)=0.
$$
It follows from (\ref{happy}) that $\|\varepsilon_h\|_{1,h}=0$. Since $\|\cdot\|_{1,h}$ is a norm in $V_h^0$, one has $\varepsilon_h=0$.
This completes the proof of the lemma.
\end{proof}
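The argument above is the usual coercivity argument: the stiffness matrix of (\ref{wg}) is symmetric positive definite, so the discrete system has exactly one solution. As a stand-in illustration (a 1D conforming $P_1$ stiffness matrix, not the WG matrix itself), the sketch below checks positive definiteness:

```python
import numpy as np

# Uniqueness via symmetric positive definiteness: for a coercive bilinear
# form the Galerkin matrix is SPD, hence invertible.  Illustration with a
# 1D conforming P_1 stiffness matrix (a stand-in, not the WG matrix).

n = 8                                   # interior nodes, h = 1/(n+1)
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

eigs = np.linalg.eigvalsh(A)            # A is symmetric
print(np.all(eigs > 0))                 # SPD -> unique Galerkin solution
```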
\section{Error Estimates in Energy Norm}
Let $e_h=u-u_h$ and $\epsilon_h=Q_hu-u_h$. Next we derive an error equation that $e_h$ satisfies.
\begin{lemma}
For any $v\in V_h^0$, the following error equation holds true
\begin{eqnarray}
(\nabla_we_h,\nabla_wv)_{{\mathcal T}_h}=\ell(u,v),\label{ee}
\end{eqnarray}
where
\begin{eqnarray*}
\ell(u,v)&=& \langle (\nabla u-{\mathbb Q}_h\nabla u)\cdot{\bf n},v_0-v_b\rangle_{\partial{\mathcal T}_h}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
For $v=\{v_0,v_b\}\in V_h^0$, testing (\ref{pde}) by $v_0$ and using the fact that
$\sum_{T\in{\mathcal T}_h}\langle \nabla u\cdot{\bf n}, v_b\rangle_{\partial T}=0$, we arrive at
\begin{equation}\label{m1}
(\nabla u,\nabla v_0)_{{\mathcal T}_h}- \langle
\nabla u\cdot{\bf n},v_0-v_b\rangle_{\partial{\mathcal T}_h}=(f,v_0).
\end{equation}
It follows from integration by parts, (\ref{d-d}) and (\ref{key}) that
\begin{eqnarray}
(\nabla u,\nabla v_0)_{{\mathcal T}_h}&=&({\mathbb Q}_h\nabla u,\nabla v_0)_{{\mathcal T}_h}\nonumber\\
&=&-(v_0,\nabla\cdot ({\mathbb Q}_h\nabla u))_{{\mathcal T}_h}+\langle v_0, {\mathbb Q}_h\nabla u\cdot{\bf n}\rangle_{\partial{\mathcal T}_h}\nonumber\\
&=&({\mathbb Q}_h\nabla u, \nabla_w v)_{{\mathcal T}_h}+\langle v_0-v_b,{\mathbb Q}_h\nabla u\cdot{\bf n}\rangle_{\partial{\mathcal T}_h}\nonumber\\
&=&( \nabla_w u, \nabla_w v)_{{\mathcal T}_h}+\langle v_0-v_b,{\mathbb Q}_h\nabla u\cdot{\bf n}\rangle_{\partial{\mathcal T}_h}.\label{j1}
\end{eqnarray}
Combining (\ref{m1}) and (\ref{j1}) gives
\begin{eqnarray}
(\nabla_w u,\nabla_w v)_{{\mathcal T}_h}&=&(f,v_0)+\ell(u,v).\label{j2}
\end{eqnarray}
The error equation follows from subtracting (\ref{wg}) from (\ref{j2}),
\begin{eqnarray*}
(\nabla_we_h,\nabla_wv)_{{\mathcal T}_h}=\ell(u,v),\quad \forall v\in V_h^0.
\end{eqnarray*}
This completes the proof of the lemma.
\end{proof}
For any function $\varphi\in H^1(T)$, the following trace
inequality holds true (see \cite{wymix} for details):
\begin{equation}\label{trace}
\|\varphi\|_{e}^2 \leq C \left( h_T^{-1} \|\varphi\|_T^2 + h_T
\|\nabla \varphi\|_{T}^2\right).
\end{equation}
\begin{lemma} For any $w\in H^{k+1}(\Omega)$ and
$v=\{v_0,v_b\}\in V_h^0$, we have
\begin{eqnarray}
|\ell(w, v)|&\le&
Ch^{k}|w|_{k+1}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}.\label{mmm1}
\end{eqnarray}
\end{lemma}
\medskip
\begin{proof}
Using the Cauchy-Schwarz inequality, the trace inequality (\ref{trace}) and (\ref{happy}), we have
\begin{eqnarray*}
|\ell(w,v)|&=&\left|\sum_{T\in{\mathcal T}_h}\langle (\nabla w-{\mathbb Q}_h\nabla
w)\cdot{\bf n}, v_0-v_b\rangle_{\partial T}\right|\\
&\le & C \sum_{T\in{\mathcal T}_h}\|(\nabla w-{\mathbb Q}_h\nabla w)\|_{{\partial T}}
\|v_0-v_b\|_{\partial T}\nonumber\\
&\le & C \left(\sum_{T\in{\mathcal T}_h}h_T\|(\nabla w-{\mathbb Q}_h\nabla w)\|_{{\partial T}}^2\right)^{\frac12}
\left(\sum_{T\in{\mathcal T}_h}h_T^{-1}\|v_0-v_b\|_{\partial T}^2\right)^{\frac12}\\
&\le & Ch^{k}|w|_{k+1}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|},
\end{eqnarray*}
which proves the lemma.
\end{proof}
\smallskip
\begin{lemma}
Let $w\in H^{k+1}(\Omega)$, then
\begin{equation}\label{eee2}
{|\hspace{-.02in}|\hspace{-.02in}|} w-Q_hw{|\hspace{-.02in}|\hspace{-.02in}|}\le Ch^k|w|_{k+1}.
\end{equation}
\end{lemma}
\begin{proof}
It follows from (\ref{d-d}), integration by parts, and (\ref{trace}) that
\begin{eqnarray*}
(\nabla_w(w-Q_hw), {\bf q})_{T}&=&-(w-Q_0w, \nabla\cdot{\bf q})_{T}+{\langle} w-Q_bw, {\bf q}\cdot{\bf n}{\rangle}_{{\partial T}}\\
&=&(\nabla (w-Q_0w), {\bf q})_{T}+{\langle} Q_0w-Q_bw, {\bf q}\cdot{\bf n}{\rangle}_{{\partial T}}\\
&\le& \|\nabla (w-Q_0w)\|_T\|{\bf q}\|_T+Ch^{-1/2}\|w-Q_0w\|_{\partial T}\|{\bf q}\|_T\\
&\le& Ch^k|w|_{k+1, T}\|{\bf q}\|_T.
\end{eqnarray*}
Letting ${\bf q}=\nabla_w(w-Q_hw)$ in the above equation and taking summation over $T$, we have
\[
{|\hspace{-.02in}|\hspace{-.02in}|} w-Q_hw{|\hspace{-.02in}|\hspace{-.02in}|}\le Ch^k|w|_{k+1}.
\]
We have proved the lemma.
\end{proof}
\begin{theorem} Let $u_h\in V_h$ be the weak Galerkin finite element solution of (\ref{wg}). Assume the exact solution $u\in H^{k+1}(\Omega)$. Then,
there exists a constant $C$ such that
\begin{equation}\label{err1}
{|\hspace{-.02in}|\hspace{-.02in}|} u-u_h{|\hspace{-.02in}|\hspace{-.02in}|} \le Ch^{k}|u|_{k+1}.
\end{equation}
\end{theorem}
\begin{proof}
It is straightforward to obtain
\begin{eqnarray}
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2&=&(\nabla_we_h, \nabla_we_h)_{{\mathcal T}_h}\label{eee1}\\
&=&(\nabla_wu-\nabla_wu_h,\nabla_we_h)_{{\mathcal T}_h}\nonumber\\
&=&(\nabla_wQ_hu-\nabla_wu_h,\nabla_we_h)_{{\mathcal T}_h}+(\nabla_wu-\nabla_wQ_hu,\nabla_we_h)_{{\mathcal T}_h}\nonumber\\
&=&(\nabla_we_h,\nabla_w\epsilon_h)_{{\mathcal T}_h}+(\nabla_wu-\nabla_wQ_hu,\nabla_we_h)_{{\mathcal T}_h}.\nonumber
\end{eqnarray}
We will bound each term on the right-hand side of (\ref{eee1}).
Letting $v=\epsilon_h\in V_h^0$ in (\ref{ee}) and using (\ref{mmm1}) and (\ref{eee2}), we have
\begin{eqnarray}
|(\nabla_we_h,\nabla_w\epsilon_h)_{{\mathcal T}_h}|&=&|\ell(u,\epsilon_h)|\nonumber\\
&\le& Ch^{k}|u|_{k+1}{|\hspace{-.02in}|\hspace{-.02in}|} \epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}\nonumber\\
&\le& Ch^{k}|u|_{k+1}{|\hspace{-.02in}|\hspace{-.02in}|} Q_hu-u_h{|\hspace{-.02in}|\hspace{-.02in}|}\nonumber\\
&\le& Ch^{k}|u|_{k+1}({|\hspace{-.02in}|\hspace{-.02in}|} Q_hu-u{|\hspace{-.02in}|\hspace{-.02in}|}+{|\hspace{-.02in}|\hspace{-.02in}|} u-u_h{|\hspace{-.02in}|\hspace{-.02in}|})\nonumber\\
&\le& Ch^{2k}|u|^2_{k+1}+\frac14 {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2.\label{eee3}
\end{eqnarray}
The estimate (\ref{eee2}) implies
\begin{eqnarray}
|(\nabla_wu-\nabla_wQ_hu,\nabla_we_h)_{{\mathcal T}_h}|&\le& C{|\hspace{-.02in}|\hspace{-.02in}|} u-Q_hu{|\hspace{-.02in}|\hspace{-.02in}|} {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}\nonumber\\
&\le& Ch^{2k}|u|^2_{k+1}+\frac14{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2.\label{eee4}
\end{eqnarray}
Combining the estimates (\ref{eee3}) and (\ref{eee4}) with (\ref{eee1}), we arrive at
\[
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|} \le Ch^{k}|u|_{k+1},
\]
which completes the proof.
\end{proof}
\section{Error Estimates in $L^2$ Norm}
The standard duality argument is used to obtain the $L^2$ error estimate.
Recall $e_h=\{e_0,e_b\}=u-u_h$ and $\epsilon_h=\{\epsilon_0,\epsilon_b\}=Q_hu-u_h$.
The considered dual problem seeks $\Phi\in H_0^1(\Omega)$ satisfying
\begin{eqnarray}
-\Delta\Phi&=& \epsilon_0,\quad
\mbox{in}\;\Omega.\label{dual}
\end{eqnarray}
Assume that the following $H^{2}$-regularity holds
\begin{equation}\label{reg}
\|\Phi\|_2\le C\|\epsilon_0\|.
\end{equation}
\begin{theorem} Let $u_h\in V_h$ be the weak Galerkin finite element solution of (\ref{wg}). Assume that the
exact solution $u\in H^{k+1}(\Omega)$ and (\ref{reg}) holds true.
Then, there exists a constant $C$ such that
\begin{equation}\label{err2}
\|u-u_0\| \le Ch^{k+1}|u|_{k+1}.
\end{equation}
\end{theorem}
\begin{proof}
Testing (\ref{dual}) by $\epsilon_0$ and using the fact that $\sum_{T\in{\mathcal T}_h}\langle \nabla
\Phi\cdot{\bf n}, \epsilon_b\rangle_{\partial T}=0$ give
\begin{eqnarray}\nonumber
\|\epsilon_0\|^2&=&-(\Delta\Phi,\epsilon_0)\\
&=&(\nabla \Phi,\ \nabla \epsilon_0)_{{\mathcal T}_h}-{\langle}
\nabla\Phi\cdot{\bf n},\ \epsilon_0- \epsilon_b{\rangle}_{\partial{\mathcal T}_h}.\label{jw.08}
\end{eqnarray}
Setting $u=\Phi$ and $v=\epsilon_h$ in (\ref{j1}) yields
\begin{eqnarray}
(\nabla\Phi,\;\nabla \epsilon_0)_{{\mathcal T}_h}=(\nabla_w \Phi,\;\nabla_w \epsilon_h)_{{\mathcal T}_h}+{\langle}
{\mathbb Q}_h\nabla\Phi\cdot{\bf n},\ \epsilon_0-\epsilon_b{\rangle}_{\partial{\mathcal T}_h}.\label{j1-new}
\end{eqnarray}
Substituting (\ref{j1-new}) into (\ref{jw.08}) gives
\begin{eqnarray}
\|\epsilon_0\|^2&=&(\nabla_w \epsilon_h,\ \nabla_w\Phi)_{{\mathcal T}_h}-{\langle}
(\nabla\Phi-{\mathbb Q}_h\nabla\Phi)\cdot{\bf n},\ \epsilon_0-\epsilon_b{\rangle}_{\partial{\mathcal T}_h}\nonumber\\
&=&(\nabla_w e_h,\ \nabla_w\Phi)_{{\mathcal T}_h}+(\nabla_w (Q_hu-u),\ \nabla_w\Phi)_{{\mathcal T}_h}+\ell(\Phi, \epsilon_h)\nonumber\\
&=&(\nabla_w e_h,\ \nabla_wQ_h\Phi)_{{\mathcal T}_h}+(\nabla_w e_h,\ \nabla_w(\Phi-Q_h\Phi))_{{\mathcal T}_h}\nonumber\\
&+&(\nabla_w (Q_hu-u),\ \nabla_w\Phi)_{{\mathcal T}_h}+\ell(\Phi, \epsilon_h)\nonumber\\
&=&\ell(u,Q_h\Phi)+(\nabla_w e_h,\ \nabla_w(\Phi-Q_h\Phi))_{{\mathcal T}_h}+(\nabla_w (Q_hu-u),\ \nabla_w\Phi)_{{\mathcal T}_h}+\ell(\Phi, \epsilon_h)\nonumber\\
&=&I_1+I_2+I_3+I_4.\label{m2}
\end{eqnarray}
Next we will estimate all the terms on the right hand side of (\ref{m2}). Using the Cauchy-Schwarz inequality, the trace inequality (\ref{trace}) and the definitions of $Q_h$ and ${\mathbb Q}_h$,
we obtain
\begin{eqnarray*}
I_1&=&|\ell(u,Q_h\Phi)|\le\left| \langle (\nabla u-{\mathbb Q}_h\nabla
u)\cdot{\bf n},\;
Q_0\Phi-Q_b\Phi\rangle_{\partial{\mathcal T}_h} \right|\\
&\le& \left(\sum_{T\in{\mathcal T}_h}\|(\nabla u-{\mathbb Q}_h\nabla
u)\|^2_{\partial T}\right)^{1/2}
\left(\sum_{T\in{\mathcal T}_h}\|Q_0\Phi-Q_b\Phi\|^2_{\partial T}\right)^{1/2}\nonumber \\
&\le& C\left(\sum_{T\in{\mathcal T}_h}h\|(\nabla u-{\mathbb Q}_h\nabla
u)\|^2_{\partial T}\right)^{1/2}
\left(\sum_{T\in{\mathcal T}_h}h^{-1}\|Q_0\Phi-\Phi\|^2_{\partial T}\right)^{1/2} \nonumber\\
&\le& Ch^{k+1}|u|_{k+1}|\Phi|_2.\nonumber
\end{eqnarray*}
It follows from (\ref{err1}) and (\ref{eee2}) that
\begin{eqnarray*}
I_2&=&|(\nabla_w e_h,\ \nabla_w(\Phi-Q_h\Phi))_{{\mathcal T}_h}|\le C{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|} {|\hspace{-.02in}|\hspace{-.02in}|} \Phi-Q_h\Phi{|\hspace{-.02in}|\hspace{-.02in}|}\\
&\le& Ch^{k+1}|u|_{k+1}|\Phi|_2.
\end{eqnarray*}
To bound $I_3$, we define an element-wise $L^2$ projection onto $[P_1(T)]^d$, denoted by $R_h$. Then it follows from the definition of the weak gradient (\ref{d-d}) that
\begin{eqnarray*}
(\nabla_w (Q_hu-u),\ R_h\nabla_w\Phi)_{T}&=&-(Q_0u-u,\nabla\cdot R_h\nabla_w\Phi)_T+ {\langle} Q_bu-u, R_h\nabla_w\Phi\cdot{\bf n}{\rangle}_{\partial T}=0.
\end{eqnarray*}
Using the equation above and (\ref{eee2}) and the definition of $R_h$, we have
\begin{eqnarray*}
I_3&=&|(\nabla_w (Q_hu-u),\ \nabla_w\Phi)_{{\mathcal T}_h}|\\
&=&|(\nabla_w (Q_hu-u),\ \nabla_w\Phi-R_h\nabla_w\Phi)_{{\mathcal T}_h}|\\
&=&|(\nabla_w (Q_hu-u),\ \nabla\Phi-R_h\nabla\Phi)_{{\mathcal T}_h}|\\
&\le& Ch^{k+1}|u|_{k+1}|\Phi|_2.
\end{eqnarray*}
It follows from (\ref{mmm1}), (\ref{eee2}) and (\ref{err1}) that
\begin{eqnarray*}
I_4&=&|\ell(\Phi,\epsilon_h)|\le Ch|\Phi|_2{|\hspace{-.02in}|\hspace{-.02in}|} \epsilon_h{|\hspace{-.02in}|\hspace{-.02in}|}\\
&\le& Ch|\Phi|_2({|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}+{|\hspace{-.02in}|\hspace{-.02in}|} u-Q_hu{|\hspace{-.02in}|\hspace{-.02in}|})\\
&\le& Ch^{k+1 }|u|_{k+1}\|\Phi\|_2.
\end{eqnarray*}
Combining all the estimates above
with (\ref{m2}) yields
$$
\|\epsilon_0\|^2 \leq C h^{k+1}|u|_{k+1} \|\Phi\|_2.
$$
It follows from the above inequality and
the regularity assumption (\ref{reg}) that
$$
\|\epsilon_0\|\leq C h^{k+1}|u|_{k+1}.
$$
The triangle inequality implies
$$
\|e_0\|\le \|\epsilon_0\|+\|u-Q_0u\| \leq C h^{k+1}|u|_{k+1}.
$$
We have completed the proof.
\end{proof}
\section{Numerical Experiments}\label{Section:numerical-experiments}
We solve the following Poisson equation on the unit square:
\begin{align} \label{s1} -\Delta u = 2\pi^2 \sin\pi x\sin \pi y, \quad (x,y)\in\Omega=(0,1)^2,
\end{align} with the boundary condition $u=0$ on $\partial \Omega$.
\begin{figure}[h!]
\begin{center} \setlength\unitlength{1.25pt}
\begin{picture}(260,80)(0,0)
\def\sq{\begin{picture}(20,20)(0,0)\put(0,0){\line(1,0){20}}\put(0,20){\line(1,0){20}}
\put(0,0){\line(0,1){20}} \put(20,0){\line(0,1){20}} \put(20,0){\line(-1,1){20}}\end{picture}}
{\setlength\unitlength{5pt}
\multiput(0,0)(20,0){1}{\multiput(0,0)(0,20){1}{\sq}}}
{\setlength\unitlength{2.5pt}
\multiput(45,0)(20,0){2}{\multiput(0,0)(0,20){2}{\sq}}}
\multiput(180,0)(20,0){4}{\multiput(0,0)(0,20){4}{\sq}}
\end{picture}\end{center}
\caption{\label{grid1} The first three levels of grids used in the computation of Table \ref{t1}. }
\end{figure}
In the first computation, the level-one grid consists of two right triangles
obtained by cutting the unit square along a diagonal.
Each higher-level grid is a uniform half-size refinement of the previous one.
The first three levels of grids are plotted in Figure \ref{grid1}.
The error and the order of convergence are shown in Table \ref{t1}.
The numerical results confirm the convergence theory.
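The reported rates can be reproduced from consecutive error values, since each refinement halves $h$. The sketch below recomputes the rates for the $P_1$/$P_2^2$ rows of Table \ref{t1} (the error values are copied from the table):

```python
import math

# Observed convergence order between consecutive uniform refinements:
#   rate = log2( e_{l-1} / e_l ),   since h is halved at each level.
# Error values are copied from Table 1 (P_1 elements, P_2^2 weak gradient).

l2_err = [0.4295e-03, 0.1075e-03, 0.2688e-04]   # ||u_h - Q_0 u||, levels 6-8
h1_err = [0.5369e-01, 0.2684e-01, 0.1342e-01]   # ||| u_h - u |||, levels 6-8

l2_rates = [math.log2(a / b) for a, b in zip(l2_err, l2_err[1:])]
h1_rates = [math.log2(a / b) for a, b in zip(h1_err, h1_err[1:])]

print([round(r, 2) for r in l2_rates])   # [2.0, 2.0] -> O(h^{k+1}), k = 1
print([round(r, 2) for r in h1_rates])   # [1.0, 1.0] -> O(h^k)
```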
\begin{table}[h!]
\centering \renewcommand{\arraystretch}{1.1}
\caption{Error profiles and convergence rates for \eqref{s1} on triangular grids }\label{t1}
\begin{tabular}{c|cc|cc}
\hline
level & $\|u_h- Q_0 u\| $ &rate & ${|\hspace{-.02in}|\hspace{-.02in}|} u_h- u{|\hspace{-.02in}|\hspace{-.02in}|} $ &rate \\
\hline
&\multicolumn{4}{l}{by $P_1$ elements with $P_1^2$ weak gradient $\Rightarrow$ singular} \\ \hline
&\multicolumn{4}{l}{by $P_1$ elements with $P_2^2$ weak gradient} \\ \hline
6& 0.4295E-03 & 1.99& 0.5369E-01 & 1.00\\
7& 0.1075E-03 & 2.00& 0.2684E-01 & 1.00\\
8& 0.2688E-04 & 2.00& 0.1342E-01 & 1.00\\
\hline
&\multicolumn{4}{l}{by $P_2$ elements with $P_2^2$ weak gradient $\Rightarrow$ singular} \\ \hline
&\multicolumn{4}{l}{by $P_2$ elements with $P_3^2$ weak gradient} \\ \hline
6& 0.2383E-05 & 3.01& 0.1013E-02 & 2.00\\
7& 0.2971E-06 & 3.00& 0.2532E-03 & 2.00\\
8& 0.3709E-07 & 3.00& 0.6330E-04 & 2.00\\
\hline
&\multicolumn{4}{l}{by $P_3$ elements with $P_3^2$ weak gradient $\Rightarrow$ singular} \\ \hline
&\multicolumn{4}{l}{by $P_3$ elements with $P_4^2$ weak gradient} \\ \hline
6& 0.2468E-07 & 4.02& 0.1430E-04 & 3.00\\
7& 0.1532E-08 & 4.01& 0.1789E-05 & 3.00\\
8& 0.9550E-10 & 4.00& 0.2237E-06 & 3.00\\
\hline
&\multicolumn{4}{l}{by $P_4$ elements with $P_4^2$ weak gradient $\Rightarrow$ singular} \\ \hline
&\multicolumn{4}{l}{by $P_4$ elements with $P_5^2$ weak gradient} \\ \hline
5& 0.8154E-08 & 4.99& 0.2441E-05 & 4.00\\
6& 0.2551E-09 & 5.00& 0.1526E-06 & 4.00\\
7& 0.8257E-11 & 4.99& 0.9539E-08 & 4.00\\
\hline
\end{tabular}%
\end{table}%
In the next computation, we use a family of polygonal grids (with 12-sided polygons)
shown in Figure \ref{12gon}.
The numerical results in Table \ref{t3} indicate that the polynomial degree $j$ for the weak gradient must
be larger on these meshes, in agreement with the theory, where $j$ depends on the number of edges of a polygon.
The convergence history again confirms the optimal orders.
\begin{figure}[htb]\begin{center}\setlength\unitlength{1.5in}
\begin{picture}(3.2,1.4)
\put(0,0){\includegraphics[width=1.5in]{g12g1-eps-converted-to.pdf}}
\put(1.1,0){\includegraphics[width=1.5in]{g12g2-eps-converted-to.pdf}}
\put(2.2,0){\includegraphics[width=1.5in]{g12g3-eps-converted-to.pdf}}
\end{picture}
\caption{ The first three polygonal grids for the computation of Table \ref{t3}. } \label{12gon}
\end{center}
\end{figure}
\begin{table}[h!]
\centering \renewcommand{\arraystretch}{1.1}
\caption{Error profiles and convergence rates for \eqref{s1} on polygonal grids
shown in Figure \ref{12gon} }\label{t3}
\begin{tabular}{c|cc|cc}
\hline
level & $\|u_h- Q_0u\| $ &rate & ${|\hspace{-.02in}|\hspace{-.02in}|} u_h-u{|\hspace{-.02in}|\hspace{-.02in}|} $ &rate \\
\hline
&\multicolumn{4}{l}{by $P_1$ elements with $P_2^2$ weak gradient $\Rightarrow$ singular} \\ \hline
&\multicolumn{4}{l}{by $P_1$ elements with $P_3^2$ weak gradient} \\ \hline
5& 0.9671E-03 & 1.98& 0.1350E+00 & 1.00 \\
6& 0.2425E-03 & 2.00& 0.6750E-01 & 1.00 \\
7& 0.6067E-04 & 2.00& 0.3375E-01 & 1.00 \\
\hline
&\multicolumn{4}{l}{by $P_2$ elements with $P_3^2$ weak gradient $\Rightarrow$ singular} \\ \hline
&\multicolumn{4}{l}{by $P_2$ elements with $P_4^2$ weak gradient} \\ \hline
5& 0.5791E-05 & 3.00& 0.3247E-02 & 2.00 \\
6& 0.7233E-06 & 3.00& 0.8120E-03 & 2.00 \\
7& 0.9040E-07 & 3.00& 0.2030E-03 & 2.00 \\
\hline
&\multicolumn{4}{l}{by $P_3$ elements with $P_4^2$ weak gradient $\Rightarrow$ singular} \\ \hline
&\multicolumn{4}{l}{by $P_3$ elements with $P_5^2$ weak gradient} \\ \hline
4& 0.8809E-06 & 4.00& 0.3575E-03 & 2.99 \\
5& 0.5509E-07 & 4.00& 0.4475E-04 & 3.00 \\
6& 0.3447E-08 & 4.00& 0.5595E-05 & 3.00 \\
\hline
\end{tabular}%
\end{table}%
| {
"timestamp": "2019-07-15T02:05:01",
"yymm": "1906",
"arxiv_id": "1906.06634",
"language": "en",
"url": "https://arxiv.org/abs/1906.06634",
"abstract": "A stabilizing/penalty term is often used in finite element methods with discontinuous approximations to enforce connection of discontinuous functions across element boundaries. Removing stabilizers from discontinuous finite element methods will simplify the formulations and reduce programming complexity significantly. The goal of this paper is to introduce a stabilizer free weak Galerkin finite element method for second order elliptic equations on polytopal meshes in 2D or 3D. This new WG method keeps a simple symmetric positive definite form and can work on polygonal/polyheral meshes. Optimal order error estimates are established for the corresponding WG approximations in both a discrete $H^1$ norm and the $L^2$ norm. Numerical results are presented verifying the theorem.",
"subjects": "Numerical Analysis (math.NA)",
"title": "On stabilizer-free weak Galerkin finite element methods on polytopal meshes"
} |
https://arxiv.org/abs/1904.12053 | Sample Amplification: Increasing Dataset Size even when Learning is Impossible | Given data drawn from an unknown distribution, $D$, to what extent is it possible to ``amplify'' this dataset and output an even larger set of samples that appear to have been drawn from $D$? We formalize this question as follows: an $(n,m)$ $\text{amplification procedure}$ takes as input $n$ independent draws from an unknown distribution $D$, and outputs a set of $m > n$ ``samples''. An amplification procedure is valid if no algorithm can distinguish the set of $m$ samples produced by the amplifier from a set of $m$ independent draws from $D$, with probability greater than $2/3$. Perhaps surprisingly, in many settings, a valid amplification procedure exists, even when the size of the input dataset, $n$, is significantly less than what would be necessary to learn $D$ to non-trivial accuracy. Specifically we consider two fundamental settings: the case where $D$ is an arbitrary discrete distribution supported on $\le k$ elements, and the case where $D$ is a $d$-dimensional Gaussian with unknown mean, and fixed covariance. In the first case, we show that an $\left(n, n + \Theta(\frac{n}{\sqrt{k}})\right)$ amplifier exists. In particular, given $n=O(\sqrt{k})$ samples from $D$, one can output a set of $m=n+1$ datapoints, whose total variation distance from the distribution of $m$ i.i.d. draws from $D$ is a small constant, despite the fact that one would need quadratically more data, $n=\Theta(k)$, to learn $D$ up to small constant total variation distance. In the Gaussian case, we show that an $\left(n,n+\Theta(\frac{n}{\sqrt{d}} )\right)$ amplifier exists, even though learning the distribution to small constant total variation distance requires $\Theta(d)$ samples. In both the discrete and Gaussian settings, we show that these results are tight, to constant factors. 
Beyond these results, we formalize a number of curious directions for future research along this vein. |
\subsection{Lower Bound}
In this section we show that the above procedure is optimal, up to constant factors, for amplifying samples from discrete distributions. The proof is constructive: we exhibit a simple verifier that can distinguish any amplifier when $m > n + \alpha \frac{n}{\sqrt{k}}$ for a fixed constant $\alpha$. The proof relies on the fact that the amplifier cannot add samples beyond the support of the samples it has already seen. When $m$ is sufficiently larger than $n$, we show there are distributions for which large regions of the support are below the threshold required for the birthday paradox, meaning that with high probability every new sample reveals additional information about the support; the amplifier cannot add samples in such a region.
\begin{proposition}\label{prop:disc_lower}
There is a constant $c$, such that for every sufficiently large $k$, $\mathcal{C}$ does not admit an $\left(n, n+\frac{cn}{\sqrt{k}}\right)$ amplification procedure.
\end{proposition}
The proposition follows by constructing a verifier and a class of discrete distributions over $k$ elements, $\mathcal C$, with the following property: for a universal constant $c$ and $p \leftarrow Uniform[\mathcal C]$, the verifier can detect \emph{any} $\left(n, n+ \frac{cn}{\sqrt k}\right)$ amplifier with sufficiently high probability.
Before we prove Proposition \ref{prop:disc_lower}, we introduce some additional notation and a basic martingale inequality. Let $C^k$ be the set of discrete uniform distributions over $k$ integers in $0,\dots,8k$. Let $C^k_l$ be the set of discrete distributions with mass $1-l$ on one element and uniform mass over $k-1$ remaining integers in $0, \dots, 8k$. We also rely on some martingale inequalities which can be found in \cite{chung2006concentration}.
\begin{fact} \label{fact:martingale}
Let $X_0,\dots,X_n$ be a martingale with respect to a filtration $\mathcal F$ satisfying:
\begin{enumerate}
\item $\mathrm{Var}[X_i \mid \mathcal F_{i-1}] \le \sigma^2_i$ for $1 \le i \le n$,
\item $|X_i - X_{i-1}| \le 1$ almost surely.
\end{enumerate}
Then, we have
$$ \Pr(X_n - \mathbb E[X_n] \ge \lambda ) \le e^{-\frac{\lambda^2}{2 \left ( \sum_i \sigma_i^2 + \lambda/3 \right )}}.$$
The same bound holds for the lower tail $\Pr(X_n - \mathbb E[X_n] \le -\lambda )$, though the two bounds do not hold simultaneously.
\end{fact}
Finally, we rely on a slight generalization of the birthday paradox, which can be found in \cite{bellare1994security}.
\begin{fact} \label{fact:bday}
Let $n$ samples be drawn from a uniform distribution over $k$ elements. Then the probability that the samples contain a duplicate is less than $\frac{n^2}{2k}$.
\end{fact}
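Fact \ref{fact:bday} can be checked empirically. The following sketch (the parameters $n$, $k$, and the trial count are illustrative choices) estimates the duplicate probability by simulation and compares it with the bound $\frac{n^2}{2k}$:

```python
import random

# Empirical check of the birthday bound: for n i.i.d. draws from
# Uniform{0, ..., k-1}, Pr[some duplicate] <= n^2 / (2k).
# The parameters n, k and the trial count are illustrative choices.

def duplicate_prob(n, k, trials=20000, seed=0):
    rng = random.Random(seed)
    hits = sum(
        len({rng.randrange(k) for _ in range(n)}) < n
        for _ in range(trials)
    )
    return hits / trials

n, k = 20, 1600                     # roughly the n ~ sqrt(k)/2 regime of the proof
est = duplicate_prob(n, k)
print(est <= n * n / (2 * k))       # the bound holds (true prob ~ 0.11 vs 0.125)
```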
The proof proceeds in two parts. First we prove a lemma that establishes the desired result for $n \le \frac k 2$. We then exhibit a class of distributions that allows us to reduce the general case to the result shown in the lemma.
\begin{lemma} \label{lemma:discrete_base_case}
For sufficiently large $k$ and $m=n + 30 \frac{n}{\sqrt k}\le \frac k 4$, there exists a verifier such that, for $p \sim Uniform[C^k]$:
\begin{enumerate}
\item For all $p$, it accepts $X_m$ with probability at least $\frac 3 4$ over the randomness in $X_m$.
\item It rejects $f(X_n)$ with probability at least $\frac 3 4$ for any amplifier $f$ over the randomness in $X_n,p$ and the amplifier.
\end{enumerate}
\end{lemma}
\begin{proof}
First we consider the case when $n \le \frac{\sqrt k}{2}$. Consider the verifier that selects $\frac{\sqrt k}{2} + 1 < \sqrt{\frac k 2}$ of the given samples uniformly at random and accepts if they contain no repeats and all lie in the support of $p$. By Fact \ref{fact:bday}, the probability of a duplicate among samples from the real distribution is less than $\frac 1 4$, so the verifier accepts samples from the true distribution with probability at least $\frac 3 4$.
An amplified set, on the other hand, must have repeats outside of the original elements it saw. This is because if the amplifier expanded the support of the set, the verifier would catch it with probability $\frac 7 8$. To see this, consider a sample added by the amplifier outside of the seen support. Conditioned on the at most $\frac k 4$ unique samples seen so far (which implies that $\frac 3 4$ of the support is still unseen), the probability, over the choice of $p$, that said sample lies in the support of $p$ is at most $\frac{(3/4)k}{8k -n} \le \frac{(3/4)k}{7.5k} \le \frac 1 8$. Hence if the amplified set has any element outside the original support, then it is rejected with probability at least $\frac{7}{8}$. Note that if the amplified set has at most $\frac{\sqrt k}{2}$ unique elements, then it can be immediately distinguished for having too many repeats.
We now examine the case when $n > \frac{\sqrt{k}}{2}$. Since the verifier can identify when the amplifier introduces unseen elements with probability at least $\frac 7 8$, we condition on the event that the verifier identifies such elements for the remainder of this proof. The proof proceeds by showing that a set the size of the amplified set must have significantly more unique elements than the original set. Before we proceed with the details of the proof, we define the martingale that is central to the argument.
Consider the scenario where the $n$ samples are drawn in sequence, and let $\mathcal F_i$ denote the filtration corresponding to the $i$-th draw (i.e., the information in the first $i$ draws). Let $U_i$ be the indicator that the $i$th sample was previously unseen, and let $U^n = \sum\limits_{i = 1}^n U_i$. Note that $B_i = \mathbb E \left [\sum\limits^n_{j=1} U_j \mid \mathcal F_{i} \right]$ is a Doob martingale with respect to the filtration $\mathcal F_i$, with $B_n=U^n$. Also, $B_i$ has differences bounded by $1$, as $U_i$ is an indicator random variable. If $j$ is the number of previously seen distinct elements, then $\mathrm{Var} \left [B_i \mid \mathcal F_{i-1} \right ] \le \mathrm{Var}[U_i \mid \mathcal F_{i-1}] \le \frac{(k-j)j}{k^2}$. Since $j < i \le n$, the variance is upper bounded by $\frac{j}{k} \le \frac n k$.
The verifier will accept only if all elements are within the support of the distribution and the number of unique elements is greater than $\mathbb E[U^n] + 7 \frac{n}{\sqrt k}$, where $\mathbb E[U^n]$ is computed under $X_n$.
The remainder of the proof will show the following:
\begin{enumerate}
\item $U^n$ concentrates around its expectation within a $O\left ( \frac{n}{\sqrt k} \right)$ margin for $X_n$ (this shows the amplifier gets too few unique samples to be accepted by the verifier).
\item The expectation $\mathbb{E}[U^m-U^n]$ increases by at least $\Omega \left ( \frac{n}{\sqrt k} \right )$ from $X_n$ to $X_m$ (which shows the number of unique items is sufficiently different in expectation between $X_n$ and $X_m$).
\item $U^m$ concentrates around its expectation within a $O\left ( \frac{n}{\sqrt k} \right)$ margin for $X_m$ (this combined with the previous statement shows the verifier accepts real samples with sufficiently high probability).
\end{enumerate}
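Before turning to the formal tail bounds, the behaviour of $U^n$ can be illustrated by simulation. The sketch below (the parameter choices and trial count are ours, for illustration) compares the unique count with its exact expectation $\mathbb E[U^n] = k\left(1-(1-\frac1k)^n\right)$ and the $7\frac{n}{\sqrt k}$ margin used by the verifier:

```python
import random
import statistics

# Simulation of U^n: the number of unique elements among n i.i.d. draws
# from Uniform{0, ..., k-1}.  Exact expectation: E[U^n] = k(1-(1-1/k)^n).
# The parameters below (and the trial count) are illustrative choices.

def unique_count(n, k, rng):
    return len({rng.randrange(k) for _ in range(n)})

k, n = 10_000, 2_000                    # n < k/2, as in the lemma
rng = random.Random(1)
samples = [unique_count(n, k, rng) for _ in range(500)]

mean_exact = k * (1 - (1 - 1 / k) ** n)
margin = 7 * n / k ** 0.5               # the 7 n / sqrt(k) verifier threshold
within = sum(abs(u - mean_exact) < margin for u in samples) / len(samples)

print(round(statistics.mean(samples)), round(mean_exact))   # close to each other
print(within)                                               # concentration: ~1.0
```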
The upper tail bound follows via Fact \ref{fact:martingale}. Recall that $\frac{n}{\sqrt k} < 4 \frac{n^2}{k}$ since $n > \frac{\sqrt k}{2} $.
\begin{align*}
\Pr \left(U^n - \mathbb E[U^n] \ge 7\frac{n}{\sqrt k} \right ) &\le \exp \left ( {-\frac{7^2\frac{n^2}{k}}{2 \left ( \sum \sigma_i^2 + \frac{7 n}{3 \sqrt k} \right )}} \right )\\
& \le \exp \left ( {-\frac{7^2\frac{n^2}{k}}{2 \left ( \frac{n^2}{k} + \frac{7 n}{3 \sqrt k} \right )}} \right )\\
& \le \exp \left ( {-\frac{7^2\frac{n^2}{k}}{2 \left ( \frac{n^2}{k} + 7\frac{4 n^2}{3 k} \right )}} \right ) \\
& = \exp \left ( {-\frac{7^2\frac{n^2}{k}}{2 \left (1+ \frac{4}{3} 7 \right) \frac{n^2}{k}}} \right )\\
& \le \frac 1 8 .
\end{align*}
Note that this suffices to show that any amplified set is rejected with sufficiently high probability: conditioned on adding no unseen elements, the amplified set contains at most $U^n$ unique elements, which exceeds the verifier's threshold with probability at most $\frac 1 8$.
Let $k$ be sufficiently large that the following conditions hold for both $k$ and $k-1$:
\begin{enumerate}
\item $n + 30 \frac{n}{\sqrt k} < \frac{k}{2}$;
\item $m \le 2n$, i.e., the number of samples increases by at most a factor of $2$.
\end{enumerate}
Now we note that $\mathbb E[U^n]$ and $\mathbb E[U^m]$ must differ by at least $\frac{15 n}{\sqrt k}$: since $m < \frac k 2$, every new sample has at least a $\frac 1 2$ probability of being unique. All that remains, in order to show that the verifier will accept $X_m$, is concentration of $U^m$ within $\frac{8 n}{\sqrt k}$ of its mean.
Since the number of samples increased by at most a factor of two, the bound on $\sum_i \sigma_i^2$ increases by at most a factor of four. This suffices for the lower tail bound on $U^m$ under $X_m$:
\begin{align*}
\Pr \left(U^m - \mathbb E[U^m] \le -8\frac{n}{\sqrt k} \right ) &\le \exp \left ( {-\frac{8^2\frac{n^2}{k}}{2 \left ( \sum \sigma_i^2 + \frac{8 n}{3 \sqrt k} \right )}} \right )\\
& \le \exp \left ( {-\frac{8^2\frac{n^2}{k}}{2 \left ( 4\frac{n^2}{k} + \frac{8 n}{3 \sqrt k} \right )}} \right )\\
& \le \exp \left ( {-\frac{8^2\frac{n^2}{k}}{2 \left ( 4\frac{n^2}{k} + 8\frac{4 n^2}{3 k} \right )}} \right ) \\
& = \exp \left ( {-\frac{8^2\frac{n^2}{k}}{2 \left (4+ \frac{4}{3} 8 \right) \frac{n^2}{k}}} \right )\\
& < \frac 1 8 .
\end{align*}
Thus $X_m$ will have sufficiently many unique elements to be accepted by the verifier with probability at least $\frac 7 8$. A success probability of $\frac 3 4$ follows from subtracting the probabiltiy that the verifier did not properly identify unseen samples.
\begin{comment}
&\le \exp \left ( {-\frac{Q^2\frac{n^2}{k}}{2 \left ( \sum \sigma_i^2 + \frac{Q n}{3 \sqrt k} \right )}} \right )\\
& \le \exp \left ( {-\frac{Q^2\frac{n^2}{k}2}{2 \left ( \left (n + \frac{P n}{\sqrt k} \right ) \frac{n + \frac{P n}{\sqrt k}}{k} + \frac{Q n}{3 \sqrt k} \right )}} \right )\\
& \le
\end{comment}
\end{proof}
We are now ready to prove Proposition \ref{prop:disc_lower}.
\begin{proof}
If $n \le \frac k 4$, then Lemma \ref{lemma:discrete_base_case} applies directly. If not, we use the set of distributions $\mathcal C^k_{\frac {k} {4}}$ with the intention of applying Lemma \ref{lemma:discrete_base_case} on samples that land in the uniform region.
The verifier will check that the samples are within the support of the distribution, more than $n+7 \frac{n}{\sqrt k}$ samples are in the uniform region and the verifier from Lemma \ref{lemma:discrete_base_case} accepts on the uniform region.
First note that after $n$ samples, at most $\frac k 4+ \frac{\sqrt k}{4}$ samples will be in the uniform region with at least probability $\frac{15}{16}$ by a Chebyshev bound. Conditioned on this event, Lemma \ref{lemma:discrete_base_case} shows that the amplifier cannot output more than $\frac k 4 + O(\sqrt k)$ samples in the uniform region and will be rejected by our verifier.
Now we show that the verifier will accept real samples with good probability. Note that the expected number of samples to receive in the uniform region for $X_m$ is $\frac k 4 + c \sqrt k$. The variance on this quantity is $\frac k 4 + c \sqrt k$. An application of Chebyshev's inequality shows that with probability at least $\frac {15}{16}$ sufficiently many samples will land in the uniform region.
\begin{align*}
\frac k 4 + c \sqrt k - 4 \sqrt{\frac k 4 + c \sqrt k} &\ge \frac k 4 + c \sqrt k - 2\sqrt{k} -4 \sqrt{c \sqrt k}\\
& \ge \frac k 4 + c \sqrt k - 2\sqrt{k} -4 \sqrt{c k}.
\end{align*}
Since the expression above is increasing with $c$, we can choose a $c$ sufficiently large so that the verifier will accept with sufficiently high probability.
\end{proof}
\begin{comment}nd a lemma which show the result for small $n$. The main proof will proceed via
We also use the following fact (proven here \url{https://cseweb.ucsd.edu/~mihir/papers/cbc.pdf})
\begin{fact} \label{fact:bday}
Let $n$ samples be drawn from a uniform distribution over $k$ elements. Then the probability of the samples containing a duplicate is less than $\frac{q^2}{2N}$.
\end{fact}
\todo{is it ok to use $X_n, Y_m$ like this?}
\begin{lemma} \label{lemma:sub_bday_regime
Consider any positive integers $n,k$ such that $n+1 \le \frac{\sqrt{k}}{2}$. Then for $p \sim Uniform[\mathcal C^k]$ and any amplifier $f$ the following verifier $v$ can distinguish $f(X_n)$ from $Y_{n+1}$. \\
$$ v(x_1,...x_{n+1}) = \begin{cases} x_1...x_{n+1} \textrm{ are in the support of p and have no repeats} & \textrm{ACCEPT}\\ \textrm{otherwise} & \textrm{REJECT} \end{cases}$$
Furthermore the verifier rejects real samples with at most probability $\frac 1 8$ while rejecting amplified samples with probability 1.
\end{lemma}
Proof of lemma \ref{lemma:sub_bday_regime}:
\begin{proof}
First we note that fact \ref{fact:bday} guarantees that the probability of having a duplicate is less than $\frac 1 8$. Thus the verifier accepts real samples with probability at least $\frac 7 8$.
The verifier relies on the fact that an amplifier must repeat a sample. If it returns a sample outside the support of the samples it has seen, it will output something outside the support of $p$ with probability 1 and be caught by the verifier. If it repeats a sample, it will also be caught by the verifier.
\end{proof}
Proof of theorem \ref{thm:disc_lower}:
\begin{proof}
If $n+1 \le \frac{\sqrt{k}}{2}$, then we simply have the case in lemma \ref{lemma:sub_bday_regime} and no amplification can be done.
Otherwise let $l = \frac{\sqrt{k}}{8m}$ and consider $C^k_{l}$. Consider the following verifier:
$$ v(x_1,...x_{m}) = \begin{cases} x_1...x_m \textrm{ are in the support of p and have no repeats in the uniform region, and at least } \frac{\sqrt k}{4} \textrm{ samples in the uniform region} & \textrm{ACCEPT}\\ \textrm{otherwise} & \textrm{REJECT} \end{cases}$$
For any $p \in C^k_{l}$ and $m$ samples, the number of samples in the uniform region of the support can be seen as a binomial distribution with expectation $\frac{\sqrt{k}}{8}$ (with the same bound on the variance). Furthermore, the number of samples in the uniform region lies in $[\frac{\sqrt{k}}{16}, \frac{\sqrt{k}}{2}]$ with probability at least $1 - \frac 1 {16}$.
Furthermore, with at least probability $1 - \frac 1 {16}$, there will be no more than $\frac{\sqrt{k}}{2}$ samples in the uniform region. Hoeffding's inequality applied to the binomial region will show that there will be at least $\frac{\sqrt{k}}{4}$ samples with probability greater than $1 - \exp \left ( -\frac{mp^2}{2} \right )$ This means that lemma \ref{lemma:sub_bday_regime} can be applied to show that there will be no repeats and the verifier will accept if given real samples.
\end{proof}
\end{comment}
\subsection{Lower Bound for Procedures which Return a Superset of the Input Samples}
In this section we prove the lower bound from Proposition \ref{prop:gaussian_modify_full}.
\begin{proposition}
Let $\mathcal{C}$ denote the class of $d-$dimensional Gaussian distributions $N\left(\mu, I\right)$ with unknown mean $\mu$. There is an absolute constant, $c$, such that for sufficiently large $d$, if $n \le \frac{cd}{\log d},$ there is no $(n,n+1)$ amplification procedure that always returns a superset of the original $n$ points.
\end{proposition}
\begin{proof}
The outline of the proof is very similar to the proof of Proposition \ref{prop:gaussian_lower}. As in the proof of Proposition \ref{prop:gaussian_lower}, we define a verifier $v(Z_{n+1})$ for the distribution $N(\mu,I)$ which takes as input $(n+1)$ samples $\{x_i'\in \mathbb{R}^d, i \in [n+1]\}$, and a distribution $D_{\mu}$ over $\mu$, such that if $n< O(d/\log(d))$; (i) for all $\mu$, the verifier will accept with probability $1-1/e^2$ when given as input a set $Z_{n+1}$ of $(n+1)$ i.i.d. samples from $N(\mu,I)$, (ii) but will reject any $(n,n+1)$ amplification procedure which does not modify the input samples with probability $1-1/e^2$, where the probability is with respect to the randomness in $\mu\leftarrow D_{\mu}$, the set $X_n$ and in any internal randomness of the amplifier. Note that by Definition \ref{def2} of an amplification procedure, this implies that there is no $(n,n+1)$ amplification procedure which does not modify the input samples for $n< O(d/\log(d))$. We choose $D_{\mu}$ to be $N(0,\sqrt{d}I)$. Let $\hat{\mu}_{-i}$ be the mean of the all except the $i$-th sample returned by the amplification procedure. The verifier performs the following tests, and accepts if all tests pass, and rejects otherwise---
\begin{enumerate}
\item $\forall \;i \in [n+1],\norm{x_{i}'-\mu}^2\le 15d$.
\item $\forall\; i \in [n+1], \ip{x_{i}'-\hat{\mu}_{-i}}{\mu-\hat{\mu}_{-i}} \ge d/(4n)$.
\end{enumerate}
We first show that for a sufficiently large constant $C$ and $n< O(d/\log(d))$, $(n+1)$ i.i.d. samples from $N(\mu, I)$ pass the above tests with probability at least $1-1/e^2$. As $\norm{x_{i}'-\mu}^2$ is a $\chi^2$ random variable with $d$ degrees of freedom, by the concentration bound for a $\chi^2$ random variable \eqref{eq:chisquare}, a true sample $x_{i}'$ passes the first test with failure probability $e^{-3d}$. Hence by a union bound, all samples $\{x_i, i \in [n+1]\}$ pass the first test with probability at least $1-de^{-3d}\ge 1- 1/e^3$.
Let $E$ denote the following event,
\begin{align*}
\forall\; i\in[n+1], \norm{\hat{\mu}_{n}-\mu}^2 \ge d/n-\sqrt{20d\log d}/n &\ge d/(2n),\\
\forall\; i\in[n+1],
\norm{\hat{\mu}_{n}-\mu}^2 \le d/n+\sqrt{20d\log d}/n &\le 2d/n.
\end{align*}
Note that $\hat{\mu}_{-i}\leftarrow N(\mu,\frac{I}{n})$. Hence, by using \eqref{eq:chisquare2} with $t=20\sqrt{\frac{\log d}{ d}}$, and a union bound over all $i\in [n+1]$,
\begin{align*}
\Pr[E] \ge 1-1/e^3.
\end{align*}
Note that as $x_{i}'\leftarrow N(\mu,I)$, for a fixed $\hat{\mu}_{-i}$, $\ip{x_{i}'-\hat{\mu}_{-i}}{\mu-\hat{\mu}_{-i}} \leftarrow N(\norm{\hat{\mu}_{-i}-\mu}^2, \norm{\hat{\mu}_{-i}-\mu}^2)$. Hence conditioned on $E$, by standard Gaussian tail bounds,
\begin{align*}
\Pr\Big[ \ip{x_{i}'-\hat{\mu}_{-i}}{\mu-\hat{\mu}_{-i}} \le d/(2n) - \sqrt{20d\log d/n} \Big] \le 1/n^2, \\
\implies \Pr\Big[ \ip{x_{i}'-\hat{\mu}_{-i}}{\mu-\hat{\mu}_{-i}} \le d/(4n) \Big] \le 1/n^2,
\end{align*}
where in the last step we use the fact that $n< \frac{d}{C\log d}$ for a large constant $C$. Therefore, conditioned on $E$, $\{x_i, i \in [n+1]\}$ pass the third test with probability at least $1-1/e^3$. Hence by a union bound, $(n+1)$ samples drawn from $N(\mu,I)$ will satisfy all 3 tests with failure probability at most $1/e^2$. Hence for any $\mu$, the verifier accepts $n+1$ i.i.d. samples from $N(\mu,I)$ with probability at least $1-1/e^2$.\\
We now show that for $n<\frac{d}{C\log d}$ and $\mu$ sampled from $ D_{\mu}=N(0,\sqrt{d}I)$, the verifier rejects any $(n,n+1)$ amplification procedure which does not modify the input samples with high probability over the randomness in $\mu$ and the set $X_n$. Let $D_{\mu|X_n}$ be the posterior distribution of $\mu$ conditioned on the set $X_n$. As in Proposition \ref{prop:gaussian_lower}, $D_{\mu|X_n}=N(\bar{\mu},\bar{\sigma}^2 I)$, where,
\begin{align*}
\bar{\mu} = \frac{n}{n+1/\sqrt{d}}\mu_0, \quad \bar{\sigma}^2 = \frac{1}{n+1/\sqrt{d}}.
\end{align*}
We will show that with probability $1-e^{-3d}$ over the randomness in the set $X_n$ received by the amplifier and with probability $1-1/e^2$ over $\mu\leftarrow D_{\mu|X_n}$ and any internal randomness of the amplifier, the amplifier cannot output a set $Z_{n+1}$ which contains the set $X_n$ as a subset and which is accepted by the verifier. To show this, we first claim that $\norm{\mu_0}\le 30d^{3/4}$ with probability $1-e^{d}$. Note that $\mu_0 \leftarrow N(\mu, \frac{I}{n})$, where $\mu\leftarrow N(0,\sqrt{d}I)$. By \eqref{eq:chisquare}, with probability at least $1-e^{-3d}$, $\norm{\mu}\le 15d^{3/4}$ and $\norm{\mu-\mu_0}\le 15\sqrt{d}$. Hence by the triangle inequality, $\norm{\mu_0}\le 30d^{3/4}$ with probability at least $1-e^{-3d}$. We now show that for sets $X_n$ such that $\norm{\mu_0}\le 30d^{3/4}$, $Z_{n+1}$ cannot pass the verifier with probability more than $1-e^2$ over the randomness in $\mu|X_n$. The proof consists of two cases, and the analysis of the cases is similar to the proof of Proposition \ref{prop:gaussian_lower}. Without loss of generality, assume that $Z_{n+1}=\{x_1',X_n\}$, hence $x_1'$ is the only sample not present in the set. We will show that either $x_1'$ or $\hat{\mu}_{-1}$ fail one of the three tests performed by the verifier with high probability.
\subsubsection*{Case 1: $\norm{x_{1}'-\bar{\mu}}^2 \ge 100d$.}
We show that the first test is not satisfied with high probability in this case. As $\mu|X_n \leftarrow N(\bar{\mu},\bar{\sigma}^2)$, hence by \eqref{eq:chisquare}, $ \norm{\mu-\bar{\mu}}^2 \le 15d/n $ with probability $1-e^{-3d}$. Therefore, if $\norm{x_{1}'-\bar{\mu}}^2 \ge 100d$, then with probability $e^{-3d}$,
\begin{align*}
\norm{x_{1}'-{\mu}}^2 \ge (\sqrt{100d} - \sqrt{15d/n})^2 > 15d,
\end{align*}
in which case the first test is not satisfied. Hence in the first case, the amplifier succeeds with probability at most $e^{-3d}$.
\subsubsection*{Case 2: $\norm{x_{1}'-\bar{\mu}}^2 < 100d$.}
Note that for the sample $x_1'$, $\mu_{-1}=\mu_0$ as the last $n$ samples are the same as the original set $X_n$. We now bound $\norm{\hat{\mu}_{-1}-\bar{\mu}}$ as follows,
\begin{align*}
\norm{\hat{\mu}_{-1}-\bar{\mu}} = \Big\| \mu_0-\frac{n}{n+1/\sqrt{d}}\mu_0 \Big\| \le \frac{\norm{\mu_0}}{n\sqrt{d}}\le \frac{30d^{1/4}}{n}.
\end{align*}
We now expand $\ip{x_1'-\hat{\mu}_{-1}}{\mu-\hat{\mu}_{-1}}$ in the third test as follows,
\begin{align*}
\ip{x_1'-\hat{\mu}_{-1}}{\mu-\hat{\mu}_{-1}} &= \ip{x_1'-\bar{\mu}}{\mu-\bar{\mu}} - \ip{\hat{\mu}_{-1}-\bar{\mu}}{\mu-\bar{\mu}} - \ip{x_1'-\bar{\mu}}{\hat{\mu}_{-1}-\bar{\mu}}+ \norm{\hat{\mu}_{-1}-\bar{\mu}}^2,\\
&\le \ip{x_1'-\bar{\mu}}{\mu-\bar{\mu}} - \ip{\hat{\mu}_{-1}-\bar{\mu}}{\mu-\bar{\mu}} + \norm{x_1'-\bar{\mu}} \norm{\hat{\mu}_{-1}-\bar{\mu}}+ \norm{\hat{\mu}_{-1}-\bar{\mu}}^2.
\end{align*}
Note that $\ip{\hat{\mu}_{-1}-\bar{\mu}}{{\mu}-\bar{\mu}}$ is distributed as $N(0,\bar{\sigma}^2\norm{\hat{\mu}_{-1}-\bar{\mu}}^2)$ and hence with probability $1-1/e^3$ it is at most $10\norm{\hat{\mu}_{-1}-\bar{\mu}}/\sqrt{n}$. Similarly, with probability $1-1/e^3$, $\ip{x_1'-\bar{\mu}}{\mu-\bar{\mu}}$ is at most $10\norm{x_1'-\bar{\mu}}/\sqrt{n}$. Therefore, with probability $1-2/e^3$,
\begin{align*}
\ip{x_1'-\hat{\mu}_{-1}}{\mu-\hat{\mu}_{-1}} &\le 10\norm{x_1'-\bar{\mu}}/\sqrt{n} + 10\norm{\hat{\mu}_{-1}-\bar{\mu}}/\sqrt{n} + \norm{x_1'-\bar{\mu}} \norm{\hat{\mu}_{-1}-\bar{\mu}}+ \norm{\hat{\mu}_{-1}-\bar{\mu}}^2,\\
&\le 100\sqrt{\frac{d}{n}} + 300\frac{d^{3/4}}{n^2} + 300\frac{d^{3/4}}{{n}} + 900\frac{\sqrt{d}}{n^2}\\
&\le 100\sqrt{\frac{d}{n}} + 1500\frac{d^{3/4}}{{n}}\\
&= 100\sqrt{\frac{n}{d}}\Big(\frac{d}{n}\Big) + \frac{1500}{d^{1/4}}\Big(\frac{d}{n}\Big).
\end{align*}
Hence for a sufficiently large constant $C$, $n<\frac{d}{C\log d}$ and $d$ sufficiently large, with probability $1-2/e^3$,
\begin{align*}
\ip{x_1'-\hat{\mu}_{-1}}{\mu-\hat{\mu}_{-1}} &\le \frac{d}{5n},
\end{align*}
which implies that the second test is not satisfied. Hence the amplifier succeeds in this case with probability at most $2/e^3$.
The overall success probability of the amplifier is the maximum success probability across the two cases, hence for sets $X_n$ such that the $\norm{\mu_0}\le 30d^{3/4}$, the verifier accepts the amplified set $Z_{n+1}$ with probability at most $2/e^3$. As $\Pr\Big[\norm{\mu_0}\le 30d^{3/4}\Big]\ge 1-e^{-3d}$, the overall success probability of the amplifier over the randomness in $\mu$, $X_n$ and any internal randomness of the amplifier is at most $1/e^2$.
\end{proof}
\subsection{Lower Bound}
In this section we prove the lower bound from Theorem \ref{thm:gaussian_full} and show that it is impossible to amplify beyond $O \left ( \frac{ n }{\sqrt d} \right )$ samples.
\begin{proposition}\label{prop:gaussian_lower}
Let $\mathcal{C}$ denote the class of $d-$dimensional Gaussian distributions $N\left(\mu, I\right)$ with unknown mean $\mu$. There is a fixed constant $c$ such that for all sufficiently large $d,n>0$, $\mathcal{C}$ does not admit an $\left(n, m\right)$ amplification procedure for $m\ge n+\frac{cn}{\sqrt{d}}$.
\end{proposition}
\begin{proof}
We prove the theorem in two parts. For a fixed constant $C$, we show that,
\begin{enumerate}
\item For $n< \sqrt{d}/C$, $\mathcal{C}$ does not admit an $(n,m)$ amplification procedure for $m>n$.
\item For $n\ge \sqrt{d}/C$, $\mathcal{C}$ does not admit an $(n,m)$ amplification procedure for $m\ge n + 1 + Cn/\sqrt{d}$.
\end{enumerate}
Note that these together imply the lower bound in Theorem \ref{prop:gaussian_lower} with $c=2C$. We begin with the proof of the first part of the theorem.
{\flushleft\emph{Part 1 of Theorem \ref{prop:gaussian_lower}:}} Note that it is sufficient to prove the theorem for $m=n+1$, as an amplification procedure for $m>n+1$ implies an amplification procedure for $m=n+1$ by discarding the residual samples. We define a verifier $v(Z_{n+1})$ for the distribution $N(\mu,I)$ which takes as input a set $Z_{n+1}$ of $(n+1)$ samples $\{x_i'\in \mathbb{R}^d, i \in [n+1]\}$, and a distribution $D_{\mu}$ over $\mu$, such that for a fixed constant $C$; (i) for all $\mu$, the verifier will accept with probability $1-1/e^2$ when given as input a set $Z_{n+1}$ of $(n+1)$ i.i.d. samples from $N(\mu,I)$, (ii) but will reject any $(n,n+1)$ amplification procedure for $n< \sqrt{d}/C$ with probability $1-1/e^2$, where the probability is with respect to the randomness in $\mu\leftarrow D_{\mu}$, the set $X_n$ and in any internal randomness of the amplifier. Note that by Definition \ref{def2} of an amplification procedure, this implies that there is no $(n,n+1)$ amplification procedure for $n< \sqrt{d}/C$. We choose $D_{\mu}$ to be $N(0,\sqrt{d}I)$. Let $\hat{\mu}_{n}$ be the mean of the first $n$ samples returned by the amplification procedure. The verifier performs the following three tests, accepts if all tests pass, and rejects otherwise---
\begin{enumerate}
\item $\norm{x_{n+1}'-\mu}^2\le 15d$.
\item $\Big|\norm{\hat{\mu}_{n}-\mu}^2-d/n\Big|\le 10\sqrt{d}/n$.
\item $\ip{x_{n+1}'-\hat{\mu}_{n}}{\mu-\hat{\mu}_{n}} \ge d/(4n)$.
\end{enumerate}
We first show that for a fixed constant $C$, $n< \sqrt{d}/C$ and any $\mu$, $(n+1)$ i.i.d. samples from $N(\mu, I)$ pass the above tests with probability $1-1/e^2$. We will use the following concentration bounds for a $\chi^2$ random variable $Z$ with $d$ degrees of freedom \cite{laurent2000adaptive,wainwright2015basic},
\begin{align}
\Pr\Big[Z - d \ge 2\sqrt{dt} + 2t\Big] &\le e^{-t},\; \forall \; t>0,\label{eq:chisquare}\\
\Pr\Big[|Z - d|\ge dt] &\le 2e^{-dt^2/8}, \; \forall \; t\in (0,1).\label{eq:chisquare2}
\end{align}
As $\norm{x_{n+1}'-\mu}^2$ is a $\chi^2$ random variable with $d$ degrees of freedom, by using \eqref{eq:chisquare} and setting $t=3d$, a true sample $x_{n+1}'$ passes the first test with failure probability $e^{-3d}$. Note that $\hat{\mu}_{n}\leftarrow N(\mu,\frac{I}{n})$. Hence by using \eqref{eq:chisquare2} and setting $t=10/\sqrt{d}$,
\begin{align*}
\Pr\Big[\Big|\norm{\hat{\mu}_{n}-\mu}^2-d/n\Big|>10\sqrt{d}/n\Big] \le 1/e^3.
\end{align*}
Hence $\hat{\mu}_{n}$ passes the second test with probability $1-1/e^3$. Conditioned on $\hat{\mu}_{n}$ passing the second test, we show that $x_{n+1}'$ passes the third test with probability $1-1/e^3$. If $\hat{\mu}_n$ passes the second test,
\begin{align*}
\norm{\hat{\mu}_{n}-\mu}^2 \ge d/n-10\sqrt{d}/n &\ge d/(2n),\\
\norm{\hat{\mu}_{n}-\mu}^2 \le d/n+10\sqrt{d}/n &\le 2d/n.
\end{align*}
Note that as $x_{n+1}'\leftarrow N(\mu,I)$, for a fixed $\hat{\mu}_{n}$, $\ip{x_{n+1}'-\hat{\mu}_{n}}{\mu-\hat{\mu}_{n}} \leftarrow N(\norm{\hat{\mu}_{n}-\mu}^2, \norm{\hat{\mu}_{n}-\mu}^2)$. Hence conditioned on passing the second test, by standard Gaussian tail bounds,
\begin{align*}
\Pr\Big[ \ip{x_{n+1}'-\hat{\mu}_{n}}{\mu-\hat{\mu}_{n}} \le d/(2n) - 10\sqrt{2d/n} \Big] \le 1/e^3, \\
\implies \Pr\Big[ \ip{x_{n+1}'-\hat{\mu}_{n}}{\mu-\hat{\mu}_{n}} \le d/(4n) \Big] \le 1/e^3,
\end{align*}
where in the last step $10\sqrt{2d/n}\le d/(4n)$ as $n< \sqrt{d}/C$. Hence by a union bound, $(n+1)$ samples drawn from $N(\mu,I)$ will satisfy all 3 tests with failure probability $e^{-3d}+2/e^3\le 1/e^2$. Hence for any $\mu$, the verifier accepts $(n+1)$ i.i.d. samples from $N(\mu,I)$ with probability $1-1/e^2$.
We now show that for $\mu$ sampled from $ D_{\mu}=N(0,\sqrt{d}I)$, the verifier rejects any $(n,n+1)$ amplification procedure for $n< \sqrt{d}/C$ with high probability over the randomness in $\mu$. Let $D_{\mu \mid X_n}$ be the posterior distribution of $\mu$ conditioned on the set $X_n$. We will show that for any set $X_n$ received by the amplifier, the amplified set $Z_{n+1}$ is accepted by the verifier with probability at most $1/e^2$ over $\mu\leftarrow D_{\mu \mid X_n}$. This implies that with probability $1-1/e^2$ over the randomness in $\mu\leftarrow D_{\mu}$, the set $X_n$ and any internal randomness in the amplifier, the amplifier cannot output a set $Z_{n+1}$ which is accepted by the verifier, completing the proof of the first part of Proposition \ref{prop:gaussian_lower}.
To show the above claim, we first find the posterior distribution $D_{\mu \mid X_n}$ of $\mu$ conditioned on the amplifier's set $X_n$. Let $\mu_0$ be the mean of the set $X_n$. By standard Bayesian analysis (see, for instance, \citep{murphy2007conjugate}), the posterior distribution $D_{\mu \mid X_n}=N(\bar{\mu},\bar{\sigma}^2 I)$, where,
\begin{align*}
\bar{\mu} = \frac{n}{n+1/\sqrt{d}}\mu_0, \quad \bar{\sigma}^2 = \frac{1}{n+1/\sqrt{d}}.
\end{align*}
We now break into three cases to show that with probability $1-1/e^2$ over $\mu \mid X_n$, the set $Z_{n+1}$ cannot satisfy all 3 tests.
\subsubsection*{Case 1: $\norm{x_{n+1}'-\bar{\mu}}^2 \ge 100d$.}
We show that the first test is not satisfied with high probability in this case. As $\mu\mid X_n \leftarrow N(\bar{\mu},\bar{\sigma}^2)$, hence by \eqref{eq:chisquare}, $ \norm{\mu-\bar{\mu}}^2 \le 15d/n $ with probability $1-e^{-3d}$. Therefore, if $\norm{x_{n+1}'-\bar{\mu}}^2 \ge 100d$, then with probability $e^{-d}$,
\begin{align*}
\norm{x_{n+1}'-{\mu}}^2 \ge (\sqrt{100d} - \sqrt{15d/n})^2 > 15d,
\end{align*}
in which case the first test is not satisfied. Hence in the first case, the amplifier succeeds with probability at most $e^{-3d}$.
\subsubsection*{Case 2: $\norm{\hat{\mu}_{n}-\bar{\mu}}^2 \ge 30\sqrt{d}/n$.}
We expand $\norm{\hat{\mu}_{n}-\mu}^2$ in the second test as follows,
\begin{align*}
\norm{\hat{\mu}_{n}-\mu}^2 &= \norm{\hat{\mu}_{n}-\overline{\mu} - (\mu - \overline{\mu})}^2\\
&=\norm{\hat{\mu}_{n}-\bar{\mu}}^2 -2 \ip{\hat{\mu}_{n}-\bar{\mu}}{{\mu}-\bar{\mu}} + \norm{\mu-\bar{\mu}}^2.
\end{align*}
By using \eqref{eq:chisquare2} and setting $t=10/\sqrt{d}$, with probability $1-1/e^3$,
\begin{align*}
\norm{\mu-\bar{\mu}}^2&\ge \frac{d}{n+1/\sqrt{d}}-\frac{10\sqrt{d}}{n+1/\sqrt{d}}\\
&\ge \Big(\frac{d}{n}\Big) \Big(1-\frac{1}{n\sqrt{d}}\Big) -\frac{10\sqrt{d}}{n}\\
&= d/n -\sqrt{d}/n^2 -10\sqrt{d}/n\\
&\ge d/n -12\sqrt{d}/n.
\end{align*}
As $\mu \mid X_n \leftarrow N(\bar{\mu},\bar{\sigma}^2)$, $\ip{\hat{\mu}_{n}-\bar{\mu}}{{\mu}-\bar{\mu}}$ is distributed as $N(0,\bar{\sigma}^2\norm{\hat{\mu}_{n}-\bar{\mu}}^2)$. Hence with probability $1-1/e^3$, $\ip{\hat{\mu}_{n}-\bar{\mu}}{{\mu}-\bar{\mu}}\le 10\norm{\hat{\mu}_{n}-\bar{\mu}}/\sqrt{n+1/\sqrt{d}}\le 10\norm{\hat{\mu}_{n}-\bar{\mu}}/\sqrt{n}$. Therefore, with probability $1-2/e^3$,
\begin{align*}
\norm{\hat{\mu}_{n}-\mu}^2 &\ge \norm{\hat{\mu}_{n}-\bar{\mu}}^2 -(20/\sqrt{n}) \norm{\hat{\mu}_{n}-\bar{\mu}}+ d/n-12\sqrt{d}/n,\\
&= \norm{\hat{\mu}_{n}-\bar{\mu}}^2\Big(1-\frac{20}{\norm{\hat{\mu}_{n}-\bar{\mu}}\sqrt{n}}\Big) + d/n-12\sqrt{d}/n,\\
&\ge 0.9\norm{\hat{\mu}_{n}-\bar{\mu}}^2 + d/n-12\sqrt{d}/n,\\
&\ge d/n + 15\sqrt{d}/n,
\end{align*}
which implies that the second test is not satisfied. Hence in this case, the amplifier succeeds with probability at most $2/e^3$.
\subsubsection*{Case 3: $\norm{x_{n+1}'-\bar{\mu}}^2 < 100d$ and $\norm{\hat{\mu}_{n}-\bar{\mu}}^2 < 30\sqrt{d}/n$.}
We expand $\ip{x_{n+1}'-\hat{\mu}_{n}}{\mu-\hat{\mu}_{n}}$ in the third test as follows,
\begin{align*}
\ip{x_{n+1}'-\hat{\mu}_{n}}{\mu-\hat{\mu}_{n}} &= \ip{x_{n+1}'-\bar{\mu}}{\mu-\bar{\mu}} - \ip{\hat{\mu}_n-\bar{\mu}}{\mu-\bar{\mu}} - \ip{x_{n+1}'-\bar{\mu}}{\hat{\mu}_n-\bar{\mu}}+ \norm{\hat{\mu}_n-\bar{\mu}}^2,\\
&\le \ip{x_{n+1}'-\bar{\mu}}{\mu-\bar{\mu}} - \ip{\hat{\mu}_n-\bar{\mu}}{\mu-\bar{\mu}} + \norm{x_{n+1}'-\bar{\mu}} \norm{\hat{\mu}_n-\bar{\mu}}+ \norm{\hat{\mu}_n-\bar{\mu}}^2.
\end{align*}
As in the previous case, $\ip{\hat{\mu}_{n}-\bar{\mu}}{{\mu}-\bar{\mu}}$ is distributed as $N(0,\bar{\sigma}^2\norm{\hat{\mu}_{n}-\bar{\mu}}^2)$ and hence with probability $1-1/e^3$ it is at most $10\norm{\hat{\mu}_{n}-\bar{\mu}}/\sqrt{n}$. Similarly, with probability $1-1/e^3$, $\ip{x_{n+1}'-\bar{\mu}}{\mu-\bar{\mu}}$ is at most $10\norm{x_{n+1}'-\bar{\mu}}/\sqrt{n}$. Therefore, with probability $1-2/e^3$,
\begin{align*}
\ip{x_{n+1}'-\hat{\mu}_{n}}{\mu-\hat{\mu}_{n}} &\le 10\norm{x_{n+1}'-\bar{\mu}}/\sqrt{n} + 10\norm{\hat{\mu}_{n}-\bar{\mu}}/\sqrt{n} + \norm{x_{n+1}'-\bar{\mu}} \norm{\hat{\mu}_n-\bar{\mu}}+ \norm{\hat{\mu}_n-\bar{\mu}}^2,\\
&\le 100\sqrt{\frac{d}{n}} + 100\frac{d^{1/4}}{n} + 60\frac{d^{3/4}}{\sqrt{n}} + 30\frac{\sqrt{d}}{n}\\
&\le 300\frac{d^{3/4}}{\sqrt{n}}=300\Big(\frac{d}{n}\Big)\Big(\frac{\sqrt{n}}{d^{1/4}}\Big).
\end{align*}
Hence for sufficiently large constant $C$ and $n<\sqrt{d}/C$, with probability $1-4/e^3$,
\begin{align*}
\ip{x_{n+1}'-\hat{\mu}_{n}}{\mu-\hat{\mu}_{n}} < \frac{d}{5n},
\end{align*}
which implies that the third test is not satisfied. Hence the amplifier succeeds in this case with probability at most $2/e^3$.\\
The overall probability of the amplifier succeeding is the maximum probability of success across the three cases, hence the verifier accepts $Z_{n+1}$ with probability at most $1/e^2$ for $n<\sqrt{d}/C$. This completes the proof of the first part of the theorem.\\
{\flushleft \emph{Part 2 of Proposition \ref{prop:gaussian_lower}:}} In this case, the verifier just uses an analog of the second test of the previous verifier and tests the following,
\begin{align}
\Big|\norm{\hat{\mu}_{m-1}-\mu}^2-d/(m-1)\Big|\le 10\sqrt{d}/(m-1).\label{eq:simple_test}
\end{align}
By the same argument as in part 1, $m$ samples drawn i.i.d. from $N(\mu,I)$ will be accepted by the verifier with probability $1-1/e^2$. We now show that for $\mu\leftarrow N(0,\sqrt{d}I)$, the verifier will reject the samples produced by a $(n,m)$ amplification scheme for $m=n+1+100n/\sqrt{d}$ with probability $1-1/e^2$ over the randomness in $\mu$. As before, this implies that there is no $(n,m)$ amplification scheme for $m=n+1+100n/\sqrt{d}$.
The posterior distribution $D_{\mu \mid X_n}$ of the mean given the samples $X_n$ is the same as in the previous case. We show that any set $Z_m$ returned by the amplifier fails \eqref{eq:simple_test} with probability $1-1/e^2$ over the randomness in $\mu \mid X_n$. By the same analysis as in case 2 of the previous part, we get that with probability at least $1-2/e^3$,
\begin{align*}
\norm{\hat{\mu}_{m-1}-\mu}^2 \ge \norm{\hat{\mu}_{n}-\bar{\mu}}^2-20{\norm{\hat{\mu}_{n}-\bar{\mu}}/\sqrt{n}} + d/n-\sqrt{d}/n^2-10\sqrt{d}/n.
\end{align*}
Note that $\norm{\hat{\mu}_{n}-\bar{\mu}}^2-20{\norm{\hat{\mu}_{n}-\bar{\mu}}/\sqrt{n}}\ge -100/n$. Therefore, with probability at least $1-2/e^3$,
\begin{align*}
\norm{\hat{\mu}_{m-1}-\mu}^2 \ge -100/n + d/n-\sqrt{d}/n^2-10\sqrt{d}/n \ge d/n-20\sqrt{d}/n.
\end{align*}
To pass \eqref{eq:simple_test}, $\norm{\hat{\mu}_{m-1}-\mu}^2 \le d/(m-1) + 10\sqrt{d}/(m-1)$. Therefore, if an amplifier passes the test with probability greater than $1-2/e^3$ over the randomness in $\mu \mid X_n$ for $m> n + 1+ 100n/\sqrt{d}$, then,
\begin{align*}
&d/n-20\sqrt{d}/n \le \norm{\hat{\mu}_{m-1}-\mu}^2 \le d/(m-1) + 10\sqrt{d}/(m-1), \\
\implies& d/n-20\sqrt{d}/n \le d/(m-1) + 10\sqrt{d}/(m-1),\\
\implies& d/n-20\sqrt{d}/n \le d/(n+100n/\sqrt{d}) + 10\sqrt{d}/(n+100n/\sqrt{d}),\\
\implies& d/n-20\sqrt{d}/n \le d/n(1+100/\sqrt{d})^{-1} + 10\sqrt{d}/n(1+100/\sqrt{d})^{-1},\\
\implies& d/n-20\sqrt{d}/n \le d/n(1-50/\sqrt{d}) + 10\sqrt{d}/n(1-50/\sqrt{d}), \\
\implies& -20\sqrt{d}/n \le -40\sqrt{d}/n -1000/n,\\
\implies& -20\sqrt{d}/n \le -30\sqrt{d}/n ,
\end{align*}
which is a contradiction. Hence for $m> n + 1+ 100n/\sqrt{d}$, every $(n,m)$ amplifier is rejected by the verifier with probability greater than $1-1/e^2$ over the randomness in $\mu$, the set $X_n$, and any internal randomness of the amplifier.
\end{proof}
\subsection{Lower Bound}
In this section we prove the lower bound from Theorem \ref{thm:gaussian_full} and show that it is impossible to amplify beyond $O \left ( \frac{ n }{\sqrt d} \right )$ samples.
\begin{proposition}\label{prop:gaussian_lower}
Let $\mathcal{C}$ denote the class of $d-$dimensional Gaussian distributions $N\left(\mu, I\right)$ with unknown mean $\mu$. There is a fixed constant $c$ such that for all sufficiently large $d,n>0$, $\mathcal{C}$ does not admit an $\left(n, m\right)$ amplification procedure for $m\ge n+\frac{cn}{\sqrt{d}}$.
\end{proposition}
\begin{proof}
Note that it is sufficient to prove the theorem for $m=n+cn/\sqrt{d}$ for a fixed constant $c$, as an amplification procedure for $m>n+cn/\sqrt{d}$ implies an amplification procedure for $m=n+cn/\sqrt{d}$ by discarding the residual samples. To prove the theorem for $m=n+cn/\sqrt{d}$, we will define a distribution $D_{\mu}$ over $\mu$ and a verifier $v(Z_{m})$ for the distribution $N(\mu,I)$ which takes as input a set $Z_{m}$ of $m$ samples, such that: (i) for all $\mu$, the verifier $v(Z_{m})$ will accept with probability $1-1/e^2$ when given as input a set $Z_{m}$ of $m$ i.i.d. samples from $N(\mu,I)$, (ii) but will reject any $(n,m)$ amplification procedure for $m=n+cn/\sqrt{d}$ with probability $1-1/e^2$, where the probability is with respect to the randomness in $\mu\leftarrow D_{\mu}$, the set $X_n$ and in any internal randomness of the amplifier. Note that by Definition \ref{def2} of an amplification procedure, this implies that there is no $(n,m)$ amplification procedure for $m=n+cn/\sqrt{d}$. \\
We now define the distribution $D_{\mu}$ and the verifier $v(Z_{m})$. We choose $D_{\mu}$ to be $N(0,\sqrt{d}I)$. Let $\hat{\mu}_{m}$ be the mean of the samples $Z_{m}$ returned by the amplification procedure. The verifier $v(Z_{m})$ performs the following test, accepts if $\hat{\mu}_{m}$ passes the test, and rejects otherwise---
\begin{align}
\Big|\norm{\hat{\mu}_{m}-\mu}^2-d/m\Big|\le 10\sqrt{d}/m.\label{eq:simple_test}
\end{align}
We first show that $m$ i.i.d. samples from $N(\mu, I)$ pass the above test with probability $1-1/e^2$. We will use the following concentration bounds for a $\chi^2$ random variable $Z$ with $d$ degrees of freedom \cite{laurent2000adaptive,wainwright2015basic},
\begin{align}
\Pr\Big[Z - d \ge 2\sqrt{dt} + 2t\Big] &\le e^{-t},\; \forall \; t>0,\label{eq:chisquare}\\
\Pr\Big[|Z - d|\ge dt] &\le 2e^{-dt^2/8}, \; \forall \; t\in (0,1).\label{eq:chisquare2}
\end{align}
Note that $\hat{\mu}_{m}\leftarrow N(\mu,\frac{I}{m})$ for $m$ i.i.d. samples from $N(\mu, I)$. Hence by using \eqref{eq:chisquare2} and setting $t=10/\sqrt{d}$,
\begin{align*}
\Pr\Big[\Big|\norm{\hat{\mu}_{m}-\mu}^2-d/m\Big|>10\sqrt{d}/m\Big] \le 1/e^3.
\end{align*}
Hence $m$ i.i.d. samples from $N(\mu, I)$ pass the test with probability at least $1-1/e^2$.
We now show that for $\mu$ sampled from $ D_{\mu}=N(0,\sqrt{d}I)$, the verifier rejects any $(n,m)$ amplification procedure for $m=n+ cn/\sqrt{d}$ with high probability over the randomness in $\mu$. Let $D_{\mu \mid X_n}$ be the posterior distribution of $\mu$ conditioned on the set $X_n$. We will show that for any set $X_n$ received by the amplifier, the amplified set $Z_{m}$ is accepted by the verifier with probability at most $1/e^2$ over $\mu\leftarrow D_{\mu \mid X_n}$. This implies that with probability $1-1/e^2$ over the randomness in $\mu\leftarrow D_{\mu}$, the set $X_n$ and any internal randomness in the amplifier, the amplifier cannot output a set $Z_{m}$ which is accepted by the verifier, completing the proof of Proposition \ref{prop:gaussian_lower}.
To show the above claim, we first find the posterior distribution $D_{\mu \mid X_n}$ of $\mu$ conditioned on the amplifier's set $X_n$. Let $\mu_0$ be the mean of the set $X_n$. By standard Bayesian analysis (see, for instance, \citep{murphy2007conjugate}), the posterior distribution $D_{\mu \mid X_n}=N(\bar{\mu},\bar{\sigma}^2 I)$, where,
\begin{align*}
\bar{\mu} = \frac{n}{n+1/\sqrt{d}}\mu_0, \quad \bar{\sigma}^2 = \frac{1}{n+1/\sqrt{d}}.
\end{align*}
We show that any set $Z_m$ returned by the amplifier for $m=n+100n/\sqrt{d}$ fails the test \eqref{eq:simple_test} with probability $1-1/e^2$ over the randomness in $\mu \mid X_n$. We expand $\norm{\hat{\mu}_{m}-\mu}^2$ in the test as follows,
\begin{align*}
\norm{\hat{\mu}_{m}-\mu}^2 &= \norm{\hat{\mu}_{m}-\overline{\mu} - (\mu - \overline{\mu})}^2\\
&=\norm{\hat{\mu}_{m}-\bar{\mu}}^2 -2 \ip{\hat{\mu}_{m}-\bar{\mu}}{{\mu}-\bar{\mu}} + \norm{\mu-\bar{\mu}}^2.
\end{align*}
By using \eqref{eq:chisquare2} and setting $t=10/\sqrt{d}$, with probability $1-1/e^3$,
\begin{align*}
\norm{\mu-\bar{\mu}}^2&\ge \frac{d}{n+1/\sqrt{d}}-\frac{10\sqrt{d}}{n+1/\sqrt{d}}\\
&\ge \Big(\frac{d}{n}\Big) \Big(1-\frac{1}{n\sqrt{d}}\Big) -\frac{10\sqrt{d}}{n}\\
&= d/n -\sqrt{d}/n^2 -10\sqrt{d}/n\\
&\ge d/n -12\sqrt{d}/n.
\end{align*}
As $\mu \mid X_n \leftarrow N(\bar{\mu},\bar{\sigma}^2 I)$, $\ip{\hat{\mu}_{m}-\bar{\mu}}{{\mu}-\bar{\mu}}$ is distributed as $N(0,\bar{\sigma}^2\norm{\hat{\mu}_{m}-\bar{\mu}}^2)$. Hence with probability at least $1-1/e^3$, $\ip{\hat{\mu}_{m}-\bar{\mu}}{{\mu}-\bar{\mu}}\le 10\norm{\hat{\mu}_{m}-\bar{\mu}}/\sqrt{n+1/\sqrt{d}}\le 10\norm{\hat{\mu}_{m}-\bar{\mu}}/\sqrt{n}$. Therefore, with probability at least $1-2/e^3$,
\begin{align*}
\norm{\hat{\mu}_{m}-\mu}^2 \ge \norm{\hat{\mu}_{m}-\bar{\mu}}^2 -(20/\sqrt{n}) \norm{\hat{\mu}_{m}-\bar{\mu}}+ d/n-12\sqrt{d}/n.
\end{align*}
We claim that $\norm{\hat{\mu}_{m}-\bar{\mu}}^2-20{\norm{\hat{\mu}_{m}-\bar{\mu}}/\sqrt{n}}\ge-100/n$. To verify, note that $\norm{\hat{\mu}_{m}-\bar{\mu}}^2-20{\norm{\hat{\mu}_{m}-\bar{\mu}}/\sqrt{n}}+100/n$ is a non-negative quadratic function in $\norm{\hat{\mu}_{m}-\bar{\mu}}$. Therefore, with probability at least $1-2/e^3$,
\begin{align*}
\norm{\hat{\mu}_{m}-\mu}^2 \ge -100/n + d/n-12\sqrt{d}/n \ge d/n-20\sqrt{d}/n.
\end{align*}
To pass \eqref{eq:simple_test}, the amplified set must satisfy $\norm{\hat{\mu}_{m}-\mu}^2 \le d/m + 10\sqrt{d}/m$. Therefore, if an amplifier passes the test with probability greater than $2/e^3$ over the randomness in $\mu \mid X_n$ for $m= n + 100n/\sqrt{d}$, then on some realization both bounds hold simultaneously, so,
\begin{align*}
&d/n-20\sqrt{d}/n \le \norm{\hat{\mu}_{m}-\mu}^2 \le d/m + 10\sqrt{d}/m, \\
\implies& d/n-20\sqrt{d}/n \le d/m + 10\sqrt{d}/m,\\
\implies& d/n-20\sqrt{d}/n \le d/(n+100n/\sqrt{d}) + 10\sqrt{d}/(n+100n/\sqrt{d}),\\
\implies& d/n-20\sqrt{d}/n \le d/n(1+100/\sqrt{d})^{-1} + 10\sqrt{d}/n(1+100/\sqrt{d})^{-1},\\
\implies& d/n-20\sqrt{d}/n \le d/n(1-50/\sqrt{d}) + 10\sqrt{d}/n(1-50/\sqrt{d}), \\
\implies& -20\sqrt{d}/n \le -40\sqrt{d}/n -500/n,\\
\implies& -20\sqrt{d}/n \le -30\sqrt{d}/n ,
\end{align*}
which is a contradiction. Hence for $m= n + 100n/\sqrt{d}$, every $(n,m)$ amplifier is rejected by the verifier with probability greater than $1-1/e^2$ over the randomness in $\mu$, the set $X_n$, and any internal randomness of the amplifier.
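For concreteness, the contradiction can be verified numerically at, e.g., $d=10^6$ and $n=1000$ (illustrative values only):

```python
import math

def lower_bound(d, n):
    """The verifier's high-probability lower bound: d/n − 20√d/n."""
    return d / n - 20 * math.sqrt(d) / n

def pass_threshold(d, m):
    """What passing the test requires: d/m + 10√d/m."""
    return d / m + 10 * math.sqrt(d) / m

d, n = 10**6, 1000
m = n + 100 * n / math.sqrt(d)  # m = n + 100n/√d
# The lower bound exceeds the passing threshold, so the test must fail:
assert lower_bound(d, n) > pass_threshold(d, m)
```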
\end{proof}
\section{Learning, Testing, and Sample Amplification}
How much do you need to know about a distribution, $D$, in order to produce a dataset of size $m$ that is indistinguishable from a set of independent draws from $D$? Do you need to \emph{learn} $D$ to nontrivial accuracy in some natural metric, or does it suffice to have access to a smaller dataset of size $n<m$ drawn from $D$, and then ``amplify'' this dataset to create one of size $m$? In this work we formalize this question, and show that for two natural classes of distributions (discrete distributions with bounded support, and $d$-dimensional Gaussians), non-trivial data ``amplification'' is possible even in the regime in which you are given too few samples to learn.
From a theoretical perspective, this question is related to the meta-question underlying work on distributional property testing and estimation: \emph{To answer basic hypothesis testing or property estimation questions regarding a distribution $D$, to what extent must one first learn $D$, and can such questions be reliably answered given a relatively modest amount of data drawn from $D$?} Much of the excitement surrounding distributional property testing and estimation stems from the fact that, for many such testing and estimation questions, a surprisingly small set of samples from $D$ suffices---significantly fewer samples than would be required to learn $D$. These surprising answers have been revealed over the past two decades. The question posed in our work fits with this body of work, though instead of asking how much data is required to perform a hypothesis test, we are asking how much data is required to \emph{fool} an optimal hypothesis test---in this case an ``identity tester'' which knows $D$ and is trying to distinguish a set of $m$ independent samples drawn from $D$, versus $m$ datapoints constructed in some other fashion.
From a more practical perspective, the question we consider also seems timely. Deep neural network based systems, trained on a set of samples, can be designed to perform many tasks, including testing whether a given input was drawn from a distribution in question (i.e. ``discrimination''), as well as sampling (often via the popular Generative Adversarial Network (GAN) approach). There are many relevant questions regarding the extent to which current systems are successful in accomplishing these tasks, and the question of how to quantify the performance of these systems is still largely open. In this work, however, we ask a different question: Suppose a system \emph{can} accomplish such a task---what would that actually mean? If a system can produce a dataset that is indistinguishable from a set of $m$ independent draws from a distribution, $D$, does that mean the system knows $D$, or are there other ways of accomplishing this task?
\subsection{Formal Problem Definition}
We begin by formally stating two essentially equivalent definitions of sample amplification and then provide an illustrative example. Our first definition states that a function $f$ mapping a set of $n$ datapoints to a set of $m$ datapoints is a valid amplification procedure for a class of distributions $\mathcal{C}$, if for all $D \in \mathcal{C}$, letting $X_n$ denote the random variable corresponding to $n$ independent draws from $D$, the distribution of $f(X_n)$ has small total variation distance\footnote{We overload the notation $D_{TV}(\cdot, \cdot)$ for total variation distance, and also use it when the argument is a random variable instead of the distribution of the random variable, whenever convenient.} to the distribution defined by $m$ independent draws from $D$.
\begin{definition}\label{def1}
A class $\mathcal{C}$ of distributions over domain $S$ admits an $(n,m)$ \emph{amplification procedure} if there exists a (possibly randomized) function $f_{\mathcal{C},n,m} : S^n \rightarrow S^m$, mapping a dataset of size $n$ to a dataset of size $m$, such that for every distribution $D \in \mathcal{C}$, $$D_{TV}\left(f_{\mathcal{C},n,m}(X_n), D^{m} \right) \le 1/3,$$ where $X_n$ is the random variable denoting $n$ independent draws from $D$, and $D^m$ denotes the distribution of $m$ independent draws from $D$. If no such function $f_{\mathcal{C},n,m}$ exists, we say that $\mathcal{C}$ \emph{does not admit} an $(n,m)$ amplification scheme.
\end{definition}
Crucially, in the above definition we are considering the random variable $f(X_n)$ whose randomness comes from the randomness of $X_n$, as well as any randomness in the function $f$ itself. For example, every class of distributions admits an $(n,n)$ amplification procedure, corresponding to taking the function $f$ to be the identity function. If, instead, our definition had required that the \emph{conditional} distribution of $f(X_n)$ given $X_n$ be close to $D^m$, then the above definition would simply correspond to asking how well we can \emph{learn} $D$, given the $n$ samples denoted by $X_n$.
Definition~\ref{def1} is also equivalent, up to the choice of constant $1/3$ in the bound on total variation distance, to the following intuitive formulation of sample amplification as a game between two parties: the ``amplifier'' who will produce a dataset of size $m$, and a ``verifier'' who knows $D$ and will either accept or reject that dataset. The verifier's protocol, however, must satisfy the condition that given $m$ independent draws from the true distribution in question, the verifier must accept with probability at least $3/4$, where the probability is with respect to both the randomness of the set of samples, and any internal randomness of the verifier. We briefly describe this formulation, as it parallels the pseudo-randomness framework, and a number of natural directions for future work---such as if the verifier is computationally bounded, or only has sample access to $D$---are easier to articulate in this setting.
\begin{definition}\label{def2}
The \emph{sample amplification} game consists of two parties, an \emph{amplifier} corresponding to a function $f_{n,m}: S^n \rightarrow S^m$ which maps a set of $n$ datapoints in domain $S$ to a set of $m$ datapoints, and a \emph{verifier} corresponding to a function $v:S^m \rightarrow \{ACCEPT, REJECT\}$. We say that a verifier $v$ is \emph{valid} for distribution $D$ if, when given as input a set of $m$ independent draws from $D$, the verifier accepts with probability at least $3/4$, where the probability is over both the randomness of the draws and any internal randomness of $v$: $$\Pr_{X_m \leftarrow D^m}[v(X_m) = ACCEPT] \ge 3/4.$$ A class $\mathcal{C}$ of distributions over domain $S$ admits an $(n,m)$ \emph{amplification procedure} if, and only if, there is an amplifier function $f_{\mathcal{C},n,m}$ that, for every $D\in \mathcal{C}$, can ``win'' the game with probability at least $2/3$; namely, such that for every $D \in \mathcal{C}$ and valid verifier $v_D$ for $D$ $$\Pr_{X_n \leftarrow D^n}[v_D(f_{\mathcal{C},n,m}(X_n))=ACCEPT] \ge 2/3,$$ where the probability is with respect to the randomness of the choice of the $n$ samples, $X_n,$ and any internal randomness in the amplifier and verifier, $f$ and $v$.
\end{definition}
As was the case in Definition~\ref{def1}, in the above definition it is essential that the verifier only observes the output $f(X_n)$ produced by the amplifier. If the verifier sees the amplified samples $f(X_n)$ in addition to the original data $X_n$, then the above definition also becomes equivalent to asking how well the class of distributions in question can be \emph{learned} given $n$ samples. \\
\vspace{-10pt}
\begin{figure}[h]
\centering
\includegraphics[scale=0.45]{game.png}
\caption{Sample amplification can be viewed as a game between an ``amplifier'' that obtains $n$ independent draws from an unknown distribution $D$ and must output a set of $m > n$ samples, and a ``verifier'' that receives the $m$ samples and must ACCEPT or REJECT. The verifier knows the true distribution $D$ and is computationally unbounded but does not know the amplifier's training set (the set of $n$ input samples). An amplification scheme is successful if, for every verifier, with probability at least $2/3$ the verifier will accept the output of the amplifier. [In the setting illustrated above, observant readers might recognize that one of the images in the ``Output'' set is a painting which was sold in October, 2018 for over \$400k by Christie's auction house, and which was ``painted'' by a Generative Adversarial Network (GAN)~\cite{cohn_2018}].}
\end{figure}
\vspace{-15pt}
\begin{example}
Consider the class of distributions $\mathcal{C}$ corresponding to i.i.d. flips of a coin with unknown bias $p$. We claim that there are constants $c' \ge c > 0$ such that $(n, n+cn)$ sample amplification is possible, but $(n, n+c'n)$ amplification is not possible. To see this, consider the amplification strategy corresponding to returning a random permutation of the original samples together with $cn$ additional tosses of a coin with bias $\hat{p}$, where $\hat{p}$ is the empirical bias of the $n$ original samples. Because of the random permutation, the total variation distance between these samples and $n+cn$ i.i.d. tosses of the $p$-biased coin is a function of only the distribution of the total number of heads. Hence this is equivalent to the distance between $Binomial(n+cn,p)$ and the distribution corresponding to first drawing $h \leftarrow Binomial(n,p)$, and then returning $h+Binomial(cn,h/n)$. It is not hard to show that the total variation distance between these two distributions can be bounded by any small constant by taking $c$ to be a sufficiently small constant. Intuitively, this is because both distributions have the same mean, they are both unimodal, and have variances that differ by a small constant factor for small constant $c$. For the lower bound, to see that amplification by more than a constant factor is impossible, note that if it were possible, then one could learn $p$ to error $o(1/\sqrt{n})$, with small constant probability of failure, by first amplifying the original samples and then returning the empirical estimate of $p$ based on the amplified samples.
In the above setting, this constant factor amplification is not surprising, since the amplifier can learn the distribution to non-trivial accuracy. It is worth observing, however, that the above amplification scheme corresponding to a $(n, n+1)$ amplifier will return a set of $n+1$ samples, whose total variation distance from $n+1$ i.i.d. samples is only $O(1/n)$; this is despite the fact that the amplifier can only learn the distribution to total variation distance $\Theta(1/\sqrt{n}).$
\end{example}
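For small parameters the total variation distance in this example can be computed exactly, since both distributions are supported on $\{0,\ldots,n+cn\}$. A sketch with illustrative values $n=100$, $cn=10$:

```python
import numpy as np
from scipy.stats import binom

n, extra, p = 100, 10, 0.5  # extra = cn additional tosses
support = np.arange(n + extra + 1)

# True distribution of #heads: Binomial(n + cn, p).
true_pmf = binom.pmf(support, n + extra, p)

# Amplified distribution: draw h ~ Binomial(n, p), then return h + Binomial(cn, h/n).
amp_pmf = np.zeros_like(true_pmf)
for h in range(n + 1):
    amp_pmf[h:h + extra + 1] += binom.pmf(h, n, p) * binom.pmf(
        np.arange(extra + 1), extra, h / n
    )

tv = 0.5 * np.abs(true_pmf - amp_pmf).sum()  # well below the 1/3 threshold
```

Both distributions have mean $(n+cn)p$, and their variances differ only by a small constant factor, so the distance stays small for small $c$.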
\vspace{-10pt}
\subsection{Summary of Results}
Our main results provide tight bounds on the extent to which sample amplification is possible for two fundamental settings, unstructured discrete distributions, and $d$-dimensional Gaussians with unknown mean and fixed covariance. Our first result is for discrete distributions with support size at most $k$. In this case, we show that sample amplification is possible given only $O(\sqrt{k})$ samples from the distribution, and tightly characterize the extent to which amplification is possible.\footnote{This addresses a variant of an open problem posed in the Frontiers in Distribution Testing workshop at FOCS 2017 (\url{https://sublinear.info/index.php?title=Open_Problems:85}).} Note that learning the distribution to small total variation distance requires $\Theta(k)$ samples in this case.
\begin{theorem}\label{thm:discrete-full}
Let $\mathcal{C}$ denote the class of discrete distributions with support size at most $k$. For sufficiently large $k,$ and $m = n+O\left(\frac{n}{\sqrt{k}}\right)$, $\mathcal{C}$ admits an $\left(n, m\right)$ amplification procedure.
This bound is tight up to constants, i.e., there is a constant $c$ such that for every sufficiently large $k$, $\mathcal{C}$ does not admit an $\left(n, n+\frac{cn}{\sqrt{k}}\right)$ amplification procedure.
\end{theorem}
Our amplification procedure for discrete distributions is extremely simple: roughly, we generate additional samples from the empirical distribution of the initial set of $n$ samples, and then randomly shuffle together the original and the new samples. For technical reasons, we do not exactly sample from the empirical distribution but from a suitable modification which facilitates the analysis.
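A minimal sketch of this scheme, omitting the modification of the empirical distribution used in the analysis:

```python
import random

def amplify_discrete(samples, m):
    """Naive (n, m) amplifier for discrete distributions: append m − n draws
    from the empirical distribution of the input, then shuffle.
    (Sketch only: the paper's procedure samples from a suitable modification
    of the empirical distribution, not reproduced here.)"""
    n = len(samples)
    out = list(samples) + [random.choice(samples) for _ in range(m - n)]
    random.shuffle(out)
    return out

random.seed(0)
data = [random.randrange(50) for _ in range(40)]  # n = 40 draws, support size ≤ 50
amplified = amplify_discrete(data, 44)            # m = 44
```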
Our second result concerns $d$-dimensional Gaussian distributions with unknown mean and fixed covariance. We show that we can amplify even with only $O(\sqrt{d})$ samples from the distribution. In contrast, learning to small constant total variation distance requires $\Theta(d)$ samples. Unlike the discrete setting, however, we do not get optimal amplification in this setting by generating additional samples from the empirical distribution of the initial set of $n$ samples, and then randomly shuffling together the original and new samples. Moreover, we show a lower bound proving that, for $n=o(d/\log d)$ there is no $(n,n+1)$ amplification procedure which always returns a superset of the original $n$ samples. Curiously, however, the procedure that generates new samples from the empirical distribution, and then randomly shuffles together the new and old samples, is able to amplify at $n= \Omega(d/\log d)$, even though learning is not possible until $n=\Theta(d)$. Additionally, as $n$ goes from $10 \frac{d}{\log d}$ to $1000 \frac{d}{\log d}$, this amplification procedure goes from being unable to amplify at all, to being able to amplify by nearly $\sqrt{d}$ samples. This is formalized in the following proposition.
\begin{proposition}\label{prop:gaussian_modify_full}
Let $\mathcal{C}$ denote the class of $d-$dimensional Gaussian distributions with unknown mean $\mu$ and covariance $\Sigma$. There is an absolute constant, $c$, such that for sufficiently large $d$, if $n \le \frac{cd}{\log d},$ there is no $(n,n+1)$ amplification procedure that always returns a superset of the original $n$ points.
On the other hand, there is a constant $c'$ such that for any $\epsilon$, for $n = \frac{d}{\epsilon \log d}$, and for sufficiently large $d$, there is an $\left(n,n+c'n^{\frac{1}{2}-9\epsilon}\right)$ amplification protocol for $\mathcal{C}$ that returns a superset of the original $n$ samples.
\end{proposition}
The above proposition suggests that to be able to amplify at input size $n = o(d/\log d)$, one must modify the input samples. A naive way to modify the input samples is to discard all the original $n$ samples and generate $m$ new samples from the distribution $N(\hat{\mu},\Sigma)$, where $\hat{\mu}$ is the empirical mean of the original set $X_n$. However, this does not even give an $(n, n)$ amplification procedure for any value of $n$. To achieve optimal amplification in the Gaussian case, the amplifier first computes the empirical mean $\hat{\mu}$ of the original set $X_n$, and then draws $m-n$ new samples from $N(\hat{\mu},\Sigma)$. We then shift the original $n$ samples to ``decorrelate'' the original set and the new samples; intuitively, this step hides the fact that the $m-n$ new samples were generated based on the empirical mean of the original samples. The final set of returned samples consists of the shifted versions of the $n$ original samples along with the $m-n$ freshly generated ones. This procedure gives $(n, n+O(\frac{n}{\sqrt{d}}))$ amplification, and we also show that this amplification is tight up to constant factors.
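The overall pipeline can be sketched as follows. Note that the decorrelating shift is only described informally above; the shift in this sketch is a schematic placeholder of our own, not the paper's actual transformation:

```python
import numpy as np

def amplify_gaussian(X, m, rng):
    """Schematic (n, m) amplifier for N(μ, I): draw m − n fresh samples from
    N(μ̂, I), then shift the originals. The shift used here (a common
    O(1/√n) jitter) is a placeholder for the paper's decorrelating
    transformation, which is specified in the analysis."""
    n, d = X.shape
    mu_hat = X.mean(axis=0)
    fresh = mu_hat + rng.normal(size=(m - n, d))
    shifted = X + rng.normal(size=d) / np.sqrt(n)  # placeholder decorrelation step
    out = np.vstack([shifted, fresh])
    rng.shuffle(out)  # shuffle rows so fresh and shifted samples are interleaved
    return out

rng = np.random.default_rng(1)
d, n, m = 100, 50, 55
X = rng.normal(size=(n, d))  # n samples from N(0, I)
Z = amplify_gaussian(X, m, rng)
```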
\begin{theorem}\label{thm:gaussian_full}
Let $\mathcal{C}$ denote the class of $d-$dimensional Gaussian distributions $N\left(\mu, \Sigma\right)$ with unknown mean $\mu$ and fixed covariance $\Sigma$. For all $d,n>0$ and $m = n+O\left(\frac{n}{\sqrt{d}}\right)$, $\mathcal{C}$ admits an $\left(n, m\right)$ amplification procedure.
This bound is tight up to constants, i.e., there is a fixed constant $c$ such that for all $d,n>0$, $\mathcal{C}$ does not admit an $\left(n, m\right)$ amplification procedure for $m\ge n+\frac{cn}{\sqrt{d}}$.
\end{theorem}
\vspace{-10pt}
\subsection{Open Directions}\label{sec:open}
From a technical perspective, there are a number of natural open directions for future work, including establishing tight bounds on amplification for other natural distribution classes, such as $d$-dimensional Gaussians with unknown mean and covariance. More conceptually, it seems worth getting a broader understanding of the range of potential amplification algorithms, and the settings to which each can be applied.
\vspace{-5pt}
\paragraph{Weaker or More Powerful Verifiers?}
Our results showing that non-trivial amplification is possible even in the regime in which learning is not possible, rely on the modeling assumption that the verifier gets no information about the amplifier's training set, $X_n$ (the set of $n$ i.i.d. samples). If this dataset is revealed to the verifier, then the question of amplification is equivalent to learning. This prompts the question about a middle ground, where the verifier has some information about the set $X_n$, but does not see the entire set; this middle ground also seems the most practically relevant (e.g. how much do I need to know about a GAN's training set to decide whether it actually understands a distribution of images?).
{\centering
\begin{quote}\emph{How does the power of the amplifier vary depending on how much information the verifier has about $X_n$? If the verifier is given a uniformly random subsample of $X_n$ of size $n' \ll n,$ how does the amount of possible amplification scale with $n'$?}\end{quote}}
Rather than considering how to increase the power of the verifier, as the above question asks, it might also be worth considering the consequences of decreasing either the computational power, or information theoretic power of the verifier.
{\centering \begin{quote} \emph{If the verifier, instead of knowing distribution $D$, receives only a set of independent draws from $D$, how much more power does this give the amplifier? Alternately, if the verifier is constrained to be an efficiently computable function, does this provide additional power to the amplifier in any natural settings?}\end{quote}}
\vspace{-15pt}
\paragraph{Better Amplifiers in the Discrete Setting.}
In the discrete distribution setting, our amplification results are tight (to constant factors) in a worst-case sense, and our amplifier essentially just returns the original $n$ samples, together with additional samples drawn from the empirical distribution of those $n$ samples, and then randomly permutes the order of these datapoints. This raises the question: \emph{In the case of discrete distributions, is there any benefit to considering more sophisticated amplification schemes?} Below we sketch one example motivating a more clever amplification approach.
\begin{example}\label{example:good-turing}
Consider obtaining $n$ samples corresponding to independent draws from a discrete distribution that puts probability $p \gg 1/n$ on a single domain element, and with probability $1-p$ draws a sample uniformly from some arbitrarily large discrete domain. If $p<2/3,$ then the amplification approach that adds samples from the empirical distribution of the data to the original set of samples will fail. Indeed, with probability at least $1/3$ it will introduce a second sample of one of the ``rare'' elements, and such samples can be rejected by the verifier. For this setting, the optimal amplifier would always introduce extra samples corresponding to the element of probability $p$.
\end{example}
The above example motivates a more sophisticated amplification strategy for the discrete distribution setting. Approaches such as Good-Turing frequency estimation, or more modern variants of it, adjust the empirical probabilities to more accurately reflect the true probabilities (see e.g.~\cite{good1953population,orlitsky2003always,valiant2016instance}). Indeed, in a setting such as Example~\ref{example:good-turing}, based on the fact that only one domain element is observed more than once, it is easy to conclude that the total probability mass of all the elements observed just once, is likely at most $O(1/n)$, which implies that a successful amplification scheme cannot duplicate any of them. While inserting samples from a Good-Turing adjusted empirical distribution will not improve the amplification in a worst-case sense for discrete distributions with a bounded support size, such schemes seem \emph{strictly} better than the schemes we currently analyze. The following question outlines one potential avenue for quantifying this, along the lines of the recent work on ``instance optimal'' distribution testing and estimation (see e.g.~\cite{acharya2012competitive,orlitsky2015competitive,valiant2016instance}):
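The Good--Turing estimate invoked here estimates the total probability mass of elements seen exactly $r$ times by $(r+1)N_{r+1}/n$, where $N_j$ is the number of distinct elements seen $j$ times. A minimal sketch:

```python
from collections import Counter

def good_turing_mass(samples, r):
    """Good–Turing estimate of the total probability mass of elements
    observed exactly r times: (r + 1) · N_{r+1} / n."""
    n = len(samples)
    freq_of_freq = Counter(Counter(samples).values())  # N_j for each j
    return (r + 1) * freq_of_freq.get(r + 1, 0) / n

# One heavy element plus several rare ones, as in the example above:
data = ["heavy"] * 6 + ["r1", "r2", "r3", "r4"]
unseen_mass = good_turing_mass(data, 0)     # N1/n = 4/10: unseen mass is large
singleton_mass = good_turing_mass(data, 1)  # 2·N2/n = 0: singletons carry little mass
```

In a setting like Example~\ref{example:good-turing}, the absence of doubletons drives the estimated singleton mass toward $O(1/n)$, which is exactly the reasoning sketched above.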
{\centering
\begin{quote}\emph{Is there an ``instance optimal'' amplification scheme, which, for every distribution, $D$, amplifies as well as could be hoped? Specifically, to what extent is there an amplification scheme which performs nearly as well as a hypothetical optimal scheme that knows distribution $D$ up to relabeling/permuting the domain?}\end{quote}}
\vspace{-15pt}
\paragraph{Potential Applications of Sample Amplification.}
An interesting future direction is to examine if sample amplification is a useful primitive in settings where the samples are given as input to downstream analysis. Amplification does not add any new information to the original data, but it could still make the original information more easily accessible to certain types of algorithms which interact with the data in limited ways. For example, many popular algorithms and heuristics are not information theoretically optimal, despite their widespread use. It seems worth examining if amplification schemes could improve the statistical efficiency of these commonly used methods. Since the amplified samples are ``good'' in an information theoretic sense (they are indistinguishable from true samples), the performance of downstream algorithms \emph{cannot} be significantly hurt. Below, we provide a toy example of a setting where amplification improves the accuracy of a standard downstream estimator.
\begin{example}\label{example:amphelps}
Given labeled examples, $(x_1,y_1),\ldots,(x_n,y_n)$ drawn from a distribution, $D$, with $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$, a natural quantity to estimate is the variance in $y$ that is \emph{not} explainable as a linear function of $x$: $$\inf_{\theta \in \mathbb{R}^d} \mathbb{E}_{(x,y)\sim D}[(\theta^T x - y)^2].$$ The standard unbiased estimator for this quantity is the training error of the least-squares linear model, scaled by a factor of $\frac{1}{n-d}.$ This scaling factor makes this estimate unbiased, although the variance is large when $n$ is not much larger than $d$. Figure~\ref{fig:learnability} shows the expected squared error of this estimator on raw samples, and on $(n,n+2)$ amplified samples, in the case where $x_i \sim N(0,I_d),$ and $y_i = \theta^T x_i + \eta$ for some model $\|\theta\|_2=1$ and independent noise $\eta\sim N(0,\frac{1}{4})$---hence the true value for the ``unexplainable variance'' is $1/4$. Here, the amplification procedure draws two additional datapoints, with each $x$ drawn from the isotropic Gaussian centered at the empirical mean, and labeled according to the learned least-squares regression model $\hat{\theta}$, with independent noise of variance $5/n$ times the empirical estimate of the unexplained variance.
\end{example}
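A sketch of the estimator on raw (unamplified) samples, with illustrative parameters: for a full-rank design with $n > d$, the residual sum of squares divided by $n-d$ is unbiased for the noise variance.

```python
import numpy as np

def unexplained_variance_estimate(X, y):
    """Unbiased estimate of inf_θ E[(θᵀx − y)²]: least-squares RSS / (n − d)."""
    n, d = X.shape
    _, rss, _, _ = np.linalg.lstsq(X, y, rcond=None)  # rss = residual sum of squares
    return rss[0] / (n - d)

rng = np.random.default_rng(2)
d, n, trials = 10, 200, 300
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)                  # ‖θ‖₂ = 1 as in the example
estimates = []
for _ in range(trials):
    X = rng.normal(size=(n, d))
    y = X @ theta + 0.5 * rng.normal(size=n)    # noise variance 1/4
    estimates.append(unexplained_variance_estimate(X, y))
mean_estimate = float(np.mean(estimates))       # concentrates near 0.25
```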
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-\baselineskip}
\centering
\includegraphics[width=0.42\textwidth]{learnabilityFig.png}
\captionof{figure}{\small{Toy example illustrating potential benefit of feeding amplified samples into a commonly used estimator. See Example~\ref{example:amphelps} for a description of the specific setup.
}}
\label{fig:learnability}
\vspace{-\baselineskip}
\end{wrapfigure}
One potential limitation to applications of amplification is that our existing results show that it is only possible to amplify the sample size by sub-constant factors (for the settings considered). If the algorithm using the amplified data is limited, however, then we could hope for much larger amplification factors. This is reminiscent of the open problem in the previous section on whether larger amplification is possible against weaker classes of verifiers.
In practice, there is already growing interest in using generative models for data augmentation to improve classification accuracy \citep{antoniou2017data,frid2018gan,wang2018low,yi2019generative}. Given our results which show that amplification is significantly easier than learning, such pipelines might be more effective than one would initially suspect. It is also worth thinking more generally about how to design modular data analysis or learning pipelines, where a first component of the pipeline could be an amplifier tailored to the specific data distribution, followed by more generic learning algorithms that do not attempt to leverage structural properties of the data distribution. Such modular pipelines might prove to be significantly easier to develop and maintain, in practice.
\vspace{-5pt}
\paragraph{Implications for Generative Modelling.}
The sample amplification framework has some connections to generative modelling. Generative models such as GANs aim to produce new samples from an unknown distribution $D$ given a training set of size $n$ drawn from $D$. It is tempting to try to relate the amplification setting to GANs by viewing the amplifier and verifier as analogs of the generator and discriminator, respectively. This is \emph{not} an accurate correspondence: For GANs, the discriminator typically evaluates examples individually (or in small batches), and often has seen the same training set as the generator, whereas our verifier explicitly evaluates a full set of samples without knowledge of the training samples.
The samples generated by a generative model are often evaluated, either manually by humans or algorithmically. This evaluation is usually aimed at understanding the quality of the output samples \emph{conditioned on the training data}---if some of the output samples are copies of the training set, this is not satisfactory---which again corresponds to learning rather than sample amplification.
Despite these differences, some of the ways that generative models are actually used do closely mirror the amplification setting. For example, when generative models are used to augment a training set that is used to learn a classifier, both the generated samples and the original dataset are fed into the learning algorithm. The learning algorithm does not necessarily distinguish between ``new'' and ``old'' samples. In this setting, it does make sense to evaluate the set of ``new'' and ``old'' samples together, as a single set of ``amplified'' samples, rather than evaluating the ``new'' samples conditioned on the ``old'' ones. This corresponds exactly to our amplification formulation. Given that amplification is often easier than learning, it might be worthwhile to develop techniques that explicitly try to amplify, rather than learn.
A second, distinct connection between amplification and GANs relates to the question of how humans (or algorithms) can evaluate the samples produced by a GAN. The gap between learning (evaluating the generated samples conditioned on the training set) and amplification (evaluating the generated samples without knowledge of the training set) suggests that in order to truly evaluate the samples produced by a GAN, we would need to closely inspect the training data used by the GAN. This is obviously impractical in many settings, and motivates some of the questions described above concerning how much access a verifier needs to the input examples in order for there to be a gap between learning and amplifying.
\subsection{Related Work}
The question of deciding whether a set of samples consists of independent draws from a specified distribution is one of the fundamental problems at the core of distributional property testing. Interest in this problem was sparked by the seminal work of Goldreich and Ron~\cite{GR00}, who considered the specific problem of determining whether a set of samples was drawn from a uniform distribution of support size $k$. This prompted a line of work on the slightly more general problem of ``identity testing'': deciding whether a set of samples was drawn from a specified distribution $D$, versus a distribution with distance at least $\epsilon$ from $D$~\cite{batu2001testing,paninski2008coincidence,valiant2017automatic,diakonikolas2016new}. Beyond the specific question of identity testing, there is an enormous body of work on other distributional property testing questions, including the ``tolerant'' version of identity testing, as well as the multi-distribution analogs (see e.g.~\cite{batu2013testing,valiant2011testing,chan2014optimal,orlitsky2015competitive,bhattacharya2015testing,levi2013testing,diakonikolas2016new}). In the majority of these works, the assumption is that the given samples consist of independent draws from some fixed distribution, and the common theme in these results is that such tests can typically be accomplished with far less data than would be required to learn the distribution. While the identity testing problem is clearly related to the amplification problem we consider, these appear to be quite distinct problems. In particular, in the identity testing setting, the main technical challenge is understanding which statistics of a set of i.i.d. samples are capable of distinguishing samples drawn from the prescribed distribution from samples drawn from any distribution that is at least $\epsilon$-far from it.
In contrast, in the amplification setting, the core question is how the amplifier can leverage a set of independent samples from $D$ to generate a larger set of (presumably) non-independent samples that can successfully masquerade as a set of independent samples drawn from $D$; of course, the catch is that the amplifier must do this in the data regime in which it is impossible for it to learn much about $D$.
Within this line of work on distributional property testing and estimation, there is also a recent thread of work on designing estimators for specific properties (such as entropy, or distance to uniformity), whose performance given $n$ independent draws from the distribution in question is comparable to the expected performance of a naive ``plugin'' estimator (which returns the property value of the empirical distribution) based on $m > n$ independent draws~\cite{valiant2016instance,yi2018data}. The term ``data amplification'' has been applied to this line of work, although it is a different problem from the one we consider. In particular, we are considering the extent to which the samples can be used to create a larger set of samples; the work on property estimation is asking to what extent one can craft superior estimators whose performance is comparable to the performance that a more basic estimator would achieve with a larger sample size.
The recent work on \emph{sampling correctors}~\cite{canonne2018sampling} also considers the question of how to produce a ``good'' set of draws from a given distribution. That work assumes access to independent draws from a distribution, $D$, which is close to having some desired structural property, such as monotonicity or uniformity, and considers how to ``correct'' or ``improve'' those samples to produce a set of samples that appear to have been drawn from a different distribution $D'$ that possesses the desired property (or is closer to possessing the property). Part of that work also considers the question of whether such a protocol requires access to additional randomness.
Our formulation of sample amplification as a game between an amplifier and a verifier closely resembles the setup for \emph{pseudo-randomness} (see~\cite{vadhan2012pseudorandomness} for a relatively recent survey). There, the pseudo-random generator takes a set of $n$ independent fair coin flips, and outputs a longer string of $m > n$ outcomes. The verifier's job is to distinguish the output of the generator from a set of $m$ independent tosses of the fair coin (i.e.\ truly random bits). In contrast to our setting, in pseudo-randomness both players know that the distribution in question is the uniform distribution; the catch is that the generator does not have access to randomness, and the verifier is computationally bounded. Beyond the superficial similarity in setup, we are not aware of any deeper connections between our notion of amplification and pseudorandomness.
Finally, it is also worth mentioning the work of Viola on the complexity of sampling from distributions~\cite{viola2012complexity}. That work also considers the challenge of generating samples from a specified distribution, though the problem is posed as the computational challenge of producing samples from a specified distribution given access to uniformly random bits. One of the punchlines of that work is that there are distributions, such as the distribution over pairs $(x,y)$ in which $x$ is a uniformly random length-$n$ string and $y=\mathrm{parity}(x)$, for which small circuits can sample from the distribution, yet no small circuit can compute $y=\mathrm{parity}(x)$ given $x$. A different way of phrasing that punchline is that there are distributions that are easy to sample from, for which it is much harder to sample from their conditional distributions (e.g.\ in the parity case, sampling $(x,y)$ given $x$ is hard).
\vspace{-5pt}
\section{Algorithms and Proof Overview}
In this section, we describe our algorithms for sample amplification for discrete and Gaussian distributions. We also give an intuitive overview of the proofs of both the upper and lower bounds.
\vspace{-5pt}
\subsection{Discrete Distributions with Bounded Support}
We begin by providing some intuition for amplification in the discrete distribution setting by considering the simple case where the distribution in question is a uniform distribution over an unknown support. We then describe how our more general amplification algorithm extends this intuition.
\input{section_2_discrete}
\subsection{Gaussian Distributions with Unknown Mean and Fixed Covariance}
\input{section_2_gaussian}
\section{Proofs: Gaussian with Unknown Mean and Fixed Covariance}
\input{upper_bound_gaussian.tex}
\input{lower_bound_gaussian}
\input{upper_bound_gaussian_nonmod.tex}
\input{lower_bound_gaussian_nonmod.tex}
\input{upper_bound_discrete.tex}
\input{lower_bound_discrete}
\bibliographystyle{plain}
\subsubsection{General Lower Bound for Gaussians}
We show that there is no $(n,m)$ amplification procedure for Gaussian distributions with unknown mean when $m\ge n+\frac{cn}{\sqrt{d}}$, where $c$ is a fixed constant.
We define a verifier such that for $\mu\leftarrow N(0,\sqrt{d}I)$ and $m=n+\frac{cn}{\sqrt{d}}$, $m$ true samples from $N(\mu,I)$ are accepted by the verifier with high probability over the randomness in the samples, but $m$ samples generated by any $(n,m)$ amplification scheme are rejected by the verifier with high probability over the randomness in the samples and $\mu$.
The verifier for this setting evaluates the inner product and the distance of the sample from the mean as in Section \ref{sec:overview_lower_empirical}, and in addition, also checks $\norm{\mu- \hat{\mu}_{-m}}^2$. In total, it evaluates the following,
\begin{enumerate}
\item $\ip{x_{m}'-\hat{\mu}_{-m}}{\mu-\hat{\mu}_{-m}}$,
\item $\norm{x_{m}'-\hat{\mu}_{-m}}$,
\item $\norm{\mu- \hat{\mu}_{-m}}$.
\end{enumerate}
Note that unlike Section \ref{sec:overview_lower_empirical}, we show that the verifier only needs to do the above tests for a single index $i$---chosen to be $m$ above---instead of for all $i\in[m]$. We now explain why these tests are sufficient to prove the lower bound. Note that for $m$ true samples drawn from $N(\mu,I)$, $\norm{\mu- \hat{\mu}_{-m}}^2 \approx \frac{d}{m-1}$. Also, the squared distance $\norm{\mu- \hat{\mu}}^2$ of the mean $\hat{\mu}$ of the original set $X_n$ from $\mu$ is concentrated around $\frac{d}{n}$. Using this, for $m=n+1+\frac{cn}{\sqrt{d}}$, we can show that no algorithm can find a $\hat{\mu}_{-m}$ which satisfies $\norm{\mu- \hat{\mu}_{-m}}^2 \approx \frac{d}{m-1}\ll \frac{d}{n}$ with decent probability over $\mu\leftarrow N(0,\sqrt{d}I)$. This is because such an algorithm could be used to find the true mean $\mu$ with much smaller error than $\frac{d}{n}$, which is not possible with $n$ samples. This argument works for $m=n+1+\frac{cn}{\sqrt{d}}$, but it does not rule out $(n,n+1)$ amplification schemes for $n\ll \sqrt{d}$. To show that $(n,n+1)$ amplification is not possible for $n\ll \sqrt{d}$, we use the inner product test along with the test for $\norm{\mu- \hat{\mu}_{-m}}^2$ to distinguish between $(n+1)$ true samples from $N(\mu,I)$ and those produced by an $(n,n+1)$ amplification scheme, with high probability over $\mu$ and the samples. The analysis is similar to that in Section \ref{sec:overview_lower_empirical}.
\fi
\section{Proofs: Discrete Distributions with Bounded Support}
\subsection{Upper Bound}
In this section we prove the upper bound from Theorem \ref{thm:discrete-full}. The algorithm itself is presented in Algorithm \ref{alg:discrete}.
For clarity of writing, we assume that the number of input samples is $4n$, instead of $n$.
\algdiscrete
\begin{proposition}
Let $\mathcal{C}$ denote the class of discrete distributions with support size at most $k$. For sufficiently large $k$ and $m = 4n+O\left(\frac{n}{\sqrt{k}}\right)$, $\mathcal{C}$ admits a $\left(4n, m\right)$ amplification procedure.
\end{proposition}
\begin{proof}
To avoid dependencies between the counts of different elements, we first prove our results in a Poissonized setting, and then, in lemma \ref{lem:discrete-ub2}, we describe how to use the amplifier for the Poissonized setting to obtain an amplifier for the original multinomial setting. Let $D \in \mathcal{C}$ be an unknown probability distribution over $[k]$, and let $p_i$ denote the probability mass associated with $i \in [k]$. Throughout the proof, we use the random variable $X_q$ to denote $q$ independent samples from $D$, where $q$ may itself be a random variable. Suppose we are given $N = N_1 + N_2$ independent samples from $D$, denoted by $X_{N_1} \text{ and } X_{N_2}$, where $N_1$ and $N_2$ are drawn independently from $\text{Poisson}(n)$. We show how to amplify them to $\tilde{M} = N + R$ samples, denoted by $Z_{\tilde{M}}$, such that $D_{TV}(Z_{\tilde{M}}, X_{M})$ is small, where $M \leftarrow \text{Poisson}(2n+r)$.
Our amplifying procedure involves estimating the probability of each element using $X_{N_1}$, generating $R$ independent samples using these estimates, and randomly shuffling these samples with $X_{N_2}$.
Let $u_i$ be the count of element $i$ in $X_{N_1}$ and $y_i$ the count of $i$ in $X_{N_2}$, noting that both are distributed as $\text{Poisson}(np_i)$. The amplification procedure proceeds through the following steps:
\begin{enumerate}
\item Estimate the frequency $\hat{p}_i$ of each element using $u_i$, that is, $\hat{p}_i = \frac{u_i}{n}$.
\item Draw $\hat{z}_i \leftarrow \text{Poisson}(r\hat{p}_i) $ additional samples of element $i$ for all $i \in [k]$.
\item Append these generated samples to $X_{N_2}$ to get $Z_{N_2 + R}$.
\item Randomly permute the elements of $Z_{N_2 + R}$, and append them to $X_{N_1}$ to get $Z_{\tilde{M}}$.
\end{enumerate}
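For concreteness, the four steps above can be sketched in code. This is a hypothetical NumPy sketch (the function name and signature are ours, not from the paper), intended only to make the procedure explicit:

```python
import numpy as np

def amplify_poissonized(x_n1, x_n2, n, r, k, rng):
    """Sketch of the four-step Poissonized amplifier (illustrative only)."""
    # Step 1: estimate the frequency of each element from the first batch.
    u = np.bincount(x_n1, minlength=k)   # u_i ~ Poisson(n * p_i)
    p_hat = u / n
    # Step 2: draw hat{z}_i ~ Poisson(r * hat{p}_i) extra copies of element i.
    z_hat = rng.poisson(r * p_hat)
    generated = np.repeat(np.arange(k), z_hat)
    # Step 3: append the generated samples to the second batch.
    z = np.concatenate([x_n2, generated])
    # Step 4: randomly permute these N2 + R samples, then prepend the first batch.
    rng.shuffle(z)
    return np.concatenate([x_n1, z])
```

The output has length $N_1 + N_2 + R$, where $R$ is approximately $\text{Poisson}(r)$-distributed.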
We first show that $Z_{\tilde{M}}$ is close in total variation distance to $\text{Poisson}(2n+r)$ samples generated from $D$. We will prove this by showing that with high probability over the choice of $X_{N_1}$, the distribution of $Z_{N_2+R}$ is close to $\text{Poisson}(n+r)$ samples generated from $D$. After this, we can use lemma \ref{lem:coupling} to show that appending $Z_{N_2 + R}$ to the samples in $X_{N_1}$ results in a sequence with low total variation distance to $X_{M}$. Since our amplification procedure randomly permutes the last $N_2 + R$ elements, we can argue this using only the count of each element. Recall that $y_i$ is the count of element $i$ in $X_{N_2}$, and $\hat{z}_i$ is the number of additional samples of element $i$ added by our amplification procedure. Let $z_i \leftarrow \text{Poisson}(rp_i)$, and let $v_i=y_i+z_i$ and $\hat{v}_i=y_i+\hat{z}_i$. Here, $v_i$ denotes the count of element $i$ in $\text{Poisson}(n+r)$ samples drawn from $D$, and $\hat{v}_i$ denotes the corresponding count in samples generated using our amplification procedure. We use $P_v$ to denote the distribution associated with random variable $v$.
\begin{lemma}\label{lem:discrete1}
For $r\le n\epsilon^{1.5}/(4\sqrt{k})$, with probability $1-\epsilon$ over the randomness in $\{u_i,i\in [k]\}$,
\begin{align*}
d_{TV} \left (\prod_{i=1}^{k}{v_i}, \prod_{i=1}^{k}{\hat{v}_i} \right) \le \epsilon/2,
\end{align*}
where $\prod$ refers to the product distribution.
\end{lemma}
\begin{proof}
We partition the support $[k]$ into two sets. Let $S=\{i:p_i\ge \epsilon/(2nk)\}$ and $S^c= [k]\backslash S$. Let $|S|=k'$. Without loss of generality, assume that $S={\{i:1\le i \le k'\}}$ and $S^c={\{i: k'+1\le i \le k\}}$. We will separately bound the contributions of the variables in the sets $S$ and $S^c$ to the total variation distance. For the first set $S$, we will upper bound $\sum_{i=1}^{k'} D_{KL}(v_i\parallel \hat{v}_i)$, and then use Pinsker's inequality to bound the total variation distance. For the second set $S^c$, we will directly bound $\sum_{i=k'+1}^{k} d_{TV}(v_i, \hat{v}_i)$. All our bounds will hold with high probability over the randomness in the counts $\{u_i, i \in [k]\}$.
We first bound the total variation distance for the variables in the first set $S$. Note that because the sum of two Poisson random variables is a Poisson random variable, $v_i$ is distributed as $\text{Poisson}(np_i+rp_i)$ and $\hat{v}_i$ is distributed as $\text{Poisson}(np_i +ru_i /n)$. We will use the following expression for the KL divergence $D_{KL}(P\parallel Q)$ between two Poisson distributions $P$ and $Q$ with means $\lambda_1$ and $\lambda_2$ respectively---
\begin{align}
D_{KL}(P\parallel Q) = \lambda_1 \log \left( \frac{\lambda_1}{\lambda_2} \right) + \lambda_2-\lambda_1.
\end{align}
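As a quick numerical sanity check (illustrative only, not part of the proof), this closed form can be compared against direct summation over the Poisson pmfs:

```python
import math

def kl_closed_form(l1, l2):
    # The displayed formula: l1 * log(l1 / l2) + l2 - l1.
    return l1 * math.log(l1 / l2) + l2 - l1

def kl_direct(l1, l2, terms=200):
    # Direct summation of p(x) * log(p(x) / q(x)) over the Poisson pmfs,
    # with log-probabilities computed via lgamma for numerical stability.
    total = 0.0
    for x in range(terms):
        log_p = -l1 + x * math.log(l1) - math.lgamma(x + 1)
        log_q = -l2 + x * math.log(l2) - math.lgamma(x + 1)
        total += math.exp(log_p) * (log_p - log_q)
    return total
```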
Using this expression, we can write the KL divergence between the distributions of $v_i$ and $\hat{v}_i$ as follows,
\begin{align*}
D_{KL}({v_i}\parallel {\hat{v}_i}) = p_i (n+r) \log \left( \frac{p_i (n+r)}{p_i n + ru_i/n} \right) + (ru_i/n-rp_i).
\end{align*}
Let $\delta_i = u_i-np_i$. We can rewrite the above expression as follows,
\begin{align*}
D_{KL}({v_i}\parallel {\hat{v}_i}) &= p_i (n+r) \log \left( \frac{p_i (n+r)}{p_i (n+r) + r\delta_i/n} \right) + r\delta_i/n,\\
&= p_i (n+r) \log \left( \frac{1}{1 + {r\delta_i}/({np_i(n+r)})} \right) + r\delta_i/n.
\end{align*}
Note that $\log(1+x)\ge x-2x^2$ for $x\ge -0.8$. Since $\delta_i\ge -np_i$, we have ${r\delta_i}/({np_i(n+r)})\ge -0.8$ for $r\le n$. Therefore,
\begin{align}
p_i (n+r) \log \left( \frac{1}{1 + {r\delta_i}/({np_i(n+r)})} \right) &\le -r\delta_i/n+\frac{2r^2\delta_i^2}{n^2p_i(n+r)},\nonumber\\
\implies D_{KL}(v_i\parallel \hat{v}_i) &\le \frac{2r^2\delta_i^2}{n^2p_i(n+r)},\nonumber\\
\implies \sum_{i=1}^{k'} D_{KL}(v_i\parallel \hat{v}_i) &\le \frac{2r^2}{n^2}\sum_{i=1}^{k'}\frac{\delta_i^2}{np_i}.\label{eq:discrete1}
\end{align}
We will now bound $\sum_{i=1}^{k'}\frac{\delta_i^2}{np_i}$. Since a $\text{Poisson}(\lambda)$ random variable has variance $\lambda$, and $\delta_i=u_i-np_i$ where $u_i\leftarrow \text{Poisson}(np_i)$, we have
\begin{align*}
\mathbb{E}\left[ \sum_{i=1}^{k'}\frac{\delta_i^2}{np_i} \right] = k'.
\end{align*}
Also, the fourth central moment of a $\text{Poisson}(\lambda)$ random variable is $\lambda(1+3\lambda)$, hence
\begin{align*}
\text{Var}[\delta_i^2] &= \mathbb{E} \left [\delta_i^4 \right ]-\mathbb{E} \left [\delta_i^2 \right ]^2,\\
&= np_i(1+3np_i)-(np_i)^2=np_i(1+2np_i),\\
\implies \text{Var}\left[ \sum_{i=1}^{k'}\frac{\delta_i^2}{np_i} \right] &= \sum_{i=1}^{k'} \frac{1+2np_i}{np_i}.
\end{align*}
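The fourth-central-moment identity $\mathbb{E}[(X-\lambda)^4]=\lambda(1+3\lambda)$ for $X\sim\text{Poisson}(\lambda)$ used in this computation can be verified by direct summation over the Poisson pmf; the following is an illustrative numerical check only:

```python
import math

def poisson_pmf(lam, x):
    # pmf of Poisson(lam) at x, computed in log-space for stability.
    return math.exp(-lam + x * math.log(lam) - math.lgamma(x + 1))

def fourth_central_moment(lam, terms=400):
    # E[(X - lam)^4] for X ~ Poisson(lam), by direct summation.
    return sum(poisson_pmf(lam, x) * (x - lam) ** 4 for x in range(terms))
```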
Since $p_i \ge {\epsilon}/{(2nk)}$ for $i\in S$ and $k'\le k$, we have
\begin{align*}
\text{Var}\left[ \sum_{i=1}^{k'}\frac{\delta_i^2}{np_i} \right] \le 2k^2/\epsilon+2k\le 4k^2/\epsilon.
\end{align*}
Hence by Chebyshev's inequality,
\begin{align}
\Pr\left[ \sum_{i=1}^{k'}\frac{\delta_i^2}{np_i}-k' \ge 4k/{\epsilon} \right] &\le \epsilon/4,\nonumber\\
\implies \Pr\left[ \sum_{i=1}^{k'}\frac{\delta_i^2}{np_i} \ge 4k/{\epsilon} \right] &\le \epsilon/4.\label{eq:discrete2}
\end{align}
Let $E_1$ be the event that $\sum_{i=1}^{k'}\frac{\delta_i^2}{np_i} \le 4k/{\epsilon}$. By \eqref{eq:discrete2}, $\Pr(E_1)\ge 1-\epsilon/4$. Conditioned on the event $E_1$ and using \eqref{eq:discrete1}, we can bound the KL divergence as follows,
\begin{align*}
D_{KL}\infdivx[\Big]{\prod_{i\in S}{v_i}}{ \prod_{i\in S}{\hat{v}_i} }= \sum_{i=1}^{k'} D_{KL}({v_i}\parallel {\hat{v}_i}) &\le \frac{8r^2k}{n^2\epsilon}.
\end{align*}
Hence for $r\le n\epsilon^{1.5}/(4\sqrt{k})$ and conditioned on the event $E_1$,
\begin{align*}
D_{KL}\infdivx[\Big]{\prod_{i\in S}{v_i}}{\prod_{i\in S}{\hat{v}_i}} \le \epsilon^2/2.
\end{align*}
Hence using Pinsker's inequality, conditioned on the event $E_1$,
\begin{align*}
d_{TV} \left (\prod_{i\in S}{v_i}, \prod_{i\in S}{\hat{v}_i} \right ) \le \epsilon/2.
\end{align*}
We will now bound the total variation distance for the variables in the set $S^c$. Let $E_2$ be the event that $u_i=0,\; \forall \; i \in S^c$. Note that as $u_i\sim \text{Poisson}(np_i)$ where $p_i < \epsilon/(2nk)$, $u_i=0$ with probability at least $e^{-{\epsilon}/(2k)}$, hence $\Pr(E_2)\ge e^{-{\epsilon}/2}\ge 1-\epsilon/2$. We now condition on the event $E_2$. Recall that $v_i=y_i+z_i$, where $z_i\sim \text{Poisson}(rp_i)$ and $\hat{v}_i=y_i+\hat{z}_i$, where $\hat{z}_i=0$ conditioned on $E_2$. By a coupling argument on $y_i$, the total variation distance between the distributions of $v_i$ and $\hat{v}_i$ equals the total variation distance between the distributions of $z_i$ and $\hat{z}_i$. As $\hat{z}_i=0$, conditioned on the event $E_2$,
\begin{align*}
d_{TV}({v_i}, {\hat{v}_i}) &= \Pr[z_i\ne 0]=1-e^{-rp_i}\le 1-e^{-r\epsilon/(2nk)}\\
&\le \frac{r\epsilon}{2nk}\le \frac{\epsilon}{2k}, \quad \text{as } r\le n.
\end{align*}
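The identity $d_{TV}(\text{Poisson}(\lambda), \delta_0) = 1-e^{-\lambda}$ used in the display above (with $\lambda = rp_i$) follows directly from the definition of total variation distance; a small numerical check, for illustration only:

```python
import math

def poisson_pmf(lam, x):
    # pmf of Poisson(lam) at x, computed in log-space for stability.
    return math.exp(-lam + x * math.log(lam) - math.lgamma(x + 1))

def tv_poisson_vs_point_mass(lam, terms=100):
    # (1/2) * sum_x | Poisson(lam)(x) - 1{x = 0} |.
    diff = 1.0 - poisson_pmf(lam, 0)          # the x = 0 term
    diff += sum(poisson_pmf(lam, x) for x in range(1, terms))
    return 0.5 * diff
```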
Hence conditioned on $E_2$,
\begin{align*}
d_{TV}\left (\prod_{i\in S^c}{v_i}, \prod_{i\in S^c}{\hat{v}_i} \right )\le \sum_{i=k'+1}^{k} d_{TV} \left ({v_i}, {\hat{v}_i} \right ) \le \epsilon/2.
\end{align*}
Hence conditioned on the events $E_1$ and $E_2$,
\begin{align*}
d_{TV}\left (\prod_{i=1}^{k}{v_i}, \prod_{i=1}^{k}{\hat{v}_i} \right ) \le d_{TV} \left (\prod_{i\in S}{v_i}, \prod_{i\in S}{\hat{v}_i} \right ) + d_{TV} \left (\prod_{i\in S^c}{v_i}, \prod_{i\in S^c}{\hat{v}_i} \right ) \le \epsilon.
\end{align*}
As $\Pr(E_1) \ge 1-\epsilon/4 $ and $\Pr(E_2) \ge 1-\epsilon/2$, a union bound gives $\Pr(E_1 \cap E_2)\ge 1-3\epsilon/4$. Hence with probability at least $1-\epsilon$ over the randomness in $\{u_i,i\in [k]\}$,
\begin{align*}
d_{TV} \left (\prod_{i=1}^{k}{v_i}, \prod_{i=1}^{k}{\hat{v}_i} \right) \le \epsilon.
\end{align*}
\end{proof}
Lemma \ref{lem:discrete1} says that with high probability over the first $N_1$ samples, the $N_2 + R$ samples are close in total variation distance to $\text{Poisson}(n+r)$ samples drawn from $D$. Using lemma \ref{lem:discrete1} and lemma \ref{lem:coupling}, we can conclude that for $r\le n\epsilon^{1.5}/(4\sqrt{k})$, $D_{TV}(X_{M}, Z_{\tilde{M}}) \leq \epsilon + \epsilon/2 = 3\epsilon/2$.
Next, we show how to use the above amplification procedure to amplify samples in the non-Poissonized setting. Given $N = N_1 + N_2$ samples from $D$, we have shown how to amplify them to obtain ${\tilde{M}} = N+R$ samples. Given such an amplifier as a black box, together with $4n$ samples from $D$, one can use the first $N$ samples to generate $\tilde{M}$ samples, and then append these $\tilde{M}$ samples to the remaining $4n - N$ samples to obtain an amplifier in our original non-Poissonized setting.
\begin{lemma}\label{lem:discrete-ub2}
Let $N = N_1 + N_2$ where $N_1, N_2 \leftarrow \text{Poisson}(n)$, and let $M \leftarrow \text{Poisson}(2n+r)$.
Suppose we are given an $(N,M)$ amplifier $f$ (as described above) satisfying $D_{TV}(f(X_N), X_{M}) \leq \frac{3\epsilon}{2}$ for all $D \in \mathcal C$. Then there exists an amplifier $f': [k]^{4n} \rightarrow [k]^{4n+\frac{r}{8}}$ such that $D_{TV}(f'(X_{4n}), X_{4n+\frac{r}{8}}) \leq \frac{5 \epsilon}{2}$, for $\epsilon \geq 2e^{-\frac{n}{20}} + e^{-\frac{25r}{88}}$ and $r\le n\epsilon^{1.5}/(4\sqrt{k})$.
\end{lemma}
\begin{proof}
We divide the proof into three steps:
\begin{itemize}
\item \textbf{Step 1: } $f$ takes as input $X_{N_1}$ and $X_{N_2}$, samples of size $N_1$ and $N_2$ drawn from $D$. To simulate these samples, we use the $4n$ samples available to us from $D$. We draw $N_1', N_2' \leftarrow \text{Poisson}(n)$, and let $N' = N_1'+N_2'$. If $N' \leq 4n$, we set $X_{N_1'} = (x_1, x_2, \dots, x_{N_1'})$ and $X_{N_2'} = (x_{N_1'+1},x_{N_1'+2}, \dots, x_{N'})$. Otherwise, we set $X_{N_1'}=\underbrace{(x_1, x_1, \dots, x_1)}_{N_1' \text{ times}}$ and $X_{N_2'}=\underbrace{(x_1, x_1, \dots, x_1)}_{N_2' \text{ times}}$; this happens with very small probability, leading to a small total variation distance between $f(X_{N_1}, X_{N_2})$ and $f(X_{N_1'}, X_{N_2'})$, and, by the triangle inequality, a small TV distance between $f(X_{N_1'}, X_{N_2'})$ and $X_M$. We denote $(X_{N_1}, X_{N_2})$ by $X_N$ and $(X_{N_1'}, X_{N_2'})$ by $X_{N'}$.
\item \textbf{Step 2: } We would like to finally output $\frac{r}{8}$ more samples. Let us denote the number of samples in $f(X_{N'})$ by $M'$. If $M' < N'+\frac{r}{8}$, we append $N'+\frac{r}{8} - M'$ arbitrary samples to it (say $x_1$) so that the total sample size equals $N' + \frac{r}{8}$. If $M' \geq N' + \frac{r}{8}$, we do nothing in this step. Let $t_1(f(X_{N'}))$ denote the samples output in this step. Since the number of new samples added by $f$ is roughly distributed as $\text{Poisson}(r)$, the probability that the number of new samples is less than $r/8$ is small, leading to a small TV distance between $t_1(f(X_{N'}))$ and $f(X_{N'})$, and, by the triangle inequality, a small TV distance between $t_1(f(X_{N'}))$ and $X_M$.
\item \textbf{Step 3: } Let $M_1'$ denote the number of samples in $t_1(f(X_{N'}))$, and let $Q_1' = 4n+\frac{r}{8} - M_1'$ denote the number of extra samples needed to output $4n+\frac{r}{8}$ samples in total. If $Q_1' \geq 0$, we append $Q_1'$ i.i.d. samples from $D$ to $t_1(f(X_{N'}))$, and if $Q_1' < 0$, we remove the last $\lvert Q_1' \rvert$ samples from $t_1(f(X_{N'}))$. We use $t_2(t_1(f(X_{N'})))$ to denote the output of this step. Step 2 ensures $M_1' \geq N' +\frac{r}{8}$, which implies $Q_1' \leq 4n - N'$. Let $X_{4n-N'} = (x_{N'+1}, x_{N'+2}, \dots, x_{4n})$ denote the leftover samples in $X_{4n}$ after removing the first $N'$ samples. When $Q_1' \geq 0$, we use the first $Q_1'$ samples from $X_{4n-N'}$ to simulate i.i.d. samples from $D$, that is,
$t_2(t_1(f(X_{N'}))) = \text{append}(t_1(f(X_{N'})), (x_{N'+1}, x_{N'+2}, \dots, x_{N'+Q_1'}))$. $t_2(t_1(f(X_{N'})))$ is the final output of our amplifier $f'$.
Similarly, let $Q_1 = 4n+\frac{r}{8} - M$ denote the number of extra samples that need to be appended to $X_M$ to output $4n+\frac{r}{8}$ samples in total. If $Q_1 \geq 0$, $t_2(X_M)$ corresponds to appending $Q_1$ samples from $D$ to $X_M$; otherwise, it corresponds to removing the last $\lvert Q_1 \rvert$ samples from $X_M$. Since applying the same transformation to two random variables cannot increase their total variation distance, and since, from Step 2, $D_{TV}(t_1(f(X_{N'})), X_M)$ is small, we get that $D_{TV}(t_2(t_1(f(X_{N'}))), t_2(X_M))$ is small.
As $t_2(X_M)$ corresponds to $4n + \frac{r}{8}$ i.i.d. samples from $D$, $D_{TV} (X_{4n+\frac{r}{8}},t_2(X_M) ) = 0$. Using the triangle inequality, we get that $D_{TV}(t_2(t_1(f(X_{N'}))) ,X_{4n+\frac{r}{8}})$ is small, which is the desired result.
\end{itemize}
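The three steps above can be sketched as follows. This is a hypothetical NumPy sketch with invented names (\texttt{amplifier} stands for the black-box $f$), meant only to make the padding and truncation bookkeeping concrete:

```python
import numpy as np

def depoissonize(x_4n, n, r, amplifier, rng):
    """Sketch of the wrapper f' (Steps 1-3); illustrative, not the paper's code."""
    four_n = len(x_4n)
    # Step 1: simulate Poissonized batch sizes from the 4n available samples.
    n1, n2 = rng.poisson(n), rng.poisson(n)
    if n1 + n2 <= four_n:
        x1, x2 = x_4n[:n1], x_4n[n1:n1 + n2]
    else:
        # Low-probability fallback: repeat the first sample.
        x1, x2 = np.full(n1, x_4n[0]), np.full(n2, x_4n[0])
    out = amplifier(x1, x2)
    # Step 2: pad with arbitrary samples so that at least N' + r/8 remain.
    mid_target = n1 + n2 + r // 8
    if len(out) < mid_target:
        out = np.concatenate([out, np.full(mid_target - len(out), x_4n[0])])
    # Step 3: pad with leftover fresh samples, or truncate, to 4n + r/8 total.
    q = four_n + r // 8 - len(out)
    if q >= 0:
        out = np.concatenate([out, x_4n[n1 + n2:n1 + n2 + q]])
    else:
        out = out[:q]  # drop the last |q| samples
    return out
```

With a trivial pass-through in place of $f$, the wrapper still outputs exactly $4n + r/8$ samples, which is the bookkeeping the three steps guarantee.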
Next, we prove that the total variation distances involved in each of these steps are small.
\begin{itemize}
\item \textbf{Step 1: } We first bound $D_{TV}(f(X_N), f(X_{N'}))$.
\begin{equation*}
\begin{split}
D_{TV}(f(X_N), f(X_{N'})) &\leq D_{TV}(X_{N}, X_{N'})\\
&=\frac{1}{2}\sum_{x} \ \lvert \Pr(X_{N} = x) - \Pr(X_{N'} = x) \rvert\\
&=\frac{1}{2}\sum_{x} \ \lvert \Pr(X_{N} = x \mid N \leq 4n)\Pr(N \leq 4n) - \Pr(X_{N'} = x \mid N' \leq 4n)\Pr(N' \leq 4n) \\
&+ \Pr(X_{N} = x \mid N > 4n)\Pr(N > 4n) -
\Pr(X_{N'} = x \mid N' > 4n)\Pr(N' > 4n) \rvert
\end{split}
\end{equation*}
where the first inequality holds because applying the same transformation to two random variables cannot increase their total variation distance. Now, note that $X_N$ and $X_{N'}$ have the same distribution conditioned on $N \leq 4n$ and $ N' \leq 4n$. Also, $\Pr(N \leq 4n) = \Pr(N' \leq 4n)$ and $\Pr(N > 4n) = \Pr(N' > 4n)$, as both $N$ and $N'$ are drawn from the $\text{Poisson}(2n)$ distribution. This gives us
\begin{equation*}
\begin{split}
D_{TV}(f(X_N), f(X_{N'}))
&=\frac{1}{2} \sum_{x} \ \Pr(N > 4n) \lvert
\Pr(X_{N} = x \mid N > 4n) -
\Pr(X_{N'} = x \mid N' > 4n) \rvert\\
&\leq \Pr(N > 4n)
\end{split}
\end{equation*}
Using the triangle inequality, we get $D_{TV}(X_M, f(X_{N'})) \leq \Pr(N > 4n) + 3\epsilon/2$. To bound $\Pr(N > 4n)$, we use the following Poisson tail bound \citep{canonne2017short}: for $X \leftarrow \text{Poisson}(\lambda)$,
\begin{equation}\label{eq:poisson-tail}
\Pr[X \geq \lambda + x], \Pr[X \leq \lambda - x] \leq e^{\frac{-x^2}{\lambda + x}}.
\end{equation}
As $N$ is distributed as $\text{Poisson}(2n)$, we get $\Pr(N > 4n) \leq e^{-n}$, which implies $D_{TV}(X_M, f(X_{N'})) \leq e^{-n} + \frac{3\epsilon}{2}$.
\item \textbf{Step 2: } In this step, we need to show that $D_{TV}(t_1(f(X_{N'})), X_M)$ is small. Note that $t_1(f(X_{N'}))$ is equal to $f(X_{N'})$ except when $M' < N'+\frac{r}{8}$. From Step 1, we know that $D_{TV}(f(X_{N'}), X_M)$ is small. If we show that $D_{TV}(f(X_{N'}), t_1(f(X_{N'})))$ is small, then, by the triangle inequality, $D_{TV}(X_M, t_1(f(X_{N'})))$ is small. Let $M' = N'+R'$, where $R'$ denotes the number of new samples added by the amplification procedure $f$ to $X_{N'}$.
\begin{equation*}
\begin{split}
&D_{TV}\left(t_1\left(f\left(X_{N'}\right)\right), f\left(X_{N'}\right)\right)\\
&=\frac{1}{2}\sum_{x} \ \lvert \Pr\left(t_1\left(f\left(X_{N'}\right)\right) = x\right) - \Pr\left(f\left(X_{N'}\right) = x\right) \rvert\\
&= \frac{1}{2}\sum_{x} \ \lvert \Pr\left(R' < \frac{r}{8}\right) \left(\Pr\left(t_1\left(f\left(X_{N'}\right)\right) = x \mid R' < \frac{r}{8}\right) - \Pr\left(f\left(X_{N'}\right) = x \mid R' < \frac{r}{8}\right)\right)\\
& + \Pr\left(R' \geq \frac{r}{8}\right)\left(\Pr\left(t_1\left(f\left(X_{N'}\right)\right) = x \mid R' \geq \frac{r}{8}\right) - \Pr\left(f\left(X_{N'}\right) = x \mid R' \geq \frac{r}{8}\right)\right)\rvert\\
\end{split}
\end{equation*}
We know $\Pr\left(t_1\left(f\left(X_{N'}\right)\right) = x \mid R' \geq \frac{r}{8}\right) = \Pr\left(f\left(X_{N'}\right) = x \mid R' \geq \frac{r}{8}\right)$. This gives
\begin{equation*}
\begin{split}
&D_{TV}\left(t_1\left(f\left(X_{N'}\right)\right), f\left(X_{N'}\right)\right)\\
&= \frac{1}{2}\sum_{x} \ \lvert \Pr\left(R' < \frac{r}{8}\right) \left(\Pr\left(t_1\left(f\left(X_{N'}\right)\right) = x \mid R' < \frac{r}{8}\right) - \Pr\left(f\left(X_{N'}\right) = x \mid R' < \frac{r}{8}\right)\right) \rvert\\
&\leq \Pr\left(R' < \frac{r}{8}\right)
\end{split}
\end{equation*}
Now, we need to bound $\Pr\left(R' < \frac{r}{8}\right)$. From the description of $f$, we know that the number of new copies of element $i$ added by $f$ is distributed as $\text{Poisson}\left(r \hat{p}_i\right)$. Here, $\hat{p}_i =\frac{u_i}{n}$, where $u_i$ denotes the number of occurrences of element $i$ in $X_{N_1'}$. Since the total number of samples in $X_{N_1'}$ is $N_1'$, we get $\sum_{i=1}^k \hat{p}_i = \frac{\sum_{i=1}^k u_i}{n} = \frac{N_1'}{n}$. Note that $R'$ equals the sum of the numbers of new copies of the individual elements, and, as a sum of independent Poisson random variables is Poisson, $R'$ is distributed as $\text{Poisson}\left(r\frac{N_1'}{n}\right)$.
\begin{equation*}
\begin{split}
\Pr\left(R' < \frac{r}{8}\right) &= \Pr\left(R' < \frac{r}{8} \mid {N_1'} \geq \frac{3n}{4}\right) \Pr\left({N_1'} \geq \frac{3n}{4}\right) + \Pr\left(R' < \frac{r}{8} \mid {N_1'} < \frac{3n}{4}\right) \Pr\left({N_1'} < \frac{3n}{4}\right) \\
&\leq \Pr\left(R' < \frac{r}{8} \mid {N_1'} \geq \frac{3n}{4}\right) + \Pr\left({N_1'} < \frac{3n}{4}\right)
\end{split}
\end{equation*}
Using Poisson tail bound (\ref{eq:poisson-tail}), we get
\begin{equation*}
\begin{split}
\Pr\left(R' < \frac{r}{8} \mid {N_1'} \geq \frac{3n}{4}\right) &\leq \exp\left(-\frac{\left(5r/8\right)^2}{3r/4+5r/8}\right) = e^{-25r/88}\\
\Pr\left(N_1' < \frac{3n}{4}\right) &\leq \exp\left(-\frac{\left(n/4\right)^2}{n+n/4} \right) = e^{-n/20}
\end{split}
\end{equation*}
This gives us $D_{TV}(f(X_{N'}), t_1(f(X_{N'}))) \leq e^{-25r/88} + e^{-n/20}$. By the triangle inequality, we get $D_{TV}(X_M, t_1(f(X_{N'}))) \leq \frac{3\epsilon}{2}+ e^{-n} + e^{-25r/88} + e^{-n/20}$.
\item \textbf{Step 3: } For this step, we need to show $D_{TV}(t_2(t_1(f(X_{N'}))), t_2(X_M))$ is small. Since applying the same transformation to two random variables doesn't increase their TV distance, we get
\begin{equation*}
\begin{split}
D_{TV}(t_2(t_1(f(X_{N'}))), t_2(X_M)) &\leq D_{TV}(t_1(f(X_{N'})), X_M)\\
&\leq \frac{3\epsilon}{2}+ e^{-n} + e^{-25r/88} + e^{-n/20}
\end{split}
\end{equation*}
As $D_{TV} (X_{4n+\frac{r}{8}},t_2(X_M) ) = 0$, using the triangle inequality, we get
\begin{equation*}
\begin{split}
D_{TV}(t_2(t_1(f(X_{N'}))),X_{4n+\frac{r}{8}} )
&\leq \frac{3\epsilon}{2}+ e^{-n} + e^{-25r/88} + e^{-n/20}
\end{split}
\end{equation*}
\end{itemize}
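The data-processing fact invoked in each of the three steps (applying the same transformation to two random variables cannot increase their total variation distance) is easy to confirm numerically on a small discrete example. The sketch below is ours, not part of the proof; the pmfs and the map $g$ are arbitrary illustrative choices.

```python
def tv(p, q):
    """Total variation distance between two pmfs given as dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)


def pushforward(p, g):
    """Distribution of g(X) when X ~ p."""
    out = {}
    for x, px in p.items():
        out[g(x)] = out.get(g(x), 0.0) + px
    return out


p = {0: 0.5, 1: 0.3, 2: 0.2}
q = {0: 0.2, 1: 0.3, 2: 0.5}
g = lambda x: x % 2  # any deterministic map works here

# data-processing inequality: TV(g(P), g(Q)) <= TV(P, Q)
assert tv(pushforward(p, g), pushforward(q, g)) <= tv(p, q) + 1e-12
```

For this particular map the pushforwards of $p$ and $q$ coincide, so the left-hand side is $0$ while $D_{TV}(p,q) = 0.3$; the inequality can of course also be strict or tight for other maps.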
For $\epsilon \geq 2e^{-n/20}+e^{-25r/88}$, this gives us $D_{TV}(f'(X_{4n}),X_{4n+\frac{r}{8}} ) = D_{TV}(t_2(t_1(f(X_{N'}))), X_{4n+\frac{r}{8}} ) \leq \frac{5 \epsilon}{2}$.
\end{proof}
From Lemma \ref{lem:discrete-ub2}, we get that for $\epsilon \geq 2e^{-n/20}+e^{-25r/88}$ and $r \leq n\epsilon^{1.5}/(4\sqrt{k})$, $D_{TV}(f'(X_{4n}), X_{4n+\frac{r}{8}} ) \leq \frac{5 \epsilon}{2}$. We can assume that $n$ is at least $\sqrt{k}$ and that $r$ is at least $8$, as otherwise the theorem is trivially true. So, for $k$ large enough (implying large $n$), we can set $\epsilon = \frac{2}{15}$ to get $D_{TV}(t_2(t_1(f(X_{N'}))), X_{4n+\frac{r}{8}} ) \leq \frac{1}{3}$, which finishes the proof.
\end{proof}
\subsection{Upper Bound for Procedures that Return a Superset of the Input Samples}
In this section, we prove the upper bound in Proposition \ref{prop:gaussian_modify_full}. The algorithm itself is presented in Algorithm \ref{alg:gaussian2}. Before proceeding with the proof, we prove a brief lemma that will be useful for bounding the total variation distance.
\begin{lemma}\label{lem:coupling}
Let $X, Y_1, Y_2$ be random variables such that, with probability at least $1 - \epsilon$ over $X$, $D_{TV}(Y_1 \mid X, Y_2 \mid X) \leq \epsilon'$. Then $D_{TV}((X, Y_1), (X, Y_2)) \leq \epsilon + \epsilon'$.
\end{lemma}
\begin{proof}
From the definition of total variation distance, we know
\begin{equation*}
\begin{split}
D_{TV}((X, Y_1), (X, Y_2)) &= \frac{1}{2}\sum_{x,y} \left \lvert \Pr((X, Y_1) = (x,y)) - \Pr((X, Y_2) = (x, y)) \right \rvert\\
& = \frac{1}{2}\sum_{x,y} \Pr(X = x) \left \lvert \Pr \left (Y_1 = y \mid X = x \right ) - \Pr(Y_2 = y \mid X = x) \right \rvert\\
& = \sum_{x} \Pr(X = x) \ \frac{1}{2}\sum_{y} \lvert \Pr(Y_1 = y \mid X = x) - \Pr(Y_2 = y \mid X = x) \rvert\\
& = \sum_x \Pr(X = x) \ D_{TV}(Y_1 \mid X=x, Y_2 \mid X=x).
\end{split}
\end{equation*}
Since with probability at least $1 - \epsilon$ over $X$, $D_{TV}(Y_1 \mid X, Y_2 \mid X)$ is at most $\epsilon'$, and the total variation distance is always bounded by $1$, we get $\sum_x \Pr(X = x) \ D_{TV}(Y_1 \mid X=x, Y_2 \mid X=x) \leq (1-\epsilon)\epsilon' + \epsilon \leq \epsilon' + \epsilon$.
The same proof, with summations appropriately replaced by integrals, goes through when the random variables under consideration are defined over continuous domains.
\end{proof}
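The identity underlying Lemma \ref{lem:coupling}, namely that $D_{TV}((X,Y_1),(X,Y_2))$ equals the $X$-average of the conditional distances, can be sanity-checked numerically. The discrete distributions below are illustrative choices of our own.

```python
def tv(p, q):
    """Total variation distance between two pmfs given as dicts."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)


# marginal of X, and conditionals of Y1, Y2 given X = x
px = {"a": 0.6, "b": 0.4}
y1 = {"a": {0: 0.9, 1: 0.1}, "b": {0: 0.5, 1: 0.5}}
y2 = {"a": {0: 0.9, 1: 0.1}, "b": {0: 0.2, 1: 0.8}}

# joint distributions of (X, Y_1) and (X, Y_2)
joint1 = {(x, y): px[x] * y1[x][y] for x in px for y in y1[x]}
joint2 = {(x, y): px[x] * y2[x][y] for x in px for y in y2[x]}

lhs = tv(joint1, joint2)                          # D_TV((X,Y1),(X,Y2))
rhs = sum(px[x] * tv(y1[x], y2[x]) for x in px)   # E_X[D_TV(Y1|X, Y2|X)]
assert abs(lhs - rhs) < 1e-12
```

Here the conditionals agree on $x = \mathrm{a}$ and differ by $0.3$ in total variation on $x = \mathrm{b}$, so both sides equal $0.4 \cdot 0.3 = 0.12$.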
Now we prove the upper bound from Proposition \ref{prop:gaussian_modify_full}. As in Proposition \ref{prop:gaussian-ub-main}, it suffices to prove this bound for the case of identity-covariance Gaussians, as our algorithm in this case is also invariant under linear transformations.
\begin{proposition}
Let $\mathcal{C}$ denote the class of $d$-dimensional Gaussian distributions $N(\mu, I)$ with unknown mean $\mu$. There is a constant $c'$ such that for any $\epsilon$, $n = \frac{d}{\epsilon \log d}$, and sufficiently large $d$, there is an $\left(n,n+c'n^{\frac{1}{2}-9\epsilon}\right)$ amplification protocol for $\mathcal{C}$ that returns a superset of the original $n$ samples.
\end{proposition}
\alggaussiannonmod
\begin{proof}
Let $m = n+r$, where $r = O\left(n^{\frac{1}{2} - 9\epsilon}\right)$. We begin by describing the procedure to generate $m$ samples $Z_m = \left(x_1', x_2', \dots, x_m'\right)$, given $n$ i.i.d. samples $X_{n} = \left(x_1, x_2, \dots, x_{n}\right)$ drawn from $N\left(\mu, I\right)$. Let $\tilde{\mu} = \sum_{i=1}^{n/2} \frac{x_i}{n/2}$ denote the mean of the first $\frac{n}{2}$ samples in $X_{n}$. For distributions $P$ and $Q$, let $(1-\alpha) P + \alpha Q$ denote the mixture distribution in which $(1 - \alpha)$ and $\alpha$ are the respective mixture weights.
We first describe how to generate $Z_m' = (x_1'',x_2'', \dots, x_m'')$, given $n$ i.i.d. samples $X_{n}$.
For $i \in \{1, 2, \dots, \frac{n}{2}\}$, we set $x_i''= x_i$. For $i \in \{\frac{n}{2}+1, \frac{n}{2}+2, \dots, m\}$, we set $x_i''$ to a random independent draw from the mixture distribution $\left(1 - \frac{10r}{r+\frac{n}{2}}\right)N(\mu, I_{d \times d}) + \frac{10r}{r+\frac{n}{2}}N(\tilde{\mu}, I_{d \times d})$.
Now, the construction of $Z_m$ is very similar to that of $Z_m'$, except that we do not have access to $N(\mu, I_{d \times d})$ to sample points from the mixture distribution. So, for $Z_m$, set $x_i'= x_i$ for $i \in \{1, 2, \dots, \frac{n}{2}\}$. For $i \in \{\frac{n}{2}+1, \frac{n}{2}+2, \dots, m\}$, we use samples from $(x_{\frac{n}{2}+1},x_{\frac{n}{2}+2}, \dots, x_{n} )$ instead of producing new samples from $N(\mu, I_{d \times d})$. With probability $\left(1 - \frac{10r}{r+\frac{n}{2}}\right)$, we draw a random sample without replacement from $\left(x_{\frac{n}{2}+1}, x_{\frac{n}{2}+2}, \dots, x_{n}\right)$; with probability $\frac{10r}{r+\frac{n}{2}}$, we generate a sample from $N(\tilde{\mu},I)$; and we set $x_i'$ equal to that sample. As we sample from $(x_{\frac{n}{2}+1}, x_{\frac{n}{2}+2}, \dots, x_{n})$ without replacement, we can generate only $\frac{n}{2}$ samples this way. The expected number of samples needed is $(\frac{n}{2}+r)(1 - \frac{10r}{r+\frac{n}{2}}) = \frac{n}{2} - 9r$, and with high probability, we will not need more than $\frac{n}{2}$ samples. If the total number of required samples from $\left(x_{\frac{n}{2}+1}, x_{\frac{n}{2}+2}, \dots, x_{n}\right)$ turns out to be more than $\frac{n}{2}$, we set $x_i'$ to an arbitrary $d$-dimensional vector (say $x_1$), but this happens with low probability, leading to an insignificant loss in total variation distance.
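The construction of $Z_m$ can be sketched in code. The following is a minimal one-dimensional Python illustration of our own (the actual procedure operates on $d$-dimensional vectors; the function name `amplify` and the use of floats for samples are assumptions made purely for illustration):

```python
import random


def amplify(x, r, rng):
    """Sketch of the procedure generating Z_m from n i.i.d. samples x
    (n even). Keeps the first half, then draws n/2 + r more samples:
    without replacement from the second half with probability
    1 - 10r/(r + n/2), and from N(mu_tilde, 1) otherwise."""
    n = len(x)
    mu_tilde = sum(x[: n // 2]) / (n // 2)  # mean of the first half
    out = list(x[: n // 2])                 # first n/2 samples kept as-is
    pool = list(x[n // 2:])                 # second half, used w/o replacement
    w = 10 * r / (r + n / 2)                # mixture weight of N(mu_tilde, 1)
    for _ in range(n // 2 + r):
        if rng.random() < w:
            out.append(rng.gauss(mu_tilde, 1.0))
        elif pool:
            out.append(pool.pop(rng.randrange(len(pool))))
        else:
            out.append(x[0])                # low-probability fallback case
    return out


rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(100)]
z = amplify(x, r=5, rng=rng)
assert len(z) == 105      # m = n + r samples in total
assert z[:50] == x[:50]   # output is a superset of the first n/2 inputs
```

Note that the output always contains the first $\frac{n}{2}$ input samples verbatim, and, outside the low-probability fallback, each remaining input sample appears at most once among the appended draws.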
Let $X_m$ denote the random variable corresponding to $m$ i.i.d. samples from $N(\mu, I)$. We want to show that $D_{TV}(X_m, Z_m)$ is small. By triangle inequality, $D_{TV}(X_m, Z_m) \leq D_{TV}(X_m, Z_m') + D_{TV}(Z_m', Z_m)$.
We first bound $D_{TV}(Z_m, Z_m')$. Let $Y, Y' \leftarrow \text{Binomial} \left (r+\frac{n}{2}, 1 - \frac{10r}{r+\frac{n}{2}} \right)$ be the random variables that denote the number of samples drawn from the mixture component with weight $1 - \frac{10r}{r+\frac{n}{2}}$ in $Z_m$ and $Z_m'$, respectively. Let $\Omega$ denote the sample space of $Z_m$ and $Z_m'$.
\begin{equation*}
\begin{split}
D_{TV}(Z_m, Z_m') &= \max_{E \subseteq \Omega} \ \lvert \Pr(Z_m \in E) - \Pr(Z_m' \in E)\rvert \\
&= \max_{E \subseteq \Omega} \ \lvert \Pr \left(Z_m \in E \mid Y \leq \frac{n}{2}\right)\Pr \left(Y \leq \frac{n}{2}\right) + \Pr \left(Z_m \in E \mid Y > \frac{n}{2}\right)\Pr \left(Y > \frac{n}{2} \right)\\ & \hspace{34pt} -\Pr \left(Z_m' \in E \mid Y' \leq \frac{n}{2} \right)\Pr \left(Y' \leq \frac{n}{2}\right) - \Pr \left(Z_m' \in E \mid Y' > \frac{n}{2} \right)\Pr \left(Y' > \frac{n}{2}\right) \rvert \\
\end{split}
\end{equation*}
Since $Y$ and $Y'$ have the same distribution, we have $\Pr\left(Y' \leq \frac{n}{2}\right) = \Pr \left(Y \leq \frac{n}{2} \right)$, and $\Pr \left( Y' > \frac{n}{2} \right) = \Pr \left(Y > \frac{n}{2} \right)$. This gives us
\begin{equation*}
\begin{split}
D_{TV}(Z_m, Z_m')
&= \max_{E \subseteq \Omega} \ \lvert \Pr \left(Z_m \in E \mid Y \leq \frac{n}{2}\right)\Pr \left(Y \leq \frac{n}{2}\right) + \Pr \left(Z_m \in E \mid Y > \frac{n}{2}\right)\Pr \left(Y > \frac{n}{2}\right)\\ & \hspace{34pt} -\Pr \left(Z_m' \in E \mid Y' \leq \frac{n}{2}\right)\Pr \left(Y \leq \frac{n}{2}\right) - \Pr \left(Z_m' \in E \mid Y' > \frac{n}{2}\right)\Pr \left(Y > \frac{n}{2}\right) \rvert \\
&\leq \max_{E \subseteq \Omega} \Pr \left(Y \leq \frac{n}{2}\right) \left\lvert \Pr \left(Z_m \in E \mid Y \leq \frac{n}{2} \right) - \Pr \left(Z_m' \in E \mid Y' \leq \frac{n}{2} \right) \right\rvert \\
& \hspace{34pt} + \Pr \left(Y > \frac{n}{2}\right) \left\lvert \Pr \left(Z_m \in E \mid Y > \frac{n}{2}\right) - \Pr \left(Z_m' \in E \mid Y' > \frac{n}{2}\right) \right\rvert \\
\end{split}
\end{equation*}
where the last inequality holds because of the triangle inequality.
Now, note that $\Pr(Z_m \in E | Y \leq \frac{n}{2}) = \Pr(Z_m' \in E | Y' \leq \frac{n}{2})$ for all $E$, and $\lvert \Pr(Z_m \in E | Y > \frac{n}{2}) - \Pr(Z_m' \in E | Y' > \frac{n}{2}) \rvert \leq 1$. This gives us
$$D_{TV}(Z_m, Z_m') \leq \Pr \left(Y > \frac{n}{2}\right).$$
We know $\mathbb{E}[Y] = \frac{n}{2}-9r$, and $\text{Var}[Y] = \left(\frac{n}{2}+r \right)\left(1 - \frac{10r}{\frac{n}{2}+r}\right) \left(\frac{10r}{\frac{n}{2}+r}\right) \leq 10r$. Using Bernstein's inequality, we get
\begin{equation*}
\begin{split}
\Pr \left[Y > \frac{n}{2} \right] &= \Pr(Y - \mathbb{E}[Y] > 9r)\\
&\leq \exp \left(\frac{-(9r)^2}{2(10r + 9r/3)}\right)\\
&\leq \exp\left(\frac{-81r}{26}\right).
\end{split}
\end{equation*}
So we get $D_{TV}(Z_m, Z_m') \leq \exp\left(\frac{-81r}{26}\right)$.
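The Bernstein step can be checked numerically against the exact binomial tail for a concrete choice of parameters. The values $n = 200$, $r = 4$ below are illustrative choices of our own; the check simply confirms that $\Pr\left[Y > \frac{n}{2}\right]$ is indeed below $e^{-81r/26}$ in this instance.

```python
import math


def binom_upper_tail(trials, p, t):
    """Pr[Y > t] for Y ~ Binomial(trials, p), computed exactly."""
    return sum(
        math.comb(trials, k) * p**k * (1 - p) ** (trials - k)
        for k in range(t + 1, trials + 1)
    )


n, r = 200, 4                      # illustrative parameters
trials = n // 2 + r                # Y ~ Binomial(n/2 + r, 1 - 10r/(n/2 + r))
p = 1 - 10 * r / trials
exact = binom_upper_tail(trials, p, n // 2)
bernstein = math.exp(-81 * r / 26)
assert exact <= bernstein
```

Here $\mathbb{E}[Y] = \frac{n}{2} - 9r = 64$, so the event $Y > 100$ is far in the tail, and the exact probability is many orders of magnitude below the Bernstein bound $e^{-81 \cdot 4/26} \approx 3.9 \times 10^{-6}$.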
Next, we bound $D_{TV}(X_m, Z_m')$. We write $X_m = (X_m^1, X_m^2)$ and $Z_m' = (Z_m^{1'}, Z_m^{2'})$, where $X_m^1$ and $Z_m^{1'}$ denote the first $\frac{n}{2}$ samples of $X_m$ and $Z_m'$, and $X_m^2$ and $Z_m^{2'}$ denote the rest of their samples. Since $X_m^1$ and $Z_m^{1'}$ are drawn from the same distribution, $\Pi_{i=1}^\frac{n}{2} N(\mu, I)$, and $Z_m^{1'}, X_m^1, X_m^2$ are independent, we get that $(Z_m^{1'}, X_m^2)$ and $(X_m^{1}, X_m^2)$ are equal in distribution. This gives us
\begin{align*}
D_{TV}(X_m, Z_m') = D_{TV}((X_m^1, X_m^2), (Z_m^{1'}, Z_m^{2'})) = D_{TV}((Z_m^{1'}, X_m^2), (Z_m^{1'}, Z_m^{2'})).
\end{align*}
From Lemma \ref{lem:coupling}, we know that, if with probability at least $1 - \epsilon_1$ over $Z_m^{1'}$, $D_{TV}(X_m^2|Z_m^{1'}, Z_m^{2'}|Z_m^{1'}) \leq \epsilon_2$, then $D_{TV}((Z_m^{1'}, X_m^2), (Z_m^{1'}, Z_m^{2'})) \leq \epsilon_1 + \epsilon_2$. Here, $Z_m^{1'}$ and $X_m^2$ are independent, and the only dependency between $Z_m^{1'}$ and $Z_m^{2'}$ is via the mean $\tilde{\mu}$ of the elements of $Z_m^{1'}$. So $D_{TV}(X_m^2|Z_m^{1'}, Z_m^{2'}|Z_m^{1'}) = D_{TV}(X_m^2, Z_m^{2'}|\tilde{\mu})$. We will show that with high probability over $\tilde{\mu}$, this total variation distance is small. \\
We first estimate $\norm{\tilde{\mu} - \mu}$. Note that $\mathbb{E}_{Z_m^{1'}}[\norm{\tilde{\mu} - \mu}^2] = \frac{2d}{n}$, and $\frac{n}{2}\norm{\tilde{\mu} - \mu}^2$ is a $\chi^2$ random variable with $d$ degrees of freedom. To bound the deviation of $\norm{\tilde{\mu} - \mu}^2$ around its mean, we will use the following concentration bound for a $\chi^2$ random variable $R$ with $d$ degrees of freedom \citep[Example 2.5]{wainwright2015basic}.
\begin{align*}
\Pr[|R - d| \ge dt] \le 2e^{-dt^2/8}, \text{ for all } t \in (0,1).
\end{align*}
This gives us $\Pr(\lvert \frac{n}{2}\norm{\tilde{\mu} - \mu}^2 - d \rvert \geq {0.5 d}) \leq 2e^{-d/32}$, that is, $\norm{\tilde{\mu} - \mu} \leq \sqrt{\frac{3 d}{n}} \leq \sqrt{3 \epsilon \log d}$ with probability at least $1 - 2e^{-d/32}$.
$X_m^2$ is distributed as the product of $\frac{n}{2}+r$ Gaussians $\Pi_{i = 1}^{\frac{n}{2}+r} N(\mu, I_{d \times d})$, and $Z_m^{2'}|\tilde{\mu}$ is distributed as the product of $\frac{n}{2}+r$ mixture distributions $\Pi_{i=1}^{\frac{n}{2}+r} (1 - \frac{10r}{\frac{n}{2}+r})N(\mu, I_{d \times d}) + \frac{10r}{\frac{n}{2}+r}N(\tilde{\mu}, I_{d \times d})$. We bound the total variation distance between these two distributions via their squared Hellinger distance, since the squared Hellinger distance is easy to bound for product distributions and is within a quadratic factor of the total variation distance for any pair of distributions. By the subadditivity of the squared Hellinger distance, we get
\begin{equation}\label{eq:thm-hellinger-subadd}
\begin{split}
&H\left(\Pi_{i = 1}^{\frac{n}{2}+r} N(\mu, I_{d \times d}),\Pi_{i=1}^{\frac{n}{2}+r} \left(1 - \frac{10r}{\frac{n}{2}+r}\right)N \left(\mu, I_{d \times d} \right) + \frac{10r}{\frac{n}{2}+r}N \left(\tilde{\mu}, I_{d \times d} \right) \right)^2\\
&\leq \left(\frac{n}{2}+r \right) \ H\left( N \left(\mu, I_{d \times d}\right), \left(1 - \frac{10r}{\frac{n}{2}+r}\right)N \left(\mu, I_{d \times d}\right) + \frac{10r}{\frac{n}{2}+r}N \left(\tilde{\mu}, I_{d \times d}\right) \right)^2.
\end{split}
\end{equation}
For sufficiently large $d$, $r$ and $n$ satisfy $r \leq \frac{n}{18}$, so we can use Lemma \ref{lem:hellinger-gaussian} to get
\begin{equation}\label{eq:thm-hellinger-ubound}
\begin{split}
H\left( N(\mu, I_{d \times d}), \left(1 - \frac{10r}{\frac{n}{2}+r} \right)N \left(\mu, I_{d \times d}\right) + \frac{10r}{\frac{n}{2}+r}N \left(\tilde{\mu}, I_{d \times d}\right) \right)^2
&\leq \frac{576r^2}{n^2}e^{3\norm{\tilde{\mu} - \mu}^2}\\
&\leq \frac{576r^2 d^{9\epsilon}}{n^2},
\end{split}
\end{equation}
with probability at least $1 - 2e^{-d/32}$ over $\tilde{\mu}$. From \eqref{eq:thm-hellinger-subadd} and \eqref{eq:thm-hellinger-ubound}, we get that with probability at least $1 - 2e^{-d/32}$ over $\tilde{\mu}$,
\begin{equation*}
\begin{split}
&H\left(\Pi_{i = 1}^{\frac{n}{2}+r} N(\mu, I_{d \times d}),\Pi_{i=1}^{\frac{n}{2}+r} \left(1 - \frac{10r}{\frac{n}{2}+r} \right) N(\mu, I_{d \times d}) + \frac{10r}{\frac{n}{2}+r}N(\tilde{\mu}, I_{d \times d}) \right)^2\leq (\frac{n}{2}+r)\frac{576r^2 d^{9\epsilon}}{n^2}\leq \frac{576r^2 d^{9\epsilon}}{n},
\end{split}
\end{equation*}
where the last inequality holds because $r < \frac{n}{2}$. As the total variation distance between two distributions is upper bounded by $\sqrt{2}$ times their Hellinger distance, we get that with probability at least $1 - 2e^{-d/32}$ over $\tilde{\mu}$,
\begin{equation*}
\begin{split}
&D_{TV}\left(\Pi_{i = 1}^{\frac{n}{2}+r} N(\mu, I_{d \times d}),\Pi_{i=1}^{\frac{n}{2}+r} \left(1 - \frac{10r}{\frac{n}{2}+r}\right)N(\mu, I_{d \times d}) + \frac{10r}{\frac{n}{2}+r}N(\tilde{\mu}, I_{d \times d}) \right)\\
&\leq \frac{24 \sqrt{2} r d^{9\epsilon/2}}{\sqrt{n}}\leq \frac{24\sqrt{2} r n^{9\epsilon}}{\sqrt{n}},
\end{split}
\end{equation*}
where the last inequality is true because $n > \sqrt{d}$.
Now, from Lemma \ref{lem:coupling}, we know that if with probability at least $1 - \epsilon_1$ over $Z_m^{1'}$, $D_{TV}(X_m^2|Z_m^{1'}, Z_m^{2'}|Z_m^{1'}) \leq \epsilon_2$, then $D_{TV}((Z_m^{1'}, X_m^2), (Z_m^{1'}, Z_m^{2'})) \leq \epsilon_1 + \epsilon_2$. In this
case, $\epsilon_1 = 2e^{-d/32}$ and $\epsilon_2 = \frac{24\sqrt{2} r n^{9\epsilon}}{\sqrt{n}}$, so we get $D_{TV}((Z_m^{1'}, X_m^2), (Z_m^{1'}, Z_m^{2'})) = D_{TV}(X_m, Z_m') \leq 2e^{-d/32} + \frac{24\sqrt{2} r n^{9\epsilon}}{\sqrt{n}}$. We also know that $D_{TV}(Z_m, Z_m') \leq e^{-81r/26}$. Using triangle inequality, we get
\begin{align*}
D_{TV}(X_m, Z_m) \leq 2e^{-d/32} + \frac{24\sqrt{2} r n^{9\epsilon}}{\sqrt{n}} + e^{-81r/26}.
\end{align*}
For $\delta > 2(2e^{-d/32} + e^{-81r/26})$, and for $ r \leq \frac{n ^{\frac{1}{2} - 9\epsilon} \delta }{48\sqrt{2}}$, we get $D_{TV}(X_m, Z_m) \leq \delta$. For $d$ large enough, setting $\delta= \frac{1}{3}$ and $r \leq \frac{n ^{\frac{1}{2} - 9\epsilon} }{144\sqrt{2}} $, we get the desired result. Note that we haven't tried to optimize the constants in this proof.
\begin{lemma}\label{lem:hellinger-gaussian}
Let $P = N(0, I_{d \times d})$ and $Q = N(\hat{\mu}, I_{d \times d})$ be $d$-dimensional gaussian distributions. For $r \leq \frac{n}{18}$, $H\left(P, \left(1 - \frac{10r}{r+\frac{n}{2}}\right)P + \frac{10r}{r+\frac{n}{2}}Q\right) \leq \frac{24r}{n}e^{\frac{3\norm{\hat{\mu}}^2}{2}}$.
\end{lemma}
\begin{proof}
We work in the rotated basis where $Q = N((\norm{\hat{\mu}},\underbrace{0, 0, \dots, 0}_{d-1 \text{ times}}), I_{d \times d})$ and $P = N(0, I_{d \times d})$. Let $P_1 = N(0, 1)$ and $Q_1 = N(\norm{\hat{\mu}}, 1)$ denote the projections of $P$ and $Q$ along the first coordinate axis, respectively. Note that the mixture distribution in question is the product of $\left(\left(1 - \frac{10r}{r+\frac{n}{2}}\right)P_1 + \frac{10r}{r+\frac{n}{2}}Q_1\right)$ and $N(0, I_{d-1 \times d-1})$, and $P$ is the product of $P_1$ and $N(0, I_{d-1 \times d-1})$. Since the squared Hellinger distance is subadditive for product distributions, we get
\begin{equation*}
\begin{split}
H\left(P, \left(1 - \frac{10r}{r+\frac{n}{2}}\right)P + \frac{10r}{r+\frac{n}{2}}Q\right)^2 &\leq H\left(P_1,\left(1 - \frac{10r}{r+\frac{n}{2}}\right)P_1 + \frac{10r}{r+\frac{n}{2}}Q_1\right)^2 + H(N(0,I_{d-1 \times d-1}), N(0,I_{d-1 \times d-1}))^2 \\
&= H\left(P_1,\left(1 - \frac{10r}{r+\frac{n}{2}}\right)P_1 + \frac{10r}{r+\frac{n}{2}}Q_1\right)^2.
\end{split}
\end{equation*}
Therefore, to bound the required Hellinger distance, we just need to bound $H \left(P_1, \left(1 - \frac{10r}{r+\frac{n}{2}}\right)P_1 + \frac{10r}{r+\frac{n}{2}}Q_1 \right)$. Let $p_1$ and $q_1$ denote the probability densities of $P_1$ and $\left(\left(1 - \frac{10r}{r+\frac{n}{2}}\right)P_1 + \frac{10r}{r+\frac{n}{2}}Q_1\right)$ respectively. We get $H\left(P_1,\left(1 - \frac{10r}{r+\frac{n}{2}}\right)P_1 + \frac{10r}{r+\frac{n}{2}}Q_1\right)^2 = \int_{- \infty}^{\infty} \left(\sqrt{p_1} - \sqrt{q_1}\right )^2 dx$
\begin{equation*}
\begin{split}
&= \int_{- \infty}^{\infty} \left(\sqrt{\frac{1}{\sqrt{2 \pi}}e^{-x^2/2}} - \sqrt{\left(1 -\frac{10r}{r+\frac{n}{2}}\right)\frac{1}{\sqrt{2 \pi}} e^{-x^2/2} + \frac{10r}{r+\frac{n}{2}} \frac{1}{\sqrt{2 \pi}} e^{-(x - \norm{\hat{\mu}})^2/2}}\right)^2 dx\\
&= \int_{- \infty}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{10r}{r+\frac{n}{2}} + \frac{10r}{r+\frac{n}{2}}e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} \right)^2 dx.\\
\end{split}
\end{equation*}
We evaluate this integral as the sum of integrals over two regions.
\begin{enumerate}
\item From $-\infty$ to $\norm{\hat{\mu}}/2$:
\begin{equation*}
\begin{split}
\int_{- \infty}^{\norm{\hat{\mu}}/2}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{10r}{r+\frac{n}{2}} + \frac{10r}{r+\frac{n}{2}}e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} \right)^2 dx
&\leq
\int_{- \infty}^{\norm{\hat{\mu}}/2}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{10r}{r+\frac{n}{2}} } \right)^2 dx.
\end{split}
\end{equation*}
Since $r \leq \frac{n}{18}$, we get $\frac{10r}{r + \frac{n}{2}} \leq 1$. Using $1 - y \leq \sqrt{1 - y}$ for $0 \leq y \leq 1$, we get
\begin{equation*}
\begin{split}
\int_{- \infty}^{\norm{\hat{\mu}}/2}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{10r}{r+\frac{n}{2}} } \right)^2 dx
&\leq
\int_{- \infty}^{\norm{\hat{\mu}}/2}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(\frac{10r}{r+\frac{n}{2}} \right)^2 dx\\
&\leq \frac{400r^2}{n^2}.
\end{split}
\end{equation*}
\item From $\frac{\norm{\hat{\mu}}}{2}$ to $\infty$: we bound
$\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{10r}{r+\frac{n}{2}} + \frac{10r}{r+\frac{n}{2}}e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} \right)^2 dx$
\begin{equation*}
\begin{split}
&\leq
\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left( \sqrt{1 + \frac{10r}{r+\frac{n}{2}} e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} - 1 \right)^2 dx.\\
\end{split}
\end{equation*}
This is because $x \geq \norm{\hat{\mu}}/2$, and therefore $\frac{10r}{r+\frac{n}{2}}e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}} \geq \frac{10r}{r+\frac{n}{2}}$. Now, using $\sqrt{1+y} \leq 1 + \frac{y}{2}$, we get
\begin{equation*}
\begin{split}
&\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left( \sqrt{1 + \frac{10r}{r+\frac{n}{2}} e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} - 1 \right)^2 dx\\
&\leq
\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left( {1 + \frac{5r}{r+\frac{n}{2}} e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} - 1 \right)^2 dx\\
&\leq
\frac{100r^2}{n^2} \int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} {e^{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}} e^{-x^2/2}
dx\\
&=
\frac{100r^2}{n^2}e^{-\norm{\hat{\mu}}^2} \int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} {e^{2\norm{\hat{\mu}} x - x^2/4}} e^{-x^2/4}
dx.\\
\end{split}
\end{equation*}
Since $2\norm{\hat{\mu}} x - x^2/4 \leq 4 \norm{\hat{\mu}}^2$, we get
\begin{equation*}
\begin{split}
\frac{100r^2}{n^2} e^{-\norm{\hat{\mu}}^2}
\int_{\norm{\hat{\mu}}/2}^{\infty}
e^{2\norm{\hat{\mu}} x - x^2/4}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/4}
\, dx
&\leq
\frac{100r^2}{n^2} e^{3\norm{\hat{\mu}}^2}
\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/4}
\, dx\\
&\leq
\frac{100\sqrt{2} r^2}{n^2} e^{3\norm{\hat{\mu}}^2}
\int_{-\infty}^{\infty}
\frac{1}{\sqrt{4 \pi}} e^{-x^2/4}
\, dx\\
&=
\frac{100\sqrt{2} r^2}{n^2} e^{3\norm{\hat{\mu}}^2}.\\
\end{split}
\end{equation*}
\end{enumerate}
Adding the two integrals, we get
\begin{equation*}
\begin{split}
H \left(P_1, \left(1 - \frac{10r}{r+\frac{n}{2}}\right)P_1 + \frac{10r}{r+\frac{n}{2}}Q_1 \right)^2
&\leq \frac{400r^2}{n^2} + \frac{100\sqrt{2}r^2}{n^2}e^{3\norm{\hat{\mu}}^2}\\
&\leq \frac{576r^2}{n^2}e^{3\norm{\hat{\mu}}^2}.
\end{split}
\end{equation*}
This gives us $H\left(P, \left(1 - \frac{10r}{r+\frac{n}{2}}\right)P + \frac{10r}{r+\frac{n}{2}}Q\right) \leq \frac{24r}{n} e^{3\norm{\hat{\mu}}^2/2}$, which completes the proof.
\end{proof}
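As a sanity check (not part of the proof), Lemma \ref{lem:hellinger-gaussian} can be verified numerically in one dimension, using the convention $H^2 = \int (\sqrt{p} - \sqrt{q})^2\, dx$ adopted above. The values of $n$, $r$, and $\norm{\hat{\mu}}$ below are illustrative choices of our own satisfying $r \leq \frac{n}{18}$, and the Riemann-sum integrator is a deliberately simple stand-in for proper quadrature.

```python
import math


def hellinger_sq(f, g, lo=-12.0, hi=12.0, step=1e-3):
    """Riemann-sum approximation of int (sqrt f - sqrt g)^2 dx,
    the squared Hellinger distance in the convention used above."""
    total, x = 0.0, lo
    while x < hi:
        total += (math.sqrt(f(x)) - math.sqrt(g(x))) ** 2 * step
        x += step
    return total


def normal_pdf(x, mu):
    return math.exp(-((x - mu) ** 2) / 2) / math.sqrt(2 * math.pi)


n, r, mu_hat = 1000, 10, 1.0        # illustrative values with r <= n/18
w = 10 * r / (r + n / 2)            # mixture weight on N(mu_hat, 1)
p1 = lambda x: normal_pdf(x, 0.0)
q1 = lambda x: (1 - w) * normal_pdf(x, 0.0) + w * normal_pdf(x, mu_hat)

h2 = hellinger_sq(p1, q1)
bound = (576 * r**2 / n**2) * math.exp(3 * mu_hat**2)
assert h2 <= bound
```

For these parameters the mixture weight is about $0.196$, the numerically computed $H^2$ is far below the bound $\frac{576 r^2}{n^2} e^{3\norm{\hat{\mu}}^2} \approx 1.16$, illustrating the slack in the constant $576$.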
\end{proof}
\iffalse
\begin{proof}
For clarity of writing, we consider the number of input samples to be $2n$ and number of output samples $m = 2n+r$ ,where $r = O(n^{\frac{1}{2} - 3\epsilon^2})$. We begin by describing the procedure to generate $m$ samples $Z_m = \left(x_1', x_2', \dots, x_m'\right)$, given $2n$ i.i.d. samples $X_{2n} = \left(x_1, x_2, \dots, x_{2n}\right)$ drawn from $N\left(\mu, I\right)$. Let $\tilde{\mu} = \sum_{i=1}^n \frac{x_i}{n}$ denote the mean of first $n$ samples in $X_{2n}$. For distributions $P$ and $Q$, let $(1-\alpha) P + \alpha Q$ denote the mixture distribution where $(1 - \alpha)$ and $\alpha$ are the respective mixture weights.
We first describe how to generate $Z_m' = (x_1'',x_2'', \dots, x_m'')$, given $2n$ i.i.d samples $X_{2n}$.
For $i \in \{1, 2, \dots, n\}$, we set $x_i''= x_i$. For $i \in \{n+1, n+2, \dots, m\}$, we set $x_i''$ to a random independent draw from the mixture distribution $\left(1 - \frac{2r}{r+n}\right)N(\mu, I_{d \times d}) + \frac{2r}{r+n}N(\tilde{\mu}, I_{d \times d})$.
Now, the construction of $Z_m$ is very similar to $Z_m'$ except that we don't have access to $N(\mu, I_{d \times d})$ to sample points from the mixture distribution. So, for $Z_m$, set $x_i'= x_i$ for $i \in \{1, 2, \dots, n\}$. For $i \in \{n+1, n+2, \dots, m\}$, we use samples from $(x_{n+1},x_{n+2}, \dots, x_{2n} )$ instead of sampling from $N(\mu, I_{d \times d})$. With probability $\left(1 - \frac{2r}{r+n}\right)$, we generate a random sample without replacement from $(x_{n+1}, x_{n+2}, \dots, x_{2n})$, and with probability $\frac{2r}{r+n}$ we generate a sample from $N(\tilde{\mu},I)$, and set $x_i'$ equal to that sample. As we sample from $(x_{n+1}, x_{n+2}, \dots, x_{2n})$ without replacement, we can generate only $n$ samples this way. The expected number of samples needed is $(n+r)(1 - \frac{2r}{r+n}) = n - r$, and with high probability, we won't need more than $n$ samples. If the total number of required samples from $(x_{n+1}, x_{n+2}, \dots, x_{2n})$ turns out to be more than $n$, we set $x_i$ to an arbitrary $d-$dimensional vector (say $x_1$) but this happens with low probability, leading to insignificant loss in total variation distance.
Let $X_m$ denote the random variable corresponding to $m$ i.i.d. samples from $N(\mu, I)$. We want to show that $D_{TV}(X_m, Z_m)$ is small. By triangle inequality, $D_{TV}(X_m, Z_m) \leq D_{TV}(X_m, Z_m') + D_{TV}(Z_m', Z_m)$.
We first compute $D_{TV}(Z_m, Z_m')$. Let $Y, Y' \leftarrow \text{Binomial}(r+n, 1 - \frac{2r}{r+n})$ be random variables that denotes the number of samples from $(1 - \frac{2r}{r+n})$ mixture component in $Z_m$ and $Z_m'$ respectively. Let $\Omega$ denote the sample space of $Z_m$ and $Z_m'$.
\begin{equation*}
\begin{split}
D_{TV}(Z_m, Z_m') &= \max_{E \subseteq \Omega} \ \lvert \Pr(Z_m \in E) - \Pr(Z_m' \in E)\rvert \\
&= \max_{E \subseteq \Omega} \ \lvert \Pr(Z_m \in E \mid Y \leq n)\Pr(Y \leq n) + \Pr(Z_m \in E \mid Y > n)\Pr(Y > n)\\ & \hspace{34pt} -\Pr(Z_m' \in E \mid Y' \leq n)\Pr(Y' \leq n) - \Pr(Z_m' \in E \mid Y' > n)\Pr(Y' > n) \rvert \\
\end{split}
\end{equation*}
Since $Y$ and $Y'$ have the same distribution, we have $\Pr[Y' \leq n] = \Pr[Y \leq n]$, and $\Pr[Y' > n] = \Pr[Y > n]$. This gives us
\begin{equation*}
\begin{split}
D_{TV}(Z_m, Z_m')
&= \max_{E \subseteq \Omega} \ \lvert \Pr(Z_m \in E \mid Y \leq n)\Pr(Y \leq n) + \Pr(Z_m \in E \mid Y > n)\Pr(Y > n)\\ & \hspace{34pt} -\Pr(Z_m' \in E \mid Y' \leq n)\Pr(Y \leq n) - \Pr(Z_m' \in E \mid Y' > n)\Pr(Y > n) \rvert \\
&\leq \max_{E \subseteq \Omega} \Pr(Y \leq n) \mid \Pr(Z_m \in E | Y \leq n) - \Pr(Z_m' \in E | Y' \leq n) \mid \\
& \hspace{34pt} + \Pr(Y > n) \mid \Pr(Z_m \in E | Y > n) - \Pr(Z_m' \in E | Y' > n) \mid. \\
\end{split}
\end{equation*}
where the last inequality holds because of the triangle inequality.
Now, note that $\Pr(Z_m \in E | Y \leq n) = \Pr(Z_m' \in E | Y' \leq n)$ for all $E$, and $\lvert \Pr(Z_m \in E | Y > n) - \Pr(Z_m' \in E | Y' > n) \rvert \leq 1$. This gives us
$$D_{TV}(Z_m, Z_m') \leq \Pr(Y > n)$$.
We know $\mathbb{E}[Y] = n-r$, and $\text{Var}[Y] = (n+r)(1 - \frac{2r}{n+r})(\frac{2r}{n+r}) \leq 2r$. Using Bernstein's inequality, we get
\begin{equation*}
\begin{split}
\Pr[Y > n] &= \Pr(Y - \mathbb{E}[Y] > r)\\
&\leq \exp \left(\frac{-r^2}{2(2r + r/3)}\right)\\
&\leq \exp\left(\frac{-r}{5}\right)
\end{split}
\end{equation*}
So we get $D_{TV}(Z_m, Z_m') \leq \exp\left(\frac{-r}{5}\right)$.
Next, we calculate $D_{TV}(X_m, Z_m')$. We write $X_m = (X_m^1, X_m^2)$ and $Z_m' = (Z_m^{1'}, Z_m^{2'})$ where $X_m^1$ and $Z_m^{1'}$ denote the first $n$ samples of $X_m$ and $Z_m'$ , and $X_m^2$ and $Z_m^{2'}$ denote rest of their samples. Since $X_m^1$ and $Z_m^{1'}$ are distributed according to the same distribution, $\Pi_{i=1}^n N(\mu, I)$, and $Z_m^{1'}, X_m^1, X_m^2$ are independent, we get $(Z_m^{1'}, X_m^2)$ and $(X_m^{1}, X_m^2)$ are equal in distribution. This gives us
\begin{align*}
D_{TV}(X_m, Z_m') = D_{TV}((X_m^1, X_m^2), (Z_m^{1'}, Z_m^{2'})) = D_{TV}((Z_m^{1'}, X_m^2), (Z_m^{1'}, Z_m^{2'})).
\end{align*}
From Lemma \ref{lem:coupling}, we know that, if with probability at least $1 - \epsilon_1$ over $Z_m^{1'}$, $D_{TV}(X_m^2|Z_m^{1'}, Z_m^{2'}|Z_m^{1'}) \leq \epsilon_2$, then $D_{TV}((Z_m^{1'}, X_m^2), (Z_m^{1'}, Z_m^{2'})) \leq \epsilon_1 + \epsilon_2$. Here, $Z_m^{1'}$ and $X_m^2$ are independent, and the dependency between $Z_m^{1'}$ and $Z_m^{2'}$ is via the mean $\tilde{\mu}$ of the elements of $Z_m^{1'}$. So $D_{TV}(X_m^2|Z_m^{1'}, Z_m^{2'}|Z_m^{1'}) = D_{TV}(X_m^2, Z_m^{2'}|\tilde{\mu})$. We will show that with high probability over $\tilde{\mu}$, this total variation distance is small. \\
We first estimate $\norm{\tilde{\mu} - \mu}$. Note that $\mathbb{E}_{Z_m^{1'}}[\norm{\tilde{\mu} - \mu}^2] = \frac{d}{n}$, and $n\norm{\tilde{\mu} - \mu}^2$ is a $\chi^2$ random variable with $d$ degrees of freedom. To bound the deviation of $\norm{\tilde{\mu} - \mu}^2$ around it's mean, we will use the following concentration bound for a $\chi^2$ random variable $R$ with $d$ degrees of freedom \citep[Example 2.5]{wainwright2015basic}.
\begin{align*}
\Pr[|R - d| \ge dt] \le 2e^{-dt^2/8}, \text{ for all } t \in (0,1).
\end{align*}
This gives us $\Pr(\lvert n\norm{\tilde{\mu} - \mu}^2 - d \rvert \geq {0.5 d}) \leq 2e^{-d/32}$, that is, $\norm{\tilde{\mu} - \mu} \leq \sqrt{\frac{1.5 d}{n}} \leq \epsilon\sqrt{\log d}$ with probability at least $1 - 2e^{-d/32}$.
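The $\chi^2$ tail bound above can be sanity-checked numerically. The sketch below (not part of the proof; numpy, the seed, and the values $d=100$, $t=1/2$ are our choices) estimates the tail probability by Monte Carlo and compares it with the stated bound, which for $t = 1/2$ is exactly the $2e^{-d/32}$ used in the text.

```python
# Monte Carlo check of Pr[|R - d| >= d*t] <= 2*exp(-d*t^2/8) for a
# chi-square variable R with d degrees of freedom, t in (0, 1).
import numpy as np

rng = np.random.default_rng(0)
d, t = 100, 0.5
samples = rng.chisquare(df=d, size=200_000)
empirical = float(np.mean(np.abs(samples - d) >= d * t))
bound = 2 * np.exp(-d * t**2 / 8)  # = 2*exp(-d/32) when t = 1/2
```

The bound is loose here: the empirical tail probability is orders of magnitude below it, which is consistent with the bound being a conservative sub-exponential estimate.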
$X_m^2$ is distributed as the product of $n+r$ Gaussians $\Pi_{i = 1}^{n+r} N(\mu, I_{d \times d})$, and $Z_m^{2'}|\tilde{\mu}$ is distributed as the product of $n+r$ mixture distributions $\Pi_{i=1}^{n+r} (1 - \frac{2r}{n+r})N(\mu, I_{d \times d}) + \frac{2r}{n+r}N(\tilde{\mu}, I_{d \times d})$. We bound the total variation distance between these two distributions via their squared Hellinger distance, since the squared Hellinger distance is easy to bound for product distributions and is within a quadratic factor of the total variation distance for any pair of distributions. By the subadditivity of the squared Hellinger distance, we get
\begin{equation}\label{eq:thm-hellinger-subadd}
\begin{split}
&H\left(\Pi_{i = 1}^{n+r} N(\mu, I_{d \times d}),\Pi_{i=1}^{n+r} (1 - \frac{2r}{n+r})N(\mu, I_{d \times d}) + \frac{2r}{n+r}N(\tilde{\mu}, I_{d \times d}) \right)^2\\
&\leq (n+r) H\left( N(\mu, I_{d \times d}), (1 - \frac{2r}{n+r})N(\mu, I_{d \times d}) + \frac{2r}{n+r}N(\tilde{\mu}, I_{d \times d}) \right)^2.
\end{split}
\end{equation}
From Lemma \ref{lem:hellinger-gaussian}, we get
\begin{equation}\label{eq:thm-hellinger-ubound}
\begin{split}
H\left( N(\mu, I_{d \times d}), (1 - \frac{2r}{n+r})N(\mu, I_{d \times d}) + \frac{2r}{n+r}N(\tilde{\mu}, I_{d \times d}) \right)^2
&\leq \frac{9r^2}{n^2}e^{3\norm{\tilde{\mu} - \mu}^2}\\
&\leq \frac{9r^2 d^{3\epsilon^2}}{n^2},
\end{split}
\end{equation}
with probability at least $1 - 2e^{-d/32}$ over $\tilde{\mu}$. From \eqref{eq:thm-hellinger-subadd} and \eqref{eq:thm-hellinger-ubound}, we get that with probability at least $1 - 2e^{-d/32}$ over $\tilde{\mu}$,
\begin{equation*}
\begin{split}
&H\left(\Pi_{i = 1}^{n+r} N(\mu, I_{d \times d}),\Pi_{i=1}^{n+r} (1 - \frac{2r}{n+r})N(\mu, I_{d \times d}) + \frac{2r}{n+r}N(\tilde{\mu}, I_{d \times d}) \right)^2\leq (n+r)\frac{9r^2 d^{3\epsilon^2}}{n^2}\leq \frac{18r^2 d^{3\epsilon^2}}{n},
\end{split}
\end{equation*}
where the last inequality holds because $r < n$. As the total variation distance between two distributions is upper bounded by $\sqrt{2}$ times their Hellinger distance, we get that with probability at least $1 - 2e^{-d/32}$ over $\tilde{\mu}$,
\begin{equation*}
\begin{split}
&D_{TV}\left(\Pi_{i = 1}^{n+r} N(\mu, I_{d \times d}),\Pi_{i=1}^{n+r} (1 - \frac{2r}{n+r})N(\mu, I_{d \times d}) + \frac{2r}{n+r}N(\tilde{\mu}, I_{d \times d}) \right)\\
&\leq \frac{6 r d^{3\epsilon^2/2}}{\sqrt{n}}\leq \frac{6 r n^{3\epsilon^2}}{\sqrt{n}},
\end{split}
\end{equation*}
where the last inequality is true because $n > \sqrt{d}$.
Now, from Lemma \ref{lem:coupling}, we know that if with probability at least $1 - \epsilon_1$ over $Z_m^{1'}$, $D_{TV}(X_m^2|Z_m^{1'}, Z_m^{2'}|Z_m^{1'}) \leq \epsilon_2$, then $D_{TV}((Z_m^{1'}, X_m^2), (Z_m^{1'}, Z_m^{2'})) \leq \epsilon_1 + \epsilon_2$. In this
case, $\epsilon_1 = 2e^{-d/32}$ and $\epsilon_2 = \frac{6 r n^{3\epsilon^2}}{\sqrt{n}}$, so we get $D_{TV}((Z_m^{1'}, X_m^2), (Z_m^{1'}, Z_m^{2'})) = D_{TV}(X_m, Z_m') \leq 2e^{-d/32} + \frac{6 r n^{3\epsilon^2}}{\sqrt{n}}$. We also know that $D_{TV}(Z_m, Z_m') \leq e^{-r/5}$. Using triangle inequality, we get
\begin{align*}
D_{TV}(X_m, Z_m) \leq 2e^{-d/32} + \frac{6 r n^{3\epsilon^2}}{\sqrt{n}} + e^{-r/5}.
\end{align*}
For $\delta > 2(2e^{-d/32} + e^{-r/5})$, and for $ r \leq \frac{n ^{\frac{1}{2} - 3\epsilon^2} \delta }{12}$, we get $D_{TV}(X_m, Z_m) \leq \delta$. For $d$ large enough, setting $\delta= \frac{1}{3}$ and $r = \frac{n ^{\frac{1}{2} - 3\epsilon^2} }{36} $, we get the desired result.
\begin{lemma}\label{lem:hellinger-gaussian}
Let $P = N(0, I_{d \times d})$ and $Q = N(\hat{\mu}, I_{d \times d})$ be $d$-dimensional gaussian distributions. For $n = \frac{2d}{\epsilon^2 \log d}, r = O\left({n^{\frac{1}{2} - \epsilon^2}}\right), \norm{\hat{\mu}} \leq \epsilon \sqrt{\log d}$, where $\epsilon < 0.1$, and for $d$ large enough, the Hellinger distance $H\left(P, \left(1 - \frac{2r}{r+n}\right)P + \frac{2r}{r+n}Q\right) \leq \frac{3r}{n}e^{\frac{3\norm{\hat{\mu}}^2}{2}}$.
\end{lemma}
\begin{proof}
We work in the rotated basis where $Q = N((\norm{\hat{\mu}},\underbrace{0, 0, \dots, 0}_{d-1 \text{ times}}), I_{d \times d})$ and $P = N(0, I_{d \times d})$. Let $P_1 = N(0, 1)$ and $Q_1 = N(\norm{\hat{\mu}}, 1)$ denote the projection of $P$ and $Q$ along the first coordinate axis respectively. Note that the mixture distribution in question is the product of $\left(\left(1 - \frac{2r}{r+n}\right)P_1 + \frac{2r}{r+n}Q_1\right)$ and $N(0, I_{d-1 \times d-1})$, and $P$ is the product of $P_1$ and $N(0, I_{d-1 \times d-1})$. Since the squared Hellinger distance is subadditive for product distributions, we get,
\begin{equation*}
\begin{split}
H\left(P, \left(1 - \frac{2r}{r+n}\right)P + \frac{2r}{r+n}Q\right)^2 &\leq H\left(P_1,\left(1 - \frac{2r}{r+n}\right)P_1 + \frac{2r}{r+n}Q_1\right)^2 + H(N(0,I_{d-1 \times d-1}), N(0,I_{d-1 \times d-1}))^2 \\
&= H\left(P_1,\left(1 - \frac{2r}{r+n}\right)P_1 + \frac{2r}{r+n}Q_1\right)^2.
\end{split}
\end{equation*}
Therefore, to bound the required Hellinger distance, we just need to find $H \left(P_1, \left(1 - \frac{2r}{r+n}\right)P_1 + \frac{2r}{r+n}Q_1 \right)$. Let $p_1$ and $q_1$ denote the probability densities of $P_1$ and $\left(\left(1 - \frac{2r}{r+n}\right)P_1 + \frac{2r}{r+n}Q_1\right)$ respectively. We get $H\left(P_1,\left(1 - \frac{2r}{r+n}\right)P_1 + \frac{2r}{r+n}Q_1\right)^2 = \int_{- \infty}^{\infty} \left(\sqrt{p_1} - \sqrt{q_1}\right )^2 dx$
\begin{equation*}
\begin{split}
&= \int_{- \infty}^{\infty} \left(\sqrt{\frac{1}{\sqrt{2 \pi}}e^{-x^2/2}} - \sqrt{\left(1 -\frac{2r}{r+n}\right)\frac{1}{\sqrt{2 \pi}} e^{-x^2/2} + \frac{2r}{r+n} \frac{1}{\sqrt{2 \pi}} e^{-(x - \norm{\hat{\mu}})^2/2}}\right)^2 dx\\
&= \int_{- \infty}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{2r}{r+n} + \frac{2r}{r+n}e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} \right)^2 dx.\\
\end{split}
\end{equation*}
We will evaluate this integral as a sum of integrals over two regions.
\begin{enumerate}
\item From $-\infty$ to $\norm{\hat{\mu}}/2$:
\begin{equation*}
\begin{split}
\int_{- \infty}^{\norm{\hat{\mu}}/2}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{2r}{r+n} + \frac{2r}{r+n}e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} \right)^2 dx
&\leq
\int_{- \infty}^{\norm{\hat{\mu}}/2}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{2r}{r+n} } \right)^2 dx.
\end{split}
\end{equation*}
Using $1 - y \leq \sqrt{1 - y}$ for $0 \leq y \leq 1$, we get
\begin{equation*}
\begin{split}
\int_{- \infty}^{\norm{\hat{\mu}}/2}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{2r}{r+n} } \right)^2 dx
&\leq
\int_{- \infty}^{\norm{\hat{\mu}}/2}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(\frac{2r}{r+n} \right)^2 dx\\
&\leq \frac{4r^2}{n^2}.
\end{split}
\end{equation*}
\item From $\frac{\norm{\hat{\mu}}}{2}$ to $\infty$, the integral to bound is $\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left(1 - \sqrt{1 - \frac{2r}{r+n} + \frac{2r}{r+n}e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} \right)^2 dx$.
\begin{equation*}
\begin{split}
&\leq
\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left( \sqrt{1 + \frac{2r}{r+n} e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} - 1 \right)^2 dx.\\
\end{split}
\end{equation*}
This is because $x \geq \norm{\hat{\mu}}/2$, and therefore $\frac{2r}{r+n}e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}} \geq \frac{2r}{r+n}$. Now, using $\sqrt{1+y} \leq 1 + \frac{y}{2}$, we get
\begin{equation*}
\begin{split}
&\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left( \sqrt{1 + \frac{2r}{r+n} e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} - 1 \right)^2 dx\\
&\leq
\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/2}
\left( {1 + \frac{r}{r+n} e^{\frac{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}{2}}} - 1 \right)^2 dx\\
&\leq
\frac{r^2}{n^2} \int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} {e^{- \norm{\hat{\mu}}^2 + 2 \norm{\hat{\mu}} x}} e^{-x^2/2}
dx\\
&=
\frac{r^2}{n^2}e^{-\norm{\hat{\mu}}^2} \int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} {e^{2\norm{\hat{\mu}} x - x^2/4}} e^{-x^2/4}
dx.\\
\end{split}
\end{equation*}
Since $2\norm{\hat{\mu}} x - x^2/4 \leq 4 \norm{\hat{\mu}}^2$, we get
\begin{equation*}
\begin{split}
\frac{r^2}{n^2} e^{-\norm{\hat{\mu}}^2}
\int_{\norm{\hat{\mu}}/2}^{\infty}
e^{2\norm{\hat{\mu}} x - x^2/4}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/4} \, dx
&\leq
\frac{r^2}{n^2} e^{3\norm{\hat{\mu}}^2}
\int_{\norm{\hat{\mu}}/2}^{\infty}
\frac{1}{\sqrt{2 \pi}} e^{-x^2/4} \, dx\\
&\leq
\frac{\sqrt{2} r^2}{n^2} e^{3\norm{\hat{\mu}}^2}
\int_{-\infty}^{\infty}
\frac{1}{\sqrt{4 \pi}} e^{-x^2/4} \, dx\\
&=
\frac{\sqrt{2} r^2}{n^2} e^{3\norm{\hat{\mu}}^2}.
\end{split}
\end{equation*}
\end{enumerate}
Adding the two integrals, we get
\begin{equation*}
\begin{split}
H \left(P_1, \left(1 - \frac{2r}{r+n}\right)P_1 + \frac{2r}{r+n}Q_1 \right)^2
&\leq \frac{4r^2}{n^2} + \frac{\sqrt{2}r^2}{n^2}e^{3\norm{\hat{\mu}}^2}\\
&\leq \frac{8r^2}{n^2}e^{3\norm{\hat{\mu}}^2}.
\end{split}
\end{equation*}
This gives us $H(P, \left(1 - \frac{2r}{r+n}\right)P + \frac{2r}{r+n}Q) \leq \frac{3r}{n} e^{3\norm{\hat{\mu}}^2/2}$ which completes the proof.
\end{proof}
\end{proof}
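The one-dimensional Hellinger bound of Lemma \ref{lem:hellinger-gaussian} can be checked numerically. The sketch below (not part of the proof; numpy and the particular example values $n$, $r$, $\mu$ are our choices, picked to satisfy $r \ll \sqrt{n}$) computes the unnormalized squared Hellinger distance $\int (\sqrt{p_1} - \sqrt{q_1})^2\, dx$ by a Riemann sum and compares it with the square of the lemma's bound $\frac{3r}{n}e^{3\mu^2/2}$.

```python
# Numeric check of H(P1, (1-w)P1 + w*Q1)^2 <= (3r/n)^2 * exp(3*mu^2),
# where w = 2r/(r+n), P1 = N(0,1), Q1 = N(mu,1).
import numpy as np

n, r, mu = 10_000, 50, 0.8          # example values with r << sqrt(n)*const
w = 2 * r / (r + n)

x = np.linspace(-12.0, 12.0, 400_001)
dx = x[1] - x[0]
p1 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
q1 = (1 - w) * p1 + w * np.exp(-(x - mu)**2 / 2) / np.sqrt(2 * np.pi)
hsq = float(np.sum((np.sqrt(p1) - np.sqrt(q1))**2) * dx)  # squared Hellinger

bound = (3 * r / n)**2 * np.exp(3 * mu**2)
```

In this regime the computed $H^2$ is roughly two orders of magnitude below the bound, reflecting the slack accumulated in the two-region integral estimates.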
\fi
\subsection{Upper Bound}
In this section, we prove the upper bound in Theorem \ref{thm:gaussian_full} by showing that Algorithm \ref{alg:gaussian1} can be used as an $\left(n, n + O\left(\frac{n}{\sqrt d}\right)\right)$ amplification procedure.
First, note that it is sufficient to prove the theorem when the input samples come from an identity covariance Gaussian: since our amplification procedure is invariant under linear transformations of the samples, we can, for the purpose of analysis, transform our samples into ones coming from an identity covariance Gaussian. In particular, let $f_\Sigma$ denote our amplification procedure for samples coming from $N(\mu, \Sigma)$, and let $Y_n = (y_1, y_2, \dots, y_n)$ denote the random variable corresponding to $n$ samples from $N(\mu, \Sigma)$. Let $X_n = (x_1, x_2, \dots, x_n)$ denote $n$ samples from $N(\mu, I)$, such that $Y_n = \Sigma^\frac{1}{2}(X_n - \mu) + \mu = (\Sigma^\frac{1}{2}(x_1 - \mu) + \mu, \Sigma^\frac{1}{2}(x_2 - \mu) + \mu, \dots, \Sigma^\frac{1}{2}(x_n - \mu) + \mu)$. By the invariance of our amplification procedure under linear transformations, $\Sigma^{\frac{1}{2}}(f_I(X_n) - \mu) + \mu$ is equal in distribution to $ f_{\Sigma}(\Sigma^\frac{1}{2}(X_n - \mu) + \mu) = f_{\Sigma}(Y_n)$. This gives us
\begin{equation*}
\begin{split}
D_{TV}(f_\Sigma(Y_n), Y_m) &=
D_{TV}(f_\Sigma(\Sigma^\frac{1}{2}(X_n - \mu) + \mu), \Sigma^\frac{1}{2}(X_m - \mu) + \mu)\\
&= D_{TV}(\Sigma^{\frac{1}{2}}(f_I(X_n) - \mu) + \mu, \Sigma^\frac{1}{2}(X_m - \mu) + \mu)\\
&\leq D_{TV}(f_I(X_n), X_m),
\end{split}
\end{equation*}
where the last inequality is true because the total variation distance between two distributions cannot increase when the same transformation is applied to both. Hence, it is sufficient to prove our results for the identity covariance case. This argument applies to both of the amplification procedures for Gaussians that we have discussed, so throughout this section we work with identity covariance Gaussian distributions.
\begin{proposition}\label{prop:gaussian-ub-main}
Let $\mathcal{C}$ denote the class of $d$-dimensional Gaussian distributions $N\left(\mu, I\right)$ with unknown mean $\mu$. For all $d,n>0$ and $m = n+O\left(\frac{n}{\sqrt{d}}\right)$, $\mathcal{C}$ admits an $\left(n, m\right)$ amplification procedure.
\end{proposition}
\begin{proof}
The amplification procedure consists of two parts. The first uses the provided samples to learn the empirical mean $\hat \mu$ and generate $m-n$ new samples from $\mathcal N(\hat{\mu}, I)$. The second part adjusts the first $n$ samples to ``hide'' the correlations that would otherwise arise from using the empirical mean to generate additional samples.
Let $\epsilon_{n+1}, \epsilon_{n+2}, \dots, \epsilon_{m}$ be $m-n$ i.i.d. samples generated from $N\left(0, I\right)$, and let $\hat{\mu} = \frac{\sum_{i=1}^n x_i}{n}$. The amplification procedure will return $x_1', \dots, x_m'$ with:
\begin{equation}
x_i'=
\begin{cases}
x_i - \frac{\sum_{j=n+1}^m \epsilon_j}{n}, & \text{for}\ i \in \{1,2, \dots, n\} \\
\hat{\mu} + \epsilon_i, & \text{for}\ i \in \{n+1, n+2, \dots, m\}.\\
\end{cases}
\end{equation}
We will show later in this proof that subtracting $\frac{\sum_{j=n+1}^m \epsilon_j}{n}$ will serve to decorrelate the first $n$ samples from the remaining samples.
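The procedure above is short enough to write out directly. The following numpy sketch (the function name and example parameter values are ours) implements the two cases of the display: fresh samples drawn around the empirical mean, and the first $n$ samples shifted to decorrelate them from the new ones.

```python
# A direct sketch of the amplification procedure described above.
import numpy as np

def amplify_gaussian(x, m, rng):
    """Map n samples x (shape (n, d)) from N(mu, I) to m > n 'samples'."""
    n, d = x.shape
    mu_hat = x.mean(axis=0)
    eps = rng.standard_normal((m - n, d))   # fresh noise eps_{n+1..m}
    new = mu_hat + eps                      # x_i' = mu_hat + eps_i,   i > n
    old = x - eps.sum(axis=0) / n           # x_i' = x_i - (sum eps_j)/n, i <= n
    return np.vstack([old, new])

rng = np.random.default_rng(1)
n, m, d = 200, 210, 50
x = rng.standard_normal((n, d)) + 3.0       # true mean mu = (3, ..., 3)
z = amplify_gaussian(x, m, rng)
```

Note that the output depends on the input only through linear combinations, which is what makes the Gaussian analysis below tractable.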
Let $f_{\mathcal{C},n,m} : S^n \rightarrow S^m$ be the random function denoting the map from $X_n$ to $Z_m$ as described above, where $S = \mathbb{R}^d$. We need to show
$$D_{TV}\left(Z_m = f_{\mathcal{C},n,m}\left(X_n\right), X_m\right) \le 1/3,$$
where $X_n$ and $X_m$ denote $n$ and $m$ independent samples from $N\left(\mu, I\right)$ respectively.
For ease of understanding, we first prove this result for the univariate case, and then extend it to the general setting.
So consider the setting where $d=1$. In this case, $X_m$ corresponds to $m$ i.i.d.\ samples from a Gaussian with mean $\mu$ and variance $1$. $X_m$ can also be thought of as a single sample from an $m$-dimensional Gaussian $N\Big(\underbrace{\left(\mu, \mu, \dots, \mu\right)}_{m \text{ times}}, I_{m \times m}\Big)$. Now, $f_{\mathcal{C},n,m}$ is a map that takes $n$ i.i.d.\ samples from $N\left(\mu,1\right)$ and $m-n$ i.i.d.\ samples ($\epsilon_i$)
from $N\left(0,1\right)$, and outputs $m$ samples, each of which is a linear combination of the $m$ input samples. So, $f_{\mathcal{C},n,m}\left(X_n\right)$ can be thought of as an
$m$-dimensional random variable obtained by applying a linear transformation to a sample drawn from
$N\Big(\Big(\underbrace{\mu, \mu, \dots, \mu}_{n \text{ times}}, \underbrace{0, 0,\dots, 0}_{m-n \text{ times}}\Big), I_{m \times m}\Big)$. Since a linear transformation of a Gaussian random variable is again Gaussian, we get that $Z_m = \left(x_1', x_2', \dots, x_m'\right)$ is distributed according to $N\left(\tilde{\mu},\Sigma_{m \times m}\right)$, where $\tilde{\mu}$ and $\Sigma_{m \times m}$ denote its mean and covariance. Note that $\tilde{\mu} = \underbrace{\left(\mu, \mu, \dots, \mu\right)}_{m \text{ times}}$ as
\begin{equation}
\mathbb{E}[x_i']=
\begin{cases}
\mathbb{E}[x_i] - \mathbb{E} \left [\frac{\sum_{j=n+1}^m \epsilon_j}{n} \right ] = \mu - 0 = \mu, & \text{for}\ i \in \{1,2, \dots, n\} \\
\mathbb{E}[\hat{\mu}] + \mathbb{E}[\epsilon_i] = \mu + 0 = \mu, & \text{for}\ i \in \{n+1, n+2, \dots, m\}.\\
\end{cases}
\end{equation}
Next, we compute the covariance matrix $\Sigma_{m \times m}$.
For $i=j$, and $i \in \{1,2, \dots, n\}$, we get
\begin{equation*}
\begin{split}
\Sigma_{ii} & = \mathbb{E}[\left(x_i' - \mu\right)^2] \\
& = \mathbb{E}\left [\left(x_i - \mu\right)^2 \right ] + \mathbb{E} \Bigg [\left(\frac{\sum_{j=n+1}^m \epsilon_j}{n}\right)^2\Bigg]\\
& = 1 + \frac{m-n}{n^2}.
\end{split}
\end{equation*}
For $i=j$, and $i \in \{n+1,n+2, \dots, m\}$, we get
\begin{equation*}
\begin{split}
\Sigma_{ii} & = \mathbb{E}\left [\left(x_i' - \mu\right)^2 \right ] \\
& = \mathbb{E} \left [\left(\hat{\mu} - \mu\right)^2 \right ] + \mathbb{E} \left [ \epsilon_i^2 \right]\\
& = \frac{1}{n} +1 .
\end{split}
\end{equation*}
For $i \in \{1,2, \dots, n\}, j \in \{n+1,n+2, \dots, m\}$, we get
\begin{equation*}
\begin{split}
\Sigma_{ij} & = \mathbb{E} \left [\left(x_i' - \mu\right)\left(x_j' - \mu\right) \right] \\
& = \mathbb{E}\left[\left(x_i - \frac{\sum_{k=n+1}^m \epsilon_k}{n} - \mu\right)\left(\hat{\mu} + \epsilon_j - \mu\right)\right]\\
& = \mathbb{E}[\left(x_i - \mu\right)\left(\hat{\mu} - \mu\right)] - \mathbb{E}\left[\left(\frac{\sum_{k=n+1}^m \epsilon_k}{n}\right)\left(\epsilon_j\right)\right]\\
& = \frac{1}{n} - \frac{1}{n}\\
& = 0.
\end{split}
\end{equation*}
For $i, j \in \{1,2, \dots, n\}, i \neq j$, we get
\begin{equation*}
\begin{split}
\Sigma_{ij} & = \mathbb{E}\Big[\left(x_i' - \mu\right)\left(x_j' - \mu\right)\Big] \\
& = \mathbb{E}\Bigg[\left(x_i - \frac{\sum_{k=n+1}^m \epsilon_k}{n} - \mu\right)\left(x_j - \frac{\sum_{k=n+1}^m \epsilon_k}{n} - \mu\right)\Bigg]\\
& = \mathbb{E} \left [ (x_i - \mu) (x_j - \mu ) \right ] + \mathbb{E}\left[\left(\frac{\sum_{k=n+1}^m \epsilon_k}{n}\right)^2\right]\\
& = \frac{m-n}{n^2}.
\end{split}
\end{equation*}
For $i, j \in \{n+1,n+2, \dots, m\}, i \neq j$, we get
\begin{equation*}
\begin{split}
\Sigma_{ij} & = \mathbb{E}[\left(x_i' - \mu\right)\left(x_j' - \mu\right)] \\
& = \mathbb{E}[\left(\hat{\mu} + \epsilon_i - \mu\right)\left(\hat{\mu} + \epsilon_j - \mu\right)]\\
& = \mathbb{E} \left [\left(\hat{\mu} - \mu\right)^2 \right ]\\
& = \frac{1}{n}.
\end{split}
\end{equation*}
This gives us
\[
\Sigma_{m \times m} = \begin{bmatrix}
1+\frac{m-n}{n^2} & \frac{m-n}{n^2} & \cdots & \frac{m-n}{n^2} & 0 & 0 & \cdots & 0 \\
\frac{m-n}{n^2} & 1+\frac{m-n}{n^2} & \cdots &
\frac{m-n}{n^2} & 0 & 0 & \cdots & 0 \\
\vdots & \cdots & \cdots & \vdots & \vdots & \cdots & \cdots & \vdots\\
\vdots & \cdots & \cdots & \frac{m-n}{n^2} & \vdots & \cdots & \cdots & \vdots\\
\frac{m-n}{n^2} & \cdots & \frac{m-n}{n^2} & 1+\frac{m-n}{n^2} & 0 & 0 & \cdots & 0 \\
0 & \cdots & \cdots & 0 & 1 + \frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n}\\
0 & \cdots & \cdots & 0 & \frac{1}{n} & 1+\frac{1}{n} & \cdots & \frac{1}{n}\\
\vdots & \cdots & \cdots & \vdots & \vdots & \cdots & \cdots & \vdots\\
\vdots & \cdots & \cdots & \vdots & \vdots & \cdots & \cdots & \frac{1}{n}\\
0 & \cdots & \cdots & 0 & \frac{1}{n} & \cdots &\frac{1}{n} & 1+\frac{1}{n}\\
\end{bmatrix}.
\]
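The entries of $\Sigma_{m \times m}$ can be verified exactly, without Monte Carlo: writing the output $z'$ as a linear map $A$ applied to the centered inputs $(x_1-\mu, \dots, x_n-\mu, \epsilon_{n+1}, \dots, \epsilon_m) \sim N(0, I_m)$, we have $\mathrm{Cov}(z') = AA^T$. The sketch below (numpy; the small values of $n$, $m$ are ours) builds $A$ from the definition of the procedure and checks each entry type of the displayed matrix, along with $\|\Sigma - I\|_F^2 = \frac{2(m-n)^2}{n^2}$, which is the quantity used in the total variation bound that follows.

```python
# Exact covariance check for the univariate amplification procedure.
import numpy as np

n, m = 10, 13
A = np.zeros((m, m))
A[:n, :n] = np.eye(n)       # x_i' keeps x_i ...
A[:n, n:] = -1.0 / n        # ... minus (1/n) * sum of the eps_j
A[n:, :n] = 1.0 / n         # mu_hat contributes (1/n) * each x_j
A[n:, n:] = np.eye(m - n)   # plus the fresh noise eps_i
Sigma = A @ A.T

fro_sq = float(np.sum((Sigma - np.eye(m))**2))  # ||Sigma - I||_F^2
```

The vanishing cross-block entries confirm that subtracting $\frac{1}{n}\sum_j \epsilon_j$ exactly cancels the correlation between the original and generated samples.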
Now, finding $D_{TV}\left(Z_m, X_m\right)$ reduces to computing $D_{TV}\left(N\left(\tilde{\mu}, I_{m \times m}\right), N\left(\tilde{\mu}, \Sigma_{m \times m}\right)\right)$. From \cite[Theorem 1.1]{devroye2018total}, we know that $D_{TV}\left(N\left(\tilde{\mu}, I_{m \times m}\right), N\left(\tilde{\mu}, \Sigma_{m \times m}\right)\right) \leq \min\left(1, \frac{3}{2}||\Sigma - I||_F\right)$. This gives us
\begin{equation}\label{unidim}
\begin{split}
D_{TV}\left(N\left(\tilde{\mu}, I_{m \times m}\right), N\left(\tilde{\mu}, \Sigma_{m \times m}\right)\right) &\leq \min\left(1, \frac{3}{2}||\Sigma - I||_F\right)\\
&\leq \frac{3}{2}\sqrt{\left(\frac{m - n}{n^2}\right)^2 n^2 + \frac{1}{n^2} \left(m-n\right)^2}\\
& = \frac{3\left(m-n\right)}{\sqrt{2}\,n}.
\end{split}
\end{equation}
Now, for $d > 1$, by a similar argument as above, $X_m$ can be thought of as $d$ independent samples from the following $d$ distributions: $N\Big(\underbrace{\left(\mu_1, \mu_1, \dots, \mu_1\right)}_{m \text{ times}}, I_{m \times m}\Big), \dots,N\Big(\underbrace{\left(\mu_d, \mu_d, \dots, \mu_d\right)}_{m \text{ times}}, I_{m \times m}\Big)$.
Or equivalently, as a single sample from $N\Big(\Big(\underbrace{\mu_1, \mu_1, \dots, \mu_1}_{m \text{ times}},\dots ,\underbrace{\mu_d, \mu_d, \dots, \mu_d}_{m \text{ times}}\Big), I_{md \times md}\Big)$. Similarly, $Z_m$ can be thought of as $d$ independent samples, the $i$-th drawn from $N\Big(\underbrace{\left(\mu_i, \mu_i, \dots, \mu_i\right)}_{m \text{ times}}, \Sigma_{m \times m}\Big)$ for $i = 1, \dots, d$, or equivalently, as a single sample from $N\Big(\Big(\underbrace{\mu_1, \mu_1, \dots, \mu_1}_{m \text{ times}},\dots ,\underbrace{\mu_d, \mu_d, \dots, \mu_d}_{m \text{ times}}\Big), \tilde{\Sigma}_{md \times md}\Big)$, where $\tilde{\Sigma}_{md \times md}$ is the block diagonal matrix whose diagonal blocks all equal $\Sigma_{m \times m}$ (denoted $\Sigma$ below).
\[
\tilde{\Sigma}_{md \times md} = \begin{bmatrix}
\Sigma_{} & 0 & \cdots & \cdots & 0 \\
0 & \Sigma_{} & 0 & \cdots & \vdots \\
\vdots & 0 & \ddots & 0 & \vdots \\
\vdots & \cdots & 0 & \ddots & 0\\
0 & \cdots & \cdots & 0 & \Sigma_{}
\end{bmatrix}.
\]
Similar to \eqref{unidim}, we get
\begin{equation*}
\begin{split}
&D_{TV}\Big(
N\Big(\Big(\underbrace{\mu_1, \mu_1, \dots, \mu_1}_{m \text{ times}},\dots ,\underbrace{\mu_d, \mu_d, \dots, \mu_d}_{m \text{ times}}\big), I_{md \times md}\Big)
, N\Big(\Big(\underbrace{\mu_1, \mu_1, \dots, \mu_1}_{m \text{ times}},\dots ,\underbrace{\mu_d, \mu_d, \dots, \mu_d}_{m \text{ times}}\big), \tilde{\Sigma}_{md \times md}\Big)
\Big)\\
& \hspace{130pt} \leq \min\left(1, \frac{3}{2}||\tilde{\Sigma} - I||_F\right)\\
& \hspace{130pt} \leq \frac{3}{2}\sqrt{d \left(\left(\frac{m - n}{n^2}\right)^2 n^2 + \frac{1}{n^2} \left(m-n\right)^2\right)}\\
& \hspace{130pt} = \frac{3\sqrt{2d}\left(m-n\right)}{2n}.
\end{split}
\end{equation*}
If we want the total variation distance to be at most $\delta$, it suffices to take $m-n = O\left(\frac{n \delta}{\sqrt{d}}\right)$. Setting $\delta = \frac{1}{3}$, we get $m = n+ O\left(\frac{n}{\sqrt{d}}\right)$, which completes the proof.
\end{proof}
% End of ``Sample Amplification: Increasing Dataset Size even when Learning is Impossible'' (arXiv:1904.12053).
% https://arxiv.org/abs/2103.02507 --- ``Factoring isometries of quadratic spaces into reflections''
\begin{abstract}
Let $V$ be a vector space endowed with a non-degenerate quadratic form $Q$. If the base field $\mathbb{F}$ is different from $\mathbb{F}_2$, it is known that every isometry can be written as a product of reflections. In this article, we detail the structure of the poset of all minimal length reflection factorizations of an isometry. If $\mathbb{F}$ is an ordered field, we also study factorizations into positive reflections, i.e., reflections defined by vectors of positive norm. We characterize such factorizations, under the hypothesis that the squares of $\mathbb{F}$ are dense in the positive elements (this includes Archimedean and Euclidean fields). In particular, we show that an isometry is a product of positive reflections if and only if its spinor norm is positive. As a final application, we explicitly describe the poset of all factorizations of isometries of the hyperbolic space.
\end{abstract}
\section{Wall's parametrization of the orthogonal group}
\label{sec:wall-parametrization}
In this section, we recall Wall's parametrization of the orthogonal
group of a quadratic space, which was first introduced in
\cite{wall1959structure}. To be as self-contained as
possible, we give proofs for the most important results.
We largely follow the treatment of \cite[Chapter 11]{taylor1992geometry}, but the
reader can also refer to \cite{wall1959structure, wall1963conjugacy,
hahn1979unipotent}.
Let $V$ be a finite-dimensional vector space over a field $\mathbb{F}$. For
now, no hypothesis on $\mathbb{F}$ is required. A \emph{quadratic form} on
$V$ is a map $Q \colon V \to \mathbb{F}$ such that:
\begin{enumerate}
\item $Q(av) = a^2 Q(v)$ for all $a \in \mathbb{F}$ and $v \in V$;
\item the map $\beta(u,v) = Q(u+v) - Q(u) - Q(v)$ is bilinear.
\end{enumerate}
The pair $(V, Q)$ is called a \emph{quadratic space}, and the symmetric bilinear form $\beta$ is called the \emph{polar form} of
$Q$.
From now on, assume that $(V, Q)$ is a \emph{non-degenerate} quadratic space, i.e., the polar form
$\beta$ is non-degenerate: $\beta(u, v) = 0$ for all $v \in V$ implies
$u = 0$.
If the characteristic of $\mathbb{F}$ is not $2$, the polar form $\beta$
determines $Q$ via the relation $Q(u) = \frac12 \beta(u, u)$. On the
other hand, if the characteristic of $\mathbb{F}$ is $2$, $\beta$ is
alternating (i.e., $\beta(u, u) = 0$ for all $u \in V$) and does not
determine $Q$.
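Both observations can be illustrated with a small integer example (ours, for illustration): if $Q(v) = v^T M v$ for an integer matrix $M$, then the polar form has Gram matrix $M + M^T$, and $\beta(u, u) = 2Q(u)$, which is why $\beta$ becomes alternating once we reduce modulo $2$.

```python
# Q(x, y) = x^2 + xy + y^2, with polar form beta(u, v) = u^T (M + M^T) v.
import numpy as np

M = np.array([[1, 1], [0, 1]])
Q = lambda v: int(v @ M @ v)
B = M + M.T                               # Gram matrix of the polar form

u = np.array([2, -1])
v = np.array([1, 3])
beta_uv = Q(u + v) - Q(u) - Q(v)          # polar form, from the definition
```

Since $\beta(u,u) = 2Q(u)$ is always even, reducing the same computation over $\mathbb{F}_2$ gives $\beta(u,u) = 0$ for every $u$.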
A non-zero vector $u \in V$ is \emph{isotropic} if
$\beta(u, u) = 0$ and it is \emph{singular} if $Q(u) = 0$.
These two notions coincide when the characteristic of $\mathbb{F}$ is not $2$.
Given a linear subspace $W \subseteq V$, its orthogonal subspace is
defined as $W^\perp = \{ v \in V \mid \beta(v, w) = 0 \text{ for all }
w \in W \}$.
A subspace $W \subseteq V$ is \emph{totally singular} if $Q(u) = 0$ for
all $u \in W$, and it is \emph{non-degenerate} if $W \cap W^\perp =
\{0\}$ (i.e., if $\beta|_W$ is non-degenerate). Since $\beta$ is
non-degenerate, we have that $\dim(W) + \dim(W^\perp) = \dim(V)$ and
$(W^\perp)^\perp = W$ for every subspace $W \subseteq V$. However,
note that $W \cap W^\perp$ might be non-trivial, so $V$ is not
necessarily the direct sum of $W$ and $W^\perp$. If $V = W_1 \oplus
W_2$ and $W_1 = W_2^\perp$, we also write $V = W_1 \perp W_2$.
\begin{definition}[Orthogonal group]
The \emph{orthogonal group} of $(V, Q)$ is \[ O(V, Q) = \{ f \in
\GL(V) \mid Q(f(u)) = Q(u) \, \text{ for all $u \in V$} \}. \] The
elements of the orthogonal group are called \emph{isometries}. We
also write $O(V)$ in place of $O(V, Q)$, since the ambient quadratic
form $Q$ is always fixed.
\end{definition}
By definition, an isometry $f \in O(V)$ also preserves the polar form
$\beta$:
\begin{align*}
\beta(f(u), f(v)) &= Q(f(u) + f(v)) - Q(f(u)) - Q(f(v)) \\
&= Q(f(u+v)) - Q(f(u)) - Q(f(v)) \\
&= Q(u+v) - Q(u) - Q(v) \\
&= \beta(u, v).
\end{align*}
Notice that if $f \colon V \to V$ is a linear map that preserves $\beta$, then $f \in \GL(V)$ because $\beta$ is non-degenerate.
Our aim is to characterize the factorizations of isometries as
products of reflections. A \emph{reflection} is a non-trivial
isometry that fixes every vector in a hyperplane of $V$. Every
reflection can be written as
\begin{equation}
r_v(u) = u - \frac{\beta(u, v)}{Q(v)} v
\label{eq:reflection}
\end{equation}
for some non-singular vector $v \in V$, and $r_v$ is called the
reflection with respect to $v$. Note that $r_v = r_{w}$ for every
non-zero scalar multiple $w$ of $v$. In addition, $r_v$ fixes the
hyperplane $\<v\>^\perp$, sends $v$ to $-v$, has order $2$ and
determinant $-1$. The set of reflections is closed under conjugation:
$f r_v f^{-1} = r_{f(v)}$ for every $f \in O(V)$.
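The stated properties of $r_v$ are easy to verify numerically for the standard form on $\mathbb{R}^3$ (a sketch with numpy; here $Q(u) = u \cdot u$, so $\beta(u,v) = 2\, u \cdot v$ and the reflection formula becomes $r_v(u) = u - \frac{2(u \cdot v)}{v \cdot v} v$).

```python
# Reflection with respect to a non-singular vector v, standard form on R^3.
import numpy as np

def Q(u):
    return float(u @ u)

def beta(u, v):
    return Q(u + v) - Q(u) - Q(v)          # equals 2 * (u . v) here

def reflect(v):
    """Matrix of r_v: u -> u - (beta(u, v) / Q(v)) v."""
    return np.eye(len(v)) - np.outer(v, v) * (2.0 / Q(v))

v = np.array([1.0, 2.0, 2.0])
R = reflect(v)
u = np.array([3.0, -1.0, 0.5])
```

The assertions below check exactly the properties listed in the text: $r_v(v) = -v$, order $2$, determinant $-1$, and preservation of $Q$.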
The following are two important subspaces associated with an isometry.
\begin{definition}
Given an isometry $f \in O(V)$, its \emph{fixed space} is $\Fix(f) =
\ker(\id - f)$ and its \emph{moved space} is $\Mov(f) = \im(\id -
f)$.
\end{definition}
The fixed space is simply the subspace of vectors that are fixed by
$f$. The moved space is the subspace of ``movement'' vectors $f(u)-u$,
for $u \in V$. It is also called the \emph{residual space} of $f$.
The notation ``$\Fix(f)$'' and ``$\Mov(f)$'' is the one used in
\cite{brady2015factoring}, but several different notations for the
moved space have appeared in the literature, including $V_f$, $[V, f]$, and $M(f)$
\cite{wall1959structure, wall1963conjugacy,taylor1992geometry,brady2002partial}.
\begin{lemma}\label{lemma:fix-move-orthogonal}
For every isometry $f \in O(V)$, we have that $\Fix(f) =
\Mov(f)^\perp$.
\end{lemma}
\begin{proof}
By definition, the subspaces $\Fix(f)$ and $\Mov(f)$ have
complementary dimensions, so it is enough to show that $\beta(u, v)
= 0$ for every $u \in \Fix(f)$ and $v \in \Mov(f)$. For this, write
$v = w - f(w)$ for some $w \in V$. Then
\begin{align*}
\beta(u, v) &= \beta(u, w - f(w))
= \beta(u, w) - \beta(u, f(w)) \\
&= \beta(u, w) - \beta(f(u), f(w))
= 0. \qedhere
\end{align*}
\end{proof}
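A quick numeric illustration of the lemma (our example, standard form on $\mathbb{R}^3$): a rotation about the $z$-axis fixes $\langle e_3 \rangle$, and its moved space is the $xy$-plane, which is indeed the orthogonal complement.

```python
# Fix(f) = Mov(f)^perp for a rotation about the z-axis in R^3.
import numpy as np

c, s = np.cos(0.7), np.sin(0.7)
f = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T = np.eye(3) - f                          # Mov(f) = im(I - f)

fix_vec = np.array([0.0, 0.0, 1.0])        # spans Fix(f) here
moved = T @ np.array([1.0, 2.0, 3.0])      # a generic element of Mov(f)
```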
Notice that an isometry $f \in O(V)$ is a reflection if and only
if $\Mov(f)$ is one-dimensional (in which case $f=r_v$ where $\Mov(f) = \< v \>$), and this happens if and only if $\Fix(f)$ is a hyperplane (in
which case $\Fix(f) = \< v \>^\perp$).
When $f$ is not a reflection, its moved space $\Mov(f)$ does not
determine $f$ uniquely. For example, if $V = \mathbb{R}^n$ and $Q$ is the standard
(positive definite) quadratic form, a $2$-dimensional subspace $W
\subseteq V$ is the moved space of infinitely many rotations. By
\Cref{lemma:fix-move-orthogonal}, each of $\Fix(f)$ and $\Mov(f)$
determines the other, so no additional information comes from knowing
both of them.
The Wall form adds the information needed to determine $f$.
\begin{definition}[\cite{wall1959structure}]\label{def:wall-form}
Let $f \in O(V)$ be an isometry. The \emph{Wall form} of $f$ is the
bilinear form $\chi_f$ on $\Mov(f)$ defined as $\chi_f(u, v) =
\beta(w, v)$, where $w \in V$ is any vector such that $u = w -
f(w)$.
\end{definition}
\begin{theorem}\label{thm:wall-form}
The Wall form $\chi_f$ is a well-defined non-degenerate bilinear
form on $\Mov(f)$, and it satisfies $\chi_f(u,u) = Q(u)$ for all $u
\in \Mov(f)$.
\end{theorem}
\begin{proof}
Suppose that $u = w - f(w) = w' - f(w')$ for some $w, w' \in V$.
Then $w - w' \in \Fix(f) = \Mov(f)^\perp$ by
\Cref{lemma:fix-move-orthogonal}, and therefore $\beta(w, v) - \beta(w', v) = \beta(w - w', v) = 0$,
so $\chi_f(u, v)$ is well-defined.
It is immediate to see that $\chi_f$ is a bilinear form. If
$\chi_f$ is degenerate, then there is a non-zero vector $v \in
\Mov(f)$ such that $\chi_f(u, v) = 0$ for all $u \in \Mov(f)$. Since
every $u \in \Mov(f)$ can be written as $w - f(w)$ with $w$ ranging
over all of $V$, this means that $\beta(w, v) = 0$ for all $w \in V$.
This is impossible, because $\beta$ is non-degenerate.
Finally, if $u = w - f(w)$, we have $\chi_f(u, u) = \beta(w, u) =
-\beta(w, -u) = Q(w) + Q(u) - Q(w - u) = Q(w) + Q(u) - Q(f(w)) =
Q(u)$.
\end{proof}
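A concrete Wall form can be computed directly (our example; numpy, standard form $Q(u) = u \cdot u$ on $\mathbb{R}^2$, so $\beta(u,v) = 2\, u \cdot v$). For a plane rotation $f \neq \id$ we have $\Mov(f) = \mathbb{R}^2$, and the unique $w$ with $u = w - f(w)$ is obtained by inverting $\id - f$.

```python
# Wall form of a plane rotation: chi_f(u, v) = beta(w, v), u = w - f(w).
import numpy as np

theta = 1.1
c, s = np.cos(theta), np.sin(theta)
f = np.array([[c, -s], [s, c]])
T = np.eye(2) - f                          # invertible since theta != 0

def chi(u, v):
    w = np.linalg.solve(T, u)              # the w with u = w - f(w)
    return 2.0 * float(w @ v)              # beta(w, v) = 2 * (w . v)

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
```

The checks confirm $\chi_f(u,u) = Q(u)$ and exhibit the failure of symmetry announced next.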
The Wall form $\chi_f$ is not necessarily symmetric. In fact, we show
in \Cref{lemma:wall-form-properties} that $\chi_f$ is symmetric if and
only if $f$ is an involution. As anticipated, the Wall form $\chi_f$
carries enough information to recover the isometry $f$.
\begin{theorem}[Wall's parametrization]
\label{thm:wall-parametrization}
The map $f \mapsto (\Mov(f), \chi_f)$ is a one-to-one correspondence
between the orthogonal group $O(V)$ and the set of pairs $(W, \chi)$
such that $W$ is a subspace of $V$ and $\chi$ is a non-degenerate
bilinear form on $W$ satisfying $\chi(u,u) = Q(u)$ for $u \in W$.
\end{theorem}
\begin{proof}
To prove injectivity, consider two isometries $f, g \in O(V)$ such
that $\Mov(f) = \Mov(g) = W$ and $\chi_f = \chi_g = \chi$. By
definition of Wall form, $\chi_f(w-f(w), v) = \beta(w, v) =
\chi_g(w-g(w), v)$ and therefore $\chi(w-f(w), v) = \chi(w-g(w),
v)$, for every $v \in W$ and $w \in V$. Since $\chi$ is
non-degenerate, this implies that $w-f(w) = w-g(w)$ for all $w \in
V$, thus $f = g$.
To prove surjectivity, given a pair $(W, \chi)$, we want to construct
an isometry $f \in O(V)$ such that $\Mov(f) = W$ and $\chi_f =
\chi$. For $w \in V$, denote by $\alpha_w \in W^*$ the linear
functional given by $\alpha_w(v) = \beta(w, v)$. Since $\chi$ is
non-degenerate, the linear map $\varphi \colon W \to W^*$ given by
$\varphi(u)(v) = \chi(u, v)$ is an isomorphism. Define $f\colon V
\to V$ as follows: $f(w) = w - \varphi^{-1}(\alpha_w)$. By
construction, for any $w \in V$ and $v \in W$ we have
\begin{equation}
\beta(w, v) = \alpha_w(v) = \varphi(w - f(w))(v) = \chi(w - f(w), v).
\label{eq:wall-parametrization}
\end{equation}
This allows us to check that $f$ is an isometry. Indeed, by setting
$v = w - f(w)$ in \cref{eq:wall-parametrization} we obtain
\begin{align*}
\beta(w, w - f(w)) &= \chi(w - f(w), w - f(w)) = Q(w-f(w)) \\
&= Q(w) + Q(f(w)) - \beta(w, f(w)),
\end{align*}
which simplifies to $Q(f(w)) = Q(w)$. By
definition of $f$, we immediately see that $\Mov(f) = W$, and
\cref{eq:wall-parametrization} implies that $\chi = \chi_f$.
\end{proof}
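To make the parametrization concrete, we record its effect on reflections.
\begin{example}
Let $v \in V$ be non-singular, and consider the pair $(\< v \>, \chi)$ with $\chi(v, v) = Q(v)$, which is non-degenerate because $Q(v) \neq 0$. Running the surjectivity construction of \Cref{thm:wall-parametrization}, we have $\alpha_w(v) = \beta(w, v)$ and $\varphi^{-1}(\alpha_w) = \frac{\beta(w, v)}{Q(v)}\, v$, so the associated isometry is
\[ f(w) = w - \frac{\beta(w, v)}{Q(v)}\, v = r_v(w). \]
Hence, under Wall's parametrization, the pairs $(W, \chi)$ with $\dim W = 1$ correspond exactly to the reflections in non-singular vectors.
\end{example}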
We now list some properties of the Wall form.
\begin{lemma}\label{lemma:wall-form-properties}
For every $f \in O(V)$ and $u, v \in \Mov(f)$, the following properties hold.
\begin{enumerate}[(i)]
\item $\chi_f(u, v) + \chi_f(v, u) = \beta(u, v)$.
\item $\chi_f(f(u), v) = -\chi_f(v, u)$.
\item $\Mov(f) = \Mov(f^{-1})$ and $\chi_{f^{-1}}(u, v) = \chi_f(v,
u)$.
\item $\Mov(gfg^{-1}) = g(\Mov(f))$ and $\chi_{gfg^{-1}}(g(u), g(v))
= \chi_f(u, v)$ for every $g \in O(V)$.
\item $\chi_f$ is symmetric if and only if $f$ is an involution.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) follows from the identity $\chi_f(u,u) = Q(u)$ of
\Cref{thm:wall-form}, by replacing $u$ with $u+v$. To prove (ii),
write
\begin{align*}
\chi_f(f(u), v) + \chi_f(v, u) &= \chi_f(f(u), v) -
\chi_f(u, v) + \beta(u, v) \\
&= \beta(u, v) - \chi_f(u-f(u), v) = 0,
\end{align*}
where the first equality follows from (i) and the last equality
follows from the definition of $\chi_f$. In (iii), it is obvious
that $\Mov(f) = \Mov(f^{-1})$, and we only need to check that
$\tilde\chi_{f^{-1}}(u,v) := \chi_f(v,u)$ satisfies
\Cref{def:wall-form} for $f^{-1}$. Indeed,
\begin{align*}
\tilde\chi_{f^{-1}}\big(w-f^{-1}(w), v \big) &= \chi_f\big(v, w-f^{-1}(w)\big) = - \chi_f(f(w) - w, v) \\
&= \chi_f(w - f(w), v) =
\beta(w, v),
\end{align*}
where the second equality follows from (ii). In
(iv), it is immediate that $\Mov(gfg^{-1}) = g(\Mov(f))$. We show
that $\tilde\chi_f (u,v) := \chi_{gfg^{-1}}(g(u), g(v))$ satisfies
\Cref{def:wall-form}:
\begin{align*}
\tilde\chi_f(w - f(w), v) &=
\chi_{gfg^{-1}}(g(w - f(w)), g(v)) \\
&= \chi_{gfg^{-1}}(g(w) - g(f(w)), g(v)) \\
&= \beta(g(w), g(v)) = \beta(w, v).
\end{align*}
Therefore $\tilde\chi_f
= \chi_f$. In (v), if $\chi_f$ is symmetric, then we can apply
(iii) and deduce that $\Mov(f^{-1}) = \Mov(f)$ and
$\chi_{f^{-1}}(u,v) = \chi_f(v, u) = \chi_f(u, v)$. Then
$\chi_{f^{-1}} = \chi_f$, so $f^{-1} = f$ by
\Cref{thm:wall-parametrization}. Conversely, if $f$ is an
involution, then $\chi_f(u, v) = \chi_{f^{-1}}(u, v) = \chi_f(v,
u)$, so $\chi_f$ is symmetric.
\end{proof}
Fix a subspace $W \subseteq V$, and look at all isometries $f \in
O(V)$ such that $\Mov(f) = W$. Property (i) of
\Cref{lemma:wall-form-properties} says that the symmetrization of the
Wall form $\chi_f$ is necessarily equal to the ambient bilinear form
$\beta$ (restricted to $W = \Mov(f)$). In particular, if $W$ is
non-degenerate and the characteristic of $\mathbb{F}$ is not $2$, there is
exactly one isometry $f$ such that $\Mov(f) = W$ and $\chi_f$ is
symmetric, and $f$ is an involution by property (v). On the opposite
side, if $W$ is totally singular, then $\chi_f$ is alternating by
\Cref{thm:wall-form}. In this case, isometries $f$ with $\Mov(f) = W$
only exist if $\dim W$ is even (otherwise every alternating bilinear
form on $W$ is degenerate, as the rank is necessarily even; see for
example \cite[Theorem 2.10]{grove2002classical}).
\section{Factorizations and reflection length}\label{sec:factorizations}
In this section, we continue to follow \cite{wall1959structure} and
\cite[Chapter 11]{taylor1992geometry} and show how Wall's
parametrization leads to a nice procedure to build factorizations of
isometries. For a field $\mathbb{F} \neq \mathbb{F}_2$, this shows that any
isometry $f \in O(V)$ can be written as a product of reflections, and
it yields a characterization of the \emph{reflection length}, i.e., the
minimal length $k$ of a factorization $f = r_1r_2\dotsm r_k$ as a
product of reflections. We refer to \cite[Theorem
11.41]{taylor1992geometry} for the case $\mathbb{F} = \mathbb{F}_2$, which we do not
treat here. Finally, at the end of this section, we introduce the
spinor norm.
\begin{definition}[Orthogonal complements]\label{def:left-right-orthogonal-complement}
Let $\chi$ be a non-degenerate bilinear form on a finite-dimensional
vector space $W$. Define the \emph{left} and \emph{right orthogonal
complement} of a subspace $U \subseteq W$ as \begin{align*}
U^{\triangleleft} &= \{ v \in W \mid \chi(v, u) = 0 \, \text{ for
all } u \in U \} \\ U^{\triangleright} &= \{ v \in W \mid
\chi(u, v) = 0 \, \text{ for all } u \in U \},
\end{align*} respectively.
\end{definition}
Since $\chi$ is non-degenerate, we have that $\dim U^\triangleleft =
\dim U^\triangleright = \dim W - \dim U$. As an immediate consequence,
$(U^\triangleright)^\triangleleft = (U^\triangleleft)^\triangleright =
U$. We will mostly use this notation in the case where $\chi =
\chi_f$ is the Wall form of an isometry $f \in O(V)$ and $W =
\Mov(f)$.
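In general, the left and right orthogonal complements of a subspace differ, as the following small example shows.
\begin{example}
Let $\chi$ be the bilinear form on $W = \mathbb{F}^2$ whose matrix with respect to the basis $e_1, e_2$ is
\[ \begin{mymatrix}{cc}
1 & 1 \\
0 & 1
\end{mymatrix}. \]
Then $\< e_1 \>^\triangleright = \< e_1 - e_2 \>$, whereas $\< e_1 \>^\triangleleft = \< e_2 \>$.
\end{example}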
The following is the basic building block that allows us to construct
factorizations of isometries.
\begin{theorem}[Factorization theorem]\label{thm:factorization}
Let $f \in O(V)$ be an isometry, and let $U_1 \subseteq \Mov(f)$ be
a subspace such that the restriction $\chi_1 = \chi_f|_{U_1}$ is
non-degenerate. Let $U_2 = U_1^\triangleright$ (respectively, $U_2
= U_1^\triangleleft$), and $\chi_2 = \chi_f|_{U_2}$. Denote by
$f_1$ and $f_2$ the elements of $O(V)$ associated with $(U_1,
\chi_1)$ and $(U_2, \chi_2)$ under Wall's parametrization.
\begin{enumerate}[(a)]
\item $\Mov(f) = U_1 \oplus U_2$, and $f = f_1 f_2$ (respectively,
$f = f_2f_1$).
\item $f_1 f_2 = f_2 f_1$ if and only if $\Mov(f) = U_1 \perp U_2$.
In this case, $f_1$ coincides with $f$ on $U_2^\perp$, and $f_2$
coincides with $f$ on $U_1^\perp$.
\end{enumerate}
Conversely, every factorization $f = f_1f_2$ with $\Mov(f) =
\Mov(f_1) \oplus \Mov(f_2)$ arises in this way.
\end{theorem}
\begin{proof}
We prove part (a) in the case $U_2 = U_1^\triangleright$, the case
$U_2 = U_1^\triangleleft$ being analogous. Since $\chi_1$ is
non-degenerate, no non-zero vector of $U_1$ can be right-orthogonal
to all of $U_1$. This means that $U_1 \cap U_2 = \{0\}$. We also
have $\dim U_1 + \dim U_2 = \dim \Mov(f)$, and therefore $\Mov(f) =
U_1 \oplus U_2$.
Notice that $\chi_2$ is non-degenerate because $\chi_f$ is
non-degenerate, so $f_2$ is well-defined. To prove that $f =
f_1f_2$, consider the following chain of equalities that holds for
every $w \in V$, $u_1 \in U_1$, and $u_2 \in U_2$:
\begin{align*}
\chi_f &\big(w - f_1f_2(w), u_1 + u_2\big) = \chi_f\big(w - f_2(w)
+ f_2(w) - f_1f_2(w), u_1 + u_2\big) \\ &= \chi_f(w - f_2(w), u_1
+ u_2) + \chi_f\big((\id - f_1)f_2(w), u_1 + u_2\big)
\\ &\stackrel{(1)}{=} \chi_f\big(w-f_2(w), u_1\big) + \chi_f\big(w
- f_2(w), u_2\big) + \chi_f\big((\id - f_1)f_2(w), u_1 \big)
\\ &\stackrel{(2)}{=} \beta\big(w - f_2(w), u_1\big) + \beta(w,
u_2) + \beta\big(f_2(w), u_1\big) \\ &= \beta(w, u_1 + u_2) \\ &=
\chi_f\big(w - f(w), u_1 + u_2\big).
\end{align*}
Here (1) follows from bilinearity of $\chi_f$, the term
$\chi_f\big((\id-f_1)f_2(w), u_2\big)$ vanishing because
$(\id-f_1)f_2(w) \in \Mov(f_1) = U_1$ and $u_2 \in U_2$; in (2), the
first term is rewritten using property (i) of
\Cref{lemma:wall-form-properties}, whereas the other two terms are
rewritten using the definitions of $\chi_1$ and $\chi_2$. From the
previous equalities and the fact that $\chi_f$ is non-degenerate, it
follows that $w - f_1f_2(w) = w - f(w)$ for all $w \in V$, so $f =
f_1f_2$.
We now prove part (b). Suppose that $f_1f_2 = f_2f_1$. Then $f =
f_1f_2$ commutes with $f_1$, so property (iv) of
\Cref{lemma:wall-form-properties} gives $f(U_1) = f(\Mov(f_1)) =
\Mov(ff_1f^{-1}) = \Mov(f_1) = U_1$. Then, by property (ii), we have
that $\chi_f(u_2, u_1) = - \chi_f(f(u_1), u_2) = 0$ for all $u_1 \in
U_1$ and $u_2 \in U_2$, since $f(u_1) \in U_1$ and $U_2 =
U_1^\triangleright$.
Therefore $U_2 = U_1^\triangleright = U_1^\triangleleft$.
Property (i) implies that $\Mov(f) = U_1 \perp U_2$.
Conversely, suppose that $\Mov(f) = U_1 \perp U_2$. Since $U_2 =
U_1^\triangleright$, property (i) of
\Cref{lemma:wall-form-properties} implies that $U_2 =
U_1^\triangleleft$. By the first part of this theorem, we obtain
that $f = f_2f_1$, and therefore $f_1f_2 = f_2f_1$. In addition,
$\Fix(f_2) = \Mov(f_2)^\perp = U_2^\perp$, and thus $f(v) =
f_1f_2(v) = f_1(v)$ for every $v \in U_2^\perp$. Similarly, $f(v) =
f_2f_1(v) = f_2(v)$ for every $v \in U_1^\perp$.
Finally, given any factorization $f = f_1f_2$ such that $\Mov(f) =
\Mov(f_1) \oplus \Mov(f_2)$, we need to show that
$\chi_f|_{\Mov(f_2)} = \chi_{f_2}$. Let $u, v \in \Mov(f_2)$. By
definition of $\chi_f$, we have that $\chi_f(u, v) = \beta(w, v)$,
where $w \in V$ is a vector such that $u = w - f(w)$. Now write $u
= w - f_2(w) + f_2(w) - f_1f_2(w)$, and notice that $w-f_2(w) \in
\Mov(f_2)$ and $f_2(w) - f_1f_2(w) \in \Mov(f_1)$. Since $u \in
\Mov(f_2)$ and $\Mov(f) = \Mov(f_1) \oplus \Mov(f_2)$, we have that
$u = w - f_2(w)$. Then $\chi_{f_2}(u, v) = \beta(w, v) = \chi_f(u,
v)$.
\end{proof}
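The following example carries out the factorization procedure in the Euclidean plane.
\begin{example}
Let $V = \mathbb{R}^2$ with $Q(x) = x_1^2 + x_2^2$, and let $f(x_1, x_2) = (-x_2, x_1)$. A direct computation from \Cref{def:wall-form} gives $\chi_f(e_1, e_1) = \chi_f(e_1, e_2) = \chi_f(e_2, e_2) = 1$ and $\chi_f(e_2, e_1) = -1$. Take $U_1 = \< e_1 \>$, so that $\chi_1 = \chi_f|_{U_1}$ is non-degenerate, and $U_2 = U_1^\triangleright = \< e_1 - e_2 \>$ with $\chi_2(e_1 - e_2, e_1 - e_2) = 2 = Q(e_1 - e_2)$. The isometries associated with $(U_1, \chi_1)$ and $(U_2, \chi_2)$ are the reflections $r_{e_1}$ and $r_{e_1 - e_2}$, and indeed $f = r_{e_1} r_{e_1 - e_2}$. Since $\beta(e_1, e_1 - e_2) = 2 \neq 0$, the decomposition $\Mov(f) = U_1 \oplus U_2$ is not orthogonal, and in fact $r_{e_1 - e_2} r_{e_1} = f^{-1} \neq f$.
\end{example}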
From the definition of moved space, it is easy to see that
$\Mov(f_1f_2) \subseteq \Mov(f_1) + \Mov(f_2)$ for any two isometries
$f_1, f_2 \in O(V)$. \Cref{thm:factorization} allows us to construct
factorizations $f = f_1f_2$ where the equality $\Mov(f_1f_2) =
\Mov(f_1) \oplus \Mov(f_2)$ holds. These are called \emph{direct
factorizations} in \cite{wall1959structure}.
More generally, we give the following definition.
\begin{definition}[Direct factorization]\label{def:direct-factorization}
A factorization $f = f_1 \dotsm f_k$ is called a direct
factorization if $\Mov(f) = \Mov(f_1)\oplus \dotsb
\oplus \Mov(f_k)$ and no $f_i$ is the identity.
\end{definition}
Recall that the reflections are precisely the isometries with a
one-di\-men\-sio\-nal moved space. The relation $\Mov(f_1f_2) \subseteq
\Mov(f_1) + \Mov(f_2)$ yields a lower bound on the reflection length
of an isometry $f \in O(V)$: if $f = r_1\dotsm r_k$ is a product of
$k$ reflections, then $\Mov(f) \subseteq \Mov(r_1) +
\dotsb + \Mov(r_k)$, so $k \geq \dim \Mov(f)$. This lower bound is
attained precisely when the factorization is direct.
In the rest of this section, we are going to see that
most isometries admit a direct factorization, but not all of them.
\begin{lemma}\label{lemma:triangular-basis}
Let $\chi$ be a non-degenerate bilinear form on a finite-dimensional
vector space $W$ over a field $\mathbb{F} \neq \mathbb{F}_2$. If $\chi$ is not
alternating, then $W$ has a basis $e_1, \dotsc, e_m$ such that
$\chi(e_i, e_i) \neq 0$ for all $i$, and $\chi(e_i, e_j) = 0$ for $i
< j$.
\end{lemma}
\begin{proof}
Since $\chi$ is not alternating, there exists a vector $u \in W$
such that $\chi(u, u) \neq 0$. We prove the statement by induction on
$m = \dim W$, the case $m=1$ being trivial.
Suppose from now on that $m > 1$. Note that $W = \< u
\> \oplus \< u \>^\triangleright$, and that the restriction
$\chi|_{\< u \>^\triangleright}$ is also non-degenerate.
If $\chi|_{\< u \>^\triangleright}$ is not alternating, we are done
by induction. Suppose now by contradiction that $\chi|_{\< u
\>^\triangleright}$ is alternating. Let $v \in \< u
\>^\triangleright$ be a non-zero vector chosen as follows: if $\< u
\>^\triangleleft \neq \< u \>^\triangleright$, choose $v$ so that
$\chi(v, u) \neq 0$; otherwise, choose any non-zero vector. Since
$\mathbb{F} \neq \mathbb{F}_2$, there exists $a \in \mathbb{F}^\times := \mathbb{F}\setminus \{0\}$ such that $\chi(u +
av, u + av) = \chi(u, u) + a\chi(v, u)$ does not vanish. By
replacing $v$ with $av$, we may assume that $\chi(u+v, u+v) \neq 0$.
Since $\chi|_{\< u \>^\triangleright}$ is non-degenerate and alternating, the dimension of $\< u
\>^\triangleright$ is necessarily even, so it is at least $2$. Then
there exists a vector $w \in \< u \>^\triangleright$ such that
$\chi(v, w) = 1$. Define $c = \chi(u+v, u+v)$, so that for all $b
\in \mathbb{F}$ we have
\begin{align*}
\chi(u+v, u+bv-cw) &= \chi(u, u) + \chi(v, u) - c \chi(v, w) \\
&= \chi(u, u) + \chi(v, u) - \chi(u+v, u+v) = 0 \\
\chi(u + bv - cw, u + bv - cw)
&= \chi(u, u) + b \chi(v, u) - c \chi(w, u).
\end{align*}
If $\chi(u, u) + b \chi(v, u) - c \chi(w, u) \neq 0$ for some $b \in
\mathbb{F}$, then the restriction $\chi|_{\< u + v \>^\triangleright}$ is
non-degenerate and non-alternating. Therefore we can set $e_1 =
u+v$ and be done by induction. Suppose instead that $\chi(u, u) + b
\chi(v, u) - c \chi(w, u) = 0$ for all $b \in \mathbb{F}$. Then $\chi(v, u)
= 0$ and $\chi(w, u) \neq 0$. In particular, $w \in \< u \>^\triangleright$ and $w \not\in \< u \>^\triangleleft$, so $\< u
\>^\triangleleft \neq \< u \>^\triangleright$. This is a
contradiction because $v$ was chosen so that $\chi(v, u) \neq
0$.
\end{proof}
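A triangular basis as in \Cref{lemma:triangular-basis} is easy to exhibit in a small case.
\begin{example}
Let $\mathbb{F}$ be a field of characteristic $\neq 2$, and let $\chi$ be the form on $\mathbb{F}^2$ with matrix
\[ \begin{mymatrix}{cc}
1 & 1 \\
-1 & 1
\end{mymatrix} \]
with respect to the basis $e_1, e_2$. Then $e_1, e_1 - e_2$ is a basis of the required shape: $\chi(e_1, e_1) = 1$, $\chi(e_1 - e_2, e_1 - e_2) = 2 \neq 0$, and $\chi(e_1, e_1 - e_2) = 0$.
\end{example}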
\begin{remark}
It is worth noting that \Cref{lemma:triangular-basis} is false for
$\mathbb{F} = \mathbb{F}_2$. See \cite[Chapter 11]{taylor1992geometry} for
additional details.
\end{remark}
The following lemma describes how the moved space changes when
multiplying an isometry by a reflection.
\begin{lemma}\label{lemma:reflection-multiplication}
Let $f \in O(V)$ be an isometry, and let $v \in V$ be a non-singular vector.
\begin{enumerate}[(a)]
\item If $v \in \Mov(f)$, then $\Mov(r_v f) = \< v
\>^\triangleright$, where the right orthogonal complement is taken
inside $\Mov(f)$ with respect to the Wall form $\chi_f$. In
particular, $\dim \Mov(r_v f) = \dim \Mov(f) - 1$.
\item If $v \not\in \Mov(f)$, then $\Mov(r_v f) = \Mov(f) \oplus \<
v \>$. In particular, $\dim \Mov(r_v f) = \dim \Mov(f) + 1$.
\end{enumerate}
As a consequence, if $f$ is a product of $k$ reflections, then $\dim
\Mov(f) \equiv k \pmod 2$.
\end{lemma}
\begin{proof}
Part (a) is a direct consequence of \Cref{thm:factorization}. For
part (b), suppose that $v \not\in \Mov(f)$. Since $r_v$ is an
involution, the fixed space $\Fix(r_v f)$ consists of the vectors $u
\in V$ such that $r_v(u) = f(u)$. By substituting the expression
for $r_v$ (\cref{eq:reflection}), we get the equivalent condition
\[ u - f(u) = \frac{\beta(u, v)}{Q(v)} v. \]
Since $v \not\in \Mov(f)$, this condition is satisfied if and only
if $f(u) = u$ and $\beta(u, v) = 0$. Therefore $\Fix(r_v f) =
\Fix(f) \cap \< v\>^\perp$. Then $\Mov(r_v f) = \Mov(f) \oplus \< v
\>$ by \Cref{lemma:fix-move-orthogonal}.
\end{proof}
We are now ready to give a simple formula for the reflection length of
any isometry.
\begin{theorem}[Reflection length]\label{thm:reflection-length}
Assume $\mathbb{F} \neq \mathbb{F}_2$, and let $f \in O(V)$ be an isometry different
from the identity. The reflection length of $f$ is equal to
$\dim\Mov(f)$ if $\Mov(f)$ is not totally singular, and to
$\dim\Mov(f) + 2$ otherwise. In particular, every isometry can be
written as a product of at most $\dim V$ reflections.
\end{theorem}
\begin{proof}
If $\Mov(f)$ is not totally singular, then
\Cref{lemma:triangular-basis} applies to the Wall form $\chi_f$ and
yields a basis of $\Mov(f)$ consisting of non-singular vectors $e_1,
\dotsc, e_m$ such that $\chi(e_i, e_j) = 0$ for $i < j$. By a
repeated application of \Cref{thm:factorization}, we get a direct
factorization $f = r_{e_1} \dotsm r_{e_m}$ of length $m = \dim
\Mov(f)$.
Suppose now that $\Mov(f)$ is totally singular. Choose any
non-singular vector $v \in V$, and consider $g = r_v f$. By
\Cref{lemma:reflection-multiplication}, we have $\Mov(g) = \Mov(f)
\oplus \< v \>$. In particular, $\Mov(g)$ contains the non-singular
vector $v$, so by the previous part $g$ can be written as a product
of $\dim\Mov(g)$ reflections. Then $f$ can be written as a product
of $\dim\Mov(g) + 1 = \dim\Mov(f) + 2$ reflections. It is not
possible to use fewer than $\dim\Mov(f) + 2$ reflections: a
factorization into $\dim\Mov(f)$ reflections would be a direct
factorization, which does not exist because $\Mov(f)$ is totally
singular; a factorization into $\dim\Mov(f)+1$ reflections does not
exist by the last part of \Cref{lemma:reflection-multiplication}.
Finally, we want to show that the reflection length is always at
most $\dim V$. This is immediate if $\Mov(f)$ is not totally
singular, so assume now that $\Mov(f)$ is totally singular. Since
$\chi_f$ is then alternating and non-degenerate, $\dim\Mov(f)$ is
even, and in particular $\dim\Mov(f) \geq 2$. On the
other hand, $\dim\Mov(f)$ is bounded above by the Witt index of
$\beta$, which is at most $\frac12 \dim V$. Therefore the
reflection length is $\dim\Mov(f) + 2 \leq 2 \, \dim \Mov(f) \leq
\dim V$.
\end{proof}
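The case of a totally singular moved space does occur, as the following explicit example shows.
\begin{example}
Let $\mathbb{F} \neq \mathbb{F}_2$, and let $V = \mathbb{F}^4$ with the hyperbolic form $Q(x) = x_1x_2 + x_3x_4$. Define
\[ f(x_1, x_2, x_3, x_4) = (x_1 - x_4,\; x_2,\; x_3 + x_2,\; x_4). \]
Then $Q(f(x)) = (x_1 - x_4)x_2 + (x_3 + x_2)x_4 = Q(x)$, so $f \in O(V)$, and $x - f(x) = (x_4, 0, -x_2, 0)$, so $\Mov(f) = \< e_1, e_3 \>$ is totally singular. By \Cref{thm:reflection-length}, the reflection length of $f$ is $\dim\Mov(f) + 2 = 4 = \dim V$, so the upper bound of the theorem is attained.
\end{example}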
In the final part of this section, we introduce the spinor norm following \cite[Section 4]{wall1959structure}.
See also \cite{zassenhaus1962spinor, hahn1979unipotent, scharlau2012quadratic}.
Let $\mathbb{F}^\times = \mathbb{F} \setminus \{ 0 \}$.
\begin{definition}[Wall's spinor norm]
The \emph{spinor norm} is the map $\theta \colon O(V) \to \mathbb{F}^\times
/ (\mathbb{F}^\times)^2$ defined as $\theta(f) = [\det(A)]$, where $A$ is
the matrix of $\chi_f$ with respect to any basis of $\Mov(f)$. Here
$[a]$ indicates the class of $a \in \mathbb{F}^\times$ in the quotient group
$\mathbb{F}^\times / (\mathbb{F}^\times)^2$.
\end{definition}
Note that $\det(A) \neq 0$ because $\chi_f$ is non-degenerate, and
$\theta(f)$ does not depend on the choice of the basis, since a change
of basis multiplies $\det(A)$ by a non-zero square. For example,
we have $\theta(\id) = 1$ and $\theta(r_v) = [Q(v)]$ for every non-singular vector $v\in V$.
\begin{lemma}\label{lemma:spinor-norm-direct-factorization}
Given a direct factorization $f = f_1 f_2$, we have $\theta(f) =
\theta(f_1)\theta(f_2)$.
\end{lemma}
\begin{proof}
This follows immediately from \Cref{thm:factorization}.
\end{proof}
\begin{theorem}
The spinor norm is a group homomorphism.
\end{theorem}
\begin{proof}
If $\mathbb{F} = \mathbb{F}_2$, the spinor norm is trivial, so we can assume from now
on that $\mathbb{F}\neq \mathbb{F}_2$. Then $O(V)$ is generated by reflections by
\Cref{thm:reflection-length}. Therefore it is enough to show that,
for every factorization $f = r_1 \dotsm r_k$ into reflections, we
have $\theta(f) = \theta(r_1) \dotsm \theta(r_k)$. We prove this by
induction on $k$, the cases $k=0$ and $k=1$ being trivial.
Fix a length $k$ reflection factorization $f = r_1 \dotsm r_k$ with
$k \geq 2$. Let $g = r_1 f = r_2 \dotsm r_k$. If $f = r_1 g$ is a
direct factorization, then $\theta(f) = \theta(r_1) \theta(g)$ by
\Cref{lemma:spinor-norm-direct-factorization}. If $f = r_1 g$ is
not a direct factorization, then $g = r_1f$ is a direct
factorization by \Cref{lemma:reflection-multiplication}, and $\theta(g) = \theta(r_1) \theta(f)$ by
\Cref{lemma:spinor-norm-direct-factorization}. Since all
non-trivial elements of $\mathbb{F}^\times / (\mathbb{F}^\times)^2$ have order $2$,
we have $\theta(f) = \theta(r_1) \theta(g)$ in both cases. By
induction, $\theta(g) = \theta(r_2) \dotsm \theta(r_k)$ and
thus $\theta(f) = \theta(r_1) \theta(g) = \theta(r_1) \dotsm
\theta(r_k)$.
\end{proof}
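Since the spinor norm is a homomorphism, it can be computed from any reflection factorization; over the rationals it is a genuinely non-trivial invariant.
\begin{example}
Let $V = \mathbb{Q}^2$ with $Q(x) = x_1^2 + x_2^2$. The rotation $f(x_1, x_2) = (x_2, -x_1)$ factors as $f = r_{(1,0)}\, r_{(1,1)}$, so
\[ \theta(f) = \theta(r_{(1,0)})\, \theta(r_{(1,1)}) = [1] \cdot [2] = [2], \]
which is a non-trivial class in $\mathbb{Q}^\times/(\mathbb{Q}^\times)^2$. In particular, $f$ cannot be written as a product of reflections $r_{v_i}$ with all the $Q(v_i)$ squares.
\end{example}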
\section{Partial order on the orthogonal group}
\label{sec:partial-order}
In this section, we introduce the partial order on $O(V)$ naturally
induced by minimal reflection factorizations. It generalizes the
partial order of \cite{brady2002partial}. We show that for most
isometries $f \in O(V)$, the interval $[\id,f]$ naturally embeds into
the poset (i.e., partially ordered set) of subspaces of $\Mov(f)$. We
assume throughout this section that $\mathbb{F} \neq \mathbb{F}_2$, so that
\Cref{thm:reflection-length} applies.
\begin{definition}[Partial order on $O(V)$]\label{def:partial-order}
Given two isometries $f, g \in O(V)$, define $g \leq f$ if and only
if $f$ admits a minimal length reflection factorization that starts
with a minimal length reflection factorization of $g$.
Equivalently, $g \leq f$ if and only if $l(f) = l(g) + l(g^{-1}f)$,
where $l \colon O(V) \to \mathbb{N}$ denotes the reflection length.
\end{definition}
Since the set of reflections is closed under conjugation, it is
equivalent to require that $f$ admits a minimal factorization that
\emph{ends} with a minimal factorization of $g$. Notice that $O(V)$
is ranked (in the sense of posets) by the reflection length $l$, and
it has the identity as the unique $\leq$-minimal element. This
partial order was studied in \cite{brady2002partial} for isometries of
an anisotropic bilinear form $\beta$, and in \cite{brady2015factoring}
for isometries of the affine Euclidean space.
Although the global combinatorics of $O(V)$ is complicated, most of
the intervals \[ [g, f] = \{ h \in O(V) \mid g \leq h \leq f \} \quad
\text{for $g \leq f$} \] have a structure that we can explicitly describe.
Notice that the interval $[g, f]$ is isomorphic (as a poset) to the interval $[\id, g^{-1}f]$ via the isomorphism $h \mapsto g^{-1} h$. Therefore, the
combinatorial study of all intervals in $O(V)$ reduces to the study of
the intervals of the form $[\id, f]$.
Recall from \Cref{sec:factorizations} that the reflection length of an
isometry $f \in O(V)$ is at least $\dim\Mov(f)$, and the reflection
factorizations of length $\dim\Mov(f)$ (if they exist) are the direct
factorizations. In light of \Cref{thm:reflection-length}, we can
characterize in a couple of different ways the isometries $f$ with
reflection length equal to $\dim\Mov(f)$.
\begin{definition}\label{def:minimal-isometry}
An isometry $f \in O(V)$ is \emph{minimal}
if any of the following equivalent conditions hold:
\begin{enumerate}[(i)]
\item $f$ admits a direct factorization as a product of reflections;
\item its reflection length is equal to $\dim \Mov(f)$;
\item $f = \id$, or $\Mov(f)$ is not totally singular.
\end{enumerate}
\end{definition}
Roughly speaking, condition (iii) tells us that most isometries are
minimal. There are many simple sufficient conditions for an isometry
to be minimal: if $\dim \Mov(f) > \frac12 \dim V$, then $f$ is
minimal; if $\dim\Mov(f)$ is odd, then $f$ is minimal (because every
alternating form in odd dimension is degenerate, so $\chi_f$ is not
alternating and $\Mov(f)$ is not totally singular); if
$V$ contains no singular vectors, then all isometries are minimal.
\begin{remark}
If the characteristic of $\mathbb{F}$ is not $2$, there are several
additional conditions equivalent to \Cref{def:minimal-isometry}. In
fact, the moved space $\Mov(f)$ is totally singular if and only if
$\beta$ vanishes on $\Mov(f)$, which happens if and only if the Wall
form $\chi_f$ is skew-symmetric (by property (i) of
\Cref{lemma:wall-form-properties}). In addition, it is noted in
\cite[Corollary 6.3]{grove2002classical} that $\Mov(f)$ is totally
singular if and only if $(f - \id)^2 = 0$ (i.e., the unipotency index
of $f$ is $2$), or equivalently $\Mov(f) \subseteq \Fix(f)$.
See also \cite{nokhodkar2017applications}.
\end{remark}
In what follows, we aim to describe the combinatorics of the
interval $[\id,f]$ associated with a minimal isometry $f$.
\begin{lemma}\label{lemma:isometries-below-f}
Let $f \in O(V)$ be a minimal isometry, and let $g \leq f$. Then:
\begin{enumerate}[(a)]
\item $\Mov(g) \subseteq \Mov(f)$;
\item $g$ is minimal;
\item $\chi_g$ is the restriction of $\chi_f$ to $\Mov(g)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $k = \dim \Mov(f)$. Since $f$ is minimal, its reflection length
is equal to $k$, and $\Mov(f) = \Mov(r_1) \oplus
\dotsb \oplus \Mov(r_k)$ for every minimal length factorization
$f=r_1\dotsm r_k$ of $f$ as a product of reflections. Then there is
one such factorization for which $g = r_1 \dotsm r_m$ for some $m
\leq k$, and the reflection length of $g$ is equal to $m$. By a
repeated application of part (b) of
\Cref{lemma:reflection-multiplication}, we get that $\Mov(g) =
\Mov(r_1) \oplus \dotsb \oplus \Mov(r_m) \subseteq
\Mov(f)$. In addition, the reflection factorization $g = r_1\dotsm
r_m$ is a direct factorization, so $g$ is minimal.
If $g = f$, then $\chi_g = \chi_f$ and we are done. Suppose now that
$g \neq f$, i.e., $m < k$. Since $\Mov(r_k)$ is 1-dimensional, the
property $\chi(u,u) = Q(u)$ (\Cref{thm:wall-form}) implies that
$\chi_{r_k}$ is the restriction of $\chi_f$ to $\Mov(r_k)$. By
\Cref{thm:factorization}, $\chi_{r_1\dotsm r_{k-1}}$ is the
restriction of $\chi_f$ to $\Mov(r_1\dotsm r_{k-1})$. Now, $f' :=
r_1\dotsm r_{k-1}$ is minimal by part (b), and $g \leq f'$, so we
are done by induction on $k$.
\end{proof}
In the full group $O(V)$, there can be many isometries with the same
moved space. However, once we restrict to an interval $[\id, f]$ where $f$ is minimal, an isometry is completely determined by its moved space.
\begin{theorem}[Minimal intervals]\label{thm:intervals}
Let $f \in O(V)$ be a minimal isometry. Then $g \mapsto \Mov(g)$ is
an order-preserving bijection between the interval $[\id,f]$ and the
poset of linear subspaces $U \subseteq \Mov(f)$ that satisfy the
following conditions:
\begin{enumerate}[(i)]
\item $U = \{0\}$ or $U$ is not totally singular;
\item $U^\triangleright = \{0\}$ or $U^\triangleright$ is not
totally singular;
\item $\chi_f|_U$ is non-degenerate.
\end{enumerate}
In addition, the rank of $g \in [\id,f]$ is equal to $\dim\Mov(g)$.
\end{theorem}
\begin{proof}
Let $g \in [\id,f]$, and let $U = \Mov(g)$. We have that $g$ is
minimal by \Cref{lemma:isometries-below-f}, so $U$ satisfies
condition (i). In addition, we have $U^\triangleright = \Mov(g^{-1}f)$ by
\Cref{thm:factorization}, and $g^{-1}f \in [\id,f]$ is also minimal,
so condition (ii) is satisfied. Finally, condition (iii) is a
consequence of \Cref{thm:factorization}.
We now explicitly construct the inverse map $\phi$. Suppose that $U
\subseteq \Mov(f)$ satisfies all three conditions. By
\Cref{thm:factorization} and condition (iii), there is a direct
factorization $f = f_1f_2$ where $f_1$ is the isometry associated
with $(U, \chi_f|_U)$. By conditions (i) and (ii), both $f_1$ and
$f_2$ are minimal. Then their reflection lengths are
$\dim\Mov(f_1)$ and $\dim\Mov(f_2)$, which add up to $\dim\Mov(f)$.
Therefore $f_1 \in [\id,f]$. Define $\phi(U) = f_1$.
We now check that $\phi$ is indeed the inverse of $\Mov$. For any
isometry $g \in [\id,f]$, we have that $g' = \phi(\Mov(g))$ is an
isometry such that $\Mov(g') = \Mov(g)$, and $\chi_{g'} =
\chi_f|_{\Mov(g)}$. By \Cref{lemma:isometries-below-f}, we also
have that $\chi_g = \chi_f|_{\Mov(g)}$. This means that $g'$ and
$g$ have the same moved space and the same Wall form, so $g' = g$ by
\Cref{thm:wall-parametrization}. In addition, for any subspace $U
\subseteq \Mov(f)$ satisfying conditions (i)--(iii), we have that
$\Mov(\phi(U)) = U$ by construction of $\phi$.
If $g \leq g'$ in $[\id,f]$, then $g'$ is minimal by part (b) of
\Cref{lemma:isometries-below-f}, and $\Mov(g) \subseteq \Mov(g')$ by
part (a) of \Cref{lemma:isometries-below-f}. This means that the
bijection $g \mapsto \Mov(g)$ is order-preserving. Finally, the
rank of an isometry $g$ in $[\id,f]$ is given by its reflection
length, which is equal to $\dim\Mov(g)$ because $g$ is minimal.
\end{proof}
For every $U \subseteq \Mov(f)$, we have that $U^\triangleleft =
f(U^\triangleright)$ by property (ii) of
\Cref{lemma:wall-form-properties}, so $U^\triangleleft$ and
$U^\triangleright$ are isometric. In particular, $U^\triangleleft$ is
totally singular if and only if $U^\triangleright$ is totally
singular, and this gives an equivalent way to write condition (ii) of
\Cref{thm:intervals}. Note that condition (ii) is not redundant, as
the following example shows.
\begin{example}\label{example:direct-factorization}
Consider an isometry $f$ with a $3$-dimensional moved space and a
Wall form given by the following matrix, with respect to some basis
$e_1, e_2, e_3$ of $\Mov(f)$:
\[ \begin{mymatrix}{ccc}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & -1 & 0
\end{mymatrix}. \]
If $U_1 = \< e_1 \>$ and $U_2 = U_1^\triangleright = \< e_2, e_3
\>$, then \Cref{thm:factorization} yields a direct factorization $f
= f_1 f_2$ such that $\chi_{f_1} = \chi_f|_{U_1}$ is not
alternating, whereas $\chi_{f_2} = \chi_f|_{U_2}$ is alternating. Then $f_1$ is
minimal, and $f_2$ is not. As a consequence, we have $f_1 \not\leq
f$ despite the inclusion $\Mov(f_1) \subseteq \Mov(f)$.
\end{example}
Notice that the bijection $g \mapsto \Mov(g)$ of \Cref{thm:intervals}
is not a poset isomorphism. Indeed, it is possible to have elements
$g, g' \in [\id,f]$ with $g \not\leq g'$ but $\Mov(g) \subseteq
\Mov(g')$. We construct such a case in the following example.
\begin{example}
Consider an isometry $f$ with a $4$-dimensional moved space and a
Wall form given by the following matrix, with respect to some basis
$e_1, e_2, e_3, e_4$ of $\Mov(f)$:
\[ \begin{mymatrix}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{mymatrix}. \]
By \Cref{thm:intervals}, the subspaces $U = \< e_1 \>$ and $U' = \<
e_1, e_2, e_3 \>$ have associated isometries $g, g' \in [\id,f]$
with $\Mov(g) = U$ and $\Mov(g') = U'$. Then $\Mov(g) \subseteq
\Mov(g')$, but $g \not\leq g'$ as seen in
\Cref{example:direct-factorization}.
\end{example}
In the case where the bilinear form $\beta$ is anisotropic, we recover
the description of the intervals in $O(V)$ given in
\cite{brady2002partial}.
In fact, the same description is obtained in the more general setting where $V$ contains no singular vectors.
\begin{corollary}\label{cor:anisotropic-intervals}
Suppose that $V$ contains no singular vectors, and let $f \in O(V)$ be any isometry.
Then $f$ is minimal, and $g \mapsto \Mov(g)$ is
an isomorphism between the interval $[\id,f]$ and the
poset of all linear subspaces $U \subseteq \Mov(f)$.
\end{corollary}
\begin{proof}
We already noted that every isometry $f$ is minimal if $V$ contains no singular vectors.
To prove that $g \mapsto \Mov(g)$ is an order-preserving bijection, it is enough to apply \Cref{thm:intervals} and show that conditions (i)--(iii) are satisfied by every subspace $U \subseteq \Mov(f)$.
Conditions (i) and (ii) are trivially satisfied because $\{0\}$ is the only totally singular subspace of $V$.
For condition (iii), $\chi_f(u, u) = Q(u) \neq 0$ for any non-zero vector $u \in U$, so $\chi_f|_U$ is non-degenerate.
To conclude the proof, we need to show that $\Mov(g) \subseteq \Mov(g')$ implies $g \leq g'$ for every $g,g' \in [\id, f]$.
If we define $h = g^{-1}g'$, we obtain that $g' = gh$ is a direct factorization by \Cref{thm:factorization}.
Since $h$ is minimal, we deduce that $l(g') = l(g) + l(h)$ and therefore $g \leq g'$.
\end{proof}
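As an illustration of \Cref{cor:anisotropic-intervals}, consider the Euclidean plane.
\begin{example}
Let $V = \mathbb{R}^2$ with $Q(x) = x_1^2 + x_2^2$, and let $f \in O(V)$ be a rotation different from the identity, so that $\Mov(f) = V$. By \Cref{cor:anisotropic-intervals}, the interval $[\id, f]$ is isomorphic to the poset of all subspaces of $\mathbb{R}^2$: the subspace $\{0\}$ corresponds to $\id$, each line $\< v \>$ corresponds to the reflection $r_v$, and $V$ corresponds to $f$ itself. In particular, every reflection lies below every non-trivial rotation in this partial order.
\end{example}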
In the last part of this section, we turn our attention to non-minimal isometries, which behave in a substantially different way.
\begin{theorem}
\label{thm:non-minimal-isometries}
Let $f \in O(V)$ be a non-minimal isometry.
\begin{enumerate}[(a)]
\item For every reflection $r \in O(V)$, we have $r < f$ and $rf < f$.
\item Every isometry $g < f$ is minimal.
\item $f$ is $\leq$-maximal in $O(V)$.
\end{enumerate}
\end{theorem}
\begin{proof}
In the proof of \Cref{thm:reflection-length}, it is shown that any
reflection $r \in O(V)$ is part of some minimal length reflection
factorization of $f$. This implies both $r \leq f$ and $rf \leq f$.
Note that $r \neq f$ because every reflection is minimal, and
clearly $rf \neq f$, so the strict relations of part (a) hold. From
that proof it is also clear that $rf$ is minimal, so every isometry
$g < f$ is minimal by \Cref{lemma:isometries-below-f}, proving part
(b). Part (c) follows from \Cref{lemma:isometries-below-f} and part
(b).
\end{proof}
In the following, we give a coarse description of the structure of $[\id, f]$ for a non-minimal isometry $f$.
Note that $[\id, f]$ contains multiple isometries with the same moved space, so a bijection like the one of \Cref{thm:intervals} does not exist.
Denote by $(\id, f) = [\id, f] \setminus \{\id, f\}$ the open interval between the identity and $f$.
Let $\mathcal{W}_f$ be the set of all
subspaces $W \subseteq V$ containing $\Mov(f)$ as a codimension-one
subspace and not totally singular.
For any subspace $W \in
\mathcal{W}_f$, let $P_{f, W} = \{ g \in (\id, f) \mid \Mov(g) \subseteq W \}$.
\begin{theorem}[Non-minimal intervals]
Let $f \in O(V)$ be a non-minimal isometry. As a poset, the open interval $(\id, f)$ is the disjoint union (also called ``parallel composition'') of the subposets $P_{f, W}$:
\[ (\id, f) = \bigsqcup_{W \in \mathcal{W}_f} P_{f, W}. \]
\label{thm:non-minimal-intervals}
\end{theorem}
\begin{proof}
Let $g \in (\id, f)$. Then $g \leq rf$ for some reflection $r$, and $rf$ is minimal by \Cref{thm:non-minimal-isometries}.
Since $f$ is non-minimal, $\Mov(f)$ is a codimension-one subspace of $W = \Mov(rf)$ by part (b) of \Cref{lemma:reflection-multiplication}.
Then $W \in \mathcal{W}_f$ because $rf$ is minimal, and $g \in P_{f, W}$ by \Cref{lemma:isometries-below-f}.
Let $W' \in \mathcal{W}_f$ be any subspace such that $g \in P_{f, W'}$.
Note that $g$ is minimal by \Cref{thm:non-minimal-isometries}, so $\Mov(g) \nsubseteq \Mov(f)$.
Since $\Mov(f)$ is a
codimension-one subspace of $W'$, we have that $W' =
\Mov(f) + \Mov(g)$.
Therefore $W'$ is uniquely determined by $f$ and $g$.
In other words, $g$ is contained in exactly one $P_{f, W'}$.
Finally, if $g \in P_{f, W}$ and $g' \leq g$, then $\Mov(g') \subseteq \Mov(g)$ by \Cref{lemma:isometries-below-f} and therefore $g' \in P_{f, W} \cup \{\id\}$.
This means that there is no order relation between $P_{f, W}$ and $P_{f, W'}$ if $W \neq W'$.
\end{proof}
\pgfdeclarelayer{bg} %
\pgfsetlayers{bg,main} %
\begin{figure}[t]
\begin{tikzpicture}
\node[inner sep=0.05cm, fill, circle, label=below:{$\id$}] (id) at (0,0) {};
\node[inner sep=0.05cm, fill, circle, label=above:{$f$}] (f) at (0,3.5) {};
\draw (-3, 1) to [bend left=15] (-3, 2.5);
\draw (-2, 1) to [bend right=15] (-2, 2.5);
\node at (-2.5, 1.7) {\scriptsize $P_{f,W}$};
\draw (-1, 1) to [bend left=15] (-1, 2.5);
\draw (0, 1) to [bend right=15] (0, 2.5);
\node at (-0.5, 1.7) {\scriptsize $P_{f,W'}$};
\node at (2.8, 1.6) {$\dots$};
\draw (2, 1) to [bend right=15] (2, 2.5);
\draw (1, 1) to [bend left=15] (1, 2.5);
\node at (1.5, 1.7) {\scriptsize $P_{f,W''}$};
\begin{pgfonlayer}{bg} %
\draw[fill=gray!30] (-3, 1) -- (-2, 1) -- (id.center) -- (-3, 1);
\draw[fill=gray!30] (-1, 1) -- (0, 1) -- (id.center) -- (-1, 1);
\draw[fill=gray!30] (1, 1) -- (2, 1) -- (id.center) -- (1, 1);
\draw[fill=gray!30] (-3, 2.5) -- (-2, 2.5) -- (f.center) -- (-3, 2.5);
\draw[fill=gray!30] (-1, 2.5) -- (0, 2.5) -- (f.center) -- (-1, 2.5);
\draw[fill=gray!30] (1, 2.5) -- (2, 2.5) -- (f.center) -- (1, 2.5);
\end{pgfonlayer}
\node (rf) at (6.5, 2.5) {$rf$ for all reflections $r$};
\draw[->] (rf) -- (3.5, 2.5);
\node (r) at (6.5, 1) {all reflections $r$};
\draw[->] (r) -- (3.5, 1);
\end{tikzpicture}
\caption{Coarse structure of an interval $[\id, f]$ for a non-minimal isometry $f$, as described by \Cref{thm:non-minimal-intervals}.}
\label{fig:non-minimal-interval}
\end{figure}
\Cref{fig:non-minimal-interval} shows the Hasse diagram of a non-minimal interval $[1,f]$, as described by the previous theorem.
Note that each subposet $P_{f, W}$ is self-dual: the map $g \mapsto g^{-1}f$ is an order-reversing bijection from $P_{f, W}$ to itself.
\section{Positive factorizations}
\label{sec:positive-factorizations}
Let $(V, Q)$ be a non-degenerate quadratic space over an ordered field
$\mathbb{F}$. In particular, $\mathbb{F}$ has characteristic $0$. A non-singular
vector $v \in V$ is said to be \emph{positive} if $Q(v) > 0$, and
\emph{negative} if $Q(v) < 0$. In this section we focus on the
factorizations of isometries into \emph{positive reflections},
i.e., reflections with respect to positive vectors. We refer to these
factorizations as \emph{positive reflection factorizations}. Under
the hypothesis that $\mathbb{F}$ is \emph{square-dense} (the squares are dense
in the positive elements), we obtain a clean description of the
minimal length of a positive reflection factorization of any isometry
$f \in O(V)$. In particular, we show that $f$ admits a positive
reflection factorization if and only if its spinor norm is positive.
Recall that a subspace $W \subseteq V$ is \emph{positive definite}
(resp.\ \emph{negative definite}) if $Q(v) > 0$ (resp.\ $< 0$) for
every non-zero vector $v \in W$. It is \emph{positive semi-definite}
(resp.\ \emph{negative semi-definite}) if $Q(v) \geq 0$ (resp.\ $\leq
0$) for all $v \in W$. By the inertia theorem of Jacobi and Sylvester
\cite[Theorem 4.4]{scharlau2012quadratic}, $V$ can be decomposed as an
orthogonal direct sum $V^+ \perp V^-$, where $V^+$ is a positive
definite subspace and $V^-$ is a negative definite subspace. The
dimensions of $V^+$ and $V^-$ do not depend on the chosen
decomposition, and the pair $(\dim V^+, \dim V^-)$ is called the
\emph{signature} of $(V, Q)$.
We refer to \cite{scharlau2012quadratic} for additional theory on
quadratic spaces over ordered fields. We assume from now on that $V$
is not negative definite, because otherwise there are no positive
vectors.
Denote by $\mathbb{F}^+ \subseteq \mathbb{F}$ the subset of all positive elements of $\mathbb{F}$.
Since $(\mathbb{F}^\times)^2 \subseteq \mathbb{F}^+$, there is a well-defined quotient
map $\pi \colon \mathbb{F}^\times / (\mathbb{F}^\times)^2 \to \mathbb{F}^\times / \mathbb{F}^+ \cong
\mathbb{Z}_2$. In other words, every element of $\mathbb{F}^\times / (\mathbb{F}^\times)^2$ is
either positive or negative, and this notion is well-defined.
\begin{definition}
An isometry $f \in O(V)$ is \emph{positive} (resp.\ negative) if its
spinor norm $\theta(f)$ is positive (resp.\ negative).
\end{definition}
Notice that this definition is compatible with the previous definition
of positive reflection: a reflection $r_v$ is positive if and only if
$Q(v) > 0$. The positive isometries form a subgroup $O_+(V)$ of
$O(V)$, being the kernel of the composition \[ O(V) \xrightarrow{\theta}
\mathbb{F}^\times / (\mathbb{F}^\times)^2 \xrightarrow{\pi} \mathbb{Z}_2. \] In particular, if
an isometry $f \in O(V)$ can be written as a product of positive
reflections, then it is positive. The subgroup $O_+(V)$ has index $2$
in $O(V)$ unless $V$ is positive definite, in which case $O_+(V) =
O(V)$.
\begin{example}[Isometries over the real numbers]
If $\mathbb{F} = \mathbb{R}$ and $V$ is not (positive or negative) definite, then
$O(V)$ has four connected components. They are detected by the
surjective group homomorphism $O(V) \to \mathbb{Z}_2 \times \mathbb{Z}_2$ defined as
$f \mapsto (\pi(\theta(f)), \det(f))$. The connected component of the identity is
$O_+(V) \cap SO(V)$.
\end{example}
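The four components can also be observed numerically. The following sketch (ours, not from the paper) builds reflection matrices in a signature-$(1,1)$ plane; for a reflection $r_v$ the sign of $Q(v)$ represents the spinor norm, and together with the determinant it separates the four components:

```python
import numpy as np

# Illustrative sketch (ours, not from the paper): V = R^2 with
# Q(x) = x1^2 - x2^2, so the Gram matrix of Q is J = diag(1, -1).
# The reflection in a non-singular vector v is r_v(x) = x - 2 B(x,v)/Q(v) v,
# where B(x,y) = x^T J y is the symmetric bilinear form with B(v,v) = Q(v).
J = np.diag([1.0, -1.0])

def reflection(v):
    """Matrix of r_v in the standard basis; requires Q(v) = v^T J v != 0."""
    return np.eye(2) - 2.0 * np.outer(v, J @ v) / (v @ J @ v)

r_pos = reflection(np.array([1.0, 0.0]))  # positive vector: spinor norm positive
r_neg = reflection(np.array([0.0, 1.0]))  # negative vector: spinor norm negative

# The pair (sign of spinor norm, det) takes all four values:
# id -> (+, +1), r_pos -> (+, -1), r_neg -> (-, -1), r_pos @ r_neg -> (-, +1).
```

One checks directly that $r_{(1,0)} = \mathrm{diag}(-1,1)$ and $r_{(0,1)} = \mathrm{diag}(1,-1)$, so their product is $-\mathrm{id}$, a negative isometry of determinant $1$.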
We are interested in determining the \emph{positive reflection length}
of a positive isometry $f \in O_+(V)$, i.e., the minimal length of a
positive reflection factorization of $f$. A lower bound for
the positive reflection length is given by the reflection length, which is computed in \Cref{thm:reflection-length}. The following example shows
that this lower bound is not always attained.
\begin{example}
Suppose that $W \subseteq V$ is a $2$-dimensional negative definite
subspace, and let $\chi = \frac12 \beta|_{W}$. Let $f \in O(V)$ be
the isometry with $\Mov(f) = W$ and $\chi_f = \chi$. Then $f$ is
positive and minimal (in the sense of \Cref{def:minimal-isometry}),
but all the reflections $r \leq f$ are negative. Therefore $f$ is a
product of $2$ negative reflections, but it cannot be written as a
product of $2$ positive reflections. Note that $f$ is an
involution, by property (v) of \Cref{lemma:wall-form-properties}.
\end{example}
More generally, if $f$ is an involution, we have $\chi_f = \frac12
\beta|_{\Mov(f)}$ by properties (i) and (v) of
\Cref{lemma:wall-form-properties}. Then a triangular basis (as in
\Cref{lemma:triangular-basis}) of positive vectors exists if and only
if $\Mov(f)$ is positive definite. In other words, an involution $f$
admits a direct factorization into positive reflections if and only if
$\Mov(f)$ is positive definite.
We aim to show that all positive non-involutions admit a direct
factorization into positive reflections provided that $\Mov(f)$
contains at least one positive vector. To prove this, in the
rest of this section, we are going to assume that the field $\mathbb{F}$
satisfies the following property.
\begin{definition}
An ordered field $\mathbb{F}$ is \emph{square-dense} if the set of squares
$(\mathbb{F}^\times)^2$ is dense in the set of positive elements $\mathbb{F}^+$. In
other words, for every $0 < a < b$, there exists a square $c^2$ such
that $a < c^2 < b$.
\end{definition}
The class of square-dense fields includes all \emph{Archimedean
fields} (i.e., the subfields of $\mathbb{R}$) and \emph{Euclidean fields}
(i.e., ordered fields where every positive element is a square), which
include all \emph{real closed fields}. See \cite[Chapter
3]{scharlau2012quadratic} for the definitions and properties of
these classes of fields, particularly in relation to the theory of
quadratic forms. An example of an ordered field that is not
square-dense is the field of rational functions $\mathbb{Q}(X)$, with the
order determined by $a < X$ for all $a \in \mathbb{Q}$ (this is a typical
example of a non-Archimedean field).
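To make the Archimedean case concrete, the following sketch (ours; the function name is our own) finds, for given $0 < a < b$ in $\mathbb{Q}$, a rational $c$ with $a < c^2 < b$, witnessing that $\mathbb{Q}$ is square-dense:

```python
from fractions import Fraction

def square_between(a: Fraction, b: Fraction) -> Fraction:
    """Given 0 < a < b in Q, return c with a < c^2 < b.

    Witnesses that Q is square-dense: on a grid of denominator n,
    consecutive squares (m/n)^2 near a differ by (2m-1)/n^2, which
    shrinks below b - a as n grows.
    """
    assert 0 < a < b
    n = 1
    while True:
        m = 1
        while Fraction(m * m, n * n) <= a:  # smallest m with (m/n)^2 > a
            m += 1
        c = Fraction(m, n)
        if c * c < b:
            return c
        n *= 2  # halve the grid spacing and retry

c = square_between(Fraction(2), Fraction(3))
# a < c*c < b holds; here c = 3/2, since (3/2)^2 = 9/4 lies in (2, 3)
```

For a non-Archimedean order such as the one on $\mathbb{Q}(X)$ above, no such search can succeed when the interval lies between the rationals and $X$.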
We choose the square-dense property as our working hypothesis because it
is quite general while still allowing us to obtain the same
characterization of the positive reflection length
(\Cref{thm:positive-factorizations}) that we would obtain over the
real numbers.
We start by proving a variant of \Cref{lemma:triangular-basis}.
\begin{lemma}\label{lemma:basis-with-one-positive-vector}
Let $\chi$ be a non-degenerate bilinear form on a finite-dimensional
vector space $W$ over an ordered field $\mathbb{F}$, with $\dim W \geq 2$.
Suppose that there is at least one vector $u \in W$ with $\chi(u, u)
> 0$. Then there is a basis $e_1, \dotsc, e_m$ such that $\chi(e_1,
e_1) > 0$, $\chi(e_i, e_i) \neq 0$ for $i \geq 2$, and $\chi(e_i,
e_j) = 0$ for $i<j$.
\end{lemma}
\begin{proof}
Proceed as in the proof of \Cref{lemma:triangular-basis}, starting
with a vector $u$ such that $\chi(u,u) > 0$. Choose $a \in
\mathbb{F}^\times$ such that $\chi(u, u) + a \chi(v, u) > 0$, for example by
taking $a = \chi(v, u)$ if this is non-zero, and $a = 1$ otherwise. Then the first basis vector $e_1$
satisfies $\chi(e_1, e_1) > 0$. The rest of the proof is unchanged.
\end{proof}
Next, we prove a technical lemma in dimension 3. This is the building
block that allows us to construct triangular bases of positive
vectors when the Wall form is not symmetric.
\begin{lemma}\label{lemma:non-symmetric-3d}
Let $W$ be a $3$-dimensional vector space over a square-dense field
$\mathbb{F}$. Let $\chi$ be a non-degenerate bilinear form on $W$. Suppose
that $\chi$ is not symmetric, and that there is at least one vector
$u \in W$ with $\chi(u,u) > 0$. Then there exist two vectors $v_1,
v_2 \in W$ such that $\chi(v_1, v_1) > 0$, $\chi(v_2, v_2) > 0$, and
$\chi(v_1, v_2) = 0$.
\end{lemma}
\begin{proof}
By \Cref{lemma:basis-with-one-positive-vector}, there exists a
vector $e_1 \in W$ such that $\chi(e_1, e_1) > 0$ and $\chi|_{\< e_1
\>^\triangleright}$ is not alternating. Fix any non-zero vector
$e_2 \in \< e_1 \>^\triangleleft \cap \< e_1 \>^\triangleright$. If
$\chi(e_2, e_2) > 0$, we are done by choosing $v_1 = e_1$ and $v_2 =
e_2$. So we may assume that $\chi(e_2, e_2) \leq 0$.
\emph{Case 1: $\chi(e_2, e_2) = 0$.} Since $\chi|_{\< e_1
\>^\triangleright}$ is not alternating, there exists a vector $e_3
\in \< e_1 \>^\triangleright$ such that $\chi(e_3, e_3) \neq 0$. If
$\chi(e_3, e_3) > 0$, we are done by choosing $v_1 = e_1$ and $v_2 =
e_3$. So we can assume that $\chi(e_3, e_3) < 0$. Note that $e_3$
is not a scalar multiple of $e_2$, so $e_2, e_3$ is a basis of $\<
e_1 \>^\triangleright$. Therefore $e_1, e_2, e_3$ is a basis of
$W$, and in this basis the matrix of $\chi$ has the following form:
\[
\begin{mymatrix}{ccc}
\gamma & 0 & 0 \\
0 & 0 & c \\
a & b & -\delta
\end{mymatrix},
\]
with $\gamma, \delta > 0$, and $b, c \neq 0$ (otherwise $\chi$ is
degenerate).
We may also assume $a \neq 0$, since otherwise we can exchange $e_2$ and $e_3$ and reduce to Case 2 below.
If $b + c \neq 0$, then set $v_1 = e_1$ and $v_2 = 2\delta e_2 +
(b+c) e_3$. We have that $\chi(v_1, v_2) = 0$, and $\chi(v_2, v_2)
= \delta (b+c)^2 > 0$, so we are done. Suppose now that $b + c =
0$, so the matrix of $\chi$ becomes
\[
\begin{mymatrix}{ccc}
\gamma & 0 & 0 \\
0 & 0 & -b \\
a & b & -\delta
\end{mymatrix}.
\]
Let $v_1 = ab e_1 + \gamma\delta e_2$ and $v_2 = \delta e_1 + a
e_3$. Then
\begin{align*}
\chi(v_1, v_1) &= \gamma (ab)^2 > 0 \\
\chi(v_1, v_2) &= \gamma \cdot ab \cdot \delta - b \cdot \gamma\delta \cdot a = 0 \\
\chi(v_2, v_2) &= \gamma \delta^2 + a \cdot \delta \cdot a - \delta a^2 = \gamma\delta^2 > 0.
\end{align*}
\emph{Case 2: $\chi(e_2, e_2) < 0$.} Then $\chi|_{\<e_1, e_2\>}$ is
non-degenerate, and $\< e_1, e_2 \> \cap \< e_1, e_2
\>^\triangleright = \{0\}$. Let $e_3 \in \< e_1, e_2
\>^\triangleright$ be any non-zero vector. Note that $\chi(e_3,
e_3) \neq 0$, because $\chi$ is non-degenerate. If $\chi(e_3, e_3)
> 0$, we are done by setting $v_1 = e_1$ and $v_2 = e_3$, so we can
assume that $\chi(e_3, e_3) < 0$. Then the matrix of $\chi$ with
respect to the basis $e_1, e_2, e_3$ has the following form:
\[
\begin{mymatrix}{ccc}
\gamma & 0 & 0 \\
0 & -\delta & 0 \\
a & b & -\epsilon
\end{mymatrix},
\]
where $\gamma, \delta, \epsilon > 0$, and at least one of $a$ and
$b$ is non-zero (because $\chi$ is not symmetric). Define
\begin{align*}
v_1 &= qe_1 + e_2 \\ v_2 &= e_1 + \frac{\gamma}{\delta} q e_2 +
\frac{1}{2\epsilon} \left(a + \frac{\gamma}{\delta}bq \right) e_3,
\end{align*}
where $q \in \mathbb{F}$ is yet to be determined. Then
\begin{align*}
\chi(v_1, v_1) &= \gamma q^2 - \delta \\ \chi(v_1, v_2) &= \gamma
q - \delta \cdot \frac{\gamma}{\delta}q = 0 \\ \chi(v_2, v_2) &=
\gamma - \frac{\gamma^2}{\delta} q^2 + \frac{1}{4\epsilon} \left(a
+ \frac{\gamma}{\delta}bq \right)^2.
\end{align*}
We are going to show how to choose $q$ so that $\chi(v_1, v_1) > 0$
and $\chi(v_2, v_2) > 0$. The first condition is
\begin{equation}
q^2 > \frac{\delta}{\gamma}.
\label{eq:first-inequality}
\end{equation}
Now fix the sign of $q$ so that $abq \geq 0$.
Then
\[ \chi(v_2, v_2) \geq
\gamma - \frac{\gamma^2}{\delta}q^2 + \frac{1}{4\epsilon} \left( a^2 +
\left(\frac{\gamma}{\delta}b \right)^2 q^2 \right). \] In order to have
$\chi(v_2, v_2) > 0$, it suffices that the right-hand side
of the previous inequality is positive, and this condition can be
rewritten as
\begin{equation}\label{eq:second-inequality}
\left(1 - \frac{b^2}{4\delta\epsilon} \right) q^2 < \left( 1 +
\frac{a^2}{4\gamma\epsilon} \right) \frac{\delta}{\gamma}.
\end{equation}
If $b^2 \geq 4\delta\epsilon$, then \cref{eq:second-inequality} is always
satisfied, and \cref{eq:first-inequality} is satisfied for
\[ q = \pm \left(\frac{\delta}{\gamma} + 1 \right). \]
If $b^2 < 4\delta\epsilon$, then
\cref{eq:first-inequality,eq:second-inequality} are satisfied if
\[ \frac{\delta}{\gamma} < q^2 < \frac{1 + a^2/4\gamma\epsilon}{1 -
b^2/4\delta\epsilon} \cdot \frac{\delta}{\gamma}. \] Recall that at least
one of $a$ and $b$ is non-zero, so these inequalities define a
non-empty interval in $\mathbb{F}^+$. Since $\mathbb{F}$ is square-dense, this
interval contains at least one square $q^2$.
\end{proof}
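The closed-form vectors produced in Case 1 (with $b + c = 0$) can be spot-checked in exact arithmetic. In the sketch below (ours), the sample values of $\gamma, \delta, a, b$ are arbitrary choices, not taken from the paper:

```python
from fractions import Fraction as F

# Spot-check (ours) of Case 1 with b + c = 0 from the preceding proof.
# Sample values: gamma, delta > 0 and a, b nonzero, chosen arbitrarily.
gamma, delta, a, b = F(2), F(3), F(5), F(7)

# M[i][j] = chi(e_i, e_j), the matrix displayed in the proof.
M = [[gamma, F(0), F(0)],
     [F(0),  F(0), -b],
     [a,     b,    -delta]]

def chi(u, w):
    """Evaluate the bilinear form chi on coordinate vectors u, w."""
    return sum(u[i] * M[i][j] * w[j] for i in range(3) for j in range(3))

v1 = [a * b, gamma * delta, F(0)]   # v1 = ab e1 + gamma*delta e2
v2 = [delta, F(0), a]               # v2 = delta e1 + a e3

# The three identities claimed in the proof:
assert chi(v1, v1) == gamma * (a * b) ** 2   # positive
assert chi(v1, v2) == 0
assert chi(v2, v2) == gamma * delta ** 2     # positive
```

Changing the sample values (subject to $\gamma, \delta > 0$ and $a, b \neq 0$) leaves the three identities intact, as the proof predicts.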
It is worth mentioning that \Cref{lemma:non-symmetric-3d} does not
hold over a general ordered field $\mathbb{F}$, as we show in the next
example.
\begin{example}\label{example:counterexample-positive-factorization}
Let $\mathbb{F} = \mathbb{Q}(X)$, with the non-Archimedean order determined by $a <
X$ for all $a \in \mathbb{Q}$. On $W = \mathbb{F}^3$, consider the non-symmetric
bilinear form $\chi$ defined by the following matrix:
\[
\begin{mymatrix}{ccc}
1 & 0 & 0 \\
0 & -X & 0 \\
0 & 1 & -X
\end{mymatrix}.
\]
Let $v = (p, q, r) \in W$ be any vector satisfying $\chi(v, v) > 0$.
Then we have $p^2 - Xq^2 - Xr^2 + qr > 0$. Note that $\deg(qr)
< \max \{ \deg(Xq^2), \deg(Xr^2) \}$, unless both $q$ and $r$ are
zero. Therefore we must have $\deg(p^2) \geq \max \{ \deg(Xq^2),
\deg(Xr^2) \}$, which can be rewritten as $\deg(p) > \deg(q)$ and
$\deg(p) > \deg(r)$. Now, suppose we have two vectors $v_1 = (p_1,
q_1, r_1)$, $v_2 = (p_2, q_2, r_2)$ with $\chi(v_1, v_1) > 0$ and
$\chi(v_2, v_2) > 0$. Then $\chi(v_1, v_2) = p_1p_2 - Xq_1q_2 -
Xr_1r_2 + r_1q_2$, and here the degree of $p_1p_2$ is greater than
the degree of all other terms. Therefore $\chi(v_1, v_2) \neq 0$.
\end{example}
We are going to need some flexibility in the choice of the vectors
$v_1, v_2$ given by \Cref{lemma:non-symmetric-3d}. The following two
easy lemmas allow us to modify a pair $(v_1, v_2)$ while maintaining the
properties we need.
\begin{lemma}\label{lemma:perturb-orthogonal-pair}
Let $W$ be a finite-dimensional vector space over an ordered field
$\mathbb{F}$, with $\dim W \geq 2$. Let $\chi$ be a non-degenerate bilinear
form on $W$, and suppose that we have two non-zero vectors $v_1, v_2 \in
W$ with $\chi(v_1, v_2) = 0$. For every $u \in W$, there exists a
vector $w \in W$ such that $\chi(v_1 + a u, v_2 + a w)
= 0$ for all $a \in \mathbb{F}$.
\end{lemma}
\begin{proof}
If $u \in \< v_1 \>$, then we can simply choose $w = 0$. Suppose
now that $u \not \in \<v_1\>$. Then $\< v_1 \>^\triangleright$ and
$\< u \>^\triangleright$ are two distinct hyperplanes of $W$. The
set $H = \{ w \in W \mid \chi(u, v_2) + \chi(v_1, w) = 0 \}$ is an
affine translate of $\< v_1 \>^\triangleright$, and so it intersects
the linear hyperplane $\< u \>^\triangleright$. Let $w \in H \cap \<
u \>^\triangleright$. Then
\[ \chi(v_1 + a u, v_2 + a
w) = \chi(v_1, v_2) + a \big(\chi(u, v_2) + \chi(v_1, w)
\big) + a^2 \chi(u, w) = 0 \]
for all $a \in \mathbb{F}$.
\end{proof}
\begin{lemma}\label{lemma:perturb-positive-vector}
Let $W$ be a finite-dimensional vector space over an ordered field
$\mathbb{F}$. Let $\chi$ be a non-degenerate bilinear form on $W$, and
suppose that we have a vector $v \in W$ with $\chi(v, v) > 0$. For every
$u \in W$, there exists $\delta \in \mathbb{F}^+$ such that $\chi(v+
a u, v + a u) > 0$ for all $a$ in the open
interval $(-\delta, \delta)$.
\end{lemma}
\begin{proof}
We have
\[ \chi(v+ a u, v + a u) = \chi(v, v) + a
\chi(u, v) + a \chi(v, u) + a^2 \chi(u, u). \]
For $a$ sufficiently small, the absolute value of each of the last three
summands is smaller than $\frac13 \chi(v, v)$, so the sum is positive.
\end{proof}
We are finally able to refine
\Cref{lemma:basis-with-one-positive-vector}, and obtain a whole
triangular basis of positive vectors.
\begin{lemma}\label{lemma:positive-basis}
Let $W$ be a finite-dimensional vector space over a square-dense
field $\mathbb{F}$. Let $\chi$ be a non-degenerate bilinear form on $W$
with $\det(\chi) > 0$. Suppose that $\chi$ is not symmetric, and
that there is at least one vector $u \in W$ with $\chi(u,u) > 0$.
Then $W$ has a basis $e_1, \dotsc, e_m$ such that $\chi(e_i, e_i) >
0$ for all $i$, and $\chi(e_i, e_j) = 0$ for $i<j$.
\end{lemma}
\begin{proof}
The proof is by induction on $m = \dim W$, the case $m=1$ being
trivial. By \Cref{lemma:basis-with-one-positive-vector}, there is a
basis $e_1, \dotsc, e_m$ such that $\chi(e_1, e_1) > 0$, $\chi(e_i,
e_i) \neq 0$ for $i \geq 2$, and $\chi(e_i, e_j) = 0$ for $i<j$. If
$m=2$, since $\det(\chi) > 0$, we deduce that $\chi(e_2, e_2) > 0$
and we are done. Assume from now on that $m \geq 3$.
Since $\chi$ is not symmetric, there exist two indices $2 \leq i < j
\leq m$ such that at least one of $\chi(e_i, e_1)$, $\chi(e_j,
e_1)$, $\chi(e_j, e_i)$ is not zero. Apply
\Cref{lemma:non-symmetric-3d} to the restriction of $\chi$ to the
3-dimensional subspace $U = \< e_1, e_i, e_j\>$ and get two
positive vectors $v_1, v_2 \in U$ such that $\chi(v_1, v_2) = 0$.
In particular, the subspace $\< v_1 \>^\triangleright$ contains the
positive vector $v_2$ (here the right orthogonal complement is taken
in the entire space $W$ with respect to the bilinear form $\chi$).
By
\Cref{lemma:perturb-orthogonal-pair,lemma:perturb-positive-vector},
there exists $a \in \mathbb{F}^\times$ such that for all $i=1, \dotsc,
m$ we have: (1) $\chi(v_1 + a e_i, v_1 + a e_i) > 0$; (2) the
subspace $\< v_1 + a e_i \>^\triangleright$ contains some
positive vector $v_2 + a e_i'$. Let $N = \{ v_1, v_1 +
a e_1, \dotsc, v_1 + a e_m \}$, and notice that $\< N
\> = W$. We are going to prove that there is at least one vector $u
\in N$ such that $\chi|_{\< u \>^\triangleright}$ is not symmetric.
Then we are done by applying the induction hypothesis on $\chi|_{\<
u \>^\triangleright}$.
Suppose by contradiction that $\chi|_{\< u \>^\triangleright}$ is
symmetric for every $u \in N$. In other words, the alternating form
$\gamma(v, w) := \chi(v, w) - \chi(w, v)$ vanishes on the hyperplane
$\< u \>^\triangleright$ for every $u \in N$. In particular, the
rank of $\gamma$ is at most $2$. However, the rank of $\gamma$ is
even (because $\gamma$ is alternating) and non-zero (because $\chi$
is not symmetric), so it is equal to $2$. For $u \in W$, denote by
$\alpha_u, \alpha'_u \in W^*$ the linear forms defined by
$\alpha_u(w) = \chi(u, w)$ and $\alpha_u'(w) = \gamma(u,w)$. Let
$\phi, \psi \colon W \to W^*$ be the linear maps given by $\phi(u) =
\alpha_u$ and $\psi(u) = \alpha_u'$. Note that $\phi$ is a vector
space isomorphism because $\chi$ is non-degenerate, whereas $\psi$
has rank $2$ because $\gamma$ has rank $2$. For every $u \in N$
we have $\gamma|_{\< u \>^\triangleright} = 0$, which can be written
as: $w \in \ker \alpha'_v$ for every $v, w \in \< u
\>^\triangleright$. By definition of $\alpha_u$, we have $\< u
\>^\triangleright = \ker\alpha_u$. Therefore, for every $u \in N$
and $v \in \ker\alpha_u$, we have $\ker \alpha_u \subseteq \ker
\alpha'_v$ and thus $\alpha'_v$ is a scalar multiple of $\alpha_u$.
This means that, for every $u \in N$, the image of the restriction
of $\psi$ to the hyperplane $\ker \alpha_u$ is contained in the
$1$-dimensional subspace $\< \alpha_u \>$. Since $\psi$ has rank
$2$, $\alpha_u$ must be in the image of $\psi$. Then the isomorphism
$\phi$ sends $N$ inside the image of $\psi$, which is a
$2$-dimensional subspace of $W^*$. This is a contradiction, because
$N$ spans $W$, whereas the image of $\psi$ has codimension $m-2 \geq
1$ in $W^*$.
\end{proof}
We are now ready to compute the positive reflection length of any
positive isometry.
\begin{theorem}[Positive reflection length]\label{thm:positive-factorizations}
Let $(V, Q)$ be a non-de\-ge\-ne\-ra\-te quadratic space over a square-dense
field $\mathbb{F}$. Assume that $V$ is not negative definite, and let $f
\in O_+(V)$ be a positive isometry with $f \neq \id$. If at least
one of the following conditions holds:
\begin{enumerate}[(i)]
\item $\Mov(f)$ is positive definite,
\item $f$ is not an involution and $\Mov(f)$ is not negative semi-definite,
\end{enumerate}
then the positive reflection length of $f$ is equal to $\dim\Mov(f)$.
Otherwise, it is equal to $\dim\Mov(f) + 2$.
In particular, every positive isometry is a product of positive
reflections.
\end{theorem}
\begin{proof}
Let $m = \dim\Mov(f) \geq 1$. If (i) holds, then $\Mov(f)$ is not
totally singular and $f$ has a direct factorization as a product of
reflections by \Cref{thm:reflection-length}. These reflections are
positive, because $\Mov(f)$ is positive definite.
If (ii) holds, then $\chi_f$ is not symmetric by property (v) of
\Cref{lemma:wall-form-properties}, and \Cref{lemma:positive-basis}
yields a basis $e_1, \dotsc, e_m$ such that $\chi_f(e_i, e_i) > 0$
for all $i$ and $\chi_f(e_i, e_j) = 0$ for $i < j$. By
\Cref{thm:factorization}, we have $f = r_1 \dotsm r_m$ where $r_i$
is the reflection with respect to $e_i$. Therefore, $f$ is a product
of $m$ positive reflections.
Conversely, if $f$ can be written as a product of $m$ positive
reflections with respect to some positive vectors $e_1, \dotsc,
e_m$, then by \Cref{thm:factorization} we have $\chi_f(e_i, e_i) > 0$
for all $i$ and $\chi_f(e_i, e_j) = 0$ for $i < j$. In particular,
$\Mov(f)$ contains at least one positive vector. If $\chi_f$ is
symmetric, then $\Mov(f)$ is positive definite and (i) holds. If
$\chi_f$ is not symmetric, then (ii) holds. Therefore, if both (i)
and (ii) fail, then $f$ is not a product of $m$ positive reflections;
since the length of any reflection factorization of $f$ is congruent to
$m$ modulo $2$ (compare determinants), every factorization of $f$ as a
product of positive reflections requires at least $m+2$ reflections.
Finally, we are going to show that any positive isometry $f$ can be
written as a product of $\leq m+2$ positive reflections. We do this
by induction on $m$, the case $m=0$ being trivial. Let $m \geq 1$.
If $\Mov(f)$ contains at least one positive vector $u$, then we can
write $f = r_uf'$ where $\dim \Mov(f') = m-1$ by
\Cref{lemma:reflection-multiplication}, and proceed by induction.
Therefore we may assume that $\Mov(f)$ is negative semi-definite.
We are going to show that there is at least one positive vector $v
\in V$ such that $\chi_{r_vf}$ is not symmetric. Notice that
$\Mov(r_vf) = \Mov(f) \oplus \< v \>$ by
\Cref{lemma:reflection-multiplication}, so $\Mov(r_vf)$ contains the
positive vector $v$. Then \Cref{lemma:positive-basis} can be
applied to $\chi = \chi_{r_vf}$, yielding a factorization of $r_vf$
as a product of $m+1$ positive reflections, and thus allowing us to
write $f$ as a product of $m+2$ positive reflections.
We only need to show that, if $\Mov(f) \neq \{0\}$ is negative
semi-definite, then there is at least one positive vector $v \in V$ such
that $\chi_{r_vf}$ is not symmetric. Let $v$ be any positive
vector. Recall that $\Mov(f) = \< v \>^\triangleright$, where the
right orthogonal complement is taken in $\Mov(r_vf) = \Mov(f) \oplus
\< v \>$ with respect to the bilinear form $\chi_{r_vf}$. If
$\chi_{r_vf}$ is symmetric, then $\Mov(r_v f) = \Mov(f) \perp \< v
\>$.
Therefore $v \in \Mov(f)^\perp = \Fix(f)$.
The set of positive vectors of $V$ is non-empty because $V$ is not
negative definite, and it spans $V$ by
\Cref{lemma:perturb-positive-vector}. If $\chi_{r_v f}$ is
symmetric for all positive vectors $v \in V$, then $v \in \Fix(f)$ for
all positive vectors $v$, so $\Fix(f) = V$ and thus $f = \id$, which is a
contradiction.
\end{proof}
We say that an isometry $f \in O_+(V)$ is \emph{positive-minimal} if
it is a product of $\dim\Mov(f)$ positive reflections.
\Cref{thm:positive-factorizations} provides a characterization of
positive-minimal isometries: an involution is positive-minimal if and
only if its moved space is positive definite; a non-involution is
positive-minimal if and only if its moved space is not negative
semi-definite (i.e., it contains at least one positive vector).
If we replace reflection factorizations with \emph{positive}
reflection factorizations in \Cref{def:partial-order}, we obtain a partial order on the group $O_+(V)$. This is not simply the
restriction to $O_+(V)$ of the partial order on $O(V)$. Indeed, if $f
\in O_+(V)$ is minimal but not positive-minimal, then there is a
minimal positive factorization $f = r_1 r_2 g$ with $l(g) = l(f) =
\dim\Mov(f)$, and we have $g \leq f$ in $O_+(V)$ but $g \not\leq f$ in
$O(V)$. For the same reason, the rank function of $O_+(V)$ is not the
restriction of the rank function of $O(V)$.
If $f \in O_+(V)$ is a positive-minimal isometry, then
\Cref{thm:positive-factorizations} allows us to embed the interval $[\id,
f]$ of $O_+(V)$ into the poset of linear subspaces of $\Mov(f)$, in
the same spirit as \Cref{thm:intervals}.
\section{Isometries of the hyperbolic space}
\label{sec:hyperbolic-space}
In this section, we describe reflection length and intervals in the
isometry group of the hyperbolic space $\H^n$. We follow the notation
of \cite{cannon1997hyperbolic}.
Let $V = \mathbb{R}^{n+1}$, with the quadratic form $Q(x) = x_1^2 +
\dotsb + x_n^2 - x_{n+1}^2$. Then $(V, Q)$ is a real quadratic space
of signature $(n, 1)$. The \emph{hyperboloid model} of the hyperbolic space is
\[ \H^n = \{ x \in V \mid Q(x) = -1 \text{ and } x_{n+1} > 0 \}. \]
The quadratic form $Q$
induces a (positive definite) Riemannian metric on $\H^n$. The
condition $x_{n+1} > 0$ selects the upper sheet of the hyperboloid $\{
Q(x) = -1 \}$. Every isometry of $\H^n$ uniquely extends to an
isometry of $(V, Q)$; conversely, every isometry of $(V, Q)$ that
fixes $\H^n$ (as a set) restricts to an isometry of $\H^n$.
\begin{lemma}
The subgroup of $O(V)$ that fixes $\H^n$ (as a set) coincides with
the index-two subgroup $O_+(V)$ of the positive isometries.
\end{lemma}
\begin{proof}
Both subgroups have index $2$, so it is enough to show one
containment. By \Cref{thm:positive-factorizations}, the subgroup
$O_+(V)$ is generated by the positive reflections $r \in O(V)$, and
therefore it is enough to show that every positive reflection fixes
$\H^n$. If $v \in V$ is a positive vector, then $\< v \>^\perp$ has
signature $(n-1, 1)$, so it intersects $\H^n$. Therefore $r_v$
fixes at least one point of $\H^n$; since $r_v$ permutes the two sheets
of the hyperboloid $\{ Q(x) = -1 \}$, it must fix $\H^n$ as a set.
\end{proof}
Reflections in the hyperbolic space $\H^n$ are restrictions of
positive reflections of $(V, Q)$. Therefore, the study of reflection
length and intervals in the isometry group of $\H^n$ reduces to the
study of positive reflection length and intervals in $O_+(V)$. This
is exactly the setting of \Cref{sec:positive-factorizations}.
It turns out that every isometry of $\H^n$ is positive-minimal.
\begin{theorem}\label{thm:hyperbolic-reflection-length}
The positive reflection length of an isometry $f \in O_+(V)$ is
equal to $\dim\Mov(f)$.
\end{theorem}
\begin{proof}
We prove this by induction on $k = \dim\Mov(f)$, the case $k=0$ (the
identity) being trivial. If $k = 1$, then $f$ is a positive
reflection. If $k \geq 2$, then $\Mov(f)$ intersects the hyperplane
$\{ x_{n+1} = 0 \}$ non-trivially, so it contains at least one
positive vector $v$. By \Cref{thm:factorization}, there is a direct
factorization $f = r_v g$. Then $\dim\Mov(g) = k-1$, and $g$ can be
written as a product of $k-1$ positive reflections by induction.
\end{proof}
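As an illustration (our own example, not from the paper), $\dim\Mov(f)$ can be computed as the rank of $f - \id$; for a Lorentz boost in signature $(2,1)$, this gives positive reflection length $2$:

```python
import numpy as np

# Illustration (our example): hyperboloid model with Q(x) = x1^2 + x2^2 - x3^2,
# i.e. signature (2, 1) and Gram matrix J. A Lorentz boost f is an isometry
# of (V, Q) fixing H^2 as a set, and dim Mov(f) = rank(f - id).
J = np.diag([1.0, 1.0, -1.0])
t = 0.7  # arbitrary boost parameter
f = np.array([[np.cosh(t), 0.0, np.sinh(t)],
              [0.0,        1.0, 0.0],
              [np.sinh(t), 0.0, np.cosh(t)]])

assert np.allclose(f.T @ J @ f, J)  # f preserves Q

# By the theorem above, the positive reflection length of f equals
# dim Mov(f) = rank(f - id) = 2 for t != 0.
moved_dim = np.linalg.matrix_rank(f - np.eye(3))
```

So this boost is a product of exactly two hyperbolic reflections, consistent with the classical picture of a translation of $\H^2$ as a composition of two reflections in disjoint geodesics.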
We are then able to obtain a clean description of all intervals $[\id,
f]$ in $O_+(V)$.
\begin{theorem}\label{thm:hyperbolic-space-intervals}
Let $f \in O_+(V)$. The interval $[\id, f]$ in $O_+(V)$ is
isomorphic to the poset of linear subspaces $U \subseteq \Mov(f)$
such that $\det(\chi_f|_U) > 0$.
\end{theorem}
\begin{proof}
By \Cref{thm:hyperbolic-reflection-length}, we have that $f$ is
positive-minimal. Therefore, all minimal length factorizations of
$f$ into positive reflections are direct factorizations. In
particular, the interval $[\id, f]$ in $O_+(V)$ is contained in the
interval $[\id, f]$ in the whole group $O(V)$. To avoid confusion,
denote by $[\id, f]_+$ the interval in $O_+(V)$. If $g \in [\id,
f]$ is a positive isometry, then $h = g^{-1}f$ is also positive,
and $g$ and $h$ are positive-minimal by
\Cref{thm:hyperbolic-reflection-length}. Therefore $g \in [\id,
f]_+$. This shows that $[\id, f]_+ = [\id, f] \cap O_+(V)$.
By \Cref{thm:intervals}, the map $g \mapsto \Mov(g)$ is a bijection
between $[\id, f]_+$ and the poset of linear subspaces $U \subseteq
\Mov(f)$ such that: $U$ satisfies conditions (i)--(iii) of
\Cref{thm:intervals}; (iv) $\det(\chi_f|_U) > 0$ (this is the same
as saying that the preimage of $U$ is a positive isometry). Since
the signature of $V$ is $(n, 1)$, the totally singular subspaces
have dimension $0$ or $1$, so conditions (i) and (ii) are implied by
condition (iii). In addition, we can disregard condition (iii) as
it is implied by (iv). Putting everything together, the map $g
\mapsto \Mov(g)$ is a bijection between $[\id, f]_+$ and the poset of
linear subspaces $U \subseteq \Mov(f)$ satisfying $\det(\chi_f|_U) >
0$.
If $g \leq g'$ in $[\id, f]_+$, then $g \leq g'$ in $[\id, f]$, and
thus $\Mov(g) \subseteq \Mov(g')$ by \Cref{thm:intervals}.
Conversely, suppose that we have $g, g' \in [\id, f]_+$ such that $\Mov(g)
\subseteq \Mov(g')$.
By \Cref{lemma:isometries-below-f}, $\chi_g$ and $\chi_{g'}$ are the restrictions of $\chi_f$ to $\Mov(g)$ and $\Mov(g')$, respectively.
Then $\chi_g = \chi_{g'}|_{\Mov(g)}$, so there is a direct factorization
$g' = gh$ and $h$ is positive-minimal by
\Cref{thm:hyperbolic-reflection-length}. Therefore $g \leq g'$ in
$[\id, f]_+$. This shows that the bijection $g \mapsto \Mov(g)$ is
a poset isomorphism.
\end{proof}
Notice that \Cref{thm:hyperbolic-space-intervals} gives a poset
isomorphism, whereas \Cref{thm:intervals} only gives an
order-preserving bijection.
A counterexample like the one in
\Cref{example:direct-factorization} cannot occur in this context,
since all positive isometries are positive-minimal.
Indeed, for \Cref{example:direct-factorization} to arise, the Witt index of the ambient space $V$ needs to be at least $2$ (in other words, over an ordered field, the signature needs to be $(p, q)$ with $p, q \geq 2$).
It is also true that all isometries of $O(V)$ are minimal, by
\Cref{thm:reflection-length}. Indeed, the only non-trivial totally
singular subspaces are one-dimensional, and they do not arise as moved
spaces of any isometry, because the Wall form would be identically
zero.
Recall that, if we interpret the hyperboloid model as lying in the projective space
$\P(V)$, the singular lines $\< v \> \subseteq \{ Q(x) = 0 \}$ can be
interpreted as ``points at infinity'' of the hyperbolic space $\H^n$.
Then the isometries of $\H^n$ can be classified into three types:
\emph{elliptic} isometries, which fix at least one point of $\H^n$;
\emph{parabolic} isometries, which fix no point of $\H^n$ and fix
exactly one point at infinity; and \emph{hyperbolic} isometries, which
fix no point of $\H^n$ and fix two points at infinity. See
\cite[Section 12]{cannon1997hyperbolic}. We now restate this
classification in terms of the fixed space and the moved space.
\begin{definition}
An isometry $f \in O_+(V)$ is
\begin{itemize}
\item \emph{elliptic} if $\Fix(f)$ contains a negative vector
(i.e., it is not positive semi-definite);
\item \emph{parabolic} if $\Fix(f)$ is positive semi-definite but
not positive definite;
\item \emph{hyperbolic} if $\Fix(f)$ is positive definite.
\end{itemize}
\end{definition}
\begin{lemma}\label{lemma:hyperbolic-isometries-fix-mov}
Let $f \in O_+(V)$. We have that $\Fix(f) \cap
\Mov(f) = \{0\}$ if $f$ is elliptic or hyperbolic, whereas $\Fix(f)
\cap \Mov(f)$ is a singular line if $f$ is parabolic. In
addition:
\begin{itemize}
\item $f$ is elliptic if and only if $\Mov(f)$ is positive definite;
\item $f$ is parabolic if and only if $\Mov(f)$ is positive
semi-definite but not positive definite;
\item $f$ is hyperbolic if and only if $\Mov(f)$ contains a negative
vector.
\end{itemize}
\end{lemma}
\begin{proof}
We have that $\Mov(f) = \Fix(f)^\perp$ by
\Cref{lemma:fix-move-orthogonal}. Therefore $\Fix(f) \cap \Mov(f)$
is a totally singular subspace, so its dimension is at most $1$.
If $\Fix(f) \cap \Mov(f)$ contains a non-trivial singular vector
$v$, then $\Fix(f)$ is not positive definite, so $f$ is elliptic or
parabolic.
If $f$ is elliptic, then up to conjugating by an isometry in
$O_+(V)$ we may assume that $f$ fixes the point $e_{n+1} = (0,
\dotsc, 0, 1) \in \H^n$. Then $f$ is an isometry also with respect
to the standard (positive definite) Euclidean quadratic form $Q_E(x)
= x_1^2 + \dotsc + x_{n+1}^2$. Therefore $\Fix(f)$ and $\Mov(f)$
are $Q_E$-orthogonal by \Cref{lemma:fix-move-orthogonal}, and in
particular $\Fix(f) \cap \Mov(f) = \{0 \}$.
If $f$ is parabolic, then $\Fix(f)$ contains a singular line, so $\Fix(f) \cap \Mov(f)$ is a singular line.
This finishes the proof of the first part of the statement.
We now prove the classification in terms of the moved space. If $f$
is elliptic, then $\Fix(f)$ contains a negative vector and $V =
\Fix(f) \perp \Mov(f)$, so $\Mov(f)$ is positive definite.
Similarly, if $f$ is hyperbolic, then $\Fix(f)$ is positive definite
and $V = \Fix(f) \perp \Mov(f)$, so $\Mov(f)$ contains a negative
vector. If $f$ is parabolic, then $\Mov(f)$ contains a singular
vector and so it is not positive definite. Finally, if $\Mov(f)$
contains a negative vector $w$, then $\< w \>^\perp$ is positive
definite and $\Fix(f) = \Mov(f)^\perp \subseteq \< w \>^\perp$, so
$f$ is not parabolic.
\end{proof}
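\Cref{lemma:hyperbolic-isometries-fix-mov} can be exercised numerically on the three model isometries of $\H^2$: a rotation (elliptic), a Lorentz boost (hyperbolic), and a null rotation $\exp(X)$ with $X$ nilpotent (parabolic). The sketch below (our code and naming, not the paper's) classifies each isometry both through the definiteness of $\Fix(f)$ and through that of $\Mov(f)$, and the two criteria agree as the lemma asserts.

```python
import numpy as np

J = np.diag([1.0, 1.0, -1.0])        # Q of signature (2, 1) on R^3

def kernel(M, tol=1e-9):
    """Orthonormal basis of ker(M); for M = f - id this is Fix(f)."""
    _, sv, Vt = np.linalg.svd(M)
    return Vt[int(np.sum(sv > tol)):].T

def column_space(M, tol=1e-9):
    """Orthonormal basis of im(M); for M = f - id this is Mov(f)."""
    U, sv, _ = np.linalg.svd(M)
    return U[:, :int(np.sum(sv > tol))]

def definiteness(basis, tol=1e-9):
    """Definiteness of Q restricted to the span of the columns of `basis`."""
    if basis.shape[1] == 0:
        return "positive definite"   # vacuously
    e = np.linalg.eigvalsh(basis.T @ J @ basis)
    if e.min() < -tol:
        return "contains a negative vector"
    return "positive definite" if e.min() > tol else "positive semi-definite"

def classify_by_fix(f):              # the definition
    d = definiteness(kernel(f - np.eye(3)))
    if d == "contains a negative vector":
        return "elliptic"
    return "hyperbolic" if d == "positive definite" else "parabolic"

def classify_by_mov(f):              # the criterion of the lemma
    d = definiteness(column_space(f - np.eye(3)))
    if d == "positive definite":
        return "elliptic"
    return "parabolic" if d == "positive semi-definite" else "hyperbolic"

c, s = np.cos(0.7), np.sin(0.7)
rotation = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
ch, sh = np.cosh(0.7), np.sinh(0.7)
boost = np.array([[1.0, 0.0, 0.0], [0.0, ch, sh], [0.0, sh, ch]])
X = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
null_rotation = np.eye(3) + X + X @ X / 2.0   # exp(X), X nilpotent

for f, expected in [(rotation, "elliptic"), (boost, "hyperbolic"),
                    (null_rotation, "parabolic")]:
    assert np.allclose(f.T @ J @ f, J)        # each f preserves Q
    print(expected, classify_by_fix(f), classify_by_mov(f))
```

The null rotation is built from a nilpotent generator ($X^3 = 0$), so the exponential is a finite sum; its fixed space is the singular line $\langle (0,1,1) \rangle$.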
For elliptic isometries, the description of the intervals given by
\Cref{thm:hyperbolic-space-intervals} becomes particularly simple
thanks to the following observation.
\begin{lemma}\label{lemma:wall-form-positive-definite}
Let $f \in O_+(V)$. If $U \subseteq \Mov(f)$
is a positive definite subspace, then $\det(\chi_f|_{U}) > 0$.
\end{lemma}
\begin{proof}
The restriction $\chi_f|_U$ is non-degenerate, because $\chi(u, u) =
Q(u) > 0$ for all $u \in U$. Applying \Cref{lemma:triangular-basis}
to $\chi_f|_U$, we obtain a basis $e_1, \dotsc, e_m$ of $U$ such
that $\chi_f(e_i, e_i) \neq 0$ for all $i$, and $\chi(e_i, e_j) = 0$
for $i < j$. Additionally, we have $\chi_f(e_i, e_i) = Q(e_i) > 0$
for all $i$. Therefore, $\det(\chi_f|_U) > 0$.
\end{proof}
\begin{theorem}[Elliptic intervals]
Let $f \in O_+(V)$ be an elliptic isometry. Then the interval
$[\id, f]$ is isomorphic to the poset of all linear subspaces of
$\Mov(f)$. In particular, the isomorphism type of $[\id, f]$ only
depends on the dimension of $\Mov(f)$, and not on the Wall form
$\chi_f$.
\end{theorem}
\begin{proof}
This follows immediately from \Cref{thm:hyperbolic-space-intervals}
and \Cref{lemma:wall-form-positive-definite}.
\end{proof}
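As a numerical sanity check of \Cref{lemma:wall-form-positive-definite}, one can evaluate the Wall form directly. We assume the convention $\chi_f(u, u') = \beta(w, u')$ where $w - f(w) = u$ and $\beta$ is the polar form $\beta(x, y) = Q(x+y) - Q(x) - Q(y)$, so that $\chi_f(u, u) = Q(u)$ as in \Cref{lemma:wall-form-restriction-degenerate}; the code and names below are ours, not the paper's.

```python
import numpy as np

J = np.diag([1.0, 1.0, -1.0])        # Q(x) = x^T J x, signature (2, 1)

def wall_form(f, U):
    """Gram matrix of chi_f on the columns of U (a subspace of Mov(f)):
    chi_f(u, u') = beta(w, u') with w - f(w) = u and beta(x, y) = 2 x^T J y."""
    # Each column w_i is a (minimum-norm) solution of (I - f) w = u_i.
    W = np.linalg.lstsq(np.eye(3) - f, U, rcond=None)[0]
    return 2.0 * W.T @ J @ U

c, s = np.cos(0.9), np.sin(0.9)
f = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # elliptic

line = np.array([[1.0], [0.0], [0.0]])                  # a line in Mov(f)
plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # Mov(f) = <e1, e2>
dets = [np.linalg.det(wall_form(f, U)) for U in (line, plane)]
print(dets)    # both positive, as the lemma predicts
```

For the elliptic rotation used here, $\Mov(f) = \langle e_1, e_2 \rangle$ is positive definite, and indeed both determinants come out positive, with $\chi_f(e_1, e_1) = Q(e_1) = 1$.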
The description of \Cref{thm:hyperbolic-space-intervals} can be
simplified also for parabolic intervals.
\begin{lemma}\label{lemma:wall-form-restriction-degenerate}
Let $f \in O_+(V)$ be a positive isometry, and $U \subseteq \Mov(f)$
a subspace. The restriction $\chi_f|_U$ is degenerate if and only
if there is a singular vector $v \in \Mov(f) \setminus \{0\}$ such
that $\< v \> \subseteq U \subseteq \< v \>^\triangleright$.
Note that $\< v \>^\triangleright = \< w \>^\perp$ where $w$ is any vector such that $w - f(w) = v$.
\end{lemma}
\begin{proof}
The restriction $\chi_f|_U$ is degenerate if and only if there is a
non-zero vector $v \in U$ such that $\chi_f(v, u) = 0$ for all $u
\in U$, or equivalently $\< v \> \subseteq U \subseteq \< v
\>^\triangleright$. Since $\chi_f(v, v) = Q(v)$, the containment
$\< v \> \subseteq \< v \>^\triangleright$ holds if and only if $v$
is singular.
Finally, by definition of $\chi_f$, we have $\chi_f(v, u) = \beta(w, u)$ for all $u \in U$, and therefore $\< v \>^\triangleright = \< w \>^\perp$.
\end{proof}
\begin{theorem}[Parabolic intervals]\label{thm:parabolic-intervals}
Let $f \in O_+(V)$ be a parabolic isometry which pointwise fixes
the singular line $\< v \>$. Then the interval $[\id, f]$ is
isomorphic to the poset of linear subspaces $U \subseteq \Mov(f)$
that do not satisfy $\< v \> \subseteq U \subseteq \< v
\>^\triangleright$. In particular, the isomorphism type of $[\id,
f]$ only depends on the dimension of $\Mov(f)$, and not on the
Wall form $\chi_f$.
\end{theorem}
\begin{proof}
Let $U \subseteq \Mov(f)$ be a subspace. If $\< v \> \nsubseteq U$,
then $U$ is positive definite and thus $\det(\chi_f|_U) > 0$ by
\Cref{lemma:wall-form-positive-definite}. Since $\< v \>$ is the
only singular line in $\Mov(f)$, the restriction $\chi_f|_U$ is
degenerate if and only if $\<v \> \subseteq U \subseteq \< v
\>^\triangleright$ by
\Cref{lemma:wall-form-restriction-degenerate}. Finally, if $\< v \>
\subseteq U \nsubseteq \< v \>^\triangleright$, then \Cref{lemma:triangular-basis} yields a basis
$e_1, \dotsc, e_m$ of $U$ such that $\chi_f(e_i, e_i) \neq 0$ for
all $i$ and $\chi_f(e_i, e_j) = 0$ for $i < j$. Since $f$ is
parabolic, $\Mov(f)$ is positive semi-definite by
\Cref{lemma:hyperbolic-isometries-fix-mov} and therefore $\chi(e_i,
e_i) = Q(e_i) > 0$ for all $i$. Thus $\det(\chi_f|_U) > 0$ also in
this case. We conclude by applying
\Cref{thm:hyperbolic-space-intervals}.
\end{proof}
The subgroup of $O_+(V)$ that fixes a singular line $\< v \>$ is
isomorphic to the isometry group of the affine Euclidean space
$\mathbb{R}^n$. This is easily seen in the \emph{half-space model} of the
hyperbolic space (see \cite[Section 12]{cannon1997hyperbolic}). In
particular, parabolic intervals are isomorphic to intervals in the
group of affine Euclidean isometries, which have been explicitly
described in \cite{brady2015factoring}.
Our description is more compact than the one of \cite{brady2015factoring}, where the elliptic and the parabolic portions of an interval are described separately.
The results of this section leave open the following natural question: if $f \in O_+(V)$ is a hyperbolic isometry, does the isomorphism type of $[\id, f]$ depend only on the dimension of $\Mov(f)$?
\bibliographystyle{amsalpha-abbr}
| {
"timestamp": "2021-03-04T02:31:21",
"yymm": "2103",
"arxiv_id": "2103.02507",
"language": "en",
"url": "https://arxiv.org/abs/2103.02507",
"abstract": "Let $V$ be a vector space endowed with a non-degenerate quadratic form $Q$. If the base field $\\mathbb{F}$ is different from $\\mathbb{F}_2$, it is known that every isometry can be written as a product of reflections. In this article, we detail the structure of the poset of all minimal length reflection factorizations of an isometry. If $\\mathbb{F}$ is an ordered field, we also study factorizations into positive reflections, i.e., reflections defined by vectors of positive norm. We characterize such factorizations, under the hypothesis that the squares of $\\mathbb{F}$ are dense in the positive elements (this includes Archimedean and Euclidean fields). In particular, we show that an isometry is a product of positive reflections if and only if its spinor norm is positive. As a final application, we explicitly describe the poset of all factorizations of isometries of the hyperbolic space.",
"subjects": "Group Theory (math.GR)",
    "title": "Factoring isometries of quadratic spaces into reflections"
} |
https://arxiv.org/abs/1812.04806 | A Discontinuous Galerkin Method for the Stokes Equation by Divergence-free Patch Reconstruction | A discontinuous Galerkin method by patch reconstruction is proposed for Stokes flows. A locally divergence-free reconstruction space is employed as the approximation space, and the interior penalty method is adopted which imposes the normal component penalty terms to cancel out the pressure term. Consequently, the Stokes equation can be solved as an elliptic system instead of a saddle-point problem due to such weak form. The number of degree of freedoms of our method is the same as the number of elements in the mesh for different order of accuracy. The error estimations of the proposed method are given in a classical style, which are then verified by some numerical examples. | \section{Introduction}
The incompressible Stokes equations describe low Reynolds number
flows and arise as the linearization of the Navier-Stokes equations
\cite{temam2001navier}. In finite element methods for the
incompressible Stokes problem (see, e.g., \cite{crouzeix1973conforming}),
the major concern is how to impose the incompressibility
condition. In conforming mixed finite element methods it is usually
imposed weakly through a Lagrange multiplier, which leads to a
saddle-point problem. These methods require the approximation
spaces for velocity and pressure to satisfy an inf-sup condition,
which is necessary to guarantee numerical stability. Consequently,
the numerical solution obtained is only weakly divergence-free; we refer
readers to \cite{boffi2013mixed} for more details.
The other approach to solving the incompressible Stokes equations is
to construct a divergence-free space that automatically fulfills the
incompressibility condition and eliminates the pressure. There have
been many efforts to construct globally divergence-free basis
functions. For example, the Crouzeix-Raviart elements proposed in
\cite{crouzeix1973conforming} are divergence-free in each
element and continuous at the midpoints of element edges, and $H(div)$
finite elements were proposed in \cite{wang2009robust}. The
construction of these divergence-free spaces is quite subtle, and it
is a nontrivial task to extend it to grids whose elements have
unusual geometry. An alternative is to abandon the normal
continuity of the basis functions and use the locally divergence-free
elements proposed in \cite{baker1990piecewise}; see the
recent developments in this direction in
\cite{karakashian1998nonconforming, cockburn2007note, liu2011penalty}.
The discontinuous Galerkin (DG) method by patch reconstruction was
recently introduced in \cite{li2016discontinuous} for the
elliptic equation, and was subsequently extended in
\cite{li2017discontinuous} and \cite{li2018finite} to various model
problems. This motivates us to approximate the divergence-free
velocity in the Stokes problem using the discontinuous space obtained
by patch reconstruction. Specifically, we propose a locally
divergence-free space by imposing the constraint locally in the
patch reconstruction. With the new space, we adopt the interior
penalty method for Stokes problems \cite{hansbo2008piecewise}. A
penalty term on the normal component is introduced to control the
inconsistency error. Consequently, an elliptic problem is solved
instead of a saddle-point problem. To construct the locally
divergence-free space, we use only piecewise solenoidal polynomial
functions. The space has one degree of freedom in each
element, so the dimension of the approximation space does not
depend on the polynomial order. The interpolation error estimate of
this space can be obtained naturally, which leads to the approximation
error estimate for the Stokes problem by standard
techniques. We note that the new space is a subspace of the canonical
locally divergence-free DG space. Our method enjoys the advantages of
DG methods, for example that it works on meshes with polygonal
elements; we demonstrate this by numerical examples on different meshes.
The rest of this paper is organized as follows. In Section
\ref{sec:basis}, we describe the construction of the locally
divergence-free approximation space and give the approximation
estimate of this space. We then present the interior penalty
discontinuous Galerkin method for the Stokes problem in Section
\ref{sec:weakform}, where the error estimate of the proposed method is
obtained in the DG energy norm. Finally, numerical results for
two-dimensional examples are presented in Section \ref{sec:examples}
to validate our estimates and demonstrate the capability of our method.
\section{Construction of Approximation Space}\label{sec:basis}
Let $\Omega$ be a polygonal domain in $\mb{R}^2$. The mesh $\mathcal{T}_h$ is a
partition of $\Omega$ into polygonal elements $K$ such that $\bigcup_{K \in
  \mathcal{T}_h} \bar{K} = \bar{\Omega}$ and $K_0 \cap K_1 = \emptyset$ for any
two distinct elements $K_0, K_1 \in \mathcal{T}_h$. Here
$h{:}=\max_{K\in\mathcal{T}_h}h_K$ with $h_K$ the diameter of $K$, and we
denote by $\abs{K}$ the area of $K$. Let $\Gamma_h$ denote the union
of the boundaries of the elements $K\in \mathcal{T}_h$, and let $\Gamma_h^0
\triangleq \Gamma_h \setminus \partial \Omega$ denote the union of the
interior edges. We assume that the partition $\mathcal{T}_h$ possesses the
following shape regularity conditions~\cite{Brezzi:2009,DaVeiga2014}:
\begin{enumerate}
\item[{\bf A1}\;]There exists an integer $N$ independent of $h$ such
  that any element $K$ admits a sub-decomposition $\wt{\mathcal{T}_h}|_K$
  consisting of at most $N$ triangles $T$.
\item[{\bf A2}\;] $\wt{\mathcal{T}_h}$ is a compatible sub-decomposition, and
  any $T\in\wt{\mathcal{T}_h}$ is shape-regular in the sense of
  Ciarlet-Raviart~\cite{ciarlet:1978}: there exists a positive real
  number $\sigma$ independent of $h$ such that $h_T/\rho_T\le\sigma$,
  where $\rho_T$ is the radius of the largest ball inscribed in $T$.
\end{enumerate}
Let $D$ be a subdomain of $\Omega$, which may be an element or an
aggregation of elements of $\mathcal{T}_h$. For a positive integer $m$, let
$H^{m}(D)$ denote the Sobolev space of real-valued functions on
$D$. For vector-valued functions, we introduce the spaces
$\gv{H}^{m}(D)=[H^{m}(D)]^2$. Let $\mb{P}_m(D)$ be the space of
polynomials of total degree not greater than $m$ on $D$, and denote
the corresponding vector-valued polynomial space $[\mb{P}_m(D)]^2$
by $\pmb{\mb{P}}_m(D)$.
Assumptions {\bf A1} and {\bf A2} allow quite general shapes in the
partition, such as non-convex or degenerate elements. They also lead
to some commonly used properties and inequalities:
\begin{enumerate}
\item[{\bf M1}] There exists $\rho_1\ge 1$ depending only on $N$ and
  $\sigma$ such that $h_K/h_T\le\rho_1$ for every $K\in\mathcal{T}_h$ and every
  $T\in\wt{\mathcal{T}_h}|_K$.
\item[{\bf M2}][{\it Agmon inequality}]\;There exists a constant $C$
that depends on $N$ and $\sigma$, but independent of $h_K$ such that
\begin{equation}\label{eq:agmon}
\nm{v}{L^2(\partial K)}^2\le C\Lr{h_K^{-1}\nm{v}{L^2(K)}^2+h_K\nm{\nabla
v}{L^2(K)}^2},\qquad\text{for all\quad}v\in H^1(K).
\end{equation}
\item[{\bf M3}][{\it Approximation property}]\;There exists a constant
$C$ that depends on $N,r$ and $\sigma$, but independent of $h_K$
such that for any $v\in H^{r+1}(K)$, there exists an approximation
polynomial $\wt{v}\in\mb{P}_r(K)$ such that
\begin{equation}\label{eq:app}
\nm{v-\wt{v}}{L^2(K)}+h_K\nm{\nabla(v-\wt{v})}{L^2(K)}\le
Ch_K^{r+1}\snm{v}{H^{r+1}(K)}.
\end{equation}
\item[{\bf M4}][{\it Inverse inequality}]\;For any $v\in\mb{P}_r(K)$,
there exists a constant $C$ that depends only on $N,r,\sigma$ and
$\rho_1$ such that
\begin{equation}\label{eq:inverse}
\nm{\nabla v}{L^2(K)}\le Ch_K^{-1}\nm{v}{L^2(K)}.
\end{equation}
\item[{\bf M5}][{\it Discrete trace inequality}]\;For any
$v\in\mb{P}_r(K)$, there exists a constant $C$ that depends only on
$N,r,\sigma$ and $\rho_1$ such that
\begin{equation}\label{eq:traceinv}
\nm{v}{L^2(\partial K)}\le C h_K^{-1/2}\nm{v}{L^2(K)}.
\end{equation}
\end{enumerate}
The four
inequalities~\eqref{eq:agmon},~\eqref{eq:app},~\eqref{eq:inverse}
and~\eqref{eq:traceinv} also hold for the vector-valued spaces
$\gv{H}^{m}(D)$ and $\pmb{\mb{P}}_m(D)$ with the corresponding norms,
\begin{equation*}
\|\gv v\|_{H^{m}(D)}^2\triangleq\sum_{i=1}^2\|\gv v_i\|_{
H^{m}(D)}^2,\quad \gv v\in \gv{H}^m(D).
\end{equation*}
We are interested in the spaces of solenoidal vector
fields
\[
\gv{S}^m(D)=\{\gv{v}\in \gv{H}^m(D):\div \gv{v}=0 \quad \text{in} \ D
\},
\]
and the polynomial solenoidal vector field
\[
\mb{S}_m(D)=\{\gv{v}\in \pmb{\mb{P}}_m(D):\div \gv{v}=0 \quad
\text{in} \ D \}.
\]
The above inequalities also hold for $\gv{S}^m(D)$ and
$\mb{S}_m(D)$; see~\cite{baker1990piecewise}. Here we restate the
approximation property:
{\bf M3} [{\it Approximation property}]\;There exists a constant $C$
that depends on $N,r$ and $\sigma$, but independent of $h_K$ such that
for any $\gv{v}\in \gv{S}^{r+1}(K)$, there exists an approximation
polynomial $\wt{\gv v}\in\mb{S}_r(K)$ such that
\begin{equation}\label{eq:app-restate}
\nm{\gv v-\wt{\gv v}}{L^2(K)}+h_K\nm{\nabla(\gv v-\wt{\gv v})}{L^2(K)}\le
Ch_K^{r+1}\snm{\gv v}{H^{r+1}(K)}.
\end{equation}
In~\cite{li2016discontinuous}, we introduced a reconstruction
operator mapping the piecewise constant space to a discontinuous
piecewise polynomial space. In this paper, we construct a
reconstruction operator $\mc{S}$ that embeds the piecewise constant
vector-valued space into a space of discontinuous piecewise solenoidal
vector fields. For each element $K\in\mathcal{T}_h$, we assign a sampling node
(collocation point) $\gv{x}_K \in K$ and an element patch $S(K)$,
which usually contains $K$ and some elements around $K$. It is quite
flexible to assign the sampling nodes and construct the element
patches. Usually, we take the barycenter of the element $K$ as the
sampling node, although a perturbation is allowed,
cf.~\cite{li2016discontinuous}. The element patch is built up by
recursively adding von Neumann neighbours (adjacent edge-neighbouring
elements) until the size of the element patch reaches a fixed
number; for the details we refer to~\cite{li2016discontinuous,
  li2012efficient}.
Let $\mc{I}(K)$ denote the set of sampling nodes belonging to
$S(K)$,
\[
\mc{I}(K) = \left\{ \gv{x}_{K'} \mid K' \in S(K) \right\},
\]
with cardinality $\#\mc{I}(K)$, and let $\#S(K)$ be the number of
elements belonging to $S(K)$; clearly $\#\mc{I}(K) = \#S(K)$. We
define $d_K{:}=\text{diam}\;S(K)$ and $d=\max_{K\in\mathcal{T}_h}d_K$.
Denote the piecewise constant space associated with $\mathcal{T}_h$ by $U_h$,
i.e.,
\[
U_h{:}=\set{v\in L^2(\Omega)}{v|_K\in\mb{P}_0(K)}.
\]
and let $\gv{U}_h=[U_h]^2$ be the piecewise constant vector-valued
space.
For any $\gv{v}\in \gv{U}_h$ and any $K\in\mathcal{T}_h$, we reconstruct a
higher-order solenoidal polynomial $\mc{S}_K \gv{v}$ of degree $m$ by
solving the following discrete least-squares problem:
\begin{equation}\label{eq:leastsquares}
\mc{S}_K
\gv{v}=\arg \min_{\gv{p}\in\mb{S}_m(S(K))}\sum_{\gv{x}\in\mc{I}(K)}\abs{\gv{v}(\gv{x})-\gv{p}(\gv{x})}^2.
\end{equation}
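A minimal numpy sketch of the discrete least-squares reconstruction \eqref{eq:leastsquares} for the lowest nontrivial order $m = 1$ (all function and variable names are ours, not part of the method's implementation): the degree-one solenoidal basis has $(m+1)(m+4)/2 = 5$ fields, each sampling node contributes two rows to the least-squares matrix, and a target field that already lies in $\mb{S}_1$ is reproduced exactly, since the residual in \eqref{eq:leastsquares} then vanishes.

```python
import numpy as np

# Degree-1 solenoidal basis: (1,0), (0,1), (0,x), (x,-y), (y,0).
def rows_at(x, y):
    """Two rows of the least-squares matrix contributed by one sampling node."""
    return ([1.0, 0.0, 0.0, x,   y],    # x-components of the basis fields
            [0.0, 1.0, x,  -y, 0.0])    # y-components

def reconstruct(nodes, values):
    """Solve min_s sum over nodes of |v(x_K) - sum_j s_j xi_j(x_K)|^2."""
    A, b = [], []
    for (x, y), v in zip(nodes, values):
        rx, ry = rows_at(x, y)
        A.extend([rx, ry])
        b.extend(v)
    s, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return s

# Four sampling nodes: 8 equations for 5 unknowns, and the resulting
# matrix has full column rank, so the solution is unique.
nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
g = lambda x, y: (y, 0.0)                  # g itself lies in S_1
s = reconstruct(nodes, [g(*p) for p in nodes])
print(np.round(s, 10))                     # only the coefficient of (y, 0) is 1
```

Since $g = (y, 0)$ is one of the basis fields, the computed coefficient vector is $(0,0,0,0,1)$ up to round-off, illustrating that the reconstruction reproduces solenoidal polynomials of degree at most $m$.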
Although $\mc{S}_K \gv{v}$ gives a solenoidal approximation on the
entire element patch $S(K)$, we only use the approximation on the
element $K$. The reconstruction operator $\mc{S}_K$ can be extended
to the function space $[C^0(S(K))]^2\cap \gv{S}^m(S(K))$, still
denoted by $\mc{S}_K$ without ambiguity,
\[
\mc{S}_K: ~\gv{v} \mapsto \mc{S}_K \gv{v} = \mc S_K\tilde{\gv{v}},
\quad \forall \gv{v}\in [C^0(S(K))]^2\cap \gv{S}^m(S(K)),
\]
where $\tilde{\gv{v}}\in\gv{U}_h$ satisfies $\tilde{\gv{v}}(
\gv{x}_{K'})=\gv{v}( \gv{x}_{K'})$ for all $\gv{x}_{K'} \in \mc{I}(K)$.
We make the following assumption on the sampling node
set $\mc{I}(K)$.
\noindent\vskip .5cm {\bf Assumption A}\; For any $K\in\mathcal{T}_h$ and
$\gv{p}\in\mb{S}_m(S(K))$,
\begin{equation}\label{assumption:uniqueness}
\gv{p}|_{\mc{I}(K)}=0\quad\text{implies\quad}\gv{p}|_{S(K)}\equiv 0.
\end{equation}
This assumption guarantees the uniqueness of the solution of the
discrete least-squares problem~\eqref{eq:leastsquares}. Obviously, a
necessary condition for the solvability of~\eqref{eq:leastsquares} is
that $\#\mc{I}(K)$ is no less than $(m+1)(m+4)/4$, since each sampling
node supplies two equations. {\bf Assumption A} is
equivalent to the following quantitative estimate
\[
\Lambda(m, \mc{I}(K))<\infty
\]
with
\begin{equation}\label{eq:cons}
\Lambda(m, \mc{I}(K)){:}=\max_{\gv{p}\in\mb{S}_m(S(K))}
\dfrac{\nm{\gv{p}}{L^\infty(S(K))}}{\nm{\gv{p}|_{\mc{I}(K)}}{\ell_\infty}}.
\end{equation}
If the mesh is a quasi-uniform triangulation and each element patch is
convex, a quantitative uniform upper bound on $\Lambda(m, \mc{I}(K))$
for general polynomials is obtained in
\cite{li2012efficient}. The requirements behind this bound
are hard to satisfy when polygonal meshes are applied; we
refer the readers to \cite{li2016discontinuous}, which gives a
uniform upper bound under milder assumptions, allowing polygonal
partitions and non-convex element patches. Since the solenoidal
polynomial space is a subspace of the general vector-valued
polynomial space, $\Lambda(m, \mc{I}(K))$ in \eqref{eq:cons} is
uniformly bounded under the same assumptions.
\begin{lemma}\label{theorem:localapp}
If {\em Assumption A} holds, then there exists a unique solution of
\eqref{eq:leastsquares}, denoted by $\mc{S}_K \gv{v}$, for any
$K\in\mathcal{T}_h$. Moreover $\mc{S}_K$ satisfies
\begin{equation}\label{eq:invariance}
\mc{S}_K \gv{g}=\gv{g} \quad\text{for
all\quad}\gv{g}\in\mb{S}_m(S(K)).
\end{equation}
The stability property holds for any $K\in\mathcal{T}_h$ and any $\gv{g}\in
[C^0(S(K))]^2$ with $\div \gv{g}=0$:
\begin{equation}\label{eq:continuous}
\nm{\mc{S}_K \gv{g}}{L^{\infty}(K)}\le\Lambda(m , \mc{I}(K)) \sqrt{\#
\mc{I}(K)}\nm{\gv{g}|_{\mc{I}(K)}}{\ell_\infty},
\end{equation}
and the quasi-optimal approximation property is valid in the sense
that
\begin{equation}\label{eq:approximation}
\nm{\gv{g} -\mc{S}_K \gv{g}}{L^{\infty}(K)}\le\Lambda_m
\inf_{\gv{p}\in\mb{S}_m(S(K))} \nm{\gv{g} - \gv{p}}{L^{\infty}(S(K))},
\end{equation}
where $\Lambda_m{:}=\max_{K\in \mathcal{T}_h} \{1+\Lambda(m,\mc{I}(K))\sqrt{\#
\mc{I}(K)}\}$.
\end{lemma}
\begin{proof}
  The reconstruction operator $\mc{S}_K$ is a projection from
  $[C^0(\Omega)]^2$ onto $\mb{S}_m(S(K))$ with respect to the discrete
  $\ell_2$ norm, so the identity~\eqref{eq:invariance} is obvious.
  By {\em Assumption A} and the definition of $\Lambda(m,\mc{I}(K))$ in
  equation~\eqref{eq:cons},
\begin{equation}\label{continuous_1}
 \|\mc{S}_K\gv{g}\|_{L^{\infty}(K)} \leq
 \|\mc{S}_K\gv{g}\|_{L^{\infty}(S(K))} \leq
 \Lambda(m,\mc{I}(K))\max_{\gv x\in \mc{I}(K)}|\mc{S}_K\gv{g}(\gv x)|.
\end{equation}
From the projection property of operator $\mc{S}_K$, we have
\begin{equation}\label{continuous_2}
\sum_{\gv x\in \mc{I}(K)} \abs{\mc{S}_K\gv{g}(\gv x)}^2 \leq \sum_{\gv x\in
  \mc{I}(K)}\abs{\gv{g}(\gv x)}^2 \leq \#\mc{I}(K) \max_{\gv x\in
  \mc{I}(K)}|\gv{g}(\gv x)|^2 .
\end{equation}
Combining ~\eqref{continuous_1} and~\eqref{continuous_2}, we get
\[
\|\mc{S}_K\gv{g}\|_{L^{\infty}(K)} \leq \Lambda(m,\mc{I}(K))
\sqrt{\#\mc{I}(K)} \nm{\gv{g}|_{\mc{I}(K)}}{\ell_\infty}.
\]
Assume that $\gv{p}_0 \in \mb{S}_m(S(K))$ is the best approximation of
$\gv{g}$ in the $L^{\infty}$ norm, i.e.,
\[
\|\gv{g}-\gv{p}_0\|_{L^{\infty}(S(K))} = \inf_{\gv{p}\in
\mb{S}_m(S(K))} \|\gv{g}-\gv{p}\|_{L^{\infty}(S(K))}.
\]
Then, we have following estimate:
\begin{eqnarray*}
\|\mc{S}_K\gv{g} -\gv{p}_0\|_{L^{\infty}(K)} &=& \|\mc{S}_K(\gv{g}
-\gv{p}_0)\|_{L^{\infty}(K)} \\ &\leq& \Lambda(m,\mc{I}(K))
\sqrt{\#\mc{I}(K)} \max_{x \in \mathcal{I}(K)} |(\gv{g}
-\gv{p}_0)(x)| \\ &\leq& \Lambda(m,\mc{I}(K)) \sqrt{\#\mc{I}(K)}
\|\gv{g}-\gv{p}_0\|_{L^{\infty}(S(K))} \\ &=& \Lambda(m,\mc{I}(K))
\sqrt{\#\mc{I}(K)} \inf_{\gv{p}\in \mb{S}_m(S(K))}
\|\gv{g}-\gv{p}\|_{L^{\infty}(S(K))}.
\end{eqnarray*}
By triangle inequality, the left side of \eqref{eq:approximation} can
be written as
\begin{eqnarray*}
\|\gv{g} - \mc{S}_K\gv{g}\|_{L^{\infty}(K)} &\leq& \|
\gv{g}-\gv{p}_0\|_{L^{\infty}(K)} + \|\mc{S}_K\gv{g}
-\gv{p}_0\|_{L^{\infty}(K)} \\ &\leq& (1+\Lambda(m,\mc{I}(K))
\sqrt{\#\mc{I}(K)}) \inf_{\gv{p}\in
\mb{S}_m(S(K))}\|\gv{g}-\gv{p}\|_{L^{\infty}(S(K))}.
\end{eqnarray*}
Together with the definition of $\Lambda_m$, it implies the
quasi-optimality \eqref{eq:approximation}.
\end{proof}
\begin{lemma}
If {\em Assumption A} holds and $\gv{g}\in [C^0(S(K))]^2\cap
\gv{S}^{m+1}(S(K))$, then there exists a constant $C$ that depends on
$N,\sigma$, $\gamma$ and $\rho_1$ such that
\begin{align}
\nm{\gv{g}-\mc{S}_{K} \gv{g}}{L^{2}(K)}&\le C\Lambda_m
h_Kd_K^m\snm{\gv{g}}{H^{m+1}(S(K))}.\label{eq:l2app}\\ \nm{\nabla(\gv{g}-\mc{S}_K
\gv{g})}{L^2(K)}&\le C
\Lr{h_K^m+\Lambda_{m}d_K^m}\snm{\gv{g}}{H^{m+1}(S(K))}.\label{eq:h1app}\\ \nm{\gv{g}-\mc{S}_{K}
\gv{g}}{L^{2}(\partial K)}&\le C\Lambda_m
h_K^{1/2}d_K^m\snm{\gv{g}}{H^{m+1}(S(K))}.\label{eq:traceapp}
\end{align}
\end{lemma}
\begin{proof}
  By~\cite[Theorem 4.3]{baker1990piecewise}, we may take
  $\gv{p}=\gv{\chi}\in\mb{S}_m(S(K))$ in the right-hand side
  of~\eqref{eq:approximation}, where $\gv{\chi}$ is the approximating
  solenoidal polynomial of degree $m$. Then
\begin{equation}\label{eq:starapp}
\inf_{p \in\mb{S}_m(S(K))}\nm{\gv g-\gv p}{L^{\infty}(S(K))}\le\nm{\gv
g- \gv \chi}{L^{\infty}(S(K))}\le Cd_K^m\snm{\gv g}{H^{m+1}(S(K))},
\end{equation}
where $C$ depends on $N,m,\sigma$ and $\gamma$.
Substituting the above estimate~\eqref{eq:starapp}
into~\eqref{eq:approximation}, we obtain
\[
\nm{\gv g-\mc{S}_{K} \gv g}{L^{2}(K)}\le\abs{K}^{1/2}\|\gv
g-\mc{S}_{K} \gv g\|_{L^{\infty}(K)} \le C\Lambda_m h_Kd_K^m\snm{\gv
  g}{H^{m+1}(S(K))},
\]
which gives~\eqref{eq:l2app}.
Next, let $\wh{\gv g}_m$ be the approximating polynomial
in~\eqref{eq:app} for the function $\gv g$. By the {\em inverse
  inequality}~\eqref{eq:inverse} and the approximation
estimate~\eqref{eq:l2app}, we have
\begin{align*}
\nm{\nabla(\gv g-\mc{S}_K \gv g)}{L^2(K)}&\le\nm{\nabla(\gv g-\wh{\gv
    g}_m)}{L^2(K)} +\nm{\nabla(\wh{\gv g}_m-\mc{S}_K \gv
  g)}{L^2(K)}\\ &\le Ch_K^m\snm{\gv
  g}{H^{m+1}(K)}+Ch_K^{-1}\nm{\wh{\gv g}_m-\mc{S}_{K} \gv
  g}{L^{2}(K)}\\ &\le Ch_K^m\snm{\gv g}{H^{m+1}(K)}+Ch_K^{-1}\nm{\gv
  g-\wh{\gv g}_m}{L^2(K)} +Ch_K^{-1}\nm{\gv g-\mc{S}_{K} \gv
  g}{L^{2}(K)}\\ &\le C\Lr{h_K^m+\Lambda_{m}d_K^m}\snm{\gv
  g}{H^{m+1}(S(K))}.
\end{align*}
This gives~\eqref{eq:h1app}.
Combining the {\em Agmon inequality}~\eqref{eq:agmon}
with~\eqref{eq:l2app} and~\eqref{eq:h1app}, one has
\begin{align*}
\nm{\gv g-\mc{S}_K \gv g}{L^2(\partial K)}^2&\le C \Lr{h_K^{-1}\nm{\gv
    g-\mc{S}_K \gv g}{L^2(K)}^2 +h_K\nm{\nabla(\gv g-\mc{S}_K \gv
    g)}{L^2(K)}^2}\\ &\le Ch_K \Lambda_{m}^2d_K^{2m}\snm{\gv
  g}{H^{m+1}(S(K))}^2.
\end{align*}
Taking the square root of both sides gives~\eqref{eq:traceapp}, which
completes the proof.
\end{proof}
A global reconstruction operator $\mc{S}$ is defined elementwise by
$\mc{S}|_K=\mc{S}_K$. Given $\mc{S}$, we embed $\gv{U}_h$ into a
discontinuous piecewise solenoidal polynomial space of degree $m$. The
approximation space $\gv{V}_h$ is defined by
\[\gv{V}_h=\mc{S}\gv{U}_h.\]
Furthermore, the reconstruction operator $\mc{S}$ can be extended to
the function space $[C^0(\Omega)]^2\cap \gv{S}^m(\Omega)$, still
denoted by $\mc{S}$ without ambiguity,
\[
\mc{S}: ~\gv{u} \mapsto \mc{S} \gv{u} = \mc S\tilde{\gv{u}}, \quad
\forall \gv{u}\in [C^0(\Omega)]^2\cap \gv{S}^m(\Omega),
\]
where $\tilde{\gv{u}}\in \gv{U}_h$ and $\tilde{\gv{u}}(
\gv{x}_K)=\gv{u}( \gv{x}_K)$ for all $K\in\mathcal{T}_h$.
The basis functions of $\gv{V}_h$ are given by the following
process. Define $e_K\in U_h$ to be the characteristic function
of $K$,
\begin{displaymath}
  e_{K}(x)=\begin{cases} 1,\ x \in K,\\ 0,\ x \notin K.\\
\end{cases}
\end{displaymath}
The basis function $\gv{\lambda}_K$ is defined componentwise by the
reconstruction process:
\begin{displaymath}
  \gv{\lambda}_K=\begin{cases}
  \mc{S}[e_K,0],\ \text{x-component},\\ \mc{S}[0,e_K],\ \text{y-component}.\\
\end{cases}
\end{displaymath}
The reconstruction operator can be written explicitly in terms of the
basis functions $\{\gv{\lambda}_K \mid K\in \mathcal{T}_h\}$,
\begin{equation}\label{interpolation}
  \mathcal{S} \gv{u} = \sum_{K \in \mathcal{T}_h} \gv{u}(\gv{x}_{K}) \cdot
  \gv{\lambda}_K, \quad \forall \gv{u} \in [C^0(\Omega)]^2\cap
  \gv{S}^m(\Omega) .
\end{equation}
Next, we illustrate the implementation of the proposed method by an
example on the square domain $[0,1]\times[0,1]$. Here a third-order
solenoidal field reconstruction is considered; the basis functions
$\gv{\xi}_j$ of the corresponding solenoidal space are listed as follows:
\[
\left(
\begin{array}{c}
1 \\ 0\\
\end{array}
\right), \left(
\begin{array}{c}
0\\ 1 \\
\end{array}
\right), \left(
\begin{array}{c}
0\\ x \\
\end{array}
\right), \left(
\begin{array}{c}
x \\ -y \\
\end{array}
\right), \left(
\begin{array}{c}
y \\ 0 \\
\end{array}
\right),
\]
\[
\left(
\begin{array}{c}
0 \\ x^2\\
\end{array}
\right), \left(
\begin{array}{c}
2 xy\\ -y^2 \\
\end{array}
\right), \left(
\begin{array}{c}
x^2\\ -2xy \\
\end{array}
\right), \left(
\begin{array}{c}
y^2 \\ 0\\
\end{array}
\right),
\]
\[
\left(
\begin{array}{c}
0 \\ x^3\\
\end{array}
\right), \left(
\begin{array}{c}
3 xy^2\\ -y^3 \\
\end{array}
\right), \left(
\begin{array}{c}
x^2y\\ -xy^2 \\
\end{array}
\right), \left(
\begin{array}{c}
x^3\\ -3x^2y \\
\end{array}
\right), \left(
\begin{array}{c}
y^3 \\ 0\\
\end{array}
\right).
\]
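As a quick sanity check (illustrative helper code, not part of the paper's implementation), one can verify symbolically that each listed basis field is divergence-free. Polynomials in $x,y$ are stored as dictionaries mapping the exponent pair $(i,j)$ to the coefficient of $x^i y^j$:

```python
# Each basis vector xi_j listed above should satisfy
# d(xi_x)/dx + d(xi_y)/dy = 0.  Polynomials in x, y are stored as dicts
# mapping the exponent pair (i, j) to the coefficient of x^i y^j.

def ddx(p):
    """Partial derivative with respect to x."""
    return {(i - 1, j): i * c for (i, j), c in p.items() if i > 0}

def ddy(p):
    """Partial derivative with respect to y."""
    return {(i, j - 1): j * c for (i, j), c in p.items() if j > 0}

def add(p, q):
    """Sum of two polynomials, dropping zero coefficients."""
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + c
    return {k: c for k, c in r.items() if c != 0}

# The 14 third-order solenoidal basis functions, in the order listed above.
basis = [
    ({(0, 0): 1}, {}),            ({}, {(0, 0): 1}),
    ({}, {(1, 0): 1}),            ({(1, 0): 1}, {(0, 1): -1}),
    ({(0, 1): 1}, {}),
    ({}, {(2, 0): 1}),            ({(1, 1): 2}, {(0, 2): -1}),
    ({(2, 0): 1}, {(1, 1): -2}),  ({(0, 2): 1}, {}),
    ({}, {(3, 0): 1}),            ({(1, 2): 3}, {(0, 3): -1}),
    ({(2, 1): 1}, {(1, 2): -1}),  ({(3, 0): 1}, {(2, 1): -3}),
    ({(0, 3): 1}, {}),
]

for ux, uy in basis:
    assert add(ddx(ux), ddy(uy)) == {}   # the divergence vanishes identically
```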
The total number of degrees of freedom of the third-order solenoidal
field is 14, which means the size of the element patch needs to be at
least 7 to guarantee the solvability of the least squares problem.
Thus we take $\# S(K)$ to be 10 and the barycenters as the sampling
nodes. Figure \ref{tri_mesh_patch} shows the domain and the
corresponding triangulation.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{mesh_patch.pdf}
\includegraphics[width=0.4\textwidth]{patch_point.pdf}
\caption{The triangulation (left) and the element patch with the
  sampling nodes (right).}
\label{tri_mesh_patch}
\end{center}
\end{figure}
Here we take an element $K_0$ as a demonstration element. The element
patch of $K_0$ is taken as
\begin{displaymath}
S(K_0) = \left\{ K_{0},\cdots, K_{i},\cdots ,K_{9} \right\},\quad
i=1,2,\cdots,8,
\end{displaymath}
and the corresponding set of the sampling nodes is
\[
\mc{I}(K_0) = \left\{ (x_{K_0},y_{K_0})
,\cdots,(x_{K_i},y_{K_i}),\cdots ,(x_{K_9},y_{K_9})\right\},\quad
i=1,2,\cdots,8.
\]
The element patch and sampling nodes are shown in Figure
\ref{tri_mesh_patch}.
For a given function $\gv{g}=[g_1,g_2]^T\in \gv{S}^m(\Omega)$, the
least squares problem \eqref{eq:leastsquares} reads
\begin{equation*}
\mc S_{K_0} \gv{g}= \argmin_{\{s_j\}\subset\mb{R}}
\ \sum_{(x_{K'},y_{K'})\in
\mc{I}_{K_0}}\abs{\gv{g}(x_{K'},y_{K'})-\sum_{j=1}^{14}s_j\gv{\xi}_j}^2.
\end{equation*}
The problem is solved directly by calculating the generalized inverse
of a matrix,
\begin{displaymath}
[s_1,\cdots,s_j,\cdots,s_{14}]^T = (A^T A)^{-1} A^T b,
\end{displaymath}
where $A$ and $b$ are defined as follows:
\[
A = \begin{bmatrix} 1 & 0 & 0 & x_{K_0} & y_{K_0}& \cdots\\ \vdots &
\vdots & \vdots & \vdots & \vdots & \cdots\\ 1 & 0 & 0 & x_{K_i} &
y_{K_i}& \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots &
\cdots\\ 1 & 0 & 0 & x_{K_9} & y_{K_9}& \cdots\\ 0 & 1 & x_{K_0}
&-y_{K_0} & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \vdots
& \cdots\\ 0 & 1 & x_{K_i} &-y_{K_i} & 0 & \cdots\\ \vdots & \vdots
& \vdots & \vdots & \vdots & \cdots\\ 0 & 1 & x_{K_9} &-y_{K_9} & 0
& \cdots
\end{bmatrix} ,\quad
b = \begin{bmatrix} g_1(x_{K_0},y_{K_0}) \\ \vdots
\\ g_1(x_{K_i},y_{K_i}) \\ \vdots \\ g_1(x_{K_9},y_{K_9})
\\ g_2(x_{K_0},y_{K_0}) \\ \vdots \\ g_2(x_{K_i},y_{K_i})
\\ \vdots \\ g_2(x_{K_9},y_{K_9})
\end{bmatrix}
\quad i=1,2,\cdots,8.
\]
$A$ is a $20\times 14$ matrix; limited by page space, we only list
the first-order part, and the rest can easily be completed. $b$ is a
$20\times 1$ vector.
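For concreteness, the local solve $s = (A^T A)^{-1} A^T b$ can be sketched in a few lines of plain Python. The sampling nodes and the test field below are made up for illustration, and for brevity only the five first-order solenoidal basis functions are used; each sampling node contributes two rows (the $x$- and $y$-components), exactly as in the matrix $A$ above:

```python
def solve(M, r):
    """Solve the n-by-n system M s = r by Gaussian elimination."""
    n = len(M)
    M = [row[:] + [ri] for row, ri in zip(M, r)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    s = [0.0] * n
    for k in range(n - 1, -1, -1):
        s[k] = (M[k][n] - sum(M[k][j] * s[j] for j in range(k + 1, n))) / M[k][k]
    return s

# First-order solenoidal basis: (1,0), (0,1), (0,x), (x,-y), (y,0).
basis = [lambda x, y: (1, 0), lambda x, y: (0, 1), lambda x, y: (0, x),
         lambda x, y: (x, -y), lambda x, y: (y, 0)]

def fit(nodes, g):
    """Least squares fit of g at the sampling nodes via normal equations."""
    A = [[basis[j](x, y)[c] for j in range(len(basis))]
         for c in (0, 1) for (x, y) in nodes]
    b = [g(x, y)[c] for c in (0, 1) for (x, y) in nodes]
    m = len(basis)
    AtA = [[sum(row[j] * row[k] for row in A) for k in range(m)] for j in range(m)]
    Atb = [sum(row[j] * bi for row, bi in zip(A, b)) for j in range(m)]
    return solve(AtA, Atb)  # s = (A^T A)^{-1} A^T b

nodes = [(0.1, 0.2), (0.4, 0.1), (0.3, 0.5), (0.7, 0.6)]   # made-up nodes
g = lambda x, y: (2 + x + 3 * y, -1 + 4 * x - y)           # div g = 1 - 1 = 0
s = fit(nodes, g)
# A first-order solenoidal field lies in the span, so it is recovered exactly:
# g = 2*(1,0) - 1*(0,1) + 4*(0,x) + 1*(x,-y) + 3*(y,0).
assert all(abs(si - ei) < 1e-9 for si, ei in zip(s, [2, -1, 4, 1, 3]))
```

The exact recovery of a first-order solenoidal field illustrates the reproduction property of the reconstruction; for the third-order space the same procedure is applied with all 14 basis functions.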
The matrix $(A^T A)^{-1} A^T$ contains all the information of the basis
functions defined on the element patch of $K_0$; it is related to
$\gv{\lambda}_{K_i},i=0,\cdots,9$. The basis function
$\gv{\lambda}_{K_0}$ is determined after the reconstruction is
conducted on each element. Figure \ref{tri_mesh_basis} shows the basis
function $\gv{\lambda}_{K_0}$. We note that the support of
$\gv{\lambda}_{K_0}$ is not equal to the element patch
$S(K_0)$. Instead, for any element $K\in\mathcal{T}_h$, the support of the
basis function $\gv{\lambda}_K$ is the union of all the element
patches that include $K$:
\begin{equation}\label{supp}
\mathop{\mathrm{supp}}(\gv{\lambda}_K)= \bigcup_{K' \in \mathcal{T}_h, K \in S(K')} K'.
\end{equation}
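The support relation \eqref{supp} is straightforward to compute from the element patches. The following sketch (illustrative code, not from the paper) uses a hypothetical chain of six elements, where each patch $S(K')$ collects $K'$ and its immediate neighbours:

```python
# Hypothetical chain of six elements 0..5; each patch S(K') contains K'
# and its immediate neighbours (clamped at the ends of the chain).
patches = {k: {max(k - 1, 0), k, min(k + 1, 5)} for k in range(6)}

def support(K):
    """supp(lambda_K): union of all K' whose element patch contains K."""
    return {Kp for Kp, patch in patches.items() if K in patch}

assert support(2) == {1, 2, 3}   # interior element
assert support(0) == {0, 1}      # boundary element
```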
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{x_basis.pdf}
\includegraphics[width=0.4\textwidth]{y_basis.pdf}
\caption{The $x$-component of the basis function (left) and the
  $y$-component (right).}
\label{tri_mesh_basis}
\end{center}
\end{figure}
Due to the discontinuity of the reconstructed finite element space,
the DG method can be directly employed. We end this section by
introducing the average and jump operators that are commonly used in
DG methods. Let $e$ be an interior edge shared by two adjacent
elements, $e=\partial K^{+} \cap \partial K^{-}$. The corresponding
unit outward normal vectors are denoted by $\gv{\mathrm n}^{+}$ and
$\gv{\mathrm n}^{-}$. Let $v$ and $\gv{v}$ be scalar-valued and
vector-valued functions on $\mathcal T_h$, respectively. Following the
traditional DG notation, we define the \emph{average} operator
$\{ \cdot \}$ on element edges as follows:
\begin{displaymath}
\{v\}=\frac{1}{2}(v^{+} + v^{-}), \quad \{ \gv{v} \} =
\frac{1}{2}(\gv{v}^{+} + \gv{v}^{-}) , \quad\text{on }\ e\in\Gamma^0.
\end{displaymath}
Here
$v^+=v|_{K^+},\ v^-=v|_{K^-},\ \gv v^+=\gv v|_{K^+},\ \gv v^-=\gv
v|_{K^-}$. The \emph{jump} operator $[ \hspace{-2pt} [ \cdot ] \hspace{-2pt} ]$ on element edges is defined as
\begin{displaymath}
\begin{aligned}
[ \hspace{-2pt} [ v ] \hspace{-2pt} ] =v^{+} \gv{\mathrm n}^{+} + v^{-} \gv{\mathrm n}^{-},
\quad [ \hspace{-2pt} [ \gv{v} ] \hspace{-2pt} ] =\gv{v}^{+}\cdot \gv{\mathrm
n}^{+}+\gv{v}^{-}\cdot \gv{\mathrm n}^{-}, \\ [ \hspace{-2pt} [ \gv{v}
\otimes\gv{\mathrm n} ] \hspace{-2pt} ] =\gv{v}^{+}\otimes \gv{\mathrm
n}^{+}+\gv{v}^{-}\otimes \gv{\mathrm n}^{-},\quad
\text{on}\ e\in\Gamma^0.\\
\end{aligned}
\end{displaymath}
For $e \in \partial \Omega$, we set
\begin{displaymath}
\begin{aligned}
\{v\}=v&,\quad \{\gv v\}=\gv v,\quad [ \hspace{-2pt} [ v ] \hspace{-2pt} ]=v\gv{\mathrm n},
\\ [ \hspace{-2pt} [ \gv v ] \hspace{-2pt} ]=\gv v\cdot\gv{\mathrm n}&, \quad [ \hspace{-2pt} [
\gv{v}\otimes \gv{\mathrm n} ] \hspace{-2pt} ] = \gv{v}\otimes \gv{\mathrm n}
,\quad\text{on}\ e\in\partial\Omega.\\
\end{aligned}
\end{displaymath}
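The average and jump operators can be exercised numerically; the traces and the normal below are arbitrary test data, not from the paper. Since $\gv{\mathrm n}^- = -\gv{\mathrm n}^+$, the scalar jump equals $(v^+-v^-)\gv{\mathrm n}^+$, and the standard edge product rule (the jump of $vw$ equals the jump of $v$ times the average of $w$ plus the average of $v$ times the jump of $w$) can be checked:

```python
# Small numerical sketch of the average and jump operators on an
# interior edge; all values below are arbitrary test data.
n_plus = (0.6, 0.8)                    # unit normal n^+
n_minus = (-n_plus[0], -n_plus[1])     # n^- = -n^+

def avg(vp, vm):
    return 0.5 * (vp + vm)

def jump(vp, vm):
    # [[v]] = v^+ n^+ + v^- n^-  (a vector for scalar v)
    return (vp * n_plus[0] + vm * n_minus[0],
            vp * n_plus[1] + vm * n_minus[1])

vp, vm, wp, wm = 2.0, 0.5, -1.0, 3.0

# [[v]] = (v^+ - v^-) n^+ since n^- = -n^+
expected = ((vp - vm) * n_plus[0], (vp - vm) * n_plus[1])
assert all(abs(a - b) < 1e-12 for a, b in zip(jump(vp, vm), expected))

# Edge product rule: [[v w]] = [[v]]{w} + {v}[[w]]
lhs = jump(vp * wp, vm * wm)
rhs = tuple(jv * avg(wp, wm) + avg(vp, vm) * jw
            for jv, jw in zip(jump(vp, vm), jump(wp, wm)))
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```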
\section{Approximation of Stokes Equation}\label{sec:weakform}
We consider the Stokes equation with Dirichlet boundary condition:
\begin{equation}\label{eq:gov}
\left\{\begin{array}{ll}- \Delta \gv{u} + \nabla p=\gv{f} &
\mathrm{in}\,\, \Omega, \\[1.5ex] \div \gv{u} = 0 & \mathrm{in }\,\,
\Omega, \\[1.5ex] \gv{u} = \gv{g} & \mathrm{on }\,\, \partial\Omega,
\\[1.5ex]
\end{array}\right.
\end{equation}
where $\gv{u}$ is the velocity field, $p$ is the pressure, and
$\gv{g}$ is the given boundary value satisfying the compatibility
condition $(\gv{g}\cdot \gv{n},1)_{\partial \Omega} = 0$. We take the
local divergence-free reconstruction space $\gv{V}_h$ as the trial and
test function spaces. The approximation problem of \eqref{eq:gov} is:
{\it Seek $ \gv{u}_h \in \gv{V}_h$, such that
\begin{equation}\label{eq:weakform}
B_h(\gv{u}_h,\gv{v}_h) = F_h(\gv{v}_h), \quad \forall \gv{v}_h \in
\gv{V}_h,
\end{equation}
where the bilinear form is defined by
\begin{equation}\label{eq:bilinear_operator}
\begin{split}
B_h(\gv{u}_h,\gv{v}_h) = & \sum_{K\in \mathcal{T}_h} ( \grad
\gv{u}_h,\grad \gv{v}_h) - ([ \hspace{-2pt} [ \gv{u}_h \otimes \gv{n} ] \hspace{-2pt} ] ,\{\grad
\gv{v}_h\})_{\Gamma_h} - (\{\grad \gv{u}_h\}, [ \hspace{-2pt} [ \gv{v}_h \otimes
\gv{n} ] \hspace{-2pt} ] )_{\Gamma_h} \\ &+\eta ([ \hspace{-2pt} [ \gv{u}_h \otimes \gv{n}] \hspace{-2pt} ],
[ \hspace{-2pt} [ \gv{v}_h \otimes \gv{n}] \hspace{-2pt} ])_{\Gamma_h} +\epsilon ([ \hspace{-2pt} [ \gv{u}_h
\cdot \gv{n} ] \hspace{-2pt} ] , [ \hspace{-2pt} [ \gv{v}_h \cdot \gv{n}] \hspace{-2pt} ])_{\Gamma_h},
\end{split}
\end{equation}
and
\begin{equation}\label{eq:linear_operator}
F_h(\gv{v}_h) = \sum_{K\in \mathcal{T}_h} ( \gv{f} , \gv{v}_h ) -
(\gv{g}, \grad \gv{v}_h \cdot \gv{n})_{\partial \Omega} + \eta
(\gv{g},\gv{v}_h)_{\partial \Omega} + \epsilon ( \gv{g} \cdot \gv{n} ,
\gv{v}_h \cdot \gv{n})_{\partial \Omega},
\end{equation}
where $\eta$ and $\epsilon$ are penalty parameters corresponding to
the different penalty terms.}
We note that the weak form \eqref{eq:weakform} is inconsistent
\cite{hansbo2008piecewise}. Precisely, the solution $\gv{u}$ of the
equation \eqref{eq:gov} does not satisfy the weak form
\eqref{eq:weakform}, that is,
\[
B_h(\gv{u},\gv{v}_h)\neq F_h(\gv{v}_h) , \quad \forall \gv{v}_h \in
\gv{V}_h.
\]
The inconsistency is caused by the discontinuity of the normal component
$\gv{v}_h \cdot \gv{n}$ on interior element edges. Instead of
satisfying the weak form \eqref{eq:weakform}, $\gv{u}$ satisfies
\begin{equation}\label{eq:inconsistent}
B_h(\gv{u},\gv{v}_h)=F_h(\gv{v}_h) -(p , [ \hspace{-2pt} [ \gv{v}_h \cdot
\gv{n} ] \hspace{-2pt} ] )_{\Gamma_h}, \quad \forall \gv{v}_h \in \gv{V}_h.
\end{equation}
Therefore, for the interior penalty scheme, the inconsistency penalty
parameter $\epsilon$ needs to be taken large enough to control the
consistency error, similarly to the interior penalty method for the
elliptic equation \cite{arnold2002unified}. Meanwhile, the penalty
parameter $\eta$ is chosen to guarantee the stability of the operator
$B_h(\cdot,\cdot)$. The magnitudes of the penalty parameters are taken
as follows:
\begin{equation}\label{penalty}
\epsilon=\mathrm{O}(h^{-(m+1)}), \quad \eta=\mathrm{O}(h^{-1}),
\end{equation}
where $m$ is the degree of polynomials in $\gv{V}_h$.
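As a worked numeric illustration (with the hidden constants in \eqref{penalty} set to $1$, which is an assumption), the following sketch shows that for $h<1$ the inconsistency penalty $\epsilon$ always dominates $\eta$ and grows rapidly with $m$:

```python
# Penalty magnitudes with all O(.)-constants taken as 1 (an assumption):
# epsilon = h^{-(m+1)} grows much faster than eta = h^{-1}.
for m in (1, 2, 3, 4):
    for h in (0.1, 0.05, 0.025):
        eps, eta = h ** (-(m + 1)), h ** (-1)
        assert eps > eta   # for h < 1 the inconsistency penalty dominates
```

For example, at $h=0.025$ and $m=4$ the ratio $\epsilon/\eta = h^{-m}$ already exceeds $10^6$, which is consistent with the ill-conditioning discussed in the numerical examples and conclusions.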
Let us define some mesh-dependent semi-norms: for all $\gv{v}_h \in
\gv{V}_h$,
\begin{equation*}
\begin{split}
|\gv{v}_h|_{1,h}^{2} &:= \sum_{K\in
\mathcal{T}_h}|\gv{v}_h|_{1,K}^{2}, \qquad |\gv{v}_h|_{*}^{2} :=
\sum_{e\in \Gamma_h} h_e^{-1} \|[ \hspace{-2pt} [ \gv{v}_h \otimes \gv{n}] \hspace{-2pt} ]
\|_{L^2(e)}^2 ,\\ \ |\gv{v}_h|_{\diamond}^{2} &:= \sum_{e\in \Gamma_h}
h_e^{-(m+1)}\|[ \hspace{-2pt} [ \gv{v}_h\cdot \gv{n} ] \hspace{-2pt} ] \|_{L^2(e)}^2,
\end{split}
\end{equation*}
and introduce the energy norm,
\begin{equation}\label{eq:energy_norm}
\|\gv{v_h}\|_{h}^2 := |\gv{v}_h|^2_{1,h}+ |\gv{v}_h|^2_{*} +
|\gv{v}_h|^2_{\diamond}.
\end{equation}
By the results in the previous section, we immediately have the
following approximation estimate in the energy norm.
\begin{lemma}\label{lemma:approximate_energy_error}
Let $\gv{u}\in [H^{m+1}(\Omega)]^2$ be the solution of equation
\eqref{eq:gov} and let $\gv{u}_I\in \gv{V}_h$ be the interpolation
of $\gv{u}$. Then there exists a constant $C$ depending on $N$,
$\sigma$, $\gamma$ and $m$ such that
\begin{equation}\label{eq:approx_energy_error}
\|\gv{u}-\gv{u}_I\|_h \leq C
(h^{\frac{m}{2}}+\Lambda_{m}d^{\frac{m}{2}}) | \gv{u}
|_{H^{m+1}(\Omega)} .
\end{equation}
\end{lemma}
Here we need to clarify that the approximation error is not optimal,
due to the fact that the BDM space is not a subspace of the
approximation space $\gv{V}_h$. The third term of the energy norm is
dominant in the approximation error, which prevents the numerical
solution from attaining the optimal accuracy order.
For the consistency error, we have the following estimate.
\begin{lemma}\label{lemma:inconsistent}
For all $\gv{v}_{h} \in \gv{V}_h$ and $p\in H^1(\Omega)$, we
have
\begin{equation}\label{eq:inconsistent_error}
\sum_{e\in \Gamma_h} \int_{e} p [ \hspace{-2pt} [ \gv{v}_{h} \cdot \gv{n} ] \hspace{-2pt} ]\,ds
\leq C h^\frac{m}{2} \|p\|_{H^{1}(\Omega)} \|\gv{v}_{h}\|_h.
\end{equation}
\end{lemma}
\begin{proof}
By the {\em Agmon inequality}~\eqref{eq:agmon}, we have
\begin{equation*}
\begin{split}
\sum_{e\in \Gamma_h} \int_{e}p [ \hspace{-2pt} [ \gv{v}_{h} \cdot \gv{n} ] \hspace{-2pt} ] \,ds &
\leq \sum_{e\in \Gamma_h} \int_{e} h^{\frac{m+1}{2}} p
h^{-\frac{m+1}{2}} [ \hspace{-2pt} [ \gv{v}_{h} \cdot \gv{n} ] \hspace{-2pt} ] \,ds \\ & \leq C
h^{\frac{m}{2}} \sum_{K \in \mathcal{T}_h} \Lr{ \|p\|_{L^{2}(K)}^2+h^2
\|\nabla p\|_{K}^2}^{\frac{1}{2}} \sum_{e\in \Gamma_h}
\Lr{h^{-(m+1)} \|[ \hspace{-2pt} [ \gv{v}_{h} \cdot \gv{n} ] \hspace{-2pt} ]\|_{L^2(e)}^2
}^{\frac{1}{2}} \\ & \leq C h^{\frac{m}{2}} \|p\|_{H^{1}(\Omega)}
\|\gv{v}_{h}\|_h.
\end{split}
\end{equation*}
\end{proof}
Next, the boundedness and the stability of the operator
$B_h(\cdot,\cdot)$ are stated as follows.
\begin{lemma}\label{bounded}
For all $\gv{u}_{h}, \gv{v}_{h} \in \gv{V}_h$ and sufficiently
large $\eta$ and $\epsilon$, we have
\begin{equation}
\begin{split}
B_h(\gv{u}_{h},\gv{v}_{h})& \leq C \|\gv{u}_{h}\|_h\|\gv{v}_{h}\|_h
,\\ B_h(\gv{v}_{h},\gv{v}_{h}) & \geq C \|\gv{v}_{h}\|_h^2.
\end{split}
\end{equation}
\end{lemma}
Now, with the above lemmas, we are ready to give the error estimate of
the numerical solution of \eqref{eq:weakform}.
\begin{theorem}\label{thm:energy_error}
Let $\gv{u}\in [H^{m+1}(\Omega)]^2$, $p\in H^1(\Omega)$ be the
solution of equation \eqref{eq:gov}, and let $\gv{u}_h\in \gv{V}_h$
be the solution of equation \eqref{eq:weakform}. Then we have
\begin{equation}\label{energy_error_estimate}
\|\gv{u}-\gv{u}_h\|_h \leq C h^{\frac{m}{2}} \Lr{| \gv{u}
|_{H^{m+1}(\Omega)}+ \|p\|_{H^{1}(\Omega)}}.
\end{equation}
Furthermore, the $L^2$ error estimate reads
\begin{equation}\label{L2_error_estimate}
\begin{split}
\|\gv{u} - \gv{u}_h \|_{L^2(\Omega)} &\leq C h \Lr{| \gv{u}
|_{H^{m+1}(\Omega)}+ \|p\|_{H^{1}(\Omega)}},\ m=1, \\ \|\gv{u} -
\gv{u}_h \|_{L^2(\Omega)} &\leq C h^{\frac{m}{2}+1}\Lr{ | \gv{u}
|_{H^{m+1}(\Omega)}+ \|p\|_{H^{1}(\Omega)}}, \ m\geq 2 .
\end{split}
\end{equation}
\end{theorem}
\begin{proof}
We split the error using an interpolation function $\gv{u}_I$ in
$\gv{V}_h$,
\begin{equation}\label{eq:tri}
\|\gv{u}-\gv{u}_h\|_h \leq \|\gv{u}-\gv{u}_I\|_h +
\|\gv{u}_I-\gv{u}_h\|_h.
\end{equation}
Then, together with the consistency error, we have
\begin{equation}
B_h(\gv{u}-\gv{u}_h,\gv{v}_h) = (p , [ \hspace{-2pt} [ \gv{v}_h \cdot \gv{n} ] \hspace{-2pt} ]
)_{\Gamma_h}, \quad \forall \gv{v}_h \in \gv{V}_h.
\end{equation}
From the stability of the bilinear operator $B_h(\cdot,\cdot)$, the second
term in \eqref{eq:tri} satisfies
\begin{equation}\label{error}
\begin{split}
C \|\gv{u}_I-\gv{u}_h\|^2_h\leq &
B_h(\gv{u}_I-\gv{u}_h,\gv{u}_I-\gv{u}_h) \\ = &
B_h(\gv{u}_I-\gv{u},\gv{u}_I-\gv{u}_h) +
B_h(\gv{u}-\gv{u}_h,\gv{u}_I-\gv{u}_h) \\ = & B_h
(\gv{u}_I-\gv{u},\gv{u}_I-\gv{u}_h) + \left(p, [ \hspace{-2pt} [ \gv{n} \cdot
(\gv{u}_I-\gv{u}_h) ] \hspace{-2pt} ] \right )_{\Gamma_h^0} \\ \leq & C
\|\gv{u}-\gv{u}_I\|_h \|\gv{u}_I-\gv{u}_h\|_h + C h^{\frac{m}{2}}
\|p\|_{H^{1}(\Omega)} \|\gv{u}_{I}-\gv{u}_h\|_h .
\end{split}
\end{equation}
Collecting the above estimates and Lemma \ref{lemma:inconsistent}
gives
\begin{equation}\label{eq:err_interp_numerial}
\|\gv{u}_I-\gv{u}_h\|_h \leq C h^{\frac{m}{2}}\Lr{ | \gv{u}
|_{H^{m+1}(\Omega)}+ \|p\|_{H^{1}(\Omega)}}.
\end{equation}
The first term in \eqref{eq:tri} is the interpolation error,
\begin{equation}\label{eq:interp}
\|\gv{u} - \gv{u}_I\|_{h} \leq C h^{\frac{m}{2}}
|\gv{u}|_{H^{m+1}(\Omega)}.
\end{equation}
Collecting estimates \eqref{eq:tri}, \eqref{eq:err_interp_numerial}
and \eqref{eq:interp}, we obtain
\eqref{energy_error_estimate}.
For the $L^2$ error estimate, we introduce the auxiliary functions
$(\gv{\varphi} , q)$ as the solution of the adjoint problem
\begin{equation}\label{eq:adjoint}
\left\{\begin{array}{ll}- \Delta \gv{\varphi} + \nabla
q=\gv{u}-\gv{u}_h & \mathrm{in }\,\, \Omega, \\[1.5ex] \div
\gv{\varphi} = 0 & \mathrm{in }\,\, \Omega, \\[1.5ex] \gv{\varphi} = 0
& \mathrm{on }\,\, \partial\Omega, \\[1.5ex]
\end{array}\right.
\end{equation}
Testing with $\gv{u}-\gv{u}_h$, we have
\begin{equation}\label{eq:l2_split}
\begin{split}
(\gv{u}-\gv{u}_h,\gv{u}-\gv{u}_h)_{\Omega}=&
B_h(\gv{\varphi},\gv{u}-\gv{u}_h) + (q , [ \hspace{-2pt} [ (\gv{u}-\gv{u}_h) \cdot
\gv{n} ] \hspace{-2pt} ] )_{\Gamma_h}.
\end{split}
\end{equation}
The regularity of Stokes equations implies
\begin{equation}
\begin{split}
\|q\|_{H^1(\Omega)} &\leq \|\gv{u} - \gv{u}_h \|_{L^2(\Omega)},
\\ \|\gv{\varphi}\|_{H^2(\Omega)} &\leq \|\gv{u} - \gv{u}_h
\|_{L^2(\Omega)}, \\ \|\gv{\varphi} - \gv{\varphi}_{BDM} \|_{h} &\leq
h\|\gv{u} - \gv{u}_h \|_{L^2(\Omega)},
\end{split}
\end{equation}
where $\gv{\varphi}_{BDM}$ is the BDM interpolation function. This
gives
\begin{equation}\label{eq:l2_split_t1}
\begin{split}
B_h(\gv{\varphi},\gv{u}-\gv{u}_h) &=
B_h(\gv{\varphi}-\gv{\varphi}_{BDM},\gv{u}-\gv{u}_h) \\ &\leq C
\|\gv{u}-\gv{u}_h\|_{h} \|\gv{\varphi}-\gv{\varphi}_{BDM}\|_{h}
\\ &\leq C h \|\gv{u}-\gv{u}_h\|_{h} \|\gv{u} - \gv{u}_h
\|_{L^2(\Omega)}
\end{split}
\end{equation}
and
\begin{equation}\label{eq:l2_split_t2}
\begin{split}
(q , [ \hspace{-2pt} [ (\gv{u}-\gv{u}_h) \cdot \gv{n} ] \hspace{-2pt} ] )_{\Gamma_h} &\leq C
h^{\frac{m}{2}} \|q\|_{H^1(\Omega)}\|\gv{u}-\gv{u}_h\|_{h} \\ &\leq
C h^{\frac{m}{2}} \|\gv{u}-\gv{u}_h\|_{h} \|\gv{u} - \gv{u}_h
\|_{L^2(\Omega)}.
\end{split}
\end{equation}
Inserting \eqref{eq:l2_split_t1} and \eqref{eq:l2_split_t2} into
equation \eqref{eq:l2_split} yields
\begin{equation}\label{eq:l2_split_t3}
\|\gv{u}-\gv{u}_h\|_{L^2(\Omega)} \leq C h \|\gv{u}-\gv{u}_h\|_{h} + C
h^{\frac{m}{2}} \|\gv{u}-\gv{u}_h\|_{h}.
\end{equation}
Then, substituting the energy norm estimate
\eqref{energy_error_estimate} into \eqref{eq:l2_split_t3}, the $L^2$
error estimate \eqref{L2_error_estimate} is obtained.
\end{proof}
\section{Numerical Examples}\label{sec:examples}
We present some numerical examples to illustrate the
effectiveness of the proposed method. One merit of the method is that
the size of the linear system is determined by the given mesh and
does not grow with the approximation order. The maximum degree of the
approximation polynomial is taken as $4$, limited by the condition
number of the linear system. Numerical examples with various polygonal
meshes are demonstrated. A direct solver is used to solve the
linear algebraic systems after discretisation.
\subsection{Analytical example}
This is an example with a smooth solution to verify the
convergence order predicted by the error estimates. The
Stokes problem with Dirichlet boundary condition is solved in the
two-dimensional square domain $\Omega=[0,1]\times[0,1]$. The exact
velocity field and pressure are given as follows:
\begin{align*}
\gv{u} =&\left(\begin{array}{ll} \sin(2\pi x)\cos (2\pi y) \\ -
\cos(2\pi x)\sin (2\pi y)
\end{array}\right),\\
p= &x^2+y^2.
\end{align*}
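As a quick check (illustrative code, not from the paper), the exact velocity above is divergence-free, which can be confirmed by central finite differences at a few arbitrary points:

```python
# Finite-difference check that div u = d u1/dx + d u2/dy vanishes for
# the exact velocity field of the analytical example.
from math import sin, cos, pi

def u1(x, y):
    return sin(2 * pi * x) * cos(2 * pi * y)

def u2(x, y):
    return -cos(2 * pi * x) * sin(2 * pi * y)

def div_u(x, y, h=1e-6):
    # central differences for the two partial derivatives
    return ((u1(x + h, y) - u1(x - h, y)) / (2 * h)
            + (u2(x, y + h) - u2(x, y - h)) / (2 * h))

for x, y in [(0.13, 0.27), (0.5, 0.81), (0.66, 0.09)]:
    assert abs(div_u(x, y)) < 1e-6   # zero up to discretisation error
```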
The body force $\gv{f}$ and the boundary condition $\gv{g}$ are given
correspondingly. Quasi-uniform triangular and quadrilateral meshes
are used, both generated by the software {\it
  gmsh} \cite{geuzaine2009gmsh}.
\begin{table}[]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c||c||c||c||c||c||c||c||}
\hline & & h=1.0E-1 & 5.0E-2 & & 2.5E-2 & & 1.25E-2 & & 6.25E-3
& \\ \cline{3-4} \cline{6-6} \cline{8-8} \cline{10-10}
\multirow{-2}{*}{$P^{m}$} & \multirow{-2}{*}{Norm} & error &
error & \multirow{-2}{*}{order} & error &
\multirow{-2}{*}{order} & error & \multirow{-2}{*}{order} &
error & \multirow{-2}{*}{order} \\ \hline \hline
& $\|\cdot\|_{L^{2}}$ & 5.41E-02 & 2.41E-02 & 1.16 & 9.97E-03 &
1.27 & 4.20E-03 & 1.24 & 1.92E-03 & 1.13 \\ \cline{2-11} &
\multicolumn{1}{c|}{$\|\cdot\|_{1,h}$} &
\multicolumn{1}{c|}{1.50E+00} & 8.76E-01 & 0.77 & 4.61E-01 &
0.92 & 2.55E-01 & 0.85 & 1.48E-01 & 0.77 \\ \cline{2-11}
\multirow{-3}{*}{1} & \multicolumn{1}{c|}{$\|\cdot\|_{h}$} &
\multicolumn{1}{c|}{2.10E+00} & 1.39E+00 & 0.59 & 8.16E-01 &
0.77 & 4.97E-01 & 0.71 & 3.15E-01 & 0.65 \\ \hline \hline
& $\|\cdot\|_{L^{2}}$ & 1.47E-02 & 2.88E-03 & 2.34 & 6.79E-04 &
2.08 & 1.72E-04 & 1.97 & 4.47E-05 & 1.94 \\ \cline{2-11} &
\multicolumn{1}{c|}{$\|\cdot\|_{1,h}$} &
\multicolumn{1}{c|}{3.97E-01} & 1.19E-01 & 1.73 & 3.78E-02 &
1.66 & 1.28E-02 & 1.55 & 4.84E-03 & 1.40 \\ \cline{2-11}
\multirow{-3}{*}{2} & \multicolumn{1}{c|}{$\|\cdot\|_{h}$} &
\multicolumn{1}{c|}{7.72E-01} & 3.52E-01 & 1.13 & 1.55E-01 &
1.17 & 7.53E-01 & 1.04 & 3.71E-03 & 1.01 \\ \hline \hline
& $\|\cdot\|_{L^{2}}$ & 5.84E-03 & 8.18E-04 & 2.83 & 1.06E-04 &
2.94 & 1.88E-05 & 2.49 & 2.97E-06 & 2.66 \\ \cline{2-11} &
\multicolumn{1}{c|}{$\|\cdot\|_{1,h}$} &
\multicolumn{1}{c|}{1.41E-01} & 2.67E-02 & 2.40 & 5.30E-03 &
2.33 & 1.48E-03 & 1.83 & 3.54E-04 & 2.06 \\ \cline{2-11}
\multirow{-3}{*}{3} & \multicolumn{1}{c|}{$\|\cdot\|_{h}$} &
\multicolumn{1}{c|}{3.33E-01} & 9.47E-01 & 1.81 & 2.66E-02 &
1.83 & 9.20E-03 & 1.53 & 3.04E-03 & 1.59 \\ \hline \hline
& $\|\cdot\|_{L^{2}}$ & 1.14E-03 & 1.03E-04 & 3.46 & 8.25E-06 &
3.64 & 1.08E-06 & 2.92 & 1.36E-07 & 2.99 \\ \cline{2-11} &
\multicolumn{1}{c|}{$\|\cdot\|_{1,h}$} &
\multicolumn{1}{c|}{3.24E-02} & 4.06E-03 & 2.99 & 4.99E-04 &
3.02 & 9.96E-05 & 2.32 & 1.77E-05 & 2.48 \\ \cline{2-11}
\multirow{-3}{*}{4} & \multicolumn{1}{c|}{$\|\cdot\|_{h}$} &
\multicolumn{1}{c|}{9.44E-02} & 1.62E-02 & 2.54 & 3.76E-03 &
2.10 & 9.15E-04 & 2.04 & 2.12E-04 & 2.10 \\ \hline
\end{tabular}}
\caption{Numerical errors on the quasi-uniform triangle meshes for
Example 1.}
\label{tri_mesh_table}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.3\textwidth]{tri_L2.pdf}
\includegraphics[width=0.3\textwidth]{tri_H1.pdf}
\includegraphics[width=0.3\textwidth]{tri_U.pdf}
\caption{The accuracy order on triangle meshes for Example 1.}
\label{tri_mesh_fig}
\end{center}
\end{figure}
\begin{table}[]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c||c||c||c||c||c||c||c||}
\hline & & h=1.0E-1 & 5.0E-2 & & 2.5E-2 & & 1.25E-2 & & 6.25E-3
& \\ \cline{3-4} \cline{6-6} \cline{8-8} \cline{10-10}
\multirow{-2}{*}{$P^{m}$} & \multirow{-2}{*}{Norm} & error &
error & \multirow{-2}{*}{order} & error &
\multirow{-2}{*}{order} & error & \multirow{-2}{*}{order} &
error & \multirow{-2}{*}{order} \\ \hline \hline
& $\|\cdot\|_{L^{2}}$ & 1.03E-01 & 4.41E-02 & 1.22 & 1.74E-02 &
1.33 & 8.14E-03 & 1.10 & 3.67E-03 & 1.14 \\ \cline{2-11} &
\multicolumn{1}{c|}{$\|\cdot\|_{1,h}$} &
\multicolumn{1}{c|}{1.97E+00} & 1.08E+00 & 0.86 & 5.54E-01 &
0.96 & 2.96E-01 & 0.90 & 1.62E-01 & 0.87 \\ \cline{2-11}
\multirow{-3}{*}{1} & \multicolumn{1}{c|}{$\|\cdot\|_{h}$} &
\multicolumn{1}{c|}{3.10E+00} & 1.92E+00 & 0.69 & 1.22E+00 &
0.64 & 8.27E-01 & 0.56 & 5.50E-01 & 0.58 \\ \hline \hline
& $\|\cdot\|_{L^{2}}$ & 3.80E-02 & 7.71E-03 & 2.30 & 1.92E-03 &
1.99 & 4.97E-04 & 1.95 & 1.21E-04 & 2.03 \\ \cline{2-11} &
\multicolumn{1}{c|}{$\|\cdot\|_{1,h}$} &
\multicolumn{1}{c|}{7.95E-01} & 2.06E-01 & 1.94 & 7.11E-02 &
1.53 & 2.81E-02 & 1.33 & 1.10E-02 & 1.35 \\ \cline{2-11}
\multirow{-3}{*}{2} & \multicolumn{1}{c|}{$\|\cdot\|_{h}$} &
\multicolumn{1}{c|}{1.48E+00} & 5.82E-01 & 1.35 & 2.63E-01 &
1.14 & 1.22E-01 & 1.10 & 5.58E-03 & 1.13 \\ \hline \hline
& $\|\cdot\|_{L^{2}}$ & 1.11E-02 & 1.88E-03 & 2.56 & 2.75E-04 &
2.77 & 3.94E-05 & 2.80 & 5.96E-06 & 2.72 \\ \cline{2-11} &
\multicolumn{1}{c|}{$\|\cdot\|_{1,h}$} &
\multicolumn{1}{c|}{2.59E-01} & 3.31E-02 & 2.96 & 6.17E-03 &
2.42 & 1.74E-03 & 1.82 & 4.92E-04 & 1.82 \\ \cline{2-11}
\multirow{-3}{*}{3} & \multicolumn{1}{c|}{$\|\cdot\|_{h}$} &
\multicolumn{1}{c|}{4.11E-01} & 9.71E-02 & 2.08 & 3.06E-02 &
1.66 & 1.10E-02 & 1.46 & 3.82E-03 & 1.53 \\ \hline \hline
& $\|\cdot\|_{L^{2}}$ & 8.18E-03 & 6.85E-04 & 3.57 & 5.32E-05 &
3.68 & 4.15E-06 & 3.68 & 4.04E-07 & 3.35 \\ \cline{2-11} &
\multicolumn{1}{c|}{$\|\cdot\|_{1,h}$} &
\multicolumn{1}{c|}{1.59E-01} & 9.69E-03 & 4.03 & 1.26E-03 &
2.94 & 2.18E-04 & 2.52 & 4.06E-05 & 2.42 \\ \cline{2-11}
\multirow{-3}{*}{4} & \multicolumn{1}{c|}{$\|\cdot\|_{h}$} &
\multicolumn{1}{c|}{2.52E-01} & 3.55E-02 & 2.82 & 8.73E-03 &
2.02 & 2.25E-03 & 1.95 & 5.62E-04 & 2.00 \\ \hline
\end{tabular}}
\caption{Numerical errors on the quasi-uniform quadrilateral meshes
for Example 1.}
\label{quad_mesh_table}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.3\textwidth]{quad_L2.pdf}
\includegraphics[width=0.3\textwidth]{quad_H1.pdf}
\includegraphics[width=0.3\textwidth]{quad_U.pdf}
\caption{The accuracy order on quadrilateral meshes for Example 1.}
\label{quad_mesh_fig}
\end{center}
\end{figure}
Tables \ref{tri_mesh_table} and \ref{quad_mesh_table} present the
errors of the numerical solution in the $L^2$ norm, the
$|\cdot|_{1,h}$ seminorm and the $\|\cdot\|_h$ norm. Figures
\ref{tri_mesh_fig} and \ref{quad_mesh_fig} plot the accuracy orders
under the different norms, respectively. The behavior of the proposed
method agrees with the theoretical analysis, which predicts the
convergence order $h^{\frac{m}{2}}$ in the $\|\cdot\|_h$ norm. The
convergence orders in the $L^2$ norm and the $|\cdot|_{1,h}$ seminorm
are not the optimal ones of the usual situation, which are $m+1$ and
$m$, respectively. This is due to the fact that the space $\gv{V}_h$
is not large enough to contain the $BDM$ space as a subspace, in
contrast to the traditional discontinuous Galerkin space. The
difference between the $|\cdot|_{1,h}$ seminorm and $\|\cdot\|_h$ norm
errors implies that the seminorm $|\cdot|_{\diamond}$ is dominant in
the energy norm. The condition number of the resulting matrix is
mainly determined by the penalty parameter $\epsilon$, and the rapid
growth of $\epsilon$ with increasing polynomial degree results in a
severely ill-conditioned linear system.
\subsection{Lid-driven cavity example}
The lid-driven cavity is a benchmark test for incompressible flow
that does not have an exact solution. Here we set the cavity domain
as the rectangle $\Omega=[0,1]\times[0,1.5]$. The body force is zero,
and the Dirichlet boundary condition is given as
\begin{equation}
\gv{g}=\left\{\begin{array}{ll} (1,0) \quad \text{if} \ x
\in(0,1),\ y=1.5,\\ (0,0) \quad \text{on the other boundaries.}
\end{array}\right.
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.31\textwidth]{lid_vel.pdf}
\includegraphics[width=0.31\textwidth]{lid_scaled.pdf}
\includegraphics[width=0.31\textwidth]{lid_streamline.pdf}
\caption{The velocity, scaled velocity field and streamlines of the
  lid-driven cavity.}
\label{lid_driven_fig}
\end{center}
\end{figure}
Figure \ref{lid_driven_fig} shows the velocity field, the scaled
velocity field and the streamlines. The domain is partitioned by a
quasi-uniform triangular mesh, and third-order polynomials are
employed. The numerical results agree with physical expectations: the
scaled velocity field exhibits the expected counter-rotating vortices
in the bottom corners, and subtler vortices can be captured with a
refined mesh.
\subsection{Flow around cylinder}
Flow around cylinder is another representative benchmark test for the
incompressible flow. The numerical setting is slightly different from
the traditional setting. The polygonal elements are considered to
demonstrate the compatibility of the proposed method. While the
computational domain is a rectangle minus a circle
$\Omega=[0,1.5]\times[0,1] / (x-0.5)^2+(y-0.5)^2 \leq 0.2^2$, which is
partitioned into polygonal elements mesh by PolyMesher
\cite{talischi2012polymesher}. The Dirichlet boundary condition is
given while with zero body force.
\begin{equation}
\gv{g}=\left\{\begin{array}{ll} (y(1-y),0) \quad \text{if} \ x=0 \text{ or } 1.5,\ y
\in(0,1),\\ (0,0) \quad \text{elsewhere.}
\end{array}\right.
\end{equation}
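As a sanity check (illustrative code, not from the paper), this boundary data satisfies the compatibility condition $(\gv{g}\cdot \gv{n},1)_{\partial \Omega} = 0$ from Section \ref{sec:weakform}: the inflow at $x=0$ and the outflow at $x=1.5$ carry the same profile $y(1-y)$, and $\gv{g}$ vanishes elsewhere:

```python
# Midpoint-rule check that the net boundary flux of g vanishes: the
# profile y(1-y) enters at x = 0 (where n . e_x = -1) and leaves at
# x = 1.5 (where n . e_x = +1); g = 0 on the rest of the boundary.
n = 1000
h = 1.0 / n
inflow = sum((i + 0.5) * h * (1 - (i + 0.5) * h) * h for i in range(n))
outflow = inflow                    # identical profile at x = 1.5
assert abs(inflow - 1 / 6) < 1e-5   # the integral of y(1-y) over [0,1] is 1/6
assert outflow - inflow == 0.0      # net boundary flux vanishes
```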
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{poly_mesh.pdf}
\includegraphics[width=0.45\textwidth]{flow_around_cylinder_vel.pdf}
\caption{The velocity and scaled velocity field of the flow around a
  cylinder.}
\label{flow_around_cylinder_fig}
\end{center}
\end{figure}
Figure \ref{flow_around_cylinder_fig} shows the polygonal mesh and the
corresponding velocity field. The proposed method provides a smooth
solution that agrees with the expected behavior, and it can handle
a wide variety of polygonal elements without additional techniques.
\section{Conclusions}
A discontinuous Galerkin method by locally divergence-free patch
reconstruction is developed for the simulation of incompressible
Stokes problems. The interior penalty method with a solenoidal
velocity field allows calculating the velocity field without the
presence of the pressure. The method achieves high-order accuracy
with one degree of freedom per element. In spite of these advantages,
our method suffers from the inconsistency penalty term of magnitude
$h^{-(m+1)}$, which leads to an ill-conditioned linear system for
large $m$.
\bibliographystyle{abbrv}
https://arxiv.org/abs/2202.08876 | An alternative approach to train neural networks using monotone variational inequality | Despite the vast empirical success of neural networks, theoretical understanding of the training procedures remains limited, especially in providing performance guarantees of testing performance due to the non-convex nature of the optimization problem. The current paper investigates an alternative approach of neural network training by reducing to another problem with convex structure -- to solve a monotone variational inequality (MVI) -- inspired by a recent work of (Juditsky & Nemirovsky, 2019). The solution to MVI can be found by computationally efficient procedures, and importantly, this leads to performance guarantee of $\ell_2$ and $\ell_{\infty}$ bounds on model recovery and prediction accuracy under the theoretical setting of training a single-layer linear neural network. In addition, we study the use of MVI for training multi-layer neural networks and propose a practical algorithm called \textit{stochastic variational inequality} (SVI), and demonstrate its applicability in training fully-connected neural networks and graph neural networks (GNN) (SVI is completely general and can be used to train other types of neural networks). We demonstrate the competitive or better performance of SVI compared to widely-used stochastic gradient descent methods on both synthetic and real network data prediction tasks regarding various performance metrics, especially in the improved efficiency in the early stage of training. | \section{Introduction}
Neural network (NN) training \citep{Duchi2010AdaptiveSM,pmlr-v28-sutskever13,Kingma2015AdamAM,Ioffe2015BatchNA} is the essential process in NN study. Optimization guarantee for training loss as well as generalization error have been obtained with over-parametrized networks \citep{neyshabur2014search,mei2018mean,arora2019exact,arora2019fine,allen2019convergence,du2019gradient}. However, due to the inherent non-convexity of loss objectives, theoretical developments are still diffused and lag behind the vast empirical successes.
Recently, the seminal work \citep{VI_est} presented a somehow surprising result that some non-convex issues can be circumvented in special cases by problem reformulation. In particular, it was shown that when estimating the parameters of the generalized linear models (GLM), instead of minimizing a least-square loss function, which leads to a non-convex optimization problem, and thus no guarantees can be obtained for global convergence nor model recovery, one can reformulate the problem as solving a monotone variational inequality (VI), a general form of convex optimization. The reformulation through monotone VI leads to strong performance guarantees and computationally efficient procedures.
In this paper, inspired by \citep{VI_est} and the fact that certain GLM (such as logistic regression) can be viewed as the simplest neural network with only one layer, we consider a new scheme for neural network training based on monotone VI. This is a drastic departure from the widely used gradient descent algorithm for neural network training --- we replace the gradient of a loss function with a carefully constructed monotone operator. The benefits of such an approach include (i) in special cases (one input layer networks), we can establish strong training and prediction guarantees -- see Section \ref{sec:guarantee}; (ii) for general cases, through extensive numerical studies (on synthetic and real-data), we demonstrate the faster convergence to a local solution by our approach relative to gradient descent in a comparable setup.
To the best of our knowledge, the current paper is the first to study monotone VI for training neural networks. Beyond fully-connected neural networks, we especially study monotone VI training for node classification in graph neural networks (GNN) \citep{Wu2019ACS,Pilanci2020NeuralNA}, due to the ubiquity of network data. Our techniques directly apply to a large class of GNN models, e.g., GCN \citep{GCN} and ChebNet \citep{chebnet}. Specifically, our technical contributions include:
\begin{itemize}[noitemsep,topsep=0em]
\item Reformulate the last-layer parameter recovery problem in FCNNs and GNNs as solving a monotone VI, with guarantees on recovery and prediction accuracy.
\item Develop a heuristic algorithm called \SVI \ for training multi-layer neural networks. In particular, the algorithm provides a fundamentally different but easy-to-implement way of defining parameter gradients during estimation.
\item Compare \SVI \ extensively with standard stochastic gradient descent (SGD) to demonstrate the competitiveness and improvement of \SVI \ in terms of model recovery and prediction accuracy on various problems.
\end{itemize}
{\it Literature.} Monotone VIs have been studied primarily in the context of optimization \citep{Kinderlehrer1980AnIT,Facchinei2003FiniteDimensionalVI}, where they can be viewed as the most general problem class with convex structure \citep{juditsky2021well}. More recently, VIs have been used to solve min-max problems in Generative Adversarial Networks \citep{Lin2018SolvingWS} and reinforcement learning \citep{Georgios20I}. In particular, our theory and techniques are inspired by \citep{VI_est}, which uses strongly monotone VIs for signal recovery in generalized linear models (GLM). In contrast to their work, we offer a thorough investigation of using VIs to train multi-layer NNs and address the case when the VI may not be strongly monotone. On the other hand, we emphasize the difference from \citep{Pilanci2020NeuralNA}, which views two-layer NNs as convex regularizers: we focus on model recovery rather than on changing variables to convexify the loss minimization. In addition, our \SVI \ extends beyond two-layer networks.
\section{Problem setup}
Assume a feature $X\in \R^n$. Suppose the conditional expectation of the categorical response
$Y \in \{ 1,\ldots,K \}$ can be modeled by an $L$-layer neural network $f(X,\Theta^*)$
\footnote{The techniques and theory in this work can be used to model the conditional expectation of continuous random variables.}:
\begin{equation}\label{eq:NN}
\condexp{Y}{X} := f(X,\Theta^*)=\phi_L(g_L(\Xstar{L},\Thetastar{L})),
\end{equation}
where $\Xstar{L}=\phi_{L-1}(g_{L-1}(\Xstar{L-1},\Thetastar{L-1}))$, with $\Xstar{1}=X$, denotes the nonlinear feature transformation from the previous layer; $\Theta^*=\{\Thetastar{1},\ldots,\Thetastar{L}\}$ denotes the ``true'' model; and each $\phi_l$ denotes the activation function at layer $l$.
The goal is to estimate the parameters $\Theta^*$ of model (\ref{eq:NN}) from $N$ training data, with a performance guarantee bounding the error between the final predictor $f(X_t,\widehat \Theta)$ and the ground truth $f(X_t, \Theta^*)$, $t > N$.
To this end, we will cast the estimation problem as solving a monotone variational inequality.
\subsection{Preliminaries of VI with monotone operator}
Given a parameter set $\bTheta \subset \R^{p}$, we call a continuous mapping (operator) $F: \bTheta \rightarrow \R^p$ \textit{monotone} if for all $\Theta_1, \Theta_2 \in \bTheta$,
$
\langle F(\Theta_1)-F(\Theta_2),\Theta_1-\Theta_2 \rangle \geq 0.
$
The operator is called \textit{strongly monotone} with modulus $\kappa$ if for all $\Theta_1, \Theta_2 \in \bTheta$,
\begin{equation}\label{eq:modulus}
\langle F(\Theta_1)-F(\Theta_2),\Theta_1-\Theta_2 \rangle \geq \kappa \|\Theta_1-\Theta_2\|^2_2.
\end{equation}
If $F \in C^1(\bTheta)$ (i.e., continuously differentiable on $\bTheta$) and $\bTheta$ is closed and convex with non-empty interior, (\ref{eq:modulus}) holds if and only if $\lambda_{\min}(\nabla F(\Theta))\geq \kappa$ for all $\Theta \in \bTheta$, where $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a square matrix.
Now, for a monotone operator $F$ on $\bTheta$, the problem $\VI$ is to find an $\bar{\Theta} \in \bTheta$ such that for all $\Theta \in \bTheta$,
\[
\langle F(\bar{\Theta}), \Theta-\bar{\Theta} \rangle \geq 0. \qquad \VI
\]
It is known that if $\bTheta$ is compact, then $\VI$ has at least one solution \citep[Theorem 4.1]{outrata2013nonsmooth}. In addition, if $\kappa>0$ in (\ref{eq:modulus}), then $\VI$ has exactly one solution \citep[Theorem 4.4]{outrata2013nonsmooth}. Under mild computability assumptions, the solution can be efficiently computed to high accuracy using various iterative schemes (of a form similar to stochastic or accelerated gradient descent, with the gradient replaced by the monotone operator) \citep{VI_est}.
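As a concrete illustration, the projection-type iteration just described can be sketched in a few lines. The operator, constraint box, and step size below are toy choices for exposition, not from the paper:

```python
import numpy as np

# Toy strongly monotone operator F(theta) = A @ theta - b with A positive
# definite, so kappa = lambda_min(A) > 0. A, b, and the box are illustrative.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -0.5])
F = lambda theta: A @ theta - b

# Projection onto the box parameter set Theta = [-2, 2]^2.
proj = lambda theta: np.clip(theta, -2.0, 2.0)

theta = np.zeros(2)
gamma = 0.1  # constant step size, small enough to contract for this F
for _ in range(500):
    theta = proj(theta - gamma * F(theta))

# The unique VI solution coincides with the root of F, which lies inside the box.
print(np.allclose(theta, np.linalg.solve(A, b), atol=1e-6))  # → True
```

Since the operator here is affine and strongly monotone, the projected iteration contracts geometrically to the unique solution.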
\subsection{VI for one-layer neural network training}
For illustration purposes, let us consider training a one-layer neural network ($L=1$). In this special case, training the neural network is mathematically equivalent to estimating a GLM with a properly chosen activation function \citep{VI_est}. Thus, we can write $\condexp{Y}{X} = \phi (\eta(X)\Theta)$, where $\eta(X)$ can be viewed as a feature transformation; in the simplest case of fully connected neural networks, $\eta(X) = X$, i.e., the identity map. We construct the monotone operator $F$ as
\begin{equation}\label{eq:operatr}
F(\Theta):=\EEXY{\etaT{X}[\phi (\eta(X)\Theta)-Y]},
\end{equation}
where $\etaT{X}$ denotes the transpose of $\eta(X)$. Given training samples, we can form an empirical sample version of $F$, denoted by $\widehat F$. Training based on the VI then performs the update
\begin{equation}\label{eq:VI_training}
\Theta \leftarrow \operatorname{Proj}_{\bTheta}(\Theta - \gamma \widehat F(\Theta)),
\end{equation}
where $\gamma>0$ is the step size and $\operatorname{Proj}_{\bTheta}$ denotes the projection operation ensuring the updated parameters remain within the feasible parameter domain $\bTheta$. Note that this recursion differs from standard gradient descent in that the monotone operator $\widehat F$ plays the role of the gradient; indeed, $\widehat F$ need not correspond to the gradient of any loss function.
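A minimal sketch of this one-layer training scheme, assuming a sigmoid activation and the identity feature map $\eta(X)=X$; the data sizes, box constraint, and step size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic one-layer data: E[Y|X] = sigmoid(X @ theta_star), with the
# identity feature map eta(X) = X. All sizes and constants are illustrative.
N, p = 5000, 3
X = rng.normal(size=(N, p))
theta_star = np.array([1.0, -0.5, 0.25])
Y = (rng.uniform(size=N) < sigmoid(X @ theta_star)).astype(float)

# Empirical version of the operator: F_hat(theta) = (1/N) X^T (sigmoid(X theta) - Y).
F_hat = lambda theta: X.T @ (sigmoid(X @ theta) - Y) / N

# Projected update on the box Theta = [-5, 5]^p, with the monotone operator
# playing the role of the gradient.
theta = np.zeros(p)
gamma = 0.5
for _ in range(2000):
    theta = np.clip(theta - gamma * F_hat(theta), -5.0, 5.0)

print(np.linalg.norm(theta - theta_star))  # small, up to sampling noise
```

The root of the population operator is the true parameter, so the recovered estimate approaches $\theta^*$ as the sample size grows.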
\subsection{VI for graph neural network training}\label{sec:GNN_reform}
Now we discuss VI training for node classification in graph neural networks (GNN), since GNN training (even in the simplest one-layer setting) is not covered by \citep{VI_est}. It should be understood that our techniques apply to general neural network training (e.g., fully-connected networks).
Suppose we have an undirected and connected graph $\G=(\V,\E, W)$, where $\V$ is a finite set of $n$ vertices, $\E$ is a set of edges, and $W\in\R^{n\times n}$ is a weighted adjacency matrix encoding node connections. Let $D$ be the degree matrix of $W$ and $L_g=I_{n}-D^{-1 / 2} W D^{-1 / 2}$ be the normalized graph Laplacian, which has the eigendecomposition $L_g=U\Lambda U^T$. For a graph signal $X \in \R^{n \times C}$ with $C$ input channels, it is then filtered via a function $g_{\Theta}(L_g)$ which acts on $L_g$ with channel-mixing parameters $\Theta \in \R^{C\times F}$ for $F$ output channels. Thus, the filtered signal $X'=g_{\Theta}(L_g)X$.
Our VI framework based on \eqref{eq:operatr} can thus handle a class of graph filters in which the filter and parameters are decoupled. For instance, for some positive integer $R$, $g_{\Theta}(L_g)X=\sum_{r=1}^R g_r(L_g) X \Theta_r$, where the $g_r(L_g)$ are fixed (non-adaptive) graph filters determined by the graph Laplacian, and the $\Theta_r$ are trainable parameters. Then there exists a mapping $\eta$ such that $g_{\Theta}(L_g)X=\eta(X)\Theta$. Note that this class of filters includes some popular ones:
\begin{itemize}[topsep=0em,noitemsep]
\item
{\it Graph Convolutional Network (GCN)} \citep{GCN}: $g_{\Theta}(L_g)X=(\tilde{D}^{-\frac{1}{2}} \tilde{W} \tilde{D}^{-\frac{1}{2}})X\Theta$, where $\tilde{W}:=W+I_n$ is the adjacency matrix with self-loops and $\tilde{D}$ is its degree matrix.
\item
{\it Chebyshev Network (ChebNet)} \citep{chebnet}:\\
$g_{\Theta}(L_g)X=\sum_{k=1}^K T_k(L_g)X\Theta_k$, where $T_k$ is the $k$th-order Chebyshev polynomial.
\item
{\it Graph Sample and Aggregate (GraphSAGE)} \citep{GraphSAGE}:
$[g_{\Theta}(L_g)X]_i=\Theta_1 x_i+ \text{mean}_{j\in N(i)} x_j$.
\end{itemize}
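To make the decoupled form $g_{\Theta}(L_g)X=\eta(X)\Theta$ concrete, here is a small sketch of the GCN-style filter on a toy graph; the graph, channel counts, and values are illustrative:

```python
import numpy as np

# GCN-style filter: g_Theta(L)X = (D~^{-1/2} W~ D~^{-1/2}) X Theta, which
# factors as eta(X) @ Theta with eta(X) = A_hat @ X -- the decoupled
# filter/parameter form the VI framework requires. Toy 4-node cycle graph.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = W.shape[0]
W_tilde = W + np.eye(n)                    # add self-loops
d = W_tilde.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_hat = D_inv_sqrt @ W_tilde @ D_inv_sqrt  # normalized adjacency

C, F_out = 3, 2                            # input / output channels
rng = np.random.default_rng(1)
X = rng.normal(size=(n, C))
Theta = rng.normal(size=(C, F_out))

eta_X = A_hat @ X                          # fixed (non-adaptive) part
filtered = eta_X @ Theta                   # = g_Theta(L) X
print(filtered.shape)  # → (4, 2)
```

Only `eta_X` depends on the graph; the trainable parameters enter linearly on the right, which is exactly what lets the operator in \eqref{eq:operatr} be formed.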
\section{Last-layer training with guarantees}\label{sec:guarantee}
Now consider the training of multi-layer neural networks. We present a guarantee for recovering the last-layer parameters; this can be understood as assuming the previous layers have provided the necessary feature extraction. In particular, we provide a pointwise (resp. in-expectation) error bound on predicting $\condexp{Y_t}{X_t}$ in Section \ref{sec:modulus_1} (resp. Section \ref{sec:modulus_2}), where $\kappa$ in \eqref{eq:modulus} satisfies $\kappa>0$ (resp. $\kappa=0$) for the strongly monotone (resp. monotone) VI. We also consider imprecise graph knowledge in Section \ref{sec:imprecise}.
Let $\{(X_i,Y_i)\}_{i=1}^N$ be $N$ training data from model (\ref{eq:NN}). For the subsequent theoretical analyses, we assume that for each $i$ the quantity $\Xstar{i,L}$ in \eqref{eq:NN} is known, i.e., the output of the previous $L-1$ hidden layers applied to $X_i$. Thus, $\eta(X)$ becomes $\eta(\Xstar{L})$ for a generic feature $X$. For the operator $F(\Theta_L)$ in (\ref{eq:operatr}), where $\bTheta$ is the parameter space of $\Theta_L$, consider its empirical average $\Femp(\Theta_L)$
$
\Femp(\Theta_L):= (1/N) \sum_{i=1}^N \Fempi(\Theta_L),
$
where $\Fempi(\Theta_L):=\etaT{\Xstar{i,L}}[\phi (\eta(\Xstar{i,L})\Theta_L)-Y_i]$. Let $\widehat{\Theta}_L^{(T)}$ be the estimated parameter after $T$ training iterations, using $\Femp$ and \eqref{eq:VI_training}. For a test feature $X_t$ and its nonlinear transformation $\Xstar{t,L}$, consider
\begin{equation}\label{posterior_estimate}
\condexphat{Y_t}{X_t}=\phi_L(\etagraph{\Xstar{t,L}}\widehat{\Theta}_L^{(T)}),
\end{equation}
which is the estimated prediction using $\widehat{\Theta}_L^{(T)}$ and will be measured against the true model $\condexp{Y_t}{X_t}$. In particular, we will provide error bound on $\|\condexphat{Y_t}{X_t}-\condexp{Y_t}{X_t}\|_2,$ which crucially depends on the strong monotonicity modulus $\kappa$ in (\ref{eq:modulus}) for $F(\Theta_L)$.
We first state a few properties of $F(\Theta_L)$ defined in (\ref{eq:operatr}), which explicitly identify the form of $\kappa$. All proofs of Lemmas and Theorems are contained in Appendix \ref{sec:theory_append}.
\begin{lemma}\label{lem1}
Assume $\phi_L$ is $K$-Lipschitz continuous and monotone on its domain. Then,
\begin{itemize}[noitemsep,topsep=0em]
\item $F(\Theta_L)$ is both monotone and $K_2$-Lipschitz, where $K_2:=K \EE_X\{ \|\eta(\Xstar{L})\|^2_2\}$.
\item $\kappa=\lambda_{\min}(\nabla \phi_L) \EE_X[\lambda_{\min}(\etaT{\Xstar{L}}\eta(\Xstar{L}))]$ if $\nabla \phi_L$ exists; it is 0 otherwise.
\item $F(\Theta^*_L)=0$.
\end{itemize}
\end{lemma}
For simplicity, we assume from now on the stronger condition that $\lambda_{\min}(\etaT{\Xstar{L}}\eta(\Xstar{L}))>0$ for any generic nonlinear feature $\Xstar{L}$. In addition, because we are considering the last layer in classification, $\phi_L$ is typically chosen as the sigmoid or softmax, both of which are differentiable, so $\lambda_{\min}(\nabla \phi_L)$ exists. Note that the property $F(\Theta^*_L)=0$ implies that $\Theta^*_L$ is a solution to $\VI$. In particular, if $\kappa>0$, it is the \textit{unique} solution. As a result, $\Theta^*_L$ can be efficiently computed to high accuracy using $F(\Theta_L)$ with an appropriately chosen $\gamma$ in \eqref{eq:VI_training}.
\subsection{Case 1: modulus $\kappa > 0$}\label{sec:modulus_1}
Under an appropriate step-size selection and additional assumptions, we can obtain bounds on the recovered parameters following the techniques in \citep{VI_est}, which in turn bound the error in model recovery and prediction. Note that the guarantee relies on the value of $\kappa$, which may be approximated by an empirical average over the $N$ training data.
\begin{lemma}[Parameter recovery guarantee]\label{thm:alg_guarantee}
Suppose that there exists $M<\infty$ such that $ \forall \Theta_L \in \bTheta$,
\[
\EE_{X,\YTheta}{\|\eta(\Xstar{L})\YTheta\|_2}\leq M,
\]
where $\condexp{\YTheta}{X}=\phi(\eta(\Xstar{L})\Theta_L)$. Choose adaptive step sizes $\gamma=\gamma_t:=[\kappa(t+1)]^{-1}$ in \eqref{eq:VI_training}. The sequence of estimates $\widehat{\Theta}_L^{(T)}$ obeys the error bound
\begin{equation}\label{eq:algo_err_bound}
\EE_{\widehat{\Theta}_L^{(T)}}\{\|\widehat{\Theta}_L^{(T)}-\Theta^*_L\|^2_2\} \leq \frac{4M^2}{\kappa^2(T+1)}.
\end{equation}
\end{lemma}
We can use the above result to bound the error in posterior prediction.
\begin{theorem}[Prediction error bound for model recovery, strongly monotone $F$]\label{thm:generalization_err}
We have for a given test signal $X_t, t>N$ that for $p\in [2,\infty]$,
\[
\mathbb E_{\widehat{\Theta}_L^{(T)}}\{\|\condexphat{Y_t}{X_t}-\condexp{Y_t}{X_t}\|_p\} \leq (T+1)^{-1}C_t,
\]
where $C_t:=[4M^2K\lambda_{\max}(\etaT{X_{t,L}^*}\eta(X_{t,L}^*))]/\kappa^2$ and $\condexphat{Y_t}{X_t}$ is defined in \eqref{posterior_estimate}.
In particular, $p=2$ yields the sum of squared error bound on prediction and $p=\infty$ yields entry-wise bound (over the graph nodes).
\end{theorem}
Note that the guarantee is point-wise, since we are able to bound the parameter recovery error in expectation in Lemma \ref{thm:alg_guarantee}. In particular, we will demonstrate in the experiment section that when the loss function is non-convex in the parameters, the parameter recovered by the VI approach is closer to the ground truth than that recovered by SGD (see Table \ref{tab:non-conve_probit}).
\begin{corollary}[Prediction error bound for test loss]\label{cor:lossbound} Fix an $\epsilon>0$. For any test signal $X_t$, assume $T>C_t/\epsilon$. For a generic loss function $\Lc:(X,Y)\rightarrow \R_+$, which measures the error between $\condexp{Y}{X}$ and $Y$, denote by $\Lc^*(X_t,Y_t)$ (resp. $\hat{\Lc}(X_t,Y_t)$) the loss under the true (resp. estimated) model. Then,
\begin{enumerate}
\item When $\Lc$ is the mean-square error (MSE) loss,
$\mathbb E_{\widehat{\Theta}_L^{(T)}}\{|\hat{\Lc}(X_t,Y_t)-\Lc^*(X_t,Y_t)|\}\leq \epsilon$.
\item When $\Lc$ is the binary cross-entropy loss,
\begin{align*}
&\mathbb E_{\widehat{\Theta}_L^{(T)}}\{|\hat{\Lc}(X_t,Y_t)-\Lc^*(X_t,Y_t)|\}\\
\hspace{-0.4in} \leq &\sum_{i=1}^n Y_{t,i}^T \ln\left(\frac{E_{t,i}}{E_{t,i}-\boldsymbol\epsilon}\right)+(\boldsymbol 1- Y_{t,i}^T)\ln\left(\frac{ \boldsymbol 1-E_{t,i}}{\boldsymbol 1-E_{t,i}-\boldsymbol \epsilon}\right),
\end{align*}
where $E_{t,i}:=\EE[Y_t|X_t]_i$ is the true conditional expectation of $Y_t$ at node $i$, $\boldsymbol 1$ and $\boldsymbol \epsilon$ are vectors of 1 and $\epsilon$, and the division is point-wise.
\end{enumerate}
\end{corollary}
\begin{remark}[When we can have $\kappa>0$]
Recall that $\kappa=\lambda_{\min}(\nabla \phi_L) \EE_X[\lambda_{\min}(\etaT{\Xstar{L}}\eta(\Xstar{L}))]$. As we assumed that $\EE_X[\lambda_{\min}(\etaT{\Xstar{L}}\eta(\Xstar{L}))]$ is bounded away from zero, we are only concerned with the minimum eigenvalue of the gradient of $\phi_L$ acting on its inputs. Note that when $\phi_L$ is a point-wise function of its vector inputs, this gradient matrix is diagonal. In the case of the sigmoid function, we know that for any $y\in \R^n$
\[
\lambda_{\min}[\nabla \phi_L]|_{y}=\min_{i=1,\ldots,n} \phi_L(y_i)(1-\phi_L(y_i)),
\]
which is bounded away from zero on any bounded set of pre-activations. In general, we only need the point-wise activation to be continuously differentiable with positive derivatives.
\end{remark}
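A quick numerical check of this observation, under the stated assumption that the activation acts point-wise so that its Jacobian is diagonal (the pre-activation vector below is arbitrary):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# For a point-wise activation the Jacobian at pre-activations y is diagonal,
# so its minimum eigenvalue is the smallest derivative entry. For the sigmoid
# this is phi(y_i)(1 - phi(y_i)) > 0 on any bounded set of y.
y = np.array([-3.0, 0.0, 1.5, 4.0])
jac = np.diag(sigmoid(y) * (1.0 - sigmoid(y)))
lam_min = np.linalg.eigvalsh(jac).min()
print(lam_min > 0)  # → True: kappa > 0 is attainable for the sigmoid
```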
\subsection{Case 2: modulus $\kappa = 0$}\label{sec:modulus_2}
We may also encounter cases where the operator $F$ is only monotone but not strongly monotone. For instance, let $\phi$ be the softmax function (applied row-wise to the filtered signal $\eta(\Xstar{L})\Theta_L \in \mathbb R^{n\times F}$), whose gradient matrix is thus block-diagonal. For any vector $z\in \R^F$, $\nabla \phi(z)=\text{diag}(\phi(z))-\phi(z)\phi(z)^T$, which satisfies $\nabla \phi(z)\boldsymbol 1=\boldsymbol 0$ for any $z$ \citep[Proposition 2]{softmax}. Therefore, the minimum eigenvalue of the gradient matrix of $\phi$ is always zero. Hence $\kappa=0$.
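This softmax property is easy to verify numerically; a minimal sketch (the logit vector is arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # numerically stabilized softmax
    return e / e.sum()

# Softmax Jacobian: diag(phi(z)) - phi(z) phi(z)^T. It annihilates the
# all-ones vector, so its smallest eigenvalue is 0, forcing kappa = 0.
z = np.array([0.5, -1.0, 2.0])  # arbitrary logits
p = softmax(z)
J = np.diag(p) - np.outer(p, p)

print(np.allclose(J @ np.ones(3), 0.0))              # → True
print(np.isclose(np.linalg.eigvalsh(J).min(), 0.0))  # → True
```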
In this case, note that the solution of $\VI$ need not be unique: a solution $\bar{\Theta}_L$ satisfying $\langle F(\bar{\Theta}_L), \Theta-\bar{\Theta}_L\rangle \geq 0, \ \forall \Theta \in \bTheta$, may not be the true signal $\Thetastar{L}$. Nevertheless, suppose $F(\bar{\Theta}_L)=\EEXY{\etaT{\Xstar{L}}[\phi_L (\eta(\Xstar{L})\bar{\Theta}_L)-Y]}=0$. Because we assumed that the minimum singular value of $\eta(\Xstar{L})$ is always positive, we can still correctly predict the signal in expectation (a weaker guarantee than Lemma \ref{thm:alg_guarantee}):
\[\EE_X[\phi_L (\eta(\Xstar{L})\bar{\Theta}_L)]=\EE_X[\phi_L (\eta(\Xstar{L})\Thetastar{L})].\]
Hence, we directly approximate the zero of $F$ using the operator extrapolation (OE) method of \citep{Georgios20I}.
We then have a similar $\ell_p$ performance guarantee in prediction:
\begin{theorem}[Prediction error bound for model recovery, monotone $F$]\label{thm:generalization_err_nostrong}
Suppose we run the OE algorithm \citep{Georgios20I} for $T$ iterations with $\lambda_t=1$, $\gamma_t=[4K_2]^{-1}$, where $K_2$ is the Lipschitz constant of $F$. Let $R$ be uniformly chosen from $\{2,3,\ldots,T\}$. Then for $p\in [2,\infty]$,
\begin{align*}
&\EE_{\widehat{\Theta}_L^{(R)}} \{\| \EE_X\{ \sigma_{\min} (\etaT{\Xstar{L}})[\condexphat{Y_t}{X_t}-\condexp{Y_t}{X_t}]\}\|_p \} \\
& \leq T^{-1/2} C_t^{''},
\end{align*}
where $\sigma_{\min} (\cdot)$ denotes the minimum singular value of its input matrix and $C_t^{''}:=3\sigma+12K_2\sqrt{2\|\Thetastar{L}\|_2^2+2\sigma^2/L^2}$, in which $\sigma^2:=\EE[\|\Fempi(\Theta_L)-F(\Theta_L)\|_2^2]$ is the variance of the unbiased estimator $\Fempi$.
\end{theorem}
\subsection{Imprecise graph knowledge}\label{sec:imprecise}
Because graph knowledge is rarely perfect, robustness to imprecise graph topology is important in practice for node classification. Thus, we analyze the difference in prediction quality when the true graph adjacency matrix $W$ is replaced by an estimate $W'$. For simplicity, we concentrate on the GCN case, drop certain super/sub-scripts (e.g., $X=\Xstar{L}$ and $\Theta^*=\Thetastar{L}$), and denote $\eta(X)$ under $W$ (resp. $W'$) by $L_gX$ (resp. $L_g'X$). Moreover, for a given $k \in \{1,\ldots, n\}$, we can decompose $L_g$ as $L_g=L_g^++L_g^-$, where $L_g^+:=U\Lambda^+U^T$ with $\Lambda^+=\text{diag}(\lambda_1,\ldots,\lambda_k,\boldsymbol 0)$ denotes the high-energy portion of the filter (and similarly for $L_g^-$).
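The high/low-energy split can be computed directly from the eigendecomposition; a small sketch on a toy path graph (the cut-off $k$ is illustrative):

```python
import numpy as np

# Split the normalized Laplacian via its eigendecomposition L = U Lambda U^T:
# keep the k leading eigenvalues for L_plus and the rest for L_minus.
# Toy 4-node path graph; the cut-off k is illustrative.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(4) - D_inv_sqrt @ W @ D_inv_sqrt

lam, U = np.linalg.eigh(L)    # eigh returns eigenvalues in ascending order
order = np.argsort(-lam)      # reorder so that lambda_1 >= lambda_2 >= ...
lam, U = lam[order], U[:, order]

k = 2
L_plus = U[:, :k] @ np.diag(lam[:k]) @ U[:, :k].T    # high-energy portion
L_minus = U[:, k:] @ np.diag(lam[k:]) @ U[:, k:].T   # low-energy portion

print(np.allclose(L_plus + L_minus, L))  # → True: exact decomposition
```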
We show that, under certain assumptions on the similarity between the high/low-energy portions of the true and estimated graph filters, there exists a solution $\Theta^{*}_{L'}$ yielding small prediction error between $\condexp{Y}{X}$ under $\Theta^*_L$ and under $\Theta^{*}_{L'}$. In particular, $\Theta^{*}_{L'}$ is the solution of $\VI$ when $\phi_L$ is the sigmoid function, so that in such cases we can guarantee small prediction error in expectation based on the earlier results.
\begin{proposition}[Quality of minimizer under model mismatch]\label{model_mismatch} Fix an $\epsilon>0$.
Assume that (1) $L_g^+=L_g^{'+}$ (2) $\|L_g^{'-}-L_g^-\|_2 \leq \epsilon/[K\EE_X \|X^-\|]$, where $K$ is the Lipschitz continuous constant for $\phi_L$ and $X^-$ belongs to the span of $L_g^-$ and $L_g^{'-}$. Then, there exists $\Theta^{*}_{L'}$ such that
\[
\| \condexp{Y}{X}'-\condexp{Y}{X}\|_2\leq \epsilon \|\Theta^{*}_L\|_2,
\]
where $ \condexp{Y}{X}'$ denotes the conditional expectation under $L_g'$ and $\Theta^{*}_{L'}$.
\end{proposition}
\section{Training of multi-layer neural networks}\label{sec:heuristic}
So far, we have been assuming that $\Xstar{L}$ is known, which is the nonlinear transformation of the input feature $X$ after $L-1$ layers. Algorithms and guarantees then apply to estimating parameters in the last layer. However, implicit in this assumption is the condition that weights $\{\Thetastar{1},\ldots,\Thetastar{L-1}\}$ in all previous layers are known, which is hardly practical. Therefore, we provide a heuristic algorithm for training parameters in all layers. Algorithm \ref{alg:heuristic} summarizes the steps, where $X^t_{i,l}$ is the hidden representation of $X_i$ using weights $\Theta^t_{1:l-1}$ from previous layers, and $f(X^t_{j,l+1}, \Theta^t_{l+1:L})=f(X_j,\Theta^t)$ is the output of the neural network from layer $l+1$ onward.
\begin{algorithm}[h!]
\cprotect\caption{Stochastic variational inequality (\SVI).}
\label{alg:heuristic}
\begin{algorithmic}[1]
\REQUIRE{{\small Inputs are training data $\{(X_i,Y_i)\}_{i=1}^N$, batch number $B$, and epochs $E$; An $L$-layer network $f(X,\Theta):=\{g_l(X_l,\Theta_l)\}_{l=1}^L$; Initial parameters $\hat{\Theta}^0:=\{\hat{\Theta}^0_l\}_{l=1}^L$; Loss function $\Lc(f(X,\Theta),Y)$.}
}
\ENSURE{Estimated parameters $\hat{\Theta}^T$, $T:=\ceil{NE/B}$}
\STATE Let $I:=\ceil{N/B}$ be the number of iterations per epoch and denote batches as $b_1,\ldots,b_I$.
\FOR {epoch $e=0,\ldots,E$ and iteration $i=1,\ldots,I$}
\STATE Compute $t:=eI+i$ as the current iteration index
\STATE Compute gradient simultaneously at each layer $l$ as:
\STATE If $l<L$\\
{\small $F^t_l:=b_i^{-1}\sum_{j \in b_i} \etaT{X^t_{j,l}}\nabla_{X^t_{j,l+1}} \Lc(f(X^t_{j,l+1}, \hat{\Theta}^t_{l+1:L}),Y_j)$}
\STATE If $l=L$ \\
{\small $F^t_L:=b_i^{-1}\sum_{j \in b_i} \etaT{X^t_{j,L}}[\phi_L (\eta(X^t_{j,L})\hat{\Theta}^t_L)-Y_j]$}
\STATE Update $\hat{\Theta}^{t+1}_l$ according to \eqref{eq:VI_training} using $F^t_l$, where one may incorporate acceleration techniques.
\ENDFOR
\end{algorithmic}
\end{algorithm}
We briefly justify line 5 in Algorithm \ref{alg:heuristic}, which is the heuristic step in previous layers. Let $\Lc(f(\Xnostar{l+1}, \Thetanostar{l+1:L}),Y)$ be the MSE loss and note that
\begin{align*}
&\nabla_{\Xnostar{l+1}} \Lc(f(\Xnostar{l+1}, \Thetanostar{l+1:L}),Y)\\=&[f(\Xnostar{l+1}, \Thetanostar{l+1:L})-Y] \nabla_{\Xnostar{l+1}}f(\Xnostar{l+1}, \Thetanostar{l+1:L}),
\end{align*}
which can be approximated via backpropagating the loss to the \textit{hidden layer outputs} rather than to the \textit{model parameters}. $F_l(\Theta_l)$ is then formed by mapping the gradient back to the parameter space, similarly to $F_L(\Theta_L)$ in line 6. In addition, $F_l(\Theta_l)$ still satisfies the condition that, evaluated at the true parameters $\Thetastar{l:L}$, $F_l(\Thetastar{l})=0$. Furthermore, when $\EE_{X,Y}\{\nabla_{\Xnostar{l+1}} \Lc\}=0$, we may correctly predict the true conditional expectation $\condexp{Y}{X}$---for example, if $\sigma_{\min}(\nabla_{\Xnostar{l+1}}f(\Xnostar{l+1}, \Thetanostar{l+1:L}))>0$, the gradient being zero implies correct model prediction in expectation. However, we are unable to verify that the operator is monotone in $\Theta_l$, even for fixed parameters in later layers. In fact, we suspect monotonicity in all layers is impossible, as this would make the non-convex parameter recovery problem convex without relaxation.
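The contrast between the hidden-layer operator above and the standard SGD gradient can be sketched on a toy two-layer ReLU network under the MSE loss; the shapes and data below are illustrative, and the code mirrors lines 5--6 of the algorithm only schematically:

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

# Toy two-layer network f(X) = relu(X @ Theta1) @ Theta2 under MSE loss.
# SVI's hidden-layer step: backpropagate the loss to the *hidden output* H,
# then map it back to parameter space with eta(X)^T = X^T.
B, p, h = 8, 4, 5
X = rng.normal(size=(B, p))
Theta1 = rng.normal(size=(p, h))
Theta2 = rng.normal(size=(h, 1))
Y = rng.normal(size=(B, 1))

H = relu(X @ Theta1)            # hidden output
resid = H @ Theta2 - Y          # f(X) - Y

grad_H = resid @ Theta2.T / B   # dL/dH, averaged over the batch
F_svi = X.T @ grad_H            # SVI operator: no ReLU mask applied

# Standard SGD gradient for comparison: the ReLU mask zeroes the
# contribution of inactive neurons, which SVI deliberately does not do.
mask = (X @ Theta1 > 0).astype(float)
F_sgd = X.T @ (grad_H * mask)

print(F_svi.shape == Theta1.shape)  # → True: lives in parameter space
```

The two updates coincide only when every neuron is active; otherwise the SVI operator also moves inactive neurons, which is the behavior discussed in the remark below on neuron dynamics.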
\begin{remark}[Effect on neuron dynamics] \label{remark:dynamic}
We remark on a key difference between the gradient update in vanilla stochastic gradient descent (SGD) and that in \SVI, which ultimately affects neuron dynamics. Suppose the activation function at layer $l$ is ReLU, so $\Xnostar{l+1}=\text{ReLU}(\eta(\Xnostar{l})\Theta_l)$. Under the MSE loss, we notice that for SGD:
$\nabla_{\Theta_l} \Lc =[f(\Xnostar{l+1}, \Thetanostar{l+1:L})-Y] \nabla_{\Theta_l} \text{ReLU}(\eta(\Xnostar{l})\Theta_l)$,
and for \SVI:
$\nabla_{\Xnostar{l+1}} \Lc
= [f(\Xnostar{l+1}, \Thetanostar{l+1:L})-Y] \nabla_{\Xnostar{l+1}}f(\Xnostar{l+1}, \Thetanostar{l+1:L}).$
In particular, SGD does not update inactive neurons because the gradient of ReLU with respect to them is zero. However, \SVI \ does not discriminate between these neurons, as the gradient is taken with respect to the \textit{next-layer} outputs. Thus, one can expect \SVI \ to produce much larger neuron displacement than SGD, a phenomenon we illustrate in Figure \ref{fig:dynamics}.
\end{remark}
\section{Experiments}
\begin{table*}[!t]
\centering
\resizebox{\textwidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{|P{2.5cm}|P{2.5cm}P{2.5cm}|P{2.5cm}P{2.5cm}P{2.5cm}P{2.5cm}|P{2.5cm}P{2.5cm}P{2.5cm}P{2.5cm}|}
\toprule
{\textbf{Probit model}} & \multicolumn{2}{c}{Para recovery error in $\ell_2$ norm} & \multicolumn{4}{c}{MSE loss} & \multicolumn{4}{c}{Classification error} \\
\hline
{Feature dimension} & SGD & \SVI & SGD train & \SVI \ train & SGD test & \SVI \ test & SGD train & \SVI \ train & SGD test & \SVI \ test \\
\hline
50 & 0.596 (5.9e-05) & 0.485 (1.1e-04) & 0.039 (4.1e-04) & 0.034 (4.0e-04) & 0.043 (1.8e-04) & 0.038 (2.3e-04) & 0.034 (7.1e-04) & 0.035 (8.9e-04) & 0.046 (9.4e-04) & 0.049 (1.4e-03) \\
100 & 0.683 (5.2e-05) & 0.593 (8.4e-05) & 0.032 (6.1e-05) & 0.026 (5.5e-05) & 0.043 (6.6e-04) & 0.039 (7.0e-04) & 0.021 (5.4e-04) & 0.021 (1.4e-04) & 0.053 (2.4e-03) & 0.054 (9.4e-04) \\
200 & 0.78 (2.1e-04) & 0.717 (2.3e-04) & 0.025 (3.8e-05) & 0.019 (1.0e-05) & 0.045 (1.2e-03) & 0.041 (1.0e-03) & 0.011 (7.6e-04) & 0.011 (8.3e-04) & 0.058 (3.3e-03) & 0.055 (3.6e-03) \\
\bottomrule
\end{tabular}
\egroup
}
\cprotect \caption{Non-convex probit model, with identical learning rate for both. We show average results at end of epochs, with standard errors in brackets. \SVI \ consistently reaches smaller parameter recovery error and MSE loss than SGD, and SGD may be slightly better in terms of classification error.}
\label{tab:non-conve_probit}
\end{table*}
\begin{table*}[!t]
\begin{varwidth}[b]{0.67\linewidth}
\centering
\resizebox{\linewidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{|P{2.5cm}|P{2.5cm}P{2.5cm}P{2.5cm}P{2.5cm}|P{2.5cm}P{2.5cm}P{2.5cm}P{2.5cm}|}
\toprule
{\textbf{Two-moon data}} & \multicolumn{4}{c}{MSE loss} & \multicolumn{4}{c}{Classification error} \\
\hline
{\# Hidden neurons} & SGD train & \SVI \ train & SGD test & \SVI \ test & SGD train & \SVI \ train & SGD test & \SVI \ test \\
\hline
8 & 0.00062 (4.0e-05) & 0.00054 (2.1e-05) & 0.06709 (4.8e-03) & 0.05993 (4.5e-03) & 0.08333 (7.2e-03) & 0.076 (3.8e-03) & 0.098 (6.6e-03) & 0.08933 (1.1e-02) \\
16 & 0.00053 (7.7e-05) & 0.00042 (8.5e-05) & 0.05852 (7.1e-03) & 0.04702 (7.7e-03) & 0.06533 (1.5e-02) & 0.052 (1.3e-02) & 0.078 (9.6e-03) & 0.06333 (1.3e-02) \\
32 & 0.00022 (2.9e-05) & 0.00014 (1.8e-05) & 0.02375 (2.2e-03) & 0.01517 (6.9e-04) & 0.01867 (3.8e-03) & 0.01533 (2.4e-03) & 0.01733 (2.2e-03) & 0.016 (9.4e-04) \\
64 & 0.00016 (1.7e-05) & 8e-05 (1.4e-05) & 0.01774 (1.2e-03) & 0.00872 (1.0e-03) & 0.01 (2.5e-03) & 0.006 (1.6e-03) & 0.01067 (3.0e-03) & 0.00933 (2.0e-03) \\
\bottomrule
\end{tabular}
\egroup
}%
\caption{Two-moon data, binary classification, with identical learning rate and ReLU activation for both. We show average results at end of epochs, with standard errors in brackets. \SVI \ consistently reaches smaller training and/or test MSE loss and/or classification errors than SGD across different number of hidden neurons.}
\label{tab:moon_table}
\end{varwidth}
\hspace{0.05in}
\begin{minipage}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{FC_vanillaTraining_moon_H=64_lr=0.15.pdf}
\vspace{-0.25in}
\captionof{figure}{Two-moon data, 64 hidden neurons, with MSE loss (left) and classification error (right). \SVI \ shows faster convergence.}
\label{fig:moon_fig}
\end{minipage}
\end{table*}
We test and compare \SVI \ in Algorithm \ref{alg:heuristic} with SGD on a number of experiments: non-convex one-layer model recovery and prediction, classification with fully-connected (FC) networks, model recovery on random graphs, and real-data network classification using multi-layer GCN models. We aim to show that \SVI \ is competitive with or better than SGD under various settings.
\subsection{Setup}
All implementations are done using \Verb|PyTorch| \citep{NEURIPS2019_9015} and \Verb|PyTorch Geometric| \citep{Fey/Lenssen/2019} (for GNN). To ensure a fair comparison, we carefully describe the experiment setup. In particular, the following inputs are \textit{identical} for both \SVI \ and SGD in each experiment.
\begin{itemize}[noitemsep,topsep=0em]
\item Data: (a) the size of training and test data (b) batch (batch size and samples in mini-batches).
\item Model: (a) architecture (e.g., layer choice, activation function, hidden neurons) (b) loss function.
\item Training regime: (a) parameter initialization (b) hyperparameters for backpropagation (e.g., learning rate, momentum factor, acceleration) (c) total number of epochs.
\end{itemize}
In short, everything except the way gradients are defined is kept the same---our proposed \SVI \ backpropagates gradients with respect to hidden inputs and transforms them back to the parameter domain, whereas SGD does so with respect to the parameters in each hidden layer. All results are averaged over three random trials, in which data are redrawn for simulated examples and networks are re-initialized. We show standard errors in brackets in tables and as error bars in plots. Partial results are shown due to space limitations; Appendix \ref{sec:experiment_appendix} contains the extensive results.
\begin{table*}[!t]
\begin{varwidth}[b]{0.75\linewidth}
\centering
\resizebox{\textwidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{|P{2.5cm}|P{2.3cm}P{2cm}P{2.5cm}P{2cm}|P{2.3cm}P{2cm}P{2.5cm}P{2cm}|P{2.3cm}P{2cm}P{2.5cm}P{2cm}|}
\toprule
{\textbf{Large Graph}} & \multicolumn{4}{c}{Relative error in posterior prediction--$\ell_2$ norm} & \multicolumn{4}{c}{Relative error in MSE loss} & \multicolumn{4}{c}{Absolute error in posterior prediction--$\ell_{\infty}$ norm} \\
\hline
{\# Hidden neurons} & SGD (Known) & SGD (Perturbed) & \SVI \ (Known) & \SVI \ (Perturbed) & SGD (Known) & SGD (Perturbed) & \SVI \ (Known) & \SVI \ (Perturbed) & SGD (Known) & SGD (Perturbed) & \SVI \ (Known) & \SVI \ (Perturbed)\\
\hline
2 & 0.133 (1.1e-02) & 0.133 (1.1e-02) & 0.091 (8.5e-03) & 0.094 (7.2e-03) & 0.017 (2.6e-03) & 0.018 (2.8e-03) & 0.008 (1.5e-03) & 0.009 (1.1e-03) & 0.145 (1.2e-02) & 0.147 (1.3e-02) & 0.099 (8.7e-03) & 0.108 (5.3e-03) \\
4 & 0.108 (6.9e-03) & 0.109 (6.4e-03) & 0.081 (5.9e-03) & 0.085 (5.0e-03) & 0.012 (1.5e-03) & 0.012 (1.2e-03) & 0.006 (1.2e-03) & 0.007 (9.0e-04) & 0.115 (6.9e-03) & 0.119 (5.3e-03) & 0.089 (4.6e-03) & 0.101 (2.5e-03) \\
8 & 0.102 (9.2e-03) & 0.104 (8.7e-03) & 0.068 (6.2e-04) & 0.074 (7.4e-04) & 0.01 (1.9e-03) & 0.011 (1.7e-03) & 0.005 (3.0e-04) & 0.005 (5.9e-04) & 0.106 (8.9e-03) & 0.112 (7.7e-03) & 0.078 (8.2e-04) & 0.094 (7.7e-04) \\
16 & 0.093 (8.4e-03) & 0.096 (7.7e-03) & 0.067 (6.6e-04) & 0.073 (7.3e-04) & 0.009 (1.3e-03) & 0.009 (1.3e-03) & 0.005 (3.1e-04) & 0.005 (5.7e-04) & 0.102 (8.8e-03) & 0.106 (7.0e-03) & 0.076 (9.6e-04) & 0.093 (7.8e-04) \\
32 & 0.096 (9.4e-03) & 0.099 (8.7e-03) & 0.068 (3.2e-04) & 0.074 (6.6e-05) & 0.01 (2.2e-03) & 0.01 (2.1e-03) & 0.005 (2.3e-04) & 0.005 (4.4e-04) & 0.104 (1.0e-02) & 0.109 (8.1e-03) & 0.077 (4.0e-04) & 0.094 (2.0e-04) \\
\bottomrule
\end{tabular}
\egroup
}
\caption{Model recovery on the large random graph, with identical learning rates and ReLU activation for both methods. We measure recovery performance by the relative error in predicting the posterior probability $\condexp{Y_i}{X_i}$ on test data. The middle group of columns compares the relative error in MSE loss against the loss under the true model. We observe consistently better model recovery by \SVI, regardless of the number of hidden neurons used for estimation, the comparison metric, and whether the graph is perturbed.}
\label{tab:small_large_graph}
\end{varwidth}
\hspace{0.05in}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{large_graph_know_est_graph_H=32_MSE_single.pdf}
\vspace{-0.2in}
\captionof{figure}{Model recovery on the large random graph under relative error in MSE test loss. \SVI \ consistently reaches smaller error with faster convergence, even just after the first training epoch.}
\label{fig:small_large_graph}
\end{minipage}
\end{table*}
\subsection{Synthetic Data Experiments}
Notation-wise, $X\sim \mathcal N(a,b)$ means the random variable $X$ follows a normal distribution with mean $a$ and variance $b^2$; $N$ (resp. $N_1$) denotes the size of training (resp. test) sample, $\textsf{lr}$ denotes the learning rate, $B$ denotes the batch size, and $E$ denotes training epochs.
\noindent \textit{(a) Non-convex probit model.} For each $i\geq 1$, $y_i \in \{0,1\}$ and $\condexp{y_i}{X_i}=\Phi(X_i^T\beta+b)$, where $X_i \in \R^p, X_{ij} \overset{i.i.d.}{\sim} \mathcal N(0.05,1)$, $\beta_j \overset{i.i.d.}{\sim} \mathcal N(-0.05,1)$, and $b \sim \mathcal N(-0.1,1)$. Here, $\Phi(z)=\PP(Z\leq z), Z\sim \mathcal N(0,1)$. We let $N=2000$, $N_1=500$, and use a fully-connected one-layer network. The goal is to recover the parameters $\{\beta,b\}$ by minimizing the non-convex MSE objective (i.e., $N^{-1} \sum_{i=1}^N \|y_i-\Phi(X_i^T\beta+b)\|_2^2$) and to make predictions on test samples. We let $B=200$ and $E=200$ and use $\textsf{lr}$=0.005 and momentum = 0.9.
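The data-generating process and the non-convex MSE objective above can be sketched in a few lines of NumPy; the function names and the random seed are illustrative, not part of the paper's code.

```python
import numpy as np
from scipy.stats import norm


def make_probit_data(N, p, rng):
    """Draw (X, y) from the probit model E[y|X] = Phi(X @ beta + b)."""
    X = rng.normal(0.05, 1.0, size=(N, p))   # features ~ N(0.05, 1)
    beta = rng.normal(-0.05, 1.0, size=p)    # true coefficients ~ N(-0.05, 1)
    b = rng.normal(-0.1, 1.0)                # true intercept ~ N(-0.1, 1)
    prob = norm.cdf(X @ beta + b)            # posterior P(y = 1 | X)
    y = rng.binomial(1, prob)                # binary labels
    return X, y, beta, b


def mse_loss(X, y, beta, b):
    """Non-convex objective N^{-1} sum_i (y_i - Phi(x_i^T beta + b))^2."""
    return np.mean((y - norm.cdf(X @ beta + b)) ** 2)
```

Since the objective is non-convex in $(\beta, b)$, the true parameters need not be the unique stationary point, which is exactly why the MVI reformulation is of interest.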
Table \ref{tab:non-conve_probit} shows that \SVI \ consistently reaches smaller parameter recovery error than SGD across feature dimensions, with an 8$\sim$10\% difference. Moreover, the same phenomenon appears in both training and test MSE losses. Lastly, SGD may reach a smaller classification error, but the difference is very small. See Appendix \ref{sec:probit} for additional results.
\noindent \textit{(b) Fully connected network.} We next consider the classification problem of determining the cluster label of each sample of a simulated two-moon dataset. Figure \ref{fig:moon-data} visualizes the dataset. We use a two-layer fully-connected network with ReLU (layer 1) and softmax (layer 2) activation. We let $N=N_1=500$, $\textsf{lr}$=0.15, $B=100$, and $E=100$.
Table \ref{tab:moon_table} shows that \SVI \ consistently reaches smaller MSE losses and classification errors on training and test data across different numbers of hidden neurons. Figure \ref{fig:moon_fig} shows that \SVI \ also converges faster than SGD throughout the epochs. See Appendix \ref{sec:twomoon} for additional results.
\noindent \textit{(c) Model recovery on random graphs.} Given a graph $G=(\mathcal V,\mathcal E), |\mathcal V|=n$ and a signal $X_i \in \R^{n\times C}$, we generate $Y_i \in \R^{n\times F}$, where $\condexp{Y_i}{X_i}$ is a two-layer GCN model with ReLU (layer 1) and sigmoid (layer 2) activation. We let $C=2$ and $F=1$ and let the true number of hidden nodes be 2. Entries of all true weight and bias parameters are i.i.d. samples from $\mathcal N(1,1)$ under a fixed seed. We consider both small ($n=15$) and large ($n=40$) graphs, where $\PP[(i,j) \in \mathcal{E}]=0.15$; Figure \ref{fig:GCN_simul_appendx} visualizes the graphs. The true parameters are identical in both graphs. We examine model recovery performance in terms of correctly predicting the conditional expectation $\condexp{Y_t}{X_t}$ on test data, which is also the posterior probability of $Y_t$ given $X_t$, and reaching a small MSE loss relative to the true loss under $\condexp{Y_t}{X_t}$. Moreover, we examine the recovery performance either when the true graph (i.e., $\mathcal{E}$) is known or when the edge set is perturbed---a $p$ fraction of the edges in $\mathcal{E}$ and in $\mathcal{E}^C$ are randomly discarded and inserted, respectively, where $\mathcal{E}^C$ denotes the edges of a fully-connected graph that do not exist in $\mathcal{E}$. We set $p=0.2$ (resp. 0.05) for the small (resp. large) graph. Finally, we let $N=2000, N_1=200$, $B=100$, and $E=200$. We use $\textsf{lr}$=0.0001, momentum = 0.99, and also the Nesterov momentum \citep{pmlr-v28-sutskever13}.
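The edge-perturbation step described above (discard a $p$ fraction of the true edges, insert a $p$ fraction of the non-edges) can be sketched as follows; `perturb_edges` is a hypothetical helper written for illustration, not the authors' implementation.

```python
import numpy as np


def perturb_edges(n, edges, p, rng):
    """Randomly discard a p-fraction of true edges and insert a
    p-fraction of non-edges (edges of the complete graph absent
    from the true edge set)."""
    edges = set(edges)
    # complement: all pairs (i, j), i < j, not in the true edge set
    non_edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if (i, j) not in edges]
    edge_list = sorted(edges)
    drop = set(rng.choice(len(edge_list), size=int(p * len(edge_list)),
                          replace=False))
    add = rng.choice(len(non_edges), size=int(p * len(non_edges)),
                     replace=False)
    kept = {e for k, e in enumerate(edge_list) if k not in drop}
    inserted = {non_edges[k] for k in add}
    return kept | inserted
```

Edges are stored as ordered pairs $(i, j)$ with $i < j$ for an undirected graph, so kept and inserted edges never collide.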
Table \ref{tab:small_large_graph} shows that \SVI \ consistently reaches smaller relative error in posterior prediction measured in the $\ell_2$ norm, even in over-parametrized cases (e.g., using 32 hidden neurons for estimation when the ground truth has only 2 neurons). The pattern is consistent even if the graph information is perturbed, and it persists when we measure model recovery in terms of relative error in MSE loss or posterior prediction in the $\ell_{\infty}$ norm. Figure \ref{fig:small_large_graph} (large graph) also shows that \SVI \ consistently converges faster than SGD. See Appendix \ref{sec:GCN_simul_appendix} for additional results, including those on the small random graph and those under batch normalization \citep{pmlr-v37-ioffe15} and/or the Adam optimization method \cite{Kingma2015AdamAM}.
To better understand how \SVI \ updates parameters, Figure \ref{fig:dynamics} zooms in on the dynamics of 50 neurons (right panel) and shows the corresponding relative cross-entropy loss on test data (left panel). We use both the cross-entropy loss and the softplus activation in the first layer to illustrate that \SVI \ maintains good performance under other choices; qualitatively, the behavior of \SVI \ barely differs under such choices (see Figure \ref{fig:dynamics_apppend} for additional results). We regenerate 2000 training and test data with batch size equal to 200 and run both methods for 200 epochs to avoid lazy training of SGD \citep{chizat2018lazy}. In the right panel, we plot the norm of the first-layer neuron weights, defined via the inner product with the initial weights, against the second-layer neuron weights, which are scalars because $F=1$. One circle represents one neuron, with arrows representing the direction of movement along the initial weights; we connect the initial and final dots to indicate the displacement of each neuron. The same visualization techniques are used in \citep{pellegrini2020analytic}. It is clear that \SVI \ displaces the neurons much further after 200 epochs, as anticipated in Remark \ref{remark:dynamic}. In this case, we believe the displacement is beneficial due to the much faster error convergence.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{SGD_VI_dynamics_H=50_Cross-Entropy_splus.pdf}
\caption{Model recovery on the small graph with 50 neurons, with identical learning rate and Softplus activation for both methods. Left: relative error in training and test losses. Right: visualization of neuron dynamics. After 200 epochs, \SVI \ displaces the neurons much further from their initial positions than SGD, and this displacement also leads to much faster convergence in the relative error.}
\label{fig:dynamics}
\end{figure}
\begin{table*}[!t]
\begin{varwidth}[b]{0.75\linewidth}
\centering
\resizebox{\textwidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{|P{3cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}|}
\toprule
{\textbf{California solar data}} & \multicolumn{4}{c}{MSE loss} & \multicolumn{4}{c}{Classification error} & \multicolumn{4}{c}{Weighted $F_1$ score} \\
\hline
{\# Hidden neurons} & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test \\
\hline
8 & 0.217 (6.6e-03) & 0.219 (5.9e-03) & 0.195 (7.0e-03) & 0.207 (5.7e-03) & 0.297 (8.1e-03) & 0.321 (4.5e-03) & 0.296 (1.4e-02) & 0.318 (1.5e-02) & 0.706 (8.1e-03) & 0.675 (5.6e-03) & 0.703 (1.5e-02) & 0.681 (1.5e-02) \\
16 & 0.229 (7.6e-03) & 0.225 (7.3e-03) & 0.194 (7.3e-03) & 0.204 (6.4e-03) & 0.314 (1.3e-02) & 0.335 (6.8e-03) & 0.292 (1.7e-02) & 0.305 (1.7e-02) & 0.687 (1.3e-02) & 0.655 (9.4e-03) & 0.709 (1.7e-02) & 0.695 (1.7e-02) \\
32 & 0.224 (2.7e-03) & 0.223 (1.4e-03) & 0.185 (2.5e-03) & 0.196 (2.3e-03) & 0.297 (3.4e-03) & 0.333 (7.4e-03) & 0.274 (4.4e-03) & 0.292 (5.0e-03) & 0.704 (3.8e-03) & 0.659 (8.4e-03) & 0.727 (4.0e-03) & 0.708 (5.1e-03) \\
64 & 0.213 (4.5e-04) & 0.211 (9.6e-04) & 0.18 (5.1e-04) & 0.189 (3.9e-04) & 0.283 (2.5e-03) & 0.308 (6.0e-03) & 0.262 (2.1e-03) & 0.27 (1.5e-03) & 0.719 (2.6e-03) & 0.689 (6.7e-03) & 0.738 (1.7e-03) & 0.73 (1.4e-03) \\
\bottomrule
\end{tabular}
\egroup
}
\caption{California solar binary ramping event detection under a three-layer GCN model, with identical learning rate and ReLU activation for both methods. In addition to MSE loss and classification error, we also use the weighted $F_1$ score as a metric, which weighs the $F_1$ score of each class by its support. \SVI \ consistently outperforms SGD on all these metrics on both training and test data.}
\label{tab:solar_tab}
\end{varwidth}
\hspace{0.05in}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{Vanilla_vs_vanilla_VI__graph_inferred_solar_CA_vanillahidden_64_VIhidden_64_new__all_layer_MSE_single.pdf}
\vspace{-0.3in}
\captionof{figure}{Classification error under 64 neurons in both hidden layers. \SVI \ converges faster than SGD on both training and test data, especially in the initial stage.}
\label{fig:solar_fig}
\end{minipage}
\end{table*}
\begin{table*}[!t]
\begin{varwidth}[b]{0.75\linewidth}
\centering
\resizebox{\textwidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{|P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}|}
\toprule
{\textbf{Traffic data}} & \multicolumn{4}{c}{Cross-Entropy loss} & \multicolumn{4}{c}{Classification error} & \multicolumn{4}{c}{Weighted $F_1$ score} \\
\hline
{\# Hidden neurons} & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test \\
\hline
8 & 0.837 (7.0e-03) & 0.834 (1.1e-02) & 0.967 (6.2e-03) & 0.969 (5.5e-03) & 0.397 (1.3e-02) & 0.414 (1.5e-02) & 0.412 (1.6e-02) & 0.427 (1.6e-02) & 0.572 (2.9e-02) & 0.554 (2.9e-02) & 0.572 (2.4e-02) & 0.558 (2.3e-02) \\
16 & 0.808 (5.6e-03) & 0.804 (4.1e-03) & 0.955 (3.3e-03) & 0.956 (4.0e-03) & 0.391 (1.3e-02) & 0.411 (1.2e-02) & 0.37 (3.8e-03) & 0.384 (7.0e-03) & 0.576 (3.1e-02) & 0.556 (3.0e-02) & 0.624 (7.3e-03) & 0.61 (1.0e-02) \\
32 & 0.791 (5.7e-03) & 0.787 (6.5e-03) & 0.939 (3.4e-03) & 0.941 (3.6e-03) & 0.366 (5.9e-03) & 0.383 (7.9e-03) & 0.358 (4.5e-03) & 0.37 (4.9e-03) & 0.628 (6.7e-03) & 0.611 (9.0e-03) & 0.641 (4.6e-03) & 0.629 (5.0e-03) \\
64 & 0.791 (2.0e-03) & 0.787 (2.2e-03) & 0.933 (2.1e-03) & 0.935 (2.1e-03) & 0.36 (2.7e-03) & 0.375 (4.4e-03) & 0.346 (1.7e-03) & 0.358 (1.6e-03) & 0.637 (2.8e-03) & 0.622 (4.7e-03) & 0.652 (2.0e-03) & 0.641 (1.8e-03) \\
\bottomrule
\end{tabular}
\egroup
}
\caption{Traffic data multi-class anomaly detection under a three-layer GCN model, with identical learning rate and ReLU activation for both methods. We note that \SVI \ remains competitive with or outperforms SGD in terms of classification error and weighted $F_1$ score, with clear improvement as the number of hidden neurons increases. However, \SVI \ reaches much larger cross-entropy losses, which we believe are benign (see Section \ref{sec:real_data} (b) for justification).}
\label{tab:traffic_tab}
\end{varwidth}
\hspace{0.05in}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{Vanilla_vs_vanilla_VI__graph_inferred_Traffic_vanillahidden_64_VIhidden_64_new__all_layer_Cross-Entropy_single.pdf}
\vspace{-0.3in}
\captionof{figure}{Classification error under 64 neurons in both hidden layers. \SVI \ converges faster, especially after a single training epoch.}
\label{fig:traffic_fig}
\end{minipage}
\end{table*}
\subsection{Real-data Network Prediction}\label{sec:real_data}
To test the flexibility of \SVI, we use three-layer GCN models for both real-data experiments, where both hidden layers share the same number of hidden neurons with the ReLU activation. The output layer uses either sigmoid (binary classification) or softmax (multi-class classification). We fix $\textsf{lr}$=0.0001, momentum = 0.99, and use the Nesterov momentum \citep{pmlr-v28-sutskever13}.
\noindent \textit{(a) Binary solar ramping prediction.} The raw solar radiation data are retrieved from the National Solar Radiation Database for 2017 and 2018. We consider data from 10 city downtowns in California and from 9 locations in downtown Los Angeles, where each city or location is a node in the network. The goal is to identify daily ramping events, which are abrupt changes in solar power generation. Thus, $Y_{t,i}=1$ if node $i$ at day $t$ experiences a ramping event. We define the feature $X_t:=\{Y_{t-1},\ldots,Y_{t-d}\}$ as the collection of the past $d$ days of observations and pick $d=5$. We estimate edges via a $k$-nearest neighbor approach based on the correlation between training ramping labels, with $k=4$. Data in 2017 are used for training ($N=360$) and the rest for testing ($N_1=365$), and we let $B=30$ and $E=100$.
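The feature construction and the correlation-based $k$-nearest-neighbor graph can be sketched as below; both helpers are hypothetical illustrations of the described procedure, not the authors' code.

```python
import numpy as np


def lag_features(Y, d):
    """Stack the past d days of labels as features:
    X_t = (Y_{t-1}, ..., Y_{t-d}) for t = d, ..., T-1,
    where Y has shape (T, n) for n nodes."""
    T, n = Y.shape
    X = np.stack([Y[t - d:t][::-1] for t in range(d, T)])  # (T-d, d, n)
    return X, Y[d:]


def knn_graph_from_labels(Y_train, k):
    """Estimate edges by connecting each node to the k nodes whose
    training label series is most correlated with its own."""
    C = np.corrcoef(Y_train.T)       # (n, n) pairwise correlations
    np.fill_diagonal(C, -np.inf)     # exclude self-loops
    edges = set()
    for i in range(C.shape[0]):
        for j in np.argsort(C[i])[-k:]:   # k most-correlated neighbours
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges
```

Storing edges as ordered pairs keeps the estimated graph undirected, matching the GCN filters used later.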
Table \ref{tab:solar_tab} shows that \SVI \ almost always outperforms SGD under every metric on both training and test data. The pattern is consistent across different numbers of hidden neurons. Figure \ref{fig:solar_fig} also shows that \SVI \ converges faster than SGD, which aligns with the earlier simulation results. Hence, \SVI \ also performs well beyond two-layer GNNs. See Appendix \ref{sec:solar_append} for additional results, including those under batch normalization, that illustrate the faster convergence of \SVI.
\noindent \textit{(b) Multi-class traffic flow anomaly detection.} The raw bi-hourly traffic flow data are from the California Department of Transportation, where we collect data from 20 non-uniformly spaced traffic sensors in 2020. We set $Y_{t,i}=1$ (resp. 2) if the current traffic flow lies above the upper (resp. below the lower) 90\% quantile over the past four days of traffic flow of its nearest four neighbors based on sensor proximity. As before, we define the feature $X_t$ as the collection of the past $d$ days of observations and set $d=4$, where the edges include the nearest 5 neighbors based on location. Data in the first nine months are used for training ($N=6138$) and the rest for testing ($N_1=2617$). We let $B=600$ and $E=100$ and use the cross-entropy loss to test the performance of \SVI \ under alternative loss functions.
Table \ref{tab:traffic_tab} shows that \SVI \ reaches smaller training and test classification errors for larger numbers of hidden neurons and remains competitive when fewer neurons are used. In terms of weighted $F_1$ scores, \SVI \ also reaches higher training and test scores for all but one choice of hidden neurons. Lastly, we note that \SVI \ results in much larger cross-entropy losses, which can happen if the posterior probabilities predicted by \SVI \ are more uniform than those by SGD. In this case, such behavior is benign given the smaller test errors of \SVI, which likely arise because there is potential ambiguity in the test data: the true posterior probabilities in certain portions do not concentrate on a specific class. As a result, the more uniform posterior probability predictions by \SVI \ actually better estimate the true probabilities, leading to smaller test errors and higher weighted $F_1$ scores. See Appendix \ref{sec:traffic_append} for additional results that illustrate the faster convergence of \SVI.
\section{Conclusion}
We have presented a new monotone operator variational inequality (VI) approach for training neural networks, particularly graph neural networks (GNN), which enjoys strong prediction guarantees when training one-layer neural networks. For general multi-layer neural networks, the proposed method obtains faster convergence and makes more accurate predictions relative to a comparable stochastic gradient descent algorithm, as demonstrated by extensive simulations and real-data studies across various performance metrics.
\clearpage
\section{Introduction}
Neural network (NN) training \citep{Duchi2010AdaptiveSM,pmlr-v28-sutskever13,Kingma2015AdamAM,Ioffe2015BatchNA} is an essential process in the study of deep models. Optimization guarantees for the training loss, as well as generalization error bounds, have been obtained for over-parametrized networks \citep{neyshabur2014search,mei2018mean,arora2019exact,arora2019fine,allen2019convergence,du2019gradient}. However, due to the inherent non-convexity of the loss objectives, theoretical developments remain diffuse and lag behind the vast empirical successes.
Recently, the seminal work \citep{VI_est} presented a somewhat surprising result: some non-convexity issues can be circumvented in special cases by problem reformulation. In particular, it was shown that when estimating the parameters of generalized linear models (GLM), instead of minimizing a least-squares loss function, which leads to a non-convex optimization problem with no guarantees for global convergence or model recovery, one can reformulate the problem as solving a monotone variational inequality (MVI), a general form of convex optimization. The reformulation through MVI leads to strong performance guarantees and computationally efficient procedures.
In this paper, inspired by \citep{VI_est} and the fact that certain GLMs (such as logistic regression) can be viewed as the simplest neural networks with only one layer, we consider a new scheme for neural network training based on MVI. This is a drastic departure from the widely used gradient descent algorithm for neural network training --- we replace the gradient of a loss function with a carefully constructed monotone operator. The benefits of such an approach include: (i) in special cases (one-layer networks), we can establish strong training and prediction guarantees -- see Section \ref{sec:guarantee}; (ii) in general cases, through extensive numerical studies on synthetic and real data in Section \ref{sec:expr_main}, we demonstrate faster convergence to a local solution by our approach relative to gradient descent in a comparable setup.
To the best of our knowledge, the current paper is the first to study MVI for training neural networks. Our \SVI, as a general way of modifying the parameter update scheme in NN training, can be readily applied to various deep architectures. In this work, beyond fully-connected (FC) neural networks, we especially study MVI training for node classification in graph neural networks (GNN) \citep{Wu2019ACS,Pilanci2020NeuralNA}, due to the ubiquity of network data and the importance of network prediction problems.
Specifically, our technical contributions include:
\begin{itemize}[noitemsep,topsep=0em]
\item Reformulate the one-layer parameter recovery problem in neural network training as solving a monotone variational inequality, with guarantees on recovery and prediction accuracy.
\item Develop a heuristic algorithm called \textit{stochastic variational inequality} (\SVI) for training multi-layer neural networks. In particular, the algorithm provides a fundamentally different but easy-to-implement way of defining the gradient of parameters during estimation.
\item Compare \SVI \ with standard stochastic gradient descent (SGD)
extensively to demonstrate the competitiveness and improvement of \SVI, in terms of model recovery and prediction accuracy on various problems.
\end{itemize}
{\it Literature.} MVI has been studied mainly in the context of optimization problems \citep{Kinderlehrer1980AnIT,Facchinei2003FiniteDimensionalVI}, which can be viewed as the most general problems with convex structure \citep{juditsky2021well}. More recently, VI has been used to solve min-max problems in Generative Adversarial Networks \citep{Lin2018SolvingWS} and reinforcement learning \citep{Georgios20I}. In particular, our theory and techniques are inspired by \citep{VI_est}, which uses strongly monotone VIs for signal recovery in generalized linear models (GLM). In contrast to their work, we offer a thorough investigation of using VI to train multi-layer NNs and address the case when the VI may not be strongly monotone. On the other hand, we emphasize the difference from \citep{Pilanci2020NeuralNA}, which views two-layer NNs as convex regularizers: we focus on model recovery rather than changing variables to convexify the loss minimization. In addition, our \SVI \ extends beyond two-layer networks.
The rest of the paper proceeds as follows. Section \ref{setup} describes the general NN and GNN models we tackle and provides preliminaries of MVI. Section \ref{sec:VI_training} provides the \SVI \ algorithm for training one-layer and multi-layer neural networks. Section \ref{sec:guarantee} presents various theoretical guarantees, including model recovery guarantees measured in $\ell_p$ error. We also discuss cases where MVI training is identical to gradient-based methods and where the graph is estimated in GNN models. Section \ref{sec:expr_main} compares \SVI \ and stochastic gradient descent (SGD) on various synthetic and real-data examples to illustrate the potential benefits of \SVI. Section \ref{sec:conclusion} concludes the paper. Appendix \ref{sec:theory_append} contains proofs and Appendix \ref{sec:experiment_appendix} contains additional experimental results.
\section{Problem setup}\label{setup}
We first introduce the general neural network model in Section \ref{sec:NN_notation} and then specify the class of graph neural networks (GNN) we can tackle in Section \ref{sec:GNN_notation}, after which the MVI preliminaries are introduced in Section \ref{sec:VI_prelim}.
\subsection{Notation of general NN} \label{sec:NN_notation}
Assume a generic feature $X\in \R^C$, where $C$ denotes the feature dimension. Suppose the conditional expectation $\EE[Y|X]$ of the categorical response vector\footnote{The techniques and theory in this work can also be used to model the conditional expectation of continuous random variables.} $Y \in \{ 1,\ldots,K \}$ is modeled by an $L$-layer neural network $f(X,\Theta)$:
\begin{equation}\label{eq:NN}
\EE[Y|X,\Theta]:=f(X,\Theta)=\phi_L(g_L(X_{L},\Theta_{L})),
\end{equation}
where $X_{L}=\phi_{L-1}(g_{L-1}(X_{L-1},\Theta_{L-1})), X_1=X$ denote the nonlinear feature transformation from the previous layer, $\Theta=\{\Theta_1,\ldots,\Theta_L\}$ denotes model parameters, and each $\phi_l$ denotes the activation function at layer $l$. In particular, assume there exists $\Theta^*=\{\Thetastar{1},\ldots,\Thetastar{L}\}$ so that $\EE[Y|X]=f(X,\Theta^*)$.
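The layer recursion in \eqref{eq:NN} amounts to the following forward pass; the helper below is a minimal sketch assuming each layer is described by a feature map $\eta_l$, a parameter matrix $\Theta_l$, and an activation $\phi_l$, with $\eta_l$ the identity for a plain fully-connected layer.

```python
import numpy as np


def forward(X, Thetas, etas, phis):
    """L-layer forward pass: X_1 = X and
    X_{l+1} = phi_l(eta_l(X_l) @ Theta_l); the output is X_{L+1}."""
    H = X
    for eta, Theta, phi in zip(etas, Thetas, phis):
        H = phi(eta(H) @ Theta)
    return H
```

For instance, a two-layer network with ReLU and sigmoid activations is `forward(X, [Theta1, Theta2], [identity, identity], [relu, sigmoid])`.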
The goal is to estimate the parameters $\Theta^*$ using $N$ training data points, with a performance guarantee bounding the error between the final predictor $f(X_t,\widehat \Theta)$ and the ground truth $f(X_t, \Theta^*)$, $t > N$. In particular, we consider the $\ell_p$ error $\|f(X_t,\widehat \Theta)-f(X_t, \Theta^*)\|_p$. To this end, we adopt monotone variational inequalities (MVI) for training in Section \ref{sec:VI_training}. In particular, we cast the one-layer training as solving an MVI in Section \ref{sec:VI_one_layer} and present a practical algorithm motivated by MVI to train the parameters in all layers in Section \ref{sec:heuristic}.
\subsection{Notation of GNN models}\label{sec:GNN_notation}
We first introduce the basics of graph filtering and then specify a class of filters that our MVI method to be proposed in Section \ref{sec:VI_training} can handle. It should be understood that our techniques can be applied for general neural network training (e.g., networks in \eqref{eq:NN}). Experiments for both FC networks and GNN appear in Section \ref{sec:expr_main}.
Suppose we have an undirected and connected graph $\G=(\V,\E, W)$, where $\V$ is a finite set of $n$ vertices, $\E$ is a set of edges, and $W\in\R^{n\times n}$ is a weighted adjacency matrix encoding node connections. Let $I_n$ denote an identity matrix of size $n$. Let $D$ be the degree matrix of $W$ and $L_g=I_{n}-D^{-1 / 2} W D^{-1 / 2}$ be the normalized graph Laplacian, which has the eigendecomposition $L_g=U\Lambda U^T$. For a graph signal $X \in \R^{n \times C}$ with $C$ input channels, it is then filtered via a function $g_{\Theta}(L_g)$ which acts on $L_g$ with channel-mixing parameters $\Theta \in \R^{C\times F}$ for $F$ output channels. Thus, the filtered signal $X'=g_{\Theta}(L_g)X$.
Our MVI framework to be introduced next can handle a class of graph filters in which the filter and the parameters are \textit{decoupled} before applying the activation function. For instance, for some positive integer $R$, $g_{\Theta}(L_g)X=\sum_{r=1}^R g_r(L_g) X \Theta_r$, where the $g_r(L_g)$ are fixed (non-adaptive) graph filters determined by the graph Laplacian, and the $\Theta_r$ are trainable parameters. Note that this class of filters includes some popular ones:
\begin{itemize}[topsep=0em,noitemsep]
\item
{\it Graph Convolutional Network (GCN)} \citep{GCN}:
\[g_{\Theta}(L_g)X=(\tilde{D}^{-\frac{1}{2}} \tilde{W} \tilde{D}^{-\frac{1}{2}})X\Theta,\] where $\tilde{W}:=W+I_n$ is the adjacency matrix with self-loops and $\tilde{D}$ is its degree matrix.
\item
{\it Chebyshev Network (Chebnet)} \citep{chebnet}:
\[g_{\Theta}(L_g)X=\sum_{k=1}^K T_k(L_g)X\Theta_k,\] for the $K^{\rm{th}}$-order Chebyshev polynomials $\{T_1,\ldots,T_K\}$.
\item
{\it Graph Sample and Aggregate (GraphSAGE)} \citep{GraphSAGE}:
\[[g_{\Theta}(L_g)X]_i=\Theta_1 x_i+ \text{mean}_{j\in N(i)} x_j.\]
\end{itemize}
More generally, our framework works as long as there exists a mapping $\eta$ such that
\begin{equation}\label{eq1}
g_{\Theta}(L_g)X=\eta(X)\Theta.
\end{equation}
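As a concrete instance of the decoupled form \eqref{eq1}, the GCN filter can be written as $\eta(X)\Theta$ with $\eta(X)=\tilde{D}^{-1/2}\tilde{W}\tilde{D}^{-1/2}X$. A minimal NumPy sketch (dense matrices, for illustration only):

```python
import numpy as np


def gcn_eta(W, X):
    """GCN feature map eta(X) = D~^{-1/2} W~ D~^{-1/2} X with
    W~ = W + I (self-loops) and D~ its degree matrix; the trainable
    parameters enter only afterwards, as eta(X) @ Theta."""
    n = W.shape[0]
    W_tilde = W + np.eye(n)
    d = W_tilde.sum(axis=1)               # degrees, >= 1 due to self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ W_tilde @ D_inv_sqrt @ X

# filtered signal before the activation: g_Theta(L_g) X = gcn_eta(W, X) @ Theta
```

Because $\eta$ depends only on the (fixed) graph, $\eta(X)$ can be precomputed once per batch, which is what makes the one-layer MVI formulation applicable.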
\subsection{Preliminaries of MVI}\label{sec:VI_prelim}
We now introduce the notion of an MVI and discuss the computability of the problem; concrete training techniques are presented in Section \ref{sec:VI_training}. Given a parameter set $\bTheta \subset \R^{p}$, we call a continuous mapping (operator) $F: \bTheta \rightarrow \R^p$ \textit{monotone} \citep{VI_est} if for all $\Theta_1, \Theta_2 \in \bTheta$,
$
\langle F(\Theta_1)-F(\Theta_2),\Theta_1-\Theta_2 \rangle \geq 0.
$
The operator is called \textit{strongly monotone} with modulus $\kappa$ if for all $\Theta_1, \Theta_2 \in \bTheta$,
\begin{equation}\label{eq:modulus}
\langle F(\Theta_1)-F(\Theta_2),\Theta_1-\Theta_2 \rangle \geq \kappa \|\Theta_1-\Theta_2\|^2_2.
\end{equation}
If $F \in C^1(\bTheta)$ (i.e., continuously differentiable on $\bTheta$) and $\bTheta$ is closed and convex with non-empty interior, (\ref{eq:modulus}) holds if and only if $\lambda_{\min}(\nabla F(\Theta))\geq \kappa$ for all $\Theta \in \bTheta$, where $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a square matrix.
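The eigenvalue condition above can be checked numerically at sample points by finite differences; the sketch below estimates the Jacobian of $F$ and takes the minimum eigenvalue of its symmetrized version, since only the symmetric part of $\nabla F$ enters the quadratic form in \eqref{eq:modulus}.

```python
import numpy as np


def min_jacobian_eig(F, theta, eps=1e-5):
    """Finite-difference estimate of lambda_min of the symmetrized
    Jacobian of F at theta; a value >= kappa at every theta certifies
    strong monotonicity with modulus kappa."""
    p = theta.size
    J = np.zeros((p, p))
    for i in range(p):
        e = np.zeros(p)
        e[i] = eps
        J[:, i] = (F(theta + e) - F(theta - e)) / (2 * eps)  # column i
    return np.linalg.eigvalsh((J + J.T) / 2).min()
```

For a linear operator $F(\Theta)=A\Theta$ with $A$ symmetric positive definite, this recovers $\lambda_{\min}(A)$ exactly.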
Now, for a monotone operator $F$ on $\bTheta$, the problem $\VI$ is to find an $\bar{\Theta} \in \bTheta$ such that for all $\Theta \in \bTheta$,
\[
\langle F(\bar{\Theta}), \Theta-\bar{\Theta} \rangle \geq 0. \qquad \VI
\]
It is known that if $\bTheta$ is compact, then $\VI$ has at least one solution \citep[Theorem 4.1]{outrata2013nonsmooth}. In addition, if $\kappa>0$ in (\ref{eq:modulus}), then $\VI$ has exactly one solution \citep[Theorem 4.4]{outrata2013nonsmooth}. Under mild computability assumptions, the solution can be efficiently computed to high accuracy using various iterative schemes (of a similar form as stochastic or accelerated gradient descent, with the gradient replaced by the monotone operator) \citep{VI_est}.
\section{MVI for neural network training}\label{sec:VI_training}
We now present the MVI-based algorithms for training neural networks. We start with simple one-layer network training with MVI, where the framework and techniques directly come from \citep{VI_est}, and highlight the similarity/difference with the standard stochastic gradient descent (SGD). We then generalize to training multiple-layer neural networks.
\subsection{One-layer neural network training}\label{sec:VI_one_layer}
Let us first consider training a one-layer neural network (i.e., $L=1$ in \eqref{eq:NN}) with a particular form of pre-activation: $g(X,\Theta)=\eta(X)\Theta$ for an arbitrary feature transformation $\eta$. In the simplest case of fully connected neural networks, $\eta(X) = X$, i.e., the identity map. In the GNN case, as in \eqref{eq1}, $\eta(X)$ is the filtered signal before multiplying by the channel-mixing coefficients $\Theta$. Note that in this special case, training the neural network is mathematically equivalent to estimating a GLM with a properly chosen activation function \cite{VI_est}.
We construct the monotone operator $F$ as
\begin{equation}\label{eq:operatr}
F(\Theta):=\EEXY{\etaT{X}[\phi (\eta(X)\Theta)-Y]},
\end{equation}
where $\etaT{X}$ denotes the transpose of $\eta(X)$. We explain several properties of $F$ in Lemma \ref{lem1} of Section \ref{sec:guarantee}. Given training samples, we can form an empirical sample version of $F$, denoted by $\widehat F$. Training based on the MVI then performs the update
\begin{equation}\label{eq:VI_training}
\Theta \leftarrow \operatorname{Proj}_{\bTheta}(\Theta - \gamma \widehat F(\Theta)),
\end{equation}
where $\gamma>0$ is the step-size and $\operatorname{Proj}_{\bTheta}$ denotes the projection operation that ensures the updated parameters remain within the feasible parameter domain $\bTheta$.
Note that \eqref{eq:VI_training} differs from SGD where the gradient of a certain loss objective takes the role of the monotone operator $\widehat F$. In particular, $\widehat F$ does not need to correspond to the gradient of any loss function. Nevertheless, we will show in Section \ref{sec:equivalence}, Proposition \ref{prop:equivalence} that $\widehat F$ is exactly the gradient of the cross-entropy loss with respect to parameter $\Theta$ if $\phi$ is either the sigmoid or softmax, whereby \eqref{eq:VI_training} coincides with SGD.
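A minimal sketch of the one-layer update \eqref{eq:VI_training} with the empirical operator from \eqref{eq:operatr} is below; the Frobenius-ball projection is one convenient choice of feasible set $\bTheta$ for illustration, not one prescribed by the paper.

```python
import numpy as np


def mvi_step(Theta, X_batch, Y_batch, eta, phi, gamma, radius):
    """One projected step Theta <- Proj(Theta - gamma * F_hat(Theta)),
    where F_hat(Theta) = N^{-1} eta(X)^T (phi(eta(X) Theta) - Y) is the
    empirical monotone operator and Proj is Euclidean projection onto a
    Frobenius ball of the given radius."""
    H = eta(X_batch)                                        # (N, C)
    F_hat = H.T @ (phi(H @ Theta) - Y_batch) / len(X_batch)
    Theta = Theta - gamma * F_hat
    nrm = np.linalg.norm(Theta)
    if nrm > radius:                                        # project back
        Theta = Theta * (radius / nrm)
    return Theta
```

When $\phi$ is the sigmoid and $\eta$ the identity, $\widehat F$ coincides with the gradient of the cross-entropy loss, so iterating `mvi_step` then reproduces projected SGD on that loss.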
\subsection{Multi-layer neural network training}\label{sec:heuristic}
Based on our one-layer training framework, we require the general model \eqref{eq:NN} to take the form that for each $l\leq L$, \begin{equation}\label{eq2}
g_l(X_l,\Theta_l)=\eta_l(X_l)\Theta_l,
\end{equation}
where $\eta_l$ is the nonlinear feature transformation in layer $l$, which can differ over layers. Based on Section \ref{sec:VI_one_layer}, it is natural to replace the gradient of the objective with respect to $\Theta_l$ by $F_l(\Theta_l)$ in \eqref{eq:operatr}, where $Y$ is replaced by $Y_l$ such that $\EE[Y_l|X]=\phi_l(\eta_l(X_l)\Theta_l)$, so that dimensions match.
Unfortunately, $Y_l$ implicitly depends on the true parameters in all previous layers, which are never known, so this naive approach fails. Instead, we provide a heuristic \textit{stochastic variational inequality} (\SVI) in Algorithm \ref{alg:heuristic}, which is motivated by both the MVI in the one-layer training and traditional empirical loss minimization. In the algorithm, $X^t_{i,l}$ is the hidden representation of $X_i$ using the parameter estimates at the $t^{\rm{th}}$ iteration in all previous layers, denoted as $\hat \Theta^t_{1:l-1}$. Meanwhile, $f(X^t_{j,l+1}, \hat \Theta^t_{l+1:L})=f(X_j,\hat \Theta^t)$ is the output of the neural network from layer $l+1$ onward.
\begin{algorithm}[htbp]
\cprotect \caption{Stochastic variational inequality (\SVI).}
\label{alg:heuristic}
\begin{algorithmic}[1]
\REQUIRE{
Inputs are (a) Training data $\{(X_i,Y_i)\}_{i=1}^N$, batch number $B$, and epochs $E$; (b) An $L$-layer network $f(X,\Theta):=\{\phi_l(g_l(X_l,\Theta_l))\}_{l=1}^L$; (c) Initial parameters $\hat{\Theta}^0_{1:L}:=\{\hat{\Theta}^0_l\}_{l=1}^L$; (d) Loss function $\Lc(f(X,\Theta),Y)$.}
\ENSURE{Estimated parameters $\hat{\Theta}^T$, $T:=\ceil{NE/B}$}
\STATE Let $I:=\ceil{N/B}$ be the number of iterations per epoch and denote batches as $b_1,\ldots,b_I$.
\FOR {epoch $e=0,\ldots,E$ and iteration $i=1,\ldots,I$}
\STATE Compute $t:=eI+i$ as the current iteration index.
\STATE Compute gradient simultaneously at each layer $l$ as:
\STATE If $l<L$\\
{\scriptsize $F^t_l:=b_i^{-1}\sum_{j \in b_i} \etaT{X^t_{j,l}}\nabla_{X^t_{j,l+1}} \Lc(f(X^t_{j,l+1}, \hat{\Theta}^t_{l+1:L}),Y_j)$}
\STATE If $l=L$ \\
{\scriptsize $F^t_L:=b_i^{-1}\sum_{j \in b_i} \etaT{X^t_{j,L}}[\phi_L (\eta(X^t_{j,L})\hat{\Theta}^t_L)-Y_j]$}
\STATE Update $\hat{\Theta}^{t+1}_l$ according to \eqref{eq:VI_training} using $F^t_l$, where one may incorporate acceleration techniques.
\ENDFOR
\end{algorithmic}
\end{algorithm}
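To make the per-layer operators concrete, the following NumPy sketch performs one \SVI \ iteration on a toy two-layer network (identity $\eta_1$, ReLU hidden layer, sigmoid output, MSE loss). All names, sizes, and the step size are illustrative assumptions, not the paper's code.

```python
import numpy as np

# One SVI iteration on a toy two-layer net pred = sigmoid(relu(X @ Th1) @ Th2)
# under the MSE loss; eta_1 is the identity.  Illustrative sketch only.
rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(8, 3))                  # mini-batch of 8 inputs
Y = rng.integers(0, 2, size=(8, 1)).astype(float)
Th1, Th2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))

# Forward pass, storing the hidden representation X2
X2 = np.maximum(X @ Th1, 0.0)                # ReLU hidden output
pred = sigmoid(X2 @ Th2)

# Last layer (l = L): F_L = b^{-1} sum eta(X_L)^T [phi_L(eta(X_L) Th_L) - Y]
F2 = X2.T @ (pred - Y) / len(X)

# Previous layer (l < L): backpropagate the loss to the hidden *output* X2,
# then map that gradient back to parameter space via eta(X_1)^T = X^T.
dL_dX2 = ((pred - Y) / len(X) * pred * (1 - pred)) @ Th2.T
F1 = X.T @ dL_dX2                            # note: no ReLU mask here

gamma = 0.1                                  # step size in the VI update
Th1, Th2 = Th1 - gamma * F1, Th2 - gamma * F2
```

The only difference from SGD is the last-but-one line: SGD would multiply `dL_dX2` by the ReLU mask before mapping it back to the parameters.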
We briefly justify the computation of $F^t_l$ for $l<L$ in Algorithm \ref{alg:heuristic}, which is the heuristic step for the previous layers. For instance, let $\Lc(f(\Xnostar{l+1}, \Thetanostar{l+1:L}),Y)$ be the MSE loss and note that
\[\nabla_{\Xnostar{l+1}} \Lc(f(\Xnostar{l+1}, \Thetanostar{l+1:L}),Y)=[f(\Xnostar{l+1}, \Thetanostar{l+1:L})-Y] \nabla_{\Xnostar{l+1}}f(\Xnostar{l+1}, \Thetanostar{l+1:L}),\]
which can be computed by backpropagating the loss to the \textit{hidden layer outputs} $X_{l+1}$ rather than to the \textit{model parameter} $\Theta_l$; $F_l(\Theta_l)$ is then formed by mapping this gradient back to the parameter space, similarly to \eqref{eq:operatr}.
However, we cannot verify that $F_l(\Theta_l)$ is monotone in $\Theta_l$, even if parameters in later layers are fixed. In fact, we suspect monotonicity in all layers is impossible, as this would make the non-convex multi-layer neural network training convex without any relaxation.
\begin{remark}[Effect on training dynamics] \label{remark:dynamic}
We remark on a key difference between the gradient update in vanilla stochastic gradient descent (SGD) and that in \SVI, which ultimately affects the training dynamics. Suppose the activation function at layer $l$ is ReLU, so $\Xnostar{l+1}=\text{ReLU}(\eta(\Xstar{L})\Thetastar{l})$. Under the MSE loss, we notice that for SGD:
\[\nabla_{\Theta_l} \Lc =[f(\Xnostar{l+1}, \Thetanostar{l+1:L})-Y] \nabla_{\Theta_l} \text{ReLU}(\eta(\Xstar{L})\Theta_l),\]
and for \SVI:
\[\nabla_{\Xnostar{l+1}} \Lc
= [f(\Xnostar{l+1}, \Thetanostar{l+1:L})-Y] \nabla_{\Xnostar{l+1}}f(\Xnostar{l+1}, \Thetanostar{l+1:L}).\]
In particular, SGD does not update the weights of inactive neurons, because the gradient of ReLU with respect to them is zero. However, \SVI \ does not discriminate between these neurons, as the gradient is taken with respect to the \textit{outputs} of the current layer. Thus, one can expect \SVI \ to produce more weight updates than SGD, which experimentally seems to speed up the initial model convergence. We illustrate this phenomenon in Figure \ref{fig:dynamics} and will provide explanations in future work.
\end{remark}
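The inactive-neuron effect in the remark can be demonstrated in a few lines. The toy setup below (one hidden ReLU unit, linear output layer, MSE loss; all names illustrative) forces the unit to be inactive on the whole batch: the SGD gradient of its incoming weights is exactly zero, while the \SVI \ operator still moves them.

```python
import numpy as np

# A hidden ReLU unit that is inactive on the whole batch: the SGD gradient
# of its incoming weights vanishes, but the SVI operator (gradient w.r.t.
# the hidden *output*, mapped back via eta^T = X^T) does not.
rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(16, 3)))         # positive inputs
th1 = -np.ones((3, 1))                       # => pre-activation H < 0 always
th2 = np.array([[1.0]])
Y = rng.normal(size=(16, 1))

H = X @ th1                                  # all entries negative
X2 = np.maximum(H, 0.0)                      # ReLU output: identically zero
resid = (X2 @ th2 - Y) / len(X)              # MSE residual

sgd_grad = X.T @ ((resid @ th2.T) * (H > 0)) # ReLU mask kills the update
svi_F = X.T @ (resid @ th2.T)                # no mask: weights still move
```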
\section{Guarantee of model recovery by MVI}\label{sec:guarantee}
We now present guarantees on model recovery for the \textit{last-layer training} when the previous layers are fully known; in practice, this can be understood as the previous layers having provided the necessary feature extraction. This guarantee naturally holds for one-layer networks. In particular, we provide pointwise (resp. in-expectation) error bounds on predicting $\condexp{Y_t}{X_t}$ in Section \ref{sec:modulus_1} (resp. Section \ref{sec:modulus_2}), where $\kappa$ in \eqref{eq:modulus} satisfies $\kappa>0$ (resp. $\kappa=0$) for the strongly monotone (resp. monotone) VI. In addition, we discuss in Section \ref{sec:equivalence} when the monotone operator $F$ in \eqref{eq:operatr} is exactly the expectation of the gradient with respect to the model parameters; the guarantees thus also hold for gradient-based methods in such cases. Lastly, we consider imprecise graph knowledge in Section \ref{sec:imprecise}. We remark that the guarantees in Sections \ref{sec:modulus_1}, \ref{sec:modulus_2}, and \ref{sec:equivalence} hold as long as $\EE[Y|X]=\phi(\eta(X)\Theta)$, which encompasses both FC networks and the classes of GNN models mentioned earlier in Section \ref{setup}.
Let $\{(X_i,Y_i)\}_{i=1}^N$ be $N$ training data from model (\ref{eq:NN}) with $\Theta=\Theta^*$. For the subsequent theoretical analyses, we assume that for each $i$, the quantity $\Xstar{i,L}$ is known, which is the output of the previous $L-1$ hidden layers applied to $X_i$ with the true parameters $\{\Theta^*_1,\ldots,\Theta^*_{L-1}\}$. Thus, $\eta(X)$ becomes $\eta(\Xstar{L})$ for a generic feature $X$. For the operator $F(\Theta_L)$ in (\ref{eq:operatr}), where $\bTheta$ is the parameter space of $\Theta_L$, consider its empirical average
$
\Femp(\Theta_L):= (1/N) \sum_{i=1}^N \Fempi(\Theta_L),
$
where $\Fempi(\Theta_L):=\etaT{\Xstar{i,L}}[\phi (\eta(\Xstar{i,L})\Theta_L)-Y_i]$. Let $\widehat{\Theta}_L^{(T)}$ be the estimated parameter after $T$ training iterations, using $\Femp$ and \eqref{eq:VI_training}. For a test feature $X_t$ and its nonlinear transformation $\Xstar{t,L}$, consider
\begin{equation}\label{posterior_estimate}
\condexphat{Y_t}{X_t}:=\phi_L(\etagraph{\Xstar{t,L}}\widehat{\Theta}_L^{(T)}),
\end{equation}
which is the estimated prediction using $\widehat{\Theta}_L^{(T)}$ and will be measured against the true model $\condexp{Y_t}{X_t}$ under $\Theta^*_L$. In particular, we will provide error bounds on $\|\condexphat{Y_t}{X_t}-\condexp{Y_t}{X_t}\|_p$ for $p\geq 2$, which crucially depend on the strong monotonicity modulus $\kappa$ in (\ref{eq:modulus}) for $F(\Theta_L)$.
We first state a few properties of $F(\Theta_L)$ defined in (\ref{eq:operatr}), which explicitly identify the form of $\kappa$. All proofs of Lemmas and Theorems are contained in Appendix \ref{sec:theory_append}.
\begin{lemma}\label{lem1}
Assume $\phi_L$ is $K$-Lipschitz continuous and monotone on its domain. For a generic input $X$,
\begin{enumerate}[noitemsep,topsep=0em]
\item $F(\Theta_L)$ is both monotone and $K_2$-Lipschitz, where $K_2:=K \EE_X \|\eta(\Xstar{L})\|^2_2$.
\item $\kappa=\lambda_{\min}(\nabla \phi_L) \EE_X[\lambda_{\min}(\etaT{\Xstar{L}}\eta(\Xstar{L}))]$ if $\nabla \phi_L$ exists; it is 0 otherwise.
\item $F(\Theta^*_L)=0$.
\end{enumerate}
\end{lemma}
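The monotonicity claim in the lemma can be checked numerically for $\phi=$ sigmoid: $\langle F(\Theta_1)-F(\Theta_2),\Theta_1-\Theta_2\rangle \geq 0$ reduces to the monotonicity of the sigmoid along each coordinate. The sketch below replaces the population expectation by an empirical average over an illustrative random design (names and sizes are assumptions).

```python
import numpy as np

# Check that F(Th) = E[eta(X)^T (sigmoid(eta(X) Th) - Y)] is monotone:
# <F(Th1) - F(Th2), Th1 - Th2> >= 0 for random parameter pairs.
rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

eta_X = rng.normal(size=(2000, 6))
Y = rng.uniform(size=(2000, 1))              # Y cancels in F(Th1) - F(Th2)

def F(Th):
    return eta_X.T @ (sigmoid(eta_X @ Th) - Y) / len(eta_X)

gaps = []
for _ in range(100):
    Th1, Th2 = rng.normal(size=(6, 1)), rng.normal(size=(6, 1))
    gaps.append(float(((F(Th1) - F(Th2)) * (Th1 - Th2)).sum()))
min_gap = min(gaps)                          # should be non-negative
```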
For simplicity, we assume from now on the stronger condition that $\lambda_{\min}(\etaT{\Xstar{L}}\eta(\Xstar{L}))>0$ for any generic nonlinear feature $\Xstar{L}$. In addition, because we are considering the last layer in classification, $\phi_L$ is typically chosen as the sigmoid or softmax, both of which are differentiable, so $\lambda_{\min}(\nabla \phi_L)$ exists. Note that the property $F(\Theta^*_L)=0$ implies that $\Theta^*_L$ is a solution to $\VI$. In particular, if $\kappa>0$, it is the \textit{unique} solution. As a result, $\Theta^*_L$ can be efficiently solved to high accuracy using $F(\Theta_L)$ under an appropriately chosen $\gamma$ in \eqref{eq:VI_training}.
\subsection{Case 1: modulus $\kappa > 0$}\label{sec:modulus_1}
Under an appropriate selection of the step size and additional assumptions, we can obtain bounds on the recovered parameters following techniques in \citep{VI_est}, which help us bound the error in model recovery and prediction. Note that the guarantee relies on the value of $\kappa$, which may be approximated by the empirical average using the $N$ training data.
\begin{lemma}[Parameter recovery guarantee]\label{thm:alg_guarantee}
Suppose that there exists $M<\infty$ such that $ \forall \Theta_L \in \bTheta$,
\[
\EE_{X,\YTheta}{\|\eta(\Xstar{L})\YTheta\|_2}\leq M,
\]
where $\condexp{\YTheta}{X}=\phi(\eta(\Xstar{L})\Theta_L)$. Choose adaptive step sizes $\gamma=\gamma_t:=[\kappa(t+1)]^{-1}$ in \eqref{eq:VI_training}. The sequence of estimates $\widehat{\Theta}_L^{(T)}$ obeys the error bound
\begin{equation}\label{eq:algo_err_bound}
\EE_{\widehat{\Theta}_L^{(T)}}\{\|\widehat{\Theta}_L^{(T)}-\Theta^*_L\|^2_2\} \leq \frac{4M^2}{\kappa^2(T+1)}.
\end{equation}
\end{lemma}
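The adaptive-step iteration in the lemma can be sketched on a noiseless one-layer sigmoid model, with $\kappa$ estimated from the data as in the earlier lemma. The whole setup (sizes, parameter scale, iteration count) is an illustrative assumption chosen so that $\kappa$ is bounded away from zero, not the paper's configuration.

```python
import numpy as np

# Sketch of the recovery guarantee: iterate Th <- Th - gamma_t F_emp(Th)
# with the adaptive step gamma_t = 1/(kappa (t+1)) on a noiseless
# one-layer sigmoid model and watch the parameter error shrink.
rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

N, d = 4000, 5
eta_X = rng.normal(size=(N, d))
Th_star = 0.3 * rng.normal(size=(d, 1))      # mild scale keeps kappa decent
Y = sigmoid(eta_X @ Th_star)                 # noiseless E[Y | X]

# kappa = lambda_min(grad phi_L) * E[lambda_min(eta^T eta)]  (Lemma)
lam_phi = float((Y * (1 - Y)).min())         # smallest sigmoid derivative seen
lam_eta = float(np.linalg.eigvalsh(eta_X.T @ eta_X / N).min())
kappa = lam_phi * lam_eta

Th = np.zeros((d, 1))
for t in range(3000):
    F_emp = eta_X.T @ (sigmoid(eta_X @ Th) - Y) / N
    Th -= F_emp / (kappa * (t + 1))          # gamma_t = 1/(kappa (t+1))

err = float(np.linalg.norm(Th - Th_star))    # parameter recovery error
```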
We can use the above result to bound the error in posterior prediction.
\begin{theorem}[Prediction error bound for model recovery, strongly monotone $F$]\label{thm:generalization_err}
We have for a given test signal $X_t, t>N$ that for $p\in [2,\infty]$,
\[
\mathbb E_{\widehat{\Theta}_L^{(T)}}\{\|\condexphat{Y_t}{X_t}-\condexp{Y_t}{X_t}\|_p\} \leq (T+1)^{-1}C_t,
\]
where $C_t:=[4M^2K\lambda_{\max}(\etaT{X_{t,L}^*}\eta(X_{t,L}^*))]/\kappa^2$ and $\condexphat{Y_t}{X_t}$ is defined in \eqref{posterior_estimate}.
In particular, $p=2$ yields the sum of squared error bound on prediction and $p=\infty$ yields entry-wise bound (over the graph nodes).
\end{theorem}
Note that the guarantee is point-wise, as we are able to bound the parameter recovery error in expectation in Lemma \ref{thm:alg_guarantee}. In particular, we will demonstrate in the experiment section that when the loss function is non-convex in the parameters, the parameter recovered by MVI is closer to the ground truth than that recovered by SGD (see Table \ref{tab:non-conve_probit}).
Moreover, we remark that the same order of convergence holds when $F$ in \eqref{eq:operatr} is estimated by the empirical average of \textit{mini-batches} of training data. In particular, the proof of Theorem \ref{thm:generalization_err} only requires access to an unbiased estimator of $F$, so that the batch size can range from one to $N$, where $N$ is the size of the training data.
\begin{corollary}[Prediction error bound for test loss]\label{cor:lossbound} Fix an $\epsilon>0$. For any test signal $X_t$, assume $T+1>C_t/\epsilon$. For a generic loss function $\Lc:(X,Y)\rightarrow \R_+$, which measures the error between $\condexp{Y}{X}$ and $Y$, denote by $\Lc^*(X_t,Y_t)$ (resp. $\hat{\Lc}(X_t,Y_t)$) the loss under the true (resp. estimated) model. Then,
\begin{enumerate}
\item When $\Lc$ is the mean-square error (MSE) loss,
$\mathbb E_{\widehat{\Theta}_L^{(T)}}\{|\hat{\Lc}(X_t,Y_t)-\Lc^*(X_t,Y_t)|\}\leq \epsilon$.
\item When $\Lc$ is the binary cross-entropy loss,
\[
\mathbb E_{\widehat{\Theta}_L^{(T)}}\{|\hat{\Lc}(X_t,Y_t)-\Lc^*(X_t,Y_t)|\}\leq \sum_{i=1}^n Y_{t,i}^T \ln\left(\frac{E_{t,i}}{E_{t,i}-\boldsymbol\epsilon}\right)+(\boldsymbol 1- Y_{t,i}^T)\ln\left(\frac{ \boldsymbol 1-E_{t,i}}{\boldsymbol 1-E_{t,i}-\boldsymbol \epsilon}\right),
\]
where $E_{t,i}:=\EE[Y_t|X_t]_i$ is the true conditional expectation of $Y_t$ at node $i$, $\boldsymbol 1$ and $\boldsymbol \epsilon$ are vectors of 1 and $\epsilon$, and the division is point-wise.
\end{enumerate}
\end{corollary}
\begin{remark}[When we can have $\kappa>0$]
Recall that $\kappa=\lambda_{\min}(\nabla \phi_L) \EE_X[\lambda_{\min}(\etaT{\Xstar{L}}\eta(\Xstar{L}))]$. As we assumed that the quantity $\EE_X[\lambda_{\min}(\etaT{\Xstar{L}}\eta(\Xstar{L}))]$ is bounded away from zero, we are only concerned with the minimum eigenvalue of the gradient of $\phi_L$ acting on its inputs. Note that when $\phi_L$ is a point-wise function on its vector inputs, this gradient matrix is diagonal. In the case of the sigmoid function, we know that for any $y\in \R^n$
\[
\lambda_{\min}[\nabla \phi_L]|_{y}=\min_{i=1,\ldots,n} \phi_L(y_i)(1-\phi_L(y_i)),
\]
which is bounded away from zero. In general, we only need the point-wise activation to be continuously differentiable with positive derivatives.
\end{remark}
\subsection{Case 2: modulus $\kappa = 0$}\label{sec:modulus_2}
We may also encounter cases where the operator $F$ is only monotone but not strongly monotone. For instance, let $\phi$ be the softmax function (applied row-wise to the filtered signal $\eta(\Xstar{L})\Theta_L \in \mathbb R^{n\times F}$), whose gradient matrix is thus block-diagonal. For any vector $z\in \R^F$, $\nabla \phi(z)=\text{diag}(\phi(z))-\phi(z)\phi(z)^T$, which satisfies $\nabla \phi(z)\boldsymbol 1=\boldsymbol 0$ for any $z$ \citep[Proposition 2]{softmax}. Therefore, the minimum eigenvalue of the gradient matrix of $\phi$ is always zero. Hence $\kappa=0$.
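The vanishing modulus for the softmax can be verified in a few lines: the Jacobian $\nabla\phi(z)=\text{diag}(\phi(z))-\phi(z)\phi(z)^T$ annihilates the all-ones vector, so its smallest eigenvalue is zero (a quick numerical sanity check with illustrative dimensions).

```python
import numpy as np

# The softmax Jacobian diag(phi(z)) - phi(z) phi(z)^T maps the all-ones
# vector to zero, so lambda_min = 0 and the modulus kappa vanishes.
rng = np.random.default_rng(4)

def softmax(z):
    e = np.exp(z - z.max())                  # stabilized softmax
    return e / e.sum()

z = rng.normal(size=7)
p = softmax(z)
J = np.diag(p) - np.outer(p, p)              # Jacobian of softmax at z

residual = float(np.abs(J @ np.ones(7)).max())   # |J 1|, should be ~0
lam_min = float(np.linalg.eigvalsh(J).min())     # smallest eigenvalue, ~0
```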
In this case, note that the solution of $\VI$ need not be unique: a solution $\bar{\Theta}_L$ that satisfies the condition $\langle F(\bar{\Theta}_L), \Theta-\bar{\Theta}_L\rangle \geq 0, \ \forall \Theta \in \bTheta$ may not be the true signal $\Thetastar{L}$. Nevertheless, suppose $F(\bar{\Theta}_L)=\EEXY{\etaT{\Xstar{L}}[\phi_L (\eta(\Xstar{L})\bar{\Theta}_L)-Y]}=0$. Because we assumed that the minimum singular value of $\eta(\Xstar{L})$ is always positive, we can still correctly predict the signal in expectation (a weaker guarantee than Theorem \ref{thm:alg_guarantee}):
\[\EE_X[\phi_L (\eta(\Xstar{L})\bar{\Theta}_L)]=\EE_X[\phi_L (\eta(\Xstar{L})\Thetastar{L})].\]
Hence, we directly approximate the zero of $F$ using the operator extrapolation (OE) method of \citep{Georgios20I}.
We then have a similar $\ell_p$ performance guarantee in prediction:
\begin{theorem}[Prediction error bound for model recovery, monotone $F$]\label{thm:generalization_err_nostrong}
Suppose we run the OE algorithm \citep{Georgios20I} for $T$ iterations with $\lambda_t=1$, $\gamma_t=[4K_2]^{-1}$, where $K_2$ is the Lipschitz constant of $F$. Let $R$ be uniformly chosen from $\{2,3,\ldots,T\}$. Then for $p\in [2,\infty]$,
\[
\EE_{\widehat{\Theta}_L^{(R)}} \{\| \EE_X\{ \sigma_{\min} (\etaT{\Xstar{L}})[\condexphat{Y_t}{X_t}-\condexp{Y_t}{X_t}]\}\|_p \}\leq T^{-1/2} C_t^{''},
\]
where $\sigma_{\min} (\cdot)$ denotes the minimum singular value of its input matrix and the constant $C_t^{''}:=3\sigma+12K_2\sqrt{2\|\Thetastar{L}\|_2^2+2\sigma^2/L^2}$, in which $\sigma^2:=\EE[(\Fempi(\Theta_L)-F(\Theta_L))^2]$ is the variance of the unbiased estimator.
\end{theorem}
As remarked after Theorem \ref{thm:generalization_err}, the convergence rate in Theorem \ref{thm:generalization_err_nostrong} is also unaffected by the batch size. The only difference is that under mini-batches of size $B$, the variance of the unbiased estimator is reduced by a factor of $B$, so that the constant $C_t^{''}$ in Theorem \ref{thm:generalization_err_nostrong} is smaller.
\subsection{Equivalence between monotone operator and gradient of parameters} \label{sec:equivalence}
In practice, neural networks, including one-layer ones, are commonly trained via empirical loss minimization, in contrast to solving MVI. Nevertheless, we will show that under the cross-entropy loss, when $\phi$ is either the sigmoid or the softmax function, the monotone operator $F$ defined in \eqref{eq:operatr} is exactly the expectation of the gradient of the loss with respect to the parameters. As a result, both methods yield the same last-layer training dynamics, and the previous guarantees also apply to gradient-based methods in such cases.
For notational simplicity, we consider a generic pair of input signal $X \in \R^{C}$ and categorical output $Y \in \{0,\ldots, F\}$\footnote{In GNN, $X\in \R^{n\times C}$ and $Y \in \{0,\ldots, F\}^n$, but we can easily apply the same analyses at each node as the loss sums over nodes.}. Recall that $X_L$ in \eqref{eq:NN} is the nonlinear feature transformation from the previous $L-1$ layers. Given parameters $\Theta:=\{\Theta_1,\ldots,\Theta_L\}$ for $L$ layers, we write the model $\EE[Y|\eta_L(X_L),\Theta_L]$, which depends purely on the last-layer parameter $\Theta_L$, as
\begin{equation}\label{eq:model_simplify}
\EE[Y|\eta_L(X_L),\Theta_L]:=\phi(\eta_L(X_L)\Theta_L),
\end{equation}
where $\EE[Y|\eta_L(X_L),\Theta^*_L]$ is the true model. Now, the cross-entropy loss $\Lc(Y,\Theta_L)$ is
\begin{align}
\Lc(Y,\Theta_L)&:=-Y\ln (\phi(\eta_L(X_L)\Theta_L))-(1-Y)\ln (1-\phi(\eta_L(X_L)\Theta_L)) && \text{$Y \in \{0,1\}$}. \label{eq:binaryCE}\\
\Lc(Y,\Theta_L)&:=-e_{Y}^T \ln(\phi(\eta_L(X_L)\Theta_L)) \qquad \text{$Y \in \{0,\ldots,F\}, F>1$}. \label{eq:multiCE}
\end{align}
In \eqref{eq:binaryCE}, $\phi(x)=\exp(x)/(1+\exp(x))$ is the sigmoid function applied point-wise. In \eqref{eq:multiCE}, $e_k$ is the $k^{\rm{th}}$ standard basis vector in $\R^{F+1}$ and $\phi(\boldsymbol x)_i=\exp(\boldsymbol x_i)/\sum_j \exp(\boldsymbol x_j)$ is the softmax function applied row-wise. We now have the following proposition, whose proof can be found in Appendix \ref{sec:theory_append}.
\begin{proposition}[The equivalence between MVI and parameter gradient] \label{prop:equivalence}
Consider a generic pair of input signal $X$ and random realization $Y$ following the model \eqref{eq:model_simplify} under an arbitrary nonlinear transformation $\eta_L(X_L)$. If $Y \in \{0,1\}$ as in binary (resp. $\{0,\ldots, F\}$ as in multi-class for $F>1$) classification, with $\phi$ being the sigmoid (resp. softmax) function under the cross-entropy loss $\Lc(Y,\Theta_L)$ defined in \eqref{eq:binaryCE} (resp. \eqref{eq:multiCE}), we have for any parameter $\Theta_L$ that
\[
\EE_{X,Y}[\nabla_{\Theta_L} \Lc(Y,\Theta_L)]=F(\Theta_L),
\]
where the monotone operator $F(\Theta_L)$ is as in \eqref{eq:operatr} with $\eta_L(X_L)$ and $\Theta_L$ substituted for $\eta(X)$ and $\Theta$.
\end{proposition}
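The binary case of the proposition is easy to verify numerically: the gradient of the sigmoid cross-entropy loss with respect to $\Theta$ equals $\eta(X)^T[\phi(\eta(X)\Theta)-Y]$. The single-sample sketch below compares the operator against a central finite-difference gradient (all names and sizes illustrative).

```python
import numpy as np

# Finite-difference check: for sigmoid + cross-entropy, the parameter
# gradient of the loss equals eta(X)^T [sigmoid(eta(X) Theta) - Y].
rng = np.random.default_rng(5)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

eta_X = rng.normal(size=(1, 6))              # eta_L(X_L) as a row vector
Y = 1.0
Th = 0.5 * rng.normal(size=(6, 1))

def ce_loss(Th):
    p = sigmoid(eta_X @ Th).item()
    return -(Y * np.log(p) + (1 - Y) * np.log(1 - p))

F = eta_X.T @ (sigmoid(eta_X @ Th) - Y)      # the MVI operator, one sample

g = np.zeros_like(Th)                        # central finite differences
h = 1e-6
for i in range(6):
    e = np.zeros_like(Th); e[i] = h
    g[i] = (ce_loss(Th + e) - ce_loss(Th - e)) / (2 * h)

max_diff = float(np.abs(F - g).max())        # should be ~0
```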
\subsection{Imprecise graph knowledge}\label{sec:imprecise}
Because graph knowledge is rarely perfectly known, robustness to imprecise graph topology is important in practice for node classification. Thus, we analyze the difference in prediction quality when the true graph adjacency matrix $W$ is replaced by an estimate $W'$. For simplicity, we concentrate on the GCN case, drop certain super/sub-scripts (e.g., $X=\Xstar{L}$ and $\Theta^*=\Thetastar{L}$), and denote $\eta(X)$ under $W$ (resp. $W'$) as $L_gX$ (resp. $L_g'X$). Moreover, for a given $k \in \{1,\ldots, n\}$, we can decompose $L_g$ as $L_g=L_g^++L_g^-$, where $L_g^+:=U\Lambda^+U^T, \Lambda^+=\text{diag}(\lambda_1,\ldots,\lambda_k,\boldsymbol 0)$ denotes the high-energy portion of the filter (similarly for $L_g^-$).
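The energy split of the filter can be sketched via an eigendecomposition: keep the $k$ largest-magnitude eigenpairs in $L_g^+$ and the remainder in $L_g^-$. A random symmetric matrix stands in for the graph filter below; the construction is an illustrative assumption.

```python
import numpy as np

# Sketch of L_g = L_g^+ + L_g^-: top-k eigenpairs form the high-energy
# part, the remaining eigenpairs form the low-energy part.
rng = np.random.default_rng(6)
A = rng.normal(size=(6, 6))
Lg = (A + A.T) / 2                           # symmetric "graph filter"
lam, U = np.linalg.eigh(Lg)                  # ascending eigenvalues

k = 2
Lg_plus = U[:, -k:] @ np.diag(lam[-k:]) @ U[:, -k:].T   # top-k energy
Lg_minus = Lg - Lg_plus                                 # the rest
recon_err = float(np.abs(Lg_plus + Lg_minus - Lg).max())
```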
We show that, under certain assumptions on the similarity between the high/low-energy portions of the true and estimated graph filters, there exists a solution $\Theta^{*}_{L'}$ for which the prediction error between $\condexp{Y}{X}$ under $\Theta^*_L$ and under $\Theta^{*}_{L'}$ is small. In particular, $\Theta^{*}_{L'}$ is the solution of $\VI$ if $\phi_L$ is the sigmoid function, so that in such cases we can guarantee a small prediction error in expectation based on the earlier results.
\begin{proposition}[Quality of minimizer under model mismatch]\label{model_mismatch} Fix an $\epsilon>0$.
Assume that (1) $L_g^+=L_g^{'+}$ and (2) $\|L_g^{'-}-L_g^-\|_2 \leq \epsilon/[K\EE_X \|X^-\|]$, where $K$ is the Lipschitz constant of $\phi_L$ and $X^-$ belongs to the span of $L_g^-$ and $L_g^{'-}$. Then, there exists $\Theta^{*}_{L'}$ such that
\[
\| \condexp{Y}{X}'-\condexp{Y}{X}\|_2\leq \epsilon \|\Theta^{*}_L\|_2,
\]
where $ \condexp{Y}{X}'$ denotes the conditional expectation under $L_g'$ and $\Theta^{*}_{L'}$.
\end{proposition}
\section{Experiments}\label{sec:expr_main}
We test and compare \SVI \ in Algorithm \ref{alg:heuristic} with SGD in several experiments: one-layer model recovery and prediction, two-layer fully-connected (FC) network classification, two/three-layer graph convolutional network (GCN) model recovery on random graphs, and three-layer node classification on real network data. The code is available at \url{https://github.com/hamrel-cxu/SVI-NN-training}.
\subsection{Result summary}
We first summarize the extensive experiments performed.
\vspace{0.1in}
\noindent \textit{One-layer networks} (Sec. 5.3.1 and Appendix \ref{sec:probit}): We derived theoretical guarantees in Section \ref{sec:guarantee}. When data are generated from a non-convex probit model, \SVI \ outperforms SGD, with a much smaller parameter recovery error and smaller or competitive training and test losses and errors (see Table \ref{tab:non-conve_probit}). When \SVI \ is identical to SGD (Proposition \ref{prop:equivalence}), we verify the insensitivity of \SVI \ to the batch size (see Figure \ref{fig:GCN_no_hidden_layer_append}), as remarked after Theorems \ref{thm:generalization_err} and \ref{thm:generalization_err_nostrong}.
\vspace{0.1in}
\noindent \textit{Two-layer networks} (Sec. 5.3.2 \& 5.3.3 and Appendix \ref{sec:two_layer_append}): Without acceleration techniques such as batch normalization \citep{pmlr-v37-ioffe15} (BN), \SVI \ consistently reaches smaller training and test losses and errors than SGD throughout the training epochs (see Sec. 5.3.2 \& 5.3.3). When BN is used, SGD in some cases eventually reaches smaller errors or losses than \SVI. However, \SVI \ almost always reaches very small errors or losses within a few training epochs, whereas SGD takes many more epochs to reach the same values (see Appendix \ref{sec:two_layer_append}).
\vspace{0.1in}
\noindent \textit{Three-layer networks} (Sec. \ref{sec:real_data} and Appendix \ref{sec:real_multi_layer_append} \& \ref{sec:multi_layer_GCN}): On real test data, \SVI \ without BN acceleration almost always reaches smaller classification errors and higher weighted $F_1$ scores than SGD, with faster initial convergence as well (see Sec. \ref{sec:real_data}). Meanwhile, BN worsens \SVI \ and improves SGD, but \SVI \ maintains fast initial convergence, and \SVI \ without BN is still competitive with or better than SGD with BN (see Appendix \ref{sec:real_multi_layer_append}). On simulated data, SGD with BN almost always eventually reaches smaller errors and losses than \SVI\ with BN, but \SVI \ still enjoys fast initial convergence, and the smallest eventual error and loss of \SVI \ without BN are smaller than most eventual errors and losses of SGD with BN (see Appendix \ref{sec:multi_layer_GCN}).
\subsection{Setup and Comparison Metrics}
\noindent \textit{Setup.} All implementations are done using \Verb|PyTorch| \citep{NEURIPS2019_9015} and \Verb|PyTorch Geometric| \citep{Fey/Lenssen/2019} (for GNN). To ensure a fair comparison, we carefully describe the experiment setup. In particular, the following inputs are \textit{identical} for both \SVI \ and SGD in each experiment.
\begin{itemize}[noitemsep,topsep=0em]
\item Data: (a) the size of the training and test data (b) batching (batch size and the samples in each mini-batch).
\item Model: (a) architecture (e.g., layer choice, activation function, hidden neurons) (b) loss function.
\item Training regime: (a) parameter initialization (b) hyperparameters for backpropagation (e.g., learning rate, momentum factor) (c) total number of epochs (d) acceleration techniques (e.g., BN).
\end{itemize}
In short, everything except the way gradients are defined is kept the same in each comparison---our proposed \SVI \ backpropagates gradients with respect to the hidden inputs and transforms them back to the parameter domain, whereas SGD does so with respect to the parameters in each hidden layer.
\vspace{0.1in}
\noindent \textit{Comparison metrics.} For a random feature $X\in \R^{n\times C}$, where $n$ is the number of graph nodes and $C$ is the input dimension, let the true (or predicted) model be $\EE[Y|X,\Theta]\in \R^{n\times F}$ (or $\EE[Y|X,\widehat \Theta]$), where $F$ is the output dimension. Given $N$ realized pairs $\{(X_i,Y_i)\}_{i=1}^N$ where $N$ denotes either the training or test sample size, we employ the following metrics for various tasks.\footnote{We one-hot encode $Y_i$ in multi-class classification to make sure certain operations are well-defined.}
{\small \begin{align}
\text{MSE loss}&:=N^{-1}\sum_{i=1}^N \sum_{j=1}^n\|\EE[Y_i|X_i,\widehat \Theta]_j-Y_{i,j}\|_2 \label{eq:MSE_loss} \\
\text{Cross-entropy loss}&:=-N^{-1}\sum_{i=1}^N \sum_{j=1}^n Y_{i,j}^T\ln\left(\EE[Y_i|X_i,\widehat \Theta]_j\right) \label{eq:CE_loss}\\
\text{Classification error}&:=(n\cdot N)^{-1}\sum_{i=1}^N\sum_{j=1}^n \sum_{f=1}^F \textbf{1}(Y_{i,j,f}\neq \hat Y_{i,j,f}) \label{eq:classification_error} \\
\ell_p\text{ parameter recovery error}&:=\|\hat \Theta-\Theta\|_p \label{eq:l_p_para_err} \\
\ell_p\text{ model recovery error}&:=N^{-1}\sum_{i=1}^N\sum_{j=1}^n \|\EE[Y_i|X_i,\widehat \Theta]_j-\EE[Y_i|X_i,\Theta]_j\|_p \label{eq:l_p_model_err}.
\end{align}}
We let $p=2$ or $\infty$ in \eqref{eq:l_p_para_err} and \eqref{eq:l_p_model_err} for GCN model recovery and, when $p=2$, compute the relative error using $\|\Theta\|_p$ or $N^{-1}\sum_{i=1}^N\sum_{j=1}^n \|\EE[Y_i|X_i,\Theta]_j\|_p$ in the denominator. In addition, all results are averaged over three random trials, in which the features $X_i$ are redrawn for simulated examples and the networks re-initialized. We show standard errors in brackets in tables and as error bars in plots. Partial results are shown due to space limitations; Appendix \ref{sec:experiment_appendix} contains the full results.
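The classification-error metric above can be sketched directly (an illustrative reimplementation with assumed shapes, not the paper's code): labels are one-hot encoded and predictions are arg-maxed per node, so each misclassified node contributes two mismatched one-hot entries.

```python
import numpy as np

# Classification error over N samples, n nodes, F classes: one-hot labels
# vs. one-hot arg-max predictions, compared entry-wise as defined above.
rng = np.random.default_rng(7)
N, n, F = 4, 3, 5
probs = rng.uniform(size=(N, n, F))          # predicted E[Y | X, Theta_hat]
labels = rng.integers(0, F, size=(N, n))     # true class per node

Y = np.eye(F)[labels]                        # one-hot labels, shape (N, n, F)
Y_hat = np.eye(F)[probs.argmax(-1)]          # one-hot predictions

entry_err = (Y != Y_hat).sum() / (n * N)     # the metric as written
node_err = (labels != probs.argmax(-1)).mean()   # per-node error rate
# each wrong node flips exactly two one-hot entries: entry_err = 2 node_err
```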
\begin{table*}[!t]
\centering
\resizebox{\textwidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{P{2.5cm} P{2.5cm}P{2.5cm}|P{2.5cm}P{2.5cm}P{2.5cm}P{2.5cm}|P{2.5cm}P{2.5cm}P{2.5cm}P{2.5cm}}
{\textbf{Probit model}} & \multicolumn{2}{c}{$\ell_2$ para recovery error} & \multicolumn{4}{c}{MSE loss} & \multicolumn{4}{c}{Classification error} \\
\hline
{Feature dimension} & SGD & \SVI & SGD train & \SVI \ train & SGD test & \SVI \ test & SGD train & \SVI \ train & SGD test & \SVI \ test \\
\hline
50 & 0.596 (5.9e-05) & 0.485 (1.1e-04) & 0.039 (4.1e-04) & 0.034 (4.0e-04) & 0.043 (1.8e-04) & 0.038 (2.3e-04) & 0.034 (7.1e-04) & 0.035 (8.9e-04) & 0.046 (9.4e-04) & 0.049 (1.4e-03) \\
100 & 0.683 (5.2e-05) & 0.593 (8.4e-05) & 0.032 (6.1e-05) & 0.026 (5.5e-05) & 0.043 (6.6e-04) & 0.039 (7.0e-04) & 0.021 (5.4e-04) & 0.021 (1.4e-04) & 0.053 (2.4e-03) & 0.054 (9.4e-04) \\
200 & 0.78 (2.1e-04) & 0.717 (2.3e-04) & 0.025 (3.8e-05) & 0.019 (1.0e-05) & 0.045 (1.2e-03) & 0.041 (1.0e-03) & 0.011 (7.6e-04) & 0.011 (8.3e-04) & 0.058 (3.3e-03) & 0.055 (3.6e-03) \\
\end{tabular}
\egroup
}
\cprotect \caption{Non-convex one-layer probit model, with identical learning rates for both. We show average results at the end of 200 epochs, with standard errors in brackets. \SVI \ consistently reaches a smaller parameter recovery error and MSE loss than SGD, while SGD may be slightly better in terms of classification error. The metrics are defined in \eqref{eq:l_p_para_err}, \eqref{eq:MSE_loss}, and \eqref{eq:classification_error}, respectively.}
\label{tab:non-conve_probit}
\end{table*}
\begin{table*}[!t]
\begin{varwidth}[b]{0.67\linewidth}
\centering
\resizebox{\linewidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{P{2.5cm} P{2.5cm}P{2.5cm}P{2.5cm}P{2.5cm}|P{2.5cm}P{2.5cm}P{2.5cm}P{2.5cm}}
{\textbf{Two-moon data}} & \multicolumn{4}{c}{MSE loss} & \multicolumn{4}{c}{Classification error} \\
\hline
{\# Hidden neurons} & SGD train & \SVI \ train & SGD test & \SVI \ test & SGD train & \SVI \ train & SGD test & \SVI \ test \\
\hline
8 & 0.00062 (4.0e-05) & 0.00054 (2.1e-05) & 0.06709 (4.8e-03) & 0.05993 (4.5e-03) & 0.08333 (7.2e-03) & 0.076 (3.8e-03) & 0.098 (6.6e-03) & 0.08933 (1.1e-02) \\
16 & 0.00053 (7.7e-05) & 0.00042 (8.5e-05) & 0.05852 (7.1e-03) & 0.04702 (7.7e-03) & 0.06533 (1.5e-02) & 0.052 (1.3e-02) & 0.078 (9.6e-03) & 0.06333 (1.3e-02) \\
32 & 0.00022 (2.9e-05) & 0.00014 (1.8e-05) & 0.02375 (2.2e-03) & 0.01517 (6.9e-04) & 0.01867 (3.8e-03) & 0.01533 (2.4e-03) & 0.01733 (2.2e-03) & 0.016 (9.4e-04) \\
64 & 0.00016 (1.7e-05) & 8e-05 (1.4e-05) & 0.01774 (1.2e-03) & 0.00872 (1.0e-03) & 0.01 (2.5e-03) & 0.006 (1.6e-03) & 0.01067 (3.0e-03) & 0.00933 (2.0e-03) \\
\end{tabular}
\egroup
}%
\cprotect \caption{Two-moon data, binary classification, with identical learning rate and ReLU activation for both. We show average results at the end of training, with standard errors in brackets. \SVI \ consistently reaches smaller training and test MSE losses and classification errors than SGD across different numbers of hidden neurons.}
\label{tab:moon_table}
\end{varwidth}
\hspace{0.05in}
\begin{minipage}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{FC_vanillaTraining_moon_H=64_lr=0.15.pdf}
\vspace{-0.25in}
\captionof{figure}{ Two-moon data, 64 hidden neurons, with MSE loss (left) and classification error (right). \protect \SVI \ shows faster convergence.}
\label{fig:moon_fig}
\end{minipage}
\end{table*}
\subsection{Synthetic Data Experiments}
Notation-wise, $X\sim \mathcal N(a,b)$ means the random variable $X$ follows a normal distribution with mean $a$ and variance $b^2$; $N$ (resp. $N_1$) denotes the size of training (resp. test) sample, $\textsf{lr}$ denotes the learning rate, $B$ denotes the batch size, and $E$ denotes training epochs.
\vspace{0.1in}
\noindent \textit{5.3.1 One-layer model recovery and prediction.} We start with model recovery and prediction on data generated from a one-layer probit model, which leads to a non-convex objective. Specifically, for each $i\geq 1$, $y_i \in \{0,1\}$ and $\condexp{y_i}{X_i}=\Phi(X_i^T\beta+b)$, where $X_i \in \R^p, X_{ij} \overset{i.i.d.}{\sim} \mathcal N(0.05,1)$, $\beta_j \overset{i.i.d.}{\sim} \mathcal N(-0.05,1)$, and $b \sim \mathcal N(-0.1,1)$. Here, $\Phi(z)=\PP(Z\leq z), Z\sim \mathcal N(0,1)$. We let $N=2000$, $N_1=500$, and use a fully-connected one-layer network. The goal is to recover the parameters $\{\beta,b\}$ by minimizing the non-convex MSE objective (i.e., $N^{-1} \sum_{i=1}^N \|y_i-\Phi(X_i^T\beta+b)\|_2$) and to make predictions on test samples. We let $B=200$ and $E=200$ and use $\textsf{lr}$=0.005 and momentum = 0.9.
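The probit data-generating process can be recreated in a few lines (an illustrative reimplementation with shrunken sizes, not the paper's code):

```python
import math
import numpy as np

# Probit model of Sec. 5.3.1: E[y_i | X_i] = Phi(X_i^T beta + b), with
# Gaussian features and parameters as described in the text.
rng = np.random.default_rng(8)
Phi = np.vectorize(lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2))))

N, p = 200, 50
X = rng.normal(0.05, 1.0, size=(N, p))       # features X_ij ~ N(0.05, 1)
beta = rng.normal(-0.05, 1.0, size=p)        # beta_j ~ N(-0.05, 1)
b = rng.normal(-0.1, 1.0)                    # bias b ~ N(-0.1, 1)

probs = Phi(X @ beta + b)                    # conditional expectation
y = rng.binomial(1, probs)                   # binary responses

# non-convex MSE objective evaluated at the true parameters
mse = float(np.mean((y - probs) ** 2))
```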
Table \ref{tab:non-conve_probit} shows that \SVI \ consistently reaches a smaller parameter recovery error than SGD across feature dimensions, with an 8$\sim$10\% difference. Moreover, the same phenomenon appears in both training and test MSE losses. Lastly, SGD may reach a smaller classification error, but the difference is very small. See Appendix \ref{sec:probit}, Figure \ref{fig:append_probit} for additional results that illustrate intermediate model convergence.
We also examine the insensitivity of \SVI \ to changes in the batch size when data are generated from a one-layer GCN model with sigmoid activation. We note that when the input feature dimension $C$ is fixed, \SVI \ converges to nearly the same MSE loss defined in \eqref{eq:MSE_loss} and the same $\ell_{\infty}$ error defined in \eqref{eq:l_p_model_err}, regardless of the batch size $B$; the differences are only caused by the smaller number of iterations as $B$ increases. The setup is described in Appendix \ref{sec:probit} and detailed results are in Figure \ref{fig:GCN_no_hidden_layer_append}.
\vspace{0.1in}
\noindent \textit{5.3.2 Two-layer FC network classification.} We next consider the classification problem of determining the cluster label of each sample of a simulated two-moon dataset. Figure \ref{fig:moon-data} visualizes the dataset. We use a two-layer fully-connected network with ReLU (layer 1) and softmax (layer 2) activation. We let $N=N_1=500$, $\textsf{lr}$=0.15, $B=100$, and $E=100$. No batch normalization is used.
Table \ref{tab:moon_table} shows that \SVI \ consistently reaches smaller MSE losses and classification errors on the training and test data across different numbers of hidden neurons. Figure \ref{fig:moon_fig} shows that \SVI \ also converges faster than SGD throughout the epochs. See Appendix \ref{sec:two_layer_append}, Figure \ref{fig:append_twomoon} for additional results that illustrate intermediate model convergence.
\vspace{0.1in}
\begin{table*}[!t]
\begin{varwidth}[b]{0.75\linewidth}
\centering
\resizebox{\textwidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{ P{2.5cm} P{2.3cm}P{2cm}P{2.5cm}P{2cm}|P{2.3cm}P{2cm}P{2.5cm}P{2cm}|P{2.3cm}P{2cm}P{2.5cm}P{2cm}}
{\textbf{Large Graph}} & \multicolumn{4}{c}{$\ell_2$ model recovery test error} & \multicolumn{4}{c}{MSE test loss} & \multicolumn{4}{c}{$\ell_{\infty}$ model recovery test error} \\
\hline
{\# Hidden neurons} & SGD (Known) & SGD (Perturbed) & \SVI \ (Known) & \SVI \ (Perturbed) & SGD (Known) & SGD (Perturbed) & \SVI \ (Known) & \SVI \ (Perturbed) & SGD (Known) & SGD (Perturbed) & \SVI \ (Known) & \SVI \ (Perturbed)\\
\hline
2 & 0.116 (3.4e-03) & 0.117 (3.6e-03) & 0.091 (8.4e-03) & 0.092 (7.5e-03) & 0.25 (1.9e-04) & 0.25 (2.0e-04) & 0.249 (4.2e-04) & 0.249 (3.6e-04) & 0.129 (4.3e-03) & 0.13 (4.5e-03) & 0.1 (8.6e-03) & 0.102 (7.3e-03) \\
4 & 0.1 (8.1e-03) & 0.101 (7.6e-03) & 0.078 (6.2e-03) & 0.081 (5.6e-03) & 0.25 (4.0e-04) & 0.25 (3.7e-04) & 0.248 (3.0e-04) & 0.249 (2.8e-04) & 0.109 (8.4e-03) & 0.11 (7.8e-03) & 0.087 (4.8e-03) & 0.091 (3.8e-03) \\
8 & 0.084 (7.7e-03) & 0.086 (7.0e-03) & 0.067 (5.5e-04) & 0.07 (4.4e-04) & 0.249 (3.4e-04) & 0.249 (3.2e-04) & 0.248 (4.0e-06) & 0.248 (1.0e-06) & 0.088 (7.2e-03) & 0.091 (6.5e-03) & 0.076 (8.2e-04) & 0.083 (5.2e-04) \\
16 & 0.08 (7.3e-03) & 0.081 (6.5e-03) & 0.066 (3.3e-04) & 0.07 (2.1e-04) & 0.249 (2.8e-04) & 0.249 (2.5e-04) & 0.248 (6.0e-06) & 0.248 (1.0e-05) & 0.087 (7.8e-03) & 0.089 (6.6e-03) & 0.075 (4.6e-04) & 0.082 (1.4e-04) \\
32 & 0.082 (8.1e-03) & 0.084 (7.5e-03) & 0.066 (5.6e-05) & 0.07 (6.2e-05) & 0.249 (3.6e-04) & 0.249 (3.4e-04) & 0.248 (1.1e-05) & 0.248 (1.2e-05) & 0.089 (8.8e-03) & 0.091 (7.9e-03) & 0.075 (1.7e-04) & 0.082 (1.0e-04) \\
\end{tabular}
\egroup
}
\cprotect \caption{Two-layer GCN model recovery on the large random graph, with identical learning rate and ReLU activation for both \SVI \ and SGD under $B=100$. The true data are generated from a two-layer GCN. We observe consistently better model recovery performance by \SVI, regardless of the number of hidden neurons used for estimation, the comparison metric, and whether the graph is perturbed. Metrics are defined in \eqref{eq:l_p_model_err} and \eqref{eq:MSE_loss}, respectively.}
\label{tab:small_large_graph}
\end{varwidth}
\hspace{0.05in}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{large_graph_know_est_graph_H=32_MSE_single.pdf}
\vspace{-0.2in}
\captionof{figure}{$\ell_{\infty}$ model recovery error on the large random graph. \protect \SVI \ consistently reaches smaller error with faster convergence, even just after the first training epoch.}
\label{fig:small_large_graph}
\end{minipage}
\end{table*}
\noindent \textit{5.3.3 Two-layer GCN model recovery.} Given a graph $G=(\mathcal V,\mathcal E), |\mathcal V|=n$ and a signal $X_i \in \R^{n\times C}$, we generate $Y_i \in \R^{n\times F}$, where $\condexp{Y_i}{X_i}$ is a two/three-layer GCN model with ReLU (layer 1/layers 1 and 2) and sigmoid (layer 2/layer 3) activation. We let $C=2$ and $F=1$ and let the true number of hidden nodes in each hidden layer always be 2. Entries of all true weight and bias parameters (resp. features $X_i$) are i.i.d. samples from $N(1,1)$ (resp. $N(0,1)$) under a fixed seed (resp. fixed seeds per random trial). We consider both small ($n=15$) and large ($n=40$) graphs, where $\PP[(i,j) \in \mathcal{E}]=0.15$; Figure \ref{fig:GCN_simul_appendx} visualizes the graphs. The true parameters are identical in both graphs. Meanwhile, we examine the recovery performance either when the true graph (i.e., $\mathcal{E}$) is known or when the edge set is perturbed---a $p$ fraction of edges in $\mathcal{E}$ and in $\mathcal{E}^C$ are randomly discarded and inserted, where $\mathcal{E}^C$ denotes the edges of a fully-connected graph that do not exist in $\mathcal{E}$. We set $p=0.2$ (resp. 0.05) for the small (resp. large) graph. Finally, we let $N=2000, N_1=2000$, and $E=200$. We use $\textsf{lr}$=0.001, momentum = 0.99, and also the Nesterov momentum \citep{pmlr-v28-sutskever13}.
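The edge-perturbation step described above can be sketched as follows. The function name and the toy path graph are ours, and this is only an illustration of the stated procedure (discard a $p$ fraction of $\mathcal{E}$, insert a $p$ fraction of $\mathcal{E}^C$), not the authors' implementation:

```python
import itertools
import random

def perturb_edges(n, edges, p, seed=0):
    """Randomly discard a p fraction of the true edges and insert a p
    fraction of the non-edges, as in the graph-perturbation experiment.
    Sketch of the described procedure, not the authors' code."""
    rng = random.Random(seed)
    edges = set(edges)
    complement = [e for e in itertools.combinations(range(n), 2)
                  if e not in edges]                  # edges in E^C
    n_drop = int(p * len(edges))
    n_add = int(p * len(complement))
    kept = set(rng.sample(sorted(edges), len(edges) - n_drop))
    added = set(rng.sample(complement, n_add))
    return kept | added

# Small illustration on a 5-node path graph with p = 0.2.
true_edges = {(0, 1), (1, 2), (2, 3), (3, 4)}
perturbed = perturb_edges(5, true_edges, p=0.2)
```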
Table \ref{tab:small_large_graph} shows that without BN for both methods, \SVI \ consistently reaches smaller $\ell_2$ model recovery error for the two-layer GCN, even in an over-parametrized network (e.g., use 32 hidden neurons in estimation when the ground truth has only two neurons). The pattern is consistent even if the graph information is perturbed. The exact pattern persists when we measure the model recovery in terms of relative error in MSE loss or $\ell_{\infty}$ model recovery error. Figure \ref{fig:small_large_graph} (large graph) also shows that \SVI \ consistently converges faster than SGD.
In Appendix \ref{sec:two_layer_append}, we present additional results regarding model recovery on the small random graph (see Table \ref{tab:append_small_graph}), intermediate convergence results (see Figure \ref{fig:GCN_simul_append2}), and the sensitivity of \SVI \ and SGD to changes in batch sizes, without and with BN (see Tables \ref{tab:GCN_two_layer_append_noBN} and \ref{tab:GCN_two_layer_append_BN} for results at the final epochs and Figures \ref{fig:dynamics_apppend_bsize1} and \ref{fig:dynamics_apppend_bsize2_half} for intermediate convergence results). Overall, \SVI \ converges faster even when SGD uses the BN acceleration, and \SVI \ can yield better or competitive final results.
To better understand how \SVI \ updates parameters, Figure \ref{fig:dynamics} zooms in on the dynamics for 16 neurons (right figure) and shows the corresponding $\ell_{\infty}$ model recovery test error (left figure). In the right figure, we plot the norm of the first-layer neuron weights, where the norm is defined in terms of the inner product with the initial weights, against the second-layer neuron weights, which are scalars because $F=1$. One circle represents one neuron, with arrows representing the direction of movement along the initial weights. We then connect the initial and final dots to indicate the displacement of neurons. The same visualization techniques are used in \citep{pellegrini2020analytic}. It is clear that \SVI \ displaces neurons further after 200 epochs, which is anticipated in Remark \ref{remark:dynamic}. In this case, we think this can be beneficial given the much faster error convergence.
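The quantities plotted in the right panel can be sketched as follows, under our reading that each neuron's "norm" is the inner product of its first-layer weight vector with its (normalized) initial weight vector; all names and the toy weights are ours:

```python
import numpy as np

def neuron_dynamics_coords(W1_init, W1, w2):
    """Coordinates for the neuron-dynamics plot: for each hidden neuron,
    the signed projection of its first-layer weight vector onto its
    normalized initial weight vector, against its scalar second-layer
    weight (F = 1). Sketch of the described visualization, not the
    authors' code."""
    init_dirs = W1_init / np.linalg.norm(W1_init, axis=0, keepdims=True)
    x = np.sum(init_dirs * W1, axis=0)   # projection onto initial direction
    y = w2                               # second-layer scalar weights
    return x, y

rng = np.random.default_rng(0)
C, H = 2, 16                             # input channels, hidden neurons
W1_init = rng.normal(size=(C, H))        # weights at initialization
W1_final = W1_init + 0.3 * rng.normal(size=(C, H))  # weights after training
w2 = rng.normal(size=H)
x, y = neuron_dynamics_coords(W1_init, W1_final, w2)
```

Connecting `(x, y)` at initialization and after training gives the displacement arrows in the figure.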
\begin{figure}[htbp]
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{SGD_VI_dynamics_H=16_MSE_bsize=100_ndynamics.pdf}
\subcaption{Two-layer GCN}
\end{minipage}
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{SGD_VI_dynamics_H=16_MSE_more_layers_bsize=100_ndynamics.pdf}
\subcaption{Three-layer GCN}
\end{minipage}
\cprotect \caption{$\ell_{\infty}$ model recovery error on the small graph with 16 neurons under $B=100$, with identical learning rate and Softplus activation for both. Left: relative $\ell_{\infty}$ error on training and test data, assuming the true graph is known. Right: visualization of the training dynamics. After 200 epochs, \SVI \ displaces the neurons further from their initial positions than SGD, and such displacement also leads to much faster convergence in the relative error.}
\label{fig:dynamics}
\end{figure}
\begin{table*}[!t]
\begin{varwidth}[b]{0.75\linewidth}
\centering
\resizebox{\textwidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{P{3cm}P{2cm}P{1.5cm}P{2cm}P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}}
{\textbf{California solar data}} & \multicolumn{4}{c}{MSE loss} & \multicolumn{4}{c}{Classification error} & \multicolumn{4}{c}{Weighted $F_1$ score} \\
\hline
{\# Hidden neurons} & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test \\
\hline
8 & 0.217 (6.6e-03) & 0.219 (5.9e-03) & 0.195 (7.0e-03) & 0.207 (5.7e-03) & 0.297 (8.1e-03) & 0.321 (4.5e-03) & 0.296 (1.4e-02) & 0.318 (1.5e-02) & 0.706 (8.1e-03) & 0.675 (5.6e-03) & 0.703 (1.5e-02) & 0.681 (1.5e-02) \\
16 & 0.229 (7.6e-03) & 0.225 (7.3e-03) & 0.194 (7.3e-03) & 0.204 (6.4e-03) & 0.314 (1.3e-02) & 0.335 (6.8e-03) & 0.292 (1.7e-02) & 0.305 (1.7e-02) & 0.687 (1.3e-02) & 0.655 (9.4e-03) & 0.709 (1.7e-02) & 0.695 (1.7e-02) \\
32 & 0.224 (2.7e-03) & 0.223 (1.4e-03) & 0.185 (2.5e-03) & 0.196 (2.3e-03) & 0.297 (3.4e-03) & 0.333 (7.4e-03) & 0.274 (4.4e-03) & 0.292 (5.0e-03) & 0.704 (3.8e-03) & 0.659 (8.4e-03) & 0.727 (4.0e-03) & 0.708 (5.1e-03) \\
64 & 0.213 (4.5e-04) & 0.211 (9.6e-04) & 0.18 (5.1e-04) & 0.189 (3.9e-04) & 0.283 (2.5e-03) & 0.308 (6.0e-03) & 0.262 (2.1e-03) & 0.27 (1.5e-03) & 0.719 (2.6e-03) & 0.689 (6.7e-03) & 0.738 (1.7e-03) & 0.73 (1.4e-03) \\
\end{tabular}
\egroup
}
\cprotect \caption{California solar binary ramping event detection under a three-layer GCN model, with identical learning rate and ReLU activation for both. In addition to the MSE loss and classification error, we also use the weighted $F_1$ score as an additional metric, which weighs the $F_1$ score in each class by its support. \SVI \ consistently outperforms SGD on all these metrics on both training and test data.}
\label{tab:solar_tab}
\end{varwidth}
\hspace{0.05in}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{Vanilla_vs_vanilla_VI__graph_inferred_solar_CA_vanillahidden_64_VIhidden_64_new__all_layer_MSE_single.pdf}
\vspace{-0.3in}
\captionof{figure}{Classification error under 64 neurons in both hidden layers. \protect\SVI \ converges faster than SGD on both training and test data, especially in the initial stage.}
\label{fig:solar_fig}
\end{minipage}
\end{table*}
\begin{table*}[!t]
\begin{varwidth}[b]{0.75\linewidth}
\centering
\resizebox{\textwidth}{!}{%
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{P{2.5cm} P{2cm}P{1.5cm}P{2cm}P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}|P{2cm}P{1.5cm}P{2cm}P{2.5cm}}
{\textbf{Traffic data}} & \multicolumn{4}{c}{Cross-Entropy loss} & \multicolumn{4}{c}{Classification error} & \multicolumn{4}{c}{Weighted $F_1$ score} \\
\hline
{\# Hidden neurons} & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test & SGD Training & SGD Test & \SVI \ Training & \SVI \ Test \\
\hline
8 & 0.837 (7.0e-03) & 0.834 (1.1e-02) & 0.967 (6.2e-03) & 0.969 (5.5e-03) & 0.397 (1.3e-02) & 0.414 (1.5e-02) & 0.412 (1.6e-02) & 0.427 (1.6e-02) & 0.572 (2.9e-02) & 0.554 (2.9e-02) & 0.572 (2.4e-02) & 0.558 (2.3e-02) \\
16 & 0.808 (5.6e-03) & 0.804 (4.1e-03) & 0.955 (3.3e-03) & 0.956 (4.0e-03) & 0.391 (1.3e-02) & 0.411 (1.2e-02) & 0.37 (3.8e-03) & 0.384 (7.0e-03) & 0.576 (3.1e-02) & 0.556 (3.0e-02) & 0.624 (7.3e-03) & 0.61 (1.0e-02) \\
32 & 0.791 (5.7e-03) & 0.787 (6.5e-03) & 0.939 (3.4e-03) & 0.941 (3.6e-03) & 0.366 (5.9e-03) & 0.383 (7.9e-03) & 0.358 (4.5e-03) & 0.37 (4.9e-03) & 0.628 (6.7e-03) & 0.611 (9.0e-03) & 0.641 (4.6e-03) & 0.629 (5.0e-03) \\
64 & 0.791 (2.0e-03) & 0.787 (2.2e-03) & 0.933 (2.1e-03) & 0.935 (2.1e-03) & 0.36 (2.7e-03) & 0.375 (4.4e-03) & 0.346 (1.7e-03) & 0.358 (1.6e-03) & 0.637 (2.8e-03) & 0.622 (4.7e-03) & 0.652 (2.0e-03) & 0.641 (1.8e-03) \\
\end{tabular}
\egroup
}
\cprotect \caption{Traffic data multi-class anomaly detection under a three-layer GCN model, with identical learning rate and ReLU activation for both. We note that \SVI \ remains competitive or outperforms SGD in terms of classification error and weighted $F_1$ scores, with clear improvement when hidden neurons increase. However, \SVI \ reaches much larger cross-entropy losses, which we think are benign (see Section 5.4.2 for justification).}
\label{tab:traffic_tab}
\end{varwidth}
\hspace{0.05in}
\begin{minipage}[b]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{Vanilla_vs_vanilla_VI__graph_inferred_Traffic_vanillahidden_64_VIhidden_64_new__all_layer_Cross-Entropy_single.pdf}
\vspace{-0.3in}
\captionof{figure}{Classification error under 64 neurons in both hidden layers. \protect\SVI \ converges faster, especially after a single training epoch.}
\label{fig:traffic_fig}
\end{minipage}
\end{table*}
\subsection{Real-data Network Prediction}\label{sec:real_data}
We lastly use a three-layer GCN for real-data experiments, where both hidden layers share the same number of hidden neurons with the ReLU activation. The output layer uses sigmoid (binary node classification) or softmax (multi-class node classification). We fix $\textsf{lr}$ = 0.001, momentum = 0.99, and use the Nesterov momentum \citep{pmlr-v28-sutskever13}.
\vspace{0.1in}
\noindent \textit{5.4.1 Binary solar ramping prediction.} The raw solar radiation data are retrieved from the National Solar Radiation Database for 2017 and 2018. We consider data from the downtowns of 10 cities in California and from 9 locations in downtown Los Angeles, where each city or location is a node in the network. The goal is to identify daily ramping events, i.e., abrupt changes in solar power generation. Thus, $Y_{t,i}=1$ if node $i$ at day $t$ experiences a ramping event. We define the feature $X_t:=\{Y_{t-1},\ldots,Y_{t-d}\}$ as the collection of the past $d$ days of observations and pick $d=5$. We estimate edges via a $k$-nearest neighbor approach based on the correlation between training ramping labels, with $k=4$. Data in 2017 are used for training ($N=360$) and the rest for testing ($N_1=365$), and we let $B=30$ and $E=100$.
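The lagged-feature construction and the correlation-based $k$-nearest-neighbor graph can be sketched as follows. The helper names and toy labels are ours, and details such as tie-breaking among equally correlated nodes are assumptions:

```python
import numpy as np

def lagged_features(Y, d=5):
    """Build X_t = (Y_{t-1}, ..., Y_{t-d}) from the label matrix Y of
    shape (T, n); returns features of shape (T - d, n, d)."""
    T, n = Y.shape
    return np.stack([Y[t - d:t][::-1].T for t in range(d, T)])

def knn_graph_from_labels(Y_train, k=4):
    """Estimate edges by connecting each node to its k most correlated
    nodes under the training ramping labels. Sketch of the described
    construction, not the authors' code."""
    corr = np.corrcoef(Y_train.T)            # (n, n) node correlations
    np.fill_diagonal(corr, -np.inf)          # exclude self-loops
    edges = set()
    for i in range(corr.shape[0]):
        for j in np.argsort(corr[i])[-k:]:   # k most correlated nodes
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

rng = np.random.default_rng(0)
Y = (rng.random((360, 10)) < 0.3).astype(float)   # toy ramping labels
X = lagged_features(Y, d=5)                       # (355, 10, 5) features
E = knn_graph_from_labels(Y, k=4)                 # undirected edge set
```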
Table \ref{tab:solar_tab} shows that \SVI \ almost always outperforms SGD under any metric on both training and test data. The pattern is consistent across different numbers of hidden neurons. Figure \ref{fig:solar_fig} also shows that \SVI \ converges faster than SGD, which aligns with earlier simulation results. See Appendix \ref{sec:real_multi_layer_append}, Figures \ref{fig:append_CA} (CA) and \ref{fig:append_LA} (LA) for intermediate convergence results, including those under BN.
\vspace{0.1in}
\noindent \textit{5.4.2 Multi-class traffic flow anomaly detection.} The raw bi-hourly traffic flow data are from the California Department of Transportation, where we collected data from 20 non-uniformly spaced traffic sensors in 2020. Data are available hourly, with $Y_{t,i}=1$ (resp. 2) if the current traffic flow lies outside the upper (resp. lower) 90\% quantile over the past four days of traffic flow of its nearest four neighbors based on sensor proximity. As before, we define the feature $X_t$ as the collection of the past $d$ days of observations and set $d=4$, where the edges include the nearest five neighbors based on location. Data in the first nine months are used for training ($N=6138$) and the rest for testing ($N_1=2617$). We let $B=600$ and $E=100$ and use the cross-entropy loss to test the performance of \SVI \ under alternative loss functions.
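The quantile-based labelling rule can be sketched as follows. The window length, helper names, and toy data are ours, and this is only an illustration of the stated rule, not the authors' preprocessing code:

```python
import numpy as np

def anomaly_labels(flow, neighbors, window, q=0.90):
    """flow: (T, n) array of traffic readings; neighbors[i]: indices of
    node i's nearest sensors. Y[t, i] = 1 if flow[t, i] exceeds the
    upper q-quantile of the neighbours' past `window` readings, 2 if it
    falls below the lower (1 - q)-quantile, else 0. Sketch of the
    described labelling rule, not the authors' code."""
    T, n = flow.shape
    Y = np.zeros((T, n), dtype=int)
    for t in range(window, T):
        for i in range(n):
            past = flow[t - window:t, neighbors[i]].ravel()
            hi, lo = np.quantile(past, q), np.quantile(past, 1 - q)
            if flow[t, i] > hi:
                Y[t, i] = 1          # upper-tail anomaly
            elif flow[t, i] < lo:
                Y[t, i] = 2          # lower-tail anomaly
    return Y

rng = np.random.default_rng(0)
flow = rng.normal(size=(200, 6))                 # toy flows, 6 sensors
nbrs = {i: [j for j in range(6) if j != i][:4] for i in range(6)}
Y = anomaly_labels(flow, nbrs, window=96)        # 96 = 4 days of readings
```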
Table \ref{tab:traffic_tab} shows that \SVI \ reaches smaller training and test classification errors for larger numbers of hidden neurons and remains competitive when fewer neurons are used. In terms of weighted $F_1$ scores, \SVI \ also reaches higher training and test scores for all except one choice of hidden neurons. Lastly, we note that \SVI \ results in much larger cross-entropy losses, which might happen if the posterior probabilities predicted by \SVI \ are more uniform than those of SGD. In this case, such behavior is benign given the smaller test errors of \SVI, which likely arise because there is potential ambiguity in the test data: the true posterior probabilities in certain portions do not concentrate on a specific class. As a result, the more uniform posterior probability predictions by \SVI \ better estimate the true probabilities, leading to smaller test errors and higher weighted $F_1$ scores. See Appendix \ref{sec:real_multi_layer_append}, Figure \ref{fig:append_traffic} for intermediate convergence results.
\section{Conclusion}\label{sec:conclusion}
We have presented a new monotone operator variational inequality (VI) approach for training neural networks, particularly graph neural networks, which enjoys strong prediction guarantees when training one-layer neural networks. When training general multi-layer neural networks, the proposed method obtains faster convergence and makes more accurate predictions than a comparable stochastic gradient descent algorithm, as shown by extensive simulations and real-data studies across various performance metrics.
At present, the following tasks remain open. Theoretically, the guarantees hold only when all except the last layers of the network are known. Instead, it is important to relax and quantify the level of knowledge we need for these layers to better explain the previous-layer heuristics of \SVI. Methodologically, moving beyond decoupled features $\eta_l(X_l)$ and $\Theta_l$ as in \eqref{eq2} is important. Application-wise, it is useful to address a wider range of problems in GNN, including edge and graph classification \citep{Zhou2020GraphNN}. We will explore them in the future.
\section*{Acknowledgement}
The authors are partially supported by NSF DMS-2134037.
\section{Experiments}
We consider two different estimation schemes, where the initial starting point is $\Theta^{(0)}=\boldsymbol 0$. The first scheme estimates 100 matrix estimates using \textit{one pair} of $(X_i,Y_i), i=1,\ldots,100$. The second scheme estimates one parameter using all 100 pairs. The first scheme can be more realistic when we only have one realization of the graph input and output. Note that the modulus $\thetapGNN{2}$ is not explicitly computable, as mentioned earlier, but plays a key part both in defining the stepsize and in bounding the estimation error. Thus, we choose 10 uniformly spaced values in (0.1, 10) as a reasonably large range. After estimating the parameters, we compute the norm of the difference between the estimated and true parameters and make predictions on 100 test data. The estimation accuracy is computed as the number of correct node predictions (out of $F$ choices per node) divided by $n$. Note that for fairness in comparison, we only make one prediction per random estimate in the first scheme, so we actually generate two pairs at each $i=1,\ldots,100$ and use the first for estimation and the second for testing. Lastly, we note that the overall computation is extremely fast in both cases (e.g., finishing in 2-3 seconds) because constructing the gradient $F_{(X_i,Y_i)}$ barely takes any time and we need not project back the estimates (since $\B = \R^{C\times F}$ for the softmax activation).
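The estimation-accuracy metric above (correct node predictions out of $F$ choices, divided by $n$) can be sketched as follows; the toy probabilities are ours and purely illustrative:

```python
import numpy as np

def node_prediction_accuracy(probs, labels):
    """probs: (n, F) output scores for the n graph nodes; labels: (n,)
    true classes. Accuracy is the number of correctly predicted nodes
    divided by n, as in the experiment description."""
    pred = probs.argmax(axis=1)
    return float((pred == labels).mean())

# Toy check on n = 8 nodes with F = 10 output channels, matching the
# experiment's graph size; the scores below are illustrative only.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=8)
probs = np.full((8, 10), 0.05)
probs[np.arange(7), labels[:7]] = 0.9          # 7 of 8 nodes correct
probs[7, (labels[7] + 1) % 10] = 0.9           # one wrong node
acc = node_prediction_accuracy(probs, labels)  # 7/8 = 0.875
```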
\begin{wrapfigure}[12]{r}{.25\textwidth}
\vspace{-0.175in}
\centering
\includegraphics[width=\linewidth]{simple_graph.pdf}
\vspace{-0.2in}
\caption{An example nearly identical to \citep[Fig. 7]{convex_recovery}, except for edge directions.}
\label{fig:simple_network}
\end{wrapfigure}
Figure \ref{fig:GCN_results} shows the histograms of results over different values of the modulus. For the first scheme, Figure \ref{fig:GCN_results_1obs_norm_diff} visualizes the norm of the difference between the estimated and true parameters and Figure \ref{fig:GCN_results_1obs_est_accu} presents the prediction accuracy on 8 nodes. For the second scheme, Figure \ref{fig:GCN_results_allobs_est_accu} presents the prediction accuracy on 8 nodes, and the norm difference between the single random estimate and the true parameter appears in the title. The main takeaways are as follows. First, different choices of the modulus barely affect the estimation accuracy in either scheme. An exception occurs for $\theta_2[\text{GNN}]=0.1$ in the second scheme, where the projected gradient descent takes too-large steps over the 100 observations. On the other hand, a larger modulus indeed brings the estimated parameter closer to the true parameter value, although the effect seems negligible once the modulus exceeds 6.7 (resp. 5.6) in the first (resp. second) scheme. Comparing the estimation accuracy of the two schemes reveals that estimating the parameter using all 100 observations (and predicting on 100 test points) is better than using one observation (and predicting on one test point), consistent with the prediction guarantees in Theorem \ref{thm:alg_guarantee}. Overall, the predictions are very accurate (e.g., the median accuracy is always 0.88, so at least half of the estimates make only one wrong node prediction).
\begin{figure}
\begin{minipage}{0.32\textwidth}
\centering
\captionsetup[subfigure]{justification=centering}
\includegraphics[width=\linewidth]{Norm_diff_ratio_single_channel_initial_type_is_zero_modulus_0.1}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\captionsetup[subfigure]{justification=centering}
\includegraphics[width=\linewidth]{Norm_diff_ratio_single_channel_initial_type_is_zero_modulus_1}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\captionsetup[subfigure]{justification=centering}
\includegraphics[width=\linewidth]{Norm_diff_ratio_single_channel_initial_type_is_zero_modulus_5}
\end{minipage}
\cprotect\caption{Single-channel output, relative estimation error of the true parameter over the projected gradient descent.}
\label{fig:single_channel_para_est}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}{\textwidth}
\centering
\captionsetup[subfigure]{justification=centering}
\includegraphics[width=\linewidth]{GCN_norm_diff_one_obs.pdf}
\subcaption{100 single observations: norm difference.}
\label{fig:GCN_results_1obs_norm_diff}
\end{minipage}
\hspace{0.1in}
\begin{minipage}{\textwidth}
\centering
\captionsetup[subfigure]{justification=centering}
\includegraphics[width=\linewidth]{GCN_est_err_one_obs.pdf}
\subcaption{100 single observations: estimation accuracy.}
\label{fig:GCN_results_1obs_est_accu}
\end{minipage}
\hspace{0.1in}
\begin{minipage}{\textwidth}
\centering
\captionsetup[subfigure]{justification=centering}
\includegraphics[width=\linewidth]{GCN_norm_diff_all_obs.pdf}
\subcaption{All 100 observations: estimation accuracy and norm difference.}
\label{fig:GCN_results_allobs_est_accu}
\end{minipage}
\vspace{-0.15in}
\cprotect\caption{Comparing estimation using single vs. total observations over different modulus values: the true graph has 8 nodes, 20 input channels, and 10 output channels. The Frobenius norm of the true $\Theta$ is 7.05.}
\label{fig:GCN_results}
\end{figure}
\subsection{Real-data}
\end{document}
| {
"timestamp": "2022-02-21T02:00:51",
"yymm": "2202",
"arxiv_id": "2202.08876",
"language": "en",
"url": "https://arxiv.org/abs/2202.08876",
"abstract": "Despite the vast empirical success of neural networks, theoretical understanding of the training procedures remains limited, especially in providing performance guarantees of testing performance due to the non-convex nature of the optimization problem. The current paper investigates an alternative approach of neural network training by reducing to another problem with convex structure -- to solve a monotone variational inequality (MVI) -- inspired by a recent work of (Juditsky & Nemirovsky, 2019). The solution to MVI can be found by computationally efficient procedures, and importantly, this leads to performance guarantee of $\\ell_2$ and $\\ell_{\\infty}$ bounds on model recovery and prediction accuracy under the theoretical setting of training a single-layer linear neural network. In addition, we study the use of MVI for training multi-layer neural networks and propose a practical algorithm called \\textit{stochastic variational inequality} (SVI), and demonstrate its applicability in training fully-connected neural networks and graph neural networks (GNN) (SVI is completely general and can be used to train other types of neural networks). We demonstrate the competitive or better performance of SVI compared to widely-used stochastic gradient descent methods on both synthetic and real network data prediction tasks regarding various performance metrics, especially in the improved efficiency in the early stage of training.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG)",
"title": "An alternative approach to train neural networks using monotone variational inequality",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105335255604,
"lm_q2_score": 0.7248702702332476,
"lm_q1q2_score": 0.7076984802682391
} |
https://arxiv.org/abs/1902.00523 | On the Objects and Morphisms of Category of Soft sets | Soft set theory can deal with uncertainties in nature via a parametrization process. In this paper, we explore the objects and morphisms of the category of soft sets, Sset(U), in detail. We also give characterizations of monomorphisms and epimorphisms in Sset(U). | \section{Introduction}
The concept of soft sets, introduced by D. Molodtsov \cite{mol}, is widely applied to solving problems involving vagueness and uncertainty in several areas. This paper is devoted to presenting basic categorical ideas of soft sets over a fixed universal set $U$ and their properties. The class of all soft sets over $U$ is denoted by \textbf{Sset(U)}.\\
In \cite{zho}, M. Zhou et al. initiated the development of some categorical properties of soft sets, such as equalizers, finite products, and adjoints. R. A. Barzooei et al. \cite{bor} explained the concepts of epimorphism and monomorphism using a pair of functions as a morphism. O. Zahiri \cite{zah} discussed the existence of products and coproducts in the category of soft sets. In \cite{Liu}, Z. Liu proved that the category of soft sets is a topos. B. P. Varol et al. \cite{var} investigated soft neighborhood sets and their topological properties in detail. In \cite{sar}, S. K. Sardar et al. defined a soft category as a soft set and proved that a soft category is a generalization of a fuzzy category.
The remaining portion of this article is organized as follows. In Sec. 2, some basic notions in category theory and soft set theory are given. In Sec. 3, we study objects in the category of soft sets and establish necessary and sufficient conditions for an object to be an initial object, terminal object, separator, or co-separator. In Sec. 4, we study morphisms in the category of soft sets and establish some characterizations of epimorphisms and monomorphisms in detail.
\section{Preliminaries}
The basic category theory terminologies are taken from J. Adámek, H. Herrlich and G. E. Strecker \cite{str} and we use standard terminology in soft sets from \cite{maji}, \cite{mol}.
A \textbf{category} is a quadruple $C = (\mathcal{O}, hom, id, \circ)$ consisting of
\begin{itemize}
\item[1.] A class $\mathcal{O}$, whose members are called $C$-objects
\item[2.] For each pair $(A,B)$ of $C$-objects, a set $hom(A,B)$, whose members are called $C$-morphisms from $A$ to $B$
\item[3.] For each $C$-object $A$, a morphism
$id_A: A \rightarrow A$, called the $C$-identity on $A$
\item[4.] A composition law associating with each $C$-morphism $f: A\rightarrow B $ and each $C$-morphism $g: B \rightarrow C$ a $C$-morphism $g \circ f: A \rightarrow C$, called the composite of $f$ and $g$, subject to the following conditions:\\
(a) composition is associative\\
(b) $C$-identities act as identities with respect to composition\\
(c) the sets $hom(A,B)$ are pairwise disjoint
\end{itemize}
An object $A$ is said to be an \textbf{initial object} if for each object $B$, $|hom(A,B)|= 1$ and an object $A$ is called a \textbf{terminal object} provided that for each object $B$, $|hom(B,A)|=1$. An object $A$ is called a \textbf{zero object}, if it is both an initial object and a terminal object. An object $S$ is called a \textbf{separator} provided that whenever $f,g: A \rightarrow B$ are distinct morphisms, there exists a morphism $h: S \rightarrow A$ such that $f\circ h \neq g\circ h$. An object $C$ is called a \textbf{co-separator} provided that whenever $f,g: A \rightarrow B$ are distinct morphisms there exists a morphism $k: B \rightarrow C$ such that $k\circ f \neq k\circ g$.
A morphism $f: A \rightarrow B$ is said to be an \textbf{epimorphism} if, for all pairs $h,k : B \rightarrow C$ of morphisms, $h \circ f = k \circ f$ implies $h = k$; that is, $f$ is right cancellable for composition. A morphism $f: A \rightarrow B$ is said to be a \textbf{monomorphism} if, for all pairs $h,k : C \rightarrow A$ of morphisms, $f \circ h = f \circ k$ implies $h = k$; that is, $f$ is left cancellable for composition. A morphism is called a \textbf{bimorphism} provided that it is both a monomorphism
and an epimorphism.\\
Let $\mathcal{F}_A$ and $\mathcal{G}_B$ be two soft sets over $U$. A soft morphism (soft function \cite{zho}) from $\mathcal{F}_A$ to $\mathcal{G}_B$ is a function $\alpha : A \rightarrow B$ such that $\mathcal{F} (a)\subseteq (\mathcal{G} \circ \alpha)(a)$, for each $a\in A$. The composition of two soft morphisms is again a soft morphism. Each soft morphism $\alpha:\mathcal{F}_A \rightarrow \mathcal{G}_B$ in $hom(\mathcal{F}_A, \mathcal{G}_B)$ is considered as a triple $(\mathcal{F}_A,\alpha,\mathcal{G}_B)$. The composition of soft morphisms is associative and the identity function is the identity soft morphism. This establishes that \textbf{Sset(U)} is a category.
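A small concrete example (ours, not from the cited sources) may help illustrate the definition:

```latex
% Illustrative example of a soft morphism over a three-element universe.
Let $U=\{u_1,u_2,u_3\}$, $A=\{a_1,a_2\}$, $B=\{b\}$, and define
$\mathcal{F}(a_1)=\{u_1\}$, $\mathcal{F}(a_2)=\{u_2\}$ and
$\mathcal{G}(b)=\{u_1,u_2\}$. The constant function
$\alpha : A \rightarrow B$ is a soft morphism from $\mathcal{F}_A$ to
$\mathcal{G}_B$, since
$\mathcal{F}(a_i) \subseteq (\mathcal{G}\circ\alpha)(a_i)=\{u_1,u_2\}$
for $i=1,2$. In the opposite direction there is no soft morphism from
$\mathcal{G}_B$ to $\mathcal{F}_A$, since
$\mathcal{G}(b)\not\subseteq\mathcal{F}(a_i)$ for either $i$.
```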
\section{Objects in \textbf{Sset(U)} }
In this section, we characterize initial object, terminal object, separator and co-separator of the category, \textbf{Sset(U)} in detail.
\begin{theorem}
An object $\mathcal{F}_A \in \textbf{Sset(U)}$ is an initial object if and only if $A = \emptyset$.
\end{theorem}
\begin{proof}
Let $\mathcal{F}_A$ be an initial object in the category of soft sets, $\textbf{Sset(U)}$. Then $|Mor(\mathcal{F}_A, \mathcal{G}_B)| = 1$, for every soft set $\mathcal{G}_B$ in $\textbf{Sset(U)}$.
If $A \neq \emptyset$, then $|Mor(\mathcal{F}_A, \mathcal{G}_B)| > 1$ for any absolute soft set $\mathcal{G}_B$, a contradiction. So $A = \emptyset$.\\
Conversely, assume $A=\emptyset$. There is exactly one function from the empty set to any set, so $\mathcal{F}_A$ is an initial object in $\textbf{Sset(U)}$.
\end{proof}
\begin{theorem}
An object $\mathcal{F}_A \in \textbf{Sset(U)}$ is a terminal object if and only if $\mathcal{F}_A$ is an absolute soft set with $|A| = 1$.
\end{theorem}
\begin{proof}
Let $\mathcal{F}_A$ be an absolute soft set with $|A| = 1$; then there is exactly one soft morphism (the constant function) from any soft set $\mathcal{G}_B$ to $\mathcal{F}_A$.\\
Conversely, if $|A| > 1$, then $|Mor(\mathcal{G}_B, \mathcal{F}_A)| > 1$ for any null soft set $\mathcal{G}_B$, which is a contradiction. So $|A| = 1$. Also, if $\mathcal{F}_A$ is not an absolute soft set, then there is no soft morphism from an absolute soft set $\mathcal{G}_B$ to $\mathcal{F}_A$. So $\mathcal{F}_A$ must be an absolute soft set with $|A| = 1$.
\end{proof}
\begin{theorem}
The category \textbf{Sset(U)} has no zero object.
\end{theorem}
\begin{proof}
By the preceding theorems, an initial object has an empty parameter set, while a terminal object has a singleton parameter set. Hence no object is both.
\end{proof}
\begin{theorem}
An object $\mathcal{H}_C$ in \textbf{Sset(U)} is a separator if and only if $\mathcal{H}$ is a null soft set with $C\neq \emptyset$.
\end{theorem}
\begin{proof}
Suppose $\mathcal{H}_C$ is a null soft set with $C\neq \emptyset$. Let $\mathcal{F}_A$ and $\mathcal{G}_B$ be two objects in \textbf{Sset(U)} and $\alpha, \beta : \mathcal{F}_A \rightarrow \mathcal{G}_B$ be two distinct soft morphisms, i.e., $\alpha(a) \neq \beta(a)$ for some $a\in A$.\\
Define $\gamma: C\rightarrow A$ by $\gamma(c) = a$, for all $c\in C$. By the definition of $\mathcal{H}$, $\gamma$ is a soft morphism from $\mathcal{H}_C$ to $\mathcal{F}_A$ (i.e., $\emptyset = \mathcal{H}(c) \subseteq (\mathcal{F} \circ \gamma)(c)$, for all $c\in C$). Also, $(\alpha \circ \gamma)(c)= \alpha(a)$ and $(\beta \circ \gamma)(c)= \beta(a)$. So $\alpha \circ \gamma \neq \beta \circ \gamma$. Hence $\mathcal{H}_C$ is a separator in \textbf{Sset(U)}.\\
Conversely, assume that $\mathcal{H}_C$ is a separator in \textbf{Sset(U)}. If $C= \emptyset$, then $\alpha \circ \gamma = \beta \circ \gamma$ for the unique morphism $\gamma$ from $\mathcal{H}_C$. Now assume there exists $c \in C$ such that $\mathcal{H}(c) \neq \emptyset$; then there is no soft morphism from $\mathcal{H}_C$ to a null soft set $\mathcal{F}_A$. Hence $\mathcal{H}_C$ is a null soft set with $C \neq \emptyset$.
\end{proof}
\begin{theorem}
An object $\mathcal{H}_C$ in \textbf{Sset(U)} is a co-separator if and only if there exist $c_1, c_2 \in C$, $c_1 \neq c_2$ such that $\mathcal{H}(c_1)=\mathcal{H}(c_2) = U$.
\end{theorem}
\begin{proof}
Suppose $c_1, c_2 \in C$, $c_1 \neq c_2$ with $\mathcal{H}(c_1)=\mathcal{H}(c_2) = U$. Let $\alpha, \beta : \mathcal{F}_A \rightarrow \mathcal{G}_B$ be two distinct soft morphisms, i.e. $\alpha(a_1) \neq \beta(a_1)$, for some $a_1\in A$. \\
Define $\gamma: B\rightarrow C$ by $ \gamma(b) = \left\{ \begin{array}{ll}
c_1 & \mbox{if $ b=\alpha(a_1) $};\\
c_2 & \mbox{otherwise}\end{array} \right.$\\ Then $\gamma: \mathcal{G}_B \rightarrow \mathcal{H}_C$ is a soft morphism, since $\mathcal{G}(b) \subseteq U= \mathcal{H}(c_1)=\mathcal{H}(c_2)$. Also, $(\gamma \circ \alpha)(a_1) = c_1 \neq c_2= (\gamma \circ \beta)(a_1)$. Hence $\mathcal{H}_C$ is a co-separator.\\
Conversely, let $\alpha, \beta : \mathcal{F}_A \rightarrow \mathcal{G}_B$ be two distinct soft morphisms. If $C= \emptyset$, then there is no soft morphism from $\mathcal{G}_B$ to $\mathcal{H}_C$, and if $|C|=1$, then $\gamma \circ \alpha= \gamma \circ \beta$ for every $\gamma : \mathcal{G}_B \rightarrow \mathcal{H}_C$. Therefore $C$ contains at least two distinct points. Since $\mathcal{H}_C$ is a co-separator, there exists a soft morphism $\gamma : \mathcal{G}_B \rightarrow \mathcal{H}_C$ such that $\gamma \circ \alpha \neq \gamma \circ \beta$, i.e. $(\gamma \circ \alpha)(a) = c_1 \neq c_2= (\gamma \circ \beta)(a)$, for some $a\in A$.
Now let $\alpha(a)= b_1$ and $\beta(a)= b_2$, and take $\mathcal{G}_B$ to be an absolute soft set. Since $\gamma$ is a soft morphism, $U = \mathcal{G}(b_1) \subseteq (\mathcal{H}\circ \gamma)(b_1) = \mathcal{H}(c_1)$, so $\mathcal{H}(c_1) = U$, and similarly $\mathcal{H}(c_2) = U$.
\end{proof}
\section{Morphisms in \textbf{Sset(U)} }
In this section, we investigate the notions of epimorphism and monomorphism in \textbf{Sset(U)} and characterize them.
\begin{theorem}\label{epi}
A soft morphism $\alpha: \mathcal{F}_A \rightarrow \mathcal{G}_B $ is an epimorphism if and only if $\alpha$ is surjective.
\end{theorem}
\begin{proof}
Assume that $\alpha: \mathcal{F}_A \rightarrow \mathcal{G}_B $ is an epimorphism.
Let $C= \{0,1\}$ and let $\mathcal{H}_C$ be an absolute soft set. Define $\beta, \gamma: B \rightarrow C$ by $\beta(b) = 0$ for all $b\in B$ and $ \gamma(b) = \left\{ \begin{array}{ll}
0 & \mbox{if $ b=\alpha(a), a\in A $};\\
1 & \mbox{otherwise}\end{array} \right.$\\
Then $\beta, \gamma : \mathcal{G}_B \rightarrow \mathcal{H}_C$ are soft morphisms with $\beta \circ \alpha = \gamma \circ \alpha $, and since $\alpha$ is an epimorphism, $\beta= \gamma$. Hence $\gamma(b)=0$ for every $b\in B$, which by the definition of $\gamma$ means that every $b\in B$ belongs to $\alpha(A)$. Thus $\alpha(A)= B$, i.e. $\alpha$ is surjective.\\
Conversely, assume that $\alpha$ is surjective. Let $\beta, \gamma : \mathcal{G}_B \rightarrow \mathcal{H}_C$ be soft morphisms with $\beta \circ \alpha = \gamma \circ \alpha$. Given $b\in B$, there exists $a\in A$ such that $\alpha(a)= b$, and
\begin{eqnarray*}
\beta(b) & = & \beta(\alpha(a)) \\
& = & (\beta \circ \alpha)(a) \\
& = & (\gamma \circ \alpha)(a) \\
& = & \gamma(\alpha(a))\\
& = & \gamma(b)
\end{eqnarray*}
Thus $\beta = \gamma$. Hence $\alpha: \mathcal{F}_A \rightarrow \mathcal{G}_B $ is an epimorphism.
\end{proof}
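To make the characterization concrete, here is a small finite sanity check (our own illustration; the encoding of soft sets as Python dictionaries and the helper names are not from the paper). It represents a soft morphism as a parameter map $\alpha$ with $\mathcal{F}(a)\subseteq\mathcal{G}(\alpha(a))$, and verifies that a non-surjective soft morphism fails right-cancellation against the absolute soft set on $C=\{0,1\}$ used in the proof, while a surjective one passes:

```python
from itertools import product

# Toy model of Sset(U) (our own encoding, not from the paper): a soft set is a
# dict F mapping parameters to subsets of U, and a soft morphism
# alpha: F_A -> G_B is a parameter map with F(a) a subset of G(alpha(a)).

U = frozenset({0, 1})

def is_soft_morphism(alpha, F, G):
    return all(F[a] <= G[alpha[a]] for a in F)

def all_soft_morphisms(F, G):
    A, B = list(F), list(G)
    out = []
    for values in product(B, repeat=len(A)):
        cand = dict(zip(A, values))
        if is_soft_morphism(cand, F, G):
            out.append(cand)
    return out

def compose(beta, alpha):
    return {a: beta[alpha[a]] for a in alpha}

def is_right_cancellable(alpha, F, G, test_objects):
    """Check the epimorphism property of alpha against the given objects."""
    for H in test_objects:
        for beta in all_soft_morphisms(G, H):
            for gamma in all_soft_morphisms(G, H):
                if beta != gamma and compose(beta, alpha) == compose(gamma, alpha):
                    return False
    return True

F = {'a1': frozenset({0})}
G = {'b1': U, 'b2': U}
H = {0: U, 1: U}                      # absolute soft set on C = {0, 1}

alpha = {'a1': 'b1'}                  # a soft morphism that is NOT surjective
print(is_soft_morphism(alpha, F, G))             # True
print(is_right_cancellable(alpha, F, G, [H]))    # False: beta, gamma from the proof separate it

G2 = {'b1': U}
alpha2 = {'a1': 'b1'}                 # surjective onto G2's parameter set
print(is_right_cancellable(alpha2, F, G2, [H]))  # True
```

The non-surjective $\alpha$ is defeated exactly by the pair $\beta,\gamma$ constructed in the proof above.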
\begin{theorem}\label{mono}
A soft morphism $\alpha: \mathcal{F}_A \rightarrow \mathcal{G}_B $ is a monomorphism if and only if $\alpha$ is injective.
\end{theorem}
\begin{proof}
Suppose $\alpha $ is a monomorphism. Let $a_1, a_2 \in A$, let $C=\{c\}$, and let $\mathcal{H}(c)= \emptyset$. Define $\beta, \gamma : C \rightarrow A$ by $\beta(c)= a_1$ and $\gamma(c)= a_2$. By the definition of $\mathcal{H}$, $\beta$ and $ \gamma$ are soft morphisms from $\mathcal{H}_C$ to $\mathcal{F}_A$. For $a_1, a_2 \in A$,
\begin{eqnarray*}
\alpha(a_1) = \alpha(a_2) & \implies & (\alpha \circ \beta)(c) = (\alpha \circ \gamma)(c) \\
& \implies & \beta(c)= \gamma(c)\hspace{0.8cm} \text{Since $\alpha$ is a monomorphism} \\
& \implies & a_1= a_2
\end{eqnarray*}
Thus $\alpha$ is injective. \\
Conversely, assume that $\alpha$ is injective. Let $\beta, \gamma : \mathcal{H}_C \rightarrow \mathcal{F}_A$ be soft morphisms with $\alpha \circ \beta = \alpha \circ \gamma$. Now, for any $c\in C$,
\begin{eqnarray*}
(\alpha \circ \beta)(c) = (\alpha \circ \gamma)(c) & \implies & \alpha ( \beta(c)) = \alpha ( \gamma(c)) \\
& \implies & \beta(c)= \gamma(c)\hspace{0.8cm} \text{Since $\alpha$ is injective} \\
& \implies & \beta = \gamma
\end{eqnarray*}
Hence $\alpha$ is a monomorphism.
\end{proof}
\begin{corollary}
In \textbf{Sset(U)}, the bimorphisms are precisely the bijective soft morphisms.
\end{corollary}
\begin{proof}
Combine the definition of bimorphism with Theorems \ref{epi} and \ref{mono}.
\end{proof}
\begin{theorem}
A soft morphism $\alpha : \mathcal{F}_A \rightarrow \mathcal{G}_B$ is an isomorphism if and only if $\alpha$ is bijective and $\mathcal{F}(a)= (\mathcal{G}\circ \alpha)(a)$, for all $a\in A$.
\end{theorem}
\begin{proof}
Suppose $\alpha$ is an isomorphism. Then there exists a soft morphism $\beta : \mathcal{G}_B \rightarrow \mathcal{F}_A$ such that $\beta \circ \alpha = id_{\mathcal{F}_A}$ and $\alpha \circ \beta = id_{\mathcal{G}_B}$; on the underlying parameter sets this means $\beta \circ \alpha = id_A$ and $\alpha \circ \beta = id_B$, so $\alpha$ is bijective. Let $a\in A$ and $\alpha(a)= b$, so that $\beta(b)=a$. Since $\alpha$ and $\beta$ are soft morphisms, we have
\begin{center}
$\mathcal{F}(a) \subseteq (\mathcal{G}\circ \alpha)(a)= \mathcal{G}(b) \subseteq (\mathcal{F}\circ \beta)(b)= \mathcal{F}(a)$
\end{center}
Hence $\mathcal{F}(a) = (\mathcal{G}\circ \alpha)(a)$, for all $a \in A$.\\
Conversely, assume that $\alpha$ is bijective and $\mathcal{F}(a)= (\mathcal{G}\circ \alpha)(a)$, for all $a\in A$. Then there exists a function $\beta: B \rightarrow A$ such that $\beta \circ \alpha = id_A$ and $\alpha \circ \beta = id_B$. For each $b\in B$ we have $\mathcal{G}(b) = (\mathcal{G}\circ\alpha)(\beta(b)) = \mathcal{F}(\beta(b)) = (\mathcal{F}\circ \beta)(b)$, so $\beta$ is a soft morphism, and $\beta \circ \alpha = id_{\mathcal{F}_A}$ and $\alpha \circ \beta = id_{\mathcal{G}_B}$ are identity soft morphisms. Hence $\alpha$ is an isomorphism.
\end{proof}
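The criterion can be checked on a toy instance (our own illustration, using the same dictionary encoding as before): for a bijective $\alpha$ with $\mathcal{F}=\mathcal{G}\circ\alpha$, the set-theoretic inverse is automatically a soft morphism, exactly as in the converse direction of the proof.

```python
# Toy check (our own encoding, not from the paper): for a bijective alpha with
# F(a) = G(alpha(a)) for all a, the set-theoretic inverse beta automatically
# satisfies G(b) subset of F(beta(b)), i.e. beta is a soft morphism.

F = {'a1': frozenset({0}), 'a2': frozenset({0, 1})}
G = {'b1': frozenset({0}), 'b2': frozenset({0, 1})}
alpha = {'a1': 'b1', 'a2': 'b2'}      # bijective on parameter sets

beta = {b: a for a, b in alpha.items()}          # the inverse map
assert all(F[a] == G[alpha[a]] for a in F)       # F = G o alpha
assert all(G[b] <= F[beta[b]] for b in G)        # beta is a soft morphism
print(beta)   # {'b1': 'a1', 'b2': 'a2'}
```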
\section{Conclusions}
In this paper, we have investigated objects and morphisms in the category of soft sets, \textbf{Sset(U)}. These concepts are basic supporting structures for research on the categorical version of soft set theory. For future work, one can study sections, retractions, regular and extremal monomorphisms, regular and extremal epimorphisms, and quotient objects in the category of soft sets.
\section{Acknowledgment}
The first author is much indebted to the University Grants Commission, India, for awarding a Teacher Fellowship under the Faculty Development Programme (XII$^{\rm th}$ Plan).
| {
"timestamp": "2019-02-05T02:00:16",
"yymm": "1902",
"arxiv_id": "1902.00523",
"language": "en",
"url": "https://arxiv.org/abs/1902.00523",
"abstract": "Soft set theory can deal uncertainties in nature by parametrization process. In this paper, we explore the objects and morphisms of category of soft sets, Sset(U) in detail. Also, gives characterizations of monomorphisms and epimorphisms in Sset(U).",
"subjects": "Category Theory (math.CT)",
"title": "On the Objects and Morphisms of Category of Soft sets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.976310532836284,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.707698479768603
} |
https://arxiv.org/abs/2008.04348 | Design based incomplete U-statistics | U-statistics are widely used in fields such as economics, machine learning, and statistics. However, while they enjoy desirable statistical properties, they have an obvious drawback in that the computation becomes impractical as the data size $n$ increases. Specifically, the number of combinations, say $m$, that a U-statistic of order $d$ has to evaluate is $O(n^d)$. Many efforts have been made to approximate the original U-statistic using a small subset of combinations since Blom (1976), who referred to such an approximation as an incomplete U-statistic. To the best of our knowledge, all existing methods require $m$ to grow at least faster than $n$, albeit more slowly than $n^d$, in order for the corresponding incomplete U-statistic to be asymptotically efficient in terms of the mean squared error. In this paper, we introduce a new type of incomplete U-statistic that can be asymptotically efficient, even when $m$ grows more slowly than $n$. In some cases, $m$ is only required to grow faster than $\sqrt{n}$. Our theoretical and empirical results both show significant improvements in the statistical efficiency of the new incomplete U-statistic. | \subsection{Theoretical properties of one-sample ICUDO}\label{sec:thms}
To proceed with the asymptotic properties of $U_{oa}$, we define
\begin{eqnarray}\label{eq:rt}
R(t)=\sum_{j>t}\binom{d}{j}\delta_{j}^2.
\end{eqnarray}
\begin{theorem}\label{thm:noassumptiononesample}
For any $(g,F)$, using $OA(m,d,L,t)$ in step 1 of the ICUDO algorithm, we have
\begin{eqnarray}\label{eq:noassumptiononesample}
{\rm MSE}(U_{oa})={\rm MSE}(U_0)+\frac{R(t)}{m}+o\left(\frac{1}{m}\right)+O\left(\frac{1}{n^2}\right).
\end{eqnarray}
\end{theorem}
We now explain the meanings of the three terms in (\ref{eq:noassumptiononesample}) generated in the process of approximating the complete U-statistic $U_0$ using $U_{oa}$. The term $O(n^{-2})$ is the bias square of $U_{oa}$ due to the inclusion of combinations with duplicate units, such as the first column of (\ref{eq:example}). Essentially, $U_{oa}$ is approximating the V-statistic, which is biased for $\Theta$ itself. The term $o(m^{-1})$ is due to the sampling variability when we draw one point from each selected group, that is, step 3 of the algorithm. The term $R(t)/m$ is due to the usage of the OA structure in place of a complete enumeration of all group combinations. Compared with the second term in (\ref{eq:generalchangeonesample}) for the ICUR, $R(0)/m$, we are able to eliminate all $\delta_j^2$ with $j\leq t$ owing to the projective uniformity of the OA in all $t$-dimensional projected spaces. If $\delta_j^2=0$ for $d'\leq j\leq d$, an OA with strength $t\geq d'$ yields $R(t)=0$. We discuss the hidden benefit of using a lower strength OA in Example \ref{example:tradeoff}.
In the non-degenerate case, recall that ${\rm MSE}(U_0)\asymp n^{-1}$ and $\lim_{n\rightarrow\infty}{\rm Eff}(U_{\rm RND})\leq d/(1+d)$ for the ICUR when $m\asymp n$. Under the same conditions, Theorem \ref{thm:noassumptiononesample} implies that $U_{oa}$ is asymptotically efficient by simply taking $t=d$. In fact, stronger results can be derived for the ICUDO so that $m$ is allowed to grow more slowly than $n$ under various conditions. We give Theorem \ref{thm:lipschitzonesample} here as one example; additional results can be found in the Supplementary Material.
\begin{theorem}\label{thm:lipschitzonesample}
Suppose $(i)$ the kernel function $g$ is Lipschitz continuous, and $(ii)$ $F$ has density function $f(x)>c$ for some fixed $c>0$ and $x\in[a,b]$, and $f(x)=0$ otherwise. For $U_{oa}$ based on $OA(m,d,L,t)$ with $L^2\leq n(\log n)^{-1}$, we have
\begin{eqnarray}\label{eqn:1202}
{\rm MSE}(U_{oa})={\rm MSE}(U_0)+\frac{R(t)}{m}+O\left(\frac{1}{mL^2}\right)+O\left(\frac{1}{n^2}\right).
\end{eqnarray}
\end{theorem}
For $t=d=2$, we automatically have $R(t)=0$. If the conditions in Theorem \ref{thm:lipschitzonesample} hold, we only need $m\succ \sqrt{n}$ to achieve ${\rm Eff}(U_{oa})\rightarrow 1$, while the ICUR requires $m\succ n$. In general, $R(t)$ decreases in $t$ and vanishes if we take $t$ large enough that $\delta_j^2=0$, for all $j>t$. Without knowledge of the $\delta_j^2$, simply taking $t=d$ eliminates $R(t)$ too. On the other hand, the term $O\left(\frac{1}{mL^2}\right)$ in (\ref{eqn:1202}) is decreasing in $L$: the more groups we use to divide the data, the more homogeneous the units within each group become. However, $L$ and $t$ are subject to the constraint $m=\lambda L^t$, where $\lambda$ is the number of replicates of each $t$-tuple in the OA and is equal to one in all examples presented here. As a result, $L$ and $t$ cannot be increased simultaneously. To gain insight into the trade-off between $L$ and $t$, we need to determine the constant term for $O\left(\frac{1}{mL^2}\right)$. For this, we derive the following theorem. A more detailed discussion on how to choose $L$ and $t$, given $m$, is provided in the Supplementary Material. Denote by $U(0,1)$ the uniform distribution on $[0,1]$.
\begin{theorem}\label{thm:tradeoff}
Suppose $g$ has a continuous first-order derivative on $[0,1]^d$, $X\sim U(0,1)$, and there exists some $c\in (0,\frac{1}{2})$, such that $L\preceq n^{c}$. For $U_{oa}$ based on $OA(m,d,L,t)$,
\begin{eqnarray}\label{eq:tradeoff}
{\rm MSE}(U_{oa})={\rm MSE}(U_0)+\frac{R(t)}{m}+\frac{d}{12mL^2}E\gamma^2(X_1,\ldots,X_d)+o\left(\frac{1}{mL^2}\right),
\end{eqnarray}
where $\gamma(x_1,\ldots,x_d)=\frac{\partial g}{\partial x_1}(x_1,\ldots,x_d)$.
\end{theorem}
The assumption of a uniform distribution for $X$ is not as strict as it seems. To see this, for $X\sim F$, let $Z=F(X)\sim U(0,1)$. Applying Theorem \ref{thm:tradeoff} to $g_{F}(Z_1,\ldots,Z_d):=g(F^{-1}(Z_1),\ldots,F^{-1}(Z_d))=g(X_1,\ldots,X_d)$, we have the following corollary.
\begin{corollary}
Suppose $g_F$ has a continuous first-order derivative on $[0,1]^d$, and there exists some $c\in (0,\frac{1}{2})$, such that $L\preceq n^{c}$. Then, {\rm(\ref{eq:tradeoff})} still holds.
\end{corollary}
The term $E\gamma^2$ in (\ref{eq:tradeoff}) provides a nice interpretation of the trade-off between $t$ and $L$. When the kernel function $g$ has a large variability (large $E\gamma^2$), it is more challenging to make each group as homogeneous as possible, which enforces larger values of $L$. On the other hand, if $g$ is quite flat on the domain (small $E\gamma^2$), we prefer fewer groups to improve the strength of the OA.
\begin{example}\label{example:tradeoff}
The kernel function $g(x_1,x_2,x_3)=x_1x_2x_3$ estimates $\mu^3$, where $\mu=E(X)$. We compare the performance of three methods: $U_{\rm RND}$; $U_{oa_2}$, based on an $OA(m,3,\sqrt{m},2)$ with strength $t=2$; and $U_{oa_3}$, based on an $OA(m,3,m^{1/3},3)$ with strength $t=3$. The data consist of $n=10^4$ i.i.d. observations simulated from
$N(\mu,1)$, where $\mu$ takes the values $0.5$ and $2$; see Table \ref{tb:t.normal} for the simulation results.
\end{example}
\begin{table}[h]
\begin{center}
\caption{Result of Example \ref{example:tradeoff}.}\label{tb:t.normal}
\begin{tabular}{|c|ccc|ccc|}\hline
\multirow{2}{*}{$m/n$}{}&
\multicolumn{3}{c|}{$\mu=0.5$}&\multicolumn{3}{c|}{$\mu=2$}\\
&Eff$(U_{\rm RND})$&${\rm Eff}(U_{oa_2})$&${\rm Eff}(U_{oa_3})$&Eff$(U_{\rm RND})$&${\rm Eff}(U_{oa_2})$&${\rm Eff}(U_{oa_3})$\\\hline
0.005 &0.133\%&0.171\% &{\bf0.218}\%& 1.110\% &{\bf9.908}\%&2.323\% \\
0.01 &0.290\%&{0.464}\%&{\bf0.579}\%& 2.485\% &{\bf26.84}\%&8.455\% \\
0.05 &1.291\%&{2.448}\%&{\bf6.096}\%&10.31\%&{\bf75.12}\% &51.71\% \\
0.1 &2.936\%&{4.527}\%&{\bf16.62}\%&20.13\% &{\bf91.87}\%&76.80\% \\
0.5 &12.58\%&{21.89}\%&{\bf71.78}\%&50.78\%&{\bf100.0}\% &98.53\% \\
1.0 &21.05\%&{33.26}\%&{\bf99.94}\%&67.51\% &{\bf100.0}\%&99.64\%\\\hline
\end{tabular}
\end{center}
\end{table}
In Table \ref{tb:t.normal}, both $U_{oa_2}$ and $U_{oa_3}$ outperform $U_{\rm RND}$ significantly. The advantage of the ICUDO over the ICUR is discussed further in the additional examples below. Furthermore, the winning strategy changes from $U_{oa_3}$ to $U_{oa_2}$ as we increase the mean $\mu$ of the distribution. This observation illustrates well the comments after Theorem \ref{thm:tradeoff} on the relevance of $E\gamma^2$ in determining the optimal strength $t$: for larger $E\gamma^2$, we are more inclined to choose a smaller strength. This is consistent with $E\gamma^2=(\mu^2+1)^2$, which is increasing in $\mu>0$.
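The closed form $E\gamma^2=(\mu^2+1)^2$ invoked here is easy to confirm by simulation. The following sketch (ours, not from the paper) estimates $E(X_2X_3)^2$ for $X\sim N(\mu,1)$ at the two values of $\mu$ used in the example:

```python
import numpy as np

# Monte Carlo check (ours) of E gamma^2 = (mu^2 + 1)^2 for g = x1*x2*x3:
# gamma = dg/dx1 = x2*x3, and X ~ N(mu, 1) gives E(X^2) = mu^2 + 1.
rng = np.random.default_rng(1)
results = {}
for mu in (0.5, 2.0):
    x2, x3 = rng.normal(mu, 1.0, size=(2, 10**6))
    results[mu] = np.mean((x2 * x3) ** 2)
    print(mu, round(results[mu], 2), (mu**2 + 1) ** 2)  # MC estimate vs exact
```

The estimates land close to $1.5625$ and $25$, matching $(\mu^2+1)^2$ for $\mu=0.5$ and $\mu=2$.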
Note that the applicability of Theorem \ref{thm:lipschitzonesample} and its variants, Theorems \ref{thm:clipschitzonesample}--\ref{thm:linearcombination} in the Supplementary Material, is broader than it appears. To see this, let $\phi:\mathbb{R}\rightarrow \mathbb{R}$ be a one-to-one mapping. Denote by $F_{\phi}$ the distribution of the transformed random variable $Z=\phi(X)$, which leads to the following representation:
$$g_{\phi}(z_1,\ldots,z_d):=g(\phi^{-1}(z_1),\ldots,\phi^{-1}(z_d))=g(x_1,\ldots,x_d).$$
If ($g_{\phi},F_{\phi}$) satisfies the conditions in these theorems, corresponding results also hold for the pair ($g,F$). For example, suppose $g(x_1,x_2)=x_1^{-a}x_2^{-a}$ and $F$ is a Pareto distribution with shape and scale parameters $a$ and $b$, respectively. The Pareto distribution is neither light-tailed nor bounded, and hence violates the conditions in Theorem \ref{thm:lipschitzonesample}. By taking $\phi(x)=1-(b/x)^a$, we have $\phi(X)\sim U(0,1)$. It can be verified that the conditions in Theorem \ref{thm:lipschitzonesample} are satisfied by ($g_{\phi},F_{\phi}$).
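A quick numerical check of this transformation (our own sketch; the parameter values $a=2$, $b=1$ are illustrative) confirms that $\phi(X)=1-(b/X)^a$, the Pareto CDF, maps Pareto draws to $U(0,1)$:

```python
import numpy as np

# Numerical check (ours; a = 2, b = 1 are illustrative) that
# phi(x) = 1 - (b/x)^a, the Pareto CDF, maps Pareto draws to U(0, 1).
rng = np.random.default_rng(3)
a, b = 2.0, 1.0
X = b * (1 - rng.random(10**5)) ** (-1 / a)   # inverse-CDF Pareto sampling
Z = 1 - (b / X) ** a
print(round(Z.mean(), 3), round(Z.var(), 3))  # near 0.5 and 1/12
```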
\noindent{\bf 2.2. Multi-sample U-statistics}
For $k=1,\ldots,K$, let
$X_{1}^{(k)},\ldots,X_{n_k}^{(k)}$ be a random sample of size $n_k$ from the distribution $F_k$. The UMVUE of
$$\Theta=\int g(x^{(1)}_1,\ldots,x^{(1)}_{d_1},\cdots,x^{(K)}_1,\ldots,x^{(K)}_{d_K})dF_1(x^{(1)}_1)\ldots dF_K(x^{(K)}_{d_K})$$
is given by the generalized U-statistic
$$U_0=\prod_{k=1}^K\binom{n_k}{d_k}^{-1}\sum_{\bm\eta\in\prod_{k=1}^KS_{n_k,d_k}}g(\mathcal{X}_{\bm\eta}),$$
$$S_{n_k,d_k}=\{{\bm \eta}_k=(\eta_{k,1},\ldots,\eta_{k,d_k}): 1\leq \eta_{k,1}<\eta_{k,2}<\ldots<\eta_{k,d_k}\leq n_k\},$$
$$\mathcal{X}_{{\bm\eta}}=(\mathcal{X}_{\bm\eta_1},\ldots,\mathcal{X}_{\bm\eta_K})=(X^{(1)}_{{ \eta}_{1,1}},\ldots,X^{(1)}_{{ \eta}_{1,d_1}},\cdots,X^{(K)}_{{ \eta}_{K,1}},\ldots,X^{(K)}_{{ \eta}_{K,d_K}}).$$
The $d(=\sum_{k=1}^Kd_k)$-dimensional kernel function $g$ is symmetric about any $d_k$-dimensional sub-input $\{x^{(k)}_1,\ldots,x^{(k)}_{d_k}\}$. The generalized U-statistic reduces to the traditional U-statistic when $K=1$.
An incomplete generalized U-statistic is given by
\begin{equation}\label{eq:incompleteu}
U=\frac{1}{m}\sum_{{\bm\eta}\in S}g(\mathcal{X}_{\bm\eta}),
\end{equation}
where $S\subset\prod_{k=1}^KS_{n_k,d_k}$ and $m=|S|$. We construct the multi-sample ICUDO as follows. For ease of illustration, we assume that each $n_k$ is a multiple of $L$.
\begin{itemize}
\item[Step 1.] Let $A_0$ be an $OA(m,d,L,t)$. Adopt random level permutations $\{\pi_1,\ldots,\pi_d\}$ of columns of $A_0$ independently. Specifically, for each $l\in\mathcal{Z}_L$, change all elements $l$ in the $j$th column of $A_0$ to $\pi_j(l)$. The $m$ rows of the resulting array $A$ are denoted by $\{\bm a^i=(\bm a^{i}_1,\ldots,\bm a^{i}_K):i=1,\ldots,m;\bm a^{i}_k\in\mathcal{Z}_L^{d_k},k=1,\ldots,K\}$.
\item[Step 2.] For each $k=1,\ldots,K$, create the partition $\mathcal{Z}_{n_k}=\bigcup_{l=1}^LG^{(k)}_l$, such that $|G^{(k)}_l|=n_kL^{-1}$ for $l\in\mathcal{Z}_L$, and $X_{i_1}^{(k)}\leq X_{i_2}^{(k)}$ for any $i_1\in G_{l_1}^{(k)}$, $i_2\in G_{l_2}^{(k)}$, with $l_1<l_2$. For any ${\bm a}=({\bm a}_1,\ldots,{\bm a}_K)$ with ${\bm a}_k=(a_{k,1},\ldots,a_{k,d_k})\in\mathcal{Z}_L^{d_k}$, define
\begin{equation}\label{eq:Gv}
\mathcal{G}_{{\bm a}}=\prod_{k=1}^K\prod_{j=1}^{d_k}G^{(k)}_{a_{k,j}}.
\end{equation}
\item[Step 3.] For $i=1,\ldots,m$, independently draw an element ${\bm \eta}^i$ uniformly from $\mathcal{G}_{{\bm a}^i}$, where $\bm a^i$ is the $i$th row of $A$:
\begin{equation}\label{uoamulti}
U_{oa}=\frac{1}{m}\sum_{i=1}^mg(\mathcal{X}_{\bm\eta^i}).
\end{equation}
\end{itemize}
An example is given in the Supplementary Material.
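Specialized to $K=1$, with the full factorial design serving as the strength-$t=d$ orthogonal array $OA(L^d,d,L,d)$ with $\lambda=1$, the three steps above can be sketched as follows (the function names are ours; this is an illustration under these simplifying choices, not the authors' implementation):

```python
import numpy as np

# One-sample ICUDO sketch (K = 1), using the full factorial design as the
# strength-t = d orthogonal array OA(L^d, d, L, d) with lambda = 1.
# Helper names (icudo, make_full_factorial_oa) are ours, not from the paper.

rng = np.random.default_rng(0)

def make_full_factorial_oa(L, d):
    """All L^d level combinations: an OA(L^d, d, L, d)."""
    grids = np.meshgrid(*[np.arange(L)] * d, indexing='ij')
    return np.stack([g.ravel() for g in grids], axis=1)   # shape (L^d, d)

def icudo(x, g, L, d):
    """Steps 1-3 of the ICUDO algorithm with a sorted equal-size partition."""
    n = len(x)
    assert n % L == 0, "for simplicity assume L divides n"
    A = make_full_factorial_oa(L, d)
    # Step 1: independent random level permutations of each column.
    for j in range(d):
        A[:, j] = rng.permutation(L)[A[:, j]]
    # Step 2: partition the indices into L ordered, equal-sized groups.
    order = np.argsort(x)
    groups = order.reshape(L, n // L)          # groups[l] = indices in G_l
    # Step 3: draw one unit uniformly from each selected group, then average.
    vals = []
    for row in A:
        idx = [rng.choice(groups[l]) for l in row]
        vals.append(g(*x[np.array(idx)]))
    return np.mean(vals)

# Toy run: kernel g(x1, x2, x3) = x1*x2*x3 estimates mu^3.
x = rng.normal(loc=2.0, scale=1.0, size=1000)
est = icudo(x, lambda u, v, w: u * v * w, L=10, d=3)   # m = 10^3 evaluations
print(round(est, 2))   # should be near mu^3 = 8, up to sampling noise
```

With $n=1000$ and $m=L^d=1000$, the estimate tracks the target $\mu^3$ closely, consistent with the $R(d)=0$ case of Theorem \ref{thm:noassumptiononesample}.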
For any $j_{k,1},\ldots,j_{k,d_k}\in \mathcal{Z}_{d_k}$ and $k\in\mathcal{Z}_K$, assume
$$Eg^2\left(X^{(1)}_{j_{1,1}},\ldots,X^{(1)}_{j_{1,d_1}},\cdots,X^{(K)}_{j_{K,1}},\ldots,X^{(K)}_{j_{K,d_K}}\right)<\infty.$$
Let $n_{\min}=\min\{n_1,\ldots,n_K\}$ and $n_{\max}=\max\{n_1,\ldots,n_K\}$.
Here, we assume $n_{\min}\asymp n_{\max}$ and $L\prec n_{\min}$.
Let ${\bm u}=({\bm u}_1,\ldots,{\bm u}_K)$, where ${\bm u}_k\subseteq \mathcal{Z}_{d_k}$. Define $dF_{{\bm u}}=\prod_{k=1}^K\prod_{j\in{\bm u}_k}dF_k(x_{j}^{(k)})$. For any ${\bm u}$ and ${\bm x}=(x^{(1)}_1,\ldots,x^{(1)}_{d_1},\cdots,x^{(K)}_1,\ldots,$ $x^{(K)}_{d_K})$, we recursively define
\begin{equation}\label{eq:gufunction}
g_{\bm u}({\bm x})=\int g({\bm x})\,dF_{{\bm u}^c},\qquad h_{\bm u}({\bm x})=g_{\bm u}({\bm x})-\sum_{{\bm v}\subset{\bm u}}h_{\bm v}({\bm x}),\notag
\end{equation}
where $\bm u^c=(\bm u_1^c,\ldots,\bm u_K^c)=(\mathcal{Z}_{d_1}\setminus\bm u_1,\ldots,\mathcal{Z}_{d_K}\setminus\bm u_K)$, $g_{\emptyset}({\bm x})=\Theta$ and $h_{\emptyset}({\bm x})=0$, ${\bm v}=({\bm v}_1,\ldots,{\bm v}_K)$, and ${\bm v}\subset{\bm u}$ means ${\bm v}_k\subseteq {\bm u}_k$ (${\bm v}\neq{\bm u}$).
For $\bm u$, we can define
$\sigma_{{\bm u}}^2={\rm Var}(g_{\bm u})~{\rm and}~\delta_{{\bm u}}^2={\rm Var}(h_{\bm u})$.
The MSE of the complete generalized U-statistic is given by \cite{sen:1974} as
\begin{equation}\label{eq:generalvar}
{\rm MSE}(U_0)=\prod_{k=1}^K\binom{n_k}{d_k}^{-1}\sum_{\bm u=(\bm u_1,\ldots,\bm u_K)}\left\{\prod_{k=1}^K\binom{d_k}{|\bm u_k|}\binom{n_k-d_k}{d_k-|\bm u_k|}\right\}\sigma_{\bm u}^2.\notag
\end{equation}
Let $|\bm u|=\sum_{k=1}^K|\bm u_k|$. The generalized U-statistic and the kernel function are called {\it order-$q$ degenerate} if $\sigma_{\bm u}^2=\sum_{\bm v\subseteq\bm u}\delta_{\bm v}^2=0$, for all $|\bm u|\leq q$, and there exists $\bm u'$ such that $\sigma_{\bm u'}^2>0$ and $|\bm u'|=q+1$. We have ${\rm MSE}(U_0)=O(n^{-(q+1)})$ in this case. For the non-degenerate case $q=0$, we have ${\rm MSE}(U_0)\asymp n^{-1}$. With a slight abuse of notation, let $\sigma_{(j_1,\ldots,j_K)}=\sigma_{\bm u}$ and $\delta_{(j_1,\ldots,j_K)}=\delta_{\bm u}$, for $\bm u=(\bm u_1,\ldots,\bm u_K)$, with $|\bm u_k|=j_k$, $k=1,\ldots,K$. For the ICUR, we have
\begin{equation}\label{eq:generalchange}
{\rm MSE}(U_{\rm RND})={\rm MSE}(U_0)+\frac{R(0)}{m}+O\left(\frac{1}{mn_{\min}}\right),\notag
\end{equation}
$$R(t)=\sum_{\bm u:|\bm u|>t}\delta_{\bm u}^2=\sum_{j_1=0}^{d_1}\cdots\sum_{j_K=0}^{d_K}I(j_1+\cdots+j_K>t)\prod_{k=1}^K\binom{d_k}{j_k}\delta_{(j_1,\ldots,j_K)}^2.$$
When $K=1$, the expression above reduces to the one-sample $R(t)$ in (\ref{eq:rt}); the first form, as a single sum over $\bm u$, gives a more parsimonious presentation in the multi-sample case. The corresponding properties of $U_{oa}$ are given as follows.
\begin{theorem}\label{thm:noassumption}
For $U_{oa}$ based on $OA(m,d,L,t)$, for any pair of $(g,F)$, we have
\begin{eqnarray}\label{eq:noassumptionmulti}
{\rm MSE}(U_{oa})={\rm MSE}(U_0)+\frac{R(t)}{m}+o\left(\frac{1}{m}\right)+O\left(\frac{1}{n_{\min}^2}\right).
\end{eqnarray}
\end{theorem}
Theorem \ref{thm:noassumption} is basically a multi-sample version of Theorem \ref{thm:noassumptiononesample}, and its result can be strengthened in the same way. The details are omitted here to conserve space. We conclude this section with a machine learning example.
\begin{example} \label{example:rankingloss}(Ranking measure, \cite{chen:2009}). The ranking measure is an important topic in machine learning research. In the commonly used pairwise approach, the loss for a given classifier score function $f$ is given by
$$L(f)=\sum_{1\leq i<j\leq K}\sum_{x\in G_i,y\in G_j}\psi(f(y)-f(x)),$$
where $G_1,\ldots,G_K$ are $K$ groups ranked in ascending order. Here, $\psi$ can take the form of
\begin{itemize}
\item[$(i)$] the hinge function {\rm$\psi(z)=(1-z)_{+}$}, or
\item[$(ii)$] the logistic function {\rm$\psi(z)=\log(1+\exp(-z))$},
\end{itemize}
for the Ranking SVM and RankNet methods, respectively. In the simulation, we set $K=2$, that is, the two-sample case, $|G_1|=|G_2|=10^4$, $f(G_1)\sim N(0,4)$, and $f(G_2)\sim N(5,4)$.
Figure \ref{fig:multid} reveals the high efficiency of $\tilde{U}_{oa}$ compared with that of $U_{\rm RND}$.
\end{example}
\begin{figure}[ht]
\centering
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=6.7cm]{new1.pdf}
\end{minipage}
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=6.7cm]{new2.pdf}
\end{minipage}
\caption{Comparison of efficiencies of $\tilde{U}_{oa}$ and $U_{\rm RND}$ with respect to subsample size $m$ for loss function ($i$) (left) and ($ii$) (right).}
\label{fig:multid}
\end{figure}
\vspace{0.5cm}
\setcounter{section}{3}
\setcounter{equation}{2}
\lhead[\footnotesize\thepage\fancyplain{}\leftmark]{}\rhead[]{\fancyplain{}\rightmark\footnotesize\thepage}
\noindent{\large\bf 3. Debiased ICUDO for degenerate cases}
Recall that the ICUDO procedure is biased owing to the inclusion of combinations with duplicate units. The squared bias is $O(n^{-2})$ for any pair $(g,F)$, which is negligible compared with ${\rm Var}(U_0)\asymp n^{-1}$ in the non-degenerate case, but it is no longer negligible in the degenerate case. In this section, we propose a debiased version of the ICUDO.
\if0{
\begin{example}\label{example:debias}
We work on the following two kernel functions in \cite{lee:1990}.
\begin{itemize}
\item[{$(i)$}] $g(x_1,x_2)=x_1x_2$, the ICUDO is based on an $OA(m,2,\sqrt{m},2)$ with $n=10^4$;
\item[{$(ii)$}] $g(x_1,x_2,x_3)=x_1x_2x_3$, the ICUDO is based on an $OA(m,3,m^{1/3},3)$ with $n=10^3$.
\end{itemize}
The data is simulated from the standard normal distribution so that the kernel functions in ($i$) and ($ii$) are order-1 and order-2 degenerate, respectively. See Table \ref{tab:debias}.
\end{example}
\begin{table}[h]
\begin{center}
\caption{Result of Example \ref{example:debias}.} \label{tab:debias}
\begin{tabular}{|c|ccc|c|cccc|}\hline
\multirow{2}{*}{$m/\binom{n}{2}$}{}&
\multicolumn{3}{c|}{$g(x_1,x_2)=x_1x_2$}&\multirow{2}{*}{$m/\binom{n}{3}$}{}&\multicolumn{4}{c|}{$g(x_1,x_2,x_3)=x_1x_2x_3$}\\
&$U_{\rm RND}$&$U_{oa}$&$\tilde{U}_{oa}$&&$U_{\rm RND}$&$U_{\rm BIBD}$&$U_{oa}$&$\tilde{U}_{oa}$\\\hline
$2\times 10^{-4}$ &0.027\%&3.700\% &3.817\%&$6\times 10^{-5}$& 0.006\% &-- &{0.125}\%&0.123\% \\
$1\times 10^{-3}$ &0.097\%&39.52\% &39.68\%&$3\times 10^{-4}$&0.034\% &--&1.217\%&{1.302}\% \\
$2\times 10^{-3}$ &0.188\%&59.03\% &68.99\%&$1\times 10^{-3}$&0.080\%&0.103\% &5.217\% &{5.655}\% \\
$4\times 10^{-3}$ &0.376\%&64.51\%&80.01\%&$3\times 10^{-3}$&0.255\% &--&10.03\% &{17.60}\%\\
$6\times 10^{-3}$ &0.572\%&66.68\%&90.90\%&$6\times 10^{-3}$&0.559\% &--&14.53\%&{28.42}\%\\
$1\times 10^{-2}$ &0.923\%&66.67\%&99.95\%&$5\times 10^{-2}$&3.033\%&-- &33.67\%&{59.94}\%\\\hline
\end{tabular}
\end{center}
\end{table}
For the kernel function $g(x_1,x_2)=x_1x_2$, the efficiency of ICUR stays at a very low level. The original ICUDO ($U_{oa}$) is also struggling at the efficiency of roughly $2/3$. It is because the bias square is of the same order $O(n^{-2})$ as the variance of $U_0$ for this order-1 degenerate kernel function. Note that the bias does not depend on the value of $m$ and hence there is no hope for $U_{oa}$ to be eventually asymptotically efficient. See Lemma 1 in Appendix for details. In contrast, the debiased ICUDO ($\tilde{U}_{oa}$) is shown to be asymptotically efficient. Now going to the kernel function $g(x_1,x_2,x_3)=x_1x_2x_3$, all three methods lowered their efficiencies. It is due to the order-2 degeneration, where the variance of $U_0$ is lowered to $O(n^{-3})$. Hence it is more challenging to compete with. But we can still see that the debiased ICUDO is lot more efficient than the original one. In fact, $\tilde{U}_{oa}$ is asymptotically efficient when $m\succ n^{9/4}$. The proof is omitted here to save the space. We have also added the BIBD method into the comparison. It only exists for the case of $m/\binom{n}{3}=10^{-3}$, where BIBD is slightly more efficient than ICUR but substantially less efficient than ICUDO methods. Recall that the BIBD method has been shown to yield the minimum variance for given $m$ by Lee (1982). ICUDO is out of the scope of comparison considered in Lee (1982). }\fi
We provide details for the multi-sample case; the one-sample case is obtained by taking $K=1$. To proceed, let $S_0^*=\{({\bm \eta}_1,\ldots,{\bm \eta}_K): {\bm \eta}_k=(\eta_{k,1},\ldots,\eta_{k,d_k})\in\mathcal{Z}_{n_k}^{d_k}, \eta_{k,j_1}\neq \eta_{k,j_2} {\rm~for~any~} j_1\neq j_2 \}$. The debiased ICUDO is constructed in the same way as the original, except that step 3 is changed as follows:
\begin{itemize}
\item[Step 3$'$.] For $i=1,\ldots,m$, independently draw ${\bm \eta}^i$ from the uniform distribution on $\mathcal{G}_{{\bm a}^i}\cap S_0^*$.
Adopting (\ref{eq:incompleteu}) with $S_{oa}^*=\{{\bm \eta}^1,\ldots,{\bm \eta}^m\}$, we have the debiased ICUDO as
\begin{equation}\label{uoade}
\tilde{U}_{oa}=\frac{1}{m}\sum_{i=1}^m\omega_{\bm{\eta}^i}g(X_{\bm \eta^{i}}),
\end{equation}
where $\omega_{\bm \eta^i}=L^d|\mathcal{G}_{{\bm a}^i}\cap S_0^*|/|S_0^*|.$
\end{itemize}
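For the one-sample case with equal group sizes $n/L$, the weight $\omega_{\bm\eta^i}=L^d|\mathcal{G}_{{\bm a}^i}\cap S_0^*|/|S_0^*|$ admits a closed form that depends only on the level multiplicities of the OA row: $|S_0^*|=n(n-1)\cdots(n-d+1)$, and within a group of size $s=n/L$ visited $c$ times one can choose $s(s-1)\cdots(s-c+1)$ duplicate-free units. A sketch (our own, with illustrative helper names):

```python
from math import prod
from collections import Counter

# Debiasing weight omega for the one-sample case (K = 1), assuming equal
# group sizes s = n/L.  |S_0^*| = n(n-1)...(n-d+1), and |G_a ∩ S_0^*| is a
# product over levels: a level repeated c times in row a contributes
# s(s-1)...(s-c+1) duplicate-free choices from its group.

def omega(a, n, L):
    d = len(a)
    s = n // L                                    # group size
    falling_n = prod(n - j for j in range(d))     # |S_0^*|
    cnt = Counter(a)                              # level multiplicities in row a
    g_star = prod(prod(s - j for j in range(c)) for c in cnt.values())
    return L**d * g_star / falling_n

# A row with all-distinct levels is weighted slightly above 1; a row that
# repeats a level is down-weighted, since fewer duplicate-free tuples exist.
print(omega((0, 1, 2), n=30, L=3))
print(omega((0, 0, 1), n=30, L=3))
```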
\begin{theorem}\label{thm:debias}
$\tilde{U}_{oa}$ based on $OA(m,d,L,t)$ is an unbiased estimator, and
\begin{eqnarray}\label{eq:debiased}
{\rm MSE}(\tilde{U}_{oa})={\rm MSE}(U_0)+\frac{R(t)}{m}+o\left(\frac{1}{m}\right).
\end{eqnarray}
\end{theorem}
Theorem \ref{thm:debias} is analogous to Theorems \ref{thm:noassumptiononesample} and \ref{thm:noassumption} for the one-sample and multi-sample cases, respectively, except that the squared-bias terms $O(n^{-2})$ and $O(n_{\min}^{-2})$ are eliminated. Now, for an order-$q$ degenerate U-statistic, the debiased ICUDO can be asymptotically efficient with $m\asymp n^{q+1}$, while the ICUR requires $m\succ n^{q+1}$. Moreover, we can allow $m$ to grow more slowly for the debiased ICUDO under some mild conditions on $(g,F)$. For example, when $d=2$, $q=1$, and the conditions of Theorem \ref{thm:lipschitzonesample} hold, the debiased ICUDO only needs $m\succ n$ to be asymptotically efficient, while the ICUR requires $m\succ n^2$. For a general order $q$ of degeneracy, we have $m^*_{oa}=(m^*_{\rm RND})^{\frac{d}{d+1}}$, for all $d$, under the conditions in Theorem \ref{thm:lipschitzonesample}. Here, $m^*_{oa}$ and $m^*_{\rm RND}$ denote the minimum $m$ required for the ICUDO and ICUR, respectively, to be asymptotically efficient.
We conclude this section with the following multi-sample example. The kernel function is degenerate, and hence favors the debiased ICUDO. However, the highest-order $\delta^2$-value vanishes, which encourages an OA of lower strength. The comparison is made between the ICUR and different versions of the ICUDO.
\begin{example}\label{example:final}
Let $K=2$, $d_1=d_2=2$, $d=4$, and
$$g(x^{(1)}_1,x^{(1)}_2,x^{(2)}_1,x^{(2)}_2)=I(x^{(1)}_1<x^{(2)}_1,x^{(1)}_2<x^{(2)}_1)+I(x^{(2)}_1<x^{(1)}_1,x^{(2)}_2<x^{(1)}_1).$$
The construction of $U_{oa}$ and the debiased $\tilde{U}_{oa}$ is based on $OA(m,4,m^{1/3},3)$ and $OA(m,4,m^{1/4},4)$.
For continuous distributions $F_1$ and $F_2$, it can be verified that
$$E g(X^{(1)}_1,X^{(1)}_2,X^{(2)}_1,X^{(2)}_2)=\frac{2}{3}+\int(F_1(x)-F_2(x))^2d(F_1(x)+F_2(x))/2,$$
which indicates the similarity of $F_1$ and $F_2$. The null hypothesis of $F_1=F_2$ is rejected when the U-statistic is significantly larger than $2/3$. Note that the corresponding U-statistic is degenerate under the null hypothesis. See Table \ref{tab:final} for the simulation results when both samples are simulated from $N(0,1)$ with sample sizes $n_1=n_2=10^3$.
\end{example}
Note that in the $g$ function of Example \ref{example:final}, each of the two indicator terms is a function of only three of the four inputs. Thus $R(3)=0$, and we can expect $t=3$ to work better than $t=4$, which is verified by the results in Table \ref{tab:final}.
\begin{table}[h]
\begin{center}\tabcolsep 6.6pt
\caption{Result of Example \ref{example:final}.}\label{tab:final}\tabcolsep 5pt
\begin{tabular}{|c|cccccccc|}\hline
$m/\binom{n}{2}$ &0.002 &0.01 &0.02 &0.04 &0.06 &0.1 &0.14 &0.2 \\\hline
${\rm Eff}(\tilde{U}_{oa_3})$ &0.836\% &{10.9\%} &{\bf 15.6\%} &{\bf 35.9\%} &{\bf 44.9\%} &{\bf 56.9\%} &{\bf 75.1\%} &{\bf 94.1\%} \\
\hline
${\rm Eff}(U_{oa_3})$ &{0.861\%} &9.50\% &12.9\% &25.2\% &28.3\% &29.8\% &36.3\% &39.0\% \\
\hline
${\rm Eff}(U_{oa_4})$ &{ 0.450\%} &4.96\% &6.78\% &10.6\% &10.7\% &11.9\% &14.5\% &15.6\% \\
\hline
${\rm Eff}(U_{\rm RND})$ &0.179\% &0.701\% &1.50\% &2.93\% &4.19\% &7.84\% &10.9\% &13.1\% \\
\hline
\end{tabular}
\end{center}
\end{table}
\setcounter{section}{4}
\setcounter{equation}{3}
\lhead[\footnotesize\thepage\fancyplain{}\leftmark]{}\rhead[]{\fancyplain{}\rightmark\footnotesize\thepage}
\noindent{\large\bf 4. ICUDO for multi-dimensional data}
Note that step 2 of the ICUDO algorithm in Section 2 does not apply to multi-dimensional data because it relies on ordering the univariate data. To remedy this, we adopt a clustering algorithm to divide the data into homogeneous groups. In this regard, the clustered group sizes may vary. This will necessitate a re-weighting procedure similar to the debiasing step in Section 3. To save space, we focus on the debiased ICUDO and adopt the notation of the multi-sample U-statistics in the study of multi-dimensional data. For $k=1,\ldots,K$, let $X_{1}^{(k)},\ldots,X_{n_k}^{(k)}$ be a random sample of size $n_k$ from the multi-dimensional distribution $F_k$. The algorithm is given as follows.
\begin{itemize}
\item[Step 1.] Let $A_0$ be an $OA(m,d,L,t)$. Apply independent random level permutations $\{\pi_1,\ldots,\pi_d\}$ to the columns of $A_0$. Specifically, for $l\in\mathcal{Z}_L$, change all elements $l$ in the $j$th column of $A_0$ to $\pi_j(l)$. The $m$ rows of the resulting array $A$ are denoted by $\{\bm a^i=(\bm a^{i}_1,\ldots,\bm a^{i}_K):i=1,\ldots,m;\bm a^{i}_k\in\mathcal{Z}_L^{d_k},k=1,\ldots,K\}$.
\item[Step 2.] Let $\mathcal{P}^{(k)}=\{G_1^{(k)},\ldots,G_L^{(k)}\}$ denote an $L$-group partition from the clustering of $\{X_{1}^{(k)},\ldots,X_{n_k}^{(k)}\}$. For any ${\bm a}=({\bm a}_1,\ldots,{\bm a}_K)$, with ${\bm a}_k=(a_{k,1},\ldots,a_{k,d_k})\in\mathcal{Z}_L^{d_k}$, define
\begin{equation}\label{eq:Gv}
\mathcal{G}_{{\bm a}}=\prod_{k=1}^K\prod_{j=1}^{d_k}G^{(k)}_{a_{k,j}}.
\end{equation}
\item[Step 3.] For $i=1,\ldots,m$, independently draw an element ${\bm \eta}^i$ uniformly from $\mathcal{G}_{{\bm a}^i}$, where $\bm a^i$ is the $i$th row of $A$. Let $\omega_{\bm \eta^i}=L^d|\mathcal{G}_{{\bm a}^i}\cap S_0^*|/|S_0^*|$ and compute
\begin{equation}\label{uoamulti}
\tilde{U}_{oa}=\frac{1}{m}\sum_{i=1}^m\omega_{\bm \eta^i}g(X_{\bm\eta^i}).
\end{equation}
\end{itemize}
An example of the construction is given in the Supplementary Material.
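To make the three steps above concrete, here is a minimal pure-Python sketch for the special case $K=1$ (one sample) with an order-2 kernel. It is only an illustration under explicit assumptions: a full-factorial array plays the role of the (trivially full-strength) OA, an equal-size cut along the coordinate sum stands in for a real clustering algorithm such as k-means, and the kernel, sample size, and seed are arbitrary choices, not the paper's settings.

```python
import itertools, math, random

random.seed(0)

# Illustrative assumptions: n points in R^2, order-2 kernel g = Euclidean
# distance, L clusters, and t = d so a full factorial serves as the OA.
n, L, d = 60, 3, 2
X = [(random.gauss(0, 1), random.gauss(0, 2)) for _ in range(n)]

def g(x, y):                           # kernel of the U-statistic
    return math.dist(x, y)

# Step 1: the full factorial is an OA(L^d, d, L, d); apply an independent
# random level permutation to each column.
A0 = list(itertools.product(range(L), repeat=d))
perms = [random.sample(range(L), L) for _ in range(d)]
A = [tuple(perms[j][a[j]] for j in range(d)) for a in A0]

# Step 2: stand-in "clustering" -- sort by coordinate sum and cut into L
# equal-size groups (a real implementation would use e.g. k-means).
order = sorted(range(n), key=lambda i: sum(X[i]))
groups = [order[l * n // L:(l + 1) * n // L] for l in range(L)]

# Step 3: draw one index tuple per row of A, reweight by omega, average.
S0 = n * (n - 1)                       # ordered pairs with i != j
est = 0.0
for a in A:
    while True:                        # rejection keeps the indices distinct
        eta = tuple(random.choice(groups[aj]) for aj in a)
        if len(set(eta)) == d:
            break
    sizes = [len(groups[aj]) for aj in a]
    grid = math.prod(sizes) - (sizes[0] if a[0] == a[1] else 0)
    w = (L ** d) * grid / S0           # omega_eta of Step 3
    est += w * g(X[eta[0]], X[eta[1]])
est /= len(A)                          # the debiased estimate
```

The rejection step and the diagonal correction in `grid` account for the constraint $i\neq j$ in $S_0^*$; for $K>1$ samples the groups would simply be built per sample.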
\begin{theorem}\label{thm:multid1} Suppose $\omega_{\bm \eta^i}\rightarrow 1$ uniformly as $n,L\rightarrow\infty$.
For $\tilde{U}_{oa}$ based on $OA(m,d,L,t)$, we have
\begin{eqnarray}\label{eq:multid}
{\rm MSE}(\tilde{U}_{oa})={\rm MSE}(U_0)+\frac{R(t)}{m}+o\left(\frac{1}{m}\right).
\end{eqnarray}
\end{theorem}
The $R(t)$ in (\ref{eq:multid}) is given by (\ref{eq:rt}), except that the univariate distribution $F$ is replaced by a multi-dimensional distribution. The assumption in Theorem \ref{thm:multid1} holds naturally if we enforce balanced group sizes in the clustering process. By applying the full-strength $t=d$ OA to Theorem \ref{thm:multid1}, we obtain the following corollary.
\begin{corollary}\label{thm:multid2}
For $\tilde{U}_{oa}$ based on $OA(m,d,L,d)$, for any pair of $(g, F)$, we have
\begin{eqnarray}\label{eq:noassumptionmultid}
{\rm MSE}(\tilde{U}_{oa})={\rm MSE}(U_0)+o(m^{-1}).
\end{eqnarray}
\end{corollary}
The choice of $t$ has been discussed and is illustrated in Examples \ref{example:tradeoff} and \ref{example:final}. We do not compare different values of $t$ in the following examples because $d=2$ always holds, so that $t\leq 2$. We always take $t=2$, $L=10,20,\ldots,100$, and $m=L^t$.
\begin{example} \label{example:kendall}(Kendall's tau, \cite{chen:2018}). The kernel function is $h((x_1, y_1),(x_2, y_2)) = 2 I(x_1 < x_2, y_1 < y_2) + 2 I(x_2 < x_1, y_2 < y_1)-1$. For simplicity, we assume that $(X,Y)$ follows a normal distribution with $\mu=(0,0)$ and $\Sigma={\rm diag}(3,1)$. Set $n=10^4$. The MSEs when estimating the Kendall correlation using $U_{\rm RND}$ and $\tilde{U}_{oa}$ are shown in Table \ref{tab:kendall}. As a reference, ${\rm MSE}(U_0)=8.97\times 10^{-5}$.
\end{example}
\begin{table}[h]
\begin{center}\tabcolsep 4.5pt
\caption{Result of Example \ref{example:kendall}.}\label{tab:kendall}\tabcolsep 3.5pt
\begin{tabular}{|c|cccccccccc|}\hline
$m$ &100 &400 &900 &1600 &2500 &3600 &4900 &6400 &8100 &10000 \\\hline
${\rm MSE}({U}_{\rm RND})$ &.765 &.191 &.0903 &.0515 &.0260 &.0195 &.0167 &.0137 &.0098 &.0089 \\
\hline
${\rm MSE}(\tilde{U}_{oa})$ &.075 &.0096 &.0032 &.0015 &.00063 &.00035 &.00023 &.00014 &.00011 &.00009 \\
\hline
\end{tabular}
\end{center}
\end{table}
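For intuition, the Kendall's tau kernel and a plain random incomplete estimate $U_{\rm RND}$ can be sketched in a few lines of Python. The sample size, $m$, and seed below are scaled-down illustrative assumptions, not the settings behind Table \ref{tab:kendall}, and the complete $U_0$ is computed by brute force only because $n$ is small here.

```python
import random

random.seed(1)

# Kendall's tau kernel; with continuous data (no ties) it takes values +/-1.
def h(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return 2 * ((x1 < x2 and y1 < y2) or (x2 < x1 and y2 < y1)) - 1

# Illustrative data: independent coordinates, so the true tau is 0.
n = 500
data = [(random.gauss(0, 3 ** 0.5), random.gauss(0, 1)) for _ in range(n)]

# Complete U-statistic over all n(n-1)/2 pairs (feasible at this small n).
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
U0 = sum(h(data[i], data[j]) for i, j in pairs) / len(pairs)

# Simple random incomplete U-statistic U_RND built from m random pairs.
m = 400
U_rnd = sum(h(data[i], data[j]) for i, j in random.sample(pairs, m)) / m
```

The ICUDO would replace the uniform draw of `pairs` by the OA-structured sampling of Section 2, which is exactly where the efficiency gain in Table \ref{tab:kendall} comes from.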
\begin{example} \label{example:monotonicity}(Testing stochastic monotonicity, \cite{lee:2009}). Let $(X,Y)$ be a real-valued random vector, and denote by $F_{Y|X}(y|x)$ the conditional distribution function of $Y$, given $X$. Consider the problem of testing the stochastic monotonicity hypothesis
$$H_0: F_{Y|X}(y|x)\leq F_{Y|X}(y|x'), \forall y\in R ~{\rm and ~whenever}~ x\geq x'.$$
This essentially tests whether an increase in $X$ induces an increase in $Y$ (e.g., income vs. expenditure in a household). \cite{lee:2009} proposed the following test statistic:
\begin{eqnarray}\label{eqn:309}
U_n(x,x')=\frac{1}{n(n-1)}\sum_{1\leq i\neq j\leq n}(I\{Y_i\leq x'\}-I\{Y_j\leq x'\}){\rm sign}(X_i-X_j)K(x-X_i)K(x-X_j),~
\end{eqnarray}
where $K(x)=0.75(1-x^2)$. We simulate $(X,Y)$ from a normal distribution with $\mu=(0,0)$ and $\Sigma={\rm diag}(3,1)$, and calculate (\ref{eqn:309}) at $(x,x')=(0,0)$. For $n=10^4$, the comparison between $\tilde{U}_{oa}$ and $U_{\rm RND}$ is given in Table \ref{tab:mono}. As a reference, we have ${\rm MSE}(U_0)=2.572$.
\end{example}
\begin{table}[h]
\begin{center}\tabcolsep 5.8pt
\caption{Result of Example \ref{example:monotonicity}.}\label{tab:mono}\tabcolsep 4.7pt
\begin{tabular}{|c|cccccccccc|}\hline
$m$ &100 &400 &900 &1600 &2500 &3600 &4900 &6400 &8100 &10000 \\\hline
${\rm MSE}({U}_{\rm RND})$ &302.7 &69.01 &38.01 &17.45 &12.86 &8.613 &7.438 &6.273 &4.886 &4.327 \\
\hline
${\rm MSE}(\tilde{U}_{oa})$ &33.18 &15.73 &8.848 &4.252 &3.524 &3.168 &2.732 &2.662 &2.630 &2.602 \\
\hline
\end{tabular}
\end{center}
\end{table}
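A hedged sketch of the summand of the test statistic above and its random incomplete average follows. Two assumptions are made explicit in the code: the kernel $K$ is truncated to $|u|\leq 1$, as is standard for the Epanechnikov weight, and $n$, $m$, and the seed are small illustrative values rather than the settings of Table \ref{tab:mono}.

```python
import random

random.seed(3)

def K(u):
    # Epanechnikov-type weight; the |u| <= 1 truncation is an assumption here.
    return 0.75 * (1 - u * u) if abs(u) <= 1 else 0.0

def sign(v):
    return (v > 0) - (v < 0)

# Illustrative data: bivariate normal with independent coordinates.
n, m = 2000, 400
data = [(random.gauss(0, 3 ** 0.5), random.gauss(0, 1)) for _ in range(n)]
x = xp = 0.0                 # evaluate U_n(x, x') at (0, 0)

def summand(i, j):
    # One term of the double sum defining U_n(x, x').
    (Xi, Yi), (Xj, Yj) = data[i], data[j]
    return (((Yi <= xp) - (Yj <= xp)) * sign(Xi - Xj)
            * K(x - Xi) * K(x - Xj))

# Random incomplete average over m of the n(n-1) ordered pairs (i, j).
U_rnd = sum(summand(*random.sample(range(n), 2)) for _ in range(m)) / m
```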
\if0{
\begin{example} \label{example:auc}(Area Under the Curve (AUC) maximization, \cite{papa:2015}). In the training of a classifier to the data $(X,Y)$, where $X$ is the predictor and $Y\in \{-1,1\}$ is the label
AUC has been an important criteria for a classifier $s_{\theta}(X)$ as parametrized by $\theta$. The maximization of AUC with respect to $\theta$ leads to the optimization problem
$${\rm argmin}_{\theta}L(\theta), ~~~~L(\theta)=\sum_{x^+\in G^+}\sum_{x^-\in G^-}\log(1+\exp(s_{\theta}(x^-)-s_{\theta}(x^+))),$$
where $G^+=\{(X,Y):Y=1\}$ and $G^-=\{(X,Y),Y=-1\}$.
Consider separating two sets with a scoring function. Let $s_{\theta}(X)$ be a scoring rule to learn. In this case, we simply take linear scoring rule $s_{\theta}(X)=\theta^TX$. The optimal $\theta^*$ is determined by the following empirical risk minimization problem
$${\rm argmin}_{\theta}L(\theta), ~~~~L(\theta)=\sum_{x^+\in G^+}\sum_{x^-\in G^-}\log(1+\exp(s_{\theta}(x^-)-s_{\theta}(x^+))),$$
where the labelled observations form two groups .
The $\theta^*$ is derived by the recursion $\theta_{t+1}=\theta_t-\gamma_t\nabla_{\theta}L(\theta_t)$ by stochastic gradient descending (SGD). Consider the elements in $G^+$ follow the 2-dimensional normal distribution with $\mu=(0,4)$ and $\Sigma={\rm diag}(3,4)$, and the elements in $G^-$ follow the 2-dimensional normal distribution with $\mu=(0,0)$ and $\Sigma={\rm diag}(3,4)$. Set $|G^+|=|G^-|=10^4$. We will show two results for this example. First, the MSE in estimating the loss $L(\theta)$ by $\tilde{U}_{oa}$ and $U_{\rm RND}$ for fixed $\theta=(0,1)$ with different $m$ is shown in Table \ref{tab:auc}. As a reference, we have ${\rm MSE}(U_0)=0.2981$. Second, $L(\theta^*)$ estimated from the SGD method by $\tilde{U}_{oa}$ and $U_{\rm RND}$ with $m=25$ and $m=400$ is shown in Figure \ref{fig:route}.
\end{example}
\begin{table}[h]
\begin{center}\tabcolsep 5.8pt
\caption{Result of Example \ref{example:auc}.}\label{tab:auc}
\begin{tabular}{|c|cccccccccc|}\hline
$m$ &100 &400 &900 &1600 &2500 &3600 &4900 &6400 &8100 &10000 \\\hline
${\rm MSE}({U}_{\rm RND})$ &52.35 &14.09 &5.987 &4.747 &3.632 &2.781 &1.557 &1.191 &1.018 &.9938 \\
\hline
${\rm MSE}(\tilde{U}_{oa})$ &8.585 &.6687 &.5373 &.4586 &.4252 &.4089 &.3766 &.3471 &.3203 &.3090 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=8.1cm]{size25.pdf}
\end{minipage}
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=8.1cm]{size400.pdf}
\end{minipage}
\caption{Comparison of the loss (empirical risk) of $\theta^*$ estimated from the SGD method by $\tilde{U}_{oa}$ (red) and $U_{\rm RND}$ (blue) with $m=25$ (left) and $m=400$ (right). Solid represents the mean loss and dot represents mean loss plus the standard deviation.}
\label{fig:route}
\end{figure}
}\fi
{
\begin{example} \label{example:clusterperformance}(Clustering performance evaluation, \cite{papa:2015}). For a given distance $D:\mathcal{X}\times\mathcal{X}\rightarrow R$ defined on $\mathcal{X}$, the performance of a partition ${P}$ can be evaluated from the data $X_1,\ldots,X_n\in\mathcal{X}$ using
\begin{eqnarray}\label{eqn:311}
W({P})=\sum_{1\leq i<j\leq n}D(X_i,X_j)\cdot\sum_{\mathcal{C}\in{P}}I\{(X_i,X_j)\in\mathcal{C}^2\}.
\end{eqnarray}
Our purpose is to compare the different incomplete U-statistics of (\ref{eqn:311}); here, we focus on the k-means method for the comparison. The data are generated from a normal distribution with $\mu=(0,0)$ and $\Sigma={\rm diag}(1,2)$, and we divide the data into two groups. The MSE of $U_{\rm RND}$ and $\tilde{U}_{oa}$ when estimating $W(P)$ for different $m$ is shown in Table \ref{tab:clus}. As a reference, we have ${\rm MSE}(U_0)=1.043\times 10^{-4}$.
\end{example}
\begin{table}[h]
\begin{center}\tabcolsep 4.5pt
\caption{Result of Example \ref{example:clusterperformance}.}\label{tab:clus}\tabcolsep 3.7pt
\begin{tabular}{|c|cccccccccc|}\hline
$m$ &100 &400 &900 &1600 &2500 &3600 &4900 &6400 &8100 &10000 \\\hline
${\rm MSE}({U}_{\rm RND})$ &.216 &.0625 &.0346 &.0171 &.0064 &.0047 &.0038 &.0021 &.0017 &.0010 \\
\hline
${\rm MSE}(\tilde{U}_{oa})$ &.011 &.0064 &.0038 &.0019 &.00056 &.00051 &.00038 &.00027 &.00013 &.00012 \\
\hline
\end{tabular}
\end{center}
\end{table}}
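As a sketch of how the within-cluster criterion $W(P)$ above is evaluated, the following uses a bare-bones 2-means loop and Euclidean distance; the data size, iteration count, and seed are illustrative assumptions, not the simulation settings of Table \ref{tab:clus}.

```python
import math, random

random.seed(2)

# Illustrative data: 2-d normal points, to be split into two clusters.
n = 200
X = [(random.gauss(0, 1), random.gauss(0, 2 ** 0.5)) for _ in range(n)]

# Bare-bones 2-means: alternate assignment and centroid-update steps.
centers = [X[0], X[1]]
for _ in range(10):
    lab = [0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
           for p in X]
    for c in (0, 1):
        pts = [p for p, l in zip(X, lab) if l == c]
        if pts:
            centers[c] = (sum(p[0] for p in pts) / len(pts),
                          sum(p[1] for p in pts) / len(pts))

# W(P): sum of pairwise distances restricted to same-cluster pairs,
# i.e., the complete double sum in the display defining W(P).
W = sum(math.dist(X[i], X[j])
        for i in range(n) for j in range(i + 1, n) if lab[i] == lab[j])
```

Replacing the complete double sum by OA-structured pairs gives the incomplete versions compared in Table \ref{tab:clus}.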
\setcounter{section}{5}
\setcounter{equation}{4}
\vspace{1cm}
\noindent{\large\bf 5. Conclusion}
\label{sec:discuz}
To tackle the computational issue of U-statistics, we have introduced a new type of incomplete U-statistic called the ICUDO, which has much higher efficiency than existing methods. The required computational burden, as indexed by the number of combinations $m$ for the ICUDO to be statistically equivalent to the complete U-statistic, is of smaller magnitude than existing methods. This was validated theoretically and empirically for degenerate and non-degenerate one- and multi-sample U-statistics on univariate and multi-dimensional data. In fact, $m$ is allowed to grow more slowly than the data size $n$ in the non-degenerate case.
The OA plays a critical role in the construction of the ICUDO, in light of its projective uniformity. Other space-filling design schemes exist with similar properties, such as the OA-based Latin hypercube of \cite{tang:1993} and the strong orthogonal array of \cite{he:2012}, which is used frequently in the design of computer experiments. Through extensive simulations, we find that the efficiency improvement of these design schemes over the ICUDO is within 1\%. This improvement is not sufficient to advocate using these structures, owing to the extra computational complexity. Other improvements over the OA are based on optimality criteria, such as the generalized minimum aberration OA. However, no theoretical results are available for these fixed structures.
Lastly, the following offer potential future research directions. ($i$) For high-dimensional data, dimension-reduction techniques need to be integrated into our current algorithm. ($ii$) For multi-sample cases, we may divide different samples into different numbers of groups in some optimal way. This will induce more complicated OA structures. ($iii$) For the purpose of statistical inference, it would be of interest to study the asymptotic distributions of the ICUDO under different conditions. ($iv$) The dimension of the kernel functions is fixed at $d$ as $n$ increases, and all data are generated independently. In one important type of U-statistic based on stochastic processes, $d$ increases with $n$ and the data can be dependent. These topics will involve quite different methodologies, and hence are left to future work.
\setcounter{section}{6}
\setcounter{equation}{4}
\vspace{1cm}
\noindent{\large\bf Supplementary Material}
\label{sec:supplement}
The online Supplementary Material generalizes the result of Theorem \ref{thm:lipschitzonesample} under additional conditions. It also provides details on how to choose the combination of $L$ and $t$ and illustrates the generation of the ICUDO for multi-sample and multi-dimensional cases.
\par
\vskip 14pt
\noindent {\large\bf Acknowledgments}
Dr. Kong's research was partially supported by NSFC grant 11801033 and the Beijing Institute of Technology Research Fund Program for Young Scholars.
Dr. Zheng's research was partially supported by the National Science Foundation, DMS-1830864.
\par
\vspace{.5cm}
\setcounter{section}{6}
\setcounter{equation}{5}
\noindent{\large\bf Appendix. Proof of Theorems}
Lemmas \ref{lem:barfunctionproperty}--\ref{lem:lusin} contribute to the proof of Theorem \ref{thm:noassumptiononesample}. Theorems \ref{thm:noassumption} and \ref{thm:multid1} can be proved similarly to Theorem \ref{thm:noassumptiononesample}, albeit with more tedious analysis, and hence their proofs are omitted owing to space limitations. For any $\bm a\in \mathcal{Z}_L^d$, we call the set $\mathcal{G}_{\bm a}=\prod_{j=1}^{d}G_{a_{j}}$ a {\it grid}. Let $F_n$ be the empirical distribution of $\{X_1,\ldots,X_n\}$ and define
$V=\int g(x_1,\ldots,x_d)dF_n(x_1)\ldots dF_n(x_d).$
For given $F_n$ and ${\bm \eta}\in \mathcal{G}_{\bm a}$, define
$$\bar{g}(\mathcal{X}_{\bm \eta})=|\mathcal{G}_{\bm a}|^{-1}\sum_{\bm{\eta}'\in \mathcal{G}_{\bm a}}g(\mathcal{X}_{\bm{\eta}'}).$$
For the same $S_{oa}=\{{\bm\eta}^1,\ldots,{\bm\eta}^m\}$ in generating $U_{oa}$, define
$$\bar{V}=\frac{1}{m}\sum_{i=1}^m\bar{g}(\mathcal{X}_{\bm{\eta}^i}).$$
\begin{lemma}\label{lem:barfunctionproperty}
Some properties of $V$ and $\bar{V}$ are listed as follows.\\
{$(i)$} $\bar{V}$ is an unbiased estimator of ~$V$.\\
{$(ii)$} The bias of ~$V$ is of order $O(n^{-1})$ and ${\rm MSE}(V)={\rm MSE}(U_0)+O(n^{-2})$.\\
{$(iii)$} $U_{oa}$ is an unbiased estimator of ~$V$ and so also has bias $O(n^{-1})$.\end{lemma}
\noindent{\it Proof.} $(i)$ follows from the unbiasedness of orthogonal arrays. $(ii)$ can be found in Proposition 3.5 of \cite{shao:2007} (page 211). $(iii)$ follows from \cite{owen:1992}.~~~$\square$
\begin{lemma}\label{lem:bar}
$$E(\bar{V}-V)^2\leq\frac{1}{m}\sum_{\bm u:|\bm u|>t}\left(\delta_{\bm u}^2+O(n^{-1})\right).$$
\end{lemma}
\noindent{\it Proof.}
Let $\delta^2_{\bm u}=\delta^2_{|\bm u|}$ and $\sigma^2_{\bm u}=\sigma^2_{|\bm u|}$. Changing the $F$ in Section 2.2 to $F_n$, we can define $dF_{n,{\bm u}}$, $g_{n,{\bm u}}$, $h_{n,{\bm u}}$, $\sigma^2_{n,{\bm u}}$, and $\delta^2_{n,{\bm u}}$ analogously and sequentially. Again, by substituting $\bar{g}$ for $g$, with $F_n$, we define $\bar{g}_{n,{\bm u}}$, $\bar{h}_{n,{\bm u}}$, $\bar{\sigma}^2_{n,{\bm u}}$ and $\bar{\delta}^2_{n,{\bm u}}$.
Applying (3.5) of \cite{owen:1992} to $\bar{g}$, we have
$$E[(\bar{V}-V)^2|F_n]\leq\frac{1}{m}\sum_{\bm u:|\bm u|>t}\bar{\delta}^2_{n,\bm u}\leq\frac{1}{m}\sum_{\bm u:|\bm u|>t}{\delta}^2_{n,\bm u},$$
which leads to
$E(\bar{V}-V)^2=E(E[(\bar{V}-V)^2|F_n])\leq \frac{1}{m}\sum_{\bm u:|\bm u|>t}E{\delta}^2_{n,\bm u}$.
Consider $\sigma_{n,\bm u}^2=\int g^2_{n,\bm u}(x_1,\ldots,x_d)dF_n(x_1)\ldots dF_n(x_d)$, which can be further written as
$\int \left(\int g_{n,\bm u}dF_{n,\bm u^c}\right)^2dF_{n,\bm u}$.
This integral can be viewed as a V-statistic with the new kernel
$g(x_1,\ldots,x_{|\bm u|},x_{|\bm u|+1},\ldots,x_d)\cdot$ $g(x_1,\ldots,x_{|\bm u|},x_{d+1},$ $\ldots,x_{2d-|\bm u|})$,
which estimates $\sigma_{\bm u}^2$ with bias $O(n^{-1})$. Hence $E{\delta}^2_{n,\bm u}={\delta}^2_{\bm u}+O(n^{-1})$, which completes the proof.~~~$\square$
\begin{lemma}\label{lem:lusin} {\rm (Lusin's theorem)}
\noindent For any integrable function $g$ on $R^d$ and arbitrary $\epsilon>0$, there exists a continuous $g_{\epsilon}$ defined on $R^d$ with compact support such that $E|g-g_{\epsilon}|<\epsilon$.
\end{lemma}
\noindent{\large\bf Proof of Theorem \ref{thm:noassumptiononesample}.} Define $g_{F}(Z_1,\ldots,Z_d)=g(F^{-1}(Z_1),\ldots,F^{-1}(Z_d))$, where $Z\sim U(0,1)$ and $F^{-1}(Z)\sim F$. With this new kernel $g_F$, the random variables $X_i$ may be assumed to follow the uniform distribution on $[0,1]$.
Write $U_{oa}-\Theta$ as $(U_{oa}-\bar{V})+(\bar{V}-V)+(V-\Theta)$.
Simple analysis reveals the following relationships among $U_{oa}$, $\bar{V}$ and $V$.
Conditional on $F_n$, $V$ is constant, so $E(U_{oa}-\bar{V})(V-\Theta)=0$ and $E(\bar{V}-V)(V-\Theta)=0$, since $E(U_{oa}-\bar{V})=E(\bar{V}-V)=0$. Conditional on both $V$ and $\bar{V}$, $E(U_{oa}-\bar{V})=0$, which implies $E(U_{oa}-\bar{V})(\bar{V}-V)=0$.
Thus,
\begin{equation}\label{eq:simpledecomposition}
{\rm MSE}(U_{oa})=E(U_{oa}-\bar{V})^2+E(\bar{V}-V)^2+{\rm MSE}(V)
\end{equation}
whose last two terms have been addressed by Lemmas \ref{lem:bar} and \ref{lem:barfunctionproperty}.
Thus, it remains to prove that $E(U_{oa}-\bar{V})^2=o(m^{-1})$.
Since $U_{oa}$ and $\bar{V}$ always use the same $S_{oa}=\{\bm\eta^1,\ldots,\bm\eta^m\}$,
$$E(U_{oa}-\bar{V})^2=E\left(\frac{1}{m}\sum_{i=1}^mg(\mathcal{X}_{\bm{\eta}^i})-\bar{g}(\mathcal{X}_{\bm{\eta}^i})\right)^2.$$
For ${i_1}\neq {i_2}$ ($i_1,i_2\in \mathcal{Z}_m$), $E(g(\mathcal{X}_{\bm{\eta}^{i_1}})-\bar{g}(\mathcal{X}_{\bm{\eta}^{i_1}}))(g(\mathcal{X}_{\bm{\eta}^{i_2}})-\bar{g}(\mathcal{X}_{\bm{\eta}^{i_2}}))=0$. Write $\bm\eta\sim\bm\eta'$ if $\bm\eta$ and $\bm\eta'$ belong to the same grid. Then
\begin{equation}\label{eq:funproof}
E(U_{oa}-\bar{V})^2\leq 2m^{-1}E[(g(\mathcal{X}_{\bm{\eta}})-{g}(\mathcal{X}_{\bm{\eta}'}))^2|{\bm \eta\sim\bm \eta'}].
\end{equation}
For any $M>0$, define
$g({\bm x},M)=\max\{\min\{g({\bm x}),M\},-M\}$.
Obviously, we have $\lim_{M\rightarrow\infty}g({\bm x},M)=g({\bm x})$, and the dominated convergence theorem gives
\begin{eqnarray}\label{eq:funproof1}
E[(g(\mathcal{X}_{\bm \eta})-g(\mathcal{X}_{\bm \eta'}))^2|{\bm \eta\sim\bm \eta'}]=\lim_{M\rightarrow\infty}E[(g(\mathcal{X}_{\bm \eta},M)-g(\mathcal{X}_{\bm \eta'},M))^2|{\bm \eta\sim\bm \eta'}].
\end{eqnarray}
Thus, for arbitrary $\epsilon>0$, we can find $M_{\epsilon}$ such that
\begin{eqnarray}\label{eq:funproof2}
&E[(g(\mathcal{X}_{\bm \eta})-g(\mathcal{X}_{\bm \eta'}))^2|{\bm \eta\sim\bm \eta'}]\leq E[(g(\mathcal{X}_{\bm \eta},M_{\epsilon})-g(\mathcal{X}_{\bm \eta'},M_{\epsilon}))^2|{\bm \eta\sim\bm \eta'}]+\epsilon.~~~~
\end{eqnarray}
Since $\{X_1,\ldots,X_n\}$ are random, so is $\mathcal{X}_{\bm\eta}$. Because $Eg^2(X_1,\ldots,X_d)<\infty$,
we have $Eg^2(\mathcal{X}_{\bm\eta})<\infty$ and so $E|g(\mathcal{X}_{\bm\eta})|<\infty$, which implies $Eg^2(\mathcal{X}_{\bm \eta},M_{\epsilon})<\infty$ and $E|g(\mathcal{X}_{\bm\eta},M_{\epsilon})|<\infty$.
From Lusin's theorem, there exists a continuous $g_{\epsilon,M_{\epsilon}}^*$ with compact support such that $E|g(\mathcal{X}_{\bm\eta},M_{\epsilon})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm\eta})|<\epsilon M_{\epsilon}^{-1}$. Since $|g(\mathcal{X}_{\bm\eta},M_{\epsilon})|\leq M_{\epsilon}$,
\begin{eqnarray}\label{eq:funproof3}
&&E[(g(\mathcal{X}_{\bm \eta},M_{\epsilon})-g(\mathcal{X}_{\bm \eta'},M_{\epsilon}))^2|{\bm \eta\sim\bm \eta'}]\notag\\
&\leq&2M_{\epsilon}E[|g(\mathcal{X}_{\bm \eta},M_{\epsilon})-g(\mathcal{X}_{\bm \eta'},M_{\epsilon})||{\bm \eta\sim\bm \eta'}]\notag\\
&\leq&2M_{\epsilon}E|g(\mathcal{X}_{\bm\eta},M_{\epsilon})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta})|+2M_{\epsilon}E|g(\mathcal{X}_{\bm \eta'},M_{\epsilon})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm\eta'})|+\notag\\
&&2M_{\epsilon}E[|g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta'})||{\bm \eta\sim\bm \eta'}]\notag\\
&\leq&4\epsilon+2M_{\epsilon}E[|g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta'})||{\bm \eta\sim\bm \eta'}]
\end{eqnarray}
Note that $g_{\epsilon,M_{\epsilon}}^*$ has compact support and hence is uniformly continuous. Thus, there exists $\Delta(M_{\epsilon}^{-1}\epsilon)>0$ such that $|g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta'})|\leq \epsilon M_{\epsilon}^{-1}$ whenever $||\mathcal{X}_{\bm \eta}-\mathcal{X}_{\bm \eta'}||_2\leq \Delta(M_{\epsilon}^{-1}\epsilon)$.
Define $$\mathcal{A}=\{|\mathcal{X}_{\eta_{j}}-\mathcal{X}_{\eta'_{j}}|\geq d^{-1}\Delta(M_{\epsilon}^{-1}\epsilon)~{\rm for~some}~j\in\mathcal{Z}_d\},$$
with $P(\mathcal{A})\leq\sum_{j=1}^{d}P\{|\mathcal{X}_{ \eta_{j}}-\mathcal{X}_{\eta'_{j}}|\geq d^{-1}\Delta(M_{\epsilon}^{-1}\epsilon)\}$, and $||\mathcal{X}_{\bm \eta}-\mathcal{X}_{\bm \eta'}||_2\leq \Delta(M_{\epsilon}^{-1}\epsilon)$ on $\mathcal{A}^c$. Then
\begin{eqnarray}
&&2M_{\epsilon}E[|g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta'})||{\bm \eta\sim\bm \eta'}]\notag\\
&=&2M_{\epsilon}P(\mathcal{A}^c)E[|g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta'})||{\bm\eta\sim\bm \eta'},\mathcal{A}^c]\notag\\
&&+2M_{\epsilon}P(\mathcal{A})E[|g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta})-g_{\epsilon,M_{\epsilon}}^*(\mathcal{X}_{\bm \eta'})||{\bm \eta\sim\bm \eta'},\mathcal{A}]\notag\\
&\leq&2\epsilon+4M^2_{\epsilon}\sum_{j=1}^{d}P\{|\mathcal{X}_{ \eta_{j}}-\mathcal{X}_{\eta'_{j}}|\geq d^{-1}\Delta(M_{\epsilon}^{-1}\epsilon)\}.
\end{eqnarray}
Now we give the relationship among several events. For $j\in\mathcal{Z}_d$ and $\bm\eta\sim\bm\eta'$,
\begin{eqnarray}
&&\{|\mathcal{X}_{\eta_{j}}-\mathcal{X}_{\eta'_{j}}|\geq d^{-1}\Delta(M_{\epsilon}^{-1}\epsilon)\}\notag\\
&=&\{|\mathcal{X}_{\eta_{j}}-F_{n}(\mathcal{X}_{\eta_{j}})+F_{n}(\mathcal{X}_{\eta_{j}})-F_{n}(\mathcal{X}_{\eta'_{j}})+F_{n}(\mathcal{X}_{\eta'_{j}})-\mathcal{X}_{\eta'_{j}}|\geq d^{-1}\Delta(M_{\epsilon}^{-1}\epsilon)\}\notag\\
&\subseteq&\{\sup_{x\in(0,1)}|x-F_{n}(x)|\geq \frac{1}{3d}\Delta(M_{\epsilon}^{-1}\epsilon)\}\cup\{F_{n}(\mathcal{X}_{\eta_{j}})-F_{n}(\mathcal{X}_{\eta'_{j}})\geq \frac{1}{3d}\Delta(M_{\epsilon}^{-1}\epsilon)\}\notag
\end{eqnarray}
Since $\bm\eta\sim\bm\eta'$, we have $P(\{F_{n}(\mathcal{X}_{\eta_{j}})-F_{n}(\mathcal{X}_{\eta'_{j}})\geq \frac{1}{3d}\Delta(M_{\epsilon}^{-1}\epsilon)\})\rightarrow 0$ as $L\rightarrow\infty$.
The Dvoretzky--Kiefer--Wolfowitz inequality gives $P\left(\sup_{x\in (0,1)}|F_{n}(x)-x|\geq \epsilon\right)\leq 2\exp(-2n\epsilon^2)$. So we immediately have
$P(\{|\mathcal{X}_{\eta_{j}}-\mathcal{X}_{\eta'_{j}}|\geq d^{-1}\Delta(M_{\epsilon}^{-1}\epsilon)\})\rightarrow 0$
as $n, L\rightarrow\infty$, and we can find $n_{\epsilon}$ and $L_{\epsilon}$ such that
\begin{equation}\label{eq:funproof4}
P(\{|\mathcal{X}_{\eta_{j}}-\mathcal{X}_{\eta'_{j}}|\geq d^{-1}\Delta(M_{\epsilon}^{-1}\epsilon)\})\leq (4dM^2_{\epsilon})^{-1}\epsilon
\end{equation}
as long as $n\geq n_{\epsilon}$ and $L\geq L_{\epsilon}$.
Finally, by combining (\ref{eq:funproof1})-(\ref{eq:funproof4}), we know that for arbitrary $\epsilon>0$, we can find $n_{\epsilon}$ and $L_{\epsilon}$ such that
$E[(g(\mathcal{X}_{\bm\eta})-g(\mathcal{X}_{\bm \eta'}))^2|{\bm \eta\sim\bm \eta'}]\leq 8\epsilon$,
as long as $n\geq n_{\epsilon}$ and $L\geq L_{\epsilon}$.
That means
\begin{equation}\label{eq:funprooffinal}
E[(g(\mathcal{X}_{\bm \eta})-g(\mathcal{X}_{\bm \eta'}))^2|{\bm \eta\sim\bm \eta'}]\rightarrow 0
\end{equation}
as $n,L\rightarrow\infty$.
Theorem \ref{thm:noassumptiononesample} follows by substituting (\ref{eq:funprooffinal}) into (\ref{eq:funproof}) and then combining (\ref{eq:funproof}) with (\ref{eq:simpledecomposition}), Lemma \ref{lem:barfunctionproperty}($ii$) and Lemma \ref{lem:bar}.~~~$\square$
\vspace{.5cm}
\noindent{\large\bf Proof of Theorem \ref{thm:lipschitzonesample}.}
There exists $c>0$ such that the density function satisfies $f(\cdot)>c$ on $[a,b]$, and hence $|F(x_1)-F(x_2)|\geq c|x_1-x_2|$ for $x_1,x_2\in [a,b]$.
In (\ref{eq:simpledecomposition}), we only analyze $E(U_{oa}-\bar{V})^2$, since the remaining two terms are handled by Lemma \ref{lem:barfunctionproperty}$(ii)$ and Lemma \ref{lem:bar}.
The Dvoretzky--Kiefer--Wolfowitz inequality gives $P\left(\sup_{x\in R}|F_{n}(x)-F(x)|\geq \epsilon\right)\leq 2\exp(-2n\epsilon^2)$.
By taking $\epsilon=[\log(n)n^{-1}]^{1/2}$, we have
\begin{eqnarray}\label{eqDKW}
P\left(\mathcal{A}\right)\leq 2\exp(-2\log n)=O(n^{-2}),\notag
\end{eqnarray}
where $\mathcal{A}=\{\sup_{x\in R}|F_{n}(x)-F(x)|\geq n^{-1/2}\log^{1/2}(n)\}$.
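As a quick numeric sanity check of this step, plugging $\epsilon=[\log(n)n^{-1}]^{1/2}$ into the exponential factor of the bound indeed yields the $n^{-2}$ rate (a sketch; the value of $n$ is arbitrary):

```python
import math

n = 10 ** 4
eps = math.sqrt(math.log(n) / n)
# exp(-2 * n * eps^2) = exp(-2 log n) = n^(-2)
bound = math.exp(-2 * n * eps ** 2)
```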
Since $g$ is continuous and the support of $F$ is bounded, we can find $M>0$ such that $|g|\leq M$, and so $|U_{oa}|,|\bar{V}|\leq M$.
\begin{eqnarray}
&&E[(g(\mathcal{X}_{\bm{\eta}})-{g}(\mathcal{X}_{\bm{\eta}'}))^2|{\bm \eta\sim\bm \eta'}]\notag\\
&=&P(\mathcal{A})E[(g(\mathcal{X}_{\bm{\eta}})-{g}(\mathcal{X}_{\bm{\eta}'}))^2|{\bm \eta\sim\bm \eta'},\mathcal{A}]+P(\mathcal{A}^c)E[(g(\mathcal{X}_{\bm{\eta}})-{g}(\mathcal{X}_{\bm{\eta}'}))^2|{\bm \eta\sim\bm \eta'},\mathcal{A}^c]\notag\\
&\leq& M^2n^{-2}+E[(g(\mathcal{X}_{\bm{\eta}})-{g}(\mathcal{X}_{\bm{\eta}'}))^2|{\bm \eta\sim\bm \eta'},\mathcal{A}^c]\notag
\end{eqnarray}
The analysis of $E[(U_{oa}-\bar{V})^2|\mathcal{A}^c]$ is as follows. On $\mathcal{A}^c$, we have, for $1\leq k_1,k_2\leq nL^{-1}$,
\begin{eqnarray}
&&c|X_{((l-1)nL^{-1}+k_1)}-X_{((l-1)nL^{-1}+k_2)}|\notag\\
&\leq& |F(X_{((l-1)nL^{-1}+k_1)})-F(X_{((l-1)nL^{-1}+k_2)})|\notag\\
&\leq& |F_{n}(X_{((l-1)nL^{-1}+k_1)})-F_{n}(X_{((l-1)nL^{-1}+k_2)})|+2n^{-1/2}\log^{1/2}n\notag\\
&\leq& L^{-1}+2n^{-1/2}\log^{1/2}n.\notag
\end{eqnarray}
Since $g$ is Lipschitz continuous, we know
$(g(\mathcal{X}_{\bm\eta})-g(\mathcal{X}_{\bm\eta'}))^2=O(L^{-2}+n^{-1}\log n)$
for any $\bm\eta\sim\bm\eta'$.
Then we have $E[(g(\mathcal{X}_{\bm\eta})-g(\mathcal{X}_{\bm\eta'}))^2|{\bm\eta\sim\bm\eta'},\mathcal{A}^c]=O(L^{-2}+n^{-1}\log n)$. With this equation, Theorem \ref{thm:lipschitzonesample} follows directly from (\ref{eq:simpledecomposition}), (\ref{eq:funproof}), Lemma \ref{lem:barfunctionproperty}$(ii)$, and Lemma \ref{lem:bar}.~~~~$\square$
\vspace{.5cm}
\noindent{\large\bf Proof of Theorem \ref{thm:tradeoff}.}
For convenience, we simply write $g_F$ as $g$ in this proof.
In (\ref{eq:simpledecomposition}), we only analyze $E(U_{oa}-\bar{V})^2$, since the remaining two terms are handled by Lemma \ref{lem:barfunctionproperty}$(ii)$ and Lemma \ref{lem:bar}.
Each row of the matrix $A$ generated in Step 1 follows the uniform distribution on $\mathcal{Z}_L^d$, since the permutations of the columns of $A_0$ are independent. Thus,
$$E(U_{oa}-\bar{V})^2=E\left(\frac{1}{m}\sum_{i=1}^mg(\mathcal{X}_{\bm{\eta}^i})-\bar{g}(\mathcal{X}_{\bm{\eta}^i})\right)^2=\frac{1}{mL^d}\sum_{\bm a\in \mathcal{Z}_L^d}E[(g(\mathcal{X}_{\bm{\eta}})-\bar{g}(\mathcal{X}_{\bm{\eta}}))^2|\bm\eta\in\mathcal{G}_{\bm a}].$$
Analysis is now focused on $E[(g(\mathcal{X}_{\bm{\eta}})-\bar{g}(\mathcal{X}_{\bm{\eta}}))^2|\bm\eta\in\mathcal{G}_{\bm a}]$ for every $\bm a\in\mathcal{Z}_L^d$. Let $X_{(0)}=0$ and $X_{(n+1)}=1$. For $l\in\mathcal{Z}_L$, given $X_{((l-1)nL^{-1})}$ and $X_{(lnL^{-1}+1)}$, the block $X_{((l-1)nL^{-1}+1)},\ldots,X_{(lnL^{-1})}$ has the same distribution as the order statistics of $nL^{-1}$ samples from the uniform distribution on $[X_{((l-1)nL^{-1})},X_{(lnL^{-1}+1)}]$.
For $\mathcal{A}=\{\sup_{x\in R}|F_{n}(x)-F(x)|\geq n^{-\frac{1-c}{2}}\}$, the Dvoretzky--Kiefer--Wolfowitz inequality gives $P(\mathcal{A})\leq 2\exp(-2n^c)$.
On $\mathcal{A}^c$, we have $L(X_{(lnL^{-1}+1)}-X_{((l-1)nL^{-1})})\rightarrow 1$ as $n\rightarrow\infty$. The analysis is now focused on
$E[(g(\mathcal{X}_{\bm{\eta}})-\bar{g}(\mathcal{X}_{\bm{\eta}}))^2|\bm\eta\in\mathcal{G}_{\bm a},\mathcal{A}^c]$.
For this given $\bm a$, define $\mathcal{X}_0=(X_{0,1},\ldots,X_{0,d})$, where $X_{0,j}=\frac{L}{n}\sum_{\eta\in G_{a_j}}X_{\eta}$, so that $\sum_{\eta\in G_{a_j}}(X_{\eta}-X_{0,j})=0$. Applying a Taylor expansion at $\mathcal{X}_0$, we have
\begin{equation}\notag
g(\mathcal{X}_{\bm\eta})=g(\mathcal{X}_0)+\sum_{j=1}^d\frac{\partial g}{\partial x_j}\bigg|_{X_{0,j}}(X_{\eta_j}-X_{0,j})+O(L^{-2})~~{\rm and}~~\bar{g}(\mathcal{X}_{\bm\eta})=g(\mathcal{X}_0)+O(L^{-2}).
\end{equation}
\begin{eqnarray}
&&E[(g(\mathcal{X}_{\bm{\eta}})-\bar{g}(\mathcal{X}_{\bm{\eta}}))^2|\bm\eta\in\mathcal{G}_{\bm a},\mathcal{A}^c]\notag\\
&=&E\left[\left(\sum_{j=1}^d\frac{\partial g}{\partial x_j}\bigg|_{X_{0,j}}\cdot(X_{\eta_j}-X_{0,j})+O(L^{-2})\right)^2|\bm\eta\in\mathcal{G}_{\bm a},\mathcal{A}^c\right]\notag\\
&=&o(L^{-2})+\sum_{j=1}^dE\left[\left(\frac{\partial g}{\partial x_j}\bigg|_{X_{0,j}}\cdot(X_{\eta_j}-X_{0,j})\right)^2|\bm\eta\in\mathcal{G}_{\bm a},\mathcal{A}^c\right]\notag\\
&=&o(L^{-2})+\sum_{j=1}^d\left(\frac{\partial g}{\partial x_j}\bigg|_{X_{0,j}}\right)^2\frac{1}{12L^2}.\notag
\end{eqnarray}
It follows that
\begin{eqnarray}
&&E(U_{oa}-\bar{V})^2=\frac{1}{mL^d}\sum_{\bm a\in \mathcal{Z}_L^d}E[(g(\mathcal{X}_{\bm{\eta}})-\bar{g}(\mathcal{X}_{\bm{\eta}}))^2|\bm\eta\in\mathcal{G}_{\bm a}]\notag\\
&=&\frac{1}{12mL^2}\sum_{j=1}^d\left(\frac{1}{L^d}\sum_{\bm a\in \mathcal{Z}_L^d}\left(\frac{\partial g}{\partial x_j}\bigg|_{X_{0,j}}\right)^2\right)+o\left(\frac{1}{mL^2}\right)\notag\\
&=&\frac{1}{12mL^2}\sum_{j=1}^dE\left(\frac{\partial g}{\partial x_j}\right)^2+o\left(\frac{1}{mL^2}\right).\notag
\end{eqnarray}
Theorem \ref{thm:tradeoff} then follows directly from (\ref{eq:simpledecomposition}), Lemma \ref{lem:barfunctionproperty}($ii$) and Lemma \ref{lem:bar}.~~~~$\square$
\vspace{.5cm}
\noindent{\large\bf Proof of Theorem \ref{thm:debias}.}
Consider the $m$ rows $\bm a_1,\ldots,\bm a_m$ of $A$ generated in Step 1 of the construction in Section 2.1. For any $\bm a\in\mathcal{Z}_L^d$, the random permutation used in generating $\bm a_1,\ldots,\bm a_m$ implies that $P(\bm a_1=\bm a)=L^{-d}$. Given $F_n$,
\begin{eqnarray}
E(\tilde{U}_{oa}|F_n)&=&E\frac{1}{m}\sum_{i=1}^m\omega_{\bm{\eta}^i}g(\mathcal{X}_{\bm \eta^{i}})=E\omega_{\bm{\eta}^1}g(\mathcal{X}_{\bm \eta^{1}})\notag\\
&=&\sum_{\bm a\in \mathcal{Z}_L^d}L^{-d}E_{\bm\eta\in\mathcal{G}_{\bm a}}\omega_{\bm{\eta}}g(\mathcal{X}_{\bm \eta})=\sum_{\bm a\in \mathcal{Z}_L^d}\frac{|\mathcal{G}_{{\bm a}}\cap S_0^*|}{|S_0^*|}E_{\bm\eta\in\mathcal{G}_{\bm a}}g(\mathcal{X}_{\bm\eta})\notag\\
&=&\sum_{\bm a\in \mathcal{Z}_L^d}\frac{|\mathcal{G}_{{\bm a}}\cap S_0^*|}{|S_0^*|}\left(\frac{1}{|\mathcal{G}_{{\bm a}}\cap S_0^*|}\sum_{\bm\eta\in\mathcal{G}_{\bm a}\cap S_0^*}g(\mathcal{X}_{\bm\eta})\right)=\frac{1}{|S_0^*|}\sum_{\bm\eta\in S_0^*}g(\mathcal{X}_{\bm\eta})=U_0.\notag
\end{eqnarray}
Since $U_0$ is unbiased, so is $\tilde{U}_{oa}$. The MSE of $\tilde{U}_{oa}$ can be analyzed similarly to that in Theorem \ref{thm:noassumptiononesample}, and so the details are omitted here.~~~~$\square$
\bibhang=1.7pc
\bibsep=2pt
\fontsize{9}{14pt plus.8pt minus .6pt}\selectfont
\renewcommand\bibname{\large \bf References}
\expandafter\ifx\csname
natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL}\fi
https://arxiv.org/abs/0909.4011 | Large-girth roots of graphs | We study the problem of recognizing graph powers and computing roots of graphs. We provide a polynomial time recognition algorithm for r-th powers of graphs of girth at least 2r+3, thus improving a bound conjectured by Farzad et al. (STACS 2009). Our algorithm also finds all r-th roots of a given graph that have girth at least 2r+3 and no degree one vertices, which is a step towards a recent conjecture of Levenshtein that such a root should be unique. On the negative side, we prove that recognition becomes an NP-complete problem when the girth bound is roughly half as large. Similar results have so far only been attempted for r=2,3. | \section{Introduction}
\label{section:introduction}
All graphs in this paper are simple, undirected and connected. If $H$ is a graph, its \emph{$r$-th power} $G=H^r$ is the graph on the same vertex set such that two distinct vertices are adjacent in $G$ if and only if their distance in $H$ is at most $r$. We also call $H$ an \emph{$r$-th root} of $G$.
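Although the paper works abstractly with $G=H^r$, the power operation itself is straightforward to compute. The following is an illustrative sketch (not from the paper), assuming graphs are encoded as Python dictionaries mapping each vertex to its set of neighbours.

```python
def graph_power(adj, r):
    """Return the r-th power of a graph: two distinct vertices become
    adjacent iff their distance is at most r.  Distances are obtained by a
    breadth-first search truncated at depth r from each vertex."""
    power = {}
    for s in adj:
        seen, frontier = {s}, {s}
        for _ in range(r):
            frontier = {w for u in frontier for w in adj[u]} - seen
            seen |= frontier
        power[s] = seen - {s}
    return power
```

For instance, the square of the path $0-1-2-3$ joins every pair of vertices at distance at most two.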
There are some problems naturally related to graph powers and graph roots. Suppose $\mathcal{P}$ is a class of graphs (possibly consisting of all graphs), $r$ is an integer and $G$ is an arbitrary graph. The questions we ask are:
\begin{itemize}
\item \textit{The recognition problem}: Is $G$ an $r$-th power of some graph from $\mathcal{P}$? Formally, we define a family of decision problems:
\begin{tabular}{lll}
& \textbf{Problem.} & $r$-TH-POWER-OF-$\mathcal{P}$-GRAPH\\
& \textbf{Instance.} & A graph $G$.\\
& \textbf{Question.} & Is $G=H^r$ for some graph $H\in\mathcal{P}$?
\end{tabular}
\item \textit{The $r$-th root problem}: Find some/all $r$-th roots of $G$ which belong to $\mathcal{P}$.
\item \textit{The unique reconstruction problem}: Is the $r$-th root of $G$ in $\mathcal{P}$ (if any) unique?
\end{itemize}
The above problems have been investigated for various graph classes $\mathcal{P}$. There exist characterizations of squares \cite{Muk} and higher powers \cite{EMR} of graphs, but they are not computationally efficient. Motwani and Sudan \cite{MotSud} proved the NP-completeness of recognizing graph squares and Lau \cite{Lau} extended this to cubes of graphs. Motwani and Sudan \cite{MotSud} suggested that recognizing squares of bipartite graphs is also likely to be NP-complete. This was disproved by Lau \cite{Lau}, who gave a polynomial time algorithm that recognizes squares of bipartite graphs and counts the bipartite square roots of a given graph. Apparently the first proof that $r$-TH-POWER-OF-GRAPH and $r$-TH-POWER-OF-BIPARTITE-GRAPH are NP-complete for any $r\geq 3$ was recently announced in \cite{LeNg}.
Considerable attention has been given to tree roots of graphs, which are quite well understood and can be computed efficiently. Lin and Skiena \cite{Lin} gave a polynomial time algorithm for recognizing squares of trees. The first polynomial time algorithm for $r$-TH-POWER-OF-TREE for arbitrary $r$ was given by Kearney and Corneil \cite{KeaCor}. A faster, linear time algorithm for this problem is due to Chang, Ko and Lu \cite{CKL}. All these algorithms also compute some $r$-th tree root of a given graph (if one exists). It is important to note that such a root need not be unique, not even up to isomorphism, so the difficulty lies in making consistent choices while constructing a root. We are going to use the computation of an $r$-th tree root of a graph as a black-box in our algorithms.
There has also been some work on the complexity of $r$-TH-POWER-OF-$\mathcal{P}$-GRAPH for such classes $\mathcal{P}$ as chordal graphs, split graphs and proper interval graphs \cite{LauCor} and for directed graphs and their powers \cite{Kutz}.
In this work we address the above problems for another large family of graphs, namely graphs with no short cycles. Recall that the \emph{girth} of a graph is the length of its shortest cycle. For convenience we shall denote by $\girth{\geq g}$ the class of all graphs of girth at least $g$, and by $\girthplus{\geq g}$ its subclass consisting of graphs with no vertices of degree one (which we call \emph{leaves}). These classes of graphs make a convenient setting for graph roots because of the possible uniqueness results outlined below.
By \cite{4auth} the recognition of squares of $\girth{\geq 4}$-graphs is NP-complete, while squares of $\girth{\geq 6}$-graphs can be recognized in polynomial time. For $r\geq 3$ no complexity-theoretic results were known, but there is some very interesting work on the uniqueness of the roots. Precisely, Levenshtein et al. \cite{Lev+} proved that if $G$ has a square root $H$ in the class $\girthplus{\geq 7}$, then $H$ is unique\footnote{It is not possible to obtain uniqueness if the vertices of degree one are allowed, hence this technical restriction. See \cite{Lev+} for details.}. The same statement was extended in \cite{Lev} to $r$-th roots in $\girthplus{\geq 2r+2\lceil(r-1)/{4}\rceil+1}$. The main conjecture in this area remains unresolved:
\begin{conjecture}[Levenshtein, \cite{Lev}]
\label{conjecture:zajebiste}
If a graph $G$ has an $r$-th root $H$ in $\girthplus{\geq 2r+3}$, then $H$ is unique in that class.
\end{conjecture}
The value of $g=2r+3$ is best possible, as witnessed by the cycle $C_{2r+2}$, which cannot be uniquely reconstructed from its $r$-th power.
The best result towards Conjecture \ref{conjecture:zajebiste} is that the number of roots $H$ under consideration is at most $\Delta(G)$ (the maximum vertex degree in $G$, \cite{Lev}), but its proof yields only exponential time $r$-th root and recognition algorithms.
At the same time Farzad et al. made a conjecture about recognizing powers of graphs of lower-bounded girth:
\begin{conjecture}[Farzad et al., \cite{4auth}]
\label{conjecture:debilne}
The problem $r$-TH-POWER-OF-$\girth{\geq 3r-1}$-GRAPH can be solved in polynomial time.
\end{conjecture}
\paragraph{\textbf{Our contribution.}} Our first result gives an efficient reconstruction algorithm in Levenshtein's case:
\begin{theorem}
\label{theorem:1}
Given any graph $G$, all its $r$-th roots in $\girthplus{\geq 2r+3}$ can be found in polynomial time.
\end{theorem}
Next, we use this result to deal with the general case, i.e. when the roots are allowed to have leaves. It turns out that the same girth bound of $2r+3$ admits a positive result:
\begin{theorem}
\label{theorem:positive}
The problem $r$-TH-POWER-OF-$\girth{\geq 2r+3}$-GRAPH can be solved in polynomial time.
\end{theorem}
Our result proves Conjecture \ref{conjecture:debilne} (for $r\geq 4$) and is in fact stronger. It also improves the result of \cite{LeNg} for $r=3,g=10$. Moreover, our algorithm for this problem is constructive and exhaustive in the sense that it finds ``all'' $r$-th roots in $\girth{\geq 2r+3}$ modulo the non-uniqueness of $r$-th tree roots of graphs, as explained in Section \ref{section:with-leafs}.
These positive results have a hardness counterpart:
\begin{theorem}
\label{theorem:girth}
The problem $r$-TH-POWER-OF-$\girth{\geq g}$-GRAPH is NP-complete for $g\leq r+1$ when $r$ is odd and $g\leq r+2$ when $r$ is even.
\end{theorem}
The paper is structured as follows. First we prove some auxiliary results, useful both in the construction of algorithms and in the hardness result. Section \ref{section:algorithm} contains the main algorithm from Theorem \ref{theorem:1}, which is then used in Section \ref{section:with-leafs} as a building block of the general recognition algorithm from Theorem \ref{theorem:positive}. NP-completeness is proved in Section \ref{section:large-girth}.
\section{Auxiliary results}
\label{section:reductions}
Let us fix some terminology. By $\textrm{dist}_H(u,v)$ we denote the distance from $u$ to $v$ in $H$. The \emph{$d$-neighbourhood} of a vertex $u$ in $H$ is the set of vertices of $H$ which are exactly in distance $d$ from $u$. The $1$-neighbourhood (i.e. the set of vertices adjacent to $u$) will be denoted $N_H(u)$.
Our setup usually involves a pair of graphs $G$ and $H$ on a common vertex set $V$ such that $G=H^r$. We adopt the notation
$$B_v:=\{u\in V: \textrm{dist}_H(u,v)\leq r\}=N_G(v)\cup\{v\}$$
for $v\in V$ (the letter $B$ stands for ``ball'' of radius $r$ in $H$). The lack of explicit reference to $r$ and $H$ in this notation should not lead to confusion. It is important that $B_v$ depends only on $G$.
Almost all previous work on algorithmic aspects of graph powers \cite{MotSud,4auth,Lau,LauCor,LeNg} makes use of a special gadget, called \emph{tail structure}, which, applied to a vertex $u$ in $G$, ensures that in any $r$-th root $H$ of $G$ this vertex has the same, pre-determined neighbourhood. Our main observation is that in fact such a tail structure carries a lot more information about $H$. It pins down not just $N_H(u)$, but also each $d$-neighbourhood of $u$ in $H$ for $d=1,\ldots,r$.
\begin{lemma}
\label{lemma:gadget}
Let $G=H^r$ and suppose that $\{v_0,v_1,\ldots,v_r\}\subset V$ is a set of vertices such that $N_G(v_r)=\{v_{r-1},\ldots,v_1,v_0\}$ and $N_G(v_{i+1})\subset N_G(v_{i})$ for all $i=0,\ldots,r-1$, where the inclusions are strict.
\footnote{This assumption (strictness of inclusions) can be removed at the cost of a more complicated statement, but this generality is not needed here.}
Then the subgraph of $H$ induced by $\{v_0,v_1,\ldots,v_r\}$ is a path $v_0- v_1-\ldots- v_r$ and the $d$-neighbourhood of $v_0$ in $H$ is precisely
$$N_G(v_{r-d})\setminus N_G(v_{r-d+1})\cup\{v_d\}$$
for all $d=1,\ldots,r$.
\end{lemma}
\begin{proof}
The subgraph $K$ of $H$ induced by $\{v_0,\ldots,v_r\}$ is connected --- otherwise $N_G(v_r)$ would contain vertices from outside $K$. Consider any vertex $u$ of $K$ that has an edge to some vertex $w$ outside $K$. Clearly, $\textrm{dist}_K(v_r,u)=r$, since otherwise $w$ would be in $N_G(v_r)$. This means that $K$ is a path from $v_r$ to $u$ and $u$ is the only vertex of that path which has edges to vertices outside $K$. The condition $N_G(v_{i+1})\subset N_G(v_{i})$ now implies that the vertices of this path are arranged as in the conclusion of the lemma. The second conclusion follows easily.
\end{proof}
Note that the tail structure itself does not enforce any extra constraints on $H$ other than the $d$-neighbourhoods of $v_0$.
In the algorithm for $r$-TH-POWER-OF-$\girth{\geq 2r+3}$-GRAPH we will need to solve the following tree root problem with additional restrictions imposed on the $d$-neighbourhoods of a certain vertex:\\
\begin{tabular}{lll}
& \textbf{Problem.} & RESTRICTED-$r$-TH-TREE-ROOT\\
& \textbf{Instance.} & A graph $G$, $r\geq 2$, a vertex $v\in V(G)$ and a partition\\
& & $V(G)=\{v\}\cup T^{(1)}\cup\ldots\cup T^{(r)}\cup T^{(>r)}$.\\
& \textbf{Question.} & Is $G=T^r$ for some tree $T$ such that the\\
& & $d$-neighbourhood of $v$ in $T$ is exactly $T^{(d)}$ for $d=1,\ldots,r$?
\end{tabular}
\begin{lemma}
There is a constructive polynomial time algorithm for RESTRICTED-$r$-TH-TREE-ROOT.
\end{lemma}
\begin{proof}
Define an auxiliary graph $G'$ by
\begin{align*}
V(G')= V(G) & \cup\{w_1,\ldots,w_r\}\cup\{u_1,\ldots,u_r\}\\
E(G')= E(G) & \cup\{w_i- w_j, u_i- u_j \textrm{ for all } i,j=1,\ldots,r\}\\
& \cup\{w_i- u_j \textrm{ if } i+j\leq r\}\\
& \cup\{w_i- v, u_i- v \textrm{ for all } i=1,\ldots,r\}\\
& \cup\{w_i- x, u_i- x \textrm{ for all } x\in T^{(j)} \textrm{ if } i+j\leq r\}
\end{align*}
We claim that the instance of RESTRICTED-$r$-TH-TREE-ROOT has a solution if and only if $G'$ has an $r$-th tree root (with no restrictions). Indeed, if our instance is solvable, then the solution can be turned into an $r$-th root of $G'$ by appending two paths $v- w_1-\ldots- w_r$ and $v- u_1-\ldots- u_r$ at $v$. On the other hand both sets $\{v,w_1,\ldots,w_r\}$ and $\{v,u_1,\ldots,u_r\}$ satisfy the assumptions of Lemma \ref{lemma:gadget}
\footnote{We used two paths just to ensure that the inclusions in Lemma \ref{lemma:gadget} are strict regardless of how small the rest of the graph might be. Again, with a more complicated statement of that lemma one path would suffice.}
, which implies that any $r$-th root of $G'$ has those two paths as induced subgraphs and that the $d$-neighbourhood of $v$ is $T^{(d)}\cup\{w_d,u_d\}$ for $d=1,\ldots,r$. It means that a solution to the instance of RESTRICTED-$r$-TH-TREE-ROOT can be obtained by searching for any $r$-th tree root of $G'$ and omitting the vertices $w_i, u_i$. For this we can use the algorithms of \cite{KeaCor,CKL}.
\end{proof}
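The auxiliary graph $G'$ used in this proof can be transcribed directly from its edge description. The snippet below is an illustrative sketch, not part of the paper; it assumes graphs are dictionaries mapping vertices to neighbour sets, and encodes the tail vertices $w_i$, $u_i$ as the tuples \texttt{('w', i)}, \texttt{('u', i)}.

```python
def restricted_root_gadget(G, v, levels, r):
    """Build the auxiliary graph G' from the proof above (illustrative
    transcription).  `levels[d]` is the prescribed d-neighbourhood T^(d)
    of v, for d = 1, ..., r."""
    Gp = {x: set(nbrs) for x, nbrs in G.items()}
    for side in ('w', 'u'):
        for i in range(1, r + 1):
            Gp[(side, i)] = set()

    def link(a, b):
        Gp[a].add(b)
        Gp[b].add(a)

    for side in ('w', 'u'):
        for i in range(1, r + 1):
            link((side, i), v)                 # w_i - v and u_i - v
            for j in range(i + 1, r + 1):
                link((side, i), (side, j))     # each tail spans a clique
    for i in range(1, r + 1):
        for j in range(1, r - i + 1):
            link(('w', i), ('u', j))           # w_i - u_j whenever i + j <= r
            for x in levels.get(j, ()):
                link(('w', i), x)              # w_i - x and u_i - x for
                link(('u', i), x)              # x in T^(j), i + j <= r
    return Gp
```

On the square of the path $b-a-v$ with $r=2$, the gadget reproduces exactly the square of the tree obtained by appending the two paths $v-w_1-w_2$ and $v-u_1-u_2$.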
\section{Algorithm for roots in $\girthplus{\geq 2r+3}$}
\label{section:algorithm}
In this section we present the algorithm from Theorem \ref{theorem:1}, that is the polynomial time reconstruction of all $r$-th roots in $\girthplus{\geq 2r+3}$ of a given graph $G$. There are two structural properties of graphs $H\in\girthplus{\geq 2r+3}$ that will be used freely throughout the proofs:
\begin{itemize}
\item Every $x\in V(H)$ is of degree at least 2 and the subgraph of $H$ induced by $B_x$ is a tree. This holds since any cycle in $H$ within $B_x$ would have length at most $2r+1$. We shall depict the ball $B_x$ in $H$ in a tree-like fashion.
\item If there is a simple path from $u$ to $v$ in $H$ of length exactly $r+1$ or $r+2$ then $u\not\in B_v$. Indeed, $u\in B_v$ iff there is a path of length at most $r$ from $u$ to $v$ in $H$, and combined with the first path this would yield a cycle of length at most $2r+2$.
\end{itemize}
To describe the algorithm we introduce the following sets defined for every $x,y \in V$.
\begin{align*}
S_{x,y}&=B_x\cap B_y\setminus\bigcup_{v\in B_y\setminus B_x} B_v \setminus \{x\}\\
P_{x,y}&=B_x\cap B_y\cap\bigcup_{v\in S_{x,y}} B_v\\
N_{x,y}&=B_x\cap B_y\cap\bigcap_{v\in P_{x,y}} B_v \setminus \{x\}
\end{align*}
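The three displayed sets can be computed verbatim from $G$. The sketch below is an illustrative transcription (not from the paper), with graphs encoded as dictionaries mapping each vertex to its neighbour set, so that $B_v=N_G(v)\cup\{v\}$ is available directly.

```python
def ball(G, v):
    """B_v = N_G(v) together with v itself; computable from G alone."""
    return G[v] | {v}

def N_xy(G, x, y):
    """S_{x,y}, P_{x,y} and N_{x,y} transcribed literally from the three
    displayed formulas; returns N_{x,y}."""
    Bx, By = ball(G, x), ball(G, y)
    common = Bx & By
    S = common - set().union(*(ball(G, v) for v in By - Bx)) - {x}
    P = common & set().union(*(ball(G, v) for v in S))
    return common.intersection(*(ball(G, v) for v in P)) - {x}
```

As a sanity check, for the cycle $C_9$ (girth $9=2r+3$ for $r=3$, no leaves) and $G=C_9^3$, computing $N_{0,1}$ recovers the two cycle neighbours of the vertex $0$.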
These sets become meaningful if we compute them for the endpoints of an actual edge in some $r$-th root of $G$. Precisely:
\begin{theorem}
\label{theorem:edgedetermines}
Suppose $G=H^r$ for a graph $H\in\girthplus{\geq 2r+3}$ and $xy\in E(H)$. Then
$$N_{x,y}=N_H(x).$$
\end{theorem}
\begin{proof}
Because of the girth condition the set $B_x\cup B_y$ in $H$ consists of two disjoint trees $T_x$ and $T_y$, rooted in $x$ and $y$ respectively and connected by the edge $xy$ (see Fig.\ref{fig:2}). Let us introduce some subsets of those trees. By $W_x$ and $W_y$ denote the last levels:
$$W_x=\{u\in T_x: \textrm{dist}_H(u,x)=r\}, \quad W_y=\{u\in T_y: \textrm{dist}_H(u,y)=r\},$$
by $P_x$ and $P_y$ the next-to-last levels:
$$P_x=\{u\in T_x: \textrm{dist}_H(u,x)=r-1\}, \quad P_y=\{u\in T_y: \textrm{dist}_H(u,y)=r-1\},$$
and by $N_x$ and $N_y$ the children of $x$ and $y$ in $T_x$ and $T_y$:
$$N_x=\{u\in T_x: \textrm{dist}_H(u,x)=1\},\quad N_y=\{u\in T_y: \textrm{dist}_H(u,y)=1\}.$$
Clearly $B_x\cap B_y=(T_x\setminus W_x)\cup (T_y\setminus W_y)$, $W_x=B_x\setminus B_y$ and $W_y=B_y\setminus B_x$. Note that if $r=2$ we have $N_x=P_x$ and $N_y=P_y$.
\begin{figure}[h!]
\includegraphics{fig-2}
\caption{The subgraph of $H$ induced by $B_x\cap B_y$.}
\label{fig:2}
\end{figure}
First observe that every $u\in N_x$ and every $v\in B_y\setminus B_x=W_y$ are connected by a path of length $r+2$. It follows that $u\not\in B_v$, which implies
$$N_x\subset S_{x,y}.$$
It is also clear that $S_{x,y}\subset T_x$ (because every vertex in $T_y$ has a descendant $v\in W_y$).
Now the union $\bigcup_{v\in S_{x,y}} B_x\cap B_y\cap B_v$ contains $\bigcup_{v\in N_x}B_x\cap B_y\cap B_v = (B_x\cap B_y)\setminus P_y$. On the other hand, if $v\in S_{x,y}$ and $u\in P_y$ then $u\not\in B_v$. Indeed, if $u\in B_v$ then there would be a path from $u$ to $v$ of length at most $r$. This path cannot be contained in $T_x\cup T_y$ (because $\textrm{dist}_H(u,x)=r$, so one can only get as far as $x$ going from $u$), hence it must exit $T_y$ through $W_y$ and then enter $T_x$ through $W_x$, finally reaching $v\in S_{x,y}$. However, that yields a path from $W_y$ to $S_{x,y}$ of length at most $r$ (in fact at most $r-1$), contradicting the definition of $S_{x,y}$. Altogether this proves
$$P_{x,y}=(B_x\cap B_y)\setminus P_y.$$
Now we have $\{y\}\cup N_x\subset N_{x,y}$ because every vertex of $\{y\}\cup N_x$ is in distance at most $r$ from all the vertices of $(B_x\cap B_y)\setminus P_y$. On the other hand, for every vertex $u$ of $B_x\cap B_y$ that is not in $N_x\cup\{x,y\}$ one can find a path of length $r+1$ that starts in $u$ and ends in a vertex $v\in (B_x\cap B_y)\setminus P_y$. Then $u\not\in B_v$, so $u\not\in N_{x,y}$. Such a path is obtained by going from $u$ up the tree it is contained in ($T_x$ or $T_y$) and then down in the other tree.
Summing up, we have identified $N_{x,y}$ as $N_x\cup\{y\}=N_H(x)$, as required.
\end{proof}
The previous theorem should be understood as follows. Given a graph $G$, we want to find its $r$-th root $H$. If we fix at least one edge $xy$ of $H$ in advance, we can compute the neighbourhood $N_H(x)$ of $x$ using only the data available in $G$. But then we can move on in the same way, computing the neighbours of those neighbours etc.
\begin{algorithm}[h!]
\caption{\textbf{Input:} $G$,$r$. \textbf{Output:} All $r$-th roots of $G$ in $\girthplus{\geq 2r+3}$}
\begin{algorithmic}
\STATE pick a vertex $x$ with smallest $|B_x|$
\FOR{all $y$ in $B_x$}
\STATE $H=$reconstructFromOneEdge($G,xy$)
\STATE \textbf{if} $H\in\girthplus{\geq 2r+3}$ and $H^r=G$ \textbf{output} $H$
\ENDFOR
\end{algorithmic}
\begin{algorithmic}
\STATE reconstructFromOneEdge($G,e$):
\STATE $H=(V(G),\{e\})$
\STATE \textbf{for} all $u\in V$ \textbf{set} processed[$u$]$:=$\textbf{false}
\WHILE{$H$ has an unprocessed vertex $x$ of degree at least $1$}
\STATE $y$ = any neighbour of $x$ in $H$
\STATE $E(H)=E(H)\cup\{xz \textrm{ for all } z\in N_{x,y}\}$
\STATE processed[$x$]$:=$\textbf{true}
\ENDWHILE
\STATE \textbf{return} $H$
\end{algorithmic}
\end{algorithm}
The $r$-th root algorithm is now straightforward. The procedure \emph{reconstructFromOneEdge} attempts to compute $H$ from $G$ assuming the existence of a given edge $e$ in $H$. This is repeated for all possible edges from a fixed vertex $x$. If $e$ is an edge in some $r$-th root $H$ of $G$, then Theorem \ref{theorem:edgedetermines} guarantees that we will recover exactly $H$ (in fact many times, once for each edge $xy\in H$; we omit the obvious optimization which avoids this redundancy).
~\\
\paragraph{\textbf{Remark.}} With an appropriate list representation of $G$ the set $N_{x,y}$ can be determined in time $O(|E(G)|+|V(G)|)$ for any $x,y$, so the running time of \emph{reconstructFromOneEdge} is $O(|V(G)|\cdot|E(G)|)$. By choosing the initial $x$ to be the vertex of the smallest degree in $G$ we can achieve the total running time of $O(\frac{|E(G)|}{|V(G)|}\cdot |V(G)|\cdot |E(G)|)=O(|E(G)|^2)$.
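The procedure \emph{reconstructFromOneEdge} can be sketched as follows. This is an illustrative transcription of Algorithm 1, not verbatim from the paper; graphs are dictionaries mapping vertices to neighbour sets, and \texttt{N\_xy(G, a, b)} is a callable assumed to compute the set $N_{a,b}$ defined above. The caller must still verify that the result lies in $\girthplus{\geq 2r+3}$ and that $H^r=G$.

```python
from collections import deque

def reconstruct_from_one_edge(G, x, y, N_xy):
    """Grow a candidate root H from the single assumed edge xy by setting
    the neighbourhood of each reached vertex a to N_{a,b}, where b is an
    already-known neighbour of a.  If xy is a genuine edge of a root in
    girth >= 2r+3 with no leaves, N_{a,b} = N_H(a) by the theorem above."""
    H = {v: set() for v in G}
    H[x].add(y)
    H[y].add(x)
    processed = set()
    queue = deque([x, y])
    while queue:
        a = queue.popleft()
        if a in processed:
            continue
        b = next(iter(H[a]))        # any neighbour of a known so far
        for z in N_xy(G, a, b):
            if not H[z]:            # z reached for the first time
                queue.append(z)
            H[a].add(z)
            H[z].add(a)
        processed.add(a)
    return H
```

Starting from the edge $0-1$ of $C_9$ inside $G=C_9^3$, the procedure recovers the whole cycle.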
\section{Removing the no-leaves restriction}
\label{section:with-leafs}
In this section we obtain a polynomial time algorithm for the general recognition problem $r$-TH-POWER-OF-$\girth{\geq 2r+3}$-GRAPH, proving Theorem \ref{theorem:positive}. We start with a few definitions (see Fig.\ref{fig:8}).
For a graph $H$, which is not a tree, let $\textrm{core}(H)$ denote the largest subgraph of $H$ with no vertices of degree one (leaves). Alternatively this can be defined as follows. Given $H$, let $H'$ be the graph obtained from $H$ by removing all leaves and inductively define $H^{(n)} = (H^{(n-1)})'$. This process eventually stabilizes at the graph $\textrm{core}(H)$.
A vertex $v\in V(H)$ is called a \emph{core vertex} if it belongs to $\textrm{core}(H)$ and a \emph{non-core vertex} otherwise. The non-core vertices are grouped into trees attached to the core. For every vertex $v\in\textrm{core}(H)$ we denote by $T_v$ the tree attached at $v$ (including $v$) and by $T_v^{(d)}$ (for $d\geq 0$) the set of vertices of $T_v$ located in distance $d$ from $v$. For a non-core vertex $u$ the \emph{link} of $u$ (denoted $\rooot(u)$) is its closest core vertex and the \emph{depth} of $u$ (denoted $\textrm{depth}(u)$) is the distance from $u$ to $\rooot(u)$.
\begin{figure}
\includegraphics[scale=1]{fig-8}
\caption{The notation of Section \ref{section:with-leafs}.}
\label{fig:8}
\end{figure}
\subsection{Outline of the algorithm.} The algorithm for $r$-TH-POWER-OF-$\girth{\geq 2r+3}$-GRAPH processes the input graph $G$ in several steps (see Algorithm 2). First, we check if $G$ has a tree $r$-th root \cite{KeaCor,CKL}. If not, then we split the vertices of $G$ into the core and non-core vertices of any of its $r$-th roots. Lemma \ref{lemma:leaves} ensures that this partition is uniquely determined only by the graph $G$.
Let $\tilde{G}$ be the subgraph of $G$ induced by all the vertices that are classified as belonging to the core of any possible $r$-th root $H$. We now employ the algorithm from the previous section to find all $r$-th roots $\tilde{H}$ of $\tilde{G}$ which have girth at least $2r+3$ and no leaves (there are $O(\Delta(G))$ of them; conjecturally there is at most one).
Finally, we must attach the non-core vertices to each of the possible $\tilde{H}$. It turns out that once the core is fixed, the link of each non-core vertex can be uniquely determined, so we can pin down all the sets $V(T_v)$. However, we cannot simply look for any $r$-th tree root of the subgraph of $G$ induced by $V(T_v)$, because we have to ensure that the tree structure that we are going to impose on $V(T_v)$ is compatible with the neighbourhood information contained in the rest of $G$. Fortunately Lemma \ref{lemma:depths} guarantees that for a fixed $G$ and $\textrm{core}(H)$, all the sets $T_v^{(d)}$ for $d=1,\ldots,r$ are also uniquely determined. Since all the distances from the vertices of $T_v$ to the rest of the graph depend only on the vertex depths and the structure of the core, this is exactly the additional piece of data we need. Any tree root satisfying the given depth constraints will be compatible with the rest of the graph. Concluding, the problem we are left with for each $T_v$ is the RESTRICTED-$r$-TH-TREE-ROOT from Section \ref{section:reductions}. If all these instances have positive solutions, then the graph $H$ defined as $\tilde{H}$ with the trees $T_v$ attached at each core vertex $v$ is an $r$-th root of $G$.
The next two subsections describe the two crucial steps: detecting non-core vertices and the reconstruction of trees $T_v$.
\begin{algorithm}[h!]
\label{algorithm:2}
\caption{\newline \textbf{Input:} $G$,$r$.\newline \textbf{Output:} $r$-th roots of $G$ in $\girth{\geq 2r+3}$ (one per each core)}
\begin{algorithmic}
\STATE check if $G=T^r$ for some tree $T$
\STATE
\STATE $\tilde{G}:=G$
\WHILE{$\tilde{G}$ has vertices $u,v$ with $B_u\subset B_v$}
\STATE remove from $\tilde{G}$ all $u$ such that $B_u\subset B_v$ for some $v$
\ENDWHILE
\STATE
\FOR{every graph $\tilde{H}\in\girthplus{\geq 2r+3}$ such that $\tilde{H}^r=\tilde{G}$}
\STATE $H:=\tilde{H}$
\FOR{every vertex $v\in V(\tilde{H})$}
\STATE find $V(T_v)$ and a partition $V(T_v)=\{v\}\cup T_v^{(1)}\cup\ldots\cup T_v^{(r)}\cup T_v^{(>r)}$
\STATE use $restrictedTreeRoot$ to reconstruct some tree $T_v$
\STATE extend $H$ by attaching $T_v$ at $v$
\ENDFOR
\STATE \textbf{if} all $T_v$ existed \textbf{output} $H$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Finding core and non-core vertices.}
The next lemma shows how to detect all vertices located ``close to the bottom'' of the trees $T_v$ in $H$.
\begin{lemma}
\label{lemma:leaves}
Suppose $H\in\girth{\geq 2r+3}$ and $H^r=G$. Then the following conditions are equivalent for a vertex $u\in H$:
\begin{itemize}
\item[(1)] There is some other vertex $v\in H$ such that $B_u\subset B_v$.
\item[(2)] $u\not\in H^{(r)}$.
\end{itemize}
\end{lemma}
\begin{proof}
If $u\in H^{(r)}$ then $u$ is not removed in the first $r$ steps of cutting off the leaves of $H$, which means there exist at least two disjoint paths of length $r$ starting at $u$. However, this implies that for every vertex $v\in B_u$ there exists another $v'\in B_u$ (on one of those paths) such that $\textrm{dist}_H(v,v')=r+1$, hence $v'\in B_u$ but $v'\not\in B_v$. Therefore $B_u$ is not contained in $B_v$ for any $v\neq u$.
If, on the other hand, $u\not\in H^{(r)}$, then $u$ becomes a leaf after at most $r-1$ steps of the leaf-removal procedure and is removed in the subsequent step. Let $v$ be the last vertex adjacent to $u$ just before $u$ is removed. Clearly $B_u\subset B_v$.
\end{proof}
An inductive repetition of the above criterion determines the consecutive sets $V(H^{(r)})$, $V(H^{(2r)})$, $V(H^{(3r)})$, $\ldots$ for \emph{any} $r$-th root $H\in\girth{\geq 2r+3}$ of $G$ using only the information available in $G$. Eventually we obtain $V(\textrm{core}(H))$ which is the vertex set of $\tilde{G}$.
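The pruning loop of Algorithm 2 implements exactly this iteration. The sketch below is illustrative (not from the paper), again assuming the dictionary encoding of graphs; by the lemma above, each round removes precisely the vertices outside the next $H^{(r)}$, so the surviving set is $V(\textrm{core}(H))$ for any $r$-th root $H\in\girth{\geq 2r+3}$.

```python
def core_vertices(G):
    """Repeatedly delete every vertex u whose ball B_u is properly
    contained in another ball B_v, recomputing the balls in the shrunken
    graph after each round; what survives is the common core vertex set."""
    G = {v: set(nbrs) for v, nbrs in G.items()}
    while True:
        balls = {v: G[v] | {v} for v in G}
        doomed = {u for u in G if any(balls[u] < balls[v] for v in G)}
        if not doomed:
            return set(G)
        for u in doomed:
            del G[u]
        for v in G:
            G[v] -= doomed
```

For example, for $H$ equal to $C_9$ with one pendant vertex attached and $G=H^3$, the pendant vertex is pruned in the first round and the cycle survives.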
\textbf{Remark.} Repeated application of Lemma \ref{lemma:leaves} also proves that if $G$ has a tree $r$-th root then it does not have a non-tree $r$-th root in $\girth{\geq 2r+3}$ and vice-versa.
\subsection{Attaching the trees $T_v$.}
For each possible $\textrm{core}(H)$ we need to decide on a way of attaching the remaining (non-core) vertices to $H$ in a way which ensures that $H^r=G$. It turns out that all the data necessary to ensure the compatibility can be read off from $G$ and $\textrm{core}(H)$, so again this data is common for all the possible $r$-th roots of $G$ that have a fixed core.
\begin{lemma}
\label{lemma:depths}
Suppose that $H\in\girth{\geq 2r+3}$ is a graph such that $H$ is not a tree and $H^r=G$. Then for every non-core vertex $u$ of $H$ we have:
\begin{itemize}
\item either $B_u\cap V(\textrm{core}(H))=\emptyset$, in which case $\textrm{depth}(u)>r$, or
\item the subgraph of $H$ induced by $B_u\cap V(\textrm{core}(H))$ is a tree whose only center is $\rooot(u)$ and whose height (the distance from the center to every leaf) is $r-\textrm{depth}(u)$.
\end{itemize}
\end{lemma}
\begin{proof}
The first statement is obvious. As for the second, the subgraph induced by $B_u\cap V(\textrm{core}(H))$ consists of all the vertices of $V(\textrm{core}(H))$ in distance at most $r-\textrm{depth}(u)$ from $\rooot(u)$. Since $\textrm{core}(H)$ is a graph of girth at least $2r+3$ with no degree one nodes, these vertices induce a tree in $H$, and all the leaves of this tree are exactly in distance $r-\textrm{depth}(u)$ from $\rooot(u)$. Therefore $\rooot(u)$ is the unique center of that tree.
\end{proof}
Lemma \ref{lemma:depths} yields a method of partitioning the non-core vertices into the sets $V(T_v)$ and subdividing each $V(T_v)$ into a disjoint union $\{v\}\cup T_v^{(1)}\cup\ldots\cup T_v^{(r)}\cup T_v^{(>r)}$ of vertices in distance $1,2,\ldots,r$ and more than $r$ from $v$ using only the data from $G$ and $\textrm{core}(H)$. Indeed, for the vertices $u$ with $B_u\cap V(\textrm{core}(H))\not=\emptyset$ one finds the center and height of the subtree of $\textrm{core}(H)$ induced by $B_u\cap V(\textrm{core}(H))$ and applies the second part of Lemma \ref{lemma:depths} to obtain both $\rooot(u)$ and $\textrm{depth}(u)$, thus classifying $u$ to the appropriate $T_v^{(d)}$. The links of all remaining vertices are determined using the fact that all vertices in one connected component of $G\setminus\bigcup_{v\in \textrm{core}(H), d=0,\ldots,r-1} T_v^{(d)}$ have the same link.
\textbf{Remark.} The above partition can also be obtained (perhaps in a computationally easier way) from the following fact:
\begin{lemma}
If $H\in\girth{\geq 2r+3}$ is a graph such that $H$ is not a tree and $H^r=G$, then for every vertex $v$ of $\textrm{core}(H)$ and every $d=1,\ldots,r$ we have:
$$T_v^{(d)} = \bigcap_{\substack{a\in \textrm{core}(H)\\ \textrm{dist}_H(v,a)\leq r-d}}B_a\setminus \bigcup_{\substack{b\in \textrm{core}(H)\\ \textrm{dist}_H(v,b)\geq r-d+1}}B_b.$$
\end{lemma}
\begin{proof}
The $\subset$ inclusion is obvious. Now suppose $u$ belongs to the right-hand side. If $u$ is in $T_v$, then it clearly must have depth $d$, so it suffices to prove that $u$ cannot belong to any other $T_{v'}$. Suppose this is the case: $\rooot(u)=v'\neq v$ and $l:=\textrm{depth}(u)$, $1\leq l\leq r$. Set $k=\textrm{dist}_H(v,v')$ and let $\Gamma$ denote the shortest path from $v$ to $v'$.
If $r-d\leq k-1$ then let $a,b\in\Gamma$ be the vertices in distance $r-d$ and $r-d+1$ from $v$, respectively. By assumption $u\in B_a$ and $u\not\in B_b$, but that is impossible since $b$ is closer to $u$ than $a$.
When $r-d\geq k$ extend the path $\Gamma$ beyond $v'$ in $\textrm{core}(H)$, so as to reach two vertices $a$ and $b$ in distance $r-d$ and $r-d+1$ from $v$, respectively (this is possible in $\textrm{core}(H)$). The assumption $u\in B_a\setminus B_b$ enforces $\textrm{dist}_H(u,a)=r$, but $\textrm{dist}_H(u,a)=l+(r-d-k)$ so $l=k+d$.
Now extend the path $\Gamma$ beyond $v$ up to a point $w$ such that $\textrm{dist}_H(v,w)=r-k-l+1$ (this is possible because $u\in B_v$ implies $k+l\leq r$ and because we are in the core). Moreover, the path $u- v'- v- w$ has length $r+1$ so it measures the distance from $u$ to $w$ and proves that $u\not\in B_w$. On the other hand:
$$\textrm{dist}_H(v,w)=r-k-l+1=r-k-(k+d)+1=r-d+1-2k<r-d$$
so we ought to have $u\in B_w$. This contradiction ends the proof.
\end{proof}
\section{Hardness results}
\label{section:large-girth}
Now we proceed to the hardness of recognition for powers of graphs of lower-bounded girth (Theorem \ref{theorem:girth}). For the reductions we use the following NP-complete problem (see \cite[Prob. SP4]{Gar}). It has already been successfully applied in this context (\cite{4auth, Lau, LauCor, LeNg}).
\begin{tabular}{llp{12cm}}
& \textbf{Problem.} & HYPERGRAPH 2-COLORABILITY (H2C{}) \\
& \textbf{Instance.} & A finite set $S$ and a collection $S_1,\ldots,S_m$ of subsets of $S$. \\
& \textbf{Question.} & Can the elements of $S$ be colored with two colors $A$, $B$ such that each set $S_j$ has elements of both colors?
\end{tabular}
An instance of this problem (also known as SET-SPLITTING) will be denoted $\mathcal{S}=(S;S_1,\ldots,S_m)$. We shall refer to the elements of the universe $S$ as $x_1,\ldots, x_n$. Any assignment of colors $A$ and $B$ to the elements of $S$ which satisfies the requirements of the problem will be called a \emph{2-coloring}.
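For small instances a 2-coloring can be found by exhaustive search. The sketch below is purely illustrative: it is exponential in $|S|$, which is consistent with the point of this section that the problem is NP-complete.

```python
from itertools import product

def has_2_coloring(elements, subsets):
    """Exhaustive check for HYPERGRAPH 2-COLORABILITY (SET-SPLITTING):
    try every A/B-assignment and test whether each subset S_j receives
    both colors."""
    elements = list(elements)
    for assignment in product('AB', repeat=len(elements)):
        colour = dict(zip(elements, assignment))
        if all({colour[x] for x in Sj} == {'A', 'B'} for Sj in subsets):
            return True
    return False
```

The instance from Fig.\ref{fig:red-odd}, $\mathcal{S}=(\{x_1,\ldots,x_4\};\{x_1,x_2\},\{x_1,x_3,x_4\},\{x_2,x_4\})$, is 2-colorable (e.g. $x_1,x_4\mapsto A$ and $x_2,x_3\mapsto B$).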
In this section we fix $r$ and let $k=\lfloor \frac{r}{2}\rfloor$, so that $r=2k$ or $r=2k+1$ depending on parity. We shall define a graph that encodes both the structure of the H2C{} instance and the coloring. To ensure large girth, the connections between vertices representing sets, elements and colors will be realized by paths of length $k$. Since the graph under consideration is rather large we shall describe its succinct representation that does not require the enumeration of all edges.
\subsection{Case of odd $r=2k+1$}
Consider an instance $\mathcal{S}=(S;S_1,\ldots,S_m)$ of H2C. The following two definitions describe an auxiliary graph that will be used as a base for further constructions. The reader is referred to Fig.\ref{fig:red-odd} for a self-explanatory presentation of the graphs $K_\mathcal{S}$ and $H_\mathcal{S}$ defined below.
\begin{definition}
For an instance $\mathcal{S}=(S;S_1,\ldots,S_m)$ let $V_\mathcal{S}$ be the following set of vertices:
\begin{itemize}
\item $S_j, x_i$ for all subsets and elements,
\item $A, B, X$,
\item $T_{i,j}^{(l)}$ for every pair $i,j$ such that $x_i\in S_j$ and every $l=1,\ldots,k-1$,
\item $P_{i}^{(l)}$ for every $x_i$ and every $l=1,\ldots,k-1$,
\item the tail vertices $S_j^{(l)}$ for each $j$ and $l=1,\ldots,r$.
\end{itemize}
\end{definition}
\begin{definition}
Given any instance $\mathcal{S}=(S;S_1,\ldots,S_m)$ define a graph $K_\mathcal{S}$ on the vertex set $V_\mathcal{S}$ with the following edges:
\begin{itemize}
\item a path $S_j- T_{i,j}^{(1)}-\ldots- T_{i,j}^{(k-1)}- x_i$ whenever $x_i\in S_j$,
\item a path $x_i- P_{i}^{(1)}-\ldots- P_{i}^{(k-1)}$ for every $x_i$,
\item $X- x_i$ for all $i$,
\item the tail paths, that is $S_j- S_j^{(1)}- S_j^{(2)}-\ldots- S_j^{(r)}$ for every $j$.
\end{itemize}
\end{definition}
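To make the succinct representation concrete, here is one way to enumerate the edges of $K_\mathcal{S}$ for the odd case (the tuple encodings of vertices and the function names are our own choices, not part of the paper):

```python
def path_edges(seq):
    """Edges of a simple path along the vertex sequence seq."""
    return list(zip(seq, seq[1:]))

def build_K(elements, subsets, k):
    """Edge list of K_S for odd r = 2k+1.  Vertices are encoded as tuples:
    ('S', j), ('x', i), ('T', i, j, l), ('P', i, l), the tail vertices
    ('St', j, l), and the hub ('X',).  A and B are isolated in K_S."""
    r = 2 * k + 1
    edges = []
    for j, Sj in enumerate(subsets):
        for x in Sj:
            # connection path S_j - T^(1) - ... - T^(k-1) - x_i of length k
            edges += path_edges([('S', j)]
                                + [('T', x, j, l) for l in range(1, k)]
                                + [('x', x)])
        # tail path S_j - S_j^(1) - ... - S_j^(r)
        edges += path_edges([('S', j)] + [('St', j, l) for l in range(1, r + 1)])
    for x in elements:
        # loose path x_i - P^(1) - ... - P^(k-1) (empty when k = 1)
        edges += path_edges([('x', x)] + [('P', x, l) for l in range(1, k)])
        edges.append((('X',), ('x', x)))  # hub edges X - x_i
    return edges
```

For the instance of Fig.\ref{fig:red-odd} with $k=1$ this yields the $7+9+4=20$ edges of $K_\mathcal{S}$ (memberships, tails, hub edges).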
\begin{figure}
\includegraphics[scale=0.9]{fig-9}
\caption{For $\mathcal{S}=(\{x_1,\ldots,x_4\}; \{x_1,x_2\},\{x_1,x_3,x_4\},\{x_2,x_4\})$ the graph $K_\mathcal{S}$ consists of all \emph{but} the shaded edges. The graph $H_\mathcal{S}$ (made of all the edges above) encodes the coloring with $x_1,x_4$ of color $A$ and $x_2,x_3$ of color $B$. It is a 2-coloring of $\mathcal{S}$ since every $S_j$ is at distance $2k$ from both $A$ and $B$.}
\label{fig:red-odd}
\end{figure}
This graph encodes only the structure of $\mathcal{S}$. To encode the coloring we link the loose paths from $x_i$ to either $A$ or $B$.
\begin{definition}
Given an instance $\mathcal{S}$ and a color assignment, define the graph $H_\mathcal{S}$ to be $K_\mathcal{S}$ with the additional edges $P_i^{(k-1)}- A$ whenever $x_i$ has color $A$ and $P_i^{(k-1)}- B$ whenever $x_i$ has color $B$.
\end{definition}
Note that $H_\mathcal{S}$ has girth $2k+2=r+1$. Now comes the graph to be used in our NP-completeness reduction:
\begin{definition}
For any instance $\mathcal{S}=(S;S_1,\ldots,S_m)$ of H2C{} put
$$G_\mathcal{S}={K_\mathcal{S}}^r\cup E_\mathcal{S}$$
where $E_\mathcal{S}$ is the set of edges from $A$ and $B$ to each of $X$, $x_i$, $S_j$, $T_{i,j}^{(l)}$, $P_i^{(l)}$, and $S_j^{(1)}$ for all possible $i,j,l$.
\end{definition}
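Computing the $r$-th power ${K_\mathcal{S}}^r$ itself only needs breadth-first search truncated at depth $r$; a generic sketch (our own helper, assuming an adjacency-set representation):

```python
from collections import deque

def graph_power(adj, r):
    """r-th power of a graph: u ~ v iff 0 < dist(u, v) <= r.
    adj maps each vertex to the set of its neighbours."""
    power = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:  # BFS truncated at depth r
            u = queue.popleft()
            if dist[u] == r:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        power[src] = set(dist) - {src}
    return power
```

On the path $a-b-c-d$, for instance, the square connects exactly the pairs at distance at most $2$.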
Observe that $G_\mathcal{S}$ is defined independently of any particular color assignment. Moreover:
\begin{lemma}
\label{lemma:dupsko1}
For any 2-colored instance $\mathcal{S}$ we have $G_\mathcal{S}={H_\mathcal{S}}^r$.
\end{lemma}
\begin{proof}
Since $K_\mathcal{S}\subset H_\mathcal{S}$, we have ${K_\mathcal{S}}^r\subset {H_\mathcal{S}}^r$. Now let us prove that $E_\mathcal{S}\subset{H_\mathcal{S}}^r$. In $H_\mathcal{S}$ both $A$ and $B$ are at distance $2k=r-1$ from each $S_j$ (because we had a proper 2-coloring), hence in ${H_\mathcal{S}}^r$ we have edges from $A$ and $B$ to $S_j$ and $S_j^{(1)}$ (and to no further $S_j^{(l)}$). Both $A$ and $B$ are at distance $k+1$ from $X$, and $X$ is at most $k$ steps from each of $x_i$, $T_{i,j}^{(l)}$ and $P_i^{(l)}$, therefore $A$ and $B$ are at most $(k+1)+k=r$ steps from all these vertices. This proves that $G_\mathcal{S}\subset {H_\mathcal{S}}^r$.
Next we prove that ${H_\mathcal{S}}^r\subset G_\mathcal{S}$. Indeed, we have already checked that the edges between $\{A,B\}$ and the rest of the graph ${H_\mathcal{S}}^r$ are exactly as described by $E_\mathcal{S}$. Moreover, $A$ and $B$ are not adjacent in ${H_\mathcal{S}}^r$ since $\textrm{dist}_{H_\mathcal{S}}(A,B)=2(k+1)=r+1$.
We only need to check that if two vertices $u$, $v$ other than $A$, $B$ are at distance at most $r$ in $H_\mathcal{S}$, then they are at distance at most $r$ in $K_\mathcal{S}$. If the shortest path from $u$ to $v$ in $H_\mathcal{S}$ does not pass through $A$ or $B$, then this is immediate. If it does pass through one of them (say through $A$),
then both $u$ and $v$ must be from the set $\{x_i,S_j,X,T_{i,j}^{(l)},P_i^{(l)}\}$, because they cannot be further than $r-1=2k$ from $A$. The case $(u,v)=(S_j,S_{j'})$ is impossible, therefore at least one of $u$, $v$ (say $u$) is not any of the $S_j$. But then $\textrm{dist}_{K_\mathcal{S}}(u,X)\leq k$ and $\textrm{dist}_{K_\mathcal{S}}(v,X)\leq k+1$, so $\textrm{dist}_{K_\mathcal{S}}(u,v)\leq k+(k+1)=r$ as required.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem:girth} for odd $r$]
Given an instance $\mathcal{S}=(S;S_1,\ldots,S_m)$ construct the graph $G_\mathcal{S}$. If $\mathcal{S}$ has a 2-coloring, then $G_\mathcal{S}$ is the $r$-th power of a graph with girth at least $r+1$, namely $G_\mathcal{S}={H_\mathcal{S}}^r$ by Lemma \ref{lemma:dupsko1}.
For the converse implication suppose that $G_\mathcal{S}=H^r$ for some graph $H$. Define the coloring as follows: $x_i$ has color $A$ (resp.\ $B$) if there is a path of length at most $k$ from $x_i$ to $A$ (resp.\ $B$) in $H$. Clearly each $x_i$ is assigned at most one color, since otherwise $A$ and $B$ would be adjacent in $H^r$.
The tail structure $S_j,S_j^{(1)},\ldots,S_j^{(r)}$ of each $S_j$ satisfies the assumptions of Lemma \ref{lemma:gadget}, so it enforces that in $H$:
\begin{itemize}
\item for every $j$ the $k$-neighbourhood of $S_j$ is precisely $\{x_i: x_i\in S_j\}\cup\{S_j^{(k)}\}$ (as in $K_\mathcal{S}$),
\item $A$ and $B$ are exactly in distance $2k$ from each $S_j$ (by the definition of $E_\mathcal{S}$).
\end{itemize}
Therefore for each $j$ there has to be at least one vertex in $\{x_i: x_i\in S_j\}$ that is $k$ steps from $A$ and at least one that is $k$ steps from $B$. This proves that the obtained coloring solves the H2C{} instance.
\end{proof}
\subsection{Case of even $r=2k$}
The argument in this case is similar, so we just outline the necessary changes. This is an extension of the construction from \cite{4auth}.
Define the vertex set $V_\mathcal{S}$ as
\begin{itemize}
\item $x_i$, $S_j$, $X$, $A$, $A'$, $B$, $B'$,
\item $T_{i,j}^{(l)}$ whenever $x_i\in S_j$ and $l=1,\ldots,k-1$,
\item $P_{i,A}^{(l)}$, $P_{i,B}^{(l)}$ for all $i$ and $l=1,\ldots,k-1$,
\item $S_j^{(l)}$ for all $j$ and $l=1,\ldots,r$.
\end{itemize}
The graph $K_\mathcal{S}$ is built as previously, except that instead of a path $x_i-\ldots- P_i^{(k-1)}$ we have two paths $x_i- P_{i,A}^{(1)}-\ldots- P_{i,A}^{(k-1)}$ and $x_i- P_{i,B}^{(1)}-\ldots- P_{i,B}^{(k-1)}$. Additionally we provide $K_\mathcal{S}$ with edges $A- A'$ and $B- B'$. See Fig.\ref{fig:red-even}.
For a 2-colored instance $\mathcal{S}=(S;S_1,\ldots,S_m)$ the graph $H_\mathcal{S}$ is defined as the extension of $K_\mathcal{S}$ by the edges:
\begin{itemize}
\item $P_{i,A}^{(k-1)}- A$ and $P_{i,B}^{(k-1)}- B'$ if $x_i$ has color $A$,
\item $P_{i,A}^{(k-1)}- A'$ and $P_{i,B}^{(k-1)}- B$ if $x_i$ has color $B$.
\end{itemize}
Note that $H_\mathcal{S}$ has girth $2k+2=r+2$.
Eventually we set $G_\mathcal{S}={K_\mathcal{S}}^r\cup E_\mathcal{S}$, where the edges of $E_\mathcal{S}$ are from $A$, $A'$, $B$, $B'$ to each of $x_i$, $S_j$, $T_{i,j}^{(l)}$, $P_{i,A}^{(l)}$, $P_{i,B}^{(l)}$, $X$ and two extra edges $A- B'$ and $B- A'$.
Intuitively, $A$, $B$ encode the colors while $A'$, $B'$ encode the ``non-colors'', i.e. an element $x_i$ of color $A$ is connected to $A$ and $B'$ (``non-$B$''). This complication is necessary to ensure that $T_{i,j}^{(1)}$ and $A$ are connected by a path of length at most $r$ regardless of the color of $x_i$. (Indeed, had we only retained the vertices $A$, $B$ as before, then $\textrm{dist}_{H_\mathcal{S}}(T_{i,j}^{(1)}, A)$ would be either $2k-1=r-1$ if $x_i$ had color $A$ or $2k+1=r+1$ otherwise. Remember that the $r$-th power of $H_\mathcal{S}$ must not depend on the color assignment.)
\begin{figure}[h!]
\includegraphics[scale=0.9]{fig-10}
\caption{The graphs $K_\mathcal{S}$ and $H_\mathcal{S}$ for even $r$. For description see Fig.\ref{fig:red-odd}.}
\label{fig:red-even}
\end{figure}
Also, when proving $G_\mathcal{S}={H_\mathcal{S}}^r$ one must consider the vertex pairs $(P_{i,A}^{(k-1)},S_j)$ and $(P_{i,B}^{(k-1)},S_j)$ separately, since whether they form an edge in ${K_\mathcal{S}}^r$ or not depends on whether $x_i\in S_j$. However, this dependence does not change when passing from ${K_\mathcal{S}}^r$ to ${H_\mathcal{S}}^r$.
With these alterations the proof in this case is analogous to the one for odd $r$.
\section{Conclusions and open problems}
\label{section:conclusions}
In this work we presented an efficient algorithmic solution to Levenshtein's reconstruction problem and applied it to the more general, unrestricted $r$-th root problem. From a high-level perspective, this was possible because we could extract the ``core of the problem'', which has very few solutions (as the conjecture suggests), so we could hope to find them quickly. We also hope that the reverse flow of ideas is possible, so that an improved algorithmic edge-by-edge reconstruction technique might help resolve Levenshtein's conjecture.
Another (probably challenging) problem is to find a complete girth-parametrized complexity dichotomy, that is to close the gap between $r+1$ (or $r+2$) and $2r+3$. We believe that the $r$-th power recognition remains NP-complete even for graphs of girth $2r$.
In fact it would even be very interesting to find the complexity of SQUARE-OF-$\girth{\geq 5}$-GRAPH, to complete the complexity dichotomy of \cite{4auth} for $r=2$, because of possible implications in algebraic graph theory. Note that if any algorithm for this problem is run with the complete graph $G=K_n$ as input, then the answer is ``yes'' if and only if there exists a graph on $n$ vertices that has girth at least $5$ and diameter $2$. By the Hoffman-Singleton theorem (see \cite{Singl,Biggs}) such a graph may exist only for $n=5,10,50$ and $3250$. The first three of these graphs are known, and the existence of the last one (for $n=3250$) is a long-standing open problem. Therefore, any efficient algorithm for SQUARE-OF-$\girth{\geq 5}$-GRAPH might (at least in principle) solve this problem.
| {
"timestamp": "2009-09-22T16:46:55",
"yymm": "0909",
"arxiv_id": "0909.4011",
"language": "en",
"url": "https://arxiv.org/abs/0909.4011",
"abstract": "We study the problem of recognizing graph powers and computing roots of graphs. We provide a polynomial time recognition algorithm for r-th powers of graphs of girth at least 2r+3, thus improving a bound conjectured by Farzad et al. (STACS 2009). Our algorithm also finds all r-th roots of a given graph that have girth at least 2r+3 and no degree one vertices, which is a step towards a recent conjecture of Levenshtein that such root should be unique. On the negative side, we prove that recognition becomes an NP-complete problem when the bound on girth is about twice smaller. Similar results have so far only been attempted for r=2,3.",
"subjects": "Data Structures and Algorithms (cs.DS); Discrete Mathematics (cs.DM)",
"title": "Large-girth roots of graphs"
} |
https://arxiv.org/abs/1808.05797 | Single-Server Multi-Message Private Information Retrieval with Side Information | We study the problem of single-server multi-message private information retrieval with side information. One user wants to recover $N$ out of $K$ independent messages which are stored at a single server. The user initially possesses a subset of $M$ messages as side information. The goal of the user is to download the $N$ demand messages while not leaking any information about the indices of these messages to the server. In this paper, we characterize the minimum number of required transmissions. We also present the optimal linear coding scheme which enables the user to download the demand messages and preserves the privacy of their indices. Moreover, we show that the trivial MDS coding scheme with $K-M$ transmissions is optimal if $N>M$ or $N^2+N \ge K-M$. This means if one wishes to privately download more than the square-root of the number of files in the database, then one must effectively download the full database (minus the side information), irrespective of the amount of side information one has available. | \section{Introduction}
Consider $K$ independent messages stored at a single server.
One user wants to download $N$ messages from the server while it already has $M$ messages as side information.
The user sends queries to the server and the server replies with (coded) messages according to the user's requests.
Private information retrieval requires that the server should not be able to infer any information about the indices of the messages that the user wants to download.
We refer to this problem as Single-server Multi-message Private Information Retrieval with Side Information (SMPIRSI).
To solve the SMPIRSI problem, we need to find the minimum required number of transmissions that the user should request from the server and the optimal linear coding scheme which enables the user to decode the demand messages while protecting the indices of the demand messages from the server.
\subsection{Related Work}
The Private Information Retrieval (PIR) problem was first studied in~\cite{492461} from a computational complexity perspective.
Recently, the PIR problem has attracted considerable attention in the information theory community and many works study it from an information-theoretic point of view~\cite{1181949,Yekhanin:2010:PIR:1721654.1721674,7889028,8119895}.
In the canonical setting, a single user wants to privately download one message from a database.
To achieve perfect privacy in the information-theoretic sense, if the database is only stored at one server, the user has to download all messages.
The problem becomes more interesting if one supposes that the database is stored in multiple servers and there is no collusion between these servers.
By exploiting the advantages of replications of the database in non-colluding servers, private information retrieval can be achieved without downloading all messages and the capacity of this problem is characterized in~\cite{7889028}.
Ensuing work has studied many variations of this theme, including databases coded by erasure codes~\cite{6874954,7282975,8006763,7997029,7997393,7541531}, partial colluding servers~\cite{8119895,8006861,2017arXiv170601442B}, side information messages available at users~\cite{2017arXiv170900112K,2017arXiv170607035T,2017arXiv170901056W,2017arXiv170903022C,Li1806:Single} and multiple messages~\cite{8006859,DBLP:journals/corr/abs-1805-11892}.
In~\cite{8006859}, Banawan and Ulukus consider the problem that the user wants to download multiple messages from multiple servers, but there is no side information at the user.
In~\cite{DBLP:journals/corr/abs-1805-11892}, Shariatpanahi {\it et al.} study the multi-message PIR problem with side information and the user wants to protect both the privacy of the indices of demand messages and of the side information messages.
In our problem, the user is only interested in protecting the privacy of the indices of the demand messages, which is a more challenging problem than protecting both the indices of the demand and side information messages.
The single-server multi-message PIR with side information problem was studied concurrently in~\cite{2018arXiv180709908H}, which obtains the same results as ours when $N>M$ and presents achievability results when $N\le M$\footnote{Our work and~\cite{2018arXiv180709908H} study the same problem independently. We submitted our work to the 56th Allerton conference on 9th July 2018.}.
\subsection{Contributions}
\begin{itemize}
\item[(1)] We present a closed-form expression for the minimum number of required transmissions for the SMPIRSI problem.
\item[(2)] We propose a novel method, Partition-and-MDS-Coding, to generate optimal linear coding schemes which satisfy the requirements of SMPIRSI and use the minimum number of transmissions.
\item[(3)] We show that the trivial MDS coding scheme with $K-M$ transmissions is optimal when the number of demand messages satisfies either $N> M$ or $N^2+N \ge K-M$.
\end{itemize}
\section{System Model and Definition}
Consider a server which stores $K$ independent messages, denoted by $\mathbf{X} = \{X_1,\dots,X_K\}$.
Each message $X_i \in \mathbb{F}$, where $\mathbb{F}$ is some finite field.
One user initially has $M$ side information messages and wants to download $N$ messages from the server, while the user does not want to reveal any information about the indices of the demand messages to the server.
We assume that the server only knows the number $M$ of side information messages of the user but has no idea about which messages the user has.
Let $\mathbf{W} = \{W_1,\dots,W_N\} \subseteq [K]$ denote the set of indices of the demand messages and $\mathbf{S} = \{S_1,\dots,S_M\} \subset [K] \setminus \mathbf{W}$ denote the set of indices of the side information messages.
Let $\mathcal{S}$ and $\mathcal{W}$ denote the random variables corresponding to the indices of side information messages and demand messages.
We assume that $\mathcal{W}$ is uniformly distributed over all subsets of $[K]$ with size $N$, i.e.,
\begin{align}
\Pr(\mathcal{W} = \mathbf{W}) = \frac{1}{\binom{K}{N}}&& \forall \mathbf{W} \subseteq [K], |\mathbf{W} |=N.
\end{align}
Moreover, $\mathcal{S}$ is also uniformly distributed over all subsets of $[K]\setminus \mathbf{W}$ with size $M$, i.e.,
\begin{align}
\Pr(\mathcal{S} = \mathbf{S}| \mathbf{W}) = \frac{1}{\binom{K-N}{M}} && \forall \mathbf{S} \subseteq [K]\setminus \mathbf{W}, |\mathbf{S}| = M.
\end{align}
To retrieve the demand messages, the user sends a query $Q(\mathbf{W},\mathbf{S})$, which is determined by the indices of the demand messages and side information messages, to the server, and the server replies with coded messages according to the query.
In this paper, we only consider linear coding schemes.
Let $\mathbf{T} = \{T_1,\dots,T_R\}$ denote the linear coding scheme with $R$ transmissions to be sent to the user by the server.
Private information retrieval requires $\mathbf{T}$ to satisfy two conditions:
\begin{enumerate}
\item \textbf{Retrieval Condition (Correctness)}: The user should be able to decode all demand messages from $\mathbf{T}$ by using its locally available side information messages, that is,
\begin{align}
H(X_{\mathbf{W}} | \mathbf{T}, X_{\mathbf{S}}) = 0\label{eq:retcon}.
\end{align}
\item \textbf{Privacy Condition}: The server should not be able to infer any information about the indices of the demand messages from the query, that is,
\begin{align}
I(\mathcal{W};Q(\mathbf{W},\mathbf{S})) = 0\label{eq:pricon}.
\end{align}
\end{enumerate}
To satisfy the \textbf{Privacy Condition}, it is equivalent to require
\begin{align}
H(\mathcal{W}|Q(\mathbf{W},\mathbf{S})) = H(\mathcal{W}).
\end{align}
\begin{definition}[\textbf{Coding subspace}]\label{def:codingsubspace}
For any linear coding scheme $\mathbf{T} = \{T_1,\dots,T_R\}$, let $\text{supp}(T_i)$ denote the messages which are used to generate $T_i$.
Define a partition of messages $\mathcal{P}(\mathbf{T}) = \{\wp_1,\dots\}$ such that for each $T_i$ there exists a unique $\wp_j \in \mathcal{P}(\mathbf{T})$ with $\text{supp}(T_i) \subseteq \wp_j$.
We call the subspace spanned by each $\wp_j$ a coding subspace.
\end{definition}
For any linear coding scheme, it is possible to find its coding subspace(s).
The coding subspaces must jointly contain all messages; otherwise any non-included message could not be a demand message, which would violate the Privacy Condition.
The minimum number of required transmissions for single-server multi-message private information retrieval with side information can be computed by
\begin{align}
R^* = \min_{\mathcal{L} \in \Pi(K)} \sum_{i =1}^{|\mathcal{L}|} R(\mathcal{L}_i)\label{eq:r*}
\end{align}
where $\Pi(K)$ is the set of all partitions of $K$, $\mathcal{L} = \{\mathcal{L}_1,\dots,\mathcal{L}_{|\mathcal{L}|}\}$, and $R(\mathcal{L}_i)$ is the minimum number of required transmissions for a coding subspace of size $\mathcal{L}_i$.
\begin{definition}[\textbf{MDS-Condition}]
A linear coding scheme $\mathbf{T}$ satisfies the MDS-Condition in the coding subspace spanned by the messages in $\wp_i\in \mathcal{P}(\mathbf{T})$ if there exists a non-negative integer $M_i$ such that
given any $M_i$ messages in $\wp_i$, all other messages can be decoded and none of the messages can be decoded given any $M_i-1$ messages.
\end{definition}
\begin{example}\label{eg:1}
Consider the SMPIRSI problem with the following setup:
$\mathbf{X} = \{X_1,X_2,\dots,X_{13}\}$, $\mathbf{W} = \{2,5\}$ and $\mathbf{S} = \{1,4,6,7,9\}$.
We give the following linear coding scheme $\mathbf{T} = \{T_1,\dots,T_6\}$:
\begin{align}
T_1 &= X_1 + X_2+X_4+X_6+X_8\\
T_2 &= X_1 + 2X_2+3X_4+4X_6+5X_8\\
T_3 &= X_3+X_{10}+X_{11}+X_{13}\\
T_4 &= X_3+2X_{10}+3X_{11}+4X_{13}\\
T_5 &= X_5+X_{7}+X_{9}+X_{12}\\
T_6 &= X_5+2X_{7}+3X_{9}+4X_{12}
\end{align}
From the user's perspective: $X_2$ can be decoded from $T_1$ and $T_2$ given that $X_1$, $X_4$ and $X_6$ are side information. $X_5$ can be decoded from $T_5$ and $T_6$ given that $X_7$ and $X_9$ are side information. Hence, the Retrieval Condition is satisfied.
From the server's perspective: The linear coding scheme has $3$ coding subspaces: $\wp_1 = \{X_1,X_2,X_4,X_6,X_8\}$, $\wp_2 = \{X_3,X_{10},X_{11},X_{13}\}$ and $\wp_3 = \{X_5,X_7,X_9,X_{12}\}$.
They satisfy the MDS-Condition with $m_1= 3$, $m_2=2$ and $m_3= 2$, respectively.
Since the server only knows that the user has $5$ side information messages and wants to download $2$ messages,
it can only infer that the demand messages are in either the same coding subspace or in $2$ different coding subspaces.
By using the randomized construction process shown in Section~\ref{sec:CodingScheme}, we can show that the probability for any two messages to be the demand messages is the same.
Hence, this coding scheme also satisfies the Privacy Condition.
We will show that for this problem at least $6$ transmissions are needed; in Section~\ref{sec:main} we prove why it is optimal to partition the messages into these three coding subspaces and why two transmissions suffice in each coding subspace.
\end{example}
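The Retrieval Condition in this example can be checked mechanically: eliminating the side information from $T_1,T_2$ (resp.\ $T_5,T_6$) leaves an invertible $2\times 2$ system in the unknowns. A quick numerical sanity check of our own, using rationals as a stand-in for a large enough field:

```python
from fractions import Fraction

# arbitrary test values for X_1, ..., X_13 (index 0 unused)
X = [None] + [Fraction(3 * i + 1) for i in range(1, 14)]

T1 = X[1] + X[2] + X[4] + X[6] + X[8]
T2 = X[1] + 2 * X[2] + 3 * X[4] + 4 * X[6] + 5 * X[8]
T5 = X[5] + X[7] + X[9] + X[12]
T6 = X[5] + 2 * X[7] + 3 * X[9] + 4 * X[12]

# user side information: X_1, X_4, X_6, X_7, X_9
a = T1 - (X[1] + X[4] + X[6])          # =  X_2 +   X_8
b = T2 - (X[1] + 3 * X[4] + 4 * X[6])  # = 2X_2 + 5 X_8
X2 = (5 * a - b) / 3                   # invert the 2x2 system (det = 3)

c = T5 - (X[7] + X[9])                 # = X_5 +   X_12
d = T6 - (2 * X[7] + 3 * X[9])         # = X_5 + 4 X_12
X5 = (4 * c - d) / 3                   # det = 3 again

assert X2 == X[2] and X5 == X[5]
```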
\section{Main Result}\label{sec:main}
The main result of this paper is the following theorem, which gives a closed-form expression for the minimum number of required transmissions for the SMPIRSI problem.
\begin{theorem}~\label{Thm:main}
For the single-server multi-message private information retrieval with side information problem, the minimum number of required transmissions satisfies
\begin{align}
R^*(K,M,N)= K-M - (L^*-1-N)^{+}\bar{M}- ((L^*-N)V)^+\label{eq:main}
\end{align}
where $\bar{M} = \lfloor\frac{M}{N}\rfloor$, $t = M- N\bar{M}$, $L^* = \left\lceil\frac{K-t}{ \bar{M}+N}\right\rceil$ and $V = (K- (L^*-1)(\bar{M} +N)-t -N)/(L^*-N)$.
\end{theorem}
\begin{remark}
According to Theorem~\ref{Thm:main}, the minimum number of required transmissions is always upper bounded by $K-M$, which is consistent with the fact that there always exists a linear PIR coding scheme which is an MDS code with $K-M$ transmissions. This code has all messages in a single coding subspace.
\end{remark}
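The closed form is straightforward to evaluate. The sketch below is our own implementation of Eqn.~\eqref{eq:main}; note that $(L^*-N)V$ simplifies to the numerator of $V$, so no division is needed, and we guard the degenerate case $L^*\le N$ (where $V$ is undefined) by returning the trivial $K-M$, which is an assumption on our part:

```python
def R_star(K, M, N):
    """Minimum number of transmissions per the closed form (our reading)."""
    M_bar = M // N                       # floor(M/N)
    t = M - N * M_bar
    L_star = -(-(K - t) // (M_bar + N))  # ceil((K - t) / (M_bar + N))
    if L_star <= N:
        # degenerate case (assumption): both correction terms vanish,
        # leaving the trivial MDS bound
        return K - M
    term_2 = max(0, L_star - 1 - N) * M_bar
    term_3 = max(0, K - (L_star - 1) * (M_bar + N) - t - N)  # = ((L*-N)V)^+
    return K - M - term_2 - term_3
```

For Example~\ref{eg:1} ($K=13$, $M=5$, $N=2$) this gives $6$, matching the six transmissions above, and on the small cases we tried with $N=1$ it agrees with $\lceil K/(M+1)\rceil$.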
In order to prove Theorem~\ref{Thm:main}, we first prove some useful lemmas. We also provide an alternative proof in Appendix.
\begin{lemma}\cite{Li1806:Single}\label{lm:mds}
For any linear coding scheme that satisfies the Privacy Condition, without loss of optimality, the MDS-Condition should be satisfied in every coding subspace.
\end{lemma}
\begin{proof}
For a linear coding scheme $\mathbf{T} = \{T_1,\dots,T_R\}$, suppose there exists a coding subspace $\wp \in \mathcal{P}(\mathbf{T})$ such that the MDS-Condition is not satisfied in $\wp$.
Let $D(\mathbf{T},X)$ denote the minimum number of side information messages which are required to decode message $X$ from linear coding scheme $\mathbf{T}$.
Then, there must exist two messages $X_i$ and $X_j$ in the same coding subspace $\wp$ such that either
\begin{itemize}
\item[(i)] $D(\mathbf{T},X_i) > D(\mathbf{T},X_j)$, or
\item[(ii)] $D(\mathbf{T},X_i) = D(\mathbf{T},X_j)$, but there are more choices of side information messages for $X_j$ than for $X_i$.
\end{itemize}
For the first case, suppose $X_j$ can be decoded from $T_k$ given $D(\mathbf{T},X_j)$ side information messages. If $T_k$ is not the transmission that will be used by the user to decode the demand messages, then it is possible to remove the messages in $\text{supp}(T_k)$ which are also in $\text{supp}(T_l)$ for all $l \ne k$.
As a result, either $X_j$ can no longer be decoded from $T_k$, or $X_j$ ends up in another coding subspace.
For the second case, we can perform a similar operation on the transmissions which can be used to decode $X_j$ given $D(\mathbf{T},X_j)$ side information messages.
In both cases, if the original coding scheme satisfies the Privacy Condition, the modified coding scheme also satisfies the Privacy Condition and uses the same number of transmissions.
Hence, it is optimal to only consider the linear coding schemes which satisfy MDS-Condition in every coding subspace as the candidate PIR coding scheme.
\end{proof}
According to Lemma~\ref{lm:mds}, it is without loss of optimality to restrict attention to linear coding schemes which satisfy the MDS-Condition in every coding subspace.
For such coding schemes, the minimum number of required transmissions in each coding subspace should satisfy the condition stated in the following Lemma.
\begin{lemma}\label{lm:tscs}
For a coding subspace $\wp$, the number of transmissions in this coding subspace, $R(\wp,m(\wp))$, satisfies
\begin{align}
R(\wp,m(\wp)) \left\{ \begin{aligned}
&=|\wp|, & |\wp| \le N\\
&\ge |\wp|-m(\wp), & N+1\le |\wp|\le K
\end{aligned}
\right.
\end{align}
where $m(\wp) \in \{0,1,\dots, |\wp|-N \}$ is the number of side information messages that are used in such coding subspace.
\end{lemma}
\begin{proof}
If the coding subspace $\wp$ includes no more than $N$ messages, since all the messages in $\wp$ can be the demand message, the transmissions in such coding subspace should be equal to the dimension of the coding subspace, which is $|\wp|$.
If the coding subspace $\wp$ includes more than $N$ messages, then it must satisfy the MDS-Condition, according to Lemma~\ref{lm:mds}.
Hence, by using $m(\wp)$ side information messages, the number of transmissions in such coding subspace should be $|\wp|-m(\wp)$.
Additionally, since it is possible that all $N$ demand messages are in $\wp$, the number of transmissions in $\wp$ is at least $N$.
Thus, the maximum number of side information messages that can be used in this coding subspace is $|\wp|- N$.
\end{proof}
\begin{example*}
It can be verified that the coding scheme proposed in Example~\ref{eg:1}, $\mathbf{T}=\{T_1,\dots,T_6\}$, satisfies MDS-Condition in all three coding subspaces, with $m_1 = 3$, $m_2= 2$ and $m_3= 2,$ respectively.
\end{example*}
Let $\mathcal{P} = \{\wp_1,\dots,\wp_L\}$ denote the coding subspaces. Without loss of generality, we assume that $|\wp_1|\ge\dots\ge|\wp_L|$.
The minimum number of required transmissions of any linear PIR scheme based on such coding subspaces can be computed as
\begin{align}
R(\mathcal{P}) = \min_{\mathbf{m}}\sum_{i = 1}^{L} R(\wp_i,m_i)
\end{align}
where $\mathbf{m} = \{m_1,\dots,m_L\}$ is the vector of the number of side information messages used in each coding subspace.
It is easy to see that a coding subspace of larger size should have no fewer side information messages, i.e., $m_1\ge \dots\ge m_L$.
Since the total number of side information messages is $M$, the feasible side information vector should satisfy
\begin{align}
\sum_{i =1}^{\min\{L,N\}} m_i = M\label{eq:totalsideinformation}
\end{align}
The reason why the summation is taken only from $1$ to $\min\{L,N\}$ is that when $L>N$, at most $N$ coding subspaces can contain demand messages.
Since the $m_i$'s are sorted in non-increasing order, if the first $\min\{L,N\}$ of them sum up to $M$, then every subset of $N$ of the $m_i$'s has sum no larger than $M$.
Hence, Eqn.~\eqref{eq:totalsideinformation} guarantees that the total number of side information messages used by any $N$ coding subspaces is at most $M$.
Additionally, if the size of one coding subspace is equal to or less than $N$, the number of side information messages that can be used in such coding subspace can only be zero.
The optimization problem~\eqref{eq:r*} for the minimum number of required transmissions can be expressed as follows by optimizing over side information vectors:
\begin{align}
R^* &= \min_{\mathcal{L} \in \Pi(K)} \sum_{i =1}^{|\mathcal{L}|} R(\mathcal{L}_i) \\
&= \min_{\mathcal{L} \in \Pi(K)} \min_{\mathbf{m} \in \mathbf{P}(M)} \sum_{i =1}^{|\mathcal{L}|} R(\mathcal{L}_i,m_i)\\
&= \min_{\mathcal{L} \in \Pi(K)} \min_{\mathbf{m} \in \mathbf{P}(M)} \sum_{i =1}^{|\mathcal{L}|} (\mathcal{L}_i-m_i)\\
& = K-\max_{\mathcal{L} \in \Pi(K)} \max_{\mathbf{m} \in \mathbf{P}(M)} \sum_{i =1}^{|\mathcal{L}|}m_i
\end{align}
where $\mathcal{L}=\{\mathcal{L}_1,\dots,\mathcal{L}_{|\mathcal{L}|}\}$ is a partition of the integer $K$ which satisfies $\forall i \in [|\mathcal{L}|]: \mathcal{L}_i>N$ and $\mathbf{m}=\{m_1,\dots,m_{|\mathcal{L}|}\}$ is the vector of the number of side information messages used in each coding subspace which satisfies Eqn.~\eqref{eq:totalsideinformation} and $m_i \le (\mathcal{L}_i-N)^+$.
Since $\mathcal{L}$ is a partition of $K$, it is always true that $\sum_{i =1}^{|\mathcal{L}|} \mathcal{L}_i = K$.
Therefore, the optimization problem becomes finding the optimal partition $\mathcal{L}$ and the optimal side information vector $\mathbf{m}$ such that $\sum_{i =1}^{|\mathcal{L}|}m_i$ is maximized.
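For small parameters this whole optimization can be carried out exhaustively, which is a useful sanity check on the closed form of Theorem~\ref{Thm:main}. The brute force below is our own and follows the constraints above literally ($m_i \le (\mathcal{L}_i-N)^+$, non-increasing $\mathbf{m}$, and Eqn.~\eqref{eq:totalsideinformation}):

```python
from itertools import product

def partitions(n, max_part=None):
    """All partitions of the integer n as non-increasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in partitions(n - part, part):
            yield (part,) + rest

def R_star_brute(K, M, N):
    """Minimise K - sum(m_i) over partitions of K and feasible m."""
    best = K  # the trivial single-subspace scheme is always feasible
    for L in partitions(K):
        caps = [max(0, part - N) for part in L]  # m_i <= (L_i - N)^+
        for m in product(*(range(c + 1) for c in caps)):
            if any(m[i] < m[i + 1] for i in range(len(m) - 1)):
                continue  # m must be non-increasing
            if sum(m[:min(len(L), N)]) != M:
                continue  # the first min(L, N) entries must sum to M
            best = min(best, K - sum(m))
    return best
```

For $(K,M,N)=(13,5,2)$ this search returns $6$, attained by the partition $(5,4,4)$ with $\mathbf{m}=(3,2,2)$, exactly the subspaces of Example~\ref{eg:1}.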
\begin{lemma}\label{lm:sizelessthanN}
For any linear PIR coding scheme with fewer than $N$ coding subspaces, the minimum number of required transmissions is always $K-M$.
\end{lemma}
\begin{proof}
For any partition $\mathcal{L} = \{\mathcal{L}_1,\dots,\mathcal{L}_L \}$, if $L \le N$, it is possible that every coding subspace contains at least one demand message.
Hence, the number of side information messages used in all coding subspace must sum up to $M$.
Thus, we have
\begin{align}
R^*(\mathcal{L}) = \sum_{i =1}^{L} R(\mathcal{L}_i,m_i) = \sum_{i=1}^{L} (\mathcal{L}_i - m_i) = \sum_{i=1}^{L} \mathcal{L}_i - \sum_{i=1}^{L} m_i = K- M
\end{align}
\end{proof}
\begin{lemma}\label{lm:sizelargerthanN}
For any linear PIR coding scheme based on a partition $\mathcal{L} = \{\mathcal{L}_1,\dots,\mathcal{L}_L \}$ with $L >N$ and $\mathcal{L}_1\ge \dots\ge \mathcal{L}_L\ge N$, the minimum number of transmissions satisfies
\begin{align}
R^*(\mathcal{L}) &= \min_{\mathbf{m}}\sum_{i=1}^{L} R(\mathcal{L}_i,m_i) = K-\max_{\mathbf{m}} \sum_{i=1}^{L}m_i = K-M -\max_{\mathbf{m}}\sum_{i=N+1}^{L}m_i
\end{align}
where $\mathbf{m} = \{m_1,\dots,m_L\}$ is the vector of the number of side information messages in each coding subspace.
\end{lemma}
Hence, when the number of coding subspaces is larger than $N$,
the number of side information messages that can be used in the coding subspace corresponding to $\mathcal{L}_{N+1},\dots,\mathcal{L}_L$ should be maximized.
Recall that we assume $m_1\ge m_2\ge \dots\ge m_L$, hence, we actually need to maximize $m_N$.
Since vector $\mathbf{m}=\{m_1,\dots,m_L\}$ satisfies Eqn.~\eqref{eq:totalsideinformation}, we have $m_N \le \lfloor\frac{M}{N}\rfloor$.
Moreover, for coding subspaces with size $\mathcal{L}_i$ such that $N<\mathcal{L}_i<N+\lfloor \frac{M}{N}\rfloor$, the number of side information messages used in such a coding subspace satisfies $m_i \le \mathcal{L}_i-N$, since the number of transmissions used in such a coding subspace is at least $N$.
\begin{lemma}\label{lm:vector4sideinformation}
Let $\mathbf{m}=\{m_1,\dots,m_L\}$ denote the number of side information messages used in each coding subspace, where $L > N$ and $m_1\ge \dots\ge m_L$.
Let $\mathcal{L}_i$ denote the size of the $i$-th coding subspace and $t = M- N\lfloor \frac{M}{N} \rfloor$.
The optimal choice for $\mathbf{m}$ is
\begin{align}
\forall i \in [t]:\ \ &m_i = \min \{\lfloor \frac{M}{N} \rfloor+1, (\mathcal{L}_i-N)^+\}\\
\forall i \in \{t+1,\dots,L\}:\ \ &m_i = \min \{\lfloor \frac{M}{N} \rfloor, (\mathcal{L}_i-N)^+\}
\end{align}
\end{lemma}
\begin{lemma}\label{lm:optL}
The optimal partitions, $\{\mathcal{L}_1,\dots,\mathcal{L}_L\}$ satisfy
\begin{align}
\forall i \in [t]:\ \ &\mathcal{L}_i = \lfloor \frac{M}{N} \rfloor+N + 1\label{eq:L1}\\
\forall i \in \{t+1,\dots,L-1\}:\ \ &\mathcal{L}_i = \lfloor \frac{M}{N} \rfloor+N \label{eq:L2}\\
\forall i = L:\ \ &\mathcal{L}_L = K- (L-1)(\lfloor \frac{M}{N} \rfloor +N)-t\label{eq:L3}
\end{align}
where $t = M- N\lfloor \frac{M}{N} \rfloor$.
\end{lemma}
\begin{proof}
According to Lemma~\ref{lm:vector4sideinformation}, in order to maximize the number of side information messages in each coding subspace, it is optimal to set
\begin{align}
\forall i \in [t]:\ \ &\mathcal{L}_i = \lfloor \frac{M}{N} \rfloor+1+N\\
\forall i \in \{t+1,\dots,L-1\}:\ \ &\mathcal{L}_i = \lfloor \frac{M}{N} \rfloor+N
\end{align}
Then, the total size of the first $L-1$ coding subspaces is
\begin{align}
\sum_{i=1}^{L-1} \mathcal{L}_i = t(\lfloor \frac{M}{N} \rfloor+1+N) + (L-1-t)(\lfloor \frac{M}{N} \rfloor+N) = (L-1)(\lfloor \frac{M}{N} \rfloor +N)+t\label{eq:sumL}
\end{align}
Then the size of the last coding subspace can only be $q = K- (L-1)(\lfloor \frac{M}{N} \rfloor +N)-t$.
If $q > \lfloor \frac{M}{N} \rfloor+N$, then we can further decompose the last coding subspace into two smaller coding subspaces of sizes $\lfloor \frac{M}{N} \rfloor+N$ and $q- (\lfloor \frac{M}{N} \rfloor+N)$.
Thus, we can assume that $L$ is large enough such that $q < \lfloor \frac{M}{N} \rfloor+N$.
If $N < q < \lfloor \frac{M}{N} \rfloor+N$, the number of side information messages in this coding subspace is equal to $q-N$.
If $q\le N$, the number of transmissions required for the last coding subspace is equal to its size and hence the number of side information messages is zero.
\end{proof}
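To make the partition in Lemma~\ref{lm:optL} concrete, the following Python sketch (the function and variable names are ours, not from the paper) computes the subspace sizes for a given number $L$ of coding subspaces:

```python
def partition_sizes(K, M, N, L):
    """Subspace sizes from Lemma lm:optL for L coding subspaces.

    The first t subspaces get floor(M/N)+N+1 messages, the middle ones
    floor(M/N)+N, and the last subspace takes whatever remains.
    """
    Mbar = M // N                 # floor(M/N)
    t = M - N * Mbar              # t = M - N*floor(M/N)
    sizes = [Mbar + N + 1] * t
    sizes += [Mbar + N] * (L - 1 - t)
    sizes.append(K - (L - 1) * (Mbar + N) - t)   # last subspace
    return sizes

# Running example of the paper: K=13, M=5, N=2 with L=3 subspaces.
print(partition_sizes(13, 5, 2, 3))  # → [5, 4, 4]
```

For $K=13$, $M=5$, $N=2$ and $L=3$ this reproduces the subspace sizes $5,4,4$ used in the worked example below.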
Now we are ready to prove Theorem~\ref{Thm:main}.
\begin{proof}(Theorem~\ref{Thm:main})
As has been shown in the proof of Lemma~\ref{lm:optL}, Eqn.~\eqref{eq:sumL}, for the optimal partition, $\{\mathcal{L}_1,\dots,\mathcal{L}_L\}$, we have $\sum_{i=1}^{L-1} \mathcal{L}_i = (L-1)(\lfloor \frac{M}{N} \rfloor +N)+t$.
Since the total number of messages is $K$, we have
\begin{align}
K\ge \sum_{i=1}^{L-1} \mathcal{L}_i = (L-1)(\lfloor \frac{M}{N} \rfloor +N)+t
\end{align}
Additionally, since $L$ is an integer, the optimal number of coding subspaces is
\begin{align}
L^* = \left\lceil\frac{K-t}{\lfloor \frac{M}{N} \rfloor +N}\right\rceil
\label{eq:L*}
\end{align}
The side information vector $\mathbf{m} = \{m_1,\dots,m_L\}$ satisfies
\begin{align}
\forall i \in [t]:\ \ &m_i = \lfloor \frac{M}{N} \rfloor+1\\
\forall i \in \{t+1,\dots,L^*-1\}:\ \ &m_i = \lfloor \frac{M}{N} \rfloor\\
\forall i= L^*:\ \ &m_L = (K- (L^*-1)(\lfloor \frac{M}{N} \rfloor +N)-t -N)^+\label{eq:ml}
\end{align}
Hence the total number of required transmissions is
\begin{align}
R^* &= K-\max_{\mathcal{L} \in \Pi(K)} \max_{\mathbf{m}} \sum_{i =1}^{|\mathcal{L}|}m_i\\
& = K-M -\max_{\mathbf{m}}\sum_{i=N+1}^{L^*}m_i\\
& = \left\{
\begin{aligned}
&K-M &\text{if } L^*\le N\\
&K-M-((L^*-N)V)^+ &\text{if } L^*= N+1\\
&K-M - (L^*-1-N)^{+}\bar{M}-((L^*-N)V)^+ &\text{if } L^*> N+1\\
\end{aligned}
\right.\\
& = K-M - (L^*-1-N)^{+}\bar{M}- ((L^*-N)V)^+
\end{align}
where $\bar{M} = \lfloor\frac{M}{N}\rfloor$, $t = M- N\bar{M}$, $L^* = \left\lceil\frac{K-t}{ \bar{M}+N}\right\rceil$ and $V = (K- (L^*-1)(\bar{M} +N)-t -N)/(L^*-N)$.
\end{proof}
\begin{example*}
For Example~\ref{eg:1}, we have $K=13$, $M=5$ and $N=2$. According to Theorem~\ref{Thm:main}, it is easy to get that $\bar{M} = \lfloor\frac{M}{N}\rfloor = 2$, $t = M- N\bar{M}= 1$ and $L^* = \left\lceil\frac{K-t}{ \bar{M}+N}\right\rceil =3$. The minimum number of required transmissions is $R^* = 6$.
\end{example*}
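The closed form of Theorem~\ref{Thm:main} is easy to evaluate numerically. The sketch below is our transcription of that formula (the helper name is ours); it reproduces $R^*=6$ for the example above:

```python
from math import ceil

def min_transmissions(K, M, N):
    """R* from Theorem Thm:main (our transcription of the closed form)."""
    Mbar = M // N
    t = M - N * Mbar
    L = ceil((K - t) / (Mbar + N))          # optimal number L* of subspaces
    if L <= N:
        return K - M                        # trivial MDS scheme is optimal
    V = (K - (L - 1) * (Mbar + N) - t - N) / (L - N)
    return K - M - max(L - 1 - N, 0) * Mbar - max((L - N) * V, 0)

assert min_transmissions(13, 5, 2) == 6     # Example 1: K=13, M=5, N=2
assert min_transmissions(10, 2, 3) == 8     # N > M, so R* = K - M
```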
\begin{theorem}
For a given total number of messages $K$ and number of side information messages $M$, it is optimal to download $K-M$ transmissions using the trivial MDS coding scheme with all messages in one coding subspace if either of the following conditions is satisfied.
\begin{enumerate}
\item $N> M$.
\item $N^2+N \ge K-M$
\end{enumerate}
\end{theorem}
\begin{proof}
If the first condition, $N>M$, is satisfied, then, since it is optimal to only consider coding schemes whose coding subspaces have size larger than $N$, it can be shown that the number of side information messages used in the $N$-th coding subspace is $m_N = 0$.
Hence, we have $m_{N+1}=\dots=m_{L} =0$, and the minimum number of required transmissions is $K-M$.
If the second condition holds, $N^2+N \ge K-M$. Let us further assume that $N^2>K-M$, then we have
\begin{align}
&N^2+N\lfloor\frac{M}{N}\rfloor > K-M+ N\lfloor\frac{M}{N}\rfloor\\
\Leftrightarrow& N+1> \frac{K-M+ N\lfloor\frac{M}{N}\rfloor}{N+\lfloor\frac{M}{N}\rfloor}+1\ge L^*
\end{align}
Hence we have $L^* \le N$. According to Lemma~\ref{lm:sizelessthanN}, the minimum number of required transmissions is $K-M$.
If $N^2\le K-M\le N^2+N$, then it can be shown that $L^*=N+1$ and
\begin{align}
& K-M - N\lfloor\frac{M}{N}\rfloor \le N^2+N -N\lfloor\frac{M}{N}\rfloor \\
\Leftrightarrow& K-N(\lfloor\frac{M}{N}\rfloor +N) - (M-N\lfloor\frac{M}{N}\rfloor )-N \le 0\\
\Leftrightarrow& m_{N+1} = (K-N(\lfloor\frac{M}{N}\rfloor +N) - (M-N\lfloor\frac{M}{N}\rfloor )-N)^+=0
\end{align}
Since $L^* = N+1$ and $m_{N+1}=0$, according to Theorem~\ref{Thm:main}, $R^* =K-M$.
Thus, in all of the above cases, it is optimal to use the trivial MDS coding scheme with $K-M$ transmissions, which takes all messages in one coding subspace.
\end{proof}
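The two sufficient conditions of the theorem can be checked directly; a minimal sketch (names are ours):

```python
def trivial_mds_is_optimal(K, M, N):
    """Sufficient conditions under which K - M MDS-coded transmissions
    are optimal: N > M, or N^2 + N >= K - M."""
    return N > M or N * N + N >= K - M

assert trivial_mds_is_optimal(13, 5, 3)      # N^2+N = 12 >= K-M = 8
assert not trivial_mds_is_optimal(13, 5, 2)  # Example 1 achieves R* = 6 < 8
```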
\section{Optimal Linear Coding Scheme}\label{sec:CodingScheme}
In this section, we show how to construct an optimal coding scheme for the single-server multi-message private information retrieval with side information problem.
For any single-server multi-message private information retrieval with side information problem with
$K$ total messages, $M$ side information messages and $N$ demand messages, we first compute $L^*$ defined by Eqn.~\eqref{eq:L*} and $m_L$ defined by Eqn.~\eqref{eq:ml}.
If $L^*\le N+1$ and $m_L = 0$, then $R^* = K-M$. It is trivial that the optimal linear coding scheme is the MDS coding scheme that takes all messages into one coding subspace.
If $L^*> N+1$, or $L^*=N+1$ and $m_L > 0$, we use the following steps to construct the optimal linear coding scheme:
\begin{itemize}
\item[] \textbf{Step 1}:
The user creates a set of $L^*$ subsets, denoted by $\{\wp_1,\dots,\wp_{L^*}\}$ and $\forall i \in [L^*]$, the size of $\wp_i$ satisfies:
\begin{align}
|\wp_i| = \left\{
\begin{aligned}
&\lfloor\frac{M}{N}\rfloor + N+1, & \forall i \in \{1,\dots,t\} \\
&\lfloor\frac{M}{N}\rfloor + N, & \forall i \in \{t+1,\dots,L^*-1\}\\
& K- (L^*-1)(\lfloor \frac{M}{N} \rfloor +N)-t, & \text{for}\ i =L^*
\end{aligned}
\right.
\end{align}
where $t = M-N\lfloor \frac{M}{N} \rfloor$. Let $c_i$ for $i \in [L^*]$ denote the number of demand messages in subset $\wp_i$, initialized to $0$.
\item[] \textbf{Step 2}:
For the first demand message $X_{W_1}$, the user randomly selects one subset $\wp_i$ ($i \in [L^*]$) to contain it with probability $\frac{|\wp_i|}{K}$.
The user updates $c_i = c_i+1$.
Then for the $j$-th demand message ($j\in[N]$), the user randomly selects one subset $\wp_u$ ($u \in [L^*]$) to contain it with probability $\frac{|\wp_u|-c_u}{K-j+1}$.
Iteratively, the user places all demand messages into the subsets.
\item[] \textbf{Step 3}: For each subset $\wp_i$ with $c_i > 0$, the user randomly selects $m_i$ side information messages to put into $\wp_i$, where $m_i$ satisfies:
\begin{align}
m_i = \left\{
\begin{aligned}
&\lfloor\frac{M}{N} \rfloor +1, & 1\le i\le t\\
&\lfloor \frac{M}{N} \rfloor, & t+1 \le i \le L^*-1\\
&(|\wp_{L^*}|-N)^+, & i=L^*
\end{aligned}
\right.
\end{align}
\item[] \textbf{Step 4}: The user randomly distributes the other messages to fill up the remaining empty spaces in each subset.
\item[] \textbf{Step 5}: The user sends queries to the server according to the coding scheme which satisfies the MDS-Condition in each coding subspace $\wp_i$ ($\forall i \in [L^*]$) with $R(|\wp_i|,m_i)$ transmissions.
\end{itemize}
We call the coding schemes constructed by this method Partition-and-MDS-Coding schemes; the construction is a modification of the optimal coding scheme for a single demand message~\cite{2017arXiv170900112K}. The way we select subsets for the demand messages is related to the classical urn problem, and it makes every set of $N$ messages equally likely to be the demand set.
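The following Python sketch illustrates Steps 1–4 of the construction (all names are ours; Step 3's side-information selection is folded into a uniform fill for brevity, which does not affect the subset sizes). Note that choosing subset $i$ with probability proportional to its remaining capacity is exactly the rule $\frac{|\wp_u|-c_u}{K-j+1}$, since the remaining capacities sum to $K-j+1$ when the $j$-th demand message is placed:

```python
import random
from math import ceil

def build_subsets(K, M, N, demands, rng):
    """Sketch of Steps 1-4 of Partition-and-MDS-Coding (names are ours)."""
    Mbar = M // N
    t = M - N * Mbar
    L = ceil((K - t) / (Mbar + N))                       # L*
    sizes = [Mbar + N + 1] * t + [Mbar + N] * (L - 1 - t)
    sizes.append(K - (L - 1) * (Mbar + N) - t)
    subsets = [[] for _ in sizes]
    # Step 2: place each demand message; subset i is picked with
    # probability proportional to its remaining capacity.
    for x in demands:
        weights = [s - len(sub) for s, sub in zip(sizes, subsets)]
        i = rng.choices(range(L), weights=weights)[0]
        subsets[i].append(x)
    # Steps 3-4: fill the remaining slots with the other messages.
    rest = [x for x in range(1, K + 1) if x not in demands]
    rng.shuffle(rest)
    for i, s in enumerate(sizes):
        while len(subsets[i]) < s:
            subsets[i].append(rest.pop())
    return subsets

subs = build_subsets(13, 5, 2, demands=[2, 5], rng=random.Random(0))
assert sorted(len(s) for s in subs) == [4, 4, 5]
assert sorted(x for s in subs for x in s) == list(range(1, 14))
```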
\begin{theorem}
The Partition-and-MDS-Coding scheme satisfies the Retrieval Condition and the Privacy Condition.
\end{theorem}
\begin{proof}
For each coding subspace $\wp_i$, if it contains demand messages,
the number of transmissions $R(|\wp_i|,m_i)$ and the number of side information messages $m_i$ in such coding subspace satisfy $R(|\wp_i|,m_i)+m_i = |\wp_i|$.
Additionally, the Partition-and-MDS-Coding scheme satisfies the MDS-Condition in every coding subspace.
Thus, all missing messages in $\wp_i$, including the demand messages, can be successfully decoded.
Therefore, the Retrieval Condition is satisfied.
The probability that any $N$ messages (e.g. $\{X_{Z_1},\dots,X_{Z_N}\}$) are the demand messages can be computed as
\begin{align}
\Pr(\mathcal{W} = \{Z_1,\dots,Z_N\}) &= N!\, \Pr(W_1=Z_1,W_2=Z_2,\dots,W_N=Z_N)\\
& = N!\, \Pr(W_1=Z_1)\Pr(W_2=Z_2,\dots,W_N=Z_N|W_1=Z_1)\\
& = N!\, \prod_{i=1}^{N} \Pr(W_i = Z_i|W_1^{i-1} = Z_1^{i-1})
\end{align}
According to the construction of the Partition-and-MDS-Coding scheme, and assuming that $Z_i \in \wp_j$, we have
\begin{align}
\Pr(W_i = Z_i|W_1^{i-1} = Z_1^{i-1}) &= \Pr(W_i \in \wp_j| W_1^{i-1} = Z_1^{i-1}) \Pr(W_i=Z_i| W_i \in \wp_j, W_1^{i-1} = Z_1^{i-1} )\\
& = \frac{|\wp_j \setminus \{Z_1,\dots,Z_{i-1}\}|}{K-i+1}\cdot\frac{1}{|\wp_j \setminus \{Z_1,\dots,Z_{i-1}\}|}\\
& = \frac{1}{K-i+1}
\end{align}
Hence, we have
\begin{align}
\Pr(\mathcal{W} = \{Z_1,\dots,Z_N\}) &= N!\, \prod_{i=1}^{N} \frac{1}{K-i+1} = \frac{N!}{K(K-1)\cdots (K-N+1)}
= \frac{1}{\binom{K}{N}}
\end{align}
Since there are $\binom{K}{N}$ possible demand sets of size $N$, every $N$-message set is equally likely to be the demand set, which establishes the Privacy Condition of multi-message PIR.
\end{proof}
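The telescoping product in the proof above can be checked numerically; a minimal sketch (the helper name is ours):

```python
from math import comb, factorial

def demand_set_probability(K, N):
    """N! times the telescoping product prod_i 1/(K-i+1) from the proof."""
    p = 1.0
    for i in range(1, N + 1):
        p *= 1.0 / (K - i + 1)   # Pr(W_i = Z_i | earlier demands placed)
    return factorial(N) * p

# Example 1: K=13, N=2 gives 1/78 = 1/binom(13,2).
assert abs(demand_set_probability(13, 2) - 1 / comb(13, 2)) < 1e-12
```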
\begin{example*}
We construct the linear coding scheme in Example~\ref{eg:1} by using the Partition-and-MDS-Coding method. It is easy to compute and verify that $L^* = 3=N+1$ and $m_3 = 2 >0$.
Step 1: We first create three coding subspaces ($\wp_1$, $\wp_2$, $\wp_3$) with sizes $|\wp_1| = 5$, $|\wp_2| = 4$ and $|\wp_3|=4$.
Step 2: We randomly select one coding subspace from $\{\wp_1,\wp_2,\wp_3\}$ to contain the first demand message $X_2$ with probability $\frac{5}{13}$, $\frac{4}{13}$ and $\frac{4}{13}$, respectively.
Suppose $\wp_1$ is chosen.
Then for the second demand message, we randomly select one coding subspace from $\{\wp_1,\wp_2,\wp_3\}$ to contain it with probability $\frac{4}{12}$, $\frac{4}{12}$ and $\frac{4}{12}$, respectively.
Suppose $\wp_3$ is chosen.
Step 3: For $\wp_1$ and $\wp_3$, the coding subspaces which are chosen to contain demand messages, we randomly distribute $3$ and $2$ side information messages into them, respectively.
Suppose $X_1,X_4,X_6$ are placed in $\wp_1$ and $X_7,X_9$ are placed in $\wp_3$.
Step 4: Randomly distribute the remaining messages into the coding subspaces.
Suppose we get $\wp_1 = \{X_1,X_2,X_4,X_6,X_8\}$, $\wp_2 = \{X_3,X_{10},X_{11},X_{13}\}$ and $\wp_3 = \{X_5,X_7,X_9,X_{12}\}$.
Step 5: The user generates queries according to the linear coding scheme $\mathbf{T} = \{T_1,\dots,T_6\}$ shown in Example~\ref{eg:1}.
From the server's perspective, the probability for any two messages to be the demand messages is the same, namely $\frac{1}{\binom{13}{2}} =\frac{1}{78}$. Thus, the server cannot infer any information about the indices of the demand messages.
\end{example*}
\section*{Appendix}
We provide an alternative proof for the converse of the minimum number of required transmissions for the single-server multi-message private information retrieval with side information problem.
The proof techniques are inspired by~\cite{2017arXiv170903022C}.
Suppose each message has $L$ bits and the messages are independent of each other, i.e.,
\begin{align}
H(X_1,\dots,X_K) &= H(X_1)+ \dots + H(X_K),\\
H(X_1) &= \dots = H(X_K) = L.
\end{align}
Recall that $\mathbf{W}$ and $\mathbf{S}$ denote the sets of indices of demand messages and side information messages, respectively.
Let $Q^{[\mathbf{W},\mathbf{S}]}$ denote the query that is generated for side information indexed by $\mathbf{S}$ and demand messages indexed by $\mathbf{W}$.
Let $A^{[\mathbf{W},\mathbf{S}]}$ denote the answer generated by the server after receiving query $Q^{[\mathbf{W},\mathbf{S}]}$.
Since the answer generated by the server is a deterministic function of the query and messages, we have
\begin{align}
H(A^{[\mathbf{W},\mathbf{S}]}|Q^{[\mathbf{W},\mathbf{S}]},X_{1:K}) = 0\label{eq:deta}
\end{align}
The retrieval condition~\eqref{eq:retcon} is equivalent to
\begin{align}
H(X_{\mathbf{W}}|A^{[\mathbf{W},\mathbf{S}]},Q^{[\mathbf{W},\mathbf{S}]},X_{\mathbf{S}}) = 0\label{eq:rcap}.
\end{align}
The privacy condition~\eqref{eq:pricon} is equivalent to the condition that for any $\mathbf{W},\mathbf{W}' \subseteq [K]$ and $|\mathbf{W}|=|\mathbf{W}'| = N$, there exists $\mathbf{S} \subseteq [K]\setminus \mathbf{W}$, $\mathbf{S}' \subseteq [K]\setminus \mathbf{W}'$ and $|\mathbf{S}| = |\mathbf{S}'| = M$ such that
\begin{align}
(A^{[\mathbf{W},\mathbf{S}]},Q^{[\mathbf{W},\mathbf{S}]},X_{1:K})\sim (A^{[\mathbf{W}',\mathbf{S}']},Q^{[\mathbf{W}',\mathbf{S}']},X_{1:K})\label{eq:w}.
\end{align}
where $A\sim B$ means that $A$ and $B$ are identically distributed.
Suppose $\mathbf{W}_0 = \mathbf{W}$ and $\mathbf{S}_0 = \mathbf{S}$ are the sets of indices of the demand messages and the side information messages, respectively.
Then the total number of download bits ($D$) can be lower-bounded as follows.
\begin{align}
D&\ge H(A^{[\mathbf{W}_0,\mathbf{S}_0]}|Q^{[\mathbf{W}_0,\mathbf{S}_0]},X_{\mathbf{S}_0})\\
& = H(X_{\mathbf{W}_0},A^{[\mathbf{W}_0,\mathbf{S}_0]}|Q^{[\mathbf{W}_0,\mathbf{S}_0]},X_{\mathbf{S}_0})- H(X_{\mathbf{W}_0}|A^{[\mathbf{W}_0,\mathbf{S}_0]},Q^{[\mathbf{W}_0,\mathbf{S}_0]},X_{\mathbf{S}_0})\\
& \stackrel{\eqref{eq:correctness}}{=} H(X_{\mathbf{W}_0}|Q^{[\mathbf{W}_0,\mathbf{S}_0]},X_{\mathbf{S}_0}) + H(A^{[\mathbf{W}_0,\mathbf{S}_0]}|Q^{[\mathbf{W}_0,\mathbf{S}_0]},X_{\mathbf{W}_0\cup\mathbf{S}_0})\\
&= NL+H(A^{[\mathbf{W}_0,\mathbf{S}_0]}|Q^{[\mathbf{W}_0,\mathbf{S}_0]},X_{\mathbf{W}_0\cup\mathbf{S}_0}).
\end{align}
According to the privacy condition, for any $\mathbf{W}_i \subseteq [K]$, $|\mathbf{W}_i|= N$, there exists $\mathbf{S}_i \subseteq [K]\setminus \mathbf{W}_i$ which satisfies the retrieval condition, i.e.,
\begin{align}
H(X_{\mathbf{W}_i}| A^{[\mathbf{W}_i,\mathbf{S}_i]},Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{S}_i}) = 0\label{eq:correctness}.
\end{align}
Throughout, we write $\mathbf{W}_0^i = \bigcup_{t=0}^{i}\mathbf{W}_t$, and similarly for $\mathbf{S}_0^i$ and $\mathbf{U}_0^i$. We have
\begin{align}
H(A^{[\mathbf{W}_0,\mathbf{S}_0]}|Q^{[\mathbf{W}_0,\mathbf{S}_0]},X_{[K]}) = \min_{\mathbf{S}_i}
H(A^{[\mathbf{W}_i,\mathbf{S}_i]}|Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{[K]}).\label{eq:st}
\end{align}
Hence, for the special case $i =1$, we have
\begin{align}
D &\ge NL +\min_{\mathbf{S}_1}
H(A^{[\mathbf{W}_1,\mathbf{S}_1]}|Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup\mathbf{S}_0})\\
& =NL+ \min_{\mathbf{S}_1} H(X_{\mathbf{W}_1,\mathbf{S}_1}|Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup \mathbf{S}_0}) + H(A^{[\mathbf{W}_1,\mathbf{S}_1]}|Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0^1\cup \mathbf{S}_0^1})\nonumber\\
&\ \ \ -H(X_{\mathbf{W}_1\cup\mathbf{S}_1}|A^{[\mathbf{W}_1,\mathbf{S}_1]},Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup\mathbf{S}_0})\\
& =NL+\min_{\mathbf{S}_1} H(X_{\mathbf{W}_1,\mathbf{S}_1}|Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup \mathbf{S}_0})+ H(A^{[\mathbf{W}_1,\mathbf{S}_1]}|Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0^1\cup \mathbf{S}_0^1}) \nonumber\\
&\ \ \ - H(X_{\mathbf{S}_1}|A^{[\mathbf{W}_1,\mathbf{S}_1]},Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup\mathbf{S}_0}) - H(X_{\mathbf{W}_1}|A^{[\mathbf{W}_1,\mathbf{S}_1]},Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup\mathbf{S}_0^1})\\
& \stackrel{\eqref{eq:correctness}}{=}NL+\min_{\mathbf{S}_1} H(X_{\mathbf{W}_1,\mathbf{S}_1}|Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup \mathbf{S}_0})- H(X_{\mathbf{S}_1}|A^{[\mathbf{W}_1,\mathbf{S}_1]},Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup\mathbf{S}_0})\nonumber \\
&\ \ \
+H(A^{[\mathbf{W}_1,\mathbf{S}_1]}|Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0^1\cup \mathbf{S}_0^1})
\end{align}
We can apply the following substitutions iteratively
\begin{align}
\min_{\mathbf{S}_i} H(A^{[\mathbf{W}_i,\mathbf{S}_i]}|Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^i\cup \mathbf{S}_0^i}) = \min_{\mathbf{S}_{i+1}} H(A^{[\mathbf{W}_{i+1},\mathbf{S}_{i+1}]}|Q^{[\mathbf{W}_{i+1},\mathbf{S}_{i+1}]},X_{\mathbf{W}_0^{i+1}\cup \mathbf{S}_0^{i+1}}).
\end{align}
Suppose after $T$ substitutions, we have
\begin{align}
\mathbf{W}_0^T \cup \mathbf{S}_0^T = [K] \label{eq:k}.
\end{align}
Then we have the lower-bound for $D$ as follows.
\begin{align}
D &\ge NL + \min_{\mathbf{S}_1,\dots,\mathbf{S}_T} \sum_{i=1}^{T}\left[ H(X_{\mathbf{W}_i,\mathbf{S}_i}|Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup \mathbf{S}_0^{i-1}})- H(X_{\mathbf{S}_i}|A^{[\mathbf{W}_i,\mathbf{S}_i]},Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup\mathbf{S}_0^{i-1}})\right]\nonumber \\
&\ \ \
+H(A^{[\mathbf{W}_T,\mathbf{S}_T]}|Q^{[\mathbf{W}_T,\mathbf{S}_T]},X_{\mathbf{W}_0^T\cup \mathbf{S}_0^T})\\
&\stackrel{\eqref{eq:deta}\eqref{eq:k}}{=} NL + \min_{\mathbf{S}_1,\dots,\mathbf{S}_T} \sum_{i=1}^{T} \left[H(X_{\mathbf{W}_i,\mathbf{S}_i}|Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup \mathbf{S}_0^{i-1}})- H(X_{\mathbf{S}_i}|A^{[\mathbf{W}_i,\mathbf{S}_i]},Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup\mathbf{S}_0^{i-1}})\right]\label{eq:lb1}
\end{align}
Note that each term in the summation is non-negative, since
\begin{align}
H(X_{\mathbf{W}_i,\mathbf{S}_i}|Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup \mathbf{S}_0^{i-1}}) &\ge H(X_{\mathbf{S}_i}|Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup \mathbf{S}_0^{i-1}})\\
&\ge H(X_{\mathbf{S}_i}|A^{[\mathbf{W}_i,\mathbf{S}_i]},Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup\mathbf{S}_0^{i-1}})
\end{align}
In order to get a lower bound for the total number of download bits $D$, we need to minimize the summation.
This lower bound holds for any choice of $\{\mathbf{W}_1,\dots,\mathbf{W}_T\}$.
We will construct a special set $\{\mathbf{W}_1,\dots,\mathbf{W}_T\}$ for which the minimum of the summation can be computed.
For any $i \in \mathbf{W}_0$, let $\mathbf{V}_i \subset \mathbf{S}_0$ denote the minimum subset such that
\begin{align}
H(X_i|A^{[\mathbf{W}_0,\mathbf{S}_0]},Q^{[\mathbf{W}_0,\mathbf{S}_0]},X_{\mathbf{V}_i}) = 0\label{eq:v}
\end{align}
Without loss of optimality, we may assume that $\cup_{i\in \mathbf{W}_0} \mathbf{V}_i = \mathbf{S}_0$.
Let $i^* = \arg\max_{i} |\cup_{j\in \mathbf{W}_0\setminus i}\mathbf{V}_j|$.
We construct $\mathbf{W}_t$ for $t \in [T]$ via the following steps.
\begin{enumerate}
\item Put indices $\mathbf{W}_0 \setminus i^*$ into $\mathbf{W}_t$.
\item Add another index $i_t$ into $\mathbf{W}_t$, where $i_t \not \in \mathbf{W}_0^{t-1}$.
\end{enumerate}
After constructing $\mathbf{W}_t$, we select $\mathbf{S}_t$ to maximize the corresponding term in the summation in Equation~\eqref{eq:lb1}.
In this way, each round adds one new index to $\mathbf{W}_0^{t}$. Hence, after $T = K-N$ rounds, we have $\mathbf{W}_0^T = [K]$.
When the newly added index $i_t \in \mathbf{S}_0^{t-1}$, the optimal choice for $\mathbf{S}_t$ is $\mathbf{S}_t \subset (\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}\setminus i_t)$.
In this case,
\begin{align}
H(X_{\mathbf{W}_t,\mathbf{S}_t}|Q^{[\mathbf{W}_t,\mathbf{S}_t]},X_{\mathbf{W}_0^{t-1}\cup \mathbf{S}_0^{t-1}})- H(X_{\mathbf{S}_t}|A^{[\mathbf{W}_t,\mathbf{S}_t]},Q^{[\mathbf{W}_t,\mathbf{S}_t]},X_{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}})=0,
\end{align}
implying that this choice achieves the minimum.
By assumption, $A^{[\mathbf{W}_t,\mathbf{S}_t]}$, given the side information $X_{\mathbf{S}_t}$, permits decoding of $X_{\mathbf{W}_t}$.
It is possible that the same $A^{[\mathbf{W}_t,\mathbf{S}_t]}$, given the same side information $X_{\mathbf{S}_t}$, {\it also} permits decoding of further messages.
Let us denote the indices of these decodable messages by ${\mathbf{U}_t}$ (noting that ${\mathbf{U}_t}$ may be the empty set), and the corresponding messages by $X_{\mathbf{U}_t}.$ Clearly, $\mathbf{U}_t \subseteq [K] \setminus (\mathbf{W}_t\cup \mathbf{S}_t),$
and the definition of $X_{\mathbf{U}_t}$ can be written as
\begin{align}
H(X_{\mathbf{U}_t}| A^{[\mathbf{W}_t,\mathbf{S}_t]},Q^{[\mathbf{W}_t,\mathbf{S}_t]},X_{\mathbf{S}_t}) = 0.
\end{align}
Similarly, when the newly added index $i_t \in \mathbf{U}_0^{t-1}$, we can show that the optimal choice for $\mathbf{S}_t$ is $\mathbf{S}_t \subset (\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}\setminus i_t)$.
In this case,
\begin{align}
H(X_{\mathbf{W}_t,\mathbf{S}_t}|Q^{[\mathbf{W}_t,\mathbf{S}_t]},X_{\mathbf{W}_0^{t-1}\cup \mathbf{S}_0^{t-1}})- H(X_{\mathbf{S}_t}|A^{[\mathbf{W}_t,\mathbf{S}_t]},Q^{[\mathbf{W}_t,\mathbf{S}_t]},X_{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}})=H(X_{i_t}) = L,
\end{align}
which achieves the minimum.
Now, the difficulty is minimizing those terms in the summation of Equation~\eqref{eq:lb1} where $i_t \not \in (\mathbf{W}_0^{t-1}\cup \mathbf{S}_0^{t-1}\cup \mathbf{U}_0^{t-1})$.
To deal with them, we need to further exploit the lower-bound expression.
Since $X_{\mathbf{W}_t\cup\mathbf{S}_t}$ is independent of the query $Q^{[\mathbf{W}_t,\mathbf{S}_t]}$, we have
\begin{align}
\sum_{i=1}^{T} H(X_{\mathbf{W}_i,\mathbf{S}_i}|Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup \mathbf{S}_0^{i-1}}) &= \sum_{i=1}^{T} H(X_{\mathbf{W}_i,\mathbf{S}_i}|X_{\mathbf{W}_0^{i-1}\cup \mathbf{S}_0^{i-1}}) \\
&= (|\mathbf{W}_0^T\cup\mathbf{S}_0^T| - |\mathbf{W}_0\cup\mathbf{S}_0|)L \\
&= (K-N-M)L.
\end{align}
Thus, the total number of download bits $D$ is also lower-bounded by
\begin{align}
D &\ge NL + (|\mathbf{W}_0^T\cup\mathbf{S}_0^T|-|\mathbf{W}_0\cup\mathbf{S}_0|)L-\max_{\mathbf{S}_1,\dots,\mathbf{S}_T}\sum_{i=1}^{T} H(X_{\mathbf{S}_i}|A^{[\mathbf{W}_i,\mathbf{S}_i]},Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup\mathbf{S}_0^{i-1}})\\
& = (K-M)L - \max_{\mathbf{S}_1,\dots,\mathbf{S}_T}\sum_{i=1}^{T} H(X_{\mathbf{S}_i}|A^{[\mathbf{W}_i,\mathbf{S}_i]},Q^{[\mathbf{W}_i,\mathbf{S}_i]},X_{\mathbf{W}_0^{i-1}\cup\mathbf{S}_0^{i-1}})\label{eq:lb2}
\end{align}
Thus, we can also maximize the summation of conditional entropies in Equation~\eqref{eq:lb2} to obtain the lower bound.
As we have shown before, for $\mathbf{W}_t$ with $i_t \in \{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}\cup\mathbf{U}_0^{t-1}\}$, the optimal choice is $\mathbf{S}_t \subset (\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}\setminus i_t)$, which implies
\begin{align}
H(X_{\mathbf{S}_t}|A^{[\mathbf{W}_t,\mathbf{S}_t]},Q^{[\mathbf{W}_t,\mathbf{S}_t]},X_{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}}) = 0.
\end{align}
For $\mathbf{W}_t$ with $i_t \not\in \{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}\cup\mathbf{U}_0^{t-1}\}$, since $\mathbf{W}_0\setminus i^* \subset \mathbf{W}_t$, we have $\cup_{j\in \mathbf{W}_0\setminus i^*} \mathbf{V}_j \subseteq \mathbf{S}_t$ to guarantee the decoding correctness of $X_{\mathbf{W}_0\setminus i^*}$.
Thus, we can upper-bound the corresponding conditional entropy by
\begin{align}
H(X_{\mathbf{S}_{t}}|A^{[\mathbf{W}_{t},\mathbf{S}_{t}]},Q^{[\mathbf{W}_{t},\mathbf{S}_{t}]},X_{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}}) &\le H(X_{\mathbf{S}_t}|X_{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}\cup\mathbf{U}_0^{t-1}})\\
& \le (|\mathbf{S}_t| - |\cup_{j\in \mathbf{W}_0\setminus i^*} \mathbf{V}_j|)L
\end{align}
Since, by assumption, $\cup_{j\in \mathbf{W}_0} \mathbf{V}_j = \mathbf{S}_0$, we have
\begin{align}
\max_{\mathbf{V}_1^N :|\cup_{i\in \mathbf{W}_0} \mathbf{V}_i| = M } |\mathbf{S}_t| -|\cup_{j\in \mathbf{W}_0\setminus i^*} \mathbf{V}_j| &=
M-\min_{\mathbf{V}_1^N :|\cup_{i\in \mathbf{W}_0} \mathbf{V}_i| = M } \max_i|\cup_{j\in \mathbf{W}_0\setminus i} \mathbf{V}_j| \\
&
\left\{
\begin{aligned}
&\le \left\lfloor \frac{M}{N} \right\rfloor & \text{ if } M\ge N\\
&= 0 &\text{ if } M< N
\end{aligned}
\right.\label{eq:mh}
\end{align}
where the maximum is achieved when the sets $\mathbf{V}_i$ are pairwise disjoint, $M-N\left\lfloor \frac{M}{N} \right\rfloor$ of them have size $\left\lceil \frac{M}{N} \right\rceil$, and the remaining ones have size $\left\lfloor \frac{M}{N} \right\rfloor$.
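The extremal disjoint configuration described above can be verified by direct arithmetic; a small sketch (the helper name is ours):

```python
def balanced_sizes(M, N):
    """Sizes of N disjoint pieces: M - N*floor(M/N) pieces of size
    ceil(M/N) and the rest of size floor(M/N)."""
    q, r = divmod(M, N)
    return [q + 1] * r + [q] * (N - r)

M, N = 5, 2
sizes = balanced_sizes(M, N)
assert sum(sizes) == M
# With disjoint V_i, |union over j != i| = M - |V_i|, so the inner max
# is M - min_i |V_i|, and the bound M - max_i(...) equals floor(M/N).
assert M - max(M - s for s in sizes) == M // N
```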
Therefore, for any $t \in [T]$, if $M<N$
\begin{align}
H(X_{\mathbf{S}_{t}}|A^{[\mathbf{W}_{t},\mathbf{S}_{t}]},Q^{[\mathbf{W}_{t},\mathbf{S}_{t}]},X_{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}}) = 0.
\end{align}
Otherwise, if $M \ge N$
\begin{align}
H(X_{\mathbf{S}_{t}}|A^{[\mathbf{W}_{t},\mathbf{S}_{t}]},Q^{[\mathbf{W}_{t},\mathbf{S}_{t}]},X_{\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}}) \le \left\lfloor \frac{M}{N} \right\rfloor L.
\end{align}
\begin{lemma}
If the number of side information messages is smaller than the number of demand messages, i.e. $M< N$, the minimum number of required transmissions is $K-M$.
\end{lemma}
\begin{proof}
Suppose $M<N$. From Equation~\eqref{eq:mh}, we have that
\begin{align}
M-\min_{\mathbf{V}_1^N :|\cup_{i\in \mathbf{W}_0} \mathbf{V}_i| = M } \max_i|\cup_{j\in \mathbf{W}_0\setminus i} \mathbf{V}_j| = 0.
\end{align}
Thus, each conditional entropy in the summation is zero, except the first term $H(X_{\mathbf{S}_0}) = ML$.
Hence, we have $D\ge (K-M)L$ which gives $R = \frac{D}{L} \ge K-M$.
Additionally, we know that the MDS coding scheme with $K-M$ transmissions is always a valid PIR scheme.
Therefore, $R^* = K-M$.
\end{proof}
Based on this, we can conclude the following useful propositions.
\begin{proportion}
If there exists $j \ne i$ ($i,j \in \mathbf{W}_0$) such that $\mathbf{V}_j = \mathbf{V}_i$, where $\mathbf{V}_i$ and $\mathbf{V}_j$ are defined in Equation~\eqref{eq:v}, then the minimum number of required transmissions is $K-M$.
\end{proportion}
\begin{proof}
If $\mathbf{V}_i = \mathbf{V}_j$ ($i\ne j$), then, when constructing $\mathbf{W}_t$, we can remove either $i$ or $j$ from $\mathbf{W}_0$ and keep the others.
In this way, no matter which new index we add to $\mathbf{W}_t$, we have $H(X_{\mathbf{S}_t}|X_{\mathbf{S}_0}) = 0$. Hence, the lower bound on the number of transmissions is $K-M$, and the MDS coding scheme achieves this lower bound.
\end{proof}
\begin{proportion}\label{pp:NumOfMes}
For any PIR coding scheme, for each $\mathbf{V}_i \ne \emptyset$, defined by Equation~\eqref{eq:v}, given $X_{\mathbf{V}_i}$, besides decoding $X_i$, there must exist at least another $N-1$ messages that can also be decoded.
\end{proportion}
\begin{proof}
Let $\mathbf{Y}_i$ denote the set of indices of the messages that can be decoded given $X_{\mathbf{V}_i}$.
Clearly, $i \in \mathbf{Y}_i$, since $X_i$ can be decoded.
Suppose $|\mathbf{Y}_i| \le N-1$.
Then for any $j \in \mathbf{V}_i$, the set of messages $X_{\mathbf{Y}_i\cup\{j\}}$ cannot be decoded given any $N$ messages in $[K]\setminus (\mathbf{Y}_i\cup\{j\})$.
This is because $\mathbf{V}_i$ is the minimal set of messages required to decode $X_i$.
Hence, $X_{\mathbf{Y}_i\cup\{j\}}$ cannot be the demand messages, which violates the privacy condition.
Therefore, $|\mathbf{Y}_i| \ge N$.
\end{proof}
\begin{proportion}\label{pp:Y}
For $i \in \mathbf{W}_0$, let $\mathbf{Y}_i$ denote the set of indices of the messages that can be decoded given $X_{\mathbf{V}_i}$.
Without loss of optimality, we can assume that $\mathbf{Y}_i \cap \mathbf{Y}_j = \emptyset$ for any $i \ne j$.
\end{proportion}
\begin{proof}
Suppose there is one message $X_u$, which can be decoded given either $\mathbf{V}_i$ or $\mathbf{V}_j$.
Additionally, we assume $i \not\in \mathbf{Y}_j$ and $j \not\in \mathbf{Y}_i$.
To construct a new coding scheme, we can remove $X_u$ from the coded transmissions that are used to decode $X_u$ and $X_j$ given $X_{\mathbf{V}_j}$.
After this modification, $X_u$ can still be decoded given $X_{\mathbf{V}_i}$, and $X_{\mathbf{Y}_j \setminus \{u\}}$ can still be decoded given $X_{\mathbf{V}_j}$.
Moreover, the total number of required transmissions does not increase.
\end{proof}
\begin{lemma}
If $K\le N^2+N+M$, the minimum number of required transmissions is $K-M$.
\end{lemma}
\begin{proof}
According to Proportion~\ref{pp:NumOfMes}, for each $\mathbf{V}_i\ne \emptyset$ ($i\in \mathbf{W}_0$), we have $|\mathbf{Y}_i| \ge N$.
According to Proportion~\ref{pp:Y}, we can assume that for any $i\ne j \in \mathbf{W}_0$, $\mathbf{Y}_i \cap \mathbf{Y}_j = \emptyset$, then we have
\begin{align}
\sum_{i \in \mathbf{W}_0} \left(|\mathbf{V}_i| + |\mathbf{Y}_i|\right) = M+N^2
\end{align}
Thus, if $K < N^2 + M$, there must exist $l \in \mathbf{W}_0$ such that $\mathbf{Y}_l = \emptyset$.
In such cases, $i^* = l$ and $|\cup_{j\in \mathbf{W}_0\setminus i^*} \mathbf{V}_j| = M$.
That means all conditional entropies in the summation are zero, except the first term $H(X_{\mathbf{S}_0}) = ML$, and the total number of required transmissions is $K-M$.
If $N^2+M \le K \le N^2+M+N$, there are enough messages such that for any $i,j\in \mathbf{W}_0$, $\mathbf{Y}_i \cap \mathbf{Y}_j = \emptyset$ and $\mathbf{V}_i \ne \emptyset$.
However, since $|\mathbf{W}_0\cup\mathbf{S}_0\cup\mathbf{U}_0| \ge N^2+M$, the number of new messages available for $X_{\mathbf{W}_1}$, $X_{\mathbf{S}_1}$ and $X_{\mathbf{U}_1}$ is at most $N$.
The side information that can be used to decode the new message added to $\mathbf{W}_1$ is therefore zero.
Hence, we have
\begin{align}
H(X_{\mathbf{S}_1}|A^{[\mathbf{W}_1,\mathbf{S}_1]},Q^{[\mathbf{W}_1,\mathbf{S}_1]},X_{\mathbf{W}_0\cup\mathbf{S}_0\cup\mathbf{U}_0}) = 0.
\end{align}
Otherwise, if $H(X_{\mathbf{S}_1}|X_{\mathbf{W}_0\cup\mathbf{S}_0\cup\mathbf{U}_0}) > 0$, we would have $|\mathbf{W}_1\cup\mathbf{U}_1| <N$.
This means that message sets indexed by size-$N$ subsets of $\mathbf{W}_1\cup\mathbf{S}_1\cup\mathbf{U}_1$ cannot be the demand messages, which violates the privacy condition.
Therefore, for $K\le N^2+N+M$, the minimum number of required transmissions is $K-M$.
\end{proof}
If $K \ge N^2+N+M$, it is possible to select the $\mathbf{V}_i$'s ($i\in \mathbf{W}_0$) such that each conditional entropy in the summation achieves its maximum. The number of messages required for $\mathbf{W}_0$, $\mathbf{S}_0$ and $\mathbf{U}_0$ is
\begin{align}
|\mathbf{W}_0\cup \mathbf{S}_0\cup \mathbf{U}_0| &= |\mathbf{W}_0|+ |\mathbf{S}_0|+| \mathbf{U}_0|\\
&= N + M + N(N-1)\\
& = N^2+M
\end{align}
As we have shown, the corresponding conditional entropy is positive only when $i_t \not\in (\mathbf{W}_0^{t-1}\cup\mathbf{S}_0^{t-1}\cup\mathbf{U}_0^{t-1})$.
Moreover, for each such $i_t$, there must be $N-1$ further messages that can also be decoded given the new side information messages used for decoding $X_{i_t}$.
Hence, we have
\begin{align}
T^* = \left\lceil\frac{K-M-N^2}{N+\lfloor \frac{M}{N} \rfloor}\right\rceil
\end{align}
And if $K-M-N^2-(T^*-1)( N+\lfloor \frac{M}{N} \rfloor) \le N$, there are at most $N$ messages left after using $X_{\mathbf{W}_0^{T^*-1}\cup\mathbf{S}_0^{T^*-1}\cup\mathbf{U}_0^{T^*-1}}$; hence, they must be sent separately.
Therefore, we have
\begin{align}
R = \lim_{L\to \infty}\frac{D}{L} \ge K-M - (T^*-1)^+\lfloor \frac{M}{N} \rfloor - (K-M-N^2-(T^*-1)^+( N+\lfloor \frac{M}{N} \rfloor)- N)^+
\end{align}
which can be shown to be equivalent to~\eqref{eq:main}. This proves the converse on the minimum number of required transmissions in an alternative way.
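As a sanity check, the appendix bound can be compared numerically with Theorem~\ref{Thm:main}. In the sketch below (names are ours), we read both discount terms as subtracted, which is the sign reading consistent with the theorem's closed form:

```python
from math import ceil

def appendix_bound(K, M, N):
    """Appendix lower bound on R, with both discount terms subtracted
    (this sign reading is ours, chosen to match Theorem Thm:main)."""
    Mbar = M // N
    T = ceil((K - M - N * N) / (N + Mbar))   # T*
    Tm = max(T - 1, 0)                       # (T* - 1)^+
    return K - M - Tm * Mbar - max(K - M - N * N - Tm * (N + Mbar) - N, 0)

assert appendix_bound(13, 5, 2) == 6   # matches R* = 6 for Example 1
```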
\section*{ACKNOWLEDGMENT}
This work was supported in part by the Swiss National Science Foundation under Grant 169294.
\bibliographystyle{IEEEtran}
| {
"timestamp": "2018-08-20T02:07:51",
"yymm": "1808",
"arxiv_id": "1808.05797",
"language": "en",
"url": "https://arxiv.org/abs/1808.05797",
"abstract": "We study the problem of single-server multi-message private information retrieval with side information. One user wants to recover $N$ out of $K$ independent messages which are stored at a single server. The user initially possesses a subset of $M$ messages as side information. The goal of the user is to download the $N$ demand messages while not leaking any information about the indices of these messages to the server. In this paper, we characterize the minimum number of required transmissions. We also present the optimal linear coding scheme which enables the user to download the demand messages and preserves the privacy of their indices. Moreover, we show that the trivial MDS coding scheme with $K-M$ transmissions is optimal if $N>M$ or $N^2+N \\ge K-M$. This means if one wishes to privately download more than the square-root of the number of files in the database, then one must effectively download the full database (minus the side information), irrespective of the amount of side information one has available.",
"subjects": "Information Theory (cs.IT)",
"title": "Single-Server Multi-Message Private Information Retrieval with Side Information"
} |
https://arxiv.org/abs/1404.1224 | Burnside problem for groups of homeomorphisms of compact surfaces | A group $\Gamma$ is said to be periodic if for any $g$ in $\Gamma$ there is a positive integer $n$ with $g^n=id$.We first prove that a finitely generated periodic group acting on the 2-sphere $\SS^2$ by $C^1$-diffeomorphisms with a finite orbit, is finite and conjugate to a subgroup of $\mathrm{O}(3,\R)$ and we use it for proving that a finitely generated periodic group of spherical diffeomorphisms with even bounded orders is finite. Finally, we show that a finitely generated periodic group of homeomorphisms of any orientable compact surface other than the 2-sphere or the 2-torus (which is the purpose of a previous paper of the authors) is finite. | \section{Classification of finite order orientation preserving spherical homeomorphisms.}
\smallskip
In this section, we recall the classification of finite order orientation preserving homeomorphisms of the sphere
up to conjugacy.
According to \cite{CK}, \cite{Ei}, \cite{Ep} or \cite{K}, a finite order spherical homeomorphism is conjugate to an orthogonal map of $\mathrm{O}(3,\RR)$ and, using the classification of elements in the orthogonal group $\mathrm{O}(3,\RR)$, we get the following definition and proposition.
\begin{defi} A spherical homeomorphism is called {\bf quasi-rotation} if it is
conjugated to a spherical rotation. \end{defi}
\begin{prop}\label{isotop}
A finite order, orientation preserving spherical homeomorphism is a quasi-rotation
and hence, if it is non trivial, it has exactly two fixed points.
\end{prop}
\begin{rema}\label{relrk} The ``folkloric result" states that if the homeomorphism is a $C^1$-diffeomorphism then the conjugating map is also a $C^1$-diffeomorphism.
\end{rema}
As a corollary of Proposition \ref{isotop}, we get
\begin{coro}\label{pseudo}
If $\Gamma$ is a periodic group, then the subgroup $\mathbf {G}= \{g\in \Gamma : g$ is orientation preserving $\}$ is exactly the set $\{g\in \Gamma : g$ is a quasi-rotation $\}$. In particular, the subset of $\Gamma$ consisting of quasi-rotations is a subgroup.
\end{coro}
\section{Reduction to groups of rational quasi-rotations.}
\smallskip
A direct consequence of the fact that the composition of two orientation reversing homeomorphisms is orientation preserving, is
\begin{lemm}\label{finiteindex}
Let $\Gamma$ be a finitely generated periodic group of spherical homeomorphisms. Then $G$, the subgroup of $\Gamma$ consisting of quasi-rotations, is of finite index in $\Gamma$. Hence, $\Gamma$ is finite if and only if $G$ is finite. \end{lemm}
For proving Theorem \ref{theo1}, it is enough to prove that $G$ is finite. This is the purpose of the following section.
Our proof considers the following three cases: the case where $G$ has a global fixed point, treated in Proposition \ref{Global};
the case where $G$ has a finite orbit of cardinality 2, proven in Proposition \ref{card2}; and the remaining case where $G$ has a
finite orbit of cardinality at least 3, shown in Proposition \ref{Cardgeq3}.
\section{Burnside problem for groups of rational quasi-rotations.}
\smallskip
Let $G$ be a finitely generated periodic group of quasi-rotations of the sphere.
\begin{defi}
We denote by $\mathbf {P_G}$ the set of points in $\mathbb S^2$ that arise as fixed points of some non trivial element of $G$.
Let $x\in P_G$, we denote by $\mathbf { St_G }(x) $ the {\bf stabilizer} in $G$ of $x$, that is the set:
$$\mathbf { St_G }(x) =\{g\in G : g(x)=x \}.$$ \end{defi}
\begin{lemm}\label{properties}
The set $P_G$ is $G$-invariant and $\displaystyle G=\bigcup_{x\in P_G} \mathrm{St}_G (x)$.
\end{lemm}
\begin{proof}
Let $x_0\in P_G$ and $g\in G$. By definition, there exists $f\in G$ such that $f(x_0)= x_0$. As $g\circ f\circ g^{-1} (g(x_0)) =g\circ f(x_0) = g(x_0)$, it follows that $g(x_0) \in P_G$.
The second point is a direct consequence of the fact that any element in $G$ admits a fixed point (every non trivial quasi-rotation has exactly two fixed points). \end{proof}
\medskip
\subsection{Groups acting with a global fixed point.} \
\smallskip
\begin{defi} Let $x\in \SS^2$, we define the following groups:
${\text{\bf Diff}} ^{1} _+ (\SS^2)$ consisting of orientation preserving $C^1$ spherical diffeomorphisms,
${\text{\bf Diff}}^1_{x}(\SS^2)$ consisting of $C^1$ spherical diffeomorphisms fixing $x$ and
${\text {\bf Diff}}^{1}_{x,+} (\SS^2)$ their intersection.
\end{defi}
\medskip
This section contains results on finitely generated periodic subgroups of $\mathrm{ Diff}^1_{x,+} (\SS^2)$. Furthermore, in the next section, we will apply these results in order to describe stabilizers of points that might not be finitely generated.
\medskip
\begin{defi} Define the map $\mathbf {D}: \mathrm{Diff}^1_{x} (\SS^2) \rightarrow \mathrm{GL}(2,\RR)$ by
$$D(g) = Dg(x) : \RR^2 \approx T_x \SS^2\rightarrow \RR^2 \approx T_x \SS^2, \text{ the differential map at $x$}.$$
\end{defi}
\medskip
\begin{lemm}\label{Diff}
The map $D$ is a morphism and the image of a periodic subgroup of $\mathrm{Diff}^1_{x,+} (\SS^2)$ is a periodic subgroup of $\mathrm{SL}(2,\RR)$.
\end{lemm}
\begin{proof}
As $D(fg) (x) = Df(g(x)) Dg(x)= Df(x)Dg(x)$ for any $f$, $g$ in $\mathrm{Diff}^1_{x,+} (\SS^2)$, $D$ is a morphism. Moreover, $D(g)$ has finite order provided that $g$ has.
\medskip
Let $g$ be a finite order element in $\mathrm{Diff}^1_{x,+} (\SS^2)$. By Proposition \ref{isotop} and Remark \ref{relrk}, there exist a spherical diffeomorphism $h$ and a spherical rational rotation $R_\alpha$ such that $g=h^{-1} R_\alpha h $. Without loss of generality, we can assume that $h(x)=x$ and $R_\alpha (x)=x$.
\smallskip
Indeed, if not, set $y:=h(x)\not=x$. There exists a spherical rotation $R_\beta$ such that $R_\beta(y) =x$, hence $ R_\beta h (x)=x$, and we can rewrite $g =( R_\beta h) ^{-1} (R_\beta R_\alpha R_\beta ^{-1}) (R_\beta h).$
Thus $g= H^{-1} R_\alpha' H$, where $H=R_\beta h$ fixes $x$ and $R_\alpha'= R_\beta R_\alpha R_\beta ^{-1}$ is a spherical rotation and $R_\alpha' (x) = H g H^{-1}(x)= H g(x) = H(x) =x$.
\medskip
Finally, we have $D(g)= D(h) ^ {-1} D(R_\alpha) D(h)$, since $D$ is a morphism. The linear map $D(R_\alpha)$ is the planar rotation of angle $\alpha$, hence $D(g)$ has determinant equal to $1$, so it belongs to $\mathrm{SL}(2,\RR)$. \end{proof}
\smallskip
\begin{prop}\label{Global} Let $G$ be a finitely generated periodic subgroup of $\mathrm{Diff}_{x,+}^{1} (\SS^2)$. Then $G$ is finite and abelian.
\end{prop}
\begin{proof}
The set $D(G)$ is a finitely generated periodic subgroup, since it is the image by a morphism of $G$ satisfying these properties.
According to Schur's theorem (\cite{Sh}), as $D(G)$ is a finitely generated periodic subgroup of $\mathrm{SL}(2,\RR)$, it is finite, hence compact.
The classification of compact subgroups of $\mathrm{SL}(2,\RR)$ states that $D(G)$ is conjugated to a subgroup of $\mathrm{SO}(2,\RR)$ consisting of linear planar rotations (see for example \cite{La}), hence $D(G)$ is abelian.
As a consequence, for any $f,g$ in $G$, $D([f,g]) = Id$, where $[f,g]=fgf^{-1}g^{-1}$ is the commutator of $f$ and $g$.
\smallskip
Finally, $[f,g]=h R_\alpha h ^{-1}$, since it is a quasi-rotation.
Then $D([f,g]) = D(h R_\alpha h ^{-1}) = D(h ) D(R_\alpha) D(h )^{-1} = Id $ and therefore $D(R_\alpha) =Id$ so $\alpha=0$ and $R_\alpha=Id$. Hence $[f,g] = Id$.
Consequently $G$ is a periodic, finitely generated and abelian group. It follows that $G$ is finite. \end{proof}
\subsection{$G$ has a finite orbit of cardinality $2$} \
\smallskip
An important ingredient in this case is the following lemma concerning stabilizers of points. As a consequence of Proposition \ref{Global}, we have
\begin{lemm}\label{abelian}
Let $x_0\in P_G$ and $G'$ be a periodic subgroup of $\mathrm{Diff}_{x_0,+}^{1} (\SS^2)$, then $G'$ is an abelian group and its elements have the same two fixed points. Moreover, any finitely generated subgroup of $G'$ is finite and conjugated to a group of rational spherical rotations of same axis.
In particular, if $G$ is a finitely generated periodic subgroup of $\mathrm{Diff} _+^{1} (\SS^2)$, the subgroup $G':= \mathrm{St}_G (x_0)$ is an abelian group.
\end{lemm}
\begin{proof}
Let $f$, $g$ in $G'$ . The group $<f,g>$ generated by $f$ and $g$ is a finitely generated periodic subgroup of $\mathrm{Diff} _+^1 (\SS^2)$ that fixes $x_0$. Hence, according to Proposition \ref{Global}, $<f,g>$ is finite and abelian. Consequently, $f$ and $g$ commute.
Moreover, $f g f^{-1} = g$ implies that $\mathrm{Fix} (g) = f( \mathrm{Fix} (g)) $. Let $y\not=x_0$ be the second fixed point of the quasi-rotation $g$. Then $\mathrm{Fix}(g) = \{x_0,y \} = \{ f(x_0)=x_0, f(y) \}$, hence $f(y)= y$.
\medskip
Any finitely generated subgroup of $G'$ is abelian and periodic, so it is a finite subgroup of $ \mathrm{Diff} _+^1 (\SS^2)$.
By the folkloric result, it is $C^1$-conjugated to a subgroup of rational rotations.
\end{proof}
\begin{prop}\label{card2}
Let $G$ be a finitely generated periodic subgroup of $C^1$ quasi-rotations. If $G$ has a finite orbit of cardinality $2$, then $G $ is finite.
\end{prop}
\begin{proof}
Let $x_0$ be a point with $G$-orbit of cardinality 2. We write $\mathcal O_G({x_0}) =\{x_0, x'_0\}$.
\smallskip
By Lemma \ref{abelian}, $\mathrm{St}_G ({x_0})$ is abelian.
\smallskip
We claim that $[G,G]$, the first derived subgroup of $G$, is contained in $\mathrm{St}_G ({x_0})$.
Let $g\in G$, we have $g(x_0) \in \mathcal O_G({x_0}) =\{x_0, x'_0\}$ and
$g(x'_0) \in \mathcal O_G({x_0}) =\{x_0, x'_0\}$. Noting that if $g(x_0) = x'_0$ then $g^{-1}(x_0) = x'_0$, it is easy to check that $[f,g] (x_0)= fgf^{-1} g^{-1} (x_0) = x_0$, in all possible cases.
\medskip
We conclude that $[G,G]$ is abelian; this means that $[[G,G],[G,G]]$, the second derived subgroup of $G$, is trivial.
The last part of this proof is a general fact for finitely generated groups generated by $s$ finite order elements $g_1,..., g_s$:
{\em ``any element of $G$ can be written $g= g_1^{p_1}....g_s^{p_s} C$, where $C\in [G,G]$ and $p_i\geq 0$ is bounded by the order of $g_i$."}
So the index of $[G,G]$ in $G$ is bounded by the product of the orders of $g_1,..., g_s$. Moreover, Schreier's lemma
states that any subgroup of finite index in a finitely generated
group is finitely generated. This implies that $[G,G]$ is also finitely generated.
Here, as $G$ is a periodic group, $[G,G]$ is a finitely generated periodic group.
In particular, it is generated by finite order elements. Hence, the same argument shows that $[[G,G],[G,G]]$ has finite index in $[G,G]$, which has finite index in $G$. Finally $[[G,G],[G,G]]$ has finite index in $G$. But we have also shown that $[[G,G],[G,G]]$ is trivial, so $G$ is finite.
\end{proof}
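The index bound used in the proof can be checked on a small concrete example. The sketch below (our own illustration, not part of the proof) takes $G=S_3$ generated by two involutions, computes $[G,G]$ by closing the set of commutators $fgf^{-1}g^{-1}$ under composition, and verifies that the index of $[G,G]$ in $G$ is bounded by the product of the orders of the generators:

```python
from itertools import product

def compose(f, g):
    # (f o g)(i) = f[g[i]], permutations stored as tuples
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(gens):
    # smallest set containing gens and closed under composition
    group = set(gens)
    while True:
        new = {compose(a, b) for a, b in product(group, group)} - group
        if not new:
            return group
        group |= new

ident = (0, 1, 2)
s, t = (1, 0, 2), (0, 2, 1)         # two involutions generating S_3
G = closure({ident, s, t})
commutators = {compose(compose(f, g), compose(inverse(f), inverse(g)))
               for f, g in product(G, G)}
D = closure(commutators | {ident})  # the derived subgroup [G, G]
assert len(G) // len(D) <= 2 * 2    # index bounded by ord(s) * ord(t)
```

Here $[S_3,S_3]=A_3$ has index $2$, within the bound $2\cdot 2=4$ given by the generator orders.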
\medskip
\begin{rema} Under the hypotheses that $G$ is a finitely generated periodic group of homeomorphisms, $\#P_G= 2$ and $G$ preserves a probability measure on $\mathbb S^2\setminus P_G$, analogous arguments as those developed in \cite{GL3} show that $G$ is finite and abelian.
\end{rema}
\smallskip
A sketch of the proof of the last Remark is the following:
$G$ acts on the open annulus $\mathbb S^2\setminus P_G$ and preserves a measure. Hence, we can define the rotation map $\rho : G \rightarrow \mathbb S^1$; the number $\rho(g)$ coincides with the angle of a rotation conjugated to $g$.
As in \cite{GL3}, one shows that $\rho$ is a morphism. Therefore, it vanishes on commutators. Then
any commutator is conjugate to a rotation of angle $0$, so it is trivial. It follows that $G$ is abelian and since it is also finitely
generated and periodic, it is finite.
\subsection{$G$ has a finite orbit of cardinality at least $3$}
\medskip
\begin{prop}\label{Cardgeq3}
Let $G$ be a finitely generated periodic subgroup of quasi-rotations. If $G$ admits a finite orbit of cardinality at least $3$, then $G $ is finite.
\end{prop}
\begin{proof}
Let $x_0\in\SS^2$ having a finite $G$-orbit.
\smallskip
As $\#\mathcal O_G({x_0})\geq 3$, we can write $\mathcal O_G({x_0}) =\{x_0=g_0(x_0), x_1=g_1(x_0),..., x_n= g_n(x_0)\} $, with distinct $x_i$ and $n\geq 2$.
\medskip
We first claim that if $G$ is not finite, then $x_0 \in P_G$.
\smallskip
Indeed, as $G$ is not finite, there exists $f\notin \{Id=g_0, g_1, ..., g_n\} $. Since $f(x_0) \in \mathcal O_G({x_0})$, there exists $i$ such
that $f(x_0)= g_i(x_0)$, and then $x_0$ is fixed by $g_i ^{-1}f $.
\medskip
We secondly prove that $\mathrm{St}_G (x_0)$ is a finite group.
\smallskip
If $\mathrm{St}_G (x_0)$ is not a finite group, we write $\mathrm{St}_G(x_0) = \{f_n, n\in \mathbb N\}$, where $f_n\not=f_m$ if $n\not= m$.
As $\mathcal O_G({x_0})$ is finite, there are infinitely many $f_{s_n} , n\in \mathbb N$ such that $f_{s_n}(x_1)=f_{s_m}(x_1)$, for any $n,m$. Then $f_{s_0}^{-1} \circ f_{s_n} (x_1)=x_1$ and so $F_n= f_{s_0}^{-1} \circ f_{s_n}$ fixes $x_0$ and $x_1$.
\smallskip
Analogously, there exist infinitely many $F_{k_n} , n\in \mathbb N$ such that $F_{k_n}(x_2)=F_{k_m}(x_2)$, for any $n,m$. Then $F^{-1}_{k_0} \circ F_{k_n} (x_2)=x_2$ and so $F^{-1}_{k_0} \circ F_{k_n}$ fixes $x_0$, $x_1$ and $x_2$.
\smallskip
Consequently, $F^{-1}_{k_0} \circ F_{k_n}=Id$, since a non trivial quasi-rotation has exactly two fixed points. Finally, $F_{k_0}=F_{k_n}$, for any $n\in \NN$. This is a contradiction. Hence $\mathrm{St}_G (x_0)$ is finite.
\medskip
Finally, we conclude that $G$ is finite, by proving that $\displaystyle G=\bigcup_{i=0}^n g_i (\mathrm{St}_G(x_0)) $: let $g\in G$, as $g(x_0) \in \mathcal O_G({x_0})$, there exists $i$ such that $g(x_0)=g_i(x_0)$. Hence $g_i^{-1} g\in \mathrm{St}_G(x_0)$ and then
$ g\in g_i(\mathrm{St}_G(x_0))$. \end{proof}
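The final counting step is the orbit-stabilizer relation $\#G = \#\mathcal O_G(x_0)\cdot\#\mathrm{St}_G(x_0)$, obtained by writing $G$ as the union of the cosets $g_i(\mathrm{St}_G(x_0))$. A minimal numerical sanity check (our own illustration, with $G=S_3$ acting on $\{0,1,2\}$, permutations encoded as tuples):

```python
from itertools import product

def compose(f, g):
    # (f o g)(i) = f[g[i]], permutations stored as tuples
    return tuple(f[g[i]] for i in range(len(g)))

def closure(gens):
    # smallest set containing gens and closed under composition
    group = set(gens)
    while True:
        new = {compose(a, b) for a, b in product(group, group)} - group
        if not new:
            return group
        group |= new

G = closure({(0, 1, 2), (1, 0, 2), (1, 2, 0)})  # S_3
orbit = {g[0] for g in G}                       # orbit of the point x0 = 0
stab = {g for g in G if g[0] == 0}              # St_G(0)
# G is the union of the cosets g(St_G(0)), and the count factorizes:
assert set().union(*({compose(g, h) for h in stab} for g in G)) == G
assert len(G) == len(orbit) * len(stab)
```

In the proof, finiteness of the orbit and of the stabilizer therefore force $G$ itself to be finite.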
\section{ Burnside problem for groups of homeomorphisms of the remaining surfaces.}
In this section we prove that a finitely generated periodic group of homeomorphisms of the closed disk is finite. We also prove Corollary \ref{Ann} and Theorem \ref{theo2}.
\subsection{ Burnside problem for groups of homeomorphisms of the closed 2-disk. } \
\medskip
Let $\Gamma$ be a finitely generated periodic subgroup of homeomorphisms of ${\mathbb D}^2$ and let $C=\partial {\mathbb D}^2$. As $C$ is invariant by $\Gamma$, according to the positive answer to the Burnside problem on the circle, $\Gamma$ acts as an abelian and finite group on $C$ and this action is faithful since any periodic homeomorphism on ${\mathbb D}^2$ whose restriction to $C$ is the identity is the identity (see \cite{CK}). As a consequence, $\Gamma$ is an abelian and finite group.
\subsection{ Burnside problem for groups of homeomorphisms of the closed annulus (Corollary \ref{Ann}) } \
\medskip
{\bf First Proof using Burnside Problem on ${\mathbb T}^2$.}
Let $\Gamma$ be a finitely generated periodic subgroup of $\mathrm{Homeo}({\mathbb A}^2)$. We can form ${\mathbb T}^2$, the double of ${\mathbb A}^2$, by gluing two copies of ${\mathbb A}^2$, $A_1$ and $A_2$, along their boundary.
\smallskip
Let $g\in \Gamma$; we denote by $g_i$ its corresponding map on $A_i$. We define the double $\tilde g$ of $g$ on ${\mathbb T}^2$ by $\tilde g(x) = g_i(x) $ if $x\in A_i$. By construction $\tilde g$ is a finite order (same order as $g$) homeomorphism that preserves the gluing boundary $C$, and the induced action of $\Gamma$ ($g \mapsto \tilde g$) is faithful.
\smallskip
Therefore $\Gamma$ acts on $C$ as a finitely generated periodic group and its subgroup preserving each boundary component is of index at most $2$. According to the positive answer to the Burnside problem on the circle, this subgroup acts on each component as a finite group and the same holds for $\Gamma$. We conclude that $\Gamma$ acts on ${\mathbb T}^2$ with a finite orbit. In particular, $\Gamma$ acts faithfully on ${\mathbb T}^2$ preserving a probability measure, so the main theorem of \cite{GL3} implies that $\Gamma$ is finite.
\medskip
{\bf Second Proof.}
Any finite order homeomorphism of the closed annulus is an isometry for some flat Riemannian metric (see \cite{Ep} and \cite{St}).
Let $\Gamma_0$ be the subgroup of $\Gamma$ consisting of homeomorphisms that preserve each connected component of the boundary. It is of finite index, so for proving that $\Gamma$ is finite it suffices to prove that its subgroup $\Gamma_0$ is finite.
Let $C_1$ and $C_2$ be the connected components of the boundary. As $C_i$ is invariant by $\Gamma_0$, $\Gamma_0$ acts as an abelian and finite group on $C_i$ and this action is faithful: an orientation preserving isometry of the closed annulus is a rotation about the central axis, so if it is equal to the identity on $C_1 \cup C_2$, then it is the identity on the annulus. Hence $\Gamma_0$, and therefore $\Gamma$, is finite.
\medskip
The forthcoming subsections provide the proof of Theorem \ref{theo2}.
\subsection{ Burnside problem for groups of homeomorphisms of a closed orientable surface $S$ of genus $g$ greater than one. } \
\smallskip
Let $\Gamma$ be a finitely generated periodic subgroup of homeomorphisms of $S$.
Any finite order homeomorphism of $S$ is an isometry for some Riemannian metric of constant curvature equal to $-1$ (see \cite{Ep} and \cite{St}).
Let $\phi: \Gamma \rightarrow \mathrm{GL}(2g, \ZZ)$ be the homology morphism.
The image $\phi(\Gamma)$ is a finitely generated periodic subgroup of $\mathrm{GL}(2g,\ZZ)$, hence it is finite, according to Schur's theorem.
We will use a strong result: there is no torsion element in the Torelli group, that is, any homeomorphism homologous to the identity whose isotopy class is periodic is isotopic to the identity (see, for example, Theorem 6.12 of \cite{FM}).
Then the kernel of $\phi$ consists of isometries isotopic to the identity. But the only orientation preserving isometry isotopic to the identity is the identity (see \cite{FM}, proof of Proposition 7.7). Since the subgroup of isometries isotopic to the identity that preserve orientation is of finite index in the group of isometries isotopic to the identity, it follows that the kernel of $\phi$ is also finite. As a consequence, $\Gamma$ is finite.
\smallskip
\subsection{ Burnside problem for groups of homeomorphisms of a compact orientable surface $S$ whose boundary has more than three circles. } \
\smallskip
Let $\Gamma$ be a finitely generated periodic subgroup of homeomorphisms of $S$.
$\Gamma$ preserves the boundary of $S$, that is, an union of circles.
We can form the double of $S$, $\widehat{S}$, by gluing two copies of $S$: $S_1$ and $S_2$ along their respective boundaries $\gamma_1$ and $\gamma_2$.
Let $g\in \Gamma$; we denote by $g_i$ its corresponding map on $S_i$. We define the double $\tilde g$ of $g$ on $\widehat{S}$ by $\tilde g(x) = g_i(x) $ if $x\in S_i$. By construction $\tilde g$ is a finite order (same order as $g$) homeomorphism that preserves the gluing boundary, and the induced action of $\Gamma$ ($g \mapsto \tilde g$) is faithful on the compact surface $\widehat{S}$ of genus greater than one. By the last subsection, $\Gamma$ is finite.
\section{Proof of Theorem \ref{theo3} and Corollary \ref{coro21}.}
\noindent {\bf Proof of Theorem \ref{theo3}.}
{\bf 1}. Let $\Gamma$ be a finitely generated periodic group of diffeomorphisms of a compact manifold $M$ of dimension $n$. Suppose that $\Gamma$ acts on $M$ with a finite orbit.
\begin{clai} \label{clai1} There exists $x_0\in M$ such that $\Gamma_0= St_\Gamma ({x_0})$ is a finite index, finitely generated periodic subgroup of $\Gamma$.
\end{clai}
\begin{proof} Let $\mathcal{O} _{x_0} =\{x_0,x_1,...,x_m\}$ be a finite $\Gamma$-orbit. For $i\in \{0,...,m\}$, we write $x_i = g_i (x_0)$, where $g_i\in \Gamma$ and $g_0=Id_M$.
Let $g\in \Gamma$, as $g(x_0) \in \mathcal{O} _{x_0}$ there exists some $i \in \{0,...,m\}$ such that $g(x_0) = g_i(x_0)$. Therefore $g_i^{-1} g (x_0) = x_0$ that is $g\in g_i( St_\Gamma ({x_0}))$.
In conclusion, $\displaystyle \Gamma= \bigcup_{i=0}^m g_i( St_\Gamma ({x_0}))$, meaning that $St_\Gamma ({x_0})$ has finite index in $\Gamma$; according to Schreier's Lemma it is finitely generated.
\end{proof}
\begin{rema} As a consequence of this claim, for proving that $\Gamma$ is finite it suffices to prove that its subgroup $\Gamma_0$, acting with a global fixed point on $M$, is finite. This is the purpose of the next proposition.
\end{rema}
\begin{prop} Let $\Gamma_0$ be a finitely generated periodic group of diffeomorphisms of $M$. If $\Gamma_0$ acts on $M$ with a global fixed point, then $\Gamma_0$ is finite.
\end{prop}
\begin{proof}
Consider the map $\mathbf {D}: \Gamma_0 \rightarrow \mathrm{GL}(n,\RR)$, where $D(g) = D g(x_0)$ is the differential map of $g$ at $x_0$ (after identification of $T_{x_0}M $ with $\RR^n$).
\medskip
It is easy to see that the map $D$ is a morphism and that it is faithful.
Indeed, let $g\in \Gamma_0$ be such that $D(g) =Id$. We have already noted that $g$ is an isometry for some Riemannian metric $m_g$ on $M$. Since an isometry is uniquely determined by its value and its differential at a single point, we get that $g=Id_M$.
Then $\Gamma_0$ is isomorphic to its image $D(\Gamma_0)$, which is a finitely generated periodic subgroup of $\mathrm{GL}(n,\RR)$, hence finite, according to Schur's theorem. This concludes the case where $\Gamma$ acts with a finite orbit on $M$.
\bigskip
{\bf 2.} Let $\Gamma$ be a finitely generated periodic group of diffeomorphisms of a compact manifold $M$. Suppose that $\Gamma$ preserves a finite union of circles on $M$.
By an analogous argument to Claim \ref{clai1}, it suffices to prove that a finitely generated periodic subgroup of diffeomorphisms of $M$ that preserves a circle is finite. According to the positive answer to the Burnside problem on the circle, $\Gamma$ admits a finite orbit (on its invariant circle) so by part {\bf 1} it is finite.
\end{proof}
\noindent {\bf Proof of Corollary \ref{coro21}.}
Let $f$ be a non trivial central element of $\Gamma$, a finitely generated periodic subgroup of $\mathrm{Diff}^1 (\SS^2)$. As $f$ commutes with any $g$ in $\Gamma$, the set $\mathrm{Fix}(f)$ consisting of its fixed points is $\Gamma$-invariant. The classification of finite order homeomorphisms of $\SS^2$ indicates that $\mathrm{Fix}(f)$ is either finite or a circle. Hence, Theorem \ref{theo3} implies that $\Gamma$ is finite. \hfill $\square$
\bigskip
\section{Groups of even bounded orders}
The aim of this section is proving Corollary \ref{coroeven}.
\smallskip
Let $G$ be a finitely generated periodic group of spherical diffeomorphisms with even bounded orders. The subgroup of orientation preserving elements of $G$ is a group with even bounded orders of index at most 2 in $G$, so we can suppose that $G$ consists of $C^1$ quasi-rotations; in particular, any non trivial element of $G$ has
exactly two fixed points.
\medskip
According to the classification of the finite subgroups of $\mathrm{Diff}_+^1 (\SS^2)$ and the fact that the alternating groups $A_4$, $A_5$ and the symmetric group $S_4$ contain elements of order 3, a finite group with even orders of orientation preserving spherical diffeomorphisms is either
\begin{enumerate}
\item a cyclic group $\ZZ / m\ZZ$ where
{$m=2 p$}, $p\in \NN$ or
\item a dihedral group ${\mathbb D}_m =<\sigma, \tau \vert \sigma^2=\tau^m= 1, \sigma\tau\sigma= \tau ^{-1} > \\ = <\sigma, \sigma' \vert \sigma^2= (\sigma') ^2= (\sigma \sigma')^m=1>$,
{$m=2 p$}, $p\in \NN$.
\end{enumerate}
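The exclusion of the exceptional rotation groups rests on their elements of order 3, which is easy to verify computationally. The following quick check (our own illustration, with permutations encoded as tuples) confirms that $A_4$ and $S_4$ contain order-3 elements; $A_5$ inherits them since it contains a copy of $A_4$:

```python
from itertools import permutations

def compose(f, g):
    # (f o g)(i) = f[g[i]], permutations stored as tuples
    return tuple(f[g[i]] for i in range(len(g)))

def order(p):
    # order of a permutation: smallest n with p^n = id
    ident = tuple(range(len(p)))
    q, n = p, 1
    while q != ident:
        q, n = compose(p, q), n + 1
    return n

def is_even(p):
    # parity via inversion count
    inversions = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return inversions % 2 == 0

S4 = list(permutations(range(4)))
A4 = [p for p in S4 if is_even(p)]
# both contain 3-cycles, i.e. elements of order 3
assert any(order(p) == 3 for p in A4)
assert any(order(p) == 3 for p in S4)
```

Since a group all of whose elements have even order cannot contain an order-3 element, only the cyclic and dihedral families above remain.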
\medskip
Note that a group with even orders always contains involutions (elements of order 2). Let us denote by $Inv(G)=\{\sigma \in G\setminus\{Id\} : \sigma^2=1\}$ the set of involutions and by $Z(\sigma)= \{f\in G : f\sigma=\sigma f \}$ the centralizer of $\sigma$ in $G$.
\medskip
\begin{lemm}\label{carr} \
Let $G$ be a finitely generated periodic group of orientation preserving spherical diffeomorphisms with even bounded orders.
\begin{enumerate}
\item Let $\sigma_0\in Inv(G)$, the set $Z(\sigma_0)\cap Inv(G)$ is finite,
\item $Inv(G)$ is finite.
\end{enumerate}
\end{lemm}
\begin{proof}
Let us write $Z(\sigma_0)\cap Inv(G)= \{i_n, n\in \NN\}$, where $i_0=\sigma_0$.
Fix $n\in\NN$; the group $G_n$ generated by $i_0$, ..., $i_n$ is finitely generated and preserves the set $Fix(i_0)$ (since $i_k$ commutes with $i_0$), hence it is finite, by Theorem \ref{theo1}. Therefore this group is either a cyclic group or a dihedral group and $ \{Id\} \subset G_0=<i_0> \subset \cdots \subset G_{n} \subset G_{n+1}\cdots$ .
This sequence stabilizes at some rank $n_0$, since $G$ is of bounded orders. That is, for all $n\geq n_0$ one has $G_n= G_{n_0}$, hence $i_n \in G_{n_0}$, $\forall n\geq n_0$ and therefore $Z(\sigma_0)\cap Inv(G)$ is finite.
\medskip
We now prove item 2. Suppose by contradiction that there exists in $G$ an infinite sequence of involutions $\sigma_1, \cdots ,\sigma_n, \cdots$ .
For all $n\in \NN$, the group $<\sigma_0, \sigma_n>$ is either cyclic or dihedral. Therefore it contains an involution $i_n$ that commutes with $\sigma_0$ and $\sigma_n$ ($i_n=\sigma_0 $ in the cyclic case, or $i_n=( \sigma_0\sigma_n) ^{\frac{m_n}{2}}$ in the dihedral case $<\sigma_0, \sigma_n>= {\mathbb D}_{m_n}$).
Since $Z(\sigma_0)\cap Inv(G)$ is finite, we can suppose (eventually passing to an infinite subsequence of $(\sigma_n)$) that $i_n=i$, for all $n$.
Finally, $i$ commutes with all $\sigma_n$, that is, $\{\sigma_n, n\in \NN\} \subset Z(i)\cap Inv(G)$, which is finite by item 1, a contradiction. \end{proof}
\medskip
\noindent {\it End of proof of Corollary \ref{coroeven}.}
Applying Lemma \ref{carr}, we obtain that $Inv(G)$ and therefore {$Fix(Inv(G))=\{x \in \SS^2 : \sigma(x)=x, $ for some $\sigma \in Inv(G) \}$} are finite sets.
As $Fix (f\sigma f^{-1})= f(Fix(\sigma))$ and $ f\sigma f^{-1} \in Inv(G)$ for all $f\in G$, the finite set $Fix(Inv(G))$ is non empty and $G$-invariant. By Theorem \ref{theo1}, $G$ is finite. \hfill $\square$
| {
"timestamp": "2014-11-12T02:13:42",
"yymm": "1404",
"arxiv_id": "1404.1224",
"language": "en",
"url": "https://arxiv.org/abs/1404.1224",
"abstract": "A group $\\Gamma$ is said to be periodic if for any $g$ in $\\Gamma$ there is a positive integer $n$ with $g^n=id$.We first prove that a finitely generated periodic group acting on the 2-sphere $\\SS^2$ by $C^1$-diffeomorphisms with a finite orbit, is finite and conjugate to a subgroup of $\\mathrm{O}(3,\\R)$ and we use it for proving that a finitely generated periodic group of spherical diffeomorphisms with even bounded orders is finite. Finally, we show that a finitely generated periodic group of homeomorphisms of any orientable compact surface other than the 2-sphere or the 2-torus (which is the purpose of a previous paper of the authors) is finite.",
"subjects": "Dynamical Systems (math.DS); Group Theory (math.GR)",
"title": "Burnside problem for groups of homeomorphisms of compact surfaces"
} |
https://arxiv.org/abs/1711.00935 | A Note on Robust Biarc Computation | A new robust algorithm for the numerical computation of biarcs, i.e. $G^1$ curves composed of two arcs of circle, is presented. Many algorithms exist but are based on geometric constructions, which must consider many geometrical configurations. The proposed algorithm uses an algebraic construction which is reduced to the solution of a single $2$ by $2$ linear system. Singular angles configurations are treated smoothly by using the pseudoinverse matrix when solving the linear system. The proposed algorithm is compared with the Matlab's routine \texttt{rscvn} that solves geometrically the same problem. Numerical experiments show that Matlab's routine sometimes fails near singular configurations and does not select the correct solution for large angles, whereas the proposed algorithm always returns the correct solution. The proposed solution smoothly depends on the geometrical parameters so that it can be easily included in more complex algorithms like splines of biarcs or least squares data fitting. | \section{Introduction}
In the industrial applications of curves there are two philosophies: one is the use of highly sophisticated polynomial splines or transcendental curves like high degree Bézier curves~\cite{Bezier:1970},
rational functions~\cite{Saini:2015,Zheng:2009,Farin:1999},
clothoid curves~\cite{Bertolazzi:2014,Narayan:2014,stoer:1982,Meek:1998,Walton:2009},
hodographs~\cite{Farouki:2015,Kozak:2015}, etc., which can produce continuous paths up to the curvature, the jerk or the snap (or even higher), but at a relatively expensive computational cost, usually because there are no closed form solutions and a system of nonlinear equations must be solved numerically. The other side of the coin is the employment of low degree polynomials, for instance piecewise linear interpolants or circular arc splines. The advantage of using this family of curves is that, at the price of losing some precision and smoothness, the computational time required to produce a path is in practice negligible, because the associated interpolation problem can be solved with elementary operations. Moreover, sometimes it is simply not necessary to go beyond $G^1$ continuity; a typical case is represented by real time applications.\\
In this paper we discuss an improvement of the algorithm for $G^1$ biarc fitting used in Matlab. A biarc is a curve obtained by connecting two arcs of circle that match with $G^1$ continuity and interpolate two given points and two angles.
Biarcs have several interesting properties, first of all, they are easy to understand and to use: in fact the arclength computation is straightforward, the tangent vector field is continuous and defined everywhere, the curvature is defined almost everywhere and is piecewise constant. Moreover, they are very useful in several applications, for instance, they are effectively used in the approximation of higher degree
curves~\cite{Maier:2014,Deng:2014} or spirals \cite{Narayan:2014}, they easily produce curves particularly used in CNC machining and milling, where the cutting devices follow the so called G-code, i.e. path composed of straight lines and circles.
Other applications of biarcs are in Computer Aided Design or Manufacturing (CAD-CAM), where they are used to specify the path \cite{Yang:2006} or the offset of a more general curve, \cite{Kim:2012}.
\\
\textbf{Related work.} Biarcs were originally proposed in an industrial environment rather than in an academic one, and since the 1970s they have been studied extensively by Bézier \cite{Bezier:1970}, Bolton \cite{Bolton:1975} and Sabin \cite{Sabin:1977}. A general theoretical framework for a complete classification of the biarcs, in the M\"obius plane, is proposed in \cite{Kurnosenko:2013}. The solution of the biarc interpolation problem is not unique because the imposed constraints leave one degree of freedom; thus there is a one-dimensional family of biarcs interpolating general planar $G^1$ Hermite data. Different choices of this free parameter give origin to different interpolation schemes. The most used construction techniques build the biarc by equal chord or by parallel tangent, \cite{Sir:2006}. In the first case the length of the two arcs is chosen equal; in the second case the tangent at the joint point is chosen parallel to the segment that connects the initial with the final point, \cite{Meek:1995,Meek:2008,Narayan:2014}. In all cases, as shown for instance in \cite{Sir:2006}, all the possible joint points must lie on a certain circle.
These solutions are based on a geometric approach and consider different cases (up to 128), as detailed in \cite{Kurnosenko:2013} and in the references therein. Typical cases are the C-shaped, S-shaped and J-shaped biarcs \cite{Kurnosenko:2013,Deng:2012}.\\
\textbf{Paper contribution.} The algorithm proposed herein handles the range of Hermite data for which Matlab's implementation fails or gives an inconsistent solution. These cases are discussed with examples in Section~\ref{sec:num}.
Following the approach we used for the solution of the $G^1$ Hermite Interpolation Problem with clothoid curves \cite{Bertolazzi:2014}, we propose herein a novel purely analytic solution to the biarc problem, which does not require splitting the problem into mutually exclusive cases. We select the free parameter required to close the system of equations in the same way as the Matlab Curve Fitting Toolbox implementation (\cite{matlab:2017a}, page 12-218). The construction is explained in detail in the next section. The issue with Matlab's function for biarcs (\texttt{rscvn}) is that it cannot solve certain configurations of angles and gives an inconsistent solution for some ranges of angles. We show how to overcome this problem while maintaining the same approach for the construction of the biarc. The solution is also extremely fast and numerically stable to compute, because only the solution of a $2$ by $2$ linear system is required. This is done via the explicit computation of the pseudoinverse matrix \cite{Shinozaki:1972}, which guarantees a consistent solution also in the case when the linear system is singular. The proposed algorithm is tested and validated in Section \ref{sec:num}, and the complete pseudocode is given in~\ref{sec:algo}.
\section{Biarc Formulation}\label{sec:properties}
The biarc problem requires finding the pair of circle segments (possibly degenerate, as we will clarify next) that connect two points in the plane with assigned initial and final angles~\cite{Deng:2012}. More formally, it is the solution of the $G^1$ Hermite Interpolation Problem with two arcs. Let $\bm{p}_0=(x_0,y_0)^T$ and $\bm{p}_1=(x_1,y_1)^T$ be two points in the plane $\mathbb{R}^2$, and let $\vartheta_0$ and $\vartheta_1$ be the associated angles; then the biarc problem requires the solution of the following Boundary Value Problem (BVP):
\begin{EQ}[rclrclrcl]\label{eq:ODE}
x'(\ell) &=& \cos \theta(\ell), \qquad & x(0)&=&x_0, \qquad & x(L)&=&x_1,\\
y'(\ell) &=& \sin \theta(\ell), \qquad & y(0)&=&y_0, \qquad & y(L)&=&y_1, \\
\theta'(\ell) &=& k(\ell), \qquad & \theta(0)&=&\vartheta_0, \qquad & \theta(L)&=&\vartheta_1,
\end{EQ}
where the curvilinear abscissa $\ell$ is in the range $[0,L]$.
The above equations ensure that the solution exhibits $G^1$ continuity; however, because there are not enough degrees of freedom, in general it is not possible to satisfy \eqref{eq:ODE} with a single arc or straight line. Therefore the curvature cannot be a continuous function, and it is taken piecewise constant:
\begin{EQ}\label{eq:ODE:k}
k(\ell) = \begin{cases}
\varkappa_0 & 0\leq \ell < \ell_\star \\
\varkappa_1 & \ell_\star \leq \ell \leq L
\end{cases}
\end{EQ}
where we assume that the curvilinear abscissa $\ell$ runs from $0$ to $L$ and the curvature has a jump at
$\ell_\star$, with $\ell_\star\in[0,L]$. The point at $\ell_\star$ is where the two arcs join. The two curvatures $\varkappa_0$, $\varkappa_1$ are real values, which can take the value zero; when they are different from zero, they are associated with the radii of curvature of the two circles.
This formulation of the problem also contains degenerate cases in which the solution is not composed of two circles (i.e. we allow $\varkappa_0=0$ or $\varkappa_1=0$), meaning that a straight line can be part of the solution. Other particular cases are a single arc of circle or a single straight line.
As pointed out in several references \cite{Meek:1995,Maier:2014,Kurnosenko:2013}, with this formulation the biarc solution is not unique: the constraints leave one degree of freedom, which allows many different geometric constructions \cite{Sir:2006}.
In this paper we focus on the solution proposed and implemented in Matlab's \texttt{rscvn} function, \cite{matlab:2017a}, page 12-218, which uses the degree of freedom to assign the direction of the (unit) normal vector $\bm{n}(\ell)$ to the trajectory at $\ell_\star$:
\begin{EQ}\label{matlab:n}
\bm{n}(\ell_\star)=(-\sin\theta(\ell_\star),\cos\theta(\ell_\star))^T.
\end{EQ}
The consequence of assigning $\bm{n}(\ell_\star)=\bm{v}$ is that problem~\eqref{eq:ODE}--\eqref{eq:ODE:k}
will have at most one solution. According to Matlab's Handbook, such normal vector
\begin{quote}
`` $\bm{v}$ is chosen as the reflection, across the perpendicular to the segment
from $\bm{p}_0$ to $\bm{p}_1$, of the average of the vectors $\bm{n}(0)$ and $\bm{n}(L)$''.
\end{quote}
We elaborate this construction by recasting it into an equivalent one expressed with
the tangent vectors $\tt(\ell)$.
Applying a rotation of $\pi/2$ to $\bm{n}(\ell_\star)=\bm{v}$ yields
the equivalent condition $\tt(\ell_\star)=\bm{w}$, where $\bm{w}$
is the reflection, across the segment from $\bm{p}_0$ to $\bm{p}_1$, of the average of the tangents $\tt(0)$ and $\tt(L)$.
Moreover, this construction can be improved by reasoning on the angles instead of the tangent vectors.
Indeed, it is more convenient to use the average of the angles rather than the average of the vectors, especially when the average of the vectors yields a null (or very small) vector: in such cases the normal vector is not well defined, but the average of the angles always is.\\
Based on condition~\eqref{matlab:n}, we construct $\bm{w}=(\cos\vartheta_\star,\sin\vartheta_\star)^T$, where $\vartheta_\star$ is computed as (see Figure \ref{fig:0}):
\begin{EQ}\label{eq:angle}\label{matlab:t}
\overline{\vartheta_\star} = \dfrac{\vartheta_0+\vartheta_1}{2},
\qquad
\vartheta_\star = \alpha+(\alpha-\overline{\vartheta_\star}) = 2\alpha-\overline{\vartheta_\star}
\end{EQ}
with $\alpha = \mathop{\mathrm{atan2}}(y_1-y_0,x_1-x_0)$, i.e. $\alpha$ is the angle that satisfies
\begin{EQ}\label{eq:polar}
\begin{cases}
x_1-x_0=d\cos\alpha, \\
y_1-y_0=d\sin\alpha,
\end{cases}
\quad
d=\norm{
\begin{pmatrix}
x_1-x_0\\
y_1-y_0
\end{pmatrix}
}.
\end{EQ}
The condition $\bm{n}(\ell_\star)=\bm{v}$ becomes thus
$\theta(\ell_\star)=\vartheta_\star$.
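For illustration, the average-of-angles construction above can be sketched in a few lines of Python (a hypothetical helper written for this note, not the paper's reference implementation):

```python
import math

def joint_angle(p0, p1, theta0, theta1):
    """Joint-tangent angle: theta_star = 2*alpha - (theta0 + theta1)/2,
    where alpha is the direction of the chord from p0 to p1."""
    alpha = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    return 2.0 * alpha - 0.5 * (theta0 + theta1)

# Example: chord along the x axis, symmetric S-shaped data.
ts = joint_angle((0.0, 0.0), (1.0, 0.0), math.pi / 2, math.pi / 2)
```

With $\vartheta_0=\vartheta_1=\pi/2$ and the chord on the $x$ axis this returns $-\pi/2$, which is consistent with the singular-case condition $\theta_\star=-\theta$ discussed later.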
\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.7]{tangents}
\end{center}
\caption{Generalisation of Matlab biarc interpolation scheme, converted from normal vectors to tangent vectors.
The figure shows the case of $\bm{p}_0$ and $\bm{p}_1$ aligned with the $x$ axis and $(x_\star,y_\star)$ the
joint point.}\label{fig:0}
\end{figure}
Now consider the Initial Value Problem (IVP) for the first segment of the biarc problem:
\begin{EQ}[rclrcl]\label{eq:IVP}
x'(\ell) &=& \cos \theta(\ell), \qquad & x(0) &=&x_0,\\
y'(\ell) &=& \sin \theta(\ell), \qquad & y(0) &=&y_0,\\
\theta'(\ell) &=& \varkappa_0 \qquad & \theta(0)&=&\vartheta_0,
\end{EQ}
where $\varkappa_0\in\mathbb{R}$ is a constant value to be determined.
\begin{definition}\label{def:sc}
We define the functions $\mathop{\mathrm{sinc}} x$ and $ \mathop{\mathrm{cosc}} x$ as
\begin{EQ}\label{eq:smooth}
\mathop{\mathrm{sinc}} x = \dfrac{\sin x}{x},\qquad
\mathop{\mathrm{cosc}} x = \dfrac{1-\cos x}{x}
\end{EQ}
that are used to find a numerically robust solution to~\eqref{eq:IVP}.
A standard way to compute~\eqref{eq:smooth} near the critical point $x=0$ is to
expand them with their Taylor approximations:
\begin{EQ}[rclrcl]
\mathop{\mathrm{sinc}} x &=& 1-\dfrac{x^2}{6}\left(1-\dfrac{x^2}{20}\right)
+\varepsilon_s(x),\quad &
\abs{\varepsilon_s(x)}&\leq &\dfrac{\abs{x}^6}{5040}
\\
\mathop{\mathrm{cosc}} x &=& \dfrac{x}{2}
\left(1-\dfrac{x^2}{12}\left(1-\dfrac{x^2}{30}\right)\right)
+\varepsilon_c(x),\quad&
\abs{\varepsilon_c(x)}&\leq &\dfrac{\abs{x}^7}{40320}.
\end{EQ}
Only a small number of terms must be considered for the required precision, for example, to limit the error for $\mathop{\mathrm{sinc}} x$ below $10^{-20}$ it is enough to have $\abs{x}\leq 0.002$, whereas for a (relative) error in the series of $\mathop{\mathrm{cosc}} x$ smaller than $10^{-20}$ it is enough to have $\abs{x}\leq 0.003$. They are implemented in Algorithms \ref{algo:Sinc} and \ref{algo:Cosc} in \ref{sec:algo}.
\end{definition}
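A direct transcription of these guarded evaluations can be sketched as follows (our own Python sketch; the switch thresholds follow the error bounds above, and for double precision any threshold up to about $10^{-4}$ would also be adequate):

```python
import math

def sinc(x):
    # sin(x)/x, with the Taylor branch of Definition 1 near x = 0
    if abs(x) < 0.002:
        x2 = x * x
        return 1.0 - (x2 / 6.0) * (1.0 - x2 / 20.0)
    return math.sin(x) / x

def cosc(x):
    # (1 - cos(x))/x, with the analogous Taylor branch near x = 0
    if abs(x) < 0.003:
        x2 = x * x
        return (x / 2.0) * (1.0 - (x2 / 12.0) * (1.0 - x2 / 30.0))
    return (1.0 - math.cos(x)) / x
```

Both functions are continuous across the switch point to well below double precision, so no branch-induced jumps propagate into the biarc coefficients.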
By using definition~\ref{def:sc}, it is found by direct integration that the solution of~\eqref{eq:IVP} can be written as
\begin{EQ}\label{eq:pre:nlsys}
\begin{pmatrix}
x(\ell) \\ y(\ell)
\end{pmatrix}
=
\begin{pmatrix}
x_0 \\ y_0
\end{pmatrix}
+
\ell
\begin{pmatrix}
\cos\vartheta_0 & -\sin\vartheta_0 \\
\sin\vartheta_0 & \cos\vartheta_0
\end{pmatrix}
\begin{pmatrix}
\mathop{\mathrm{sinc}}(\varkappa\ell) \\
\mathop{\mathrm{cosc}}(\varkappa\ell)
\end{pmatrix}
\qquad
\theta(\ell) = \vartheta_0+\ell\varkappa,
\end{EQ}
where $\ell$ is the arc length of the curve.
In an analogous way we can compute the solution of the second arc, which begins in $\bm{p}_1$ with the corresponding angle and goes backwards from $\bm{p}_1$ to meet the first segment.
The biarc problem hence requires finding the point of intersection of the two curves, and leads to the following problem definition.
\begin{problem}\label{prob:biarc}
The joint condition obtained with the Matlab condition~\eqref{matlab:t} (or equivalently~\eqref{matlab:n})
yields the nonlinear system:
\begin{EQ}\label{eq:pre:nlsys:1:nonstandard}
\begin{cases}
x(\ell_0;x_0,y_0,\vartheta_0,\varkappa_0)= x(-\ell_1;x_1,y_1,\vartheta_1,\varkappa_1),\\
y(\ell_0;x_0,y_0,\vartheta_0,\varkappa_0)= y(-\ell_1;x_1,y_1,\vartheta_1,\varkappa_1),\\
\vartheta_0+\ell_0\varkappa_0 = \vartheta_\star, \\
\vartheta_1-\ell_1\varkappa_1 = \vartheta_\star,
\end{cases}
\end{EQ}
where the unknowns are $\ell_0$, $\ell_1$, $\varkappa_0$ and $\varkappa_1$.
The function $x(\ell_0;x_0,y_0,\vartheta_0,\varkappa_0)$ is the solution of~\eqref{eq:pre:nlsys}
with initial values $x_0$, $y_0$, $\vartheta_0$, $\varkappa_0$ and analogously for the other functions. It is important to point out that $\ell_0>0$ and $\ell_1>0$.
\end{problem}
At this stage it is convenient to recast the problem into standard form, by a transformation that maps the initial and final points to $(0,0)$ and $(1,0)$, respectively. A similar bipolar transformation is proposed also in \cite{Kurnosenko:2013, Bertolazzi:2014}.
\begin{problem}[Standard form]\label{prob:biarc:standard}
The problem in standard form (after roto-translation and scaling) yields the nonlinear system:
\begin{EQ}\label{eq:pre:nlsys:1}
\begin{cases}
\big(\cos\theta_0\mathop{\mathrm{sinc}}(s\kappa_0)-\sin\theta_0\mathop{\mathrm{cosc}}(s\kappa_0)\big)s
+\big(\cos\theta_1\mathop{\mathrm{sinc}}(-t\kappa_1)-\sin\theta_1\mathop{\mathrm{cosc}}(-t\kappa_1)\big)t=1 \\
\big(\sin\theta_0\mathop{\mathrm{sinc}}(s\kappa_0)+\cos\theta_0\mathop{\mathrm{cosc}}(s\kappa_0)\big)s
+\big(\sin\theta_1\mathop{\mathrm{sinc}}(-t\kappa_1)+\cos\theta_1\mathop{\mathrm{cosc}}(-t\kappa_1)\big)t=0\\
\theta_0+s\kappa_0 = \theta_\star, \\
\theta_1-t\kappa_1 = \theta_\star,
\end{cases}
\end{EQ}
where using~\eqref{eq:polar} we obtain the following identity
\begin{EQ}
\theta_0 = \vartheta_0-\alpha,\quad
\theta_1 = \vartheta_1-\alpha,\quad
\theta_\star = \vartheta_\star-\alpha,\quad
\kappa_0 = \varkappa_0 d,\quad
\kappa_1 = \varkappa_1 d,\quad
s = \ell_0/d,\quad
t = \ell_1/d,
\end{EQ}
moreover the solution must satisfy $s>0$ and $t>0$. Notice that the standard assumption that the two points to be interpolated are different, i.e. $\bm{p}_0\neq \bm{p}_1$ implies $d>0$, hence $s$ and $t$ are well defined.
\end{problem}
\begin{lemma}\label{lem:regolare}
The solution $(s,t,\kappa_0,\kappa_1)$ of nonlinear system~\eqref{eq:pre:nlsys:1} in Problem~\ref{prob:biarc:standard} is obtained by solving the linear system
\begin{EQ}\label{eq:lsys}
\bm{A}
\begin{pmatrix} s \\ t \end{pmatrix}
=
\begin{pmatrix} 1 \\ 0 \end{pmatrix}
\end{EQ}
where $\bm{A}$ is a $2$ by $2$ matrix given by
\begin{EQ}\label{eq:sys:coeff}
\begin{pmatrix}
A_{11} \\ A_{21}
\end{pmatrix}
=
\begin{pmatrix}
\cos\theta_0 & -\sin\theta_0 \\
\sin\theta_0 & \cos\theta_0
\end{pmatrix}
\begin{pmatrix}
\mathop{\mathrm{sinc}}\theta_\star^0 \\
\mathop{\mathrm{cosc}}\theta_\star^0
\end{pmatrix},
\qquad
\begin{pmatrix}
A_{12} \\ A_{22}
\end{pmatrix}
=
\begin{pmatrix}
\cos\theta_1 & -\sin\theta_1\\
\sin\theta_1 & \cos\theta_1
\end{pmatrix}
\begin{pmatrix}
\mathop{\mathrm{sinc}}\theta_\star^1 \\
\mathop{\mathrm{cosc}}\theta_\star^1
\end{pmatrix},
\end{EQ}
and $\theta_\star^0 = \theta_\star - \theta_0$,
$\theta_\star^1 = \theta_\star - \theta_1$. Finally
$\kappa_0=\theta_\star^0/s$ and $\kappa_1=-\theta_\star^1/t$.
\end{lemma}
\begin{proof}
From the last two equations of~\eqref{eq:pre:nlsys:1} we obtain
$s\kappa_0 = \theta_\star - \theta_0=\theta_\star^0$ and $-t\kappa_1 = \theta_\star-\theta_1=\theta_\star^1$.
The substitution of these relations into the first two equations of~\eqref{eq:pre:nlsys:1}
yields the linear system~\eqref{eq:lsys}.
\qed
\end{proof}
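To make the lemma concrete, the assembly of $\bm{A}$ from~\eqref{eq:sys:coeff} and the direct regular-case solve can be sketched in Python (our own names; guarded sinc/cosc as in Definition~\ref{def:sc} with simplified thresholds; the singular case, handled later via the pseudoinverse, is deliberately omitted here):

```python
import math

def sinc(x):
    return math.sin(x) / x if abs(x) > 1e-4 else 1.0 - x * x / 6.0

def cosc(x):
    return (1.0 - math.cos(x)) / x if abs(x) > 1e-4 else 0.5 * x * (1.0 - x * x / 12.0)

def biarc_standard(theta0, theta1, theta_star):
    # Assemble the 2x2 system A (s,t)^T = (1,0)^T of the lemma and solve it
    # by Cramer's rule (regular case only; assumes det != 0 and s, t != 0).
    d0 = theta_star - theta0          # theta_star^0 = s * kappa0
    d1 = theta_star - theta1          # theta_star^1 = -t * kappa1
    A11 = math.cos(theta0) * sinc(d0) - math.sin(theta0) * cosc(d0)
    A21 = math.sin(theta0) * sinc(d0) + math.cos(theta0) * cosc(d0)
    A12 = math.cos(theta1) * sinc(d1) - math.sin(theta1) * cosc(d1)
    A22 = math.sin(theta1) * sinc(d1) + math.cos(theta1) * cosc(d1)
    det = A11 * A22 - A12 * A21
    s, t = A22 / det, -A21 / det
    return s, t, d0 / s, -d1 / t      # (s, t, kappa0, kappa1)
```

By construction $s\kappa_0=\theta_\star^0$ and $-t\kappa_1=\theta_\star^1$, so the two angle equations of~\eqref{eq:pre:nlsys:1} are satisfied exactly and the two position equations reduce to the solved linear system.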
The solution of the linear system \eqref{eq:lsys} must be handled with care
because of the numerical instabilities that occur when the matrix is rank deficient, i.e. when the determinant of the coefficient matrix is zero or close to zero. We now discuss these implications: first, we consider the following determinants, used to
solve the linear system theoretically by Cramer's Rule.
\begin{lemma}[Theoretical solution]
The solution $(s,t,\kappa_0,\kappa_1)$ of nonlinear system~\eqref{eq:pre:nlsys:1} of Problem~\ref{prob:biarc:standard}
is
\begin{EQ}\label{eq:nonlin:sol}
s=-\dfrac{\mathcal{K}(\theta_1,\theta_\star)}
{\mathcal{D}(\theta_\star^0,\theta_\star^1)},\qquad
t=\dfrac{\mathcal{K}(\theta_0,\theta_\star)}
{\mathcal{D}(\theta_\star^0,\theta_\star^1)},
\end{EQ}
with $\kappa_0=\theta_\star^0/s$ and $\kappa_1=-\theta_\star^1/t$, where
\begin{EQ}
\mathcal{D}(x,y) = \dfrac{\sin(x-y)+\sin y-\sin x}{xy},\qquad
\mathcal{K}(x,y) = \dfrac{\cos x-\cos y}{x-y}.
\end{EQ}
\end{lemma}
\begin{proof}
We have the following determinants:
\begin{EQ}
\abs{
\begin{matrix}
A_{11} & A_{12} \\
A_{21} & A_{22}
\end{matrix}
}
= \mathcal{D}(\theta_\star^0,\theta_\star^1),
\quad
\abs{
\begin{matrix}
1 & A_{12} \\
0 & A_{22}
\end{matrix}
}
= A_{22} =
-\mathcal{K}(\theta_1,\theta_\star),
\quad
\abs{
\begin{matrix}
A_{11} & 1 \\
A_{21} & 0
\end{matrix}
}
= -A_{21} =
\mathcal{K}(\theta_0,\theta_\star),
\end{EQ}
and the thesis follows by employing Cramer's Rule for solving a linear system.\qed
\end{proof}
\begin{remark}
The functions $\mathcal{D}(x,y)$ and $\mathcal{K}(x,y)$ can be evaluated via the identity
\begin{EQ}
\mathcal{D}(x,y) = \mathop{\mathrm{sinc}} y\mathop{\mathrm{cosc}} x-\mathop{\mathrm{sinc}} x\mathop{\mathrm{cosc}} y, \qquad
\mathcal{K}(x,y) = -\sin\left(\dfrac{x+y}{2}\right)\mathop{\mathrm{sinc}}\left(\dfrac{x-y}{2}\right)
\end{EQ}
Thus, the functions $\mathcal{D}(x,y)$ and $\mathcal{K}(x,y)$ can be computed
with the $\mathop{\mathrm{sinc}}$ and $\mathop{\mathrm{cosc}}$ expansions of Definition~\ref{def:sc}
and are well defined and numerically stable for all $x$ and $y$.
\end{remark}
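In code, the stable evaluations can be sketched as follows (our own helper names; the sinc/cosc guards are simplified, and one can check directly that \texttt{calK} agrees with $(\cos x-\cos y)/(x-y)$):

```python
import math

def sinc(x):
    return math.sin(x) / x if abs(x) > 1e-4 else 1.0 - x * x / 6.0

def cosc(x):
    return (1.0 - math.cos(x)) / x if abs(x) > 1e-4 else 0.5 * x * (1.0 - x * x / 12.0)

def calD(x, y):
    # D(x,y) = (sin(x-y) + sin y - sin x)/(x*y), without the 0/0 issue
    return sinc(y) * cosc(x) - sinc(x) * cosc(y)

def calK(x, y):
    # K(x,y) = (cos x - cos y)/(x - y), without the 0/0 issue at x = y
    return -math.sin(0.5 * (x + y)) * sinc(0.5 * (x - y))
```

Note that $\mathcal{D}(x,x)=0$ and $\mathcal{K}(x,-x)=0$ come out exactly in this form, which is what the singular-case analysis below relies on.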
When the linear system \eqref{eq:lsys} has full rank and is far from singularity,
there are no numerical issues and the computation is safe.
It is important to notice, however, that the solution~\eqref{eq:nonlin:sol} of the nonlinear system requires the ratio of those functions, which is not well defined when $\mathcal{D}(x,y)$ is close to zero. For instance, we have
$\mathcal{D}(x,x)=0$, and thus the system associated to Problem \ref{prob:biarc} of biarc fitting has a singular configuration if $\theta_\star^0=\theta_\star^1$, that is if $\theta_1=\theta_0$.
Another pathologic case is $\mathcal{K}(x,-x)=0$, which happens when the solution is degenerate, e.g. when a curvature becomes zero. This occurs when $\theta_i=-\theta_\star$; since in standard form $\theta_\star=-(\theta_0+\theta_1)/2$, this means $\theta_i=(\theta_0+\theta_1)/2$, which again implies $\theta_0=\theta_1$.
\begin{lemma}[Existence of the solution]\label{lem:singular}
Let $\theta_0$ and $\theta_1$ be angles in the interval $[-\pi,\pi]$.
The solution $(s,t,\kappa_0,\kappa_1)$ of nonlinear system~\eqref{eq:pre:nlsys:1} in Problem~\ref{prob:biarc:standard}
exists if $\theta_0\neq\theta_1$.
In the singular case $\theta_0=\theta_1=\theta$ the solution exists only if
the Matlab condition $\theta_\star=-\theta$ is satisfied and $\theta\in(-\pi,\pi)$.
\end{lemma}
\begin{proof}
In the singular case the coefficients of the linear system become
\begin{EQ}[rcl]
\begin{pmatrix}
A_{11} \\ A_{21}
\end{pmatrix}
=
\begin{pmatrix}
A_{12} \\ A_{22}
\end{pmatrix}
&=&
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix}
\begin{pmatrix}
\mathop{\mathrm{sinc}}(\theta_\star-\theta) \\
\mathop{\mathrm{cosc}}(\theta_\star-\theta)
\end{pmatrix}
=
\dfrac{1}{\theta_\star-\theta}
\begin{pmatrix}
\sin\theta_\star-\sin\theta \\
\cos\theta-\cos\theta_\star
\end{pmatrix}\\
&=&\mathop{\mathrm{sinc}}((\theta_\star-\theta)/2)
\begin{pmatrix}
\cos((\theta_\star+\theta)/2) \\
\sin((\theta_\star+\theta)/2)
\end{pmatrix}
\end{EQ}
and the system reduces to
\begin{EQ}
\mathop{\mathrm{sinc}}((\theta_\star-\theta)/2)
\begin{pmatrix}
\cos((\theta_\star+\theta)/2) \\
\sin((\theta_\star+\theta)/2)
\end{pmatrix}
(s+t) =
\begin{pmatrix}
1\\0
\end{pmatrix}.
\end{EQ}
The only way for the system to be consistent is that $\sin((\theta_\star+\theta)/2)=0$, i.e.
$\theta_\star+\theta=2k\pi$; due to the limitation on the angle range, $\theta_\star=-\theta$.
In this case we have that
\begin{EQ}
\mathop{\mathrm{sinc}}\theta
\begin{pmatrix}
1 \\
0
\end{pmatrix}
(s+t) =
\begin{pmatrix}
1\\0
\end{pmatrix},
\end{EQ}
which shows that the solution of the system exists and satisfies $s+t=1/\mathop{\mathrm{sinc}}\theta$
when $\mathop{\mathrm{sinc}}\theta\neq 0$.
Finally $\mathop{\mathrm{sinc}}\theta>0$ for $\theta\in(-\pi,\pi)$.\qed
\end{proof}
In conclusion, in the regular case the existence of the solution of~\eqref{eq:pre:nlsys:1}
follows from the solution of the linear system of Lemma~\ref{lem:regolare}.
In the singular case, when the Matlab condition is used, Lemma~\ref{lem:singular} shows that
the linear system~\eqref{eq:lsys} is consistent and problem~\eqref{eq:pre:nlsys:1}
admits solutions.
Thus, even in the singular case it is possible to obtain a solution that makes
sense for the geometric problem.
In the singular case the least squares solution of the linear system~\eqref{eq:lsys} is chosen.
The linear system is always solved with the
stable pseudoinverse computation, which smoothly covers both the singular
and non-singular cases.
This computation is extremely fast due to the small dimension of the problem.
For a $2$ by $2$ matrix, the pseudoinverse can be computed directly, thus avoiding the need for
additional libraries and algorithms~\cite{Shinozaki:1972}.
Using the LU factorisation of a $2$ by $2$ nonzero matrix $\bm{A}=\bm{L}\bm{U}$,
the pseudoinverse of $\bm{A}$ is easily computed by distinguishing only two cases (when $\bm{A}\neq\bm{0}$):
\begin{itemize}
\item $\bm{L}$ and $\bm{U}$ are square and non-singular so that
the pseudoinverse is equal to the usual inverse and
$\bm{A}^+=\bm{A}^{-1}= \bm{U}^{-1}\bm{L}^{-1}$.
\item $\bm{L}$ and $\bm{U}$ are two vectors (row and column respectively).
From the property $(\bm{L}\bm{U})^+ = \bm{U}^+\bm{L}^+$,
$\bm{U}^+$ and $\bm{L}^+$ are computed using the formula
$\bm{a}^+ = \bm{a}^T/\norm{\bm{a}}^2$ (valid when $\bm{a}$
is a row or a column vector).
\end{itemize}
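Following this recipe, a compact closed-form $2$ by $2$ pseudoinverse can be sketched as follows (our own Python sketch: it branches on the determinant instead of forming $\bm{L}$ and $\bm{U}$ explicitly, which covers the same two cases):

```python
def pinv2x2(A, eps=1e-12):
    # Moore-Penrose pseudoinverse of a 2x2 matrix A = [[a, b], [c, d]].
    # Three cases: A = 0, full rank (A+ = A^-1), and rank one (A = u v^T,
    # whence A+ = v u^T / (|u|^2 |v|^2)).
    (a, b), (c, d) = A
    det = a * d - b * c
    n2 = a * a + b * b + c * c + d * d
    if n2 < eps:                              # A = 0
        return [[0.0, 0.0], [0.0, 0.0]]
    if abs(det) > eps * n2:                   # full rank
        return [[d / det, -b / det], [-c / det, a / det]]
    # rank one: take u = the larger column, and v such that A = u v^T
    if a * a + c * c >= b * b + d * d:
        u, v = (a, c), (1.0, (a * b + c * d) / (a * a + c * c))
    else:
        u, v = (b, d), ((a * b + c * d) / (b * b + d * d), 1.0)
    s = 1.0 / ((u[0] ** 2 + u[1] ** 2) * (v[0] ** 2 + v[1] ** 2))
    return [[s * v[0] * u[0], s * v[0] * u[1]],
            [s * v[1] * u[0], s * v[1] * u[1]]]
```

On a rank-one matrix such as $\left[\begin{smallmatrix}1&2\\2&4\end{smallmatrix}\right]$ this returns the matrix divided by $25$, and the Moore-Penrose identity $\bm{A}\bm{A}^+\bm{A}=\bm{A}$ can be checked directly.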
The complete biarc algorithm is implemented in Algorithm \ref{algo:biarc} in the Appendix, together with the pseudoinverse computation (Algorithm~\ref{algo:2x2}).
\section{Numerical Tests}
\label{sec:num}
In this section we show some numerical experiments to validate the presented algorithm. In the first test, see Figure \ref{fig:1}, we create a bouquet of biarcs all starting in $\bm{p}_0=(0,0)$ with angles in the range $(-\pi,\pi)$ and ending at the point $\bm{p}_1=(1,0)$ with different final angles.
\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.7]{figure1}
\end{center}
\caption{Four examples of biarc interpolation with different initial and final angles. The first arc is plotted in blue, the second arc in red.}\label{fig:1}
\end{figure}
From Figure \ref{fig:1} we can see that the solution of the problem varies with continuity; in the following test we show that this is not the case with Matlab's function. In fact we can see in Figure \ref{fig:2} a direct comparison on the same tests between the algorithm herein proposed (cases (a) and (c)) and Matlab (cases (b) and (d)). In Figure \ref{fig:2} (a) and (c) there is continuity in the variation of the solution, whereas in
Figure \ref{fig:2} (b) and (d) we can notice a jump in the solution, which is an undesirable behaviour.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.7]{figure2}
\end{center}
\caption{Comparison between the present method, (a) and (c), and Matlab, (b) and (d). Arrows indicate the initial and final tangent vectors. Matlab's output exhibits wrong selections in the solution, which does not vary with continuity.}\label{fig:2}
\end{figure}
In Figure \ref{fig:2} (a) and (b) we plot the solutions for $\bm{p}_0=(0,0)$ and $\bm{p}_1=(1,0)$, the angles range in $[\pi/2, 4/5\pi]$, some tangent vectors are shown as arrows.
In Figure \ref{fig:2} (c) and (d) we plot the solutions for $\bm{p}_0=(0,0)$ and $\bm{p}_1=(1,0)$, the initial angles range in $[\pi/2, 4/5\pi]$, the final angles are in the range $[-4/5\pi,-\pi/2]$. In both cases (b) and (d) Matlab selects a non-natural solution.\\
As a last example, we show in Figure \ref{fig:3} two cases where Matlab produces a wrong solution close to singular configurations, that is, when the average of the vectors used to find the joint point is zero or almost zero. In
Figure \ref{fig:3} (a) our algorithm correctly interpolates $\bm{p}_0=(0,0)$ and $\bm{p}_1=(1,0)$ with $\vartheta_0=\vartheta_1=\pi/2$, producing a classic S-shaped biarc, while in (b) Matlab selects the wrong angle and produces a C-shaped biarc that violates the tangent at the initial point. In Figure \ref{fig:3} (c) and (d) we show the solution of the same problem with slightly perturbed angles:
$\bm{p}_0=(0,0)$, $\bm{p}_1=(1,0)$ but $\vartheta_0=\vartheta_1=\pi/2-10^4\epsilon$, where $\epsilon$ is the machine epsilon, i.e. a very small number. In Figure \ref{fig:3} (c) our algorithm produces a solution very close to that of the non-perturbed case (a), whereas Matlab gives a line segment, which is incompatible both with the correct solution (c) and with the non-perturbed (still wrong) solution of (b).
\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.7]{figure3}
\end{center}
\caption{Comparison between the present method, (a) and (c), and Matlab, (b) and (d). Arrows indicate the initial and final tangent vectors. Cases (a) and (b) use the non-perturbed angles $\vartheta_0=\vartheta_1=\pi/2$; cases (c) and (d) have $\vartheta_0=\vartheta_1=\pi/2-10^4\epsilon$, where $\epsilon$ is the machine epsilon. Dotted lines and dots are respectively the control polygons and control points.}\label{fig:3}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
A new robust algebraic algorithm for the numerical computation of biarcs has been presented.
Unlike geometry-based solutions, it is not necessary
to consider many geometrical configurations.
The algorithm does not use any complex geometrical construction: it is based on
the solution of a nonlinear system that is reduced to the solution
of a single $2$ by $2$ linear system.
The singular configuration (when the angles satisfy $\vartheta_0=\vartheta_1$)
is handled smoothly by using the pseudoinverse when solving the linear system.
Matlab's routine \texttt{rscvn} solves the same problem geometrically; this has the drawback
that it does not find the correct biarc in all configurations.
Moreover, \texttt{rscvn} fails to compute the biarc when the configuration
is \emph{almost singular}.
The biarc computed by the proposed algorithm depends smoothly on the
parameters, e.g. $\vartheta_0$ and $\vartheta_1$, so that it can be easily included
in more complex algorithms such as splines of biarcs or least squares data fitting.
\bibliographystyle{alpha}
% arXiv:1711.00935, "A Note on Robust Biarc Computation" (math.NA), https://arxiv.org/abs/1711.00935
% arXiv:math/0603648, https://arxiv.org/abs/math/0603648
\title{Analytic General Solutions of Nonlinear Difference Equations}
\begin{abstract}
There is no general existence theorem for solutions of nonlinear difference equations, so we must prove the existence of solutions in accordance with each model one by one. In this work, we establish theorems for the existence of analytic solutions of nonlinear second order difference equations. The main contribution of the present paper is obtaining representations of analytic general solutions with new methods of complex analysis.
\end{abstract}
\section{Introduction}
\vspace{0.7cm}
There is a general existence theorem for solutions of analytic differential
equations, but
we have no general existence theorem for
analytic difference equations.
For example, we consider the following first order nonlinear difference equation
$$ x(t+1) = 2 x(t) + x(t)^2. \eqno{(*)} $$
Putting $x(t) = -1 + y(t),$ we get $y(t+1) = y(t)^2$ and $\log y(t+1) = 2 \log y(t).$
Then $u(t) = 2^t$ is a (particular) solution of the equation $u(t+1) = 2u(t).$
Putting $C(t) = (\log y(t))/u(t),$ we have $C(t+1) = C(t),$ that is $C(t) = \pi(t),$ where
$\pi(t)$ is an entire periodic function with period $1.$
Therefore, a general solution of $(*),$ which tends to $0$ as
$t \to - \infty,$ is given by $x(t) = \exp[\pi(t) 2^t] - 1.$
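As a quick numerical sanity check (a sketch of ours, taking the $1$-periodic entire function $\pi(t)$ to be a constant $c$; any $1$-periodic entire function works), the general solution formula can be verified against the difference equation directly:

```python
import math

# Verify that x(t) = exp(pi(t) * 2**t) - 1 solves x(t+1) = 2 x(t) + x(t)**2,
# with pi(t) = c constant (an illustrative assumption).
c = -0.7
x = lambda t: math.exp(c * 2 ** t) - 1.0
for t in [-3.0, -1.5, 0.0, 1.25]:
    assert abs(x(t + 1) - (2 * x(t) + x(t) ** 2)) < 1e-9
```

The identity holds exactly, since $x(t+1)+1=\left(e^{c\,2^t}\right)^2=(x(t)+1)^2$, and $x(t)\to 0$ as $t\to-\infty$ because $c\,2^t\to 0$.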
This simple example $(*)$ illustrates the whole make-up of the present paper.
$0$ is an equilibrium point of $(*)$, with characteristic value $2.$
A formal solution of it is obtained by putting $x(t) = \sum_{n=1}^{\infty} a_n (2^t)^n.$
If its convergence is shown, then we have a solution $x(t)$ of the initial value problem,
which tends to $0$ as $t \to - \infty.$ Further we proceed to seek general solutions.
For analytic differential equations, a solution of initial value problem is always represented
by a power series.
This is the reason that the general existence theorem can be established for differential equations.
But for difference equations, this is not the case. Next we consider the following
first order difference equation
$$ x(t+1) = x(t) + x(t)^2, \eqno{(**)} $$
for which $0$ is the equilibrium point with characteristic value $1,$
but we cannot write its formal solution in the form
$\sum_{n=1}^{\infty} a_n (1^t)^n.$ That is, the selection of an appropriate formal
solution depends on the problem.
Of course, by \cite{Kimu} p.237 Theorem 14.2,
($**$) has a local solution with the asymptotic expansion
$$ x(t) \sim -\frac{1}{t} \left\{1 + \sum_{j+k \geq 1} \hat{q}_{jk} t^{-j}
\left(\frac{\log t}{t} \right)^k \right\}^{-1}, $$
where $\hat{q}_{jk}$ are constants.
\par
Further, for analytic differential equations, the solution is determined uniquely
by the initial condition. However,
for analytic difference equations, solution cannot be determined by
the (initial) condition $x(t) \to 0$ as $t \to - \infty,$
hence we need to consider general solutions.
In this paper, we consider the following second order nonlinear difference equation,
\begin{equation}
u(t+2)=f(u(t),u(t+1)), \label{1.1}
\end{equation}
where $f(x,y)$ is an entire function of $x$, $y$.
We assume that there is an equilibrium point $u^*: u^* = f(u^*, u^*).$
We can take $u^* = 0,$ that is $f(0,0)= 0$ without losing generality. \par
Many studies of difference equations are carried out with discrete variables.
Indeed, equation (\ref{1.1}) is often considered for $t\in {\Bbb N}$.
In our study, however,
we consider the difference equation (\ref{1.1}) with a continuous variable $t$.
If "$t$" in equation (\ref{1.1}) represents "time", then $t$ is of course
a real variable. Hereafter, however, $t$ in (\ref{1.1}) represents a complex variable,
because we establish more general theorems. \par
Our aim is to obtain analytic general solutions $u(t)$ of (\ref{1.1}) such that
$u(t+n) \to 0$ as
$n \to +\infty$ or $n \to - \infty.$ \par
We define $f(x,y)$ in (\ref{1.1}) such that
\begin{equation}
f(x,y) = - \beta x - \alpha y + g(x,y), \quad \beta \ne 0, \label{1.2}
\end{equation}
where $g$ consists of higher order terms for $x$, $y$ such that
$g(x,y)=\sum_{i,j\geqq 0, i+j\geqq 2} b_{i,j} x^iy^j\not\equiv 0,$
and $\alpha$, $\beta$, $b_{i,j}$ are constants.
Further we assume that at least one of the moduli of the
characteristic values is neither $0$ nor $1.$
The case in which both characteristic values equal $1$ will be treated in another
paper. \par
\quad
The process of our work is as follows:
{\bf 1)} determination of formal solutions; {\bf 2)} obtaining a particular solution by Schauder's
Fixed Point Theorem in a locally convex topological space;
{\bf 3)} obtaining general solutions by the methods of Kimura~\cite{Kimu}
and Yanagihara~\cite{Ya}. \par
\section{Analytic Solutions}
\subsection{ A formal solution.}
The characteristic equation of (\ref{1.1}) with (\ref{1.2}) is
\begin{equation}
D(\lambda)=\lambda^2+\alpha\lambda+\beta=0.\label{2.1}
\end{equation}
Let $\lambda_1$, $\lambda_2$ be roots of the characteristic equation and
$|\lambda_1|\leqq |\lambda_2|$.
Then we consider following two cases, i) $|\lambda_1|< 1$ and ii) $|\lambda_2|>1$.
Of course, some characteristic equations have both properties
i) and ii). \par
In case i), we consider solutions such that
$$u(t+n)\to \,0,\quad \text {as}\,\,\,n\to\, +\infty.$$
In case ii), we consider solutions such that
$$u(t+n)\to \,0,\quad \text {as}\,\,\,n\to\, -\infty.$$
In case i) we put $\lambda=\lambda_1$, and in case ii) we put
$\lambda=\lambda_2$. In both cases we seek a formal
solution of (\ref{1.1}) of the form
$$u(t)=\sum_{n=1}^{\infty} a_n\lambda^{nt}.$$
We substitute $u(t)=\sum_{n=1}^{\infty} a_n\lambda^{nt}$,
$u(t+1)=\sum_{n=1}^{\infty} a_n\lambda^{n(t+1)}$ and
$u(t+2)=\sum_{n=1}^{\infty} a_n\lambda^{n(t+2)}$
into (\ref{1.1}). Comparing the coefficients of $\lambda^{nt}$, $(n=1,2,\cdots)$,
we obtain, with $D(\lambda)$ as in (\ref{2.1}),
\begin{equation}
\left\{
\begin{array}{l}
a_1\, D(\lambda)=0,\\[2pt]
a_2\, D(\lambda^2)= a_1^2\,(b_{2,0}+b_{1,1}\lambda+b_{0,2}\lambda^2),\\[2pt]
a_3\, D(\lambda^3)= 2b_{2,0}\,a_1a_2+b_{1,1}\,a_1a_2\,\lambda(\lambda+1)+2b_{0,2}\,a_1a_2\,\lambda^3\\[2pt]
\qquad\qquad\qquad+\,a_1^3\,(b_{3,0}+b_{2,1}\lambda+b_{1,2}\lambda^2+b_{0,3}\lambda^3),\\[2pt]
\qquad\vdots\\[2pt]
a_k\, D(\lambda^k)=C_k(a_1,\cdots,a_{k-1}),\\[2pt]
\qquad\vdots
\end{array}
\right.\notag
\end{equation}
where $C_k(a_1,\cdots,a_{k-1})$ are
polynomials of $a_1,\cdots,a_{k-1}$ with coefficients
$b_{i,j}\lambda^l$,
$0\leqq i \leqq k$, $0\leqq j\leqq k$, $0\leqq l\leqq k$, $2\leqq i+j\leqq
k$.
By the definition of $\lambda$ we have $D(\lambda)=0$, and
$D(\lambda^k)\neq 0\, (k\geqq 2)$ provided $\lambda^k$ is not the other characteristic root;
hence $a_1$ can be chosen arbitrarily. \par
Here we suppose that $a_1\neq 0$. Then we have
\begin{equation}
a_k=\frac{a_1^k}{D(\lambda^k)}C^*_k(b_{i,j},\lambda^l), \,\, k\geqq 2, \label{2.2}
\end{equation}
where $C^*_k(b_{i,j},\lambda^l)$ are constants which are given by the function $f$, in which
they consist of $b_{i,j}$, $2\leqq i+j\leqq k$ and $\lambda^l$, $0\leqq l\leqq k$.
Hence we can determine a formal solution of (\ref{1.1}),
\begin{equation}
u(t)=\sum_{n=1}^{\infty} a_n\lambda^{nt},\label{2.3}
\end{equation}
in both cases i) and ii). Here $a_1$ is an arbitrary non-zero constant,
and the coefficients $a_k$, $k\geqq 2$, are determined by $a_1$.
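The recursion for the coefficients is easy to verify numerically. The following sketch is not from the paper: the quadratic nonlinearity $f(s,w)=-\beta s-\alpha w+b_{2,0}s^2+b_{1,1}sw+b_{0,2}w^2$ and all concrete values of $\alpha$, $\beta$, $b_{i,j}$, $a_1$ are illustrative assumptions. It computes $a_2$ from (\ref{2.2}) and checks that the two-term truncation of the formal solution satisfies the equation up to an error of order $\lambda^{3t}$.

```python
# Illustrative check of a_2 * D(lambda^2) = a_1^2 (b20 + b11*lam + b02*lam^2).
# Assumed sample model (not from the paper):
#   f(s, w) = -beta*s - alpha*w + b20*s^2 + b11*s*w + b02*w^2,
# so u(t+2) = f(u(t), u(t+1)) has characteristic equation lam^2 + alpha*lam + beta = 0.
alpha, beta = -2.5, 1.0            # roots of D are 1/2 and 2
b20, b11, b02 = 0.3, 0.2, 0.1      # sample higher-order coefficients
lam, a1 = 0.5, 1.0                 # case i): lambda = lambda_1, |lambda| < 1, a_1 arbitrary

def D(x):
    return x**2 + alpha * x + beta

def f(s, w):
    return -beta * s - alpha * w + b20 * s**2 + b11 * s * w + b02 * w**2

a2 = a1**2 * (b20 + b11 * lam + b02 * lam**2) / D(lam**2)   # from the recursion

def u_trunc(t):                    # two-term truncation of the formal solution
    return a1 * lam**t + a2 * lam**(2 * t)

def residual(t):                   # vanishes at orders lam^t and lam^{2t} by construction
    return u_trunc(t + 2) - f(u_trunc(t), u_trunc(t + 1))

# the residual decays like lam^{3t}: one unit step in t shrinks it by about lam^3 = 1/8
print(abs(residual(7)) / abs(residual(6)))   # close to 0.125
```

The same snippet with $\lambda=2$ illustrates case ii), where the strip $S(\eta)$ lies in the half-plane $\Re\,t \ll 0$.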
\subsection{ Existence of an analytic solution }
Here we put $u(t)=s,u(t+1)=w, u(t+2)=z$, and
$H(s,w,z)=-z+f(s,w)$.
Then the equation (\ref{1.1}) can be written such as
\begin{equation}
H(u(t),u(t+1),u(t+2))=0.\label{2.4}
\end{equation}
The function $H(s,w,z)$ is holomorphic in a neighborhood of $(0,0,0)$, and
clearly $H(0,0,0)=0$.
Furthermore we have
$
\frac{\partial H}{\partial s}(0,0,0)
=\frac{\partial f}{\partial s} \Bigr |_{s=w=0}
=-\beta \neq 0$ as remarked in (\ref{1.2}).
By the implicit function theorem applied to the equation $H(s,w,z)=0$,
there is a holomorphic function
$\phi$ such that
\begin{equation}
s=\phi(w,z) \quad \mbox{for}\quad |w|,\,|z|\leqq \rho\label {2.5}
\end{equation}
for some $\rho>0$.
Furthermore we have a constant $K$ such that
\begin{equation}
|s|=|\phi(w,z)|\leqq K(|w|+|z|) \quad \mbox{for}\quad |w|,|z|\leqq \rho.\label{2.6}
\end{equation}
\quad
Let $N$ be a positive integer. Denote the partial sum of the formal solution (\ref{2.3})
by $P_N(t)=\sum_{n=1}^N a_n \lambda^{nt}$, and put
$p_N(t)=u(t)-P_N(t)$. Hereafter we write $p(t)$ for $p_N(t)$. \par
Moreover we define the following sets,
\begin{align}
&S(\eta)=\{t\in \Bbb{C}: |\lambda^t|\leqq \eta \},\notag\\
&J(A,\eta)=\{p:p(t) \,\text{is holomorphic and }\, |p(t)|\leqq A|\lambda^t|^{N+1} \,
\mbox{for }\, t\in S(\eta)\},\notag
\end{align}
in which $A>0$ and $\eta$, $0<\eta<1$, are constants. These constants are determined
in the proof of the existence of a fixed point of the maps $T_i$ ($i=1,2$) below.\par
Suppose there exists a solution $u(t)$ of (\ref{1.1}) in $S(\eta)$.
Then $p_N(t)=u(t)-P_N(t)$
belongs to $J(A,\eta)$ for suitably chosen constants $A$, $\eta$, and satisfies the equation
\begin{equation}
p(t+2)=f(p(t)+P_N(t),p(t+1)+P_N(t+1))-P_N(t+2),\label{2.7}
\end{equation}
with $p(t)=p_N(t)$.
Conversely, if there exists a solution $p(t)$ of (\ref{2.7}), then $u(t)=p(t)+P_N(t)$
is a solution of (\ref{1.1}).
Hence, hereafter we concentrate on
proving the existence of $p(t)\in J(A,\eta)$ satisfying (\ref{2.7}).
In case i) $|\lambda|<1$,
the existence of solutions $u(t)$ of (\ref{2.7}) is equivalent to
the existence of $p(t)$ which satisfies
$$p(t)
=\phi(p(t+1)+P_N(t+1),p(t+2)+P_N(t+2))-P_N(t).$$
For $p(t)\in J(A,\eta)$, we put
\begin{equation}
T_1[p](t)=\phi(p(t+1)+P_N(t+1),p(t+2)+P_N(t+2))-P_N(t).\label{2.8}
\end{equation}
Then we can prove that $T_1$ maps $J(A,\eta)$ into itself (see Appendix A).
The map $T_1$ is obviously continuous if $J(A,\eta)$ is
endowed with the topology of uniform
convergence on compact subsets of $S(\eta)$. Furthermore
$J(A,\eta)$ is clearly convex, and is relatively compact by Montel's
theorem \cite{Ahlfors}.\par
By Schauder's fixed point theorem \cite{Dug}(p.74), \cite{Smar}(p.32), we obtain the existence of a
fixed point $p(t)=p_N(t)\in J(A,\eta)$ of $T_1$ in $S(\eta)$.
Moreover we can prove uniqueness of the fixed point (see Appendix B) and independence from $N$
(see Appendix C).
Hence
we have an analytic solution $u(t)$ in $S(\eta)$.
\par
\vspace{0.5cm}
\quad
In case ii) $|\lambda|>1$,
the existence of a solution of (\ref{2.7}) is equivalent to
the existence of $p(t)$ which satisfies
$$p(t)=f(p(t-2)+P_N(t-2),p(t-1)+P_N(t-1))-P_N(t).$$
For $p(t)\in J(A,\eta)$, we put
\begin{equation}
T_2[p](t)=f(p(t-2)+P_N(t-2),p(t-1)+P_N(t-1))-P_N(t).\notag
\end{equation}
Then we can prove the existence of an analytic solution $u(t)$ in
$S(\eta)$ by arguments similar to those above.
\par
\vspace{0.7cm}
Thus we have the following Theorem 1.
\par
\vspace{0.7cm}
{\bf Theorem 1.} \em
Let $\lambda_1,\, \lambda_2$ be roots of
$D(\lambda)=0$ in (\ref{2.1}), with $|\lambda_1|\leqq|\lambda_2|$.
Suppose $|\lambda_1|<1$ or $|\lambda_2|>1$.
Put $\lambda=\lambda_1$ in the former case and $\lambda=\lambda_2$ in the latter.
Assume further that $\lambda_1^k\neq \lambda_2$ and $\lambda_2^k\neq \lambda_1$
for any $k\in {\mathbb N}$.
Then there is an $\eta>0$ such that we have a holomorphic
solution $u(t)=\sum^{\infty}_{n=1}a_n\lambda^{nt}$ in
$S(\eta)=\{t;|\lambda^t|<\eta\}$.
\em\par
\vspace{0.5cm}
{\bf In case ii).} The solution $u(t)$ can be analytically continued
to the whole plane by making use of the equation (\ref{1.1}),
$u(t+2)=f(u(t),u(t+1))$.\par
{\bf In case i).} The function $\phi(w,z)$ in (\ref{2.5}),
$s=\phi(w,z)$ for $|w|,\,|z|\leqq \rho$,
is defined only locally.
We can still continue $u(t)$ analytically, avoiding branch points,
but the solution obtained is in general
multi-valued. \par
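The continuation in case ii) can be carried out numerically as well. The sketch below uses purely illustrative assumptions (a quadratic $f(s,w)=-\beta s-\alpha w+b_{2,0}s^2+b_{1,1}sw+b_{0,2}w^2$ with $\alpha=-2.5$, $\beta=1$, so the characteristic roots are $1/2$ and $2$, and $\lambda=2$): it seeds $u$ deep inside $S(\eta)$, where the series converges well, and marches forward with $u(t+2)=f(u(t),u(t+1))$.

```python
# Sketch of analytic continuation in case ii) (illustrative data, not from the paper).
alpha, beta = -2.5, 1.0
b20, b11, b02 = 0.3, 0.2, 0.1
lam, a1 = 2.0, 1.0                 # |lambda| > 1: the series is valid for Re(t) << 0

def f(s, w):
    return -beta * s - alpha * w + b20 * s**2 + b11 * s * w + b02 * w**2

a2 = a1**2 * (b20 + b11 * lam + b02 * lam**2) / (lam**4 + alpha * lam**2 + beta)

def series(t):                     # two-term truncation, accurate where |lam^t| is small
    return a1 * lam**t + a2 * lam**(2 * t)

# seed deep inside the strip and march forward with the equation itself
u_prev, u_curr = series(-12.0), series(-11.0)
for _ in range(5):                 # advance from t = -11 to t = -6
    u_prev, u_curr = u_curr, f(u_prev, u_curr)

# at t = -6 the series is still accurate, and the continued value agrees with it
print(abs(u_curr - series(-6.0)))  # small (series truncation error only)
```

With more steps the iteration leaves the strip entirely and continues $u$ to arbitrary $t$, exactly as the equation-based continuation described above.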
\vspace{0.5cm}
\subsection{Particular solutions.}
In this subsection we consider solutions $u_1(t)$ and $u_2(t)$ which
depend on $\lambda_1$ and $\lambda_2$, respectively. Writing formal solutions
$u_1(t) = \sum_{n=1}^{\infty} a_{1,n} \lambda_1^{nt} $ and
$u_2(t) = \sum_{n=1}^{\infty} a_{2,n} \lambda_2^{nt}$, we obtain, by arguments
similar to those in subsection {\bf 2.1},
$a_{m,k}\cdot D(\lambda_m^k)=C_{m,k}(a_{m,1},\cdots,a_{m,k-1}),$
$(m=1,2;\, k\in \Bbb{N})$,
where
$C_{m,k}(a_{m,1},\cdots,a_{m,k-1})$ are
polynomials of $a_{m,1},\cdots,a_{m,k-1}$ with coefficients
$b_{i,j}\lambda_m^l$,
$0\leqq i \leqq k$, $0\leqq j\leqq k$, $0\leqq l\leqq k$, $2\leqq i+j\leqq
k$.
Furthermore, if we take $a_{m,1}\neq 0$, then we have
\begin{equation}
a_{m,k}D(\lambda_m^k)=a_{m,1}^k
C^*_{m,k}(b_{i,j},\lambda_m^l), \,\, m=1,2;\,\,k\geqq 2, \label{2.9}
\end{equation}
where $C^{*}_{m,k}(b_{i,j},\lambda_m^l)$ are constants which are given by the function $f$,
in which
they consist of $b_{i,j}$, $2\leqq i+j\leqq k$ and $\lambda_m^l$, $0\leqq l\leqq k$.
Then we obtain the following Lemmas 2 and 3 by arguments similar to those
in {\bf 2.1-2.2}.\par
\vspace{0.7cm}
{\bf Lemma 2.} \em
Let $\lambda_1, \lambda_2$ be roots of (\ref{2.1}) with $|\lambda_1| \leqq |\lambda_2| < 1$.
If $\lambda_2^k \ne \lambda_1$
for any positive integer $k$ greater than $1$,
then there are constants
$\eta_1,\, \eta_2>0$ such that we have the following
two holomorphic solutions $u_1$ and $u_2$ of (\ref{1.1}),
\begin{gather}
u_m(t) = \sum_{n=1}^{\infty} a_{m,n} \lambda_m^{nt} \quad \text{in}
\quad S(\eta_m)=\{t;|\lambda_m^t|<\eta_m\}, \quad (m=1,2),\notag
\end{gather}
in which $a_{1,1}$ and $a_{2,1}$ can be taken to be arbitrary
non-zero constants.\par
For the case $\lambda_2^k = \lambda_1$ for some $k \in {\mathbb N},$
if $C^*_{2,k}(b_{i,j},\lambda_2^l)=0$ given in (\ref{2.9}),
then we take $a_{2,1}\neq 0$ and $a_{2,k} \ne 0$ arbitrary,
and have the solution $u_2(t)$ as above.
On the other hand, if $C^*_{2,k}(b_{i,j},\lambda_2^l)\neq 0$ for this $k$,
then we take $a_{2,j}=0$ for $j\ne kn$, $n\in \Bbb{N}$, take
$a_{2,k} \ne 0$ arbitrary, and determine the coefficients
$a_{2,kn}$
for $n \geqq 2$ as above.
Then there is an
$\eta_2>0$ such that we have holomorphic solutions $u_1$ and $u_2$ of (\ref{1.1}),
$$u_2(t) = \sum_{n=1}^{\infty} a_{2,kn} \lambda_2^{knt} \quad\text{in} \,\,S(\eta_2),$$
as well as $u_1(t) = \sum_{n=1}^{\infty} a_{1,n} \lambda_1^{nt}$
in $S(\eta_1). $ In the case of $\lambda_2^k=\lambda_1$ and
$C^*_{2,k}(b_{i,j},\lambda_2^l)\neq 0$,
if we take $a_{2,k}=a_{1,1}$, then $u_2(t)=u_1(t)$ in $S(\eta_1)\cap S(\eta_2)$. \par
Thus, in both cases, $u_1(t + n) \to 0, u_2(t + n) \to 0$ as $n \to +\infty$ uniformly
on any compact subset of the $t$-plane.
\em\par
\vspace{0.5cm}
\quad {\bf Proof.}\quad
If $\lambda_2^k \ne \lambda_1$ for any $k \in {\mathbb N},$ then we
can determine a formal
solution
$u_2(t) = \sum_{n=1}^{\infty} a_{2,n} \lambda_2^{nt}$ as in subsection {\bf 2.1},
with $\lambda = \lambda_2$ instead of $\lambda_1$, and we can show that it is
an actual solution
as in subsections {\bf 2.1-2.2}.\par
For the case $\lambda_2^k = \lambda_1$ for some $k \in {\mathbb N},$
if we take $a_{2,1}\neq 0$, then from (\ref{2.9}) we have
\begin{equation}
a_{2,k}D(\lambda_2^k)=a_{2,k}D(\lambda_1)=a_{2,1}^k
C^*_{2,k}(b_{i,j},\lambda_2^l)=0.\label{2.10}
\end{equation}
If $C^*_{2,k}(b_{i,j},\lambda_2^l)= 0$, we can take $a_{2,k}$ arbitrarily, and
determine $a_{2,n}$ for $2\leqq n \leqq k-1$ and $n\geqq k+1$ from $a_{2,1}$ as in (\ref{2.9}).\par
However, if $C^*_{2,k}(b_{i,j},\lambda_2^l)\neq 0$, the equation (\ref{2.10})
yields a contradiction. Thus
we must take $a_{2,1}=0$, whence $a_{2,n}=0$ for $n\leqq k-1$ by
$a_{2,n}\cdot D(\lambda_2^n)=C_{2,n}(a_{2,1},\cdots,a_{2,n-1}).$
We can then take $a_{2,k}$ to be an
arbitrary non-zero constant, and determine the coefficients $a_{2,n}$ as follows:
\begin{equation}
a_{2,n}=
\begin{cases}
0,\quad (n\neq km , \,\, m\in \Bbb{N}),\notag\\
a_{2,k}^m\frac{C_{2,m}^*(b_{i,j}, \lambda_2^{lk})}{D(\lambda_2^{km})}
=a_{2,k}^m\frac{C_{2,m}^*(b_{i,j}, \lambda_1^{l})}{D(\lambda_1^m)},
\quad (n= km , \,\, m\in \Bbb{N}),\notag
\end{cases}
\end{equation}
where $C^*_{2,m}(b_{i,j},\lambda_1^l)$ are constants defined in (\ref{2.9}).
Hence we can determine a formal solution $u_2(t)$ such that
$$u_2(t) = \sum_{n=1}^{\infty} a_{2,kn} \lambda_2^{knt} \quad\text{in} \,\,S(\eta_2).$$
If we take $a_{2,k}=a_{1,1}$, then $u_2$ coincides with $u_1$, so we have only one
solution in $S(\eta_1)\cap S(\eta_2)$.
Furthermore, in both cases of $\lambda_2^k=\lambda_1$,
we can prove, by arguments similar to those in {\bf 2.2}, that
there is an $\eta_2>0$ such that
$u_2=\sum_{n=1}^{\infty} a_{2,kn} \lambda_2^{knt}$ is a holomorphic solution in $S(\eta_2)$.
\par
Obviously $u_1(t + n) \to 0, u_2(t + n) \to 0$ as $n \to +\infty$ uniformly on
any compact subset of the $t$-plane.
$\square$\par
\vspace{0.7cm}
{\bf Lemma 3.} \em
Let $\lambda_1, \lambda_2$ be roots of (\ref{2.1}) with $1<|\lambda_1| \leqq |\lambda_2|$.
If $\lambda_1^k \ne \lambda_2$
for any positive integer $k$ greater than $1$,
then there are constants
$\eta_1,\, \eta_2>0$ such that we have following
two holomorphic solutions $u_1$ and $u_2$ of (\ref{1.1}),
\begin{gather}
u_m(t) = \sum_{n=1}^{\infty} a_{m,n} \lambda_m^{nt} \quad \text{in}
\quad S(\eta_m)=\{t;|\lambda_m^t|<\eta_m\}, \quad (m=1,2),\notag
\end{gather}
in which $a_{1,1}$ and $a_{2,1}$ can be taken to be arbitrary
non-zero constants.\par
For the case $\lambda_1^k = \lambda_2$ for some $k \in {\mathbb N},$
if $C^*_{1,k}(b_{i,j},\lambda_1^l)=0$ given in (\ref{2.9}),
then we take $a_{1,1}\neq 0$ and $a_{1,k}\neq 0$ arbitrary, and
have the solution $u_1(t)$ as above.
On the other hand, if $C^*_{1,k}(b_{i,j},\lambda_1^l)\neq 0$ for this $k$,
then we take $a_{1,j}=0$ for $j\ne kn$, $n\in \Bbb{N}$, take
$a_{1,k} \ne 0$ arbitrary, and determine the coefficients
$a_{1,kn}$
for $n \geqq 2$ as above.
Then there is an
$\eta_1>0$ such that we have holomorphic solutions $u_1$ and $u_2$ of (\ref{1.1}),
$$u_1(t) = \sum_{n=1}^{\infty} a_{1,kn} \lambda_1^{knt} \quad\text{in} \,\,S(\eta_1),$$
as well as $u_2(t) = \sum_{n=1}^{\infty} a_{2,n} \lambda_2^{nt}$
in $S(\eta_2). $ In
the case of $\lambda_1^k=\lambda_2$ and $C^*_{1,k}(b_{i,j},\lambda_1^l)\neq 0$,
if we take $a_{1,k}=a_{2,1}$, then $u_1(t)=u_2(t)$ in $S(\eta_1)\cap S(\eta_2)$. \par
Thus, in both cases, $u_1(t - n) \to 0, u_2(t - n) \to 0$ as $n \to \infty$ uniformly
on any compact subset of the $t$-plane. \par
\em\par
\vspace{0.5cm}
\quad {\bf Proof.}\quad
This can be proved by arguments similar to those of Lemma 2. $\square$\par
\vspace{0.7cm}
The analytic solutions $u_1$ and $u_2$ obtained in Lemmas 2-3 are
``particular solutions'' of (\ref{1.1}).
\section{Analytic General Solutions}
Analytic general solutions of nonlinear difference equations have been
investigated, for example, by Harris \cite{Harris}, \cite{Harris2},
and others, but we cannot make use of their methods.
Here we follow the method of
Kimura \cite{Kimu} and Yanagihara \cite{Ya},
where general solutions of the first order difference equations are studied. \par
In this section we consider the following case,
$$
|\lambda_1|<1<|\lambda_2|.
$$
For other cases, we study
general solutions of
the difference equation (\ref{1.1}) in other papers. \par
\vspace{0.4cm}
For a linear second order difference equation, general solutions are
expressed in terms of two particular solutions. For a nonlinear second
order difference equation, in the present case,
general solutions which converge to
an equilibrium point of the equation are expressed by one of the two
particular solutions $u_1$ or $u_2$ of the difference equation.\par
\vspace{0.7cm}
Let $u(t)$ be a solution of (\ref{1.1}), and $w(t) = u(t+1).$ Then (\ref{1.1})
can be written as a system of simultaneous equations
\begin{equation}
\begin{pmatrix} u(t+1) \\ w(t+1) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\
- \beta & - \alpha \end{pmatrix} \begin{pmatrix} u(t) \\ w(t) \end{pmatrix}
+ \begin{pmatrix} 0 \\ g(u(t), w(t)) \end{pmatrix}
\label{3.1}
\end{equation}
Let $\lambda_1, \lambda_2$ be roots of the equation (\ref{2.1}) and
$P = \begin{pmatrix} 1 & 1 \\ \lambda_1 & \lambda_2 \end{pmatrix}.$
Put
\begin{equation}
\begin{pmatrix} u \\ w \end{pmatrix} = P \begin{pmatrix} x \\ y \end{pmatrix}.
\label{3.2}
\end{equation}
Since $\lambda_1\neq\lambda_2$, we can transform the coefficient matrix of the linear part
of (\ref{3.1}) into diagonal form; i.e.,
(\ref{3.1}) is transformed into the following system with respect to $x, y:$
\begin{equation}
\left\{ \begin{aligned}
x(t + 1) &= \lambda_1 x(t) + \sum_{i+j \geq 2} c_{ij} x(t)^i y(t)^j = X(x(t), y(t)), \\
y(t + 1) &= \lambda_2 y(t) + \sum_{i+j \geq 2} d_{ij} x(t)^i y(t)^j = Y(x(t), y(t)).
\end{aligned} \right.
\label{3.3}
\end{equation}
On the other hand, let
$Q = \begin{pmatrix} 1 & 1 \\ \lambda_2 & \lambda_1 \end{pmatrix}.$
Put
\begin{equation}
\begin{pmatrix} u \\ w \end{pmatrix} = Q \begin{pmatrix} x \\ y \end{pmatrix}.
\label{3.4}
\end{equation}
Then (\ref{3.1}) is transformed into the following system with respect to $x, y:$
\begin{equation}
\left\{ \begin{aligned}
x(t + 1) &= \lambda_2 x(t) + \sum_{i+j \geq 2} c'_{ij} x(t)^i y(t)^j = X'(x(t), y(t)), \\
y(t + 1) &= \lambda_1 y(t) + \sum_{i+j \geq 2} d'_{ij} x(t)^i y(t)^j = Y'(x(t), y(t)).
\end{aligned} \right.
\label{3.5}
\end{equation}
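The diagonalizations behind (\ref{3.3}) and (\ref{3.5}) can be checked directly: the columns $(1,\lambda_i)^{T}$ of $P$ and $Q$ are eigenvectors of the coefficient matrix, since $\lambda_i^2=-\alpha\lambda_i-\beta$. A numeric sanity check with illustrative values $\alpha=-2.5$, $\beta=1$ (so $\lambda_1=1/2$, $\lambda_2=2$), not part of the paper:

```python
# Columns of P = [[1,1],[lam1,lam2]] and Q = [[1,1],[lam2,lam1]] are eigenvectors
# of A = [[0,1],[-beta,-alpha]], so conjugating by them diagonalizes the linear part.
import numpy as np

alpha, beta = -2.5, 1.0            # D(lam) = (lam - 1/2)(lam - 2), sample values
lam1, lam2 = 0.5, 2.0              # |lambda_1| < 1 < |lambda_2|

A = np.array([[0.0, 1.0], [-beta, -alpha]])
P = np.array([[1.0, 1.0], [lam1, lam2]])
Q = np.array([[1.0, 1.0], [lam2, lam1]])

DP = np.linalg.inv(P) @ A @ P      # = diag(lambda_1, lambda_2): linear part of (3.3)
DQ = np.linalg.inv(Q) @ A @ Q      # = diag(lambda_2, lambda_1): linear part of (3.5)
print(np.round(DP, 12))
print(np.round(DQ, 12))
```

The nonlinear terms $c_{ij}$, $d_{ij}$ (and $c'_{ij}$, $d'_{ij}$) are then obtained by applying the same change of variables to $g(u,w)$.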
Then we will show the following Theorem 4.
\vspace{0.7cm}
{\bf Theorem 4.} \em
Let $\lambda_1,\, \lambda_2$ be roots of the characteristic
equation of (\ref{1.1}) such that $|\lambda_1|<1< |\lambda_2|$.
Suppose that $u_1(t)$ and $u_2(t)$ are solutions of (\ref{1.1}) which have the expansions
$u_1(t)=\sum^{\infty}_{n=1} a_{1,n}\lambda_1^{nt}$ in $S(\eta_1)=\{t;|\lambda_1^t|<\eta_1 \}$ and
$u_2(t)=\sum^{\infty}_{n=1} a_{2,n}\lambda_2^{nt}$ in $S(\eta_2)=\{t;|\lambda_2^t|<\eta_2 \}$,
with some constants $\eta_1,\eta_2>0$.
Further suppose that $\Upsilon(t)$ is an analytic solution of (\ref{1.1})
such that $\Upsilon(t+n)\to 0$ as either $n\to +\infty$ or $n\to -\infty$,
uniformly on any compact subset of the $t$-plane.
If the solution $\Upsilon$ of (\ref{1.1}) satisfies
$\Upsilon(t+(-1)^{m-1}n)\to 0$ as $n\to +\infty$ for $m=1$ or $m=2$,
then
there is a periodic entire function $\pi_m(t),(\pi_m(t+1)=\pi_m(t))$, such that
\begin{align}
\Upsilon(t)&=\frac{1}{\lambda_{m+1}-\lambda_m}( \lambda_{m+1}\sum^{\infty}_{n=1}
a_{m,n}\lambda_m^{n(t+\pi_m(t))}-\sum^{\infty}_{n=1}
a_{m,n}\lambda_m^{n(t+\pi_m(t)+1)})\notag\\
&\qquad+\Psi_m\Biggr(
\frac{1}{\lambda_{m+1}-\lambda_m}( \lambda_{m+1}\sum^{\infty}_{n=1}
a_{m,n}\lambda_m^{n(t+\pi_m(t))}-\sum^{\infty}_{n=1}
a_{m,n}\lambda_m^{n(t+\pi_m(t)+1)})
\Biggr),
\label{3.6}
\end{align}
in $S(\eta_m)$, with the convention that $\lambda_3$ means $\lambda_1$.
Further we have
$\frac{\Upsilon(t+1+(-1)^{m-1}n)}{\Upsilon(t+(-1)^{m-1}n)}\to \lambda_m$, ($m=1,2$),
as $n\to +\infty$.
When $m=1$,
$\Psi_1$ is a solution of
\begin{equation}
\Psi(X(x,\Psi(x)))=Y(x,\Psi(x)),\label{3.7}
\end{equation}
and
when $m=2$,
$\Psi_2$ is a solution of
\begin{equation}
\Psi(X'(x,\Psi(x)))=Y'(x,\Psi(x)),\label{3.8}
\end{equation}
in which
$X$, $Y$ are defined in (\ref{3.3}), and $X'$, $Y'$ are defined in (\ref{3.5}).
\par
Conversely, a function $\Upsilon(t)$ which is represented as in (\ref{3.6})
in $S(\eta_m)$ for some $\eta_m>0$, where $\pi_m(t)$ is a periodic function with
period one,
is a solution of (\ref{1.1}) such that $\Upsilon(t+(-1)^{m-1}n)\to 0$
and
$\frac{\Upsilon(t+1+(-1)^{m-1}n)}{\Upsilon(t+(-1)^{m-1}n)}\to \lambda_m$ as
$n \to +\infty$ with $m=1,2$.
\par
\em\par
\medskip
\vspace{0.7cm}
{\bf Proof.}\quad
First we prove the case $m=1$.\par
Let $u(t)$ be the solution of (\ref{1.1}) constructed in Section 2, and let
$\Upsilon(t)$ be a solution of (\ref{1.1}) such that $\Upsilon(t+n)\to 0$
as $n\to +\infty$
uniformly on any compact subset of the $t$-plane.\par
We first consider the meaning of the functional equation (\ref{3.7}). \par
Suppose (\ref{3.3}) admits a solution $(x(t), y(t))$.
If
$\frac{dx}{dt}\neq 0$, then we can write $t=\psi(x)$ with a function $\psi$
in a neighborhood of $x_0=x(t_0)$, and we can write
\begin{equation}
y(t)=y(\psi(x))=\Psi(x),\label{3.9}
\end{equation}
as far as $\frac{dx}{dt}\neq 0$.
Then the function $\Psi$ satisfies the functional equation (\ref{3.7}).\par
Conversely we assume that a function $\Psi$ is a solution of
the functional equation (\ref{3.7}). If the first order difference equation
\begin{equation}
x(t+1)=X(x(t),\Psi(x(t))),\label{3.10}
\end{equation}
has a solution $x(t)$, then we put $y(t)=\Psi(x(t))$ and have a
solution $(x(t), y(t))$ of (\ref{3.3}).
From \cite{Ya} we see that the first order difference equation
(\ref{3.10}) has an analytic solution.\par
This relation is the key point of our method.\par
\vspace{0.7cm}
Put $\omega(t)=\Upsilon(t+1)$ and
\begin{equation}
\begin{pmatrix}\chi\\ \nu \end{pmatrix}
=P^{-1}\begin{pmatrix} \Upsilon\\\omega\end{pmatrix}.\label{3.11}
\end{equation}
Then we have
$\chi(t)=\frac{1}{\lambda_2-\lambda_1}(\lambda_2\Upsilon(t)-\omega(t))$.
Since $\Upsilon(t+n)\to 0$ and $\omega(t+n)\to 0$ as $n\to +\infty$, we have
$\chi(t+n) \to 0$ as $n\to +\infty$.\par
Let $u(t)$ be a solution given in Section 2,
$$u(t)=\sum^{\infty}_{n=1}a_{1,n} \lambda^{nt}\qquad\qquad (\lambda=\lambda_1).$$
Then, by (\ref{3.11}), since $\lambda_1 = \lambda$ and $u(t)$ is a function of
$\lambda^t$, we can write
\begin{equation}
x(t)=\frac{1}{\lambda_2-\lambda}(\lambda_2 u(t)-u(t+1))
=\frac{1}{\lambda_2-\lambda}
\Biggr ( \sum_{n=1}^{\infty}(\lambda_2a_{1,n}-a_{1,n}\lambda^n)(\lambda^t)^n
\Biggr )
=\Tilde{U}(\lambda^t),\label{3.12}
\end{equation}
where $\zeta = {\tilde U}(\tau)$ is a function of
$\tau = \lambda^t$ and ${\tilde U}'(0) = a_{1,1} \ne 0$ and ${\tilde U}(0) = 0.$
Since ${\tilde U}(\tau)$ is an open map, for any $\eta_1 > 0$
there is an $\eta_2 > 0$ such that
$$ {\tilde U}(\{ |\tau| < \eta_1 \}) \supset \{|\zeta| < \eta_2 \}. $$
Since $\chi(t+n) \to 0$ as $n \to +\infty$, assuming that
$t'$ ranges over a compact set $K,$ there is an $n_0 \in {\mathbb N}$
such that for $t' \in K$
$$|\chi(t'+n)|<\eta_2\quad (n\geqq n_0).$$
Thus there is a $\tau'=\lambda^{\sigma}$ such that
\begin{equation}
\chi(t'+n)=\Tilde{U}(\tau')=\Tilde{U}(\lambda^{\sigma}).\label{3.13}
\end{equation}
Since $\Tilde{U}'(0)=a_{1,1}\neq 0$, by the inverse function theorem
$\Tilde{U}$ has a local inverse $\Tilde{U}^{-1}$ such that
$$\lambda^{\sigma}=\Tilde{U}^{-1}(\chi(t'+n)).$$
Put $t=t'+n$, then $\lambda^{\sigma}=\Tilde{U}^{-1}(\chi(t))$,
and we write
\begin{equation}
\sigma=\log_{\lambda}\Tilde{U}^{-1}(\chi(t))=\ell(t).\label{3.14}
\end{equation}
\quad When there is a solution $\chi(t)$ of (\ref{3.3}),
from (\ref{3.10}) and (\ref{3.12})-(\ref{3.13})
we have
\begin{align}
\chi(t+1)
&=X(\chi(t),\Psi(\chi(t)) )\notag\\
&=X(\Tilde{U}(\lambda^{\sigma}),\Psi(\Tilde{U}(\lambda^{\sigma})) )\notag\\
&=X(x(\sigma),\Psi(x(\sigma)))\notag\\
&=x(\sigma+1)=\Tilde{U}(\lambda^{\sigma+1}).\notag
\end{align}
Hence
$$\sigma+1=\ell(t+1),\,\,\ell(t)+1=\ell(t+1).$$
If we put $\pi(t) = \ell(t) - t,$ then we obtain
$\pi(t+1) = \ell(t+1) - (t+1) = \ell(t) - t = \pi(t),$ and we can write as
\begin{equation}
\ell(t) = t + \pi(t), \label{3.15}
\end{equation}
where $\pi(t)$ is defined for a compact set $K$ with $\Re[t]$
sufficiently large. Furthermore we can continue the $\pi(t)$ analytically as a
periodic function with the period $1.$ Thus we have
$$ \sigma = t + \pi(t). $$
From (\ref{3.13}) and (\ref{3.12}), $\chi(t)$ can be written as
$$
\chi(t)=\Tilde{U}(\lambda^{t+\pi(t)})=x(t+\pi(t))
=\frac{1}{\lambda_2-\lambda_1}
(\lambda_2u(t+\pi(t))-u(t+1+\pi(t))).$$
We have the following equalities, making use of the equation (\ref{3.11}):
\begin{align}
\Upsilon(t)&=\chi(t)+\nu(t)\notag\\
&=\chi(t)+\Psi(\chi(t))\notag\\
&=x(t+\pi(t))+\Psi(x(t+\pi(t)))\notag\\
&=\frac{1}{\lambda_2-\lambda_1}(\lambda_2\sum^{\infty}_{n=1}
a_{1,n}\lambda^{n(t+\pi(t))}-\sum^{\infty}_{n=1}
a_{1,n}\lambda^{n(t+\pi(t)+1)})\notag\\
&\qquad+\Psi\Biggr(
\frac{1}{\lambda_2-\lambda_1}(\lambda_2 \sum^{\infty}_{n=1}
a_{1,n}\lambda^{n(t+\pi(t))}-\sum^{\infty}_{n=1}
a_{1,n}\lambda^{n(t+\pi(t)+1)})
\Biggr),\notag
\end{align}
where $\pi(t)$ is defined for $t\in\cup_{n\in \Bbb{Z}}(K + n)$
with a compact set $K.$ Since $K$ is arbitrary,
we can continue $\pi(t)$ analytically to
a periodic entire function with period $1,$
and
$\Psi$ is a solution of (\ref{3.7}). By making use of the Theorem in \cite{Suzu1}, (\cite{Suzu3}),
$\Psi$ is obtained in the form, in a neighborhood of $x=0$,
\begin{equation}
\Psi(x)=\sum_{n=2}^{\infty} \gamma_n x^n,\label{3.16}
\end{equation}
that is, the expansion begins with $x^2$.
From $\chi(t+1)=X(\chi(t),\Psi(\chi(t)))$, we have
$$\chi(t+1)=\lambda_1\chi(t)+\sum_{i+j\geqq 2}c_{ij}\chi(t)^i
\Psi(\chi(t))^j,$$
and
$$\frac{\chi(t+1)}{\chi(t)}=\lambda_1+\sum_{i+j\geqq 2}c_{ij}
\chi(t)^{i-1}\Psi(\chi(t))^j.
$$
Since $\chi(t+n)\to 0$, as $n\to +\infty$ and by (\ref{3.16}),
$$\frac{\Psi(\chi(t+n))}{\chi(t+n)}\to\, 0,\,
\frac{\chi(t+1+n)}{\chi(t+n)}\to \, \lambda_1,\quad
\text{ as} \quad n\to +\infty.$$
From $\Upsilon(t)=\chi(t)+\Psi(\chi(t))$, we have
\begin{align}
\frac{\Upsilon(t+n+1)}{\Upsilon(t+n)}=
\frac{\chi(t+n+1)+\Psi(\chi(t+n+1))}{\chi(t+n)+\Psi(\chi(t+n))}&=
\frac{\frac{\chi(t+n+1)}{\chi(t+n)}+
\frac{\Psi(\chi(t+n+1))}{\chi(t+n+1)}\cdot \frac{\chi(t+n+1)}{\chi(t+n)} }
{1+\frac{\Psi(\chi(t+n))}{\chi(t+n)}}\notag\\
&\to\, \lambda_1,\,\,
\text{as} \,\, n\,\to\, +\infty.\notag
\end{align}
Conversely, if we define $\Upsilon(t)$ by (\ref{3.6}),
where $\pi$ is an arbitrary periodic entire function and
$\Psi$ is a solution of (\ref{3.7}),
then $\Upsilon(t)$ is a solution of (\ref{1.1}) such that
$\Upsilon(t+n)\to 0$ as $n\to \,+\infty$.
Furthermore then we have a solution $\chi$ of (\ref{3.3}) such that
$$\Upsilon(t)=\chi(t)+\Psi(\chi(t)),$$
where $\chi(t+n)\to 0$ as $n\to +\infty$.
Hence we have
$\frac{\Upsilon(t+1+n)}{\Upsilon(t+n)}\to \lambda_1$ as $n\to +\infty$. \par
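The limit of the ratios is easy to observe numerically. A minimal sketch, under illustrative assumptions ($\lambda_1=1/2$, a two-term truncation $a_1\lambda_1^t+a_2\lambda_1^{2t}$ with made-up coefficients standing in for a solution with $\pi\equiv 0$ and higher terms dropped):

```python
# Ratio test Upsilon(t+1+n)/Upsilon(t+n) -> lambda_1 on a two-term model solution
# (assumed illustrative coefficients; not the paper's data).
lam1 = 0.5
a1, a2 = 1.0, 0.97                 # any non-zero a1 and some a2 will do

def upsilon(t):
    return a1 * lam1**t + a2 * lam1**(2 * t)

ratios = [upsilon(0.3 + n + 1) / upsilon(0.3 + n) for n in (5, 10, 20)]
print(ratios)                      # approaches 0.5 as n grows
```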
\vspace{0.5cm}
Similarly to the proof of the case above, we can prove the case $m=2$,
making use of the equations (\ref{3.4}) and (\ref{3.5}).
$\square$
\par
\vspace{0.7cm}
% --- End of arXiv:math/0603648, ``Analytic General Solutions of Nonlinear Difference Equations'' (math.CA; math.CV). ---
% --- arXiv:1906.06988, ``Tauberian Conditions for Almost Convergence in a Geodesic Metric Space''. ---
% Abstract: In the present paper, after recalling the Karcher mean in Hadamard spaces, we study the relation between convergence, almost convergence and mean convergence (with respect to the defined mean) of a sequence in Hadamard spaces. These results extend Tauberian conditions from Banach spaces to Hadamard spaces. Also, we show that every almost periodic sequence in Hadamard spaces is almost convergent.
\section{Introduction}
For a sequence $\{x_n\}$ in a linear space, the Vall\'ee-Poussin means of the sequence are defined, for each $k$, by
$$ \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} x_{k+i} $$
(see \cite{Butzer1993}), and the Ces\`aro means by
$$ \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} x_i. $$
\noindent Lorentz \cite{lorentz1948} defined the concept of {\it almost convergence} for a bounded sequence $\{x_n\}$ of real (or complex) scalars using two approaches: the first uses Banach limits, and the second is based on the uniform convergence of the above Vall\'ee-Poussin means with respect to $k$. In fact, he showed that these two approaches are equivalent. Clearly, convergence of the sequence $\{x_n\}$ implies almost convergence of the sequence, and almost convergence implies convergence of the Ces\`aro means of the sequence, i.e.,
\begin{equation*}
x_n\underset{n}{\longrightarrow} y \quad \Longrightarrow \quad \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} x_{k+i} \underset{n}{\longrightarrow} y ~\text{uniformly in}~k \quad \Longrightarrow \quad \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} x_i \underset{n}{\longrightarrow} y
\end{equation*}
For the reverse directions we need some sufficient conditions, which are called Tauberian conditions and were considered by Lorentz \cite{lorentz1948} for scalar sequences. Later, Kuo \cite{Kuo2009} extended the Tauberian conditions from real sequences to vector sequences in Banach spaces. In this paper, after defining the mean and almost convergence for a sequence in a Hadamard space, we extend the Tauberian conditions to this setting.
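A classical scalar example illustrates the gap between the first two notions: $x_n=(-1)^n$ diverges, yet its Vall\'ee-Poussin means tend to $0$ uniformly in $k$, so it is almost convergent (hence Ces\`aro convergent) to $0$. A quick numerical check, illustrative only and not from the paper:

```python
# x_n = (-1)^n is not convergent, but its Vallee-Poussin means
# (1/n) sum_{i=0}^{n-1} x_{k+i} tend to 0 uniformly in k.

def vp_mean(n, k):
    """Vallee-Poussin mean of x_j = (-1)^j over the window k, ..., k+n-1."""
    return sum((-1) ** (k + i) for i in range(n)) / n

# the window sum is 0 for even n and +/-1 for odd n,
# so |vp_mean(n, k)| <= 1/n independently of k
worst = max(abs(vp_mean(1001, k)) for k in range(200))
print(worst)                       # equals 1/1001, uniformly small in k
```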
In the next section, we present some preliminaries, including the basic concepts of Hadamard spaces and the lemmas required to state and prove the main results. We also define the Karcher mean, as well as the concepts of ergodic and almost convergence of a sequence with respect to this mean. In Sections 3 and 4, Tauberian theorems are studied for metric and weak convergence, respectively. Finally, Section 5 is devoted to proving the almost convergence of almost periodic sequences in Hadamard spaces.
\section{Preliminaries}
In a metric space $(X,d)$, a geodesic between two points $x,y\in X$ is a map
$$ \gamma :[0,d(x,y)]\longrightarrow X, $$
such that $\gamma(0)=x,\ \gamma\big(d(x,y)\big)=y \ \text{and} \ d\big(\gamma(t),
\gamma(t')\big)=|t-t'|$ for all $t,t'\in[0,d(x,y)]$.
A metric space in which every two points are joined by a geodesic is called a geodesic space, and it is called uniquely geodesic if between any two points there is exactly one geodesic. The image of $\gamma$ is called a geodesic segment and is denoted by $[x,y]$ in uniquely geodesic spaces; also, in such spaces, $m_t=(1-t)x\oplus ty$ for every $t\in[0,1]$ is the unique point in the segment $[x,y]$ such that:\\
\centerline{$d(x,m_t)=td(x,y)$ and $d(y,m_t)=(1-t)d(x,y)$.}
A geodesic triangle $\triangle:=\triangle(x_1,x_2,x_3)$ in a geodesic metric space $(X,d)$ consists of three points $x_1, x_2, x_3\in X$ as vertices and three geodesic segments joining each pair of vertices as edges. A comparison triangle for the geodesic triangle $\triangle$ is the triangle $\overline{\triangle}:=\overline{\triangle}(x_1,x_2,x_3):=\triangle(\overline{x_1},\overline{x_2},\overline{x_3})$ in the Euclidean space $\Bbb{R}^2$ such that $d(x_i,x_j)=d_{\Bbb{R}^2}(\overline{x_i},\overline{x_j})$ for all $i,j=1,2,3$. A geodesic space $X$ is said to be a $CAT(0)$ space if for each geodesic triangle $\triangle$ in $X$ and its comparison triangle $\overline{\triangle}$ in $\Bbb{R}^2$, the $CAT(0)$ inequality
\begin{equation*}
d(x,y)\leqslant d_{\Bbb{R}^2}(\overline{x},\overline{y}),
\end{equation*}
is satisfied for all $x,y\in \triangle$ and all comparison points $\overline{x}, \overline{y}\in \overline{\triangle}$.
Equivalently, a $CAT(0)$ space $(X,d)$ is a geodesic metric space which satisfies the $CN$-inequality
\begin{equation}\label{cat0ineq}
d^2(x, m)\leqslant \frac{1}{2}d^2(x,y)+\frac{1}{2}d^2(x,z)-\frac{1}{4}d^2(y,z),
\end{equation}
where $x,y,z\in X$ and $m$ is the midpoint of the segment $[y,z]$, i.e.,\linebreak $d(m,y)=d(m,z)=\frac{1}{2}d(z,y)$ \cite{bridson2011metric}. Also, by \cite[Lemma 2.5]{DHOMPONGSA20082572} and \cite[page 163]{bridson2011metric}, a geodesic metric space is a $CAT(0)$ space if and only if for every three points $x_0 ,x_1, y \in X$ and every $0<t<1$,
\allowdisplaybreaks\begin{equation}
d^2(y, x_t)\leqslant(1-t)d^2(y,x_0)+td^2(y,x_1)-t(1-t)d^2(x_0,x_1),
\end{equation}
where $x_t=(1-t)x_0\oplus tx_1$ for $t\in[0,1]$. The above inequality is known as the strong convexity of the function $d^2$ with respect to each argument. A $CAT(0)$ space is uniquely geodesic. A complete $CAT(0)$ space is called a Hadamard space. From now on, we denote every Hadamard space by $\mathscr H$. In a Hadamard space, any nonempty closed convex subset $S$ is Chebyshev, i.e., $P_Sx=\{s\in S: d(x,S)=d(x,s) \}$ is a singleton, where $d(x,S):=\displaystyle\inf_{s\in S} d(x,s)$ \cite{bacak2014convex}. Thus, the metric projection onto a nonempty closed convex subset $S$ of a Hadamard space $\mathscr H$ is the following map:
\begin{equation*}
P:\mathscr H\longrightarrow S \quad x\mapsto P_Sx,
\end{equation*}
where $P_Sx$ is the nearest point of $S$ to $x$ for all $x\in \mathscr H$. It is well known that
\begin{equation}\label{projection}
d^2(x,P_Sx)+d^2(P_Sx,y)\leqslant d^2(x,y),\ \forall y\in S
\end{equation}
(see \cite{bacak2014convex} and also \cite{dehghan2012}).
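Both the $CN$-inequality and the projection inequality (\ref{projection}) can be sanity-checked in the model Hadamard space $\Bbb R^2$, where the $CN$-inequality is in fact an equality by the parallelogram law. The sketch below, illustrative only, uses the closed unit disk as the convex set $S$:

```python
# Numeric sanity check in R^2 of the CN-inequality and of
# d^2(x, P_S x) + d^2(P_S x, y) <= d^2(x, y), with S the closed unit disk.
import math, random

def d(p, q):
    return math.dist(p, q)

def proj_disk(p):                  # metric projection onto the closed unit disk
    r = math.hypot(p[0], p[1])
    return p if r <= 1.0 else (p[0] / r, p[1] / r)

random.seed(0)
rnd = lambda: (random.uniform(-5, 5), random.uniform(-5, 5))
for i in range(1000):
    x, y, z = rnd(), rnd(), rnd()
    m = ((y[0] + z[0]) / 2, (y[1] + z[1]) / 2)     # midpoint of [y, z]
    assert d(x, m)**2 <= 0.5*d(x, y)**2 + 0.5*d(x, z)**2 - 0.25*d(y, z)**2 + 1e-9
    yS = (0.5 * math.cos(i), 0.5 * math.sin(i))    # a point of S
    Px = proj_disk(x)
    assert d(x, Px)**2 + d(Px, yS)**2 <= d(x, yS)**2 + 1e-9
print("both inequalities hold on 1000 random samples")
```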
Let $(X,d)$ be a $CAT(0)$ space. A function $f:X\longrightarrow \Bbb R$ is said to be convex if for all $x,y\in X$ and all $\lambda\in [0,1]$
\begin{equation*}
f\big((1-\lambda )x\oplus\lambda y\big)\leqslant (1-\lambda)f(x)+\lambda f(y),
\end{equation*}
Clearly, the metric function $d$ on a $CAT(0)$ space $X$ is convex. Also, $f$ is said to be $\gamma$-strongly convex with $\gamma>0$ if for all $x,y\in X$
\begin{equation*}
f\big(\lambda x\oplus (1-\lambda)y\big)\leqslant \lambda f(x)+(1-\lambda)f(y)-\lambda(1-\lambda)\gamma d^2(x,y).
\end{equation*}
Clearly, by the definition of a $CAT(0)$ space, the function $d^2$ on a $CAT(0)$ space $X$ is $\gamma$-strongly convex with respect to each argument, with $\gamma=1$. A function $f: X\longrightarrow \mathbb{R}$ is said to be lower semicontinuous (shortly, lsc) if the set $\{x\in X : f(x)\leqslant \alpha\}$ is closed for all $\alpha\in \Bbb R$. Every lsc, strongly convex function on a Hadamard space has a unique minimizer \cite{bacak2014convex}.\\
The following lemma contains some inequalities that are satisfied in any Hadamard space, and we use them in the next section.
\begin{lemma}$($\cite{CHAOHA2006983,papadopoulos2005}$)$.\label{l2}
Let $(X,d)$ be a $CAT(0)$ space. Then for all $x,y,z\in X$ and\linebreak $t,s\in[0,1]$; we have:\\
$1)\quad d\big((1-t)x\oplus ty,z\big)\leqslant (1-t)d(x,z)+td(y,z)$.\\
$2)\quad d\big((1-t)x\oplus ty,(1-s)x\oplus sy\big)= |t-s|d(x,y)$.\\
$3)\quad d\big((1-t)z\oplus tx,(1-t)z\oplus ty\big)\leqslant td(x,y).$
\end{lemma}
Berg and Nikolaev in \cite{Berg2008} introduced the notion of quasilinearization, which is the map $\langle \cdot,\cdot\rangle:(X\times X)\times (X\times X)\longrightarrow \Bbb R$ defined by
\begin{equation}\label{e12}
\langle\overset{\rightarrow}{ab},\overset{\rightarrow}{cd}\rangle=\frac{1}{2}\big\{d^2(a,d)+d^2(b,c)-d^2(a,c)-d^2(b,d) \big\} \quad a,b,c,d\in X,
\end{equation}
where the vector $\overset{\rightarrow}{ab}$ (also written $ab$) denotes the pair $(a,b)\in X\times X$. In \cite{Berg2008} they also proved that a geodesically connected metric space is a $CAT(0)$ space if and only if it satisfies the Cauchy--Schwarz inequality:
\begin{equation*}
\langle ab, cd\rangle\leqslant d(a,b)d(c,d) \quad (a,b,c,d\in X).
\end{equation*}
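In an inner-product space the quasilinearization \eqref{e12} reduces to $\langle\overset{\rightarrow}{ab},\overset{\rightarrow}{cd}\rangle=\langle b-a,d-c\rangle$, so the inequality above becomes the classical Cauchy--Schwarz inequality. A quick numerical check on random points of $\Bbb R^3$ (the helper `quasi` is ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def quasi(a, b, c, d):
    # quasilinearization <ab, cd> via the four-point formula
    n = np.linalg.norm
    return 0.5 * (n(a - d)**2 + n(b - c)**2 - n(a - c)**2 - n(b - d)**2)

for _ in range(1000):
    a, b, c, d = rng.normal(size=(4, 3))
    # in an inner-product space the formula collapses to <b - a, d - c>
    assert abs(quasi(a, b, c, d) - np.dot(b - a, d - c)) < 1e-9
    # the Cauchy-Schwarz inequality characterizing CAT(0) spaces
    assert quasi(a, b, c, d) <= np.linalg.norm(a - b) * np.linalg.norm(c - d) + 1e-9
```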
We now recall the notion of $\triangle$-convergence in $CAT(0)$ spaces, which is weaker than metric convergence and serves as an analogue of weak convergence in these spaces.\\
Let $\{x_n\}$ be a bounded sequence in a $CAT(0)$ space $X$. For $x\in X$ set $r(x,\{x_n\})=\displaystyle\limsup_{n\rightarrow \infty} d(x,x_n)$. The asymptotic radius of $\{x_n\}$ is defined as follows:
$$r(\{ x_n\} )=\inf \big\{ r(x,\{ x_n\} ): x\in X \big\},$$
and the asymptotic center is the set
$$ A(\{ x_n\} )=\big\{x\in X: r(x,\{ x_n\} )= r(\{ x_n\} ) \big\}.$$
It is known that in a Hadamard space, $A(\{ x_n\} )$ is a singleton \cite{KIRK20083689}. The notion of $\triangle$-convergence was first introduced by Lim \cite{lim1976} as follows.
\begin{definition}\label{weakcondef1}
A sequence $\{ x_n\}$ is said to be $\triangle$-convergent to $x$ if $x$ is the unique asymptotic center of $\{ x_{n_j}\}$ for every subsequence $\{ x_{n_j}\}$ of $\{ x_n\}$. The point $x$ is called the $\triangle$-limit of $\{ x_n\}$ and is denoted by $\bigtriangleup-\displaystyle\lim_n x_n=x$ or $x_n\overset{\bigtriangleup}{\longrightarrow}x$.
\end{definition}
\begin{lemma}$($see \cite{KIRK20083689}$)$.\label{lconsubseq}
Every bounded sequence in a $CAT(0)$ space has a $\bigtriangleup$-convergent subsequence. Moreover, every closed convex subset of a Hadamard space is $\bigtriangleup$-closed, in the sense that it contains the $\bigtriangleup$-limit of each of its $\bigtriangleup$-convergent sequences.
\end{lemma}
The next two propositions provide two equivalent characterizations of $\triangle$-convergence.
\begin{proposition}\label{weakcondef2}$($\cite[Proposition 5.2]{espinola2009}$)$
A sequence $\{x_n\}$ in a Hadamard space $(\mathscr H,d)$, $\triangle-$converges to $x$ if and only if $\displaystyle\lim_{n\rightarrow\infty}d(x,P_Ix_n)=0$ for all geodesics $I$ issuing from the point $x$, where $P_I:\mathscr H\longrightarrow I$ is the projection map.
\end{proposition}
\begin{proposition}\label{weakcondef3}$($\cite[Theorem 2.6]{kakavandi2013}$)$
Let $(X,d)$ be a $CAT(0)$ space, $\{x_n\}$ be a sequence in $X$ and $x\in X$. Then $\{x_n\}$ $\triangle-$converges to $x$ if and only if
\begin{equation*}
\displaystyle\limsup_{n\rightarrow\infty} \langle xx_n,xy \rangle\leqslant 0, \quad \forall y\in X.
\end{equation*}
\end{proposition}
\begin{definition}\label{Karcher mean}
Given a finite number of points $x_0,\ldots ,x_{n-1}$ in a Hadamard space, we define the functions
\begin{equation}\label{e22}
\mathcal{F}_n(x)=\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_i,x),
\end{equation}
and
\begin{equation}\label{e23}
\mathcal{F}_n^k(x)=\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},x).
\end{equation}
From \cite[p.~41, Proposition 2.2.17]{bacak2014convex} we know that these functions have unique minimizers. The unique minimizer of $\mathcal{F}_n$ (resp. $\mathcal{F}_n^k$) is denoted by $\sigma_n(x_0,\ldots ,x_{n-1})$, shortly $\sigma_n$ (resp. $\sigma_n^k(x_k,\ldots ,x_{k+n-1})$, shortly $\sigma_n^k$), and is called the mean of $x_0,\ldots ,x_{n-1}$ (resp. $x_k,\ldots ,x_{k+n-1}$). This mean is known as the Karcher mean (see \cite{karcher1977}).
\end{definition}
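In a Hilbert space the Karcher mean coincides with the arithmetic mean, which gives a simple way to test a numerical implementation. A minimal gradient-descent sketch in the Euclidean plane (the function name `karcher_mean` and the step size are our choices, not from the literature):

```python
import numpy as np

def karcher_mean(points, steps=200, lr=0.25):
    # minimize F(x) = (1/n) sum_i d^2(x_i, x) by gradient descent;
    # in Euclidean space grad F(x) = (2/n) sum_i (x - x_i) = 2 (x - mean)
    pts = np.asarray(points, dtype=float)
    x = pts[0].copy()
    for _ in range(steps):
        x -= lr * 2.0 * (x - pts.mean(axis=0))
    return x

pts = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
# in a Hilbert space the Karcher mean equals the arithmetic mean
assert np.allclose(karcher_mean(pts), pts.mean(axis=0))
```

In a genuinely curved Hadamard space the gradient step would have to follow geodesics, but the variational definition being minimized is the same.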
\begin{remark} \label{Karcher mean Banach spaces}
The Karcher mean is also defined in any reflexive, strictly convex Banach space if we replace $d(\cdot,\cdot)$ by $\|\cdot-\cdot\|$ in Definition \ref{Karcher mean}. Reflexivity ensures the existence of the minimizer and strict convexity ensures its uniqueness. Unlike in Hilbert spaces, where the Karcher mean coincides with the arithmetic (linear) mean, in a general reflexive, strictly convex Banach space the two means differ: their coincidence is equivalent to the parallelogram identity, which characterizes Hilbert spaces \cite{danamir2013}.
\end{remark}
\section{Tauberian Conditions for Metric Convergence}
A sequence $\{x_n\}$ in a Hadamard space $\mathscr H$ is called Ces\`aro convergent or mean convergent (resp. almost convergent) to $x\in \mathscr H$ if $\sigma_n$ (resp. $\sigma_n^k$) converges (resp. converges uniformly in $k$) to $x$. In this section, we present some Tauberian theorems for these means.
We need the next lemma to prove the Tauberian theorems for the Karcher mean.
\begin{lemma}\label{l15}
Let $\{x_n\}$ be a sequence in a Hadamard space $\mathscr H$ and let $\sigma_n^k$ be defined as above. Then for each $y\in\mathscr H$ and $k\geqslant 0$, we have:
\begin{enumerate}
\renewcommand{\theenumi}{\roman{enumi}}
\item\label{l15i1}
$\displaystyle d^2\big( \sigma_n^k ,y\big)\leqslant \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},y)-\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},\sigma_n^k).$
\item\label{l15i2} $\displaystyle d\big( \sigma_n^k ,y\big)\leqslant \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d(x_{k+i},y).$
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{l15i1}.
Since $\sigma_n^k$ is the unique minimizer of $\mathcal{F}_n^k$ defined in \eqref{e23}, by the strong convexity of this function we have, for $0<\lambda <1$:
\allowdisplaybreaks\begin{eqnarray}
\mathcal{F}_n^k(\sigma_n^k)&\leqslant & \mathcal{F}_n^k\big(\lambda\sigma_n^k\oplus (1-\lambda)y\big) \nonumber \\
&\leqslant &\lambda\mathcal{F}_n^k(\sigma_n^k)+ (1-\lambda)\mathcal{F}_n^k(y)- \lambda(1-\lambda)d^2\big( \sigma_n^k ,y\big). \nonumber
\end{eqnarray}
Therefore we obtain
\begin{equation*}
\lambda d^2\big( \sigma_n^k ,y\big)\leqslant \mathcal{F}_n^k(y)-\mathcal{F}_n^k(\sigma_n^k).
\end{equation*}
Letting $\lambda\rightarrow 1$ implies:
\begin{eqnarray}\label{ee1}
d^2\big( \sigma_n^k ,y\big)&\leqslant &\mathcal{F}_n^k(y)-\mathcal{F}_n^k(\sigma_n^k) \nonumber \\
&= &\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},y)-\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},\sigma_n^k),
\end{eqnarray}
which is the intended result. In particular, we have
\begin{equation*}
d^2\big( \sigma_n^k ,y\big)\leqslant \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},y).
\end{equation*}
\eqref{l15i2}. By the (reverse) triangle inequality, $\big(d(y,x_{k+i})-d(\sigma_n^k,y)\big)^2\leqslant d^2\big(\sigma_n^k,x_{k+i}\big)$; expanding the square gives:
\begin{equation*}
d^2\big( \sigma_n^k ,y\big)+d^2\big( y,x_{k+i}\big)-2d\big( \sigma_n^k ,y\big)d\big( y,x_{k+i}\big)\leqslant d^2\big( \sigma_n^k ,x_{k+i}\big),
\end{equation*}
hence
\begin{equation*}
d^2\big( y,x_{k+i}\big)\leqslant d^2\big( \sigma_n^k ,x_{k+i}\big)+2d\big( \sigma_n^k ,y\big)d\big( y,x_{k+i}\big)-d^2\big( \sigma_n^k ,y\big).
\end{equation*}
Summing over $i$ from $0$ to $n-1$ and dividing by $n$, we obtain:
\begin{equation}\label{convkarcher1}
\frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big( y,x_{k+i}\big)\leqslant \frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big( \sigma_n^k ,x_{k+i}\big)+2d\big( \sigma_n^k ,y\big)\frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d\big( y,x_{k+i}\big)-d^2\big( \sigma_n^k ,y\big).
\end{equation}
On the other hand, by \eqref{ee1} we have:
\begin{equation}\label{convkarcher2}
\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},\sigma_n^k)\leqslant \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},y)-d^2(\sigma_n^k ,y).
\end{equation}
Combining \eqref{convkarcher1} and \eqref{convkarcher2} yields $2d^2\big(\sigma_n^k,y\big)\leqslant 2d\big(\sigma_n^k,y\big)\frac{1}{n}\sum_{i=0}^{n-1}d\big(y,x_{k+i}\big)$, and therefore
\begin{equation*}
d\big( \sigma_n^k ,y\big)\leqslant \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d(x_{k+i},y).
\end{equation*}
\end{proof}
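In the Hadamard space $(\Bbb R,|\cdot|)$ the Karcher mean $\sigma_n^k$ is just the average of $x_k,\ldots,x_{k+n-1}$, so both parts of Lemma \ref{l15} can be verified numerically; in fact, in $\Bbb R$ part \eqref{l15i1} holds with equality (the bias--variance decomposition). A short illustrative check:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=50)      # a sequence in the Hadamard space (R, |.|)
n, k = 10, 7
window = x[k:k + n]
sigma = window.mean()        # in R the Karcher mean sigma_n^k is the average

for y in rng.normal(size=20):
    # part (i): d^2(sigma_n^k, y) <= mean_i d^2(x_{k+i}, y) - mean_i d^2(x_{k+i}, sigma_n^k)
    lhs = (sigma - y) ** 2
    rhs = np.mean((window - y) ** 2) - np.mean((window - sigma) ** 2)
    assert lhs <= rhs + 1e-12
    # part (ii): d(sigma_n^k, y) <= mean_i d(x_{k+i}, y)
    assert abs(sigma - y) <= np.mean(np.abs(window - y)) + 1e-12
```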
In the next theorem, we consider the relation between convergence and almost convergence.
\begin{theorem}\label{tautheokarcher1}
Let $\{x_n\}$ be a sequence in a Hadamard space $\mathscr H$. Then $\{x_n\}$ converges to $y$ if and only if $\sigma_n^k$, defined as the unique minimizer of \eqref{e23}, converges to $y$ uniformly in $k\geqslant 0$ $($or the sequence $\{x_n\}$ almost converges to $y)$ and $\{x_n\}$ is asymptotically regular $($i.e., $d(x_n,x_{n+1})\to0$ as $n\to\infty)$.
\end{theorem}
\begin{proof}
\textbf{Necessity.} By Part \eqref{l15i1} of Lemma \ref{l15}, we have:
\begin{equation}\label{e25}
d^2\big( \sigma_n^k ,y\big)\leqslant \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},y).
\end{equation}
Since the sequence $\{x_n\}$ converges to $y$, we have $d^2(x_n,y)\longrightarrow 0$, and hence the right-hand side of \eqref{e25} converges to zero uniformly in $k$; consequently $\sigma_n^k$ converges to $y$ uniformly in $k$. It is also clear that the sequence $\{x_n\}$ is asymptotically regular.\\
\textbf{Sufficiency.} Let $\sigma_n^k$ converge to $y$ uniformly in $k\geqslant 0$ and let $\{x_n\}$ be asymptotically regular. By the $CN$-inequality, we have:
\allowdisplaybreaks\begin{eqnarray}
0&\leqslant &d^2\big( \sigma_n^k,\frac{1}{2}x_k\oplus\frac{1}{2}y\big)\nonumber \\
&\leqslant &\frac{1}{2}d^2(\sigma_n^k,x_k)+\frac{1}{2}d^2(\sigma_n^k,y)-\frac{1}{4}d^2(x_k,y).\nonumber
\end{eqnarray}
Therefore by Part \eqref{l15i1} of Lemma \ref{l15}, we obtain:
\allowdisplaybreaks\begin{eqnarray}
d^2(x_k,y)&\leqslant & 2d^2(\sigma_n^k,x_k)+2d^2(\sigma_n^k,y) \nonumber \\
&\leqslant &\frac{2}{n}\displaystyle\sum_{i=0}^{n-1}d^2(x_{k+i},x_k)+2d^2(\sigma_n^k,y)\nonumber \\
&= &\frac{2}{n}\bigg\{d^2(x_k,x_{k+1})+d^2(x_k,x_{k+2})+\cdots +d^2(x_k,x_{k+n-1}) \bigg\}+2d^2(\sigma_n^k,y)\nonumber \\
&\leqslant &\frac{2}{n}\bigg\{d^2(x_k,x_{k+1})+\bigg(\displaystyle\sum_{i=k}^{k+1}d(x_i,x_{i+1})\bigg)^2+\cdots +\bigg(\displaystyle\sum_{i=k}^{k+n-2}d(x_i,x_{i+1})\bigg)^2 \bigg\} \nonumber\\
& & + 2d^2(\sigma_n^k,y)\nonumber\\
&\leqslant &\frac{2}{n}\bigg(\displaystyle\sup_{i\geqslant k} d(x_i,x_{i+1})\bigg)^2\big( 1+2^2+\cdots +(n-1)^2\big)+2d^2(\sigma_n^k,y) \nonumber\\
&= &\frac{2}{n}\frac{(n-1)n(2n-1)}{6}\bigg(\displaystyle\sup_{i\geqslant k} d(x_i,x_{i+1})\bigg)^2+2d^2(\sigma_n^k,y) \nonumber\\
&= &\frac{(n-1)(2n-1)}{3}\bigg(\displaystyle\sup_{i\geqslant k} d(x_i,x_{i+1})\bigg)^2+2d^2(\sigma_n^k,y).\nonumber
\end{eqnarray}
Since $\{x_n\}$ is asymptotically regular, $\displaystyle\sup_{i\geqslant k} d(x_i,x_{i+1})\longrightarrow 0$ as $k\rightarrow\infty$. Taking the $\limsup$ as $k\to\infty$ for fixed $n$, we get:
\begin{equation*}
\displaystyle\limsup_{k\rightarrow\infty} d^2(x_k,y)\leqslant 2\displaystyle\limsup_{k\rightarrow\infty} d^2(\sigma_n^k,y)\leqslant 2\displaystyle\sup_{k\geqslant 1} d^2(\sigma_n^k,y).
\end{equation*}
Since $\sigma_n^k$ is uniformly convergent to $y$, letting $n\longrightarrow \infty$ completes the proof.
\end{proof}
For the relation between mean convergence and almost convergence defined above, we introduce the following Tauberian condition:
\begin{equation}\label{taucondkarchermean2}
\displaystyle\lim_{n\rightarrow\infty}\displaystyle\sup_{k\geqslant 0}\bigg(\frac{1}{n+k}\displaystyle\sum_{i=0}^{k-1} \Big(d^2(x_i,\sigma_n^k)-d^2(x_i,\sigma_k)\Big)\bigg)=0.
\end{equation}
\begin{theorem}\label{tautheokarcher2}
For a sequence $\{x_n\}$ in a Hadamard space $\mathscr H$, $\sigma_n^k$, defined as the unique minimizer of \eqref{e23}, converges to $y$ uniformly in $k\geqslant 0$ $($or the sequence $\{x_n\}$ is almost convergent to $y)$ if and only if $\sigma_n$, defined as the unique minimizer of \eqref{e22}, converges to $y$ $($or the sequence $\{x_n\}$ is mean convergent to $y)$ and \eqref{taucondkarchermean2} is satisfied.
\end{theorem}
\begin{proof}
\textbf{Necessity.} Taking $k=0$, this is obvious.\\
\textbf{Sufficiency.} Let $\sigma_n$ converge to $y$ and suppose \eqref{taucondkarchermean2} is satisfied. First, by the $CN$-inequality, we have:
\allowdisplaybreaks\begin{eqnarray}
0&\leqslant &d^2\big(\sigma_{n+k},\frac{1}{2}\sigma_n^k\oplus\frac{1}{2}y \big)\nonumber \\
&\leqslant &\frac{1}{2}d^2(\sigma_{n+k},\sigma_n^k)+\frac{1}{2}d^2(\sigma_{n+k},y)-\frac{1}{4}d^2(\sigma_n^k,y).\nonumber
\end{eqnarray}
Therefore by Part \eqref{l15i1} of Lemma \ref{l15}, using the definition of $\sigma_n^k$ in inequality $\star$ and the definition of $\sigma_k$ in inequality $\star\star$, we obtain:
\allowdisplaybreaks\begin{eqnarray}
d^2(\sigma_n^k,y)&\leqslant & 2d^2(\sigma_{n+k},\sigma_n^k)+2d^2(\sigma_{n+k},y) \nonumber \\
&\leqslant &2\bigg(\frac{1}{n+k}\displaystyle\sum_{i=0}^{n+k-1}d^2(x_i,\sigma_n^k)- \frac{1}{n+k}\displaystyle\sum_{i=0}^{n+k-1}d^2(x_i,\sigma_{n+k})\bigg)+2d^2(\sigma_{n+k},y) \nonumber \\
&\leqslant &2\bigg(\frac{1}{n+k}\displaystyle\sum_{i=0}^{k-1}d^2(x_i,\sigma_n^k)+ \frac{1}{n+k}\displaystyle\sum_{i=k}^{n+k-1}d^2(x_i,\sigma_n^k) \nonumber \\
& &- \frac{1}{n+k}\displaystyle\sum_{i=0}^{n+k-1}d^2(x_i,\sigma_{n+k})\bigg)+2d^2(\sigma_{n+k},y) \nonumber \\
&\overset{\star}{\leqslant} &2\bigg(\frac{1}{n+k}\displaystyle\sum_{i=0}^{k-1}d^2(x_i,\sigma_n^k)+ \frac{1}{n+k}\displaystyle\sum_{i=k}^{n+k-1}d^2(x_i,\sigma_{n+k}) \nonumber \\
& &- \frac{1}{n+k}\displaystyle\sum_{i=0}^{n+k-1}d^2(x_i,\sigma_{n+k})\bigg)+2d^2(\sigma_{n+k},y) \nonumber \\
&\overset{\star\star}{\leqslant} &2\bigg(\frac{1}{n+k}\displaystyle\sum_{i=0}^{k-1}\Big(d^2(x_i,\sigma_n^k)-d^2(x_i,\sigma_k) \Big)\bigg)+2d^2(\sigma_{n+k},y). \nonumber
\end{eqnarray}
Thus we get:
\begin{eqnarray}
\displaystyle\sup_{k\geqslant 0}d^2(\sigma_n^k,y)&\leqslant &2\displaystyle\sup_{k\geqslant 0}\bigg(\frac{1}{n+k}\displaystyle\sum_{i=0}^{k-1}\Big(d^2(x_i,\sigma_n^k)-d^2(x_i,\sigma_k) \Big)\bigg)+2\displaystyle\sup_{k\geqslant n}d^2(\sigma_{k},y). \nonumber
\end{eqnarray}
Letting $n\to\infty$, the proof is now complete by the assumptions.
\end{proof}
In the next theorem, we give another Tauberian condition, this time relating mean convergence to convergence of the sequence in Hadamard spaces.
We first state an elementary summation-by-parts identity without proof.
\begin{lemma}\label{l7}
For a real sequence $\{a_n\}$, we have:
\begin{equation*}
\frac{1}{n+1}\sum_{k=0}^{n} a_k=a_n-\frac{1}{n+1}\sum_{k=1}^{n}k(a_k-a_{k-1}).
\end{equation*}
\end{lemma}
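The identity of Lemma \ref{l7} is an Abel (summation-by-parts) formula and is easy to confirm numerically on random data:

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=40)
for n in range(1, 40):
    # (1/(n+1)) sum_{k=0}^{n} a_k  ==  a_n - (1/(n+1)) sum_{k=1}^{n} k (a_k - a_{k-1})
    lhs = a[:n + 1].mean()
    rhs = a[n] - sum(k * (a[k] - a[k - 1]) for k in range(1, n + 1)) / (n + 1)
    assert abs(lhs - rhs) < 1e-9
```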
By Theorems \ref{tautheokarcher1} and \ref{tautheokarcher2}, convergence of the sequence $\{x_n\}$ implies its mean convergence; for the reverse direction we have the following theorem.
\begin{theorem}\label{tautheokarcher3}
Let $\{x_n\}$ be a sequence in a Hadamard space $\mathscr H$, and let $\sigma_n$ be the mean sequence defined as the unique minimizer of \eqref{e22}. If $\sigma_n$ converges to $y$ and $nd(x_n,x_{n-1})\longrightarrow 0$ as $n\to\infty$, then $x_n$ converges to $y$.
\end{theorem}
\begin{proof}
For a fixed integer $n>0$, let $a_i=d(x_i,x_n)$ for $i=0,1,\ldots, n$ (so that $a_n=0$). Then by Part \eqref{l15i2} of Lemma \ref{l15} and Lemma \ref{l7}, we have:
\allowdisplaybreaks\begin{eqnarray}
d(x_n,y)&\leqslant &d(x_n,\sigma_{n+1})+d(\sigma_{n+1},y)\nonumber \\
&\leqslant &\frac{1}{n+1}\sum_{i=0}^{n}d(x_n,x_i)+d(\sigma_{n+1},y) \nonumber \\
&\leqslant &\frac{1}{n+1}\sum_{i=1}^{n}i\big(d(x_{i-1},x_n)-d(x_i,x_n)\big)+d(\sigma_{n+1},y) \nonumber\\
&\leqslant &\frac{1}{n+1}\sum_{i=1}^{n}id(x_{i-1},x_i)+d(\sigma_{n+1},y)\to 0,\ \ \text{as}\ n\to\infty, \nonumber
\end{eqnarray}
since $id(x_{i-1},x_i)\to 0$ implies that its Ces\`aro means tend to zero, while $d(\sigma_{n+1},y)\to 0$ by assumption. This completes the proof.
\end{proof}
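In $(\Bbb R,|\cdot|)$, where $\sigma_{n+1}$ is the usual Ces\`aro mean, both the Tauberian condition $nd(x_n,x_{n-1})\to 0$ and the bound used in the proof can be observed on the concrete sequence $x_n=1/(n+1)$; a short numerical illustration:

```python
import numpy as np

# x_n = 1/(n+1) in the Hadamard space (R, |.|): it converges to y = 0 and
# satisfies the Tauberian condition n * d(x_n, x_{n-1}) -> 0
N = 10000
n = np.arange(N)
x = 1.0 / (n + 1)
sigma = np.cumsum(x) / (n + 1)   # in R the Karcher mean sigma_{n+1} is the Cesaro mean
assert sigma[-1] < 1e-2          # mean convergence to 0
gaps = n[1:] * np.abs(np.diff(x))
assert gaps[-1] < 1e-3           # the Tauberian condition n d(x_n, x_{n-1}) -> 0
# the bound from the proof:
# d(x_n, 0) <= (1/(n+1)) sum_{i<=n} i d(x_{i-1}, x_i) + d(sigma_{n+1}, 0)
i = np.arange(1, N)
bound = np.cumsum(i * np.abs(np.diff(x))) / (i + 1) + sigma[1:]
assert np.all(x[1:] <= bound + 1e-12)
```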
The results of the Tauberian theorems for the Karcher mean are summarized in the following figure:
\begin{figure}[!h]
\centerline{\includegraphics[height=5cm]{Pic5.jpg}}
\end{figure}
The results of this section and Remark \ref{Karcher mean Banach spaces} suggest the following question: do the Tauberian conditions hold for the Karcher mean in more general geodesic spaces, for example in uniformly convex geodesic metric spaces or in uniformly convex Banach spaces?
\begin{remark}
Let $(\mathscr H,d)$ be a Hadamard space. For a curve $c:[0,\infty)\longrightarrow\mathscr H$, the Ces\`aro mean $\sigma_T$ and the Vall\'ee-Poussin mean $\sigma_T^s$ are defined, respectively, as the unique minimizers of the functions
\begin{equation*}
\mathcal{G}_T(y)=\frac{1}{T}\displaystyle\int_{0}^{T} d^2(c(t),y)dt,
\end{equation*}
and
\begin{equation*}
\mathcal{G}_T^s(y)=\frac{1}{T}\displaystyle\int_{0}^{T} d^2(c(t+s),y)dt.
\end{equation*}
A curve $c:[0,\infty)\longrightarrow\mathscr H$ is called Ces\`aro convergent or mean convergent (resp. almost convergent) to $x\in \mathscr H$ if $\sigma_T$ (resp. $\sigma_T^s$) converges (resp. converges uniformly in $s$) to $x$ as $T\longrightarrow\infty$. It is easy to check that the results of Theorems \ref{tautheokarcher1} and \ref{tautheokarcher2} remain true for curves, with similar proofs, but the analogue of Theorem \ref{tautheokarcher3} for curves remains an open question.
\end{remark}
\section{Tauberian Conditions for $\triangle-$Convergence}
In this section, we show that the results obtained in the previous section hold for $\triangle$-convergence. The proofs of Corollaries \ref{tautheokarcher1weak}, \ref{tautheokarcher2weak} and \ref{tautheokarcher3weak} follow directly from the proofs of Theorems \ref{tautheokarcher1}, \ref{tautheokarcher2} and \ref{tautheokarcher3}. In the next corollary, we see that the ``if'' part of Theorem \ref{tautheokarcher1} holds for $\triangle$-convergence; however, whether $\triangle$-convergence of $\{x_n\}$ implies its $\triangle$-almost convergence in general Hadamard spaces is an open question for us.
\begin{corollary}\label{tautheokarcher1weak}
Let $\{x_n\}$ be a sequence in a Hadamard space $\mathscr H$. If $\sigma_n^k$, defined as the unique minimizer of \eqref{e23}, $\triangle$-converges to $y$ uniformly in $k\geqslant 0$ $($or the sequence $\{x_n\}$ $\triangle$-almost converges to $y)$ and $\{x_n\}$ is asymptotically regular $($i.e., $d(x_n,x_{n+1})\to0$ as $n\to\infty)$, then $\{x_n\}$ $\triangle$-converges to $y$.
\end{corollary}
\begin{proof}
By nonexpansiveness of the projection mapping onto any geodesic $I$ issuing from $x$, we have:
\allowdisplaybreaks\begin{eqnarray}
d^2(P_Ix_k,x) &\leqslant &2d^2(P_Ix_k, P_I\sigma_n^k)+2d^2(P_I\sigma_n^k, x) \nonumber\\
&\leqslant &2d^2(x_k, \sigma_n^k)+2d^2(P_I\sigma_n^k, x). \nonumber
\end{eqnarray}
Now the proof of Theorem \ref{tautheokarcher1} can be repeated, and the result follows from Proposition \ref{weakcondef2}.
\end{proof}
Also, for the relation between $\triangle$-mean convergence and $\triangle$-almost convergence of a sequence, we have the following corollary.
\begin{corollary}\label{tautheokarcher2weak}
For a sequence $\{x_n\}$ in a Hadamard space $\mathscr H$, if $\sigma_n^k$, defined as the unique minimizer of \eqref{e23}, $\triangle$-converges to $y$ uniformly in $k\geqslant 0$ $($or the sequence $\{x_n\}$ $\triangle$-almost converges to $y)$, then $\sigma_n$, defined as the unique minimizer of \eqref{e22}, $\triangle$-converges to $y$ $($or the sequence $\{x_n\}$ $\triangle$-mean converges to $y)$. Conversely, $\triangle$-convergence of $\sigma_n$ to $y$ together with \eqref{taucondkarchermean2} implies $\triangle$-convergence of $\sigma_n^k$ to $y$ uniformly in $k\geqslant 0$.
\end{corollary}
\begin{proof}
It is obvious that $\triangle$-almost convergence of the sequence $\{x_n\}$ to $x$ implies its $\triangle$-mean convergence to $x$. Under condition \eqref{taucondkarchermean2}, the reverse direction holds by the same proof as in Theorem \ref{tautheokarcher2}: by nonexpansiveness of the projection mapping onto any geodesic $I$ issuing from $x$, we have:
\allowdisplaybreaks\begin{eqnarray}
d^2(P_I\sigma_n^k ,x) &\leqslant &2d^2(P_I\sigma_n^k, P_I\sigma_{n+k})+2d^2(P_I\sigma_{n+k}, x) \nonumber\\
&\leqslant &2d^2(\sigma_n^k, \sigma_{n+k})+2d^2(P_I\sigma_{n+k}, x), \nonumber
\end{eqnarray}
and the proof is completed by Proposition \ref{weakcondef2}.
\end{proof}
In the next corollary, we obtain a suitable Tauberian condition under which $\triangle$-mean convergence implies $\triangle$-convergence of a sequence.
\begin{corollary}\label{tautheokarcher3weak}
Let $\{x_n\}$ be a sequence in a Hadamard space $\mathscr H$, and let $\sigma_n$ be the mean sequence defined as the unique minimizer of \eqref{e22}. If $\sigma_n$ $\triangle$-converges to $y$ and $nd(x_n,x_{n-1})\longrightarrow 0$ as $n\to\infty$, then $x_n$ $\triangle$-converges to $y$.
\end{corollary}
\begin{proof}
By nonexpansiveness of the projection mapping onto any geodesic $I$ issuing from $x$, we have:
\allowdisplaybreaks\begin{eqnarray}
d(P_Ix_n ,x) &\leqslant &d(P_Ix_n, P_I\sigma_{n+1})+d(P_I\sigma_{n+1}, x) \nonumber\\
&\leqslant &d(x_n, \sigma_{n+1})+d(P_I\sigma_{n+1}, x). \nonumber
\end{eqnarray} The rest of the proof is similar to that of Theorem \ref{tautheokarcher3}, and the result follows from Proposition \ref{weakcondef2}.
\end{proof}
As we stated at the beginning of this section, we do not know whether $\triangle$-convergence of $\{x_n\}$ implies its $\triangle$-almost convergence. However, for the implication from $\triangle$-convergence to $\triangle$-mean convergence, we prove the next theorem in a special class of Hadamard spaces. The following geometric condition for metric spaces of nonpositive curvature was introduced by Kirk and Panyanak \cite{KIRK20083689}:
$(\mathbf{Q_4})$: for points $x,y,p,q\in\mathscr H$ and any point $m$ in the segment $[x,y]$
\begin{equation*}
d(p,x)<d(x,q)\ \&\ d(p,y)<d(y,q) \Longrightarrow d(p,m)\leqslant d(m,q).
\end{equation*}
Its modification, the $(\mathbf{\overline{Q_4}})$ condition, was introduced by Kakavandi \cite{kakavandi2013}:
$(\mathbf{\overline{Q_4}})$: for points $x,y,p,q\in\mathscr H$ and any point $m$ in the segment $[x,y]$
\begin{equation*}
d(p,x)\leqslant d(x,q)\ \&\ d(p,y)\leqslant d(y,q) \Longrightarrow d(p,m)\leqslant d(m,q).
\end{equation*}
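In the Euclidean plane the set $\{z : d(p,z)\leqslant d(q,z)\}$ is a closed half-plane, hence convex, so the $(\mathbf{\overline{Q_4}})$ condition holds there; the implication can be exercised on random points (the helper `d` is ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def d(a, b):
    return np.linalg.norm(a - b)

hits = 0
for _ in range(5000):
    x, y, p, q = rng.normal(size=(4, 2))
    # hypothesis of (Q4-bar): x and y both at least as close to p as to q
    if d(p, x) <= d(x, q) and d(p, y) <= d(y, q):
        hits += 1
        t = rng.random()
        m = (1 - t) * x + t * y      # a point of the segment [x, y]
        assert d(p, m) <= d(m, q) + 1e-12
assert hits > 0                      # the hypothesis was actually exercised
```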
In fact, if for $x,y\in \mathscr H$ we set $F(x,y):=\{z\in \mathscr H\ : \ d(x,z)\leqslant d(y,z) \}$, then the $(\mathbf{\overline{Q_4}})$ condition is equivalent to the convexity of $F(x,y)$ for all $x,y\in \mathscr H$. Hilbert spaces, $\Bbb{R}$-trees and any $CAT(0)$ space of constant curvature satisfy the $(\mathbf{\overline{Q_4}})$ condition \cite{kakavandi2013,espinola2009}. We need the next lemma before stating the main result.
\begin{lemma}\label{l16}
Let $\mathscr H$ be a Hadamard space, $\{x_n\}$ a sequence in $\mathscr H$, and let $\{\sigma_n\}$ and $\{\sigma_n^k\}$ be the means defined in Definition \ref{Karcher mean}. Then for each $k\geqslant 0$, we have:
\begin{enumerate}
\renewcommand{\theenumi}{\roman{enumi}}
\item\label{l16i1} $\sigma_n^k\in \overline{co}\{x_n\}$.
\item\label{l16i2} If $\{x_n\}$ is bounded, then $d\big(\sigma_n,\sigma_n^k\big)\longrightarrow 0$ as $n\rightarrow\infty$.
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{l16i1}.
Let $P:\mathscr H\longrightarrow \overline{co}\{x_n\}$ be the projection map. On the one hand, by inequality \eqref{projection}, for any $i$ we have:
\begin{equation*}
d^2(x_i,\sigma_n^k)\geqslant d^2(x_i,P\sigma_n^k)+ d^2(P\sigma_n^k, \sigma_n^k).
\end{equation*}
On the other hand, by definition of $\sigma_n^k$, we have:
\begin{equation*}
\displaystyle\frac{1}{n}\sum_{i=0}^{n-1} d^2(x_{k+i}, \sigma_n^k) \leqslant \displaystyle\frac{1}{n}\sum_{i=0}^{n-1} d^2(x_{k+i}, P\sigma_n^k).
\end{equation*}
Averaging the first inequality over $i$ and combining it with the second, we obtain $d^2(P\sigma_n^k, \sigma_n^k)\leqslant 0$, i.e., $\sigma_n^k=P\sigma_n^k\in\overline{co}\{x_n\}$, which is the required result.\\
\eqref{l16i2}.
By Lemma \ref{l15}, Part \eqref{l15i1}, and using the definition of $\sigma_n$ in inequality $\star$, we get:
\allowdisplaybreaks\begin{eqnarray}
d^2\big(\sigma_n,\sigma_n^k\big)&\leqslant &\frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big(x_{k+i},\sigma_n\big)- \frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big(x_{k+i},\sigma_n^k\big) \nonumber \\
&\leqslant &\frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big(x_i,\sigma_n\big)- \frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big(x_{k+i},\sigma_n^k\big) \nonumber \\
&&+ \frac{1}{n}\displaystyle\sum_{i=0}^{k-1} d^2\big(x_{n+i},\sigma_n\big) \nonumber \\
&\overset{\star}{\leqslant} &\frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big(x_i,\sigma_n^k\big)- \frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big(x_{k+i},\sigma_n^k\big) \nonumber \\
&&+\frac{1}{n}\displaystyle\sum_{i=0}^{k-1} d^2\big(x_{n+i},\sigma_n\big) \nonumber \\
&\leqslant & \frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big(x_{k+i},\sigma_n^k\big)- \frac{1}{n}\displaystyle\sum_{i=0}^{n-1}d^2\big(x_{k+i},\sigma_n^k \big) \nonumber\\
&& +\frac{1}{n}\displaystyle\sum_{i=0}^{k-1} d^2\big(x_i,\sigma_n^k\big)+\frac{1}{n}\displaystyle\sum_{i=0}^{k-1} d^2\big(x_{n+i},\sigma_n\big). \nonumber
\end{eqnarray}
Now the boundedness of the sequence $\{x_n\}$ (and hence of $\{\sigma_n\}$ and $\{\sigma_n^k\}$) completes the proof: after the cancellation, each remaining term is a sum of $k$ bounded summands divided by $n$, and hence tends to zero as $n\rightarrow\infty$.
\end{proof}
\begin{theorem}\label{tautheokarcher4}
Let $\mathscr H$ be a Hadamard space satisfying the $(\overline{Q_4})$ condition, let $\{x_n\}$ be a sequence in $\mathscr H$, and let $\sigma_n$ be the mean sequence defined as the unique minimizer of \eqref{e22}. If $\{x_n\}$ $\triangle$-converges to $y$, then $\sigma_n$ $\triangle$-converges to $y$ $($or $\{x_n\}$ $\triangle$-mean converges to $y)$.
\end{theorem}
\begin{proof}
First note that by $\triangle$-convergence of $\{x_n\}$, this sequence, and hence $\{\sigma_n\}$ and $\{\sigma_n^k\}$ for each $k\geqslant 1$, are bounded; therefore by Part \eqref{l16i2} of Lemma \ref{l16}, for each $k\geqslant 1$, $d\big(\sigma_n,\sigma_n^k\big)\longrightarrow 0$ as $n\rightarrow\infty$. Also, boundedness of $\{\sigma_n\}$ implies that there exists a subsequence $\{\sigma_{n_i}\}$ of $\{\sigma_n\}$ that $\bigtriangleup$-converges to some $v\in \mathscr H$. Since $d\big(\sigma_n,\sigma_n^k\big)\longrightarrow 0$ for each $k\geqslant 1$, the sequence $\{\sigma_{n_i}^k\}$ also $\bigtriangleup$-converges to $v$ for each $k\geqslant 1$. If we show that $v=y$, the proof is complete. Suppose to the contrary that $v\neq y$, i.e., that
\begin{equation*}
d(v,y)=\delta
\end{equation*}
for some $\delta>0$.
On the other hand, by $\bigtriangleup$-convergence of $x_n$ to $y$, Proposition \ref{weakcondef3} gives $\displaystyle\limsup_{n\rightarrow\infty} \langle yx_n,yv \rangle\leqslant 0$, that is:
\begin{equation*}
\displaystyle\limsup_{n\rightarrow\infty} \Big(d^2(v,y)+d^2(x_n,y)-d^2(x_n,v)\Big)\leqslant 0.
\end{equation*}
Hence, as $d^2(v,y)=\delta^2>0$, there exists $N$ such that for all $n\geqslant N$,
\begin{equation*}
d^2(x_n,y)-d^2(x_n,v)\leqslant 0.
\end{equation*}
By the $(\overline{Q_4})$ condition, the set $F(y,v)$ is convex, and since $x_n\in F(y,v)$ for all $n\geqslant N$, Part \eqref{l16i1} of Lemma \ref{l16} (applied to the tail sequence $\{x_{N+i}\}$) gives $\sigma_n^N\in F(y,v)$ for every $n$. By continuity of the metric, $F(y,v)$ is also closed, and hence by Lemma \ref{lconsubseq} it is $\bigtriangleup$-closed. Since $\sigma_{n_i}^N$ $\bigtriangleup$-converges to $v$, it follows that $v\in F(y,v)$, i.e., $d(y,v)\leqslant d(v,v)=0$, so $y=v$, which is a contradiction. This completes the proof.
\end{proof}
\begin{remark}
\noindent It is easy to check that Corollaries \ref{tautheokarcher1weak}, \ref{tautheokarcher2weak} and Theorem \ref{tautheokarcher4} remain true, with analogous arguments, for curves in Hadamard spaces.
\end{remark}
\section{Almost Convergence of Almost Periodic Sequences}
In \cite{bohr1947almost,lorentz1948} it is shown that every almost periodic real sequence is almost convergent. We now prove the analogous result in Hadamard spaces.
\begin{definition}[Periodic and Almost Periodic Sequences]
Let $\{x_n\}$ be a sequence in a metric space $(X,d)$. We say that $\{x_n\}$ is periodic with period $p$ (or $p$-periodic) if $p$ is a positive integer such that $x_{n+p}=x_n$ for all $n$.\\
A sequence $\{x_n\}$ is called almost periodic if for each $\epsilon>0$ there are natural numbers $L=L(\epsilon)$ and $N=N(\epsilon)$ such that every interval $(k,k+L)$ with $k\geqslant 0$ contains at least one integer $p$ satisfying
\begin{equation}
d(x_{n+p},x_n)<\epsilon, \quad\quad \forall n\geqslant N.
\end{equation}
\end{definition}
We need the next lemma for proving almost convergence of almost periodic sequences in Hadamard spaces.
\begin{lemma}\label{lemstconfun2}$($see \cite[Lemma 4.3]{khatibzadehpouladi2019}$)$
Let $(\mathscr H, d)$ be a Hadamard space and $\{f_n^k\}_{k,n}$ a sequence of convex functions on $\mathscr H$. Suppose $\{x_n^k\}_{k,n}$ is a sequence of minimum points of $\{f_n^k\}_{k,n}$ and $x$ is the unique minimizer of a strongly convex function $f$, satisfying:
\begin{enumerate}
\renewcommand{\theenumi}{\Roman{enumi}}
\item\label{lemstconfun2i1} the sequence $\{f_n^k\}$ is pointwise convergent to $f$ as $n$ tends to infinity uniformly in $k\geqslant 0$,
\item\label{lemstconfun2i2} $\displaystyle\limsup_{n\rightarrow\infty} \sup_{k\geqslant 0}\big(f(x_n^k)-f_n^k(x_n^k)\big)\leqslant 0$,
\end{enumerate}
then $x_n^k$ converges to $x$ uniformly in $k\geqslant 0$ as $n\rightarrow \infty$.
\end{lemma}
\begin{theorem}
Let $\{x_n\}$ be an almost periodic sequence in Hadamard space $\mathscr H$. Then the sequence $\{x_n\}$ is almost convergent.
\end{theorem}
\begin{proof}
Since $\{x_n\}$ is almost periodic, by \cite[Proposition 3.3]{khatibzadehpouladi2019}, for each $x$ the sequence $\{d(x_n,x)\}$ is almost periodic; it is then easy to check that $\{d^2(x_n,x)\}$ is almost periodic as well. By \cite{lorentz1948} (see also \cite{bohr1947almost}), the scalar sequence $\{d^2(x_n,x)\}$ is almost convergent for all $x\in \mathscr H$. Define:
\begin{equation}\label{eqmainresu1}
\mathcal{F}_n^k(x):=\displaystyle\frac{1}{n}\sum_{i=0}^{n-1}d^2(x_{k+i},x),
\end{equation}
and
\begin{equation}\label{eqmainresu2}
\mathcal{F}(x):=\displaystyle\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}d^2(x_{k+i},x)\quad \text{uniformly in}\ k\geqslant 0.
\end{equation}
Almost convergence of $\big\{d^2(x_n,x)\big\}$ for every $x\in \mathscr H$ shows that \eqref{eqmainresu2} is well defined. By the strong convexity of $d^2(\cdot,x)$, the functions $\mathcal{F}_n^k$ and $\mathcal{F}$ are strongly convex and therefore have unique minimizers $\sigma_n^k$ and $\sigma$, respectively. By an argument analogous to that of \cite[Theorem 4.4]{khatibzadehpouladi2019}, using Lemma \ref{lemstconfun2}, we conclude that $\sigma_n^k$ converges to $\sigma$ uniformly in $k\geqslant 0$ as $n\rightarrow \infty$, i.e., the sequence $\{x_n\}$ is almost convergent. This completes the proof.
\end{proof}
Every $N$-periodic sequence is almost periodic and, by the previous theorem, almost convergent. We now prove that it almost converges to the Karcher mean of its first $N$ points.
\begin{theorem}
Let $\{x_n\}$ be an $N$-periodic sequence in a Hadamard space $\mathscr H$. Then the sequence $\{x_n\}$ is almost convergent to $\sigma_N$, defined in Definition \ref{Karcher mean}.
\end{theorem}
\begin{proof}
By Definition \ref{Karcher mean}, $\sigma_n$, the Karcher mean of $x_0,\cdots,x_{n-1}$, is the unique minimizer of the function \eqref{e22}, and $\sigma_n^k$, the Karcher mean of $x_k,\cdots,x_{k+n-1}$, is the unique minimizer of the function \eqref{e23}. By Part \eqref{l15i1} of Lemma \ref{l15}, we have:
\begin{equation*}
d^2\big( \sigma_n^k ,\sigma_N\big)\leqslant \frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},\sigma_N)-\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},\sigma_n^k).
\end{equation*}
Write $n=tN+r$ with $0\leq r<N$. Now, using the $N$-periodicity of the sequence $\{x_n\}$ in step $\star$ and the definition of $\sigma_N$ in step $\star\star$, we obtain:
\allowdisplaybreaks\begin{eqnarray}
d^2\big( \sigma_n^k ,\sigma_N\big) &\leqslant & \frac{1}{n}\displaystyle\sum_{i=0}^{tN-1} d^2(x_{k+i},\sigma_N) +\frac{1}{n}\displaystyle\sum_{i=tN}^{tN+r-1} d^2(x_{k+i},\sigma_N) \nonumber \\
&&-\frac{1}{n}\displaystyle\sum_{i=0}^{tN-1} d^2(x_{k+i},\sigma_n^k)- \frac{1}{n}\displaystyle\sum_{i=tN}^{tN+r-1} d^2(x_{k+i},\sigma_n^k) \nonumber \\
&=& \frac{t}{n}\displaystyle\sum_{i=0}^{N-1} d^2(x_{k+i},\sigma_N) +\frac{1}{n}\displaystyle\sum_{i=tN}^{tN+r-1} d^2(x_{k+i},\sigma_N) \nonumber \\
&&-\frac{t}{n}\displaystyle\sum_{i=0}^{N-1} d^2(x_{k+i},\sigma_n^k)- \frac{1}{n}\displaystyle\sum_{i=tN}^{tN+r-1} d^2(x_{k+i},\sigma_n^k) \nonumber \\
&\leqslant &\frac{t}{n}\displaystyle\sum_{i=k}^{k+N-1} d^2(x_i,\sigma_N)-\frac{t}{n}\displaystyle\sum_{i=k}^{k+N-1} d^2(x_i,\sigma_n^k)+\frac{1}{n}\displaystyle\sum_{i=tN}^{tN+r-1} d^2(x_{k+i},\sigma_N) \nonumber\\
&=&\frac{t}{n}\Big(\displaystyle\sum_{i=0}^{N-1} d^2(x_i,\sigma_N)+\displaystyle\sum_{i=N}^{k+N-1} d^2(x_i,\sigma_N)-\displaystyle\sum_{i=0}^{k-1} d^2(x_i,\sigma_N) \Big)\nonumber\\
&&+\frac{t}{n}\Big(-\displaystyle\sum_{i=0}^{N-1} d^2(x_i,\sigma_n^k)+\displaystyle\sum_{i=0}^{k-1} d^2(x_i,\sigma_n^k)-\displaystyle\sum_{i=N}^{k+N-1} d^2(x_i,\sigma_n^k)\Big)\nonumber\\
&&+\frac{1}{n}\displaystyle\sum_{i=tN}^{tN+r-1} d^2(x_{k+i},\sigma_N) \nonumber \\
&\overset{\star}{=}&\frac{t}{n}\displaystyle\sum_{i=0}^{N-1} d^2(x_i,\sigma_N)-\frac{t}{n}\displaystyle\sum_{i=0}^{N-1} d^2(x_i,\sigma_n^k)+\frac{1}{n}\displaystyle\sum_{i=tN}^{tN+r-1} d^2(x_{k+i},\sigma_N) \nonumber \\
&\overset{\star\star}{\leqslant}&\frac{t}{n}\displaystyle\sum_{i=0}^{N-1} d^2(x_i,\sigma_n^k)-\frac{t}{n}\displaystyle\sum_{i=0}^{N-1} d^2(x_i,\sigma_n^k)+\frac{1}{n}\displaystyle\sum_{i=tN}^{tN+r-1} d^2(x_{k+i},\sigma_N) \nonumber \\
&\leqslant &\displaystyle\frac{r}{n}\sup_{tN\leqslant i\leqslant tN+r-1} d^2(x_{k+i},\sigma_N),
\end{eqnarray}
hence
\begin{equation*}
\displaystyle\sup_{k\geqslant 0} d^2\big( \sigma_n^k ,\sigma_N\big) \leqslant \displaystyle\frac{r}{n}\sup_{k\geqslant 0}\sup_{tN\leqslant i\leqslant tN+r-1} d^2(x_{k+i},\sigma_N).
\end{equation*}
Now letting $n\rightarrow\infty$ completes the proof.
\end{proof}
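In the model Hadamard space $(\mathbb{R},|\cdot|)$ the Karcher mean reduces to the arithmetic mean, so the convergence $\sup_{k\geqslant 0} d^2(\sigma_n^k,\sigma_N)\to 0$ established above can be observed numerically. A minimal sketch (the period and grid sizes are illustrative choices, not from the paper):

```python
# In the Hadamard space (R, |.|), d(x, y) = |x - y| and the Karcher mean
# of x_0, ..., x_{n-1} is the arithmetic mean: it is the unique minimizer
# of s -> (1/n) * sum_i d^2(x_i, s).
def karcher_mean(points):
    return sum(points) / len(points)

N = 3
period = [0.0, 5.0, 1.0]          # one period of an N-periodic sequence
x = lambda i: period[i % N]

sigma_N = karcher_mean(period)    # Karcher mean of one period

def sup_error(n, k_max=50):
    # sup over k of d^2(sigma_n^k, sigma_N),
    # where sigma_n^k is the Karcher mean of x_k, ..., x_{k+n-1}
    return max((karcher_mean([x(k + i) for i in range(n)]) - sigma_N) ** 2
               for k in range(k_max))

errs = [sup_error(n) for n in (10, 100, 1000)]
print(errs)
```

The errors decay like $r/n$ times a bounded quantity, consistent with the final estimate of the proof.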
\bibliographystyle{amsplain}
| {
"timestamp": "2019-06-18T02:29:06",
"yymm": "1906",
"arxiv_id": "1906.06988",
"language": "en",
"url": "https://arxiv.org/abs/1906.06988",
"abstract": "In the present paper, after recalling the Karcher mean in Hadamard spaces, we study the relation between convergence, almost convergence and mean convergence (respect to the defined mean) of a sequence in Hadamard spaces. These results extend Tauberian conditions from Banach spaces to Hadamard spaces. Also, we show that every almost periodic sequence in Hadamard spaces is almost convergent.",
"subjects": "Functional Analysis (math.FA)",
"title": "Tauberian Conditions for Almost Convergence in a Geodesic Metric Space",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105342148367,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7076984749650977
} |
https://arxiv.org/abs/1101.2932 | Fractional Variational Calculus with Classical and Combined Caputo Derivatives | We give a proper fractional extension of the classical calculus of variations by considering variational functionals with a Lagrangian depending on a combined Caputo fractional derivative and the classical derivative. Euler-Lagrange equations to the basic and isoperimetric problems are proved, as well as transversality conditions. | \section{Introduction}
Fractional calculus (FC) is a generalization of (integer-order) differential calculus,
in the sense that it deals with derivatives of real or complex order.
FC was born on 30th September 1695. On that day, L'H\^{o}pital
wrote a letter to Leibniz, where he asked about Leibniz's notation
of $n$th order derivatives of a linear function.
L'H\^{o}pital wanted to know the result for the derivative of order $n=1/2$.
Leibniz replied that ``\emph{one day, useful consequences will be drawn}''
and, in fact, his vision became a reality. The study of non-integer order derivatives
rapidly became a very attractive subject to mathematicians,
and many different forms of fractional (\textrm{i.e.}, non-integer)
derivative operators were introduced:
the Gr\"{u}nwald--Letnikov, Riemann--Liouville, Caputo \cite{Hilfer,Kilbas,Podlubny},
and the more recent notions of Cresson \cite{Cresson}, Jumarie \cite{Jumarie},
or Klimek \cite{Klimek}.
The calculus of variations with fractional
derivatives was born in 1996 with the work of Riewe,
to better describe nonconservative systems in mechanics
\cite{CD:Riewe:1996,CD:Riewe:1997}. It is a subject
of strong current research due to its many applications
in science and engineering, including mechanics,
chemistry, biology, economics, and control theory
(see, \textrm{e.g.}, the recent papers
\cite{FrMult,MyID:154,MyID:147,MyID:152,MyID:179,FrTor,%
withBasiaRachid,NatFr,comDorota,MyID:181}).\footnote{The
literature on \emph{fractional variational calculus}
is vast, and we do not try to provide here a comprehensive
review on the subject. We give only some representative references
from 2010 and 2011. Other references can be found therein.}
Following \cite{Tatiana:Spain2010},
we consider here that the highest derivative
in the Lagrangian is of integer order.
The main advantage of our formulation,
with respect to the ``pure'' fractional
approach adopted in the literature,
is that the classical results of variational calculus
can now be obtained as particular cases. We recall that
the only possibility of obtaining the classical derivative $y'$ from
a fractional derivative $D^\alpha y$, $\alpha \in (0,1)$,
is to take the limit when $\alpha$ tends to one.
However, in general such a limit does not exist \cite{Ross:Samko:Love}.
Differently from \cite{Tatiana:Spain2010},
where the fractional problems are considered in the sense
of Riemann--Liouville, we consider here
combined Caputo derivatives $^{C}D^{\alpha,\beta}_{\gamma}$.
The operator $^{C}D^{\alpha,\beta}_{\gamma}$
extends the Caputo fractional derivatives,
and was introduced for the first time in \cite{ComCa}
as a useful tool in the description of some nonconservative models
and more general classes of variational problems.
More precisely, we investigate here problems of the calculus
of variations with integrands depending
on the independent variable $x$, an unknown vector-function $\textbf{y}$,
its integer order derivative $\textbf{y}'$,
and a fractional derivative $^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}$
given as a convex combination
of the left Caputo fractional derivative of order $\alpha$
and the right Caputo fractional derivative of order $\beta$.
The paper is organized as follows. Section~\ref{sec:PFC}
presents the basic definitions and facts needed in the sequel.
Our results are then stated and proved in Section~\ref{sec-FrCVDI}.
We discuss the fundamental concepts of a variational calculus such as the
Euler--Lagrange equations for the basic (Theorem~\ref{theorem:EL})
and isoperimetric (Theorem~\ref{theorem:isop}) problems, as well as
transversality conditions (Theorem~\ref{theorem:nbc}).
We end with an illustrative example of the results
of the paper (Section~\ref{sec:ex}).
\section{Preliminaries on Fractional Calculus}
\label{sec:PFC}
In this section we present some basic necessary facts on fractional calculus.
For more on the subject and applications,
we refer the reader to the books \cite{Hilfer,Kilbas,Podlubny}.
\begin{definition}[Riemann--Liouville fractional integrals]
Let $f\in L_1\left([a,b]\right)$ and $0<\alpha<1$.
The left Riemann--Liouville Fractional Integral (RLFI)
of order $\alpha$ of a function $f$ is defined by
\begin{equation*}
_{a}\textsl{I}_x^\alpha f(x)
:=\frac{1}{\Gamma(\alpha)}\int_a^x(x-t)^{\alpha-1}f(t)dt,
\end{equation*}
and the right RLFI by
\begin{equation*}
_{x}\textsl{I}_b^\alpha f(x)
:=\frac{1}{\Gamma(\alpha)}\int_x^b(t-x)^{\alpha-1}f(t)dt,
\end{equation*}
for all $x\in[a,b]$.
\end{definition}
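As a quick sanity check of the definition, the left RLFI of $f(t)=t$ with $a=0$ has the known closed form $\Gamma(2)/\Gamma(2+\alpha)\,x^{1+\alpha}$. The sketch below (the helper name and quadrature choices are ours; $\alpha=1/2$ is taken so that the substitution $x-t=s^2$ removes the endpoint singularity) compares a midpoint-rule evaluation against it:

```python
import math

def left_rlfi_half(f, a, x, m=20000):
    # Left RLFI of order alpha = 1/2:
    #   (1/Gamma(1/2)) * int_a^x (x - t)^(-1/2) f(t) dt.
    # The substitution x - t = s^2 removes the endpoint singularity:
    #   = (2/sqrt(pi)) * int_0^sqrt(x-a) f(x - s^2) ds  (midpoint rule).
    L = math.sqrt(x - a)
    h = L / m
    total = sum(f(x - (h * (j + 0.5)) ** 2) for j in range(m)) * h
    return 2.0 * total / math.sqrt(math.pi)

x = 1.0
approx = left_rlfi_half(lambda t: t, 0.0, x)
exact = math.gamma(2.0) / math.gamma(2.5) * x ** 1.5   # known closed form
print(approx, exact)
```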
\begin{definition}[Left and right Riemann--Liouville fractional derivatives]
The left Riemann--Liouville Fractional Derivative (RLFD)
of order $\alpha$ of a function $f$, denoted by $_{a}\textsl{D}_x^\alpha f$,
is defined by
\begin{equation*}
_{a}\textsl{D}_x^\alpha f(x)
:= \frac{d}{dx}{_{a}}\textsl{I}_x^{1-\alpha}f(x)
=\frac{1}{\Gamma(1-\alpha)}\frac{d}{dx}\int_a^x(x-t)^{-\alpha}f(t)dt,
\end{equation*}
$x\in [a,b]$. Similarly, the right RLFD of order $\alpha$
of a function $f$, denoted by $_{x}\textsl{D}_b^\alpha f$,
is defined by
\begin{equation*}
_{x}\textsl{D}_b^\alpha f(x)
:= -\frac{d}{dx}{_{x}}\textsl{I}_b^{1-\alpha}f(x)
= \frac{-1}{\Gamma(1-\alpha)}\frac{d}{dx}\int_x^b(t-x)^{-\alpha}f(t)dt,
\end{equation*}
$x\in [a,b]$.
\end{definition}
\begin{definition}[Caputo fractional derivatives]
Let $f\in AC\left([a,b]\right)$, where $AC\left([a,b]\right)$
represents the space of absolutely continuous functions
on the interval $[a,b]$. The left Caputo Fractional Derivative (CFD) is defined by
\begin{equation*}
^{C}_{a}\textsl{D}_x^\alpha f(x)
:= {_{a}}\textsl{I}_x^{1-\alpha}\left(\frac{d}{dt}f\right)(x)
= \frac{1}{\Gamma(1-\alpha)}\int_a^x(x-t)^{-\alpha}\frac{d}{dt}f(t)dt,
\end{equation*}
$x\in [a,b]$, and the right CFD by
\begin{equation*}
^{C}_{x}\textsl{D}_b^\alpha f(x)
:= {_{x}}\textsl{I}_b^{1-\alpha}\left(-\frac{d}{dt}f\right)(x)
= \frac{-1}{\Gamma(1-\alpha)}\int_x^b(t-x)^{-\alpha}\frac{d}{dt}f(t)dt,
\end{equation*}
$x\in [a,b]$, where $\alpha$ is the order of the derivative.
\end{definition}
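For example, the left CFD of $f(t)=t^2$ with $a=0$ is known to be $\Gamma(3)/\Gamma(3-\alpha)\,x^{2-\alpha}$. A numerical check for $\alpha=1/2$ (the helper name and the singularity-removing substitution $x-t=s^2$ are ours):

```python
import math

def left_cfd_half(df, a, x, m=20000):
    # Left Caputo derivative of order 1/2: the RL integral of order
    # 1 - 1/2 = 1/2 applied to f' (passed in as df); the substitution
    # x - t = s^2 removes the endpoint singularity.
    L = math.sqrt(x - a)
    h = L / m
    total = sum(df(x - (h * (j + 0.5)) ** 2) for j in range(m)) * h
    return 2.0 * total / math.sqrt(math.pi)

x = 1.0
approx = left_cfd_half(lambda t: 2.0 * t, 0.0, x)      # f(t) = t^2
exact = math.gamma(3.0) / math.gamma(2.5) * x ** 1.5   # Gamma(3)/Gamma(5/2) x^{3/2}
print(approx, exact)
```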
\begin{theorem}[Fractional integration by parts \cite{Kilbas}]
Let $p\geq1$, $q\geq1$, and $\frac{1}{p}+\frac{1}{q}\leq1+\alpha$.
If $g\in L_p\left([a,b]\right)$ and $f\in L_q\left([a,b]\right)$,
then the following formula for integration by parts holds:
$$
\int\limits_a^bg(x){_{a}}\textsl{I}_x^{\alpha}f(x)dx
=\int\limits_a^bf(x){_{x}}\textsl{I}_b^{\alpha}g(x)dx.
$$
\end{theorem}
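The identity can also be checked numerically: with $\alpha=1/2$, $f(t)=t$ and $g\equiv 1$ on $[0,1]$ (an illustrative choice of ours), both sides equal $8/(15\sqrt{\pi})$. A sketch using midpoint rules:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def left_I_half(f, a, x, m=400):
    # (1/Gamma(1/2)) int_a^x (x - t)^(-1/2) f(t) dt  via  x - t = s^2
    L = math.sqrt(x - a); h = L / m
    return 2.0 * sum(f(x - (h * (j + 0.5)) ** 2) for j in range(m)) * h / SQRT_PI

def right_I_half(f, x, b, m=400):
    # (1/Gamma(1/2)) int_x^b (t - x)^(-1/2) f(t) dt  via  t - x = s^2
    L = math.sqrt(b - x); h = L / m
    return 2.0 * sum(f(x + (h * (j + 0.5)) ** 2) for j in range(m)) * h / SQRT_PI

a, b = 0.0, 1.0
f = lambda t: t
g = lambda t: 1.0

n = 400
h = (b - a) / n
xs = [a + h * (j + 0.5) for j in range(n)]
lhs = sum(g(xq) * left_I_half(f, a, xq) for xq in xs) * h
rhs = sum(f(xq) * right_I_half(g, xq, b) for xq in xs) * h
print(lhs, rhs)
```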
\begin{definition}[The combined fractional derivative
$^{C}D^{\alpha,\beta}_{\gamma}$ \cite{ComCa}]
Let $\alpha,\beta\in(0,1)$ and $\gamma\in[0,1]$.
The combined fractional derivative operator $^{C}D^{\alpha,\beta}_{\gamma}$
is given by
$$
^{C}D^{\alpha,\beta}_{\gamma}
:=\gamma ^{C}_{a}D^{\alpha}_{x}+(1-\gamma) ^{C}_{x}D^{\beta}_{b}.
$$
\end{definition}
\begin{remark}
\label{remarl:pc:comD}
The combined fractional derivative coincides
with the right CFD in the case $\gamma = 0$, \textrm{i.e.},
$^{C}D^{\alpha,\beta}_{0}f(x)= {^{C}_{x}D^{\beta}_{b}}f(x)$.
For $\gamma = 1$ one gets the left CFD:
$^{C}D^{\alpha,\beta}_{1}f(x)= ^{C}_{a}D^{\alpha}_{x}f(x)$.
\end{remark}
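The two limiting cases of the remark, and intermediate values of $\gamma$, can be verified numerically for $f(t)=t$ on $[0,1]$ with $\alpha=\beta=1/2$, where the left and right CFDs have the closed forms $x^{1/2}/\Gamma(3/2)$ and $-(b-x)^{1/2}/\Gamma(3/2)$, respectively. A sketch (helper names and quadrature choices are ours):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def left_cfd_half(df, a, x, m=2000):
    # left Caputo derivative of order 1/2 (substitution x - t = s^2)
    h = math.sqrt(x - a) / m
    return 2.0 * sum(df(x - (h * (j + 0.5)) ** 2) for j in range(m)) * h / SQRT_PI

def right_cfd_half(df, x, b, m=2000):
    # right Caputo derivative of order 1/2 (note the minus sign on f')
    h = math.sqrt(b - x) / m
    return -2.0 * sum(df(x + (h * (j + 0.5)) ** 2) for j in range(m)) * h / SQRT_PI

def combined_half(df, a, b, x, gamma, m=2000):
    # ^C D^{1/2,1/2}_gamma = gamma * (left CFD) + (1 - gamma) * (right CFD)
    return (gamma * left_cfd_half(df, a, x, m)
            + (1.0 - gamma) * right_cfd_half(df, x, b, m))

a, b, x = 0.0, 1.0, 0.3
df = lambda t: 1.0                                # f(t) = t
left_exact = 2.0 * math.sqrt(x) / SQRT_PI         # x^{1/2} / Gamma(3/2)
right_exact = -2.0 * math.sqrt(b - x) / SQRT_PI   # -(b-x)^{1/2} / Gamma(3/2)
for gamma in (0.0, 0.5, 1.0):
    print(gamma, combined_half(df, a, b, x, gamma),
          gamma * left_exact + (1.0 - gamma) * right_exact)
```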
For $\textbf{f}=[f_1,\ldots,f_N]:[a,b]\rightarrow\mathbb{R}^N$,
$N\in\mathbb{N}$, and $f_i\in AC\left([a,b]\right)$,
$i=1,\ldots,N$, we put
\begin{equation*}
^{C}D^{\alpha,\beta}_{\gamma}\textbf{f}(x)
:=\left[^{C}D^{\alpha,\beta}_{\gamma}f_1(x),
\ldots, ^{C}D^{\alpha,\beta}_{\gamma}f_N(x)\right].
\end{equation*}
In the discussion to follow, we also need the following formula
of fractional integration by parts \cite{ComCa}:
\begin{multline}
\label{eq:fracIP:C}
\int\limits_a^b g(x) ^{C}D^{\alpha,\beta}_{\gamma}f(x)dx
= \int\limits_a^b f(x)D^{\beta,\alpha}_{1-\gamma}g(x)dx\\
+\left[ \gamma f(x)_{x}\textsl{I}_b^{1-\alpha} g(x)
- \left(1-\gamma\right) f(x)_{a}\textsl{I}_x^{1-\beta} g(x)\right]_{x=a}^{x=b},
\end{multline}
where $D^{\beta,\alpha}_{1-\gamma}:=(1-\gamma)_{a}\textsl{D}_x^\beta
+\gamma _{x}\textsl{D}_b^\alpha$.
\section{Main Results}
\label{sec-FrCVDI}
Consider the following functional:
\begin{equation}
\label{eq:FU}
\mathcal{J}\left(\textbf{y}\right)
=\int\limits_a^b
L\left(x,\textbf{y}(x),\textbf{y}'(x),{^{C}}D^{\alpha,\beta}_{\gamma}\textbf{y}(x)\right)dx,
\end{equation}
where $x\in[a,b]$ is the independent variable;
$\textbf{y}(x)\in\mathbb{R}^N$ is a real vector variable;
$\textbf{y}'(x)\in\mathbb{R}^N$ with $\textbf{y}'$ the
first derivative of $\textbf{y}$; ${^{C}}D^{\alpha,\beta}_{\gamma}\textbf{y}(x)
\in\mathbb{R}^N$ stands for the combined fractional derivative of $\textbf{y}$
evaluated in $x$; and $L\in C^1\left([a,b]\times\mathbb{R}^{3N};\mathbb{R}\right)$.
Let $\textbf{D}$ denote the set of all functions
$\textbf{y}:[a,b]\rightarrow \mathbb{R}^N$ such that $\textbf{y}'$
and $^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}$ exist and are continuous
on the interval $[a,b]$. We endow $\textbf{D}$ with the norm
\begin{equation*}
\left\|\textbf{y}\right\|_{1,\infty}
:= \max\limits_{a\leq x\leq b}\left\|\textbf{y}(x)\right\|
+\max\limits_{a\leq x\leq b}\left\|\textbf{y}'(x)\right\|
+\max\limits_{a\leq x\leq b}\left\| ^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}(x)\right\|,
\end{equation*}
where $\left\|\cdot\right\|$ is a norm in $\mathbb{R}^N$.
Throughout this work, we denote by $\partial_{i}K$, $i=1,\ldots,M$ $(M\in\mathbb{N})$, the partial
derivative of a function $K:\mathbb{R}^M\rightarrow\mathbb{R}$ with respect to its $i$th argument.
Let $\lambda\in\mathbb{R}^r$. For simplicity of notation
we introduce the operators $\left[\cdot\right]^{\alpha,\beta}_{\gamma}$
and $_{\lambda}\left\{\cdot\right\}^{\alpha,\beta}_{\gamma}$ defined by
\begin{gather*}
\left[\textbf{y}\right]^{\alpha,\beta}_{\gamma}(x)
:=\left(x,\textbf{y}(x),\textbf{y}'(x),
^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}(x)\right),\\
_{\lambda}\left\{\textbf{y}\right\}^{\alpha,\beta}_{\gamma}(x)
:=\left(x,\textbf{y}(x),\textbf{y}'(x),
^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}(x),
\lambda_1,\ldots,\lambda_r\right).
\end{gather*}
\subsection{The Euler--Lagrange equation}
\label{sub:sec:FrIP}
We begin with the following problem of the fractional calculus of variations.
\begin{problem}
\label{problem:1} Find a function $y\in\textnormal{\textbf{D}}$ for
which the functional \eqref{eq:FU}, \textrm{i.e.},
\begin{equation}
\label{eq:Funct} \mathcal{J}\left(y\right)
=\int\limits_a^bL\left[y\right]^{\alpha,\beta}_{\gamma}(x)dx,
\end{equation}
subject to given boundary conditions
\begin{equation}
\label{eq:BoundCond} y(a)=\textbf{y}^a, \quad y(b) =\textbf{y}^b,
\end{equation}
$\textbf{y}^a,\textbf{y}^b\in\mathbb{R}^N$, achieves a minimum.
\end{problem}
\begin{definition}[Admissible function]
\label{adm:f} A function $y\in\textnormal{\textbf{D}}$ that
satisfies all the constraints of a problem is said to be admissible
to that problem. The set of admissible functions is denoted by
$\mathcal{D}$.
\end{definition}
\begin{remark}
For Problem~\ref{problem:1} the constraints
mentioned in Definition~\ref{adm:f} are the
boundary conditions \eqref{eq:BoundCond}.
\end{remark}
We now define what is meant by minimum of
$\mathcal{J}$ on $\mathcal{D}$.
\begin{definition}[Local minimizer]
A function $\overline{y}\in\mathcal{D}$ is said to be a local
minimizer to $\mathcal{J}$ on $\mathcal{D}$ if there exists some
$\delta>0$ such that
\begin{equation*}
\mathcal{J}\left(\overline{y}\right) -\mathcal{J}\left(y\right) \leq
0
\end{equation*}
for all functions $y\in\mathcal{D}$ with $\left\|y
-\overline{y}\right\|_{1,\infty}<\delta$.
\end{definition}
Similarly to the classical calculus of variations,
a necessary optimality condition to Problem~\ref{problem:1}
is based on the concept of variation.
\begin{definition}[First variation]
The first variation of $\mathcal{J}$ at
$y\in\textnormal{\textbf{D}}$ in the direction
$\textnormal{\textbf{h}}\in\textnormal{\textbf{D}}$ is defined by
\begin{equation*}
\delta \mathcal{J}\left(y;\textnormal{\textbf{h}}\right)
:=\lim\limits_{\epsilon\rightarrow 0}\frac{\mathcal{J}\left(y
+\epsilon
\textnormal{\textbf{h}}\right)-\mathcal{J}\left(y\right)}{\epsilon}
=\left.\frac{\partial}{\partial\epsilon}\mathcal{J}\left(y
+\epsilon\textnormal{\textbf{h}}\right)\right|_{\epsilon=0},
\end{equation*}
provided the limit exists.
\end{definition}
\begin{definition}[Admissible variation]
An admissible variation at $y\in\mathcal{D}$ for $\mathcal{J}$ is a
direction $\textnormal{\textbf{h}}\in\textnormal{\textbf{D}}$,
$\textnormal{\textbf{h}}\neq 0$, such that
\begin{itemize}
\item $\delta \mathcal{J}\left(y;
\textnormal{\textbf{h}}\right)$ exists; and
\item $y+\epsilon\textnormal{\textbf{h}}
\in\mathcal{D}$ for all sufficiently small $\epsilon$.
\end{itemize}
\end{definition}
\begin{theorem}[see, \textrm{e.g.}, \cite{Troutman}]
\label{thm:Troutman} Let $\mathcal{J}$ be a functional defined on
$\mathcal{D}$. Suppose that $y$ is a local minimizer to
$\mathcal{J}$ on $\mathcal{D}$. Then $\delta \mathcal{J}\left(y;
\textnormal{\textbf{h}}\right)=0$ for each admissible variation
$\textnormal{\textbf{h}}$ at $y$.
\end{theorem}
We now state and prove the Euler--Lagrange
equations for Problem~\ref{problem:1}.
\begin{theorem}
\label{theorem:EL} If $y=\left(y_1,\ldots,y_N\right)$ is a local
minimizer to Problem~\ref{problem:1}, then $y$ satisfies the system
of $N$ Euler--Lagrange equations
\begin{equation}
\label{eq:EL}
\partial_i L \left[y\right]^{\alpha,\beta}_{\gamma}(x)
-\frac{d}{dx}\partial_{N+i}L
\left[y\right]^{\alpha,\beta}_{\gamma}(x)
+D^{\beta,\alpha}_{1-\gamma}\partial_{2N+i}L
\left[y\right]^{\alpha,\beta}_{\gamma}(x)=0,
\end{equation}
$i=2,\ldots,N+1$, for all $x\in[a,b]$.
\end{theorem}
\begin{proof}
Suppose that $y$ is a solution to Problem~\ref{problem:1} and let
$\textbf{h}$ be an arbitrary admissible variation for this problem,
\textrm{i.e.},
\begin{equation*}
h_i(a)=h_i(b)=0, \quad i=1,\ldots,N.
\end{equation*}
According to Theorem~\ref{thm:Troutman}, a necessary condition
for $y$ to be a minimizer is given by
\begin{equation*}
\frac{\partial}{\partial\epsilon}\mathcal{J}\left.\left(y
+\epsilon\textbf{h}\right)\right|_{\epsilon=0}=0,
\end{equation*}
that is,
\begin{equation}
\label{eq:1}
\begin{split}
& \int\limits_a^b\left[\sum\limits_{i=2}^{N+1}\partial_i L
\left[y\right]^{\alpha,\beta}_{\gamma}(x)h_{i-1}(x) +
\sum\limits_{i=2}^{N+1}\partial_{N+i}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\frac{d}{dx}h_{i-1}(x)\right.\\
&\quad \left.+\sum\limits_{i=2}^{N+1}\partial_{2N+i}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\left(^{C}D^{\alpha,
\beta}_{\gamma}h_{i-1}(x)\right)\right]dx=0.
\end{split}
\end{equation}
Using the integration by parts formulas for the classical derivative
and for $^{C}D^{\alpha,\beta}_{\gamma}$ (formula \eqref{eq:fracIP:C})
in the second and third terms of the integrand, we obtain
\begin{equation}
\label{eq:2}
\begin{split}
&\int\limits_a^b\sum\limits_{i=2}^{N+1}\left[\partial_i
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)
-\frac{d}{dx}\partial_{N+i} L
\left[y\right]^{\alpha,\beta}_{\gamma}(x) +
D^{\beta,\alpha}_{1-\gamma}\partial_{2N+i}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right]h_{i-1}(x)dx\\
&\quad+\left.\left[\sum\limits_{i=2}^{N+1}h_{i-1}(x)\partial_{N+i}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right]\right._{x=a}^{x=b}
+\gamma\left.\left[\sum\limits_{i=2}^{N+1}h_{i-1}(x)_{x}I^{1-\alpha}_{b}\partial_{2N+i}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right]\right._{x=a}^{x=b}\\
&
\quad-(1-\gamma)\left.\left[\sum\limits_{i=2}^{N+1}h_{i-1}(x)_{a}I^{1-\beta}_{x}\partial_{2N+i}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right]\right._{x=a}^{x=b}=0.
\end{split}
\end{equation}
Since $h_i(a)=h_i(b)=0$, $i=1,\ldots,N$, we get
\begin{equation*}
\int\limits_a^b\sum\limits_{i=2}^{N+1}\left[\partial_i L
\left[y\right]^{\alpha,\beta}_{\gamma}(x)
-\frac{d}{dx}\partial_{N+i}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x) +
D^{\beta,\alpha}_{1-\gamma}\partial_{2N+i}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right]h_{i-1}(x)dx=0.
\end{equation*}
Equalities \eqref{eq:EL} follow from the application of the
fundamental lemma of the calculus of variations
(see, \textrm{e.g.}, \cite{Troutman}).
\end{proof}
When the Lagrangian $L$ does not depend on fractional derivatives,
then Theorem~\ref{theorem:EL} reduces to the classical result
(see, \textrm{e.g.}, \cite{Troutman}). The fractional Euler--Lagrange
equations via Caputo derivatives that one can find in the literature
are also obtained as corollaries of Theorem~\ref{theorem:EL}.
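To illustrate the classical particular case, take $N=1$ and $L(x,y,y') = (y')^2 + y^2$ with $y(0)=0$, $y(1)=1$: the Euler--Lagrange equation is then $y''=y$, with solution $y(x)=\sinh x/\sinh 1$. A finite-difference sketch (all discretization choices are ours, not from the paper) recovers it:

```python
import math

# Classical particular case: L(x, y, y') = (y')^2 + y^2, y(0) = 0, y(1) = 1.
# The Euler-Lagrange equation d/dx(2y') = 2y, i.e. y'' = y, has the
# solution y(x) = sinh(x)/sinh(1).
n = 200                       # number of interior grid points (illustrative)
h = 1.0 / (n + 1)
ya, yb = 0.0, 1.0

# Central differences: -(y_{i+1} - 2 y_i + y_{i-1}) + h^2 y_i = 0.
# Tridiagonal system (sub- and super-diagonal entries -1) solved by the
# Thomas algorithm.
diag = [2.0 + h * h] * n
rhs = [0.0] * n
rhs[0] += ya
rhs[-1] += yb

c = [0.0] * n                 # forward sweep
d = [0.0] * n
c[0] = -1.0 / diag[0]
d[0] = rhs[0] / diag[0]
for i in range(1, n):
    denom = diag[i] + c[i - 1]
    c[i] = -1.0 / denom
    d[i] = (d[i - 1] + rhs[i]) / denom
y = [0.0] * n                 # back substitution
y[-1] = d[-1]
for i in range(n - 2, -1, -1):
    y[i] = d[i] - c[i] * y[i + 1]

err = max(abs(y[i] - math.sinh((i + 1) * h) / math.sinh(1.0)) for i in range(n))
print(err)
```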
The next result is obtained choosing a Lagrangian
that does not depend on the classical derivatives.
\begin{corollary}[Theorem~6 of \cite{ComCa}]
\label{Theo E-L1}
Let $\mathbf{y}=(y_1,\ldots,y_N)$ be a local
minimizer to problem
\begin{gather*}
\mathcal{J}(\mathbf{y})
=\int_a^b L\left(x,\mathbf{y}(x),{^CD^{\alpha,\beta}_{\gamma}} \mathbf{y}(x)\right)
\, dx \longrightarrow \min\\
\mathbf{y}(a)=\mathbf{y}^{a},
\quad \mathbf{y}(b)=\mathbf{y}^{b},
\end{gather*}
$\mathbf{y}^{a}$, $\mathbf{y}^{b}\in \mathbb{R}^N$.
Then, $\mathbf{y}$ satisfies the system of
$N$ fractional Euler--Lagrange equations
\begin{equation}
\label{E-L1}
\partial_i L[\mathbf{y}](x)+D^{\beta,\alpha}_{1-\gamma}
\partial_{N+i}L[\mathbf{y}](x)=0,
\end{equation}
$i=2,\ldots,N+1$, for all $x\in[a,b]$.
\end{corollary}
If one considers $\gamma = 1$ (\textrm{cf.} Remark~\ref{remarl:pc:comD})
and $N = 1$ in Corollary~\ref{Theo E-L1}, then \eqref{E-L1} reduces to
the well known Caputo fractional Euler--Lagrange equation:
if $y$ is a local minimizer to problem
\begin{gather*}
\mathcal{J}(y)=\int_a^b
L\left(x,y(x),{^C_a D_x^{\alpha}}y(x)\right) \, dx \longrightarrow \min\\
y(a) = y_a, \quad y(b) = y_b,
\end{gather*}
then $y$ satisfies the fractional Euler--Lagrange equation
\begin{equation}
\label{eq:fEL:cC}
\partial_2 L\left(x,y(x),{^C_a D_x^{\alpha}}y(x)\right)
+{_x D_b^{\alpha}}\partial_3 L\left(x,y(x),{^C_a D_x^{\alpha}} y(x)\right) = 0
\end{equation}
for all $x\in[a,b]$ (see, \textrm{e.g.}, \cite{FrTor:Caputo}).
\subsection{Transversality conditions}
Let $l\in\left\{1,\ldots,N\right\}$.
Assume now that in Problem~\ref{problem:1}
the boundary conditions \eqref{eq:BoundCond}
are substituted by
\begin{equation}
\label{eq:BoundCond2}
\textbf{y}(a)=\textbf{y}^a, \quad y_i(b)=y^b_i,
\ i=1,\ldots,N \textnormal{ for } i\neq l,
\textnormal{ and } y_l(b) \textnormal{ is free}
\end{equation}
or
\begin{equation}
\label{eq:BoundCond3}
\textbf{y}(a)=\textbf{y}^a, \quad y_i(b)=y^b_i,
\ i=1,\ldots,N \textnormal{ for }
i\neq l, \textnormal{ and } y_l(b)\leq y_l^b.
\end{equation}
\begin{theorem}
\label{theorem:nbc} If $y=\left(y_1,\ldots,y_N\right)$ is a solution
to Problem~\ref{problem:1} with either \eqref{eq:BoundCond2} or
\eqref{eq:BoundCond3} as boundary conditions instead of
\eqref{eq:BoundCond}, then $y$ satisfies the system of
Euler--Lagrange equations \eqref{eq:EL}. Moreover, under the
boundary conditions \eqref{eq:BoundCond2} the extra transversality
condition
\begin{multline}
\label{eq:tc:1stc}
\left[\partial_{N+l+1}L\left[y\right]^{\alpha,\beta}_{\gamma}(x)
+\gamma_{x}I^{1-\alpha}_{b}\partial_{2N+l+1}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right.\\
\left.-(1-\gamma)_{a}I^{1-\beta}_{x}\partial_{2N+l+1}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right]_{x=b}=0
\end{multline}
holds; under the boundary conditions \eqref{eq:BoundCond3}
the extra transversality condition
\begin{multline}
\label{eq:tc:2ndc}
\left[\partial_{N+l+1}L\left[y\right]^{\alpha,\beta}_{\gamma}(x)
+\gamma_{x}I^{1-\alpha}_{b}\partial_{2N+l+1}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right.\\
\left.-(1-\gamma)_{a}I^{1-\beta}_{x}\partial_{2N+l+1}
L\left[y\right]^{\alpha,\beta}_{\gamma}(x)\right]_{x=b} \leq 0
\end{multline}
holds, with \eqref{eq:tc:1stc} taking place if $y_l(b)<y_l^b$.
\end{theorem}
\begin{proof}
The fact that the system of Euler--Lagrange equations \eqref{eq:EL}
is satisfied is a simple consequence of the proof of Theorem~\ref{theorem:EL}
(one can always restrict to the subclass of functions
$\textnormal{\textbf{h}}\in\textnormal{\textbf{D}}$
for which $h_i(a)=h_i(b)=0$, $i=1,\ldots,N$).
Let us assume that the boundary conditions are \eqref{eq:BoundCond2}.
Condition \eqref{eq:tc:1stc} follows from \eqref{eq:1}.
Suppose now that the boundary conditions are \eqref{eq:BoundCond3}.
Then, there are two cases to consider.
(i) If $y_l(b)<y_l^b$, then there are admissible
neighboring paths with terminal value both above and below
$y_l(b)$, so that $h_l(b)$ can take either sign.
Therefore, the transversality condition is \eqref{eq:tc:1stc}.
(ii) Let $y_l(b)=y_l^b$. In this case neighboring paths
with terminal value $\tilde{y}_l\leq y_l(b)$ are considered.
Choose $h_l$ such that $h_l(b)\geq 0$. Then, $\epsilon \leq 0$
and the transversality condition, which has its root in the first order condition
\eqref{eq:2}, must be changed to an inequality. We obtain \eqref{eq:tc:2ndc}.
\end{proof}
When the Lagrangian does not depend on fractional derivatives,
the left-hand sides of \eqref{eq:tc:1stc} and \eqref{eq:tc:2ndc}
reduce to the classical expression
$\partial_{N+l+1} L\left(x,\mathbf{y}(x),\mathbf{y}'(x)\right)$
(for instance, when $N = 1$ and $y(a)$ is fixed with $y(b)$ free,
then we get the well known boundary condition
$\partial_{3} L\left(b,y(b),y'(b)\right) = 0$).
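This natural boundary condition can be seen emerging from a direct discretization: for $L=(y')^2+y^2$ with $y(0)=1$ and $y(1)$ free, the minimizer is $y(x)=\cosh(1-x)/\cosh 1$, which satisfies $y'(1)=0$. A sketch (all discretization choices are ours):

```python
import math

# Free right endpoint: minimize J(y) = int_0^1 ((y')^2 + y^2) dx with
# y(0) = 1 and y(1) free.  Euler-Lagrange: y'' = y; transversality:
# partial_3 L = 2 y'(1) = 0.  Hence y(x) = cosh(1 - x)/cosh(1).
# Setting the gradient of a discretized J to zero reproduces the
# natural boundary condition at the last grid point.
N = 200
h = 1.0 / N
ya = 1.0

# Unknowns y_1, ..., y_N (y_0 = 1 fixed, y_N = y(1) free):
#   interior:  -y_{i-1} + (2 + h^2) y_i - y_{i+1} = 0
#   endpoint:  -y_{N-1} + (1 + h^2) y_N = 0   (discrete transversality)
b = [2.0 + h * h] * (N - 1) + [1.0 + h * h]
rhs = [0.0] * N
rhs[0] = ya

c = [0.0] * N                 # Thomas algorithm: forward sweep
d = [0.0] * N
c[0] = -1.0 / b[0]
d[0] = rhs[0] / b[0]
for i in range(1, N):
    denom = b[i] + c[i - 1]
    c[i] = -1.0 / denom
    d[i] = (d[i - 1] + rhs[i]) / denom
y = [0.0] * N                 # back substitution
y[-1] = d[-1]
for i in range(N - 2, -1, -1):
    y[i] = d[i] - c[i] * y[i + 1]

exact = lambda t: math.cosh(1.0 - t) / math.cosh(1.0)
err = max(abs(y[i] - exact((i + 1) * h)) for i in range(N))
slope_at_b = (y[-1] - y[-2]) / h       # ~ 0: the transversality condition
print(err, slope_at_b)
```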
In the particular case when the Lagrangian
does not depend on the classical derivatives,
$\gamma = 1$, $N = 1$,
and we have boundary conditions \eqref{eq:BoundCond2},
then one obtains from Theorem~\ref{theorem:nbc}
the following result of \cite{Agrawal}.
\begin{corollary}[\textrm{cf.} Theorem~1 of \cite{Agrawal}]
If $y$ is a local minimizer to problem
\begin{gather*}
\mathcal{J}(y)=\int_a^b L\left(x,y(x),{^C_a D_x^{\alpha}} y(x)\right) dx
\longrightarrow \min\\
y(a) = y_a \quad (y(b) \text{ is free}),
\end{gather*}
then $y$ satisfies the fractional Euler--Lagrange equation \eqref{eq:fEL:cC}.
Moreover,
\begin{equation*}
\left[{_x I_b^{1-\alpha}}\partial_3
L(x,y(x),{^C_a D_x^{\alpha}} y(x))\right]_{x=b}=0.
\end{equation*}
\end{corollary}
\subsection{The isoperimetric problem}
\label{sub:sec:OC}
We now consider the following problem of the calculus of variations.
\begin{problem}
\label{problem:P2}
Minimize functional \eqref{eq:Funct} subject
to given boundary conditions \eqref{eq:BoundCond}
and $r$ isoperimetric constraints
\begin{equation}
\label{eq:BoundCond4} \mathcal{G}^j(y) =\int\limits_a^b
G^j\left[y\right]^{\alpha,\beta}_{\gamma}(x)dx =\xi_j, \quad
j=1,\ldots,r,
\end{equation}
where $G^j\in C^1\left([a,b]\times\mathbb{R}^{3N};\mathbb{R}\right)$
and $\xi_j\in\mathbb{R}$ for $j=1,\ldots,r$.
\end{problem}
Problems of the type of Problem~\ref{problem:P2},
where some integrals are to be given a fixed value
while another one is to be made a maximum or a minimum,
are called \emph{isoperimetric problems}.
Such variational problems have found a broad class of important applications
throughout the centuries, with numerous useful implications
in astronomy, geometry, algebra, analysis, and engineering.
For references and recent advancements on the subject,
we refer the reader to \cite{isoJMAA,isoNabla,iso:ts,specialAveiro2Basia}.
Here, in order to obtain necessary optimality conditions
for the combined fractional isoperimetric problem
(Problem~\ref{problem:P2}), we make use of the following theorem.
\begin{theorem}[see, \textrm{e.g.}, Theorem~2 of \cite{G:H} on p.~91]
\label{theorem:nec} Let
$\mathcal{J},\mathcal{G}^{1},\ldots,\mathcal{G}^r$ be functionals
defined in a neighborhood of $y$ and having continuous first
variations in this neighborhood. Suppose that $y$ is a local
minimizer to the isoperimetric problem given by \eqref{eq:Funct},
\eqref{eq:BoundCond} and \eqref{eq:BoundCond4}. Assume that there
are functions $\textnormal{\textbf{h}}^1,
\ldots,\textnormal{\textbf{h}}^r\in\textnormal{\textbf{D}}$ such
that
\begin{equation}
\label{eq:reg:cond}
A=(a_{kl}), \quad a_{kl}:=\delta\mathcal{G}^k(\textbf{y};\textbf{h}^l),
\text{ has maximal rank } r.
\end{equation}
Then there exist constants $\lambda_1,\ldots,\lambda_r \in\mathbb{R}$
such that the functional
\begin{equation*}
\mathcal{F}
:=\mathcal{J}-\sum\limits_{j=1}^{r}\lambda _{j}\mathcal{G}^{j}
\end{equation*}
satisfies
\begin{equation}
\label{eq:var}
\delta\mathcal{F}(\mathbf{y};\mathbf{h})=0
\end{equation}
for all $\mathbf{h}\in \mathbf{D}$.
\end{theorem}
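The multiplier rule above has a familiar finite-dimensional analogue, which can be checked directly. The sketch below (our own toy instance, not taken from the paper) minimizes $f(x,y)=x^2+y^2$ subject to the single constraint $g(x,y)=x+y=1$ and verifies that the gradient of $F = f - \lambda g$ vanishes at the constrained minimizer, mirroring condition \eqref{eq:var}.

```python
# Finite-dimensional analogue of the multiplier rule delta F = 0:
# minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y = 1.
# Stationarity of F = f - lam * g gives 2x = lam and 2y = lam,
# hence x = y = 1/2 and lam = 1.  (Toy example, not from the paper.)

def grad_F(x, y, lam):
    # gradient of F = f - lam * g with respect to (x, y)
    return (2.0 * x - lam, 2.0 * y - lam)

x_star, y_star, lam_star = 0.5, 0.5, 1.0
gx, gy = grad_F(x_star, y_star, lam_star)
assert abs(gx) < 1e-12 and abs(gy) < 1e-12   # delta F = 0 at the minimizer
assert abs(x_star + y_star - 1.0) < 1e-12    # constraint is satisfied
```

Any feasible competitor, e.g. $(0.6, 0.4)$, has a larger objective value, confirming that the stationary point of $F$ is indeed the constrained minimizer in this convex instance.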
\begin{theorem}
\label{theorem:isop} Let assumptions of Theorem~\ref{theorem:nec}
hold. If $y$ is a local minimizer to Problem~\ref{problem:P2}, then
$y$ satisfies the system of $N$ fractional Euler--Lagrange equations
\begin{equation}
\label{eq:EL2}
\partial_i F _{\lambda}\left\{y\right\}_{\gamma}^{\alpha,\beta}(x)
-\frac{d}{dx}\partial_{N+i} F
_{\lambda}\left\{y\right\}_{\gamma}^{\alpha,\beta}(x)
+D^{\beta,\alpha}_{1-\gamma}\partial_{2N+i} F
_{\lambda}\left\{y\right\}_{\gamma}^{\alpha,\beta}(x)=0,
\end{equation}
$i=2,\ldots,N+1$, for all $x\in[a,b]$, where function
$F:[a,b]\times\mathbb{R}^{3N}\times\mathbb{R}^r\rightarrow\mathbb{R}$
is defined by
\begin{equation*}
F_{\lambda}\left\{y\right\}_{\gamma}^{\alpha,\beta}(x)
:=L\left[y\right]^{\alpha,\beta}_{\gamma}(x)
-\sum\limits_{j=1}^r\lambda_jG^j\left[y\right]^{\alpha,\beta}_{\gamma}(x).
\end{equation*}
\end{theorem}
\begin{proof}
Under assumptions of Theorem~\ref{theorem:nec},
the equation \eqref{eq:var} is fulfilled for every
$\textnormal{\textbf{h}}\in\textbf{D}$.
Consider a function $\textbf{h}$ such that
$\textnormal{\textbf{h}}(a)=\textnormal{\textbf{h}}(b)=0$. Then,
\begin{equation*}
\begin{split}
0 &=\delta\mathcal{F}\left(y;\textnormal{\textbf{h}}\right)
=\left.\frac{\partial}{\partial\epsilon}\mathcal{F}\left(y
+\epsilon\textnormal{\textbf{h}}\right)\right|_{\epsilon=0}\\
&=\int_a^b\left[\sum\limits_{i=2}^{N+1}\partial_i\
F_{\lambda}\left\{y\right\}_{\gamma}^{\alpha,\beta}(x)h_{i-1}(x)\right.
+\sum\limits_{i=2}^{N+1}\partial_{N+i}
F_{\lambda}\left\{y\right\}_{\gamma}^{\alpha,\beta}(x)
\frac{d}{dx}h_{i-1}(x)\\
&\qquad\qquad\left.+\sum\limits_{i=2}^{N+1}\partial_{2N+i}
F_{\lambda}\left\{y\right\}_{\gamma}^{\alpha,\beta}(x)
^{C}D^{\alpha,\beta}_{\gamma}h_{i-1}(x)\right]dx.
\end{split}
\end{equation*}
Using the classical integration by parts formula together with its fractional counterpart \eqref{eq:fracIP:C},
and applying the fundamental lemma of the calculus of variations
in a similar way as in the proof of Theorem~\ref{theorem:EL},
we obtain \eqref{eq:EL2}.
\end{proof}
Suppose now that the constraints \eqref{eq:BoundCond4}
are given by the inequalities
\begin{equation*}
\mathcal{G}^j(y) =\int\limits_a^b
G^j\left[y\right]^{\alpha,\beta}_{\gamma}(x)dx \leq \xi_j, \quad
j=1,\ldots,r.
\end{equation*}
In this case we can set
\begin{equation*}
\int\limits_a^b \left(G^j\left[y\right]^{\alpha,\beta}_{\gamma}(x)
-\frac{\xi_j}{b-a}\right)dx+\int\limits_a^b\left(\phi_j(x)\right)^2dx=0,
\end{equation*}
$j=1,\ldots,r$, where $\phi_j$ have the same continuity properties
as $y_i$. Therefore, we obtain the following problem: minimize the
functional
\begin{equation*}
\hat{\mathcal{J}}(y)
=\int\limits_a^b\hat{L}\left(x,\textbf{y}(x),\textbf{y}'(x),
^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}(x),\phi(x)\right)dx,
\end{equation*}
where $\phi(x)=\left[\phi_1(x),\ldots,\phi_r(x)\right]$,
subject to $r$ isoperimetric constraints
\begin{equation*}
\int\limits_a^b \left[G^j\left[y\right]^{\alpha,\beta}_{\gamma}(x)
-\frac{\xi_j}{b-a}+\left(\phi_j(x)\right)^2\right]dx=0, \quad
j=1,\ldots,r,
\end{equation*}
and boundary conditions \eqref{eq:BoundCond}. Assuming that assumptions
of Theorem~\ref{theorem:isop} are satisfied,
we conclude that there exist constants
$\lambda_j\in\mathbb{R}$, $j=1,\ldots,r$,
for which the system of $N$ equations
\begin{equation}
\label{eq:EL3}
\begin{split}
&\partial_i\hat{F}\left(x,\textbf{y}(x),\textbf{y}'(x),
^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}(x),\lambda_1,\ldots,\lambda_r,\phi(x)\right)\\
&\quad-\frac{d}{dx}\partial_{N+i}\hat{F}\left(x,\textbf{y}(x),\textbf{y}'(x),
^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}(x),\lambda_1,\ldots,\lambda_r,\phi(x)\right)\\
&\quad+D^{\beta,\alpha}_{1-\gamma}\partial_{2N+i}
\hat{F}\left(x,\textbf{y}(x),\textbf{y}'(x),
^{C}D^{\alpha,\beta}_{\gamma}\textbf{y}(x),\lambda_1,\ldots,\lambda_r,\phi(x)\right)
=0,
=0,
\end{split}
\end{equation}
$i=2,\ldots,N+1$, where $\hat{F}=\hat{L}+\sum\limits_{j=1}^r
\lambda_j\left(G^j-\frac{\xi_j}{b-a}+\phi_j^2\right)$ and
\begin{equation}
\label{eq:6}
\lambda_j\phi_j(x)=0, \quad j=1,\ldots,r,
\end{equation}
hold for all $x\in[a,b]$. Note that it is enough to assume that the
regularity condition \eqref{eq:reg:cond} holds for the constraints
which are active at the local minimizer $\textbf{y}$. Indeed,
suppose that $l<r$ constraints, say
$\mathcal{G}^1,\ldots,\mathcal{G}^l$ for simplicity, are active at
the local minimizer $\textbf{y}$, and there are functions
$\textnormal{\textbf{h}}^1,
\ldots,\textnormal{\textbf{h}}^l\in\textnormal{\textbf{D}}$ such
that the matrix $B=(b_{kj})$, $b_{kj}
:=\delta\mathcal{G}^k\left(y;\textnormal{\textbf{h}}^j\right)$,
$k,j=1,\dots,l$, has maximal rank $l$. Since the inequality
constraints $\mathcal{G}^{l+1},\ldots,\mathcal{G}^r$ are inactive,
the condition \eqref{eq:6} is trivially satisfied by taking
$\lambda_{l+1}=\ldots=\lambda_r=0$. On the other hand, since the
inequality constraints $\mathcal{G}^1,\ldots,\mathcal{G}^l$ are
active and satisfy the regularity condition \eqref{eq:reg:cond} at
$\textbf{y}$, the conclusion that there exist constants
$\lambda_j\in\mathbb{R}$, $j=1,\ldots,r$, such that \eqref{eq:EL3}
holds follows from Theorem~\ref{theorem:isop}. Moreover,
\eqref{eq:6} is trivially satisfied for $j=1,\ldots,l$.
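The squared-slack device used above turns each inequality constraint into an equality. A scalar finite-dimensional sketch (with hypothetical numbers of our own choosing, not from the paper) illustrates the mechanism: minimizing $(x-2)^2$ subject to $x\le 1$ is rewritten via a slack variable $\phi$ as the equality $x-1+\phi^2=0$; eliminating $x=1-\phi^2$ and minimizing over the now unconstrained $\phi$ recovers the constrained minimizer $x^\star=1$ with $\phi=0$, consistent with the complementarity condition $\lambda\phi=0$.

```python
# Squared-slack reformulation of an inequality constraint:
# minimize (x - 2)^2 subject to x <= 1 is rewritten with a slack
# variable phi as the equality  x - 1 + phi^2 = 0, i.e. x = 1 - phi^2.
# (Toy scalar instance; the paper applies the same idea pointwise.)

def objective(phi):
    x = 1.0 - phi * phi          # eliminate x via the slack equality
    return (x - 2.0) ** 2

# crude scan over the free slack variable
grid = [i / 1000.0 for i in range(-2000, 2001)]
phi_best = min(grid, key=objective)
x_best = 1.0 - phi_best ** 2

assert abs(phi_best) < 1e-9      # slack vanishes: constraint is active
assert abs(x_best - 1.0) < 1e-9  # constrained minimizer x* = 1
```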
\section{An Illustrative Example}
\label{sec:ex}
Let $\alpha\in\left(0,1\right)$, $N = 1$, $\gamma=1$,
and $\xi\in\mathbb{R}$. Consider the following
fractional isoperimetric problem:
\begin{equation}
\label{eq:ex}
\begin{gathered}
\mathcal{J}(y)=\int_0^1\left(y'(x)
+ \, {^C_{0}\textsl{D}_x^{\alpha}} y(x)\right)^2dx \longrightarrow \min\\
\mathcal{G}(y)=\int_0^1\left(y'(x)
+ \, {^C_{0}\textsl{D}_x^{\alpha}} y(x)\right)dx = \xi\\
y(0)=0\, , \quad
y(1)=\int_0^1 E_{1-\alpha}\left(-\left(1-t\right)^{1-\alpha}\right)\xi\, dt.
\end{gathered}
\end{equation}
In this problem we make use of the Mittag-Leffler function
$E_{\alpha}(z)$. We recall that the Mittag-Leffler function is
defined by
\begin{equation*}
E_{\alpha}(z) =\sum_{k=0}^\infty\frac{z^k}{\Gamma(\alpha k+1)}\, .
\end{equation*}
This function appears naturally in the solution
of fractional differential equations,
as a generalization of the exponential function
\cite{CapelasOliveira}.
Indeed, while a linear second-order
ordinary differential equation
with constant coefficients has exponential functions as solutions,
in the fractional case Mittag--Leffler functions emerge \cite{Kilbas}.
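The defining series of $E_\alpha(z)$ is straightforward to evaluate numerically. The sketch below (a truncated-series implementation with a truncation level of our own choosing, adequate for moderate $|z|$ and not a library-grade routine) checks it against two classical special cases, $E_1(z)=\mathrm{e}^z$ and $E_2(z^2)=\cosh z$.

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """Truncated series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).

    Numerical sketch: 80 terms is a pragmatic choice that is ample
    for moderate |z|; the function is entire for alpha > 0.
    """
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# Sanity checks against known special cases:
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12          # E_1(z) = e^z
assert abs(mittag_leffler(2.0, 1.0) - math.cosh(1.0)) < 1e-12  # E_2(z^2) = cosh z
```

A further identity used in the example of this section, $E_{1/2}(-\sqrt{s}) = \mathrm{e}^{s}\,\mathrm{erfc}(\sqrt{s})$, can be verified the same way, e.g. `mittag_leffler(0.5, -1.0)` against `math.exp(1.0) * math.erfc(1.0)`.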
In our example \eqref{eq:ex} the function $F$
of Theorem~\ref{theorem:isop} is given by
$$
F(x,y,y', {^C_{0}\textsl{D}_x^{\alpha}} y,\lambda)
=\left(y'+{^C_{0}\textsl{D}_x^{\alpha}} y\right)^2 -\lambda \left(y'
+ {^C_{0}\textsl{D}_x^{\alpha}} y\right).
$$
One can easily check that the function $y$ defined by
\begin{equation}
\label{eq:y:ex} y(x)=\int_0^x
E_{1-\alpha}\left(-\left(x-t\right)^{1-\alpha}\right) \xi dt
\end{equation}
\begin{itemize}
\item is not an extremal for $\mathcal{G}$;
\item satisfies $y'+ {^C_{0}\textsl{D}_x^{\alpha}}y= \xi$
(see, \textrm{e.g.}, \cite[p.~328, Example~5.24]{Kilbas}).
\end{itemize}
Moreover, \eqref{eq:y:ex} satisfies the Euler--Lagrange equations
\eqref{eq:EL2} for $\lambda=2\xi$, \textrm{i.e.},
$$
-\frac{d}{dx}\left(2\left(y' + {^C_{0}\textsl{D}_x^{\alpha}}
y\right) -2\xi\right) + {_{x}\textsl{D}_1^{\alpha}}\left(2\left(y'+
{^C_{0}\textsl{D}_x^{\alpha}} y\right) -2\xi\right)=0.
$$
We conclude that \eqref{eq:y:ex} is an extremal for the
isoperimetric problem \eqref{eq:ex}.
\begin{remark}
When $\alpha \rightarrow 1$ the isoperimetric constraint is
redundant with the boundary conditions, and the fractional
isoperimetric problem \eqref{eq:ex} simplifies to the classical
variational problem
\begin{equation}
\label{eq:ex:alpha1}
\begin{gathered}
\mathcal{J}(y)
=4\int_0^1 (y'(x))^2 dx \longrightarrow \min\\
y(0)=0 \, , \quad y(1) = \frac{\xi}{2}.
\end{gathered}
\end{equation}
Our fractional extremal \eqref{eq:y:ex} gives $y(x)=\frac{\xi}{2}x$,
which is exactly the minimizer of
\eqref{eq:ex:alpha1}.
\end{remark}
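The classical limit problem \eqref{eq:ex:alpha1} can be checked numerically by discretization. The sketch below (grid size and the particular admissible perturbation are our own illustrative choices) approximates $\mathcal{J}(y)=4\int_0^1 (y'(x))^2 dx$ by forward differences and confirms that the straight line $y(x)=\frac{\xi}{2}x$ is not improved by a perturbation $h$ with $h(0)=h(1)=0$.

```python
import math

# Discretized check of the classical limit (alpha -> 1): for
# J(y) = 4 * int_0^1 (y')^2 dx with y(0) = 0 and y(1) = xi/2, the
# straight line y(x) = (xi/2) x should not be improved by an
# admissible perturbation h with h(0) = h(1) = 0.
# (Grid size and the perturbation are our own illustrative choices.)

xi, n = 3.0, 1000
step = 1.0 / n
xs = [i * step for i in range(n + 1)]

def J(y):
    # forward-difference approximation of 4 * int (y')^2 dx
    return 4.0 * sum(((y[i + 1] - y[i]) / step) ** 2 for i in range(n)) * step

y_line = [0.5 * xi * x for x in xs]
y_pert = [0.5 * xi * x + 0.1 * math.sin(math.pi * x) for x in xs]

assert abs(J(y_line) - xi ** 2) < 1e-9   # J(y*) = xi^2 for the line
assert J(y_line) < J(y_pert)             # any such perturbation increases J
```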
\begin{remark}
Choose $\xi = 1$. When $\alpha \rightarrow 0$
one gets from \eqref{eq:ex} the classical isoperimetric problem
\begin{equation}
\label{eq:ex:alpha0}
\begin{gathered}
\mathcal{J}(y) =\int_0^1\left(y'(x)
+ y(x)\right)^2 dx \longrightarrow \min\\
\mathcal{G}(y)
=\int_0^1 y(x) dx = \frac{1}{e}\\
y(0)=0 \, , \quad y(1)= 1-\frac{1}{\mathrm{e}}.
\end{gathered}
\end{equation}
Our extremal \eqref{eq:y:ex} is then reduced to the classical
extremal $y(x)=1 - \mathrm{e}^{-x}$ of the isoperimetric problem
\eqref{eq:ex:alpha0}.
\end{remark}
\begin{remark}
Let $\alpha=\frac{1}{2}$.
Then \eqref{eq:ex} gives the following
fractional isoperimetric problem:
\begin{equation}
\label{eq:ex:alpha=1/2}
\begin{gathered}
\mathcal{J}(y)=\int_0^1\left(y'(x)
+ {^C_{0}\textsl{D}_x^{\frac{1}{2}}} y(x)\right)^2 dx\longrightarrow \min\\
\mathcal{G}(y)=\int_0^1\left(y'(x)
+ {^C_{0}\textsl{D}_x^{\frac{1}{2}}} y(x)\right)dx=\xi \\
y(0) =0\, , \quad y(1) = \xi\left(
\mathrm{erfc}(1)\mathrm{e}+\frac{2}{\sqrt{\pi}}-1\right),
\end{gathered}
\end{equation}
where $\mathrm{erfc}$ is the complementary error function defined by
\begin{equation*}
\mathrm{erfc}(z)=\frac{2}{\sqrt{\pi}}\int_{z}^{\infty}\mathrm{e}^{-t^2}dt.
\end{equation*}
The extremal \eqref{eq:y:ex} for the particular fractional
isoperimetric problem \eqref{eq:ex:alpha=1/2} is
$$
y(x)=\xi\left(\mathrm{e}^x \mathrm{erfc}(\sqrt{x})
+\frac{2\sqrt{x}}{\sqrt{\pi}}-1\right).
$$
\end{remark}
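This closed form can be verified numerically. Using the identity $E_{1/2}(-\sqrt{s}) = \mathrm{e}^{s}\,\mathrm{erfc}(\sqrt{s})$, the integral form \eqref{eq:y:ex} of the extremal becomes $\xi\int_0^x \mathrm{e}^{u}\,\mathrm{erfc}(\sqrt{u})\,du$, which should agree with the erfc expression above. The sketch below checks this at $x=1$ (the midpoint rule and grid size are our own illustrative choices).

```python
import math

# Numerical check of the alpha = 1/2 extremal: the Mittag-Leffler
# identity E_{1/2}(-sqrt(s)) = exp(s) * erfc(sqrt(s)) turns the
# integral form of y into  xi * int_0^x exp(u) erfc(sqrt(u)) du,
# which should equal xi * (e^x erfc(sqrt(x)) + 2 sqrt(x)/sqrt(pi) - 1).
# (Midpoint rule and grid size are our own illustrative choices.)

xi, x_end, n = 1.0, 1.0, 4000
h = x_end / n

# midpoint rule on the smooth integrand exp(u) * erfc(sqrt(u))
num = xi * h * sum(math.exp(u) * math.erfc(math.sqrt(u))
                   for u in ((i + 0.5) * h for i in range(n)))

closed = xi * (math.exp(x_end) * math.erfc(math.sqrt(x_end))
               + 2.0 * math.sqrt(x_end) / math.sqrt(math.pi) - 1.0)

assert abs(num - closed) < 1e-4
```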
\section*{Acknowledgements}
This work was partially supported by the
\emph{Portuguese Foundation for Science and Technology} (FCT)
through the \emph{Center for Research and Development in Mathematics and Applications} (CIDMA).
TO is also supported by FCT through the Ph.D. fellowship SFRH/BD/33865/2009;
ABM by BUT grant S/WI/1/08; and DFMT through the project UTAustin/MAT/0057/2008.
arXiv:1101.2932 (math.OC, submitted 2011-01-18)
https://arxiv.org/abs/1101.2932
Title: Fractional Variational Calculus with Classical and Combined Caputo Derivatives
Abstract: We give a proper fractional extension of the classical calculus of variations by considering variational functionals with a Lagrangian depending on a combined Caputo fractional derivative and the classical derivative. Euler-Lagrange equations to the basic and isoperimetric problems are proved, as well as transversality conditions.
arXiv:1407.4477
https://arxiv.org/abs/1407.4477
Title: Convex separable problems with linear and box constraints in signal processing and communications
Abstract: In this work, we focus on separable convex optimization problems with box constraints and a set of triangular linear constraints. The solution is given in closed-form as a function of some Lagrange multipliers that can be computed through an iterative procedure in a finite number of steps. Graphical interpretations are given casting valuable insights into the proposed algorithm and allowing to retain some of the intuition spelled out by the water-filling policy. It turns out that it is not only general enough to compute the solution to different instances of the problem at hand but also remarkably simple in the way it operates. We also show how some power allocation problems in signal processing and communications can be solved with the proposed algorithm.
\section{Introduction}
\PARstart{C}{onsider} the following problem:
\begin{align}\label{1}
(\mathcal P): \quad \underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=1}^N f_n(x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{j} x_n\le \rho_j \quad j=1,2,\ldots,N\\\nonumber
& l_n \le x_n \le u_n \quad n=1,2,\ldots,N
\end{align}
where $\{x_n\}$ are the optimization variables, the coefficients $\{\rho_j\}$ are real-valued parameters, and the constraints $ l_n \le x_n \le u_n$ are called variable bounds or box constraints with $-\infty \le l_n < u_n \le +\infty$. The functions $f_{n}$ are real-valued, continuous and strictly convex in $[l_n,u_{n}]$, and continuously differentiable in $(l_n,u_{n})$. If $f_{n}$ is not defined in $l_n$ and/or in $u_n$, then it is extended by continuity as $f_n(l_n)=\mathop {\lim }\nolimits_{x_n \rightarrow l_n^+} f_n(x_n)$ and $f_n(u_n)=\mathop {\lim }\nolimits_{x_n \rightarrow u_n^-} f_n(x_n)$. {Possible extensions of $(\mathcal P)$ will be discussed in Section II.C.}
\subsection{Motivation and contributions}
Constrained optimization problems of the form \eqref{1} arise in connection with a wide range of power allocation problems in different applications and settings in signal processing and communications. For example, they arise in connection with the design of multiple-input multiple-output (MIMO) systems dealing with the minimization of the power consumption while meeting the quality-of-service (QoS) requirements over each data stream (see for example \cite{PalomarQoS2004,PalomarAug2005, Palomar2005aa, Jiang2006,Bergman2009} for point-to-point communications and \cite{Fu2011,Shaolei2010, Sanguinetti2012,An2013, Sanguinetti2013} for amplify-and-forward relay networks). A survey of some of these problems for point-to-point MIMO communications can be found in \cite{Palomar2005}. It also appears in the design of optimal training sequences for channel estimation in multi-hop transmissions using decode-and-forward protocols \cite{Gao2008} and in the optimal power allocation for the maximization of the instantaneous received signal-to-noise ratio in amplify-and-forward multi-hop transmissions under short-term power constraints \cite{Farhadi09}.
Other instances of \eqref{1} are shown to be the rate-constrained power minimization problem over a code division multiple-access channel with correlated noise \cite{Padakandla2009} and the power allocation problem in amplify-and-forward relaying scheme for multiuser cooperative networks under frequency-selective block-fading \cite{Pham2010}. {Formulations as in \eqref{1} arise also in wireless communications with energy harvesting constraints. For example, they appear in \cite{Ozel2011} wherein the authors look for the optimal energy management scheme that maximizes the throughput in a point-to-point link with an energy harvesting transmitter operating over a fading channel. They can also be found in the design of the precoding strategy that maximizes the mutual information along independent channel accesses under non-causal knowledge of the channel state and harvested energy \cite{Gregori2013}.}
{Clearly, the optimization problem in \eqref{1} can always be solved using standard convex solvers. Although possible, this in general does not provide any insights into its solution and does not exploit the particular structure of the problem itself. In this respect, all the aforementioned works go a step further and provide ad-hoc algorithms for specific instances of \eqref{1} in the attempt of giving some intuition on the solutions. However, this is achieved at the price of a loss of generality in the sense that most of them can only be used for the specific problem at hand.} On the contrary, the main contribution of this work is to develop a general framework that allows one to compute the solution (and its structure) for any problem in the form of \eqref{1}. In other words, whenever a problem can be put in the form of \eqref{1}, then its solution can be efficiently obtained by particularizing the proposed algorithm to the problem at hand without the need of developing specific solutions.
\vspace{-0.2cm}
\subsection{Related work}
The main related literature to this paper is represented by \cite{Padakandla2007} and \cite{Wang2012} in which the authors focus on solving problems of the form:
\begin{align}\label{1.10}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=1}^N f_n(x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{j} x_n\le \sum\limits_{n=1}^j\alpha_n \quad j=1,2,\ldots,N\\\nonumber
& 0 \le x_n \le u_n \quad n=1,2,\ldots,N
\end{align}
with $\alpha_n \ge 0$ for any $n$. The above problems are known as separable convex optimization problems with linear \emph{ascending inequality} constraints and box constraints. In particular, in \cite{Padakandla2007} the authors propose a dual method to numerically evaluate the solution of the above problem in no more than $N-1$ iterations under an ordering condition on the slopes of the functions at the origin. An alternative solution improving the worst case complexity of \cite{Padakandla2007} is illustrated in \cite{Wang2012}. {Differently from \cite{Padakandla2007} and \cite{Wang2012}, we consider more general problems in which the inequality constraints are \emph{not necessarily} in ascending order since the box constraint values $l_n$ and $u_n$ may possibly be equal to $-\infty$ and $+\infty$, respectively. All this makes \eqref{1} more general than problems of the form given in \eqref{1.10}. Observe, however, that if the lower bounds $l_n$ are all finite, then problem \eqref{1} boils down to \eqref{1.10} (as it can be easily shown using simple mathematical arguments). Compared to \cite{Padakandla2007} and \cite{Wang2012}, however, we also follow a different approach that allows us (simply exploiting the inherent structure of \eqref{1}) to focus only on functions $f_n$ that are continuous, strictly convex and monotonically decreasing in the intervals $[l_n, u_n]$. Furthermore, differently from \cite{Padakandla2007} we do not impose any constraints on the slopes of $f_n$.}
It is also worth mentioning that at the time of submission we became aware (through a private correspondence with the authors) of \cite{NCC2014} in which the problem originally solved in \cite{Padakandla2007} has been revisited in light of the theory of polymatroids. In particular, in \cite{NCC2014} the authors have removed some of the restrictions on functions $f_n$ that were present in \cite{Padakandla2007}. This allows them to come up with a solution similar to the one we propose in this work.
\subsection{Organization}
The remainder of the paper is structured as follows. Some preliminary results are discussed in the next section {together with some possible extensions of the problem at hand}. Section \ref{main_result} provides the main result of the paper: an algorithm to evaluate the solution to ($\mathcal{P}$). Section \ref{graphical_interpretation} presents some graphical interpretations of the way the proposed solution operates. This leads to an interesting water-filling inspired policy. {Section \ref{examples} shows how some power allocation problems of practical interest in signal processing and communications can be solved with the proposed algorithm.} Finally, some conclusions are drawn in Section \ref{conclusions}.
\section{Preliminary results and discussions}
Some preliminary results are discussed in the sequel. In particular, we first study the feasibility (admissibility) of \eqref{1} and then we show that the optimization in \eqref{1} reduces to solve an equivalent problem in which all the functions $f_n$ are continuous, strictly convex and monotonically decreasing in the intervals $[l_n,u_n]$. {In addition, we also discuss some possible extensions of \eqref{1}.}
\vspace{-0.3cm}
\subsection{Feasibility }
The feasibility of \eqref{1} simply amounts to verifying that for given values of $\{l_n\}$, $\{u_n\}$ and $\{\rho_n\}$, the \textit{feasible set} (or \textit{constraint set}) is not empty \cite{BoydBook}.
A necessary and sufficient condition for \eqref{1} to be feasible is provided in the following proposition.
\begin{proposition}
The solution to \eqref{1} exists if and only if
\begin{align}\label{feasibility}
\sum \limits_{n=1}^{j} l_n\le \rho_j \quad j=1,2,\ldots,N.
\end{align}
\end{proposition}
\begin{proof}
The proof easily follows from $(\mathcal{P})$ {since the point $(l_1,l_2,\ldots,l_N)$ is feasible.}
\end{proof}
In all subsequent discussions, we assume that \eqref{feasibility} is satisfied and denote $x^{\star}_n$, for $n=1,2,\ldots,N$, the solutions of \eqref{1}.
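Condition \eqref{feasibility} is a direct prefix-sum test and is trivial to implement. The snippet below is a direct transcription (variable names are our own):

```python
from itertools import accumulate

def is_feasible(l, rho):
    """Feasibility condition: sum_{n<=j} l_n <= rho_j for all j.

    Direct transcription of the proposition; l and rho are the
    sequences (l_1, ..., l_N) and (rho_1, ..., rho_N).
    """
    return all(s <= r for s, r in zip(accumulate(l), rho))

assert is_feasible([0.0, 0.0, 0.0], [1.0, 2.5, 3.0])   # (l_1, ..., l_N) is feasible
assert not is_feasible([1.0, 1.0], [1.5, 1.5])         # l_1 + l_2 > rho_2
```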
\vspace{-0.2cm}
\subsection{Monotonic properties of $f_n$}
Observe that since $f_n$ is by definition strictly convex in $[l_n,u_{n}]$ and continuously differentiable in $(l_n,u_{n})$, then the three following cases may occur.
\vspace{0.1cm}
{\bf{a}}) The function $f_n$ is monotonically increasing in $[l_n,u_{n}]$ or, equivalently, $f_n'(x_n)>0$ for any $x_n \in (l_n,u_{n})$.
\vspace{0.1cm}
{\bf{b}}) There exists a point $z_n$ in $(l_n,u_n)$ such that $f_n^{'} (z_n) = 0 $ with $f_n'(x_n)<0$ and $f_n'(x_n)>0$ for any $x_n$ in $(l_n, z_n)$ and $(z_n, u_n)$, respectively.
\vspace{0.1cm}
{\bf{c}}) The function $f_n$ is monotonically decreasing in $[l_n,u_{n}]$ or, equivalently, $f_n'(x_n)<0$ for any $x_n \in (l_n,u_{n})$.
\begin{lemma} \label{LemmaA}
If $f_n$ is monotonically increasing in $[l_n,u_{n}]$ and $l_n \ne - \infty$, then $x_n^\star$ is given by
\begin{align}\label{solA}
x_n^\star = l_n.
\end{align}
\end{lemma}
\vspace{-0.15cm}
\begin{proof} The proof is given in Appendix A. \end{proof}
The above result can be used to find an equivalent form of \eqref{1}. Denote by $\mathcal A \subseteq \{1,2,\ldots,N\}$ the set of indices $n$ in \eqref{1} for which case {\bf{a}}) holds true and assume (without loss of generality)
that $\mathcal A=\{1,2,\ldots,|\mathcal A|\}$.
Using the results of Lemma 1, it follows that $x_n^\star = l_n$ for any $n \in \mathcal A$ while the computation of the remaining variables with indices $n \notin \mathcal A$ requires to solve the following reduced problem:
\begin{align}\label{caseA}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=|\mathcal A|+1}^{N} f_n(x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=|\mathcal A|+1}^{j} x_n\le \rho_j^{\prime} \quad j=|\mathcal A|+1,\ldots,N\\\nonumber
& l_n \le x_n \le u_n \quad n=|\mathcal A|+1,\ldots,N
\end{align}
with
\begin{align}
\rho_j^{\prime} = \rho_j-\sum\limits_{n=1}^{|\mathcal A|}l_n
\end{align}
for $ j = |\mathcal A|+1,\ldots,N$\footnote{Notice that in order for problem in \eqref{caseA} and thus for the original problem in \eqref{1} to be well-defined it must be $l_n \ne - \infty$ $\forall n \in \mathcal A$.}. The above optimization problem is exactly in the same form of \eqref{1} except for the fact that all its functions $f_n$ fall into cases {\bf{b}}) or {\bf{c}}). To proceed further, we make use of the following result.
\begin{lemma} \label{LemmaB}
If there exists a point $z_n$ in $(l_n,u_n)$ such that $f_n^{'} (z_n) = 0 $ with $f_n'(x_n)<0\; \forall x_n \in (l_n, z_n)$ and $f_n'(x_n)>0\; \forall x_n \in (z_n, u_n)$, then it is always
\begin{align}\label{solB}
l_n \le x_n^\star \le z_n.
\end{align}
\end{lemma}
\begin{proof}
The proof is given in Appendix A.
\end{proof}
Using the above result, it follows that solving \eqref{caseA} amounts to looking for the solution of the following equivalent problem:
{\begin{align}\label{caseB}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=|\mathcal A|+1}^{N} f_n(x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=|\mathcal A|+1}^{j} x_n\le \rho_j^{\prime} \quad j=|\mathcal A|+1,\ldots,N\\\nonumber
& l_n \le x_n \le u^\prime_n \quad n=|\mathcal A|+1,\ldots,N
\end{align}}
where
\begin{align}\label{}
u^\prime_n &= z_n \quad \text{if} \; n\; \in \; \mathcal{B} \\
u^\prime_n & = u_n \quad \text{otherwise}
\end{align}
with $\mathcal B$ being the set of indices $n$ in \eqref{caseA} for which case {\bf{b}}) holds true. The above problem is in the same form as \eqref{1} with the only difference that all functions $f_n$ are monotonically decreasing in $(l_n,u'_{n})$ and thus fall into case {\bf{c}}).
The results of Lemmas \ref{LemmaA} and \ref{LemmaB} can be summarized as follows. Once
the optimal values of the variables associated with functions $f_n$ that are monotonically increasing have been trivially computed through \eqref{solA}, it remains to solve the optimization problem \eqref{caseA} in which the functions $f_n$ belong to either case {\bf{b}}) or {\bf{c}}). In turn, problem \eqref{caseA} is equivalent to problem \eqref{caseB} with only class {\bf{c}}) functions. This means that we can {simply} consider optimization problems of the form in \eqref{1} in which all functions $f_n$ fall into case {\bf{c}}). Accordingly, in the following we assume that \eqref{feasibility} is satisfied and only focus on functions $f_n$ that are continuous, strictly convex and monotonically decreasing in the intervals $[l_n,u_n]$. {For notational simplicity, however, in all subsequent derivations we maintain the notation given in \eqref{1}, though we assume that the results of Lemmas 1 and 2 have been already applied.}
\vspace{-0.4cm}
\subsection{Possible extensions}
{An equivalent form of $(\mathcal P)$, which is sometimes encountered in literature, is given by:
\begin{align}\label{P_2}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=1}^N f_n(x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{j} x_n\ge \rho_j \quad j=1,2,\ldots,N\\\nonumber
& l_n \le x_n \le u_n \quad n=1,2,\ldots,N. \nonumber
\end{align}
{The above problem can be rewritten in the same form as in \eqref{1} simply replacing $x_n$ with $y_n = - x_n$ in \eqref{P_2}. In doing this, we obtain
\begin{align}
\underset{\{y_n\}}{\min} \quad &
\sum\limits_{n=1}^N f_n(- y_n)\\ \nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{j} y_n\le - \rho_j \quad j=1,2,\ldots,N\\ \nonumber
& -u_n \le y_n \le -l_n\quad n=1,2,\ldots,N
\end{align}
which is exactly of the same form as $(\mathcal P)$.}}
{Consider also the following problem
\begin{align}\label{P_3}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=1}^N f_n(x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{j} g_n(x_n)\le \rho_j \quad j=1,2,\ldots,N\\\nonumber
& l_n \le x_n \le u_n \quad n=1,2,\ldots,N \nonumber
\end{align}
in which $g_n$ is a {continuous and} strictly increasing function. Setting $y_n=g_n(x_n)$ yields
\begin{align}\label{P_3.1}
\underset{\{y_n\}}{\min} \quad &
\sum\limits_{n=1}^N p_n(y_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{j} y_n\le \rho_j \quad j=1,2,\ldots,N\\\nonumber
& l_n^\prime \le {y_n} \le u_n^\prime \quad n=1,2,\ldots,N. \nonumber
\end{align}
where $p_n = f_n \circ g_n^{-1}$, $l_n^\prime = g_n(l_n)$ and $u_n^\prime = g_n(u_n)$ with $l_n^\prime < u_n^\prime$ since $g_n$ is strictly increasing. Clearly, \eqref{P_3.1} is of the same form as $(\mathcal P)$ in \eqref{1} provided that $p_n$
is continuous and strictly convex in $[l_n^\prime ,u^\prime_n]$, and continuously differentiable in ($l_n^\prime ,u^\prime_n$). This happens for example when: $i$) $f_n$ is a strictly convex decreasing function and
$g_n^{-1}$ is a concave function (or, equivalently, $g_n$ is a convex function); $ii$) $f_n$ is a strictly convex increasing function and $g_n^{-1}$ is a convex function (or, equivalently, $g_n$ is a concave function).}
{Similar arguments can be used when $g_n$ in \eqref{P_3} is a strictly decreasing function. This means that the results of this work can also be applied to the case in which the constraints have the following form:
\begin{align}\label{constraint_g_n}
\sum\limits_{n=1}^j g_n(x_n) \le \rho_j
\end{align}
with $g_n$ being continuously differentiable and invertible in $[l_n ,u_n]$.}
\vspace{-0.2cm}
\section{The main result}\label{main_result}
This section proposes an iterative algorithm that computes the solutions $x_n^\star$ for $n=1,2,\ldots,N$ in a finite number of steps $L<N$.
We begin by {denoting}
\begin{align}\label{h_n}
h_n(x_n) = -f_n^{'}(x_n)
\end{align}
which is a positive and strictly decreasing function since $f_n$ is by definition monotonically decreasing, strictly convex in $[l_n,u_{n}]$ and continuously differentiable in $(l_n,u_{n})$. {We take $h_n(l_n)=\mathop {\lim }\nolimits_{x_n \rightarrow l_n^+} h_n(x_n)$ and $h_n(u_n)=\mathop {\lim }\nolimits_{x_n \rightarrow u_n^-} h_n(x_n)$.} We also define the functions $\xi_n(\varsigma)$ for $n=1,2,\ldots,N$ as follows
\begin{align}\label{xi_n}
\xi_n(\varsigma) = \left\{ {\begin{array}{*{20}c}
{u_n } & {0 \le \varsigma < h_n (u_n )} \\ \\
{h_n^{-1}(\varsigma)} & { h_n (u_n ) \le \varsigma < h_n(l_n )}\\ \\
{l_n} & {h_n(l_n ) \le \varsigma} \\
\end{array}} \right.
\end{align}
where $0 \le \varsigma < +\infty$ and ${h_n^{-1}} $ denotes the inverse function of ${h_n}$ within the interval $[l_n,u_n]$. Since $h_n$ is a continuous and strictly decreasing function, then ${h_n^{-1}} $ is continuous and strictly decreasing {whereas $\xi_n$ is continuous and non-increasing}. Functions $\xi_n(\varsigma) $ in \eqref{xi_n} can be easily rewritten
in the following compact form:
\begin{align}\label{xi_n.1}
\xi_n(\varsigma) = \min\left\{\max\left\{h_n^{-1}(\varsigma), l_n\right\}, u_n\right\}
\end{align}
from which it is seen that each $\xi_n(\varsigma)$ projects $h_n^{-1}(\varsigma)$ onto the interval $[l_n,u_n]$.
\begin{theorem}
The solutions of $(\mathcal P)$ are given by
\begin{align}\label{x_n^star}
x_n^\star= \xi_n(\sigma_n^\star)
\end{align}
where the quantities {$\sigma_n^\star \ge 0$} for $n=1,2,\ldots,N$ are some Lagrange multipliers satisfying the following conditions{\footnote{We use $0\le x\, \bot \, y \le0$ to denote $0\le x$, $y \le 0$ and $xy = 0$.}}:
{\begin{equation}\label{KKT2eq_1}
0 \le (\sigma_n^\star-\sigma_{n+1}^\star) \; \bot\; \Big(\sum\limits_{j=1}^{n}x_j^\star-\rho_n\Big) \le 0 \quad
\end{equation}}
with $\sigma_{N+1}^\star = 0$.
\end{theorem}
\begin{proof} The proof is given in Appendix B. \end{proof}
From \eqref{xi_n.1} and \eqref{x_n^star}, it easily follows that $x_n^\star$ can be compactly represented as
\begin{equation}\label{x_n^star_2}
x_n^\star= \min\left\{\max\left\{h_n^{-1}(\sigma_n^\star), l_n\right\}, u_n\right\}.
\end{equation}
\begin{lemma}
The Lagrange multipliers $\sigma_n^\star$ satisfying \eqref{KKT2eq_1} can be computed by means of the iterative procedure illustrated in {\bf{Algorithm 1}}.\end{lemma}
\begin{proof} The proof is given in Appendix C. \end{proof}
{\begin{algorithm}[t]
\caption{Iterative procedure for solving $(\mathcal P)$ in \eqref{1}.}
\begin{enumerate}
\item Set $j=0$ and $\gamma_n =\rho_n$ for every $n$.
\item {{\bf{While}}} $j < N$
\begin{enumerate}
\item Set $\mathcal{N}_{j}=\{j+1, \ldots, N\}$
\item For every $n \in \mathcal{N}_{j}$:
\begin{enumerate}
\item If $\gamma_n < \sum\nolimits_{i=j+1}^n u_i$ then compute $\varsigma_n^\star$ as the solution of
\begin{align}\label{100.10}
c_n(\varsigma)=\sum\limits_{i=j+1}^n \xi_i(\varsigma) = \gamma_n\end{align}
for $\varsigma$.
\item If $\gamma_n \ge \sum\nolimits_{i=j+1}^n u_i$ then set
\begin{align}\label{100.101}
\varsigma_n^\star=0.
\end{align}
\end{enumerate}
\item Evaluate
\begin{align}\label{101.10}
\mu^\star= \underset{n\in \mathcal{N}_{j}}{\max} \quad \varsigma_n^\star
\end{align}
and
\begin{align}\label{102.10}
k^\star = \underset{n\in \mathcal{N}_{j}}{ \max} \left\{n | \varsigma_n^\star=\mu^\star\right\}.
\end{align}
\item Set $\sigma_n^\star\leftarrow \mu^\star$ for $n=j+1, \ldots,$ $ k^\star$.\\
\item Use $\sigma_n^\star$ in \eqref{x_n^star} to obtain $x_n^\star$ for $n=j+1, \ldots,$ $ k^\star$. \\
\item Set $\gamma_n\leftarrow\gamma_n- \gamma_{k^\star}$
for $n=k^\star+1, \ldots, N$.\\
\item Set $j\leftarrow k^\star$.
\end{enumerate}
\end{enumerate}
\end{algorithm}}
As seen, Algorithm 1 proceeds as follows {(see also Section IV for a more intuitive graphical illustration)}. At the first iteration it sets $j=0$ and $\gamma_n=\rho_n$, $\forall n$, and for those values of $n \in \{1,2,\ldots,N\}$ such that
\begin{align}\label{c_n_1_10}
\gamma_n < \sum\limits_{i=1}^n u_i
\end{align}
it computes the unique solution $\varsigma_n^\star$ (see Appendix D for a detailed proof of the existence and uniqueness of $\varsigma_n^\star$) of the following equation
\begin{align}\label{c_n_1_2}
c_n(\varsigma) = \sum\limits_{i=1}^n \xi_i(\varsigma) = \gamma_n.
\end{align}
On the other hand, for those values of $n\in \{1,2,\ldots,N\}$ such that
\begin{align}\label{c_n_1_11}
\gamma_n \ge \sum\limits_{i=1}^n u_i
\end{align}
it sets $\varsigma_n^\star=0$. The values $\varsigma_n^\star$ computed as described above, for $n=1,2,\ldots,N$, are used in \eqref{101.10} and \eqref{102.10} to obtain $\mu^\star$ and $k^\star$, respectively. {As it follows from \eqref{101.10} and \eqref{102.10}, $\mu^\star$ is set equal to the maximum value of $\{\varsigma_n^\star\}$ with $n\in \{1,2,\ldots,N\}$ while $k^\star$ stands for its corresponding index.} Both are then used to replace $\sigma_n^\star$ with $\mu^\star$ for $n=1,2,\ldots,k^\star$. Note that if two or more indices can be associated with $\mu^\star$ (meaning that $\varsigma_n^\star = \mu^\star$ for all such indices), then according to \eqref{102.10} the maximum one is selected.
Once $\{\sigma_1^\star,\sigma_2^\star,\ldots,\sigma_{k^{\star}}^\star\}$ have been computed, Algorithm 1 moves to the second step, which essentially consists in solving the following reduced problem:
\begin{align}\label{4.1}
\quad \underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=k^\star +1}^N f_n(x_n)\\ \nonumber
\text{subject to} \quad & \!\!\!\!\!\!\sum \limits_{n=k^\star +1}^{j} x_n\le \gamma_j - \gamma_{k^{\star}}\quad j=k^\star +1,k^\star +2,\ldots,N\\ \nonumber
& l_n \le x_n \le u_n \quad n=k^\star +1,k^\star+2,\ldots,N
\end{align}
using the same procedure as before. The iterative procedure terminates in a finite number of steps when all quantities $\sigma_n^\star$ are computed. According to Theorem 1, the solutions of $(\mathcal P)$ for $n=1,2,\ldots, N$ are eventually obtained as $x_n^\star= \xi_n(\sigma_n^\star)$.
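The procedure just described can be sketched in code. The following Python fragment is a minimal, hedged implementation of Algorithm 1 under the assumptions that all $N$ linear constraints are present, that each $h_n^{-1}$ is supplied as a callable, and that the inner equation \eqref{100.10} is solved by bisection; names and tolerances are illustrative:

```python
import math

def solve_P(hinv, l, u, rho):
    """Hedged sketch of Algorithm 1: hinv[n] is h_n^{-1}; l and u collect
    the box bounds and rho the right-hand sides of the linear constraints."""
    N = len(rho)
    # xi_n(0) = u_n by (xi_n); otherwise clip h_n^{-1}(s) to [l_n, u_n]
    xi = lambda n, s: u[n] if s <= 0.0 else min(max(hinv[n](s), l[n]), u[n])
    sigma, gamma, j = [0.0] * N, list(rho), 0
    while j < N:
        cand = []
        for n in range(j, N):
            if gamma[n] < sum(u[j:n + 1]):
                # bisection on the non-increasing c_n(s) = sum_i xi_i(s)
                c = lambda s, n=n: sum(xi(i, s) for i in range(j, n + 1))
                lo, hi = 1e-12, 1.0
                while c(hi) > gamma[n]:
                    hi *= 2.0
                for _ in range(200):
                    mid = 0.5 * (lo + hi)
                    lo, hi = (mid, hi) if c(mid) > gamma[n] else (lo, mid)
                cand.append(0.5 * (lo + hi))
            else:
                cand.append(0.0)                                   # step (ii)
        mu = max(cand)                                             # (101.10)
        k = j + 1 + max(i for i, v in enumerate(cand) if v == mu)  # (102.10)
        for n in range(j, k):
            sigma[n] = mu                                          # step d)
        for n in range(k, N):
            gamma[n] -= gamma[k - 1]                               # step f)
        j = k
    return [xi(n, sigma[n]) for n in range(N)], sigma

# Data of the worked example in Section IV: f_n(x) = w_n*exp(-x), l_n = -inf.
w = [2.0, 5.0, 8.0, 0.5]
x, sig = solve_P([lambda s, wn=wn: math.log(wn) - math.log(s) for wn in w],
                 [float('-inf')] * 4, [0.4, -1.2, 2.0, -1.8],
                 [0.2, -2.0, 1.1, -1.9])
```

On the data of the example of Section \ref{graphical_interpretation}, this sketch returns $x^\star \approx (-0.8,\,-1.2,\,1.9,\,-1.8)$, in agreement with the closed-form solution reported there.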
\vspace{-0.2cm}
\subsection{Remarks}
The following remarks are of interest.
\begin{remark}It is worth observing that in deriving Algorithm 1 we have implicitly assumed that the number of linear constraints in \eqref{1} is exactly $N$. When this is not the case, Algorithm 1 can be modified in an intuitive and straightforward manner. Specifically, let $\mathcal{L} \subset \{1,2,\ldots,N\}$ denote the subset of indices associated with the linear constraints of the optimization problem at hand. In these circumstances, \eqref{1} reduces to:
\begin{align}\label{1mod}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=1}^N f_n(x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{j} x_n\le \rho_j \quad j \in \mathcal{L}\\
& l_n \le x_n \le u_n \quad n=1,2,\ldots,N. \nonumber
\end{align}
The solution of \eqref{1mod} can still be computed through the iterative procedure illustrated in Algorithm 1 once the two following changes are made:
\begin{itemize}
\item Step a) -- Replace $\mathcal{N}_{j}$ with $\mathcal{N}_{j} \cap \mathcal{L}$.
\item Step f) -- Replace the statement ``Set $\gamma_n\leftarrow\gamma_n- \gamma_{k^\star}$
for $n=k^\star+1,k^\star+2, \ldots, N$" with ``Set $\gamma_n\leftarrow\gamma_n- \gamma_{k^\star}$
for $n \in \{k^\star+1,k^\star+2, \ldots, N\} \cap \mathcal{L}$".
\end{itemize}
As seen, when only a subset $\mathcal{L}$ of constraints must be satisfied, Algorithm 1 proceeds by computing the quantities $\varsigma_n^\star$ only for the indices $n\in \mathcal{L}$.
\end{remark}
\begin{remark}The number of iterations $L$ required by Algorithm 1 to compute all the Lagrange multipliers $\sigma_n^\star$ (and, hence, to compute all the solutions $x_n^\star$) depends on the cardinality $\left|\mathcal{L}\right|$ of $\mathcal{L}$ (or, equivalently, on the number of linear constraints). {In general, $L$ is less than or equal to $\left|\mathcal{L}\right|$. However,} if $\left|\mathcal{L}\right|=1$ only one iteration is required and thus $L=1$. Also, if there is no linear constraint (which means $\mathcal{L}= \emptyset $ and $\left|\mathcal{L}\right|=0$) the solutions of $(\mathcal{P})$ can be computed without running Algorithm 1 since they are trivially given by $x_n^\star=u_n$. On the other hand, if $\left|\mathcal{L}\right|=N$ the maximum number of iterations required is $N-1$. Indeed, assume that at each iteration Algorithm 1 provides only one $x_n^\star$ (which amounts to saying that at the first iteration Algorithm 1 computes $x_1^\star$, at the second $x_2^\star$, and so forth). Accordingly, at the end of the $(N-1)$th iteration the values of $x_1^\star,x_2^\star,\ldots,x_{N-1}^\star$ are available, and $x_N^\star$ can be directly computed as $ x_N^\star = \min\{(\rho_N-\sum_{n=1}^{N-1}x_n^\star),u_N\}$ {without performing the $N$th iteration. For simplicity, in the sequel we assume that the $N$th iteration is always performed so that it is assured that the last value of $k^\star$ computed through \eqref{102.10} is always equal to $N$.}
\end{remark}
\begin{remark} Observe that if there exists one or more values of $j\in \mathcal{L}$ in \eqref{1mod} for which the following condition holds true
\begin{align}\label{000012}
\rho_j= \sum\limits_{i=1}^j l_i
\end{align}
then it easily follows that $x^{\star}_{n}=l_n$ for $n=1,2,\ldots, j_{\max}$, with $j_{\max}$ being the maximum value of $j\in \mathcal{L}$ such that the above condition is satisfied. This means that solving \eqref{1mod} basically reduces to finding the solution of the following problem:
\begin{align}\label{caseCAC}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=j_{\max}+1}^{N} f_n(x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=j_{\max}+1}^{j} x_n\le \rho_j^{\prime} \quad j \in \mathcal L \setminus \mathcal C\\\nonumber
& l_n \le x_n \le u_n \quad n=j_{\max}+1,\ldots,N
\end{align}
in which $\mathcal C = \{1,2, \ldots, j_{\max}\}$ and $\rho_j^{\prime} = \rho_j-\sum\nolimits_{n=1}^{j_{\max}}l_n$.
\end{remark}
\begin{remark} {For later convenience, we concentrate on the computation of $\mu^\star$ in the last step of Algorithm 1}. For this purpose, denote by $\{\mu^{\star}_{1},\mu^{\star}_{2},\ldots,\mu^{\star}_{L}\}$ and $ k^{\star}_{1}<k^{\star}_{2}<\cdots<k^{\star}_{L}$ (with $k^{\star}_{1} \ge 1$ and $k^{\star}_{L}=N$) the values of $\mu^{\star}$ and $k^\star$ provided by \eqref{101.10} and \eqref{102.10}, respectively, at the end of the $L$ iterations required to solve $(\mathcal{P})$. Setting $k_{0}^{\star}=0$, we may write
\begin{equation}
\label{rem3_1}
\sigma_{n}^\star=\mu_{j}^{\star} \quad k_{j-1}^{\star}+1 \le n \le k_{j}^{\star} \; \mathrm{and} \; j=1,2,\ldots,L
\end{equation}
with $\{\mu_{1}^{\star},\mu_{2}^{\star},\ldots,\mu_{{L-1}}^{\star}\}$ such that
\begin{align}\label{rem3_2}
\sum\limits_{n= k_{j-1}^{\star}+1}^{k_{j}^{\star}} x_n^\star = \rho_{k_{j}^{\star}}- \rho_{k_{j-1}^{\star}} \quad j=1,2,\ldots,{L-1}.
\end{align}
For $j=L$ two cases may occur, namely $\mu_{L}^{\star} > 0$ or $\mu_{L}^{\star} = 0$. In the former, $\mu_{L}^{\star} $ is such that
\begin{align} \label{rem3_3}
\sum\limits_{n= k_{L-1}^{\star}+1}^{k_{L}^{\star}} x_n^\star = \rho_{k_{L}^{\star}}- \rho_{k_{L-1}^{\star}}
\end{align}
while in the latter we simply have that
\begin{equation}
\label{rem3_4}
x_n^\star=u_n \quad \quad n=k_{L-1}^{\star}+1,\ldots,k_{L}^{\star}.
\end{equation}
\end{remark}
\begin{remark}
At any given iteration, Algorithm 1 requires to solve at most $N-k^\star$ non-linear equations (where $k^\star$ is the value obtained from \eqref{102.10} at the previous iteration):
\begin{align}\label{eq1R1}
\!\!\!\!\!c_n({\varsigma})= \sum\limits_{i=k^\star+1}^n \xi_i(\varsigma) = \gamma_n\quad n=k^\star+1,k^\star+2,\ldots, N.
\end{align}
When the solutions $\{\varsigma_{n}^\star\}$ of the above equations can be computed in closed form, the computational complexity required by each iteration is nearly negligible. On the other hand, when a closed form does not exist, this may result in excessive computation. In this latter case, a possible means of reducing the computational complexity relies on the fact that $c_n(\varsigma)$ is non-increasing, being the sum of non-increasing functions. Now, assume that the solution of \eqref{eq1R1} has been computed for $n=n'$. Since we are interested in the maximum among the solutions of \eqref{eq1R1}, as indicated in \eqref{101.10}, for $n''>n'$ the equation $c_{n''}(\varsigma) = \gamma_{n''}$ must be solved only if $c_{n''}(\varsigma_{n'}^\star) > \gamma_{n''}$; only in this case is $\varsigma_{n''}^\star$ greater than $\varsigma_{n'}^\star$. Accordingly, we may proceed as follows. We start by solving \eqref{eq1R1} for $n=k^\star+1$. Then, we look for the first index $n>k^\star+1$ for which $c_{n}(\varsigma_{k^\star+1}^\star) > \gamma_{n}$ and solve the equation associated with that index. We proceed in this way until $n=N$. The number of non-linear equations solved at each iteration is thus smaller than or equal to that required by Algorithm 1.
\end{remark}
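The skipping rule described in the remark above can be sketched as follows (Python; the interfaces are illustrative assumptions): since each $c_n$ is non-increasing, the root of $c_{n''}(\varsigma)=\gamma_{n''}$ can exceed the current best candidate only if $c_{n''}$ evaluated at that candidate still exceeds $\gamma_{n''}$.

```python
# Hedged sketch of the skipping rule: `c(n, s)` returns c_n(s), which is
# non-increasing in s, and `solve(n)` returns the root of c_n(s) = gamma[n];
# both interfaces are illustrative.
def max_candidate(c, solve, gamma, indices):
    best = None
    for n in indices:
        if best is None or c(n, best) > gamma[n]:
            best = solve(n) if best is None else max(best, solve(n))
    return best

# Toy check with c_n(s) = K_n - s, whose root is K_n - gamma_n.
K, gamma = {1: 5.0, 2: 3.0, 3: 7.0}, {1: 1.0, 2: 0.0, 3: 2.0}
c = lambda n, s: K[n] - s
solve = lambda n: K[n] - gamma[n]
print(max_candidate(c, solve, gamma, [1, 2, 3]))  # 5.0; n = 2 is skipped
```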
\begin{remark}
{From the above remark, it follows that the proposed algorithm can be basically seen as composed of two layers. The outer layer computes the Lagrange multipliers $\{\sigma_n^\star\}$ whereas the inner layer evaluates the solution to \eqref{eq1R1}. If the latter can be solved in closed form, then the complexity required by the inner layer is negligible and thus the number of iterations required to solve the problem is essentially given by the number of iterations of the outer layer, which is at most $N-1$ with $N$ being the number of linear constraints. On the other hand, if the solution to \eqref{eq1R1} cannot be computed in closed form, then the total number of iterations should also take into account the complexity of the inner layer. However, this cannot be easily quantified as it largely depends on the particular structure of \eqref{eq1R1} and the specific iterative procedure used to solve it.}
\end{remark}
\begin{figure}[t]
\begin{center}
\psfrag{x1}[r][m]{\scriptsize{$x_1^\star = -0.8$}}
\psfrag{x2}[r][m]{\scriptsize{$x_2^\star=-1.2$}}
\psfrag{x3}[r][m]{\scriptsize{$x_3^\star=1.9$}}
\psfrag{x4}[r][m]{\scriptsize{$x_4^\star=-1.8$}}
\psfrag{x5}[c][m][1][90]{\scriptsize{\quad\;$\sigma_3^\star=\sigma_4^\star= 1.196$\quad}}
\psfrag{x6}[c][m][1][90]{\scriptsize{$\sigma_1^\star=\sigma_2^\star= 4.451$}}
\psfrag{x7}[r][m]{\scriptsize{$\varsigma$}}
\psfrag{data1}[l][m]{\scriptsize{$\xi_1(\varsigma)$}}
\psfrag{data2}[l][m]{\scriptsize{$\xi_2(\varsigma)$}}
\psfrag{data3}[l][m]{\scriptsize{$\xi_3(\varsigma)$}}
\psfrag{data4}[l][m]{\scriptsize{$\xi_4(\varsigma)$}}
\includegraphics[width=0.9\columnwidth]{figW.eps}
\end{center}
\caption{Graphical illustration of the solutions $x_n^\star$. The intersection of $\xi_n(\varsigma)$ with the vertical dashed line at $\varsigma= \sigma_n^\star$ yields $x_n^\star$.}
\label{fig1}
\end{figure}
\vspace{-0.1cm}
\section{Graphical interpretations}\label{graphical_interpretation}
In what follows, we provide graphical interpretations of the general policy spelled out by Theorem 1 and Lemma 3.
\subsection{Charts}
A direct depiction of Theorem 1 and Lemma 3 can be easily obtained by plotting $c_n(\varsigma)$ and $\xi_n(\varsigma)$ for $n=1,2,\ldots,N$ as a function of $\varsigma \ge 0$. From \eqref{100.10}, it follows that the intersections of the curves $c_n(\varsigma)$ with the horizontal lines at $\gamma_n$ yield $\varsigma_n^\star$, from which $\mu^\star$ and $k^\star$ are computed as indicated in \eqref{101.10} and \eqref{102.10}. According to \eqref{x_n^star}, the solutions $x_n^\star$ for $n=1,2,\ldots,k^\star$ correspond to the intersection of the corresponding functions $\xi_n(\varsigma)$ with the vertical line at $\varsigma = \sigma_n^\star = \mu^\star$. Once $x_n^\star$ for $n=1,2,\ldots,k^\star$ are computed, the algorithm proceeds with the computation of the remaining solutions by solving the corresponding reduced problem.
For illustration purposes, we assume $N=4$, $l_n = -\infty$ for any $n$, $\mathbf{u}=[0.4, -1.2, 2, -1.8]$ and ${\boldsymbol{\rho}} = [0.2, -2, 1.1, -1.9]$. In addition, we set
\begin{align}\label{5.1}
f_n(x_n) = w_n e^{-x_n}
\end{align}
with $[w_1, w_2, w_3, w_4]=[2, 5, 8, 0.5]$. Then, it follows that $h_n(x_n) ={w_n}e^{-x_n}$ and $h_n^{-1}(\varsigma) = \ln w_n -\ln \varsigma$.
From \eqref{xi_n}, we then obtain
\begin{align}
\xi_n(\varsigma) = \left\{ {\begin{array}{*{20}c}
{u_n } & {0 \le \varsigma < w_n e^{-u_n} } \\ \\
\ln w_n -\ln \varsigma & { w_n e^{-u_n} \le \varsigma}\\
\end{array}} \right.
\end{align}
or, more compactly,
\begin{align}\label{xi_n.0}
\xi_n(\varsigma) = \min\left\{ \ln w_n -\ln \varsigma, u_n\right\}
\end{align}
whose graph is shown in Fig. \ref{fig1}.
As seen, the first operation of Algorithm 1 is to compute the quantities $\varsigma_n^\star$ for $n=1,\ldots, 4$ according to step b). Since the condition $\gamma_n < \sum\nolimits_{i=1}^n u_i$ is satisfied for $n=1,2,\ldots, 4$, the computation of $\varsigma_n^\star$ requires solving \eqref{100.10} for $n=1,2,\ldots, 4$. Using \eqref{xi_n.0}, we easily obtain:
\begin{align}
\varsigma_1^\star &= e^{\ln w_1 - \gamma_1} = 1.637\\
\varsigma_2^\star &= e^{{\ln w_1 + u_2 - \gamma_2}} = 4.451\\
\varsigma_3^\star &= e^{{\ln w_3 + u_1 + u_2 - \gamma_3}} = 1.196\\
\varsigma_4^\star &= e^{\frac{\ln w_1 + \ln w_3 + u_2 + u_4 - \gamma_4}{2}} = 2.307.
\end{align}
A direct depiction of the above results can be easily obtained by plotting $c_n(\varsigma)$ for $n=1,2,\ldots,4$ as a function of $\varsigma \ge 0$. As shown in Fig. \ref{fig2}, the intersections of the curves $c_n(\varsigma)$ with the horizontal lines at $\gamma_n$ yield $\varsigma_n^\star$.
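The four closed-form multipliers above are easily verified numerically; the short Python check below (using the data of this example) confirms that each $\varsigma_n^\star$ satisfies $c_n(\varsigma_n^\star)=\gamma_n$:

```python
import math

# Numerical check of the closed-form multipliers for this example:
# with l_n = -infinity, xi_n(s) = min(ln w_n - ln s, u_n).
w, u, g = [2.0, 5.0, 8.0, 0.5], [0.4, -1.2, 2.0, -1.8], [0.2, -2.0, 1.1, -1.9]
xi = lambda i, s: min(math.log(w[i]) - math.log(s), u[i])
c = lambda n, s: sum(xi(i, s) for i in range(n + 1))
vs = [math.exp(math.log(w[0]) - g[0]),                               # 1.6375
      math.exp(math.log(w[0]) + u[1] - g[1]),                        # 4.4511
      math.exp(math.log(w[2]) + u[0] + u[1] - g[2]),                 # 1.1965
      math.exp((math.log(w[0]) + math.log(w[2]) + u[1] + u[3] - g[3]) / 2)]
print([round(v, 4) for v in vs])   # [1.6375, 4.4511, 1.1965, 2.3078]
```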
\begin{figure}[t]
\begin{center}
\psfrag{x1}[r][m]{\scriptsize{$\gamma_1 = 0.2$}}
\psfrag{x2}[r][m]{\scriptsize{$\gamma_2 = -2$}}
\psfrag{x3}[r][m]{\scriptsize{$\gamma_3 = 1.1$}}
\psfrag{x4}[c][m]{\scriptsize{$ \gamma_4 = -1.9$\quad\quad\quad\quad}}
\psfrag{x6}[c][m][1][90]{\scriptsize{$\varsigma_1^\star = 1.637$}}
\psfrag{x8}[c][m][1][90]{\scriptsize{$\varsigma_2^\star = 4.451$}}
\psfrag{x7}[c][m][1][90]{\scriptsize{$\varsigma_3^\star = 1.196$}}
\psfrag{x5}[c][m][1][90]{\scriptsize{$\;\varsigma_4^\star = 2.307$}}
\psfrag{x9}[c][m]{\scriptsize{$\varsigma$}}
\psfrag{data1}[c][m]{\scriptsize{$\quad c_1(\varsigma)$}}
\psfrag{data2}[c][m]{\scriptsize{$\quad c_2(\varsigma)$}}
\psfrag{data3}[c][m]{\scriptsize{$\quad c_3(\varsigma)$}}
\psfrag{data4}[c][m]{\scriptsize{$\quad c_4(\varsigma)$}}
\includegraphics[width=0.9\columnwidth]{figZ.eps}
\end{center}
\caption{Graphical illustration of $c_n(\varsigma)$. Their intersection with the horizontal dashed lines at $\gamma_1 = 0.2$, $\gamma_2 = -2$, $\gamma_3 = 1.1$ and $\gamma_4 = -1.9$ yields respectively $\varsigma_1^\star = 1.637$, $\varsigma_2^\star = 4.451$, $\varsigma_3^\star = 1.196$ and $\varsigma_4^\star = 2.307$. \vspace{-0.3cm}}
\label{fig2}
\end{figure}
Using the above results in \eqref{101.10} and \eqref{102.10} of step c) yields $\mu^\star = 4.451$ and $k^\star = 2$,
from which (according to step d)) we obtain
\begin{align}
\sigma_1^\star=\sigma_2^\star=\mu^\star = 4.451.
\end{align}
Once the optimal $\sigma_1^\star$ and $\sigma_2^\star$ are computed, Algorithm 1 proceeds by solving the following reduced problem:
\begin{align}
\underset{\{x_3,x_4\}}{\min} \quad &
\sum\limits_{n=3}^4 {w_n}e^{-x_n}\\\nonumber
\text{subject to} \quad & \sum \limits_{n=3}^{j} x_n\le \gamma_j\quad j=3,4\\\nonumber
& x_n \le u_n \quad n=3,4
\end{align}
with $\gamma_3 = 3.1$ and $\gamma_4 = 0.1$ as obtained from $\gamma_j \leftarrow \gamma_j - \gamma_{k^\star}$
observing that $\gamma_{k^\star}=\gamma_2 = -2$. Since $\gamma_3 > u_3 $, from step b) we have that $\varsigma_3^\star = 0$ while $\varsigma_4^\star$ turns out to be given by
\begin{align}
\varsigma_4^\star = e^{{\ln w_3 + u_4 - \gamma_4}} = 1.196.
\end{align}
As before, $\varsigma_4^\star$ can be obtained as the intersection of the new function
\begin{align}
c_4(\varsigma) = \sum\limits_{n=3}^4 \xi_n(\varsigma)
\end{align}
with the horizontal line at $\varsigma = \gamma_4 = 0.1$. Then, from \eqref{101.10} and \eqref{102.10}, we have that
\begin{align}
\mu^\star = \mathop {\max }\nolimits_{n=3,4 } \varsigma_n^\star= 1.196
\end{align}
and thus $k^\star = 4$. This means that $\sigma_3^\star= \sigma_4^\star=1.196$.
The optimal $x_n^\star$ are eventually obtained as $x_n^\star = \xi_n(\sigma_n^\star)$. This yields $x_1^\star = -0.8$, $x_2^\star = -1.2$, $x_3^\star = 1.9$ and $x_4^\star = -1.8$. As depicted in Fig. \ref{fig1}, the solution $x_n^\star$ corresponds to the intersection of $\xi_n(\varsigma)$ with the vertical line at $\varsigma = \sigma_n^\star$.
\subsection{Water-filling inspired policy}
{While the charts used in the foregoing example are quite useful, we put forth an alternative interpretation that allows retaining some of the intuition of the water-filling policy. This interpretation is valid for cases in which the optimization variables $\{x_n;\,n=1,2,\ldots,N\}$ can only take non-negative values, which amounts to setting $l_n=0$ for $n=1,2,\ldots,N$.
We start considering the simple case in which a single linear constraint is imposed:
\begin{align}\label{wfl1}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=1}^N f_n(x_n)\\
\nonumber \text{subject to} \quad & \sum \limits_{n=1}^{N} x_n\le \rho_N \\
\nonumber & 0 \le x_n \le u_n \quad n=1,2,\ldots,N.
\end{align}
Using the results of Theorem 1 and Lemma 3, the solution to \eqref{wfl1} is found to be
\begin{equation}\label{wfl2}
x_n^\star= \xi_n(\sigma^\star)=\min\left\{\max\left\{h_n^{-1}(\sigma^\star), 0\right\}, u_n\right\}
\end{equation}
where the values of $\sigma_n^\star$ are obtained through Algorithm 1. Since a single constraint is present in \eqref{wfl1}, a single iteration is required to compute all the values of $\sigma_n^\star$ for $n=1,2,\ldots,N$. In particular, it turns out that $\sigma_n^\star = \sigma^\star$ for every $n$, with $\sigma^\star$ such that the following condition is satisfied:
\begin{align}\label{wfl4}
\sum\limits_{n=1}^N x_n^\star =\sum\limits_{n=1}^N \xi_n(\sigma^\star)= \rho_N.
\end{align}
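Since the map $\sigma\mapsto\sum_n\xi_n(\sigma)$ in \eqref{wfl4} is continuous and non-increasing, $\sigma^\star$ can be found by plain bisection. A minimal sketch, assuming each $h_n^{-1}$ is available as a callable (names and data are illustrative):

```python
# Hedged bisection sketch for sigma* in (wfl4): with l_n = 0, the map
# s -> sum_n xi_n(s) is continuous and non-increasing, so the root is
# bracketed by doubling and then bisected.
def waterfill_sigma(hinv, u, P):
    total = lambda s: sum(min(max(h(s), 0.0), un) for h, un in zip(hinv, u))
    lo, hi = 1e-12, 1.0
    while total(hi) > P:     # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > P else (lo, mid)
    return 0.5 * (lo + hi)

# Toy data with h_n^{-1}(s) = 1/s - 1/lam_n (the capacity costs of Sec. V):
lam, u, P = [1.0, 2.0, 4.0], [10.0, 10.0, 10.0], 1.0
sigma = waterfill_sigma([lambda s, a=a: 1.0 / s - 1.0 / a for a in lam], u, P)
print(round(1.0 / sigma, 6))   # water level eta* = 1/sigma* = 0.875
```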
\begin{figure}[t]
\begin{center}
\includegraphics[width=.55\textwidth]{Fig100.eps}
\end{center}
\caption{Water-filling inspired interpretation of the solutions $x_n^\star$ .}
\label{GWF1}\vspace{-0.4cm}
\end{figure}
Consider now $N$ vessels, which are filled with a proper material (different from vessel to vessel) up to a given level. Think of it as the \textit{zero-level} and assume that it is the same for all vessels, as illustrated in Fig. \ref{GWF1} for $N = 6$. Assume that a certain quantity $\eta$ of water (measured in proper units) is poured into each vessel and let each material be able to first absorb it and then to expand accordingly up to a certain level. In particular, assume that the behaviour of material $n$ is regulated by $\xi_n(\varsigma)$ with $\varsigma= 1/\eta$. More precisely, $\xi_n(\varsigma)$ is the difference between the new level of material $n$ and the zero-level. From \eqref{xi_n}, it easily follows that the expansion starts only when $\eta$ reaches the level $\eta=1/h_n(0)$ while it stops when $\eta=1/h_n(u_n)$, corresponding to a maximum expansion of $\xi_n(\varsigma)=u_n$. This means that additional water beyond the quantity $1/h_n(u_n)$ does not produce any further expansion; it is simply accumulated in vessel $n$ above the level $u_n$ as depicted in Fig. \ref{GWF1}.
Using \eqref{wfl2} and \eqref{wfl4}, the solutions $\{x_n^\star\}$ to \eqref{wfl1} can thus be interpreted as obtained through the following procedure, which is reminiscent of the water-filling policy.
\begin{enumerate}
\item Consider $N$ vessels;
\item Assume vessel $n$ is filled with a proper material up to a certain zero-level (the same for each vessel);
\item Let the behaviour of material $n$ be regulated by $\xi_n$;
\item Compute $\sigma^\star$ through \eqref{wfl4};
\item Pour the same quantity $\eta^\star=1/\sigma^\star$ of water into each vessel;
\item The material height over the zero-level in vessel $n$ gives $x_n^\star$.
\end{enumerate}
The extension of the above water-filling interpretation to the general form in \eqref{1} is straightforward. Assume that the $j$th iteration is considered. Then, Algorithm 1 proceeds as follows.
\begin{enumerate}
\item Consider the $N-j$ vessels with indices $n=j+1,\ldots,N$;
\item Assume vessel $n$ is filled with a proper material up to a certain zero-level (the same for each vessel);
\item Let the behaviour of material $n$ be regulated by $\xi_n$;
\item Compute $\mu^\star$ and $k^\star$ through \eqref{101.10} and \eqref{102.10};
\item Pour the same quantity $\eta^\star=1/\mu^\star$ of water into vessels $n = j+1,\ldots,k^\star$;
\item The material height over the zero-level gives $x_n^\star$ for $n = j+1,\ldots,k^\star$.
\end{enumerate}
\begin{remark}
Observe that the speed by which material $n$ expands itself depends on $\xi_n^\prime $ defined as the first derivative of $\xi_n$ with respect to $\eta = 1/\varsigma$. It can be easily shown that
\begin{align}\label{pippo}
\xi_n^\prime = \frac{1}{\eta^2 f_n^{\prime\prime}\left(h_n^{-1}(1/\eta)\right)}
\end{align}
from which it follows that the rate of growth is inversely proportional to the second derivative of
$f_n$ evaluated at $h_n^{-1}(1/\eta)$.
\end{remark}
}
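The prediction of \eqref{pippo} can be verified numerically; the hedged sketch below (Python, with illustrative values) compares a centered finite difference of $\xi_n$ with the right-hand side of \eqref{pippo} for the exponential cost $f(x)=we^{-x}$:

```python
import math

# Hedged finite-difference check of (pippo) for f(x) = w*exp(-x): here
# h^{-1}(s) = ln w - ln s, so xi as a function of eta = 1/s is ln(w*eta),
# and (pippo) predicts xi' = 1/(eta^2 * f''(h^{-1}(1/eta))) = 1/eta.
w, eta, d = 3.0, 1.7, 1e-6
xi_of_eta = lambda e: math.log(w * e)            # unclipped region assumed
numeric = (xi_of_eta(eta + d) - xi_of_eta(eta - d)) / (2.0 * d)
predicted = 1.0 / (eta ** 2 * w * math.exp(-math.log(w * eta)))
print(abs(numeric - predicted) < 1e-6)   # True: the two derivatives agree
```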
\vspace{-0.2cm}
\section{Particularization to power allocation problems}\label{examples}
In the following, we show how some power allocation problems in signal processing and communications can be put in the form of \eqref{1}, and thus can be solved with the generalized algorithm illustrated above\footnote{Due to the considerable amount of works in this field, our exposition will be necessarily incomplete and will reflect the subjective tastes and interests of the authors. To compensate for this partiality, we refer the interested reader to the list of references for an entree into the extensive literature on this subject.}.
\vspace{-0.3cm}
\subsection{{Classical water-filling and cave-filling policies}}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.4\textwidth]{FigCF1.eps}
\end{center}
\caption{Illustration of the water-filling inspired policy for problem \eqref{10} when $N=3$.}
\label{Fig8_new}
\end{figure}
Consider the classical problem of allocating a certain amount of power $P$ among a bank of non-interfering channels to maximize the capacity. This problem can be mathematically formulated as follows:
\begin{align}\label{10}
\underset{\{x_n\}}{\max} \quad &
\sum\limits_{n=1}^N \log(1+\lambda_n x_n)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{N} x_n\le P \quad \\\nonumber
& 0 \le x_n \le u_n \quad n=1,2,\ldots,N
\end{align}
where $x_n$ represents the transmit power allocated over the $n$th channel of gain $\lambda_n$ whereas $\log(1+\lambda_n x_n)$ gives the capacity of the $n$th channel. Clearly, we assume that $\sum_{n=1}^{N} u_n > P$, otherwise \eqref{10} has the trivial solution $x_n^\star=u_n$.
The above problem can be put in the same form of \eqref{1mod} setting $f_n(x_n) = -\log (1+\lambda_n x_n)$,
$l_n=0$ $\forall n$, $\mathcal{L}=\{N\}$ and $\rho_N = P$. Observing that
\begin{align}
h_n^{-1}(\varsigma) = \frac{1}{\varsigma} - \frac{1}{\lambda_n}
\end{align}
from \eqref{x_n^star_2} one gets
\begin{equation}\label{cavefilling}
x_n^\star=\min\left\{\max\left\{\dfrac{1}{\sigma^\star} - \dfrac{1}{\lambda_n},0\right\},u_n\right\}
\end{equation}
with $\sigma^\star$ such that
\begin{align}\label{constraint}
\sum\limits_{n=1}^N x_n^\star =\sum\limits_{n=1}^N \xi_n(\sigma^\star)= P.
\end{align}
{Using the water-filling policy illustrated in Section \ref{graphical_interpretation}, the solutions in \eqref{cavefilling} have the visual interpretation shown in Fig.~\ref{Fig8_new}, where we have assumed $N=3$ and set $\eta^{\star}=1/\sigma^\star$. The material inside the $n$th vessel starts expanding when the quantity of water $\eta$ poured in the vessel equals $1/\lambda_n$. Due to the particular form of $f_n$, the expansion follows the linear law $\xi_n(1/\eta)= \eta-1/\lambda_n$ as long as $\eta \le u_n+1/\lambda_n$. After that, water is no longer absorbed and the expansion stops. The additional water is accumulated in the vessel above the maximum level of the material. As shown in Fig.~\ref{Fig8_new}, this is precisely what happens with the yellow material in vessel $1$. On the other hand, we have that $\eta^\star-1/\lambda_2 < u_2$ and thus no water is accumulated on top of the red material in vessel $2$. Finally, the green material in vessel $3$ is such that no expansion occurs since $\eta^\star<1/\lambda_3$.}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.4\textwidth]{FigCF2.eps}
\end{center}
\caption{Illustration of the cave-filling policy for problem \eqref{10} when $N=3$.}
\label{Fig8}
\end{figure}
An alternative visual interpretation of \eqref{cavefilling} (commonly used in the literature) is given in Fig.~\ref{Fig8}, where ${1}/{\lambda_n}$ and $u_n + {1}/{\lambda_n}$ are viewed as the ground and the ceiling levels of patch $n$, respectively. In this case, the solution is computed as follows. We start by flooding the region with water to a level $\eta$. The total amount of water used is then given by
\begin{align}
\sum\limits_{n=1}^N \min\left\{\max\left\{\eta - \frac{1}{\lambda_n}, 0\right\},u_n\right\}.
\end{align}
The flood level is increased until a total amount of water equal to $P$ is used. The depth of the water inside patch $n$ gives $x_n^\star$. This solution method is known as cave-filling due to its specific physical meaning. Clearly, if $u_n = +\infty$ for all $n$ in \eqref{10}, then $x_n^\star$ reduces to
\begin{equation}
x_n^\star=\max\left\{\dfrac{1}{\sigma^\star} - \dfrac{1}{\lambda_n},0\right\}
\end{equation}
which is the classical water-filling solution.
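The flooding procedure just described is straightforward to emulate numerically; below is a hedged Python sketch (toy data) that raises the flood level $\eta$ by bisection until $P$ units of water are used and then reads off $x_n^\star$ as the water depth inside each patch:

```python
# Hedged numerical sketch of the cave-filling procedure: patch n has
# ground level 1/lam_n and a ceiling u_n above it; the flood level eta is
# raised by bisection until exactly P units of water are used. Toy data.
lam, u, P = [1.0, 2.0, 4.0], [10.0, 0.2, 10.0], 1.0

def depths(eta):
    return [min(max(eta - 1.0 / a, 0.0), un) for a, un in zip(lam, u)]

lo, hi = 0.0, 1.0
while sum(depths(hi)) < P:      # grow the bracket
    hi *= 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sum(depths(mid)) < P else (lo, mid)
eta = 0.5 * (lo + hi)
print([round(x, 3) for x in depths(eta)])  # [0.025, 0.2, 0.775]: patch 2
                                           # hits its ceiling u_2 = 0.2
```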
A problem whose solution has the same visual interpretation as Fig. \ref{Fig8} is also considered in \cite{Gao2008} (see problem (21)), in which the authors design the optimal training sequences for channel estimation in multi-hop transmissions using decode-and-forward protocols.
\vspace{-0.3cm}
\subsection{{General water-filling policies}}
\begin{figure}[t]
\begin{center}
\includegraphics[width=.4\textwidth]{FigGCF1.eps}
\end{center}
\caption{Illustration of the water-filling inspired policy for problem \eqref{60} .}
\label{Fig11}
\end{figure}
Consider now the following problem:
\begin{align}\label{60}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=1}^N \frac{\lambda_n}{x_n}\\\nonumber
\text{subject to} \quad & \sum \limits_{{n=1}}^{{j}} x_n\le \rho_j \quad j=1,2,\ldots,N\\\nonumber
& 0 \le x_n \le 1 \quad n=1,2,\ldots,N
\end{align}
where the $\lambda_n > 0$ are given parameters. This problem is considered in \cite{Sanguinetti2012} in the context of linear transceiver design architectures for MIMO networks with a single non-regenerative relay. It also appears in \cite{PalomarQoS2004} where the authors deal with the linear transceiver design problem in MIMO point-to-point networks to minimize the power consumption while satisfying specific QoS constraints on the mean-square-errors (MSEs). A similar instance can also be found in \cite{Palomar2003} and corresponds to the minimization of the weighted arithmetic mean of the MSEs in a multicarrier MIMO system with a total power constraint. All the above examples could in principle be solved with (specifically designed) multi-level water-filling algorithms \cite{Palomar2005}. Easy reformulations allow the use of the more general Algorithm 1, as shown next for problem \eqref{60}.
Setting
$f_n(x_n) = {\lambda_n}/{x_n}$
and letting $l_n=0$ and $u_n = 1$, $\forall n$, it is easily seen that \eqref{60} has the same form as \eqref{1}. Then, one gets $h_n(x_n) = {\lambda_n}/{x_n^2}$
and $h_n^{-1}(\varsigma) = \sqrt{{\lambda_n}/{\varsigma}}.$
The solution to \eqref{60} is given by
\begin{align}\label{65}
x_n^\star=\min\left\{\max\left\{\sqrt{\dfrac{\lambda_n}{\sigma_{n}^\star}},0 \right\},1\right\}
\end{align}
where $\{\sigma_{1}^\star, \sigma_{2}^\star,\ldots,\sigma_{N}^\star\}$ are computed through Algorithm 1 and take the form \eqref{rem3_1} with $\{\mu_{1}^{\star},\mu_{2}^{\star},\ldots,\mu_{{L-1}}^{\star}\}$ such that
\begin{align}
\sum\limits_{n= k_{j-1}^{\star}+1}^{k_{j}^{\star}} x_n^\star = \rho_{k_{j}^{\star}}- \rho_{k_{j-1}^{\star}} \quad j=1,2,\ldots,{L-1}.
\end{align}
{According to Remark 4}, if $\mu_{L}^{\star}$ is greater than $0$ then
\begin{align}
\sum\limits_{n= k_{L-1}^{\star}+1}^{N} x_n^\star = \rho_{N}- \rho_{k^{\star}_{L-1}}
\end{align}
otherwise when $\mu_{L}^{\star} = 0$ one gets
\begin{equation}
x_n^\star=1 \quad \quad n=k_{L-1}^{\star}+1,\ldots,N.
\end{equation}
{The solutions $x_n^\star$ in \eqref{65} can be thought of as obtained through the water-filling policy illustrated in Section \ref{graphical_interpretation}, in which the expansion of material $n$ is regulated by the square-root law $\xi_n(1/\eta) = \sqrt{\lambda_n\eta}$ with rate of growth given by
\begin{equation}
\xi_n^\prime = \frac{1}{2}\sqrt{\frac{\lambda_n}{\eta}},
\end{equation}
according to \eqref{pippo}. This is illustrated in Fig. \ref{Fig11} wherein we consider the first iteration of Algorithm 1 under the assumption that $k_1^\star = 3$ and $\lambda_3<\lambda_1<\lambda_2$. As expected, the level of the red material in the $2$nd vessel is higher than the others.}
\vspace{-0.3cm}
\subsection{Some other examples}
Consider now the following problem:
\begin{align}\label{70}
\underset{\{x_n\}}{\min} \quad &
\sum\limits_{n=1}^N {\lambda_n}{e^{-x_n}}\\\nonumber
\text{subject to} \quad & \sum \limits_{{n=1}}^{{j}} x_n\le \rho_j \quad j=1,2,\ldots,N\\\nonumber
& 0 \le x_n \le u_n \quad n=1,2,\ldots,N.
\end{align}
The above problem arises in \cite{Jiang2006} where the authors deal with the power minimization in MIMO point-to-point networks with non-linear architectures at the transmitter or at the receiver. A similar problem arises when two-hop MIMO networks with a single amplify-and-forward relay are considered \cite{Sanguinetti2012}.
The solution of \eqref{70} has the form
\begin{equation}\label{72}
x_n^\star=\min\left\{\max\left\{\log\left(\dfrac{\lambda_n}{\sigma_n^\star}\right),0 \right\},u_n\right\}
\end{equation}
where the quantities $\sigma_{n}^\star$ are given by \eqref{rem3_1}.
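As a quick sanity check, \eqref{72} can be evaluated directly once the multipliers are available. The sketch below uses purely hypothetical values of $\lambda_n$, $\sigma_n^\star$ and $u_n$:

```python
import math

def x_star(lam, sigma, u):
    """Eq. (72): log(lam/sigma) clipped to the box [0, u]."""
    return min(max(math.log(lam / sigma), 0.0), u)

# purely hypothetical gains, multipliers and upper bounds
lam   = [2.0, 1.0, 4.0]
sigma = [0.5, 2.0, 0.5]
u     = [1.5, 1.5, 1.5]
x = [x_star(l, s, b) for l, s, b in zip(lam, sigma, u)]
```

The three entries exercise the interior, zero-clipped and upper-clipped branches of \eqref{72}, respectively.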
Another instance of \eqref{1} arises in connection with the computation of the optimal power allocation for the maximization of the instantaneous received signal-to-noise ratio in amplify-and-forward multi-hop transmissions under short-term power constraints \cite{Farhadi09}. Denoting by $N$ the total number of hops, the problem can be mathematically formalized as follows \cite{Farhadi09}
\begin{align}\label{600}
\underset{\{x_n\}}{\max} \quad &
\left(\prod_{n=1}^N \left(1+\frac{1}{x_n\lambda_n}\right)-1\right)^{-1}\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{N} x_n\le P \\\nonumber
& 0 \le x_n \le u_n \quad n=1,2,\ldots,N.
\end{align}
where $x_n$ represents the power allocated over the $n$th hop and $P$ denotes the available power. In addition, $\lambda_n$ is the channel gain over the $n$th hop. The above problem can be equivalently reformulated as follows
\begin{align}\label{601}
\underset{\{x_n\}}{\min} \quad &
\sum_{n=1}^N \log \left(1+\frac{1}{x_n\lambda_n}\right)\\\nonumber
\text{subject to} \quad & \sum \limits_{n=1}^{N} x_n\le P \\\nonumber
& 0 \le x_n \le u_n \quad n=1,2,\ldots,N
\end{align}
from which it is clear that it is in the same form as \eqref{1mod} with
\begin{align}
f_n(x_n) = \log \left(1+\frac{1}{x_n\lambda_n}\right)
\end{align}
$\mathcal L = \{N\}$ and $\rho_N = P$. Then,
\begin{align}\label{605}
h_n^{-1}(\varsigma) = \frac{\sqrt{1+\frac{4\lambda_n}{\varsigma}}-1}{2\lambda_n}.
\end{align}
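Since $h_n=-f_n'$ gives $h_n(x)=1/(x(1+\lambda_n x))$ for the logarithmic objective above, \eqref{605} is the positive root of $\lambda_n x^2 + x - 1/\varsigma = 0$ and can be checked numerically (the values below are arbitrary):

```python
import math

def h(x, lam):
    # h_n(x) = -f_n'(x) = 1/(x*(1 + lam*x)) for f_n(x) = log(1 + 1/(lam*x))
    return 1.0 / (x * (1.0 + lam * x))

def h_inv(s, lam):
    # Eq. (605): the positive root of lam*x**2 + x - 1/s = 0
    return (math.sqrt(1.0 + 4.0 * lam / s) - 1.0) / (2.0 * lam)

lam, s = 0.7, 1.3                       # arbitrary positive values
assert abs(h(h_inv(s, lam), lam) - s) < 1e-12
```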
It is assumed that $\sum_{n=1}^{N} u_n > P$; otherwise \eqref{601} has the trivial solution $x_n^\star=u_n$. Substituting \eqref{605} into \eqref{x_n^star} yields
\begin{equation}\label{x_n.3}
x_n^\star=\min\left\{\max\left\{\frac{1}{2\lambda_n}\Big({\sqrt{1+\dfrac{4\lambda_n}{\sigma^\star}}-1}\Big),0 \right\},u_n\right\}
\end{equation}
with $\sigma^\star$ such that $\sum\nolimits_{n=1}^N x_n^\star = P$.
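Because each $x_n^\star$ in \eqref{x_n.3} is non-increasing in $\sigma$, the multiplier $\sigma^\star$ can be found by simple bisection. Below is a minimal sketch with hypothetical gains and bounds, assuming $\sum_n u_n > P$ so that the power constraint is active:

```python
import math

def x_star(sigma, lam, u):
    # Eq. (x_n.3): clipped inverse of h_n evaluated at sigma
    return min(max((math.sqrt(1.0 + 4.0 * lam / sigma) - 1.0) / (2.0 * lam), 0.0), u)

def allocate(lams, us, P, lo=1e-9, hi=1e9, iters=200):
    # total allocated power is non-increasing in sigma -> bisection on sigma
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        total = sum(x_star(mid, l, u) for l, u in zip(lams, us))
        if total > P:
            lo = mid          # too much power allocated: increase sigma
        else:
            hi = mid
    return [x_star(hi, l, u) for l, u in zip(lams, us)]

lams, us, P = [1.0, 2.0, 0.5], [3.0, 3.0, 3.0], 2.0
x = allocate(lams, us, P)
```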
\section{Conclusions}\label{conclusions}
{An iterative algorithm has been proposed to compute the solution of separable convex optimization problems with a set of linear and box constraints. The proposed solution operates through a two-layer architecture, which has a simple water-filling inspired graphical interpretation. The outer layer requires at most $N-1$ steps, with $N$ being the number of linear constraints, whereas the number of iterations of the inner layer depends on the complexity of solving a set of (possibly) non-linear equations. If these are solvable in closed form, the computational burden of the inner layer is negligible. The problem under investigation is particularly interesting since a large number of existing (and likely future) power allocation problems in signal processing and communications can be reformulated as instances of its general form, and can thus be solved with the proposed algorithm without the need to develop specific solutions for each of them.}
\section*{Appendix A \\ Proof of Lemmas 1 and 2}
We start by considering case {\bf{a}}). Without loss of generality, we concentrate on $f_1$, which is assumed monotonically increasing in $[l_1,u_1]$, and aim at proving that $x^\star_1 = l_1$. {We denote by $\mathcal{S}(x_1)$ the feasible set of $x_2,x_3,\ldots,x_N$ for a given $x_1 \in [l_1,u_1]$. Mathematically, $\mathcal{S}(x_1)$ is such that
\begin{align}\label{Opt_Prob_Red.1}
&\sum\limits_{n=2}^{j}x_n \le \rho_{j}-x_1 \quad j=2,\ldots,N \\\nonumber
& l_{n} \le x_n \le u_{n} \quad n=2,\ldots,N.
\end{align}
Clearly, we have that $\mathcal{S}(x_1) \subseteq \mathcal{S}(l_1)$
for any $x_1 \, \in \, (l_1,u_1]$. For notational convenience, we also define $F(x_1)$ as
\begin{align}\label{Opt_Prob_Red.2}
F(x_1) = \underset{\{x_2,x_3,\ldots,x_N\} \in \mathcal{S}(x_1)}{\min} \quad
\sum\limits_{n=2}^{N}f_{n}({x_n}).
\end{align}
Observe now that the optimal value $x^{\star}_{1}$ is such that $f_1(x_1) + F(x_1)$ is minimized. To this end, we recall that: $\mathbf{i}$) $ \; f_1(l_1) < f_1(x_1)$ since $f_1$ is strictly increasing in $[l_1, u_1]$; $\mathbf{ii}$) $\;F(l_1) \le F(x_1)$ since $\mathcal{S}(x_1) \subseteq \mathcal{S}(l_1)$ for any $x_1 \, \in \, (l_1,u_1]$. Therefore, it easily follows that $f_1(l_1) + F(l_1) < f_1(x_1) + F(x_1)$ for any $x_1 \, \in \, (l_1,u_1]$, which proves that $x^{\star}_{1}= l_1$. The same result can easily be extended to a generic $x_n$ with $n \ne 1$ using similar arguments. This proves Lemma 1.}
{Consider now case {\bf{b}}) and assume that there exists a point $z_n$ in $(l_n,u_n)$ such that $f_n^{'} (z_n) = 0 $ with $f_n'(x_n)<0\; \forall x_n \in (l_n, z_n)$ and $f_n'(x_n)>0\; \forall x_n \in (z_n, u_n)$. We aim at proving that $x^\star_n \in [l_n,z_n]$.}
{Since $f_n'(x_n)>0\; \forall x_n \in (z_n, u_n)$, then $f_n(x_n)$ is monotonically increasing in $[z_n,u_n]$. Consequently, by Lemma 1 it follows that $x^\star_n$ cannot be greater than $z_n$. This amounts to saying that $x^{\star}_n$ must belong to the interval $[l_n,z_n]$, as stated in \eqref{solB}.}
Finally, for case {\bf{c}}) nothing can be said a priori, apart from the fact that the solution $x^{\star}_n$ lies in the interval $[l_n,u_{n}]$, as required by the box constraints in \eqref{1}.
\section*{Appendix B \\ Proof of Theorem 1}
We begin by writing the Karush-Kuhn-Tucker (KKT) conditions for the convex problem $(\mathcal{P})$:
\begin{equation}\label{KKT1}
-h_{n}(x_n)+\sum\limits_{j=n}^{N}\lambda_j+\nu_n-\kappa_n=0 \quad \quad n=1,\ldots,N
\end{equation}
\begin{equation}\label{KKT2}
0 \le \lambda_n \; \bot \; \Big(\sum\limits_{j=1}^{n}x_j-\rho_n\Big) \le 0 \quad \quad n=1,\ldots,N
\end{equation}
\begin{equation}\label{KKT3.1}
0 \le \nu_n \; \bot \; \left(x_n-u_n\right) \le 0 \quad \quad n=1,\ldots,N
\end{equation}
\begin{equation}\label{KKT4.1}
0\le \kappa_n \; \bot \; \left(x_n-l_n\right) \ge 0 \quad \quad n=1,\ldots,N
\end{equation}
where $h_{n}(x) = -f^{\prime}_{n}(x)$.
Letting $\sigma_n =\sum\nolimits_{j=n}^{N}\lambda_j$ and $\sigma_{N+1} = 0$, we may rewrite \eqref{KKT1} -- \eqref{KKT4.1} in the following equivalent form:
\begin{equation}\label{KKT1eq}
-h_{n}(x_n)+\sigma_n+\nu_n-\kappa_n=0 \quad \quad n=1,\ldots,N
\end{equation}
\begin{equation}\label{KKT2eq}
\sigma_n \ge 0 \quad \quad \sigma_{N+1} = 0
\end{equation}
\begin{equation}\label{KKT3eq}
0 \le (\sigma_n-\sigma_{n+1}) \; \bot\; \Big(\sum\limits_{j=1}^{n}x_j-\rho_n\Big) \le 0 \quad n=1,\ldots,N
\end{equation}
\begin{equation}\label{KKT3}
0 \le \nu_n \; \bot \; \left(x_n-u_n\right) \le 0 \quad \quad n=1,\ldots,N
\end{equation}
\begin{equation}\label{KKT4}
0\le \kappa_n \; \bot \; \left(x_n-l_n\right) \ge 0 \quad \quad n=1,\ldots,N.
\end{equation}
Since ($\mathcal{P}$) is convex, solving the KKT conditions is equivalent to solving ($\mathcal{P}$). {Accordingly, we let $x^{\star}_{n}$, $\nu^{\star}_{n}$, $\kappa^{\star}_{n}$ and $\sigma_n^{\star}$ denote the solution of \eqref{KKT1eq} -- \eqref{KKT4} for $n=1,\ldots,N$. In what follows, it is shown that $x^{\star}_{n}$, $\nu^{\star}_{n}$ and $\kappa^{\star}_{n}$ are given by}
\begin{equation}\label{xsol}
x_{n}^{\star} = \xi_{n}(\sigma_{n}^{\star})
\end{equation}
\begin{equation}\label{nisol}
\nu_{n}^{\star} = \max \{h_{n}(x_{n}^{\star})-\sigma_{n}^{\star},0 \}
\end{equation}
\begin{equation}\label{kappasol}
\kappa_{n}^{\star} = \max \{\sigma_{n}^{\star} -h_{n}(x_{n}^{\star}),0 \}
\end{equation}
where
$\xi_n(\sigma_{n}^{\star})$ is computed as in \eqref{xi_n} with $\varsigma=\sigma_n^{\star}$:
\begin{equation}\label{csi_n}
\xi_{n}(\sigma_{n}^{\star}) = \left\{\begin{array}{cl}
u_{n} & 0 \le \sigma_{n}^{\star} < h_{n}(u_n)\\ \\
h^{-1}_{n}(\sigma_{n}^{\star}) & h_{n}(u_n) \le \sigma_{n}^{\star} < h_{n}(l_n) \\ \\
l_n & h_{n}(l_n)\le \sigma_{n}^{\star}. \\
\end{array}
\right.
\end{equation}
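In code, \eqref{csi_n} amounts to clipping $h_n^{-1}(\sigma_n^\star)$ to the box $[l_n,u_n]$. The sketch below uses the decreasing function $h(x)=1/(x+\lambda x^2)$ purely as a hypothetical example of an $h_n$:

```python
import math

def xi(sigma, lam, l, u):
    """Eq. (csi_n): h^{-1}(sigma) clipped to [l, u], with the
    hypothetical decreasing h(x) = 1/(x + lam*x**2)."""
    def h(x):
        return 1.0 / (x + lam * x * x)
    if sigma < h(u):                    # small multiplier: upper bound active
        return u
    if sigma >= h(l):                   # large multiplier: lower bound active
        return l
    return (math.sqrt(1.0 + 4.0 * lam / sigma) - 1.0) / (2.0 * lam)

lam, l, u = 1.0, 0.1, 2.0
assert xi(0.1, lam, l, u) == u          # 0.1 < h(2) = 1/6
assert xi(10.0, lam, l, u) == l         # 10 >= h(0.1) ~ 9.09
```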
{The following three cases are considered separately:} $\mathbf{a}$) $x_{n}^{\star}=l_n$; $\mathbf{b}$) $l_n < x_{n}^{\star} < u_n$; $\mathbf{c}$) $x_{n}^{\star} = u_n$.
Case $\mathbf{a}$) If $x_{n}^{\star}=l_n$, then from \eqref{KKT3} it immediately follows that $\nu_{n}^{\star}=0$, whereas \eqref{KKT1eq} reduces to $-h_{n}(x_{n}^{\star})+\sigma_n^{\star}=\kappa_n^{\star}$,
from which using \eqref{KKT4} we get
\begin{equation}\label{pluto_1}
-h_{n}(x_{n}^{\star})+\sigma_n^{\star}\ge 0
\end{equation}
or, equivalently, $ \sigma_n^{\star} \ge h_{n}(x_{n}^{\star}) = h_{n}(l_n).$
{Using the above result into \eqref{csi_n} yields
\begin{equation}\label{}
x_n^{\star}=l_n=\xi_n(\sigma_n^{\star})
\end{equation}
as stated in \eqref{xsol}.}
From the above results, it also follows that:
\begin{align}
\nu_{n}^{\star}&=0=\max\{h_{n}(x_{n}^{\star})-\sigma_n^{\star},0\}\\
\kappa_{n}^{\star}&=\sigma_n^{\star}-h_{n}(x_{n}^{\star})=\max\{\sigma_n^{\star}-h_{n}(x_{n}^{\star}),0\}
\end{align}
as given in \eqref{nisol} and \eqref{kappasol}, respectively.
Case $\mathbf{b}$) From \eqref{KKT3} and \eqref{KKT4} we obtain $\nu_{n}^{\star}=0$ and $\kappa_{n}^{\star}=0$ so that \eqref{KKT1eq} reduces to
\begin{equation}\label{KKT1eq_ii}
{- h_{n}(x_{n}^{\star})}+\sigma_n^{\star}=0
\end{equation}
from which we have that $x_{n}^{\star}=h_n^{-1}(\sigma_n^{\star})$. Since in this case $l_n < x_{n}^{\star} < u_n$, then
\begin{equation}\label{sigma_n_star_ii}
h_{n}(u_{n}) < \sigma_n^{\star} < h_{n}(l_{n})
\end{equation}
so that we obtain
\begin{equation}\label{}
x_n^{\star}=h_n^{-1}(\sigma_n^{\star})=\xi_n(\sigma_n^{\star}).
\end{equation}
Also, taking \eqref{KKT1eq_ii} into account yields
\begin{align}
\nu_{n}^{\star}&=0=\max\{h_{n}(x_{n}^{\star})-\sigma_n^{\star},0\}\\
\kappa_{n}^{\star}&=0=\max\{\sigma_n^{\star}-h_{n}(x_{n}^{\star}),0\}.
\end{align}
Case $\mathbf{c}$) In this case, from \eqref{KKT4} one gets $\kappa_{n}^{\star}=0$ whereas \eqref{KKT1eq} reduces to $\nu_n^{\star} = h_{n}(x_{n}^{\star})-\sigma_n^{\star}$. Since \eqref{KKT3} is satisfied for $\nu_n^{\star} \ge 0$, then $h_{n}(x_{n}^{\star})-\sigma_n^{\star} \ge 0$ or, equivalently,
\begin{equation}\label{sigma_n_star_iii}
\sigma_n^{\star} \le h_{n}(x_{n}^{\star}) =h_{n}(u_{n}).
\end{equation}
Accordingly, we can write
\begin{equation}\label{}
x_n^{\star}=u_n=\xi_n(\sigma_n^{\star})
\end{equation}
\begin{equation}
\nu_{n}^{\star}=h_{n}(x_{n}^{\star})-\sigma_n^{\star}=\max\{h_{n}(x_{n}^{\star})-\sigma_n^{\star},0\}
\end{equation}
and
\begin{equation}
\kappa_{n}^{\star}=0=\max\{\sigma_n^{\star}-h_{n}(x_{n}^{\star}),0\}.
\end{equation}
Using all the above results together, \eqref{xsol} -- \eqref{kappasol} easily follow from which it is seen that $x^{\star}_n$ depends solely on $\sigma_n^{\star}$. The latter must be chosen so as to satisfy \eqref{KKT2eq} and \eqref{KKT3eq}.
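The three cases above can also be exercised numerically: for any $\sigma_n^\star \ge 0$, the triple \eqref{xsol}--\eqref{kappasol} satisfies the stationarity condition \eqref{KKT1eq}. A small check, again using the hypothetical $h(x)=1/(x+\lambda x^2)$:

```python
import math

def kkt_check(sigma, lam, l, u):
    """Build (x, nu, kappa) from Eqs. (xsol)-(kappasol) and return the
    stationarity residual -h(x) + sigma + nu - kappa of Eq. (KKT1eq);
    h(x) = 1/(x + lam*x**2) is a hypothetical decreasing h_n."""
    def h(x):
        return 1.0 / (x + lam * x * x)
    if sigma < h(u):                                  # case c): x = u
        x = u
    elif sigma >= h(l):                               # case a): x = l
        x = l
    else:                                             # case b): interior
        x = (math.sqrt(1.0 + 4.0 * lam / sigma) - 1.0) / (2.0 * lam)
    nu = max(h(x) - sigma, 0.0)                       # Eq. (nisol)
    kappa = max(sigma - h(x), 0.0)                    # Eq. (kappasol)
    return -h(x) + sigma + nu - kappa

# all three cases of (csi_n) give a zero stationarity residual
for s in (0.05, 1.0, 50.0):
    assert abs(kkt_check(s, 1.0, 0.1, 2.0)) < 1e-9
```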
\section*{Appendix C \\ Proof of Lemma 3}
\begin{algorithm}[t]
\caption{Equivalent form of {\bf{Algorithm 1}}.}
\begin{enumerate}
\item Set $j=1$, $k_0^{\star} = 0$ and $\gamma_n = \rho_n$ for every $n$.
\item {{\bf{While}}} $k_{j-1}^{\star}< N$
\begin{enumerate}
\item Set $\mathcal N_{k_j^{\star}}= \{k_{j-1}^{\star}+1,k_{j-1}^{\star}+2, \ldots, N\}$.
\item For every $n$ in $\mathcal N_{k_j^{\star}}$.
\begin{enumerate}
\item If $\gamma_n < \sum\nolimits_{i=k_{j-1}^{\star}+1}^n u_i$ then compute $\varsigma_{n,j}^\star$ as the solution of
\begin{align}\label{100}
c_n(\varsigma;j)=\sum\limits_{i=k_{j-1}^{\star}+1}^n \xi_i(\varsigma) = \gamma_n
\end{align}
for $\varsigma$.
\item If $\gamma_n \ge \sum\nolimits_{i=k_{j-1}^{\star}+1}^n u_i$ then set
\begin{align}\label{101.0120}
\varsigma_{n,j}^\star = 0.
\end{align}
\end{enumerate}
\item Evaluate
\begin{align}\label{101}
\mu_j^\star= \underset{n\in \mathcal{N}_{k_j^{\star}}}{\max} \quad \varsigma_{n,j}^\star
\end{align}
and
\begin{align}\label{102}
k_j^\star= \underset{n\in \mathcal{N}_{k_j^\star}}{ \max} \left\{n | \varsigma_{n,j}^\star=\mu_j^\star\right\}.
\end{align}
\item Set
\begin{align}\label{104}
\sigma_n^\star \leftarrow \mu_j^\star \quad \text{for}\;\; k_{j-1}^\star < n \le k_j^\star.
\end{align}
\item Set $\gamma_n \leftarrow \rho_n - \rho_{k_{j-1}^\star}$
for $k_{j-1}^\star < n \le N$.
\item Set $j\leftarrow j + 1$.
\end{enumerate}
\end{enumerate}
\end{algorithm}
For the sake of clarity, the steps of Algorithm 1 are restated in the equivalent form illustrated in {\bf{Algorithm 2}}, in which some indices and equations are introduced or reformulated in order to ease the understanding of the mathematical arguments and steps reported below.
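The outer/inner structure of Algorithm 2 can be sketched as follows, with each $\xi_i$ built from the hypothetical $h(x)=1/(x+\lambda x^2)$ and the inner equation \eqref{100} solved by bisection (all numerical values are illustrative):

```python
import math

def make_xi(lam, l, u):
    """xi_i from (csi_n): h^{-1} clipped to [l, u], with the
    hypothetical decreasing h(x) = 1/(x + lam*x**2)."""
    def h(x):
        return 1.0 / (x + lam * x * x)
    def xi(s):
        if s < h(u):
            return u
        if s >= h(l):
            return l
        return (math.sqrt(1.0 + 4.0 * lam / s) - 1.0) / (2.0 * lam)
    return xi

def algorithm2(xis, us, rho):
    """Outer loop of Algorithm 2 (0-based indices); returns sigma_n^star."""
    N = len(xis)
    sigma, k_prev, rho_prev = [0.0] * N, 0, 0.0
    while k_prev < N:
        cand = []                           # (varsigma_{n,j}, n) over the window
        for n in range(k_prev, N):
            idx = range(k_prev, n + 1)
            gamma = rho[n] - rho_prev
            if gamma >= sum(us[i] for i in idx):
                cand.append((0.0, n))       # Eq. (101.0120)
                continue
            lo, hi = 1e-12, 1e12            # solve c_n(s) = gamma by bisection
            for _ in range(200):
                mid = 0.5 * (lo + hi)
                if sum(xis[i](mid) for i in idx) > gamma:
                    lo = mid                # c_n is non-increasing in s
                else:
                    hi = mid
            cand.append((hi, n))
        mu = max(v for v, _ in cand)                      # Eq. (101)
        k = max(n for v, n in cand if v >= mu - 1e-9)     # Eq. (102)
        for i in range(k_prev, k + 1):
            sigma[i] = mu                                 # Eq. (104)
        k_prev, rho_prev = k + 1, rho[k]
    return sigma

# illustrative data: three identical terms, box [0.01, 2], constraints rho
xis = [make_xi(1.0, 0.01, 2.0) for _ in range(3)]
sigma = algorithm2(xis, [2.0] * 3, [0.5, 1.0, 5.0])
```

For these data the first two linear constraints are active while the last multiplier is zero, in agreement with the complementarity conditions \eqref{KKT3eq}.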
{As seen, the $j$th iteration of Algorithm 2 computes the real parameter $\mu_j^{\star}$ and the integer $k_j^{\star} \in \{k_{j-1}^{\star}+1,\ldots,N\}$ through \eqref{101} and \eqref{102}, respectively. The latter are then used in \eqref{104} to obtain $\sigma_n^{\star}$ for $n=k_{j-1}^{\star}+1,\ldots,k_j^{\star}$:}
\begin{equation}\label{sigmasolkr}
\sigma_{n}^{\star} = \mu_j^{\star} \quad \text{for} \;\; k_{j-1}^\star < n \le k_{j}^\star.
\end{equation}
Next, it is shown that the quantities $\sigma_{n}^{\star}$ given by \eqref{sigmasolkr} satisfy \eqref{KKT2eq} and \eqref{KKT3eq}.
{We start proving that $\sigma_{n}^{\star} \ge 0$
as required in \eqref{KKT2eq}. Since the domain of the function $c_n(\varsigma;j)$ in \eqref{100} is the interval $[0,+\infty)$ then the solution $\varsigma_{n,j}^\star$ of $c_n(\varsigma;j)=\rho_n-\rho_{k_{j-1}^{\star}}$ is non-negative or it does not exist. In the latter case, Algorithm 2 sets $\varsigma_{n,j}^\star=0$ (according to \eqref{101.0120}) and hence $\varsigma_{n,j}^\star \ge 0$ in any case. This means that $\mu_j^{\star}$, as computed through \eqref{101}, is non-negative and, consequently, $\sigma_{n}^{\star}$ is non-negative as well.}
{To proceed further, we now show that
\begin{equation}\label{KKT3eq_AxC}
\sigma_n^{\star}-\sigma_{n+1}^{\star} \ge 0
\end{equation}
for $n=1,\ldots,N$ as required in \eqref{KKT3eq}. To this end, we start observing that $\forall j$
\begin{align}\label{sn_inc_1}
\sigma_n^{\star}-\sigma_{n+1}^{\star} & =0 \quad \text{for} \;\; k_{j-1}^{\star} < n < k_{j}^{\star}
\end{align}
as immediately follows from \eqref{sigmasolkr}. On the other hand, for $n=k_{j}^{\star}$ one has
\begin{align}\label{sn_inc_2}
\sigma_{k_{j}^{\star}}^{\star}-\sigma_{k_{j}^{\star}+1}^{\star} =\mu_{j}^{\star}-\mu_{j+1}^{\star}.
\end{align}
From \eqref{sn_inc_1} and \eqref{sn_inc_2}, it clearly follows that to prove \eqref{KKT3eq_AxC} it suffices to show that $\mu_{j}^{\star} \ge \mu_{j+1}^{\star}$. To see how this comes about, we start observing that since each $\xi_n(\varsigma)$ in \eqref{xi_n} is non-increasing then $c_n(\varsigma;j)$ in \eqref{100} is non-increasing as well, so that we may write
\begin{equation}\label{1000}
c_n(\mu_j^{\star};j) \le c_n(\varsigma_{n,j}^\star;j) = \rho_n-\rho_{k_{j-1}^{\star}} \quad n=k_{j-1}^{\star}+1,\ldots,N
\end{equation}
where we have taken into account that by definition $\mu_j^{\star} \ge \varsigma_{n,j}^\star$ (as it follows from \eqref{101}). In particular, \eqref{1000} is satisfied with equality for $n=k_{j}^{\star}$, whereas it is a strict inequality for $n>k_{j}^{\star}$. Indeed, there cannot exist an index $\bar{k} > k_{j}^{\star}$ such that $c_{\bar{k}}(\mu_j^{\star};j) = \rho_{\bar{k}} - \rho_{k_{j-1}^{\star}}$,
because this would mean that $\mu_j^{\star}$ is a solution of both $c_{\bar{k}}(\varsigma;j)=\rho_{\bar{k}}-\rho_{k_{j-1}^{\star}}$ and $c_{k_{j}^{\star}}(\varsigma;j)=\rho_{k_{j}^{\star}}-\rho_{k_{j-1}^{\star}}$. If that were the case, $\bar{k}$ would have been chosen instead of $k_{j}^{\star}$ when applying \eqref{102} at the $j$th step.
Based on the above results, setting $n=k_{j+1}^{\star}>k_{j}^{\star}$ into \eqref{1000} yields
\begin{equation}\label{1022}
c_{k_{j+1}^{\star}}(\mu_j^{\star};j) < \rho_{k_{j+1}^{\star}} - \rho_{k_{j-1}^{\star}}.
\end{equation}
Also, a close inspection of \eqref{100} reveals that for $n> k_j^{\star}$ $c_{n}(\varsigma;j)$ can be rewritten as follows
\begin{equation}
c_{n}(\varsigma;j) = c_{k_j^{\star}}(\varsigma;j) + c_{n}(\varsigma;j+1)
\end{equation}
from which setting $n=k_{j+1}^{\star}$ we obtain
\begin{equation}\label{1010}
c_{k_{j+1}^{\star}}(\varsigma;j) = c_{k_j^{\star}}(\varsigma;j) + c_{k_{j+1}^{\star}}(\varsigma;j+1).
\end{equation}
Replacing $\varsigma$ with $\mu_j^{\star}$ in \eqref{1010} yields
\begin{equation}\label{1011}
c_{k_{j+1}^{\star}}(\mu_j^{\star};j) = \rho_{k_{j}^{\star}} - \rho_{k_{j-1}^{\star}} + c_{k_{j+1}^{\star}}(\mu_j^{\star};j+1)
\end{equation}
where we have taken into account that
\begin{equation}
c_{k_j^{\star}}(\mu_j^{\star};j) = \rho_{k_{j}^{\star}} - \rho_{k_{j-1}^{\star}}
\end{equation}
as it easily follows from the definition of $c_{n}(\varsigma;j)$ in \eqref{100} and from those of $\mu_j^{\star}$ and $k_j^{\star}$ in \eqref{101} and \eqref{102}.}
{Using \eqref{1022} with \eqref{1011} leads to
\begin{equation}\label{1020}
c_{k_{j+1}^{\star}}(\mu_j^{\star};j+1) < \rho_{k_{j+1}^{\star}} - \rho_{k_{j}^{\star}}
\end{equation}
from which recalling that
\begin{equation}
c_{k_{j+1}^{\star}}(\mu_{j+1}^{\star};j+1) = \rho_{k_{j+1}^{\star}} - \rho_{k_{j}^{\star}}
\end{equation}
we obtain
\begin{equation}
c_{k_{j+1}^{\star}}(\mu_j^{\star};j+1) < c_{k_{j+1}^{\star}}(\mu_{j+1}^{\star};j+1).
\end{equation}
Since the functions $c_{n}(\varsigma;j)$ are non-increasing, from the above inequality we eventually obtain $\mu_{j} ^{\star}> \mu_{j+1}^{\star}$
from which using \eqref{sn_inc_2} we have that $\sigma_{k_{j}^{\star}}^{\star}-\sigma_{k_{j}^{\star}+1}^{\star}>0$. Accordingly, from \eqref{sn_inc_1} it follows that $\sigma_n^{\star}-\sigma_{n+1}^{\star} \ge 0$ as required by \eqref{KKT3eq}.}
{We proceed showing that
\begin{align}\label{0010}
\sum\limits_{i=1}^{n}x^\star_i-\rho_n \le 0 \quad n=1,\ldots,N.
\end{align}
For this purpose, observe that for {$k_{j-1}^{\star} <n \le k_{j}^{\star}$} we have that
\begin{align}\label{} \nonumber
\sum\limits_{i=1}^{n}x_i^{\star}& =\rho_{k_{j-1}^{\star}} +\sum\limits_{i= k^\star_{j-1} +1}^{n}\xi_i(\sigma^\star_{i}) \\ \nonumber& =\rho_{k_{j-1}^{\star}} +\sum\limits_{i= k^\star_{j-1} +1}^{n}\xi_i(\mu^\star_{j}) =\rho_{k_{j-1}^{\star}} +c_n(\mu^\star_{j};j)
\end{align}
from which using \eqref{1000} it easily follows that the inequality in \eqref{0010} is always satisfied.}
{We are now left with proving that
\begin{equation}\label{last5e1}
(\sigma_n^{\star}-\sigma_{n+1}^{\star})\Big(\sum\limits_{i=1}^{n}x_i^{\star}-\rho_n\Big)=0 \quad\mathrm{for} \;\; k_{j-1}^{\star}< n \le k_j^{\star}.
\end{equation}
For this purpose, we start observing that \eqref{last5e1} is trivially satisfied for $k_{j-1}^{\star}< n < k_j^{\star}$ due to \eqref{sn_inc_1}.
On the other hand, setting $n=k_j^{\star}$ into \eqref{last5e1} yields
\begin{equation}\label{last5e2}
(\sigma_{k_j^{\star}}^{\star}-\sigma_{k_j^{\star}+1}^{\star})\Big(\sum\limits_{i=1}^{k_j^{\star}}x_i^{\star}-\rho_{k_j^{\star}}\Big)=0
\end{equation}
which holds true if and only if $\sum\nolimits_{i=1}^{k_j^{\star}}x_i^{\star}-\rho_{k_j^{\star}}=0$ since $\sigma_{k_j^{\star}}^{\star} - \sigma_{k_j^{\star}+1}^{\star}>0$. To this end, we observe that
\begin{align}\label{} \nonumber
\sum\limits_{n=1}^{k_j^{\star}}x_n^{\star}& =\sum\limits_{\ell=0}^{j-1}\sum\limits_{i=k_\ell^{\star}+1}^{k_{\ell+1}^{\star}}x_i^{\star} =\sum\limits_{\ell=0}^{j-1}\sum\limits_{i=k_\ell^{\star}+1}^{k_{\ell+1}^{\star}}\xi_i(\mu_{\ell+1}^{\star}) \\
& =\rho_{k_{1}^{\star}}+\sum\limits_{\ell=1}^{j-1}\left(\rho_{k_{\ell+1}^{\star}}-\rho_{k_{\ell}^{\star}}\right)=\rho_{k_{j}^{\star}}
\end{align}
which shows that also \eqref{last5e2} is satisfied.
Collecting all the above results together, it follows that the quantities $\{\sigma_{n}^{\star}\}$ computed by means of Algorithm 1 satisfy the KKT conditions.}
\section*{Appendix D \\ Existence and uniqueness of $\varsigma^{\star}_n$}
{The purpose of this Appendix is to show that the quantities $\varsigma^{\star}_{n}$ required by \eqref{101.10} and \eqref{102.10} for the computation of $\mu^\star$ and $k^\star$, respectively, are always well-defined. This amounts to proving that at each iteration either \eqref{100.10} or \eqref{100.101} provide a \emph{unique} $\varsigma^{\star}_{n} \ge 0$ for any $n \in \mathcal{N}_{j}$.}
{As done in Appendix C, we refer to the equivalent form illustrated in Algorithm 2 and start considering the first iteration for which $j=1$, $\gamma_n=\rho_n$ and $k^\star_0 = 0$. Under the assumption that the problem ($\mathcal{P}$) is feasible (see Proposition 1 in Section II.A), and recalling Remark 4 (see Section III.A), the following two cases are of interest:
{\bf{a}}) $\sum\nolimits_{i=1}^n l_i < \rho_n < \sum\nolimits_{i=1}^n u_i$;
{\bf{b}}) $\rho_n \ge \sum\nolimits_{i=1}^n u_i$.
In particular, case {\bf{a}}) can be easily handled observing that $c_n(\varsigma;1)$ is strictly decreasing in the interval $(\omega_n,\Omega_n)$ with
\begin{align}
\omega_n=\min\{h_i(u_i),i=1,\ldots,n\}\\
\Omega_n=\max\{h_i(l_i),i=1,\ldots,n\}
\end{align}
whereas $c_n(\varsigma;1)=\sum\nolimits_{i=1}^n u_i$ for $\varsigma \in [0,\omega_n]$, and $c_n(\varsigma;1)=\sum\nolimits_{i=1}^n l_i$ for $\varsigma \in [\Omega_n,+\infty)$. Accordingly, if case {\bf{a}}) holds true, then the solution of $c_n(\varsigma;1)=\gamma_n$ exists and is unique, it belongs to the interval $(\omega_n,\Omega_n)$, and coincides with the quantity $\varsigma_{n,1}^\star$ as computed through \eqref{100}. On the other hand, if case {\bf{b}}) holds true, then $\varsigma_{n,1}^\star=0$ as given by \eqref{101.0120}. In both cases, Algorithm 2 produces a single value of $\varsigma_{n,1}^\star \ge 0$ for any $n$.
Consider now the $(j+1)$th step of Algorithm 2. Assume that at the $j$th step the value of $\varsigma_{n,j}^{\star}$ is well-defined (in the sense specified above) for any $n > k^\star_{j-1}$. This means that
$\varsigma_{n,j}^{\star}=0$ if
\begin{align}
\gamma_n = \rho_n-\rho_{k^\star_{j-1}} \ge \sum\limits_{i=k^\star_{j-1}+1}^n u_i.
\end{align}
On the other hand, $\varsigma_{n,j}^{\star} >0$ is the unique solution of $c_n(\varsigma;j)=\rho_n-\rho_{k^\star_{j-1}}$, when
\begin{align}
\sum\limits_{i=k^\star_{j-1}+1}^n l_i < \rho_n-\rho_{k^\star_{j-1}} < \sum\limits_{i=k^\star_{j-1}+1}^n u_i.
\end{align}
In the sequel, it is shown that if the above assumptions hold true then the value of $\varsigma_{n,j+1}^{\star}$ for $n > k^\star_{j}$ is also well-defined at the $(j+1)$th step. This amounts to saying that
\begin{align} \label{139}
\sum\limits_{i=k^\star_{j}+1}^n l_i < \rho_n-\rho_{k^\star_{j}}
\end{align}
for $n > k^\star_{j}$.
By contradiction, assume that there exists an index $\bar{k} > k_{j}^\star$ such that
\begin{align}\label{140}
\sum\limits_{i=k^\star_{j}+1}^{\bar{k}} l_i \ge \rho_{\bar{k}}-\rho_{k^\star_{j}}.
\end{align}
This would mean that $\forall \varsigma >0$
\begin{align}
c_{\bar{k}}(\varsigma; j+1) \ge \sum\limits_{i=k^\star_{j}+1}^{\bar{k}} l_i \ge \rho_{\bar{k}}-\rho_{k^\star_{j}}
\end{align}
from which, recalling that $ c_{\bar{k}}(\varsigma;j+1)=c_{\bar{k}}(\varsigma;j)-c_{k^\star_{j}}(\varsigma;j)$ and setting $\varsigma={\mu}^\star_{j}$,
one would get
\begin{align}
c_{\bar{k}}({\mu}^\star_{j};j) \ge c_{k^\star_{j}}({\mu}^\star_{j};j) +\rho_{\bar{k}}-\rho_{k^\star_{j}} = \rho_{\bar{k}}-\rho_{k^\star_{j-1}}.
\end{align}
This would contradict the fact that $c_{n}({\mu}^\star_{j};j) < \rho_{n}-\rho_{k^\star_{j-1}}$ for any $n > {k^\star_{j}}$, as already shown in \eqref{1000}. Accordingly, we must conclude that there cannot exist an index $\bar{k} > k_{j}^\star$ for which \eqref{140} is satisfied, and hence that \eqref{139} holds $\forall n > k_{j}^\star$. In turn, this amounts to saying that if the values of $\varsigma_{n,j}^{\star}$ for $n>k^\star_{j-1}$ are well-defined at the $j$th step, allowing the computation of $\mu^\star_j$ and $k^\star_{j}$, then the values of $\varsigma_{n,j+1}^{\star}$ for $n>k^\star_{j}$, computed at the $(j+1)$th step, are well-defined as well, allowing the computation of $\mu^\star_{j+1}$ and $k^\star_{j+1}$. Since the values of $\varsigma_{n,j}^{\star}$ are well-defined at the first iteration, they are always well-defined. This concludes the proof.
\bibliographystyle{IEEEtran}
% End of arXiv:1407.4477, "Convex separable problems with linear and box constraints in signal processing and communications" (cs.IT, 2014-07-18).
% arXiv:2004.02365, "Solutions of time-fractional differential equations using homotopy analysis method".
\begin{abstract}
We have used the homotopy analysis method to obtain solutions of linear and nonlinear fractional partial differential equations with initial conditions. We replace the first-order time derivative by the $\psi$-Caputo fractional derivative, and we compare the results obtained by the homotopy analysis method with the exact solutions.
\end{abstract}
\section{Introduction}
Fractional order differential equations, systems of fractional algebraic-dif\-fer\-en\-tial equations and
fractional integrodifferential equations have been widely studied. Methods for obtaining analytical solutions
to these problems, in their nonlinear form, are commonly used, among them the Adomian decomposition method
\cite{Adomian,jafari2006,momani}, homotopy perturbation method (HPM) \cite{momani} and homotopy analysis method (HAM)
\cite{Liao2,Liao}. Jafari and Seifi \cite{jafari2009} have obtained solutions for linear and nonlinear
fractional diffusion and wave equations by means of the HAM. Ganjiani \cite{ganjiani} has
discussed only nonlinear fractional differential equations and Xu et al. \cite{Xu} have discussed
fractional partial differential equations subject to boundary and initial conditions,
both by means of the homotopy analysis method. S{\l}ota et al. \cite{slota}
have applied HAM for solving integrodifferential equations and Zhang et al. \cite{zhang}
have investigated numerical solutions of higher-order fractional integrodifferential equations with
boundary conditions. Zurigat et al. \cite{zurigat2010,zurigat} have used HAM to solve systems of
fractional algebraic-differential equations.
Since we are interested in analytical solutions of fractional partial differential equations, we must choose a
particular fractional derivative. There are several types of fractional derivatives defined in terms of a respective fractional
integral \cite{almeida,danieco,graziane,vanterler}. Perhaps the variety of ways of approaching the fractional derivative
resides in the fact that, until now, we do not have a classical geometric interpretation, as in the case of the
ordinary derivative, where the concept of derivative is associated with the concept of tangency.
Here, we choose the $\psi$-Caputo fractional derivative \cite{almeida} to discuss our applications.
The choice of this fractional differentiation operator is due to the fact that the derivative of a constant is identically zero and that, as a particular case, it recovers the classical Caputo fractional derivative.
The paper is organized as follows. In Section \ref{sec:2}, we present some basic definitions of
fractional calculus. In Section \ref{sec:3}, we describe the HAM, and three examples are presented in Section \ref{sec:4}.
In the first, we discuss the linear time-fractional diffusion equation;
in the second, a nonlinear time-fractional gas-dynamics equation is investigated; in the third, we discuss the
nonlinear time-fractional KdV equation. At the end of each application, numerical solutions are presented to show the
efficiency of the method. Concluding remarks close the paper.
\section{Fractional calculus}
\label{sec:2}
In this section we present some concepts of the fractional calculus that are useful in the remainder of the text.
\begin{definition}
Let $\alpha>0$, $I=[a,b]$ be a finite or infinite interval, $f$ an integrable function defined on $I$ and
$\psi\in{C}^1(I)$ an increasing function such that $\psi'(x)\neq{0}$, for all $x\in I$. The left fractional
integral of $f$ with respect to another function $\psi$ of order $\alpha$ is defined as
\textnormal{\cite{almeida,kilbas}}
\begin{eqnarray}
I^{\alpha,\psi}_{a+}[f(x,t)]=\frac{1}{\Gamma(\alpha)}\int_{a}^{t}\psi'(\tau)(\psi(t)-\psi(\tau))^{\alpha-1}f(x,\tau)\textnormal{d}\tau. \label{integral}
\end{eqnarray}
For $\alpha=0$, we have
$$I^{0,\psi}_{a+}[f(x,t)]=f(x,t).$$
\end{definition}
\begin{definition}
Let $\alpha>0$, $n\in\mathbb{N}$, $I$ is the interval $-\infty\leq{a}<b\leq{\infty}$, $f,\psi\in{C^n}(I)$ two
functions such that $\psi$ is increasing and $\psi'(x)\neq{0}$, for all $x\in I$. The left $\psi$-Caputo
fractional derivative of $f$ of order $\alpha$ is given by \textnormal{\cite{almeida}}
$${^{C}D_{a+}^{\alpha,\psi}}[f(x,t)]=I_{a+}^{n-\alpha,\psi}
\left(\frac{1}{\psi'(t)}\frac{\partial}{\partial t}\right)^n f(x,t),$$
where
$$n=[\alpha]+1 \quad \mbox{for}\quad \alpha\notin\mathbb{N}, \quad\quad
n=\alpha\quad \mbox{for} \quad \alpha\in\mathbb{N}.$$
To simplify notation, we will use the abbreviated notation
$$f^{[n],\psi}(x,t)=\left(\frac{1}{\psi'(t)}\frac{\partial}{\partial t}\right)^n f(x,t).$$
\end{definition}
\begin{property}
Let $f\in C^{n}[a,b]$, $\alpha>0$ and $\delta>0$ \textnormal{\cite{almeida}}.
\begin{enumerate}
\item $f(t)=(\psi(t)-\psi(a))^{\delta-1}$, then
$$I^{\alpha,\psi}_{a+}f(t)=\frac{\Gamma(\delta)}{\Gamma(\alpha+\delta)}(\psi(t)-\psi(a))^{\alpha+\delta-1}.$$
\item $\displaystyle I^{\alpha,\psi}_{a+}{^{C}D^{\alpha,\psi}_{a+}}[f(x,t)]=f(x,t)-
\sum_{k=0}^{n-1}\frac{f^{[k],\psi}(x,a)}{k!}(\psi(t)-\psi(a))^k,$ where\\ $n-1<\alpha<n$ with
$n\in\mathbb{N}.$
\end{enumerate}
\end{property}
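Item 1 of the property can be verified numerically. The sketch below takes $\psi(t)=t$ and $a=0$ (the Riemann--Liouville case of \eqref{integral}) and approximates the integral with a midpoint rule; the parameter values are arbitrary, with $\alpha>1$ chosen so the integrand stays bounded:

```python
import math

def frac_integral(f, alpha, t, a=0.0, n=100_000):
    """Midpoint-rule approximation of I^{alpha,psi}_{a+} f at time t,
    with psi(t) = t (Riemann-Liouville case of Eq. (integral))."""
    h = (t - a) / n
    total = 0.0
    for k in range(n):
        tau = a + (k + 0.5) * h
        total += (t - tau) ** (alpha - 1.0) * f(tau)
    return total * h / math.gamma(alpha)

alpha, delta, t = 1.5, 2.0, 1.3
lhs = frac_integral(lambda tau: tau ** (delta - 1.0), alpha, t)
rhs = math.gamma(delta) / math.gamma(alpha + delta) * t ** (alpha + delta - 1.0)
assert abs(lhs - rhs) < 1e-4
```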
\begin{definition}
Let $\alpha>0$ and $a>0$. The one-parameter Mittag-Leffler function has the power series representation \textnormal{\cite{almeida,mittag}}
\begin{eqnarray}
E_{\alpha}[\psi(t)-\psi(a)]=\sum_{m=0}^{\infty}\frac{(\psi(t)-\psi(a))^m}{\Gamma(m\alpha+1)}. \label{ML}
\end{eqnarray}
\end{definition}
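A truncated series suffices to evaluate the Mittag-Leffler function numerically; with the standard convention that the sum starts at $m=0$, one has $E_\alpha(0)=1$, and for $\alpha=1$ the series reduces to the exponential:

```python
import math

def mittag_leffler(z, alpha, terms=100):
    """Truncated series E_alpha(z) = sum_{m=0}^{terms-1} z**m / Gamma(m*alpha + 1)."""
    return sum(z ** m / math.gamma(m * alpha + 1.0) for m in range(terms))

# E_alpha(0) = 1 for any alpha, and E_1(z) = exp(z)
assert mittag_leffler(0.0, 0.8) == 1.0
assert abs(mittag_leffler(0.7, 1.0) - math.exp(0.7)) < 1e-12
```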
\section{Homotopy analysis method}
\label{sec:3}
In this section we introduce the basic ideas of the HAM by means of the
description of general nonlinear problems.\newpage
We consider the following nonlinear differential equation in a general form
\begin{eqnarray}
\mathcal{N}[u(x,t)]=0, \label{eq-nl}
\end{eqnarray}
where $\mathcal{N}$ is a nonlinear differential operator, $x$ and $t$ are independent variables and $u$ is
an unknown function. We then construct the so-called zero-order deformation equation
\begin{eqnarray}
(1-p)\mathcal{L}[\phi(x,t;p)-u_0(x,t)]=p\hbar H(x,t)\mathcal{N}[\phi(x,t;p)], \label{ordem-zero}
\end{eqnarray}
where $p\in[0,1]$ is an embedding parameter, $\hbar\neq{0}$ is an auxiliary parameter, $H(x,t)$ is an
auxiliary function and $\phi(x,t;p)$ is a function of $x$, $t$ and $p$. Let $u_0(x,t)$ be an
initial approximation of \textnormal{Eq.(\ref{eq-nl})} and $\mathcal{L}={^{C}D_{a+}^{\alpha,\psi}}$ denotes
an auxiliary linear differential operator with the property
$$\mathcal{L}[\phi(x,t)]=0, \quad\quad {\mbox{for}} \quad\quad \phi(x,t)=0.$$
When $p=0$ and $p=1$, we have
$$\phi(x,t;0)=u_0(x,t), \qquad {\mbox{and}} \qquad \phi(x,t;1)=u(x,t),$$
respectively. As the embedding parameter $p$ increases from $0$ to $1$, the solution $\phi(x,t;p)$
varies continuously from the initial approximation $u_0(x,t)$ to the solution $u(x,t)$.
Expanding $\phi(x,t;p)$ in a Taylor's series with respect to $p$, we have
\begin{eqnarray}
\phi(x,t;p)=u_0(x,t)+\sum_{m=1}^{\infty}u_m(x,t)p^{m}, \label{serie}
\end{eqnarray}
where
$$u_m(x,t)=\frac{1}{m!}\frac{\partial^m}{\partial p^m}\phi(x,t;p)\biggl|_{p=0}.$$
Assume that the auxiliary parameter $\hbar$, the auxiliary function $H(x,t)$, the initial
approximation $u_0(x,t)$, and the auxiliary linear operator $\mathcal{L}={^{C}D_{a+}^{\alpha,\psi}}$
are so properly chosen that the series, \textnormal{Eq.(\ref{serie})}, converges at $p=1$. Then,
the series \textnormal{Eq.(\ref{serie})}, at $p=1$, becomes
$$u(x,t)=\phi(x,t;1)=u_0(x,t)+\sum_{m=1}^{\infty}u_m(x,t).$$
Differentiating \textnormal{Eq.(\ref{ordem-zero})}, $m$ times with respect to $p$, then setting $p=0$,
and dividing it by $m!$, we obtain the $m$th-order deformation equation
\begin{eqnarray}
\mathcal{L}[u_m(x,t)-\mathcal{X}_m u_{m-1}(x,t)]=\hbar H(x,t)R_m(\vec{u}_{m-1},x,t), \label{mth}
\end{eqnarray}
with $\vec{u}_n=\{u_0(x,t),u_1(x,t),\ldots,u_n(x,t)\}$ and
$$R_m(\vec{u}_{m-1},x,t)=\frac{1}{(m-1)!}\frac{\partial^{m-1}}{\partial p^{m-1}}\mathcal{N}[\phi(x,t;p)]\biggl|_{p=0}$$
where we have introduced the notation
\begin{eqnarray}
\mathcal{X}_{m}=\left\{
\begin{array}{lcl}
0, \quad m\leq{1},\\
1, \quad m>1. \label{x_m}
\end{array}\right.
\end{eqnarray}
Applying the fractional integral operator $I_{a+}^{\alpha,\psi}$, given by \textnormal{Eq.(\ref{integral})}, to both sides
of \textnormal{Eq.(\ref{mth})}, we have
\begin{eqnarray}
u_m(x,t)&=&\mathcal{X}_{m}u_{m-1}(x,t)-\mathcal{X}_{m}\sum_{k=0}^{n-1}\frac{u_{m-1}^{[k],\psi}(x,a)}{k!}(\psi(t)-\psi(a))^k
\nonumber\\
&+&\hbar H(x,t)I^{\alpha,\psi}_{a+}[R_m(\vec{u}_{m-1},x,t)], \quad\quad m\geq{1}. \label{sol-mth}
\end{eqnarray}
Thus, we obtain $u_1(x,t), u_2(x,t),\cdots$ by means of \textnormal{Eq.(\ref{sol-mth})}.
The $M$th-order approximation of $u(x,t)$ is given by
$$
u(x,t)=\sum_{m=0}^{M}u_m(x,t),
$$
and as $M\rightarrow\infty$ we obtain an increasingly accurate approximation of the solution of \textnormal{Eq.(\ref{eq-nl})}.
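The recursion \textnormal{Eq.(\ref{sol-mth})} can be mechanized for concrete problems. The sketch below is not from the paper: it treats the linear diffusion example solved in the next section under the separable ansatz $u_m(x,t)=\cos(\pi x)\,p_m(t)$, storing $p_m(t)=\sum_k c_k\,(\psi(t)-\psi(a))^{k\alpha}/\Gamma(k\alpha+1)$ as a dict $\{k:c_k\}$, so the fractional integral is just the shift $k\mapsto k+1$ and evaluation at $t=a$ keeps only the $k=0$ term. The test value of $\hbar$ is arbitrary.

```python
import math

HBAR = -0.8               # auxiliary parameter hbar (arbitrary test value)
C = 1.0 - math.pi**2      # cos(pi x) is an eigenfunction: (d^2/dx^2 + 1) scales by C

def frac_integral(p):
    # I^{alpha,psi} maps (psi(t)-psi(a))^{k a}/Gamma(k a + 1) to the (k+1) term
    return {k + 1: c for k, c in p.items()}

def next_term(p_prev, m, hbar=HBAR):
    chi = 0.0 if m <= 1 else 1.0
    # u_m = (chi_m + hbar)[u_{m-1} - u_{m-1}(x,a)] - hbar I[u_xx + u]
    # dropping k = 0 implements the subtraction of u_{m-1}(x,a)
    p = {k: (chi + hbar) * c for k, c in p_prev.items() if k != 0}
    for k, c in frac_integral(p_prev).items():
        p[k] = p.get(k, 0.0) - hbar * C * c
    return p

terms = [{0: 1.0}]        # p_0 = 1, i.e. u_0 = cos(pi x)
for m in range(1, 6):
    terms.append(next_term(terms[-1], m))
```

With $\hbar=-1$ the accumulated coefficients reproduce the Mittag-Leffler pattern $c_k=(1-\pi^2)^k$.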
\section{Applications}
\label{sec:4}
In this section we apply the HAM to solve linear and nonlinear fractional
partial differential equations.\\
\begin{application}
Let $t>0$, $x>0$ and $u=u(x,t)$. Consider the linear time-fractional diffusion equation
\textnormal{\cite{jafari2006,jafari2009}}
\begin{eqnarray}
{^{C}{D}^{\alpha,\psi}_{a+}}u=\frac{\partial^2 u}{\partial x^2}+u, \quad\quad 0<\alpha<1, \label{dif-eq}
\end{eqnarray}
whose solution satisfies the initial condition
\begin{eqnarray}
u(x,a)=\cos(\pi x). \label{cond-inic}
\end{eqnarray}
In order to solve \textnormal{Eq.(\ref{dif-eq})} by means of HAM, satisfying the initial condition given by
\textnormal{Eq.(\ref{cond-inic})}, it is convenient to choose the initial approximation
\begin{eqnarray}
u_0(x,t)=\cos(\pi x) \label{init-approx}
\end{eqnarray}
and the linear differential operator
\begin{eqnarray*}
\mathcal{L}[\phi(x,t;p)]={^{C}}{D}^{\alpha,\psi}_{a+}[\phi(x,t;p)],
\end{eqnarray*}
satisfying the property
$$\mathcal{L}[c]=0,$$
where $c$ is an arbitrary constant. We define the nonlinear differential operator
\begin{eqnarray}
\mathcal{N}[\phi(x,t;p)]={^{C}}{D}^{\alpha,\psi}_{a+}[\phi(x,t;p)]-\frac{\partial^2}{\partial x^2}[\phi(x,t;p)]-
\phi(x,t;p). \label{op-nl}
\end{eqnarray}
Using \textnormal{Eq.(\ref{op-nl})} and the assumption $H(x,t)=1$ we construct the zero-order deformation equation
\begin{eqnarray}
(1-p)\mathcal{L}[\phi(x,t;p)-u_0(x,t)]=p\hbar\mathcal{N}[\phi(x,t;p)].
\end{eqnarray}
Obviously, when $p=0$ and $p=1$, we get
\begin{eqnarray*}
\phi(x,t;0)=u_0(x,t) \quad\quad \mbox{and} \quad\quad \phi(x,t;1)=u(x,t),
\end{eqnarray*}
respectively. So the $m$th-order deformation equation is
\begin{eqnarray}
\mathcal{L}[u_m(x,t)-\mathcal{X}_{m}u_{m-1}(x,t)]=\hbar R_{m}(\vec{u}_{m-1},x,t), \label{ordem-m}
\end{eqnarray}
subject to the initial condition $u_m(x,a)=0$ where $\mathcal{X}_m$ is defined by \textnormal{Eq.(\ref{x_m})} and
$$R_{m}(\vec{u}_{m-1},x,t)={^{C}}{D}^{\alpha,\psi}_{a+}u_{m-1}(x,t)-\frac{\partial^2}{\partial x^2}u_{m-1}(x,t)-
u_{m-1}(x,t).$$
Now we apply the fractional integral operator $I_{a+}^{\alpha,\psi}$ to both sides of \textnormal{Eq.(\ref{ordem-m})} to get
\begin{eqnarray*}
I^{\alpha,\psi}_{a+}\,{^{C}}{D}^{\alpha,\psi}_{a+}[u_m(x,t)-\mathcal{X}_{m}u_{m-1}(x,t)]=\hbar I^{\alpha,\psi}_{a+}
\left[{^{C}}{D}^{\alpha,\psi}_{a+}u_{m-1}(x,t)-\frac{\partial^2}{\partial x^2}u_{m-1}(x,t)-
u_{m-1}(x,t)\right],
\end{eqnarray*}
whose solution has the form
\begin{eqnarray*}
u_m(x,t)&-&\sum_{k=0}^{n-1}\frac{u_{m}^{[k],\psi}(x,a)}{k!}
(\psi(t)-\psi(a))^k-\mathcal{X}_{m}u_{m-1}(x,t)\\
&+&\mathcal{X}_{m}\sum_{k=0}^{n-1}\frac{u_{m-1}^{[k],\psi}(x,a)}{k!}
(\psi(t)-\psi(a))^k=\hbar\left[u_{m-1}(x,t)-\sum_{k=0}^{n-1}\frac{u_{m-1}^{[k],\psi}(x,a)}{k!}
(\psi(t)-\psi(a))^k\right.\\
&-&\left. I^{\alpha,\psi}_{a+}\left(\frac{\partial^2}{\partial x^2}u_{m-1}(x,t)+u_{m-1}(x,t)\right)\right],
\quad\quad m\geq{1}.
\end{eqnarray*}
Since $0<\alpha<1$, we have $n=1$, and we can rewrite the above equation as
\begin{eqnarray}
u_m(x,t)&=&(\mathcal{X}_m+\hbar)u_{m-1}(x,t)-(\mathcal{X}_{m}+\hbar)u_{m-1}(x,a)\nonumber\\
&-&\hbar I^{\alpha,\psi}_{a+}
\left[\frac{\partial^2}{\partial x^2}u_{m-1}(x,t)+u_{m-1}(x,t)\right], \quad\quad m\geq{1}. \label{u_m}
\end{eqnarray}
From \textnormal{Eq.(\ref{init-approx})} and \textnormal{Eq.(\ref{u_m})}, we obtain
\begin{eqnarray*}
u_0(x,t)&=&\cos(\pi x),\\
u_1(x,t)&=&-\hbar I^{\alpha,\psi}_{a+}\left[\frac{\partial^2}{\partial x^2}u_0(x,t)+u_0(x,t)\right]=
-\hbar(1-\pi^2)\cos(\pi x)\frac{(\psi(t)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)},\\
u_2(x,t)&=&-(1+\hbar)\hbar(1-\pi^2)\cos(\pi x)\frac{(\psi(t)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)}+
\hbar^2(1-\pi^2)^2\cos(\pi x)\frac{(\psi(t)-\psi(a))^{2\alpha}}{\Gamma(2\alpha+1)},\\
&\vdots&
\end{eqnarray*}
An accurate approximation of \textnormal{Eq.(\ref{dif-eq})} is given by
$$u(x,t)=u_0(x,t)+u_1(x,t)+u_2(x,t)+\cdots,$$
and, when $\hbar=-1$, we have
\begin{eqnarray*}
u(x,t)&=&\cos(\pi x)\left[1+\frac{(1-\pi^2)}{\Gamma(\alpha+1)}(\psi(t)-\psi(a))^{\alpha}+
\frac{[(1-\pi^2)(\psi(t)-\psi(a))^{\alpha}]^{2}}{\Gamma(2\alpha+1)}\right.\\
&+&\left.\frac{[(1-\pi^2)(\psi(t)-\psi(a))^{\alpha}]^{3}}{\Gamma(3\alpha+1)}+\cdots\right]\\
&=&\cos(\pi x)\left[1+\sum_{m=1}^{\infty}\frac{[(1-\pi^2)(\psi(t)-\psi(a))^{\alpha}]^{m}}{\Gamma(m\alpha+1)}\right],
\end{eqnarray*}
or in terms of the one-parameter Mittag-Leffler function \textnormal{Eq.(\ref{ML})},
\begin{eqnarray}
u(x,t)=\cos(\pi x)E_{\alpha}[(1-\pi^2)(\psi(t)-\psi(a))^{\alpha}]. \label{sol}
\end{eqnarray}
We note two important special cases of \textnormal{Eq.(\ref{sol})}. First, take $\psi(t)=t$ and $a=0$; in this case
the solution \textnormal{Eq.(\ref{sol})} takes the form
\begin{eqnarray}
u(x,t)=\cos(\pi x)\,E_{\alpha}[(1-\pi^2)t^{\alpha}] . \label{sol-t}
\end{eqnarray}
\textnormal{Eq.(\ref{sol-t})} recovers the solutions found by Jafari and Seifi \textnormal{\cite{jafari2009}} by means of the
HAM and by Jafari and Daftardar-Gejji \textnormal{\cite{jafari2006}} using the Adomian
decomposition method.
On the other hand, if $\psi(t)=\ln t$ and $a>0$, the solution \textnormal{Eq.(\ref{sol})} becomes
\begin{eqnarray}
u(x,t)=\cos(\pi x)\,E_{\alpha}\left[(1-\pi^2)\left(\ln\frac{t}{a}\right)^{\alpha}\right]. \label{sol-lnt}
\end{eqnarray}
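As $\alpha\rightarrow 1$ the Mittag-Leffler factor in \textnormal{Eq.(\ref{sol-t})} becomes an exponential, so the solution reduces to the classical $u(x,t)=\cos(\pi x)\,\mathrm{e}^{(1-\pi^2)t}$. A small finite-difference sanity check of this limit against the integer-order diffusion equation; the step size and sample points below are arbitrary choices, not from the paper.

```python
import math

def u(x, t):
    # alpha -> 1 limit of the series solution: cos(pi x) * exp((1 - pi^2) t)
    return math.cos(math.pi * x) * math.exp((1.0 - math.pi**2) * t)

def residual(x, t, h=1e-4):
    # u_t - u_xx - u by central differences; should be ~0 for the exact solution
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_t - u_xx - u(x, t)
```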
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap1.eps}
\caption{Approximate solutions $u(0.1,t)$ using 3 terms and the exact solution of \textnormal{Eq.(\ref{dif-eq})} subject to the initial condition
\textnormal{Eq.(\ref{cond-inic})} with $\psi(t)=t$, $a=0$ and $\alpha\rightarrow{1}$.
Solid line (exact solution): $\hbar=-1$, dashdotted: $\hbar=-0.6$, dashed: $\hbar=-0.8$, and dotted: $\hbar=-1.3$.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap2.eps}
\caption{Exact solutions $u(0.1,t)$ using \textnormal{Eq.(\ref{sol-t})}. Solid line: $\alpha\rightarrow{1}$,
dashdotted: $\alpha=0.9$, and dotted: $\alpha=0.5$.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap3.eps}
\caption{Approximate solutions $u(0.1,t)$ using 3 terms, $\psi(t)=\ln t$, $a=1$ and $\alpha\rightarrow{1}$. Solid line
(exact solution using \textnormal{Eq.(\ref{sol-lnt}))}: $\hbar=-1$, dashdotted: $\hbar=-0.7$, and dotted: $\hbar=-1.2$.}
\end{figure}
\end{application}
\begin{application}
Let $t>0$, $x >0$ and $u=u(x,t)$. Consider the nonlinear time-fractional gas-dynamic equation \textnormal{\cite{shone}}
\begin{eqnarray}
{^{C}{D}^{\alpha,\psi}_{a+}}u+u\cdot\frac{\partial u}{\partial x}-u+u^2=0, \quad\quad 0<\alpha<1, \label{dif-eq2}
\end{eqnarray}
whose solution satisfies the initial condition
\begin{eqnarray}
u(x,a)=\mbox{e}^{-x}. \label{cond-inic-2}
\end{eqnarray}
In order to solve \textnormal{Eq.(\ref{dif-eq2})}, we choose the initial approximation
\begin{eqnarray*}
u_0(x,t)=\mbox{e}^{-x} \label{init-approx-2}
\end{eqnarray*}
and the linear operator
\begin{eqnarray*}
\mathcal{L}[\phi(x,t;p)]={^{C}}{D}^{\alpha,\psi}_{a+}[\phi(x,t;p)],
\end{eqnarray*}
with the property $\mathcal{L}[c]=0$, where $c$ is a constant.
From \textnormal{Eq.(\ref{dif-eq2})}, we define the nonlinear differential operator
\begin{eqnarray*}
\mathcal{N}[\phi(x,t;p)]={^{C}}{D}^{\alpha,\psi}_{a+}[\phi(x,t;p)]+\phi(x,t;p)\cdot\frac{\partial}{\partial x}[\phi(x,t;p)]-
\phi(x,t;p)+[\phi(x,t;p)]^2.
\end{eqnarray*}
Taking $H(x,t)=1$, we construct the zero-order deformation equation
\begin{eqnarray}
(1-p)\mathcal{L}[\phi(x,t;p)-u_0(x,t)]=p\hbar\mathcal{N}[\phi(x,t;p)].
\end{eqnarray}
Obviously, when $p=0$ and $p=1$, we get
\begin{eqnarray*}
\phi(x,t;0)=u_0(x,t)=\mbox{e}^{-x} \quad\quad \mbox{and} \quad\quad \phi(x,t;1)=u(x,t),
\end{eqnarray*}
respectively. The $m$th-order deformation equation is given by
\begin{eqnarray}
\mathcal{L}[u_m(x,t)-\mathcal{X}_{m}u_{m-1}(x,t)]=\hbar R_{m}(\vec{u}_{m-1},x,t), \label{ordem-m2}
\end{eqnarray}
subject to the initial condition $u_m(x,a)=0$, where
\begin{eqnarray}
R_{m}(\vec{u}_{m-1},x,t)&=&{^{C}}{D}^{\alpha,\psi}_{a+}u_{m-1}(x,t)+\sum_{i=0}^{m-1}u_i(x,t)\cdot\frac{\partial}{\partial x}u_{m-1-i}(x,t)-u_{m-1}(x,t)\nonumber\\
&+&\sum_{i=0}^{m-1}u_i(x,t)u_{m-1-i}(x,t).
\end{eqnarray}
Operating the fractional integral operator $I_{a+}^{\alpha,\psi}$ on both sides of \textnormal{Eq.(\ref{ordem-m2})},
we have
\begin{eqnarray}
&&u_m(x,t)=(\mathcal{X}_m+\hbar)u_{m-1}(x,t)-(\mathcal{X}_{m}+\hbar)u_{m-1}(x,a) \label{u_m-2}\\
&&+\hbar I^{\alpha,\psi}_{a+}
\left[\sum_{i=0}^{m-1}u_i(x,t)\cdot\frac{\partial}{\partial x}u_{m-1-i}(x,t)-u_{m-1}(x,t)+
\sum_{i=0}^{m-1}u_i(x,t)u_{m-1-i}(x,t)\right]. \nonumber
\end{eqnarray}
In this way, we obtain
\begin{eqnarray*}
u_0(x,t)&=&\mbox{e}^{-x},\\
u_1(x,t)&=&\hbar I^{\alpha,\psi}_{a+}\left\{u_0(x,t)\cdot\frac{\partial}{\partial x}u_0(x,t)-u_0(x,t)+[u_0(x,t)]^2\right\}
=-\hbar\mbox{e}^{-x}\frac{(\psi(t)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)},\\
u_2(x,t)&=&(1+\hbar)u_1(x,t)-(1+\hbar)u_1(x,a)+\hbar{I^{\alpha,\psi}_{a+}}\left\{u_0(x,t)\cdot\frac{\partial}{\partial x}u_1(x,t)\right.\\
&+&\left. u_1(x,t)\cdot\frac{\partial}{\partial x}u_0(x,t)-u_1(x,t)+2u_1(x,t)\cdot u_0(x,t)\right\}\\
&=&-(1+\hbar)\hbar\,\mbox{e}^{-x}\frac{(\psi(t)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)}+
{\hbar^2}\mbox{e}^{-x}\frac{(\psi(t)-\psi(a))^{2\alpha}}{\Gamma(2\alpha+1)},\\
u_3(x,t)&=&(1+\hbar)u_2(x,t)-(1+\hbar)u_2(x,a)+\hbar{I^{\alpha,\psi}_{a+}}\left\{u_0(x,t)\cdot\frac{\partial}{\partial x}u_2(x,t)\right.\\
&+&\left. u_1(x,t)\cdot\frac{\partial}{\partial x}u_1(x,t)+u_2(x,t)\cdot\frac{\partial}{\partial x}u_0(x,t)
-u_2(x,t)\right.\\
&+&2u_0(x,t)\cdot u_2(x,t)+[u_1(x,t)]^2\bigg\}\\
&=&-(1+\hbar)^2 \hbar\,\mbox{e}^{-x}\frac{(\psi(t)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)}+
2{\hbar^2}(1+\hbar)\mbox{e}^{-x}\frac{(\psi(t)-\psi(a))^{2\alpha}}{\Gamma(2\alpha+1)}\\
&-&{\hbar^3}\mbox{e}^{-x}\frac{(\psi(t)-\psi(a))^{3\alpha}}{\Gamma(3\alpha+1)},\\
&\vdots&
\end{eqnarray*}
The solution $u(x,t)$ is given by
$$u(x,t)=u_0(x,t)+u_1(x,t)+u_2(x,t)+\cdots$$
and if $\hbar=-1$, we have
\begin{eqnarray}
u(x,t)&=&\mbox{e}^{-x}\left[1+\frac{(\psi(t)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)}+
\frac{(\psi(t)-\psi(a))^{2\alpha}}{\Gamma(2\alpha+1)}+
\frac{(\psi(t)-\psi(a))^{3\alpha}}{\Gamma(3\alpha+1)}+\cdots\right]\nonumber\\
&=&\mbox{e}^{-x}E_{\alpha}[(\psi(t)-\psi(a))^{\alpha}]. \label{sol-2}
\end{eqnarray}
In particular, if $\psi(t)=t$ and $a=0$, we can write the solution as
\begin{eqnarray}
u(x,t)=\mbox{e}^{-x}E_{\alpha}(t^{\alpha}) \label{sol-t-2}
\end{eqnarray}
and if $\alpha\rightarrow{1}$, we obtain the solution found by Shone and Patra \textnormal{\cite{shone}}
using the fractional complex transform and a new iterative method, that is, $u(x,t)=e^{t-x}$. On the other hand, if
$\psi(t)=\ln t$ and $a>0$, we obtain
\begin{eqnarray}
u(x,t)=\mbox{e}^{-x}\,E_{\alpha}\left[\left(\ln\frac{t}{a}\right)^{\alpha}\right]. \label{sol-lnt-2}
\end{eqnarray}
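A convenient feature of this example is that, for the ansatz $u_m(x,t)=\mathrm{e}^{-x}c_m(t)$, the two convolution sums in \textnormal{Eq.(\ref{u_m-2})} cancel (they differ only by the sign produced by $\partial_x \mathrm{e}^{-x}=-\mathrm{e}^{-x}$), so the recursion collapses to a scalar one. The sketch below is not from the paper; the value of $\hbar$ is an arbitrary test choice. It stores $c_m(t)=\sum_k c_k\,(\psi(t)-\psi(a))^{k\alpha}/\Gamma(k\alpha+1)$ as a dict and reproduces the coefficients of $u_1,u_2,u_3$ displayed above.

```python
HBAR = -0.7  # auxiliary parameter hbar (arbitrary test value)

def frac_integral(c):
    # I^{alpha,psi} shifts the exponent index k by one
    return {k + 1: v for k, v in c.items()}

def next_c(c_prev, m, hbar=HBAR):
    chi = 0.0 if m <= 1 else 1.0
    # c_m = (chi_m + hbar)[c_{m-1} - c_{m-1}(t=a)] - hbar I[c_{m-1}]
    # (the nonlinear convolution terms cancel for u_m = exp(-x) c_m(t))
    c = {k: (chi + hbar) * v for k, v in c_prev.items() if k != 0}
    for k, v in frac_integral(c_prev).items():
        c[k] = c.get(k, 0.0) - hbar * v
    return c

c = [{0: 1.0}]           # c_0 = 1, i.e. u_0 = exp(-x)
for m in range(1, 4):
    c.append(next_c(c[-1], m))
```

The entries of `c[3]` match the three coefficients of $u_3$ above: $-(1+\hbar)^2\hbar$, $2\hbar^2(1+\hbar)$ and $-\hbar^3$.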
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap4.eps}
\caption{Approximate solutions $u(0.2,t)$ using 4 terms and the exact solution of \textnormal{Eq.(\ref{dif-eq2})} subject to the initial
condition \textnormal{Eq.(\ref{cond-inic-2})} with $\psi(t)=t$, $a=0$ and $\alpha\rightarrow{1}$. Solid line (exact solution): $\hbar=-1$,
dashdotted: $\hbar=-0.6$, and dashed: $\hbar=-1.4$.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap5.eps}
\caption{Exact solutions $u(0.2,t)$ using \textnormal{Eq.(\ref{sol-t-2})}. Solid line: $\alpha\rightarrow{1}$,
dashdotted: $\alpha=0.75$, and dotted: $\alpha=0.4$.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap6.eps}
\caption{Approximate solutions $u(0.2,t)$ using 4 terms, $\psi(t)=\ln t$, $a=1$ and $\alpha\rightarrow{1}$.
Solid line (exact solution using \textnormal{Eq.(\ref{sol-lnt-2}))}: $\hbar=-1$, dashdotted: $\hbar=-2$, and dashed: $\hbar=-0.5$.}
\end{figure}
\end{application}
\begin{application}
\noindent Let $t>0$, $x>0$, $u=u(x,t)$ and $0<\alpha<1$. Consider the following nonlinear time-fractional KdV equation
\textnormal{\cite{momani,rehman}}
\begin{eqnarray}
{^{C}{D}^{\alpha,\psi}_{a+}}u(x,t)-\frac{\partial}{\partial x}[u(x,t)]^2+\frac{\partial}{\partial x}
\left[u(x,t)\frac{\partial^2}{\partial x^2}u(x,t)\right]=0, \label{dif-eq3}
\end{eqnarray}
whose solution satisfies the initial condition
\begin{eqnarray}
u(x,a)=\sinh^2 \left(\frac{x}{2}\right). \label{cond-inic-3}
\end{eqnarray}
Let $u_0(x,t)$ denote an initial approximation of $u(x,t)$, that is,
\begin{eqnarray}
u_0(x,t)=\sinh^2 \left(\frac{x}{2}\right) \label{init-approx-3}
\end{eqnarray}
and we choose the linear differential operator $\mathcal{L}={^{C}{D}^{\alpha,\psi}_{a+}}$, with the condition
$\mathcal{L}[c]=0$ where $c$ is a constant.
From \textnormal{Eq.(\ref{dif-eq3})}, we define the nonlinear differential operator
\begin{eqnarray*}
\mathcal{N}[\phi(x,t;p)]={^{C}}{D}^{\alpha,\psi}_{a+}[\phi(x,t;p)]-\frac{\partial}{\partial x}[\phi(x,t;p)]^2+
\frac{\partial}{\partial x}\left[\phi(x,t;p)\frac{\partial^2}{\partial x^2}\phi(x,t;p)\right].
\end{eqnarray*}
With the choice $H(x,t)=1$ we have the zero-order deformation equation
\begin{eqnarray}
(1-p)\mathcal{L}[\phi(x,t;p)-u_0(x,t)]=p\hbar\mathcal{N}[\phi(x,t;p)].
\end{eqnarray}
Obviously, when $p=0$ and $p=1$, we get
\begin{eqnarray*}
\phi(x,t;0)=u_0(x,t)=\sinh^2 \left(\frac{x}{2}\right) \quad\quad \mbox{and} \quad\quad \phi(x,t;1)=u(x,t),
\end{eqnarray*}
respectively. The $m$th-order deformation equation can be expressed by
\begin{eqnarray}
\mathcal{L}[u_m(x,t)-\mathcal{X}_{m}u_{m-1}(x,t)]=\hbar R_{m}(\vec{u}_{m-1},x,t), \label{ordem-m3}
\end{eqnarray}
where
\begin{eqnarray*}
R_{m}(\vec{u}_{m-1},x,t)&=&{^{C}}{D}^{\alpha,\psi}_{a+}u_{m-1}(x,t)-\frac{\partial}{\partial x}\left[\sum_{i=0}^{m-1}u_i(x,t)\cdot u_{m-1-i}(x,t)\right.\\
&-&\left.\sum_{i=0}^{m-1}u_i(x,t)\frac{\partial^2}{\partial x^2}u_{m-1-i}(x,t)\right].
\end{eqnarray*}
Applying the fractional operator $I_{a+}^{\alpha,\psi}$ to this equation we find
\begin{eqnarray}
u_m(x,t)&=&(\mathcal{X}_m+\hbar)u_{m-1}(x,t)-(\mathcal{X}_{m}+\hbar)u_{m-1}(x,a) \label{u_m-3}\\
&-&\hbar I^{\alpha,\psi}_{a+}
\left[\frac{\partial}{\partial x}\left(\sum_{i=0}^{m-1}u_i(x,t)\cdot u_{m-1-i}(x,t)-\sum_{i=0}^{m-1}u_{i}(x,t)
\frac{\partial^2}{\partial x^2}u_{m-1-i}(x,t)\right)\right]. \nonumber
\end{eqnarray}
Thereafter, we successively obtain
\begin{eqnarray*}
u_0(x,t)&=&\sinh^2 \left(\frac{x}{2}\right),\\
u_1(x,t)&=&-\hbar I^{\alpha,\psi}_{a+}\left[\frac{\partial}{\partial x}\left([u_0(x,t)]^2-
u_0(x,t)\frac{\partial^2}{\partial x^2}u_0(x,t)\right)\right]=\hbar\sinh(x)\frac{(\psi(t)-\psi(a))^{\alpha}}{4\Gamma(\alpha+1)},\\
u_2(x,t)&=&(1+\hbar)u_1(x,t)-\hbar{I^{\alpha,\psi}_{a+}}\left[\frac{\partial}{\partial x}\left(2u_0(x,t)\cdot u_1(x,t)-
u_0(x,t)\frac{\partial^2}{\partial x^2}u_1(x,t)-u_1(x,t)\frac{\partial^2}{\partial x^2}u_0(x,t)\right)\right]\\
&=&(1+\hbar)\hbar\,\sinh(x)\,\frac{(\psi(t)-\psi(a))^{\alpha}}{4\Gamma(\alpha+1)}+
\hbar^2\cosh(x)\,\frac{(\psi(t)-\psi(a))^{2\alpha}}{8\Gamma(2\alpha+1)},\\
&\vdots&
\end{eqnarray*}
The second-order approximation of $u(x,t)$ is
\begin{eqnarray*}
u(x,t)&=&\sinh^2 \left(\frac{x}{2}\right)+\hbar\sinh(x)\frac{(\psi(t)-\psi(a))^{\alpha}}{4\Gamma(\alpha+1)}+
(1+\hbar)\hbar\sinh(x)\,\frac{(\psi(t)-\psi(a))^{\alpha}}{4\Gamma(\alpha+1)}\\
&+&\hbar^2\cosh(x)\,\frac{(\psi(t)-\psi(a))^{2\alpha}}{8\Gamma(2\alpha+1)}.
\end{eqnarray*}
Taking $\hbar=-1$, we have
\begin{eqnarray}
u(x,t)=\sinh^2 \left(\frac{x}{2}\right)-\frac{\sinh(x)}{4}\frac{(\psi(t)-\psi(a))^{\alpha}}{\Gamma(\alpha+1)}+
\frac{\cosh(x)}{8}\frac{(\psi(t)-\psi(a))^{2\alpha}}{\Gamma(2\alpha+1)}. \label{sol-2nd}
\end{eqnarray}
If $\psi(t)=t$ and $a=0$, the second-order approximation of $u(x,t)$ \textnormal{Eq.(\ref{sol-2nd})} becomes
\begin{eqnarray}
u(x,t)=\sinh^2 \left(\frac{x}{2}\right)-\frac{t^{\alpha}}{4\Gamma(\alpha+1)}\,\sinh(x)+
\frac{t^{2\alpha}}{8\Gamma(2\alpha+1)}\,\cosh(x).\label{sol-t-3}
\end{eqnarray}
This solution is identical to the one obtained by Rehman et al. \textnormal{\cite{rehman}}
using the combination of the double Sumudu transform and the homotopy perturbation method, and also
to the one obtained by Momani et al. \textnormal{\cite{momani}} by the homotopy perturbation method.
On the other hand, if $\psi(t)=\ln t$ and $a>0$, we obtain
\begin{eqnarray}
u(x,t)=\sinh^2 \left(\frac{x}{2}\right)-\frac{\sinh(x)}{4\Gamma(\alpha+1)}\left(\ln\frac{t}{a}\right)^\alpha+
\frac{\cosh(x)}{8\Gamma(2\alpha+1)}\left(\ln\frac{t}{a}\right)^{2\alpha}. \label{sol-t-4}
\end{eqnarray}
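The first step of the recursion \textnormal{Eq.(\ref{u_m-3})} can be checked independently: with $u_0=\sinh^2(x/2)$ (so $u_0''=\cosh(x)/2$), the integrand $-\partial_x\!\left[u_0^2-u_0u_0''\right]$ must equal $\sinh(x)/4$, exactly the spatial factor of $u_1$ above. A small numerical check; the step size and sample points are arbitrary choices, not from the paper.

```python
import math

def u0(x):
    return math.sinh(x / 2) ** 2

def g(x):
    # u_0^2 - u_0 u_0'' with u_0'' = cosh(x)/2 taken in closed form
    return u0(x) ** 2 - u0(x) * math.cosh(x) / 2

def minus_dg_dx(x, h=1e-5):
    # -d/dx [u_0^2 - u_0 u_0''] by a central difference
    return -(g(x + h) - g(x - h)) / (2 * h)
```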
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap7.eps}
\caption{Approximate solutions $u(1,t)$ using 3 terms, $\psi(t)=t$, $a=0$ and $\alpha\rightarrow{1}$. Solid line: $\hbar=-1$,
dashdotted: $\hbar=-2$, and dashed: $\hbar=-0.8$.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap8.eps}
\caption{Approximate solutions $u(1,t)$ using \textnormal{Eq.(\ref{sol-t-3})}. Solid line: $\alpha\rightarrow{1}$,
dashdotted: $\alpha=0.8$, and dashed: $\alpha=0.6$.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{grap9.eps}
\caption{Approximate solutions $u(1,t)$ using \textnormal{Eq.(\ref{sol-t-4})}. Solid line: $\alpha\rightarrow{1}$,
dashdotted: $\alpha=0.7$, and dashed: $\alpha=0.6$.}
\end{figure}
\end{application}
\section{Concluding remarks}
\label{sec:5}
In this paper we have applied the HAM to obtain approximate solutions of linear and nonlinear fractional
partial differential equations in which the first-order time derivative is replaced by the $\psi$-Caputo
fractional derivative. We obtained explicit series solutions and presented numerical results
for different values of $\alpha$ and of the auxiliary parameter $\hbar$. The convergence region of the
series solution obtained by means of the HAM can be controlled by adjusting the auxiliary
parameter $\hbar$. Mathematica was used to draw the graphs.
| {
"timestamp": "2020-04-07T02:21:40",
"yymm": "2004",
"arxiv_id": "2004.02365",
"language": "en",
"url": "https://arxiv.org/abs/2004.02365",
"abstract": "We have used the homotopy analysis method to obtain solutions of linear and nonlinear fractional partial differential equations with initial conditions. We replace the first order time derivative by $\\psi$-Caputo fractional derivative, and also we compare the results obtained by the homotopy analysis method with the exact solutions.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Solutions of time-fractional differential equations using homotopy analysis method"
} |
https://arxiv.org/abs/alg-geom/9304006 | Recovering of curves with involution by extended Prym data | With every smooth, projective algebraic curve $\tilde{C}$ with involution $\sigma :\tilde{C}\longrightarrow \tilde{C}$ without fixed points is associated the Prym data which consists of the Prym variety $P:=(1-\sigma )J(\tilde{C})$ with principal polarization $\Xi$ such that $2\Xi$ is algebraically equivalent to the restriction on $P$ of the canonical polarization $\Theta $ of the Jacobian $J(\tilde{C})$. In contrast to the classical Torelli theorem the Prym data does not always determine uniquely the pair $(\tilde{C},\sigma )$ up to isomorphism. In this paper we introduce an extension of the Prym data as follows. We consider all symmetric theta divisors $\Theta $ of $J(\tilde{C})$ which have even multiplicity at every point of order 2 of $P$. It turns out that they form three $P_2$ orbits. The restrictions on $P$ of the divisors of one of the orbits form the orbit $\{ 2\Xi \} $, where $\Xi $ are the symmetric theta divisors of $P$. The other restrictions form two $P_2$-orbits $O_1,O_2\subset \mid 2\Xi \mid $. The extended Prym data consists of $(P,\Xi )$ together with $O_1,O_2$. We prove that it determines uniquely the pair $(\tilde{C} ,\sigma )$ up to isomorphism provided $g(\tilde{C})\geq 3$. The proof is analogous to Andreotti's proof of Torelli's theorem and uses the Gauss map for the divisors of $O_1,O_2$. The result is an analog in genus $>1$ of a classical theorem for elliptic curves. | \section*{Introduction}
\footnotetext{Supported in part by the Bulgarian foundation
"Scientific research" and by NSF under the US-Bulgarian project
"Algebra and algebraic geometry".}
The classical Torelli theorem states that every smooth, projective
algebraic curve $X$ is determined uniquely up to isomorphism by its
principally polarized Jacobian $(J(X),\Theta )$. In this paper we
consider curves $\tilde{C}$ with involution without fixed points
$\sigma :\tilde{C}\longrightarrow \tilde{C}$. We let
$C=\tilde{C}/\sigma $ and denote by $\pi :\tilde{C}\longrightarrow
C$ the factor map. One associates the principally polarized Prym
variety $(P,\Xi )$ where $P=P(\tilde{C},\sigma )=(1-\sigma
)J(\tilde{C})$ and $\Theta \mid _P$ is algebraically equivalent to
$2\Xi$. The natural question,
whether the Prym variety determines uniquely the pair
$(\tilde{C},\sigma )$ up to isomorphism, has a negative answer in
general if the genus $g$ of $C$ is $\leq 6$, as well as in every genus
$\geq 7$ for some special loci of curves, e.g. for hyperelliptic $C$.
The problem of spotting the pairs $(\tilde{C},\sigma )$ which are not
determined uniquely by the Prym variety is still open. Some
partial results have been obtained in
\cite{f-s},\cite{kan},\cite{don},\cite{don1},\cite{deb1},\cite{deb2},
\cite{nar},\cite{verra}.
In this paper we propose an extension of the Prym data $(P,\Xi )$
and prove that it determines uniquely up to isomorphism any pair
$(\tilde{C},\sigma )$ for $g\geq 2$. Our extension originates from the
following observation. Consider the case of $\tilde{C}$ of genus 1.
Here the involution is a translation $t_{\mu }$ by some point $\mu $
of order 2 in $J(\tilde{C})$. Notice that $\{ 0,\mu \} =Ker(Nm_{\pi }:
J(\tilde{C})\longrightarrow J(C))$. The Prym variety is equal to 0.
There is a classically known data which determines uniquely the pair
$(\tilde{C},t_{\mu })$ up to isomorphism (see e.g. \cite{mum2},
\cite{clem}). Namely, one can always represent $J(\tilde{C})$ as
${\bf C}/{\bf Z}\tau +{\bf Z}$, where $\tau $ belongs to the
upper-half plane ${\cal H}$, so that
$$
\mu =\frac{1}{2}\tau +\frac{1}{2}(mod {\bf Z}\tau +{\bf Z}).
$$
So, the moduli space of pairs $(\tilde{C},\sigma )$ is isomorphic to
$\Gamma _{1,2}\backslash {\cal H}$ where $\Gamma _{1,2}\subset
PGL(2,{\bf Z})$ is the subgroup
\[
\Gamma _{1,2}=\{ \left( \begin{array}{cc}a&b\\c&d \end{array} \right)
\mid ad-bc=1,ab\equiv 0(mod 2),cd\equiv 0(mod 2)
\} \]
One considers the three even theta functions with characteristics:
$\theta _{00}(z,\tau ),\theta _{01}(z,\tau )$ and $\theta
_{10}(z,\tau )$. Just one of them vanishes on $\mu $, namely $\theta
_{00}(z,\tau )$. Now, by the transformation law for theta functions
one checks that for
\begin{equation}\label{ii.1}
\lambda (\tau )=-\theta _{01}(0,\tau )^4/\theta _{10}(0,\tau )^4
\end{equation}
the set $\{ \lambda (\tau ),1/\lambda (\tau )\} $, or equivalently
the function $k(\tau )=\lambda (\tau )+1/\lambda (\tau )$, remains
invariant with respect to the action of $\Gamma _{1,2}$. Moreover the
map
$$
k:\Gamma _{1,2}\backslash {\cal H}\longrightarrow {\bf C}^*-\{ 0,2\}
$$
is an analytic isomorphism (see Section~(\ref{s2})).
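The $\Gamma _{1,2}$-invariance of $k(\tau )$ can be spot-checked numerically. The sketch below is not from the paper: it assumes the standard theta-constant series $\theta _{01}(0,\tau )=\sum_n(-1)^nq^{n^2}$ and $\theta _{10}(0,\tau )=\sum_nq^{(n+1/2)^2}$ with $q=e^{i\pi \tau }$, and tests the two elements $\tau \mapsto \tau +2$ and $\tau \mapsto -1/\tau $ of $\Gamma _{1,2}$ at an arbitrary point of ${\cal H}$; the truncation bound is also arbitrary.

```python
import cmath

N = 30  # truncation bound for the theta series (arbitrary)

def theta01(tau):
    # theta_{01}(0, tau) = sum_n (-1)^n q^{n^2},  q = exp(i pi tau)
    q = cmath.exp(1j * cmath.pi * tau)
    return sum((-1) ** n * q ** (n * n) for n in range(-N, N + 1))

def theta10(tau):
    # theta_{10}(0, tau) = sum_n q^{(n + 1/2)^2}
    q = cmath.exp(1j * cmath.pi * tau)
    return sum(q ** ((n + 0.5) ** 2) for n in range(-N, N + 1))

def k(tau):
    lam = -theta01(tau) ** 4 / theta10(tau) ** 4   # lambda(tau) of Eq. (ii.1)
    return lam + 1 / lam
```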
Generalizing to genus greater than 1, first we have that
$Ker(Nm_{\pi }:J(\tilde{C})\longrightarrow J(C))=P\cup P_{\_ }$ where
$P_{\_ }$ is a translation of $P$ by a point of order 2. One considers
the symmetric theta divisors $\Theta $ of $J(\tilde{C})$ which have
the property that for every $\rho \in P_2$ either $\rho \not \in
\Theta $ or $\rho \in \Theta $ and $mult_{\rho }\Theta $ is even. It
turns out that there are three $P_2$-orbits of symmetric theta
divisors with this property. The divisors of one of the orbits contain
$P_{\_ }$ ; these are exactly the theta divisors which appear in
Wirtinger's theorem \cite{fay},\cite{mum} and satisfy $\Theta .P=2\Xi
$ for symmetric theta divisors $\Xi $ of $P$. None of the divisors of
the other two orbits contains $P_{\_ }$. Restricting the latter to $P$
we obtain two $P_2$-orbits of divisors in the linear system
$\mid 2\Xi \mid $ which we denote by $O_1,O_2$.
The extended Prym data consists of $(P,\Xi )$ together with the
two $P_2$-orbits $O_1,O_2\subset \mid 2\Xi \mid $. Our result is that
for $g\geq 2$ it determines uniquely up to isomorphism any pair
$(\tilde{C},\sigma )$. The proof is analogous to Andreotti's proof
of Torelli's theorem and uses the Gauss map for the divisors of $O_i$.
Special treatment is required if $C$ is hyperelliptic, or
bi-elliptic,
or $g=3$. In fact the bi-elliptic case has already been considered
earlier by Naranjo in \cite{nar} who proved that for $g\geq 10$ the
pair $(\tilde{C},\sigma )$ can be recovered by the ordinary Prym data
$(P,\Xi )$. His arguments however do not work for $g=4$ or $5$. If
$\eta $ is the point of order 2 in $Pic^0(C)$ which determines the
covering $\pi:\tilde{C}\longrightarrow C$ then the divisors of $O_i$
are equal to translations of connected components of the set
$$
Z=\{ L\in Pic^{2g-2}(\tilde{C})\mid Nm_{\pi }(L)\simeq K_C\otimes
\eta ,\; h^0(\tilde{C},L)\geq 1\}
$$
It is interesting that line bundles $L$ of this type appear also in
the study of rank 2 vector bundles on $C$ with canonical determinant
\cite{be} as well as in representing $(\tilde{C},\sigma )$ as the
spectral curves associated to $sp(2n)$-matrices with parameter.
{\bf Acknowledgement.} Part of this work was done while the author
was a visitor in the University of Michigan and the University of
Utah. The hospitality of these institutions is gratefully
acknowledged.
\begin{center}{\bf Contents}\end{center}
0. Notation and preliminaries. 1. Double unramified coverings of
elliptic curves. 2. Extended Prym data. 3. The semicanonical map and
the Gauss map. 4. The hyperelliptic case, $g\geq 2$. 5. The
bi-elliptic case, $g\geq 4$. 6. The case $g=3$.
\section{Notation and preliminaries}\label{s1}
We denote by $\equiv$ the linear equivalence of divisors. Let $X$ be
an algebraic, smooth, irreducible curve. We denote by $J_d(X)$ the
divisor classes of degree $d$ modulo linear equivalence and by
$Pic^d(X)$ the isomorphism classes of invertible sheaves of degree $d$
on $X$. Abusing the notation we shall write by the same letter an
element in $J_d(X)$ and the corresponding element in $Pic^d(X)$. If
$D$ is a divisor of degree $d$ and $\xi = cl(D)$ its class in
$J_d(X)$ we write
$$
h^0(X,D) = h^0(X,\xi ) = dim\mid \xi \mid + 1
$$
If $L$ is an invertible sheaf on $X$, then $\mid L \mid $
is the linear system of divisors of sections of $H^0(X,L)$. If $\mid
L \mid $ is without base points we denote by $\varphi _L = \varphi
_{\mid L
\mid } $ the map $X \longrightarrow \mid L \mid ^*$ defined by
$\varphi _L(x) = \mid L(-x) \mid +x$.
Let $A$ be a principally polarized abelian variety of dimension $g$
isomorphic to ${\bf C}^g/\Lambda _{\tau }$, where $\Lambda _{\tau } =
{\bf Z}^g\tau + {\bf Z}^g$ with $\tau \in {\cal H}_g $, where
${\cal H}_g$ is the Siegel upper-half space. Any point $e \in
{\bf C}^g$ has two characteristics $\epsilon ,\delta \in {\bf R}^g$
such that $e =\epsilon \tau + \delta $. We shall write sometimes $e =
\left[ \begin{array}{c} \epsilon \\ \delta \end{array} \right] $. Let
$x \in
A$ and let $x = (x'\tau +x'')(mod \Lambda _{\tau})$ with $x',x''\in
{\bf R}^g$. If no confusion arises we shall refer to $x',x''$ as the
characteristics of $x$, keeping in mind that $x',x''$ are determined
modulo ${\bf Z}^g$. We shall denote by $A_2$ the points of order 2 in
$A$. Let $\lambda =\lambda '\tau +\lambda '' ,
\mu = \mu '\tau + \mu '' \in
A_2 $. The Weil pairing $e_2 : A_2 \times A_2 \longrightarrow
{\bf Z}_2$ is defined by
$$
e_2(\lambda ,\mu) = 4(\lambda '^t\mu '' - \mu '^t\lambda '')(mod 2)
$$
Let $\Theta $ be a symmetric theta divisor. One defines a
quadratic form $q_{\Theta } : A_2 \longrightarrow {\bf Z}_2 $
associated with $\Theta $ by
\begin{equation}\label{e5.1}
q_{\Theta }(\lambda ) = mult_0(\Theta +\lambda )(mod 2)
\end{equation}
Consider the theta function with characteristics $\lambda ',\lambda
''\in \frac{1}{2} {\bf Z}^g$
$$
\theta \left[ \begin{array}{c} \lambda '\\ \lambda'' \end{array}
\right] (z,\tau )
= \sum_{n\in {\bf Z}^g}\exp {(\pi i(n+\lambda ')\tau ^t(n+\lambda ') +
2\pi i(n+\lambda ')^t(z+\lambda ''))}
$$
Then
$$
\theta \left[ \begin{array}{c} \lambda '\\ \lambda'' \end{array}
\right] (-z,\tau ) =
\theta \left[ \begin{array}{c} -\lambda '\\ -\lambda'' \end{array}
\right] (z,\tau ) =
(-1)^{4\lambda '^t\lambda ''}
\theta \left[ \begin{array}{c} \lambda '\\ \lambda'' \end{array}
\right] (z,\tau )
$$
Thus if $\Theta $ is the divisor of the theta function
$\theta \left[ \begin{array}{c} 0\\ 0 \end{array} \right] (z,\tau )$
then for any $\lambda =\lambda '\tau +\lambda '' \in A_2$ one has
\begin{equation}\label{e6.2}
q_{\Theta }(\lambda ) = 4\lambda '^t\lambda ''(mod 2)
\end{equation}
For any symmetric theta divisor $\Theta $ of $A$ the bilinear form
associated with $q_{\Theta }$ is $e_2$, i.e.
$$
\Delta _{\lambda }\Delta_{\mu}q_{\Theta }(\xi )
= q_{\Theta }(\xi +\lambda +\mu)-q_{\Theta }(\xi +\lambda )
-q_{\Theta }(\xi +\mu )+q_{\Theta }(\xi )
$$
is independent of $\xi $ and equals $e_2(\lambda ,\mu )$. In
particular one has the following formula
\begin{equation}\label{e6.1}
q_{\Theta }(\lambda +\alpha ) = q_{\Theta }(\lambda ) + e_2(\lambda
,\alpha ) + q_{\Theta }(\alpha ) - q_{\Theta }(0)
\end{equation}
The map $\Theta \longmapsto q_{\Theta }$ gives a bijective
correspondence between the set of symmetric theta divisors of $A$ and
the set of quadratic forms on $A_2$ whose associated bilinear form
equals $e_2$ and which vanish on $2^{g-1}(2^g+1)$ points of $A_2$.
Let $\tilde{C}$ be an algebraic, projective, smooth, irreducible
curve. Let $\sigma : \tilde{C} \longrightarrow \tilde{C}$ be an
involution without fixed points, let $C= \tilde{C}/\sigma$ and let
$\pi : \tilde{C} \longrightarrow C$ be the factor map. Let $g = g(C),
\tilde{g} = g(\tilde{C}) = 2g-1 $. We suppose that $g\geq2$. Let
$\tilde{J} = J(\tilde{C}) , J= J(C) $ be the Jacobians of $\tilde{C},
C$ respectively and let $P = P(\tilde{C},\sigma) =
(1-\sigma )\tilde{J}$ be the Prym variety. One has the maps $\pi ^* :
J\longrightarrow \tilde{J} , Nm : \tilde{J}\longrightarrow J$ such
that $\pi^* \circ Nm = 1+\sigma $. We denote by $j : P
\longrightarrow \tilde{J} $ the embedding. The kernel of $\pi ^* : J
\longrightarrow \tilde{J} $ is $\{0,\eta \}$ where $\eta \in J_2$.
Conversely, given $C$ and $\eta \in J(C)_2 , \eta \neq 0$ one
constructs a double unramified covering $\pi : \tilde{C}
\longrightarrow C$ such that $\pi _*{\cal O}_{\tilde{C}} \simeq {\cal
O}_C \oplus{\cal O}_C(\eta)$ and gets the set-up above.
Let $J \simeq {\bf C}^g/\Lambda , \tilde{J} \simeq {\bf
C}^{2g-1}/\tilde{\Lambda} , P\simeq {\bf C}^{g-1}/\Lambda_{\_}$.
Choosing as in \cite{fay}, \cite{clem} symplectic bases
$\{a_0,...,a_{g-1},b_0 ,...,b_{g-1}\} ,
\{\tilde{a}_0,...,\tilde{a}_{2g-2},\tilde{b}_0,...,\tilde{b}_{2g-2}\}
, \{
\tilde{a}_1-\tilde{a}_g,...,\tilde{a}_{g-1}-\tilde{a}_{2g-2},\tilde{b}
_1-\tilde{b}_g,...,\tilde{b}_{g-1}-\tilde{b}_{2g-2}\} $
of $\Lambda , \tilde{\Lambda }$ and $\Lambda_{\_}$ respectively we
can assume that $\eta = \frac{1}{2}a_0(mod \Lambda)$. Let us denote
by $\tau \in {\cal H}_g , \tilde{\tau} \in {\cal H}_{2g-1}, \Pi \in
{\cal H}_{g-1}$ the corresponding period matrices. One has the
following formulas :
\begin{equation}\label{e8.1}
\begin{array}{ccc}
(\pi ^*)_*\left[ \begin{array}{cc}\alpha _0&\alpha \\\beta _0&\beta
\end{array}\right] & = & \left[ \begin{array}{rcc}\alpha _0&\alpha
&\alpha \\2\beta _0&\beta &\beta \end{array}\right] \\\mbox{}\\
Nm_*\left[ \begin{array}{ccc}\alpha _0&\alpha &\alpha '\\\beta
_0&\beta &\beta ' \end{array}\right] & = & \left[ \begin{array}{rc}
2\alpha _0&\alpha +\alpha '\\\beta _0&\beta +\beta '
\end{array}\right] \\ \mbox{}\\ j_*\left[ \begin{array}{c}\alpha
\\\beta \end{array}\right] &=&\left[ \begin{array}{ccc}0&\alpha
&-\alpha \\0&\beta &-\beta \end{array} \right] \end{array}
\end{equation}
Here $\alpha ,\alpha ',\beta ,\beta ' \in {\bf R}^{g-1} ,\alpha
_0,\beta _0\in {\bf R}$ and $(\pi ^*)_*, Nm_* , j_*$ are the
linear maps which induce the homomorphisms $\pi^* , Nm , j$.
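As a check of these formulas one computes
$$
Nm_*(\pi ^*)_*\left[ \begin{array}{cc}\alpha _0&\alpha \\\beta _0&\beta
\end{array}\right] =
Nm_*\left[ \begin{array}{ccc}\alpha _0&\alpha &\alpha \\2\beta _0&\beta
&\beta \end{array}\right] =
2\left[ \begin{array}{cc}\alpha _0&\alpha \\\beta _0&\beta
\end{array}\right]
$$
in accordance with the equality $Nm\circ \pi ^*=2$ on $J$.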
Wirtinger's theorem \cite{fay} states that there is a symmetric
theta divisor $\Theta _0$ of $\tilde{J}$ which is equal to
$W_{\tilde{g}-1}-\pi ^*\Delta $ for a certain theta characteristic
$\Delta $ of $C$, such that $\Theta_0\mid _P = 2\Xi $ for some
symmetric
theta divisor $\Xi$ of $P$. Moreover any point $L-\pi ^*\Delta $ of
$\Theta _0\cap P$ satisfies the properties
$$
Nm(L) \equiv K_C , h^0(\tilde{C},L) \equiv 0(mod 2) , h^0(\tilde{C},
L) \geq 2
$$
$\Theta _0$ is the divisor of the theta function $\theta[\lambda ]
(z,\tilde{\tau})$, where
$$
\lambda = \frac{1}{2}\tilde{ a}_0 =
\left[ \begin{array}{ccc}0&0&0 \\\frac{1}{2}&0&0 \end{array}\right]
$$
Next we recall and state in more general form some results of Welters
\cite{wel}. Let $f : X \longrightarrow Y$ be a double covering of
smooth, projective curves. It might be ramified. Let $\Lambda = \mid
D \mid$ be a complete linear system on $Y$ and let $deg(D) = d$. One
denotes by $S$ the subscheme of $X^{(d)}$ which is the pull-back of
$\Lambda $ under $f^{(d)}$
$$
\begin{array}{ccl}
S&\longrightarrow & X^{(d)}\\
\downarrow & &\downarrow \\
\Lambda &\longrightarrow & Y^{(d)}
\end{array}
$$
The arguments on pp.103-107 of \cite{wel} combined with Riemann-Roch's
theorem give the following proposition
\begin{prop}\label{p91.1}
Let $\hat{D}$ be a closed point of $S$ and let $A$ be the maximal
effective divisor of $Y$ such that $\hat{D}=f^*A + E$ with $A\geq
0,E\geq 0$. Let $D=f^{(d)}(\hat{D})=2A+E_1$ where $E_1=Nm_f(E)$. Then
(i) $S$ is nonsingular at $\hat{D}$ if and only if
$$
h^0(D-A) = h^0(D) - deg(A)
$$
(ii) Suppose $S$ is nonsingular at $\hat{D}$. Then
$f^{(d)}\mid _{S} : S\longrightarrow \Lambda $ is nondegenerate at
$\hat{D}$ if and only if $f^{(d)}:X^{(d)}\longrightarrow Y^{(d)}$ is
nondegenerate at $\hat{D}$ and this is the case if and only if $D$
contains no branch points of $f$ and $A=0$.
\end{prop}
Let $B\subset Y$ be the branch locus of $f$ and let $\delta$ be the
invertible sheaf with $\delta ^{\otimes 2}\simeq {\cal O}_Y(B)$ which
determines the covering by $f_*{\cal O}_X\simeq {\cal O}_Y\oplus
\delta $. An effective divisor $E$ of $X$ is called $f$-simple if
$E\not \geq f^*(y)$ for any $y\in Y$. The following lemma is due to
Mumford \cite{mum}.
\begin{lem}\label{l92.1}
Let $A$ be a divisor of $Y$ and let $E$ be an effective $f$-simple
divisor of $X$. Then there is an exact sequence
$$
0\longrightarrow {\cal O}_Y(A)\longrightarrow f_*{\cal O}_X(f^*A+E)
\longrightarrow {\cal O}_Y(A+Nm_f(E))\otimes \delta
^{-1}\longrightarrow 0
$$
\end{lem}
\begin{cor}\label{c92.2}
Under the assumptions of the preceding lemma suppose that \\
\noindent $deg(A)+deg(E)<deg(\delta )$. Then
$$
h^0(X,f^*A+E) = h^0(Y,A)
$$
\end{cor}
\section{Double unramified coverings of elliptic curves}\label{s2}
Let $\tilde{E}$ be an elliptic curve. Choosing a point $o\in
\tilde{E}$ we shall sometimes identify $\tilde{E}$ with $J(\tilde{E})$
by the map $x\mapsto cl(x-o)$. Let $\sigma :\tilde{E}\longrightarrow
\tilde{E}$ be an involution without fixed points.
\begin{lem}\label{l10.1}
There exists $\mu \in J(\tilde{E})$ of order $2$ such that $\sigma
(x) = x+\mu $. Furthermore $Ker(\pi_*:J(\tilde{E})\longrightarrow
J(E))=\{ 0,\mu \} $.
\end{lem}
{\bf Proof.} Let $\mu =\sigma (o)-o.$ Since $P(\tilde{E},\sigma
)=(\sigma -1)J(\tilde{E})=0$ we have $\sigma (x)-\sigma (o)\equiv
x-o$, thus
$\sigma (x)=x+\mu.$ Furthermore $2(\sigma (o)-o)=(1-\sigma )(\sigma
(o)-o)\equiv 0$ and $\pi _*(\sigma (o)-o)\equiv 0$, thus
$Ker(\pi _*)=\{ 0,\mu \} $ since $\# Ker(\pi _*)=2.$ q.e.d.
Using the notation of the Introduction let $\tilde{E}\simeq {\bf
C}/\Lambda _{\tau }$ where $\Lambda _{\tau }={\bf Z}\tau +{\bf Z}$
with $Im(\tau )>0$ and $\mu = \frac{1}{2}(\tau +1)(mod \Lambda
_{\tau })$. Consider $\lambda (\tau )$ defined by Eq.~(\ref{ii.1}).
Then $\{ \lambda (\tau ),1/\lambda (\tau )\} $ is the set of the roots
of the equation $x^2-k(\tau )x+1=0$ where
$$
k(\tau )=-(\theta _{10}(0,\tau )^8+\theta _{01}(0,\tau )^8)/
\theta _{10}(0,\tau )^4\theta _{01}(0,\tau )^4
$$
\begin{prop}\label{p12.1}
The map
\begin{equation}\label{e12.1}
k:\Gamma _{1,2}\backslash {\cal H}\longrightarrow {\bf C}-\{0,2\}
\end{equation}
is an analytic isomorphism.
\end{prop}
{\bf Proof.} Let $\Gamma _2$ be the level $2$ subgroup of $PSL(2,{\bf
Z})$
$$
\Gamma _{2}=\{ \left( \begin{array}{cc}a&b\\c&d \end{array}
\right) \equiv \left( \begin{array}{cc}1&0\\0&1 \end{array}
\right) (mod 2)\}
$$
Then $\mid \Gamma _{1,2}:\Gamma _{2}\mid =2$ and the element
$S=\left( \begin{array}{cc}0&1\\-1&0 \end{array}\right) $,
belongs to $\Gamma _{1,2}\backslash \Gamma _{2}$. It is well-known
(see e.g. \cite{clem}) that the map
$$
\lambda :\Gamma _{2}\backslash {\cal H}\longrightarrow {\bf C}-\{
0,1\}
$$
given by Eq.~(\ref{ii.1}) is an isomorphism. We have $S(\tau
)=-1/\tau $ and $\lambda (-1/\tau )=1/\lambda (\tau )$. The factor of
$\Gamma _{2}\backslash {\cal H}$ by the action of $S$ is $\Gamma
_{1,2}\backslash {\cal H}$, thus $k$ is an analytic isomorphism.
q.e.d.
Explicitly, given $k\neq 0,2$ we find $\lambda $ such that
$\lambda +1/\lambda =k$ and the corresponding pair $(\tilde{E},\mu \in
J(\tilde{E})_2)$ is given by the equation $y^2=x(x-1)(x-\lambda )$ and
the point $\mu =cl(p_1-p_2)$ where $p_1=(0,0),p_2=(1,0)$.
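For example, for $k=3$ one has
$$
\lambda ^2-3\lambda +1=0\; ,\qquad \lambda =\frac{3+\sqrt{5}}{2}
$$
and the pair $(\tilde{E},\mu )$ is given by $y^2=x(x-1)(x-\lambda )$
and $\mu =cl(p_1-p_2)$. The second root $1/\lambda
=\frac{3-\sqrt{5}}{2}$ yields an isomorphic pair, as it must by
Proposition~(\ref{p12.1}).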
\section{Extended Prym data}\label{s3}
Let $\tilde{C},C$ etc. be as in Section~(\ref{s1}). Let $\Theta _{0}$
be
the divisor of the theta function \[
\theta \left[ \begin{array}{ccc}0&0&0\\\frac{1}{2}&0&0
\end{array}\right]
(z,\tilde{\tau })\] Let us denote by $q_{0}$ the quadratic form
$q_{\Theta _0} : \tilde{J}_2\longrightarrow {\bf Z}_2$ defined by
Eq.~(\ref{e5.1}). By Eq.~(\ref{e6.1}) and (\ref{e6.2})
one has
\begin{equation}\label{e14.1}
q_{0}\left( \left[ \begin{array}{c}\alpha \\\beta \end{array}
\right] \right)=4\alpha ^t\beta +2\alpha _0(mod 2)
\end{equation}
Hence $q_{0}(\rho )=0$ for any $\rho \in P_2$. This follows also from
Wirtinger's theorem. The same property holds for any symmetric theta
divisor of the orbit $\{ \Theta _{0}+\rho \mid \rho \in P_2\} $. Let
us denote $\pi ^*(J)$ by $B$. By Eq.~(\ref{e8.1}) one has
$B_2\supset P_2$.
\begin{lem}\label{l14.1}
A symmetric theta divisor $\Theta \subset \tilde{J}$ has the property
that $q_{\Theta }$ vanishes on $P_2$ if and only if $\Theta =\Theta_
{0}+\alpha $ where $\alpha \in B_2$ and $q_{0}(\alpha )=0$.
\end{lem}
{\bf Proof.} Suppose $q_{\Theta }(P_2)=0$. Let $\Theta =\Theta_
{0}+\alpha $ for some $\alpha \in \tilde{J}_2$. Then for $\rho \in
P_2$ one has by Eq.~(\ref{e6.1})
$$
q_{\Theta }(\rho )=q_{0}(\rho +\alpha )=q_{0}(\rho )+e_2(\rho ,\alpha
)+q_{0}(\alpha )-q_{0}(0)
$$
Setting $\rho =0$ we conclude that $q_{0}(\alpha )=0$. Since $q_{0}$
vanishes on $P_2$ and $q_{0}(0)=0$, the formula reduces to
$q_{\Theta }(\rho )=e_2(\rho ,\alpha )$, so $q_{\Theta }(P_2)=0$
implies that $e_2(P_2,\alpha )=0$.
Eq.~(\ref{e8.1}) shows that the latter holds if and only if $\alpha
\in B_2$. Conversely, if $\alpha \in B_2$ and $q_{0}(\alpha )=0$,
then $q_{\Theta }(P_2)=0$ by the formula for $q_{\Theta }$ above.
q.e.d.
The zeros of $q_{0}$ which belong to $B_2$ are the following three
cosets with respect to $P_2$~: $P_2,\lambda _1+P_2,\lambda _2+P_2$
where $\lambda _1=\frac{1}{2}\tilde{a}_0(mod \tilde{\Lambda }),
\lambda _2=\frac{1}{2}\tilde{a}_0+\frac{1}{2}\tilde{b}_0(mod
\tilde{\Lambda })$. Let $\Theta _{1}=\Theta _{0}+\lambda _1,\Theta
_{2}=\Theta _{0}+\lambda _2$. We conclude that there are three
$P_2$-orbits of
symmetric theta divisors $\Theta $ such that $q_{\Theta }$ vanishes on
$P_2$ :
\[ \begin{array}{ccc}
\{ \Theta _{0}+\rho \}\; ,&\{ \Theta _{1}+\rho \}\; ,&\{ \Theta
_{2}+\rho \} \end{array}\]
These are respectively the divisors of the theta functions
\[ \begin{array}{ccc}
\theta \left[ \begin{array}{ccc}0&\alpha
&\alpha \\\frac{1}{2}&\beta &\beta \end{array}\right](z,\tilde{\tau
})\; ,& \theta \left[ \begin{array}{ccc}0&\alpha
&\alpha \\0&\beta &\beta \end{array}\right](z,\tilde{\tau })\; ,&
\theta \left[
\begin{array}{ccc}\frac{1}{2}&\alpha &\alpha \\0&\beta
&\beta \end{array}\right](z,\tilde{\tau })
\end{array}\]
where $\alpha ,\beta \in \frac{1}{2}{\bf Z}^{g-1}$.
\begin{lem}\label{16.2}
Let $\mu = \frac{1}{2}\tilde{b}_0(mod \tilde{\Lambda })$. Then
$Ker(Nm)=P\cup P_{\mu } , Ker(Nm)\cap B_2=P_2\cup (\mu +P_2)$ and for
any $x\in \tilde{C}$ there exists a unique $\xi \in P$ such that
$\sigma x-x\equiv \mu +\xi $.
\end{lem}
{\bf Proof.} It is well-known \cite{fay},\cite{mum} that $Ker(Nm)$
has two connected components $P$ and $P_{\_ }$. Since $\mu \in
B_2\backslash P_2$ and $Nm(\mu )=0$ we get that $P_{\_ }=P_{\mu }$.
Hence $(P\cup P_{\_ })\cap B_2=P_2\cup (P_2+\mu )$. The last
statement of the lemma follows from the equality $\sigma x-x+P=
P_{\_ }$ \cite{mum}.
q.e.d.
Let $\tilde{J}_{2g-2},J_{2g-2}$ be the divisor classes of degree
$2g-2$ on $\tilde{C},C$ respectively and let
$Nm:\tilde{J}_{2g-2}\longrightarrow J_{2g-2}$ be the norm map. The
subvariety $Nm^{-1}(K_C+\eta )$ is a principal homogeneous space for
$Nm^{-1}(0)=P\cup P_{\_ }$, thus it has two connected components. Let
\begin{equation}\label{e165.1}
Z=\{ L\in \tilde{J}_{2g-2}\mid Nm(L)=K_C+\eta ,\; h^0(L)\geq 1\}
\end{equation}
and let $Z=Z_1\cup Z_2$ where $Z_{i}$ are the intersections of $Z$
with the connected components of $Nm^{-1}(K_C+\eta )$.
\begin{lem}\label{l1634.1}
Let $M$ be an effective divisor of $\tilde{C}$ such that $Nm(M) \in
\mid K_C+\eta \mid$. Suppose $x\in \tilde{C}$ and $x\not \in Bs\mid
M\mid $. Then
$$
h^0(M+\sigma x-x)=h^0(M)-1
$$
\end{lem}
{\bf Proof.} By Riemann-Roch's theorem $x$ is a base point of
$\mid K_{\tilde{C}}-M+x\mid $. Now, $K_{\tilde{C}}-M\equiv\sigma (M)$,
thus $\sigma (x)$ is a base point of $\mid M+\sigma (x)\mid $, which
proves the lemma since $x\not \in Bs\mid M\mid $. q.e.d.
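Let us spell out the Riemann-Roch step in the proof above. Since
$Nm(M)\in \mid K_C+\eta \mid $ one has $deg(M)=2g-2=\tilde{g}-1$,
hence Riemann-Roch's theorem on $\tilde{C}$ gives
$$
h^0(K_{\tilde{C}}-M)=h^0(M)\; ,\qquad
h^0(K_{\tilde{C}}-M+x)=h^0(M-x)+1
$$
Therefore $x\not \in Bs\mid M\mid $, i.e. $h^0(M-x)=h^0(M)-1$, holds
if and only if $h^0(K_{\tilde{C}}-M+x)=h^0(K_{\tilde{C}}-M)$, i.e. if
and only if $x$ is a base point of $\mid K_{\tilde{C}}-M+x\mid $.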
\begin{prop}\label{p17.1}
The theta divisors $\Theta _{1}$ and $\Theta _{2}$ do not contain $P$
. The restrictions $\Theta _i.P$ are connected and reduced divisors of
$P$ which belong to the linear system $\mid 2\Xi \mid $. Furthermore,
up to a possible reordering of $Z_i$ one has
$$
\Theta _i.P=Z_i-\pi ^*\Delta -\lambda _i
$$
The point $L-(\pi ^*\Delta +\lambda _i)$ is nonsingular if and only if
$h^0(\tilde{C},L)=1$. The corresponding tangent hyperplane is equal
to $Nm(\mid L\mid )\in \mid K_C\otimes \eta \mid $ via the
identification
$$
T_0(P)^*\simeq H^0(\tilde{C},K_{\tilde{C}})^-\simeq H^0(C,K_C\otimes
\eta )
$$
\end{prop}
{\bf Proof.} Let $\kappa _i=\pi ^*\Delta +\lambda _i,i=1,2$. We have
$Nm(\lambda _i)=\eta $, thus $Nm(\kappa _i)=K_C+\eta $ and for
$L$ with $h^0(L)\geq 1$ one has $L-\kappa _i\in\Theta _i\cap Ker(Nm)$
if and only if $Nm(\mid L\mid )\in \mid K_C+\eta \mid $. Since
$dim\mid K_C+\eta \mid =g-2$ we conclude that neither $P$ nor $P_{\_
}$ is contained in $\Theta _i$. Furthermore $\Theta _i$ are ample,
so $\Theta _i\cap P$ are not empty and $\Theta _i.P$ are connected
divisors of $P$. Upon a possible reordering of $Z_1$ and $Z_2$ we
have $Z_1-\kappa _1=\Theta _1\cap P,Z_2-\kappa _1=\Theta
_1\cap P_{\_ }$. Since $P_{\_ }=P+\mu $ and $\lambda _2=\lambda
_1+\mu $ we obtain $Z_2-\kappa _2=\Theta _2\cap P$.
{\bf Claim.} {\it For every irreducible component $T$ of any
$\Theta _i.P$ the general element $L-\kappa _i\in T$ satisfies
$h^0(\tilde{C},L)=1$ }.\\
{\bf Proof.} Suppose the contrary. Then for any $x\in \tilde{C}$ the
image of the map
\[ \begin {array}{cc}
\psi :T\times \tilde{C}\longrightarrow Ker(Nm)\; ,&
\psi (L,x)=L+\sigma x-x
\end{array}\]
is contained in $Z$. This image must be of dimension $dimT+1=g-1$.
Indeed, if $M=L+\sigma x-x$, then for every sufficiently general
$L\in T$ and $x\in \tilde{C}$ one has by Lemma~(\ref{l1634.1}) that
$h^0(M)=h^0(L)-1$. If $dim\psi (T\times \tilde{C})\leq dimT$, then
for every sufficiently general $M\in Im(\psi ),x\in \tilde{C}$ one
has $h^0(M-\sigma x+x)=h^0(L)=h^0(M)+1$ which is absurd by
Lemma~(\ref{l1634.1}). Now, $dimZ=g-2$, thus it is impossible that
$dim\psi (T\times \tilde{C})=g-1$. q.e.d.
Now, suppose that $L-\kappa _i\in\Theta _i\cap P$ is an element with
$h^0(L)=1$ and let $D=\mid L\mid $. Since $Nm(D)\in\mid K_C+\eta\mid
$ there is an anti-invariant holomorphic differential $\omega $ of
$\tilde{C}$ whose divisor of zeros is $\pi ^*D$. Since $h^0(L)=1$
the point $L-\kappa _i$ is nonsingular on $\Theta _i$ and the tangent
space in $T_0\tilde{J}$ is given by the equation $\omega =0$. We see
that $\Theta _i$ and $P$ intersect transversely at $L-\kappa _i$ and
the tangent hyperplane of $\Theta _i.P$ at $L-\kappa _i$ is given by
the same equation $\omega =0$ in $T_0P$ since $\omega $
is anti-invariant. What we have proved implies also that $Sing(\Theta
_i.P)=P\cap Sing\Theta _i$. This concludes the proof of the
proposition. q.e.d.
\begin{cor}\label{c19.1}
All irreducible components of $Z$ are of dimension $g-2$.
\end{cor}
We see that the $P_2$-orbit $\{ \Theta _0+\rho \} $ is distinguished
among the three $P_2$-orbits of symmetric theta divisors $\Theta$
which satisfy $q_{\Theta}\mid _{P_2}=0$ by the property that the
restriction of any $\Theta _0+\rho $ on $P$ is equal to
twice a theta divisor of $P$. It is also distinguished by the
property that every $\{ \Theta _0+\rho \} $ contains $P_{\_ }=P+\mu $
. Indeed, the fact that $\Theta _0\supset P+\mu $ follows from the
parity lemma \cite{tju} and is well-known \cite{mum1},\cite{fay}. If
$\Theta _1+\rho $ contained $P_{\_ }=P+\mu $, then $\Theta _2$
would contain $P$ since $\Theta _2=\Theta _1+\mu $ and $\rho \in P_2$
which contradicts Proposition~(\ref{p17.1}). Notice that the latter
distinction of $\{ \Theta _1+\rho \} $ and $\{ \Theta _2+\rho \} $
parallels the distinction of $\theta _{10}(z,\tau )$ and $\theta
_{01}(z,\tau )$ in the elliptic case. We can now state our extension
of the Prym data.
{\sc Extended Prym data}. {\it One associates to every
algebraic, smooth, irreducible, projective curve $\tilde{C}$ of genus
$\geq 3$ with an involution $\sigma
:\tilde{C}\longrightarrow \tilde{C}$ without fixed points, the
principally polarized Prym variety $(P,\Xi )$ and the two $P_2$-orbits
$O_1,O_2\subset \mid 2\Xi \mid $ which consist of the restrictions
$\Theta .P$ of the symmetric theta divisors $\Theta \subset
J(\tilde{C})$ such that $q_{\Theta }\mid _{P_2}=0$ and $\Theta \not
\supset P_{\_ }$ where $Ker(Nm_{\pi }:J(\tilde{C})\longrightarrow
J(C))=P\cup P_{\_ }$.}
We can now state our result which is a kind of generalization of
Proposition~(\ref{p12.1}) to curves of genus $>1$.
\begin{theo}\label{t20.1}
The pair $(\tilde{C},\sigma )$ is uniquely determined up to
isomorphism
by the extended Prym data $(P(\tilde{C},\sigma ),\Xi ),O_1,O_2\subset
\mid 2\Xi \mid $.
\end{theo}
\section{The semicanonical map and the Gauss map}\label{s4}
In this section $K_C\in Pic^{2g-2}(C)$ is the canonical sheaf of $C$
and
$\eta \in Pic^0(C)_2$ is the sheaf with $\eta ^{\otimes 2}\simeq
{\cal O}_C$ such that $\pi _*{\cal O}_{\tilde{C}}\simeq {\cal
O}_{C}\oplus \eta $. We shall denote by $\varphi _K,\varphi
_{K\otimes
\eta }$ the canonical, respectively semicanonical map of the curves
under consideration. Let $L=K_C\otimes \eta $. The following lemma
follows easily from Riemann-Roch's theorem.
\begin{lem}\label{l21.1}
Suppose $g(C)\geq 2$. Then $\mid K_C\otimes \eta \mid $ has base
points if and only if $C$ is hyperelliptic and $\eta \simeq {\cal
O}_{C}(p_1-p_2)$ where $p_1,p_2$ are ramification points for the
double covering $f:C\longrightarrow {\bf P}^1$. In this case
$p_1+p_2=Bs\mid K_C\otimes \eta \mid $ and $K_C\otimes \eta \simeq
(f^*{\cal O}_{{\bf P}^1}(g-2))(p_1+p_2)$.
\end{lem}
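Note that for $g(C)\geq 2$ and $\eta \not \simeq {\cal O}_C$
Riemann-Roch's theorem gives
$$
h^0(C,K_C\otimes \eta )=(2g-2)-g+1+h^1(C,K_C\otimes \eta )=g-1
$$
since $h^1(C,K_C\otimes \eta )=h^0(C,\eta )=0$. Thus $\mid K_C\otimes
\eta \mid ^*={\bf P}^{g-2}$, as used below.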
\begin{lem}\label{l21.2}
Suppose $g(C)\geq 3 $ and $\mid K_C\otimes \eta \mid $ is without
base points. Then \\
$\varphi _L:C\longrightarrow \mid K_C\otimes \eta \mid
^*={\bf P}^{g-2}$ is a birational embedding except in the following
two cases :
(i) $g(C)=3$. Then $\varphi _L:C\longrightarrow {\bf P}^1$ is
of
degree $4$.
(ii) $g(C)\geq 4$, $C$ is bi-elliptic, i.e. it is a double
covering $f:C\longrightarrow E$ of an elliptic curve, and $\eta \simeq
f^*(\epsilon )$ where $\epsilon \in Pic^0(E)_2$. Here $\varphi
_L=\varphi
_{\delta \otimes \epsilon }\circ f$ where $\delta $ is the invertible
sheaf of $E$ which determines the covering, i.e. $\delta ^{\otimes
2}\simeq {\cal O}_{E}(x_1+...+x_{2g-2})$ for the branch points
$x_1,...,x_{2g-2}$ and $f_*{\cal O}_{C}\simeq {\cal O}_{E}\oplus
\delta $.
\end{lem}
{\bf Proof.} The case $g(C) =3$ is clear, so let us suppose that
$g\geq 4$. Let $X=\varphi _L(C)$ and let $d$ be the degree of the
map
$\varphi _L:C\longrightarrow X$. We have $d\cdot deg(X)=2g-2$ and
$deg(X)\geq g-2$. This implies that the case $d\geq 3$ may occur only
if $g=4,deg(X)=2,d=3$. Otherwise either $\varphi _L$ is a birational
embedding or $d=2,deg(X)=g-1 $. In the latter case $p_a(X)=1$.
Suppose $d=2$ and $X$ is singular. Then the normalization of $X$ is
$\hat{X}\simeq {\bf P}^1$ and we can decompose $\varphi _L$ as
$$
\varphi _L=\psi \circ f:C\longrightarrow \hat{X}\longrightarrow {\bf
P}^{g-2}
$$
Since $\varphi _L$ is obtained from a complete linear system, $\psi $
must
have the same property, thus $\psi ^*{\cal O}_{{\bf P}^{g-2}}(1)\simeq
{\cal O}_{{\bf P}^1}(g-2)$. This is impossible since $deg(X)=g-1$.
Consequently if $d=2$ then $X\subset {\bf P}^{g-2}$ is an elliptic
curve. Let $E=X,f=\varphi _L:C\longrightarrow E.$ Since $K_C\simeq
f^*(\delta )$ and $K_C\otimes \eta \simeq f^*{\cal O}_{E}(1)$ we
conclude that $\eta \simeq f^*(\epsilon )$ for some $\epsilon
\in Pic^0(E).$ The covering $f$
is ramified, so $f^*:Pic^0(E)\longrightarrow Pic^0(C)$ is an
injection, hence $\epsilon \in Pic^0(E)_2$
and $\epsilon \not \simeq {\cal O}_{E}$. Conversely, if
$f:C\longrightarrow E$ is a double covering of an elliptic curve, and
$\eta =f^*(\epsilon )$ with $\epsilon \in Pic^0(E)_2,\epsilon \not
\simeq {\cal O}_{E}$ then
$$
H^0(C,K_C\otimes \eta)\simeq H^0(C,f^*(\delta \otimes \epsilon
))=f^*H^0(E,\delta \otimes \epsilon )
$$
Hence $\varphi _L=\varphi _{\delta \otimes \epsilon }\circ f$ and
$d=2$.
It remains to rule out the possibility $g=4,d=3,deg(X)=2$. Here \\
$f=\varphi _L:C\longrightarrow X\simeq {\bf P}^1$ so $L\simeq
M^{\otimes
2}$, where $deg(M)=3,h^0(C,M)=2$, and $\mid M\mid $ is without base
points. Thus $C$ is not hyperelliptic and
$$
\varphi _K(C)=Q\cap F\subset {\bf P}^3
$$
where $Q$ is a quadric and $F$ is a cubic surface. We have $\mid L\mid
=\mid M\mid +\mid M\mid $ since $dim\mid L\mid =2$. Let $l_1,l_2$ be
lines in $Q$ such that $l_1+l_2=Q.H$ for a plane $H$. Let $M_i={\cal
O}_{C}(C.l_i).$ We can assume that $M=M_1$. Then $\eta \simeq
M_1\otimes M_2^{-1}.$ If $Q$ were singular, then $M_1\simeq M_2,$ so
$\eta \simeq {\cal O}_{C}$ which is absurd. Suppose $Q$ is
nonsingular.
Then $\eta ^{\otimes 2}\simeq {\cal O}_{C}$ implies $\mid
M_2^{\otimes 2}\mid =\mid M_1^{\otimes 2}\mid =\mid M_1\mid +\mid
M_1\mid $. This is again impossible since any reduced divisor
$x_1+x_2+x_3\in \mid M_2\mid $ can have only two common points with
any two divisors $D_1,D_2\in \mid M_1\mid $. q.e.d.
Suppose $g\geq 3$. Following Welters \cite{wel} let $S$ be the
subscheme of $\tilde{C}^{(2g-2)}$ which is the pull-back of $\mid
K_C\otimes \eta \mid \subset C^{(2g-2)}$
$$
\begin{array}{ccl}S&\longrightarrow &\tilde{C}^{(2g-2)}\\
\downarrow &\mbox{}&\downarrow Nm\\\mid K_C\otimes \eta \mid
&\longrightarrow &C^{(2g-2)} \end{array}
$$
It breaks naturally into two disjoint subschemes $S=S_1\cup S_2.$ The
singularities of $S$ can be calculated by Proposition~(\ref{p91.1})
with
$X=\tilde{C},Y=C,f=\pi ,\pi ^{(2g-2)}=Nm$. Since $S$ is a locally
complete intersection and $\pi ^{(2g-2)}$ is a finite map every
irreducible component of $S$ has dimension $g-2$. A Zariski open,
dense subset of $\mid K_C\otimes \eta \mid $ consists of reduced
divisors by Lemmas~(\ref{l21.1}) and (\ref{l21.2}), so according
to
Proposition~(\ref{p91.1}) $S$ is reduced. The subvarieties
$S_1,S_2$ are connected provided $g\geq 3$ \cite{wel}.
Let $T_1,T_2$ be divisors from $\mid 2\Xi \mid $ which belong to the
orbits $O_1,O_2$ respectively. Suppose $g\geq 3$. The Gauss maps
$G_i:T_i^{ns}\longrightarrow {\bf P}(T_0P)^*$ are defined on the
nonsingular loci of $T_i$ and send a point $x\in T_i^{ns}$ to the
translation of the tangent hyperplane $T_x(T_i)$ to $0\in P$. Let
$T=T_1\sqcup T_2$ and let $G:T^{ns}\longrightarrow {\bf P}(T_0P)^*$
be the
map whose restriction on $T_i^{ns}$ equals $G_i$. Let $S^0$ be the
Zariski open subset of $S$ which consists of those $\hat{D}$ with
$h^0(\tilde{C},\hat{D})=1$. By Proposition~(\ref{p17.1}) the map
$cl:S^0\longrightarrow Z^{ns}$ is an isomorphism and moreover one can
identify $T_i^{ns}$ with $Z_i^{ns}$ by translation and ${\bf
P}(T_0P)^*$ with $\mid K_C\otimes \eta \mid $. Since the Gauss map
does not depend on the translation one has for the Gauss map
$G:Z^{ns}\longrightarrow \mid K_C\otimes \eta \mid $ and every
$\hat{D}\in S^0$ the formula
\begin{equation}\label{e26.1}
G(cl(\hat{D}))=Nm(\hat{D})
\end{equation}
\begin{prop}\label{p26.1}
Let $L=K_C\otimes \eta $. Suppose $g\geq 4$. Let $R\subset T_i$ be the
ramification locus of $G$ and let $B$ be the algebraic closure of
$G(R)$.
(i) If $C$ is hyperelliptic and $\eta \simeq {\cal
O}_C(p_1-p_2)$, where $p_1,p_2$ are Weierstrass points of $C$,
then $P(\tilde{C},\sigma )\simeq J(C_2)$ for a certain hyperelliptic
curve $C_2$ (see Section~(\ref{s5})) and
$$
B=\varphi _K(C_2)^*\cup \bigcup _{i=1}^{2g}\varphi _K(q_i)^*
$$
where $\varphi _K(C_2)^*$ is the dual hypersurface of $\varphi
_K(C_2)$
and $\varphi _K(q_i)^*$ are the stars of hyperplanes which contain
$\varphi
_K(q_i)$, where $q_i$ are the Weierstrass points of $C_2$.
(ii) If $\mid K_C\otimes \eta \mid $ is without base points and
$\varphi _L:C\longrightarrow \mid K_C\otimes \eta \mid ^*$ is a
birational embedding, then $B$ has a unique irreducible component of
dimension $g-3$ and degree $>1$. This component is equal to
$\varphi _L(C)^*$.
(iii) If $\mid K_C\otimes \eta \mid $ is without base points and
$$
f=\varphi _L:C\longrightarrow \varphi _L(C)=E
$$
is a map of degree $2$ onto an elliptic curve, then
$$
B=E^*\cup \bigcup_{i=1}^{2g-2}x_i^*
$$
where $x_i$ are the branch points of $\varphi _L$.
\end{prop}
{\bf Proof.} (i) In this case any $T_i$ is the union of two translates
of the theta divisor $\Xi \subset P$, as will be shown in
Section~(\ref{s5}). So, Part (i) follows from the description of the
branch locus of the Gauss map of the theta divisor of a hyperelliptic
Jacobian \cite{and}.
Now, let us assume that $\mid K_C\otimes \eta \mid $ is without base
points. If $Nm:S^0\longrightarrow \mid K_C\otimes \eta \mid $ is
degenerate at $\hat{D}\in S^0$, then by Proposition~(\ref{p91.1})
$\hat{D}=\pi ^*A+E$ for some $A>0$, so $D=Nm(\hat{D})=2A+Nm(E)$. Let
$H\subset \mid K_C\otimes \eta \mid ^*$ be the hyperplane which
corresponds to $D$. Either $H$ contains an image of a ramification
point of the map $\varphi _L:C\longrightarrow \mid K_C\otimes \eta
\mid
^*$, or $H$ is tangent to a branch $\varphi _L(U)$ at a point
$\varphi
_L(p)$, where $p\in U\subset C$ and $\varphi _L$ is nondegenerate at
$p$.
The former case can happen only for finitely many points. This shows
that any component of $B$ of dimension $g-3$ must be either a star of
hyperplanes which contain a branch point $\varphi _L(p)$, or it is
contained in the dual variety $\varphi _L(C)^*$.
{\bf Proof of (ii).} It remains to show that $\varphi _L(C)^*\subset
B$.
Let the hyperplane $H\subset \mid K_C\otimes \eta \mid ^*$ be a
sufficiently general element of $\varphi _L(C)^*$ and let
$$
D=\varphi _L^*(H)=2p+p_3+...+p_{2g-2}
$$
be the corresponding divisor of $\mid K_C\otimes \eta \mid $. Here
$p,p_3,...,p_{2g-2}$ are distinct points of $C$ and $\varphi _L$ is
not
degenerate at $p$.
{\bf Claim 1.} {\it Let $\pi ^{-1}(p)=p'+p''$. One can choose
$p'_i\in
\tilde{C}$ with $\pi (p'_i)=p_i$ so that
$$
\hat{D}=p'+p''+p'_3+...+p'_{2g-2}
$$
has the property $h^0(\hat{D})=1$.}\\
{\bf Proof.} Let us choose arbitrary points $q_i\in \tilde{C}$ such
that $\pi (q_i)=p_i$. Let
$$
\hat{D}_0=p'+p''+q_3+...+q_{2g-2}
$$
If $h^0(\hat{D}_0)\geq 2$, then at least one of the points $q_i$ is not
a base point of $\mid \hat{D}_0\mid $. Indeed, otherwise
$$
h^0({\cal O}_{\tilde{C}}(\pi ^*p))=h^0({\cal O}_{C}(p))+h^0({\cal
O}_{C}(p)\otimes \eta )\geq 2
$$
which is possible only in Case (i). If $q_i\not \in Bs\mid
\hat{D}_0\mid $, let $\hat{D}_1=\hat{D}_0+\sigma (q_i)-q_i$. Then by
Lemma~(\ref{l1634.1}) $h^0(\hat{D}_1)=h^0(\hat{D}_0)-1$. Repeating the
same argument with $\hat{D}_1$ etc. we obtain eventually the required
divisor $\hat{D}$. q.e.d.
Now, $\hat{D}\in S^0$, the Gauss map at the point $cl(\hat{D})\in
Z^{ns}$ equals $H$ by Eq.~(\ref{e26.1}) and it is ramified at
$\hat{D}$ according to Proposition~(\ref{p91.1}). So, $\varphi
_L(C)^*\subset B$.\\
{\bf Proof of (iii).} We have proved above that
$$
B\subset E^*\cup \bigcup_{i=1}^{2g-2}x_i^*
$$
Let $H$ be a sufficiently general element of $E^*$ and let
$$
D=\varphi _L^*(H)=2p+2q+p_5+...+p_{2g-2}
$$
be the corresponding divisor of $\mid K_C\otimes \eta \mid $. Here
$p,q,p_5,...,p_{2g-2}$ are distinct points of $C$, $\varphi _L$ is
nondegenerate at $p,q$ and $\{ p,q\} =f^*(x)$
for some $x\in E$.
{\bf Claim 2.} {\it Let $\pi ^{-1}(p)=\{ p',p''\} ,\pi ^{-1}(q)=\{
q',q''\} $. One can choose $p'_i\in \tilde{C}$ with $\pi (p'_i)=p_i$
so that
$$
\hat{D}=p'+p''+2q'+p'_5+...+p'_{2g-2}
$$
has the property $h^0(\hat{D})=1$.}\\
{\bf Proof.} Let $\eta = f^*(\epsilon )$ and let $y\in E $
be the point such that $\epsilon \simeq {\cal O}_{E}(y-x)$. Then one
has canonical isomorphisms
$$
\begin{array}{l}
H^0({\cal O}_{\tilde{C}}(\pi ^*(p+q)))\simeq \pi ^*(H^0({\cal
O}_{C}(p+q))\oplus H^0({\cal O}_{C}(p+q)\otimes \eta))\\
\simeq (\pi ^*\circ f^*)(H^0({\cal O}_{E}(x))\oplus H^0({\cal
O}_{E}(y)))\simeq {\bf C}^2
\end{array}
$$
Thus $\mid p'+p''+q'+q''\mid $ is a pencil without base points. Using
the same argument as in Claim~1 we conclude that one can choose $p'_i$
so that
$$
\hat{D}'=p'+p''+q'+q''+p'_5+...+p'_{2g-2}
$$
has the property $h^0(\hat{D}')=2$. Applying once more
Lemma~(\ref{l1634.1}) we conclude that $\hat{D}=\hat{D}'+ q'
-q''$ satisfies $h^0(\hat{D})=1$. q.e.d.
We conclude as in Part (ii) that $E^*\subset B$. If $x_i=\varphi
_L(q_i)$ is a branch point of $f$ and $H$ is a sufficiently general
hyperplane in $\mid K_C\otimes \eta \mid ^*$ which contains $x_i$,
then the corresponding divisor of $\mid K_C\otimes \eta \mid $ has
the form
$$
D=\varphi _L^*(H)=2q_i+p_3+...+p_{2g-2}
$$
where $q_i,p_3,...,p_{2g-2}$ are distinct. The same argument as in
Part (ii) proves that $H\in B$, so $x_i^*\subset B$.
Proposition~(\ref{p26.1}) is proved. q.e.d.
We see that if $g\geq 4$ then the branch locus $B$ of the Gauss map
$G:T^{ns}\longrightarrow {\bf P}(T_0P)^*$ has a unique irreducible
component $B_0$ of dimension $g-3$ and degree $\geq 2$. We have shown
above that $B_0=X^*$ for a certain irreducible curve $X\subset {\bf
P}(T_0P)^*$. So, for $g\geq 4$, by the equality $(X^*)^*=X$
\cite{kleim} we obtain that $B_0^*$ is the curve
$X$. The following alternative takes place.
\begin{list}{(\roman{bean})}{\usecounter{bean}}
\item $deg(X)=g-2$
\item $deg(X)=2g-2$
\item $deg(X)=g-1$
\end{list}
The three cases correspond to those in Proposition~(\ref{p26.1}). If
Case (ii) occurs we prove Theorem~(\ref{t20.1}) as follows: $C$ is
isomorphic to the normalization of $X$. The normalization map
$f:C\longrightarrow X\subset {\bf P}(T_0P)^*$ is associated to the
complete linear system $\mid K_C\otimes \eta \mid $. Thus we obtain
$\eta \in J(C)_2$.
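Note that the three degrees above are pairwise distinct for $g\geq 4$,
$$
g-2<g-1<2g-2
$$
so the degree of the curve $X=B_0^*$ already determines which of the
three cases occurs.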
Cases (i) and (iii) are considered respectively in
Sections~(\ref{s5})
and (\ref{s6}). The case $g=3$ is treated in Section~(\ref{s7})
and the
case $g=2$ in Section~(\ref{s5}).
\section{The hyperelliptic case, $g\geq 2$}\label{s5}
Throughout this section we suppose that $C$ is a hyperelliptic curve
of genus $g\geq 2$,\\
$f:C\longrightarrow {\bf P}^1$ is the double covering
and $\eta \simeq {\cal O}_{C}(p_1-p_2)$ where $p_1,p_2$ are
ramification points of $f$ and $p_1\neq p_2$. Let $R=\{
p_1,p_2,p_3,...,p_{2g+2}\} $ be the set of ramification points of
$f,R_1=\{ p_1,p_2\}, R_2=R\backslash R_1$. Let $B_i=f(R_i),i=1,2$.
According to \cite{mum},\cite{dal} the covering \\$f\circ
\pi:\tilde{C}\longrightarrow C\longrightarrow {\bf P}^1$ has Galois
group ${\bf Z}_2\times {\bf Z}_2$. In the corresponding diagram of
Fig.1
\begin{figure}\label{f35.1}
\begin{center}
\begin{picture}(46,54)(0,0)
\put (19,0){\makebox(8,8){${\bf P}^1$}}
\put (0,23){\makebox(8,8){$C_1$}}
\put (19,23){\makebox(8,8){$C$}}
\put (38,23){\makebox(8,8){$C_2$}}
\put (19,46){\makebox(8,8){$\tilde{C}$}}
\put (19,46){\vector(-1,-1){15}}
\put (27,46){\vector(1,-1){15}}
\put (23,46){\vector(0,-1){15}}
\put (23,23){\vector(0,-1){15}}
\put (4,23){\vector(1,-1){15}}
\put (42,23){\vector(-1,-1){15}}
\put (24,15){$f$}
\put (24,38){$\pi $}
\put (6,38){$\pi _1$}
\put (37,38){$\pi _2$}
\put (6,15) {$f_1$}
\put (37,15) {$f_2$}
\end{picture}
\end{center}
\caption{}
\end{figure}
$f_i:C_i\longrightarrow {\bf P}^1$ is branched at $B_i,i=1,2$.
Furthermore $\pi_2^*:J(C_2)\longrightarrow P(\tilde{C},\sigma )$ is an
isomorphism. Let $\Theta _0=W_{\tilde{g}-1}(\tilde{C})-\pi ^*\Delta $
be as in Section~(\ref{s3}).
\begin{lem}\label{l36.1}
Let $\Xi \subset P(\tilde{C},\sigma )=P$ be a symmetric theta divisor.
Then there exists a unique $\rho \in P_2$ such that
\begin{equation}\label{e36.1}
\Xi = \pi _1^*(\zeta _1)+\pi _2^*W_{g-2}(C_2)-\pi ^*\Delta-\rho
\end{equation}
where $\zeta _1$ is the rational equivalence class of the points of
the rational curve $C_1$.
\end{lem}
{\bf Proof.} By Wirtinger's theorem there is a unique translation
$\Theta =\Theta _0+\rho $ with $\rho \in P_2$ such that
$\Theta .P=2\Xi $. The points of $\Theta \cap P$ have the form
$L-\pi ^*\Delta-\rho $
where $Nm(L)=K_C\; ,\; h^0(\tilde{C},L)\equiv 0\; ({\rm mod}\; 2)$ and
$h^0(\tilde{C},L)\geq 2$. Now, $\mid K_C\mid \simeq f^*\mid {\cal
O}_{{\bf P}^1}(g-1)\mid $. One easily checks that if $\hat{D}$ is an
effective divisor of $\tilde{C}$ and $Nm(\hat{D})\in f^*\mid
{\cal O}_{{\bf P}^1}(g-1)\mid $ then
$$
\hat{D}=\pi _1^*E+\pi _2^*F
$$
where $E$ and $F$ are effective divisors of $C_1,C_2$ respectively. We
have $deg(E)+deg(F)=g-1$ which gives only two irreducible components
of dimension $\geq g-2$ of
$$
Nm^{-1}(K_C)\cap W_{2g-2}(\tilde{C})
$$
namely $\pi _2^*W_{g-1}(C_2)$ of dimension $g-1$ and $\pi _1^*(\zeta
_1)+\pi _2^*W_{g-2}(C_2)$ of dimension $g-2$. On the other hand the
above intersection has, by the general theory, two irreducible
components: a translation of $\Xi $ and a translation of $P_{-}$.
This shows Eq.~(\ref{e36.1}). q.e.d.

Now, let us calculate the orbits $O_1,O_2\subset \mid 2\Xi \mid $. Let
$T_i\in O_i,i=1,2$. By Proposition~(\ref{p17.1}) one has
\begin{equation}\label{e38.1}
T_i=Z_i-\pi ^*\Delta -\nu _i
\end{equation}
for some $\nu _i\in \lambda _i+P_2 , i=1,2.$
\begin{lem}\label{l38.1}
One can enumerate $\pi ^{-1}(p_1)$ as $\{ p'_1,p''_1\} $ and $\pi
^{-1}
(p_2)$ as $\{ p'_2,p''_2\} $ so that
\[ \begin{array}{ccc}Z_1&=&\pi _2^*W_{g-2}(C_2)+p'_1+p'_2\cup
\pi _2^*W_{g-2}(C_2)+p''_1+p''_2\\Z_2&=&\pi
_2^*W_{g-2}(C_2)+p'_1+p''_2\cup \pi _2^*W_{g-2}(C_2)+p''_1+p'_2
\end{array}\]
\end{lem}
{\bf Proof.} From Lemma~(\ref{l21.1}) we have
$$
\mid K_C\otimes \eta \mid =f^*\mid {\cal O}_{{\bf P}^1}(g-2)\mid
+p_1+p_2
$$
One easily checks that if $\hat{D}$ is an effective divisor of
$\tilde{C}$ and $Nm(\hat{D})\in f^*\mid {\cal O}_{{\bf P}^1}(g-2)\mid
$ then
$$
\hat{D}=\pi _1^*E+\pi _2^*F
$$
where $E,F$ are effective divisors of $C_1,C_2$ respectively. We have
$deg(E)+deg(F)=g-2.$
Thus the only irreducible component of dimension $\geq g-2$ of
$$
Nm^{-1}(f^*{\cal O}_{{\bf P}^1}(g-2))\cap W_{2g-4}(\tilde{C})
$$
is $\pi _2^*W_{g-2}(C_2)$. The irreducible components of $Z=Z_1\cup
Z_2$ are of dimension $g-2$ by Corollary~(\ref{c19.1}) and the
transformation $L\mapsto L+\sigma (p)-p$ interchanges the two
components of $Nm^{-1}(K_C\otimes \eta)$. This shows that $Z_1$ and
$Z_2$ have the form given in the lemma. q.e.d.

Lemmas~(\ref{l36.1}),(\ref{l38.1}) and Eq.~(\ref{e38.1}) give the
following corollary
\begin{cor}\label{c39.1}
Let $\Xi $ be an arbitrary symmetric theta divisor of
$P(\tilde{C},\sigma )$ and let $T_1,T_2$ be two divisors of the orbits
$O_1,O_2\subset \mid 2\Xi \mid $ respectively. Then
\[ \begin{array}{c}T_1=\Xi +p'_1+p'_2-\pi _1^*(\zeta _1)-\mu _1\cup
\Xi +p''_1+p''_2-\pi _1^*(\zeta _1)-\mu _1\\T_2=\Xi +p'_1+p''_2-\pi
_1^*(\zeta _1)-\mu _2\cup\Xi +p''_1+p'_2-\pi _1^*(\zeta _1)-\mu _2
\end{array}\]
for some $\mu _i\in \lambda _i+P_2,i=1,2$.
\end{cor}
Let us choose in an arbitrary way $\Xi ,T_1,T_2$ as above and let us
denote by $x_1,y_1,x_2,y_2$ the elements of $P(\tilde{C},\sigma )$
such that $T_1=\Xi +x_1\cup \Xi +y_1,T_2=\Xi +x_2\cup \Xi +y_2$. There
are two possible ways of representing the set $\{ x_1,y_1,x_2,y_2\} $
as union $A\cup B$, where $\# A=\# B=2$, and $A,B$ have one point of
$\{ x_1,y_1\} $ and one point of $\{ x_2,y_2\} $, namely as:
\begin{equation}\label{e40.1}
\begin{array}{ccl}\{ x_1,x_2\} &\cup &\{ y_1,y_2\} ,\\
\{ x_1,y_2\} &\cup &\{ y_1,x_2\}
\end{array}
\end{equation}
Taking the sums of the sets in (\ref{e40.1}), using
Corollary~(\ref{c39.1}) and taking into account that $\lambda
_1+\lambda
_2=\mu $ (Section~(\ref{s3})) we see that the extended Prym data
determines the following $P_2$-orbit of quadruples of points in
$P(\tilde{C},\sigma )$, each quadruple being split into a union of
two pairs:
\begin{equation}\label{e405.1}
\begin{array}{c}\{ \{ 2p'_1+\pi ^*p_2-\pi _1^*(2\zeta _1)-\mu
+\rho ,2p''_1+\pi ^*p_2-\pi _1^*(2\zeta _1)-\mu +\rho \} \\
\cup \{ 2p'_2+\pi ^*p_1-\pi _1^*(2\zeta _1)-\mu
+\rho ,2p''_2+\pi ^*p_1-\pi _1^*(2\zeta _1)-\mu +\rho \} \}
\end{array}
\end{equation}
where $\rho \in P_2$. The splitting of the quadruples is consistent
with the action of $P_2$ on $Q$.
Let $f_2^{-1}(f(p_i))=\{ q'_i,q''_i\} $ where $\pi
_2^{-1}(q'_i)=p'_i,\pi _2^{-1}(q''_i)=p''_i,i=1,2$. Let
$f_2^{-1}(f(p_j))=q_j$ for $3\leq j\leq 2g+2$. Since $\pi
_2:\tilde{C}\longrightarrow C_2$ is branched at $\{ q'_1,q''_1,q'_2,q''_2\} $
one has
\begin{equation}\label{e41.2}
2p'_i=\pi _2^*(q'_i)\; ,\; 2p''_i=\pi _2^*(q''_i)
\end{equation}
One has also that
\begin{equation}\label{e41.3}
\pi _1^*(2\zeta _1)=\pi _1^*f^*_1(f(p_i))=\pi ^*f^*(f(p_i))=2\pi ^*(p_i)
\end{equation}
{\bf Claim} {\it For any $i$ with $1\leq i\leq 2$ and any $j$ with
$3\leq j\leq 2g+2$ there is a point $\rho _{ij}\in P_2$ such that $\mu
=-\pi ^*(p_i-p_j)-\rho _{ij}$.}\\
{\bf Proof.} By Eq.~(\ref{e8.1}) one has $\pi ^*J(C)_2=P_2\cup
(\mu +P_2)$. Since $P(\tilde{C},\sigma )=\pi _2^*J(C_2)$ one concludes
by the description of the points of order 2 of the hyperelliptic
Jacobian $J(C_2)$ \cite{mum2} that
$$
P_2=\{ \pi_2^*(S_1-S_2)\mid S_1\cup S_2\subset \{ q_3,...,q_{2g+2}\} ,S_1\cap
S_2=\emptyset ,\# S_1=\# S_2\}
$$
Using this one easily shows that $\pi ^*(p_i-p_j)\not \in P_2$. q.e.d.
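
Let us make explicit how the Claim follows from this. Both $p_i$ and
$p_j$ are ramification points of $f$, so $2p_i\equiv 2p_j$ and
$p_i-p_j\in J(C)_2$. Hence $\pi ^*(p_i-p_j)\in \pi ^*J(C)_2=P_2\cup
(\mu +P_2)$ and, since $\pi ^*(p_i-p_j)\not \in P_2$, there exists
$\rho _{ij}\in P_2$ with
$$
\pi ^*(p_i-p_j)=\mu +\rho _{ij}
$$
All the classes involved are of order dividing 2, so this equality can
be rewritten as $\mu =-\pi ^*(p_i-p_j)-\rho _{ij}$.
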
Now, using Eq.~(\ref{e41.2}),(\ref{e41.3}) and the Claim we have
for any $j$ with $3\leq j\leq 2g+2$
\[\begin{array}{cccc}\mbox{}&2p'_1+\pi ^*p_2-\pi _1^*(2\zeta _1)-\mu
+\rho &=&\pi _2^*(q'_1)-\pi ^*(p_2)-\mu +\rho \\
=&\pi _2^*(q'_1)-\pi ^*(p_j)+\rho _{2j}+\rho &=&\pi
_2^*(q'_1-q_j)+\rho _{2j}+\rho
\end{array}\]
We obtain that the $P_2$-orbit $Q$ is equal to
\begin{equation}\label{e425.1}
\begin{array}{c}\{ \{ \pi _2^*(q'_1-q_j)+\rho \; ,\; \pi
_2^*(q''_1-q_j)+\rho\} \\
\cup \{ \pi _2^*(q'_2-q_j)+\rho \; ,\; \pi _2^*(q''_2-q_j)+\rho \} \}
\end{array}
\end{equation}

{\bf Reconstruction of $(C,\eta )$ in the hyperelliptic case,
$g\geq 3$.}\\
We have a polarized isomorphism $P(\tilde{C},\sigma )\simeq
J(C_2)$. So, by Torelli's theorem \cite{acgh} one reconstructs the
smooth, hyperelliptic curve $C_2$ of genus $g_2=g-1\geq 2$. It has a
unique complete linear system $g^1_2$. Take a point $q\in C_2$ such
that $2q\in g^1_2$. Consider the Abel map $\alpha :C_2\longrightarrow
J(C_2)$ given by $\alpha (x)=cl(x-q)$.
\begin{lem}\label{l435.1}
There is a unique quadruple of $Q$ whose points belong to $\alpha
(C_2)$. Any other quadruple has no points in common with $\alpha
(C_2)$.
\end{lem}
{\bf Proof.} If $q=q_j$ we set $\rho =0$ in (\ref{e425.1}) and see
that
the quadruple
\begin{equation}\label{e435.1}
\{ \{ \alpha (q'_1),\alpha (q''_1)\}\cup \{ \alpha (q'_2),\alpha
(q''_2)\} \}
\end{equation}
is contained in $\alpha (C_2)$. Suppose that for some $\rho \in
J(C_2)_2$, $\rho \neq 0$, one has
$$
q'_1-q_j+\rho \equiv x-q_j.
$$
Multiplying both sides of this equality by 2 we obtain $2q'_1\equiv
2x$. Since $C_2$ has a unique $g^1_2$ and $q'_1\neq x$ for $\rho \neq
0$ we conclude that $2q'_1\in g^1_2$, which is absurd. This argument
shows that none of the quadruples of $Q$ different from
(\ref{e435.1})
can have points in common with $\alpha (C_2)$. q.e.d.

Now, we choose a map $f_2:C_2\longrightarrow {\bf P}^1$ of degree 2
and observe that the quadruple of points of $C_2$ defined in the lemma
is transformed by $f_2$ into a set of two points. Furthermore this set
does not depend on the choice of the ramification point $q$ of $f_2$.
Let us denote it by $B_1$. Let $B_2$ be the branch locus of $f_2$.
Then $C$ is isomorphic to the hyperelliptic curve branched at
$B=B_1\cup B_2$ and $\eta \in J(C)_2$ corresponds to this partition of
$B$ \cite{mum2}.

{\bf Reconstruction of $(C,\eta )$ in the case $g=2$.}\\
Here $P(\tilde{C},\sigma )$ is an elliptic curve $E$. Let $o\in E$
be the zero, let $\varphi _1,\varphi _2$ be a basis of $H^0(E,{\cal
O}_{E}(2o))$ and let $f_2=(\varphi _1:\varphi _2):E\longrightarrow
{\bf
P}^1$. For any $\rho \in E_2$, if $t_{\rho }:E\longrightarrow E$ is
the translation by $\rho $, there exists $\psi \in PGL(2)$ such that
the following diagram is commutative
\begin{equation}\label{e45.1}
\begin{array}{rlcl}\mbox{}&E&\stackrel{t_{\rho }}{\longrightarrow }
&E\\ f_2&\downarrow &\mbox{}&\downarrow
\hspace{.25cm}f_2\\
\mbox{}&{\bf P}^1&\stackrel{\psi }{\longrightarrow }&{\bf P}^1
\end{array}
\end{equation}
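
The existence of $\psi $ can be checked directly. Since $\rho \in
E_2$, Abel's theorem gives $2(-\rho )\equiv 2o$, hence
$$
t_{\rho }^*{\cal O}_{E}(2o)\simeq {\cal O}_{E}(2(-\rho ))\simeq {\cal
O}_{E}(2o)
$$
so $t_{\rho }$ preserves the linear system $\mid 2o\mid $ and
therefore induces a transformation $\psi \in PGL(2)$ with $\psi \circ
f_2=f_2\circ t_{\rho }$.
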
Moreover $\psi $ permutes the branch points of $f_2$. Let $B_2$ be the
branch locus of $f_2$. Take any of the quadruples of $Q$. Each of its
two pairs is invariant under the action of $-id_E$. Thus the image of
the quadruple is a set of two points which we denote by $B_1$. If we
choose another quadruple of $Q$ with image $B'_1$, then
(\ref{e45.1})
shows that there is a $\psi \in PGL(2)$ such that $\psi
(B_1)=B'_1,\psi (B_2)=B_2$. This gives the reconstruction of $(C,\eta
)$, up to isomorphism, as the hyperelliptic curve branched at
$B=B_1\cup B_2$ and $\eta $ as the point of $J(C)_2$ which
corresponds to this partition of $B$.
\section{The bi-elliptic case, $g\geq 4$}\label{s6}
Let $f:C\longrightarrow E$ be a double covering of an elliptic curve
$E$ ramified at $B=\{ x_1,...,x_{2g-2}\} $ and determined by $\delta
\in Pic^{g-1}(E)$ with $\delta ^{\otimes 2}\simeq {\cal O}_{E}(B)$.
Suppose $\eta =f^*(\epsilon )$ for some $\epsilon \in Pic^0(E)_2$.
Then
the unramified covering $\pi :\tilde{C}\longrightarrow C$ determined
by $\eta $ fits into the commutative diagram of Fig.2
\begin{figure}\label{f47.1}
\begin{center}
\begin{picture}(46,54)(0,0)
\put (19,0){\makebox(8,8){$E$}}
\put (0,23){\makebox(8,8){$C_1$}}
\put (19,23){\makebox(8,8){$C$}}
\put (38,23){\makebox(8,8){$C_2$}}
\put (19,46){\makebox(8,8){$\tilde{C}$}}
\put (19,46){\vector(-1,-1){15}}
\put (27,46){\vector(1,-1){15}}
\put (23,46){\vector(0,-1){15}}
\put (23,23){\vector(0,-1){15}}
\put (4,23){\vector(1,-1){15}}
\put (42,23){\vector(-1,-1){15}}
\put (24,15){$f$}
\put (24,38){$\pi $}
\put (6,38){$\pi _1$}
\put (37,38){$\pi _2$}
\put (6,15) {$f_1$}
\put (37,15) {$f_2$}
\end{picture}
\end{center}
\caption{}
\end{figure}
where $deg(f_i)=deg(\pi _i)=2,f_1:C_1\longrightarrow E$ is unramified,
determined by $\epsilon ,f_2:C_2\longrightarrow E$ is ramified at $B$
and is determined by $\delta _2=\delta \otimes \epsilon $. Here we
have the assumptions of Part (iii) of Proposition~(\ref{p26.1}) so the
extended Prym data determines:
\begin{itemize}
\item $E$ as the curve isomorphic to the dual $X\subset {\bf
P}(T_0P)$ of the unique irreducible component of degree $\geq 2$ of
the branch locus $G(R)$ of the Gauss map $G:T^{ns}\longrightarrow
{\bf P}(T_0P)^*$.
\item The points $\{ x_i \mid i=1,...,2g-2\} $ as the duals of the
remaining irreducible components of $G(R)$.
\item $\delta _2\simeq \delta \otimes \epsilon $ as isomorphic to
${\cal O}_{X}(1)$.
\end{itemize}
So, it remains to reconstruct $\epsilon $, which is the
content of the rest of this section.
\begin{lem}\label{l48.1}
Let $T_1=Z_1-\pi ^*\Delta -\mu _1,T_2 = Z_2-\pi ^*\Delta -\mu _2$ be
arbitrary divisors of the orbits $O_1,O_2\subset \mid 2\Xi \mid$,
where $\mu _i\in \lambda _i+P_2$ (Section~(\ref{s3})). Then $T_1,T_2$
are irreducible. Reordering, if necessary, $\{ \lambda _1,\lambda
_2\} $, respectively $\{ O_1,O_2\} ,\{ Z_1,Z_2\} ,\{ T_1,T_2\} $ one
has that the elements $e_1\in T_1,e_2\in T_2$ have the form
(i) $e_1=\pi _1^*(\xi _1)+\pi _2^*(\xi _2)-\pi ^*\Delta-\mu
_1$\\ \noindent where $\xi _1\in C_1,\xi _2\in W_{g-2}(C_2)$ and
$Nm_{f_1}(\xi _1)+Nm_{f_2}(\xi _2)=\delta _2$.
(ii) $e_2=\pi _2^*(\xi _2)-\pi ^*\Delta-\mu _2$\\
where $\xi _2\in W_{g-1}(C_2)$ and $Nm_{f_2}(\xi _2)=\delta
_2$.
\end{lem}
{\bf Proof.} One has to calculate the irreducible components of $Z$
defined in (\ref{e165.1}). One has
$$
H^0(C,K_C\otimes \eta ) \simeq H^0(C,f^*\delta _2)\simeq H^0(E,\delta
_2)\oplus H^0(E,\delta _2\otimes \delta ^{-1})\simeq H^0(E,\delta _2)
$$
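
Here the last isomorphism uses that $\delta _2\otimes \delta
^{-1}=\delta \otimes \epsilon \otimes \delta ^{-1}=\epsilon $, and
$\epsilon $ is a non-trivial point of $Pic^0(E)_2$ (otherwise $\eta
=f^*(\epsilon )$ would be trivial), so
$$
H^0(E,\delta _2\otimes \delta ^{-1})=H^0(E,\epsilon )=0
$$
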
Thus $\mid K_C\otimes \eta \mid =f^*\mid \delta _2\mid $. If
$\hat{D}$
is an effective divisor of $\tilde{C}$ such that $Nm_{\pi
}(\hat{D})\in f^*\mid \delta _2\mid $, then
$$
\hat{D}=\pi _1^*E+\pi _2^*F
$$
where $E$ and $F$ are effective divisors of $C_1,C_2$ respectively.
One has $Nm_{\pi }\circ \pi ^*_i=f^*\circ Nm_{f_i}$, so
$$
Nm_{f_1}(E)+Nm_{f_2}(F)\equiv \delta _2
$$
Corollary~(\ref{c19.1}) and a dimension count show that $cl(\hat{D})$
might be a general element of $Z$ if either $deg(E)=1,deg(F)=g-2$ or
$E=0,deg(F)=g-1$. So, $Z=Z'\cup Z''$ where
$$
Z'=\{ \pi _1^*(\xi _1)+\pi _2^*(\xi _2)\mid \xi _1\in C_1,\xi _2\in
W_{g-2}(C_2),Nm_{f_1}(\xi _1)+Nm_{f_2}(\xi _2)\equiv \delta _2\}
$$
and
$$
Z''=\{ \pi _2^*(\xi _2)\mid \xi _2\in W_{g-1}(C_2),Nm_{f_2}(\xi
_2)\equiv \delta _2\}
$$
{\bf Claim 1.} {\it $Z'$ is irreducible.}\\
{\bf Proof.} We consider the map $h:C_2^{(g-2)}\longrightarrow E$
defined by $h(D)=\mid \delta _2-Nm_{f_2}(D)\mid $ and the pull-back
diagram
$$
\begin{array}{ccl}X&\longrightarrow &C_1\\
\downarrow &\mbox{}&\downarrow f_1\\
C_2^{(g-2)}&\stackrel{h}{\longrightarrow }&E
\end{array}
$$
Then $Z'$ is the image of $X$ under the map
$$
(D,x)\longmapsto cl(\pi _1^*(x)+\pi _2^*(D))
$$
Now, $X$ could be reducible only if $h_*\pi _1(C_2^{(g-2)})$ were
contained in $f_{1*}\pi _1(C_1)$. This is impossible. Indeed,
$f_{2*}:H_1(C_2)\longrightarrow H_1(E)$ is epimorphic since $f_2$ is
ramified. Therefore
$$
(cl\circ f_2^{(g-2)})_*:H_1(C_2^{(g-2)})\longrightarrow
H_1(J_{g-2}(E)) $$
is epimorphic. Composing it with the isomorphism
$J_{g-2}(E)\longrightarrow E$ given by $\xi \mapsto \mid \delta
_2-\xi \mid $ one obtains that $h_*:H_1(C_2^{(g-2)})\longrightarrow
H_1(E)$ is epimorphic. This proves that $X$ and therefore $Z'$ are
irreducible. q.e.d.
{\bf Claim 2.} {\it $Z''$ is irreducible. }\\
{\bf Proof.} With the same notation as above one considers the
pull-back diagram
$$
\begin{array}{ccl}Y&\longrightarrow &C_2\\
\downarrow &\mbox{}&\downarrow f_2\\
C_2^{(g-2)}&\stackrel{h}{\longrightarrow }&E
\end{array}
$$
Then $Z''$ is the image of $Y$ under the map
$$
(D,y) \longmapsto cl(\pi _2^*(D+y))
$$
In order to prove that $Y$ is irreducible it suffices to verify that
not every component of the branch divisor $h^*(B)$ has even
multiplicity. Now, $h$ can be decomposed as
$$
h=p\circ f_2^{(g-2)}:C_2^{(g-2)}\longrightarrow E^{(g-2)}
\longrightarrow E
$$
where $p$ is the fiber bundle map defined by $p(A)=\mid \delta
_2-A\mid $. Let $x\in B$. Then $\mid \delta _2-x\mid $ is a linear
system of degree $g-2\geq 2$ without base points. Let
$A=p_1+...+p_{g-2}$ be an element with no points in common with $B$.
Let $D\in C_2^{(g-2)}$ with $f_2^{(g-2)}(D)=A$. Then $p^{-1}(x)$ is
smooth at $A$ and $f_2^{(g-2)}$ is nondegenerate at $D$, thus
$h^*(x)$ is a divisor with multiplicity 1. q.e.d.
Now, $Z$ has two connected components $Z_1$ and $Z_2$, enumerated
as in Proposition~(\ref{p17.1}). So $Z'\neq Z''$, the $Z_i$ are irreducible
and either $Z_1=Z',Z_2=Z''$ or $Z_1=Z'',Z_2=Z'$. Reordering $\{
Z_1,Z_2\} $ if necessary we can assume that the former case takes
place. q.e.d.

\begin{lem}\label{l51.1}
The singular locus of $Z_1$ has codimension $\geq 2$.
\end{lem}
{\bf Proof.} Consider a divisor of $\tilde{C}$ of the form $\pi
_1^*A+H$ where $H$ is effective, $\pi _1$-simple and $deg(A)\geq 1$.
Then by Corollary~(\ref{c92.2}) one concludes that
\begin{equation}\label{e515.1}
h^0(\tilde{C},\pi _1^*A+H)=h^0(C_1,A)
\end{equation}
Let $\hat{D}=\pi _1^*(x)+\pi _2^*(F)$ where $x\in C_1$, $F$ is an
effective divisor of $C_2$ and $f_1(x)+Nm_{f_2}(F)\equiv \delta _2$.
Proposition~(\ref{p17.1}) and (\ref{e515.1}) show that ${\cal
O}_{\tilde{C}}(\hat{D})$ is a singular point of $Z_1$ if and only if
$F$ is not $\pi _1$-simple. Now, if $\pi _1^*(y)\leq \pi _2^*(F)$,
then one easily checks that $f_2^*(f_1(y))\leq F$. Thus $Sing(Z_1)$
consists of
\begin{equation}\label{e52.1}
cl(\pi _1^*(x+f_1^*(t))+\pi _2^*(G))
\end{equation}
where $x\in C_1,t\in E,G\in C_2^{(g-4)}$ and $Nm_{f_2}(G)\in \mid
\delta _2-f_1(x)-2t\mid $. For any $G\in C_2^{(g-4)}$ there are two
different $\zeta _1\in J_3(C_1)$ such that $Nm_{f_1}(\zeta _1)\equiv
\delta _2-Nm_{f_2}(G)$. Thus the elements of the type (\ref{e52.1})
form a sublocus of $Z_1$ of dimension $\leq g-4$. q.e.d.

\begin{lem}\label{l53.1}
$Sing(Z_2)$ has a unique irreducible component $V$ of codimension 1 in
$Z_2$. A Zariski open, dense subset of $V$ consists of the elements
\begin{equation}\label{e53.3}
cl(\pi _2^*(f^*_2(x)+G))
\end{equation}
where $x\in E$, and $G$ is an effective, $f_2$-simple divisor of $C_2$
such that $Nm_{f_2}(G)\in \mid \delta _2-2x\mid $.
\end{lem}
{\bf Proof.} We claim that
\begin{equation}\label{e53.1}
dimW^1_{g-1}(C_2)\cap Nm_{f_2}^{-1}(\delta _2)\leq g-4
\end{equation}
This is clear if $C_2$ is not hyperelliptic. If $C_2$ is
hyperelliptic, then $W^1_{g-1}(C_2)=g^1_2+W_{g-3}(C_2)$. This
irreducible variety can not be contained in $Nm_{f_2}^{-1}(\delta
_2)$. Indeed, otherwise its translation would be contained in the
abelian hypersurface $Nm_{f_2}^{-1}(0)$ of $J(C_2)$ which is absurd
since this translation generates $J(C_2)$. By (\ref{e53.1}) we
conclude that the sublocus of $Sing(Z_2)$ :
\begin{equation}\label{e54.1}
\{ \pi _2^*(\xi _2)\mid Nm_{f_2}(\xi _2)=\delta _2,h^0(C_2,\xi
_2)\geq 2\}
\end{equation}
has codimension $\geq 2$ in $Z_2$.
Suppose $F$ is an effective, $f_2$-simple divisor of $C_2$ such
that $Nm_{f_2}(F)\in \mid \delta _2\mid $. Assume that $cl(\pi
_2^*F)\in SingZ_2$. By Proposition~(\ref{p17.1}) this is equivalent to
$h^0(\tilde{C},\pi _2^*F)\geq 2$. Then we claim that $h^0(C_2,F)\geq 2
$, so $\pi _2^*F$ belongs to the locus (\ref{e54.1}). Indeed, since
$\pi
_2:\tilde{C}\longrightarrow C_2$ is a double unramified covering
corresponding to $f^*_2(\epsilon )\in Pic^0(C_2)_2$ we have
\begin{equation}\label{e53.2}
h^0(\tilde{C},\pi _2^*F)=h^0(C_2,F)+h^0(C_2,f^*_2(\epsilon )(F))
\end{equation}
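
Eq.~(\ref{e53.2}) is the standard decomposition for an unramified
double covering: since $\pi _2$ corresponds to the 2-torsion bundle
$f^*_2(\epsilon )$ one has $\pi _{2*}{\cal O}_{\tilde{C}}\simeq {\cal
O}_{C_2}\oplus f^*_2(\epsilon )$, and the projection formula gives
$$
H^0(\tilde{C},\pi _2^*F)\simeq H^0(C_2,F)\oplus H^0(C_2,f^*_2(\epsilon
)(F))
$$
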
By Lemma~(\ref{l92.1}) we conclude that $h^0(C_2,f^*_2(\epsilon
)(F))=0$. So, $cl(\pi _2^*(F))$ belongs to $Sing(Z_2)$ if and only if
$cl(F)\in W^1_{g-1}(C_2)$.
Now, suppose that $F=f_2^*(x)+G$ where $x\in E,G$ is effective and
$Nm_{f_2}(G)\in \mid \delta _2-2x\mid $. Let $t_{\epsilon }(x)$ be the
translation of x by $\epsilon $. Then by Eq.~(\ref{e53.2}) we have
\begin{equation}\label{e54.3}
h^0(\tilde{C},\pi
_2^*F)=h^0(C_2,f_2^*(x)+G)+h^0(C_2,f_2^*(t_{\epsilon }(x))+G)
\end{equation}
Thus $h^0(\tilde{C},\pi _2^*F)\geq 2$ and $cl(\pi _2^*F)\in Sing Z_2$. The
sublocus of $SingZ_2$
$$
V=\{ cl(\pi _2^*(f_2^*(x)+G))\mid x\in E,G\geq 0,Nm_{f_2}(G)\in \mid
\delta _2-2x\mid \}
$$
is the image of $X$ where $X$ is defined by the pull-back diagram
\begin{equation}\label{e55.1}
\begin{array}{ccl}X&\longrightarrow &E\\
\downarrow &\mbox{}&\downarrow \beta \\
C_2^{(g-3)}&\stackrel{\alpha }{\longrightarrow }&J_2(E)
\end{array}
\end{equation}
and $\alpha (G)=cl(\delta _2-Nm_{f_2}(G)),\beta (x)=cl(2x)$. The same
argument as in Claim 1 of Lemma~(\ref{l48.1}) shows that $X$ is
irreducible. This implies that $V$ is irreducible as well.
Corollary~(\ref{c92.2}) and Eq.~(\ref{e54.3}) show that for $F=f
_2^*(x)+G$ one has $h^0(\pi _2^*F)=2$ if and only if $G$ is
$f_2$-simple. Thus the points (\ref{e53.3}) form a Zariski open, dense
subset of $V$.
Finally, $dimX=g-3$ and we claim that the map $X\longrightarrow V$
given by $(G,x)\longmapsto cl(\pi _2^*(f_2^*(x)+G))$ is of degree 2,
hence $dimV=g-3$. Indeed, let $\sigma _2:\tilde{C}\longrightarrow
\tilde{C}$ be the involution which interchanges the sheets of $\pi
_2$. Then for any $\pi _2^*(\xi _2)\in V$ with $h^0(\tilde{C},\pi
_2^*(\xi _2))=2$ according to Eq.~(\ref{e54.3}) there are exactly two
$\sigma _2$-invariant divisors in $\mid \pi _2^*(\xi _2)\mid $ namely
\begin{equation}\label{e56.1}
\begin{array}{cc}
\pi _2^*(f_2^*(x)+G)\; ,&\pi _2^*(f_2^*(t_{\epsilon }(x))+G)
\end{array}
\end{equation}
with $x,G$ as in the lemma. q.e.d.

Let
$$
S_2=\{ F\in C_2^{(g-2)}\mid Nm_{f_2}(F)\in \mid \delta _2\mid \}
$$
One has a surjective map
$$
\varphi =cl\circ \pi _2^*:S_2\longrightarrow Z_2
$$
From the proof of Claim 2 of Lemma~(\ref{l48.1}) we see that $S_2$
is irreducible. Moreover $deg\varphi =1$ since $h^0(\tilde{C},L)=1$
for
any sufficiently general $L\in Z_2$. Let us consider the Stein
factorization \cite{hart}
$$
\varphi =\psi \circ \alpha :S_2\longrightarrow \Gamma \longrightarrow
Z_2
$$
where $\psi $ is a finite map and $\alpha $ has connected fibers. Let
$$
W_1=\{ F\in S_2\mid h^0(C_2,F)\geq 2\}\; ,\;
W_2=\{ f_2^*A+E\in S_2\mid A\geq 0,E\geq 0,deg(A)\geq 2\}
$$
One has $codim_{S_2}(W_1)\geq 1$ and $codim_{S_2}(W_2)\geq 2$. Let
$S_2^0=S_2\backslash (W_1\cup W_2)$ and let $\Gamma ^0=\alpha(S^0_2)$.
\begin{lem}\label{l58.1}
The points of $S^0_2,\Gamma^0$ are nonsingular in $S_2,\Gamma $
respectively, $codim_{\Gamma }(\Gamma \backslash \Gamma ^0)\geq 2$ and
the map $\alpha:S^0_2\longrightarrow \Gamma ^0$ is an isomorphism. Let
$n:N\longrightarrow Z_2$ be the normalization of $Z_2$. Then there
exists a finite map $\beta :N\longrightarrow \Gamma $ such that
$n=\psi \circ \beta $. If $N^0=\beta ^{-1}(\Gamma ^0)$ then
$codim_N(N\backslash N^0)\geq 2$ and $\beta :N^0\longrightarrow
\Gamma ^0$ is an isomorphism
\end{lem}
{\bf Proof.} The points of $S^0_2$ are nonsingular by
Proposition~(\ref{p91.1}). For any $x\in S^0_2$ one has $\# \varphi
^{-1}(\varphi (x))=2$ as we have shown in the proof of
Lemma~(\ref{l53.1}).
Thus the map $\alpha :S^0_2\longrightarrow \Gamma ^0$ is bijective.
The map $\varphi $ is nondegenerate at the points of $S^0_2$. Indeed
$\varphi = cl\circ \pi _2^*=\pi _2^*\circ cl$, the map
$cl:S_2\longrightarrow J_{g-1}(C_2)$ is nondegenerate at any $F\in
S^0_2$ since $h^0(F)=1$, and the map $\pi
_2^*:J_{g-1}(C_2)\longrightarrow J_{2g-2}(\tilde{C})$ is obviously
nondegenerate. We conclude that $\alpha :S^0_2\longrightarrow \Gamma
^0$ is an isomorphism. One has $codim_{\Gamma }(\Gamma \backslash
\Gamma ^0)\geq 2$ since $codim_{Z_2}\varphi (W_1)\geq 2$ by
(\ref{e53.1}).
The rest of the lemma is clear by the universal property of the
normalization. q.e.d.

{\bf Reconstruction of $(C,\eta )$ in the bi-elliptic case, $g\geq
4$.}
\noindent
At the beginning of this section we have seen how to reconstruct up to
isomorphism $E$ and the covering $f_2:C_2\longrightarrow E$. Let
$T_i\in O_i\subset \mid 2\Xi \mid ,i=1,2$. We have proved above that
the $T_i$ are irreducible and just one of them has a singular locus of
codimension 1. Reordering, if necessary, as in Lemma~(\ref{l48.1}) we
can assume that this divisor is $T_2$. We can identify $T_2$ and $Z_2$
by translation. Let $n:N\longrightarrow T_2$ be the normalization of
$T_2$. Let $R=n^{-1}(V)$. The Zariski open subset $R^0=R\cap N^0$
is dense in $R$ since $codim_N(N\backslash N^0)\geq 2.$ By the
irreducibility of $X$ in (\ref{e55.1}) one concludes that
$\alpha^{-1}\circ \beta (R^0)$ and $R$ are irreducible as well. We
have an isomorphism $f^*:\mid \delta _2\mid \longrightarrow \mid
K_C\otimes \eta \mid $. Consider the Gauss map
$G:Z_2^{ns}\longrightarrow \mid K_C\otimes \eta\mid $. Then for every
$F\in S_2$ with $h^0(\tilde{C},\pi _2^*F)=1$ one has
$$
G(\varphi
(F))=f^*(Nm_{f_2}(F))
$$
Shrinking $N^0$ from Lemma~(\ref{l58.1}) we can assume that the
following properties hold
\begin{itemize}
\item $codim_N(N\backslash N^0)\geq 2$.
\item The composition $G\circ n$ can be extended to a regular map on
$N^0$.
\item Every point of $\alpha ^{-1}\circ \beta (R^0)$ has the form
$f_2^*(x)+A$ where $x$ is not a branch point of $f_2$, $A$ is reduced
and $f_2$-simple, and $x\not \in Supp(A).$
\end{itemize}
Now, we define a rational map
$$
\gamma : R\longrightarrow E
$$
as follows. For every $L\in R^0$ the hyperplane $G\circ n(L)$ belongs
to the unique irreducible component of degree $>1$ of the branch locus
of the Gauss map $G:T\longrightarrow {\bf P}(T_0P)^*$, namely $E^*$.
By the conditions above this hyperplane is tangent to a unique point
of $E$. We denote this point by $\gamma (L)$. Now, let $L=\beta
^{-1}\circ \alpha (f_2^*(x)+A)\in R^0$. Then $n^{-1}(n(L))=\{ L,L'\} $
where
\begin{equation}\label{e62.1}
L'=\beta ^{-1}\circ \alpha (f_2^*(t_{\epsilon }(x))+A)
\end{equation}
according to (\ref{e56.1}). This shows that the map
$n:R\longrightarrow
V\subset Sing(T_2)$ is of degree 2. By (\ref{e62.1}) the corresponding
involution $\tau ^*:{\bf C}(R)\longrightarrow {\bf C}(R)$ of the field
of rational functions on $R$ transforms $\gamma ^*{\bf C}(E)$ into
itself and $\tau ^*:\gamma ^*{\bf C}(E)\longrightarrow \gamma ^*{\bf
C}(E)$ is induced by the translation map $t_{\epsilon
}:E\longrightarrow E$. This gives the reconstruction of $\epsilon \in
J(E)_2$ and completes the reconstruction of $(C,\eta )$ from the
extended Prym data.
\section{The case $g=3$}\label{s7}
Let $a\in \tilde{C}$. We define the Abel-Prym map $\phi
:\tilde{C}\longrightarrow P(\tilde{C},\sigma )$ by
$$
\phi (x)=cl(x-a-\sigma (x-a))
$$
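
Note that $\phi $ indeed takes values in the Prym variety:
$$
\phi (x)=cl((1-\sigma )(x-a))\in (1-\sigma )J(\tilde{C})=P(\tilde{C},\sigma )
$$
by the definition of $P(\tilde{C},\sigma )$.
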
\begin{lem}\label{l635.1}
Suppose $g\geq 2$. The following alternative takes place
\begin{list}{(\roman{bean})}{\usecounter{bean}}
\item $\phi $ maps $\tilde{C}$ isomorphically onto its image $\phi
(\tilde{C})$.
\item The map $\phi :\tilde{C}\longrightarrow \phi (\tilde{C})$ has
degree 2.
\end{list}
The second case occurs if and only if
{\rm (*)} $C$ is hyperelliptic and $\eta \simeq {\cal O}_{C}(p_1-p_2)$
for some $p_1,p_2\in C$.\\
\noindent Here $\phi (\tilde{C})\simeq C_2$ and $\phi =\pi _2$ (see
Fig.1) via this isomorphism.
\end{lem}
{\bf Proof.} Suppose $\phi (x)=\phi (y)$ for some $x\neq y$. Then
$x+\sigma y\equiv y+\sigma x$, thus $\tilde{C}$ is hyperelliptic. It
has a unique $g^1_2$, so $\sigma (g^1_2)=g^1_2$. Let $\sigma _1$ be
the hyperelliptic involution of $\tilde{C}$. Then $\sigma \neq \sigma
_1$ and we claim that $\sigma $ and $\sigma _1$ commute. Indeed, for
any $z\in \tilde{C}$
\[ \begin{array}{cc}
\sigma z+\sigma _1(\sigma z)\in g^1_2\; ,&\sigma (z+\sigma _1z)\in
g^1_2
\end{array} \]
Thus $\sigma \sigma
_1z=\sigma _1\sigma z$. Let $\sigma _2=\sigma \sigma _1$. Let
$C_1=\tilde{C}/\sigma _1$ and let $\overline{\sigma
}:C_1\longrightarrow C_1$ be the involution induced by $\sigma $. Then
$\overline{\sigma }$ has two fixed points since $C_1\simeq {\bf P}^1$.
Thus $\pi :\tilde{C}\longrightarrow C$ fits into the commutative
diagram of Fig.1 and condition (*) holds.
If $\phi $ were degenerate at some point $x\in \tilde{C}$, then
$\pi (x)$ would be a base point of $\mid K_C\otimes \eta\mid $, thus
condition (*) holds according to Lemma~(\ref{l21.1}).
Conversely, suppose condition (*) holds. Then by the argument above
$\phi (x)=\phi (y)$ and $x\neq y$ if and only if $y$ belongs to the
divisor $\sigma (x+\sigma _1x)$. Thus $y=\sigma _2(x)$. q.e.d.

Further we suppose that $g=3$. Let $T_i\in O_i\subset \mid 2\Xi
\mid ,i=1,2.$ The divisors $T_i$ are reduced, connected curves
according to Proposition~(\ref{p17.1}). Let $S=Nm^{-1}(\mid
K_C\otimes \eta \mid )\subset \tilde{C}^{(4)} $ and let
$Z=Nm^{-1}(K_C\otimes \eta )\cap W_4(\tilde{C})\subset
J_4(\tilde{C})$. Both $S$ and $Z$ break into two disjoint, connected
components $S=S_1\cup S_2,Z=Z_1\cup Z_2$. We enumerate so that
$cl(S_i)=Z_i$ and $T_i$ is a translation of
$Z_i,i=1,2$.
\begin{lem}\label{l66.1}
The curves $T_1,T_2$ are both singular if and only if condition (*)
of Lemma~(\ref{l635.1}) holds. If only one of the $T_i$ is singular,
then the nonsingular one is a translation of $\phi (\tilde{C})$.
\end{lem}
{\bf Proof.} If condition (*) holds, then both $T_1$ and $T_2$ are
reducible and hence singular by Corollary~(\ref{c39.1}). Since $T_i$
is
a translation of $Z_i,i=1,2$ we can work with $Z_i$. Suppose that
condition (*) does not hold and $Z'\in \{ Z_1,Z_2\} $ is a singular
curve with a singular point $L$. Let $\{ Z',Z''\} =\{ Z_1,Z_2\} $.
Then
$$
X=\{ L+x-\sigma x\mid x\in \tilde{C}\} \subset Z''
$$
Clearly $X$ is a translation of $\phi (\tilde{C})$. According to
Lemma~(\ref{l635.1}) $X$ is isomorphic to $\tilde{C}$ and $X$ is
algebraically equivalent to $2\Xi $ \cite{mas}. Thus $X=Z''$. q.e.d.

{\bf Reconstruction of $(C,\eta )$ in the case $g=3$.}

{\bf Case 1.} {\it Both $T_1,T_2$ are singular.}\\
According to Lemma~(\ref{l66.1}) we are in the situation of
Section~(\ref{s5}) where a procedure for the reconstruction of
$(C,\eta )$ was described.

{\bf Case 2.} {\it Just one of the curves $T'\in \{ T_1,T_2\} $ is
singular.}\\
We take the other curve $T''$. It is isomorphic to
$\tilde{C}$ according to Lemmas~(\ref{l66.1}) and (\ref{l635.1}). The
involution
$-id_P:T''\longrightarrow T''$ coincides with $\sigma $ via this
isomorphism. We thus reconstruct $(\tilde{C},\sigma )$.

{\bf Case 3.} {\it $T_1$ and $T_2$ are nonsingular.}\\
Consider the involutions $\sigma _i:T_i\longrightarrow T_i$
induced by $-id_P$, let $C_i=T_i/\sigma _i$ and let $\pi
_i:T_i\longrightarrow C_i$ be the factor maps, $i=1,2$. By
Proposition~(\ref{p17.1}) the map $cl:S_i\longrightarrow Z_i$ is
bijective. Thus $S_i$ are nonsingular since $Z_i$ are nonsingular.
Using Proposition~(\ref{p91.1}) one checks that the nonsingularity of
$S_i$ implies that $\sigma ^{(4)}:S_i\longrightarrow S_i$ is without
fixed points, thus $\sigma _i:T_i\longrightarrow T_i$ is without fixed
points as well, $i=1,2$. Consider the Gauss maps
$G_i:T_i\longrightarrow {\bf P}(T_0P)^*={\bf P}^1$. By
Eq.~(\ref{e26.1})
one shows that $G_i=f_i\circ \pi _i$ where $f_i:C_i\longrightarrow
{\bf P}^1$ are maps of degree 4 and concludes that the maps
$G_i:T_i\longrightarrow {\bf P}^1$ are obtained from $\varphi _{
K_C\otimes \eta }\circ \pi :\tilde{C}\longrightarrow {\bf P}^1$ by the
tetragonal construction of Donagi \cite{don},\cite{don1}.
Now, take the pair $(T_1,\sigma _1) $ and apply the tetragonal
construction to $G_1:T_1\longrightarrow {\bf P}^1$ (ibid.). One
obtains two 8-sheeted coverings $g_i:X_i\longrightarrow {\bf P}^1$
with involutions $\tau _i:X_i\longrightarrow X_i$ such that $g_i\circ
\tau _i=g_i$. For one of them, e.g. $X_2$, there is an isomorphism of
the coverings
$$
\begin{array}{rlcl}\mbox{}&X_2&\stackrel{\psi _2}{\longrightarrow
}&T_2\\ g_2&\downarrow &\mbox{}&\downarrow
\hspace{.25cm}G_2\\ \mbox{}&{\bf P}^1&=&{\bf P}^1
\end{array}
$$
such that $\psi _2\circ \tau _2=\sigma _2\circ \psi _2$. Then the
remaining pair $(X_1,\tau _1)$ is isomorphic to $(\tilde{C},\sigma )$.
% arXiv:alg-geom/9304006, "Recovering of curves with involution by extended Prym data" (April 1993).
% https://arxiv.org/abs/1809.10078
\title{Convex partial transversals of planar regions}
\begin{abstract}
We consider the problem of testing, for a given set of planar regions $\cal R$ and an integer $k$, whether there exists a convex shape whose boundary intersects at least $k$ regions of $\cal R$. We provide a polynomial-time algorithm for the case where the regions are disjoint line segments with a constant number of orientations. On the other hand, we show that the problem is NP-hard when the regions are intersecting axis-aligned rectangles or 3-oriented line segments. For several natural intermediate classes of shapes (arbitrary disjoint segments, intersecting 2-oriented segments) the problem remains open.
\end{abstract}
\section{Introduction}
A set of points $Q$ in the plane is said to be in \emph{convex position} if for
every point $q \in Q$ there is a halfplane containing $Q$ that has $q$ on its
boundary. Now, let ${\cal R}$
be a set of $n$ {\em regions} in the plane. We say that
$Q$ is a {\em partial transversal} of $\cal
R$ if there exists an injective map $f: Q \to \cal R$ such that $q \in
f(q)$ for all $q \in Q$; if $f$ is a bijection we call
$Q$ a {\em full transversal}.
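For a finite candidate point set $Q$, deciding whether $Q$ is a partial transversal of $\cal R$ is a bipartite-matching question between points and regions. A minimal sketch (function names are ours; regions are modelled as vertical segments $(x, y_{lo}, y_{hi})$, as used later in the paper):

```python
def is_partial_transversal(points, regions, contains):
    """Decide whether an injective map f: points -> regions with
    p in f(p) exists, via simple augmenting paths (Kuhn's matching)."""
    match = {}  # region index -> point index

    def augment(p, visited):
        for r in range(len(regions)):
            if r not in visited and contains(regions[r], points[p]):
                visited.add(r)
                if r not in match or augment(match[r], visited):
                    match[r] = p
                    return True
        return False

    return all(augment(p, set()) for p in range(len(points)))

# Regions as vertical segments (x, ylo, yhi).
def in_segment(seg, p):
    x, lo, hi = seg
    return p[0] == x and lo <= p[1] <= hi
```

For disjoint regions (as in most of this paper) every point lies in at most one region and the matching step is trivial; the matching only matters once regions may intersect.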
In this paper, we are concerned with the question whether a given set of regions $\cal R$ admits a convex partial transversal $Q$ of a given cardinality $|Q| = k$.
Figure~\ref {fig:example} shows an example.
\begin{figure}[tb]
\centering
\includegraphics{intro-example}
\caption{(a) A set of 12 regions. (b, c) A convex partial transversal of size $10$.}
\label{fig:example}
\end{figure}
The study of convex transversals was initiated by Arik Tamir at the Fourth NYU Computational Geometry Day in 1987, who asked
``Given a collection of compact sets, can one decide in polynomial time whether there exists a convex body whose boundary intersects every set in the collection?''
Note that this is equivalent to the question of whether a convex full transversal of the sets exists: given the convex body, we can place a point of its boundary in every intersected region;
conversely, the convex hull of a convex transversal forms a convex body whose boundary intersects every set.
In 2010, Arkin~et al.\xspace~\cite {ARKIN2014224} answered Tamir's original question in the negative (assuming P $\ne$ NP): they prove that the problem is NP-hard, even when the regions are (potentially intersecting) line segments in the plane, regular polygons in the plane, or balls in $\mkmbb{R}^3$.
On the other hand, they show that Tamir's problem can be solved in polynomial time when the regions are {\em disjoint} segments in the plane and the convex body is restricted to be a polygon whose vertices are chosen from a given discrete set of (polynomially many) candidate locations.
Goodrich and Snoeyink~\cite{gsch} show that for a set of {\em parallel} line segments, the existence of a convex transversal can be tested in $O(n \log n)$ time.
Schlipf~\cite {schlipf2012notes} further proves that the problem of finding a convex stabber for a set of disjoint {\em bends} (that is, shapes consisting of two segments joined at one endpoint) is also NP-hard.
She also studies the optimisation version of maximising the number of regions stabbed by a convex shape; we may re-interpret this question as finding the largest $k$ such that a convex partial transversal of cardinality $k$ exists.
She shows that this problem is also NP-hard for a set of (potentially intersecting) line segments in the plane.
\subparagraph{Related work.} Computing a partial transversal of maximum size
arises in wire layout applications~\cite{tompa1980optimal}. When each region in
$\cal R$ is a single point, our problem reduces to determining whether a point
set $P$ has a subset of cardinality $k$ in convex position. Eppstein
et al.\xspace~\cite{50} solve this in $O(kn^3)$ time and $O(kn^2)$ space using dynamic
programming; the total number of convex $k$-gons can also be tabulated in
$O(kn^3)$ time~\cite{rote1991counting,mitchellcounting}.
If we allow reusing elements, our problem becomes equivalent to so-called \emph{covering color classes} introduced by Arkin et al.\xspace~\cite{arkin2015}.
Arkin et al.\xspace show that for a set of regions $\cal R$ where each region is a set of two or three points, computing a convex partial transversal of $\cal R$ of maximum cardinality is NP-hard.
Conflict-free coloring has been studied extensively, and has applications in, for instance, cellular networks~\cite{even2003conflict,har2005conflict,katz2012conflict}.
\begin{table}
\centering
\caption{New and known results.}
\label{table:results}
{\footnotesize
\begin{tabular}{ r rcc }
\toprule
& & \bf disjoint & \bf intersecting \\
\midrule
\bf line segments:
& parallel & $O(n^{6})$ (upper hull only: $O(n^2)$) & N/A \\
& 2-oriented & \(\downarrow\) & open \\
& 3-oriented & \(\downarrow\) & NP-hard \\
& $\rho$-oriented & polynomial & $\uparrow$ \\
& arbitrary & open & NP-hard~\cite {ARKIN2014224} \\
\midrule
\bf rectangles:
& squares & open & open \\
& rectangles & open & NP-hard \\
\midrule
\bf other:
& bends & NP-hard~\cite {schlipf2012notes} & $\leftarrow$ \\
\bottomrule
\end{tabular}
}
\end{table}
\subparagraph{Results.}
Despite the large body of work on convex transversals, and although partial transversals are a natural extension often mentioned in the literature, surprisingly, no positive results were known.
We present the first positive results: in Section~\ref {sec:parallel} we show how to test whether a set of parallel line segments admits a convex transversal of size $k$ in polynomial time; we extend this result to disjoint segments of a fixed number of orientations in Section~\ref {sec:2-oriented}.
Although the hardness proofs of Arkin~et al.\xspace and Schlipf do extend to partial convex transversals, we strengthen these results by showing that the problem is already hard when the regions are $3$-oriented segments or axis-aligned rectangles (Section~\ref {sec:hardthreeintersect}).
Our results are summarized in Table~\ref {table:results}. The arrows in the table indicate that one result is implied by another.
For ease of terminology, in the remainder of this paper, we will drop the
qualifier ``partial'' and simply use ``convex transversal'' to mean ``partial
convex transversal''. Also, for ease of argument, in all our results we test
for {\em weakly convex} transversals. This means that the transversal may
contain three or more collinear points.
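Concretely, weak convexity of an upper chain (points ordered by increasing $x$-coordinate) amounts to every consecutive triple making a right or straight turn; a small sketch of this check (names ours):

```python
def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_weakly_convex_upper_chain(pts):
    """pts sorted by x; right or straight turns only (cross <= 0),
    so three or more collinear points are allowed."""
    return all(cross(p, q, r) <= 0 for p, q, r in zip(pts, pts[1:], pts[2:]))
```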
\section{Parallel disjoint line segments}
\label{sec:parallel}
Let \mkmcal{R} be a set of $n$ vertical line segments in $\mkmbb{R}^2$. We assume that no
three endpoints are aligned. Let $\ensuremath{\mathord\Uparrow}\mkmcal{R}$ and $\ensuremath{\mathord\Downarrow}\mkmcal{R}$ denote the sets of
upper and lower endpoints of the regions in \mkmcal{R}, respectively, and let
$\ensuremath{\mathord\Updownarrow}\mkmcal{R}=\ensuremath{\mathord\Uparrow}\mkmcal{R}\cup\ensuremath{\mathord\Downarrow}\mkmcal{R}$. In
Section~\ref{sub:Computing_an_upper_convex_transversal} we focus on computing
an \emph{upper convex transversal} ---a convex transversal $Q$ in which all
points appear on the upper hull of $Q$--- that maximizes the number of regions
visited. We show that there is an optimal transversal whose strictly convex
vertices lie only on bottom endpoints in $\ensuremath{\mathord\Downarrow}\mkmcal{R}$. In
Section~\ref{sub:Computing_a_convex_transversal} we prove that there exists an
optimal convex transversal whose strictly convex vertices are taken from the
set of all endpoints $\ensuremath{\mathord\Updownarrow}\mkmcal{R}$, and whose leftmost and rightmost vertices are
taken from a discrete set of points. This leads to an $O(n^6)$ time dynamic
programming algorithm to compute such a transversal.
\subsection{Computing an upper convex transversal}
\label{sub:Computing_an_upper_convex_transversal}
Let $k^*$ be the maximum number of regions visitable by an upper convex
transversal of \mkmcal{R}.
\begin{lemma}
\label{lem:discrete_upper_hull}
Let $U$ be an upper convex transversal of \mkmcal{R} that visits $k$ regions. There
exists an upper convex transversal $U'$ of \mkmcal{R}, that visits the same $k$
regions as $U$, and such that the leftmost vertex, the rightmost vertex, and
all strictly convex vertices of $U'$ lie on the bottom endpoints of the
regions in \mkmcal{R}.
\end{lemma}
\begin{proof}
Let $\mathcal{U}$ be the set of all upper convex transversals visiting the
same $k$ regions as $U$. Let $U'\in\mathcal{U}$ be an upper convex transversal such that the
sum of the $y$-coordinates of its vertices is minimal. Assume, by
contradiction, that $U'$ has a vertex $v$ that is neither on the lower
endpoint of its respective segment nor aligned with its adjacent
vertices. Then we can move $v$ down without making the upper hull
non-convex. This is a contradiction. Therefore, all vertices in $U'$ are
either aligned with their neighbors (and thus not strictly convex), or at the
bottom endpoint of a region.
\end{proof}
Let $\Lambda(v,w)$ denote the set of bottom endpoints of regions in \mkmcal{R} that
lie left of $v$ and below the line through $v$ and $w$. See
Figure~\ref{fig:upper_dp}(a). Let $\ensuremath{\mathit{slope}}\xspace(\overline{uv})$ denote the slope of the
supporting line of $\overline{uv}$, and observe that
$\ensuremath{\mathit{slope}}\xspace(\overline{uv})=\ensuremath{\mathit{slope}}\xspace(\overline{vu})$.
By Lemma~\ref{lem:discrete_upper_hull} there is an optimal upper convex
transversal of \mkmcal{R} in which all strictly convex vertices lie on bottom endpoints
of the segments. Let $K[v,w]$ be the maximum number of regions visitable
by an
upper convex transversal that ends at a bottom endpoint $v$, and has an incoming
slope at $v$ of \emph{at least} $\ensuremath{\mathit{slope}}\xspace(\overline{vw})$. It is important to
note that the second argument $w$ is used only to specify the slope, and $w$
may be left or right of $v$. We have that
\[ K[v,w] = \max_{u \in \Lambda(v,w)} \max_{s \in \Lambda(u,v)} K[u,s] + I[u,v], \]
where $I[u,v]$ denotes the number of regions in \mkmcal{R} intersected by the segment
$\overline{uv}$ (in which we treat the endpoint at $u$ as open, and the
endpoint at $v$ as closed). See Figure~\ref{fig:upper_dp}(a) for an illustration.
\begin{figure}[tb]
\centering
\includegraphics{upper_dp}
\caption{(a) The definition of $K[v,w]$. The region $\Lambda(v,w)$ is
indicated in purple. The segments counted in $I[u,v]$ are shown in red. (b)
The case that $K[v,w]=K[v,u]$, where $u$ corresponds to the predecessor
slope of $\ensuremath{\mathit{slope}}\xspace(\overline{vw})$. (c) The case that
$K[v,w]=K[w,v]+I[w,v]$. }
\label{fig:upper_dp}
\end{figure}
\newcommand{\ensuremath{\mathit{pred}}\xspace}{\ensuremath{\mathit{pred}}\xspace}
\begin{observation}
\label{obs:monotone_slopes}
Let $v$, $s$, and $t$ be bottom endpoints of segments in \mkmcal{R} with
$\ensuremath{\mathit{slope}}\xspace(\overline{sv}) > \ensuremath{\mathit{slope}}\xspace(\overline{tv})$. We have that
$K[v,t] \geq K[v,s]$.
\end{observation}
Fix a bottom endpoint $v$, and order the other bottom endpoints
$w \in \ensuremath{\mathord\Downarrow}\mkmcal{R}$ in decreasing order of slope $\ensuremath{\mathit{slope}}\xspace(\overline{wv})$. Let
$S_v$ denote the resulting order. We denote the $x$-coordinate of a point $v$ by $v_x$.
\begin{lemma}
\label{lem:recurrence_upper}
Let $v$ and $w$ be bottom endpoints of regions in \mkmcal{R}, and let $u$ be the
predecessor of $w$ in $S_v$, if it exists (otherwise let
$K[v,u]=-\infty$). We have that
%
\begin{align*}
K[v,w] = \begin{cases}
\max\{1, K[v, u], K[w,v] + I[w,v]\} & \text{if }w_x < v_x\\
\max\{1, K[v, u]\} & \text{otherwise.}
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
If $w$ does not have any predecessor in $S_v$ then $w$ can be the only
endpoint in $\Lambda(v,w)$. In particular, if $w$ lies right of $v$ then
$\Lambda(v,w)$ is empty, and thus $K[v,w]=1$, i.e. our transversal starts and
ends at $v$. If $w$ lies left of $v$ we can either visit only $v$ or arrive
from $w$, provided the incoming slope at $w$ is at least
$\ensuremath{\mathit{slope}}\xspace(\overline{wv})$. In that case it follows that the maximum number of
regions visited is $K[w,v]+I[w,v]$.
If $w$ does have a predecessor $u$ in $S_v$, we have
$\Lambda(v,w) = \Lambda(v,u) \cup W$, where $W$ is either empty or the
singleton $\{w\}$. By Observation~\ref{obs:monotone_slopes} (and the
definition of $K$) we have that
$K[v,u] = \max_{u' \in \Lambda(v,u)}\max_{s \in \Lambda(u',v)} K[u',s] +
I[u',v]$. Analogous to the base case we have
$\max_{u' \in W} \max_{s \in \Lambda(u',v)} K[u',s] + I[u',v] =
\max\{1,K[w,v] + I[w,v]\}$. The lemma follows.
\end{proof}
Lemma~\ref{lem:recurrence_upper} now suggests a dynamic programming approach to
compute the $K[v,w]$ values for all pairs of bottom endpoints $v,w$: we process
the endpoints $v$ on increasing $x$-coordinate, and for each $v$, we compute
all $K[v,w]$ values in the order of $S_v$. To this end, we need to compute (i)
the (radial) orders $S_v$, for all bottom endpoints $v$, and (ii) the number of
regions intersected by a line segment $\overline{uv}$, for all pairs of bottom
endpoints $u$, $v$. We show that we can solve both these problems in $O(n^2)$
time. We then also obtain an $O(n^2)$ time algorithm to compute
$k^* = \max_{v,w} K[v,w]$.
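As a sanity check, the recurrence for $K[v,w]$ can also be evaluated directly with memoization, brute-forcing $\Lambda$ and $I$ (so roughly $O(n^4)$ rather than the $O(n^2)$ achieved above). A sketch with names of our own choosing; ``below the line'' is taken non-strictly so that weakly convex (collinear) chains are allowed, and distinct $x$-coordinates are assumed:

```python
def max_upper_transversal(segs):
    """segs: vertical segments (x, ylo, yhi). Returns the maximum number
    of regions visitable by a (weakly) upper convex transversal."""
    bottoms = [(x, lo) for x, lo, _ in segs]

    def I(u, v):  # regions crossed by segment uv, open at u, closed at v
        n = 0
        for x, lo, hi in segs:
            if u[0] < x <= v[0]:
                y = u[1] + (v[1] - u[1]) * (x - u[0]) / (v[0] - u[0])
                if lo <= y <= hi:
                    n += 1
        return n

    def Lam(v, w):  # bottom endpoints left of v, on or below line through v, w
        m = (w[1] - v[1]) / (w[0] - v[0])
        return [u for u in bottoms
                if u[0] < v[0] and u[1] <= v[1] + m * (u[0] - v[0])]

    memo = {}
    def K(v, w):  # ends at v, incoming slope at least slope(vw)
        if (v, w) not in memo:
            best = 1
            for u in Lam(v, w):
                inner = max([K(u, s) for s in Lam(u, v)], default=1)
                best = max(best, inner + I(u, v))
            memo[(v, w)] = best
        return memo[(v, w)]

    return max((K(v, w) for v in bottoms for w in bottoms if v != w),
               default=min(1, len(segs)))
```

The recursion terminates because the first argument of $K$ moves strictly left at every step.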
\subparagraph{Computing predecessor slopes.} For each bottom endpoint $v$, we
simply sort the other bottom endpoints around $v$. This can be done in $O(n^2)$
time in total~\cite{overmars1988cyclic}\footnote{Alternatively, we can dualize
the points into lines and use the dual arrangement to obtain all radial orders
in $O(n^2)$ time.}. We can now obtain $S_v$ by splitting the resulting list
into two lists, one with all endpoints left of $v$ and one with the endpoints
right of $v$, and merging these lists appropriately. In total this takes
$O(n^2)$ time.
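A simple $O(n^2\log n)$ stand-in for this step sorts around each endpoint separately (the arrangement-based method is what removes the logarithmic factor). A sketch, with names ours and distinct $x$-coordinates assumed:

```python
def radial_orders(bottoms):
    """For each bottom endpoint v, return S_v: the other bottom endpoints
    ordered by decreasing slope of the line wv."""
    def slope(w, v):
        return (v[1] - w[1]) / (v[0] - w[0])

    return {v: sorted((w for w in bottoms if w != v),
                      key=lambda w, v=v: -slope(w, v))
            for v in bottoms}
```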
\subparagraph{Computing the number of intersections.} We use the standard
duality transform~\cite{bkos-cgaa-00} to map every point $p=(p_x,p_y)$ to a
line $p^* : y=p_x x - p_y$, and every non-vertical line $\ell : y=ax+b$ to a
point $\ell^*=(a,-b)$.
Consider the arrangement \mkmcal{A} formed by the lines $p^*$
dual to all endpoints $p$ (both top and bottom) of all regions in \mkmcal{R}. Observe
that in this dual space, a vertical line segment $R=\overline{pq} \in \mkmcal{R}$
corresponds to a strip $R^*$ bounded by two parallel lines $p^*$ and $q^*$. Let
$\mkmcal{R}^*$ denote this set of strips corresponding to \mkmcal{R}. It follows that if we
want to count the number of regions of \mkmcal{R} intersected by a query line $\ell$
we have to count the number of strips in $\mkmcal{R}^*$ containing the point $\ell^*$.
All our query segments $\overline{uv}$ are defined by two bottom endpoints $u$
and $v$, so the supporting line $\ell_{uv}$ of such a segment corresponds to a
vertex $\ell^*_{uv}$ of the arrangement \mkmcal{A}. It is fairly easy to count, for
every vertex $\ell^*$ of \mkmcal{A}, the number of strips that contain $\ell^*$, in a
total of $O(n^2)$ time; simply traverse each line of \mkmcal{A} while maintaining the
number of strips that contain the current point.
Since in our case we wish to count only the regions intersected by a
line segment $\overline{uv}$ (rather than a line $\ell_{uv}$), we need two more
observations. Assume without loss of generality that $u_x < v_x$. This means we
wish to count only the strips $R^*$ that contain $\ell^*_{uv}$ and whose slope
$\ensuremath{\mathit{slope}}\xspace(R^*)$ lies in the range $[u_x,v_x]$.
\begin{observation}
\label{obs:top_and_bottom}
Let $p^*$ be a line, oriented from left to right, and let $R^*$ be a
strip. The line $p^*$ intersects the bottom boundary of $R^*$ before the top
boundary of $R^*$ if and only if $\ensuremath{\mathit{slope}}\xspace(p^*) > \ensuremath{\mathit{slope}}\xspace(R^*)$.
\end{observation}
\begin{figure}[tb]
\centering
\includegraphics{strips}
\caption{A line $p^*$ intersects the bottom of the strip $R^*$ if and only if
$\ensuremath{\mathit{slope}}\xspace(p^*) > \ensuremath{\mathit{slope}}\xspace(R^*)$.}
\label{fig:strips}
\end{figure}
Again consider traversing a line $p^*$ of \mkmcal{A} (from left to right), and let
$T_{p^*}(\ell^*)$ be the number of strips that contain the point $\ell^*$ and that
we enter through the top boundary of the strip.
\begin{lemma}
\label{lem:strips_containing}
Let $\ell^*_{uv}$, with $u_x < v_x$, be a vertex of \mkmcal{A} at which the lines
$u^*$ and $v^*$ intersect. The number of strips from $\mkmcal{R}^*$ with slope in
the range $[u_x,v_x]$ containing $\ell^*_{uv}$ is
$T_{u^*}(\ell^*_{uv}) - T_{v^*}(\ell^*_{uv})$.
\end{lemma}
\begin{proof}
\begin{figure}[tb]
\centering
\includegraphics[page=2]{strips}
\caption{The strip $R^*$ with a slope in the range $[u_x,v_x]$ containing
$\ell^*_{uv}$ contributes one to $T_{u^*}(\ell^*_{uv})$ and zero to
$T_{v^*}(\ell^*_{uv})$.}
\label{fig:strips_intersection}
\end{figure}
A strip that does not contain $\ell^*=\ell^*_{uv}$ contributes zero to both
$T_{u^*}(\ell^*)$ and $T_{v^*}(\ell^*)$. A strip that contains $\ell^*$ but
has slope larger than $v_x$ (and thus also larger than $u_x$) contributes one
to both $T_{u^*}(\ell^*)$ and $T_{v^*}(\ell^*)$
(Observation~\ref{obs:top_and_bottom}). Symmetrically, a strip that contains
$\ell^*$ but has slope smaller than $u_x$ contributes zero to both
$T_{u^*}(\ell^*)$ and $T_{v^*}(\ell^*)$. Finally, a strip whose slope is in
the range $[u_x,v_x]$ is intersected by $u^*$ from the top, and by $v^*$ from
the bottom (Observation~\ref{obs:top_and_bottom}), and thus contributes one
to $T_{u^*}(\ell^*)$ and zero to $T_{v^*}(\ell^*)$. See
Figure~\ref{fig:strips_intersection} for an illustration. The lemma follows.
\end{proof}
\begin{corollary}
\label{cor:segments_intersecting}
Let $u, v \in \ensuremath{\mathord\Downarrow}\mkmcal{R}$ be bottom endpoints. The number of regions of \mkmcal{R}
intersected by $\overline{uv}$ is
$T_{u^*}(\ell^*_{uv}) - T_{v^*}(\ell^*_{uv})$.
\end{corollary}
We can easily compute the counts $T_{u^*}(\ell^*_{uv})$ for every vertex
$\ell^*_{uv}$ on $u^*$ by traversing the line $u^*$. We therefore obtain the
following result.
\begin{lemma}
\label{lem:intersection_counting}
For all pairs of bottom endpoints $u,v \in \ensuremath{\mathord\Downarrow}\mkmcal{R}$, we can compute the
number of regions in \mkmcal{R} intersected by $\overline{uv}$, in a total of
$O(n^2)$ time.
\end{lemma}
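A coordinate-level sanity check of Corollary~\ref{cor:segments_intersecting}, with the duality, the strips, and the counts $T$ written out by brute force (general position and $u_x < v_x$ assumed; names ours):

```python
def line_dual(u, v):
    # the duality maps p=(p_x,p_y) to p*: y = p_x x - p_y; the dual of the
    # line through u and v is the intersection point of u* and v*
    x = (u[1] - v[1]) / (u[0] - v[0])
    return (x, u[0] * x - u[1])

def in_strip(q, seg):
    # strip R* of vertical segment seg=(a, lo, hi): a x - hi <= y <= a x - lo
    a, lo, hi = seg
    return a * q[0] - hi <= q[1] <= a * q[0] - lo

def T(p, q, segs):
    # strips containing q that a left-to-right traversal of p* enters through
    # their top boundary, i.e. strips with slope(R*) > slope(p*) = p_x
    return sum(1 for s in segs if in_strip(q, s) and s[0] > p[0])

def crossed(u, v, segs):
    # regions crossed by segment uv (open at u, closed at v), u left of v
    n = 0
    for a, lo, hi in segs:
        if u[0] < a <= v[0]:
            y = u[1] + (v[1] - u[1]) * (a - u[0]) / (v[0] - u[0])
            if lo <= y <= hi:
                n += 1
    return n
```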
Applying this in our dynamic programming approach for computing $k^*$ we get:
\begin{theorem}
\label{thm:parallel_segments_upper_hull}
Given a set of $n$ vertical line segments \mkmcal{R}, we can compute the maximum
number of regions $k^*$ visitable by an upper convex transversal $Q$ in
$O(n^2)$ time.
\end{theorem}
\subsection{Computing a convex transversal}
\label{sub:Computing_a_convex_transversal}
We now consider computing a convex partial transversal that maximizes the number
of regions visited. We first prove some properties of convex transversals.
We then use these properties to compute the maximum number of regions visitable
by such a transversal using dynamic programming.
\subsubsection{Canonical Transversals}
\label{ssub:Canonical_Transversals}
\begin{lemma}
\label{lem:discrete_convex_hull_buildup}
Let $Q$ be a convex partial transversal of \mkmcal{R}. There exists a convex partial
transversal $Q'$ of \mkmcal{R} such that
\begin{itemize}
\item the transversals have the same leftmost vertex $\ell$ and
the same rightmost vertex $r$,
\item the upper hull of $Q'$ intersects the same regions as the upper hull of
$Q$,
\item all strictly convex vertices on the upper hull of $Q'$ lie on bottom
endpoints of \mkmcal{R},
\item the lower hull of $Q'$ intersects the same regions as the lower hull of
$Q$, and
\item all strictly convex vertices on the lower hull of $Q'$ lie on top
endpoints of regions in \mkmcal{R}.
\end{itemize}
\end{lemma}
\begin{proof}
Clip the segments containing $\ell$ and $r$ such that $\ell$ and $r$ are the
bottom endpoints, and apply Lemma~\ref{lem:discrete_upper_hull} to get an
upper convex transversal $U$ of \mkmcal{R} whose strictly convex vertices lie on
bottom endpoints and that visits the same regions as the upper hull of
$Q$. So we can replace the upper hull of $Q$ by $U$. Symmetrically, we can
replace the lower hull of $Q$ by a transversal that visits the same regions and
whose strictly convex vertices use only top endpoints.
\end{proof}
A partial convex transversal $Q'$ of \mkmcal{R} is a \emph{lower canonical} transversal if
and only if
\begin{itemize}[nosep]
\item the strictly convex vertices on the upper hull of $Q'$ lie on bottom
endpoints in \mkmcal{R},
\item the strictly convex vertices on the lower hull of $Q'$ lie on bottom or
top endpoints of regions in \mkmcal{R},
\item the leftmost vertex $\ell$ of $Q'$ lies on a line through $w$, where
$w$ is the leftmost strictly convex vertex of the lower hull of $Q'$, and
another endpoint.
\item the rightmost vertex $r$ of $Q'$ lies on a line through $z$, where $z$
is the rightmost strictly convex vertex of the lower hull of $Q'$, and
another endpoint.
\end{itemize}
An \emph{upper canonical} transversal is defined analogously, but now $\ell$ and
$r$ lie on lines through an endpoint and the leftmost and rightmost strictly
convex vertices on the upper hull.
\begin{lemma}
\label{lem:discrete_convex_hull}
Let $Q$ be a convex partial transversal of \mkmcal{R} for which all $h\geq 2$ strictly
convex vertices in the lower hull lie on endpoints of regions in \mkmcal{R}. There
exists a lower canonical transversal $Q'$ of \mkmcal{R}, that visits the same regions
as $Q$.
\end{lemma}
\begin{proof}
\begin{figure}[tb]
\centering
\includegraphics{canonical_hull}
\caption{We move $\ell$ and the vertices on $\overline{\ell{}w}$
downwards, bending at $u$ and $w$ until $\overline{\ell{}w}$ contains a
bottom endpoint $p$ or becomes colinear with $\overline{ww'}$.}
\label{fig:canonical_hull}
\end{figure}
Let $\ell$ be the leftmost point of $Q$, let $u$ be the vertex of $Q$
adjacent to $\ell$ on the upper hull, let $w$ be the leftmost strictly convex
vertex on the lower hull of $Q$, and let $w'$ be the strictly convex vertex
of $Q$ adjacent to $w$. We move $\ell$ and all other vertices of $Q$ on
$\overline{\ell{}w}$ downwards, bending at $u$ and $w$, while $Q$ remains
convex and visits the same $k$ regions until: (i) $\ell$ lies on the bottom
endpoint of its segment, (ii) the segment $\overline{\ell{}w}$ contains the
bottom endpoint $p$ of a region, or (iii) the segment $\overline{\ell{}w}$
has become collinear with $\overline{ww'}$. See
Figure~\ref{fig:canonical_hull}. Observe that in all cases $\ell$ lies on a
line through the leftmost strictly convex vertex $w$ and another endpoint
(either $\ell$ itself, $p$, or $w'$). Symmetrically, we move $r$ downwards
until it lies on an endpoint or on a line through the rightmost strictly
convex vertex $z$ on the lower hull of $Q$ and another endpoint. Let $Q''$ be
the resulting convex transversal we obtain.
Let $\mkmcal{R}'$ be the regions intersected by the upper hull of $Q''$ (setting the
bottom endpoint of the regions containing $\ell$ and $r$ to be $\ell$ and
$r$). We now appeal to Lemma~\ref{lem:discrete_upper_hull} to get that there
is an upper hull that also visits all regions in $\mkmcal{R}'$ and in which all
strictly convex vertices lie on bottom endpoints. So, we can replace the
upper hull of $Q''$ by this upper hull and obtain the transversal $Q'$ stated
in the lemma.
\end{proof}
\begin{lemma}
\label{lem:discrete_straight}
Let $Q$ be a convex partial transversal of \mkmcal{R}, whose lower hull intersects at
least one region (other than the regions containing the leftmost and
rightmost vertices) but contains no endpoints of regions in \mkmcal{R}. There exists
a convex partial transversal $Q'$ intersecting the same regions as $Q$ that
does visit one endpoint of a region in \mkmcal{R}.
\end{lemma}
\begin{proof}
Consider the region $R$ intersected by $\overline{\ell{}r}$ whose bottom
endpoint $p$ minimizes the distance between $p$ and
$w=R\cap\overline{\ell{}r}$, and observe that $w$ is a vertex of $Q$. Shift
down $w$ (and the vertices on $\overline{\ell{}w}$ and $\overline{wr}$) until
$w$ lies on $p$.
\end{proof}
Let $Q=\ell{}urv$ be a quadrilateral whose leftmost vertex is $\ell$, whose
\emph{top} vertex is $u$, whose rightmost vertex is $r$, and whose
\emph{bottom} vertex is $v$. The quadrilateral $Q$ is a \emph{lower canonical}
quadrilateral if and only if
\begin{itemize}[nosep]
\item $u$ and $v$ lie on endpoints in $\ensuremath{\mathord\Updownarrow}\mkmcal{R}$,
\item $\ell$ lies on a line through $v$ and another endpoint, and
\item $r$ lies on a line through $v$ and another endpoint.
\end{itemize}
We define \emph{upper canonical} quadrilateral analogously (i.e. by requiring
that $\ell$ and $r$ lie on lines through $u$ rather than $v$).
\begin{lemma}
\label{lem:quadrilateral}
Let $Q=\ell{}urv$ be a convex quadrilateral with $\ell$ as leftmost vertex,
$r$ as rightmost vertex, $u$ a bottom endpoint on the upper hull of $Q$, and
$v$ an endpoint of a region in \mkmcal{R}.
\begin{itemize}
\item There exists an upper or lower canonical quadrilateral $Q'$ intersecting
the same regions as $Q$, or
\item there exists a convex partial transversal $Q''$ whose upper hull contains
exactly two strictly convex vertices, both on endpoints, or whose lower
hull contains exactly two strictly convex vertices, both on endpoints.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{quadrilateral}
\caption{We can transform a quadrilateral transversal $Q$ into a transversal
with two strictly convex vertices on the lower hull or on the upper hull
(cases (a) and (d)), or into a quadrilateral in which the leftmost vertex
and rightmost vertex lie on a line through an endpoint and either $u$ or
$v$ (cases (b) and (c)). The bottom row shows how we can
shift $r$ to lie on a line through $v$ and another endpoint $q$ when we
already have that $\ell$ lies on a line through $v$ and an endpoint $p$.}
\label{fig:quadrilateral_cases}
\end{figure}
Shift $\ell$ downward and $r$ upward with the same speed, bending at $u$ and
$v$, until one of the edges of the quadrilateral is about to stop intersecting
a region intersected only by that edge. At that moment this
edge contains an endpoint $p$ of a region in \mkmcal{R}. We now distinguish between
four cases, depending on the edge containing $p$. See
Figure~\ref{fig:quadrilateral_cases}.
\begin{enumerate}[label=(\alph*)]
\item Case $p \in \overline{\ell{}u}$. It follows that $p$ is a bottom
endpoint. We continue shifting $\ell$ and $r$, now bending in $p$, $u$, and
$v$. We now have a convex partial transversal $Q''$ that visits the same
regions as $Q$ and whose upper hull contains at least two strictly convex
vertices, both bottom endpoints of regions.
\item Case $p \in \overline{\ell{}v}$. It follows that $p$ is a bottom
endpoint. We now shift $r$ downwards until: (b.i) $\overline{ur}$ contains a
bottom endpoint $q$, (b.ii) $\overline{v{}r}$ contains a bottom endpoint $q$
or (b.iii) $\overline{vr}$ is collinear with $\overline{\ell{}v}$. In the case
(b.i) we get an upper hull with two strictly convex vertices, both bottom
endpoints of regions. In cases (b.ii) and (b.iii) we now have that both $\ell$
and $r$ lie on lines through $v$ and another endpoint.
\item Case $p \in \overline{ur}$. Similar to the case
$p \in \overline{\ell{}v}$ we now shift $\ell$ back upwards until the lower
hull contains at least two strictly convex endpoints, or until $\ell$ lies
on a line through $u$ and some other endpoint $q$.
\item Case $p \in \overline{vr}$. Similar to the case
$p \in \overline{\ell{}u}$. We continue shifting $\ell$ downward and $r$
upward, bending in $p$, thus giving two strictly convex vertices in the
lower hull, both on endpoints.
\end{enumerate}
\end{proof}
Let $k^u_2$ be the maximal number of regions of \mkmcal{R} visitable by an upper
convex transversal, let $k^u_4$ be the maximal number of regions of \mkmcal{R} visitable
by a canonical upper quadrilateral, and let $k^u$ denote the maximal number of
regions of \mkmcal{R} visitable by a canonical upper transversal. Analogously, we
define $k^b_2$, $k^b_4$, and $k^b$ as the maximal number of regions of \mkmcal{R}
visitable by a lower convex transversal, a canonical lower quadrilateral, and a
canonical lower transversal, respectively.
\begin{lemma}
\label{lem:optimal_convex_transversal}
Let $k^*$ be the maximal number of regions in \mkmcal{R} visitable by a convex partial
transversal of \mkmcal{R}. We have that $k^*=\max\{k^u_2,k^u_4,k^u,k^b_2,k^b_4,k^b\}$.
\end{lemma}
\begin{proof}
Clearly $k^* \geq \max\{k^u_2,k^u_4,k^u,k^b_2,k^b_4,k^b\}$. We now argue that
we can transform an optimal convex partial transversal $Q^*$ of \mkmcal{R} visiting
$k^*$ regions into a canonical transversal. The lemma then follows.
By Lemma~\ref{lem:discrete_convex_hull_buildup} there is an optimal convex
partial transversal $Q$ of \mkmcal{R}, visiting $k^*$ regions, whose strictly convex
vertices lie on endpoints of the regions in \mkmcal{R}.
If either the lower or upper hull of $Q$ does not intersect any regions, we
use Lemma~\ref{lem:discrete_upper_hull} (or its analog for the bottom hull)
to get $k^*=\max\{k^u_2,k^b_2\}$. Otherwise, if the lower hull of $Q$ contains
at least two strictly convex vertices, we apply
Lemma~\ref{lem:discrete_convex_hull}, and obtain that there is a canonical
lower transversal visiting the same $k^*$ regions as $Q$. Similarly, if the upper hull contains at least two strictly
convex vertices we apply a lemma analogous to
Lemma~\ref{lem:discrete_convex_hull}, and obtain that there is a canonical
upper transversal visiting the same $k^*$ regions as $Q$.
If $Q$ has at most one strictly convex vertex on both the lower hull and
upper hull we use Lemma~\ref{lem:discrete_straight} and get that there exists
a convex quadrilateral $Q'$ that visits the same regions as $Q$. We now apply
Lemma~\ref{lem:quadrilateral} to get that there either is an optimal
transversal that contains two strictly convex vertices on its upper or lower
hull, or there is a canonical quadrilateral $Q'$ that intersects the same
regions as $Q$. In the former case we can again apply
Lemma~\ref{lem:discrete_convex_hull} to get a canonical convex partial
transversal that visits $k^*$ regions. In the latter case we have
$k^*=\max\{k^u_4,k^b_4\}$.
\end{proof}
By Lemma~\ref{lem:optimal_convex_transversal} we can restrict our attention to
upper and lower convex transversals, canonical quadrilaterals, and canonical
transversals. We can compute an optimal upper (lower) convex transversal in
$O(n^2)$ time using the algorithm from the previous section. Next, we argue that
we can compute an optimal canonical quadrilateral in $O(n^5)$ time, and an
optimal canonical transversal in $O(n^6)$ time. Arkin et al.\xspace~\cite{ARKIN2014224}
describe an algorithm that given a discrete set of vertex locations can find a
convex polygon (on these locations) that maximizes the number of regions
stabbed. Note, however, that since a region contains multiple vertex locations
---and we may use only one of them--- we cannot directly apply their algorithm.
Observe that if we fix the leftmost strictly convex vertex $w$ in the lower
hull, there are only $O(n^2)$ candidate points for the leftmost vertex $\ell$
in the transversal. Namely, the intersection points of the (linearly many)
segments with the linearly many lines through $w$ and an endpoint. Let $L(w)$
be this set of candidate points. Analogously, let $R(z)$ be the candidate
points for the rightmost vertex $r$ defined by the rightmost strictly convex
vertex $z$ in the lower hull.
\subsection{Computing the maximal number of regions intersected by a
canonical quadrilateral}
Let $Q=\ell{}urw$ be a canonical lower quadrilateral with $u_x < w_x$, let
$\mkmcal{R}''$ be the regions intersected by $\overline{u\ell}\cup\overline{\ell{}w}$,
and let $T[w,u,\ell] = |\mkmcal{R}''|$ be the number of such regions. We then define
$\mkmcal{R}_{uw\ell}=\mkmcal{R} \setminus \mkmcal{R}''$ to be the remaining regions, and
$I(\mkmcal{R}_{uw\ell},u,w,r)$ as the number of regions from $\mkmcal{R}$ intersected by
$\overline{ur}\cup\overline{wr}$. Observe that those are the regions from $\mkmcal{R}$
that are \emph{not} intersected by $\overline{u\ell}\cup\overline{\ell{}w}$.
Note that we exclude the two regions that have $u$ or $w$ as an endpoint from
$\mkmcal{R}_{uw\ell}$. Hence, the number of regions intersected by $Q$ is
$T[w,u,\ell] + I(\mkmcal{R}_{uw\ell},u,w,r)$; see
Figure~\ref{fig:1-oriented-canonical-quadrilateral}. The maximum number of
regions over all canonical lower quadrilaterals with $u_x \leq w_x$ is
\[ \max_{u,w,\ell} (T[w,u,\ell] + \max_{r \in R(w)} I(\mkmcal{R}_{uw\ell},u,w,r)).
\]
We show that we can compute this in $O(n^5)$ time. If $u_x > w_x$, we use a
symmetric procedure in which we count all regions intersected by
$\overline{ur}\cup\overline{rw}$ first, and then the remaining regions
intersected by $\overline{u\ell}\cup\overline{\ell{}w}$. Since $k^b_4$ is the
maximum of these two results, computing $k^b_4$ takes $O(n^5)$ time as well.
Since there are $O(n)$ choices for $w$ and $u$, and $O(n^2)$ for $\ell$, we can
naively compute $T[w,u,\ell]$ and $\mkmcal{R}_{uw\ell}$ in $O(n^5)$ time. For each
$w$, we then radially sort $R(w)$ around $w$. Next, we describe how we can then
compute all $I(\mkmcal{R}',u,w,r)$, with $r \in R(w)$, for a given set
$\mkmcal{R}'=\mkmcal{R}_{uw\ell}$, and how to compute $\max_r I(\mkmcal{R}',u,w,r)$.
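As an illustration of the quantity $I(\mkmcal{R}',u,w,r)$, the sketch below counts it by brute force for a single candidate $r$; the sweep described in the next subparagraph amortizes this over all $r \in R(w)$. The code is our own (segment representation, names, and the general-position assumption that hull edges are non-vertical are all our choices).

```python
def crosses(a, b, region):
    """Does the (non-vertical) segment from a to b intersect the
    vertical segment region = (x, y_bottom, y_top)?"""
    (ax, ay), (bx, by) = a, b
    x, yb, yt = region
    if not min(ax, bx) <= x <= max(ax, bx):
        return False
    y = ay + (by - ay) * (x - ax) / (bx - ax)  # assumes ax != bx
    return yb <= y <= yt

def count_I(regions, u, w, r):
    """I(R', u, w, r): number of regions intersected by segment ur
    or by segment wr, counted by brute force in O(n) time."""
    return sum(crosses(u, r, g) or crosses(w, r, g) for g in regions)
```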
\begin{figure}
\centering
\includegraphics{1-oriented-canonical-quadrilateral}
\caption{
The green vertical segments are $\mkmcal{R}''$, while the gray and black ones are $\mkmcal{R}_{uw\ell}$.
The number of black vertical segments is $I(\mkmcal{R}_{uw\ell},u,w,r)$.
}
\label{fig:1-oriented-canonical-quadrilateral}
\end{figure}
\subparagraph{Computing the number of intersections.} Given a set of regions
$\mkmcal{R}'$, the points $u$, $w$, and the candidate endpoints $R(w)$, sorted
radially around $w$, we now show how to compute $I(\mkmcal{R}',u,w,r)$ for all
$r \in R(w)$ in $O(n^2)$ time.
\begin{figure}[tb]
\centering
\includegraphics{1-oriented-quadrilateral_closing}
\caption{(a) The red crosses are possible candidate points in $R(w)$.
Gray crosses above $\overline{u\ell}$, below $\overline{w\ell}$, or to the left of the vertical line through $w$ are not part of $R(w)$.
(b) The orange line segment $\overline{uq}$ and the green line segment $\overline{qw}$ move upwards along $G$.
An event occurs when either $\overline{uq}$ or $\overline{qw}$ sweeps through an endpoint of a vertical segment in $\mkmcal{R}'$.
When $\overline{uq}$ sweeps through the top endpoint of $G_1$, we check whether $G_1$ intersects $\overline{qw}$ or not.
We increase the number of intersected segments when $\overline{uq}$ sweeps through the bottom endpoint of $G_2$ and decrease
it when $\overline{qw}$ sweeps through the top endpoint of $G_1$.}
\label{fig:1-oriented-quadrilateral_closing}
\end{figure}
We sort the endpoints of $\mkmcal{R}'$ radially around $w$ and around $u$, and
partition the points in $R(w)$ based on which segment (region) they lie on. Let
$G$ be a region that has $q$ as its bottom endpoint, and let
$R(w,G) = R(w)\cap G$. We explicitly compute the number of regions $m$
intersected by $\overline{uq}\cup\overline{qw}$ in linear time, and set
$I(\mkmcal{R}',u,w,q)$ to $m$. We shift up $q$, while maintaining the number of
regions intersected by $\overline{uq}\cup\overline{qw}$. See
Figure~\ref{fig:1-oriented-quadrilateral_closing}(b).
As $q$ moves
upwards, the segments $\overline{uq}$ and $\overline{qw}$ sweep through
endpoints of the regions in $\mkmcal{R}'$. We can decide which event should be processed
first from the two ordered lists of endpoints in constant time.
If an event occurs, the value of $m$ changes depending on the type of endpoint
and which line segment ($\overline{uq}$ or $\overline{qw}$) is responsible for
that particular event. If $\overline{uq}$ sweeps through a point in
$\ensuremath{\mathord\Downarrow} \mkmcal{R}$, we increase the value of $m$ by one. Conversely, we decrease the
value of $m$ by one if $\overline{uq}$ sweeps through the top endpoint of some region
$G'$ that does not intersect $\overline{qw}$. Otherwise, we do
nothing. We treat events caused by $\overline{qw}$ in a similar way. See
Figure~\ref{fig:1-oriented-quadrilateral_closing}(b) for an illustration. When $q$ passes
through a candidate point $r \in R(w,G)$ we set $I(\mkmcal{R}',u,w,r)$ to
$m$. Computing all values $I(\mkmcal{R}',u,w,r)$ for all $r \in R(w,G)$ then takes
$O(n)$ time, and thus $O(n^2)$ time over all regions $G$.
\subparagraph{Maximizing over $r$.} To find the point $r \in R(w)$ that
maximizes $I(\mkmcal{R}_{uw\ell},u,w,r)$ we now just filter $R(w)$ to exclude all points
above the line through $u$ and $\ell$ and below the line through $w$ and
$\ell$, and report the maximum $I$ value among the remaining points.
\subparagraph{Improving the running time to $O(n^5)$.} Directly applying the
above approach yields an $O(n^6)$ time algorithm, as there are a total of
$O(n^4)$ triples $(u,w,\ell)$ to consider. We now improve this to $O(n^5)$ time
as follows.
First observe that since $u_x < w_x$, $\overline{ur}\cup\overline{rw}$ cannot
intersect any of the regions left of $u$. Let $\mkmcal{R}_u$ be this set of
regions. Since $\ell$ lies left of $u$ (by definition) we thus have that
$I(\mkmcal{R}_{uw\ell},u,w,r) = I(\mkmcal{R}_{uw\ell}\setminus \mkmcal{R}_u,u,w,r)$.
Second, consider all points $\ell \in L(w)$ that lie on the line through $w$
and some endpoint $p$. Observe that they all have the same set
$\mkmcal{R}_{uw\ell}\setminus \mkmcal{R}_u$. This means that there are only $O(n^3)$
different sets $\mkmcal{R}_{uwp}=\mkmcal{R}_{uw\ell}\setminus \mkmcal{R}_u$ for which we have to
compute $I(\mkmcal{R}_{uwp},u,w,r)$ values.
Finally, again consider all points $\ell \in L(w)$ that lie on the line $\mu$
through $w$ and some point $p$. For all these points we discard the same points
from $R(w)$ because they are below $\mu$. We can then compute
$T[w,u,\ell] + \max_r I(\mkmcal{R}_{uwp},u,w,r)$ for all these points $\ell$ in
$O(n^2)$ time in total, by rotating the line $\nu$ through $u$ and $\ell$
around $u$ in counter clockwise order, while maintaining the valid candidate
points in $R(w)$ (i.e. above $\mu$ and below $\nu$), and the maximum
$I(\mkmcal{R}_{uwp},u,w,r)$ value among those points,
see Figure~\ref{fig:1-oriented-improve-quadrilateral_closing}.
Since there are $O(n^3)$
combinations of $w$, $u$, and $p$, we spend $O(n^5)$ time in total to compute
all values of $T[w,u,\ell] + \max_r I(\mkmcal{R}_{uwp},u,w,r)$, over all $u$, $w$, and
$\ell$, and thus we obtain the following result.
\begin{figure}[tb]
\centering
\includegraphics{1-oriented-improve-quadrilateral_closing}
\caption{
$\mkmcal{R}_{uwp}$ is shown in green vertical segments.
The red crosses below $\nu$ (which has one of its endpoints at $\ell'$),
above $\mu$, and to the left of the vertical line through $w$ are candidates for the rightmost point.
The orange crosses become candidates for the rightmost point after we rotate $\nu$ in
counter-clockwise order and consider $\ell$ as the leftmost point.
}
\label{fig:1-oriented-improve-quadrilateral_closing}
\end{figure}
\begin{lemma}
\label{lem:compute_canonical_quadrilateral}
Given a set of $n$ vertical line segments \mkmcal{R}, we can compute the maximum
number of regions $k^*$ visitable by a canonical quadrilateral $Q$ in
$O(n^5)$ time.
\end{lemma}
\subsubsection{Computing the maximal number of regions intersected by a
canonical transversal}
Next, we describe an algorithm to compute the maximal number of regions
visitable by a lower canonical convex transversal. Our algorithm consists of
three dynamic programming phases, in which we consider (partial) convex hulls
of a particular ``shape''.
In the first phase we compute (and memoize) $B[w,u,v,\ell]$: the maximal number of regions
visitable by a partial transversal that has $\overline{w\ell{}}$ as a segment
in the lower hull, and a
convex chain $\ell,\dots,u,v$ as upper hull. See Figure~\ref{fig:convexhull_dp}(a).
In the second phase we compute $K[u,v,w,z]$: the maximal number of regions
visitable by the canonical partial convex transversal whose rightmost top edge is
$\overline{uv}$ and whose rightmost bottom edge is $\overline{wz}$.
In the third phase we compute the maximal number of regions visitable when we ``close''
the transversal using the rightmost vertex $r$. To this end, we define
$R'[z,u,v,r]$ as the number of regions visitable by the canonical transversal
whose rightmost upper segment is $\overline{uv}$, whose rightmost bottom
segment is $\overline{wz}$, and whose rightmost vertex $r$ is defined by the strictly convex vertex $z$.
\subparagraph{Computing $B[w,u,v,\ell]$.}
Given a set of regions $\mkmcal{R}'$ let $U_{\mkmcal{R}'}[\ell,u,v]$ be the maximal number of
regions in $\mkmcal{R}'$ visitable by an upper convex transversal that starts in a fixed
point $\ell$ and ends with the segment $\overline{uv}$.
\begin{lemma}
\label{lem:compute_U}
We can compute all values $U_{\mkmcal{R}'}[\ell,u,v]$ in $O(n^2)$ time.
\end{lemma}
\begin{proof}
Analogous to the algorithm in Section~\ref{sub:Computing_an_upper_convex_transversal}.
\end{proof}
Let $B[w,u,v,\ell]$ be the maximal number of regions visitable by a transversal
that starts at $w$, goes back to $\ell$, and then follows an upper hull from $\ell$ ending
with the segment $\overline{uv}$. See Figure~\ref{fig:convexhull_dp}(a).
\begin{figure}[tb]
\centering
\includegraphics{convexhull_dp}
\caption{(a) $B[w,u,v,\ell]$ indicates the number of regions visited by a
partial convex transversal that has $\overline{\ell{}w}$ as bottom hull and
the upper hull from $\ell$ to $\overline{uv}$. We can compute the
$B[w,u,v,\ell]$ values for all $u,v$ by explicitly setting aside the
segments intersected by $\overline{\ell{}w}$ and then using the upper hull
algorithm. (b) The base case of the recurrence when $u_x < w_x$. The
regions counted by $I[w,z,u,v]$ are shown in red, whereas the regions
counted by $B[w,u,v,\ell]$ are shown in black. (c) The inductive step when
$u_x < w_x$.}
\label{fig:convexhull_dp}
\end{figure}
For each combination of $w$ and $\ell \in L(w)$, explicitly construct the
regions $\mkmcal{R}^{w,\ell}$ \emph{not} intersected by $\overline{w\ell}$. We then
have that
$B[w,u,v,\ell] = |\mkmcal{R} \setminus \mkmcal{R}^{w,\ell}| +
U_{\mkmcal{R}^{w,\ell}}[\ell,u,v]$. So, we can compute all $B[w,u,v,\ell]$, for
$w \in \ensuremath{\mathord\Updownarrow}\mkmcal{R}$, $\ell \in L(w)$, and $u,v \in \ensuremath{\mathord\Uparrow}\mkmcal{R}$, in $O(n^5)$ time
in total.
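The identity $B[w,u,v,\ell] = |\mkmcal{R} \setminus \mkmcal{R}^{w,\ell}| + U_{\mkmcal{R}^{w,\ell}}[\ell,u,v]$ rests on splitting the regions by the chord $\overline{w\ell}$. A minimal sketch of that split (our own code, not from the paper; it assumes a non-vertical chord and the triple representation of vertical segments):

```python
def crosses(a, b, region):
    """Does the non-vertical segment ab intersect the vertical segment
    region = (x, y_bottom, y_top)?"""
    (ax, ay), (bx, by) = a, b
    x, yb, yt = region
    if not min(ax, bx) <= x <= max(ax, bx):
        return False
    y = ay + (by - ay) * (x - ax) / (bx - ax)
    return yb <= y <= yt

def split_by_chord(regions, w, ell):
    """Split the regions into those missed by the chord w-ell (the set
    R^{w,ell}, on which the upper-hull DP is then run) and those the
    chord intersects (counted directly in B[w,u,v,ell])."""
    missed = [g for g in regions if not crosses(w, ell, g)]
    hit = [g for g in regions if crosses(w, ell, g)]
    return missed, hit
```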
\subparagraph{Computing $X[a,u,w,z,\ell]$.} Let $X[a,u,w,z,\ell]$ be the
maximum number of regions \emph{left of $u$} visitable by a partial transversal
that
\begin{itemize}[nosep]
\item has $a$ as its leftmost strictly convex vertex in the bottom hull,
\item has $\ell \in L(a)$ as its leftmost vertex,
\item has $\overline{wz}$ as rightmost edge in the bottom hull, and
\item has $\overline{\ell{}u}$ as its (partial) upper hull.
\end{itemize}
We can compute $X[a,u,w,z,\ell]$ using an approach similar to the one we used to
compute $B[w,u,v,\ell]$: we fix $a, u$, and $\ell$, and explicitly compute the
segments intersected by $\overline{\ell{}u}\cup\overline{\ell{}a}$. We
count them and set them aside. On the remaining segments $\mkmcal{R}'$ we compute
an optimal bottom hull from $a$ to $z$ whose incoming slope at $a$ is at least
$\ensuremath{\mathit{slope}}\xspace(\ell,a)$. This takes $O(n^2)$ time, using an analogous approach to that used in
Section~\ref{sub:Computing_an_upper_convex_transversal}. Since we have $O(n^4)$ choices for the triple $a,u,\ell$ this
takes a total of $O(n^6)$ time.
\subparagraph{Computing $K[u,v,w,z]$.}
Let $\overleftarrow{\triangle}(P,u,v,w,z)$ denote the subset of the points $P$ left
of $\{u,v,w,z\}$, below the line through $u$ and $v$, and above the line
through $w$ and $z$.
Let $K[u,v,w,z]$ be the maximal number of regions visitable by the minimum area
canonical partial convex transversal that has $\overline{uv}$ as its rightmost
segment in the upper hull and $\overline{wz}$ as its rightmost segment in the
lower hull.
If $u_x < w_x$ we have that%
\begin{align*}
K[u,v,w,z] = \max\{&\max_{\ell \in \overleftarrow{\triangle}(L(w),u,v,w,z)} B[w,u,v,\ell] + I[w,z,u,v], \\
&\max_{t \in \overleftarrow{\triangle}(\ensuremath{\mathord\Updownarrow}\mkmcal{R},u,v,w,z)} K[u,v,t,w] + I[w,z,u,v]\},
\end{align*}
where $I[w,z,u,v]$ is the number of segments intersected by $\overline{wz}$ but
not by $\overline{uv}$. See Figure~\ref{fig:convexhull_dp}(b) and (c) for an
illustration. We rewrite this to
\[ K[u,v,w,z] = I[w,z,u,v] + \max\{\max_{\ell \in \overleftarrow{\triangle}(L(w),u,v,w,z)} B[w,u,v,\ell]
,\max_{t \in \overleftarrow{\triangle}(\ensuremath{\mathord\Updownarrow}\mkmcal{R},u,v,w,z)} K[u,v,t,w]\}. \]
\begin{figure}[tb]
\centering
\includegraphics{ch_dp_wu_case}
\caption{The two cases in the dynamic program when $u_x > w_x$. (a) The
segments counted by $T[w,u,\ell]$ in black, $J[w,z,\ell,u]$ in green,
$I[u,v,w,z]$ in red, and $I^*[w,z,u]$ in purple. (b) The case where there
is another segment in the lower hull as well. The segments counted by
$I[u,v,w,z]$ are again shown in red, whereas the segments counted by
$K[s,u,w,z]$ are shown in black.}
\label{fig:convexhull_dp_wu}
\end{figure}
If $u_x > w_x$ we get a more complicated expression. Let $J[w,z,\ell,u]$
be the number of regions intersected by $\overline{wz}$ but not by
$\overline{\ell{}u}$, and let $I^*[w,z,u]$ be the number of regions right of
$u$ intersected by $\overline{wz}$. See Figure~\ref{fig:convexhull_dp_wu} for an
illustration. We then have
\begin{align*}
K[u,v,w,z] = \max&\{\max_{\substack{a \in \overleftarrow{\triangle}(\ensuremath{\mathord\Downarrow}\mkmcal{R},u,v,w,z),\\
\ell \in L(a)}} X[a,u,w,z,\ell] + I^*[w,z,u] + I[u,v,w,z]\\
&, \max_{s \in \overleftarrow{\triangle}(\ensuremath{\mathord\Downarrow}\mkmcal{R},u,v,w,z)} K[s,u,w,z] + I[u,v,w,z]\}
\end{align*}
which we rewrite to
\begin{align*}
K[u,v,w,z] = I[u,v,w,z] + \max\{&I^*[w,z,u] + \max_{\substack{a \in \overleftarrow{\triangle}(\ensuremath{\mathord\Downarrow}\mkmcal{R},u,v,w,z),\\
\ell \in L(a)}} X[a,u,w,z,\ell]\\
&, \max_{s \in \overleftarrow{\triangle}(\ensuremath{\mathord\Downarrow}\mkmcal{R},u,v,w,z)} K[s,u,w,z]\}.
\end{align*}
We can naively compute all $I[w,z,u,v]$ and all $I^*[w,z,u]$ values in $O(n^5)$
time. Since there are only $O(n)$ choices for $t$ computing all values
$\max_t K[u,v,t,w]$ for all cells takes only $O(n^5)$ time in total. The same
holds for computing all $\max_s K[s,u,w,z]$.
As we describe next, we can compute the $\max_\ell B[w,u,v,\ell]$ values in
$O(n^5)$ time as well. Fix $w$, compute all $O(n^2)$ candidate points in $L(w)$
and sort them radially around $w$. For each $\overline{uv}$: remove the
candidate points above the line through $u$ and $v$. This takes $O(n^5)$ time
in total.
For each maximal subset $S \subseteq L(w)$ that lies below the line through $u$ and
$v$, we now do the following. We rotate a line around $w$ in counterclockwise
order, starting with the vertical line. We maintain the subset of $S$ that lies
above this line, and $\max_{\ell \in S} B[w,u,v,\ell]$. Thus, when this line
sweeps through a point $z$, we know the value
$\max_{\ell \in \overleftarrow{\triangle}(L(w),u,v,w,z)} B[w,u,v,\ell]$. There
are $O(n^3)$ sets $S$, each of size $O(n^2)$. So, while rotating the line we
process $O(n^2+n)=O(n^2)$ events. Since we can look up the
$B[w,u,v,\ell]$ values in constant time, processing each event takes only
$O(1)$ time. It follows that we spend $O(n^5)$ time in total.
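The rotating-line argument boils down to precomputing running maxima over the points of $S$ in radial order around $w$: once the points are sorted by angle, the best $B$ value among the points not yet passed by the line is a single lookup. A simplified sketch (our own, ignoring the incremental insertion and deletion of points):

```python
import math

def running_max_by_angle(w, points, values):
    """Sort the points radially around w and return (sorted points,
    suffix maxima of their values). After the rotating line has passed
    the k-th point, suffix[k] is the maximum value among the remaining
    points, so each event is answered in O(1) time."""
    wx, wy = w
    order = sorted(range(len(points)),
                   key=lambda i: math.atan2(points[i][1] - wy,
                                            points[i][0] - wx))
    suffix = [0] * len(order)
    best = -math.inf
    for k in range(len(order) - 1, -1, -1):
        best = max(best, values[order[k]])
        suffix[k] = best
    return [points[i] for i in order], suffix
```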
Similarly, computing all $\max_{a,\ell} X[a,u,w,z,\ell]$ values requires
$O(n^6)$ time: we fix $a,u,w,z$, and $\ell$, and compute all $O(n^6)$ values
$X[a,u,w,z,\ell]$ in $O(n^6)$ time. We group these values based on the slope of
$\overline{\ell{}a}$, and sort the groups on this slope. This again takes
$O(n^6)$ time in total. Similarly, we sort the vertices $v$ around $u$, and
simultaneously scan the list of $X$ values and the vertices $v$, while
maintaining the maximum $X[a,u,w,z,\ell]$ value that has slope at least
$\ensuremath{\mathit{slope}}\xspace(u,v)$. This takes $O(n^6)$ time.
It follows that we can compute all $K[u,v,w,z]$ values in $O(n^6)$ time in
total.
\subparagraph{Closing the hull.} We now consider adding the rightmost point to
finish the convex transversal. Given vertices $u, v,w$, and $z$, let $\mkmcal{R}'$ be
the segments intersected by the partial transversal corresponding to
$K[u,v,w,z]$. We explicitly compute $\mkmcal{R}_{uvwz}=\mkmcal{R} \setminus \mkmcal{R}'$, and then compute the
maximum number of regions from this set intersected by
$\overline{vr}\cup\overline{rz}$, over all choices of rightmost vertex
$r$. This takes $O(n^2)$ time using the exact same approach we used in the
canonical quadrilateral section. It follows that we can compute the maximum
number of regions visitable by a canonical bottom convex transversal in
$O(n^6)$ time. Therefore we conclude:
\begin{theorem}
\label{thm:parallel_segments_convex_hull}
Given a set of $n$ vertical line segments \mkmcal{R}, we can compute the maximum
number of regions $k^*$ visitable by a convex partial transversal $Q$ in
$O(n^6)$ time.
\end{theorem}
\section{2-oriented disjoint line segments}
\label{sec:2-oriented}
In this section we consider the case when $\mkmcal{R}$ consists of disjoint vertical and horizontal segments. We will show how to apply ideas similar to those presented in the previous sections to compute an optimal convex transversal $Q$ of $\mkmcal{R}$. As in the previous section, we will mostly restrict our search to canonical transversals. However, unlike in the one-oriented case, we will have one special case to consider, in which an optimal partial convex transversal has bends that do not necessarily belong to a discrete set of points.
We call the left-, right-, top- and bottommost vertices $\ell$, $r$, $u$ and $b$ of a convex partial transversal the \emph{extreme} vertices. Consider the convex hull of a partial transversal $Q$, and consider the four convex chains between the extreme vertices. Let us call the chain between vertices $\ell$ and $u$ the \emph{upper-left hull}, and the other chains \emph{upper-right}, \emph{lower-right} and \emph{lower-left}. Similarly to Lemma~\ref{lem:discrete_upper_hull}, we can show the following:
\begin{lemma}\label{lem:2orient-canonical-1}
Let $Q$ be a convex partial transversal of \mkmcal{R} with extreme vertices $\ell$, $r$, $u$, and $b$. There exists a convex partial transversal $Q'$ of \mkmcal{R} such that
\begin{itemize}
\item the two transversals have the same extreme vertices,
\item all segments that are intersected by the upper-left, upper-right, lower-right, and lower-left hulls of $Q$ are also intersected by the corresponding hulls of $Q'$,
\item all strictly convex vertices on the \textbf{upper-left} hull of $Q'$ lie on \textbf{bottom} endpoints of vertical segments or on the \textbf{right} endpoints of horizontal segments of \mkmcal{R},
\item the convex vertices on the other hulls of $Q'$ lie on analogous endpoints.
\end{itemize}
\end{lemma}
One condition for a transversal to be canonical will be that all its strictly convex vertices, except for the extreme ones, satisfy the conditions of Lemma~\ref{lem:2orient-canonical-1}. Another condition will be that its extreme vertices belong to a discrete set of \emph{fixed} points. This set of fixed points will contain all the endpoints of the segments in \mkmcal{R}; we will call these points \emph{$0$th-order fixed points}. Furthermore, the set of fixed points will contain the intersections of certain lines, which we will describe below, with the segments of \mkmcal{R}.
Now, consider a convex partial transversal $Q$ for which all the strictly convex vertices, except for the four extreme ones, lie on endpoints of segments in \mkmcal{R}. We will describe how to slide the extreme vertices of $Q$ along their respective segments to obtain a canonical transversal. Note that, for simplicity of exposition, in the description below we assume that no new intersections of the convex hull of $Q$ with segments of \mkmcal{R} appear. Otherwise, we can restart the process with a higher value of $k$. Let $s_{\ell}$, $s_r$, $s_u$, and $s_b$ be the four segments containing the four extreme points, and denote by $\mkmcal{R}_{u\ell}$, $\mkmcal{R}_{ur}$, $\mkmcal{R}_{b\ell}$, and $\mkmcal{R}_{br}$ the four subsets of $\mkmcal{R}$ of segments intersected by the upper-left, upper-right, lower-left, and lower-right hulls respectively.
First, consider the case when $u$ (or $b$) lies on a vertical segment. Then it can be safely moved down (or up) until it hits an endpoint of its segment (i.e., a $0$th-order fixed point), or is no longer an extreme vertex. If it is no longer an extreme vertex, we restore the conditions of Lemma~\ref{lem:2orient-canonical-1} and continue with the new topmost (or bottommost) vertex of the transversal. Similarly, in the case when $\ell$ or $r$ lie on horizontal segments, we can slide them until they reach $0$th-order fixed points.
Assume then that $u$ and $b$ lie on horizontal segments, and $\ell$ and $r$ lie on vertical segments.
We further assume that the non-extreme vertices have been moved according to Lemma~\ref{lem:2orient-canonical-1}.
We observe that either (1) there exists a chain of the convex hull of $Q$ containing at least two endpoints of segments, (2) there exists a chain of the convex hull of $Q$ containing no endpoints, or (3) all four convex chains contain exactly one endpoint.
In case (1), w.l.o.g., let the upper-left hull contain at least $2$ endpoints. Then we can slide $u$ left along its segment until it reaches its endpoint or an intersection with a line through two endpoints of segments in $\mkmcal{R}_{u\ell}$. Note that sliding $u$ left does not create problems on the upper-right hull. We can also slide $\ell$ up along its segment until it reaches its endpoint or an intersection with a line through two endpoints of segments in $\mkmcal{R}_{u\ell}$. Thus, we also need to consider the intersections of segments in \mkmcal{R} with lines through pairs of endpoints; we will call these \emph{$1$st-order fixed points}. Now, vertices $u$ and $\ell$ are fixed, and we will proceed with sliding $r$ and $b$.
For vertex $r$, we further distinguish two cases: (1.a) the upper-right convex hull contains at least two endpoints, or (1.b) it contains at most one endpoint.
In case (1.a), similarly to the case (1), we slide $r$ up until it reaches an endpoint of $s_r$ or an intersection with a line through two endpoints of segments in $\mkmcal{R}_{ur}$.
In case (1.b), we slide $r$ up, while unbending the strictly convex angle if it exists, until $r$ reaches an endpoint of $s_r$, or until the upper-right hull (which will be a straight-line segment $\overline{ur}$ at this point) contains a topmost or a rightmost endpoint of some segment in $\mkmcal{R}_{ur}$. In this case, $r$ will end up in an intersection point of a line passing through $u$ and an endpoint of a segment in $\mkmcal{R}_{ur}$. We will call such points \emph{$2$nd-order fixed points}, defined as an intersection of a segment in \mkmcal{R} with a line passing through a $1$st-order fixed point and an endpoint of a segment in \mkmcal{R}.
Similarly, for $b$ we distinguish two cases on the size of the lower-left hull, and slide $b$ until it reaches a fixed point (of $0$th-, $1$st-, or $2$nd-order). Thus, in case (1) we get a canonical convex transversal representation with strictly convex bends at the endpoints of segments in \mkmcal{R}, and extreme vertices at fixed points of $0$th-, $1$st-, or $2$nd-order. Note that there are $O(n^{2i+1})$ points of $i$th order, which means we only have to consider a polynomial number of discrete vertices in order to find a canonical solution.
In case (2), when there exists a chain of the convex hull of $Q$ that does not contain any endpoint, w.l.o.g., assume that this chain is the upper-left hull.
Then we slide $u$ left until the segment $\overline{u\ell}$ hits an endpoint of a segment while bending at an endpoint of the upper-right hull or at the point $r$, if the upper-right hull does not contain an endpoint. If the endpoint we hit does not belong to $s_u$, we are either in case (3), or there is another chain of the convex hull without endpoint and we repeat the process.
Otherwise $u$ is a $0$th-order fixed point, and we can move $\ell$ up until it reaches an at most $1$st-order fixed point. Then, similarly to the previous case, we slide $r$ and $b$ until they reach at most a $1$st-order fixed point and at most a $2$nd-order fixed point respectively. Thus, in case (2) we also get a canonical convex transversal representation with strictly convex bends at the endpoints of segments in \mkmcal{R}, and extreme vertices at fixed points of $0$th-, $1$st-, or $2$nd-order.
Finally, in case (3), we may be able to slide some extreme points to reach another segment endpoint on one of the hulls, which gives us a canonical representation, as it puts us in case (1). However, this may not always be possible. Consider the example in Figure~\ref{fig:2-oriented-quad1}. If we try sliding $u$ left, we will need to rotate $\overline{u\ell}$ around the point $e_{u\ell}$ (see the figure), which will propagate to a rotation of $\overline{\ell b}$ around $e_{b\ell}$, of $\overline{br}$ around $e_{br}$, and of $\overline{ru}$ around $e_{ur}$, which may not result in a proper convex hull of $Q$.
The last case is a special case of a canonical partial convex transversal. It is defined by four segments on which the extreme points lie, and four endpoints of segments ``pinning'' the convex chains of the convex hull such that the extreme points cannot move freely without influencing the other extreme points. Next we present an algorithm to find an optimal canonical convex transversal with extreme points in the discrete set of fixed points. In the subsequent section we consider the special case when the extreme points are not necessarily from the set of fixed points.
\subsection{Calculating the canonical transversal}
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[page=2,width=\textwidth]{2-oriented}
\caption{The input and our guess for the 4 extremal points.}
\label{fig:2-oriented-limits}
\end{subfigure}
\hfil
\begin{subfigure}{0.45\textwidth}
\includegraphics[page=3,width=\textwidth]{2-oriented}
\caption{The regions induced by the extremal points.}
\label{fig:2-oriented-regions}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[page=4,width=\textwidth]{2-oriented}
\caption{The different subproblems.}
\label{fig:2-oriented-subproblems}
\end{subfigure}
\hfil
\begin{subfigure}{0.45\textwidth}
\includegraphics[page=5,width=\textwidth]{2-oriented}
\caption{Our guesses for the endpoints of the subproblems.}
\label{fig:2-oriented-guesses}
\end{subfigure}
\caption{The steps of our algorithm for 2-oriented segments.}
\label{fig:2-oriented}
\end{figure}
\begin{figure}
\centering
\includegraphics{2-oriented-quad1}
\caption{Extreme vertices are not in the set of fixed points.}
\label{fig:2-oriented-quad1}
\end{figure}
We know that the vertices of a solution must lie on $0$th, $1$st or $2$nd order points. We subdivide the segments into several subproblems that we can solve similarly to the parallel case. First, we guess the four extreme points of our convex polygon, as seen in Figure~\ref{fig:2-oriented-limits}. These four points must be linked by \(x,y\)-monotone chains, and each chain has a triangular region in which its vertices may lie, as illustrated in Figure~\ref{fig:2-oriented-regions}. The key insight is that inside each of these regions, we have a partial ordering on the segments, as each segment can cross any \(x,y\)-monotone chain only once. This allows us to identify three types of subproblem:
\begin{enumerate}
\item Segments that lie inside two non-adjacent regions. We include any segments that lie between two of these into one subproblem and solve it separately. In Figure~\ref{fig:2-oriented-subproblems}, this subproblem is indicated in yellow. Note that there can be only one such subproblem.
\item Segments that lie inside two adjacent regions. We include any segments that must come before the last one in our partial ordering. There are at most four of these subproblems; they are indicated in purple in Figure~\ref{fig:2-oriented-subproblems}.
\item Segments that lie inside only one region. This includes only those segments not used in one of the other subproblems. There are at most four of these subproblems; they are indicated in blue in Figure~\ref{fig:2-oriented-subproblems}.
\end{enumerate}
For subproblems 1 and 2, we now guess the last points used on these subproblems (four points for subproblem 1, two for each instance of subproblem 2). We can then solve these subproblems using a similar dynamic programming algorithm to the one used for parallel segments. For subproblem 3, the problem is easier, as we are only building one half of the convex chain. We simply perform this algorithm for all possible locations for extreme points and endpoints of subproblems. For a given set of guesses, we can solve a subproblem of any type in polynomial time.
\subsection{Special case}\label{ch:special-subsection}
As mentioned above, this case only occurs when the four hulls each contain exactly one endpoint. The construction can be seen in Figure~\ref{fig:2-oriented-quad1}. Let $e_{u\ell}$, $e_{ur}$, $e_{br}$ and~$e_{b\ell}$ be the endpoints on the upper-left, upper-right, lower-right and lower-left hull. Let further $s_u$, $s_r$, $s_b$ and $s_\ell$ be the segments that contain the extreme points.
For two points $a$ and $b$, let $l(a, b)$ be the line through $a$ and $b$.
For a given position of $u$ we can place $r$ on or below the line $l(u, e_{ur})$. Then we can place $b$ on or left of the line $l(r, e_{br})$, $\ell$ on or above $l(b, e_{b\ell})$ and then test if $u$ is on or to the right of $l(\ell, e_{u\ell})$.
Placing $r$ lower decreases the area where $b$ can be placed and the same holds for the other extreme points. It follows that we place $r$ on the intersection of $l(u, e_{ur})$ and $s_r$, we set $\{b\}=l(r, e_{br})\cap s_b$ and $\{\ell\}=l(b, e_{b\ell})\cap s_\ell$.
Let then $u'$ be the intersection of the line $l(\ell, e_{u\ell})$ and the upper segment $s_u$.
To test whether $u'$ is to the left of $u$, we first need the following lemma.
\begin{lemma}\label{th:math-does-not-increase}
Let $\ell$ be a line, $A$ a point, and $X(\tau)$ a point with coordinates $\left(\frac{P_1(\tau)}{Q(\tau)}, \frac{P_2(\tau)}{Q(\tau)}\right)$, where $P_1(\cdot)$, $P_2(\cdot)$, and $Q(\cdot)$ are linear functions. Then the intersection $Y$ of $\ell$ and the line through the points $X$ and $A$ has coordinates $\left(\frac{P'_1(\tau)}{Q'(\tau)}, \frac{P'_2(\tau)}{Q'(\tau)}\right)$, where $P'_1(\cdot)$, $P'_2(\cdot)$ and $Q'(\cdot)$ are linear functions.
\end{lemma}
\begin{proof}
The proof consists of calculating the coordinates of the point $Y$ depending on $\tau$.
Let $(a_x, a_y)$ be the coordinates of the point $A$.
Let $l_1 x + l_2 y + l_3 = 0$ and $k_1 x + k_2 y + k_3 = 0$ be the equations of the line $\ell$ and of the line through $X$ and $A$, respectively.
We can determine $k_2$ and $k_3$ depending on $k_1$ because the line passes through $X$ and $A$.
It follows that $k_2=-k_1\frac{-P_1(\tau) + a_x Q(\tau)}{-P_2(\tau) + a_y Q(\tau)}$ and $k_3=-k_1\frac{a_y P_1(\tau) - a_x P_2(\tau)}{-P_2(\tau) + a_y Q(\tau)}$.
We can then calculate the coordinates of the point $Y$. We obtain
\begin{align*}
Y= \left(\vphantom{\frac{P_1}{P_2}}\right. & \frac{-(a_y l_2 + l_3) P_1(\tau) + a_x (l_2 P_2(\tau) + l_3 Q(\tau))}{l_1 P_1(\tau) + l_2 P_2(\tau) - (a_x l_1 + a_y l_2) Q(\tau)}, \\
& \left.\frac{a_y l_1 P_1(\tau) - (a_x l_1 + l_3) P_2(\tau) + a_y l_3 Q(\tau)}{l_1 P_1(\tau) + l_2 P_2(\tau) - (a_x l_1 + a_y l_2) Q(\tau)}\right)\qedhere
\end{align*}
\end{proof}
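The closed form above can be cross-checked against a direct computation of the intersection point. The following sketch uses hypothetical linear functions $P_1$, $P_2$, $Q$ and an illustrative choice of $\ell$ and $A$ (none of these specific values come from the paper), comparing both computations with exact rational arithmetic:

```python
from fractions import Fraction as F

# Hypothetical linear functions and data, chosen only for illustration.
P1 = lambda t: 2*t + 1
P2 = lambda t: t + 3
Q  = lambda t: t + 5
l1, l2, l3 = F(1), F(1), F(-4)   # line ell: x + y - 4 = 0
ax, ay = F(0), F(0)              # point A at the origin

def lemma_Y(t):
    """Coordinates of Y via the closed form derived in the lemma's proof."""
    p1, p2, q = P1(t), P2(t), Q(t)
    den = l1*p1 + l2*p2 - (ax*l1 + ay*l2)*q
    return ((-(ay*l2 + l3)*p1 + ax*(l2*p2 + l3*q)) / den,
            (ay*l1*p1 - (ax*l1 + l3)*p2 + ay*l3*q) / den)

def direct_Y(t):
    """Intersect ell with the line through X(t) and A by direct computation."""
    xx, xy = P1(t)/Q(t), P2(t)/Q(t)
    dx, dy = ax - xx, ay - xy
    s = -(l1*xx + l2*xy + l3) / (l1*dx + l2*dy)
    return (xx + s*dx, xy + s*dy)

for t in (F(0), F(1), F(2), F(1, 3)):
    assert lemma_Y(t) == direct_Y(t)
```

One can also check that the numerators and the common denominator are affine in $\tau$, as the lemma asserts, by verifying that their second differences vanish.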
Let $(\tau, c)$ be the coordinates of the point $u$ for $\tau\in I$, where the constant $c$ and the interval $I$ are determined by the segment $s_u$.
Then by Lemma~\ref{th:math-does-not-increase} we have that the points $r$, $b$, $\ell$, $u'$ all have coordinates of the form specified in the lemma.
First we have to check for which values of $\tau$ the point $u$ lies between $e_{u\ell}$ and $e_{ur}$, $r$ between $e_{br}$ and $e_{ur}$, $b$ between $e_{b\ell}$ and $e_{br}$, and $\ell$ between $e_{b\ell}$ and $e_{u\ell}$. This results in a system of linear inequalities whose solution set is an interval $I'$.
We then determine the values of $\tau\in I'$ for which $u'=\left(\frac{P_1(\tau)}{Q(\tau)}, \frac{P_2(\tau)}{Q(\tau)}\right)$ lies to the left of $u=(\tau, c)$ by considering the inequality $\frac{P_1(\tau)}{Q(\tau)} \leq \tau$, which becomes a quadratic inequality in $\tau$ after multiplying through by $Q(\tau)$ (taking the sign of $Q(\tau)$ on $I'$ into account).
If there exists a $\tau$ satisfying all these constraints, then there exists a convex transversal such that the points $u$, $r$, $b$ and $\ell$ are the top-, right-, bottom-, and leftmost points, and the points $e_{jk}$ ($j,k=u,r,b,\ell$) are the only endpoints contained in the hulls.
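The final feasibility test thus reduces, assuming $Q(\tau)>0$ on $I'$, to finding where the quadratic $\tau\,Q(\tau) - P_1(\tau)$ is non-negative on an interval. A minimal sketch of that step, with illustrative coefficients:

```python
import math

def quad_nonneg_on(a, b, c, lo, hi):
    """Subintervals of [lo, hi] where a*t^2 + b*t + c >= 0."""
    if a == 0:
        if b == 0:
            return [(lo, hi)] if c >= 0 else []
        r = -c / b
        cand = [(r, hi)] if b > 0 else [(lo, r)]
    else:
        disc = b*b - 4*a*c
        if disc <= 0:
            # a > 0: nonneg everywhere; a < 0: nonneg at most at a single
            # point (ignored in this sketch).
            return [(lo, hi)] if a > 0 else []
        r1 = (-b - math.sqrt(disc)) / (2*a)
        r2 = (-b + math.sqrt(disc)) / (2*a)
        r1, r2 = min(r1, r2), max(r1, r2)
        cand = [(lo, r1), (r2, hi)] if a > 0 else [(r1, r2)]
    return [(max(lo, s), min(hi, e)) for s, e in cand if max(lo, s) <= min(hi, e)]

# Example: with Q(t) = t + 5 and P1(t) = 2t + 1, the condition
# t*Q(t) - P1(t) >= 0 is t^2 + 3t - 1 >= 0; restrict it to I' = [0, 2].
feasible = quad_nonneg_on(1, 3, -1, 0, 2)
```

If any returned subinterval is non-empty, a feasible $\tau$ exists.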
Combining this special case with the algorithm in the previous section, we obtain the following result:
\begin{theorem}\label{th:2-oriented-polynomial}
Given a set of 2-oriented line segments, we can compute the maximum number of regions visited by a convex partial transversal in polynomial time.
\end{theorem}
\subsection{Extensions}
One should note that the concepts explained here generalize to more orientations: each additional orientation adds two more extreme points and therefore two more chains. It follows that for $\rho$ orientations there might be $\rho$th-order fixed points. This increases the running time, because we need to guess more points and the pool of discrete points to choose from is larger, but for a fixed number of orientations it is still polynomial in $n$. The special case generalizes as well, which means that the same case distinction can be used.
\section{3-oriented intersecting segments}
\label{sec:hardthreeintersect}
We prove that the problem of finding a maximum convex partial transversal $Q$ of a set of 3-oriented segments $\mkmcal{R}$
is NP-hard, using a reduction from \textsc{Max-2-SAT}.
\begin{theorem}
Let $\mkmcal{R}$ be a set of segments that have three different orientations. The problem of finding a maximum convex partial transversal $Q$ of $\mkmcal{R}$ is NP-hard.
\end{theorem}
First, note that we can choose the three orientations without loss of generality: any (non-degenerate) set of three orientations can be mapped to any other set using an affine transformation, which preserves convexity of transversals.
We choose the three orientations in our construction to be vertical ($|$), slope $1$ ($\+$), and slope $-1$ ($\-$).
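The affine-invariance claim can be made concrete: a linear map sending any three pairwise-distinct orientations to any other three can be computed from a projective frame on the direction vectors. The sketch below (direction vectors chosen for illustration; helper names are ours) builds such a map with exact rational arithmetic:

```python
from fractions import Fraction as F

def cross(u, v):
    return u[0]*v[1] - u[1]*v[0]

def frame(dirs):
    """2x2 matrix with columns lam1*u1, lam2*u2, where lam1*u1 + lam2*u2 = u3."""
    u1, u2, u3 = dirs
    d = cross(u1, u2)
    lam1, lam2 = cross(u3, u2) / d, cross(u1, u3) / d
    return [[lam1*u1[0], lam2*u2[0]], [lam1*u1[1], lam2*u2[1]]]

def inv(M):
    d = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[M[1][1]/d, -M[0][1]/d], [-M[1][0]/d, M[0][0]/d]]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply_mat(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def orientation_map(src, dst):
    """Linear map taking each src direction to a multiple of the matching dst direction."""
    return mul(frame(dst), inv(frame(src)))

# Example: slopes 0, 1, 2  ->  vertical, slope 1, slope -1.
src = [(F(1), F(0)), (F(1), F(1)), (F(1), F(2))]
dst = [(F(0), F(1)), (F(1), F(1)), (F(1), F(-1))]
M = orientation_map(src, dst)
assert all(cross(apply_mat(M, s), d) == 0 for s, d in zip(src, dst))
```

Since the map is linear and invertible, it preserves convexity, so transversal problems for the two orientation sets are equivalent.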
Given an instance of \textsc{Max-2-SAT} we construct a set of segments $\mkmcal{R}$ and then we prove that from a maximum convex partial transversal $Q$ of $\mkmcal{R}$ one can deduce the maximum number of clauses that can be made true in the instance.
\subsection{Overview of the construction}
Our constructed set \(\mkmcal{R}\) consists of several different substructures.
The construction is built inside a long and thin rectangle, referred to as the \emph {crate}.
The crate is not explicitly part of $\mkmcal{R}$.
Inside the crate, for each variable, there are several sets of segments that form chains. These chains alternate $\+$ and $\-$ segments reflecting on the boundary of the crate.
For each clause, there are vertical $|$ segments to transfer the state of a variable to the opposite side of the crate.
Figure~\ref {fig:bananaoverview} shows this idea.
However, the segments do not extend all the way to the boundary of the crate; instead they end on the boundary of a slightly smaller convex shape inside the crate, which we refer to in the following as the \emph{banana}.
Figure~\ref {fig:bananabending} shows such a banana.
Aside from the chains associated with variables, \(\mkmcal{R}\) also contains segments that form gadgets to ensure that the variable chains have a consistent state, and gadgets to represent the clauses of our \textsc{Max-2-SAT} instance.
Due to their winged shape, we refer to these gadgets by the name \emph{fruit flies}. (See Figure~\ref{fig:fruitfly} for an image of a fruit fly.)
Our construction ensures that we can always find a transversal that includes all of the chains, the maximum number of segments on the gadgets, and half of the $|$ segments. For each clause of our \textsc{Max-2-SAT} instance that can be satisfied, we can also include one of the remaining $|$ segments.
\begin{figure}
\hspace{-1cm}\includegraphics{horizontalbanana}
\caption{Overview of our construction. Each of the colored segment chains represents a variable. At each point where a chain bounces on the banana there is a fruit fly gadget. At each area marked orange there is a clause gadget. Each chain is only pictured once, but in actuality each chain is copied \(m+1\) times and placed at distance \(\epsilon\) of each other. The distance between the different variables is exaggerated for clarity.}
\label{fig:bananaoverview}
\end{figure}
\subsection{Complete construction}
In the following we assume that we are given an instance of \textsc{Max-2-SAT} $(V, C)$, where $V=\{v_1, \dots, v_n\}$ is the set of variables and $C=\{c_1, \ldots, c_m\}$ is the set of clauses. In an instance of \textsc{Max-2-SAT}, each clause has exactly two literals, and the goal is to find an assignment of the variables that satisfies the maximum number of clauses.
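For concreteness, a brute-force reference solver for tiny \textsc{Max-2-SAT} instances (the encoding, with literal $+i$/$-i$ denoting variable $v_i$ positive/negated, is ours and purely illustrative):

```python
from itertools import product

def max2sat(n, clauses):
    """Maximum number of simultaneously satisfiable clauses.

    clauses is a list of pairs of literals; literal +i / -i means
    variable v_i appears positively / negated.
    """
    def holds(lit, assign):
        return assign[abs(lit) - 1] == (lit > 0)
    return max(sum(1 for a, b in clauses if holds(a, x) or holds(b, x))
               for x in product([False, True], repeat=n))
```

For example, of the four clauses $(v_1\lor v_2)$, $(\lnot v_1\lor v_2)$, $(v_1\lor\lnot v_2)$, $(\lnot v_1\lor\lnot v_2)$ at most three can hold at once, so `max2sat(2, [(1, 2), (-1, 2), (1, -2), (-1, -2)])` returns `3`.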
We first construct a set of segments $\mkmcal{R}$ and then we prove that from a maximum convex partial transversal $Q$ of $\mkmcal{R}$ one can deduce the maximum number of clauses that can be made true in $(V,C)$.
The different substructures of \(\mkmcal{R}\) have sizes of differing orders of magnitude. Let therefore $\alpha$ (distance between the chains of the different variables), $\beta$ (distance between the inner and outer rectangles that shape the banana), $\gamma$ (horizontal distance between the inner anchor points of a fly), $\delta$ (vertical distance between outermost bristle line and a fly's wing), $\epsilon$ (distance between multiple copies of one variable segment), and \(\zeta\) (length of upper wing points of the flies), be rational numbers (depending polynomially on $n$ and $m$) with $1\gg\alpha\gg\beta\gg\gamma\gg\delta\gg\epsilon\gg\zeta>0$. Usable values for these constants are given in Table~\ref{tab:bananaconstants}, but other values are also possible.
\begin{table}
\centering
\begin{tabular}{cc}
\toprule
Constant & Value \\
\midrule
\(\alpha\) & \(\frac{1}{100n}\) \\
\(\beta\) & \(\frac{\alpha}{100}\) \\
\(\gamma\) & \(\frac{\alpha^3}{10000m^2}\) \\
\(\delta\) & \(\frac{\gamma}{100m}\) \\
\(\epsilon\) & \(\frac{\delta}{100}\) \\
\(\zeta\) & \(\frac{\delta}{100m^2}\) \\
\bottomrule
\end{tabular}
\caption{Possible values for the constants used in our hardness construction.}
\label{tab:bananaconstants}
\end{table}
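The required separation of scales can be checked mechanically. A quick sketch with exact arithmetic, using the values from Table~\ref{tab:bananaconstants} (we test a moderate instance, since for very small $m$ some of the ratios degenerate):

```python
from fractions import Fraction as F

def constants(n, m):
    """Constants from the hardness construction, as in the table."""
    a = F(1, 100*n)              # alpha
    b = a / 100                  # beta
    g = a**3 / (10000 * m**2)    # gamma
    d = g / (100 * m)            # delta
    e = d / 100                  # epsilon
    z = d / (100 * m**2)         # zeta
    return a, b, g, d, e, z

n, m = 5, 10
a, b, g, d, e, z = constants(n, m)
assert 1 > a > b > g > d > e > z > 0   # the chain 1 >> alpha >> ... >> zeta > 0
```

Each consecutive ratio grows with $n$ and $m$, which is what the "$\gg$" ordering requires.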
\subsubsection{Construction of the Chains}
\label{sec:constructchains}
First we create the crate \(B\) with sides \(1\) by \(2m\). Then we construct the chains.
\begin{lemma}
For each variable, we can create a closed chain of \(4m+2\) segments with endpoints on \(B\) by alternating \(\-\) and \(\+\) segments.
\end{lemma}
\begin{proof}
Let $v_i$ be a variable and \(s_i\) be the (closed) chain we construct to be associated with \(v_i\).
Then the first segment of $s_i$ starts close to the top left of \(B\) at coordinates \((i\alpha,1)\) and has orientation $\-$ until it hits \(B\) at coordinates \((i\alpha+1,0)\) so that it connects the top and bottom sides of \(B\). Then the chain reflects off \(B\)'s bottom side so the second segment has orientation $\+$, shares an endpoint with the first segment and again connects \(B\)'s bottom and top sides by hitting \(B\) at point \((i\alpha+2, 1)\). Then we reflect downwards again.
Every time we go downwards and then upwards again we move a distance of \(2\) horizontally. Since our rectangle has length \(2m\), we can have \(m-1\) such pairs of segments, after which we are at the point \((i\alpha+2m-2,1)\). If we then go downwards to \((i\alpha+2m-1,0)\) and reflect back up again, we hit the right side of \(B\) at coordinates \((2m,1-i\alpha)\). We reflect back to the left and hit the top side of \(B\) at \((2m-i\alpha, 1)\). This point is symmetric to our starting point, so as we continue reflecting to the left we eventually reach the starting point again, creating a closed chain, no matter the values of \(i\) and \(\alpha\).
\end{proof}
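The reflection argument can be replayed exactly: the sketch below bounces a slope-$\pm1$ ray inside the $1 \times 2m$ crate with exact rational arithmetic and counts segments until the chain closes (the starting abscissa stands in for $i\alpha$ and is illustrative):

```python
from fractions import Fraction as F

def chain_length(m, x0):
    """Number of segments of the closed chain started at (x0, 1) going down-right."""
    W, H = 2*m, 1
    p, d = (x0, F(1)), (1, -1)
    start, count = (p, d), 0
    while True:
        (x, y), (dx, dy) = p, d
        tx = (W - x) if dx > 0 else x   # distance to the next vertical wall
        ty = (H - y) if dy > 0 else y   # distance to the next horizontal wall
        t = min(tx, ty)
        p = (x + dx*t, y + dy*t)
        if t == tx: dx = -dx            # reflect off the left/right side
        if t == ty: dy = -dy            # reflect off the top/bottom side
        d = (dx, dy)
        count += 1
        if (p, d) == start:
            return count

for m in (1, 2, 3, 7):
    assert chain_length(m, F(1, 100)) == 4*m + 2
```

The count matches the \(4m+2\) segments claimed in the lemma.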
We construct a chain \(s_i\) for each variable \(v_i \in V\). Then we construct two \(|\) segments for each clause \(c_j \in C\) as follows. Let \(v_k, v_l\) be the variables that are part of clause \(c_j\). There are shared endpoints of two segments of \(s_k\) and \(s_l\) at \((k\alpha+1+2(j-1),0)\) and \((l\alpha+1+2(j-1),0)\), respectively. At these points, we add \(|\) segments, called \emph{clause segments}, with their other endpoints on the top side of \(B\). (See Figure~\ref{fig:bananaoverview}.)
Each chain is replaced by \(m+1\) copies of itself that are placed at horizontal distance \(\epsilon\) of each other. The clause segments are not part of the chain proper and thus not copied.
\subsubsection{How to Bend a Banana}
\label{sec:bananabending}
\begin{figure}
\includegraphics{bananabendinghorizontal}
\centering
\caption{A banana in a crate. It consists of four parabolas with very low curvature so that it is almost as straight as a rectangle while being strictly convex. The distance between \(B\) and \(B'\) has been exaggerated in the figure for clarity.}
\label{fig:bananabending}
\end{figure}
In the previous section we constructed the variable chains. In Section \ref{sec:constructfruitflies} we will construct fruit fly gadgets for each reflection of each chain and each clause. However, if we place the fruit flies on \(B\) they will not be in strictly convex position. To make it strictly convex we create a new, strictly convex bounding shape (the banana) which we place inside the crate. The fruit flies are then placed on the boundary of the banana, so that they are in strictly convex position.
We are given our crate \(B\) of size $1$ by $2m$ and we wish to replace it with our banana shape. (See Figure~\ref {fig:bananabending}.)
We want the banana to have certain properties.
\begin {itemize}
\item The banana should be strictly convex.
\item The distance between the banana and the crate should be at most $\beta$ everywhere.
\item Sufficiently many points with rational coordinates should lie on the boundary of the banana.
\end {itemize}
To build such a banana, we first create an {\em inner crate} $B'$, which has distance $\beta$ to $B$. Then we create four parabolic arcs through the corners of $B'$ and the midpoints of the edges of $B$; see Figure~\ref {fig:bananabending}.
In the following, we make this construction precise. We start by focusing on the top side of the banana.
Let \(P\) be the parabola on the top side. It goes from \((\beta,1- \beta)\), through vertex \((m,1)\) to point \((2m-\beta,1-\beta)\). This means the equation defining \(P\) is \(y = \frac{-\beta}{(m-\beta)^2}(m-x)^2 + 1\).
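One can verify with exact arithmetic that this parabola interpolates the three stated points and stays within $\beta$ of the crate's top side (the specific $m$ and $\beta$ below are illustrative):

```python
from fractions import Fraction as F

def P_top(x, m, beta):
    """Top parabola of the banana: y = -beta/(m-beta)^2 * (m-x)^2 + 1."""
    return -beta / (m - beta)**2 * (m - x)**2 + 1

m, beta = 10, F(1, 1000)
assert P_top(beta, m, beta) == 1 - beta          # left endpoint on B'
assert P_top(m, m, beta) == 1                    # vertex touches B at the midpoint
assert P_top(2*m - beta, m, beta) == 1 - beta    # right endpoint on B'
# The gap to the crate's top side y = 1 never exceeds beta between the endpoints.
assert all(1 - P_top(F(x, 4), m, beta) <= beta for x in range(4, 8*m - 3))
```

The gap bound holds because $|m-x| \le m-\beta$ on $[\beta,\, 2m-\beta]$, so $1 - P(x) = \beta\,(m-x)^2/(m-\beta)^2 \le \beta$.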
On the top side of the banana, two types of flies need to be placed: at each reflection of a chain there is a \emph{reflection fly}, and for each clause there is a \emph{clause fly}. The positions of the reflections on the top side for the chains of a variable \(v_i\) are \((i\alpha+2k + j\epsilon,1)\) and \((2m - [i\alpha+2k + j\epsilon],1)\) for each \(j \in \{1,\dots,m+1\}, k \in \{0,\dots,m-1\}\) (see Section~\ref{sec:constructchains}). The clauses have approximate position \((1 +j, 1)\) for each \(j\in \{1,\dots,m\}\). (The clauses are moved horizontally by a multiple of \(\alpha\) depending on which variables are included in the clause.) The reflection flies have distance \(\alpha\) from each other, which is much larger than \(\beta\) and \(\epsilon\), and the distances between reflections on the other sides of the crate are of similar size. So it is possible to create a box with edges of size \(\beta\) around each reflection point such that
\begin{itemize}
\item All involved segments in the reflection (meaning every copy of the two chain segments meeting at the reflection, and a possible clause segment) intersect the box.
\item There cannot be any other segments or parts of flies inside the box.
\item The top and bottom edges of the box lie on \(B\) and \(B'\). (Or the left and right edges do if the reflection is on the left or right side of the crate.)
\end{itemize}
The clause flies have distance 1 from any other flies, so even though they are wider than reflection flies (width at most \(n\alpha\)), we can likewise make a rectangular box such that both the clause's segments intersect the box and no other segments do. The box has a height of \(\beta\) and is placed between \(B\) and \(B'\); the width is based on what is needed by the clause.
Inside each reflection or clause box we find five points on \(P\) with rational coordinates, which will be the fly's \emph{anchor points}: the two intersection points of the box's boundary with \(P\), the point on \(P\) whose \(x\)-coordinate is at the center of the box, and the points on \(P\) at horizontal distance \(\gamma\) to the left and right of this center point. We will use these anchor points to build our fly gadgets. This way we have guaranteed rational coordinates and everything in convex position. See Figure~\ref{fig:flybox}.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth,trim={6.7cm, 0, 7cm, 0},clip]{box}
\caption{Close-up showing the box we construct around a reflection of a chain on the bottom side of \(B\). \(P\) is the parabola that forms the bottom side of the banana. The anchor points (shown in purple) will be used to construct fruit flies. Going from left to right, the second and fourth points have a horizontal distance \(\gamma\) to the center point. In the figure \(\gamma\) is exaggerated for clarity.}
\label{fig:flybox}
\end{figure}
\subsubsection{Construction of the Fruit Flies}
\label{sec:constructfruitflies}
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{leftwingfly}
\caption{Fly with clause segment on left wing}
\label{fig:leftwingfly}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{rightwingfly}
\caption{Fly with clause segment on right wing}
\label{fig:rightwingfly}
\end{subfigure}
\caption{By choosing which anchor points to connect, we can make the clause segment intersect either the left or the right wing of the fly. The clause segment always intersects the center anchor point (before it is shortened to be in convex position). }
\label{fig:politicalfly}
\end{figure}
\iffalse
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{annotatedfly}
\caption{Unswatted fly}
\label{fig:unswatfly}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\vspace{2.8cm}
\includegraphics[width=\textwidth]{swattedfly}
\caption{Partially swatted fly}
\label{fig:swatfly}
\end{subfigure}
\caption{The Fruit Fly gadget. The outer green line segments belong to a chain \(s_i\), the middle segment is a clause segment. The endpoints of the segments and the upper wing points (shown in red) are in convex position due to their placement on the bristle lines (shown in blue). In our actual construction, the fly appears completely \emph{swatted}: the points defining the fly lie on the same parabola, so close together that the fly is almost completely flat. The outer green line segments are then at an angle of \(90^\circ\).}
\label{fig:fruitfly}
\end{figure}
\fi
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{annotatedfly}
\caption{The Fruit Fly gadget. The endpoints of the segments and the upper wing points (shown in red) are in convex position due to their placement on the bristle lines (shown in blue). In our actual construction, the fly appears completely \emph{swatted}: the points defining the fly lie on the same parabola, so close together that the fly is almost completely flat. The chain segments are then at an angle of \(90^\circ\).}
\label{fig:fruitfly}
\end{figure}
Now we construct the numerous fruit fly gadgets that ensure the functionality of our construction.
The general concept of a fruit fly can be seen in Figure~\ref{fig:fruitfly}.
NB: In our images all flies are oriented ``top side up'', as if they were on the top side of the banana (even though reflection flies with a clause segment, as seen in Figure~\ref{fig:fruitfly}, can also occur on the bottom side). In our actual construction, flies are oriented such that the ``top side'' points outwards from the banana relative to the side they are on.
We create
$n(4m+2)+m$ flies in total, of two types:
one {\em reflection fly} at each reflection of each chain and
one {\em clause fly} for each clause.
Each fly consists of a pair of {\em wings}. The wings are created by connecting four of the five anchor points in a criss-cross manner, creating two triangles. The intersection point between the segments in the center of the fly divides the wings into an \emph{upper wing} and a \emph{lower wing}. The intersecting segments are referred to in the following as the \emph{wing lines}.
The choice of which anchor points to connect depends on the presence of a clause segment. We always connect the outer points and the center point. If the fly is a reflection fly with a clause segment, we connect the anchor points such that the segment intersects the lower right wing if the variable appears negated in the clause and the lower left wing otherwise. See Figure~\ref{fig:politicalfly}. If the fly is a clause fly, or a reflection fly without a clause segment, the choice is made at random. The wings are implicit: there are no segments in \(\mkmcal{R}\) that correspond to them.
Besides the segments making up the wings, we also create \(m+1\) line segments parallel to the wing lines at heights increasing up to \(\delta\) above each wing line. We will refer to these extra line segments as the flies' \emph{bristle lines}.
The distance between bristle lines decreases quadratically. We first compute a step size \(\kappa = \frac{\delta}{m^2}\). Then, for both wing lines, the bristle lines are placed at heights \(h_w + \delta - (m+1-i)^2\kappa\) for all \(i \in \{1,\dots,m+1\}\) where \(h_w\) is the height of the wing line (relative to the orientation of the fly).
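The quadratic spacing can be sketched directly (heights taken relative to a wing line normalized to $h_w = 0$; the specific $m$ and $\delta$ are illustrative):

```python
from fractions import Fraction as F

def bristle_heights(m, delta, h_w=0):
    """Heights h_w + delta - (m+1-i)^2 * kappa for i = 1..m+1, kappa = delta/m^2."""
    kappa = F(delta) / m**2
    return [h_w + delta - (m + 1 - i)**2 * kappa for i in range(1, m + 2)]

m, delta = 5, F(1, 100)
h = bristle_heights(m, delta)
assert len(h) == m + 1
assert h[-1] == delta                  # the highest bristle line sits delta above h_w
gaps = [h[i+1] - h[i] for i in range(m)]
assert all(g > 0 for g in gaps)        # heights strictly increase
assert all(gaps[i+1] < gaps[i] for i in range(m - 1))  # spacing shrinks quadratically
```

The strictly shrinking gaps are what later keep the shortened segment endpoints in convex position.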
When the bristle lines have been constructed, we shorten all of the line segments involved in the fly so that their endpoints are no longer on \(B\) but lie on a wing line or one of the bristle lines. We do this in such a manner that the endpoints of the copies of a segment are all in convex position with each other (as well as with the next and previous fly) and have rational coordinates.
To shorten the chain segments, we consider the copies as being sorted horizontally. W.l.o.g. we look at the segments intersecting the lower left wing, which are sorted from left to right. The first segment (the original uncopied one) gets shortened so its endpoint is the intersection between the original line segment and the fly's wing line. The next segment gets shortened so its endpoint is the intersection with the first bristle line. The next segment gets shortened so its endpoint is on the second bristle line, etc. The final copy's endpoint will lie on the penultimate bristle line.
If there is a clause segment on the wing it is shortened so that its endpoint lies on the highest bristle line.
After shortening all of the segments, we also add \(m+1\) vertical line segments of length \(\zeta\) to each of the flies' upper wings. Their length is chosen this short so that they behave like points: it does not matter for any transversal which point on the segment is chosen, only whether the segment is chosen at all. The line segments have horizontal distance \(\epsilon\) from each other and are placed in reverse order compared to the chain segments: the first line segment intersects the penultimate bristle line, the next intersects the bristle line below that, and so on, until the final segment intersects the wing line at the tip of the wing. These segments form a part of \(\mkmcal{R}\) and are referred to in the following (shown in red in Figure~\ref{fig:fruitfly}) as the fly's \emph{upper wing points}.
For clause flies, the clause segments are shortened such that their endpoints lie on the highest bristle line. The fly is placed such that each clause segment has its own wing. The clause flies also have \(m+1\) upper wing points per wing.
\subsubsection{Putting it all together}
\label{sec:bananacount}
\begin{lemma}
\label{lem:polybanana}
The transformation of the instance of \textsc{Max-2-SAT} to \textsc{3-Oriented Maximum Partial Transversal} can be done in polynomial time and space.
\end{lemma}
\begin{proof}
The segments of all copies of the chains, the clause segments, and the upper wing points of the flies together form the set $\mkmcal{R}$. Each chain has \(4m+2\) segments, and we have \(n(m+1)\) chains. We also have \(n(4m+2)\) reflection flies, each with \(2(m+1)\) upper wing points. We have \(2m\) clause segments. Finally, we have \(m\) clause flies with \(2(m+1)\) upper wing points each. This brings the total size of \(\mkmcal{R}\) to
\(n(12m^2 + 18m + 6)+ 4m + 2m^2\).
During construction, we create \(5\) anchor points for each fly. We also construct \(m+1\) bristle lines for each fly. Each segment in \(\mkmcal{R}\) is first created and then has its endpoints moved to convex position by intersecting the segment with a bristle line. All of this can be done in polynomial time and space.
\end{proof}
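The count in the lemma can be cross-checked by summing the parts directly (a sketch; the part names follow the construction above):

```python
def total_size(n, m):
    """Total number of regions in R, summed part by part."""
    chains        = n * (m + 1) * (4*m + 2)    # m+1 copies of each variable chain
    refl_points   = n * (4*m + 2) * 2*(m + 1)  # upper wing points of reflection flies
    clause_segs   = 2 * m                      # two clause segments per clause
    clause_points = m * 2 * (m + 1)            # upper wing points of clause flies
    return chains + refl_points + clause_segs + clause_points

# Matches the closed form n(12m^2 + 18m + 6) + 4m + 2m^2 from the lemma.
assert all(total_size(n, m) == n*(12*m**2 + 18*m + 6) + 4*m + 2*m**2
           for n in range(1, 6) for m in range(1, 6))
```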
\subsection{Proof of Correctness}
\begin{lemma}
The \textsc{Max-2-SAT} instance $(V, C)$ has an assignment of variables such that $k$ clauses are true if and only if the set $\mkmcal{R}$ admits a maximum convex partial transversal $Q$ with $|Q|=|\mkmcal{R}|-n(4m+2)(m+1) - (m-k)$.
\end{lemma}
\begin{proof}
Our fruit fly gadget is constructed such that a convex transversal can only ever include half of the upper wing points of each fly. So any transversal includes at most \(|\mkmcal{R}| - n(4m+2)(m+1) \) points.
As we will show below, it is always possible to create a transversal that includes half of the upper wing points of every fly plus every chain segment of every variable, giving a transversal of at least \(|\mkmcal{R}|-n(4m+2)(m+1) - m\) points. So, as each fly has \(m+1\) upper wing points and each segment is copied \(m+1\) times, a maximum transversal must visit all flies and all chain segments, no matter how many clause segments are included.
A guaranteed way to include all flies in a transversal is to stay on the boundary of the banana, including the flies in the order in which they appear on it, while only choosing points on the chain segments that are inside the halfplanes induced by the flies' wing lines. (For convenience, we assume that we only pick the segments' endpoints, as it does not matter which point in this halfplane we choose with regard to which other points are reachable while maintaining convexity and the ability to reach other flies.)
The only way to visit the maximum number of regions on a fly is to pick one of the two wing lines (with related bristle lines) and only include the points on the upper and lower wing it induces. (So either all segments on the lower left and upper right wing, or all segments on the lower right and upper left wing.) If we consider the two flies that contain the opposite endpoints of a chain segment it is clear that choosing a wing line on one of the flies also determines our choice on the other fly. If we choose the wing line on the first fly that does not include the \(m+1\) copies of the line segment we must choose the wing line on the other fly that does include them, otherwise we miss out on those \(m+1\) regions in our transversal. Since the chains form a cycle, for each set of chains corresponding to a variable we get to make only one choice. Either we choose the left endpoint of the first segment, or the right one. The segments of the chain then alternate in what endpoints are included.
If we choose the left endpoint for the first segment of a chain it is equivalent to setting the corresponding variable to true in our \textsc{Max-2-SAT} instance, otherwise we are setting it to false. \\ \\
At the reflection flies that have a clause segment, the endpoint of that clause segment on the fly can be added to the partial transversal iff it is on the wing that is chosen for that fly. (Recall from Section~\ref{sec:constructfruitflies} that which wing contains the clause segment's endpoint depends on whether the variable appears negated in the clause.) The clause segment has a clause fly at its other endpoint, which it shares with the clause segment of another variable. If one of the two clause segments is already included in the transversal because it was on the correct wing of the reflection fly, we can choose the other wing of the clause fly. If neither of the clause segments is already included, we can only include one of the two by picking one of the two wings. This means there is no way to include the other clause segment, so our convex partial transversal is \(1\) smaller than it would be otherwise. This corresponds to the clause not being satisfied in the \textsc{Max-2-SAT} assignment.
Since we can always get half of the upper wing points and all of the chain segments, our maximum convex partial transversal has cardinality $|Q|=|\mkmcal{R}|-n(4m+2)(m+1) - (m-k)$, where \(k\) is the number of clauses that can be satisfied at the same time. Since Lemma~\ref{lem:polybanana} shows our construction is polynomial, we have proven that the problem of finding a maximum convex partial transversal of a set of 3-oriented line segments is NP-hard.
\end{proof}
\subsection {Implications}
Our construction strengthens the result of~\cite{schlipf2012notes} by showing that the problem is already NP-hard with only 3 orientations. The machinery appears to be powerful: with a slight adaptation, we can also show that the problem is NP-hard for axis-aligned rectangles.
\begin{theorem}
Let $\mkmcal{R}$ be a set of (potentially intersecting) axis-aligned rectangles. The problem of finding a maximum convex partial transversal $Q$ of $\mkmcal{R}$ is NP-hard.
\end{theorem}
\begin{proof}
We build exactly the same construction, but afterwards we replace every vertical segment by a $45^\circ$ rotated square and all other segments by arbitrarily thin rectangles. The points on the banana's boundary are opposite corners of the square, and the body of the square lies in the interior of the banana so placing points there is not helpful.
\end{proof}
\bibliographystyle{plain}
arXiv:1809.10078 --- \emph{Convex partial transversals of planar regions}, Computational Geometry (cs.CG), 2018-09-27, https://arxiv.org/abs/1809.10078.

Abstract: We consider the problem of testing, for a given set of planar regions $\cal R$ and an integer $k$, whether there exists a convex shape whose boundary intersects at least $k$ regions of $\cal R$. We provide a polynomial time algorithm for the case where the regions are disjoint line segments with a constant number of orientations. On the other hand, we show that the problem is NP-hard when the regions are intersecting axis-aligned rectangles or 3-oriented line segments. For several natural intermediate classes of shapes (arbitrary disjoint segments, intersecting 2-oriented segments) the problem remains open.
https://arxiv.org/abs/2210.04968 | The coverage ratio of the frog model on complete graphs

Abstract: The frog model is a system of interacting random walks. Initially, there is one particle at each vertex of a connected graph $\mathcal{G}$. All particles are inactive at time zero, except for the one which is placed at the root of $\mathcal{G}$, which is active. At each instant of time, each active particle may die with probability $1-p$. Once an active particle survives, it jumps on one of its nearest vertices, chosen with uniform probability, performing a discrete time simple symmetric random walk (SRW) on $\mathcal{G}$. Up to the time it dies, it activates all inactive particles it hits along its way. From the moment they are activated on, every such particle starts to walk, performing exactly the same dynamics, independent of everything else. In this paper, we take $\mathcal{G}$ as the $n$-complete graph ($\mathcal{K}_n$, a finite graph with each pair of vertices linked by an edge). We study the limit in $n$ of the coverage ratio, that is, the proportion of vertices visited by some active particle up to the end of the process, after all active particles have died.

\section{Introduction}
\label{cap:introducao}
The frog model is a system of interacting random walks on a given rooted connected graph, finite or infinite. Initially, the graph contains some configuration (maybe one per vertex) of inactive particles on its vertices. The particles at the root of the graph start out active and perform independent simple nearest-neighbor discrete time random walks. Whenever a vertex with inactive particles is first visited, all the particles at the vertex wake up and begin their own independent random walks, waking up inactive particles as they visit them. The lifetime of a particle, in case it is active, can be finite or infinite. This model is often associated with the dynamics of the spread of a rumor or the spread of a virus in a connected population.
A formal definition of the frog model can be found in \cite{phase_transition}.
Part of the literature on the frog model is focused on the case where active particles have infinite lifetime and $\mathcal{G}$ is an infinite connected graph. In this setup, a goal is to study recurrence/transience, that is, conditions for the root of the graph to be visited infinitely often, as in \cite{recorrencia2,recorrencia3,recorrencia6}.
Another approach on infinite graphs is to study the limit shape of the set
of visited vertices, also known as \textit{shape theorem}, as found in \cite{shape_theorem}.
On finite graphs and particles with infinite lifetime, one can study the first time where each vertex of the graph is visited
at least once (the \textit{coverage time}), as in \cite{vida_fixa2,cover_time}. Finite lifetimes on finite graphs are considered in \cite{vida_fixa,vida_fixa2}.
In this paper, we are interested in studying the limit coverage ratio for the frog model on $\mathcal{K}_n$, the $n$-complete graph, when the lifetime of an active particle is a geometric random variable with parameter $1-p$. This is a consequence of the fact that at each instant of time, independently of everything else, each active particle may die with probability $1-p$ or survive with probability $p$.
We know from \cite{grafo_completo1} that, starting from a configuration with one particle per vertex on $\mathcal{K}_n$, there is a phase transition: there exists a non-trivial critical parameter $p_c$ such that the coverage ratio converges to zero in distribution for $p<p_c$, while this convergence does not occur for $p>p_c$. See \cite{grafo_completo2} for some interesting results with this setup. It is also shown in \cite{grafo_completo1} that $p_c=1/2$. Our main theorem extends the previous result, giving more details about the number of visited vertices and also proving, for $p>p_c=1/2$, the exact limit in $n$ of the probability that the coverage ratio is strictly larger than zero.
Let us start with a basic definition.
\begin{defn}
Let $V_\infty = V_\infty(\mathcal{K}_n)$ be the number of vertices of $\mathcal{K}_n$ visited at least once up to the time there are no more active particles.
\end{defn}
Our main result is the following.
\begin{teo} \label{teorema:principal}
Consider the frog model on $\mathcal{K}_n$, starting from the configuration with one particle per vertex. Then
(i) For $p\leq 1/2$, for every function $f:\mathbb{N}\rightarrow \mathbb{R}$ such that $\lim_{n \to \infty}f(n)=+\infty$,
\[\lim_{n \to \infty}P(V_\infty \leq f(n))=1.
\]
(ii) For $p>1/2$, there are constants $c>0$ and $c'\in (0,1)$ such that
\[\lim_{n \to \infty}P(V_\infty \leq c \log n)=\frac{1-p}{p},\]
\[\lim_{n \to \infty}P(V_\infty \geq c' n)=\frac{2p-1}{p}.
\]
\end{teo}
Observe that the function $f(n)$ in part \textit{(i)} of Theorem~\ref{teorema:principal} can be replaced by $c \log n$ for any $c>0$.
Moreover, part \textit{(i)} of Theorem~\ref{teorema:principal} is valid for any sequence of connected graphs; the proof we provide in this paper also covers that case.
The ideas and techniques used in the proof of Theorem~\ref{teorema:principal} are inspired by the study of the emergence of a giant component in the Erdős–Rényi random graph (see \cite[sec. 11.2]{componente_gigante} for details). Some basic concepts of branching processes (see~\cite[sec. 5.4]{branching}) are also used.
The intuition behind Theorem~\ref{teorema:principal} comes from the following observation: at a stage when only a few vertices of a large graph have been visited, the next surviving particle will most likely visit an unvisited vertex, activating a new particle. Therefore, at the beginning of the process, the number of visited vertices is close to the number of individuals in a branching process that generates $2$ offspring with probability $p$ and $0$ offspring with probability $1-p$. For this branching process, the extinction probability equals $1$ for $p \leq 1/2$ and $\frac{1-p}{p}$ for $p>1/2$. Moreover, for $p>1/2$, the branching process survives forever with probability $\frac{2p-1}{p}$, an event that is in some sense analogous to $V_\infty$ being of order $n$.
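As a sanity check on the extinction probabilities quoted above (ours, not part of the paper), one can iterate the generating-function fixed point $q \mapsto 1-p+pq^2$ numerically; starting from $q=0$, the iteration converges to the smallest non-negative fixed point, which is the extinction probability:

```python
# Numerical sketch: extinction probability of a branching process with
# offspring distribution P(2 children) = p, P(0 children) = 1 - p.
# It is the smallest non-negative fixed point of q = 1 - p + p*q^2,
# reached by iterating the map from q = 0.

def extinction_probability(p, iterations=20_000):
    q = 0.0
    for _ in range(iterations):
        q = 1.0 - p + p * q * q
    return q

print(extinction_probability(0.40))  # close to 1: extinction is certain for p <= 1/2
print(extinction_probability(0.75))  # close to (1 - 0.75)/0.75 = 1/3
```

For $p \leq 1/2$ the only fixed point in $[0,1)$ collapses to $1$, matching certain extinction; for $p>1/2$ the iteration lands on $(1-p)/p < 1$.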
The paper is organized as follows.
In Section~\ref{sec:proc_auxiliar} we describe an auxiliary process that simplifies the computations. Section~\ref{sec:prova_teorema} contains the proof of Theorem~\ref{teorema:principal}: the subcritical phase ($p \leq 1/2$) is treated in Subsection~\ref{sec:subcritica}, for any sequence of connected graphs, while the supercritical phase ($p > 1/2$) is treated in Subsection~\ref{sec:supercritica}.
\section{The auxiliary process}
\label{sec:proc_auxiliar}
Let us define an auxiliary process via a small modification of the frog model. We start from the $(n+1)$-complete graph, $\mathcal{K}_{n+1}$, pick one of its vertices and declare it the root. We denote the set of vertices of $\mathcal{K}_{n+1}$ by $\mathcal{V}(\mathcal{K}_{n+1})$.
At time zero, there is one active particle at the root and one inactive particle at each of the other vertices of $\mathcal{K}_{n+1}$. All particles present at time zero are declared \textit{original}; this distinction matters because \textit{extra} particles will be introduced later.
The auxiliary process has the following characteristics:
\begin{itemize}
\item It evolves in rounds, so that only one particle acts (dying or moving) in each round. The particle under consideration survives with probability $p$ and, if it survives, it jumps to one of its nearest vertices,
chosen uniformly at random.
\item Let $R$ be the round in which the last original active particle dies; this is the time at which the original frog model dies out. At this very moment, every remaining inactive particle becomes an \textit{extra} inactive particle. Moreover, a brand new \textit{extra} active particle is placed at the root of $\mathcal{K}_{n+1}$, and it acts in round $R+1$.
\item From then on, every time the last extra active particle dies, another extra active particle is placed at the root of $\mathcal{K}_{n+1}$.
\end{itemize}
Observe that only extra particles remain from round $R+1$ on. Observe also that $R<\infty$ with probability $1$: the number of original particles activated up to any given time is at most $|\mathcal{V}(\mathcal{K}_{n+1})|=n+1<\infty$, and the lifetime of any active particle is a geometric random variable with parameter $1-p$, which is finite with probability $1$ since we restrict to $p < 1$.
Even though $R<\infty$, the auxiliary process as a whole is infinite, as it renews itself whenever an extra particle is placed at the root. This rule makes it possible to analyze the behavior of the particle (original or extra) that acts in the $k$th round for any $k \in \mathbb{N}$, without checking whether the original process has finished or not.
Now, for $k\in \mathbb{N}$ we define the random variable $X_k$, which depends on what happens to the particle that acts in round $k$ of the auxiliary process:
\begin{itemize}
\item $X_k=0$ if it dies.
\item $X_k=1$ if it survives and jumps to a vertex which has been visited before round $k$.
\item $X_k=2$ if it survives and jumps to a vertex which has never been visited before round $k$.
\end{itemize}
Therefore, $X_k$ can be interpreted as the number of descendants (in the branching process sense) of the particle that acts in the $k$th round. Observe that the sequence $X_1,X_2,...$ is neither independent nor identically distributed.
From the number of descendants in each round, we define the number of potentially active particles at the end of the $k$-th round as
\begin{equation} \label{def:pot_ativas}
A'_0:=1; \hspace{5mm} A'_k:=1+\sum_{j=1}^k (X_j-1), \hspace{1mm} k\geq 1.\end{equation}
The term \textit{potentially} comes from the fact that the previous expression also counts descendants of extra particles. We can disregard these by taking
\begin{equation}\label{def:r}
R=\inf\{k:A'_k=0\}
\end{equation}
\noindent
and setting the number of active particles at the end of the $k$th round as
\begin{equation} \label{def:ativas}
A_k:=A'_k{\mathbbm{1}}_{(k<R)}, \hspace{2mm} k \in \mathbb{N}.
\end{equation}
Similarly, we define the number of potentially visited vertices by the end of the $k$th round as
\begin{equation}\label{def:pot_visitados}
V'_0:=1; \hspace{5mm} V'_k:=1+\sum_{j=1} ^{k} {\mathbbm{1}}_{(X_j=2)}, \hspace{1mm} k\in \mathbb{N}
\end{equation}
\noindent
and the number of vertices visited at the end of the $k$th round as
\begin{equation} \label{def:visitados}
V_k:=V'_k {\mathbbm{1}}_{(k < R)} + V'_R {\mathbbm{1}}_{(k \geq R)},\hspace{1mm} k\in \mathbb{N}.
\end{equation}
We define the total number of vertices visited during the whole process by
\begin{equation} \label{def:visitados_total}
V_\infty:=\lim_{k \to \infty} V_k=V_{R}.
\end{equation}
Note that $V_\infty=V_R$ is not affected by the extra particles, as they are triggered only after $R$. The extra particles are useful for computational purposes, as they make it possible to keep track of the number of potentially active particles and potentially visited vertices without having to worry about whether all the original particles have died or not.
It is important to note that $V_\infty$ also describes the total number of vertices visited in the original frog model. Therefore, the study of the frog model will be done from the auxiliary process, using $V_\infty$.
Note that the auxiliary process and the definitions above make sense for any connected graph; in some cases one could have $R=\infty$ with positive probability. The auxiliary process is particularly well suited to the complete graph because some computations become explicit, such as
\begin{equation}\label{eq:dist_x}
P(X_k=x|V'_{k-1}=v)=\begin{cases}
1-p & \text{if $x=0$,}\\
\frac{p(v-1)}{n} & \text{if $x=1$,}\\
\frac{p(n-v+1)}{n} & \text{if $x=2$.}
\end{cases}
\end{equation}
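The transition probabilities above also make the auxiliary process easy to simulate. The following sketch (ours, not from the paper) tracks only the counts $A'_k$ and $V'_k$ and runs one realization up to time $R$:

```python
import random

def simulate_auxiliary(n, p, rng):
    """One realization of the auxiliary process on K_{n+1}, tracked only
    through the counts A'_k (potentially active) and V'_k (potentially
    visited); returns (R, V_infinity)."""
    active, visited = 1, 1                       # A'_0 = V'_0 = 1
    rounds = 0
    while active > 0:
        rounds += 1
        if rng.random() >= p:                    # X_k = 0: the acting particle dies
            active -= 1
        elif rng.random() < (visited - 1) / n:   # X_k = 1: jump to a visited vertex
            pass
        else:                                    # X_k = 2: jump to a fresh vertex
            active += 1
            visited += 1
    return rounds, visited                       # R and V_infinity = V'_R

rng = random.Random(2022)
R, V = simulate_auxiliary(n=1000, p=0.7, rng=rng)
```

Each branch changes `active` by $X_k - 1$ and `visited` by ${\mathbbm{1}}_{(X_k=2)}$, exactly as in (\ref{def:pot_ativas}) and (\ref{def:pot_visitados}).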
\section{Proofs}
\label{sec:prova_teorema}
We start with a basic probability result, a Chernoff bound, that will be useful later on.
\begin{lem}[\cite{chernoff}, Theorem 4.5] \label{lema:chernoff}
For a random variable $X$ with binomial distribution, $X \sim B(n,p)$, it holds that
\[P(X \leq E(X) -t) \leq \exp\left(-\frac{t^2}{2E(X)}\right), \hspace{3mm} t>0.\]
\end{lem}
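For concreteness (our check, not the paper's), the bound of Lemma~\ref{lema:chernoff} can be compared with the exact binomial lower tail on a small example:

```python
from math import comb, exp

def binom_cdf(n, p, x):
    """Exact P(X <= x) for X ~ Bin(n, p), by direct summation."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(int(x) + 1))

# Compare P(X <= E[X] - t) with the Chernoff bound exp(-t^2 / (2 E[X])).
n, p, t = 200, 0.3, 15.0
mean = n * p                       # E[X] = 60
exact = binom_cdf(n, p, mean - t)  # P(X <= 45)
bound = exp(-t * t / (2 * mean))   # exp(-1.875), roughly 0.15
assert exact <= bound
```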
\subsection{The sub-critical phase: $p\leq 1/2$} \label{sec:subcritica}
\begin{proof}[Proof of Theorem~\ref{teorema:principal} part (i)]
Consider a sequence $\{\mathcal{G}_n\}_{n\in \mathbb{N}}$ of connected graphs. Let us define a process in which every active particle that jumps to a neighboring vertex always chooses one that has never been visited before. Our aim is that the number of visited vertices in this new process dominates, at any given time, the same quantity computed for the auxiliary process of Section~\ref{sec:proc_auxiliar}.
Let us define a sequence of random variables $\{X_j^+\}_{j\in \mathbb{N}}$ by
\begin{equation}\label{eq:compara_y}
X_j^+=\begin{cases}
0 & \text{if $X_j=0$,}\\
2 & \text{if $(X_j=1) \cup (X_j=2)$.}
\end{cases}
\end{equation}
Note that $X^+_1,X^+_2,...$ are independent and identically distributed, as they depend only on the survival events of the particles. Let $X^+$ be a random variable with the same distribution as $X^+_1$. It holds that
\begin{equation} \label{eq:esp_y}
E(X^+)=2P(X^+=2)=2p\leq 1.
\end{equation}
Next, we consider a branching process in which the number of children of each individual is distributed as $X^+$. As in the auxiliary process, we keep track of this branching process through the number of descendants of each individual per round.
Analogously to definitions (\ref{def:pot_ativas}), (\ref{def:r}), (\ref{def:ativas}), (\ref{def:pot_visitados}), (\ref{def:visitados}) and (\ref{def:visitados_total}), denote the number of potentially alive individuals at the end of the $k$th round of the branching process by
\begin{equation} \label{def:pot_ativas_y}
A'^{+}_k:=1+\sum_{j=1}^k (X_j^+-1),
\end{equation}
\noindent
and the number of rounds until the extinction of this population by
\begin{equation} \label{def:r_y}
R^{+}:=\inf\{k \in \mathbb{N}:A'^{+}_k=0\}.
\end{equation}
The total number of individuals in this branching process can be written as
\[V_\infty^{+}:=1+\sum_{j=1}^{R^{+}}{\mathbbm{1}}_{(X_j^+=2)}.
\]
Note that (\ref{eq:esp_y}) tells us that the branching process with offspring distribution $X^+$ becomes extinct, and hence its total number of individuals is finite, with probability $1$. The extinction probability does not change if we count the descendants of one individual at a time in each round. Since, by hypothesis, $\lim_{n \to \infty}f(n)=+\infty $, it follows that
\[\lim_{n \to \infty}P(V_\infty^+\leq f(n))=1.
\]
From ($\ref{eq:compara_y}$), we have $X^+_k\geq X_k$ for every $k\in \mathbb{N}$, which in turn implies $A'^+_k\geq A'_k$ and hence $R^+\geq R$. It is also true that $V_\infty^+\geq V_\infty$. We conclude that
\[\lim_{n \to \infty}P(V_\infty\leq f(n))=1.
\]
\end{proof}
\subsection{The super-critical phase: $p > 1/2$} \label{sec:supercritica}
\begin{proof}[Proof of Theorem~\ref{teorema:principal} part (ii)]
As the case $p=1$ is trivial, we focus on $p \in (1/2,1)$, splitting the proof into the following three statements:
\begin{itemize}
\item[(a)] For $k_-=2 \frac{4p}{(1+2p)[\frac{2p-1}{8p+4}]^2}\log n$ and $k_+=(1-\frac{2}{1+2p})n$, it holds that $${\lim_{n \to \infty}P(R \in [k_-,k_+])=0}.$$
\item[(b)] $\lim_{n \to \infty}P(V_\infty\leq c\log n|R < k_-)=1$ and ${\lim_{n \to \infty}P(V_\infty\geq c'n|R > k_+)=1}$.
\item[(c)] $\lim_{n \to \infty}P(R < k_-)= \frac{1-p}{p}$.
\end{itemize}
These three statements together complete the proof. Next, we prove (a), (b) and (c).
\noindent
\textit{Proof of (a).} First we define
\[k_-=k_-(n):=2 \frac{4p}{(1+2p)[\frac{2p-1}{8p+4}]^2}\log n;\]
\noindent
and
\[k_+=k_+(n):=(1-\frac{2}{1+2p})n.\]
Note that $0< 1-\frac{2}{1+2p} < 1$ when $p> 1/2$.
For $k \in \mathbb{N}\cap[k_-,k_+]$, we may consider independent random variables $Y_1,...,Y_k$ such that for $j\in \{1,...,k \}$
\begin{equation}
\label{eq:dist_y}
P(Y_j=x)=
\begin{cases}
1-p & \text{if $x=0$,}\\
\frac{pk_+}{n} & \text{if $x=1$,}\\
\frac{p(n-k_+)}{n} & \text{if $x=2$.}
\end{cases}
\end{equation}
For any $j\in \{1,...,k \}$ it holds that $V'_{j-1}-1\leq j \leq k \leq k_+$. Note that $Y_j$ can be interpreted as the number of descendants of the particle acting in the $j$th round when the number of visited vertices is maximal (compare (\ref{eq:dist_y}) with (\ref{eq:dist_x})). Hence, by a coupling argument, we may assume that $\sum_{j=1}^k X_j$ is always at least $\sum_{j=1}^k Y_j$. We denote this fact by
$\sum_{j=1}^k X_j \succeq \sum_{j=1}^k Y_j.$
Next, let us define $a_k:=(\frac{2p}{1+2p}-\frac{1}{2})k$. Note that $0<\frac{2p}{1+2p}-\frac{1}{2}<1$ for $p>1/2$.
By using the Chernoff bounds from Lemma~\ref{lema:chernoff}, together with the fact that $$\sum_{j=1}^k {\mathbbm{1}}_{(Y_j=2)} \sim Bin(k,\frac{p(n-k_+)}{n})$$ and (\ref{def:pot_ativas}), we have that
\begin{align} \label{eq:pot_ativas}
P(&A'_k\leq a_k+1)\nonumber\\
&=P(\sum_{j=1}^k X_j \leq k+a_k)\nonumber\\
&\leq P(\sum_{j=1}^k Y_j \leq k+a_k)\nonumber\\
&\leq P(\sum_{j=1}^k {\mathbbm{1}}_{(Y_j=2)}\leq \frac{k+a_k}{2})\nonumber\\
&=P(\sum_{j=1}^k {\mathbbm{1}}_{(Y_j=2)} \leq kp-\frac{pkk_+}{n}-k(p-\frac{1}{2})+\frac{pkk_+}{n}+\frac{a_k}{2})\\
&=P(\sum_{j=1}^k {\mathbbm{1}}_{(Y_j=2)} \leq E(\sum_{j=1}^k {\mathbbm{1}}_{(Y_j=2)})-k[\frac{2p-1}{8p+4}])\nonumber\\
&\leq \exp(-\frac{k^2[\frac{2p-1}{8p+4}]^2}{2kp(1-\frac{k_+}{n})})\nonumber\\
& \stackrel{(*)}{\leq}\exp(-\frac{k_-[\frac{2p-1}{8p+4}]^2(1+2p)}{4p})\nonumber\\
&=\exp(-2 \log n)=o(n^{-1}),\nonumber
\end{align}
where for two sequences of functions $f_1,f_2,...$ and $g_1,g_2,...$, we write $f_n=o(g_n)$ if $\lim_{n \to \infty}\frac{f_n}{g_n}=0$.
The Chernoff bound applies since $\frac{2p-1}{8p+4}>0$ for $p>1/2$.
Observe that we can replace $k$ by $k_-$ in (*) because $k_-\leq k$ and $\exp(-x)$ is decreasing in $x$.
Now, consider the set
\[A:=\bigcup_{k\in \mathbb{N}\cap[k_-,k_+]} (A'_k\leq a_k+1 ).\]
In words, $A$ is the event that for some ${k\in \mathbb{N}\cap[k_-,k_+]}$ there are at most $a_k+1$ potentially active particles. By the subadditivity of the probability measure and (\ref{eq:pot_ativas}), we have
\[P(A)\leq \sum_{k\in \mathbb{N}\cap[k_-,k_+]} P(A'_k\leq a_k+1)\leq k_+ o(n^{-1})=o(1).\]
Then
$$P(A^c)=P(\cap_{k\in \mathbb{N}\cap [k_-,k_+]}(A'_k>a_k+1)) \stackrel{n\to \infty}{\rightarrow} 1.$$
As $R:=\inf\{k:A'_k=0 \},$
$$\lim_{n \to \infty}P(R \in [k_-,k_+])=0.$$
\noindent
\textit{Proof of (b).} From the definition of $R$, we know that $X_j=0$ for some $j\in \{ 1,2,...,R\}$. Therefore, on the event $R<k_-$, from (\ref{def:pot_visitados}), (\ref{def:visitados}) and (\ref{def:visitados_total}), we obtain
$$V_\infty=V_R=V'_R=1+\sum_{j=1}^R {\mathbbm{1}}_{(X_j=2)}\leq 1+ R -1<k_-. $$
So, we conclude that
\[\lim_{n \to \infty}P(V_\infty\leq k_-|R < k_-)=1.\]
\vspace{3mm}
Furthermore, for $\lfloor k_+ \rfloor=\max\{x\in \mathbb{Z}:x \leq k_+\}$, the probability of having $\frac{k_++a_{k_+}}{2}=n(\frac{12p^2-4p-1}{4(1+2p)^2})$ or fewer potentially visited vertices satisfies
\begin{equation}
\begin{aligned} \label{eq:vert_k+}
P(V'_{\lfloor k_+ \rfloor}\leq \frac{k_++a_{k_+}}{2})
&=P(\sum_{k=1}^{\lfloor k_+ \rfloor} {\mathbbm{1}}_{(X_k=2)}+1 \leq \frac{k_++a_{k_+}}{2})\\
&=P(\sum_{k=1}^{\lfloor k_+ \rfloor} {\mathbbm{1}}_{(X_k=2)} \leq \frac{(k_+-1)+(a_{k_+}-1)}{2})\\
&\stackrel{(*)}{\leq} P(\sum_{k=1}^{\lfloor k_+ \rfloor} {\mathbbm{1}}_{(X_k=2)} \leq \frac{\lfloor k_+ \rfloor +a_{\lfloor k_+ \rfloor}}{2} )\\
&\leq P(\sum_{k=1}^{\lfloor k_+ \rfloor} {\mathbbm{1}}_{(Y_k=2)} \leq \frac{\lfloor k_+ \rfloor +a_{\lfloor k_+ \rfloor}}{2} ) =o(1).
\end{aligned}
\end{equation}
The inequality (*) holds since $\lfloor k_+ \rfloor \geq k_+-1$ and $a_k=ck$ for some constant $c\in(0,1)$; therefore $a_{\lfloor k_+ \rfloor}=c \lfloor k_+ \rfloor \geq c(k_+ -1)=a_{k_+}-c \geq a_{k_+}-1$.
Moreover, (\ref{eq:vert_k+}) is $o(1)$ because $\lfloor k_+ \rfloor \in \mathbb{N}\cap[k_-,k_+]$, so this term already appears in (\ref{eq:pot_ativas}).
Therefore
\[P(V'_{\lfloor k_+ \rfloor}\leq \frac{k_+ +a_{k_+}}{2}|R > k_+)P(R > k_+)+P(V'_{\lfloor k_+ \rfloor}\leq \frac{k_+ +a_{k_+}}{2}|R \leq k_+)P(R \leq k_+)\stackrel{n \to \infty}{\rightarrow} 0.\]
So, if $\lim_{n \to \infty} P(R > k_+)>0$ (see part (c)), it holds that $\lim_{n \to \infty}P(V'_{\lfloor k_+ \rfloor}>\frac{k_+ +a_{k_+}}{2}|R > k_+)=1$. In addition, by (\ref{def:pot_visitados}), (\ref{def:visitados}) and (\ref{def:visitados_total}), when $ R > k_+ \geq \lfloor k_+ \rfloor$ it is true that $V_\infty=V_R\geq V_{\lfloor k_+ \rfloor}=V'_{\lfloor k_+ \rfloor}$.
From this, we conclude that
\[\lim_{n \to \infty}P(V_\infty\geq \frac{k_+ +a_{k_+}}{2}|R > k_+)=1.\]
Recall that $k_+=c_1 n$ for a constant $c_1\in(0,1)$ and $a_{k_+}=c_2 k_+=c_1 c_2 n$ for a constant $c_2\in(0,1)$; hence $\frac{k_+ +a_{k_+}}{2}=c' n$ for some $c'\in (0,1)$, which concludes the proof of (b).
\vspace{3mm}
\noindent
\textit{Proof of (c).}
Let us return to the random variable $X^+$ introduced in Subsection~\ref{sec:subcritica}, whose distribution is
\begin{equation}\label{eq:dist_x+}
P(X^+=x)=
\begin{cases}
1-p, & \text{if $x=0$,}\\
p, & \text{if $x=2$.}\\
\end{cases}
\end{equation}
Again, we consider a branching process with offspring distribution given by $X^+$, where the number of descendants of the individuals is evaluated in rounds, and return to definitions (\ref{def:pot_ativas_y}) and (\ref{def:r_y}), rewritten below:
\[
A'^{+}_k:=1+\sum_{j=1}^k (X_j^+-1)
\]
representing the number of potentially active particles of this branching process at the end of the $k$th round and
\[
R^{+}:=\inf\{k \in \mathbb{N}:A'^{+}_k=0\}
\]
representing the round at which the branching process ends. Here it is necessary to set $R^+:=+\infty$ if $\{k \in \mathbb{N}:A'^{+}_k=0 \}=\emptyset$, since for $p>1/2$ the process survives forever with positive probability. Furthermore, we define
\begin{equation}\label{def:ativas+}
A^{+}_k:=A'^+_k{\mathbbm{1}}_{(k<R^+)}
\end{equation}
representing the number of effectively active particles of this branching process at the end of the $k$th round.
Let us denote the event that the branching process with offspring distribution $X^+$ becomes extinct, but only after round $k_-$, by
\[B^+_{k_-}:=\{k_-<R^+< +\infty\}.\]
We define one last branching process, with the same rounds scheme, with offspring distribution $X^-$, where
\begin{equation}\label{eq:dist_x-}
P(X^-=x)=
\begin{cases}
1-p & \text{if $x=0$,}\\
\frac{pk_-}{n} & \text{if $x=1$,}\\
\frac{p(n-k_-)}{n} & \text{if $x=2$.}
\end{cases}\end{equation}
Similarly, let $X^-_1,X^-_2,...$ be independent and identically distributed random variables with the same distribution as $X^-$, and define \[A'^{-}_k:=1 +\sum_{j=1}^k (X_j^- -1)\] as the number of potentially alive individuals at the end of the $k$th round.
Furthermore, define
\[
R^{-}:=\inf\{k \in \mathbb{N}:A'^{-}_k=0\}
\]
as the round at which the branching process with offspring distribution $X^-$ ends, with the convention $R^-:=+\infty$ if $\{k \in \mathbb{N}:A'^{-}_k=0 \}=\emptyset$.
We have that $k \leq k_-$ implies $V'_{k-1}=1+\sum_{j=1}^{k-1} {\mathbbm{1}}_{(X_j=2)} \leq k_-$. Therefore, comparing (\ref{eq:dist_x+}) and (\ref{eq:dist_x-}) with (\ref{eq:dist_x}), we conclude that $\sum_{j=1}^k X^-_j \preceq \sum_{j=1}^k X_j \preceq \sum_{j=1}^k X^+_j$, and hence $A'^{-}_k \preceq A'_k\preceq A'^{+}_k$ for all $k\in \mathbb{N}\cap [1,k_-]$. It is therefore possible to couple the three processes so that $R^{+}\leq k_- \Rightarrow R\leq k_-$ and $R\leq k_- \Rightarrow R^-\leq k_-$.
So, it is true that
\begin{equation} \label{eq:confronto}
P(R^+<\infty) -P(B^+_{k_-})=P(R^{+}\leq k_-)\leq P(R \leq k_-) \leq P(R^-\leq k_-) \leq P(R^-<\infty).
\end{equation}
By properties of branching processes, $P(R^+<\infty)$ is the smallest positive solution of
\[P(R^+<\infty)=1-p+P(R^+<\infty)^2p.
\]
Solving the quadratic equation above, we obtain \[P(R^+<\infty)=\frac{1-p}{p}.\]
It also holds that $P(R^-<\infty)$ is the smallest positive solution of
\[P(R^-<\infty)=1-p+P(R^-<\infty)\frac{pk_-}{n}+P(R^-<\infty)^2\frac{p(n-k_-)}{n}.
\]
Solving the quadratic equation above, we obtain \[P(R^-<\infty)=\frac{n(1-p)}{pn-pk_-}\stackrel{n \to \infty}{\rightarrow} \frac{1-p}{p}.\]
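Both quadratic equations can be verified mechanically. A quick check (ours, not part of the proof) that the stated expressions are indeed fixed points lying in $(0,1)$ for $p>1/2$:

```python
# Verify, for sample parameter values, that (1-p)/p solves
# q = 1 - p + p q^2, and that n(1-p)/(p n - p k_-) solves
# q = 1 - p + q p k_-/n + q^2 p (n - k_-)/n, both in (0, 1) for p > 1/2.

def check_fixed_points(p, n, k_minus):
    q_plus = (1 - p) / p
    assert abs(q_plus - (1 - p + p * q_plus**2)) < 1e-12

    q_minus = n * (1 - p) / (p * n - p * k_minus)
    rhs = (1 - p) + q_minus * p * k_minus / n + q_minus**2 * p * (n - k_minus) / n
    assert abs(q_minus - rhs) < 1e-9
    assert 0 < q_plus < 1 and 0 < q_minus < 1

check_fixed_points(p=0.75, n=10_000, k_minus=100.0)
```

(In each case the other root of the quadratic is $q=1$, so the displayed value is the smallest positive solution.)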
Denote $\lceil k_- \rceil=\min\{x\in \mathbb{Z}:x\geq k_-\}$. From (\ref{eq:pot_ativas}), it holds that $\lim_{n \to \infty}P(A'_{ \lceil k_- \rceil}>a_{\lceil k_- \rceil}+1)=1$ and therefore $\lim_{n \to \infty}P(A'^{+}_{ \lceil k_- \rceil}>a_{\lceil k_- \rceil}+1)=1$. Then, if $\lim_{n \to \infty} P(R^{+}> k_-)>0$ (which is true because $E(X^+)>1$), we have that $\lim_{n \to \infty}P(A'^{+}_{\lceil k_- \rceil}>a_{\lceil k_- \rceil}+1|R^{+}> k_-)=1$. Besides that, when $R^{+}> k_-$, it holds that $R^{+}\geq {\lceil k_- \rceil}$ and from (\ref{def:ativas+}), we have that
$A^{+}_{\lceil k_- \rceil}=A'^{+}_{\lceil k_- \rceil}$. Then, we conclude
\[\begin{aligned}
P(B_{k_-}^+)&= P(R^+>k_-,R^+<\infty)\\
&=P(R^+>k_-,R^+<\infty,A^{+}_{\lceil k_- \rceil}>a_{\lceil k_- \rceil}+1)+P(R^+>k_-,R^+<\infty,A^{+}_{\lceil k_- \rceil}\leq a_{\lceil k_- \rceil}+1) \\
&\leq \frac{P(R^+>k_-,R^+<\infty,A^{+}_{\lceil k_- \rceil}>a_{\lceil k_- \rceil}+1)}{P(R^+>k_-,A^{+}_{\lceil k_- \rceil}>a_{\lceil k_- \rceil}+1)}+\frac{P(R^+>k_-,A^{+}_{\lceil k_- \rceil}\leq a_{\lceil k_- \rceil}+1)}{P(R^+>k_-)}\\
& = P(R^+<\infty|R^+>k_-,A^{+}_{\lceil k_- \rceil}>a_{\lceil k_- \rceil}+1)+P(A^{+}_{\lceil k_- \rceil}\leq a_{\lceil k_- \rceil}+1|R^+>k_-) \\
&\leq (\frac{1-p}{p})^{a_{\lceil k_- \rceil}+1}+P(A^{+}_{\lceil k_- \rceil}\leq a_{\lceil k_- \rceil}+1|R^+>k_-) \stackrel{n \to \infty}{\rightarrow}0
\end{aligned}
\]
where the last inequality follows since, on the event considered, extinction requires the extinction of at least $a_{\lceil k_- \rceil}+1$ independent branching processes, each with extinction probability $\frac{1-p}{p}$.
Applying all the limits found above to (\ref{eq:confronto}), we conclude that
\[\lim_{n \to \infty} P(R<k_-)=\frac{1-p}{p}.\]
\end{proof}
\bibliographystyle{alpha}
| {
"timestamp": "2022-10-12T02:01:48",
"yymm": "2210",
"arxiv_id": "2210.04968",
"language": "en",
"url": "https://arxiv.org/abs/2210.04968",
"abstract": "The frog model is a system of interacting random walks. Initially, there is one particle at each vertex of a connected graph $\\mathcal{G}$. All particles are inactive at time zero, except for the one which is placed at the root of $\\mathcal{G}$, which is active. At each instant of time, each active particle may die with probability $1-p$. Once an active particle survives, it jumps on one of its nearest vertices, chosen with uniform probability, performing a discrete time simple symmetric random walk (SRW) on $\\mathcal{G}$. Up to the time it dies, it activates all inactive particles it hits along its way. From the moment they are activated on, every such particle starts to walk, performing exactly the same dynamics, independent of everything else. In this paper, we take $\\mathcal{G}$ as the $n-$complete graph ($\\mathcal{K}_n$, a finite graph with each pair of vertices linked by an edge). We study the limit in $n$ of the coverage ratio, that is, the proportion of visited vertices by some active particle up to the end of the process, after all active particles have died.",
"subjects": "Probability (math.PR)",
"title": "The coverage ratio of the frog model on complete graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105321470076,
"lm_q2_score": 0.72487026428967,
"lm_q1q2_score": 0.7076984734661897
} |
https://arxiv.org/abs/2209.12628 | First Betti number and collapse | We show that when a sequence of Riemannian manifolds collapses under a lower Ricci curvature bound, the first Betti number cannot drop more than the dimension. | \section{Introduction}
For $n \in \mathbb{N}$, $ c \in \mathbb{R}$, $ D > 0 $, let $\mathfrak{M}_{Ric}(n,c,D)$ (resp. $\mathfrak{M}_{sec}(n,c,D)$) denote the class of closed $n$-dimensional Riemannian manifolds of Ricci curvature $\geq c$ (resp. sectional curvature $\geq c$) and diameter $\leq D$. A significant part of the subject consists of understanding the relationship between sequences $X_i \in \mathfrak{M}_{Ric}(n,c,D)$ and their Gromov--Hausdorff limits. Our main result concerns the first Betti number of such limit spaces.
\begin{thm}\label{main}
\rm Let $X_i \in \mathfrak{M}_{Ric}(n,c,D)$ be a sequence with $\beta_1(X_i) \geq r$ for each $i$. If $X_i$ converges in the Gromov--Hausdorff sense to a space $X$ containing a $k$-regular point, then
\[ \beta_1(X) \geq r + k - n. \]
\end{thm}
It is known that if a Riemannian manifold $M$ of almost non-negative Ricci curvature has first Betti number equal to its dimension, then $M$ is homeomorphic to a torus. This result has recently been extended to singular spaces by Mondello, Mondino, and Perales \cite{mondello-mondino-perales}. A consequence of their work and Theorem \ref{main} is the following.
\begin{cor}\label{tori}
\rm For each $n \in \mathbb{N}$, there is $\varepsilon > 0 $ such that if $X_i \in \mathfrak{M}_{sec}(n,-\varepsilon ,1)$ is a sequence of spaces with $\beta_1(X_i )\geq n$ that converges in the Gromov--Hausdorff sense to a space $X$ of Hausdorff dimension $k$, then $X$ is bi-H\"{o}lder homeomorphic to a flat $k$-dimensional torus.
\end{cor}
\begin{rem}
\rm Theorem \ref{main} shows that the first Betti number cannot drop more than the dimension. In contrast, the fundamental group can decrease in the limit even when there is no collapse: Otsu has constructed a sequence of metrics on $\mathbb{S}^3 \times \mathbb{R}P^2$ of positive Ricci curvature that converges in the Gromov--Hausdorff sense to a simply connected 5-dimensional space \cite{otsu}.
\end{rem}
Theorem \ref{main} is an improvement of the main result of \cite{Z}. The ultimate goal of this program is to solve the following problem.
\begin{qu}\label{tori-conjecture}
\rm Assume a sequence $X_i \in \mathfrak{M}_{Ric}(n,c,D ) $ of spaces homeomorphic to the $n$-dimensional torus converges in the Gromov--Hausdorff sense to a space $X$. Is $X$ necessarily homeomorphic to a torus?
\end{qu}
The author would like to thank Anton Petrunin for suggesting Question \ref{tori-conjecture} and motivating the author to work on this problem, Raquel Perales and Guofang Wei for interesting discussions and helpful comments, and the Max Planck Institute for Mathematics for its financial support and hospitality.
\section{Preliminaries}
In this section we recall the required material for Theorem \ref{main} and Corollary \ref{tori}, which we prove in the following section.
\subsection{Gromov--Hausdorff topology}
The basics on the subject can be found in (\cite{BBI}, Chapter 7).
\begin{defn}
\rm We say that a function $f: X \to Y$ between metric spaces is an $\varepsilon$-isometry if for all $x_1, x_2 \in X$ one has $ \vert d^X(x_1,x_2) - d^Y(fx_1, fx_2) \vert \leq \varepsilon $, and $f(X)$ intersects each closed ball of radius $\varepsilon$ in $Y$. We say that a sequence of functions $f_i : X_i \to Y_i$ between metric spaces are \textit{Gromov--Hausdorff approximations} if $f_i$ is an $\varepsilon_i $-isometry for some sequence $\varepsilon_i \to 0$.
\end{defn}
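For finite metric spaces, both conditions in this definition can be checked directly. A small illustrative sketch (ours, with made-up toy data, not part of the paper):

```python
from itertools import product

def is_eps_isometry(X, Y, dX, dY, f, eps):
    """Check that f: X -> Y distorts distances by at most eps and that
    f(X) meets every closed eps-ball of Y (i.e. f(X) is an eps-net of Y)."""
    distortion_ok = all(
        abs(dX(x1, x2) - dY(f(x1), f(x2))) <= eps
        for x1, x2 in product(X, repeat=2)
    )
    net_ok = all(min(dY(f(x), y) for x in X) <= eps for y in Y)
    return distortion_ok and net_ok

# Toy example on subsets of the real line with the usual metric.
d = lambda a, b: abs(a - b)
X = [0.0, 1.0, 2.0]
Y = [0.0, 0.5, 1.0, 1.5, 2.0]
assert is_eps_isometry(X, Y, d, d, lambda x: x, eps=0.5)
assert not is_eps_isometry(X, Y, d, d, lambda x: x, eps=0.4)
```

Here the inclusion map is a $1/2$-isometry because every point of $Y$ lies within $1/2$ of $\{0,1,2\}$, but not a $2/5$-isometry.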
\begin{prop}\label{GH-convergence}
\rm (Gromov) Let $X_i$ be a sequence of compact metric spaces, and let $X$ be a complete metric space. Then the following are equivalent:
\begin{itemize}
\item There is a sequence $f_i : X_i \to X$ of Gromov--Hausdorff approximations.
\item There is a sequence $h_i : X \to X_i$ of Gromov--Hausdorff approximations.
\end{itemize}
In either case, $X$ is compact and one says that the sequence $X_i$ \textit{converges to $X$ in the Gromov--Hausdorff sense}. Furthermore, there is a metric on the class of compact metric spaces modulo isometry that yields this topology.
\end{prop}
\begin{defn}
\rm We say that a function $f: (X,x) \to (Y,y)$ between pointed metric spaces is an $\varepsilon$-isometry if $fx=y$, for all $x_1, x_2 \in B^X(x, 2 / \varepsilon )$ one has $ \vert d^X(x_1,x_2) - d^Y(fx_1, fx_2) \vert \leq \varepsilon $, and $f(B^X(x, 2 / \varepsilon ) )$ intersects each closed ball of radius $\varepsilon$ in $B^Y(y, 1 / \varepsilon )$. We say that a sequence of functions $f_i : ( X_i ,x_i) \to (Y_i, y_i ) $ between pointed metric spaces are \textit{pointed Gromov--Hausdorff approximations} if $f_i$ is a pointed $\varepsilon_i $-isometry for some sequence $\varepsilon_i \to 0$.
\end{defn}
\begin{prop}\label{pointed-GH-convergence}
\rm (Gromov) Let $(X_i,x_i)$ be a sequence of proper pointed metric spaces, and let $(X,x)$ be a complete pointed metric space. Then the following are equivalent:
\begin{itemize}
\item There is a sequence $f_i : (X_i ,x_i) \to (X,x)$ of pointed Gromov--Hausdorff approximations.
\item There is a sequence $h_i : (X,x) \to (X_i,x_i)$ of pointed Gromov--Hausdorff approximations.
\end{itemize}
In either case, $X$ is proper and one says that the sequence $(X_i,x_i)$ \textit{converges to $(X,x)$ in the pointed Gromov--Hausdorff sense}. Furthermore, there is a metric on the class of proper pointed metric spaces modulo isometry that yields this topology.
\end{prop}
For $n \in \mathbb{N}$, $c \in \mathbb{R}$, we denote by $\mathfrak{M}_{Ric} (n,c)$ the class of complete $n$-dimensional Riemannian manifolds of Ricci curvature $\geq c$. One reason we know so much about these families of spaces is that they are pre-compact with respect to the Gromov--Hausdorff topology.
\begin{thm}\label{compactness}
\rm (Gromov) Let $(Y_i,y_i)$ be a sequence with $Y_i \in \mathfrak{M}_{Ric}(n,c) $ for each $i$. Then one can find a subsequence that converges in the pointed Gromov--Hausdorff sense to some proper metric space $(Y,y)$.
\end{thm}
\subsection{Equivariant Gromov--Hausdorff convergence}
There is a well-studied notion of convergence of group actions in this setting. For a proper metric space $X$, the topology that we use on its group of isometries $Iso(X)$ is the compact-open topology, which in this setting coincides with both the topology of pointwise convergence and the topology of uniform convergence on compact sets. This topology makes $Iso(X)$ a locally compact second countable metrizable group.
\begin{defn}
\rm Let $(Y_i,q_i) $ be a sequence of proper metric spaces that converges in the pointed Gromov--Hausdorff sense to a proper space $(Y,q)$. Consider pointed Gromov--Hausdorff approximations $f_i : (Y_i,q_i ) \to (Y,q) $ and $h_i: (Y,q) \to (Y_i,q_i)$ such that $d^Y(f_i \circ h_i(y),y)\to 0$ for all $y \in Y$. Also let $\Gamma_i \leq Iso(Y_i)$ be a sequence of groups of isometries. We say that $\Gamma_i$ \textit{converges in the equivariant Gromov--Hausdorff sense to} a closed group $\Gamma \leq Iso (Y)$ if for all $R, \varepsilon > 0 $, one has the following:
\begin{itemize}
\item For each $g \in \Gamma$, there is $ i_0 \in \mathbb{N}$ such that for each $i \geq i_0$ there is $g_i \in \Gamma_i$ with $d^Y ( f_i \circ g_i \circ h_i (y), g(y)) \leq \varepsilon $ for all $y \in B^{Y}(q,R )$.
\item There is $i_0\in \mathbb{N}$ such that if $i \geq i_0$, $g \in \Gamma_i$ with $d^Y(gq_i,q_i)\leq R$, then there is $\gamma \in \Gamma $ such that $d ^Y( f_i \circ g \circ h_i (y), \gamma (y)) \leq \varepsilon $ for all $y \in B^{Y}(q,10R)$.
\end{itemize}
Although this definition clearly depends on $f_i$ and $h_i$, we usually omit this when we state that $\Gamma_i$ converges to $\Gamma$.
\end{defn}
This definition of equivariant convergence allows one to take limits before or after taking quotients.
\begin{lem}\label{equivariant}
\rm Let $(Y_i,q_i)$ be a sequence of proper metric spaces that converges in the pointed Gromov--Hausdorff sense to a proper space $(Y,q)$, and $\Gamma_i \leq Iso(Y_i)$ a sequence of isometry groups that converges in the equivariant Gromov--Hausdorff sense to a closed group $\Gamma \leq Iso (Y)$. Then the sequence $(Y_i/\Gamma_i, [q_i])$ converges in the pointed Gromov--Hausdorff sense to $(Y/\Gamma , [q])$.
\end{lem}
Since the isometry groups of proper metric spaces are locally compact, one has an Arzel\`a--Ascoli type result (\cite{fukaya-yamaguchi}, Proposition 3.6).
\begin{thm}\label{equivariant-compactness}
\rm (Fukaya--Yamaguchi) Let $(Y_i,q_i) $ be a sequence of proper metric spaces that converges in the pointed Gromov--Hausdorff sense to a proper space $(Y,q)$, and take a sequence $\Gamma_i \leq Iso(Y_i)$ of groups of isometries. Then there is a subsequence $(Y_{i_k}, q_{i_k}, \Gamma_{i_k})_{k \in \mathbb{N}}$ such that $\Gamma_{i_k}$ converges in the equivariant Gromov--Hausdorff sense to a closed group $\Gamma \leq Iso(Y)$.
\end{thm}
In \cite{gromov-afm}, Gromov studied the structure of discrete groups that act transitively on spaces that look like $\mathbb{R}^n$. Using the Malcev embedding theorem, he showed that they look essentially like lattices in nilpotent Lie groups. In \cite{breuillard-green-tao}, Breuillard--Green--Tao studied more generally the structure of discrete groups a large portion of which acts on a space of controlled doubling. It turns out that the answer is still essentially lattices in nilpotent Lie groups. In (\cite{zamora-lahs}, Sections 7--9), the ideas from \cite{gromov-afm} and \cite{breuillard-green-tao} are used to obtain the following structure result.
\begin{thm}\label{zamora}
\rm Let $(Z,p)$ be a proper pointed geodesic space of topological dimension $\ell \in \mathbb{N}$ and let $(D_i , p_i )$ be a sequence of discrete metric spaces converging in the pointed Gromov--Hausdorff sense to $(Z,p)$. Assume there is a sequence of isometry groups $\Gamma_i \leq Iso (D_i)$ that act transitively and for each $i$, $\Gamma_i$ is generated by its elements that move $p_i$ at most $10$. Then for large enough $i$, there are finite index subgroups $G_i \leq \Gamma_i$ and finite normal subgroups $F_i \triangleleft G_i $ such that $G_i / F_i $ is isomorphic to a quotient of a lattice in a nilpotent Lie group of dimension $\ell$. In particular, if the groups $\Gamma_i$ are abelian, for large enough $i$ their rank is at most $\ell$.
\end{thm}
For $k \in \mathbb{N}$ and a proper metric space $X$, we say that $x \in X$ is a $k$\textit{-regular point} if for any sequence $\lambda_i \to \infty$, the sequence $(\lambda_i X, x )$ converges in the pointed Gromov--Hausdorff sense to $\mathbb{R}^k$. For limits of sequences in $\mathfrak{M}_{Ric}(n,c)$, almost all points are regular \cite{cheeger-colding}.
\begin{thm}\label{CC}
\rm (Cheeger--Colding) Let $ X_ i \in \mathfrak{M}_{Ric}(n,c) $ converge in the pointed Gromov--Hausdorff sense to a space $X$. If $\mathcal{R}_k$ denotes the set of $k$-regular points of $X$, then $\mathcal{R}_k \neq \emptyset$ implies $k \leq n$, and $\cup_{k=0}^n \mathcal{R}_k$ is dense in $X$.
\end{thm}
Arguably the most used tool in the theory of Riemannian manifolds of non-negative Ricci curvature is the Cheeger--Gromoll splitting theorem. It was later generalized by Cheeger and Colding to limits of Riemannian manifolds \cite{cheeger-colding-splitting}. Using this, one could understand how $\mathbb{R}^{k}$ arises as a quotient of such spaces.
\begin{thm} \label{cc-split}
\rm (Cheeger--Colding) Let $\varepsilon_i \to 0$ and $(Y_i, q_i) \in \mathfrak{M}_{Ric}(n,- \varepsilon_i) $ be a sequence that converges in the pointed Gromov--Hausdorff sense to $(Y,q)$. If $Y$ contains an isometric copy of $\mathbb{R}^k$, then $Y$ splits as a metric space as $\mathbb{R}^k \times Z$ for some proper geodesic space $Z$ of Hausdorff dimension $\leq n-k$.
\end{thm}
\begin{cor}\label{splitting}
\rm Let $\varepsilon_i \to 0$ and $(Y_i, q_i) \in \mathfrak{M}_{Ric}(n,- \varepsilon_i) $ be a sequence that converges in the pointed Gromov--Hausdorff sense to $(Y,q)$. Assume there is a sequence of groups of isometries $\Gamma_i \leq Iso (Y_i)$ such that $(Y_i/\Gamma_i , [q_i])$ converges in the pointed Gromov--Hausdorff sense to $\mathbb{R}^{k}$ and $\Gamma_i$ converges in the equivariant Gromov--Hausdorff sense to a group $\Gamma \leq Iso (Y)$. Then $Y$ splits as a metric space as $\mathbb{R}^{k}\times Z$ for some proper geodesic space $Z$ of Hausdorff dimension $\leq n- k$, and the $Z$-fibers given by this product coincide with the orbits of $\Gamma$.
\end{cor}
\begin{proof}
One can use the submetry $\phi: Y \to Y/\Gamma = \mathbb{R}^k$ to lift the lines of $\mathbb{R}^k$ to lines in $Y$ passing through $q$. By Theorem \ref{cc-split}, we get the desired splitting $Y = \mathbb{R}^k \times Z$ with $\phi (z_0, x)=x $ for all $x \in \mathbb{R}^k$ and some $z_0 \in Z$.
Let $g \in \Gamma$ and assume $g(z_0,x ) = (z,y)$ for some $z_0,z \in Z$, $x,y \in \mathbb{R}^k$. Then for all $t \geq 1$, one has
\begin{eqnarray*}
t \vert y-x \vert & = & \vert \phi (z_0, x+ t(y-x)) - \phi (z_0,x) \vert \\
& = & \vert \phi (z_0, x+ t(y-x))- \phi (z,y) \vert \\
& \leq & d^Y ( (z_0, x + t (y-x)) , (z,y) )\\
& = & \sqrt{ d^Z(z_0,z)^2 + \vert (t-1) (y-x) \vert ^2 }.
\end{eqnarray*}
As $t \to \infty$, this is only possible if $x=y$, and we conclude that the action of $\Gamma$ respects the splitting $Y = \mathbb{R}^k \times Z$.
\end{proof}
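As a quick numerical illustration (ours, not part of the paper) of the final step: writing $c=\vert y-x\vert$, the chain of (in)equalities above bounds $tc$ by $\sqrt{d^Z(z_0,z)^2+(t-1)^2c^2}$, which must fail for large $t$ unless $c=0$. The helper name \texttt{rhs} below is our own.

```python
import math

def rhs(t, d_z, c):
    # sqrt(d^Z(z0, z)^2 + |(t - 1)(y - x)|^2), writing c = |y - x|
    return math.sqrt(d_z ** 2 + ((t - 1) * c) ** 2)

d_z = 5.0  # some positive distance in the Z factor
# If x != y (c > 0), the bound t*c <= rhs(t, d_z, c) fails for large t:
c = 1.0
violated = [t for t in range(1, 100) if t * c > rhs(t, d_z, c)]
assert violated and min(violated) > 1   # the bound fails eventually, not immediately
# If x = y (c = 0), the bound 0 <= rhs(t, d_z, 0) holds for every t:
assert all(0.0 <= rhs(t, d_z, 0.0) for t in range(1, 100))
```

With $d_z=5$ and $c=1$ the bound first fails at $t=14$, in line with the asymptotics $\sqrt{d_z^2+(t-1)^2c^2}\approx (t-1)c$ for large $t$.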
\subsection{Homology and Ricci curvature bounds}
We define the \textit{content} of a map $A \to X$ between topological spaces to be the image of the natural map $H_1(A) \to H_1(X)$. If $\mathcal{U}$ is a family of subsets of $X$, we denote by $H_1(\mathcal{U} \prec X ) \leq H_1(X)$ the subgroup generated by the contents of the inclusions $U \to X$ with $U \in \mathcal{U}$. This group satisfies a natural monotonicity property.
\begin{lem}\label{refinement}
\rm Let $X$ be a topological space, and $\mathcal{U}$, $\mathcal{V}$ two families of subsets of $X$. If for each $U \in \mathcal{U}$ there is $V \in \mathcal{V}$ with $U \subset V$, then $H_1(\mathcal{U}\prec X) \leq H_1(\mathcal{V} \prec X)$.
\end{lem}
If $\varepsilon > 0 $, $X$ is a metric space, and $\mathcal{U}$ is the family of balls of radius $\varepsilon$ in $X$, then we denote $H_1(\mathcal{U}\prec X)$ simply by $H_1^{\varepsilon}( X)$. It has been recently shown that limits of sequences in $\mathfrak{M}_{Ric}(n,c,D)$ are semi-locally-simply-connected \cite{wang}.
\begin{thm}\label{slsc}
\rm (Pan--Wang) Let $X_i \in \mathfrak{M}_{Ric}(n,c,D)$ converge in the Gromov--Hausdorff sense to a space $X$. Then $X$ is semi-locally-simply-connected. In particular, $H^{\varepsilon}_1(X)$ is trivial for small enough $\varepsilon$.
\end{thm}
\begin{thm}\label{SW}
\rm (Sormani--Wei) Let $X$ be a compact geodesic space. Assume there is $\varepsilon > 0 $ such that $H_1^{2\varepsilon}( X)$ is trivial, and let $Y$ be a compact geodesic space with $f:Y \to X$ an $\varepsilon /100 $-approximation. Then there is a surjective morphism $ H_1 (Y) \to H_1 (X)$ (independent of $\varepsilon$) whose kernel is precisely $H_1^{\varepsilon}(Y)$.
\end{thm}
\begin{proof}[Proof sketch:]
We follow the lines of (\cite{SW}, Theorem 2.1), where this result is proved for $\pi_1$ instead of $H_1$. Each 1-cycle in $Y$ can be thought of as a family of loops $\mathbb{S}^1 \to Y$ with integer multiplicities. For each map $ \gamma : \mathbb{S}^1 \to Y$, by uniform continuity one can pick finitely many cyclically ordered points $\{ z_1, \ldots , z_m \} \subset \mathbb{S}^1$ such that $\gamma ([z_{j-1}, z_j])$ is contained in a ball of radius $\varepsilon /10$ for each $j$. Then set $\phi ( \gamma ) : \mathbb{S}^1 \to X $ to be the loop with $\phi (\gamma )(z_j) = f( \gamma (z_j) )$ for each $j$, and $\phi (\gamma ) \vert _{[z_{j-1}, z_j]}$ a minimizing geodesic from $\phi (\gamma )(z_{j-1})$ to $\phi (\gamma )(z_{j})$.
Clearly, $\phi (\gamma )$ depends on the choice of the points $z_j$ and the minimizing paths $\phi (\gamma ) \vert _{[z_{j-1}, z_j]}$. However, the homology class of $\phi (\gamma )$ in $H_1(X)$ does not depend on these choices, since different choices yield curves that are $\varepsilon$-uniformly close, which by hypothesis are homologous.
Assume that a 1-cycle $c$ in $Y$ is the boundary $\partial \sigma$ of a 2-chain $\sigma$. After taking iterated barycentric subdivision, one could assume that each simplex of $\sigma$ is contained in a ball of radius $\varepsilon /10$. By recreating $\sigma $ in $X$ via $f$ simplex by simplex, one could find a 2-chain whose boundary is $\phi (c)$. This means that $\phi$ induces a map $ \tilde{\phi}:H_1(Y) \to H_1(X)$.
In a similar fashion, if a 1-cycle $c$ in $Y$ is such that $\phi (c) $ is the boundary of a 2-chain $\sigma$, one could again apply iterated barycentric subdivision to obtain a 2-chain $\sigma^{\prime}$ in $X$ whose boundary is $\phi (c)$ and such that each simplex is contained in a ball of radius $\varepsilon /10$. Using $f$ one could recreate the 1-skeleton of $\sigma ^{\prime}$ in $Y$ in such a way that expresses $c$ as a linear combination with integer coefficients of 1-cycles contained in balls of radius $\varepsilon$ in $Y$. This implies that the kernel of $\tilde{\phi} $ is contained in $H_1^{\varepsilon}(Y)$.
If a 1-cycle $c$ in $Y$ is contained in a ball of radius $\varepsilon$, then $\phi (c)$ is contained in a ball of radius $2 \varepsilon$ and then by hypothesis, $\phi (c)$ is a boundary. This shows that the kernel of $\tilde{\phi}$ is precisely $H_1^{\varepsilon}(Y)$.
Lastly, for any loop $\gamma : \mathbb{S}^1 \to X$, one can create via $f$ a loop $\gamma_1 : \mathbb{S}^1 \to Y$ such that $\phi ( \gamma_1)$ is uniformly close (and hence homologous) to $\gamma$, so $\tilde{\phi}$ is surjective.
\end{proof}
\begin{cor}\label{SW-gap}
\rm Let $X$ be a compact geodesic space. Assume there is $\rho > 0 $ such that $H_1^{2\rho}( X)$ is trivial, and consider a sequence $X_i$ of compact geodesic spaces that converges to $X$ in the Gromov--Hausdorff sense. Then there is a sequence $\rho _i \to 0$ such that $ H_1^{\rho_i}( X_i) = H_1^{\rho}( X_i) $ for each $i$.
\end{cor}
\begin{proof}
For large enough $i$, let $\rho_i \in ( 0 , \rho ] $ be such that $\rho_i \to 0$ and there is a $\rho_i/100$-approximation $X_i \to X $. One could then apply Theorem \ref{SW} for $\varepsilon \in [\rho_i , \rho]$ to get a map $H_1(X_i) \to H_1(X)$ whose kernel equals both $ H_1^{\rho_i}(X_i) $ and $ H_1^{\rho}(X_i) $. For small $i$, simply set $\rho_i = \rho$.
\end{proof}
The following results were obtained in \cite{kapovitch-wilking}, where they are stated in terms of $\pi_1$. The first one states that for $M \in \mathfrak{M}_{Ric}(n,c,D)$, there is a subgroup $N \leq H_1(M) $ that can be detected anywhere. The second one states that at regular points, there is a gap phenomenon.
\begin{thm}\label{KW}
\rm (Kapovitch--Wilking) For each $n \in \mathbb{N}$, $c \in \mathbb{R}$, $D > 0 $, $\varepsilon_1 > 0$, there are $\varepsilon_0 > 0$, $C \in \mathbb{N}$, such that the following holds. For each $ M \in \mathfrak{M}_{Ric}(n,c,D) $, there is $\varepsilon \in [\varepsilon_0, \varepsilon_1]$ and a subgroup $N \leq H_1(M)$ such that for all $x \in M$,
\begin{itemize}
\item $N$ lies in the content of the inclusion $ B^M(x, \varepsilon / 1000 ) \to M$.
\item The index of $N$ in the content of the inclusion $ B^M(x, \varepsilon ) \to M $ is $\leq C$.
\end{itemize}
\end{thm}
\begin{lem}\label{KW-gap}
\rm (Kapovitch--Wilking) Let $X_i \in \mathfrak{M}_{Ric}(n,c,D)$ converge in the Gromov--Hausdorff sense to a space $X$. Consider a $k$-regular point $x \in X$, and $h_i : X \to X_i$ a sequence of Gromov--Hausdorff approximations. Then there is $\eta > 0 $ and a sequence $\eta_i \to 0$ such that the contents of the inclusions $ B^{X_i}(h_i(x), \eta_i) \to X_i $, $ B^{X_i}(h_i(x), \eta) \to X_i $ coincide.
\end{lem}
For the proof of Corollary \ref{tori} we require the following result from \cite{mondello-mondino-perales}.
\begin{thm}\label{torus-stability}
\rm (Mondello--Mondino--Perales) For each $n \in \mathbb{N}$ there is $\varepsilon > 0 $ such that if $X_i \in \mathfrak{M}_{sec}(n,-1,\varepsilon ) $ converges in the Gromov--Hausdorff sense to a space $X$ of Hausdorff dimension $k$ and $\beta_1(X) \geq k$, then $X$ is bi-H\"{o}lder homeomorphic to a flat $k$-dimensional torus.
\end{thm}
\section{Proof of the main results}
\begin{proof}[Proof of Theorem \ref{main}:]
Let $p \in X$ be a $k$-regular point, $h_i : X \to X_i$ a sequence of Gromov--Hausdorff approximations, and set $p_i : = h_i (p)$. Then by Lemma \ref{KW-gap}, there is $\varepsilon_2 > 0 $ and a sequence $\eta_i \to 0$ such that the contents of the maps $ B^{X_i}(p_i, \eta_i) \to X_i$, $ B^{X_i}(p_i , \varepsilon_2) \to X_i$ coincide.
By Theorem \ref{slsc}, there is $ \varepsilon_1 \in (0, \varepsilon_2 ] $ such that for each $x \in X$, the content of the inclusion $ B^X(x,2 \varepsilon_1) \to X $ is trivial. By Theorem \ref{SW}, all we need to show is that for large enough $i$, $H_1^{\varepsilon_1}(X_i)$ has rank $\leq n-k $. By Corollary \ref{SW-gap}, there is a sequence $\rho_i \to 0$ with the property that $H_1^{\rho_i}(X_i) = H_1^{\varepsilon_1}(X_i)$ for each $i$.
By Theorem \ref{KW}, there are $\varepsilon _ 0 > 0 $, $C \in \mathbb{N}$, subgroups $N_i \leq H_1(X_i)$, and a sequence $\delta_i \in [\varepsilon _ 0 , \varepsilon _1] $ with the property that for each $x \in X_i$, the content of the map $ B^{X_i}(x, \delta_i ) \to X_i$ contains $N_i$ as a subgroup of index $\leq C$.
Let $ x_1, \ldots , x_m \in X$ be such that $X = \cup_{j=1}^m B^X(x_j, \varepsilon_0/3)$, and set $x^i_j : = h_i (x_j)$. Then for large enough $i$, the balls $B^{X_i}(x^i_j, \varepsilon_0/2)$ cover $X_i$. This implies that for large enough $i$, each ball of radius $\rho_i$ in $X_i$ is contained in a ball of the form $B^{X_i}(x^i_j, \varepsilon_0)$. Hence if we let $\mathcal{U}_i$ denote the family $\{ B^{X_i} ( x_j^i , \delta_i ) \}_{j=1}^m$, then by Lemma \ref{refinement} we get
\[ H_1^{\rho_i}(X_i) \leq H_1 (\mathcal{U}_i \prec X_i ) \leq H_1^{\varepsilon_1}(X_i) = H_1^{\rho_i} (X_i) . \]
Since $ H_1 (\mathcal{U}_i \prec X_i)$ is generated by the contents of the inclusions $ B^{X_i}(x^i_j, \delta_i) \to X_i$ with $j \in \{ 1, \ldots , m \}$, the index of $N_i $ in $H_1(\mathcal{U}_i \prec X_i)$ is at most $C^m$. Therefore, the rank of $H_1^{\varepsilon_1}(X_i)$ equals the rank of $N_i$ for all large enough $i$.
Let $\Gamma_i \leq H_1(X_i)$ denote the content of the inclusion $B^{X_i}(p_i, \varepsilon_2) \to X_i$. Since $\varepsilon_2 \geq \varepsilon_1$, $\Gamma_i$ contains $N_i$, and since $\Gamma_i$ equals the content of the inclusion $B^{X_i}(p_i, \eta_i) \to X_i$, and $\eta_i \leq \varepsilon_0$ for large enough $i$, the index of $N_i$ in $\Gamma_i$ is finite. Hence Theorem \ref{main} will follow from the following claim.
\begin{center}
\textbf{Claim:} For large enough $i$, $\Gamma_i$ has rank $\leq n- k$.
\end{center}
Let $\lambda_i \to \infty$ be a sequence that diverges so slowly that $\lambda_i \eta_i \to 0$ and the sequence $(\lambda_i X_i , p_i)$ converges in the pointed Gromov--Hausdorff sense to $\mathbb{R}^{k}$. We can achieve this since $p$ is $k$-regular and $\eta_ i \to 0$.
Let $(Y_i, q_i)$ denote the regular cover of $(\lambda_i X_i, p_i)$ with Galois group $H_1(X_i)$. By Theorem \ref{compactness} and Theorem \ref{equivariant-compactness}, we can assume that the sequence $(Y_i, q_i)$ converges in the pointed Gromov--Hausdorff sense to a proper geodesic space $(Y,q)$, and the groups $H_1(X_i)$ converge in the equivariant Gromov--Hausdorff sense to some closed group $\Gamma \leq Iso (Y)$. Since all elements of $H_1(X_i) \backslash \Gamma_i$ move $q_i$ at least $\varepsilon _2 \lambda_i$ away, the equivariant Gromov--Hausdorff limit of $\Gamma_i$ equals $\Gamma$ as well. Note that from the definition of equivariant Gromov--Hausdorff convergence, it follows that the $\Gamma_i$-orbits of $q_i$ converge in the pointed Gromov--Hausdorff sense to the $\Gamma$-orbit of $q$.
By Corollary \ref{splitting}, $Y$ splits isometrically as a product $\mathbb{R}^{k} \times Z$ with $Z$ a proper geodesic space of Hausdorff dimension $\leq n - k$, such that the $Z$-fibers coincide with the $\Gamma$-orbits. Since the topological dimension is always dominated by the Hausdorff dimension (\cite{hurewicz-wallman}, Chapter 7), the topological dimension of $Z$ is at most $n- k$. Then by Theorem \ref{zamora}, the rank of $\Gamma_i$ is at most $n- k$ for large enough $i$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{tori}:]
Let $\varepsilon > 0 $ be given by Theorem \ref{torus-stability}. By Theorem \ref{main}, $\beta_1(X) \geq k$, and the result follows.
\end{proof}
\section{Introduction} \label{S:intro}
In this article, rings are finite commutative rings with non-zero identity. Diesl \cite{diesl} introduced the concept of a nil clean ring as a subclass of clean rings in $2013$. He defined an element $x$ of a ring $R$ to be nil clean if it can be written as a sum of an idempotent element and a nilpotent element of $R$, and called $R$ a nil clean ring if every element of $R$ is nil clean. In $2015$, Kosan and Zhou \cite{TK} developed the concept of a weakly nil clean ring as a generalization of nil clean rings. An element $x$ of a ring $R$ is weakly nil clean if $x=n+e$ or $x=n-e$, where $n$ is a nilpotent element and $e$ is an idempotent element of $R$. The sets of nilpotent elements, unit elements, nil clean elements and weakly nil clean elements of a ring $R$ are denoted by $Nil(R)$, $U(R)$, $NC(R)$ and $WNC(R)$ respectively. By a graph, we mean a simple undirected graph. For a graph $G$, the set of edges and the set of vertices are denoted by $E(G)$ and $V(G)$ respectively. The concept of the zero-divisor graph of a commutative ring was introduced by Beck in \cite{Beck} to discuss the coloring of rings. In $1999$, Anderson and Livingston \cite{al} introduced the zero divisor graph $\Gamma (R)$ of a commutative ring $R$: the vertex set of $\Gamma (R)$ is the set of all non-zero zero divisors of $R$, and two vertices $x$ and $y$ are adjacent if $xy=0$. Li et al. \cite{LI} developed a related graph structure of a ring $R$, called the nilpotent divisor graph of $R$, whose vertex set is $\{x\in R\,:\,x\neq 0, \, \exists \, y(\neq 0)\in R$ such that $xy\in Nil(R)\}$, two vertices $x$ and $y$ being adjacent if $xy\in Nil(R)$. In $2018$, Kimball and LaGrange \cite{kimball} generalized the concept of the zero divisor graph to the idempotent divisor graph.
For any idempotent $e\in R$, they defined the idempotent divisor graph $\Gamma_e(R)$ associated with $e$, where $V(\Gamma_e(R))=\{a\in R\,\,:\,\,$ there exists $b\in R$ with $ab=e\}$ and two vertices $a$ and $b$ are adjacent if $ab=e$.
In this article, we introduce the nil clean divisor graph $G_N(R)$ associated with a finite commutative ring $R$. We define $G_N(R)$ by taking $V(G_N(R))=\{x\in R\,:\,x\neq 0,\,\exists\, y(\neq 0, \neq x)\in R$ such that $xy\in NC(R)\}$ as the vertex set, two vertices $x$ and $y$ being adjacent if and only if $xy$ is a nil clean element of $R$. Clearly the nil clean divisor graph is a generalization of both the idempotent divisor graph and the nilpotent divisor graph. Properties of $G_N(R)$ such as girth, clique number, diameter and dominating number are studied.\par
To start with, we recall some preliminaries from graph theory. For a graph $G$, the degree of a vertex $v\in G$ is the number of edges incident to $v$, denoted by $deg(v)$. The neighbourhood of a vertex $v\in G$ is the set of all vertices adjacent to $v$, denoted by $A_v$. A graph $G$ is said to be connected if for any two distinct vertices of $G$, there is a path in $G$ connecting them. The number of edges on a shortest path between vertices $x$ and $y$ is called the distance between $x$ and $y$, denoted by $d(x,y)$. If there is no path between $x$ and $y$, then we say $d(x,y)= \infty$. The diameter of a graph $G$, denoted by $diam(G)$, is the maximum of the distances between pairs of distinct vertices in $G$. If $G$ is not connected, then we say $diam(G)=\infty$. The girth of $G$ is the length of a shortest cycle in $G$, denoted by $gr(G)$; if there is no cycle in $G$, then we say $gr(G)=\infty$. A complete graph is a simple undirected graph in which every pair of distinct vertices is connected by an edge.
A clique is a subset of the vertex set of a graph whose induced subgraph is complete. A clique with $n$ vertices is called an $n$-clique. A maximum clique of a graph is a clique such that there is no clique with more vertices. The clique number of a graph $G$, denoted by $\omega (G)$, is the number of vertices in a maximum clique of $G$.
\section{Nil clean divisor graph}
We introduce nil clean divisor graph as follows:
\begin{Def}
For a ring $R$, nil clean divisor graph, denoted by $G_N(R)$ is defined as a graph with vertex set $\{x\in R\,:\, x\neq 0,\,\exists\, y(\neq 0, \neq x)\in R$ such that $xy \in NC(R)\}$ and two vertices $x$ and $y$ are adjacent if $xy\in NC(R)$.
\end{Def}
From the above definition, we observe that nil clean divisor graph is a generalization of nilpotent divisor graph, which is again a generalization of zero divisor graph. For any idempotent $e\in R$, nil clean divisor graph of $R$ is also a generalization of $\Gamma_e(R)$. As an example, the nil clean divisor graph $G_N(\mathbb{Z}_6)$ is shown below:
\begin{figure}[H]
\begin{pspicture}(0,3)(0,-1)
\scalebox{.8}{
\rput(5,0){
\psdot[linewidth=.05](-6,3)
\psdot[linewidth=.05](-6,0)
\psdot[linewidth=.05](-9,1.5)
\psdot[linewidth=.05](-3,2.5)
\psdot[linewidth=.05](-3,0.5)
\psline(-9,1.5)(-6,3)(-6,0)(-9,1.5)(-6,0)(-9,1.5)(-3,2.5)(-3,0.5)(-9,1.5)
\rput(-9.3,1.5){$3$}
\rput(-6,3.3){$4$}
\rput(-6,-0.3){$1$}
\rput(-3,2.8){$2$}
\rput(-3,0.2){$5$}
}}
\end{pspicture}
\caption{Nil clean divisor graph of $\mathbb{Z}_6$.}\label{1}
\end{figure}
\begin{Thm}
The nil clean divisor graph $G_N(R)$ is complete if and only if $R$ is a nil clean ring.
\end{Thm}
\begin{proof}
Suppose $G_N(R)$ is complete and let $x\in R$. If $x=0$, then $x$ is nil clean; if $x\neq 0$, then $x\cdot 1=x$ is nil clean, since $1\in V(G_N(R))$. The converse is clear from the definition of the nil clean divisor graph.
\end{proof}
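For instance, $\mathbb{Z}_4$ is nil clean ($Nil(\mathbb{Z}_4)=\{0,2\}$, $Idem(\mathbb{Z}_4)=\{0,1\}$), and a brute-force check (a sketch of ours, not from the paper) confirms that $G_N(\mathbb{Z}_4)$ is complete on $\{1,2,3\}$:

```python
def nilpotents(n):
    return {x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

def idempotents(n):
    return {x for x in range(n) if x * x % n == x}

def nil_clean(n):
    return {(a + e) % n for a in nilpotents(n) for e in idempotents(n)}

# Z_4 is nil clean: every element is a nilpotent plus an idempotent.
assert nil_clean(4) == {0, 1, 2, 3}
# Hence G_N(Z_4) is complete: every product of distinct nonzero elements is nil clean.
vertices = [1, 2, 3]
assert all(x * y % 4 in nil_clean(4)
           for x in vertices for y in vertices if x != y)
```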
If $\mathbb{F}$ is a finite field of order $n$, then clearly $NC(\mathbb{F})=\{0,1\}$. Hence any $x(\neq 0)\in \mathbb{F}$ is adjacent only to $x^{-1}$, provided $x\neq x^{-1}$. The nil clean divisor graph of $\mathbb{F}$ is therefore as follows:
%
\begin{figure}[H]
\begin{pspicture}(0,3)(0,-1)
\scalebox{.8}{
\rput(5,0){
\psdot[linewidth=.05](-8,2.5)
\rput(-8,2.8){$x_3$}
\psdot[linewidth=.05](-8,0.5)
\rput(-8, .2){$x_3^{-1}$}
\psline(-8,2.5)(-8,.5)
\psline(-9,2.5)(-9,.5)
\psline(-10,2.5)(-10,.5)
\psline(-7,2.5)(-7,.5)
\psline(-6,2.5)(-6,.5)
\psline(-3,2.5)(-3,.5)
\psline(-2,2.5)(-2,.5)
\psline(-1,2.5)(-1,.5)
\psline(0,2.5)(0,.5)
\psline(1,2.5)(1,.5)
\psdot[linewidth=.05](-9,0.5)
\rput(-9, 0.2){$x_2^{-1}$}
\psdot[linewidth=.05](-10,0.5)
\rput(-10, 0.2){$x_1^{-1}$}
\psdot[linewidth=.05](-7,0.5)
\rput(-7, 0.2){$x_4^{-1}$}
\psdot[linewidth=.05](-6,0.5)
\rput(-6, 0.2){$x_5^{-1}$}
\psdot[linewidth=.05](-9,2.5)
\rput(-9,2.8){$x_2$}
\psdot[linewidth=.05](-10,2.5)
\rput(-10,2.8){$x_1$}
\psdot[linewidth=.05](-7,2.5)
\rput(-7,2.8){$x_4$}
\psdot[linewidth=.05](-6,2.5)
\rput(-6,2.8){$x_5$}
\psdot[linewidth=.009](-4.8,1.5)
\psdot[linewidth=.009](-4.5,1.5)
\psdot[linewidth=.009](-4.2,1.5)
\psdot[linewidth=.05](-3,2.5)
\rput(-3,2.8){$y_5$}
\psdot[linewidth=.05](-2,2.5)
\rput(-2,2.8){$y_4$}
\psdot[linewidth=.05](-1,2.5)
\rput(-1,2.8){$y_3$}
\psdot[linewidth=.05](0,2.5)
\rput(0,2.8){$y_2$}
\psdot[linewidth=.05](1,2.5)
\rput(1,2.8){$y_1$}
\psdot[linewidth=.05](-3,0.5)
\rput(-3,0.2){$y_5^{-1}$}
\psdot[linewidth=.05](-2,0.5)
\rput(-2,0.2){$y_4^{-1}$}
\psdot[linewidth=.05](-1,0.5)
\rput(-1,0.2){$y_3^{-1}$}
\psdot[linewidth=.05](0,0.5)
\rput(0,0.2){$y_2^{-1}$}
\psdot[linewidth=.05](1,0.5)
\rput(1,0.2){$y_1^{-1}$}
}}
\end{pspicture}
\caption{Nil clean divisor graph of $\mathbb{F}$.}\label{2}
\end{figure}
Note that $x_i\neq x_i^{-1}$ and $y_i\neq y_i^{-1}$, as otherwise we would also get isolated points in the graph.
\begin{Cor}
Let $\mathbb{F}$ be a field of order $n>2$ and let $A=\{a\in \mathbb{F}\,\,:\,\, a=a^{-1}\}$. Then the following hold.
\begin{enumerate}
\item The diameter of $G_N(\mathbb{F})$ is infinite.
\item $gr(G_N(\mathbb{F}))=\infty$ and $\omega(G_N(\mathbb{F}))=2$.
\item $|V(G_N(\mathbb{F}))|=n-|A|-1$.
\end{enumerate}
\end{Cor}
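Both counts are easy to confirm computationally; the sketch below (ours) checks the case $\mathbb{F}=\mathbb{Z}_7$, where $A=\{1,6\}$, so that $G_N(\mathbb{Z}_7)$ is the matching $2-4$, $3-5$ on $7-2-1=4$ vertices.

```python
def nilpotents(n):
    return {x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

def idempotents(n):
    return {x for x in range(n) if x * x % n == x}

def nil_clean(n):
    return {(a + e) % n for a in nilpotents(n) for e in idempotents(n)}

def edges(n):
    NC = nil_clean(n)
    return {frozenset((x, y)) for x in range(1, n)
            for y in range(x + 1, n) if x * y % n in NC}

n = 7
assert nil_clean(n) == {0, 1}              # NC of a field is {0, 1}
A = {a for a in range(1, n) if a * a % n == 1}
assert A == {1, 6}
E = edges(n)
assert E == {frozenset({2, 4}), frozenset({3, 5})}   # matching of inverse pairs
V = {v for e in E for v in e}
assert len(V) == n - len(A) - 1            # |V(G_N(F))| = n - |A| - 1
```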
\begin{Thm}
If $R$ has a non trivial idempotent or non trivial nilpotent element, then the girth of $G_N(R)$ is $3$.
\end{Thm}
\begin{proof}
If $R$ has a non trivial idempotent $e$, then $\{0,1,e,1-e\}\subset NC(R)$ and we get a cycle $1 - e - (1-e) - 1$. Also if $R$ has a non trivial nilpotent $n$, then $\{0,1,n,n+1\} \subset NC(R)$. In this case $1 - n - (n+1) - 1$ is a cycle in $G_N(R)$.
\end{proof}
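Both triangles can be exhibited concretely (our check, not part of the paper): in $\mathbb{Z}_6$ with $e=3$ we get the cycle $1-3-4-1$, and in $\mathbb{Z}_4$ with $n=2$ the cycle $1-2-3-1$.

```python
def nilpotents(n):
    return {x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

def idempotents(n):
    return {x for x in range(n) if x * x % n == x}

def nil_clean(n):
    return {(a + e) % n for a in nilpotents(n) for e in idempotents(n)}

def is_edge(n, x, y):
    return x != 0 and y != 0 and x != y and x * y % n in nil_clean(n)

# idempotent case in Z_6: e = 3 and 1 - e = 4 give the triangle 1 - 3 - 4 - 1
assert all(is_edge(6, a, b) for a, b in [(1, 3), (3, 4), (4, 1)])
# nilpotent case in Z_4: n = 2 and n + 1 = 3 give the triangle 1 - 2 - 3 - 1
assert all(is_edge(4, a, b) for a, b in [(1, 2), (2, 3), (3, 1)])
```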
\begin{Thm}
If $R$ has only trivial idempotents and only the trivial nilpotent element, then the girth of $G_N(R)$ is infinite.
\end{Thm}
\begin{proof}
Since $R$ has only trivial idempotents and only the trivial nilpotent element, by Lemma $2.6$ of \cite{ncg}, $R$ is a field. Hence the result follows.
\end{proof}
\begin{Thm}\label{T2.6}
Let $R$ be a ring. Then the following hold.
\begin{enumerate}
\item Either $R$ is a field or $G_N(R)$ is connected.
\item $diam(G_N(R))=\infty$ or $diam(G_N(R))\leq 3$.
\item $gr(G_N(R))=\infty$ or $gr(G_N(R))=3$.
\end{enumerate}
\end{Thm}
\begin{proof}
Suppose $R$ is a reduced ring. \\
Case (I): If $R$ has no non trivial idempotent, then $R$ is a field. \\
Case (II): If $R$ has a non trivial idempotent, say $e\in Idem(R)$, then for any $x,y\in V(G_N(R))$, there exist $x_1,y_1\in V(G_N(R))$, such that $xx_1, yy_1\in NC(R)=Idem(R)$. So, we have a path $x-x_1e-y_1(1-e)-y$ from $x$ to $y$.\par
If $R$ is not a reduced ring, then there exists a non-zero $n\in Nil(R)$ such that $x-n-y$ is a path from $x$ to $y$, for any $x,y\in V(G_N(R))$. Hence (1) and (2) follow from the above observations and Figure \ref{2}.\\
(3) If $R$ is reduced, then either $R$ is a field or there exists a non trivial idempotent $e\in R$, in which case $1-e-(1-e)-1$ is a cycle. So, $gr(G_N(R))=\infty$ or $gr(G_N(R))=3$. If $R$ is a non reduced ring, then since the nilpotent divisor graph is a subgraph of the nil clean divisor graph, by Theorem $2.1$ of \cite{LI}, $gr(G_N(R))=3$.
\end{proof}
\begin{Cor}
If $R$ is not a reduced ring, then $diam(G_N(R))\leq 2$.
\end{Cor}
\begin{Cor}
A ring $R$ is a field if and only if nil clean divisor graph of $R$ is bipartite.
\end{Cor}
\begin{proof}
$\Rightarrow$ Trivial.\\
$\Leftarrow$ If nil clean divisor graph of $R$ is bipartite then $gr(G_N(R))\neq 3$. So from Theorem \ref{T2.6}, $gr(G_N(R))=\infty$ and hence $R$ is a field.
\end{proof}
\begin{Thm}
For a ring $R$, the following are equivalent.
\begin{enumerate}
\item $G_N(R)$ is a star graph.
\item $R \cong \mathbb{Z}_5$.
\end{enumerate}
\end{Thm}
\begin{proof}
The result follows from the fact that $gr(G_N(R))=\infty$ if and only if $R$ is a field.
\end{proof}
\begin{Thm}
For any ring $R$, $\omega(G_N(R))\geq \max\{|Nil(R)|, |Idem(R)|-1\}$.
\end{Thm}
\begin{proof}
From the definition of the nil clean divisor graph, we observe that $(Nil(R)\setminus \{0\})\cup \{1\}$ and $Idem(R)\setminus \{0\}$ induce complete subgraphs of $G_N(R)$, of orders $|Nil(R)|$ and $|Idem(R)|-1$ respectively: products of nilpotent elements, and of $1$ with a nilpotent element, are nilpotent, while products of idempotent elements are idempotent, and both are therefore nil clean.
\end{proof}
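As a concrete check (ours, not part of the paper), in $\mathbb{Z}_{12}$ we have $Nil(\mathbb{Z}_{12})=\{0,6\}$ and $Idem(\mathbb{Z}_{12})=\{0,1,4,9\}$, so the bound reads $\omega(G_N(\mathbb{Z}_{12}))\geq \max\{2,3\}=3$, witnessed by cliques built from nilpotents and idempotents:

```python
from itertools import combinations

def nilpotents(n):
    return {x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

def idempotents(n):
    return {x for x in range(n) if x * x % n == x}

def nil_clean(n):
    return {(a + e) % n for a in nilpotents(n) for e in idempotents(n)}

n = 12
NC = nil_clean(n)
assert nilpotents(n) == {0, 6} and idempotents(n) == {0, 1, 4, 9}

nil_clique = {1} | (nilpotents(n) - {0})   # {1, 6}: order |Nil(R)| = 2
idem_clique = idempotents(n) - {0}         # {1, 4, 9}: order |Idem(R)| - 1 = 3
for clique in (nil_clique, idem_clique):
    # every pair of distinct vertices in the clique is adjacent
    assert all(x * y % n in NC for x, y in combinations(sorted(clique), 2))
```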
Next we study the nil clean divisor graph of a weakly nil clean ring.
\begin{Thm}
Let $R$ be a weakly nil clean ring which is not nil clean. Then $\omega(G_N(R))\geq [\frac{|R|}{2}]$, and $diam(G_N(R))=2$ if $|R|(>3)$ is even, where $[x]$ is the greatest integer function.
\end{Thm}
\begin{proof}
As $x\in WNC(R)$ implies $x\in NC(R)$ or $-x\in NC(R)$, and $|NC(R)|=|-NC(R)|$, if $|R|$ is even, then $|NC(R)|\geq \frac{|R|}{2}$, and if $|R|$ is odd, then $|NC(R)|\geq \frac{|R|+1}{2}$. Since $R$ is commutative, the product of any two nil clean elements is again nil clean. Hence $\omega(G_N(R))\geq [\frac{|R|}{2}]$.\par
Since $|R|>3$, $R$ is not a field and hence $G_N(R)$ is connected. As $|R\setminus \{0\}|$ is odd, there exists an element $a\in R$ such that $a\in NC(R)\cap WNC(R)$. Hence for any $x,y\in R$, $x-a-y$ is a path in $G_N(R)$, and $diam(G_N(R))= 2$ as $R$ is not a nil clean ring.
\end{proof}
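A small example (our check, not part of the paper): $\mathbb{Z}_9$ is weakly nil clean but not nil clean, with $NC(\mathbb{Z}_9)=\{0,1,3,4,6,7\}$, so $|NC(\mathbb{Z}_9)|=6\geq \frac{9+1}{2}$ and $NC(\mathbb{Z}_9)\setminus\{0\}$ is a clique of order $5\geq[\frac{9}{2}]$.

```python
from itertools import combinations

def nilpotents(n):
    return {x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

def idempotents(n):
    return {x for x in range(n) if x * x % n == x}

def nil_clean(n):
    return {(a + e) % n for a in nilpotents(n) for e in idempotents(n)}

n = 9
NC = nil_clean(n)
# weakly nil clean elements: n + e or n - e
WNC = NC | {(a - e) % n for a in nilpotents(n) for e in idempotents(n)}
assert WNC == set(range(n))        # Z_9 is weakly nil clean ...
assert NC == {0, 1, 3, 4, 6, 7}    # ... but not nil clean
assert len(NC) >= (n + 1) // 2
# NC \ {0} is a clique of G_N(Z_9), so omega >= 5 >= floor(9/2) = 4
assert all(x * y % n in NC for x, y in combinations(sorted(NC - {0}), 2))
```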
\section{Nil clean divisor graph of $\mathbb{Z}_{2p}$ and $\mathbb{Z}_{3p}$, for any odd prime $p$}
In this section we study the structures of $G_N(\mathbb{Z}_{2p})$ and $G_N(\mathbb{Z}_{3p})$, for any odd prime $p$.
\begin{Lem}\label{3.1}
If $a\in V(G_N(\mathbb{Z}_{2p}))$, where $p$ is an odd prime, then the following hold.
\begin{enumerate}
\item If $a=p$, then $deg(a)=2p-2$.
\item If $a\in \{1,p-1,p+1,2p-1\}$, then $deg(a)=2$.
\item Otherwise, $deg(a)=3$.
\end{enumerate}
\end{Lem}
\begin{proof}
Clearly $NC(\mathbb{Z}_{2p})=\{0,1,p,p+1\}$.
\begin{enumerate}
\item If $a=p$, then for any $y\in V(G_N(\mathbb{Z}_{2p}))$, either $yp=p$ or $yp=0$. Hence every element of $V(G_N(\mathbb{Z}_{2p}))$ is adjacent to $p$.
\item It is easy to observe that, $A_1=\{p, p+1\}$, $A_{p-1}=\{p, 2p-1\}$, $A_{p+1}=\{1, p\}$ and $A_{2p-1}=\{p-1, p\}$.
\item
Let $a\in \mathbb{Z}_{2p}\setminus \{0, 1, p-1, p, p+1, 2p-1\}$. \\
Case (I): Let $a$ be an even number. If $ax=0$ in $\mathbb{Z}_{2p}$, then it has two solutions $0$ and $p$. If $ax=1$ in $\mathbb{Z}_{2p}$, then it has no solution, since $gcd(2p, a)=2\nmid 1$. If $ax=p$ in $\mathbb{Z}_{2p}$, then also it has no solution, since $gcd(2p, a)=2\nmid p$. If $ax=p+1$ in $\mathbb{Z}_{2p}$, then it has two distinct solutions $x_1$ and $x_2$ in $\mathbb{Z}_{2p}$, since $gcd(2p, a)=2\mid p+1$. Hence we conclude that $A_a=\{p,x_1,x_2\}$.\\
Case (II): Let $a$ be an odd number. If $ax=0$ in $\mathbb{Z}_{2p}$, then it has the unique solution $x=0$. If $ax=1$ in $\mathbb{Z}_{2p}$, then it has a unique solution $x=y_1$ in $\mathbb{Z}_{2p}$, which is odd, since $gcd(2p, a)=1\mid 1$. If $ax=p$ in $\mathbb{Z}_{2p}$, then it has the unique solution $x=p$, since $gcd(2p, a)=1\mid p$. If $ax=p+1$ in $\mathbb{Z}_{2p}$, then it has a unique solution $x=y_2$ in $\mathbb{Z}_{2p}$, which is even, since $gcd(2p, a)=1\mid p+1$. Hence $A_a=\{p,y_1,y_2\}$.\\
From the above cases it follows $deg(a)=3$.
\end{enumerate}
\end{proof}
\begin{Rem}\label{3.2}
In the proof of Lemma \ref{3.1} (3), Case(I), since $ax_1=ax_2$ in $\mathbb{Z}_{2p}$, so $x_1-x_2=0$ or $p$, but $x_1-x_2\neq 0$ as $x_1$ and $x_2$ are distinct. Hence if $x_1$ is odd, then $x_2$ is even and if $x_1$ is even, then $x_2$ is odd.
\end{Rem}
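The degree pattern of Lemma \ref{3.1} is easy to verify computationally; the sketch below (ours, for checking only) treats the case $p=5$, i.e.\ $\mathbb{Z}_{10}$.

```python
def nilpotents(n):
    return {x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

def idempotents(n):
    return {x for x in range(n) if x * x % n == x}

def nil_clean(n):
    return {(a + e) % n for a in nilpotents(n) for e in idempotents(n)}

p = 5
n = 2 * p
NC = nil_clean(n)
assert NC == {0, 1, p, p + 1}      # NC(Z_2p) = {0, 1, p, p+1}

def deg(a):
    # neighbours of a: nonzero y != a with a*y nil clean
    return sum(1 for y in range(1, n) if y != a and a * y % n in NC)

assert deg(p) == 2 * p - 2
assert all(deg(a) == 2 for a in (1, p - 1, p + 1, 2 * p - 1))
assert all(deg(a) == 3 for a in range(1, n)
           if a not in {1, p - 1, p, p + 1, 2 * p - 1})
```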
From Lemma \ref{3.1} and Remark \ref{3.2}, for any prime $p>2$, the nil clean divisor graph of $\mathbb{Z}_{2p}$ is the following:
\begin{figure}[H]
\begin{pspicture}(0,3)(0,-1)
\scalebox{.8}{
\rput(5,0){
\psdot[linewidth=.05](-12,1.5)
\psdot[linewidth=.05](-10,3.5)
\psdot[linewidth=.05](-10,-.5)
\psdot[linewidth=.05](-9,3.5)
\psdot[linewidth=.05](-9,-.5)
\psdot[linewidth=.05](-7,3.5)
\psdot[linewidth=.05](-7,-.5)
\psdot[linewidth=.05](-6,3.5)
\psdot[linewidth=.05](-6,-.5)
\psdot[linewidth=.05](-4,3.5)
\psdot[linewidth=.05](-4,-.5)
\psdot[linewidth=.05](-3,3.5)
\psdot[linewidth=.05](-3,-.5)
\psdot[linewidth=.003](-2.2,2.5)
\psdot[linewidth=.003](-2,2.5)
\psdot[linewidth=.003](-1.8,2.5)
\psdot[linewidth=.003](-2.2,.5)
\psdot[linewidth=.003](-2,.5)
\psdot[linewidth=.003](-1.8,.5)
\psdot[linewidth=.05](-1,3.5)
\psdot[linewidth=.05](-1,-.5)
\psdot[linewidth=.05](0,3.5)
\psdot[linewidth=.05](0,-.5)
\psdot[linewidth=.05](1,3)
\psdot[linewidth=.05](1,0)
\psdot[linewidth=.05](2,2.5)
\psdot[linewidth=.05](2,.5)
\psline (-9,-.5)(-12,1.5)(-10,3.5)(-10,-.5)(-9,-.5)(-9,3.5)(-10,3.5)
\psline (-10,-.5)(-12,1.5)(-9,3.5)
\psline (-6,-.5)(-12,1.5)(-7,3.5)(-7,-.5)(-6,-.5)(-6,3.5)(-7,3.5)
\psline (-7,-.5)(-12,1.5)(-6,3.5)
\psline (-3,-.5)(-12,1.5)(-4,3.5)(-4,-.5)(-3,-.5)(-3,3.5)(-4,3.5)
\psline (-4,-.5)(-12,1.5)(-3,3.5)
\psline (-1,-.5)(-12,1.5)(0,3.5)(0,-.5)(-1,-.5)(-1,3.5)(0,3.5)
\psline (0,-.5)(-12,1.5)(-1,3.5)
\psline (-12,1.5)(1,3)(1,0)(-12,1.5)
\psline (-12,1.5)(2,2.5)(2,.5)(-12,1.5)
\rput(-12.3,1.5){$p$}
\rput(1,-.3){$p+1$}
\rput(1,3.3){$1$}
\rput(2,.2){$p-1$}
\rput(2,2.8){$2p-1$}
\rput(-10,3.8){$c_1$}
\rput(-10,-.8){$a_1$}
\rput(-9,3.8){$d_1$}
\rput(-9,-.8){$b_1$}
\rput(-7,3.8){$c_2$}
\rput(-7,-.8){$a_2$}
\rput(-6,3.8){$d_2$}
\rput(-6,-.8){$b_2$}
\rput(-4,3.8){$c_3$}
\rput(-4,-.8){$a_3$}
\rput(-3,3.8){$d_3$}
\rput(-3,-.8){$b_3$}
\rput(-1,3.8){$c_{\frac{p-3}{2}}$}
\rput(-1,-.8){$a_{\frac{p-3}{2}}$}
\rput(0,3.8){$d_{\frac{p-3}{2}}$}
\rput(0,-.8){$b_{\frac{p-3}{2}}$}
}}
\end{pspicture}
\caption{Nil clean divisor graph of $\mathbb{Z}_{2p}$.}\label{3}
\end{figure}
In Figure \ref{3}, $a_i$ and $b_i$ are even numbers from $\mathbb{Z}_{2p}\setminus \{0, 1, p-1, p, p+1, 2p-1\}$ such that $a_ib_i=p+1$, for $1\leq i\leq \frac{p-3}{2}$. Also $c_i=a_i+p$ and $d_i=b_i+p$, for $1\leq i\leq \frac{p-3}{2}$.\\
From the above observations we conclude the following:
\begin{Thm}\label{3.3}
The following hold for the nil clean divisor graph $G_N(\mathbb{Z}_{2p})$, for any odd prime $p$.
\begin{enumerate}
\item The clique number of $G_N(\mathbb{Z}_{2p})$ is $3$.
\item The diameter of $G_N(\mathbb{Z}_{2p})$ is $2$.
\item The girth of $G_N(\mathbb{Z}_{2p})$ is $3$.
\item $\{p\}$ is the unique smallest dominating set for $G_N(\mathbb{Z}_{2p})$; that is, the domination number of the graph is $1$.
\end{enumerate}
\end{Thm}
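The invariants in Theorem \ref{3.3} can likewise be confirmed by exhaustive search for small primes. The following Python sketch is our own verification aid (not part of the paper); it recomputes the nil clean elements by brute force, which we take as an assumption consistent with the definition of $G_N(R)$.

```python
from itertools import combinations
from collections import deque

def nil_clean(m):
    idem = {e for e in range(m) if e * e % m == e}
    nilp = {n for n in range(m) if pow(n, m, m) == 0}
    return {(e + n) % m for e in idem for n in nilp}

def graph(m):
    # adjacency sets of G_N(Z_m) on the non-zero elements (loops discarded)
    nc = nil_clean(m)
    return {x: {y for y in range(1, m) if y != x and x * y % m in nc}
            for x in range(1, m)}

def diameter(adj):
    best = 0
    for s in adj:                      # BFS from every vertex
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

for p in (5, 7, 11):
    m = 2 * p
    adj = graph(m)
    triangle = any(adj[a] & adj[b] for a in adj for b in adj[a] if a < b)
    k4 = any(all(v in adj[u] for u, v in combinations(S, 2))
             for S in combinations(adj, 4))
    assert triangle and not k4                       # girth 3, clique number 3
    assert diameter(adj) == 2                        # diameter 2
    assert all(x == p or p in adj[x] for x in adj)   # {p} dominates
```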
Next we study the nil clean divisor graph of $\mathbb{Z}_{3p}$ and its graph theoretic properties.
\begin{Lem}\label{3.4}
In $G_N(\mathbb{Z}_{3p})$, where $p\equiv 2(mod\,3)$, the following hold.
\begin{enumerate}
\item $deg(3k)=5$ if $3k\notin \{p+1, 2p-1\}$, for $1\leq k\leq p-1$.
\item $deg(p+1)=deg(2p-1)=4$.
\end{enumerate}
\end{Lem}
\begin{proof}
Here $NC(\mathbb{Z}_{3p})=\{0,1,p+1,2p\}$. Observe that $3k.x\equiv 1(mod \, 3p)$ and $3k.x\equiv 2p(mod \, 3p)$ have no solution, as $gcd(3k,3p)=3$ divides neither $1$ nor $2p$. The congruence $3k.x\equiv 0(mod \, 3p)$ has three incongruent solutions $\{0,p,2p\}$ in $\mathbb{Z}_{3p}$. Also $3k.x\equiv p+1(mod\,3p)$ has three incongruent solutions in $\mathbb{Z}_{3p}$, as $gcd(3k,3p)=3$ divides $p+1$.
\begin{enumerate}
\item Since the congruence $x^2\equiv p+1(mod \, 3p)$ has exactly two solutions, $p+1$ and $2p-1$, if $3k\notin \{p+1, 2p-1\}$, then $deg(3k)=6-1=5$, as $0\notin V(G_N(\mathbb{Z}_{3p}))$.
\item If $3k\in \{p+1, 2p-1\}$, then $deg(3k)=6-2=4$, as $0\notin V(G_N(\mathbb{Z}_{3p}))$ and we do not consider loops.
\end{enumerate}
\end{proof}
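A brute-force check of Lemma \ref{3.4} for small primes $p\equiv 2(mod\,3)$ can be sketched as follows (our own verification, reusing the brute-force nil clean computation as an assumption):

```python
def nil_clean_elements(m):
    # nil clean = idempotent + nilpotent (mod m)
    idem = {e for e in range(m) if e * e % m == e}
    nilp = {n for n in range(m) if pow(n, m, m) == 0}
    return {(e + n) % m for e in idem for n in nilp}

def degree(a, m):
    nc = nil_clean_elements(m)
    return sum(1 for y in range(1, m) if y != a and a * y % m in nc)

# Lemma 3.4: in Z_{3p} with p = 2 (mod 3), deg(3k) = 5 except deg(p+1) = deg(2p-1) = 4
for p in (5, 11, 17):
    m = 3 * p
    for k in range(1, p):
        a = 3 * k
        assert degree(a, m) == (4 if a in (p + 1, 2 * p - 1) else 5)
```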
\begin{Lem}\label{3.5}
In $G_N(\mathbb{Z}_{3p})$, where $p\equiv 2(mod\, 3)$, the following hold.
\begin{enumerate}
\item $deg(p)=deg(2p)=2p-2$.
\item For $x\in \{1,p-1,3p-1,2p+1\}$, $deg(x)=2$.
\item For $x\in \mathbb{Z}_{3p}\setminus L$, $deg(x)=3$, where $L=\{3k\,\,:\,\,1\leq k\leq p-1\}\cup \{1,p-1,2p+1, 3p-1,p,2p\}$.
\end{enumerate}
\end{Lem}
\begin{proof}Here $NC(\mathbb{Z}_{3p})=\{0,1,p+1,2p\}$.
\begin{enumerate}
\item Clearly $p.x\equiv 1(mod\,3p)$ and $p.x\equiv p+1(mod\,3p)$ have no solution, as $gcd(3p,p)=p$ divides neither $1$ nor $p+1$. Also $p.x\equiv 0(mod\,3p)$ has $p$ incongruent solutions $\{3k\,\,:\,\, 0\leq k\leq p-1\}$ and $p.x\equiv 2p(mod\,3p)$ has $p$ incongruent solutions $\{3k+2\,\,:\,\, 0\leq k\leq p-1\}$. Since $0\notin V(G_N(\mathbb{Z}_{3p}))$ and $p$ is of the form $3i+2$ for some $0\leq i\leq p-1$, we get $deg(p)=2p-2$. Now $2p.x\equiv 0(mod\,3p)$ has $p$ incongruent solutions $\{3k\,\,:\,\,0\leq k\leq p-1\}$ and $2p.x\equiv 2p(mod\,3p)$ has $p$ incongruent solutions $\{3k+1\,\,:\,\, 0\leq k\leq p-1\}$, while $2p.x\equiv 1(mod\,3p)$ and $2p.x\equiv p+1(mod\,3p)$ have no solutions. Hence $deg(2p)=2p-2$, since $2p$ is of the form $3i+1$ for some $1\leq i \leq p-1$.
\item Since $1.x\equiv c(mod\,3p)$ has the unique solution $x=c$ for each $c\in \{0,1,p+1,2p\}$, and since $0\notin V(G_N(\mathbb{Z}_{3p}))$ and $1.1=1$ gives a loop, we get $deg(1)=2$. Similarly, $(3p-1).x\equiv c(mod\,3p)$ has the unique solution $x=(3p-1)c$, and $(3p-1)^2\equiv 1(mod\,3p)$ gives a loop, hence $deg(3p-1)=2$, as $3p-1\in U(\mathbb{Z}_{3p})$. Likewise $(p-1).x\equiv c(mod\,3p)$ and $(2p+1).x\equiv c(mod\,3p)$ have unique solutions for every $c\in \{0,1,p+1,2p\}$, and $(p-1)^2\equiv (2p+1)^2\equiv 1(mod\,3p)$. Since $p-1,2p+1\in U(\mathbb{Z}_{3p})$, we conclude $deg(p-1)=deg(2p+1)=2$.
\item Let $a\in \mathbb{Z}_{3p}\setminus L$. As $gcd(a,3p)=1$, the congruence $a.x\equiv 0(mod\,3p)$ has the unique solution $x=0$, and $a.x\equiv c(mod\,3p)$ has a unique solution for each $c\in \{1,2p,p+1\}$. Hence $deg(a)=3$.
\end{enumerate}
\end{proof}
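The remaining degree claims of Lemma \ref{3.5} can also be confirmed computationally for small primes $p\equiv 2(mod\,3)$; the sketch below is our own check, with the brute-force nil clean computation as an assumption.

```python
def nil_clean_elements(m):
    idem = {e for e in range(m) if e * e % m == e}
    nilp = {n for n in range(m) if pow(n, m, m) == 0}
    return {(e + n) % m for e in idem for n in nilp}

def degree(a, m):
    nc = nil_clean_elements(m)
    return sum(1 for y in range(1, m) if y != a and a * y % m in nc)

# Lemma 3.5: degrees in G_N(Z_{3p}) for p = 2 (mod 3)
for p in (5, 11, 17):
    m = 3 * p
    assert degree(p, m) == 2 * p - 2 and degree(2 * p, m) == 2 * p - 2
    assert all(degree(x, m) == 2 for x in (1, p - 1, 3 * p - 1, 2 * p + 1))
    L = {3 * k for k in range(1, p)} | {1, p - 1, 2 * p + 1, 3 * p - 1, p, 2 * p}
    assert all(degree(x, m) == 3 for x in range(1, m) if x not in L)
```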
From Lemma \ref{3.4} and Lemma \ref{3.5}, for any prime $p>3$ with $p\equiv 2(mod\, 3)$, the nil clean divisor graph of $\mathbb{Z}_{3p}$ is the following:
\begin{figure}[H]
\begin{pspicture}(0,3)(0,-1)
\scalebox{.8}{
\rput(5,0){
\psline (-11, 3)(-11, 0)
\psline (-10, 3.5)(-10, -.5)
\psline (-9, 3)(-9, 0)
\psline (-8, 3)(-8, 0)
\psline (-7, 3.5)(-7, -.5)
\psline (-6, 3)(-6, 0)
\psline (-4, 3)(-4, 0)
\psline (-3, 3.5)(-3, -.5)
\psline (-2, 3)(-2, 0)
\psline (3.5, 1.5)(-10, 3.5)
\psline (3.5, 1.5)(-10, -.5)
\psline (3.5, 1.5)(-7, 3.5)
\psline (3.5, 1.5)(-7, -.5)
\psline (3.5, 1.5)(-3, 3.5)
\psline (3.5, 1.5)(-3, -.5)
\psline (-9, 3)(3.5, 1.5)(-9, 0)
\psline (-6, 3)(3.5, 1.5)(-6, 0)
\psline (-2, 3)(3.5, 1.5)(-2, 0)
\psdot[linewidth=.05](-12,1.5)
\psdot[linewidth=.05](-11, 0)
\psdot[linewidth=.05](-11, 3)
\psdot[linewidth=.05](-10, 3.5)
\psdot[linewidth=.05](-10, -.5)
\psdot[linewidth=.05](-9, 3)
\psdot[linewidth=.05](-9, 0)
\psdot[linewidth=.05](-8, 3)
\psdot[linewidth=.05](-8, 0)
\psdot[linewidth=.05](-7, 3.5)
\psdot[linewidth=.05](-7, -.5)
\psdot[linewidth=.05](-6, 3)
\psdot[linewidth=.05](-6, 0)
\psdot[linewidth=.05](-4, 3)
\psdot[linewidth=.05](-4, 0)
\psdot[linewidth=.05](-3, 3.5)
\psdot[linewidth=.05](-3, -.5)
\psdot[linewidth=.05](-2, 3)
\psdot[linewidth=.05](-2, 0)
\psdot[linewidth=.05](-.5, 2)
\psdot[linewidth=.05](-.5, 1)
\psdot[linewidth=.05](1.2, 2.5)
\psdot[linewidth=.05](1.2, .5)
\psdot[linewidth=.05](2.5, 3.5)
\psdot[linewidth=.05](2.5, -.5)
\psdot[linewidth=.05](3.5, 1.5)
\psdot[linewidth=.005](-5, 1.3)
\psdot[linewidth=.005](-5, 1.6)
\psdot[linewidth=.005](-5.2, 1.3)
\psdot[linewidth=.005](-5.2, 1.6)
\psdot[linewidth=.005](-4.8, 1.3)
\psdot[linewidth=.005](-4.8, 1.6)
\psline (-11,3)(-12,1.5)(-11,0)
\psline (-8,3)(-12,1.5)(-8,0)
\psline (-4,3)(-12,1.5)(-4,0)
\psline (-.5,2)(-12,1.5)(-.5,1)
\psline (1.2,2.5)(-12,1.5)(1.2,.5)
\psline (-10,3.5)(-12,1.5)(-10,-.5)
\psline (-7,3.5)(-12,1.5)(-7,-.5)
\psline (-3,3.5)(-12,1.5)(-3,-.5)
\psline (-11,3)(-10,3.5)(-9,3)(-9,0)(-10,-.5)(-11,0)(-11,3)
\psline (-8,3)(-7,3.5)(-6,3)(-6,0)(-7,-.5)(-8,0)(-8,3)
\psline (-4,3)(-3,3.5)(-2,3)(-2,0)(-3,-.5)(-4,0)(-4,3)
\psline (-.5, 1)(1.2, .5)(3.5, 1.5)(1.2, 2.5)(-.5, 2)
\psline (3.5, 1.5)(1.2, 2.5)(2.5, 3.5)(3.5, 1.5)(1.2, .5)(2.5, -.5)(3.5, 1.5)
\rput(3.8, 1.5){$p$}
\rput(-12.4, 1.5){$2p$}
\rput(-10, 3.8){$k_1$}
\rput(-10, -.8){$l_1$}
\rput(-7, 3.8){$k_2$}
\rput(-7, -.8){$l_2$}
\rput(-3, 3.8){$k_{\frac{p-3}{2}}$}
\rput(-3, -.8){$l_{\frac{p-3}{2}}$}
\rput (-.5, 1.7){$p-1$}
\rput (-.5, 1.3){$1$}
\rput (1, 2.8){$2p-1$}
\rput (1, .2){$p+1$}
\rput (2.5, 3.8){$3p-1$}
\rput (2.5,-.8){$2p+1$}
\rput (-11,3.5){$a_1$}
\rput (-11,-.5){$c_1$}
\rput (-9, 3.5){$b_1$}
\rput (-9, -.5){$d_1$}
\rput (-8, 3.5){$a_2$}
\rput (-8, -.5){$c_2$}
\rput (-6, 3.5){$b_2$}
\rput (-6, -.5){$d_2$}
\rput (-4, 3.5){$a_{\frac{p-3}{2}}$}
\rput (-4, -.5){$c_{\frac{p-3}{2}}$}
\rput (-2, 3.5){$b_{\frac{p-3}{2}}$}
\rput (-2, -.5){$d_{\frac{p-3}{2}}$}
}}
\end{pspicture}
\caption{Nil clean divisor graph of $\mathbb{Z}_{3p}$, where $p\equiv 2(mod \,3)$.}\label{4}
\end{figure}
In Figure \ref{4}, $\{l_i,k_i\}\subseteq \{3k\,\,:\,\,1\leq k\leq p-1\}$, $a_ic_i\equiv 1(mod\,3p)$, $b_id_i\equiv 1(mod\,3p)$ and $a_ik_i \equiv c_il_i\equiv b_ik_i\equiv d_il_i\equiv p+1(mod\, 3p)$, for $1\leq i\leq \frac{p-3}{2}$. Also $a_i\equiv c_i\equiv 1(mod\,3)$ and $b_i\equiv d_i\equiv 2(mod\,3)$, for $1\leq i\leq \frac{p-3}{2}$.
\begin{Thm}\label{3.6}
For any prime $p$ with $p\equiv 2(mod \,3)$, the following hold for $G_N(\mathbb{Z}_{3p})$.
\begin{enumerate}
\item The girth of $G_N(\mathbb{Z}_{3p})$ is $3$.
\item The clique number of $G_N(\mathbb{Z}_{3p})$ is $3$.
\item The diameter of $G_N(\mathbb{Z}_{3p})$ is $3$.
\item $\{p,2p\}$ is the unique smallest dominating set for $G_N(\mathbb{Z}_{3p})$; that is, the domination number of the graph is $2$.
\end{enumerate}
\end{Thm}
\begin{proof}
Clearly $NC(\mathbb{Z}_{3p})=\{0,1,p+1,2p\}$.
\begin{enumerate}
\item Since $p-(p+1)-(2p+1)-p$ is a $3$-cycle in $G_N(\mathbb{Z}_{3p})$, the girth of $G_N(\mathbb{Z}_{3p})$ is $3$.
\item If possible, let $\omega(G_N(\mathbb{Z}_{3p}))=4$. Then there exists $A=\{a_i\,\,:\,\,1\leq i\leq 4\}\subset V(G_N(\mathbb{Z}_{3p}))$ such that $A$ forms a complete subgraph of $G_N(\mathbb{Z}_{3p})$. If $x\in \mathbb{Z}_{3p}\setminus \{p,2p,3k\,\,:\,\, 1\leq k\leq p-1\}$, then $deg(x)\leq 3$. Moreover, $x$ is adjacent to either $p$ or $2p$, to $x^{-1}$, and to $3i$ for some $1\leq i\leq p-1$ (provided $x\notin \{1,p-1,2p+1, 3p-1\}$). But $x^{-1}$ is also adjacent to $3j$, for some $1\leq j\leq p-1$ with $i\neq j$. So $A\subseteq \{p,2p,3k\,\,:\,\, 1\leq k\leq p-1\}$. Suppose $a_1=3k$, for some $1\leq k\leq p-1$. From Figure \ref{4}, $A_{a_1}=\{p,2p,3i+1, 3j+2,3s\}$, where $1\leq i,j,s\leq p-1$; also $3s\notin A_{3i+1}$, $3s\notin A_{3j+2}$, $3i+1\notin A_{3j+2}$, $p\notin A_{2p}$, $2p\notin A_{3j+2}$ and $p\notin A_{3i+1}$. Therefore $a_i\notin \{3k\,\,:\,\,1\leq k\leq p-1\}$, a contradiction. Hence $\omega(G_N(\mathbb{Z}_{3p}))=3$, as $\{p, 2p-1,3p-1\}$ forms a complete subgraph of $G_N(\mathbb{Z}_{3p})$.
\item
From Figure \ref{4}, the shortest path between $1$ and $2$ has length $3$, so by Theorem \ref{T2.6}, $diam(G_N(\mathbb{Z}_{3p}))=3$.
\item Every element of $V(G_N(\mathbb{Z}_{3p}))\setminus \{p,2p\}$ is adjacent to either $p$ or $2p$, so $\{p,2p\}$ is a dominating set; the rest of the proof follows from Figure \ref{4}.
\end{enumerate}
\end{proof}
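The invariants of Theorem \ref{3.6} can be spot-checked by exhaustive search for small primes $p\equiv 2(mod\,3)$. The following Python sketch is our own verification aid, again with the brute-force nil clean computation as an assumption.

```python
from collections import deque

def nil_clean(m):
    idem = {e for e in range(m) if e * e % m == e}
    nilp = {n for n in range(m) if pow(n, m, m) == 0}
    return {(e + n) % m for e in idem for n in nilp}

def graph(m):
    nc = nil_clean(m)
    return {x: {y for y in range(1, m) if y != x and x * y % m in nc}
            for x in range(1, m)}

def diameter(adj):
    best = 0
    for s in adj:                      # BFS from every vertex
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

for p in (5, 11):                      # primes = 2 (mod 3)
    m = 3 * p
    adj = graph(m)
    # the 3-cycle p-(p+1)-(2p+1)-p used for the girth
    assert p + 1 in adj[p] and 2 * p + 1 in adj[p] and 2 * p + 1 in adj[p + 1]
    assert diameter(adj) == 3
    # {p, 2p} dominates, and no single vertex does
    assert all(x in (p, 2 * p) or p in adj[x] or 2 * p in adj[x] for x in adj)
    assert not any(all(y == x or y in adj[x] for y in adj) for x in adj)
```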
\begin{Lem}\label{3.7}
In $G_N(\mathbb{Z}_{3p})$, where $p\equiv 1(mod\,3)$, the following hold.
\begin{enumerate}
\item $deg(3k)=5$ if $3k\notin \{p-1, 2p+1\}$, for $1\leq k\leq p-1$.
\item $deg(p-1)=deg(2p+1)=4$.
\end{enumerate}
\end{Lem}
\begin{proof}
The proof is similar to that of Lemma \ref{3.4}.
\end{proof}
\begin{Lem}\label{3.8}
In $G_N(\mathbb{Z}_{3p})$, where $p\equiv 1(mod\,3)$, the following hold.
\begin{enumerate}
\item $deg(p)=deg(2p)=2p-2$.
\item For $x\in \{1,p+1,3p-1,2p-1\}$, $deg(x)=2$.
\item For $x\in \mathbb{Z}_{3p}\setminus L$, $deg(x)=3$, where $L=\{3k\,\,:\,\,1\leq k\leq p-1\}\cup \{1,p,2p,p+1,2p-1, 3p-1\}$.
\end{enumerate}
\end{Lem}
\begin{proof}
The proof is similar to that of Lemma \ref{3.5}.
\end{proof}
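Although the proofs of Lemmas \ref{3.7} and \ref{3.8} are omitted, their degree claims can be confirmed by brute force for small primes $p\equiv 1(mod\,3)$; the sketch below is our own check, with the brute-force nil clean computation as an assumption.

```python
def nil_clean_elements(m):
    idem = {e for e in range(m) if e * e % m == e}
    nilp = {n for n in range(m) if pow(n, m, m) == 0}
    return {(e + n) % m for e in idem for n in nilp}

def degree(a, m):
    nc = nil_clean_elements(m)
    return sum(1 for y in range(1, m) if y != a and a * y % m in nc)

# Lemmas 3.7 and 3.8: degrees in G_N(Z_{3p}) for p = 1 (mod 3)
for p in (7, 13):
    m = 3 * p
    for k in range(1, p):
        a = 3 * k
        assert degree(a, m) == (4 if a in (p - 1, 2 * p + 1) else 5)
    assert degree(p, m) == 2 * p - 2 and degree(2 * p, m) == 2 * p - 2
    assert all(degree(x, m) == 2 for x in (1, p + 1, 3 * p - 1, 2 * p - 1))
    L = {3 * k for k in range(1, p)} | {1, p, 2 * p, p + 1, 2 * p - 1, 3 * p - 1}
    assert all(degree(x, m) == 3 for x in range(1, m) if x not in L)
```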
From Lemma \ref{3.7} and Lemma \ref{3.8}, the nil clean divisor graph of $\mathbb{Z}_{3p}$, where $p\equiv 1(mod\,3)$ is the following:
\begin{figure}[H]
\begin{pspicture}(0,3)(0,-1)
\scalebox{.8}{
\rput(5,0){
\psline (-11, 3)(-11, 0)
\psline (-10, 3.5)(-10, -.5)
\psline (-9, 3)(-9, 0)
\psline (-8, 3)(-8, 0)
\psline (-7, 3.5)(-7, -.5)
\psline (-6, 3)(-6, 0)
\psline (-4, 3)(-4, 0)
\psline (-3, 3.5)(-3, -.5)
\psline (-2, 3)(-2, 0)
\psline (3.5, 1.5)(-10, 3.5)
\psline (3.5, 1.5)(-10, -.5)
\psline (3.5, 1.5)(-7, 3.5)
\psline (3.5, 1.5)(-7, -.5)
\psline (3.5, 1.5)(-3, 3.5)
\psline (3.5, 1.5)(-3, -.5)
\psline (-9, 3)(3.5, 1.5)(-9, 0)
\psline (-6, 3)(3.5, 1.5)(-6, 0)
\psline (-2, 3)(3.5, 1.5)(-2, 0)
\psdot[linewidth=.05](-12,1.5)
\psdot[linewidth=.05](-11, 0)
\psdot[linewidth=.05](-11, 3)
\psdot[linewidth=.05](-10, 3.5)
\psdot[linewidth=.05](-10, -.5)
\psdot[linewidth=.05](-9, 3)
\psdot[linewidth=.05](-9, 0)
\psdot[linewidth=.05](-8, 3)
\psdot[linewidth=.05](-8, 0)
\psdot[linewidth=.05](-7, 3.5)
\psdot[linewidth=.05](-7, -.5)
\psdot[linewidth=.05](-6, 3)
\psdot[linewidth=.05](-6, 0)
\psdot[linewidth=.05](-4, 3)
\psdot[linewidth=.05](-4, 0)
\psdot[linewidth=.05](-3, 3.5)
\psdot[linewidth=.05](-3, -.5)
\psdot[linewidth=.05](-2, 3)
\psdot[linewidth=.05](-2, 0)
\psdot[linewidth=.05](-.5, 2)
\psdot[linewidth=.05](-.5, 1)
\psdot[linewidth=.05](1.2, 2.5)
\psdot[linewidth=.05](1.2, .5)
\psdot[linewidth=.05](2.5, 3.5)
\psdot[linewidth=.05](2.5, -.5)
\psdot[linewidth=.05](3.5, 1.5)
\psdot[linewidth=.005](-5, 1.3)
\psdot[linewidth=.005](-5, 1.6)
\psdot[linewidth=.005](-5.2, 1.3)
\psdot[linewidth=.005](-5.2, 1.6)
\psdot[linewidth=.005](-4.8, 1.3)
\psdot[linewidth=.005](-4.8, 1.6)
\psline (-11,3)(-12,1.5)(-11,0)
\psline (-8,3)(-12,1.5)(-8,0)
\psline (-4,3)(-12,1.5)(-4,0)
\psline (-.5,2)(-12,1.5)(-.5,1)
\psline (1.2,2.5)(-12,1.5)(1.2,.5)
\psline (-10,3.5)(-12,1.5)(-10,-.5)
\psline (-7,3.5)(-12,1.5)(-7,-.5)
\psline (-3,3.5)(-12,1.5)(-3,-.5)
\psline (-11,3)(-10,3.5)(-9,3)(-9,0)(-10,-.5)(-11,0)(-11,3)
\psline (-8,3)(-7,3.5)(-6,3)(-6,0)(-7,-.5)(-8,0)(-8,3)
\psline (-4,3)(-3,3.5)(-2,3)(-2,0)(-3,-.5)(-4,0)(-4,3)
\psline (-.5, 1)(1.2, .5)(3.5, 1.5)(1.2, 2.5)(-.5, 2)
\psline (3.5, 1.5)(1.2, 2.5)(2.5, 3.5)(3.5, 1.5)(1.2, .5)(2.5, -.5)(3.5, 1.5)
\rput(3.8, 1.5){$p$}
\rput(-12.4, 1.5){$2p$}
\rput(-10, 3.8){$k_1$}
\rput(-10, -.8){$l_1$}
\rput(-7, 3.8){$k_2$}
\rput(-7, -.8){$l_2$}
\rput(-3, 3.8){$k_{\frac{p-3}{2}}$}
\rput(-3, -.8){$l_{\frac{p-3}{2}}$}
\rput (-.5, 1.7){$p+1$}
\rput (-.5, 1.3){$3p-1$}
\rput (1, 2.8){$2p+1$}
\rput (1, .2){$p-1$}
\rput (2.5, 3.8){$1$}
\rput (2.5,-.8){$2p-1$}
\rput (-11,3.5){$a_1$}
\rput (-11,-.5){$c_1$}
\rput (-9, 3.5){$b_1$}
\rput (-9, -.5){$d_1$}
\rput (-8, 3.5){$a_2$}
\rput (-8, -.5){$c_2$}
\rput (-6, 3.5){$b_2$}
\rput (-6, -.5){$d_2$}
\rput (-4, 3.5){$a_{\frac{p-3}{2}}$}
\rput (-4, -.5){$c_{\frac{p-3}{2}}$}
\rput (-2, 3.5){$b_{\frac{p-3}{2}}$}
\rput (-2, -.5){$d_{\frac{p-3}{2}}$}
}}
\end{pspicture}
\caption{Nil clean divisor graph of $\mathbb{Z}_{3p}$, where $p\equiv 1(mod \,\,3)$.}\label{5}
\end{figure}
In Figure \ref{5}, $\{l_i,k_i\}\subseteq \{3k\,\,:\,\,1\leq k\leq p-1\}$, $a_ic_i\equiv 1(mod\,\,3p)$, $b_id_i\equiv 1(mod\,\,3p)$ and $a_ik_i \equiv c_il_i\equiv b_ik_i\equiv d_il_i\equiv 2p+1(mod\,\, 3p)$, for $1\leq i\leq \frac{p-3}{2}$. Also $a_i\equiv c_i\equiv 2(mod\,3)$ and $b_i\equiv d_i\equiv 1(mod\,3)$, for $1\leq i\leq \frac{p-3}{2}$. Hence we get the following theorem:
\begin{Thm}\label{3.9}
For any prime $p$ with $p\equiv 1(mod \,3)$, the following hold for $G_N(\mathbb{Z}_{3p})$.
\begin{enumerate}
\item The girth of $G_N(\mathbb{Z}_{3p})$ is $3$.
\item The clique number of $G_N(\mathbb{Z}_{3p})$ is $3$.
\item The diameter of $G_N(\mathbb{Z}_{3p})$ is $3$.
\item $\{p,2p\}$ is the unique smallest dominating set for $G_N(\mathbb{Z}_{3p})$; that is, the domination number of the graph is $2$.
\end{enumerate}
\end{Thm}
\begin{proof}
Since Figure \ref{4} and Figure \ref{5} are similar, the proof is analogous to that of Theorem \ref{3.6}.
\end{proof}
\section{Acknowledgement}
The first author was supported by the Government of India under DST (Department of Science and Technology), DST-INSPIRE registration no. IF160671.
\section{Disproving Conjecture~\ref{conj:wrong}}\label{sec:disprove}
In this section, we disprove Conjecture~\ref{conj:wrong}. We show that it does not hold even for trees of maximum degree three.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{tree.pdf}
\caption{The tree $T_4$.}
\label{fig:tree}
\end{center}
\end{figure}
Assume that Conjecture~\ref{conj:wrong} holds for some function $f$. Let $T_k$ be the complete binary tree of depth $k$ (see Figure~\ref{fig:tree} for an example). The maximum degree of $T_k$ is three and the number of edges of $T_k$ is $n_k=2^{k+1}-2$ for every $k$. Let $\bbT_k$ be the set of possible numbers of edges contained in a component of $T_k\setminus e$ for some edge $e$ of $T_k$.
Observe that for every edge $e$, the components of $T_k\setminus e$ have $2^i-2$ edges and $n_k-(2^i-1)=2^{k+1}-2^i-1$ edges for some $i\in[k]$. Thus, $\bbT_k=\{2^i-2|i\in[k]\}\cup \{2^{k+1}-2^i-1|i\in[k]\}$, and hence $|\bbT_k|\leq 2k$.
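The description of $\bbT_k$ can be checked directly against a heap-indexed complete binary tree. The following Python sketch (our own illustration, with our own naming) enumerates the component sizes of $T_k\setminus e$ over all edges $e$:

```python
def bbT(k):
    # component edge-counts of T_k minus an edge, over all edges;
    # nodes are heap-indexed 1..2^(k+1)-1, node j has parent j // 2
    n_k = 2 ** (k + 1) - 2
    sizes = set()
    for j in range(2, 2 ** (k + 1)):      # edge (j, j // 2) for each non-root j
        d = j.bit_length() - 1            # depth of node j
        sub = 2 ** (k - d + 1) - 2        # edges in the subtree rooted at j
        sizes.update({sub, n_k - sub - 1})
    return sizes

for k in range(1, 10):
    expected = ({2 ** i - 2 for i in range(1, k + 1)}
                | {2 ** (k + 1) - 2 ** i - 1 for i in range(1, k + 1)})
    assert bbT(k) == expected and len(bbT(k)) <= 2 * k
```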
It follows that the sum $\sum_{i=1}^{f(3)} t_i$, where $t_i\in \bbT_k$ for every $i\in [f(3)]$, can attain at most $(2k)^{f(3)}$ different values. Therefore, there exists $k_0$ such that $(2k)^{f(3)}<n_k$ for every $k>k_0$.
Fix $k>k_0$. Then, there exists $m\in [n_k]$ such that $m\not\equiv \sum_{i=1}^{f(3)} t_i \pmod{n_k}$ for any choice of values $t_i\in \bbT_k$.
Let $G_1$ and $G_2$ be $f(3)$-edge-connected graphs with minimum degree at least $f(|E(T_k)|)$ such that the number of edges of $G_1$ is congruent to $m$ modulo $n_k$, the number of edges of $G_2$ is congruent to $n_k-f(3)-m$ modulo $n_k$ and there are sets $S_1\subseteq V(G_1)$, $S_2\subseteq V(G_2)$ of size $f(3)$ such that the distance between every two distinct vertices of $S_i$ in $G_i$, $i=1,2$, is greater than $2k$ (where the distance is the length of the shortest path between the two vertices). See Figure~\ref{fig:star} for an example of a construction of $G_1$ with these properties, $G_2$ can be constructed in an analogous way.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{star.pdf}
\caption{A graph $G_1$ with the desired properties can be easily constructed by taking a star with $f(3)$ edges, subdividing each edge $k$ times, replacing each vertex by $t=\max(f(|E(T_k)|),n_k)$ vertices and each edge by a complete bipartite graph and finally, adding appropriate number of additional edges between the vertices corresponding to the same vertex of the subdivided star, so that the number of edges of the graph is congruent to $m$ modulo $n_k$. Taking vertices of $S_1$ as in the figure yields a set of size $f(3)$ with distance at least $2k+2$ between every two vertices.}
\label{fig:star}
\end{center}
\end{figure}
Let $G$ be a graph obtained from the disjoint union of $G_1$ and $G_2$ by adding a matching $M$ of size $f(3)$ between the vertices of $S_1$ and $S_2$. Then, $G$ is $f(3)$-edge-connected, of minimum degree at least $f(|E(T_k)|)$ and with number of edges divisible by $|E(T_k)|$. Assume that there exists a $T_k$-decomposition $\TT$ of $G$. Since the distance between any two vertices in $S_1$ and any two vertices in $S_2$ is greater than the distance between any two vertices in $T_k$, every copy of $T_k$ in $\TT$ contains at most one vertex of $S_1$ and at most one vertex of $S_2$, and therefore at most one edge of $M$. Note that each copy of $T_k$ with an edge in $M$ contains $t_i\in \bbT_k$ edges of $G_1$. Therefore, the number of edges of $G_1$ is $cn_k+\sum_{i=1}^{f(3)} t_i$, where $c$ is an integer and $t_i\in \bbT_k$ for every $i\in [f(3)]$. This yields a contradiction with the choice of the number of edges of~$G_1$.
\section{Proof of Lemma~\ref{lem:3tree}}\label{sec:3tree}
We start by showing that sufficiently highly edge-connected graph can be decomposed into copies of two trees with coprime numbers of edges.
\begin{lemma}\label{lem:conn-twotrees}
Let $T_1, T_2$ be trees with coprime numbers of edges. Then, there exists an integer $K=K(T_1,T_2)$ such that every $K$-edge-connected graph has a $\{T_1,T_2\}$-decomposition with less than $|E(T_1)|$ copies of $T_2$.
\end{lemma}
\begin{proof}
Let $m_1$, $m_2$ be the numbers of edges of $T_1$ and $T_2$, respectively, let $k_{T_1}$ be as in Theorem~\ref{thm:barat-thomassen}, and let $K=k_{T_1}+m_1m_2$. Let $G$ be a $K$-edge-connected graph and let $n$ be the smallest non-negative integer such that $m_1|(|E(G)|-nm_2)$. Since $m_1$ and $m_2$ are coprime, such $n$ exists and is smaller than $m_1$ by B\'{e}zout's Lemma.
By the greedy algorithm, it is possible to find a collection $\TT$ of $n$ edge-disjoint copies of $T_2$ in $G$. Then, $G\setminus E(\TT)$ is a $k_{T_1}$-edge-connected graph (as fewer than $m_1m_2$ edges were removed) with number of edges divisible by $m_1$. The result follows from Theorem~\ref{thm:barat-thomassen}.
\end{proof}
For the purpose of the proof of Lemma~\ref{lem:3tree}, we extend the definition of a graph by allowing hyperedges of size one, which we call {\em stubs} and we call the resulting object a {\em stub graph}. Each vertex of a stub graph can be incident with arbitrarily many stubs and the {\em degree} of a vertex is the number of edges and stubs incident with it. Moreover, we assign a positive integer $i_s$ to each stub $s$ and we call $i_s$ the {\em index of the stub} $s$. \new{Intuitively, a stub can be viewed as a remainder of an edge after one of its endvertices has been removed from the graph. The index of the stub then contains some information about the removed endvertex.}
We write $E(G)$ to denote the set consisting of the edges and the stubs of a stub graph $G$ and we call it the {\em edge set} of $G$, but we do not refer to stubs as edges otherwise.
We denote the set of stubs of index $i$ in a stub graph $G$ by $S_i(G)$. A {\em subgraph} of a stub graph is also a stub graph and we say that two subgraphs are edge-disjoint if their edge sets are disjoint, i.e., they do not share any edge or stub.
Let $\TT$ be a set of trees (without stubs). We extend the definition of $\TT$-decomposition for graphs to stub graphs. Informally, in a $\TT$-decomposition of a stub graph, a stub incident with a vertex $v$ plays the role of a subtree of $T\in \TT$, such that the vertex $v$ is a leaf of this subtree.
A {\em twig} is a pair $(T,r)$, where $T$ is a proper tree and $r$ is a leaf of $T$. We say that $r$ is the {\em root} of the twig.
Let $G$ be a stub graph, $s$ a stub incident with a vertex $v$ in $G$ and $(T,r)$ a twig disjoint from $G$. A stub graph obtained from $G$ by {\em expanding $s$ by $(T,r)$} is the stub graph obtained from $G\setminus s$ and $T$ by identifying $v$ and $r$.
An {\em embedding} of a tree $T$ in a stub graph $G$ is a subgraph $T'$ of $G$ such that each stub in $T'$ has different index and there exists a mapping $\st$ assigning a twig $(T_s,r_s)$ to each stub $s$ in $T'$ such that expanding every stub $s$ in $T'$ by $\st(s)$ yields a copy of $T$. See Figure~\ref{fig:stub-dec} for an example.
We say that a stub graph $G$ has a $\TT$-decomposition if its edge set can be decomposed into disjoint sets $\{E_i\}_{i\in [k]}$ such that each $E_i$ forms an embedding of some $T\in \TT$. Note that this definition coincides with the definition of a $\TT$-decomposition in the usual sense if $G$ has no stubs.
We denote the graph obtained from $G$ by removing all the stubs by $G^{-}$ and we say that $G$ is {\em $k$-edge-connected} if $G^{-}$ is $k$-edge-connected.
The next observation asserts that a $\TT$-decomposition of $G^{-}$ can be easily extended to a $\TT$-decomposition of $G$.
\begin{observation}\label{obs:stub-dec}
If $G^{-}$ has a $\TT$-decomposition $\TT_1$, there exists a $\TT$-decomposition $\TT_2$ of $G$ such that $\TT_1\subseteq \TT_2$. Moreover, given $T\in \TT$, there exists $\TT_2$ such that $\TT_2\setminus \TT_1$ contains only embeddings of $T$.
\end{observation}
\begin{proof}
It follows from the fact that a stub $s$ (with the vertex incident to it) forms an embedding of any proper tree $T$. It is enough to let $\st(s)=(T,r)$, where $r$ is some leaf of $T$.
\end{proof}
In particular, a stub graph $G$ with no edges has a $\TT$-decomposition for any set of proper trees $\TT$, since $G^{-}$ has a trivial (empty) $\TT$-decomposition.
It follows that Lemma~\ref{lem:conn-twotrees} holds for stub graphs as well.
Next, we introduce some more tools and terminology. Let $G$ be a multigraph. We call a partition $(A,B)$ of $V(G)$ into two parts a {\em cut} in $G$. We denote $E(A,B)$ the set of edges of $G$ incident with a vertex in both $A$ and $B$ and call $|E(A,B)|$ the {\em order of the cut} $(A,B)$. We now prove Lemma~\ref{lem:cut}. Note that the definition of a cut and the statement of the lemma trivially extend to stub graphs (by considering the multigraph $G^-$ instead of the stub graph $G$).
\begin{proof}[Proof of Lemma~\ref{lem:cut}]
Let $(A,B)$ be a cut in $G$ of order at most $2k$ such that $A$ is inclusion-wise minimal. Assume that $A$ has more than one vertex and let $(A_1,A_2)$ be a cut in $G[A]$. By minimality of $A$, we have that the cuts $(A_1,V(G)\setminus A_1)$ and $(A_2,V(G)\setminus A_2)$ have order greater than $2k$. Since $|E(A_1,V(G)\setminus A_1)|+ |E(A_2,V(G)\setminus A_2)|=|E(A,B)|+2|E(A_1,A_2)|$, $(A_1,A_2)$ has order greater than $k$.
\end{proof}
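The edge-counting identity used above, $|E(A_1,V(G)\setminus A_1)|+|E(A_2,V(G)\setminus A_2)|=|E(A,B)|+2|E(A_1,A_2)|$, can be spot-checked on a random multigraph; the following sketch is our own illustration.

```python
import random

def crossing(edges, S):
    # number of edges (with multiplicity) having exactly one endpoint in S
    return sum(1 for u, v in edges if (u in S) != (v in S))

random.seed(0)
edges = [(random.randrange(12), random.randrange(12)) for _ in range(80)]
edges = [(u, v) for u, v in edges if u != v]       # discard loops, keep parallels

A1, A2 = set(range(4)), set(range(4, 8))
A = A1 | A2
between = sum(1 for u, v in edges
              if (u in A1 and v in A2) or (u in A2 and v in A1))
assert (crossing(edges, A1) + crossing(edges, A2)
        == crossing(edges, A) + 2 * between)
```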
The following results of Czumaj and Strothmann were originally proven only for simple graphs; however, they easily extend to multigraphs.
\begin{theorem}[Czumaj, Strothmann~\cite{bib:czumaj}, extended]\label{thm:two-sparse-tree}
Every $2$-edge-connected multigraph $G$ contains a spanning tree $T$
such that
$\deg_T v\leq (\deg_G v+3)/2$ for every vertex $v$ of $G$.
\end{theorem}
\new{To find such a tree, it is enough to take an out-branching in a strongly connected balanced orientation of $G$.}
\begin{theorem}[Czumaj, Strothmann~\cite{bib:czumaj}, extended]\label{thm:sparse-tree}
Let $p$ be a positive integer. If a multigraph $G$ contains $2^p$ edge-disjoint spanning trees, then $G$ has a spanning tree $T$ such that
$\deg_T v\leq \deg_G v/2^p+3p/2$ for every vertex $v$ of $G$.
\end{theorem}
The following Corollary~\ref{cor:sparse-tree} of Theorem~\ref{thm:sparse-tree} was essentially proven in~\cite{bib:thomassen-paths}. Our version differs in some details. Among other things, we use the following theorem of Nash-Williams and Tutte to replace the requirement of having a collection of edge-disjoint spanning trees by edge-connectivity.
\begin{theorem}[Nash-Williams~\cite{bib:nash-williams}, Tutte~\cite{bib:tutte}]\label{thm:nash-williams}
If a multigraph $G$ is $2k$-edge-connected, then $G$ contains $k$ edge-disjoint spanning trees.
\end{theorem}
\begin{corollary}\label{cor:sparse-tree}
For every $\varepsilon>0$ and integer $m$, there exists $L$ such that every $2^n$-edge-connected stub graph $G$ with minimum degree at least $L$, where $n=2+m+ \lceil\log (1/\varepsilon)\rceil$,
has $2^{m}$ edge-disjoint spanning trees $T_1,\ldots, T_{2^m}$ such that
\[\sum_{i\in [2^m]}\deg_{T_i} v \leq \varepsilon \deg_{G} v\]
for every $v\in V(G)$.
\end{corollary}
\begin{proof}
By Theorem~\ref{thm:nash-williams}, the graph $G^{-}$ contains $2^{n-1}=2^m\cdot 2^{\lceil \log(1/\varepsilon)\rceil+1}$ edge-disjoint spanning trees. Thus, $G^{-}$ contains $2^m$ edge-disjoint $2^{\lceil \log(1/\varepsilon)\rceil+1}$-edge-connected spanning subgraphs (formed by unions of the spanning trees). From Theorem~\ref{thm:sparse-tree} applied to these $2^{\lceil \log(1/\varepsilon)\rceil+1}$-edge-connected graphs, it follows that $G^{-}$ contains $2^{m}$ edge-disjoint spanning trees $T_1,\ldots,T_{2^m}$ such that \[\sum_{i\in [2^m]}\deg_{T_i}v\leq \frac{\deg_{G^-} v}{2^{\lceil \log(1/\varepsilon)\rceil+1}}+3\cdot 2^{m}(\lceil \log(1/\varepsilon)\rceil+1)\leq \frac{\varepsilon \deg_{G} v}{2}+3\cdot 2^{m}(\lceil \log(1/\varepsilon)\rceil+1)\] for every $v\in V(G)$.
Moreover $3\cdot 2^m (\lceil \log(1/\varepsilon)\rceil+1) \leq \varepsilon L/2$ for $L$ sufficiently large. The result follows.
\end{proof}
Corollary~\ref{cor:sparse-tree} implies the following.
\begin{corollary}\label{cor:split}
For any positive integers $k_0$ and $\delta_0$, there exists $d_0$ such that the edge set of every $16k_0$-edge-connected stub graph $G$ with minimum degree at least $d_0$ can be decomposed into a $k_0$-edge-connected graph and a stub graph of minimum degree at least $\delta_0$.
\end{corollary}
\begin{proof}
Let $d_0=\max(2\delta_0,L)$, where $L$ is as in Corollary~\ref{cor:sparse-tree} for $\varepsilon=1/2$ and $m=\lceil \log k_0\rceil$. Observe that $2^n$ in Corollary~\ref{cor:sparse-tree} is then less than $16k_0$ (and equal to $8k_0$ if $k_0$ is a power of two). Thus, by Corollary~\ref{cor:sparse-tree}, there exist $k_0$ edge-disjoint spanning trees $T_1,\ldots, T_{k_0}$ such that $\sum_{i\in [k_0]}\deg_{T_i} v \leq 1/2 \deg_{G} v$ for every $v$. By the choice of $d_0$, $1/2 \deg_{G} v\geq \delta_0$ for every $v$ of $G$. Thus, $G\setminus (\bigcup_{i\in[k_0]} E(T_i))$ has minimum degree at least $\delta_0$.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[scale=1.2]{tree-stub.pdf}
\caption{Embedding of a tree. Stubs are depicted as arrows, roots are depicted as squares.}
\label{fig:stub-dec}
\end{center}
\end{figure}
We need the following easy consequence of B\'ezout identity for the proof of Lemma~\ref{lem:3tree}.
\begin{observation}\label{obs:bezout}
Let $a,b$ be positive integers and let $c$ be an integer such that $c>ab$ and $c$ is divisible by the greatest common divisor of $a$ and $b$. Then there exist non-negative integers $k_a$ and $k_b$ such that $k_b<a$ satisfying $k_a a +k_b b=c$.
\end{observation}
\begin{proof}
Let $d$ be the greatest common divisor of $a$ and $b$ and let $k_c$ be an integer such that $k_c d=c$. By B\'ezout identity there exist integers $x$ and $y$ satisfying $xa+yb=d$, thus $xk_c a +yk_c b=c$. Note that then also $(xk_c+ib)a +(yk_c-ia) b=c$ for any integer $i$. Let $i$ be such that $0\leq yk_c-ia <a$. Then $k_b=yk_c-ia$ and $k_a=xk_c+ib$ satisfy the claim of the observation; in particular, since $c>ab$ and $k_bb<ab$, $k_a$ must be positive.
\end{proof}
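The proof above is effectively an algorithm: take any B\'ezout pair, then shift the $b$-coefficient into $[0,a)$. A small Python sketch computing $k_a$ and $k_b$ under the observation's hypotheses (function names are ours, for illustration only):

```python
from math import gcd

def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with x*a + y*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def bezout_coefficients(a, b, c):
    """Non-negative (k_a, k_b) with k_a*a + k_b*b == c and 0 <= k_b < a,
    assuming a, b > 0, c > a*b and gcd(a, b) divides c, as in the
    observation above."""
    g, x, y = ext_gcd(a, b)
    assert c > a * b and c % g == 0
    k_c = c // g
    # the proof's choice of i: reduce the b-coefficient y*k_c modulo a
    k_b = (y * k_c) % a
    # the remainder is exactly divisible by a, and positive since k_b*b < a*b < c
    k_a = (c - k_b * b) // a
    return k_a, k_b
```

For example, `bezout_coefficients(3, 5, 16)` yields the pair $(2,2)$, since $2\cdot 3+2\cdot 5=16$.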
We now prove Lemma~\ref{lem:3tree}.
\begin{proof}[Proof of Lemma~\ref{lem:3tree}]
Let $k_0=\max(K(T,T_1),K(T,T_2), k_T)$, where $K$ is as in Lemma~\ref{lem:conn-twotrees} and $k_T$ is as in Theorem~\ref{thm:barat-thomassen}. Let $m=\max(|E(T)|,|E(T_1)|, |E(T_2)|)$, $d_0$ be as in Corollary~\ref{cor:split} for $k_0$ and $\delta_0=64k_0m+2|E(T)|(|E(T_1)|+|E(T_2)|)$. Let $k=16k_0$ and $\delta=d_0+2k$.
Let $(H_0, G_0):= (\emptyset, G)$. We repeat the following recursive procedure, obtaining pairs of stub graphs $(H_i,G_i)$, until $G_i$ is empty. Let $n$ be the number of steps after which $G_{n}$ is empty.
If $G_i$ is $k$-edge-connected or has only one vertex, we let $H_{i+1}=G_i$ and $G_{i+1}=\emptyset$.
Assume that $G_i$ is not $k$-edge-connected and has more than one vertex. Then, by Lemma~\ref{lem:cut}, there exists a cut $(A,B)$ in $G_i$ of order at most $2k$ such that $G[A]$ is $k$-edge-connected or has only one vertex.
Let $H_{i+1}=G_i[A]$ and let $C_{i+1}$ be the set of edges $uv\in E(G_i)$ with $u\in A$ and $v\in B$. Let $G_{i+1}$ be the stub graph obtained from $G_i[B]$ by adding a stub with index $i+1$ incident with $v$ for every edge in $C_{i+1}$ incident with $v$.
Observe that $V(G)=\dot{\bigcup}_{i=0}^{n}V(H_i)$. Moreover, the graphs $G_i$ and $H_i$ have the following properties.
\begin{claim}
For every $i\in [n]$, the following holds:
\begin{itemize}
\item There are at most $2k$ stubs with index $i$ created during the procedure (i.e., in total, there are at most $2k$ stubs with index $i$ in the stub graphs $H_1, \ldots, H_n$),
\item $G_i$ has minimum degree at least $\delta$,
\item $H_i$ is $k$-edge-connected or has only one vertex,
\item $H_i$ has minimum degree at least $\delta-2k$ (minimum of the empty set is $\infty$).
\end{itemize}
\end{claim}
\begin{inproof}
Since there is a one-to-one correspondence between the stubs of index $i$ and the edges in $C_i$ and $|C_i|\leq 2k$, there are at most $2k$ stubs with index $i$. Moreover, $\deg_{G_{i-1}}(v)=\deg_{G_{i}}(v)$ for every $v\in V(G_{i})$ and therefore $G_{i}$ has minimum degree at least $\delta$ by induction.
By construction, $H_i$ is $k$-edge-connected or has only one vertex and since $G_{i-1}$ has minimum degree at least $\delta$, $H_i$ has minimum degree at least $\delta-2k$ as required.
\end{inproof}
We call a $\{T,T_1,T_2\}$-decomposition {\em balanced} if the numbers of copies of $T_1$ and $T_2$ differ by at most $|E(T)|$.
Since $G_n$ is empty, it has a balanced $\{T,T_1,T_2\}$-decomposition $\TT_n$ (the trivial one).
Next, we proceed inductively, constructing a balanced $\{T,T_1,T_2\}$-decomposition $\TT_{i-1}$ of $G_{i-1}$ from a balanced decomposition $\TT_i$ of $G_i$ for $i\geq 2$. Moreover, in each step we increase the number of copies of $T_1$ in the decomposition by at most $|E(T)|$ and keep the number of copies of $T_2$ the same, or the other way round, we increase the number of copies of $T_2$ in the decomposition by at most $|E(T)|$ and keep the number of copies of $T_1$ the same.
In the last step, we construct a $\{T,T_1,T_2\}$-decomposition of $G$ from a balanced $\{T,T_1,T_2\}$-decomposition of $G_1$ in a similar way, ensuring that the numbers of copies of $T_1$ and $T_2$ in the constructed decomposition are the same.
Roughly speaking, each step of the construction has two phases: first, we replace every stub $s$ of index $i$ in $G_i$ by a subtree $\st(s)$ in $G_{i-1}$. Then we decompose the remaining part of $G_{i-1}$ using Lemma~\ref{lem:conn-twotrees}. In the last step, when $i=1$, we proceed in a slightly different way to ensure that the resulting decomposition contains the same number of copies of $T_1$ and $T_2$.
More formally, given a balanced $\{T,T_1,T_2\}$-decomposition $\TT_i$ of $G_{i}$, let $j=1$ if the number of copies of $T_1$ in $\TT_i$ is smaller than the number of copies of $T_2$ and $j=2$ otherwise. Recall that $V(G_{i-1})=V(H_i)\dot{\cup}V(G_i)$ and $E(G_{i-1})=E(H_i)\dot{\cup}C_{i} \dot{\cup} (E(G_i)\setminus S_{i}(G_i))$.
If $H_i$ has more than one vertex, it is $k$-edge-connected and thus, by Corollary~\ref{cor:split}, $H_i$ contains a spanning subgraph $R_i$ with minimum degree at least $4km$ such that $H_i'=H_i\setminus E(R_i)$ is $K(T,T_j)$-edge-connected. If $H_i$ has only one vertex, let $R_i=H_i$ and let $H'_i$ be an isolated vertex.
We replace every embedding of $T$, $T_1$ or $T_2$ in $\TT_i$ which contains a stub $s\in S_{i}$ by an embedding of $T$, $T_1$ or $T_2$ in $G_{i-1}\setminus E(H'_i)$. This yields a partial $\{T,T_1,T_2\}$-decomposition $\TT'$ of $G_{i-1}$.
Moreover, $\TT'$ is such that $E(H'_i)\subseteq E(G_{i-1})\setminus E(\TT')\subseteq E(H_i)$. Thus, the stub graph $H_i''=(V(H_i),E(G_{i-1})\setminus E(\TT'))$ has a $\{T,T_j\}$-decomposition $\TT''$ that contains at most $|E(T)|$ copies of $T_j$ by Lemma~\ref{lem:conn-twotrees} and by Observation~\ref{obs:stub-dec}, because $H_i''$ is either $K(T,T_j)$-edge-connected or has only one vertex (and therefore no edges).
Then, $\TT_{i-1}=\TT'\cup\TT''$ forms a $\{T,T_1,T_2\}$-decomposition of $G_{i-1}$. Since there are at most $|E(T)|$ copies of $T_j$ in $\TT''$, from the choice of $j$ it follows that if the difference between the number of copies of $T_1$ and $T_2$ in $\TT_i$ was at most $|E(T)|$, the difference in $\TT_{i-1}$ is also at most $|E(T)|$.
Let $K\in\TT_i$ be an embedding of $T$, $T_1$ or $T_2$ containing a stub $s$ with index $i$ (i.e., $s\in S_{i}(G_i)$). Note that $K$ contains at most one stub in $S_{i}(G_i)$. For each such $K$, we construct $K'\in \TT'$ such that $K\setminus S_i(G_i)\subseteq K'$ in the following way. We assume that $K$ is an embedding of $T$, the construction for $T_1$ and $T_2$ is analogous.
Let $\II$ be the set of the indices of the stubs in $K$. Let $v$ be the vertex incident with $s$ and let $uv$ be the edge in $C_{i}$ corresponding to the stub $s$.
We find an embedding $S_s$ of $\st(s)$ such that
\begin{itemize}
\item $v\in V(S_s)$ and it corresponds to the root of $\st(s)$,
\item $uv\in E(S_s)$ and $S_s\setminus v\subseteq R_i$, and
\item no stub in $S_s$ has its index in $\II$.
\end{itemize}
Then, $K':=S_s \cup (K\setminus s)$ is an embedding of $T$ in $G_{i-1}$.
Moreover, we ensure that $E(S_{s_1})$ and $E(S_{s_2})$ are disjoint for every two distinct stubs $s_1, s_2\in S_i(G_i)$. Thus, the edge sets of the embeddings in $\TT'$ will be mutually disjoint.
We construct an embedding $S_s$ of $\st(s)$ greedily.
Starting from $S_s$ which consists of the root in $v$ and the edge $uv$, we add edges and stubs one by one.
At the same time, we remove the used edges and stubs from $R_i$, making sure that no edge and no stub is used in more than one embedding.
Let $\II'$ be the set of the indices of the stubs in $S_s$.
Assume that $S_s$ is not yet an embedding of $\st(s)$ and let $w\in R_i$ be a vertex of $S_s$ to which we need to add an edge or a stub.
We argue that either $w$ has a neighbor $w'$ in $R_i\setminus S_s$ and therefore we can extend $S_s$ by the edge $ww'$ (removing $ww'$ from $R_i$) or $w$ is incident with a stub in $R_i$ such that its index is not in $\II\cup \II'$.
This is indeed the case; at the beginning, $w$ had degree at least $4km$ in $R_i$, at most $(2k-1)m$ edges and stubs from $R_i$ were removed by embedding trees corresponding to stubs in $S_{i}(G_i)\setminus \{s\}$, and at most $m$ edges and stubs incident with $w$ were removed or cannot be used for extending $S_s$ because the other endpoint of the edge is already in $S_s$.
This leaves at least $2km$ available edges and stubs incident with $w$.
Since $|\II\cup \II'|< m$, $R_i$ contains a stub incident with $w$ such that its index is not in $\II\cup \II'$, or an edge $ww'$ with $w'\notin S_s$.
At the last step, it remains to construct a $\{T,T_1,T_2\}$-decomposition of $G$ from a balanced $\{T,T_1,T_2\}$-decomposition of $G_1$. Note that $H_1$ is not a single vertex and has no stubs. Let $R_1$, $H_1'$ be subgraphs of $H_1$ defined as before, in particular, $H'_1$ is $k_T$-edge-connected (since $k_0\geq k_T$).
Let $t$, $t_1$ and $t_2$ be the numbers of embeddings of $T$, $T_1$ and $T_2$ in $\TT_1$ respectively. Without loss of generality, we assume that $t_1\leq t_2$.
Since $|E(G)|$ is divisible by the greatest common divisor of $|E(T)|$ and $|E(T_1)|+|E(T_2)|$,
$|E(G)|- t|E(T)|-t_2(|E(T_1)|+|E(T_2)|)$ is also divisible by the greatest common divisor of $|E(T)|$ and $|E(T_1)|+|E(T_2)|$. By Observation~\ref{obs:bezout}, there exists an integer $0\leq t'<|E(T)|$ such that $|E(G)|- t|E(T)|-t_2(|E(T_1)|+|E(T_2)|)-t'(|E(T_1)|+|E(T_2)|)$ is divisible by $|E(T)|$.
We greedily construct $t'+(t_2-t_1)$ copies of $T_1$ and $t'$ copies of $T_2$ in $R_1$, denote the resulting partial $\{T_1,T_2\}$-decomposition by $\TT^{*}$ and remove its edges from $R_1$. Note that the minimum degree of $R_1$ decreases by at most $2|E(T)|(|E(T_1)|+|E(T_2)|)$ and thus it is still at least $4km$. Thus, we can proceed in the same way as above, i.e., we construct a partial $\{T,T_1,T_2\}$-decomposition $\TT'$ from $\TT_1$ by expanding the stubs, using the remaining edges of $R_1$.
Then, $\TT^{*}\cup \TT'$ is a partial $\{T,T_1,T_2\}$-decomposition that contains the same number of copies of $T_1$ and $T_2$.
As before, let $H_1''$ be a graph obtained from $H_1'$ by adding the unused edges of $R_1$. By our choice of $t'$, the number of edges of $H_1''$ is divisible by $|E(T)|$ and thus by Theorem~\ref{thm:barat-thomassen} it has a $T$-decomposition $\TT''$. Then, $\TT^*\cup \TT'\cup \TT''$ is a $\{T,T_1,T_2\}$-decomposition that contains the same number of copies of $T_1$ and $T_2$.
\end{proof}
\section{Introduction}\label{sec:intro}
\input{intro-trees.tex}
\section{Disproving Conjecture~\ref{conj:wrong}}\label{sec:disprove}
\input{disprove.tex}
\section{Decomposition into coprime trees}\label{sec:twotrees}
\input{forests.tex}
\section*{Acknowledgements}
Part of this work was done while the first author was a postdoc at Laboratoire d’Informatique du Parall\'elisme,
\'Ecole Normale Sup\'erieure de Lyon,
69364 Lyon Cedex 07, France.
| {
"timestamp": "2018-03-13T01:02:39",
"yymm": "1803",
"arxiv_id": "1803.03704",
"language": "en",
"url": "https://arxiv.org/abs/1803.03704",
"abstract": "The Barat-Thomassen conjecture, recently proved in [Bensmail et al.: A proof of the Barat-Thomassen conjecture. J. Combin. Theory Ser. B, 124:39-55, 2017.], asserts that for every tree T, there is a constant $c_T$ such that every $c_T$-edge connected graph G with number of edges (size) divisible by the size of T admits an edge partition into copies of T (a T-decomposition). In this paper, we investigate in which case the connectivity requirement can be dropped to a minimum degree condition. For instance, it was shown in [Bensmail et al.: Edge-partitioning a graph into paths: beyond the Barat-Thomassen conjecture.arXiv:1507.08208] that when T is a path with k edges, there is a constant $d_k$ such that every 24-edge connected graph G with size divisible by k and minimum degree $d_k$ has a T-decomposition. We show in this paper that when F is a coprime forest (the sizes of its components being a coprime set of integers), any graph G with sufficiently large minimum degree has an F-decomposition provided that the size of F divides the size of G (no connectivity is required). A natural conjecture asked in [Bensmail et al.: Edge-partitioning a graph into paths: beyond the Barat-Thomassen conjecture.arXiv:1507.08208] asserts that for a fixed tree T, any graph G of size divisible by the size of T with sufficiently high minimum degree has a T-decomposition, provided that G is sufficiently highly connected in terms of the maximal degree of T. The case of maximum degree 2 is answered by paths. We provide a counterexample to this conjecture in the case of maximum degree 3.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "Edge-decomposing graphs into coprime forests"
} |
https://arxiv.org/abs/1412.1581 | On Low Tree-Depth Decompositions | The theory of sparse structures usually uses tree like structures as building blocks. In the context of sparse/dense dichotomy this role is played by graphs with bounded tree depth. In this paper we survey results related to this concept and particularly explain how these graphs are used to decompose and construct more complex graphs and structures. In more technical terms we survey some of the properties and applications of low tree depth decomposition of graphs. | \section{Tree-Depth}
The {\em tree-depth} of a graph is a minor monotone graph
invariant that has been defined in \cite{Taxi_tdepth}, and which is equivalent or
similar to the {\em rank
function} (used for the analysis of countable graphs, see e.g.\ \cite{Nev2003}),
the {\em vertex ranking number} \cite{vertex_ranking,Schaffer}, and the minimum height of an elimination tree \cite{Bodlaender1995}.
Tree-depth can also be seen as an analog for undirected graphs of the cycle
rank defined by Eggan \cite{Eggan1963}, which is a parameter relating
digraph complexity to other areas such as regular language complexity and
asymmetric matrix factorization.
The notion of tree-depth found a wide range of applications, from the study of non-repetitive coloring \cite{Thue_choos} to the proof of the homomorphism preservation theorem for finite structures \cite{Rossman2007}. Recall the definition of tree-depth:
\begin{definition}
The {\em tree-depth} ${\rm td}(G)$ of a graph $G$ is
defined as
the minimum height\footnote{Here the height is defined as the maximum number of vertices in a chain from a root to a leaf} of a rooted forest $Y$ such that $G$ is a subgraph of the closure of $Y$ (that is of the graph obtained by adding edges between a vertex and all its ancestors). In particular, the tree-depth of a disconnected graph is the maximum of the tree-depths of its connected components.
\end{definition}
Several characterizations of tree-depth have been given, which can be seen as possible alternative definitions. Let us mention:
\begin{trivlist}
\setlength{\itemsep}{3mm}
\item {\bf TD$1$.} The tree-depth of a graph $G$ is the minimum, over all trivially perfect supergraphs of $G$, of the order of a largest clique \cite{td-wiki}. Recall that a graph is {\em trivially perfect} if it has the property that in each of its induced subgraphs the size of the maximum independent set equals the number of maximal cliques \cite{Golumbic1978105}. This characterization follows directly from the property that a connected graph is trivially perfect if and only if it is the comparability graph of a rooted tree \cite{Golumbic1978105}.
\item {\bf TD$2$.} The tree-depth of a graph is the minimum number of colors in a {\em centered coloring} of $G$, that is in a vertex coloring of $G$ such that in every connected subgraph of $G$ some color appears exactly once \cite{Taxi_tdepth}.
\item {\bf TD$3$.}
A strongly related notion is vertex ranking, which has been investigated in \cite{vertex_ranking,Schaffer}.
A {\em vertex ranking} (or {\em ordered coloring}) of a graph is a vertex coloring by a linearly ordered set of colors such that for every path in the graph with end vertices of the same color there is a vertex on this path with a higher color. The equality of the minimum number
of colors in a vertex ranking and the tree-depth is proved in \cite{Taxi_tdepth}.
\item {\bf TD$4$.} The tree-depth of a graph $G$ with connected components $G_1,\dots,G_p$, is recursively defined by:
$$
{\rm td}(G)=\begin{cases}
1&\text{ if }G\simeq K_1\\
\displaystyle\max_{i=1}^p {\rm td}(G_i)&\text{ if $G$ is disconnected}\\
\displaystyle 1+\min_{v\in V(G)}{\rm td}(G-v)&\text{ if $G$ is connected and }G\not\simeq K_1
\end{cases}
$$
The equivalence between the value given by this recursive definition and minimum height of an elimination tree, as well as the equality of this value with the tree-depth are proved in \cite{Taxi_tdepth}.
\item {\bf TD$5$.} The tree-depth can also be defined by means of games, see \cite{Giannopoulou2011, Gruber2008,Hunter2011}.
In particular, this leads to a min-max formula for tree-depth in the spirit of the min-max formula relating tree-width and bramble size \cite{Seymour1993}. Precisely, a {\em shelter} in a graph $G$ is a family $\mathcal S$ of non-empty connected
subgraphs of $G$ partially ordered by
inclusion such that for every subgraph $H\in\mathcal S$ not minimal in $\mathcal S$ and for every $x\in H$ there exists $H'\in\mathcal S$ covered by $H$ (in the partial order) such that $x\not\in H'$.
The {\em thickness} of a shelter $\mathcal S$ is the minimal length of a maximal chain of $\mathcal S$. Then the tree-depth of a graph $G$ equals the maximum thickness of a shelter in $G$ \cite{Giannopoulou2011}.
\item {\bf TD$6$.} Also, graphs with tree-depth at most $t$ can be theoretically characterized by means of a finite set of forbidden minors, subgraphs, or even induced subgraphs. But in each case, the number of obstructions grows at least like a double (and at most a triple) exponential in $t$ \cite{Dvorak2012969}.
\end{trivlist}
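The recursive formula {\bf TD$4$} translates directly into an (exponential-time) algorithm for computing tree-depth. A small Python sketch for illustration (function names and graph encoding are ours; practical only for tiny graphs):

```python
from functools import lru_cache

def tree_depth(vertices, edges):
    """Tree-depth via the recursive formula TD4.

    `vertices`: iterable of hashable labels; `edges`: set of 2-element
    frozensets.  Exponential time -- for tiny graphs only.
    """
    def components(vs):
        # connected components of the subgraph induced by vs
        vs, comps = set(vs), []
        while vs:
            stack, comp = [next(iter(vs))], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(u for u in vs
                             if u not in comp and frozenset((u, v)) in edges)
            vs -= comp
            comps.append(frozenset(comp))
        return comps

    @lru_cache(maxsize=None)
    def td(vs):
        if len(vs) == 1:
            return 1
        comps = components(vs)
        if len(comps) > 1:
            return max(td(c) for c in comps)          # disconnected case
        return 1 + min(td(vs - {v}) for v in vs)      # delete a vertex

    all_vs = frozenset(vertices)
    return td(all_vs) if all_vs else 0
```

For example, the path $P_4$ has tree-depth $3$ (delete the second vertex, then recurse), and $K_n$ has tree-depth $n$.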
More generally, classes with bounded tree-depth can be characterized by several properties:
\begin{trivlist}
\setlength{\itemsep}{3mm}
\item {\bf TD$7$.} A class of graphs $\mathcal C$ has bounded tree-depth if and only if there is some integer $k$ such that graphs in $\mathcal C$ exclude $P_k$ as a subgraph.
More precisely, while computing the tree-depth of a graph $G$ is a hard problem, it can be (very roughly) approximated by considering the height $h$ of a Depth-First Search tree of $G$, as $\lceil\log_2 (h+2)\rceil\leq {\rm td}(G)\leq h$ \cite{Sparsity}.
\item {\bf TD$8$.} A class of graphs $\mathcal C$ has bounded tree-depth if and only if there are integers $s,t,q$ such that graphs in
$\mathcal C$ exclude $P_s, K_t,$ and $K_{q,q}$ as induced subgraphs (this follows from the previous item and \cite[Theorem 3]{Atminas2014}, which states that for every $s$, $t$, and $q$, there is a number $Z = Z(s, t, q)$ such that every graph with a path of length at least $Z$ contains either $P_s$ or $K_t$ or $K_{q,q}$ as an induced subgraph).
\item {\bf TD$9$.}
A monotone class of graphs has bounded tree-depth if and only if
it is well quasi-ordered for the induced-subgraph relation (with vertices possibly colored using $k\geq 2$ colors) (follows from
\cite{Ding1992}).
\item {\bf TD$10$.}
A monotone class of graphs has bounded tree-depth if and only if
First-order logic (FO) and monadic second-order (MSO) logic have the same expressive power on the class \cite{6280445}.
\end{trivlist}
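The DFS-based approximation from {\bf TD$7$} is straightforward to implement. A sketch, assuming a connected graph given as an adjacency dict (our encoding):

```python
def dfs_height(adj, root):
    """Height (number of vertices on a longest root-to-leaf chain) of a
    DFS tree of the connected graph `adj` (dict: vertex -> set of
    neighbours), rooted at `root`.  By TD7, if h is the returned value,
    then ceil(log2(h + 2)) <= td(G) <= h."""
    seen = set()

    def dfs(v):
        seen.add(v)
        # neighbours are filtered lazily, so vertices visited deeper in
        # the recursion are correctly skipped here
        return 1 + max((dfs(u) for u in adj[v] if u not in seen), default=0)

    return dfs(root)
```

For the path $P_5$ rooted at an endpoint this returns $h=5$, sandwiching ${\rm td}(P_5)=3$ between $\lceil\log_2 7\rceil=3$ and $5$.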
Classes of graphs with tree-depth at most $t$ are computationally very simple, as witnessed by the following properties:
\begin{trivlist}
\setlength{\itemsep}{3mm}
\item It follows from {\bf TD$9$} that every hereditary property can be tested in polynomial time when restricted to graphs with tree-depth at most $t$.
Let us emphasize how one can combine {\bf TD$8$} and {\bf TD$9$}
to get complexity results for $P_s$-free graphs. Recall that a graph $G$ is {\em $k$-choosable} if for every assignment of a set $S(v)$ of $k$ colors to every vertex $v$ of $G$, there is a proper coloring of $G$ that assigns to each vertex $v$ a color from $S(v)$ \cite{vizing,ert}. Note that in general, for $k>2$, deciding $k$-choosability for bipartite graphs is $\Pi_2^P$-complete, hence more difficult that both NP and co-NP problems. It was proved in \cite{Heggernes2009} that for $P_5$-free graphs, that is, graphs excluding $P_5$ as an induced subgraph, k-choosability is fixed-parameter tractable. For general $P_s$-free graphs we prove:
\begin{theorem}
For all integers $s$ and $k$, there is a polynomial time algorithm to decide whether a $P_s$-free graph $G$ is $k$-choosable.
\end{theorem}
\begin{proof}
Assume $G$ is $P_s$-free. We can decide in polynomial time whether $G$ contains $K_{k+1}$ or $K_{k,k^k}$ as an induced subgraph.
In the affirmative, $G$ is not $k$-choosable. Otherwise, the tree-depth of $G$ is bounded by some constant $C(s,k)$. As the property of being $k$-choosable is hereditary, we can use a polynomial time algorithm deciding whether a graph with tree-depth at most $C(s,k)$
is $k$-choosable.
\end{proof}
\item Graphs with tree-depth at most $t$ have a (homomorphism) core of order bounded by a function of $t$ \cite{Taxi_tdepth}. In other words, every graph $G$ with tree-depth at most $t$ has an induced subgraph $H$ of order at most $F(t)$ such that there exists an adjacency preserving map (that is, a {\em homomorphism}) from $V(G)$ to $V(H)$.
\item The satisfaction of an ${\rm MSO}_2$ property $\phi$ on a graph $G$ from a class with tree-depth at most $t$ can be checked in time $O(f(\phi, t)\cdot|G|)$, where $f$ has an elementary dependence on $\phi$ \cite{Gajarsky2012}. This is in contrast with the dependence arising for ${\rm MSO}_2$-model checking in classes with bounded treewidth using Courcelle's algorithm \cite{Courcelle2}, where $f$ involves a tower of exponents of height growing with $\phi$ (which is generally unavoidable \cite{Frick2004}). These properties led to the study of classes with bounded shrub-depth, generalizing classes with bounded tree-depth, and enjoying similar properties for ${\rm MSO}_1$-logic
\cite{Gajarsky2012,Ganian2012}. Concerning the dependency on the tree-depth $t$, note that the
$(t + 1)$-fold exponential algorithm for MSO model-checking given by
Gajarsk{\'y} and Hlin{\v e}n\'y in \cite{gajarsky2012faster} is essentially optimal \cite{Lampis2013}.
\end{trivlist}
Graphs with bounded tree-depth form the building blocks for more complicated graphs, with which we deal in the next section.
\section{Low Tree-Depth Decomposition of Graphs}
Several extensions of the chromatic number have been proposed and studied in the literature. For instance, the {\em acyclic chromatic number} is the minimum number of colors in a proper vertex-coloring such that any two colors induce an acyclic graph (see e.g. \cite{AMS,Borodin1979}). More generally, for a fixed parameter $p$, one can ask what is the minimum number of colors in a proper vertex-coloring of a graph $G$, such that any subset $I$ of at most $p$ colors induces a subgraph with treewidth at most $|I|-1$. In this setting, the value obtained for $p=1$ is the chromatic number, while the value obtained for $p=2$ is the acyclic chromatic number.
In this setting, the following result has been proved by Devos, Oporowski, Sanders, Reed, Seymour and Vertigan using the structure theorem for graphs excluding a minor:
\begin{theorem}[\cite{2tw}]
\label{th:2tw}For every proper minor closed class $\mathcal K$ and
integer $k\geq 1$, there is an integer $N=N({\mathcal K},k)$,
such that every graph $G \in
{\mathcal K}$ has a vertex partition into $N$ graphs such that
any $j\leq k$ parts form a graph with tree-width at most $j-1$.
\end{theorem}
The stronger concept of low tree-depth decomposition has been introduced by the authors in \cite{Taxi_tdepth}.
\begin{definition}
A {\em low tree-depth decomposition} with parameter $p$ of a graph $G$ is a coloring of the vertices of $G$, such that any subset $I$ of at most $p$ colors induces a subgraph with tree-depth at most $|I|$. The minimum number of colors in a low tree-depth decomposition with parameter $p$ of $G$ is denoted by $\chi_p(G)$.
\end{definition}
For instance, $\chi_1(G)$ is the (standard) chromatic number of $G$, while $\chi_2(G)$ is the {\em star chromatic number} of $G$, that is the minimum number of colors in a proper vertex-coloring of $G$ such that any two colors induce a star forest (see e.g. \cite{alon2,Taxi_jcolor}).
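A proper coloring is a star coloring precisely when no path on four vertices receives only two colors (the classical reformulation of the star forest condition). A small Python checker illustrating this, with graph encoding and names of our choosing:

```python
def is_star_coloring(adj, color):
    """Check that `color` (dict vertex -> color) is a proper coloring of
    the graph `adj` (dict vertex -> set of neighbours) in which no path
    on four vertices is bicolored, i.e. every two color classes induce
    a star forest."""
    # properness: adjacent vertices get distinct colors
    if any(color[v] == color[u] for v in adj for u in adj[v]):
        return False
    # enumerate all paths a-b-c-d and count the colors they receive
    for b in adj:
        for a in adj[b]:
            for c in adj[b] - {a}:
                for d in adj[c] - {a, b}:
                    if len({color[a], color[b], color[c], color[d]}) == 2:
                        return False
    return True
```

For the path $P_4$, the coloring $0,1,0,2$ is a star coloring while $0,1,0,1$ is not, matching $\chi_2(P_4)=3$.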
The authors were able to extend Theorem~\ref{th:2tw} to low tree-depth decomposition in \cite{Taxi_tdepth}.
Then, using the concept of transitive fraternal augmentation \cite{POMNI}, the authors further extended the existence of low tree-depth decompositions (with a bounded number of colors) to classes with bounded expansion, the definition of which we recall now:
\begin{definition}
A class $\mathcal C$ has {\em bounded expansion} if there exists a function $f:\mathbb{N}\rightarrow\mathbb{N}$ such that every topological minor $H$ of a graph $G\in\mathcal{C}$ has an average degree bounded by $f(p)$, where $p$ is the maximum number of subdivisions per edge needed to turn $H$ into a subgraph of $G$.
\end{definition}
Extending low tree-depth decomposition to classes with bounded expansion is best possible:
\begin{theorem}[\cite{POMNI}]
\label{thm:chiBE}
Let $\mathcal{C}$ be a class of graphs, then the following are equivalent:
\begin{enumerate}
\item for every integer $p$ it holds $\sup_{G\in\mathcal{C}}\chi_p(G)<\infty$;
\item the class $\mathcal{C}$ has bounded expansion.
\end{enumerate}
\end{theorem}
Properties and characterizations of classes with bounded expansion will be discussed in more detail in Section~\ref{sec:taxonomy} (we refer the reader to \cite{Sparsity} for a thorough analysis). Let us mention that classes with bounded expansion in particular include proper minor closed classes (as for instance planar graphs or graphs embeddable on some fixed surface), classes with bounded degree, and more generally classes excluding a topological minor. Thus on the one hand the classes of graphs with bounded expansion include most of the sparse classes of structural graph theory, yet on the other hand they have pleasant algorithmic and extremal properties.
On the other hand, one could ask whether for proper minor-closed classes there exists a stronger coloring than the one given by low tree-depth decompositions. Precisely, one can ask what is
the minimum number of colors required for a vertex coloring of a graph $G$, so that any subgraph $H$ of $G$ gets at least $f (H)$ colors. (For instance that the star coloring corresponds to the graph function where any $P_4$ gets at least $3$ colors.)
Define the {\em upper chromatic number} $\overline{\chi}(H)$ of a graph $H$ as the greatest integer, such that for any proper minor closed class of graphs $\mathcal C$, there exists a constant $N=N(\mathcal C, H)$, such that any graph $G\in\mathcal C$ has a vertex coloring by at most $N$ colors so that any subgraph of $G$ isomorphic to $H$ gets at least $\overline{\chi}(H)$ colors.
The authors proved in \cite{Taxi_tdepth} that $\overline{\chi}(H)={\rm td}(H)$, showing that low tree-depth decomposition is the best we can achieve for proper minor closed classes. Note that the tree-depth of a graph $G$ is also related to the chromatic numbers $\chi_p(G)$ by ${\rm td}(G)=\max_p \chi_p(G)$ \cite{Taxi_tdepth}.
\section{Low Tree-Depth Decomposition and Restricted Dualities}
The original motivation of low tree-depth decomposition was to prove the existence
of a triangle-free graph $H$ such that every triangle-free planar graph $G$ admits a homomorphism to $H$, thus providing a structural strengthening of Gr\"otzsch's theorem \cite{Taxi_jcolor}.
Recall that a {\em homomorphism} of a graph $G$ to a graph $H$ is a mapping from the vertex set $V(G)$ of $G$ to the vertex set $V(H)$ of $H$ that preserves adjacency.
The existence (resp. non-existence) of a homomorphism of $G$ to $H$ will be denoted by $G\rightarrow H$ (resp. by $G\nrightarrow H$).
We refer the interested reader to the monograph \cite{HN} for a detailed study of graph homomorphisms.
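For small graphs, the existence of a homomorphism can be tested by exhaustive search; a minimal sketch (graphs encoded as adjacency dicts, our convention):

```python
from itertools import product

def has_homomorphism(G, H):
    """Brute-force search for a homomorphism G -> H (graphs as dicts
    vertex -> set of neighbours).  Returns a vertex map if one exists,
    otherwise None; exponential, for illustration only."""
    gv, hv = list(G), list(H)
    for image in product(hv, repeat=len(gv)):
        f = dict(zip(gv, image))
        # adjacency must be preserved: every edge uv of G maps to an edge
        if all(f[u] in H[f[v]] for v in G for u in G[v]):
            return f
    return None
```

For instance, $C_5\rightarrow K_3$ (any proper $3$-coloring of the $5$-cycle is such a homomorphism), while $K_3\nrightarrow C_5$ since $C_5$ is triangle-free and loopless.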
Thus the above planar triangle-free problem can be restated as follows: Prove that there exists
a graph $H$ such that $K_3\nrightarrow H$ and such that for every planar graph $G$ it holds
$$
K_3\nrightarrow G\quad\iff\quad G\rightarrow H.
$$
More generally, we are interested in the following problem: given a class of graphs $\mathcal C$ and a connected graph $F$, find a graph $D_{\mathcal{C}}(F)$ for $\mathcal{C}$ (which we shall refer to as a {\em dual} of $F$ for $\mathcal C$), such that $F\nrightarrow D_{\mathcal{C}}(F)$ and such that for every $G\in\mathcal C$ it holds
$$
F\nrightarrow G\quad\iff\quad G\rightarrow D_{\mathcal{C}}(F).
$$
(Note that $D_{\mathcal{C}}(F)$ is not uniquely determined by the above equivalence.)
A couple $(F,D_{\mathcal{C}}(F))$ with the above property is called
a {\em restricted duality} of $\mathcal C$.
\begin{example}
For the special case of triangle-free planar graphs, the existence of a dual was proved by the authors in \cite{Taxi_tdepth} and
the minimum order dual has been proved to be the Clebsch graph
by Naserasr \cite{Nas}.
$\forall\text{ planar }G:$
$$\duality[15mm]{K3}{Clebsch}.$$
Note that this restricted homomorphism duality extends to the class of all graphs excluding $K_5$ as a minor \cite{Naserasr20095789}.
\end{example}
\begin{example}
A restricted homomorphism duality for toroidal graphs follows from the existence of a finite set of obstructions for $5$-coloring
proved by Thomassen in \cite{Thomassen199411}, noticing that all the obstructions shown in Fig.~\ref{fig:6crit} are homomorphic images of one of them, namely $C_1^3$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.75\textwidth]{tore_6crit}
\end{center}
\caption{The $6$-critical graphs for the torus.}
\label{fig:6crit}
\end{figure}
Thus we get the following restricted homomorphism duality.
$\forall\text{ toroidal }G:$
$$\duality[15mm]{T11}{K5}$$
\end{example}
\begin{definition}
A class $\mathcal C$ with the property that every connected graph $F$ has a dual for $\mathcal C$ is said to have {\em all restricted dualities}.
\end{definition}
In \cite{Taxi_tdepth} we proved, using low tree-depth decomposition, that every proper minor closed class $\mathcal C$ has all restricted dualities. We generalized this result in \cite{POMNIII} to classes with bounded expansion. We briefly outline this.
In the study of restricted homomorphism dualities, a main tool appeared to be the notion of $t$-approximation:
\begin{definition}
Let $G$ be a graph and let $t$ be a positive integer. A graph $H$ is
a {\em $t$-approximation} of $G$ if $G$ is homomorphic to
$H$ (i.e. $G\rightarrow H$) and every subgraph of $H$ of order at most $t$ is homomorphic to $G$.
\end{definition}
Indeed the following theorem is proved in \cite{FO_CSP}:
\begin{theorem}
\label{thm:dual_approx}
Let $\mathcal C$ be a class of graphs. Then the following are equivalent:
\begin{enumerate}
\item The class $\mathcal C$ is bounded and has all restricted dualities (i.e. every connected graph $F$ has a dual for $\mathcal C$);
\item For every integer $t$ there is a constant $N(t)$ such that every graph $G\in\mathcal{C}$ has a $t$-approximation of order at most $N(t)$.
\end{enumerate}
\end{theorem}
The following lemma stresses the connection existing between $t$-approximation and low tree-depth decomposition:
\begin{theorem}[\cite{FO_CSP}]
\label{thm:chi2approx}
For every integer $t$ there exists a constant $C_t$ such that every
graph $G$ has a $t$-approximation $H$ with order
$$
|H|\leq C_t^{\chi_t(G)^t}.
$$
\end{theorem}
Hence we have the following corollary of Theorems~\ref{thm:dual_approx}, \ref{thm:chi2approx}, and~\ref{thm:chiBE}, which was originally proved in \cite{POMNIII}:
\begin{corollary}
Every class with bounded expansion has all restricted dualities.
\end{corollary}
The connection between classes with bounded expansion and restricted dualities appears to be even stronger, as witnessed by the following (partial) characterization theorem.
\begin{theorem}[\cite{FO_CSP}]
Let $\mathcal{C}$ be a topologically closed class of graphs (that is, a class closed under the operation of graph subdivision).
Then the following are equivalent:
\begin{enumerate}
\item the class $\mathcal{C}$ has all restricted dualities;
\item the class $\mathcal{C}$ has bounded expansion.
\end{enumerate}
\end{theorem}
This theorem has also a variant in the context of directed graphs:
\begin{theorem}[\cite{FO_CSP}]
Let $\mathcal{C}$ be a class of directed graphs closed under reorientation.
Then the following are equivalent:
\begin{enumerate}
\item the class $\mathcal{C}$ has all restricted dualities;
\item the class $\mathcal{C}$ has bounded expansion.
\end{enumerate}
\end{theorem}
\section{Intermezzo: Low Tree-Depth Decomposition and Odd-Distance Coloring}
Let $n$ be an odd integer and let $G$ be a graph. The problem of
finding a coloring of the vertices of $G$ with the minimum number of colors such that two vertices at distance $n$ are colored differently, called a {\em $D_n$-coloring} of $G$, was introduced in 1977 in Graph Theory Newsletter by E. Sampathkumar \cite{Sampathkumar1977} (see also \cite{jensen2011graph}). In \cite{Sampathkumar1977},
Sampathkumar claimed that, for every odd integer $n$, every planar graph has a $D_n$-coloring with $5$ colors, and conjectured that $4$ colors suffice.
Unfortunately, the claimed result was flawed, as witnessed by the graph depicted on Figure~\ref{fig:D3col}, which needs $6$ colors for a $D_3$-coloring \cite{Sparsity}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{cube6chromatic}
\end{center}
\caption{On the left, a planar graph $G$ needing $6$ colors for a $D_3$-coloring. On the right, a witness: this is a graph with vertex set $A\subset V(G)$ in which adjacent vertices are at distance $3$ in $G$, and thus should get distinct colors in a $D_3$-coloring of $G$.}
\label{fig:D3col}
\end{figure}
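The notion can be made concrete: a $D_n$-coloring of $G$ is exactly a proper coloring of the auxiliary graph that connects vertices at distance exactly $n$ in $G$. A minimal pure-Python sketch (the function names are ours), using first-fit greedy coloring of the auxiliary graph, which is of course not optimal:

```python
from collections import deque

def bfs_distances(adj, src):
    """BFS distances from src in a graph given as {vertex: neighbours}."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def greedy_Dn_coloring(adj, n):
    """First-fit coloring such that vertices at distance exactly n differ."""
    # Build the auxiliary "distance exactly n" graph.
    aux = {u: set() for u in adj}
    for u in adj:
        for v, d in bfs_distances(adj, u).items():
            if d == n:
                aux[u].add(v)
    color = {}
    for u in adj:  # first-fit greedy coloring of aux
        used = {color[v] for v in aux[u] if v in color}
        color[u] = next(c for c in range(len(adj) + 1) if c not in used)
    return color
```

For instance, on a path with seven vertices and $n=3$, the pairs $\{0,3\},\{1,4\},\{2,5\},\{3,6\}$ must receive distinct colors, and the sketch returns a valid such coloring.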
Low tree-depth decomposition allows one to prove that for any odd integer $n$, a fixed number of colors is sufficient for $D_n$-coloring planar graphs, and this result extends to all classes with bounded expansion.
\begin{theorem}[\cite{Sparsity}]
\label{thm:oddD}
For every class with bounded expansion $\mathcal C$ and every odd integer $n$ there exists a constant $N$ such that every graph $G\in\mathcal{C}$ has a $D_n$-coloring with at most $N$ colors.
\end{theorem}
The proof of Theorem~\ref{thm:oddD} relies on low tree-depth decomposition, and
the bound $N$ given in \cite{Sparsity} for the number of colors sufficient for a $D_n$-coloring of a graph $G$ is double exponential in $\chi_n(G)$. Hence it is still not clear whether a uniform bound could exist for $D_n$-coloring of planar graphs.
\begin{problem}[van den Heuvel and Naserasr]
Does there exist a constant $C$ such that for every odd integer $n$, it holds that every planar graph has a $D_n$-coloring with at most $C$ colors?
\end{problem}
Note that, however, there exists no bound for the {\em odd-distance coloring} of planar graphs, which requires that two vertices at odd distance get different colors. Indeed, one can construct outerplanar graphs having an arbitrarily large subset of vertices pairwise at odd distance (see Fig.~\ref{fig:oddcl}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.35\textwidth]{oddcl}
\caption{There exist outerplanar graphs with arbitrarily large subsets of vertices pairwise at odd distance. (In the figure, the vertices on the periphery are pairwise at distance $1$, $3$, $5$, or $7$.)}
\label{fig:oddcl}
\end{center}
\end{figure}
However, no construction requiring a large number of colors without having a large set of vertices pairwise at odd-distance is known. Hence the following problem.
\begin{problem}[Thomass\'e]
Does there exist a function $f:\mathbb{N}\rightarrow\mathbb{N}$ such that every planar graph without $k$ vertices pairwise at odd distance has an odd-distance coloring with at most $f(k)$ colors?
\end{problem}
\section{Low Tree-Depth Decomposition and Density of Shallow Minors, Shallow Topological Minors, and Shallow Immersions}
\label{sec:taxonomy}
Classes with bounded expansion, which have been introduced in \cite{POMNI}, may be viewed as a relaxation of the notion of proper minor closed class. The original definition of classes with bounded expansion relates to the notion of shallow minor, as introduced by Plotkin, Rao, and Smith
\cite{shallow}.
\begin{definition}
Let $G,H$ be graphs with $V(H)=\{v_1,\dots,v_h\}$ and let $r$ be an integer.
A graph $H$ is a {\em shallow minor} of a graph $G$ {\em at depth} $r$ if
there exist disjoint subsets $A_1,\dots,A_h$ of $V(G)$ such that
(see Fig.~\ref{fig:shm})
\begin{itemize}
\item the subgraph of $G$ induced by $A_i$ is connected and has radius at most
$r$,
\item if $v_i$ is adjacent to $v_j$ in $H$, then some vertex in $A_i$ is
adjacent in $G$ to some vertex in $A_j$.
\end{itemize}
\end{definition}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.75\textwidth]{grad1}
\caption{A shallow minor}
\label{fig:shm}
\end{center}
\end{figure}
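The two conditions of the definition (connected branch sets of radius at most $r$, and an edge of $G$ between branch sets for every edge of $H$) are directly checkable for a candidate witness. A minimal sketch (helper names are ours; the branch sets are assumed pairwise disjoint), with $G$ an adjacency dict and $H$ a list of edges $(i,j)$ indexing the branch sets:

```python
from collections import deque

def eccentricity(adj, nodes, src):
    """Eccentricity of src in the subgraph induced by nodes,
    or None if that subgraph is not connected."""
    nodes = set(nodes)
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in nodes and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values()) if set(dist) == nodes else None

def is_shallow_minor_witness(G, H_edges, branch_sets, r):
    """Do the (disjoint) branch sets witness H as a shallow minor of G at depth r?"""
    for A in branch_sets:
        # A must induce a connected subgraph of radius at most r
        eccs = [eccentricity(G, A, c) for c in A]
        if not any(e is not None and e <= r for e in eccs):
            return False
    for i, j in H_edges:
        # some edge of G must join the two corresponding branch sets
        if not any(v in G[u] for u in branch_sets[i] for v in branch_sets[j]):
            return False
    return True
```

For instance, on the cycle $C_6$ the branch sets $\{0,1\},\{2,3\},\{4,5\}$ witness a triangle as a shallow minor at depth $1$, but not at depth $0$.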
We denote \cite{POMNI, Sparsity} by $G\,{\triangledown}\, r$ the class of the (simple) graphs
which are shallow minors of $G$ at depth $r$,
and we denote by $\rdens{r}(G)$ the maximum density of a graph in
$G\,{\triangledown}\, r$, that is:
$$\rdens{r}(G)=\max_{H\in G\,{\triangledown}\, r}\frac{\|H\|}{|H|}$$
A class $\mathcal C$ has {\em bounded expansion} if
$\sup_{G\in\mathcal{C}}\rdens{r}(G)<\infty$ for each value of $r$.
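At depth $r=0$ a shallow minor is just a subgraph, so $\rdens{0}(G)$ is the maximum subgraph density of $G$. It can be computed exactly by flow techniques, but the classical greedy peeling procedure (repeatedly deleting a minimum-degree vertex) already gives a $2$-approximation and illustrates the quantity; a minimal sketch for simple graphs (the function name is ours):

```python
def peel_density(adj):
    """Greedy peeling lower bound on the maximum subgraph density
    max ||H||/|H|: repeatedly delete a minimum-degree vertex and record
    the best density seen (the classical 2-approximation)."""
    nodes = set(adj)
    deg = {u: len(adj[u]) for u in adj}
    m = sum(deg.values()) // 2          # number of remaining edges
    best = m / len(nodes)
    while len(nodes) > 1:
        u = min(nodes, key=deg.get)     # a vertex of minimum remaining degree
        nodes.remove(u)
        m -= deg[u]
        for v in adj[u]:
            if v in nodes:
                deg[v] -= 1
        best = max(best, m / len(nodes))
    return best
```

On $K_4$ the densest subgraph is the whole graph (density $6/4=3/2$), and the peeling also recovers $3/2$ on $K_4$ with a pendant vertex attached, after the pendant is peeled off.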
Considering shallow minors may, at first glance, look arbitrary. Indeed,
one can just as well define the notions of shallow topological minors and shallow immersions:
\begin{definition}
A graph $H$ is a {\em shallow topological minor} at depth $r$ of a graph $G$ if some subgraph of $G$ is isomorphic to a subdivision of $H$ in which every edge has been subdivided at most $2r$ times (see Fig.~\ref{fig:stm}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.75\textwidth]{stm}
\caption{$H$ is a shallow topological minor of $G$ at depth $r$}
\label{fig:stm}
\end{center}
\end{figure}
We denote \cite{POMNI, Sparsity} by $G\,\widetilde{\triangledown}\, r$ the class of the (simple) graphs
which are shallow topological minors of $G$ at depth $r$,
and we denote by $\trdens{r}(G)$ the maximum density of a graph in
$G\,\widetilde{\triangledown}\, r$, that is:
$$\trdens{r}(G)=\max_{H\in G\,\widetilde{\triangledown}\, r}\frac{\|H\|}{|H|}$$
\end{definition}
Note that shallow topological minors can be alternatively defined by considering how a graph $H$ can be topologically embedded in a graph $G$:
a graph $H$ with vertex set $V(H)=\{a_1,\dots,a_k\}$ is a shallow topological minor of a graph $G$ at depth $r$ if there exist vertices $v_1,\dots,v_k$ in $G$ and a family $\mathcal P$ of paths of $G$ such that
\begin{itemize}
\item two vertices $a_i$ and $a_j$ are adjacent in $H$ if and only if there is a path in $\mathcal{P}$ linking $v_i$ and $v_j$;
\item no vertex $v_i$ is interior to a path in $\mathcal{P}$;
\item the paths in $\mathcal{P}$ are internally vertex disjoint;
\item every path in $\mathcal{P}$ has length at most $2r+1$.
\end{itemize}
We can similarly define the notion of shallow immersion:
\begin{definition}
A graph $H$ with vertex set $V(H)=\{a_1,\dots,a_k\}$ is a {\em shallow immersion} of a graph $G$ at depth $r$ if there exist vertices $v_1,\dots,v_k$ in $G$ and a family $\mathcal P$ of paths of $G$ such that
\begin{itemize}
\item two vertices $a_i$ and $a_j$ are adjacent in $H$ if and only if there is a path in $\mathcal{P}$ linking $v_i$ and $v_j$;
\item the paths in $\mathcal{P}$ are edge disjoint;
\item every path in $\mathcal{P}$ has length at most $2r+1$;
\item no vertex of $G$ is internal to more than $r$ paths in $\mathcal{P}$.
\end{itemize}
We denote \cite{POMNI, Sparsity} by $G\,\accentset{\propto}{\triangledown}\, r$ the class of the (simple) graphs
which are shallow immersions of $G$ at depth $r$,
and we denote by $\irdens{r}(G)$ the maximum density of a graph in
$G\,\accentset{\propto}{\triangledown}\, r$, that is:
$$\irdens{r}(G)=\max_{H\in G\,\accentset{\propto}{\triangledown}\, r}\frac{\|H\|}{|H|}$$
\end{definition}
It appears that although minors, topological minors, and immersions behave very differently, their shallow versions are deeply related, as witnessed by the following theorem:
\begin{theorem}[\cite{Sparsity}]
\label{thm:BE}
Let $\mathcal{C}$ be a class of graphs. Then the following are equivalent:
\begin{enumerate}
\item the class $\mathcal{C}$ has bounded expansion;
\item for every integer $r$ it holds $\sup_{G\in\mathcal{C}}\rdens{r}(G)<\infty$;
\item for every integer $r$ it holds $\sup_{G\in\mathcal{C}}\trdens{r}(G)<\infty$;
\item for every integer $r$ it holds $\sup_{G\in\mathcal{C}}\irdens{r}(G)<\infty$;
\item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,{\triangledown}\, r}\chi(H)<\infty$;
\item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,\widetilde{\triangledown}\, r}\chi(H)<\infty$;
\item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,\accentset{\propto}{\triangledown}\, r}\chi(H)<\infty$.
\end{enumerate}
\end{theorem}
In the above theorem, we see not only that shallow minors, shallow topological minors, and shallow immersions behave similarly, but also that the (sparse) graph density $\|G\|/|G|$ and the chromatic number $\chi(G)$ of a graph $G$ are related. This last relation stems
from the following result of Dvo\v{r}\'ak \cite{Dvo2007}.
\begin{lemma}
\label{lem:degchr}
Let $c\geq 4$ be an integer and let $G$ be a graph with average degree
$d>56(c-1)^2\frac{\log (c-1)}{\log c-\log (c-1)}$.
Then the graph $G$ contains a subgraph $G'$ that is the $1$-subdivision of a graph with chromatic number $c$.
\end{lemma}
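To give a feeling of the magnitude involved: with $c=4$ the hypothesis of the lemma reads
$$
d \;>\; 56\cdot 3^2\cdot\frac{\ln 3}{\ln 4-\ln 3}\;\approx\; 1925,
$$
so every graph with average degree at least $1925$ contains, as a subgraph, the $1$-subdivision of some graph with chromatic number $4$.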
It follows from Theorem~\ref{thm:BE} that the notion of class with bounded expansion is quite robust. Not only can classes with bounded expansion be defined by edge densities and chromatic numbers, but also by virtually all common combinatorial parameters \cite{Sparsity}.
If one considers the clique number instead of the density or the chromatic number, then a different type of classes is defined:
\begin{definition}
A class of graphs $\mathcal{C}$ is {\em somewhere dense} if
there exists an integer $p$ such that every clique is a shallow topological minor at depth $p$ of some graph in $\mathcal C$ (in other words, $\mathcal{C}\,\widetilde{\triangledown}\, p$ contains all graphs); the class
$\mathcal{C}$ is {\em nowhere dense} if it is not somewhere dense.
\end{definition}
Similarly to Theorem~\ref{thm:BE}, we have several characterizations of nowhere dense classes.
\begin{theorem}[\cite{Sparsity}]
\label{thm:ND}
Let $\mathcal{C}$ be a class of graphs. Then the following are equivalent:
\begin{enumerate}
\item the class $\mathcal{C}$ is nowhere dense;
\item for every integer $r$ it holds $\limsup_{G\in\mathcal{C}}
\frac{\log\rdens{r}(G)}{\log |G|}=0$;
\item for every integer $r$ it holds $\limsup_{G\in\mathcal{C}}
\frac{\log\trdens{r}(G)}{\log |G|}=0$;
\item for every integer $r$ it holds $\limsup_{G\in\mathcal{C}}
\frac{\log\irdens{r}(G)}{\log |G|}=0$;
\item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,{\triangledown}\, r}\omega(H)<\infty$;
\item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,\widetilde{\triangledown}\, r}\omega(H)<\infty$;
\item for every integer $r$ it holds $\sup_{H\in\mathcal{C}\,\accentset{\propto}{\triangledown}\, r}\omega(H)<\infty$.
\end{enumerate}
\end{theorem}
Note that every class with bounded expansion is nowhere dense.
As mentioned in Theorem~\ref{thm:chiBE}, classes with bounded expansion are also characterized by the fact that they allow low tree-depth decompositions with a bounded number of colors. A similar statement holds for nowhere dense classes.
Precisely, we have the following:
\begin{theorem}
\label{thm:chiND}
Let $\mathcal{C}$ be a class of graphs, then the following are equivalent:
\begin{enumerate}
\item for every integer $p$ it holds $\limsup_{G\in\mathcal{C}}\frac{\chi_p(G)}{\log |G|}=0$;
\item the class $\mathcal{C}$ is nowhere dense.
\end{enumerate}
\end{theorem}
The directions bounding $\chi_p(G)$ in both Theorem~\ref{thm:chiBE} and Theorem~\ref{thm:chiND} follow from the next, more precise result:
\begin{theorem}[\cite{Sparsity}]
\label{thm:chibound}
For every integer $p$ there is a polynomial $P_p$ (${\rm deg}\,P_p\approx 2^{2^p}$) such that for
every graph $G$ it holds
$$
\chi_p(G)\leq P_p(\trdens{2^{p-2}+1}(G)).
$$
\end{theorem}
Note that the original proof given in \cite{POMNI} gave a slightly weaker bound, and that an alternative proof of this result has been obtained by Zhu \cite{Zhu2008}, in a paper relating low tree-depth decompositions to the generalized coloring numbers introduced by Kierstead and Yang \cite{Kierstead2003}.
\section{Low Tree-Depth Decomposition and Covering}
In a low tree-depth decomposition of a graph $G$ with $N$ colors and parameter $t$,
the subsets of $t$ colors define a disjoint union of clusters that cover the graph, such that each cluster has tree-depth at most $t$, every vertex belongs to at most $\binom{N}{t}$ clusters, and every connected subgraph of order $t$ is included in at least one cluster.
It is natural to ask whether the condition that such a covering comes from a coloring could be dropped.
\begin{theorem}
Let $\mathcal C$ be a monotone class.
Then $\mathcal C$ has bounded expansion if and only if there exists a function $f$ such that for every integer $t$, every graph $G\in\mathcal C$ has a
covering $C_1,\dots, C_k$ of its vertex set such that
\begin{itemize}
\item each $C_i$ induces a connected subgraph with tree-depth at most $t$;
\item every vertex belongs to at most $f(t)$ clusters;
\item every connected subgraph of order at most $t$ is included in at least one cluster.
\end{itemize}
\end{theorem}
\begin{proof}
One direction is a direct consequence of Theorem~\ref{thm:chiBE}.
Conversely, assume that the class $\mathcal C$ does not have bounded expansion. Then there exists an integer $p$ such that
for every integer $d$ the class $\mathcal C$ contains the $p$-th subdivision of a graph $H_d$ with average degree at least $d$. Moreover, it is a standard argument that we can require $H_d$ to be
bipartite (as every graph with average degree $2d$ contains a bipartite subgraph with average degree at least $d$).
Let $t=2(p+1)$ and let $d=2f(t)+1$. Assume for contradiction that there exist clusters $C_1,\dots,C_k$ as required; then we can cover $H_d$ by clusters $C_1',\dots,C_k'$ such that
each $C_i'$ induces a star (possibly reduced to an edge),
every vertex belongs to at most $f(t)$ clusters, and
every edge is included in at least one cluster.
If an edge $\{u,v\}$ of $H_d$ is included in more than one cluster, it is easily checked that (at least) one of $u$ and $v$ can be safely removed from one of the clusters. Hence we can assume that each edge of $H_d$ is covered exactly once. To each cluster $C_i'$ we associate the center of the star induced by $C_i'$ (or an arbitrary vertex of $C_i'$ if $C_i'$ has cardinality $2$), and orient the edges of the star induced by $C_i'$ away from the center. This way, every edge is oriented once and every vertex gets indegree at most $f(t)$. However, summing the indegrees yields $f(t)\geq d/2$, a contradiction.
\end{proof}
It is natural to ask whether similar statements would hold, if we weaken the condition that each cluster has tree-depth at most $t$ while we strengthen the condition that every connected subgraph of order at most $t$ is included in some cluster.
Namely, we consider the question whether a similar statement holds
if we allow each cluster to have radius at most $2t$ while requiring that every $t$-neighborhood is included in some cluster.
In the context of their solution of the model checking problem for nowhere dense classes, Grohe, Kreutzer, and Siebertz
introduced in \cite{Grohe2013} the notion of $r$-neighborhood cover, and proved that
both nowhere dense classes and bounded expansion classes admit such covers with small maximum degree.
Precisely,
for $r\in\mathbb{N}$, an {\em $r$-neighborhood cover} $\mathcal{X}$ of a graph $G$ is a set of connected subgraphs of $G$ called {\em clusters}, such that for every vertex $v\in V(G)$ there is some
$X\in\mathcal{X}$ with $N_r(v)\subseteq X$.
The {\em radius} ${\rm rad}(\mathcal{X})$ of a cover $\mathcal{X}$ is the maximum radius of its clusters. The {\em degree} $d^\mathcal{X}(v)$ of $v$ in $\mathcal{X}$ is the number of clusters that contain $v$. The {\em maximum degree} of $\mathcal{X}$ is $\Delta(\mathcal{X})=\max_{v\in V(G)}d^\mathcal{X}(v)$.
For a graph $G$ and $r\in\mathbb{N}$ we define
$\tau_r(G)$ as the minimum maximum degree of an $r$-neighborhood cover of radius at most $2r$ of $G$.
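These definitions translate directly into finite checks. A minimal sketch (helper names are ours) that validates a candidate $r$-neighborhood cover of radius at most $2r$ and returns its maximum degree:

```python
from collections import deque

def bounded_ball(adj, src, r, inside=None):
    """Vertices at distance at most r from src (within a vertex subset if given)."""
    inside = set(adj) if inside is None else set(inside)
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue
        for v in adj[u]:
            if v in inside and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def cluster_radius(adj, X):
    """Radius of the subgraph induced by X (None if it is disconnected)."""
    X = set(X)
    best = None
    for c in X:
        dist = {c: 0}
        queue = deque([c])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in X and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        if set(dist) == X:  # c reaches all of X: its eccentricity counts
            ecc = max(dist.values())
            best = ecc if best is None else min(best, ecc)
    return best

def check_neighborhood_cover(adj, clusters, r):
    """Maximum degree of the cover, or None if some cluster has radius
    above 2r or some r-neighborhood is in no cluster."""
    for X in clusters:
        rad = cluster_radius(adj, X)
        if rad is None or rad > 2 * r:
            return None
    for v in adj:
        if not any(bounded_ball(adj, v, r) <= set(X) for X in clusters):
            return None
    return max(sum(v in X for X in clusters) for v in adj)
```

For $C_6$ and $r=1$, the six closed $1$-neighborhoods form a valid cover of maximum degree $3$.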
The following theorem is proved in \cite{Grohe2013}.
\begin{theorem}
\label{thm:1b}
Let $\mathcal{C}$ be a class of graphs with bounded expansion. Then there is a function $f$ such that for all $r\in\mathbb{N}$ and all graphs $G\in\mathcal{C}$, it holds
$\tau_r(G)\leq f(r)$.
\end{theorem}
In order to prove the converse statement, we shall need the following result of K\"uhn and Osthus \cite{K`uhn2004}:
\begin{theorem}
\label{thm:ko}
For every $k$ there exists $d = d(k)$ such that every graph of average degree at least $d$ contains a subgraph of average degree at least $k$ whose girth is at least six.
\end{theorem}
We are now ready to turn Theorem~\ref{thm:1b} into a characterization theorem of classes with bounded expansion.
\begin{theorem}
Let $\mathcal{C}$ be an infinite monotone class of graphs. Then
$\mathcal C$ has bounded expansion if and only if, for every integer $r$ it holds
$$\sup_{G\in\mathcal{C}} \tau_r(G)<\infty.$$
\end{theorem}
\begin{proof}
One direction follows from Theorem~\ref{thm:1b}. For the other direction, assume that the class $\mathcal C$ does not have bounded expansion. Then there exists an integer $p$ such that
for every integer $n$, $\mathcal C$ contains the $p$-th subdivision
of a graph $G_n$ with average degree at least $n$.
Let $d\in\mathbb{N}$. According to Theorem~\ref{thm:ko}, there exists $N(d)$ such that every graph with average degree at least $N(d)$ contains a subgraph of girth at least $6$ and average degree at least $d$.
We deduce that $\mathcal C$ contains the $p$-th subdivision
$H_d'$ of a graph $H_d$ with girth at least $6$ and average degree at least $d$. As in the proof of Theorem~\ref{thm:2}, we get
$$\sup_{G\in\mathcal{C}} \tau_{p+1}(G)
\geq \sup_{d} \tau_{p+1}(H_d')
\geq \sup_{d}\tau_{1}(H_d)
\geq \sup_{d}\frac{\|H_d\|}{|H_d|}=\infty.$$
\end{proof}
Also, similar statements exist for nowhere dense classes:
\begin{theorem}
A hereditary class $\mathcal C$ is nowhere dense if and only if there exists a
function $f$ such that for every integer $t$ and every $\epsilon>0$, every graph $G\in\mathcal C$ of order $n\geq f(t,\epsilon)$ has a
covering $C_1,\dots, C_k$ of its vertex set such that
\begin{itemize}
\item each $C_i$ induces a connected subgraph with tree-depth at most $t$;
\item every vertex belongs to at most $n^\epsilon$ clusters;
\item every connected subgraph of order at most $t$ is included in at least one cluster.
\end{itemize}
\end{theorem}
\begin{proof}
One direction directly follows from Theorem~\ref{thm:chiND}. For the reverse direction, assume that $\mathcal C$ is not nowhere dense. Then there exists $p$ such that for every $n\in\mathbb{N}$, the class $\mathcal C$ contains a graph
$G_n$ having the $p$-th subdivision of $K_n$ as a spanning subgraph.
Assume that a covering exists for $t=3p+3$. Then every $p$-subdivided triangle of $K_n$ is included in some cluster.
As the $p$-subdivided $K_n$ includes $\binom{n}{3}$ triangles, and
as there are at most $n^{1+\epsilon}$ clusters including some principal vertex of the subdivided $K_n$ (which is necessary to include some subdivided triangle), some cluster $C$ includes at least
$n^{2-\epsilon}$ triangles. It follows that the subgraph induced by $C$ has a minor $H$ of order at most $n$ with at least $n^{2-\epsilon}$ triangles. However, as tree-depth is minor monotone, the graph $H$ has tree-depth at most $t$, hence is $t$-degenerate, and thus cannot contain more than $\binom{t}{2}n$ triangles. Whence we are led to a contradiction if $n>\binom{t}{2}^{\frac{1}{1-\epsilon}}$.
\end{proof}
\begin{theorem}[ \cite{Grohe2013}]
\label{thm:1}
Let $\mathcal{C}$ be a nowhere dense class of graphs. Then there is a function $f$ such that for all $r\in\mathbb{N}$ and $\epsilon>0$ and all graphs $G\in\mathcal{C}$ with $n\geq f(r,\epsilon)$ vertices, it holds
$\tau_r(G)\leq n^\epsilon$.
\end{theorem}
In other words, every infinite nowhere dense class of graphs $\mathcal C$ is such that
$$\adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}=0.$$
We shall deduce from this theorem the following characterization of nowhere dense classes of graphs.
\begin{theorem}
\label{thm:2}
Let $\mathcal{C}$ be an infinite monotone class of graphs. Then
$$\adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}$$
is either $0$, if $\mathcal C$ is nowhere dense, or at least $1/3$, if $\mathcal{C}$ is somewhere dense.
\end{theorem}
This theorem will directly follow from Theorem~\ref{thm:1} and the following two lemmas.
\begin{lemma}
\label{lem:1}
Let $G$ be a graph of girth at least $5$. Then it holds
$$
\tau_1(G)\geq \nabla_0(G),
$$
where
$$\nabla_0(G)=\max_{H\subseteq G}\frac{\|H\|}{|H|}.$$
\end{lemma}
\begin{proof}
Let $\mathcal{X}$ be a $1$-neighborhood cover of radius at most $2$ of $G$ with maximum degree $\tau_1(G)$. Let $X_1,\dots,X_k$ be the clusters of $\mathcal{X}$. For an edge $e=\{u,v\}$, let $i\leq k$ be the minimum integer such that $N_1(u)$ or $N_1(v)$ is included in $X_i$. Let $c_i$ be a center of $X_i$. Then $e$ belongs
to a path of length at most $2$ with endpoint $c_i$, and we orient $e$ according to the orientation of this path away from $c_i$. Note that by this process we orient every edge, and that every vertex $v$ gets at most one incoming edge per cluster containing $v$. Hence
we have constructed an orientation of $G$ with maximum indegree at most $\tau_1(G)$. As the maximum indegree of an orientation of $G$ is at least $\nabla_0(G)$, we get $\tau_1(G)\geq \nabla_0(G)$.
\end{proof}
We deduce the following
\begin{lemma}
\label{lem:2}
Let $\mathcal{C}$ be a monotone somewhere dense class of graphs. Then
$$\adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}\geq \frac{1}{3}.$$
\end{lemma}
\begin{proof}
As $\mathcal{C}$ is monotone and somewhere dense, there exists an integer $p\geq 0$ such that for every $n\in\mathbb{N}$, the $p$-th subdivision ${\rm Sub}_p(K_n)$ of $K_n$ belongs to $\mathcal C$.
For $n\in\mathbb{N}$, let $H_n$ be a graph of girth at least $5$, with order $|H_n|\sim n$ and size $\|H_n\|\sim n^{3/2}$.
If $p=0$, then according to Lemma~\ref{lem:1} it holds
\begin{align*}
\adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}&\geq \limsup_{G\in\mathcal{C}}\frac{\log \tau_1(G)}{\log |G|}\\
&\geq \lim_{n\rightarrow\infty}\frac{\log \nabla_0(H_n)}{\log |H_n|}\\
&\geq \lim_{n\rightarrow\infty}\frac{\log \|H_n\|-\log |H_n|}{\log |H_n|}
= \frac{1}{2}.
\end{align*}
Thus assume $p\geq 1$. Denote by $H_n'$ the $p$-th subdivision
of $H_n$, where we identify $V(H_n)$ with a subset of $V(H_n')$ for convenience. Then $|H_n'|\sim pn^{3/2}$.
Let $\mathcal X=\{X_1,\dots,X_k\}$ be a $(p+1)$-neighborhood cover of radius at most $2(p+1)$ of $H_n'$ with maximum degree $\tau_{p+1}(H_n')$.
Let $c_i$ be a center of cluster $X_i$, and let $d_i$ be a vertex of $H_n$ at minimal distance from $c_i$ in $H_n'$. It is easily checked that
there exists a cluster $X_i'$ with center $d_i$ and radius at most $2(p+1)$ such that $X_i\cap V(H_n)=X_i'\cap V(H_n)$. Define $Y_i=X_i'\cap V(H_n)$. As $\mathcal X$ is a $(p+1)$-neighborhood cover of radius at most $2(p+1)$ of $H_n'$ with maximum degree $\tau_{p+1}(H_n')$,
the cover $\mathcal Y=\{Y_i\}$ is a $1$-neighborhood cover of radius $2$ of $H_n$ with maximum degree $\tau_{p+1}(H_n')$.
Hence $\tau_1(H_n)\leq \tau_{p+1}(H_n')$. Thus it holds
\begin{align*}
\adjustlimits\sup_{r\in\mathbb{N}}\limsup_{G\in\mathcal{C}}\frac{\log \tau_r(G)}{\log |G|}
&\geq \lim_{n\rightarrow\infty}\frac{\log \tau_{p+1}(H_n')}{\log |H_n'|}\\
&\geq \lim_{n\rightarrow\infty}\frac{\log \tau_{1}(H_n)}{\log |H_n'|}\\
&\geq \lim_{n\rightarrow\infty}\frac{\log \|H_n\|-\log |H_n|}{\log |H_n'|}=\frac{1}{3}.
\end{align*}
\end{proof}
\section{Algorithmic Applications of Low Tree-Depth Decomposition}
\label{sec:algo}
Theorem~\ref{thm:chibound} has the following algorithmic version.
\begin{theorem}[\cite{Sparsity}]
\label{thm:chialgo}
There exist polynomials $P_p$ (${\rm deg}\,P_p\approx 2^{2^p}$) and an algorithm that computes, for input graph $G$ and integer $p$, a low tree-depth decomposition of $G$ with parameter $p$ using
$N_p(G)$ colors in time $O(N_p(G)\,|G|)$, where
$$
\chi_p(G)\leq N_p(G)\leq P_p(\trdens{2^{p-2}+1}(G)).
$$
\end{theorem}
It is not surprising that
low tree-depth decompositions immediately found several algorithmic applications \cite{Taxi_stoc06, POMNII}.
As noticed in \cite{chrobak}, the existence of an orientation of planar graphs with bounded out-degree allows, for a planar graph $G$ (once such an orientation has been computed for $G$), an easy $O(1)$ adjacency test and an enumeration of all the triangles of $G$ in linear time.
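The trick can be sketched as follows (a sketch under our naming, not the implementation of \cite{chrobak}): a degeneracy ordering yields an orientation whose out-degree is bounded by the degeneracy (at most $5$ for planar graphs); adjacency then amounts to scanning two short out-lists, and each triangle is reported exactly once, at its unique source vertex in the acyclic orientation:

```python
def low_outdegree_orientation(adj):
    """Orient each edge away from the earlier vertex of a degeneracy
    ordering (repeatedly remove a vertex of minimum remaining degree)."""
    deg = {u: len(adj[u]) for u in adj}
    alive = set(adj)
    out = {u: set() for u in adj}
    while alive:
        u = min(alive, key=deg.get)
        alive.remove(u)
        for v in adj[u]:
            if v in alive:
                out[u].add(v)      # orient u -> v
                deg[v] -= 1
    return out

def adjacent(out, u, v):
    """Adjacency test in time proportional to the (bounded) out-degrees."""
    return v in out[u] or u in out[v]

def triangles(out):
    """The orientation is acyclic, so every triangle has a unique source
    whose two other vertices are its out-neighbours; report it there."""
    for u in out:
        for v in out[u]:
            for w in out[u]:
                if v < w and adjacent(out, v, w):
                    yield (u, v, w)
```

On $K_4$, for instance, the enumeration reports each of the four triangles exactly once.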
For a fixed pattern $H$,
the problem of checking whether an input graph $G$ has an induced subgraph
isomorphic to $H$ is called the {\em subgraph isomorphism problem}.
This problem is known to have complexity at most
$O(n^{\omega l/3})$, where $l$ is the order of $H$ and $\omega$ is the
exponent of fast square matrix multiplication \cite{NP85} (hence
$O(n^{0.792\, l})$ using the fast matrix multiplication algorithm of \cite{coppersmith90}). The
particular case of subgraph isomorphism in planar graphs has been studied by
Plehn and Voigt \cite{plehn91}, Alon \cite{alon95} with super-linear bounds and
then by Eppstein \cite{Epp-SODA-95,Epp-JGAA-99} who gave the first linear
time algorithm for fixed pattern $H$ and $G$ planar. This was extended to
graphs with bounded genus in \cite{Epp-Algo-00}.
We further generalized this result to classes with bounded expansion \cite{POMNII}:
\begin{theorem}
\label{thm:count}
There is a function $f$ and an algorithm that, for all input graphs $G$ and $H$, counts the number of occurrences of $H$ in $G$ in time
$$O\bigl(f(H)\,(N_{|H|}(G))^{|H|}\,|G|\bigr),$$ where $N_p(G)$ is the number of colors computed by the algorithm in Theorem~\ref{thm:chialgo}.
In particular, for every fixed bounded expansion class (resp. nowhere dense class) $\mathcal C$ and every fixed pattern $H$, the number of occurrences of $H$ in a graph $G\in\mathcal C$ can be computed in linear time (resp. in time $O(|G|^{1+\epsilon})$ for any fixed $\epsilon>0$).
\end{theorem}
Theorem~\ref{thm:count} can be extended from the subgraph isomorphism problem to first-order model checking.
\begin{theorem}[\cite{DKT2}, see also \cite{Dawar2009}]
Let $\mathcal C$ be a class of graphs with bounded expansion, and
let $\phi$ be a first-order sentence (on the natural language of graphs). There exists a linear time algorithm that decides whether a graph $G\in\mathcal C$ satisfies $\phi$.
\end{theorem}
The above theorem relies on low tree-depth decomposition. However, the next result, due to Kazana and Segoufin, is based on the notion of transitive fraternal augmentation, which was introduced in \cite{POMNI} to prove Theorem~\ref{thm:chibound}.
\begin{theorem}[\cite{Kazana2013}]
\label{thm:Kazana}
Let $\mathcal C$ be a class of graphs with bounded expansion and let $\phi$ be a first-order formula. Then, for all $G\in\mathcal C$, we can compute the number $|\phi(G)|$ of satisfying assignments for $\phi$ in $G$ in time $O(|G|)$.
Moreover, the set $\phi(G)$ can be enumerated in lexicographic order with constant delay between consecutive outputs, after a linear-time preprocessing.
\end{theorem}
Finally, the existence of efficient model checking algorithms has been extended to nowhere dense classes by
Grohe, Kreutzer, and Siebertz \cite{Grohe2013}, using the notion
of $r$-neighborhood cover we already mentioned:
\begin{theorem}
\label{thm:FOND}
For every nowhere dense class $\mathcal C$ and every $\epsilon >0$, every property of graphs definable in first-order logic can be decided in time $O(n^{1+\epsilon})$ on $\mathcal{C}$.
\end{theorem}
However, it is still open whether a counting version of Theorem~\ref{thm:FOND} (in the spirit of Theorem~\ref{thm:Kazana}) holds.
\section{Low Tree-Depth Decomposition and Logarithmic Density of Patterns}
We have seen in Section~\ref{sec:algo} that low tree-depth decomposition allows an easy counting of patterns. It appears that it also allows one to prove some ``extremal'' results. A typical problem studied in extremal graph theory is to determine the maximum number of edges ${\rm ex}(n,H)$
a graph on $n$ vertices can have without containing a subgraph isomorphic to $H$.
For a non-bipartite graph $H$, the seminal result of Erd\H{o}s and Stone \cite{erdos1946structure} gives a tight bound:
\begin{theorem}
$${\rm ex}(n,H)=\left(1-\frac{1}{\chi(H)-1}\right)\binom{n}{2}+o(n^2).$$
\end{theorem}
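As a sanity check of the formula, for $H=K_3$ (where $\chi(K_3)=3$) the theorem yields
$$
{\rm ex}(n,K_3)=\left(1-\frac{1}{2}\right)\binom{n}{2}+o(n^2)=\frac{n^2}{4}+o(n^2),
$$
consistent with Mantel's theorem ${\rm ex}(n,K_3)=\lfloor n^2/4\rfloor$, attained by the balanced complete bipartite graph $K_{\lfloor n/2\rfloor,\lceil n/2\rceil}$.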
In the case of bipartite graphs, less is known. Let us mention the following result of
Alon, Krivelevich, and Sudakov \cite{alon2003turan}:
\begin{theorem}
Let $H$ be a bipartite graph with maximum degree $r$ on one side. Then
$${\rm ex}(n,H)= O(n^{2-\frac{1}{r}}).$$
\end{theorem}
The special case where $H$ is a subdivision of a complete graph will be of
prime interest in the study of nowhere dense classes. Precisely, denoting by
${\rm ex}(n,K_k^{(\leq p)})$ the maximum number of edges a graph on $n$ vertices can have without containing a subdivision of $K_k$ in which every edge is subdivided at most $p$ times, Jiang \cite{Jiang} proved the following bound:
\begin{theorem}
For all integers $k,p$ it holds
$${\rm ex}(n,K_k^{(\leq p)})=O(n^{1+\frac{10}{p}}).$$
\end{theorem}
From this theorem it follows that if a class $\mathcal{C}$ is such that
$\limsup_{G\in\mathcal{C}\,\widetilde{\triangledown}\, t}\frac{\log \|G\|}{\log |G|}>1+\epsilon$, then
$\mathcal C\,\widetilde{\triangledown}\, \frac{10t}{\epsilon}$ contains graphs with unbounded clique number. This property is a main ingredient in the proof of the following classification (``trichotomy'') theorem.
\begin{theorem}[\cite{ND_characterization}]
\label{thm:tri}
Let $\mathcal C$ be an infinite class of graphs. Then
\begin{equation*}
\adjustlimits\sup_{t}\limsup_{G\in\mathcal C \,\widetilde{\triangledown}\, t}\frac{\log\|G\|}{\log|G|}\in\{-\infty,0,1,2\}.
\end{equation*}
Moreover, $\mathcal{C}$ is nowhere dense if and only if
$\adjustlimits\sup_{t}\limsup_{G\in\mathcal C \,\widetilde{\triangledown}\, t}\frac{\log\|G\|}{\log|G|}\leq 1$.
\end{theorem}
Note that, in order for the logarithmic edge density to be integral, one
needs to consider all the classes $\mathcal C\,\widetilde{\triangledown}\, t$. For instance, the class
$\mathcal D$ of graphs
with no $C_4$ has a limiting logarithmic edge density of $3/2$, which jumps to $2$
when one considers $\mathcal{D}\,\widetilde{\triangledown}\, 1$.
Using low tree-depth decomposition, it is possible to extend
Theorem~\ref{thm:tri} to other pattern graphs:
\begin{theorem}[\cite{Taxi_hom}]
\label{thm:countF}
For every infinite class of graphs $\mathcal C$ and every graph $F$
$$
\adjustlimits \lim_{i\rightarrow\infty}\limsup_{G\in\mathcal C\,\widetilde{\triangledown}\, i}\frac{\log (\#F\subseteq G)}{\log |G|}\in\{-\infty,0,1,\dots,\alpha(F),|F|\},$$
where $\alpha(F)$ is the stability number of $F$.
Moreover, if $F$ has at least one edge, then $\mathcal{C}$ is nowhere dense if and only if $\adjustlimits \lim_{i\rightarrow\infty}\limsup_{G\in\mathcal C\,\widetilde{\triangledown}\, i}\frac{\log (\#F\subseteq G)}{\log |G|}\leq\alpha(F)$.
\end{theorem}
The main ingredient in the proof of this theorem is the analysis of local configurations, called $(k,F)$-sunflowers (see Fig.~\ref{fig:sunflower}). Precisely, for graphs $F$ and $G$, a
{\em $(k,F)$-sunflower} in $G$ is a $(k+1)$-tuple
$(C,\mathcal F_1,\dots,\mathcal F_k)$ such that $C\subseteq V(G)$, $\mathcal F_i\subseteq \mathcal P(V(G))$,
the sets in $\{C\}\cup\bigcup_i\mathcal F_i$ are pairwise disjoint, and
there exists a partition $(K,Y_1,\dots,Y_k)$ of $V(F)$ so that
\begin{itemize}
\item $\forall i\neq j,\ \omega(Y_i,Y_j)=\emptyset$,
\item $G[C]\approx F[K]$,
\item $\forall X_i\in\mathcal F_i, G[X_i]\approx F[Y_i]$,
\item $\forall (X_1,\dots,X_k)\in\mathcal F_1\times\dots\times\mathcal F_k$, the subgraph of $G$ induced by $C\cup X_1\cup\dots\cup X_k$ is isomorphic to $F$.
\end{itemize}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.75\textwidth]{petSF1}
\caption{A $(3,{\rm Petersen})$-sunflower}
\label{fig:sunflower}
\end{center}
\end{figure}
The following stepping up lemma gives some indication on how low tree-depth decomposition is related to the proof of Theorem~\ref{thm:countF}:
\begin{lemma}[\cite{Taxi_hom}]
There exists a function $\tau$ such that for all integers $p,k$, every graph
$F$ of order $p$, and every $0<\epsilon<1$, the following property holds:
Every graph $G$ such that $(\#F\subseteq G)\,> |G|^{k+\epsilon}$ contains a $({k+1},F)$-sunflower
$(C,\mathcal F_1,\dots,\mathcal F_{k+1})$ with
$$
\min_i |\mathcal F_i|\geq \left(\frac{|G|}{\binom{\chi_p(G)}{p}^{1/\epsilon}}\right)^{\tau(\epsilon,p)}
$$
In particular, $G$ contains a subgraph $G'$ such that
\begin{align*}
&|G'|\geq (k+1)\left(\frac{{|G|}}{{\binom{\chi_p(G)}{p}^{1/\epsilon}}}\right)^{\tau(\epsilon,p)}\\
\text{and}\qquad&(\#F\subseteq G')\geq \left(\frac{|G'|-|F|}{k+1}\right)^{{k+1}}.
\end{align*}
\end{lemma}
| {
"timestamp": "2014-12-05T02:08:47",
"yymm": "1412",
"arxiv_id": "1412.1581",
"language": "en",
"url": "https://arxiv.org/abs/1412.1581",
"abstract": "The theory of sparse structures usually uses tree like structures as building blocks. In the context of sparse/dense dichotomy this role is played by graphs with bounded tree depth. In this paper we survey results related to this concept and particularly explain how these graphs are used to decompose and construct more complex graphs and structures. In more technical terms we survey some of the properties and applications of low tree depth decomposition of graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "On Low Tree-Depth Decompositions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.976310530768455,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7076984724669181
} |
https://arxiv.org/abs/1004.3363 | Faster Algorithms for Semi-Matching Problems | We consider the problem of finding \textit{semi-matching} in bipartite graphs which is also extensively studied under various names in the scheduling literature. We give faster algorithms for both weighted and unweighted case.For the weighted case, we give an $O(nm\log n)$-time algorithm, where $n$ is the number of vertices and $m$ is the number of edges, by exploiting the geometric structure of the problem. This improves the classical $O(n^3)$ algorithms by Horn [Operations Research 1973] and Bruno, Coffman and Sethi [Communications of the ACM 1974].For the unweighted case, the bound could be improved even further. We give a simple divide-and-conquer algorithm which runs in $O(\sqrt{n}m\log n)$ time, improving two previous $O(nm)$-time algorithms by Abraham [MSc thesis, University of Glasgow 2003] and Harvey, Ladner, Lovász and Tamir [WADS 2003 and Journal of Algorithms 2006]. We also extend this algorithm to solve the \textit{Balance Edge Cover} problem in $O(\sqrt{n}m\log n)$ time, improving the previous $O(nm)$-time algorithm by Harada, Ono, Sadakane and Yamashita [ISAAC 2008]. |
\section*{APPENDIX}
\section{Edmonds-Karp-Tomizawa algorithm for weighted bipartite
matching} \label{sec:EK_algo}
In this section, we briefly explain the Edmonds-Karp-Tomizawa (EKT)
algorithm. The algorithm starts with an empty matching $M$ and
iteratively augments (i.e., increases the size of) $M$. The matching
in each iteration is maintained so that it is {\em extreme}; i.e.,
it has the highest weight among matchings of the same cardinality. The
augmenting procedure is as follows. Let $M$ be a matching maintained
so far. Let $D_M$ be the directed graph obtained from $\hat{G}$ by
orienting each edge $e$ in $M$ from $\hat V$ to $U$ with length
$\ell_e=-w_e$ and orienting each edge $e$ not in $M$ from $U$ to
$\hat V$ with length $\ell_e=w_e$. Let $U_M$ (respectively, $\hat
V_M$) be the set of vertices in $U$ (respectively, $\hat V$) not
covered by $M$. If $|M|<|U|$, then there is a $U_M$-$\hat V_M$
path. Find a shortest such path, say $P$, and augment $M$ along $P$;
i.e., set $M=M\Delta P$. Repeat with the new value of $M$ until
$|M|=|U|$.
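For concreteness, one augmentation step $M \leftarrow M\,\Delta\, P$ is just a symmetric difference of edge sets. The following is our own illustrative Python sketch, not the paper's implementation; representing the matching as a set of (frozen) vertex pairs is our assumption:

```python
def augment(M, P):
    """Augment matching M along augmenting path P, i.e. set M to the
    symmetric difference of M and the edges of P.  M is a set of
    frozenset edges; P is the list of vertices on the path.  Matched
    and unmatched edges alternate along an augmenting path, so the
    symmetric difference flips them and increases |M| by one."""
    path_edges = {frozenset(e) for e in zip(P, P[1:])}
    return M ^ path_edges
```

For example, augmenting the matching $\{u_1v_1\}$ along the path $u_2, v_1, u_1, v_2$ yields the larger matching $\{u_2v_1, u_1v_2\}$.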
The bottleneck of this algorithm is the shortest path algorithm.
Although $D_M$ has negative edge lengths, one can find a {\em
potential} and apply Dijkstra's algorithm to $D_M$ with
non-negative {\em reduced costs}. The potential and reduced cost are
defined as follows.
\begin{definition}
A function $p:U\cup \hat{V}\rightarrow \mathbb{R}$ is a
\textit{potential} if, for every edge $uv$ in the residual graph
$D_M$, $\tilde\ell_{uv}=\ell_{uv}+p(u)-p(v)$ is non-negative. We
call $\tilde\ell$ a {\em reduced cost} with respect to a potential
$p$.
\end{definition}
The key idea of using a potential is that a shortest path from $u$
to $v$ with respect to the reduced cost $\tilde\ell$ is also
shortest with respect to $\ell$. We omit the details here (see, e.g.,
\cite[Chapter~7 and Section~17.2]{SchrijverA}), but note that we
can use the distance function found in the last iteration of the
algorithm as a potential, as in Algorithm~\ref{algo:EKT}.
\subsubsection*{Dijkstra's algorithm.}
\label{sec:Dijkstra} We now explain Dijkstra's algorithm on graph
$D_M$ with non-negative edge weight defined by $\tilde\ell$. Our
presentation is slightly different from the standard one but will be
easy to modify later. The algorithm keeps a subset $X$ of $U\cup
\hat{V}$, called the set of undiscovered vertices, and a function
$d: U\cup\hat V\rightarrow \mathbb{R}^+$ (the \textit{tentative
distance}). Start with $X= U\cup \hat V$ and set $d(u)=0$ for all
$u\in U_M$ and $d(v)=\infty$ for every vertex $v\notin U_M$. Apply the
following iteratively:
\begin{algorithmic}[1]
\STATE Find $u\in X$ minimizing $d(u)$. Set
$X=X\setminus \{u\}$.
\STATE For each neighbor $v$ of $u$ in $D_M$, ``relax'' $uv$: set
$d(v)\leftarrow\min\{d(v),d(u)+\tilde\ell_{uv}\}$.
\end{algorithmic}
The running time of Dijkstra's algorithm depends on the
implementation. One implementation uses a Fibonacci heap: each
vertex $v\in U\cup\hat V$ is kept in the heap with key $d(v)$.
Finding and extracting a vertex of minimum tentative distance can be
done in an amortized time bound of $O(\log |U\cup\hat V|)$ by
``extract-min'', and relaxing an edge can be done in an amortized time
bound of $O(1)$ by ``decrease-key''.
Consider the running time caused by finding a shortest path.
Let $n=|U\cup V|$ and $m=|E|$.
Then we have to call insertion $O(n)$ times, decrease-key $O(m)$
times, and extract-min $O(n)$ times.
Thus, the overall running time is $O(m + n\log{n})$.
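To illustrate the reduced-cost technique, the following sketch runs Dijkstra's algorithm on the non-negative reduced costs and converts the result back to distances for the original lengths. It is our own Python rendering, not the paper's implementation; the adjacency-list format is an assumption, and a valid potential $p$ (one with $\ell_{uv}+p(u)-p(v)\geq 0$ for every edge) is assumed to be supplied:

```python
import heapq

def dijkstra_with_potential(adj, s, p):
    """Shortest paths from s in a graph whose edges may have negative
    length, given a potential p with length(u,v) + p[u] - p[v] >= 0 for
    every edge.  Runs plain Dijkstra on the non-negative reduced costs,
    then converts the reduced distances back to original distances."""
    INF = float("inf")
    d = [INF] * len(adj)          # tentative reduced-cost distances
    d[s] = 0
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue              # stale heap entry
        for v, length in adj[u]:
            red = length + p[u] - p[v]   # non-negative reduced cost
            if du + red < d[v]:
                d[v] = du + red
                heapq.heappush(pq, (d[v], v))
    # dist_original(s, v) = d[v] - p[s] + p[v]
    return [d[v] - p[s] + p[v] if d[v] < INF else INF
            for v in range(len(adj))]
```

The final line uses the standard telescoping identity: the reduced cost of any $s$-$v$ path equals its original length plus $p(s)-p(v)$, so shortest paths coincide under both lengths.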
\section{Observation: $O(n^3)$ and $O(n^{5/2}\log (nW))$ time algorithms}
\label{sec:bimatching-algo}
Recall the reduction from the weighted semi-matching problem
to the weighted bipartite matching problem or, equivalently, an
assignment problem. The reduction was shown
in~\cite{BCS74,horn1973,Harvey_slide}. We include it here for
completeness. Given a bipartite graph $G=(U\cup V,E)$ with edge
weight $w$, an instance for the semi-matching problem, we construct a
bipartite graph $G=(U\cup\hat{V},\hat{E})$ with weight $\hat{w}$, an
instance for the weighted bipartite matching problem, as follows.
For every vertex $v\in V$ of degree $deg(v)$, we create {\em exploded}
vertices $v^1, v^2,\ldots, v^{deg(v)}$ in $\hat{V}$ and let
$\hat{V}_{v}$ denote a set of such vertices.
For each edge $uv$ in $E$ of weight $w_{uv}$, we also create $deg(v)$
edges $uv^1,uv^2,\ldots,uv^{deg(v_i)})$, with associated weights $w_{uv},
2\cdot w_{uv},\ldots, \deg(v)\cdot w_{uv}$, respectively.
It is easy to verify that finding an optimal semi-matching in $G$ is
equivalent to finding a minimum-weight matching covering $U$ in
$\hat{G}$. Figure~\ref{subfig:reduction} shows an example of this
reduction.
The construction yields a graph $\hat{G}$ with $O(m)$ vertices and
$O(nm)$ edges. Applying any existing algorithms for weighted
bipartite matching directly is not enough to get an improvement.
However, we observe that the reduction can be done in
$O(n^2\log n)$ time, and we can apply the result of Kao et
al. in~\cite{KLST01} to reduce the number of participating edges to
$O(n^3)$. Thus, Gabow and Tarjan's scaling algorithm~\cite{GT89} give
us the following result.
\begin{observation}
If all edges have non-negative integer weights bounded by $W$, then
there is an algorithm for the weighted semi-matching problem with a
running time of $O(n^{5/2}\log (nW))$.
\label{obs:bimatching-algo}
\end{observation}
This result immediately gives an $O(n^{5/2}\log n)$-time algorithm for
the unweighted case (i.e., $W=1$). Hence, we already have an
improvement over the previous $O(nm)$-time algorithm for the case of
dense graphs.
We now explain this observation.
If we reduce the problem normally (as in
Section~\ref{sec:weighted}) to get $\hat G$, then the number of
edges in $\hat G$ and the running time will be $O(nm)$. However,
since the size of any matching in the graph $\hat G$ is at most
$|U|$, it suffices to consider only the smallest $|U|$ edges in
$\hat G$ incident to each vertex in $U$. Therefore, we may assume
that $\hat G$ has $O(n^2)$ edges. (The same observation is also used
in \cite{KLST01}.)
More precisely, let $E_u$ be the set of edges incident to $u$ in $\hat
G$, and let $R$ be a set of the $|U|$ smallest edges of $E_u$. If a
maximum matching of minimum weight, say $M$, contains an edge
$e\in E_u\setminus R$, then $R\cup\{e\}$ has $|U|+1$ edges, implying
that there is an edge $e'\in R$ incident to a vertex $v\in\hat V$
not matched by $M$. Thus, we can replace $e$ with $e'$, which results
in a matching of no larger weight. Therefore, we need to keep only
$|U|^2$ edges in our reduction.
Moreover, we can reduce the time for the reduction to $O(n^2\log
n)$. The faster reduction is as follows. For each vertex
$u\in U$, we first add all edges incident to $u$ to a binary heap
$H$, each with additional information $i=1$, as a pair $(e,i)$. Then we
iteratively extract the minimum $(e=uv,i)$ from $H$, create an edge
$uv^i$ in $E'$ with weight $i\cdot w(e)$, and insert
$(uv^{i+1},i+1)$ back into $H$. We repeat the process until $u$ has
$|U|$ incident edges in $E'$. The pseudocode of the reduction is
given in Algorithm~\ref{algo:reduction}.
\begin{algorithm}
\caption{\pname{Reduction} $(G=(U\cup V,E),w)$}
\label{algo:reduction}
\begin{algorithmic}[1]
\STATE Create empty sets $\hat E$ and $\hat V$.
\FORALL{vertices $u\in U$}
\STATE Create a binary heap $H$.
\FORALL{edges $e$ incident to $u$}
\STATE Insert $(e,1)$ to $H$ with key $w(e)$.
\ENDFOR
\FOR{$k\leftarrow 1$ to $|U|$}
\STATE Extract-min from $H$, resulting in $(e=uv, i)$.
\STATE Insert a vertex $v^i$ to $\hat V$ (if it does not already exist).
\STATE Insert an edge $uv^i$ to $\hat E$.
\STATE Insert an edge $(e'=uv^{i+1},i+1)$ to $H$ with key
$w(e)\cdot (i+1)$.
\ENDFOR
\STATE Destroy a binary heap $H$.
\ENDFOR
\STATE Return $\hat G=(U\cup\hat V,\hat E)$.
\end{algorithmic}
\end{algorithm}
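A direct transcription of Algorithm~\ref{algo:reduction} might look as follows. This is a Python sketch under our own input conventions (not the paper's code): \texttt{adj[u]} lists the machines adjacent to job \texttt{u}, \texttt{w} maps a job-machine pair to its weight, and an exploded vertex $v^i$ is represented as the pair \texttt{(v, i)}:

```python
import heapq

def reduce_semi_matching(U, adj, w):
    """Exploded-graph reduction: for each job u, emit only its |U|
    cheapest exploded edges u -> v^i, where copy i of edge uv has
    weight i * w[(u, v)]."""
    V_hat = set()      # exploded machine vertices, as pairs (v, i)
    E_hat = []         # edges (u, (v, i), weight)
    k = len(U)
    for u in U:
        if not adj[u]:
            continue
        # seed the heap with copy i = 1 of every edge incident to u
        H = [(w[(u, v)], v, 1) for v in adj[u]]
        heapq.heapify(H)
        for _ in range(k):
            key, v, i = heapq.heappop(H)
            V_hat.add((v, i))
            E_hat.append((u, (v, i), key))
            # the next copy of edge uv costs (i + 1) * w[(u, v)]
            heapq.heappush(H, (w[(u, v)] * (i + 1), v, i + 1))
    return V_hat, E_hat
```

Each job contributes at most $|U|$ edges, so the exploded graph has $O(n^2)$ edges, in line with the discussion above.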
Consider a vertex $u\in U$. At any time during the reduction, there
are $O(\deg_G(u))$ edges in $H$. So, each extract-min takes
$O(\log(\deg_G(u)))$ time. The time for inserting a vertex to $\hat
V$ and an edge to $\hat E$ is $O(1)$ which is dominated by the time
for extract-min. Thus, we have to consider only the time for heap
operations. For each vertex $u\in U$, we have to call insertion
$\deg_G(u)+|U|$ times and extract-min $|U|$ times. Thus, the time
required to process each vertex of $U$ is
$O((\deg_G(u)+|U|)\log|U|)$. It follows that the total running time
of the reduction is $O((|E|+|U|^2)\log |U|)=O(n^2\log n)$.
Now, we run algorithms for the weighted bipartite matching problem on the
graph $\hat{G}$ with $O(n^2)$ edges.
Using the Edmonds-Karp-Tomizawa algorithm, the running time becomes $O(nm)=O(n^3)$.
Using Gabow and Tarjan's scaling algorithm,
the running time becomes $O(\sqrt{n}m\log{(nW)})=O(n^{5/2}\log{(nW)})$,
where $W$ is the maximum edge weight.
\section{Dinitz's blocking flow algorithm}
\label{sec:Dinitz}
In this section, we will give an outline of Dinitz's blocking flow
algorithm~\cite{Dinitz70}. Given a network $R$ with source $s$ and
sink $t$, a flow $g$ is a {\em blocking flow} in $R$ if every path
from the source to the sink contains a saturated edge, an edge with
zero residual capacity. A blocking flow is usually called a greedy
flow, since the flow cannot be increased without rerouting any of
the previous flow paths. In a unit-capacity network, depth-first
search can be used to find a blocking flow in linear time.
Dinitz's algorithm works on the {\em layer graph}, the subgraph whose
edges lie on at least one shortest path from $s$ to $t$. This
condition implies that we only augment along shortest paths. The
algorithm proceeds by successively finding blocking flows in the layer
graph of the residual graph of the previous round. An important
property (see, e.g.,
\cite{AMOBook,SchrijverA,TarjanBook} for proofs) is that
the distance between the source and the sink strictly increases after
each blocking flow step.
In the case of unit capacities, Even-Tarjan~\cite{ET75} and
Karzanov~\cite{Karzanov73} showed that the algorithm finds a maximum
flow in time $O(\min\{n^{2/3},m^{1/2}\}m)$. In the case of a
{\em unit network}, i.e., a network in which every vertex has either
indegree at most $1$ or outdegree at most $1$, the algorithm finds a
maximum flow in time $O(\sqrt{n}m)$.
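The blocking-flow scheme on a unit-capacity network can be sketched compactly as follows. This is our own Python rendering, not the paper's code: each phase builds the layer graph with a BFS and then saturates it with a pointer-based DFS blocking flow:

```python
from collections import deque

def dinitz_max_flow(n, edge_list, s, t):
    """Dinitz's algorithm on a unit-capacity network.  Each phase builds
    the layer graph via BFS and saturates it with a DFS blocking flow;
    the s-t distance strictly increases between phases."""
    g = [[] for _ in range(n)]              # g[u]: [v, capacity, index of reverse arc]
    for u, v in edge_list:
        g[u].append([v, 1, len(g[v])])      # forward arc of capacity 1
        g[v].append([u, 0, len(g[u]) - 1])  # residual (reverse) arc
    flow = 0
    while True:
        # BFS: level[v] = residual distance from s
        level = [-1] * n
        level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, c, _ in g[u]:
                if c > 0 and level[v] < 0:
                    level[v] = level[u] + 1
                    q.append(v)
        if level[t] < 0:
            return flow                     # sink unreachable: flow is maximum
        it = [0] * n                        # per-vertex arc pointer for the DFS

        def dfs(u):
            if u == t:
                return True
            while it[u] < len(g[u]):
                v, c, rev = g[u][it[u]]
                if c > 0 and level[v] == level[u] + 1 and dfs(v):
                    g[u][it[u]][1] -= 1     # push one unit along the arc
                    g[v][rev][1] += 1       # ... and onto its residual arc
                    return True
                it[u] += 1
            return False

        while dfs(s):                       # augment until the layer graph is blocked
            flow += 1
```

On the network obtained from a bipartite matching instance (unit arcs from a source to the left side, across, and from the right side to a sink), the value returned is the maximum matching size.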
\section{Extension to Balanced Edge Cover problem}
\label{sec:edge-cover} Recall that the problem is the following.\\
\noindent {\bf Input:} A simple undirected graph $G =(V,E)$. \\
\noindent {\bf Task:} Find an \textit{edge cover} $F$ minimizing
$c(F)= \sum_{v\in V} \deg_{F}(v)(\deg_{F}(v)+1)/2$, where $\deg_{F} (v)= |\{u: vu\in
F\}|$, and we say that $F$ is an edge cover if $\deg_{F}(v)\geq 1$ for
all $v\in V$.\footnote{We note that the original definition of the
balanced edge cover problem has a function $f : \integer^+
\rightarrow \real^+$ as an input~\cite{HaradaOSY08}. However, it is
shown in \cite{HaradaOSY08} that the optimal balanced edge cover can
be determined independently of function $f$ as long as $f$ is
strictly monotonic. In other words, the problem is equivalent to the
one we define here.} \\
We call the solution of such problem an \textit{optimal balanced
edge cover}. Observe that any minimal edge cover -- including any
optimal balanced edge cover -- induces a star forest; i.e., every
connected component has at most one vertex of degree greater than
one (we call such vertices centers) and the rest have degree
exactly one. For any optimal balanced edge cover $F$, we call any
set of vertices $C$ an {\em extended set of centers of $F$} if $C$
contains all centers of $F$ and exactly one vertex from each of the
connected components that have no center. (To be precise, $C$
is an extended set of centers if $C$ contains all centers of $F$ and each connected
component in the subgraph induced by $F$ contains exactly one vertex
in $C$.)
To solve the balanced edge cover problem using semi-matching
algorithm, we first make a further observation that if an extended
set of centers is given, then optimal balanced edge cover can be
found by simply solving the unweighted semi-matching problem.
\begin{lemma}
Let $C$ be an extended set of centers of some optimal balanced edge
cover $F$. Let $G'=((V\setminus C)\cup C, E')$ be a bipartite graph
where $E'$ is the set of edges between $V\setminus C$ and $C$ in
$G$. Then, an optimal semi-matching in $G'$ (where we allow vertices
in $C$ to be connected more than once) is an optimal balanced edge
cover in $G$.
\end{lemma}
\begin{proof}
Let $M$ be any optimal semi-matching in $G'$. First, note that $F$
is also a semi-matching in $G'$. Thus, the cost of $M$ is at most
the cost of $F$. It remains to show that $M$ is an edge cover;
in other words, that every vertex in $C$ is
covered by $M$.
Assume for the sake of contradiction that there is a vertex $v\in C$
that is not covered by $M$. We show that there exists a
cost-reducing path of $M$ starting from $v$ as follows. Starting
from $v_0=v$, let $v_1$ be any vertex adjacent to $v_0$ in $F$ and
$v_2$ be a (unique) vertex adjacent to $v_1$ in $M$. If
$\deg_M(v_2)>1$, then we stop the process. Otherwise, repeat the
process by finding a vertex $v_3$ adjacent to $v_2$ in $F$ and a
vertex $v_4$ adjacent to $v_3$.
Observe that all vertices found during the process are unique since
every vertex in $V\setminus C$ has degree exactly one in $F$, every
vertex found in the process, except $v_0$, has degree exactly one in
$M$, and there is no edge in $F$ between two vertices in $C$.
Therefore, the process above must stop. Moreover, the path obtained
after the process stops is a cost-reducing path, contradicting the
assumption that $M$ is an optimal semi-matching.
\end{proof}
It remains to find an extended set of centers. We do so by defining
levels of vertices based on some edge cover.
\begin{definition} For any edge cover $F$, define the
\textit{levelling} of vertices in $F$, denoted by $L_F$, as follows.
First, let all center vertices (i.e., all vertices with degree more
than one in $F$) be on level $1$.
For $i=1, 2, \ldots$, we construct level $i+1$ by looking at any
vertex $v$ not yet assigned to any level. If $i$ is odd and $v$
shares an edge in $F$ with a vertex on level $i$, then we add $v$
to level $i+1$. Otherwise, we add $v$ to level $i+1$ if $i$ is even,
$v$ shares an edge not in $F$ with a vertex on level $i$
\textit{and} $v$ does not share an edge in $F$ with any vertex on
level $i+1$. (For example, after we put vertices of degree more than
one in the first level, we put their leaves on level two. Then, we
put all vertices adjacent to these leaves by non-covering edges on
level three. But, if we see a single edge having both end vertices
on level three, we put one end to level four and so on.)
Note that when the process finishes, there might be some vertices
that are not assigned to any level.\hfill\qed
\end{definition}
An optimal balanced edge cover can be found by the following
algorithm.
\paragraph{{\sc Find-Center} Algorithm:} First, find a minimum cardinality
edge cover $F$. Then, find $L_F$ in a breadth-first manner. Let $M$
be an optimal semi-matching of a bipartite graph where the left
vertices are even-level vertices and right vertices are odd-level
vertices. Output $M$ and edges in $F$ between vertices with no
level.
Now, we show the running time and the correctness of {\sc Find-Center}
algorithm.
\paragraph{Running time analysis:} $F$ can be found by simply adding
uncovered vertices to a maximum cardinality
matching~\cite{gallai1959uep,norman1959amc}. A maximum cardinality
matching in a bipartite graph can be found by the Micali-Vazirani
algorithm~\cite{MV80} in $O(\sqrt{n}m)$ time, or in $O(n^\omega)$ time by
Harvey's algorithm~\cite{Harvey06}, where $\omega$ is the exponent of
matrix multiplication. However, since the running time of our
semi-matching algorithm is $O(\sqrt{n}m\log n)$, it suffices to use
the first one. Thus, $F$ can be found in $O(\sqrt{n}m)$ time.
Moreover, finding $L_F$ could be done in a breadth-first manner
which takes $O(n+|F|+|M|)=O(n)$ time. Therefore, the time for the
reduction from balanced edge cover to semi-matching problem is
$O(\sqrt{n}m)$ implying the total running time of $O(\sqrt{n}m\log
n)$.
\subsubsection*{Correctness:}
The proof of correctness uses the algorithm BEC1 proposed in
\cite{HaradaOSY08}. This algorithm starts from any minimum edge
cover and keeps augmenting along a \textit{cost-reducing} path until
no such path exists. Here, a cost-reducing path with respect to an
edge cover $F$ is a path starting from a center vertex $u$ that
follows an edge in $F$ and then an edge not in $F$. The path
keeps using edges in $F$ and edges not in $F$ alternately until it
finally uses an edge not in $F$ and ends at a vertex $v$ such that
$\deg_F(v)\leq \deg_F(u)-2$. (See \cite{HaradaOSY08} for the formal
definition.) It is shown that BEC1 returns an optimal balanced edge
cover.
\begin{lemma}
Let $C$ be the set returned from the {\sc Find-Center} algorithm.
Then $C$ is an extended set of centers of some optimal balanced edge
cover $F^*$. In other words, there exists $F^*$ such that all of its
centers are in $C$ and each connected component (in the subgraph
induced by $F^*$) has exactly one vertex in $C$.
\end{lemma}
\begin{proof}
Let $F$ be the minimum cardinality edge cover found by the {\sc
Find-Center} algorithm. Consider a variation of BEC1 algorithm where
we augment along a shortest cost-reducing path. We claim that we can
always augment along the shortest cost-reducing path in such a way
that parity of vertices' levels never change. To be precise, we
construct a sequence of minimum cardinality edge covers $F=F_1, F_2,
\ldots$ where we get $F_i$ from $F_{i-1}$ by augmenting along some
shortest cost-reducing path. By the following process, we guarantee
that if any vertex is on an odd (even) level in $L_F$ then it is on
an odd (even) level in $L_{F_i}$. Moreover, if a vertex belongs to
no level in $L_F$ then it has no level in $L_{F_i}$.
Suppose we are at $F_i$ and our guarantee has been maintained so far. Let
$P$ be any shortest cost-reducing path in $F_i$. If there is no such
$P$, then we have found $F^*$. Otherwise, we consider two cases.
\begin{description}
\item[Case 1] If $P$ contains only vertices on level 1 and 2: This
is equivalent to reconnecting vertices on level 2 to vertices on
level 1. Level of every vertex is the same in $L_{F_i}$ and
$L_{F_{i+1}}$. Thus, the guarantee is maintained.
\item[Case 2] Otherwise: Let $P=v_0v_1v_2\ldots v_k$. Recall that
$k$ is even and note that all of $v_0, v_1, \ldots , v_{k-1}$ must
be on level 1 and 2 (alternately); otherwise, we can stop at the
first vertex that we visit on other level and obtain a shorter
cost-reducing path. Now, let us augment from $v_0$ until we reach
$v_{k-2}$. At this point, $v_{k-1}$ must have degree at least
three (after augmentation) because it is on level 1 (which means
that it has degree more than one in $F_i$) and just receives one
more edge from augmentation.
If $v_k$ is on level 3, then we are done as it will be on level 1
in $L_{F_{i+1}}$ and all vertices in its subtree will be 2 levels
higher.
If not, then $v_k$ must be on level 4. Let $a$ be a vertex
adjacent to $v_k$ by an edge in $F_i$ (which is on level 3) and
let $b$ be a vertex on level 2 adjacent to $a$ (by an edge not in
$F_i$). There are two subcases.
\begin{description}
\item[Case 2.1] When $v_{k-1}=b$: In this case, we use a path
$v_1v_2\ldots v_{k-1}a$ instead.
\item[Case 2.2] When $v_{k-1}\neq b$: In this case, we get an edge
cover with cardinality smaller than $|F_i|=|F|$ by deleting
three edges in $F_i$ incident to $b$, $v_{k-1}$ and $v_k$ and
add $ab$ and $v_{k-1}v_k$. (Note that for the case that $b$ is
covered by an edge incident to $v_{k-2}$, we use the fact that
$v_{k-2}$ has degree at least 3 noted earlier.) So, this case is
impossible as it contradicts the fact that $F$ is minimum
cardinality edge cover.\qedhere
\end{description}
\end{description}
\end{proof}
\section{Introduction}
\label{sect:intro}
In this paper, we consider a relaxation of the maximum bipartite
matching problem called \textit{semi-matching} problem, in both
weighted and unweighted cases. This problem has been previously studied
in the scheduling literature under different names, mostly known as
(non-preemptive) scheduling independent jobs on unrelated machines to
minimize flow time, or $R||\sum C_j$ in the standard scheduling
notation~\cite{Scheduling_book,Scheduling_survey,AMOBook}.
Informally, the problem can be explained by the following off-line
load balancing scenario: We are given a set of jobs and a set of machines.
Each machine can process one job at a time and it takes different
amounts of time to process different jobs. Each job also requires
different processing times if processed by different machines. One
natural goal is to have all jobs processed with the minimum {\em
total completion time}, or {\em total flow time}, which is the
summation of the duration each job has to wait until it is finished.
Observe that if the assignment is known, the order each machine
processes its assigned jobs is clear: It processes jobs in an
increasing order of the processing time.
To be precise, the semi-matching problem is as follows. Let
$G=(U\cup V, E)$ be a weighted bipartite graph, where $U$ is a set
of jobs and $V$ is a set of machines. For any edge $uv$, let
$w_{uv}$ be its weight; the weight of an edge $uv$ indicates the time
it takes machine $v$ to process job $u$.
Throughout this paper, let $n$ denote the number of vertices and
$m$ denote the number of edges in $G$.
A set $M\subseteq E$ is a {\em semi-matching} if each job $u \in U$
is incident with exactly one edge in $M$. For any semi-matching $M$,
we define the {\em cost} of $M$, denoted by $\cost(M)$, as follows.
First, for any machine $v\in V$, its cost with respect to a
semi-matching $M$ is
\[\cost_M(v)= (w_1)+(w_1+w_2)+\ldots +(w_1+\ldots
+w_{\deg_M(v)}) =\sum_{i=1}^{\deg_M(v)}(\deg_M(v)-i+1)\cdot
w_i
\]
where $\deg_M(v)$ is the degree of $v$ in $M$ and $w_1 \leq w_2 \leq
\ldots \leq w_{\deg_M(v)}$ are weights of the edges in $M$ incident
with $v$ sorted increasingly. Intuitively, this is the total
completion time of jobs assigned to $v$. Note that for the
unweighted case (i.e., when $w_e=1$ for every edge $e$), the cost of
a machine $v$ is simply $\deg_M(v)\cdot (\deg_M(v)+1)/2$.
Now, the cost of the semi-matching $M$ is simply the summation of
the cost over all machines: $$\cost(M) = \sum_{v\in V} \cost_M(v).$$
The goal is to find an {\em optimal semi-matching}, a semi-matching
with minimum cost.
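For illustration, the cost of a given semi-matching can be computed directly from this definition. The following is a small Python sketch of ours (the dictionary-based input format is our own assumption, not the paper's):

```python
def semi_matching_cost(M, w):
    """Total completion time of a semi-matching M = {machine: [assigned jobs]}.
    On each machine, jobs run in increasing order of processing time, so
    the i-th cheapest of d jobs is counted (d - i + 1) times:
    cost_M(v) = w1 + (w1 + w2) + ... + (w1 + ... + wd)."""
    total = 0
    for v, jobs in M.items():
        weights = sorted(w[(u, v)] for u in jobs)   # w1 <= w2 <= ... <= wd
        d = len(weights)
        total += sum((d - i) * wi for i, wi in enumerate(weights))
    return total
```

In the unweighted case every $w_i$ equals $1$, so a machine with $d$ assigned jobs contributes $d(d+1)/2$, matching the formula above.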
\paragraph{Previous works:} Although the name ``semi-matching'' was
recently proposed by Harvey, Ladner, Lov\'{a}sz, and
Tamir~\cite{HLLT06}, the problem was studied as early as the 1970s, when
an $O(n^3)$ algorithm was independently developed by
Horn~\cite{horn1973} and by Bruno, Coffman and Sethi
in~\cite{BrunoCS74}. Since then, no progress has been made on this
problem except on its special cases and variations.
For the special case of {\em inclusive set restriction}
where, for each pair of jobs $u_1$ and $u_2$, either all neighbors
of $u_1$ are neighbors of $u_2$ or vice versa, a faster algorithm
with $O(n^2)$ running time was given by Spyropoulos and
Evans~\cite{SpyropoulosE85}. Many variations of this problem were
proved to be NP-hard, including the preemptive
version~\cite{Sitters01}, the case when there are
deadlines~\cite{su2009}, and the case of optimizing total weighted
tardiness~\cite{logendran2004}. The variation where the objective is
to minimize $\max_{v\in V}\cost_M(v)$ was also
considered~\cite{Low06,LeeLP09-note}.
The unweighted case of the semi-matching problem has also received
considerable attention in the past few years. Since it was shown
in~\cite{HLLT06} that an optimal solution of the semi-matching problem
is also optimal for the makespan version of the scheduling problem
(where one wants to minimize the time the last machine finishes), we
mention the results of both problems.
The problem was first studied in a special case, called the {\em nested}
case, where, for any two jobs, if their sets of neighbors are not
disjoint, then one of these sets contains the other. This case was
shown to be solvable in $O(m+n\log n)$ time~\cite[p.103]{Pinedo01}.
For the general unweighted semi-matching problem,
Abraham~\cite[Section 4.3]{Abraham03} and Harvey, Ladner, Lov\'asz and
Tamir~\cite{HLLT06} independently developed two algorithms with
$O(nm)$ running time. Lin and Li~\cite{LinLi04} also gave an
$O(n^3\log{n})$-time algorithm, which was later generalized to a more
general cost function~\cite{Li06}.
Recently, Lee, Leung and Pinedo~\cite{LeeLP09} showed that the problem
can be solved in polynomial time even when there are release times.
The unweighted semi-matching problem was recently generalized to the
quasi-matching problem by Bokal, Bresar and Jerebic~\cite{Bokal09}.
In this problem, a function $g$ is provided and each vertex $u\in U$
is required to connect to at least $g(u)$ vertices in $V$; thus,
the semi-matching problem is the special case where $g(u)=1$ for every $u\in U$. They
also developed an algorithm for this problem, which is a generalization
of the Hungarian method, and used it to deal with a routing problem
in CDMA-based wireless sensor networks.
Motivated by the problem of assigning wireless stations (users) to
access points, the unweighted semi-matching problem has also been
generalized to the problem of finding an optimal semi-matching with
minimum weight, for which an $O(n^2m)$-time algorithm was
given~\cite{HaradaOSY07}.
Approximation algorithms and online algorithms for this problem
(both weighted and unweighted cases) and the makespan version have
also gained a lot of attention over the past few decades and have
applications ranging from scheduling in hospital to wireless
communication network. (See \cite{Scheduling_survey,vaik05} for the
recent surveys.)
\paragraph{Applications:}
As motivated by Harvey et al.~\cite{HLLT06}, even in an online setting
where jobs arrive and depart over time, jobs may be reassigned
from one machine to another cheaply if the algorithm's running time is
significantly faster than the arrival/departure rate. (One example of
such a case is the Microsoft Active Directory
system~\cite{GLLK79-load-balancing,HLLT06}.)
The problem also arises in Video on Demand (VoD) systems, where
the load of video disks needs to be balanced while data blocks from
the disks are retrieved or while serving
clients~\cite{Low02,TamirV08}.
The problem, if solved in the distributed setting, can be used to
construct a load balanced data gathering tree in sensor
networks~\cite{SSK06,MachadoT08}. The same problem also arose in
peer-to-peer systems~\cite{SuriTZ04,KothariSTZ04,SuriTZ07}.
In this paper, we also consider an ``edge cover'' version of the
problem.
In some applications such as sensor networks, there are no
jobs and machines but the sensor nodes have to be clustered and each
cluster has to pick its own head node to gather information from
other nodes in the cluster.
Motivated by this, Harada, Ono, Sadakane and
Yamashita~\cite{HaradaOSY08} introduced the
{\em balanced edge cover} problem\footnote{This problem is also
known as a {\em constant jump system} (see, e.g.,
\cite{tamir1995,Lovasz97}).} where the goal is to find an edge cover
(set of edges incident to every vertex) that minimizes the total
cost over all vertices. (The cost on each vertex is as previously
defined.) They gave an $O(nm)$ algorithm for this problem and
claimed that it could be used to solve the semi-matching problem as
well. We show that this problem can be efficiently reduced to the
semi-matching problem. Thus, our algorithm (for unweighted case)
also gives a better bound on the balanced edge cover problem.
\subsection*{Our results and techniques}
We consider the semi-matching problem and give a faster algorithm
for each of the weighted and unweighted cases. We also extend the
algorithm for the unweighted case to solve the balanced edge cover
problem.
\squishlist
\item \textbf{Weighted Semi-Matching:} (Section~\ref{sec:weighted})
We present an $O(nm\log{n})$-time algorithm, improving the previous
$O(n^3)$-time algorithms by Horn~\cite{horn1973} and Bruno et
al.~\cite{BrunoCS74}.
As in the previous results \cite{horn1973,BCS74,Harvey_slide}, we
use the reduction of the weighted semi-matching problem to the
weighted bipartite matching problem as a starting point. We, however,
only use the structural properties arising from the reduction and do
not actually perform the reduction.
\item \textbf{Unweighted Semi-Matching:} (Section~\ref{sec:unweighted})
We give an $O(\sqrt{n} m\log n)$ algorithm, improving the previous
$O(nm)$ algorithms by Abraham~\cite{Abraham03} and Harvey et
al.~\cite{HLLT06}.\footnote{We also observe an $O(n^{5/2}\log n)$
algorithm that arises directly from the reduction by applying
\cite{KLST01}.}
Our algorithm uses the same reduction to the min-cost flow problem
as in~\cite{HLLT06}. However, instead of cancelling one negative
cycle in each iteration, our algorithm exploits the structure of the
graphs and the cost functions to cancel many negative cycles in a
single iteration. This technique can also be generalized to any convex
cost function.
\item \textbf{Balanced Edge Cover:} (Section~\ref{sec:edge-cover})
We also present a reduction from the balanced edge cover problem to
the unweighted semi-matching problem. This leads to an
$O(\sqrt{n}m\log n)$ algorithm for the problem, improving the
previous $O(nm)$ algorithm by Harada et al.~\cite{HaradaOSY08}.
The main idea is to identify the ``center'' vertices of all the
clusters in the optimal solution. (Note that any balanced edge cover
(in fact, any minimal edge cover) clusters the vertices into stars.)
Then, we partition the vertices into two sides, center and
non-center ones, and apply the semi-matching algorithm on this
graph.
\squishend
\section{Conclusion}
\paragraph{Acknowledgment} We thank David Pritchard for useful
suggestions, Jane (Pu) Gao for pointing out some related surveys, and
Dijun Luo for pointing out some errors in the earlier version of this
paper.
\let\oldthebibliography=\thebibliography
\let\endoldthebibliography=\endthebibliography
\renewenvironment{thebibliography}[1]{%
\begin{oldthebibliography}{#1}%
\setlength{\parskip}{0ex}%
\setlength{\itemsep}{0ex}%
}{%
\end{oldthebibliography}%
}
{ \small
\bibliographystyle{plain}
\section{Unweighted semi-matching}
\label{sec:unweighted}
In this section, we present an algorithm that finds the optimal
semi-matching in an unweighted graph in $O(m\sqrt{n}\log n)$ time.
\subsection*{Overview}
Our algorithm consists of the following three steps.
In the first step, we reduce the problem to the min-cost flow
problem, using the same reduction as Harvey et al.~\cite{HLLT06}.
(See Figure~\ref{fig:tran-sample}.) The details are provided in
Section~\ref{subsect:min-cost-flow}. We note that the flow is
optimal if and only if there is no cost-reducing path (to be defined
later). We start with an arbitrary semi-matching and use this
reduction to get a corresponding flow. The goal is to eliminate all
the cost-reducing paths.
The second step is a divide-and-conquer algorithm used to eliminate
all the cost-reducing paths. We call this algorithm {\sc CancelAll}
(cf. Algorithm~\ref{alg:CancelAll}).
The main idea here is to divide the graph into two subgraphs so that
eliminating cost-reducing paths ``inside'' each subgraph does not
introduce any new cost-reducing paths going through the other.
This dividing step needs to be done carefully.
We treat this in Section~\ref{subsec:main_unweighted_algo}.
Finally, in the last component of the algorithm we deal with
eliminating cost-reducing paths between two sets of vertices
quickly. Naively, one can do this using any unit-capacity max-flow
algorithm, but this does not give an improvement on the running time.
To get a faster algorithm, we observe that the structure
of the graph is similar to a {\em unit network}, where every vertex
has in-degree or out-degree one. Thus, we get the same performance
guarantee as Dinitz's
algorithm~\cite{Dinitz70,Dinitz06}.\footnote{The algorithm is also
known as ``Dinic's algorithm''. See~\cite{Dinitz06} for details.}
Details of this part can be found in
Section~\ref{subsect:cancel-paths}.
After presenting the algorithm in the next three sections, we
analyze the running time in Section~\ref{sect:running-time}. We note
that this algorithm also works in a more general cost function
(discussed in Section~\ref{sect:general}). We also observe an
$O(n^{5/2}\log n)$-time algorithm that arises directly from the
reduction of the weighted case (discussed in
Appendix~\ref{sec:bimatching-algo}). This already gives an
improvement over the previous results, but the algorithm presented
here improves the running time further.
\subsection{Reduction to min-cost flow and optimality characterization (revisited)}
\label{subsect:min-cost-flow}
In this section, we review the characterization of the optimality of
the semi-matching in the min-cost flow framework.
We use the reduction as given in~\cite{HLLT06}. Given a bipartite
graph $G=(U\cup V, E)$, we construct a directed graph $N$ as
follows. Let $\Delta$ denote the maximum degree of the vertices in
$V$. First, add a set of vertices, called {\em cost centers},
$C=\{c_1,c_2,\ldots,c_\Delta\}$ and connect each $v\in V$ to $c_i$
with edges of capacity 1 and cost $i$, for all $1\leq i\leq
\deg(v)$. Second, add $s$ and $t$ as a source and sink vertex. For
each vertex in $U$, add an edge from $s$ to it with zero cost and
unit capacity. For each cost center $c_i$, add an edge to $t$ with
zero cost and infinite capacity. Finally, direct each edge $e \in E$
from $U$ to $V$ with capacity 1 and cost 0. Observe that the new
graph $N$ has $O(n)$ vertices and $O(m)$ edges, and any
semi-matching in $G$ corresponds to a max flow in $N$.
\begin{figure}
\centering
\input{fig1}
\caption{Reduction to the min-cost flow problem. Each edge is
labelled with \textbf{(cost, capacity)} constraint. Thick edges
either are matching edges or contain the flow.}
\label{fig:tran-sample}
\end{figure}
(See the example in Figure~\ref{fig:tran-sample}.)
Moreover, Harvey et al.~\cite{HLLT06} prove that an optimal
semi-matching in $G$ corresponds to a min-cost flow in $N$; in other
words, the reduction described above is correct.
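To make the construction above concrete, here is a small sketch (the function name and edge-tuple representation are our own, not from the paper) that builds $N$ from $G$:

```python
# Sketch of the reduction: build the network N from G = (U ∪ V, E).
# Edges are (tail, head, capacity, cost) tuples; "s"/"t" are source/sink.
def build_network(U, V, adj):
    """adj maps each v in V to its list of neighbours in U."""
    Delta = max((len(adj[v]) for v in V), default=0)
    edges = []
    for u in U:                                  # s -> u: capacity 1, cost 0
        edges.append(("s", u, 1, 0))
    for v in V:
        for u in adj[v]:                         # u -> v: capacity 1, cost 0
            edges.append((u, v, 1, 0))
        for i in range(1, len(adj[v]) + 1):      # v -> c_i: capacity 1, cost i
            edges.append((v, ("c", i), 1, i))
    for i in range(1, Delta + 1):                # c_i -> t: infinite capacity
        edges.append((("c", i), "t", float("inf"), 0))
    return edges
```

Since each $v$ contributes $\deg(v)$ cost-center edges, the sketch reproduces the $O(n)$-vertex, $O(m)$-edge bound observed above.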
Our algorithm is based on the observation that the largest cost is
$O(|U|)$. This allows one to use the cost-scaling framework to solve
the problem.
Now, we review an optimality characterization of the min-cost flow.
We need to define a {\em cost-reducing path} first.
Let $R_f$ denote the residual graph of $N$ with respect to a flow
$f$.
We call any path $p$ from a cost center $c_i$ to $c_j$ in $R_f$
an {\em admissible path}, and call $p$ a {\em cost-reducing path} if
$i>j$. A cost-reducing path corresponds one-to-one to a
negative-cost cycle, which yields the optimality condition for $f$.
Harvey et al.~\cite{HLLT06} proved the following.
\begin{lemma}[\cite{HLLT06}]
A flow $f$ is a min-cost flow in $N$ if and only if there is no
cost-reducing path in $R_f(N)$. \label{lmm:opt-no-aug}
\end{lemma}
\begin{proof}
Note that $f$ is a min-cost flow if and only if there is no negative
cycle in $R_f$. To prove the ``only if" part, assume that there is
a cost-reducing path from $c_i$ to $c_j$. We consider the
shortest one, i.e., no cost center is contained in the path except the
first and the last vertices. The only edges that affect the cost of
this path are the first and the last ones, because only edges
incident to cost centers have non-zero cost. The costs of the first
and the last edges are $-i$ and $j$, respectively. Connecting $c_i$
and $c_j$ through $t$ results in a cycle of cost $j-i<0$.
For the ``if" part, assume that there is a negative-cost cycle in
$R_f$. Consider a shortest one; it contains only two cost
centers, say $c_i$ and $c_j$ where $i>j$. This cycle contains an
admissible path from $c_i$ to $c_j$, which is a cost-reducing path.
\end{proof}
Given a max-flow $f$ and a cost-reducing path $P$, one can find a
flow $f'$ with lower cost by augmenting $f$ along $P$ with a unit
flow. This is later called {\em path cancelling}. We are now ready
to explain our algorithm.
\subsection{The divide-and-conquer algorithm}\label{subsec:main_unweighted_algo}
Our algorithm takes a bipartite graph $G=(U\cup V,E')$ and outputs
the optimal semi-matching. It starts by transforming $G$ into a
graph $N$ as described in the previous section. Since the source
$s$ and the sink $t$ are always clear from the context, the graph
$N$ can be seen as a tripartite graph with vertices $U\cup V\cup C$;
later on, we denote $N=(U\cup V\cup C,E)$. The algorithm proceeds by
finding an arbitrary max-flow $f$ from $s$ to $t$ in $N$, which
corresponds to a semi-matching in $G$. This can be done in linear
time, since any semi-matching in $G$ directly yields such a flow.
To find the min-cost flow in $N$, the algorithm uses a subroutine
called {\pname{CancelAll}} (cf. Algorithm~\ref{alg:CancelAll}) to
cancel all cost-reducing paths in $f$. Lemma~\ref{lmm:opt-no-aug}
ensures that the final flow is optimal.
\begin{algorithm}{
\caption{ \pname{CancelAll}$(N=(U \cup V\cup C, E))$
\label{alg:CancelAll}}
\begin{algorithmic}[1]
\STATE \textbf{if} $|C|=1$ \textbf{then} halt \textbf{endif}
\STATE Divide $C$ into $C_1$ and $C_2$ of roughly equal size.
\STATE \pname{Cancel}($N,C_2,C_1$). \{Cancel all cost-reducing
paths from $C_2$ to $C_1$\}.\label{line:cancel}
\STATE Divide $N$ into $N_1$ and $N_2$ where $N_2$ is ``reachable'' from
$C_2$ and $N_1$ is the rest.
\STATE Recursively solve \pname{CancelAll}$(N_1)$ and \pname{CancelAll}$(N_2)$.
\end{algorithmic}
}
\end{algorithm}
{\pname{CancelAll}} works by dividing $C$ and solving the problem
recursively. Given a set of cost centers $C$, the algorithm divides
$C$ into roughly equal-size subsets $C_1$ and $C_2$ such that, for
any $c_i\in C_1$ and $c_j \in C_2$, $i<j$. This guarantees that
there is no cost-reducing path from $C_1$ to $C_2$. Then it cancels
all cost-reducing paths from $C_2$ to $C_1$ by calling the {\sc Cancel}
algorithm (described in Section~\ref{subsect:cancel-paths}).
It remains to cancel the cost-reducing paths ``inside'' each of
$C_1$ and $C_2$. This is done by partitioning the vertices of $N$
(except $s$ and $t$) and forming two subgraphs $N_1$ and $N_2$;
we then solve the problem separately on each of them.
In more detail, we partition the graph $N$ by letting $N_2$ be a
subgraph induced by vertices reachable from $C_2$ in the residual
graph and $N_1$ be the subgraph induced by the remaining vertices. (Note
that both graphs have $s$ and $t$.) For example, in
Figure~\ref{fig:tran-sample}, $v_1$ is reachable from $c_3$ by the
path $c_3, v_2, u_2, v_1$ in the residual graph.
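The reachability computation in this dividing step is a plain BFS on the residual graph; a sketch (the function name and dictionary representation are our own):

```python
from collections import deque

# Sketch of the dividing step: S = vertices reachable from C_2 in the
# residual graph.  S induces N_2; the remaining vertices induce N_1.
def split_by_reachability(radj, C2):
    """radj: residual-graph adjacency dict {vertex: iterable of vertices}."""
    S = set(C2)
    queue = deque(C2)
    while queue:
        x = queue.popleft()
        for y in radj.get(x, ()):
            if y not in S:
                S.add(y)
                queue.append(y)
    N1 = set(radj) - S           # vertices not reachable from C_2
    return S, N1
```

By the maximality of $S$, every residual edge between the two parts is directed from $N_1$ to $N_2$, which is exactly the fact used to argue that no new cost-reducing paths from $C_2$ to $C_1$ are introduced.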
\begin{lemma}\label{lem:cancelall}
{\pname{CancelAll}}$(N)$ (cf. Algorithm~\ref{alg:CancelAll}) cancels
all cost-reducing paths in $N$.
\end{lemma}
\begin{proof}
Recall that all cost-reducing paths from $C_2$ to $C_1$ are cancelled
in line~\ref{line:cancel}. Let $S$ denote the set of vertices
reachable from $C_2$.
\begin{claim}
After line~\ref{line:cancel}, no admissible paths between two cost
centers in $C_1$ intersect $S$. \label{claim:no-intersect}
\end{claim}
\begin{proof}
Assume, for the sake of contradiction, that there exists an
admissible path from $x$ to $y$, where $x,y\in C_1$, that contains a
vertex $q\in S$. Since $q$ is reachable from some vertex $z\in C_2$,
there must exist an admissible path from $z$ to $y$, i.e., a
cost-reducing path from $C_2$ to $C_1$; this leads to a contradiction.
\end{proof}
This claim implies that, in our dividing step, all cost-reducing
paths between pairs of cost centers in $C_1$ remain entirely in
$N_1$. Furthermore, vertices in any cost-reducing path between pairs
of cost centers in $C_2$ must be reachable from $C_2$; thus, they
must be inside $S$. Therefore, after the recursive calls, no
cost-reducing paths between pairs of cost centers in the same
subproblems $C_i$ are left. The lemma follows if we can show that
in these processes we do not introduce more cost-reducing paths from
$C_2$ to $C_1$. To see this, note that all edges between $N_1$ and
$N_2$ remain untouched in the recursive calls. Moreover, these
edges are directed from $N_1$ to $N_2$, because of the maximality of
$S$. Therefore there is no admissible path from $C_2$ to $C_1$.
\end{proof}
\subsection{Cancelling paths from $C_2$ to $C_1$}
\label{subsect:cancel-paths} In this section we describe an
algorithm that cancels all admissible paths from $C_2$ to $C_1$ in
$R_f$, which can be done by finding a max flow from $C_2$ to $C_1$.
To simplify the presentation, we assume that there is a super-source
$s$ connected to the vertices in $C_2$ and a super-sink $t$ connected
to the vertices in $C_1$.
To find a maximum flow, observe that $N$ is unit-capacity and every
vertex of $U$ has in-degree $1$ in $R_f$. By exploiting these
properties, we show that Dinitz's blocking flow
algorithm~\cite{Dinitz70} can find a maximum flow in
$O(|E|\sqrt{|U|})$ time. The algorithm proceeds by repeatedly
augmenting flow along shortest augmenting paths (see
Appendix~\ref{sec:Dinitz}).
\begin{lemma}
Let $d_i$ be the length of the shortest $s-t$ path in the residual
graph at the $i^{th}$ iteration. For all $i$, $d_{i+1}>d_i$.
\end{lemma}
The lemma can be used to show that Dinitz's algorithm terminates
after $n$ rounds of the blocking flow step, where $n$ is the number
of vertices: after the $n$-th round, the distance between the
source and the sink is more than $n$, which means that there is no
augmenting path from $s$ to $t$ in the residual graph.
can be improved for certain classes of problems. Even and
Tarjan~\cite{ET75} and Karzanov~\cite{Karzanov73} showed that in unit
capacity networks, Dinitz's algorithm terminates after
$O(\min(n^{2/3},m^{1/2}))$ rounds, where $m$ is the number of edges.
Also, in unit networks, where every vertex has in-degree one or
out-degree one, Dinitz's algorithm terminates in $O(\sqrt{n})$ rounds
(see, e.g., Tarjan's book~\cite{TarjanBook}).
Since the graph $N$ we are considering is very similar to unit
networks, we are able to show that Dinitz's algorithm also terminates
in $O(\sqrt{n})$ rounds in our case.
For any flow $f$, a {\em residual flow} $f'$ is a flow in a residual
graph $R_f$ of $f$. If $f'$ is maximum in $R_f$, $f+f'$ is maximum
in the original graph. The following lemma relates the amount of
the maximum residual flow with the shortest distance from $s$ to $t$
in our case. The proof is a modification of Theorem~8.8
in~\cite{TarjanBook}.
\begin{lemma}\label{lem:modify-Tarjan}
If the shortest $s-t$ distance in the residual graph is $d>4$, the
amount of the maximum residual flow is at most $O(|U|/d)$.
\end{lemma}
\begin{proof}
A maximum residual flow in a unit-capacity network can be decomposed
into a set $\mathcal P$ of edge-disjoint paths where the number of
paths equals the flow value. Each of these paths is of length
at least $d$. Clearly, each path contains the source, the sink, and
exactly two cost centers. Now consider any path $P\in\mathcal P$ of
length $l$. It contains $l-3$ vertices from $U\cup V$. Since the
original graph is bipartite, at least $\lfloor(l -
3)/2\rfloor\geq\lfloor(d - 3)/2\rfloor\geq(d-4)/2$ vertices are from
$U$. Note that the paths in $\mathcal P$ contain disjoint sets of
vertices in $U$, since every vertex in $U$ has in-degree one. Therefore,
we conclude that there are at most $2|U|/(d-4)$ paths in $\mathcal
P$. The lemma follows since each path carries one unit of flow.
\end{proof}
From these two lemmas, we obtain the main lemma of this section.
\begin{lemma}
{\pname{Cancel}} terminates in $O(|E|\sqrt{|U|})$
time.\label{lem:cancel_run_time}
\end{lemma}
\begin{proof}
Since each iteration can be done in $O(|E|)$ time, it is enough to
prove that the algorithm terminates in $O(\sqrt{|U|})$ rounds. The
previous lemma implies that after $O(\sqrt{|U|})$ rounds, the amount
of the maximum residual flow is $O(\sqrt{|U|})$ units. The lemma
thus follows because after that the algorithm augments at least one
unit of flow in each round.
\end{proof}
\subsection{Running time}
\label{sect:running-time} The running time of the algorithm is
dominated by the running time of {\pname{CancelAll}}, which can be
analyzed as follows.
Let $T(n,n',m,k)$ denote the running time of the algorithm when
$|U|=n, |V|=n', |E|=m,$ and $|C|=k$. For simplicity, assume that
$k$ is a power of two. By Lemma~\ref{lem:cancel_run_time},
{\pname{Cancel}} runs in $O(|E|\sqrt{|U|})$ time. Therefore,
\[
T(n,n',m,k) \leq c\cdot m\sqrt{n} + T(n_1,n'_1,m_1,k/2) +
T(n_2,n'_2,m_2,k/2),
\]
for some constant $c$, where $n_i,n'_i,$ and $m_i$ denote the number
of vertices and edges in $N_i$, respectively. Recall that each edge
participates in at most one of the subproblems; thus, $m_1+m_2\leq
m$. Observe that the number of cost centers always decreases by a
factor of two. Thus, the recurrence solves to
$O(\sqrt{n}m\log{k})$. Since $k=O(|U|)$, the running time is
$O(\sqrt{n}m\log{n})$ as claimed.
Furthermore, the algorithm works with a more general cost function,
with the same running time, as shown in the next section.
\subsection{Generalizations of an unweighted algorithm}
\label{sect:general}
The problem can be viewed in a slightly more general form. In
Harvey et al.~\cite{HLLT06}, the cost functions for each vertex
$v\in V$ are the same. We relax this condition, allowing a different
function for each vertex, where each function is convex. More
precisely, for each $v\in V$, let $f_v:\integer_{+}\rightarrow\real$
be a convex function, i.e., for any $i$,
$f_v(i+1)-f_v(i)\geq f_v(i)-f_v(i-1)$. The cost for a matching $M$ on
vertex $v$ is $f_v(\deg_M(v))$. For this convex cost function, a
transformation similar to the one described in
Section~\ref{subsect:min-cost-flow} can still be applied.
However, the number of different values of $f_v$ is now $O(|E|)$.
So, the size of the set of cost centers $C$ is now upper bounded by
$O(|E|)$ rather than $O(|U|)$.
Therefore, the running time of our algorithm becomes
$O(|E|\sqrt{|U|}\log{|C|})=O(|E|\sqrt{|U|}\log{|E|})
=O(\sqrt{n}m\log{n})$ (since $|E|\leq n^2$) which is the same as
before.
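To make the generalized setting concrete, here is a small sketch (helper names are ours) of per-vertex convex costs; the uniform case $f_v(i)=i(i+1)/2$ corresponds to the per-unit costs $1,2,\ldots$ charged by the cost centers in Section~\ref{subsect:min-cost-flow}:

```python
# Sketch: per-vertex convex cost functions.  The objective on a matching M
# is the sum of f_v(deg_M(v)) over v in V; convexity of f_v is exactly the
# condition f_v(i+1) - f_v(i) >= f_v(i) - f_v(i-1) stated in the text.
def is_convex(f, max_deg):
    return all(f(i + 1) - f(i) >= f(i) - f(i - 1) for i in range(1, max_deg))

def total_cost(M, f_by_v):
    """M: iterable of (u, v) edges; f_by_v: cost function for each v in V."""
    deg = {v: 0 for v in f_by_v}
    for _, v in M:
        deg[v] += 1
    return sum(f(deg[v]) for v, f in f_by_v.items())
```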
\section{Weighted semi-matching}
\label{sec:weighted}
In this section, we present an algorithm that finds an optimal weighted
semi-matching in $O(nm\log n)$ time.
\subsection*{Overview}
Our improvement follows from studying the reduction from the
weighted semi-matching problem to the weighted bipartite matching
problem considered in the previous
works~\cite{horn1973,BrunoCS74,Harvey_slide} and the
Edmonds-Karp-Tomizawa (EKT) algorithm for finding the weighted
bipartite matching~\cite{EK70,Tomizawa71}.
We first review these briefly.
For more detail, see Appendix~\ref{sec:EK_algo}
and~\ref{sec:bimatching-algo}.
\paragraph{Reduction:} As in~\cite{horn1973,BrunoCS74,Harvey_slide}, we
consider the reduction from the semi-matching problem on bipartite
graph $G=(U\cup V,E)$ to the minimum-weight bipartite matching on a
graph $\hat G$.
The reduction is done by exploding the vertices in $V$, i.e., for each
vertex $v\in V$ we create $\deg(v)$ vertices, $v^1, v^2, \ldots,
v^{\deg(v)}$. We also make copies of the edges incident to $v$ in the
original graph $G$, i.e., for each vertex $u\in U$ such that $uv\in E$,
we create edges $uv^1, uv^2, \ldots, uv^{\deg(v)}$. For each edge
$uv^i$ incident to $v^i$ in $\hat G$, we set its weight to $i$ times
its original weight in $G$, i.e., $w_{uv^i}=i\cdot w_{uv}$. We denote
the set of these vertices by $\hat V_v$.
Thus, we have
\begin{align*}
\hat{G} &= (U\cup\hat{V},\hat{E}) \\
\hat{V} &= \{v^1,v^2,\ldots,v^{\deg_G(v)}:v\in V\} \\
\hat{E} &= \{uv^1,uv^2,\ldots,uv^{\deg_G(v)}:uv\in E\} \\
\hat{w}_{uv^i} &= i\cdot w_{uv}\quad \forall uv\in
E, i\in\{1,2,\ldots,\deg_G(v)\}
\end{align*}
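The exploding step can be sketched as follows (the function name and dictionary representation are ours, not from the paper):

```python
# Sketch of the reduction to weighted bipartite matching: explode each
# v into deg(v) copies and give edge (u, v^i) the weight i * w_uv.
def explode(adj, w):
    """adj[v]: neighbours of v in U; w[(u, v)]: weight of uv in G."""
    hat_w = {}
    for v, nbrs in adj.items():
        for u in nbrs:
            for i in range(1, len(nbrs) + 1):
                hat_w[(u, (v, i))] = i * w[(u, v)]
    return hat_w
```

Each original edge $uv$ yields $\deg(v)$ copies, which is why $\hat G$ may have $\Theta(nm)$ edges, as noted below.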
The correctness of this reduction can be seen by
replacing the edges incident to $v$ in the semi-matching by the
edges incident to $v^1, v^2, \ldots$ with weights in decreasing
order. For example, in Figure~\ref{subfig:reduction}, edge $u_1v_1$
and edge $u_2v_1$ in the semi-matching in $G$ correspond to
$u_1v_1^1$ and $u_2v_1^2$ in the matching in $\hat G$.
The reduction is illustrated in Figure~\ref{subfig:reduction}.
This alone does not give an improvement on the semi-matching problem
because the number of edges becomes $O(nm)$.
However, we can apply some tricks to improve the running time.
(See Appendix~\ref{sec:bimatching-algo}.)
\begin{figure}
\centering
\subfigure[Reduction] {
\includegraphics[width=0.45\textwidth]{weighted-reduction-2.eps}
\label{subfig:reduction} }
\subfigure[Residual graphs] {
\includegraphics[width=0.45\textwidth]{residual-graph.eps}
\label{subfig:residual-graph} }
\caption{}
\label{fig:weight-reduction-soda}
\end{figure}
\paragraph{EKT algorithm:} Our improvement comes from studying the behavior
of the EKT algorithm for finding the bipartite matching in $\hat G$.
The EKT algorithm iteratively increases the cardinality of the
matching by one by finding a shortest augmenting path. Such path
can be found by applying Dijkstra's algorithm on the {\em residual
graph} $D_M$ (corresponding to a matching $M$) with a {\em reduced
cost}, denoted by $\tilde w$, as the edge length.\\
Figure~\ref{subfig:residual-graph} shows examples of a residual graph
$D_M$. The direction of an edge depends on whether it is in the
matching or not. The weight of each edge depends on its weight in the
original graph and the costs on its end vertices. We draw an edge of
length 0 from $s$ to all vertices in $U_M$ and from all vertices in
$\hat V_M$ to $t$, where $U_M$ and $\hat V_M$ are the sets of
unmatched vertices in $U$ and $\hat V$, respectively. We want to find
the shortest path from $s$ to $t$ or, equivalently, from $U_M$ to
$\hat V_M$.
The reduced cost is computed from the {\em potentials} on the
vertices, which can be found as in
Algorithm~\ref{algo:EKT}.\footnote{Note that we set the
potentials in an unusual way: we keep the potentials of the unmatched
vertices in $\hat V$ at $0$. The reason is roughly that we can speed
up the process of finding the distances of all vertices except those
in $\hat V_M$. Notice that this type of potential is valid too
(i.e., $\tilde w$ is non-negative) since for any edge $uv$ such that
$v\in \hat V_M$ is unmatched,
$\tilde{w}_{uv}=w_{uv}+p(u)-p(v)=w_{uv}+p(u)\geq 0$.}
\begin{algorithm}
\caption{\pname{EKT Algorithm} $(\hat G, w)$} \label{algo:EKT}
\begin{algorithmic}[1]
\STATE Let $M=\emptyset$.
\STATE For every node $v$, let $p(v)=0$. ($p(v)$ is a potential on
$v$.)
\REPEAT{
\STATE\label{line:begin_iteration} Let $\tilde w_{uv}=w_{uv}+p(u)-p(v)$ for every edge
$uv$. ($\tilde w_{uv}$ is a reduced cost of an edge $uv$.)
\STATE For every node $v$, compute the distance $d(v)$ which is
the distance from $U_M$ (the set of unmatched vertices in $U$)
to $v$ in $D_M$. (Recall that the length of edges in $D_M$ is
$\tilde w$.)
\STATE Let $P$ be the shortest $U_M$-$\hat V_M$ path in $D_M$.
\STATE Update the potential $p(u)$ to $d(u)$ for every
vertex $u\in U\cup (\hat V\setminus \hat V_M)$.
\STATE Augment $M$ along $P$, i.e., $M=
P\triangle M$ (where $\triangle$ denotes the symmetric difference operator).
}
\UNTIL {all vertices in $U$ are matched}
\RETURN $M$
\end{algorithmic}
\end{algorithm}
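As a toy illustration of the reduced cost used in the algorithm above (the numbers are our own), with valid potentials every reduced cost is non-negative, which is what makes Dijkstra's algorithm applicable:

```python
# Sketch: reduced costs w~_uv = w_uv + p(u) - p(v).  With valid potentials,
# as maintained by the EKT algorithm, every reduced cost is non-negative.
def reduced_costs(w, p):
    """w[(u, v)]: edge weight; p[x]: potential of vertex x."""
    return {(u, v): w[(u, v)] + p[u] - p[v] for (u, v) in w}
```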
Applying the EKT algorithm directly leads to an $O(n(n'\log n' + m'))$-time
algorithm, where $n=|U|$, $n'=|U\cup \hat V|$ and $m'$ is the number of
edges in $\hat G$. Since $n'=\Theta(m)$ and $m'=\Theta(n^2)$, the
running time is $O(nm\log n+n^3)$.
(We note that this could be brought down to $O(n^3)$ by applying the
result of Kao, Lam, Sung and Ting~\cite{KLST01} to reduce the number
of participating edges. See Appendix~\ref{sec:bimatching-algo}.)
The bottleneck here is Dijkstra's algorithm, which needs $O(n'\log
n'+m')$ time per iteration. We now review this algorithm and pinpoint
the part that will be sped up.
\paragraph{Dijkstra's algorithm:} Recall that Dijkstra's algorithm
starts from a source vertex and keeps adding to its shortest path
tree a vertex with minimum tentative distance. When a new vertex $v$
is added, the algorithm updates the tentative distance of all
vertices outside the tree by relaxing {\em all} edges incident to
$v$. On an $n'$-vertex $m'$-edge graph, it takes $O(\log n')$ time
(using priority queue) to find a new vertex to add to the tree and
hence $O(n'\log n')$ in total. Further, relaxing all edges takes
$O(m')$ time in total.
Recall that in our case, $m'=\Theta(n^2)$ which is too large. {\em
Thus, we wish to reduce the number of edge relaxations to improve
the overall running time.}
\paragraph{Our approach:} We reduce the number of edge relaxations as
follows. Suppose that a vertex $u\in U$ is added to the shortest
path tree. For every neighbor $v\in V$ of $u$ in $G$, we relax
all edges $uv^1$, $uv^2$, $\ldots$, $uv^{\deg(v)}$ in $\hat G$ {\em at the
same time}. In other words, instead of relaxing $\Theta(nm)$ edges
in $\hat G$ separately, we group the edges into $m$ groups (according
to the edges in $G$) and relax all edges in each group together. We
develop a relaxation method that takes $O(\log n)$ time per group.
In particular, we design a data structure $H_v$, for each vertex
$v\in V$, that supports the following operations.
\squishlist
\item {\sc Relax}($uv$, $H_v$): This operation works as if it relaxes
edges $uv^1$, $uv^2$, $\ldots$
\item {\sc AccessMin}($H_v$): This operation returns a vertex $v^i$ (exploded from
$v$) with minimum tentative distance among vertices that are not deleted (by the next
operation).
\item {\sc DeleteMin}($H_v$): This operation finds $v^i$ from {\sc
AccessMin} and then returns and deletes $v^i.$ %
\squishend
Our main result is that, by exploiting the structure of the problem,
one can design $H_v$ that supports {\sc Relax}, {\sc AccessMin} and
{\sc DeleteMin} in $O(\log n)$, $O(1)$ and $O(\log n)$ time, respectively.
Before showing such a result, we note that speeding up Dijkstra's
algorithm and hence EKT algorithm is quite straightforward once we
have $H_v$: We simply build a binary heap $H$ whose nodes correspond
to vertices in an original graph $G$. For each vertex $u\in U$, $H$
keeps track of its tentative distance. For each vertex $v\in V$, $H$
keeps track of its {\em minimum tentative distance} returned from
$H_v$.
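For concreteness, here is a deliberately naive reference version of $H_v$ (our own sketch: it pins down the semantics of the three operations, but relaxes the copies one by one and hence does not achieve the $O(\log n)$ bound claimed above):

```python
# Naive reference H_v: tentative distances of the copies v^1, ..., v^k.
# relax() acts as relaxing u->v^1, ..., u->v^k; here it updates every
# still-alive copy with the value f_uv(i) = d(u)+p(u)+i*w_uv-p(v^i).
class NaiveHv:
    def __init__(self, potentials):          # potentials[i-1] = p(v^i)
        self.p = potentials
        self.dist = [float("inf")] * len(potentials)
        self.alive = set(range(len(potentials)))

    def relax(self, d_u_plus_p_u, w_uv):
        for i in self.alive:                 # copy v^(i+1), 0-based index i
            cand = d_u_plus_p_u + (i + 1) * w_uv - self.p[i]
            self.dist[i] = min(self.dist[i], cand)

    def access_min(self):                    # index of the minimum alive copy
        return min(self.alive, key=lambda i: self.dist[i])

    def delete_min(self):
        i = self.access_min()
        self.alive.remove(i)
        return i
```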
\paragraph{Main idea:} Before going into details, we sketch the main idea
here. The data structure $H_v$ that allows fast ``group relaxation''
operation can be built because of the following nice structure of
the reduction: For each edge $uv$ of weight $w_{uv}$ in $G$, the
weights $w_{uv^1}, w_{uv^2}, \ldots$ of the corresponding edges in
$\hat G$ increase linearly (i.e., $w_{uv}, 2w_{uv}, 3w_{uv},
\ldots$). This enables us to know the order of vertices, among $v^1,
v^2, \ldots$, that will be added to the shortest path tree. For
example, in Figure~\ref{subfig:residual-graph}, when $M=\emptyset$,
we know that, among $v^1$ and $v^2$, $v^1$ will be added to the
shortest path tree first as it always has a smaller tentative
distance.
However, since the length of edges in $D_M$ does not solely depend
on the weights of the edges in $\hat G$ (in particular, it also
depends on the potentials of both end vertices), it is possible (after
some iterations of the EKT algorithm) that $v^1$ is added to the
shortest path tree after $v^2$.
Fortunately, due to the way the potential is defined by the EKT
algorithm, a similar nice property still holds: Among $v^1, v^2,
\ldots$ in $D_M$ corresponding to $v$ in $G$, if a vertex $v^k$, for
some $k$, is added to the shortest path tree first, then the
vertices on each side of $v^k$ have a nice order: Among $v^1, v^2,
\ldots, v^{k-1}$, the order of vertices added to the shortest path
tree is $v^{k-1}, v^{k-2}, \ldots, v^2, v^1$. Further, among
$v^{k+1}, v^{k+2}, \ldots$, the order of vertices added to the
shortest path tree is $v^{k+1}, v^{k+2}, \ldots$.
This main property, along with a few other observations, allows us to
construct the data structure $H_v$.
In the next section, we show the properties we need; we use them to
construct $H_v$ in the section that follows.
\subsection{Properties of the tentative distance}
\label{sec:properties} Consider any iteration of the EKT algorithm
(with a potential function $p$ and a matching $M$). We study the
following functions $f_{*v}$ and $g_{*v}$.
\begin{definition}
\label{def:fandg} For any edge $uv$ from $U$ to $V$ and any integer
$1\leq i\leq \deg(v)$, let \[g_{uv}(i) = d(u)+p(u)+i\cdot w_{uv}
\quad \text{and}\quad f_{uv}(i)=g_{uv}(i)-p(v^i) =
d(u)+p(u)-p(v^i)+i\cdot w_{uv}.\] %
For any $v\in V$ and $i\in [\deg(v)]$, define the lower envelope
of $f_{uv}$ and $g_{uv}$ over all $u\in U$ as
\[
f_{*v}(i)=\min_{u:uv\in E}f_{uv}(i)\quad \quad \text{and}\quad
g_{*v}(i)=\min_{u:uv\in E}g_{uv}(i).\]
\end{definition}
Our goal is to understand the structure of the function $f_{*v}$
whose values $f_{*v}(1), f_{*v}(2), \ldots$ are tentative distances
of $v^1, v^2, \ldots$, respectively. The function $g_{*v}$ is simply
$f_{*v}$ with the potential of $v$ ignored. We define $g_{*v}$ because
it is easier to keep track of: it is the lower envelope of the linear
functions $g_{uv}$ and is therefore piecewise linear.
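A small numeric sketch of this lower envelope (made-up inputs, helper name ours): each neighbour $u$ contributes a line with intercept $d(u)+p(u)$ and slope $w_{uv}$, and $g_{*v}(i)$ is their pointwise minimum.

```python
# Sketch: g_uv(i) = (d(u) + p(u)) + i * w_uv is a line in i; g_*v is the
# pointwise minimum over neighbours u, hence piecewise linear.
def g_star(lines, degree):
    """lines: list of (intercept, slope) pairs, one per neighbour u."""
    return [min(b + a * i for (b, a) in lines) for i in range(1, degree + 1)]
```

In the example below, the steeper-but-lower line wins for small $i$ and the flatter line takes over afterwards, giving the piecewise-linear shape.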
Now we state the key properties that enable us to keep track of
$f_{*v}$ efficiently. Recall that $v^1, v^2, \ldots$ are the
exploded vertices of $v$ (from the reduction).
\begin{proposition} \label{prop:main-properties}
Consider a matching $M$ and a potential $p$ at any iteration of the
EKT algorithm.
\squishlist
\item[(1)] For any vertex $v\in V$, there exists $\alpha_v$ such that
$v^1, \ldots, v^{\alpha_v}$ are all matched and $v^{\alpha_v+1}, \ldots,
v^{\deg(v)}$ are all unmatched.
\item[(2)] For any vertex $v\in V$, $g_{*v}$ is a piecewise linear
function.
\item[(3)] For any edge $uv\in E$ where $u\in U$ and $v\in V$, and any
$i$, $f_{uv}(i)=f_{*v}(i)$ if and only if $g_{uv}(i)=g_{*v}(i)$.
\item[(4)] For any edge $uv\in E$ where $u\in U$ and $v\in V$,
let $\alpha_v$ be as in (1). There exists an integer $1\leq
\gamma_{uv}\leq \alpha_v$ such that for $i=1,2,\ldots,\gamma_{uv}-1$,
$f_{uv}(i)\geq f_{uv}(i+1)$ and for
$i=\gamma_{uv},\gamma_{uv}+1,\ldots,\alpha_v-1$, $f_{uv}(i)\leq
f_{uv}(i+1)$. In other words,
$f_{uv}(1),f_{uv}(2),\ldots,f_{uv}(\alpha_v)$ is a unimodal
sequence. \squishend
\end{proposition}
Figures~\ref{subfig:potential-and-g} and \ref{subfig:unimodal} show
the structure of $g_{*v}$ and $f_{*v}$ according to statements (2)
and (4) in the above proposition. By statement (3), the two pictures
can be combined as in Figure~\ref{subfig:f-and-g}: $g_{*v}$
indicates the $u$ that makes both $g_{*v}$ and $f_{*v}$ minimum in each
interval, and one can find the $i$ that minimizes $f_{*v}$ in each
interval by looking at $\alpha_v$ (or near $\alpha_v$ in some cases).
\begin{figure}
\centering
\subfigure[$g_{*v}$ and potential function. Note that
$w_{u_1v}>w_{u_2v}>w_{u_3v}$.]{ \label{subfig:potential-and-g}
\includegraphics[height=0.2\textwidth]{potential-and-g.eps}}
\subfigure[$f_{*v}$ is unimodal.]{
\label{subfig:unimodal}
\includegraphics[height=0.2\textwidth]{unimodal.eps}}
\subfigure[$f_{*v}$ together with $g_{*v}$.]{
\label{subfig:f-and-g}
\includegraphics[height=0.2\textwidth]{f-and-g-2.eps}}
\caption{}
\label{fig:bowl}
\end{figure}
\begin{proof}\hfill
\noindent\textbf{(1)} The first statement follows from the following
claim.
\begin{claim}
For any $i$, if the exploded vertex $v^{i+1}$ of $v$ (in $\hat V_v$)
is matched by $M$, then $v^i$ is also matched.
\end{claim}
\begin{proof}
The claim follows from the fact that the EKT algorithm maintains $M$
as an extreme matching. Suppose that $v^{i+1}$ is matched
by $M$ (i.e., $uv^{i+1}\in M$), but $v^i$ is not matched. Then we
can remove $uv^{i+1}$ from $M$ and add $uv^i$ to $M$. The
resulting matching has a smaller cost than $M$ but the same
cardinality, a contradiction.
\end{proof}
\noindent\textbf{(2)} To see the second statement, notice that
$g_{uv}(i)=d(u)+p(u)+i\cdot w_{uv}$ is linear in $i$ for a fixed $uv\in E$.
Hence, $g_{*v}$ is a lower envelope of linear functions, implying
that it is piecewise linear.\\
\noindent\textbf{(3)} To prove the third statement, recall that for
any $u$ and any $i$, $f_{uv}(i)=g_{uv}(i)-p(v^i)$. Therefore, for
any $u$, $u'$ and $i$, $f_{uv}(i)>f_{u'v}(i)$ if and only if
$g_{uv}(i)>g_{u'v}(i)$. Thus, the third statement follows.\\
\noindent\textbf{(4)} For the fourth claim, we first explain the
intuition. Observe that the function $g_{uv}$ is increasing
with rate $w_{uv}$. Moreover, the difference between $f_{uv}(i)$ and
$f_{uv}(j)$ is a function of the potentials $p(v^i)$ and $p(v^j)$ and
the multiple of the edge weight $(j-i)w_{uv}$; whether this
difference is negative or positive depends on the values of these
three parameters. We show that these parameters change monotonically,
which yields the desired property.
To prove the fourth statement formally, we first prove two claims.
For the first claim below, recall that the potential of matched
vertices, at any iteration, is defined to be the distance on the
residual graph of the previous iteration. In particular, for any
$v^i\in \hat V$, there is a vertex $u\in U$ such that $p(u)+i\cdot
w_{uv}=p(v^i)$. (See Algorithm~\ref{algo:EKT}.)
\begin{claim} \label{lem:price_bound}
For any integer $i< \alpha_v$, consider the exploded vertices $v^i$
and $v^{i+1}$. Let $u$ and $u'$ denote two vertices in $U$ such that
$p(u)+i\cdot w_{uv}=p(v^i)$ and $p(u')+(i+1)\cdot
w_{u'v}=p(v^{i+1})$. Then $w_{uv}\geq p(v^{i+1})-p(v^i)\geq
w_{u'v}$.
\end{claim}
\begin{proof} The first part, $w_{uv}\geq p(v^{i+1})-p(v^i)$,
follows from $p(v^i)=p(u)+i\cdot w_{uv}$ and $p(v^{i+1})\leq
p(u)+(i+1)\cdot w_{uv}$. The second part, $p(v^{i+1})-p(v^i)\geq
w_{u'v}$, follows from $p(v^i)\leq p(u')+i\cdot w_{u'v}$ and
$p(v^{i+1})= p(u')+(i+1)\cdot w_{u'v}$.
\end{proof}
The proof of the next claim follows directly from the definition of
$f_{uv}$ (cf.\ Definition~\ref{def:fandg}).
\begin{claim} \label{lem:distance_implication}
For any $i<\alpha_v$, $f_{uv}(i)> f_{uv}({i+1})$ if and only if
$p(v^{i+1})-p(v^i)>w_{uv}$ and $f_{uv}(i)< f_{uv}({i+1})$ if and
only if $p(v^{i+1})-p(v^i)<w_{uv}$.
\end{claim}
Now, the fourth statement in the proposition follows from the
following two statements: for any integer $i<\alpha_v$,
\squishlist
\item[(i)] if $f_{uv}(i)>f_{uv}({i+1})$, then
$f_{uv}(j) \geq f_{uv}(j+1)$ for any integer $j<i$, and
\item[(ii)] if $f_{uv}(i)<f_{uv}({i+1})$, then
$f_{uv}(j) \leq f_{uv}(j+1)$ for any integer $j$ with $i\leq j < \alpha_v.$
\squishend
To prove the first statement, let $u'$ be such that $p(u')+i\cdot
w_{u'v}=p(v^i)$. If $f_{uv}(i)>f_{uv}({i+1})$, then
$$p(v^i)-p(v^{i-1})\geq w_{u' v}\geq p(v^{i+1})-p(v^i)> w_{uv}$$
where the first two inequalities follow from
Claim~\ref{lem:price_bound} and the third inequality follows from
Claim~\ref{lem:distance_implication}. It then follows from
Claim~\ref{lem:distance_implication} that $f_{uv}(i-1)>f_{uv}(i)$.
The first statement follows by repeating the argument above. The
second statement can be proved similarly. This completes the proof
of the fourth statement.
\end{proof}
\subsection{Data structure}\label{sec:datastructure}
\paragraph{Specification:} Let us first restate the problem
so that we can describe the data structure in more general terms.
We show how to use this data structure for the semi-matching problem
in the next section.
Let $n$ and $N$ be positive integers and, for any integer $i$,
define $[i]=\{1, 2, \ldots, i\}$. We would like to maintain at most
$n$ functions $f_1, f_2, \ldots, f_n$ mapping $[N]$ to a set of
positive reals. We assume that $f_i$ is given as an {\em oracle},
i.e., we can get $f_i(x)$ by sending a query $x$ to $f_i$ in $O(1)$
time.
Let $L$ and $S$ be subsets of $[N]$ and $[n]$, respectively. (As we
will see shortly, we use $L$ to keep the numbers not yet deleted in
the process and $S$ to keep the indices of the functions inserted into
the data structure.) Initially, $L=[N]$ and $S=\emptyset$. For any $x\in
[N]$, let $f^*_S(x)=\min_{i\in S} f_i(x)$.
We want to construct a data structure $\cal H$ that supports the
following operations.
\squishlist
\item {\bf{\sc AccessMin}($\cal H$)}: Return the $x\in L$ with minimum $f^*_S$ value, i.e., $x=\arg\min_{y\in L}
f^*_S(y)$.
\item {\bf{\sc Insert}($f_i$, $\cal H$)}: Insert $f_i$, i.e., add $i$ to
$S$.
\item {\bf {\sc DeleteMin}($\cal H$)}: Delete $x$ from $L$, where $x$
is the element returned by {\sc AccessMin}($\cal H$).
\squishend
\paragraph{Properties:}
We assume that $f_1, f_2, \ldots$ have the following properties.
\squishlist
\item For all $i$, $f_i$ is {\em unimodal}, i.e., there is some
$\gamma_i\in [N]$ such that $f_i(1)\geq f_i(2)\geq \ldots \geq
f_i(\gamma_i) \leq f_i(\gamma_i+1)\leq f_i(\gamma_i+2)\leq \ldots
\leq f_i(N)\,.$
We assume that $\gamma_i$ is given along with $f_i$.
\item We also assume that each $f_i$ comes along with a
linear function $g_i$ where, for any $x\in [N]$, $g_i(x)=x\cdot
w_i+d_i$, for some $w_i$ and $d_i$. These linear functions have the
property that $f_i(x)=f^*_S(x)$ if and only if $g_i(x)=g^*_S(x)$,
where $g^*_S(x)=\min_{i\in S} g_i(x)$.
\item Finally, we assume that once $x$ is deleted from $L$, $f^*_S(x)$ will never change,
even after we add more functions to $S$.
\squishend
For simplicity, we also assume that $w_i\neq w_j$ for all $i\neq j$.
This assumption can be removed by taking care of the case of equal
weight in the insert operation.
We now show that there is a data structure such that every operation
can be done in $O(\log n)$ time.
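Before describing the design, the interface just specified can be pinned down with a brute-force reference implementation (a Python sketch; the class and method names are ours). Every operation here scans $S$ and $L$, so it meets the specification but not the $O(\log n)$ bound achieved below.

```python
class NaiveEnvelope:
    """Brute-force reference for the specified interface (illustrative only).

    L is the set of not-yet-deleted arguments, S the inserted function
    oracles; access_min scans all of S and L, so it runs in O(|S| * |L|)
    time instead of the O(log n) achieved by the data structure in the text.
    """

    def __init__(self, N):
        self.L = set(range(1, N + 1))  # undeleted arguments in [N]
        self.S = []                    # inserted function oracles f_i

    def insert(self, f):
        self.S.append(f)

    def access_min(self):
        # return argmin over x in L of f*_S(x) = min_{f in S} f(x)
        return min(self.L, key=lambda x: min(f(x) for f in self.S))

    def delete_min(self):
        x = self.access_min()
        self.L.remove(x)
        return x
```

For instance, inserting the unimodal oracle $x\mapsto 2|x-3|+1$ over $[5]$ makes {\sc AccessMin} return $3$.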
\paragraph{Data structure design:} We have two data structures to
maintain the information of $f_i$'s and $g_i$'s.
First, we create a data structure $T_g$ to maintain an ordered
sequence $g_{i_1}, g_{i_2}, \ldots$ such that $w_{i_1}\geq w_{i_2}\geq
\ldots$. We want to be able to insert a new function $g_i$ into $T_g$ in
$O(\log n)$ time. Moreover, for any $w$, we want to be able to find
$w_{i_{j}}$ and $w_{i_{j+1}}$ such that $w_{i_j}\geq w> w_{i_{j+1}}$
in $O(\log n)$ time. Such a $T_g$ can be implemented by a balanced
binary search tree, e.g., an AVL tree.
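The neighbour query on the ordered weight sequence can be sketched with a sorted list standing in for the balanced tree (a Python sketch with names of our own choosing; a plain list has $O(n)$ insertion, which the AVL tree brings down to $O(\log n)$):

```python
import bisect

def bracket(ws_desc, w):
    """Locate w in the weight sequence w_{i_1} >= w_{i_2} >= ... kept by T_g.

    ws_desc: distinct weights in decreasing order (a list standing in for
    the balanced search tree).  Returns the pair of 0-based positions
    (j, j1) of the neighbours with ws_desc[j] >= w > ws_desc[j1], using
    None past either end of the sequence.
    """
    # ascending view for bisect; rebuilt here only for clarity
    neg = [-x for x in ws_desc]
    k = bisect.bisect_right(neg, -w)      # number of weights >= w
    left = k - 1 if k > 0 else None
    right = k if k < len(ws_desc) else None
    return left, right
```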
Observe that the linear functions $g_{i_1}, g_{i_2}, \ldots$ appear in
the lower envelope in order, i.e., if $g_{i_j}(x)\geq
g_{i_{j+1}}(x)$, then $g_{i_j}(y)\geq g_{i_{j+1}}(y)$ for any $y>x$.
Therefore, we can use the data structure $T_g$ to maintain the range of
values on which each $g_i$ (and therefore each $f_i$) attains the lower
envelope. That is, we use $T_g$ to maintain $x_1\leq y_1\leq x_2\leq
y_2 \leq \ldots$ such that $g_i(x)=g^*_S(x)$ for every inserted $i$ and
all $x$ with $x_i\leq x\leq y_i$.
Consider the value $\min_{x\in\{x_i, x_i+1, \ldots, y_i\}\cap L} f_i(x)$.
Since $f_i$ is unimodal, this minimum is attained at the point of
$\{x_i, x_i+1, \ldots, y_i\}\cap L$ closest to $\gamma_i$, either from
the left or from the right.
Thus, we can use two pointers $p_i$ and $q_i$ such that $x_i\leq
p_i\leq \gamma_i\leq q_i\leq y_i$ to maintain the minimum value of
$f_i$ from the left and right of $\gamma_i$, i.e.,
the minimum value $\min_{x\in\{x_i, x_i+1, \ldots, y_i\}\cap L} f_i(x)$
is either $f_i(p_i)$ or $f_i(q_i)$.
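The pointer behaviour just described can be sketched directly (a Python sketch under names of our own choosing; this is a plain scan illustrating where the minimum sits, not the incremental bookkeeping of the actual structure):

```python
def interval_min(f, gamma, lo, hi, L):
    """Minimum of the unimodal f over {lo, ..., hi} intersected with L.

    By unimodality the minimum is attained at the undeleted point
    closest to the minimiser gamma, from the left (pointer p) or from
    the right (pointer q); returns None if the intersection is empty.
    """
    p = min(gamma, hi)
    while p >= lo and p not in L:
        p -= 1                      # closest undeleted point left of gamma
    q = max(gamma, lo)
    while q <= hi and q not in L:
        q += 1                      # closest undeleted point right of gamma
    cands = [f(x) for x in (p, q) if lo <= x <= hi]
    return min(cands) if cands else None
```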
Finally, we use a binary heap $B$ to store the values $f_1(p_1),
f_2(p_2), \ldots$ and $f_1(q_1), f_2(q_2), \ldots$ so that we can search
and delete the minimum among these values in $O(\log n)$ time.
The implementation of each operation is detailed below.
\squishlist
\item \textbf{{\sc AccessMin}($\mathcal{H}$)}: This operation is done by
returning the minimum value in $B$. This value is $\min (f_1(p_1),
f_2(p_2), \ldots, f_1(q_1), f_2(q_2), \ldots) = \min_{x\in L} f^*_S(x).$
\item \textbf{{\sc Insert}($f_i$, $\mathcal{H}$)}: First, insert
$g_i$ to $T_g$ which can be done as follows. Let the current ordered
sequence be $g_{i_1}, g_{i_2}, \ldots$. In $O(\log n)$ time, we find
$g_{i_j}$ and $g_{i_{j+1}}$ such that $w_{i_j}\geq w_i>w_{i_{j+1}}$
and insert $g_i$ between them. Moreover, we update the regions in
which $g_{i_j}$, $g_i$, and $g_{i_{j+1}}$ attain the lower envelope
$g^*_S$, i.e., we get the values $y_{i_j}, x_i, y_i, x_{i_{j+1}},
y_{i_{j+1}}$ (note that $y_{i_j}\leq x_i \leq y_i\leq
x_{i_{j+1}}\leq y_{i_{j+1}}$).
Next, we deal with the pointers $p_i$ and $q_i$: we set
$p_i=\min(\gamma_i, y_i)$ and $q_i=\max(\gamma_i, x_i)$. (The
intuition is that we would like to set $p_i=q_i=\gamma_i$, but
it is possible that $\gamma_i<x_i$ or $\gamma_i>y_i$, which means
that $\gamma_i$ is not in the region where $g_i$ attains the lower
envelope $g^*_S$.) Finally, we also update $p_{i_j}$ and
$q_{i_{j+1}}$: $p_{i_j}=\min(p_{i_j}, x_i)$ and
$q_{i_{j+1}}=\max(q_{i_{j+1}}, y_i)$. Figure~\ref{fig:insert} shows
the effect of inserting a new function.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth]{insert.eps}
\caption{Inserting a new function}\label{fig:insert}
\end{center}
\end{figure}
We note one technical detail here: It is possible that $p_i$ has
already been deleted from $L$. This implies that there is another
function $f_{i_{j'}}$ such that $f_{i_{j'}}(p_i)=f_i(p_i)$ (since we
assume that, once $p_i$ is deleted, $f^*_S(p_i)$ never
changes, even when we add more functions to $S$). There are two cases:
$j'<j$ or $j'>j$. In the former case, we know that
$f_{i_{j'}}(p_i-1)<f_i(p_i-1)$ since $w_{i_{j'}}>w_i$, and thus we simply
do nothing ($p_i$ will never be returned by {\sc AccessMin}). In
the latter case, we know that $f_{i_{j'}}(p_i-1)>f_i(p_i-1)$, and
thus we simply set $p_i$ to $p_i-1$.
The case for $q_i$ is handled similarly.
\item\textbf{{\sc DeleteMin}($\mathcal{H}$)}: We delete the node
with minimum value from $B$ (which is the one on top of the heap).
This deleted node corresponds to one of the values $f_1(p_1),
f_2(p_2), \ldots, f_1(q_1), f_2(q_2), \ldots$; assume that $f_i(p_i)$
(resp.\ $f_i(q_i)$) is this value. We insert a node with value
$f_{i}(p_i-1)$ (resp.\ $f_{i}(q_i+1)$) and update the pointer accordingly.
\squishend
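One {\sc DeleteMin} step on the heap $B$ can be sketched as follows (a Python sketch; boundary checks at the ends of each interval are omitted for brevity, and all names are ours):

```python
import heapq

def delete_min_step(B, f, p, q):
    """Pop the overall minimum from heap B and refill the candidate.

    B holds triples (value, i, side) for the candidates f_i(p_i) and
    f_i(q_i).  After popping, the corresponding pointer is advanced and
    the next candidate of the same function is pushed.  Boundary checks
    (a pointer leaving its function's interval) are omitted for brevity.
    """
    val, i, side = heapq.heappop(B)
    if side == 'p':
        p[i] -= 1
        heapq.heappush(B, (f[i](p[i]), i, 'p'))
    else:
        q[i] += 1
        heapq.heappush(B, (f[i](q[i]), i, 'q'))
    return val
```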
\subsection{Using the data structure for semi-matching
problem}\label{sec:using}
For any right vertex $v$, we construct a data structure $H_v$ as in
Section~\ref{sec:datastructure} to maintain $f_{uv}$, together with
$g_{uv}$, for every neighbor $u$ of $v$. These functions
satisfy the properties above, as shown in
Section~\ref{sec:properties}. (We note that once $x$ is deleted,
$f_{*v}(x)$ will never change since this corresponds to adding a
vertex $v_x$ to the shortest path tree
with distance $f_{*v}(x)$.)
The last issue is how to find $\gamma_{uv}$, the minimum point of
$f_{uv}$ for an edge $uv\in E$, quickly.
We now show an algorithm that finds $\gamma_{uv}$, for every edge
$uv\in E$, in time $O(|V|+|E|)$ \textit{in total}. This algorithm can
be run before we start each iteration of the main algorithm (i.e.,
above Line~\ref{line:begin_iteration} of Algorithm~\ref{algo:EKT}).
To derive such an algorithm, we need the following observation.
\begin{lemma}\label{lem:order_m}
For any $v\in V$ and $u_1, u_2 \in U$, if $w_{u_1v}\geq w_{u_2v}$,
then $\gamma_{u_1v}\leq \gamma_{u_2v}.$
\end{lemma}
\begin{proof}
Note that, by Claim~\ref{lem:distance_implication}, $\gamma_{uv}$ is
the minimum integer $i\in [\deg(v)]$ such that
$p(v^{i+1})-p(v^i)\leq w_{uv}$. Also, for any $j<\gamma_{u_1v}$,
$p(v^{j+1})-p(v^j)>w_{u_1v}$ by definition. If
$\gamma_{u_1v}>\gamma_{u_2v}$, then
$p(v^{\gamma_{u_2v}+1})-p(v^{\gamma_{u_2v}})>w_{u_1v}$. However,
$p(v^{\gamma_{u_2v}+1})-p(v^{\gamma_{u_2v}})\leq w_{u_2v}$. So,
$w_{u_1v}<w_{u_2v}$, a contradiction.
\end{proof}
\paragraph{Algorithm:} The following algorithm finds $\gamma_{uv}$ for all
$uv\in E$. First, in the preprocessing step (which is done once,
before we begin the main algorithm), for every vertex $v\in V$, we
order the edges incident to $v$ decreasingly by their weights. This
takes $O(\deg(v)\log(\deg(v)))$ time per vertex. Since this sorting is
done only once, it does not affect the overall running time.
Next, for any $v\in V$, suppose that the sorted list is $(u_1,
u_2,\ldots,u_{\deg(v)})$. Since $w_{u_1v}\geq w_{u_2v}\geq\ldots\geq
w_{u_{\deg(v)}v}$, Lemma~\ref{lem:order_m} implies that
$\gamma_{u_1v}\leq \gamma_{u_2v}\leq \ldots \leq \gamma_{u_{\deg(v)}v}$.
So, we first find $\gamma_{u_1v}$, then $\gamma_{u_2v}$, and so on,
sweeping over the potential differences once. This
step takes $O(\deg(v))$ time for each $v\in V$ and $O(m)$ in total.
Therefore, the running time for computing the minimum points
$\gamma_{uv}$ is $O(m\log n)$.
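The per-vertex sweep just described can be sketched as follows (a Python sketch with 0-based positions and names of our own choosing; `diff` lists the consecutive potential differences $p(v^{i+1})-p(v^i)$ in order):

```python
def compute_gammas(ws, diff):
    """One-pass computation of gamma for every neighbour of a fixed v.

    ws:   weights w_{uv} of v's neighbours, sorted decreasingly.
    diff: consecutive potential differences p(v^{i+1}) - p(v^i), in order.
    gamma(w) is the first position i with diff[i] <= w; since the gammas
    are monotone in the weights (Lemma lem:order_m), the shared pointer i
    never moves backwards, giving O(deg(v)) time in total.
    """
    gammas = []
    i = 0
    for w in ws:
        while i < len(diff) and diff[i] > w:
            i += 1
        gammas.append(i)
    return gammas
```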
| {
"timestamp": "2011-06-08T02:00:28",
"yymm": "1004",
"arxiv_id": "1004.3363",
"language": "en",
"url": "https://arxiv.org/abs/1004.3363",
"abstract": "We consider the problem of finding \\textit{semi-matching} in bipartite graphs which is also extensively studied under various names in the scheduling literature. We give faster algorithms for both weighted and unweighted case. For the weighted case, we give an $O(nm\\log n)$-time algorithm, where $n$ is the number of vertices and $m$ is the number of edges, by exploiting the geometric structure of the problem. This improves the classical $O(n^3)$ algorithms by Horn [Operations Research 1973] and Bruno, Coffman and Sethi [Communications of the ACM 1974]. For the unweighted case, the bound could be improved even further. We give a simple divide-and-conquer algorithm which runs in $O(\\sqrt{n}m\\log n)$ time, improving two previous $O(nm)$-time algorithms by Abraham [MSc thesis, University of Glasgow 2003] and Harvey, Ladner, Lovász and Tamir [WADS 2003 and Journal of Algorithms 2006]. We also extend this algorithm to solve the \\textit{Balance Edge Cover} problem in $O(\\sqrt{n}m\\log n)$ time, improving the previous $O(nm)$-time algorithm by Harada, Ono, Sadakane and Yamashita [ISAAC 2008].",
"subjects": "Data Structures and Algorithms (cs.DS)",
"title": "Faster Algorithms for Semi-Matching Problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105287006256,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7076984709680101
} |
https://arxiv.org/abs/1503.06822 | Tree spanners of bounded degree graphs | A tree $t$-spanner of a graph $G$ is a spanning tree of $G$ such that the distance between pairs of vertices in the tree is at most $t$ times their distance in $G$. Deciding tree $t$-spanner admissible graphs has been proved to be tractable for $t<3$ and NP-complete for $t>3$, while the complexity status of this problem is unresolved when $t=3$. For every $t>2$ and $b>0$, an efficient dynamic programming algorithm to decide tree $t$-spanner admissibility of graphs with vertex degrees less than $b$ is presented. Only for $t=3$, the algorithm remains efficient, when graphs $G$ with degrees less than $b\log |V(G)|$ are examined. | \section{Introduction}
A $t$-spanner of a graph $G$ is a spanning subgraph of $G$, such that the distance between pairs of vertices in
the $t$-spanner is at most $t$ times their distance in $G$. Spanners with
few edges approximate the distances in the graph while remaining sparse.
Spanners of a graph that are trees attain the minimum number of edges a
spanner of the graph can have.
There are applications of spanners in a variety of areas, such as
distributed computing \cite{Awerbuch85,Peleg89},
communication networks \cite{PelUpf88,PelegReshef}, motion planning and
robotics \cite{Arikati96,Chew89}, phylogenetic analysis
\cite{Bandelt86} and in embedding finite
metric spaces in graphs approximately \cite{Rabinov98}.
In \cite{Pettie-Low-Dist-Span} it is mentioned that spanners have applications
in approximation algorithms for geometric spaces \cite{Narasimhanbook},
various approximation algorithms \cite{Fakcharoenphol}
and solving diagonally dominant linear systems \cite{Spielman}.
On one hand, in \cite{Bondy89,CaiCor95a,CaiThesis} an efficient algorithm
to decide tree 2-spanner admissible graphs is presented, where a method
to construct all the tree 2-spanners of a graph is also given. On the
other hand, in \cite{CaiCor95a,CaiThesis} it is proven that, for
each $t\geq 4$, the problem of deciding whether a graph admits a tree
$t$-spanner is NP-complete. The complexity status of the tree 3-spanner
problem is unresolved. In \cite{Fekete01}, for every $t$, an efficient algorithm to
determine whether a planar graph with bounded face length admits a tree $t$-spanner is
presented. In \cite{Fomin} the existence of an efficient (in fact, linear-time) algorithm for the tree spanner problem on
bounded degree graphs is shown, using a theorem of logic; it is also mentioned there that:
``It would be interesting to show that one could use tools that do not rely on
Courcelle's theorem or Bodlaender's algorithm to speed up practical implementations".
In this article, for every $t$, an efficient dynamic programming algorithm to
decide tree $t$-spanner admissibility of bounded degree graphs is presented
(theorem~\ref{tgeniko}).
Tree $t$-spanners ($t\geq 3$) have been studied for various families of
graphs. If a connected graph is a cograph or a split graph or the
complement of a bipartite graph, then it admits a tree 3-spanner
\cite{CaiThesis}. Also, all convex bipartite graphs have a tree 3-spanner,
which can be constructed in linear time \cite{Venkatesan97}.
Efficient algorithms to recognize graphs that
admit a tree $3$-spanner have been developed for interval, permutation
and regular bipartite graphs \cite{Madanlal96}, planar graphs \cite{Fekete01},
directed path graphs \cite{Le99}, very strongly chordal graphs, 1-split graphs and
chordal graphs of diameter at most 2 \cite{Brandstadtchordal}. In \cite{Tree3-spannersofdiameteratmost5}
an efficient algorithm to decide if a graph admits a tree 3-spanner of diameter at most 5 is presented.
Moreover, every strongly chordal
graph admits a tree 4-spanner, which can be constructed in linear time
\cite{Brandst99}; note that, for each $t$, there is a connected chordal
graph that does not admit any tree $t$-spanner. The tree
$t$-spanner problem has been studied for small diameter chordal graphs \cite{Brandstadtchordal},
diametrically uniform graphs \cite{Manuel}, and
outerplanar graphs \cite{Narayanaswamy2015}. Approximation algorithms for the tree
$t$-spanner problem are presented in \cite{DraganK,PelegReshef}; in \cite{DraganK}, a new necessary condition,
in terms of decomposition, for a graph to have a tree $t$-spanner is also presented.
There are NP-completeness results for the tree $t$-spanner problem for families of graphs.
In \cite{Fekete01}, it is shown that it is NP-hard to determine the minimum $t$ for which a
planar graph admits a tree $t$-spanner. In \cite{Dragan2008}, it is proved that, for every $t \geq 4$, the problem of finding a
tree $t$-spanner is NP-complete on \mbox{$K_6$-minor-free} graphs.
For any $t\geq4$, the tree $t$-spanner problem is NP-complete
on chordal graphs of diameter at most $t+1$, when $t$ is even, and of diameter at most $t+2$, when
$t$ is odd \cite{Brandstadtchordal}; note that this refers to the diameter of the graph not to the diameter
of the spanner. In \cite{Treespannersofsmalldiameter} it is shown that the
problem to determine whether a graph admits a tree $t$-spanner of
diameter at most $t+1$ is tractable, when $t\leq 3$, while it is an
NP-complete problem, when $t\geq 4$. This last result is used in \cite{JCSS_inapprox} to hint
at the difficulty of approximating the minimum $t$ for which a graph
admits a tree $t$-spanner.
The tree 3-spanner problem is very interesting, since its complexity status is unresolved. In \cite{PhDthesis} it is
shown that only for $t=3$ the union of any two tree $t$-spanners of any given graph may contain big induced cycles but never
an odd induced cycle (other than a triangle); such unions are proved to be perfect graphs.
The algorithm presented in this article remains efficient only for $t\leq 3$, when graphs with maximum degree $O(\log n(G))$
are considered, where $n(G)$ is the number of vertices of each graph $G$ (section~\ref{stequals3}).
The tree 3-spanner problem can be formulated as an integer
programming optimization problem. Constraints for such a formulation appear in \cite{PhDthesis}, providing certificates
of tree 3-spanner inadmissibility for some graphs.
\section{Definitions}
\label{sdefin}
In general, terminology of \cite{West} is used. If $G$ is a graph, then $V(G)$ is its {\em vertex set} and
$E(G)$ its {\em edge set}.
An {\em edge} between vertices $u,v\in G$ is denoted as $uv$. Also, $G\setminus\{uv\}$ is
the graph that remains when edge $uv$ is removed from $G$.
Let $v$ be a vertex of $G$, then $N_G(v)$ is the set of $G$ neighbors of $v$, while
$N_G[v]$ is $N_G(v)\cup \{v\}$; in this article, graphs do not have loop edges.
The {\em degree} of a vertex $v$ in $G$ is the number of edges of $G$ incident to $v$. Here,
$\Delta(G)$ is the maximum degree over the vertices of $G$.
Let $G$ and $H$ be two graphs. Then, $G\setminus H$ is graph $G$ without the vertices of $H$,
i.e. $V(G\setminus H)=V(G)\setminus V(H)$ and $E(G\setminus H)=\{uv\in E(G): u\not\in V(H)$ and $v\not\in V(H)\}$.
The {\em union} of $G$ and $H$, denoted as $G\cup H$, is the graph with vertex set $V(G)\cup V(H)$ and edge set
$E(G)\cup E(H)$. Similarly, the {\em intersection} of $G$ and $H$, denoted as $G\cap H$, is the graph with vertex set
$V(G)\cap V(H)$ and edge set $E(G)\cap E(H)$.
Additionally, $G[H]$ is the subgraph of $G$ {\em induced} by the vertices
of $H$, i.e. $G[H]$ contains all vertices in $V(G)\cap V(H)$ and all the edges of $G$ between vertices in $V(G)\cap V(H)$.
Note that the usual definition of induced subgraph refers to $H$ being a subgraph of $G$.
The $G$ distance between two vertices $u,v\in G$ is the length of a $u, v$ shortest path in $G$, while
it is infinity, when $u$ and $v$ are not connected in $G$.
The definition of a tree $t$-spanner follows.
\begin{defin}
A graph $T$ is a {\bf $t$-spanner} of a graph $G$ if and only if $T$ is a
subgraph of $G$ and, for every pair $u$ and $v$ of vertices of
$G$, if $u$ and $v$ are at distance $d$ from each other in $G$, then $u$
and $v$ are at distance at most $t\cdot d$ from each other in $T$. If $T$ is also a tree, then
$T$ is a {\bf tree $t$-spanner} of $G$.
\end{defin}
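As a concrete reading of the definition, the following sketch checks the tree $t$-spanner property (Python, with graphs represented as dictionaries mapping each vertex to its set of neighbours; the representation and all names are ours). It uses the fact, cf.\ \cite{CaiThesis}, that checking pairs adjacent in $G$ suffices.

```python
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from s in a graph given as {vertex: set_of_neighbours}."""
    dist = {s: 0}
    dq = deque([s])
    while dq:
        u = dq.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                dq.append(w)
    return dist

def is_tree_t_spanner(g_adj, t_adj, t):
    """Check whether the spanning tree t_adj is a tree t-spanner of g_adj.

    It suffices to verify the stretch on pairs adjacent in g_adj
    (Cai's observation quoted in the text)."""
    for u in g_adj:
        dist_u = bfs_dist(t_adj, u)
        for w in g_adj[u]:
            if dist_u.get(w, float('inf')) > t:
                return False
    return True
```

For example, the path 0--1--2--3 is a tree 3-spanner of the 4-cycle but not a tree 2-spanner, since the cycle edge between its endpoints is stretched to length 3.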
Note that in order to check that a spanning tree of a graph $G$
is a tree $t$-spanner of $G$, it suffices to examine pairs of vertices that are adjacent
in $G$ \cite{CaiThesis}. There is an additive version of a spanner as well
\cite{Kratsch98additivetree,Pettie-Low-Dist-Span}, which is not
studied in this article. In the algorithm and in the proofs, $r$-centers are frequently used.
\begin{defin}
Let $r$ be an integer. Vertex $v$ of a graph $G$ is an {\bf $r$-center} of $G$
if and only if for all vertices $u$ in $G$, the distance from $v$ to $u$ in $G$ is less than or equal to $r$.
\end{defin}
To refer to all the vertices near a central vertex, the notion of a
sphere is used.
\begin{defin}
Let $r$ be an integer and $v$ a vertex of a graph $G$. Then, the subgraph of $G$ induced by the vertices of
$G$ at distance less than or equal to $r$ from $v$ is the {\bf sphere} of $G$ with center $v$ and radius $r$;
it is denoted as \mbox{{\bf $(v,r)_G$-sphere}}.
\end{defin}
Obviously, $v$ is an $r$-center of a graph $G$, if and only if
the $(v,r)_G$-sphere is equal to $G$.
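Both the sphere and the $r$-center test reduce to breadth-first search; a small sketch (Python, graphs as dictionaries mapping vertices to neighbour sets; representation and names are ours):

```python
from collections import deque

def sphere(adj, v, r):
    """Vertex set of the (v, r)_G-sphere: all u with dist_G(v, u) <= r.

    Plain breadth-first search truncated at depth r; v is an r-center
    of G exactly when this returns all of V(G).
    """
    dist = {v: 0}
    dq = deque([v])
    while dq:
        u = dq.popleft()
        if dist[u] == r:
            continue            # do not expand beyond radius r
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                dq.append(w)
    return set(dist)
```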
Let $f, g$ be functions from the set of all graphs to the non-negative integers. Then, $f$ is $O(g)$ if and only
if there are a graph $G_0$ and an integer $M$ such that $f(G)\leq M g(G)$ for every $G$ with $|V(G)|>|V(G_0)|$.
An algorithm that runs in polynomial time is called {\em efficient}.
\section{Description of the algorithm}
In \cite{Master}, a characterization of tree $t$-spanner admissible graphs
in terms of decomposition states, generally speaking, that if a tree $t$-spanner
admissible graph $G$ does not have small diameter then it is the union of two tree
$t$-spanner admissible graphs whose intersection is a small diameter subgraph of $G$ (this result
requires further definitions to be stated exactly and it is not used in the proofs of this article).
So, it may be possible, starting with small diameter subgraphs and
extending them with partial solutions of the remaining graph, to build
a tree $t$-spanner of the whole graph.
\begin{table}[tbp]
\begin{tt}
\begin{center}
\fbox{
\parbox{0.95\textwidth}{
{\bf Algorithm Find\_Tree\_spanner}($G$, $t$)\newline
{\bf Input:} A connected nonempty graph $G$ and an integer $t>1$.\newline
\begin{algorithmic}[1]
\STATE ${\cal A}_G^0=\emptyset$\label{l1-1}
\FOR{($k=1$ to $|V(G)|$)}\label{l1-2}
\STATE ${\cal A}_G^k=\emptyset$\label{l1-3}
\FOR{(vertex $v\in G$)}\label{l1-4}
\STATE ${\cal S}_v = \{S\subseteq G: S$ is a tree $t$-spanner of $G[S]$ and
$v$ is a $\lfloor\frac{t}{2}\rfloor$-center of $S\}$\label{l1-5}
\FOR{($S\in {\cal S}_v$)}\label{l1-6}
\STATE $T^k_{v,S}=$ {\bf Find\_Subtree}($G$, $t$, $v$, $S$, $k$,${\cal A}_G^{k-1}$)\label{l1-7}
\STATE ${\cal A}_G^k = {\cal A}_G^k \cup \{T^k_{v,S}\}$\label{l1-8}
\STATE {\bf If} ($V(G)=V(T^k_{v,S})$) {\bf Return}($T^k_{v,S}$){\bf \}\}}\label{l1-9}
\ENDFOR
\ENDFOR
\STATE {\bf Discard} ${\cal A}_G^{k-1}${\bf\}}\label{l1-10}
\ENDFOR
\STATE {\bf Return}($G$ does not admit a tree $t$-spanner.)\label{l1-11}
\setcounter{total_no_lines}{\value{ALC@line}}
\end{algorithmic}
}}
\end{center}
\end{tt}
\caption{Algorithm {\tt Find\_Tree\_spanner($G$, $t$)}. Procedure {\tt Find\_Subtree} is described in
table~\ref{tp}.}
\label{ta}
\end{table}
Algorithm {\tt Find\_Tree\_spanner} in table~\ref{ta} has as input a graph $G$
and an integer $t>1$. Its output is a tree $t$-spanner of $G$ or a message that $G$ does not admit any
tree $t$-spanner. Being a dynamic programming algorithm, it grows partial solutions into final solutions starting from small
subtrees of $G$. Obviously, each such subtree must be a tree $t$-spanner of the subgraph of $G$ induced by the vertices
of the subtree. All these subtrees are the first partial solutions of the dynamic programming method and are generated
by exhaustive search (first stage of the algorithm). Graphs of bounded degree have vertices with bounded neighborhoods;
therefore, this search for small subtrees is affordable. Note that the algorithm works for all input graphs, but its efficiency
suffers when graphs of large degree are examined.
\begin{table}[tbp]
\begin{tt}
\begin{center}
\framebox{
\noindent\parbox{0.95\textwidth}{
{\bf Procedure Find\_Subtree}($G$, $t$, $v$, $S$, $k$, ${\cal A}_G^{k-1}$)\newline
{\bf Input:} A graph $G$, an integer $t>1$, a vertex $v\in G$, a tree $t$-spanner $S$ of $G[S]$
with $\lfloor\frac{t}{2}\rfloor$-center $v$,
an integer $k\geq 1$, and a set ${\cal A}_G^{k-1}$ of subtrees of $G$.\newline
\begin{algorithmic}[1]
\setcounter{ALC@line}{\value{total_no_lines}}
\IF{($k=1$)}\label{l2-1}
\STATE ${\cal Q}_{v,S}=\{Q\subseteq G: Q$ is a component of $G\setminus S\}$\label{l2-2}
\COMMENT{static}
\STATE {\bf Return}($S$){\bf\}}\label{l2-3}
\ELSE\label{l2-4}
\STATE $T^k_{v,S}=T^{k-1}_{v,S}$ \COMMENT{$T^{k-1}_{v,S}$ is in ${\cal A}_G^{k-1}$}\label{l2-5}
\FOR{(component $Q\in {\cal Q}_{v,S}$)}\label{l2-6}
\FOR{($T^{k-1}_{u,R} \in {\cal A}_G^{k-1}$ such that $u\in N_S(v)$)}\label{l2-7}
\STATE $T^k_{v,S,u,R,Q}=(T^{k-1}_{u,R}[Q\cup R]\cup S)\setminus((R\setminus S)\setminus Q)$\label{l2-8}
\IF{($T^k_{v,S,u,R,Q}$ is a tree $t$-spanner of $G[Q\cup S]$)}\label{l2-9}
\STATE $T^k_{v,S}=T^k_{v,S}\cup T^k_{v,S,u,R,Q}$\label{l2-10}
\STATE ${\cal Q}_{v,S} = {\cal Q}_{v,S} \setminus \{Q\}$\label{l2-11}
\STATE {\bf Break}{\bf\}\}\}} \COMMENT{Stop search in ${\cal A}_G^{k-1}$}\label{l2-12}
\ENDIF
\ENDFOR
\ENDFOR
\STATE {\bf Return}($T^k_{v,S}$){\bf\}}\label{l2-13}
\ENDIF
\end{algorithmic}
}}
\end{center}
\end{tt}
\caption{Procedure {\tt Find\_Subtree($G$, $t$, $v$, $S$, $k$, ${\cal A}_G^{k-1}$)}. In line~\ref{l2-2}, variable ${\cal Q}_{v,S}$
has been declared as static; i.e. it is stored locally for later use, when the procedure is called again.}
\label{tp}
\end{table}
In each of the next stages of this dynamic programming algorithm, each partial solution is examined and, then, if possible, it is
incremented (procedure {\tt Find\_Subtree} in table~\ref{tp}).
The initial subtree of each partial solution (which was formed in the first stage) is its core. Let $T^k_{v,S}$ be
a partial solution that is being examined. Removing the core of $T^k_{v,S}$, which is $S$, from $G$ creates
some components. Each such
component $Q$ that is not covered so far by $T^k_{v,S}$ is considered. The core of $T^k_{v,S}$ is put together with an
appropriate (based on $Q$) portion of nearby partial solutions; if the resulting graph is a tree $t$-spanner of the subgraph
of $G$ induced by the vertices of the resulting graph, then the partial solution under examination $T^k_{v,S}$ is incremented
by the resulting graph. This increment helps $T^k_{v,S}$ to cover $Q$. If $G$ admits a tree $t$-spanner, then some of
the partial solutions eventually cover $G$; if so, the algorithm outputs one of them (line~\ref{l1-9} of table~\ref{ta}).
Otherwise, $|V(G)|$ stages suffice to conclude that $G$ does not admit any tree $t$-spanner (line~\ref{l1-11} of
table~\ref{ta}). The description of the algorithm in the two tables
has some details, which are explained in the following paragraphs.
Let us start with table~\ref{ta}.
Here, ${\cal A}_G^0$ is set to $\emptyset$ (line~\ref{l1-1}); its only
purpose is to make the first call of procedure {\tt Find\_Subtree} well-defined.
To drive the process of growing partial solutions, a main {\tt For} loop is used (line~\ref{l1-2});
its variable $k$ starts from 1 and is incremented by 1 at the end of each stage.
Set ${\cal A}_G^k$ stores the progress on partial solutions and is
initialized to $\emptyset$ (line~\ref{l1-3}); i.e. it is a set
whose elements are subtrees of $G$. The first stage ($k=1$) differs from the rest in not having previous
partial solutions to merge. First, it is necessary to pick names for the primary partial solutions. For each
vertex $v$ of $G$ a set ${\cal S}_v$ is formed (line~\ref{l1-5}). Each subgraph $S$ of $G$ that is a tree
$t$-spanner of $G[S]$ and has $v$ as a $\lfloor\frac{t}{2}\rfloor$-center becomes an element of ${\cal S}_v$.
This set ${\cal S}_v$ can be formed by exhaustively checking all the subtrees of the sphere of $G$ with
center $v$ and radius $\lfloor\frac{t}{2}\rfloor$ (see lemma~\ref{lsetsize}). Note that the computations to
form ${\cal S}_v$ can be done only for $k=1$. Then, for each member $S$ of ${\cal S}_v$ a partial solution is considered
under the name $T^1_{v,S}$. Second, each primary partial solution must be initialized (line~\ref{l1-7}). This is a job for procedure
{\tt Find\_Subtree}, which for $k=1$ returns $S$; i.e. $T^1_{v,S}=S$.
Of course, each newly formed primary partial
solution is stored in ${\cal A}_G^1$ (line~\ref{l1-8}). It may well be the case, when $G$ is a small graph, that one of these primary
solutions already spans $G$ (i.e., $V(G)=V(T^1_{v,S})$); then, a tree $t$-spanner of $G$ is found. This completes the
first stage of the main {\tt For} loop.
For $k>1$, partial solutions are merged if possible. Again, all partial solutions are considered one by one.
Procedure {\tt Find\_Subtree} in table~\ref{tp} receives as input (among others)
vertex $v$ and subtree $S\in{\cal S}_v$; these two determine the name of the partial solution under examination $T^k_{v,S}$,
where $k$ is just the number of the stage the algorithm is in.
It also receives as input all the partial solutions formed in the previous stage of the dynamic programming
method through set ${\cal A}_G^{k-1}$. Procedure {\tt Find\_Subtree} has saved locally the set of
components ${\cal Q}_{v,S}$ of $G\setminus S$, when it was called during the first stage of the main
algorithm ($k=1$). Set ${\cal Q}_{v,S}$ is a static variable; the content
of this set changes and these changes are remembered when the procedure is called again. Another way to
put it is that ${\cal Q}_{v,S}$ is a global variable, which is not lost each time the procedure ends.
The central set of operations of this dynamic programming algorithm is in procedure {\tt Find\_Subtree},
when $k>1$ (table~\ref{tp}, lines~\ref{l2-4} to~\ref{l2-13}).
First, partial solution $T^k_{v,S}$ takes the value that it had in the previous stage, which had been
stored in ${\cal A}_G^{k-1}$; i.e.\ $T^k_{v,S}=T^{k-1}_{v,S}$ (line~\ref{l2-5}).
Then, second, each component $Q$ in ${\cal Q}_{v,S}$ is examined to check if $T^k_{v,S}$ can be extended towards
$Q$ (line~\ref{l2-6}). Third, this extension will be done using other nearby partial solutions in ${\cal A}_G^{k-1}$.
For this, all partial solutions
in ${\cal A}_G^{k-1}$ that involve as central vertex a neighbor of $v$ in $S$ are considered, one at a time (line~\ref{l2-7});
the central vertex of partial solution $T^{k-1}_{u,R}$ is $u$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6cm]{fprocedurestep.pdf}
\caption{The formation of auxiliary graph $T^k_{v,S,u,R,Q}$ in procedure {\tt Find\_Subtree} of
table~\ref{tp}. The left circle is $S$, which is a tree
$t$-spanner of $G[S]$ and has $v$ as a $\lfloor\frac{t}{2}\rfloor$-center. Similarly, the right circle
is $R$, which is a tree $t$-spanner of $G[R]$ and has $u$ as a $\lfloor\frac{t}{2}\rfloor$-center;
here, $u$ is a neighbor of $v$ in $S$. Also, $Q$ is a component of $G\setminus S$. The
gray area is $(R\setminus S)\setminus Q$. The vertices that correspond to this gray area are removed
from $B=T^{k-1}_{u,R}[Q\cup R]\cup S$ to form the auxiliary graph.}
\label{fprocedurestep}
\end{center}
\end{figure}
Fourth, assume that the nearby partial solution $T^{k-1}_{u,R}$ is considered when component $Q$ is examined. Then,
in line~\ref{l2-8}, an
auxiliary graph $T^k_{v,S,u,R,Q}$ is formed (see figure~\ref{fprocedurestep}). The fundamental part of this
auxiliary graph is $T^{k-1}_{u,R}[Q\cup R]$; i.e. the restriction of the considered nearby partial solution to the
component under examination (plus its core $R$). The intuition behind this operation is that $u$ may
be ``closer'' than $v$ to $Q$ and, therefore, $T^{k-1}_{u,R}$ may have covered
$Q$ in a previous stage of the dynamic programming method. For example, for $k=2$, input graph
$G$ may ``end'' towards the ``direction'' of edge $vu$; i.e. $V(Q)\subseteq V(R)$ and, therefore, $T^1_{u,R}$ covers $Q$.
In most cases, $T^{k-1}_{u,R}$ does not contain all the vertices of $Q$; then, $T^{k-1}_{u,R}[Q\cup R]$ is still
meaningful, because the definition of induced subgraph used in this article differs slightly from the usual one
(see section~\ref{sdefin}).
Having the foundation in hand, i.e. graph $T^{k-1}_{u,R}[Q\cup R]$, subtree $S$ is added to it; for convenience, let us call
the resulting graph $B$ (i.e. $B=T^{k-1}_{u,R}[Q\cup R]\cup S$; here, $B$ is not used as a variable within
table~\ref{tp}).
Note that $B$ may not even be a tree, because $S\cup R$ may contain a cycle.
Since partial solution $T^k_{v,S}$ is to be incremented towards exactly $Q$, graph $(R\setminus S)\setminus Q$
(gray area in figure~\ref{fprocedurestep})
is removed\footnote{Due to the exhaustive search for primary partial solutions, there is always some $R'$ that
will do the job instead of $R$, such that $(R'\setminus S)\setminus Q=\emptyset$. But removing the gray area facilitates
the proof of correctness; this way more nearby partial solutions may help the partial solution under examination to grow.}
from $B$ to form the auxiliary graph $T^k_{v,S,u,R,Q}$ (see lemma~\ref{lgrayarea}).
Fifth, if auxiliary graph $T^k_{v,S,u,R,Q}$ is a tree $t$-spanner of $G[Q\cup S]$, then it is added to
the partial solution under examination; i.e. $T^k_{v,S}=T^k_{v,S}\cup T^k_{v,S,u,R,Q}$ (line~\ref{l2-10}). The result is again
a tree $t$-spanner of the subgraph of $G$ induced by the vertices of the result (see lemma~\ref{lmerikilysi}
and lemma~\ref{lunion}). Also, in this case, component $Q$ is removed from
${\cal Q}_{v,S}$ (line~\ref{l2-11}) and the search for nearby partial solutions to grow $T^k_{v,S}$ towards $Q$ stops
(line~\ref{l2-12}), since $T^k_{v,S}$ now covers $Q$. The execution continues after the {\tt If}
statement (line~\ref{l2-9}) with the
next component in ${\cal Q}_{v,S}$. Note that auxiliary graph $T^k_{v,S,u,R,Q}$ can be discarded at this point.
These five steps complete the central set of operations of this dynamic
programming algorithm. Of course, after all components in ${\cal Q}_{v,S}$ have been examined, procedure
{\tt Find\_Subtree} returns $T^k_{v,S}$ to the main program (line~\ref{l2-13}).
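The control flow of these five steps can be sketched in a few lines. The following Python fragment is an illustration only, not the procedure of table~\ref{tp} itself: partial solutions are modelled as plain vertex sets, components of $G\setminus S$ as frozensets, and the hypothetical predicate {\tt can\_cover} stands in for the tree $t$-spanner test of line~\ref{l2-9}.

```python
def find_subtree_merge(T_prev, S_vertices, v_neighbors_in_S, prev_solutions,
                       pending_components, can_cover):
    """Sketch of the merge phase of Find_Subtree for k > 1.

    T_prev             -- vertex set of the partial solution from stage k-1
    S_vertices         -- vertex set of the core S
    v_neighbors_in_S   -- the neighbors u of v in S (sources of nearby solutions)
    prev_solutions     -- dict: u -> list of cores R with a stored T^{k-1}_{u,R}
    pending_components -- list of components Q of G \\ S not yet covered
    can_cover          -- hypothetical stand-in for the spanner test of line l2-9
    """
    T = set(T_prev)                              # step 1: T^k_{v,S} = T^{k-1}_{v,S}
    for Q in list(pending_components):           # step 2: examine each component Q
        for u in v_neighbors_in_S:               # step 3: nearby partial solutions
            for R in prev_solutions.get(u, []):  # step 4: candidate (u, R) for Q
                if can_cover(u, R, Q):
                    T |= set(Q) | set(S_vertices)    # step 5: merge the auxiliary graph
                    pending_components.remove(Q)     # Q is now covered (line l2-11)
                    break                            # stop the search for Q (line l2-12)
            else:
                continue
            break
    return T
```

On success the covered component is dropped from the pending list and the search for that component stops, mirroring lines~\ref{l2-11} and~\ref{l2-12}.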
A few comments on the algorithm follow. The algorithm works for $t=2$ as well, although there is an efficient algorithm
for this case \cite{Bondy89,CaiCor95a,CaiThesis}. Maintaining the various sets used in the algorithm is done using linked lists.
This way a {\tt For} loop on elements of a set retrieves sequentially all elements in a linked list.
Finally, procedure {\tt Find\_Subtree} does not need to check whether its input is appropriate.
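Since the test ``is this graph a tree $t$-spanner of $G[\cdot]$?'' (line~\ref{l2-9} of table~\ref{tp}) drives the whole merge phase, a self-contained sketch of that test may be helpful. The following Python fragment is an illustration only, assuming graphs given as edge lists: it checks that $T$ is a spanning tree of $G$ and that every edge of $G$ joins vertices at $T$ distance at most $t$.

```python
from collections import deque

def bfs_dist(adj, src):
    """Distances from src in an unweighted graph given as {v: set of neighbors}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def is_tree_t_spanner(g_edges, t_edges, t):
    """Check whether T (edge list t_edges) is a tree t-spanner of G (g_edges)."""
    g_vertices = {v for e in g_edges for v in e}
    t_vertices = {v for e in t_edges for v in e}
    # T must span G and be a tree: same vertex set, |E| = |V| - 1, connected.
    if t_vertices != g_vertices or len(t_edges) != len(g_vertices) - 1:
        return False
    adj = {v: set() for v in t_vertices}
    for a, b in t_edges:
        adj[a].add(b)
        adj[b].add(a)
    dist_from = {v: bfs_dist(adj, v) for v in t_vertices}
    if len(dist_from[next(iter(t_vertices))]) != len(t_vertices):
        return False  # T is disconnected, hence not a tree
    # Stretch condition: every edge of G has T distance at most t.
    return all(dist_from[a].get(b, t + 1) <= t for a, b in g_edges)
```

For example, the path $1$--$2$--$3$--$4$ is a tree $3$-spanner of the cycle on four vertices, but not a tree $2$-spanner, since the cycle edge $\{4,1\}$ has $T$ distance $3$.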
\section{Proof of correctness}
The following lemma is employed in various places within this section; when it is used in a proof, a footnote gives
the correspondence between the variable names in the proof and the names in its statement below. It
describes a basic property of spanners: vertices too far apart in a spanner of a graph cannot be adjacent in the graph.
The notion of a sphere has been defined in section~\ref{sdefin}.
\begin{lemma}
Let $G$ be a graph, $T$ a tree $t$-spanner of $G$, and $x$ a vertex of $G$, where $t>1$. Let $X$ be the
$(x,\lfloor\frac{t}{2}\rfloor)_T$-sphere. Let $y$ be a $T$ neighbor of $x$
and let $T_y$ be the component of $T\setminus \{xy\}$ that contains $y$. Then, there is no edge of $G$ from a
vertex in $T_y\setminus X$ to a vertex in $(G\setminus T_y)\setminus X$.
\label{lcut_subtree}
\end{lemma}
{\em Proof}.
Assume, towards a contradiction, that there is an edge of $G$ between a vertex $p\in T_y\setminus X$ and
a vertex $q\in (G\setminus T_y)\setminus X$. Let $P_1$ be the $T$ path from $p$ to $x$. Then, all the
vertices of $P_1$ but $x$ are vertices of $T_y$. Also, the length of $P_1$ is strictly greater than $\lfloor\frac{t}{2}\rfloor$,
because $X$ contains all the vertices of $T$ at $T$ distance less than or equal to $\lfloor\frac{t}{2}\rfloor$
from $x$ and $p$ is out of $X$.
Let $P_2$ be the $T$ path from $q$ to $x$. Then, none of the
vertices of $P_2$ is a vertex of $T_y$, because $q\not\in T_y$ and $x\not\in T_y$. Also, the length of $P_2$ is strictly
greater than $\lfloor\frac{t}{2}\rfloor$, because $X$ contains all the vertices of $T$ at $T$ distance less than or equal to
$\lfloor\frac{t}{2}\rfloor$ from $x$ and
$q$ is out of $X$. Then, the $T$ path from $p$ to $q$ has length greater than or equal to $2\lfloor\frac{t}{2}\rfloor+2$; i.e. the
$T$ distance between the endpoints of edge $pq$ of $G$ is strictly greater than $t$. This is a contradiction to $T$ being
a tree $t$-spanner of $G$.\hspace*{\fill}$\Box$\par\addvspace{\topsep}
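The distance arithmetic at the end of the proof can be collected into one display; the following restatement (not part of the original argument) uses the fact that $P_1$ and $P_2$ intersect only at $x$, so that together they form the $T$ path from $p$ to $q$:
\[
d_T(p,q)=|P_1|+|P_2|\;\geq\;2\left(\left\lfloor\tfrac{t}{2}\right\rfloor+1\right)=2\left\lfloor\tfrac{t}{2}\right\rfloor+2\;\geq\;t+1\;>\;t,
\]
where the second inequality holds because $2\lfloor\frac{t}{2}\rfloor\geq t-1$ for every integer $t$.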
When growing a partial solution $T_1=T^k_{v,S}$ towards a component $Q$ of $G\setminus S$, by adding an
auxiliary graph $T_2=T^k_{v,S,u,R,Q}$ (line~\ref{l2-10} of table~\ref{tp}),
the result $T_1\cup T_2$ must be a tree $t$-spanner of $G[T_1\cup T_2]$;
the following lemma handles that. The vertices of forest $T_2\setminus T_1$ are the vertices of $Q$, while
the vertices of forest $T_1\setminus T_2$ are the vertices of the components of $G\setminus S$ that have already
been covered by $T_1$; since these two forests
correspond to such components, there is no edge of the input graph between them. The intersection
$T_1\cap T_2$ corresponds to the core (the initial value) of partial solution $T_1$, which is $S$.
\begin{lemma}
Let $G$ be a graph. Assume that $T_1$ is a tree $t$-spanner of $G[T_1]$ and that $T_2$ is a tree $t$-spanner of
$G[T_2]$. Also, assume that there is no edge of $G$ between a vertex in $T_1\setminus T_2$ and a vertex in
$T_2\setminus T_1$. If $T_1\cap T_2$ is a nonempty tree, then $T_1\cup T_2$ is a tree $t$-spanner of
$G[T_1\cup T_2]$.
\label{lunion}
\end{lemma}
{\em Proof}.
Let $T_{\cup}=T_1\cup T_2$ and $T_{\cap}=T_1\cap T_2$. First, it is proved that $T_{\cup}$ is a tree.
Let ${\cal Q}$ be the set of components of $T_{\cup}\setminus T_{\cap}$. Trivially, each component in ${\cal Q}$ is either an
induced subgraph of $T_1\setminus T_2$ or an induced subgraph of $T_2\setminus T_1$.
Consider a component $Q$ in ${\cal Q}$. Obviously, $T_{\cup}$ is a connected graph ($T_1$ and $T_2$ are connected and share the nonempty $T_{\cap}$).
So, there must be at least one
edge of $T_{\cup}$ between $Q$ and $T_{\cap}$. Without loss of generality, assume that $Q$ is an induced subgraph of
$T_1\setminus T_2$. So, all the edges of $T_{\cup}$ between $Q$ and $T_{\cap}$ belong to $T_1$.
Therefore, since $Q$ and $T_{\cap}$ are connected subgraphs of $T_1$ and $T_1$ is a tree, there is exactly one
edge of $T_{\cup}$ between $Q$ and $T_{\cap}$.
Here, $T_{\cap}$ is an induced subgraph of $T_{\cup}$, because any extra edge would form a cycle in $T_1$ or in $T_2$.
So, $T_{\cup}$ is a connected graph that consists of $|{\cal Q}|+1$ vertex disjoint trees plus $|{\cal Q}|$ edges;
therefore, it is a tree.
Second, it is proved that $T_{\cup}$ is a $t$-spanner of $G[T_1\cup T_2]$. Consider an edge $vu$ of $G[T_1\cup T_2]$.
Then, $vu$ is an edge of $G[T_1]$ or an edge of $G[T_2]$, because
there is no edge of $G$ between a vertex in $T_1\setminus T_2$ and a vertex in $T_2\setminus T_1$. Assume,
without loss of generality, that $vu$ is an edge of $G[T_1]$. Since $T_1$ is a tree $t$-spanner of $G[T_1]$, the distance
in $T_{\cup}$ between $v$ and $u$ is at most $t$.\hspace*{\fill}$\Box$\par\addvspace{\topsep}
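The counting in the first half of the proof can be made explicit; in the following display (a restatement for the reader's convenience), $V_1,\dots,V_{|{\cal Q}|+1}$ denote the vertex sets of the $|{\cal Q}|+1$ vertex disjoint trees:
\[
|E(T_{\cup})|=\sum_{i=1}^{|{\cal Q}|+1}\left(|V_i|-1\right)+|{\cal Q}|=|V(T_{\cup})|-\left(|{\cal Q}|+1\right)+|{\cal Q}|=|V(T_{\cup})|-1,
\]
and a connected graph with one edge fewer than its number of vertices is a tree.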
The incrementing of partial solutions is done through a specific command within the algorithm. The following lemma
examines one by one the executions of this command and confirms that the incrementing is done properly. For
this, a double induction is used.
\begin{lemma}
Let $G$ be a graph and $t>1$ an integer. For every $v\in G$, for every $S\in{\cal S}_v$ and for every $k$ ($1\leq k \leq |V(G)|$),
$T^k_{v, S}$ is a tree $t$-spanner of $G[T^k_{v, S}]$, where ${\cal S}_v$ and $T^k_{v, S}$ are
constructed in algorithm {\tt Find\_Tree\_spanner} of table~\ref{ta} on input ($G$, $t$).
\label{lmerikilysi}
\end{lemma}
{\em Proof}.
Since $S\in{\cal S}_v$, $T^1_{v,S}$ belongs to ${\cal A}_G^1$ and it is equal to $S$,
which is a tree $t$-spanner of $G[S]$; so, the lemma holds for $k=1$.
Let $l_k$ be the total number of times $T^k_{v, S}$ is incremented through command
\begin{equation} \label{eincrement}
T^k_{v,S}=T^k_{v,S}\cup T^k_{v,S,u,R,Q}
\end{equation}
in line~\ref{l2-10} of table~\ref{tp}. Denote by $T^{k,l}_{v,S}$ the value of variable $T^k_{v,S}$, when
$T^k_{v, S}$ has been incremented $l$ times, through command~(\ref{eincrement}) above, where $2\leq k \leq |V(G)|$.
Then, $T^{2,0}_{v,S}=T^1_{v,S}$ and $T^{k,0}_{v,S}=T^{k-1,l_{k-1}}_{v,S}$, where $3\leq k \leq |V(G)|$
(line~\ref{l2-5} of table~\ref{tp}). Also,
$T^{k,l}_{v,S}=T^{k,l-1}_{v,S}\cup T^k_{v,S,u,R,Q}$, where $2\leq k \leq |V(G)|$ and $1\leq l\leq l_k$.
First, it is proved, by induction on $l$, that for each $k$ ($2\leq k \leq |V(G)|$) if $T^{k,0}_{v,S}$ is a tree $t$-spanner of $G[T^{k,0}_{v,S}]$, then $T^{k,l}_{v,S}$ is a tree $t$-spanner of $G[T^{k,l}_{v,S}]$, where $0\leq l\leq l_k$. The base
case ($l=0$) holds trivially. For the induction step ($1\leq l\leq l_k$), $T_1=T^{k,l-1}_{v, S}$ is incremented by
$T_2=T^k_{v,S,u,R,Q}$ for some $u$, $R$ and $Q$ to become $T^{k,l}_{v, S}$; i.e. $T^{k,l}_{v, S}=T_1\cup T_2$.
Then, $T_2$ must be a tree $t$-spanner
of $G[Q\cup S]$ (see condition of {\tt If} statement in line~\ref{l2-9} of table~\ref{tp}).
Also, $T_1$ is a tree $t$-spanner of $G[T_1]$, by induction hypothesis. The vertex set of $T_2$ is $V(S\cup Q)$.
Also, the vertex set of $T_1$ is the vertices of $S$ union the vertices of some components of
$G\setminus S$ other than $Q$, because $Q$ has not been used to increment this partial solution
before. So, there is no edge of $G$ between a vertex in $T_1\setminus T_2$ and a vertex in
$T_2\setminus T_1$ and $V(T_1\cap T_2)=V(S)$. By construction of $T_2$, $S$ is a subtree of $T_2$. Also,
$T_1$ contains $S$, because $T_1$ contains $T^1_{v,S}$, which is equal to $S$. So, $T_1\cap T_2$ is $S$, which
is a nonempty tree. Here, $T^{k,l}_{v,S}=T_1\cup T_2$. Therefore, by lemma~\ref{lunion},
$T^{k,l}_{v,S}$ is a tree $t$-spanner of $G[T^{k,l}_{v,S}]$.
Second, the lemma is proved by induction on $k$; i.e. it is proved that $T^{k,l}_{v, S}$ is a tree $t$-spanner of
$G[T^{k,l}_{v, S}]$,
where $2\leq k \leq |V(G)|$ and $0\leq l\leq l_k$. For the base case, $T^{2,0}_{v,S}$ is equal to $T^1_{v,S}$,
which has been shown to be a tree $t$-spanner of $G[T^1_{v,S}]$. So, by the first induction,
$T^{2,l}_{v,S}$ is a tree $t$-spanner of $G[T^{2,l}_{v, S}]$, for $0\leq l\leq l_k$. For the induction step,
$T^{k,0}_{v,S}$ is equal to $T^{k-1,l_{k-1}}_{v,S}$, which, by induction hypothesis, is a tree $t$-spanner
of $G[T^{k-1,l_{k-1}}_{v,S}]$. So, by the first induction,
$T^{k,l}_{v,S}$ is a tree $t$-spanner of $G[T^{k,l}_{v, S}]$, for $0\leq l\leq l_k$.\hspace*{\fill}$\Box$\par\addvspace{\topsep}
Assume that a partial solution $T^k_{v,S}$ is about to grow towards a component $W$ of $G\setminus S$ with the help
of a nearby partial solution $T'=T^{k-1}_{u,R}$. Then, the vertices of $R$ that are not in $S$ nor in $W$ are
not needed to grow $T^k_{v,S}$. The following lemma facilitates this process.
\begin{lemma}
Let $G$ be a graph and $T$ a tree $t$-spanner of $G$, where $t>1$. Let $v$ be a vertex of $G$ and let $S$ be the
$(v,\lfloor\frac{t}{2}\rfloor)_T$-sphere. Let $u$ be a $T$ neighbor of $v$ and let $R$ be the
$(u,\lfloor\frac{t}{2}\rfloor)_T$-sphere. Let $W$ be a component of $G\setminus S$. Let $T'$ be a
tree $t$-spanner of $G[T']$, such that $R\subseteq T'$. Let $L$ be $(R\setminus S)\setminus W$.
If $T'[W\cup R]\cup S$ is a tree $t$-spanner of $G[W\cup S\cup R]$, then $(T'[W\cup R]\cup S)\setminus L$
is a tree $t$-spanner of $G[W\cup S]$.
\label{lgrayarea}
\end{lemma}
{\em Proof}.
Obviously, $L$ corresponds to the gray area in figure~\ref{fprocedurestep}. Let $g$ be a vertex in $L$.
Then, $g\not\in S$ and $g\not\in W$. So, there is no edge of $G$ from $g$ to a vertex in $W$; otherwise $g$,
which is not in $S$, would belong to component $W$ itself. Assume, towards a contradiction, that there is an edge $e$ of
$T'[W\cup R]\cup S$ from $g$ to a vertex in $S\cup R$,
such that $e\not\in E(S\cup R)$. Obviously, $S\cup R$ is a connected graph, because $\lfloor\frac{t}{2}\rfloor>0$.
Also, $R\subseteq T'$; so, all edges of $S\cup R$ are present in $T'[W\cup R]\cup S$. Therefore, there is
a path in $T'[W\cup R]\cup S$ between the endpoints of $e$ that avoids $e$ (note that $L\subseteq R$). This
is a contradiction to $T'[W\cup R]\cup S$ being a tree. So, since $L$ does not contain any vertex of $S$,
all edges of $T'[W\cup R]\cup S$ incident to $g$ must be edges of $R$ (it was proved earlier that there is no
edge of $G$ between $g$ and a vertex in $W$). Here, $g$ must be at distance
exactly $\lfloor\frac{t}{2}\rfloor$ from $u$, because $g\in R\setminus S$ and $S$ contains all the vertices at $T$ distance
less than or equal to $\lfloor\frac{t}{2}\rfloor-1$ from $u$. Therefore, $g$ is incident to only one edge of $T'[W\cup R]\cup S$.
So, removing $L$ from $T'[W\cup R]\cup S$ results in a tree $t$-spanner of $G[W\cup S].$\hspace*{\fill}$\Box$\par\addvspace{\topsep}
The main lemma in the proof of correctness of the algorithm follows. It guarantees that if the input graph
admits a tree $t$-spanner, then some partial solutions can grow during some stages of the algorithm.
To break its proof into small parts, minor conclusions appear
as statements at the end of the paragraph that justifies them and are numbered equation like. Also,
intermediate conclusions appear as numbered facts. The lemma is proved
by induction on the number of stages of the algorithm (variable $k$ in the main {\tt For} loop at
table~\ref{ta}; see line~\ref{l1-2}). Note that the algorithm is ahead of the induction, in the sense
that for $k=2$ the algorithm starts merging primary partial solutions, while the induction considers such
merges for $k>\lfloor\frac{t}{2}\rfloor$, as one can see in the proof of fact~\ref{fspannerwhenempty} (for
$t\leq 3$, though, the algorithm and the induction are on the same page). The intuition behind the lemma is the
following. If a graph admits a tree $t$-spanner $T$, then there is a sphere $S$ of $T$ which is close to leaves of $T$, or
to picture it, say that $S$ is close to an end of $T$. Then, a nearby sphere $R$ does cover some of the leaves that
$S$ just misses. Here, $S$ corresponds to the partial solution that may grow, while $R$ corresponds to
a nearby partial solution $T^k_{u,R}$ that may help it grow. This picture is described formally by
fact~\ref{fspannerwhenempty}, where ${\cal H}=\emptyset$ means that $S$ is close to an end of $T$.
After the first steps of the induction, some partial solutions have grown. Assume now that $S$ is
not close to some end of $T$. Then, $S$ is
nearby to a partial solution $T^k_{u,R}$, which is closer to that end of $T$ than $S$ is. The induction
hypothesis hints that, at some earlier stage, $T^k_{u,R}$ covered the part of the input graph
from $R$ to that end of $T$. So, $T^k_{u,R}$ may help (see fact~\ref{fspannerwhennotempty}) the partial solution
that corresponds to $S$ to grow.
\begin{lemma}
If $G$ admits a tree $t$-spanner $T$ ($t>1$) for which there exists vector ($k$, $v$, $S$, ${\cal W}$, $u$) such that:
\begin{enumerate}
\item $1\leq k\leq |V(G)|-1$, $v\in V(G)$,
\item $S$ is the $(v,\lfloor\frac{t}{2}\rfloor)_T$-sphere,
\item $u$ is a $T$ neighbor of $v$, $T_u$ is the component of $T\setminus\{uv\}$ that
contains $u$,
\item ${\cal W}=\{ X\subseteq G: X$ is a component of $G\setminus S$ and $V(X)\subseteq V(T_u)\}$, and
\item $v$ is a $k$-center of $T_u\cup S$,
\end{enumerate}
then algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$) (see table~\ref{ta}) returns a graph or for every
component $W\in {\cal W}$ there exists $R_W\subseteq G$ such that:
\begin{itemize}
\item $T^k_{u, R_W}$ is stored in ${\cal A}_G^k$ of algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$)
and
\item $T^{k+1}_{v,S,u,R_W,W}$ is a tree $t$-spanner of $G[W\cup S]$, where
$T^{k+1}_{v,S,u,R_W,W}$ is the graph $(T^k_{u,R_W}[W\cup R_W]\cup S)\setminus((R_W\setminus S)\setminus W)$
(see line~\ref{l2-8} of table~\ref{tp}, where such auxiliary graphs are constructed).
\end{itemize}
\label{lypodendro}
\end{lemma}
{\em Proof}.
Assume that algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$) does not return a graph; then all the stages of
the main {\tt For} loop of the algorithm are executed (line~\ref{l1-2} of table~\ref{ta}).
The lemma is proved by induction on $k$. For the {\bf base case}, $k\leq \lfloor\frac{t}{2}\rfloor$.
Here, $S$ is the subtree of $T$ that contains all the vertices of $T$ at $T$ distance less than or equal to $\lfloor\frac{t}{2}\rfloor$
from $v$. So, all the vertices in $\bigcup{\cal W}$ have to be at $T$ distance strictly greater than
$\lfloor\frac{t}{2}\rfloor$ from $v$ (each member of ${\cal W}$ is a component of $G\setminus S$).
Here, $v$ is a $k$-center of $T_u\cup S$; so,
all vertices in $\bigcup{\cal W}$ are at $T$ distance less than or equal to $k$ from $v$ (the vertex set of each component in
${\cal W}$ is subset of $T_u$). So, ${\cal W}=\emptyset$
and the lemma holds vacuously.
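In short, the base case rests on two incompatible bounds: any vertex $x$ in $\bigcup{\cal W}$ would have to satisfy
\[
\left\lfloor\tfrac{t}{2}\right\rfloor<d_T(v,x)\leq k\leq\left\lfloor\tfrac{t}{2}\right\rfloor,
\]
which is impossible (this display merely restates the argument of the previous paragraph).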
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=7cm]{fepanalipsi.pdf}
\caption{The situation for $t=3$. Only edges of $T$ are shown. The dashed line sets concern vertex sets
involved in the induction hypothesis. Note that, when $t=3$, one can prove that for each $y\in X_W$, sets
${\cal W}_y$ and ${\cal W}'_y$ coincide.}
\label{fepanalipsi}
\end{center}
\end{figure}
For the {\bf induction step}, $\lfloor\frac{t}{2}\rfloor+1\leq k\leq |V(G)|-1$. Some definitions which
are used throughout the proof are introduced in this paragraph.
Let $W$ be a component in ${\cal W}$. Let $R$ be the $(u,\lfloor\frac{t}{2}\rfloor)_T$-sphere.
Let $X_W=V(R\cap W)$.
Let $Y_W=\{y\in N_T(u):$ there is a vertex $x\in X_W$ such that the $T$ path from $x$
to $u$ contains $y\}$. Here, $v$ is not in $Y_W$, because $V(W)\subseteq V(T_u)$. Note that, when
$t\leq3$, $X_W=Y_W$ (figure~\ref{fepanalipsi}). Now, for each $y\in Y_W$ let
$T_y$ be the component of $T\setminus\{uy\}$ that contains $y$. To make use of induction
hypothesis appropriate sets of components are defined. For each $y\in Y_W$ let
${\cal W}_y=\{X\subseteq G: X$ is a component of $G\setminus R$ and $V(X)\subseteq V(T_y)\}$.
Since a tree $t$-spanner for $G[S\cup W]$ is to be constructed, only the components of
each ${\cal W}_y$ that fall within $W$ are of interest. So, for each $y\in Y_W$, let
${\cal W}'_y=\{H\in{\cal W}_y: H\subseteq W\}$. To refer to all these components,
define ${\cal H}=\bigcup_{y\in Y_W}{\cal W}'_y$.
A fundamental reason that the algorithm works is fact~\ref{fbiguinion}. It says that each component of
$G\setminus R$ falls either nicely into $W$ or completely out of $W$ and, therefore, the induction
hypothesis becomes useful (see figure~\ref{falgochoice}). On one hand, $X_W\subseteq V(W)$, by definition of $X_W$. Also,
for every $y\in Y_W$, every component in ${\cal W}'_y$ is a subgraph of $W$. Therefore,
\begin{equation} \label{eonedirection}
V(\bigcup{\cal H})\cup X_W\subseteq V(W)
\end{equation}
On the other hand, let $p$ be a vertex in $W$. Let $P$ be the $T$ path from $p$ to $v$ (see figure~\ref{fsubtrees}).
Since $V(W)\subseteq V(T_u)$, $P$ contains
$u$ and all its vertices but $v$ belong to $T_u$. All the vertices of $W$ are at $T$ distance strictly greater than
$\lfloor\frac{t}{2}\rfloor$ from $v$, by the definition of $S$ ($W$ is a component of $G\setminus S$). So, $P$ contains
exactly one vertex, say vertex $x$, at $T$ distance exactly $\lfloor\frac{t}{2}\rfloor+1$ from $v$ ($P$ is
a sub path of tree $T$). This means that $x$ is at $T$ distance exactly $\lfloor\frac{t}{2}\rfloor$ from $u$.
Therefore, $x\in R$ (note that $R$ contains all the vertices at $T$ distance less than or equal to
$\lfloor\frac{t}{2}\rfloor$ from $u$).
Also, all the vertices of $P$ from $x$ to $p$ are at $T$ distance strictly greater than $\lfloor\frac{t}{2}\rfloor$ from $v$; so,
there is a path in $G\setminus S$ from $p$ to $x$. But $p\in W$; so, $x\in W$ as well. Then, $x\in X_W$, because
$X_W=V(R\cap W)$. So,
\begin{equation} \label{episx}
p\in X_W, \text{when}\ p=x
\end{equation}
When $p\not=x$, $p$ is at $T$ distance strictly greater than $\lfloor\frac{t}{2}\rfloor$ from $u$, since
$x$ is at $T$ distance exactly $\lfloor\frac{t}{2}\rfloor$ from $u$;
so, $p\not\in R$. Therefore, $p$ is in a component, say component $H_1$, of
$G\setminus R$ (see figure~\ref{fsubtrees}). Assume, towards a contradiction, that $H_1\not\subseteq W$.
Here, $H_1$ is a component of $G\setminus R$, $W$ is a component of $G\setminus S$,
and $H_1\cap W\not=\emptyset$. So, since $H_1\not\subseteq W$, there must be an edge $e$ of $G$ from
a vertex in $H_1\cap W$ to a vertex in $S\setminus R$. As it can be seen in figure~\ref{fsubtrees}, the $T$ distance
between the endpoints of $e$ must be bigger than $t$, which contradicts $T$ being a tree $t$-spanner of $G$; formally,
lemma~\ref{lcut_subtree} is employed.
Here, $S\setminus R$ is a subgraph of the component of $T\setminus\{uv\}$ that contains $v$ (easily seen by the
definitions of $S$ and $R$); call that component $T_v$.
So, $S\setminus R\subseteq T_v\setminus R$. Also, $V(W)\subseteq V(T_u)$ and
$H_1\subseteq G\setminus R$; so, $V(H_1\cap W)\subseteq V(T_u\setminus R)$. But $T_v\cap T_u=\emptyset$; so,
$H_1\cap W\subseteq (G\setminus T_v)\setminus R$. Therefore, $e$ is an edge of $G$ from a vertex in $T_v\setminus R$
to a vertex in $(G\setminus T_v)\setminus R$; this is a contradiction to lemma\footnote{Vertex $u$ in the proof
corresponds to vertex $x$ in the lemma, $R$ to $X$, $v$ to $y$, and $T_v$ to $T_y$.}~\ref{lcut_subtree}.
So,
\begin{equation} \label{eHsubW}
H_1\subseteq W
\end{equation}
Let $y$ be the neighbor of $u$ in
$P$ towards $p$ (note that when $t\leq 3$, $x=y$); then, $y\in Y_W$ (since $x\in P$ and $x\in X_W$)
and $p\in T_y$. Assume, towards a contradiction, that there is a
vertex of $H_1$ out of $T_y$. Then, there must be an edge of $H_1$ (and of $G$ as well) from a vertex in $T_y\setminus R$
to a vertex in $(G\setminus T_y)\setminus R$, because $H_1$ is a component of $G\setminus R$. This is a contradiction to
lemma\footnote{Vertex $u$ in the proof
corresponds to vertex $x$ in the lemma and $R$ to $X$.}~\ref{lcut_subtree}. So,
\begin{equation} \label{eHsubTy}
V(H_1)\subseteq V(T_y)
\end{equation}
Here, $H_1$ is a component of $G\setminus R$, such that $V(H_1)\subseteq V(T_y)$ (statement~(\ref{eHsubTy})) and
$H_1\subseteq W$ (statement~(\ref{eHsubW})). So, $H_1\in {\cal W}'_y$. Therefore, since
$y\in Y_W$, $H_1\in {\cal W}'_y$, and $p\in H_1$, it holds that:
\begin{equation} \label{episnotx}
p\in V(\bigcup{\cal H}), \text{when}\ p\not=x
\end{equation}
Since $p$ is just any vertex in $W$, from statements~(\ref{eonedirection}),~(\ref{episx}), and~(\ref{episnotx}),
the following holds.
\begin{fact}
$V(\bigcup{\cal H})\cup X_W=V(W)$.
\label{fbiguinion}
\end{fact}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{fsubtrees.pdf}
\caption{The shapes with fat lines correspond to $S$ (left) and to $W$ (right). The shapes with dashed lines
correspond to $R$ (left) and to $H$ (right; also, $H$ may stand for $H_1$ as well, depending on the context).
All the $T$ neighbors of $u$ are shown, namely, $v$, $y_1$, $y$, and $y_2$.
The dark gray area is set $X_W$. The subtrees of $T$ that correspond to $T_y$ and $T_{y_2}$ are shown gray (including
dark gray). Here, $y_1$ is not in $Y_W$. The gray area out of $R$ corresponds to $\bigcup_{y\in Y_W}(\bigcup{\cal W}_y)$,
while the gray area out of $R$ but within $W$ corresponds to $\bigcup {\cal H}$.
Here, $P$ is a $T$ path from $p$ to $v$; note that $P$ contains only one vertex in the dark gray area, namely $x$. Finally,
the hatched area is $S\setminus R$.}
\label{fsubtrees}
\end{center}
\end{figure}
Sphere $R$ is going to serve as the $R_W$ required in the conclusion of the lemma. Note that, by removing the
gray area as shown in figure~\ref{fprocedurestep}, sphere $R$ is suitable for every component
in ${\cal W}$, not just $W$. Trivially, $R$ is a tree $t$-spanner of $G[R]$, because it is a subtree
of $T$. Also, $u$ is a $\lfloor\frac{t}{2}\rfloor$-center of $R$, because $u$ is defined to be the center
of sphere $R$ of radius $\lfloor\frac{t}{2}\rfloor$. Therefore (line~\ref{l1-5} of table~\ref{ta}), $R\in {\cal S}_u$ and
the following holds (line~\ref{l1-8} of table~\ref{ta}).
\begin{fact}
Algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$) stores $T^k_{u, R}$ in ${\cal A}_G^k$.
\label{fRinA}
\end{fact}
Next, it is proved that $T^{k+1}_{v,S,u,R,W}$ is a tree $t$-spanner of $G[W\cup S]$.
For this, {\bf two cases} are examined.
{\bf On one hand}, consider the case that
${\cal H}=\emptyset$. Then, the induction hypothesis cannot be used.
In this case, by fact~\ref{fbiguinion}, $V(W)=X_W$. But, $X_W\subseteq V(R)$, by definition.
So, $T^k_{u,R}[W\cup R]=T^k_{u,R}[R]$. But $R$ is equal to $T^1_{u, R}$, because, for $k=1$,
algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$) makes the call
{\tt Find\_Subtree}($G$, $t$, $u$, $R$, 1, $\emptyset$) and procedure {\tt Find\_Subtree} returns $R$.
Since partial solutions are never reduced during the algorithm, $R\subseteq T^k_{u,R}$. But
$T^k_{u, R}$ is a tree, because of lemma~\ref{lmerikilysi}. So, $T^k_{u,R}[W\cup R]=T^k_{u,R}[R]=R$, because
the subgraph of a tree induced by the vertices of a subtree is the subtree.
Therefore, $T^{k+1}_{v,S,u,R,W}$ is equal to $(R\cup S)\setminus ((R\setminus S)\setminus W)$.
By lemma~\ref{lgrayarea}, which ``clears" the gray area in figure~\ref{fprocedurestep},
it suffices to prove that $R\cup S$ is a tree $t$-spanner of $G[W\cup S\cup R]$.
Here, $R\cup S$ is a subtree of $T$, so it is a tree $t$-spanner of $G[S\cup R]$. But $V(W)\subseteq V(R)$
in this case; so, $G[S\cup R]=G[W\cup S\cup R]$. Therefore, the following holds.
\begin{fact}
$T^{k+1}_{v,S,u,R,W}$ is a tree $t$-spanner of $G[W\cup S]$, when
${\cal H}=\emptyset$.
\label{fspannerwhenempty}
\end{fact}
{\bf On the other hand}, consider the case that ${\cal H}\not=\emptyset$. To prepare for
the formulation of the induction hypothesis, observe that $R$ is closer than $S$ to $W$. Formally,
all the vertices in $T_u$ are at $T$ distance less than or equal to $k$ from $v$, because
$v$ is a $k$-center of $T_u\cup S$. So, for each $y\in Y_W$, all vertices in $T_y$ are at $T$
distance less than or equal to $k-1$ from $u$. Also, all the vertices in $R$ are at $T$ distance less than
or equal to $\lfloor\frac{t}{2}\rfloor$ from
$u$. But in the induction step $k\geq \lfloor\frac{t}{2}\rfloor+1$; so, for each $y\in Y_W$, all the vertices in
$T_y\cup R$ are at $T$ distance less than or equal to $k-1$ from $u$ (note that $y\in R$, so $T_y\cup R$ is connected).
Therefore\footnote{Note that if $X$ is a connected subgraph of a tree $T$, then the $X$ distance between a pair of
vertices of $X$ is equal to the $T$ distance between this pair of vertices.},
\begin{equation}\label{ecenter}
\text{For each $y\in Y_W$, vertex $u$ is a $(k-1)$-center of $T_y\cup R$}
\end{equation}
Vector ($k-1$, $u$, $R$, ${\cal W}_y$, $y$)
satisfies the five conditions of the lemma for every $y\in Y_W$. To see this, first,
$1\leq\lfloor\frac{t}{2}\rfloor\leq k-1 \leq |V(G)|-2$ and
$u\in G$. Second, $R$ has been defined appropriately. Third, $y$ is a $T$ neighbor of
$u$ and $T_y$ has been defined appropriately. Fourth, ${\cal W}_y$ has been defined appropriately. Finally, fifth,
$u$ is a $(k-1)$-center of $T_y\cup R$ (statement~(\ref{ecenter})). Therefore,
since for every $y\in Y_W$ the first coordinate of vector ($k-1$, $u$, $R$, ${\cal W}_y$, $y$) is strictly
less than $k$, the induction hypothesis states that the conclusion of the lemma holds.
Therefore, for every component in $\bigcup_{y\in Y_W}{\cal W}_y$ the two statements in the
conclusion of the lemma hold. But ${\cal H}\subseteq\bigcup_{y\in Y_W}{\cal W}_y$.
Let $H$ be a component in ${\cal H}$; then, $H\in {\cal W}'_{y_H}$ for some $y_H\in Y_W$.
Therefore, by the induction hypothesis, for $H$, there is $R_H\subseteq G$ such that (see figure~\ref{falgochoice}):
\begin{itemize}
\item $T^{k-1}_{y_H, R_H}$ is contained in ${\cal A}_G^{k-1}$ of algorithm {\tt Find\_Tree\_spanner($G$, $t$)} and
\item $T^k_{u, R, y_H,R_H,H}$ is a tree $t$-spanner of $G[H\cup R]$, where
$T^k_{u, R, y_H,R_H,H}$ is the graph
$(T^{k-1}_{y_H, R_H}[H\cup R_H]\cup R)\setminus((R_H\setminus R)\setminus H)$.
\end{itemize}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm]{falgochoice.pdf}
\caption{Part of the situation in the proof of lemma~\ref{lypodendro}, as it is seen from two different angles.
The shapes with one circular end and one rectangular end correspond to auxiliary graphs.
The small rectangle is $S$ and the big rectangle is component $W$ of $G\setminus S$. The dashed circle is $R$; the dashed
rectangles are components of $G\setminus R$, which are the elements of ${\cal H}$ and are named as $H_1$, $H_2$, and $H_3$.
Here, $V(W)=V((W\cap R)\cup H_1\cup H_2\cup H_3)$ (see fact~\ref{fbiguinion}). The induction hypothesis dictates
that there are three auxiliary graphs, which are shown on the left hand side,
that can help partial solution $T^k_{u,R}$ to cover
$H_1$, $H_2$, and $H_3$. On the right hand side, three auxiliary graphs are also shown but are these that procedure
{\tt Find\_Subtree} (see table~\ref{tp}) actually picked to increment partial solution $T^k_{u,R}$
towards $H_1$, $H_2$, and $H_3$. Note that the
pairwise intersection of the auxiliary graphs on each side is $R$; for example, $T_{H_1}\cap T_{H_2}=R$.
As the proof continues, $K=T_{H_1}\cup T_{H_2}\cup T_{H_3}$ becomes the
key element in proving the lemma, when ${\cal H}\not=\emptyset$.}
\label{falgochoice}
\end{center}
\end{figure}
Therefore, when procedure {\tt Find\_Subtree} is called (line~\ref{l1-7} of table~\ref{ta}) with input
($G$, $t$, $u$, $R$, $k$, ${\cal A}_G^{k-1}$) there is at least one
entry in ${\cal A}_G^{k-1}$ to satisfy the condition (line~\ref{l2-9} of table~\ref{tp}) for incrementing $T^k_{u, R}$
towards $H$; note that $y_H\in Y_W\subseteq N_T(u)=N_R(u)$.
So (see figure~\ref{falgochoice}), for some $l_H\leq k$, $T^{l_H}_{u, R}$ is incremented towards $H$ and assume that
the procedure picks $T_H=T^{l_H}_{u, R, p_H, R(p_H,H),H}$ to do so, where $p_H$ is some $R$ neighbor
of $u$ and $T^{l_H-1}_{p_H,R(p_H,H)}$ belongs to ${\cal A}^{l_H-1}_G$; i.e. $T_H$ is a tree
$t$-spanner of $G[H\cup R]$ and the command $T^{l_H}_{u, R}=T^{l_H}_{u, R}\cup T_H$ is executed
(lines~\ref{l2-9} and~\ref{l2-10} of table~\ref{tp}).
Since $T_H\subseteq T^{l_H}_{u, R}$ and $ T^{l_H}_{u, R}\subseteq T^k_{u, R}$,
auxiliary graph $T_H$ is a subgraph of $T^k_{u, R}$. All these
auxiliary graphs that correspond to each $H\in{\cal H}$ (which have been used by procedure
{\tt Find\_Subtree} to construct a part of $T^k_{u, R}$) are put
together in one graph $K$. The following holds.
\begin{equation}
\text{Let}\ K=\bigcup_{H\in {\cal H}}T_H.\ \text{Then,}\ K\subseteq T^k_{u, R}
\label{einductionhypothesis}
\end{equation}
For each $H\in {\cal H}$ auxiliary graph $T_H$ is a connected graph,
because it is a tree $t$-spanner of $G[H\cup R]$. Also, all these auxiliary graphs
share the vertices of $R$.
So, $K$ is a connected subgraph of $T^k_{u,R}$ (statement~(\ref{einductionhypothesis})).
But $T^k_{u, R}$ is a tree $t$-spanner of $G[T^k_{u, R}]$,
because of lemma~\ref{lmerikilysi}. So, $K$ is a tree $t$-spanner of $G[K]$.
Also, $K$ coincides with $T^k_{u,R}[K]$, because it is a subtree of tree $T^k_{u,R}$.
Here, $V(K)=V((\bigcup{\cal H})\cup R)$,
again because for each $H\in {\cal H}$ auxiliary graph $T_H$ is a tree
$t$-spanner of $G[H\cup R]$. Therefore, by fact~\ref{fbiguinion}, $V(K)=V(W\cup R)$, because
$X_W\subseteq V(R)$. So, $K$ is a tree $t$-spanner of $G[W\cup R]$ and coincides with
$T^k_{u,R}[W\cup R]$. The following holds.
\begin{equation}
\text{$K=T^k_{u,R}[W\cup R]$ is a tree $t$-spanner of $G[W\cup R]$}
\label{eWcupR}
\end{equation}
Set $T_2=R\cup S$. Here, $T_2$ is a connected subgraph of $T$; so, $T_2$ is a tree $t$-spanner of $G[T_2]$.
Lemma~\ref{lunion} will be used to prove that $K\cup T_2$ is a tree $t$-spanner of $G[K\cup T_2]$, so the
additional requirements for $K$ and $T_2$ are shown\footnote{Here, $K$ corresponds to $T_1$ of the lemma.}.
Assume, towards a contradiction, that there is an
edge $e'$ of $G$ between a vertex in $K\setminus T_2$ and a vertex in $T_2\setminus K$ (a similar approach was
taken in the proof of statement~(\ref{eHsubW}), where tree $T_v$ was defined to be the component of
$T\setminus\{uv\}$ that contains $v$). Here, $V(K\setminus T_2)$ is a subset of $V(T_u\setminus R)$,
because $V(W)\subseteq V(T_u)$ (here $V(K)=V(W\cup R)$ and $R\subseteq T_2$). So, since
$T_v\cap T_u=\emptyset$, it holds that $V(K\setminus T_2)\subseteq V((G\setminus T_v)\setminus R)$.
Also, $T_2\setminus K$ is equal to $S\setminus R$, because $W$ being a component of
$G\setminus S$ avoids $S$. Clearly, $V(S\setminus R)$ is a subset of $V(T_v\setminus R)$. Therefore, the
existence of edge $e'$ is a contradiction to lemma\footnote{Vertex $u$ in the proof corresponds to vertex $x$
in the lemma, $R$ to $X$, $v$ to $y$, and $T_v$ to $T_y$.}~\ref{lcut_subtree}. Here, $K\cap T_2$ is
equal to $R$, which is a nonempty tree. Therefore, by lemma~\ref{lunion}, $K\cup T_2$ is a tree $t$-spanner
of $G[K\cup T_2]$. But $K\cup T_2$ is equal to $K\cup S$, because $R\subseteq K$. Therefore (see statement~(\ref{eWcupR})),
the following holds.
\begin{equation}
\text{$K\cup S=T^k_{u,R}[W\cup R]\cup S$ is a tree $t$-spanner of $G[W\cup R\cup S]$}
\label{eWcupRcupS}
\end{equation}
Here, $T^{k+1}_{v,S,u,R,W}$ is equal to $(T^k_{u,R}[W\cup R]\cup S)\setminus ((R\setminus S)\setminus W)$;
so, it remains to remove the gray area in figure~\ref{fprocedurestep}.
Then, by statement~(\ref{eWcupRcupS}) and lemma~\ref{lgrayarea}, the following holds.
\begin{fact}
$T^{k+1}_{v,S,u,R,W}$ is a tree $t$-spanner of $G[W\cup S]$, when
${\cal H}\not=\emptyset$.
\label{fspannerwhennotempty}
\end{fact}
By facts~\ref{fRinA},~\ref{fspannerwhenempty}, and~\ref{fspannerwhennotempty} the lemma holds.\hspace*{\fill}$\Box$\par\addvspace{\topsep}
To explain the time complexity of the algorithm when graphs of bounded degree are inputs, the following lemma
bounds the sizes of various sets used in {\tt For} loops by functions of the maximum degree of the input graph.
\begin{lemma}
Let $G$ be a connected graph of maximum degree $\Delta$ and $t>1$ an integer. Then,
\begin{enumerate}
\item $|{\cal S}_v|\leq 2^{\Delta^{2+\lfloor\frac{t}{2}\rfloor}}$, for every vertex $v$ of $G$,
\item $|{\cal Q}_{v,S}|\leq \Delta^{2+\lfloor\frac{t}{2}\rfloor}+\Delta$, for every vertex $v$ of $G$
and for every $S$ in ${\cal S}_v$,
\item $|{\cal A}_G^{k-1}|\leq |V(G)| \max_{x\in V(G)}|{\cal S}_x|$, for every $k$ ($2\leq k \leq |V(G)|$),
\item the number of $T^{k-1}_{u,R} \in {\cal A}_G^{k-1}$ such that $u\in N_S(v)$ is at most
$\Delta \max_{x\in V(G)} |{\cal S}_x|$, for every vertex $v$ of $G$,
for every $S$ in ${\cal S}_v$, and for every $k$ ($2\leq k \leq |V(G)|$),
\end{enumerate}
where, ${\cal S}_v$, ${\cal Q}_{v,S}$, ${\cal A}_G^{k-1}$, and $T^{k-1}_{u,R}$ are constructed in algorithm
{\tt Find\_Tree\_spanner} of tables~\ref{ta} and~\ref{tp}
on input ($G$, $t$).
\label{lsetsize}
\end{lemma}
{\em Proof}.
First, let $G_v$ be the sphere of $G$ with center $v$ and radius $\lfloor\frac{t}{2}\rfloor$. Let $S$ be a member
of ${\cal S}_v$. Since $v$ is a $\lfloor\frac{t}{2}\rfloor$-center of $S$ (line~\ref{l1-5} of table~\ref{ta}),
$S$ must be a subgraph of $G_v$.
Let $L^i$ denote the number of vertices at $G$ distance exactly $i$ from $v$. Then $L^0=1$. Also, $L^i\leq \Delta L^{i-1}$,
for $i\geq 1$, because $G$ has maximum degree $\Delta$ and each vertex at $G$ distance
$i$ from $v$ must be adjacent to a vertex at $G$ distance $i-1$ from $v$. So, $G_v$ has at most
$\sum_{i=0}^{i=\lfloor\frac{t}{2}\rfloor}L^i$ vertices. So, simply, $G_v$ has at most
$\Delta^{1+\lfloor\frac{t}{2}\rfloor}+1$ vertices\footnote{This includes the $\Delta\leq 1$ cases,
because, then, $G_v$ is the one vertex graph ($\Delta=0$) or has at most two vertices ($\Delta=1$).}.
Therefore, since $G$ has maximum degree $\Delta$, $G_v$ has at most $\Delta |V(G_v)|/2$
edges. So, simply, $G_v$ has at most
$\Delta^{2+\lfloor\frac{t}{2}\rfloor}$ edges. At most $|V(G_v)|-1$ edges can participate in $S$. So, roughly, considering the
power set of $E(G_v)$, the number of elements in ${\cal S}_v$ can be at most $2^{\Delta^{2+\lfloor\frac{t}{2}\rfloor}}$.
Second, any subtree $S$ in ${\cal S}_v$ can have at most $|V(G_v)|$ vertices. Then, as shown earlier, $S$ can
have at most $\Delta^{1+\lfloor\frac{t}{2}\rfloor}+1$ vertices.
The number of edges of $G$ with one endpoint in $S$ and the other
out of $S$ can be at most $\Delta|V(S)|$, since $G$ has maximum degree $\Delta$. Then, since $G$ is
connected,\footnote{The algorithm works for disconnected graphs as well without increasing its time complexity.
Here is the only place the connectedness of the input graph is used. This facilitates the calculations for the running
time of the algorithm.}
the number of components of $G\setminus S$ is at most $\Delta^{2+\lfloor\frac{t}{2}\rfloor}+\Delta$.
Third, for each vertex $x$ of $G$, $|{\cal S}_x|$ partial solutions are in ${\cal A}_G^{k-1}$, where $2\leq k \leq |V(G)|$.
So, $|{\cal A}_G^{k-1}|\leq |V(G)| \max_{x\in V(G)}|{\cal S}_x|$.
Fourth, let $v$ be a vertex of $G$ and $S$ a member of ${\cal S}_v$. Then, since $G$ has maximum degree $\Delta$,
$v$ has at most $\Delta$ neighbors in $S$. But each vertex $u$ of $G$ is a central vertex of $|{\cal S}_u|$ partial
solutions. Therefore, the number of $T^{k-1}_{u,R} \in {\cal A}_G^{k-1}$ such that $u\in N_S(v)$ is at most
$\Delta \max_{x\in V(G)}|{\cal S}_x|$, where $2\leq k \leq |V(G)|$.
\hspace*{\fill}$\Box$\par\addvspace{\topsep}
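To make the counting concrete, the four bounds of the lemma can be tabulated for given $\Delta$ and $t$. The following Python sketch (ours, not part of the algorithm's pseudocode; all function names are illustrative) evaluates them and checks the layer-count argument for $\Delta\geq 2$.

```python
# Illustrative sketch: the size bounds of the lemma as functions of the
# maximum degree Delta and the stretch t.

def sphere_vertex_bound(delta: int, t: int) -> int:
    # |V(G_v)| <= Delta^(1 + floor(t/2)) + 1; also valid for Delta <= 1
    return delta ** (1 + t // 2) + 1

def sphere_edge_bound(delta: int, t: int) -> int:
    # |E(G_v)| <= Delta * |V(G_v)| / 2 <= Delta^(2 + floor(t/2))
    return delta ** (2 + t // 2)

def subtree_family_bound(delta: int, t: int) -> int:
    # |S_v| <= 2^|E(G_v)|, via the power set of E(G_v)
    return 2 ** sphere_edge_bound(delta, t)

def component_bound(delta: int, t: int) -> int:
    # |Q_{v,S}| <= Delta * |V(S)| <= Delta^(2 + floor(t/2)) + Delta
    return delta ** (2 + t // 2) + delta

# The layer count L^i <= Delta^i gives |V(G_v)| <= sum_i Delta^i, which the
# closed-form bound dominates for Delta >= 2 (the Delta <= 1 cases are
# handled separately in the proof's footnote):
for d in range(2, 7):
    for t in range(2, 9):
        assert sum(d ** i for i in range(t // 2 + 1)) <= sphere_vertex_bound(d, t)
```

For example, with $\Delta=3$ and $t=3$ the sphere has at most $10$ vertices and $27$ edges, so $|{\cal S}_v|\leq 2^{27}$ and $|{\cal Q}_{v,S}|\leq 30$.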
\begin{thm}
Let $b$, $t$ be positive integers. There is an efficient algorithm that decides, for every graph $G$ with $\Delta(G)\leq b$,
whether $G$ admits a tree $t$-spanner.
\label{tgeniko}
\end{thm}
{\em Proof}.
If $t=1$, then a graph admits a tree 1-spanner if and only if it is a tree.
The empty graph admits a tree $t$-spanner and a disconnected graph cannot admit a tree $t$-spanner.
So, it remains to check nonempty connected graphs for $t>1$. For this, the algorithm described in this article
is employed and it is proved that a nonempty connected graph $G$ admits a tree $t$-spanner if and only if
algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$) returns a graph, where $t>1$.
For the sufficiency proof, assume that algorithm {\tt Find\_Tree\_spanner} on input
($G$, $t$) returns $T^k_{v,S}$ (line~\ref{l1-9} of table~\ref{ta}).
Then, $V(G)=V(T^k_{v,S})$. But $T^k_{v,S}$ is a tree $t$-spanner
of $G[T^k_{v,S}]$, because of lemma~\ref{lmerikilysi}. Therefore, $G$ admits a tree $t$-spanner.
For the necessity proof, assume that $G$ admits a tree $t$-spanner $T$. Let $v$ be a vertex of $G$ and
$S$ be the $(v,\lfloor\frac{t}{2}\rfloor)_T$-sphere.
Then, algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$) calls
procedure {\tt Find\_Subtree} with parameters ($G$, $t$, $v$, $S$, $1$, ${\cal A}_G^0$)
(line~\ref{l1-7} of table~\ref{ta}); note that
this happens even when $G$ is the one vertex graph. Then, procedure {\tt Find\_Subtree} returns $S$
(line~\ref{l2-3} of table~\ref{tp}) which becomes $T^1_{v,S}$.
On one hand, consider the case that ${\cal Q}_{v,S}$ is empty (line~\ref{l2-2} of table~\ref{tp}).
Then, $G$ contains no vertices out of $S$; so, the algorithm returns $T^1_{v,S}$.
On the other hand, consider the case that ${\cal Q}_{v,S}$ is not empty. Assume, towards a contradiction,
that algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$) does not return a graph. Then,
for $k=|V(G)|$ the algorithm {\tt Find\_Tree\_spanner} on input ($G$, $t$) does not return
$T^k_{v,S}$ (line~\ref{l1-9} of table~\ref{ta}).
This means that $T^k_{v,S}$ does not contain the vertex set of some component
$W$ in ${\cal Q}_{v,S}$. This happens because (lines~\ref{l2-6} to~\ref{l2-10} of table~\ref{tp})
there is no vertex $x\in N_S(v)$ and $R_W\subseteq G$,
such that $T^{k-1}_{x, R_W}$ is contained in set ${\cal A}_G^{k-1}$ and
$T^k_{v,S,x,R_W,W}$ is a tree $t$-spanner of $G[W\cup S]$, where
$T^k_{v,S,x,R_W,W}=(T^{k-1}_{x,R_W}[W\cup R_W]\cup S)\setminus((R_W\setminus S)\setminus W)$.
But this is a contradiction to lemma~\ref{lypodendro}; to see this, its five conditions are examined.
First, $1\leq k-1\leq |V(G)|-1$, because
$|V(G)|>1$ in this case; also, $v\in V(G)$. Second, $S$ is the $(v,\lfloor\frac{t}{2}\rfloor)_T$-sphere.
Third, let $u$ be a $T$ neighbor of $v$ that is in a $T$ path from $W$ to $v$;
also, let $T_u$ be the component of $T\setminus\{uv\}$ that contains $u$. Fourth,
let ${\cal W}$ be the set $\{ X\subseteq G: X$ is a component of $G\setminus S$ and $V(X)\subseteq V(T_u)\}$.
Finally, fifth, $v$ is a $(k-1)$-center of $T_u\cup S$, even when $G$ is a path and $v$ an end vertex ($k=|V(G)|$
and $T_u\cup S$ is connected, because $u\in S$).
Therefore, vector ($k-1$, $v$, $S$, ${\cal W}$, $u$) satisfies the conditions of lemma~\ref{lypodendro}.
It suffices to prove that $W\in {\cal W}$. By definition of vertex $u$, at least one vertex of $W$
belongs to $T_u$. Assume, towards a contradiction, that there is a vertex of $W$ which
is not in $T_u$. Then, since $W$ is a component of $G\setminus S$, there must be an edge of $G$ from a vertex in
$T_u\setminus S$ to a vertex in $(G\setminus T_u)\setminus S$. This is a contradiction to lemma\footnote{Vertex
$v$ in the proof corresponds to vertex $x$ in the lemma, $S$ to $X$, $u$ to $y$, and $T_u$ to $T_y$.}~\ref{lcut_subtree}.
Hence, $W\in {\cal W}$. Therefore, by lemma~\ref{lypodendro} there is such a vertex $x$,
namely $u$, and such an $R_W$; a contradiction.
It remains to prove that the algorithm runs in polynomial time, when bounded degree graphs are considered.
Let $n$ denote the number of vertices of $G$. Then, checking whether the input to algorithm {\tt Find\_Tree\_spanner} is
a connected nonempty graph $G$ with $\Delta(G)\leq b$ and $t>1$ takes $O(n)$ time.
For every vertex $v\in G$, $|{\cal S}_v|$ is $O(1)$,
because $\Delta(G)$ is bounded by constant $b$ and $t$ is a constant (see lemma~\ref{lsetsize}). So,
procedure {\tt Find\_Subtree} in table~\ref{tp} is called $O(n^2)$ times.
Consider procedure {\tt Find\_Subtree} on input ($G$, $t$, $v$, $S$, $k$, ${\cal A}_G^{k-1}$).
The construction of ${\cal Q}_{v,S}$ takes $O(n)$ time ($G$ has a linear number of edges) and it is done only when $k=1$.
The commands in lines~\ref{l2-5} to~\ref{l2-11} of procedure {\tt Find\_Subtree} in table~\ref{tp} are executed
when $k>1$ and are examined one by one in this paragraph. The number of partial solutions formed in the previous stage
$|{\cal A}_G^{k-1}|$ is $O(n)$ (lemma~\ref{lsetsize}), because $|{\cal S}_x|$ is $O(1)$ for each vertex $x\in G$.
Finding $T^{k-1}_{v,S}$ in ${\cal A}_G^{k-1}$ and doing the assignment $T^k_{v,S}=T^{k-1}_{v,S}$ (line~\ref{l2-5})
takes $O(n)$ time, because $|{\cal A}_G^{k-1}|$ is $O(n)$. Also, by lemma~\ref{lsetsize}, $|{\cal Q}_{v,S}|$ is $O(1)$,
because $\Delta(G)$ is bounded by constant $b$ and $t$ is a constant (line~\ref{l2-6}). Next, it takes $O(n)$ time to
check sequentially all elements of ${\cal A}_G^{k-1}$, because $|{\cal A}_G^{k-1}|$ is $O(n)$. Though, (line~\ref{l2-7})
the number of $T^{k-1}_{u,R} \in {\cal A}_G^{k-1}$ such that $u\in N_S(v)$ is $O(1)$ (lemma~\ref{lsetsize}),
because $\Delta(G)$ is bounded by constant $b$ and $|{\cal S}_x|$ is $O(1)$ for each vertex $x\in G$.
Construction of $T^{k-1}_{u,R}[Q\cup R]$ in line~\ref{l2-8} takes $O(n)$ time. To check whether $T^k_{v,S,u,R,Q}$ is a tree
$t$-spanner of $G[Q\cup S]$ in line~\ref{l2-9} takes $O(n)$ time, because $\Delta(G)$ is bounded by constant $b$ and,
therefore, $G[Q\cup S]$ has a linear number of edges. Constructing the union of $T^k_{v,S}$ and
$T^k_{v,S,u,R,Q}$ in line~\ref{l2-10} takes $O(n)$ time. Finally, removing $Q$ from ${\cal Q}_{v,S}$ in line~\ref{l2-11}
can be done in linear time. Therefore, each call of procedure {\tt Find\_Subtree} takes $O(n)$ time.
Returning back to algorithm {\tt Find\_Tree\_spanner} (lines~\ref{l1-7} to~\ref{l1-9} of table~\ref{ta}),
inserting the output of procedure {\tt Find\_Subtree} in
${\cal A}_G^{k}$ takes $O(n)$ time. Next, checking whether $V(G)=V(T^k_{v,S})$ and outputting $T^k_{v,S}$
take $O(n)$ time. These three commands are executed $O(n^2)$ times, i.e., as many times as
procedure {\tt Find\_Subtree} is called. Therefore, algorithm {\tt Find\_Tree\_spanner} takes $O(n^3)$
time and it is efficient.\hspace*{\fill}$\Box$\par\addvspace{\topsep}
\section{The $t=3$ case}
\label{stequals3}
Let $G$ be a graph. When $t=3$, for every $v$ in $G$, each $S$ in ${\cal S}_v$ must have $v$ as a
$1$-center, because $\lfloor\frac{t}{2}\rfloor=1$ (line~\ref{l1-5}
of table~\ref{ta}). This means that $S$ is a tree with central vertex
$v$ and all its remaining vertices are leaves. If the maximum degree of $G$ is $\Delta$, then $S$ can
have up to $\Delta$ leaves. So, ${\cal S}_v$ can have up to $2^{\Delta}$ members. So, if graphs
with maximum degree at most $b\log n$ are considered as input to the algorithm (where $b$ is some
constant), then, for each vertex $v$ of the input graph, the size of ${\cal S}_v$ is at most
$2^{b\log n}=n^b$. So, $|{\cal S}_v|$ is polynomially bounded by the number of vertices of the
input graph $n$. Also, all other sets considered in lemma~\ref{lsetsize} are polynomially bounded
by $n$. So, for $t=3$ and for every $b>0$, the algorithm runs in polynomial time and it is efficient, when graphs $G$
with degrees less than $b\log |V(G)|$ are examined.
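To illustrate the star characterization for $t=3$, the candidate subtrees in ${\cal S}_v$ can be enumerated as subsets of $v$'s neighbourhood. This is a sketch under that characterization; the adjacency-map representation and the function name are ours, not from the algorithm's pseudocode.

```python
from itertools import combinations

def enumerate_stars(adj, v):
    """Yield the vertex sets of all candidate stars centred at v (including
    the one-vertex star {v}), one per subset of v's neighbourhood."""
    nbrs = sorted(adj[v])
    for r in range(len(nbrs) + 1):
        for leaves in combinations(nbrs, r):
            yield frozenset((v,) + leaves)

# A 4-cycle: every vertex has degree 2, so there are 2^2 = 4 candidates,
# matching the 2^Delta bound on |S_v|.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
stars = list(enumerate_stars(adj, 0))
assert len(stars) == 2 ** len(adj[0])
```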
As mentioned in the introduction, the problem had been solved for $t=2$ on general graphs.
Now, for $t>3$, a tree in ${\cal S}_v$ may contain vertices at distance 2 from $v$. This makes the
size of ${\cal S}_v$ super-polynomial in $n$ in the worst case, when graphs with
maximum degree at most $b\log n$ are considered. Therefore, the algorithm is not efficient in
this case.
There is some possibility, though, that the $t=4$ case is similar to the $t=3$ case.
The diameter of initial partial solutions (members of ${\cal S}_v$, for each vertex $v$ of the input graph $G$; line~\ref{l1-5}
of table~\ref{ta}) is at most 2, when $t=3$, while it is at most 4, when $4\leq t\leq 5$.
One can well consider initial partial solutions of diameter at most 3, when $t=4$.
Then\footnote{A diameter at most 3 subtree of a graph $G$ consists of at most one central edge $e$ and at most
$2\Delta(G)$ edges sharing a vertex with $e$. Therefore, the number of such subtrees with a given central edge is at most
$2^{2\Delta(G)}$.}, the tree 4-spanner admissibility of graphs with degrees less than $b\log n$ (where $b$ is a
constant) may be decided efficiently too.
As mentioned in the proof of correctness (see figure~\ref{fepanalipsi}), the $t=3$ case exhibits
some structural differences as well, compared to the $t>3$ cases. These differences may justify
further investigation, in an attempt to resolve the complexity status of the tree 3-spanner
problem.
\section{Notes}
Let us hint at the diversity of the tree spanners that the algorithm can produce, with a possible application.
Assume that a tree $t$-spanner of a graph $G$ is needed\footnote{For example, a tree spanner of the
underlying graph of a communication network is needed for broadcasting but some connections (edges of the graph) are
very reliable and they must be in the spanner.} but it must contain certain edges of $G$.
Assume that these necessary edges form a tree $A$ of diameter at most $2\lfloor\frac{t}{2}\rfloor$.
Hence, there is a vertex $a$ of $G$ that is a $\lfloor\frac{t}{2}\rfloor$-center of $A$. Because of its small diameter,
$A$ is a tree $t$-spanner of $G[A]$. So, $A$ will be in ${\cal S}_a$, when algorithm {\tt Find\_Tree\_spanner}
is run on input $(G,t)$. To output only a suitable spanner, line~\ref{l1-9} of table~\ref{ta} must be changed to
\begin{equation*}
\text{{\tt{\bf If} ($V(G)=V(T^k_{v,S})$ and $S=A$) {\bf Return}($T^k_{v,S}$){\bf \}\}}}}
\end{equation*}
Then, the algorithm outputs a graph if and only if there is a tree $t$-spanner of $G$
that contains $A$.
The only reason that the algorithm is not efficient for general graphs is the huge size of sets ${\cal S}_v$ even for
a few vertices $v$ of the input graph $G$. A promising research direction is to consider a family of
input graphs for which one can prune these sets down to manageable sizes; i.e. when the algorithm
constructs set ${\cal S}_v$ for each vertex $v$ of input graph $G$ (line~\ref{l1-5} of table~\ref{ta}),
an efficient procedure may rule out many subtrees of $G$ that are not needed to build a final solution, because of
some properties of $G$.
\bibliographystyle{plain}
% https://arxiv.org/abs/2106.13947
\title{Optimal prediction of Markov chains with and without spectral gap}
\begin{abstract}
We study the following learning problem with dependent data: Observing a trajectory of length $n$ from a stationary Markov chain with $k$ states, the goal is to predict the next state. For $3 \leq k \leq O(\sqrt{n})$, using techniques from universal compression, the optimal prediction risk in Kullback-Leibler divergence is shown to be $\Theta(\frac{k^2}{n}\log \frac{n}{k^2})$, in contrast to the optimal rate of $\Theta(\frac{\log \log n}{n})$ for $k=2$ previously shown in Falahatgar et al. (2016). These rates, slower than the parametric rate of $O(\frac{k^2}{n})$, can be attributed to the memory in the data, as the spectral gap of the Markov chain can be arbitrarily small. To quantify the memory effect, we study irreducible reversible chains with a prescribed spectral gap. In addition to characterizing the optimal prediction risk for two states, we show that, as long as the spectral gap is not excessively small, the prediction risk in the Markov model is $O(\frac{k^2}{n})$, which coincides with that of an iid model with the same number of parameters. Extensions to higher-order Markov chains are also obtained.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Learning distributions from samples is a central question in statistics and machine learning.
While significant progress has been achieved in property testing and estimation based on independent and identically distributed (iid) data, for many applications, most notably natural language processing, two new challenges arise:
(a) Modeling data as independent observations fails to capture their temporal dependency;
(b) Distributions are commonly supported on a large domain whose cardinality is comparable to or even exceeds the sample size.
Continuing the progress made in \cite{FOPS2016,HOP18}, in this paper we study the following prediction problem with dependent data modeled as Markov chains.
Suppose $X_1,X_2,\dots$ is a stationary first-order Markov chain on state space $[k]\eqdef\sth{1,\dots,k}$ with unknown statistics.
Observing a trajectory $X^n\triangleq(X_1,\ldots,X_n)$, the goal is to predict the next state $X_{n+1}$ by estimating its distribution conditioned on the present data.
We use the Kullback-Leibler (KL) divergence as the loss function:
For distributions $P=\qth{p_1,\dots,p_k},Q=\qth{q_1,\dots,q_k}$, $D(P\|Q)=\sum_{i=1}^k p_i\log \frac{p_i}{q_i}$ if $p_i=0$ whenever $q_i=0$ and $D(P\|Q)=\infty$ otherwise.
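This convention can be transcribed directly (a sketch; the function name is ours):

```python
from math import log, inf

def kl_divergence(P, Q):
    """D(P || Q) = sum_i p_i * log(p_i / q_i), finite only when p_i = 0
    whenever q_i = 0, and infinity otherwise (natural logarithm)."""
    total = 0.0
    for p, q in zip(P, Q):
        if q == 0.0:
            if p != 0.0:
                return inf        # P is not absolutely continuous w.r.t. Q
        elif p > 0.0:
            total += p * log(p / q)
    return total

assert kl_divergence([0.5, 0.5], [0.5, 0.5]) == 0.0
assert kl_divergence([0.5, 0.5], [1.0, 0.0]) == inf
```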
The minimax prediction risk is given by
\begin{align}
\Risk_{k,n}
&\eqdef \inf_{\hat M} \sup_{\pi,M} \Expect[D(M(\cdot|X_n) \| \hat M(\cdot|X_n))]
=\inf_{\hat M} \sup_{\pi,M} \sum_{i=1}^k \Expect[D(M(\cdot|i) \| \hat M(\cdot|i)) \indc{X_n=i}]
\label{eq:riskkn}
\end{align}
where the supremum is taken over all stationary distributions $\pi$ and transition matrices $M$ (row-stochastic) such that $\pi M=\pi$,
the infimum is taken over all estimators $\hat M=\hat M(X_1,\dots,X_n)$ that are proper Markov kernels (i.e. rows sum to 1), and $M(\cdot|i)$ denotes the $i$th row of $M$.
Our main objective is to characterize this minimax risk within universal constant factors as a function of $n$ and $k$.
The prediction problem \prettyref{eq:riskkn} is distinct from parameter estimation problems such as estimating the transition matrix \cite{bartlett1951frequency,anderson1957statistical,B61,wolfer2019minimax} or its properties \cite{csiszar2000consistency,kamath2016estimation,HJLWWY18,hsu2019mixing} in that the quantity to be estimated (the conditional distribution of the next state) depends on the sample path itself. This is precisely what renders the prediction problem closely relevant to natural applications such as autocomplete and text generation. In addition, this formulation allows more flexibility with far fewer assumptions compared to the estimation framework.
For example, if a certain state has very small probability under the stationary distribution, consistent estimation of the transition matrix with respect to the usual loss functions, e.g., squared risk, may not be possible, whereas the prediction problem is unencumbered by such rare states.
In the special case of iid data,
the prediction problem reduces to estimating the distribution in KL divergence. In this setting the optimal risk is well understood, which is known to be ${k-1\over 2n}(1+o(1))$
when $k$ is fixed and $n\diverge$ \cite{BFSS02} and $\Theta(\frac{k}{n})$ for $k=O(n)$ \cite{paninski2004variational,KOPS15}.\footnote{Here and below $\asymp, \lesssim, \gtrsim$ or $\Theta(\cdot),O(\cdot),\Omega(\cdot)$ denote equality and inequalities up to universal multiplicative constants.}
Typical of parametric models, this rate $\frac{k}{n}$ is commonly referred to as the ``parametric rate''; it leads to a sample complexity that scales proportionally to the number of parameters and inversely proportionally to the desired accuracy.
In the setting of Markov chains, however, the prediction problem is much less understood especially for large state space. Recently the seminal work \cite{FOPS2016} showed the surprising result that for stationary Markov chains on two states, the optimal prediction risk satisfies
\begin{align}
\Risk_{2,n} = \Theta\pth{\log\log n\over n},
\label{eq:risk-mc1}
\end{align}
which has a nonparametric rate even when the problem has only two parameters.
The follow-up work \cite{HOP18} studied general $k$-state chains and showed a lower bound of $\Omega(\frac{k\log\log n}{n})$
for a uniform (not necessarily stationary) initial distribution; however, the upper bound $O(\frac{k^2\log\log n}{n})$ in \cite{HOP18} relies on implicit assumptions on the mixing time such as spectral gap conditions: the proofs of the upper bounds for prediction (Lemma 7 in the supplement) and for estimation (Lemma 17 of the supplement) are based on Bernstein-type concentration results for the empirical transition counts, which depend on the spectral gap.
The following theorem resolves the optimal risk for $k$-state Markov chains:
\begin{theorem}[Optimal rates without spectral gap]
\label{thm:optimal}
There exists a universal constant $C>0$ such that for all $3 \leq k \leq \sqrt{n}/C$,
\begin{equation}
\frac{k^2}{C n}\log\left(\frac{n}{k^2}\right) \leq \Risk_{k,n} \leq \frac{C k^2}{n}\log\left(\frac{n}{k^2}\right).
\label{eq:optimal}
\end{equation}
Furthermore, the lower bound continues to hold even if the Markov chain is restricted to be irreducible and reversible.
\end{theorem}
\begin{remark}
\label{rmk:ach}
The optimal prediction risk of $O(\frac{k^2}{n} \log \frac{n}{k^2})$ can be achieved by an average version of the \emph{add-one estimator} (i.e.~Laplace's rule of succession).
Given a trajectory $x^n=(x_1,\ldots,x_n)$ of length $n$, denote the transition counts (with the convention $N_i\equiv N_{ij}\equiv 0$ if $n=0,1$)
\begin{align}\label{eq:transition.count}
N_{i}=\sum_{\ell=1}^{n-1}\indc{x_\ell=i},\quad N_{ij}=\sum_{\ell=1}^{n-1}\indc{x_\ell=i,x_{\ell+1}=j}.
\end{align}
The add-one estimator for the transition probability $M(j|i)$ is given by
\begin{equation}
\hat M^{+1}_{x^n}(j|i) \triangleq {N_{ij}+1\over N_i+k},
\label{eq:addone}
\end{equation}
which is an additively smoothed version of the empirical frequency.
Finally, the optimal rate in
\prettyref{eq:optimal} can be achieved by the following estimator $\hat M$ defined as an average of add-one estimators over different sample sizes:
\begin{equation}
\hat M_{x^n}(x_{n+1}|x_n) \triangleq \frac{1}{n} \sum_{t=1}^{n} \hat M^{+1}_{x_{n-t+1}^n}(x_{n+1}|x_n).
\label{eq:markov-add-1}
\end{equation}
In other words, we apply the add-one estimator to the most recent $t$ observations $(X_{n-t+1},\ldots,X_n)$ to predict the next $X_{n+1}$, then average over $t=1,\ldots,n$.
Such Ces\`aro-mean-type estimators have been introduced before in the density estimation literature (see, e.g., \cite{YB99}).
It remains open whether the usual add-one estimator (namely, the last term in \prettyref{eq:markov-add-1} which uses all the data) or any add-$c$ estimator for constant $c$ achieves the optimal rate.
In contrast, for two-state chains the optimal risk \prettyref{eq:risk-mc1} is attained by a hybrid strategy \cite{FOPS2016}: apply the add-$c$ estimator with $c = \frac{1}{\log n}$ to trajectories with at most one transition and with $c=1$ otherwise. Also note that the estimator in \eqref{eq:markov-add-1} can be computed in $O(nk)$ time. To see this, first note that for any given $j\in [k]$, calculating $\hat M^{+1}_{x^{n-1}_{1}}(j|x_{n-1})$ takes $O(n)$ time, and given any $\hat M^{+1}_{x^{n-1}_{n-t+1}}(j|x_{n-1})$, we need only $O(1)$ time to calculate $\hat M^{+1}_{x^{n-1}_{n-t+2}}(j|x_{n-1})$. Summing over all $j$ yields the claimed bound on the algorithmic complexity.
\end{remark}
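The following Python sketch (ours; states are relabelled $0,\dots,k-1$) implements the averaged add-one estimator \eqref{eq:markov-add-1} with incremental count updates, so the total cost is $O(nk)$.

```python
def averaged_add_one(x, k):
    """Predictive distribution for the next state given trajectory x over
    states {0, ..., k-1}: the Cesaro average of add-one estimators applied
    to the suffixes of x, as in eq. (markov-add-1)."""
    n = len(x)
    i = x[-1]          # conditioning state x_n
    Ni = 0             # visits to i within the current suffix
    row = [0] * k      # transition counts N_{i j} within the current suffix
    pred = [0.0] * k
    # Suffixes grow one position at a time on the left: the counts are
    # updated in O(1) per step and accumulated in O(k), i.e. O(nk) overall.
    for t in range(1, n + 1):
        if t > 1:
            s = n - t                  # new leftmost position (0-indexed)
            if x[s] == i:
                Ni += 1
                row[x[s + 1]] += 1
        for j in range(k):
            pred[j] += (row[j] + 1) / (Ni + k)
    return [p / n for p in pred]
```

On the alternating trajectory $0,1,0,1,0$ with $k=2$, the estimator places most of its mass on state $1$, while the short, uninformative suffixes keep some mass on state $0$; the output always sums to one because each add-one row is a probability vector.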
%
\prettyref{thm:optimal} shows that the departure from the parametric rate of $\frac{k^2}{n}$, first discovered in \cite{FOPS2016,HOP18} for binary chains, is even more pronounced for larger state space.
As will become clear in the proof, there is some fundamental difference between two-state and three-state chains, resulting in $\Risk_{3,n} = \Theta(\frac{\log n}{n}) \gg \Risk_{2,n} = \Theta(\frac{\log \log n}{n})$.
It is instructive to compare the sample complexity for prediction in the iid and Markov model.
Denote by $d$ the number of parameters, which is $k-1$ for the iid case and $k(k-1)$ for Markov chains.
Define the sample complexity $n^*(d,\epsilon)$ as the smallest sample size $n$ in order to achieve a prescribed prediction risk $\epsilon$.
For $\epsilon = O(1)$, we have
\begin{equation}
n^*(d,\epsilon) \asymp
\begin{cases}
\frac{d}{\epsilon} & \text{iid}\\
\frac{d}{\epsilon} \log \log \frac{1}{\epsilon} & \text{Markov with $2$ states}\\
\frac{d}{\epsilon} \log \frac{1}{\epsilon} & \text{Markov with $k\geq 3$ states}.
\end{cases}
\label{eq:samplecomplexity}
\end{equation}
At a high level, the nonparametric rates in the Markov model can be attributed to the memory in the data.
On the one hand, \prettyref{thm:optimal} as well as \prettyref{eq:risk-mc1} affirm that one can obtain meaningful prediction without imposing any mixing conditions;\footnote{To see this, it is helpful to consider the extreme case where the chain does not move at all or is periodic, in which case predicting the next state is in fact easy.}
such decoupling between learning and mixing has also been observed in other problems such as learning linear dynamics \cite{simchowitz2018learning,dean2019sample}.
On the other hand, the dependency in the data does lead to a strictly higher sample complexity than that of the iid case; in fact, the lower bound in \prettyref{thm:optimal} is proved by constructing chains with spectral gap as small as $O(\frac{1}{n})$ (see \prettyref{sec:optimal}).
Thus, it is conceivable that with sufficiently favorable mixing conditions, the prediction risk improves over that of the worst case and, at some point, reaches the parametric rate.
To make this precise, we focus on Markov chains with a prescribed spectral gap.
It is well-known that for an irreducible and reversible chain, the transition matrix $M$ has $k$ real eigenvalues satisfying $1=\lambda_1\geq \lambda_2\geq\dots\geq\lambda_k\geq -1$.
The \emph{absolute spectral gap} of $M$, defined as
\begin{equation}
\gamma_*\triangleq 1-\max\sth{\abs{\lambda_i}:i\neq 1},
\label{eq:gamma-def}
\end{equation}
quantifies the memory of the Markov chain. For example, the mixing time is determined by the relaxation time $1/\gamma_*$ up to logarithmic factors.
As extreme cases, the chain that does not move ($M$ is the identity) and the iid chain ($M$ is rank-one) have spectral gap equal to 0 and 1, respectively.
We refer the reader to \cite{LevinPeres17} for more background. Note that the definition of absolute spectral gap requires irreducibility and reversibility, thus we restrict ourselves to this class of Markov chains (it is possible to use more general notions such as pseudo spectral gap to quantify the memory of the process, which is beyond the scope of the current paper).
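For a two-state chain with transition probabilities $p$ and $q$ off the diagonal, the eigenvalues of $M$ are $1$ and $1-p-q$, so $\gamma_*$ has a closed form. A small sketch (the function name is ours):

```python
def absolute_spectral_gap_2state(p: float, q: float) -> float:
    """gamma_* for the two-state chain M = [[1-p, p], [q, 1-q]]:
    the eigenvalues of M are 1 and 1 - p - q, so
    gamma_* = 1 - |1 - p - q|."""
    assert 0.0 <= p <= 1.0 and 0.0 <= q <= 1.0
    return 1.0 - abs(1.0 - p - q)

# Extreme cases mentioned in the text:
assert absolute_spectral_gap_2state(0.0, 0.0) == 0.0   # chain never moves
assert absolute_spectral_gap_2state(0.5, 0.5) == 1.0   # iid (rank-one M)
assert absolute_spectral_gap_2state(1.0, 1.0) == 0.0   # periodic flip
```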
Given $\gamma_0\in (0,1)$, define $\calM_k(\gamma_0)$ as the set of transition matrices corresponding to irreducible and reversible chains whose absolute spectral gap exceeds $\gamma_0$.
Restricting \prettyref{eq:riskkn} to this subcollection and noting that the stationary distribution is now uniquely determined by $M$, we define the corresponding minimax risk:
\begin{align}
\Risk_{k,n}(\gamma_0)&\triangleq\inf_{\hat \mymat}\sup_{\mymat\in \calM_k(\gamma_0)}\EE\qth{D(\mymat(\cdot|X_n)\|\hat \mymat(\cdot|X_n))}
\label{eq:risk-gamma}
\end{align}
Extending the result \prettyref{eq:risk-mc1} of \cite{FOPS2016}, the following theorem characterizes the optimal prediction risk for two-state chains with prescribed spectral gaps (the case $\gamma_0=0$ corresponds to the minimax rate in \cite{FOPS2016} over all binary Markov chains):
\begin{theorem}[Spectral gap dependent rates for binary chain]
\label{thm:gamma2}
For any $\gamma_0\in (0,1)$,
\[
\Risk_{2,n}(\gamma_0)
\asymp \frac{1}{n} \max\sth{1,\log\log\pth{\min\sth{n,\frac 1{\gamma_0}}}}.
\]
\end{theorem}
\prettyref{thm:gamma2} shows that for binary chains, the parametric rate $O(\frac{1}{n})$ is achievable if and only if the spectral gap is nonvanishing. While the same is true for any bounded state space
(see \prettyref{cor:parametricrate} below), for large state spaces it turns out that much weaker conditions on the absolute spectral gap suffice to guarantee the parametric rate $O({k^2\over n})$, achieved by the add-one estimator applied to the entire trajectory. In other words, as long as the spectral gap is not excessively small,
the prediction risk in the Markov model behaves in the same way as that of an iid model with the same number of parameters. A similar conclusion was established previously for the sample complexity of estimating the entropy rate of Markov chains in \cite[Theorem 1]{HJLWWY18}.
\begin{theorem}
The add-one estimator in \eqref{eq:addone} achieves the following risk bounds.
\begin{enumerate}[label=(\roman*)]
\item For any $k\geq 2$,
$
\Risk_{k,n}(\gamma_0)\lesssim {k^2\over n}
$
provided that $\gamma_0\gtrsim (\frac{\log k}{k})^{1/4}$.
\item In addition, for $k\gtrsim (\log n)^6$, $\Risk_{k,n}(\gamma_0)\lesssim {k^2\over n}$ provided that $\gamma_0\gtrsim{(\log(n+k))^2\over k}$.
\end{enumerate}
\label{thm:gammak}
\end{theorem}
\begin{corollary}
\label{cor:parametricrate}
For any fixed $k\geq 2$, $\Risk_{k,n}(\gamma_0)=O(\frac{1}{n})$ if and only if $\gamma_0=\Omega(1)$.
\end{corollary}
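For concreteness, a sketch of the add-one estimator follows. The defining equation \prettyref{eq:addone} is not reproduced in this section, so the sketch assumes the standard Laplace-smoothed form $\widehat{M}^{+1}(j|i) = (N_{ij}+1)/(N_i+k)$, which is consistent with the probability assignment \prettyref{eq:addone-joint} used in \prettyref{sec:main-ub}:

```python
import numpy as np

def add_one_estimator(trajectory, k):
    """Add-one (Laplace) smoothed transition matrix from a trajectory on
    {0, ..., k-1}: Mhat(j|i) = (N_ij + 1) / (N_i + k), where N_ij counts
    the i -> j transitions and N_i = sum_j N_ij."""
    N = np.zeros((k, k))
    for i, j in zip(trajectory, trajectory[1:]):
        N[i, j] += 1
    return (N + 1) / (N.sum(axis=1, keepdims=True) + k)

Mhat = add_one_estimator([0, 1, 1, 0, 2], k=3)
# Every row is a full-support distribution, so the KL risk is always finite;
# state 0 has N_01 = N_02 = 1 and N_0 = 2, giving the row [1/5, 2/5, 2/5].
print(Mhat[0])
```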
Finally, we address the optimal prediction risk for higher-order Markov chains:
\begin{theorem}\label{thm:m-order-rate}
For any constant $m\geq 2$ there is a constant $C_m$ depending only on $m$ such that, for any $2\leq k\leq n^{\frac{1}{m+1}}/C_m$, the minimax prediction rate for $m^{\text{th}}$-order Markov chains with stationary initialization is
$
\Theta_m\pth{{k^{m+1}\over n}\log{n\over k^{m+1}}}.
$
\end{theorem}
Notably, for binary chains, the optimal rate $\Theta\pth{\log\log n\over n}$ for first-order Markov chains determined by \cite{FOPS2016} turns out to be very special: we show that for second-order chains the optimal rate is $\Theta\pth{\log n\over n}$.
\subsection{Proof techniques}
\label{sec:technique}
The proof of \prettyref{thm:optimal} deviates from existing approaches based on concentration inequalities for Markov chains.
For instance, the standard program for analyzing the add-one estimator \prettyref{eq:addone} involves
proving concentration of the empirical counts on their population version, namely, $N_i \approx n\pi_i$ and $N_{ij} \approx n\pi_iM(j|i)$, and bounding the risk in the atypical case by concentration inequalities, such as the Chernoff-type bounds in \cite{L98,P15}, which have been widely used
in recent work on statistical inference with Markov chains \cite{kamath2016estimation,HJLWWY18,HOP18,hsu2019mixing,wolfer2019minimax}.
However, these concentration inequalities inevitably depend on the spectral gap of the Markov chain,
leading to results which deteriorate as the spectral gap becomes smaller.
For two-state chains, results free of the spectral gap are obtained in \cite{FOPS2016} using explicit joint distribution of the transition counts;
this refined analysis, however, is difficult to extend to larger state space as the probability mass function of $(N_{ij})$ is given by Whittle's formula \cite{W55} which takes an unwieldy determinantal form.
Eschewing concentration-based arguments, the crux of our proof of \prettyref{thm:optimal}, for both the upper and lower bound, revolves around the following quantity known as \emph{redundancy}:
\begin{equation}
\Red_{k,n} \triangleq \inf_{Q_{X^n}} \sup_{P_{X^n}} D(P_{X^n} \| Q_{X^n}) = \inf_{Q_{X^n}} \sup_{P_{X^n}} \sum_{x^n} P_{X^n}(x^n) \log \frac{P_{X^n}(x^n) }{Q_{X^n}(x^n) }.
\label{eq:red-intro}
\end{equation}
Here the supremum is taken over all joint distributions of stationary Markov chains $X^n$ on $k$ states, and the infimum is over all joint distributions $Q_{X^n}$.
A central quantity that measures the minimax regret in universal compression, the redundancy \prettyref{eq:red-intro} corresponds to the minimax \emph{cumulative} risk (namely, the total prediction risk when the sample size ranges from $1$ to $n$),
while \prettyref{eq:riskkn} is the individual minimax risk at sample size $n$ -- see \prettyref{sec:riskred} for a detailed discussion.
We prove the following reduction between prediction risk and redundancy:
\begin{equation}
\frac{1}{n} \Red_{k-1,n}^{\mathsf{sym}} - \frac{\log k}{n} \lesssim \Risk_{k,n} \leq \frac{1}{n-1} \Red_{k,n}
\label{eq:riskred-approx}
\end{equation}
where $\Red^{\mathsf{sym}}$ denotes the redundancy for \emph{symmetric} Markov chains. The upper bound is standard: thanks to the convexity of the loss function and stationarity of the Markov chain, the risk of the Ces\`{a}ro-mean estimator \prettyref{eq:markov-add-1} can be upper bounded using the cumulative risk and, in turn, the redundancy. The proof of the lower bound is more involved. Given a $(k-1)$-state chain, we embed it into a larger state space by introducing a new state, such that with constant probability, the chain starts from and gets stuck at this state for a period of time that is approximately uniformly distributed on $[n]$, and then enters the original chain. Effectively, this scenario is equivalent to a prediction problem on $k-1$ states with a \emph{random} (approximately uniform) sample size, whose prediction risk can then be related to the cumulative risk and redundancy.
This intuition can be made precise by considering a Bayesian setting, in which the $(k-1)$-state chain is randomized according to the least favorable prior for \prettyref{eq:red-intro}, and representing the Bayes risk as conditional mutual information and applying the chain rule.
Given the above reduction in \prettyref{eq:riskred-approx}, it suffices to show both redundancies therein are on the order of $\frac{k^2}{n}\log \frac{n}{k^2}$.
The redundancy is upper bounded by \emph{pointwise redundancy}, which replaces the average in \prettyref{eq:red-intro} by the maximum over all trajectories.
Following \cite{davisson1981efficient,csiszar2004information}, we consider an explicit probability assignment defined by add-one smoothing and use combinatorial arguments to bound the pointwise redundancy; this bound is then shown to be optimal by information-theoretic arguments.
The optimal spectral gap-dependent rate in \prettyref{thm:gamma2} relies on the key observation in \cite{FOPS2016} that, for binary chains, the dominating contribution to the prediction risk comes from trajectories with a single transition, for which we may apply an add-$c$ estimator with $c$ depending appropriately on the spectral gap. The lower bound is shown using a Bayesian argument similar to that of \cite[Theorem 1]{HOP18}. The proof of \prettyref{thm:gammak} relies on more delicate concentration arguments as the spectral gap is allowed to be vanishingly small. Notably, for small $k$, direct application of existing Bernstein inequalities for Markov chains in \cite{L98,P15} falls short of establishing
the parametric rate of $O(\frac{k^2}{n})$ (see \prettyref{rmk:4thmoment} in \prettyref{sec:gammak} for details); instead, we use a fourth moment bound which turns out to be well suited for analyzing concentration of empirical counts conditional on the terminal state.
For large $k$, we further improve the spectral gap condition using a simulation argument for Markov chains using independent samples \cite{B61,HJLWWY18}. A key step is a new concentration inequality for $D(P\|\widehat{P}_{n,k}^{+1})$, where $\widehat{P}_{n,k}^{+1}$ is the add-one estimator based on $n$ iid observations of $P$ supported on $[k]$:
\begin{align}\label{eq:KL_concentration}
\PP\left(D(P\|\widehat{P}_{n,k}^{+1}) \ge c\cdot \frac{k}{n} + \frac{\mathsf{polylog}(n)\cdot \sqrt{k}}{n} \right) \le \frac{1}{\mathsf{poly}(n)},
\end{align}
for some absolute constant $c>0$. Note that an application of the classical concentration inequality of McDiarmid would result in the second term being $\mathsf{polylog}(n)/\sqrt{n}$, and \eqref{eq:KL_concentration} crucially improves this to $\mathsf{polylog}(n)\cdot \sqrt{k}/n$.
Such an improvement has been recently observed by \cite{mardia2020concentration,agrawal2020finite,guo2020chernoff} in studying the similar quantity $D(\widehat{P}_{n}\|P)$ for the (unsmoothed) empirical distribution $\widehat{P}_{n}$; however, these results, based on either the method of types or an explicit upper bound of the moment generating function, are not directly applicable to \prettyref{eq:KL_concentration} in which the true distribution $P$ appears as the first argument in the KL divergence.
The nonasymptotic analysis of the prediction rate for higher-order chains on large alphabets is based on a redundancy-based reduction similar to the first-order case. However, obtaining optimal nonasymptotic redundancy bounds for higher-order chains is more challenging.
Notably, in lower bounding redundancy, we need to bound the mutual information from below by upper bounding the squared error of certain estimators.
As noted in \cite{tatwawadi2018minimax}, existing analysis in \cite[Sec~III]{D83} based on simple mixing conditions from \cite{parzen1962stochastic} leads to suboptimal results on large alphabets.
To bypass this issue, we show that the pseudo spectral gap \cite{P15} of the transition matrix of the first-order chain of $m$-tuples $\{(X_{t+1},\dots,X_{t+m})\}_{t=0}^{n-m}$ is at least a constant. This is accomplished by a careful construction of a prior on $m^{\rm th}$-order transition matrices with $\Theta\pth{k^{m+1}}$ degrees of freedom.
\subsection{Related work}
\label{sec:related}
While the exact prediction problem studied in this paper has come into focus only recently with \cite{FOPS2016,HOP18}, there exists a large body of literature on related problems. As mentioned earlier, some of our proof strategies draw inspiration and results from the study of redundancy in universal compression, its connection to mutual information,
as well as the perspective of sequential probability assignment as prediction, dating back to \cite{D73,davisson1981efficient,rissanen1984universal,shtarkov1987univresal,R88}. Asymptotic characterizations of the minimax redundancy for Markov sources, both average and pointwise, were obtained in \cite{D83,A99,jacquet2002combinatorial} in the regime of fixed alphabet size $k$ and large sample size $n$. A non-asymptotic characterization was obtained in \cite{D83} for $n\gg k^2\log k$ and recently extended to $n\asymp k^2$ in \cite{tatwawadi2018minimax}, which further showed that the behavior of the redundancy remains unchanged even if the Markov chain is very close to being iid in terms of the spectral gap, i.e., $\gamma_*=1-o(1)$.
The current paper adds to a growing body of literature devoted to statistical learning with dependent data, in particular those dealing with Markov chains.
Estimation of the transition matrix \cite{bartlett1951frequency,anderson1957statistical,B61,sinkhorn1964relationship} and testing the order of Markov chains \cite{csiszar2000consistency} have been well studied in the large-sample regime.
More recently, attention has shifted towards large state spaces and nonasymptotic analysis. For example,
\cite{wolfer2019minimax} studied the estimation of transition matrix in $\ell_\infty\to\ell_\infty$ induced norm for Markov chains with prescribed pseudo spectral gap and minimum probability mass of the stationary distribution, and determined sample complexity bounds up to logarithmic factors. Similar results have been obtained for estimating properties of Markov chains, including
mixing time and spectral gap \cite{hsu2019mixing}, entropy rate \cite{kamath2016estimation,HJLWWY18,obremski2020complexity}, graph statistics based on random walk \cite{ben2018estimating}, as well as identity testing \cite{daskalakis2018testing,cherapanamjeri2019testing,wolfer2020minimax,fried2021identity}.
Most of these results rely on assumptions on the Markov chains such as lower bounds on the spectral gap and the stationary distribution, which afford concentration for sample statistics of Markov chains. In contrast, one of the main contributions in this paper, in particular \prettyref{thm:optimal}, is that optimal prediction can be achieved without these assumptions, thereby providing a novel way of tackling these seemingly unavoidable issues. This is ultimately accomplished by information-theoretic and combinatorial techniques from universal compression.
\subsection{Notations and preliminaries}
For $n\in\naturals$, let $[n]\triangleq \{1,\ldots,n\}$. Denote $x^n=(x_1,\ldots,x_n)$ and $x_t^n=(x_t,\ldots,x_n)$.
The distribution of a random variable $X$ is denoted by $P_X$. In a Bayesian setting, the distribution of a parameter $\theta$ is referred to as a prior, denoted by $P_\theta$.
We recall the following definitions from information theory \cite{ckbook,cover}.
The conditional KL divergence is defined as an average of KL divergences between conditional distributions:
\begin{equation}
D(P_{A|B}\|Q_{A|B}|P_B) \triangleq \Expect_{B\sim P_B} [D(P_{A|B}\|Q_{A|B})] = \int P_B(db) D(P_{A|B=b}\|Q_{A|B=b}).
\label{eq:KLcond}
\end{equation}
The mutual information between random variables $A$ and $B$ with joint distribution $P_{AB}$ is
$I(A;B) \triangleq
D(P_{B|A}\|P_B|P_A)$;
similarly, the conditional mutual information is defined as
\[
I(A;B|C)
\triangleq D(P_{B|A,C}\|P_{B|C}|P_{A,C}).
\]
The following variational representation of (conditional) mutual information is well-known
\begin{equation}
I(A;B) = \min_{Q_B} D(P_{B|A}\|Q_B|P_A), \quad I(A;B|C) = \min_{Q_{B|C}} D(P_{B|A,C}\|Q_{B|C}|P_{AC}).
\label{eq:MI-var}
\end{equation}
The entropy of a discrete random variable $X$ is $H(X) \triangleq \sum_x P_X(x) \log\frac{1}{P_X(x)}$.
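The identities above can be checked numerically on a toy example. The sketch below (illustrative only, with hypothetical helper names) computes $I(A;B)$ for a $2\times 2$ joint distribution and verifies the variational representation \prettyref{eq:MI-var}: the conditional divergence $D(P_{B|A}\|Q_B|P_A)$ is minimized at $Q_B=P_B$.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions (natural log)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# A small joint distribution P_{AB} on {0,1} x {0,1}.
P = np.array([[0.3, 0.2],
              [0.1, 0.4]])
P_A, P_B = P.sum(axis=1), P.sum(axis=0)
P_BgA = P / P_A[:, None]                     # rows are P_{B|A=a}

def cond_kl_to(Q_B):
    """Conditional divergence D(P_{B|A} || Q_B | P_A) as in eq:KLcond."""
    return sum(P_A[a] * kl(P_BgA[a], Q_B) for a in range(2))

mutual_info = cond_kl_to(P_B)                # choosing Q_B = P_B gives I(A;B)
# Variational representation eq:MI-var: no other Q_B does better.
for t in np.linspace(0.01, 0.99, 99):
    assert cond_kl_to([t, 1 - t]) >= mutual_info - 1e-12
print(mutual_info)
```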
\subsection{Organization}
\label{sec:org}
The rest of the paper is organized as follows.
In \prettyref{sec:riskred} we describe the general paradigm of minimax redundancy and prediction risk and their dual representation in terms of mutual information.
We give a general redundancy-based bound on the prediction risk, which, combined with redundancy bounds for Markov chains, leads to the upper bound in \prettyref{thm:optimal}.
\prettyref{sec:optimal} presents the lower bound construction, starting from three states and then extending to $k$ states.
Spectral-gap dependent risk bounds in Theorems \ref{thm:gamma2} and \ref{thm:gammak} are given in \prettyref{sec:pf-gamma}. \prettyref{sec:order-m} presents the results and proofs for $m^{\text{th}}$-order Markov chains.
\prettyref{sec:discussion} discusses the assumptions and implications of our results and related open problems.
\section{Two general paradigms}
\label{sec:riskred}
\subsection{Redundancy, prediction risk, and mutual information representation}
\label{sec:riskred-bound}
For $n\in\naturals$, let $\calP=\{P_{X^{n+1}|\theta}: \theta\in\Theta\}$ be a collection of joint distributions parameterized by $\theta$.
\paragraph{``Compression''.}
Consider a sample $X^n\triangleq(X_1,\ldots,X_n)$ of size $n$ drawn from $P_{X^n|\theta}$ for some unknown $\theta\in\Theta$. The \emph{redundancy} of a probability assignment (joint distribution) $Q_{X^n}$ is defined as the worst-case KL risk of fitting the joint distribution of $X^n$, namely
\begin{equation}
\Red(Q_{X^n}) \triangleq \sup_{\theta\in\Theta} D(P_{X^n|\theta} \| Q_{X^n}).
\label{eq:red-Q}
\end{equation}
Optimizing over $Q_{X^n}$, the minimax redundancy is defined as
\begin{equation}
\Red_n \triangleq \inf_{Q_{X^n}} \Red(Q_{X^n}),
\label{eq:red}
\end{equation}
where the infimum is over all joint distributions $Q_{X^n}$.
This quantity can be operationalized as the redundancy (i.e.~regret) in the setting of universal data compression, that is, the excess number of bits compared to the optimal compressor of $X^n$ that knows $\theta$ \cite[Chapter 13]{cover}.
The capacity-redundancy theorem (see \cite{kemperman1974shannon} for a very general result) provides the following mutual information characterization of \prettyref{eq:red}:
\begin{equation}
\Red_n = \sup_{P_\theta} I(\theta;X^n),
\label{eq:capred}
\end{equation}
where the supremum is over all distributions (priors) $P_\theta$ on $\Theta$.
In view of the variational representation \prettyref{eq:MI-var}, this result can be interpreted as a minimax theorem:
\[
\Red_n = \inf_{Q_{X^n}} \sup_{P_\theta} D(P_{X^n|\theta} \| Q_{X^n}|P_\theta) = \sup_{P_\theta} \inf_{Q_{X^n}} D(P_{X^n|\theta} \| Q_{X^n}|P_\theta).
\]
Typically, for fixed model size and $n\diverge$, one expects that $\Red_n = \frac{d}{2} \log n (1+o(1))$, where
$d$ is the number of parameters; see \cite{rissanen1984universal} for a general theory of this type. Indeed, on a fixed alphabet of size $k$,
we have $\Red_n = \frac{k-1}{2}\log n (1+o(1))$ for the iid model \cite{D73} and
$\Red_n = \frac{k^m(k-1)}{2}\log n (1+o(1))$ for $m^{\rm th}$-order Markov models \cite{trofimov1974redundancy},
with more refined asymptotics shown in \cite{xie1997minimax,szpankowski2012minimax}.
For large alphabets, nonasymptotic results have also been obtained.
For example, for the first-order Markov model, $\Red_{n} \asymp k^2 \log\frac{n}{k^2}$ provided that $n \gtrsim k^2$ \cite{tatwawadi2018minimax}.
\paragraph{``Prediction''.}
Consider the problem of predicting the next unseen data point $X_{n+1}$ based on the observations $X_1,\ldots,X_n$, where $(X_1,\ldots,X_{n+1})$ are jointly distributed as $P_{X^{n+1}|\theta}$ for some unknown $\theta\in\Theta$. Here, an estimator is a distribution (for $X_{n+1}$) as a function of $X^n$, which, in turn, can be written as a conditional distribution $Q_{X_{n+1}|X^{n}}$.
As such, its worst-case average risk is
\begin{equation}
\Risk(Q_{X_{n+1}|X^{n}}) \triangleq \sup_{\theta\in\Theta} D(P_{X_{n+1}|X^{n},\theta} \| Q_{X_{n+1}|X^{n}} | P_{X^{n}|\theta}),
\label{eq:risk-Q}
\end{equation}
where the conditional KL divergence is defined in \prettyref{eq:KLcond}.
The minimax prediction risk is then defined as
\begin{equation}
\Risk_n \triangleq \inf_{Q_{X_{n+1}|X^{n}}} \Risk(Q_{X_{n+1}|X^{n}}).
\label{eq:risk}
\end{equation}
While \prettyref{eq:red} does not directly correspond to a statistical estimation problem, \prettyref{eq:risk} is exactly the familiar setting of ``density estimation'', where $Q_{X_{n+1}|X^n}$ is understood as an estimator for the distribution of the unseen $X_{n+1}$ based on the available data $X_1,\ldots,X_{n}$.
In the Bayesian setting where $\theta$ is drawn from a prior $P_\theta$, the Bayes prediction risk coincides with the conditional mutual information
as a consequence of the variational representation \prettyref{eq:MI-var}:
\begin{equation}
\inf_{Q_{X_{n+1}|X^{n}}} \Expect_\theta[D(P_{X_{n+1}|X^{n},\theta} \| Q_{X_{n+1}|X^{n}} | P_{X^{n}|\theta})] = I(\theta;X_{n+1}|X^n).
\label{eq:MC_Bayes-MI}
\end{equation}
Furthermore, the Bayes estimator that achieves this infimum takes the following form:
\begin{equation}
Q_{X_{n+1}|X^{n}}^{\sf Bayes} = P_{X^{n+1}|X^n} = \frac{\int_\Theta P_{X^{n+1}|\theta}\,dP_\theta }{\int_\Theta P_{X^{n}|\theta}\,dP_\theta },
\label{eq:MC_bayes}
\end{equation}
known as the Bayes predictive density \cite{D73,liang2004exact}.
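As a one-dimensional illustration of \prettyref{eq:MC_bayes} (using an iid Bernoulli model rather than a Markov chain, purely for concreteness), the ratio of marginal likelihoods under a uniform prior on the bias reduces to Laplace's rule of succession $\frac{s+1}{n+2}$, a relative of the add-one estimator:

```python
from math import factorial

def beta_int(a, b):
    """Exact value of the integral of theta^a (1-theta)^b over [0,1]
    for nonnegative integers a, b."""
    return factorial(a) * factorial(b) / factorial(a + b + 1)

def bayes_predictive(x):
    """P(X_{n+1} = 1 | x^n) for iid Bernoulli(theta) under a uniform prior,
    computed as the ratio of marginal likelihoods as in eq:MC_bayes."""
    s, n = sum(x), len(x)
    return beta_int(s + 1, n - s) / beta_int(s, n - s)

# Laplace's rule of succession: (s + 1) / (n + 2).
print(bayes_predictive([1, 0, 1, 1]))  # 4/6 = 0.666...
```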
These representations play a crucial role in the lower bound proof of \prettyref{thm:optimal}.
Under appropriate conditions which hold for Markov models (see \prettyref{lmm:caprisk} in \prettyref{app:caprisk}), the minimax prediction risk \prettyref{eq:risk} also admits a dual representation analogous to \prettyref{eq:capred}:
\begin{equation}
\Risk_n = \sup_{\theta\sim\pi} I(\theta;X_{n+1}|X^n),
\label{eq:caprisk}
\end{equation}
which, in view of \prettyref{eq:MC_Bayes-MI}, shows that the principle of ``minimax = worst-case Bayes'' continues to
hold for the prediction problem in Markov models.
The following result
relates the redundancy and the prediction risk.
\begin{lemma}
\label{lmm:riskred}
For any model $\calP$,
\begin{equation}
\Red_n\leq \sum_{t=0}^{n-1} \Risk_t.
\label{eq:riskred1}
\end{equation}
In addition, suppose that each $P_{X^n|\theta}\in\calP$ is stationary and $m^{\rm th}$-order Markov.
Then for all $n\geq m+1$,
\begin{equation}
\Risk_n \leq \Risk_{n-1} \leq \frac{\Red_n}{n-m}.
\label{eq:riskred2}
\end{equation}
Furthermore, for any joint distribution $Q_{X^n}$ factorizing as $Q_{X^n}=\prod_{t=1}^n Q_{X_t|X^{t-1}}$, the prediction risk of the estimator
\begin{equation}
\tilde Q_{X_n|X^{n-1}}(x_n|x^{n-1}) \triangleq \frac{1}{n-m} \sum_{t=m+1}^n Q_{X_t|X^{t-1}}(x_n|x_{n-t+1}^{n-1})
\label{eq:red-ach}
\end{equation}
is bounded by the redundancy of $Q_{X^n}$ as
\begin{equation}
\Risk(\tilde Q_{X_n|X^{n-1}}) \leq \frac{1}{n-m} \Red(Q_{X^n}).
\label{eq:riskred-ach}
\end{equation}
\end{lemma}
\begin{remark}
Note that the upper bound \prettyref{eq:riskred1} on redundancy, known as the ``estimation-compression inequality'' \cite{KOPS15,FOPS2016}, holds without conditions, while the lower bound \prettyref{eq:riskred2} relies on stationarity and Markovity.
For iid data, the estimation-compression inequality is almost an equality; however, this is not the case for Markov chains, as the two sides of \prettyref{eq:riskred1} differ by an unbounded factor: $\Theta(\log\log n)$ for $k=2$ and $\Theta(\log n)$ for fixed $k\geq 3$ -- see \prettyref{eq:risk-mc1} and \prettyref{thm:optimal}.
On the other hand, Markov chains with at least three states offer a rare instance where \prettyref{eq:riskred2} is tight, namely, $\Risk_{n} \asymp \frac{\Red_{n}}{n}$ (cf.~\prettyref{lmm:red-addone}).
\end{remark}
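To make the construction \prettyref{eq:red-ach} concrete, the sketch below instantiates it for first-order chains ($m=1$) with add-one sequential conditionals (consistent with \prettyref{eq:addone-joint} in the next subsection); the function names are illustrative, not from the paper:

```python
import numpy as np

def seq_add_one(prefix, k):
    """Sequential add-one conditional Q_{X_t|X^{t-1}}: a distribution over
    the next state given the observed prefix."""
    N = np.zeros((k, k))
    for i, j in zip(prefix, prefix[1:]):
        N[i, j] += 1
    i = prefix[-1]
    return (N[i] + 1) / (N[i].sum() + k)

def cesaro_predictor(x, k):
    """Estimator eq:red-ach with m = 1: apply the t-th sequential predictor
    to the most recent t - 1 symbols of x^{n-1}, then average over t."""
    n = len(x) + 1
    out = np.zeros(k)
    for t in range(2, n + 1):
        out += seq_add_one(x[n - t:], k)   # suffix of length t - 1
    return out / (n - 1)

q = cesaro_predictor([0, 1, 1, 0], k=2)
print(q)                                   # a probability vector over {0, 1}
```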
\begin{proof}
The upper bound on the redundancy follows from the chain rule of KL divergence:
\begin{equation}
D(P_{X^n|\theta} \| Q_{X^n}) = \sum_{t=1}^n D(P_{X_{t}|X^{t-1},\theta} \| Q_{X_{t}|X^{t-1}} | P_{X^{t-1}}).
\label{eq:KLchain}
\end{equation}
Thus
\[
\sup_{\theta\in\Theta} D(P_{X^n|\theta} \| Q_{X^n})
\leq \sum_{t=1}^n \sup_{\theta\in\Theta} D(P_{X_{t}|X^{t-1},\theta} \| Q_{X_{t}|X^{t-1}} | P_{X^{t-1}}).
\]
Minimizing both sides over $Q_{X^n}$ (or equivalently, $Q_{X_{t}|X^{t-1}}$ for $t=1,\ldots,n$) yields \prettyref{eq:riskred1}.
To upper bound the prediction risk using redundancy, fix any $Q_{X^n}$, which gives rise to $Q_{X_{t}|X^{t-1}}$ for $t=1,\ldots,n$.
For clarity, let us denote the $t^{\rm th}$ estimator by $\hat P_t(\cdot|x^{t-1}) = Q_{X_{t}|X^{t-1}=x^{t-1}}$.
Consider the estimator $\tilde Q_{X_n|X^{n-1}}$ defined in \prettyref{eq:red-ach}, namely,
\begin{equation}
\tilde Q_{X_n|X^{n-1}=x^{n-1}} \triangleq \frac{1}{n-m} \sum_{t=m+1}^n \hat P_t(\cdot|x_{n-t+1},\ldots,x_{n-1}).
\label{eq:tildeQ}
\end{equation}
That is, we apply $\hat P_t$ to the most recent $t-1$ symbols prior to $X_n$ for predicting its distribution, then average over $t$.
We may bound the prediction risk of this estimator by redundancy as follows: Fix $\theta\in\Theta$.
To simplify notation, we suppress the dependency of $\theta$ and write $P_{X^n|\theta} \equiv P_{X^n}$. Then
\begin{align*}
D(P_{X_{n}|X^{n-1}} \| \tilde Q_{X_{n}|X^{n-1}} | P_{X^{n-1}})
\stepa{=} & ~ \expect{D\pth{P_{X_{n}|X^{n-1}_{n-m}} \Big\| \frac{1}{n-m} \sum_{t=m+1}^n \hat P_t(\cdot|X_{n-t+1}^{n-1})}} \\
\stepb{\leq} & ~ \frac{1}{n-m} \sum_{t=m+1}^n \expect{D(P_{X_{n}|X^{n-1}_{n-m}} \| \hat P_t(\cdot|X_{n-t+1}^{n-1}))} \\
\stepc{=} & ~ \frac{1}{n-m} \sum_{t=m+1}^n \Expect\qth{D(P_{X_{t}|X^{t-1}_{t-m}} \| \hat P_t(\cdot|X^{t-1})) } \\
\stepd{=} & ~ \frac{1}{n-m} \sum_{t=m+1}^n D(P_{X_{t}|X^{t-1}} \| Q_{X_{t}|X^{t-1}}|P_{X^{t-1}}) \\
\leq & ~ \frac{1}{n-m} \sum_{t=1}^n D(P_{X_{t}|X^{t-1}} \| Q_{X_{t}|X^{t-1}}|P_{X^{t-1}}) \\
\stepe{=} & ~ \frac{1}{n-m} D(P_{X^n} \| Q_{X^n}),
\end{align*}
where
(a) uses the $m^{\rm th}$-order Markovian assumption;
(b) is due to the convexity of the KL divergence;
(c) uses the crucial fact that for all $t=1,\ldots,n-1$, $(X_{n-t},\ldots,X_{n-1}) \eqlaw (X_1,\ldots,X_{t})$, thanks to stationarity;
(d) follows from substituting $\hat P_t(\cdot|x^{t-1}) = Q_{X_{t}|X^{t-1}=x^{t-1}}$, the Markovian assumption $P_{X_{t}|X^{t-1}_{t-m}}=P_{X_{t}|X^{t-1}}$, and rewriting the expectation as conditional KL divergence;
(e) is by the chain rule \prettyref{eq:KLchain} of KL divergence.
Since the above holds for any $\theta\in \Theta$, the desired \prettyref{eq:riskred-ach} follows, which implies that $\Risk_{n-1} \leq \frac{\Red_n}{n-m}$.
Finally, $\Risk_{n} \leq \Risk_{n-1}$ follows from
$
\Expect[ D(P_{X_{n+1}|X^{n}} \| \hat P_n(X_2^{n}))]
=
\Expect[ D(P_{X_{n}|X^{n-1}} \| \hat P_n(X_1^{n-1}))]
$, valid for any estimator $\hat P_n$ based on $n-1$ observations, since $(X_2,\ldots,X_n)$ and $(X_1,\ldots,X_{n-1})$ are equal in law.
\end{proof}
\begin{remark}
\label{rmk:MI-pf}
Alternatively, \prettyref{lmm:riskred} also follows from the mutual information representation \prettyref{eq:capred} and \prettyref{eq:caprisk}. Indeed, by the chain rule for mutual information,
\begin{equation}
I(\theta;X^n) = \sum_{t=1}^n I(\theta;X_t|X^{t-1}),
\label{eq:MIchain}
\end{equation}
taking the supremum over $\pi$ (the distribution of $\theta$) on both sides and invoking \prettyref{eq:capred} and \prettyref{eq:caprisk} yields \prettyref{eq:riskred1}.
For \prettyref{eq:riskred2}, in view of \prettyref{eq:caprisk} it suffices to show that $I(\theta;X_t|X^{t-1})$ is decreasing in $t$:
for any $\theta\sim \pi$,
\begin{align*}
I(\theta; X_{n+1}|X^n)
= & ~ \Expect \log \frac{P_{X_{n+1}|X^n,\theta}}{P_{X_{n+1}|X^n}} = \Expect \log \frac{P_{X_{n+1}|X^n,\theta}}{P_{X_{n+1}|X^n_2}} + \underbrace{\Expect \log \frac{P_{X_{n+1}|X^n_2}}{P_{X_{n+1}|X^n}}}_{-I(X_1;X_{n+1}|X_2^n)},
\end{align*}
and the first term is
\[
\Expect \log \frac{P_{X_{n+1}|X^n,\theta}}{P_{X_{n+1}|X^n_2}} =
\Expect \log \frac{P_{X_{n+1}|X^n_{n-m+1},\theta}}{P_{X_{n+1}|X^n_2}}
= \Expect \log \frac{P_{X_{n}|X^{n-1}_{n-m},\theta}}{P_{X_{n}|X^{n-1}}}
= I(\theta;X_{n}|X^{n-1})
\]
where the first and second equalities follow from the $m^{\rm th}$-order Markovity and stationarity, respectively.
Taking supremum over $\pi$ yields $\Risk_n \leq \Risk_{n-1}$.
Finally, by the chain rule \prettyref{eq:MIchain}, we have
$I(\theta;X^n) \geq (n-m) I(\theta;X_n|X^{n-1})$, yielding
$\Risk_{n-1} \leq \frac{\Red_n}{n-m}$.
\end{remark}
\subsection{Proof of the upper bound part of \prettyref{thm:optimal}}
\label{sec:main-ub}
Specializing to first-order stationary Markov chains with $k$ states,
we denote the redundancy and prediction risk in \prettyref{eq:red} and \prettyref{eq:risk} by $\Red_{k,n}$ and $\Risk_{k,n}$, the latter of which is precisely the quantity previously defined in \prettyref{eq:riskkn}.
Applying \prettyref{lmm:riskred} yields $\Risk_{k,n} \leq \frac{1}{n-1}\Red_{k,n}$.
To upper bound $\Red_{k,n}$, consider the following probability assignment:
\begin{align}
Q(x_1,\cdots,x_n) = \frac{1}{k}\prod_{t=1}^{n-1} \widehat{M}_{x^t}^{+1}(x_{t+1}|x_t)
\label{eq:addone-joint}
\end{align}
where $\widehat{M}^{+1}$ is the add-one estimator defined in \prettyref{eq:addone}.
This $Q$ factorizes as $Q(x_1)=\frac{1}{k}$ and $Q(x_{t+1}|x^t) = \widehat{M}_{x^t}^{+1}(x_{t+1}|x_t)$.
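Since each add-one conditional has numerator $N_{x_t x_{t+1}}+1$ and denominator $N_{x_t}+k$ evaluated on the growing prefix, the product in \prettyref{eq:addone-joint} telescopes into a closed form in the final transition counts (the expression used in the proof of \prettyref{lmm:red-addone} below). A quick numerical check of this identity, assuming the add-one conditional $(N_{ij}+1)/(N_i+k)$:

```python
import numpy as np
from math import factorial

def Q_sequential(x, k):
    """Q(x^n) from eq:addone-joint: uniform start times add-one conditionals
    evaluated on the growing prefix."""
    q, N = 1.0 / k, np.zeros((k, k))
    for i, j in zip(x, x[1:]):
        q *= (N[i, j] + 1) / (N[i].sum() + k)
        N[i, j] += 1
    return q

def Q_counts(x, k):
    """Closed form: (1/k) prod_i [prod_j N_ij!] / [k (k+1) ... (N_i + k - 1)]."""
    N = np.zeros((k, k), dtype=int)
    for i, j in zip(x, x[1:]):
        N[i, j] += 1
    q = 1.0 / k
    for i in range(k):
        num = np.prod([factorial(N[i, j]) for j in range(k)])
        den = np.prod([k + l for l in range(N[i].sum())])  # empty product = 1
        q *= num / den
    return q

x = [0, 2, 1, 1, 0, 2, 2]
assert np.isclose(Q_sequential(x, 3), Q_counts(x, 3))
```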
The following lemma bounds the redundancy of $Q$:
\begin{lemma}
\label{lmm:red-addone}
$\Red(Q) \leq k(k-1)\left[\log \left(1+\frac{n-1}{k(k-1)}\right)+1\right] + \log k.
$
\end{lemma}
Combined with \prettyref{lmm:riskred},
\prettyref{lmm:red-addone} shows that
$\Risk_{k,n} \leq C \frac{k^2}{n} \log \frac{n}{k^2}$ for all $k \leq \sqrt{n/C}$ and some universal constant $C$,
achieved by the estimator \prettyref{eq:markov-add-1}, which is obtained by applying the rule
\prettyref{eq:red-ach} to \prettyref{eq:addone-joint}.
It remains to show \prettyref{lmm:red-addone}. To do so, we in fact bound
the pointwise redundancy of the add-one probability assignment \prettyref{eq:addone-joint} over all (not necessarily stationary) Markov chains on $k$ states.
The proof is similar to those of \cite[Theorems 6.3 and 6.5]{csiszar2004information}, which, in turn, follow the arguments of \cite[Sec.~III-B]{davisson1981efficient}.
\begin{proof}
We show that for every Markov chain with transition matrix $M$ and initial distribution $\pi$, and every trajectory $(x_1,\cdots,x_n)$, it holds that
\begin{align}\label{eq:pointwise-red}
\log\frac{\pi(x_1)\prod_{t=1}^{n-1} M(x_{t+1}|x_t) }{Q(x_1,\cdots,x_n)} \le k(k-1)\left[\log \left(1+\frac{n-1}{k(k-1)}\right)+1\right] + \log k.
\end{align}
Since the redundancy is an average over trajectories of the left-hand side of \eqref{eq:pointwise-red}, this pointwise bound implies the lemma.
To establish \eqref{eq:pointwise-red}, note that $Q(x_1,\cdots,x_n)$ can be equivalently expressed using the empirical counts $N_i$ and $N_{ij}$ in \prettyref{eq:transition.count} as
\begin{align*}
Q(x_1,\cdots,x_n) = \frac{1}{k}\prod_{i=1}^k \frac{\prod_{j=1}^k N_{ij}!}{k\cdot (k+1)\cdot \cdots\cdot (N_i+k-1)}.
\end{align*}
Note that
\[
\prod_{t=1}^{n-1} M(x_{t+1}|x_t) = \prod_{i=1}^k \prod_{j=1}^k M(j|i)^{N_{ij}} \le \prod_{i=1}^k \prod_{j=1}^k (N_{ij}/N_i)^{N_{ij}},
\]
where the inequality follows from $\sum_j \frac{N_{ij}}{N_i}\log \frac{N_{ij}/N_i }{M(j|i)}\geq 0$ for each $i$, by the nonnegativity of the KL divergence.
Therefore, we have
\begin{align}\label{eq:likelihood-ratio}
\frac{\pi(x_1)\prod_{t=1}^{n-1} M(x_{t+1}|x_t) }{Q(x_1,\cdots,x_n)} \le k\cdot \prod_{i=1}^k \frac{k\cdot (k+1)\cdot \cdots\cdot (N_i+k-1)}{N_i^{N_i}} \prod_{j=1}^k\frac{ N_{ij}^{N_{ij}} }{N_{ij}!}.
\end{align}
We claim that: for $n_1,\cdots,n_k\in \integers_+$ and $n=\sum_{i=1}^k n_i \in\naturals$, it holds that
\begin{align}\label{eq:MC_claim}
\prod_{i=1}^k \left(\frac{n_i}{n}\right)^{n_i} \le \frac{\prod_{i=1}^k n_i!}{n!},
\end{align}
with the understanding that $(\frac{0}{n})^{0} = 0!=1$.
Applying this claim to \eqref{eq:likelihood-ratio} gives
\begin{align*}
\log \frac{\pi(x_1)\prod_{t=1}^{n-1} M(x_{t+1}|x_t) }{Q(x_1,\cdots,x_n)} &\le \log k+\sum_{i=1}^k \log \frac{k\cdot (k+1)\cdot \cdots\cdot (N_i+k-1)}{N_i!} \\
&= \log k + \sum_{i=1}^k \sum_{\ell = 1}^{N_i} \log\left(1+\frac{k-1}{\ell}\right)\\
&\le \log k + \sum_{i=1}^k \int_0^{N_i} \log\left(1+\frac{k-1}{x}\right)dx \\
&= \log k + \sum_{i=1}^k \left((k-1)\log\left(1+\frac{N_i}{k-1}\right) + N_i\log\left(1+\frac{k-1}{N_i}\right) \right) \\
&\stepa{\le} k(k-1)\log \left(1+\frac{n-1}{k(k-1)}\right) + k(k-1) + \log k,
\end{align*}
where (a) follows from the concavity of $x\mapsto \log x$, $\sum_{i=1}^k N_i=n-1$, and $\log(1+x)\le x$.
It remains to justify \prettyref{eq:MC_claim}, which has a simple information-theoretic proof:
Let $T$ denote the collection of sequences $x^n$ in $[k]^n$ whose \emph{type} is given by $(n_1,\ldots,n_k)$. Namely, for each $x^n \in T$,
$i$ appears exactly $n_i$ times for each $i \in[k]$.
Let $(X_1,\ldots,X_n)$ be drawn uniformly at random from the set $T$.
Then
\[
\log \frac{n!}{\prod_{i=1}^k n_i!} = H(X_1,\ldots,X_n) \stepa{\leq} \sum_{j=1}^n H(X_j) \stepb{=} n \sum_{i=1}^k \frac{n_i}{n}\log\frac{n}{n_i},
\]
where (a) follows from the fact that the joint entropy is at most the sum of marginal entropies; (b) is because each $X_j$ is distributed as $(\frac{n_1}{n},\ldots,\frac{n_k}{n})$.
\end{proof}
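The elementary claim \prettyref{eq:MC_claim} can also be verified by brute force; the following sketch checks it exhaustively over all compositions of $n\le 8$ into three parts (with the convention $0^0=0!=1$):

```python
from math import factorial, prod
from itertools import product

def lhs(ns):
    """prod_i (n_i / n)^{n_i}, with the convention 0^0 = 1."""
    n = sum(ns)
    return prod((ni / n) ** ni for ni in ns if ni > 0)

def rhs(ns):
    """prod_i n_i! / n! for n = sum_i n_i."""
    n = sum(ns)
    return prod(factorial(ni) for ni in ns) / factorial(n)

# Exhaustive check of eq:MC_claim for all (n_1, n_2, n_3) with 0 < n <= 8.
for ns in product(range(9), repeat=3):
    if 0 < sum(ns) <= 8:
        assert lhs(ns) <= rhs(ns) + 1e-12
print("claim verified")
```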
\section{Optimal rates without spectral gap}
\label{sec:optimal}
In this section, we prove the lower bound part of \prettyref{thm:optimal}, which shows the optimality of the average version of the add-one estimator \prettyref{eq:red-ach}.
We first describe the lower bound construction for three-state chains, which is subsequently extended to $k$ states.
\subsection{Warmup: an $\Omega(\frac{\log n}{n})$ lower bound for three-state chains}
\label{sec:threestates}
\begin{theorem}
\label{thm:threestates}
$
\Risk_{3,n} = \Omega\pth{\frac{\log n}{n}}.
$
\end{theorem}
To show \prettyref{thm:threestates}, consider the following one-parameter family of transition matrices:
\begin{equation}
\calM = \sth{ \M_p=\qth{\begin{matrix}
1-\frac{2}{n} & \frac{1}{n} & \frac{1}{n}\\
\frac{1}{n} & 1-\frac{1}{n}-p & p\\
\frac{1}{n} & p & 1-\frac{1}{n}-p
\end{matrix}}\colon 0 \leq p \leq 1-\frac{1}{n}}.
\label{eq:M3}
\end{equation}
Note that each transition matrix in $\calM$ is symmetric (hence doubly stochastic); the corresponding chain is reversible with a uniform stationary distribution and spectral gap $\Theta(\frac{1}{n})$; see \prettyref{fig:M3}.
\begin{figure}[ht]
\centering
\begin{center}
\begin{tikzpicture}[scale=0.6,
roundnode/.style={circle, draw=black, thick, minimum size=5mm},
squarednode/.style={rectangle, draw=black,semithick, minimum size=5mm},minimum size= 1mm, every edge/.style={
draw,->,>=stealth',auto,semithick}]
\node[roundnode] (s1) at (0,2*1.732) {1};
\node[roundnode] (s2) at (-2,0) {2};
\node[roundnode] (s3) at (2,0) {3};
\path[->] (s1) edge [bend left=15] node [right] {$\frac 1n$}(s2)
edge [bend left=15] node [right] {$\frac 1n$}(s3)
edge [loop left,every loop/.style={looseness=12, in=60,out=120}] node [above]{$1-\frac 2n$}();
\path[->] (s2) edge [bend left=15] node [left] {$\frac 1n$}(s1)
edge [bend left=15] node [above] {$p$}(s3)
edge [loop left,every loop/.style={looseness=12, in=150,out=210}] node {$1-\frac 1n-p$}();
\path[->] (s3) edge [bend left=15] node [below] {$p$}(s2)
edge [bend left=15] node [left] {$\frac 1n$}(s1)
edge [loop right, every loop/.style={looseness=12, in=330,out=30}] node {$1-\frac 1n-p$}();
\end{tikzpicture}
\end{center}
\caption{Lower bound construction for three-state chains.}
\label{fig:M3}
\end{figure}
The main idea is as follows.
Notice that by design, with constant probability, the trajectory takes the following form: the chain starts and stays at state 1 for $t$ steps, then transitions into state 2 or 3 and never returns to state 1, where $t=1,\ldots,n-1$. Since $p$ is the single unknown parameter, the only useful observations are the visits to states $2$ and $3$, and each visit yields one observation about $p$ by flipping a coin with bias roughly $p$. Thus the effective sample size for estimating $p$ is $n-t-1$, and we expect the best estimation error to be of order $\frac{1}{n-t}$. However, $t$ is not fixed: in fact, conditioned on the trajectory being of this form, $t$ is roughly uniformly distributed between $1$ and $n-1$. As such, we anticipate the estimation error of $p$ to be approximately
\[
\frac{1}{n-1}\sum_{t=1}^{n-1} \frac{1}{n-t} = \Theta\pth{\frac{\log n}{n}}.
\]
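As a quick numerical sanity check of this heuristic (illustrative only, not used in the proof), the average of $\frac{1}{n-t}$ over $t$ uniform in $\{1,\ldots,n-1\}$ is a harmonic sum of order $\frac{\log n}{n}$:

```python
import math

def avg_error(n):
    # (1/(n-1)) * sum_{t=1}^{n-1} 1/(n-t): the anticipated error when the
    # effective sample size n - t is uniform over {1, ..., n-1}.
    return sum(1.0 / (n - t) for t in range(1, n)) / (n - 1)

# The average is a harmonic sum, matching the Theta(log(n)/n) rate.
for n in [10**3, 10**5]:
    assert 0.8 < avg_error(n) / (math.log(n) / n) < 1.2
```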
Intuitively speaking, the construction in \prettyref{fig:M3} ``embeds'' a symmetric two-state chain (with states 2 and 3) with unknown parameter $p$ into a space of three states, by adding a ``nuisance'' state 1, which effectively slows down the exploration of the useful part of the state space, so that in a trajectory of length $n$, the effective number of observations we get to make about $p$ is roughly uniformly distributed between $1$ and $n$. This explains the extra log factor in \prettyref{thm:threestates}, which actually stems from the harmonic sum in $\Expect[\frac{1}{\Unif([n])}]$.
We will fully explore this embedding idea in \prettyref{sec:kstates} to deal with larger state spaces.
Next we make the above intuition rigorous using a Bayesian argument.
Let us start by recalling the following well-known lemma.
\begin{lemma}
\label{lmm:addone}
Let $q \sim \Unif(0,1)$. Conditioned on $q$, let $N \sim \Binom(m,q)$.
Then the Bayes estimator of $q$ given $N$ is the ``add-one'' estimator:
\[
\Expect[q|N] = \frac{N+1}{m+2}
\]
and the Bayes risk is given by
\[
\Expect[(q-\Expect[q|N])^2] = \frac{1}{6(m+2)}.
\]
\end{lemma}
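\prettyref{lmm:addone} is classical: with a uniform prior, $N$ is marginally uniform on $\{0,\ldots,m\}$, the posterior of $q$ given $N$ is $\mathrm{Beta}(N+1,m-N+1)$, and the Bayes risk is the average posterior variance. The identity can be verified exactly in rational arithmetic (illustrative snippet, not part of the proof):

```python
from fractions import Fraction

def addone_bayes_risk(m):
    """Exact Bayes risk E[(q - (N+1)/(m+2))^2] for q ~ Unif(0,1), N ~ Binom(m, q).

    N is marginally uniform on {0, ..., m}, and the posterior of q given N is
    Beta(N+1, m-N+1), with variance (N+1)(m-N+1) / ((m+2)^2 (m+3)); the Bayes
    risk is the average of these posterior variances."""
    return sum(Fraction((N + 1) * (m - N + 1), (m + 2) ** 2 * (m + 3))
               for N in range(m + 1)) / (m + 1)

# Matches the closed form 1 / (6(m+2)) exactly.
for m in [0, 1, 5, 20, 100]:
    assert addone_bayes_risk(m) == Fraction(1, 6 * (m + 2))
```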
\begin{proof}[Proof of \prettyref{thm:threestates}]
Consider the following Bayesian setting: First, we draw $p$ uniformly at random from $[0,1-\frac{1}{n}]$. Then, we generate the sample path $X^n=(X_1,\ldots,X_n)$ of a stationary (uniform) Markov chain with transition matrix $\M_p$ as defined in \prettyref{eq:M3}.
Define
\begin{align}\label{eq:MC_Xt-sets}
\begin{gathered}
\calX_t = \{x^n: x_1=\ldots=x_t=1, x_i \neq 1, i=t+1,\ldots,n\}, \quad t=1,\dots,n-1,\\
\quad \calX = \cup_{t=1}^{n-1} \calX_t.
\end{gathered}
\end{align}
Let $\mu(x^n|p) = \prob{X^n=x^n}$. Then
\begin{equation}
\mu(x^n|p) = \frac{1}{3} \pth{1-\frac{2}{n}}^{t-1} \frac{2}{n} p^{N(x^n)} \pth{1-\frac{1}{n}-p}^{n-t-1-N(x^n)}, \quad x^n \in \calX_t,
\label{eq:muxp}
\end{equation}
where $N(x^n)$ denotes the number of transitions from state 2 to 3 or from 3 to 2.
Then
\begin{align}
\prob{X^n \in \calX_t}
= & ~ \frac{1}{3} \pth{1-\frac{2}{n}}^{t-1} \frac{2}{n} \sum_{k=0}^{n-t-1}\binom{n-t-1}{k} p^{k} \pth{1-\frac{1}{n}-p}^{n-t-1-k} \nonumber \\
= & ~ \frac{1}{3} \pth{1-\frac{2}{n}}^{t-1} \frac{2}{n} \pth{1-\frac{1}{n}}^{n-t-1} = \frac{2}{3n} \pth{1-\frac{1}{n}}^{n-2} \pth{1-\frac{1}{n-1}}^{t-1}
\label{eq:calXt}
\end{align}
and hence
\begin{align}
\prob{X^n \in \calX} = & ~ \sum_{t=1}^{n-1} \prob{X^n \in \calX_t} = \frac{2(n-1)}{3n} \pth{1-\frac{1}{n}}^{n-2}\pth{1-\pth{1-\frac{1}{n-1}}^{n-1}} \label{eq:calX} \\
= & ~ \frac{2(1-1/e)}{3e} +o_n(1) \nonumber.
\end{align}
Consider the Bayes estimator (for estimating $p$ under the mean-squared error)
\[
\hat p(x^n) = \Expect[p|x^n] = \frac{\Expect[p \cdot \mu(x^n|p)]}{\Expect[\mu(x^n|p)]}.
\]
For $x^n\in \calX_t$, using \prettyref{eq:muxp} we have
\begin{align*}
\hat p(x^n)
= & ~ \frac{\expect{ p^{N(x^n)+1} \pth{1-\frac{1}{n}-p}^{n-t-1-N(x^n)}}}{ \expect{ p^{N(x^n)} \pth{1-\frac{1}{n}-p}^{n-t-1-N(x^n)}} }, \quad p \sim \Unif\pth{0,\frac{n-1}{n}} \\
= & ~ \frac{n-1}{n} \frac{\expect{ U^{N(x^n)+1} \pth{1-U}^{n-t-1-N(x^n)}}}{ \expect{ U^{N(x^n)} \pth{1-U}^{n-t-1-N(x^n)}} }, \quad U \sim \Unif(0,1) \\
= & ~ \frac{n-1}{n} \frac{N(x^n)+1}{n-t+1},
\end{align*}
where the last step follows from \prettyref{lmm:addone}.
From \prettyref{eq:muxp}, we conclude that conditioned on $X^n \in \calX_t$ and on $p$, $N(X^n) \sim \Binom(n-t-1,q)$, where $q = \frac{p}{1-\frac{1}{n}} \sim \Unif(0,1)$.
Applying \prettyref{lmm:addone} (with $m=n-t-1$ and $N=N(X^n)$), we get
\begin{align*}
\Expect[(p-\hat p(X^n))^2|X^n \in \calX_t]
= & ~ \pth{\frac{n-1}{n}}^2 \Expect\qth{\pth{q- \frac{N(X^n)+1}{n-t+1} }^2} \\
= & ~ \pth{\frac{n-1}{n}}^2 \frac{1}{6(n-t+1)}.
\end{align*}
Finally,
note that conditioned on $X^n \in \calX$, the probability of $X^n \in \calX_t$ is close to uniform. Indeed, from \prettyref{eq:calXt} and \prettyref{eq:calX} we get
\[
\prob{X^n \in \calX_t| \calX}
= \frac{1}{n-1}
\frac{\pth{1-\frac{1}{n-1}}^{t-1}}{1- \pth{1-\frac{1}{n-1}}^{n-1} } \geq \frac{1}{n-1} \pth{ \frac{1}{e-1} + o_n(1)}, \quad t=1,\ldots,n-1.
\]
Thus
\begin{align}
\Expect[(p-\hat p(X^n))^2 \indc{X^n \in \calX}]
= & ~ \prob{X^n \in \calX} \sum_{t=1}^{n-1} \Expect[(p-\hat p(X^n))^2 | X^n \in \calX_t] \prob{X^n \in \calX_t| \calX} \nonumber \\
\gtrsim & ~ \frac{1}{n-1} \sum_{t=1}^{n-1} \frac{1}{n-t+1} = \Theta\pth{\frac{\log n}{n}} \label{eq:pbayes}.
\end{align}
Finally, we relate \prettyref{eq:pbayes} formally to the minimax prediction risk under the KL divergence.
Consider any predictor $\hat \M(\cdot|i)$ (as a function of the sample path $X$) for the $i$th row of $\M$, $i=1,2,3$.
By Pinsker's inequality, we conclude that
\begin{align}
D(\M(\cdot|2) \| \hat \M(\cdot|2))
\geq \frac{1}{2} \|\M(\cdot|2)-\hat \M(\cdot|2)\|_{\ell_1}^2 \geq \frac{1}{2} (p-\hat \M(3|2))^2
\end{align}
and similarly, $D(\M(\cdot|3) \| \hat \M(\cdot|3)) \geq \frac{1}{2}(p-\hat \M(2|3))^2$.
Abbreviate $\hat \M(3|2) \equiv \hat p_2$ and $\hat \M(2|3) \equiv \hat p_3$, both functions of $X$.
Taking expectations over both $p$ and $X$,
the Bayes prediction risk can be bounded as follows
\begin{align}
& \sum_{i=1}^3 \Expect[D( \M(\cdot|i)\|\hat \M(\cdot|i)) \indc{X_n=i} ] \nonumber \\
\geq & ~ \frac{1}{2} \Expect[(p-\hat p_2)^2 \indc{X_n=2} + (p-\hat p_3)^2 \indc{X_n=3}] \nonumber \\
\geq & ~ \frac{1}{2} \sum_{x^n \in \calX} \mu(x^n)\pth{ \Expect[(p-\hat p_2)^2|X=x^n] \indc{x_n=2} + \Expect[(p-\hat p_3)^2|X=x^n] \indc{x_n=3} }\nonumber \\
\geq & ~ \frac{1}{2} \sum_{x^n \in \calX} \mu(x^n) \Expect[(p-\hat p(x^n))^2|X=x^n] (\indc{x_n=2} + \indc{x_n=3}) \nonumber \\
= & ~ \frac{1}{2} \sum_{x^n \in \calX} \mu(x^n) \Expect[(p-\hat p(x^n))^2|X=x^n] \nonumber \\
= & ~ \frac{1}{2} \Expect[(p-\hat p(X))^2 \indc{X\in\calX}] \overset{\prettyref{eq:pbayes}}{\gtrsim} \frac{\log n}{n}. \nonumber
\end{align}
\end{proof}
\subsection{$k$-state chains}
\label{sec:kstates}
The lower bound construction for $3$-state chains in \prettyref{sec:threestates}
can be generalized to $k$-state chains.
The high-level argument is again to augment a $(k-1)$-state chain into a $k$-state chain. Specifically, we partition the state space $[k]$ into two sets $\calS_1=\{1\}$ and $\calS_2=\{2,3,\cdots,k\}$.
Consider a $k$-state Markov chain such that the transition probabilities from $\calS_1$ to $\calS_2$, and from $\calS_2$ to $\calS_1$, are both very small (of order $\frac{1}{n}$). At state $1$, the chain either stays at $1$ with probability $1-1/n$ or moves to one of the states in $\calS_2$ with equal probability $\frac{1}{n(k-1)}$;
at each state in $\calS_2$, the chain moves to $1$ with probability $\frac{1}{n}$; otherwise, within the state subspace $\calS_2$, the chain evolves according to some symmetric transition matrix $T$. (See \prettyref{fig:Mk} in Section~\ref{subsec:chain_construction} for the precise transition diagram.)
The key feature of such a chain is as follows. Let $\calX_t$ be the event that $X_1,X_2,\cdots,X_t\in \calS_1$ and $X_{t+1},\cdots,X_n\in \calS_2$. For each $t\in [n-1]$,
one can show that $\PP(\calX_t)\ge c/n$ for some absolute constant $c>0$. Moreover, conditioned on the event $\calX_t$, $(X_{t+1},\ldots,X_n)$ is equal in law to a stationary Markov chain $(Y_1,\cdots,Y_{n-t})$ on state space $\calS_2$ with symmetric transition matrix $T$.
It is not hard to show that estimating $M$ and estimating $T$ are nearly equivalent. Consider the Bayesian setting where $T$ is drawn from some prior. We have
\begin{align*}
\inf_{\widehat{M}}\EE_{T}\left[\EE[D(M(\cdot | X_n) \| \widehat{M}(\cdot|X_n)) | \calX_t]\right] \approx \inf_{\widehat{T}}\EE_{T}\left[\EE[D(T(\cdot | Y_{n-t}) \| \widehat{T}(\cdot|Y_{n-t}))]\right]
= I(T; Y_{n-t+1} | Y^{n-t}),
\end{align*}
where the last equality follows from the representation \prettyref{eq:MC_Bayes-MI} of Bayes prediction risk as conditional mutual information.
Lower bounding the minimax risk by the Bayes risk, we have
\begin{align}\label{eq:riskred3}
\Risk_{k,n} &\ge \inf_{\widehat{M}}\EE_{T}\left[\EE[D(M(\cdot | X_n) \| \widehat{M}(\cdot|X_n))]\right] \nonumber \\
&\ge \inf_{\widehat{M}}\sum_{t=1}^{n-1} \EE_{M}\left[\EE[D(M(\cdot | X_n) \| \widehat{M}(\cdot|X_n))|\calX_t] \cdot \PP(\calX_t)\right] \nonumber \\
&\ge \frac{c}{n}\cdot \sum_{t=1}^{n-1} \inf_{\widehat{M}}\EE_{M}\left[\EE[D(M(\cdot | X_n) \| \widehat{M}(\cdot|X_n))|\calX_t] \right] \nonumber \\
&\approx \frac{c}{n}\cdot \sum_{t=1}^{n-1} I(T;Y_{n-t+1}|Y^{n-t}) = \frac{c}{n}\cdot (I(T;Y^n) - I(T;Y_1)).
\end{align}
Note that $I(T;Y_1)\le H(Y_1)\le \log(k-1)$ since $Y_1$ takes values in $\calS_2$.
Maximizing the right hand side over the prior $P_T$ and recalling the dual representation for redundancy in \prettyref{eq:capred}, the above inequality \eqref{eq:riskred3} leads to a risk lower bound of $\Risk_{k,n} \gtrsim \frac{1}{n} (\Red_{k-1,n}^{\sf sym} - \log k)$,
where $\Red_{k-1,n}^{\sf sym}=\sup I(T;Y_1)$ is the redundancy for \emph{symmetric} Markov chains with $k-1$ states and sample size $n$.
Since symmetric transition matrices still have $\Theta(k^2)$ degrees of freedom, it is expected that $\Red_{k,n}^{\sf sym} \asymp k^2\log \frac{n}{k^2}$ for $n\gtrsim k^2$, so that \eqref{eq:riskred3} yields the desired lower bound $\Risk_{k,n} = \Omega(\frac{k^2}{n} \log \frac{n}{k^2})$ in Theorem \ref{thm:optimal}.
Next we rigorously carry out the lower bound proof sketched above: In Section \ref{subsec:chain_construction}, we explicitly construct the $k$-state chain which satisfies the desired properties in \prettyref{sec:kstates}. In Section \ref{subsec:bayes_lower_bound}, we make the steps in \eqref{eq:riskred3} precise and bound the Bayes risk from below by an appropriate mutual information. In Section \ref{subsec:prior_construction}, we choose a prior distribution on the transition probabilities and prove a lower bound on the resulting mutual information, thereby completing the proof of Theorem \ref{thm:optimal}, with the added bonus that the construction is restricted to irreducible and reversible chains.
\subsubsection{Construction of the $k$-state chain}\label{subsec:chain_construction}
We construct a $k$-state chain with the following transition probability matrix:
\begin{align}\label{eq:M_construction}
M = \left[\begin{matrix}
1 - \frac{1}{n} & \begin{matrix} \frac{1}{n(k-1)} & \frac{1}{n(k-1)} & \cdots & \frac{1}{n(k-1)} \end{matrix} \\
\begin{matrix} 1/n \\ 1/n \\ \vdots \\ 1/n \end{matrix} & \mbox{\LARGE $\left(1-\frac{1}{n}\right)T$}
\end{matrix}\right],
\end{align}
where $T\in \reals^{\calS_2\times \calS_2}$ is a symmetric stochastic matrix to be chosen later. The transition diagram of $M$ is shown in Figure \ref{fig:Mk}.
One can also verify that the spectral gap of $M$ is $\Theta(\frac{1}{n})$.
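As an illustration (not part of the proof), the construction \eqref{eq:M_construction} and property \ref{pt:1} can be checked in exact arithmetic. The sketch below assembles $M$ from a given symmetric stochastic $T$ and verifies stochasticity and detailed balance with respect to $(\frac12,\frac{1}{2(k-1)},\ldots,\frac{1}{2(k-1)})$; states are relabeled $0,\ldots,k-1$ and the particular $T$ and $n$ are arbitrary choices:

```python
from fractions import Fraction

def build_M(T, n):
    """Assemble the k-state matrix of the construction above from a
    symmetric stochastic (k-1)x(k-1) matrix T (states relabeled 0..k-1)."""
    k = len(T) + 1
    M = [[Fraction(0)] * k for _ in range(k)]
    M[0][0] = 1 - Fraction(1, n)
    for i in range(1, k):
        M[0][i] = Fraction(1, n * (k - 1))   # S1 -> S2: 1/(n(k-1)) each
        M[i][0] = Fraction(1, n)             # S2 -> S1: 1/n
        for j in range(1, k):
            M[i][j] = (1 - Fraction(1, n)) * T[i - 1][j - 1]
    return M

k, n = 4, 100
T = [[Fraction(1, 3)] * (k - 1) for _ in range(k - 1)]  # any symmetric stochastic T
M = build_M(T, n)
pi = [Fraction(1, 2)] + [Fraction(1, 2 * (k - 1))] * (k - 1)
assert all(sum(row) == 1 for row in M)                       # stochastic
assert all(pi[i] * M[i][j] == pi[j] * M[j][i]                # reversible
           for i in range(k) for j in range(k))
```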
\begin{figure}[ht]
\centering
\begin{center}
\begin{tikzpicture}[scale=0.9,
roundnode/.style={circle, draw=black, thick, minimum size=5mm},
squarednode/.style={rectangle, draw=black,semithick, minimum size=5mm},minimum size= 1mm, every edge/.style={draw,->,>=stealth',auto,semithick}]
\node [roundnode] (s1) at (4,6) {1};
\node [roundnode] (s2) at (-2,-2) {2};
\node [roundnode] (s3) at (1.8,-2) {3};
\node [roundnode] (s4) at (6.2,-2) {$\ldots$};
\node [roundnode] (s5) at (10,-2) {$k$};
\draw [dotted] (-3,4) -- (11,4);
\node [above] at (10.5,4) {$\calS_1$}; \node [below] at (10.5,4) {$\calS_2$};
\path (s1) edge [dashed, bend left=10] node [left] {$\frac{1}{n(k-1)}$} (s2)
edge [dashed, bend left=10] node [right] {$\frac{1}{n(k-1)}$} (s3)
edge [dashed, bend right=10] (s4)
edge [dashed, bend right=15] node [right] {$\frac{1}{n(k-1)}$} (s5)
edge [loop left,every loop/.style={looseness=12, in=60,out=120}] node [above]{$1-\frac{1}{n}$}();
\path [->] (s2) edge [dashed, bend left=10] node [left] {$\frac{1}{n}$} (s1)
edge [bend left=15, <->] node [above] {$(1-\frac{1}{n})T_{2,3}$} (s3)
edge [bend right=20, <->] (s4)
edge [bend left=40, <->] node [below] {$(1-\frac{1}{n})T_{2,k}$} (s5)
edge [loop left,every loop/.style={looseness=12, in=300,out=240}] node [below]{$(1-\frac{1}{n})T_{2,2}$}();
\path [->] (s3) edge [dashed, bend left=5] node [left] {$\frac{1}{n}$} (s1)
edge [bend left=15, <->] (s4)
edge [loop left,every loop/.style={looseness=12, in=300,out=240}] node [below]{$(1-\frac{1}{n})T_{3,3}$}()
edge [bend left=20, <->] node [above] {$(1-\frac{1}{n})T_{3,k}$} (s5);
\path [->] (s4) edge [bend right=5, dashed] node [right] {$\frac{1}{n}$} (s1)
edge [loop left,every loop/.style={looseness=12, in=300,out=240}] ();
\path [->] (s5) edge [bend right=15, dashed] node [right] {$\frac{1}{n}$} (s1)
edge [loop left,every loop/.style={looseness=12, in=300,out=240}] node [below]{$(1-\frac{1}{n})T_{k,k}$}()
edge [bend right=15, <->] (s4);
\end{tikzpicture}
\end{center}
\caption{Lower bound construction for $k$-state chains. Solid arrows represent transitions within $\calS_1$ and $\calS_2$, and dashed arrows represent transitions between $\calS_1$ and $\calS_2$. The double-headed arrows denote transitions in both directions with equal probabilities.}
\label{fig:Mk}
\end{figure}
Let $(X_1,\ldots,X_n)$ be the trajectory of a stationary Markov chain with transition matrix $M$.
We observe the following properties:
\begin{enumerate}[label=(P\arabic*)]
\item \label{pt:1} This Markov chain is irreducible and reversible, with stationary distribution $(\frac{1}{2},\frac{1}{2(k-1)},\cdots,\frac{1}{2(k-1)})$;
\item \label{pt:2} For $t\in [n-1]$, let $\calX_t$ denote the collections of trajectories $x^n$ such that $x_1,x_2,\cdots,x_t\in \calS_1$ and $x_{t+1},\cdots,x_n\in \calS_2$. Then
\begin{align}\label{eq:Xt_prob}
\PP(X^n\in\calX_t)&= \PP(X_1=\cdots=X_t=1)\cdot \PP(X_{t+1}\neq 1 | X_t = 1)\cdot \prod_{s=t+1}^{n-1} \PP(X_{s+1}\neq 1|X_s\neq 1) \nonumber \\
&= \frac{1}{2}\cdot \left(1-\frac{1}{n}\right)^{t-1}\cdot \frac{1}{n}\cdot \left(1-\frac{1}{n}\right)^{n-1-t} \ge \frac{1}{2en}.
\end{align}
Moreover, this probability does not depend on the choice of $T$;
\item \label{pt:3} Conditioned on the event that $X^n\in\calX_t$, the trajectory $(X_{t+1},\cdots,X_n)$ has the same distribution as a length-$(n-t)$ trajectory of a stationary Markov chain with state space $\calS_2=\{2,3,\cdots,k\}$ and transition probability $T$, and the uniform initial distribution.
Indeed,
\begin{align*}
\prob{X_{t+1}=x_{t+1},\ldots,X_n=x_n|X^n\in\calX_t}
= & ~ \frac{\frac{1}{2}\cdot \left(1-\frac{1}{n}\right)^{t-1}\cdot \frac{1}{n(k-1)} \prod_{s=t+1}^{n-1} M(x_{s+1}|x_s) }{\frac{1}{2}\cdot \left(1-\frac{1}{n}\right)^{t-1}\cdot \frac{1}{n}\cdot \left(1-\frac{1}{n}\right)^{n-1-t}} \\
= & ~ \frac{1}{k-1}
\prod_{s=t+1}^{n-1} T(x_{s+1}|x_s).
\end{align*}
\end{enumerate}
\subsubsection{Reducing the Bayes prediction risk to redundancy}\label{subsec:bayes_lower_bound}
Let $\calM_{k-1}^{\mathsf{sym}}$ be the collection of all symmetric transition matrices on state space $\calS_2=\{2,\ldots,k\}$.
Consider a Bayesian setting where the transition matrix $M$ is constructed in \eqref{eq:M_construction} and the submatrix $T$ is drawn from an arbitrary prior on $\calM_{k-1}^{\mathsf{sym}}$.
The following lemma lower bounds the Bayes prediction risk.
\begin{lemma}\label{lmm:riskred_markov}
Conditioned on $T$, let $Y^n=(Y_1,\ldots,Y_n)$ denote a stationary Markov chain on state space $\calS_2$ with transition matrix $T$ and uniform initial distribution. Then
\begin{align*}
\inf_{\widehat{M}}\EE_{T}\left[\EE[D(M(\cdot|X_n)\| \widehat{M}(\cdot|X_n)) ]\right] \ge \frac{n-1}{2en^2}\left(I(T;Y^n) - \log (k-1)\right).
\end{align*}
\end{lemma}
Lemma \ref{lmm:riskred_markov} is the formal statement of the inequality \eqref{eq:riskred3} presented in the proof sketch. Maximizing the lower bound over the prior on $T$ and in view of the mutual information representation \prettyref{eq:capred}, we obtain the following corollary.
\begin{corollary}
Let $\Risk_{k,n}^{\mathsf{rev}}$ denote the minimax prediction risk for stationary irreducible and reversible Markov chains on $k$ states and $\Red_{k,n}^{\mathsf{sym}}$ the redundancy
for stationary symmetric Markov chains on $k$ states. Then
\begin{align*}
\Risk_{k,n}^{\sf rev} \ge \frac{n-1}{2en^2}(\Red_{k-1,n}^{\mathsf{sym}} - \log(k-1)).
\end{align*}
\end{corollary}
We make use of the properties \ref{pt:1}--\ref{pt:3} in Section \ref{subsec:chain_construction} to prove Lemma \ref{lmm:riskred_markov}.
\begin{proof}[Proof of Lemma \ref{lmm:riskred_markov}]
Recall that in the Bayesian setting, we first draw $T$ from some prior on $\calM_{k-1}^{\mathsf{sym}}$, then generate the stationary Markov chain $X^n=(X_1,\ldots,X_n)$ with state space $[k]$ and transition matrix $M$ in \prettyref{eq:M_construction}, and $(Y_1,\ldots,Y_n)$ with state space $\calS_2=\{2,\ldots,k\}$ and transition matrix $T$.
We first relate the Bayes estimator of $M$ and $T$ (given the $X$ and $Y$ chain respectively).
For clarity, we spell out the explicit dependence of the estimators on the input trajectory.
For each $t\in[n]$, denote by $\hat M_t=\hat M_t(\cdot|x^t)$
the Bayes estimator of $M(\cdot|x_t)$ given $X^t=x^t$, and by
$\hat T_t(\cdot|y^t)$
the Bayes estimator of $T(\cdot|y_t)$ given $Y^t=y^t$.
For each $t=1,\ldots,n-1$ and for each trajectory $x^n=(1,\ldots,1,x_{t+1},\ldots,x_n) \in \calX_t$, recalling the form \prettyref{eq:MC_bayes} of the Bayes estimator,
we have, for each $j\in\calS_2$,
\begin{align*}
\hat M_n(j|x^n)
= & ~ \frac{\prob{X^{n+1}=(x^n,j)}}{\prob{X^n=x^n}} \\
= & ~ \frac{\Expect[\frac{1}{2} M(1|1)^{t-1} M(x_{t+1}|1) M(x_{t+2}|x_{t+1}) \ldots M(x_n|x_{n-1}) M(j|x_n) ]}{\Expect[\frac{1}{2} M(1|1)^{t-1} M(x_{t+1}|1) M(x_{t+2}|x_{t+1}) \ldots M(x_n|x_{n-1}) ]} \\
= & ~ \pth{1-\frac{1}{n}} \frac{\Expect[T(x_{t+2}|x_{t+1}) \ldots T(x_n|x_{n-1}) T(j|x_n) ]}{\Expect[T(x_{t+2}|x_{t+1}) \ldots T(x_n|x_{n-1}) ]} \\
= & ~ \pth{1-\frac{1}{n}} \hat T_{n-t}(j|x_{t+1}^n) ,
\end{align*}
where we used the stationary distribution of $X$ in \ref{pt:1} and the uniformity of the stationary distribution of $Y$, neither of which depends on $T$.
Furthermore, by construction in \prettyref{eq:M_construction}, $\hat M_n(1|x^n) = \frac{1}{n}$ is deterministic.
In all, we have
\begin{equation}
\hat M_n(\cdot|x^n) = \frac{1}{n} \delta_1 + \pth{1-\frac{1}{n}}\hat T_{n-t}(\cdot|x_{t+1}^n), \quad x^n\in\calX_t,
\label{eq:MC_bayesMT}
\end{equation}
where $\delta_1$ denotes the point mass at state 1. This parallels the fact that
\begin{equation}
M(\cdot|x) = \frac{1}{n} \delta_1 + \pth{1-\frac{1}{n}} T(\cdot|x), \quad x \in \calS_2.
\label{eq:MT}
\end{equation}
By \ref{pt:2}, each event $\{X^n\in\calX_t\}$ occurs with probability at least $1/(2en)$, and is independent of $T$. Therefore,
\begin{align}\label{eq:decomposition}
\EE_{T}\left[\EE[D(M(\cdot|X_n)\| \widehat{M}(\cdot|X^n)) ]\right] \ge \frac{1}{2en}\sum_{t=1}^{n-1}\EE_{T}\left[\EE[D(M(\cdot|X_n)\| \widehat{M}(\cdot|X^n)) | X^n\in\calX_t ]\right].
\end{align}
By \ref{pt:3}, the conditional joint law of $(T,X_{t+1},\ldots,X_n)$ on the event $\{X^n\in\calX_t\}$ is the same as the joint law of $(T,Y_{1},\ldots,Y_{n-t})$.
Thus, we may express the Bayes prediction risk in the $X$ chain as
\begin{align}\label{eq:reduction_M_N}
\EE_{T}\left[\EE[D(M(\cdot|X_n)\| \widehat{M}(\cdot|X^n)) | X^n\in\calX_t ]\right]
& \stepa{=} \left(1-\frac{1}{n}\right)\cdot \EE_{T}\left[\EE[D(T(\cdot|Y_{n-t}) \| \widehat{T}(\cdot|Y^{n-t}))]\right] \nonumber \\
& \stepb{=} \left(1-\frac{1}{n}\right)\cdot I(T;Y_{n-t+1}|Y^{n-t}),
\end{align}
where (a) follows from \prettyref{eq:MC_bayesMT}, \prettyref{eq:MT}, and the fact that for distributions $P,Q$ supported on $\calS_2$,
$D(\epsilon \delta_1 + (1-\epsilon) P \|\epsilon \delta_1 + (1-\epsilon) Q)=(1-\epsilon) D(P\|Q)$;
(b) is the mutual information representation \prettyref{eq:MC_Bayes-MI} of the Bayes prediction risk.
Finally, the lemma follows from \eqref{eq:decomposition}, \eqref{eq:reduction_M_N}, and the chain rule
\begin{align*}
\sum_{t=1}^{n-1} I(T;Y_{n-t+1}|Y^{n-t}) = I(T;Y^{n}) - I(T;Y_1) \ge I(T;Y^{n}) - \log(k-1),
\end{align*}
as $I(T;Y_1)\le H(Y_1)\le \log(k-1)$.
\end{proof}
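The identity used in step (a), $D(\epsilon \delta_1 + (1-\epsilon) P \,\|\,\epsilon \delta_1 + (1-\epsilon) Q)=(1-\epsilon) D(P\|Q)$, holds because both mixtures place the same mass $\epsilon$ on state $1$, which contributes zero to the divergence. A quick numerical check (illustrative only, with arbitrary $P$, $Q$, and $\epsilon$):

```python
import math

def kl(P, Q):
    """KL divergence D(P || Q) for distributions on a finite set."""
    return sum(p * math.log(p / q) for p, q in zip(P, Q) if p > 0)

def mix(eps, P):
    """eps * delta_1 + (1 - eps) * P, prepending state 1 to the support of P."""
    return [eps] + [(1 - eps) * p for p in P]

P, Q, eps = [0.2, 0.5, 0.3], [0.4, 0.4, 0.2], 0.01
assert abs(kl(mix(eps, P), mix(eps, Q)) - (1 - eps) * kl(P, Q)) < 1e-12
```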
\subsubsection{Prior construction and lower bounding the mutual information}\label{subsec:prior_construction}
In view of Lemma \ref{lmm:riskred_markov}, it remains to find a prior on $\calM_{k-1}^{\mathsf{sym}}$ for $T$, such that the mutual information $I(T;Y^n)$ is large.
We make use of the connection identified in \cite{davisson1981efficient,D83,rissanen1984universal} between estimation error and mutual information (see also
\cite[Theorem 7.1]{csiszar2004information} for a self-contained exposition).
To lower bound the mutual information, a key step is to find a good estimator $\widehat{T}(Y^n)$ of $T$. This is carried out in the following lemma.
\begin{lemma}\label{lmm:L2_upper}
In the setting of \prettyref{lmm:riskred_markov}, suppose that $T\in \calM_{k}^{\mathsf{sym}}$ with $T_{ij}\in [\frac{1}{2k},\frac{3}{2k}]$ for all $i,j\in [k]$. Then there is an estimator $\widehat{T}$ based on $Y^n$ such that
\begin{align*}
\EE[\|\widehat{T} - T \|_{\mathsf{F}}^2] \le \frac{16k^2}{n-1},
\end{align*}
where $\|\widehat{T} - T \|_{\mathsf{F}} = \sqrt{\sum_{ij} (\widehat{T}_{ij} - T_{ij})^2}$ denotes the Frobenius norm.
\end{lemma}
We show how Lemma \ref{lmm:L2_upper} leads to the desired lower bound on the mutual information $I(T;Y^n)$. Since $k\ge 3$, we may assume that $k-1=2k_0$ is an even integer. Consider the following prior distribution $\pi$ on $T$: let $u=(u_{i,j})_{i,j\in [k_0], i\le j}$ be iid and uniformly distributed in $[1/(4k_0),3/(4k_0)]$, and $u_{i,j} = u_{j,i}$ for $i>j$. Let the transition matrix $T$ be given by
\begin{align}
T_{2i-1,2j-1} = T_{2i,2j} = u_{i,j}, \quad T_{2i-1,2j} = T_{2i,2j-1} = \frac{1}{k_0} - u_{i,j}, \quad \forall i,j\in [k_0].
\label{eq:Tu}
\end{align}
It is easy to verify that $T$ is a symmetric stochastic matrix, with each entry supported in the interval $[1/(4k_0), 3/(4k_0)]$. Since $2k_0 = k-1$, the condition of Lemma \ref{lmm:L2_upper} is fulfilled, so there exist estimators $\widehat{T}(Y^n)$ and $\widehat{u}(Y^n)$ such that
\begin{align}
\EE[\|\widehat{u}(Y^n) - u\|_2^2] \le \EE[\|\widehat{T}(Y^n) - T \|_{\mathsf{F}}^2] \le \frac{64k_0^2}{n-1}.
\label{eq:MSEu}
\end{align}
Here and below, we identify $u$ and $\hat u$ as $\frac{k_0(k_0+1)}{2}$-dimensional vectors.
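For concreteness, the prior \eqref{eq:Tu} can be checked programmatically. The sketch below is illustrative only; it draws $u_{i,j}$ from a rational grid inside $[1/(4k_0),3/(4k_0)]$ so the checks are exact, and verifies that every realization of $T$ is symmetric, stochastic, and entrywise in $[1/(4k_0),3/(4k_0)]$ (states relabeled $0,\ldots,2k_0-1$):

```python
from fractions import Fraction
import random

def draw_T(k0, rng):
    """Sample T as in the prior: u[i][j] uniform in [1/(4k0), 3/(4k0)]
    (a rational grid here, so all checks are exact), u symmetric, and T built
    from the 2x2 blocks [[u, 1/k0 - u], [1/k0 - u, u]]."""
    u = [[Fraction(0)] * k0 for _ in range(k0)]
    for i in range(k0):
        for j in range(i, k0):
            u[i][j] = u[j][i] = Fraction(1, 4 * k0) + Fraction(rng.randrange(101), 100) / (2 * k0)
    T = [[Fraction(0)] * (2 * k0) for _ in range(2 * k0)]
    for i in range(k0):
        for j in range(k0):
            T[2 * i][2 * j] = T[2 * i + 1][2 * j + 1] = u[i][j]
            T[2 * i][2 * j + 1] = T[2 * i + 1][2 * j] = Fraction(1, k0) - u[i][j]
    return T

k0 = 3
T = draw_T(k0, random.Random(0))
lo, hi = Fraction(1, 4 * k0), Fraction(3, 4 * k0)
assert all(sum(row) == 1 for row in T)                                        # stochastic
assert all(T[i][j] == T[j][i] for i in range(2 * k0) for j in range(2 * k0))  # symmetric
assert all(lo <= T[i][j] <= hi for i in range(2 * k0) for j in range(2 * k0)) # entry range
```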
Let $h(X) = \int -f_X(x)\log f_X(x)dx$ denote the differential entropy of a continuous random vector $X$ with density $f_X$ w.r.t the Lebesgue measure
and $h(X|Y)=\int -f_{XY}(x,y)\log f_{X|Y}(x|y)\,dx\,dy$ the conditional differential entropy (cf.~e.g.~\cite{cover}). Then
\begin{align}\label{eq:diff_entropy}
h(u) = \sum_{i,j\in [k_0], i\le j}h(u_{i,j}) = -\frac{k_0(k_0+1)}{2}\log(2k_0).
\end{align}
Then
\begin{align*}
I(T;Y^n)
& \stepa{=} I(u;Y^n)\\
& \stepb{\geq} I(u;\hat u(Y^n)) = h(u)-h(u|\hat u(Y^n))\\
& \stepc{\geq} h(u)-h(u-\hat u(Y^n))\\
& \stepd{\geq} \frac{k_0(k_0+1)}{4}\log\left(\frac{n-1}{1024\pi e k_0^2}\right) \ge \frac{k^2}{16}\log\left(\frac{n-1}{256\pi e k^2}\right),
\end{align*}
where
(a) is because $u$ and $T$ are in one-to-one correspondence by \prettyref{eq:Tu};
(b) follows from the data processing inequality;
(c) is because $h(\cdot)$ is translation invariant and concave;
(d) follows from the maximum entropy principle \cite{cover}:
$h(u-\hat u(Y^n)) \leq \frac{k_0(k_0+1)}{4}\log\left(\frac{2\pi e}{k_0(k_0+1)/2}\cdot \EE[\|\widehat{u}(Y^n) - u\|_2^2] \right)$, which in turn is bounded by \prettyref{eq:MSEu}.
Plugging this lower bound into Lemma \ref{lmm:riskred_markov} completes the lower bound proof of Theorem \ref{thm:optimal}.
\begin{proof}[Proof of Lemma \ref{lmm:L2_upper}]
Since $T$ is symmetric, the stationary distribution is uniform, and there is a one-to-one correspondence between the joint distribution of $(Y_1,Y_2)$ and the transition probabilities. Motivated by this observation, consider the following estimator $\widehat{T}$: for $i,j\in [k]$, let
\begin{align*}
\widehat{T}_{ij} = k\cdot \frac{\sum_{t=1}^{n-1} \indc{Y_t=i, Y_{t+1}=j} }{n-1}.
\end{align*}
Clearly $\EE[\widehat{T}_{ij}]=k\cdot \PP(Y_1=i,Y_2=j)= T_{ij}$. The following variance bound is shown in \cite[Lemma 7, Lemma 8]{tatwawadi2018minimax} using the concentration inequality of \cite{P15}:
\begin{align*}
\var(\widehat{T}_{ij}) \le k^2\cdot \frac{8T_{ij}k^{-1}}{\gamma_*(T)(n-1)},
\end{align*}
where $\gamma_*(T)$ is the absolute spectral gap of $T$ defined in \prettyref{eq:gamma-def}. Note that $T = k^{-1} \mathbf{J} + \Delta$,
where $\mathbf{J}$ is the all-one matrix and each entry of $\Delta$ lies in $[-1/(2k),1/(2k)]$. Thus the spectral radius of $\Delta$ is at most $1/2$, and hence $\gamma_*(T)\ge 1/2$. Consequently, we have
\begin{align*}
\EE[\|\widehat{T} - T \|_{\mathsf{F}}^2] = \sum_{i,j\in [k]} \var(\widehat{T}_{ij}) \le \sum_{i,j\in [k]} \frac{16kT_{ij}}{n-1} = \frac{16k^2}{n-1},
\end{align*}
completing the proof.
\end{proof}
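The pair-count estimator in the proof above is straightforward to implement. The following sketch is illustrative only (the particular $T$, trajectory length, and random seed are arbitrary choices, and states are relabeled $0,\ldots,k-1$): it simulates a stationary chain and checks that the squared Frobenius error falls below the bound $\frac{16k^2}{n-1}$ of \prettyref{lmm:L2_upper}, which holds in expectation; the realized error is typically much smaller.

```python
import random

def estimate_T(Y, k):
    """Pair-count estimator from the proof of the lemma:
    T_hat[i][j] = k * #{t : (Y_t, Y_{t+1}) = (i, j)} / (n - 1)."""
    n = len(Y)
    counts = [[0] * k for _ in range(k)]
    for t in range(n - 1):
        counts[Y[t]][Y[t + 1]] += 1
    return [[k * c / (n - 1) for c in row] for row in counts]

def simulate(T, n, rng):
    """Stationary chain for a symmetric T: uniform start, then step by rows."""
    k = len(T)
    y = rng.randrange(k)
    path = [y]
    for _ in range(n - 1):
        y = rng.choices(range(k), weights=T[y])[0]
        path.append(y)
    return path

rng = random.Random(1)
k, n = 4, 50_000
# A symmetric stochastic T with entries in [1/(2k), 3/(2k)], as in the lemma.
T = [[0.375, 0.125, 0.250, 0.250],
     [0.125, 0.375, 0.250, 0.250],
     [0.250, 0.250, 0.375, 0.125],
     [0.250, 0.250, 0.125, 0.375]]
T_hat = estimate_T(simulate(T, n, rng), k)
frob_sq = sum((T_hat[i][j] - T[i][j]) ** 2 for i in range(k) for j in range(k))
assert frob_sq <= 16 * k ** 2 / (n - 1)  # lemma's bound; holds w.h.p. here
```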
\section{Spectral gap-dependent risk bounds}
\label{sec:pf-gamma}
\subsection{Two states}\label{sec:lower bound}
To show \prettyref{thm:gamma2},
let us prove a refined version. In addition to the absolute spectral gap defined in \prettyref{eq:gamma-def}, define the spectral gap
\begin{equation}
\gamma \triangleq 1-\lambda_2
\label{eq:gamma-def1}
\end{equation}
and $\calM_k'(\gamma_0)$ the collection of transition matrices whose spectral gap exceeds $\gamma_0$.
Paralleling $\Risk_{k,n}(\gamma_0)$ defined in
\prettyref{eq:risk-gamma}, define $\Risk_{k,n}'(\gamma_0)$ as the minimax prediction risk restricted to $M\in \calM_k'(\gamma_0)$.
Since $\gamma\geq \gamma_*$, we have
$\calM_k(\gamma_0)\subseteq \calM_k'(\gamma_0)$ and hence
$\Risk_{k,n}'(\gamma_0) \geq \Risk_{k,n}(\gamma_0)$.
Nevertheless, the next result shows that for $k=2$ they have the same rate:
\begin{theorem}[Spectral gap dependent rates for binary chain]
\label{thm:gamma2-refined}
For any $\gamma_0\in (0,1)$
\[
\Risk_{2,n}(\gamma_0)
\asymp
\Risk_{2,n}'(\gamma_0)
\asymp \frac{1}{n} \max\sth{1,\log\log\pth{\min\sth{n,\frac 1{\gamma_0}}}}.
\]
\end{theorem}
We first prove the upper bound on $\Risk_{2,n}'$. Note that it is enough to show
\begin{align}
\Risk_{2,n}'(\gamma_0)
\lesssim {\log\log\pth{1/ \gamma_0}\over n},\quad
\text{if } n^{-0.9}\leq \gamma_0\leq e^{-e^{5}}.
\label{eq:gamma2a}
\end{align}
Indeed, for any $\gamma_0\leq n^{-0.9}$, the upper bound $\calO\pth{\log\log n/n}$ proven in \cite{FOPS2016}, which does not depend on the spectral gap, suffices; for any $\gamma_0> e^{-e^{5}}$, by monotonicity we can use the upper bound $\Risk_{2,n}'(e^{-e^{5}})$.
We now define an estimator that achieves \prettyref{eq:gamma2a}. Following \cite{FOPS2016}, consider trajectories with a single transition, namely, $\sth{2^{n-\ell}1^\ell,1^{n-\ell}2^\ell:1\leq \ell\leq n-1}$, where $2^{n-\ell}1^{\ell}$ denotes the trajectory $(x_1,\cdots,x_n)$ with $x_1=\cdots=x_{n-\ell}=2$ and $x_{n-\ell+1}=\cdots=x_n=1$.
We refer to this type of $x^n$ as \emph{step sequences}. For all non-step sequences $x^n$, we apply the add-$\frac 12$ estimator similar to \prettyref{eq:addone}, namely
\begin{align*}
\widehat{M}_{x^n}(j|i) = \frac{N_{ij}+\frac 12}{N_i+1}, \qquad i,j\in \{1,2\},
\end{align*}
where the empirical counts $N_i$ and $N_{ij}$ are defined in \prettyref{eq:transition.count};
for step sequences of the form $2^{n-\ell}1^{\ell}$, we estimate by
\begin{align}\label{eq:z.ell.est}
\hat \mymat_\ell(2|1)=\frac{1}{\ell \log(1/\gamma_0)}, \quad
\hat \mymat_\ell(1|1)=1-\hat \mymat_\ell(2|1).
\end{align}
The other type of step sequences $1^{n-\ell}2^{\ell}$ are dealt with by symmetry.
Due to symmetry it suffices to analyze the risk for sequences ending in 1. The risk of the add-$\frac 12$ estimator for the non-step sequence $1^n$ is bounded as
\begin{align*}
\EE\qth{\indc{X^n=1^n}D({\mymat(\cdot|1)}\|{\hat \mymat_{1^n}(\cdot|1)})}
&=P_{X^n}(1^n)\sth{M(2|1)\log\pth{ \frac{M(2|1)}{1/(2n)} }+M(1|1)\log\pth{M(1|1)\over (n-\frac 12)/n}}
\nonumber\\
&\leq (1-M(2|1))^{n-1}\sth{2M(2|1)^2n+\log\pth{{n\over n-\frac 12}}}
\lesssim \frac 1n,
\end{align*}
where the last step follows by using $(1-x)^{n-1}x^2\leq n^{-2}$ with $x=M(2|1)$ and $\log x\leq x-1$. From \cite[Lemma 7, Lemma 8]{FOPS2016}, the total risk of the other non-step sequences is bounded from above by $\calO\pth{\frac 1n}$, and hence it is enough to analyze the risk for step sequences, and further, by symmetry, those in $\sth{2^{n-\ell}1^\ell:1\leq \ell\leq n-1}$. The desired upper bound \prettyref{eq:gamma2a} then follows from \prettyref{lmm:stepbound} below.
\begin{lemma}\label{lmm:stepbound}
For any $n^{-0.9}\leq \gamma_0\leq e^{-e^{5}}$, $\hat \mymat_\ell(\cdot|1)$ in \eqref{eq:z.ell.est} satisfies
$$
\sup_{M\in \calM'_2(\gamma_0)}\sum_{\ell=1}^{n-1}\EE\qth{\indc{X^n=2^{n-\ell}1^\ell}D({\mymat(\cdot|1)}\|{\hat \mymat_\ell(\cdot|1)})}
\lesssim {\log\log(1/\gamma_0)\over n}.
$$
\end{lemma}
\begin{proof}\label{app:stepbound}
For each $\ell$, using $\log\pth{ 1\over {1-x}}\leq 2x$ for $x\leq \frac 12$ with $x=\frac 1{\ell\log(1/\gamma_0)}$,
\begin{align}
D({\mymat(\cdot|1)}\|{\hat \mymat_\ell(\cdot|1)})
&= {M(1|1)\log \pth{M(1|1)\over 1-\frac 1{\ell{\log(1/\gamma_0)}}}+{\mymat(2|1)}\log \pth{{\mymat(2|1)}\ell{\log(1/\gamma_0)}}}
\nonumber\\
&\lesssim {1\over \ell {\log(1/\gamma_0)}}+{\mymat(2|1)}\log(M(2|1)\ell)+{\mymat(2|1)}\log {{\log(1/\gamma_0)}}
\nonumber\\
&\le {1\over \ell {\log(1/\gamma_0)}}+\mymat(2|1)\log_+(\mymat(2|1)\ell) + \mymat(2|1) {\log\log(1/\gamma_0)}\label{eq:appgub5},
\end{align}
where we define $\log_+(x) = \max\{1,\log x\}$.
Recall the following Chebyshev's sum inequality: for $a_1\le a_2\le \cdots\le a_n$ and $b_1\ge b_2\ge \cdots\ge b_n$, it holds that
\begin{align*}
\sum_{i=1}^n a_ib_i \le \frac{1}{n}\pth{\sum_{i=1}^n a_i}\pth{\sum_{i=1}^n b_i}.
\end{align*}
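For completeness, Chebyshev's sum inequality follows from the identity
\begin{align*}
\frac{1}{n}\pth{\sum_{i=1}^n a_i}\pth{\sum_{j=1}^n b_j}-\sum_{i=1}^n a_ib_i
=\frac{1}{2n}\sum_{i=1}^n\sum_{j=1}^n (a_i-a_j)(b_j-b_i),
\end{align*}
in which every summand on the right-hand side is non-negative, since $a_i-a_j$ and $b_j-b_i$ always share the same sign when $\{a_i\}$ is non-decreasing and $\{b_i\}$ is non-increasing.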
The following inequalities are thus direct corollaries: for $x,y \in [0,1]$,
\begin{align}
\sum_{\ell=1}^{n-1} x(1-x)^{n-\ell-1}y(1-y)^{\ell-1} &\le \frac{1}{n-1}\pth{\sum_{\ell=1}^{n-1} x(1-x)^{n-\ell-1}}\pth{\sum_{\ell=1}^{n-1} y(1-y)^{\ell-1}}\nonumber\\
& \le \frac{1}{n-1}, \label{eq:chebyshev-1} \\
\sum_{\ell=1}^{n-1} x(1-x)^{n-\ell-1}y(1-y)^{\ell-1}\log_+(\ell y) &\le \frac{1}{n-1}\pth{\sum_{\ell=1}^{n-1} x(1-x)^{n-\ell-1}}\pth{\sum_{\ell=1}^{n-1} y(1-y)^{\ell-1}\log_+(\ell y)} \nonumber \\
&\le \frac{1}{n-1}\sum_{\ell=1}^{n-1}y(1-y)^{\ell-1}(1+\ell y) \le \frac{2}{n-1}, \label{eq:chebyshev-2}
\end{align}
where in \eqref{eq:chebyshev-2} we need to verify that $\ell\mapsto y(1-y)^{\ell-1}\log_+(\ell y)$ is non-increasing. To verify this, we may assume w.l.o.g.~that $(\ell+1)y\ge e$ (otherwise $\log_+((\ell+1)y)=1\le \log_+(\ell y)$ and the claim is immediate), and therefore
\begin{align*}
\frac{y(1-y)^{\ell}\log_+((\ell+1)y )}{y(1-y)^{\ell-1}\log_+(\ell y)} &= \frac{(1-y)\log((\ell+1)y)}{\log_+(\ell y)} \le \pth{1-\frac{e}{\ell+1}}\pth{1 + \frac{\log(1+1/\ell)}{\log_+(\ell y)} } \\
&\le \pth{1-\frac{e}{\ell+1}}\pth{1+\frac{1}{\ell}} < 1 + \frac{1}{\ell} - \frac{e}{\ell+1} < 1.
\end{align*}
Therefore,
\begin{align}
&\sum_{\ell=1}^{n-1}\EE\qth{\indc{X^n=2^{n-\ell}1^\ell}D(\mymat(\cdot|1)\|\hat \mymat_\ell(\cdot|1))} \nonumber\\
&\le \sum_{\ell=1}^{n-1}M(2|2)^{n-\ell-1}M(1|2)
M(1|1)^{\ell-1}D(\mymat(\cdot|1)\|\hat \mymat_\ell(\cdot|1))\nonumber\\
&\stepa{\lesssim} \sum_{\ell=1}^{n-1}M(2|2)^{n-\ell-1}M(1|2)M(1|1)^{\ell-1}
\pth{\frac {1}{\ell \log(1/\gamma_0)}+M(2|1)\log_+(M(2|1)\ell) + M(2|1)\log\log(1/\gamma_0)} \nonumber\\
&\stepb{\le} \sum_{\ell=1}^{n-1} \frac{M(2|2)^{n-\ell-1}M(1|2)M(1|1)^{\ell-1}}{\ell \log(1/\gamma_0)} + \frac{2+\log\log(1/\gamma_0)}{n-1}, \label{eq:appgub6}
\end{align}
where (a) is due to \eqref{eq:appgub5}, (b) follows from \eqref{eq:chebyshev-1} and \eqref{eq:chebyshev-2} applied to $x=M(1|2), y=M(2|1)$. To deal with the remaining sum, we distinguish into two cases. Sticking to the above definitions of $x$ and $y$, if $y > \gamma_0/2$, then
\begin{align*}
\sum_{\ell=1}^{n-1}\frac{x(1-x)^{n-\ell-1}(1-y)^{\ell-1}}{\ell} \le \frac{1}{n-1}\pth{\sum_{\ell=1}^{n-1} x(1-x)^{n-\ell-1} }\pth{\sum_{\ell=1}^{n-1} \frac{(1-y)^{\ell-1}}{\ell}} \le \frac{\log(2/\gamma_0)}{n-1},
\end{align*}
where the last step has used $\sum_{\ell=1}^\infty t^{\ell-1}/\ell = \log(1/(1-t))$ for $|t|<1$. If $y\le \gamma_0/2$, notice that for a two-state chain the
spectral gap is given explicitly by
$\gamma=M(1|2)+M(2|1)=x+y$, so the assumption $\gamma\ge \gamma_0$ implies that $x\ge \gamma_0/2$. In this case,
\begin{align*}
\sum_{\ell=1}^{n-1}\frac{x(1-x)^{n-\ell-1}(1-y)^{\ell-1}}{\ell} &\le \sum_{\ell < n/2} (1-x)^{n/2-1} + \sum_{\ell\ge n/2} \frac{x(1-x)^{n-\ell-1}}{n/2} \\
&\le \frac{n}{2}e^{-(n/2-1)\gamma_0} + \frac{2}{n} \lesssim \frac{1}{n},
\end{align*}
thanks to the assumption $\gamma_0\ge n^{-0.9}$. Therefore, in both cases, the first term in \eqref{eq:appgub6} is $O(1/n)$, as desired.
\end{proof}
Next we prove the lower bound on $\Risk_{2,n}$. It is enough to show that
$
\Risk_{2,n}(\gamma_0)
\gtrsim \frac{1}{n}\log\log\pth{1/ \gamma_0}$
for $n^{-1}\leq \gamma_0\leq e^{-e^{5}}$.
Indeed, for $\gamma_0\geq e^{-e^{5}}$, we can apply the result in the \iid setting (see, e.g., \cite{BFSS02}), in which the absolute spectral gap is 1, to obtain the usual parametric-rate lower bound $\Omega\pth{\frac 1n}$;
for $\gamma_0 <n^{-1}$, we simply bound $\Risk_{2,n}(\gamma_0)$ from below by $\Risk_{2,n}(n^{-1})$.
Define \begin{align}\label{eq:alpha.beta}
\alpha=\log(1/\gamma_0),\quad\beta=\left\lceil {\alpha\over 5\log\alpha} \right\rceil,
\end{align}
and consider the prior distribution
\begin{align}\label{eq:twostate.prior}
\scr M=\Unif(\calM),
\quad \calM&=\sth{\mymat:{\mymat(1|2)}=\frac 1n,\ {\mymat(2|1)}={1\over \alpha^m},\ m\in \naturals \cap\qth{\beta,5\beta}}.
\end{align}
Then the lower bound part of \prettyref{thm:gamma2} follows from the next lemma.
\begin{lemma}\label{lmm:bayes.risk}
Assume that $n^{-0.9}\leq \gamma_0\leq e^{-e^{5}}$. Then
\begin{enumerate}[label=(\roman*)]
\item $\gamma_*> \gamma_0$ for each $M\in\calM$;
\item the Bayes risk with respect to the prior $\scr M$ is at least $\Omega\pth{\log\log(1/\gamma_0)\over n}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part (i) follows by noting that the absolute spectral gap of any two-state transition matrix $M$ is $1-\abs{1-{\mymat(2|1)}-{\mymat(1|2)}}$, and that for any $M\in \calM$ we have $M(2|1)\in \pth{\alpha^{-5\beta},\alpha^{-\beta}}\subseteq(\gamma_0,\gamma_0^{1/5})\subseteq (\gamma_0,1/2)$, which guarantees
$
\gamma_*=M(1|2)+M(2|1)>\gamma_0.
$
To show part (ii) we lower bound the Bayes risk when the observed trajectory $X^n$ is a step sequence in $\sth{2^{n-\ell}1^\ell:1\leq \ell\leq n-1}$. Our argument closely follows that of \cite[Theorem 1]{HOP18}. Since $\gamma_0\ge n^{-1}$, for each $M\in \calM$, the corresponding stationary distribution $\pi$ satisfies
\begin{align*}
\pi_2 = \frac{M(2|1)}{M(2|1)+M(1|2)} \ge \frac{1}{2}.
\end{align*}
Denote by $\Risk(\scr M)$ the Bayes risk with respect to the prior $\scr M$ and by ${\hat \mymat^{\mathsf{B}}_\ell(\cdot|1)}$
the Bayes estimator for prior $\scr M$ given $X^n=2^{n-\ell}1^\ell$. Note that
\begin{equation}
\prob{X^n = 2^{n-\ell}1^\ell} = \pi_2 \pth{1-\frac{1}{n}}^{n-\ell-1} \frac{1}{n} M(1|1)^{\ell-1} \geq \frac{1}{2en} M(1|1)^{\ell-1}.
\label{eq:probstepseq}
\end{equation}
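Here the inequality uses $\pi_2\geq \frac 12$ together with
\begin{align*}
\pth{1-\frac 1n}^{n-\ell-1}
\geq \pth{1-\frac 1n}^{n-1}
\geq \frac 1e,
\end{align*}
where the second step follows from $\log\pth{1-\frac 1n}\geq -{1\over n-1}$ for $n\geq 2$.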
Then
\begin{align}
\Risk(\scr M)
&\geq \EE_{M\sim \scr M}\qth{\sum_{\ell=1}^{n-1}\EE\qth{\indc{X^n=2^{n-\ell}1^\ell}
D({\mymat(\cdot|1)}\|{\hat \mymat^{\mathsf{B}}_\ell(\cdot|1)})}}
\nonumber\\
&\geq \EE_{M\sim \scr M}\qth{\sum_{\ell=1}^{n-1}{M(1|1)^{\ell-1}\over 2en}
D({\mymat(\cdot|1)}\|{\hat \mymat^{\mathsf{B}}_\ell(\cdot|1)})} \nonumber \\
&= \frac{1}{2en}\sum_{\ell=1}^{n-1}\EE_{M\sim \scr M} \qth{M(1|1)^{\ell-1}
D({\mymat(\cdot|1)}\|{\hat \mymat^{\mathsf{B}}_\ell(\cdot|1)})}. \label{eq:MC_bayes_risk_gamma}
\end{align}
Recalling the general form of the Bayes estimator in \prettyref{eq:MC_bayes} and in view of \prettyref{eq:probstepseq}, we get
\begin{align}\label{eq:MC_bayes_est}
\widehat{M}_{\ell}^{\mathsf{B}}(2|1) = \frac{\EE_{M\sim \scr M}[M(1|1)^{\ell-1}M(2|1)]}{\EE_{M\sim \scr M}[M(1|1)^{\ell-1}]}, \quad \widehat{M}_{\ell}^{\mathsf{B}}(1|1) = 1 - \widehat{M}_{\ell}^{\mathsf{B}}(2|1).
\end{align}
Plugging \eqref{eq:MC_bayes_est} into \eqref{eq:MC_bayes_risk_gamma}, and using
\begin{align*}
D((x,1-x)\|(y,1-y))
= x\log{x\over y}+(1-x) \log{1-x\over 1-y}
\geq x\max\sth{0,\log{x\over y}-1},
\end{align*}
we arrive at the following lower bound for the Bayes risk:
\begin{align}
&\Risk(\scr M) \nonumber\\
\ge & \frac{1}{2en}\sum_{\ell=1}^{n-1}\EE_{M\sim \scr M} \qth{M(1|1)^{\ell-1} M(2|1)\max\left\{0, \log\pth{\frac{M(2|1)\cdot \EE_{M\sim \scr M}[M(1|1)^{\ell-1}]}{\EE_{M\sim \scr M}[M(1|1)^{\ell-1}M(2|1)]}} -1\right\}}. \label{eq:KL.lowerbound}
\end{align}
Under the prior $\scr M$, $M(2|1)=1-M(1|1)=\alpha^{-m}$ with $\beta \leq m \leq 5\beta$.
We further lower bound \eqref{eq:KL.lowerbound} by summing over an appropriate range of $\ell$. For any $m\in [\beta,3\beta]$, define
\begin{align*}
\ell_1(m) = \left\lceil \frac{\alpha^m}{\log \alpha} \right\rceil, \qquad \ell_2(m) = \left\lfloor \alpha^m\log \alpha \right\rfloor.
\end{align*}
Since $\gamma_0 \le e^{-e^5}$, our choice of $\alpha$ ensures that the intervals $\{[\ell_1(m),\ell_2(m)]\}_{\beta\le m\le 3\beta}$ are disjoint. We will establish the following claim: for all $m\in [\beta,3\beta]$ and $\ell\in [\ell_1(m), \ell_2(m)]$, it holds that
\begin{align}\label{eq:MC_bayes-claim}
\frac{\alpha^{-m}\cdot \EE_{M\sim \scr M}[M(1|1)^{\ell-1}]}{\EE_{M\sim \scr M}[M(1|1)^{\ell-1}M(2|1)]} \gtrsim \frac{\log(1/\gamma_0)}{\log\log(1/\gamma_0)}.
\end{align}
We first complete the proof of the Bayes risk bound assuming \eqref{eq:MC_bayes-claim}. Using \eqref{eq:KL.lowerbound} and \eqref{eq:MC_bayes-claim}, we have
\begin{align*}
\Risk(\scr M)
&\gtrsim {1\over n}\cdot
\frac{1}{4\beta}\sum_{m=\beta}^{3\beta}\sum_{\ell=\ell_1(m)}^{\ell_2(m)}\alpha^{-m}(1-\alpha^{-m})^{\ell-1}\cdot \log\log(1/\gamma_0)
\nonumber\\
&= {\log\log(1/\gamma_0)\over 4n\beta}
\sum_{m=\beta}^{3\beta}
\sth{(1-\alpha^{-m})^{\ell_1(m) - 1 }-(1-\alpha^{-m})^{\ell_2(m)}}\nonumber \\
&\stepa{\ge} {\log\log(1/\gamma_0)\over 4n\beta}\sum_{m=\beta}^{3\beta}\pth{\pth{\frac 14}^{1\over \log\alpha}-\pth{\frac 1e}^{-1+\log\alpha }}
\gtrsim {\log\log(1/\gamma_0)\over n},
\end{align*}
with (a) following from $\frac 14\leq (1-x)^{1\over x}\leq \frac 1e$ if $x\leq \frac 12$, and $\alpha^{-m}\leq \alpha^{-\beta}\le\gamma_0^{1/5}\leq \frac 12$.
Next we prove the claim \eqref{eq:MC_bayes-claim}. Expanding the expectations under the prior \eqref{eq:twostate.prior}, we write the LHS of \eqref{eq:MC_bayes-claim} as
\begin{align*}
\frac{\alpha^{-m}\cdot \EE_{M\sim \scr M}[M(1|1)^{\ell-1}]}{\EE_{M\sim \scr M}[M(1|1)^{\ell-1}M(2|1)]} = {X_\ell + A_\ell+ B_\ell\over X_\ell + C_\ell+D_\ell},
\end{align*}
where
\begin{align*}
X_\ell&=\pth{1-\alpha^{-m}}^\ell, \quad
A_\ell= \sum_{j={\beta}}^{m-1}\pth{1-\alpha^{-j}}^\ell, \quad
B_\ell= \sum_{j=m+1}^{5{\beta}}\pth{1-\alpha^{-j}}^\ell,\\
C_\ell&= \sum_{j={\beta}}^{m-1}\pth{1-\alpha^{-j}}^\ell{\alpha}^{m-j}, \quad
D_\ell= \sum_{j=m+1}^{5{\beta}}\pth{1-\alpha^{-j}}^\ell{\alpha}^{m-j}.
\end{align*}
We bound each of the terms individually.
Clearly, $X_\ell\in (0,1)$ and $A_\ell\geq 0$.
Thus it suffices to show that $B_\ell \gtrsim \beta$ and $C_\ell,D_\ell\lesssim 1$,
for $m\in [\beta,3\beta]$ and $
\ell_1(m)\leq \ell\leq \ell_2(m)$.
Indeed,
\begin{itemize}
\item
For $j\geq m+1$, we have
\begin{align*}
\pth{1-{\alpha}^{-j}}^\ell
\geq\pth{1-{\alpha}^{-j}}^{\ell_2(m)}
\stepa{\geq} \pth{1/4}^{\ell_2(m)\over {\alpha}^j}
\geq \pth{1/4}^{\log {\alpha} \over {\alpha}}
\geq 1/4,
\end{align*}
where in (a) we use the inequality $(1-x)^{1/x}\geq 1/4$ for $x\leq 1/2$. Consequently, $B_\ell \ge \beta/2$, since $B_\ell$ contains $5\beta-m\geq 2\beta$ terms, each at least $\frac 14$;
\item
For $j\le m-1$, we have
\begin{align*}
\pth{1-{\alpha}^{-j}}^\ell
\leq \pth{1-{\alpha}^{-j}}^{\ell_1(m)} \stepb{\le} e^{-\frac{{\alpha}^{m-j}}{\log \alpha}}
= \gamma_0^{{\alpha}^{m-j-1}\over \log {\alpha}},
\end{align*}
where (b) follows from $(1-x)^{1/x}\le 1/e$ and the definition of $\ell_1(m)$. Consequently,
\begin{align*}
C_\ell \le \gamma_0^{\alpha\over \log\alpha}\sum_{j={\beta}}^{m-2}{\alpha}^{m-j}
+{\alpha}\gamma_0^{1\over \log {\alpha}}
\leq e^{-{\alpha^2\over \log \alpha} + (2\beta+1)\log\alpha }
+e^{\log\alpha -\frac{\alpha}{\log\alpha}}
\leq 2,
\end{align*}
where the last step uses the definition of $\beta$ in \eqref{eq:alpha.beta};
\item
$D_{\ell} \le \sum_{j=m+1}^{5\beta} \alpha^{m-j} \le 1$, since $\alpha =\log\frac{1}{\gamma_0}\geq e^5$.
\end{itemize}
Combining the above bounds completes the proof of \eqref{eq:MC_bayes-claim}.
\end{proof}
\subsection{$k$ states}
\label{sec:gammak}
\subsubsection{Proof of \prettyref{thm:gammak} (i)}
Notice that the prediction problem consists of $k$ sub-problems of estimating the individual rows of $M$, so it suffices to show that the contribution from each of them is $O\pth{\frac kn}$. In particular, assuming the chain terminates in state 1, we bound the risk of estimating the first row by the add-one estimator $\hat M^{+1}(j|1)={N_{1j}+1\over N_1+k}$.
Under the absolute spectral gap condition $\gamma_*\geq \gamma_0$, we show
\begin{align}
\EE\qth{\indc{X_n=1}D\pth{{\mymat(\cdot|1)}\|{\hat\mymat^{+1}(\cdot|1)}}}
\lesssim {k\over n}\pth{1+\sqrt{\log k\over k\gamma_0^4}}.
\label{eq:risk.row1}
\end{align}
By symmetry,
we get the desired $\Risk_{k,n}(\gamma_0)\lesssim {k^2\over n}\pth{1+\sqrt{\log k\over k\gamma_0^4}}$.
The basic steps of our analysis are as follows:
\begin{itemize}
\item When $N_1$ is substantially smaller than its mean, we can bound the risk using the worst-case risk bound for add-one estimators and the
probability of this rare event.
\item Otherwise, we decompose the prediction risk as
\begin{align*}
D(\mymat(\cdot|1)\|\hat\mymat^{+1}(\cdot|1))
=\sum_{j=1}^k\qth{\mymat(j|1)\log\pth{\mymat(j|1)(N_1+k)\over N_{1j}+1}-\mymat(j|1)+{N_{1j}+1\over N_1+k}}.
\end{align*}
We then analyze each term depending on whether $N_{1j}$ is typical or not.
Unless $N_{1j}$ is atypically small, the add-one estimator works well, and its risk can be bounded by a quadratic term.
\end{itemize}
To analyze the concentration of the empirical counts we use the following moment bounds. The proofs are deferred to \prettyref{app:moment.bounds}.
\begin{lemma}\label{lmm:moment.reversebound}
Finite reversible and irreducible chains satisfy the following moment bounds:
\begin{enumerate}[label=(\roman*)]
\item \label{lmm:secondmoment.reversebound}
${\EE\qth{\pth{N_{ij}-N_i{\mymat(j|i)}}^2|X_n=i}}
\lesssim
n\pi_i{\mymat(j|i)}(1-{\mymat(j|i)})+{\sqrt{M(j|i)}\over \gamma_*}
+{{M(j|i)}\over \gamma_*^2}$
\item
\label{lmm:fourthmoment.reversebound}
$\EE\qth{\pth{N_{ij}-N_i{\mymat(j|i)}}^4|X_n=i}
\lesssim (n\pi_i{\mymat(j|i)}(1-{\mymat(j|i)}))^2
+{\sqrt{{\mymat(j|i)}}\over \gamma_*}+{{\mymat(j|i)}^2\over \gamma_*^4}$
\item\label{lmm:fourthmoment.centralbound}
$\EE\qth{\pth{N_i-(n-1)\pi_i}^4|X_n=i}
\lesssim {n^2\pi_i^2\over \gamma_*^2}
+{1\over \gamma_*^4}.$
\end{enumerate}
\end{lemma}
When $\gamma_*$ is large, these bounds show that the moments behave as if, for each $i\in [k]$, $N_i$ were approximately Binomial$(n-1,\pi_i)$ and $N_{ij}$ were approximately Binomial$(N_i,M(j|i))$, which is what happens under \iid sampling. For \iid models, \cite{KOPS15} showed that the add-one estimator achieves an $\calO\pth{\frac kn}$ risk bound, which we aim to match here as well. In addition, the dependence of the above moments on $\gamma_*$ gives rise to sufficient conditions that guarantee the parametric rate. The technical details are given below.
We decompose the left hand side in \eqref{eq:risk.row1} based on $N_1$ as
\begin{align*}
\EE\qth{\indc{X_n=1}D\pth{{\mymat(\cdot|1)}\|{\hat\mymat^{+1}(\cdot|1)}}}
=\EE\qth{\indc{A^\leq}D\pth{{\mymat(\cdot|1)}\|{\hat\mymat^{+1}(\cdot|1)}}}
+\EE\qth{\indc{A^>}D\pth{{\mymat(\cdot|1)}\|{\hat\mymat^{+1}(\cdot|1)}}}
\end{align*}
where the typical set $A^>$ and atypical set $A^\leq$ are defined as
\begin{align*}
A^\leq\eqdef\sth{X_n=1, N_1\leq {(n-1)\pi_1/ 2}},
\quad A^>\eqdef\sth{X_n=1, N_1> {(n-1)\pi_1/ 2}}.
\end{align*}
For the atypical case,
note the following deterministic property of the add-one estimator.
Let $\hat Q$ be an add-one estimator with sample size $n$ and alphabet size $k$
of the form $\hat Q_i = \frac{n_i+1}{n+k}$, where $\sum n_i=n$. Since
$\hat Q$ is bounded below by $\frac{1}{n+k}$ everywhere, for any distribution $P$, we have
\begin{equation}
D(P \|\hat Q) \leq \log(n+k).
\label{eq:addone-worstcase}
\end{equation}
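Spelled out, since each $P_i\leq 1$ and $\hat Q_i\geq {1\over n+k}$,
\begin{align*}
D(P\|\hat Q)
=\sum_{i} P_i\log{P_i\over \hat Q_i}
\leq \sum_{i} P_i\log{1\over \hat Q_i}
\leq \log(n+k)\sum_{i} P_i
=\log(n+k).
\end{align*}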
Applying this bound on the event $A^\leq$, we have
\begin{align}
&\EE\qth{\indc{A^\leq}D\pth{{\mymat(\cdot|1)}\|{\hat\mymat^{+1}(\cdot|1)}}}
\nonumber\\
&\leq \log \pth{n\pi_1+k}
\PP\qth{X_n=1,N_1\leq (n-1)\pi_1/2}
\nonumber\\
&\stepa{\lesssim} \indc{n\pi_1\gamma_*\leq 10}
\pi_1\log\pth{n\pi_1+k}
+\indc{n\pi_1\gamma_*>10} \pi_1\log \pth{{n\pi_1+k}}
{\EE\qth{\pth{N_1-(n-1)\pi_1}^4|X_n=1}
\over n^4\pi_1^4}
\label{eq:moment.bound.N1}\\
&\stepb{\leq}\indc{n\pi_1\gamma_*\leq 10}
{10\over n\gamma_*}\log\pth{{10\over \gamma_*}+k}
+\indc{n\pi_1\gamma_*>10}\log \pth{n\pi_1+k}\pth{{1\over n^2\pi_1\gamma_*^2}
+{1\over n^4\pi_1^3\gamma_*^4}}
\nonumber\\
&\stepc{\lesssim} \frac 1n
\sth{\indc{n\pi_1\gamma_*\leq 10}
{\log(1/\gamma_*)+\log k\over \gamma_*}
+\indc{n\pi_1\gamma_*>10}\pth{n\pi_1+\log k}\pth{{1\over n\pi_1\gamma_*^2}
+{1\over n^3\pi_1^3\gamma_*^4}}}
\nonumber\\
&{\lesssim} {1\over n}
\sth{\indc{n\pi_1\gamma_*\leq 10}\pth{\frac 1{\gamma_*^2}+{\log k\over \gamma_*}}
+\indc{n\pi_1\gamma_*>10}\pth{{1\over\gamma_*^2}
+{\log k\over \gamma_*}}}
\lesssim {1\over n\gamma_0^2}+{\log k\over n\gamma_0},
\label{eq:bound.A<}
\end{align}
where (a) follows from Markov's inequality, (b) from \prettyref{lmm:moment.reversebound}(iii), and (c) uses $x+y\leq xy$ for $x,y\geq 2$.
Next we bound $\EE\qth{\indc{A^>}D\pth{{\mymat(\cdot|1)}\|{\hat\mymat^{+1}(\cdot|1)}}}$.
Define \begin{align*}
\Delta_i
=\mymat(i|1)\log\pth{\mymat(i|1)\over \hat\mymat^{+1}(i|1)}-\mymat(i|1)+\hat\mymat^{+1}(i|1).
\end{align*}
As $D({\mymat (\cdot|1)}\|{\hat\mymat^{+1} (\cdot|1)})= \sum_{i=1}^k\Delta_i$, it suffices to bound $\EE\qth{\indc{A^>}\Delta_i}$ for each $i$. For some $r\geq 1$ to be optimized later, we consider the following two cases separately.
\paragraph{Case (a) $n\pi_1\leq r$ or $n\pi_1{\mymat(i|1)}\leq 10$:}
Using the fact that $y\log y-y+1\leq (y-1)^2$ for $y\geq 0$, with $y={M(i|1)\over \hat M^{+1}(i|1)}={{\mymat(i|1)}(N_1+k)\over N_{1i}+1}$, we get
\begin{align}\label{eq:chisq.bound}
\Delta_i\leq {\pth{{\mymat(i|1)}N_1-N_{1i}+{\mymat(i|1)}k-1}^2\over \pth{N_1+k}\pth{N_{1i}+1}}.
\end{align}
This implies
\begin{align}
\EE\qth{\indc{A^>}\Delta_i}
&\leq \EE\qth{\indc{A^>}\pth{{\mymat(i|1)}N_1-N_{1i}+{\mymat(i|1)}k-1}^2\over
\pth{N_1+k}\pth{N_{1i}+1}}
\nonumber \\
&\stepa{\lesssim} {{\EE\qth{\indc{A^>}
\pth{{\mymat(i|1)}N_1-N_{1i}}^2}
+k^2\pi_1{\mymat(i|1)}^2+\pi_1}\over n\pi_1+k}
\nonumber \\
&\stepb{\lesssim} {\pi_1\EE\qth{\left.{\pth{{\mymat(i|1)}N_1-N_{1i}}^2}\right|X_n=1}\over n\pi_1+k}
+{1+rk{M(i|1)}\over n}
\end{align}
where (a) follows from $N_1>{(n-1)\pi_1\over 2}$ in $A^>$ and the fact that $(x+y+z)^2\leq 3(x^2+y^2+z^2)$;
(b) uses the assumption that either $n\pi_1\leq r$ or $n\pi_1{\mymat(i|1)}\leq 10$. Applying \prettyref{lmm:moment.reversebound}\ref{lmm:secondmoment.reversebound} and the fact that $x+x^2\leq 2(1+x^2)$,
and continuing from the last display, we get
\begin{align*}
&\EE\qth{\indc{A^>}\Delta_i}
\lesssim
{n\pi_1{\mymat(i|1)}+\pth{1+{{M(i|1)}\over \gamma^2_*}}
\over n}
+{1+rk{M(i|1)}\over n}
\lesssim {{1+rk{M(i|1)} \over n}+{M(i|1)\over n\gamma_0^2}}.
\end{align*}
Hence
\begin{align}
\EE\qth{\indc{A^>}D({\mymat (\cdot|1)}\|{\hat\mymat^{+1} (\cdot|1)})}
= \sum_{i=1}^k\EE\qth{\indc{A^>}\Delta_i}
\lesssim \frac {rk}n +{1\over n\gamma_0^2}.
\label{eq:bound.A>.case(a)}
\end{align}
\paragraph{Case (b) $n\pi_1> r$ and $n\pi_1{\mymat(i|1)}> 10$:}
We decompose $A^>$ based on the count $N_{1i}$ into an atypical part $B^{\leq}$ and a typical part $B^>$
\begin{align*}
B^{\leq}&\eqdef \sth{X_n=1,N_1>{(n-1)\pi_1/2},N_{1i}\leq {(n-1)\pi_1{\mymat(i|1)}/4}}\\
B^>&\eqdef \sth{X_n=1,N_1>{(n-1)\pi_1/2},N_{1i}> {(n-1)\pi_1{\mymat(i|1)}/4}}
\end{align*}
and bound each of $\EE\qth{\indc{B^\leq}\Delta_i}$ and $\EE\qth{\indc{B^>}\Delta_i}$ separately.
\paragraph{Bound on $\EE\qth{\indc{B^{\leq}}\Delta_i}$}
Using $\hat M^{+1}(i|1)\geq {1\over N_1+k}$ and the fact that $N_{1i}<N_1M(i|1)/2$ on $B^\leq$ (since $N_1>(n-1)\pi_1/2$ there), we get
\begin{align}
\EE\qth{\indc{B^{\leq}}\Delta_i}
&=\EE\qth{\indc{B^{\leq}}{\mymat(i|1)}\log \pth{{\mymat(i|1)}(N_1+k)\over N_{1i}+1}}
+\EE\qth{\indc{B^{\leq}}\pth{{N_{1i}+1\over N_1+k}-{\mymat(i|1)}}}
\nonumber \\
&\leq \EE\qth{\indc{B^{\leq}}{\mymat(i|1)}\log \pth{{\mymat(i|1)} (N_1+k)}}
+\EE\qth{\indc{B^{\leq}}\pth{{N_{1i}\over N_1}-{\mymat(i|1)}}}
+\EE \qth{\indc{B^{\leq}}\over N_1}
\nonumber \\
&\lesssim \EE\qth{\indc{B^{\leq}}{\mymat(i|1)}\log \pth{{\mymat(i|1)} (N_1+k)}}
+{1\over n}
\label{eq:B1.risk.decomp}
\end{align}
where the last inequality follows since
$
\EE \qth{\indc{B^\leq}/ N_1}
\lesssim {\PP[X_n=1]/ (n\pi_1)}
=\frac 1n$.
Note that for any event $B$ and any function $g$,
\begin{align*}
\EE\qth{g(N_1)\indc{N_1\geq t_0,B}}
=g(t_0)\PP[N_1\geq t_0,B]
+\sum_{t= t_0+1}^n\pth{g(t)-g(t-1)}
\PP[N_1\geq t,B].
\end{align*}
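This is summation by parts: writing $g(t)=g(t_0)+\sum_{s=t_0+1}^{t}\pth{g(s)-g(s-1)}$ for $t\geq t_0$ and exchanging the order of summation,
\begin{align*}
\EE\qth{g(N_1)\indc{N_1\geq t_0,B}}
&=\sum_{t\geq t_0}\PP[N_1=t,B]\pth{g(t_0)+\sum_{s=t_0+1}^{t}\pth{g(s)-g(s-1)}}\\
&=g(t_0)\PP[N_1\geq t_0,B]
+\sum_{s=t_0+1}^{n}\pth{g(s)-g(s-1)}\PP[N_1\geq s,B].
\end{align*}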
Applying this identity with $t_0=\ceil{(n-1)\pi_1/2}$, we can bound the expectation term in \eqref{eq:B1.risk.decomp} as
\begin{align}
&\EE\qth{\indc{B^{\leq}}{\mymat(i|1)}\log\pth{{\mymat(i|1)}(N_1+k)}}
\nonumber \\
&={\mymat(i|1)}
\log\pth{{\mymat(i|1)}(t_0+k)}\PP\qth{N_1\geq t_0,N_{1i}\leq {n\pi_1{\mymat(i|1)}\over 4},X_n=1}
\nonumber\\
&\quad +{\mymat(i|1)}\sum_{t= t_0+1}^{n-1}
\log\pth{1+\frac 1{t-1+k}}\PP\qth{N_1\geq t+1,N_{1i}\leq {n\pi_1{\mymat(i|1)}\over 4},X_n=1}
\nonumber \\
&\leq \pi_1{\mymat(i|1)}
\log\pth{{\mymat(i|1)}(t_0+k)}\PP\qth{\left.{\mymat(i|1)}N_1-N_{1i}\geq {\mymat(i|1)t_0\over 4}\right|X_n=1}
\nonumber\\
&\quad +{{\mymat(i|1)}\over n}\sum_{t=t_0+1}^{n-1}
\PP\qth{\left.{\mymat(i|1)}N_1-N_{1i}\geq {\mymat(i|1)t\over 4}\right|X_n=1}
\label{eq:B1.expectation.part}
\end{align}
where the last inequality uses $\log\pth{1+\frac 1{t-1+k}}\leq \frac 1t\lesssim \frac 1{n\pi_1}$ for all $t\geq t_0$. Using Markov's inequality $\PP\qth{Z>c}\leq c^{-4}{\EE\qth{Z^4}}$ for $c>0$, \prettyref{lmm:moment.reversebound}\ref{lmm:fourthmoment.reversebound}, and $x+x^4\leq 2(1+x^4)$ with $x=\sqrt{M(i|1)}/\gamma_*$, we obtain
\begin{align*}
&\PP\qth{\left.{\mymat(i|1)}N_1-N_{1i}\geq {{\mymat(i|1)}t\over 4}\right|X_n=1}
\lesssim {(n\pi_1{\mymat(i|1)})^2
+{{\mymat(i|1)}^2\over \gamma_*^4} \over \pth{t{\mymat(i|1)}}^4}.
\end{align*}
In view of the above, continuing from \eqref{eq:B1.expectation.part} we get
\begin{align*}
&\EE\qth{\indc{B^{\leq}}{\mymat(i|1)}\log\pth{{\mymat(i|1)}(N_1+k)}}
\nonumber\\
&\lesssim \pth{(n\pi_1{\mymat(i|1)})^2+{{\mymat(i|1)}^2\over \gamma_*^4}}
\pth{{\pi_1{\mymat(i|1)}\log({\mymat(i|1)}(n\pi_1+k))\over (n\pi_1{\mymat(i|1)})^4}
+\frac 1{n({\mymat(i|1)})^3}\sum_{t=t_0+1}^{n}{1\over t^4}}
\nonumber \\
&{\lesssim} \pth{(n\pi_1{M(i|1)})^2+{{\mymat(i|1)}^2\over \gamma_*^4}\over n}
\pth{{\log(n\pi_1M(i|1)+kM(i|1))\over (n\pi_1{\mymat(i|1)})^3}
+\frac 1{(n\pi_1{\mymat(i|1)})^3}}
\nonumber \\
&{\lesssim} {1\over n}
\pth{(n\pi_1{\mymat(i|1)})^2+{{\mymat(i|1)}^2\over \gamma_*^4}}
{\log(n\pi_1M(i|1)+kM(i|1))\over (n\pi_1{\mymat(i|1)})^3}
\nonumber\\
&\lesssim {1\over n}\pth{{\log(n\pi_1M(i|1)+kM(i|1))\over n\pi_1{\mymat(i|1)}}
+{M(i|1)\log(n\pi_1M(i|1)+k)\over
n\pi_1\gamma_*^4 (n\pi_1M(i|1))^2}}
\nonumber\\
&\stepa{\lesssim} {1\over n}\pth{{n\pi_1M(i|1)+kM(i|1)\over n\pi_1{\mymat(i|1)}}
+{M(i|1)\log(n\pi_1M(i|1))\over n\pi_1\gamma_*^4 (n\pi_1M(i|1))^2}+{M(i|1)\log k\over
n\pi_1\gamma_*^4 (n\pi_1M(i|1))^2}}
\nonumber\\
&\stepb{\lesssim} {1\over n}
\pth{1+kM(i|1)+{M(i|1)\log k\over r\gamma_0^4}}
\end{align*}
where (a) follows using $x+y\leq xy$ for $x,y\geq 2$, and (b) follows since $n\pi_1\geq r$, $n\pi_1{\mymat(i|1)}\geq 10$, and $\log(n\pi_1M(i|1))\leq n\pi_1M(i|1)$. In view of \eqref{eq:B1.risk.decomp} this implies
\begin{align}
\sum_{i=1}^k\EE\qth{\indc{B^{\leq}}\Delta_i}
\lesssim \sum_{i=1}^k {1\over n}
\pth{1+kM(i|1)\pth{1+{\log k\over rk\gamma_0^4}}}
\lesssim {k\over n}
\pth{1+{\log k\over rk\gamma_0^4}}.
\end{align}
\paragraph{Bound on $\EE\qth{\indc{B^>}\Delta_i}$}
\noindent
Using the inequality \eqref{eq:chisq.bound},
\begin{align*}
\EE\qth{\indc{B^>}\Delta_i}
&\leq \EE\qth{\indc{B^>}\pth{{\mymat(i|1)}N_1-N_{1i}+{\mymat(i|1)}k-1}^2\over
\pth{N_1+k}\pth{N_{1i}+1}}
\nonumber \\
&\lesssim {\EE\qth{\indc{B^>}
\sth{\pth{{\mymat(i|1)}N_1-N_{1i}}^2}}+k^2\pi_1{\mymat(i|1)}^2+\pi_1
\over (n\pi_1+k)(n\pi_1{\mymat(i|1)}+1)}
\nonumber \\
&\lesssim {\pi_1\EE\qth{\left.{\pth{{\mymat(i|1)}N_1-N_{1i}}^2}\right|X_n=1}\over (n\pi_1+k)(n\pi_1{\mymat(i|1)}+1)}
+{k{\mymat(i|1)}\over n}
\end{align*}
where the last two inequalities use the defining properties of the set $B^>$ along with $(x+y+z)^2\leq 3(x^2+y^2+z^2)$. Using \prettyref{lmm:moment.reversebound}\ref{lmm:secondmoment.reversebound} we get
\begin{align*}
&\EE\qth{\indc{B^>}\Delta_i}
\lesssim {n\pi_1{\mymat(i|1)}+\pth{1+{{\mymat(i|1)}\over \gamma^2_*}}
\over n(n\pi_1{\mymat(i|1)}+1)}
+{k{\mymat(i|1)}\over n}
\lesssim {{1+k{\mymat(i|1)} \over n}+{{\mymat(i|1)}\over n\gamma_0^2}}.
\end{align*}
Summing up the last bound over $i\in [k]$, we get, for $n\pi_1>r$ and $n\pi_1M(i|1)>10$,
\begin{align*}
\EE\qth{\indc{A^>}D({\mymat (\cdot|1)}\|{\hat\mymat^{+1} (\cdot|1)})}
&=\sum_{i=1}^k\qth{\EE\qth{\indc{B^{\leq}}\Delta_i}
+\EE\qth{\indc{B^>}\Delta_i}}
\lesssim {k\over n}\pth{1+{1\over k\gamma_0^2}+{\log k\over rk\gamma_0^4}}.
\end{align*}
Combining this with \eqref{eq:bound.A>.case(a)} we obtain
\begin{align*}
\EE\qth{\indc{A^>}D({\mymat (\cdot|1)}\|{\hat\mymat^{+1} (\cdot|1)})}
&\lesssim
{k\over n}\pth{{1\over k\gamma_0^2}+r+{\log k\over rk\gamma_0^4}}
\lesssim {k\over n}\pth{1+{\sqrt{\log k\over k\gamma_0^4}}}
\end{align*}
where we chose $r=10+\sqrt{\log k\over k\gamma_0^4}$ for the last inequality. In view of \eqref{eq:bound.A<} this implies the required bound.
\begin{remark}
\label{rmk:4thmoment}
We explain the subtlety of the concentration bounds in \prettyref{lmm:moment.reversebound} based on the fourth moment, and why existing Chernoff bounds and Chebyshev's inequality fall short.
For example, the risk bound in \eqref{eq:bound.A<} relies on bounding the probability that $N_1$ is atypically small. To this end, one may use the classical Chernoff-type inequality for reversible chains (see \cite[Theorem 1.1]{L98} or \cite[Proposition 3.10 and Theorem 3.3]{P15})
\begin{align}
\PP\qth{N_1\leq (n-1)\pi_1/2|X_1=1}
\lesssim \frac 1{\sqrt{\pi_1}} e^{-\Theta(n\pi_1\gamma_*)};
\label{eq:pauline}
\end{align}
in contrast, the fourth moment bound in \eqref{eq:moment.bound.N1} yields
$\PP\qth{N_1\leq (n-1)\pi_1/2|X_1=1} =O(\frac{1}{(n\pi_1\gamma_*)^2})$.
Although the exponential tail in \prettyref{eq:pauline} is much better, the pre-factor $\frac 1{\sqrt{\pi_1}}$, due to conditioning on the initial state, can lead to a suboptimal result when $\pi_1$ is small. (As a concrete example, consider two states with $M(2|1)=\Theta(\frac {1}n)$ and $M(1|2)=\Theta(1)$. Then $\pi_1=\Theta(\frac{1}{n})$, $\gamma=\gamma_*=\Theta(1)$, and \prettyref{eq:pauline} leads to $\PP\qth{N_1\leq (n-1)\pi_1/2,X_n=1} = O(\frac{1}{\sqrt{n}})$ as opposed to the desired $O(\frac{1}{n})$.)
In the same context it is also insufficient to use a second-moment (Chebyshev) bound, which leads to $\PP\qth{N_1\leq (n-1)\pi_1/2|X_1=1} =O(\frac{1}{n\pi_1\gamma_*})$. This bound is too loose and, upon substitution into \prettyref{eq:moment.bound.N1}, results in an extra $\log n$ factor in the final risk bound when $\pi_1$ and $\gamma_*$ are large.
\end{remark}
\subsubsection{Proof of \prettyref{thm:gammak} (ii)}
Let $k\geq (\log n)^6$ and $\gamma_0\geq {(\log (n+k))^2\over k}$.
We prove a stronger result using the spectral gap in place of the absolute spectral gap. Fix
$M$ such that $\gamma \geq \gamma_0$, and denote its stationary distribution by $\pi$. For an absolute constant $\tau>0$ to be chosen later and $c_0$ as in \prettyref{lmm:multinomial.bound} below, define
\begin{gather}
\epsilon(m)={2k\over m}+{c_0(\log n)^3\sqrt k\over m},
\quad c_n=100\tau^2{\log n\over n\gamma},
\nonumber\\
n_i^{\pm}=n\pi_i\pm \tau\max\sth{{\log n\over n\gamma},{\sqrt{\pi_i\log n\over n\gamma}}}, \quad i=1,\ldots,k.
\label{eq:nipm}
\end{gather}
Let $N_i$ be the number of visits to state $i$ as in \prettyref{eq:transition.count}.
We bound the risk by accounting for the contributions from different ranges of $N_i$ and $\pi_i$ separately:
\begin{align}
&\EE\qth{\sum_{i=1}^k\indc{X_n=i}
D\pth{M(\cdot|i)\|\hat M^{+1}(\cdot|i)}}
\nonumber\\
&=\sum_{i:\pi_i\geq c_n}\EE\qth{\indc{X_n=i,n_i^-\leq N_i\leq n_i^+}
D\pth{M(\cdot|i)\|\hat M^{+1}(\cdot|i)}}
\nonumber\\
&+\sum_{i:\pi_i\geq c_n}\EE\qth{\indc{X_n=i, N_i> n_i^+ \text{ or } N_i<n_i^-}
D\pth{M(\cdot|i)\|\hat M^{+1}(\cdot|i)}}
+\sum_{i:\pi_i< c_n}\EE\qth{\indc{X_n=i}
D\pth{M(\cdot|i)\|\hat M^{+1}(\cdot|i)}}
\nonumber\\
&\leq \log (n+k)\sum_{i:\pi_i\geq c_n}
\PP\qth{D(M(\cdot|i)\|\hat M^{+1}(\cdot|i))>\epsilon(N_i),n_i^-\leq N_i\leq n_i^+}
+\sum_{i:\pi_i\geq c_n}\EE\qth{\indc{X_n=i,n_i^-\leq N_i\leq n_i^+}\epsilon(N_i)}
\nonumber\\
&+\log (n+k)\sum_{i:\pi_i\geq c_n}\qth{\PP\qth{N_i\geq n_i^+}
+\PP\qth{N_i\leq n_i^-}}
+\sum_{i:\pi_i\leq c_n}\pi_i\log(n+k)
\nonumber\\
&\lesssim \log (n+k)\sum_{i:\pi_i\geq c_n}
\PP\qth{D(M(\cdot|i)\|\hat M^{+1}(\cdot|i))>\epsilon(N_i),n_i^-\leq N_i\leq n_i^+}
+\sum_{i:\pi_i\geq c_n}\pi_i\max_{n_i^-\leq m\leq n_i^+}\epsilon(m)
\nonumber\\
&\quad
+\log(n+k)\sum_{i:\pi_i\geq c_n}\pth{\PP\qth{N_i> n_i^+}
+\PP\qth{N_i< n_i^-}}+{k\pth{\log(n+k)}^2\over n\gamma},
\label{eq:s5}
\end{align}
where the first inequality uses the worst-case bound \prettyref{eq:addone-worstcase} for the add-one estimator.
We analyze the terms separately as follows.
For the second term, given any $i$ such that $\pi_i\geq c_n$, we have, by definition in \prettyref{eq:nipm}, $n_i^-\geq 9n\pi_i/10$ and $n_i^+-n_i^-\leq n\pi_i/5$, which implies
\begin{align}
\sum_{i:\pi_i\geq c_n}\pi_i\max_{n_i^-\leq m\leq n_i^+}\epsilon(m)
&\leq
\sum_{i:\pi_i\geq c_n}\pi_i \pth{{2k\over 0.9n\pi_i}+{10\over 9}{c_0(\log n)^3\sqrt k\over n\pi_i}}
\lesssim {k^2\over n}
+{(\log n)^3k^{3/2}\over n}.
\label{eq:3(ii).N_i.concentration}
\end{align}
For the third term, applying \cite[Lemma 16]{HJLWWY18} (which, in turn, is based on the Bernstein inequality in \cite{P15}), we get
$\PP\qth{N_i> n_i^+}+\PP\qth{N_i< n_i^-}
\leq 2n^{-\tau^2\over 4+10\tau}$.
To bound the first term in \prettyref{eq:s5}, we follow the method in \cite{B61,HJLWWY18} of representing the sample path of the Markov chain using independent samples generated from the rows $M(\cdot|i)$, which we describe below. Consider a random variable $X_1\sim \pi$ and an array $W=\sth{W_{i\ell}:i=1,\dots,k\text{ and } \ell=1,2,\dots}$ of independent random variables,
such that $X_1$ and $W$ are independent and $W_{i\ell} \iiddistr M(\cdot|i)$ for each $i$.
Having generated $X_1$ from $\pi$, at every step $i\geq 2$ we set $X_i$ to be the first element in the $X_{i-1}$-th row of $W$ that has not been sampled yet.
Then one can verify that $\sth{X_1,\dots,X_n}$ is a Markov chain with initial distribution $\pi$ and transition matrix $M$.
Furthermore, the transition counts satisfy $N_{ij}=\sum_{\ell=1}^{N_i}\indc{W_{i\ell}=j}$, where
$N_i$ is the number of elements sampled from the $i$th row of $W$.
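For a concrete (hypothetical) illustration with $k=2$ and $n=5$: if $X_1=1$ and the first rows of $W$ read $W_{1\cdot}=(2,1,1,\dots)$ and $W_{2\cdot}=(1,2,\dots)$, then $X_2=W_{11}=2$, $X_3=W_{21}=1$, $X_4=W_{12}=1$, and $X_5=W_{13}=1$. Here $N_1=3$ elements, namely $(2,1,1)$, are sampled from the first row of $W$, and indeed $N_{11}=\sum_{\ell=1}^{3}\indc{W_{1\ell}=1}=2$ and $N_{12}=1$ match the transition counts of the path $(1,2,1,1,1)$.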
Note that conditioned on $N_i=m$, the random variables $\{W_{i1},\ldots,W_{im}\}$ are no longer \iid. Instead, we apply a union bound over the possible values of $m$. Note that for each fixed $m$,
the estimator
\[
\hat M^{+1}(j|i)= {\sum_{\ell=1}^{m}\indc{W_{i\ell}=j}+1\over m+k} \triangleq \hat M^{+1}_m(j|i), \quad j\in [k]
\]
is an add-one estimator for $M(j|i)$ based on an \iid sample of size $m$.
\prettyref{lmm:multinomial.bound} below provides a high-probability bound for the add-one estimator in this iid setting. Using this result and the union bound, we have
\begin{align*}
&\sum_{i:\pi_i\geq c_n}\PP\qth{D(M(\cdot|i)\|\hat M^{+1}(\cdot|i))>\epsilon(N_i),n_i^-\leq N_i\leq n_i^+}
\nonumber\\
&\leq
\sum_{i:\pi_i\geq c_n}\pth{n_i^+-n_i^-} \max_{n_i^-\leq m\leq n_i^+} \PP\qth{D(M(\cdot|i)\|\hat M_m^{+1}(\cdot|i))>\epsilon(m)}
\leq
\sum_{i:\pi_i\geq c_n} {1\over n^2}
\leq {k\over n^2}
\end{align*}
where the second inequality applies \prettyref{lmm:multinomial.bound} with $t=n\geq n_i^+ \geq m$ and uses $n_i^+-n_i^-\leq n\pi_i/5$ for $\pi_i\geq c_n$.
Combining the above with \eqref{eq:3(ii).N_i.concentration}, we continue \eqref{eq:s5} with $\tau=25$ to get
\begin{align*}
&\EE\qth{\sum_{i=1}^k\indc{X_n=i}
D\pth{M(\cdot|i)\|\hat M^{+1}(\cdot|i)}}
\lesssim
{k^2\over n}
+{(\log n)^3k^{3/2} \over n}
+{k(\log (n+k))^2\over n\gamma}
\end{align*}
which is $\calO\pth{k^2\over n}$ whenever $k\geq (\log n)^6$ and $\gamma\geq {(\log (n+k))^2\over k}$.
\begin{lemma}[KL risk bound for add-one estimator]\label{lmm:multinomial.bound}
Let $V_1,\dots, V_m\simiid Q$ for some distribution $Q=\sth{Q_i}_{i=1}^k$ on $[k]$.
Consider the add-one estimator $\hat Q^{+1}$ with
$\hat Q^{+1}_i=\frac{1}{m+k}(\sum_{j=1}^m\indc{V_j=i}+1)$.
There exists an absolute constant $c_0$ such that for any $t\ge m$,
\begin{align*}
\PP\qth{D(Q\|\hat Q^{+1})\geq {2k\over m}+{c_0(\log t)^3\sqrt k\over m}}\leq {1\over t^{3}}.
\end{align*}
\end{lemma}
\begin{proof}
Let $\hat Q$ be the empirical estimator $\hat Q_i=\frac{1}{m}\sum_{j=1}^m\indc{V_j=i}$. Then $\hat Q^{+1}_i={m\hat Q_i+1\over m+k}$ and hence
\begin{align}
D(Q\|\hat Q^{+1})
&=\sum_{i=1}^k\pth{Q_i\log{Q_i\over \hat Q_i^{+1}}-Q_i+\hat Q^{+1}_i}
\nonumber\\
&=\sum_{i=1}^k\pth{Q_i\log{Q_i(m+k)\over m\hat Q_i+1}-Q_i+{m\hat Q_i+1\over m+k}}
\nonumber\\
&=\sum_{i=1}^k\pth{Q_i\log{Q_i\over \hat Q_i+\frac 1m}-Q_i+\hat Q_i+\frac 1m}
+\sum_{i=1}^k\pth{Q_i\log{m+k\over m}-{k\hat Q_i\over m+k}-{k\over m(m+k)}}
\nonumber\\
&\leq\sum_{i=1}^k\pth{Q_i\log{Q_i\over \hat Q_i+\frac 1m}-Q_i+\hat Q_i+\frac 1m}
+\frac km\label{eq:KL-nopoisson}
\end{align}
where the last inequality follows from $0\leq\log\pth{m+k\over m}\leq k/m$.
To control the sum in the above display it suffices to consider its Poissonized version. Specifically, we aim to show
\begin{align}\label{eq:KL-poisson}
\PP\qth{\sum_{i=1}^k\pth{ Q_i\log{Q_i\over \hat Q_i^{\mathsf{poi}}+\frac 1m}-Q_i+\hat Q_i^{\mathsf{poi}}+\frac 1m}>{k\over m}+{c_0(\log t)^3\sqrt k\over m}}\leq {1\over t^4}
\end{align}
where $m\hat Q^{\mathsf{poi}}_i, i=1,\ldots,k$ are distributed independently as $\Poi(m Q_i)$. (Here and below
$\Poi(\lambda)$ denotes the Poisson distribution with mean $\lambda$.) To see why \prettyref{eq:KL-poisson} implies the desired result, letting $w={k\over m}+{c_0(\log t)^3\sqrt k\over m}$
and $Y=\sum_{i=1}^km\hat Q_i^{\mathsf{poi}}\sim \Poi(m)$, we have
\begin{align}
&\PP\qth{\sum_{i=1}^k\pth{Q_i\log{Q_i\over \hat Q_i+\frac 1m}-Q_i+\hat Q_i+\frac 1m}>w}
\nonumber\\
&\stepa{=} \PP\qth{\left.\sum_{i=1}^k\pth{Q_i\log{Q_i\over \hat Q_i^{\mathsf{poi}}+\frac 1m}-Q_i+\hat Q_i^{\mathsf{poi}}+\frac 1m}>w \right|
\sum_{i=1}^k\hat Q^{\mathsf{poi}}_i=1}
\nonumber\\
&\stepb{\leq} {1\over t^4\PP[Y=m]}
= {m!\over t^4e^{-m}m^m}
\stepc{\lesssim} {\sqrt m\over t^4}
\leq \frac 1{t^3}.
\end{align}
where (a) follows from the fact that, conditioned on their sum, independent Poisson random variables follow a multinomial distribution;
(b) applies \eqref{eq:KL-poisson};
(c) follows from Stirling's approximation.
To prove \eqref{eq:KL-poisson} we rely on concentration inequalities for sub-exponential distributions. A random variable $X$ is called sub-exponential with parameters $\sigma^2,b>0$, denoted as $\mathsf{SE}(\sigma^2,b)$ if
\begin{align}
\EE\qth{e^{\lambda (X-\EE[X])}}
\leq e^{\lambda^2\sigma^2\over 2},
\quad \forall |\lambda|<\frac 1b.
\end{align}
Sub-exponential random variables satisfy the following properties \cite[Sec.~2.1.3]{WainwrightBook19}:
\begin{itemize}
\item If $X$ is $\mathsf{SE}(\sigma^2,b)$, then for any $v>0$,
\begin{align}\label{eq:subexpo.conc}
\PP\qth{\abs{X-\EE[X]}\geq v}\leq
\begin{cases}
2e^{-v^2/(2\sigma^2)} , &0<v\leq {\sigma^2\over b}\\
2e^{-v/(2b)}
, &v> {\sigma^2\over b}.
\end{cases}
\end{align}
\item {Bernstein condition}: A random variable $X$ is $\mathsf{SE}(\sigma^2,b)$ if it satisfies \begin{align}\label{eq:Bernstein.cond}
\EE\qth{\abs{X-\EE[X]}^\ell}\leq \frac 12 \ell!\sigma^2b^{\ell-2},\quad \ell =2,3,\dots.
\end{align}
\item If $X_1,\dots,X_k$ are independent $\mathsf{SE}(\sigma^2,b)$, then
$\sum_{i=1}^k X_i$ is $\mathsf{SE}(k\sigma^2,b)$.
\end{itemize}
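As a quick numerical illustration of the definition (our own example, not from the text): the centered $\mathrm{Exp}(1)$ distribution, whose MGF is $e^{-\lambda}/(1-\lambda)$, satisfies the sub-exponential MGF bound with the non-optimal choice $(\sigma^2,b)=(4,2)$, which can be checked on a grid of $|\lambda|<1/b$:

```python
import math

def mgf_centered_exp(lam):
    """MGF of X - E[X] for X ~ Exp(1): e^{-lam} / (1 - lam), valid for lam < 1."""
    return math.exp(-lam) / (1.0 - lam)

sigma2, b = 4.0, 2.0  # claimed SE parameters for Exp(1) (our non-optimal choice)
grid = [i / 1000.0 for i in range(-499, 500)]  # |lam| < 1/b = 1/2
ok = all(mgf_centered_exp(l) <= math.exp(l * l * sigma2 / 2.0) for l in grid)
```

The closure-under-sums property in the last bullet then follows by multiplying MGFs: $k$ independent copies satisfy the same bound with $\sigma^2$ replaced by $k\sigma^2$.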
Define $
X_i= Q_i\log{Q_i\over \hat Q_i^{\mathsf{poi}}+\frac 1m}-Q_i+\hat Q_i^{\mathsf{poi}}+\frac 1m,i\in [k].
$
Then \prettyref{lmm:poisson.SE.conc} below shows that the $X_i$'s are independent $\mathsf{SE}(\sigma^2,b)$ with $\sigma^2={c_1(\log m)^4\over m^2},b={c_2(\log m)^2\over m}$ for absolute constants $c_1,c_2$, and hence $\sum_{i=1}^k\pth{X_i-\EE[X_i]}$ is $\mathsf{SE}(k\sigma^2,b)$. In view of \eqref{eq:subexpo.conc}, for the choice $c_0=8(c_1+c_2)$ this implies
\begin{align}\label{eq:SE.conc.app}
\PP\qth{\sum_{i=1}^k\pth{X_i-\EE[X_i]}\geq c_0{(\log t)^3\sqrt k\over m}}
&\leq 2e^{-{c_0^2k(\log t)^6\over 2m^2\sigma^2}}
+2e^{-{c_0\sqrt k(\log t)^3\over 2mb}}
\leq \frac 1{t^3}.
\end{align}
Using $0 \leq y\log y-y+1\leq (y-1)^2$ for $y> 0$ and the identity $\EE\qth{\frac \lambda{\Poi(\lambda)+1}}=\sum_{v=0}^\infty {e^{-\lambda}\lambda^{v+1}\over(v+1)!}=1-e^{-\lambda}$, we have
\begin{align*}
\EE\qth{\sum_{i=1}^k X_i}
&\leq \EE\qth{\sum_{i=1}^k {\pth{Q_i-\pth{\hat Q_i^{\mathsf{poi}}+\frac 1m}}^2\over \hat Q_i^{\mathsf{poi}}+\frac 1m}}
\nonumber\\
&= \sum_{i=1}^kmQ_i^2\EE\qth{1\over m\hat Q_i^{\mathsf{poi}}+1}
-1+\frac km
=\sum_{i=1}^kQ_i\pth{1-e^{-mQ_i}}
-1+\frac km
\leq \frac km.
\end{align*}
Combining the above with \eqref{eq:SE.conc.app} we get \eqref{eq:KL-poisson} as required.
\end{proof}
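The exact Poisson identity $\EE\qth{\lambda/(\Poi(\lambda)+1)}=1-e^{-\lambda}$ used in the mean computation above can be confirmed numerically (a sketch of ours, truncating the Poisson series at a large cutoff):

```python
import math

def expected_lam_over_poi_plus_one(lam, cutoff=300):
    """Evaluate E[lam / (V + 1)] for V ~ Poi(lam) by truncating the series."""
    p = math.exp(-lam)  # P[V = 0]
    total = 0.0
    for v in range(cutoff):
        total += p * lam / (v + 1)
        p *= lam / (v + 1)  # Poisson pmf recursion: P[V = v + 1] from P[V = v]
    return total

checks = [abs(expected_lam_over_poi_plus_one(lam) - (1 - math.exp(-lam))) < 1e-9
          for lam in (0.1, 1.0, 5.0, 20.0)]
```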
\begin{lemma}\label{lmm:poisson.SE.conc}
There exist absolute constants $c_1,c_2$ such that the following holds.
For any $p\in (0,1)$ and $nY\sim\Poi(np)$, $X=p\log{p\over Y+\frac 1n}-p+Y+\frac 1n$ is $\mathsf{SE}\pth{{c_1(\log n)^4\over n^2},{c_2(\log n)^2\over n}}$.
\end{lemma}
\begin{proof}
Note that $X$ is a non-negative random variable.
Since $\EE\qth{\pth{X-\EE[X]}^\ell}\leq 2^{\ell}\EE\qth{X^\ell}$, by the Bernstein condition \eqref{eq:Bernstein.cond}
it suffices to show that $\EE[X^\ell]\leq \pth{c_3\ell(\log n)^2\over n}^\ell$ for all $\ell=2,3,\dots$ and some absolute constant $c_3$.
The analysis is divided into the following two cases, for some absolute constant $c_4\geq 24$.
\paragraph{Case I: $p\geq {c_4\ell\log n\over n}$.}
Using Chernoff bound for Poisson \cite[Theorem 3]{Janson02}
\begin{align}
\PP\qth{|\Poi(\lambda)-\lambda|>x}\leq 2e^{-{x^2\over 2(\lambda+x/3)}},\quad\lambda,x>0,
\label{eq:poissontail}
\end{align}
we get
\begin{align}
\PP\qth{|Y-p|> \sqrt{c_4\ell p\log n\over 4n}}
&\leq 2\exp\pth{-{c_4n\ell p\log n\over 8np+2\sqrt{c_4n\ell p\log n}}}
\nonumber\\
&\leq
2\exp\pth{-{c_4\ell\log n\over {8+2\sqrt{c_4\ell\log n/ np}}}}
\leq {1\over n^{2\ell}}
\end{align}
which implies $p/2\leq Y\leq 2p$ with probability at least $1-{n^{-2\ell}}$. Since $0 \leq X\leq {(p-Y-\frac 1n)^2\over Y+\frac 1n}$,
we get
$
\EE[X^\ell]
\lesssim {\pth{\sqrt{{c_4}\ell p\log n/4n}}^{2\ell}\over (p/2)^\ell}
+{n^\ell\over n^{2\ell}}
\lesssim \pth{c_4\ell\log n\over n}^\ell.
$
\paragraph{Case II: $p< {c_4\ell\log n\over n}$.}
\begin{itemize}
\item
On the event $\{Y>p\}$, we have
$X \leq Y + \frac{1}{n} \leq 2Y$, where the last inequality follows because $nY$ takes non-negative integer values. Since $X\geq 0$, we have
$X^\ell\indc{Y> p}\leq (2Y)^\ell\indc{Y> p}$ for any $\ell\geq 2$.
Using the Chernoff bound \prettyref{eq:poissontail}, we get $Y\leq {2c_4\ell \log n\over n}$ with probability at least $1-n^{-2\ell}$, which implies
\begin{align*}
\EE\qth{X^\ell\indc{Y> p}}
&\leq \EE\qth{(2Y)^\ell\indc{Y> p,Y\leq {2c_4\ell \log n\over n}}}
+\EE\qth{(2Y)^\ell\indc{Y> p,Y> {2c_4\ell \log n\over n}}}
\nonumber\\
&\leq\pth{4c_4\ell\log n\over n}^\ell
+2^\ell\pth{\EE[Y^{2\ell}]\PP\qth{Y> {2c_4\ell \log n\over n}}}^{\frac 12}
\leq
\pth{c_5\ell\log n\over n}^\ell
\end{align*}
for absolute constant $c_5$.
Here, the last inequality follows from Cauchy-Schwarz and using the Poisson moment bound \cite[Theorem 2.1]{Ahle21}:\footnote{For a result with less precise constants,
see also \cite[Eq.~(1)]{Ahle21} based on \cite[Corollary 1]{latala1997estimation}.}
$\EE[(nY)^{2\ell}]\leq \pth{2\ell\over \log\pth{1+{2\ell\over np}}}^{2\ell}\leq \pth{c_6\ell\log n}^{2\ell}$ for some absolute constant $c_6$,
with the second inequality applying the assumption $p< {c_4\ell\log n\over n}$.
\item As
$
X\indc{Y\leq p}\leq p\log n +\frac 1n
\lesssim {\ell(\log n)^2\over n},
$
we get $\EE\qth{X^\ell\indc{Y\leq p}}
\le \pth{c_7\ell(\log n)^2\over n}^\ell$ for some absolute constant $c_7$.
\end{itemize}
\end{proof}
\subsubsection{Proof of \prettyref{cor:parametricrate}}\label{sec:k.state.lb}
We show the following monotonicity result for the prediction risk.
In view of this result, \prettyref{cor:parametricrate} immediately follows from \prettyref{thm:gamma2} and \prettyref{thm:gammak} (i).
Intuitively, the optimal prediction risk increases monotonically with the number of states; this, however, does not follow immediately, due to the extra assumptions of irreducibility, reversibility, and prescribed spectral gap.
\begin{lemma}
$\Risk_{k+1,n}(\gamma_0)\geq \Risk_{k,n}(\gamma_0)$ for all $\gamma_0\in(0,1),k\geq 2$.
\end{lemma}
\begin{proof}
Fix an $M\in\calM_{k}(\gamma_0)$ such that $\gamma_*(M)>\gamma_0$. Denote by $\pi$ its stationary distribution, so that $\pi M = \pi$.
Fix $\delta\in(0,1)$ and define a transition matrix $\tilde{M}$ with $k+1$ states as follows:
\[
\tilde{M} = \begin{pmatrix} (1-\delta) M & \delta \ones \\ (1-\delta) \pi & \delta \end{pmatrix}
\]
One can verify the following:
\begin{itemize}
\item $\tilde{M}$ is irreducible and reversible;
\item The stationary distribution of $\tilde{M}$ is $\tilde \pi=((1-\delta)\pi,\delta)$;
\item The absolute spectral gap of $\tilde{M}$ is $\gamma_*(\tilde{M})=(1-\delta) \gamma_*(M)$, so that $\tilde{M}\in\calM_{k+1}(\gamma_0)$ for all sufficiently small $\delta$.
\item Let $(X_1,\ldots,X_n)$ and $(\tilde X_1,\ldots,\tilde X_n)$ be stationary Markov chains with transition matrices $M$ and $\tilde{M}$, respectively.
Then as $\delta\to 0$, $(\tilde X_1,\ldots,\tilde X_n)$ converges to $(X_1,\ldots,X_n)$ in law, i.e., the joint probability mass function converges pointwise.
\end{itemize}
Next fix any estimator $\hat M$ for state space $[k+1]$. Note that without loss of generality we can assume $\hat \mymat(j|i)>0$ for all $i,j\in [k+1]$ for otherwise the KL risk is infinite.
Define $\hat{\mymat}^{\text{trunc}}$ as $\hat{\mymat}$ without the $(k+1)$-th row and column, and
denote by $\hat{\mymat}'$ its normalized version, namely, $\hat{\mymat}'(\cdot|i)={\hat{\mymat}^{\text{trunc}}(\cdot|i)\over 1-\hat M^{\text{trunc}}(k+1|i)}$ for $i=1,\ldots,k$.
Then
\begin{align*}
\EE_{\tilde X^n} \qth{D(\tilde M(\cdot|\tilde X_n)\|{\hat M(\cdot|\tilde X_n)})}
\xrightarrow{\delta\to0} & ~ \EE_{X^n} \qth{D(M(\cdot|X_n)\|{\hat M(\cdot|X_n)})}\\
\geq & ~ \EE_{X^n} \qth{D(M(\cdot|X_n)\|{\hat M'(\cdot|X_n)})} \\
\geq & ~ \inf_{\hat M} \EE_{X^n} \qth{D(M(\cdot|X_n)\|{\hat M(\cdot|X_n)})}
\end{align*}
where in the first step we applied the convergence in law of $\tilde X^n$ to $X^n$ and the continuity of $P\mapsto D(P\|Q)$ for fixed componentwise positive $Q$;
in the second step we used the fact that
for any sub-probability measure $Q=(q_i)$ and its normalized version $\bar Q = Q/\alpha$ with $\alpha = \sum q_i \leq 1$, we have
$D(P\|Q) = D(P\|\bar Q) + \log \frac{1}{\alpha} \geq D(P\|\bar Q)$.
Taking the supremum over $\tilde M\in\calM_{k+1}(\gamma_0)$ on the LHS and the supremum over $M \in \calM_k(\gamma_0)$ on the RHS, and finally the infimum over $\hat M$ on the LHS, we conclude $\Risk_{k+1,n}(\gamma_0)\geq \Risk_{k,n}(\gamma_0)$.
\end{proof}
\section{Higher-order Markov chains}
\label{sec:order-m}
\subsection{Basic setups}
In this section we prove \prettyref{thm:m-order-rate}. We start with some basic definitions for higher-order Markov chains.
Let $m\geq 1$. Let $X_1,X_2,\dots$ be an $m^{\text{th}}$-order Markov chain with state space $\calS$ and transition matrix $M\in\reals^{\calS^m \times \calS}$ so that
$\prob{X_{t+1}=x_{t+1}|X_{t-m+1}^t=x_{t-m+1}^t} =M(x_{t+1}|x_{t-m+1}^t)$ for all $t\geq m$.
Clearly, the joint distribution of the process is specified by the transition matrix and the initial distribution, which is a joint distribution for $(X_1,\ldots,X_m)$.
A distribution $\pi$ on $\calS^m$ is a \emph{stationary} distribution if $\{X_t: t\geq 1\}$ with $(X_1,\ldots,X_m)\sim \pi$ is a stationary process, that is,
\begin{equation}
(X_{i_1+t},\ldots,X_{i_n+t}) \eqlaw (X_{i_1},\ldots,X_{i_n}), \quad \forall n,i_1,\ldots,i_n,t\in\naturals.
\label{eq:stationarity}
\end{equation}
It is clear that \prettyref{eq:stationarity} is equivalent to $(X_{1},\ldots,X_{m}) \eqlaw (X_{2},\ldots,X_{m+1})$.
In other words, $\pi$ is the solution to the linear system:
\begin{align}
\pi(x_1,\ldots,x_m) = \sum_{x_0 \in \calS} \pi(x_0,x_1,\ldots,x_{m-1}) M(x_m|x_1,\ldots,x_{m-1}), \quad \forall x_1,\ldots,x_m \in \calS.
\label{eq:MC_stationarity}
\end{align}
Note that this implies, in particular, that $\pi$ as a joint distribution of $m$-tuples itself must satisfy those symmetry properties required by stationarity, such as all marginals being identical, etc.
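For a small sanity check of \eqref{eq:MC_stationarity} (with a hypothetical transition matrix of our own choosing, $m=2$ and $k=2$): the stationary $\pi$ on pairs can be obtained by iterating the induced first-order chain on $\calS^m$, and then checked against the linear system as well as the marginal-symmetry remark above:

```python
import itertools

S = (1, 2)
# A hypothetical 2nd-order transition matrix M(x3 | x1, x2) on S = {1, 2}.
M = {(1, 1): {1: 0.9, 2: 0.1}, (1, 2): {1: 0.3, 2: 0.7},
     (2, 1): {1: 0.6, 2: 0.4}, (2, 2): {1: 0.2, 2: 0.8}}

# Power iteration on the induced first-order chain over pairs:
# the pair (x1, x2) moves to (x2, x3) with probability M(x3 | x1, x2).
pi = {pair: 0.25 for pair in itertools.product(S, S)}
for _ in range(2000):
    pi = {(x2, x3): sum(pi[(x1, x2)] * M[(x1, x2)][x3] for x1 in S)
          for x2, x3 in itertools.product(S, S)}

# Residual of the linear system (eq:MC_stationarity).
residual = max(abs(pi[(x1, x2)]
                   - sum(pi[(x0, x1)] * M[(x0, x1)][x2] for x0 in S))
               for x1, x2 in itertools.product(S, S))
# Symmetry required by stationarity: both one-dimensional marginals coincide.
marg1 = {x: sum(pi[(x, y)] for y in S) for x in S}
marg2 = {y: sum(pi[(x, y)] for x in S) for y in S}
```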
Next we discuss reversibility. A random process $\{X_t\}$ is \emph{reversible} if
for any $n$,
\begin{equation}
X^n ~\eqlaw ~\overline{X^n},
\label{eq:reversibility}
\end{equation}
where $\overline{X^n}\eqdef (X_n,\ldots,X_1)$ denotes the time reversal of $X^n=(X_1,\ldots,X_n)$.
Note that a reversible $m^{\text{th}}$-order Markov chain must be stationary. Indeed,
\begin{equation}
(X_2,\ldots,X_{m+1}) \eqlaw (X_{m},\ldots,X_1) \eqlaw (X_{1},\ldots,X_m),
\label{eq:rev2stat}
\end{equation}
where the first equality follows from $(X_1,\ldots,X_{m+1}) \eqlaw (X_{m+1},\ldots,X_1)$.
The following lemma gives a characterization of reversibility:
\begin{lemma}\label{lmm:reversibility-general}
An $m^{\text{th}}$-order stationary Markov chain is reversible if and only if \prettyref{eq:reversibility} holds for $n=m+1$, namely
\begin{equation}
\pi(x_1,\ldots,x_m) M(x_{m+1}|x_1,\ldots,x_m) = \pi(x_{m+1},\ldots,x_2) M(x_1|x_{m+1},\ldots,x_2), \quad \forall x_1,\ldots,x_{m+1}\in\calS.
\label{eq:rev-mth}
\end{equation}
\end{lemma}
\begin{proof}
First, we show that \prettyref{eq:reversibility} for $n=m+1$ implies that for $n\leq m$.
Indeed,
\begin{align}
(X_1,\ldots,X_n) \eqlaw (X_{m+1},\ldots,X_{m-n+2}) \eqlaw (X_n,\ldots,X_1)
\label{eq:MC_reversibility-stationarity}
\end{align}
where the first equality follows from $(X_1,\ldots,X_{m+1}) \eqlaw (X_{m+1},\ldots,X_1)$ and the second applies stationarity.
Next, we show \prettyref{eq:reversibility} for $n=m+2$ and the rest follows from induction on $n$.
Indeed,
\begin{align*}
&~ \prob{(X_1,\ldots,X_{m+2}) = (x_1,\ldots,x_{m+2})} \\
= & ~ \pi(x_1,\ldots,x_m) M(x_{m+1}|x_1,\ldots,x_m) M(x_{m+2}|x_2,\ldots,x_{m+1}) \\
\stepa{=} & ~ \pi(x_{m+1},\ldots,x_2) M(x_1|x_{m+1},\ldots,x_2) M(x_{m+2}|x_2,\ldots,x_{m+1}) \\
\stepb{=} & ~ \pi(x_2,\ldots,x_{m+1}) M(x_1|x_{m+1},\ldots,x_2) M(x_{m+2}|x_2,\ldots,x_{m+1}) \\
\stepc{=} & ~ \pi(x_{m+2},\ldots,x_3) M(x_{2}|x_{m+2},\ldots,x_3) M(x_1|x_{m+1},\ldots,x_2) \\
= & ~ \prob{(X_1,\ldots,X_{m+2}) = (x_{m+2},\ldots,x_1)} = \prob{(X_{m+2},\ldots,X_1) = (x_1,\ldots,x_{m+2})}.
\end{align*}
where (a) and (c) apply \prettyref{eq:reversibility} for $n=m+1$, namely, \prettyref{eq:rev-mth};
(b) applies \prettyref{eq:reversibility} for $n=m$.
\end{proof}
In view of the derivation in \prettyref{eq:rev2stat}, we note that any distribution $\pi$ on $\calS^m$ and $m^{\text{th}}$-order transition matrix $M$ satisfying $\pi(x^m)=\pi(\overline{x^m})$ and \eqref{eq:rev-mth} also satisfy \eqref{eq:MC_stationarity}, so such a $\pi$ is a stationary distribution for $M$. In view of \prettyref{lmm:reversibility-general}, these conditions also guarantee reversibility. This observation is summarized in the following lemma, which will be used to prove the reversibility of specific Markov chains later.
\begin{lemma}
\label{lmm:st-rev}
Let $M$ be a $k^m\times k$ stochastic matrix describing transitions from $\calS^m$ to $\calS$. Suppose that $\pi$ is a distribution on $\calS^m$ such that $\pi(x^m) = \pi(\overline{x^m})$ and $\pi(x^m)M(x_{m+1}|x^m) = \pi(\overline{x_2^{m+1}})M(x_1|\overline{x_2^{m+1}})$. Then $\pi$ is the stationary distribution of $M$ and the resulting chain is reversible.
\end{lemma}
For $m^{\rm th}$-order stationary Markov chains, the optimal prediction risk is defined as
\begin{align}
{\Risk}_{k,n,m}
&\eqdef \inf_{\hat M} \sup_{M} \Expect[D(M(\cdot|X_{n-m+1}^n) \| \hat M(\cdot|X_{n-m+1}^n))]
\nonumber\\
&=\inf_{\hat M} \sup_{M} \sum_{x^m \in\calS^m} \Expect[D(M(\cdot|x^m) \| \hat M(\cdot|x^m)) \indc{X_{n-m+1}^n=x^m}]
\label{eq:riskkn2}
\end{align}
where the supremum is taken over all $k^m\times k$ stochastic matrices $M$ and the trajectory is initiated from the stationary distribution.
In the remainder of this section we will show the following result, completing the proof of \prettyref{thm:m-order-rate} previously announced in \prettyref{sec:intro}.
\begin{theorem}\label{thm:optimal-m}
For all $m\geq 2$, there exists a constant $C_m>0$ such that for all $2\leq k\leq n^{\frac{1}{m+1}}/C_m$,
$$
{k^{m+1}\over C_mn}\log\pth{n\over k^{m+1}}
\leq \Risk_{k,n,m}
\leq {C_mk^{m+1}\over n}\log\left(\frac n{k^{m+1}}\right).
$$
Furthermore, the lower bound holds even when the Markov chains are required to be reversible.
\end{theorem}
\subsection{Upper bound}
We prove the upper bound part of the preceding theorem, using only stationarity (not reversibility).
We rely on techniques from \cite[Chapter 6, Page 486]{csiszar2004information} for proving redundancy bounds for the $m^{\text{th}}$-order chains. Let $Q$ be the probability assignment given by
\begin{align}
Q(x^n)
=\frac 1{k^m}\prod_{a^m\in\calS^m}
{\prod_{j=1}^kN_{a^mj}!\over k\cdot(k+1)\cdots(N_{a^m}+k-1)},
\end{align}
where $N_{a^mj}$ denotes the number of times the block $a^mj$ occurs in $x^n$, and $N_{a^m}=\sum_{j=1}^k N_{a^mj}$ is the number of times the block $a^m$ occurs in $x^{n-1}$. This probability assignment corresponds to the add-one rule
\begin{align}
Q(j|x^{n})
= \hat M_{x^n}^{+1}(j|x_{n-m+1}^n)
={N_{x_{n-m+1}^nj}+1\over N_{x_{n-m+1}^n}+k}.
\end{align}
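As a sanity check (our own code, with arbitrary test sequences over states $0,\dots,k-1$), the closed form for $Q(x^n)$ can be compared against the telescoping product of add-one conditionals, i.e.\ a uniform $1/k^m$ on the first $m$ symbols times the add-one prediction at each later step; the two agree exactly:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def Q_closed_form(x, k, m):
    """Closed-form Q(x^n) via the block counts N_{a^m j} in x^n."""
    N = {}
    for t in range(len(x) - m):
        block = tuple(x[t:t + m + 1])
        N[block] = N.get(block, 0) + 1
    q = Fraction(1, k ** m)
    for a in product(range(k), repeat=m):
        Na = sum(N.get(a + (j,), 0) for j in range(k))
        num = 1
        for j in range(k):
            num *= factorial(N.get(a + (j,), 0))
        den = 1
        for i in range(Na):
            den *= k + i  # k (k+1) ... (Na + k - 1)
        q *= Fraction(num, den)
    return q

def Q_sequential(x, k, m):
    """Uniform on the first m symbols, then the add-one conditional at each step."""
    counts = {}
    q = Fraction(1, k ** m)
    for t in range(m, len(x)):
        ctx, nxt = tuple(x[t - m:t]), x[t]
        n_ctx = sum(counts.get(ctx + (j,), 0) for j in range(k))
        q *= Fraction(counts.get(ctx + (nxt,), 0) + 1, n_ctx + k)
        counts[ctx + (nxt,)] = counts.get(ctx + (nxt,), 0) + 1
    return q

x1 = [0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
x2 = [0, 2, 1, 0, 1, 1, 2, 0, 0, 1, 2, 2, 0]
agree = (Q_closed_form(x1, 2, 2) == Q_sequential(x1, 2, 2)
         and Q_closed_form(x2, 3, 2) == Q_sequential(x2, 3, 2))
```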
Then in view of \prettyref{lmm:riskred}, the following lemma proves the desired upper bound in \prettyref{thm:optimal-m}.
\begin{lemma}
\label{lmm:red-addone-order-m} Let $\Red(Q_{X^n})$ be the redundancy of the $m^{\text{th}}$-order Markov chain, as defined in \prettyref{sec:riskred-bound}, and $X^n$ be the corresponding observed trajectory. Then
$$\Red(Q_{X^n}) \leq \frac 1{n-m}\sth{{k^m(k-1)}\left[\log \left(1+\frac{n-m}{k^m(k-1)}\right)+1\right] + m\log k}.
$$
\end{lemma}
\begin{proof}
We show that for every Markov chain with transition matrix $M$ and initial distribution $\pi$ on $\calS^m$, and every trajectory $(x_1,\cdots,x_n)$, it holds that
\begin{align}\label{eq:MC_pointwise-red-m}
\log\frac{\pi(x_1^m)\prod_{t=m}^{n-1} M(x_{t+1}|x^t_{t-m+1}) }{Q(x_1,\cdots,x_n)} \le k^m(k-1)\left[\log \left(1+\frac{n-m}{k^m(k-1)}\right)+1\right] + m\log k,
\end{align}
where $M(x_{t+1}|x^t_{t-m+1})$ denotes the transition probability of going from $x^t_{t-m+1}$ to $x_{t+1}$.
Note that
\[
\prod_{t=m}^{n-1} M(x_{t+1}|x^t_{t-m+1}) = \prod_{a^{m+1}\in\calS^{m+1}}M(a_{m+1}|a^m)^{N_{a^{m+1}}} \le \prod_{a^{m+1}\in\calS^{m+1}} (N_{a^{m+1}}/N_{a^m})^{N_{a^{m+1}}},
\]
where the last inequality follows from $\sum_{a_{m+1}\in\calS} \frac{N_{a^{m+1}}}{N_{a^m}}\log \frac{N_{a^{m+1}}}{N_{a^m}M(a_{m+1}|a^m)}\geq 0$ for each $a^m$, by the non-negativity of the KL divergence.
Therefore, we have
\begin{align}\label{eq:MC_likelihood-ratio-m}
\frac{\pi(x_1^m)\prod_{t=m}^{n-1} M(x_{t+1}|x_{t-m+1}^t) }{Q(x_1,\cdots,x_n)} \le k^m\cdot \prod_{a^m\in\calS^m} \frac{k\cdot (k+1)\cdot \cdots\cdot (N_{a^m}+k-1)}{N_{a^m}^{N_{a^m}}} \prod_{a_{m+1}\in \calS}\frac{ N_{a^{m+1}}^{N_{a^{m+1}}} }{N_{a^{m+1}}!}.
\end{align}
Using \eqref{eq:MC_claim} we continue \eqref{eq:MC_likelihood-ratio-m} to get
\begin{align*}
\log \frac{\pi(x_1^m)\prod_{t=m}^{n-1} M(x_{t+1}|x_{t-m+1}^t) }{Q(x_1,\cdots,x_n)} &\le m\log k+\sum_{a^m\in\calS^m} \log \frac{k\cdot (k+1)\cdot \cdots\cdot (N_{a^m}+k-1)}{N_{a^m}!} \\
&= m\log k + \sum_{a^m\in\calS^m} \sum_{\ell = 1}^{N_{a^m}} \log\left(1+\frac{k-1}{\ell}\right)\\
&\le m\log k + \sum_{a^m\in\calS^m} \int_0^{N_{a^m}} \log\left(1+\frac{k-1}{x}\right)dx \\
&= m\log k + \sum_{a^m\in\calS^m} \left((k-1)\log\left(1+\frac{N_{a^m}}{k-1}\right) + N_{a^m}\log\left(1+\frac{k-1}{N_{a^m}}\right) \right) \\
&\stepa{\le} k^m(k-1)\log \left(1+\frac{n-m}{k^m(k-1)}\right) + k^m(k-1) + m\log k,
\end{align*}
where (a) follows from the concavity of $x\mapsto \log x$, $\sum_{a^m\in\calS^m} N_{a^m}=n-m$, and $\log(1+x)\le x$.
\end{proof}
\subsection{Lower bound}
\subsubsection{Special case: $m\geq 2,k=2$}
We only analyze the case $m=2$, i.e.~second-order Markov chains with binary states, as the lower bound carries over to the case $m\ge 3$. The transition matrix of a second-order chain is a $k^2\times k$ stochastic matrix $M$ that gives the transition probability from an ordered pair $(i,j)\in \calS\times\calS$ to a state $\ell\in\calS$:
\begin{align}
M(\ell|ij)=\PP\qth{X_3=\ell|X_1=i,X_2=j}.
\end{align}
Our result is the following.
\begin{theorem}
${\Risk}_{2,n,2}=\Theta\pth{\log n\over n}$.
\end{theorem}
\begin{proof}
The upper bound part has been shown in \prettyref{lmm:red-addone-order-m}.
For the lower bound, consider the following one-parametric family of transition matrices (we replace $\calS$ by $\sth{1,2}$ for simplicity of the notation)
\begin{align}
\tilde\calM=\sth{M_p=
\bbordermatrix{~ & 1 & 2 \cr
11 & 1-\frac 1n & \frac 1n \cr
21 & \frac 1n & 1-\frac 1n \cr
12 & 1-p & p \cr
22 & p & 1-p}
: 0\leq p\leq 1}
\label{eq:Mp-2nd}
\end{align}
and place a uniform prior on $p\in[0,1]$. One can verify that each $M_p$ has the uniform stationary distribution over the set $\sth{1,2}\times\sth{1,2}$ and the chains are reversible.
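Both claims reduce to finite checks: with uniform $\pi$ on pairs, stationarity of the induced pair chain amounts to $\sum_i M_p(\ell|ij)=1$ for every $(j,\ell)$, and reversibility to the detailed-balance condition $M_p(x_3|x_1x_2)=M_p(x_1|x_3x_2)$. A mechanical verification of ours (exact arithmetic; the values of $n$ and $p$ are hypothetical):

```python
from fractions import Fraction
from itertools import product

def M_p(n, p):
    """The transition family (eq:Mp-2nd); rows indexed by (x1, x2), columns by x3."""
    e = Fraction(1, n)
    return {(1, 1): {1: 1 - e, 2: e}, (2, 1): {1: e, 2: 1 - e},
            (1, 2): {1: 1 - p, 2: p}, (2, 2): {1: p, 2: 1 - p}}

S = (1, 2)
n, p = 100, Fraction(3, 10)
M = M_p(n, p)
# Uniform pi on pairs is stationary iff, for every (j, l), sum_i M(l | i j) = 1.
stationary = all(sum(M[(i, j)][l] for i in S) == 1 for j, l in product(S, S))
# With uniform pi, reversibility is detailed balance: M(x3 | x1 x2) = M(x1 | x3 x2).
reversible = all(M[(x1, x2)][x3] == M[(x3, x2)][x1]
                 for x1, x2, x3 in product(S, S, S))
```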
Next we introduce the set of trajectories based on which we will lower bound the prediction risk. Analogous to the set $\calX=\cup_{t=1}^n\calX_t$ defined in \eqref{eq:MC_Xt-sets} for analyzing the first-order chains, we define
\begin{align}
\calV=\sth{1^{n-t} z^{t}:z_1=z_2=z_t=2,z_{i}^{i+1}\neq 11, i\in[t-1], t=4,\dots,n-2} \subset \{1,2\}^n.
\end{align}
In other words, the sequences in $\calV$ start with a string of 1's, transition into two consecutive 2's, contain no two consecutive 1's thereafter, and end with 2.
To compute the probability of sequences in $\calV$, we need the following preparations.
Denote by $\oplus$ the operation that combines two blocks from $\sth{22,212}$ by merging the last symbol of the first block with the first symbol of the second block; for example,
$ 22\oplus 212=2212,
22\oplus 22\oplus 22=2222$.
Then any $x^n\in \calV$ can be written as an initial all-1 string followed by alternating runs of blocks from $\{22,212\}$, with the first run consisting of 22 blocks and all runs of positive length, combined via the merging operation $\oplus$:
\begin{align}
\label{eq:MC_calV-decomposition}
x^n=\underbrace{1\dots1}_{\text{all ones}} \underbrace{22\oplus 22\dots\oplus 22}_{p_1 \text{ many } 22}\oplus \underbrace{212\oplus 212\dots\oplus 212}_{p_2 \text{ many } 212} \oplus\underbrace{ 22\oplus 22\dots\oplus 22}_{p_3 \text{ many } 22}\oplus \underbrace{212\oplus 212\dots\oplus 212}_{p_4 \text{ many }212}\oplus 22\oplus \dots.
\end{align}
Let the vector $(q_{22\to 22},q_{22\to 212},q_{212\to 22},q_{212\to 212})$ denote the transition probabilities between blocks in $\sth{22,212}$ (recall the convention that consecutive blocks overlap in the symbol 2). Namely, according to \prettyref{eq:Mp-2nd},
\begin{align*}
q_{22\to 22}
&=\PP\qth{X_3=2,X_2=2|X_2=2,X_1=2}
=M(2|22)=1-p
\\
q_{22\to 212}
&=\PP\qth{X_4=2,X_3=1,X_2=2|X_2=2,X_1=2}
=M(2|21)M(1|22)=\pth{1-\frac 1n}p
\\
q_{212\to 22}
&=\PP\qth{X_4=2,X_3=2|X_3=2,X_2=1,X_1=2}
=M(2|12)=p
\\
q_{212\to 212}
&=\PP\qth{X_5=2,X_4=1,X_3=2|X_3=2,X_2=1,X_1=2}
=M(2|21)M(1|12)=\pth{1-\frac 1n}(1-p).
\end{align*}
Given any $x^n\in \calV$ we can calculate its probability under the law of $M_p$ using frequency counts $\bm{F}(x^n)=\pth{F_{111},F_{22\to 22},F_{22\to 212},F_{212\to 22},F_{212\to 212}}$, defined as
\begin{gather*}
F_{111}
=\sum_{i}\indc{x_{i}=1,x_{i+1}=1,x_{i+2}=1},\quad
F_{22\to 22}
=\sum_{i}\indc{x_{i}=2,x_{i+1}=2,x_{i+2}=2},\\
F_{22\to 212}
=\sum_{i}\indc{x_{i}=2,x_{i+1}=2,x_{i+2}=1,x_{i+3}=2},
\quad
F_{212\to 22}
=\sum_{i}\indc{x_{i}=2,x_{i+1}=1,x_{i+2}=2,x_{i+3}=2},\\
F_{212\to 212}
=\sum_{i}\indc{x_{i}=2,x_{i+1}=1,x_{i+2}=2,x_{i+3}=1,x_{i+4}=2}.
\end{gather*}
Denote $\mu(x^n|p)=\PP\qth{X^n=x^n|p}$. Then for each $x^n\in\calV$ with $\bm F(x^n)=\bm F$ we have
\begin{align}\label{eq:MC_prob-F}
&\mu(x^n|p)
\nonumber\\
&=\PP(X^{F_{111}+2}=1^{F_{111}+2})M(2|11)M(2|12)\prod_{a,b\in\sth{22,212}}q_{a\to b}^{F_{a\to b}}
\nonumber\\
&=\frac 14\pth{1-\frac 1n}^{F_{111}}\frac 1n \cdot p \cdot
p^{F_{212\to 22}}\sth{p\pth{1-\frac 1n}}^{F_{22\to 212}}(1-p)^{F_{22\to 22}}\sth{(1-p)\pth{1-\frac 1n}}^{F_{212\to 212}}
\nonumber\\
&=\frac 14\pth{1-\frac 1n}^{F_{111}+F_{22\to 212}+F_{212\to 212}}
\frac 1n p^{y+1}(1-p)^{f-y}
\end{align}
where $y=F_{212\to 22}+F_{22\to 212}$ denotes the number of times the chain alternates between runs of 22 and runs of 212, and $f=F_{212\to 22}+F_{22\to 212}+F_{212\to 212}+F_{22\to 22}$ denotes the number of times the chain jumps between blocks in $\{22,212\}$.
Note that the range of $f$ includes all integers between 1 and $(n-6)/2$. This follows from the definition of $\calV$ and the fact that merging either 22 or 212 via the operation $\oplus$ at the end of any string $z^{t}$ with $z_t=2$ increases the length of the string by at most 2. Also, for any given value of $f$, the value of $y$ ranges from 0 to $f$.
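The closed form \eqref{eq:MC_prob-F} can be cross-checked against the chain rule on concrete members of $\calV$ (our own verification; the values of $n$ and $p$ are hypothetical):

```python
from fractions import Fraction

def transition(n, p):
    e = Fraction(1, n)
    return {(1, 1): {1: 1 - e, 2: e}, (2, 1): {1: e, 2: 1 - e},
            (1, 2): {1: 1 - p, 2: p}, (2, 2): {1: p, 2: 1 - p}}

def mu_chain_rule(x, n, p):
    """P[X^n = x] under M_p with the uniform initial distribution on pairs."""
    M, prob = transition(n, p), Fraction(1, 4)
    for t in range(2, len(x)):
        prob *= M[(x[t - 2], x[t - 1])][x[t]]
    return prob

def mu_formula(x, n, p):
    """The closed form (eq:MC_prob-F) computed from the frequency counts F(x^n)."""
    def count(pat):
        return sum(1 for i in range(len(x) - len(pat) + 1)
                   if tuple(x[i:i + len(pat)]) == pat)
    F111, F22_22 = count((1, 1, 1)), count((2, 2, 2))
    F22_212, F212_22 = count((2, 2, 1, 2)), count((2, 1, 2, 2))
    F212_212 = count((2, 1, 2, 1, 2))
    y = F212_22 + F22_212
    f = y + F212_212 + F22_22
    e = Fraction(1, n)
    return (Fraction(1, 4) * (1 - e) ** (F111 + F22_212 + F212_212)
            * e * p ** (y + 1) * (1 - p) ** (f - y))

n, p = 10, Fraction(2, 7)
seqs = [[1, 1, 1, 1, 1, 1, 2, 2, 2, 2],   # a single run of 22 blocks
        [1, 1, 1, 2, 2, 1, 2, 2, 1, 2]]   # alternates between 22 and 212 blocks
match = all(mu_chain_rule(x, n, p) == mu_formula(x, n, p) for x in seqs)
```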
\begin{lemma}
\label{lmm:countyf}
The number of sequences in $\calV$ corresponding to a fixed pair $(y,f)$ is $\binom fy$.
\end{lemma}
\begin{proof}
Fix $x^n\in \calV$ and let $p_{2i-1}$ be the number of 22 blocks in the $i$-th run of 22 blocks and $p_{2i}$ the number of 212 blocks in the $i$-th run of 212 blocks in $x^n$, as depicted in \eqref{eq:MC_calV-decomposition}.
The $p_i$'s are all positive integers. There are in total $y+1$ such runs, and the $p_i$'s satisfy $\sum_{i=1}^{y+1}p_i=f+1$, as the total number of blocks is one more than the total number of transitions. Each positive integer solution $\sth{p_i}_{i=1}^{y+1}$ of this equation corresponds to a sequence $x^n\in\cal V$ and vice versa. The total number of such solutions is $\binom fy$.
\end{proof}
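The count $\binom fy$ can also be confirmed by brute force over small values: enumerate positive integer solutions of $p_1+\dots+p_{y+1}=f+1$ (our own check):

```python
from itertools import product
from math import comb

def positive_compositions(total, parts):
    """Brute-force count of positive integer solutions of p_1 + ... + p_parts = total."""
    return sum(1 for q in product(range(1, total + 1), repeat=parts)
               if sum(q) == total)

checks = [positive_compositions(f + 1, y + 1) == comb(f, y)
          for f in range(1, 7) for y in range(f + 1)]
```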
We are now ready to compute the Bayes estimator and risk.
For any $x^n\in\calV$ with a given $(y,f)$, the Bayes estimator of $p$ with prior $p\sim\Unif[0,1]$ is
\[
\hat p(x^n) = \Expect[p|x^n] = \frac{\Expect[p \cdot \mu(x^n|p)]}{\Expect[\mu(x^n|p)]}
\overset{\eqref{eq:MC_prob-F}}{=}{y+2\over f+3}.
\]
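This posterior mean is the standard mean $(y+2)/(f+3)$ of a $\mathrm{Beta}(y+2,f-y+1)$ posterior; it can be verified exactly using integer Beta integrals (our own check):

```python
from fractions import Fraction
from math import factorial

def beta_integral(a, b):
    """Exact B(a, b) = (a-1)! (b-1)! / (a+b-1)! for positive integers a, b."""
    return Fraction(factorial(a - 1) * factorial(b - 1), factorial(a + b - 1))

def posterior_mean(y, f):
    """int p * p^{y+1} (1-p)^{f-y} dp / int p^{y+1} (1-p)^{f-y} dp over [0, 1]."""
    return beta_integral(y + 3, f - y + 1) / beta_integral(y + 2, f - y + 1)

checks = [posterior_mean(y, f) == Fraction(y + 2, f + 3)
          for f in range(1, 12) for y in range(f + 1)]
```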
Note that the probabilities $\mu(x^n|p)$ in \eqref{eq:MC_prob-F} can be bounded from below by $\frac 1{4en} p^{y+1}(1-p)^{f-y}$. Using this, for each $x^n\in \calV$ with given $(y,f)$ we get the following bound on the integrated squared error:
\begin{align}
&\int_0^1 \mu(x^n|p)(p-\hat p(x^n))^2 dp
\nonumber\\
&\geq \frac 1{4en} \int_0^1 p^{y+1}(1-p)^{f-y}\pth{p-{y+2\over f+3}}^2dp
=\frac 1{4en} {(y+1)!(f-y)!\over (f+2)!}{(y+2)(f-y+1)\over (f+3)^2(f+4)}
\label{eq:bayes-2nd}
\end{align}
where the last equality follows by noting that the integral equals the variance of a $\text{Beta}(y+2,f-y+1)$ random variable computed against the unnormalized density $p^{y+1}(1-p)^{f-y}$.
Next we bound the risk of any predictor by the Bayes error. Consider any predictor $\hat \M(\cdot|ij)$ (as a function of the sample path $X$) for the transition from $ij$, $i,j\in\sth{1,2}$. By Pinsker's inequality, we conclude that
\begin{align}
D(\M(\cdot|12) \| \hat \M(\cdot|12))
\geq \frac{1}{2} \|\M(\cdot|12)-\hat \M(\cdot|12)\|_{\ell_1}^2 \geq \frac{1}{2} (p-\hat \M(2|12))^2
\end{align}
and similarly, $D(\M(\cdot|22) \| \hat \M(\cdot|22)) \geq \frac{1}{2}(p-\hat \M(1|22))^2$.
Abbreviate $\hat \M(2|12) \equiv \hat p_{12}$ and $\hat \M(1|22) \equiv \hat p_{22}$, both functions of $X$.
Using
\prettyref{eq:bayes-2nd} and \prettyref{lmm:countyf}, we have
\begin{align}
& \sum_{i,j=1}^2 \Expect[D( \M(\cdot|ij)\|\hat \M(\cdot|ij)) \indc{X_{n-1}^n=ij} ] \nonumber \\
&\geq ~ \frac 12 \Expect\qth{(p-\hat p_{12})^2 \indc{X_{n-1}^n=12,X^n\in\calV} + (p-\hat p_{22})^2 \indc{X_{n-1}^n=22,X^n\in\calV}} \nonumber \\
&\geq ~ \frac 12\int_0^1\qth{\sum_{\bm{F}}\sum_{x^n\in\calV:\bm{F}(x^n)=\bm{F}} \mu(x^n|p)\pth{(p-\hat p_{12})^2 \indc{x_{n-1}^n=12} + (p-\hat p_{22})^2 \indc{x_{n-1}^n=22} }}dp\nonumber \\
&\geq ~ \frac 12\int_0^1\qth{\sum_{\bm{F}}\sum_{x^n\in\calV:\bm{F}(x^n)=\bm{F}} \mu(x^n|p) (p-\hat p(x^n))^2 }dp \nonumber \\
&\geq ~ \frac 12\sum_{f=1}^{\frac {n-6}2}\sum_{y=0}^{f}
\binom fy \frac 1{4en} {(y+1)!(f-y)!\over (f+2)!}{(y+2)(f-y+1)\over (f+3)^2(f+4)}
\nonumber\\
& \geq ~ \frac 1{8en} \sum_{f=1}^{\frac {n-6}2}\sum_{y=0}^{f}
{y+1\over (f+2)(f+1)}{(y+2)(f-y+1)\over (f+3)^2(f+4)}
\geq ~ \Theta\pth{\frac 1n} \sum_{f=1}^{\frac {n-6}2}
\sum_{y=\frac f4}^{\frac f3} \frac 1{f^2}
= ~ \Theta\pth{\log n\over n}.
\end{align}
\end{proof}
\subsubsection{General case: $m\geq 2, k\geq 3$}
We will prove the following.
\begin{theorem}\label{thm:optimal-lower-order-m}
We have
$$\Risk_{k,n,m}
\geq \frac 1{2^{m+4}}\pth{\frac 12-{2^m-2\over n}}
\pth{1-\frac 1n}^{n-2m+1}{(k-1)^{m+1}\over n}\log\pth{{1\over 2^{2m+8}\cdot 3\pi e(m+1)}\cdot {n-m\over (k-1)^{m+1} }}.$$
\end{theorem}
For ease of notation let $\calS=\sth{1,\dots,k}$.
Denote ${\tilde \calS}=\sth{2,\dots,k}$.
Consider an
$m^{\text{th}}$-order transition matrix $M$ of the following form:
\begin{align}
M(s|x^m)=\text{\begin{tabular}{|c|c|c|}
\hline
\multirow{2}{*}{Starting string $x^m$} &
\multicolumn{2}{c|}{Next state} \\\cline{2-3}
& $s=1$ & $s\in\sth{2,\dots,k}$ \\
\hline & & \\
$1^m$ & $1- \frac 1n$ & $\frac 1{n(k-1)}$ \\
& & \\
\hline
& & \\
\multirow{2}{*}{$1x^{m-1}, x^{m-1}\in{\tilde \calS}^{m-1}$} & $1-b$ & $\frac b{(k-1)}$\\
& & \\
\hline & & \\
\multirow{2}{*}{$x^{m}\in{\tilde \calS}^{m}$} &$\frac 1n$ & {\mbox{$\pth{1-\frac 1n}T(s|x^m)$}}
\\
& & \\
\hline & &\\
\multirow{2}{*}{$x^m\notin \sth{1^m,1{\tilde \calS}^{m-1},{\tilde \calS}^{m}}$} & $\frac 12$ & $\frac 1{2(k-1)}$
\\
& & \\
\hline
\end{tabular}
},
\quad b=\frac 12-{2^m-2\over n}.
\label{eq:MC_M_construction2}
\end{align}
\noindent
Here $T$ is a $(k-1)^m\times (k-1)$ transition matrix for an $m^{\rm th}$-order Markov chain with state space $\tilde \calS$, satisfying the following property:
\begin{enumerate}[label=(P)]
\item \label{pt:MC_P} $T(x_{m+1}|x^{m}) = T(x_1|\overline{x_2^{m+1}}),\quad \forall x^{m+1}\in\tilde \calS^{m+1}$.
\end{enumerate}
\begin{lemma}\label{lmm:MC_stationary-T}
Under the condition \ref{pt:MC_P}, the transition matrix $T$ has a stationary distribution that is uniform on ${\tilde \calS}^m$.
Furthermore, the resulting $m^{\rm th}$-order Markov chain is reversible (and hence stationary).
\end{lemma}
\begin{proof}
We prove this result using \prettyref{lmm:st-rev}. Let $\pi$ denote the uniform distribution on $\tilde \calS^m$, i.e., $\pi(x^m)=\frac 1{(k-1)^m}$ for all $x^m\in\tilde \calS^m$. Then for any $x^m\in\tilde \calS^m$ the condition $\pi(x^m)=\pi(\overline{x^m})$ follows directly and $\pi(x^m)T(x_{m+1}|x^m)
=\pi(\overline{x_2^{m+1}})T(x_{1}|\overline{x_2^{m+1}})$ follows from the assumption \ref{pt:MC_P}.
\end{proof}
Next we address the stationarity and reversibility of the chain with the bigger transition matrix $M$ in \prettyref{eq:MC_M_construction2}:
\begin{lemma}\label{lmm:MC_stationary-M}
Let $M$ be defined in \prettyref{eq:MC_M_construction2}, wherein the transition matrix $T$
satisfies the condition \ref{pt:MC_P}. Then $M$ has a stationary distribution given by
\begin{align}
\pi(x^m)=
\begin{cases}
\frac{1}{2} & x^m = 1^m\\
\frac{b}{(k-1)^m} & x^{m}\in{\tilde \calS}^{m} \\
\frac{1}{n(k-1)^{d(x^{m})}} & \text{otherwise}
\end{cases}
\label{eq:pi-mth}
\end{align}
where $d(x^m)\triangleq\sum_{i=1}^m\indc{x_i\in{\tilde \calS}}$ and $b=\frac 12-{2^m-2\over n}$ as in
\prettyref{eq:MC_M_construction2}.
Furthermore, the $m^{\rm th}$-order Markov chain with initial distribution $\pi$ and transition matrix $M$ is reversible.
\end{lemma}
\begin{proof}
Note that the choice of $b$ guarantees that $\sum_{x^m\in\calS^m}\pi(x^m)=1$.
Next we again apply \prettyref{lmm:st-rev} to verify stationarity and reversibility.
First of all, since
$d(x^m)=d(\overline{x^m})$, we have $\pi(x^m)=\pi(\overline{x^m})$ for all $x^m\in\calS^m$ .
Next we check the condition $\pi(x^m)M(x_{m+1}|x^m) = \pi(\overline{x_2^{m+1}})M(x_1|\overline{x_2^{m+1}})$.
For the sequence $1^{m+1}$ the claim is easily verified. For the rest of the sequences we have the following.
\begin{itemize}
\item \textbf{Case 1 ($x^{m+1}\in {\tilde \calS}^{m+1}$):}
Note that $x^{m+1}\in\tilde \calS^{m+1}$ if and only if $x^m,{\overline{x_2^{m+1}}}\in\tilde \calS^m$. This implies
\begin{align*}
\pi(x^m)M(x_{m+1}|x^m)
&=
\frac b{(k-1)^m}\pth{1-\frac 1n}T(x_{m+1}|x^m)
\nonumber\\
&=\frac b{(k-1)^m}\pth{1-\frac 1n}T(x_1|\overline{x_2^{m+1}})
=\pi(\overline{x_2^{m+1}})M(x_1|\overline{x_2^{m+1}}).
\end{align*}
\item \textbf{Case 2 ($x^{m+1}\in 1{\tilde \calS}^m$ or $x^{m+1}\in {\tilde \calS}^m1$):} By symmetry it suffices to analyze the case $x^{m+1}\in 1{\tilde \calS}^m$, in which $x^m\in 1\tilde \calS^{m-1}$ and $\overline{x_2^{m+1}}\in\tilde \calS^{m}$. This implies \begin{align}
\pi(x^m)=\frac 1{n(k-1)^{m-1}},
&\quad
M(x_{m+1}|x^m)=\frac b{k-1},
\nonumber\\
\pi(\overline{x_2^{m+1}})=\frac b{(k-1)^{m}},
& \quad M(x_{1}|\overline{x_2^{m+1}})=\frac 1n.
\end{align}
In view of this we get
$\pi(x^m)M(x_{m+1}|x^m)
=\pi(\overline{x_2^{m+1}})M(x_1|\overline{x_2^{m+1}}).$
\item \textbf{Case 3 ($x^{m+1}\notin 1^{m+1}\cup {\tilde \calS}^{m+1}\cup1{\tilde \calS}^{m}\cup{\tilde \calS}^{m}1$):}
\noindent Suppose that $x^{m+1}$ has exactly $d$ elements from ${\tilde \calS}$. Then $x^m,x_2^{m+1}\notin \sth{1^m}\cup{\tilde \calS}^m$. We have the following sub-cases.
\begin{itemize}
\item If $x_1=x_{m+1}=1$, then both $x^m,x_2^{m+1}$ have exactly $d$ elements from ${\tilde \calS}$. This implies $\pi(x^m)=\pi(\overline{x_2^{m+1}})=\frac 1{n(k-1)^d}$ and $M(x_{m+1}|x^m)=M(x_1|\overline{x_2^{m+1}})=\frac 12$.
\item If $x_1,x_{m+1}\in{\tilde \calS}$, then both $x^m,x_2^{m+1}$ have exactly $d-1$ elements from ${\tilde \calS}$. This implies $\pi(x^m)=\pi(\overline{x_2^{m+1}})=\frac 1{n(k-1)^{d-1}}$ and $M(x_{m+1}|x^m )=M(x_1|\overline{x_2^{m+1}})=\frac 1{2(k-1)}$.
\item If $x_1=1,x_{m+1}\in {\tilde \calS}$, then $x^m$ has $d-1$ elements from ${\tilde \calS}$ and $x_2^{m+1}$ has $d$ elements from ${\tilde \calS}$. This implies $\pi(x^m)=\frac 1{n(k-1)^{d-1}},\pi(\overline{x_2^{m+1}})=\frac 1{n(k-1)^{d}}$ and $M(x_{m+1}|x^m)=\frac 1{2(k-1)},M(x_1|\overline{x_2^{m+1}})=\frac 12$.
\item If $x_1\in {\tilde \calS},x_{m+1}=1$, then $x^m$ has $d$ elements from ${\tilde \calS}$ and $x_2^{m+1}$ has $d-1$ elements from ${\tilde \calS}$. This implies $\pi(x^m)=\frac 1{n(k-1)^{d}},\pi(\overline{x_2^{m+1}})=\frac 1{n(k-1)^{d-1}}$ and $M(x_{m+1}|x^m)=\frac 12,M(x_1|\overline{x_2^{m+1}})=\frac 1{2(k-1)}$.
\end{itemize} For all these sub-cases we have $\pi(x^m)M(x_{m+1}|x^m) = \pi(\overline{x_2^{m+1}})M(x_1|\overline{x_2^{m+1}})$ as required.
\end{itemize}
This finishes the proof.
\end{proof}
Let $(X_1,\ldots,X_n)$ be the trajectory of a stationary Markov chain with transition matrix $M$ as in \eqref{eq:MC_M_construction2}.
We observe the following properties:
\begin{enumerate}[label=(R\arabic*)]
\item \label{pt:MC_21} This Markov chain is irreducible and reversible. Furthermore, the stationary distribution $\pi$ assigns probability $\frac 12$ to the initial state $1^m$.
\item \label{pt:MC_22} For $m\leq t\leq n-m$, let $\calX_t$ denote the collection of trajectories $x^n$ such that $x_1=\cdots=x_t=1$ and $x_{t+1},\ldots,x_n\in {\tilde \calS}$. Then using \prettyref{lmm:MC_stationary-M},
\begin{align}\label{eq:MC_Xt_prob2}
\PP(X^n\in\calX_t)&= \PP(X_1=\cdots=X_t=1)
\cdot
\PP(X_{t+1}\neq 1|X_{t-m+1}^{t}=1^{m})
\nonumber\\
&\quad \cdot
\prod_{i=2}^{m-1}
\PP(X_{t+i}\neq 1|X_{t-m+i}^{t}=1^{m-i+1},X_{t+1}^{t+i-1}\in{\tilde \calS}^{i-1})
\nonumber\\
&\quad \cdot\PP(X_{t+m}\neq 1|X_t=1,X_{t+1}^{t+m-1}\in{\tilde \calS}^{m-1})
\cdot \prod_{s=t+m}^{n-1} \PP(X_{s+1}\neq 1|X_{s-m+1}^s\in{\tilde \calS}^m) \nonumber \\
&= \frac{1}{2}\cdot \left(1-\frac{1}{n}\right)^{t-m}\cdot \frac{b}{n2^{m-2}}\cdot \left(1-\frac{1}{n}\right)^{n-m-t} =\frac b{n2^{m-1}}\pth{1-\frac 1n}^{n-2m}.
\end{align}
Moreover, this probability does not depend on the choice of $T$;
\item \label{pt:MC_23} Conditioned on the event that $X^n\in\calX_t$, the trajectory $(X_{t+1},\cdots,X_n)$ has the same distribution as a length-$(n-t)$ trajectory of a stationary $m^{\text{th}}$-order Markov chain with state space ${\tilde \calS}$ and transition probability $T$, and the uniform initial distribution.
Indeed,
\begin{align*}
& \prob{X_{t+1}=x_{t+1},\ldots,X_n=x_n|X^n\in\calX_t} \\
&= \frac{\frac{1}{2}\cdot \left(1-\frac{1}{n}\right)^{t-m} \cdot \frac b{n2^{m-2}(k-1)^{m}} \prod_{s=t+m}^{n-1} \pth{1-\frac1n}T(x_{s+1}|x_{s-m+1}^s) }{\frac{b}{n2^{m-1}}\left(1-\frac{1}{n}\right)^{n-2m}} \\
&= \frac{1}{(k-1)^m}
\prod_{s=t+m}^{n-1} T(x_{s+1}|x_{s-m+1}^s).
\end{align*}
\end{enumerate}
\paragraph{Reducing the Bayes prediction risk to mutual information}
Consider the following Bayesian setting: we first draw $T$ from some prior satisfying property \ref{pt:MC_P}, then generate the stationary $m^{\rm th}$-order Markov chain $X^n=(X_1,\ldots,X_n)$ with state space $[k]$, transition matrix $M$ in \eqref{eq:MC_M_construction2}, and stationary distribution $\pi$ in \prettyref{eq:pi-mth}. The following lemma lower bounds the Bayes prediction risk.
\begin{lemma}\label{lmm:riskred_markov2}
Conditioned on $T$, let $Y^n=(Y_1,\ldots,Y_n)$ denote an $m^{\rm th}$-order stationary Markov chain on state space
${\tilde \calS}=\{2,\ldots,k\}$ with transition matrix $T$ and uniform initial distribution. Then
\begin{align*}
&\inf_{\widehat{M}}\EE_{T}\left[\EE[D(M(\cdot|X_{n-m+1}^n)\| \widehat{M}(\cdot|X^n)) ]\right] \nonumber\\
&\ge \frac{b(n-1)}{n^22^{m-1}}\pth{1-\frac 1n}^{n-2m}\left(I(T;Y^{n-m}) - m\log (k-1)\right).
\end{align*}
\end{lemma}
\begin{proof}
We first relate the Bayes estimator of $M$ and $T$ (given the $X$ and $Y$ chain respectively).
For each $m\leq t\leq n$, denote by $\hat M_t=\hat M_t(\cdot|x^t)$
the Bayes estimator of $M(\cdot|x_{t-m+1}^t)$ given $X^t=x^t$, and
$\hat T_t(\cdot|y^t)$
the Bayes estimator of $T(\cdot|y_{t-m+1}^t)$ given $Y^t=y^t$.
For each $t=m,\ldots,n-m$ and for each trajectory $x^n=(1,\ldots,1,x_{t+1},\ldots,x_n) \in \calX_t$, recalling the form \eqref{eq:MC_bayes} of the Bayes estimator,
we have, for each $j\in{\tilde \calS}$,
\begin{align*}
&\hat M_n(j|x^n)\\
&= ~ \frac{\prob{X^{n+1}=(x^n,j)}}{\prob{X^n=x^n}} \\
&= ~ \frac{\Expect[\frac{1}{2}\cdot \left(1-\frac{1}{n}\right)^{t-m} \cdot \frac b{n2^{m-2}(k-1)^{m}} \prod_{s=t+m}^{n-1} M(x_{s+1}|x_{s-m+1}^s) M(j|x_{n-m+1}^n)]}{\Expect[\frac{1}{2}\cdot \left(1-\frac{1}{n}\right)^{t-m} \cdot \frac b{n2^{m-2}(k-1)^{m}} \prod_{s=t+m}^{n-1} M(x_{s+1}|x_{s-m+1}^s)]} \\
&= ~ \pth{1-\frac{1}{n}} \frac{\Expect[\frac 1{(k-1)^m}\prod_{s=t+m}^{n-1} T(x_{s+1}|x_{s-m+1}^s) T(j|x_{n-m+1}^n)]}{\Expect[\frac 1{(k-1)^m}\prod_{s=t+m}^{n-1} T(x_{s+1}|x_{s-m+1}^s)]} \\
&= ~ \pth{1-\frac{1}{n}} \frac{\prob{Y^{n-t+1} = (x_{t+1}^n, j)} }{\prob{Y^{n-t} = x_{t+1}^n}} \\
&= ~ \pth{1-\frac{1}{n}} \hat T_{n-t}(j|x_{t+1}^n) .
\end{align*}
Furthermore, since $M(1|x^m)=1/n$ for all $x^m\in {\tilde \calS}^m$ in the construction \eqref{eq:MC_M_construction2}, the Bayes estimator also satisfies $\hat M_n(1|x^n) = 1/n$ for $x^n\in \calX_t$ and $t\le n-m$.
In all, we have
\begin{equation}
\hat M_n(\cdot|x^n) = \frac{1}{n} \delta_1 + \pth{1-\frac{1}{n}}\hat T_{n-t}(\cdot|x_{t+1}^n) , \quad x^n\in\calX_t, t\le n-m,
\label{eq:MC_bayesMT2}
\end{equation}
with $\delta_1$ denoting the point mass at state 1, which parallels the fact that
\begin{equation}
M(\cdot|y^m) = \frac{1}{n} \delta_1 + \pth{1-\frac{1}{n}} T(\cdot|y^m), \quad y^m\in {\tilde \calS}^m.
\label{eq:MC_MT2}
\end{equation}
By \ref{pt:MC_22}, each event $\{X^n\in\calX_t\}$ occurs with probability at least ${b\over n2^{m-1}}\pth{1-\frac 1n}^{n-2m}$, and is independent of $T$. Therefore,
\begin{align}\label{eq:MC_decomposition2}
&\EE_{T}\left[\EE[D(M(\cdot|X_{n-m+1}^n)\| \widehat{M}(\cdot|X^n)) ]\right] \nonumber \\
&\ge \frac b{n2^{m-1}}\pth{1-\frac 1n}^{n-2m}\sum_{t=m}^{n-m}\EE_{T}\left[\EE[D(M(\cdot|X_{n-m+1}^n)\| \widehat{M}(\cdot|X^n)) | X^n\in\calX_t ]\right].
\end{align}
By \ref{pt:MC_23}, the conditional joint law of $(T,X_{t+1},\ldots,X_n)$ on the event $\{X^n\in\calX_t\}$ is the same as the joint law of $(T,Y_{1},\ldots,Y_{n-t})$.
Thus, we may express the Bayes prediction risk in the $X$ chain as
\begin{align}\label{eq:MC_reduction_M_N2}
\EE_{T}\left[\EE[D(M(\cdot|X_{n-m+1}^n)\| \widehat{M}(\cdot|X^n)) | X^n\in\calX_t ]\right]
& \stepa{=} \left(1-\frac{1}{n}\right)\cdot \EE_{T}\left[\EE[D(T(\cdot|Y_{n-t-m+1}^{n-t}) \| \widehat{T}(\cdot|Y^{n-t}))]\right] \nonumber \\
& \stepb{=} \left(1-\frac{1}{n}\right)\cdot I(T;Y_{n-t+1}|Y^{n-t}),
\end{align}
where (a) follows from \prettyref{eq:MC_bayesMT2}, \prettyref{eq:MC_MT2}, and the fact that for distributions $P,Q$ supported on ${\tilde \calS}$,
$D(\epsilon \delta_1 + (1-\epsilon) P \|\epsilon \delta_1 + (1-\epsilon) Q)=(1-\epsilon) D(P\|Q)$;
(b) is the mutual information representation \prettyref{eq:MC_Bayes-MI} of the Bayes prediction risk.
Finally, the lemma follows from \eqref{eq:MC_decomposition2}, \eqref{eq:MC_reduction_M_N2}, and the chain rule
\begin{align*}
\sum_{t=m}^{n-m} I(T;Y_{n-t+1}|Y^{n-t}) = I(T;Y^{n-m+1}) - I(T;Y^m) \ge I(T;Y^{n-m}) - m\log(k-1),
\end{align*}
as $I(T;Y^m)\le H(Y^m)\le m \log(k-1)$.
\end{proof}
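Step (a) of the proof relies on the identity $D(\epsilon \delta_1 + (1-\epsilon) P \,\|\, \epsilon \delta_1 + (1-\epsilon) Q)=(1-\epsilon) D(P\|Q)$, which holds because state 1 receives the same mass $\epsilon$ under both mixtures. A quick numerical sanity check, outside the formal argument (the distributions and the value of $\epsilon$ below are arbitrary choices):

```python
import math

def kl(p, q):
    """KL divergence D(p||q) for discrete distributions, with 0 log 0 = 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mix_with_point_mass(p, eps):
    """Return eps*delta_1 + (1-eps)*p, where p is supported on states 2..k."""
    return [eps] + [(1 - eps) * pi for pi in p]

# P, Q supported on states {2, 3, 4}; state 1 gets mass only from delta_1.
P = [0.5, 0.3, 0.2]
Q = [0.2, 0.3, 0.5]
eps = 1 / 7

lhs = kl(mix_with_point_mass(P, eps), mix_with_point_mass(Q, eps))
rhs = (1 - eps) * kl(P, Q)
assert abs(lhs - rhs) < 1e-12
```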
\paragraph{Prior construction and lower bounding the mutual information}
We assume that $k=2k_0+1$ for some integer $k_0$. To simplify notation we relabel ${\tilde \calS}$ as $\calY=\{1,\dots,k-1\}$; this does not affect the lower bound. Define an equivalence relation on $\calY^{m-1}$ by the following rule: $x^{m-1}$ and $y^{m-1}$ are related if and only if $x^{m-1}=y^{m-1}$ or $x^{m-1}=\overline{y^{m-1}}$. Let $R_{m-1}$ be a subset of $\calY^{m-1}$ that consists of exactly one representative from each of the equivalence classes. As each equivalence class has at most two elements, the total number of classes is at least ${|\calY|^{m-1}\over 2}$, i.e., $|R_{m-1}|\geq {(k-1)^{m-1}\over 2}$.
We consider the following prior: let $u=\sth{u_{ix^{m-1}j}}_{i\leq j\in{[k_0]},x^{m-1}\in R_{m-1}}$ be iid, each uniformly distributed on $[1/(4k_0),3/(4k_0)]$, and for each $i\leq j$ in $[k_0]$ and $x^{m-1}\in R_{m-1}$ define $u_{jx^{m-1}i}$, $u_{i\overline{x^{m-1}}j}$, and $u_{j\overline{x^{m-1}}i}$ to be the same as $u_{i{x^{m-1}}j}$. Let the transition matrix $T$ be given by
\begin{align}\label{eq:MC_u-T-relation-order-m}
&T(2j-1|2i-1,x^{m-1}) = T(2j|2i,x^{m-1}) = u_{ix^{m-1}j},
\nonumber\\
&T(2j|2i-1,x^{m-1}) = T(2j-1|2i,x^{m-1}) = \frac 1{k_0}-u_{ix^{m-1}j}, \quad i,j\in [k_0],\ x^{m-1}\in \calY^{m-1}.
\end{align}
One can check that the constructed $T$ is a stochastic matrix and satisfies the property \ref{pt:MC_P}, which enforces uniform stationary distribution. Also each entry of $T$ belongs to the interval $[{1\over 2(k-1)},{3\over 2(k-1)}]$.
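The claimed properties of \eqref{eq:MC_u-T-relation-order-m} can be checked numerically. The sketch below instantiates only the first-order case $m=1$ (so the context $x^{m-1}$ is empty and the reversal condition reduces to symmetry of $T$), with the symmetrization $u_{ji}=u_{ij}$ and the arbitrary choice $k_0=3$; it verifies stochasticity, the entry range $[\frac{1}{2(k-1)},\frac{3}{2(k-1)}]$, symmetry, and that the uniform distribution is stationary:

```python
import random

k0 = 3
k1 = 2 * k0              # size of the reduced state space, k - 1

# symmetric draws u_{ij} = u_{ji}, each uniform on [1/(4 k0), 3/(4 k0)]
random.seed(0)
u = [[0.0] * k0 for _ in range(k0)]
for i in range(k0):
    for j in range(i, k0):
        u[i][j] = u[j][i] = random.uniform(1 / (4 * k0), 3 / (4 * k0))

# first-order version of the construction, states 0..2k0-1 (0-indexed)
T = [[0.0] * k1 for _ in range(k1)]
for i in range(k0):
    for j in range(k0):
        T[2 * i][2 * j] = T[2 * i + 1][2 * j + 1] = u[i][j]
        T[2 * i][2 * j + 1] = T[2 * i + 1][2 * j] = 1 / k0 - u[i][j]

for row in T:
    assert abs(sum(row) - 1) < 1e-12                    # stochastic
for a in range(k1):
    for b in range(k1):
        assert 1 / (2 * k1) <= T[a][b] <= 3 / (2 * k1)  # entry range
        assert abs(T[a][b] - T[b][a]) < 1e-12           # reversal condition (m = 1)
for b in range(k1):
    # column sums equal 1, so T is doubly stochastic: uniform is stationary
    assert abs(sum(T[a][b] for a in range(k1)) - 1) < 1e-12
```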
Next we use the following lemma to derive estimation guarantees on $T$.
\begin{lemma}\label{lmm:L2_upper-order-m}
Suppose that $T$ is an $\ell^{m}\times \ell$ transition matrix, on state space $\calY^m$ with $|\calY|=\ell$, satisfying $T(x_{m+1}|x^{m}) = T(x_1|\overline{x_2^{m+1}})$ for all $x^{m+1}\in [\ell]^{m+1}$, and $T(y_{m+1}|y^m)\in [\frac{c_1}{\ell},\frac{c_2}{\ell}]$ with $0<c_1<1<c_2$ for all $y^{m+1}\in [\ell]^{m+1}$. Then there is an estimator $\widehat{T}$ based on a stationary trajectory $Y^n$ generated from $T$ such that
\begin{align*}
\EE[\|\widehat{T} - T \|_{\mathsf{F}}^2] \le\frac{4c_2(m+1)\ell^{2m}}{c_1^{2m+3}(n-m)} ,
\end{align*}
where $\|\widehat{T} - T \|_{\mathsf{F}} = \sqrt{\sum_{y^{m+1}} (\widehat{T}(y_{m+1}|y^m) - T(y_{m+1}|y^m))^2}$ denotes the Frobenius norm.
\end{lemma}
For our purposes we apply the above lemma to $T$ with $\ell=k-1$, $c_1=\frac 12$, $c_2=\frac 32$. It follows that there exist estimators $\widehat{T}(Y^n)$ and $\widehat{u}(Y^n)$ such that
\begin{align}
\EE[\|\widehat{u}(Y^n) - u\|_2^2] \le \EE[\|\widehat{T}(Y^n) - T \|_{\mathsf{F}}^2] \le \frac{4c_2(m+1)(k-1)^{2m}}{c_1^{2m+3}(n-m)}.
\label{eq:MSEu-second}
\end{align}
Here and below, we identify $u=\sth{u_{ix^{m-1}j}}_{i\leq j,x^{m-1}\in R_{m-1}}$ and $\hat u=\sth{\hat u_{ix^{m-1}j}}_{i\leq j,x^{m-1}\in R_{m-1}}$ as ${|R_{m-1}|k_0(k_0+1)\over 2}={|R_{m-1}|(k^2-1)\over 8}$-dimensional vectors.
Let $h(X) = -\int f_X(x)\log f_X(x)\,dx$ denote the differential entropy of a continuous random vector $X$ with density $f_X$ w.r.t.\ the Lebesgue measure,
and $h(X|Y)=-\int f_{X,Y}(x,y)\log f_{X|Y}(x|y)\,dx\,dy$ the conditional differential entropy (cf.~e.g.~\cite{cover}). Then
\begin{align}
h(u) = \sum_{i\leq j\in{[k_0]},x^{m-1}\in R_{m-1}}h(u_{ix^{m-1}j}) = -\frac{|R_{m-1}|(k^2-1)}{8}\log(k-1).
\end{align}
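The bookkeeping behind the last display can be checked mechanically: with $k=2k_0+1$ there are $k_0(k_0+1)/2=(k^2-1)/8$ index pairs $i\le j$ per representative, and each coordinate is uniform on an interval of length $1/(2k_0)=1/(k-1)$, whose differential entropy is $-\log(k-1)$:

```python
import math

for k0 in range(1, 20):
    k = 2 * k0 + 1
    # number of unordered pairs i <= j in [k0] equals (k^2 - 1)/8
    assert k0 * (k0 + 1) // 2 == (k * k - 1) // 8
    # differential entropy of Uniform[1/(4 k0), 3/(4 k0)] is log of its length
    length = 3 / (4 * k0) - 1 / (4 * k0)
    assert abs(math.log(length) + math.log(k - 1)) < 1e-12
```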
Then
\begin{align*}
I(T;Y^n)
& \stepa{=} I(u;Y^n)\\
& \stepb{\geq} I(u;\hat u(Y^n)) = h(u)-h(u|\hat u(Y^n))\\
& \stepc{\geq} h(u)-h(u-\hat u(Y^n))\\
& \stepd{\geq} \frac{|R_{m-1}|(k^2-1)}{16}\log\left(\frac{c_1^{2m+3}|R_{m-1}|(k^2-1)(n-m)}{64\pi ec_2(m+1)(k-1)^{2m+2}}\right)
\ge \frac{(k-1)^{m+1}}{32}\log\left(\frac{n-m}{c_m(k-1)^{m+1}}\right),
\end{align*}
where $c_m={128\pi ec_2(m+1)\over c_1^{2m+3}}$ is a constant, and
(a) is because $u$ and $T$ are in one-to-one correspondence by \prettyref{eq:MC_u-T-relation-order-m};
(b) follows from the data processing inequality;
(c) is because $h(\cdot)$ is translation invariant and concave;
(d) follows from the maximum entropy principle \cite{cover}:
$h(u-\hat u(Y^n)) \leq \frac{|R_{m-1}|(k^2-1)}{16}\log\left(\frac{2\pi e}{|R_{m-1}|(k^2-1)/8}\cdot \EE[\|\widehat{u}(Y^n) - u\|_2^2] \right)$, which in turn is bounded by \prettyref{eq:MSEu-second}.
Plugging this lower bound into Lemma \ref{lmm:riskred_markov2} completes the lower bound proof of \prettyref{thm:optimal-m}.
\subsubsection{Proof of \prettyref{lmm:L2_upper-order-m} via pseudo spectral gap}
In view of \prettyref{lmm:MC_stationary-T} we get that the stationary distribution of $T$ is uniform over $\calY^m$, and there is a one-to-one correspondence between the joint distribution of $Y^{m+1}$ and the transition probabilities
\begin{align}
\PP\qth{Y^{m+1}=y^{m+1}}
=\frac 1{\ell^{m}}T(y_{m+1}|y^m).
\end{align}
Consider the following estimator $\widehat{T}$: for $y^{m+1}\in [\ell]^{m+1}$, let
\begin{align*}
\widehat{T}(y_{m+1}|y^m) = \ell^m\cdot \frac{\sum_{t=1}^{n-m} \indc{Y_t^{t+m}=y^{m+1}} }{n-m}.
\end{align*}
Clearly $\EE[\widehat{T}(y_{m+1}|y^m)]=\ell^m\PP\qth{Y^{m+1}=y^{m+1}}= T(y_{m+1}|y^m)$ by stationarity. Next we observe that the sequence of random variables $\sth{Y_{t}^{t+m}}_{t=1}^{n-m}$ is a first-order Markov chain on $[\ell]^{m+1}$. Let us denote its transition matrix by $T_{m+1}$ and note that its stationary distribution is given by $\pi(a^{m+1})=\ell^{-m}T(a_{m+1}|a^m)$, $a^{m+1}\in[\ell]^{m+1}$. For the transition matrix $T_{m+1}$, which need not be reversible, the \emph{pseudo spectral gap} $\gamma_{\text{ps}}(T_{m+1})$ is defined as
\begin{align*}
\gamma_{\text{ps}}(T_{m+1}) = \max_{r\ge 1} \frac{\gamma( (T_{m+1}^*)^r T_{m+1}^r )}{r},
\end{align*}
where $T_{m+1}^*$ is the adjoint of $T_{m+1}$, defined by $T_{m+1}^*(b^{m+1}|a^{m+1}) = \pi(b^{m+1})T_{m+1}(a^{m+1}|b^{m+1})/\pi(a^{m+1})$.
With these notations, the concentration inequality of \cite[Theorem 3.2]{P15} gives the following variance bound:
\begin{align*}
\var(\widehat{T}(y_{m+1}|y^m)) \le \ell^{2m}\cdot \frac{4\PP\qth{Y^{m+1}=y^{m+1}}}{\gamma_{\text{ps}}(T_{m+1})(n-m)} \le \ell^{2m}\cdot \frac{4T(y_{m+1}|y^m)\ell^{-m}}{\gamma_{\text{ps}}(T_{m+1})(n-m)}.
\end{align*}
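The pseudo spectral gap can be computed directly in small examples. The sketch below does so for a two-state first-order chain, using the fact that two-state chains are automatically reversible, so the adjoint equals the chain itself and $(T^*)^rT^r=T^{2r}$; the chain parameters are arbitrary choices:

```python
def second_eigenvalue(P):
    # for a 2x2 stochastic matrix the eigenvalues are 1 and trace - 1
    return P[0][0] + P[1][1] - 1

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

def pseudo_spectral_gap(P, r_max=50):
    """gamma_ps = max_r gamma((P*)^r P^r)/r; here P* = P by reversibility."""
    gaps = []
    M = [[1.0, 0.0], [0.0, 1.0]]
    for r in range(1, r_max + 1):
        M = matmul(matmul(M, P), P)      # M = P^{2r}
        gaps.append((1 - second_eigenvalue(M)) / r)
    return max(gaps)

p, q = 0.3, 0.2
P = [[1 - p, p], [q, 1 - q]]
lam = 1 - p - q                          # the non-trivial eigenvalue of P
expected = max((1 - lam ** (2 * r)) / r for r in range(1, 51))
assert abs(pseudo_spectral_gap(P) - expected) < 1e-9

# the fair-coin chain mixes in one step, so its pseudo spectral gap is 1
assert abs(pseudo_spectral_gap([[0.5, 0.5], [0.5, 0.5]]) - 1) < 1e-12
```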
The following lemma bounds the pseudo spectral gap from below.
\begin{lemma}\label{lmm:MC_bound-pseudo-spectral-gap}
Let $T\in \mathbb{R}^{\ell^m\times \ell}$ be the transition matrix of an $m$-th order Markov chain $(Y_t)_{t\geq 1}$
over a discrete state space $\mathcal{Y}$ with $|\mathcal{Y}|=\ell$, and assume that
\begin{itemize}
\item all the entries of $T$ lie in the interval $[\frac {c_1}\ell,\frac {c_2}\ell]$ for some absolute constants $0<c_1<c_2$;
\item $T$ has the uniform stationary distribution on $\calY^m$.
\end{itemize}
Let $T_{m+1}\in \reals^{\ell^{m+1} \times \ell^{m+1}}$ be the transition matrix of the first-order Markov chain $((Y_t,Y_{t+1},\cdots,Y_{t+m}))_{t\geq 1}$. Then we have
\begin{align*}
\gamma_{\text{\rm ps}}(T_{m+1}) \ge {c_1^{2m+3}\over c_2(m+1)}.
\end{align*}
\end{lemma}
Consequently, we have
\begin{align*}
\EE[\|\widehat{T} - T \|_{\mathsf{F}}^2] = \sum_{y^{m+1}\in [\ell]^{m+1}} \var(\widehat{T}(y_{m+1}|y^m)) \le \sum_{y^{m+1}\in [\ell]^{m+1}} \frac {4c_2(m+1)\ell^m}{c_1^{2m+3}}\cdot \frac{T(y_{m+1}|y^m)}{n-m} = \frac{4c_2(m+1)\ell^{2m}}{c_1^{2m+3}(n-m)},
\end{align*}
completing the proof.
\begin{proof}[Proof of \prettyref{lmm:MC_bound-pseudo-spectral-gap}]
As $T_{m+1}$ is the transition matrix of a first-order Markov chain, the stochastic matrix $T_{m+1}^{m+1}$ gives the probabilities of transition from $(Y_t,Y_{t+1},\cdots,Y_{t+m})$ to $(Y_{t+m+1},Y_{t+m+2},\cdots,Y_{t+2m+1})$. By our assumption on $T$,
\begin{align}\label{eq:MC_min-T_{m+1}}
\min_{a^{2m+2}\in \calY^{2m+2}} T^{m+1}_{m+1}(a^{2m+2}_{m+2}|a^{m+1})
\geq \prod_{t=0}^{m}T(a_{2m+2-t}|a^{2m+1-t}_{m+2-t})
\geq {c_1^{m+1} \over \ell^{m+1}}.
\end{align}
Given any $a^{m+1},b^{m+1}\in \calY^{m+1}$, using the above inequality we have
\begin{align}\label{eq:MC_min-T*_{m+1}}
&(T_{m+1}^*)^{m+1}(b^{m+1}|a^{m+1})
\nonumber\\
&=\sum_{\bm y_1\in\calY^{m+1},\dots,\bm y_{m}\in\calY^{m+1}}T_{m+1}^*(b^{m+1}|\bm y_{m})\sth{\prod_{t=1}^{m-1} T_{m+1}^*(\bm y_{m-t+1}|\bm y_{m-t})}T_{m+1}^*(\bm y_1|a^{m+1})
\nonumber\\
&=\sum_{\bm y_1\in\calY^{m+1},\dots,\bm y_{m}\in\calY^{m+1}}{\pi( b^{m+1})T_{m+1}(\bm y_{m}|b^{m+1})\over \pi(\bm y_m)}\sth{\prod_{t=1}^{m-1}{\pi(\bm y_{m-t+1})T_{m+1}(\bm y_{m-t}|\bm y_{m-t+1})\over \pi(\bm y_{m-t})}}{\pi(\bm y_1)T_{m+1}(a^{m+1}|\bm y_1)\over \pi(a^{m+1})}
\nonumber\\
&={\pi(b^{m+1})\over \pi(a^{m+1})}
\sum_{\bm y_1\in\calY^{m+1},\dots,\bm y_{m}\in\calY^{m+1}}T_{m+1}(\bm y_{m}|b^{m+1})\sth{\prod_{t=1}^{m-1}T_{m+1}(\bm y_{m-t}|\bm y_{m-t+1})}T_{m+1}(a^{m+1}|\bm y_1)
\nonumber\\
&={\pi(b^{m+1})\over \pi(a^{m+1})}T^{m+1}_{m+1}(a^{m+1}|b^{m+1})
\nonumber\\
&={\pi(b^m)T(b_{m+1}|b^m)\over \pi(a^m)T(a_{m+1}|a^m)}T^{m+1}_{m+1}(a^{m+1}|b^{m+1})
\geq {c_1\over c_2}\cdot {c_1^{m+1}\over \ell^{m+1}}.
\end{align}
Using \eqref{eq:MC_min-T_{m+1}} and \eqref{eq:MC_min-T*_{m+1}}, we get
\begin{align}
&\min_{a^{m+1},b^{m+1}\in\calY^{m+1}}\sth{(T_{m+1}^*)^{m+1}T^{m+1}_{m+1}}(b^{m+1}|a^{m+1})
\nonumber\\
&\geq
\sum_{d^{m+1}\in\calY^{m+1}}\pth{\min_{a^{m+1},d^{m+1}\in\calY^{m+1}}(T_{m+1}^*)^{m+1}(d^{m+1}|a^{m+1})}
\pth{\min_{b^{m+1},d^{m+1}\in\calY^{m+1}}T^{m+1}_{m+1}(b^{m+1}|d^{m+1})}
\nonumber\\
&\geq \sum_{d^{m+1}\in\calY^{m+1}}{c_1^{2m+3}\over c_2\ell^{2m+2}}
\geq {c_1^{2m+3}\over c_2\ell^{m+1}}.
\end{align}
As $(T_{m+1}^*)^{m+1}T^{m+1}_{m+1}$ is an $\ell^{m+1}\times \ell^{m+1}$ stochastic matrix, we can use \prettyref{lmm:special-hoffman} to get the lower bound on its spectral gap $\gamma((T_{m+1}^*)^{m+1}T^{m+1}_{m+1})\geq {c_1^{2m+3}\over c_2}$. Hence we get
\begin{align}
\gamma_{\text{\rm ps}}(T_{m+1})\geq {\gamma((T_{m+1}^*)^{m+1}T^{m+1}_{m+1})\over m+1}
\geq {c_1^{2m+3}\over c_2(m+1)}
\end{align}
as required.
A more general version of \prettyref{lmm:special-hoffman} can be found in \cite{H67}.
\begin{lemma}\label{lmm:special-hoffman}
Suppose that $A$ is a $d\times d$ stochastic matrix with $\min_{i,j}A_{ij}\geq \epsilon$. Then for any eigenvalue $\lambda$ of $A$ other than 1 we have $|\lambda|\leq 1-d\epsilon$.
\end{lemma}
\begin{proof}
Suppose that $\lambda\neq 1$ is an eigenvalue of $A$ with non-zero left eigenvector $\bm v$, i.e., $\lambda v_j=\sum_{i=1}^dv_i A_{ij}$ for $j=1,\dots,d$. As $A$ is a stochastic matrix, $\sum_{j}A_{ij}=1$ for all $i$; summing the eigenvalue equation over $j$ gives $\lambda\sum_{j=1}^d v_j=\sum_{i=1}^d v_i$, and since $\lambda\neq 1$ this forces $\sum_{i=1}^d v_i=0$. This implies
\begin{align}
|\lambda v_j|
=\left|\sum_{i=1}^dv_iA_{ij}\right|
=\left|\sum_{i=1}^dv_i(A_{ij}-\epsilon)\right|
\leq \sum_{i=1}^d|v_i(A_{ij}-\epsilon)|
= \sum_{i=1}^d|v_i|(A_{ij}-\epsilon)
\end{align}
with the last equality following from $A_{ij}\geq \epsilon$. Summing over $j=1,\dots,d$ in the above display and dividing both sides by $\sum_{i=1}^d|v_i|>0$, we get $|\lambda|\leq 1-d\epsilon$ as required.
\end{proof}
\end{proof}
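The contraction estimate at the heart of the last proof (for any $\bm v$ with $\sum_i v_i=0$, one has $\|\bm v A\|_1\le(1-d\epsilon)\|\bm v\|_1$, which forces $|\lambda|\le 1-d\epsilon$ for every eigenvalue $\lambda\neq 1$) can be checked numerically; the stochastic matrix below, with all entries at least $\epsilon$, is an arbitrary choice:

```python
import random

random.seed(1)
d, eps = 5, 0.05

# random d x d stochastic matrix with every entry >= eps
A = []
for _ in range(d):
    raw = [random.random() for _ in range(d)]
    s = sum(raw)
    A.append([eps + (1 - d * eps) * x / s for x in raw])
for row in A:
    assert abs(sum(row) - 1) < 1e-12 and min(row) >= eps

# the proof's estimate: sum-zero vectors contract in l1 by a factor 1 - d*eps
for _ in range(100):
    v = [random.uniform(-1, 1) for _ in range(d)]
    shift = sum(v) / d
    v = [x - shift for x in v]           # enforce sum(v) = 0
    vA = [sum(v[i] * A[i][j] for i in range(d)) for j in range(d)]
    assert sum(abs(x) for x in vA) <= (1 - d * eps) * sum(abs(x) for x in v) + 1e-12
```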
\section{Discussions and open problems}
\label{sec:discussion}
We discuss the assumptions and implications of our results as well as related open problems.
\paragraph{Very large state space.}
\prettyref{thm:optimal} determines the optimal prediction risk under the assumption that $k \lesssim \sqrt{n}$.
When $k \gtrsim \sqrt{n}$, \prettyref{thm:optimal} shows that the KL risk is bounded away from zero. However, as the KL risk can be as large as $\log k$, it is a meaningful question to determine the optimal rate in this case, which, thanks to the general reduction in \prettyref{eq:riskred-approx}, reduces to determining the redundancy for symmetric and general Markov chains.
For iid data, the minimax \emph{pointwise} redundancy is known to be $n \log \frac{k}{n} + O(\frac{n^2}{k})$ \cite[Theorem 1]{szpankowski2012minimax} when $k\gg n$.
Since the average and pointwise redundancy usually behave similarly, for Markov chains it is reasonable to conjecture that the redundancy is $\Theta(n \log \frac{k^2}{n})$ in the large alphabet regime of $k \gtrsim \sqrt{n}$, which, in view of \prettyref{eq:riskred-approx}, would imply the optimal prediction risk is
$\Theta(\log \frac{k^2}{n})$ for $k \gg \sqrt{n}$.
In comparison, we note that the prediction risk is always at most $\log k$, attained by predicting with the uniform distribution.
\paragraph{Other loss functions}
As mentioned in \prettyref{sec:technique}, standard arguments based on concentration inequalities inevitably rely on mixing conditions such as a spectral gap bound. In contrast, the risk bound in \prettyref{thm:optimal}, which is free of any mixing condition, is enabled by powerful techniques from universal compression, which bound the redundancy by the pointwise maximum over all trajectories combined with information-theoretic or combinatorial arguments. This program relies only on the Markovity of the process rather than stationarity or spectral gap assumptions.
The limitation of this approach, however, is that the reduction between prediction and redundancy crucially depends on the form of the KL loss function\footnote{In fact, this connection breaks down if one swaps $M$ and $\hat M$ in the KL divergence in \prettyref{eq:riskkn}.} in \prettyref{eq:riskkn}, which allows one to use the mutual information representation and the chain rule to relate individual risks to the cumulative risk.
More general losses in terms of $f$-divergences have been considered in \cite{HOP18}. Obtaining spectral gap-independent risk bounds for these loss functions, this time without the aid of universal compression, is an open question.
\paragraph{Stationarity}
As mentioned above, the redundancy result in \prettyref{lmm:red-addone} (see also \cite{D83,tatwawadi2018minimax}) holds for nonstationary Markov chains as well.
However, our redundancy-based risk upper bound in \prettyref{lmm:riskred} crucially relies on stationarity.
It is unclear whether the result of \prettyref{thm:optimal} carries over to nonstationary chains.
https://arxiv.org/abs/0906.2105
Nonuniform sampling and recovery of multidimensional bandlimited functions by Gaussian radial-basis functions

Abstract: Let $S\subset\R^d$ be a bounded subset with positive Lebesgue measure. The Paley-Wiener space associated to $S$, $PW_S$, is defined to be the set of all square-integrable functions on $\R^d$ whose Fourier transforms vanish outside $S$. A sequence $(x_j:j\kin\N)$ in $\R^d$ is said to be a Riesz-basis sequence for $L_2(S)$ (equivalently, a complete interpolating sequence for $PW_S$) if the sequence $(e^{-i\la x_j,\cdot\ra}:j\kin\N)$ of exponential functions forms a Riesz basis for $L_2(S)$. Let $(x_j:j\kin\N)$ be a Riesz-basis sequence for $L_2(S)$. Given $\lambda>0$ and $f\in PW_S$, there is a unique sequence $(a_j)$ in $\ell_2$ such that the function
$$ I_\lambda(f)(x):=\sum_{j\in\N}a_je^{-\lambda \|x-x_j\|_2^2}, \qquad x\kin\R^d, $$
is continuous and square integrable on $\R^d$, and satisfies the condition $I_\lambda(f)(x_n)=f(x_n)$ for every $n\kin\N$. This paper studies the convergence of the interpolant $I_\lambda(f)$ as $\lambda$ tends to zero, {\it i.e.,} as the variance of the underlying Gaussian tends to infinity. The following result is obtained: Let $\delta\in(\sqrt{2/3},1]$ and $0<\beta<\sqrt{3\delta^2 -2}$. Suppose that $\delta B_2\subset Z\subset B_2$, and let $(x_j:j\in\N)$ be a Riesz-basis sequence for $L_2(Z)$. If $f\in PW_{\beta B_2}$, then $f=\lim_{\lambda\to 0^+} I_\lambda(f)$ in $L_2(\R^d)$ and uniformly on $\R^d$. If $\delta=1$, then one may take $\beta$ to be 1 as well, and this reduces to a known theorem in the univariate case. However, if $d\ge2$, it is not known whether $L_2(B_2)$ admits a Riesz-basis sequence. On the other hand, in the case when $\delta<1$, there do exist bodies $Z$ satisfying the hypotheses of the theorem (in any space dimension).

\section{Introduction}\label{S:0}
The theoretical study of interpolation has long played a
prominent role in the development of the theory
of approximation, whilst the computational aspects of
the subject have found a natural outlet in numerical analysis.
Given the inherent naturalness of the subject, and the
variety of interesting questions it generates, it is not
surprising that interpolation theory continues to
be an active area of research.
Among the diverse issues associated to interpolation, the theory
of `cardinal interpolation' has been a well-studied theme
over many years. The term refers to
the interpolation of data on a regular, usually infinite, grid.
When the interpolation at such a grid is done via spline functions,
in particular, one encounters `cardinal spline interpolation'.
Rooted in
Isaac Schoenberg's seminal work, this subject
was further developed by a number of his
successors.
The last century has witnessed enormous
advances in the state of the art of electrical engineering
and telecommunication theory. Included among the myriad branches
of this discipline is the theory of `sampling', which is an
integral part of signal analysis. By their very nature,
the theories of sampling and interpolation are intertwined;
indeed, at a basic level both are in essence one and the same.
Philosophical considerations apart, there are also solid
and captivating mathematical connections between the
subjects. Connections of this nature, in the context
of cardinal spline interpolation, were brought out
by Schoenberg himself; see, for instance
\cite{Sc}, especially his
remarks on page 228 there. In this paper Schoenberg showed,
among other things, how bandlimited functions
can be recovered as limits of their cardinal-spline
interpolants, as the degree of the underlying
spline tends to infinity. Substantial extensions
of this theme have since been carried out by many,
both in the univariate and multivariate settings. More recently,
it was also discovered that the theory of cardinal spline
interpolation, and its connections to sampling, have
a strong resonance in the theory of radial-basis
functions, especially involving Gaussians. One
consequence of these developments is the fact
that bandlimited functions can also be recovered
via their Gaussian cardinal interpolants, in a suitable
limiting sense.
Moving away from the realm of gridded data, it is natural
to look for comparable connections between interpolation
at irregularly spaced points -- often
referred to as scattered-data interpolation -- via
splines and radial-basis functions, and the now classical
theory
of nonuniform sampling. In the case of splines,
the analogue of Schoenberg's theorem to which
we alluded above was obtained in \cite{LM}, and
a counterpart of this
result for Gaussians was given later
in \cite{SS}.
The focus in \cite{SS} is on the one-dimensional
case, although it does include a relatively
straightforward extension
to higher dimensions in terms of tensor products.
The purpose of
the present article is to present a general
multidimensional result. Here we overcome our
earlier obstacles by extending the
techniques of \cite{LM} and
\cite{SS} in such a way that
the radial symmetry of the multidimensional
Gaussian can be exploited fully. As in \cite{LM} and \cite{SS}, our setting for the
interpolation problem also involves Riesz-basis sequences.
However, we are forced to settle for less in the
multivariate situation, because the existence
of suitable Riesz-basis sequences in higher dimensions is
a subtle matter depending on the geometry of the underlying domain.
A more detailed discussion of this issue is the subject of
the concluding remarks in Section 3.
The rest of the paper is organized as follows.
In the next section we lay out some background
material of a general nature. More specific
preliminaries are given in the subsequent section,
which concludes with
the statement of the main result. The proof
of the latter
is detailed in the final section.
\section{Preliminaries}\label{S:1}
Let $d\in\mathbb{N}$. For $1\le p< \infty$ and a measurable set
$S \subset\mathbb{R}^d$ with positive Lebesgue measure $m(S)$, we denote by
$L_p(S)$ the space of complex-valued functions which are $p$-integrable on $S$ (with respect
to the Lebesgue measure). For $f\in L_p(S)$, we denote its standard $L_p$-norm
by
$\|f\|_{L_p(S)}$, or by $\|f\|_p$, when the context is clear.
We also denote by $\|\cdot\|_p$ the $\ell_p$-norm, $1\le p\le \infty$, on the space of (finite and infinite) sequences.
The space of continuous functions on $\mathbb{R}^d$ is denoted by $C(\mathbb{R}^d)$, and
$$
C_0(\mathbb{R}^d):=\big\{f\in C(\mathbb{R}^d): \lim_{\|x\|_2\to\infty} f(x)=0\big\}.
$$
If $f\in L_1(\mathbb{R}^d)$, then the Fourier transform of $f$, $\hat f$, is defined as follows:
\begin{equation}\label{E:1.1}
\hat f(x):=\int_{\mathbb{R}^d} f(u) e^{-i\langle x,u\rangle}du, \quad \text{ $x\in\mathbb{R}^d$.}
\end{equation}
The Fourier transform of $g\in L_2(\mathbb{R}^d)$ is denoted by $\mathcal{F}[g]$.
The assignment $g\mapsto\mathcal{F}[g]$ is the unique extension
of the map
$$
\widehat{(\cdot)}: L_1(\mathbb{R}^d)\cap L_2(\mathbb{R}^d)\to L_2(\mathbb{R}^d)
$$
to a bounded linear operator on $L_2(\mathbb{R}^d)$, satisfying
the following Plancherel-Parseval relation:
\begin{equation}\label{E:1.2}
\|\mathcal{F}[g]\|_2^2= (2\pi)^{d}\|g\|_2^2\text{ \ for all $g\in L_2(\mathbb{R}^d)$.}
\end{equation}
If $g\in L_2(\mathbb{R}^d)\cap C(\mathbb{R}^d)$ and $\mathcal{F}[g]\in L_1(\mathbb{R}^d)$, then the following inverse formula
holds:
\begin{equation}\label{E:1.3}
g(x)=\frac1{(2\pi)^d} \int_{\mathbb{R}^d} \mathcal{F}[g](u) e^{i\langle u,x\rangle} du, \quad\text{ for all $x\in\mathbb{R}^d$.}
\end{equation}
For $\lambda>0$ we define the {\em Gaussian function } $g_\lambda:\mathbb{R}^d\to\mathbb{R}$ by
$$
g_\lambda(x)=e^{-\lambda\|x\|_2^2}, \quad \text{for all $x\in\mathbb{R}^d$},
$$
and recall that
\begin{equation}\label{E:1.4}
\hat g_\lambda(u)=\Big(\frac{\pi}{\lambda}\Big)^{d/2} e^{-{\|u\|_2^2/(4\lambda)}}, \text{ for all $u\in\mathbb{R}^d$}.
\end{equation}
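For $d=1$, \eqref{E:1.4} reads $\hat g_\lambda(u)=\sqrt{\pi/\lambda}\,e^{-u^2/(4\lambda)}$, which can be confirmed by direct quadrature of \eqref{E:1.1}; a small numerical sketch (the truncation length and grid size are ad hoc choices, and only the real part is computed since the imaginary part vanishes by symmetry):

```python
import math

def gaussian_ft_numeric(lam, u, L=12.0, n=200000):
    """Midpoint-rule approximation of the real part of
    \\int_{-L}^{L} e^{-lam x^2} e^{-i u x} dx."""
    h = 2 * L / n
    return sum(math.exp(-lam * (-L + (j + 0.5) * h) ** 2)
               * math.cos(u * (-L + (j + 0.5) * h)) for j in range(n)) * h

for lam, u in [(1.0, 0.0), (1.0, 1.0), (0.5, 2.0)]:
    exact = math.sqrt(math.pi / lam) * math.exp(-u * u / (4 * lam))
    assert abs(gaussian_ft_numeric(lam, u) - exact) < 1e-6
```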
The functions we wish to interpolate are the so called {\em bandlimited } or
{\em Paley-Wiener functions} on $\mathbb{R}^d$. Specifically, for a bounded and measurable
$S\subset \mathbb{R}^d$ with $m(S)>0$, we define
$$
PW_S=\big\{g\in L_2(\mathbb{R}^d): \mathcal{F}[g]=0 \text{ almost everywhere outside $S$}
\big\}.
$$
If $S$ is as above and $g\!\in\! PW_S$,
then
the Fourier inversion formula implies the relations
\begin{equation}\label{E:1.5}
g(u)=\frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}
{\mathcal{F}}[g](x)e^{i\langle x, u\rangle}\,dx=
\frac{1}{(2\pi)^d}\int_S
{\mathcal{F}}[g](x)e^{i\langle x, u\rangle}\,dx,
\end{equation}
for almost all $u\!\in\!\mathbb{R}^d$. As $L_2(S)\subset L_1(S)$ and
${\mathcal{F}}[g]=0$ almost everywhere outside $S$, it follows
that ${\mathcal{F}}[g]\!\in\! L_1(\mathbb{R}^d)$, so the
Riemann-Lebesgue Lemma asserts that the last
expression in \eqref{E:1.5}, as a
function of $u$, belongs to $C_0(\mathbb{R}^d)$.
So we may assume that \eqref{E:1.5} holds for all $u\!\in\!\mathbb{R}^d$.
Moreover,
the Bunyakovskii--Cauchy--Schwarz (BCS)
Inequality and \eqref{E:1.5}
combine to show that
\begin{equation}\label{E:0.5a}
|g(u)|\le \frac{m^{1/2}(S)}{(2\pi)^{d}}\|\mathcal{F}[g]\|_{L_2(S)}=
\frac{m^{1/2}(S)}{(2\pi)^{d/2}}\|g\| _{L_2(\mathbb{R}^d)},
\quad u\!\in\!\mathbb{R}^d.
\end{equation}
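The constant in \eqref{E:0.5a} is attained: in $d=1$ with $S=[-1,1]$, the function $g(u)=\sin u/(\pi u)$ lies in $PW_S$ (its Fourier transform is $\chi_{[-1,1]}$), has $\|g\|_{L_2(\mathbb{R})}=\pi^{-1/2}$ by \eqref{E:1.2}, and satisfies $g(0)=1/\pi=m^{1/2}(S)(2\pi)^{-1/2}\|g\|_2$. A small numerical check of this equality case (our own sketch, not from the paper):

```python
import math

def sinc_pw(u):
    # g(u) = sin(u) / (pi u), extended continuously at u = 0.
    return 1.0 / math.pi if u == 0.0 else math.sin(u) / (math.pi * u)

# Right-hand side of (E:0.5a): m(S)^{1/2} (2 pi)^{-1/2} ||g||_2 with
# m(S) = 2 and ||g||_2 = pi^{-1/2} (Plancherel); this evaluates to 1/pi.
bound = math.sqrt(2.0 / (2.0 * math.pi)) / math.sqrt(math.pi)

# sup |g| over a grid: the maximum sits at u = 0 and equals the bound.
sup_g = max(abs(sinc_pw(-10.0 + 0.001 * k)) for k in range(20001))
assert sup_g <= bound + 1e-12
assert abs(sinc_pw(0.0) - bound) < 1e-12
```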
\section{Gaussian interpolants associated to Riesz-basis sequences}\label{S:2}
We begin by assembling some basic facts about bases in Hilbert spaces (cf. \cite{Yo}).
Let $(\mathcal{H},\langle\cdot,\cdot\rangle_H)$ be a separable (complex) infinite dimensional Hilbert space. We say that $(h_j:j\!\in\! \mathbb{N})$ is a {\em Riesz basis for $\mathcal{H}$} if
every element $h$ in $\mathcal{H}$
admits a unique representation of the form
\begin{equation}\label{E:2.1}
h=\sum_{j\in \mathbb{N}} a_j h_j, \quad\text{ with }\qquad \sum_{j\in \mathbb{N}} |a_j|^2<\infty.
\end{equation}
One can then show that there exists a unique bounded sequence $(h^*_j:j\!\in\!\mathbb{N})\subset \mathcal{H}$ so that
$a_j=\langle h,h^*_j \rangle_H$, for all $j\!\in\!\mathbb{N}$. We call the $h^*_j$'s the {\em coordinate functionals for $(h_j)$}.
The sequence $(h^*_j)$ is also a Riesz basis for $\mathcal{H}$,
and its coordinate functionals are the $h_j$'s.
Moreover there exists
a positive constant $R_b$ so that
\begin{equation}\label{E:2.2}
\frac1{R_b}\Big( \sum_{j\in\mathbb{N}} |c_j|^2\Big)^{1/2}\le \Big\|\sum_{j\in\mathbb{N}} c_j h_j \Big\|\le R_b \Big( \sum_{j\in\mathbb{N}} |c_j|^2\Big)^{1/2}
\end{equation}
for every square-summable sequence $(c_j:j\!\in\!\mathbb{N})$.
Let $S$ be a bounded subset of $\mathbb{R}^d$ with positive Lebesgue measure. We say that a sequence $(x_j:j\!\in\!\mathbb{N})\subset \mathbb{R}^d$
is a {\em Riesz-basis sequence for $L_2(S)$} if the sequence $(e^{-i\langle x_j,\cdot\rangle}:j\!\in\!\mathbb{N})$
is a Riesz basis for $L_2(S)$.
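A classical example in $d=1$: the integers form a Riesz-basis sequence for $L_2(-\pi,\pi)$, since $(e^{-inx}:n\in\mathbb{Z})$ is an orthogonal basis of that space. A quick quadrature check of this orthogonality (plain Python, midpoint rule, our own illustration):

```python
import math

def inner(n, m, N=20000):
    # <e^{-i n x}, e^{-i m x}>_{L2(-pi,pi)} = integral over (-pi,pi) of e^{-i(n-m)x} dx.
    # By symmetry only the cosine part survives; midpoint rule over one full period.
    h = 2.0 * math.pi / N
    return sum(math.cos((n - m) * (-math.pi + (k + 0.5) * h)) for k in range(N)) * h

for n in range(-2, 3):
    for m in range(-2, 3):
        target = 2.0 * math.pi if n == m else 0.0
        assert abs(inner(n, m) - target) < 1e-9
```

The midpoint rule is exact (up to roundoff) for trigonometric polynomials over a full period, which is why such a tight tolerance works.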
The following pair of observations will be useful.
\begin{prop}\label{P:2.0} Let $S$ be as above and let $(x_k)$ be a Riesz-basis sequence for $L_2(S)$.
\begin{enumerate}
\item{(a)} There is a $q>0$ so that $\|x_k-x_\ell\|\ge q$ for $k\not=\ell$.
\item{(b)} There is a constant $C>0$ so that
$$\Big(\sum_{k\in\mathbb{N}} |f(x_k)|^2\Big)^{1/2}\le C \|f\|_{L_2(\mathbb{R}^d)},
\text{ for every $f\in PW_{S}$.}$$
\end{enumerate}
\end{prop}
\begin{proof} If (a) were not true, there would be two subsequences $(k_j)$ and $(\ell_j)$
such that $\lim_{j\to\infty} \|x_{k_j}-x_{\ell_j}\|= 0$,
whence the Dominated Convergence Theorem implies that
$\lim_{j\to\infty} \|e^{-i\langle x_{k_j},\cdot\rangle}-e^{-i\langle x_{\ell_j},\cdot\rangle}\|_{L_2(S)}= 0$. Let
$(e^*_j:j\!\in\!\mathbb{N})$ be the coordinate functionals for $(e^{-i\langle x_j,\cdot\rangle}:j\!\in\!\mathbb{N})$. Since $\langle e^{-i\langle x_n,\cdot \rangle},e^*_m\rangle=\delta_{mn} $, for $m,n\in\mathbb{N}$,
we have (with $\langle\cdot,\cdot\rangle_{_S}=\langle\cdot,\cdot\rangle_{L_2(S)}$)
$ \langle e^{-i\langle x_{k_j},\cdot\rangle}-e^{-i\langle x_{\ell_j},\cdot\rangle}, e^*_{\ell_j}\rangle_S=-1$, for $j\in\mathbb{N}$ (since $k_j\neq\ell_j$).
But this contradicts the boundedness of the sequence
$(\|e^*_j\|:j\!\in\!\mathbb{N})$.
\noindent (b) Let $f\in PW_S$. Considering $\mathcal{F}[f]$ as a function in $L_2(S)$
and recalling that $(e^*_j)$ is also a Riesz basis for $L_2(S)$, we may write
$$\mathcal{F}[f]= \sum_{j\in\mathbb{N}} \langle \mathcal{F}[f], e^{-i\langle x_j,\cdot \rangle}\rangle_{_S} e^*_j= (2\pi)^d\sum_{j\in\mathbb{N}} f(x_j) e^*_j,$$
where the second equality stems from \eqref{E:1.5}.
The asserted result follows from \eqref{E:2.1} and \eqref{E:2.2} applied to the Riesz basis $(e^*_j)$.
\end{proof}
We now begin our discussion of Gaussian interpolants associated to Riesz-basis sequences. As the first step we state
the following result which can be deduced from \cite[Lemma 2.1]{NSW} and the M.~Riesz Convexity
Theorem.
\begin{prop}\label{P:2.1a} Let $q>0$ and suppose that $(x_j)$ is a $q$-separated sequence in
$\mathbb{R}^d$, {\it i.e.}
$$\inf_{k\not=\ell} \|x_k-x_\ell\|\ge q.$$
If $\lambda>0$, and if $(a_j)$ is a bounded sequence of complex numbers, then the function
$\mathbb{R}^d\ni x\mapsto\sum a_j g_\lambda(x-x_j)$ is continuous and bounded.
Moreover, the infinite matrix $\big(g_\lambda(x_j-x_k)\big)_{j,k\in\mathbb{N}}$ acts as a bounded operator
on $\ell_p$, for all $1\le p\le \infty$.
\end{prop}
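Behind the $\ell_\infty$ (and, by duality and interpolation, $\ell_p$) boundedness in Proposition \ref{P:2.1a} is the fact that $q$-separation forces uniformly bounded row sums $\sum_k g_\lambda(x_j-x_k)$. A one-dimensional numerical sketch (our own illustration; in $d=1$, sorted $q$-separated points satisfy $|x_i-x_{i\pm m}|\ge mq$):

```python
import math, random

random.seed(0)
lam = 0.3
# A q-separated set: perturbed integers; consecutive gaps stay >= 0.5.
pts = [j + random.uniform(-0.25, 0.25) for j in range(-50, 51)]
q = min(b - a for a, b in zip(pts, pts[1:]))
assert q >= 0.5

# Row sums of the Gaussian matrix (g_lam(x_j - x_k))_{j,k}.
row_sums = [sum(math.exp(-lam * (x - y) ** 2) for y in pts) for x in pts]

# Since |pts[i] - pts[i +/- m]| >= m q, every row sum is dominated by the
# lattice sum 1 + 2 * sum_{m >= 1} e^{-lam (m q)^2}, independent of the row.
lattice_bound = 1.0 + 2.0 * sum(math.exp(-lam * (m * q) ** 2) for m in range(1, 200))
assert max(row_sums) <= lattice_bound + 1e-9
```

A uniform row-sum bound gives boundedness on $\ell_\infty$ directly; symmetry of the matrix then yields $\ell_1$, and the M.~Riesz Convexity Theorem interpolates the intermediate $p$.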
This next theorem is an important discovery in the
quantitative theory
of radial-basis functions.
\begin{thm}\label{T:NW} { \cite[Theorem 2.3]{NW}}
Let $\lambda$ and $q$ be fixed positive numbers. There exists a number $\theta$, depending only
on $d$, $\lambda$, and $q$, such that
the following holds: if
$(x_j)$
is any $q$-separated sequence in $\mathbb{R}^d$,
then
$
\sum_{j,k}\xi_j{\overline \xi}_k g_\lambda(x_j-x_k)\ge\theta
\sum_j|\xi_j|^2,
$
for every sequence of complex numbers
$(\xi_j)$.
\end{thm}
\begin{cor}\label{C:2.2}
Suppose that $\lambda$ is a fixed
positive number. Let
$(x_j:j\!\in\!\mathbb{N})\subset \mathbb{R}^d$ be $q$-separated for some $q>0$.
Then the matrix
$(g_\lambda(x_k-x_j))_{k,j\in\mathbb{N}}$ is boundedly invertible on $\ell_2$.
In particular,
given a
square-summable sequence $(d_k:k\!\in\!\mathbb{N})$, there
exists a unique square-summable sequence
$(a_j^{(\lambda)}:j\!\in\!\mathbb{N})$ such that
$$
\sum_{j\in\mathbb{N}}a_j^{(\lambda)}g_\lambda(x_k-x_j)=d_k, \qquad k\!\in\!\mathbb{N}.
$$
\end{cor}
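In practice the system in Corollary \ref{C:2.2} is solved on a finite section of the point set. Since the Gaussian matrix is positive definite (Theorem \ref{T:NW}), Cholesky factorization applies; here is a self-contained sketch on integer nodes (our own illustration, not an algorithm from the paper):

```python
import math

lam = 0.4
pts = list(range(-10, 11))  # 21 integer nodes, 1-separated
n = len(pts)
A = [[math.exp(-lam * (pts[i] - pts[j]) ** 2) for j in range(n)] for i in range(n)]
# Sample data d_k = f(x_k) for the bandlimited f(x) = sin(pi x / 2) / (pi x).
d = [math.sin(0.5 * math.pi * x) / (math.pi * x) if x else 0.5 for x in pts]

# Cholesky factorization A = L L^T (succeeds because A is positive definite).
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]

# Solve A a = d by forward and back substitution.
y = [0.0] * n
for i in range(n):
    y[i] = (d[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
a = [0.0] * n
for i in reversed(range(n)):
    a[i] = (y[i] - sum(L[k][i] * a[k] for k in range(i + 1, n))) / L[i][i]

# The interpolation conditions hold at every node of the finite section.
for i in range(n):
    r = sum(a[j] * math.exp(-lam * (pts[i] - pts[j]) ** 2) for j in range(n))
    assert abs(r - d[i]) < 1e-8
```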
The interpolation
operators, whose study will occupy the rest
of the paper, are introduced in
the following theorem, which is a
necessary prelude to the main result; its proof will
be given in the next section.
\begin{prop}\label{P:2.3} Let $d\!\in\!\mathbb{N}$ and
let $ Z \subset \mathbb{R}^d$ be convex, symmetric about the origin and bounded with $m(Z)>0$. Let $\lambda$ be a fixed positive number,
and let $(x_j:j\!\in\!\mathbb{N})\subset \mathbb{R}^d$ be a Riesz-basis sequence for $L_2(Z)$.
For any $f\in PW_{Z}$, there exists a unique square-summable
sequence $(a_j^{(\lambda)}:j\!\in\!\mathbb{N})$
such that
\begin{equation}\label{E:2.3.1}
\sum_{j\in\mathbb{N}}a_j^{(\lambda)} g_\lambda(x_k-x_j)=f(x_k), \quad k\!\in\!\mathbb{N}.
\end{equation}
The {\em Gaussian Interpolation Operator }
$I_\lambda: PW_{ Z }\to L_2(\mathbb{R}^d)$,
defined by
\begin{equation}\label{E:2.3.1a}
I_\lambda(f)(\cdot)=\sum_{j\in\mathbb{N}} a_j^{(\lambda)}g_\lambda((\cdot)-x_j),
\end{equation}
where $(a_j^{(\lambda)}:j\!\in\!\mathbb{N})$ satisfies \eqref{E:2.3.1},
is a well-defined, bounded linear operator from $PW_{ Z }$ to $L_2(\mathbb{R}^d)$.
Moreover, $I_\lambda(f)\in C_0(\mathbb{R}^d)$.
\end{prop}
We now state the main result of the paper.
\begin{thm}\label{T:2.4} Let $\delta\in(\sqrt{2/3},1]$ and $0<\beta<\sqrt{3\delta^2 -2}$. Assume that $Z\subset \mathbb{R}^d$ is convex and symmetric about the origin, with
$\delta B_2\subset Z\subset B_2$,
and let $(x_j:j\in\mathbb{N})$ be a Riesz-basis sequence for $L_2(Z)$. Let $I_\lambda$
be the associated Gaussian Interpolation Operator.
Then for every $f\in PW_{\beta B_2}$ we have $ f=\lim_{\lambda\to 0^+} I_\lambda(f)$ in $L_2(\mathbb{R}^d)$ and uniformly on $\mathbb{R}^d$.
\end{thm}
\begin{rems} The statement of Theorem \ref{T:2.4} includes the case $\delta=1$, {\it i.e.,\/} $Z=B_2$. Moreover, in this case, the proof allows one to take
$\beta=1$. However, unless $d=1$ (in which case one obtains Theorems 4.3 and 4.4 in \cite{SS}),
this result may well be vacuous, for
it is not known if $L_2(B_2)$ admits a Riesz-basis sequence for $d\ge2$.
In fact, Kristian Seip and Joaquim Ortega-Cerd\`a have informed us
that the prevailing belief is that, when $d\ge2$, there is {\it no\/} Riesz basis
for $L_2(B_2)$ consisting of exponentials; the latter
has also demonstrated this to be the case for certain allied spaces \cite{OC}.
This interesting problem is
closely related to interpolatory properties of the associated Paley-Wiener space. It is
also connected to
Fuglede's work \cite{Fu} and his conjecture, and to the recent studies reported in \cite{IKT1}, \cite{IKT2}, and \cite{Ta}.
On the other hand, there do exist bodies $Z$ satisfying the hypotheses of Theorem \ref{T:2.4}
(in any space dimension); specifically these are {\sl zonotopes\/}. Firstly,
given any $\delta\in(0,1)$,
there exist zonotopes $Z$ such that $\delta B_2\subset Z\subset B_2$; this fact, well known to convex geometers,
may be deduced, for instance, as a consequence
of \cite[Theorem 4.1.10]{Ga}. As to the existence of Riesz-basis sequences
for $L_2(Z)$, this is proved in \cite{LR} for $d=2$; the higher dimensional version of this theorem -- to which \cite{LR} alludes --
was communicated to us by Yuri Lyubarskii (private
correspondence).
It is also known that Riesz-basis sequences exist for $L_2(T^d)$, where $T^d$ is a symmetric cube centred at the origin. Furthermore, in this case,
one can provide sufficient conditions under which a set of distinct points in $\mathbb{R}^d$ forms a Riesz-basis sequence for
$L_2(T^d)$; see, for example, \cite{SZ} and \cite{Ba}. These conditions lead to multivariate generalizations of Kadec's
famous ``1/4-theorem" \cite{Ka}. However, cubes do not quite serve our purpose here; our argument makes essential use
of the additional flexibility offered by zonotopes.
\end{rems}
\section{Proof of the main result}\label{S:3}
For $m\in\mathbb{N}$ we define a linear bounded operator $A_m$ on $L_2( Z )$ as follows:
Let $(e^*_k) \subset L_2( Z )$ be the
coordinate functionals for $(e^{-i\langle x_k,\cdot\rangle}:k\!\in\!\mathbb{N})$, {\it i.e.,} for every $h\in L_2( Z )$,
\begin{equation}\label{E:3.2}
h=\sum_{k\in\mathbb{N}} \langle h,e^*_k\rangle_{_{ Z }} e^{-i\langle\cdot, x_k\rangle} =
\sum_{k\in\mathbb{N}} \int_{ Z } h(\xi) \overline{e^*_k(\xi)} \,d\xi \, e^{-i\langle\cdot, x_k\rangle}.
\end{equation}
Note that for $a=(a_1,a_2\ldots, a_d)\in \mathbb{R}^d$ we have
\begin{align}\label{E:3.5}
\Big\|\sum_{k\in\mathbb{N}} \langle h,e^*_k\rangle_{_{ Z }} e^{-i\langle \cdot, x_k\rangle}\Big\|_{L_2(a+ Z )}
&= \Big\|\sum_{k\in\mathbb{N}} \langle h,e^*_k\rangle_{_{ Z }} e^{-i\langle \cdot+a, x_k\rangle}\Big\|_{L_2( Z )} \\
&= \Big\|\sum_{k\in\mathbb{N}} e^{-i\langle a, x_k\rangle} \langle h,e^*_k\rangle_{_{ Z }} e^{-i\langle \cdot, x_k\rangle}\Big\|_{L_2( Z )}
\le R_b^2 \|h\|_{L_2( Z )},\notag
\end{align}
where $R_b$ is the constant satisfying \eqref{E:2.2}.
Thus the following extension $E(h)$
of $h$ is locally square integrable, hence defined almost everywhere on $\mathbb{R}^d$:
\begin{equation}\label{E:3.3}
E(h)(x)=\sum_{k\in\mathbb{N}} \langle h,e^*_k\rangle_{_{ Z }} e^{-i\langle x, x_k\rangle},\,\, x\in\mathbb{R}^d .
\end{equation}
Let $m \in \mathbb{N}$, and define $A_{m}:L_2( Z )\to L_2( Z )$ by
\begin{equation}\label{E:3.6}
A_m(h)(\xi)=E(h)(2^m\xi)\,\chi_{ Z \setminus (1/2) Z }(\xi).
\end{equation}
For $h\in L_2( Z )$ it follows from \eqref{E:3.5} that
\begin{align}\label{E:3.7}
\|A_m(h)\|_{L_2( Z )}^2&=\int_{ Z \setminus (1/2) Z } |E(h)(2^m u)|^2 du\\
&=
2^{-dm}\int_{2^m Z \setminus 2^{m-1} Z } |E(h)(v)|^2 dv\le 2^{-dm} C^m
R_b^4\|h\|_{L_2( Z )}^2,\notag
\end{align}
where $C$ is the number of translates of $ Z $ which are needed to cover $2Z$.
The constant $C$ can be bounded by a number which only depends on $d$, and an induction argument shows that at most $C^m$ translates of $ Z $ are needed to cover $2^m Z $.
\begin{proof}[Proof of Proposition \ref{P:2.3}]
Let $\lambda>0$. By Proposition \ref{P:2.0} and Corollary \ref{C:2.2}, there is a positive constant $\kappa$ so that,
for each $f\in PW_{ Z }$, there is a sequence $(a_j^{(\lambda)})\in\ell_2$
satisfying \eqref{E:2.3.1} and the estimate
\begin{equation}\label{E:2.3.3}
\|(a_j^{(\lambda)})\|_2\le \kappa\|f\|_2.
\end{equation}
Proposition \ref{P:2.1a} ensures that the function $I_\lambda(f)$, as defined in \eqref{E:2.3.1a}, is
continuous and bounded whenever $f\in PW_{ Z }$.
Next we show that $I_\lambda$ is a bounded operator from $PW_{ Z }$ to $L_2(\mathbb{R}^d)$.
Let $f\in PW_{ Z }$ and let $(a_j^{(\lambda)})\in \ell_2$ be the sequence given above.
By \eqref{E:2.2}, the function $Q:=\sum_{k\in\mathbb{N}} a_k^{(\lambda)} e^{-i\langle \cdot, x_k\rangle}$
is square integrable on $ Z $,
so \eqref{E:3.5} ensures that
$\|Q\|_{L_2(a+ Z )}\le R_b^2\|Q\|_{L_2( Z )}$ whenever $a\in \mathbb{R}^d$.
In particular, $Q$ is locally square integrable, hence locally integrable,
on $\mathbb{R}^d$. Combining these facts with the exponential
decay of $\hat{g_\lambda}$, we find, via a standard periodization argument, that
the function
$$
w: \mathbb{R}^d\ni x\mapsto \Big(\frac{\pi}{\lambda}\Big)^{d/2} e^{-\|x\|^2/(4\lambda)}\sum_{k\in\mathbb{N}} a_k^{(\lambda)} e^{-i\langle x, x_k\rangle},
$$
belongs to $L_2(\mathbb{R}^d)\cap L_1(\mathbb{R}^d)$.
Moreover, using \eqref{E:2.2}, \eqref{E:3.5}, and \eqref{E:2.3.3},
we arrive at the estimate
\begin{equation}\label{E:3west}
\|w\|_{L_2(\mathbb{R}^d)}\le C'\|f\|_{L_2(\mathbb{R}^d)},
\end{equation}
where $C'$
depends only on $\lambda$ and $R_b$.
As $w$ is in $L_1(\mathbb{R}^d)\cap L_2(\mathbb{R}^d)$ and $I_\lambda(f)$ is continuous, it follows from general principles that
$w$ is the Fourier transform of $I_\lambda(f)$. Thus $I_\lambda(f)\in C_0(\mathbb{R}^d)\cap L_2(\mathbb{R}^d)$ and
$\|I_\lambda(f)\|_{L_2(\mathbb{R}^d)}\le C' (2\pi)^{-d/2}\|f\|_{L_2(\mathbb{R}^d)}$, by \eqref{E:3west} and \eqref{E:1.2}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:2.4}]
Now fix $f\in PW_{ Z }$ and write $I_\lambda(f)$ as
$$I_\lambda(f)(\cdot)=\sum_{j\in\mathbb{N}} a_j^{(\lambda)}g_\lambda((\cdot)-x_j).$$
Recall from the preceding paragraph that
the Fourier transform of $I_\lambda(f)$ is given by
\begin{equation}\label{E:3.1}
\mathcal{F}\big[I_\lambda(f)\big](u)=
\Big(\frac{\pi}{\lambda}\Big)^{d/2} e^{-\|u\|^2_2/(4\lambda)} \sum_{j\in\mathbb{N}} a^{(\lambda)}_j e^{-i \langle x_j,u\rangle}, \text{ $u\in\mathbb{R}^d$}.
\end{equation}
The proof of Theorem \ref{T:2.4} proceeds in three steps.
\noindent
{\bf Step 1.} We claim that there exist a constant $D_1<\infty$ and $\lambda_0>0$, depending only on $(x_j)$, so that
$$\Arrowvert \mathcal{F}[I_\lambda(f)]\Arrowvert_2 \le D_1 e^{(1-\delta^2)/(4\lambda)} \Arrowvert \mathcal{F}[f] \Arrowvert_2, \quad \lambda \in (0,\lambda_0].$$
We start by defining
$$
H_\lambda(u)=\Big(\frac{\pi}{\lambda}\Big)^{d/2} \sum_{j\in\mathbb{N}} a^{(\lambda)}_j e^{-i \langle x_j,u\rangle}= e^{\|u\|_2^2/(4\lambda)}\mathcal{F}[I_\lambda(f)](u),
\quad\text{$u\in\mathbb{R}^d$,}
$$
and let $h_\lambda=H_\lambda|_{Z}\in L_2(Z)$
(thus $H_\lambda=E(h_\lambda)$).
Suppose that $k\in\mathbb{N}$. Equation \eqref{E:1.3} implies that
\begin{equation}\label{E:3fval1}
(2\pi)^d f(x_k)=\int_{ Z }\mathcal{F}[f](u)e^{i\langle x_k,u\rangle}\,du=\big\langle \mathcal{F}[f],e^{-i\langle x_k,\cdot\rangle}\big\rangle_{ Z }.
\end{equation}
On the other hand, equations \eqref{E:2.3.1} and \eqref{E:2.3.1a} assert that
\begin{align}\label{E:3fval2}
(2\pi)^d f(x_k)
&=(2\pi)^d I_\lambda(f)(x_k) \\
&= \int_{\mathbb{R}^d} \mathcal{F}[I_\lambda(f)](u) e^{i\langle x_k,u\rangle} du \text{\ \ (by \eqref{E:1.3})}\notag\\
&= \int_{\mathbb{R}^d} e^{-\|u\|_2^2/(4\lambda)}H_\lambda(u) e^{i\langle x_k,u\rangle}\, du\notag\\
&=\int_{ Z } e^{-\|u\|_2^2/(4\lambda)}H_\lambda(u) e^{i\langle x_k,u\rangle} du\notag\\&\qquad +
\sum_{m=1}^\infty
\int_{2^m Z \setminus 2^{m-1} Z } e^{-\|u\|_2^2/(4\lambda)}H_\lambda(u) e^{i\langle x_k,u\rangle}\, du \notag\\
&=\int_{ Z } e^{-\|u\|_2^2/(4\lambda)}H_\lambda(u) e^{i\langle x_k,u\rangle}\, du\notag\\
&\qquad+
\sum_{m=1}^\infty 2^{dm}
\int_{ Z \setminus 2^{-1} Z } e^{-\|2^mv\|_2^2/(4\lambda)}H_\lambda(2^mv) e^{i\langle x_k,2^mv \rangle}\, dv \notag\\
&=\int_{ Z } e^{-\|u\|_2^2/(4\lambda)}h_\lambda(u) e^{i\langle x_k,u\rangle}\, du\notag\\
&\qquad+
\sum_{m=1}^\infty 2^{dm}
\int_{ Z \setminus 2^{-1} Z } e^{-\|2^mv\|_2^2/(4\lambda)} A_m(h_\lambda)(v)
\overline{A_m (e^{-i\langle x_k,\cdot \rangle})} (v)\, dv\notag\\
&=\big\langle e^{-\|\cdot\|_2^2/(4\lambda)}h_\lambda ,e^{-i\langle x_k,\cdot\rangle} \big\rangle_{ Z } \notag \\
&\qquad+\sum_{m=1}^\infty 2^{dm}\big\langle e^{-\|2^m(\cdot)\|_2^2/(4\lambda)} A_m(h_\lambda), A_m( e^{-i\langle x_k,\cdot\rangle})\big\rangle_{ Z }\notag\\
&=\big\langle \mathcal{F}[I_\lambda(f)] ,e^{-i\langle x_k,\cdot\rangle} \big\rangle_{ Z } \notag\\
&\qquad
+\sum_{m=1}^\infty\big\langle 2^{dm} A^*_m\big( e^{-\|2^m(\cdot)\|_2^2/(4\lambda)} A_m(h_\lambda)\big),
e^{-i\langle x_k,\cdot\rangle}\big\rangle_{ Z }\notag\\
&=\big\langle \mathcal{F}[I_\lambda(f)]+\sum_{m=1}^\infty 2^{dm} A^*_m\big( e^{-\|2^m(\cdot)\|_2^2/(4\lambda)} A_m(h_\lambda)\big),
e^{-i\langle x_k,\cdot\rangle}\big\rangle_{ Z }.\notag
\end{align}
As $(e^{-i\langle x_k,\cdot\rangle}:k\in\mathbb{N})$ is a Riesz basis for $L_2( Z )$ (in particular a complete system),
equations
\eqref{E:3fval1} and \eqref{E:3fval2} lead to the identity
\begin{equation}\label{E:3.8}
\mathcal{F}[f]=\mathcal{F}[I_\lambda(f)]+\sum_{m=1}^\infty 2^{dm} A^*_m\big( e^{-\|2^m(\cdot)\|^2_2/(4\lambda)} A_m(h_\lambda)\big)\quad \text{ a.e. on $ Z $.}
\end{equation}
Suppose now that $h\in L_2( Z )$ and $m\in\mathbb{N}$.
We deduce from \eqref{E:3.7} that
\begin{align*}
\|2^{md}&A^*_m\big( e^{-\|2^m(\cdot)\|_2^2/(4\lambda)} A_m(h)\big) \|_{L_2( Z )}^2\\
&\le C^m R_b^4 2^{md}\|e^{-\|2^m(\cdot)\|_2^2/(4\lambda)} A_m(h) \|_{L_2( Z )}^2\\
&\le C^m R_b^4 2^{md}\|e^{-2^{2m-2}\delta^2/(4\lambda)} A_m(h) \|_{L_2( Z )}^2
\quad
\text{(since ${\rm supp} A_m(h)\!\subset\! Z \setminus\frac12 Z\!\subset\!Z\setminus \frac\delta2 B_2 $)}\\
&\le \big(C^m R_b^4 \big)^2 e^{-2^{2m-2}\delta^2/(2\lambda)} \| h \|_{L_2( Z )}^2,
\end{align*}
whence the operator norm satisfies
$$\big\|2^{md}A^*_m\big(e^{-\|2^m(\cdot)\|_2^2/(4\lambda)} A_m(\cdot)\big)\big\|
\le C^m R_b^4 e^{-2^{2m-2}\delta^2/(4\lambda)}.$$
Therefore the linear operator
$$\tau_\lambda: L_2( Z )\to L_2( Z ),\quad h\mapsto
\sum_{m\in\mathbb{N}}2^{md}A^*_m\big( e^{-\|2^m(\cdot)\|_2^2/(4\lambda)} A_m(h)\big)$$
is bounded. In fact,
since
there are numbers $\lambda_0>0$ and $D$, which depend only on $C$ (and hence only on $d$), such that
\begin{equation}\label{E:3.6c}
\sum_{m\in\mathbb{N}} C^{m} e^{-2^{2m-2}\delta^2/(4\lambda)}\le D e^{-\delta^2/(4\lambda)},\quad \lambda\in(0,\lambda_0],
\end{equation}
the operator norm of $\tau_\lambda$ obeys the following estimate:
\begin{equation}\label{E:3.9}
\|\tau_\lambda\|\le R_b^4 D e^{-\delta^2/(4\lambda)} \text{ whenever $\lambda\in(0,\lambda_{0}]$}.
\end{equation}
As the operator $\tau_\lambda$ is positive, \eqref{E:3.8} yields
$$\|\mathcal{F}[f]\|_2\, \|h_\lambda\|_2\ge
\langle \mathcal{F}[f],h_\lambda\rangle_{_Z}\ge\langle e^{-\|\cdot\|_2^2/(4\lambda)} h_\lambda,h_\lambda\rangle_{_Z}
\ge e^{-1/(4\lambda)} \|h_\lambda\|_2^2.$$
Consequently,
\begin{equation}\label{E:3.9a}
\|h_\lambda\|_2\le e^{1/(4\lambda)} \|\mathcal{F}[f]\|_2.\end{equation}
Thus, from \eqref{E:3.8} and \eqref{E:3.9} we get
\begin{equation}\label{E:3.10}
\|\mathcal{F}[I_\lambda(f)]|_{_{ Z }}\|_2
\le \|\mathcal{F}[f]\|_2+ \|\tau_\lambda(h_\lambda)\|_2\le \big(1+ R_b^4D e^{(1-\delta^2)/(4\lambda)}\big)\|\mathcal{F}[f]\|_2.
\end{equation}
Our next step is to estimate $\|\mathcal{F}[I_\lambda(f)]\,|_{_{\mathbb{R}^d\setminus Z }}\|_2$.
Equation \eqref{E:3.1} implies that
\begin{align}\label{E:3.11a}
\|\mathcal{F}[&I_\lambda(f)]|_{_{\mathbb{R}^d\setminus Z }}\|_2^2\\
&=
\int_{\mathbb{R}^d\setminus Z }e^{-\|u\|^2_2/(2\lambda)} |H_\lambda(u)|^2 \, du\notag\\
&=\sum_{m=1}^\infty \int_{2^m Z \setminus 2^{m-1} Z }e^{-\|u\|^2_2/(2\lambda)} |H_\lambda(u)|^2 \, du \notag\\
&=\sum_{m=1}^\infty 2^{dm}\int_{ Z \setminus 2^{-1} Z }e^{-2^{2m}\|v\|^2_2/(2\lambda)}|A_m(h_\lambda)(v)|^2 \, dv \notag\\
&\le \sum_{m=1}^\infty 2^{dm} e^{-2^{2m}\delta^2/(8\lambda)}\|A_m(h_\lambda)\|_2^2
\quad\text{(as ${\rm supp} A_m(h)\subset Z \setminus2^{-1} Z $)}\notag\\
&\le R_b^4 \|h_\lambda\|^2_2\sum_{m=1}^\infty C^me^{-2^{2m}\delta^2/(8\lambda)}
\quad\text{ \ (by \eqref{E:3.7})}
\notag\\
&\le e^{1/(2\lambda)} R_b^4 \|\mathcal{F}[f]\|_2^2 \sum_{m=1}^\infty e^{-2^{2m}\delta^2/(8\lambda)}C^m
\quad \text{ \ (by \eqref{E:3.9a}).} \notag
\end{align}
By changing $\lambda_0$ and $D$, if need be, one obtains, as in \eqref{E:3.6c},
\begin{equation}\label{E:3.6c2}
\sum_{m=1}^\infty e^{-2^{2m}\delta^2/(8\lambda)}C^m \le De^{-\delta^2/(2\lambda)}, \quad \lambda\in(0,\lambda_0].
\end{equation}
Combining \eqref{E:3.10} and \eqref{E:3.11a} proves our claim.
\medskip
\noindent
{\bf Step 2.} Let $f\in PW_{\beta B_2}$. There is a positive constant $D_2$ such that
\begin{equation}\label{E:3.13}
\|f-I_\lambda(f)\|_2 \leq D_2 e^{(\beta^2-3\delta^2+2)/(4\lambda)} \| f \|_2,
\end{equation}
for all $0<\lambda<\lambda_0$.
\begin{rem}
Note that \eqref{E:3.13} implies that
$\lim_{\lambda\to0^+} I_\lambda(f)= f \text{\ in $L_2(\mathbb{R}^d)$.}$
\end{rem}
To prove \eqref{E:3.13} we define
$\tilde\tau_\lambda=e^{1/(4\lambda)}\tau_\lambda$,
\begin{align*}
&M_\lambda:L_2( Z )\to L_2( Z ), \quad h\mapsto e^{-(1-\|\cdot\|_2^2)/(4\lambda) }h, \text{ and }\\
&L_\lambda: L_2( Z ) \to L_2( Z ), \quad h\mapsto R\circ\mathcal{F}\circ I_\lambda\circ\mathcal{F}^{-1}(h),
\end{align*}
where $R:L_2(\mathbb{R}^d)\to L_2( Z )$ is the restriction map.
\begin{prop}\label{P:3.2}
The map $\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda$
is an invertible operator on $L_2( Z )$, and
$(\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda)^{-1}=L_\lambda$.
\end{prop}
\begin{proof}
Let $h\in PW_{ Z }$. From \eqref{E:3.8} we obtain (a.e. on $Z$)
\begin{align*}
\mathcal{F}[h]&=\mathcal{F}[I_\lambda(h)]+\tau_\lambda\big(e^{\|\cdot\|_2^2/(4\lambda)}\mathcal{F}[I_\lambda(h)]|_{ Z }\big)\\
&=\mathcal{F}[I_\lambda(h)]+\tilde\tau_\lambda\circ M_\lambda \big(\mathcal{F}[I_\lambda(h)]\big)
=\big(\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda \big) L_\lambda(\mathcal{F}[h]).
\end{align*}
This implies that $\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda$ is surjective and is a left inverse of the bounded operator
$L_\lambda$. Next we show that $\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda$ is also injective. To that end,
let $(\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda)(h)=0 $ for some $h\in L_2( Z )$. Then
\begin{align*}
0&=\big\langle (\text{\rm Id}\!+\!\tilde\tau_\lambda\circ M_\lambda)(h), M_\lambda(h)\big\rangle_{ Z }
=\big\langle h,M_\lambda(h)\big\rangle_{ Z }\!+\!\big\langle \tilde\tau_\lambda( M_\lambda(h)), M_\lambda(h)\big\rangle_{ Z }
\ge \big\langle h,M_\lambda(h)\big\rangle_{ Z }\ge 0,
\end{align*}
the first inequality above being a consequence of the positivity of $\tilde\tau_\lambda$.
Hence $\langle h,M_\lambda(h)\rangle_{ Z }=0$, which implies that $h=0$, because
$M_\lambda$ is a strictly positive operator.
The injectivity of $\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda$ follows.
Thus $\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda$ is invertible, and its
inverse is $L_\lambda$.
\end{proof}
Proposition \ref{P:3.2} provides the following identity on $ Z $:
$$
\mathcal{F}[f]-\mathcal{F}[I_\lambda (f)]=\big[\text{\rm Id}-(\text{\rm Id}+\tilde\tau_\lambda \circ M_\lambda)^{-1} \big](\mathcal{F}[f])=
(\text{\rm Id}+\tilde\tau_\lambda\circ M_\lambda)^{-1}\circ\tilde\tau_\lambda\circ M_\lambda(\mathcal{F}[f]).
$$
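The operator identity used in the last display, $\text{Id}-(\text{Id}+T)^{-1}=(\text{Id}+T)^{-1}T$ whenever $\text{Id}+T$ is invertible, can be sanity-checked numerically on a small matrix (our own illustration):

```python
# 2x2 check of Id - (Id + T)^{-1} = (Id + T)^{-1} T.
T = [[0.3, 0.1], [0.2, 0.4]]
M = [[1.0 + T[0][0], T[0][1]], [T[1][0], 1.0 + T[1][1]]]  # Id + T
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

def matmul(X, Y):
    # Product of 2x2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

lhs = [[(1.0 if i == j else 0.0) - Minv[i][j] for j in range(2)] for i in range(2)]
rhs = matmul(Minv, T)
for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-12
```

The identity itself is immediate: $\text{Id}-(\text{Id}+T)^{-1}=(\text{Id}+T)^{-1}\big((\text{Id}+T)-\text{Id}\big)=(\text{Id}+T)^{-1}T$.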
If $f\in PW_{\beta B_2}$, then \eqref{E:3.9} and Step 1 provide
\begin{align}\label{E:3.16}
\| \mathcal{F}[f]\!-\!\mathcal{F}[I_\lambda (f)]|_{_{ Z }}\|_2&\le \|(\text{\rm Id}\!+\!\tilde\tau_\lambda\!\circ\! M_\lambda)^{-1}\|\,\|\tilde\tau_\lambda\| \,\|M_\lambda(\mathcal{F}[f])\|_2\\
&\le D_1 e^{(1-\delta^2)/(4\lambda)} e^{1/(4\lambda) }R^4_b D e^{(-\delta^2)/(4\lambda)} \|M_\lambda(\mathcal{F}[f])\|_2\notag\\
&=: D' e^{(1-2\delta^2)/(4\lambda)} \big\|e^{\|\cdot\|^2/(4\lambda)}(\mathcal{F}[f])\big\|_2\notag\\
& \le D' e^{(\beta^2+1-2\delta^2)/(4\lambda)} \|\mathcal{F}[f] \|_2.\notag
\end{align}
Now the first inequality in \eqref{E:3.11a} yields
\begin{align}\label{E:3.17}
\big\|\mathcal{F}[&I_\lambda (f)]|_{_{\mathbb{R}^d\setminus Z }}\big\|_2^2\\
&\le \sum_{m=1}^\infty 2^{dm} e^{-2^{2m}\delta^2/(8\lambda)} \|A_m(h_\lambda)\|_2^2\notag\\
&\le R_b^4\sum_{m=1}^\infty C^m e^{-2^{2m}\delta^2/(8\lambda)} \big\|e^{\|\cdot\|^2_2/(4\lambda)} \mathcal{F}[I_\lambda(f)]|_{Z}\big\|_2^2\quad
\text{\ \ by \eqref{E:3.7}} \notag\\
&\le R_b^4 D e^{-\delta^2/(2\lambda)} \big\|e^{\|\cdot\|^2_2/(4\lambda)} \mathcal{F}[I_\lambda(f)]|_{Z}\big\|_2^2\quad
\text{\ \ by \eqref{E:3.6c2}} \notag\\
&\le R_b^4 D\big[ \big\|e^{(\|\cdot\|^2_2-\delta^2)/(4\lambda)} \big(\mathcal{F}[I_\lambda(f)]|_{Z}-\mathcal{F}[f]\big)\big\|_2 +
\big\|e^{(\|\cdot\|^2_2-\delta^2)/(4\lambda)} \mathcal{F}[f]\big\|_2 \big]^2 \notag\\
&\le R_b^4 D\big[ e^{(1-\delta^2)/(4\lambda)} \big\| \mathcal{F}[I_\lambda(f)]|_{Z}-\mathcal{F}[f]\big\|_2 +
\big\|e^{(\|\cdot\|^2_2-\delta^2)/(4\lambda)} \mathcal{F}[f]\big\|_2 \big]^2.\notag
\end{align}
If we restrict to $f \in PW_{\beta B_2}$, then, by \eqref{E:3.16},
\begin{align*}
\big\|\mathcal{F}[&I_\lambda (f)]|_{_{\mathbb{R}^d\setminus Z }}\big\|_2^2
\le R_b^4 D\big[D' e^{(\beta^2+1-2\delta^2)/(4\lambda)}e^{(1-\delta^2)/(4\lambda)} + e^{(\beta^2-\delta^2)/(4\lambda)}\big]^2\|\mathcal{F}[f]\|_2^2.
\end{align*}
Combining this with \eqref{E:1.2}, and using \eqref{E:3.16} once again, we obtain
the following estimate for some constant $D_2$:
$$ \Arrowvert f-I_\lambda f \Arrowvert_2 \leq D_2 e^{(\beta^2 -3\delta^2+2)/(4\lambda)} \Arrowvert f \Arrowvert_2. $$
\noindent{\bf Step 3.} Suppose that $f \in PW_{\beta B_2}$. There exist constants $\lambda_1\in(0,\lambda_0]$ and $D_3$ so that
$$\big|I_\lambda(f)(x)- f(x)\big|\le D_3 e^{(\beta^2-3\delta^2+2)/(4\lambda)}\|f\|_2,$$
for all $0<\lambda\le \lambda_1$ and $x\in \mathbb{R}^d$. In particular
$\lim_{\lambda\to0^+} I_\lambda(f)= f$ uniformly on $\mathbb{R}^d$.
We first observe that we can find, as before, numbers $\lambda_1\in(0,\lambda_0]$ and $D''>0$, such that
\begin{equation}\label{E:3.18}
m^{1/2}( Z )R_b^2 \sum_{m=1}^\infty C^{m/2} 2^{dm/2} e^{(4-2^{2m})\delta^2/(16\lambda)}\le D'' \quad\text{whenever}\quad 0<\lambda\le \lambda_1.
\end{equation}
Let $x\in\mathbb{R}^d$ and $f\in PW_{\beta B_2}$. We use \eqref{E:1.3} to write
\begin{align}
\big|I_\lambda&(f)(x)- f(x)\big|\\
&= \frac1{(2\pi)^d} \Bigg| \int_{ Z } \big[ \mathcal{F}[I_\lambda(f)](u) - \mathcal{F}[f](u) \big] e^{i\langle x,u\rangle} du
+ \int_{\mathbb{R}^d \setminus Z }\mathcal{F}[I_\lambda(f)](u)e^{i\langle x,u\rangle} du\Bigg|\notag\\
&\le \frac1{(2\pi)^d}\big[ \|\mathcal{F}[I_\lambda(f)]|_{_{ Z }}- \mathcal{F}[f]\|_1+\big\|\mathcal{F}[I_\lambda(f)]|_{_{\mathbb{R}^d\setminus Z }}\|_1\big].\notag
\end{align}
From the BCS inequality and \eqref{E:3.16} we deduce that $\lim_{\lambda\to 0^+}\|\mathcal{F}[I_\lambda(f)]|_{_{ Z }}- \mathcal{F}[f]\|_1=0$,
and an argument similar to that in \eqref{E:3.11a} yields
\begin{align*}
\big\|\mathcal{F}&[I_\lambda(f)]|_{_{\mathbb{R}^d\setminus Z }}\big\|_1\\
&=\sum_{m=1}^\infty \int_{2^m Z \setminus 2^{m-1} Z } e^{-\|u\|^2_2/(4\lambda)} |H_\lambda(u)|du\\
&=\sum_{m=1}^\infty 2^{dm}\int_{ Z \setminus 2^{-1} Z } e^{-2^{2m}\|v\|^2_2/(4\lambda)}|A_m(h_\lambda)(v)| dv\\
&\le m^{1/2}( Z )\sum_{m=1}^\infty 2^{dm} \|e^{-2^{2m}\|\cdot\|^2_2/(4\lambda)}A_m(h_\lambda)\|_2
\qquad\text{\ (by the BCS inequality)}\\
&\le m^{1/2}( Z )R_b^2 \sum_{m=1}^\infty C^{m/2} 2^{dm/2} e^{-2^{2m}\delta^2/(16\lambda)} \|h_\lambda\|_2\\
&\qquad\qquad\Big(\text{\ by \eqref{E:3.7} and since ${\rm supp}(A_m(h))\!\subset\!Z\setminus { \frac12}Z$}\Big)\\
&\le D'' \big\|e^{(\|\cdot\|_2^2-\delta^2)/(4\lambda)}\mathcal{F}[I_\lambda(f)]|_{ Z }\big\|_2 \quad\text{(by \eqref{E:3.18})}\\
&\leq D'' \big[e^{(1-\delta^2)/(4\lambda)}\|\mathcal{F}[I_\lambda(f)]|_Z-\mathcal{F}[f]\|_2 +e^{(\beta^2-\delta^2)/(4\lambda)}\|\mathcal{F}[f]\|_2\big]\\
&\leq (D'+1)D''e^{(\beta^2+2-3\delta^2)/(4\lambda)}\|\mathcal{F}[f]\|_2 \quad\text{(by \eqref{E:3.16})}.
\end{align*}
This concludes the proof.
\end{proof}
\centerline{\bf Acknowledgments\/}
\noindent We thank Yuri~Lyubarskii, Joaquim~Ortega-Cerd\`a, Grigoris~Paouris, and Kristian~Seip for generously sharing with us their time and expertise.
| {
"timestamp": "2010-01-22T22:32:06",
"yymm": "0906",
"arxiv_id": "0906.2105",
"language": "en",
"url": "https://arxiv.org/abs/0906.2105",
"abstract": "Let $S\\subset\\R^d$ be a bounded subset with positive Lebesgue measure. The Paley-Wiener space associated to $S$, $PW_S$, is defined to be the set of all square-integrable functions on $\\R^d$ whose Fourier transforms vanish outside $S$. A sequence $(x_j:j\\kin\\N)$ in $\\R^d$ is said to be a Riesz-basis sequence for $L_2(S)$ (equivalently, a complete interpolating sequence for $PW_S$) if the sequence $(e^{-i\\la x_j,\\cdot\\ra}:j\\kin\\N)$ of exponential functions forms a Riesz basis for $L_2(S)$. Let $(x_j:j\\kin\\N)$ be a Riesz-basis sequence for $L_2(S)$. Given $\\lambda>0$ and $f\\in PW_S$, there is a unique sequence $(a_j)$ in $\\ell_2$ such that the function $$ I_\\lambda(f)(x):=\\sum_{j\\in\\N}a_je^{-\\lambda \\|x-x_j\\|_2^2}, \\qquad x\\kin\\R^d, $$ is continuous and square integrable on $\\R^d$, and satisfies the condition $I_\\lambda(f)(x_n)=f(x_n)$ for every $n\\kin\\N$. This paper studies the convergence of the interpolant $I_\\lambda(f)$ as $\\lambda$ tends to zero, {\\it i.e.,\\} as the variance of the underlying Gaussian tends to infinity. The following result is obtained: Let $\\delta\\in(\\sqrt{2/3},1]$ and $0<\\beta<\\sqrt{3\\delta^2 -2}$. Suppose that $\\delta B_2\\subset Z\\subset B_2$, and let $(x_j:j\\in\\N)$ be a Riesz basis sequence for $L_2(Z)$. If $f\\in PW_{\\beta B_2}$, then $f=\\lim_{\\lambda\\to 0^+} I_\\lambda(f)$ in $L_2(\\R^d)$ and uniformly on $\\R^d$. If $\\delta=1$, then one may take $\\beta$ to be 1 as well, and this reduces to a known theorem in the univariate case. However, if $d\\ge2$, it is not known whether $L_2(B_2)$ admits a Riesz-basis sequence. On the other hand, in the case when $\\delta<1$, there do exist bodies $Z$ satisfying the hypotheses of the theorem (in any space dimension).",
"subjects": "Classical Analysis and ODEs (math.CA); Functional Analysis (math.FA)",
"title": "Nonuniform sampling and recovery of multidimensional bandlimited functions by Gaussian radial-basis functions"
} |
https://arxiv.org/abs/1402.0276 | KMS states on the C*-algebras of reducible graphs | We consider the dynamics on the C*-algebras of finite graphs obtained by lifting the gauge action to an action of the real line. Enomoto, Fujii and Watatani proved that if the vertex matrix of the graph is irreducible, then the dynamics on the graph algebra admits a single KMS state. We have previously studied the dynamics on the Toeplitz algebra, and explicitly described a finite-dimensional simplex of KMS states for inverse temperatures above a critical value. Here we study the KMS states for graphs with reducible vertex matrix, and for inverse temperatures at and below the critical value. We prove a general result which describes all the KMS states at a fixed inverse temperature, and then apply this theorem to a variety of examples. We find that there can be many patterns of phase transition, depending on the behaviour of paths in the underlying graph. | \section{Introduction}
Composing the gauge action of $\mathbb{T}$ with the map $t\mapsto e^{it}$ gives a natural dynamics on any Cuntz-Krieger algebra or graph algebra. Enomoto, Fujii and Watatani proved thirty years ago that for a simple Cuntz-Krieger algebra $\mathcal{O}_A$, this dynamics admits a unique KMS state, and that this state has inverse temperature the natural logarithm $\ln\rho(A)$ of the spectral radius $\rho(A)$ (which is also the Perron-Frobenius eigenvalue of $A$) \cite{EFW}. Recently Kajiwara and Watatani revisited this question for the $C^*$-algebras of finite graphs with sources, and found many more KMS states \cite{KW}. Other authors are currently interested in KMS states on the $C^*$-algebras of infinite graphs \cite{T, CL} or on the $C^*$-algebras of higher-rank graphs \cite{Y,aHLRSk}.
We recently studied KMS states on the Toeplitz algebra $\mathcal{T} C^*(E)$ of a finite graph $E$~\cite{aHLRS1}. For inverse temperatures $\beta$ larger than a critical value $\beta_c$, we described a simplex of KMS$_\beta$ states whose dimension is determined by the number of vertices in the graph \cite[Theorem~3.1]{aHLRS1}. This gave a concrete implementation of an earlier result of Exel and Laca \cite[Theorem~18.4]{EL}, at least as it applies to the gauge dynamics. The critical inverse temperature $\beta_c$ in \cite{aHLRS1} is $\ln\rho(A)$ where $A$ is the vertex matrix of the graph $E$. When $A$ is irreducible in the sense of Perron-Frobenius theory (and in particular if $C^*(E)$ is simple), we showed that there is a unique KMS$_{\ln\rho(A)}$ state on $\mathcal{T} C^*(E)$, and that this state factors through $C^*(E)$.
Here we consider a finite graph $E$ whose vertex matrix $A$ is reducible, and aim to find all the KMS states on $\mathcal{T} C^*(E)$ and $C^*(E)$. We have organised our results so that we can describe the KMS states at each fixed inverse temperature. From \cite[Theorem~3.1]{aHLRS1}, we already have a concrete description of the simplex of KMS$_\beta$ states on $\mathcal{T} C^*(E)$ for $\beta>\ln\rho(A)$, and we know exactly which ones factor through $C^*(E)$ \cite[Corollary~6.1]{aHLRS1}.
Our first main theorem concerns the critical value $\beta=\ln \rho(A)$ (Theorem~\ref{KMScrit}). It identifies two different families of extreme KMS$_{\ln \rho(A)}$ states. The first family $\{\psi_C\}$ is parametrised by a set of strongly connected components $C$ of $E$ such that the matrix $A_C:=A|_{C\times C}$ satisfies $\beta=\ln \rho(A_C)$ (in the theorem we say exactly which components belong to this set). The states $\psi_C$ all factor through $C^*(E)$. Then we consider the hereditary closure $H$ in $E^0$ of the components $C$ with $\beta=\ln \rho(A_C)$, and the complementary graph $E\backslash H$ with vertex set $E^0\backslash H$. The second family $\{\phi_{v}\}$ consists of extremal KMS$_{\ln \rho(A)}$ states which factor through a natural quotient map of $\mathcal{T} C^*(E)$ onto $\mathcal{T} C^*(E\backslash H)$ (see Proposition~\ref{quotmapH}), and is parametrised by $E^0\backslash H$. The convex hull of $\{\psi_C\}\cup\{\phi_v\}$ is the full simplex of KMS$_{\ln \rho(A)}$ states. The proof of Theorem~\ref{KMScrit} involves some rather intricate computations using the Perron-Frobenius theory for the matrices $A_C$.
In \S\ref{sec:allKMS}, we describe the KMS$_\beta$ states at a fixed inverse temperature $\beta$ satisfying $\beta<\ln\rho(A)$. In Theorem~\ref{thm:altogether}, we consider the hereditary closure $H_\beta$ of the strongly connected components $C$ with $\ln\rho(A_C)>\beta$. If $\beta>\ln\rho(A_{E^0\backslash H_\beta})$, the KMS$_{\beta}$ states all factor through the quotient $\mathcal{T} C^*(E\backslash H_\beta)$, and an application of \cite[Theorem~3.1]{aHLRS1} gives a concrete description of these states. If $\beta=\ln\rho(A_{E^0\backslash H_\beta})$, then applying Theorem~\ref{KMScrit} to $E\backslash H_\beta$ shows that there are two families $\{\psi_C\}$ and $\{\phi_{v}\}$ of extremal KMS$_\beta$ states. Theorem~\ref{thm:altogether} also identifies the states which factor through $C^*(E)$, where there are some tricky subtleties involving the saturations of the sets $H_\beta$ and $K_\beta$.
By applying Theorem~\ref{thm:altogether} as $\beta$ decreases, we can in principle find all KMS states on $\mathcal{T} C^*(E)$ and $C^*(E)$ for every finite graph $E$. In \S\ref{sec:exs}, we show how this works on a variety of examples, and find in particular that there are graphs for which our dynamics has many phase transitions. These examples shed considerable light on the possible behaviour of KMS states, and in particular on what happens between the various critical inverse temperatures discussed in \cite[\S14]{EL}. We close with a section of concluding remarks in which we discuss the range of possible inverse temperatures, and the connections with the results of \cite{EL, CL}.
\section{Background}
\subsection{Directed graphs and their Toeplitz algebras} Suppose that $E=(E^0,E^1,r,s)$ is a directed graph. We use the conventions of \cite{CBMS} for paths, so that, for example, $ef$ is a path when $s(e)=r(f)$. We write $E^n$ for the set of paths of length $n$, and $E^*:=\bigcup_{n\in \mathbb{N}}E^n$. For vertices $v,w$, we write $vE^nw$ for the set $\{\mu\in E^n: r(\mu)=v\text{ and }s(\mu)=w\}$ (and we allow variations on this theme).
A Toeplitz-Cuntz-Krieger family $(P,S)$ consists of mutually orthogonal projections $\{P_v:v\in E^0\}$ and partial isometries $\{S_e:e\in E^1\}$ such that $S_e^*S_e=P_{s(e)}$ for every $e\in E^1$ and
\begin{equation}\label{TCK}
P_v\geq \sum_{e\in F}S_eS_e^*\text{ for every $v\in E^0$ and finite subset $F$ of $vE^1=r^{-1}(v)$.}
\end{equation}
Here we consider only finite graphs, and then it suffices to impose the inequality \eqref{TCK} for $F=vE^1$.
The Toeplitz algebra $\mathcal{T} C^*(E)$ is generated by a universal Toeplitz-Cuntz-Krieger family $(p,s)$; the existence of such an algebra was proved in \cite[Theorem~4.1]{FR}. For $\mu\in E^n$, we define $s_\mu:=s_{\mu_1}s_{\mu_2}\cdots s_{\mu_n}$. Then each $s_\mu$ is also a partial isometry, and we have
\[
\mathcal{T} C^*(E):=\overline{\lsp}\big\{s_\mu s_\nu^*:\mu,\nu\in E^*,\;s(\mu)=s(\nu)\big\}.
\]
We shall work mostly in the Toeplitz algebra $\mathcal{T} C^*(E)$ rather than the usual graph algebra $C^*(E)$, and it is therefore convenient to view $C^*(E)$ as the quotient of $\mathcal{T} C^*(E)$ by the ideal generated by
\[
\Big\{ p_v-\sum_{r(e)=v}s_es_e^*:v\in E^0\Big\}.
\]
We write $\pi_E$ for the quotient map of $\mathcal{T} C^*(E)$ onto $C^*(E)$, and $\bar p_v:=\pi_E(p_v)$, $\bar s_e:=\pi_E(s_e)$. The pair $(\bar p,\bar s)$ is then universal for Cuntz-Krieger families in the usual way.
\subsection{Ideals in Toeplitz algebras}\label{idealsinT}
We are interested in graphs whose $C^*$-algebras $C^*(E)$ are not simple. The standard theory (as in \cite{KPRR}, \cite{BPRS} or \cite[\S4]{CBMS}) says that ideals in $C^*(E)$ are determined by subsets $H$ of $E^0$ which are both hereditary ($v\in H$ and $vE^*w\not=\emptyset$ imply $w\in H$) and saturated ($s(vE^1)\subset H$ implies $v\in H$). In the Toeplitz algebra, there are more ideals, and in particular every hereditary subset determines one. We need to know what the quotient is.
\begin{prop}\label{quotmapH}
Suppose that $H$ is a hereditary set of vertices in a directed graph $E$ and that $H$ is not all of $E^0$. Then $E\backslash H:=(E^0\backslash H, s^{-1}(E^0\backslash H),r,s)$ is a directed graph, and there is a homomorphism $q_H:\mathcal{T} C^*(E)\to \mathcal{T} C^*(E\backslash H)=C^*(p^{E\backslash H}, s^{E\backslash H})$ such that
\begin{equation}\label{defTCKquot}
q_H(p_v)=\begin{cases}
p^{E\backslash H}_v&\text{if $v\in E^0\backslash H$}\\
0&\text{if $v\in H$,}
\end{cases}
\quad\text{and}\quad
q_H(s_e)=\begin{cases}
s^{E\backslash H}_e&\text{if $s(e)\in E^0\backslash H$}\\
0&\text{if $s(e)\in H$.}
\end{cases}
\end{equation}
The homomorphism is surjective, and its kernel is the ideal $J_H$ generated by $\{p_v:v\in H\}$.
\end{prop}
\begin{proof}
Since $s$ maps $(E\backslash H)^1:=s^{-1}(E^0\backslash H)$ to $(E\backslash H)^0:=E^0\backslash H$, and since $r(e)\in H$ implies $s(e)\in H$, $r$ maps $(E\backslash H)^1$ into $(E\backslash H)^0$ also. Thus $E\backslash H$ is a directed graph. The formulas on the right-hand sides of \eqref{defTCKquot} define a Toeplitz-Cuntz-Krieger $E$-family in $\mathcal{T} C^*(E\backslash H)$, and hence the universal property of $\mathcal{T} C^*(E)$ gives the existence of the homomorphism $q_H$. It is surjective because its range contains all the generators of $\mathcal{T} C^*(E\backslash H)$. The kernel of $q_H$ contains all the generators of $J_H$, so $J_H\subset \ker q_H$, and hence $q_H$ factors through the quotient map $q:\mathcal{T} C^*(E)\to \mathcal{T} C^*(E)/ J_H$. We write $\bar q_H$ for the homomorphism on $\mathcal{T} C^*(E)/ J_H$ such that $q_H=\bar q_H\circ q$.
To see that $J_H$ is all of $\ker q_H$, we construct a left inverse for $\bar q_H$. A quick check shows that the elements $\{q(p_v),q(s_e):v\in E^0\backslash H,\; e\in s^{-1}(E^0\backslash H)\}$ form a Toeplitz-Cuntz-Krieger $(E\backslash H)$-family in $\mathcal{T} C^*(E)/ J_H$. (It is crucial that we are not trying to impose a Cuntz-Krieger relation at vertices in $E^0\backslash H$ which receive edges from $H$.) Thus there is a homomorphism $\rho:\mathcal{T} C^*(E\backslash H)\to \mathcal{T} C^*(E)/J_H$ such that $\rho(p^{E\backslash H}_v)=q(p_v)$ and $\rho(s^{E\backslash H}_e)=q(s_e)$. Since $s(e)\in H$ implies that $q(s_e)=0$, the range of $\rho$ contains the images of all the generators of $\mathcal{T} C^*(E)$, and hence $\rho$ is surjective. A quick check shows that $\bar q_H\circ \rho$ fixes the generators of $\mathcal{T} C^*(E\backslash H)$, and hence is the identity on $\mathcal{T} C^*(E\backslash H)$. Now the surjectivity of $\rho$ implies that $\rho\circ\bar q_H$ is the identity on $\mathcal{T} C^*(E)/J_H$, so $\bar q_H$ is injective, and we have $\ker q_H=J_H$.
\end{proof}
\subsection{Decompositions of the vertex matrix}
Let $E$ be a finite directed graph. The vertex matrix of $E$ is the $E^0\times E^0$ matrix $A$ with entries $A(v,w)=|vE^1w|$; the powers of $A$ then have entries $A^n(v,w)=|vE^nw|$. We will do computations using block decompositions of the vertex matrix $A$. For subsets $C,D\subset E^0$, we write $A_{C,D}$ for the $C\times D$ subblock of $A$, and $A_C:=A_{C,C}$. We usually choose decompositions of $E^0=C_1\sqcup C_2\sqcup\cdots\sqcup C_n$ such that the associated block decomposition of $A$ is upper-triangular.
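The identity $A^n(v,w)=|vE^nw|$ is easy to check by brute force on small examples. The following sketch (the three-vertex graph and all names are ours, purely for illustration) compares the entries of $A^4$ with a direct enumeration of composable $4$-tuples of edges, using the paper's convention that $A(v,w)=|vE^1w|$ counts edges with range $v$ and source $w$:

```python
import numpy as np
from itertools import product

# A hypothetical graph on vertices {0,1,2}; each edge is a (range, source)
# pair, matching the convention A(v,w) = |vE^1w|.
edges = [(0, 1), (0, 1), (1, 2), (2, 0), (2, 2)]

A = np.zeros((3, 3), dtype=int)
for v, w in edges:
    A[v, w] += 1

# Brute-force |vE^4w|: composable tuples mu = e1 e2 e3 e4 with
# s(e_i) = r(e_{i+1}); the path has range r(e1) and source s(e4).
brute = np.zeros((3, 3), dtype=int)
for seq in product(edges, repeat=4):
    if all(seq[i][1] == seq[i + 1][0] for i in range(3)):
        brute[seq[0][0], seq[-1][1]] += 1

A4 = np.linalg.matrix_power(A, 4)
assert (A4 == brute).all()
```

The same comparison works for any power $n$, at the cost of enumerating $|E^1|^n$ tuples.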
For $v,w\in E^0$, we write $v\leq w\Longleftrightarrow vE^*w\not=\emptyset$, and $v\sim w\Longleftrightarrow v\leq w \text{ and }w\leq v$. It is easy to check that $\sim$ is an equivalence relation on $E^0$ (we have $v\sim v$ for all $v\in E^0$ because $E^0\subset E^*$). We write $E^0/\!\!\sim$ for the set of equivalence classes, and refer to these equivalence classes as the \emph{strongly connected components} of $E$. When $C\in E^0/\!\!\sim$, the matrix $A_C$ is either a $1\times 1$ zero matrix (if $C=\{v\}$ is a single vertex with no loops, in which case we say $C$ is a trivial component), or an irreducible matrix in the sense of Perron-Frobenius theory (so that for every $v,w\in C$, there exists $n$ such that $A^n(v,w)>0$).
We next order the vertex set $E^0$ to ensure that the vertex matrix takes a convenient block upper-triangular form. The relation $\leq $ descends to a well-defined partial order on $E^0/\!\!\sim$; when $C\leq D$, we say that $D$ talks to $C$. We list first the trivial components for which $A_C=(0)$ and which do not talk to nontrivial components; we list them in an order such that $w$ appears after $v$ when $v\leq w$. Next we list the components which are minimal for the order $\leq$ on the remaining components, grouping the vertices in the same component together. Then we list the trivial components which talk only to the components we have listed so far, and so on. This decomposes $A$ as a block upper-triangular matrix in which the diagonal components $A_C$ are either $1\times 1$ zero matrices or are irreducible. We will refer to such a decomposition as a \emph{Seneta decomposition} of $A$. (Since Seneta uses different conventions in \cite[\S1.2]{Seneta}, the decomposition he discusses there is block lower-triangular, and our minimal components would become maximal\footnote{Unfortunately, there is no universal convention as to whether $A(v,w)$ should refer to edges from $w$ to $v$ or edges from $v$ to $w$. Our
convention arises from viewing directed edges as arrows in a category, in
which case one expects $ef:=e\circ f$ to have source $s(f)$ and range
$r(e)$. This convention is standard in many places: for example, in the
substantial literature on higher-rank graphs, which have strong links to
higher-dimensional subshifts \cite{KP, PRW}, and in studying equivalences
for categories of modules over path algebras of quivers \cite{S1, S2}, see
especially the discussion in \cite[\S5.4]{S1}.}.)
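The construction just described can be carried out mechanically: compute the strongly connected components from mutual reachability, list them in a linear order extending $\leq$, and permute $A$ accordingly. A sketch (the $4\times 4$ matrix and all variable names are ours; the loop produces *some* linear extension of $\leq$, not necessarily the exact listing order in the text, which already suffices for block upper-triangularity):

```python
import numpy as np

# A hypothetical reducible vertex matrix, convention A(v,w) = |vE^1w|
# (so a nonzero entry A(v,w) records an edge from w to v).
A = np.array([
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 2],
])
n = A.shape[0]

# v <= w iff vE^*w is nonempty; walks of length < n suffice (k = 0 gives v <= v).
R = sum(np.linalg.matrix_power(A, k) for k in range(n)) > 0

# Strongly connected components: classes of v ~ w (v <= w and w <= v).
comps, seen = [], set()
for v in range(n):
    if v not in seen:
        C = [w for w in range(n) if R[v, w] and R[w, v]]
        comps.append(C)
        seen.update(C)

# List the components in an order extending <=: repeatedly take a component
# minimal among those remaining.  Then A becomes block upper-triangular.
order, remaining = [], list(comps)
while remaining:
    C = next(C for C in remaining
             if not any(R[D[0], C[0]] for D in remaining if D is not C))
    order.append(C)
    remaining.remove(C)

perm = [v for C in order for v in C]
B = A[np.ix_(perm, perm)]

offsets = np.cumsum([0] + [len(C) for C in order])
is_block_upper = all(
    not B[offsets[a]:offsets[a + 1], offsets[b]:offsets[b + 1]].any()
    for a in range(len(order)) for b in range(a))
assert is_block_upper
```

For this matrix the components are $\{0\}$, $\{1\}$ and $\{2,3\}$, and the identity permutation already gives a Seneta decomposition.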
\subsection{KMS states}
We denote the gauge actions of $\mathbb{T}$ on $\mathcal{T} C^*(E)$ and $C^*(E)$ by $\gamma$. We are interested in the dynamics $\alpha$ given, on both $\mathcal{T} C^*(E)$ and $C^*(E)$, by $\alpha_t=\gamma_{e^{it}}$. For KMS states, we use the conventions of our previous paper \cite{aHLRS1}. Thus we know from \cite[Proposition~2.1]{aHLRS1} that a state $\phi$ of $\mathcal{T} C^*(E)$ is a KMS$_\beta$ state for some $\beta\in \mathbb{R}$ if and only if
\begin{equation}\label{workhorse}
\phi(s_\mu s_\nu^*)=\delta_{\mu,\nu}e^{-\beta|\mu|}\phi(p_{s(\mu)})\quad\text{for all $\mu,\nu\in E^*$.}
\end{equation}
For fixed $\beta$ the KMS$_\beta$ states on $(\mathcal{T} C^*(E),\alpha)$ form a simplex, which we shall refer to as the \emph{KMS$_\beta$ simplex} of $(\mathcal{T} C^*(E),\alpha)$. The KMS$_0$ states are the invariant traces.
Since the results of \cite[\S3]{aHLRS1} already describe all the KMS states for large inverse temperatures, we do not have anything new to say about ground states or KMS$_\infty$ states.
\section{KMS states and quotients}
When the vertex matrix $A$ of $E$ is irreducible, there are no KMS$_\beta$ states on the Toeplitz algebra $\mathcal{T} C^*(E)$ when $\beta<\ln \rho(A)$. So it seems reasonable that if $C$ is a strongly connected component with $\ln \rho(A_C)>\beta$, then every KMS$_\beta$ state must vanish on vertex projections $p_v$ with $v\in C$. The key to our analysis of reducible graphs is that KMS states must also vanish on any projections $p_v$ for vertices $v$ that connect to such components $C$. The next result makes this precise.
\begin{prop}\label{old2.5}
Suppose that $H$ is a hereditary subset of $E^0$, and $q_H:\mathcal{T} C^*(E)\to \mathcal{T} C^*(E\backslash H)$ is the surjection of Proposition~\ref{quotmapH}. Then for every $\beta\in [0,\infty)$, $q_H^*:\psi\mapsto \psi\circ q_H$ is an affine injection of the KMS$_\beta$ simplex of $(\mathcal{T} C^*(E\backslash H),\alpha)$ into the KMS$_\beta$ simplex of $(\mathcal{T} C^*(E),\alpha)$. If $\{C\in H/\!\!\sim:\ln\rho(A_C)>\beta\}$ generates $H$ as a hereditary subset of $E^0$, then $\phi(p_v)=0$ for every KMS$_\beta$ state $\phi$ on $\mathcal{T} C^*(E)$ and every $v\in H$; if in addition $H$ is not all of $E^0$, then $q_H^*$ is surjective.
\end{prop}
\begin{lem}\label{Lemfactor}
Suppose that $H$ is the hereditary subset of $E^0$ generated by $\mathcal{C}_1\subset E^0/\!\!\sim$, and that $\beta\leq \ln \rho(A_C)$ for all $C\in \mathcal{C}_1$. Suppose that $\phi$ is a KMS$_\beta$ state on $(\mathcal{T} C^*(E),\alpha)$, and that $v$ belongs to the complement of $\bigcup\{C\in\mathcal{C}_1:\beta=\ln\rho(A_C)\}$ in $H$. Then $\phi(p_v)=0$.
\end{lem}
When $\{C\in H/\!\!\sim:\ln\rho(A_C)>\beta\}$ generates $H$, as in Proposition~\ref{old2.5}, Lemma~\ref{Lemfactor} applies to every $v\in H$ with $\mathcal{C}_1=\{C\in H/\!\!\sim:\ln\rho(A_C)>\beta\}$. The extra generality in the Lemma will be useful in the proof of Proposition~\ref{discardbottom} below.
\begin{proof}
For every path $\mu$ with $s(\mu)=v$, \eqref{TCK} implies that $p_{r(\mu)}\geq s_\mu s_\mu^*$, and hence
\begin{equation}\label{estphipv}
0\leq \phi(p_v)=\phi(s_\mu^* s_\mu)=e^{\beta|\mu|}\phi(s_\mu s_\mu^*)\leq e^{\beta|\mu|}\phi(p_{r(\mu)}).
\end{equation}
Proposition~2.1(c) of \cite{aHLRS1} implies that the vector $m^\phi:=(\phi(p_w))$ in $[0,1]^{E^0}$ satisfies the subinvariance relation $Am^\phi\leq e^\beta m^\phi$, and for every $C\in \mathcal{C}_1$ we have
\begin{equation}\label{locsubinv}
A_C(m^\phi|_C)\leq A_C(m^\phi|_C)+A_{C,H\backslash C}(m^\phi|_{H\backslash C})=(Am^\phi)|_C\leq e^\beta m^\phi|_C.
\end{equation}
Since $v\in H$ and $H$ is generated by $\mathcal{C}_1$, there exists $C\in \mathcal{C}_1$ such that $CE^*v\not=\emptyset$. Then either $\beta<\ln\rho(A_C)$ or $\beta=\ln\rho(A_C)$. Suppose that $\beta<\ln\rho(A_C)$. Then \eqref{locsubinv} and the last sentence in Theorem~1.6 of \cite{Seneta} imply that $m^\phi|_C=0$. We can therefore apply \eqref{estphipv} to any $\mu\in CE^*v$, and deduce that $\phi(p_v)=0$.
Now suppose that $\beta=\ln\rho(A_C)$. Then by hypothesis $v\notin C$, and there exists $\lambda\in CE^*v$ of the form $\lambda=e\mu$, where $e\in E^1$, $r(e)\in C$ and $s(e)\notin C$. If $m^\phi|_C=0$, then we can apply \eqref{estphipv} to $\mu$ and deduce that $\phi(p_v)=0$. So we suppose that $m^\phi|_C\not=0$. Then \eqref{locsubinv} and Theorem~1.6 of \cite{Seneta} imply that $m^\phi|_C$ is a multiple of the Perron-Frobenius eigenvector for $A_C$. Since
\begin{align}\label{locsubinv2}
(A_C(m^\phi|_C))_{r(e)}&\leq (A_C(m^\phi|_C))_{r(e)}+A(r(e),s(e))m^\phi_{s(e)}\\&\leq((Am^\phi)|_C)_{r(e)}\leq e^\beta m^\phi_{r(e)}=\rho(A_C)(m^\phi|_C)_{r(e)},\notag
\end{align}
and the left and right ends of \eqref{locsubinv2} are equal, we deduce that $A(r(e),s(e))m^\phi_{s(e)}=0$. Since $A(r(e),s(e))\geq 1$ because $e$ is an edge, this forces $\phi(p_{r(\mu)})=m^\phi_{s(e)}=0$; now \eqref{estphipv} implies that $\phi(p_v)=0$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{old2.5}]
Since $(E\backslash H)^*=\{\mu\in E^*:s(\mu)\notin H\}$, we can deduce from \cite[Proposition~2.1(a)]{aHLRS1} that $\psi\circ q_H$ is a KMS state if and only if $\psi$ is. Since $q_H$ is surjective, $q_H^*$ is injective, and it is clearly weak* continuous and affine. To see the assertion about surjectivity, suppose that $\{C\in H/\!\!\sim:\ln\rho(A_C)>\beta\}$ generates $H$ and $\phi$ is a KMS$_\beta$ state of $(\mathcal{T} C^*(E),\alpha)$. Lemma~\ref{Lemfactor} implies that $\phi(p_v)=0$ for all $v\in H$. Now we can apply \cite[Lemma~2.2]{aHLRS1} with $\mathcal{F}=\{s_\mu s_\nu^*:\mu,\nu\in E^*\}$ and $P=\{p_v:v\in H\}$, and deduce that $\phi$ factors through a state of $\mathcal{T} C^*(E)/J_H=\mathcal{T} C^*(E)/\ker q_H$. Thus if $H\not= E^0$, there is a state $\psi$ of $\mathcal{T} C^*(E\backslash H)$ such that $\phi=\psi\circ q_H$. Since $q_H$ is surjective and is equivariant for the various actions $\alpha$, $\psi$ is a KMS$_\beta$ state of $(\mathcal{T} C^*(E\backslash H),\alpha)$.
\end{proof}
The analogue of Proposition~\ref{old2.5} for the graph algebra $C^*(E)$ has a slightly different hypothesis: it suffices that $\{C:\ln\rho(A_C)>\beta\}$ generates $H$ as a \emph{saturated} hereditary set. This happens because the identification of $C^*(E)/I_H$ with $C^*(E\backslash H)$ only works when $H$ is saturated (compare Proposition~\ref{quotmapH} with \cite[Theorem~4.1]{BPRS} or \cite[Theorem~4.9]{CBMS}).
\begin{prop}\label{old2.5c}
Suppose that $H$ is a saturated hereditary subset of $E^0$, and write $\bar q_H$ for the canonical surjection of $C^*(E)$ onto $C^*(E\backslash H)$. Then for every $\beta\in [0,\infty)$, $\bar q_H^*:\psi\mapsto \psi\circ \bar q_H$ is an affine injection of the KMS$_\beta$ simplex of $(C^*(E\backslash H),\alpha)$ into the KMS$_\beta$ simplex of $(C^*(E),\alpha)$. If $\{C\in H/\!\!\sim:\ln\rho(A_C)>\beta\}$ generates $H$ as a saturated hereditary subset of $E^0$, then $\phi(p_v)=0$ for every KMS$_\beta$ state $\phi$ on $C^*(E)$ and every $v\in H$; if in addition $H$ is not all of $E^0$, then $\bar q_H^*$ is surjective.
\end{prop}
For the proof we need a simple lemma. Recall from the proof of \cite[Corollary~6.1]{aHLRS1}, for example,
that the saturation $\Sigma H$ of a hereditary set $H$ can be viewed as
$\bigcup_{k=0}^\infty S_kH$, where $S_kH$ are the subsets of $E^0$ defined recursively
by
\begin{equation}\label{eq:satdef}
S_0H=H\quad\text{and}\quad S_{k+1}H=S_kH\cup\{v:s(vE^1)\subset S_kH\}.
\end{equation}
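The recursion \eqref{eq:satdef} translates directly into code. A sketch (the four-vertex graph is ours and hypothetical; every vertex receives an edge, so the vacuous case $vE^1=\emptyset$ does not arise):

```python
# Edges of a hypothetical graph, as (range, source) pairs; every vertex
# receives an edge, so s(vE^1) is nonempty for all v.
edges = [(0, 1), (1, 2), (2, 2), (3, 0), (3, 2)]
vertices = {0, 1, 2, 3}

def saturation(H):
    """Sigma H via S_0 H = H and S_{k+1} H = S_k H + {v : s(vE^1) in S_k H}."""
    S = set(H)
    while True:
        # add v whenever every edge with range v has its source in S
        new = {v for v in vertices - S
               if all(w in S for (u, w) in edges if u == v)}
        if not new:
            return S
        S |= new

# H = {2} is hereditary (2E^1 = {(2,2)}, so 2E^*w is nonempty only for w = 2),
# and its saturation works its way back up the graph.
assert saturation({2}) == {0, 1, 2, 3}
```

Here $S_1\{2\}=\{1,2\}$, $S_2\{2\}=\{0,1,2\}$, and finally $3$ is added because both of its incoming edges have sources in $\{0,2\}$.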
\begin{lem}\label{extend=}
Suppose that $H$ is a hereditary subset of $E^0$, $\beta\in [0,\infty)$, and $\phi,\psi$ are KMS$_\beta$ states on $(C^*(E), \alpha)$.
\begin{enumerate}
\item\label{extenda} If $\phi(p_v)=\psi(p_v)$ for all $v\in H$, then $\phi=\psi$ on the ideal $I_H$ of $C^*(E)$ generated by $\{p_v:v\in H\}$.
\item\label{extendb} If $\phi(p_v)=0$ for all $v\in H$, then $\phi(p_v)=0$ for all $v$ in the saturation $\Sigma H$.
\end{enumerate}
\end{lem}
\begin{proof}
For \eqref{extenda}, we first claim that $\phi(p_v)=\psi(p_v)$ for all $v\in \Sigma H$.
We are given that $\phi(p_v)=\psi(p_v)$ for $v\in S_0H$. Suppose that
$\phi(p_v)=\psi(p_v)$ for $v\in S_kH$. Then for $v\in S_{k+1}H$ and $e\in vE^1$, we have
$s(e)\in S_kH$, and
\begin{align}\label{extendcomp}
\phi(p_v)
&= \phi\Big(\sum_{e \in vE^1} s_e s^*_e\Big)
= \sum_{e \in vE^1} e^{-\beta} \phi(p_{s(e)})\\
&=\sum_{e \in vE^1} e^{-\beta} \psi(p_{s(e)})=\psi(p_v).\notag
\end{align}
Thus by induction we have $\phi(p_v)=\psi(p_v)$ for all $v\in S_kH$ and all $k$, and hence for all $v\in \Sigma H$, as claimed.
Next, we recall that
\[
I_H=\overline{\lsp}\{s_\mu s_\nu^*:s(\mu)=s(\nu)\in \Sigma H\}
\]
(see \cite[Lemma~4.3]{BPRS}, for example). For a typical spanning element $s_\mu s_\nu^*$, Equation (2.1) in \cite{aHLRS1} says that
\[
\phi(s_\mu s_\nu^*)=\delta_{\mu,\nu}e^{-\beta |\mu|}\phi(p_{s(\mu)})=\delta_{\mu,\nu}e^{-\beta |\mu|}\psi(p_{s(\mu)})=\psi(s_\mu s_\nu^*),
\]
and it follows from linearity and continuity that $\phi=\psi$ on $I_H$.
For \eqref{extendb}, we repeat the induction argument of the first paragraph, and in particular the computation in the first line of \eqref{extendcomp}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{old2.5c}]
As in the proof of Proposition~\ref{old2.5}, $\bar q_H^*:\psi\mapsto \psi\circ \bar q_H$ is an affine injection of the KMS$_\beta$ simplex of $C^*(E\backslash H)$ into the KMS$_\beta$ simplex of $C^*(E)$. Suppose that $\phi$ is a KMS$_\beta$ state of $C^*(E)$. Then applying Proposition~\ref{old2.5} to the hereditary closure $H_0$ of $\{C\in H/\!\!\sim:\beta<\ln\rho(A_C)\}$ shows that $\phi(\bar p_v)=\phi\circ \pi_E(p_v)=0$ for $v\in H_0$. Thus $\{v \in E^0 : \phi(\bar p_v) = 0\}$ contains $H_0$, and hence by Lemma~\ref{extend=} contains $\Sigma H_0=H$. Now \cite[Lemma~2.2]{aHLRS1} implies that $\phi$ factors through a state of $C^*(E)/I_H$, and hence there is a state $\psi$ of $C^*(E\backslash H)$ such that $\phi=\psi\circ \bar q_H$. Then the surjectivity of $\bar q_H$ implies that $\psi$ is a KMS$_\beta$ state, and $\bar q_H^*(\psi)=\phi$.
\end{proof}
\section{KMS states on Toeplitz algebras}\label{sec:ToeplitzKMS}
We suppose that $E$ has at least one cycle, so that $\rho(A)\geq 1$ (by Lemma~A.1 in \cite{aHLRS1}), and the critical inverse temperature $\ln \rho(A)\geq 0$. Since a Seneta decomposition of $A$ is upper triangular as a block matrix, we have
\[
\rho(A)=\max\{\rho(A_C):C\in E^0/\!\!\sim\text{ is a nontrivial strongly connected component}\}.
\]
We therefore focus on the set
\begin{equation}\label{rightevalue}
\{C\in E^0/\!\!\sim \;:\rho(A_C)=\rho(A)\}
\end{equation}
of \emph{critical components} of $E$, and in particular on the set $\mc=\mc(E)$ of \emph{minimal critical components}: those critical components that are minimal in the partial order induced on the set \eqref{rightevalue}.
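For a concrete matrix, the critical components can be read off from the spectral radii of the diagonal blocks of a Seneta decomposition, since the spectrum of a block upper-triangular matrix is the union of the spectra of its diagonal blocks. A sketch (the $5\times 5$ matrix and the component partition are ours, chosen as a hypothetical example):

```python
import numpy as np

# A hypothetical Seneta decomposition with components C1 = {0}, C2 = {1,2}
# and C3 = {3,4}; block upper-triangular, convention A(v,w) = |vE^1w|.
A = np.array([
    [1, 1, 0, 0, 0],
    [0, 0, 2, 0, 1],
    [0, 2, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
])
components = [[0], [1, 2], [3, 4]]

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

radii = [spectral_radius(A[np.ix_(C, C)]) for C in components]
rho_A = spectral_radius(A)

# rho(A) is attained on the diagonal blocks; the critical components are
# those with rho(A_C) = rho(A).
critical = [C for C, r in zip(components, radii) if np.isclose(r, rho_A)]
```

Here $\rho(A_{C_1})=1$ while $\rho(A_{C_2})=\rho(A_{C_3})=\rho(A)=2$, so $C_2$ and $C_3$ are critical; which of them are *minimal* critical is then read off from the order $\leq$ (in this example $C_3$ talks to $C_2$ through the entry $A(1,4)$).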
The results of the previous section imply that if $\beta<\ln\rho(A)$, then every KMS$_\beta$ state on $\mathcal{T} C^*(E)$ vanishes on the projections $p_v$ for $v$ in the hereditary closure of $\{C:\ln\rho(A_C)=\ln\rho(A)\}$. This hereditary closure is the same as that of $\mc(E)$. So the location of the minimal critical components in the graph plays an important role in our analysis. Because the minimal critical components are minimal in \eqref{rightevalue}, they cannot talk to each other. Thus in a Seneta decomposition of the vertex matrix $A$, our conventions ensure that the diagonal blocks $\{A_C:C\in \mc(E)\}$ associated to the minimal critical components appear in the decomposition above other critical components $A_D$.
The next result is a new version of \cite[Theorem~2.1(a)]{aHLRS1}.
\begin{prop}\label{discardbottom}
Suppose $E$ has at least one cycle. Let $K=\bigcup_{C\in\mc(E)} C$, let $H$ be the hereditary closure of $K$, and let $L$ be the union of the nontrivial strongly connected components. Let $\beta\in \mathbb{R}$. Then
\begin{enumerate}
\item\label{elempropa} $\rho(A_{E^0\backslash H})<\rho(A)$;
\item\label{elempropb} if $\phi$ is a KMS$_{\ln\rho(A)}$ state of $(\mathcal{T} C^*(E),\alpha)$, then $\phi(p_v)=0$ for $v\in H\backslash K$;
\item\label{elempropc} if $E^0$ is the hereditary closure of $K$ and $\phi$ is a KMS$_\beta$ state of $(\mathcal{T} C^*(E),\alpha)$, then $\ln\rho(A)\leq \beta$;
\item\label{elempropd} if $E^0$ is the hereditary closure of $L$ and $\phi$ is a KMS$_\beta$ state of $(\mathcal{T} C^*(E),\alpha)$, then there is a nontrivial component $C$ with $\ln\rho(A_C)\leq \beta$;
\item\label{elemprope} if $E^0$ is the saturated hereditary closure of $L$ and $\phi$ is a KMS$_\beta$ state of $(C^*(E),\alpha)$, then there is a nontrivial component $C$ with $\ln\rho(A_C)\leq \beta$.
\end{enumerate}
\end{prop}
\begin{proof}
Since every minimal element of \eqref{rightevalue} is contained in $H$, so is every other strongly connected component $C$ in \eqref{rightevalue}. Thus $\rho(A_C)<\rho(A)$ for every strongly connected component $C$ that is contained in $E^0\backslash H$, and
\[
\rho(A_{E^0\backslash H})= \max\{\rho(A_C):C\in E^0/\!\!\sim,\ C\subset E^0\backslash H\}<\rho(A),
\]
which is \eqref{elempropa}. Next suppose that $\phi$ is a KMS$_{\ln\rho(A)}$ state on $(\mathcal{T} C^*(E),\alpha)$. By definition every $C\in \mc(E)$ satisfies $\rho(A_C)=\rho(A)$, so we can apply Lemma~\ref{Lemfactor} with $\beta=\ln\rho(A)$ and $\mathcal{C}_1=\mc(E)$, and \eqref{elempropb} follows.
For \eqref{elempropc}, we suppose that $\ln\rho(A)>\beta$. Then $\mc(E)\subset\{C\in E^0/\!\!\sim: \ln\rho(A_C)>\beta\}$, and hence the hypothesis implies that $\{C: \ln\rho(A_C)>\beta\}$ generates $E^0$. So Proposition~\ref{old2.5} applies with $H=E^0$. Thus $\phi(p_v)=0$ for all $v\in E^0$, and $1=\phi(1)=\sum_{v\in E^0}\phi(p_v)=0$, which is a contradiction. A similar argument gives \eqref{elempropd}. For \eqref{elemprope}, we repeat the argument yet again, using Proposition~\ref{old2.5c} instead of Proposition~\ref{old2.5}.
\end{proof}
\begin{rem}
If the hereditary closure $G$ of $L$ is not all of $E^0$, then $E\backslash G$ has no cycles, so $\rho(A_{E^0\backslash G})=0$, and Theorem~3.1 of \cite{aHLRS1} applies to $E\backslash G$ and every $\beta\in \mathbb{R}$. Thus if $\beta<\ln\rho(A_C)$ for every nontrivial component $C$, there is a $(|E^0\backslash G|-1)$-dimensional simplex of KMS$_\beta$ states on $\mathcal{T} C^*(E\backslash G)$. It follows from Proposition~\ref{old2.5} that there is also a $(|E^0\backslash G|-1)$-dimensional simplex of KMS$_\beta$ states on $\mathcal{T} C^*(E)$. Whether any of these factor through $C^*(E)$ will depend on whether $E\backslash \Sigma G$ has sources (see \cite[Corollary~6.1]{aHLRS1}), and Example~\ref{EminusGsourced} shows that $E\backslash \Sigma G$ can have sources.
\end{rem}
Proposition~\ref{discardbottom} implies that the KMS$_{\ln \rho(A)}$ simplex does not see the set $H\backslash K$, and hence (via \cite[Lemma~2.2]{aHLRS1}) that the KMS$_{\ln \rho(A)}$ states vanish on the ideal $J_{H\backslash K}$ generated by $\{p_v:v\in H\backslash K\}$. Our next result describes how the minimal critical components give rise to KMS$_{\ln \rho(A)}$ states.
\begin{thm}\label{KMScrit}
Suppose that $E$ is a directed graph with at least one cycle. Let $K=\bigcup_{C\in \mc(E)}C$, and let $H:=\{v\in E^0:KE^*v\not=\emptyset\}$ be the hereditary closure of $K$.
\begin{enumerate}
\item\label{crita} Let $C\in \mc(E)$ be a minimal critical component, and let $x^C$ be the unimodular Perron-Frobenius eigenvector of $A_C$ (that is, the one with $\|x^C\|_1=1$). Define a vector $z^C\in [0,\infty)^{E^0\backslash H}$ by
\begin{equation}\label{defz}
z^C:=\rho(A)^{-1}(1-\rho(A)^{-1}A_{E^0\backslash H})^{-1}A_{E^0\backslash H,\,C}x^C.
\end{equation}
Then there is a KMS$_{\ln\rho(A)}$ state $\psi_C$ of $(\mathcal{T} C^*(E),\alpha)$ such that
\begin{equation}\label{formphiC}
\psi_C(s_\mu s_\nu^*)=
\delta_{\mu,\nu}\rho(A)^{-|\mu|}(1+\|z^C\|_1)^{-1}\begin{cases}
z^C_{s(\mu)}&\text{if $s(\mu)\in E^0\backslash H$}\\
x^C_{s(\mu)}&\text{if $s(\mu)\in C$}\\
0&\text{if $s(\mu)\in H\backslash C$.}
\end{cases}
\end{equation}
The state $\psi_C$ factors through a KMS$_{\ln\rho(A)}$ state $\bar\psi_C$ of $(C^*(E),\alpha)$.
\item\label{critb} The map $t\mapsto \sum_{C\in \mc(E)}t_C\psi_C$ is an affine isomorphism of
\[
S_E:=\Big\{t\in[0,1]^{\mc(E)}:\sum_{C\in \mc(E)} t_C=1\Big\}
\]
onto a simplex $\Sigma_{\mc(E)}$ of KMS$_{\ln\rho(A)}$ states of $(\mathcal{T} C^*(E),\alpha)$. Every KMS$_{\ln\rho(A)}$ state of $(\mathcal{T} C^*(E),\alpha)$ is a convex combination of a state of the form $q_H^*(\phi)=\phi\circ q_H$ and a state in $\Sigma_{\mc(E)}$.
\end{enumerate}
\end{thm}
The idea in part~\eqref{crita} is that the values of a KMS state on vertices in $C$
contribute to the values $\phi(p_v)$ for $v\in E^0\backslash H$ when there are paths
$\lambda$ from $C$ to $v$. As
discussed at the beginning of Section~3 of \cite{aHLRS1}, for $\beta > \ln\rho(A_{E^0 \backslash H})$ the series $\sum^\infty_{n=0} e^{-\beta n} A_{E^0 \backslash H}^n$ converges in operator norm to $(1 - e^{-\beta} A_{E^0 \backslash H})^{-1}$, and so
\begin{equation}\label{eq:inverse series}
(1 - e^{-\beta} A_{E^0 \backslash H})^{-1}(v,w)
= \sum^\infty_{n=0} e^{-\beta n} A_{E^0 \backslash H}^n(v,w)
= \sum_{\lambda \in v E^* w} e^{-\beta |\lambda|}.
\end{equation}
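The identity \eqref{eq:inverse series} can be checked numerically: for $\beta>\ln\rho(A_{E^0\backslash H})$ the partial sums of the series, which weight each path $\lambda$ by $e^{-\beta|\lambda|}$, converge geometrically to the matrix inverse. A sketch (with a hypothetical $2\times 2$ matrix $B$ standing in for $A_{E^0\backslash H}$):

```python
import numpy as np

# Hypothetical B = A_{E^0\H}: a loop at each of two vertices and one
# connecting edge, so rho(B) = 1 and the paths from vertex 1 to vertex 0
# of length n are counted by B^n(0,1) = n.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
beta = 1.0                      # beta > ln rho(B) = 0, so e^{-beta} rho(B) < 1

partial = sum(np.exp(-beta * n) * np.linalg.matrix_power(B, n)
              for n in range(200))
closed_form = np.linalg.inv(np.eye(2) - np.exp(-beta) * B)
assert np.allclose(partial, closed_form)

# The (0,1) entry is the path sum: sum over n >= 1 of n e^{-beta n}.
a = np.exp(-beta)
assert np.isclose(closed_form[0, 1], a / (1 - a) ** 2)
```

The $(0,1)$ entry illustrates the path interpretation: a path of length $n$ from vertex $1$ to vertex $0$ chooses when to cross the connecting edge, giving $n$ paths, each weighted by $e^{-\beta n}$.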
Since $\rho(A) > \rho(A_{E^0
\backslash H})$, the $(E^0\backslash H)\times C$ matrix
$(1-\rho(A)^{-1}A_{E^0\backslash H})^{-1}A_{E^0\backslash H,\,C}$ in~\eqref{defz} has entries
\begin{equation}\label{eq:zCseries}
\big((1-\rho(A)^{-1}A_{E^0\backslash H})^{-1}A_{E^0\backslash H,\,C}\big)(v,w)
= \sum_{e \in (E^0 \backslash H) E^1 w}\; \sum_{\mu \in v E^* r(e)} \rho(A)^{-|\mu|}.
\end{equation}
We use \eqref{eq:inverse series} in the proof of part~(\ref{crita})
and~\eqref{eq:zCseries} in the proof of part~(\ref{critb}), and again in Lemma~\ref{factorthruC*} and Theorem~\ref{thm:altogether}(\ref{it:critstates}).
\begin{proof}[Proof of Theorem~\ref{KMScrit}\,\eqref{crita}]
We partition $E^0$ as $(E^0\backslash H)\cup C\cup (H\backslash C)$, and claim that the vector $(z^C,x^C,0)$ satisfies
\begin{equation}\label{eigenvectorforA}
A(z^C,x^C,0)=\rho(A)(z^C,x^C,0).
\end{equation}
Since $C$ is minimal, it does not talk to any of the other components in $H\backslash C$, and we have
\begin{equation}\label{decompA}
A(z^C,x^C,0)=(A_{E^0\backslash H}z^C+A_{E^0\backslash H,C}x^C,A_Cx^C,0).
\end{equation}
We know that $A_Cx^C=\rho(A)x^C$, so we concentrate on the first term.
Proposition~\ref{discardbottom} implies that $\rho(A_{E^0\backslash H})<\rho(A)$.
Since $e^{-\ln\rho(A)}=\rho(A)^{-1}$, \eqref{eq:inverse series} with $\beta=\ln\rho(A)$ gives
\[
z^C=\sum_{n=0}^\infty\rho(A)^{-n-1}A_{E^0\backslash H}^nA_{E^0\backslash H,\,C}x^C,
\]
and we have
\begin{align*}
A_{E^0\backslash H}z^C+A_{E^0\backslash H,C}x^C
&=A_{E^0\backslash H}\Big(\sum_{n=0}^\infty \rho(A)^{-n-1}A_{E^0\backslash H}^nA_{E^0\backslash H,C}x^C\Big)+A_{E^0\backslash H,C}x^C\\
&=\Big(\sum_{m=1}^\infty \rho(A)^{-m}A_{E^0\backslash H}^mA_{E^0\backslash H,C}x^C\Big)+A_{E^0\backslash H,C}x^C\\
&=\sum_{m=0}^\infty \rho(A)^{-m}A_{E^0\backslash H}^mA_{E^0\backslash H,C}x^C\\
&=\rho(A)z^C.
\end{align*}
From this and \eqref{decompA}, we deduce that $(z^C,x^C,0)$ satisfies \eqref{eigenvectorforA}, as claimed.
Since $x^C$ is unimodular, $m:=(1+\|z^C\|_1)^{-1}(z^C,x^C,0)$ satisfies $\|m\|_1=1$, and hence is a probability measure on $E^0$. Equation~\eqref{eigenvectorforA} implies that $Am=\rho(A)m$. Thus Proposition~4.1 of \cite{aHLRS1} implies that there is a KMS$_{\ln\rho(A)}$ state $\psi_C$ on $(\mathcal{T} C^*(E),\alpha)$ satisfying \eqref{formphiC},
and that $\psi_C$ factors through a KMS$_{\ln\rho(A)}$ state of $(C^*(E),\alpha)$.
\end{proof}
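The eigenvector equation \eqref{eigenvectorforA} is easy to test numerically. The following sketch builds a hypothetical graph (our choice, not from the paper) with $E^0\backslash H=\{0\}$, a minimal critical component $C=\{1,2\}$ given by a $2$-cycle, and $H\backslash C=\{3\}$; it computes $x^C$ and $z^C$ as in \eqref{defz} and checks that $(z^C,x^C,0)$ is a $\rho(A)$-eigenvector:

```python
import numpy as np

# Hypothetical vertex matrix, vertices ordered as E^0\H = {0}, C = {1,2},
# H\C = {3}; convention A(v,w) = |vE^1w|.
A = np.array([
    [0.0, 1.0, 0.0, 0.0],   # 0 receives an edge from the critical cycle C
    [0.0, 0.0, 1.0, 1.0],   # C = {1,2} is a 2-cycle; 1 also receives from 3
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],   # 3 lies in H\C and receives no edges
])
rho = max(abs(np.linalg.eigvals(A)))        # rho(A) = rho(A_C) = 1

# Unimodular Perron-Frobenius eigenvector of A_C = [[0,1],[1,0]].
A_C = A[1:3, 1:3]
vals, vecs = np.linalg.eig(A_C)
x = np.real(vecs[:, np.argmax(vals.real)])
x = x / x.sum()                             # normalise so ||x||_1 = 1

# z^C from the definition: here A_{E^0\H} = [0] and A_{E^0\H,C} = [1, 0].
A_out = A[0:1, 0:1]
A_out_C = A[0:1, 1:3]
z = (1 / rho) * np.linalg.inv(np.eye(1) - A_out / rho) @ A_out_C @ x

# The vector (z^C, x^C, 0) is an eigenvector of A with eigenvalue rho(A).
m = np.concatenate([z, x, [0.0]])
assert np.allclose(A @ m, rho * m)
```

Here $x^C=(\tfrac12,\tfrac12)$, $z^C=(\tfrac12)$, and normalising $m$ by $(1+\|z^C\|_1)^{-1}$ would give the probability vector that determines $\psi_C$ through \eqref{formphiC}.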
The double sum appearing on the right-hand side of~\eqref{eq:zCseries} is parametrised by paths in $vE^*w$
of the form $\mu e$, where $r(e)$ is in $E^0\backslash C$ and $\mu$ is a path in
$E\backslash H$. We say that such paths \emph{make a quick exit from $C$}. For a
minimal critical component $C$, we write $\operatorname{QE}(C)$ for the set
\[
\operatorname{QE}(C) := \{\mu e : e \in E^1 C, r(e) \not\in C, \mu \in E^* r(e)\}
\]
of paths which start in $C$ and make a quick exit from $C$, and $\operatorname{QE}(K):=\bigcup_{C\in
\mc(E)} \operatorname{QE}(C)$. With this notation, the right-hand side of~\eqref{eq:zCseries} becomes
\[
\sum_{\lambda \in v \operatorname{QE}(C)w} \rho(A)^{-(|\lambda|-1)}.
\]
\begin{lem}\label{lemonQE}
The projections $\{s_\lambda s_\lambda^*:\lambda\in \operatorname{QE}(K)\}$ are mutually orthogonal.
\end{lem}
\begin{proof}
Suppose that $\mu,\nu\in \operatorname{QE}(K)$
and $\mu\not=\nu$. If $|\mu|=|\nu|$, then $(s_\mu s_\mu^*)(s_\nu s_\nu^*)=s_\mu(s_\mu^*s_\nu)s_\nu^*=0$. So suppose that one path is longer, say $|\mu|>|\nu|$. Then $s(\nu)$ is in $K$ and $s(\mu_{|\nu|})$ is not in $K$ because the different minimal critical components do not talk to each other. Thus $\mu$ does not have the form $\nu\mu'$, and we have $s_\mu^*s_\nu=0$, which implies that $(s_\mu s_\mu^*)(s_\nu s_\nu^*)=0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{KMScrit}\,\eqref{critb}]
Suppose that $\phi$ is a KMS$_{\ln\rho(A)}$ state of $(\mathcal{T} C^*(E),\alpha)$, and consider $m^\phi=(\phi(p_v))$, which by \cite[Proposition~2.1(c)]{aHLRS1} satisfies the subinvariance relation $Am^\phi\leq \rho(A)m^\phi$. Suppose that $C\in \mc=\mc(E)$. Proposition~\ref{discardbottom} implies that $m^\phi_v=0$ for $v\in H\backslash K$, which, since the minimal critical components do not talk to each other, implies that $(Am^\phi)|_C=A_C(m^\phi|_C)$. So subinvariance implies that
\[
A_C(m^\phi|_C)=(Am^\phi)|_C\leq \rho(A)m^\phi|_C=\rho(A_C)m^\phi|_C;
\]
now \cite[Theorem~1.6]{Seneta} implies that we have equality throughout, and that $m^\phi|_C$ is a multiple of the unimodular Perron-Frobenius eigenvector $x^C$ for $A_C$. We define $t_C\in [0,\infty)$ by $m^\phi|_C=t_C(1+\|z^C\|_1)^{-1}x^C$.
We claim that $\sum_{C\in \mc}t_C\leq 1$. For $v\in E^0\backslash H$,
Lemma~\ref{lemonQE} implies that $\phi(p_v)\geq \sum_{\lambda\in v\operatorname{QE}(K)}\phi(s_\lambda
s_\lambda^*)$. Now we calculate, using \cite[Proposition~2.1(a)]{aHLRS1}
and~\eqref{eq:zCseries}:
\begin{align}\label{lbforphip}
\phi(p_v)&\geq \sum_{\lambda\in v\operatorname{QE}(K)}\phi(s_\lambda s_\lambda^*)=\sum_{\lambda\in v\operatorname{QE}(K)}\rho(A)^{-|\lambda|}\phi(p_{s(\lambda)})\\
&=\sum_{C\in \mc}t_C(1+\|z^C\|_1)^{-1}\Big(\sum_{\lambda\in v\operatorname{QE}(C)}\rho(A)^{-|\lambda|}x^C_{s(\lambda)}\Big)\notag\\
&=\sum_{C\in \mc}t_C(1+\|z^C\|_1)^{-1}\Big(\sum_{w\in C}\;\sum_{\lambda\in v\operatorname{QE}(C)w}\rho(A)^{-|\lambda|}x^C_w\Big)\notag\\
&=\sum_{C\in \mc}t_C(1+\|z^C\|_1)^{-1}\Big(\sum_{w\in C}\rho(A)^{-1}\big((1-\rho(A)^{-1}A_{E^0\backslash H})^{-1}A_{E^0\backslash H,\,C}\big)(v,w)x^C_w\Big)\notag\\
&=\sum_{C\in \mc}t_C(1+\|z^C\|_1)^{-1}z^C_v.\notag
\end{align}
For $v\in C$ we have $\phi(p_v)=t_C(1+\|z^C\|_1)^{-1}x^C_v$ by definition of $t_C$. Thus
\begin{align}\label{subinvofdiff}
1=\phi(1)&=\sum_{v\in E^0}\phi(p_v)\geq\sum_{v\in E^0\backslash H}\phi(p_v)+\sum_{C\in \mc}\sum_{v\in C}\phi(p_v)\\
&\geq \sum_{v\in E^0\backslash H}\sum_{C\in \mc}t_C(1+\|z^C\|_1)^{-1}z^C_v+\sum_{C\in \mc}\sum_{v\in C}t_C(1+\|z^C\|_1)^{-1}x^C_v\notag\\
&=\sum_{C\in \mc}t_C(1+\|z^C\|_1)^{-1}\|z^C\|_1+\sum_{C\in \mc}t_C(1+\|z^C\|_1)^{-1}\notag\\
&=\sum_{C\in \mc}t_C,\notag
\end{align}
as claimed.
The states $\psi_C$ in part~\eqref{crita} are KMS$_{\ln\rho(A)}$ states with $m^{\psi_C}=(1+\|z^C\|_1)^{-1}(z^C,x^C,0)$, and hence \eqref{eigenvectorforA} says that $Am^{\psi_C}=\rho(A)m^{\psi_C}$. We know from \cite[Proposition~2.1(c)]{aHLRS1} that $m^\phi$ is a probability measure with $Am^\phi\leq \rho(A)m^\phi$. Thus
\begin{align}
A\Big(m^\phi-&\sum_{C\in\mc}t_Cm^{\psi_C}\Big)=Am^\phi-\sum_{C\in \mc} t_CAm^{\psi_C}
=Am^\phi-\sum_{C\in \mc} t_C\rho(A)m^{\psi_C}\label{msubinv}\\
&\leq \rho(A)m^\phi-\sum_{C\in \mc} t_C\rho(A)m^{\psi_C}
=\rho(A)\Big(m^\phi-\sum_{C\in\mc}t_Cm^{\psi_C}\Big).\notag
\end{align}
For $v\in E^0\backslash H$, we saw in \eqref{lbforphip} that
\[
m^\phi_v=\phi(p_v)\geq\sum_{C\in \mc}t_C(1+\|z^C\|_1)^{-1}z^C_v=\sum_{C\in \mc}t_Cm^{\psi_C}_v.
\]
For $v\in K$, say $v\in C$, we have from the definition of $t_C$ that
\[
m^\phi_v=t_C(1+\|z^C\|_1)^{-1}x^C_v=t_C\psi_C(p_v)=t_Cm^{\psi_C}_v;
\]
for $v\in H\backslash K$, Proposition~\ref{discardbottom} gives $m^\phi_v=m^{\psi_C}_v=0$ for all $C$. Thus the difference satisfies $\big(m^\phi-\sum_{C\in \mc} t_Cm^{\psi_C}\big)\big|_H=0$.
If $\sum_{C\in \mc}t_C=1$, then we have $m^\phi=\sum_C t_Cm^{\psi_C}$ because both are probability measures and $m^\phi\geq\sum_C t_Cm^{\psi_C}$. Then, since $\phi$ and $\sum_Ct_C\psi_C$ are KMS$_{\ln\rho(A)}$ states of $(\mathcal{T} C^*(E),\alpha)$ which agree on projections, Proposition~2.1 of \cite{aHLRS1} implies that $\phi=\sum_{C} t_C\psi_C$.
If $\sum_{C\in \mc}t_C<1$, then
\[
m:=\Big(1-\sum_{C\in\mc}t_C\Big)^{-1}\Big(m^\phi-\sum_{C\in\mc}t_Cm^{\psi_C}\Big)\Big|_{E^0\backslash H}
\]
is a probability measure, and the calculation \eqref{msubinv} implies that $m$ is subinvariant for the graph $E\backslash H$. Since $\rho(A_{E^0\backslash H})<\rho(A)$, applying \cite[Theorem~3.1]{aHLRS1} to the graph $E\backslash H$, with $\beta=\ln\rho(A)$ and $\epsilon=(1-\rho(A)^{-1}A_{E^0\backslash H})m$, gives a KMS$_{\ln\rho(A)}$ state $\phi_\epsilon$ on $(\mathcal{T} C^*(E\backslash H),\alpha)$ such that $\phi_\epsilon(p_v)=m_v$ for $v\in E^0\backslash H$. Now $(1-\sum_C t_C)(\phi_\epsilon\circ q_H)+\sum_Ct_C\psi_C$ is a KMS$_{\ln\rho(A)}$ state on $(\mathcal{T} C^*(E),\alpha)$ which agrees with $\phi$ on vertex projections, and hence
\[
\phi=\Big(1-\sum_{C\in\mc} t_C\Big)(\phi_\epsilon\circ q_H)+\sum_Ct_C\psi_C.\qedhere
\]
\end{proof}
Since $\rho(A_{E^0\backslash H})<\rho(A)$, Theorem~3.1 of \cite{aHLRS1} describes the
KMS$_{\ln\rho(A)}$ states on $\mathcal{T} C^*(E\backslash H)$. We write $y^{E\backslash H}$ for
the vector in $[1,\infty)^{E^0\backslash H}$ described in \cite[Theorem~3.1(a)]{aHLRS1},
$\Delta^{E\backslash H}_{\ln\rho(A)}$ for the simplex $\{\epsilon:\epsilon\cdot
y^{E\backslash H}=1\}$ in $[0,\infty)^{E^0\backslash H}$, and $\phi_\epsilon$ for the
KMS$_{\ln\rho(A)}$ state on $\mathcal{T} C^*(E\backslash H)$ described in
\cite[Theorem~3.1(b)]{aHLRS1}.
\begin{cor}\label{cor:state decomp}
Every KMS$_{\ln\rho(A)}$ state on $\mathcal{T} C^*(E)$ has the form
\begin{equation}\label{genKMScrit}
\phi_{r,\epsilon,t}:=r(\phi_\epsilon\circ q_H)+(1-r)\Big(\sum_{C\in \mc(E)}t_C\psi_C\Big)
\end{equation}
for some $r\in [0,1]$, $\epsilon\in\Delta^{E\backslash H}_{\ln\rho(A)}$ and $t\in
S_{\mc}$. We have $\phi_{r,\epsilon,t} = \phi_{r',\epsilon',t'}$ if and only if
$(r\epsilon, (1-r)t) = (r'\epsilon', (1-r')t')$.
\end{cor}
\begin{proof}
Theorem~\ref{KMScrit}(\ref{critb}) shows that each KMS$_{\ln\rho(A)}$ state has the
form~\eqref{genKMScrit}.
Suppose that $(r\epsilon, (1-r)t) = (r'\epsilon', (1-r')t')$. Then $(1 - r)\sum t_C
\psi_C = (1 - r')\sum t'_C \psi_C$, and so $\phi_{r, \epsilon,
t} - \phi_{r',\epsilon',t'} = (r\phi_\epsilon - r'\phi_{\epsilon'}) \circ q_H$.
Since $\sum t_C = \sum t'_C = 1$, we also have $1 - r = 1 - r'$ and hence $r = r'$.
So either $r = 0$ or $\epsilon =
\epsilon'$, and in either case, $r\phi_\epsilon = r'\phi_{\epsilon'}$, giving
$\phi_{r,\epsilon,t} - \phi_{r',\epsilon',t'} = 0$.
Now suppose that $\phi_{r, \epsilon, t} = \phi_{r', \epsilon', t'}$. Fix $C \in \mc(E)$
and $v \in C$. For $C' \in \mc(E)$, the formula~\eqref{formphiC} shows that
$\psi_{C'}(p_v) = \delta_{C, C'}(1 + \|z^C\|_1)^{-1}x^C_v$. Since $q_H(p_v) = 0$,
\[
0 = \phi_{r, \epsilon, t}(p_v) - \phi_{r', \epsilon', t'}(p_v)
= \big((1-r)t_C - (1 - r')t'_C\big) (1 + \|z^C\|_1)^{-1} x^C_v.
\]
Parts (a)~and~(d) of \cite[Theorem~1.5]{Seneta} imply that $x^C_v > 0$, and so $(1 -
r)t_C = (1 - r')t'_C$. It remains to show that $r \epsilon = r'\epsilon'$.
We have $r = 1 - \|(1-r)t\|_1 = 1 - \|(1-r')t'\|_1 = r'$, and so $0 = \phi_{r, \epsilon,
t} - \phi_{r,\epsilon',t} = r(\phi_\epsilon \circ q_H - \phi_{\epsilon'}\circ q_H)$.
If $r = 0$, then we trivially have $r\epsilon = r' \epsilon'$. Suppose that $r \not= 0$.
Then $\phi_\epsilon \circ q_H = \phi_{\epsilon'} \circ q_H$. Proposition~\ref{old2.5}
implies that $q_H^*$ is injective, so $\phi_\epsilon = \phi_{\epsilon'}$; since $\epsilon \mapsto \phi_\epsilon$ is
injective \cite[Theorem~3.1(b)]{aHLRS1}, we deduce that $\epsilon = \epsilon'$.
\end{proof}
\section{The KMS simplices for a fixed inverse temperature}\label{sec:allKMS}
In this section, we consider a finite directed graph $E$ and a real number $\beta$, and aim to describe the extreme points of the KMS$_\beta$ simplices of
$\mathcal{T} C^*(E)$ and $C^*(E)$. The states described in Theorem~\ref{KMScrit} will be some of them. We generate some more candidates by applying \cite[Theorem~3.1]{aHLRS1} to a graph of the form $E\backslash H$. We continue to use the recursive description of the saturation $\Sigma H$ described on page~\pageref{eq:satdef}.
\begin{prop}\label{defphiv}
Suppose that $H$ is a hereditary subset of $E^0$ and $\beta>\ln\rho(A_{E^0\backslash H})$. For each $v\in E^0\backslash H$ the series $\sum_{\mu\in (E\backslash H)^*v}e^{-\beta|\mu|}$ converges with sum $y_v\geq 1$; let $y$ be the vector $(y_v)$ in $[1,\infty)^{E^0\backslash H}$. Then for each $v\in E^0\backslash H$, there is a KMS$_\beta$ state $\phi^H_v$ of $\mathcal{T} C^*(E\backslash H)$ such that
\begin{equation}\label{eq:phiv formula}
\phi_v^H(s_\mu s^*_\nu) = \delta_{\mu,\nu} e^{-\beta|\mu|} (1-e^{-\beta}A_{E^0\backslash H})^{-1}(s(\mu),v)y_v^{-1}\quad\text{ for $\mu,\nu \in (E\backslash H)^*$.}
\end{equation}
The states $\{\phi_v^H:v\in E^0\backslash H\}$ are the extremal KMS$_\beta$ states of $\mathcal{T} C^*(E\backslash H)$.
\end{prop}
\begin{proof}
Applying \cite[Theorem~3.1(a)]{aHLRS1} to $E\backslash H$ shows that the series defining $y_v$ converges. We define $\epsilon^v\in [0,\infty)^{E^0\backslash H}$ by $\epsilon^v_u=\delta_{u,v}y_v^{-1}$. Then $\epsilon^v\cdot y=1$, and the corresponding probability measure $m^v=(1-e^{-\beta}A_{E^0\backslash H})^{-1}\epsilon^v$ in \cite[Theorem~3.1(a)]{aHLRS1} has entries
\[
m^v_w=(1-e^{-\beta}A_{E^0\backslash H})^{-1}(w,v)y_v^{-1}\quad\text{for $w\in E^0\backslash H$.}
\]
Thus by \cite[Theorem~3.1(b)]{aHLRS1}, there is a KMS$_\beta$ state $\phi_v^H$ of $\mathcal{T} C^*(E\backslash H)$ satisfying \eqref{eq:phiv formula}. It follows from \cite[Theorem~3.1(c)]{aHLRS1} that the $\phi_v^H$ are the extreme points of the simplex of KMS$_\beta$ states (as observed in \cite[Remark~3.2]{aHLRS1}).
\end{proof}
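The vector $y$ can be computed in practice by iterating the self-consistency relation obtained by splitting off the first edge of each path, namely $y_v=1+e^{-\beta}\sum_w A(w,v)y_w$. The following sketch is purely illustrative: the $2\times 2$ vertex matrix (with $A(v,w)=|vE^1w|$) and the value $\beta=\ln 4$ are our own choices, not data from the text.

```python
import math

# Illustrative computation of y_v = sum over paths mu with s(mu) = v of
# e^{-beta |mu|}, via the fixed-point relation
#   y_v = 1 + e^{-beta} * sum_w A(w, v) y_w,
# for a toy matrix A with rho(A) = 3 and beta = ln 4 > ln rho(A).
A = [[2.0, 1.0], [0.0, 3.0]]
beta = math.log(4)
t = math.exp(-beta)

y = [1.0, 1.0]
for _ in range(500):  # contraction with factor e^{-beta} rho(A) = 3/4
    y = [1.0 + t * sum(A[w][v] * y[w] for w in range(2)) for v in range(2)]

# both entries lie in [1, infinity), as the proposition asserts,
# and the fixed-point equation holds to numerical accuracy
assert all(yv >= 1.0 for yv in y)
for v in range(2):
    assert abs(y[v] - (1.0 + t * sum(A[w][v] * y[w] for w in range(2)))) < 1e-9
print(y)
```

For these choices the iteration converges to $y=(2,6)$, which one can confirm by summing the two geometric series of path counts directly.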
\begin{cor}\label{defphiv2}
Let $v\in E^0$ and $\beta>0$. Suppose that there is a hereditary subset $H$ of $E^0$ such that $v\notin H$ and $\ln\rho(A_{E^0\backslash H})<\beta$. Then there is a KMS$_\beta$ state $\phi_{\beta,v}$ of $(\mathcal{T} C^*(E),\alpha)$ such that for every pair $\mu,\nu\in E^*$, we have
\begin{equation}\label{defphibetav}
\phi_{\beta,v}(s_\mu s^*_\nu)=
\begin{cases}
0&\text{if $s(\mu)E^*v=\emptyset$}\\
\delta_{\mu,\nu} \Big(e^{-\beta|\mu|}\sum_{\lambda\in s(\mu)E^*v}e^{-\beta|\lambda|}\Big)y_v^{-1}&\text{if $s(\mu)E^*v\not=\emptyset$;}
\end{cases}
\end{equation}
for every $H$ satisfying these hypotheses, we have $\phi_{\beta,v}=\phi^H_v\circ q_H$.
\end{cor}
Notice that \eqref{defphibetav} implies that the state $\phi_{\beta,v}$ does not depend on the choice of the hereditary set $H$ satisfying $v\notin H$ and $\ln\rho(A_{E^0\backslash H})<\beta$.
\begin{proof}
Proposition~\ref{defphiv} gives us a KMS$_\beta$ state $\phi^H_v$ of $(\mathcal{T} C^*(E\backslash H),\alpha)$. Because $H$ is hereditary, every path $\lambda$ in $E^*v$ lies entirely in $E\backslash H$. Thus \eqref{eq:phiv formula} implies that for every $\mu,\nu\in (E\backslash H)^*$, we have
\[
\phi_v^H(s_\mu s^*_\nu) = \delta_{\mu,\nu}\Big(e^{-\beta|\mu|}\sum_{\lambda\in s(\mu)(E\backslash H)^*v}e^{-\beta|\lambda|}\Big)y_v^{-1}=\delta_{\mu,\nu} \Big(e^{-\beta|\mu|}\sum_{\lambda\in s(\mu)E^*v}e^{-\beta|\lambda|}\Big)y_v^{-1};
\]
notice that \eqref{eq:phiv formula} is zero if $s(\mu)E^*v=\emptyset$, and in that case we need to interpret the empty sum on the right-hand side as $0$. For $\mu,\nu\in E^*$ with $s(\mu)=s(\nu)\in H$, we have $q_H(s_\mu s^*_\nu)=0$. Thus for arbitrary $\mu,\nu\in E^*$ with $s(\mu)=s(\nu)$, we have
\[
\phi_v^H\circ q_H(s_\mu s^*_\nu)=
\begin{cases}
0&\text{if $s(\mu)=s(\nu)\in H$}\\
\delta_{\mu,\nu} \Big(e^{-\beta|\mu|}\sum_{\lambda\in s(\mu)E^*v}e^{-\beta|\lambda|}\Big)y_v^{-1}&\text{if $s(\mu)=s(\nu)\in E^0\backslash H$,}
\end{cases}
\]
and $\phi_{\beta,v}:= \phi^H_v\circ q_H$ is a KMS$_\beta$ state of $(\mathcal{T} C^*(E),\alpha)$ satisfying \eqref{defphibetav}.
\end{proof}
\begin{thm}\label{thm:altogether}
Suppose that $E$ is a finite directed graph and $\beta$ is a real number, and write $\alpha$ for each of the actions of $\mathbb{R}$ obtained by lifting the gauge actions on the Toeplitz algebras and graph algebras involved. Let $H_\beta$ be the hereditary closure in $E^0$ of $\bigcup\{C \in E^0/\!\!\sim\; : \ln\rho(A_C)>\beta\}$.
\begin{enumerate}
\item\label{it:nostates} If $H_\beta = E^0$, then $(\mathcal{T} C^*(E),\alpha)$ has no KMS$_\beta$ states.
\item\label{it:nocrit} Suppose that $H_\beta \not= E^0$ and that $\beta > \ln\rho(A_{E^0\backslash H_\beta})$. For $v \in E^0\backslash H_\beta$, there is a KMS$_\beta$ state $\phi_{\beta,v}$ of $(\mathcal{T} C^*(E),\alpha)$ satisfying \eqref{defphibetav}. Then
\[
\big\{\phi_{\beta, v}: v \in E^0 \backslash H_\beta\big\}
\]
are the extreme points of the KMS$_\beta$ simplex of $(\mathcal{T} C^*(E),\alpha)$. A KMS$_\beta$ state factors through $C^*(E)$ if and only if it belongs to the convex hull of
\[
\big\{\phi_{\beta, v}: v \text{ is a source in } E \backslash \Sigma H_\beta\big\}.
\]
\item\label{it:critstates} Suppose that $H_\beta \not= E^0$ and that $\beta =
\ln\rho(A_{E^0 \backslash H_\beta})$. Let $K_\beta$ be the hereditary closure in $E^0$ of $\bigcup\{C \in E^0/\!\!\sim\; : \ln\rho(A_C)\geq\beta\}$. For $v \in E^0 \backslash K_\beta$, there is a KMS$_\beta$ state $\phi_{\beta, v}$ of $(\mathcal{T} C^*(E),\alpha)$ satisfying \eqref{defphibetav}. For $C \in \mc(E \backslash H_\beta)$, let $\psi_C^{H_\beta}$ be the KMS$_\beta$ state of $(\mathcal{T} C^*(E \backslash H_\beta),\alpha)$ obtained by applying Theorem~\ref{KMScrit}(\ref{crita}) to the graph $E\backslash H_\beta$. Then the states
\begin{equation}\label{eq:extreme points}
\big\{\psi_C:=\psi_C^{H_\beta}\circ q_{H_\beta} : C \in \mc(E\backslash H_\beta)\big\} \cup
\big\{\phi_{\beta,v}: v \in E^0 \backslash{K_\beta}\big\}
\end{equation}
are the extreme points of the KMS$_\beta$ simplex of $(\mathcal{T} C^*(E),\alpha)$. A KMS$_\beta$ state factors through $C^*(E)$ if and only if it belongs to the convex hull of
\begin{equation}\label{convhull}
\big\{\psi_C: C \in \mc(E\backslash H_\beta)\big\} \cup
\big\{\phi_{\beta, v}: v \text{ is a source in } E \backslash \Sigma K_\beta\big\}.
\end{equation}
\end{enumerate}
\end{thm}
Both $H_\beta$ and $K_\beta$ are hereditary subsets of $E^0$, and $H_\beta\subset K_\beta$. Obviously the proof of the theorem must exploit the specific nature of these two sets, but some of our arguments are more general, and we separate out some lemmas. Throughout this section, $E$ is a finite directed graph.
\begin{lem}\label{lem:extremeptsfactor}
Suppose that $I$ is an ideal in a $C^*$-algebra $A$, that $\phi_1,\dots,\phi_n$
are states of $A$, and that $\lambda_i\in (0,\infty)$ for $1\leq i\leq n$. Then $\sum_{j=1}^n \lambda_j \phi_j$ factors through
$A/I$ if and only if $\phi_i$ factors through $A/I$ for all $i$.
\end{lem}
\begin{proof}
If each $\phi_i$ factors through $A/I$, then so does every linear combination. So suppose that $\sum_j\lambda_j \phi_j$ factors through $A/I$. For a
positive element $a$ in $I$ and each $i$, we have
\[
0 = \sum_{j=1}^n \lambda_j \phi_j(a) \geq \lambda_i \phi_i(a)\geq 0,
\]
and since $\lambda_i > 0$, this forces $\phi_i(a) = 0$. Since $I$ is spanned by
its positive elements, we deduce that $\phi_i$ vanishes on $I$, and hence $\phi_i$ factors through $A/I$.
\end{proof}
\begin{lem}\label{lem:smallerC*E}
Suppose that $H \subset E^0$ is hereditary
and that $\phi$ is a KMS$_\beta$-state of $\mathcal{T} C^*(E \backslash H)$ which factors
through $C^*(E \backslash H)$. If $\phi(p_v) = 0$ for all $v \in \Sigma H \backslash H$, then the state $\phi \circ q_H$ of $\mathcal{T} C^*(E)$ factors through $C^*(E)$.
\end{lem}
\begin{proof}
The hypothesis says that there is a state $\bar\phi$ of $C^*(E\backslash H)$ such that $\phi = \bar\phi \circ \pi_{E \backslash H}$.
Let $J$ be the ideal of $C^*(E \backslash H)$ generated by $\{p_v : v \in \Sigma H
\backslash H\}$. Then \cite[Lemma~2.2]{aHLRS1} implies that $\bar{\phi}$ factors through
$C^*(E \backslash H)/J$. Theorem~4.1(b) of \cite{BPRS} implies that there is an isomorphism of $C^*(E \backslash \Sigma H)$ onto $C^*(E \backslash H)/J$ which takes $\bar{s}_e$ to $s_e +
J$. So
there is a KMS$_\beta$ state $\bar{\bar{\phi}}$ of $C^*(E \backslash \Sigma H)$ such that
$\phi = \bar{\bar{\phi}} \circ \bar{q}_{\Sigma H \backslash H} \circ \pi_{E
\backslash H}$. By considering the images of generators of $\mathcal{T} C^*(E)$, one checks that
the diagram
\begin{equation}\label{eq:CD}
\begin{tikzpicture}[>=stealth, yscale=0.6]
\node (TC*E) at (4,2) {$\mathcal{T} C^*(E)$};
\node (C*E) at (4,0) {$C^*(E)$};
\node (TC*E/H) at (0,2) {$\mathcal{T} C^*(E \backslash H)$};
\node (C*E/H) at (-4,2) {$C^*(E \backslash H)$};
\node (C*E/SH) at (-4,0) {$C^*(E \backslash \Sigma H)$};
\draw[->] (TC*E)--(C*E) node[right, midway] {\small$\pi_E$};
\draw[->] (C*E)--(C*E/SH) node[above, midway] {\small$\bar{q}_{\Sigma H}$};
\draw[->] (TC*E)--(TC*E/H) node[above, midway] {\small$q_H$};
\draw[->] (TC*E/H)--(C*E/H) node[above, midway] {\small$\pi_{E \backslash H}$};
\draw[->] (C*E/H)--(C*E/SH) node[left, midway] {\small$\bar{q}_{\Sigma H \backslash H}$};
\end{tikzpicture}\end{equation}
commutes. Thus $\phi \circ q_H=\bar{\bar{\phi}} \circ \bar{q}_{\Sigma H}\circ \pi_E$ factors through $C^*(E)$.
\end{proof}
\begin{lem}\label{relatephis}
Suppose that $E$ is a finite directed graph with vertex matrix $A$, and that $\beta>\ln\rho(A)$. Suppose that $G$ is a hereditary subset of $E^0$, and let $y^E\in [1,\infty)^{E^0}$ and $y^{E\backslash G}$ be the vectors of \cite[Theorem~3.1]{aHLRS1} for the graphs $E$ and $E\backslash G$. If $\epsilon\in [0,1]^{E^0}$ satisfies $\epsilon\cdot y^E=1$ and $\epsilon|_{G}=0$, then $\epsilon|_{E\backslash G}$ satisfies $(\epsilon|_{E\backslash G})\cdot y^{E\backslash G}=1$, and the corresponding KMS$_\beta$ states on the Toeplitz algebras satisfy $\phi_\epsilon=\phi_{\epsilon|_{E\backslash G}}\circ q_G$.
\end{lem}
\begin{proof}
For $w\in E^0\backslash G$, we have $(E\backslash G)^*w=E^*w$, and hence
\[
y^{E\backslash G}_w=\sum_{\mu\in(E\backslash G)^*w}e^{-\beta|\mu|}=\sum_{\mu\in E^*w}e^{-\beta|\mu|}=y^E_w.
\]
Thus $y^{E\backslash G}=y^E|_{E\backslash G}$, and $1=\epsilon\cdot y^E=(\epsilon|_{E\backslash G})\cdot (y^E|_{E\backslash G})=(\epsilon|_{E\backslash G})\cdot y^{E\backslash G}$.
Since $G$ is hereditary, for $v\in E^0$ we have
\[
m_v=\big((1-e^{-\beta}A)^{-1}\epsilon\big)_v=\begin{cases}((1-e^{-\beta}A_{E^0\backslash G})^{-1}\epsilon|_{E\backslash G})_v&\text{if $v\in E^0\backslash G$}\\
0&\text{if $v\in G$,}
\end{cases}
\]
and hence
\[
\phi_\epsilon(p^E_v)=\begin{cases}\phi_{\epsilon|_{E\backslash G}}(p^{E\backslash G}_v)&\text{if $v\in E^0\backslash G$}\\
0&\text{if $v\in G$.}
\end{cases}
\]
Thus $\phi_\epsilon$ and $\phi_{\epsilon|_{E\backslash G}}\circ q_G$ agree on the vertex projections $\{p_v\}$ in $\mathcal{T} C^*(E)$, and since both are KMS$_\beta$ states, \cite[Proposition~2.1(a)]{aHLRS1} implies that they are equal.
\end{proof}
\begin{lem}\label{factorthruC*}
Suppose that $H$ is a hereditary subset of $E^0$ and $\beta>\ln \rho(A_{E^0\backslash H})$. Let $v\in E^0\backslash H$, and let $\phi^H_v$ be the state of $\mathcal{T} C^*(E\backslash H)$ described in Proposition~\ref{defphiv}. Then $\phi^H_v\circ q_H$ factors through $C^*(E)$ if and only if $v$ is a source in $E\backslash\Sigma H$.
\end{lem}
\begin{proof}
Suppose that $v$ is a source in $E\backslash\Sigma H$. Then $v$ must be a source in $E$: otherwise, we have $s(r^{-1}(v))\subset \Sigma H$, and saturation implies that $v\in \Sigma H$. In particular, $v$ is a source in $E\backslash H$, and \cite[Corollary~6.1(a)]{aHLRS1} implies that $\phi^H_v$ factors through $C^*(E\backslash H)$. With a view to applying Lemma~\ref{lem:smallerC*E}, we take $w\in \Sigma H\backslash H$. Since $\Sigma H$ is hereditary, $wE^nv=\emptyset$ for all $n$, and \eqref{eq:inverse series} implies that $(1-e^{-\beta}A_{E^0\backslash H})^{-1}(w,v)=0$. Thus
\[
\phi^H_v(p_w)=(1-e^{-\beta}A_{E^0\backslash H})^{-1}(w,v)y_v^{-1}=0,
\]
and Lemma~\ref{lem:smallerC*E} implies that $\phi^H_v\circ q_H$ factors through $C^*(E)$.
Now suppose that $\phi^H_v \circ q_H$ factors through $C^*(E)$. Since
$\phi^H_v \circ q_H(p_w)=0$ for $w \in H$,
Lemma~\ref{extend=}(\ref{extendb}) implies that $\phi^H_v \circ q_H$ vanishes on $\{p_w : w \in \Sigma H\}$. Thus it follows from \cite[Lemma~2.2]{aHLRS1} that $\phi^H_v \circ q_H$ factors through $C^*(E \backslash \Sigma H)$. Since $\phi^H_v(p_v) \not= 0$, we deduce that $v \in E^0 \backslash \Sigma H$. Thus $\epsilon^v_w = (y^{E \backslash H}_v)^{-1} \delta_{v,w}$ vanishes for $w$ in the hereditary set $\Sigma H$, and Lemma~\ref{relatephis} implies that $\phi^H_v=\phi_{\epsilon^v|_{E \backslash \Sigma H}} \circ q_{\Sigma H\backslash H}$. Since $\phi^H_v \circ q_H$ factors through $C^*(E)$, for $w \in E^0 \backslash \Sigma H$ we have
\begin{align*}
\phi_{\epsilon^v|_{E \backslash \Sigma H}}\Big(p_w - \sum_{e \in w(E \backslash \Sigma H)^1} s_e s^*_e\Big)
&= \phi_{\epsilon^v|_{E \backslash \Sigma H}} \circ q_{\Sigma H \backslash H}
\Big(p_w - \sum_{e \in w(E \backslash H)^1} s_e s^*_e\Big) \\
&= \phi^H_v \circ q_H\Big(p_w - \sum_{e \in wE^1} s_e s^*_e\Big) = 0.
\end{align*}
Applying \cite[Lemma~2.2]{aHLRS1} to $E\backslash \Sigma H$ shows that
$\phi_{\epsilon^v|_{E \backslash \Sigma H}}$ factors through $C^*(E \backslash \Sigma
H)$. We have $\beta > \ln\rho(A_{E^0 \backslash H})$, so Corollary~6.1(a) of \cite{aHLRS1}
implies that $\epsilon^v|_{E \backslash \Sigma H}$ is supported on the sources of $E
\backslash \Sigma H$, and hence $v$ is a source in $E \backslash \Sigma H$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:altogether}]
(\ref{it:nostates}) We suppose that $\mathcal{T} C^*(E)$ has a KMS$_\beta$ state $\phi$, and
prove that $H_\beta \not= E^0$. The set $\bigcup\{C \in {E^0/\!\!\sim} :
\ln\rho(A_C)>\beta\}$ generates $H_\beta$ as a hereditary set, and so
Proposition~\ref{old2.5} implies that $\phi(p_w) = 0$ for all $w \in H_\beta$. Hence
$1 = \phi(1) = \sum_{v \not\in H_\beta} \phi(p_v)$, and $H_\beta$ cannot be all of
$E^0$.
(\ref{it:nocrit}) Applying Corollary~\ref{defphiv2} with $H=H_\beta$ gives the existence of the state $\phi_{\beta,v}$, and the last comment in Corollary~\ref{defphiv2} implies that $\phi_{\beta,v}=\phi^{H_\beta}_v\circ q_{H_\beta}$. We can apply Proposition~\ref{defphiv} with $H=H_\beta$, and deduce that the states $\phi^{H_\beta}_v$ for $v \notin H_\beta$ are the extreme points of the KMS$_\beta$ simplex of $\mathcal{T} C^*(E \backslash H_\beta)$. Since $H_\beta$ is not all of $E^0$, the final statement of Proposition~\ref{old2.5} implies that $q^*_{H_\beta}$ is an isomorphism of the KMS$_\beta$ simplex of $\mathcal{T} C^*(E \backslash H_\beta)$ onto that of
$\mathcal{T} C^*(E)$. Hence the states $\phi_{\beta,v}=\phi^{H_\beta}_v \circ q_{H_\beta}$ are the extreme points of the KMS$_\beta$ simplex of $\mathcal{T} C^*(E)$.
Lemma~\ref{factorthruC*} implies that $\phi_{\beta,v}=\phi^{H_\beta}_v\circ q_{H_\beta}$ factors through $C^*(E)$ if and only if $v$ is a source in $E \backslash \Sigma H_\beta$. So Lemma~\ref{lem:extremeptsfactor} implies that a KMS$_\beta$ state $\phi$ factors through $C^*(E)$ if and only if it belongs to the convex hull of $\{\phi_{\beta,v}: v \text{ is a source in } E \backslash \Sigma H_\beta\}$.
(\ref{it:critstates}) We can apply Corollary~\ref{defphiv2} with $H=K_\beta$ to get the state $\phi_{\beta,v}=\phi^{K_\beta}_v\circ q_{K_\beta}$. As in \eqref{it:nocrit}, $q^*_{H_\beta}$ is an isomorphism of the KMS$_\beta$ simplex of $\mathcal{T} C^*(E \backslash H_\beta)$ onto that of $\mathcal{T} C^*(E)$. Since $\beta = \ln\rho(A_{E^0 \backslash H_\beta})$ is real, $\rho(A_{E^0 \backslash H_\beta})$ cannot be $0$, and \cite[Lemma~A.1(b)]{aHLRS1} implies that $E \backslash H_\beta$ has at least one cycle. The set $K_\beta\backslash H_\beta$ is generated as a hereditary subset of $E^0\backslash H_\beta$ by
the minimal critical components of $E \backslash H_\beta$, and hence is the set $H$ in Theorem~\ref{KMScrit} for the graph $E \backslash H_\beta$. Thus Corollary~\ref{cor:state
decomp} implies that the KMS$_\beta$ states of
$\mathcal{T} C^*(E \backslash H_\beta)$ have the form $\phi_{r, \epsilon, t}$, and that
the extreme points are the ones of the form $\phi_{1, \epsilon^v, t} = \phi^{K_\beta\backslash H_\beta}_v \circ q_{K_\beta\backslash H_\beta}$ or $\phi_{0, \epsilon, \delta_C} = \psi^{H_\beta}_C$.
Proposition~\ref{quotmapH} implies that $q_{K_\beta\backslash H_\beta} \circ q_{H_\beta} = q_{K_\beta}$, so that $\phi^{K_\beta\backslash H_\beta}_v \circ q_{K_\beta\backslash H_\beta}\circ q_{H_\beta}=\phi^{K_\beta}_v\circ q_{K_\beta}=\phi_{\beta,v}$. Thus the KMS$_\beta$ simplex of $\mathcal{T} C^*(E)$ is the convex hull of the set~\eqref{eq:extreme points}.
It remains to show that a convex combination of the states~\eqref{eq:extreme points} factors through $C^*(E)$ if and only if it belongs to the convex hull of the set~\eqref{convhull}. Lemma~\ref{factorthruC*} implies that $\phi^{K_\beta}_v
\circ q_{K_\beta}$ factors through $C^*(E)$ if and only if $v$ is a
source in $E \backslash \Sigma K_\beta$. We claim that the $\psi^{H_\beta}_C
\circ q_{H_\beta}$ all factor through $C^*(E)$. To see this, fix $C \in \mc(E
\backslash H_\beta)$. Theorem~\ref{KMScrit}(\ref{crita}) implies that $\psi^{H_\beta}_C$ factors through $C^*(E \backslash H_\beta)$. We have $v E^n C \not= \emptyset$ for all $v \in C$ and $n \in \mathbb{N}$ because $C$ is a nontrivial strongly connected component. Since $C \cap H_\beta = \emptyset$, we deduce that $C$ does not intersect any of the sets $S_k
H_\beta$ of~\eqref{eq:satdef}, and hence $C \cap \Sigma H_\beta =
\emptyset$. Then because $\Sigma H_\beta$ is hereditary, we have $w E^* C = \emptyset$ for all $w \in \Sigma H_\beta$. Hence~\eqref{eq:zCseries} implies that $z^C_w = 0$ for all $w \in \Sigma H_\beta \backslash H_\beta$, and so \eqref{formphiC} implies that $\psi^{H_\beta}_C(p_w) = 0$ for all $w \in \Sigma H_\beta
\backslash H_\beta$. Now Lemma~\ref{lem:smallerC*E} implies that $\psi_C=\psi^{H_\beta}_C \circ q_{H_\beta}$
factors through $C^*(E)$.
\end{proof}
Theorem~\ref{thm:altogether} describes the KMS$_\beta$ simplex for each fixed $\beta$. However, it also makes sense to fix a vertex $v$, and ask for which $\beta$ there is a state $\phi_{\beta,v}$ of $(\mathcal{T} C^*(E),\alpha)$ as in Corollary~\ref{defphiv2}.
\begin{cor}\label{betav}
Suppose that $E$ is a finite directed graph and $v\in E^0$. Define
\[
\beta_v:=\max\{\ln\rho(A_C):C\leq v\}.
\]
Then there is a state $\phi_{\beta,v}$ satisfying \eqref{defphibetav} if and only if $\beta>\beta_v$.
\end{cor}
\begin{proof}
First suppose that there exists such a state $\phi_{\beta,v}$. Then there is a hereditary set $H$ such that $v\notin H$ and $\ln\rho(A_{E^0\backslash H})<\beta$. But then any $C$ with $C\leq v$ lies in $E^0\backslash H$, and $\ln\rho(A_C)\leq \ln\rho(A_{E^0\backslash H})<\beta$. Thus $\beta_v=\max\{\ln\rho(A_C):C\leq v\}<\beta$. Conversely, suppose that $\beta>\beta_v$. Then the hereditary closure $K_\beta$ of $\{C:\ln\rho(A_C)\geq \beta\}$ does not contain $v$: for if so, then there exists $C\in \{C:\ln\rho(A_C)\geq \beta\}$ with $C\leq v$, and we have $\beta_v\geq \beta$. Thus we can apply Corollary~\ref{defphiv2} with $H=K_\beta$ to deduce the existence of $\phi_{\beta,v}$.
\end{proof}
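The number $\beta_v$ can be computed mechanically from the vertex matrix. The sketch below is our own illustration (the matrix is a toy choice), and we assume that $C\leq v$ means that some path with source $v$ has range in $C$; this is consistent with the behaviour of the examples in the next section.

```python
import math

# Illustrative computation of beta_v = max{ln rho(A_C) : C <= v} for a toy
# graph with vertices 0, 1: A(i, j) = |iE^1 j|, so A[i][j] > 0 means there is
# an edge with range i and source j.  (Assumption: "C <= v" means some path
# with source v reaches C.)
A = [[2, 0], [1, 3]]           # two loops at 0, three loops at 1, one edge 0 -> 1
components = [[0], [1]]        # both strongly connected components are singletons

def reaches(v, C):
    # breadth-first search along edges from source to range: j -> i when A[i][j] > 0
    seen, frontier = {v}, [v]
    while frontier:
        j = frontier.pop()
        for i in range(len(A)):
            if A[i][j] > 0 and i not in seen:
                seen.add(i)
                frontier.append(i)
    return any(u in seen for u in C)

def beta(v):
    # each block A_C here is 1x1, so rho(A_C) is just the loop count
    return max(math.log(A[u][u]) for C in components if reaches(v, C) for u in C)

print(beta(0), beta(1))
```

For this toy graph, vertex $0$ sees both components, so $\beta_0=\ln 3$; vertex $1$ sees only itself, and $\beta_1=\ln 3$ as well.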
\section{Examples}\label{sec:exs}
We give some examples to show how we can use Theorem~\ref{thm:altogether} to compute all the KMS states on $\mathcal{T} C^*(E)$ and $C^*(E)$. Since we want to focus on how the different components of $E$ interact, we consider graphs in which the components are small.
\begin{ex}\label{dumbbell1}
The following graph $E$
\[
\begin{tikzpicture}
\node[inner sep=1pt] (v) at (0,0) {$v$};
\node[inner sep=1pt] (w) at (2,0) {$w$};
\draw[-latex] (v)--(w);
\foreach \x in {0,2} {
\draw[-latex] (v) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (v);
}
\foreach \x in {0,2,4} {
\draw[-latex] (w) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (w);
}
\end{tikzpicture}
\]
has two strongly connected components $\{v\}$ and $\{w\}$. Both are nontrivial components, with $A_{\{v\}}=(2)$, $A_{\{w\}}=(3)$ and $\rho(A)=3$.
\begin{itemize}
\item For $\beta>\ln\rho(A)=\ln 3$, the set $H_\beta$ of Theorem~\ref{thm:altogether} is empty, and Theorem~\ref{thm:altogether}\eqref{it:nocrit} gives a $1$-dimensional simplex of KMS$_{\beta}$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points $\phi_{\beta,v}$ and $\phi_{\beta,w}$. None of these factor through $C^*(E)$.
\item At $\beta=\ln 3$, $H_{\beta}$ is still empty, but $K_{\ln 3}$ is the hereditary closure of $\{w\}$, which is all of $E^0$. The only critical component is $\{w\}$, and hence Theorem~\ref{thm:altogether}\eqref{it:critstates} gives a unique KMS$_{\ln 3}$ state $\psi_{\{w\}}$ which factors through $C^*(E)$.
\item For $\beta<\ln 3$, $H_\beta=E^0$, and $(\mathcal{T} C^*(E),\alpha)$ has no KMS$_\beta$ states.
\end{itemize}
\end{ex}
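For this example the vertex matrix (with vertices ordered $(v,w)$ and $A(x,y)=|xE^1y|$) is lower triangular, so the computation of $\rho(A)$ can be checked directly; a brief illustrative sketch:

```python
import math

# Vertex matrix of the example above: A(v, v) = 2 loops at v, A(w, w) = 3 loops
# at w, and A(w, v) = 1 for the edge from v to w (range w, source v).
A = [[2.0, 0.0], [1.0, 3.0]]
eigenvalues = [A[0][0], A[1][1]]   # triangular, so the eigenvalues are the diagonal entries
rho = max(eigenvalues)
assert rho == 3.0
# KMS_beta states of T C*(E) exist precisely for beta >= ln rho(A) = ln 3
print(math.log(rho))
```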
\begin{ex}\label{dumbbell2}
Reversing the horizontal arrow in the previous example makes a big difference. The graph $E$ now looks like
\[
\begin{tikzpicture}
\node[inner sep=1pt] (v) at (0,0) {$v$};
\node[inner sep=1pt] (w) at (2,0) {$w$};
\draw[-latex] (w)--(v);
\foreach \x in {0,2} {
\draw[-latex] (v) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (v);
}
\foreach \x in {0,2,4} {
\draw[-latex] (w) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (w);
}
\end{tikzpicture}
\]
The strongly connected components are still $\{v\}$ and $\{w\}$, but now the minimal critical component $\{w\}$ is hereditary.
\begin{itemize}
\item For $\beta>\ln 3=\ln\rho(A)$, $H_\beta=\emptyset$, and Theorem~\ref{thm:altogether}\eqref{it:nocrit} gives a $1$-dimensional simplex of KMS$_{\beta}$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points $\phi_{\beta,v}$ and $\phi_{\beta,w}$. None of these factor through $C^*(E)$.
\item For $\beta=\ln 3$, we have $H_\beta=\emptyset$ and $K_\beta=\{w\}$. Theorem~\ref{thm:altogether}\eqref{it:critstates} gives a $1$-dimensional simplex of KMS$_{\ln 3}$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points $\phi_{\ln 3,v}$ and $\psi_{\{w\}}$, and only $\psi_{\{w\}}$ factors through $C^*(E)$. (We work out a formula for $\psi_{\{w\}}$ at the end of this example.)
\item For $\ln 2<\beta<\ln 3$, $H_\beta=\{w\}$, and Theorem~\ref{thm:altogether}\eqref{it:nocrit} gives a single KMS$_\beta$ state $\phi_{\beta, v}$ on $(\mathcal{T} C^*(E),\alpha)$, which does not factor through $C^*(E)$.
\item For $\beta=\ln 2$, $H_\beta=\{w\}$ and $K_\beta=\{v,w\}=E^0$. The graph $E\backslash H_\beta$ has a single critical component $\{v\}$, and Theorem~\ref{thm:altogether}\eqref{it:critstates} gives a unique KMS$_{\ln 2}$ state $\psi_{\{v\}}$ on $(\mathcal{T} C^*(E),\alpha)$. This state factors through $C^*(E)$.
\item For $\beta<\ln 2$, there are no KMS$_\beta$ states.
\end{itemize}
We can make the construction of these states quite explicit. We illustrate by working through the construction of the KMS$_{\ln 3}$ state $\psi_{\{w\}}$. The unimodular Perron-Frobenius eigenvector for the matrix $A_{\{w\}}=(3)$ is the scalar $x^{\{w\}}_w=1$, and the vector $z^{\{w\}}$ in \eqref{defz} is the scalar
\[
z^{\{w\}}_v=\rho(A)^{-1}\big(1-\rho(A)^{-1}A_{\{v\}}\big)^{-1}A(v,w)x^{\{w\}}_w=3^{-1}(1-3^{-1}\cdot 2)^{-1}\cdot 1\cdot 1=3^{-1}\cdot 3=1.
\]
Thus $\|1+z^{\{w\}}\|_1=2$, $\psi_{\{w\}}(p_v)=\psi_{\{w\}}(p_w)=2^{-1}$, and
\[
\psi_{\{w\}}(s_\mu s_\nu^*)=\delta_{\mu,\nu}3^{-|\mu|}2^{-1}\quad\text{for $\mu,\nu\in E^*$.}
\]
\end{ex}
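As a quick sanity check (not part of the argument), the scalar computation in the example above can be reproduced in a few lines of Python; the variable names mirror the notation of the example.

```python
# Numerical check of the KMS_{ln 3} computation for the graph with
# A_{v} = (2), A_{w} = (3), A(v,w) = 1 and x^{w}_w = 1.
rho_A = 3.0                     # spectral radius of A, attained on {w}
A_vv, A_vw, x_w = 2.0, 1.0, 1.0

# z^{w}_v = rho(A)^{-1} (1 - rho(A)^{-1} A_{v})^{-1} A(v,w) x^{w}_w
z_v = rho_A**-1 * (1 - rho_A**-1 * A_vv)**-1 * A_vw * x_w

norm1 = x_w + z_v               # ||1 + z^{w}||_1 (both entries positive)
psi_pv = z_v / norm1            # psi_{w}(p_v)
psi_pw = x_w / norm1            # psi_{w}(p_w)

assert abs(z_v - 1.0) < 1e-12
assert abs(psi_pv - 0.5) < 1e-12 and abs(psi_pw - 0.5) < 1e-12
```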
\begin{ex}\label{ex:2vertexcomponent}
We now replace the component $\{w\}$ with a 2-vertex component whose critical inverse
temperature still exceeds that of the component $\{v\}$. This gives an example in which the KMS$_\beta$ simplex changes dimension both as $\beta$ decreases to $\ln \rho(A)$, and as $\beta$ passes through $\ln \rho(A)$.
\[
\begin{tikzpicture}
\node[inner sep=1pt] (v) at (0,0) {$v$};
\node[inner sep=1pt] (w) at (2,0) {$w$};
\node[inner sep=1pt] (u) at (2,1) {$u$};
\draw[-latex] (w)--(v);
\foreach \x/\xx in {0/3,2/5} {
\draw[-latex] (v) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (v);
\draw[-latex] (w) .. controls +(-0.\xx,0.5) .. (u);
\draw[-latex] (u) .. controls +(0.\xx,-0.5) .. (w);
\draw[-latex] (u) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (u);
}
\end{tikzpicture}
\]
The strongly connected components are $\{v\}$ and $\{w,u\}$. The block corresponding to the latter is $A_{\{w,u\}} =
\big(\begin{smallmatrix}2&2\\2&0\end{smallmatrix}\big)$, which has spectral radius $\rho(A_{\{w,u\}})=\gamma := 1 + \sqrt5$. Since $\rho(A)=\max\{\rho(A_{\{v\}}),\rho(A_{\{w,u\}})\}=\gamma$, $\{w,u\}$ is a minimal critical component.
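The value of $\gamma$ can be confirmed directly from the characteristic polynomial of the $2\times 2$ block; a small sketch in Python, using only the quadratic formula:

```python
# Check that rho(A_{w,u}) = 1 + sqrt(5) for the block [[2,2],[2,0]].
import math

a, b, c, d = 2, 2, 2, 0
tr, det = a + d, a * d - b * c        # trace 2, determinant -4
disc = math.sqrt(tr * tr - 4 * det)   # discriminant of x^2 - 2x - 4
eigs = [(tr + disc) / 2, (tr - disc) / 2]
gamma = max(abs(e) for e in eigs)

assert abs(gamma - (1 + math.sqrt(5))) < 1e-12
assert gamma > 2                      # exceeds rho(A_{v}) = 2, so {w,u} is critical
```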
\begin{itemize}
\item For $\beta>\ln \gamma$, $H_\beta=\emptyset$, and
Theorem~\ref{thm:altogether}\eqref{it:nocrit} gives a $2$-dimensional simplex of
KMS$_{\beta}$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points $\phi_{\beta,
v}$, $\phi_{\beta, w}$ and $\phi_{\beta,u}$. None of these factor through
$C^*(E)$.
\item For $\beta = \ln\gamma$, we have $H_\beta = \emptyset$ and $K_\beta =
\{w,u\}$.
Theorem~\ref{thm:altogether}\eqref{it:critstates} gives a $1$-dimensional simplex
of KMS$_{\ln\gamma}$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points
$\phi_{\ln\gamma,v}$ and $\psi_{\{w,u\}}$. Only $\psi_{\{w,u\}}$ factors through
$C^*(E)$.
\item For $0<\beta<\ln\gamma$, we have $\{w,u\} \subseteq H_\beta$, and so the KMS$_\beta$ simplex is similar to that of Example~\ref{dumbbell2}. In particular, the
dimension of the KMS$_\beta$ simplex drops again to 0 as $\beta$ drops below
$\ln \gamma$, and disappears altogether for $\beta<\ln 2$.
\end{itemize}
\end{ex}
\begin{ex}\label{EminusGsourced} In the next graph $E$, we have added two trivial components, and now the subtleties involving saturations in Theorem~\ref{thm:altogether} come into play.
\[
\begin{tikzpicture}
\node[inner sep=1pt] (u_1) at (-2,0) {$u_1$};
\node[inner sep=1pt] (u_2) at (2,0) {$u_2$};
\node[inner sep=1pt] (v) at (0,0) {$v$};
\node[inner sep=1pt] (w) at (4,0) {$w$};
\draw[-latex] (w)--(u_2);
\draw[-latex] (u_2)--(v);
\draw[-latex] (u_1)--(v);
\foreach \x in {0,2} {\draw[-latex] (v) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (v);}
\foreach \x in {0,2,4} {
\draw[-latex] (w) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (w);
}
\end{tikzpicture}
\]
\begin{itemize}
\item For $\beta>\ln 3$, we have $H_\beta=\emptyset$, and Theorem~\ref{thm:altogether}\eqref{it:nocrit} gives us a $3$-dimensional simplex of KMS$_\beta$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points $\phi_{\beta,v}$, $\phi_{\beta,w}$, $\phi_{\beta,u_1}$ and $\phi_{\beta,u_2}$. The state $\phi_{\beta,u_1}$ factors through $C^*(E)$.
\item At $\beta=\ln 3$, we have $H_\beta=\emptyset$ and $K_\beta=\{w\}$. Theorem~\ref{thm:altogether}\eqref{it:critstates} gives us a $3$-dimensional simplex of KMS$_{\ln 3}$ states, with extreme points $\phi_{\ln 3,v}$, $\phi_{\ln 3,u_1}$ and $\phi_{\ln 3,u_2}$ alongside the state $\psi_{\{w\}}$ associated to the critical component $\{w\}$ in $K_\beta$. Now $\Sigma K_\beta=\{u_2, w\}$, and the vertex $u_1$ is a source in $E\backslash \Sigma K_\beta$. Thus both $\psi_{\{w\}}$ and $\phi_{\ln 3,u_1}$ factor through KMS$_{\ln 3}$ states of $(C^*(E),\alpha)$.
\item For $\ln 2<\beta<\ln 3$, we have $H_\beta=\{w\}$, and Theorem~\ref{thm:altogether}\eqref{it:nocrit} gives us a $2$-dimensional simplex of KMS$_\beta$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points $\phi_{\beta,v}$, $\phi_{\beta,u_1}$ and $\phi_{\beta,u_2}$. Since $\Sigma H_\beta=\{u_2,w\}$, only the state $\phi_{\beta, u_1}$ factors through $C^*(E)$.
\item For $\beta=\ln 2$, we have $H_\beta=\{w\}$ and $K_\beta=E^0$. The only critical component in $E\backslash H_\beta$ is $\{v\}$, and hence Theorem~\ref{thm:altogether}\eqref{it:critstates} implies that $(\mathcal{T} C^*(E),\alpha)$ has a unique KMS$_{\ln 2}$ state $\psi_{\{v\}}$, and that this state factors through $C^*(E)$.
\item For $\beta<\ln 2$, the hereditary closure of $H_\beta=\{v,w\}$ is all of $E^0$, and $(\mathcal{T} C^*(E),\alpha)$ has no KMS$_\beta$ states.
\end{itemize}
\end{ex}
\begin{ex}\label{ex4}
Our next graph $E$ is the one from Example~\ref{EminusGsourced} with the edge between $u_1$ and $v$ reversed.
\[
\begin{tikzpicture}
\node[inner sep=1pt] (u_1) at (-2,0) {$u_1$};
\node[inner sep=1pt] (u_2) at (2,0) {$u_2$};
\node[inner sep=1pt] (v) at (0,0) {$v$};
\node[inner sep=1pt] (w) at (4,0) {$w$};
\draw[-latex] (w)--(u_2);
\draw[-latex] (u_2)--(v);
\draw[-latex] (v)--(u_1);
\foreach \x in {0,2} {\draw[-latex] (v) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (v);}
\foreach \x in {0,2,4} {
\draw[-latex] (w) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (w);
}
\end{tikzpicture}
\]
\begin{itemize}
\item For $\beta> \ln 3$ and $\beta=\ln 3$, we still have a $3$-dimensional simplex of KMS$_{\beta}$ states on $(\mathcal{T} C^*(E),\alpha)$. However, for this graph $u_1$ is not a source in $E\backslash \Sigma K_{\ln 3}=E^0\backslash \{u_2,w\}$, and only the KMS$_{\ln 3}$ state $\psi_{\{w\}}$ factors through $C^*(E)$.
\item For $\ln 2<\beta<\ln 3$, we still have $H_\beta=\{w\}$ and a 2-dimensional simplex of KMS$_\beta$ states. For this graph, none of these KMS states factors through $C^*(E)$.
\item At $\beta=\ln 2$, $K_{\beta}=\{v,u_2,w\}$, and we have a $1$-dimensional simplex of KMS$_{\ln 2}$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points $\psi_{\{u_2,w\}}$ and $\phi_{\ln 2,u_1}$. The state $\psi_{\{u_2,w\}}$ factors through $C^*(E)$.
\item For $\beta<\ln 2$, we have $H_\beta=\{v, u_2,w\}$, and a single KMS$_\beta$ state $\phi_{\beta,u_1}$ on $(\mathcal{T} C^*(E),\alpha)$. Since $\Sigma H_\beta$ is all of $E^0$, this state does not factor through $C^*(E)$.
\end{itemize}
\end{ex}
\begin{ex}\label{ex:persistentsource}
We now add a source $u_3$ to the graph of Example~\ref{ex4}.
\[
\begin{tikzpicture}
\node[inner sep=1pt] (u_3) at (-4,0) {$u_3$};
\node[inner sep=1pt] (u_1) at (-2,0) {$u_1$};
\node[inner sep=1pt] (u_2) at (2,0) {$u_2$};
\node[inner sep=1pt] (v) at (0,0) {$v$};
\node[inner sep=1pt] (w) at (4,0) {$w$};
\draw[-latex] (u_3)--(u_1);
\draw[-latex] (w)--(u_2);
\draw[-latex] (u_2)--(v);
\draw[-latex] (v)--(u_1);
\foreach \x in {0,2} {\draw[-latex] (v) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (v);}
\foreach \x in {0,2,4} {
\draw[-latex] (w) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (w);
}
\end{tikzpicture}
\]
The vertex $u_3$ belongs to the complement of $H_\beta$ and of $K_\beta$ for all $\beta$.
So at every $\beta$, the new vertex $u_3$ gives an extreme point
$\phi_{\beta, u_3}$ of the KMS$_\beta$ simplex of $\mathcal{T} C^*(E)$, and this state factors through $C^*(E)$.
\end{ex}
\begin{ex}
The following graph $E$
\[
\begin{tikzpicture}
\node[inner sep=1pt] (u) at (0,0) {$u$};
\node[inner sep=1pt] (v) at (2,1) {$v$};
\node[inner sep=1pt] (w) at (2,-1) {$w$};
\node[inner sep=1pt] (x) at (4,0) {$x$};
\draw[-latex] (x)--(v);
\draw[-latex] (x)--(w);
\draw[-latex] (w)--(u);
\draw[-latex] (v)--(u);
\foreach \x in {0,2} {
\draw[-latex] (x) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (x);
}
\foreach \x in {0,2} {
\draw[-latex] (v) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (v);
}
\foreach \x in {0,2} {
\draw[-latex] (w) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (w);
}
\foreach \x in {0} {
\draw[-latex] (u) .. controls +(1.\x,1.\x) and +(-1.\x,1.\x) .. (u);
}
\end{tikzpicture}
\]
has three components $C$ with $\rho(A)=\rho(A_C)$, but only $\{v\}$ and $\{w\}$ are minimal.
\begin{itemize}
\item For $\beta>\ln 2$, we have a $3$-dimensional simplex of KMS$_\beta$ states on $(\mathcal{T} C^*(E),\alpha)$, and none of them factor through $C^*(E)$.
\item At $\beta=\ln 2$, we have a $2$-dimensional simplex of KMS$_{\ln 2}$ states on $(\mathcal{T} C^*(E),\alpha)$ with extreme points $\psi_{\{v\}}$, $\psi_{\{w\}}$ and $\phi_{\ln 2, u}$. Of these, only $\psi_{\{v\}}$ and $\psi_{\{w\}}$ factor through $C^*(E)$.
\item For $0\leq \beta<\ln 2$, there is a unique KMS$_\beta$ state on $\mathcal{T} C^*(E)$, which only factors through $C^*(E)$ when $\beta=0$. This KMS$_0$ state is the invariant trace on $C^*(E)$ that is obtained by lifting the trace on $C^*(E\backslash \Sigma H)\cong C(\mathbb{T})$ given by integration against Haar measure on~$\mathbb{T}$.
\end{itemize}
\end{ex}
\section{Concluding Remarks}\label{CR}
\subsection{Critical inverse temperatures}\label{CIT}
We say that $\beta$ is a \emph{critical inverse temperature} if $H_\beta\not= E^0$ and $\beta=\ln\rho(A_{E^0\backslash H_\beta})$. Theorem~\ref{thm:altogether}\eqref{it:critstates} says that these are precisely the inverse temperatures at which we have states of the form $\psi_C$, and that these states factor through $C^*(E)$; for all but the smallest critical $\beta$, we also have states of the form $\phi_{\beta,v}$.
Every critical inverse temperature $\beta$ has the form $\ln\rho(A_C)$ for some component $C$, but as our examples show, not every $\ln\rho(A_C)$ need be critical. (For example, $\beta=\ln 2$ in Example~\ref{dumbbell1}.) So to find the critical $\beta$ for a given finite graph $E$, we compute the numbers $\beta=\ln\rho(A_C)$, identify the sets $H_\beta$ by looking at the graph, and discard the numbers which are not critical. For a fixed graph the set of critical inverse temperatures is always finite (with cardinality bounded by $|E^0|$), but as $E$ varies it can be arbitrarily large.
Since there are finitely many critical values, we can list them in increasing order. Then for $\beta$ between two consecutive critical values, say $\beta\in(\beta_C,\beta_D)$, Theorem~\ref{thm:altogether}\eqref{it:nocrit} gives a simplex of KMS$_\beta$ states with extreme points $\{\phi_{\beta,v}:v\in E^0\backslash H_\beta\}$.
For the Toeplitz algebra, the range of possible inverse temperatures $\beta$ is either $\mathbb{R}$ (if $E$ has a source which does not talk to any nontrivial component $C$) or $[\beta_l,\infty)$, where $\beta_l$ is the smallest critical inverse temperature. But for the graph algebra $C^*(E)$ there are interesting number-theoretic restrictions on the possible values of critical $\beta$ and thus on the range of possible inverse temperatures. We use results of Lind \cite{Lin}, and refer to the treatment in \cite[\S11.1]{Lin-Mar}.
Suppose that $E$ is a directed graph without sources and suppose that $C^*(E)$ has a KMS$_\beta$ state. Then $\beta=\ln\rho(A_C)$ for some strongly connected component $C$ of $E$. Since $\rho(A_C)$ is the Perron-Frobenius eigenvalue of $A_C$, it is a root of the characteristic polynomial $\det(x1-A_C)$, which is a monic polynomial of degree $|C|$ with integer coefficients. Thus $\rho(A_C)$ is an algebraic integer. For each algebraic integer $\lambda$ there is a unique \emph{minimal polynomial} $q_\lambda(x)\in \mathbb{Q}[x]$ that is monic, irreducible and has $q_\lambda(\lambda)=0$ \cite[Proposition~6.1.7]{IR}; the other roots of this polynomial are called the \emph{conjugates} of $\lambda$. A \emph{Perron number} is an algebraic integer $\lambda \geq 1$ that is strictly larger than the absolute value of all its other conjugates.
\begin{prop}\label{excPerron}
Suppose that $\beta> 0$. Then $e^{p\beta}$ is a Perron number for some $p\in \mathbb{N}$ if and only if there exists a graph $E$ without sources such that the gauge dynamics on $C^*(E)$ has a KMS$_\beta$ state.
\end{prop}
\begin{proof} Let $E$ be a graph such that $C^*(E)$ has a KMS$_\beta$ state, and choose a component $C$ such that $\beta=\ln\rho(A_C)$, as above. Let $p$ be the period of the irreducible matrix $A_C$; then $(e^{\beta})^p=e^{p\beta}$ is a Perron number by the implication
$(1)\Longrightarrow(3)$ of \cite[Theorem 11.1.5]{Lin-Mar}. Conversely, if $e^{p\beta}$ is a Perron number for some $p\in \mathbb{N}$, the implication $(3)\Longrightarrow (1)$ of the same theorem gives the existence of a nonnegative integer matrix $A$ with spectral radius~$e^\beta$. Thus for the graph $E$ with vertex matrix $A$, $C^*(E)$ has a KMS$_\beta$ state.
\end{proof}
It is easy to produce Perron numbers and also algebraic integers $\lambda \geq 1$ that are not Perron numbers. For example, $(5-\sqrt 5)/2$ is an algebraic integer with minimal polynomial $x^2-5x+5$, and hence the conjugates are $(5\pm \sqrt 5)/2$. Since the conjugate of $\big((5-\sqrt 5)/2\big)^p$ is $\big((5+\sqrt 5)/2\big)^p$, which is strictly larger, no power of $(5-\sqrt 5)/2$ is a Perron number, and Proposition~\ref{excPerron} implies that there is no graph $E$ without sources such that $C^*(E)$ has a KMS state with inverse temperature $\ln((5-\sqrt 5)/2)$. Note that $x^2-5x+5$ is the characteristic polynomial of
\[
A=\begin{pmatrix}3&1\\1&2
\end{pmatrix},
\]
which is the vertex matrix of a graph with two vertices.
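The claims about this example are easy to verify mechanically; a short sketch in Python:

```python
# (5 - sqrt(5))/2 is a root of x^2 - 5x + 5, the characteristic polynomial
# of A = [[3,1],[1,2]], but its conjugate (5 + sqrt(5))/2 is strictly
# larger; hence no power of it is a Perron number.
import math

lam = (5 - math.sqrt(5)) / 2
conj = (5 + math.sqrt(5)) / 2
assert abs(lam**2 - 5 * lam + 5) < 1e-9          # root of x^2 - 5x + 5
assert (3 + 2, 3 * 2 - 1 * 1) == (5, 5)          # trace and det of A
assert all(lam**p < conj**p for p in range(1, 10))   # conjugate dominates
```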
\subsection{Connections with the results of Carlsen and Larsen}
In their recent preprint \cite{CL}, Carlsen and Larsen study the KMS states of generalized gauge dynamics on the relative graph algebras of possibly infinite graphs using the partial action techniques developed by Exel and Laca in \cite{EL}. Their results apply in particular to finite graphs\footnote{Though in \cite{CL} they use the non-functorial convention for paths in directed graphs, so strictly speaking one would have to apply their results to the opposite graph $E^{\text{opp}}=(E^0,E^1,s,r)$.}, where taking their function $N:E^1\to \mathbb{R}$ to be identically $1$ gives the action $\alpha:\mathbb{R}\to \operatorname{Aut} \mathcal{T} C^*(E)$ studied here.
To make the connection, we observe that the sum $y_v := \sum_{\mu\in E^*v} e^{-\beta |\mu|}$ in \cite[Theorem~3.1]{aHLRS1} is the same as that defining the ``fixed-target partition function'' $Z_v(\beta)$ in \cite[Equation (5.8)]{CL} (see also \cite[Definition 9.3]{EL}). Our results allow us to identify the intervals of convergence of these partition functions:
\begin{lem}
Let $E$ be a finite graph. For $v\in E^0$, take $\beta_v
=\max\{\ln\rho(A_C):C\leq v\}$ as in Corollary~\ref{betav}. Then the series
$\sum_{\mu\in E^*v} e^{-\beta |\mu|}$ converges if and only if $\beta>\beta_v$.
\end{lem}
\begin{proof}
If $\beta>\beta_v$, then taking $H=H_{\beta_v}$ in Proposition~\ref{defphiv} shows that the series converges. On the other hand, suppose that $\beta \leq \beta_v$, and choose a component $C\leq v$ such that $\beta \leq \beta_C$.
Choose a path $\lambda$ in $CE^*v$. Then
\[
\sum_{\mu\in E^*v} e^{-\beta |\mu|} \geq e^{-\beta|\lambda|} \sum_{\mu'\in CE^*r(\lambda)} e^{-\beta |\mu'|},
\]
and the series $\sum_{\mu'\in CE^*r(\lambda)} e^{-\beta |\mu'|}$ diverges because $\rho(e^{-\beta}A_C )\geq 1$ and $A_C$ is irreducible. Thus $\sum_{\mu\in E^*v} e^{-\beta|\mu|}$ diverges too.
\end{proof}
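In the simplest case of a one-vertex graph with $n$ loops, the lemma reduces to the convergence criterion for a geometric series; a numerical sketch (the function name is ours):

```python
# One vertex with n loops: there are n^k paths of length k, so the series
# sum_{mu in E*v} e^{-beta|mu|} = sum_k (n e^{-beta})^k is geometric and
# converges exactly when beta > ln n = beta_v.
import math

def partition_partial(n, beta, terms=200):
    r = n * math.exp(-beta)
    return sum(r**k for k in range(terms)), r

s, r = partition_partial(3, math.log(3) + 0.5)   # beta > beta_v = ln 3
assert r < 1 and abs(s - 1 / (1 - r)) < 1e-9     # converges to 1/(1-r)

_, r = partition_partial(3, math.log(3) - 0.1)   # beta < ln 3
assert r > 1                                     # terms grow: divergence
```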
The factors $\delta_{\mu,\nu}$ in our formulas for the values of KMS states show that all the KMS states on $\mathcal{T} C^*(E)$ and $C^*(E)$ factor through the expectation onto the diagonal $D:=\overline{\lsp}\{s_\lambda s_\lambda^*:\lambda\in E^*\}$. The restriction to $D$ is then given by a measure $\nu$ on the spectrum of $D$, which is $E^*\cup E^\infty$. Exel and Laca say that a KMS state $\psi$ is of \emph{finite type} if this measure $\nu$ is supported on the set $E^*$ of finite paths, and of \emph{infinite type} if $\nu$ is supported on the set $E^\infty$ of infinite paths \cite{EL}. (These are described as \emph{infinite type (A)} in \cite{CL}; for finite $E$, $E^\infty$ has no wandering infinite paths, and hence there are no states which are of their infinite type (B).)
The states $\phi_{\beta,v}$ have the form $\phi_\epsilon\circ q_H$ with $\epsilon$ a point mass supported at $v$. In \cite[\S6.4]{aHLRS1} we described measures on $E^*$ for the states of the form $\phi_\epsilon$, so they and the $\phi_{\beta,v}$ are of finite type. The states $\psi_C$ factor through $C^*(E)$, and are of infinite type; to see this\footnote{To see what goes wrong with this argument for states $\phi_{\beta,v}$ which factor through $C^*(E)$, notice that then Theorem~\ref{thm:altogether} implies that $v$ is a source in some $E\backslash \Sigma H_\beta$ or $E\backslash \Sigma K_\beta$, and hence $v$ is also a source in $E$. But then no Cuntz-Krieger relation is imposed at $v$ in $C^*(E)$, so \eqref{compmeas} is not valid.}, we use that $\psi_C$ factors through $C^*(E)$ to compute
\begin{equation}\label{compmeas}
\nu(\{\lambda\})=\nu(Z(\lambda))-\sum_{r(e)=s(\lambda)}\nu(Z(\lambda e))=\psi_C(s_\lambda s^*_\lambda)-\sum_{r(e)=s(\lambda)}\psi_C(s_{\lambda e}s^*_{\lambda e})=0.
\end{equation}
Thus for $\beta$ between two critical values, say $\beta\in (\beta_C,\beta_D)$, the set $E^0_{\beta\text{{-reg}}}$ in \cite[Definition~5.5]{CL} is $E^0\backslash H_{\beta_D}$, and for $\beta$ critical it is $E^0\backslash K_\beta$. The set $E^0_{\beta\text{{-crit}}}$ is empty unless $\beta$ is critical, and it is the union of the critical components in $E\backslash H_\beta$ if $\beta$ is critical. If $\beta_C$ is critical and there are sources in $E\backslash \Sigma K_{\beta_C}$, then $(C^*(E),\alpha)$ has KMS$_{\beta_C}$ states of both finite and infinite type.
| {
"timestamp": "2014-05-12T02:04:51",
"yymm": "1402",
"arxiv_id": "1402.0276",
"language": "en",
"url": "https://arxiv.org/abs/1402.0276",
"abstract": "We consider the dynamics on the C*-algebras of finite graphs obtained by lifting the gauge action to an action of the real line. Enomoto, Fujii and Watatani proved that if the vertex matrix of the graph is irreducible, then the dynamics on the graph algebra admits a single KMS state. We have previously studied the dynamics on the Toeplitz algebra, and explicitly described a finite-dimensional simplex of KMS states for inverse temperatures above a critical value. Here we study the KMS states for graphs with reducible vertex matrix, and for inverse temperatures at and below the critical value. We prove a general result which describes all the KMS states at a fixed inverse temperature, and then apply this theorem to a variety of examples. We find that there can be many patterns of phase transition, depending on the behaviour of paths in the underlying graph.",
"subjects": "Operator Algebras (math.OA)",
"title": "KMS states on the C*-algebras of reducible graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.989181550709283,
"lm_q2_score": 0.7154240018510026,
"lm_q1q2_score": 0.7076842235656158
} |
https://arxiv.org/abs/2107.11772 | A cubic ring of integers with the smallest Pythagoras number | We prove that the ring of integers in the totally real cubic subfield $K^{(49)}$ of the cyclotomic field $\mathbb{Q}(\zeta_7)$ has Pythagoras number equal to $4$. This is the smallest possible value for a totally real number field of odd degree. Moreover, we determine which numbers are sums of integral squares in this field, and use this knowledge to construct a diagonal universal quadratic form in five variables. | \section{Introduction}
In this note, we shall study sums of integral squares in the cubic field $K^{(49)} = \Q(\zeta_7+\zeta_7^{-1})$, which can be characterised e.g.\ as the unique number field of discriminant $49$ or as the maximal totally real subfield of the seventh cyclotomic field. In the main Theorem \ref{th:main}, we show that any sum of integral squares can in fact be written as a sum of four such squares -- in the terminology introduced below, the Pythagoras number of $\O_{K^{(49)}}$ is $4$. In Theorem~\ref{th:odd>3} we recall the generally known fact that this is the smallest possible value among totally real fields of odd degree. Moreover, with the information gained during the proof of the main theorem, it is fairly simple to fully characterise the numbers which are sums of integral squares in $K^{(49)}$ (Theorem \ref{th:characterisation}) and to use this to construct a diagonal universal quadratic form in five variables (Corollary \ref{co:universal}).
Let us start by summarising the known results. For standard definitions and notation, in particular regarding number fields, local fields and quadratic forms, we refer the reader to \cite{Mi} and \cite{OMeara}.
\smallskip
When $R$ is a commutative ring, $\sum R^2$ denotes the set of all elements of $R$ which can be written as a sum of squares, and for $\alpha \in \sum R^2$, its \emph{length} $\ell(\alpha)$ is the smallest integer $n$ such that $\alpha$ can be written as a sum of $n$ squares. The \emph{Pythagoras number} of $R$ is then defined as the largest length occurring in $\sum R^2$, i.e.
\[
\P(R) = \sup \bigl\{\ell(\alpha) : \alpha \in \textstyle{\sum} R^2\bigr\}.
\]
In this notation, Lagrange's four square theorem can be written as $\P(\Z)=4$. The Pythagoras number is primarily studied for fields \cite{Ho, TVY, BV, BL, BGV, Hu}, as this is arguably the easier case, but there are several results about the Pythagoras number of rings of integers $\O_K$ of a number field $K$ as well. In fact, if $K$ is not totally real, then $\P(\O_K) \leq 4$ \cite{Pf}. For totally real fields, the situation is dramatically different: By Scharlau \cite{Sch}, there are number fields $K$ with arbitrarily large $\P(\O_K)$. On the other hand, $\P(\O_K)$ is bounded by a constant depending only on the degree $d=[K : \Q]$, as shown by Kala and Yatsyna \cite{KY}. This constant is the so-called $g$-invariant $g_{\Z}(d)$, see e.g.\ \cite{KO}; thus for $1 \leq d \leq 5$, this bound is $d+3$, and for $d=6$, its value is $10$.
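To make the definitions concrete, here is a brute-force computation of lengths in the classical case $R=\Z$ (restricted to non-negative integers); the sketch and the function name \texttt{ell} are ours, purely for illustration.

```python
# Brute-force length function over Z (non-negative integers only):
# ell(n) is the least k with n a sum of k positive squares; ell(0) = 0.
import itertools, math

def ell(n, max_len=5):
    if n == 0:
        return 0
    squares = [i * i for i in range(1, math.isqrt(n) + 1)]
    for k in range(1, max_len + 1):
        if any(sum(c) == n for c in
               itertools.combinations_with_replacement(squares, k)):
            return k
    return None

assert [ell(n) for n in (1, 2, 3, 7)] == [1, 2, 3, 4]
assert max(ell(n) for n in range(1, 200)) == 4   # consistent with P(Z) = 4
```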
For a given number field $K$, it is fairly simple to obtain lower bounds for $\P(\O_K)$, since finding the length of any given $\alpha$ is only a computational task; the main content of several recent papers \cite{Ti, KRS, Kr} is finding good lower bounds for infinite families of fields. Obtaining tight upper bounds is much more difficult. The result of Kala and Yatsyna was generalised in \cite{KRS} to include $g$-invariants of any subfield of $K$, which in particular yielded $\P(\O_K) \leq 5$ for quadratic extensions of $\Q(\sqrt5)$; however, very few values of $g$-invariants of totally real orders are known, so this generalisation is of limited use.
Since cubic fields have no subfields besides $\Q$, the only easily available upper bound for their Pythagoras number is $\P(\O_K) \leq g_{\Z}(3) = 6$. Recently, Tinková \cite{Ti} showed that this upper bound is attained in infinitely many totally real cubic fields $K$. More precisely, she studied the family of so-called \emph{simplest cubic fields} $\Q(\rho_a)$, introduced by Shanks \cite{Sh}, where $\rho_a$ is a root of the polynomial $x^3-ax^2-(a+3)x-1$, $a \geq -1$, and proved that $\P(\Z[\rho_a]) = 6$ for $a \geq 3$. (This order is the full ring of integers for a positive proportion of $a$.)
For $a=0,1,2$, Tinková exhibits elements of length $5$, and for $a=-1$ she only gives $\ell(7)=4$, noting that using a computer program, she checked that no element with a reasonably small trace has greater length. Therefore, it is natural to conjecture that $\P(\Z[\rho_{-1}])=4$. Our Theorem \ref{th:main} confirms this hypothesis, since $\Q(\rho_{-1}) = K^{(49)}$. Among totally real cubic fields, this is the first known example with $\P(\O_K) \leq 4$, and by Tinková's result it is the only simplest cubic field where the order $\Z[\rho_a]$ has Pythagoras number less than $5$.
In total, very few totally real fields $K$ with $\P(\O_K) \leq 4$ are known: For $K = \Q(\sqrt2), \Q(\sqrt3)$ and $\Q(\sqrt5)$ one has $\P(\O_K)=3$, while for $K = \Q, \Q(\sqrt6)$ and $\Q(\sqrt7)$ it is known that $\P(\O_K)=4$. For all other real quadratic fields, the maximal possible value $5 = g_{\Z}(2)$ is attained. The paper \cite{KRS} proves that there are at most seven real biquadratic fields with $\P(\O_K) \leq 4$, conjecturing that these seven fields indeed satisfy the inequality. This weak evidence may lead one to conjecture that in fact, there are only finitely many totally real $K$ with $\P(\O_K) \leq 4$.
We conclude this section by combining two well-known facts in order to show $\P(\O_K) \geq 4$ for every totally real number field of odd degree. The result is stated for general orders:
\begin{theorem}\label{th:odd>3}
Let $K$ be a totally real number field with $[K : \Q]$ odd. Then $\P(K) = 4$. If~$\O$ is an order in $K$, then $\P(\O) \geq 4$.
\end{theorem}
\begin{proof}
The fact that $\P(K)=4$ can be found in \cite[Ex.\ ~XI.5.9(2)]{La}. It is an application of the local-global principle, and the core of the argument is as follows: Since there is an odd number of real embeddings, by Hilbert's reciprocity law there exists at least one finite place where the form $x^2+y^2+z^2$ is anisotropic.
It is also well known that if $R$ is any integral domain and $F$ its field of fractions, then $\P(R) \geq \P(F)$. Indeed, if $\frac{\alpha}{\beta}$ is not a sum of $n$ squares in $F$, then the same holds for $\alpha\beta = \frac{\alpha}{\beta}\beta^2$ in $F$ (since sums of $n$ squares are closed under multiplication by squares), and thus also in $R$. Now the proof is concluded since the field of fractions of $\O$ is $K$.
\end{proof}
\section{Preliminaries and results}
We have already defined the Pythagoras number, the set $\sum R^2$ and the length $\ell(\,\cdot\,)$. By $I_n$ we mean the quadratic form $x_1^2 + \cdots + x_n^2$. For brevity, we sometimes denote squares simply by $\square$. The symbol $h(\varphi)$ stands for the class number of the quadratic form $\varphi$. For its definition, as well as for other standard terminology and notation, we refer the reader to O'Meara's book \cite{OMeara}. The starting point of this paper is the following result, proven by analytical methods in \cite{Pe2}:
\begin{theorem}[Peters, 1977] \label{th:peters}
There are only six totally real number fields where $h(I_3)=1$; $K^{(49)}$ is one of them. (The other fields in question are $\Q$, $\Q(\sqrt2)$, $\Q(\sqrt5)$, $\Q(\sqrt{17})$ and the cubic field $K^{(148)}$ with discriminant $148$.)
\end{theorem}
From now on, we will leave general number fields and focus only on the field with discriminant $49$. Let $\zeta_7 = \mathrm{exp}(\frac{2\pi\mathrm{i}}{7})$ be the primitive seventh root of unity and let $K^{(49)}$ stand for the totally real cubic field $\Q(\zeta_7+\zeta_7^{-1})$. Usually we denote its ring of integers $\Z[\zeta_7+\zeta_7^{-1}]$ simply by $\O$. Later, we will also use the element $\rho = \zeta_7+\zeta_7^{-1}$ which generates $\O$; its minimal polynomial is $x^3+x^2-2x-1$. Our main theorem is the following:
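The stated minimal polynomial is easily checked numerically (a sketch; an exact verification would instead use the identity $\zeta_7^7=1$):

```python
# rho = zeta_7 + zeta_7^{-1} = 2 cos(2 pi / 7) and its Galois conjugates
# 2 cos(4 pi / 7), 2 cos(6 pi / 7) are the roots of x^3 + x^2 - 2x - 1.
import math

errs = []
for k in (1, 2, 3):
    r = 2 * math.cos(2 * math.pi * k / 7)
    errs.append(abs(r**3 + r**2 - 2 * r - 1))
assert max(errs) < 1e-12
```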
\begin{theorem} \label{th:main}
$\P\bigl(\Z[\zeta_7+\zeta_7^{-1}]\bigr)=4$.
\end{theorem}
There are many remarkable properties of $K^{(49)}$; for example, it has class number $1$, it is a Galois extension of $\Q$, and every other cubic Galois extension of $\Q$ has larger discriminant (in absolute value).
Thanks to Theorem \ref{th:peters}, sums of three integral squares in $\O$ satisfy the local-global principle; hence, our main tools are the completions $\O_\p$ of the ring of integers $\O$ at places $\p$. In particular, we will examine the only dyadic place of $\O$, corresponding to the prime ideal $(2)$. (The fact that $2$ is indeed a prime of $\O = \Z[\zeta_7+\zeta_7^{-1}]$ is easily checked using \cite[Th.\ ~3.41]{Mi}.)
\smallskip
In general, local conditions do not suffice to describe the set $\sum \O_K^2$; usually there are elements which are not a sum of squares (\enquote{globally}) despite being a sum of integral squares at all completions. Scharlau calls them \enquote{Ausnahmeelemente} (i.e.\ \emph{exceptional elements}) and examines them in depth in \cite{Sch2} -- for example, he shows that up to multiplication by squares of units, there are always only finitely many of them; on the other hand, he proves that in every cubic order there is at least one such element. As the discriminant of $K$ is odd, the only local condition for a sum of squares is total positivity \cite[Kap.\ ~0]{Sch2}. In our second result, we characterise the set of sums of squares, showing that up to multiplication by units there is only one exceptional element, namely $1+\rho+\rho^2$. (Remember that $\rho=\zeta_7+\zeta_7^{-1}$.)
\makeatletter
\newcommand{\mylabel}[2]{#2\def\@currentlabel{#2}\label{#1}}
\makeatother
\begin{theorem} \label{th:characterisation}
Let $\alpha \in \O$ be totally positive. Then the following statements are equivalent:
\begin{enumerate}
\item[\mylabel{a}{(1a)}] $\alpha \notin \sum \O^2$.
\item[\mylabel{b}{(1b)}] $\alpha$ is not a sum of four integral squares.
\item[\mylabel{c}{(2a)}] The norm of $\alpha$ is $7$.
\item[\mylabel{d}{(2b)}] $\alpha = u^2(1+\rho+\rho^2)$ for a unit $u \in \O$.
\item[\mylabel{e}{(2c)}] $\alpha$ is an indecomposable element and not a square.
\end{enumerate}
\end{theorem}
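The equivalence of \ref{c} and \ref{d} can be spot-checked for the element $1+\rho+\rho^2$ itself: its images under the three real embeddings are positive and multiply to $7$. A numerical sketch:

```python
# N(1 + rho + rho^2) = 7 and 1 + rho + rho^2 is totally positive:
# evaluate at the three real embeddings rho -> 2 cos(2 pi k / 7).
import math

images = [1 + r + r * r
          for r in (2 * math.cos(2 * math.pi * k / 7) for k in (1, 2, 3))]
assert all(x > 0 for x in images)         # totally positive
assert abs(math.prod(images) - 7) < 1e-9  # norm equals 7
```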
The proof of this result, together with the notion of \emph{indecomposable elements} used in it, can be found in Section \ref{se:characterisation}. Note that while a full characterisation of $\sum \O_K^2$ is usually not known, for real quadratic fields the problem is solved in \cite[Satz 2]{Pe}.
\smallskip
We can immediately state a simple corollary of our characterisation of $\sum \O^2$. Remember that a totally positive definite quadratic form is called \emph{universal} if it represents all totally positive integers. In \cite{KY}, it is proven that $K^{(49)}$ has the very rare property of admitting a universal quadratic form with rational integral coefficients (the form in question has four variables: $x^2+y^2+z^2+w^2+xw+yw+zw$). With our knowledge, one easily constructs a universal quadratic form in five variables:
\begin{corollary} \label{co:universal}
The totally positive definite quadratic form $x_1^2 + x_2^2 + x_3^2 + x_4^2 + (1+\rho+\rho^2)x_5^2$ is universal over $\O$.
\end{corollary}
While the form given in \cite{KY} has fewer variables, their form is \emph{non-classical}, i.e.\ the corresponding symmetric matrix contains non-integral entries. Our form is not only classical, but even diagonal. For monogenic simplest cubic fields, \cite[Th.\ 1.1]{KT} provides a construction of a diagonal universal quadratic form, which for $\O_{K^{(49)}}$ requires $12$ variables, so our form is much simpler. To the best of our knowledge, no classical universal form in five or fewer variables over a totally real field of odd degree has been previously known. (By a well-known argument involving Hilbert reciprocity, at least four variables are necessary for fields of odd degree \cite{EK}.)
\section{Sums of three squares}
It is well known \cite[102:5]{OMeara} that every quadratic form with class number $1$ satisfies the local-global principle. This means that a number is represented over $\O$ if and only if it is represented over $\O_{\p}$ for all places $\p$. While the sum of four squares $I_4$ does not have this property (we will later prove that $1+\rho+\rho^2$ is not a sum of any number of squares, while being a sum of four squares everywhere locally), we shall exploit that $I_3$ satisfies the local-global principle to fully determine which numbers are a sum of three integral squares in $K^{(49)}$:
\begin{proposition} \label{pr:localglobal}
In $\O = \Z[\zeta_7+\zeta_7^{-1}]$, the quadratic form $I_3$ represents a nonzero number $\alpha$ if and only if $\alpha$ satisfies both the following conditions:
\begin{enumerate}
\item $\alpha$ is totally positive;
\item at the dyadic place $\O_{(2)}$, $\alpha \neq -t^2$ for all $t \in \O_{(2)}$.
\end{enumerate}
\end{proposition}
The proof is given at the end of this section. To prepare for it, we need to examine the dyadic completion $\O_{(2)}$. We state our results more generally for the ring of integers $\mathfrak{O}$ in a dyadic local field $L$ with a few additional properties, since this generality makes the proofs clearer. We start with a useful observation:
\begin{observation}
Let $\mathfrak{O}$ be any commutative ring where $2$ is a prime element. Then the following statements are equivalent for $x,y \in \mathfrak{O}$:
\begin{enumerate}
\item $x \equiv y \pmod2$; \label{it:1}
\item $x^2 \equiv y^2 \pmod4$; \label{it:2}
\item $x^2 \equiv y^2 \pmod2$. \label{it:3}
\end{enumerate}
Indeed, the implications (\ref{it:1}) $\Rightarrow$ (\ref{it:2}) $\Rightarrow$ (\ref{it:3}) are trivial. For (\ref{it:3}) $\Rightarrow$ (\ref{it:1}), one rewrites the condition (\ref{it:3}) as $2 \mid (x-y)^2$ and uses the fact that $2$ is a prime.
\end{observation}
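Since the observation only uses that $2$ is prime, it can be sanity-checked in the most familiar such ring, $\Z$; the following exhaustive check is an illustration, not part of the proof:

```python
# Exhaustive check of the observation in Z, where 2 is a prime element:
# x = y (mod 2)  <=>  x^2 = y^2 (mod 4)  <=>  x^2 = y^2 (mod 2)
for x in range(-8, 9):
    for y in range(-8, 9):
        c1 = (x - y) % 2 == 0
        c2 = (x * x - y * y) % 4 == 0
        c3 = (x * x - y * y) % 2 == 0
        assert c1 == c2 == c3
```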
This observation can further be exploited to get the following lemma:
\begin{lemma} \label{le:sumoftwosquares}
Let $\mathfrak{O}$ be the ring of integers of a dyadic local field $L$ where $2$ is a prime element and the degree $[L : \Q_2]$ is odd. Let $u,v \in \mathfrak{O}$ be units. Then:
\begin{enumerate}
\item $u^2 + v^2 \neq \square$;
\item there are no $y,z \in \mathfrak{O}$ such that $u^2+y^2+z^2 \equiv 0 \pmod4$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that, contrary to the first statement, $u^2+v^2=w^2$ for some $w \in \mathfrak{O}$. Then $w^2 \equiv (u+v)^2 \pmod2$, so the previous observation yields $w^2 \equiv (u+v)^2 \pmod4$.
Plugging this into $u^2+v^2 = w^2$ gives $2uv \equiv 0 \pmod4$. This is equivalent to $uv \equiv 0 \pmod2$, which cannot happen since $u,v$ are units. Thus the first part is proven.
Similarly, if the second statement is violated, we write $z^2 \equiv (u+y)^2 \pmod2$, so the above observation yields $z^2 \equiv (u+y)^2 \pmod4$. Plugging in, we get $2u^2 + 2y^2 + 2uy \equiv 0 \pmod4$. This is equivalent to $u^2 + uy + y^2 \equiv 0 \pmod2$; as $u$ is a unit, it follows that there exists a root of $t^2+t+1$ modulo $2$, i.e.\ in $\mathfrak{O}/(2)$.
Since $2$ is the prime element, $\mathfrak{O}/(2)$ is the residue field. It is isomorphic to $\mathbb{F}_{2^d}$, where $d=[L:\Q_2]$ is by assumption an odd number. However, this is a contradiction -- no root of an irreducible quadratic polynomial over $\mathbb{F}_2$ can be contained in an extension of odd degree.
\end{proof}
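In the base case $L = \Q_2$ (so $d = 1$ and $\mathfrak{O} = \Z_2$), both parts of the lemma reduce to congruences modulo $4$: the units are the odd numbers, and squares are $\equiv 0$ or $1 \pmod 4$. A finite check of these congruences (an illustration of the lemma in this special case, not a replacement for its proof):

```python
# In Z_2 a unit is an odd number; reduce the lemma's two statements mod 4.
units = [1, 3]                                 # odd residues mod 4
squares_mod4 = {x * x % 4 for x in range(4)}   # {0, 1}

for u in units:
    # (1) u^2 + v^2 is never a square: it is = 2 (mod 4)
    for v in units:
        assert (u * u + v * v) % 4 not in squares_mod4
    # (2) u^2 + y^2 + z^2 is never divisible by 4
    for y in range(4):
        for z in range(4):
            assert (u * u + y * y + z * z) % 4 != 0
```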
We have been building towards the following characterisation of sums of three squares in~$\O_{(2)}$, and more generally in the ring of integers of any unramified dyadic local field of odd degree:
\begin{lemma} \label{le:dyadicI3}
Let $L$ be any dyadic local field where $2$ is a prime element and the degree $[L : \Q_2]$ is odd. Then, over its ring of integers $\mathfrak{O}$, the form $I_3$ represents exactly those elements which are not equal to $-t^2$ for any $t \in \mathfrak{O}$.
\end{lemma}
\begin{proof}
Bear in mind that a square is integral if and only if the squared number is integral. Now, the proof has two parts. First we show that if a number in $\mathfrak{O}$ is a sum of three squares in $L$, then these squares are necessarily integral. Afterwards we use the known results about representations over a local field.
Let us examine whether a sum of three non-integral squares can belong to $\mathfrak{O}$. We use the fact that $2$ is the prime element, and multiply the equality by a suitable power of $2$, to reformulate the question as follows: Is it possible to find $x,y,z \in \mathfrak{O}$, with $x$ being a unit, so that $x^2+y^2+z^2 \equiv 0 \pmod4$? By Lemma \ref{le:sumoftwosquares} (2), the answer is no.
We have just shown that if the sum of three squares is integral, then the squared numbers are integral as well (which, as a corollary, also yields that $0$ is not non-trivially represented, i.e.\ $I_3$ is anisotropic). Therefore, we now only have to examine the numbers represented by $I_3$ over the field $L$ instead of over the ring $\mathfrak{O}$. Since $I_3$ is anisotropic and its determinant is a square, \cite[63:21]{OMeara} yields that the only not-represented elements are minus squares.
\end{proof}
With this dyadic preparation, we are ready to characterise sums of three squares in $\O$.
\begin{proof}[Proof of Proposition \ref{pr:localglobal}]
By Theorem \ref{th:peters}, the form $I_3$ has class number $1$, so it satisfies the local-global principle. Thus, it suffices to check which numbers are represented locally.
In the real embeddings, every positive number is already a square, while negative numbers cannot be written as any number of squares. In a non-dyadic completion, every ternary unimodular form is universal \cite[92:1b]{OMeara}. Thus it remains to examine the only dyadic place $\O_{(2)}$. This was done in Lemma \ref{le:dyadicI3} -- this lemma applies since $(2)$ is inert, so $[K^{(49)}_{(2)} : \Q_2]$ is equal to $[K^{(49)} : \Q]$, which is odd.
\end{proof}
\section{The main proof}
Now we fully understand sums of three integral squares. To complete the proof that every sum of squares in $\Z[\zeta_7+\zeta_7^{-1}]$ is already a sum of four squares, one has to examine two types of problematic elements: Those which are equal to minus square of a unit in $\O_{(2)}$, and those which are minus square of a number divisible by $2$ in $\O_{(2)}$. To deal with the latter type, one simple trick suffices:
\begin{observation} \label{ob:2multiples}
Let $\alpha \in \O$ be totally positive and let it be equal to $-(2t)^2$ at the dyadic place. Then $\ell(\alpha)=4$, i.e.\ it is a sum of four but not of three squares.
Indeed, the number $\alpha/2$ is totally positive and it is clearly not equal to $-\square$ at the dyadic place, hence by Proposition \ref{pr:localglobal} we have $\alpha/2 = x^2 + y^2 + z^2$ in $\O$. Then $\alpha = (x+y)^2 + (x-y)^2 + z^2 + z^2$. On the other hand, by the same proposition, $\alpha$ is not a sum of three squares.
\end{observation}
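Both the identity $2(x^2+y^2+z^2) = (x+y)^2+(x-y)^2+2z^2$ behind this observation and the five-variable identity quoted in the next paragraph are polynomial identities, valid over any commutative ring; a quick exhaustive check over small integers:

```python
# Check the two square identities over a range of integers (as polynomial
# identities, they then hold in any commutative ring).
R = range(-3, 4)
for x in R:
    for y in R:
        for z in R:
            assert 2 * (x*x + y*y + z*z) == (x + y)**2 + (x - y)**2 + 2*z*z
            for w in R:
                assert (2 * (x*x + y*y + z*z + w*w + x*w + y*w + z*w)
                        == (x + y + w)**2 + (x - y)**2 + (z + w)**2 + z*z)
```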
As a side note, one could also use the universality of the quadratic form $x^2 + y^2 + z^2 + w^2 + xw + yw + zw$ to deduce that every totally positive number in $2\O$ is a sum of four squares, since $2(x^2+y^2+z^2+w^2+xw+yw+zw) = (x+y+w)^2+(x-y)^2+(z+w)^2+z^2$. Either way, combining Proposition \ref{pr:localglobal} with the above observation, we obtain the following neat statement, which is the basis of the proofs of Theorems \ref{th:main} and \ref{th:characterisation}:
\begin{corollary} \label{co:minusunitbad}
If a totally positive number in $\O$ is not a sum of four integral squares, then in $\O_{(2)}$ it is equal to $-u^2$ where $u$ is a unit.
\end{corollary}
Finally, we are ready to prove that the Pythagoras number of $\O$ is $4$:
\begin{proof}[Proof of Theorem \ref{th:main}]
Let $\alpha \in \sum \O^2$. Clearly, $\alpha$ is totally non-negative, so in most cases Corollary \ref{co:minusunitbad} applies and $\alpha$ is a sum of four squares. It remains to deal with the case when, at the dyadic place, $\alpha = -u^2$ with $u$ a unit. In this case, total positivity of $\alpha$ is not sufficient to guarantee that it is a sum of squares (compare Theorem \ref{th:characterisation}). However, we know that $\alpha = \sum x_i^2$ in $\O$ for some $x_i$. We shall show that if $x_j$ is a dyadic unit, then $\alpha - x_j^2$ is a sum of three squares in $\O$, implying that $\alpha$ is a sum of four squares. This will conclude the proof: since $\alpha$ is a dyadic unit, at least one of the summands $x_i$ must be a dyadic unit.
For the sake of contradiction, suppose that $\sum_{i \neq j} x_i^2$ is not a sum of three squares in $\O$. It is totally positive, so by Proposition \ref{pr:localglobal} it must be $-\square$ in $\O_{(2)}$. However, the equality $-u^2 = x_j^2 - \square$ in $\O_{(2)}$ is impossible by Lemma \ref{le:sumoftwosquares} (1), since $u$ and $x_j$ are units.
\end{proof}
\section{Characterisation of sums of squares} \label{se:characterisation}
In this section we are going to determine which numbers can be written as a sum of squares (by Theorem \ref{th:main} they can in fact be represented as a sum of \emph{four} squares). To achieve this, we shall need the following notion: A totally positive number in $\O$ is called \emph{indecomposable} if it cannot be written as a sum of two totally positive elements of $\O$.
The indecomposable elements are quite difficult to study; it is a significant success that \cite[Th.\ 1.2]{KT} fully characterises them for the order $\Z[\rho_a]$ in the simplest cubic fields. For our case, their result reads as follows (recall that $\rho = \zeta_7+\zeta_7^{-1}$):
\begin{lemma}[Kala, Tinková]
Up to multiplication by squares of units, the only indecomposables of $\O$ are $1$ and $1+\rho+\rho^2$.
\end{lemma}
We use this lemma to provide a very explicit description of the set $\sum \O^2$, namely: a totally positive $\alpha$ is an exceptional element if and only if it satisfies the simple condition \ref{c} or equivalently \ref{d}. However, note that the core of the proof does not use the indecomposable elements explicitly -- without the lemma, we would still have the equivalence of \ref{a} (and \ref{b}) with \ref{e}.
\setcounter{section}{2}
\setcounter{theorem}{2}
\begin{theorem}
Let $\alpha \in \O$ be totally positive. Then the following statements are equivalent:
\begin{enumerate}
\item[(1a)] $\alpha \notin \sum \O^2$.
\item[(1b)] $\alpha$ is not a sum of four integral squares.
\item[(2a)] The norm of $\alpha$ is $7$.
\item[(2b)] $\alpha = u^2(1+\rho+\rho^2)$ for a unit $u \in \O$.
\item[(2c)] $\alpha$ is an indecomposable element and not a square.
\end{enumerate}
\end{theorem}
\begin{proof}
The new input in this proof will be the equivalence between \ref{a} and \ref{e}. Most of the other implications are clear by now:
By Theorem \ref{th:main}, \ref{a} and \ref{b} are equivalent. Statements \ref{d} and \ref{e} are equivalent by the previous lemma. The equivalence of \ref{c} and \ref{d} is elementary: On the one hand, one easily checks (or looks up in \cite{KT}) that $1+\rho+\rho^2$ has norm $7$; on the other hand, since $(7)$ ramifies, any two (totally positive) elements of norm $7$ generate the same ideal, hence differ only by multiplication by a (totally positive) unit. It is well known \cite{Ti} that in simplest cubic fields, a unit is totally positive if and only if it is a square.
The implication from \ref{e} to \ref{a} is also immediate: An indecomposable element is either a square or not a sum of squares. Hence our task is to prove the converse: If $\alpha$ is decomposable, then $\alpha$ is a sum of squares. In the following, we will repeatedly exploit Corollary \ref{co:minusunitbad}: If a totally positive number is not a sum of squares, then in $\O_{(2)}$ it must be equal to $-u^2$ where $u$ is a unit.
Assume that $\alpha = \beta + \gamma$. If both $\beta$ and $\gamma$ are sums of squares, then so is $\alpha$. If neither of them is a sum of squares, then at the dyadic place, $\alpha$ is $-(u_1^2+u_2^2)$ where $u_1,u_2$ are units. However, by Lemma \ref{le:sumoftwosquares} (1) the sum of two unit squares is not a square in $\O_{(2)}$, hence $\alpha$ is a sum of squares in $\O$.
Hence we may assume that $\beta = \sum x_i^2$ and $\gamma$ is not a sum of squares. We also may assume that $x_1$ is a unit in $\O_{(2)}$; if not, we use the equality $(2x)^2=x^2+x^2+x^2+x^2$. Clearly it suffices to show that $\tilde{\gamma} = x_1^2 + \gamma$ is a sum of squares. And this is easy, since Lemma \ref{le:sumoftwosquares} (1) gives a contradiction if both $\gamma$ and $\tilde{\gamma}$ are minus squares of units in $\O_{(2)}$.
\end{proof}
\setcounter{section}{5}
\setcounter{theorem}{1}
The key idea behind the above proof is that while not every indecomposable element is a sum of squares, the sum of two indecomposables in $\O$ can always be rewritten as a sum of squares. With the just proven theorem, one immediately gets Corollary \ref{co:universal}: The quadratic form $x_1^2+x_2^2+x_3^2+x_4^2$ represents all totally positive elements except for those of the form $u^2(1+\rho+\rho^2)$, and these are represented by $(1+\rho+\rho^2)x_5^2$.
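The facts about $1+\rho+\rho^2$ used above -- that it is totally positive and has norm $7$ -- can be sanity-checked numerically from the conjugates $2\cos(2\pi k/7)$ of $\rho$, the roots of its minimal polynomial $x^3+x^2-2x-1$ (a numeric sketch, not a proof):

```python
import math

# The conjugates of rho = zeta_7 + zeta_7^{-1} are 2*cos(2*pi*k/7), k = 1, 2, 3,
# the roots of x^3 + x^2 - 2x - 1.
roots = [2 * math.cos(2 * math.pi * k / 7) for k in (1, 2, 3)]
for r in roots:
    assert abs(r**3 + r**2 - 2*r - 1) < 1e-9

values = [1 + r + r * r for r in roots]   # conjugates of 1 + rho + rho^2
assert all(v > 0 for v in values)         # totally positive
norm = math.prod(values)
assert abs(norm - 7) < 1e-9               # norm 7
```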
\section{Final notes}
The crucial ingredient of our proofs was the fact that the sum of three squares $I_3$ satisfies the local-global principle. The same arguments in fact prove that $\P(\O_K)=4$ for every totally real field $K$ of odd degree where $(2)$ is inert, \emph{provided that $I_3$ integrally represents all numbers which it integrally represents everywhere locally}, but the only straightforward way to prove this integral local-global principle is proving $h(I_3)=1$.
Therefore, the only totally real fields where it is reasonably simple to show that $\P(\O_K)$ is small are the six fields listed in Theorem \ref{th:peters} as the only fields with $h(I_3)=1$. Let us discuss them briefly:
For $\Q$, we have $\P(\Z)=4$ by Lagrange's four square theorem. For $\Q(\sqrt2)$ and $\Q(\sqrt5)$ it is well known \cite{Dz, Ma} that $\P(\O_K)=3$; the proofs are based on the local-global principle. For $\Q(\sqrt{17})$, the situation is different: $\P\bigl(\Z\bigl[\frac{1+\sqrt{17}}{2}\bigr]\bigr)=5$, which is the largest value allowed for quadratic orders by the bound $g_{\Z}(2)$; this is easily proven by checking that $7 + \bigl(\frac{1+\sqrt{17}}{2}\bigr)^2$ is not a sum of four squares. Actually, by \cite[Th.\ 5'']{CP}, there are many different elements of length $5$ in $\Z\bigl[\frac{1+\sqrt{17}}{2}\bigr]$: any totally positive element of norm $128\cdot 4^k$ has length $5$.
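The claim that $7+\bigl(\frac{1+\sqrt{17}}{2}\bigr)^2$ is not a sum of four squares can indeed be settled by a finite search: writing $\omega = \frac{1+\sqrt{17}}{2}$ (so $\omega^2 = \omega + 4$), the element is $11+\omega$, and any square appearing in a representation is totally non-negative, hence bounded by $11+\omega$ in both real embeddings, which leaves finitely many candidates. A brute-force sketch (helper names are ours):

```python
import itertools, math

# Z[w] with w = (1+sqrt(17))/2, so w^2 = w + 4; elements are pairs (p, q) = p + q*w.
w1, w2 = (1 + math.sqrt(17)) / 2, (1 - math.sqrt(17)) / 2

def emb(p, q):                       # the two real embeddings
    return (p + q * w1, p + q * w2)

def square(a, b):                    # (a + b*w)^2 = (a^2 + 4b^2) + (2ab + b^2)*w
    return (a * a + 4 * b * b, 2 * a * b + b * b)

target = (11, 1)                     # 7 + w^2 = 11 + w
t1, t2 = emb(*target)

# Every square in a representation is bounded by the target in both embeddings;
# the chosen ranges for a, b safely cover all such candidates.
cands = sorted({square(a, b) for a in range(-8, 9) for b in range(-4, 5)
                if emb(*square(a, b))[0] <= t1 + 1e-9
                and emb(*square(a, b))[1] <= t2 + 1e-9})

found = any((sum(s[0] for s in quad), sum(s[1] for s in quad)) == target
            for quad in itertools.combinations_with_replacement(cands, 4))
assert not found                     # 11 + w is not a sum of four squares
```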
The only remaining field is $K^{(148)}$, the cubic field with discriminant $148$. There, the same strategy as in this paper might be applied -- first, one proves an analogue of Proposition~\ref{pr:localglobal}, characterising sums of three squares, and then hopefully exploits this result to prove $\P(\O_{K^{(148)}})=4$ or $\P(\O_{K^{(148)}})=5$. It should not be too difficult, but the situation is slightly complicated by the fact that $(2)$ ramifies, so the behaviour at the dyadic place is less simple than in our Lemma \ref{le:dyadicI3}. For example, total positivity is no longer the only local condition for a sum of squares.
\input{pythagorasbibliography}
\end{document}
| {
"timestamp": "2021-07-27T02:18:11",
"yymm": "2107",
"arxiv_id": "2107.11772",
"language": "en",
"url": "https://arxiv.org/abs/2107.11772",
"abstract": "We prove that the ring of integers in the totally real cubic subfield $K^{(49)}$ of the cyclotomic field $\\mathbb{Q}(\\zeta_7)$ has Pythagoras number equal to $4$. This is the smallest possible value for a totally real number field of odd degree. Moreover, we determine which numbers are sums of integral squares in this field, and use this knowledge to construct a diagonal universal quadratic form in five variables.",
"subjects": "Number Theory (math.NT)",
"title": "A cubic ring of integers with the smallest Pythagoras number"
} |
https://arxiv.org/abs/1402.3336

\title{Permutation Patterns in Latin Squares}

\begin{abstract}
In this paper we study pattern avoidance in Latin Squares, which gives us a two dimensional analogue of the well studied notion of pattern avoidance in permutations. Our main results include enumerating and characterizing the Latin Squares which avoid patterns of length three and a generalization of the Erdős-Szekeres theorem. We also discuss equivalence classes among longer patterns, and conclude by describing open questions of interest both in light of pattern avoidance and their potential to reveal information about the structure of Latin Squares. Along the way, we show that classical results need not trivially generalize, and demonstrate techniques that may help answer future questions.
\end{abstract}

\section{Introduction}
A permutation of length $n$ is a rearrangement of the numbers $\{1, 2, \ldots, n\}$ and many interesting questions have been asked and answered about the structure of permutation classes. In particular, pattern containment and avoidance, which we will formally define shortly, ask about the types of subsequences a permutation does and does not have.
A natural generalization of a permutation is a \emph{Latin Square}, which we will also introduce below. Each row and column of a Latin Square is just a permutation, and so questions about patterns in permutations readily generalize to questions about patterns in Latin Squares. Latin Squares are exciting objects to study by themselves, and we will begin this paper by formally defining pattern avoidance in Latin Squares. Little work seems to have been done in this area.
Sections $3$ through $5$ extend classical results from pattern avoidance in permutations. We begin by enumerating and characterizing Latin squares that avoid patterns of length three. In Section $4$ we discuss avoidance of longer patterns, and in Section $5$, discuss pattern containment.
Our final section, in our minds, is one of the most critical parts of this paper: a discussion of several open questions. Answers to these questions will not only be exciting additions to the current results in the field of pattern avoidance, but they may also lay the foundation for answering several important questions about the structure and number of Latin Squares. Most of the contents of this paper were presented at the $11^{\text{th}}$ Annual Permutation Patterns conference, and after the talk, many participants came up with additional questions. We have added these to our list in Section $6$.
\section{Background}
We previously described a Latin Square as a set of permutations. More concretely, an $n^{th}$ order Latin Square is an $n$ by $n$ grid in which the numbers $1, 2, ..., n$ (often called symbols) are each used exactly once in each row and column. There are $n!$ permutations of length $n$, and so it comes as a surprise that the number of Latin Squares of order $n$ is only known up to $n=11$ \cite{McKay}. If we let $L_n$ be the number of $n^\text{th}$ order Latin Squares, then the best known bounds for $L_n$ are very far apart. For example, van Lint and Wilson \cite[p.~187]{VLW} give upper and lower bounds which differ asymptotically by a factor of $n^n$.
We can naturally extend the definition of pattern avoidance in permutations to pattern avoidance in Latin Squares. This definition first requires an understanding of pattern containment in permutations: a permutation of size $n$ is said to \emph{contain} a pattern (also a permutation) of size $k\leq n$ if a subsequence of the permutation is order-isomorphic to the pattern. If a permutation does not contain a pattern, it is said to \emph{avoid} it. For example, the permutation $13254$ contains $123$ because the subsequence $135$ is in the same relative order (that is, strictly increasing) as the pattern $123$. Conversely, $13254$ avoids $321$ because it does not contain a strictly decreasing subsequence of length three.
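Pattern containment is straightforward to test by brute force; a short sketch (the helper name is ours):

```python
from itertools import combinations

def contains(perm, pat):
    """Does perm have a subsequence order-isomorphic to pat?"""
    k = len(pat)
    return any(
        all((perm[idx[a]] < perm[idx[b]]) == (pat[a] < pat[b])
            for a in range(k) for b in range(a + 1, k))
        for idx in combinations(range(len(perm)), k))

# The examples from the text:
assert contains((1, 3, 2, 5, 4), (1, 2, 3))      # e.g. the subsequence 1, 3, 5
assert not contains((1, 3, 2, 5, 4), (3, 2, 1))  # no decreasing subsequence of length 3
```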
To extend the preceding definition to the setting of Latin Squares, note that each row and each column of a Latin Square can be viewed as a permutation by reading the rows and the columns from left to right and top to bottom, respectively. We define a Latin Square's \emph{row permutations} to be the $n$ permutations corresponding to the rows in this manner and we define the \emph{column permutations} similarly. Then:
\begin{defn}
A Latin Square avoids a pattern $\pi$ if all row and column permutations avoid $\pi$. The number of $n^\text{th}$ order Latin Squares avoiding $\pi$ will be denoted $L_n(\pi)$.
\footnote{While this definition is particularly natural, we note that there are other ways of defining pattern avoidance in Latin Squares. For example, each symbol $k$ of a Latin Square determines a permutation $\pi$, where $\pi(i)=j$ when the entry in row $i$, column $j$ is $k$. One could also require these symbol permutations to avoid a pattern in order to say that the full square does. This convention could make sense, and there is a way to view and define Latin Squares so that the distinction between rows, columns and symbols is arbitrary (see Chapter 17 in \cite{VLW}). Furthermore, since Latin Squares are two dimensional, it might also make sense to study Latin Squares avoiding a ``two dimensional pattern.'' In this paper we will study pattern avoidance using Definition 1, though we will discuss these alternative definitions of pattern avoidance in Section 6.}
\end{defn}
The canonical question of pattern avoidance has been, given some pattern $\pi$, how many permutations avoid (or equivalently, contain) $\pi$. One of the earliest results was that the number of permutations of length $n$ avoiding any pattern of length three (e.g. any permutation of $\{1, 2, 3\}$) is just $\frac1{n+1}\binom{2n}{n}$, the $n^\text{th}$ Catalan number. This result is proved in chapter 4 of \cite{Bona}.
One might suspect that if $\pi$ and $\pi'$ are patterns of the same length, then the same number of permutations will avoid them; however, this is not true in general. When, for any $n$, the number of permutations in $S_n$ avoiding $\pi$ and $\pi'$ are the same, these patterns are said to be \emph{Wilf-equivalent}. In addition to enumerating permutations avoiding a pattern, characterizing these equivalence classes is a central question in the study of permutation patterns.
\section{Avoiding Patterns of Length Three}
One of the first results in classical pattern avoidance was the enumeration of permutations which avoid patterns of length three. In this section we ask the same question for Latin Squares and will find the following result:
\begin{thm}
\label{thm:avoid}
For any $\pi\in S_3$, $L_n(\pi)=n.$
\end{thm}
To prove this result, we begin by considering a less restrictive case: the number of Latin Squares avoiding a pattern in just the columns. We can count these Latin Squares using the following proposition:
\begin{prop}
For any permutation $\sigma\in S_n$, there is exactly one Latin Square avoiding the pattern $123$ in the columns with $\sigma$ as its first row.
\end{prop}
\begin{proof}
Suppose that the top row has been fixed as $\sigma$ and consider the column beginning with a $1$. The rest of the entries must be in decreasing order: if there were any two in increasing order, they would form a $123$ pattern with the $1$ in the top row. Thus, the only possible permutation for this column is $1, n, (n-1), \ldots, 3, 2$.
The column whose first entry is $2$ must similarly be completed as $2, 1, n, (n-1), \ldots, 3$. This follows because the numbers $3$ through $n$ must be in strictly decreasing order, and, to avoid a conflict with the first column, $n$ cannot be in the second row.
Now proceed iteratively. To fill out the column beginning with $j$, for $j<n$, all elements greater than $j$ must be in a decreasing order, and $n$ cannot be placed in the first $j$ rows. The elements greater than $j$ are then forced to be placed in the bottom $n-j$ rows. To complete the column, the remaining numbers $2,\dots, j-1$ must be in decreasing order to avoid forming a $123$ pattern with $n$. Figure \ref{proof} shows an illustration of this process when $n=4$. \begin{figure}[h]
$$
\begin{tabular}{|c|c|c|c|}\hline
2&1&3&4\\\hline
&&&\\\hline
&&&\\\hline
&&&\\\hline
\end{tabular}
\to
\begin{tabular}{|c|c|c|c|}
\hline
2&1&3&4\\\hline
&4&&\\\hline
&3&&\\\hline
&2&&\\\hline
\end{tabular}
\to
\begin{tabular}{|c|c|c|c|}
\hline
2&1&3&4\\\hline
1&4&&\\\hline
4&3&&\\\hline
3&2&&\\\hline
\end{tabular}
\to
\begin{tabular}{|c|c|c|c|}
\hline
2&1&3&4\\\hline
1&4&2&\\\hline
4&3&1&\\\hline
3&2&4&\\\hline
\end{tabular}$$
\caption{Proof method of Proposition 2.}\label{proof}
\end{figure}
This leaves one unfilled column, which can only be completed one way to avoid repeats in the rows. This method will construct a unique Latin Square avoiding $123$ in the columns with $\sigma$ as its top row. \end{proof}
Our proof readily generalizes for any of the other five permutations of length three. To avoid $132$, first consider the $1$ in the top row and place the remaining elements in an increasing order. To avoid $312$ or $321$, act similarly but first consider the $n$ in the first row. To avoid $231$ and $213$, instead begin with the bottom row.
Because one row effectively determines a unique Latin Square avoiding a pattern of length three in the columns, we obtain the following corollary.
\begin{cor}
The number of $n^{th}$ order Latin Squares avoiding a pattern of length three in just the columns (or rows) is $n!$.
\end{cor}
The above work also reveals a very interesting structure for pattern avoidance in the columns:
\begin{rem}
In a Latin Square avoiding $123$, $231$ or $312$, each entry is one less than the one above it (mod $n$). Thus, all columns are of the form $i, i-1, \ldots, 1, n, \ldots, i+1.$ When avoiding $132$, $213$, or $321$, all columns are instead increasing and of the form $i, i+1, \ldots, n, 1, \ldots, i-1.$
\end{rem}
Note that these results pertain, respectively, to the even and odd permutations of $S_3$. As above, this remark applies similarly to the rows.
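The remark makes the construction of Proposition 2 completely explicit: if the top row is $\sigma$, the entry in row $r$ and column $c$ (counting from $0$) of the unique square avoiding $123$ in the columns is $((\sigma(c)-1-r) \bmod n)+1$. A sketch that rebuilds the square of Figure \ref{proof} this way (helper names are ours):

```python
from itertools import combinations

def column_square(sigma):
    # Each column decreases cyclically below its top entry sigma[c]:
    # sigma[c], sigma[c]-1, ..., 1, n, ..., sigma[c]+1.
    n = len(sigma)
    return [[(sigma[c] - 1 - r) % n + 1 for c in range(n)] for r in range(n)]

def contains(seq, pat):
    k = len(pat)
    return any(all((seq[i[a]] < seq[i[b]]) == (pat[a] < pat[b])
                   for a in range(k) for b in range(a + 1, k))
               for i in combinations(range(len(seq)), k))

sigma = [2, 1, 3, 4]                  # the top row used in the figure above
S = column_square(sigma)
n = len(sigma)
assert S[1] == [1, 4, 2, 3]           # matches the second row constructed there
assert all(sorted(row) == list(range(1, n + 1)) for row in S)        # Latin rows
assert all(sorted(col) == list(range(1, n + 1)) for col in zip(*S))  # Latin columns
assert not any(contains(col, (1, 2, 3)) for col in zip(*S))          # columns avoid 123
```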
We are now ready for the proof of Theorem \ref{thm:avoid}.
\begin{proof}
Let $\pi$ be a permutation in $S_3$. To construct a Latin Square avoiding $\pi$, we have $n$ choices for which number is placed in the top left box of our Latin Square. By the above remark, there is exactly one way to complete this row
(as $i, i-1, \ldots, 1, n, \ldots, i+1$ if $\pi$ is even, or $i, i+1, \ldots, n, 1, \ldots, i-1$ if $\pi$ is odd). We now have one entry in each column, and Proposition 2 shows that there is only one way to complete each column.
Since each of the $n$ choices for where this first element can go produces exactly one Latin Square avoiding $\pi$, this completes our proof.
\end{proof}
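Theorem \ref{thm:avoid} is small enough to confirm by exhaustive search for $n=4$: there are $576$ Latin Squares of order $4$, and exactly $4$ of them avoid $123$ in all rows and columns. A brute-force sketch (helper names are ours):

```python
from itertools import combinations, permutations

def contains(seq, pat):
    k = len(pat)
    return any(all((seq[i[a]] < seq[i[b]]) == (pat[a] < pat[b])
                   for a in range(k) for b in range(a + 1, k))
               for i in combinations(range(len(seq)), k))

def latin_squares(n):
    # Build squares row by row, keeping columns repeat-free.
    rows = list(permutations(range(1, n + 1)))
    def extend(sq):
        if len(sq) == n:
            yield tuple(sq)
        else:
            for r in rows:
                if all(r[c] != prev[c] for prev in sq for c in range(n)):
                    yield from extend(sq + [r])
    yield from extend([])

def avoids(sq, pat):
    return not any(contains(line, pat) for line in list(sq) + list(zip(*sq)))

squares = list(latin_squares(4))
assert len(squares) == 576                                  # L_4
assert sum(avoids(sq, (1, 2, 3)) for sq in squares) == 4    # L_4(123) = 4
```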
\begin{rem}
The $n$ $n^{\text{th}}$ order Latin Squares which avoid any particular pattern of length three have a structure that is worth noting. The Latin Squares which avoid $123, 231$, and $312$ are of the form shown in Figure \ref{123}, and the Latin Squares which avoid $132, 213$, and $321$ are of the form shown in Figure \ref{321}.
\begin{figure}[h]
\[
\begin{array}{|c|c|c|c|c|c|c|c|c|c|}
\hline
i&i-1&&&1&n&&&i+2&i+1\\\hline
i-1&\ddots&&1&n&&&i+2&i+1&i\\\hline
&&1&n&&&i+2&i+1&i&\\\hline
&1&n&\ddots&&i+2&i+1&i&&\\\hline
1&n&&&i+2&i+1&i&&&\\\hline
n&&&i+2&i+1&i&&&&1\\\hline
&&i+2&i+1&i&&\ddots&&1&n\\\hline
&i+2&i+1&i&&&&1&n&\\\hline
i+2&i+1&i&&&&1&n&\ddots&\\\hline
i+1&i&&&&1&n&&&i+2\\\hline
\end{array}
\]
\caption{ The general form of a 123, 231, or 312 avoiding Latin Square.}\label{123}
\end{figure}
\begin{figure}[h!]
\[
\begin{array}{|c|c|c|c|c|c|c|c|c|c|}
\hline
i&i+1&&&n&1&&&i-2&i-1\\\hline
i+1&\ddots&&n&1&&&i-2&i-1&i\\\hline
&&n&1&&&i-2&i-1&i&\\\hline
&n&1&\ddots&&i-2&i-1&i&&\\\hline
n&1&&&i-2&i-1&i&&&\\\hline
1&&&i-2&i-1&i&&&&n\\\hline
&&i-2&i-1&i&&\ddots&&n&1\\\hline
&i-2&i-1&i&&&&n&1&\\\hline
i-2&i-1&i&&&&n&1&\ddots&\\\hline
i-1&i&&&&n&1&&&i-2\\\hline
\end{array}
\]
\caption{ The general form of a 132, 213, or 321 avoiding Latin Square.}\label{321}
\end{figure}
\end{rem}
Every row and column is in a cyclic increasing or decreasing structure where adjacent elements differ by one (mod $n$). In addition, we earlier saw that there were $n!$ Latin Squares avoiding a pattern of length three in just the columns, and $n!$ avoiding it in just the rows. When we force both restrictions we find that only $n$ Latin Squares satisfy both avoidance criteria.
By examining the above Latin Squares, we can also see the following corollary:
\begin{cor}
A Latin Square contains either all of the patterns $\{123,231,312\}$, or none of them. This also holds for $\{132,213,321\}$.
\end{cor}
Given the previous result, the proof of this corollary is straightforward. If a Latin Square does not contain any of $\{123,231,312\}$, it must be in the decreasing form shown above and it will not contain the others. However, out of context, it is somewhat surprising that any Latin Square with three terms in a row or column in increasing order must have three terms in the relative order 231 and 312.
We have now seen that all patterns of length three are also Wilf-equivalent for Latin Squares and that the growth rate of the number of these Latin Squares is polynomial as opposed to exponential (as is the case for permutations).
\section{Avoidance of Larger Patterns}
Computing $L_n(\pi)$ for a general pattern $\pi$ of length greater than three is considerably more difficult. As of yet, we know of no simple algorithm for filling in a partially completed Latin Square so that it will avoid a permutation in $S_4$. We begin this section with a much more tractable question: counting $L_n(\pi)$, for $\pi\in S_n$, in terms of the total number of Latin Squares.
\begin{thm}
For any $\pi\in S_n$, $L_n(\pi)=\left(\frac{n!-n}{n!}\right)^2L_n.$
\end{thm}
\begin{proof}
Let $\pi,\rho$ be permutations in $S_n$. Given a $\pi$-avoiding $n^{th}$ order Latin Square, apply the permutation $\rho\circ\pi^{-1}$ to each entry. Doing so will create a bijection from Latin Squares which avoid $\pi$ to those which avoid $\rho$, so that $L_n(\pi)$ for $\pi\in S_n$ only depends on $n$. We first count Latin Squares avoiding any $\pi\in S_n$ in the columns. Let the number of these be $\ell_n(\pi)$.
Let two Latin Squares be r-equivalent if they are related by a permutation of rows. Each r-equivalence class will be of size $n!$. Let the $i^\text{th}$ column of a Latin Square, $S$, be $\sigma_i$. The $n$ permutations $\pi\circ\sigma_i^{-1}$, for $1\le i\le n$, are the only ones which, when applied to the rows of $S$, cause the result to contain $\pi$ in a column. Thus, each r-equivalence class will contain $n!-n$ Latin Squares which avoid $\pi$ in the columns, so that $\ell_n(\pi)=\frac{n!-n}{n!}\cdot L_n$. By similar logic, if we partition Latin Squares which column-avoid $\pi$ into c-equivalence classes up to permutation of columns, each c-equivalence class of size $n!$ will contain $n!-n$ Latin Squares which avoid $\pi$ in the rows and columns. Thus, we have $$L_n(\pi)=\frac{n!-n}{n!}\cdot\ell_n(\pi)=\left(\frac{n!-n}{n!}\right)^2L_n.$$ \end{proof}
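For $n=4$ the theorem can be confirmed by brute force: a length-$4$ row or column contains the length-$4$ pattern $1234$ exactly when it \emph{is} $1234$, and indeed $\left(\frac{4!-4}{4!}\right)^2 \cdot 576 = 400$ of the $576$ Latin Squares of order $4$ have no row or column equal to $1234$. A verification sketch (helper names are ours):

```python
from itertools import permutations
from math import factorial

def latin_squares(n):
    rows = list(permutations(range(1, n + 1)))
    def extend(sq):
        if len(sq) == n:
            yield tuple(sq)
        else:
            for r in rows:
                if all(r[c] != prev[c] for prev in sq for c in range(n)):
                    yield from extend(sq + [r])
    yield from extend([])

n = 4
identity = tuple(range(1, n + 1))
# A length-n permutation contains the length-n pattern 12...n iff it equals 12...n.
count = sum(1 for sq in latin_squares(n)
            if identity not in sq and identity not in zip(*sq))
predicted = (factorial(n) - n) ** 2 * 576 // factorial(n) ** 2
assert predicted == 400 and count == predicted
```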
As noted earlier, it is not generally true that $L_n(\pi)$ = $L_n (\pi')$ when $\pi$ and $\pi'$ are patterns of the same length. We say that $\pi$ and $\pi'$ are Wilf-equivalent in Latin Squares when $L_n(\pi)=L_n(\pi')$ for all $n$.
In classical pattern avoidance, Wilf-equivalence classes have rich structure. Let the \emph{complement} of a permutation, $\pi^c$, be given by $\pi^c(i)=(n+1)-\pi(i)$, and the \emph{reverse}, $\pi^\text{rev}$, by $\pi^\text{rev}(i)=\pi(n+1-i)$. It is easy to show $\pi$ is Wilf-equivalent to its complement and reverse in permutations, and a quick proof shows $\pi$ is similarly equivalent to its \emph{inverse} (which satisfies $\pi^{-1}(\pi(i))=i$).
There are many other Wilf-equivalences in permutations; for example, $S_n(4132)=S_n(3142)$, as shown in \cite{Stankova} (where $S_n(\pi)$ denotes the number of permutations of length $n$ avoiding a pattern $\pi$). Nontrivial equivalences exist for arbitrarily large patterns. For $\pi_1\in S_n$ and $\pi_2\in S_m$, let $\pi_1\oplus \pi_2$ be the permutation in $S_{n+m}$, given by applying $\pi_1$ to $\{1,\dots,n\}$ and $\pi_2$ to $\{n+1,\dots,n+m\}$. Then \cite{BWX} shows that for any pattern $\pi$, $S_n(12\dots k\oplus \pi)=S_n(k\dots21\oplus \pi)$. Combined, these results can be used to show that there are only three Wilf classes in $S_4$ \cite{Bona}.
For Latin Squares, we still have equivalence under reverse and complement.
\begin{thm}\label{comprev}
$L_n(\pi) = L_n(\pi^\text{rev}) = L_n(\pi^c)$ where $\pi^\text{rev}$ is the reverse of $\pi$ and $\pi^c$ is the complement of $\pi$.
\end{thm}
\begin{proof}
Let $\mathcal{L}_n(\pi)$ denote the set of $n^\text{th}$ order Latin Squares which avoid $\pi$. Given a Latin Square $S\in\mathcal{L}_n(\pi)$, let $\phi(S)$ be the new Latin Square when each entry $i$ is replaced with $n+1-i$. Since an occurrence of $\pi$ in $S$ would cause $\phi(S)$ to contain $\pi^c$, we then have that $\phi$ is a bijection from $\mathcal{L}_n(\pi)$ to $\mathcal{L}_n(\pi^c)$. To prove that $L_n(\pi)=L_n(\pi^\text{rev})$, we define the mapping $\rho$ by rotating the Latin Square $180^\circ$ around its center.
\begin{comment}
, as below:
\[
\begin{tabular}{ccc}
\begin{tabular}{|c|c|c|c|}
\hline
1&4&2&3\\\hline
4&3&1&2\\\hline
3&2&4&1\\\hline
2&1&3&4\\\hline
\end{tabular}
&
{\Large$\xrightarrow{\rho}$}
&
\begin{tabular}{|c|c|c|c|}
\hline
4&3&1&2\\\hline
1&4&2&3\\\hline
2&1&3&4\\\hline
3&2&4&1\\\hline
\end{tabular}
\end{tabular}
\]
\end{comment}
This has the effect of reversing every row and every column, so $\rho$ is a bijection from $\mathcal{L}_n(\pi)$ to $\mathcal{L}_n(\pi^\text{rev})$.
\end{proof}
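The two maps in the proof are easy to implement and test. The following Python sketch (an illustration only; the order-4 square is a small example) verifies that both $\phi$ and $\rho$ send Latin squares to Latin squares and are involutions, hence bijections:

```python
def complement(square):
    """The map phi: replace each entry i by n + 1 - i."""
    n = len(square)
    return [[n + 1 - x for x in row] for row in square]

def rotate180(square):
    """The map rho: rotate the square 180 degrees about its center."""
    return [row[::-1] for row in square[::-1]]

def is_latin(sq):
    n = len(sq)
    lines = sq + [list(col) for col in zip(*sq)]
    return all(sorted(line) == list(range(1, n + 1)) for line in lines)

S = [[1, 4, 2, 3],
     [4, 3, 1, 2],
     [3, 2, 4, 1],
     [2, 1, 3, 4]]

for f in (complement, rotate180):
    assert is_latin(f(S))   # both maps preserve the Latin property
    assert f(f(S)) == S     # both maps are involutions, hence bijections
```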
However, none of the other equivalences for patterns of length four carries over to Latin Squares. Dan Daly calculated $L_5(\pi)$ for every $\pi\in S_4$ (personal communication, July 12, 2012), using methods in \cite{McKay}, and the only equivalences present were the ones proved in Theorem \ref{comprev}. These data show that, for avoidance in Latin Squares, there are eight Wilf classes in $S_4$, as opposed to three in the case of permutations. This illustrates how pattern avoidance in Latin Squares is more nuanced and difficult than it is for permutations. Whether any nontrivial Wilf-equivalences exist for Latin Squares is an open question.
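For order $4$ the full enumeration is small enough to do by hand in Python. The sketch below (not from the paper; it assumes, as in the paper, that avoidance means every row and every column avoids the pattern) generates all $576$ Latin squares of order $4$ and confirms that the counts for $132$, its complement $312$, and its reverse $231$ coincide, as Theorem \ref{comprev} requires:

```python
from itertools import combinations, permutations

def latin_squares(n):
    """Generate all order-n Latin squares row by row by backtracking."""
    rows = list(permutations(range(1, n + 1)))
    def extend(square):
        if len(square) == n:
            yield [list(r) for r in square]
            return
        for r in rows:
            if all(r[j] != prev[j] for prev in square for j in range(n)):
                yield from extend(square + [r])
    return extend([])

def contains(line, pat):
    """Does line contain pat as an order-isomorphic subsequence?"""
    k = len(pat)
    return any(all((sub[a] < sub[b]) == (pat[a] < pat[b])
                   for a in range(k) for b in range(k))
               for sub in combinations(line, k))

def avoids(square, pat):
    lines = square + [list(col) for col in zip(*square)]
    return not any(contains(line, pat) for line in lines)

squares = list(latin_squares(4))
assert len(squares) == 576  # the classical count of order-4 Latin squares

counts = {p: sum(avoids(s, p) for s in squares)
          for p in [(1, 3, 2), (3, 1, 2), (2, 3, 1)]}
assert len(set(counts.values())) == 1  # forced by Theorem comprev
```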
\section{Monotone Subsequences}
The celebrated theorem of Erd\H os and Szekeres states that every permutation of length $pq+1$ contains an increasing subsequence of length $p+1$ or a decreasing subsequence of length $q+1$. In the special case $p=q$, every permutation of length $n^2+1$ contains a \emph{monotone} (i.e., strictly increasing or strictly decreasing) subsequence of length $n+1$ \cite{Erdos}. Moreover, this is the longest monotone subsequence whose existence is guaranteed: for every $m<n^2+1$, there exist permutations of length $m$ with no monotone subsequence of length $n+1$. This result can be rephrased as follows:
\begin{thm}[Erd\H os and Szekeres]
Let $\lambda_n=\lfloor\sqrt{n-1}\rfloor+1$. Every permutation of length $n$ has a monotone subsequence of length $\lambda_n$, and there exist permutations of length $n$ without monotone subsequences of length $\lambda_n+1$.
\end{thm}
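For very small $n$ the theorem can be confirmed exhaustively. The following Python sketch (an illustration, not part of the paper) computes the longest strictly monotone subsequence by dynamic programming and checks that the minimum over all length-$n$ permutations equals $\lambda_n$ for $n\le 6$:

```python
from itertools import permutations
from math import isqrt

def longest_monotone(seq):
    """Length of the longest strictly monotone subsequence (O(len^2) LIS)."""
    def lis(s):
        best = []
        for i, x in enumerate(s):
            best.append(1 + max([best[j] for j in range(i) if s[j] < x],
                                default=0))
        return max(best)
    return max(lis(seq), lis([-x for x in seq]))

def lam(n):
    """lambda_n = floor(sqrt(n-1)) + 1 from the theorem."""
    return isqrt(n - 1) + 1

# The minimum over all length-n permutations of the longest monotone
# subsequence is exactly lambda_n.
for n in range(1, 7):
    assert min(longest_monotone(p) for p in permutations(range(n))) == lam(n)
```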
In the above theorem, we can think of $\lambda_n$ as the length of the longest forced monotone subsequence. We wish to generalize this theorem to Latin Squares by defining a corresponding variable, $\Lambda_n$.
\begin{defn}
Let $\Lambda_n$ be the largest integer such that every $n^\text{th}$ order Latin Square has a row or column with a monotone subsequence of length $\Lambda_n$.
\end{defn}
It is trivially true that $\Lambda_n\ge\lambda_n$, since every row and column of an $n^\text{th}$ order Latin Square is a permutation of length $n$ and hence contains a monotone subsequence of length $\lambda_n$. In the next theorem we present a slight improvement of this bound.
\begin{thm}
If $n\ge(m-1)(m-2)+2$ for some integer $m$, then $\Lambda_n\ge m$.
\end{thm}
\begin{proof}
Given an $n^\text{th}$ order Latin Square, consider the row whose leftmost entry is $n$. The $n-1$ entries to the right of this $n$ form a permutation of length at least $(m-1)(m-2)+1$. By the Erd\H os--Szekeres theorem, this permutation has either an increasing subsequence of length $m$ or a decreasing one of length $m-1$. In the latter case, prepending the leftmost $n$ to this decreasing subsequence yields a decreasing subsequence of length $m$, so in either case a monotone subsequence of length $m$ exists. Thus, $\Lambda_n\ge m$.
\end{proof}
Note that this argument applies equally to the column beginning with $n$, and (with the roles of increasing and decreasing exchanged) to the row and column beginning with $1$, so we are in fact guaranteed four occurrences of a monotone subsequence of length $m$.
By inverting the condition of the previous theorem, we can express this result in terms of $\Lambda_n$: the inequality $n\ge(m-1)(m-2)+2$ is equivalent to $\left(m-\frac32\right)^2\le n-\frac74$, that is, to $m\le\frac32+\sqrt{n-\frac74}$.
\begin{cor}\label{lb} For all $n>1$,
$\Lambda_n\ge \left\lfloor \frac32 +\sqrt{n-\frac74}\right\rfloor.$
\end{cor}
We now show that this lower bound is tight for all perfect squares greater than $1$. In these cases the lower bound reads
$$
\Lambda_{n^2}\ge\left\lfloor \frac32 +\sqrt{n^2-\frac74}\right\rfloor\ge\left\lfloor \frac32 +\sqrt{n^2-\frac{4n-1}4}\right\rfloor=n+1.
$$
In fact equality holds for these numbers, as was shown by Sam Connolly, a participant in the 2013 REU at East Tennessee State University. For $i,j\in \{1,2,\dots,n^2\}$, let $k_{ij}$ be the element of $\{1,2,\dots,n^2\}$ with $k_{ij}\equiv i+j-1 \pmod{n^2}$, and consider the Latin Square of order $n^2$ whose $(i,j)$ entry is $k_{ij}n \pmod{n^2+1}$. For instance, when $n^2=9$, this produces the square in Figure \ref{connolly}.
\begin{figure}
$$
\begin{array}{|c|c|c|c|c|c|c|c|c|}
\hline
3&6&9&2&5&8&1&4&7\\\hline
6&9&2&5&8&1&4&7&3\\\hline
9&2&5&8&1&4&7&3&6\\\hline
2&5&8&1&4&7&3&6&9\\\hline
5&8&1&4&7&3&6&9&2\\\hline
8&1&4&7&3&6&9&2&5\\\hline
1&4&7&3&6&9&2&5&8\\\hline
4&7&3&6&9&2&5&8&1\\\hline
7&3&6&9&2&5&8&1&4\\\hline
\end{array}
$$
\caption{Order 9 Latin Square whose longest monotone subsequence is 4.}\label{connolly}
\end{figure}
The first row and the first column are classic examples of permutations of length $n^2$ whose longest monotone subsequence has length $n$. The remaining rows and columns are cyclic shifts of the first, and cyclic shifting increases the length of the longest monotone subsequence by at most one. This shows that $\Lambda_{n^2}\le n+1$, which together with the lower bound gives $\Lambda_{n^2}=n+1$.
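Connolly's construction can be checked directly. The Python sketch below (an illustration, not from the paper) rebuilds the order-$9$ square of Figure \ref{connolly} from the formula, verifies that it is Latin, and confirms that the longest monotone subsequence over all rows and columns has length exactly $n+1=4$:

```python
def longest_monotone(seq):
    """Length of the longest strictly monotone subsequence (O(len^2) LIS)."""
    def lis(s):
        best = []
        for i, x in enumerate(s):
            best.append(1 + max([best[j] for j in range(i) if s[j] < x],
                                default=0))
        return max(best)
    return max(lis(seq), lis([-x for x in seq]))

n = 3
N = n * n
# Entry (i,j) is k_{ij} * n mod (N+1), where k_{ij} = ((i+j-2) mod N) + 1.
square = [[(((i + j - 2) % N + 1) * n) % (N + 1) for j in range(1, N + 1)]
          for i in range(1, N + 1)]

assert square[0] == [3, 6, 9, 2, 5, 8, 1, 4, 7]  # first row of the figure
lines = square + [list(col) for col in zip(*square)]
assert all(sorted(line) == list(range(1, N + 1)) for line in lines)  # Latin
assert max(longest_monotone(line) for line in lines) == n + 1        # = 4
```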
Since one can show that $\Lambda_3=3$, our bound, which gives only $\Lambda_3\ge2$, is not always tight. On the other hand, $\Lambda_n$ would always equal either our lower bound or one more than it if the conjecture below were true:
\begin{conj}
$\Lambda_{n}$ is nondecreasing.
\end{conj}
The corresponding result is trivial for permutations, since deleting the entry $n+1$ from a permutation of length $n+1$ naturally yields one of length $n$. A similar containment argument does not translate immediately to Latin Squares, as deleting a row, column, or symbol from a Latin Square of order $n+1$ does not produce a Latin Square of order $n$.
\section{Open Questions}
We would like to end with a discussion of several open problems which we believe may spur future investigations. Again, many of these problems were provided by attendees of the $11^\text{th}$ Permutation Patterns conference; we are unfortunately unable to remember all of their names. Answering some of these questions could mark the beginning of a rich theory of pattern avoidance in Latin Squares.
\vspace{5mm}
\noindent{\bf Open Problems}
\begin{itemize}
\item What is $L_n(\pi)$ for patterns of length $4$ or more? In particular, what can be said when $\pi = 1234$?
\item For a fixed pattern, say $\pi = 123\dots m$, can anything be said about the growth rate of $L_n(\pi)$? For which value of $m$ does the count first become exponential?
\item Can anything be said about the growth rate of $L_n(\pi)$ vs $L_n(\pi')$, where $\pi$ and $\pi'$ are respectively patterns of length $i$ and $i+1$ and $n\gg i$?
\item Which patterns are the easiest to avoid in Latin Squares? The hardest? If $\pi$ and $\pi'$ are of the same length, how different can $L_n (\pi)$ and $L_n(\pi ')$ be?
\item Are there any Wilf-equivalences outside of those mentioned in Theorem \ref{comprev}?
\item What happens when pattern avoidance is defined to require the permutations induced by each symbol (see the footnote in Section 2) to avoid the target pattern as well?
\item Can anything be said about Latin Squares avoiding a specific pattern (or set of patterns) in the rows, and a different pattern (or set of patterns) in the columns? For example, to avoid $123$ in the columns and $321$ in the rows, we can take a 123 avoiding square and reflect it through the vertical axis. This means that there are $n$ Latin Squares with this structure. Can something more interesting be said using larger patterns?
\item Instead of avoidance or containment of specific patterns, can we build Latin Squares using a set of permutations in a (set of) avoidance classes? How many can be built?
\item Is there a closed form expression for $L_n(\pi_n)$? Such an expression, or even bounds, could be combined with Theorem 8 to find better bounds on $L_n$.
\end{itemize}
We end with a final unexplored generalization leading to additional open questions. Here we define a \emph{Latin Rectangle} to be any rectangular array with entries in $1,\dots,n$ and no repeats in any row or column. The usual definition of a Latin Rectangle requires the number of columns to be $n$; however, we wish to examine ``sub-rectangles'' induced by choosing any $p$ rows and $q$ columns of a Latin Square, and these sub-rectangles will not always fit the traditional definition. Call two Latin Rectangles \emph{order isomorphic} if one can be obtained from the other by applying an increasing function $f$ to each entry. For a Latin Rectangle $R$, we say a Latin Square \emph{contains the pattern $R$} if it has some sub-rectangle which is order isomorphic to $R$. For example, consider the Latin Square in Figure \ref{connolly}. The sub-rectangle at rows $2$ and $7$ and columns $1$, $5$ and $9$ is shown below.
\[
\begin{array}{|c|c|c|}\hline
6&8&3\\\hline
1&6&8\\\hline
\end{array}
\]
This is order isomorphic to the rectangle,
\[R=
\begin{array}{|c|c|c|}\hline
3&4&2\\\hline
1&3&4\\\hline
\end{array}
\]
so we say the original square contains the pattern $R$.
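Order isomorphism is conveniently tested by replacing each entry with its rank among the distinct values present; two rectangles are order isomorphic exactly when their rank patterns agree. The following Python sketch (an illustration, not from the paper) verifies the example above against the square of Figure \ref{connolly}:

```python
def pattern_of(rect):
    """Replace each entry by its rank among the distinct values in rect."""
    ranks = {v: r for r, v in
             enumerate(sorted({x for row in rect for x in row}), start=1)}
    return [[ranks[x] for x in row] for row in rect]

square = [
    [3, 6, 9, 2, 5, 8, 1, 4, 7],
    [6, 9, 2, 5, 8, 1, 4, 7, 3],
    [9, 2, 5, 8, 1, 4, 7, 3, 6],
    [2, 5, 8, 1, 4, 7, 3, 6, 9],
    [5, 8, 1, 4, 7, 3, 6, 9, 2],
    [8, 1, 4, 7, 3, 6, 9, 2, 5],
    [1, 4, 7, 3, 6, 9, 2, 5, 8],
    [4, 7, 3, 6, 9, 2, 5, 8, 1],
    [7, 3, 6, 9, 2, 5, 8, 1, 4],
]
# Rows 2 and 7, columns 1, 5 and 9 (1-based) of the Figure-1 square.
sub = [[square[i][j] for j in (0, 4, 8)] for i in (1, 6)]
assert sub == [[6, 8, 3], [1, 6, 8]]
R = [[3, 4, 2], [1, 3, 4]]
assert pattern_of(sub) == R  # the sub-rectangle is order isomorphic to R
```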
Every problem addressed in the paper can be expressed in terms of rectangular patterns. For example, enumerating $L_n(123)$ is equivalent to counting Latin Squares which avoid both
$R=\begin{array}{|c|c|c|}\hline
1&2&3\\\hline
\end{array}$
and the $90^\circ$ clockwise rotation of $R$. Many questions about permutation patterns generalize to rectangular patterns, so that there is a great deal of research that can be done in this area.
\vspace{3mm}
\section{Acknowledgements}
The research of both authors was supported by NSF REU grant 1004624. We thank Anant Godbole, Project Director, and the other participants for useful discussions. We particularly thank Dan Daly for the data he provided, for initially proposing the question of pattern avoidance in Latin Squares, and for his support while we worked on the problem. Many of the open questions were raised by participants at the 11th International Permutation Patterns Conference in Paris. Finally, we thank our anonymous referees for their valuable feedback.
\bibliographystyle{plain}
| {
"timestamp": "2014-02-17T02:01:22",
"yymm": "1402",
"arxiv_id": "1402.3336",
"language": "en",
"url": "https://arxiv.org/abs/1402.3336",
"abstract": "In this paper we study pattern avoidance in Latin Squares, which gives us a two dimensional analogue of the well studied notion of pattern avoidance in permutations. Our main results include enumerating and characterizing the Latin Squares which avoid patterns of length three and a generalization of the Erdős-Szekeres theorem. We also discuss equivalence classes among longer patterns, and conclude by describing open questions of interest both in light of pattern avoidance and their potential to reveal information about the structure of Latin Squares. Along the way, we show that classical results need not trivially generalize, and demonstrate techniques that may help answer future questions.",
"subjects": "Combinatorics (math.CO)",
"title": "Permutation Patterns in Latin Squares",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9891815491146486,
"lm_q2_score": 0.7154240018510026,
"lm_q1q2_score": 0.707684222424776
} |
https://arxiv.org/abs/1805.01338 | Beta polytopes and Poisson polyhedra: $f$-vectors and angles | We study random polytopes of the form $[X_1,\ldots,X_n]$ defined as convex hulls of independent and identically distributed random points $X_1,\ldots,X_n$ in $\mathbb{R}^d$ with one of the following densities: $$ f_{d,\beta} (x) = c_{d,\beta} (1-\|x\|^2)^{\beta}, \qquad \|x\| < 1, \quad \text{(beta distribution, $\beta>-1$)} $$ or $$ \tilde f_{d,\beta} (x) = \tilde{c}_{d,\beta} (1+\|x\|^2)^{-\beta}, \qquad x\in\mathbb{R}^d, \quad \text{(beta' distribution, $\beta>d/2$)}. $$ This setting also includes the uniform distribution on the unit sphere and the standard normal distribution as limiting cases. We derive exact and asymptotic formulae for the expected number of $k$-faces of $[X_1,\ldots,X_n]$ for arbitrary $k\in\{0,1,\ldots,d-1\}$. We prove that for any such $k$ this expected number is strictly monotonically increasing with $n$. Also, we compute the expected internal and external angles of these polytopes at faces of every dimension and, more generally, the expected conic intrinsic volumes of their tangent cones. By passing to the large $n$ limit in the beta' case, we compute the expected $f$-vector of the convex hull of Poisson point processes with power-law intensity function. Using convex duality, we derive exact formulae for the expected number of $k$-faces of the zero cell for a class of isotropic Poisson hyperplane tessellations in $\mathbb R^d$. This family includes the zero cell of a classical stationary and isotropic Poisson hyperplane tessellation and the typical cell of a stationary Poisson--Voronoi tessellation as special cases. In addition, we prove precise limit theorems for this $f$-vector in the high-dimensional regime, as $d\to\infty$. Finally, we relate the $d$-dimensional beta and beta' distributions to the generalized Pareto distributions known in extreme-value theory. | \section{Introduction and main results}
\subsection{Introduction}
Let $X_1,\ldots,X_n$ be random points chosen independently and uniformly from the unit sphere $\mathbb{S}^{d-1}$ or the unit ball $\mathbb{B}^{d}$. Their convex hull $[X_1,\ldots,X_n]$ is a random polytope; see Figure~\ref{fig:beta_polytope}. What is the expected number of vertices, edges, or, more generally, $k$-dimensional faces of this random polytope? What are the expected internal and external angles of this polytope? Does the expected number of $k$-dimensional faces increase if we add one more point to the sample?
In order to address these questions, it is useful (and probably even necessary) to consider a more general family of distributions including the aforementioned examples as special or limit cases. We say that a random vector in $\mathbb{R}^d$ has a $d$-dimensional \textit{beta distribution} with parameter $\beta>-1$ if its Lebesgue density is
\begin{equation}\label{eq:def_f_beta}
f_{d,\beta}(x)=c_{d,\beta} \left( 1-\left\| x \right\|^2 \right)^\beta\mathbbm{1}_{\{\|x\| < 1\}},\qquad x\in\mathbb{R}^d,\qquad
c_{d,\beta}= \frac{ \Gamma\left( \frac{d}{2} + \beta + 1 \right) }{ \pi^{ \frac{d}{2} } \Gamma\left( \beta+1 \right) }.
\end{equation}
Here, $\|x\| = (x_1^2+\ldots+x_d^2)^{1/2}$ denotes the Euclidean norm of the vector $x= (x_1,\ldots,x_d)\in\mathbb{R}^d$. The uniform distribution on the unit ball $\mathbb{B}^d$ is recovered by taking $\beta=0$, whereas the uniform distribution on the unit sphere $\mathbb{S}^{d-1}$ is the weak limit of the beta distribution, as $\beta\downarrow -1$. Very similar to the beta distributions are the \textit{beta' distributions} with Lebesgue density
\begin{equation}\label{eq:def_f_beta_prime}
\tilde{f}_{d,\beta}(x)=\tilde{c}_{d,\beta} \left( 1+\left\| x \right\|^2 \right)^{-\beta},\qquad
x\in\mathbb{R}^d,\qquad
\tilde{c}_{d,\beta}= \frac{ \Gamma\left( \beta \right) }{\pi^{ \frac{d}{2} } \Gamma\left( \beta - \frac{d}{2} \right) },
\end{equation}
where the parameter $\beta$ should satisfy $\beta>d/2$ to ensure integrability. The standard normal distribution on $\mathbb{R}^d$ can be viewed as a limiting case of both the beta and beta' family, as $\beta\to +\infty$. In fact, the four $d$-dimensional distributions mentioned above (i.e.\ the beta distribution, the beta' distribution, the normal distribution and the uniform distribution on the sphere) are characterized by a common underlying property discovered by Ruben and Miles~\cite{ruben_miles}. This characterizing property is also crucial in the present context and will be discussed in more detail below.
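As a quick numerical sanity check (an illustration, not part of the paper), one can verify for $d=1$ that the constants $c_{d,\beta}$ and $\tilde c_{d,\beta}$ in \eqref{eq:def_f_beta} and \eqref{eq:def_f_beta_prime} do normalize the densities:

```python
from math import gamma, pi

def c_beta(d, b):
    """Normalizing constant c_{d,beta} of the beta density."""
    return gamma(d / 2 + b + 1) / (pi ** (d / 2) * gamma(b + 1))

def c_beta_prime(d, b):
    """Normalizing constant of the beta' density; requires b > d/2."""
    return gamma(b) / (pi ** (d / 2) * gamma(b - d / 2))

def midpoint(f, a, b, m=100000):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / m
    return h * sum(f(a + (k + 0.5) * h) for k in range(m))

# d = 1, beta = 2: the beta density integrates to one over (-1, 1).
c = c_beta(1, 2.0)
assert abs(c - 15 / 16) < 1e-12
assert abs(midpoint(lambda x: c * (1 - x * x) ** 2, -1.0, 1.0) - 1.0) < 1e-6

# d = 1, beta = 3: the beta' density integrates to one over the real line
# (the tails beyond |x| = 100 contribute less than 1e-10).
ct = c_beta_prime(1, 3.0)
assert abs(midpoint(lambda x: ct * (1 + x * x) ** -3.0,
                    -100.0, 100.0) - 1.0) < 1e-6
```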
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth ]{convex_hull_sphere1.png}
\includegraphics[width=0.49\textwidth]{convex_hull_ball2.png}
\end{center}
\caption{
Convex hull of $n=1000$ uniformly distributed points on the sphere (left figure, beta polytope with $\beta=-1$) and the ball (right figure, beta polytope with $\beta=0)$.
}
\label{fig:beta_polytope}
\end{figure}
Convex hulls of $n\geq d+1$ independent random points sampled according to these distributions in $\mathbb{R}^d$ are referred to as \textit{beta} and \textit{beta' polytopes}. Beta and beta' polytopes for the particular case $n=d+1$ (where these polytopes are simplices with probability one) were considered in the works of Miles \cite{miles}, Ruben and Miles \cite{ruben_miles} and, more recently, by Grote, Kabluchko and Th\"ale \cite{beta_simplices}. Asymptotic properties of the beta and beta' polytopes in the general case $n\geq d+1$ were studied by Affentranger \cite{affentranger}, while explicit formulae for some characteristics of these polytopes like the expected intrinsic volumes and the expected number of hyperfaces were derived by Kabluchko, Temesvari and Th\"ale \cite{beta_polytopes}. Let us also point out that the class of beta' polytopes plays a crucial role in the recent study of spherical convex hulls of random points on half-spheres. This connection has been exploited in the works of Bonnet, Grote, Temesvari, Th\"ale, Turchi and Wespi \cite{bonnet_etal} and Kabluchko, Marynych, Temesvari and Th\"ale \cite{convex_hull_sphere}. In this light, the present paper can be regarded as a continuation of our previous works on beta and beta' polytopes. Its main results, which will be presented in Sections~\ref{subsec:MainForBeta} and~\ref{subsec:MainForBetaprime}, can roughly be summarized as follows.
\begin{itemize}
\item[(a)] We provide an explicit formula for the expected number of $k$-dimensional faces of beta and beta' polytopes, for every $k\in\{0,1,\ldots,d-1\}$.
\item[(b)] We prove that the expected number of $k$-dimensional faces strictly increases if new points are added to the sample, again for every $k\in\{0,1,\ldots,d-1\}$.
\item[(c)] We compute the expected external and internal angles of beta and beta' polytopes.
\end{itemize}
In addition, these results have a number of corollaries which are presented in Sections~\ref{subsec:PPP}, \ref{subsec:convex_hulls_half_sphere}, \ref{subsec:PHT}, \ref{subsec:PHT_asympt} and~\ref{subsec:extreme_pareto}. They can be summarized as follows.
\begin{itemize}
\item[(d)] We provide a formula for the expected number of $k$-dimensional faces of the convex hull of a Poisson point process with power-law intensity, for every $k\in\{0,1,\ldots,d-1\}$.
\item[(e)] From (d) we deduce a formula for the expected number of $k$-faces of the zero cell of a Poisson hyperplane tessellation and the typical cell of the Poisson--Voronoi tessellation.
\item[(f)] We provide asymptotic formulae for the expected $f$-vector of these cells in high dimensions, i.e., as $d$ goes to $\infty$.
\item[(g)] We relate the characterizing property of the beta and beta' distributions which is crucial for obtaining the above results to the properties of the generalized Pareto distributions known in extreme-value theory.
\end{itemize}
As already mentioned above, the standard Gaussian distribution appears as the large $\beta$ limit of both beta and beta' distributions. More concretely, we have the following
\begin{lemma}\label{lem:gauss_limit}
If $X(\beta)$ is a random point in $\mathbb{R}^d$ with density either $f_{d,\beta}$ or $\tilde f_{d, \beta}$, then $\sqrt{2\beta} X(\beta)$ converges weakly to the standard normal distribution on $\mathbb{R}^d$, as $\beta\to +\infty$.
\end{lemma}
\begin{proof}
Write down the density of $\sqrt{2\beta} X(\beta)$, verify that it converges pointwise to the standard normal density and appeal to Scheff\'e's lemma.
\end{proof}
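The pointwise convergence in this proof is easy to observe numerically. The following Python sketch (an illustration for $d=1$, not part of the paper) compares the density of $\sqrt{2\beta}\,X(\beta)$ for a large $\beta$ with the standard normal density:

```python
from math import exp, lgamma, log, pi, sqrt

def rescaled_beta_density(y, b):
    """Density of sqrt(2b) * X(b) at y, for d = 1 and X(b) with density f_{1,b}."""
    s = sqrt(2 * b)
    x = y / s
    if abs(x) >= 1:
        return 0.0
    # log of c_{1,b}, computed via lgamma to avoid overflow for large b
    log_c = lgamma(b + 1.5) - 0.5 * log(pi) - lgamma(b + 1)
    return exp(log_c + b * log(1 - x * x)) / s

def normal_density(y):
    return exp(-y * y / 2) / sqrt(2 * pi)

# Pointwise convergence of the rescaled density to the standard normal one;
# by Scheffe's lemma this yields the weak convergence stated in the lemma.
for y in (0.0, 0.5, 1.0, 2.0):
    assert abs(rescaled_beta_density(y, 5000.0) - normal_density(y)) < 1e-3
```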
Most of the results of the present paper can be translated to Gaussian polytopes by taking the limit $\beta\to+\infty$. Since in the Gaussian setting most results are not new and admit simpler and more elegant proofs, see, e.g., \cite{kabluchko_thaele} for the proof of monotonicity, we refrain from considering the Gaussian case here.
\subsection{Main results for beta polytopes}\label{subsec:MainForBeta}
Let $X_1,\ldots,X_n$ be independent identically distributed (i.i.d.)\ random points in $\mathbb{R}^d$ with beta density $f_{d,\beta}$ and assume that $d\geq 2$ and $n\geq d+1$. Their convex hull will be denoted by
$$
P_{n,d}^\beta := [X_1,\ldots,X_n].
$$
Unless otherwise stated, in all results on beta polytopes the parameter $\beta$ satisfies $\beta\geq -1$, where the value $\beta=-1$ corresponds to the uniform distribution on the unit sphere $\mathbb{S}^{d-1}$. We are interested in various characteristics of the beta polytopes $P_{n,d}^\beta$.
Given a polytope $P\subset\mathbb{R}^d$, we denote by $\mathcal{F}_k(P)$, $k\in\{0,1,\ldots,d-1\}$, the set of $k$-dimensional faces of $P$ and by $f_k(P)=|\mathcal{F}_k(P)|$ their total number. Note that the random polytopes considered in the present paper are \textit{simplicial}, that is, all of their faces are simplices, with probability $1$. If $F$ is a face of $P$, let $\beta(F,P)$ (respectively, $\gamma(F,P)$) be the internal (respectively, external) solid angle at $F$. The normalization is chosen so that the solid angle of the full space is equal to $1$. For convenience of the reader, we collect the necessary background information from convex and stochastic geometry in Section~\ref{sec:facts}.
\begin{theorem}[Expected $f$-vector]\label{theo:f_vect}
For every $k\in \{0,1,\ldots, d-1\}$, the expected number of $k$-dimensional faces of $P_{n,d}^\beta$ is given by
\begin{equation}\label{eq:f_k_P_main}
\mathbb{E} f_k(P_{n,d}^{\beta})
=
2 \sum_{s=0}^\infty \binom n {d-2s} \binom {d-2s}{k+1} I_{n,d-2s}(2\beta+d) J_{d-2s,k+1}\left(\beta + s + \frac 12\right).
\end{equation}
Here, the quantities $I_{n,k}(\alpha)$ are given by the formula
\begin{equation}\label{eq:I_definition}
I_{n,k}(\alpha) = \int_{-1}^{+1} c_{1, \frac {\alpha k -1}{2}}
(1-t^2)^{\frac {\alpha k - 1}{2}}
\left(\int_{-1}^t c_{1, \frac{\alpha - 1}{2}} (1-s^2)^{\frac{\alpha - 1}{2}}\,{\rm d} s\right)^{n-k} \,{\rm d} t,
\end{equation}
while $J_{m,\ell}(\alpha)$ denotes the expected internal angle at some $(\ell-1)$-dimensional face of the simplex $[Z_1,\ldots,Z_{m}] \subseteq \mathbb{R}^{m-1}$, where $Z_1,\ldots,Z_m$ are i.i.d.\ points with density $f_{m-1,\alpha}$, that is,
\begin{equation}\label{eq:J_definition}
J_{m,\ell}(\alpha) = \mathbb{E}\beta([Z_1,\ldots,Z_{\ell}], [Z_1,\ldots,Z_m]), \qquad \ell\in\{1,2,\ldots,m\}.
\end{equation}
\end{theorem}
\begin{remark}
In this paper we shall use the convention that $\binom{a}{b}=0$ whenever $b>a$ or $b<0$. In particular, this implies that the sum in~\eqref{eq:f_k_P_main} contains only finitely many non-zero terms. More concretely, all terms with $d-2s \leq k$ vanish.
\end{remark}
\begin{remark}\label{rem:SpecialCasesMainResultBeta}
For faces of dimension $k=d-1$ and $k=d-2$ the expression in~\eqref{eq:f_k_P_main} simplifies considerably, since the only non-vanishing term is the one with $s=0$, and we get
\begin{align*}
\mathbb{E} f_{d-1}(P_{n,d}^{\beta})
=
2 \binom n {d} I_{n,d}(2\beta+d)
\qquad\text{and}\qquad
\mathbb{E} f_{d-2}(P_{n,d}^{\beta})
=
d\binom n {d} I_{n,d}(2\beta+d),
\end{align*}
where we used that $J_{m,m}(\alpha)=1$ and $J_{m,m-1}(\alpha) = 1/2$. The first formula recovers a result obtained in~\cite[Theorem~2.11, Remark 2.14]{beta_polytopes}, whereas the second one follows from the Dehn--Sommerville relation $2 f_{d-2}(P) = d f_{d-1}(P)$ valid for any $d$-dimensional simplicial polytope $P$.
\end{remark}
In the deterministic setting, it is easy to construct examples which show that adding one more point to the convex hull may increase or decrease the number of $k$-dimensional faces. However, in the setting of random polytopes, it is natural to conjecture that adding one more point should \textit{increase} the \textit{expected} number of $k$-faces, for every $k\in\{0,1,\ldots,d-1\}$. This conjecture is known to hold in several special cases. The work of Devillers, Glisse, Goaoc, Moroz and Reitzner~\cite{devillers} covers the case of the expected vertex number for convex hulls of uniformly distributed points in a planar convex body. For faces of maximal dimension it was established in the work of Beermann and Reitzner~\cite{beermann_diss,beermann_reitzner} for Gaussian polytopes and in~\cite{bonnet_etal} by Bonnet, Grote, Temesvari, Th\"ale, Turchi and Wespi for beta and beta' polytopes. So far, the only model for which monotonicity of the expected number of $k$-faces is known for arbitrary $k\in\{0,1,\ldots,d-1\}$ is that of Gaussian polytopes \cite{kabluchko_thaele}.
The explicit formula stated in Theorem~\ref{theo:f_vect} allows us to add another positive answer to the conjecture for the $k$-faces of beta polytopes, where $k\in\{0,1,\ldots,d-1\}$.
\begin{theorem}[Monotonicity of the expected $f$-vector]\label{theo:monoton}
For all $d\geq 2$, $n\geq d+1$ and $k\in \{0,1,\ldots, d-1\}$ we have
$$
\mathbb{E} f_k(P_{n,d}^{\beta}) < \mathbb{E} f_k(P_{n+1,d}^{\beta}).
$$
\end{theorem}
The quantities $I_{n,k}(\alpha)$ and $J_{m,\ell}(\alpha)$ that appeared in \eqref{eq:I_definition} and \eqref{eq:J_definition}, respectively, will play a central role in the sequel.
The next theorem shows that the quantities $I_{n,k}(\alpha)$ can be interpreted as the \textit{expected external angles} of beta simplices (and, more generally, of beta polytopes).
\begin{theorem}[Expected external angles]\label{theo:external}
Fix some $k\in\{1,\ldots,d\}$ and consider the simplex $G := [X_1,\ldots,X_k]$. The expected external angle at $G$ is given by
$$
\mathbb{E} \gamma(G, P_{n,d}^\beta)
=
I_{n,k}(2\beta + d)
$$
with the convention that $\gamma(G, P_{n,d}^\beta)=0$ if $G$ is not a face of $P_{n,d}^\beta$.
Furthermore, the random variable $\gamma(G, P_{n,d}^\beta)$ is stochastically independent of the isometry type of the simplex $G/\sqrt{1 - h^2}$, where $h:=d(0,\mathop{\mathrm{aff}}\nolimits G)$ is the distance from the origin to the affine hull of $G$.
\end{theorem}
\begin{remark}
Let us mention two alternative expressions for $I_{n,k}(\alpha)$:
\begin{align*}
I_{n,k}(\alpha)
&=\int_{-\pi/2}^{+\pi/2} c_{1,\frac{\alpha k - 1}{2}} (\cos \varphi)^{\alpha k} \left(\int_{-\pi/2}^\varphi c_{1,\frac{\alpha-1}{2}}(\cos \theta)^{\alpha} \,{\rm d} \theta \right)^{n-k} \, {\rm d} \varphi\\
&=\int_{-\infty}^{+\infty} c_{1,\frac{\alpha k - 1}{2}} (\cosh \varphi)^{-(\alpha k+1)} \left(\int_{-\infty}^\varphi c_{1,\frac{\alpha-1}{2}}(\cosh \theta)^{-(\alpha+1)} \,{\rm d} \theta \right)^{n-k} \, {\rm d} \varphi.
\end{align*}
The first formula can be obtained from~\eqref{eq:I_definition} by the change of variables $t=\sin\varphi$, $s= \sin \theta$ with $\varphi, \theta\in (-\frac \pi 2, +\frac \pi 2)$, whereas for the second we put $t= \tanh \varphi$, $s= \tanh \theta$ with $\varphi,\theta\in \mathbb{R}$.
\end{remark}
Finding an explicit formula for the \textit{expected internal angles} $J_{m,\ell}(\alpha)$ of beta simplices is a much more difficult problem (except for the two trivial cases mentioned in Remark \ref{rem:SpecialCasesMainResultBeta} above and the identity $J_{3,1}(\alpha) = 1/6$, which holds because the angles of a triangle sum to $\pi$). This is not surprising: even in the limiting case $\alpha\to\infty$, where one can show that $J_{m,\ell}(\alpha)$ tends to the internal angle at an $(\ell-1)$-dimensional face of an $(m-1)$-dimensional regular simplex, an explicit formula is not widely known. Explicit and asymptotic (as the dimension goes to $\infty$) formulae for the internal angles of regular simplices can be found in~\cite{boroczky_henk,kabluchko_zaporozhets_absorption,rogers_packing,rogers,vershik_sporyshev}.
The methods used in these papers do not seem to generalize to the finite $\alpha$ case. The problem of computing the quantities $J_{m,\ell}(\alpha)$ will be addressed elsewhere.
In the next theorem we analyze the asymptotic behavior of the expected number of $k$-faces of $P_{n,d}^\beta$ when $n\to\infty$ and all other parameters stay fixed.
\begin{theorem}[Asymptotics of the $f$-vector]\label{theo:f_vector_asympt_beta}
For any fixed $d\in\mathbb{N}$ and $k\in \{0,1,\ldots, d-1\}$ we have
\begin{align*}
\lim_{n\to\infty} n^{-\frac{d-1}{2\beta + d + 1}} \mathbb{E} f_k(P_{n,d}^\beta)
&=
\frac{2}{d!} \binom {d}{k+1} J_{d,k+1}\left(\beta + \frac 12\right)
\frac{c_{1, \frac {(2\beta+d) d -1}{2}}}{2\beta+d+1}\\
&\qquad\qquad\qquad\times\left(\frac{2\beta+d+1}{c_{1,\frac{2\beta+d -1}{2}}}\right)^{\frac{(2\beta+d) d + 1}{2\beta+d +1}}
\Gamma\left(\frac{(2\beta+d) d+1}{2\beta+d+1}\right).
\end{align*}
\end{theorem}
\begin{remark}
In the case $\beta=-1$, which corresponds to the uniform distribution on the sphere $\mathbb{S}^{d-1}$, the above simplifies to
$$
\lim_{n\to\infty} \frac 1n \mathbb{E} f_k(P_{n,d}^{-1})
={2^d\pi^{{d\over 2}-1}\over d(d-1)^2}{d\choose k+1}J_{d,k+1}\left(-{1\over 2}\right){\Gamma(1+{d(d-2)\over 2})\over\Gamma({(d-1)^2\over 2})}\left({\Gamma({d+1\over 2})\over\Gamma({d\over 2})}\right)^{d-1}.
$$
Except for the case $k=d-1$, where $J_{d,d}(-1/2)=1$ and which is mentioned in Buchta, M\"uller and Tichy \cite{buchta_mueller_tichy}, such an explicit result seems to be new, although the order of $\mathbb{E} f_k(P_{n,d}^{-1})$ in $n$ was determined in the thesis \cite{stemeseder_phd} using entirely different tools. Similarly, in the case $\beta=0$ corresponding to the uniform distribution on the ball $\mathbb{B}^d$, we obtain
\begin{equation}\label{eq:Limitf_kBall}
\begin{split}
\lim_{n\to\infty} n^{-\frac{d-1}{d+1}} \mathbb{E} f_k(P_{n,d}^0) &= {2\pi^{d(d-1)\over 2(d+1)}\over (d+1)!}{d\choose k+1}J_{d,k+1}\left({1\over 2}\right)\\
&\qquad\qquad\times{\Gamma(1+{d^2\over 2})\Gamma({d^2+1\over d+1})\over\Gamma({d^2+1\over 2})}\left({(d+1)\Gamma({d+1\over 2})\over \Gamma(1+{d\over 2})}\right)^{d^2+1\over d+1}.
\end{split}
\end{equation}
Again, except for the case $k=d-1$, which is treated in \cite{affentranger}, such an explicit result seems new.
\end{remark}
Let us point out the following connection to a question of Reitzner. In \cite{ReitznerCombinatorialStructure} he has shown that if $K_n$ is the convex hull of $n\geq d+1$ uniformly distributed random points in a convex body $K\subseteq\mathbb{R}^d$ with twice differentiable boundary $\partial K$ and everywhere positive Gaussian curvature $\kappa(\,\cdot\,)$ then, for every $k\in\{0,1,\ldots,d-1\}$,
\begin{equation}\label{eq:ReitznerExpectation}
\lim_{n\to\infty}n^{-{d-1\over d+1}}\mathbb{E} f_k(K_n) = c_{d,k}\Omega(K)\qquad\text{with}\qquad\Omega(K):=\int_{\partial K}\kappa(x)^{1\over d+1}\,\mathcal{H}^{d-1}(\textup{d} x)
\end{equation}
being the so-called \textit{affine surface area} of $K$ (here $\mathcal{H}^{d-1}$ denotes the $(d-1)$-dimensional Hausdorff measure), and where $c_{d,k}$ is a constant depending only on $d$ and $k$. Unfortunately, as pointed out in \cite[p.\ 181]{ReitznerCombinatorialStructure}, it has so far not been possible to determine the constant $c_{d,k}$ explicitly and in an accessible form. But since \eqref{eq:ReitznerExpectation} holds in particular for $K=\mathbb{B}^d$, and since the affine surface area of $\mathbb{B}^d$ is $\Omega(\mathbb{B}^d)=2\pi^{d/2}/\Gamma({d\over 2})$, we can identify $c_{d,k}$ with $\Omega(\mathbb{B}^d)^{-1}$ times the right-hand side of \eqref{eq:Limitf_kBall}. We summarize these findings in the next proposition.
\begin{proposition}
The constant $c_{d,k}$ in \eqref{eq:ReitznerExpectation} is given by
\begin{equation*}
c_{d,k} = {\pi^{-{d\over d+1}}\over (d+1)!}{d\choose k+1}J_{d,k+1}\left({1\over 2}\right){\Gamma({d\over 2})\Gamma(1+{d^2\over 2})\Gamma({d^2+1\over d+1})\over\Gamma({d^2+1\over 2})}\left({(d+1)\Gamma({d+1\over 2})\over \Gamma(1+{d\over 2})}\right)^{d^2+1\over d+1}.
\end{equation*}
\end{proposition}
\begin{remark}
We remark that for Gaussian polytopes, a representation of this type, involving the interior angle of a regular simplex, is well known from \cite[Equations (4.1) and (4.2)]{HMR04}.
\end{remark}
In the next theorem we evaluate the expected conic intrinsic volumes of the tangent cones at faces of the beta polytope. The definition of tangent cones and conic intrinsic volumes (which include internal and external solid angles as special cases), together with a list of their properties, will be given in Section~\ref{sec:facts}.
\begin{theorem}[Expected conic intrinsic volumes of tangent cones]\label{theo:expected_conic_tangent}
Fix some $k\in\{1,\ldots,d\}$ and consider the simplex $G := [X_1,\ldots,X_k]$. Then, for every $j\in\{k-1,\ldots,d\}$, the expected $j$-th conic intrinsic volume of the tangent cone $T(G, P_{n,d}^\beta)$ at $G$ is given by
\begin{align*}
\mathbb{E} \upsilon_{j} (T(G, P_{n,d}^{\beta}))&={n-k\choose j-k+1}I_{n,j+1}(2\beta+d) J_{j+1,k}\left(\beta + \frac{d - j}2\right)+\mathbbm{1}_{\{j=d\}}\mathbb{P}\left[G\notin\mathcal{F}_{k-1}(P_{n,d}^\beta)\right],
\end{align*}
with the convention that $T(G, P_{n,d}^{\beta})=\mathbb{R}^d$ if $G$ is not a face of $P_{n,d}^{\beta}$.
\end{theorem}
Taking $j=k-1$ and observing that $\upsilon_{k-1} (T(G, P_{n,d}^{\beta})) = \gamma(T(G, P_{n,d}^{\beta}))$ (this holds because, provided $G$ is a face, the tangent cone has the $(k-1)$-dimensional linear subspace parallel to the affine hull of $G$ as its lineality space), we recover Theorem~\ref{theo:external} as a special case of Theorem~\ref{theo:expected_conic_tangent}. At the other extreme, we may take $j=d$, which leads to the following result.
\begin{corollary}[Expected internal angles]\label{theo:internal}
Fix some $k\in\{1,\ldots,d\}$ and consider the simplex $G := [X_1,\ldots,X_k]$. The expected internal angle at $G$ is given by
\begin{align*}
\mathbb{E} \beta(G, P_{n,d}^\beta) &={n-k\choose d-k+1}I_{n,d+1}(2\beta+d) J_{d+1,k}\left(\beta\right)+\mathbb{P}\left[G\notin\mathcal{F}_{k-1}(P_{n,d}^\beta)\right],
\end{align*}
with the convention that $\beta(G, P_{n,d}^\beta)=1$ if $G$ is not a face of $P_{n,d}^{\beta}$.
\end{corollary}
\subsection{Main results for beta' polytopes}\label{subsec:MainForBetaprime}
In this section we present our results for beta' polytopes. Let $X_1,\ldots,X_n$ be i.i.d.\ random points in $\mathbb{R}^d$ with density $\tilde f_{d,\beta}$. Their convex hull will be denoted by
$$
\tilde P_{n,d}^\beta = [X_1,\ldots,X_n].
$$
We assume that $n\geq d+1$, so that $\tilde P_{n,d}^\beta$ has full dimension $d$. The following is the analogue of Theorem \ref{theo:f_vect}.
\begin{theorem}[Expected $f$-vector]\label{theo:f_vect_prime}
For every $k\in \{0,1,\ldots, d-1\}$, the expected number of $k$-dimensional faces of $\tilde P_{n,d}^\beta$ is given by
\begin{equation}
\mathbb{E} f_k(\tilde P_{n,d}^{\beta})
=
2 \sum_{s=0}^\infty \binom n {d-2s} \binom {d-2s}{k+1} \tilde I_{n,d-2s}(2\beta-d) \tilde J_{d-2s,k+1}\left(\beta - s - \frac 12\right).
\end{equation}
Here, the quantities $\tilde I_{n,k}(\alpha)$ are given by the formula
\begin{equation}\label{eq:I_definition_prime}
\tilde I_{n,k}(\alpha)
=
\int_{-\infty}^{+\infty} \tilde c_{1, \frac {\alpha k + 1}{2}}
(1+t^2)^{-\frac {\alpha k + 1}{2}}
\left(\int_{-\infty}^t \tilde c_{1, \frac{\alpha + 1}{2}} (1+s^2)^{-\frac{\alpha + 1}{2}}\,{\rm d} s\right)^{n-k} {\rm d} t,
\end{equation}
while $\tilde J_{m,\ell}(\alpha)$ denotes the expected internal angle at some $(\ell-1)$-dimensional face of the simplex $[Z_1,\ldots,Z_{m}] \subset \mathbb{R}^{m-1}$, where $Z_1,\ldots,Z_m$ are i.i.d.\ points with density $\tilde f_{m-1,\alpha}$, that is,
\begin{equation}\label{eq:J_definition_prime}
\tilde J_{m,\ell}(\alpha) = \mathbb{E} \beta([Z_1,\ldots,Z_{\ell}], [Z_1,\ldots,Z_m]), \qquad \ell\in\{1,\ldots,m\}.
\end{equation}
\end{theorem}
The next theorem is the analogue of Theorem \ref{theo:monoton} and shows that the expected $f$-vector is strictly monotonically increasing as a function of the number $n$ of points.
\begin{theorem}[Monotonicity of the expected $f$-vector]\label{theo:monoton_prime}
For all $d\geq 2$, $n\geq d+1$ and $k\in\{0,1,\ldots, d-1\}$ we have
$$
\mathbb{E} f_k(\tilde P_{n,d}^{\beta}) < \mathbb{E} f_k(\tilde P_{n+1,d}^{\beta}).
$$
\end{theorem}
Our next result, concerning external angles of beta' polytopes, is the analogue of Theorem \ref{theo:external}.
\begin{theorem}[Expected external angles]\label{theo:external_prime}
Fix some $k\in\{1,\ldots,d\}$ and consider the simplex $G := [X_1,\ldots,X_k]$. The expected external angle at $G$ is given by
$$
\mathbb{E} \gamma(G, \tilde P_{n,d}^\beta)
=
\tilde I_{n,k}(2\beta - d)
$$
with the convention that $\gamma(G, \tilde P_{n,d}^\beta)=0$ if $G$ is not a face of $\tilde P_{n,d}^\beta$. Furthermore, the random variable $\gamma(G, \tilde P_{n,d}^\beta)$ is stochastically independent of the isometry type of the simplex $G/\sqrt{1 + h^2}$, where $h:=d(0,\mathop{\mathrm{aff}}\nolimits G)$ is the distance from the origin to the affine hull of $G$.
\end{theorem}
\begin{remark}
As in the beta case, we have two alternative expressions for $\tilde I_{n,k}(\alpha)$:
\begin{align*}
\tilde I_{n,k}(\alpha)
&=\int_{-\pi/2}^{+\pi/2} \tilde c_{1,\frac{\alpha k + 1}{2}} (\cos \varphi)^{\alpha k-1} \left(\int_{-\pi/2}^\varphi \tilde c_{1,\frac{\alpha+1}{2}}(\cos \theta)^{\alpha-1} \,{\rm d} \theta \right)^{n-k} \, {\rm d} \varphi\\
&=\int_{-\infty}^{+\infty} \tilde c_{1,\frac{\alpha k + 1}{2}} (\cosh \varphi)^{-\alpha k} \left(\int_{-\infty}^\varphi \tilde c_{1,\frac{\alpha+1}{2}}(\cosh \theta)^{-\alpha} \,{\rm d} \theta \right)^{n-k} \, {\rm d} \varphi.
\end{align*}
These formulae can be obtained from~\eqref{eq:I_definition_prime} by the changes of variables $t=\tan \varphi$, $s= \tan \theta$, and $t= \sinh \varphi$, $s= \sinh \theta$, respectively.
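For the first representation, the substitution can be verified directly on the outer integrand: with $t=\tan\varphi$ one has ${\rm d} t=(\cos\varphi)^{-2}\,{\rm d}\varphi$ and $1+t^2=(\cos\varphi)^{-2}$, so that

```latex
(1+t^2)^{-\frac{\alpha k + 1}{2}}\,{\rm d} t
= (\cos\varphi)^{\alpha k + 1}\,(\cos\varphi)^{-2}\,{\rm d}\varphi
= (\cos\varphi)^{\alpha k - 1}\,{\rm d}\varphi,
```

and the inner integrand transforms in the same way with $k$ replaced by $1$.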
\end{remark}
Finally, we present a formula for the expected conic intrinsic volumes of the tangent cones at the faces of a beta' polytope.
\begin{theorem}[Expected conic intrinsic volumes of tangent cones]\label{theo:expected_conic_tangent_prime}
Fix some $k\in\{1,\ldots,d\}$ and consider the simplex $G := [X_1,\ldots,X_k]$. Then, for every $j\in\{k-1,\ldots,d\}$, the expected $j$-th conic intrinsic volume of the tangent cone $T(G, \tilde P_{n,d}^\beta)$ at $G$ is given by
\begin{align*}
\mathbb{E} \upsilon_{j} (T(G, \tilde P_{n,d}^{\beta}))&={n-k\choose j-k+1}\tilde I_{n,j+1}(2\beta-d) \tilde J_{j+1,k}\left(\beta - \frac{d - j}2\right)+\mathbbm{1}_{\{j=d\}}\mathbb{P}\left[G\notin\mathcal{F}_{k-1}(\tilde P_{n,d}^\beta)\right],
\end{align*}
with the convention that $T(G, \tilde P_{n,d}^{\beta})=\mathbb{R}^d$ if $G$ is not a face of $\tilde P_{n,d}^{\beta}$.
\end{theorem}
Taking $j=d$ we also have the following analogue of Corollary \ref{theo:internal}, while with the choice $j=k-1$ we recover Theorem \ref{theo:external_prime}.
\begin{corollary}[Expected internal angles]\label{theo:internal_prime}
Fix some $k\in\{1,\ldots,d\}$ and consider the simplex $G := [X_1,\ldots,X_k]$. The expected internal angle at $G$ is given by
\begin{align*}
\mathbb{E} \beta(G, \tilde P_{n,d}^\beta) &={n-k\choose d-k+1}\tilde I_{n,d+1}(2\beta-d) \tilde J_{d+1,k}\left(\beta\right)+\mathbb{P}\left[G\notin\mathcal{F}_{k-1}(\tilde P_{n,d}^\beta)\right],
\end{align*}
with the convention that $\beta(G, \tilde P_{n,d}^\beta)=1$ if $G$ is not a face of $\tilde P_{n,d}^{\beta}$.
\end{corollary}
\begin{remark}
The methods of the present paper can be adapted to treat the \textit{symmetric} beta and beta' polytopes which are defined as the convex hulls of $\pm X_1,\ldots,\pm X_n$, where $X_1,\ldots,X_n$ are i.i.d.\ with beta or beta' distribution. However, we refrain from considering symmetric polytopes in this paper.
\end{remark}
\subsection{Poisson point processes with power-law intensity}\label{subsec:PPP}
In the large $n$ limit, rescaled samples from the beta' distribution converge to the Poisson point process with a power-law intensity function. This can be used to obtain results on the convex hull of this class of Poisson point process. For $\alpha>0$ let $\Pi_{d,\alpha}$ be a Poisson point process on $\mathbb{R}^d\backslash\{0\}$ with power-law intensity function
$$
x\mapsto \|x\|^{-d-\alpha},\qquad x\in \mathbb{R}^d \backslash \{0\}.
$$
The number of points of $\Pi_{d,\alpha}$ outside any ball centered at the origin is finite, but the total number of points is infinite, and, in fact, the origin is an accumulation point for the atoms of $\Pi_{d,\alpha}$, with probability $1$; see the left panel of Figure~\ref{fig:poisson}. The convex hull of the atoms of $\Pi_{d,\alpha}$ will be denoted by $\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha}$. In~\cite{convex_hull_sphere} it was shown that $\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha}$ is almost surely a polytope, and explicit formulae for its expected intrinsic volumes and expected number of $(d-1)$-dimensional faces were given. Using the results obtained in Section \ref{subsec:MainForBetaprime} we can now provide an explicit formula for the expected number of $k$-dimensional faces of $\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha}$ for any $k\in\{0,1,\ldots,d-1\}$.
\begin{theorem}\label{theo:f_vect_poisson}
For every $d\in\mathbb{N}$ and $k\in \{0,1,\ldots,d-1\}$, the expected number of $k$-faces of $\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha}$ is given by
\begin{equation} \label{eq:E_f_k_beta_prime_to_poisson}
\begin{split}
&\mathbb{E} f_k(\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha})
=
\lim_{n\to\infty}\mathbb{E} f_k\left(\tilde P_{n,d}^{\frac{d+\alpha}{2}}\right)
\\
&\quad=2 \sum_{\substack{m\in \{k+1,\ldots,d\}\\ m\equiv d \Mod{2}}} \frac{\Gamma\left(\frac{m\alpha + 1}{2}\right) \Gamma\left(\frac \alpha 2\right)^{m}}{\Gamma\left(\frac{m\alpha}{2}\right)\Gamma\left(\frac{\alpha+1}{2}\right)^{m}}\frac{(\sqrt \pi \alpha)^{m-1}}{m} \binom {m}{k+1} \tilde J_{m,k+1}\left(\frac{m-1+\alpha}{2}\right),
\end{split}
\end{equation}
where $\tilde J_{m,k+1}(\alpha)$ is defined as in Theorem~\ref{theo:f_vect_prime}.
\end{theorem}
\begin{remark}\label{rem:f_d-1_d-2}
For faces of dimensions $k=d-1$ and $k=d-2$ the result simplifies to
\begin{align*}
\mathbb{E} f_{d-1}(\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha})
&=
\frac 2d (\sqrt \pi \alpha)^{d-1} \frac{\Gamma\left(\frac{d\alpha + 1}{2}\right) \Gamma\left(\frac \alpha 2\right)^{d}}{\Gamma\left(\frac{d\alpha}{2}\right)\Gamma\left(\frac{\alpha+1}{2}\right)^{d}},\\
\mathbb{E} f_{d-2}(\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha})
&=
\frac d2 \mathbb{E} f_{d-1}(\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha}).
\end{align*}
The first formula was obtained in~\cite[Corollary 2.13]{convex_hull_sphere}, whereas the second one holds for every simplicial polytope by the Dehn--Sommerville equations (and hence here even almost surely). Note that although the intensity function used here differs by a multiplicative constant from that used in~\cite{convex_hull_sphere}, the expected $f$-vector is the same in both cases: for a power-law intensity, multiplying the intensity by a constant amounts to a spatial rescaling of the Poisson point process, which does not affect the $f$-vector of the convex hull.
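The Dehn--Sommerville relation used here can be seen by a direct double counting: each $(d-2)$-face of a simplicial polytope $P$ lies in exactly two facets, while each facet, being a $(d-1)$-simplex, has exactly $d$ faces of dimension $d-2$. Hence

```latex
2\, f_{d-2}(P) \;=\; \sum_{F\in\mathcal{F}_{d-1}(P)} f_{d-2}(F) \;=\; d\, f_{d-1}(P).
```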
\end{remark}
\subsection{Convex hulls on the half-sphere}\label{subsec:convex_hulls_half_sphere}
Let us mention an application of the above results to the random spherical convex hulls first studied by B\'ar\'any, Hug, Reitzner and Schneider~\cite{barany_etal}. Let $U_1,\ldots,U_n$ be independent random points distributed uniformly on the $d$-dimensional upper half-sphere $\mathbb{S}^d_+ = \mathbb{S}^d\cap \{x_0\geq 0\}\subset\mathbb{R}^{d+1}$. Let $C_n:=\mathop{\mathrm{pos}}\nolimits (U_1,\ldots,U_n)$ be the random cone generated by these points.
The $f$-vector of the random spherical polytope $C_n\cap \mathbb{S}^d_+$ has the same distribution as the $f$-vector of $\tilde P_{n,d}^{\beta}$ with $\beta= (d+1)/2$; see~\cite{bonnet_etal,convex_hull_sphere}.
Theorem~\ref{theo:f_vect_prime} with $m:=d-2s$ yields
$$
\mathbb{E} f_k(C_n\cap \mathbb{S}^d_+)
=
2 \sum_{\substack{m\in \{k+1,\ldots,d\}\\ m\equiv d \Mod{2}}} \binom n {m} \binom {m}{k+1} \tilde I_{n,m}(1) \tilde J_{m,k+1}\left(\frac{m}{2}\right)
$$
for all $k\in\{0,\ldots,d-1\}$. Further, Theorem~\ref{theo:monoton_prime} implies that the expected $f$-vector of the random spherical polytope $C_n\cap \mathbb{S}^d_+$ increases component-wise with $n$. In fact, the limits to which these vectors converge, as $n\to\infty$, are finite. Namely, in~\cite{convex_hull_sphere} it was shown that
$$
\lim_{n\to\infty} \mathbb{E} f_{k+1}^\ell(C_n) = \lim_{n\to\infty} \mathbb{E} f_{k}^\ell(C_n\cap \mathbb{S}^d_+) = \mathbb{E} f_{k}^\ell(\mathop{\mathrm{conv}}\nolimits \Pi_{d,1}),
$$
for $k\in\{0,1,\ldots,d-1\}$ and any $\ell\in\mathbb{N}$.
Using Theorem~\ref{theo:f_vect_poisson}, we arrive at the following asymptotic formula for the particular case $\ell=1$:
$$
\lim_{n\to\infty} \mathbb{E} f_{k}(C_n\cap \mathbb{S}^d_+)
=
2\sqrt \pi \sum_{\substack{m\in \{k+1,\ldots,d\}\\ m\equiv d \Mod{2}}} \frac{\Gamma\left(\frac{m + 1}{2}\right)}{\Gamma\left(\frac{m}{2}\right)} \frac{\pi^{m-1}}{m} \binom {m}{k+1} \tilde J_{m,k+1}\left(\frac{m}{2}\right).
$$
The cases $k\in\{0, d-1,d-2\}$ were treated in~\cite{barany_etal}. In particular, the limit for $k=0$ was expressed in~\cite[Theorem~7.1]{barany_etal} in terms of a certain constant $C(d)$ given as a multiple integral in~\cite[Equation (22)]{barany_etal}. The limit of the complete expected $f$-vector was expressed in~\cite[Theorem~2.4]{convex_hull_sphere} in terms of multiple integrals that can be interpreted as absorption probabilities of the Poisson point process $\Pi_{d,1}$. Our approach provides an alternative formula in terms of the quantities $\tilde J_{m,\ell}(\alpha)$.
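As a consistency check (added here for illustration, not part of the mathematical argument), note that for $k=d-1$ only the summand with $m=d$ survives in the asymptotic formula above, and $\tilde J_{d,d}(\cdot)=1$ because the internal angle of a simplex at itself equals one. The resulting expression must agree with the formula for $\mathbb{E} f_{d-1}(\mathop{\mathrm{conv}}\nolimits \Pi_{d,1})$ in Remark~\ref{rem:f_d-1_d-2} (with $\alpha=1$), which the following numerical sketch confirms:

```python
import math

def limit_f_top_half_sphere(d):
    # k = d-1 term of the displayed sum: only m = d contributes,
    # and the internal-angle factor J~_{d,d} equals 1.
    return 2 * math.sqrt(math.pi) * math.gamma((d + 1) / 2) / math.gamma(d / 2) \
        * math.pi ** (d - 1) / d

def limit_f_top_poisson(d, alpha=1.0):
    # E f_{d-1}(conv Pi_{d,alpha}) from Remark rem:f_d-1_d-2
    return (2 / d) * (math.sqrt(math.pi) * alpha) ** (d - 1) \
        * math.gamma((d * alpha + 1) / 2) * math.gamma(alpha / 2) ** d \
        / (math.gamma(d * alpha / 2) * math.gamma((alpha + 1) / 2) ** d)

for d in range(2, 8):
    assert abs(limit_f_top_half_sphere(d) / limit_f_top_poisson(d) - 1) < 1e-10
```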
Let us finally comment on the first equality in~\eqref{eq:E_f_k_beta_prime_to_poisson}. It follows from standard results in extreme-value theory that if $X_1,X_2,\ldots$ are i.i.d.\ points in $\mathbb{R}^d$ with density $\tilde f_{d,\frac {d+\alpha}{2}}$, then the point process
$$
\sum_{j=1}^n \delta_{n^{-1/\alpha} X_j}
$$
converges, as $n\to\infty$, to the Poisson point process $\Pi_{d,\alpha}$ weakly on the space of locally finite integer-valued measures on $\mathbb{R}^d\backslash\{0\}$ endowed with the vague topology, see \cite[Equation (4.6)]{convex_hull_sphere}. From this one can deduce the distributional convergence
$$
f_k (\tilde P_{n,d}^{\frac{d+\alpha}{2}}) \overset{d}{\underset{n\to\infty}\longrightarrow} f_k (\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha})
$$
together with the convergence of all moments from the continuous mapping theorem as in~\cite{convex_hull_sphere}. In fact, in~\cite{convex_hull_sphere} we considered only the case $\alpha=1$ (which was tailored towards the application to convex hulls on the half-sphere~\cite{barany_etal}), but the same method of proof applies to any $\alpha>0$. In the proof of Theorem~\ref{theo:f_vect_poisson}, which will be given in Section~\ref{sec:proof_poisson_limit}, we shall prove the second line of~\eqref{eq:E_f_k_beta_prime_to_poisson} by using the explicit formula for the expected $f$-vector of a beta' polytope.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth ]{PPP_points_alpha_eq3.png}
\includegraphics[width=0.49\textwidth]{PPP_lines_alpha_eq3.png}
\end{center}
\caption{
Left: The Poisson point process $\Pi_{2,3}$ on $\mathbb{R}^2$ with intensity $\|x\|^{-5}$, together with its convex hull. Right: The dual Poisson line tessellation, together with the corresponding zero cell.
}
\label{fig:poisson}
\end{figure}
\subsection{Poisson hyperplane tessellations}\label{subsec:PHT}
Essentially by convex duality, Poisson point processes can be transformed into Poisson hyperplane processes. To state this precisely, fix a space dimension $d\geq 2$ as well as a parameter $\alpha>0$, the so-called \textit{distance exponent}. We define a $\sigma$-finite measure $\Theta_{\alpha}$ on the affine Grassmannian $A(d,d-1)$ by
\begin{equation}\label{eq:Theta_def}
\Theta_{\alpha}(\,\cdot\,)
:=
\frac {2}{\omega_d} \int_{\mathbb{S}^{d-1}} \int_{0}^\infty \mathbbm{1}_{\{H(u,t)\in\,\cdot\,\}}\,t^{\alpha-1}\,\textup{d} t \sigma(\textup{d} u),
\end{equation}
where $H(u,t)$ is the hyperplane $H(u,t)=\{x\in\mathbb{R}^d:\langle x,u\rangle=t\}$ and $\sigma$ denotes the spherical Lebesgue measure on $\mathbb{S}^{d-1}$ with total mass $\omega_d=2\pi^{d/2}/\Gamma(\frac d2)$.
Note that $\Theta_1$ coincides with the Lebesgue measure $\mu_{d-1}$ on $A(d,d-1)$ to be defined in~\eqref{eq:DefMeasureMuk} below.
In this paper, by a \textit{Poisson hyperplane process} with distance exponent $\alpha$ we understand a Poisson point process $\eta_{\alpha}$ on the space $A(d,d-1)$ with intensity measure $\Theta_{\alpha}$; see the right panel of Figure~\ref{fig:poisson}. The random hyperplanes in $\eta_{\alpha}$ dissect $\mathbb{R}^d$ into almost surely countably many random convex polyhedra, which are called cells in the sequel. The collection of these random polyhedra is known as a \textit{Poisson hyperplane tessellation}. Our focus lies on the \textit{zero cell}
$$
Z_{\alpha} := \bigcap_{H\in\eta_{\alpha}} H^-
$$
of such a random tessellation, where for a hyperplane $H\in A(d,d-1)$ we write $H^-$ for the closed half-space determined by $H$ that contains the origin. We emphasize that the probability law of $Z_{\alpha}$ is invariant under rotations and that $Z_{\alpha}$ is almost surely bounded and hence a random polytope. Zero cells of Poisson hyperplane tessellations of this type have attracted considerable attention in the literature, see \cite{HoermannHugReitznerThaele,HugSchneider07LargeCells} as well as the references cited therein. In particular, this class contains two prominent special cases. Namely, $Z_{1}$ corresponds to the zero cell of a stationary and isotropic Poisson hyperplane tessellation with intensity
$$
{1\over 2}\mathbb{E}\sum_{H\in\eta_\alpha}\mathbbm{1}_{\{H\cap\mathbb{B}^d\neq\emptyset\}} = {1\over 2}\Theta_1(\{H\in A(d,d-1):H\cap\mathbb{B}^d\neq\emptyset\}) = {1\over \omega_d}\int_{\mathbb{S}^{d-1}}\int_0^1 {\rm d} t\sigma(\textup{d} u) = 1
$$
(see \cite[Equation (4.27)]{SW08}), while $Z_{d}$ has the same distribution as the typical cell of a stationary Poisson--Voronoi tessellation of a suitable constant intensity. Both models are classical objects in stochastic geometry and well studied; we refer to \cite{LastPenrosePPPBook,SW08} for further background material.
It is a crucial observation that the zero cells $Z_{\alpha} $ are \textit{dual} to convex hulls of Poisson point processes of the type discussed in Section~\ref{subsec:PPP}. To make this precise, we recall from \cite[Chapter 5.1]{MatousekDiscreteGeometryBook} that if $K\subset\mathbb{R}^d$ is a convex body, its dual (or polar body) $K^\circ$ is defined as
$$
K^\circ:=\{y\in\mathbb{R}^d:\langle x,y\rangle\leq 1\text{ for all }x\in K\}.
$$
In particular, if $P\subset\mathbb{R}^d$ is a polytope with $0$ in its interior, it is well known that, for all $k\in\{0,1,\ldots,d-1\}$,
\begin{equation}\label{eq:DualityFvector}
f_k(P) = f_{d-k-1}(P^\circ),
\end{equation}
see~\cite[Corollary~2.13]{ziegler_book_lec_on_poly}.
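A standard illustration in $d=3$ (added here for the reader's convenience): the cube $[-1,1]^3$ has the regular octahedron as its polar body, and indeed

```latex
\mathbf{f}\big([-1,1]^3\big) = (8,12,6),
\qquad
\mathbf{f}\big(([-1,1]^3)^\circ\big) = (6,12,8),
```

in accordance with $f_k(P)=f_{d-k-1}(P^\circ)$.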
To state the next theorem we recall from Section~\ref{subsec:PPP} that by $\Pi_{d,\alpha}$ we denote a Poisson point process on $\mathbb{R}^d\setminus\{0\}$ with power-law intensity function $x\mapsto\|x\|^{-d-\alpha}$.
\begin{theorem}\label{thm:PoissonHyperplanes}
Fix $\alpha>0$. Then $Z_\alpha^\circ$ has the same distribution as $\mathop{\mathrm{conv}}\nolimits\Pi_{d,\alpha}$. In particular,
$$
\mathbb{E} f_k(Z_{\alpha}) = \mathbb{E} f_{d-k-1}(\mathop{\mathrm{conv}}\nolimits\Pi_{d,\alpha}).
$$
\end{theorem}
Theorem \ref{thm:PoissonHyperplanes} together with Theorem \ref{theo:f_vect_poisson} yields an explicit description of the expected $f$-vector of the zero cells $Z_\alpha$.
For example, Remark~\ref{rem:f_d-1_d-2} yields
\begin{equation}\label{eq:E_f_0_Z_alpha}
\mathbb{E} f_0(Z_\alpha) = \mathbb{E} f_{d-1}(\mathop{\mathrm{conv}}\nolimits\Pi_{d,\alpha}) = \frac 2d (\sqrt \pi \alpha)^{d-1} \frac{\Gamma\left(\frac{d\alpha + 1}{2}\right) \Gamma\left(\frac \alpha 2\right)^{d}}{\Gamma\left(\frac{d\alpha}{2}\right)\Gamma\left(\frac{\alpha+1}{2}\right)^{d}},
\quad
\mathbb{E} f_1(Z_\alpha) = \frac d2 \mathbb{E} f_0(Z_\alpha),
\end{equation}
whereas the formulae for the remaining components are more complicated and involve terms of the form $\tilde J_{m,d-k}(\gamma)$, namely
$$
\mathbb{E} f_k(Z_\alpha)
= 2\sum_{\substack{m\in \{d-k,\ldots,d\}\\ m\equiv d \Mod{2}}} {\Gamma({m\alpha+1\over 2})\over\Gamma({m\alpha\over 2})}\left({\Gamma({\alpha\over 2})\over\Gamma({\alpha+1\over 2})}\right)^{m}{(\sqrt{\pi}\alpha)^{m-1}\over m}{m\choose d-k}\tilde{J}_{m,d-k}\left({m-1+\alpha\over 2}\right).
$$
This adds to the existing literature, where only a formula for $\mathbb{E} f_0(Z_1)$ (and, since $Z_1$ is a simple polytope with probability one, also for $\mathbb{E} f_1(Z_1)$) is available~\cite[Theorem 10.4.9]{SW08}. Also, in~\cite[Corollary 3.3]{HoermannHugReitznerThaele} a formula for $\mathbb{E} f_0(Z_\alpha)$ for general $\alpha>0$ was given in terms of a certain multiple integral for which no explicit evaluation was known.
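As an illustrative sanity check of \eqref{eq:E_f_0_Z_alpha} (our addition, not taken from the cited sources), the formula can be evaluated numerically; for $d=2$ and $\alpha=1$ it reduces to $\pi^2/2\approx 4.93$, the classical expected number of vertices of the zero cell of a stationary and isotropic Poisson line tessellation.

```python
import math

def expected_f0_zero_cell(d, alpha):
    # E f_0(Z_alpha) from eq. (eq:E_f_0_Z_alpha), evaluated in log-space
    # to avoid overflow of the Gamma factors for large d.
    log_val = (
        math.log(2 / d)
        + (d - 1) * math.log(math.sqrt(math.pi) * alpha)
        + math.lgamma((d * alpha + 1) / 2)
        + d * math.lgamma(alpha / 2)
        - math.lgamma(d * alpha / 2)
        - d * math.lgamma((alpha + 1) / 2)
    )
    return math.exp(log_val)

print(expected_f0_zero_cell(2, 1))  # pi^2/2 = 4.9348...
```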
In addition to the above formulae for $\mathbb{E} f_k(Z_\alpha)$, we claim that for the zero cell of a stationary and isotropic Poisson hyperplane tessellation in $\mathbb{R}^d$ it holds that
\begin{equation}\label{eq:E_f_d-2}
\mathbb{E} f_{d-2} (Z_1) = \mathbb{E} f_{1}(\mathop{\mathrm{conv}}\nolimits\Pi_{d,1}) = \frac{1}{2} \binom {d+1}{3} \pi^2.
\end{equation}
Indeed, while the first equality is a particular case of Theorem~\ref{thm:PoissonHyperplanes}, the second one was obtained in~\cite[Theorem~2.4, Remark~2.5]{convex_hull_sphere} by combining a formula from~\cite{barany_etal} with an Efron-type identity proved in~\cite[Theorem~2.8]{convex_hull_sphere}.
Since the expected intrinsic volumes $\mathbb{E} V_{k}(Z_1)$ are proportional to $\mathbb{E} f_{d-k}(Z_1)$ by an identity due to Schneider~\cite[p.~693]{SchneiderWeightedFaces}, the above yields also formulae for $\mathbb{E} V_k(Z_1)$.
\subsection{Asymptotic results for Poisson hyperplane tessellations}\label{subsec:PHT_asympt}
Next, we shall consider for fixed $k\in\{0,1,2,\ldots\}$ the asymptotic behaviour of $\mathbb{E} f_k(Z_{\alpha})$, as $d\to\infty$. While this has already been investigated in \cite{HoermannHugReitznerThaele} on a logarithmic scale, we are able to prove \textit{exact} asymptotic formulae, which strengthen these results. We shall write $a_d\sim b_d$ for two sequences $(a_d)_{d\in\mathbb{N}}$ and $(b_d)_{d\in\mathbb{N}}$ whenever $a_d/b_d\to 1$, as $d\to\infty$.
\begin{theorem}\label{thm:PoissonHyperplanesDtoInfinity}
Fix $k\in\{0,1,2,\ldots\}$. If the distance exponent $\alpha=\alpha(d)$ is such that
$\inf_{d\in\mathbb{N}} \alpha(d) > 0$, then
$$
\mathbb{E} f_k(Z_\alpha) \sim {\sqrt{\alpha}\over 2^{k-{1\over 2}}}\left({\Gamma({\alpha\over 2})\over\Gamma({\alpha+1\over 2})}\right)^{d}{(\sqrt{\pi}\,\alpha)^{d-1}\over k!}d^{k-{1\over 2}},
\qquad d\to\infty.
$$
\end{theorem}
We remark that Theorem~\ref{thm:PoissonHyperplanesDtoInfinity} is consistent with Theorems~1.2 and~3.21 of~\cite{HoermannHugReitznerThaele}, which yield the limit relation
$$
\lim_{d\to\infty}\sqrt[d]{\mathbb{E} f_k(Z_\alpha)} = {\sqrt{\pi}\alpha\Gamma({\alpha\over 2})\over\Gamma({\alpha+1\over 2})}
$$
for a fixed $\alpha>0$ (corresponding in the special case $\alpha=1$ to the zero cell of a stationary and isotropic Poisson hyperplane tessellation) as well as
$$
\lim_{d\to\infty}d^{-{1\over 2}}\sqrt[d]{\mathbb{E} f_k(Z_d)} = \sqrt{2\pi}
$$
in the case that $\alpha=d$ (which corresponds to the typical cell of a stationary Poisson--Voronoi tessellation). Of course, both relations easily follow from Theorem~\ref{thm:PoissonHyperplanesDtoInfinity} as well, but Theorem~\ref{thm:PoissonHyperplanesDtoInfinity} is in fact much more precise. For example, we obtain the asymptotic formulae
$$
k! \,\mathbb{E} f_k(Z_\alpha) \sim
\begin{cases}
\pi^{d-\frac 12} \left(\frac d 2\right)^{k-\frac 12},
&\text{ if } \alpha=1,\\
{\rm e}^{1/4} {2^{{d+1\over 2}-k}\,\pi^{d-1\over 2}}\,d^{{d\over 2}+k-1},
&\text{ if } \alpha=d,
\end{cases}
$$
for any fixed $k\in\{0,1,2,\ldots\}$. The term ${\rm e}^{1/4}$ in the second line appears because of the expansion
$$
\frac{\Gamma({d\over 2})}{\Gamma({d+1\over 2})} = \sqrt{\frac 2 d} \left(1 + \frac{1+o(1)}{4d} \right), \qquad d\to\infty.
$$
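This expansion (and the order of its error term) is easy to confirm numerically; the following is a small illustrative check we add here, with the tolerance $0.1/d^2$ chosen by us to reflect the $O(1/d^2)$ error of the two-term approximation.

```python
import math

def gamma_ratio(d):
    # Gamma(d/2) / Gamma((d+1)/2), via lgamma to avoid overflow for large d
    return math.exp(math.lgamma(d / 2) - math.lgamma((d + 1) / 2))

def two_term(d):
    # Two-term expansion sqrt(2/d) * (1 + 1/(4d))
    return math.sqrt(2 / d) * (1 + 1 / (4 * d))

# The relative error of the two-term expansion decays like 1/d^2.
for d in (10, 100, 1000):
    rel = abs(gamma_ratio(d) / two_term(d) - 1)
    assert rel < 0.1 / d ** 2
```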
\subsection{Organization of the paper}
The rest of the paper is organized as follows.
In Section~\ref{sec:facts} we introduce the necessary notation and recall some facts from stochastic and integral geometry. Section~\ref{sec:properties_beta} contains the canonical decomposition of the beta and beta' distributions, which is of major importance in our proofs; the proofs themselves are collected in Section~\ref{sec:proofs}.
\section{Notation and facts from stochastic and integral geometry}\label{sec:facts}
\subsection{General notation}
For $d\geq 1$ we let $\mathbb{R}^d$ be the $d$-dimensional Euclidean space with the standard scalar product $\langle\,\cdot\,,\,\cdot\,\rangle$ and the associated norm $\|\,\cdot\,\|$. We let $\mathbb{B}^d=\{x\in\mathbb{R}^d:\|x\|\leq 1\}$ be the Euclidean unit ball and $\mathbb{S}^{d-1}=\{x\in\mathbb{R}^d:\|x\|=1\}$ be the corresponding $(d-1)$-dimensional unit sphere. Let also $\lambda_d$ denote the $d$-dimensional Lebesgue measure and $\sigma$ denote the spherical Lebesgue measure which is normalized in such a way that $\sigma(\mathbb{S}^{d-1})=\omega_d:={2\pi^{d/2}\over\Gamma(d/2)}$.
The \textit{convex} (respectively, \textit{positive}, \textit{linear}, \textit{affine}) \textit{hull} of a set $A\subset\mathbb{R}^d$ is the smallest convex set (respectively, convex cone, linear subspace, affine subspace) containing the set $A$ and is denoted by $\mathop{\mathrm{conv}}\nolimits A$ (respectively, $\mathop{\mathrm{pos}}\nolimits A$, $\mathop{\mathrm{lin}}\nolimits A$, $\mathop{\mathrm{aff}}\nolimits A$). The convex hull of finitely many points $x_1,\ldots,x_n$ is also denoted by $[x_1,\ldots,x_n]$.
We let $(\Omega,\mathcal{F},\mathbb{P})$ be our underlying probability space, which we implicitly assume to be rich enough to carry all the random objects we consider. Expectation (i.e.\ integration) with respect to $\mathbb{P}$ is denoted by $\mathbb{E}$. For two random variables $X$ and $Y$ we write $X\overset{d}{=}Y$ if $X$ and $Y$ have the same probability law. Moreover, for random variables $X,X_1,X_2,\ldots$ we shall write $X_n\overset{d}{\to}X$ if $X_n$ converges to $X$ in distribution, as $n\to\infty$.
\subsection{Polytopes and their faces}
A \textit{polytope} is the convex hull of finitely many points, while a \textit{polyhedron} is a finite intersection of closed half-spaces. We recall that a bounded polyhedron is a polytope. The dimension of a polyhedron $P$ is the dimension of its affine hull $\mathop{\mathrm{aff}}\nolimits P$.
The \textit{$f$-vector} of a $d$-dimensional polyhedron $P\subseteq\mathbb{R}^d$ is defined by
$$
\mathbf{f} (P) := (f_0(P), \ldots,f_{d-1}(P)),
$$
where $f_k(P)$ is the number of $k$-dimensional faces of $P$.
The set of $k$-dimensional faces of a polyhedron $P$ is denoted by $\mathcal{F}_k(P)$, so that $f_k(P)$ is the cardinality of $\mathcal{F}_k(P)$.
\subsection{Grassmannians and the Blaschke--Petkantschin formula}
We denote by $G(d,k)$, respectively $A(d,k)$, the set of $k$-dimensional linear, respectively affine, subspaces of $\mathbb{R}^d$.
The unique probability measure on $G(d,k)$ which is invariant under the action of the rotation group ${\rm SO}(d)$ is denoted by $\nu_k$. The affine Grassmannian $A(d,k)$ is endowed with the infinite measure $\mu_k$ defined by
\begin{equation}\label{eq:DefMeasureMuk}
\mu_k(\,\cdot\,) := \int_{G(d,k)}\int_{L^\bot}\mathbbm{1}_{\{L+x\in\,\cdot\,\}}\,\lambda_{L^\bot}({\rm d} x) \nu_{k}({\rm d} L),
\end{equation}
where $L^\bot$ is the orthogonal complement of $L$ and $\lambda_{L^\bot}$ is the Lebesgue measure on $L^\bot$, see~\cite[pp.~168--169]{SW08}.
The next theorem, to be found in~\cite[Theorem 7.2.7]{SW08}, allows us to replace integration over all $k$-tuples of points in $\mathbb{R}^d$ by a double integration, first over all $(k-1)$-dimensional affine subspaces $E$ and then over all $k$-tuples of points inside $E$. An important feature is the appearance of a term involving $\Delta(x_1,\ldots,x_k)$, the $(k-1)$-dimensional volume of the simplex $[x_1,\ldots,x_k]$.
\begin{proposition}[Affine Blaschke--Petkantschin formula]\label{theo:blaschke_petk}
For all $k\in \{1,\ldots,d+1\}$ and every non-negative Borel function $f:(\mathbb{R}^d)^{k}\to\mathbb{R}$ we have
\begin{multline*}
\int_{(\mathbb{R}^d)^{k}}f(x_1,\ldots,x_k) \, \lambda_d^k({\rm d}(x_1,\ldots,x_k))
\\
= B(d,k) \int_{A(d,k-1)}\int_{E^{k}}f(x_1,\ldots,x_k)\,\Delta^{d-k+1}(x_1,\ldots,x_k)\, \lambda_E^k({\rm d}(x_1,\ldots,x_k))
\mu_{k-1}({\rm d} E).
\end{multline*}
Here, $\lambda_E$ is the Lebesgue measure on the affine subspace $E$, and
$$
B(d,k) = ((k-1)!)^{d-k+1}\,{\omega_{d-k+2}\cdots\omega_d\over\omega_1\cdots\omega_{k-1}},
\qquad
B(d,1) = 1.
$$
\end{proposition}
\subsection{Cones and solid angles}
In this paper, the term cone always refers to a \textit{polyhedral cone}, that is, an intersection of finitely many closed half-spaces whose boundaries pass through the origin. In particular, any polyhedral cone is a polyhedron. The \textit{solid angle} of a cone $C\subset \mathbb{R}^d$ is defined as
$$
\alpha(C) := \mathbb{P}[N\in C],
$$
where $N$ is a random vector having a standard normal distribution on the linear hull of $C$. The \textit{polar} (or \textit{dual}) \textit{cone} of $C$ is defined by
$$
C^\circ := \{v\in \mathbb{R}^d\colon \langle v, z\rangle\leq 0 \text{ for all } z\in C\}.
$$
The \textit{tangent cone} $T(F,P)$ at a face $F$ of a full-dimensional polytope $P\subseteq\mathbb{R}^d$ is defined as
$$
T(F,P) := \{v\in\mathbb{R}^d\colon x_0 + v\varepsilon \in P \text{ for some } \varepsilon>0\},
$$
where $x_0$ is any point in the relative interior of $F$ (the definition does not depend on the choice of $x_0$). The \textit{normal cone} of $F$ is the polar to the tangent cone, that is
$$
N(F,P) := T^\circ (F,P) = \{v\in \mathbb{R}^d\colon \langle v, z-x_0\rangle \leq 0 \text{ for all } z\in P\}.
$$
The \textit{internal} and \textit{external} angles at a face $F$ of $P$ are defined as the solid angles of the tangent and the normal cones, respectively:
$$
\beta(F,P) := \alpha(T(F,P)), \qquad \gamma(F,P) := \alpha (N(F,P)).
$$
For further background material we refer, for example, to \cite{AmelunxenLotzDCG17,GruenbaumGA,GruenbaumBook}.
\subsection{Conic intrinsic volumes and Grassmann angles}
In this section we recall the definitions of the conic intrinsic volumes and Grassmann angles of cones and refer to \cite{AmelunxenLotzDCG17,ALMT14,glasauer_phd,SW08} for further information.
For a polyhedral cone $C\subset\mathbb{R}^{d}$ we denote by $\mathcal{F}_k(C)$ the set of its $k$-dimensional faces. Note that $C$ is the disjoint union of the relative interiors of its faces, where the \textit{relative interior} $\relint F$ of a face $F$ is the interior of $F$ with respect to its affine hull $\mathop{\mathrm{aff}}\nolimits F$ as the ambient space.
If $x\in\mathbb{R}^{d}$ is a point, we let $\pi_C(x)$ denote the \textit{metric projection} of $x$ onto $C$, that is, the uniquely determined point $y\in C$ minimizing the distance $\|x-y\|$.
For $k\in\{0,1,\ldots,d\}$ the $k$-th \textit{conic intrinsic volume} $\upsilon_k(C)$ is defined by
$$
\upsilon_k(C) := \sum_{F\in\mathcal{F}_k(C)} \mathbb{P}[\pi_C(N)\in\relint(F)],
$$
where $N$ is a standard Gaussian random vector in $\mathbb{R}^{d}$; we also put $\upsilon_k(C):=0$ if $\mathcal{F}_k(C)=\emptyset$. In other words, $\upsilon_k(C)$ is the probability that the metric projection of $N$ lies in the relative interior of a $k$-dimensional face of $C$, that is, in the so-called $k$-skeleton of $C$. For convenience, we also define $\upsilon_k(C):=0$ for all integers $k\notin\{0,1,\ldots,d\}$.
For example, if $C$ is a $k$-dimensional linear subspace, then $\upsilon_{k}(C)=1$, while all other conic intrinsic volumes vanish.
By definition, the conic intrinsic volumes are non-negative and their sum equals one. Moreover, they satisfy the so-called \textit{Gauss--Bonnet formula} \cite[Equation~(5.3)]{ALMT14}
\begin{equation}\label{eq:gauss_bonnet}
\upsilon_0(C)+ \upsilon_2(C) + \ldots = \upsilon_1(C)+ \upsilon_3(C) +\ldots = \frac 12,
\end{equation}
provided $C$ is not a linear subspace. Observe that $\upsilon_d(C)$ is just the solid angle of $C$, provided that $\dim\mathop{\mathrm{aff}}\nolimits C=d$.
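To illustrate the definition, consider the non-negative orthant; the following standard computation is included for the reader's convenience and is not needed later.

```latex
% Conic intrinsic volumes of the non-negative orthant C = [0,\infty)^d.
% The metric projection acts coordinatewise, (\pi_C(N))_i = \max(N_i,0), and lies in the
% relative interior of the face spanned by those coordinates i with N_i > 0. Since the
% signs of N_1,\ldots,N_d are independent and symmetric, each of the \binom{d}{k} faces
% of dimension k is hit with probability 2^{-d}, whence
\upsilon_k\bigl([0,\infty)^d\bigr) = \binom{d}{k}\, 2^{-d}, \qquad k\in\{0,1,\ldots,d\}.
% The Gauss--Bonnet formula \eqref{eq:gauss_bonnet} then reduces to the binomial identity
% \sum_{k\ \mathrm{even}} \binom{d}{k} = \sum_{k\ \mathrm{odd}} \binom{d}{k} = 2^{d-1}.
```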
The so-called \textit{Grassmann angles} of a polyhedral cone $C\subset\mathbb{R}^d$ were defined by Gr\"unbaum \cite{GruenbaumGA} as
\begin{equation}\label{eq:h_def}
h_{k}(C) := {1\over 2}\mathbb{P}[C\cap L_{d+1-k}\neq\{0\}], \qquad k\in\{1,\ldots,d\},
\end{equation}
where $L_{d+1-k}\in G(d,d+1-k)$ is a random subspace distributed according to the probability measure $\nu_{d+1-k}$.
The \textit{conic Crofton formula}~\cite[Equation (2.10)]{AmelunxenLotzDCG17} states that the conic intrinsic volumes and the Grassmann angles are related by
\begin{equation}\label{eq:ConicalIntVolGrassmannAngle}
h_{k}(C) = \upsilon_{k}(C) + \upsilon_{k+2}(C) + \ldots, \quad k\in \{1,\ldots, d\},
\end{equation}
provided $C$ is not a linear subspace. We remark that the above sums only contain finitely many non-zero terms and that the Grassmann angles were called the \textit{half-tail functionals} in~\cite{ALMT14}.
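As a sanity check of \eqref{eq:h_def} and \eqref{eq:ConicalIntVolGrassmannAngle} (again a standard example, included for convenience), take the quadrant $C=[0,\infty)^2\subset\mathbb{R}^2$, for which $\upsilon_0(C)=\frac14$, $\upsilon_1(C)=\frac12$, $\upsilon_2(C)=\frac14$.

```latex
% Grassmann angles of the quadrant C = [0,\infty)^2. For k = 1 we have L_2 = R^2, which
% always meets C non-trivially; for k = 2, a uniform random line L_1 meets C \setminus {0}
% iff its direction falls into an angle of \pi/2 out of \pi, i.e. with probability 1/2. Hence
h_1(C) = \tfrac12\cdot 1 = \tfrac12 = \upsilon_1(C), \qquad
h_2(C) = \tfrac12\cdot\tfrac12 = \tfrac14 = \upsilon_2(C),
% in accordance with the conic Crofton formula \eqref{eq:ConicalIntVolGrassmannAngle}.
```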
\subsection{Random projections of polytopes}
Let $P\subset\mathbb{R}^N$ be a polytope and $L_d\in G(N,d)$ be a random subspace distributed according to the probability measure $\nu_d$. Then $\Pi_{d}P$ stands for the random polytope in $L_d$ that arises as the orthogonal projection of $P$ onto $L_d$. The next result we recall is due to \citet{AS92}; its proof is based on the conic Crofton formula~\eqref{eq:ConicalIntVolGrassmannAngle}. It states that the expected $f$-vector of the random polytope $\Pi_d P$ can be expressed in terms of the internal and external angles of the original polytope $P$. We emphasize that the sum on the right-hand side of \eqref{eq:AffSchn} below only contains finitely many non-zero terms.
\begin{proposition}[Expected $f$-vectors of random projections]\label{theo:affentranger_schneider}
Let $P\subset \mathbb{R}^N$ be a polytope. Then, for all $k\in\{0,1,\ldots,d-1\}$,
\begin{equation}\label{eq:AffSchn}
\mathbb{E} f_k(\Pi_d P) = 2 \sum_{s=0}^\infty \sum_{G\in \mathcal{F}_{d-1-2s}(P)} \gamma(G,P) \sum_{F\in \mathcal{F}_k(G)} \beta(F,G).
\end{equation}
\end{proposition}
\section{Properties of beta and beta' distributions}\label{sec:properties_beta}
\subsection{Identification of affine subspaces}\label{subsec:identification}
Sometimes it will be convenient to identify every affine subspace of $\mathbb{R}^d$ with the Euclidean space of the corresponding dimension. To make this precise, we recall that $A(d,k)$ is the set of $k$-dimensional affine subspaces of $\mathbb{R}^d$. For an affine subspace $E \in A(d, k)$ we denote by $\pi_E: \mathbb{R}^d\to E$ the orthogonal projection onto $E$ and by $p(E)=\pi_E(0)=\argmin_{x\in E}\|x\|$ the projection of the origin on $E$. For every affine subspace $E\in A(d,k)$ let us fix an isometry $I_E:E\to \mathbb{R}^k$ such that $I_E(p(E))=0$. The exact choice of the isometries $I_E$ is not important (essentially due to the rotational invariance of the beta and beta' distributions). We only require that $(x,E) \mapsto I_E(\pi_E(x))$ defines a Borel measurable map from $\mathbb{R}^d \times A(d,k)$ to $\mathbb{R}^k$, where we supply $\mathbb{R}^d$, $\mathbb{R}^k$ and $A(d,k)$ with their standard Borel $\sigma$-algebras; see~\cite[Chapter 13.2]{SW08} for the case of $A(d,k)$.
\subsection{Projections and distances}
The next lemma, taken from \cite[Lemma 4.3]{beta_polytopes}, states that the beta and beta' distributions on $\mathbb{R}^d$ yield distributions of the same type (but with different parameters) when projected onto arbitrary linear subspaces.
\begin{lemma}[Orthogonal projections]\label{lem:projection}
Denote by $\pi_L: \mathbb{R}^d\rightarrow L$ the orthogonal projection onto a $k$-dimensional linear subspace $L\in G(d,k)$.
\begin{itemize}
\item[(a)] If the random point $X$ has density $f_{d,\beta}$ for some $\beta\geq -1$, then $I_L(\pi_L (X))$ has density $f_{k,\beta+\frac{d-k}{2}}$.
\item[(b)] If the random point $X$ has density $\tilde{f}_{d,\beta}$ for some $\beta>\frac d2$, then $I_L(\pi_L(X))$ has density $\tilde{f}_{k,\beta-\frac{d-k}{2}}$.
\end{itemize}
\end{lemma}
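For instance (a quick check of part (a), not needed later), take $d=2$, $k=1$ and $\beta=0$, so that $X$ is uniform on the unit disk.

```latex
% Projection of the uniform distribution on the unit disk (d = 2, \beta = 0) onto a line
% (k = 1): by part (a), I_L(\pi_L(X)) has density
f_{1,\frac12}(x) = c_{1,\frac12}\,(1-x^2)^{1/2}, \qquad -1<x<1,
% the semicircle law, which is indeed the one-dimensional marginal of the uniform
% distribution on the disk.
```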
The next lemma describes the distribution of the squared norm of a random vector with $d$-dimensional beta or beta' distribution. The squared norm turns out to have the usual one-dimensional beta or beta' distribution.
Recall that a random variable has a classical \textit{beta distribution} with parameters $\alpha_1>0, \alpha_2>0$, denoted by $\text{\rm Beta}(\alpha_1,\alpha_2)$, if its Lebesgue density on $\mathbb{R}$ is
$$
g_{\alpha_1,\alpha_2}(t) = \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)} t^{\alpha_1-1} (1-t)^{\alpha_2-1}\mathbbm{1}_{\{0<t<1\}}, \qquad t\in \mathbb{R}.
$$
Similarly, a random variable has a classical \textit{beta' distribution} with parameters $\alpha_1>0$, $\alpha_2>0$, denoted by $\text{\rm Beta}'(\alpha_1,\alpha_2)$, if its Lebesgue density on $\mathbb{R}$ is
$$
\tilde g_{\alpha_1,\alpha_2}(t) = \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)} t^{\alpha_1-1} (1+t)^{-\alpha_1-\alpha_2}\mathbbm{1}_{\{t>0\}}, \qquad t\in \mathbb{R}.
$$
Observe that, up to reparametrization and rescaling, $\text{\rm Beta}'(\alpha_1,\alpha_2)$ coincides with the Fisher--Snedecor $F$-distribution. For the following fact we refer to \cite[Theorem 2.7]{beta_simplices} as well as the references cited therein.
\begin{lemma}[Squared norm]\label{lem:squared_norm}
Let $X$ be a random vector in $\mathbb{R}^d$.
\begin{itemize}
\item[(a)] If $X$ has the beta density $f_{d,\beta}$, then $\|X\|^2 \sim \text{Beta}(\frac d2, \beta + 1)$.
\item[(b)] If $X$ has the beta' density $\tilde f_{d,\beta}$, then $\|X\|^2 \sim \text{Beta}'(\frac d2, \beta - \frac{d}{2})$.
\end{itemize}
\end{lemma}
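As a simple illustration of part (a) (our own example), consider again the uniform distribution on the unit disk.

```latex
% Example for part (a): if X is uniform on the unit disk (d = 2, \beta = 0), then
\|X\|^2 \sim \text{Beta}(1,1) = \mathrm{Unif}[0,1],
% which matches the elementary computation
% P[\|X\|^2 \le t] = P[\|X\| \le \sqrt t] = \pi(\sqrt t)^2 / \pi = t for 0 < t < 1.
```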
\subsection{Canonical decomposition of Ruben and Miles}
Let $k\leq d+1$ and let $X_1,\ldots,X_k$ be i.i.d.\ random points in $\mathbb{R}^d$ with the beta density $f_{d,\beta}$, so that $[X_1,\ldots,X_k]$ is almost surely a simplex. We need a description of the positions of these points inside their own affine hull $A= \mathop{\mathrm{aff}}\nolimits(X_1,\ldots,X_k)$, together with the position of $A$ inside $\mathbb{R}^d$. The next theorem is due to Ruben and Miles~\cite{ruben_miles}. Since this result is of central importance for what follows and since a different notation is used in \cite{ruben_miles}, we give a streamlined proof.
\begin{theorem}[Canonical decomposition in the beta case]\label{theo:ruben_miles}
Let $X_1,\ldots,X_k$ be i.i.d.\ random points in $\mathbb{R}^d$ with density $f_{d,\beta}$, where $\beta>-1$ and $k\leq d+1$. Let $A=\mathop{\mathrm{aff}}\nolimits(X_1,\ldots,X_k)$ be the affine subspace spanned by $X_1,\ldots,X_k$. Let also $p(A)$ be the orthogonal projection of the origin on $A$ and let $h(A) = \|p(A)\|$ denote the distance from the origin to $A$. Observe that $A\cap \mathbb{B}^d$ is a $(k-1)$-dimensional ball of radius $\sqrt{1-h^2(A)}$ and consider the points
$$
Z_i := \frac{I_A(X_i)}{\sqrt{1-h^2(A)}} \in \mathbb{B}^{k-1}, \quad i=1,\ldots,k.
$$
Then,
\begin{itemize}
\item[(a)] the joint Lebesgue density of the random vector $(Z_1,\ldots,Z_k)$ is proportional to
$$\Delta^{d-k+1} (z_1,\ldots,z_k)\prod_{i=1}^k f_{k-1,\beta}(z_i),$$
\item[(b)] the random vector $(Z_1,\ldots,Z_k)$ is stochastically independent of $A$,
\item[(c)] the density of $I_{A^\bot}(p(A))\in \mathbb{B}^{d-k+1}$ is $f_{d-k+1, \frac{(k-1)(d+1)}{2} + k \beta}$,
\item[(d)] $I_{A^\bot}(p(A))$ is stochastically independent of $A^\bot$.
\end{itemize}
\end{theorem}
\begin{proof}
Let $\varphi:\mathbb{R}^k\to [0,\infty)$ and $\psi:A(d,k-1)\to[0,\infty)$ be Borel measurable functions. We are interested in the following quantity:
\begin{align*}
B_{\varphi,\psi}
:&=
\mathbb{E} \left[\varphi\left(\frac{I_A(X_1)}{\sqrt{1-h^2(A)}},\ldots,\frac{I_A(X_{k})}{\sqrt{1-h^2(A)}}\right) \psi(A)\right]
\\
& =
\int_{(\mathbb{R}^d)^k} \varphi\left(\frac{I_A(x_1)}{\sqrt{1-h^2(A)}},\ldots,\frac{I_A(x_{k})}{\sqrt{1-h^2(A)}}\right) \psi(A)
\left(\prod_{i=1}^k f_{d,\beta}(x_i)\right) \left(\prod_{i=1}^k\lambda_d({\rm d} x_i)\right),
\end{align*}
where $A$ is used to denote $\mathop{\mathrm{aff}}\nolimits (x_1,\ldots,x_k)$ without risk of confusion.
For the rest of the proof, let $C_1,C_2,\ldots$ be constants depending only on $d,k,\beta$. By the affine Blaschke--Petkantschin formula stated in Proposition~\ref{theo:blaschke_petk}, we have
\begin{multline*}
B_{\varphi,\psi}
=
C_1 \int_{A(d,k-1)}\int_{A^k} \varphi\left(\frac{I_A(x_1)}{\sqrt{1-h^2(A)}},\ldots,\frac{I_A(x_{k})}{\sqrt{1-h^2(A)}}\right) \psi(A)
\\
\times \Delta^{d-k+1}(x_1,\ldots,x_k) \left(\prod_{i=1}^k f_{d,\beta}(x_i)\right) \left(\prod_{i=1}^k\lambda_A({\rm d} x_i)\right) \mu_{k-1}({\rm d} A).
\end{multline*}
Using the substitution $y_i=I_A(x_i)\in\mathbb{R}^{k-1}$, $1\leq i\leq k$, recalling that $I_A:A\to \mathbb{R}^{k-1}$ is an isometry such that $I_A(p(A)) =0$ and observing that $\|x_i\|^2 = h^2(A) + \|y_i\|^2$, we arrive at
\begin{align*}
B_{\varphi,\psi}&=
C_2 \int_{A(d,k-1)} \int_{(\mathbb{R}^{k-1})^k} \varphi\left(\frac{y_1}{\sqrt{1-h^2(A)}},\ldots,\frac{y_{k}}{\sqrt{1-h^2(A)}}\right) \psi(A)
\\
& \times\Delta^{d-k+1}(y_1,\ldots,y_k) \left( \prod_{i=1}^k (1-h^2(A)-\|y_i\|^2)^\beta\,\mathbbm{1}_{\{h^2(A)+\|y_i\|^2<1\}}\right) \left(\prod_{i=1}^k \lambda_{k-1}({\rm d} y_i)\right) \mu_{k-1}({\rm d} A),
\end{align*}
where we also used the definition \eqref{eq:def_f_beta} of the beta density.
Next, we apply the substitution $z_i = y_i/\sqrt{1-h^2(A)}\in \mathbb{B}^{k-1}$, $1\leq i\leq k$, and write
\begin{align}
&(1-h^2(A)-\|y_i\|^2)^\beta
=
(1-h^2(A))^\beta \left(1- \frac{\|y_i\|^2}{1-h^2(A)}\right)^\beta
=
(1-h^2(A))^\beta \left(1-\|z_i\|^2\right)^\beta, \label{eq:self_similar_beta}\\
\nonumber &\lambda_{k-1}(\textup{d} y_i) = (1-h^2(A))^{\frac 12 (k-1)} \lambda_{k-1}(\textup{d} z_i), \\
\nonumber &\Delta(y_1,\ldots,y_k) = (1-h^2(A))^{\frac {k-1}2} \Delta(z_1,\ldots,z_k),
\end{align}
to conclude that
\begin{multline*}
B_{\varphi,\psi}
=
C_3 \int_{A(d,k-1)} \int_{(\mathbb{B}^{k-1})^k} \varphi(z_1,\ldots,z_k) \psi(A) \; (1-h^2(A))^{\frac 12 k(k-1) + \frac 12(d-k+1)(k-1) + k \beta} \mathbbm{1}_{\{h(A)<1\}}
\\
\times\Delta^{d-k+1}(z_1,\ldots,z_k)
\left(\prod_{i=1}^k (1 - \|z_i\|^2)^\beta \right) \left(\prod_{i=1}^k \lambda_{k-1}({\rm d} z_i) \right)\mu_{k-1}({\rm d} A).
\end{multline*}
Finally, some elementary transformations including the use of~\eqref{eq:def_f_beta} lead to
\begin{multline*}
B_{\varphi,\psi}
=
C_4 \left(\int_{A(d,k-1)} \psi(A)\; (1-h^2(A))^{\gamma} \mathbbm{1}_{\{h(A)<1\}}\; \mu_{k-1}({\rm d} A)\right)
\\
\times \left(\int_{(\mathbb{R}^{k-1})^k} \varphi(z_1,\ldots,z_k)\Delta^{d-k+1} (z_1,\ldots,z_k) \left(\prod_{i=1}^k f_{k-1,\beta}(z_i)\right) \left(\prod_{i=1}^k \lambda_{k-1}({\rm d} z_i) \right)\right),
\end{multline*}
where we used the notation
$$
\gamma:= \frac 12 k(k-1) + \frac 12(d-k+1)(k-1) + k \beta
=\frac {(k-1)(d+1)}2 + k\beta.
$$
The form of the second integral and the product structure of the formula imply that the random points $Z_1,\ldots,Z_k$ have the required joint density and are independent of $A$, thus proving claims (a) and (b) of the theorem.
We prove parts (c) and (d) of the theorem. To this end, we take $\varphi(z_1,\ldots,z_k)=1$ and write the above result as
$$
\mathbb{E} \psi(A) = C_5 \int_{A(d,k-1)} \psi(A)\; (1-h^2(A))^{\gamma} \mathbbm{1}_{\{h(A)<1\}}\; \mu_{k-1}({\rm d} A).
$$
Now we take $\psi(A) = \psi_1(I_{A^\bot}(p(A)))\, \psi_2 (A^\bot)$ for some Borel functions $\psi_1:\mathbb{R}^{d-k+1} \to [0,\infty)$ and $\psi_2: G(d,d-k+1) \to [0,\infty)$, so that the above identity takes the form
$$
\mathbb{E} \psi(A) = C_5 \int_{A(d,k-1)} \psi_1(I_{A^\bot}(p(A)))\; \psi_2 (A^\bot)\; (1-h^2(A))^{\gamma} \mathbbm{1}_{\{h(A)<1\}}\; \mu_{k-1}({\rm d} A).
$$
The definition of the measure $\mu_{k-1}$ on $A(d,k-1)$ given in~\eqref{eq:DefMeasureMuk} implies that for every Borel function $f: A(d,k-1)\to [0,\infty)$ we have
$$
\int_{A(d,k-1)} f(A) \mu_{k-1}(\textup{d} A)
=
\int_{G(d,k-1)}\int_{L^\bot}f(L+x) \lambda_{L^\bot}({\rm d} x) \;\nu_{k-1}({\rm d} L).
$$
Observing that for every $L\in G(d,k-1)$ and $x\in L^\bot$ we have $(L+x)^\bot = L^\bot$, $p(L+x) = x$ and $h(L+x) = \|x\|$, we arrive at
$$
\mathbb{E} \psi(A)
=
C_5 \int_{G(d,k-1)} \psi_2 (L^\bot) \left(\int_{L^\bot} \psi_1(I_{L^\bot}(x)) \; (1-\|x\|^2)^{\gamma} \mathbbm{1}_{\{\|x\|<1\}}\; \lambda_{L^\bot}({\rm d} x)\right) \;\nu_{k-1}({\rm d} L).
$$
Writing $y:= I_{L^\bot}(x)\in \mathbb{R}^{d-k+1}$ and using that $\|y\| = \|x\|$ since $I_{L^\bot}:L^\bot\to \mathbb{R}^{d-k+1}$ is an isometry, we obtain
\begin{multline*}
\mathbb{E} \left[\psi_1(I_{A^\bot}(p(A))) \; \psi_2 (A^\bot)\right]
\\=
C_5 \left(\int_{G(d,k-1)} \psi_2 (L^\bot)\; \nu_{k-1}({\rm d} L) \right) \left(\int_{\mathbb{R}^{d-k+1}} \psi_1(y) \; (1-\|y\|^2)^{\gamma} \mathbbm{1}_{\{\|y\|<1\}}\; {\rm d} y \right).
\end{multline*}
The product structure of the right-hand side implies that $I_{A^\bot}(p(A))$ and $A^\bot$ are independent, thus proving part (d) of the theorem. Taking $\psi_2 \equiv 1$, we arrive at
$$
\mathbb{E} \psi_1(I_{A^\bot}(p(A)))
=
C_5 \int_{\mathbb{R}^{d-k+1}} \psi_1(y) \; (1-\|y\|^2)^{\gamma}\mathbbm{1}_{\{\|y\|<1\}} \; {\rm d} y
=
C_6 \int_{\mathbb{R}^{d-k+1}} \psi_1(y) \; f_{d-k+1, \gamma}(y)\; {\rm d} y.
$$
It follows that $I_{A^\bot}(p(A))$ has density $f_{d-k+1, \gamma}$ on $\mathbb{R}^{d-k+1}$, thus proving claim (c).
\end{proof}
\begin{remark}
Theorem~\ref{theo:ruben_miles} continues to hold for $\beta=-1$ (corresponding to the uniform distribution on the $(d-1)$-dimensional unit sphere), but in this case we have to replace (a) by
\begin{itemize}
\item[(a)] the joint distribution of $(Z_1,\ldots,Z_k)$ has density proportional to $\Delta^{d-k+1} (z_1,\ldots,z_k)$ with respect to the $k$-th power of the spherical Lebesgue measure on $\mathbb{S}^{k-2}$.
\end{itemize}
\end{remark}
A result similar to Theorem~\ref{theo:ruben_miles} holds in the beta' case as well and is also due to Ruben and Miles~\cite{ruben_miles}. Since the proof is similar, we do not present the details.
\begin{theorem}[Canonical decomposition in the beta' case]\label{theo:ruben_miles_prime}
Let $X_1,\ldots,X_k$ be i.i.d.\ points in $\mathbb{R}^d$ with density $\tilde f_{d,\beta}$, where $\beta>\frac d2$ and $k\leq d+1$. Let $A=\mathop{\mathrm{aff}}\nolimits(X_1,\ldots,X_k)$ be the affine subspace spanned by $X_1,\ldots,X_k$. Let also $p(A)$ be the orthogonal projection of the origin on $A$ and let $h(A) = \|p(A)\|$ denote the distance from the origin to $A$. Consider the points
$$
Z_i := \frac{I_A(X_i)}{\sqrt{1 + h^2(A)}} \in \mathbb{R}^{k-1}, \quad i=1,\ldots,k.
$$
Then,
\begin{itemize}
\item[(a)] the joint Lebesgue density of the random vector $(Z_1,\ldots,Z_k)$ is proportional to
$$\Delta^{d-k+1} (z_1,\ldots,z_k)\prod_{i=1}^k \tilde f_{k-1,\beta}(z_i),$$
\item[(b)] the random vector $(Z_1,\ldots,Z_k)$ is stochastically independent of $A$,
\item[(c)] the density of $I_{A^\bot}(p(A))\in \mathbb{R}^{d-k+1}$ is $\tilde f_{d-k+1, k \beta - \frac{(k-1)(d+1)}{2}}$,
\item[(d)] $I_{A^\bot}(p(A))$ is stochastically independent of $A^\bot$.
\end{itemize}
\end{theorem}
\begin{proof}
The computations are analogous to those done in the proof of Theorem~\ref{theo:ruben_miles}, but instead of~\eqref{eq:self_similar_beta} we use the identity
\begin{equation*}
(1+h^2(A)+\|y_i\|^2)^{-\beta}
=
(1+h^2(A))^{-\beta} \left(1 + \frac{\|y_i\|^2}{1+h^2(A)}\right)^{-\beta}
=
(1+h^2(A))^{-\beta} \left(1 + \|z_i\|^2\right)^{-\beta}.
\end{equation*}
Correspondingly, in the formula for $B_{\varphi,\psi}$ the term $(1+h^2(A))^{-\tilde \gamma}$ appears, where $\tilde \gamma$ is given by
$\tilde \gamma = k\beta - \frac{(k-1)(d+1)}{2}$.
\end{proof}
Applying Lemma~\ref{lem:squared_norm} to $I_{A^\bot}(p(A))$, we obtain the following result, which is also contained in~\cite{beta_simplices} as Theorem 2.7.
\begin{corollary}[Distances to affine subspaces]\label{cor:distance_distr}
Let $k\leq d+1$ and let $X_1,\ldots,X_{k}$ be i.i.d.\ random points in $\mathbb{R}^d$; denote by $h(A)$ the distance from the origin to the affine subspace $A=\mathop{\mathrm{aff}}\nolimits(X_1,\ldots,X_k)$ spanned by $X_1,\ldots,X_{k}$.
\begin{itemize}
\item[(a)] If $X_1,\ldots,X_k$ have the beta density $f_{d,\beta}$, then $h^2(A)\sim \text{Beta}(\frac{d-k+1}{2},\frac{(k-1)(d+1)}{2} + k \beta + 1)$.
\item[(b)] If $X_1,\ldots,X_k$ have the beta' density $\tilde f_{d,\beta}$, then $h^2(A)\sim \text{Beta}'(\frac{d-k+1}{2}, k(\beta - \frac d2))$.
\end{itemize}
\end{corollary}
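For $k=1$ the affine subspace degenerates to the single point $A=\{X_1\}$, so that $h(A)=\|X_1\|$, and the corollary consistently recovers Lemma~\ref{lem:squared_norm}:

```latex
% Consistency check for k = 1 (A = {X_1}, h(A) = ||X_1||): the parameters of the
% corollary reduce to those of the lemma on the squared norm, namely
h^2(A) \sim \text{Beta}\Bigl(\frac d2,\, \beta+1\Bigr)\ \text{in case (a)}, \qquad
h^2(A) \sim \text{Beta}'\Bigl(\frac d2,\, \beta-\frac d2\Bigr)\ \text{in case (b)}.
```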
\begin{remark}
A result similar to Theorems~\ref{theo:ruben_miles} and~\ref{theo:ruben_miles_prime} holds for the isotropic normal distribution in $\mathbb{R}^d$ if we define $Z_i=I_A(X_i)$. In this case, $I_{A^\bot}(p(A))$ has a standard normal distribution on $\mathbb{R}^{d-k+1}$.
\end{remark}
\subsection{Relation to the extreme-value theory}\label{subsec:extreme_pareto}
Consider a random vector $X$ in $\mathbb{R}^d$ whose density is a spherically symmetric function of the form $p(\|x\|)$, $x\in\mathbb{R}^d$.
The beta and beta' distributions as well as the normal distribution are characterized by the following remarkable property discovered by Miles~\cite{miles}. Namely, for every $h,r>0$ for which $p(h)>0$ the relation
\begin{equation}\label{eq:miles_functional_eq}
p\left(\sqrt{h^2+r^2}\right) = c_1(h) p\left(\frac{r}{c_2(h)}\right)
\end{equation}
holds, where $c_1(h)>0$ and $c_2(h)>0$ are certain functions.
That is, the restriction of the density to any affine hyperplane at distance $h$ from the origin has the same radial component as the original density, up to rescaling. This property is crucial for the proof of the canonical decomposition, recall~\eqref{eq:self_similar_beta}.
Let us give an alternative way to solve the functional equation~\eqref{eq:miles_functional_eq}.
Consider the function $g(y) := p(\sqrt y)$. Then, \eqref{eq:miles_functional_eq} takes the form
$$
g(h^2+r^2) = c_1(h) g\left(\frac{r^2}{c_2^2(h)}\right).
$$
Equivalently, with $a:=h^2$, $s:=r^2$ and with $\psi_1(a)= c_1(\sqrt a)$, $\psi_2(a)=c_2^2(\sqrt a)$, we have
$$
g(a+s) = c_1(\sqrt a) g\left(\frac{s}{c_2^2(\sqrt a)}\right) = \psi_1(a) g\left(\frac{s}{\psi_2(a)}\right).
$$
Since $\int_0^\infty g(y) {\rm d} y = 2\int_0^\infty p(r) r {\rm d} r <\infty$ provided $d\geq 2$, we can normalize $g$ to be a probability density. Let $Z$ be a random variable with density $g$. Then, the above equation can be rewritten probabilistically as
\begin{equation}\label{eq:pareto_definition}
Z-a \,|\, Z\geq a \stackrel{d}{=} \psi_2(a) Z \qquad\text{ for all } a>0 \text{ such that } \mathbb{P}[Z\geq a] >0.
\end{equation}
Non-degenerate distributions having this property are known as generalized Pareto distributions and appear in extreme-value theory as limit distributions for residual life given that the current age is high, see~\cite{balkema_de_haan,pickands}. There are three possible types of these distributions (below $\text{const}$ denotes a suitable normalization constant, which may change from line to line):
\begin{enumerate}
\item[(a)] the exponential distribution $g(y) = \text{const}\cdot {\rm e}^{-\lambda y}$, $y>0$, with parameter $\lambda>0$, which corresponds to the normal distribution with radial component $p(r)=\text{const}\cdot{\rm e}^{-\lambda r^2}$, $r>0$.
\item[(b)] the Pareto distribution of Weibull type $g(y) = \text{const}\cdot(1 - y/A)^\beta$, $0 < y < A$, where $\beta>-1$ and $A>0$ are parameters. They correspond to the beta-type densities with radial component $p(r)= \text{const}\cdot (1 - r^2/A)^\beta$, $0<r<\sqrt{A}$.
\item[(c)] the Pareto distribution of Fr\'echet type $g(y) = \text{const}\cdot (1 + y/A)^{-\beta}$, $y > 0$, where $\beta>1$ and $A>0$ are parameters. They correspond to the beta'-type densities with radial component $p(r)= \text{const}\cdot (1 + r^2/A)^{-\beta}$, $r>0$.
\end{enumerate}
Besides, the degenerate distribution, where $Z$ is a positive constant, also satisfies~\eqref{eq:pareto_definition}. The corresponding multivariate distribution is the uniform distribution on a sphere.
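In each of the three cases the functions $\psi_1,\psi_2$ can be written down explicitly; as an illustration, we verify~\eqref{eq:pareto_definition} in case (b).

```latex
% Verification of \eqref{eq:pareto_definition} in case (b): for g(y) = const (1-y/A)^\beta,
% 0 < y < A, and 0 < a < A we have
g(a+s) = \text{const}\cdot\Bigl(1-\frac aA\Bigr)^{\beta}\Bigl(1-\frac{s}{A-a}\Bigr)^{\beta}
       = \psi_1(a)\, g\Bigl(\frac{s}{\psi_2(a)}\Bigr), \qquad 0<s<A-a,
% with \psi_2(a) = 1 - a/A. In case (a) the same computation yields \psi_2(a) = 1, which
% is the memoryless property of the exponential distribution.
```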
\section{Proofs}\label{sec:proofs}
\subsection{Expected external angles} \label{subsec:external_angles_proofs}
\begin{proof}[Proof of Theorem~\ref{theo:external} and Theorem~\ref{theo:external_prime}]
Since the proofs in the beta and beta' cases are similar, let us write $P$ for both $P_{n,d}^\beta$ and $\tilde P_{n,d}^\beta$. The first part of the proof given below applies to both cases.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.8\textwidth, height=0.6\textwidth ]{Monotonie-4.png}
\end{center}
\caption{
Idea of the proof of Theorems~\ref{theo:external} and~\ref{theo:external_prime}.
The red interval is the face $G=[X_1,X_2]$, with $k=2$. The vertical line passing through $X_1$ and $X_2$ is the affine subspace $A$. The grey horizontal plane is its orthogonal complement $A^\bot$. The points $Y_3,\ldots, Y_6$ are orthogonal projections of $X_3,\ldots,X_6$ on $A^\bot$. The figure also shows the tangent cone (the solid angle bounded by the brown half-planes) and the normal cone (the blue two-dimensional angle in $A^\bot$). The reader should keep in mind that both cones should in fact be translated to the origin.
}
\label{fig:projections}
\end{figure}
\vspace*{2mm}
\noindent
\textit{Representation of cones and angles.}
Consider the affine subspace $A = \mathop{\mathrm{aff}}\nolimits G = \mathop{\mathrm{aff}}\nolimits (X_1,\ldots,X_k)$ and let $A^\bot$ be the orthogonal complement of $A$; see Figure~\ref{fig:projections}. Note that $\dim A = k-1$ and $\dim A^\bot = d-k+1$ with probability $1$. Observe also that $A^\bot$ is by definition a linear subspace, whereas $A$ need not pass through the origin. In the following, we shall identify $A^\bot$ with $\mathbb{R}^{d-k+1}$ by means of the isometry $I_{A^\bot} : A^{\bot} \to \mathbb{R}^{d-k+1}$, as explained in Section~\ref{subsec:identification}. Let $\pi_{A^\bot}:\mathbb{R}^d \to A^\bot$ be the orthogonal projection onto $A^\bot$.
Consider the points
$$
Y_1:= \pi_{A^\bot}(X_{k+1}) \in A^\bot,
\;\; \ldots, \;\;
Y_{n-k} := \pi_{A^\bot}(X_n)\in A^\bot,
\;\;
Y := \pi_{A^\bot}(X_1)= \ldots = \pi_{A^\bot}(X_k)\in A^\bot.
$$
Let us assume that $G=[X_1,\ldots,X_k]$ is a face of $P$. Then the tangent cone of $P$ at $G$ is given by
\begin{align*}
T(G, P)
=
\mathop{\mathrm{pos}}\nolimits\left(X_{1} - \bar X, \ldots, X_k - \bar X, X_{k+1}- \bar X, \ldots, X_{n} - \bar X\right),
\end{align*}
where the centre $\bar X = (X_1+\ldots+X_k)/k$ is almost surely contained in the relative interior of $G$.
Since the positive hull of $X_1- \bar X,\ldots, X_k - \bar X$ is $A-\bar X$, we arrive at
$$
T(G, P)
=
(A-\bar X) \oplus \mathop{\mathrm{pos}}\nolimits(Y_1-Y,\ldots,Y_{n-k}-Y),
$$
where the direct sum $\oplus$ is orthogonal.
Since $A-\bar X$ is a linear space, it follows that the normal cone at $G$, defined as the polar of the tangent cone, is the polar cone of $\mathop{\mathrm{pos}}\nolimits(Y_1-Y,\ldots,Y_{n-k}-Y)$ taken inside $A^\bot$ as the ambient space. Let us now map all our points to $\mathbb{R}^{d-k+1}$ by considering $Y_i':=I_{A^\bot}(Y_i)\in \mathbb{R}^{d-k+1}$ and $Y' := I_{A^\bot}(Y)\in \mathbb{R}^{d-k+1}$. From the isometry property of $I_{A^\bot}$ it follows that the internal and the external angles at $G$ are given by
\begin{align}
\beta(G, P) &= \alpha (\mathop{\mathrm{pos}}\nolimits(Y_1'-Y',\ldots,Y_{n-k}'-Y')), \label{eq:gamma_G_P_alpha_internal}\\
\gamma(G, P) &= \alpha (\mathop{\mathrm{pos}}\nolimits^\circ (Y_1'-Y',\ldots,Y_{n-k}'-Y')). \label{eq:gamma_G_P_alpha}
\end{align}
The above holds if $G$ is a face of $P$. At this point let us observe that $G$ is \textit{not} a face of $P$ if and only if $\mathop{\mathrm{pos}}\nolimits(Y_1-Y,\ldots,Y_{n-k}-Y)= A^\bot$. This condition means that the angles on the right-hand sides of~\eqref{eq:gamma_G_P_alpha_internal} and~\eqref{eq:gamma_G_P_alpha} are equal to $1$ and $0$, respectively, which corresponds to our convention that $\beta(G,P)=1$ and $\gamma(G,P)=0$ if $G$ is not a face of $P$.
If $N$ denotes a vector with standard normal distribution on $\mathbb{R}^{d-k+1}$ that is independent of everything else, then the definitions of the solid angle and the polar cone imply that
\begin{equation}\label{eq:E_gamma_proof0}
\gamma(G, P)
=
\mathbb{P}[\langle Y_1'-Y', N\rangle \leq 0 ,\ldots, \langle Y_{n-k}'-Y', N\rangle \leq 0 \;|\; Y',Y_1',\ldots,Y_{n-k}'].
\end{equation}
Averaging over $X_1,\ldots,X_n$, we arrive at
\begin{equation}\label{eq:E_gamma_proof}
\mathbb{E} \gamma(G, P)
=
\mathbb{P}[\langle Y_1'-Y', N\rangle \leq 0 ,\ldots, \langle Y_{n-k}'-Y', N\rangle \leq 0].
\end{equation}
The above considerations are valid for both beta and beta' polytopes. In the following, we consider the beta case. Changes needed in the beta' case will be indicated at the end of the proof.
\vspace*{2mm}
\noindent
\textit{Proof of the independence.}
Observe that by~\eqref{eq:E_gamma_proof0}, the random variable $\gamma(G, P)$ is a certain function of the random points $Y',Y_1',\ldots,Y_{n-k}'$. Let us argue that this collection is independent of $I_A(X_1)/\sqrt{1-h^2(A)},\ldots, I_A(X_k)/\sqrt{1-h^2(A)}$, where $h(A) = \|Y\|$ is the distance from the origin to $A$, and $I_A: A\to \mathbb{R}^{k-1}$ is an isometry satisfying $I_A(Y)=0$. This will prove the independence statement of Theorem~\ref{theo:external}. Recall that $Y_i'=I_{A^\bot}(\pi_{A^\bot}(X_{k+i}))$, $1\leq i \leq n-k$, hence $Y_1',\ldots,Y_{n-k}'$ are functions of $X_{k+1}, \ldots,X_n$ and $A^\bot$. Since $I_A(X_1)/\sqrt{1-h^2(A)},\ldots, I_A(X_k)/\sqrt{1-h^2(A)}$ are stochastically independent of $A$ by part (b) of Theorem~\ref{theo:ruben_miles}, these random points are independent of $Y_1',\ldots,Y_{n-k}'$. They are also independent of $Y'=I_{A^\bot}(Y)$ because $\{Y\} = A\cap A^\bot$ is a function of $A$ only.
\vspace*{2mm}
\noindent
\textit{Joint distribution of the projected points.}
Let us now describe the joint distribution of the points $Y',Y_1',\ldots,Y_{n-k}'$.
We claim that
\begin{itemize}
\item[(a)] $Y', Y_1',\ldots,Y_{n-k}'$ are independent points in $\mathbb{R}^{d-k+1}$,
\item[(b)] $Y_1',\ldots,Y_{n-k}'$ are i.i.d.\ with density $f_{d-k+1, \frac{2\beta+k-1} 2}$,
\item[(c)] $Y'$ has density $f_{d-k+1,\gamma}$ with $\gamma = \frac {(2\beta+d)k+ k-d-1}{2}$.
\end{itemize}
To prove (a), observe that conditionally on $A^\bot$, the points $Y_i' = I_{A^\bot} (\pi_{A^\bot} (X_{k+i}))$, $1\leq i\leq n-k$, form an i.i.d.\ sample with density $f_{d-k+1, \frac{2\beta+k-1} 2}$ by Lemma~\ref{lem:projection} (a). Again conditionally on $A^{\bot}$, the point $Y' = I_{A^\bot}(p(A))$ (where $Y=p(A)$ is the projection of the origin onto $A$) has the density $f_{d-k+1, \gamma}$ by Theorem~\ref{theo:ruben_miles} (c) and (d). Still conditioning on $A^\bot$, we observe that $Y'= I_{A^\bot}(\pi_{A^\bot}(X_1))$ is stochastically independent of the points $Y_i' = I_{A^\bot} (\pi_{A^\bot} (X_{k+i}))$, $1\leq i\leq n-k$. Thus, properties (a), (b), (c) hold conditionally on $A^\bot$. Since the joint conditional distribution of $Y',Y_1',\ldots,Y_{n-k}'$ does not depend on $A^\bot$, the statements hold in the unconditional sense, too.
\vspace*{2mm}
\noindent
\textit{Proof of the formula for the external angle.}
We are finally ready to compute the expected external angle. Since the joint distribution of $Y',Y_1',\ldots,Y_{n-k}'$ does not change under orthogonal transformations of $\mathbb{R}^{d-k+1}$, we may rewrite~\eqref{eq:E_gamma_proof} in the following form:
$$
\mathbb{E} \gamma(G, P)
=
\mathbb{P}[\langle Y_1'-Y', e\rangle \leq 0 ,\ldots, \langle Y_{n-k}'-Y', e\rangle \leq 0],
$$
where $e\in \mathbb{R}^{d-k+1}$ is any unit vector. Introducing the random variables $Z_i:= \langle Y_i',e\rangle$ and $Z := \langle Y',e\rangle$, we obtain
\begin{equation}\label{eq:gamma_Z}
\mathbb{E} \gamma(G, P)
=
\mathbb{P}[Z_1\leq Z ,\ldots, Z_{n-k}\leq Z].
\end{equation}
Projecting $Y'$ and $Y_i'$ to $Z$ and $Z_i$ reduces the dimension by $d-k$. Now, by the above description of the joint law of $Y',Y_1',\ldots,Y_{n-k}'$ and by Lemma~\ref{lem:projection} (a), we have that
\begin{itemize}
\item[(a)] $Z, Z_1,\ldots,Z_{n-k}$ are independent random variables,
\item[(b)] $Z_1,\ldots,Z_{n-k}$ are i.i.d.\ with density $f_{1, \frac{2\beta+d-1} 2}$,
\item[(c)] $Z$ has density $f_{1,\frac {(2\beta+d) k -1}{2}}$.
\end{itemize}
Conditioning on the event $\{Z=t\}$ on the right-hand side of~\eqref{eq:gamma_Z} and integrating, we obtain
\begin{multline*}
\mathbb{E} \gamma(G, P)
=
\mathbb{P}[Z_1\leq Z ,\ldots, Z_{n-k}\leq Z]\\
=
\int_{-1}^{+1} c_{1, \frac {(2\beta+d) k - 1}{2}}
(1-t^2)^{\frac {(2\beta+d) k - 1}{2}}
\left(\int_{-1}^t c_{1, \frac{2\beta+d - 1}{2}} (1-s^2)^{\frac{2\beta + d - 1}{2}}{\rm d} s\right)^{n-k} {\rm d} t
=
I_{n,k}(2\beta+d),
\end{multline*}
where we used~\eqref{eq:I_definition} in the last equality. This completes the proof of the formula for the expected external angle in the beta case.
\vspace*{2mm}
\noindent
\textit{The beta' case} is analogous to the beta case, but this time everything is based on Theorem~\ref{theo:ruben_miles_prime} and part (b) of Lemma~\ref{lem:projection}. The joint distribution of $Y',Y_1',\ldots,Y_{n-k}'$ is as follows:
\begin{itemize}
\item[(a)] $Y', Y_1',\ldots,Y_{n-k}'$ are independent points in $\mathbb{R}^{d-k+1}$,
\item[(b)] $Y_1',\ldots,Y_{n-k}'$ are i.i.d.\ with density $\tilde f_{d-k+1, \frac{2\beta-k+1} 2}$,
\item[(c)] $Y'$ has density $\tilde f_{d-k+1,\gamma}$ with $\gamma = \frac {(2\beta-d) k + d-k+1}{2}$.
\end{itemize}
By Lemma~\ref{lem:projection} (b), the joint distribution of the one-dimensional projections $Z_i:= \langle Y_i',e\rangle$ and $Z := \langle Y',e\rangle$ is as follows:
\begin{itemize}
\item[(a)] $Z, Z_1,\ldots,Z_{n-k}$ are independent random variables,
\item[(b)] $Z_1,\ldots,Z_{n-k}$ are i.i.d.\ with density $\tilde f_{1, \frac{2\beta-d+1} 2}$,
\item[(c)] $Z$ has density $\tilde f_{1,\frac {(2\beta-d) k +1}{2}}$.
\end{itemize}
Recalling~\eqref{eq:gamma_Z}, conditioning on the event that $Z=t$ and integrating, we obtain
\begin{multline*}
\mathbb{E} \gamma(G, P)
=
\mathbb{P}[Z_1\leq Z ,\ldots, Z_{n-k}\leq Z]\\
=
\int_{-\infty}^{+\infty} \tilde c_{1, \frac {(2\beta-d) k + 1}{2}}
(1+t^2)^{-\frac {(2\beta-d) k + 1}{2}}
\left(\int_{-\infty}^t \tilde c_{1, \frac{2\beta-d + 1}{2}} (1+s^2)^{-\frac{2\beta-d + 1}{2}}{\rm d} s\right)^{n-k} {\rm d} t
=
\tilde I_{n,k}(2\beta-d),
\end{multline*}
where we used~\eqref{eq:I_definition_prime} in the last equality. This completes the proof in the beta' case.
\end{proof}
\subsection{Internal angles under change of dimension}
To motivate the next theorem, consider a $d$-dimensional simplex $[Z_1,\ldots,Z_{d+1}]$ in a Euclidean space $\mathbb{R}^{d+\ell}$, where $\ell\in\mathbb{N}_0$. Suppose first that $Z_1,\ldots,Z_{d+1}$ are i.i.d.\ with the beta density $f_{d+\ell,\beta}$. Na\"ively, one might conjecture that the expected internal angle at a face of some fixed dimension $k$ does not depend on the choice of $\ell\in\mathbb{N}_0$. Indeed, this angle does not depend on whether we consider the simplex as embedded into $\mathbb{R}^{d+\ell}$ or into its own $d$-dimensional affine hull $A=\mathop{\mathrm{aff}}\nolimits (Z_1,\ldots,Z_{d+1})$, and the beta density preserves its form when restricted to affine subspaces (up to scaling, which does not change the angle). However, as we know from Theorem~\ref{theo:ruben_miles}, the joint distribution of $Z_1,\ldots,Z_{d+1}$ inside their own affine hull involves an additional `Blaschke--Petkantschin term' $\Delta^\ell(Z_1,\ldots,Z_{d+1})$, which is why this argument breaks down.
In the next theorem we show that in order to make the expected internal angle independent of the dimension of the space the simplex is embedded in, we have to decrease the parameter of the beta distribution by $\frac 12$ each time we increase the dimension by $1$.
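Before stating the theorem, the halving rule can be checked directly on the densities: integrating $f_{d+1,\beta}$ over one coordinate gives $f_{d,\beta+\frac 12}$, which is the density form of Lemma~\ref{lem:projection} (a). A minimal numerical sketch in plain Python, where the constant $c_{d,\beta}=\Gamma(\frac d2+\beta+1)/\big(\pi^{d/2}\Gamma(\beta+1)\big)$ is read off from the definition of the beta density $f_{d,\beta}$:

```python
import math

def c(d, beta):
    # normalizing constant of f_{d,beta}(x) = c * (1 - |x|^2)^beta on the unit ball
    return math.gamma(d / 2 + beta + 1) / (math.pi ** (d / 2) * math.gamma(beta + 1))

def marginal(d, beta, r2, N=200000):
    # integrate f_{d+1,beta}(x, t) over the last coordinate t at a point x
    # with |x|^2 = r2; the integrand vanishes at both endpoints, so the plain
    # interior Riemann sum below is effectively a trapezoidal rule
    T = math.sqrt(1 - r2)
    h = 2 * T / N
    s = sum((1 - r2 - (-T + j * h) ** 2) ** beta for j in range(1, N))
    return c(d + 1, beta) * s * h

# claim: projecting from dimension d+1 to d turns parameter beta into beta + 1/2
d, beta, r2 = 2, 1.5, 0.3
lhs = marginal(d, beta, r2)
rhs = c(d, beta + 0.5) * (1 - r2) ** (beta + 0.5)
```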
\begin{theorem}\label{theo:internal_angle_projection}
Let $X_1,\ldots,X_{d+1}$ be i.i.d.\ random points in $\mathbb{R}^{d+\ell}$ with the beta-type distribution $f_{d+\ell, \beta - \frac \ell 2}$, where $\ell\in\mathbb{N}_0$ and $\beta -\frac \ell 2\geq -1$. Then, for all $k\in \{1,\ldots,d\}$, we have
\begin{equation*}
\mathbb{E} \beta([X_1,\ldots,X_{k}], [X_1,\ldots,X_{d+1}]) = J_{d+1,k}(\beta),
\end{equation*}
where $J_{d+1,k}(\beta)$ is given by~\eqref{eq:J_definition}.
That is, the expected internal angle does not depend on $\ell\in \mathbb{N}_0$ as long as $\beta-\frac \ell 2\geq -1$.
Similarly, if $X_1,\ldots,X_{d+1}$ are i.i.d.\ points in $\mathbb{R}^{d+\ell}$ with the beta'-type density $\tilde f_{d+\ell, \beta + \frac \ell 2}$, where $\ell\in\mathbb{N}_0$ and $\beta > \frac d2$, then the above expected internal angle equals $\tilde J_{d+1,k}(\beta)$ defined in~\eqref{eq:J_definition_prime} and thus does not depend on the choice of $\ell\in\mathbb{N}_0$.
\end{theorem}
\begin{proof}
For concreteness, we consider the beta case. The main tool in the proof is Lemma~\ref{lem:projection} that states that the projection of $[X_1,\ldots,X_{d+1}]$ to $\mathbb{R}^d$ is a full-dimensional simplex whose vertices are i.i.d.\ with density $f_{d,\beta}$. We have to relate the expected internal angles of $[X_1,\ldots,X_{d+1}]$ to those of its projection.
In Section~\ref{subsec:external_angles_proofs}, especially in Equation~\eqref{eq:gamma_G_P_alpha_internal}, we have shown (with a different notation) that
$$
\beta([X_1,\ldots,X_{k}], [X_1,\ldots,X_{d+1}]) = \alpha (\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)),
$$
where
\begin{itemize}
\item[(a)] $V, V_1,\ldots,V_{d+1-k}$ are independent points in $\mathbb{R}^{d+\ell-k+1}$ such that
\item[(b)] $V_1,\ldots,V_{d+1-k}$ are i.i.d.\ with density $f_{d+\ell-k+1, \frac{2\beta-\ell+k-1} 2}$ and
\item[(c)] $V$ has density $f_{d+\ell-k+1,\gamma}$ with $\gamma = \frac {(2\beta- \ell+d +\ell)k+ k-d-\ell-1}2 = \frac {(2\beta+d)k+ k-d-1}{2} -\frac \ell 2$.
\end{itemize}
Note that an increase of the dimension by $\ell$ is always accompanied by a decrease of the beta-parameter by $\frac \ell2$. Let $\Pi: \mathbb{R}^{d+\ell-k+1}\to\mathbb{R}^{d-k+1}$ be the orthogonal projection defined by
\begin{equation}\label{eq:def_projection_Pi}
\Pi (x_0,x_1,\ldots,x_{d+\ell-k}) = (x_0,x_{\ell+1},\ldots,x_{d+\ell-k}),\qquad (x_0,\ldots,x_{d+\ell-k})\in\mathbb{R}^{d+\ell-k+1}.
\end{equation}
By Lemma~\ref{lem:projection} (a), the joint distribution of the points $W:=\Pi V$, $W_1:=\Pi V_1,\ldots,W_{d+1-k}:=\Pi V_{d+1-k}$ can be described as follows:
\begin{itemize}
\item[(a)] $W, W_1,\ldots,W_{d+1-k}$ are independent points in $\mathbb{R}^{d-k+1}$,
\item[(b)] $W_1,\ldots,W_{d+1-k}$ are i.i.d.\ with density $f_{d-k+1, \frac{2\beta+k-1} 2}$,
\item[(c)] $W$ has density $f_{d-k+1,\gamma'}$ with $\gamma' = \frac {(2\beta+d)k+ k-d-1}{2}$.
\end{itemize}
In particular, their distribution does not depend on $\ell$. To prove the theorem, it suffices to show that
\begin{equation}\label{eq:E_alpha_V_eq_E_alpha_W}
\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)) = \mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W)).
\end{equation}
Since this identity becomes trivial for $\ell=0$, we shall henceforth assume that $\ell\in\mathbb{N}$. Using the definition of the solid angle, we have
\begin{align*}
2\,\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W))
=
\mathbb{P}[\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W) \cap L_{1} \neq \{0\}],
\end{align*}
where $L_1 \in G(d-k+1, 1)$ is a uniformly distributed random line passing through the origin which is independent of everything else. Since the probability law of the cone $\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W)$ is invariant under orthogonal transformations, we can replace $L_1$ by any fixed line, which leads to
$$
2\,\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W))
=
\mathbb{P}[\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W) \cap \mathop{\mathrm{lin}}\nolimits (e_0) \neq \{0\}],
$$
where $e_0,e_1,\ldots, e_{d-k}$ is the standard orthonormal basis of $\mathbb{R}^{d-k+1}$.
On the other hand, using the properties of conic intrinsic volumes and the conic Crofton formula, see, in particular, \eqref{eq:h_def} and~\eqref{eq:ConicalIntVolGrassmannAngle}, we can write
\begin{align*}
2\,\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V))
&=
2\,\mathbb{E} \upsilon_{d-k+1} (\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)) \\
&=
2\,\mathbb{E} h_{d-k+1} (\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V))\\
&=
\mathbb{P}[\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V) \cap L_{\ell + 1}' \neq \{0\}],
\end{align*}
where $L_{\ell + 1}'\in G(d+\ell-k+1,\ell+1)$ is a random, uniformly distributed $(\ell+1)$-dimensional linear subspace of $\mathbb{R}^{d+\ell-k+1}$ that is independent of everything else. Once again by rotational invariance of the probability law of the random cone $\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)$, we can replace $L_{\ell+1}'$ by an arbitrary deterministic $(\ell+1)$-dimensional linear subspace of our choice, which leads to
\begin{equation}\label{eq:alpha_pos_as_intersect}
\begin{split}
&2\,\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V))\\
&\qquad=
\mathbb{P}[\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V) \cap \mathop{\mathrm{lin}}\nolimits(e_0,e_1,\ldots,e_\ell) \neq \{0\}],
\end{split}
\end{equation}
where $e_0,e_1,\ldots, e_{d+\ell-k}$ is the standard orthonormal basis of $\mathbb{R}^{d+\ell-k+1}$. Recalling that $W=\Pi V$ and $W_i=\Pi V_i$ for $i\in\{1,\ldots,d-k+1\}$, and using the definition of the orthogonal projection $\Pi$ given in~\eqref{eq:def_projection_Pi}, we arrive at
\begin{align*}
&2\,\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W))\\
&\qquad=
\mathbb{P}[\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W) \cap \mathop{\mathrm{lin}}\nolimits (e_0) \neq \{0\}]\\
&\qquad=
\mathbb{P}[\Pi \mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V) \cap (\mathop{\mathrm{lin}}\nolimits (e_0)\backslash \{0\}) \neq \varnothing]\\
&\qquad=
\mathbb{P}[\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V) \cap \Pi^{-1}(\mathop{\mathrm{lin}}\nolimits (e_0)\backslash \{0\}) \neq \varnothing].
\end{align*}
Now, a point $x=(x_0,\ldots,x_{d+\ell-k})$ satisfies $\Pi x \in \mathop{\mathrm{lin}}\nolimits (e_0)\backslash \{0\}$ if and only if $x_0\neq 0$ and $x_{\ell+1}=\ldots=x_{d+\ell-k}=0$, that is, $ \Pi^{-1}(\mathop{\mathrm{lin}}\nolimits (e_0)\backslash \{0\}) = \mathop{\mathrm{lin}}\nolimits(e_0,e_1,\ldots,e_\ell) \backslash \mathop{\mathrm{lin}}\nolimits (e_1,\ldots,e_\ell)$. Thus, we can write
\begin{multline*}
2\,\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W))
\\=
\mathbb{P}[\exists v \in (\mathop{\mathrm{lin}}\nolimits(e_0,e_1,\ldots,e_\ell)\backslash\{0\}) \backslash (\mathop{\mathrm{lin}}\nolimits (e_1,\ldots,e_\ell)\backslash\{0\})\colon v\in \mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)].
\end{multline*}
However, by rotational invariance of the involved distributions, the $(d+1-k)$-dimensional linear space $\mathop{\mathrm{lin}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)$ has the uniform distribution $\nu_{d+1-k}$ on the Grassmannian $G(d+\ell-k+1,d+1-k)$. So, \cite[Lemma 13.2.1]{SW08} implies that the intersection of $\mathop{\mathrm{lin}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)$ with the $\ell$-dimensional linear space $E:=\mathop{\mathrm{lin}}\nolimits (e_1,\ldots,e_\ell)$ in $\mathbb{R}^{d+\ell-k+1}$ is $\{0\}$ with probability $1$. Indeed,
\begin{align*}
&\mathbb{P}[\mathop{\mathrm{lin}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)\cap E\neq\{0\}] \\
&\qquad= \int_{{\rm SO}(d+\ell-k+1)}\mathbbm{1}_{\{\dim(\vartheta\mathop{\mathrm{lin}}\nolimits(e_0,e_{\ell+1},\ldots,e_{d+\ell-k})\cap E)>0\}}\,\nu(\textup{d} \vartheta) = 0,
\end{align*}
where ${\rm SO}(d+\ell-k+1)$ is the special orthogonal group in $\mathbb{R}^{d+\ell-k+1}$ with its unique invariant Haar probability measure $\nu$.
It follows that
\begin{align*}
&2\,\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (W_1-W,\ldots,W_{d+1-k}-W))\\
&\qquad=
\mathbb{P}[\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V) \cap \mathop{\mathrm{lin}}\nolimits(e_0,e_1,\ldots,e_\ell) \neq \{0\}]\\
&\qquad=
2\,\mathbb{E} \alpha (\mathop{\mathrm{pos}}\nolimits (V_1-V,\ldots,V_{d+1-k}-V)),
\end{align*}
where we used~\eqref{eq:alpha_pos_as_intersect} in the last step. This proves~\eqref{eq:E_alpha_V_eq_E_alpha_W} and completes the proof of Theorem~\ref{theo:internal_angle_projection} in the beta case.
The proof in the beta' case is similar, but this time an increase of the dimension by $\ell$ is always accompanied by an \textit{increase} of the beta'-parameter by $\frac \ell2$, see Lemma~\ref{lem:projection} (b).
\end{proof}
\begin{remark}
Returning to the discussion at the beginning of this section, we can equivalently restate Theorem~\ref{theo:internal_angle_projection} as follows. Let $Y_1,\ldots,Y_{d+1}$ be (in general, stochastically dependent) random points in $\mathbb{R}^d$ whose joint density is proportional to
$$
\Delta^{\ell} (y_1,\ldots,y_{d+1})\,\prod_{i=1}^{d+1} f_{d,\beta - \frac{\ell}{2} }(y_i).
$$
Then, the expected internal angle $\mathbb{E} \beta([Y_1,\ldots,Y_k], [Y_1,\ldots,Y_{d+1}])$ does not depend on the choice of $\ell\in\mathbb{N}_0$, as long as $\beta-\frac \ell 2 \geq -1$. Indeed, by Theorem~\ref{theo:ruben_miles}, the joint distribution of $X_1,\ldots,X_{d+1}$ inside their own affine hull $\mathop{\mathrm{aff}}\nolimits(X_1,\ldots,X_{d+1})$ is the same as the joint distribution of $Y_1,\ldots,Y_{d+1}$ up to rescaling, which does not change internal angles. A similar statement also holds in the beta' case.
\end{remark}
\subsection{Analytic continuation}
One of the main ideas used in our proofs is to raise the dimension. More precisely, we shall view the beta polytope $P_{n,d}^\beta\subset \mathbb{R}^d$ as a projection of $P_{n,d+1}^{\beta-1/2} \subset \mathbb{R}^{d+1}$; see Lemma~\ref{lem:projection} (a). Since raising the dimension must be accompanied by lowering the parameter $\beta$, such a representation is possible for $\beta \geq -\frac 12$ only. For example, the uniform distribution on $\mathbb{S}^{d-1}$ (corresponding to $\beta=-1$) cannot be represented as a projection of a higher-dimensional beta distribution. It is for this reason that our proofs work for $\beta > -\frac 12$ only. In order to extend the results to the full range $\beta\geq -1$, we shall use analytic continuation. To this end, we need to show that the functionals of interest, such as the expected internal angles of beta simplices, can be viewed as analytic functions of $\beta$.
The following lemma makes this precise and will be applied several times below.
Observe that for any fixed $x\in\mathbb{B}^d$, we can consider
$$
f_{d,z}(x)= \frac{ \Gamma\left( \frac{d}{2} + z + 1 \right) }{ \pi^{ \frac{d}{2} } \Gamma\left( z+1 \right) }(1-\|x\|^2)^z
$$
as an analytic function of the complex variable $z$ on the half-plane $H_{-1}:=\{z\in\mathbb{C}: \Re z>-1\}$.
\begin{lemma}\label{lem:Analytic}
Fix $d\in\mathbb{N}$, $n\in\mathbb{N}$, and let $\varphi:(\mathbb{B}^d)^{n}\to \mathbb{R}$ be a bounded measurable function. Then the function
$$
\mathcal{I}(z):=\int_{(\mathbb{B}^d)^{n}}\varphi(x_1,\ldots,x_{n})\left(\prod_{i=1}^{n} f_{d,z}(x_i)\right)\,
\lambda_d(\textup{d} x_1) \ldots \lambda_d(\textup{d} x_{n})
$$
is analytic on the half-plane $H_{-1}$.
\end{lemma}
\begin{proof}
If $K\subset H_{-1}$ is a compact set, then there is a constant $C(K)$ depending only on $K$ such that
$$
|f_{d,z}(x)| = \left|\frac{ \Gamma\left( \frac{d}{2} + z + 1 \right) }{ \pi^{ \frac{d}{2} } \Gamma\left( z+1 \right) }(1-\|x\|^2)^z\right| \leq C(K) (1-\|x\|^2)^{\Re z}
$$
for all $x\in \mathbb{B}^d$ and $z\in K$.
Since $\varphi$ is bounded and the function $(1-\|x\|^2)^{\Re z}$ is integrable over $\mathbb{B}^d$ for $\Re z >-1$, the function $\mathcal{I}(z)$ is well-defined.
\vspace*{2mm}
\noindent
\textit{Continuity.}
Next, we claim that $\mathcal{I}(z)$ is continuous on $H_{-1}$. To prove this, take a sequence $(z_k)_{k\in\mathbb{N}}\subset H_{-1}$ with $z_k\to z\in H_{-1}$, as $k\to\infty$. Then,
$$
|\mathcal{I}(z) - \mathcal{I}(z_k)| \leq \int_{(\mathbb{B}^d)^{n}} |\varphi(x_1,\ldots,x_{n})| \left|\prod_{i=1}^{n}f_{d,z}(x_i)-\prod_{i=1}^{n}f_{d,z_k}(x_i)\right|\,
\lambda_d(\textup{d} x_1) \ldots \lambda_d(\textup{d} x_{n}).
$$
For every fixed $x_1,\ldots,x_{n}$ and as $k\to\infty$, the integrand converges to $0$, because $\lim_{k\to\infty} f_{d,z_k}(x_i) = f_{d,z}(x_i)$ for all $x_1,\ldots,x_n\in\mathbb{B}^d$. Moreover, recall that $\varphi$ is bounded and observe that
\begin{multline*}
\left|\prod_{i=1}^{n}f_{d,z}(x_i)-\prod_{i=1}^{n}f_{d,z_k}(x_i)\right|
\leq
\prod_{i=1}^{n} |f_{d,z}(x_i)| + \prod_{i=1}^{n}|f_{d,z_k}(x_i)|
\\
\leq
C(K)^{n} \prod_{i=1}^{n} (1-\|x_i\|^2)^{\Re z} + C(K)^{n} \prod_{i=1}^{n} (1-\|x_i\|^2)^{a}
\end{multline*}
with $K = \{z, z_1,\ldots\}$ being compact and $a := \inf_{k\in\mathbb{N}} \Re z_k > -1$. Since the function $(1-\|x_i\|^2)^a$ is integrable over $\mathbb{B}^{d}$ for $a>-1$, the dominated convergence theorem applies, thus proving that $\mathcal{I}(z_k)\to \mathcal{I}(z)$, as $k\to\infty$. Hence, $\mathcal{I}(z)$ is continuous.
\vspace*{2mm}
\noindent
\textit{Analyticity.}
To prove that $\mathcal{I}(z)$ is analytic, let $\gamma\subset H_{-1}$ be any triangular contour. By Morera's theorem \cite[Theorem 10.17]{RudinRealComplexAnalysis} it suffices to show that
$$
\oint_\gamma\mathcal{I}(z)\,\textup{d} z = 0.
$$
But since the function $z\mapsto \prod_{i=1}^{n} f_{d,z}(x_i)$ is analytic on $H_{-1}$ for all $x_1,\ldots,x_n\in\mathbb{B}^d$, Cauchy's integral theorem \cite[Theorem 10.14]{RudinRealComplexAnalysis} implies that, for all $x_1,\ldots,x_{n}\in\mathbb{B}^d$,
$$
\oint_\gamma \prod_{i=1}^{n}f_{d,z}(x_i)\,\textup{d} z = 0.
$$
Since $\varphi$ is bounded, for every $z\in\gamma$ and $x_1,\ldots,x_{n} \in\mathbb{B}^d$ we have
$$
\left| \varphi(x_1,\ldots,x_{n})\left(\prod_{i=1}^{n}f_{d,z}(x_i)\right)\right|
\leq
\sup_{x_1,\ldots,x_n\in\mathbb{B}^d} |\varphi(x_1,\ldots,x_n)| \cdot C(\gamma)^{n} \prod_{i=1}^{n}(1-\|x_i\|^2)^{b},
$$
where $b:=\inf_{z\in\gamma} \Re z>-1$.
Since the function $(1-\|x_i\|^2)^b$ is integrable over $\mathbb{B}^{d}$ for $b>-1$, we may interchange the order of integration by Fubini's theorem, which yields
\begin{align*}
\oint_\gamma\mathcal{I}(z)\,\textup{d} z = \int_{(\mathbb{B}^d)^{n}}\varphi(x_1,\ldots,x_{n})
\left(\oint_\gamma \prod_{i=1}^{n}f_{d,z}(x_i)\,\textup{d} z \right) \lambda_d (\textup{d} x_1) \ldots \lambda_{d}(\textup{d} x_{n})
= 0.
\end{align*}
Note that since $\gamma$ is a triangle, the contour integral can be reduced to usual Lebesgue integrals, which justifies the above use of Fubini's theorem. The argument is complete.
\end{proof}
\begin{corollary}\label{cor:analytic_J}
The function $J_{m,\ell}(\alpha)$, originally defined in Theorem~\ref{theo:f_vect} for real $\alpha>-1$, admits an extension to an analytic function on the half-plane $H_{-1}=\{z\in\mathbb{C}: \Re z>-1\}$.
\end{corollary}
\begin{proof}
Apply Lemma~\ref{lem:Analytic} with $d=m-1$, $n=m$ and $\varphi(x_1,\ldots,x_m) = \beta ([x_1,\ldots,x_{\ell}], [x_1,\ldots,x_m])$, which is bounded by $1$ and measurable.
\end{proof}
\begin{corollary}\label{cor:analytic_f_k}
The function $\beta\mapsto \mathbb{E} f_k(P_{n,d}^\beta)$, originally defined for real $\beta>-1$, admits an extension to an analytic function on the half-plane $H_{-1}=\{z\in\mathbb{C}: \Re z>-1\}$.
\end{corollary}
\begin{proof}
Apply Lemma~\ref{lem:Analytic} with $\varphi(x_1,\ldots,x_{n}) = f_k([x_1,\ldots,x_n])$, which is bounded by $\binom {n}{k+1}$.
\end{proof}
\begin{remark}
Observe that the problem mentioned at the beginning of the section does not arise in the beta' case since by Lemma~\ref{lem:projection} (b) we can represent $\tilde P_{n,d}^{\beta}$ as a projection of $\tilde P_{n,d+1}^{\beta+1/2}$ for any $\beta>\frac d2$ and the new parameters also satisfy $\beta+ \frac 12 >\frac{d+1}{2}$. This is why we only treated the beta case here.
\end{remark}
\subsection{Expected \texorpdfstring{$f$}{f}-vector}
In this section we prove Theorems \ref{theo:f_vect} and \ref{theo:f_vect_prime}.
\begin{proof}[Proof of Theorem~\ref{theo:f_vect}.]
We are going to compute the expected $f$-vector of $P_{n,d}^\beta$. To this end, we shall represent this polytope as a random projection of a higher-dimensional polytope and then use the formula from Proposition \ref{theo:affentranger_schneider}.
\vspace*{2mm}
\noindent
\textit{Geometric argument.}
We take some $\ell\in\mathbb{N}$, assume that $\beta-\frac \ell2 >-1$ and consider the random polytope $P_{n,d+\ell}^{\beta-\frac \ell 2}$ in $\mathbb{R}^{d+\ell}$. Independently, let $L_d$ be a random, uniformly distributed $d$-dimensional linear subspace in $\mathbb{R}^{d+\ell}$. Denote by $\Pi_d$ the orthogonal projection on $L_d$. By Lemma~\ref{lem:projection} (a) we have that
\begin{equation}\label{eq:P_is_projection}
f_k(\Pi_d P_{n,d+\ell}^{\beta - \frac \ell 2}) \stackrel{d}{=} f_k(P_{n,d}^{\beta}).
\end{equation}
In particular, the expectations of these quantities are equal. On the other hand, by Proposition~\ref{theo:affentranger_schneider}, we have that
$$
\mathbb{E} \left[ f_k(\Pi_d P_{n,d+\ell}^{\beta-\frac \ell 2}) \Big| P_{n,d+\ell}^{\beta-\frac \ell2} \right]
=
2 \sum_{s=0}^\infty \sum_{G\in \mathcal{F}_{d-2s-1}(P_{n,d+\ell}^{\beta-\frac \ell2})} \gamma(G, P_{n,d+\ell}^{\beta-\frac \ell2}) \sum_{F\in\mathcal{F}_{k}(G)} \beta(F,G).
$$
In the following we consider only terms with $d-2s\geq 1$ because all remaining terms are equal to $0$.
All $(d-2s-1)$-dimensional faces of $P_{n,d+\ell}^{\beta - \frac \ell 2}$ have the form $G=[X_{i_1},\ldots, X_{i_{d-2s}}]$ for some indices $1\leq i_1<\ldots < i_{d-2s} \leq n$. By symmetry, the contributions of all these faces are equal, so we may just take $G = [X_1,\ldots,X_{d-2s}]$ (on the event that this is indeed a face) and write
\begin{multline*}
\mathbb{E} \left[f_k(\Pi_d P_{n,d+\ell}^{\beta - \frac \ell 2})\right]
=
\mathbb{E} \left[\mathbb{E} \left[ f_k(\Pi_d P_{n,d+\ell}^{\beta - \frac \ell 2}) \Big| P_{n,d+\ell}^{\beta - \frac \ell 2} \right]\right]\\
=
2 \sum_{s=0}^\infty \binom n {d-2s} \mathbb{E} \left[\gamma(G, P_{n,d+\ell}^{\beta - \frac \ell 2})\mathbbm{1}_{\left\{G \in \mathcal{F}_{d-2s-1}( P_{n,d+\ell}^{\beta - \frac \ell 2})\right\}} \sum_{F\in \mathcal{F}_k(G)} \beta(F,G) \right].
\end{multline*}
By~\eqref{eq:P_is_projection} and the independence part of Theorem~\ref{theo:external} (which is a crucial step in this proof allowing us to treat external and internal angles separately), we have
\begin{align*}
\mathbb{E} f_k(P_{n,d}^{\beta})
&=
\mathbb{E} \left[f_k(\Pi_d P_{n,d+\ell}^{\beta - \frac \ell2})\right]\\
&=
2 \sum_{s=0}^\infty \binom n {d-2s} \mathbb{E} \left[\gamma(G, P_{n,d+\ell}^{\beta - \frac \ell 2})
\mathbbm{1}_{\left\{G \in \mathcal{F}_{d-2s-1}( P_{n,d+\ell}^{\beta - \frac \ell 2})\right\}}\right] \mathbb{E}\left[\sum_{F\in \mathcal{F}_k(G)} \beta(F,G) \right].
\end{align*}
By Theorem~\ref{theo:external} and recalling the convention that the external angle is $0$ if $G$ is not a face, we obtain
\begin{align*}
\mathbb{E} \left[\gamma(G, P_{n,d+\ell}^{\beta - \frac \ell 2})\mathbbm{1}_{\left\{G \in \mathcal{F}_{d-2s-1}
(P_{n,d+\ell}^{\beta - \frac \ell 2})\right\}}\right]
&=
I_{n,d-2s}\left(2\left(\beta - \frac \ell 2\right) + d + \ell\right)\\
&=
I_{n,d-2s}(2\beta+d).
\end{align*}
Also, recalling that $G$ is the convex hull of i.i.d.\ random points $X_1,\ldots,X_{d-2s}$ in $\mathbb{R}^{d+\ell}$ with density $f_{d+\ell,\beta -\frac \ell 2}$, we apply Theorem~\ref{theo:internal_angle_projection} to deduce that
\begin{align*}
\mathbb{E}\left[\sum_{F\in \mathcal{F}_k(G)} \beta(F,G) \right]
&=
\binom {d-2s}{k+1}
\mathbb{E}\beta([X_1,\ldots,X_{k+1}], [X_1,\ldots,X_{d-2s}])\\
&=
\binom {d-2s}{k+1}
\mathbb{E}\beta([X_1',\ldots,X_{k+1}'], [X_1',\ldots,X_{d-2s}'])\\
&=
\binom {d-2s}{k+1} J_{d-2s,k+1}\left(\beta + s + \frac 12\right),
\end{align*}
where $X_1',\ldots,X_{d-2s}'$ are i.i.d.\ random points in $\mathbb{R}^{d-2s-1}$ with density $f_{d-2s-1,\beta + s + \frac 12}$.
Taking everything together, we arrive at the final formula
\begin{equation}\label{eq:ProofXXX}
\mathbb{E} f_k(P_{n,d}^{\beta})
=
2 \sum_{s=0}^\infty \binom n {d-2s} \binom {d-2s}{k+1} I_{n,d-2s}(2\beta + d) J_{d-2s,k+1}\left(\beta + s + \frac 12\right).
\end{equation}
For the above argument, the value of $\ell\in\mathbb{N}$ was irrelevant, so that we can take $\ell=1$. Because of the restriction on $\beta$ at the very beginning of the argument, the proof so far only covers the case where $\beta > - \frac 12$.
\vspace*{2mm}
\noindent
\textit{Analytic continuation: Proof for $\beta>-1$.}
To extend the result to all $\beta>-1$ we argue by analytic continuation. For that purpose we first recall that, by Corollary~\ref{cor:analytic_f_k}, the function $\beta\mapsto \mathbb{E} f_k(P_{n,d}^{\beta})$ admits an analytic continuation to $\beta\in\{z\in\mathbb{C}: \Re z>-1\}$. On the other hand, the right-hand side of~\eqref{eq:ProofXXX} also admits an analytic extension to $\beta\in\{z\in\mathbb{C}: \Re z>-1\}$. Indeed, for $J_{d-2s,k+1}(\beta+s+{1\over 2})$ this was observed in Corollary~\ref{cor:analytic_J}. For $I_{n,d-2s}(2\beta+d)$ this follows from the identity
$$
I_{n,d-2s}(2\beta+d) = \mathbb{E} \gamma([X_1,\ldots,X_{d-2s}], [X_1,\ldots,X_n]), \quad \beta>-1,
$$
where $X_1,\ldots,X_n$ are i.i.d.\ with density $f_{d,\beta}$
(see Theorem~\ref{theo:external}) and Lemma~\ref{lem:Analytic} with $\varphi(x_1,\ldots, x_n) = \gamma ([x_1,\ldots,x_{d-2s}], [x_1,\ldots,x_n])$.
Hence, by the uniqueness of analytic continuation (see~\cite[Corollary to Theorem 10.18]{RudinRealComplexAnalysis}), these two expressions must coincide for all $\beta\in(-1,\infty)$, since they already coincide for all $\beta\in(-{\frac 1 2},\infty)$.
\vspace*{2mm}
\noindent
\textit{Continuity: Proof for $\beta=-1$.}
To prove that~\eqref{eq:ProofXXX} also holds in the limiting case $\beta=-1$ corresponding to the uniform distribution on $\mathbb{S}^{d-1}$, we shall argue that both sides of~\eqref{eq:ProofXXX} are continuous at $\beta=-1$. Regarding the left-hand side, we claim that
\begin{equation}\label{eq:lim_E_f_k_beta_minus_1}
\lim_{\beta \downarrow -1} \mathbb{E} f_k(P_{n,d}^\beta) = \mathbb{E} f_k(P_{n,d}^{-1})
\end{equation}
for all $d\geq 2$, $n\geq d+1$ and $k \in \{0,1,\ldots, d-1\}$.
To prove this, we observe that the mapping $(x_1,\ldots,x_n) \mapsto f_k([x_1,\ldots,x_n])$ from $(\mathbb{B}^d)^n$ to $\{0,1,2,\ldots\}$ is continuous on the set $\text{GP}_{n,d}$ of all tuples $(x_1,\ldots,x_n)$ that are in general position (meaning that no $d+1$ points are located on a common affine hyperplane); see also~\cite[Lemma 4.1]{beta_polytopes}. Let $X_1^{(\beta)},\ldots,X_n^{(\beta)}\in\mathbb{B}^d$ be i.i.d.\ random points with the beta density $f_{d,\beta}$ (if $\beta>-1$) or with the uniform distribution on $\mathbb{S}^{d-1}$ (if $\beta=-1$).
From the proof of \cite[Proposition 3.9]{beta_polytopes} we conclude that we have the weak convergence
$$
(X_1^{(\beta)},\ldots,X_n^{(\beta)})\overset{d}{\to} (X_1^{(-1)},\ldots,X_n^{(-1)}),
$$
on $(\mathbb{B}^d)^n$, as $\beta\downarrow -1$. Also, almost surely, $(X_1^{(-1)},\ldots,X_{n}^{(-1)}) \in \text{GP}_{n,d}$. The continuous mapping theorem then yields that
$$
f_k(P_{n,d}^\beta) \overset{d}{\to} f_k(P_{n,d}^{-1}),
$$
as $\beta\downarrow -1$. Moreover, since almost surely $f_k(P_{n,d}^\beta) \leq \binom n{k+1}$ for all $\beta\geq -1$, we conclude from this that~\eqref{eq:lim_E_f_k_beta_minus_1} holds.
It remains to prove that the right-hand side of~\eqref{eq:ProofXXX} is also continuous at $\beta = -1$. Indeed, for $J_{d-2s,k+1}\left(\beta + s + \frac 12\right)$ we even proved analyticity since $\beta + s+\frac 12 \geq -\frac 12$, while for $I_{n,d-2s}(2\beta + d)$ the continuity follows from the defining integral representation~\eqref{eq:I_definition} since $2\beta + d\geq d-2\geq 0$ for $d\geq 2$. Having proved that both sides are continuous at $\beta=-1$, we conclude that~\eqref{eq:ProofXXX} indeed holds for $\beta=-1$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:f_vect_prime}.]
The proof for the beta' case is line by line the same as the one for Theorem~\ref{theo:f_vect} given before. In addition to the distributional equality
$$
f_k(\Pi_d\tilde{P}_{n,d+\ell}^{\beta+{\ell\over 2}}) \overset{d}{=} f_k(\tilde{P}_{n,d}^\beta)
$$
that follows from Lemma \ref{lem:projection} (b), one now uses Theorem \ref{theo:external_prime} instead of Theorem \ref{theo:external} and the beta' case of Theorem \ref{theo:internal_angle_projection}. No analytic continuation and continuity arguments are needed.
\end{proof}
\subsection{Proof of the monotonicity}
In this section we prove Theorems~\ref{theo:monoton} and~\ref{theo:monoton_prime}.
Fix $d\geq 2$, $n\geq d+1$ and $k\in\{0,1,\ldots,d-1\}$. Our aim is to prove that $\mathbb{E} f_k(P_{n,d}^{\beta}) < \mathbb{E} f_k(P_{n+1,d}^{\beta})$ and $\mathbb{E} f_k(\tilde P_{n,d}^{\beta}) < \mathbb{E} f_k(\tilde P_{n+1,d}^{\beta})$. In view of the formulae
\begin{align*}
\mathbb{E} f_k(P_{n,d}^{\beta})
&=
2 \sum_{s=0}^\infty \binom n {d-2s} \binom {d-2s}{k+1} I_{n,d-2s}(2\beta+d) J_{d-2s,k+1}\left(\beta + s + \frac 12\right),\\
\mathbb{E} f_k(\tilde P_{n,d}^{\beta})
&=
2 \sum_{s=0}^\infty \binom n {d-2s} \binom {d-2s}{k+1} \tilde I_{n,d-2s}(2\beta-d) \tilde J_{d-2s,k+1}\left(\beta - s - \frac 12\right),
\end{align*}
that follow from Theorem \ref{theo:f_vect} and Theorem \ref{theo:f_vect_prime}, respectively, it suffices to show that
\begin{equation}\label{eq:MonoBetaToShow}
\binom n {m} I_{n,m}(\alpha) \leq \binom {n+1} {m} I_{n+1,m}(\alpha)
\end{equation}
and
\begin{equation}\label{eq:MonoBetaPrimeToShow}
\binom n {m} \tilde I_{n,m}(\alpha) \leq \binom {n+1} {m} \tilde I_{n+1,m}(\alpha)
\end{equation}
for all $\alpha\geq 0$ and $m \in \{1,\ldots,n-1\}$ with strict inequality holding if $m\neq 1$. Note that we do not need to consider the case $m=0$ since the term with $d-2s=0$ vanishes because then $\binom {d-2s}{k+1}=0$.
Recall from Theorems~\ref{theo:f_vect} and~\ref{theo:f_vect_prime} that
\begin{align}
\binom n {m} I_{n,m}(\alpha)
&=
\binom n {m} \int_{-1}^{+1} c_{1, \frac {\alpha m -1}{2}}
(1-t^2)^{\frac {\alpha m - 1}{2}}
\left(\int_{-1}^t c_{1, \frac{\alpha - 1}{2}} (1-s^2)^{\frac{\alpha - 1}{2}}{\rm d} s\right)^{n-m} {\rm d} t,
\label{eq:I_n_m_proof_monoton1}\\
\binom n {m} \tilde I_{n,m}(\alpha)
&=
\binom n {m} \int_{-\infty}^{+\infty} \tilde c_{1, \frac {\alpha m + 1}{2}}
(1+t^2)^{-\frac {\alpha m + 1}{2}}
\left(\int_{-\infty}^t \tilde c_{1, \frac{\alpha + 1}{2}} (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s\right)^{n-m} {\rm d} t.
\label{eq:I_n_m_proof_monoton2}
\end{align}
Note that the factors $c_{1, \frac {\alpha m -1}{2}}$ and $\tilde c_{1, \frac {\alpha m + 1}{2}}$ appearing in the above formulae are strictly positive and do not depend on $n$, so that we can ignore them in the sequel. To simplify the notation, we introduce the distribution function $F(t) = \int_{-\infty}^t f(s)\, {\rm d} s$, where $f$ is the probability density on $\mathbb{R}$ given by
$$
f(s)
=
\begin{cases}
f_{1,\frac{\alpha-1}{2}}(s) = c_{1, \frac{\alpha - 1}{2}} (1-s^2)^{\frac{\alpha - 1}{2}} \mathbbm{1}_{\{|s|<1\}},
&\text{ in the beta case,}\\
\tilde f_{1,\frac{\alpha+1}2}(s) = \tilde c_{1, \frac{\alpha + 1}{2}} (1+s^2)^{-\frac{\alpha + 1}{2}},
&\text{ in the beta' case}.
\end{cases}
$$
Let first $m=1$. For concreteness, we consider the beta case. From~\eqref{eq:I_n_m_proof_monoton1} we have
$$
\binom n 1 I_{n,1} (\alpha) = n \int_{-1}^{+1} f(t) F^{n-1} (t){\rm d} t = 1.
$$
So, for $m=1$, both sides of~\eqref{eq:MonoBetaToShow} are equal to $1$. The beta' case is similar.
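Both the identity just derived and the $\alpha=0$ computation below are easy to probe numerically. In the following plain-Python sketch, the constant $c_{1,\beta}$ comes from the definition of the beta density with $d=1$, and the substitution $t=\sin\theta$ removes the endpoint singularity for $\alpha\geq 0$:

```python
import math

def c1(beta):
    # normalizing constant of f_{1,beta}(s) = c1 * (1 - s^2)^beta on (-1, 1),
    # read off from the definition of the beta density with d = 1
    return math.gamma(beta + 1.5) / (math.sqrt(math.pi) * math.gamma(beta + 1.0))

def I(n, m, alpha, N=4000):
    # evaluate I_{n,m}(alpha) numerically; the substitution t = sin(theta)
    # turns (1 - t^2)^b dt into cos(theta)^(2b + 1) d(theta), which is
    # smooth on [-pi/2, pi/2] whenever alpha >= 0
    ci = c1((alpha - 1.0) / 2.0)      # inner (density) constant
    co = c1((alpha * m - 1.0) / 2.0)  # outer constant
    h = math.pi / N
    th = [-math.pi / 2 + j * h for j in range(N + 1)]
    w = [math.cos(t) ** alpha for t in th]
    F = [0.0] * (N + 1)               # F[j] ~ inner distribution function at sin(th[j])
    for j in range(1, N + 1):
        F[j] = F[j - 1] + ci * h * 0.5 * (w[j - 1] + w[j])
    total = 0.0
    for j in range(N + 1):
        g = co * math.cos(th[j]) ** (alpha * m) * F[j] ** (n - m)
        total += g * h * (0.5 if j in (0, N) else 1.0)
    return total
```

Comparing `math.comb(n, m) * I(n, m, alpha)` for consecutive values of $n$ then lets one observe the strict growth asserted in~\eqref{eq:MonoBetaToShow}.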
Let in the following $m\in \{2,\ldots,n-1\}$.
From~\eqref{eq:I_n_m_proof_monoton1} and~\eqref{eq:I_n_m_proof_monoton2} we see that it is necessary to study monotonicity in $n$ for expressions of the form
$$
g_{n,m} := \binom nm \int_{-\infty}^{+\infty} f^{(m-1) \gamma + 1}(t) F^{n-m}(t) {\rm d} t,
$$
where $\gamma=\frac{\alpha}{\alpha-1}$ with $\alpha=2\beta+d$ in the beta case and $\gamma=\frac{\alpha}{\alpha+1}$ with $\alpha=2\beta-d$ in the beta' case. Note that $\alpha\geq 0$ in both cases. Below, we shall consider the beta case with $\alpha=1$ separately, so let us assume that $\gamma$ is well-defined.
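For the record, the value of $\gamma$ is determined by matching exponents: in the beta case, $f^{(m-1)\gamma+1}$ has to reproduce the outer factor $(1-t^2)^{\frac{\alpha m -1}{2}}$ of~\eqref{eq:I_n_m_proof_monoton1} up to constants, and with $\gamma = \frac{\alpha}{\alpha-1}$ indeed
$$
\left((m-1)\frac{\alpha}{\alpha-1}+1\right)\frac{\alpha-1}{2}
=
\frac{(m-1)\alpha + \alpha - 1}{2}
=
\frac{\alpha m - 1}{2}.
$$
The beta' case is analogous: with $\gamma=\frac{\alpha}{\alpha+1}$, one gets $\left((m-1)\frac{\alpha}{\alpha+1}+1\right)\frac{\alpha+1}{2}=\frac{\alpha m+1}{2}$, matching the outer exponent in~\eqref{eq:I_n_m_proof_monoton2}.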
\begin{lemma}\label{lem:monotone_g}
Assume that $f$ is a probability density on $\mathbb{R}$ that is strictly positive and continuously differentiable on some non-empty open interval $I\subseteq\mathbb{R}$ (which is allowed to coincide with the whole real line $\mathbb{R}$) and zero on $\mathbb{R}\setminus I$. If $\gamma\in\mathbb{R}$ and the function $\gamma f^{\gamma-2}(t) f'(t)$ is strictly decreasing on $I$,
then
$$
g_{n+1,m} > g_{n,m}
$$
for all $m\in\{2,3,\ldots\}$ and $n \in \{m,m+1,\ldots\}$.
\end{lemma}
For the proof we need the following slightly corrected version of Lemma~5 from~\cite{bonnet_etal}.
\begin{lemma}\label{lem:FromBGTTTW}
Let $h, g,L:(0,1)\to\mathbb{R}$ be three functions such that
\begin{enumerate}
\item $h$ is non-negative, measurable, and $0 < \int_0^1 h(s)\textup{d} s<\infty$;
\item $g$ is linear, with negative slope and a root at $s^*\in (0,1)$;
\item $L$ is non-negative and strictly concave on $(0,1)$.
\end{enumerate}
Then, for all $m > 1$,
$$
\int_0^1 h(s)g(s)L^{m-1}(s)\,\textup{d} s > \int_0^1 h(s)g(s)\left(\frac {L(s^*)}{s^*}s\right)^{m-1}\,\textup{d} s.
$$
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:monotone_g}]
Observe that under the assumptions of the lemma the distribution function $F$ is strictly increasing and continuously differentiable on $I$. The tail function $\bar F(t)= 1-F(t)$ thus has a well-defined inverse $\bar F^{-1}$. Using the definition of $g_{n,m}$ and then the substitution $\bar F(t) = s$, we arrive at
\begin{align*}
g_{n+1,m}- g_{n,m}
&= \int_{I} f^{(m-1) \gamma+1}(t) \bigg[{n+1\choose m}F(t)-{n\choose m}\bigg]F(t)^{n-m}\,{\rm d} t\\
&= \int_0^1 f^{(m-1) \gamma}(\bar F^{-1}(s)) \bigg[{n+1\choose m}(1-s)-{n\choose m}\bigg](1-s)^{n-m}\,{\rm d} s.
\end{align*}
Now, we define
\begin{align*}
h(s) := (1-s)^{n-m},\quad g(s) := {n+1\choose m}(1-s)-{n\choose m}\quad\text{and}\quad L(s):=f^{\gamma}(\bar F^{-1}(s))
\end{align*}
for $s\in(0,1)$. Clearly, the function $h$ is measurable, strictly positive and bounded; the function $g$ is linear with negative slope and a root at $s^*=m/(n+1)\in (0,1)$. Moreover, the function $L$ is positive and we shall argue that $L$ is also strictly concave. Indeed, by the chain rule its derivative equals
$$
L'(s) = -\gamma f^{\gamma-2}(\bar F^{-1}(s)) f'(\bar F^{-1}(s)),
$$
which is strictly decreasing, because $-\gamma f^{\gamma-2}(t) f'(t)$ is strictly increasing by assumption and $\bar F^{-1}(s)$ is decreasing. Hence $L$ is strictly concave on $(0,1)$.
Thus, Lemma~\ref{lem:FromBGTTTW} can be applied to deduce that
\begin{align*}
g_{n+1,m}- g_{n,m}
&= \int_0^1 L^{m-1}(s)g(s)h(s)\,{\rm d} s\\
&> \bigg({L(s^*)\over s^*}\bigg)^{m-1}\int_0^1 s^{m-1}g(s)h(s)\,{\rm d} s\\
&= \bigg({L(s^*)\over s^*}\bigg)^{m-1}\int_0^1 s^{m-1}\bigg[{n+1\choose m}(1-s)-{n\choose m}\bigg](1-s)^{n-m}\,{\rm d} s\\
&= \bigg({L(s^*)\over s^*}\bigg)^{m-1}{n+1\choose m}\bigg[\int_0^1 s^{m-1}(1-s)^{n+1-m}\,{\rm d} s\\
&\hspace{5cm}-{n-m+1\over n+1}\int_0^1 s^{m-1}(1-s)^{n-m}\,{\rm d} s\bigg]\\
&= \bigg({L(s^*)\over s^*}\bigg)^{m-1}{n+1\choose m}\Big[B(m,n-m+2)-{n-m+1\over n+1}B(m,n-m+1)\Big],
\end{align*}
where $B(x,y)=\int_0^1 s^{x-1}(1-s)^{y-1}\,{\rm d} s$, $x,y>0$, is Euler's beta function. Since $B(x,y+1)={y\over x+y}B(x,y)$, the last expression in square brackets is equal to zero. Hence,
$
g_{n+1,m}- g_{n,m} > 0
$,
which is the desired inequality.
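For the reader's convenience, we spell out why the bracket vanishes: the recursion $B(x,y+1)={y\over x+y}B(x,y)$ with $x=m$ and $y=n-m+1$ gives
$$
B(m,n-m+2)={n-m+1\over m+(n-m+1)}\,B(m,n-m+1)={n-m+1\over n+1}\,B(m,n-m+1).
$$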
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:monoton}]
As explained above, we need to prove the strict inequality in~\eqref{eq:MonoBetaToShow} for all $m\in \{2,\ldots,n-1\}$.
Recall that $\alpha = 2\beta +d\geq 0$, and consider first the case when $\alpha\notin\{0,1\}$. In particular, this means that $\gamma= \frac{\alpha}{\alpha-1}$ is well defined.
To apply Lemma~\ref{lem:monotone_g}, we need to verify that the function $\gamma f^{\gamma-2}(t) f'(t)$ is strictly decreasing in $t\in (-1,1)$, where $f(t) = c_{1, \frac{\alpha - 1}{2}} (1-t^2)^{\frac{\alpha - 1}{2}}$.
We have
$$
\gamma f^{\gamma-2}(t) f'(t) = -\alpha c_{1,\frac{\alpha-1}{2}}^{1/(\alpha-1)} \frac{t}{\sqrt{1-t^2}}, \qquad t\in (-1,1),
$$
which is strictly decreasing because $\alpha>0$. Lemma~\ref{lem:monotone_g} thus yields $g_{n+1,m} > g_{n,m}$, which can be written as
$$
\binom n {m} I_{n,m}(\alpha) < \binom {n+1} {m} I_{n+1,m}(\alpha).
$$
This establishes~\eqref{eq:MonoBetaToShow} and completes the proof when $\alpha \notin \{0,1\}$.
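For completeness, the computation behind the displayed derivative is as follows: writing $c=c_{1,\frac{\alpha-1}{2}}$, we have $f'(t)=-c(\alpha-1)t(1-t^2)^{\frac{\alpha-3}{2}}$ and $\gamma-2={2-\alpha\over\alpha-1}$, so that
$$
\gamma f^{\gamma-2}(t)f'(t)
=-\gamma(\alpha-1)\,c^{\gamma-1}\,t\,(1-t^2)^{\frac{(\alpha-1)(\gamma-2)+\alpha-3}{2}}
=-\alpha\, c^{\frac{1}{\alpha-1}}\,\frac{t}{\sqrt{1-t^2}},
$$
since $\gamma(\alpha-1)=\alpha$, $\gamma-1=\frac1{\alpha-1}$ and $(\alpha-1)(\gamma-2)+\alpha-3=-1$.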
The case when $\alpha=0$ occurs if $(d,\beta)=(2,-1)$. Note that Theorem~\ref{theo:monoton} becomes trivial in this case, but we prefer to prove~\eqref{eq:MonoBetaToShow} in all cases. Formula~\eqref{eq:I_n_m_proof_monoton1} simplifies as follows:
$$
\binom nm I_{n,m}(0) = \binom nm \int_{-1}^{+1} f(t) F^{n-m}(t) \textup{d} t = \frac 1 {n-m+1} \binom nm.
$$
It follows that for all $m\in \{2,\ldots,n-1\}$,
$$
\frac{\binom {n+1}m I_{n+1,m} (0) }{\binom nm I_{n,m} (0)}
=
\frac{n+1}{n-m+2}>1.
$$
Let finally $\alpha=1$, which occurs if $(d,\beta)$ is $(3,-1)$ or $(2, -1/2)$. The expression for $I_{n,m}(\alpha)$ given in~\eqref{eq:I_n_m_proof_monoton1} simplifies as follows:
\begin{align*}
\binom nm I_{n,m} (1)
&=
\binom nm \int_{-1}^{+1} c_{1,\frac{m-1}{2}} (1-t^2)^{\frac{m-1}{2}} \left(\frac {1+t}{2}\right)^{n-m} \textup{d} t\\
&=c_{1,{m-1\over 2}}2^m{n\choose m}\int_0^1 u^{n-{m\over 2}+{1\over 2}-1}(1-u)^{{m\over 2}+{1\over 2}-1}\,{\rm d} u\\
&={c_{1,{m-1\over 2}}2^m\over m!(n-m)!}\Gamma\Big(n-{m-1\over 2}\Big)\Gamma\Big({m+1\over 2}\Big)
\end{align*}
where we computed the integral using the substitution $u:= (1+t)/2$ and the properties of the beta and gamma functions. It follows that for all $m\in\{2,\ldots,n-1\}$,
$$
\frac{\binom {n+1}m I_{n+1,m} (1) }{\binom nm I_{n,m} (1) } = {(n-m)!\over(n-m+1)!}{\Gamma(n-{m-1\over 2}+1)\over\Gamma(n-{m-1\over 2})} = \frac{n-\frac{m-1}{2}}{n-m+1} > 1.
$$
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:monoton_prime}]
Observe that $\alpha=2\beta-d >0$.
We take $\gamma = \frac{\alpha}{\alpha+1}$ and $f(t) = \tilde c_{1, \frac{\alpha + 1}{2}} (1+t^2)^{-\frac{\alpha + 1}{2}}$, $t\in\mathbb{R}$. Then,
$$
\gamma f^{\gamma-2}(t) f'(t) = -\alpha \tilde c_{1, \frac{\alpha + 1}{2}}^{-1/(\alpha+1)} \frac{t}{\sqrt{1+t^2}}, \qquad t\in \mathbb{R},
$$
which is strictly decreasing in $t$. An application of Lemma~\ref{lem:monotone_g} yields $g_{n+1,m} > g_{n,m}$ for all $m\in \{2,\ldots,n-1\}$ and thus
$$
\binom n {m} \tilde I_{n,m}(\alpha) < \binom {n+1} {m} \tilde I_{n+1,m}(\alpha),
$$
which establishes~\eqref{eq:MonoBetaPrimeToShow} and completes the argument.
\end{proof}
\subsection{Expected intrinsic volumes of tangent cones}
In this section we give proofs of Theorems \ref{theo:expected_conic_tangent} and \ref{theo:expected_conic_tangent_prime}.
\begin{proof}[Proof of Theorem~\ref{theo:expected_conic_tangent}]
By definition of the Grassmann angles \eqref{eq:h_def} it follows that, for every $j\in\{k,\ldots,d-1\}$,
\begin{align*}
2\mathbb{E}\left[h_{j+1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right]
=
\mathbb{P}\left[T(G,P_{n,d}^\beta) \cap L_{d-j} \neq \{0\}\text{ and }G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\right],
\end{align*}
where $L_{d-j}\in G(d, d-j)$ is a uniformly distributed linear subspace that is independent of everything else.
Since the probability law of $T(G, P_{n,d}^\beta)$ is rotationally invariant, we can replace $L_{d-j}$ by any deterministic linear subspace of the same dimension, thus arriving at
\begin{align*}
&2\mathbb{E}\left[h_{j+1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right] \\
&\qquad= \mathbb{P}\left[T(G,P_{n,d}^\beta) \cap \mathop{\mathrm{lin}}\nolimits (e_{j+1},\ldots,e_d) \neq \{0\}\text{ and }G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\right]
\end{align*}
or, equivalently,
\begin{align*}
&\mathbb{P}\left[ G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\right] -2\mathbb{E}\left[h_{j+1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right] \\
&\qquad= \mathbb{P}\left[T(G,P_{n,d}^\beta) \cap \mathop{\mathrm{lin}}\nolimits (e_{j+1},\ldots,e_d) = \{0\}\text{ and }G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\right],
\end{align*}
where $e_1,\ldots,e_d$ is the standard orthonormal basis in $\mathbb{R}^d$. Let $\Pi_j:\mathbb{R}^d\to\mathbb{R}^j$ be the orthogonal projection from $\mathbb{R}^d$ to $\mathbb{R}^j$ (which is identified with $\mathop{\mathrm{lin}}\nolimits (e_1,\ldots, e_j)$) given by
$$
\Pi_j(x_1,\ldots,x_d) := (x_1,\ldots,x_j).
$$
Then, given that $G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)$, the intersection of $T(G,P_{n,d}^\beta)$ and $\mathop{\mathrm{lin}}\nolimits (e_{j+1},\ldots,e_d)$ equals the trivial subspace $\{0\}$ if and only if $\Pi_jG$ is a $(k-1)$-face of the projected polytope $\Pi_j P_{n,d}^\beta$. Moreover, if $G\notin\mathcal{F}_{k-1}(P_{n,d}^\beta)$, then $G$ contains an interior point of $P_{n,d}^\beta$, which under the projection $\Pi_j$ is mapped to a relative interior point of $\Pi_j P_{n,d}^\beta$, implying that $\Pi_jG$ cannot be a $(k-1)$-face in this case. It follows that
\begin{equation}\label{eq:GrassmannAnglesVSfvector}
\begin{split}
&\mathbb{P}\left[ G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\right] -2\mathbb{E}\left[h_{j+1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right]\\
&= \mathbb{P}\left[\Pi_j G \in \mathcal{F}_{k-1}(\Pi_j P_{n,d}^\beta)\text{ and }G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\right]\\
&= \mathbb{P}\left[\Pi_j G \in \mathcal{F}_{k-1}(\Pi_j P_{n,d}^\beta)\right]
=
\binom{n}{k}^{-1}\mathbb{E} f_{k-1} (\Pi_j P_{n,d}^\beta)
=
\binom{n}{k}^{-1}\mathbb{E} f_{k-1} \big(P_{n,j}^{\beta+\frac{d-j}{2}}\big),
\end{split}
\end{equation}
where the last identity follows from Lemma~\ref{lem:projection} (a), which implies that the random polytopes $\Pi_j P_{n,d}^\beta$ and $P_{n,j}^{\beta+\frac{d-j}{2}}$ are identically distributed. Applying Theorem~\ref{theo:f_vect} to the right-hand side of \eqref{eq:GrassmannAnglesVSfvector}, we can write
\begin{multline*}
\mathbb{E}\left[h_{j+1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right]
=
{1\over 2} \mathbb{P}\left[ G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\right]
\\
-\frac 1 {\binom{n}{k}} \sum_{s=0}^\infty \binom n {j-2s} \binom {j-2s}{k} I_{n,j-2s}(2\beta+d) J_{j-2s,k}\left(\beta + s+\frac{d - j + 1}2\right).
\end{multline*}
Inserting $j-2$ in place of $j$ yields the identity
\begin{multline*}
\mathbb{E}\left[h_{j-1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right]
=
{1\over 2} \mathbb{P}\left[ G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\right]
\\-
\frac 1 {\binom{n}{k}} \sum_{s=1}^\infty \binom n {j-2s} \binom {j-2s}{k} I_{n,j-2s}(2\beta+d) J_{j-2s,k}\left(\beta + s+\frac{d - j + 1}2\right).
\end{multline*}
Recall from \eqref{eq:ConicalIntVolGrassmannAngle} that, for a cone $C\subset\mathbb{R}^d$ that is not a linear subspace,
$$
h_{j+1}(C) = \upsilon_{j+1}(C) + \upsilon_{j+3}(C) + \ldots.
$$
Hence, $\upsilon_{j-1}(C) = h_{j-1}(C) - h_{j+1}(C)$.
Subtracting the first from the second equation, we see that on the right-hand side only the term with $s=0$ remains, while the left-hand side reduces to $\mathbb{E}\left[\upsilon_{j-1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right]$. We thus arrive at
\begin{align*}
\mathbb{E}\left[\upsilon_{j-1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right]
&=
\frac {\binom n {j} \binom {j}{k}} {\binom{n}{k}} I_{n,j}(2\beta+d) J_{j,k}\left(\beta + \frac{d - j + 1}2\right)\\
&={n-k\choose j-k}I_{n,j}(2\beta+d) J_{j,k}\left(\beta + \frac{d - j + 1}2\right).
\end{align*}
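In the last step we used the elementary identity
$$
{\binom nj\binom jk\over\binom nk}
={n!\over j!\,(n-j)!}\cdot{j!\over k!\,(j-k)!}\cdot{k!\,(n-k)!\over n!}
={(n-k)!\over (j-k)!\,(n-j)!}
={n-k\choose j-k}.
$$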
It remains to recall our convention that $T(G,P_{n,d}^\beta)=\mathbb{R}^d$ if $G\notin\mathcal{F}_{k-1}(P_{n,d}^\beta)$ and to note that, by definition of the conic intrinsic volumes, $\upsilon_{j-1}(\mathbb{R}^d)$ is equal to one if $j-1=d$ and zero otherwise, that is, $\upsilon_{j-1}(\mathbb{R}^d)=\mathbbm{1}_{\{j-1=d\}}$. This implies that
\begin{align*}
\mathbb{E} \upsilon_{j-1}(T(G,P_{n,d}^\beta)) &= \mathbb{E}\left[\upsilon_{j-1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right]+\mathbb{E}\left[\upsilon_{j-1}(T(G,P_{n,d}^\beta))\mathbbm{1}_{\{G\notin\mathcal{F}_{k-1}(P_{n,d}^\beta)\}}\right]\\
&={n-k\choose j-k}I_{n,j}(2\beta+d) J_{j,k}\left(\beta + \frac{d - j + 1}2\right)+\mathbbm{1}_{\{j-1=d\}}\mathbb{P}\left[G\notin\mathcal{F}_{k-1}(P_{n,d}^\beta)\right],
\end{align*}
which is, upon replacing $j$ by $j+1$, the required formula.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theo:expected_conic_tangent_prime}]
As in the beta case (see \eqref{eq:GrassmannAnglesVSfvector}) one shows that
$$
1-2\mathbb{E}\left[h_{j+1}(T(G,\tilde P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(\tilde P_{n,d}^\beta)\}}\right] = \binom{n}{k}^{-1}\mathbb{E} f_{k-1} \big(\tilde P_{n,j}^{\beta-\frac{d-j}{2}}\big),
$$
where we used Lemma \ref{lem:projection} (b) instead of part (a). Applying Theorem \ref{theo:f_vect_prime} we get
\begin{multline*}
\mathbb{E}\left[\upsilon_{j+1}(T(G,\tilde P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(\tilde P_{n,d}^\beta)\}}\right] + \mathbb{E}\left[\upsilon_{j+3}(T(G,\tilde P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(\tilde P_{n,d}^\beta)\}}\right]+\ldots
\\=
{1\over 2}-\frac 1 {\binom{n}{k}} \sum_{s=0}^\infty \binom n {j-2s} \binom {j-2s}{k} I_{n,j-2s}(2\beta-d) J_{j-2s,k}\left(\beta - s-\frac{d - j + 1}2\right).
\end{multline*}
As above, replacing $j$ by $j-2$ and subtracting finally yields
$$
\mathbb{E}\left[\upsilon_{j-1}(T(G,\tilde P_{n,d}^\beta))\mathbbm{1}_{\{G\in\mathcal{F}_{k-1}(\tilde P_{n,d}^\beta)\}}\right] = {n-k\choose j-k}I_{n,j}(2\beta-d) J_{j,k}\left(\beta - \frac{d - j + 1}2\right).
$$
From this point the proof can be completed as the one of Theorem \ref{theo:expected_conic_tangent}.
\end{proof}
\subsection{The Poisson limit for beta' polytopes}\label{sec:proof_poisson_limit}
In this section we prove Theorem~\ref{theo:f_vect_poisson}.
\begin{lemma}\label{lem:asymptotics_I}
Fix some $\alpha>0$ and $\beta>0$. As $n\to\infty$, we have
\begin{align*}
A_n
:=
\int_{-\infty}^{+\infty}
(1+t^2)^{-\frac {\beta + 1}{2}}
\left(\int_{-\infty}^t \tilde c_{1, \frac{\alpha + 1}{2}} (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s\right)^{n} {\rm d} t
\sim
\frac {\Gamma(\beta/\alpha)}{\alpha} \left(\frac{\alpha}{\tilde c_{1, \frac{\alpha + 1}{2}}}\right)^{\beta/\alpha} n^{-\beta/\alpha}.
\end{align*}
\end{lemma}
\begin{proof}
To simplify the notation, we shall write $C_\alpha$ for $\tilde c_{1, \frac{\alpha + 1}{2}}$.
Using the change of variables $t= n^{1/\alpha} u$, we have
\begin{align*}
A_n
=
n^{1/\alpha} \int_{-\infty}^{+\infty}
(1 + n^{2/\alpha} u^2)^{-\frac {\beta + 1}{2}}
\left(1 - \int_{n^{1/\alpha}u}^{+\infty} C_\alpha (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s\right)^{n} {\rm d} u
=
n^{-\frac{\beta}{\alpha}}
\int_{-\infty}^{+\infty}
g_n(u) {\rm d} u,
\end{align*}
where
$$
g_n(u) =
(n^{-2/\alpha} + u^2)^{-\frac {\beta + 1}{2}}
\left(1 - \int_{n^{1/\alpha}u}^{+\infty} C_\alpha (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s\right)^{n}.
$$
Applying L'Hospital's rule, it is easy to check that, for every $u>0$,
\begin{equation}\label{eq:asympt_tail_integral}
\int_{n^{1/\alpha}u}^{+\infty} C_\alpha (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s
\sim
\alpha^{-1} C_\alpha (n^{1/\alpha} u)^{-\alpha}
=
\alpha^{-1} C_\alpha u^{-\alpha} n^{-1}
\end{equation}
as $n\to\infty$. It follows that, for all $u\in\mathbb{R}$,
$$
\lim_{n\to\infty} g_n(u)
=
\begin{cases}
u^{-(\beta + 1)}
{\rm e}^{-\alpha^{-1}C_\alpha u^{-\alpha}}, & \text{ if } u>0,\\
0, &\text{ if } u\leq 0.
\end{cases}
$$
In fact, the case $u \leq 0$ follows from the observation that
\begin{equation}\label{eq:tech}
\int_{n^{1/\alpha}u}^{+\infty} C_\alpha (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s \geq 1/2, \quad u\leq 0.
\end{equation}
Assuming that we can apply the dominated convergence theorem, we arrive at
$$
A_n
=
n^{-\frac{\beta}{\alpha}}
\int_{-\infty}^{+\infty}
g_n(u) {\rm d} u
\sim
n^{-\frac{\beta}{\alpha}}
\int_0^{+\infty} u^{-(\beta + 1)}
{\rm e}^{-\alpha^{-1} C_\alpha u^{-\alpha}} {\rm d} u
=
\frac {\Gamma(\beta/\alpha)}{\alpha} \left(\frac{\alpha}{C_\alpha}\right)^{\beta/\alpha} n^{-\beta/\alpha},
$$
which is the required claim.
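The last integral was evaluated by the substitution $v=\alpha^{-1}C_\alpha u^{-\alpha}$, so that ${\rm d} v=-C_\alpha u^{-\alpha-1}\,{\rm d} u$ and
$$
\int_0^{+\infty} u^{-(\beta+1)}{\rm e}^{-\alpha^{-1}C_\alpha u^{-\alpha}}\,{\rm d} u
=\int_0^{+\infty}\frac1{C_\alpha}\Big(\frac{\alpha v}{C_\alpha}\Big)^{\frac{\beta}{\alpha}-1}{\rm e}^{-v}\,{\rm d} v
=\frac{\Gamma(\beta/\alpha)}{\alpha}\Big(\frac{\alpha}{C_\alpha}\Big)^{\beta/\alpha}.
$$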
Let us justify the use of the dominated convergence theorem above. First of all, observe that $g_n(u)\geq 0$ by definition. Further, we have
$
g_n(u) \leq |u|^{-(\beta + 1)}
$,
with the right-hand side being integrable over $\{|u|\geq 1\}$. To construct an integrable bound for $u\in (-1,0)$, observe that according to \eqref{eq:tech}, in this range we have $g_n(u) \leq n^{\frac{\beta+1}{\alpha}} 2^{-n}$, which in turn is bounded by a constant.
Finally, in the case when $u\in (0,1)$, we use the estimate
\begin{equation}\label{eq:est_tail111}
\int_{n^{1/\alpha}u}^{+\infty} C_\alpha (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s \geq c_1(1+n^{2/\alpha}u^2)^{-\alpha/2}, \qquad u\geq 0,
\end{equation}
valid for some constant $c_1>0$. To prove this estimate, note that as functions of $n^{1/\alpha}u$, both expressions are continuous and non-zero on $[0,\infty)$. Since the quotient of both expressions tends to a non-zero constant as $n^{1/\alpha}u\to \infty$, see the asymptotic equivalence~\eqref{eq:asympt_tail_integral}, we can conclude~\eqref{eq:est_tail111}. An estimate similar to~\eqref{eq:est_tail111} was used in~\cite[Equation (1)]{BonnetEtAlThresholds}.
Now, we distinguish the two cases $u^2>n^{-2/\alpha}$ and $0< u^2\leq n^{-2/\alpha}$. In the first case, that is, if $u^2>n^{-2/\alpha}$, we use the inequality $(1-x)^n \leq {\rm e}^{-nx}$, $0\leq x <1$, to deduce that
\begin{multline*}
g_n(u)
\leq
u^{-(\beta+1)}\exp\left\{-n \int_{n^{1/\alpha}u}^{+\infty} C_\alpha (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s \right\}
\leq
u^{-(\beta+1)}\exp\{-c_1 n (1+n^{2/\alpha}u^2)^{-\alpha/2}\}
\\
\leq
u^{-(\beta+1)}\exp\{-c_1 n (2 n^{2/\alpha}u^2)^{-\alpha/2}\}
=
u^{-(\beta+1)}\exp\{-c_2 u^{-\alpha}\},
\end{multline*}
where $c_2>0$ is another constant. On the other hand, if $0 < u^2\leq n^{-2/\alpha}$, then, again using the inequality $(1-x)^n \leq {\rm e}^{-nx}$, $0\leq x <1$, we have that
$$
g_n(u)
\leq
n^{\beta+1\over \alpha}\exp\{-c_1 n (1+n^{2/\alpha}u^2)^{-\alpha/2}\}
\leq
n^{\beta+1\over \alpha}\exp\{-c_1 n 2^{-\alpha/2}\}
=
n^{\beta+1\over \alpha}\exp\{-c_3 n\} \leq c_4
$$
with suitable constants $c_3,c_4>0$. Altogether this shows that for $u\in(0,1)$, we have the upper bound
$$
g_n(u) \leq \max\{c_4,u^{-(\beta+1)}\exp\{-c_2u^{-\alpha}\}\} \leq c_5
$$
with some constant $c_5>0$. The proof is thus complete.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:f_vect_poisson}]
It was shown in~\cite{convex_hull_sphere} that
\begin{equation}\label{eq:E_f_k_to_E_f_k_Poi}
\mathbb{E} f_k(\mathop{\mathrm{conv}}\nolimits \Pi_{d,\alpha})
=
\lim_{n\to\infty}\mathbb{E} f_k\left(\tilde P_{n,d}^{\frac{d+\alpha}{2}}\right).
\end{equation}
In fact, only the case $\alpha=1$ was considered in~\cite{convex_hull_sphere}, but as we explained at the end of Section~\ref{subsec:convex_hulls_half_sphere}, the same proof applies to any $\alpha>0$.
So, we have to compute the limit on the right-hand side of~\eqref{eq:E_f_k_to_E_f_k_Poi}. It follows from Lemma~\ref{lem:asymptotics_I} with $\beta = \alpha m$ that for every fixed $m\in\mathbb{N}$, the quantity $\tilde I_{n,m}(\alpha)$ defined in~\eqref{eq:I_definition_prime} satisfies
\begin{align*}
\tilde I_{n,m}(\alpha)
&=
\int_{-\infty}^{+\infty} \tilde c_{1, \frac {\alpha m + 1}{2}}
(1+t^2)^{-\frac {\alpha m + 1}{2}}
\left(\int_{-\infty}^t \tilde c_{1, \frac{\alpha + 1}{2}} (1+s^2)^{-\frac{\alpha + 1}{2}}{\rm d} s\right)^{n-m} {\rm d} t\\
&\sim
\tilde c_{1, \frac {\alpha m + 1}{2}} \frac {\Gamma(m)}{\alpha} \left(\frac{\alpha}{\tilde c_{1, \frac{\alpha + 1}{2}}}\right)^{m} n^{-m},
\end{align*}
as $n\to\infty$; here Lemma~\ref{lem:asymptotics_I} is applied with $n-m$ in place of $n$, together with the observation that $(n-m)^{-m}\sim n^{-m}$.
By Theorem~\ref{theo:f_vect_prime} and the above asymptotics with $m=d-2s$ for all $s\in \mathbb{N}_0$ with $d-2s\geq k+1$, we arrive at
\begin{align*}
\mathbb{E} f_k\left(\tilde P_{n,d}^{\frac{d+\alpha}{2}}\right)
&=
2 \sum_{s=0}^\infty \binom n {d-2s} \binom {d-2s}{k+1} \tilde I_{n,d-2s}(\alpha) \tilde J_{d-2s,k+1}\left(\frac{d-2s-1+\alpha}{2}\right)\\
&=
2 \sum_{\substack{m\in \{k+1,\ldots,d\}\\ m\equiv d \Mod{2}}} \binom n {m} \binom {m}{k+1} \tilde I_{n,m}(\alpha) \tilde J_{m,k+1}\left(\frac{m-1+\alpha}{2}\right)\\
&\sim
2 \sum_{\substack{m\in \{k+1,\ldots,d\}\\ m\equiv d \Mod{2}}} \frac{\tilde c_{1, \frac{\alpha m +1}{2}}}{(\tilde c_{1, \frac{\alpha+1}{2}})^{m}} \cdot \frac{\alpha^{m-1}}{m} \cdot \binom {m}{k+1}\tilde J_{m,k+1}\left(\frac{m-1+\alpha}{2}\right),
\end{align*}
as $n\to\infty$.
Note that we restricted the summation to the range $m\in \{k+1,\ldots,d\}$ because terms with $m\leq k$ vanish.
To complete the proof of the theorem, recall that $\tilde c_{1,\gamma} = \frac{\Gamma (\gamma) }{\sqrt{\pi} \Gamma( \gamma - \frac{1}{2})}$ by~\eqref{eq:def_f_beta_prime}.
\end{proof}
\subsection{Asymptotics for the \texorpdfstring{$f$}{f}-vector of beta polytopes}
In this section we prove Theorem~\ref{theo:f_vector_asympt_beta}. The proof is prepared with the following auxiliary estimate.
\begin{lemma}\label{lem:asympt_beta}
Fix some $\alpha>-1$ and $\beta>-1$. As $n\to\infty$, we have
\begin{align*}
B_n:= \int_{-1}^{+1} (1-t^2)^{\frac{\beta-1}{2}} \left(\int_{-1}^t c_{1,\frac{\alpha-1}{2}} (1-s^2)^{\frac{\alpha-1}{2}} {\rm d} s\right)^{n} {\rm d} t
\sim \frac{n^{-\frac{\beta+1}{\alpha+1}}}{1 + \alpha}
\left(\frac{1+\alpha}{c_{1,\frac{\alpha-1}{2}}}\right)^{\frac{\beta+1}{\alpha+1}}
\Gamma\left(\frac{1 + \beta}{1 + \alpha}\right).
\end{align*}
\end{lemma}
\begin{proof}
Write $C_\alpha:= c_{1,\frac{\alpha-1}{2}}$. Using the change of variables $1-t = u n^{-\frac{2}{\alpha+1}}$, we obtain
$$
B_n = n^{-\frac{\beta+1}{\alpha+1}} \int_0^{2n^{\frac{2}{\alpha+1}}} g_n(u) {\rm d} u
$$
where $g_n$ is given by
$$
g_n(u) =
n^{\frac{\beta-1}{\alpha+1}} \left(1-\left(1- u n^{-\frac{2}{\alpha+1}}\right)^2\right)^{\frac{\beta-1}{2}} \left(1-\int_{1 - un^{-\frac{2}{\alpha+1}}}^1 C_\alpha (1-s^2)^{\frac{\alpha-1}{2}} {\rm d} s \right)^n.
$$
With the rule of L'Hospital one easily checks that
\begin{equation}\label{eq:asympt_proof_beta}
\int_{1- un^{-\frac{2}{\alpha+1}}}^1 C_\alpha (1-s^2)^{\frac{\alpha-1}{2}} {\rm d} s
\sim
C_\alpha 2^{\frac{\alpha+1}{2}} (\alpha+1)^{-1} (un^{-\frac{2}{\alpha+1}})^{\frac{\alpha+1}{2}}={C_\alpha\over\alpha+1}(2u)^{\alpha+1\over 2}n^{-1}.
\end{equation}
It follows that for every $u>0$ we have
$$
\lim_{n\to\infty} g_n(u) = (2u)^{\frac{\beta-1}{2}} \exp\left\{-{C_\alpha\over\alpha+1} (2u)^{\frac{\alpha+1}{2}} \right\}.
$$
Assuming that the dominated convergence theorem is applicable, we arrive at
\begin{align*}
B_n = n^{-\frac{\beta+1}{\alpha+1}} \int_0^{\infty} g_n(u) \mathbbm{1}_{\big(0,2n^{\frac{2}{\alpha+1}}\big)}(u)\, {\rm d} u
\sim n^{-\frac{\beta+1}{\alpha+1}} \int_0^{\infty}(2u)^{\frac{\beta-1}{2}} \exp\left\{-{C_\alpha\over\alpha+1} (2u)^{\frac{\alpha+1}{2}} \right\}{\rm d} u.
\end{align*}
Evaluation of the integral yields
$$
\int_0^{\infty}(2u)^{\frac{\beta-1}{2}} \exp\left\{-{C_\alpha\over\alpha+1} (2u)^{\frac{\alpha+1}{2}} \right\}{\rm d} u = {1\over\alpha+1}\left({\alpha+1\over C_\alpha}\right)^{\beta+1\over \alpha+1}\Gamma\left({\beta+1\over \alpha+1}\right)
$$
and thus the desired asymptotic formula.
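In detail, the integral is computed via the substitution $x={C_\alpha\over\alpha+1}(2u)^{\frac{\alpha+1}{2}}$, for which
$$
(2u)^{\frac{\beta-1}{2}}\,{\rm d} u={1\over\alpha+1}\Big({\alpha+1\over C_\alpha}\Big)^{\frac{\beta+1}{\alpha+1}}x^{\frac{\beta+1}{\alpha+1}-1}\,{\rm d} x,
$$
and the remaining integral $\int_0^\infty x^{\frac{\beta+1}{\alpha+1}-1}{\rm e}^{-x}\,{\rm d} x$ equals $\Gamma\big({\beta+1\over\alpha+1}\big)$.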
To justify the interchanging of the integral and the limit, it suffices to show that there is a sufficiently small $\delta>0$ such that
\begin{equation}\label{eq:est_dominated1}
0\leq g_n(u) \leq h(u), \quad \text{ for all } u\in (0,(2-\delta) n^{\frac 2 {\alpha+1}}),
\end{equation}
where $h(u)$ is integrable, and that
\begin{equation}\label{eq:est_dominated2}
\lim_{n\to\infty} \int_{(2-\delta)n^{\frac 2 {\alpha+1}}}^{2n^{\frac 2 {\alpha+1}}} g_n(u) {\rm d} u = 0.
\end{equation}
Clearly, $g_n(u)\geq 0$. To prove the upper estimate in~\eqref{eq:est_dominated1}, observe first that there is $c_1>0$ such that
$$
n^{\frac{\beta-1}{\alpha+1}}
\left(1-\left(1- u n^{-\frac{2}{\alpha+1}}\right)^2\right)^{\frac{\beta-1}{2}}
=
u^{\frac {\beta-1}2} \left(2 - u n^{-\frac{2}{\alpha+1}}\right)^{\frac{\beta-1}{2}}
\leq
c_1 u^{\frac{\beta-1}{2}}
$$
for all $u\in (0,(2-\delta) n^{\frac 2 {\alpha+1}})$.
Namely, we can take $c_1 = 2^{\frac{\beta-1}{2}}$ if $\beta\geq 1$ and $c_1 =\delta^{\frac{\beta-1}{2}}$ if $\beta \leq 1$.
Further, there exists a constant $c_2>0$ such that
$$
\int_{1 - un^{-\frac{2}{\alpha+1}}}^1 C_\alpha (1-s^2)^{\frac{\alpha-1}{2}} {\rm d} s
\geq
c_2\Big(1-(1-un^{-{2\over \alpha+1}})\Big)^{\alpha+1\over 2}
=
c_2 u^{\frac{\alpha+1}{2}} n^{-1},
$$
for all $u\in (0,2 n^{\frac 2 {\alpha+1}}]$. Indeed, the quotient of both expressions converges to a non-zero constant as $un^{-\frac{2}{\alpha+1}}\to 0$; see Relation~\eqref{eq:asympt_proof_beta}. Furthermore, both expressions are continuous, non-vanishing functions of the argument $un^{-\frac{2}{\alpha+1}} \in (0,2]$. This implies the required bound. A similar bound was also used in~\cite[Lemma 2.2]{BonnetEtAlThresholds}. Now, if $u\in (0,(2-\delta) n^{\frac 2 {\alpha+1}})$, then taking the above estimates together and using the elementary inequality $(1-x)^n\leq {\rm e}^{-nx}$, $0\leq x<1$, we arrive at
$$
g_n(u)
\leq
c_1 u^{\frac{\beta-1}{2}} \exp\left\{- n\int_{1 - un^{-\frac{2}{\alpha+1}}}^1 C_\alpha (1-s^2)^{\frac{\alpha-1}{2}} {\rm d} s \right\}
\leq
c_1 u^{\frac{\beta-1}{2}}\exp\{-c_2 u^{\frac{\alpha+1}{2}}\},
$$
which proves the integrable bound stated in~\eqref{eq:est_dominated1} for every $\delta\in (0,2)$.
Let us prove~\eqref{eq:est_dominated2}. First of all, we have
$$
n^{\frac{\beta-1}{\alpha+1}}
\left(1-\left(1- u n^{-\frac{2}{\alpha+1}}\right)^2\right)^{\frac{\beta-1}{2}}
=
u^{\frac {\beta-1}2} \left(2 - u n^{-\frac{2}{\alpha+1}}\right)^{\frac{\beta-1}{2}}.
$$
Unfortunately, this becomes infinite at $un^{-\frac{2}{\alpha+1}} =2$ if $\beta<1$. Let us choose $\delta>0$ so small that for all $u\in ((2-\delta) n^{\frac 2 {\alpha+1}}, 2n^{\frac 2 {\alpha+1}})$,
$$
1-\int_{1 - un^{-\frac{2}{\alpha+1}}}^1 C_\alpha (1-s^2)^{\frac{\alpha-1}{2}} {\rm d} s
=
\int_{-1}^{-1+ (2 - un^{-\frac{2}{\alpha+1}})} C_\alpha (1-s^2)^{\frac{\alpha-1}{2}} {\rm d} s
\leq \frac 12.
$$
This is possible because the integral converges to $0$ as $(2 - un^{-\frac{2}{\alpha+1}})\to 0$. Recalling the definition of $g_n$ and taking the above estimates together, we arrive at
$$
\int_{(2-\delta)n^{\frac 2 {\alpha+1}}}^{2n^{\frac 2 {\alpha+1}}} g_n(u) {\rm d} u
\leq
\int_{(2-\delta)n^{\frac 2 {\alpha+1}}}^{2n^{\frac 2 {\alpha+1}}} u^{\frac{\beta-1}{2}} \left(2 - u n^{-\frac{2}{\alpha+1}}\right)^{\frac{\beta-1}{2}} 2^{-n} {\rm d} u
=
n^{\frac{\beta+1}{\alpha+1}} 2^{-n} \int_{2-\delta}^2 v^{\frac{\beta-1}{2}} (2-v)^{\frac{\beta-1}{2}} {\rm d} v,
$$
which converges to $0$, as $n\to\infty$. This completes the proof of~\eqref{eq:est_dominated2}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo:f_vector_asympt_beta}]
It follows from Lemma~\ref{lem:asympt_beta} with $\beta=\alpha k$ that
\begin{align*}
I_{n,k}(\alpha)
&=
\int_{-1}^{+1} c_{1, \frac {\alpha k -1}{2}}
(1-t^2)^{\frac {\alpha k - 1}{2}} \left(\int_{-1}^t c_{1, \frac{\alpha - 1}{2}} (1-s^2)^{\frac{\alpha - 1}{2}}{\rm d} s\right)^{n-k} {\rm d} t\\
&\sim
n^{-\frac{\alpha k + 1}{\alpha+1}}
\frac{c_{1, \frac {\alpha k -1}{2}}}{1 + \alpha}
\left(\frac{1+\alpha}{c_{1,\frac{\alpha-1}{2}}}\right)^{\frac{\alpha k + 1}{\alpha+1}}
\Gamma\left(\frac{1 + \alpha k}{1 + \alpha}\right).
\end{align*}
From Theorem~\ref{theo:f_vect} we recall the formula
$$
\mathbb{E} f_k(P_{n,d}^{\beta})
=
2 \sum_{s=0}^\infty \binom n {d-2s} \binom {d-2s}{k+1} I_{n,d-2s}(2\beta+d) J_{d-2s,k+1}\left(\beta + s + \frac 12\right).
$$
It follows from the above that the $s$-th term of the sum behaves like a constant multiple of $n^{d-2s-1\over 2\beta+d+1}$, as $n\to\infty$. Consequently, as $n\to\infty$, the term with $s=0$ dominates all other terms and we arrive at
\begin{align*}
\mathbb{E} f_k(P_{n,d}^{\beta}) &\sim
n^{\frac{d - 1}{2\beta+d+1}}\frac{2}{d!} \binom {d}{k+1} J_{d,k+1}\left(\beta + \frac 12\right)
\frac{c_{1, \frac {(2\beta+d) d -1}{2}}}{2\beta+d+1}\\
&\qquad\qquad\qquad\times\left(\frac{2\beta+d+1}{c_{1,\frac{2\beta+d -1}{2}}}\right)^{\frac{(2\beta+d) d + 1}{2\beta+d +1}}
\Gamma\left(\frac{(2\beta+d) d+1}{2\beta+d+1}\right).
\end{align*}
This completes the proof.
\end{proof}
\subsection{Poisson hyperplane tessellations}
Recall the definitions of the zero cell $Z_\alpha$ and the Poisson point process $\Pi_{d,\alpha}$.
\begin{proof}[Proof of Theorem \ref{thm:PoissonHyperplanes}]
Let us define the (measurable) mapping
$$
T:\mathbb{R}^d\setminus\{0\}\to A(d,d-1),\qquad
x\mapsto H(x):=\{y\in\mathbb{R}^d:\langle x,y\rangle=1\}.
$$
The well-known mapping property of Poisson processes (see, for example, \cite[Theorem 5.1]{LastPenrosePPPBook}) implies that the image process $T\Pi_{d,\alpha}$ is a Poisson process on the space $A(d,d-1)$. Its probability law is rotationally invariant, since $\Pi_{d,\alpha}$ has the same property. Next, we consider the distance distribution. For $s>0$ we first compute, by transformation into spherical coordinates, that
\begin{align}\label{eq:DistanceDistributionComp1}
\int_{\{x\in\mathbb{R}^d:\|x\|>s\}}{\textup{d} x\over\|x\|^{d+\alpha}} = {2\pi^{d/2}\over\Gamma({d\over 2})}\int_s^\infty {\textup{d} r\over r^{\alpha+1}} = {2\pi^{d/2}\over\Gamma({d\over 2})}{s^{-\alpha}\over \alpha}.
\end{align}
On the other hand, writing $d(0,H)$ for the distance of a hyperplane $H\in A(d,d-1)$ to the origin, we have that $|\{H\in\eta_\alpha:d(0,H)\leq s\}|$ ($|\,\cdot\,|$ denotes the cardinality of a set) is Poisson distributed with mean
\begin{align*}
\Theta_\alpha(\{H\in A(d,d-1):d(0,H)\leq s\})
=
{2\pi^{d/2}\over\Gamma({d\over 2})}\int_{0}^s|t|^{\alpha-1}\,\textup{d} t
=
{2\pi^{d/2}\over\Gamma({d\over 2})}{s^\alpha\over\alpha},
\end{align*}
where we used the definition of $\Theta_\alpha$ given in~\eqref{eq:Theta_def}.
Thus,
\begin{align}\label{eq:DistanceDistributionComp2}
\mathbb{E}|\{H\in\eta_\alpha:d(0,H)^{-1}\geq s\}| = \mathbb{E}|\{H\in\eta_\alpha:d(0,H)\leq s^{-1}\}| = {2\pi^{d/2}\over\Gamma({d\over 2})}{s^{-\alpha}\over\alpha}.
\end{align}
So, a comparison of \eqref{eq:DistanceDistributionComp1} with \eqref{eq:DistanceDistributionComp2} shows that the Poisson processes $T\Pi_{d,\alpha}$ and $\eta_\alpha$ have the same distribution. In view of the definition of the mapping $T$ and the definition of the dual of a convex body this implies that the random polytopes $(\mathop{\mathrm{conv}}\nolimits\Pi_{d,\alpha})^\circ$ and $Z_\alpha$ (or, equivalently, $\mathop{\mathrm{conv}}\nolimits\Pi_{d,\alpha}$ and $Z_\alpha^\circ$) are identically distributed. The claim for the expected $f$-vectors follows from~\eqref{eq:DualityFvector}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:PoissonHyperplanesDtoInfinity}]
Fix some $k\in \{0,1,2,\ldots\}$.
Theorem \ref{thm:PoissonHyperplanes} and Theorem \ref{theo:f_vect_poisson} imply that
\begin{align*}
\mathbb{E} f_k(Z_\alpha)
&= \mathbb{E} f_{d-k-1}(\mathop{\mathrm{conv}}\nolimits\Pi_{d,\alpha})\\
&= 2\sum_{s=0}^{\lfloor{\frac k2}\rfloor}{\Gamma({(d-2s)\alpha+1\over 2})\over\Gamma({(d-2s)\alpha\over 2})}\left({\Gamma({\alpha\over 2})\over\Gamma({\alpha+1\over 2})}\right)^{d-2s}{(\sqrt{\pi}\alpha)^{d-2s-1}\over d-2s}{d-2s\choose d-k}\tilde{J}_{d-2s,d-k-1}\left({d-2s-1+\alpha\over 2}\right)\\
&=: \sum_{s=0}^{\lfloor{\frac k2}\rfloor} T_{d,\alpha}(s).
\end{align*}
Recall that we assume that $\alpha=\alpha(d)>0$ is bounded away from $0$. For fixed $s\in \{0,1,\ldots,\lfloor \frac k2 \rfloor\}$, Stirling's formula and the definition of the binomial coefficients yield
$$
{\Gamma({(d-2s)\alpha+1\over 2})\over\Gamma({(d-2s)\alpha\over 2})}\sim \sqrt{(d-2s)\alpha\over 2}
\qquad\text{and}\qquad
{d-2s\choose d-k}={d-2s\choose k-2s} \sim {d^{k-2s}\over(k-2s)!},
$$
as $d\to\infty$.
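The first relation is the standard Gamma-ratio asymptotics $\Gamma(x+{1\over 2})/\Gamma(x)\sim\sqrt{x}$, as $x\to\infty$, applied with $x={(d-2s)\alpha\over 2}$; the second one follows from
$$
{d-2s\choose k-2s}={(d-2s)(d-2s-1)\cdots(d-k+1)\over (k-2s)!}\sim{d^{k-2s}\over (k-2s)!},\qquad d\to\infty.
$$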
Moreover, we shall argue below that
\begin{equation}\label{eq:JTildeAsymptotic}
\lim_{d\to\infty} \tilde{J}_{d-2s,d-k-1}\left({d-2s-1+\alpha\over 2}\right) = {1\over 2^{k-2s}}.
\end{equation}
This implies that the $s$th term in the above sum (which absorbs the prefactor $2$) satisfies
\begin{align*}
T_{d,\alpha}(s)
&\sim 2\sqrt{(d-2s)\alpha\over 2}\left({\Gamma({\alpha\over 2})\over\Gamma({\alpha+1\over 2})}\right)^{d-2s}{(\sqrt{\pi}\,\alpha)^{d-2s-1}\over d-2s}{d^{k-2s}\over(k-2s)!}{1\over 2^{k-2s}}\\
&\sim {\sqrt{\alpha}\over 2^{k-2s-{1\over 2}}}\left({\Gamma({\alpha\over 2})\over\Gamma({\alpha+1\over 2})}\right)^{d-2s}{(\sqrt{\pi}\,\alpha)^{d-2s-1}\over (k-2s)!}d^{k - 2s-{1\over 2}},
\end{align*}
as $d\to\infty$. Next we show that the term $T_{d,\alpha}(s)$ with $s=0$ is asymptotically dominating the terms with $s\neq 0$. For every $s\in \{0,1,\ldots,\lfloor \frac k2 \rfloor\}$, we have
$$
\frac{T_{d,\alpha}(0)}{T_{d,\alpha}(s)} \sim \left({\sqrt \pi \alpha d\over 2} \, {\Gamma({\alpha\over 2})\over\Gamma({\alpha+1\over 2})}\right)^{2s} \frac{(k-2s)!} {k!}.
$$
If $s\neq 0$, then the right-hand side tends to $+\infty$, as $d\to\infty$: since $\alpha\Gamma({\alpha\over 2})=2\Gamma({\alpha\over 2}+1)$, the base of the $2s$-th power equals $\sqrt{\pi}\,d\,\Gamma({\alpha\over 2}+1)/\Gamma({\alpha+1\over 2})$, and the function $\Gamma({\alpha\over 2}+1)/\Gamma({\alpha+1\over 2})$ is bounded away from $0$ for $\alpha>0$.
Thus, $T_{d,\alpha} (s) = o(T_{d,\alpha} (0))$ for every $s\in \{1,2,\ldots, \lfloor \frac k2 \rfloor\}$ and hence
$$
\mathbb{E} f_k(Z_\alpha) \sim T_{d,\alpha}(0) \sim {\sqrt{\alpha}\over 2^{k-{1\over 2}}}\left({\Gamma({\alpha\over 2})\over\Gamma({\alpha+1\over 2})}\right)^{d}{(\sqrt{\pi}\,\alpha)^{d-1}\over k!}d^{k-{1\over 2}}.
$$
It remains to prove \eqref{eq:JTildeAsymptotic}. For $\ell,m\in\mathbb{N}$ consider the quantity
$$
\tilde{J}_{m,\ell-1}(\beta)=\mathbb{E}\beta([Z_1,\ldots,Z_\ell],[Z_{1},\ldots,Z_m]),
$$
where the points $Z_1,\ldots,Z_m\in\mathbb{R}^{m-1}$ are i.i.d.\ with beta' density $\tilde{f}_{m-1,\beta}$. In the proof of Theorem \ref{theo:f_vect} we have seen that $\beta([Z_1,\ldots,Z_\ell],[Z_{1},\ldots,Z_m])$ has the same law as $\beta(\mathop{\mathrm{pos}}\nolimits(V_1-V,\ldots,V_{m-\ell}-V))$, where
\begin{itemize}
\item[(a)] $V,V_1,\ldots,V_{m-\ell}\in\mathbb{R}^{m-\ell}$ are independent and such that
\item[(b)] $V_1,\ldots,V_{m-\ell}$ have density $\tilde{f}_{m-\ell,{2\beta-\ell+1\over 2}}$ and
\item[(c)] $V$ has density $\tilde{f}_{m-\ell,{(2\beta-m)\ell+m\over 2}}$.
\end{itemize}
Now, we substitute $m=d-2s$, $\ell=d-k$ and $\beta={d-2s-1+\alpha\over 2}$ and notice that the relevant beta'-parameters are
$$
m-\ell\quad\text{and}\quad\kappa(d) := {2\beta-\ell+1\over 2}={\alpha+k-2s\over 2}
$$
for the random variables $V_1,\ldots,V_{m-\ell}$ in (b) and
$$
m-\ell\quad \text{and}\quad
\eta(d) := {(2\beta-m)\ell+m\over 2}={\alpha(d-k)+k-2s\over 2}
$$
for the random variable $V$ in (c).
We are interested in the large $d$ behavior of
$$
\tilde{J}_{d-2s,d-k-1}\left({d-2s-1+\alpha\over 2}\right) =
\mathbb{E} \beta\big(\mathop{\mathrm{pos}}\nolimits(V_1(d)-V(d),\ldots, V_{k-2s}(d)-V(d))\big),
$$
where $V_1(d),\ldots,V_{k-2s}(d)$ with density $\tilde{f}_{k-2s,\kappa(d)}$ and $V(d)$ with density $\tilde{f}_{k-2s,\eta(d)}$ are independent.
\vspace*{2mm}
\noindent
\textit{Case 1.}
Assume first that $\kappa(d)$ converges to some finite $\kappa \in (\frac{k-2s}{2}, \infty)$, as $d\to\infty$. Note that the value $\frac{k-2s}{2}$ can be excluded by the assumption that $\inf_{d\in\mathbb{N}} \alpha(d) > 0$. For the same reason, we have $\eta(d)\to \infty$, as $d\to\infty$. Observe that the beta' distribution with a second parameter going to $\infty$ converges weakly to the Dirac measure at $0$. It follows that, as $d\to\infty$, the collection of random points $(V_1(d),\ldots,V_{k-2s}(d),V(d))$ converges weakly to $(W_1,\ldots,W_{k-2s},0)$, where $W_1,\ldots,W_{k-2s}$ are i.i.d.\ with density $\tilde{f}_{k-2s,\kappa}$. Consequently, we have
\begin{align*}
\lim_{d\to\infty} \tilde{J}_{d-2s,d-k-1}\left({d-2s-1+\alpha\over 2}\right)
&=
\lim_{d\to\infty} \mathbb{E} \beta \mathop{\mathrm{pos}}\nolimits(V_1(d)-V(d),\ldots, V_{k-2s}(d)-V(d))\\
&=
\mathbb{E} \beta \mathop{\mathrm{pos}}\nolimits(W_1,\ldots, W_{k-2s}).
\end{align*}
Since the distribution of $W_i$ is the same as that of $-W_i$ for every $i=1,\ldots,k-2s$, symmetry yields
$$
\mathbb{E}\beta(\mathop{\mathrm{pos}}\nolimits(W_1,\ldots,W_{k-2s})) = {1\over 2^{k-2s}}.
$$
This proves~\eqref{eq:JTildeAsymptotic} in Case~1.
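The symmetry argument can be checked numerically in the smallest non-trivial case $k-2s=2$, where $\beta(\mathop{\mathrm{pos}}\nolimits(W_1,W_2))$ is the angle between $W_1$ and $W_2$ divided by $2\pi$, and the claimed expectation is $1/4$. A Monte Carlo sketch (our own code; we use standard normal $W_i$, which share the required central symmetry):

```python
import math
import random

random.seed(0)

def angle_fraction(w1, w2):
    """Normalized solid angle of pos(w1, w2) in R^2: the angle between
    w1 and w2, divided by 2*pi."""
    dot = w1[0] * w2[0] + w1[1] * w2[1]
    c = dot / (math.hypot(*w1) * math.hypot(*w2))
    c = max(-1.0, min(1.0, c))          # clamp rounding noise
    return math.acos(c) / (2 * math.pi)

n = 200_000
total = 0.0
for _ in range(n):
    w1 = (random.gauss(0, 1), random.gauss(0, 1))
    w2 = (random.gauss(0, 1), random.gauss(0, 1))
    total += angle_fraction(w1, w2)

estimate = total / n
assert abs(estimate - 0.25) < 0.005     # expected value is 1/2^2
```

Here the angle between two independent uniform directions in the plane is uniform on $[0,\pi]$, so the mean normalized angle is exactly $1/4$, as the simulation confirms.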
\vspace*{2mm}
\noindent
\textit{Case 2.}
Assume now that $\kappa(d)$ diverges to $+\infty$, as $d\to\infty$. By the definition of $\kappa(d)$ and $\eta(d)$ we have $\eta(d) \to\infty$ and moreover $\kappa(d) = o(\eta(d))$, as $d\to\infty$. By Lemma~\ref{lem:gauss_limit}, the random points
$$
\sqrt{2 \kappa(d)} V_1(d), \ldots, \sqrt{2 \kappa(d)} V_{k-2s}(d), \sqrt{2 \eta(d)} V(d)
$$
thus converge weakly to independent random points $W_1,\ldots, W_{k-2s}, W$ with standard normal distribution on $\mathbb{R}^{k-2s}$.
Combining this with $\kappa(d) = o(\eta(d))$, we obtain the weak convergence
$$
\sqrt{2 \kappa(d)} \, (V_1(d), \ldots, V_{k-2s}(d), V(d)) \overset{d}{\to} (W_1,\ldots,W_{k-2s}, 0),
$$
as $d\to\infty$. Since the standard normal distribution is centrally symmetric with respect to the origin, the same symmetry argument as in Case~1 proves the validity of~\eqref{eq:JTildeAsymptotic}, that is
$$
\lim_{d\to\infty} \tilde{J}_{d-2s,d-k-1}\left({d-2s-1+\alpha\over 2}\right) = {1\over 2^{k-2s}}.
$$
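Lemma~\ref{lem:gauss_limit} can also be illustrated numerically in dimension one (our own sketch, not from the paper). We use the standard fact that the one-dimensional beta' density $\tilde f_{1,\beta}\propto(1+x^2)^{-\beta}$ is that of a Student $t$ variable with $\nu=2\beta-1$ degrees of freedom rescaled by $1/\sqrt{\nu}$, and check that $\sqrt{2\beta}\,X$ has variance close to $1$ for large $\beta$:

```python
import math
import random

random.seed(1)

def sample_beta_prime_1d(beta):
    """Sample from the density proportional to (1 + x^2)^(-beta) on R.
    If T is Student t with nu = 2*beta - 1 degrees of freedom, then
    X = T / sqrt(nu) = Z / sqrt(chi2_nu) has this density."""
    nu = 2 * beta - 1                    # integer for integer beta
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(nu))
    return z / math.sqrt(chi2)

beta = 50
n = 20_000
samples = [math.sqrt(2 * beta) * sample_beta_prime_1d(beta) for _ in range(n)]
var = sum(x * x for x in samples) / n    # the mean is 0 by symmetry
assert abs(var - 1.0) < 0.1              # close to the N(0,1) variance
```

The exact second moment here is $2\beta/(\nu-2)\approx 1.03$ for $\beta=50$, consistent with the Gaussian limit as $\beta\to\infty$.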
\vspace*{2mm}
\noindent
\textit{Case 3.} In general, $\kappa(d)$ need not converge to any finite or infinite limit. However, any subsequence of $\kappa(d)$ has a subsubsequence to which either Case~1 or Case~2 can be applied, thus showing that~\eqref{eq:JTildeAsymptotic} holds without additional assumptions. This completes the proof of Theorem~\ref{thm:PoissonHyperplanesDtoInfinity}.
\end{proof}
\section*{Acknowledgement}
ZK and CT were supported by the DFG Scientific Network {\it Cumulants, Concentration and Superconcentration}.
\addcontentsline{toc}{section}{References}
| {
"timestamp": "2019-01-23T02:31:24",
"yymm": "1805",
"arxiv_id": "1805.01338",
"language": "en",
"url": "https://arxiv.org/abs/1805.01338",
"abstract": "We study random polytopes of the form $[X_1,\\ldots,X_n]$ defined as convex hulls of independent and identically distributed random points $X_1,\\ldots,X_n$ in $\\mathbb{R}^d$ with one of the following densities: $$ f_{d,\\beta} (x) = c_{d,\\beta} (1-\\|x\\|^2)^{\\beta}, \\qquad \\|x\\| < 1, \\quad \\text{(beta distribution, $\\beta>-1$)} $$ or $$ \\tilde f_{d,\\beta} (x) = \\tilde{c}_{d,\\beta} (1+\\|x\\|^2)^{-\\beta}, \\qquad x\\in\\mathbb{R}^d, \\quad \\text{(beta' distribution, $\\beta>d/2$)}. $$ This setting also includes the uniform distribution on the unit sphere and the standard normal distribution as limiting cases. We derive exact and asymptotic formulae for the expected number of $k$-faces of $[X_1,\\ldots,X_n]$ for arbitrary $k\\in\\{0,1,\\ldots,d-1\\}$. We prove that for any such $k$ this expected number is strictly monotonically increasing with $n$. Also, we compute the expected internal and external angles of these polytopes at faces of every dimension and, more generally, the expected conic intrinsic volumes of their tangent cones. By passing to the large $n$ limit in the beta' case, we compute the expected $f$-vector of the convex hull of Poisson point processes with power-law intensity function. Using convex duality, we derive exact formulae for the expected number of $k$-faces of the zero cell for a class of isotropic Poisson hyperplane tessellations in $\\mathbb R^d$. This family includes the zero cell of a classical stationary and isotropic Poisson hyperplane tessellation and the typical cell of a stationary Poisson--Voronoi tessellation as special cases. In addition, we prove precise limit theorems for this $f$-vector in the high-dimensional regime, as $d\\to\\infty$. Finally, we relate the $d$-dimensional beta and beta' distributions to the generalized Pareto distributions known in extreme-value theory.",
"subjects": "Probability (math.PR); Metric Geometry (math.MG)",
"title": "Beta polytopes and Poisson polyhedra: $f$-vectors and angles"
} |
https://arxiv.org/abs/1901.02468 | Schur and $e$-positivity of trees and cut vertices | We prove that the chromatic symmetric function of any $n$-vertex tree containing a vertex of degree $d\geq \log _2n +1$ is not $e$-positive, that is, not a positive linear combination of elementary symmetric functions. Generalizing this, we also prove that the chromatic symmetric function of any $n$-vertex connected graph containing a cut vertex whose deletion disconnects the graph into $d\geq\log _2n +1$ connected components is not $e$-positive. Furthermore we prove that any $n$-vertex bipartite graph, including all trees, containing a vertex of degree greater than $\lceil \frac{n}{2}\rceil$ is not Schur-positive, namely not a positive linear combination of Schur functions. In complete generality, we prove that if an $n$-vertex connected graph has no perfect matching (if $n$ is even) or no almost perfect matching (if $n$ is odd), then it is not $e$-positive. We hence deduce that many graphs containing the claw are not $e$-positive. | \section{Introduction}\label{sec:intro} The generalization of the chromatic polynomial, known as the chromatic symmetric function, was introduced by Stanley in 1995 \cite{Stan95} and has seen a resurgence of interest and activity recently. Much of this has centred around trying to resolve the 1995 conjecture of Stanley \cite[Conjecture 5.1]{Stan95} and its equivalent incarnation \cite[Conjecture 5.5]{StanStem}, which states that if a poset is $(3+1)$-free, then its incomparability graph is a nonnegative linear combination of elementary symmetric functions, that is, $e$-positive. The study of chromatic symmetric function $e$-positivity \cite{ChoHuh, Dahl, lollipop, Foley, FoleyKin, Gash, GebSag, GP, Hamel, MM, HuhNamYoo, Wolfe}, and related Schur-positivity \cite{Gasharov, Paw, SW, Stanley2}, is also an active area due to connections to the representation theory of the symmetric and general linear group.
Many partial results regarding chromatic symmetric functions have been obtained, such as when the graph involved is the path or the cycle \cite{lollipop, Stan95, Wolfe}, when the graph is formed from complete graphs \cite{ChoHuh, GebSag, MM}, or when a graph avoids another \cite{Foley, Gash, Hamel, Tsujie}. These proofs have not always worked directly with the chromatic symmetric function. Instead, generalizations of the chromatic symmetric function have sometimes been employed, such as its extensions to quasisymmetric functions \cite{ChoHuh, MM, SW} and to noncommutative symmetric functions \cite{GebSag}.
Another research avenue that has seen activity is to determine whether two nonisomorphic trees can have the same chromatic symmetric function \cite{Jose2+1, Jose2, HeilJi, Loebl, MMW, Orellana}. The data for up to 29 vertices \cite{HeilJi} shows that two trees $T_1, T_2$ have the same chromatic symmetric function if and only if $T_1$ and $T_2$ are isomorphic. Further evidence towards this is that if $T_1$ and $T_2$ have the same chromatic symmetric function, then they must have the same number of vertices, edges and matchings; many of these results have been collected together in \cite{Orellana}.
In this paper, we meld these two avenues and discover criteria on trees and graphs with cut vertices that ensure they are not $e$-positive or not Schur-positive. In particular, we discover a trove of trees that are not $e$-positive, supporting Stanley's observation from 1995 \cite[p 187]{Stan95} that a tree is likely only to be $e$-positive ``by accident''. More precisely, this paper is structured as follows.
In Section~\ref{sec:background} we review the necessary notions before reducing the graphs we need to study to spiders in Subsection~\ref{subsec:redspiders}. We also prove the following in Theorem~\ref{the:perfect_matching} and relate it to the claw in Corollary~\ref{cor:clawsandmatching}.
\begin{theorem*} Let $G$ be an $n$-vertex connected graph. If $G$ has no perfect matching (if $n$ is even) or no almost perfect matching (if $n$ is odd), then $G$ is not $e$-positive.
\end{theorem*}
In Section~\ref{sec:spiders} we study the $e$-positivity of spiders including showing that a spider with at least three legs of odd length is not $e$-positive in Corollary~\ref{cor:matching_cor}. We also show that if the length of each spider leg is less than half the total number of vertices, then the spider is not $e$-positive in Lemma~\ref{lem:short_legs} and generalize this to trees and graphs in Theorem~\ref{the:gen_short_legs}. In Lemma~\ref{lem:induction_lem}, Theorem~\ref{the:induction_short_res_1} and Theorem~\ref{the:induction_short_res_2}, we show that if a spider is not $e$-positive, then we can create infinitely many more spiders from it that are not $e$-positive. Meanwhile Lemmas~\ref{lem:quotient_construction}, ~\ref{lem:quotient_construction_2} and ~\ref{lem:quotient_construction_3} give divisibility criteria on the total number of vertices, which ensure in Proposition~\ref{prop:all_e_positive_spiders} that the spider is not $e$-positive. Applying these results on spiders yields our most general result, the following, given in Theorem~\ref{the:gen_partial_e_thm}.
\begin{theorem*} If $G$ is an $n$-vertex connected graph with a cut vertex whose deletion produces a graph with $d\geq 3$ connected components such that
$$d \geq \log_2 n + 1$$then $G$ is not $e$-positive.
\end{theorem*}
We show the utility of our results in Example~\ref{ex:gen_short_legs}, where we easily classify when a windmill graph $W^d_n$ for $d\geq1$, $n\geq 1$ is $e$-positive. In Section~\ref{sec:bipartite} we turn our attention to Schur-positivity, proving the following in Theorem~\ref{the:bipartite_s_pos}.
\begin{theorem*}
If $G$ is an $n$-vertex bipartite graph with a vertex of degree greater than $\lceil \frac{n}{2} \rceil$, then $G$ is not Schur-positive.
\end{theorem*}
Finally, in Section~\ref{sec:further} we conclude with two captivating conjectures on the $e$-positivity of trees.
\section{Background}\label{sec:background} In order to describe our results, let us first recall the necessary combinatorics and algebra. We say a \emph{partition} $\lambda = (\lambda _1, \ldots , \lambda _{\ell(\lambda)})$ of $N$, denoted by $\lambda \vdash N$, is a list of positive integers whose \emph{parts} $\lambda _i$ satisfy $\lambda _1 \geq \cdots \geq \lambda _{\ell(\lambda)}>0$ and {$\sum _{i=1} ^{\ell(\lambda)} \lambda _i=N$}. If we have $j$ parts equal to $i$ then we often denote this by $i^j$. Related to every partition $\lambda$ is its \emph{transpose}, $\lambda ^t = (\lambda _1^t, \ldots , \lambda ^t _{\lambda_1})$, which is the partition of $N$ obtained from $\lambda$ by setting
$$\lambda _i^t = \mbox{ number of parts of }\lambda \geq i.$$For example, if $\lambda =(2,2,1)$ then $\lambda ^t = (3,2)$.
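Computing the transpose is mechanical; a minimal sketch (the function name is ours):

```python
def transpose(partition):
    """Transpose (conjugate) of a partition given as a weakly decreasing
    list of positive integers: entry i counts the parts that are >= i."""
    if not partition:
        return []
    return [sum(1 for part in partition if part >= i)
            for i in range(1, partition[0] + 1)]

assert transpose([2, 2, 1]) == [3, 2]    # the example above
assert transpose([3, 2]) == [2, 2, 1]    # transposing twice recovers lambda
```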
Given a graph $G$ with vertex set $V_G$ and edge set $E_G$, we say that $G$ is an \emph{$n$-vertex graph}, or has \emph{size} $n$, if $|V_G| =n$. We say that a connected graph $G$ contains a \emph{cut vertex} if there exists a vertex $v\in V_G$ such that the deletion of $v$ and its incident edges yields a graph $G'$ with more than one connected component. A \emph{connected partition} $C$ of an $n$-vertex graph $G$ is a partitioning of its vertex set $V_G$ into $\{V_1, \dots, V_k\}$ such that each induced subgraph formed by the vertices in each subset $V_i$ only is a connected graph. The \emph{type} of a connected partition $C$ is the partition of $n$ formed from sorting the sizes of each set $V_i$ in decreasing order.
We say $G$ \emph{has a connected partition} of type $\lambda$ if and only if there exists a connected partition of $G$ of type $\lambda$, and is \emph{missing a connected partition} of type $\lambda$ otherwise.
\begin{example}\label{ex:connpartition} Consider the $n$-vertex star $S_n$ for $n\geq 4$, consisting of a single vertex connected to $n-1$ vertices of degree 1. The star $S_4$ is below.
$$
\centering
\begin{tikzpicture}[scale=0.5]
\coordinate (A) at (0,0);
\coordinate (B) at (1.5,0);
\coordinate (C) at (3,0);
\coordinate (D) at (1.5,1);
\draw[thick] (A)--(D);
\draw[thick] (B)--(D);
\draw[thick] (C)--(D);
\filldraw (A) circle (7pt);
\filldraw (B) circle (7pt);
\filldraw (C) circle (7pt);
\filldraw (D) circle (7pt);
\end{tikzpicture}$$
The graph $S_n$ has a connected partition of type $\lambda$ if and only if $\lambda = (k, 1^{n-k})$ for some $1 \leq k \leq n$. Examples of connected partitions for $S_4$ of type $(4), (3,1), (2,1^2)$ and $(1^4)$ are below.
$$
\centering
\begin{tikzpicture}[scale=0.5]
\coordinate (A) at (0,0);
\coordinate (B) at (1.5,0);
\coordinate (C) at (3,0);
\coordinate (D) at (1.5,1);
\draw[thick] (A)--(D);
\draw[thick] (B)--(D);
\draw[thick] (C)--(D);
\filldraw (A) circle (7pt);
\filldraw (B) circle (7pt);
\filldraw (C) circle (7pt);
\filldraw (D) circle (7pt);
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=0.5]
\coordinate (A) at (0,0);
\coordinate (B) at (1.5,0);
\coordinate (C) at (3,0);
\coordinate (D) at (1.5,1);
\draw[thick] (A)--(D);
\draw[thick] (B)--(D);
\filldraw (A) circle (7pt);
\filldraw (B) circle (7pt);
\filldraw (C) circle (7pt);
\filldraw (D) circle (7pt);
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=0.5]
\coordinate (A) at (0,0);
\coordinate (B) at (1.5,0);
\coordinate (C) at (3,0);
\coordinate (D) at (1.5,1);
\draw[thick] (A)--(D);
\filldraw (A) circle (7pt);
\filldraw (B) circle (7pt);
\filldraw (C) circle (7pt);
\filldraw (D) circle (7pt);
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=0.5]
\coordinate (A) at (0,0);
\coordinate (B) at (1.5,0);
\coordinate (C) at (3,0);
\coordinate (D) at (1.5,1);
\filldraw (A) circle (7pt);
\filldraw (B) circle (7pt);
\filldraw (C) circle (7pt);
\filldraw (D) circle (7pt);
\end{tikzpicture}
$$
Thus $S_n$ is missing a connected partition of type $(n-2, 2)$ for $n\geq 4$. For example, $S_4$ is missing a connected partition of type $(2,2)$.
\end{example}
\begin{remark}\label{rem:claw} The star $S_4$ is also known as the claw. It is intimately connected to the aforementioned 1995 conjecture of Stanley \cite[Conjecture 5.1]{Stan95} since if a poset is $(3+1)$-free, then its incomparability graph is claw-free. In contrast, the graphs we will study are mostly not claw-free.
\end{remark}
We say that an $n$-vertex graph $G$ has a \emph{perfect matching} if it has a connected partition of type $(2^{\frac{n}{2}})$ and an \emph{almost} perfect matching if it has a connected partition of type $(2^{\frac{n-1}{2}},1)$. Classically stated, a graph $G$ has a perfect matching if there exists a subset of its edges $M\subseteq E_G$, such that every vertex in the graph is incident to exactly one edge in $M$. Similarly $G$ has an almost perfect matching if there exists a vertex $v\in V_G$ whose deletion, along with its incident edges, yields a graph $G'$ that has a perfect matching.
Graphs that will be of particular interest to us will be \emph{trees}, namely connected graphs with no cycles. Recall that degree 1 vertices in trees are called \emph{leaves}, and that a disjoint union of trees is called a \emph{forest}. Two types of tree that will be crucial to our results are paths and spiders. Recall that the \emph{path} $P_n$ of \emph{length} $n$, where $n\geq1$, is the $n$-vertex tree that has $n-2$ vertices of degree 2 and 2 leaves when $n\geq 2$, and is a single vertex when $n=1$. Meanwhile, given a partition $\lambda = (\lambda _1, \ldots , \lambda _d) \vdash n-1$ where $d\geq 3$, the \emph{spider} $$S(\lambda) = S(\lambda _1, \ldots , \lambda _d)$$is the $n$-vertex tree consisting of $d$ disjoint paths $P_{\lambda _1}, \ldots , P_{\lambda _d}$ (each respectively called a \emph{leg} of \emph{length} $\lambda _i$ for $1\leq i \leq d$) and a vertex (called the \emph{centre}) joined to a leaf in each path. Extending this notation, $S(i,\lambda)$ is the $(n+i)$-vertex spider with legs of length $i, \lambda _1, \ldots , \lambda _d$.
\begin{example}\label{ex:spider} The $n$-vertex star $S_n$ for $n\geq4$ is also the spider $S(1^{n-1})$. The $7$-vertex spider $S(4,1,1)$ is below.
\begin{center}
\begin{tikzpicture}
\filldraw [black] (0,0) circle (4pt);
\filldraw [black] (1,1) circle (4pt);
\filldraw [black] (0,2) circle (4pt);
\filldraw [black] (2.5,1) circle (4pt);
\filldraw [black] (4,1) circle (4pt);
\filldraw [black] (5.5,1) circle (4pt);
\filldraw [black] (7,1) circle (4pt);
\draw[thick] (0,0)--(1,1);
\draw[thick] (0,2)--(1,1);
\draw[thick] (1,1)--(7,1);
\end{tikzpicture}
\end{center}
\end{example}
We now turn to the algebra we will need. The algebra of symmetric functions is a subalgebra of $\mathbb{Q} [[ x_1, x_2, \ldots ]]$ that can be defined as follows. The \emph{$i$-th elementary symmetric function} $e_i$ for $i\geq 1$ is given by
$$e_i = \sum _{j_1<\cdots < j_i} x_{j_1}\cdots x_{j_i}$$and given a partition $\lambda = (\lambda _1, \ldots , \lambda _{\ell(\lambda)})$ the \emph{elementary symmetric function} $e_\lambda$ is given by
$$e_\lambda = e_{\lambda _1} \cdots e_{\lambda _{\ell(\lambda)}}.$$The \emph{algebra of symmetric functions}, $\Lambda$, is then the graded algebra
$$\Lambda = \Lambda ^0 \oplus \Lambda ^1 \oplus \cdots$$where $\Lambda ^0 = \operatorname{span} \{1\} = \mathbb{Q}$ and for $N\geq 1$
$$\Lambda ^N = \operatorname{span} \{e_\lambda \;|\; \lambda \vdash N\}.$$Moreover, the elementary symmetric functions form a basis for $\Lambda$. Perhaps the most studied basis of $\Lambda$ is the basis of Schur functions. For a partition $\lambda = (\lambda _1, \ldots, \lambda _{\ell(\lambda)})$, the \emph{Schur function} $s_\lambda $ is given by
\begin{equation}\label{eq:JT}
s_{\lambda }=\det \left( e_{\lambda ^t_i -i +j}\right) _{1\leq i,j \leq \lambda_1}
\end{equation}where if $\lambda ^t_{i}-i+j <0$ then $e _{\lambda ^t_{i}-i+j}=0$.
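For a concrete instance of \eqref{eq:JT}, take $\lambda = (2,1)$, so $\lambda^t = (2,1)$ and the determinant is $e_2e_1 - e_3e_0$. The sketch below (helper names ours) evaluates both this determinant and the known monomial expansion $s_{(2,1)} = m_{(2,1)} + 2m_{(1^3)}$ at $(x_1,x_2,x_3)=(1,2,3)$ and checks that they agree:

```python
from itertools import combinations
from math import prod

def elem(k, xs):
    """e_k(xs), with e_0 = 1 and e_k = 0 for k < 0 or k > len(xs)."""
    if k < 0 or k > len(xs):
        return 0
    return sum(prod(c) for c in combinations(xs, k)) if k else 1

def s21_jacobi_trudi(xs):
    # lambda = (2,1) gives lambda^t = (2,1), so the determinant in (eq:JT)
    # is det [[e_2, e_3], [e_0, e_1]] = e_2*e_1 - e_3*e_0.
    return elem(2, xs) * elem(1, xs) - elem(3, xs) * elem(0, xs)

def s21_monomials(xs):
    # s_(2,1) = m_(2,1) + 2*m_(1,1,1)
    m21 = sum(xs[i] ** 2 * xs[j]
              for i in range(len(xs)) for j in range(len(xs)) if i != j)
    m111 = sum(prod(c) for c in combinations(xs, 3))
    return m21 + 2 * m111

xs = (1, 2, 3)
assert s21_jacobi_trudi(xs) == s21_monomials(xs) == 60
```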
If a symmetric function can be written as a nonnegative linear combination of elementary symmetric functions then we say it is \emph{$e$-positive}, and likewise if a symmetric function can be written as a nonnegative linear combination of Schur functions then we say it is \emph{Schur-positive}. Although not clear from \eqref{eq:JT}, it is a classical result that any $e$-positive symmetric function is Schur-positive; however, Example~\ref{ex:S411} shows that the converse does not hold.
However, the symmetric functions that we will focus on are the chromatic symmetric functions of graphs, which require the graph to be \emph{finite} and \emph{simple}; we will assume that our graphs satisfy these properties from now on.
Given a graph $G$ with vertex set $V_G$, a \emph{proper colouring} $\kappa$ of $G$ is a function
$$\kappa : V_G\rightarrow \{1,2,\ldots\}$$such that if $v_1, v_2 \in V_G$ are adjacent, then $\kappa(v_1)\neq \kappa(v_2)$. Then the chromatic symmetric function is defined as follows.
\begin{definition}\cite[Definition 2.1]{Stan95}\label{def:chromsym} For an $n$-vertex graph $G$ with vertex set $V_G=\{v_1, \ldots, v_n\}$, the \emph{chromatic symmetric function} is defined to be
$$X_G = \sum _\kappa x_{\kappa(v_1)}\cdots x_{\kappa(v_n)}$$
where the sum is over all proper colourings $\kappa$ of $G$. \end{definition}
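Directly from Definition~\ref{def:chromsym}, specializing $x_1=\cdots=x_k=1$ and all other variables to $0$ in $X_G$ counts the proper colourings of $G$ that use colours from $\{1,\ldots,k\}$. The brute-force sketch below (helper name ours) checks this specialization for the claw $S_4$, whose number of proper $k$-colourings is $k(k-1)^3$:

```python
from itertools import product

def num_proper_colourings(edges, n_vertices, k):
    """Brute-force specialization of X_G at x_1 = ... = x_k = 1 (rest 0):
    the number of proper colourings of G using colours {1, ..., k}."""
    return sum(
        all(col[u] != col[v] for u, v in edges)
        for col in product(range(k), repeat=n_vertices)
    )

# The claw S_4: centre 0 joined to the leaves 1, 2, 3.
claw = [(0, 1), (0, 2), (0, 3)]
for k in (2, 3, 4):
    assert num_proper_colourings(claw, 4, k) == k * (k - 1) ** 3
```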
For succinctness, if we say that a graph $G$ is $e$-positive or Schur-positive, then we mean that $X_G$ is $e$-positive or Schur-positive, respectively.
\begin{example}\label{ex:S411} The spider $S(4,1,1)$ from Example~\ref{ex:spider} is not $e$-positive, but is Schur-positive since
\begin{align*}
X_{S(4,1,1)}&=e_{(2^3,1)} + 4e_{(3,2,1^2)} - 3e_{(3,2^2)} + 10e_{(3^2,1)} \\
&+ 10e_{(4,2,1)}+ 17e_{(4, 3)} + 4e_{(5, 1^2)} + 3e_{(5, 2)} + 11e_{(6, 1)} + 7e_{(7)}\\
&=64s_{(1^7)} + 88s_{(2,1^5)} + 76s_{(2^2, 1^3)} + 57s_{(2^3,1)} + 36s_{(3,1^4)} \\
&+ 36s_{(3,2,1^2)} + 18s_{(3,2^2)} + 4s_{(3^2,1)} + 5s_{(4,1^3)} + 6s_{(4,2,1)} + s_{(4,3)}.
\end{align*}
\end{example}
The following result gives us one way to test whether a graph is $e$-positive, and will be vital in many of our proofs.
\begin{theorem}\cite[Proposition 1.3.3]{Wolfgang} \label{the:e_positivity_crit}
If a connected $n$-vertex graph $G$ is $e$-positive, then $G$ has a connected partition of type $\mu$ for every partition $\mu \vdash n$.
\end{theorem}
Hence, to prove that a graph $G$ is not $e$-positive, it suffices to find a partition $\mu$ such that $G$ is missing a connected partition of type $\mu$. In particular, we get the following.
\begin{theorem}\label{the:perfect_matching} Let $G$ be an $n$-vertex connected graph. If $G$ has no perfect matching (if $n$ is even) or no almost perfect matching (if $n$ is odd), then $G$ is not $e$-positive. In particular, let $T$ be an $n$-vertex tree. If $T$ has no perfect matching (if $n$ is even) or no almost perfect matching (if $n$ is odd), then $T$ is not $e$-positive.\end{theorem}
It is a well-known result that if a connected graph with an even number of vertices is claw-free, then it has a perfect matching. This immediately yields the following corollary to Theorem~\ref{the:perfect_matching}.
\begin{corollary}\label{cor:clawsandmatching} Let $G$ be a connected graph with an even number of vertices and no perfect matching. Then $G$ contains the claw and is not $e$-positive.
\end{corollary}
\begin{remark}\label{rem:clawsandmatching} Note that Corollary~\ref{cor:clawsandmatching} cannot be strengthened further regarding graphs that contain the claw since $S(2,1,1)$ and $S(6,2,1)$ both contain the claw and are $e$-positive. However, $S(2,1,1)$ has an odd number of vertices, and $S(6,2,1)$ has a perfect matching.
\end{remark}
It is also known when a tree has a perfect matching by the following specialization of Tutte's Theorem on graphs with perfect matchings \cite{Tutte}.
\begin{lemma}\label{lem:perfect_matching} Let $T$ be a tree. Then $T$ has a perfect matching if and only if for every vertex $v$, the deletion of $v$ and its incident edges produces a forest with exactly one connected component with an odd number of vertices.
\end{lemma}
As an example, we use the two theorems above to test the star to see in two ways that it is not $e$-positive.
\begin{example}\label{ex:perfect_matching} The $n$-vertex star $S_n$ for $n\geq4$ is not $e$-positive since it is missing a connected partition of type $(n-2,2)$ by Example~\ref{ex:connpartition}.
It is also not $e$-positive since it is missing a connected partition of type $(2^{\frac{n}{2}})$ for $n$ even and $(2^{\frac{n-1}{2}},1)$ for $n$ odd. \end{example}
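The criterion of Theorem~\ref{the:e_positivity_crit} is easy to check by brute force on small graphs. The following sketch (helper names ours) enumerates all connected partitions of the claw $S_4$ and confirms Example~\ref{ex:connpartition}: type $(2,2)$ is missing while $(3,1)$ is present.

```python
def set_partitions(items):
    """Yield all set partitions of a list, recursively."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [smaller[i] + [head]] + smaller[i + 1:]
        yield [[head]] + smaller

def is_connected(block, edges):
    """Is the subgraph induced on `block` connected (and non-empty)?"""
    block = set(block)
    if not block:
        return False
    seen = {next(iter(block))}
    frontier = list(seen)
    while frontier:
        v = frontier.pop()
        for a, b in edges:
            for u, w in ((a, b), (b, a)):
                if u == v and w in block and w not in seen:
                    seen.add(w)
                    frontier.append(w)
    return seen == block

def connected_partition_types(n, edges):
    """All types (sorted block sizes) of connected partitions of the graph."""
    types = set()
    for part in set_partitions(list(range(n))):
        if all(is_connected(block, edges) for block in part):
            types.add(tuple(sorted((len(b) for b in part), reverse=True)))
    return types

claw = [(0, 1), (0, 2), (0, 3)]          # centre 0 joined to leaves 1, 2, 3
types = connected_partition_types(4, claw)
assert (3, 1) in types and (2, 1, 1) in types
assert (2, 2) not in types               # so S_4 is not e-positive
```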
Note, however, that the converse of Theorem~\ref{the:e_positivity_crit} and of Theorem~\ref{the:perfect_matching} is false since the spider $S(4,1,1)$ is not $e$-positive by Example~\ref{ex:S411} and yet has a connected partition of every type.
\subsection{The reduction to spiders}\label{subsec:redspiders} The next two lemmas allow us to reduce our study of connected graphs to the study of spiders. For ease of notation in the proof of the next lemma, in the $i^{th}$ leg of a spider, which has $\lambda_i$ vertices, label the vertices by $\{s_{i,1}, \dots, s_{i,\lambda_i}\}$ where $s_{i,1}$ is the vertex connected to the centre, $s_{i,\lambda_i}$ is a leaf, and there are edges between each $s_{i,j}$ and $s_{i,j+1}$ for $1\leq j \leq \lambda _i -1$.
\begin{lemma}\label{lem:spider_lem}
Let $T$ be a tree with a vertex of degree $d \geq 3$, and let $v$ be any such vertex. Let $(t_1, \dots, t_d)$ be the partition whose parts denote the sizes of the subtrees $(T_1, \dots, T_d)$ rooted at each vertex adjacent to $v$. If $T$ has a connected partition of type $\mu$, then the spider $S = S(t_1, \dots, t_d)$ has a connected partition of type $\mu$ as well.
\end{lemma}
\begin{proof}
Let $C = \{V_1, \dots, V_k\}$ be a connected partition of $T$ of type $\mu$. We will work towards constructing a connected partition of $S$ of type $\mu$. Without loss of generality, suppose $v \in V_1$. Since $T$ is a tree, no subset in $C$ contains vertices from two different subtrees unless vertex $v$ is also included. Therefore, all subsets in $C$ except possibly $V_1$ contain vertices from one subtree only. Hence, let $V_1$ contain $n_i$ vertices from subtree $T_i$ for $1 \leq i \leq d$, and let $\mathcal{T}_i \subset C$ denote the sets in the partition with vertices in subtree $T_i$ only. Notice that the sets in $\mathcal{T}_i$ together contain $t_i - n_i$ vertices.
A connected partition $\{W_1, \dots, W_k\}$ of the same type $\mu$ in $S$ may now be formed as follows. Let $W_1 = \{v\} \cup \{s_{i,j}: 1 \leq i \leq d, 1 \leq j \leq n_i\}$ where $v$ is the centre of $S$. Note that $|W_1|=|V_1|$ and $W_1$ is connected. Now notice that deleting all vertices in $W_1$ and their incident edges from $S$ leaves a collection of $d$ disjoint paths of length $t_i - n_i$ for $1 \leq i \leq d$. For each $\mathcal{T}_i$, let $\nu_i \vdash (t_i - n_i)$ be the partition formed from the size of each set in $\mathcal{T}_i$. Since a path may be decomposed into a connected partition of any type, in particular a connected partition of type $\nu_i$ can be formed from the path of length $t_i - n_i$. Hence, the result follows.
\end{proof}
In fact, the above argument can be generalized as follows.
\begin{lemma}\label{lem:gen_spider_lem}
Let $G$ be a connected graph with a cut vertex $v$ whose deletion produces a graph with connected components $(C_1, \dots, C_d)$ with $d \geq 3$. Let $(c_1, \dots, c_d)$ be the partition whose parts denote the sizes of each of these connected components. If $G$ has a connected partition of type $\mu$, then the spider $S = S(c_1, \dots, c_d)$ has a connected partition of type $\mu$ as well.
\end{lemma}
\begin{proof}
Suppose $C$ is a connected partition of $G$ of type $\mu$. By assumption that $v$ is a cut vertex, any path between $w, u$ in distinct connected components $C_i$ and $C_j$ passes through $v$, for otherwise the deletion of $v$ would not leave $C_i$ and $C_j$ as distinct connected components. Hence, there are no edges between any distinct $C_i$ and $C_j$. The proof then proceeds as before in Lemma~\ref{lem:spider_lem} as $C$ contains exactly one set $V_1$ including the vertex $v$ and possibly some other vertices in $(C_1, \dots, C_d)$, and every other set $V_i$ in $C$ contains vertices from exactly one connected component $C_j$.
\end{proof}
\section{The $e$-positivity of spiders}\label{sec:spiders} We now work towards our first result on spiders by classifying when they have perfect and almost perfect matchings, for which we will need the following straightforward observation.
\begin{observation}\label{obs:matchedpaths} A path, $P_n$, has a perfect matching if and only if $n$ is even.
\end{observation}
With this observation we can now classify when a spider has a perfect or almost perfect matching.
\begin{lemma}\label{lem:spider_matching} We have the following.
\begin{enumerate}[(a)]
\item A spider has a perfect matching if and only if it has exactly one leg of odd length.
\item A spider has an almost perfect matching if and only if it has zero or two legs of odd length.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose a spider $S$ has a perfect matching. The deletion of the centre of $S$ produces a union of disjoint paths, and exactly one of these paths must have odd length by Lemma~\ref{lem:perfect_matching}. Conversely, if $S$ has exactly one leg of odd length, by matching the centre to its neighbour in that leg, a perfect matching exists by Observation~\ref{obs:matchedpaths} since the deletion of these vertices and their incident edges then produces a set of disjoint paths, each of even length.
Now suppose a spider $S$ has an almost perfect matching, so that there exists a vertex $v'$ whose deletion results in a graph with a perfect matching. If the deletion of the centre produces a graph with a perfect matching, then all legs in $S$ have even length, by Observation~\ref{obs:matchedpaths}. Otherwise, $v'$ lies in one of the legs, say $L$. Hence, unless the centre had degree 3 and $v'$ was adjacent to it, the deletion of $v'$ produces a spider $S'$ with an even number of vertices together with a path of even length, which may have $0$ vertices. Notice that an odd number of vertices in total was deleted from $S$ to form $S'$.
Hence, if the leg $L$ in $S'$ now has odd length, then every other leg in $S$ had even length by the first part. Therefore, all legs in $S$ had even length, as an odd number of vertices was deleted from $L$ to form $S'$. Otherwise, if the leg $L$ in $S'$ now has even length, then some other leg in $S'$ has odd length by the first part, and so, as an odd number of vertices was deleted from $L$ to form $S'$, $S$ originally had 2 legs of odd length.
In the case where $v'$ is adjacent to a degree 3 centre, the deletion of $v'$ produces two disjoint paths of even length. Hence, in this case, $S$ had exactly 2 legs of odd length. This is because one of the paths of even length must consist of 2 legs and the centre, so one of the legs must be of odd length. Furthermore, the other path of even length must be a path of length one less than the remaining third leg, due to the deletion of $v'$, so the third leg must be of odd length.
For the converse, if $S$ has no legs of odd length, then the deletion of the centre produces a disjoint union of paths with even length, which has a perfect matching by Observation~\ref{obs:matchedpaths}. If $S$ has 2 legs of odd length, then the deletion of the leaf in exactly one of those legs produces a spider with exactly one leg of odd length, which has a perfect matching by the first part.
\end{proof}
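Lemma~\ref{lem:spider_matching}(a) can be verified computationally for small spiders. The sketch below (our own code) builds a spider, decides the existence of a perfect matching by the greedy leaf-matching rule, which is valid for trees because a leaf's matching edge is forced, and compares with the odd-leg count:

```python
def spider_edges(legs):
    """Edges of the spider S(legs): vertex 0 is the centre."""
    edges, nxt = [], 1
    for length in legs:
        prev = 0
        for _ in range(length):
            edges.append((prev, nxt))
            prev, nxt = nxt, nxt + 1
    return edges, nxt            # nxt is the total number of vertices

def has_perfect_matching_tree(n, edges):
    """Perfect matching test for a forest: repeatedly match a leaf with its
    unique neighbour (this edge is forced) and delete both; fail if an
    isolated unmatched vertex ever appears."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    alive = set(range(n))
    while alive:
        leaf = next(v for v in alive if len(adj[v]) <= 1)
        if not adj[leaf]:
            return False          # isolated vertex cannot be matched
        partner = next(iter(adj[leaf]))
        for v in (leaf, partner):
            alive.discard(v)
            for w in adj[v]:
                adj[w].discard(v)
            adj[v] = set()
    return True

# Lemma (a): a spider has a perfect matching iff exactly one leg is odd.
for legs in [(4, 1, 1), (2, 2, 1), (3, 2, 2), (2, 1, 1), (3, 3, 1), (1, 1, 1)]:
    edges, n = spider_edges(legs)
    odd_legs = sum(length % 2 for length in legs)
    assert has_perfect_matching_tree(n, edges) == (odd_legs == 1)
```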
\begin{corollary}\label{cor:matching_cor}
Every spider with at least 3 legs of odd length is not $e$-positive.
\end{corollary}
\begin{proof}
This follows immediately from Lemma~\ref{lem:spider_matching} and Theorem~\ref{the:perfect_matching}. In particular, if we have an $n$-vertex spider, then a connected partition of type $(2^{\frac{n}{2}})$ for $n$ even or $(2^{\frac{n-1}{2}},1)$ for $n$ odd, is missing.
\end{proof}
\begin{example} \label{ex:threeoddlegs} All spiders $S(\lambda _1, \lambda _2, \lambda _3, 1,1,1)$ are not $e$-positive. \end{example}
However, it is not only the parity of the legs that can determine the $e$-positivity of a spider, but also the length of the legs.
\begin{lemma}\label{lem:short_legs} Let $\lambda \vdash (n-1)$ have at least 3 parts and $m$ be the maximum of the parts of $\lambda$. If $m < \lfloor \frac{n}{2} \rfloor$, then the $n$-vertex spider $S(\lambda)$ is missing a connected partition of type $$(n-m-1, m+1)$$and hence is not $e$-positive.
\end{lemma}
\begin{proof} Consider the types of connected partitions that can be formed by the deletion of one edge from $S(\lambda)$; since $S(\lambda)$ is a tree, every connected partition into two parts arises in this way. It is straightforward to see that the only types that can be formed are $(n-i, i)$ where $1\leq i \leq m$. Since $m < \lfloor \frac{n}{2} \rfloor$ we have that $n-(m+1)\geq m+1$ and so $S(\lambda)$ is missing a connected partition of type
$$(n-(m+1), m+1)=(n-m-1, m+1).$$Hence, $S(\lambda)$ is not $e$-positive by Theorem~\ref{the:e_positivity_crit}.
\end{proof}
\begin{remark}\label{rem:short_legs} If $\lambda$ and $m$ are as in Lemma~\ref{lem:short_legs} and $\mu = (n-m-1, m+1)$, then using the Newton-Girard Identities and \cite[Theorem 2.5]{Stan95} one can compute directly that
$$[e_\mu] X_{S(\lambda)} = -n \mbox{ if } m+1\neq n-m-1$$and
$$[e_\mu] X_{S(\lambda)} = -\frac{n}{2} \mbox{ if } m+1= n-m-1$$where $[e_\mu] X_{S(\lambda)}$ denotes the coefficient of $e_\mu$ in $X_{S(\lambda)}$ when expanded as a linear combination of elementary symmetric functions.
\end{remark}
\begin{example}\label{ex:short_legs} If $\lambda = (2,2,1,1)\vdash 6$ then $m=2$ and $S(2,2,1,1)$, below, is missing a connected partition of type $(4,3)$ so is not $e$-positive by Lemma~\ref{lem:short_legs}. More precisely, it is not $e$-positive since $X_{S(2,2,1,1)}$ contains the term $-7e_{(4,3)}$ by the above remark.
\begin{center}
\begin{tikzpicture}
\filldraw [black] (0,0) circle (4pt);
\filldraw [black] (0,2) circle (4pt);
\filldraw [black] (1,1) circle (4pt);
\filldraw [black] (2,0) circle (4pt);
\filldraw [black] (2,2) circle (4pt);
\filldraw [black] (3.5,0) circle (4pt);
\filldraw [black] (3.5,2) circle (4pt);
\draw[thick] (0,0)--(1,1);
\draw[thick] (0,2)--(1,1);
\draw[thick] (1,1)--(2,0);
\draw[thick] (1,1)--(2,2);
\draw[thick] (2,2)--(3.5,2);
\draw[thick] (2,0)--(3.5,0);
\end{tikzpicture}
\end{center}
\end{example}
For our next three results we construct larger spiders that are not $e$-positive from smaller ones. Hence, our focus will temporarily not be on the total number of vertices, $n$, but on the number of vertices in an initial set of legs.
\begin{lemma}\label{lem:induction_lem}
Let $i \geq 0$, $\lambda \vdash N$ have at least two parts, and $m$ be the maximum of the parts of $\lambda$. Suppose the spider $S(i,\lambda)$ is missing a connected partition of type $\mu \vdash (i + N+1)$ where $m + 1 \leq \mu_k \leq N$ for each part $\mu_k$ of $\mu$. Then the spider $S(i+N, \lambda)$ is missing a connected partition of type $$(N, \mu)$$ and hence is not $e$-positive.
\end{lemma}
To aid comprehension, we give an example before the proof.
\begin{example}\label{ex:induction_lem} If $\lambda = (2,1,1)$ then $N=4$ and $m=2$. Setting $i=2$, we know from Example~\ref{ex:short_legs} that $S(2,2,1,1)$ is missing a connected partition of type $\mu = (4,3)\vdash (2+4+1)$ and its parts satisfy $m+1=3\leq 4,3 \leq 4 = N$. Hence, $S(6,2,1,1)$ is missing a connected partition of type $(4,4,3)$.
\end{example}
\begin{proof}
We will try to construct a connected partition of type $(N, \mu)$ and find that this is impossible.
Suppose $V_1$ is a connected component in a connected partition of $S = S(i+N,\lambda)$ with $N$ vertices. Let $L'$ denote the vertices that are part of the leg of length $i + N$ in $S$ and $L_k$ denote the vertices that are part of the leg of length $\lambda_k$ in $S$.
If $V_1$ does not contain any vertex from $L'$, then $V_1$ must contain the centre, as each set $L_k$ has at most $N-1$ vertices by assumption. Since $V_1$ contains the centre and has $N$ vertices, exactly one of the $N+1$ vertices of the subtree $S(\lambda)$ in $S$ lies outside $V_1$, and all of that vertex's neighbours lie in $V_1$, so the deletion of $V_1$ and its incident edges produces an isolated vertex. Therefore, a connected partition of type $\mu$ cannot be formed from $S$ with all vertices in $V_1$ deleted, as each part of $\mu$ has size at least $m+1 \geq 2$ by assumption.
Hence, $V_1$ must contain some vertices from $L'$. If $V_1$ also contains a vertex of some $L_k$, then it must contain the centre since $V_1$ is connected. Once again, since the subtree $S(\lambda)$ has $N+1$ vertices and $V_1$ contains the centre, $V_1$ cannot contain the subtree $S(\lambda)$ entirely, and the deletion of $V_1$ from $S$ then produces at least two connected components, one of which is a path with at most $m$ vertices. Hence, a connected partition of type $\mu$ cannot be formed from the remaining graph, as each part has size at least $m+1$.
Thus, $V_1$ must contain either the centre and vertices from $L'$ only, or vertices from $L'$ only. If $V_1$ contains the centre and vertices from $L'$ only, then, as in the previous paragraph, the deletion of $V_1$ from $S$ produces at least two connected components, one of which is a path with at most $m$ vertices. Hence, a connected partition of type $\mu$ cannot be formed from the remaining graph, as each part has size at least $m+1$.
Therefore, $V_1$ must contain vertices from $L'$ only, which form an induced path with $N$ vertices. Upon deleting $V_1$ and its incident edges, the graph $G_j$ formed is the disjoint union of $S(j,\lambda)$ and a path with $i - j$ vertices for some $0 \leq j \leq i$. However, if $G_j$ had a connected partition of type $\mu$ for some $j$, this would contradict the assumption that $S(i,\lambda)$ is missing a connected partition of type $\mu$, since any $G_j$ can be formed by first deleting an edge from the leg of length $i$ in $S(i,\lambda)$.
As we have exhausted all possibilities for $V_1$ and $|V_1| = N$, we conclude that $S(i+N,\lambda)$ is missing a connected partition of type $(N, \mu)$, and hence it is not $e$-positive by Theorem~\ref{the:e_positivity_crit}.
\end{proof}
We now come to a powerful way of creating families of spiders that are missing a connected partition.
\begin{theorem}\label{the:induction_short_res_1}
Let $\lambda \vdash N$ and $m$ be the maximum of the parts of $\lambda$. If $\max(2m-N+1, 0) \leq i < N$, then the spider $S(i + Na, \lambda)$ is not $e$-positive for every integer $a \geq 0$. In particular, the spider $S(i+Na,\lambda)$ is missing a connected partition of type $\mu$ where $$\mu = \begin{cases} (N^{a+1}, i+1) & m \leq i < N \\ (N^a, N+i-m, m+1) & i < m \end{cases}.$$
\end{theorem}
\begin{proof}
Note that if a spider is missing a connected partition then it is not $e$-positive by Theorem~\ref{the:e_positivity_crit}. Hence, we now proceed to find a missing connected partition in all cases.
Before we do this, observe that if $\lambda = (N)$, so $m=N$, then no such $i$ exists. Thus assume that $\lambda \vdash N$ with at least 2 parts. We will now study two cases, $m\leq i$ and $i<m$. In each case we will study the spider $S(i,\lambda)$ before drawing our desired conclusions about $S(i+Na, \lambda)$.
For the first case, if $1\leq m \leq i <N$, then $2m-N+1 \leq m$ since $m\leq N-1$, so our condition on $i$ is trivially satisfied. Also note that all legs in the spider $S(i,\lambda)$ have length less than $\lfloor \frac{i+N+1}{2} \rfloor$. This is because there are $i+N+1$ vertices in total and
$$\left\lfloor \frac{i + N + 1}{2} \right\rfloor > \left\lfloor\frac{2i}{2} \right\rfloor= i \geq m.$$Hence, by Lemma~\ref{lem:short_legs}, since $i$ is the maximum of the parts of $\lambda$ and $i$ together, the spider $S(i,\lambda)$ is missing a connected partition of type $(N, i+1)$. Hence, by Lemma~\ref{lem:induction_lem}, since $m+1\leq i+1\leq N$ by assumption, we have that $S(i+N, \lambda)$ is missing a connected partition of type $(N,N,i+1)$.
Consequently, $S(i+Na, \lambda)$ is missing a connected partition of type $(N^{a+1}, i+1)$, since if not then $(a-1)$ connected components with $N$ vertices would have to be contained in the leg of length $i+Na$ (this is because otherwise one of these connected components with $N$ vertices would consist of the centre vertex connected to $N-1$ other vertices, which could not yield a connected partition of type $(N^{a+1}, i+1)$ since $i\geq m$). However, this would imply that $S(i+N, \lambda)$ is not missing a connected partition of type $(N,N,i+1)$, a contradiction.
For the second case, first suppose $0 \leq 2m-N+1 \leq i < m$. Then $2m+1\leq i+N$, so
$$\left\lfloor \frac{i+N+1}{2} \right\rfloor \geq \left\lfloor \frac{2m+2}{2} \right\rfloor = m+1 > m > i,$$and hence the spider $S(i, \lambda)$ is missing a connected partition of type $(N+i-m, m+1)$
by Lemma~\ref{lem:short_legs} since $m$ is the length of the longest leg in $S(i, \lambda)$. Hence, by Lemma~\ref{lem:induction_lem}, since $m+1=N+(2m-N+1)-m \leq N+i-m<N$, because $i<m$ by assumption, we have that $S(i+N, \lambda)$ is missing a connected partition of type $(N, N+i-m, m+1)$.
Alternatively suppose $2m-N+1<0\leq i <m$. The first inequality implies that $m<\frac{N-1}{2}$ so
{$$\left\lfloor \frac{i+N+1}{2} \right\rfloor > m$$}and hence the spider $S(i, \lambda)$ is missing a connected partition of type $(N+i-m, m+1)$ by Lemma~\ref{lem:short_legs} since $m$ is the length of the longest leg in $S(i, \lambda)$. Hence, by Lemma~\ref{lem:induction_lem}, since $m+1\leq N+i-m<N$, because $i<m$ by assumption, we have that $S(i+N, \lambda)$ is missing a connected partition of type $(N, N+i-m, m+1)$.
Consequently, in both subcases of the second case $S(i+Na, \lambda)$ is missing a connected partition of type $(N^a, N+i-m, m+1)$ since if not then $(a-1)$ connected components with $N$ vertices would have to be contained in the leg of length $i+Na$ (this is because otherwise one of these connected components with $N$ vertices would consist of the centre vertex connected to $N-1$ vertices, which could not yield a connected partition of type $(N,N+i-m,m+1)$). However, this would imply that $S(i+N, \lambda)$ is not missing a connected partition of type $(N,N+i-m,m+1)$, a contradiction.
\end{proof}
\begin{example}\label{ex:induction_short_res_1}
If $\lambda = (2,1,1)$, then $N=4$ and $m=2$. Since $\max(2m-N+1, 0)=1 \leq 2 < 4=N$ we can set $i=2$ and hence, by Theorem~\ref{the:induction_short_res_1}, every spider $S(2+4a, 2, 1,1)$ is missing a connected partition of type $(4^{a+1}, 3)$ for every integer $a \geq 0$.
\end{example}
Applying the above theorem now leads us to a surfeit of spiders that are not $e$-positive.
\begin{theorem}\label{the:induction_short_res_2}
Let $\lambda \vdash N$ and $m$ be the maximum of the parts of $\lambda$. If $m < \lfloor \frac{N}{2} \rfloor$, then the spider $S(i,\lambda)$ is not $e$-positive for every integer $i \geq 0$.
\end{theorem}
\begin{proof}
Since every integer $i \geq 0$ can be written as $i_0 + Na$ with $0 \leq i_0 < N$ and $a \geq 0$ an integer, by Theorem~\ref{the:induction_short_res_1} it suffices to show that if $m < \left \lfloor \frac{N}{2} \right \rfloor$, then $2m-N+1 \leq 0$. If $N$ is even, then $2m < N$, so $2m - N < 0$ and $2m - N + 1 \leq 0$. Otherwise, if $N$ is odd, then $2m < N-1$, so once again $2m - N + 1 \leq 0$.
\end{proof}
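To illustrate this result, we give a small instance.
\begin{example}\label{ex:induction_short_res_2}
If $\lambda = (2,2,1,1)$, then $N=6$ and $m=2 < 3 = \lfloor \frac{6}{2} \rfloor$, so by Theorem~\ref{the:induction_short_res_2} the spider $S(i,2,2,1,1)$ is not $e$-positive for every integer $i\geq 0$.
\end{example}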
An alternative approach for finding a missing connected partition is given in the following lemma.
\begin{lemma}\label{lem:quotient_construction}
Suppose $S=S(\lambda_1, \dots, \lambda_d)$ is an $n$-vertex spider with $(\lambda_1, \dots, \lambda_d)$ a partition, $\lambda _1 \geq \lambda _2 +\cdots + \lambda _d$, $\lambda_2 \leq \lambda_3 + \dots + \lambda_d$, and with $\lambda_2 \geq 2$. Let $n = q(\lambda_2 + 1) + r$ {where $0\leq r <\lambda _2 +1$}, and $r = qd' + r'$ {where $0\leq r' <q$}. If $\lambda_2 \geq 3$, or if $\lambda_2 = 2$ and $q \geq 3$, then $S$ is missing a connected partition of type $$(\lambda_2 + d' + 2)^{r'} (\lambda_2 + d' + 1)^{q - r'}.$$
\end{lemma}
\begin{proof}
Suppose $S$ has a connected partition $C$ of type $(\lambda_2 + d' + 2)^{r'} (\lambda_2 + d' + 1)^{q - r'}$.
Consider the set $V_1\in C$ containing the leaf on a leg of length $\lambda _2$. Since every set {in $C$} must contain at least $\lambda_2 + 1$ vertices, $V_1$ contains the centre, and hence also all legs of length less than or equal to $\lambda_2$ since all sets in the connected partition have size greater than $\lambda_2$.
Hence, since $\lambda_2 \leq \lambda_3 + \dots + \lambda_d$, $$|V_1| \geq 1 + \lambda_2 + \dots + \lambda_d \geq 1 + 2 \lambda_2.$$
However, we claim that $1 + 2 \lambda_2 > \lambda_2 + d' + 2$. This is because $$n = 1 + \lambda_1 + \dots + \lambda_d \geq 1 + 2(\lambda_2 + \dots + \lambda_d) > 2(\lambda_2 + 1)$$ by assumption that $\lambda _1 \geq \lambda _2 +\cdots + \lambda _d$. So $q \geq 2$. Hence, if $\lambda_2 \geq 3$, then $$\frac{\lambda_2}{\lambda_2 - 1} = 1 + \frac{1}{\lambda_2 - 1} < 2 \leq q.$$Otherwise if $\lambda_2 = 2$, then $\frac{\lambda_2}{\lambda_2 - 1} = 2 < q$ when $q \geq 3$. Therefore, in either case,
$$r - r' = qd' \leq r \leq \lambda_2 < q(\lambda_2 - 1),$$which implies that $d' < \lambda_2 - 1$ by dividing both sides by $q$. Adding $\lambda_2 + 2$ to both sides shows that $1 + 2\lambda_2 > \lambda_2 + d' + 2$, which contradicts that $C$ is a connected partition of the desired type as $V_1$ does not contain $\lambda_2 + d' + 2$ or $\lambda_2 + d' + 1$ vertices.
\end{proof}
\begin{example}\label{ex:quotient_construction}
Let $S = S(8, 2, 2, 1)$. Since $8\geq 2+2+1$ and $2\leq 2+1$, and $n = 14 = 4(3) + 2$, the conditions of Lemma~\ref{lem:quotient_construction} are met. Since $2 = 4(0) + 2$, $S$ is missing a connected partition of type $(\lambda_2 + 2)^2 (\lambda_2 + 1)^2 = (4,4,3,3)$.
\end{example}
We will now generalize Lemma~\ref{lem:quotient_construction} and then bound the number of vertices, before giving our key result on the $e$-positivity of a spider.
\begin{lemma}\label{lem:quotient_construction_2}
Let $i \geq 3$ be an integer. Suppose $S(\lambda_1, \dots, \lambda_d)$ is an $n$-vertex spider with $(\lambda_1, \dots, \lambda_d)$ a partition, with a $\lambda_i \geq 2$ satisfying $\lambda_i \leq \lambda_{i+1} + \dots + \lambda_d$ {where $i<d$}, and $\lambda_j > \lambda_{j+1} + \dots + \lambda_d$ for all $j < i$. Let $n = q(\lambda_i + 1) + r$ {where $0\leq r <\lambda _i +1$}, and $r = qd' + r'$ {where $0\leq r' <q$}. Then $S$ is missing a connected partition of type $$(\lambda_i + d' + 2)^{r'} (\lambda_i + d' + 1)^{q - r'}.$$
\end{lemma}
\begin{proof}
Suppose $S$ has a connected partition $C$ of the desired type.
Consider the set $V_1 \in C$ containing the leaf on a leg of length $\lambda_i$. Since every set {in $C$} must contain at least $\lambda_i + 1$ vertices, $V_1$ contains the centre, and hence also all legs of length less than or equal to $\lambda_i$ since all sets in the connected partition have size greater than $\lambda_i$. Hence, since $\lambda_i \leq \lambda_{i+1} + \dots + \lambda_d$,
$$|V_1| \geq 1 + \lambda_i + \dots + \lambda_d \geq 1 + 2 \lambda_i.$$
However, we claim that $\frac{\lambda_i}{\lambda_i - 1} \leq 2 < q$. This is because $\lambda_i \geq 2$ so $\frac{\lambda_i}{\lambda_i - 1} \leq 2$, and repeatedly using the condition that $\lambda_j > \lambda_{j+1} + \dots + \lambda_d$, for all $j < i$ gives
{\begin{align*}n&=1+\lambda _1 + \cdots + \lambda _d\\
&>1+2(\lambda _2 + \cdots + \lambda _d)\\
&>1+4(\lambda _3 + \cdots + \lambda _d)\\
&\vdots\\
&> 1 + 2^{i-1} (\lambda_i + \dots + \lambda_d) \geq 2^{i-1} (\lambda_i + 1).\end{align*}Hence,
$$(q+1)(\lambda _i +1) > q(\lambda_i + 1) + r = n > 2^{i-1} (\lambda_i + 1)$$and so} $q \geq 2^{i-1} \geq 4>2$ since $i \geq 3$. This implies that $1 + 2 \lambda_i > \lambda_i + d' + 2$ as in the proof of Lemma~\ref{lem:quotient_construction}, which contradicts that $C$ is a connected partition of the desired type as $V_1$ does not contain {$\lambda_i + d' + 2$ or $\lambda_i + d' + 1$} vertices.
\end{proof}
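To aid comprehension, we give an instance of this lemma.
\begin{example}\label{ex:quotient_construction_2}
Let $S = S(16,8,2,2,1)$, so $n=30$. Taking $i=3$ we have $\lambda _3 = 2 \leq 2+1$, while $\lambda _1 = 16 > 8+2+2+1$ and $\lambda _2 = 8 > 2+2+1$, so the conditions of Lemma~\ref{lem:quotient_construction_2} are met. Since $30 = 10(3)+0$ and $0=10(0)+0$, we have $q=10$, $r=0$, $d'=0$ and $r'=0$, and hence $S$ is missing a connected partition of type $(\lambda _3 + 1)^{10} = (3^{10})$.
\end{example}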
\begin{lemma}\label{lem:count_vertices}
Let $S(\lambda_1, \dots, \lambda_d)$ be an $n$-vertex spider, with $(\lambda_1, \dots, \lambda_d)$ a partition.
\begin{enumerate}[(a)]
\item If $\lambda_1 \geq \lambda_2 + \dots + \lambda_d$ and $\lambda_i > \lambda_{i+1} + \dots + \lambda_d$ for $2 \leq i \leq d - 1$, then $n > 2^{d-1}$.
\item If $\lambda_1 \geq \lambda_2 + \dots + \lambda_d$, $\lambda_i > \lambda_{i+1} + \dots + \lambda_d$ for {$d\geq 4$ and }$2 \leq i \leq d-2$, and $\lambda_{d-1} = \lambda_d = 1$, then $n > 2^{d-1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For (a), if the partition $(\lambda_1, \dots, \lambda_d)$ satisfies the conditions stated, then we claim that $\lambda_{d-i} > 2^{i-1} \lambda_d$ for $1 \leq i \leq d-1$. This is true for $i = 1$ by assumption. Next, assuming by induction that $\lambda_{d-j} > 2^{j-1} \lambda_d$ for all $1 \leq j < i$,
$$\lambda_{d-i} \geq \sum_{j = 0}^{i-1} \lambda_{d-j} > \lambda_d + \sum_{j=1}^{i-1} 2^{j-1} \lambda_d = \lambda_d + (2^{i-1} - 1) \lambda_d = 2^{i-1} \lambda_d.$$
Hence, the total number of vertices $n$ satisfies
$$n = \lambda_1 + \dots + \lambda_d + 1 > \lambda_d + \sum_{i=1}^{d-1} 2^{i-1} \lambda_d = {\lambda_d + (2^{d-1} - 1) \lambda_d} = 2^{d-1} \lambda_d.$$
Since $\lambda_d \geq 1$, then {$n > 2^{d-1} \lambda_d\geq2^{d-1}$.}
For (b), we claim that $\lambda_{d-i} \geq 3 (2^{i-2})$ for $2 \leq i \leq d-2$. This is true for $i=2$ since $\lambda_{d-2} \geq 3$ from the conditions given. Next, assuming by induction that this holds for all $2 \leq j < i$,
$$\lambda_{d-i} \geq \sum_{j=2}^{i-1} \lambda_{d-j} + \lambda_{d-1} + \lambda_d + 1 \geq 3 + \sum_{j=2}^{i-1} 3 (2^{j-2}) = 3 + 3 (2^{i-2} - 1) = 3 (2^{i-2}).$$
Thus, $\lambda_1 \geq \lambda_2 + \dots + \lambda_d \geq \sum_{i=2}^{d-2} 3 (2^{i-2}) + 2 = 3 (2^{d-3}) - 1.$
Hence, the total number of vertices $n$, by considering the centre and lengths of each leg, satisfies
$$n={1+\lambda_1+\cdots + \lambda _d} \geq \sum_{j=0}^{d-3} 3 (2^j) - 1 + 3 = 3 (2^{d-2} - 1) + 2 = 3 (2^{d-2}) - 1 > 2^{d-1}$$
since $d \geq 3$ for a spider.
\end{proof}
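As a quick check of part (a), we give a small instance.
\begin{example}\label{ex:count_vertices}
The partition $(8,4,2,1)$ satisfies the conditions of Lemma~\ref{lem:count_vertices}(a) since $8\geq 4+2+1$, $4 > 2+1$ and $2>1$, and indeed the spider $S(8,4,2,1)$ has $d=4$ legs and $n = 16 > 8 = 2^{d-1}$ vertices.
\end{example}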
We now arrive at our most general result on the $e$-positivity of spiders.
\begin{theorem}\label{the:partial_e_thm}
Suppose $S=S(\lambda_1, \dots, \lambda_d)$ is an $n$-vertex spider with a vertex of degree $$d \geq \log_2 n + 1.$$ Then $S$ is missing a connected partition of some type $\mu$, and hence is not $e$-positive.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:count_vertices}(a), either $\lambda_1 < \lambda_2 + \dots + \lambda_d$ or $\lambda_i \leq \lambda_{i+1} + \dots + \lambda_d$ for some $2 \leq i < d$, since otherwise $n > 2^{d-1}$, which contradicts $d \geq \log_2 n + 1$.
For the first case, if $\lambda_1 < \lambda_2 + \dots + \lambda_d$, then Lemma~\ref{lem:short_legs} provides a missing connected partition. For the second case, if $\lambda_1 \geq \lambda_2 + \dots + \lambda_d$ but $\lambda_2 \leq \lambda_3 + \dots + \lambda_d$ and $\lambda _2\geq 3$, then Lemma~\ref{lem:quotient_construction} provides a missing connected partition.
If $\lambda _2 =2$, then since $d\geq3$, $\lambda _3$ exists and $\lambda _3 \leq 2$, so $\lambda _3 + \cdots + \lambda _d \geq \lambda _2$ gives $\lambda _2 + \cdots + \lambda _d \geq 4$ and $\lambda _1 \geq 4$. Hence, $n\geq 9$, so $q\geq 3$ and the conditions of Lemma~\ref{lem:quotient_construction} are satisfied, and it provides a missing connected partition. Meanwhile, if $\lambda _2=1$, then since $d\geq 3$, $\lambda _3$ exists and $\lambda _3 = 1$. Hence, $n\geq 5$, and $d\geq 4$ since $d \geq \log_2 n + 1$, so $\lambda _4$ exists and $\lambda _4 = 1$. Thus, by Lemma~\ref{lem:spider_matching}, the spider does not have a perfect or almost perfect matching and hence is missing a connected partition as well.
Otherwise, if the spider does not fall into the subcases of the second case above, then let $i \geq 3$ be the least index with $i < d$ for which $\lambda_i \leq \lambda_{i+1} + \dots + \lambda_d$; in particular $d\geq 4$. If $\lambda_i \geq 2$, Lemma~\ref{lem:quotient_construction_2} provides a missing connected partition. Otherwise, $\lambda_i = 1$. By Lemma~\ref{lem:count_vertices}(b), $i < d-1$, since if $i = d-1$ then $d < \log_2 n + 1$. Hence, the centre of the spider is attached to at least three leaves, and by Lemma~\ref{lem:spider_matching} the spider does not have a perfect or almost perfect matching, and hence is missing a connected partition.
\end{proof}
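For a small illustration of this bound, consider the following.
\begin{example}\label{ex:partial_e_thm}
The spider $S(2,1,1,1)$ has $n=6$ vertices and a vertex of degree $d = 4 \geq \log_2 6 + 1$, so it is not $e$-positive by Theorem~\ref{the:partial_e_thm}. In contrast, the spider $S(4,1,1)$ has $n=7$ vertices and maximum degree $d = 3 < \log_2 7 + 1$, so the theorem does not apply to it, even though it is not $e$-positive.
\end{example}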
We are now left with spiders $S(\lambda _1, \ldots , \lambda _d)$ with $(\lambda _1, \ldots , \lambda _d)$ a partition, $\lambda_1 \geq \lambda_2 + \dots + \lambda_d$, and either every remaining leg $\lambda_i$ satisfies $\lambda_i > \lambda_{i+1} + \dots + \lambda_d$, or $\lambda _{d-1}=\lambda _d = 1$ and all remaining $\lambda _i$ satisfy this inequality. In this case, a spider may not be $e$-positive but still have a connected partition of every type; the spider $S(6,4,1,1)$ is one such example.
Otherwise, we can get a missing partition for ``sufficiently large'' spiders by a method similar to Lemma~\ref{lem:quotient_construction_2}.
\begin{lemma}\label{lem:quotient_construction_3}
Suppose $S=S(\lambda_1, \dots, \lambda_d)$ is an $n$-vertex spider with $(\lambda_1, \dots, \lambda_d)$ a partition. Pick some $\lambda_i$ with $i \geq 2$ and let $n = q (\lambda_i + 1) + r$ {where $0\leq r <\lambda _i +1$}, $r = qd' + r'$ {where $0\leq r' <q$}, and $t = \lambda_{i+1} + \dots + \lambda_d$ {where $i<d$ and $t>1$}. If $q \geq \frac{\lambda_i + 1}{t - 1}$, then $S$ is missing a connected partition of type $$(\lambda_i + d' + 2)^{r'} (\lambda_i + d' + 1)^{q-r'}.$$
\end{lemma}
\begin{proof}
Suppose $S$ has a connected partition $C$ of the desired type. Consider the set $V_1\in C$ containing the leaf on a leg of length $\lambda _i$. Since every set {in $C$} must contain at least $\lambda_i + 1$ vertices, $V_1$ contains the centre, and hence also all legs of length less than or equal to $\lambda_i$ since all sets in the connected partition have size greater than $\lambda_i$. Hence, $|V_1| \geq 1 + \lambda_i + \dots + \lambda_d = 1 + \lambda_i + t$, but we claim that this is greater than $\lambda_i + d' + 2$. It suffices to show that $d' < t - 1$. This follows since,
$$d' = \frac{r-r'}{q} \leq \frac{\lambda_i}{q} \leq \frac{\lambda_i}{\lambda_i + 1} (t-1) < t-1$$by the assumption on the value of the quotient $q$. Hence, $1+\lambda _i+t>\lambda _i+d'+2$, which contradicts that $C$ is a connected partition of the desired type as $V_1$ does not contain $\lambda_i + d' + 2$ or $\lambda_i + d' + 1$ vertices.
\end{proof}
Hence, if $\lambda_{i+1}, \dots, \lambda_d$ are fixed, then for sufficiently large values of the sum $\lambda_1 + \dots + \lambda_i$, the spider $S(\lambda_1, \dots, \lambda_d)$ will be missing a connected partition of some type.
\begin{example}\label{ex:136411} Let $\lambda = (13, 6,4,1,1)$ and $i=2$, so that $\lambda _i = 6$ and $t=4+1+1=6$. Then $26 = 3(7)+5$ and $5=3(1)+2$, so $q=3$, $d'=1$ and $r'=2$. Since $q = 3\geq \frac{7}{5} = \frac{\lambda _i +1}{t-1}$, we have that $S(\lambda)$ is missing a connected partition of type
$$(6+1+2)^2(6+1+1)=(9,9,8).$$
\end{example}
To end this section, we collect together the preceding lemmas on missing connected partitions of various types and draw the following conclusion on $e$-positivity, which is immediate by Theorem~\ref{the:e_positivity_crit}.
\begin{proposition}\label{prop:all_e_positive_spiders} If a spider satisfies the criteria of Lemmas~\ref{lem:quotient_construction}, \ref{lem:quotient_construction_2} or~\ref{lem:quotient_construction_3}, then it is not $e$-positive.
\end{proposition}
\section{The $e$-positivity of trees and cut vertices}\label{sec:trees} We can now use our results from the previous section in conjunction with Lemmas~\ref{lem:spider_lem} and~\ref{lem:gen_spider_lem} to deduce criteria for the $e$-positivity of trees and graphs in general.
\begin{theorem}\label{the:gen_partial_e_thm} If $G$ is an $n$-vertex connected graph with a cut vertex whose deletion produces a graph with $d\geq 3$ connected components such that
$$d \geq \log_2 n + 1$$then $G$ is not $e$-positive.
In particular, if $T$ is an $n$-vertex tree with a vertex of degree $d\geq 3$ such that
$$d \geq \log_2 n + 1$$then $T$ is not $e$-positive.
\end{theorem}
\begin{proof} For the first part, by Lemma~\ref{lem:gen_spider_lem} and Theorem~\ref{the:partial_e_thm} every such $n$-vertex graph is missing a connected partition of some type, and hence is not $e$-positive by Theorem~\ref{the:e_positivity_crit}. For the second part we can either repeat the above argument but this time using Lemma~\ref{lem:spider_lem} instead of Lemma~\ref{lem:gen_spider_lem}, or we can note that if we delete a vertex of degree $d$ from a tree, then $d$ connected components remain.
\end{proof}
As a simple example, every tree with 1000 vertices that contains a vertex of degree 11 or more is not $e$-positive.
In fact, we can use every result from the previous section on spiders that involves a missing connected partition to obtain a result on trees, cut vertices and $e$-positivity. We illustrate this using Lemma~\ref{lem:short_legs}.
\begin{theorem}\label{the:gen_short_legs} If $G$ is an $n$-vertex connected graph with a cut vertex whose deletion produces a graph with connected components $C_1, \ldots , C_d$ such that $d\geq 3$ and $|V_{C_i}|< \lfloor \frac{n}{2}\rfloor$ for all $1\leq i \leq d$, then $G$ is not $e$-positive.
In particular, if $T$ is an $n$-vertex tree with a vertex of degree $d\geq3$ whose deletion produces subtrees $T_1, \ldots , T_d$ with $|V_{T_i}|< \lfloor \frac{n}{2}\rfloor$ for all $1\leq i \leq d$, then $T$ is not $e$-positive.
\end{theorem}
\begin{proof} For the first part, by Lemma~\ref{lem:gen_spider_lem} and Lemma~\ref{lem:short_legs} every such $n$-vertex graph is missing a connected partition of type $(n-m-1, m+1)$ where
$$m=\max \{|V_{C_1}|, \ldots, |V_{C_d}|\}$$and hence is not $e$-positive by Theorem~\ref{the:e_positivity_crit}. For the second part we can either repeat the above argument but this time using Lemma~\ref{lem:spider_lem} instead of Lemma~\ref{lem:gen_spider_lem}, or we can note that if we delete a vertex of degree $d$ from a tree, then $d$ connected components remain.
\end{proof}
As a more meaningful example, we will now classify when a windmill graph is $e$-positive.
\begin{example}\label{ex:gen_short_legs} Let $K_n$ be the \emph{complete graph} on $n$ vertices, namely the $n$-vertex graph in which every two vertices are adjacent. Let $W^d_n$ for $d\geq 1, n\geq 1$ be the \emph{windmill graph} in which $d$ copies of $K_n$ share one common vertex $c$. For example, $W^4_3$ is below.
\begin{center}
\begin{tikzpicture}[scale=0.6]
\filldraw [black] (0,0) circle (4pt);
\filldraw [black] (-2,1) circle (4pt);
\filldraw [black] (-2,-1) circle (4pt);
\filldraw [black] (2,1) circle (4pt);
\filldraw [black] (2,-1) circle (4pt);
\filldraw [black] (1,2) circle (4pt);
\filldraw [black] (-1,2) circle (4pt);
\filldraw [black] (1,-2) circle (4pt);
\filldraw [black] (-1,-2) circle (4pt);
\draw (0,0)--(-2,1)--(-2,-1)--(0,0);
\draw (0,0)--(2,1)--(2,-1)--(0,0);
\draw (0,0)--(1,2)--(-1,2)--(0,0);
\draw (0,0)--(1,-2)--(-1,-2)--(0,0);
\end{tikzpicture}
\end{center}
Note that $W^d_n$ has $d(n-1)+1$ vertices. Also note that {for $n>1$} the deletion of $c$ produces $d$ connected components, more precisely $d$ copies of $K_{n-1}$ each with $n-1$ vertices. Hence, for $d\geq 3$ since
$$(n-1)<\left\lfloor \frac{d(n-1)+1}{2}\right\rfloor$$every $W^d_n$ for $d\geq 3, {n>1}$ is not $e$-positive by Theorem~\ref{the:gen_short_legs}. In contrast, by say \cite[Theorem 8]{ChovW} and \cite[Corollary 3.6]{Stan95} respectively, every $W^1_n=K_n$ for {$n>1$} and $W^2_n$ for {$n>1$} is $e$-positive. {Lastly, note that $W^d_1$ for $d\geq 1$ is $K_1$ and so, by say \cite[Theorem 8]{ChovW}, is $e$-positive.}
\end{example}
\section{The Schur-positivity of bipartite graphs}\label{sec:bipartite} While $e$-positivity implies Schur-positivity, it is possible for a graph to be Schur-positive but not $e$-positive, for example the spider $S(4,1,1)$ from Examples~\ref{ex:spider} and~\ref{ex:S411}.
\begin{center}
\begin{tikzpicture}[scale=0.8]
\filldraw [black] (0,0) circle (4pt);
\filldraw [black] (1,1) circle (4pt);
\filldraw [black] (0,2) circle (4pt);
\filldraw [black] (2.5,1) circle (4pt);
\filldraw [black] (4,1) circle (4pt);
\filldraw [black] (5.5,1) circle (4pt);
\filldraw [black] (7,1) circle (4pt);
\draw[thick] (0,0)--(1,1);
\draw[thick] (0,2)--(1,1);
\draw[thick] (1,1)--(7,1);
\end{tikzpicture}
\end{center}
Again we can determine whether trees or certain graphs are not Schur-positive using a vertex degree criterion, but this time we will need the dominance order on partitions, bipartite graphs, and stable partitions.
For the first of these, recall that given two partitions of $N$, $\lambda = (\lambda _1, \ldots , \lambda _{\ell(\lambda)})$ and $\mu = (\mu _1, \ldots , \mu _{\ell(\mu)})$, we say that $\lambda$ \emph{dominates} $\mu$, denoted by $\lambda \geq _{dom} \mu$, if
$$\lambda _1 + \cdots + \lambda _i \geq \mu _1 + \cdots + \mu _i $$for all $1\leq i \leq \min\{\ell(\lambda), \ell(\mu)\}$. For the second of these, recall that a graph $G$ is \emph{bipartite} if there exists a proper colouring of $G$ with 2 colours. For the third of these, a \emph{stable} partition of an $n$-vertex graph $G$ is a partitioning of its vertex set $V$ into sets $\{V_1, \dots, V_k\}$ such that every set $V_i$ in the partitioning is an independent set, namely no edge $e\in E_G$ exists between any $v_1, v_2 \in V_i$.
The \emph{type} of a stable partition is the partition of $n$ formed by sorting the sizes of each set $V_i$ in decreasing order. We say $G$ \emph{has a stable partition} of type $\lambda$ if and only if there exists a stable partition of $G$ of type $\lambda$, and is \emph{missing a stable partition} of type $\lambda$ otherwise.
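To illustrate the dominance order, we give a small instance.
\begin{example}\label{ex:dominance}
We have $(4,2) \geq _{dom} (3,3)$ since $4 \geq 3$ and $4+2 \geq 3+3$, while $(4,1,1)$ and $(3,3)$ are incomparable in the dominance order since $4 > 3$ but $4+1 < 3+3$.
\end{example}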
\begin{example}\label{ex:stablepartition} Consider again the $n$-vertex star $S_n$ for $n\geq 4$. The graph $S_n$ has a stable partition of type $(n-1, 1)$ but is missing a stable partition of type $(n-2,2)$.
\end{example}
Stable partitions are intimately related to the Schur-positivity of a graph via the following result.
\begin{theorem}\cite[Proposition 1.5]{Stanley2} \label{thm:s_positivity_crit}
Suppose an $n$-vertex graph $G$ has a stable partition of type $\lambda \vdash n$. If $G$ is Schur-positive, then $G$ has a stable partition of type $\mu$ for every partition $\mu \vdash n$ with $\mu \leq _{dom}\lambda$.
\end{theorem}
This in turn yields a criterion for when a graph is not Schur-positive that is dependent on vertex degrees.
\begin{theorem}\label{the:bipartite_s_pos}
If $G$ is an $n$-vertex bipartite graph with a vertex of degree greater than $\lceil \frac{n}{2} \rceil$, then $G$ is not Schur-positive.
In particular, if $T$ is an $n$-vertex tree with a vertex of degree greater than $\lceil \frac{n}{2} \rceil$, then $T$ is not Schur-positive.
\end{theorem}
\begin{proof}
For the first part, let $G$ be an $n$-vertex bipartite graph with a vertex $v$ of degree $d > \lceil \frac{n}{2} \rceil$. By assumption that $G$ is bipartite, there is a proper colouring of $G$ with two colours. Call these colours red and black, and note that $(V_1, V_2)$ where $V_1$ is the set of the vertices coloured red and $V_2$ is the set of vertices coloured black is a stable partition of $V_G$. The type of this stable partition will be a partition $(m, n - m)$ where $m > \lceil \frac{n}{2} \rceil$, since the $d > \lceil \frac{n}{2} \rceil$ vertices adjacent to $v$ must be assigned a different colour from the colour of $v$ by assumption.
We claim now that $G$ does not have a stable partition of type $(\lceil \frac{n}{2} \rceil, \lfloor \frac{n}{2} \rfloor)$. Suppose $G$ did have such a partitioning of its vertices into $(V_1, V_2)$ with $|V_1| = \lceil \frac{n}{2} \rceil$ and $|V_2| = \lfloor \frac{n}{2} \rfloor$. If $v \in V_1$, then its neighbours must be in $V_2$, which is impossible since $v$ has degree $d > \lceil \frac{n}{2} \rceil \geq \lfloor \frac{n}{2} \rfloor$. Similarly, if $v \in V_2$, then its neighbours must be in $V_1$, which is impossible since $v$ has degree $d > \lceil \frac{n}{2} \rceil$.
Since $(\lceil \frac{n}{2} \rceil, \lfloor \frac{n}{2} \rfloor) < _{dom}(m, n - m)$, and $G$ has a stable partition of type $(m, n-m)$, but $G$ is missing a stable partition of type $(\lceil \frac{n}{2} \rceil, \lfloor \frac{n}{2} \rfloor)$, then by Theorem~\ref{thm:s_positivity_crit}, $G$ is not Schur-positive.
For the second part, note that all trees are bipartite.
\end{proof}
\begin{example}\label{ex:bipartite_s_pos} Since the star $S_n$ for $n\geq 4$ is a tree and has one vertex of degree $(n-1)>\lceil \frac{n}{2} \rceil$, it is not Schur-positive, and hence again not $e$-positive, since $e$-positivity implies Schur-positivity.
\end{example}
\section{Further avenues}\label{sec:further} A natural avenue to pursue is to tighten the bound in Theorem~\ref{the:partial_e_thm}, and to this end we conjecture the following, which has been checked for all trees with up to 12 vertices.
\begin{conjecture}\label{con:4vertices} If $T$ is an $n$-vertex tree with a vertex of degree $d\geq 4$, then $T$ is not $e$-positive.
\end{conjecture}
Towards this one can use our techniques to check that certain families of spiders are missing a particular type of partition. However, so far these results have been local to the family of spiders being studied, and there does not seem to exist a natural global type of partition that is missing. Moreover, as noted just before Lemma~\ref{lem:quotient_construction_3}, there exist spiders that may not be $e$-positive but still have a connected partition of every type, such as $S(6,4,1,1)$.
In this case we can prove that the family of spiders $S(r,s,1,1)$ is not $e$-positive by first noting that if $r$ or $s$ is odd then $S(r,s,1,1)$ is not $e$-positive by Corollary~\ref{cor:matching_cor}. If $r$ and $s$ are even then we can prove this by using the triple-deletion rule of Orellana-Scott \cite[Theorem 3.1]{Orellana}, generalized to $k$-deletion by the first and third authors \cite[Proposition 5]{lollipop}, to express $X_{S(r,s,1,1)}$ as a linear combination of chromatic symmetric functions of unions of paths. From here, by using the formula of Wolfe \cite[Theorem 3.2]{Wolfe} for expressing $X_{P_n}$ as a linear combination of elementary symmetric functions, we can show that if $r=2k$ and $s=2\ell$ then
$$[e_{(3,2^{k+\ell})}]X_{S(r,s,1,1)} = -2(r+s)+7,$$ which is negative when $r\geq 2$ and $s\geq 2$. This technique can also be used to show that $S(r, 1, 1)$ for $r\geq 3$ has
$$[e_{(r-1,2^2)}]X_{S(r,1,1)} = -(r-1).$$ For example, returning to Example~\ref{ex:S411}, note the term $-3e_{(3,2^2)}$ in $X_{S(4,1,1)}$. Direct calculation yields that $S(1,1,1)$ is not $e$-positive but $S(2,1,1)$ is $e$-positive, and hence deducing that $S(r, 1, 1)$ for $r\geq 3$ is not $e$-positive supports Stanley's statement \cite[p 187]{Stan95} that $S(2,1,1)$ is $e$-positive ``by accident''. {Meanwhile, regarding Schur-positivity, we believe that the bound in Theorem~\ref{the:bipartite_s_pos} cannot be improved. This is implied by the following conjecture, which has been checked for all trees with up to 19 vertices, and with which we end.}
{\begin{conjecture}\label{con:Spositivetrees} For all $n\geq 2$, there exists an $n$-vertex tree with a vertex of degree {$\lfloor \frac{n}{2} \rfloor$} that is Schur-positive.
\end{conjecture}}
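The small cases cited above can be spot-checked by brute force. The sketch below is ours, not the paper's method (the paper uses deletion rules and Wolfe's formula): it computes $X_G$ from proper colourings, expands it in the elementary basis by matching monomial coefficients, and confirms that the claw $S(1,1,1)$ is not $e$-positive while $P_4$ and $S(2,1,1)$ are. The graph encodings and helper names are our own.

```python
from itertools import product, combinations
from fractions import Fraction

def partitions(n, maxpart=None):
    """Yield the partitions of n as tuples with decreasing parts."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def monomial_counts(n, edges):
    """Coefficient of each monomial x^expo in X_G, via proper colourings
    of the n-vertex graph with colours {0, ..., n-1}."""
    counts = {}
    for c in product(range(n), repeat=n):
        if all(c[u] != c[v] for u, v in edges):
            expo = [0] * n
            for colour in c:
                expo[colour] += 1
            t = tuple(expo)
            counts[t] = counts.get(t, 0) + 1
    return counts

def e_poly(lam, n):
    """e_lambda in n variables, as a dict exponent-tuple -> coefficient."""
    poly = {(0,) * n: 1}
    for k in lam:  # multiply by the elementary symmetric polynomial e_k
        new = {}
        for expo, coeff in poly.items():
            for subset in combinations(range(n), k):
                e2 = list(expo)
                for i in subset:
                    e2[i] += 1
                t = tuple(e2)
                new[t] = new.get(t, 0) + coeff
        poly = new
    return poly

def e_coefficients(n, edges):
    """Expand X_G in the elementary basis: solve for the coefficients by
    matching one representative monomial per monomial symmetric function."""
    parts = list(partitions(n))
    reps = [mu + (0,) * (n - len(mu)) for mu in parts]
    counts = monomial_counts(n, edges)
    epolys = [e_poly(lam, n) for lam in parts]
    A = [[Fraction(ep.get(rep, 0)) for ep in epolys] for rep in reps]
    b = [Fraction(counts.get(rep, 0)) for rep in reps]
    m = len(parts)
    for col in range(m):  # exact Gauss-Jordan elimination over Fractions
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        d = A[col][col]
        A[col] = [a / d for a in A[col]]
        b[col] /= d
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return dict(zip(parts, b))

# Claw S(1,1,1): centre 0 with three leaves -- famously not e-positive.
claw = e_coefficients(4, [(0, 1), (0, 2), (0, 3)])
# Path P_4 (e-positive) and S(2,1,1) (e-positive "by accident").
p4 = e_coefficients(4, [(0, 1), (1, 2), (2, 3)])
s211 = e_coefficients(5, [(0, 1), (1, 2), (0, 3), (0, 4)])
```

Setting all variables equal to $1$ over $k$ colours recovers the chromatic polynomial, which gives an independent consistency check (for the claw, $\chi(4)=4\cdot 3^3=108$).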
\section*{Acknowledgements}\label{sec:acknow} The authors would like to thank John Shareshian for fruitful conversations, {and the referee for drawing our attention to the connection between claw-free graphs and perfect matchings that then gives Corollary~\ref{cor:clawsandmatching}.}
\bibliographystyle{plain}
% arXiv:1901.02468 -- ``Schur and $e$-positivity of trees and cut vertices'' (math.CO)
% https://arxiv.org/abs/2007.07983
\title{Optimal angle bounds for quadrilateral meshes}
\begin{abstract}
We show that any simple planar $n$-gon can be meshed in linear time by $O(n)$ quadrilaterals with all new angles bounded between $60$ and $120$ degrees.
\end{abstract}
\section{Introduction} \label{intro}
We answer a question of Bern and Eppstein by proving:
\begin{thm} \label{main}
Any
simply connected planar domain $\Omega$ whose boundary is a simple $n$-gon
has a quadrilateral mesh
with $O(n)$ pieces so that all angles are between
$60^\circ $ and $120^\circ$, except that original angles of the
polygon with angle $< 60^\circ$ remain. The mesh can be constructed in
time $O(n)$.
\end{thm}
The theorem is sharp in the sense that no shorter interval of
angles suffices for all polygons: using Euler's formula,
Bern and Eppstein proved
(Theorem 5 of \cite{Bern-Eppstein-QuadMesh}) that any
quadrilateral mesh of a polygon whose interior angles are all
$\geq 120^\circ$ must itself contain an angle $\geq 120^\circ$,
so the upper bound $120^\circ$ cannot be lowered.
boundary angle $\theta > 120^\circ$ must be subdivided by the mesh
in Theorem \ref{main}
and hence there must be a new angle $\leq \theta/2$ in the mesh.
Thus taking polygons with an angle $ \theta \searrow 120^\circ$
shows $60^\circ $ is the optimal lower bound.
It is perhaps best to think of Theorem \ref{main} as an
existence result. Although we give a linear time algorithm
for finding the mesh, the constant is large and the construction
depends on other linear algorithms, such as Chazelle's linear time
triangulation of polygons, that have not been implemented (as far as
I know).
The three main tools in the proof of Theorem \ref{main} are
conformal maps, thick/thin decompositions of
polygons and hyperbolic tesselations.
We will decompose $\Omega$ into $O(n)$ ``thick'' and ``thin'' parts.
The thin parts have simple shapes and we can easily construct
an explicit mesh in each of them. The thick parts are more complicated,
but we can use a conformal map to transfer a mesh from
the unit disk, $\Bbb D$, to the thick parts of $\Omega$
with small distortion.
The mesh on $\Bbb D $ is produced using a finite piece
of an infinite tesselation
of $\Bbb D$ by hyperbolic pentagons.
I would like to thank Marshall Bern for asking me the question that led
to Theorem \ref{main} and pointing out his paper \cite{Bern-Eppstein-QuadMesh}
with David Eppstein.
Also thanks to Joe Mitchell for many helpful conversations on
computational geometry. This paper is part of a series
(\cite{Bishop-Bowen},
\cite{Bishop-BrenConj},
\cite{Bishop-ExpSullivan},
\cite{Bishop-time})
that exploits the close connection between
the medial axis of a planar domain, the geometry of its hyperbolic
convex hull in ${\Bbb H}^3_+$ and the conformal map of the domain to the disk.
This was originally motivated by a result of Dennis Sullivan \cite{Sullivan81}
about boundaries of hyperbolic 3-manifolds and its generalization by
David Epstein (only one ``p'' this time) and Al Marden \cite{EM87}.
Many thanks to those authors for the inspiration and insights they have provided.
Also many thanks to the referees for a careful reading of the original
manuscript. Their thoughtful comments and suggestions greatly improved
the paper.
One of them pointed out reference \cite{Gerver} where
the Riemann mapping theorem
is used to prove that any polygon with all angles $\geq \pi/5$ can be
dissected into triangles with all angles $\leq 2 \pi /5$.
\section{M{\"o}bius transformations and hyperbolic geometry} \label{mobius}
A linear fractional (or M{\"o}bius) transformation is
a map of the form $z \to (a z + b )/ ( c z + d)$. This is a 1-1, onto,
holomorphic map of the Riemann sphere $S^2 =
\Bbb C \cup \{ \infty \}$ to
itself. Such maps form a group under composition and
are well known to map circles to circles (if we
count straight lines as circles that pass through $\infty$).
M{\"o}bius transforms are conformal, so they preserve angles.
Given two sets of distinct points $\{ z_1, z_2, z_3\}$ and
$\{ w_1, w_2, w_3 \}$ there is a unique M{\"o}bius transformation that
sends $w_k \to z_k$ for $k=1,2,3$.
A M{\"o}bius transformation maps the unit disk, $\Bbb D$, to itself
iff it is of the form $g(z) =\lambda (z-a)/(1-\bar az)$ for some $a \in \Bbb D,
|\lambda|=1$.
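As a quick numerical illustration (ours, not part of the paper), one can check that maps of this form fix the unit circle and send $a$ to $0$; the identity $|1-\bar a z| = |z - a|$ on $|z|=1$ is what makes this work.

```python
import cmath
import math
import random

random.seed(0)
a = 0.4 + 0.3j                 # an arbitrary point of the unit disk
lam = cmath.exp(0.7j)          # an arbitrary unimodular constant
g = lambda z: lam * (z - a) / (1 - a.conjugate() * z)

assert abs(g(a)) < 1e-15       # g sends a to 0
for _ in range(100):           # g maps the unit circle to itself
    z = cmath.exp(1j * random.uniform(0, 2 * math.pi))
    assert abs(abs(g(z)) - 1) < 1e-12
```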
The hyperbolic metric on the unit disk is given
by
$$ \rho(v,w) = \inf \int_\gamma \frac {2 |dz|}{1-|z|^2},$$
where the infimum is over all rectifiable arcs connecting
$v$ and $w$ in $\Bbb D$. This is a metric of constant negative
curvature. In some sources, the ``2'' is omitted;
we have chosen this version to be consistent with
the trigonometric formulas found in \cite{Beardon}.
Geodesics for this metric are circular arcs that are perpendicular
to the boundary (including diameters).
Hyperbolic area is given by $4 dxdy /(1-|z|^2)^2$.
The area of a triangle with geodesic edges is $\pi-\alpha -
\beta-\gamma$, where $\alpha, \beta, \gamma$ are the interior
angles. Thus the area of any hyperbolic triangle is $\leq \pi$.
The hyperbolic metric
is well known to be invariant
under M{\"o}bius transformations of the disk, so it is enough
to compute it when one point has been normalized to be $0$ and
the other rotated to the positive axis. If $0< x <1$ and $\rho =
\rho(0,x)$, then
$$ \rho= \log \frac {1+x}{1-x}, \qquad x = \frac {e^\rho -1}{e^\rho +1}.$$
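A short numerical check of this formula (our aside, not the paper's): integrating the density $2/(1-t^2)$ along the radius $[0,x]$, which is a geodesic, reproduces the closed form, and the second identity inverts it.

```python
import math

def rho0(x, steps=200000):
    """Hyperbolic distance from 0 to x in (0,1): midpoint-rule
    integral of the density 2/(1 - t^2) along the radius."""
    h = x / steps
    return sum(2.0 / (1.0 - ((i + 0.5) * h) ** 2) * h for i in range(steps))

x = 0.75
closed = math.log((1 + x) / (1 - x))                       # rho(0, x)
assert abs(rho0(x) - closed) < 1e-8
assert abs((math.exp(closed) - 1) / (math.exp(closed) + 1) - x) < 1e-12
```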
It is also convenient to consider the isometric model of the
upper half-space, $\Bbb H$. In this case the hyperbolic metric is given
by
$$ \rho(v,w) = \inf \int_\gamma \frac { |dz|}{y},$$
where $ z = x + iy$, but geodesics are still circular arcs
perpendicular to the boundary.
If $E \subset \Bbb T = \partial \Bbb D$ is closed then
$\Bbb T \setminus E = \cup I_j$ is a union of open intervals.
The hyperbolic convex hull of $E$, denoted ${\rm{CH}(E)}$,
is the region in $\Bbb D$ bounded by $E$ and the collection of
circular arcs $\{ \gamma_j\}$, where $\gamma_j$ is the hyperbolic
geodesic with the same endpoints as $I_j$. See Figure \ref{W}.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.5in]{W.eps}
$\hphantom{xxxxx}$
\includegraphics[height=1.5in]{W3.eps}
$\hphantom{xxxxx}$
\includegraphics[height=1.5in]{W2.eps}
}
\caption{ \label{W}
Examples of hyperbolic convex hulls. The one on the left is
uniformly perfect, the center is thick with a large $\eta$, but not
uniformly perfect, and the right is only thick with a small $\eta$
(there are two geodesics that almost touch, but do not share an endpoint).
}
\end{figure}
A closed set $E \subset \Bbb T$ is called $\eta$-thick if
any two components of $\partial {\rm{CH}}(E) \cap \Bbb D$ that don't share
an endpoint are at least hyperbolic distance $\eta$ apart.
If $E$ is $\eta$-thick, then any point in the hull is contained in a
hyperbolic ball of radius $\eta$ that is also contained in the
convex hull. The thickness condition can be written in other
ways. For example, $E$ is $\eta$-thick iff non-adjacent
complementary intervals have extremal distance at least $\delta>0$
(with $\delta^{-1} \simeq \frac 2\pi \log \frac 1 \eta$ for small
$\delta, \eta$) \cite{Bishop-time}.
A closed set $E$ is called uniformly perfect
if any two components of $\partial {\rm{CH}}(E) \cap \Bbb D$ are at least
hyperbolic distance $\eta$ apart. This stronger condition arises in
many places in function theory, but will not be used in this paper.
\newpage
\section{A subdivision of the hyperbolic disk} \label{subdivide disk}
To prove Theorem \ref{main} we will divide the interior of
$\Omega$ into pieces called ``thick'' and ``thin'' (see
\cite{Bishop-time} and Section \ref{thick and thin}).
The thin pieces will be meshed
explicitly, but the mesh on the thick pieces will
be transferred from
a quadrilateral mesh of a domain in the unit disk via a conformal map.
Most of our time will be spent constructing the
mesh on the disk.
In this section we describe the subdomain and how to subdivide it into
circular arc
triangles, quadrilaterals and pentagons. In the following
sections we
show how to construct quadrilateral meshes for the subregions that
are consistent along shared boundaries.
A compact hyperbolic polygon is a bounded region in hyperbolic space bounded by
a finite number of geodesic segments. The polygon is ``right'' if every
interior angle is
$90^\circ$. There are no compact hyperbolic right triangles or
quadrilaterals, but there are hyperbolic right $n$-gons
for every $n \geq 5$ and any such can be extended to a tesselation
${ \mathcal T}_n$ of hyperbolic space by repeated reflections.
See Figure \ref{n-gons} for the case of pentagons (the only
case we use in this paper).
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.5in]{5gon.ps}
$\hphantom{xxxx}$
\includegraphics[height=1.5in]{penta-carl.ps}
}
\caption{ \label{n-gons}
A hyperbolic right pentagon (left) and its neighbors in the
tesselation ${\mathcal T}_5$.
}
\end{figure}
Let $L= \cosh^{-1}(1 + 2 \cos(\frac {2 \pi }{5})) \approx 1.06128$
denote the side length of a hyperbolic right
pentagon. We don't need the specific value, but it can
be computed using $L=c$, $\gamma= 2\pi/5$, $\alpha= \beta = \pi/4$
in the second hyperbolic law of cosines
(see \cite{Beardon}):
$$ \cosh c = \frac {\cos \alpha \cos \beta + \cos \gamma}
{\sin \alpha \sin \beta}.$$
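This computation is easy to reproduce numerically (our sketch, not part of the paper): with $\alpha = \beta = \pi/4$ and $\gamma = 2\pi/5$ the right-hand side simplifies, since $\cos^2(\pi/4) = \sin^2(\pi/4) = 1/2$, to $1 + 2\cos(2\pi/5)$.

```python
import math

alpha = beta = math.pi / 4      # as in the law of cosines above
gamma = 2 * math.pi / 5
cosh_c = (math.cos(alpha) * math.cos(beta) + math.cos(gamma)) / (
    math.sin(alpha) * math.sin(beta))
L = math.acosh(cosh_c)          # side length of a hyperbolic right pentagon

assert abs(cosh_c - (1 + 2 * math.cos(gamma))) < 1e-12
assert abs(L - 1.06128) < 1e-4
```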
In the tesselation ${\mathcal T}_5$, each edge
of a pentagon lies on some hyperbolic geodesic. Each of these
geodesics divides $\Bbb T$ into two arcs and we let
${\mathcal I}_5$ denote the collection of all such arcs.
\begin{lemma} \label{cover 1}
There is a $c < \infty$ so that given any arc $J \subset \Bbb T$
there are $I_1, I_2 \in {\mathcal I}_5$ with $I_1 \subset J \subset
I_2$ and $|I_2|/|I_1| \leq c$ ($|\cdot|$ denotes arclength).
\end{lemma}
\begin{proof}
Let $\gamma$ be the hyperbolic geodesic with the same endpoints
as $J$. The top point of $\gamma$ (i.e., the point closest
to $0$) is contained in some pentagon of the tesselation. By taking
$c$ larger, we can assume $J$ is as short as we wish, so we
may assume this is not the central pentagon. Let
$a$ be the hyperbolic center of this pentagon and let
$g(z) = \lambda (z-a)/(1- \bar a z)$ where $|\lambda|=1$
is chosen so that $g$ maps the pentagon to the central pentagon.
This is a M{\"o}bius
transformation that sends $a$ to $0$, maps the diameter $D$
through $a$ into $\lambda D$ and maps $\gamma$ to a geodesic $\gamma'$ that
intersects the central pentagon of the tesselation. Moreover,
since $g$ preserves angles, the angle between $\gamma'$ and
$ D'= \lambda D$ is the same as between $\gamma$ and $D$, and this is
bounded away from $0$, since the intersection point is within
distance $L$ of the top point of $\gamma$.
Thus $\gamma'$ also makes a large angle with $ D'$ and so is
some positive distance $r$ from the point $b= -\lambda a=g(0)$.
The inverse of $g$
is $ f(z) = \bar \lambda (z-b)/(1-\bar b z)$ and the
derivative of this is $(1-|b|^2)/(1-\bar b z)^2$. From this
we see that for $|z| =1$,
$$ \frac {1-|b|}{|z-b|^2} \leq
| f'(z) | \leq \frac {2(1-|b|)}{|z-b|^2},$$
so that $ | f'(z) | \simeq 2(1-|b|) $ with a constant
that depends only on $|z-b|$. Thus sets outside a ball
around $b$ will be compressed similar amounts by $f$.
Choose geodesics $\gamma_1, \gamma_2$ from the tesselation edges
on either side of $\gamma'$ so that $\gamma_1$ separates $b$
from $\gamma'$ and has a uniformly bounded distance $r$ from
$b$ (we can easily do this if $1-|b| = 1-|z| \simeq |J|$ is small
enough). Apply $f$ to $\gamma_1, \gamma_2$ and we get two
geodesics of comparable Euclidean size whose base intervals
are the desired $I_1, I_2$.
\end{proof}
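The two-sided derivative bound in this proof can be sanity-checked numerically (a sketch of ours): on $|z|=1$ one has $|1-\bar b z| = |z-b|$, so $|f'(z)| = (1-|b|^2)/|z-b|^2$, which lies between the stated bounds because $1-|b| \leq 1-|b|^2 \leq 2(1-|b|)$.

```python
import cmath
import math
import random

random.seed(1)
for _ in range(200):
    # random b in the disk and z on the unit circle
    b = random.uniform(0, 0.999) * cmath.exp(1j * random.uniform(0, 2 * math.pi))
    z = cmath.exp(1j * random.uniform(0, 2 * math.pi))
    df = (1 - abs(b) ** 2) / abs(1 - b.conjugate() * z) ** 2   # |f'(z)|
    lo = (1 - abs(b)) / abs(z - b) ** 2
    hi = 2 * (1 - abs(b)) / abs(z - b) ** 2
    assert lo <= df * (1 + 1e-12) and df <= hi * (1 + 1e-12)
```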
A Carleson triangle in $\Bbb D$ is a region bounded by two geodesic
rays that have a common endpoint where they meet with
interior angle $90^\circ$. Any two such are M{\"o}bius
equivalent. A Carleson quadrilateral is bounded by one
finite length hyperbolic segment and two geodesic rays,
again with both interior angles equal $90^\circ$. See
Figure \ref{HCQ}. It is determined up to isometry by the
hyperbolic length of its finite length edge. In this paper
all of our Carleson quadrilaterals will have side length $L$, where
$L$ is the side length of a right pentagon, as above.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.0in]{HCQ+T.eps}
}
\caption{ \label{HCQ}
A Carleson quadrilateral and triangle.
}
\end{figure}
We will prove the following:
\begin{lemma} \label{build intervals}
There is a $c < \infty$ so that the following holds.
Suppose we are given $A>1$ and a finite collection
intervals $\{ I_j\}_1^N$ on
the unit circle so that the expanded intervals $\{ A I_j\}$ are
disjoint (these are the concentric intervals that are $A$ times longer)
and each has length $< \pi$. Let $E = \cup_j I_j$.
We can find intervals $\{ J_j\}$ so that
\begin{enumerate}
\item $ \sqrt{A} I_j \subset
J_j \subset c \sqrt{A} I_j$, $j=1, \dots, N$.
\item Let $F = \cup_j J_j$ and let $W \subset \Bbb D$ be the hyperbolic
convex hull of $\Bbb T \setminus F$. Then $W$
has a mesh $\{ W_k\}$ consisting of
right hyperbolic pentagons, Carleson quadrilaterals and
Carleson triangles. A pentagon shares an edge only with other pentagons
or the top of a quadrilateral, a quadrilateral shares a top edge
only with pentagons and side edges
with triangles and other quadrilaterals,
and a triangle shares edges only with quadrilaterals.
\item Each component of $\partial W \cap \Bbb D$
is an infinite geodesic that is the union of side edges from
two Carleson quadrilaterals and edges from three pentagons.
\item Every pentagon used in the mesh is a uniformly bounded
hyperbolic distance from the hyperbolic convex hull of $E$.
\item Every region $W_k$ in the mesh has diameter bounded by
$O({\rm{dist}}(W_k, E))$ (Euclidean distances).
\end{enumerate}
\end{lemma}
\begin{proof}
For each interval $ I_j$ given in the lemma, choose
$ J_j \in {\mathcal I}_5 $ to be the minimal interval
containing $\sqrt{A} I_j$. Then (1) clearly holds by
Lemma \ref{cover 1}.
Let $\gamma_j$ be the geodesic with the same endpoints
as $J_j$ and let $P_0$ be a pentagon in ${\mathcal T}_5$
that is above $\gamma_j$ (i.e.,
whose interior lies in the component of $\Bbb D \setminus
\gamma_j$ containing $0$) and whose boundary contains the
``top'' of $\gamma_j$ (the point closest to $0$).
Let $P_1,P_2$ be the elements of ${\mathcal T}_5$ that are
adjacent to $P_0$ and also above $\gamma_j$. Then the
part of $\gamma_j$ covered by the boundaries of these
three pentagons contains an interval of hyperbolic length
$2L$ centered at the top point.
Let $\gamma_j^1 $ be the geodesic containing the side of
$P_1$ that has one endpoint on $\gamma_j$ and is
not on $\partial P_0$. Let $J_j^1 \in {\mathcal I}_5$
be the base interval of $\gamma_j^1$. Let
$J_j^2 \in {\mathcal I}_5$ be the corresponding interval
for $P_2$ and let $J_j' = J_j^1 \cup J_j \cup J_j^2$.
See Figure \ref{Jinterval}.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.5in]{Jinterval.eps}
}
\caption{ \label{Jinterval}
On the left are $J, J_1, J_2$. The shaded region is a
union of the pentagons; the white is a union of
quadrilaterals and triangles.
}
\end{figure}
Let $G= \cup_j (J_j^1 \cup J_j \cup J_j^2)$
and let ${\mathcal K}$ be the collection of
intervals in ${\mathcal I}_5$ that are compactly
contained in $\Bbb T \setminus F$, contain a
point of $\Bbb T \setminus G$ and
are maximal in the sense of containment with respect to these
properties.
These clearly cover all of $\Bbb T \setminus G$.
Now add the intervals $J_j, J_j^1, J_j^2$ to get
a cover of the whole circle.
Any open finite cover of an interval has
a subcover with overlaps of at most $2$ (if a point
is in three intervals we can keep the ones with leftmost left
endpoint and rightmost right endpoint and throw away the third;
repeat until every point is in at most two intervals).
For such a subcover, we mesh $W$ with pentagons above the
corresponding geodesics and by Carleson quadrilaterals and
triangles below. See Figure \ref{W4}.
Conditions (2) and (3) are clear from construction.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.75in]{W5.eps}
}
\caption{ \label{W4}
An example of meshing a convex hull $W$ with pentagons, quadrilaterals and
triangles. This example is not to scale,
since the white regions
should be much smaller than their distances apart.
}
\end{figure}
If $x \in \Bbb T \setminus F$ and $d = {\rm{dist}} (x, F)$ then
apply Lemma \ref{cover 1} to an interval of length
$ \frac 12 d/c$ centered at $x$. We obtain an element of ${\mathcal I}_5$
containing $x$, missing $F$ and of length $\geq \frac 12 d/c $.
Thus the maximal interval of ${\mathcal K}$ containing $x$ has at least
this length. This implies (4).
Every right pentagon $P$ has Euclidean
diameter bounded by $O({\rm{dist}}(P, \Bbb T))= O({\rm{dist}}(P,E))$.
Every Carleson quadrilateral $R$ has a top edge along a geodesic $\gamma$
with endpoints $\{ a,b \}$ and ${\rm{diam}}(R) \simeq {\rm{dist}}(R, \{ a,b\})$.
Since $\gamma$ misses the hyperbolic
convex hull of $E$, the latter is $O({\rm{dist}}(R, E))$.
Every Carleson triangle is adjacent to two Carleson quadrilaterals
of comparable Euclidean size that separate it from $E$, so the
estimate also holds for these triangles. Thus (5) holds.
\end{proof}
\begin{lemma} \label{thick collections}
If the collection $\{ I_j\}_1^n$ satisfies the
conditions of Lemma \ref{build intervals} and if, in addition, the
set $E=\cup_j I_j$ is an $\eta$-thick set,
then the mesh constructed in Lemma \ref{build intervals} has $O(n)$
elements, with a constant that depends only on $\eta$.
\end{lemma}
\begin{proof}
Choose a disjoint collection of $\eta$-balls in
$ S= {\rm{CH}}(E) \cap W$ and note that there are $O(n)$ such
balls since $S$ has hyperbolic area $O(n)$ (it is a convex
hyperbolic polygon with $O(n)$ sides, hence has a triangulation into
$O(n)$ hyperbolic triangles, and every hyperbolic triangle has
hyperbolic area $\leq \pi$).
Every pentagon used in the proof of Lemma \ref{build intervals}
is within a bounded hyperbolic distance $D$
of one of the chosen $\eta$-balls, so only $O(1)$ pentagons can
be associated to any one ball (they are disjoint, have a fixed
area and all lie in a ball of fixed radius, hence fixed area).
Thus the total number of
pentagons used is $O(n)$. Every Carleson quadrilateral shares an edge
with a pentagon and every Carleson triangle shares an edge with
a quadrilateral, so the number of these regions is also
$O(n)$.
\end{proof}
\section{Meshing the pentagons} \label{pent mesh sec}
In the last section we subdivided the unit disk into hyperbolic
pentagons, quadrilaterals and triangles. Next we
want to mesh each of these regions into quadrilaterals
with angles in the interval $[60^\circ, 120^\circ]$.
Moreover, along
common edges of the regions, the vertices of the meshes
must match up correctly.
For each type of region, we will produce a mesh by quadrilaterals that have
circular arc boundaries and angles within a given range. In most cases
the boundary arcs lie on circles with radius comparable to the region,
and the quadrilaterals will be much smaller, about $1/N$ as large, for
a large $N$. If we replace the circular arc edges by line segments,
the angles change by only $O(1/N)$, which still gives angles in
the desired range. The only exceptions will be certain parts of the
mesh of the Carleson triangles, which will require a separate argument
to show the ``snap-to-a-line'' angles are still between $60^\circ $
and $120^\circ$.
As before, $L$ denotes the sidelength of a hyperbolic
right pentagon.
\begin{lemma} \label{pentagon mesh}
For sufficiently large integers
$N>0$ the following holds.
Suppose $P$ is a hyperbolic right pentagon. Then there is a
mesh of $P$ into hyperbolic quadrilaterals with angles between
$72^\circ$ and $108^\circ$. The mesh divides each side
of the pentagon into $N$ segments of length $L/N$. Each quadrilateral
$Q$ in the mesh has hyperbolic diameter $O(1/N)$
and satisfies ${\rm{diam}}(Q) =O( \frac 1 N \cdot {\rm{diam}}(P))$ in
the Euclidean metric.
Replacing the edges of $Q$ by line segments changes angles by only
$O(1/N)$.
\end{lemma}
\begin{proof}
Connect the center $c$ of the pentagon by hyperbolic
geodesics to the (hyperbolic) center of each
edge. This divides the pentagon into five
quadrilaterals each of which has 3 right angles
and an angle of $ 72^\circ $ at the
center. Consider one of these quadrilaterals $Q$
with sides $S_1, S_2, S_3, S_4$ where
$S_1$, $S_2$ each connects the center of the pentagon
to midpoints of adjacent sides.
Then $ S_3, S_4$ are each half of a side of the pentagon
adjacent at a vertex $v$, with $S_3$ opposite $S_1$ and $S_4$
opposite $S_2$ (Figure \ref{penta-defn}).
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.75in]{penta-defn.eps}
}
\caption{ \label{penta-defn}
Definitions used in the mesh of a hyperbolic right pentagon.
Each pentagon is divided into five quadrilaterals as shown.
}
\end{figure}
Place a point $x$ along $S_3$ and let $e_x$
be the geodesic segment from $S_1$ to $S_3$ that
meets $S_3$ at $x$ and makes a $90^\circ $ angle
with $S_3$. Similarly define a segment
$f_y$ that joins $y \in S_4$ to $S_2$.
We claim that the segments cross at an angle
(labeled $\phi$ in Figure \ref{penta-defn})
that is between $72^\circ $ and $90^\circ$. The
two segments $e_x$, $f_y$ divide $Q$ into
four quadrilaterals, one of which contains the
vertex $v$. This subquadrilateral, $Q'$, is a
Lambert quadrilateral, i.e., bounded by four
hyperbolic geodesic segments and having 3
right angles. The one non-right angle, $\phi$,
is a function of the hyperbolic lengths of the
two opposite sides (in this case a function of
$a=\rho(x,v)$ and $b=\rho(y,v)$),
$$ \cos(\phi) = \sinh a \sinh b .$$
See Theorem 7.17.1 of \cite{Beardon}.
Clearly, $\phi$ decreases as either $a$ or $b$
increases. For $a$ and $b$ close to zero we have
$\phi \approx 90^\circ$ and when $a,b$ take their maximum value
($a=b $ is the hyperbolic length of $S_3$) we
get $Q' = Q$ and $\phi= 72^\circ$.
Thus $\phi $ takes values between $72^\circ$ and
$90^\circ$, as claimed.
To define a mesh of $Q$, take $N$ equally spaced points
$\{ x_k\} \subset S_3$ and $\{ y_k \} \subset S_4$ and
take the union of segments $e_{x_k}, f_{y_k}$. This divides
$Q$ into quadrilaterals with geodesic boundaries
and angles between $72^\circ$ and $108^\circ$.
Doing this for each of the five quadrilaterals that
make up the hyperbolic right pentagon gives a mesh of the
pentagon. The remaining claims are easy to verify.
See Figure \ref{mesh-pent}.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.5in]{filledpoly.ps}
$\hphantom{xxxx}$
\includegraphics[height=1.5in]{pent-2nd-fill.ps}
}
\caption{ \label{mesh-pent}
A quadrilateral mesh of a single pentagon and the mesh
on 11 adjoining pentagons.
Because vertices are evenly spaced in the hyperbolic metric,
meshing of adjacent pentagons match up.
}
\end{figure}
\end{proof}
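The endpoint values of $\phi$ in this proof can be confirmed numerically (our check, not the paper's): at $a = b = $ the length of $S_3$, which is half a pentagon side, the half-angle identity $\sinh^2(L/2) = (\cosh L - 1)/2 = \cos 72^\circ$ gives $\phi = 72^\circ$ exactly.

```python
import math

L = math.acosh(1 + 2 * math.cos(2 * math.pi / 5))   # right-pentagon side length

# Lambert quadrilateral relation cos(phi) = sinh(a) sinh(b) at a = b = L/2:
phi = math.acos(math.sinh(L / 2) ** 2)
assert abs(math.degrees(phi) - 72.0) < 1e-9

# ... and for a, b near 0 the angle tends to 90 degrees:
phi0 = math.acos(math.sinh(1e-8) ** 2)
assert abs(math.degrees(phi0) - 90.0) < 1e-6
```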
\section{Meshing the quadrilaterals } \label{meshing quads}
\begin{lemma} \label{quad mesh}
For sufficiently large integers $N$ the following holds.
Suppose $\{ d_1 < d_2< \dots < d_M\}$ satisfy $|d_k-d_{k+1}|
\leq 1/N$ for $k=1, \dots, M-1$, $d_1< 1/N$, $ d_M > N$ and suppose
$R$ is a right Carleson quadrilateral. Then there is a
mesh of $R$ into hyperbolic quadrilaterals with angles between
$90^\circ -O(\frac 1N)$ and $90^\circ+O(\frac 1 N)$. The mesh divides
the unique finite (hyperbolic) length side of $R$ into $N$
segments of length $L/N$. Each infinite length side of $R$ has vertices
exactly at the points that are hyperbolic distance $d_k$,
$k=1, \dots, M$ from the finite length side.
If the base of $R$ has length $\leq \pi$, then
each element $Q$ of the mesh
satisfies ${\rm{diam}}(Q) = O(\frac 1 N \cdot {\rm{diam}}(R))$ in the Euclidean
metric.
Replacing the edges of $Q$ by line segments changes angles by at most
$O(1/N)$.
\end{lemma}
We need a simple preliminary result.
\begin{lemma} \label{perp foliation}
Suppose $Q$ is a right circular quadrilateral, i.e., is bounded by
four circular arcs and all four interior angles are $90^\circ$. Then $Q$
has two orthogonal foliations by circular arcs. Every leaf of
both foliations is perpendicular to the boundary at both of its
endpoints.
\end{lemma}
\begin{proof}
To see this, take two opposite
sides. Each lies on a circle and these circles either intersect
in 0, 1 or 2 points or are the same circle.
In the first case
we can conjugate by a M{\"o}bius transformation so both disks are
centered at $0$. Then the two other sides must map to radial segments
and the foliations are as claimed.
If the circles
intersect in two points, we can assume these points are $0$ and $\infty$
so the circles are both lines passing
through $0$ and again the foliations are radial rays and circles centered
at $0$. If the opposite sides belong to the same circle,
we can conjugate it to be the real line, with the two sides being arcs
symmetric with respect to the origin. Then the other two sides
must be circular arcs centered at $0$ and the two foliations are as before.
The last, and exceptional, case is
if the two circles intersect in one point. Then we can conjugate this
point to infinity and the intersecting sides to
two parallel lines. The other two sides must map to perpendicular
segments and the region is foliated by perpendicular straight lines.
See Figure \ref{circ-quad}.
\end{proof}
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.25in]{CirQuad-3.eps}
$\hphantom{x}$
\includegraphics[height=1.25in]{CirQuad-1.eps}
}
\caption{ \label{circ-quad}
Any right circular quadrilateral is M{\"o}bius equivalent to
one of these cases and hence has orthogonal foliations by circular
arcs.
}
\end{figure}
\begin{proof} [Proof of Lemma \ref{quad mesh}]
The two sides of $R$ that lie in $\Bbb D$ but have infinite
hyperbolic length are geodesic rays that are
both perpendicular to the geodesic containing the
top edge of $R$.
Hence they are subarcs of non-intersecting circles
(to see this, isometrically
map $\Bbb D \to \Bbb H$ so the top edge maps to a vertical
segment and the geodesic rays map to arcs of concentric circles).
The foliations provided by the previous lemma consist of
(1) hyperbolic geodesics that are
perpendicular to the top edge of $R$ (the unique finite
length side) and (2) subarcs of circles that
all pass through $a,b$ (the endpoints of the hyperbolic
geodesic that contains the top edge of $R$). We call
these the vertical and horizontal foliations respectively.
To prove the lemma, we simply subdivide the edges of $R$
as described and take the foliation leaves with these
endpoints. The only point that needs to be checked is
that points on the two infinite length sides of $R$
that are the same hyperbolic distance from the top
edge lie on the same horizontal foliation leaf.
However, any two horizontal leaves are equidistant from
each other in the hyperbolic metric
(to see this, map the vertices $a,b$ to $0,\infty$ by an
isometry $\Bbb D \to \Bbb H$ and these leaves become
rays, and the claim is obvious since dilation is
an isometry on $\Bbb H$).
Since the top edge is a horizontal leaf, we are done.
See Figure \ref{fill-quad}.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.75in]{fillquad10.ps}
}
\caption{ \label{fill-quad}
A quadrilateral mesh of a Carleson quadrilateral. ``Horizontal'' edges
lie on circles that pass through the same two points on the boundary (the
endpoints of the geodesic contain the top edge).
``Vertical'' edges are hyperbolic geodesics perpendicular to the
top edge.
}
\end{figure}
\end{proof}
\section{Meshing the triangles } \label{triangles}
Unlike our meshes of the Carleson quadrilaterals
and right pentagons, our mesh of the Carleson
triangles will use the full interval of angles
$[60^\circ, 120^\circ]$.
This is easy to do if we just want to mesh by
quadrilaterals with circular arc sides. However,
we will want to conformally map our mesh in $\Bbb D$ to $\Omega$
and then replace the curved edges in the image
by straight line segments. This can change the angles
slightly, so we would end up with angles in $[60^\circ - \epsilon,
120^\circ + \epsilon]$ (where $\epsilon$ depends on
the ratio
between the diameters of our mesh elements and the
diameter of $T$).
To get the sharp result, we will have to
be careful how we use angles near
$60^\circ$ and $120^\circ$. To simplify matters, it
will be enough to simply consider one special Carleson
triangle $T$ in the upper half-plane model with vertices
at $-1,1, i/(\sqrt{2}-1)$. The mesh for any other
triangle will be obtained as a M{\"o}bius image of the
mesh we construct on this triangle.
The triangle $T$ has one vertex in $\Bbb H$, and we
refer to this as the ``top point''. Adjacent to the
top point are two sides that we call the ``left'' and
``right'' sides.
Inside $T$ we will construct
an ``inner triangle'' $T_i \subset T$.
The vertices of $T_i$ form an ordinary
equilateral Euclidean triangle, but the edges of $T_i$
itself are
circular arcs meeting at three interior angles of $90^\circ$,
and $T_i$ is uniquely determined by this.
\begin{lemma} \label{triangle mesh}
The following holds for all sufficiently large integers
$N$. There is a sequence $d_1 < d_2 < \dots < d_M$ with
$|d_k - d_{k+1}| \leq 1/N$ for $k=1, \dots, M-1$ and a
mesh of $T$ into hyperbolic
quadrilaterals with angles between $60^\circ$
and $120^\circ$ so that the vertices along the left and
right edges of $T$ occur exactly at the points distance
$d_k$, $k=1, \dots, M$ from the top point.
Every quadrilateral $Q$ in the mesh satisfies
${\rm{diam}}(Q) = O(\frac 1 N \cdot {\rm{diam}}(T)) $. The triangle
$T$ contains a symmetric right circular triangle $T_i
\subset T$ so that outside $T_i$, only angles
in $[90^\circ -O(\frac 1N) , 90^\circ + O(\frac 1N)]$ are used.
The triangle $T_i$ may be chosen as small as we wish compared
to $T$.
Replacing edges by straight line segments gives angles between
$60^\circ$ and $120^\circ$.
\end{lemma}
\begin{figure}[htbp]
\centerline{
\includegraphics[height=2in]{inner-outer3.eps}
}
\caption{ \label{inner-outer}
The outer triangle $T$ is a Carleson triangle in the upper half-plane
with top point $w = i/(\sqrt{2}-1)$.
Its interior is divided into an inner triangle $T_i$ (shaded)
with top point $v$ and nine surrounding right circular quadrilaterals.
The points $v_1, v_2$ are equidistant from $w$
in the hyperbolic metric. The left and right sides of $T_i$
are geodesic segments and extend to hit $\Bbb R$ as points
$v_7, v_8$. The Carleson triangle with vertices $v, v_7, v_8$
is denoted $T_e$.
}
\end{figure}
The inner triangle $T_i$ is divided
into three quadrilaterals by connecting the center of the triangle
to the midpoint of
each edge by a straight line.
The vertices of $T_i$ and the midpoints of its edges
are connected to points on $\partial T$ by
circular arcs that are perpendicular to both the boundaries
of $T$ and $T_i$ at the points where they meet.
See Figure \ref{inner-outer}.
We mesh each of the nine resulting quadrilaterals using
the foliations given in Lemma \ref{perp foliation}, starting
at the left and right sides of $T$ at the points given
by $\{ d_k\}$. We assume that this collection contains the
distances $\rho(w,v_1), \rho(w, v_3), \rho(w, v_5)$.
When a leaf ends we continue it in the next
quadrilateral (assume we know how to do this for the inner
triangle and that the foliation there is symmetric).
The path continues until it hits $c$ (the center of
the inner triangle),
hits $[-1,1]$ (the base of $T$), or hits the opposite side of $T$.
In the
latter case, symmetry implies the path ends at a point
the same distance from the top point as its starting point.
The choice of inner triangle $T_i$ depends only on the choice of
its top point. This lies on the positive imaginary axis,
and $T_i$ is chosen to be symmetric with respect to this
line. The diameter of $T_i$ is scaled so that the left and right
edges of $T_i$ are hyperbolic geodesic segments (if the top point
has height $h$ above $0$, the three vertices of $T_i$ should
form an equilateral triangle of sidelength $h(\sqrt{3}-1)$;
see Figure \ref{ScaleInner}).
Since any point
between the top point of $T$ and the origin can be used, the inner
triangle can be as small as we wish.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.5in]{ScaleInner.eps}
}
\caption{ \label{ScaleInner}
How to scale the inner triangle.
Suppose $a$ is height $1$ above the real axis and $a,b$ lie
on a geodesic $\gamma$ centered at $d$ that makes a $45^\circ$ angle
with the horizontal at $a$. Then $\Delta a0d$ is isosceles with base
angles $45^\circ$, so $|ad| = |bd|=\sqrt{2}$. The line $da$ is
perpendicular to $\gamma$, so $\Delta dab$ is isosceles.
Thus $|ab| = 2 |bd| \sin(15^\circ) = \sqrt{3}-1$.
}
\end{figure}
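The chord computation in the caption is easy to verify numerically. The following short Python sketch (ours, not part of the paper) checks that the legs have length $\sqrt{2}$ and that a chord subtending a $30^\circ$ central angle has length $2\sqrt{2}\sin 15^\circ = \sqrt{3}-1$.

```python
import math

# Legs of the isosceles triangle with base angles 45 degrees over a
# point at height 1: |ad| = |bd| = sqrt(1^2 + 1^2) = sqrt(2).
r = math.hypot(1.0, 1.0)
assert abs(r - math.sqrt(2)) < 1e-12

# Chord of the circle of radius r subtending a 30-degree central angle:
# |ab| = 2 r sin(15 degrees), which should equal sqrt(3) - 1.
ab = 2 * r * math.sin(math.radians(15))
assert abs(ab - (math.sqrt(3) - 1)) < 1e-12
```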
We define three foliations on this triangle $T_i$. For each vertex $v$,
reflect $v$ through the circular arc on the opposite side
to define a point $v^*$ and foliate $T_i$ by arcs that lie on
circles passing through both $v$ and $v^*$.
Note that each foliation leaf passes through one
of the vertices of $T_i$ and is perpendicular to the opposite side.
See Figure \ref{3-full-foliations}.
The center of the triangle can be connected
to the midpoint of each side by a foliation leaf that is
a straight line,
dividing $T_i$ into three
quadrilaterals. Restrict each foliation to the two quadrilaterals that are
not adjacent to the vertex it passes through. This gives two
foliations on each quadrilateral. See Figure \ref{3-full-foliations}.
Taking a finite set of leaves for each foliation gives a quadrilateral
mesh of the right circular triangle.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.2in]{fol1.eps}
$\hphantom{x}$
\includegraphics[height=1.2in]{fol2.eps}
$\hphantom{x}$
\includegraphics[height=1.2in]{fol3.eps}
$\hphantom{x}$
\includegraphics[height=1.2in]{symm-tri-center.eps}
}
\vskip .15in
\centerline{
\includegraphics[height=1.2in]{part-fol1.eps}
$\hphantom{x}$
\includegraphics[height=1.2in]{part-fol3.eps}
$\hphantom{x}$
\includegraphics[height=1.2in]{part-fol2.eps}
$\hphantom{x}$
\includegraphics[height=1.2in]{symm-tri2.eps}
}
\caption{ \label{3-full-foliations}
Three foliations of a circular right triangle. Each leaf passes
through an associated vertex and is perpendicular to the opposite side.
Connecting the center of the triangle to the midpoints of each
side by the straight leaf divides $T_i$ into three
quadrilaterals.
We then restrict each foliation to two of the quadrilaterals
as shown, and leaves of the union give the mesh edges.
}
\end{figure}
Combining this foliation of the inner triangle with the foliations of
the surrounding quadrilaterals and choosing starting points along the
left and right sides of $T$ as described earlier gives the desired
mesh of $T$. See Figure \ref{full-mesh-tri}.
The only part of the lemma left to
prove is the claim that the angles are in the desired interval when
we replace the curved edges by straight segments.
\begin{figure}
\centerline{
\includegraphics[height=2.25in]{full-mesh-tri2.eps}
$\hphantom{xxxx}$
\includegraphics[height=2.25in]{full-mesh-tri4.eps}
}
\caption{ \label{full-mesh-tri}
The mesh of a Carleson triangle for two different
positions of the inner triangle.
}
\end{figure}
When we replace the circular arc edges in the mesh by straight line segments,
it is not obvious that all the angles remain in
$[60^\circ, 120^\circ]$, but we will show that this is true.
Consider a point $z$ in one of the three quadrilaterals
and the two foliation paths $\gamma_1, \gamma_2$
that connect it to the two opposite
vertices, $v_1, v_2$ respectively.
See Figure \ref{bounds-on-quad}.
Let $L_1, L_2$ be the lines through the center $c$ and the points
$v_1, v_2$.
If we think of the arc $\gamma_1$ as a
graph over the line $L_1$, it is monotonically increasing as we move
away from $v_1$ and remains so as long as we stay inside the
triangle (since $\gamma_1$ is perpendicular to the opposite
side of the triangle, the point of greatest distance from
$L_1$ occurs outside the triangle). Thus if we translate $L_1$ to
pass through the point $z$, we see that $\gamma_1$ stays on one side
of this new line up to $z$ and on the other side beyond $z$.
Thus any chord of $\gamma_1$ in the triangle with one endpoint at
$z$ also stays on the same side of the line as the corresponding arc
of $\gamma_1$. Similarly for $\gamma_2$ and
$L_2$. See the right side of Figure \ref{bounds-on-quad}.
Thus if we replace foliation paths by segments,
at each
vertex there will be two angles less than $120^\circ$ and two greater
than $60^\circ$ (which are the angles formed by $L_1$ and $L_2$).
This completes the proof of Lemma \ref{triangle mesh}.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.5in]{L-gamma.eps}
$\hphantom{xxxxxxxxx}$
\includegraphics[height=1.5in]{close-up.eps}
}
\caption{ \label{bounds-on-quad}
If we choose any point $z$ of the equilateral triangle then
chords of the foliation paths with endpoint $z$ form angles
that are bounded between $60^\circ$ and $120^\circ$.
}
\end{figure}
\section{Meshing the thick parts by conformal maps } \label{thick and thin}
The Riemann mapping theorem says that given any
simply connected planar domain $\Omega$ (other than the
whole plane) there is a 1-1, onto, holomorphic map
of the disk onto $\Omega$. Moreover, we may map $0$
to any point of $\Omega$ and specify the
argument of the derivative at $0$.
Such a mapping is conformal, i.e., it preserves angles
locally.
More importantly, a conformal mapping is close to
linear on small balls with estimates that depend on the ball
but not on the mapping.
Koebe's estimate (e.g. Cor. 4.4 of
\cite{Garnett-Marshall}) says that
if $f: \Bbb D \to \Omega$ is conformal
then
$$ \frac 14 |f'(z)| \leq \frac {{\rm{dist}}( f(z), \partial \Omega) }
{{\rm{dist}}(z , \partial \Bbb D) }
\leq |f'(z)|.$$
The closely related distortion theorem states (Equation
(4.17) of \cite{Garnett-Marshall}) that if $f$ is conformal
on the unit disk, then
$$
\frac {1-|z|}{(1+|z|)^3}
\leq \frac { |f'(z)|}{|f'(0)|}
\leq \frac {1+|z|}{(1-|z|)^3}.$$
This says that on small balls $f'$ is close to constant, and
hence that $f$ is close to linear.
More precisely,
if $f$ is conformal on a ball $B(w,r)$ then
\begin{eqnarray} \label{close-to-linear}
|f(z) - L(z) | \leq O(\epsilon^2 |f'(w)| r) ,
\end{eqnarray}
for $z \in B(w, \epsilon r)$,
where $L(z) = f(w) + (z-w) f'(w)$ is a Euclidean similarity.
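As a quick numerical sanity check of the distortion bounds above (our sketch, not part of the argument), one can evaluate them on the Koebe function $k(z) = z/(1-z)^2$, the classical extremal example, whose derivative is $k'(z) = (1+z)/(1-z)^3$ with $k'(0)=1$; on the positive real axis the upper bound is attained exactly.

```python
import cmath, math

def kprime(z):
    # Derivative of the Koebe function k(z) = z / (1 - z)^2.
    return (1 + z) / (1 - z) ** 3

for r in (0.1, 0.3, 0.5, 0.7, 0.9):
    lower = (1 - r) / (1 + r) ** 3
    upper = (1 + r) / (1 - r) ** 3
    for deg in range(0, 360, 30):
        z = r * cmath.exp(1j * math.radians(deg))
        ratio = abs(kprime(z))   # equals |k'(z)| / |k'(0)| since k'(0) = 1
        assert lower - 1e-12 <= ratio <= upper + 1e-12
    # Sharpness: on the positive real axis the upper bound is attained.
    assert abs(abs(kprime(r)) - upper) < 1e-9
```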
We are particularly interested in conformal maps onto
polygons. In this case $f$ is given by
the Schwarz-Christoffel formula
$$ f(z) = A + C \int^z \prod_{k=1}^{n-1} (w-z_k)^{\alpha_k-1} dw , $$
where
the interior angles of $\Omega$ are $
\{ \alpha_1 \pi, \dots, \alpha_n \pi\}$ and the preimages
of the vertices are ${\bf z}= \{ z_1, \dots, z_n\}$.
See e.g., \cite{DT-book}, \cite{Nehari}, \cite{DT99}.
The formula was discovered independently by
Christoffel in 1867 \cite{Chr67} and Schwarz in 1869
\cite{Sch90}, \cite{Sch69a}. For other references
and a brief history see Section 1.2 of \cite{DT-book}.
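As a concrete instance of the formula (an illustration of ours, not used in the proof): taking prevertices at the fourth roots of unity and all $\alpha_k = 1/2$ gives $f(z) = \int_0^z (1-w^4)^{-1/2}\,dw$, which maps the disk onto a square; its four-fold symmetry $f(iz) = i f(z)$ can be checked by numerical integration.

```python
def f(z, n=20000):
    # Midpoint-rule integration of the Schwarz-Christoffel integrand
    # (1 - w^4)^(-1/2) along the straight segment from 0 to z.
    total = 0j
    for k in range(n):
        w = z * (k + 0.5) / n
        total += (1 - w ** 4) ** -0.5
    return total * z / n

a, b = f(0.99), f(0.99j)
assert abs(a.imag) < 1e-9       # the real axis maps into the real axis
assert abs(b - 1j * a) < 1e-9   # four-fold symmetry: f(iz) = i f(z)
```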
The difficulty in using the formula is to find the
correct parameters ${\bf z}$ for a given $\Omega$.
For a conformal map $f$ onto a polygonal region,
the points of the prevertex set $ {\bf z} \subset \Bbb T$ are the only
singularities of $f$ on $\Bbb T$.
The map extends
analytically across the complementary intervals by the
Schwarz reflection theorem. Thus for a point $w \in \Bbb D$,
the map $f$ extends
to be conformal on the ball $B=B(w, {\rm{dist}}(w,E))$,
and if $Q \subset B$ and ${\rm{diam}}(Q) \leq \epsilon \cdot
{\rm{dist}}(Q,E)$, then there is a linear map $L$ so that
\begin{eqnarray} \label{close-to-linear2}
|f(z) - L(z) | \leq O(\epsilon {\rm{diam}}(f(Q)) ) ,
\end{eqnarray}
for $z \in Q$. In particular, the vertices
of $Q$ map to the vertices of a quadrilateral whose angles
differ by only $O(\epsilon)$ from the angles of $Q$. This is
what allows us to map our mesh via a conformal map and obtain
a mesh with only slightly distorted angles. More precisely,
\begin{lemma} \label{small distortion}
Suppose $f:\Bbb D \to \Omega$ is a conformal map onto a polygonal
domain with singular set ${\bf z}$ and $Q \subset \Bbb D$ is a
Euclidean quadrilateral with $ {\rm{diam}}(Q) \leq \epsilon \cdot {\rm{dist}}(Q, {\bf z}) $.
Then the images of the vertices of $Q$ under $f$ form a quadrilateral
with angles differing by at most $O(\epsilon)$ from the
corresponding angles of $Q$.
\end{lemma}
If we applied this directly to a general polygonal region we
could prove that there is a quadrilateral mesh with angles
between $60^\circ - O(\epsilon) $ and $120^\circ + O(\epsilon)$ for
any $\epsilon >0$,
but we would not have the $O(n)$ bound on the number of
pieces.
Bounding the number of terms
comes from using a special decomposition of $\Omega$ and getting
rid of the $\epsilon$'s comes from modifying the conformal map
near the inner triangles in our mesh of $\Bbb D$.
We will deal with the decomposition first.
A polygonal domain $\Omega$
is $\delta$-thick if the corresponding prevertex set
${\bf z}$ is $\delta$-thick, as defined in Section \ref{mobius}.
Equivalently, any two non-adjacent sides of $\Omega$
have extremal distance at least $\delta$ in $\Omega$.
Extremal distance is a well-known conformal invariant
which roughly measures the distance between two
continua compared to their diameters. For more
details about extremal distance and thick domains, see
\cite{Bishop-time}.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.25in]{three-thin-types.eps}
}
\caption { \label{three-thin-types}
A polygon with one hyperbolic thin part (darker) and six
parabolic thin parts.}
\end{figure}
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.0in]{thin-intro.eps}
}
\caption{\label{thick-parts}
A polygon with five hyperbolic thin parts.
This figure is not to scale.
The channel on the right is not thin because the upper
edge is made up of numerous short edges; the extremal
distance from any of these to the lower edge is bounded
away from zero.
}
\end{figure}
A subdomain $\Omega' \subset \Omega$ is $\delta$-thin
if (1) $\partial \Omega' \cap \partial \Omega$ consists of two
segments $S_1, S_2$ (each a subset of distinct edges of $\Omega$),
(2) $\partial \Omega' \cap
\Omega$ consists of two polygonal arcs, each inscribed in an
approximate circle and (3) the extremal distance
between $S_1$ and $S_2$ in $\Omega'$ is $\leq \delta$.
A thin part of $\Omega$ is called parabolic if the sides
$S_1, S_2$ lie on adjacent sides of $\Omega$, and is called hyperbolic
otherwise. See Figures \ref{three-thin-types} and \ref{thick-parts}.
The following result is proven in \cite{Bishop-time}.
\begin{lemma}
There is a $\delta_0 >0$ and $0< C < \infty$
so that if $ \delta < \delta_0$
then the following holds.
Given a simply connected, polygonal domain $\Omega$ we can
write $\Omega$ as a union of subdomains $\{ \Omega_j\}$
belonging to two families ${\mathcal N }$ and $ {\mathcal K} $.
The elements of ${\mathcal N}$ are $O(\delta)$-thin polygons
and the elements of
${\mathcal K}$ are $\delta$-thick.
The number of edges in all the pieces put together is $O(n)$
and all the pieces can be computed in time $O(n)$ (the constant
depends on $\delta$).
A piece can only intersect a piece of the opposite type.
Any such intersection is a $4\delta$-thin polygon.
\end{lemma}
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.5in]{Overlap.eps}
}
\caption{\label{Overlap}
An overlapping thick piece, $\Omega_2$, and thin piece,
$\Omega_1$ and crosscuts $\gamma_1 = \partial \Omega_1
\cap \Omega_2$, $ \gamma_2 = \partial \Omega_2 \cap \Omega_1$.
The shaded region is $\Omega_3 = \Omega_1 \cap \Omega_2$.
This region
is divided into three sections and the center section is
denoted $\Omega_4$.
}
\end{figure}
Suppose $\Omega_1$ is one of the thick parts, and
let $f : \Bbb D
\to \Omega_1$ be a conformal map with the origin mapping to
a point outside of all the thin parts hitting $\Omega_1$.
Note that $\partial
\Omega_1 \cap \Omega$ consists of crosscuts $\{ \gamma_j\}$
and let $\{ I_j\}$
be the preimages under $f$ of these boundary arcs.
Each $\gamma_j$ has an associated crosscut $\gamma_j'$
that is a boundary arc of the thin part containing
$\gamma_j$. The preimage of $\gamma_j'$ defines a crosscut
in $\Bbb D$ whose endpoints define an interval that contains
the concentric enlargement $A I_j$ of $I_j$, where $A \simeq \exp(\pi / (4 \delta))$.
These larger intervals are disjoint (since none of the thin
parts intersect) and $f(0)$ can be chosen
so they all have length $< \pi$.
Thus we can apply Lemma \ref{build intervals} to construct
a domain $W_1 \subset \Bbb D$ and a quadrilateral mesh
on it. Suppose $\partial \Omega_1$ has $n_1$ sides. Since
$\Omega_1$ is $\delta$-thick,
Lemma \ref{thick collections} implies the mesh of $W_1$ has
$O(n_1)$ elements, with a constant depending on $\delta$.
Moreover, for any $\epsilon >0$ we may assume Lemma \ref{small distortion}
applies to all the quadrilaterals in our mesh of $W_1$
if we take $\epsilon = O(1/N)$, where $N$ is as
in Lemmas \ref{pentagon mesh}, \ref{quad mesh}
and \ref{triangle mesh}. Thus $f(W_1) \subset \Omega_1$
has a mesh with $O(n_1)$ quadrilaterals, the constant depending
on $\delta$ and $N$, which we will choose independent of $\Omega$.
If $N$ is large enough, then all angles are in the desired range,
except possibly for the quadrilaterals corresponding to the
inner triangles. This determines the choice of $N$.
We also want to choose $\delta >0$ independent of $\Omega$.
As above, suppose $\Omega_1$ is a thick piece and that it
intersects a thin piece $\Omega_2$.
The intersection, $\Omega_3 = \Omega_1
\cap \Omega_2$ is a $4 \delta$-thin
part and can be divided into three disjoint $12 \delta$-thin
parts as illustrated in Figure \ref{Overlap}. Let
$\Omega_4$ denote the ``middle'' part (the one separated
from both $\gamma_1$ and $\gamma_2$). For points inside
$\Omega_4$, the conformal maps of the disk to $\Omega_1$
and $\Omega_3$ are very close to each other if $\delta $ is
small enough. The following
result (Lemma 24 of \cite{Bishop-time}) makes this
precise:
\begin{lemma} \label{approx map}
Suppose $f: \Bbb H \to \Omega_1$ is conformal. We can choose
a conformal map $g : \Bbb H \to \Omega_3$ so that for
$z \in f^{-1}(\Omega_4)$, and uniform $c>0, C< \infty$,
$$|f(z)-g(z)| \leq C \exp(- c/ \delta)
\max({\rm{diam}}(\gamma_1), {\rm{diam}}(\gamma_2)). $$
\end{lemma}
Since $\Omega_3$ is a thin part, we can renormalize our maps
so that $f(i) = g(i)$ is the center of $\Omega_4$ and the
preimages of the vertices of $\Omega_3$
under $g$ can be grouped into two parts: those in
a small interval $\{ |x| < \eta\}$ and those outside
a large interval $\{ |x| > 1/\eta\}$, where $\eta $ tends
to zero as $\delta$ tends to zero.
The corresponding terms of the Schwarz-Christoffel formula
can be grouped as
$$g'(w) = B
\prod_{|z_k| < \eta} w^{\alpha_k-1} (1- \frac {z_k}w)^{\alpha_k-1}
\prod_{|z_j| > 1/\eta} (1- \frac w {z_j} )^{\alpha_j-1}
\simeq B w^{ \sum_{k: |z_k| < \eta} (\alpha_k-1)} = B w^\beta ,$$
where $B$ is a constant, and the dropped factors are close to
$1$ if $\eta$ is close to $0$. Thus $g$ approximates a
power function.
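The grouping above can be tested numerically; in this sketch (ours, with invented prevertex data) three prevertices cluster near $0$ and two lie far away, and on the unit circle the full product stays within a few percent of $B w^\beta$.

```python
import cmath

# Invented data: prevertices z_k near 0 and z_j far away, with
# arbitrary exponents alpha_k - 1 = +/- 0.5.
near = [(0.003, 0.5), (-0.004, 1.5), (0.005j, 0.5)]   # (z_k, alpha_k)
far = [(200.0, 0.5), (-300.0, 1.5)]                   # (z_j, alpha_j)
beta = sum(a - 1 for _, a in near)                    # here beta = -0.5

def gprime(w, B=1.0):
    p = B + 0j
    for zk, a in near:
        p *= w ** (a - 1) * (1 - zk / w) ** (a - 1)
    for zj, a in far:
        p *= (1 - w / zj) ** (a - 1)
    return p

for deg in range(0, 360, 45):
    w = cmath.exp(1j * cmath.pi * deg / 180)          # |w| = 1
    power = w ** beta
    assert abs(gprime(w) - power) <= 0.05 * abs(power)
```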
This implies that $g$, and hence $f$, maps the
circular arc $\{|z|=1\} \cap \Bbb H$
to a smooth crosscut of $\Omega_4$ that approximates
a circular arc that is close to perpendicular to the boundary,
and that $f$ followed by radial projection
onto this arc preserves the ordering of points and multiplies
the distances between them by approximately a constant factor
(with error that tends to zero with $\delta$).
See Figure \ref{PowerApprox}.
This is one condition that determines
our choice of $\delta$. Another will be given in the final
section when we mesh hyperbolic thin parts.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.2in]{PowerApprox1.eps}
$\hphantom{xx}$
\includegraphics[height=1.2in]{PowerApprox2.eps}
}
\caption{ \label{PowerApprox}
Inside the middle of the overlap of a thick and thin part,
the conformal map approximates a power function. Points
on a circular arc in the disk are mapped to points that
lie on an approximate circular arc and order is preserved.
}
\end{figure}
We now transfer the mesh from $W_1$ to $f(W_1) \subset \Omega_1$.
The unmeshed portions of $\Omega$ are now all subsets of
thin parts bounded by crosscuts that are almost circular arcs.
Moreover, the number of mesh vertices on each of these
crosscuts is the same by (3) of Lemma \ref{build intervals}
(namely $3N+2M$ where $N$ is from Lemma \ref{pentagon mesh} and
$M$ is from Lemmas \ref{quad mesh} and \ref{triangle mesh}).
The mesh has all angles in $[60^\circ, 120^\circ ]$,
except those corresponding to
the inner triangles in the Carleson triangles, where
they may be $O(\epsilon)$ larger or smaller.
To fix this, we replace the conformal map by a linear map
in the inner triangles.
Map each Carleson triangle in $\Bbb D$ used in the mesh of $W_1$
to the Carleson triangle $T$ in $\Bbb H$ discussed in
Section \ref{triangles} using a M{\"o}bius transformation $\tau$.
Then $g= f_k \circ \tau^{-1}$ is a conformal map of $T$ into
a part of $f(W_1)$ and we transfer our mesh of $T$ outside the
inner triangle $T_i$ via this map. This agrees with our previous
definition.
In the inner triangle $T_i$ we use the linear map
$ h(z) = g(c) + (z-c) g'(c)$
to transfer the mesh. This preserves angles exactly and so the image
quadrilaterals have angles in $[60^\circ, 120^\circ]$. For
quadrilaterals along the boundary of $T_i$
we apply $h$ to the vertices on $\partial T_i$ and $g$ to the vertices
in $T \setminus T_i$. Along the boundary
of $T_i$, $|h(z) - g(z)| = O( \eta^2) {\rm{diam}} (g(T_i))$ where
$\eta = {\rm{diam}}(T_i)/{\rm{dist}}(T_i, {\bf z}) =O( {\rm{diam}}(T_i)/{\rm{diam}}(T))$. Since the
quadrilaterals meshing $T$ along $\partial T_i$
have Euclidean diameter $\simeq \eta$, and
angles all near $90^\circ$, we see that the image
quadrilaterals also have angles near $90^\circ$ if $\eta$ is
small enough, i.e., if the inner triangle is small enough with
respect to the outer triangle. This determines the choice of the
inner triangle.
This completes the proof that the desired mesh exists,
except for meshing the thin parts, which is done in the
next section.
However, this is not quite a linear time
algorithm for computing the mesh, since we have used
evaluations of conformal maps without an estimate
of the work involved. We address this now.
The exact conformal map onto a general polygon
probably can't be computed
in finite time, but we can compute an approximate
map onto a simple $n$-gon in time $O(n)$ with a
constant depending only on the desired accuracy.
In \cite{Bishop-time} I show that
a $(1+\epsilon)$-quasiconformal map from $\Bbb D$ to
$\Omega$ can be computed and evaluated at $n$ points
in time $O(n)$ where the constant depends only on
$\epsilon$. I will refer to \cite{Bishop-time}
for the definition and relevant properties of
quasiconformal mappings, but the point is that
if $f: \Bbb D \to \Omega$ is conformal and $g:\Bbb D
\to \Omega$ is the $(1+\epsilon)$-quasiconformal approximation
constructed in \cite{Bishop-time},
and if we have a Euclidean quadrilateral
$Q$ in our mesh, then the $g$-images of the vertices of $Q$ give angles
that are $O(\epsilon)$ close to the angles in the $f$ image.
Thus using $g$ to transfer the
mesh vertices works just as well as $f$.
The fast Riemann mapping
theorem given in \cite{Bishop-time} implies:
\begin{thm}
Suppose we are given a thick simply connected region $\Omega$ bounded by
a simple $n$-gon and an $\epsilon >0$.
We can compute the thick/thin decomposition of $\Omega$,
the corresponding domain $W$ and its
quadrilateral mesh and
a map $g$ on vertices of the mesh that extends to a
$(1+\epsilon)$-quasiconformal map of the disk to $\Omega$.
The total work is $O(n)$ where the constant may depend on
$\epsilon$.
\end{thm}
In fact, we do not need the full strength of the result in
\cite{Bishop-time}, giving the dependence on $\epsilon$,
since we only need to apply the result for a small, but fixed,
$\epsilon$. Moreover, we only need the result for thick polygons,
which is an easier case of the theorem.
\section{Meshing the thin parts} \label{thin}
We are now done with the proof of Theorem \ref{main}
except for meshing thin parts.
Each such thin part is either bounded by two adjacent
edges of $\Omega$ and an almost circular crosscut $\gamma$ (the
parabolic case) or by two non-adjacent edges and two
almost circular crosscuts $\gamma_1,\gamma_2$ (the hyperbolic case).
We start with parabolic thin parts where the
two adjacent edges of $\Omega$ meet at vertex $v$ with
angle $\theta \leq 120^\circ$. The crosscut $\gamma$
defines a neighborhood of $v$ in $\Omega$ that is approximately
a sector, and we define a true circular sector $S$ with vertex
$v$ of comparable, but smaller, size. See Figure \ref{compose-split}.
This sector is divided into pieces using circular arcs concentric
with $v$ and radial segments, as shown in the left of
Figure \ref{compose-split}. There are several levels, with
the width of the level decreasing by a factor of $2$ as we move
away from $v$, and we split each level with radial segments
in order to increase the number of vertices on the outer edge
of the sector. This can be done so that if we divide
$S$ into four equal sectors (each of angle $\theta/4 \leq 30^\circ$)
and add extra vertices to the centers of some arcs,
then the number of points on the outer edge in each subsector
is the same as the number of vertices on $\gamma$ in the
same subsector.
If we list the points on
$\gamma$ and on the outer edge of the sector in order, then
corresponding points lie in the same subsector and can be
joined by segments that make angles between
$90^\circ - \theta/8 - \epsilon \geq 75^\circ - \epsilon$
and $90^\circ + \theta/8 + \epsilon \leq 105^\circ + \epsilon$ with the chords of the outer
edge of the sector. See Figure \ref{Same-Sector}.
A similar estimate holds for the
chords on $\gamma$ (with a larger $\epsilon$ since $\gamma$
is only an approximate circle). Here $\epsilon$
tends to zero as $S$ shrinks with respect to $\gamma$.
We simply choose a relative size for $S$ that causes these
angles to be between $60^\circ$ and $120^\circ$.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.75in]{doubling2.eps}
$\hphantom{xxx}$
\includegraphics[height=1.75in]{doubling4.eps}
}
\caption{\label{compose-split}
The crosscut $\gamma$ defines a neighborhood
of the vertex $v$. We define a sector of comparable
size and partition the sector, so that the number of
vertices on the outer edge approximates the number of
points on the crosscut $\gamma$. The pieces are then
meshed: Mesh 1 is used in the dark shaded region, Mesh 2 (or
its reflection) in the white regions, and segments only in
the lighter shaded regions. The number of
vertices on the outer edge is exactly the number on $\gamma$
and corresponding points are joined by segments.
}
\end{figure}
\begin{minipage}{3in}
\centerline{
\includegraphics[height=1.75in]{thin-vertex-120.eps}
}
\centerline{Mesh 1}
\begin{eqnarray*}
&& 0 < \theta \leq 120 \\
&& 60 \leq \theta_1 = 180 - 60 - \frac 12 \theta \leq 120 \\
&& 60 \leq \theta_2 = \frac 12(180-\frac 12 \theta) \leq 90
\end{eqnarray*}
\end{minipage}
\begin{minipage}{3in}
\centerline{
\includegraphics[height=1.5in]{bisect.eps}
}
\centerline{Mesh 2}
{\small{
\begin{eqnarray*}
&& 0 \leq \theta \leq 60 \\
&& 90 \leq \theta_1= 180-\frac 12(180-\theta) \leq 120 \\
&& 60 \leq \theta_2 = 360 - 60 - 2 \theta_1
\leq 120 \\
&& 75 \leq \theta_3
= 90 - \frac 14 \theta \leq 90 \\
&& 60 \leq \theta_4 = 90 - \frac 12 \theta \leq 90 \\
&& 90 \leq \theta_5 = 360-120-60-\theta_4 \leq 120\\
\end{eqnarray*}
}}
\end{minipage}
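The displayed angle formulas can be checked mechanically. This small script (ours) sweeps $\theta$ over the stated range for each mesh and confirms every derived angle lies in the claimed interval, hence in $[60^\circ, 120^\circ]$.

```python
def in_range(x, lo, hi):
    return lo - 1e-9 <= x <= hi + 1e-9

# Mesh 1: 0 < theta <= 120.
for tenths in range(1, 1201):
    th = tenths / 10
    assert in_range(180 - 60 - th / 2, 60, 120)      # theta_1
    assert in_range((180 - th / 2) / 2, 60, 90)      # theta_2

# Mesh 2: 0 <= theta <= 60.
for tenths in range(0, 601):
    th = tenths / 10
    th1 = 180 - (180 - th) / 2
    assert in_range(th1, 90, 120)                    # theta_1
    assert in_range(360 - 60 - 2 * th1, 60, 120)     # theta_2
    assert in_range(90 - th / 4, 75, 90)             # theta_3
    th4 = 90 - th / 2
    assert in_range(th4, 60, 90)                     # theta_4
    assert in_range(360 - 120 - 60 - th4, 90, 120)   # theta_5
```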
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.75in]{Same-Sector.eps}
}
\caption{\label{Same-Sector}
The connecting segments between $\gamma$ and the outer edge
of $S$ lie inside a sector of angle $2 \phi \leq 30^\circ$. If
$S$ is small enough compared to $\gamma$, the angle marked
$\epsilon$ is as small as we wish, say $\epsilon < 10^\circ$.
Then the angles formed with the chords of the outer edge of $S$
are between $65^\circ$ and $115^\circ$. The angles with the
chords along $\gamma$ are slightly smaller/larger since $\gamma$ is
only an approximate circle, but the difference is as small as we
wish by taking the parameter $\delta$ in our thick/thin
decomposition small enough.
}
\end{figure}
We then have to mesh $S$ so that the mesh vertices on the outer edge
are exactly the ones given above. We do this by applying the
illustrated constructions in each part of the sector.
Mesh 1 is used only in the piece adjacent to $v$ and the equations
below the figure show that all the new angles are in the correct
range. Mesh 2 (or its reflection) is used in all the pieces
that have one more vertex on their outer edge than on the inner
edge (we use reflections to make the vertices on the radial
edges match up). Otherwise we simply use chords of circles
concentric with $v$ to connect edge vertices of parts where
mesh 2 was used. See the right side of Figure \ref{compose-split}.
If the interior angle at $v$ is $120^\circ \leq \theta < 240^\circ$
then we bisect the angle as part of our partition of the sector.
If $240^\circ \leq \theta \leq 360^\circ$, then we trisect the
angle. See Figure \ref{big angle}.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.25in]{BigAngle.eps}
}
\caption{\label{big angle}
If the vertex has interior angle between $120^\circ$ and
$240^\circ$ then we bisect the angle as part of the sector
partition and mesh each piece as before.
}
\end{figure}
A hyperbolic thin part is bounded by two straight
line segments in $\partial \Omega$ and two almost
circular crosscuts $\gamma_1, \gamma_2$. Both crosscuts
contain the same number, $P$, of vertices from the meshes
of the corresponding thick pieces.
If the two straight sides are parallel or lie on lines that
intersect with small angle, then just connecting each
point on $\gamma_1$ to the corresponding point on $\gamma_2$
will give angles in the desired range. In general, however,
this is not the case, but it is easily fixed by adding a
bounded number of circular crosscuts separating
$\gamma_1, \gamma_2$ and using a polygonal chain with
vertices on these crosscuts to connect each vertex on
$\gamma_1$ to the corresponding vertex on $\gamma_2$. It
is easy to see that this can be done with angles close
to $90^\circ$ if the number of intermediate crosscuts is
large enough and $\delta$ (the degree of thinness) is small enough.
See Figure \ref{HyperConnect}.
This places an additional constraint on the choice of $\delta$.
\begin{figure}[htbp]
\centerline{
\includegraphics[height=1.25in]{HyperConnect.eps}
}
\vskip .3in
\centerline{
\includegraphics[height=.5 in]{HyperConnect2.eps}
}
\caption{\label{HyperConnect}
By adding a bounded number of circular crosscuts
to a hyperbolic thin part we can connect any $P$
points on $\gamma_1$ to any $P$ points on $\gamma_2$
with a mesh using angles near $90^\circ$.
The arcs look like logarithmic spirals. Indeed, we can
think of this mesh as approximating the image of the
lower picture under the complex exponential map.
}
\end{figure}
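The caption's remark about the exponential map can be made concrete (our illustrative sketch): exp is conformal, so pushing a straight rectangular grid forward by exp yields a spiral quadrilateral mesh whose grid angles are preserved. The check below verifies the right angle and the local scale factor $e^{\mathrm{Re}\,z}$ at a sample point.

```python
import cmath, math

# Push two orthogonal directions at z0 forward by exp via difference
# quotients; conformality means the image directions stay orthogonal.
z0 = 0.5 + 1.2j
h = 1e-6
u = (cmath.exp(z0 + h) - cmath.exp(z0)) / h        # image of direction 1
v = (cmath.exp(z0 + 1j * h) - cmath.exp(z0)) / h   # image of direction i

assert abs(abs(cmath.phase(v / u)) - math.pi / 2) < 1e-4
# Local scale factor of exp is |exp'(z0)| = e^Re(z0).
assert abs(abs(u) - math.exp(z0.real)) < 1e-4
```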
In addition to the angle bounds, every quadrilateral in the
construction can be chosen to have bounded geometry (i.e.,
all four edges of comparable length with uniform constants)
except in two cases. First, when we mesh a parabolic thin part with
angle $\theta \ll 1$, the piece containing the vertex has
two sides with length only $O(\theta)$ as long as the other two.
Second, when meshing a hyperbolic thin part we use
long, narrow pieces, but if the long
sides have extremal distance $\delta$, we can refine the
mesh by subdividing each such piece into $O(1/\delta)$
bounded geometry quadrilaterals.
Thus if the hyperbolic thin parts of $\Omega$ have ``thinnesses''
$\{\delta_k\}$, then we can mesh $\Omega$ by $O(n+\sum_k \delta_k^{-1})$
quadrilaterals with angles in $[60^\circ, 120^\circ]$ and
bounded geometry, except for the pieces containing vertices with
small angles. If $\Omega$ has no small angles, then this gives
the smallest (up to a constant factor) bounded geometry mesh
of $\Omega$.
| {
"timestamp": "2020-07-17T02:03:08",
"yymm": "2007",
"arxiv_id": "2007.07983",
"language": "en",
"url": "https://arxiv.org/abs/2007.07983",
"abstract": "We show that any simple planar n-gon can be meshed in linear time by $O(n)$ quadrilaterals with all new angles bounded between $60$ and $120$ degrees.",
"subjects": "Computational Geometry (cs.CG); Complex Variables (math.CV)",
"title": "Optimal angle bounds for quadrilateral meshes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9891815534201612,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.707684219503155
} |
https://arxiv.org/abs/1307.5157 | Lower bounds on geometric Ramsey functions | We continue a sequence of recent works studying Ramsey functions for semialgebraic predicates in $\mathbb{R}^d$. A $k$-ary semialgebraic predicate $\Phi(x_1,\ldots,x_k)$ on $\mathbb{R}^d$ is a Boolean combination of polynomial equations and inequalities in the $kd$ coordinates of $k$ points $x_1,\ldots,x_k\in\mathbb{R}^d$. A sequence $P=(p_1,\ldots,p_n)$ of points in $\mathbb{R}^d$ is called $\Phi$-homogeneous if either $\Phi(p_{i_1}, \ldots,p_{i_k})$ holds for all choices $1\le i_1 < \cdots < i_k\le n$, or it holds for no such choice. The Ramsey function $R_\Phi(n)$ is the smallest $N$ such that every point sequence of length $N$ contains a $\Phi$-homogeneous subsequence of length $n$.Conlon, Fox, Pach, Sudakov, and Suk constructed the first examples of semialgebraic predicates with the Ramsey function bounded from below by a tower function of arbitrary height: for every $k\ge 4$, they exhibit a $k$-ary $\Phi$ in dimension $2^{k-4}$ with $R_\Phi$ bounded below by a tower of height $k-1$. We reduce the dimension in their construction, obtaining a $k$-ary semialgebraic predicate $\Phi$ on $\mathbb{R}^{k-3}$ with $R_\Phi$ bounded below by a tower of height $k-1$.We also provide a natural geometric Ramsey-type theorem with a large Ramsey function. We call a point sequence $P$ in $\mathbb{R}^d$ order-type homogeneous if all $(d+1)$-tuples in $P$ have the same orientation. Every sufficiently long point sequence in general position in $\mathbb{R}^d$ contains an order-type homogeneous subsequence of length $n$, and the corresponding Ramsey function has recently been studied in several papers. Together with a recent work of Bárány, Matoušek, and Pór, our results imply a tower function of $\Omega(n)$ of height $d$ as a lower bound, matching an upper bound by Suk up to the constant in front of $n$. | \section{Introduction}
\heading{Ramsey's theorem and the classical Ramsey function.}
A classical and fundamental theorem of Ramsey claims that for every $n$
there is a number $N$ such that for every coloring of
the edge set of the complete graph $K_N$ on
$N$ vertices there is a \emph{homogeneous} subset of $n$ vertices,
meaning that all edges in the complete subgraph induced by these
$n$ vertices have the same color. More generally, for every $k$ and $n$
there exists $N$ such that if the set of all $k$-tuples of elements
of an $N$-element set $X$ is colored by two colors, then there
exists an $n$-element homogeneous $Y\subseteq X$, with all $k$-tuples from $Y$
having the same color. Let $R_k(n)$ stand for the smallest $N$
with this property.
Considering $k$ fixed and $n$ large,
the best known lower and upper bounds for the Ramsey function $R_{k}(n)$ are
of the form\footnote{We employ the usual asymptotic notation
for comparing functions: $f(n)=O(g(n))$ means that
$|f(n)|\le C|g(n)|$ for some $C$ and all $n$, where $C$ may depend
on parameters declared as constants (in our case on $k$);
$f(n)=\Omega(g(n))$ is equivalent to $g(n)=O(f(n))$;
and $f(n)=\Theta(g(n))$ means that both $f(n)=O(g(n))$
and $f(n)=\Omega(g(n))$.}
$R_2(n)=2^{\Theta(n)}$ and, for $k\ge 3$,
\[
\twr_{k-1}(\Omega(n^2))\le R_{k}(n) \le \twr_{k}(O(n)),
\]
where the tower function $\twr_k(x)$ is defined by $\twr_1(x) = x$
and $\twr_{i+1} (x) = 2^{\twr_i (x)}$.
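In code, the tower function is just iterated exponentiation; a minimal Python sketch (the name `twr` mirrors the notation here and is not from any library):

```python
def twr(k, x):
    """Tower function: twr_1(x) = x and twr_{i+1}(x) = 2**twr_i(x)."""
    for _ in range(k - 1):
        x = 2 ** x
    return x

# The tower grows extremely fast in its height k:
# twr(1, 4) = 4, twr(2, 4) = 16, twr(3, 4) = 65536.
```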
A widely believed, and probably very difficult, conjecture
of Erd\H{o}s and Hajnal asserts that the upper bound is essentially
the truth. This is supported by known bounds for more than two colors,
where the lower bound for $k$-tuples is also a tower
of height $k$; see Conlon, Fox, and
Sudakov \cite{conlon-al} for a recent improvement
and more detailed overview of the known bounds.
\heading{Better Ramsey functions for geometric Ramsey-type results. }
Ramsey's theorem can be used to establish many geometric Ramsey-type results
concerning configurations of points, or of other geometric objects, in ${\mathbb{R}}^d$.
The first two examples, which up until now remain among the most
significant and beautiful ones, come from a 1935 paper of
Erd\H{o}s and Szekeres \cite{es-cpg-35}.
The first one asserts that every sufficiently long sequence
$(x_1,\ldots,x_N)$ of real numbers contains a subsequence
$(x_{i_1},x_{i_2},\ldots,x_{i_n})$, $i_1<i_2<\cdots<i_n$, that
is either increasing, i.e., $x_{i_1}< x_{i_2}<\cdots<x_{i_n}$,
or nonincreasing, i.e., $x_{i_1}\ge x_{i_2}\ge\cdots\ge x_{i_n}$.
Ramsey's theorem for $k=2$ yields the bound $N\le R_2(n)\le \twr_2(O(n))$
(color a pair $\{i,j\}$, $i<j$, red if $x_i<x_j$ and blue
if $x_i\ge x_j$), but the result is known to hold with $N=(n-1)^2+1$,
an exponential improvement over $R_2(n)$.
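The tightness of the bound $N=(n-1)^2+1$ can be checked computationally on the standard extremal example: a block construction of length $(n-1)^2$ with no monotone subsequence of length $n$. A small Python sketch (the helper names are ours, chosen for illustration):

```python
def longest(xs, rel):
    """Length of the longest subsequence whose consecutive terms
    satisfy rel (quadratic dynamic programming)."""
    best = [1] * len(xs)
    for i in range(len(xs)):
        for j in range(i):
            if rel(xs[j], xs[i]):
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

def extremal(m):
    """m increasing blocks, each a strictly decreasing run of length m:
    both the longest increasing and the longest nonincreasing
    subsequences have length exactly m."""
    return [b * m + (m - x) for b in range(m) for x in range(1, m + 1)]
```

For $m=n-1$ the sequence `extremal(m)` has length $(n-1)^2$ and no monotone subsequence of length $n$, so the Erd\H{o}s--Szekeres bound cannot be improved.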
For the second of the two Erd\H{o}s--Szekeres theorems mentioned above,
we consider a sequence $P=(p_1,p_2,\ldots,p_N)$ of points in the plane;
for simplicity, we assume that the $p_i$ are in general position
(no three collinear). If $N$ is sufficiently large, then there is
a subsequence $(p_{i_1},\ldots,p_{i_n})$, $i_1<i_2<\cdots<i_n$,
forming the vertex set of a convex $n$-gon, enumerated clockwise
or counterclockwise.
This time Ramsey's theorem yields $N\le R_3(n)\le\twr_3(O(n))$,
by coloring a triple
$\{i,j,k\}$, $i<j<k$, red if $p_i,p_j,p_k$ appear clockwise around the
boundary of their convex hull, and blue otherwise. Again, the optimal
bound is one exponential better, of order $2^{\Theta(n)}$.
It is natural to ask what is special about the two-colorings of
pairs and triples in the above two examples: what makes the Ramsey
functions here considerably smaller than those for arbitrary colorings?
One kind of combinatorial condition for two-colorings
of $k$-tuples that implies such improved bounds was given by
Fox, Pach, Sudakov, and Suk \cite{FoxPachSudSuk}, and another
by the first two authors \cite{highes-aim}; both of them include
the two Erd\H{o}s--Szekeres results as special cases.
However, a considerably more general, and probably more interesting,
reason for the better Ramsey behavior of these geometric examples
is that the colorings are ``algebraically defined''; more precisely,
they are given by \emph{semialgebraic predicates}.
\heading{Upper bounds for semialgebraic colorings. }
Let $x_1,\ldots,x_k$ be points in ${\mathbb{R}}^d$, with $x_{i,j}$ denoting
the $j$th coordinate of $x_i$; we regard the $x_{i,j}$ as variables.
A \emph{$k$-ary $d$-dimensional semialgebraic predicate} $\Phi(x_1,\ldots,x_k)$
is a Boolean combination of polynomial equations and inequalities
in the $x_{i,j}$. More explicitly, there are a Boolean formula
$\phi(X_1,\ldots,X_t)$ in Boolean variables $X_1,\ldots,X_t$ and
polynomials $f_1,\ldots,f_t$ in the variables
$x_{i,j}$, $1\le i\le k$, $1\le j\le d$, such that $\Phi(x_1,\ldots,x_k)=
\phi(A_1,\ldots,A_t)$, where $A_\ell$ is true if
$f_\ell(x_{1,1},\ldots,x_{k,d})\ge 0$ and false otherwise.
We call a sequence $(p_1,\ldots,p_n)$ of points in ${\mathbb{R}}^d$
{\em $\Phi$-homogeneous} if either $\Phi(p_{i_1},
\ldots,p_{i_k})$ holds for every choice $1\le i_1<\cdots<i_k\le n$,
or it holds for no such choice. The Ramsey function $R_\Phi(n)$ is
the smallest $N$ such that every point sequence of length $N$
contains a $\Phi$-homogeneous subsequence of length~$n$.
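These definitions translate directly into a brute-force check; a Python sketch, using the planar orientation predicate from the Erd\H{o}s--Szekeres example as one concrete semialgebraic $\Phi$ (a sign condition on a single polynomial):

```python
from itertools import combinations

def is_homogeneous(points, k, phi):
    """True iff all increasing k-tuples of the sequence give phi the same value."""
    return len({phi(*t) for t in combinations(points, k)}) <= 1

def has_homogeneous_subseq(points, n, k, phi):
    """Brute-force check whether some length-n subsequence is phi-homogeneous."""
    return any(is_homogeneous(s, k, phi) for s in combinations(points, n))

def ccw(p, q, r):
    """3-ary planar semialgebraic predicate: counterclockwise orientation."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) > 0
```

Here $R_\Phi(n)$ is the least $N$ making `has_homogeneous_subseq` return true for every input of length $N$; the sketch is of course only feasible for tiny instances.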
The following general upper bound was first proved by
Alon, Pach, Pinchasi, Radoi\v{c}i\'c, and Sharir \cite{apprs-semialg} for $k=2$,
and then generalized by Conlon, Fox, Pach, Sudakov, and Suk \cite{cfpss-semialg} for $k\ge 3$:
\begin{theorem}[\cite{apprs-semialg,cfpss-semialg}]\label{t:ub}
For every $d$, every $k$, and every $k$-ary
$d$-dimensional semialgebraic predicate $\Phi$,
\[
R_\Phi(n)\le\twr_{k-1}(n^C),
\]
where $C$ is a constant depending on $d,k,\Phi$.\footnote{Actually,
the constant $C$ depends on $\Phi$ only through its \emph{description
complexity}, which Conlon et al.\ define as $\max(m,D)$,
where $m$ is the number of polynomials occurring in $\Phi$ and
$D$ is the maximum degree of these polynomials. Thus, the bound
does not depend on the magnitude of the coefficients in the polynomials.}
\end{theorem}
Thus, the Ramsey function for $k$-ary semialgebraic predicates is
bounded above by a tower one lower than the ``combinatorial''
Ramsey function $R_k(n)$. Let us note that for the case of increasing
or nonincreasing subsequences ($k=2$, $d=1$) and subsequences in
convex position ($k=3$, $d=2$) as above, Theorem~\ref{t:ub} yields
somewhat weak bounds, namely, $n^{O(1)}$ and $2^{n^{O(1)}}$
instead of $n^2$ and $2^{O(n)}$, respectively, but still in the right
range.
By very different methods, Bukh and the second author \cite{BukhMat}
obtained a doubly exponential upper bound for all one-dimensional
semialgebraic predicates, for arbitrary $k$:
\begin{theorem}[\cite{BukhMat}] \label{t:bm}
For every $1$-dimensional
semialgebraic predicate $\Phi$ there is a constant $C$ such that
$R_\Phi(n)\le \twr_3(Cn)$.
\end{theorem}
This opens an interesting possibility, namely, that the Ramsey function
of $d$-dimensional semialgebraic predicates might be bounded
by a tower whose height depends only on $d$ (and not on $k$),
but currently this question is wide open. In any case, it
makes it interesting to study the dependence of the Ramsey function
on the dimension.
\heading{Lower bounds. } The classical Erd\H{o}s--Szekeres result
on subsequences in convex position
\cite{es-cpg-35} supplies a lower bound of $2^{\Omega(n)}=\twr_2(\Omega(n))$
in the setting of Theorem~\ref{t:ub} for $k=3$ and $d=2$.
The first two authors \cite{highes-aim} constructed a
reasonably natural\footnote{By a ``natural'' predicate we mean here
one that has a clear geometric meaning and
seems reasonable to study in its own right, not only as a lower-bound
example for a general result. In the case of \cite{highes-aim},
assuming that the considered four points $x_1,\ldots,x_4$ are
numbered in the order of increasing first coordinates, the predicate
asserts that $x_4$ lies above the graph of the unique
quadratic polynomial passing through
$x_1,x_2,x_3$.}
$4$-ary planar semialgebraic $\Phi$ with $R_\Phi(n)\ge
\twr_3(\Omega(n))$. This shows that for $k\le 4$, the height
of the tower in Theorem~\ref{t:ub} is optimal in terms of~$k$.
For $d=1$, \cite{BukhMat} provided a one-dimensional $5$-ary $\Phi$
with $R_\Phi(n)\ge \twr_3(\Omega(n))$, matching Theorem~\ref{t:bm}.
Conlon et al.~\cite{cfpss-semialg} improved the arity to~$4$,
which is optimal in view of Theorem~\ref{t:ub}.
Moreover, they obtained a lower bound almost matching Theorem~\ref{t:ub}
for an arbitrary $k$. Namely, for every $k\ge 4$ they constructed
a $d$-dimensional $k$-ary semialgebraic predicate $\Phi$
such that $R_\Phi(n) \ge \twr_{k-1}(\Omega(n))$.
However, the dimension $d$ in their construction is
large: $d = 2^{k-4}$.
\heading{A stronger lower bound. }
In this paper we first modify (and simplify) the
lower bound construction of Conlon et al.~\cite{cfpss-semialg},
obtaining examples in considerably lower dimension.
\begin{theorem}\label{t:lb}
For every $d \ge 2$ there is a $d$-dimensional
semialgebraic predicate $\Phi$ of arity $k=d+3$
such that
\[ R_\Phi(n) \ge \twr_{k-1}(\Omega(n)).
\]
\end{theorem}
The proof is given in Section~\ref{s:lower}.
In view of Theorem~\ref{t:bm}, the dependence of the tower height
on the dimension in this result might even be optimal.
\heading{Super-order-type homogeneous subsequences. }
Next, we provide a natural geometric Ramsey-type theorem in ${\mathbb{R}}^d$
in which the Ramsey function is a tower of height~$d$.
Let $T=(p_1,\ldots,p_{d+1})$ be an ordered $(d+1)$-tuple of points
in ${\mathbb{R}}^d$. We recall that the \emph{sign} (or \emph{orientation})
of $T$ is
defined as $\mathop {\rm sgn}\nolimits\det M$, where the $j$th column of the $(d+1)\times (d+1)$
matrix $M$ is $(1,p_{j,1},p_{j,2},\ldots,p_{j,d})$.
Geometrically, the sign
is $+1$ if the $d$-tuple
of vectors $p_1-p_{d+1},\ldots,p_{d}-p_{d+1}$ forms a positively
oriented basis of ${\mathbb{R}}^d$, it is $-1$ if it forms a negatively
oriented basis, and it is $0$ if these vectors are linearly dependent.
We call a sequence $(p_1,p_2,\ldots,p_n)$
of points in ${\mathbb{R}}^d$ in general position \emph{order-type homogeneous}
if all $(d+1)$-tuples $(p_{i_1},\ldots,p_{i_{d+1}})$,
$i_1<\cdots<i_{d+1}$, have the same sign (which is nonzero,
by the general position assumption). Such sequences are of interest
from various points of view: For example, the convex hull of an order-type
homogeneous sequence is combinatorially equivalent to
a cyclic polytope (see, e.g., \cite{z-lp-94} for background).
They can also be viewed as \emph{discrete Chebyshev systems};
see \cite{KarlinStudden}, as well as a remark below.
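The orientation sign and order-type homogeneity are straightforward to compute; a Python sketch (floating-point determinants, so the zero test uses a small tolerance):

```python
import numpy as np
from itertools import combinations

def orientation(simplex):
    """sgn det M, where the j-th column of M is (1, p_{j,1}, ..., p_{j,d})."""
    M = np.column_stack([(1.0, *p) for p in simplex])
    det = np.linalg.det(M)
    return 0 if abs(det) < 1e-9 else (1 if det > 0 else -1)

def order_type_homogeneous(points, d):
    """All increasing (d+1)-tuples have the same nonzero sign."""
    signs = {orientation(t) for t in combinations(points, d + 1)}
    return len(signs) == 1 and 0 not in signs
```

For instance, any increasing point sequence on the moment curve $(t,t^2)$ in the plane is order-type homogeneous, since the relevant determinants are Vandermonde determinants.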
By Ramsey's theorem, every sufficiently long point sequence
in general position contains an order-type homogeneous
subsequence of length $n$ (we color every $(d+1)$-tuple by its
sign). Letting $\OT_d(n)$ be the corresponding
Ramsey function, we obtain $\OT_d(n)\le\twr_d(n^C)$ from Theorem~\ref{t:ub}.
This has recently been improved to
$\OT_d(n)\le\twr_d(O(n))$ by Suk~\cite{Suk-OT}.
This upper bound is essentially tight. Until recently this was proved only for
$d=2$ (by \cite{es-cpg-35}) and $d=3$ \cite{highes-aim}.
As will be explained next, our results, together with
a recent paper of
B\'ar\'any, P\'or, and the second author \cite{curve-bmp},
yield a matching lower bound for all~$d$.
In the present paper we prove a lower bound for a somewhat stronger notion
of homogeneity. Namely, let $\pi_j\colon {\mathbb{R}}^d \to {\mathbb{R}}^j$
denote the projection on the first $j$ coordinates.
We say that a point sequence
$P=(p_1, \dotsc, p_n)$ in
${\mathbb{R}}^d$ is \emph{super-order-type homogeneous} if,
for each $j=1,2,\ldots,d$, the
projected sequence $\pi_j(P)=(\pi_j(p_1),\ldots,\pi_j(p_n))$
is order-type homogeneous.
By iterated application of Ramsey's theorem, it can be seen
that every sufficiently long point sequence in general position in ${\mathbb{R}}^d$
contains a super-order-type homogeneous subsequence of length~$n$.
Let $\OT^*_d(n)$ be the corresponding Ramsey function. We have the following
lower bound, proved in Section~\ref{s:lb-sup}:
\begin{theorem}\label{t:super}
For every $n\ge d+1$, $\OT^*_d(n)\ge \twr_d(n-d)$.
\end{theorem}
In \cite{curve-bmp} it is proved that
$\OT^*_d(n)\le \OT_d(C_d n)$ for every $d$, where $C_d$ is a suitable
constant. Thus, we also obtain a lower bound for
$\OT_d$, which is tight up to a multiplicative constant
in front of $n$:
\begin{corol}
We have $\OT_d(n) \geq \twr_d(\Omega(n))$.
\end{corol}
\heading{Chebyshev systems. } Let $A$ be a linearly ordered set
of at least $k+1$ elements. A (real) \emph{Chebyshev system} (also spelled
Tchebycheff) on $A$ is a system of continuous real functions
$f_0,f_1,\ldots,f_k\:A\to{\mathbb{R}}$ such that for every choice of elements
$t_0<t_1<\cdots<t_k$ in $A$, the matrix $(f_i(t_j))_{i,j=0}^k$ has
a (strictly) positive determinant. Chebyshev systems are mostly
considered for $A$ an interval in ${\mathbb{R}}$ with the natural ordering,
the basic example being $f_i(t)=t^i$, but the case
of finite $A$ (\emph{discrete} Chebyshev systems) has been investigated
as well. The functions $f_0,\ldots,f_k$ as above form a
\emph{Markov system}, also called a \emph{complete Chebyshev system},
if $f_0,\ldots,f_i$ is a Chebyshev system for every $i=1,2,\ldots,k$.
Chebyshev systems are of considerable importance in several areas,
such as approximation theory or the theory of finite moments;
see the classical monograph of Karlin and Studden
\cite{KarlinStudden} or, e.g., Carnicer, Pe\~na, and Zalik
\cite{CPZ} for a more recent study.
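The defining determinant condition is easy to experiment with numerically; a small Python sketch using the basic example $f_i(t)=t^i$, where the determinant $\det(f_i(t_j))$ is a Vandermonde determinant and hence positive whenever $t_0<t_1<\cdots<t_k$:

```python
import numpy as np

def chebyshev_det(fs, ts):
    """det(f_i(t_j)) for functions f_0,...,f_k at points t_0,...,t_k;
    the Chebyshev condition requires this to be strictly positive
    for every increasing choice of the t_j."""
    return np.linalg.det(np.array([[f(t) for t in ts] for f in fs]))

# The basic example f_i(t) = t^i (a Markov system on any interval).
monomials = [lambda t, i=i: float(t) ** i for i in range(4)]
```

Reordering the evaluation points swaps columns of the matrix and flips the sign, which is why the condition is stated for increasing $t_0<\cdots<t_k$ only.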
In our setting, it is easy to check that
an $n$-point order-type homogeneous sequence
$P=(p_1,\ldots,p_n)$ in ${\mathbb{R}}^d$ gives rise to a Chebyshev system
on $A=\{1,2,\ldots,n\}$, by setting $f_j(i)=p_{i,j}$
for $j=1,2,\ldots,d$ and $f_0\equiv 1$ (possibly with changing
the sign for one of the $f_i$, if
the signs of the $(d+1)$-tuples in $P$ are negative),
and conversely, from a discrete Chebyshev system with
$f_0\equiv 1$ we obtain an order-type homogeneous sequence.
Similarly, super-order-type homogeneous sequences correspond
to discrete Markov systems.
\section{Lower bound for semialgebraic predicates in a small dimension}
\label{s:lower}
Here we prove Theorem \ref{t:lb}. As was remarked in the introduction,
our construction can be regarded as a modification of that of
Conlon et al.~\cite{cfpss-semialg}, but we give a self-contained
presentation.
\heading{Stepping up. } The proof proceeds by induction on $d$;
having constructed a suitable $d$-dimensional
$k$-ary semialgebraic predicate
and an $N$-point sequence $P\subset {\mathbb{R}}^d$ without long $\Phi$-homogeneous
subsequences, we produce a $(d+1)$-dimensional
$(k+1)$-ary semialgebraic predicate $\Psi$
and a $2^N$-point sequence $Q\subset{\mathbb{R}}^{d+1}$ without
long $\Psi$-homogeneous subsequences.
Our basic tool is a classical stepping-up lemma of Erd\H{o}s and
Hajnal, see e.g.~\cite{grs-rt-90} or \cite{conlon-al}.
We first recall it in the standard combinatorial
setting, and then we will work on transferring it to a semialgebraic
setting.
Let $I=[N]:=\{1,2,\ldots,N\}$, and let $\chi\:\binom{I}{k}\to \{0,1\}$
be a given two-coloring of all $k$-tuples of $I$. Let $J=\{0,1\}^N$
be the set of all binary vectors of length $N$ ordered lexicographically.
We define a coloring $\chi'\:\binom{J}{k+1}\to\{0,1\}$ of all $(k+1)$-tuples
of $J$. First we introduce a function $\delta\:J\times J\to I$
by
\[\delta(\balpha,\bbeta)=\min\{i\in I:\alpha_i\ne\beta_i\}.
\]
For a $(k+1)$-tuple $(\balpha_1,\ldots,\balpha_{k+1})$
of binary vectors, $\balpha_1<_{\rm lex}\cdots
<_{\rm lex} \balpha_{k+1}$, we write
$\delta_\ell:=\delta(\balpha_\ell,\balpha_{\ell+1})$.
Then $\chi'$, the \emph{stepping-up coloring} for $\chi$, is given by
\begin{equation}\label{def-stepup}
\chi'(\balpha_1,\ldots,\balpha_{k+1}):=\alterdef{
\chi(\delta_1,\ldots,\delta_k)& \text{if }
\delta_1 < \dotsb < \delta_k \text{ or } \delta_1 > \dotsb >
\delta_k\\
1& \text{if } \delta_1 < \delta_2 > \delta_3\\
0& \text{otherwise.}
}
\end{equation}
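The definitions of $\delta$ and of the stepping-up coloring transcribe directly into code; a Python sketch with 1-based indices matching the text (`chi` is an arbitrary coloring of index $k$-tuples, passed in as a function):

```python
def delta(alpha, beta):
    """min{ i : alpha_i != beta_i }, with 1-based indexing."""
    return next(i for i, (a, b) in enumerate(zip(alpha, beta), start=1)
                if a != b)

def stepping_up(chi, alphas):
    """Stepping-up coloring chi' of a lexicographically increasing
    (k+1)-tuple of 0/1 vectors, given a coloring chi of index k-tuples."""
    ds = [delta(alphas[i], alphas[i + 1]) for i in range(len(alphas) - 1)]
    increasing = all(x < y for x, y in zip(ds, ds[1:]))
    decreasing = all(x > y for x, y in zip(ds, ds[1:]))
    if increasing or decreasing:
        return chi(ds)
    if ds[0] < ds[1] > ds[2]:
        return 1
    return 0
```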
Now the stepping-up lemma can be stated as follows.
\begin{lemma}[Stepping-up lemma]\label{lem:step-up}
If $\chi$ is a two-coloring of the $k$-tuples of $I:=[N]$
under which $I$ has no homogeneous subset of size $n$,
then, under the stepping-up coloring $\chi'$, the set $J=\{0,1\}^N$ contains
no homogeneous subset of size $2n+k-4$.
\end{lemma}
The proof is not very complicated and it can be found, e.g., in
\cite{cfpss-semialg} or \cite[Sec.~4.7]{grs-rt-90}.
\heading{Semialgebraic stepping up. } Now let $\Phi$ be a $d$-dimensional
$k$-ary
semialgebraic predicate, and let $P=(p_1,\ldots,p_N)$
be a point sequence in ${\mathbb{R}}^d$ indexed by the set $I=[N]$ as above.
Let $\chi=\chi_\Phi$ be the coloring of $k$-tuples of $I$ induced
by $\Phi$; that is, for $i_1<\cdots<i_k\in I$,
$\chi(i_1,\ldots,i_k)$ is $1$ or $0$ depending
on whether $\Phi(p_{i_1},\ldots,p_{i_k})$ holds or not.
We want to construct a sequence $Q$ in ${\mathbb{R}}^{d+1}$ indexed by $J=\{0,1\}^N$
and a $(d+1)$-dimensional $(k+1)$-ary semialgebraic predicate $\Psi$
such that the coloring induced
by $\Psi$ on $\binom{J}{k+1}$ is exactly the stepping-up coloring $\chi'$.
For our construction, we need to assume simple additional properties
of $\Phi$ and $P$, which we now introduce.
Let $P=(p_1,\ldots,p_N)$ be a sequence of points in ${\mathbb{R}}^d$.
We call a predicate $\Phi$ \emph{robust}\footnote{Conlon et
al.~\cite{cfpss-semialg}
use the term $\eta$-deep.} on $P$
if there is some $\eta>0$ such that
$\Phi(p_{i_1},\ldots,p_{i_k})\Leftrightarrow \Phi(p'_{i_1},\ldots,p'_{i_k})$
whenever $1\le i_1<\cdots<i_k\le N$ and $\|p_{i_j}-p'_{i_j}\|\le\eta$
for all $j=1,2,\ldots,k$.
In defining the new predicate $\Psi$, we will also need to
use the linear ordering of the points of $P$.
We thus say that a binary semialgebraic predicate $\prec$
on ${\mathbb{R}}^d$ is \emph{order-inducing} for $P$ if
$p_i\prec p_j$ iff $i<j$, for $i,j=1,2,\ldots,N$.
Now we can state our semialgebraic stepping-up lemma.
\begin{prop}[Semialgebraic stepping-up]\label{p:lb-i}
Let $\Phi$ be a $d$-dimensional $k$-ary semialgebraic predicate and let
$\prec$ be a $d$-dimensional binary semialgebraic predicate. Then
there are a $(d+1)$-dimensional $(k+1)$-ary semialgebraic predicate
$\Psi$ and a $(d+1)$-dimensional binary
semialgebraic predicate $\prec'$ with the following
property.
Let $P=(p_1,\ldots,p_N)$ be a point sequence in ${\mathbb{R}}^d$ such that
$\prec$ is order-inducing on $P$ and both $\Phi$ and $\prec$
are robust on $P$, and let $\chi_\Phi$ be the coloring
of $k$-tuples of $I=[N]$ induced by $\Phi$.
Then there is a point sequence $Q=(q_\balpha:\balpha\in J=\{0,1\}^N)$
such that $\prec'$ is order-inducing on $Q$ (w.r.t.\ the lexicographic
ordering of $J$), both $\Psi$ and $\prec'$ are robust on $Q$,
and the coloring $\chi_\Psi$ induced on the $(k+1)$-tuples
of $J$ by $\Psi$ is the stepping-up coloring for~$\chi_\Phi$.
\end{prop}
\begin{proof}
The construction of $Q$ uses a parameter $\eps>0$, which we
assume to be sufficiently small.
For $\balpha=(\alpha_1,\ldots,\alpha_N)\in J$, we set
\[
q_\balpha := \sum_{i=1}^N \alpha_i\eps^i (1,p_{i,1},p_{i,2},\ldots,p_{i,d})\in {\mathbb{R}}^{d+1}.
\]
In particular, the first coordinate of $q_{\balpha}$ is
$\sum_{i=1}^N \alpha_i\eps^i$. Hence, as is easy to check,
for $\eps$ sufficiently small,
the lexicographic ordering
of $J$ agrees with the ordering of $Q$ by the first coordinate,
and hence we can take the standard ordering in the first coordinate
as the required order-inducing (and obviously robust)
predicate $\prec'$ on~$Q$.
Next, we define a mapping $\sdelta\:{\mathbb{R}}^{d+1}\times{\mathbb{R}}^{d+1}\to {\mathbb{R}}^d$,
which will play the role of the $\delta$ from the stepping-up lemma
in the geometric setting. For points $x,y\in {\mathbb{R}}^{d+1}$, we set
\begin{equation}\label{e:sdelta}
\sdelta(x,y):= \left(\frac{x_2-y_2}{x_1-y_1},\frac{x_3-y_3}{x_1-y_1},\ldots,
\frac{x_{d+1}-y_{d+1}}{x_1-y_1}\right) \in{\mathbb{R}}^d.
\end{equation}
(Actually, $\sdelta(x,y)$ is undefined for $x_1=y_1$, but we will
use $\sdelta$ only for points with different first coordinates.)
By elementary calculation we can see that for $\balpha,\bbeta\in J$,
$\balpha\ne\bbeta$, we have
\begin{equation}
\label{e:lim}
\lim_{\eps\to 0} \sdelta(q_\balpha,q_{\bbeta})= p_{\delta(\balpha,\bbeta)}.
\end{equation}
This allows us to imitate the combinatorial definition \eqref{def-stepup}
of the stepping-up coloring
by a semialgebraic predicate $\Psi$. For a $(k+1)$-tuple of points
$(x_1,\ldots,x_{k+1})$ in ${\mathbb{R}}^{d+1}$, let us write
$\sdelta_\ell:=\sdelta(x_\ell,x_{\ell+1})$, and set
\[
\Psi(x_1,\ldots,x_{k+1}):=\alterdef{
\Phi(\sdelta_1,\ldots, \sdelta_k)&\mbox{ if }
\sdelta_1 \prec \cdots \prec \sdelta_k\\
\Phi(\sdelta_k,\ldots, \sdelta_1)&\mbox{ if }
\sdelta_1 \succ \cdots \succ \sdelta_k\\
\mathrm{true}&\mbox{ if } \sdelta_1 \prec \sdelta_2\succ \sdelta_3\\
\mathrm{false}&\mbox{ otherwise.}
}
\]
As written, $\Psi$ is not necessarily a semialgebraic predicate,
since the definition of $\sdelta$ involves division. However, we can
always multiply by the denominators and introduce appropriate
conditions; e.g., $\frac uv<1$ can be replaced with
$(u<v\wedge v>0)\vee (u>v\wedge v<0)$, which is equivalent whenever $\frac uv$
is defined. In this way, we obtain an honest semialgebraic predicate.
It remains to check that $\Psi$ induces the stepping-up coloring
on $J$, which is straightforward using the robustness of $\Phi$
and $\prec$ and the limit relation \eqref{e:lim}. Indeed,
let us fix $\balpha_1<_{\rm lex}\cdots <_{\rm lex}\balpha_{k+1}\in J$ and
write $\sdelta_\ell:=\sdelta(q_{\balpha_\ell},q_{\balpha_{\ell+1}})$
and $\delta_\ell:=\delta(\balpha_\ell,\balpha_{\ell+1})$.
Then for $\eps$ sufficiently small, we have $\sdelta_\ell
\prec\sdelta_{\ell+1}$ iff $p_{\delta_\ell}\prec p_{\delta_{\ell+1}}$
(by the robustness of $\prec$) iff $\delta_\ell<\delta_{\ell+1}$
(since $\prec$ is order-inducing on $P$).
Assuming $\sdelta_1\prec \sdelta_2\prec\cdots\prec\sdelta_k$,
we get that $\Phi(\sdelta_1,\ldots,\sdelta_k)$ iff
$\Phi(p_{\delta_1},\ldots,p_{\delta_k})$, again for
all sufficiently small $\eps$; similarly if
$\sdelta_1\succ \sdelta_2\succ\cdots\succ\sdelta_k$.
Therefore,
the coloring induced by $\Psi$ on $Q$ is indeed
the stepping-up coloring for $\chi_\Phi$ as claimed.
It remains to verify that $\Psi$ is robust on $Q$, but this is clear
from the robustness of $\Phi$ and $\prec$ and the continuity of
$\sdelta$ on the subset of ${\mathbb{R}}^{d+1}\times{\mathbb{R}}^{d+1}$ where it is defined.
\end{proof}
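The construction of $Q$ and the limit relation \eqref{e:lim} can be checked numerically; a Python sketch with a tiny example in ${\mathbb{R}}^1$ (the value of `eps` is illustrative):

```python
def q(alpha, P, eps):
    """q_alpha = sum_i alpha_i * eps^i * (1, p_i), indices i starting at 1."""
    out = [0.0] * (len(P[0]) + 1)
    for i, (a, p) in enumerate(zip(alpha, P), start=1):
        if a:
            w = eps ** i
            out[0] += w
            for j, c in enumerate(p):
                out[j + 1] += w * c
    return tuple(out)

def sdelta(x, y):
    """Componentwise difference quotient against the first coordinate."""
    return tuple((a - b) / (x[0] - y[0]) for a, b in zip(x[1:], y[1:]))

P = [(1.0,), (3.0,)]   # a two-point sequence in R^1
eps = 1e-3
qa, qb, qc = q((1, 0), P, eps), q((1, 1), P, eps), q((0, 1), P, eps)
assert abs(sdelta(qa, qb)[0] - 3.0) < 1e-2  # delta = 2, limit point p_2
assert abs(sdelta(qa, qc)[0] - 1.0) < 1e-2  # delta = 1, limit point p_1
```

One can also verify that, for small `eps`, sorting the vectors $\balpha$ lexicographically sorts the points $q_\balpha$ by their first coordinate, as used for the predicate $\prec'$.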
\heading{Proof of Theorem \ref{t:lb}.}
As announced, we prove the theorem by induction on $d$.
For the base case $d=1$, we use a result of Conlon et al.~\cite{cfpss-semialg},
who construct a $4$-ary semialgebraic predicate $\Phi_1$ on ${\mathbb{R}}^1$
and, for every $n$, a sequence $P_1\subset {\mathbb{R}}$ of length $\twr_3(\Omega(n))$
with no $\Phi_1$-homogeneous subsequence of length $n$. It is obvious
from their construction that $\Phi_1$ is robust on $P_1$ and
that $<$, the usual inequality among real numbers, is robust and order-inducing
on $P_1$.
The theorem then follows by a $(d-1)$-fold application of
Proposition~\ref{p:lb-i} together with the stepping-up
lemma (Lemma~\ref{lem:step-up}).
\ProofEndBox\smallskip
\section{Lower bound for super order type}\label{s:lb-sup}
Here we prove Theorem~\ref{t:super}. Thus, we need to exhibit
long point sequences without super-order-type homogeneous subsequences
of length $n$. The construction is almost identical to the one
in the previous section, only the base case for $d=1$ is different.
The proof essentially consists in relating super-order-type homogeneity
to another property, which we call super-monotonicity; checking
that the constructed sequence has no super-monotone subsequences
of length~$n$
is straightforward.
First, for convenience, we extend the definition of the bivariate function
$\sdelta$ from \eqref{e:sdelta} in
the previous section to an arbitrary number of arguments.
Namely, we set $\sdelta(p)=p$ and, for $k\ge 2$,
\[
\sdelta(p_1,\dots,p_{k+1}):=\sdelta(\sdelta(p_1,\dots,p_k),\sdelta(p_2,\dots,p_{k+1})).
\]
Again, we are going to use $\sdelta$ only with arguments for which it
is well defined.
For points $p,q\in{\mathbb{R}}^d$, we write $p\leqone q$ if $p_1<q_1$ (strict
inequality in the first coordinate).
A point sequence $P=(p_1,\dots,p_n)$ in ${\mathbb{R}}^d$ is
\emph{super-monotone} if, for each $j=1,2,\ldots,d$, the point sequence
$(\sdelta(p_1,\dots,p_j),\dots,\sdelta(p_{n-j+1},\dots,p_n))$
in ${\mathbb{R}}^{d-j+1}$ is monotone according to $\leqone$.
Here is the key technical result.
\begin{prop}\label{p:sot-sm}
A point sequence $(p_1,\dots,p_n)$ in ${\mathbb{R}}^d$ is super-monotone
if and only if it is super-order-type homogeneous.
\end{prop}
The proof will be given at the end of this section, after
some algebraic lemmas. First we finish the proof of Theorem~\ref{t:super},
assuming the proposition.
\heading{Proof of Theorem \ref{t:super}.}
We will construct a sequence $P_d(n)$ in general position
in ${\mathbb{R}}^d$ of length $\twr_d(n-d)$ and containing
no super-order-type homogeneous subsequence of length~$n$.
We proceed by induction on $d$. The inductive hypothesis will include
the assumption that the first coordinate in $P_d(n)$ is strictly
increasing.
For $d=1$ we set $P_1(n):=(1,2,\dots,n-1)$.
Now we construct $P_{d+1}(n)$ from $P_d(n-1)=(p_1,\ldots,p_N)$,
using the same construction as in Proposition \ref{p:lb-i}. That is,
$
P_{d+1}(n)=(q_\balpha:\balpha\in \{0,1\}^N),
$
where the binary vectors $\balpha$ are ordered lexicographically,
and where, with $\eps>0$ sufficiently small,
\[
q_\balpha:= \sum_{i=1}^N \alpha_i\eps^i(1,p_{i,1},p_{i,2},\ldots,p_{i,d}) \in {\mathbb{R}}^{d+1}, \quad \balpha\in\{0,1\}^N.
\]
(The $\eps$ is different in each inductive step, and in particular,
the one used to construct $P_{d+1}(n)$ from $P_d(n-1)$ is much smaller
than the one used to construct $P_{d}(n-1)$ from $P_{d-1}(n-2)$, etc.)
Because of the robustness of the super-order-type condition, we can slightly perturb the points so that they are in general position.
As in the previous section, the points of $P_{d+1}(n)$, ordered
according to the lexicographic ordering
of the indices $\balpha$, have increasing first coordinates
(for $\eps$ sufficiently small).
Now we assume for contradiction
that $P_{d+1}(n)$ contains a super-order-type homogeneous subsequence
$S=(s_1,\dots,s_n)$.
By Proposition~\ref{p:sot-sm}, $S$ is super-monotone.
Thus, setting $t_\ell=\sdelta(s_\ell,s_{\ell+1})$, $\ell=1,2,\ldots,n-1$,
the sequence $T=(t_1,\ldots,t_{n-1})$ is super-monotone as well by definition.
By the limit relation \eqref{e:lim}, for $\eps\to 0$,
each $t_\ell$ tends to a point $p_{i_\ell}$ of $P_{d}(n-1)$.
Moreover, by super-monotonicity, we have $t_1\leqone\cdots\leqone t_{n-1}$.
Hence $p_{i_1}\leqone \cdots\leqone p_{i_{n-1}}$ for sufficiently
small $\eps$ and therefore, since
the first coordinates are increasing in $P_d(n-1)$ by the inductive
hypothesis, we have $i_1<\cdots<i_{n-1}$.
Consequently, using Proposition~\ref{p:sot-sm} again, $(p_{i_1}, \dotsc,
p_{i_{n-1}})$ is a super-order-type homogeneous subsequence of
$P_d(n-1)$---a contradiction proving the theorem.
\ProofEndBox\smallskip
\heading{Algebraic lemmas. } It remains to prove Proposition~\ref{p:sot-sm},
and for this, we need to develop some algebraic results.
Given a $k$-tuple $T=(p_1,\dots,p_k)$ of points in ${\mathbb{R}}^d$, $1\le k\le d$,
and an index $j\ge k-1$, we put
\[
D_j(T)=\det
\begin{pmatrix}
1 & 1 & \dots & 1\\
p_{1,1} & p_{2,1} & \dots & p_{k,1}\\
\vdots & \vdots & \ddots & \vdots\\
p_{1,k-2} & p_{2,k-2} & \dots & p_{k,k-2}\\
p_{1,j} & p_{2,j} & \dots & p_{k,j}\\
\end{pmatrix}
\]
and
\[\overrightarrow D_j(T)=(D_j(T),D_{j+1}(T),\dots,D_d(T)).\]
Let us remark that $k$ is not represented explicitly in the notation,
but it can be inferred from the number of arguments of $D_j$.
We also note that $\mathop {\rm sgn}\nolimits D_{k-1}(p_1,\dots,p_k)$ is the sign of the $k$-tuple
$\pi_{k-1}(T)$.
\begin{lemma}\label{lem:matrices}
If $A=(p_1,\dots,p_k)$ and $B=(p_2,\dots,p_{k+1})$, then, for
$j\ge k$, we have
\[D_{k-1}(A) D_{j} (B)- D_{k-1}(B) D_{j}(A)=D_{k-2}(p_2,\dots,p_k) D_{j}(p_1,\dots,p_{k+1}).\]
\end{lemma}
\begin{proof}
It is enough to do the case $j=k$ (we have $j\ge k$, and so
in the identity of the lemma, the
$j$th coordinates of the $p_i$ appear only in the determinants
$D_j(A)$, $D_j(B)$, and $D_j(p_1,\ldots,p_{k+1})$).
We define the $(k+1)\times(k+1)$ matrix
\[M_{k+1}(p_1,\dots,p_{k+1})=
\begin{pmatrix}
1 & 1 & \dots & 1\\
p_{1,1} & p_{2,1} & \dots & p_{k+1,1}\\
\vdots & \vdots & \ddots & \vdots\\
p_{1,k} & p_{2,k} & \dots & p_{k+1,k}
\end{pmatrix}.\]
All the matrices whose determinants we are interested in are submatrices
of $M_{k+1}$, and they all contain the matrix $M_{k-1}=
M_{k-1}(p_2,\dots,p_k)$ associated with $D_{k-2}(p_2,\dots,p_k)$.
We can use elementary row and column operations on $M_{k+1}$ to diagonalize
$M_{k-1}$ while leaving the determinants fixed, and we can also assume
that the entries below $M_{k-1}$, as well as those
to the left and to the right of it, are~$0$, as is illustrated next:
\[
\begin{pmatrix}
\cline{2-4}
1 & \multicolumn{1}{|c}{1} & \dots & \multicolumn{1}{c|}{1} & 1\\
p_{1,1} & \multicolumn{1}{|c}{p_{2,1}} & \dots & \multicolumn{1}{c|}{p_{k,1}} & p_{k+1,1}\\
\vdots & \multicolumn{1}{|c}{\vdots} & M_{k-1} & \multicolumn{1}{c|}{\vdots} & \vdots\\
p_{1,k-2} & \multicolumn{1}{|c}{p_{2,k-2}} & \dots & \multicolumn{1}{c|}{p_{k,k-2}} & p_{k+1,k-2}\\
\cline{2-4}
p_{1,k-1} & p_{2,k-1} & \dots & p_{k,k-1} & p_{k+1,k-1}\\
p_{1,k} & p_{2,k} & \dots & p_{k,k} & p_{k+1,k}\\
\end{pmatrix}
\longrightarrow
\begin{pmatrix}
\cline{2-4}
0 & \multicolumn{1}{|c}{m_1} & \dots & \multicolumn{1}{c|}{0} & 0\\
0 & \multicolumn{1}{|c}{0} & \dots & \multicolumn{1}{c|}{0} & 0\\
\vdots & \multicolumn{1}{|c}{\vdots} & \ddots & \multicolumn{1}{c|}{\vdots} & \vdots\\
0 & \multicolumn{1}{|c}{0} & \dots & \multicolumn{1}{c|}{m_{k-2}} & 0\\
\cline{2-4}
x & 0 & \dots & 0 & u\\
y & 0 & \dots & 0 & v\\
\end{pmatrix}.
\]
Now we can compute the determinants in the following way:
\[
\begin{aligned}
D_{k}(p_1,\ldots,p_{k+1})&=(-1)^{k+1}(xv-yu)\det(M_{k-1})\\
D_{k-2}(p_2,\dots,p_k)&=\det(M_{k-1})\\
D_{k-1}(A)&=(-1)^{k+1}x\det(M_{k-1})
\end{aligned}
\qquad
\begin{aligned}
D_{k}(B)&=v\det(M_{k-1})\\
D_{k}(A)&=(-1)^{k+1}y\det(M_{k-1})\\
D_{k-1}(B)&=u\det(M_{k-1}).
\end{aligned}
\]
The lemma follows.
\end{proof}
\begin{lemma}\label{lem:sigma}
\[\sdelta(p_1,\dots,p_k)=\frac{\overrightarrow{D}_k(p_1,\dots,p_k)}{D_{k-1}(p_1,\dots,p_k)}.\]
\end{lemma}
\begin{proof}
The proof goes by induction on $k$. The cases $k=1,2$ are trivial.
Assume the lemma is true for $k$ and we have points
$p_1,\dots,p_{k+1}\in{\mathbb{R}}^d$. For simplicity we write
$A=(p_1,\dots,p_k)$ and $B=(p_2,\dots,p_{k+1})$. Then we have,
with $\pi\:{\mathbb{R}}^d\to{\mathbb{R}}^{d-1}$ denoting the projection omitting the
first coordinate,
\begin{align*}
\sdelta(p_1,\dots,p_{k+1})&=\sdelta(\sdelta(A),\sdelta(B))\\
&= \frac{\pi(\sdelta(A))-\pi(\sdelta(B))}{(\sdelta(A))_1-(\sdelta(B))_1}\\
&= \frac{D_{k-1} (A)\overrightarrow{D}_{k+1} (B)-D_{k-1} (B)\overrightarrow{D}_{k+1} (A)}
{D_{k-1} (A)D_{k} (B)-D_{k-1} (B)D_{k} (A)}.
\end{align*}
The last equality follows from the fact that $\pi(\overrightarrow{D}_{k}) = \overrightarrow{D}_{k+1}$ and by clearing the denominators.
To finish the proof we use Lemma~\ref{lem:matrices} on the denominator and each coordinate of the numerator.
\end{proof}
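Lemma~\ref{lem:sigma} is easy to verify numerically; a Python sketch implementing $D_j$ and the iterated $\sdelta$ directly from their definitions (floating-point determinants via numpy), checked on a triple of points on the moment curve in ${\mathbb{R}}^4$:

```python
import numpy as np

def sdelta(*pts):
    """Iterated difference-quotient map from the text."""
    if len(pts) == 1:
        return pts[0]
    if len(pts) == 2:
        x, y = pts
        return tuple((a - b) / (x[0] - y[0]) for a, b in zip(x[1:], y[1:]))
    return sdelta(sdelta(*pts[:-1]), sdelta(*pts[1:]))

def D(j, T):
    """D_j(T) for a k-tuple T: rows are all-ones, coordinates 1..k-2,
    and coordinate j (1-based), evaluated at the points of T as columns."""
    k = len(T)
    rows = [[1.0] * k]
    rows += [[p[r] for p in T] for r in range(k - 2)]
    rows.append([p[j - 1] for p in T])
    return np.linalg.det(np.array(rows))

# Points (t, t^2, t^3, t^4) for t = 0, 1, 2; here k = 3, d = 4, and
# Lemma lem:sigma asserts sdelta(p_1,...,p_k) = (D_k,...,D_d)/D_{k-1}.
P = [tuple(float(t) ** e for e in range(1, 5)) for t in (0, 1, 2)]
lhs = sdelta(*P)
rhs = tuple(D(j, P) / D(2, P) for j in (3, 4))
```

On this example both sides evaluate to $(3,7)$, which can also be checked by hand from the Vandermonde-type determinants involved.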
\heading{Proof of Proposition~\ref{p:sot-sm}.}
We generalize the notions of super-monotonicity and
super-order-type homogeneity as follows.
We say that a point sequence $(p_1\dots,p_n)$ is \emph{$k$-monotone}
if for all $j\le k$ the point sequence $(\sdelta(p_1,\dots,p_j),\dots,\sdelta(p_{n-j+1},\dots,p_n))$ is monotone according to $\leqone$.
We say that $(p_1,\dots,p_n)$ is \emph{$k$-order-type homogeneous} if for all $j\le k$ the sequence of projections $(\pi_j(p_1),\dots,\pi_j(p_n))$
in ${\mathbb{R}}^j$ is order-type homogeneous.
By induction on $k$, we prove that
a point sequence $(p_1,\dots,p_n)$ in ${\mathbb{R}}^d$ is $k$-monotone
if and only if it is $k$-order-type homogeneous; for $k=d$
this is the statement of the proposition.
The cases $k=1,2$ are trivial. So we assume that the claim
is true up to some $k$ and we are given a sequence of $n$ points.
We may assume that this sequence is $k$-monotone, and hence also
$k$-order-type homogeneous. Then we only need to show that
$(\sdelta(p_1,\dots,p_{k+1}),\dots,\sdelta(p_{n-k},\dots,p_n))$ is
monotone according to~$\leqone$ if and only if
$(\pi_{k+1}(p_1),\dots,\pi_{k+1}(p_n))$ is order-type homogeneous.
Let $(q_1,\ldots,q_{k+2})$ be a $(k+2)$-point subsequence of $(p_1,\ldots,p_n)$.
By Lemma~\ref{lem:sigma}, the condition $\sdelta(q_1,\dots,q_{k+1})\leqone\sdelta(q_2,\dots,q_{k+2})$ is equivalent to
\begin{equation}\label{eq:1}
\frac{D_{k+1}(q_2,\dots,q_{k+2})} {D_{k}(q_2,\dots,q_{k+2})}-\frac{D_{k+1}(q_1,\dots,q_{k+1})} {D_{k}(q_1,\dots,q_{k+1})} >0.
\end{equation}
Since $(q_1,\ldots,q_{k+2})$ is $k$-order-type homogeneous, we have
\[D_{k}(q_1,\dots,q_{k+1})D_{k}(q_2,\dots,q_{k+2})>0,\]
and therefore, \eqref{eq:1} is equivalent to
\[D_{k+1}(q_2,\dots,q_{k+2})D_{k}(q_1,\dots,q_{k+1})-D_{k+1}(q_1,\dots,q_{k+1})D_{k}(q_2,\dots,q_{k+2})>0.\]
By Lemma~\ref{lem:matrices} this is just
\[D_{k-1}(q_2,\dots,q_{k+1})D_{k+1}(q_1,\dots,q_{k+2})>0.\]
Since our sequence is also $(k-1)$-order-type homogeneous, the numbers
\[D_{k-1}(p_1,\dots,p_{k}),\,D_{k-1}(p_2,\dots,p_{k+1}),\,\ldots,\,D_{k-1}(p_{n-k+1},\dots,p_{n})\]
have the same sign and therefore the numbers
\[D_{k+1}(p_1,\dots,p_{k+2}),\,D_{k+1}(p_2,\dots,p_{k+3}),\,\ldots,\,D_{k+1}(p_{n-k-1},\dots,p_{n})\]
also have the same sign.
This is precisely the condition needed for the sequence
to be $(k+1)$-order-type homogeneous.
\ProofEndBox\smallskip
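To illustrate the equivalence just proved, here is a small check (ours, not from the paper) that a standard example, points on the moment curve $t\mapsto(t,t^2,\dots,t^d)$ with increasing parameters, is $d$-order-type homogeneous: for every $j\le d$, all orientation determinants of $(j+1)$-tuples of the projected points have the same sign.

```python
import numpy as np
from itertools import combinations

def orientation_sign(points):
    """Sign of det of the rows (1, p_i): orientation of a (j+1)-tuple in R^j."""
    pts = np.asarray(points, dtype=float)
    M = np.hstack([np.ones((len(pts), 1)), pts])
    return np.sign(np.linalg.det(M))

d, n = 3, 7
ts = np.linspace(0.1, 2.0, n)
P = np.array([[t ** i for i in range(1, d + 1)] for t in ts])  # moment curve

for j in range(1, d + 1):
    proj = P[:, :j]                      # projection to the first j coordinates
    signs = {orientation_sign(proj[list(c)])
             for c in combinations(range(n), j + 1)}
    assert len(signs) == 1               # all (j+1)-tuples equally oriented
```

For the moment curve the determinants are Vandermonde determinants, hence all positive; the check confirms this for each projection.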
\iffalse
\section{Upper bound for super order type}\label{s:sup-up}
Here we give a proof, already sketched in the introduction
after Theorem~\ref{t:super}, of the upper bound
$\OT^*_d(n)\le\twr_d(n^C)$, with $C$ depending only on~$d$.
First we need a straightforward generalization of Theorem~\ref{t:ub},
the upper bound for the Ramsey function for $k$-ary semialgebraic
predicates, to several colors. We say that, for a point sequence
$P$ in ${\mathbb{R}}^d$, an $r$-coloring $\chi\:\binom{P}{k}\to[r]$
is \emph{semialgebraic}
if there are $d$-dimensional $k$-ary semialgebraic predicates
$\Phi_1,\ldots,\Phi_{r-1}$ such that
$\chi(p_1,\ldots,p_k)=i$ is equivalent to
\[
\neg \Phi_1(p_1,\ldots,p_k)\wedge \cdots\wedge
\neg \Phi_{i-1}(p_1,\ldots,p_k)\wedge \Phi_i(p_1,\ldots,p_k),
\]
where $\Phi_r$ is interpreted as true.
Let $R_{\Phi_1,\ldots,\Phi_{r-1}}$ be the corresponding Ramsey function.
\begin{theorem}\label{t:ub-r}
For every $d,k,r$ and $d$-dimensional $k$-ary semialgebraic predicates
$\Phi_1,\ldots,\Phi_{r-1}$, there is $C$ such that
$R_{\Phi_1,\ldots,\Phi_{r-1}}(n)\le \twr_{k-1}(n^C)$.
\end{theorem}
\heading{Sketch of proof. } For $k=2$, the desired bound is $n^C$,
and it can be obtained by an $(r-1)$-fold application of Theorem~\ref{t:ub}
for two colors: merge colors $2,\ldots,r$ into one and obtain a homogeneous
subset $S$; if $S$ has color $1$, stop; otherwise restrict to $S$ and repeat.
For $k>2$, one can use the inductive argument from Conlon et al.~\cite{cfpss-semialg} almost verbatim, the only change being that $r$ enters into the
estimate for the number of equivalence classes.
\ProofEndBox\smallskip
\heading{Proof of the upper bound for super-order-type
homogeneous subsequences.}
Let $P=(p_1,\ldots,p_N)$ be a point sequence in ${\mathbb{R}}^d$ in general
position. We define a three-coloring of $\binom{P}{d+1}$ by setting
the color of a $(d+1)$-tuple $T\in
\binom{P}{d+1}$ as follows:
\[\chi(T)=\begin{cases}
\text{red} & \text{if the sign of $T$ is $+1$ and $T$ is super-order-type homogeneous,}\\
\text{blue} & \text{if the sign of $T$ is $-1$ and $T$ is super-order-type homogeneous,}\\
\text{white} & \text{if $T$ is not super-order-type homogeneous.}
\end{cases}\]
This three-coloring is clearly semialgebraic, and so by Theorem~\ref{t:ub-r},
we can find an $n$-point homogeneous subsequence $Q$ in $P$ for
$N\ge \twr_d(n^C)$, with a suitable~$C$.
First we observe that for $n$ sufficiently large, the color of $Q$ cannot be
white. Indeed, if $n$ exceeds a suitable constant,
then $Q$ has a super-order-type homogeneous subsequence of length $d+1$
(by iterated Ramsey's theorem for example), but such a $(d+1)$-tuple
is red or blue.
Assuming that the color of $Q$ is red or blue, we show that $Q$
is super-order-type homogeneous. By the definition of the coloring,
it is clear that $Q$ is order-type homogeneous, so we consider
$\pi_j(Q)$ for some $j<d$. If there are two $(j+1)$-tuples $T_1,T_2\in
\binom{Q}{j+1}$ with $\pi_j(T_1)$ and $\pi_j(T_2)$ having different sign,
then there must also be two such $(j+1)$-tuples $T'_1,T'_2$
whose symmetric difference has two elements. But then
$|T_1'\cup T'_2|= j+2\le d+1$. We form a $(d+1)$-tuple $T\subseteq Q$
by taking $T_1'\cup T_2'$ and adding $d-j-1$ more elements of $Q$.
This yields a $(d+1)$-tuple that is not super-order-type homogeneous
and hence white---a contradiction.
\ProofEndBox\smallskip
\fi
\subsection*{Acknowledgment}
We would like to thank Imre B\'ar\'any for useful discussions.
\bibliographystyle{alpha}
% arXiv:1307.5157, "Lower bounds on geometric Ramsey functions" (math.CO)
% arXiv:0902.4869, "Higher rank numerical ranges of normal matrices"
\begin{abstract}
The higher rank numerical range is closely connected to the construction of
quantum error correction codes for a noisy quantum channel. It is known that if
a normal matrix $A \in M_n$ has eigenvalues $a_1, \ldots, a_n$, then its higher
rank numerical range $\Lambda_k(A)$ is the intersection of convex polygons with
vertices $a_{j_1}, \ldots, a_{j_{n-k+1}}$, where
$1 \le j_1 < \cdots < j_{n-k+1} \le n$. In this paper, it is shown that the
higher rank numerical range of a normal matrix with $m$ distinct eigenvalues
can be written as the intersection of no more than $\max\{m,4\}$ closed half
planes. In addition, given a convex polygon ${\mathcal P}$, a construction is
given for a normal matrix $A \in M_n$ with minimum $n$ such that
$\Lambda_k(A) = {\mathcal P}$. In particular, if ${\mathcal P}$ has $p$
vertices, with $p \ge 3$, there is a normal matrix $A \in M_n$ with
$n \le \max\left\{p+k-1, 2k+2 \right\}$ such that $\Lambda_k(A) = {\mathcal P}$.
\end{abstract}
\section{Introduction}
\setcounter{equation}{0}
Let $M_n$ be the algebra of $n\times n$ complex matrices regarded as
linear operators acting on the $n$-dimensional Hilbert space $\IC^n$.
The {\em classical numerical range} of $A\in M_n$ is defined and denoted by
$$W(A) = \{ x^*Ax\in \IC:
x\in \IC^n \hbox{ with } x^*x = 1\},$$ which is a useful concept
in studying matrices and operators; see \cite{HJ}.
In the context of quantum information theory,
if the quantum states are represented as matrices in $M_n$,
then a {\it quantum channel} is
a trace preserving completely positive map
$L: M_n \rightarrow M_n$ with the following
operator sum representation
\begin{equation} \label{opersum}
L(A) = \sum_{j=1}^r E_j^* A E_j,
\end{equation}
where $E_1, \dots, E_r \in M_n$ satisfy
$\sum_{j=1}^r E_j E_j^* = I_n$.
The matrices $E_1, \dots, E_r$ are known as the
{\em error operators} of the quantum channel $L$.
A subspace $V$ of $\IC^n$
is a {\em quantum error correction code} for the channel $L$
if and only if the orthogonal projection $P \in M_n$
with range space $V$ satisfies $PE_i^*E_jP = \gamma_{ij} P$
for all $i,j \in \{1, \dots, r\}$; for example, see
\cite{KL,KLV,KLPL}.
In this connection, for $1 \le k < n$ researchers
define the {\em rank-$k$ numerical range} of $A\in M_n$ by
$$\Lambda_k(A) = \{ \lambda \in \IC: PAP = \lambda P
\hbox{ for some rank-$k$ orthogonal projection } P\},$$
and the {\em joint rank-$k$ numerical range} of $A_1, \dots, A_m
\in M_n$ by $\Lambda_k(A_1, \dots, A_m)$ to be the collection of
complex vectors $(a_1, \dots, a_m) \in \IC^{1\times m}$ such that
$PA_jP = a_j P$ for a rank-$k$ orthogonal projection $P \in M_n$.
Evidently, there is a quantum error correction code $V$ of dimension
$k$ for the quantum channel $L$ described in (\ref{opersum})
if and only if
$\Lambda_k(A_1, \dots, A_m)$ is non-empty for $(A_1, \dots, A_m)
= (E_1^*E_1, E_1^*E_2, \dots, E_r^*E_r)$. Also, it is easy to see
that if $(a_1, \dots, a_m) \in \Lambda_k(A_1, \dots, A_m)$ then
$a_j \in \Lambda_k(A_j)$ for $j = 1,\dots, m$.
When $k = 1$, $\Lambda_k(A)$
reduces to the classical numerical range $W(A)$.
Recently,
interesting results have been obtained for the rank-$k$ numerical
range and the joint rank-$k$ numerical range; see
\cite{Cet,Cet0,Cet1,Cet2,GLW,LP,LPS,LPS2,LS,W1}. In particular,
an explicit description of the rank-$k$ numerical range of $A \in
M_n$ is given in \cite{LS}, namely,
\begin{eqnarray}\label{eq1.1}
\Lambda_k(A) = \bigcap_{\xi\in [0,2\pi)} \{ \mu \in \IC:
e^{-i\xi}\mu+ e^{i\xi}\overline{\mu} \le
\lambda_k(e^{-i\xi}A + e^{i\xi}A^*) \},
\end{eqnarray}
where $\lambda_k(X)$ is the $k$th largest eigenvalue of
a Hermitian matrix $X$.
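Formula (\ref{eq1.1}) yields a simple numerical membership test: sampling finitely many angles $\xi$ gives an outer approximation of $\Lambda_k(A)$, so a rejection is certain while an acceptance holds only up to the discretization. A sketch (ours, not from the paper):

```python
import numpy as np

def in_rank_k_range(A, mu, k, num_angles=720):
    """Approximate membership test for Lambda_k(A) via formula (1.1).

    A False answer is certain; a True answer holds up to the
    discretization of xi."""
    for xi in np.linspace(0, 2 * np.pi, num_angles, endpoint=False):
        H = np.exp(-1j * xi) * A + np.exp(1j * xi) * A.conj().T  # Hermitian
        lam = np.sort(np.linalg.eigvalsh(H))[::-1]   # eigenvalues, descending
        # test  e^{-i xi} mu + e^{i xi} conj(mu) <= lambda_k(H)
        if 2 * np.real(np.exp(-1j * xi) * mu) > lam[k - 1] + 1e-9:
            return False
    return True

# diag(1, i, -1, -i): its rank-2 numerical range is the singleton {0}
A = np.diag([1, 1j, -1, -1j]).astype(complex)
assert in_rank_k_range(A, 0.0, k=2)
assert not in_rank_k_range(A, 0.5, k=2)
```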
In the study of quantum error correction, there are channels such
as the randomized unitary channels and Pauli channels whose error
operators are commuting normal matrices. Thus, it is of interest
to study the rank-$k$ numerical ranges of normal matrices.
Although the error operators of a generic quantum channel
may not commute, a good understanding
of the special case would lead to deeper insights and
more proof techniques for the general case.
Given $S\subseteq \IC$, let $\conv S$ denote the smallest convex
subset of $\IC$ containing $S$. For a normal matrix $A \in M_n$
with eigenvalues $a_1, \dots, a_n$, it was conjectured in
\cite{Cet1,Cet2} that
\begin{equation}\label{eq1.2}
\Lambda_k(A)
= \bigcap_{1 \le j_1 < \cdots < j_{n-k+1} \le n}
\conv\{a_{j_1},\dots,a_{j_{n-k+1}}\},
\end{equation}
which is a convex polygon including its interior (if it is non-empty).
This conjecture was confirmed in \cite{LS} using the description
of $\Lambda_k(A)$ in (\ref{eq1.1}).
{\it In our discussion,
a polygon would always mean a convex polygon
with its interior.}
In this paper, we improve the description (\ref{eq1.2}) of the
rank-$k$ numerical range of a normal matrix. In particular,
in Section 2 we show that for a normal matrix $A$ with $m$
distinct eigenvalues, $\Lambda_k(A)$ can be written as the
intersection of no more than $\max\{m,4\}$ closed half planes in $\IC$.
Moreover, if $\Lambda_k(A) \ne \emptyset$, then it is a polygon
with no more than $m$ vertices.
We then consider the ``inverse'' problem, namely, for a given
polygon $\cP$, construct a normal matrix $A \in M_n$ with
$\Lambda_k(A) = \cP$. In other words, we study the necessary
condition for the existence of quantum channels whose error
operators have prescribed rank-$k$ numerical ranges. It is easy
to check that $\Lambda_k(\tilde A) = \cP$ if $\tilde A = A
\otimes I_k$ with $W(A) = \cP$.
Our goal is to find a normal matrix $\hat A$ with smallest size
so that $\Lambda_k(\hat A) = \cP$. To achieve this, we give a
necessary and sufficient condition for the existence of a normal
matrix $A\in M_n$ so that $\Lambda_k(A) = \cP$ in terms of
$k$-regular sets in $\IC$ (see Definition \ref{3.1}).
Furthermore, we show that the problem of finding a desired normal
matrix $A$ is equivalent to a combinatorial problem of extending a given
$p$ element set of unimodular complex numbers to a $k$-regular
set. We then give the solution of the problem in Section 4. As a
consequence of our results, if $\cP$ is a polygon with $p$
vertices, then there is a normal matrix $A \in M_n$ with $$n
\le \max\left\{p+k-1, 2k+2 \right\}$$ such that $\Lambda_k(A) =
\cP$. Moreover, this upper bound is best possible in the sense
that there exists $\cP$ so that there is no matrix of smaller
dimension with rank-$k$ numerical range equal to $\cP$.
\section{Construction of higher rank numerical ranges}
By (\ref{eq1.1}), $\Lambda_k(A)$ can be obtained as the intersection of
infinitely many closed half planes for a given $A \in M_n$.
Suppose $A$ is normal. By (\ref{eq1.2}), one can write $\Lambda_k(A)$
as the intersection of ${n\choose k-1}$ convex polygons so that
$\Lambda_k(A)$ is a polygon. In particular,
it is well known that $\Lambda_1(A) = \conv\{a_1, \dots, a_m\}$,
where $a_1,\dots,a_m$ are the distinct eigenvalues of $A$.
There is a nice interplay between the
algebraic properties of $A \in M_n$
and the geometric properties of $\Lambda_1(A) = W(A)$.
For instance, $\Lambda_1(A)$
is always non-empty; $\Lambda_1(A)$ is a singleton if and only if
$A$ is a scalar matrix; $\Lambda_1(A)$ is a non-degenerate line
segment if
and only if $A$ is a non-scalar normal
matrix and its eigenvalues lie on a straight line.
Unfortunately, these results have no analogs for $\Lambda_k(A)$ if $k>1$.
First, the set $\Lambda_k(A)$ may be empty (see \cite{LPS2});
there are non-scalar matrices $A$ such that $\Lambda_k(A)$ is a singleton;
and there are non-normal matrices $A$ such that
$\Lambda_k(A)$ is a line segment.
Even for a normal matrix $A$,
it is not easy to determine whether $\Lambda_k(A)$ is empty,
a point or a line segment without actually constructing the
set $\Lambda_k(A)$. Moreover, there is no easy way to express
the vertices of the polygon $\Lambda_k(A)$ (if it is non-empty)
in terms of the eigenvalues of the normal matrix $A$ as in the case
of $\Lambda_1(A)$. Of course, one can use
(\ref{eq1.2}) to construct $\Lambda_k(A)$ for the normal
matrix $A$, but the number of polygons needed
in the construction will grow exponentially for large $n$ and $k$.
In the following, we will study efficient ways to generate
$\Lambda_k(A)$ for a normal matrix $A \in M_n$.
While it is difficult to use the eigenvalues of $A$ to
determine the set $\Lambda_k(A)$, it turns out that
we can use half planes determined by
the eigenvalues to generate $\Lambda_k(A)$ efficiently.
We will focus on the following problem.
\smallskip
\begin{problem} \rm
Determine the minimum number of half planes needed to construct
$\Lambda_k(A)$ using the eigenvalues of the normal matrix $A \in M_n$.
\end{problem}
\smallskip
As by-products, we will show that for a normal matrix $A$ with $m$ distinct
eigenvalues, $\Lambda_k(A)$ is either empty or is a polygon with
at most $m$ vertices. In fact, by
examining the location of the eigenvalues of $A$ on the complex
plane, one may further reduce the number of half planes needed to
construct $\Lambda_k(A)$.
Suppose the eigenvalues of $A\in M_n$ are collinear. Then by a
translation, followed by a rotation, we may assume that $A$ is
Hermitian with eigenvalues $a_1\ge \dots \ge a_n$. Then we have
$\Lambda_k(A)=\left[a_{n-k+1},a_{k}\right]$. So we focus on those normal matrices whose eigenvalues are not collinear.
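Before turning to the non-collinear case, the Hermitian formula above can be checked directly against (\ref{eq1.2}): intersecting the convex hulls (here, intervals) of all $(n-k+1)$-element subsets of the eigenvalues yields exactly $[a_{n-k+1},a_k]$. A small sketch (ours):

```python
from itertools import combinations

eigs = [5.0, 3.0, 2.0, 1.0, -1.0]   # a_1 >= ... >= a_n, here n = 5
k = 2

lo, hi = float("-inf"), float("inf")
for subset in combinations(eigs, len(eigs) - k + 1):
    lo = max(lo, min(subset))        # intersect the intervals conv(subset)
    hi = min(hi, max(subset))

# Expected interval [a_{n-k+1}, a_k] = [a_4, a_2] = [1.0, 3.0]
assert (lo, hi) == (1.0, 3.0)
```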
Let us motivate our result with the following examples,
which can be verified by using (\ref{eq1.2}).
\smallskip
\begin{example} \label{ex1} \rm
Let $A = \diag(1, w, w^2, \dots, w^{n-1})$ with
$w=e^{2\pi i/n}$. Then for $k \le n/2$,
we have $\Lambda_k(A) = \cap_{j=0}^{n-1} \cH_j$,
where
$$\cH_j = \left\{z\in \IC: \Re \left( e^{-\frac{(2j+k)\pi i}{n}} z \right) \le \cos \frac{k \pi}{n} \right\},$$
and only a small part of
$\conv\{w^{j-1}, w^{j-1+k}\}$ lies in $\Lambda_k(A)$.
\end{example}
\begin{center}
\epsfig{file=SIMAX076430RRR_fig1.png, width =2in, height=2in} \qquad
\epsfig{file=SIMAX076430RRR_fig2.png, width =2in, height=2in}
$\Lambda_2(A)$ with $n = 9$ in Example \ref{ex1}\quad
$\Lambda_3(A)$ with $n = 9$ in Example \ref{ex1}
\end{center}
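The description of $\cH_j$ in Example \ref{ex1} can be checked numerically: both $w^j$ and $w^{j+k}$ lie on the boundary line of $\cH_j$, while the skipped vertices $w^l$ with $j<l<j+k$ fall strictly outside. A quick verification (ours):

```python
import numpy as np

n, k = 9, 2
w = np.exp(2j * np.pi / n)

def in_H(j, z, tol=1e-9):
    """Closed half plane H_j from Example 2.2: Re(e^{-i(2j+k)pi/n} z) <= cos(k pi/n)."""
    return np.real(np.exp(-1j * (2 * j + k) * np.pi / n) * z) \
        <= np.cos(k * np.pi / n) + tol

for j in range(n):
    # w^j and w^{j+k} lie on the boundary line of H_j ...
    for z in (w ** j, w ** (j + k)):
        val = np.real(np.exp(-1j * (2 * j + k) * np.pi / n) * z)
        assert abs(val - np.cos(k * np.pi / n)) < 1e-9
    # ... while the intermediate vertices w^l, j < l < j+k, fall outside
    for l in range(j + 1, j + k):
        assert not in_H(j, w ** l, tol=-1e-9)
```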
\smallskip
More generally, we have the following.
\smallskip
\begin{example}\label{eg1} \rm
Let $a_1,\dots,a_n$ be the eigenvalues of $A \in M_n$, with $n\ge 3$.
Suppose $\conv \{a_1,\dots,a_n\} = \cP$ is an $n$-sided convex polygon
containing the origin in the interior.
We may assume that $a_1,\dots,a_n$ are arranged in
the counterclockwise direction on the boundary of $\cP$.
For $j\in \{1, \dots, n\}$,
let $L_j$ be the line passing through $a_j$ and $a_{j+k}$,
where $a_{j+k} = a_{j+k-n}$ if $j+k> n$,
and $\cH_j$ be the closed half plane determined by $L_j$
which does not contain $a_\ell$ for $j< \ell <j+k$.
Then
$$\Lambda_k(A)=\bigcap_{j=1}^n\cH_j.$$
\end{example}
Note that each $\cH_j$ in Example \ref{eg1} contains exactly $n-k+1$
eigenvalues of $A$.
The situation is more complicated if $\Lambda_1(A)$ is not an $n$-sided
convex polygon for the normal matrix $A \in M_n$.
\smallskip
\begin{example} \label{ex2} \rm
Suppose $B = \diag(1,i,-1,-i, 2,2i,-2,-2i,3,3i,-3,-3i)$.
One can see from the figures that
the eigenvalues $1,i,-1,-i$ are interior points of $\Lambda_2(B)$
while these eigenvalues are the vertices of $\Lambda_3(B)$.
\end{example}
\begin{center}
\epsfig{file=SIMAX076430RRR_fig3.png, width =2in, height=2in} \qquad
\epsfig{file=SIMAX076430RRR_fig4.png, width =2in, height=2in}
$\Lambda_2(B)$\hspace{4.5cm}
$\Lambda_3(B)$
\end{center}
To deal with normal matrices $A \in M_n$ as in Example \ref{ex2}, for which
$\Lambda_1(A)$ is not an $n$-sided convex polygon, we need to construct
some half planes using the eigenvalues of the normal matrix $A$.
To do this, we introduce the following.
Given any two distinct complex numbers $a$ and $b$,
let $L(a,b)$ be the (directed) line passing through $a$ and $b$.
The closed half plane
$$H(a,b)
= \{ z \in \IC: \Im\left((\bar b-\bar a)(z-a)\right) \ge 0\}$$
is called the {\em left closed half plane} determined by $L(a,b)$.
For example, $H(0,i) = \{z\in \IC: \Re(z) \le 0\}$
and $H(i,0) = \{z\in \IC: \Re(z) \ge 0\}$.
Remark that in Example \ref{ex1}, the set $\cH_j$ is indeed the closed half plane $H(w^j,w^{j+k})$.
Note that $L(a,b) \ne L(b,a)$.
In our discussion, it is sometimes convenient to write
$$H(a,b) = \{z \in \IC: \Re( e^{-i\xi} z ) \le \Re( e^{-i\xi} a) \}$$
with $\xi = \arg(b - a) - \pi/2$. Also, we use $H_0(a,b)$ to denote
the {\em left open half plane} determined by $L(a,b)$, i.e.,
$H_0(a,b) = H(a,b) \setminus L(a,b)$.
\smallskip
We have the following result showing that for a normal matrix $A \in M_n$
with $m$ distinct eigenvalues, $\Lambda_k(A)$ can be written as the
intersection of at most $\max\{m,4\}$ half planes.
Even without any knowledge about the final shape of the set $\Lambda_k(A)$,
one can use $m(m-1)$ half planes to generate $\Lambda_k(A)$.
Evidently, the construction is more efficient than the construction
using (\ref{eq1.1}) or (\ref{eq1.2}). Furthermore, we can conclude that
$\Lambda_k(A)$ is either an empty set, a singleton, a line segment,
or a non-degenerate polygon with at most $m$ vertices.
\smallskip
\begin{theorem} \label{thm2.3}
Let $A \in M_n$ be normal with distinct eigenvalues
$a_1, \dots, a_m$ that are not collinear.
Let $\cS$ be the set of index pairs $(r,s)$ such that $H(a_r,a_s)$
contains at least $n-k+1$ eigenvalues (counting multiplicities) of $A$, and
\begin{eqnarray*}
\cS_0=\{&& (r,s)\in\cS: H_0(a_r,a_s)
\hbox{ contains at most } \cr
&&\hspace{3cm} n-k-1 \hbox{ eigenvalues (counting multiplicities) }\}.
\end{eqnarray*}
Then
\begin{eqnarray}\label{int}
\Lambda_k(A) = \bigcap_{(r,s) \in \cS} H(a_r,a_s)
=\bigcap_{(r,s) \in \cS_0} H(a_r,a_s).
\end{eqnarray}
Moreover, $\Lambda_k(A)$ can be written as intersection
of at most $\max\{m,4\}$ half planes $H(a_r,a_s)$, with $(r,s) \in \cS_0$.
\end{theorem}
\smallskip
\begin{proof}
In the first part of the proof,
we assume that $A\in M_n$ has $n$ eigenvalues $a_1,\dots,a_n$.
For notational simplicity, we write $H(a_r,a_s) = H(r,s)$,
$H_0(a_r,a_s) = H_0(r,s)$,
and $L(a_r,a_s) = L(r,s)$ for any two distinct eigenvalues
$a_r$ and $a_s$ of $A$.
For each $(r,s)\in \cS$, since $H(r,s)$ is convex and contains at least $n-k+1$
eigenvalues of $A$, by (\ref{eq1.2}), we have
$$
\Lambda_k(A) = \bigcap_{1 \le j_1< \cdots < j_{n-k+1} \le n}
\conv\{a_{j_1}, \dots, a_{j_{n-k+1}}\}
\subseteq H(r,s).
$$
It follows that
\begin{equation}\label{eq2.2}
\Lambda_k(A)
\subseteq \bigcap_{(r,s)\in \cS} H(r,s).
\end{equation}
To prove the reverse inclusion of (\ref{eq2.2}),
note that if $z$ is a point not in $\Lambda_k(A)$, then $z$ will lie outside
a convex polygon which equals the convex hull of $n-k+1$ eigenvalues of $A$.
So, it suffices to show that the convex hull $\cW$ of any $n-k+1$ eigenvalues of $A$ can be written
as an intersection of half planes, $\cW=\cap_{j=1}^\ell H(r_j,s_j)$ for some $(r_1,s_1),\dots, (r_\ell,s_\ell)\in\cS$. We consider the following three cases.
\smallskip\noindent
{\bf Case 1} Suppose $\cW$ is a singleton.
Then $\cW = \{a_r\}$ for some eigenvalue $a_r$
with multiplicity at least $n-k+1$.
Since the eigenvalues of $A$ are non-collinear,
there are eigenvalues $a_s$ and $a_t$ such that
$a_r$, $a_s$, and $a_t$ are not collinear. Then
$$\cW = H(r,s) \cap H(s,r) \cap H(r,t) \cap H(t,r).$$
\smallskip\noindent
{\bf Case 2} Suppose $\cW$ is a non-degenerate line segment.
In this case, $\cW = \conv\{a_r,a_s\}$ for some eigenvalues $a_r$
and $a_s$ with $a_r \ne a_s$. Since the eigenvalues of $A$ are
non-collinear, there is another eigenvalue $a_t$ such that
$a_r$, $a_s$, and $a_t$ are not collinear.
Without loss of generality,
we assume that $a_t \in H(r,s)$. Otherwise,
we interchange $a_r$ and $a_s$.
Then
$$\cW = H(r,s) \cap H(s,r) \cap H(s,t) \cap H(t,r).$$
\smallskip\noindent
{\bf Case 3} Suppose $\cW$ is a non-degenerate polygonal disk.
We may relabel the eigenvalues of $A$ and assume that
$\cW$ has vertices $a_1, \dots, a_q$ arranged in the
counterclockwise direction, where $q \ge 3$. For convenience of
notation, we will let $a_{q+1} = a_1$ and $H(q,q+1) = H(q,1)$.
Then $$\cW = \bigcap_{1\le t \le q } H(t,t+1).$$
Thus, the first equality in (\ref{int}) is proved.
To prove the second equality in (\ref{int}),
we claim the following.
\smallskip
\noindent
{\bf Claim} For each $(r,s) \in \cS \setminus \cS_0$,
there exist two ordered pairs $(r_1,s_1)$ and $(r_2,s_2)$ in $\cS_0$
such that
$H(r_1,s_1) \cap H(r_2,s_2) \subseteq H_0(r,s)$.
\smallskip
Once the claim is proved,
all the half planes $H(r,s)$ with $(r,s) \in \cS \setminus \cS_0$
are not needed in the intersection $\bigcap_{(r,s) \in \cS} H(a_r,a_s)$
and hence the second equality in (\ref{int}) holds.
To prove the claim, suppose $(r,s)\in \cS\setminus \cS_0$.
Then $H_0(r,s)$ contains at least $n-k$ eigenvalues of $A$.
By a translation followed by a rotation, we may assume that
$H(r,s) = \{z\in \IC: \Im z \ge 0\}$
and we can relabel the index of eigenvalues so that
for $1\le j\le n-1$,
either $\Im a_j > \Im a_{j+1}$
or $\Im a_j = \Im a_{j+1}$ with $\Re a_j \ge \Re a_{j+1}$. Let
$$\cU = \conv \{a_1,\dots, a_{n-k}\}
\quad\hbox{and}\quad
\cV = \conv \{a_{n-k+1}, \dots, a_n\}.$$
Then $\cU$ and $\cV$ are disjoint if $a_{n-k} \ne a_{n-k+1}$,
while $\cU \cap \cV = \{a_{n-k}\}$ if $a_{n-k} = a_{n-k+1}$.
By the assumption, $\cU \subseteq H_0(r,s)$ and $\{a_r,a_s \} \subseteq \cV$.
Define the set
$$\cW = \conv \{a_i - a_j: 1\le i \le n-k < j \le n\}
= \{u-v: u\in \cU \hbox{ and } v \in \cV\},$$
which is a convex polygon.
Note that $\cW\subseteq\{z\in\IC:\Im(z)\ge 0\}$ since $\Im(a_i-a_j)\ge 0$ for all
$1\le i\le n-k <j\le n$.
By the facts that $\cU$ and $\cV$
can intersect in at most one point, and
the union $\cU\cup \cV$ cannot be contained in any line,
the set $\cW$ does not lie in any line that passes through the origin,
and the point $0$ is either an extreme point of $\cW$ or lies outside $\cW$.
Under these conditions, one can find two extreme points
$w_1$ and $w_2$ in $\cW$ with $\Im (\bar w_1 w_2) \ne 0$ such that
\begin{eqnarray}\label{eq2.3}
\Im (\bar w_1 w) \ge 0 \ge \Im (\bar w_2 w) \quad\hbox{for all}\quad w\in \cW.
\end{eqnarray}
Since $w_1$ is an extreme point in $\cW$,
there are eigenvalues $a_{s_1} \in \cU$ and $a_{r_1} \in \cV$
such that $w_1 = a_{s_1} - a_{r_1}$.
Then (\ref{eq2.3}) gives
$$\Im (\bar a_{s_1} - \bar a_{r_1})(u - a_{r_1}) \ge 0
\quad\hbox{and}\quad
\Im (\bar a_{r_1} - \bar a_{s_1})(v - a_{s_1}) \ge 0$$
for all $u \in \cU$ and $v \in \cV$,
and thus, $\cU \subseteq H(r_1,s_1)$
and $\cV \subseteq H(s_1,r_1)$.
With the fact that $a_{r_1}$ and $a_{s_1}$ lie in the line $L(r_1,s_1)$,
the closed half plane $H(r_1,s_1)$ contains at least $n-k+1$ eigenvalues of $A$
while the open half plane $H_0(r_1,s_1)$ contains at most $n-k-1$ eigenvalues only.
Therefore, $(r_1,s_1) \in \cS_0$.
By a similar argument, one can show that there are eigenvalues $a_{r_2}\in \cU$ and $a_{s_2} \in \cV$
such that $w_2 = a_{r_2} - a_{s_2}$. Then (\ref{eq2.3}) yields
$$\Im (\bar a_{r_2} - \bar a_{s_2})(u - a_{s_2}) \le 0
\quad\hbox{and}\quad
\Im (\bar a_{s_2} - \bar a_{r_2})(v - a_{r_2}) \le 0$$
for all $u \in \cU$ and $v \in \cV$,
and thus, $\cU \subseteq H(r_2,s_2)$ and $\cV \subseteq H(s_2,r_2)$,
and one can conclude that $(r_2,s_2) \in \cS_0$.
Observe that the two lines $L(r_1,s_1)$ and $L(r_2,s_2)$ are not parallel
as $\Im (\bar a_{r_1} - \bar a_{s_1}) (a_{s_2} - a_{r_2} )
= \Im (\bar w_1 w_2) \ne 0$.
Using the fact that the two distinct eigenvalues $a_r$ and $a_s$
are in $\cV$, which is contained in
the intersection $H(s_1,r_1) \cap H(s_2,r_2)$,
one can conclude that the intersection $H(r_1,s_1) \cap H(r_2,s_2)$
must lie in $H_0(r,s)$, the interior of $H(r,s)$.
Therefore, the claim holds.
Next, we turn to the last part of the Theorem.
It is trivial that if $\Lambda_k(A)$ is either an empty set, a singleton, or
a non-degenerate line segment, then only at most $4$ half planes are needed
in the construction of $\Lambda_k(A)$.
Suppose $A$ has $m$ distinct eigenvalues $a_1,\dots,a_m$ and
$\Lambda_k(A)$ is a non-degenerate polygon.
Let $\cT$ be a minimal subset of $\cS_0$ such that $\Lambda_k(A)=\cap_{(r,s)\in \cT}H(r,s)$.
Since $\cT$ is minimal, the half planes $H(r,s)$, $(r,s) \in \cT$, are all distinct.
We may further assume that for all $(r,s) \in \cT$, $\{ a_1,\dots, a_m\}\cap L(a_r,a_s)\subseteq \conv\{a_r,a_s\}$.
Since $\Lambda_k(A)$ is a non-degenerate polygon, for each $1\le t\le m$,
there exist at most two pairs $(r,s)\in \cT$ such that $t\in \{r,s\}$.
Therefore, $\cT$ contains at most $m$ ordered pairs.
\end{proof}
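The characterization in Theorem \ref{thm2.3} translates directly into a finite membership test: collect the pairs in $\cS_0$ and intersect the corresponding half planes $H(a_r,a_s)$. A rough sketch (ours, with ad hoc boundary tolerances), tested on $A=\diag(1,-1,i,-i)$, whose rank-$2$ numerical range is the singleton $\{0\}$:

```python
import numpy as np
from itertools import permutations

def S0_pairs(eigs, k, tol=1e-9):
    """Ordered pairs (a_r, a_s) of distinct eigenvalues with H(a_r, a_s)
    containing >= n-k+1 eigenvalues and H_0(a_r, a_s) containing at most
    n-k-1 (eigenvalues counted with multiplicity)."""
    eigs = np.asarray(eigs, dtype=complex)
    n = len(eigs)
    distinct = list(set(eigs.tolist()))
    pairs = []
    for a, b in permutations(distinct, 2):
        # H(a,b) = { z : Im((conj(b)-conj(a))(z-a)) >= 0 }
        h = np.imag((np.conj(b) - np.conj(a)) * (eigs - a))
        if np.sum(h >= -tol) >= n - k + 1 and np.sum(h > tol) <= n - k - 1:
            pairs.append((a, b))
    return pairs

def in_lambda_k(eigs, k, z, tol=1e-9):
    """Membership in the intersection of the half planes indexed by S_0."""
    return all(np.imag((np.conj(b) - np.conj(a)) * (z - a)) >= -tol
               for a, b in S0_pairs(eigs, k))

eigs = [1, 1j, -1, -1j]              # Lambda_2(diag(1, i, -1, -i)) = {0}
assert in_lambda_k(eigs, 2, 0)
assert not in_lambda_k(eigs, 2, 0.3)
assert not in_lambda_k(eigs, 2, 0.2 + 0.2j)
```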
\smallskip
\begin{example} \rm
Let $A = \diag(0,0,1,1,i)$. Then
$$\begin{array}{rl}\Lambda_2(A)&=[0,1]=H(0,1)\cap H(1,0)\cap H(1,i)\cap H(i,0)\\[2mm]
\Lambda_3(A)&=\emptyset=H(1,0)\cap H(1,i)\cap H(i,0)\end{array}$$ and
the intersection of any $2$ half planes $H(a_r,a_s)$ is non-empty.
This example also shows that one cannot replace $\max\{m,4\}$ by $m$
in the conclusion in Theorem \ref{thm2.3}.
\end{example}
\smallskip
\begin{example} \rm
Let $A = \diag(1, -1, i, -i)$. Then
$$\Lambda_2(A)=\{0\}=H(1,-1)\cap
H(-1,1)\cap H(i,-i)\cap H(-i,i)$$
and $\Lambda_2(A)$ cannot be written as an
intersection of fewer than $4$
half planes $H(a_r,a_s)$.
\end{example}
\smallskip
\begin{corollary} \label{cor2.6}
Suppose $A \in M_n$ is normal such that $W(A)$ is an $n$-sided
polygon
containing the origin in its interior. Let $v_1, \dots, v_n$ be the
vertices of $W(A)$ having arguments $0 \le \xi_1 < \cdots < \xi_n < 2\pi$. If
$k < n/2$, then $\Lambda_k(A)$ is an $n$-sided convex
polygon obtained by
joining $v_j$ and $v_{j+k}$, where $v_{j+k}=v_{j+k-n}$ if $j+k>n$.
\end{corollary}
\smallskip
By Theorem \ref{thm2.3}, it is easy to see that the boundary
of $\Lambda_k(A)$
is a subset of the union of the line segments of the form
$\conv\{a_r,a_s\}$ with $(r,s) \in \cS_0$.
However, it is not easy to determine which part of the line segment
actually belong to $\Lambda_k(A)$ as shown in Examples \ref{ex1}, \ref{eg1},
and \ref{ex2}.
By Theorem \ref{thm2.3}, if the normal matrix $A \in M_n$
has $m$ distinct eigenvalues, we need no more than $\max\{m,4\}$
half planes $H(a_r,a_s)$ to generate $\Lambda_k(A)$.
{\it Can one determine these half planes effectively?}
We will answer this question by presenting an
algorithm in Section 5
based on the discussion in this section.
\section{Matrices with prescribed higher rank numerical ranges}
We study the following problem in this section.
\smallskip
\begin{problem} \label{prob.1} \rm
Let $k > 1$ be a positive integer,
and let $\cP$ be a $p$-sided polygon in $\IC$.
Construct a normal matrix $A $ with smallest size (dimension) such that
$\Lambda_k(A) = \cP$.
\end{problem}
\smallskip
Suppose $\cP$ degenerates to a line segment joining two points $a_1$ and $a_2$.
Then the smallest $n$ for which there is a normal matrix $A \in M_n$ with
$\Lambda_k(A)=\cP$ is $n=2k$ if $a_1 \ne a_2$, and $n=k$ if $a_1=a_2$.
So we focus on the case when the polygon $\cP$ is non-degenerate.
A natural approach to Problem \ref{prob.1} is to reverse
the construction of $\Lambda_k(A)$ in Example \ref{eg1}.
Suppose we have a non-degenerate $p$-sided polygon
$\cP$, with vertices, $v_1,\dots,v_p$.
Without loss of
generality, we may assume that $0$ lies in the interior of
$\cP$ and the arguments of $v_j$ in $[0, 2\pi)$ are arranged in
ascending order. Our goal is to use the support line $L_j$
which passes through $v_j,v_{j+1}$ for $j = 1, \dots, p$, where
$v_{p+1} = v_1$, to construct $A = \diag(a_1, \dots, a_p)$
such that $\Lambda_k(A) = \cP$. Note that if the desired values
$a_1, \dots, a_p$ exist and are arranged in counter-clockwise
direction, then (by proper numbering) the line $L_j$ will coincide
with the line passing through $a_j$ and $a_{j+k}$,
where $a_{j+k}=a_{j+k-p}$ if $j+k>p$.
Consequently, $a_j$ will lie at the intersection of $L_j$ and
$L_{j-k}$, where $L_{j-k} = L_{j-k+p}$ if $j-k\le 0$.
Hence there exists $A = \diag(a_1, \dots, a_p)$ satisfying
$\Lambda_k(A) = \cP$ if the following hold.
\begin{itemize}
\item[{\rm (1)}] $k < p/2$.
\item[{\rm (2)}] There exist $a_1, \dots, a_p \in \IC$ such that
\begin{itemize}
\item[{\rm (2.a)}] $L_j \cap L_{j-k} = \{a_j\}$ for $j = 1, \dots, p$,
\item[{\rm (2.b)}] $a_1, \dots, a_p$ have arguments $\xi_1 < \dots < \xi_p$
in the interval $[\xi_1, \xi_1+2\pi)$ and $0$ lies in the interior of
their convex hull.
\end{itemize}
\end{itemize}
\smallskip
Note that by Theorem \ref{thm2.3},
$A$ has the smallest dimension among all
normal matrices $B$ such that $\Lambda_k(B) = \cP$.
Clearly, conditions (1) and (2.a) are necessary in the above construction.
From the following example, one can see that
the above construction also fails when condition (2.b)
is not satisfied.
\begin{example} \rm
Let $\cP$ be the $5$-sided polygon with vertices
$\{v_1,\dots,v_5\} = \{2+i,1+2i,-1+3i,-1-i,3-i \}$, see the figure below.
\begin{center}
\epsfig{file=SIMAX076430RRR_fig5.png, width =2in, height=2in} \\
The polygon $\cP$
\end{center}
Then, with $k=2$, we have $$\{a_1,\dots,a_5\} = \{-1+4i,7-i,-1+7i,4-i, (5+5i)/3 \},$$
which does not satisfy condition (2.b). Clearly,
for $A = \diag(a_1,\dots,a_5)$, $\Lambda_2(A)$
lies in the convex hull of $\{a_1,\dots,a_5\}$,
which does not contain $\cP$.
\end{example}
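The intersection points in this example are easy to reproduce numerically. The following Python sketch is our illustrative code (the helper names are not from the paper); it recovers $\{a_1,\dots,a_5\}$ from the vertices by intersecting $L_j$ with $L_{j-k}$, with indices taken modulo $p$:

```python
# Reproduce the points a_j = L_j ∩ L_{j-k} for the 5-gon above (k = 2).
# Points in the plane are represented as Python complex numbers.

def cross(u, v):
    """2D cross product of complex numbers viewed as plane vectors."""
    return (u.conjugate() * v).imag

def line_intersection(p1, p2, q1, q2):
    """Intersection of the line through p1, p2 with the line through
    q1, q2, assuming the two lines are not parallel."""
    d1, d2 = p2 - p1, q2 - q1
    t = cross(d2, q1 - p1) / cross(d2, d1)
    return p1 + t * d1

v = [2 + 1j, 1 + 2j, -1 + 3j, -1 - 1j, 3 - 1j]   # vertices v_1, ..., v_5
p, k = 5, 2
# L_j passes through v_j and v_{j+1}; indices are taken modulo p.
a = [line_intersection(v[j], v[(j + 1) % p],
                       v[(j - k) % p], v[(j - k + 1) % p])
     for j in range(p)]
# a == [4-1j, (5+5j)/3, -1+4j, 7-1j, -1+7j] up to rounding,
# which is the set {-1+4i, 7-i, -1+7i, 4-i, (5+5i)/3} stated above.
```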
\smallskip
Conditions (2.a) and (2.b) motivate the following definition.
\smallskip
\begin{definition} \label{3.1}
Let $\Omega = \{z\in \IC: |z| = 1\}$. A subset $\Pi = \{\alpha_1,\dots,\alpha_m\}$,
with distinct $\alpha_1,\dots,\alpha_m \in \Omega$,
is {\it $k$-regular} if every semi-circular arc of $\Omega$ without
endpoints contains at least $k$ elements of $\Pi$.
\end{definition}
\smallskip
Given distinct $\alpha_1,\ \alpha_2\in \Omega$, $\alpha_2/\alpha_1
=e^{i\theta}$
for a unique $0<\theta<2\pi$. Then
$[\alpha_1,\alpha_2]=\{e^{it}\alpha_1:0\le t\le \theta\} $ is the
closed arc on $\Omega$ from $\alpha_1$ to $\alpha_2$ in the
counterclockwise direction. Also define the open arc
$$(\alpha_1,\alpha_2)=[\alpha_1,\alpha_2 ]\setminus\{\alpha_1,\alpha_2\}.$$
The value $\theta$ is called the {\it length} of these intervals.
Suppose $k\ge 1$ and $\Pi \subseteq \Omega$ is finite.
Then $\Pi$ is $k$-regular if and only if for each
$\alpha\in \Pi$, $(\alpha,-\alpha)\cap \Pi $ contains at least $k$ elements.
Note that if
$\Pi=\{e^{i\xi_j}:1\le j\le n\}$ with distinct $\xi_1, \dots, \xi_n \in [0,2\pi)$,
then $\Pi$ is $k$-regular if and only if
for each $r = 1,\dots, n$, there are $1\le j_1 < \dots < j_k \le n$ such that
\begin{eqnarray}\label{eq3.1}
e^{i\xi_{j_1}},\dots,e^{i\xi_{j_k}} \in \left( e^{i\xi_r}, e^{i(\xi_r + \pi)} \right).
\end{eqnarray}
For this reason, a set
$\{\xi_1 , \dots , \xi_n\}\subseteq [0,2\pi)$ of $n$ distinct numbers is also
called $k$-regular if $\{e^{i\xi_j}:1\le j\le n\}$
is $k$-regular as defined in Definition \ref{3.1}.
For $\xi,\xi'\in [0,
2\pi)$, $[\xi,\xi']$ will denote the subset $\{t\in [0,
2\pi):e^{it}\in [e^{i\xi},e^{i\xi'} ]\}$; the intervals $[\xi,\xi')$,
$(\xi,\xi']$ and $(\xi,\xi')$ will also be defined similarly.
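The reformulation above can be checked mechanically. The following Python sketch is our helper (not from the paper); it tests $k$-regularity of a finite set of distinct angles in $[0,2\pi)$ via the anchored-arc condition (\ref{eq3.1}), with a small numerical tolerance guarding the arc endpoints:

```python
import math

def is_k_regular(angles, k, tol=1e-9):
    """Test k-regularity of a set of distinct angles in [0, 2*pi):
    for each angle x, the open counterclockwise arc (x, x + pi) must
    contain at least k of the angles.  The tolerance guards against
    floating-point artefacts at the arc endpoints."""
    for x in angles:
        count = sum(1 for y in angles
                    if tol < (y - x) % (2 * math.pi) < math.pi - tol)
        if count < k:
            return False
    return True

# Angles of the 7th roots of unity: these 7 points are 3-regular
# (each open semicircle anchored at a point contains 3 others)
# but not 4-regular.
roots7 = [2 * math.pi * j / 7 for j in range(7)]
```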
In Example \ref{ex1}, a direct computation shows that for
$1\leq r, k\leq n$,
$$\xi_{r+k}-\xi_r = \cases{
2k\pi/n & \mbox{ if } $r+k\leq n$, \cr
2k\pi/n-2\pi & \mbox{ if } $r+k>n$.}
$$
Therefore, the set
$\{ \xi_1 ,\dots, \xi_n \}$ is $k$-regular and
$\Lambda_k(A)$ is nonempty for $1\leq k< n/2$.
Otherwise, the set $\{ \xi_1 ,\dots, \xi_n \}$ is not
$k$-regular and $\Lambda_k(A)$ is either empty or a singleton.
In the following, we need an alternate formulation of
(\ref{eq1.1}). For any $d, \xi \in \IR$,
consider the closed half plane
\begin{equation} \label{chdxi}
\cH(d,\xi) = \{\mu\in \IC: \Re(e^{-i\xi} \mu ) \le d\},
\end{equation}
and its boundary, which is the
straight line
\begin{equation} \label{cldxi}
\cL(d,\xi) = \partial \cH(d,\xi) = \{\mu\in \IC: \Re(e^{-i\xi} \mu) = d\}.
\end{equation}
For $A\in M_n$, let Re$A=(A+A^*)/2$.
Then (\ref{eq1.1}) is equivalent to $$\Lambda_k(A) =
\bigcap_{\xi \in [0,2\pi)}
\cH(\lambda_k(\Re(e^{-i\xi}A)), \xi).$$
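This intersection formula suggests a simple numerical membership test when $A$ is diagonal, since then the eigenvalues of $\Re(e^{-i\xi}A)$ are just $\Re(e^{-i\xi}a_j)$. The Python sketch below is ours (not from the paper); sampling $\xi$ uses only finitely many half planes, so the test is approximate near the boundary:

```python
import math

def in_lambda_k(mu, eigs, k, samples=720, tol=1e-9):
    """Approximate test for mu in Lambda_k(A) with A = diag(eigs),
    based on Lambda_k(A) being the intersection over xi of
    H(lambda_k(Re(e^{-i xi} A)), xi).  For diagonal A, the matrix
    Re(e^{-i xi} A) has eigenvalues Re(e^{-i xi} a_j), so lambda_k
    is the k-th largest of these numbers."""
    for s in range(samples):
        xi = 2 * math.pi * s / samples
        w = complex(math.cos(xi), -math.sin(xi))        # e^{-i xi}
        vals = sorted(((w * a).real for a in eigs), reverse=True)
        if (w * mu).real > vals[k - 1] + tol:
            return False
    return True

# Example: A = diag of the 5th roots of unity and k = 2.
# Then 0 lies in Lambda_2(A), while the eigenvalue 1 does not.
roots5 = [complex(math.cos(2 * math.pi * j / 5),
                  math.sin(2 * math.pi * j / 5)) for j in range(5)]
```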
The following result is easy to verify.
\smallskip
\begin{proposition}
\label{prop3.a}
Let $A \in M_n$ and $\Lambda_k(A) = \cap_{j=1}^m \cH(d_j,\xi_j) \ne
\emptyset$,
where $\cH(d_j,\xi_j)$ is defined as in
$(\ref{chdxi})$
for some $d_1, \dots, d_m \in \IR$ and distinct $\xi_1, \dots, \xi_m\in [0,2\pi)$.
\begin{itemize}
\item[{\rm (a)}] We have $0 \in \Lambda_k(A)$ if and only if
$d_1, \dots, d_m \ge 0$, and $0$ is an interior point of
$\Lambda_k(A)$ if and only if $d_1, \dots, d_m > 0$.
\item[{\rm (b)}] If $\mu = re^{i\xi}$ with $r > 0$ and $\xi \in \IR$,
then $\Lambda_k(\mu A) = \cap_{j=1}^m \cH(rd_j,\xi_j+\xi)$
and $\Lambda_k(A + \mu I ) = \cap_{j=1}^m \cH(d_j + r\cos(\xi - \xi_j),\xi_j)$.
\end{itemize}
\end{proposition}
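Part (b) rests on the identity $\Re(e^{-i(\xi+\phi)}\mu z) = r\,\Re(e^{-i\xi}z)$ for $\mu = re^{i\phi}$ with $r>0$. A quick randomized check of the resulting half-plane transformation (our illustrative code, not from the paper):

```python
import cmath
import math
import random

def in_H(z, d, xi):
    """Membership in the closed half plane
    H(d, xi) = {mu : Re(e^{-i xi} mu) <= d}."""
    return (cmath.exp(-1j * xi) * z).real <= d

# If mu = r e^{i phi} with r > 0, then z in H(d, xi) iff
# mu*z in H(r*d, xi + phi): the mechanism behind part (b).
random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    d = random.uniform(-3, 3)
    xi = random.uniform(0, 2 * math.pi)
    r = random.uniform(0.1, 4)
    phi = random.uniform(0, 2 * math.pi)
    mu = r * cmath.exp(1j * phi)
    assert in_H(z, d, xi) == in_H(mu * z, r * d, xi + phi)
```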
\smallskip
In connection to Problem \ref{prob.1}, we have the following.
\smallskip
\begin{theorem}\label{equiv}
Suppose $\cP = \bigcap_{j=1}^p\, \cH(d_j,\xi_j)$
is a non-degenerate $p$-sided polygon,
where $\cH(d_j,\xi_j)$ is defined as in $(\ref{chdxi})$
with $d_1,\dots,d_p\in \IR$ and distinct $\xi_1,\dots,\xi_p\in [0,2\pi)$.
Let $q$ be a nonnegative integer.
The following two statements are equivalent.
\begin{itemize}
\item[\rm (I)] There is a $(p+q)\times (p+q)$
normal matrix $A$ such that $\Lambda_k(A) = \cP$.
\item[\rm (II)] There are distinct $\xi_{p+1},\dots,\xi_{p+q}\in [0,2\pi)$ such that
$\{\xi_1,\dots,\xi_{p+q}\}$ is $k$-regular.
\end{itemize}
\end{theorem}
\smallskip
Notice that a necessary condition for
the set $\bigcap_{j=1}^p\, \cH(d_j,\xi_j)$
to be a non-degenerate polygon is that
\begin{eqnarray}
\{e^{i\xi_1},\dots,e^{i\xi_p} \} \ \mbox{is $1$-regular.}
\end{eqnarray}
By Proposition \ref{prop3.a}, one may assume that $0$ lies in the interior
of $\cP$ in our proofs. However, it is equally convenient for us not
to impose this assumption so that we need not verify
$d_j > 0$ in $\cH(d_j,\xi_j)$ in our proofs.
To prove Theorem \ref{equiv}, we need some lemmas.
\smallskip
\begin{lemma}\label{lem3.5}
Let $A = \diag(a_1,\dots,a_n)$ and $1\le m < n$.
Suppose the eigenvalues $a_{m+1},\dots,a_n$ are in $\Lambda_k(A)$ but
not extreme points of $\Lambda_k(A)$. Then
$$\Lambda_k(\diag(a_1,\dots,a_m)) = \Lambda_k(A).$$
\end{lemma}
\smallskip
\begin{proof}
It suffices to show that if $a_n$ is in $\Lambda_k(A)$
but not an extreme point of $\Lambda_k(A)$, then
$\Lambda_k(\diag(a_1,\dots,a_{n-1})) = \Lambda_k(A)$.
Suppose $a_n$ satisfies the above assumption.
Clearly, $\Lambda_k(\diag(a_1,\dots,a_{n-1}))$
is a subset of $\Lambda_k(A)$.
On the other hand, for any $1\le j_1 < \dots < j_{n-k} \le n-1$,
$\Lambda_k(A)\subseteq\conv\{a_{j_1},\dots, a_{j_{n-k}}, a_n\}$.
Since $a_n$ is not an extreme point of $\Lambda_k(A)$,
it follows that $a_n$ lies in $\conv \{a_{j_1},\dots, a_{j_{n-k}},a_n\}$
but is not its extreme point. Therefore,
$$\conv \{a_{j_1},\dots, a_{j_{n-k}}\}
= \conv \{a_{j_1},\dots, a_{j_{n-k}},a_n\}.$$
Thus,
\begin{eqnarray*}
\Lambda_k(A)
&=& \bigcap \{ \conv \{a_{j_1},\dots, a_{j_{n-k+1}} \}:
1\le j_1 < \dots < j_{n-k} < j_{n-k+1}\leq n\} \cr
&\subseteq& \bigcap \{ \conv \{a_{j_1},\dots, a_{j_{n-k}},a_n \}:
1\le j_1 < \dots < j_{n-k} \le n-1\} \cr
&=& \bigcap \{ \conv \{a_{j_1},\dots, a_{j_{n-k}} \}:
1\le j_1 < \dots < j_{n-k} \le n-1\} \cr
&=& \Lambda_k(\diag(a_1,\dots,a_{n-1})).
\end{eqnarray*}
\end{proof}
\smallskip
The next lemma shows that if a convex polygon $\cP$ is the intersection
of half planes $\cH(d_j, \zeta_j)$ for $j = 1, \dots, m$,
such that the set $\{\zeta_1,\dots,\zeta_m\}$ is ``almost'' $k$-regular
(in the sense that $\{\zeta_1,\dots,\zeta_m\}$ is $k$-regular if
we count the multiplicity of each element in the set),
one may replace these half planes by $n$ other half planes
$\cH(\tilde d_j, \tilde \zeta_j)$
for $j = 1, \dots, n$, $n \ge m$,
with $\tilde \zeta_i \ne \tilde \zeta_j$
for all $i\ne j$, such that
$\{\tilde \zeta_1,\dots,\tilde \zeta_n\}$
is $k$-regular and the boundary $\cL(\tilde d_j, \tilde \zeta_j)$ of
$\cH(\tilde d_j, \tilde \zeta_j)$ touches the polygon $\cP$
for each $j = 1, \dots, n$.
\smallskip
\begin{lemma}\label{lem3.6}
Suppose $\cP = \cap_{j=1}^m \cH(d_j, \zeta_j)$ such
that $0 \le \zeta_1 \le \dots \le \zeta_m < 2\pi$ and for each $r = 1,\dots,m$,
there are $1 \le j_1 < \cdots < j_k \le m$
such that $\zeta_{j_1}, \dots, \zeta_{j_k} \in (\zeta_r, \zeta_r + \pi)$.
For every $n \ge m$, there exist
$\tilde d_1, \dots, \tilde d_n\in\IR$ and
distinct $\tilde \zeta_1,\dots,\tilde \zeta_n \in [0,2\pi)$
with
$\{\tilde \zeta_1, \dots, \tilde \zeta_n\}$
being $k$-regular
such that
$\cP = \cap_{j=1}^n \cH(\tilde d_j, \tilde \zeta_j)$ and
$\cP \cap \cL(\tilde d_j, \tilde \zeta_j) \ne \emptyset$
for each $j = 1, \dots, n$.
\end{lemma}
\smallskip
\begin{proof}
Set $\tilde \zeta_1 = \zeta_1$, and
for $s\in\{2, \ldots, m\}$, let $\tilde \zeta_{s}=\zeta_{s}$
if $\zeta_{s-1}<\zeta_s$. For the remaining values, we have
$\zeta_{s-1}=\zeta_s$ and we can set
$$
\tilde \zeta_{s-t_1}=\zeta_{s-t_1+1}=\cdots=\zeta_s =\cdots=
\zeta_{s+t_2}<\tilde\zeta_{s+t_2+1}$$ for some $t_1\ge 1$ and
$t_2\ge 0$. Let $\ell=\min\{j:\tilde\zeta_j>\tilde\zeta_{s-t_1}+\pi\}$; then we can
replace $\zeta_{s+j}$ by $\tilde
\zeta_{s+j}=\zeta_{s+j}+\epsilon_j$ for sufficiently small
$\epsilon_j>0$ for $j=-t_1+1, -t_1+2, \ldots, 0, \ldots, t_2$ such
that $$
\tilde \zeta_{s-t_1}<\tilde\zeta_{s-t_1+1}<\cdots<\tilde\zeta_s <\cdots<
\tilde\zeta_{s+t_2}<\min\{\tilde\zeta_\ell-\pi, \tilde\zeta_{s+t_2+1}\}.$$
After this modification, $\tilde\zeta_1, \ldots,
\tilde\zeta_m$ are distinct and $\{\tilde\zeta_1, \ldots, \tilde\zeta_m\}$
is $k$-regular. If $n > m$, pick distinct $\tilde \zeta_{m+1},\dots,\tilde \zeta_n
\in [0,2\pi) \setminus \{\tilde \zeta_1,\dots,\tilde \zeta_m\}$.
Then $\{\tilde \zeta_1,\dots,\tilde \zeta_n\}$ also forms a $k$-regular set.
Finally, let $\tilde d_j = \max_{\mu\in \cP}\ \Re
\left( e^{-i\tilde \zeta_j} \mu \right)$ for $j =1,\dots,n$. Clearly,
we have $\cP \cap \cL(\tilde d_j, \tilde \zeta_j) \ne \emptyset$ and
$\cP\subseteq\cH(\tilde d_j, \tilde \zeta_j)$ for all $j$. By construction,
$\{\zeta_1, \ldots, \zeta_m\}\subseteq \{\tilde\zeta_1, \ldots,
\tilde\zeta_n\}$ and $\cP = \cap_{j=1}^m \cH(d_j, \zeta_j)=
\cap_{j=1}^n \cH(\tilde d_j, \tilde \zeta_j)$.
\end{proof}
\smallskip
We can now present the proof of Theorem \ref{equiv}.
\smallskip
\begin{proof}[Proof of Theorem \ref{equiv}]
Let $\cP = \bigcap_{j=1}^p\, \cH(d_j,\xi_j)$
be a non-degenerate $p$-sided polygon, where
$d_1,\dots,d_p\in \IR$ and $\xi_1,\dots,\xi_p\in [0,2\pi)$.
\smallskip
Suppose (I) holds. We may assume that
$A = \diag(a_1,\dots,a_{p+q})$ and $\Lambda_k(A) = \cP$.
By Lemma \ref{lem3.5}, one can remove the eigenvalues of $A$
in $\Lambda_k(A)$ that are not extreme points of $\Lambda_k(A)$
to get $\tilde A \in M_n$ for some positive integer $n \le p+q$
so that $\Lambda_k(A) = \Lambda_k(\tilde A)$.
We have the following.
\smallskip
\noindent
{\bf Claim.}
There are $f_1,\dots,f_n \in \IR$ and $\zeta_1, \dots, \zeta_n \in [0,2\pi)$
such that $\Lambda_k(\tilde A) =
\cap_{j=1}^n \cH(f_j, \zeta_j)$. Furthermore, for each $r = 1,\dots,n$,
there exist $1 \le j_1 < \dots < j_k \le n$ such that
$\zeta_{j_1},\dots,\zeta_{j_k} \in (\zeta_r, \zeta_r + \pi)$.
\smallskip
Once the claim holds, Lemma \ref{lem3.6} will ensure that
$\Lambda_k(\tilde A) = \cap_{j=1}^{p+q} \cH(\tilde d_j, \tilde
\xi_j)$ for some $\tilde d_1,\dots,\tilde d_{p+q} \in \IR$ and a
$k$-regular set $\{\tilde \xi_1, \dots, \tilde \xi_{p+q}\}$, with
$$ \bigcap_{j=1}^p\, \cH(d_j,\xi_j)
= \cP = \Lambda_k(A) = \Lambda_k(\tilde A) = \cap_{j=1}^{p+q} \cH(\tilde d_j, \tilde
\xi_j)\,.$$
Then
$\{\xi_1,\dots,\xi_p\} \subseteq \{\tilde \xi_1,\dots,\tilde \xi_{p+q}\}$. Thus,
we can take $\xi_{p+1},\dots,\xi_{p+q}\in [0,2\pi)$ so that
$\{\xi_1,\dots,\xi_{p+q}\}
= \{\tilde \xi_1,\dots,\tilde \xi_{p+q}\}$. Therefore, (II) holds.
For notational convenience, we assume that $A = \tilde A$
in the claim so that every eigenvalue of $A$ is
either an extreme point of $\Lambda_k(A)$ or does not lie in
$\Lambda_k(A)$.
We first construct $\zeta_1,\dots,\zeta_n \in [0,2\pi)$ and $f_1,\dots,f_n\in \IR$.
For $r = 1,\dots, n$,
let $\Gamma_r$ be the set containing all $\xi \in [0,2\pi)$
such that the closed half plane $\cH( \Re(e^{-i\xi} a_r), \xi)$ contains at least $n-k+1$ eigenvalues of $A$.
As $a_r$ is either an extreme point of $\Lambda_k(A)$ or not in
$\Lambda_k(A)$, there is a $\zeta\in [0,2\pi)$
such that $\Re(e^{-i\zeta} a_r) \ge \lambda_k(\Re(e^{-i\zeta} A))$
and hence the half plane $\cH( \Re(e^{-i\zeta} a_r), \zeta)$ contains at least $n-k+1$ eigenvalues.
Then $\Gamma_r$ is always nonempty.
Furthermore, by the definition of $\Gamma_r$,
the set $\Gamma_r$ is a union of closed arcs of $\Omega$.
Clearly,
$$\cP = \Lambda_k(A) \subseteq
\bigcap_{\xi \in \Gamma_r} \cH( \Re(e^{-i\xi} a_r), \xi).$$
Also, the above intersection, which contains $\cP$, is a non-degenerate conical region.
Hence $\Gamma_r$ is contained in some open semi-circular arc of $\Omega$; otherwise,
the above intersection of half planes would equal the singleton $\{a_r\}$.
As $\Gamma_r$ is a union of closed arcs in some open semi-circular arc of $\Omega$,
there exists a unique $\zeta_r \in \Gamma_r$ such that
\begin{eqnarray}\label{eq3.7}
\Gamma_r \subseteq (\zeta_r - \pi,\zeta_r].
\end{eqnarray}
Let $f_r = \Re(e^{-i\zeta_r} a_r)$ for $1\le r \le n$.
We show that $\Lambda_k(A) = \bigcap_{j = 1}^n \cH(f_j, \zeta_j)$.
Suppose $\cT$ is a minimal subset of $\cS_0$ such that $\Lambda_k(A) = \bigcap_{(r,s)\in \cT} H(a_r,a_s)$.
We may further assume that for all $(r,s)\in \cT$,
$\{a_1,\dots,a_n\} \cap L(a_r,a_s) \subseteq \conv \{a_r,a_s\}$.
For each $(r,s) \in \cT$,
write $H(a_r,a_s) = \cH( \Re(e^{-i\zeta} a_r), \zeta)$
with $\zeta = \arg(a_s-a_r) - \pi/2$. Then $\zeta \in \Gamma_r$.
We claim that $\zeta = \zeta_r$.
Suppose not. By the above assumption on $a_r$,
one can see that for a sufficiently small $\epsilon > 0$,
the half plane $\cH( \Re(e^{-i\hat \zeta} a_r), \hat \zeta)$ with $\hat \zeta = \zeta - \epsilon$
will contain all eigenvalues of $A$ that are in $H(a_r,a_s)$,
i.e., $\hat \zeta \in \Gamma_r$. With (\ref{eq3.7}), we have
$$\Lambda_k(A) \subseteq
\cH( \Re(e^{-i\zeta_r} a_r), \zeta_r) \cap
\cH( \Re(e^{-i\hat \zeta} a_r), \hat \zeta) \subseteq
H_0(a_r,a_s)\cup \{a_r\}.$$
So $L(a_r,a_s)\cap\Lambda_k(A)$ contains at most one point.
But this contradicts the fact that $(r,s)$ is an element in
the minimal subset $\cT$.
Therefore, $\zeta =\zeta_r$.
Then for each $(r,s)\in \cT$, $H(a_r,a_s) = \cH(f_r,\zeta_r)$ and so
$$\bigcap_{j = 1}^n \cH(f_j,\zeta_j)
\subseteq \bigcap_{(r,s)\in \cT} H(a_r,a_s)
= \Lambda_k(A) \subseteq \bigcap_{j = 1}^n \cH(f_j, \zeta_j).$$
Thus, $\Lambda_k(A) = \bigcap_{j = 1}^n \cH(f_j, \zeta_j)$
and the first part of the claim holds.
To prove the second part of the claim, without loss of generality, we may assume that
$a_r = 0$ and $\zeta_r = 0$. Then $f_r = 0$ and
$$\cH(f_r,\zeta_r) = H = \{z\in \IC: \Re(z) \le 0\}.$$
Thus, the closed left half plane contains at least $n-k+1$ eigenvalues of $A$.
Suppose that the closed right half plane $-H$ contains eigenvalues
$a_{j_1},\dots,a_{j_{h}}$ of $A$ with
$\zeta_{j_t} \ne 0$ for $t = 1,\dots, g$, and
$\zeta_{j_t} = 0$ for $t = g+1,\dots,h$
for some $g \le h$.
Fix a sufficiently small $\epsilon > 0$.
We choose $g+1\le \ell\le h$ so that
$$\Re( e^{-i\epsilon} a_{j_\ell})
= \max_{g+1\le t \le h} \Re( e^{-i\epsilon} a_{j_t}).$$
Then $\{a_{j_{g+1}},\dots, a_{j_h}\} \subseteq
\cH(\Re( e^{-i\epsilon} a_{j_\ell}), \epsilon)$.
On the other hand, this closed half plane
$\cH(\Re( e^{-i\epsilon} a_{j_\ell}), \epsilon)$
also contains all eigenvalues of $A$ that are in the left open half plane.
Thus, this closed half plane $\cH(\Re( e^{-i\epsilon} a_{j_\ell}), \epsilon)$
has at least $n-g$ eigenvalues of $A$.
On the other hand by (\ref{eq3.7}),
$\epsilon \notin\Gamma_{j_\ell}$ and so
$\cH(\Re( e^{-i\epsilon} a_{j_\ell}), \epsilon)$ can have at
most $n-k$ eigenvalues. Thus, we have $g \ge k$.
Now for each $t=1,\dots,k$, let $\hat d_t =\Re(a_{j_t})$; then
$\hat d_t \ge 0$ and $H \subseteq \cH(\hat d_t,0)$.
Thus, the closed half plane $\cH(\hat d_t,0)$ contains at least $n-k+1$ eigenvalues of $A$,
i.e., $\zeta_r = 0 \in \Gamma_{j_t}$. Recall that $\zeta_{j_t} \ne 0$.
By (\ref{eq3.7}), one sees that
\begin{eqnarray*}
\zeta_r \in (\zeta_{j_t} - \pi, \zeta_{j_t})
\quad \hbox{ for } t = 1,\dots,k.
\end{eqnarray*}
Equivalently, $\zeta_{j_1},\dots,\zeta_{j_k} \in (\zeta_r, \zeta_r+\pi)$.
Thus, our {\bf claim} is proved, and (II) holds.
\smallskip
Suppose now (II) holds, namely, there are distinct $\xi_{p+1},\dots,\xi_{p+q}$ such that \linebreak
$\{\xi_1,\dots,\xi_{p+q}\}$ is $k$-regular. For $j=p+1,\dots,p+q$, define
$$d_j = \max_{\mu\in \cP}\ \Re ( e^{-i\xi_j} \mu ).$$
Then $\cP \subseteq \cH(d_j,\xi_j)$ and so
$$\cP = \bigcap_{j=1}^p\, \cH(d_j,\xi_j)
= \bigcap_{j=1}^n\, \cH(d_j,\xi_j)$$
with $n = p+q$.
By Lemma \ref{lem3.6}, we may assume that
$\cP\cap \cL(d_j,\xi_j)\neq\emptyset$ for all $j = 1, \dots, n$,
and $0\le\xi_1< \cdots < \xi_n<2\pi$ such that
condition (\ref{eq3.1}) holds.
For each $r=1,\dots,n$, let
$$a_r = \frac{i}{\sin(\xi_{r+k} - \xi_r)}
\left( e^{i\xi_r}d_{r+k} - e^{i\xi_{r+k}} d_r\right)$$
and $A = \diag(a_1,\dots,a_n)$. Then
$$\Re(e^{-i\xi_r} a_r) = d_r
\quad\hbox{and}\quad
\Re(e^{-i\xi_{r+k}} a_r) = d_{r+k}.$$
Note that $a_r \in \cL(d_r,\xi_r) \cap \cL(d_{r+k}, \xi_{r+k})$ is
the vertex of the conical region $\cH(d_r,\xi_r)\cap\cH(d_{r+k},
\xi_{r+k})$, which contains $\cP$. Therefore, $$
\Re(e^{-i\xi_r} (a_r-\mu)) \ge 0\quad
\mbox{and}\quad
\Re(e^{-i\xi_{r+k}}( a_r-\mu))\ge 0,$$
for all $\mu\in\cP$. Since $\xi_{r+k} \in (\xi_r,\xi_r+\pi)$, we have
\begin{eqnarray}\label{eq3.5}
\Re(e^{-i\xi} a_r) \ge \max_{\mu \in \cP}\Re(e^{-i\xi} \mu)
\quad \hbox{for all }\xi \in [\xi_r,\xi_{r+k}].
\end{eqnarray}
Let $\mu_j\in
\cL(d_j,\xi_j)\cap\cP$ for $j=r,r+k$.
As $\xi_{r+k} \in (\xi_r,\xi_r+\pi)$, we have
$\mu_r=a_r-ie^{i\xi_r}b_r$ and $\mu_{r+k}=a_r+ie^{i\xi_{r+k}}c_r$
for some $b_r, c_r\ge 0$. Note that
$$\Re(e^{-i\xi}(\mu_r-a_r))=b_r\sin(\xi_r-\xi)\ge 0 \quad
\mbox{for all} \ \xi\in[\xi_r-\pi, \xi_r],$$
and $$\Re(e^{-i\xi}(\mu_{r+k}-a_r))=c_r\sin(\xi - \xi_{r+k})\ge 0 \quad
\mbox{for all} \ \xi\in[\xi_{r+k}, \xi_{r+k}+\pi].$$
Since $\{\xi_1, \ldots, \xi_n\}$ is $k$-regular,
it is easily seen that
$$[0,2\pi) \setminus [\xi_r,
\xi_{r+k}]=[\xi_r-\pi, \xi_r)\cup(\xi_{r+k}, \xi_{r+k}+\pi].$$
Therefore, for $\xi \in [0,2\pi) \setminus [\xi_r,
\xi_{r+k}]$, we have
$$\max\{\Re(e^{-i\xi}(\mu_r-a_r)),\Re(e^{-i\xi}(\mu_{r+k}-a_r))\}\ge 0.$$
Moreover, we have
\begin{equation}\label{eq3.6}
\max\{\Re(e^{-i\xi } \mu_r), \Re(e^{-i\xi }
\mu_{r+k})\}\ge \Re(e^{-i\xi} a_r).
\end{equation}
Let $\xi \in [0,2\pi)$. Then $\xi \in [\xi_s,\xi_{s+1})$ for some
$s \in\{1,\dots,n\}$. It follows that $\xi \in [\xi_r, \xi_{r+k}]$
for $r = s-k+1,\dots,s$, and $\xi \in [0,2\pi) \setminus [\xi_r,
\xi_{r+k}]$ for other $r$. By (\ref{eq3.5}) and (\ref{eq3.6}),
$$\min_{r \in \{s-k+1,\dots,s\}} \Re(e^{-i\xi} a_r)
\ge \max_{\mu \in \cP} \Re(e^{-i\xi} \mu)
\geq \max_{r \notin \{s-k+1,\dots,s\}} \Re(e^{-i\xi} a_r).$$
Thus, $\lambda_k( \Re(e^{-i\xi} A))
= \min_{r \in\{s-k+1,\dots,s\}} \Re(e^{-i\xi} a_r)$ and so
$$\cP \subseteq \cH\left(\lambda_k(\Re(e^{-i\xi} A) ),\xi \right).$$
Hence, $\cP \subseteq \Lambda_k(A)$.
Furthermore, if $\xi = \xi_s$, then $\Re(e^{-i\xi_s} a_s) = d_s$. Thus
$$\lambda_k(\Re(e^{-i\xi_s} A)) = \min_{r \in \{s-k+1,\dots,s\}}
\Re(e^{-i\xi_s} a_r) \le d_s.$$
It follows that
\begin{eqnarray*}
\Lambda_k(A)
&=& \bigcap_{\xi\in [0,2\pi)} \cH
\left( \lambda_k(\Re(e^{-i\xi} A)), \xi\right) \\
&\subseteq& \bigcap_{1\le s \le n} \cH
\left( \lambda_k(\Re(e^{-i\xi_s} A)), \xi_s\right)
\subseteq \bigcap_{1\le s \le n} \cH \left(d_s, \xi_s \right)
= \cP.
\end{eqnarray*}
Thus, $\cP = \Lambda_k(A)$.
\end{proof}
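The construction used for the (II) $\Rightarrow$ (I) direction is completely explicit, so it can be carried out numerically. The sketch below is our code (not from the paper); it assumes the angles are already distinct, sorted and $k$-regular and that the support values $d_j$ are given, and it is checked on a regular pentagon:

```python
import cmath
import math

def construct_eigs(d, xi, k):
    """Given support values d_j and sorted distinct k-regular angles
    xi_j (j = 0, ..., n-1) with P the intersection of the half planes
    H(d_j, xi_j), return eigenvalues a_r of a diagonal normal matrix
    with Lambda_k(A) = P, following the proof above.  Since sin and
    exp are 2*pi-periodic, indices can be reduced modulo n directly."""
    n = len(xi)
    a = []
    for r in range(n):
        rk = (r + k) % n
        th = xi[rk] - xi[r]          # equals xi_{r+k} - xi_r modulo 2*pi
        a.append(1j / math.sin(th)
                 * (cmath.exp(1j * xi[r]) * d[rk]
                    - cmath.exp(1j * xi[rk]) * d[r]))
    return a

# Regular pentagon P = intersection of H(1, 2*pi*j/5), with k = 2.
n, k = 5, 2
xi = [2 * math.pi * j / n for j in range(n)]
a = construct_eigs([1.0] * n, xi, k)
# Each a_r satisfies Re(e^{-i xi_r} a_r) = d_r = 1
# and Re(e^{-i xi_{r+k}} a_r) = d_{r+k} = 1, as in the proof.
```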
\smallskip
By Theorem \ref{equiv}, Problem \ref{prob.1} is equivalent
to the following combinatorial problem, whose solution will
be given in the next section.
\medskip
\begin{problem}\label{prob.2}
Suppose $\{\xi_1,\dots,\xi_p\} \subseteq [0,2\pi)$ is $1$-regular. For $k>1$,
determine the smallest nonnegative integer $q$ so that
$\{\xi_1,\dots,\xi_{p+q}\}$ is $k$-regular
for some distinct $\xi_{p+1},\dots,\xi_{p+q}\in [0,2\pi)$.
\end{problem}
\section{Solutions for Problems \ref{prob.1} and \ref{prob.2}}
In this section, we give the solutions for Problems
\ref{prob.1} and \ref{prob.2}.
Given a non-empty set
$\Pi = \{\xi_1,\dots,\xi_p\} \subseteq \Omega$,
Problem \ref{prob.2} is equivalent to the study
of the smallest nonnegative integer $q$ so that $\{\xi_1,\dots,\xi_{p+q}\}$
is $k$-regular for some distinct
$\xi_{p+1},\dots,\xi_{p+q} \in \Omega$.
We have the following.
\medskip
\begin{theorem}\label{main6}
Let $k>1$ be a positive integer and $\Pi$ be a $p$-element subset
of $\Omega$, containing $s$ pairs of antipodal points:
$\{\beta_1, -\beta_1\}, \dots, \{\beta_s,-\beta_s\}$,
where $p \ge 3$ and $s \ge 0$.
Suppose $\Pi$ is $1$-regular but not $k$-regular
and $q$ is the minimum number of
points in $\Omega$ one can add to $\Pi$ to form a $k$-regular set.
\noindent
{\rm (a)} If $k \ge p-s$, then
\begin{eqnarray}\label{Q1}
q =
\cases{
2k+1-p & \hbox{if } $s = 0$, \cr
2k+2-p & \hbox{if } $s > 0$.}
\end{eqnarray}
\noindent
{\rm (b)} If $k < p-s$, then $q$ is the smallest nonnegative integer $t$
such that one can remove $t$ non-antipodal points from $\Pi$ to get a $(k-t)$-regular set. More precisely,
\begin{eqnarray} \label{Q2}
q = &&\min \{ t\in \IN:
\Pi \setminus \{ \beta_1, -\beta_1, \dots, \beta_s,-\beta_s \}
\mbox{ has a {\it t}-element } \cr
&&\hspace{3cm}
\mbox{ subset $T$ such that }
\Pi \setminus T \mbox{ is }
(k-t)\mbox{-regular}\, \}.
\end{eqnarray}
Consequently,
\begin{equation} \label{optimal1}
q \le \min\{2k+2-p,k-1\}.
\end{equation}
The inequality in (\ref{optimal1})
becomes equality if
$\Pi = \{1, i, -1,\alpha_4,\dots,\alpha_p\}$
where $\alpha_4,\dots,\alpha_p$ lie in the open lower half plane.
\end{theorem}
\smallskip
Several remarks concerning Theorem \ref{main6} are in order.
If condition (a) in the theorem holds, then the value $q$
can be determined immediately. However, it is important to
consider two cases depending on whether $\Pi$ has pairs of antipodal
points as illustrated by the following.
\medskip
\begin{example} \rm
Suppose $S_1 = \{1, w, w^2, w^3\}$ with $w = e^{2\pi i/5}$.
Then $\alpha \ne -\beta$ for any two elements $\alpha, \beta \in S_1$
and adding $w^4$ to $S_1$ results in a $2$-regular set.
Suppose $S_2 = \{1, -1, i, -i\}$. Then we need to add at least two
points, say, $z, -z \in \Omega \setminus S_2$, to get a $2$-regular set.
\end{example}
\smallskip
Suppose condition (b) in the theorem
holds.
We can determine the value $q$ by taking $t$
non-antipodal elements away from $\Pi$
at a time and check whether the resulting set is $(k-t)$-regular.
The value $q$ can then be determined in no more than $\sum_{i=0}^{p-2s}{p-2s\choose i} = 2^{p-2s}$ steps.
The success of reducing Problem \ref{prob.2} to a problem which is solvable
in finite steps depends on Lemma \ref{prop2} and Proposition \ref{prop3}.
It would be nice to have a simple formula for $q$ in terms of $p,k,s$
in case (b) of the theorem.
However, the following example shows that the value
$q$ depends not only on the values
$p$, $k$, $s$, but also on the relative positions
of the points in $\Pi$.
\medskip
\begin{example} \rm
Let $S_1 = \{1,w,w^2,w^3,w^4,w^5\}$ with $w = e^{2\pi i/7}$
and $S_2 = \{z^2, z^3, z^7, z^8, z^{12}, z^{13}\}$
with $z = e^{2\pi i/15}$.
Notice that both $S_1$ and $S_2$ contain $6$ elements
and have no antipodal pairs.
Furthermore, both of them are $2$-regular but not $3$-regular.
Clearly, adding $w^6$ to $S_1$ results in a $3$-regular set.
However, as each of the open arcs
$(z^3, -z^3)$, $(z^8,-z^8)$ and $(z^{13},-z^{13})$
contains only two elements of $S_2$, while
the intersection of these three open arcs is empty,
at least two elements have to be added
to $S_2$ to form a $3$-regular set.
\end{example}
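The removal criterion of Theorem \ref{main6}(b) and the finite search described above are straightforward to implement. The Python sketch below is our code (function names are not from the paper); it recovers $q=1$ for $S_1$ and $q=2$ for $S_2$ in this example:

```python
import itertools
import math

TWO_PI = 2 * math.pi

def arc_count(angles, x, tol=1e-9):
    """Number of angles in the open counterclockwise arc (x, x + pi)."""
    return sum(1 for y in angles
               if tol < (y - x) % TWO_PI < math.pi - tol)

def is_k_regular(angles, k):
    return all(arc_count(angles, x) >= k for x in angles)

def min_additions(angles, k, tol=1e-9):
    """Case (b) of the theorem (sketch): q is the least t such that
    removing some t non-antipodal points leaves a (k - t)-regular set."""
    non_antipodal = [x for x in angles
                     if all(abs((y - x) % TWO_PI - math.pi) > tol
                            for y in angles)]
    for t in range(k):
        for T in itertools.combinations(non_antipodal, t):
            rest = [x for x in angles if x not in T]
            if is_k_regular(rest, k - t):
                return t
    return None   # case (b) does not apply

S1 = [2 * math.pi * j / 7 for j in range(6)]                # 1, w, ..., w^5
S2 = [2 * math.pi * j / 15 for j in (2, 3, 7, 8, 12, 13)]   # powers of z
# min_additions(S1, 3) == 1 and min_additions(S2, 3) == 2
```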
\smallskip
Note that our proofs are constructive;
see Lemma \ref{prop2} and Propositions \ref{ass2} and \ref{prop3}.
One can actually construct a subset
$\Pi' \subseteq \Omega$ with $q$ elements so that $\Pi \cup \Pi'$ is
$k$-regular.
By Theorem \ref{main6}, we can
answer Problems \ref{prob.1} and \ref{prob.2}, and obtain some
additional information on their solutions. We will continue to use
the notation $\cH(d,\xi)$ defined in (\ref{chdxi}) in the following.
\medskip
\begin{theorem}\label{main7}
For Problem \ref{prob.1}, if a $p$-sided polygon $\cP$
is expressed as $\cP = \cap_{j=1}^p \cH(d_j,\xi_j)$ for some
$d_1, \dots, d_p \in \IR$ and $\xi_1,\dots,\xi_p \in [0,2\pi)$,
then the minimum dimension $n$ for the existence of a normal matrix
$A \in M_{n}$ such that $\Lambda_k(A) = \cP$
is equal to $p+q$, where $q$ is determined in Theorem \ref{main6}.
Moreover,
\begin{equation} \label{optimal2}
n \le \max\{2k+2,p+k-1\}.
\end{equation}
The inequality in (\ref{optimal2})
becomes equality if
$(\xi_1, \xi_2, \xi_3) = (0, \pi/2, \pi)$
and $\xi_4,\dots,\xi_p$ lie in $(\pi,2\pi)$.
\end{theorem}
\smallskip
We break down the proofs of Theorems \ref{main6} and \ref{main7}
into several propositions.
We first give a lower bound for the number of elements in a $k$-regular set.
\medskip
\begin{proposition}\label{lower}
Suppose $S = \{\alpha_1,\dots,\alpha_n\} \subseteq \Omega$ is $k$-regular.
Then $n \ge 2k+1$. Furthermore, if
$S$ contains a pair of antipodal points $\{\alpha,-\alpha\}$,
then $n \ge 2k+2$.
\end{proposition}
\smallskip
\begin{proof}
For any $r \in \{1,\dots, n\}$, each of the open arcs
$(\alpha_r,-\alpha_r)$ and $(-\alpha_r,\alpha_r)$
contains at least $k$ elements of $S$. Thus, $n \ge 2k+1$.
For the last statement, if we take $\alpha_r = \alpha$,
then together with $\alpha$ and $-\alpha$, we see that
$n \ge 2k+2$. The proof of the assertion is complete.
\end{proof}
\smallskip
As shown in Proposition \ref{lower}, the existence of
a pair of antipodal points $\{\alpha, -\alpha\}$
has implications for the size of a $k$-regular set $\Pi$.
The next result together with Proposition \ref{lower} shows that
the lower bound in (\ref{Q1}) is best possible.
\medskip
\begin{proposition}\label{ass2}
Let $k > 1$ and let $\Pi$
be a $p$-element subset of $\Omega$ containing $s$ pairs of antipodal points,
where $p \ge 3$ and $s \ge 0$.
If $\Pi$ is $1$-regular but not $k$-regular and $k \ge p-s$,
then one can extend $\Pi$ to a $k$-regular set by adding $2k+1-p$ elements
if $s=0$, and $2k+2-p$ elements if $s>0$.
\end{proposition}
\smallskip
\begin{proof}
Assume $k \ge p-s$.
Suppose first that
$s > 0$.
Let $\Pi''$ be a set containing $(k-p+s+1)$ pairs
of antipodal points such
that $\Pi'' \cap \Pi$ is empty. Take
$$\Pi' = \Pi'' \cup -(\Pi \setminus \{\beta_1,-\beta_1,\dots,\beta_s,-\beta_s\}).$$
Then $\Pi'$ contains $(2k+2-p)$ elements.
Furthermore, the set $\Pi\cup\Pi'$ contains exactly $k+1$ pairs
of antipodal points; hence it is $k$-regular.
Thus, the result follows if $s > 0$.
Next, suppose $s=0$.
Without loss of generality, we may assume that
$1\in \Pi$. Hence, $-1\in \Pi'$.
We now modify $\Pi'$.
We first delete the point $-1$ in $\Pi'$.
Then for all other points $\alpha \in \Pi'$,
we replace $\alpha$ by $e^{i\xi} \alpha$ if $\alpha$ lies in the upper open half plane
$P = \{z \in \IC: \Im(z) > 0\}$,
and by $e^{-i\xi} \alpha$ if $\alpha$ lies in the lower open half plane $-P$,
with sufficiently small $\xi > 0$.
Then we see that for every $\alpha \in \Pi\cup \Pi'$,
$\alpha P$ still contains exactly $k$ elements.
Thus, $\Pi\cup \Pi'$ is $k$-regular. Furthermore,
the modified set $\Pi'$ has one fewer point, i.e.,
$\Pi'$ has only $2k+1-p$ elements.
The proof is complete.
\end{proof}
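For the case $s>0$, the construction in this proof is directly implementable: reflect the non-antipodal part of $\Pi$ and add $k-p+s+1$ fresh antipodal pairs. The following Python sketch is ours (the placement of the fresh pairs at arbitrary unused angles is an implementation choice, not from the paper):

```python
import math

TWO_PI = 2 * math.pi

def arc_count(angles, x, tol=1e-9):
    return sum(1 for y in angles
               if tol < (y - x) % TWO_PI < math.pi - tol)

def is_k_regular(angles, k):
    return all(arc_count(angles, x) >= k for x in angles)

def extend_antipodal_case(angles, k, tol=1e-6):
    """Case s > 0 of the proposition (sketch): return 2k+2-p added
    angles whose union with `angles` is k-regular, assuming k >= p - s
    and that `angles` contains at least one antipodal pair."""
    antip = {x for x in angles
             if any(abs((y - x) % TWO_PI - math.pi) < tol
                    for y in angles)}
    s, p = len(antip) // 2, len(angles)
    assert s > 0 and k >= p - s
    # Reflect the non-antipodal part of the set.
    new = [(x + math.pi) % TWO_PI for x in angles if x not in antip]
    # Add (k - p + s + 1) fresh antipodal pairs at unused angles.
    t, needed = 0.1234, k - p + s + 1
    while needed > 0:
        cand = [t % TWO_PI, (t + math.pi) % TWO_PI]
        if all(min((c - y) % TWO_PI, (y - c) % TWO_PI) > tol
               for c in cand for y in angles + new):
            new += cand
            needed -= 1
        t += 0.271
    return new

Pi = [0.0, math.pi, math.pi / 2]        # p = 3, s = 1
extra = extend_antipodal_case(Pi, 2)    # 2k + 2 - p = 3 added points
# Pi + extra has k + 1 = 3 antipodal pairs and is 2-regular.
```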
\smallskip
A referee pointed out that each $(k-1)$-regular set can
be enlarged to a $k$-regular set by adding at most $2$ extra elements.
The following result shows that sometimes $2$ is not the minimum number needed.
\medskip
\begin{lemma}\label{prop2}
Let $k > 1$ and $\Pi$ be a subset of $\Omega$
containing at least one non-antipodal point.
The following are equivalent.
\begin{enumerate}
\item[\rm (a)] One can add a point $\beta\notin \Pi$ so that
$\Pi \cup \{\beta\}$ is $k$-regular.
\item[\rm (b)] One can delete a non-antipodal point $\gamma \in \Pi$ so that
$\Pi \setminus \{\gamma\}$ is $(k-1)$-regular.
\end{enumerate}
Here, an element $\alpha\in \Pi$ is called a non-antipodal point of $\Pi$ if $-\alpha \notin \Pi$.
\end{lemma}
\smallskip
\begin{proof}
Suppose first that (b) holds.
Let $P = \{z\in \IC: \Im(z) > 0\}$.
Without loss of generality, we may assume that $\gamma = 1$
is a non-antipodal point in $\Pi$. Suppose $ \Pi\setminus\{\gamma\} =\{e^{i\theta_1},\dots, e^{i\theta_{p-1}}\}$
such that
$$0< \theta_1 <\cdots <\theta_m<\pi<\theta_{m+1}<\cdots< \theta_{p-1}<2\pi.$$
As $\Pi\setminus \{\gamma\}$ is $(k-1)$-regular, by Proposition \ref{lower}, $\Pi\setminus \{\gamma\}$ has
$p-1\ge 2(k-1)+1$ elements. Therefore, for every $\alpha\in\Omega$, the open half plane $\alpha P$ contains at least $k-1$ elements in $\Pi\setminus\{\gamma\}$ and either $P$ or $- P$ contains at least $k$ elements in $\Pi\setminus\{\gamma\}$. Hence, we have either $m=k-1$ or $k\le m\le p-k$.
Choose $\beta =e^{i\theta}$ where
$$\theta
= \cases{
\max\{\pi+\theta_m,\theta_{p-1}\}/2 & \mbox{ if } $m=k-1$, \cr
\min\{2\pi+\theta_1,\pi+\theta_{m+1}\}/2 & \mbox{ if } $k\le m\le p-k.$
}
$$
Now for every $\alpha \ne \pm 1$, the open half plane $\alpha
P$ contains at least $k-1$ elements of $\Pi\setminus \{\gamma\}$
and either $\gamma$ or $\beta$. Hence, $\alpha P$ contains at least $k$ elements of
$\Pi\cup\{\beta\}$. On the other hand, when $\alpha = \pm 1$, the open half plane
$\alpha P$ contains either $k$ elements of $ \Pi $ or $k-1$ elements of $\Pi $ and $\beta$. Again, $\alpha P$ contains at least $k$ elements of
$\Pi\cup\{\beta\}$. Thus, (a) holds.
\smallskip
Conversely, suppose (a) holds.
If $-\beta \in \Pi$,
then it is easy to see that the set $\Pi \setminus \{-\beta\}$
is $(k-1)$-regular.
From now, we assume that $-\beta\notin \Pi$.
Without loss of generality, we may assume that $\beta = -1$.
Furthermore, by replacing $\Pi$ with the set $\{\bar \xi: \xi\in \Pi\}$, if necessary, we can assume that
the number of elements in $\Pi \cap P$
is greater than or equal to the number of elements in $\Pi \cap (-P)$.
Under this assumption, the upper open half plane must contain at least one
non-antipodal point of $\Pi$.
Let $\gamma$ be the non-antipodal point in $\Pi$ such that
$0<\arg(\gamma) \le \arg(\alpha)$ for all non-antipodal
points $\alpha \in \Pi$. Then $\gamma \in P$.
We show
that $\Pi \setminus \{\gamma\}$ is $(k-1)$-regular.
Take any $\alpha \in \Pi \setminus \{\gamma\}$.
Suppose $\alpha \in \beta P \cup \gamma P$. Then the open half plane $\alpha P$
contains at most one of the points $\beta$ and $\gamma$. As the open half plane
$\alpha P$ contains at least $k$ elements of $\Pi \cup \{\beta\}$,
$\alpha P$ contains at least $k-1$ elements of $\Pi \setminus \{\gamma\}$.
Thus, $\Pi \setminus \{\gamma\}$ is $(k-1)$-regular
if $\Pi\setminus \{\gamma\} \subseteq \beta P \cup \gamma P$.
Now suppose $(\Pi\setminus \{\gamma\}) \setminus (\beta P \cup \gamma P)$
is nonempty and let $\omega_1,\dots,\omega_t$ be the points in this set.
Notice that all of them lie in the upper open half plane $P$.
Therefore, we may assume that $$0 < \arg(\omega_1) < \cdots <
\arg(\omega_t) < \arg(\gamma) < \pi.$$
Also by the choice of $\gamma$,
$\omega_1,\dots,\omega_t$ cannot be non-antipodal points and
hence the points $-\omega_1, \dots,-\omega_t$ are in $\Pi$.
Clearly, each open half
plane $\omega_j P$ contains at least $k$ elements of $\Pi \cup \{\beta\}$.
Notice that $(\omega_j P)\setminus P$ contains exactly
$j$ elements of $\Pi \cup \{\beta\}$, namely,
$-\omega_1,\dots,-\omega_{j-1}$ and $\beta$.
Also the set $P\setminus (\omega_j P)$
contains exactly $j$ elements of $\Pi \cup \{\beta\}$, namely,
$\omega_1,\dots,\omega_j$.
It follows that the half plane $\omega_j P$ contains the
same number of elements of $\Pi\cup \{\beta\}$
as the upper half plane $P$.
By Proposition \ref{lower},
$\Pi \cup \{\beta\}$ contains at least $2k+2$ elements.
Then by assumption, the upper open half plane $P$ contains at
least $k+1$ elements of $\Pi \cup \{\beta\}$.
Thus, every open half plane $\omega_j P$ contains at least $k+1$
elements of $\Pi \cup \{\beta\}$, and hence
contains at least $k-1$ elements of $\Pi\setminus \{\gamma\}$.
Therefore,
$\Pi\setminus \{\gamma\}$ is a
$(k-1)$-regular set and the assertion follows.
\end{proof}
\smallskip
Applying the above lemma repeatedly, we have the following.
\medskip
\begin{proposition}\label{prop3}
Let $k > 1$ and let $\Pi$
be a $p$-element subset of $\Omega$
containing $s$ pairs of antipodal points,
where $p \ge 3$ and $s \ge 0$.
Suppose $p > 2s$.
For any positive integer $t \le \min\{k,p-2s,p-1\}$,
the following are equivalent.
\begin{enumerate}
\item[\rm (a)] One can add $t$ points $\beta_1,\dots,\beta_t \notin \Pi$
so that
$\Pi \cup \{\beta_1,\dots,\beta_t\}$ is $k$-regular.
\item[\rm (b)] One can delete $t$ non-antipodal points $\gamma_1,\dots,\gamma_t \in \Pi$ so that
$\Pi \setminus \{\gamma_1,\dots,\gamma_t\}$ is $(k-t)$-regular.
\end{enumerate}
\end{proposition}
\smallskip
\begin{proof}
Clearly, the result holds for $t = 1$ by Lemma \ref{prop2}.
Assume the statement holds for all $\ell < t$.
Suppose $\Pi \cup \{\beta_1,\dots,\beta_t\}$ is $k$-regular.
Let $\Pi_1 = \Pi \cup \{\beta_1\}$.
Then $\Pi_1 \cup \{\beta_2,\dots,\beta_t\}$ is $k$-regular and it follows from the assumption that
one can find $t-1$ non-antipodal points $\gamma_1,\dots,\gamma_{t-1} \in \Pi \cup\{\beta_1\}$
such that $\Pi_1 \setminus \{\gamma_1,\dots,\gamma_{t-1}\}$ is $(k-t+1)$-regular.
If $\beta_1 \notin \{\gamma_1,\dots,\gamma_{t-1}\}$,
by applying Lemma \ref{prop2} to the set $(\Pi \setminus \{\gamma_1,\dots,\gamma_{t-1}\})
\cup \{\beta_1\}$, one can find another non-antipodal point
$\gamma_t \in \Pi_1 \setminus \{\gamma_1,\dots,\gamma_{t-1}\}$
so that $\Pi \setminus \{\gamma_1,\dots,\gamma_{t}\}$ is $(k-t)$-regular.
On the other hand, if $\beta_1$ is one of the $\gamma_j$, say $\beta_1 = \gamma_1$,
then $\Pi \setminus \{\gamma_2,\dots, \gamma_{t-1}\}$ is $(k-t+1)$-regular.
In this case, take an arbitrary element $\gamma_t \in \Pi \setminus \{\gamma_2,\dots, \gamma_{t-1}\}$
and apply Lemma \ref{prop2} to the set
$(\Pi \setminus \{\gamma_2,\dots, \gamma_t\})\cup \{\gamma_t\}$
to find another non-antipodal point $\gamma_{t+1}$ so that
the set $\Pi \setminus \{\gamma_2,\dots, \gamma_{t+1}\}$ is $(k-t)$-regular.
Then (b) follows.
The proof of (b) implying (a) can also be done by induction in a similar way.
\end{proof}
\smallskip
Suppose $k < p-s$.
Given a $p$-element subset $\Pi$ of $\Omega$
containing $s$ pairs of antipodal points
$\beta_1, -\beta_1, \dots, \beta_s,-\beta_s$ with $s > 0$,
which is not $k$-regular,
the set obtained from $\Pi$ by
deleting all $p-2s$ non-antipodal points
is an $(s-1)$-regular set.
On the other hand,
if $\Pi$ does not have any pair of antipodal points, then $k \le p-1$
and one can always delete $k$ elements to form a $0$-regular set.
In both cases, one sees that the following
minimum always exists.
\begin{eqnarray*}
q=&& \min \{ t\in \IN:
\Pi \setminus \{ \beta_1, -\beta_1, \dots, \beta_s,-\beta_s \}
\mbox{ has a {\it t}-element } \cr
&& \hspace{35mm} \mbox{ subset $T$ such that }
\Pi \setminus T \mbox{ is }
(k-t)\mbox{-regular}\, \}.
\end{eqnarray*}
By Proposition \ref{prop3}, one can always add
this minimum number $q$ of points to $\Pi$ to form a $k$-regular set.
Furthermore, this number $q$ is optimal
in the sense that one cannot add fewer than $q$ elements
to do so.
By definition, $q$ is a positive integer
bounded above by $\min\{k,p-2s\}$.
The following proposition gives more information about the minimum value
(\ref{Q2}) in Theorem \ref{main6}.
\medskip\begin{proposition}\label{main6a}
We use the notation of Theorem \ref{main6}.
If $k < p-s$,
then the value $q$ in (\ref{Q2}) exists and satisfies
$$q \le \cases{
k & \hbox{if } $(p,s) = (k+1,0)$ \hbox{ or } $(k+2,1)$, \cr
\min\{k-1,p-2s\} & \hbox{otherwise.}
}$$
Also $q$ is bounded below by $2k+1-p$ if $s = 0$,
and by $2k+2-p$ if $s > 0$.
Furthermore, $q = 2k+1-p$
if $p \le k+2$ with $s = 0$
and $q = 2k+2-p$ if $p \le k+3$ with $s > 0$.
\end{proposition}
\smallskip
\begin{proof}
The lower bound can be seen easily from Proposition \ref{lower}.
Also the case when $(p,s) = (k+1,0)$ or $(k+2,1)$
has already been discussed.
Now we assume that $(p,s) \notin \{(k+1,0), (k+2,1)\}$.
Consider the case when $s \ge 2$.
Take $t = \min\{k-1,p-2s\}$
and delete $t$ non-antipodal elements in $\Pi$.
Then the resulting set is $(s-1)$-regular and hence $(k-t)$-regular
as $k-t = \max\{1,k-p+2s\} \le s-1$.
Thus, $q \le t$.
Next we consider the case when $s = 1$ and $p \ge k+3$.
Let $\{\alpha,\ -\alpha\}$ be the pair of antipodal points in $\Pi$.
Since $\Pi$ is $1$-regular, there are
$\alpha_1\in (\alpha,\ -\alpha)\cap\Pi$ and
$\alpha_2\in (-\alpha,\ \alpha)\cap\Pi$.
Pick $k-1$ other non-antipodal points
$\alpha_3,\dots,\alpha_{k+1}$ in $\Pi$.
The set $\Pi \setminus \{\alpha_3,\dots,\alpha_{k+1}\}$,
which contains $\{\alpha_1,\alpha_2,\alpha,-\alpha\}$,
is $1$-regular. Then $q \le k-1$.
Finally consider the case when $s = 0$ and
$p \ge k+2$. We may assume that $\Pi=\{e^{i\xi_j}:1\le j\le p\}$
with $0=\xi_1<\cdots<\xi_p<2\pi$.
Since $\Pi$ is $1$-regular, we can choose $\ell$ such that
$\xi_\ell=\max\{\xi_j:0<\xi_j<\pi\}$. Then
$S=\{e^{i\xi_1},e^{i\xi_{\ell}},e^{i\xi_{\ell+1}}\}$ is $1$-regular,
and any $(p-k+1)$-element subset of $\Pi$ containing $S$ is $1$-regular.
Thus, $q \le k-1$.
\end{proof}
\smallskip
Now we are ready to present the following:
\bf Proof of Theorems \ref{main6} and \ref{main7}. \rm The assertions on
$q$ and $n$ follow from Propositions \ref{ass2} and \ref{prop3}.
For the last assertion in Theorem \ref{main6},
we see that in order to get a $k$-regular set by adding $q$ points to
$\Pi$, we need to add at least $k-1$ points $e^{i\xi}$, with $0<\xi<\pi$. If $2k+2-p>k-1$,
then $p-3<k$ and we need to add an extra $k-(p-3)$ points $e^{i\xi}$, with $\pi<\xi<2\pi$,
giving a total of $k-1+k-(p-3)=2k+2-p$ points. This proves the equality in (\ref{optimal1}).
The equality in (\ref{optimal2}) now follows readily.
{\vbox{\hrule height0.6pt
\hbox{\vrule height1.3ex width0.6pt\hskip0.8ex
\vrule width0.6pt}
\hrule height0.6pt}}
\smallskip
To close this section, let us illustrate our
results by the following example.
\smallskip
\begin{example}\label{ex3} \rm
Let the polygon $\cP
= \conv\{1,w,w^2,w^3,w^4,w^5,w^6,w^9\}$ with $w = e^{2\pi i/12}$
be as shown in the following figure.
\begin{center}
\epsfig{file=SIMAX076430RRR_fig6.png, width =1.7in, height=1.7in} \\
The polygon $\cP$
\end{center}
Then $\cP = \bigcap_{j=1}^8 \cH(d_j,\xi_j)$ with
$d_1 = \cdots = d_6 = \cos \frac{\pi}{12}$,
$d_7 = d_8 = \cos \frac{\pi}{4}$, and
$$\left(\xi_1,\dots,\xi_8\right)
= \left(\frac{\pi}{12}, \frac{3\pi}{12}, \frac{5\pi}{12},
\frac{7\pi}{12}, \frac{9\pi}{12}, \frac{11\pi}{12},
\frac{15\pi}{12}, \frac{21\pi}{12} \right).$$
Thus,
\vspace{-3mm}
$$\Pi = \{\alpha_1,\dots,\alpha_8\} = \left\{e^\frac{\pi
i}{12}, e^\frac{3\pi i}{12}, e^\frac{5\pi i}{12}, e^\frac{7\pi
i}{12}, e^\frac{9\pi i}{12}, e^\frac{11\pi i}{12}, e^\frac{15\pi
i}{12}, e^\frac{21\pi i}{12} \right\}.$$
In particular,
$\Pi$ has two pairs of antipodal points, namely,
$\left\{e^\frac{3\pi i}{12}, e^\frac{15\pi i}{12} \right\}$ and
\linebreak
$\left\{e^\frac{9\pi i}{12}, e^\frac{21\pi i}{12} \right\}$,
i.e., $p = 8$ and $s = 2$.
By Theorem \ref{main7} and Proposition \ref{main6a}, for $k \ge 5$,
a $(2k+2)\times (2k+2)$ normal matrix $A$ can be constructed
so that $\Lambda_k(A) = \cP$.
It remains to consider the cases for $k\le 4$.
Clearly, $\Pi$ is $2$-regular.
Thus, an $8\times 8$ normal matrix $A_2$ can be constructed
so that $\Lambda_2(A_2) = \cP$.
However, $\Pi$ is not $k$-regular for $k \ge 3$.
Now we consider the case $k =3$. Clearly, $\Pi\setminus \{ e^\frac{5\pi
i}{12} \}$ is $2$-regular.
Then Theorem \ref{main7} shows that there is a $9\times 9$
normal matrix $A_3$ such that $\Lambda_3(A_3) = \cP$. Indeed, following
the proof of Lemma \ref{prop2}, we see that if $\Pi' =
\{e^\frac{18\pi i}{12} \}$, $\Pi \cup \Pi'$ is
$3$-regular.
Finally, we turn to the case when $k = 4$.
Notice that $\Pi \setminus
\{ e^\frac{5\pi i}{12}, e^\frac{7\pi i}{12} \}$ is $2$-regular.
Thus, Theorem \ref{main7} shows that there is
a $10\times 10$ normal matrix $A_4$ such that
$\Lambda_4(A_4) = \cP$.
In the following, we display the higher rank numerical ranges of
$A_2,$ $A_3$, and $A_4$. In the figures, the points ``o''
correspond to the vertices of the polygon while the points
``$\ast$'' correspond to the eigenvalues of the normal matrices.
\begin{center}
\epsfig{file=SIMAX076430RRR_fig7.png, width =2in, height=2in} \qquad
\epsfig{file=SIMAX076430RRR_fig8.png, width =2in, height=2in}
$\Lambda_2(A_2) = \cP$ \hspace{3.5cm}
$\Lambda_3(A_3) = \cP$
\epsfig{file=SIMAX076430RRR_fig9.png, width =2.66in, height=2in}
$\Lambda_4(A_4) = \cP$
\end{center}
\end{example}
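The regularity claims in the example above are easy to check numerically. The Python sketch below (our own illustration; the function names are hypothetical) uses the characterization employed in the proofs of this section: $\Pi\subseteq\Omega$ is $k$-regular when, for every $\alpha\in\Pi$, the open half plane $\alpha P$ contains at least $k$ points of $\Pi$, where $P$ denotes the upper open half plane.

```python
import cmath
import math

def count_open_half_plane(alpha, points, tol=1e-9):
    # number of points lying strictly inside alpha*P, where P is the
    # upper open half plane: z is in alpha*P iff Im(z/alpha) > 0
    return sum(1 for z in points if (z / alpha).imag > tol)

def is_k_regular(points, k):
    # characterization used in the proofs above: for every alpha in Pi,
    # the open half plane alpha*P contains at least k points of Pi
    return all(count_open_half_plane(a, points) >= k for a in points)

# The set Pi of the example: arguments pi/12, 3pi/12, ..., 21pi/12
Pi = [cmath.exp(1j * math.pi * t / 12) for t in (1, 3, 5, 7, 9, 11, 15, 21)]
```

Here \texttt{is\_k\_regular(Pi, 2)} holds while \texttt{is\_k\_regular(Pi, 3)} fails, and removing $e^{5\pi i/12}$ leaves a $2$-regular set, matching the claims of the example.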
\pagebreak
\section{An algorithm}
In this section, we further present a detailed procedure
for constructing the rank-$k$ numerical ranges
of normal matrices based on the discussion in Section 2.
\smallskip
Given a normal matrix $A$ with $m$ distinct eigenvalues
$a_1, \dots, a_m$, one can easily construct $\Lambda_k(A)$ through the
following algorithms.
\medskip
\noindent
\bf Basic Algorithm \rm
First construct the set $\cS_0$.
For each ordered pair $(r,s)$ with $r < s$,
count the number of eigenvalues of $A$
(counting multiplicities) in the open half planes $H_0(a_r,a_s)$ and $H_0(a_s,a_r)$.
\begin{enumerate}
\item If $H_0(a_r,a_s)$ has at most $n-k-1$ eigenvalues
while $H_0(a_s,a_r)$ has at most $k-1$ eigenvalues,
then collect the index pair $(r,s)$ in $\cS_0$.
\item If $H_0(a_s,a_r)$ has at most $n-k-1$ eigenvalues
while $H_0(a_r,a_s)$ has at most $k-1$ eigenvalues,
then collect the index pair $(s,r)$ in $\cS_0$.
\end{enumerate}
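The counting step above can be sketched in a few lines. In the sketch below (a hypothetical implementation, not part of the paper), we take $H_0(a_r,a_s)$ to be the open half plane strictly to the left of the directed line from $a_r$ to $a_s$, so that $H_0(a_s,a_r)$ is the opposite open half plane; the function names are ours.

```python
from itertools import combinations

def cross(a, b, z):
    # sign of the cross product (b - a) x (z - a); positive means z lies
    # strictly to the left of the directed line from a to b
    return ((b - a).conjugate() * (z - a)).imag

def basic_algorithm(eigs, k, tol=1e-9):
    # eigs: eigenvalues of the normal matrix, listed with multiplicities;
    # returns index pairs into the sorted list of distinct eigenvalues
    n = len(eigs)
    distinct = sorted(set(eigs), key=lambda z: (z.real, z.imag))
    S0 = []
    for r, s in combinations(range(len(distinct)), 2):
        ar, a_s = distinct[r], distinct[s]
        left = sum(1 for z in eigs if cross(ar, a_s, z) > tol)    # H_0(a_r,a_s)
        right = sum(1 for z in eigs if cross(ar, a_s, z) < -tol)  # H_0(a_s,a_r)
        if left <= n - k - 1 and right <= k - 1:
            S0.append((r, s))          # step (1)
        if right <= n - k - 1 and left <= k - 1:
            S0.append((s, r))          # step (2)
    return S0
```

For the eigenvalues $\pm1,\pm i$ and $k=1$, the returned pairs are exactly the four directed edges of the square, and the corresponding closed half planes intersect in the numerical range $\Lambda_1(A)$.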
\smallskip
Notice that one can already construct $\Lambda_k(A)$
by determining the intersection of all the half planes
$H(a_r,a_s)$ with $(r,s) \in \cS_0$.
Nevertheless, one can perform the following additional steps
to simplify the set $\cS_0$ before constructing $\Lambda_k(A)$.
\medskip
\noindent
\bf Modified Algorithm 1 \rm
Suppose that, in the Basic Algorithm, there is an index pair $(p,q)$ satisfying both (1) and (2),
i.e., both pairs $(p,q)$ and $(q,p)$ are in $\cS_0$.
Then $\Lambda_k(A)$ is a subset of a line segment.
In this case, $\Lambda_k(A)$ can be constructed as follows.
Set $\hat a_j = (a_j - a_p) / (a_q - a_p)$
and define $\cS_1 = \{(r,s)\in \cS_0: \Im(\hat a_r) \ne \Im(\hat a_s)\}$. If
$\cS_1=\emptyset$, then $\Lambda_k(A)=\emptyset$. Suppose
$\cS_1\ne\emptyset$.
For each $(r,s)\in \cS_1$, compute
$$b_{rs} = \frac{ \Im(\hat a_r)\, \Re(\hat a_s)-\Im(\hat a_s)\, \Re(\hat
a_r) }{\Im(\hat a_r) - \Im(\hat a_s)}.$$
Take
\begin{eqnarray*}
b_1 &=& \max\{b_{rs}: (r,s) \in \cS_1,\ \Im( \hat a_r) \ge 0 \hbox{ and } \Im(\hat a_s) \le 0\}, \cr
b_2 &=& \min\,\{b_{rs}: (r,s) \in \cS_1,\ \Im( \hat a_r) \le 0 \hbox{ and } \Im(\hat a_s) \ge 0\}.
\end{eqnarray*}
Then
$\Lambda_k(A)$ is the line segment in $\IC$ joining the points
$(a_q-a_p) b_1 + a_p$ and $(a_q-a_p) b_2 + a_p$
if $b_1 \le b_2$; otherwise, $\Lambda_k(A) = \emptyset$.
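Modified Algorithm 1 is plain arithmetic on the normalized eigenvalues $\hat a_j$. The sketch below (our own illustration; it assumes the pair $(p,q)$ and the set $\cS_1$ have already been computed, with the distinct eigenvalues stored in a $0$-indexed list) returns the endpoints of the segment, or \texttt{None} when $\Lambda_k(A)=\emptyset$:

```python
def segment_case(a, S1, p, q):
    # a: list of the distinct eigenvalues (complex, 0-indexed); (p, q):
    # the index pair with both orientations in S_0, so Lambda_k(A) lies
    # on the segment through a[p] and a[q]; S1: the pairs (r, s) in S_0
    # with Im(a_hat_r) != Im(a_hat_s) (both max and min below are
    # assumed to range over nonempty sets, as in the text)
    if not S1:
        return None                      # Lambda_k(A) is empty
    ah = [(aj - a[p]) / (a[q] - a[p]) for aj in a]   # a_p -> 0, a_q -> 1
    def b(r, s):
        # point where the line through ah[r] and ah[s] meets the real axis
        return (ah[r].imag * ah[s].real - ah[s].imag * ah[r].real) / \
               (ah[r].imag - ah[s].imag)
    b1 = max(b(r, s) for (r, s) in S1 if ah[r].imag >= 0 and ah[s].imag <= 0)
    b2 = min(b(r, s) for (r, s) in S1 if ah[r].imag <= 0 and ah[s].imag >= 0)
    if b1 > b2:
        return None                      # Lambda_k(A) is empty
    return ((a[q] - a[p]) * b1 + a[p], (a[q] - a[p]) * b2 + a[p])
```

For the eigenvalues $0,1,\tfrac12\pm i$ with $(p,q)=(0,1)$ and $\cS_1=\{(2,3),(3,2)\}$, the segment collapses to the single point $\tfrac12$, which agrees with $\Lambda_2$ of the corresponding $4\times4$ normal matrix being the intersection of the four triangles spanned by triples of eigenvalues.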
\medskip
\noindent
\bf Modified Algorithm 2 \rm
Assume the situation mentioned in Modified Algorithm 1 does not hold.
Check whether the set $\cS_0$ satisfies the following condition.
\begin{eqnarray}\label{cond3}
&&\hbox{There are } (r_1,s_1), \dots, (r_\ell,s_\ell) \in \cS_0 \hbox{ with $\ell \ge 3$ such that} \cr
&&\hspace{3cm}
\{r_1,s_1\} \cap \{r_2,s_2\}\cap \cdots \cap \{r_\ell, s_\ell\} = \{t\}
\quad \hbox{for some $1\le t\le m$.}
\end{eqnarray}
If yes, define
$$\theta_j = \cases{
\arg(a_{s_j} - a_t) & \hbox{if } $r_j = t$, \cr
\arg(a_t - a_{r_j}) & \hbox{if } $s_j = t$.
}$$
Relabel the indices so that $0 \le \theta_1 \le \cdots \le\theta_\ell < 2\pi$.
Consider the following three cases.
\begin{enumerate}
\item If $\theta_\ell - \theta_1 < \pi$,
remove all the pairs $(r_j,s_j)$ in $\cS_0$ for $j \ne 1,\ell$.
Then check again whether the modified set still satisfies (\ref{cond3}).
\item If $\theta_{j_0+1} - \theta_{j_0} > \pi$ for some $j_0$,
remove all the pairs $(r_j,s_j)$ in $\cS_0$ for $j \ne j_0,j_0+1$.
Then check again whether the modified set still satisfies (\ref{cond3}).
\item If the above two items are not satisfied,
then $\Lambda_k(A)$ is either the empty set or the singleton set $\{a_t\}$.
In this case, check whether $a_t$ lies in $H(a_r,a_s)$ for all $(r,s) \in \cS_0$.
If yes, $\Lambda_k(A)$ is the singleton set; otherwise it is the empty set.
\end{enumerate}
Finally, if the modified set $\cS_0$ does not satisfy (\ref{cond3}),
then one can construct $\Lambda_k(A)$
by determining the intersection of all the half planes
$H(a_r,a_s)$ with $(r,s)$ in the modified set $\cS_0$.
\medskip
\noindent
{\bf Acknowledgment}
This research began at the 2008 IMA PI Summer Program for Graduate
Students, where the second author is a lecturer and the third
author is a co-organizer. The support of IMA and NSF for the
program is graciously acknowledged. The hospitality of the
colleagues at Iowa State University is deeply appreciated.
The authors would also like to thank the referees for some
helpful comments.
% arXiv:0902.4869 --- ``Higher rank numerical ranges of normal matrices''
\title{A binary encoding of spinors and applications}
% arXiv:1905.10613
\begin{abstract}
We present a binary code for spinors and Clifford multiplication using non-negative integers and their binary expressions, which can be easily implemented in computer programs for explicit calculations. As applications, we present explicit descriptions of the triality automorphism of $Spin(8)$, explicit representations of the Lie algebras $\mathfrak{spin}(8)$, $\mathfrak{spin}(7)$ and $\mathfrak{g}_2$, etc.
\end{abstract}
\section{Introduction}
Spinors were first discovered by Cartan in 1913 \cite{Cartan2}, and have been of great relevance in Mathematics and Physics ever since.
In this note, we introduce a binary code for spinors via non-negative integers (and/or their binary decompositions)
which is very useful for computer calculations.
This encoding is carried out by putting in correspondence the elements
of a suitable basis of spinors with nonnegative integers via their binary decompositions.
In particular, Clifford multiplication becomes a matter of flipping bits in binary expressions and keeping track of powers of $\sqrt{-1}$.
In order to show its usefulness, we develop
very explicit descriptions of the triality automorphism of $Spin(8)$ (avoiding the octonions),
the relationship between Clifford multiplication and the multiplication table of the octonions and the construction
of linearly independent vector fields on spheres.
The computer implementation of this code has also allowed us to carry out other calculations in high dimensions which, otherwise, would
have been impossible. Note that, although we only deal with the
case of Clifford algebras and spinors defined by positive definite quadratic forms, the binary code can also be used in the case of
semi-definite quadratic forms.
The paper is organized as follows.
In Section \ref{sec: preliminaries}, we recall standard facts about Clifford algebras, the Spin
Lie groups and algebras, the spinor representations and an ordered spinor basis made up of weight vectors.
In Section \ref{sec: binary}, we introduce the correspondence
between these basic spinors and nonnegative integers, as well as the functions that encode the Clifford multiplication.
In Section \ref{sec: applications}, we present some applications such as triality in dimension 8, the relationship between Clifford multiplication
and the multiplication table of the octonions and, finally, the construction of linearly independent vector fields on spheres.
\section{Preliminaries}\label{sec: preliminaries}
The details of the facts mentioned in this section can be found in \cite{Baum, friedrich}.
\subsection{Clifford algebra, spinors, Clifford multiplication and the Spin group}
Let $Cl_n$ denote the Clifford algebra generated by all the products of the canonical vectors
$e_1, e_2, \ldots, e_n\in\mathbb{R}^n$ subject to the relations
\begin{eqnarray}
e_j e_k + e_k e_j &=& - \big< e_j, e_k\big>, \quad\mbox{for $j\not =k$} \nonumber\\
e_j e_j &=& -1 \nonumber
\end{eqnarray}
where $\big< , \big>$ denotes the standard inner product in $\mathbb{R}^n$, and $Cl_n^0$
the even Clifford subalgebra, i.e., the fixed-point set of the involution of $Cl_n$ induced by $-{\rm Id}_{\mathbb{R}^n}$.
Let
\[\mathbb{C}l_n=Cl_n\otimes_{\mathbb{R}}\mathbb{C},\quad\mbox{and}\quad
\mathbb{C}l_n^0=Cl_n^0\otimes_{\mathbb{R}}\mathbb{C}\]
be the complexifications of $Cl_n$ and $Cl_n^0$. It is well known that
\[\mathbb{C}l_n\cong \left\{
\begin{array}{ll}
\rm End(\mathbb{C}^{2^k}), & \mbox{if $n=2k$}\\
\rm End(\mathbb{C}^{2^k})\oplus\rm End(\mathbb{C}^{2^k}), & \mbox{if $n=2k+1$}
\end{array},
\right.
\]
where
\[\mathbb{C}^{2^k}=\underbrace{\mathbb{C}^2\otimes \ldots \otimes \mathbb{C}^2}_{\mbox{$k$ times}}\]
is the tensor product of $k=[{n\over 2}]$ copies of $\mathbb{C}^2$.
Let us denote this space as
\[\Delta_n = \mathbb{C}^{2^k},\]
which is called the {\em space of spinors}.
Consider the linear map
\[\kappa:\mathbb{C}l_n \longrightarrow \rm End(\mathbb{C}^{2^k}),\]
which is the aforementioned isomorphism for $n$ even, and the projection onto the first summand for $n$ odd.
The Spin group $Spin(n)\subset Cl_n$ is the subset
\[Spin(n) =\{x_1x_2\cdots x_{2l-1}x_{2l}\,\,|\,\,x_j\in\mathbb{R}^n, \,\,
|x_j|=1,\,\,l\in\mathbb{N}\},\]
endowed with the product of the Clifford algebra.
It is a Lie group and its Lie algebra is
\[\mathfrak{spin}(n)=\mbox{span}\{e_ie_j\,\,|\,\,1\leq i< j \leq n\}.\]
Recall that the Spin group $Spin(n)$ is the universal double cover of $SO(n)$, $n\ge 3$. For $n=2$
we consider $Spin(2)$ to be the connected double cover of $SO(2)$.
The covering map will be denoted by
\[\lambda_n:Spin(n)\rightarrow SO(n),\]
where an element $x_1x_2\cdots x_{2l-1}x_{2l}\in Spin(n)$ is mapped to the orthogonal transformation
\begin{eqnarray*}
\lambda_n: \mathbb{R}^n&\longrightarrow&\mathbb{R}^n\\
y&\mapsto& x_1x_2\cdots x_{2l-1}x_{2l}\cdot y\cdot x_{2l}x_{2l-1}\cdots x_2x_1.
\end{eqnarray*}
Its differential is given
by
\[\lambda_{n*}(e_ie_j) = 2E_{ij},\]
where the $E_{ij}=e_i^*\otimes e_j - e_j^*\otimes e_i$ form the
standard basis of the skew-symmetric matrices and $e^*$ denotes the metric dual of the vector $e$.
We will also denote by $\lambda_n$ the induced representation on
$\raise1pt\hbox{$\ts\bigwedge$}^*\mathbb{R}^n$.
The restriction of $\kappa$ to $Spin(n)$ defines the Lie group representation
\[
\kappa_n:Spin(n)\longrightarrow GL(\Delta_n),\]
which is special unitary. We have the corresponding Lie algebra representation
\[
\kappa_{n*}:\mathfrak{spin}(n)\longrightarrow \mathfrak{gl}(\Delta_n),\]
which is, in fact, the restriction of the linear map $\kappa:\mathbb{C}l_n\longrightarrow\rm End(\Delta_n)$ to
$\mathfrak{spin}(n)\subset \mathbb{C}l_n$.
Note that
$SO(1)=\{1\}$,
$Spin(1)=\{\pm1\}$,
and $\Delta_1 =\mathbb{C}$ is
the trivial $1$-dimensional representation.
The Clifford multiplication is defined by
\begin{eqnarray*}
\mu_n:\mathbb{R}^n\otimes \Delta_n &\longrightarrow&\Delta_n\\
x \otimes \psi &\mapsto& \mu_n(x\otimes \psi)=x\cdot\psi :=\kappa(x)(\psi).
\end{eqnarray*}
It is skew-symmetric with respect to the Hermitian product
\[\left<x\cdot\psi_1 , \psi_2\right>
=-\left<\psi_1 , x\cdot \psi_2\right>,
\]
is $Spin(n)$-equivariant
and can be extended to a $Spin(n)$-equivariant map
\begin{eqnarray*}
\mu_n:\raise1pt\hbox{$\ts\bigwedge$}^*(\mathbb{R}^n)\otimes \Delta_n &\longrightarrow&\Delta_n\\
\omega \otimes \psi &\mapsto& \omega\cdot\psi.
\end{eqnarray*}
Let
\[{\rm vol}_n := e_1 e_2\cdots e_n.\]
When $n$ is even, we define the following involution
\begin{eqnarray*}
\Delta_n&\longrightarrow& \Delta_n \\
\psi &\mapsto& (-i)^{n\over 2}{\rm vol}_n\cdot \psi.
\end{eqnarray*}
The $\pm 1$ eigenspaces of this involution are denoted by $\Delta_n^\pm$ and called positive and negative spinors respectively.
These spaces have equal dimension and are irreducible representations of $Spin(n)$
\[\kappa_n^\pm:Spin(n)\longrightarrow {\rm Aut}(\Delta_n^\pm).\]
Note that our definition differs from the one given in \cite{friedrich} by a factor $(-1)^{n\over 2}$.
There exist either real or quaternionic structures on the spin representations.
A quaternionic structure $\alpha$ on $\mathbb{C}^2$ is given by
\[\alpha\left(\begin{array}{c}
z_1\\
z_2
\end{array}
\right) = \left(\begin{array}{c}
-\overline{z}_2\\
\overline{z}_1
\end{array}\right),\]
and a real structure $\beta$ on $\mathbb{C}^2$ is given by
\[\beta\left(\begin{array}{c}
z_1\\
z_2
\end{array}
\right) = \left(\begin{array}{c}
\overline{z}_1\\
\overline{z}_2
\end{array}\right).\]
Note that these structures satisfy
\[
\begin{array}{rclcrcl}
\left< \alpha(v),w\right> &=& \overline{\left< v,\alpha(w)\right> }, &\quad&
\left< \alpha(v),\alpha(w)\right> &=& \overline{\left< v,w\right> }, \\
\left< \beta(v),w\right> &=& \overline{\left< v,\beta(w)\right> }, &\quad&
\left< \beta(v),\beta(w)\right> &=& \overline{\left< v,w\right> },
\end{array}
\]
with respect to the standard hermitian product in $\mathbb{C}^2$, where $v,w\in \mathbb{C}^2$.
The real and quaternionic structures $\gamma_n$ on $\Delta_n=(\mathbb{C}^2)^{\otimes
[n/2]}$ are built as follows
\[
\begin{array}{cclll}
\gamma_n &=& (\alpha\otimes\beta)^{\otimes 2k} &\mbox{if $n=8k,8k+1$}& \mbox{(real),} \\
\gamma_n &=& (\alpha\otimes\beta)^{\otimes 2k}\otimes\alpha &\mbox{if $n=8k+2,8k+3$}&
\mbox{(quaternionic),} \\
\gamma_n &=& (\alpha\otimes\beta)^{\otimes 2k+1} &\mbox{if $n=8k+4,8k+5$}&\mbox{(quaternionic),} \\
\gamma_n &=& (\alpha\otimes\beta)^{\otimes 2k+1}\otimes\alpha &\mbox{if $n=8k+6,8k+7$}&\mbox{(real).}
\end{array}
\]
These structures also satisfy
\[
\begin{array}{rclcrcl}
\left< \gamma_n(v),w\right> &=& \overline{\left< v,\gamma_n(w)\right> }, &\quad&
\left< \gamma_n(v),\gamma_n(w)\right> &=& \overline{\left< v,w\right> }, \\
\end{array}
\]
where $v,w\in\Delta_n$.
This means
\[
\left< v+ \gamma_n(v),w+ \gamma_n(w)\right> \in \mathbb{R}. \label{eq: real inner product}
\]
Now, we summarize some results about real representations of $Cl_r^0$ in the next table (cf. \cite{Lawson}).
Here $d_r$ denotes the dimension of an irreducible representation of $Cl^0_r$ and $v_r$ the number of distinct
irreducible representations.
\[\begin{array}{|c|c|c|c|}
\hline
r \mbox{ (mod 8)}&Cl_r^0&d_r&v_r \tstrut\\
\hline
1&\mathbb R(d_r)&2^{\lfloor\frac r2\rfloor}&1 \tstrut\\
\hline
2&\mathbb C(d_r/2)&2^{\frac r2}&1 \tstrut\\
\hline
3&\mathbb H(d_r/4)&2^{\lfloor\frac r2\rfloor+1}&1 \tstrut\\
\hline
4&\mathbb H(d_r/4)\oplus \mathbb H(d_r/4)&2^{\frac r2}&2 \tstrut\\
\hline
5&\mathbb H(d_r/4)&2^{\lfloor\frac r2\rfloor+1}&1 \tstrut\\
\hline
6&\mathbb C(d_r/2)&2^{\frac r2}&1 \tstrut\\
\hline
7&\mathbb R(d_r)&2^{\lfloor\frac r2\rfloor}&1 \tstrut\\
\hline
8&\mathbb R(d_r)\oplus \mathbb R(d_r)&2^{\frac r2-1}&2 \tstrut\\
\hline
\end{array}
\]
\centerline{Table 1}
Let $\tilde\Delta_r$ denote the real irreducible representation of $Cl_r^0$ for $r\not\equiv0$ $(\mbox{mod } 4) $
and $\tilde\Delta^{\pm}_r$ denote the real irreducible representations for $r\equiv0$ $(\mbox{mod } 4)$. Note that
the representations are complex for $r\equiv 2,6$ $(\mbox{mod } 8)$ and quaternionic for $r\equiv 3,4,5$
$(\mbox{mod } 8)$.
\subsection{A special basis of spinors and an explicit description of $\kappa$ }
The vectors
\[u_{+1}={1\over \sqrt{2}}(1,-i)\quad\quad\mbox{and}\quad\quad u_{-1}={1\over \sqrt{2}}(1,i),\]
form a unitary basis of $\mathbb{C}^2$.
Consequently, the vectors
\begin{equation}
\{u_{(\varepsilon_1,\ldots,\varepsilon_k)}=u_{\varepsilon_1}\otimes\ldots\otimes
u_{\varepsilon_k}\,\,|\,\, \varepsilon_j=\pm 1,
j=1,\ldots,k\},\label{eq: special basis}
\end{equation}
form a unitary basis of $\Delta_n=(\mathbb{C}^2)^{\otimes
[n/2]}$.
In order to give an explicit description of the map $\kappa$, consider the following matrices with complex entries
\[Id = \left(\begin{array}{ll}
1 & 0\\
0 & 1
\end{array}\right),\quad
g_1 = \left(\begin{array}{ll}
i & 0\\
0 & -i
\end{array}\right),\quad
g_2 = \left(\begin{array}{ll}
0 & i\\
i & 0
\end{array}\right),\quad
T = \left(\begin{array}{ll}
0 & -i\\
i & 0
\end{array}\right).
\]
Note that
\[g_1(u_{\pm1})= iu_{\mp1},\quad
g_2(u_{\pm1})= \pm u_{\mp1},\quad
T(u_{\pm1})= \mp u_{\pm1}.\]
In general, the generators $e_1, \ldots, e_n$ of the Clifford algebra are mapped under $\kappa$ to the following linear transformations
of $\Delta_n$:
\begin{eqnarray}
e_1&\mapsto& Id\otimes Id\otimes \ldots\otimes Id\otimes Id\otimes g_1,\nonumber\\
e_2&\mapsto& Id\otimes Id\otimes \ldots\otimes Id\otimes Id\otimes g_2,\nonumber\\
e_3&\mapsto& Id\otimes Id\otimes \ldots\otimes Id\otimes g_1\otimes T,\nonumber\\
e_4&\mapsto& Id\otimes Id\otimes \ldots\otimes Id\otimes g_2\otimes T,\label{eq: explicit Clifford map}\\
\vdots && \dots\nonumber\\
e_{2k-1}&\mapsto& g_1\otimes T\otimes \ldots\otimes T\otimes T\otimes T,\nonumber\\
e_{2k}&\mapsto& g_2\otimes T\otimes\ldots\otimes T\otimes T\otimes T,\nonumber
\end{eqnarray}
and the last generator
\[ e_{2k+1}\mapsto i\,\, T\otimes T\otimes\ldots\otimes T\otimes T\otimes T\]
if $n=2k+1$. Thus, if $1\leq j\leq k$,
\begin{eqnarray}
e_{2j-1}u_{\varepsilon_1,\ldots,\varepsilon_k}&=& i(-1)^{j-1}
\left(\prod_{\alpha=k-j+2}^k \varepsilon_{\alpha}\right)
u_{\varepsilon_1,\ldots, (-\varepsilon_{k-j+1}) ,\ldots,\varepsilon_k} \nonumber\\
e_{2j}u_{\varepsilon_1,\ldots,\varepsilon_k}&=& (-1)^{j-1}
\left(\prod_{\alpha=k-j+1}^k \varepsilon_{\alpha}\right)
u_{\varepsilon_1,\ldots, (-\varepsilon_{k-j+1}) ,\ldots,\varepsilon_k} \nonumber
\end{eqnarray}
and
\[
e_{2k+1}u_{\varepsilon_1,\ldots,\varepsilon_k}= i(-1)^k
\left(\prod_{\alpha=1}^k \varepsilon_{\alpha}\right) u_{\varepsilon_1,\ldots,\varepsilon_k}
\]
if $n=2k+1$ is odd.
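In a computer, these formulas reduce to a single sign flip plus bookkeeping of a power of $i$ (the sign flip becomes a bit flip once the pattern $(\varepsilon_1,\ldots,\varepsilon_k)$ is stored as the bits of an integer). The following Python sketch (our own illustration; a tuple of $\pm1$ entries stands for the basis spinor $u_{(\varepsilon_1,\ldots,\varepsilon_k)}$, and the function name is ours) implements the action of the generators on basis spinors:

```python
def clifford_mult(m, eps):
    # action of the generator e_m (1 <= m <= 2k+1) on the basis spinor
    # u_eps, eps = (eps_1, ..., eps_k) with entries +1/-1; returns
    # (coef, out) such that e_m . u_eps = coef * u_out
    k = len(eps)
    if m <= 2 * k:
        j = (m + 1) // 2                # m = 2j-1 or m = 2j
        out = list(eps)
        out[k - j] = -out[k - j]        # flip eps_{k-j+1} (1-based index)
        if m % 2 == 1:                  # e_{2j-1}
            coef = 1j * (-1) ** (j - 1)
            for alpha in range(k - j + 2, k + 1):   # alpha = k-j+2, ..., k
                coef *= eps[alpha - 1]
        else:                           # e_{2j}
            coef = (-1) ** (j - 1) + 0j
            for alpha in range(k - j + 1, k + 1):   # alpha = k-j+1, ..., k
                coef *= eps[alpha - 1]
        return coef, tuple(out)
    else:                               # m = 2k+1, for n = 2k+1 odd
        coef = 1j * (-1) ** k
        for e in eps:
            coef *= e
        return coef, eps
```

For $n=6$ (so $k=3$) this reproduces the matrices $\kappa_6(e_1)$ and $\kappa_6(e_2)$ displayed in the example below, and the Clifford relations $e_m\cdot e_m\cdot\psi=-\psi$ and $(e_le_m+e_me_l)\cdot\psi=0$ for $l\ne m$ can be verified directly on basis spinors.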
Also, the real and quaternionic structures on $\mathbb{C}^2$ look as follows
\begin{eqnarray*}
\alpha(u_{\varepsilon})
&=& -\varepsilon iu_{-\varepsilon}\\
\beta(u_{\varepsilon})
&=& u_{-\varepsilon},
\end{eqnarray*}
and, on the basis vectors of $\Delta_n$,
\begin{eqnarray*}
(\alpha\otimes\beta)^{\otimes k}(u_{(\varepsilon_1,...,\varepsilon_{2k})})
&=&(-i)^k \left(\prod_{s=1}^k \varepsilon_{2s-1}\right) u_{(-\varepsilon_1,...,-\varepsilon_{2k})}\\
\left[(\alpha\otimes\beta)^{\otimes k}\otimes\alpha\right] (u_{(\varepsilon_1,...,\varepsilon_{2k+1})})
&=&(-i)^{k+1} \left(\prod_{s=1}^{k+1} \varepsilon_{2s-1}\right) u_{(-\varepsilon_1,...,-\varepsilon_{2k+1})}.
\end{eqnarray*}
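The same bookkeeping handles the antilinear structures: on a basis spinor, $\gamma_n$ flips every sign and multiplies by $(-i)^{a}\prod_s\varepsilon_{2s-1}$, where $a$ is the number of $\alpha$ factors, and in every case of the list above $a=\lceil [n/2]/2\rceil$. A short sketch (ours; note that $\gamma_n$ is antilinear, so coefficients must be conjugated when composing):

```python
def gamma(eps):
    # gamma_n on the basis spinor u_eps, for the structure
    # (alpha (x) beta)^{(x) k'} [(x) alpha] with len(eps) = 2k' or 2k'+1;
    # returns (coef, out) with gamma_n(u_eps) = coef * u_out
    na = (len(eps) + 1) // 2            # number of alpha factors
    coef = (-1j) ** na
    for e in eps[::2]:                  # eps_1, eps_3, ... (odd indices)
        coef *= e
    return coef, tuple(-e for e in eps)
```

Composing twice (with the first coefficient conjugated, since $\gamma_n$ is antilinear) gives $\gamma_n^2=\pm\mathrm{Id}$ on basis spinors, recovering the quaternionic and real cases of the list above.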
{\bf Remark}. From these expressions, we can see that Clifford multiplication by basic vectors amounts to flipping a sign and keeping track of a power of $i=\sqrt{-1}$.
{\bf Remark}.
We became aware of the use of unitary bases of this type in \cite{Baum}, and carried out many calculations using them in several
dimensions. In spite of their usefulness, they are difficult to handle in computer programs since they lead to large matrices.
We realized that we could encode spinors and Clifford multiplication using a binary code which we present in Section \ref{sec: binary}.
The binary code has allowed us to perform calculations in dimensions which otherwise would have been impossible.
{\bf Example}.
Let us consider $n=6$. Using the {\em ordered} spinor basis
\[\{
u_{(1,1,1)},
u_{(1,1,-1)},
u_{(1,-1,1)},
u_{(1,-1,-1)},
u_{(-1,1,1)},
u_{(-1,1,-1)},
u_{(-1,-1,1)},
u_{(-1,-1,-1)}
\}\]
we have the following matrices
\begin{eqnarray*}
\kappa_6(e_1)&=&\left(\begin{array}{cccccccc}
0 & i & & & & & & \\
i & 0 & & & & & & \\
& & 0 & i & & & & \\
& & i & 0 & & & & \\
& & & & 0 & i & & \\
& & & & i & 0 & & \\
& & & & & & 0 & i\\
& & & & & & i & 0
\end{array}
\right),\\
\kappa_6(e_2)&=&\left(\begin{array}{cccccccc}
0 & -1 & & & & & & \\
1 & 0 & & & & & & \\
& & 0 & -1 & & & & \\
& & 1 & 0 & & & & \\
& & & & 0 & -1 & & \\
& & & & 1 & 0 & & \\
& & & & & & 0 & -1\\
& & & & & & 1 & 0
\end{array}
\right).
\end{eqnarray*}
\subsubsection{Maximal torus of $Spin(n)$ and weight vectors of $\Delta_n$}
A maximal torus of the group $Spin(n)$ is given by
\[\left\{\prod_{i=1}^{[n/2]} (\cos(\theta_i)+\sin(\theta_i)e_{2i-1}e_{2i})\in Spin(n)\,\,:\,\,
\theta_i\in [0,2\pi], i=1,\ldots,[n/2]\right\}.\]
In order to visualize the transformations that these elements induce under $\kappa_n$ and $\lambda_n$,
let $n=6$ and consider the element $e_1e_2\in Spin(6)$.
In terms of \rf{eq: special basis} and \rf{eq: explicit Clifford map}
\[\kappa_6(e_1e_2)= \left(\begin{array}{cccccccc}
i & & & & & & & \\
& -i & & & & & & \\
& & i & & & & & \\
& & & -i & & & & \\
& & & & i & & & \\
& & & & & -i & & \\
& & & & & & i & \\
& & & & & & & -i
\end{array}
\right).\]
On the other hand, $e_1e_2$ acts on $y\in\mathbb{R}^6$ as follows
\begin{eqnarray*}
e_1e_2(y_1e_1+y_2e_2+y_3e_3+y_4e_4+y_5e_5+y_6e_6)e_2e_1
&=&
e_1(y_1e_1-y_2e_2+y_3e_3+y_4e_4+y_5e_5+y_6e_6)e_1 \\
&=&
-y_1e_1-y_2e_2+y_3e_3+y_4e_4+y_5e_5+y_6e_6,
\end{eqnarray*}
i.e.
\[\lambda_6(e_1e_2)=\left(\begin{array}{cccccc}
-1 & & & & & \\
& -1 & & & & \\
& & 1 & & & \\
& & & 1 & & \\
& & & & 1 & \\
& & & & & 1
\end{array}
\right)\]
Now, consider the element
\[\cos(\theta)+\sin(\theta)e_1e_2 = (\cos(\theta)e_1+\sin(\theta)e_2)(-e_1)\in Spin(6).\]
On the one hand,
\begin{eqnarray*}
\kappa_6(\cos(\theta)+\sin(\theta)e_1e_2) &=&
\cos(\theta)\left(\begin{array}{cccccccc}
1 & & & & & & & \\
& 1 & & & & & & \\
& & 1 & & & & & \\
& & & 1 & & & & \\
& & & & 1 & & & \\
& & & & & 1 & & \\
& & & & & & 1 & \\
& & & & & & & 1
\end{array}
\right) +
\sin(\theta)\left(\begin{array}{cccccccc}
i & & & & & & & \\
& -i & & & & & & \\
& & i & & & & & \\
& & & -i & & & & \\
& & & & i & & & \\
& & & & & -i & & \\
& & & & & & i & \\
& & & & & & & -i
\end{array}
\right)
\\
&=&
\left(\begin{array}{cccccccc}
e^{i\theta} & & & & & & & \\
& e^{-i\theta} & & & & & & \\
& & e^{i\theta} & & & & & \\
& & & e^{-i\theta} & & & & \\
& & & & e^{i\theta} & & & \\
& & & & & e^{-i\theta} & & \\
& & & & & & e^{i\theta} & \\
& & & & & & & e^{-i\theta}
\end{array}
\right),
\end{eqnarray*}
and, on the other,
\begin{eqnarray*}
(\cos(\theta)e_1+\sin(\theta)e_2)(-e_1)y(-e_1)(\cos(\theta)e_1+\sin(\theta)e_2)
\end{eqnarray*}
induces the transformation on $\mathbb{R}^6$
\[\left(\begin{array}{cccccc}
\cos(2\theta) & -\sin(2\theta) & & & & \\
\sin(2\theta) & \cos(2\theta) & & & & \\
& & 1 & & & \\
& & & 1 & & \\
& & & & 1 & \\
& & & & & 1
\end{array}
\right).\]
Clearly, the two transformations $\kappa_6(\cos(\theta)+\sin(\theta)e_1e_2)$ and $\lambda_6(\cos(\theta)+\sin(\theta)e_1e_2)$ are different.
Setting $\theta=\varphi_1/2$, we see the familiar coefficients $\pm 1/2$ of the Spin representation
\[
\left(\begin{array}{cccccccc}
e^{i\varphi_1\over2} & & & & & & & \\
& e^{-{i\varphi_1\over2}} & & & & & & \\
& & e^{i\varphi_1\over2} & & & & & \\
& & & e^{-{i\varphi_1\over2}} & & & & \\
& & & & e^{i\varphi_1\over2} & & & \\
& & & & & e^{-{i\varphi_1\over2}} & & \\
& & & & & & e^{i\varphi_1\over2} & \\
& & & & & & & e^{-{i\varphi_1\over2}}
\end{array}
\right),
\quad\quad
\left(\begin{array}{cccccc}
\cos(\varphi_1) & -\sin(\varphi_1) & & & & \\
\sin(\varphi_1) & \cos(\varphi_1) & & & & \\
& & 1 & & & \\
& & & 1 & & \\
& & & & 1 & \\
& & & & & 1
\end{array}
\right).
\]
Similarly,
$\cos(\varphi_2/2)+\sin(\varphi_2/2)e_3e_4$ induces
\[
\left(\begin{array}{cccccccc}
e^{i\varphi_2\over2} & & & & & & & \\
& e^{{i\varphi_2\over2}} & & & & & & \\
& & e^{-{i\varphi_2\over2}} & & & & & \\
& & & e^{-{i\varphi_2\over2}} & & & & \\
& & & & e^{i\varphi_2\over2} & & & \\
& & & & & e^{{i\varphi_2\over2}} & & \\
& & & & & & e^{-{i\varphi_2\over2}} & \\
& & & & & & & e^{-{i\varphi_2\over2}}
\end{array}
\right),
\quad\quad
\left(\begin{array}{cccccc}
1 & & & & & \\
& 1 & & & & \\
&&\cos(\varphi_2) & -\sin(\varphi_2) & & \\
&&\sin(\varphi_2) & \cos(\varphi_2) & & \\
& & & & 1 & \\
& & & & & 1
\end{array}
\right)
\]
and $\cos(\varphi_3/2)+\sin(\varphi_3/2)e_5e_6$ induces
\[
\left(\begin{array}{cccccccc}
e^{i\varphi_3\over2} & & & & & & & \\
& e^{{i\varphi_3\over2}} & & & & & & \\
& & e^{{i\varphi_3\over2}} & & & & & \\
& & & e^{{i\varphi_3\over2}} & & & & \\
& & & & e^{-{i\varphi_3\over2}} & & & \\
& & & & & e^{-{i\varphi_3\over2}} & & \\
& & & & & & e^{-{i\varphi_3\over2}} & \\
& & & & & & & e^{-{i\varphi_3\over2}}
\end{array}
\right),
\quad\quad
\left(\begin{array}{cccccc}
1 & & & & & \\
& 1 & & & & \\
& & 1 & & & \\
& & & 1 & & \\
&&&&\cos(\varphi_3) & -\sin(\varphi_3) \\
&&&&\sin(\varphi_3) & \cos(\varphi_3)
\end{array}
\right).
\]
A general element of the standard maximal torus of $Spin(6)$
\[
{\left(\cos\left({\varphi_1\over2}\right)+{\sin}\left({\varphi_1\over2}\right)e_1e_2\right)}
{\left(\cos\left({\varphi_2\over2}\right)+{\sin}\left({\varphi_2\over2}\right)e_3e_4\right)}
{\left(\cos\left({\varphi_3\over2}\right)+{\sin}\left({\varphi_3\over2}\right)e_5e_6\right)}
\]
has the following matrix representations
\[
\left(\begin{array}{cccccccc}
e^{i(\varphi_1+\varphi_2+\varphi_3)\over2} & & & & & & & \\
& e^{{i(-\varphi_1+\varphi_2+\varphi_3)\over2}} & & & & & & \\
& & e^{{i(\varphi_1-\varphi_2+\varphi_3)\over2}} & & & & & \\
& & & e^{{i(-\varphi_1-\varphi_2+\varphi_3)\over2}} & & & & \\
& & & & e^{{i(\varphi_1+\varphi_2-\varphi_3)\over2}} & & & \\
& & & & & e^{{i(-\varphi_1+\varphi_2-\varphi_3)\over2}} & & \\
& & & & & & e^{{i(\varphi_1-\varphi_2-\varphi_3)\over2}} & \\
& & & & & & & e^{{i(-\varphi_1-\varphi_2-\varphi_3)\over2}}
\end{array}
\right),
\]
\[
\left(\begin{array}{cccccc}
\cos(\varphi_1) & -\sin(\varphi_1) & & & & \\
\sin(\varphi_1) & \cos(\varphi_1) & & & & \\
&&\cos(\varphi_2) & -\sin(\varphi_2) & & \\
&&\sin(\varphi_2) & \cos(\varphi_2) & & \\
&&&&\cos(\varphi_3) & -\sin(\varphi_3) \\
&&&&\sin(\varphi_3) & \cos(\varphi_3)
\end{array}
\right).
\]
This formula clearly shows that the basis given in \rf{eq: special basis} is made up of weight vectors
of the spin representation $\Delta_6$.
In general we have
\[\prod_{i=1}^{[n/2]} \left(\cos\left({\varphi_i\over 2}\right)+\sin\left({\varphi_i\over 2}\right)e_{2i-1}e_{2i}\right)\cdot u_{(\varepsilon_1,\ldots,\varepsilon_k)} =
e^{i(\varepsilon_1\varphi_1+\cdots+\varepsilon_k\varphi_k)\over2}u_{(\varepsilon_1,\ldots,\varepsilon_k)}.\]
\section{Binary code}\label{sec: binary}
Given the description in the previous section, we see that the calculation of
$e_{j}u_{\varepsilon_1,\ldots,\varepsilon_k}$, where $k=[n/2]$, depends on $j$, on the $k$-tuple
$(\varepsilon_1,\ldots,\varepsilon_k)$ and, possibly, on $n$.
By noticing that
\[+1=(-1)^0 \quad \mbox{and}\quad -1=(-1)^1,\]
we see that for $\varepsilon=\pm1$,
\[\varepsilon=(-1)^{1-\varepsilon\over 2}.\]
Thus, we can replace the $k$-tuple $(\varepsilon_1,\ldots,\varepsilon_k)$
with the $k$-tuple
$[{1-\varepsilon_1\over 2},\ldots,{1-\varepsilon_k\over 2}]$, whose entries belong to $\{0,1\}$. Notice that such arrays are precisely the binary expansions of non-negative integers.
For instance, for $n=6$,
\begin{center}
\begin{tabular}[b]{|c|c|c|}\hline
$(\varepsilon_1,\varepsilon_2,\varepsilon_3)$ & $[{1-\varepsilon_1\over 2},{1-\varepsilon_2\over 2},
{1-\varepsilon_3\over 2}]$ & Integer \\\hline
(1,1,1) & $[0,0,0] $ & 0\\
(1,1,-1) & $[0,0,1] $ & 1\\
(1,-1,1) & $[0,1,0] $ & 2\\
(1,-1,-1) & $[0,1,1] $& 3\\
(-1,1,1) & $[1,0,0] $& 4\\
(-1,1,-1) & $[1,0,1] $& 5\\
(-1,-1,1) & $[1,1,0] $& 6\\
(-1,-1,-1) & $[1,1,1] $& 7 \\\hline
\end{tabular}
\end{center}
Thus, the aforementioned binary code of spinors is given by the correspondence
\[{(\varepsilon_1,...,\varepsilon_k)}\longleftrightarrow {{1-\varepsilon_k\over 2} (2^0) +{1-\varepsilon_{k-1}\over 2} (2^1) +\ldots
+{1-\varepsilon_2\over 2} (2^{k-2})+{1-\varepsilon_1\over 2} (2^{k-1}) }.\]
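In code, this correspondence is plain binary encoding. A short Python sketch (our code; the function names are not from the text):

```python
def eps_to_int(eps):
    """(eps_1, ..., eps_k) with eps_i = +-1 -> integer label; the digit
    (1 - eps_i)/2 sits in position k - i, so eps_1 is the most significant."""
    k = len(eps)
    return sum(((1 - e) // 2) << (k - 1 - i) for i, e in enumerate(eps))

def int_to_eps(a, k):
    """Inverse: recover the k-tuple of signs from the integer label."""
    return tuple(1 - 2 * ((a >> (k - 1 - i)) & 1) for i in range(k))
```

For $n=6$ this reproduces the table above, e.g. $(-1,1,-1)\mapsto 5$ and $6\mapsto(-1,-1,1)$.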
\subsection{Clifford multiplication}
The Clifford multiplication of a standard basis vector $e_p$
with a spinor $u_{a}$, where $a\in\{0,1,2,\ldots,2^{k}-1\}$ and $k=[n/2]$, now looks as follows
\begin{eqnarray*}
e_{2j-1}u_a
&=& i(-1)^{j-1}(-1)^{\sum_{l=0}^{j-2}\left([{a/ 2^l}]-2[{a/ 2^{l+1}}]\right)}
u_{a+(-1)^{[{a/ 2^{j-1}}]-2[{a/ 2^{j}}]}2^{j-1}} \nonumber\\
e_{2j}u_a
&=& (-1)^{j-1}(-1)^{\sum_{l=0}^{j-1}\left([{a/ 2^l}]-2[{a/ 2^{l+1}}]\right)}
u_{a+(-1)^{[{a/ 2^{j-1}}]-2[{a/ 2^{j}}]}2^{j-1}}, \nonumber
\end{eqnarray*}
These can be summarized in a single formula: for $1\leq p\leq 2[n/2]$ and $j:=[{p+1\over 2}]$,
\begin{eqnarray}
e_pu_a
&=& (-1)^{2j-p/2-1+\sum_{l=0}^{j-2}
a_l+a_{j-1}(-2j+p+1)}u_{a+(-1)^{a_{j-1}}2^{j-1}}\label{formula}
\end{eqnarray}
where
$a_l=\left[{a\over 2^l}\right]-2\left[{a\over 2^{l+1}}\right].$
Furthermore, if $n$ is odd, $k=[{n\over 2}]$,
\begin{eqnarray}
e_{2k+1}u_a &=& i(-1)^{k+\sum_{l=0}^{k-1}a_l}
u_a .
\label{formula odd}
\end{eqnarray}
{\bf Remark}. This approach has an important advantage: the integer encoding
of Clifford multiplication in \rf{formula}
does not depend on the dimension $n$ for all $p<n$.
For example, we will always have
\[e_5u_{10}=-iu_{14}\]
for all $n\geq 6$.
Furthermore, \rf{formula} does not depend at all on $n$ when $n$ is even.
As a consequence, one can use \rf{formula} without ever specifying the dimension.
This allows us to make general assertions and perform computations in large dimensions without the use of
enormous matrices (recall that the dimension of the $Spin(n)$ representations increases exponentially
with $n$).
In this way, formula \rf{formula} is somewhat {\em universal}.
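The displayed formulas for $e_{2j-1}u_a$ and $e_{2j}u_a$ can also be implemented directly and tested against the Clifford relations $e_pe_q+e_qe_p=-2\delta_{pq}$ (with the convention $e_i^2=-1$). A minimal Python sketch (our code; the names are ours):

```python
def bit(a, l):
    """l-th binary digit of a: [a/2^l] - 2[a/2^(l+1)]."""
    return (a >> l) & 1

def cliff(p, a):
    """Clifford multiplication e_p u_a = coef * u_b (even n, 1 <= p <= n),
    implementing the two displayed cases p = 2j-1 and p = 2j."""
    j = (p + 1) // 2
    if p % 2:  # p = 2j - 1
        coef = 1j * (-1) ** (j - 1 + sum(bit(a, l) for l in range(j - 1)))
    else:      # p = 2j
        coef = (-1) ** (j - 1 + sum(bit(a, l) for l in range(j)))
    return coef, a + (-1) ** bit(a, j - 1) * 2 ** (j - 1)

def act(p, spinor):
    """Apply e_p to a spinor stored as {index: coefficient}."""
    out = {}
    for a, c in spinor.items():
        coef, b = cliff(p, a)
        out[b] = out.get(b, 0) + coef * c
    return out
```

Running the relations over all $p,q\leq 6$ and all spinors $u_0,\ldots,u_7$ gives a complete check for $n=6$.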
\subsubsection{Example: the isomorphism between $\Delta_{2k-1}$ and $\Delta_{2k}^+$}
The space of positive spinors $\Delta_{2k}^+$
is generated by the elements $u_{\varepsilon_1\cdots\varepsilon_k}$ such that
\[\prod_{l=1}^k \varepsilon_l = 1.\]
In the binary code this corresponds to the nonnegative integers whose binary expansion has an even
number of bits $a_l=\left[{a\over 2^l}\right]-2\left[{a\over 2^{l+1}}\right]$ equal to $1$.
Now, the isomorphism
\[f:\Delta_{2k-1}=\mbox{span}\{u_a \mid a\in\mathbb{Z},\ 0\leq a\leq 2^{k-1}-1 \}
\longrightarrow\Delta_{2k}^+=\mbox{span}\left\{u_b \mid b\in\mathbb{Z},\ 0\leq b\leq 2^{k}-1,\ \sum_{l=0}^{k-1} b_l\equiv 0 \pmod 2 \right\},\]
as representations of the Lie algebra $\mathfrak{spin}(2k-1)$,
is given by
\[f(u_a)= u_{a+\left({1+(-1)^{1+\sum a_l}\over2}\right)2^{k-1}}.\]
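In integer terms, $f$ simply appends a parity bit in position $2^{k-1}$. A one-function Python sketch (ours); for $k=4$ its image is exactly $\{0,3,5,6,9,10,12,15\}$, the labels of the explicit generators of $\Delta_8^+$ used later:

```python
def f_index(a, k):
    """Index form of f: append the parity bit 2^(k-1), so that the image
    label has an even number of binary digits equal to 1."""
    return a + ((bin(a).count("1") % 2) << (k - 1))
```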
In order to check that the complex linear extension of $f$ is $\mathfrak{spin}(2k-1)$ equivariant,
let $0\leq a\leq 2^{k-1}-1$ and $1\leq p < q\leq 2k-1$. We must verify
\begin{equation}
f(e_pe_q(u_a)) = e_pe_q(f(u_a)). \label{eq: isomorphism}
\end{equation}
Note that the subindices of $u_a$ and $f(u_a)$ have the same binary expression up to the digit corresponding to $2^{k-1}$
so that for $1\leq p< q \leq 2k-2$ the identity \rf{eq: isomorphism}
is fulfilled.
The only cases we have to check are $1\leq p \leq 2k-2$ and $q=2k-1$.
On the one hand
\[
e_{2k-1}u_a = i(-1)^{k-1}(-1)^{\sum_{l=0}^{k-2}a_l} u_a
\]
when $u_a$ is considered as a spinor in $\Delta_{2k-1}$
and
\[ e_{2k-1}u_a = i(-1)^{k-1}(-1)^{\sum_{l=0}^{k-2}a_l}
u_{a+(-1)^{a_{k-1}}2^{k-1}}\]
when $u_a$ is considered as a spinor in $\Delta_{2k}^+$.
On the other hand, in $\Delta_{2k-1}$
\[
e_p e_{2k-1}u_a = i(-1)^{k-1}(-1)^{\sum_{l=0}^{k-2}a_l}
(-1)^{2j-p/2-1}(-1)^{\sum_{l=0}^{j-2}a_l}(-1)^{a_{j-1}(-2j+p+1)}u_{a+(-1)^{a_{j-1}}2^{j-1}}.
\]
{\bf Remark}.
One can even avoid the use of \rf{formula odd} when $n$ is odd and $p=n$ by
using the isomorphism $\Delta_{2k-1}\cong\Delta^+_{2k}$ described above.
\section{Applications}\label{sec: applications}
In this long section, we present three applications of the binary code, in the form of explicit calculations: triality (compare with \cite{Cartan, Harvey}), the octonion multiplication table, and independent vector fields on spheres (compare with \cite{Piccinni}).
\subsection{Triality}
We will first explain the idea of triality in a topological form.
As we will recall below, the group $Spin(8)$ is represented
orthogonally on three real 8-dimensional spaces: $\mathbb{R}^8$,
$\tilde\Delta_8^+$ and $\tilde\Delta_8^-$.
In other words, we have three homomorphisms
\[
\xymatrix{
& Spin(8) \ar[d]^{\lambda_8}&\\
& SO(8)&\\
Spin(8)\ar[ur]^{\kappa_8^+}&&Spin(8)\ar[ul]_{\kappa_8^-}\\
}
\]
Now, consider the following two diagrams,
\[
\xymatrix{
& Spin(8) \ar[d]_{\lambda_8} \\
Spin(8) \ar[ur]^{\sigma}\ar[r]^{\kappa_8^-} & SO(8)
}
\quad\quad\quad
\xymatrix{
& Spin(8)\ar[d]_{\lambda_8} \\
Spin(8) \ar[ur]^{\tau}\ar[r]^{\kappa_8^+} & SO(8)
}
\]
which include the corresponding lifts $\sigma$ and $\tau$ (due to the simple connectedness of $Spin(8)$).
We will see that $\sigma:Spin(8)\longrightarrow Spin(8)$ is an outer automorphism of order 3 (a {\em triality} automorphism),
$\tau:Spin(8)\longrightarrow Spin(8)$ is an outer automorphism of order 2, and the two automorphisms
generate a copy of the permutation group $S_3$.
First, we will examine the situation explicitly at the Lie algebra level
\begin{equation}
\xymatrix{
& \mathfrak{spin}(8) \ar[d]^{\lambda_{8*}} \\
\mathfrak{spin}(8) \ar[ur]^{\sigma_*}\ar[r]^{\kappa_{8*}^-} & \mathfrak{so}(8)
}
\quad\quad\quad
\xymatrix{
& \mathfrak{spin}(8)\ar[d]^{\lambda_{8*}} \\
\mathfrak{spin}(8) \ar[ur]^{\tau_*}\ar[r]^{\kappa_{8*}^+} & \mathfrak{so}(8)
}\label{eq: diagrams Lie algebras}
\end{equation}
and later at the Lie group level.
\subsubsection{The real $Spin(8)$-representations $\tilde\Delta_8^+$ and $\tilde\Delta_8^-$}
Recall that $\gamma_8:\Delta_8\longrightarrow\Delta_8$ is a real structure on $\Delta_8$; this means that $\Delta_8$ is the complexification of the real vector space
\[\tilde\Delta_8 =(1+\gamma_8)(\Delta_8).\]
Furthermore, $\gamma_8$ also preserves the subrepresentations $\Delta_8^+$ and $\Delta_8^-$, i.e. $\gamma_8$ restricts to
real structures on $\Delta_8^+$ and $\Delta_8^-$ and, therefore, they are also complexifications of real vector spaces
\begin{eqnarray*}
\tilde{\Delta}_8^+&=&\{\mbox{$(+1)$-eigenspace of $\gamma_8$ in $\Delta_8^+$} \}\\
&=&(1+\gamma_8)(\Delta_8^+),\\
\tilde{\Delta}_8^-&=&\{\mbox{$(-1)$-eigenspace of $\gamma_8$ in $\Delta_8^-$}\}\\
&=&(1-\gamma_8)(\Delta_8^-).
\end{eqnarray*}
In fact, we have chosen $\tilde\Delta_8^+$ and $\tilde\Delta_8^-$ in this way so that they are compatible with Clifford multiplication
\[\mu_8:\mathbb{R}^8\times\tilde\Delta_8^+\longrightarrow \tilde\Delta_8^-.\]
We have explicit generators for the complex spinor spaces
\begin{eqnarray*}
\Delta_8^+&=&\mathrm{span}(u_0,u_3,u_5,u_6,u_9,u_{10},u_{12},u_{15}),\\
\Delta_8^-&=&\mathrm{span}(u_1,u_2,u_4,u_7,u_8,u_{11},u_{13},u_{14}).
\end{eqnarray*}
For the real representation $\tilde\Delta_8^+$ we have
\begin{eqnarray*}
\tilde\Delta_8^+&=&\mathrm{span}\left\{
{1\over \sqrt{2}}(u_0-u_{15}),
{i\over \sqrt{2}}(u_0+u_{15}),
{1\over \sqrt{2}}(u_3+u_{12}),
{i\over \sqrt{2}}(u_3-u_{12}),\right.\\
&& \left. {1\over \sqrt{2}}(u_5-u_{10}),
{i\over \sqrt{2}}(u_5+u_{10}),
{1\over \sqrt{2}}(u_6+u_9),
{i\over \sqrt{2}}(u_6- u_9)
\right\}.
\end{eqnarray*}
We choose the ordered basis of $\tilde\Delta_8^-$ to be the image of the basis of $\tilde\Delta_8^+$ under Clifford multiplication by the
canonical vector $e_1\in\mathbb{R} ^8$. Namely,
\begin{eqnarray*}
\tilde\Delta_8^-&=&\mathrm{span}\left\{
{i\over \sqrt{2}} (u_1-u_{14}),
{-1\over \sqrt{2}} (u_1+u_{14}),
{i\over \sqrt{2}} (u_2+u_{13}),
{1\over \sqrt{2}} (u_2- u_{13}),\right.\\
&& \left.{-i\over \sqrt{2}} (u_4-u_{11}),
{-1\over \sqrt{2}} (u_4+ u_{11}),
{i\over \sqrt{2}} (u_7+ u_8),
{-1\over \sqrt{2}} (u_7- u_8)
\right\}.
\end{eqnarray*}
\subsubsection{The endomorphism $\sigma_*$}
Using the ordered basis of spinors, one can compute the endomorphisms corresponding
to the generators $e_ie_j\in \mathfrak{spin}(8)$, $1\leq i<j\leq 8$, under the map $\kappa_{8*}^-$ and, in turn, express those endomorphisms as images
of elements of $\mathfrak{spin}(8)$ under $\lambda_{8*}$:
\begin{eqnarray*}
\kappa_{8*}^-( e_{1} e_{2})&=& -E_{1, 2} - E_{3, 4} - E_{5, 6} - E_{7, 8}
={1\over 2}\lambda_{8*}( -e_1e_2 - e_3e_4 - e_5e_6 - e_7e_8),\\
\kappa_{8*}^-( e_{1} e_{3})&=& -E_{1, 3} + E_{2, 4} - E_{5, 7} + E_{6, 8}
={1\over 2}\lambda_{8*}( -e_1e_3 + e_2e_4 - e_5e_7 + e_6e_8),\\
\kappa_{8*}^-( e_{1} e_{4})&=& -E_{1, 4} - E_{2, 3} + E_{5, 8} + E_{6, 7}
={1\over 2}\lambda_{8*}( -e_1e_4 - e_2e_3 + e_5e_8 + e_6e_7),\\
\kappa_{8*}^-( e_{1} e_{5})&=& -E_{1, 5} + E_{2, 6} + E_{3, 7} - E_{4, 8}
={1\over 2}\lambda_{8*}( -e_1e_5 + e_2e_6 + e_3e_7 - e_4e_8),\\
\kappa_{8*}^-( e_{1} e_{6})&=& -E_{1, 6} - E_{2, 5} - E_{3, 8} - E_{4, 7}
={1\over 2}\lambda_{8*}( -e_1e_6 - e_2e_5 - e_3e_8 - e_4e_7),\\
\kappa_{8*}^-( e_{1} e_{7})&=& -E_{1, 7} + E_{2, 8} - E_{3, 5} + E_{4, 6}
={1\over 2}\lambda_{8*}( -e_1e_7 + e_2e_8 - e_3e_5 + e_4e_6),\\
\kappa_{8*}^-( e_{1} e_{8})&=& -E_{1, 8} - E_{2, 7} + E_{3, 6} + E_{4, 5}
={1\over 2}\lambda_{8*}( -e_1e_8 - e_2e_7 + e_3e_6 + e_4e_5),\\
\kappa_{8*}^-( e_{2} e_{3})&=& E_{1, 4} + E_{2, 3} + E_{5, 8} + E_{6, 7}
={1\over 2}\lambda_{8*}( e_1e_4 + e_2e_3 + e_5e_8 + e_6e_7),\\
\kappa_{8*}^-( e_{2} e_{4})&=& -E_{1, 3} + E_{2, 4} + E_{5, 7} - E_{6, 8}
={1\over 2}\lambda_{8*}( -e_1e_3 + e_2e_4 + e_5e_7 - e_6e_8),\\
\kappa_{8*}^-( e_{2} e_{5})&=& E_{1, 6} + E_{2, 5} - E_{3, 8} - E_{4, 7}
={1\over 2}\lambda_{8*}( e_1e_6 + e_2e_5 - e_3e_8 - e_4e_7),\\
\kappa_{8*}^-( e_{2} e_{6})&=& -E_{1, 5} + E_{2, 6} - E_{3, 7} + E_{4, 8}
={1\over 2}\lambda_{8*}( -e_1e_5 + e_2e_6 - e_3e_7 + e_4e_8),\\
\kappa_{8*}^-( e_{2} e_{7})&=& E_{1, 8} + E_{2, 7} + E_{3, 6} + E_{4, 5}
={1\over 2}\lambda_{8*}( e_1e_8 + e_2e_7 + e_3e_6 + e_4e_5),\\
\kappa_{8*}^-( e_{2} e_{8})&=& -E_{1, 7} + E_{2, 8} + E_{3, 5} - E_{4, 6}
={1\over 2}\lambda_{8*}( -e_1e_7 + e_2e_8 + e_3e_5 - e_4e_6),\\
\kappa_{8*}^-( e_{3} e_{4})&=& E_{1, 2} + E_{3, 4} - E_{5, 6} - E_{7, 8}
={1\over 2}\lambda_{8*}( e_1e_2 + e_3e_4 - e_5e_6 - e_7e_8),\\
\kappa_{8*}^-( e_{3} e_{5})&=& E_{1, 7} + E_{2, 8} + E_{3, 5} + E_{4, 6}
={1\over 2}\lambda_{8*}( e_1e_7 + e_2e_8 + e_3e_5 + e_4e_6),\\
\kappa_{8*}^-( e_{3} e_{6})&=& -E_{1, 8} + E_{2, 7} + E_{3, 6} - E_{4, 5}
={1\over 2}\lambda_{8*}( -e_1e_8 + e_2e_7 + e_3e_6 - e_4e_5),\\
\kappa_{8*}^-( e_{3} e_{7})&=& -E_{1, 5} - E_{2, 6} + E_{3, 7} + E_{4, 8}
={1\over 2}\lambda_{8*}( -e_1e_5 - e_2e_6 + e_3e_7 + e_4e_8),\\
\kappa_{8*}^-( e_{3} e_{8})&=& E_{1, 6} - E_{2, 5} + E_{3, 8} - E_{4, 7}
={1\over 2}\lambda_{8*}( e_1e_6 - e_2e_5 + e_3e_8 - e_4e_7),\\
\kappa_{8*}^-( e_{4} e_{5})&=& -E_{1, 8} + E_{2, 7} - E_{3, 6} + E_{4, 5}
={1\over 2}\lambda_{8*}( -e_1e_8 + e_2e_7 - e_3e_6 + e_4e_5),\\
\kappa_{8*}^-( e_{4} e_{6})&=& -E_{1, 7} - E_{2, 8} + E_{3, 5} + E_{4, 6}
={1\over 2}\lambda_{8*}( -e_1e_7 - e_2e_8 + e_3e_5 + e_4e_6),\\
\kappa_{8*}^-( e_{4} e_{7})&=& E_{1, 6} - E_{2, 5} - E_{3, 8} + E_{4, 7}
={1\over 2}\lambda_{8*}( e_1e_6 - e_2e_5 - e_3e_8 + e_4e_7),\\
\kappa_{8*}^-( e_{4} e_{8})&=& E_{1, 5} + E_{2, 6} + E_{3, 7} + E_{4, 8}
={1\over 2}\lambda_{8*}( e_1e_5 + e_2e_6 + e_3e_7 + e_4e_8),\\
\kappa_{8*}^-( e_{5} e_{6})&=& E_{1, 2} - E_{3, 4} + E_{5, 6} - E_{7, 8}
={1\over 2}\lambda_{8*}( e_1e_2 - e_3e_4 + e_5e_6 - e_7e_8),\\
\kappa_{8*}^-( e_{5} e_{7})&=& E_{1, 3} + E_{2, 4} + E_{5, 7} + E_{6, 8}
={1\over 2}\lambda_{8*}( e_1e_3 + e_2e_4 + e_5e_7 + e_6e_8),\\
\kappa_{8*}^-( e_{5} e_{8})&=& -E_{1, 4} + E_{2, 3} + E_{5, 8} - E_{6, 7}
={1\over 2}\lambda_{8*}( -e_1e_4 + e_2e_3 + e_5e_8 - e_6e_7),\\
\kappa_{8*}^-( e_{6} e_{7})&=& -E_{1, 4} + E_{2, 3} - E_{5, 8} + E_{6, 7}
={1\over 2}\lambda_{8*}( -e_1e_4 + e_2e_3 - e_5e_8 + e_6e_7),\\
\kappa_{8*}^-( e_{6} e_{8})&=& -E_{1, 3} - E_{2, 4} + E_{5, 7} + E_{6, 8}
={1\over 2}\lambda_{8*}( -e_1e_3 - e_2e_4 + e_5e_7 + e_6e_8),\\
\kappa_{8*}^-( e_{7} e_{8})&=& E_{1, 2} - E_{3, 4} - E_{5, 6} + E_{7, 8}
={1\over 2}\lambda_{8*}( e_1e_2 - e_3e_4 - e_5e_6 + e_7e_8).
\end{eqnarray*}
This means, in terms of the first diagram in \rf{eq: diagrams Lie algebras},
\begin{eqnarray*}
\sigma_*( e_{1} e_{2})&=&{1\over 2} ( -e_1e_2 - e_3e_4 - e_5e_6 - e_7e_8),\\
\sigma_*( e_{1} e_{3})&=&{1\over 2} ( -e_1e_3 + e_2e_4 - e_5e_7 + e_6e_8),\\
\sigma_*( e_{1} e_{4})&=&{1\over 2} ( -e_1e_4 - e_2e_3 + e_5e_8 + e_6e_7),\\
\sigma_*( e_{1} e_{5})&=&{1\over 2} ( -e_1e_5 + e_2e_6 + e_3e_7 - e_4e_8),\\
\sigma_*( e_{1} e_{6})&=&{1\over 2} ( -e_1e_6 - e_2e_5 - e_3e_8 - e_4e_7),\\
\sigma_*( e_{1} e_{7})&=&{1\over 2} ( -e_1e_7 + e_2e_8 - e_3e_5 + e_4e_6),\\
\sigma_*( e_{1} e_{8})&=&{1\over 2} ( -e_1e_8 - e_2e_7 + e_3e_6 + e_4e_5),\\
\sigma_*( e_{2} e_{3})&=&{1\over 2} ( e_1e_4 + e_2e_3 + e_5e_8 + e_6e_7),\\
\sigma_*( e_{2} e_{4})&=&{1\over 2} ( -e_1e_3 + e_2e_4 + e_5e_7 - e_6e_8),\\
\sigma_*( e_{2} e_{5})&=&{1\over 2} ( e_1e_6 + e_2e_5 - e_3e_8 - e_4e_7),\\
\sigma_*( e_{2} e_{6})&=&{1\over 2} ( -e_1e_5 + e_2e_6 - e_3e_7 + e_4e_8),\\
\sigma_*( e_{2} e_{7})&=&{1\over 2} ( e_1e_8 + e_2e_7 + e_3e_6 + e_4e_5),\\
\sigma_*( e_{2} e_{8})&=&{1\over 2} ( -e_1e_7 + e_2e_8 + e_3e_5 - e_4e_6),\\
\sigma_*( e_{3} e_{4})&=&{1\over 2} ( e_1e_2 + e_3e_4 - e_5e_6 - e_7e_8),\\
\sigma_*( e_{3} e_{5})&=&{1\over 2} ( e_1e_7 + e_2e_8 + e_3e_5 + e_4e_6),\\
\sigma_*( e_{3} e_{6})&=&{1\over 2} ( -e_1e_8 + e_2e_7 + e_3e_6 - e_4e_5),\\
\sigma_*( e_{3} e_{7})&=&{1\over 2} ( -e_1e_5 - e_2e_6 + e_3e_7 + e_4e_8),\\
\sigma_*( e_{3} e_{8})&=&{1\over 2} ( e_1e_6 - e_2e_5 + e_3e_8 - e_4e_7),\\
\sigma_*( e_{4} e_{5})&=&{1\over 2} ( -e_1e_8 + e_2e_7 - e_3e_6 + e_4e_5),\\
\sigma_*( e_{4} e_{6})&=&{1\over 2} ( -e_1e_7 - e_2e_8 + e_3e_5 + e_4e_6),\\
\sigma_*( e_{4} e_{7})&=&{1\over 2} ( e_1e_6 - e_2e_5 - e_3e_8 + e_4e_7),\\
\sigma_*( e_{4} e_{8})&=&{1\over 2} ( e_1e_5 + e_2e_6 + e_3e_7 + e_4e_8),\\
\sigma_*( e_{5} e_{6})&=&{1\over 2} ( e_1e_2 - e_3e_4 + e_5e_6 - e_7e_8),\\
\sigma_*( e_{5} e_{7})&=&{1\over 2} ( e_1e_3 + e_2e_4 + e_5e_7 + e_6e_8),\\
\sigma_*( e_{5} e_{8})&=&{1\over 2} ( -e_1e_4 + e_2e_3 + e_5e_8 - e_6e_7),\\
\sigma_*( e_{6} e_{7})&=&{1\over 2} ( -e_1e_4 + e_2e_3 - e_5e_8 + e_6e_7),\\
\sigma_*( e_{6} e_{8})&=&{1\over 2} ( -e_1e_3 - e_2e_4 + e_5e_7 + e_6e_8),\\
\sigma_*( e_{7} e_{8})&=&{1\over 2} ( e_1e_2 - e_3e_4 - e_5e_6 + e_7e_8).
\end{eqnarray*}
In other words, we have defined $\sigma_*$ in such a way that
\[\lambda_{8*}\circ \sigma_* = \kappa_{8*}^-.\]
In order to show that $\sigma_*$ is of order 3, let us consider, for instance
\[ \sigma_*( e_{1} e_{2})={1\over 2} ( -e_1e_2 - e_3e_4 - e_5e_6 - e_7e_8).
\]
Then
\begin{eqnarray*}
\sigma_*(\sigma_*( e_{1} e_{2}))
&=&{1\over 2} ( -\sigma_*(e_1e_2) - \sigma_*(e_3e_4) - \sigma_*(e_5e_6) - \sigma_*(e_7e_8))\\
&=&{1\over 4} ( -( -e_1e_2 - e_3e_4 - e_5e_6 - e_7e_8) - ( e_1e_2 + e_3e_4 - e_5e_6 - e_7e_8) \\
& & - ( e_1e_2 - e_3e_4 + e_5e_6 - e_7e_8) - ( e_1e_2 - e_3e_4 - e_5e_6 + e_7e_8))\\
&=&{1\over 2} (-e_1e_2 +e_3e_4 +e_5e_6+e_7e_8),
\end{eqnarray*}
and
\begin{eqnarray*}
\sigma_*(\sigma_*(\sigma_*( e_{1} e_{2})))
&=&{1\over 2} (-\sigma_*(e_1e_2) +\sigma_*(e_3e_4) +\sigma_*(e_5e_6)+\sigma_*(e_7e_8))\\
&=&{1\over 4} (-( -e_1e_2 - e_3e_4 - e_5e_6 - e_7e_8) +( e_1e_2 + e_3e_4 - e_5e_6 - e_7e_8) \\
& & +( e_1e_2 - e_3e_4 + e_5e_6 - e_7e_8)+( e_1e_2 - e_3e_4 - e_5e_6 + e_7e_8))\\
&=&e_1e_2 .
\end{eqnarray*}
All the other cases are similar.
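The order-3 computation can also be checked mechanically on the invariant subspace spanned by $\{e_1e_2, e_3e_4, e_5e_6, e_7e_8\}$. In the following sketch (our code), the matrix entries are read off from the four corresponding formulas for $\sigma_*$ above:

```python
# sigma_* restricted to the invariant subspace span{e1e2, e3e4, e5e6, e7e8};
# S[r][c] is the coefficient of the r-th basis element in sigma_* of the c-th,
# read off from the four formulas above
S = [[-0.5,  0.5,  0.5,  0.5],
     [-0.5,  0.5, -0.5, -0.5],
     [-0.5, -0.5,  0.5, -0.5],
     [-0.5, -0.5, -0.5,  0.5]]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

cube = matmul(S, matmul(S, S))                     # should be the identity
gram = matmul(S, [list(col) for col in zip(*S)])   # S S^T, should be the identity
trace = sum(S[i][i] for i in range(4))             # should equal 1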
In fact, using the standard ordered basis $\{e_1e_2,e_1e_3,...,e_7e_8\}$
of $\mathfrak{spin}(8)$ we have the matrix representation
{\tiny
\[\sigma_*={1\over 2}
\left(
\begin{array}{cccccccccccccccccccccccccccc}
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0\\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0\\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0
& 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 &
1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &
-1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0\\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & -1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 &
0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &
1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1
& 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1
& 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &
1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 &
0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0
& 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &
-1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1
& 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0
& 0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 &
0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 \\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\
-1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right),
\]
}
which one can verify is of order 3.
Furthermore, the map $\sigma_*$ has eigenvalues
\[e^{2\pi i\over3},e^{-2\pi i\over3},1,\]
with multiplicities 7, 7 and 14 respectively.
The eigenspace corresponding to $1$ is generated by
\begin{eqnarray}
&& \left\{
e_{2} e_{3} + e_{6} e_{7},
-e_{2} e_{4} + e_{6} e_{8},
-e_{3} e_{4} + e_{7} e_{8},
-e_{2} e_{6} + e_{3} e_{7},
-e_{2} e_{5} + e_{3} e_{8},
e_{2} e_{7} + e_{4} e_{5},
-e_{2} e_{8} + e_{4} e_{6},
\right.\nonumber\\
&&\left.
-e_{2} e_{5} + e_{4} e_{7},
e_{2} e_{6} + e_{4} e_{8},
-e_{3} e_{4} + e_{5} e_{6},
e_{2} e_{4} + e_{5} e_{7},
e_{2} e_{3} + e_{5} e_{8},
e_{2} e_{8} + e_{3} e_{5},
e_{2} e_{7} + e_{3} e_{6}
\right\}\label{eq: generators g2}
\end{eqnarray}
which generates a copy of $\mathfrak{g}_2\subset \mathfrak{spin}(8)\subset Cl_8^0$.
Note that none of the generators involves the vector $e_1$, so that this copy of $\mathfrak{g}_2$
is a subalgebra of the copy of $\mathfrak{spin}(7)$ spanned by $\{e_ie_j \mid 2\leq i<j\leq 8\}$.
This copy of $\mathfrak{g}_2$ annihilates the basic positive spinor
\[{1\over \sqrt{2}}(u_0-u_{15})\in \tilde\Delta_8^+,\]
so that $\tilde\Delta_8^+=\mathbf{1}\oplus\mathbb{R}^7$ under $\mathfrak{g}_2$ and also
annihilates the basic negative spinor
\[{i\over \sqrt{2}} (u_1-u_{14})\in \tilde\Delta_8^-\]
under Clifford multiplication, so that $\tilde\Delta_8^-=\mathbf{1}\oplus\mathbb{R}^7$ under $\mathfrak{g}_2$.
The matrix representation
for a general element
\begin{eqnarray*}
&&
\alpha_{1} (e_{2} e_{3} + e_{6} e_{7})+
\alpha_{2} (-e_{2} e_{4} + e_{6} e_{8})+
\alpha_{3} (-e_{3} e_{4} + e_{7} e_{8})+
\alpha_{4} (-e_{2} e_{6} + e_{3} e_{7})+
\alpha_{5} (-e_{2} e_{5} + e_{3} e_{8})\\
&&+
\alpha_{6} (e_{2} e_{7} + e_{4} e_{5})+
\alpha_{7} (-e_{2} e_{8} + e_{4} e_{6})+
\alpha_{8} (-e_{2} e_{5} + e_{4} e_{7})+
\alpha_{9} (e_{2} e_{6} + e_{4} e_{8})+
\alpha_{10} (-e_{3} e_{4} + e_{5} e_{6})\\
&&+
\alpha_{11} (e_{2} e_{4} + e_{5} e_{7})+
\alpha_{12} (e_{2} e_{3} + e_{5} e_{8})+
\alpha_{13} (e_{2} e_{8} + e_{3} e_{5})+
\alpha_{14} (e_{2} e_{7} + e_{3} e_{6})
\end{eqnarray*}
acting on both $\tilde\Delta_8^+$ and $\tilde\Delta_8^-$ is
\[
2\left(
\begin{array}{cccccccc}
0&0&0&0&0&0&0&0\\
0&0& \alpha_{1}+ \alpha_{12}& -\alpha_{2}+ \alpha_{11}&-\alpha_{5}- \alpha_{8}& -\alpha_{4}+ \alpha_{9}& \alpha_{6}+ \alpha_{14}&- \alpha_{7}+ \alpha_{13}\\
0&- \alpha_{1}- \alpha_{12}&0&- \alpha_{3}- \alpha_{10}& \alpha_{13}& \alpha_{14}& \alpha_{4 }& \alpha_{5}\\
0& \alpha_{2}- \alpha_{11}& \alpha_{3}+ \alpha_{10}&0& \alpha_{6}& \alpha_{7}& \alpha_{8 }& \alpha_{9}\\
0& \alpha_{5 }+ \alpha_{8}&- \alpha_{13}&- \alpha_{6}&0& \alpha_{10}& \alpha_{11}& \alpha_{12}\\
0& \alpha_{4}- \alpha_{9 }&- \alpha_{14}&- \alpha_{7}&- \alpha_{10}&0& \alpha_{1}& \alpha_{2}\\
0&- \alpha_{6}- \alpha_{14}&- \alpha_{4 }&- \alpha_{8 }&- \alpha_{11}&- \alpha_{1}&0& \alpha_{3}\\
0& \alpha_{7}- \alpha_{13}&- \alpha_{5}&- \alpha_{9}&- \alpha_{12}&- \alpha_{2}&- \alpha_{3}&0\\
\end{array}
\right).
\]
Let us compute an explicit element of the group $G_2$.
Consider the element $e_{2} e_{3} + e_{6} e_{7}\in\mathfrak{g}_2\subset \mathfrak{spin}(8)\subset Cl_8^0$,
and the one-parameter subgroup
\begin{eqnarray*}
\exp(t(e_{2} e_{3} + e_{6} e_{7}))
&=& \exp(te_{2} e_{3}) \exp(te_{6} e_{7})\\
&=& (\cos(t)+\sin(t)e_2e_3)(\cos(t)+\sin(t)e_6e_7)\\
&=& {1\over 2}(-e_2e_3e_6e_7\cos(2t)+e_2e_3e_6e_7+e_2e_3\sin(2t)+e_6e_7\sin(2t)+\cos(2t)+1)\\
&=& {1\over 2}(e_2e_3e_6e_7(1-\cos(2t))+(e_2e_3+e_6e_7)\sin(2t)+\cos(2t)+1)\\
&\in& G_2\subset Spin(7)\subset Spin(8)\subset Cl_8^0.
\end{eqnarray*}
Its image under $\kappa_8^-$ is
\begin{eqnarray*}
\kappa_8^-(\exp(t(e_{2} e_{3} + e_{6} e_{7})))
&=& (\cos(t){\rm Id}_8+\sin(t)\kappa_8^-(e_2e_3))(\cos(t){\rm Id}_8+\sin(t)\kappa_8^-(e_6e_7))\\
&=&\left(\begin{array}{cccccccc}
1 & & & & & & & \\
 & \cos(2t) & -\sin(2t) & & & & & \\
 & \sin(2t) & \cos(2t) & & & & & \\
 & & & 1 & & & & \\
 & & & & 1 & & & \\
 & & & & & \cos(2t) & -\sin(2t) & \\
 & & & & & \sin(2t) & \cos(2t) & \\
 & & & & & & & 1
\end{array} \right)\\
&\in & \kappa_8^-( G_2)\subset SO(7)\subset SO(8).
\end{eqnarray*}
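This image matrix admits a numerical cross-check (our code, not the paper's): in the sum $\kappa_{8*}^-(e_2e_3)+\kappa_{8*}^-(e_6e_7)$ the $E_{1,4}$ and $E_{5,8}$ terms cancel, and exponentiating the remaining generator $2(E_{2,3}+E_{6,7})$ reproduces the rotation by $2t$ in the two planes. We assume $E_{i,j}$ is the elementary skew-symmetric generator sending $e_i\mapsto e_j$ and $e_j\mapsto -e_i$, a sign convention inferred from the displayed matrix:

```python
import math

def E(i, j, n=8):
    """Elementary skew generator: e_i -> e_j, e_j -> -e_i (assumed convention)."""
    m = [[0.0] * n for _ in range(n)]
    m[j - 1][i - 1] = 1.0
    m[i - 1][j - 1] = -1.0
    return m

def add(x, y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(x, y)]

def neg(x):
    return [[-a for a in row] for row in x]

def matmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(x, terms=30):
    """Matrix exponential by power series (fine here: the generator is small)."""
    n = len(x)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in matmul(term, x)]
        result = add(result, term)
    return result

k1 = add(add(E(1, 4), E(2, 3)), add(E(5, 8), E(6, 7)))            # kappa_{8*}^-(e2e3)
k2 = add(add(neg(E(1, 4)), E(2, 3)), add(neg(E(5, 8)), E(6, 7)))  # kappa_{8*}^-(e6e7)
K = add(k1, k2)   # = 2(E_{2,3} + E_{6,7}): the E_{1,4} and E_{5,8} terms cancel
t = 0.25
image = expm([[t * v for v in row] for row in K])
```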
The eigenspace corresponding to $e^{2\pi i\over3}$ is generated by
\begin{eqnarray*}
&& \{e_{6} e_{8} - e_{5} e_{7} + e_{2} e_{4} + i e_{1} e_{3} \sqrt{3},
e_{4} e_{7} + e_{3} e_{8} + e_{2} e_{5} - i e_{1} e_{6} \sqrt{3} ,
e_{4} e_{6} - e_{3} e_{5} + e_{2} e_{8} + i e_{1} e_{7} \sqrt{3} ,\\
&& e_{7} e_{8} + e_{5} e_{6} + e_{3} e_{4} - i e_{1} e_{2} \sqrt{3} ,
e_{4} e_{5} + e_{3} e_{6} - e_{2} e_{7} + i e_{1} e_{8} \sqrt{3} ,
e_{4} e_{8} - e_{3} e_{7} - e_{2} e_{6} - i e_{1} e_{5} \sqrt{3} ,\\
&& e_{6} e_{7} + e_{5} e_{8} - e_{2} e_{3} + i e_{1} e_{4} \sqrt{3} \},
\end{eqnarray*}
and the eigenspace corresponding to $e^{-2\pi i\over3}$ is generated by
\begin{eqnarray*}
&&\{ e_{6} e_{8} - e_{5} e_{7} + e_{2} e_{4} - i e_{1} e_{3} \sqrt{3} ,
e_{4} e_{7} + e_{3} e_{8} + e_{2} e_{5} + i e_{1} e_{6} \sqrt{3} ,
e_{4} e_{6} - e_{3} e_{5} + e_{2} e_{8} - i e_{1} e_{7} \sqrt{3} ,\\
&& e_{7} e_{8} + e_{5} e_{6} + e_{3} e_{4} + i e_{1} e_{2} \sqrt{3} ,
e_{4} e_{5} + e_{3} e_{6} - e_{2} e_{7} - i e_{1} e_{8} \sqrt{3} ,
e_{4} e_{8} - e_{3} e_{7} - e_{2} e_{6} + i e_{1} e_{5} \sqrt{3} ,\\
&& e_{6} e_{7} + e_{5} e_{8} - e_{2} e_{3} - i e_{1} e_{4} \sqrt{3} \}.
\end{eqnarray*}
\subsubsection{The endomorphism $\tau_*$}
Using the ordered basis of spinors, one can compute the endomorphisms corresponding
to the elements $e_ie_j\in \mathfrak{spin}(8)$, $1\leq i< j \leq 8$, under the map $\kappa_{8*}^+$ and, in turn, express those endomorphisms as images
of elements of $\mathfrak{spin}(8)$ under $\lambda_{8*}$:
\begin{eqnarray*}
\kappa_{8*}^+( e_1 e_2)
&=& E_{1, 2} + E_{3, 4} + E_{5, 6} + E_{7, 8}
={1\over 2}\lambda_{8*}(e_{1, 2} + e_{3, 4} + e_{5, 6} + e_{7, 8}),\\
\kappa_{8*}^+( e_1 e_3)
&=& E_{1, 3} - E_{2, 4} + E_{5, 7} - E_{6, 8}
={1\over 2}\lambda_{8*}( e_{1, 3} - e_{2, 4} + e_{5, 7} - e_{6, 8}),\\
\kappa_{8*}^+( e_1 e_4)
&=& E_{1, 4} + E_{2, 3} - E_{5, 8} - E_{6, 7}
={1\over 2}\lambda_{8*}( e_{1, 4} + e_{2, 3} - e_{5, 8} - e_{6, 7}),\\
\kappa_{8*}^+( e_1 e_5)
&=& E_{1, 5} - E_{2, 6} - E_{3, 7} + E_{4, 8}
={1\over 2}\lambda_{8*}( e_{1, 5} - e_{2, 6} - e_{3, 7} + e_{4, 8}),\\
\kappa_{8*}^+( e_1 e_6)
&=& E_{1, 6} + E_{2, 5} + E_{3, 8} + E_{4, 7}
={1\over 2}\lambda_{8*}( e_{1, 6} + e_{2, 5} + e_{3, 8} + e_{4, 7}),\\
\kappa_{8*}^+( e_1 e_7)
&=& E_{1, 7} - E_{2, 8} + E_{3, 5} - E_{4, 6}
={1\over 2}\lambda_{8*}( e_{1, 7} - e_{2, 8} + e_{3, 5} - e_{4, 6}),\\
\kappa_{8*}^+( e_1 e_8)
&=& E_{1, 8} + E_{2, 7} - E_{3, 6} - E_{4, 5}
={1\over 2}\lambda_{8*}( e_{1, 8} + e_{2, 7} - e_{3, 6} - e_{4, 5}),\\
\kappa_{8*}^+( e_2 e_3)
&=& E_{1, 4} + E_{2, 3} + E_{5, 8} + E_{6, 7}
={1\over 2}\lambda_{8*}( e_{1, 4} + e_{2, 3} + e_{5, 8} + e_{6, 7}),\\
\kappa_{8*}^+( e_2 e_4)
&=& -E_{1, 3} + E_{2, 4} + E_{5, 7} - E_{6, 8}
={1\over 2}\lambda_{8*}( -e_{1, 3} + e_{2, 4} + e_{5, 7} - e_{6, 8}),\\
\kappa_{8*}^+( e_2 e_5)
&=& E_{1, 6} + E_{2, 5} - E_{3, 8} - E_{4, 7}
={1\over 2}\lambda_{8*}( e_{1, 6} + e_{2, 5} - e_{3, 8} - e_{4, 7}),\\
\kappa_{8*}^+( e_2 e_6)
&=& -E_{1, 5} + E_{2, 6} - E_{3, 7} + E_{4, 8}
={1\over 2}\lambda_{8*}( -e_{1, 5} + e_{2, 6} - e_{3, 7} + e_{4, 8}),\\
\kappa_{8*}^+( e_2 e_7)
&=& E_{1, 8} + E_{2, 7} + E_{3, 6} + E_{4, 5}
={1\over 2}\lambda_{8*}( e_{1, 8} + e_{2, 7} + e_{3, 6} + e_{4, 5}),\\
\kappa_{8*}^+( e_2 e_8)
&=& -E_{1, 7} + E_{2, 8} + E_{3, 5} - E_{4, 6}
={1\over 2}\lambda_{8*}( -e_{1, 7} + e_{2, 8} + e_{3, 5} - e_{4, 6}),\\
\kappa_{8*}^+( e_3 e_4)
&=& E_{1, 2} + E_{3, 4} - E_{5, 6} - E_{7, 8}
={1\over 2}\lambda_{8*}( e_{1, 2} + e_{3, 4} - e_{5, 6} - e_{7, 8}),\\
\kappa_{8*}^+( e_3 e_5)
&=& E_{1, 7} + E_{2, 8} + E_{3, 5} + E_{4, 6}
={1\over 2}\lambda_{8*}( e_{1, 7} + e_{2, 8} + e_{3, 5} + e_{4, 6}),\\
\kappa_{8*}^+( e_3 e_6)
&=& -E_{1, 8} + E_{2, 7} + E_{3, 6} - E_{4, 5}
={1\over 2}\lambda_{8*}( -e_{1, 8} + e_{2, 7} + e_{3, 6} - e_{4, 5}),\\
\kappa_{8*}^+( e_3 e_7)
&=& -E_{1, 5} - E_{2, 6} + E_{3, 7} + E_{4, 8}
={1\over 2}\lambda_{8*}( -e_{1, 5} - e_{2, 6} + e_{3, 7} + e_{4, 8}),\\
\kappa_{8*}^+( e_3 e_8)
&=& E_{1, 6} - E_{2, 5} + E_{3, 8} - E_{4, 7}
={1\over 2}\lambda_{8*}( e_{1, 6} - e_{2, 5} + e_{3, 8} - e_{4, 7}),\\
\kappa_{8*}^+( e_4 e_5)
&=& -E_{1, 8} + E_{2, 7} - E_{3, 6} + E_{4, 5}
={1\over 2}\lambda_{8*}( -e_{1, 8} + e_{2, 7} - e_{3, 6} + e_{4, 5}),\\
\kappa_{8*}^+( e_4 e_6)
&=& -E_{1, 7} - E_{2, 8} + E_{3, 5} + E_{4, 6}
={1\over 2}\lambda_{8*}( -e_{1, 7} - e_{2, 8} + e_{3, 5} + e_{4, 6}),\\
\kappa_{8*}^+( e_4 e_7)
&=& E_{1, 6} - E_{2, 5} - E_{3, 8} + E_{4, 7}
={1\over 2}\lambda_{8*}( e_{1, 6} - e_{2, 5} - e_{3, 8} + e_{4, 7}),\\
\kappa_{8*}^+( e_4 e_8)
&=& E_{1, 5} + E_{2, 6} + E_{3, 7} + E_{4, 8}
={1\over 2}\lambda_{8*}( e_{1, 5} + e_{2, 6} + e_{3, 7} + e_{4, 8}),\\
\kappa_{8*}^+( e_5 e_6)
&=& E_{1, 2} - E_{3, 4} + E_{5, 6} - E_{7, 8}
={1\over 2}\lambda_{8*}( e_{1, 2} - e_{3, 4} + e_{5, 6} - e_{7, 8}),\\
\kappa_{8*}^+( e_5 e_7)
&=& E_{1, 3} + E_{2, 4} + E_{5, 7} + E_{6, 8}
={1\over 2}\lambda_{8*}( e_{1, 3} + e_{2, 4} + e_{5, 7} + e_{6, 8}),\\
\kappa_{8*}^+( e_5 e_8)
&=& -E_{1, 4} + E_{2, 3} + E_{5, 8} - E_{6, 7}
={1\over 2}\lambda_{8*}( -e_{1, 4} + e_{2, 3} + e_{5, 8} - e_{6, 7}),\\
\kappa_{8*}^+( e_6 e_7)
&=& -E_{1, 4} + E_{2, 3} - E_{5, 8} + E_{6, 7}
={1\over 2}\lambda_{8*}( -e_{1, 4} + e_{2, 3} - e_{5, 8} + e_{6, 7}),\\
\kappa_{8*}^+( e_6 e_8)
&=& -E_{1, 3} - E_{2, 4} + E_{5, 7} + E_{6, 8}
={1\over 2}\lambda_{8*}( -e_{1, 3} - e_{2, 4} + e_{5, 7} + e_{6, 8}),\\
\kappa_{8*}^+( e_7 e_8)
&=& E_{1, 2} - E_{3, 4} - E_{5, 6} + E_{7, 8}
={1\over 2}\lambda_{8*}( e_{1, 2} - e_{3, 4} - e_{5, 6} + e_{7, 8}).
\end{eqnarray*}
This means, in terms of the second diagram in \rf{eq: diagrams Lie algebras},
\begin{eqnarray*}
\tau_*( e_1 e_2)&=& {1\over 2} (e_1e_2 + e_3e_4 + e_5e_6 + e_7e_8),\\
\tau_*( e_1 e_3)&=& {1\over 2} (e_1e_3 - e_2e_4 + e_5e_7 - e_6e_8),\\
\tau_*( e_1 e_4)&=& {1\over 2} (e_1e_4 + e_2e_3 - e_5e_8 - e_6e_7),\\
\tau_*( e_1 e_5)&=& {1\over 2} (e_1e_5 - e_2e_6 - e_3e_7 + e_4e_8),\\
\tau_*( e_1 e_6)&=& {1\over 2} (e_1e_6 + e_2e_5 + e_3e_8 + e_4e_7),\\
\tau_*( e_1 e_7)&=& {1\over 2} (e_1e_7 - e_2e_8 + e_3e_5 - e_4e_6),\\
\tau_*( e_1 e_8)&=& {1\over 2} (e_1e_8 + e_2e_7 - e_3e_6 - e_4e_5),\\
\tau_*( e_2 e_3)&=& {1\over 2} (e_1e_4 + e_2e_3 + e_5e_8 + e_6e_7),\\
\tau_*( e_2 e_4)&=& {1\over 2} (-e_1e_3 + e_2e_4 + e_5e_7 - e_6e_8),\\
\tau_*( e_2 e_5)&=& {1\over 2} (e_1e_6 + e_2e_5 - e_3e_8 - e_4e_7),\\
\tau_*( e_2 e_6)&=& {1\over 2} (-e_1e_5 + e_2e_6 - e_3e_7 + e_4e_8),\\
\tau_*( e_2 e_7)&=& {1\over 2} (e_1e_8 + e_2e_7 + e_3e_6 + e_4e_5),\\
\tau_*( e_2 e_8)&=& {1\over 2} (-e_1e_7 + e_2e_8 + e_3e_5 - e_4e_6),\\
\tau_*( e_3 e_4)&=& {1\over 2} (e_1e_2 + e_3e_4 - e_5e_6 - e_7e_8),\\
\tau_*( e_3 e_5)&=& {1\over 2} (e_1e_7 + e_2e_8 + e_3e_5 + e_4e_6),\\
\tau_*( e_3 e_6)&=& {1\over 2} (-e_1e_8 + e_2e_7 + e_3e_6 - e_4e_5),\\
\tau_*( e_3 e_7)&=& {1\over 2} (-e_1e_5 - e_2e_6 + e_3e_7 + e_4e_8),\\
\tau_*( e_3 e_8)&=& {1\over 2} (e_1e_6 - e_2e_5 + e_3e_8 - e_4e_7),\\
\tau_*( e_4 e_5)&=& {1\over 2} (-e_1e_8 + e_2e_7 - e_3e_6 + e_4e_5),\\
\tau_*( e_4 e_6)&=& {1\over 2} (-e_1e_7 - e_2e_8 + e_3e_5 + e_4e_6),\\
\tau_*( e_4 e_7)&=& {1\over 2} (e_1e_6 - e_2e_5 - e_3e_8 + e_4e_7),\\
\tau_*( e_4 e_8)&=& {1\over 2} (e_1e_5 + e_2e_6 + e_3e_7 + e_4e_8),\\
\tau_*( e_5 e_6)&=& {1\over 2} (e_1e_2 - e_3e_4 + e_5e_6 - e_7e_8),\\
\tau_*( e_5 e_7)&=& {1\over 2} (e_1e_3 + e_2e_4 + e_5e_7 + e_6e_8),\\
\tau_*( e_5 e_8)&=& {1\over 2} (-e_1e_4 + e_2e_3 + e_5e_8 - e_6e_7),\\
\tau_*( e_6 e_7)&=& {1\over 2} (-e_1e_4 + e_2e_3 - e_5e_8 + e_6e_7),\\
\tau_*( e_6 e_8)&=& {1\over 2} (-e_1e_3 - e_2e_4 + e_5e_7 + e_6e_8),\\
\tau_*( e_7 e_8)&=& {1\over 2} (e_1e_2 - e_3e_4 - e_5e_6 + e_7e_8).
\end{eqnarray*}
In other words, we have defined $\tau_*$ in such a way that
\[\lambda_{8*}\circ \tau_* = \kappa_{8*}^+.\]
As before, using the standard ordered basis of $\mathfrak{spin}(8)$, we have the matrix representation
{\tiny
\[
\tau_*={1\over 2}
\left(
\begin{array}{cccccccccccccccccccccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 &
0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1
& 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &
-1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & -1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 &
0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &
1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1
& 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1
& 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &
1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0
& 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &
-1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 &
1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 &
0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0\\
0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0\\
0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 &
0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 1 \\
\end{array}
\right),
\]
}
which one can verify is of order 2.
The map $\tau_*$ has eigenvalues $1$ and $-1$,
with multiplicities 21 and 7, respectively.
The eigenspace corresponding to $1$ is generated by
\begin{eqnarray*}
&& \{ e_{1} e_{4} + e_{2} e_{3},
-e_{1} e_{3} + e_{2} e_{4},
e_{1} e_{6} + e_{2} e_{5},
-e_{1} e_{5} + e_{2} e_{6},
e_{1} e_{8} + e_{2} e_{7},
-e_{1} e_{7} + e_{2} e_{8},
e_{1} e_{2} + e_{3} e_{4},\\
&& e_{1} e_{7} + e_{3} e_{5},
-e_{1} e_{8} + e_{3} e_{6},
-e_{1} e_{5} + e_{3} e_{7},
e_{1} e_{6} + e_{3} e_{8},
-e_{1} e_{8} + e_{4} e_{5},
-e_{1} e_{7} + e_{4} e_{6},
e_{1} e_{6} + e_{4} e_{7},\\
&& e_{1} e_{5} + e_{4} e_{8},
e_{1} e_{2} + e_{5} e_{6},
e_{1} e_{3} + e_{5} e_{7},
-e_{1} e_{4} + e_{5} e_{8},
-e_{1} e_{4} + e_{6} e_{7},
-e_{1} e_{3} + e_{6} e_{8},
e_{1} e_{2} + e_{7} e_{8}\}
\end{eqnarray*}
which generates a copy of $\mathfrak{spin}(7)\subset\mathfrak{spin}(8)\subset Cl_8^0$, i.e.
\[\{\mbox{+1 eigenspace of $\tau_*$}\}\cong \mathfrak{spin}(7).\]
By taking appropriate sums of these generators we can find
the set of generators \rf{eq: generators g2} of our copy of $\mathfrak{g}_2$.
Moreover, $\mathfrak{g}_2$ is the intersection of the two copies of $\mathfrak{spin}(7)$, i.e.
\[\mathfrak{g}_2= \mathfrak{spin}({\rm span}\{e_2,...,e_8\})\cap \{\mbox{+1 eigenspace of $\tau_*$}\}.\]
One can easily compute brackets (in the Clifford product) of the pair of Lie algebras
$(\mathfrak{spin}(7),\mathfrak{g}_2)$ and check that they form a symmetric pair. Since the orbit
$Spin(7)\cdot e_1=Spin(7)/G_2$ is $7$-dimensional and a submanifold of the $7$-dimensional sphere $S^7=Spin(8)/Spin(7)$,
we have the classical result
\[{Spin(7)\over G_2}=S^7 .\]
The $7$-dimensional eigenspace corresponding to $-1$ is generated by
\begin{eqnarray*}
&& \{ e_{1} e_{3} + e_{2} e_{4} - e_{5} e_{7} + e_{6} e_{8},
e_{1} e_{4} - e_{2} e_{3} + e_{5} e_{8} + e_{6} e_{7},
e_{1} e_{7} + e_{2} e_{8} - e_{3} e_{5} + e_{4} e_{6},
e_{1} e_{8} - e_{2} e_{7} + e_{3} e_{6} + e_{4} e_{5},\\
&& -e_{1} e_{6} + e_{2} e_{5} + e_{3} e_{8} + e_{4} e_{7},
-e_{1} e_{2} + e_{3} e_{4} + e_{5} e_{6} + e_{7} e_{8},
-e_{1} e_{5} - e_{2} e_{6} - e_{3} e_{7} + e_{4} e_{8}\}.
\end{eqnarray*}
{\bf Remark}.
Note that, by using the bases
\[\{e_1e_2,e_1e_3,\ldots,e_7e_8\}\subset \mathfrak{spin}(8)\]
and
\[\{E_{1,2}, E_{1,3},\ldots,E_{7,8}\}\subset \mathfrak{so}(8),\]
the matrices representing
\begin{eqnarray*}
\kappa_{8*}^-:\mathfrak{spin}(8)&\longrightarrow& \mathfrak{so}(8),\\
\kappa_{8*}^+:\mathfrak{spin}(8)&\longrightarrow& \mathfrak{so}(8),
\end{eqnarray*}
with respect to these bases equal the matrices representing $2\sigma_*$ and $2\tau_*$ respectively.
In this way, triality becomes somewhat tautological.
\subsubsection{Group generated by $\sigma_*$ and $\tau_*$}
\begin{corol}
The endomorphisms $\sigma_*$ and $\tau_*$
generate a copy of the symmetric group $S_3$ on three symbols.
\end{corol}
{\em Proof}.
The endomorphisms $\sigma_*$ and $\tau_*$ satisfy
\begin{eqnarray*}
\tau_*^2&=&{\rm Id}_{\mathfrak{spin}(8)},\\
\sigma_*^3&=&{\rm Id}_{\mathfrak{spin}(8)},\\
\sigma_*\tau_*&=&\tau_*\sigma_*^2,\\
\sigma_*^2\tau_*&=&\tau_*\sigma_*,
\end{eqnarray*}
which proves the claim.\qd
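As an informal cross-check of these relations, one may model $\sigma_*$ and $\tau_*$ by a $3$-cycle and a transposition on three symbols; the labels $0,1,2$ below are our own illustrative encoding, not the actual endomorphisms.

```python
# Sketch: model sigma_* by a 3-cycle and tau_* by a transposition.
def compose(p, q):
    # (p o q)(x) = p(q(x)); permutations stored as tuples of images
    return tuple(p[q[i]] for i in range(3))

ident = (0, 1, 2)
sigma = (1, 2, 0)   # order 3
tau   = (1, 0, 2)   # order 2

assert compose(tau, tau) == ident                       # tau^2 = Id
assert compose(sigma, compose(sigma, sigma)) == ident   # sigma^3 = Id
# the defining relation sigma tau = tau sigma^2:
assert compose(sigma, tau) == compose(tau, compose(sigma, sigma))

# generate the group spanned by sigma and tau:
group, frontier = {ident}, [ident]
while frontier:
    g = frontier.pop()
    for h in (sigma, tau):
        gh = compose(g, h)
        if gh not in group:
            group.add(gh)
            frontier.append(gh)
print(len(group))  # -> 6, the order of S_3
```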
\begin{corol}
The compositions
$\tau_*\sigma_*$, $\sigma_*\tau_*$
are also involutions.
\end{corol}
{\em Proof}. Consider,
\begin{eqnarray*}
(\tau_*\sigma_*)(\tau_*\sigma_*)
&=&\tau_*(\sigma_*\tau_*)\sigma_*\\
&=&\tau_*(\tau_*\sigma_*^2)\sigma_*\\
&=&\tau_*^2\sigma_*^3\\
&=&{\rm Id}_{\mathfrak{spin}(8)}.
\end{eqnarray*}
\qd
\begin{corol}
The endomorphisms $\lambda_{8*}$, $\kappa_{8*}^+$, $\kappa_{8*}^-$, $\tau_*$ and $\sigma_*$ satisfy
\begin{eqnarray*}
\lambda_{8*}\sigma_*&=& \kappa_{8*}^-,\\
\lambda_{8*}\tau_*&=& \kappa_{8*}^+,\\
\kappa_{8*}^-\tau_*\sigma_*&=& \kappa_{8*}^+,\\
\kappa_{8*}^+\tau_*\sigma_*&=& \kappa_{8*}^-,
\end{eqnarray*}
i.e. the symmetric group $S_3$ generated by $\tau_*$ and $\sigma_*$ permutes the Lie algebra representations $\lambda_{8*}$, $\kappa_{8*}^+$ and $\kappa_{8*}^-$, and the following diagram commutes
\[
\xymatrix{
& \mathfrak{spin}(8) \ar[d]^{\lambda_{8*}}&\\
& \mathfrak{so}(8)&\\
\mathfrak{spin}(8)\ar[rr]_{\tau_*\sigma_*}\ar[uur]^{\tau_*}\ar[ur]_{\kappa_{8*}^+}&&\mathfrak{spin}(8)
\ar[uul]_{\sigma_*}\ar[ul]^{\kappa_{8*}^-}\\
}
\]
\end{corol}
\qd
We know that the following diagrams also commute
\[
\xymatrix{
\mathfrak{spin}(8)\ar[r]^{\sigma_*}\ar[d]_{exp} & \mathfrak{spin}(8)\ar[d]^{exp} \\
Spin(8) \ar[r]^{\sigma} & Spin(8)
}
\quad\quad\quad\quad\quad
\xymatrix{
\mathfrak{spin}(8)\ar[r]^{\tau_*}\ar[d]_{exp} & \mathfrak{spin}(8)\ar[d]^{exp} \\
Spin(8) \ar[r]^{\tau} & Spin(8)
}
\]
\begin{corol}
The symmetric group $S_3$ generated by $\sigma$ and $\tau$ permutes the
three representations
$\lambda_8$, $\kappa_8^+$ and $\kappa_8^-$, i.e. the following diagram commutes
\[
\xymatrix{
& Spin(8) \ar[d]^{\lambda_8}&\\
& SO(8)&\\
Spin(8)\ar[uur]^{\tau}\ar[rr]_{\tau\sigma}\ar[ur]_{\kappa_8^+}&&Spin(8)\ar[ul]^{\kappa_8^-}\ar[luu]_{\sigma}\\
}
\]
\end{corol}
\qd
\begin{corol}
We have
\[\{\mbox{$(+1)$-eigenspace of $\tau_*\sigma_*$}\}\cong\mathfrak{spin}({\rm span}(e_2,e_3,e_4,e_5,e_6,e_7,e_8))
=\mathfrak{spin}(7).\]
\end{corol}
{\em Proof}. It is enough to check the effect of $\tau_*\sigma_*$ on the linear generators of $\mathfrak{spin}(8)$.
We have
\begin{eqnarray*}
\tau_*\sigma_*(e_1e_k)&=& -e_1e_k\quad\quad\mbox{for $2\leq k\leq 8$.}\\
\tau_*\sigma_*(e_ie_j)&=& e_ie_j\quad\quad\quad\mbox{for $2\leq i<j\leq 8$.}
\end{eqnarray*}
\qd
\begin{corol}
We have
\[\mathfrak{g}_2=\{\mbox{$(+1)$-eigenspace of $\tau_*$}\}\cap\{\mbox{$(+1)$-eigenspace of $\tau_*\sigma_*^2$}\}\]
and
\[\mathfrak{g}_2=\{\mbox{$(+1)$-eigenspace of $\tau_*$}\}\cap\{\mbox{$(+1)$-eigenspace of $\tau_*\sigma_*$}\}.\]
\end{corol}
{\em Proof}.
Let $X\in\{\mbox{$(+1)$-eigenspace of $\tau_*$}\}\cap\{\mbox{$(+1)$-eigenspace of $\tau_*\sigma_*^2$}\}$, so that
\begin{eqnarray*}
\tau_*(X)&=& X,\\
\tau_*\sigma_*^2(X)&=&X.
\end{eqnarray*}
Then
\begin{eqnarray*}
X&=&\tau_*\sigma_*^2(X)\\
&=&\sigma_*\tau_*(X)\\
&=& \sigma_*(X),
\end{eqnarray*}
which means $X$ is a $(+1)$-eigenvector of $\sigma_*$, thus an element of $\mathfrak{g}_2$.
A dimension count proves the first identity. The second identity is proved similarly.\qd
\begin{corol}
$\sigma_*^2$ provides an isomorphism between the two copies of $\mathfrak{spin}(7)$. Namely,
\[\sigma_*^2(\{\mbox{$(+1)$-eigenspace of $\tau_*\sigma_*$}\})=
\{\mbox{$(+1)$-eigenspace of $\tau_*$}\}\]
\end{corol}
{\em Proof}.
Let $Y\in \{\mbox{$(+1)$-eigenspace of $\tau_*\sigma_*$}\}$, i.e.
\[\tau_*\sigma_*(Y)=Y.\]
Apply $\sigma_*^2$ to both sides, so that
\[\sigma_*^2\tau_*\sigma_*(Y)= \sigma_*^2(Y).\]
Since
\[\sigma_*^2\tau_*\sigma_*=\sigma_*^2\tau_*\sigma_*^2\sigma_*^2=\sigma_*^2\sigma_*\tau_*\sigma_*^2=\tau_*\sigma_*^2\]
we have
\[\tau_*(\sigma_*^2(Y))= \sigma_*^2(Y).\]
This means that $X=\sigma_*^2(Y)$ is a $(+1)$-eigenvector of $\tau_*$. Since $\sigma_*^2$ is an automorphism, the claim is proved. \qd
\subsubsection{Fundamental $Spin(7)$ $4$-form and $G_2$ $3$-form}
Using the metric, we can dualize the endomorphisms $\kappa_{8*}^+(e_ie_j)$, $2\leq i<j \leq 8$ into 2-forms:
\begin{eqnarray*}
f_{2, 3} &=& dx_{1} \wedge dx_{4} + dx_{2} \wedge dx_{3}
+ dx_{5} \wedge dx_{8} + dx_{6} \wedge dx_{7},\\
f_{2, 4} &=& -dx_{1} \wedge dx_{3} + dx_{2} \wedge dx_{4}
+ dx_{5} \wedge dx_{7} - dx_{6} \wedge dx_{8},\\
f_{2, 5} &=& dx_{1} \wedge dx_{6} + dx_{2} \wedge dx_{5}
- dx_{3} \wedge dx_{8} - dx_{4} \wedge dx_{7},\\
f_{2, 6} &=& -dx_{1} \wedge dx_{5} + dx_{2} \wedge dx_{6}
- dx_{3} \wedge dx_{7} + dx_{4} \wedge dx_{8},\\
f_{2, 7} &=& dx_{1} \wedge dx_{8} + dx_{2} \wedge dx_{7}
+ dx_{3} \wedge dx_{6} + dx_{4} \wedge dx_{5},\\
f_{2, 8} &=& -dx_{1} \wedge dx_{7} + dx_{2} \wedge dx_{8}
+ dx_{3} \wedge dx_{5} - dx_{4} \wedge dx_{6},\\
f_{3, 4} &=& dx_{1} \wedge dx_{2} + dx_{3} \wedge dx_{4}
- dx_{5} \wedge dx_{6} - dx_{7} \wedge dx_{8},\\
f_{3, 5} &=& dx_{1} \wedge dx_{7} + dx_{2} \wedge dx_{8}
+ dx_{3} \wedge dx_{5} + dx_{4} \wedge dx_{6},\\
f_{3, 6} &=& -dx_{1} \wedge dx_{8} + dx_{2} \wedge dx_{7}
+ dx_{3} \wedge dx_{6} - dx_{4} \wedge dx_{5},\\
f_{3, 7} &=& -dx_{1} \wedge dx_{5} - dx_{2} \wedge dx_{6}
+ dx_{3} \wedge dx_{7} + dx_{4} \wedge dx_{8},\\
f_{3, 8} &=& dx_{1} \wedge dx_{6} - dx_{2} \wedge dx_{5}
+ dx_{3} \wedge dx_{8} - dx_{4} \wedge dx_{7},\\
f_{4, 5} &=& -dx_{1} \wedge dx_{8} + dx_{2} \wedge dx_{7}
- dx_{3} \wedge dx_{6} + dx_{4} \wedge dx_{5},\\
f_{4, 6} &=& -dx_{1} \wedge dx_{7} - dx_{2} \wedge dx_{8}
+ dx_{3} \wedge dx_{5} + dx_{4} \wedge dx_{6},\\
f_{4, 7} &=& dx_{1} \wedge dx_{6} - dx_{2} \wedge dx_{5}
- dx_{3} \wedge dx_{8} + dx_{4} \wedge dx_{7},\\
f_{4, 8} &=& dx_{1} \wedge dx_{5} + dx_{2} \wedge dx_{6}
+ dx_{3} \wedge dx_{7} + dx_{4} \wedge dx_{8},\\
f_{5, 6} &=& dx_{1} \wedge dx_{2} - dx_{3} \wedge dx_{4}
+ dx_{5} \wedge dx_{6} - dx_{7} \wedge dx_{8},\\
f_{5, 7} &=& dx_{1} \wedge dx_{3} + dx_{2} \wedge dx_{4}
+ dx_{5} \wedge dx_{7} + dx_{6} \wedge dx_{8},\\
f_{5, 8} &=& -dx_{1} \wedge dx_{4} + dx_{2} \wedge dx_{3}
+ dx_{5} \wedge dx_{8} - dx_{6} \wedge dx_{7},\\
f_{6, 7} &=& -dx_{1} \wedge dx_{4} + dx_{2} \wedge dx_{3}
- dx_{5} \wedge dx_{8} + dx_{6} \wedge dx_{7},\\
f_{6, 8} &=& -dx_{1} \wedge dx_{3} - dx_{2} \wedge dx_{4}
+ dx_{5} \wedge dx_{7} + dx_{6} \wedge dx_{8},\\
f_{7, 8} &=& dx_{1} \wedge dx_{2} - dx_{3} \wedge dx_{4}
- dx_{5} \wedge dx_{6} + dx_{7} \wedge dx_{8}.
\end{eqnarray*}
We can form the $Spin(7)$-invariant 4-form
\begin{eqnarray*}
\Omega&=&\sum_{2\leq i<j\leq 8}f_{i,j}\wedge f_{i,j}\\
&=&6(
- (dx_{1}\wedge dx_{2}\wedge dx_{3}\wedge dx_{4})- (dx_{1}\wedge dx_{2}\wedge dx_{5}\wedge dx_{6})+ (dx_{1}\wedge dx_{7}\wedge dx_{2}\wedge dx_{8})\\
&&- (dx_{1}\wedge dx_{7}\wedge dx_{3}\wedge dx_{5})
+ (dx_{1}\wedge dx_{7}\wedge dx_{4}\wedge dx_{6})+ (dx_{1}\wedge dx_{8}\wedge dx_{3}\wedge dx_{6})\\
&&+ (dx_{1}\wedge dx_{8}\wedge dx_{4}\wedge dx_{5}) + (dx_{2}\wedge dx_{3}\wedge dx_{5}\wedge dx_{8})
+ (dx_{2}\wedge dx_{3}\wedge dx_{6}\wedge dx_{7})\\
&&+ (dx_{2}\wedge dx_{7}\wedge dx_{4}\wedge dx_{5})- (dx_{2}\wedge dx_{8}\wedge dx_{4}\wedge dx_{6})- (dx_{3}\wedge dx_{6}\wedge dx_{4}\wedge dx_{5})\\
&&+ (dx_{3}\wedge dx_{7}\wedge dx_{4}\wedge dx_{8})- (dx_{5}\wedge dx_{6}\wedge dx_{7}\wedge dx_{8})
)
\end{eqnarray*}
whose square is a multiple of the 8-dimensional volume form
\[\Omega\wedge\Omega=504\,\, dx_1\wedge dx_2\wedge dx_3\wedge dx_4\wedge dx_5\wedge dx_6\wedge dx_7\wedge dx_8,\]
thus showing that $\Omega$ is non-degenerate.
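The coefficient $504$ can be cross-checked by a short computation. In the sketch below (our own encoding), each term of $\Omega$ is stored as an index $4$-tuple together with its coefficient, read off from the displayed expression with the overall factor $6$ absorbed into the signs, and $\Omega\wedge\Omega$ is expanded by summing over disjoint pairs of terms with the sign of the merging permutation.

```python
def parity(seq):
    # sign of the permutation that sorts seq (counts inversions)
    inv = sum(1 for i in range(len(seq))
                for j in range(i + 1, len(seq)) if seq[i] > seq[j])
    return -1 if inv % 2 else 1

# the 14 terms of Omega, factor 6 included in the coefficients:
terms = [((1,2,3,4), -6), ((1,2,5,6), -6), ((1,7,2,8),  6), ((1,7,3,5), -6),
         ((1,7,4,6),  6), ((1,8,3,6),  6), ((1,8,4,5),  6), ((2,3,5,8),  6),
         ((2,3,6,7),  6), ((2,7,4,5),  6), ((2,8,4,6), -6), ((3,6,4,5), -6),
         ((3,7,4,8),  6), ((5,6,7,8), -6)]

total = 0
for a, ca in terms:
    for b, cb in terms:
        if set(a) & set(b):
            continue  # a wedge repeating some dx_i vanishes
        total += ca * cb * parity(a + b)

print(total)  # -> 504, the coefficient of dx_1 ^ ... ^ dx_8
```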
By contracting $\Omega$ with $\partial/\partial x_1$, i.e. by integrating out $dx_1$, we get the
$G_2$-invariant 3-form
\begin{eqnarray*}
\varphi
&=&6(
- (dx_{2}\wedge dx_{3}\wedge dx_{4})- (dx_{2}\wedge dx_{5}\wedge dx_{6})- (dx_{2}\wedge dx_{7}\wedge dx_{8})\\
&&- (dx_{3}\wedge dx_{5}\wedge dx_{7})
+ (dx_{4}\wedge dx_{6}\wedge dx_{7})+ (dx_{3}\wedge dx_{6}\wedge dx_{8})+ (dx_{4}\wedge dx_{5}\wedge dx_{8})
).
\end{eqnarray*}
\subsubsection{$\sigma$ and $\tau$ are outer automorphisms}
Now, we will show that $\sigma$ and $\tau$ are outer automorphisms
by showing that they permute the non-trivial central elements of $Spin(8)$, namely
\[-1, {\rm vol}_8, -{\rm vol}_8.\]
Recall that
\[{\rm vol}_8^2=1.\]
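Indeed, since the generators $e_i$ anticommute and satisfy $e_i^2=-1$, one checks directly that
\[{\rm vol}_8^2=(e_1\cdots e_8)(e_1\cdots e_8)=(-1)^{8\cdot 7/2}\,e_1^2\cdots e_8^2=(-1)^{28}(-1)^8=1.\]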
Consider, for instance,
\begin{eqnarray*}
\sigma(exp(t e_1e_2))
&=& \sigma(\cos(t)+\sin(t) e_1e_2)\\
&=& exp(\sigma_*(te_1e_2))\\
&=& exp\left({t\over 2} ( -e_1e_2 - e_3e_4 - e_5e_6 - e_7e_8)\right)\\
&=& exp\left(-{te_1e_2\over 2} \right) exp\left(-{te_3e_4\over 2} \right)
exp\left(-{te_5e_6\over 2} \right) exp\left(-{te_7e_8\over 2} \right)\\
&=& (\cos(t/2)-\sin(t/2) e_1e_2) (\cos(t/2)-\sin(t/2) e_3e_4)
(\cos(t/2)-\sin(t/2) e_5e_6) (\cos(t/2)-\sin(t/2) e_7e_8) ,
\end{eqnarray*}
so that
\begin{eqnarray*}
\sigma(e_1e_2)
&=&\sigma(\cos(\pi/2)+\sin(\pi/2) e_1e_2)\\
&=& (\cos(\pi/4)-\sin(\pi/4) e_1e_2) (\cos(\pi/4)-\sin(\pi/4) e_3e_4)
(\cos(\pi/4)-\sin(\pi/4) e_5e_6) (\cos(\pi/4)-\sin(\pi/4) e_7e_8) .
\end{eqnarray*}
Note that the calculations are carried out in $Cl_8$ where the exponentials converge.
Similarly,
\begin{eqnarray*}
\sigma(e_3e_4)
&=& (\cos(\pi/4)+\sin(\pi/4) e_1e_2) (\cos(\pi/4)+\sin(\pi/4) e_3e_4)
(\cos(\pi/4)-\sin(\pi/4) e_5e_6) (\cos(\pi/4)-\sin(\pi/4) e_7e_8) .
\\
\sigma(e_5e_6)
&=& (\cos(\pi/4)+\sin(\pi/4) e_1e_2) (\cos(\pi/4)-\sin(\pi/4) e_3e_4)
(\cos(\pi/4)+\sin(\pi/4) e_5e_6) (\cos(\pi/4)-\sin(\pi/4) e_7e_8) .
\\
\sigma(e_7e_8)
&=& (\cos(\pi/4)+\sin(\pi/4) e_1e_2) (\cos(\pi/4)-\sin(\pi/4) e_3e_4)
(\cos(\pi/4)-\sin(\pi/4) e_5e_6) (\cos(\pi/4)+\sin(\pi/4) e_7e_8) .
\end{eqnarray*}
Now consider
\begin{eqnarray*}
\tau(exp(t e_1e_2))
&=& \tau(\cos(t)+\sin(t) e_1e_2)\\
&=& exp(\tau_*(te_1e_2))\\
&=& exp\left({t\over 2} ( e_1e_2 + e_3e_4 + e_5e_6 + e_7e_8)\right)\\
&=& exp\left({te_1e_2\over 2} \right) exp\left({te_3e_4\over 2} \right)
exp\left({te_5e_6\over 2} \right) exp\left({te_7e_8\over 2} \right)\\
&=& (\cos(t/2)+\sin(t/2) e_1e_2) (\cos(t/2)+\sin(t/2) e_3e_4)
(\cos(t/2)+\sin(t/2) e_5e_6) (\cos(t/2)+\sin(t/2) e_7e_8) ,
\end{eqnarray*}
so that
\begin{eqnarray*}
\tau(e_1e_2)
&=&\tau(\cos(\pi/2)+\sin(\pi/2) e_1e_2)\\
&=& (\cos(\pi/4)+\sin(\pi/4) e_1e_2) (\cos(\pi/4)+\sin(\pi/4) e_3e_4)
(\cos(\pi/4)+\sin(\pi/4) e_5e_6) (\cos(\pi/4)+\sin(\pi/4) e_7e_8) .
\end{eqnarray*}
Similarly for all other generators $e_ie_j$ of $\mathfrak{spin}(8)\subset Cl_8^0.$
\begin{corol}\label{cor: sigma and tau on center}
The automorphisms $\tau $ and $\sigma$ are outer automorphisms, of order 2 and 3 respectively, since they permute the elements of the center of $Spin(8)$, i.e.
\begin{eqnarray*}
\sigma(-1)
&=& {\rm vol}_8,\\
\sigma({\rm vol}_8)
&=& -{\rm vol}_8,\\
\sigma(-{\rm vol}_8)
&=& -1,\\
\tau(-1)
&=& {\rm vol}_8,\\
\tau({\rm vol}_8) &=& -1.
\end{eqnarray*}
\end{corol}
{\em Proof}.
Consider
{\footnotesize
\begin{eqnarray*}
\sigma(-1)
&=& \sigma(e_1e_2e_1e_2)\\
&=& \sigma(e_1e_2)\sigma(e_1e_2)\\
&=& [(\cos(\pi/4)-\sin(\pi/4) e_1e_2) (\cos(\pi/4)-\sin(\pi/4) e_3e_4) \\
&& (\cos(\pi/4)-\sin(\pi/4) e_5e_6) (\cos(\pi/4)-\sin(\pi/4) e_7e_8)]^2 \\
&=& (\cos(\pi/4)-\sin(\pi/4) e_1e_2)^2 (\cos(\pi/4)-\sin(\pi/4) e_3e_4)^2 \\
&& (\cos(\pi/4)-\sin(\pi/4) e_5e_6)^2 (\cos(\pi/4)-\sin(\pi/4) e_7e_8)^2 \\
&=& (\cos^2(\pi/4)-\sin^2(\pi/4)-2\sin(\pi/4)\cos(\pi/4) e_1e_2)
(\cos^2(\pi/4)-\sin^2(\pi/4)-2\sin(\pi/4)\cos(\pi/4) e_3e_4) \\
&& (\cos^2(\pi/4)-\sin^2(\pi/4)-2\sin(\pi/4)\cos(\pi/4) e_5e_6)
(\cos^2(\pi/4)-\sin^2(\pi/4)-2\sin(\pi/4)\cos(\pi/4) e_7e_8) \\
&=& (\cos(\pi/2)-\sin(\pi/2) e_1e_2) (\cos(\pi/2)-\sin(\pi/2) e_3e_4)
(\cos(\pi/2)-\sin(\pi/2) e_5e_6) (\cos(\pi/2)-\sin(\pi/2) e_7e_8) \\
&=& e_1e_2e_3e_4e_5e_6e_7e_8,\\
\sigma({\rm vol}_8)
&=& \sigma(e_1e_2)\sigma(e_3e_4)\sigma(e_5e_6)\sigma(e_7e_8)\\
&=& (\cos(\pi/4)-\sin(\pi/4) e_1e_2) (\cos(\pi/4)-\sin(\pi/4) e_3e_4)
(\cos(\pi/4)-\sin(\pi/4) e_5e_6) (\cos(\pi/4)-\sin(\pi/4) e_7e_8) \\
&&(\cos(\pi/4)+\sin(\pi/4) e_1e_2) (\cos(\pi/4)+\sin(\pi/4) e_3e_4)
(\cos(\pi/4)-\sin(\pi/4) e_5e_6) (\cos(\pi/4)-\sin(\pi/4) e_7e_8)\\
&& (\cos(\pi/4)+\sin(\pi/4) e_1e_2) (\cos(\pi/4)-\sin(\pi/4) e_3e_4)
(\cos(\pi/4)+\sin(\pi/4) e_5e_6) (\cos(\pi/4)-\sin(\pi/4) e_7e_8)\\
&& (\cos(\pi/4)+\sin(\pi/4) e_1e_2) (\cos(\pi/4)-\sin(\pi/4) e_3e_4)
(\cos(\pi/4)-\sin(\pi/4) e_5e_6) (\cos(\pi/4)+\sin(\pi/4) e_7e_8)\\
&=& (\cos(\pi/4)+\sin(\pi/4) e_1e_2)^2 (\cos(\pi/4)-\sin(\pi/4) e_3e_4)^2
(\cos(\pi/4)-\sin(\pi/4) e_5e_6)^2 (\cos(\pi/4)-\sin(\pi/4) e_7e_8)^2\\
&=& (\cos(\pi/2) + \sin(\pi/2) e_1e_2) (\cos(\pi/2) - \sin(\pi/2) e_3e_4)
(\cos(\pi/2) - \sin(\pi/2) e_5e_6) (\cos(\pi/2) - \sin(\pi/2) e_7e_8)\\
&=& -e_1e_2e_3e_4e_5e_6e_7e_8,\\
\sigma(-{\rm vol}_8)
&=& \sigma(-1)\sigma(e_1e_2e_3e_4e_5e_6e_7e_8)\\
&=& (e_1e_2e_3e_4e_5e_6e_7e_8)(-e_1e_2e_3e_4e_5e_6e_7e_8)\\
&=& -1.
\end{eqnarray*}
On the other hand, we also have
\begin{eqnarray*}
\tau(-1)
&=& \tau(e_1e_2e_1e_2)\\
&=& \tau(e_1e_2)\tau(e_1e_2)\\
&=& [(\cos(\pi/4)+\sin(\pi/4) e_1e_2) (\cos(\pi/4)+\sin(\pi/4) e_3e_4) \\
&& (\cos(\pi/4)+\sin(\pi/4) e_5e_6) (\cos(\pi/4)+\sin(\pi/4) e_7e_8)]^2 \\
&=& (\cos(\pi/4)+\sin(\pi/4) e_1e_2)^2 (\cos(\pi/4)+\sin(\pi/4) e_3e_4)^2 \\
&& (\cos(\pi/4)+\sin(\pi/4) e_5e_6)^2 (\cos(\pi/4)+\sin(\pi/4) e_7e_8)^2 \\
&=& (\cos^2(\pi/4)-\sin^2(\pi/4)+2\sin(\pi/4)\cos(\pi/4) e_1e_2)
(\cos^2(\pi/4)-\sin^2(\pi/4)+2\sin(\pi/4)\cos(\pi/4) e_3e_4) \\
&& (\cos^2(\pi/4)-\sin^2(\pi/4)+2\sin(\pi/4)\cos(\pi/4) e_5e_6)
(\cos^2(\pi/4)-\sin^2(\pi/4)+2\sin(\pi/4)\cos(\pi/4) e_7e_8) \\
&=& (\cos(\pi/2)+\sin(\pi/2) e_1e_2) (\cos(\pi/2)+\sin(\pi/2) e_3e_4)
(\cos(\pi/2)+\sin(\pi/2) e_5e_6) (\cos(\pi/2)+\sin(\pi/2) e_7e_8) \\
&=& e_1e_2e_3e_4e_5e_6e_7e_8,\\
\tau({\rm vol}_8) &=& -1.
\end{eqnarray*}
}
\qd
\subsubsection{Octonions}
In this subsection, we recover a multiplication table of the normed division algebra of octonions using the $Spin(8)$ representations. The idea is to consider Clifford multiplication and the three real representations of $Spin(8)$ (see \cite{Baez,Lounesto}).
Consider the basis of the positive real spinors given by
\[\beta^+=\{u_0-u_{15},iu_0+iu_{15},u_3+u_{12},-iu_3+iu_{12},-u_5+u_{10},iu_5+iu_{10},u_6+u_9,iu_6-iu_9\}.\]
Let us consider Clifford multiplication as a bilinear map
\[\mathbb R^8\times \tilde{\Delta}_8^+\rightarrow \tilde{\Delta}_8^-.\]
In this subsection, let $\{v_0,\dots,v_7\}$ denote the standard ordered basis of $\mathbb R^8$.
The Clifford multiplication table is the following:
\[
\begin{array}{|r|rrrrrrrr|}
\hline
& u_0-u_{15}&iu_0+iu_{15}&u_3+u_{12}&-iu_3+iu_{12}&-u_5+u_{10}&iu_5+iu_{10}&u_6+u_9&iu_6-iu_9\\
\hline
v_0& iu_{1}-iu_{14} & -u_{1}-u_{14} & iu_{2}+iu_{13} & -u_{2}+u_{13} & iu_{4}-iu_{11} & -u_{4}-u_{11} & iu_{7}+iu_{8} & -u_{7}+u_{8}\\
v_1&u_{1}+u_{14} & iu_{1}-iu_{14} & -u_{2}+u_{13} & -iu_{2}-iu_{13} & -u_{4}-u_{11} & -iu_{4}+iu_{11} & u_{7}-u_{8} & iu_{7}+iu_{8} \\
v_2&-iu_{2}-iu_{13} & u_{2}-u_{13} & iu_{1}-iu_{14} & -u_{1}-u_{14} & iu_{7}+iu_{8} & -u_{7}+u_{8} & -iu_{4}+iu_{11} & u_{4}+u_{11} \\
v_3&-u_{2}+u_{13} & -iu_{2}-iu_{13} & -u_{1}-u_{14} & -iu_{1}+iu_{14} & u_{7}-u_{8} & iu_{7}+iu_{8} & u_{4}+u_{11} & iu_{4}-iu_{11} \\
v_4&iu_{4}-iu_{11} & -u_{4}-u_{11} & iu_{7}+iu_{8} & -u_{7}+u_{8} & -iu_{1}+iu_{14} & u_{1}+u_{14} & -iu_{2}-iu_{13} & u_{2}-u_{13} \\
v_5&u_{4}+u_{11} & iu_{4}-iu_{11} & u_{7}-u_{8} & iu_{7}+iu_{8} & u_{1}+u_{14} & iu_{1}-iu_{14} & u_{2}-u_{13} & iu_{2}+iu_{13} \\
v_6&-iu_{8}-iu_{7} & u_{8}-u_{7} & -iu_{11}+iu_{4} & u_{11}+u_{4} & -iu_{13}-iu_{2} & u_{13}-u_{2} & -iu_{14}+iu_{1} & u_{14}+u_{1} \\
v_7&-u_{8}+u_{7} & -iu_{8}-iu_{7} & -u_{11}-u_{4} & -iu_{11}+iu_{4} & -u_{13}+u_{2} & -iu_{13}-iu_{2} & -u_{14}-u_{1} & -iu_{14}+iu_{1} \\
\hline
\end{array}
\]
Now let
\begin{eqnarray*}
\beta^-&=&\{
iu_{1}-iu_{14},
-u_{1}-u_{14},
iu_{2}+iu_{13},
-u_{2}+ u_{13},
iu_{4}-iu_{11},
-u_{4}- u_{11},
iu_{7}+iu_{8},
-u_{7}+u_{8}\}
.
\end{eqnarray*}
By labeling the elements of the ordered bases $\beta^+=\{\psi_0,\dots,\psi_7\}$ and $\beta^-=\{\phi_0,\dots,\phi_7\}$,
the Clifford multiplication table now reads as follows:
\[
\begin{array}{|r|rrrrrrrr|}
\hline
& \psi_0 & \psi_1 & \psi_2 & \psi_3 & \psi_4 & \psi_5 & \psi_6 & \psi_7 \\
\hline
v_0&\phi_0 &\phi_1 & \phi_2 & \phi_3 & \phi_4& \phi_5 & \phi_6 & \phi_7\\
v_1&-\phi_1 &\phi_0 & \phi_3 & -\phi_2 & \phi_5 & -\phi_4& -\phi_7 & \phi_6 \\
v_2& -\phi_2 & -\phi_3 &\phi_0 &\phi_1 & \phi_6 & \phi_7 & -\phi_4& -\phi_5 \\
v_3& \phi_3 & -\phi_2 &\phi_1 & -\phi_0 & -\phi_7 & \phi_6 & -\phi_5 & \phi_4\\
v_4& \phi_4& \phi_5 & \phi_6 & \phi_7 & -\phi_0 &-\phi_1 & -\phi_2 & -\phi_3 \\
v_5& -\phi_5 & \phi_4& -\phi_7 & \phi_6 &-\phi_1 &\phi_0 & -\phi_3 & \phi_2 \\
v_6&-\phi_6 &\phi_7 & \phi_4 & -\phi_5 & -\phi_2& \phi_3 & \phi_0 & -\phi_1 \\
v_7&-\phi_7 &-\phi_6 & \phi_5 & \phi_4 & -\phi_3& -\phi_2 & \phi_1 & \phi_0\\
\hline
\end{array}
\]
We can recover the multiplication table of the octonions by identifying $\mathbb R^8$, $ \tilde{\Delta}_8^+$ and $\tilde{\Delta}_8^-$ with a single vector space $\mathbb O={\rm span}\{\hat {e}_0,\hat {e}_1,\dots, \hat {e}_7\}$ in the following way. We identify $v_0$, $\psi_0$ and $\phi_0$ with the identity $\hat {e}_0$ of $\mathbb O$, and we identify $\psi_i$ with $\hat {e}_i$. In this way, we have $\phi_i=v_0 \psi_i=\hat {e}_0\cdot \hat {e}_i=\hat {e}_i$. We also have $v_1\psi_0=-\phi_1$, so that $v_1$ should be identified with $-\hat {e}_1$. In the same way,
$v_2$, $v_3$, $v_4$, $v_5$, $v_6$ and $v_7$ should be identified with $-\hat {e}_2$, $\hat {e}_3$, $\hat {e}_4$, $-\hat {e}_5$, $-\hat {e}_6$ and $-\hat {e}_7$, respectively. The multiplication table (of the octonions) then reads as follows
\[\begin{array}{|r|rrrrrrrr|}
\hline
& \hat {e}_0 & \hat {e}_1 & \hat {e}_2 & \hat {e}_3 & \hat {e}_4 & \hat {e}_5 & \hat {e}_6 & \hat {e}_7 \\
\hline
\hat {e}_0&\hat {e}_0 &\hat {e}_1 & \hat {e}_2 & \hat {e}_3 & \hat {e}_4& \hat {e}_5 & \hat {e}_6 & \hat {e}_7\\
\hat {e}_1&\hat {e}_1 &-\hat {e}_0 & -\hat {e}_3 & \hat {e}_2 & -\hat {e}_5 & \hat {e}_4& \hat {e}_7 & -\hat {e}_6 \\
\hat {e}_2& \hat {e}_2 & \hat {e}_3 &-\hat {e}_0 &-\hat {e}_1 & -\hat {e}_6 & -\hat {e}_7 & \hat {e}_4& \hat {e}_5 \\
\hat {e}_3& \hat {e}_3 & -\hat {e}_2 &\hat {e}_1 & -\hat {e}_0 & -\hat {e}_7 & \hat {e}_6 & -\hat {e}_5 & \hat {e}_4\\
\hat {e}_4& \hat {e}_4& \hat {e}_5 & \hat {e}_6 & \hat {e}_7 & -\hat {e}_0 &-\hat {e}_1 & -\hat {e}_2 & -\hat {e}_3 \\
\hat {e}_5& \hat {e}_5 & -\hat {e}_4& \hat {e}_7 & -\hat {e}_6 &\hat {e}_1 &-\hat {e}_0 & \hat {e}_3 & -\hat {e}_2 \\
\hat {e}_6&\hat {e}_6 &-\hat {e}_7 & -\hat {e}_4 & \hat {e}_5 & \hat {e}_2& -\hat {e}_3 & -\hat {e}_0 & \hat {e}_1 \\
\hat {e}_7&\hat {e}_7 &\hat {e}_6 & -\hat {e}_5 & -\hat {e}_4 & \hat {e}_3& \hat {e}_2 & -\hat {e}_1 & -\hat {e}_0\\
\hline
\end{array}
\]
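As a consistency check, the table can be encoded and tested numerically for the composition property $|xy|=|x|\,|y|$ characterizing a normed algebra. In the sketch below, the encoding T[i][j] = (s, k), meaning $\hat e_i\cdot\hat e_j=s\,\hat e_k$, is our own.

```python
import math
import random

# T[i][j] = (s, k) encodes the table above: e_i * e_j = s * e_k
T = [
    [( 1,0),( 1,1),( 1,2),( 1,3),( 1,4),( 1,5),( 1,6),( 1,7)],
    [( 1,1),(-1,0),(-1,3),( 1,2),(-1,5),( 1,4),( 1,7),(-1,6)],
    [( 1,2),( 1,3),(-1,0),(-1,1),(-1,6),(-1,7),( 1,4),( 1,5)],
    [( 1,3),(-1,2),( 1,1),(-1,0),(-1,7),( 1,6),(-1,5),( 1,4)],
    [( 1,4),( 1,5),( 1,6),( 1,7),(-1,0),(-1,1),(-1,2),(-1,3)],
    [( 1,5),(-1,4),( 1,7),(-1,6),( 1,1),(-1,0),( 1,3),(-1,2)],
    [( 1,6),(-1,7),(-1,4),( 1,5),( 1,2),(-1,3),(-1,0),( 1,1)],
    [( 1,7),( 1,6),(-1,5),(-1,4),( 1,3),( 1,2),(-1,1),(-1,0)],
]

def mult(x, y):
    # bilinear extension of the basis multiplication table
    z = [0.0] * 8
    for i in range(8):
        for j in range(8):
            s, k = T[i][j]
            z[k] += s * x[i] * y[j]
    return z

def norm(x):
    return math.sqrt(sum(c * c for c in x))

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(8)]
y = [random.uniform(-1, 1) for _ in range(8)]
print(abs(norm(mult(x, y)) - norm(x) * norm(y)) < 1e-12)  # -> True
```

Non-associativity shows up already on basis elements: in this table $(\hat e_1\hat e_2)\hat e_4=\hat e_7$ while $\hat e_1(\hat e_2\hat e_4)=-\hat e_7$.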
{\bf Remark}. One can actually do the same in the case of $\mathbb{R}^4$, $\tilde\Delta_4^+$ and $\tilde\Delta_4^-$ to
recover the quaternion multiplication table.
\subsection{Vector fields on spheres}
Classical results of Hurwitz, Radon and Adams \cite{Hurwitz, Adams} tell us the maximal number of independent vector fields a
sphere can admit.
In this subsection, we will give explicit expressions, using Clifford algebras and the binary code,
for orthogonal, linearly independent vector fields on spheres (compare with \cite{Piccinni,Ognikyan} for recent work). We follow the idea in \cite{Lawson} using Clifford multiplication, but we will multiply by elements of $\mathfrak{spin}(r)$.
The idea is as follows: if $\mathbb{R}^N$ is a representation of $Cl_r^0$, we can construct the following orthogonal vector fields on the sphere. For every $Z\in S^{N-1}$ and $2\leq j\leq r$, $V_{j-1}(Z)=e_1e_jZ$ are $r-1$ orthogonal linearly independent vectors tangent to the sphere at $Z$, so that one can define $r-1$ orthogonal vector fields $V_1,\dots,V_{r-1}$ on the sphere. By \cite{Adams}, if $r$ is the maximum integer such that $\mathbb{R}^N$ is a representation of $Cl_r^0$, then the vector fields constructed before form a maximal set of independent vector fields.
Thus, let us suppose that $Cl_r^0$ is represented
on $\mathbb{R}^N$, for some $N\in\mathbb{N}$,
in such a way that each bivector $e_ie_j$ is mapped to an antisymmetric endomorphism $J_{ij}$ satisfying
\begin{equation}
J_{ij}^2 = -{\rm Id}_{\mathbb{R}^N}.\label{eq:almost-complex-structures}
\end{equation}
\begin{itemize}
\item
If $r\not\equiv 0
\,\,\,({\rm mod}\,\,\,\, 4)$, $r>1$, $\mathbb{R}^N$ decomposes into a sum of irreducible representations of $Cl_r^0$.
Since this algebra is simple, such irreducible representations can only be trivial or
copies of the standard representation
$\tilde\Delta_r$ of $Cl_r^0$ (cf. \cite{Lawson}). Due to
\rf{eq:almost-complex-structures}, there are no trivial summands in such a decomposition so that
\begin{eqnarray*}
\mathbb{R}^N
&=& \underbrace{\tilde\Delta_r\oplus\cdots\oplus \tilde\Delta_r}_{m
\,\,\,times} .
\end{eqnarray*}
By restricting to $\mathfrak{spin}(r)\subset Cl_r^0$,
\[\mathbb{R}^N =\tilde\Delta_r \otimes_{\mathbb{R}}\mathbb{R}^m\]
we see that $\mathfrak{spin}(r)$ has an isomorphic image
\[\widehat{\mathfrak{spin}(r)}=\mathfrak{spin}(r)\otimes \{{\rm Id}_{\mathbb{R}^m}\}\subset
\mathfrak{so}(d_rm),\]
which is a subalgebra of $\mathfrak{so}(d_rm)$.
Note that
\[J_{ij}=[\kappa_r(e_ie_j)\otimes{\rm Id}_{\mathbb{R}^m}]\]
for $1\leq i<j\leq r$.
\item
If, on the other hand, $r\equiv 0 \,\,\,({\rm mod}\,\,\,\, 4)$,
\[\tilde\Delta_r = \tilde{\Delta}_r^+ \oplus \tilde{\Delta}_r^-,\]
the sum of two inequivalent irreducible representations, and
\begin{eqnarray*}
\mathbb{R}^N
&=& \tilde\Delta_r^+\otimes_{\mathbb{R}} \mathbb{R}^{m_1} \oplus \tilde\Delta_r^-\otimes_{\mathbb{R}} \mathbb{R}^{m_2}
\end{eqnarray*}
as a representation of $\mathfrak{spin}(r)\subset Cl_r^0$, and
we see that $\mathfrak{spin}(r)$ has an isomorphic image
\[\widehat{\mathfrak{spin}(r)}=\{\kappa_n^+(g)\otimes ({\rm Id}_{\mathbb{R}^{m_1}}\oplus
\mathbf{0}_{m_2\times m_2})\oplus
\kappa_n^-(g)\otimes(\mathbf{0}_{m_1\times m_1}\oplus {\rm Id}_{\mathbb{R}^{m_2}})\,|\,g\in \mathfrak{spin}(r)\}
\subset
\mathfrak{so}(d_rm_1+d_rm_2).\]
Note that
\[J_{ij}=[\kappa_r^+(e_ie_j)\otimes{\rm Id}_{\mathbb{R}^{m_1}}]\oplus [\kappa_r^-(e_ie_j)\otimes{\rm Id}_{\mathbb{R}^{m_2}}]\]
for $1\leq i<j\leq r$.
\end{itemize}
Given a point $Z$ in the sphere of $\tilde\Delta_r\otimes_{\mathbb{R}}\mathbb{R}^m$ or
$\tilde\Delta_r^\pm\otimes_{\mathbb{R}}\mathbb{R}^m$, the corresponding values of the
vector fields at $Z$ will be given by
\[ [\kappa_r(e_1e_2)\otimes{\rm Id}_{m}](Z),...,[\kappa_r(e_1e_r)\otimes{\rm Id}_{m}](Z),\]
or
\[[\kappa_r^\pm(e_1e_2)\otimes{\rm Id}_{m}] (Z),...,[\kappa_r^\pm(e_1e_r)\otimes{\rm Id}_{m}](Z),\]
respectively, where ${\rm Id}_{m}:={\rm Id}_{\mathbb{R}^m}$.
\subsubsection{Calculations in $\Delta_r$}
First recall that
\begin{eqnarray*}
e_{1}u_b
&=& i u_{b+(-1)^{b_0}}.
\end{eqnarray*}
Now, if $p=2\leq r$, so that $j=[{p+1\over 2}]= 1$, then
\begin{eqnarray*}
e_1e_2\cdot u_a
&=& i(-1)^{a_{0}} u_{a},
\end{eqnarray*}
if $3\leq p\leq 2[{r\over 2}]$, so that $j=[{p+1\over 2}]\geq 2$, then
\begin{eqnarray*}
e_1e_p\cdot u_a
&=& i^{1-p}(-1)^{2j-1+\sum_{s=0}^{j-2}a_s+a_{j-1}(-2j+p+1)} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}
\end{eqnarray*}
and if $p=r=2k+1$,
\begin{eqnarray*}
e_1 e_{r}\cdot u_a
&=& (-1)^{[r/2]+1+\sum_{l=0}^{[r/2]-1}a_l} u_{a+(-1)^{a_0}} .
\end{eqnarray*}
We also have the following expressions for the real and quaternionic structures:
For $r\equiv 0,1,4,5 \,\,(\mod 8)$ and $q=[{r/4}]$,
\begin{eqnarray*}
\gamma_r(u_a)
&=&(-i)^q (-1)^{\sum_{t=1}^qa_{2t-1}} u_{2^{2q}-1-a}.
\end{eqnarray*}
For $r\equiv 2,3,6,7 \,\,(\mod 8)$ and $q=[{r/4}]$,
\begin{eqnarray*}
\gamma_r(u_a)
&=&(-i)^{q+1} (-1)^{\sum_{t=0}^qa_{2t}} u_{2^{2q+1}-1-a}.
\end{eqnarray*}
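These index formulas are exactly the advertised binary encoding: a basis spinor $u_a$ is stored as the integer $a$, and the action of $e_1e_p$ only flips binary digits of $a$ and multiplies by a power of $i$ and a sign. The following sketch (function and variable names are ours, not from the paper) implements the three displayed cases; powers of $i$ are taken from a lookup table so the arithmetic stays exact.

```python
# Sketch of the displayed formulas for e_1 e_p . u_a.  A basis vector u_a is
# encoded by the integer a; bit(a, s) is the binary digit a_s.
I_POW = [1, 1j, -1, -1j]   # exact powers of i: i^k = I_POW[k % 4]

def bit(a, s):
    return (a >> s) & 1

def e1ep_on_ua(r, p, a):
    """Return (c, b) such that e_1 e_p . u_a = c * u_b."""
    if p == 2:
        # e_1 e_2 . u_a = i (-1)^{a_0} u_a
        return I_POW[1] * (-1) ** bit(a, 0), a
    j = (p + 1) // 2
    if 3 <= p <= 2 * (r // 2):
        # e_1 e_p . u_a = i^{1-p} (-1)^{2j-1+sum_{s<j-1} a_s + a_{j-1}(p+1-2j)}
        #                 u_{a + (-1)^{a_{j-1}} 2^{j-1} + (-1)^{a_0}}
        sign = (-1) ** (2 * j - 1 + sum(bit(a, s) for s in range(j - 1))
                        + bit(a, j - 1) * (p + 1 - 2 * j))
        b = a + (-1) ** bit(a, j - 1) * 2 ** (j - 1) + (-1) ** bit(a, 0)
        return I_POW[(1 - p) % 4] * sign, b
    if p == r and r % 2 == 1:
        # e_1 e_r . u_a = (-1)^{[r/2]+1+sum_l a_l} u_{a+(-1)^{a_0}}
        sign = (-1) ** (r // 2 + 1 + sum(bit(a, l) for l in range(r // 2)))
        return sign, a + (-1) ** bit(a, 0)
    raise ValueError("p out of range")
```

For example, with $r=10$ one gets $e_1e_3\cdot u_3=-u_0$ and $e_1e_2\cdot u_0=i\,u_0$, matching the coefficients of $u_0$ in the fields $V_2$ and $V_1$ of the $S^{31}$ example below.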
\subsubsection{The vector fields}
Due to the coincidence of dimensions of the real Spin representations
\[d_{8q+3}=d_{8q+4},\]
and
\[d_{8q+5}=d_{8q+6}=d_{8q+7}=d_{8q+8},\]
we only need to consider the cases $r\equiv 0,1,2,4 \,\,(\mod 8)$.
For example, if $\mathbb{R}^N$ is a representation space of $Cl_{8q+3}^0$, then $\mathbb{R}^N$ is also a representation of
$Cl_{8q+4}^0$ and therefore $8q+3$ is not maximal.
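The maximality just invoked can be cross-checked against the classical Radon--Hurwitz count (a standard fact, not proved in this paper): writing $N=2^{4a+b}\,u$ with $u$ odd and $0\le b\le 3$, the sphere $S^{N-1}$ carries exactly $\rho(N)-1$ pointwise linearly independent vector fields, where $\rho(N)=8a+2^b$. A minimal sketch:

```python
def radon_hurwitz(n):
    """rho(n) = 8a + 2^b, where n = 2^{4a+b} * (odd), 0 <= b <= 3."""
    c = 0
    while n % 2 == 0:   # c = 2-adic valuation of n
        n //= 2
        c += 1
    a, b = divmod(c, 4)
    return 8 * a + 2 ** b
```

For instance $\rho(32)-1=9$, the number of fields constructed on $S^{31}$ below, and $\rho(16)-1=8$ on $S^{15}$.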
Let us consider bases for the spaces $\tilde\Delta_r$ and $\tilde\Delta_r^\pm$:
\begin{itemize}
\item Case $r\equiv 0\,\,(\mod 8)$: A basis for $\tilde\Delta_r^+$ is given by
\[\left\{u_a+\gamma_r(u_a),iu_a+\gamma_r(iu_a)\,|\, a=0,...,2^{r/2-1}-1, \sum a_l\equiv 0 \,\,(\mod 2)\right\}\]
\item Case $r\equiv 1\,\,(\mod 8)$: A basis for $\tilde\Delta_r$ is given by
\[\left\{u_a+\gamma_r(u_a),iu_a+\gamma_r(iu_a)\,|\, a=0,...,2^{[r/2]-1}-1\right\}\]
\item Case $r\equiv 2\,\,(\mod 8)$: A basis for $\tilde\Delta_r$ is given by
\[\left\{u_a,iu_a\,|\, a=0,...,2^{r/2}-1,\sum a_l\equiv 0 \,\,(\mod 2)\right\}\]
\item Case $r\equiv 4\,\,(\mod 8)$: A basis for $\tilde\Delta_r^+$ is given by
\[\left\{u_a,iu_a\,|\, a=0,...,2^{r/2}-1,\sum a_l\equiv 0 \,\,(\mod 2)\right\}\]
\end{itemize}
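In the binary encoding, each of the above bases is determined by a set of integer labels $a$, optionally filtered by the parity of the digit sum $\sum a_l$. A small enumeration sketch (names ours):

```python
def basis_labels(r):
    """Integer labels a of the basis vectors u_a, per the case list above."""
    m = r % 8
    if m in (2, 4):      # labels 0 <= a < 2^{r/2} with even digit sum
        return [a for a in range(2 ** (r // 2)) if bin(a).count("1") % 2 == 0]
    if m == 0:           # labels 0 <= a < 2^{r/2-1} with even digit sum
        return [a for a in range(2 ** (r // 2 - 1)) if bin(a).count("1") % 2 == 0]
    if m == 1:           # labels 0 <= a < 2^{[r/2]-1}, no parity condition
        return list(range(2 ** (r // 2 - 1)))
    raise ValueError("r not congruent to 0, 1, 2, 4 mod 8")
```

For $r=10\equiv 2\,\,(\mod 8)$ this returns the sixteen labels $0,3,5,6,\dots,30$ indexing the expansion of a point of $S^{31}$ below; with the two real generators $u_a$, $iu_a$ per label this gives real dimension $32$.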
\subsubsection*{Case $r\equiv 0\,\,(\mod 8)$}
For any point $Z\in S(\tilde\Delta_r^+\otimes_{\mathbb{R}}\mathbb{R}^m )=S^{d_r m-1}$,
\[Z= \sum_{h=1}^m\sum_{\{a=0,...,2^{r/2-1}-1\,:\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}} X_{a,h}\,(u_a+\gamma_r(u_a))\otimes v_h + Y_{a,h}\, (iu_a+\gamma_r(iu_a))\otimes v_h, \]
where $\{v_1,...,v_m\}$ is an orthonormal basis of $\mathbb{R}^m$ and $X_{a,h},Y_{a,h}\in\mathbb{R}$,
there are
$r-1$ point-wise linearly independent vector fields given as follows.
For $p=2$,
\begin{eqnarray*}
&&[\kappa_r(e_1e_2)\otimes{\rm Id}_m] \left(\sum_{h=1}^m\sum_{\{a=0,...,2^{r/2-1}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}}
X_{a,h}((u_a+\gamma_r(u_a))\otimes v_h) + Y_{a,h} ((iu_a+\gamma_r(iu_a))\otimes v_h)\right)\\
&=&\sum_{h=1}^m\sum_{\{a=0,...,2^{r/2-1}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}}
(-1)^{a_{0}}\big(-Y_{a,h} ((u_{a}+\gamma_r(u_{a}))\otimes v_h)+X_{a,h}((i u_{a}+\gamma_r(i u_{a}))\otimes v_h)\big).
\end{eqnarray*}
For $3\leq p\leq r$,
\begin{eqnarray*}
&&[\kappa_r(e_1e_p)\otimes{\rm Id}_m] \left(\sum_{h=1}^m\sum_{\{a=0,...,2^{r/2-1}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}}
X_{a,h}((u_a+\gamma_r(u_a))\otimes v_h) + Y_{a,h} ((iu_a+\gamma_r(iu_a))\otimes v_h)\right)\\
&=&\sum_{h=1}^m\sum_{\{a=0,...,2^{r/2-1}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}}
(-1)^{2j-1+\sum_{s=0}^{j-2}a_s+a_{j-1}(-2j+p+1)}\\
&&\big(X_{a,h}((i^{1-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}+\gamma_r(i^{1-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}))\otimes v_h) \\
&& -Y_{a,h} ((i^{-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}+\gamma_r(i^{-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}))\otimes v_h)\big).
\end{eqnarray*}
\subsubsection*{Case $r\equiv 1\,\,(\mod 8)$}
For any point $Z\in S(\tilde\Delta_r\otimes_{\mathbb{R}}\mathbb{R}^m )=S^{d_r m-1}$,
\[Z= \sum_{h=1}^m\sum_{a=0}^{2^{[r/2]-1}-1} X_{a,h}\,(u_a+\gamma_r(u_a))\otimes v_h + Y_{a,h}\, (iu_a+\gamma_r(iu_a))\otimes v_h, \]
where $\{v_1,...,v_m\}$ is an orthonormal basis of $\mathbb{R}^m$ and $X_{a,h},Y_{a,h}\in\mathbb{R}$,
there are
$r-1$ point-wise linearly independent vector fields given as follows.
For $p=2$,
\begin{eqnarray*}
&&[\kappa_r(e_1e_2)\otimes{\rm Id}_m] \left(\sum_{h=1}^m\sum_{a=0}^{2^{[r/2]-1}-1}
X_{a,h}((u_a+\gamma_r(u_a))\otimes v_h) + Y_{a,h} ((iu_a+\gamma_r(iu_a))\otimes v_h)\right)\\
&=&\sum_{h=1}^m\sum_{a=0}^{2^{[r/2]-1}-1}
(-1)^{a_{0}}\big(-Y_{a,h} ((u_{a}+\gamma_r(u_{a}))\otimes v_h)+X_{a,h}((i u_{a}+\gamma_r(i u_{a}))\otimes v_h)\big).
\end{eqnarray*}
For $3\leq p\leq r-1$,
\begin{eqnarray*}
&&[\kappa_r(e_1e_p)\otimes{\rm Id}_m] \left(\sum_{h=1}^m\sum_{a=0}^{2^{[r/2]-1}-1}
X_{a,h}((u_a+\gamma_r(u_a))\otimes v_h) + Y_{a,h} ((iu_a+\gamma_r(iu_a))\otimes v_h)\right)\\
&=&\sum_{h=1}^m\sum_{a=0}^{2^{[r/2]-1}-1}
(-1)^{2j-1+\sum_{s=0}^{j-2}a_s+a_{j-1}(-2j+p+1)}\big(X_{a,h}((i^{1-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}+\gamma_r(i^{1-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}))\otimes v_h) \\
&& -Y_{a,h} ((i^{-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}+\gamma_r(i^{-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}))\otimes v_h)\big).
\end{eqnarray*}
For $p=r$,
\begin{eqnarray*}
&&[\kappa_r(e_1e_r)\otimes{\rm Id}_m] \left(\sum_{h=1}^m\sum_{a=0}^{2^{[r/2]-1}-1}
X_{a,h}((u_a+\gamma_r(u_a))\otimes v_h) + Y_{a,h} ((iu_a+\gamma_r(iu_a))\otimes v_h)\right)\\
&=& \sum_{h=1}^m\sum_{a=0}^{2^{[r/2]-1}-1}
(-1)^{[r/2]+1+\sum_{l=0}^{[r/2]-1}a_l}\big(X_{a,h}(( u_{a+(-1)^{a_0}}+\gamma_r( u_{a+(-1)^{a_0}}))\otimes v_h) \\
&&+ Y_{a,h} ((i u_{a+(-1)^{a_0}}+\gamma_r(i u_{a+(-1)^{a_0}}))\otimes v_h)\big).
\end{eqnarray*}
\subsubsection*{Case $r\equiv 2,4\,\,(\mod 8)$}
For any point $Z\in S(\tilde\Delta_r\otimes_{\mathbb{R}}\mathbb{R}^m )=S^{d_r m-1}$ if $r\equiv 2\,\,(\mod 8)$
(resp. $Z\in S(\tilde\Delta_r^+\otimes_{\mathbb{R}}\mathbb{R}^m )=S^{d_r m-1}$ if $r\equiv 4\,\,(\mod 8)$),
\[Z= \sum_{h=1}^m\sum_{\{a=0,...,2^{r/2}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}} X_{a,h}(u_a\otimes v_h) + Y_{a,h} (iu_a\otimes v_h), \]
where $\{v_1,...,v_m\}$ is an orthonormal basis of $\mathbb{R}^m$ and $X_{a,h},Y_{a,h}\in\mathbb{R}$,
there are
$r-1$ point-wise linearly independent vector fields given as follows.
For $p=2$,
\begin{eqnarray*}
&&[\kappa_r(e_1e_2)\otimes{\rm Id}_m] \left(\sum_{h=1}^m\sum_{\{a=0,...,2^{r/2}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}} X_{a,h}(u_a\otimes v_h) + Y_{a,h} (iu_a\otimes v_h)\right)
\\
&=& \sum_{h=1}^m \sum_{\{a=0,...,2^{r/2}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}} (-1)^{a_{0}}(-Y_{a,h} (u_{a}\otimes v_h) + X_{a,h} (iu_{a}\otimes v_h)).
\end{eqnarray*}
For $3\leq p\leq r$,
\begin{eqnarray*}
&&[\kappa_r(e_1e_p)\otimes{\rm Id}_m] \left(\sum_{h=1}^m\sum_{\{a=0,...,2^{r/2}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}} X_{a,h}(u_a\otimes v_h) + Y_{a,h} (iu_a\otimes v_h)\right)\\
&=& \sum_{h=1}^m\sum_{\{a=0,...,2^{r/2}-1,\,\sum a_l\equiv 0 \,\,({\rm mod }2)\}} (-1)^{2j-1+\sum_{s=0}^{j-2}a_s+a_{j-1}(-2j+p+1)}(X_{a,h}(i^{1-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}\otimes v_h)\\
&& -Y_{a,h} (i^{-p} u_{a+(-1)^{a_{j-1}}2^{j-1}+(-1)^{a_0}}\otimes v_h)).
\end{eqnarray*}
\subsubsection*{Example: Vector fields on $S^{31}$}
In this subsection, we compute explicitly a maximal set of orthogonal linearly independent vector fields on $S^{31}$. Recall that $Cl_{10}^0$ is the largest even Clifford algebra admitting $\mathbb{R}^{32}$ as a representation space, so there are $9$ linearly independent orthogonal vector fields on the sphere $S^{31}$. Let
$Z\in S^{31}$ be written in terms of our basis as
\begin{eqnarray*}Z&:=&(X_{0} + i Y_{0})u_{0}+ (X_{3} + i Y_{3})u_{3}+ (X_{5} + i Y_{5})u_{5}+
(X_{6} + i Y_{6})u_{6}+ (X_{9} + i Y_{9})u_{9}+ (X_{10} + i Y_{10})u_{10}\\
&&+
(X_{12} + i Y_{12})u_{12}+ (X_{15} + i Y_{15})u_{15}+
(X_{17} + i Y_{17})u_{17}+ (X_{18} + i Y_{18})u_{18}+
(X_{20} + i Y_{20})u_{20}\\
&&
+ (X_{23} + i Y_{23})u_{23}+
(X_{24} + i Y_{24})u_{24}+ (X_{27} + i Y_{27})u_{27}+
(X_{29} + i Y_{29})u_{29}+ (X_{30} + i Y_{30})u_{30}
\end{eqnarray*}
Then a set of linearly independent orthogonal vector fields is given by
\begin{eqnarray*}
V_1&=&e_1e_2\cdot Z \\
&=&(i X_{0} - Y_{0})u_{0}+ (-i X_{3} + Y_{3})u_{3}+
(-i X_{5} + Y_{5})u_{5}+ (i X_{6} - Y_{6})u_{6}+ (-i X_{9} + Y_{9})u_{9}+
(i X_{10} - Y_{10})u_{10}\\
&&+ (i X_{12} - Y_{12})u_{12}+
(-i X_{15} + Y_{15})u_{15}+ (-i X_{17} + Y_{17})u_{17}+
(i X_{18} - Y_{18})u_{18}+ (i X_{20} - Y_{20})u_{20}\\
&&+
(-i X_{23} + Y_{23})u_{23}+ (i X_{24} - Y_{24})u_{24}+
(-i X_{27} + Y_{27})u_{27}+ (-i X_{29} + Y_{29})u_{29}+
(i X_{30} - Y_{30})u_{30}\\
V_2&=&e_1e_3\cdot Z \\
&=&(-X_{3} - i Y_{3})u_{0}+ (X_{0} + i Y_{0})u_{3}+
(X_{6} + i Y_{6})u_{5}+ (-X_{5} - i Y_{5})u_{6}+ (X_{10} + i Y_{10})u_{9}+
(-X_{9} - i Y_{9})u_{10}\\
&&+ (-X_{15} - i Y_{15})u_{12}+
(X_{12} + i Y_{12})u_{15}+ (X_{18} + i Y_{18})u_{17}+
(-X_{17} - i Y_{17})u_{18}+ (-X_{23} - i Y_{23})u_{20}\\
&&+
(X_{20} + i Y_{20})u_{23}+ (-X_{27} - i Y_{27})u_{24}+
(X_{24} + i Y_{24})u_{27}+ (X_{30} + i Y_{30})u_{29}+
(-X_{29} - i Y_{29})u_{30}\\
V_3&=&e_1e_4\cdot Z\\
&=&(-i X_{3} + Y_{3})u_{0}+ (-i X_{0} + Y_{0})u_{3}+
(i X_{6} - Y_{6})u_{5}+ (i X_{5} - Y_{5})u_{6}+ (i X_{10} - Y_{10})u_{9}+
(i X_{9} - Y_{9})u_{10}\\
&&+ (-i X_{15} + Y_{15})u_{12}+
(-i X_{12} + Y_{12})u_{15}+ (i X_{18} - Y_{18})u_{17}+
(i X_{17} - Y_{17})u_{18}+ (-i X_{23} + Y_{23})u_{20}\\
&&+
(-i X_{20} + Y_{20})u_{23}+ (-i X_{27} + Y_{27})u_{24}+
(-i X_{24} + Y_{24})u_{27}+ (i X_{30} - Y_{30})u_{29}+
(i X_{29} - Y_{29})u_{30}\\
V_4&=&e_1e_5\cdot Z\\
&=&(X_{5} + i Y_{5})u_{0}+ (X_{6} + i Y_{6})u_{3}+
(-X_{0} - i Y_{0})u_{5}+ (-X_{3} - i Y_{3})u_{6}+
(-X_{12} - i Y_{12})u_{9}+ (-X_{15} - i Y_{15})u_{10}\\
&&+
(X_{9} + i Y_{9})u_{12}+ (X_{10} + i Y_{10})u_{15}+
(-X_{20} - i Y_{20})u_{17}+ (-X_{23} - i Y_{23})u_{18}+
(X_{17} + i Y_{17})u_{20}\\
&&+ (X_{18} + i Y_{18})u_{23}+
(X_{29} + i Y_{29})u_{24}+ (X_{30} + i Y_{30})u_{27}+
(-X_{24} - i Y_{24})u_{29}+ (-X_{27} - i Y_{27})u_{30}\\
V_5&=&e_1e_6\cdot Z \\
&=&(i X_{5} - Y_{5})u_{0}+ (i X_{6} - Y_{6})u_{3}+
(i X_{0} - Y_{0})u_{5}+ (i X_{3} - Y_{3})u_{6}+ (-i X_{12} + Y_{12})u_{9}+
(-i X_{15} + Y_{15})u_{10}\\
&&+ (-i X_{9} + Y_{9})u_{12}+
(-i X_{10} + Y_{10})u_{15}+ (-i X_{20} + Y_{20})u_{17}+
(-i X_{23} + Y_{23})u_{18}+ (-i X_{17} + Y_{17})u_{20}\\
&&+
(-i X_{18} + Y_{18})u_{23}+ (i X_{29} - Y_{29})u_{24}+
(i X_{30} - Y_{30})u_{27}+ (i X_{24} - Y_{24})u_{29}+
(i X_{27} - Y_{27})u_{30}\\
V_6&=&e_1e_7\cdot Z \\
&=&(-X_{9} - i Y_{9})u_{0}+ (-X_{10} - i Y_{10})u_{3}+
(-X_{12} - i Y_{12})u_{5}+ (-X_{15} - i Y_{15})u_{6}+
(X_{0} + i Y_{0})u_{9}+ (X_{3} + i Y_{3})u_{10}\\
&&+ (X_{5} + i Y_{5})u_{12}+
(X_{6} + i Y_{6})u_{15}+ (X_{24} + i Y_{24})u_{17}+
(X_{27} + i Y_{27})u_{18}+ (X_{29} + i Y_{29})u_{20}\\
&&+
(X_{30} + i Y_{30})u_{23}+ (-X_{17} - i Y_{17})u_{24}+
(-X_{18} - i Y_{18})u_{27}+ (-X_{20} - i Y_{20})u_{29}+
(-X_{23} - i Y_{23})u_{30}\\
V_7&=&e_1e_8\cdot Z \\
&=&(-i X_{9} + Y_{9})u_{0}+ (-i X_{10} + Y_{10})u_{3}+
(-i X_{12} + Y_{12})u_{5}+ (-i X_{15} + Y_{15})u_{6}+
(-i X_{0} + Y_{0})u_{9}+ (-i X_{3} + Y_{3})u_{10}\\
&&+
(-i X_{5} + Y_{5})u_{12}+ (-i X_{6} + Y_{6})u_{15}+
(i X_{24} - Y_{24})u_{17}+ (i X_{27} - Y_{27})u_{18}+
(i X_{29} - Y_{29})u_{20}\\
&&+ (i X_{30} - Y_{30})u_{23}+
(i X_{17} - Y_{17})u_{24}+ (i X_{18} - Y_{18})u_{27}+
(i X_{20} - Y_{20})u_{29}+ (i X_{23} - Y_{23})u_{30}\\
V_8&=&e_1e_9\cdot Z \\
&=&(X_{17} + i Y_{17})u_{0}+ (X_{18} + i Y_{18})u_{3}+
(X_{20} + i Y_{20})u_{5}+ (X_{23} + i Y_{23})u_{6}+
(X_{24} + i Y_{24})u_{9}+ (X_{27} + i Y_{27})u_{10}\\
&&+
(X_{29} + i Y_{29})u_{12}+ (X_{30} + i Y_{30})u_{15}+
(-X_{0} - i Y_{0})u_{17}+ (-X_{3} - i Y_{3})u_{18}+
(-X_{5} - i Y_{5})u_{20}\\
&&+ (-X_{6} - i Y_{6})u_{23}+
(-X_{9} - i Y_{9})u_{24}+ (-X_{10} - i Y_{10})u_{27}+
(-X_{12} - i Y_{12})u_{29}+ (-X_{15} - i Y_{15})u_{30}\\
V_9&=&e_1e_{10}\cdot Z \\
&=&(i X_{17} - Y_{17})u_{0}+ (i X_{18} - Y_{18})u_{3}+
(i X_{20} - Y_{20})u_{5}+ (i X_{23} - Y_{23})u_{6}+
(i X_{24} - Y_{24})u_{9}+ (i X_{27} - Y_{27})u_{10}\\
&&+
(i X_{29} - Y_{29})u_{12}+ (i X_{30} - Y_{30})u_{15}+
(i X_{0} - Y_{0})u_{17}+ (i X_{3} - Y_{3})u_{18}+ (i X_{5} - Y_{5})u_{20}\\
&&+
(i X_{6} - Y_{6})u_{23}+ (i X_{9} - Y_{9})u_{24}+
(i X_{10} - Y_{10})u_{27}+ (i X_{12} - Y_{12})u_{29}+
(i X_{15} - Y_{15})u_{30}.
\end{eqnarray*}
In terms of coordinate vectors, one can write these vector fields as follows:
\begin{eqnarray*}
V_1 &=&
(-Y_{0},X_{0},Y_{3},-X_{3},Y_{5},-X_{5},-Y_{6},X_{6},Y_{9},-X_{9},-Y_{10},X_{10},-Y_{12},X_{12},Y_{15},\\&&-X_{15},Y_{17},-X_{17},-Y_{18},X_{18},-Y_{20},X_{20},Y_{23},-X_{23},-Y_{24},X_{24},Y_{27},-X_{27},Y_{29},-X_{29},-Y_{30},X_{30}),
\\
V_2&=&
(-X_{3},-Y_{3},X_{0},Y_{0},X_{6},Y_{6},-X_{5},-Y_{5},X_{10},Y_{10},-X_{9},-Y_{9},-X_{15},-Y_{15},X_{12},Y_{12},\\&&X_{18},Y_{18},-X_{17},-Y_{17},-X_{23},-Y_{23},X_{20},Y_{20},-X_{27},-Y_{27},X_{24},Y_{24},X_{30},Y_{30},-X_{29},-Y_{29}),
\\
V_3&=&
(Y_{3},-X_{3},Y_{0},-X_{0},-Y_{6},X_{6},-Y_{5},X_{5},-Y_{10},X_{10},-Y_{9},X_{9},Y_{15},-X_{15},Y_{12},-X_{12},\\&&-Y_{18},X_{18},-Y_{17},X_{17},Y_{23},-X_{23},Y_{20},-X_{20},Y_{27},-X_{27},Y_{24},-X_{24},-Y_{30},X_{30},-Y_{29},X_{29}),
\\
V_4 &=&(X_{5},Y_{5},X_{6},Y_{6},-X_{0},-Y_{0},-X_{3},-Y_{3},-X_{12},-Y_{12},-X_{15},-Y_{15},X_{9},Y_{9},X_{10},Y_{10},\\&&-X_{20},-Y_{20},-X_{23},-Y_{23},X_{17},Y_{17},X_{18},Y_{18},X_{29},Y_{29},X_{30},Y_{30},-X_{24},-Y_{24},-X_{27},-Y_{27}),
\\
V_5 &=&(-Y_{5},X_{5},-Y_{6},X_{6},-Y_{0},X_{0},-Y_{3},X_{3},Y_{12},-X_{12},Y_{15},-X_{15},Y_{9},-X_{9},Y_{10},-X_{10},\\&&Y_{20},-X_{20},Y_{23},-X_{23},Y_{17},-X_{17},Y_{18},-X_{18},-Y_{29},X_{29},-Y_{30},X_{30},-Y_{24},X_{24},-Y_{27},X_{27}),
\\
V_6&=&
(-X_{9},-Y_{9},-X_{10},-Y_{10},-X_{12},-Y_{12},-X_{15},-Y_{15},X_{0},Y_{0},X_{3},Y_{3},X_{5},Y_{5},X_{6},Y_{6},\\&&X_{24},Y_{24},X_{27},Y_{27},X_{29},Y_{29},X_{30},Y_{30},-X_{17},-Y_{17},-X_{18},-Y_{18},-X_{20},-Y_{20},-X_{23},-Y_{23}),
\\
V_7 &=&(Y_{9},-X_{9},Y_{10},-X_{10},Y_{12},-X_{12},Y_{15},-X_{15},Y_{0},-X_{0},Y_{3},-X_{3},Y_{5},-X_{5},Y_{6},-X_{6},\\&&-Y_{24},X_{24},-Y_{27},X_{27},-Y_{29},X_{29},-Y_{30},X_{30},-Y_{17},X_{17},-Y_{18},X_{18},-Y_{20},X_{20},-Y_{23},X_{23}),
\\
V_8 &=&(X_{17},Y_{17},X_{18},Y_{18},X_{20},Y_{20},X_{23},Y_{23},X_{24},Y_{24},X_{27},Y_{27},X_{29},Y_{29},X_{30},Y_{30},\\&& -X_{0},-Y_{0},-X_{3},-Y_{3},-X_{5},-Y_{5},-X_{6},-Y_{6},-X_{9},-Y_{9},-X_{10},-Y_{10},-X_{12},-Y_{12},-X_{15},-Y_{15}),
\\
V_9 &=&
(-Y_{17},X_{17},-Y_{18},X_{18},-Y_{20},X_{20},-Y_{23},X_{23},-Y_{24},X_{24},-Y_{27},X_{27},-Y_{29},X_{29},-Y_{30},X_{30},\\&&-Y_{0}, X_{0},-Y_{3},X_{3},-Y_{5},X_{5},-Y_{6},X_{6},-Y_{9},X_{9},-Y_{10},X_{10},-Y_{12},X_{12},-Y_{15},X_{15}).
\end{eqnarray*}
Relabelling the entries of $Z$,
\begin{eqnarray*}Z&:=&(v_1,v_2,v_3,v_4,v_5,v_6,v_7,v_8,v_9,v_{10},v_{11},v_{12},v_{13},v_{14},v_{15},v_{16},\\&& v_{17},v_{18},v_{19},
v_{20},v_{21},v_{22},v_{23},v_{24},v_{25},v_{26},v_{27},v_{28},v_{29},v_{30},v_{31},v_{32}),
\end{eqnarray*}
the vector fields are the following:
\begin{eqnarray*}
V_1&=&
(-v_2,v_1,v_4,-v_3,v_6,-v_5,-v_8,v_7,v_{10},-v_9,-v_{12},v_{11},-v_{14},v_{13},v_{16},-v_{15},\\&&v_{18},-v_{17},-v_{20},v_{19},-v_{22},v_{21},v_{24},-v_{23},-v_{26},v_{25},v_{28},-v_{27},v_{30},-v_{29},-v_{32},v_{31}),
\\
V_2&=&
(-v_3,-v_4,v_1,v_2,v_7,v_8,-v_5,-v_6,v_{11},v_{12},-v_9,-v_{10},-v_{15},-v_{16},v_{13},v_{14},\\&&v_{19},v_{20},-v_{17},-v_{18},-v_{23},-v_{24},v_{21},v_{22},-v_{27},-v_{28},v_{25},v_{26},v_{31},v_{32},-v_{29},-v_{30}),
\\
V_3 &=&
(v_4,-v_3,v_2,-v_1,-v_8,v_7,-v_6,v_5,-v_{12},v_{11},-v_{10},v_9,v_{16},-v_{15},v_{14},-v_{13},\\&&-v_{20},v_{19},-v_{18},v_{17},v_{24},-v_{23},v_{22},-v_{21},v_{28},-v_{27},v_{26},-v_{25},-v_{32},v_{31},-v_{30},v_{29}),
\\
V_4 &=&(v_5,v_6,v_7,v_8,-v_1,-v_2,-v_3,-v_4,-v_{13},-v_{14},-v_{15},-v_{16},v_9,v_{10},v_{11},v_{12},\\&&-v_{21},-v_{22},-v_{23},-v_{24},v_{17},v_{18},v_{19},v_{20},v_{29},v_{30},v_{31},v_{32},-v_{25},-v_{26},-v_{27},-v_{28}),
\\
V_5 &=&(-v_6,v_5,-v_8,v_7,-v_2,v_1,-v_4,v_3,v_{14},-v_{13},v_{16},-v_{15},v_{10},-v_9,v_{12},-v_{11},\\&&v_{22},-v_{21},v_{24},-v_{23},v_{18},-v_{17},v_{20},-v_{19},-v_{30},v_{29},-v_{32},v_{31},-v_{26},v_{25},-v_{28},v_{27}),
\\
V_6 &=&
(-v_9,-v_{10},-v_{11},-v_{12},-v_{13},-v_{14},-v_{15},-v_{16},v_1,v_2,v_3,v_4,v_5,v_6,v_7,v_8,\\
&&v_{25},v_{26},v_{27},v_{28},v_{29},v_{30},v_{31},v_{32},-v_{17},-v_{18},-v_{19},-v_{20},-v_{21},-v_{22},-v_{23},-v_{24}),
\\
V_7 &=&(v_{10},-v_9,v_{12},-v_{11},v_{14},-v_{13},v_{16},-v_{15},v_2,-v_1,v_4,-v_3,v_6,-v_5,v_8,-v_7,\\&&-v_{26},v_{25},-v_{28},v_{27},-v_{30},v_{29},-v_{32},v_{31},-v_{18},v_{17},-v_{20},v_{19},-v_{22},v_{21},-v_{24},v_{23}),
\\
V_8 &=&(v_{17},v_{18},v_{19},v_{20},v_{21},v_{22},v_{23},v_{24},v_{25},v_{26},v_{27},v_{28},v_{29},v_{30},v_{31},v_{32},\\&& -v_1,-v_2,-v_3,-v_4,-v_5,-v_6,-v_7,-v_8,-v_9,-v_{10},-v_{11},-v_{12},-v_{13},-v_{14},-v_{15},-v_{16}),
\\
V_9&=&
(-v_{18},v_{17},-v_{20},v_{19},-v_{22},v_{21},-v_{24},v_{23},-v_{26},v_{25},-v_{28},v_{27},-v_{30},v_{29},-v_{32},v_{31},\\&&-v_2, v_1,-v_4,v_3,-v_6,v_5,-v_8,v_7,-v_{10},v_9,-v_{12},v_{11},-v_{14},v_{13},-v_{16},v_{15}).
\end{eqnarray*}
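The defining properties of such a family — each $V_i$ squares to $-{\rm Id}$, distinct fields anticommute, and each is pointwise orthogonal to $Z$ — can be verified mechanically from these coordinate tables. As an illustration we transcribe only $V_8$ (the half-swap) and $V_9$ (a pair rotation applied to each half, with the halves exchanged) and check the Clifford relations; the function names are ours.

```python
def pair_rot(w):
    """(w_1, w_2, w_3, w_4, ...) -> (-w_2, w_1, -w_4, w_3, ...)."""
    out = []
    for k in range(0, len(w), 2):
        out += [-w[k + 1], w[k]]
    return out

def V8(v):
    """V_8 = (v_17, ..., v_32, -v_1, ..., -v_16), as in the table above."""
    return list(v[16:]) + [-x for x in v[:16]]

def V9(v):
    """V_9 applies pair_rot to each half of v and swaps the two halves."""
    return pair_rot(v[16:]) + pair_rot(v[:16])

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))
```

Running the checks on an integer test vector confirms $V_8^2=V_9^2=-{\rm Id}$, $V_8V_9+V_9V_8=0$, and $\langle Z,V_8(Z)\rangle=\langle Z,V_9(Z)\rangle=\langle V_8(Z),V_9(Z)\rangle=0$.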
% End of ``A binary encoding of spinors and applications,''
% arXiv:1905.10613 (math.DG), https://arxiv.org/abs/1905.10613.
% ``On David type Siegel Disks of the Sine family,'' https://arxiv.org/abs/0812.0431.
% Abstract: In 2008 Petersen posed a list of questions on the application of
% trans-quasiconformal Siegel surgery developed by Zakeri and himself. In this
% paper we extend Petersen--Zakeri's idea so that the surgery can be applied to
% all the premodels which have no ``free critical points.'' We explain how the
% idea is used in solving three of the questions posed by Petersen. To present
% the details of the idea, we focus on the solution of one of them: we prove
% that for typical rotation numbers $0< \theta < 1$, the boundary of the Siegel
% disk of $f_{\theta}(z) = e^{2 \pi i \theta} \sin (z)$ is a Jordan curve which
% passes through exactly two critical points $\pi/2$ and $-\pi/2$.
\section{Introduction}
Let $0< \theta< 1$ be an irrational number and $[a_{1}, \cdots,
a_{n},\cdots]$ be its continued fraction expansion. We say $\theta$ is of
bounded type if $\sup \{a_{n}\} < \infty$, and of David type if
$\log a_{n} = O(\sqrt n)$. It was proved in \cite{Z1} that when
$\theta$ is of bounded type, the Siegel disk of the entire function
$f_{\theta}(z) = e^{2 \pi i \theta} \sin(z)$ is a quasi-disk with
exactly two critical points $\pi/2$ and $-\pi/2$ on the boundary.
The main purpose of this paper is to extend this result to the case
that $\theta$ is of David type. We prove
\begin{main}
Let $0< \theta < 1$ be an irrational number of David type.
Then the boundary of the Siegel
disk of $f_{\theta}(z) = e^{2 \pi i \theta} \sin (z)$ is a
Jordan curve which passes
through exactly two critical points $\pi/2$ and $-\pi/2$.
\end{main}
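For concreteness, the arithmetic conditions above are easy to test on a given rotation number. A small sketch (ours, using exact rational arithmetic on a convergent of $\theta$) computes the partial quotients $a_1,a_2,\dots$ and checks the David-type bound $\log a_n \le C\sqrt{n}$ for a chosen constant $C$:

```python
import math
from fractions import Fraction

def partial_quotients(theta, n):
    """First n partial quotients of theta = [a_1, a_2, ...], 0 < theta < 1."""
    a, x = [], Fraction(theta)
    for _ in range(n):
        if x == 0:
            break            # rational input: the expansion terminates
        x = 1 / x
        k = math.floor(x)
        a.append(k)
        x -= k
    return a

def is_david_type_sample(a, C):
    """Check log a_n <= C sqrt(n) on the computed sample of quotients."""
    return all(math.log(an) <= C * math.sqrt(n) for n, an in enumerate(a, 1))
```

For instance $7/10 = [1,2,3]$, while the Fibonacci ratio $34/55$ yields quotients $1,\dots,1,2$, the truncation of the golden mean, which is of bounded type and a fortiori of David type.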
A similar result for David type Siegel disks of quadratic polynomials
was previously
obtained by Petersen and Zakeri in their seminal work \cite{PZ}.
Our proof goes along the same line as theirs. First we construct a Blaschke
fraction $G_{\theta}$ which
models the map $f_{\theta}$. Then we perform a trans-quasiconformal
surgery on $G_{\theta}$. To make such surgery possible, one needs
to prove the integrability of some Beltrami differential $\mu$, and
as in \cite{PZ}, this is the heart of the whole paper. After this,
we get an entire function $T_{\theta}$ which has a Siegel disk of
rotation number $\theta$ such that the boundary of the Siegel disk
is a Jordan curve passing through exactly two critical points
$\pi/2$ and $-\pi/2$. The main theorem then follows by showing that
$f_{\theta}(z) = T_{\theta}(z)$.
The most remarkable difference between the proof in this paper and
that in \cite{PZ} is as follows. In quadratic polynomial case, one
has a set of puzzle pieces with some very nice geometric and
dynamical properties which were used in an essential way in
\cite{PZ} to prove the integrability of $\mu$ (these puzzles were
previously constructed by Petersen in his famous article \cite{P1},
and are usually called Petersen puzzles now). But in our case, there
are no external rays and equipotential curves for $f_{\theta}$, so
such puzzle pieces do not exist any more. Thus the puzzle technique
used there does not apply here. To solve this problem, a new method
will be developed in $\S6$ of this paper by which one can estimate
the area of some dynamically defined sets, and the integrability of
$\mu$ then follows. Due to its flexibility, the method can be
applied in more general situations. In particular, it is one of the
crucial techniques in \cite{Z2} where it has been proved that every
David type Siegel disk of a polynomial map of any degree must be a
Jordan domain with at leat one of the critical points on its
boundary.
Throughout the following, we use $\widehat{\Bbb C}$, $\Bbb C$,
$\Delta$, and $\Bbb T$ to denote the Riemann sphere, the complex
plane, the open unit disk, and the unit circle, respectively. The
following is the organization of the paper.
In $\S 2$, we present the background materials about David
homeomorphisms and critical circle mappings.
In $\S3$, we construct an odd Blaschke fraction $G_{\theta}$ to
serve as the model map for $f_{\theta}$. The restriction of
$G_{\theta}$ on $\Bbb T$ is a homeomorphism with rotation number
$\theta$ and two critical points $1$ and $-1$. Let $\Phi: {\Bbb C}
\to {\Bbb C}$ be the square map given by $z \to z^{2}$. Then the map
$$
g_{\theta}(z) = \Phi \circ G_{\theta} \circ \Phi^{-1}(z)
$$
is a meromorphic function with exactly two essential singularities
at $0$ and $\infty$; moreover, the restriction of $g_{\theta}$
to $\Bbb T$ is a critical circle mapping with rotation number
$\alpha \equiv 2 \theta \mod(1)$, that is, $\alpha = 2 \theta$ if $0<
\theta < 1/2$ and $\alpha = 2 \theta -1$ if $1/2 < \theta <
1$ (Lemma~\ref{rotation number}). By Yoccoz's linearization theorem
\cite{Yo1}, there is a circle homeomorphism $h: \Bbb T \to \Bbb T$
such that $h(1) = 1$ and
$$g_{\theta}|{\Bbb T}(z) = h^{-1} \circ R_{\alpha} \circ h(z)$$
where $R_{\alpha}$ is the rigid rotation given by $\alpha$.
In $\S4$, we prove that $\alpha$ is also of David
type (Lemma~\ref{arith}).
In $\S5$, we introduce Yoccoz's cell construction by which one can
extend $h$ to a David homeomorphism $H: \Delta \to \Delta$. Let
$$
\nu_{H} = \frac{\overline{\partial} H }{\partial H} \frac{d
\overline{z}}{d z}
$$
be the Beltrami differential of $H$ in $\Delta$. Define
\begin{equation} \widetilde{g}_{\theta}(z) =
\begin{cases}
g_{\theta}(z) & \text{for $z \in {\Bbb C} - \Delta$}, \\
H^{-1}\circ R_{\alpha}\circ H(z) & \text{
for $z \in \Delta$}.
\end{cases}
\end{equation}
It follows that $\nu_{H}$ is $\widetilde{g}_{\theta}$-invariant. Let
$\nu$ denote the Beltrami differential in the whole complex plane
which is obtained by the pull back of $\nu_{H}$ through the
iterations of $\widetilde{g}_{\theta}$.
In $\S6$, we prove that the dilatation of the Beltrami differential
$\nu$ satisfies an exponential growth condition, more precisely,
there exist constants $M > 0$, $\alpha > 0$, and $0< \epsilon_{0} <
1$, such that for any $0< \epsilon < \epsilon_{0}$, the following
inequality holds,
\begin{equation}\label{integrability}
area \{z\:\big{|}\: |\nu(z)| > 1 - \epsilon \} \le M
e^{-\alpha/\epsilon},
\end{equation}
where $area(X)$ is used to denote the spherical area of a subset $X
\subset \widehat{\Bbb C}$.
Let $\mu$ be the Beltrami differential in the complex plane which is
defined by the pull back of $\nu$ through the square map $\Phi$. It
will be proved that $\mu$ satisfies the
condition~(\ref{integrability}) as well (Lemma~\ref{equi}). By David's
theorem \cite{Da}, $\mu$ is integrable. That is, there is a
homeomorphism $\phi: {\Bbb C} \to {\Bbb C}$ in $W_{loc}^{1,1}({\Bbb
C})$ such that
$$\overline{\partial} \phi = \mu
\partial \phi.$$
Define
\begin{equation} \widetilde{G}_{\theta}(z) =
\begin{cases}
G_{\theta}(z) & \text{for $z \in {\Bbb C} - \Delta$}, \\
\Phi^{-1}\circ H^{-1}\circ R_{\alpha}\circ H \circ \Phi(z) & \text{
for $z \in \Delta$}.
\end{cases}
\end{equation}
It follows that $\mu$ is $\widetilde{G}_{\theta}$-invariant. Now let
$\phi$ be normalized so that it fixes $0$ and $\infty$ and
maps $1$ to $\pi/2$. Then by the same argument as in the proof of
Lemma 5.5 of \cite{PZ}, it follows that the map $T_{\theta}(z) =
\phi \circ \widetilde{G}_{\theta}\circ \phi^{-1}(z)$ is an entire
function (Lemma~\ref{PH}). From the construction above, $T_{\theta}$
has a Siegel disk centered at the origin with rotation number
$\theta$, and moreover, the boundary of the Siegel disk is a Jordan
curve passing through exactly two critical points $\pi/2$ and
$-\pi/2$.
In $\S7$, we will prove that $f_{\theta}(z) = T_{\theta}(z)$. We
prove this by using a topological rigidity property of the Sine
family (Lemma 1 of \cite{DS} or Lemma~\ref{DS}). The Main Theorem
follows.
\section{Preliminaries}
\subsection{David Homeomorphisms}
Let $\Omega \subset \widehat{\Bbb C}$ be a domain. A Beltrami
differential $\mu = \mu(z)d\overline{z}/dz$ in $\Omega$ is a
measurable $(-1, 1)$-form such that $|\mu(z)| < 1$ almost everywhere
in $\Omega$. We say $\mu$ is \emph{integrable} if there is a
homeomorphism $\phi: \Omega \to \Omega'$ in $W_{loc}^{1,1}(\Omega)$
which solves the Beltrami equation
\begin{equation}\label{be}
\overline{\partial} \phi = \mu \partial \phi.
\end{equation}
The map $\phi$ is called a David homeomorphism. When
$\|\mu\|_{\infty} < 1$, the map $\phi$ is a classical
quasiconformal mapping.
Recall that $area(X)$ is used to denote the spherical area of a
subset $X \subset \widehat{\Bbb C}$.
\begin{thm}[David \cite{Da}]\label{David}
Let $\Omega \subset \widehat{\Bbb C}$ be a domain. Let $\mu$ be a
Beltrami differential in $\Omega$. Then $\mu$ is integrable if there
exist constants $M > 0$, $\alpha > 0$, and $0< \epsilon_{0} < 1$,
such that for any $0< \epsilon < \epsilon_{0}$, the following
inequality holds,
$$
area \{z\:\big{|}\: |\mu(z)| > 1 - \epsilon \} \le M
e^{-\alpha/\epsilon}.
$$
Moreover, if $\mu$ is integrable, then the solution $\phi: \Omega \to \Omega'$ in
$W_{loc}^{1,1}(\Omega)$ of the Beltrami equation~(\ref{be}) is unique
up to postcomposition with a conformal map.
That is, if $\psi: \Omega\to \Omega''$ is
another such solution, then there is a conformal map $\sigma:
\Omega' \to \Omega''$ such that $\psi = \sigma \circ \phi$.
\end{thm}
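To see what the area condition in David's theorem asks for, consider the model radial coefficient $|\mu(z)| = 1 - 1/\log(1/|z|)$ for $|z|<e^{-1}$ and $0$ elsewhere (our illustrative example, not from the paper). Then $\{|\mu| > 1-\epsilon\}$ is exactly the disk $|z| < e^{-1/\epsilon}$, of Euclidean area $\pi e^{-2/\epsilon}$, so the hypothesis holds with $M=\pi$ and $\alpha = 2$. A sketch checking the set identity numerically:

```python
import math

def mu_abs(r):
    """|mu(z)| for |z| = r in the model coefficient above."""
    if 0 < r < math.exp(-1):
        return 1.0 - 1.0 / math.log(1.0 / r)
    return 0.0

def exceeds_radius(eps):
    """Radius of the disk {|mu| > 1 - eps}; its area is pi * exp(-2/eps)."""
    return math.exp(-1.0 / eps)
```

Sampling just inside and just outside the radius $e^{-1/\epsilon}$ confirms the description of the super-level set.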
\subsection{Critical Circle Mappings} For our purpose, we say a homeomorphism $f:
{\Bbb T} \to {\Bbb T}$ is a critical circle mapping if it is real
analytic and has exactly one critical point at $1$.
Suppose $f$ is a critical circle mapping with an irrational rotation
number $\theta$. Let $p_{n}/q_{n}, n \ge 0$, be the convergents of
$\theta$. For $i \in {\Bbb Z}$, let $x_{i}\in \Bbb T$
denote the point such that $f^{i}(x_{i}) = 1$. Let $I_{n} = [1,
x_{q_{n}}]$. For $i \ge 0$, let $I_{n}^{i} \subset \Bbb T$ denote
the interval such that $f^{i}(I_{n}^{i}) = I_{n}$, that is, $I_{n}^{i} =
[x_{i}, x_{q_{n} +i}]$. Then the collection of the intervals
$$
I_{n}^{i}, \: 0 \le i \le q_{n+1}-1, \hbox{ and } I_{n+1}^{j}, \: 0
\le j\le q_{n}-1,
$$
defines a partition of $\Bbb T$ modulo the common end points. We
call such a partition a \emph{dynamical partition} of level $n$.
It is not difficult to see that the set of all the end points in
this partition is
$$
\Pi_{n} = \{x_{i}\:\big{|} \: 0 \le i < q_{n} + q_{n+1}\}.
$$
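The combinatorics of these partitions is governed by the denominators $q_n$ of the convergents, which satisfy $q_0=1$, $q_1=a_1$, $q_{n+1}=a_{n+1}q_n+q_{n-1}$; in particular, the dynamical partition of level $n$ consists of $q_{n+1}+q_n$ intervals, matching the count of endpoints in $\Pi_n$. A sketch (with the conventions of the recursion just stated):

```python
def denominators(a):
    """q_0, q_1, ..., q_{len(a)} from partial quotients a = [a_1, a_2, ...]."""
    q_prev, q_cur = 0, 1          # q_{-1}, q_0
    out = [1]
    for an in a:
        q_prev, q_cur = q_cur, an * q_cur + q_prev
        out.append(q_cur)
    return out

def dynamical_partition_size(a, n):
    """Number of intervals in the level-n dynamical partition: q_{n+1} + q_n."""
    q = denominators(a)
    return q[n + 1] + q[n]
```

For the golden-mean quotients $a_n\equiv 1$ the $q_n$ are the Fibonacci numbers, so the level-$2$ dynamical partition has $q_3+q_2=5$ intervals.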
\begin{thm}[\'{S}wiatek-Herman, see \cite{dFdM}]\label{SH}
Let $f: {\Bbb T} \to {\Bbb T}$ be a real analytic critical circle
mapping with an irrational rotation number $\theta$. Let $n \ge 0$.
Then there is an asymptotically universal bound such that
$$
|[x, f^{-q_{n}}(x)]| \asymp |[x, f^{q_{n}}(x)]|
$$
holds for any point $x$ in $\Bbb T$ and
$$
|I| \asymp |J|
$$
holds for any two adjacent intervals $I$ and $J$ in the dynamical
partition of $\Bbb T$ of level $n$.
\end{thm}
Now let us consider another partition of $\Bbb T$. Let
$$
\Xi_{n} = \{x_{i}\:\big{|}\: 0 \le i < q_{n+1}\}.
$$
The points in $\Xi_{n}$ separate $\Bbb T$ into disjoint intervals.
This partition arises in Yoccoz's cell construction (see $\S6$ of
\cite{PZ} or $\S5$). Let us call it the \emph{cell partition} of
level $n$. The following lemma describes the relation between these
two partitions.
\begin{lem}\label{dyn-cell}
Each interval in ${\Bbb T}\setminus \Xi_{n}$ is either an interval
in ${\Bbb T}\setminus \Pi_{n}$ or the union of two adjacent
intervals in ${\Bbb T}\setminus \Pi_{n}$.
\end{lem}
\begin{rmk}{\rm
For our use, the definition of the cell partition is a little
different from that in \cite{PZ} where the cell partition of level
$n$ is defined by the points $\{x_{i}\: \big{|} 0 \le i < q_{n}\}$.
Therefore, the cells of level $n$ in this paper correspond to the
cells of level $n+1$ there.}
\end{rmk}
For the \emph{dynamical partition}, an interval in the partition
of level $n$ may also be an interval in the partition of the next
level. This is still true for the \emph{cell partition}. Actually,
we have
\begin{lem}\label{cell}
An interval $[x_{j}, x_{k}]$ (with $j < k$) in the cell partition of level $n$ is
also an interval in the cell partition of level $n+1$ if and only if
$a_{n+2} = 1$, $k = j + q_{n}$, and $0 \le j \le q_{n+1} - q_{n}$.
\end{lem}
For proofs of the above two lemmas, see $\S6$ of \cite{PZ}. By
Lemma~\ref{cell}, any two adjacent points in $\Xi_{n}$ cannot be
adjacent in $\Xi_{n+2}$. This, together with Theorem~\ref{SH} and
Lemma~\ref{dyn-cell}, implies
\begin{lem}\label{geom}
There is a $0< \delta < 1$ which depends only on $f$ such that for
any interval $I$ in ${\Bbb T} \setminus \Xi_{n+2}$, there is some
interval $J$ in ${\Bbb T} \setminus \Xi_{n}$ with $I \subset J$ and
$|I| < \delta |J|$.
\end{lem}
\begin{lem}\label{real bound}
Let $v = f(1)$ denote the critical value of $f$. We have the
following real bounds:
\begin{itemize}
\item[1.] $|[x_{q_{n}}, x_{-q_{n+1}}]| \asymp |[x_{-q_{n+1}}, 1]|$,
\item[2.] $|[x_{q_{n}}, x_{q_{n}+q_{n+1}}]| \asymp
|[x_{q_{n}+q_{n+1}},1]|$,
\item[3.] $|[x_{q_{n}+q_{n+1}-1}, v]| \asymp |[v,x_{q_{n+1}-1}]|$.
\end{itemize}
\end{lem}
\begin{proof}
The direction $|[x_{q_{n}}, x_{-q_{n+1}}]| \preceq|[x_{-q_{n+1}},
1]|$ in the first assertion follows from $|[x_{-q_{n+1}}, 1]|
\asymp |[x_{q_{n+1}}, 1]| \asymp |[1, x_{q_{n}}]|$ which is implied
by Theorem~\ref{SH}. Let us prove the other direction. Consider the
intervals $J = [x_{-q_{n}-q_{n+1}}, x_{-q_{n}}]$ and $ I = [1,
x_{-2q_{n}}]$. Note that $J \subset [x_{-q_{n+2}}, x_{-q_{n}}]
\subset I$. By Theorem~\ref{SH}, $J$ has definite space around it
inside $I$. Since
$I$ contains at most two points from $f^{k}(1), 1 \le k \le
q_{n}$, the direction $|[x_{q_{n}}, x_{-q_{n+1}}]| \succeq
|[x_{-q_{n+1}}, 1]|$ then follows by considering the action of
$f^{-q_{n}}$ on $I$ and Koebe's distortion principle (see Lemma 2.4
of \cite{dFdM}).
Note that $|[x_{q_{n}}, x_{q_{n}+q_{n+1}}]| \le |[x_{q_{n}},
x_{q_{n+2}}]|$ and $|[x_{q_{n+2}}, 1]| \le |[x_{q_{n}+q_{n+1}},
1]|$. But $|[x_{q_{n}}, x_{q_{n+2}}]| < |[x_{q_{n}}, 1]| \preceq
|[x_{q_{n+2}},1]|$ by Theorem~\ref{SH}. Thus we have proved $|[x_{q_{n}},
x_{q_{n}+q_{n+1}}]| \preceq |[x_{q_{n}+q_{n+1}},1]|$. This proves
one direction of the second assertion. The other direction can be
proved in the same way by considering the interval $J =
[x_{q_{n+1}}, x_{-q_{n}}]$ and $I = [1, x_{ - 2q_{n}}]$, and the
action of $f^{-q_{n}}$ on $I$.
From the second assertion, it follows that $|[x_{q_{n}}, 1]| \asymp
|[x_{q_{n}+q_{n+1}},1]|$ and therefore $|[x_{q_{n+1}}, 1]| \asymp
|[x_{q_{n}+q_{n+1}},1]|$ by Theorem~\ref{SH}. The third assertion
then follows by applying $f$ to the intervals on both sides.
\end{proof}
\section{A Ghys-like Model}
In this section, we construct a Ghys-like model map $G_{\theta}$.
The idea of such type of construction was pioneered by A. Cheritat
(see \cite{C}). Recall that $\Delta$ and $\Bbb T$ denote the unit
disk and the unit circle, respectively.
Let $T(z) = \sin(z)$. Then $T$ has exactly two
critical values, $1$ and $-1$. Let $D$ be the component of
$T^{-1}(\Delta)$ which contains the origin.
\begin{lem}\label{basic domain}
$D$ is a Jordan domain which is symmetric about the origin and the
map $T|\partial D: \partial D \to {\Bbb T}$ is a homeomorphism.
Moreover, $\partial D$ passes through exactly two critical points
$\pi/2$ and $-\pi/2$.
\end{lem}
\begin{proof}
Since $T$ is an entire function with no finite asymptotic value by
Lemma 1 of \cite{Z}, $\partial D$ is bounded and thus a closed and
piecewise smooth curve. In addition, since $\Delta$ contains no
critical value of $T$, the map $T: D \to \Delta$ is a holomorphic
isomorphism. This implies that $\partial D$ does not intersect
itself and thus is a Jordan curve. It follows that $T: \partial D
\to {\Bbb T}$ is a homeomorphism. The symmetry of $D$ follows from
the fact that $T$ is odd. The first assertion of the lemma has
been proved.
Note that the inverse branch of $T$ which maps the origin to itself
can be continuously extended to $1$ along the segment $[0,1]$. It
follows that $\pi/2 \in \partial D$. The same argument implies that
$-\pi/2 \in \partial D$. Because $T: \partial D \to {\Bbb T}$ is a
homeomorphism, and because $1$ and $-1$ are the only two critical
values of $T$, $\pi/2$ and $-\pi/2$ are the only two critical points
on $\partial D$. This completes the proof of the lemma.
\end{proof}
For $k \in \Bbb Z$, let $D_{k} = \{z+k\pi\:\big{|}\: z \in D\}$. In particular, $D_{0} = D$.
\begin{lem}\label{dis-con}
The domains $D_{k}, k \in {\Bbb Z}$, are all the components of
$T^{-1} (\Delta)$. For any $k \in \Bbb Z$, $\partial D_{k} \cap
\partial D_{k+1} = \{k \pi+ \pi/2\}$, and moreover, $\partial D_{i}
\cap
\partial D_{j}= \emptyset$ for $i, j \in {\Bbb Z}$ with $|i - j| >
1$.
\end{lem}
\begin{proof}
Since $T(z + k\pi) = (-1)^{k} T(z)$ and $T(D) = \Delta$ is symmetric
about the origin, $T(D_{k}) = \Delta$
for any $k \in \Bbb Z$. The first assertion then follows from the
fact that $T^{-1}(0) = \{k\pi\:\big{|}\: k\in {\Bbb Z}\}$. Note
that $\partial D_{i} \cap
\partial D_{j}$ must consist of critical points if it is non-empty.
The second assertion then follows from the fact that every
$\partial D_{k}$ contains exactly two critical points $k\pi + \pi/2$
and $k\pi - \pi/2$.
\end{proof}
Let $\psi: \widehat{\Bbb C} - \overline{\Delta} \to \widehat{\Bbb C} - \overline{D}$
be the Riemann map such that $\psi(\infty) = \infty$ and $\psi(1) = \pi/2$. Since $\Delta$
and $D$ are both symmetric about the origin, we have
\begin{lem}\label{odd}
$\psi$ is odd.
\end{lem}
For $z \in \Bbb C$, let $z^{*}$ denote the symmetric image of $z$ about the unit circle.
Define
\begin{equation}
G(z) =
\begin{cases}
T \circ \psi(z) & \text{for $z \in {\Bbb C} - \Delta$}, \\
((T \circ \psi)(z^{*}))^{*} & \text{ for $z \in \Delta -\{0\}$}.
\end{cases}
\end{equation}
By Lemma~\ref{odd} and the construction of $G(z)$, we have
\begin{lem}\label{G}
$G(z)$ is holomorphic in ${\Bbb C} - \{0\}$ and is symmetric about
the unit circle. Moreover, $G(z)$ is odd, and $G|{\Bbb T}: {\Bbb
T}\to {\Bbb T}$ is a real analytic circle homeomorphism which has
exactly two critical points at $1$ and $-1$.
\end{lem}
Let $0< \theta< 1$ be the David type irrational number in the Main
Theorem. Since $G|{\Bbb T}: {\Bbb T} \to {\Bbb T}$ is a critical
circle homeomorphism, by Proposition 11.1.9 of \cite{KH}, we get
\begin{lem}\label{exist}
There exists a unique $t \in [0, 1)$ such that $e^{2 \pi it}G|{\Bbb
T}: {\Bbb T}\to {\Bbb T}$ is a critical circle homeomorphism of
rotation number $\theta$.
\end{lem}
Let $t \in [0, 1)$ be the number given in Lemma~\ref{exist}. Let us
denote $e^{2 \pi it}G(z)$ by $G_{\theta}(z)$. Since $G(z)$ is odd by
Lemma~\ref{G}, we have
\begin{lem}\label{G-odd}
$G_{\theta}$ is odd.
\end{lem}
Let $\Phi: {\Bbb C} \to {\Bbb C}$ be the square map given by
$\Phi(z) = z^{2}$. Define
$$
g_{\theta}(z) = \Phi \circ G_{\theta} \circ \Phi^{-1}(z).
$$
\begin{lem}\label{rotation number}
$g_{\theta}$ is a meromorphic function with exactly two essential
singularities at $0$ and $\infty$, and the restriction of
$g_{\theta}$ to $\Bbb T$ is a critical circle homeomorphism with
exactly one critical point at $1$. Moreover, the rotation number of
$g_{\theta}|{\Bbb T}$ is $\alpha \equiv 2 \theta \mod(1)$.
\end{lem}
\begin{proof}
Since $G_{\theta}$ is odd by Lemma~\ref{G-odd}, $g_{\theta}$ is well
defined and has exactly one critical point $1$ on the unit circle.
The first assertion follows. Now let us prove the second assertion.
Let $I$ denote the anticlockwise arc from $1$ to $g_{\theta}(1) =
(G_{\theta}(1))^{2}$. Consider the orbit segment $O_{n} =
\{g_{\theta}^{k}(1) = (G_{\theta}^{k}(1))^{2}, 0 \le k \le n\}$. Let
$P_{n}$ denote the number of points in $O_{n}$ which are
contained in $I$. There are two cases.
In the first case, $0< \theta < 1/2$. Since $G_{\theta}$ is odd, it
follows that any half of the unit circle contains roughly half of
the points in $O_{n}$. Thus $G_{\theta}(1)$ is contained
in the upper half of the unit circle. Let $J$ denote the
anticlockwise arc from $1$ to $G_{\theta}(1)$. It follows that $J
\subset I$. Let $Q_{n}^{+}$ and $Q_{n}^{-}$ denote the numbers of
the points in $O_{n}$ which are contained in $J$ and $-J$,
respectively (here $-J$ is the anticlockwise arc from $-1$ to
$-G_{\theta}(1)$). It follows that
$$
\lim_{n\to \infty} Q_{n}^{+}/n = \lim_{n\to \infty} Q_{n}^{-}/n =
\theta.
$$
Note that $g_{\theta}^{k}(1) \in I$ if and only if
$G_{\theta}^{k}(1) \in J \cup (-J)$ and that $J \cap (-J) =
\emptyset$. We thus have
$$
\lim_{n\to \infty} P_{n}/n = \lim_{n\to \infty} (Q_{n}^{+}+
Q_{n}^{-})/n = 2\theta.
$$
In the second case, $1/2< \theta < 1$. Since any half of the unit
circle contains roughly half of the points in $O_{n}$,
it follows that $G_{\theta}(1)$ is contained in the lower half of
the unit circle. Thus $-G_{\theta}(1)$ is contained in the upper
half of the unit circle. Let $J$ denote the anticlockwise arc from
$1$ to $-G_{\theta}(1)$. It follows that $J \subset I$. Again let
$Q_{n}^{+}$ and $Q_{n}^{-}$ denote the numbers of the points in
$O_{n}$ which are contained in $J$ and $-J$, respectively (here $-J$
is the anticlockwise arc from $-1$ to $G_{\theta}(1)$). It follows
that
$$
\lim_{n\to \infty} Q_{n}^{+}/n = \lim_{n\to \infty} Q_{n}^{-}/n =
\theta - 1/2.
$$
As before, $g_{\theta}^{k}(1) \in I$ if and only if
$G_{\theta}^{k}(1) \in J \cup (-J)$. Since $J \cap (-J) =
\emptyset$, we have
$$
\lim_{n\to \infty} P_{n}/n = \lim_{n\to \infty} (Q_{n}^{+}+
Q_{n}^{-})/n = 2\theta - 1.
$$
The lemma follows.
\end{proof}
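The doubling of the rotation number in Lemma~\ref{rotation number} can be checked on a toy model: take a lift $\widetilde{G}$ of an odd circle homeomorphism (its displacement is $1/2$-periodic), pass to the induced lift $\widetilde{g}(x) = 2\widetilde{G}(x/2)$ under the square map, and compare rotation numbers. The map and parameters below are ours (not the map $G_{\theta}$ of the construction); this is a numerical sanity check, not part of the proof.

```python
import math

# Toy model: an "odd" circle map has a lift G whose displacement is
# 1/2-periodic; the induced map on squares has lift g(x) = 2*G(x/2),
# and rho(g) = 2*rho(G) since the lifts are conjugate by x -> 2x.
omega, eps = 0.3183, 0.05   # eps*4*pi < 1 keeps G monotone

def G(x):
    # displacement omega + eps*sin(4*pi*x) is 1/2-periodic
    return x + omega + eps * math.sin(4.0 * math.pi * x)

def g(x):
    # lift of the induced map on squares under z -> z^2
    return 2.0 * G(x / 2.0)

def rho(lift, n=100000):
    """Estimate the rotation number of a degree-one lift via lift^n(0)/n."""
    x = 0.0
    for _ in range(n):
        x = lift(x)
    return x / n

rho_G, rho_g = rho(G), rho(g)
print(rho_G, rho_g)   # rho_g is (numerically) twice rho_G
```

As circle maps the rotation numbers are taken mod $1$, which is exactly the statement $\rho(g_{\theta}|{\Bbb T}) \equiv 2\theta \mod(1)$ of the lemma.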
\section{An Arithmetic Property}
\begin{lem}\label{arith}
Let $0<
\theta < 1$ be an irrational number of David type. Let $0< \alpha <
1$ be the irrational number such that
$$
\alpha \equiv 2 \theta \mod(1).
$$
Then $\alpha$ is also of David type.
\end{lem}
\begin{proof}
Let $[b_{1}, \cdots, b_{n}, \cdots]$ and $s_{n}/t_{n}$ be the continued
fraction and convergents of $\theta$, and let $[a_{1}, \cdots, a_{n},
\cdots]$ and $p_{n}/q_{n}$ be those of $\alpha$. Let $n \ge
4$. We claim that there exists an even integer $L = 2m$ among
$t_{n-1}, t_{n}$ and $t_{n} - t_{n-1}$ and an integer $N \ge 0$ such
that the inequality
\begin{equation}\label{arith ine}
|2m \theta - N| < |2y \theta - x|
\end{equation}
holds for all integers $x \ge 0$ and $0 < y < m$.
In fact, if one of $t_{n-1}$ and $t_{n}$ is even, we can take it to
be $L$, and take $N$ to be $s_{n-1}$ or $s_{n}$. Then the claim is
obviously true. Otherwise, both $t_{n-1}$ and $t_{n}$ are odd
integers. Then let $L = t_{n} - t_{n-1}$ and let $N \ge 0$ be the
integer at which the left-hand side of (\ref{arith ine}) attains its
minimum. If $t_{n-2} = t_{n} - t_{n-1}$, the claim is obviously
true. Otherwise, $t_{n} - t_{n-1} > t_{n-1}$. Then the claim also
follows, since the only possible integers $s$ and $t$ with $t
< t_{n} - t_{n-1}$ and $|(t_{n} - t_{n-1}) \theta - N| \ge |t \theta
- s|$ are $s_{n-1}$ and $t_{n-1}$. But $t_{n-1}$ is odd, hence
(\ref{arith ine}) also holds in the latter case.
From (\ref{arith ine}) and $\alpha \equiv 2 \theta \mod(1)$, it
follows that there exists some integer $N \ge 0$ such that
\begin{equation}\label{appro}
|m \alpha - N| < |\alpha y - x|
\end{equation}
holds for all integers $x \ge 0$ and $0 < y < m$. This implies
that $m = q_{l}$ for some $l \ge 0$. Let $k$ be the largest integer
such that $q_{k} < t_{n+1}$. Since $m$ is itself a denominator $q_{l}$
and $m = L/2 < t_{n+1}$, it follows
that $q_{k} \ge m$. Since $L \ge t_{n-2} > 2t_{n-4}$, we have $m = L/2
> t_{n-4}$. Thus we get
$$
q_{k} > t_{n-4}.
$$
This implies that for every $n \ge 4$, there is some $q_{k}$ between
$t_{n+1}$ and $t_{n-4}$.
Now for every $k \ge 1$, let $n \ge 1$ be the least integer such
that $q_{k} < t_{n+1}$. It is clear that $n \ge 9$ for all $k$
large. Since for every $n \ge 4$, there is some $q_{k}$ between
$t_{n+1}$ and $t_{n-4}$, it follows that
$$
n \le 5k + 5.
$$
Similarly, between $t_{n-4}$ and $t_{n-9}$, there is some $q_{l}$
with $l < k$. So we get
$$
q_{k-1} > t_{n-9}.
$$
Thus we have
$$
a_{k} \le q_{k}/q_{k-1} < t_{n+1}/t_{n-9}.
$$
All these together imply that
$$
\log a_{k} < \log (t_{n+1}/t_{n-9}) \le \sum_{n-8 \le l \le n+1}
\log (b_{l} + 1) \le C \sqrt n \le C' \sqrt k
$$
holds for all $k \ge 1$ large, where $C, C' > 0$ are some uniform
constants. The lemma follows.
\end{proof}
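The arithmetic in the proof is easy to experiment with. The sketch below (function names are ours; a sanity check, not part of the argument) computes continued fraction digits via the Gauss map and convergent denominators via $q_{n} = a_{n}q_{n-1} + q_{n-2}$; for instance, doubling $\theta = (\sqrt{5}-1)/2 = [1,1,1,\cdots]$ gives $\alpha = \sqrt{5}-2 = [4,4,4,\cdots]$, whose partial quotients are still bounded, consistent with Lemma~\ref{arith}.

```python
import math

# Continued fraction digits of x in (0,1) via the Gauss map
# x -> 1/x - floor(1/x); digits are reliable only while floating-point
# precision lasts (a dozen or so terms for quadratic irrationals).
def cf_digits(x, n):
    digits = []
    for _ in range(n):
        x = 1.0 / x
        a = int(x)
        digits.append(a)
        x -= a
    return digits

# Convergent denominators via q_n = a_n*q_{n-1} + q_{n-2},
# with q_{-1} = 0 and q_0 = 1.
def denominators(digits):
    q_prev, q_cur = 0, 1
    qs = []
    for a in digits:
        q_prev, q_cur = q_cur, a * q_cur + q_prev
        qs.append(q_cur)
    return qs

theta = (math.sqrt(5) - 1) / 2          # [1, 1, 1, ...]
alpha = (2 * theta) % 1.0               # sqrt(5) - 2 = [4, 4, 4, ...]
print(cf_digits(theta, 8), cf_digits(alpha, 8))
print(denominators(cf_digits(alpha, 5)))
```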
\section{Yoccoz's Cell Construction}
Recall that $g_{\theta}|\Bbb T$ is a critical circle homeomorphism
with rotation number $\alpha$ and exactly one critical point at
$1$. For $i \in {\Bbb Z}$, let $x_{i}$ be the point in $\Bbb T$ such
that $g_{\theta}^{i}(x_{i}) = 1$. For $n \ge 0$, let $p_{n}/q_{n}$
be the convergents of $\alpha$. Consider the \emph{cell
partition} of level $n$ introduced in $\S2$,
$$
\Xi_{n} = \{x_{i} \: \big{|}\: 0\le i < q_{n+1}\}.
$$
For each $x_{i} \in \Xi_{n}$, let $y_{i}$ be the point on the radial
segment $[0, x_{i}]$ such that
$$
|y_{i}-x_{i}| = d(x_{r}, x_{l})/2
$$
where $x_{r}$ and $x_{l}$ denote the two points immediately to the
right and left of $x_{i}$ in $\Xi_{n}$, and $d(x_{r}, x_{l})$
denotes the Euclidean length of the smaller arc connecting $x_{r}$
and $x_{l}$. Let us assume that $n\ge 0$ is large enough such that
$d(x_{i}, x_{r})< 1$ holds for any two adjacent points $x_{i}$ and
$x_{r}$ in $\Xi_{n}$.
Let $x_{i}$ and $x_{r}$ be any two adjacent points in $\Xi_{n}$.
Connect $y_{i}$ and $y_{r}$ by a straight segment. Then the three
straight segments $[x_{i}, y_{i}]$, $[y_{i}, y_{r}]$, $[x_{r},
y_{r}]$, and the arc segment $[x_{i}, x_{r}]$ bound a domain,
which is called a cell of level $n$. It follows that the union of
all the cells of level $n$ is an annulus with $\Bbb T$ being the
outer boundary component. Let us denote this annulus by $Y_{n}$.
Let $K > 1$. Two straight segments $I$ and $J$ are called
$K$-commensurable if $|J|/K < |I| < K |J|$. From the construction
of the cells, Theorem~\ref{SH}, Lemma~\ref{dyn-cell}, and
Lemma~\ref{geom}, one has the following lemma.
\begin{lem}\label{good geometry}
The four sides of each cell are $K$-commensurable for some $K > 1$
dependent only on $g_{\theta}$. Furthermore, each cell $E$ of level
$n+2$ is well contained in some cell $E'$ of level $n$ in the sense
that there is a uniform $0< \sigma < 1$ such that the ratio of the
length of each side of $E$ to the length of the corresponding side
of $E'$ is less than $\sigma$.
\end{lem}
Let $h: {\Bbb T} \to {\Bbb T}$ be the homeomorphism such that $h(1)
= 1$ and $g_{\theta}|{\Bbb T}(z) = h^{-1} \circ R_{\alpha} \circ
h(z)$. Then by Yoccoz's extension theorem (see \cite{Yo2} or Theorem
6.5 of \cite{PZ}), we have
\begin{lem}\label{Yoccoz extension}
There is a $C > 0$ such that the map $h$ can be extended to a
homeomorphism $H: \Delta \to \Delta$ whose dilatation in $Y_{n}$ is
at most $C(1 + (\log a_{n+2})^{2})$.
\end{lem}
By composing with a quasiconformal homeomorphism of the unit disk to
itself which fixes $1$, we may assume that $H(0) = 0$. Let
$$
\nu_{H} = \frac{\overline{\partial} H }{\partial H} \frac{d
\overline{z}}{d z}
$$
be the Beltrami differential of $H$ in $\Delta$. Define
\begin{equation} \widetilde{g}_{\theta}(z) =
\begin{cases}
g_{\theta}(z) & \text{for $z \in {\Bbb C} - \Delta$}, \\
H^{-1}\circ R_{\alpha}\circ H(z) & \text{
for $z \in \Delta$}.
\end{cases}
\end{equation}
It follows that $\nu_{H}$ is $\widetilde{g}_{\theta}$-invariant. Let
$\nu$ denote the Beltrami differential on the whole complex plane
obtained by pulling back $\nu_{H}$ under the
iterates of $\widetilde{g}_{\theta}$.
Define
\begin{equation} \widetilde{G}_{\theta}(z) =
\begin{cases}
G_{\theta}(z) & \text{for $z \in {\Bbb C} - \Delta$}, \\
\Phi^{-1}\circ H^{-1}\circ R_{\alpha}\circ H \circ \Phi(z) & \text{
for $z \in \Delta$}.
\end{cases}
\end{equation}
Here the branch of $\Phi^{-1}$ is taken to be such that
$$\Phi^{-1}\circ H^{-1}\circ R_{\alpha}\circ H \circ \Phi(1) =
G_{\theta}(1).$$ Let $\mu$ be the Beltrami differential on the
complex plane obtained by pulling back $\nu$ under
the square map $\Phi$. The proof of the following lemma is direct,
and we leave it to the reader.
\begin{lem}\label{final}
The map $\widetilde{G}_{\theta}$ is odd. The Beltrami differential
$\mu$ is $\widetilde{G}_{\theta}$-invariant, and moreover, $\mu(z) =
\mu(-z)$.
\end{lem}
\section{The integrability of $\mu$}
The purpose of this section is to prove the integrability of $\mu$.
\subsection{The integrability of $\nu$ implies the integrability of
$\mu$}
\begin{lem}\label{equi}
If $\nu$ satisfies the condition~(\ref{integrability}), then so does
$\mu$ with the same $0< \epsilon_{0}< 1$ but possibly different
constants $M
> 0$ and $\alpha
> 0$.
\end{lem}
\begin{proof}
Let $\Phi: z \to z^{2}$ be the square map defined in $\S3$. It is
sufficient to prove that there exists a $C
> 0$ such that for any measurable set $E \subset {\Bbb C}$,
the following inequality holds,
\begin{equation}\label{growth}
area(\Phi^{-1}(E)) < C area(E)^{1/2}.
\end{equation}
To show this, let $E_{1} = E \cap \overline{\Delta}$ and $E_{2} = E
\cap ({\Bbb C} \setminus \Delta)$. It is sufficient to prove
(\ref{growth}) holds for both $E_{1}$ and $E_{2}$. Since the
transform $\zeta = 1/z$ commutes with $\Phi$ and preserves the
spherical metric $|dz|/(1 + |z|^{2})$ and maps $E_{2}$ to some
subset of $\overline{\Delta}$, we need only prove (\ref{growth})
for $E_{1}$. Note that in $\overline{\Delta}$, the Euclidean area is
equivalent to the spherical area. Thus it is sufficient to prove
(\ref{growth}) in the case of Euclidean area. Note that
$$
\int_{E_{1}}dxdy = 2 \int_{\Phi^{-1}(E_{1})}(s^{2} + t^{2}) dsdt.
$$
It follows that for given $\int_{E_{1}}dxdy$,
$\int_{\Phi^{-1}(E_{1})}dsdt$ attains its maximum when
$\Phi^{-1}(E_{1})$ is a Euclidean disk centered at the origin. This
implies (\ref{growth}) in the case of Euclidean area and the lemma
follows.
\end{proof}
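The extremal case in the proof can be made explicit: if $E = B_{\rho}(0)$ then $\Phi^{-1}(E) = B_{\sqrt{\rho}}(0)$, so $area(\Phi^{-1}(E)) = \pi\rho = \sqrt{\pi}\,(\pi\rho^{2})^{1/2}$, i.e. (\ref{growth}) holds with equality for $C = \sqrt{\pi}$ in the Euclidean setting. The sketch below (function names are ours; a numerical sanity check only, in Euclidean area) verifies this and checks the strict inequality for one off-center disk by grid counting.

```python
import math

# Sanity check for the extremal case: with Phi(z) = z^2, a centered disk
# E = B_rho(0) has Phi^{-1}(E) = B_{sqrt(rho)}(0), so
# area(Phi^{-1}(E)) = sqrt(pi) * area(E)**0.5 exactly; any other set of
# the same area has a smaller preimage (rearrangement argument above).
def area_preimage_grid(center, radius, n=400):
    """Euclidean area of Phi^{-1}(B_radius(center)) by grid counting on [-1,1]^2."""
    h = 2.0 / n
    count = 0
    for i in range(n):
        for j in range(n):
            x = -1.0 + (i + 0.5) * h
            y = -1.0 + (j + 0.5) * h
            # z^2 = (x^2 - y^2) + 2xy i
            u, v = x * x - y * y, 2.0 * x * y
            if (u - center) ** 2 + v * v < radius ** 2:
                count += 1
    return count * h * h

rho = 0.25
# centered disk: exact equality area(pre) = pi*rho = sqrt(pi)*area(E)**0.5
assert abs(math.pi * rho - math.sqrt(math.pi) * math.sqrt(math.pi * rho**2)) < 1e-12

# off-center disk: strict inequality, checked numerically
area_E = math.pi * 0.3**2
bound = math.sqrt(math.pi) * math.sqrt(area_E)
print(area_preimage_grid(0.4, 0.3), "<", bound)
```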
Let
$$
X = \{z \in {\Bbb C} \setminus \overline{\Delta}\:\big{|}\:
g_{\theta}^{k}(z) \in \Delta \hbox{ for some integer } k > 0\}.
$$
For each $z \in X$, let $k_{z}$ be the least integer such that
$g_{\theta}^{k_{z}}(z) \in \Delta$. Define
$$
X_{n} = \{z \in X \:\big{|}\: g_{\theta}^{k_{z}}(z) \in Y_{n}\}.
$$
By Lemma~\ref{Yoccoz extension} and the condition that $\log a_{n} =
O(\sqrt n)$, we have
\begin{lem}\label{power law}
If there exist $C > 0$ and $0< \delta < 1$ such that $area(X_{n}) <
C \delta^{n}$ holds for all $n$ large enough, then $\nu$ satisfies
the condition~(\ref{integrability}).
\end{lem}
It is clear that Lemma~\ref{power law} can be further reduced to
the next lemma.
\begin{lem}\label{main lemma}
If there exist $C > 0$, $0< \epsilon < 1$, and $0< \delta < 1$ such
that
$$
area(X_{n+2}) \le C \epsilon^{n} + \delta \: area(X_{n}),
$$
then $\nu$ satisfies the condition~(\ref{integrability}).
\end{lem}
The remainder of this section is devoted to the proof of
Lemma~\ref{main lemma}.
\subsection{A covering lemma}
For $z \in {\Bbb C}$ and $r > 0$, let $B_{r}(z)$ denote the
Euclidean disk with radius $r$ and center at $z$.
\begin{lem}\label{covering lemma}
Let $K > 1$. Then there is a constant $L > 1$ depending only on $K$
such that for any finite family of pairs of sets $\{(U_{i},
V_{i})\}_{i \in \Lambda}$ in $\Bbb C$, if for each $i \in \Lambda$,
there exist $x_{i} \in V_{i}$ and $r_{i} > 0$ such that
$$
B_{r_{i}}(x_{i}) \subset V_{i} \subset U_{i} \subset
B_{Kr_{i}}(x_{i}),
$$
then there is a subfamily $\sigma_{0}$ of $\Lambda$ such that all
$B_{r_{j}}(x_{j}), j \in \sigma_{0}$, are disjoint, and moreover,
$$
\bigcup_{i \in \Lambda} U_{i} \subset \bigcup_{j \in \sigma_{0}}
B_{Lr_{j}}(x_{j}).
$$
\end{lem}
\begin{proof}
Let us simply denote $B_{r_{i}}(x_{i})$ by $B_{i}$. It is sufficient
to prove the worst case, that is, $V_{i} = B_{i}$ and $U_{i} =
B_{Kr_{i}}(x_{i})$. By considering the subfamily of $\Lambda$ which
consists of all those $i$ such that $B_{i}$ is maximal (that is,
$B_{i}$ is not contained in any other $B_{j}$), we may assume that
for any $i \ne j$ in $\Lambda$, $B_{i}$ is not contained in $B_{j}$.
Let $\Sigma$ be the collection of all non-empty
subsets $\sigma$ of $\Lambda$ such that the
disks
$$
B_{i}, i \in \sigma
$$ are pairwise disjoint. Clearly any subset of $\Lambda$
which contains exactly one element must belong to $\Sigma$. It
follows that $\Sigma$ is finite and non-empty. Let $\sigma_{0} \in
\Sigma$ be such that
$$
m (\bigcup_{i \in \sigma_{0}} B_{i}) = \max_{\sigma\in\Sigma} m
(\bigcup_{i \in \sigma} B_{i})
$$
where $m$ denotes the Euclidean area. Now let us prove that there is
an $L > 1$ depending only on $K$ such that for any $i \in \Lambda$,
there is some $j \in \sigma_{0}$ with $U_{i} \subset
B_{Lr_{j}}(x_{j})$.
In fact, if $i \in \sigma_{0}$, we can take $L = K$ and $j = i$. We
may assume that $i \notin \sigma_{0}$. By the maximal property of
$\sigma_{0}$, the disk $B_{i}$ must intersect at least one $B_{j}$
for some $j \in \sigma_{0}$. Let
$$
\Theta = \{j \in \sigma_{0}\:\big{|}\: B_{i} \cap B_{j} \ne
\emptyset\}.
$$
It follows from the maximal property of $\sigma_{0}$ again that
\begin{equation}\label{cover area}
m(B_{i}) \le m(\bigcup_{j\in \Theta} B_{j}).
\end{equation}
(Otherwise one could replace all the
disks $B_{j}, j \in \Theta$, by $B_{i}$; the total Euclidean area
would then increase, contradicting the maximality of
$\sigma_{0}$.)
Since, by assumption, no $B_{j}$, $j \in \Theta$, is
completely contained in $B_{i}$, the boundary circle of each such
$B_{j}$ intersects the boundary circle of $B_{i}$. It thus
follows that
$$
r_{i} \le 8 \max _{j \in \Theta} r_{j}.
$$
(Otherwise, the union of the $B_{j}$ would be a proper subset of
the annulus
$$\{z\:\big{|}\: \frac{3}{4} r_{i} < |z - x_{i}| < \frac{5}{4}
r_{i}\},$$ whose Euclidean area is equal to that of $B_{i}$; this
contradicts (\ref{cover area}).)
Let $L = 8K + 9$. Let $j \in \Theta$ be such that $r_{j}$ attains
the maximum of the $r_{l}$, $l \in \Theta$. It is easy to see that
$U_{i} \subset B_{Kr_{i}}(x_{i}) \subset B_{Lr_{j}}(x_{j})$. The
proof of the lemma is completed.
\end{proof}
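Lemma~\ref{covering lemma} is a Vitali-type covering statement. The proof above selects a subfamily of maximal total area; the classical greedy selection (process balls in decreasing order of radius, keeping each one that is disjoint from all balls already kept) gives the same kind of conclusion with enlargement factor $3$ in place of $L$. A sketch in Python (names are ours; a sanity check, not the proof's construction):

```python
import math

def greedy_vitali(balls):
    """Greedy Vitali selection: balls is a list of (x, y, r).
    Returns indices of a pairwise-disjoint subfamily such that every
    input ball is contained in the 3-fold enlargement of a chosen ball
    (a rejected ball meets an earlier, hence larger, chosen ball)."""
    order = sorted(range(len(balls)), key=lambda i: -balls[i][2])
    chosen = []
    for i in order:
        xi, yi, ri = balls[i]
        if all((xi - balls[j][0])**2 + (yi - balls[j][1])**2
               >= (ri + balls[j][2])**2 for j in chosen):
            chosen.append(i)
    return chosen

def contained_in_enlargement(ball, big, factor=3.0):
    x, y, r = ball
    X, Y, R = big
    return math.hypot(x - X, y - Y) + r <= factor * R + 1e-9

balls = [(0, 0, 1), (1.5, 0, 1), (0.2, 0.1, 0.3), (5, 5, 2), (6, 5, 0.5)]
chosen = greedy_vitali(balls)
# every ball lies in the 3-enlargement of some chosen ball
assert all(any(contained_in_enlargement(b, balls[j]) for j in chosen)
           for b in balls)
print(chosen)
```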
Recall that we use $area(X)$ to denote the spherical area of a
subset $X \subset \widehat{\Bbb C}$. Let $\Omega = {\Bbb C}
\setminus \overline{\Delta}$. For a subset $E \subset \Omega$, let
$diam_{\Omega}(E)$ denote the diameter of $E$ with respect to the
hyperbolic metric in $\Omega$.
\begin{cor}\label{spherical area}
Let $\{(U_{i}, V_{i})\}_{i \in \Lambda}$ be a finite family of
pairs of sets in $\Omega$ satisfying the condition in
Lemma~\ref{covering lemma} for some $1 < K < \infty$. If in addition
\begin{equation}\label{s-e}
diam_{\Omega}(U_{i})< K \end{equation}
for each $i \in \Lambda$, then
$$
\frac{area(\bigcup_{i\in\Lambda}
V_{i})}{area(\bigcup_{i\in\Lambda}U_{i})} \ge \lambda(K)
$$
where $0< \lambda(K) < 1$ is a constant dependent only on $K$.
\end{cor}
\begin{proof}
Let $\sigma_{0} \subset \Lambda$ and $L$ be given as in
Lemma~\ref{covering lemma}. Then for any $i \in \Lambda$, from the
proof of Lemma~\ref{covering lemma}, there is some $j \in
\sigma_{0}$ such that $U_{i} \subset B_{Lr_{j}}(x_{j})$ and
$B_{r_{j}}(x_{j})$ intersects $B_{r_{i}}(x_{i})$. Since
$B_{r_{i}}(x_{i}) \subset U_{i}$ and $B_{r_{j}}(x_{j}) \subset
U_{j}$, we get $U_{i} \cap U_{j} \ne \emptyset$. This, together
with (\ref{s-e}), implies
\begin{equation}\label{s-e'}
diam_{\Omega} (U_{i} \cup U_{j}) < 2 K.
\end{equation}
By (\ref{s-e'}), there is some constant $1 < \ell(K)< \infty$
depending only on $K$ such that
\begin{equation}\label{s-e7}
\sup_{z, \xi \in U_{i}\cup U_{j}} \frac{1 + |z|^{2}}{1 + |\xi|^{2}}
< \ell(K).
\end{equation}
Since the spherical metric is given by $|dz|/(1 + |z|^{2})$, this
implies that the distortion of the spherical metric in $U_{i} \cup
U_{j}$ is bounded by $\ell(K)$. But on the other hand, by $U_{i}
\subset B_{Lr_{j}}(x_{j})$ we have
\begin{equation}\label{s-e8}
m(U_{i}) \le L^{2} m ( B_{r_{j}}(x_{j})),
\end{equation}
where $m(\cdot)$ denotes the Euclidean area. Since $B_{r_{j}}(x_{j})
\subset U_{j} \subset U_{i} \cup U_{j}$, it follows from
(\ref{s-e7}) and (\ref{s-e8}) that
$$
area(U_{i}) \le L^{2} \ell (K) area (B_{r_{j}}(x_{j})).
$$
Since $L$ depends only on $K$ and all $B_{r_{j}}(x_{j})$, $j \in
\sigma_{0}$, are disjoint, the corollary then follows by taking
$\lambda(K) = 1/(L^{2} \ell(K))$.
\end{proof}
\subsection{Hyperbolic neighborhoods}
\begin{figure}
\bigskip
\begin{center}
\centertexdraw { \drawdim cm \linewd 0.02 \move(-2 1)
\move(-3 -3.5) \lvec(3 -3.5)
\move(-1.2 -1.78) \larc r: 2 sd:300 ed:240
\move(-0.68 -2.6) \larc r: 1 sd:300 ed:240
\move(0.4 -2.48) \larc r: 1.2 sd:300 ed:240
\move(-1.7 -2.2) \larc r: 1.4 sd:290 ed:250
\move(-2.4 -3.85) \htext{$x_{q_{n}}$}
\move(-1.4 -3.85) \htext{$x_{-q_{n+1}}$}
\move(-0.2 -3.85) \htext{$1$}
\move(1 -3.85) \htext{$x_{q_{n+1}}$}
\move(-0.2 -3.5) \lvec(2.2 0.7)
\move(-0.2 -3.5) \lvec(-2.6 0.7) }
\end{center}
\vspace{0.2cm} \caption{The domains $A_{n}$, $B_{n}$, $C_{n}$, and $D_{n}$.}
\end{figure}
Let us first introduce some concepts. Let $I
\subset \Bbb T$ be an open segment. Set
$$
\Omega_{I} = {\Bbb C} \setminus (\{0\} \cup ({\Bbb T} \setminus I)).
$$
Let $d_{\Omega_{I}}(\cdot, \cdot)$ denote the hyperbolic distance in
$\Omega_{I}$. For $d > 0$, the hyperbolic neighborhood of $I$ is
defined to be
$$
H_{d}(I) = \{z \in \Omega_{I}\:\big{|}\: d_{\Omega_{I}}(z, I) < d\}.
$$
For given $d > 0$, when $I$ is small, $H_{d}(I)$ is close to the
corresponding hyperbolic neighborhood of $I$ in the slit plane, and
hence is approximately the domain bounded by two arcs of Euclidean
circles which are symmetric about $\Bbb T$. In the following, we
always assume that the arc segment $I$ involved is small, and
therefore regard $H_{d}(I)$ as the domain bounded by two symmetric
arc segments of Euclidean circles. Let $\alpha$ be the exterior
angle between $\partial H_{d}(I)$ and $\Bbb T$. For the convenience
of our later discussions, let us use $H_{\alpha}(I)$ to denote the
domain $H_{d}(I)$.
\subsection{The construction of the set $Z_{n}$} Now take $0<
\beta < \alpha < \pi/3$ and let them be fixed throughout the
following sections. Recall that for $i \in \Bbb Z$, $x_{i}$ is the
point in $\Bbb T$ such that $(g_{\theta}|{\Bbb T})^{i}(x_{i}) = 1$.
For $n > 0$, let
$$
I_{n} = [1, x_{q_{n}}], \: K_{n} = [1, x_{-q_{n+1}}], \hbox{ and
}L_{n} = [x_{q_{n}}, x_{-q_{n+1}}].
$$
Define
$$
A_{n} = H_{\alpha}(I_{n})\setminus \overline{\Delta},
$$
$$
B_{n} = H_{\alpha}(I_{n+1})\setminus \overline{\Delta} = A_{n+1},
$$
$$
C_{n} = H_{\alpha}(K_{n})\setminus \overline{\Delta},
$$
and
$$
D_{n} = H_{\beta}(L_{n})\setminus \overline{\Delta}.
$$
\emph{Note}. In the following, we assume that the integer $n$
in the discussion is large enough such that $I_{n}$, $K_{n}$, and
$L_{n}$ are all small and hence all the domains $H_{\alpha}(I_{n})$,
$H_{\alpha}(K_{n})$, and $H_{\beta}(L_{n})$ are simply connected.
For an arc segment $I \subset \Bbb T$, let $I^{\circ}$ denote the
interior of $I$.
For $0 \le i \le q_{n}$, $g_{\theta}^{i}(1) \notin
I_{n+1}^{\circ}$. Let $B_{n}^{i}$ denote the domain which is
attached to the segment $[x_{i}, x_{i+q_{n+1}}]$ such that
$g_{\theta}^{i}: B_{n}^{i} \to B_{n}$ is a homeomorphism.
For $0 \le i \le q_{n+1}$, $g_{\theta}^{i}(1) \notin K_{n}^{\circ}$. Let
$C_{n}^{i}$ denote the domain which is attached to the segment
$[x_{i-q_{n+1}}, x_{i}]$ such that $g_{\theta}^{i}: C_{n}^{i} \to
C_{n}$ is a homeomorphism.
For $0 \le i \le q_{n+1}-1$, $g_{\theta}^{i}(1) \notin
L_{n}^{\circ}$. Let $D_{n}^{i}$ denote the domain which is attached
to the segment $[x_{i+q_{n}}, x_{i-q_{n+1}}]$ such that
$g_{\theta}^{i}: D_{n}^{i} \to D_{n}$ is a homeomorphism.
\begin{lem}\label{sch}
$B_{n}^{i} \subset H_{\alpha}(I_{n+1}^{i})$ for $0\le i \le q_{n}$, where
$I_{n+1}^{i} = [x_{i}, x_{i+q_{n+1}}]$; $C_{n}^{i} \subset
H_{\alpha}([x_{i}, x_{i-q_{n+1}}])$ for $0\le i \le
q_{n+1}$; and $D_{n}^{i} \subset H_{\beta}([x_{q_{n}+i},
x_{i-q_{n+1}}])$ for $0\le i \le q_{n+1}-1$.
\end{lem}
\begin{proof}
Let us prove the first assertion and the other two can be proved in
the same way. For $0\le i \le q_{n}$, let $P_{i}$ denote the set
of the critical values of $g_{\theta}^{i}$. Then
$$
P_{i} = \{g_{\theta}^{j}(1) \:\big{|}\: 1 \le j \le i\}.
$$
Note that $g_{\theta}$ has exactly one critical value. It follows
that $P_{i} \cap \Omega_{I_{n+1}} = \emptyset$. Let $\Psi_{i}$
denote the inverse branch of $g_{\theta}^{i}$ which maps $I_{n+1}$
to $I_{n+1}^{i}$. Since $H_{\alpha}(I_{n+1})$ is simply connected by
assumption, $\Psi_{i}$ can be holomorphically extended to
$H_{\alpha}(I_{n+1})$. But since $\Omega_{I_{n+1}}$ is not simply
connected, the map $\Psi_{i}$ may not be extended to a holomorphic
function on $\Omega_{I_{n+1}}$. To avoid this problem, let us
consider the holomorphic universal covering map $\pi: \Delta \to
\Omega_{I_{n+1}}$. Since $P_{i} \cap \Omega_{I_{n+1}} = \emptyset$,
$\Psi_{i}$ can be lifted to a holomorphic function
$\widetilde{\Psi}_{i}: \Delta \to \Omega_{I_{n+1}^{i}}$ such that
$$
\pi = g_{\theta}^{i} \circ \widetilde{\Psi}_{i}.
$$
This, together with the Schwarz Contraction Principle, implies that
the map $\Psi_{i}$ maps $H_{\alpha}(I_{n+1})$ into
$H_{\alpha}(I_{n+1}^{i})$. The first assertion then follows. The
other two assertions can be proved in the same way.
\end{proof}
Define
\begin{equation}\label{imp-set}
Z_{n} = \bigcup_{0\le i\le q_{n}} B_{n}^{i} \cup \bigcup_{0\le
i\le q_{n+1}} C_{n}^{i} \cup \bigcup_{0\le i \le q_{n+1}-1}
D_{n}^{i}.
\end{equation}
See Figure 1 for an illustration of $A_{n}, B_{n}, C_{n}$, and
$D_{n}$. The cone, whose two sides make an angle of $\pi/3$ with $\Bbb
T$, represents part of the pre-image of $\Delta$.
\subsection{The construction of the family $\{(U_{i}, V_{i})\}_{i \in \Lambda}$}
Let $\Omega = {\Bbb C} \setminus \overline{\Delta}$. Let
$diam_{\Omega} (\cdot)$ denote the hyperbolic diameter of a subset
in $\Omega$. Let $diam(\cdot)$ and $dist(\cdot,\cdot)$ denote
the diameter and distance with respect to the Euclidean metric.
Recall that $T(z) = \sin (z)$. The following is a technical lemma
about the distortion of $T^{-1}$ in a bounded set.
For two quantities $x, y > 0$, we write $x \succeq y$ if there is
some universal constant $ 0< K < \infty$ such that $x > K y$. We
write $x \preceq y$ if $y \succeq x$. We write $x \asymp y$ if $x
\succeq y$ and $y \succeq x$ both holds.
\begin{lem}\label{b-p}
Let $1 < M < \infty$. Then there exists a constant $1< \tau(M) <
\infty$ depending only on $M$ such that for any $r > 0$ and $a \in
\Bbb C$ with $B_{Mr}(a) \subset B_{2}(0)$, and any component $U$
of $T^{-1}(B_{Mr}(a))$ and any component $V$ of $T^{-1}(B_{r}(a))$
with $V \subset U$, there exist an $r'
> 0$ and an $a' \in \Bbb C$ such that
$$
B_{r'}(a') \subset V \subset U \subset B_{\tau(M) r'} (a').
$$
\end{lem}
\begin{proof}
By a compactness argument, we may assume that $r > 0$ is small and
$a$ is contained in a small neighborhood of one of the critical
values of $T(z)$, $1$ or $-1$. Without loss of generality, let us
assume $a$ is close to $1$.
By a direct calculation, it is not difficult to see that there
exists a uniform $1 < L < \infty$ such that for any small Euclidean
disk $B_{R}(w)$ near $1$, if $W$ is a component of
$T^{-1}(B_{R}(w))$, then one can find $z \in \Bbb C$ and $R' > 0$
such that
\begin{equation}\label{ll-1}
B_{R'}(z) \subset W \subset B_{LR'}(z)
\end{equation}
with $R' \asymp \sqrt{R + |w-1|} - \sqrt{|w-1|}$.
Now we have two cases. In the first case, $r < |a - 1|/10M$. By
(\ref{ll-1}), we have
$$
diam U \preceq \sqrt{Mr + |a-1|} - \sqrt{|a-1|}\preceq
Mr/\sqrt{|a-1|}.
$$
By (\ref{ll-1}), there is an $a' \in V$ and $r' > 0$ such that
$B_{r'}(a') \subset V$ with $$ r' \succeq \sqrt{r + |a-1|} -
\sqrt{|a-1|}\succeq r/\sqrt{|a-1|}.$$ This proves the lemma in the
first case.
In the second case, $r \ge |a - 1|/10M$. Then
$$
diam U \preceq \sqrt{Mr + |a-1|} \preceq \sqrt{11Mr}.
$$
By (\ref{ll-1}), there is an $a' \in V$ and $r' > 0$ such that
$B_{r'}(a') \subset V$ with
$$
r' \succeq \sqrt{r + |a-1|} - \sqrt{|a-1|}\succeq \sqrt{r}(\sqrt{1 +
10M} - \sqrt{10M}).
$$ In the last inequality we use the fact that $\sqrt{1 + x} - \sqrt{x}$
is decreasing for $x >
0$. This proves the lemma in the second case and Lemma~\ref{b-p}
follows.
\end{proof}
\begin{defi}\label{admiss}{\rm
Let $1 < K < \infty$ and $z \in X_{n+2}$. We say $z$ is associated
to a $K$-admissible pair $(U, V)$ if $V \subset U \subset \Omega$
are two topological disks such that
\begin{itemize}
\item[1.] $z \in U$,
\item[2.] $V \subset X_{n}\setminus X_{n+2}$,
\item[3.] $diam_{\Omega}(U) < K$,
\item[4.] there exist $x \in V$ and $r > 0$ such that $B_{r}(x)
\subset V \subset U \subset B_{Kr}(x)$.
\end{itemize}}
\end{defi}
From now on, let $v = g_{\theta}(1)$ denote the unique critical
value of $g_{\theta}$. Let
$$
\wp = 1/1000
$$
be fixed throughout the following discussion.
\begin{lem}\label{adm1}
There is a uniform $1 < K < \infty$ such that for all $n$ large
enough and any $z \in X_{n+2}$, if $\omega = g_{\theta}(z) \in
Y_{n+2}$ and $z \notin A_{n}\cup B_{n}$, then $z$ is associated to
some $K$-admissible pair $(U, V)$.
\end{lem}
\begin{proof}
We have two cases. In the first case, $d(z, \Bbb T) \ge \wp$. In
the second case, $d(z, \Bbb T) < \wp$.
Suppose that we are in the first case. By assuming that $n$ is large
enough, we can always take a Euclidean disk $B$ in $Y_{n}\setminus
Y_{n+2}$ and a small open topological disk $A$ such that
\begin{itemize}
\item[1.] $\omega \in A$,
\item[2.] $B \subset A$,
\item[3.] $diam(A) \preceq diam(B)$.
\end{itemize}
Note that for all $n$ large enough, we can take $A$ small so that
the component of $g_{\theta}^{-1}(A)$ which contains $z$, say $U$,
lies outside $\overline{\Delta}$. That is, $U \subset
\Omega$. Let $V$ be one of the components of $g_{\theta}^{-1}(B)$
such that $V \subset U$. By using the previous notations, we have
$$
\omega = g_{\theta}(z) = \Phi \circ R_{t} \circ T \circ \psi \circ
\Phi^{-1}(z)
$$
where $\psi: \widehat{\Bbb C} \setminus \overline{\Delta} \to
\widehat{\Bbb C} \setminus \overline{D}$, $T: z \to \sin(z)$,
$R_{t}: z \to e^{2 \pi i t} z$, and $\Phi: z \to z^{2}$ are the maps
as defined in $\S3$.
Since $A$ is a small open topological disk which intersects
$\Delta$, $\Phi^{-1}(A)$, and hence $R_{t}^{-1}\circ \Phi^{-1}(A)$,
are small open topological disks which also intersect $\Delta$ (we
take one of the branches of $\Phi^{-1}$). By taking $A$ small, the
distortion of $R_{t}^{-1} \circ \Phi^{-1}$ on $A$ is uniformly
bounded, and from the third property above, we can thus find a
point $a \in \Bbb C$, an $r > 0$, and a universal $1 < M < \infty$
such that
\begin{equation}\label{idd}
B_{r}(a) \subset R_{t}^{-1} \circ \Phi^{-1}(B) \subset R_{t}^{-1}
\circ \Phi^{-1}(A) \subset B_{Mr}(a) \subset B_{2}(0).
\end{equation}
Since $T$ is periodic, the diameter of any component of $$T^{-1}
\circ R_{t}^{-1}\circ \Phi^{-1}(A)$$ has a uniform upper bound.
Since $d(z, \Bbb T) \ge \wp$ and the diameter of $A$ is small, it
follows that $d((\psi \circ \Phi^{-1})(z), \partial D) \ge
$\kappa(\wp)$, where $\kappa(\wp) > 0$ is some constant depending only
on $\wp$. Let $A'$ denote the component of $T^{-1} \circ R_{t}^{-1}
\circ \Phi^{-1}(A)$ which contains $(\psi \circ \Phi^{-1})(z)$.
Since $T$ is periodic, by taking $A$ small, we can make $A'$ small
and $d(A',
\partial D) > \kappa(\wp)/2$. So, by taking $A$ small, we can
always assume that
\begin{equation}\label{hd}
diam_{{\Bbb C} \setminus \overline{D}}(A') < 1.
\end{equation}
Let $U = (\Phi \circ \psi^{-1}) (A')$. Since $\Phi \circ \psi^{-1}:
{\Bbb C} \setminus \overline{D} \to {\Bbb C}\setminus
\overline{\Delta}$ is a holomorphic map, it follows from (\ref{hd})
and the Schwarz Contraction Principle that
\begin{equation}\label{hdd}
diam_{{\Bbb C} \setminus \overline{\Delta}}(U) < 1.
\end{equation}
This verifies the property (3) in Definition~\ref{admiss}. The first
two properties of Definition~\ref{admiss} hold automatically. The
last property follows since the distortion caused by each map in
the composition
$$
g_{\theta}^{-1} = \Phi \circ \psi^{-1} \circ T^{-1} \circ R_{t}^{-1}
\circ \Phi^{-1}
$$
is bounded by some uniform constant provided that $A$ is small. In
fact, by (\ref{idd}) and Lemma~\ref{b-p}, it is sufficient to show
that the distortion of $\Phi\circ \psi^{-1}$ on $A'$ is uniformly
bounded. From (\ref{hd}), it follows that $diam(A') \preceq dist(A',
\partial D)$.
This implies that $\psi^{-1}$ can be defined in a definitely larger
domain containing $\overline{A'}$. It follows from Koebe's
distortion theorem that the distortion of $\psi^{-1}$ on $A'$ is
uniformly bounded. Since $A'$ is small and since the derivative of
$\psi^{-1}$ is bounded in a neighborhood of infinity, it
follows that $diam(\psi^{-1}(A')) < 1$ provided that $A$ is small.
This then implies that the distortion of $\Phi$ on $\psi^{-1}(A')$
is uniformly bounded. The last property in Definition~\ref{admiss}
then follows.
Now suppose that we are in the second case, that is, $d(z, \Bbb
T) < \wp$. Since $z \notin A_{n}\cup B_{n}$, it follows that
\begin{equation}\label{com1}
dist(\omega, v) \succeq |I_{n}^{q_{n+1}-1}|.
\end{equation}
Recall that $I_{n} = [1, x_{q_{n}}]$ and $I_{n}^{i}$ is the arc
segment on $\Bbb T$ such that $g_{\theta}^{i}(I_{n}^{i}) = I_{n}$.
Inequality (\ref{com1}) then follows immediately from the fact that
the part of the cone which is contained in $A_{n}\cup B_{n}$ has size
$\asymp |I_{n}| \asymp |I_{n}^{q_{n+1}}|$; see Figure 1.
Note that $I_{n}^{q_{n+1}-1}$ is the interval in the
\emph{dynamical partition} of level $n$ which contains $v$. Let
$I\subset \Bbb T$ be any interval in the \emph{cell partition} of
level $n$ which contains $v$ or has $v$ as one of its end points (in
the latter case, there are two such intervals in the \emph{cell
partition} of level $n$). The inequality~(\ref{com1}), together
with Theorem~\ref{SH} and Lemma~\ref{dyn-cell}, implies that
\begin{equation}\label{com2}
dist(\omega, v) \succeq |I|.
\end{equation}
Let $J' \subset J \subset \Bbb T$ be the corresponding intervals to
the cells of level $n+2$ and $n$ whose closures contain $\omega$.
Since any two adjacent intervals in the \emph{cell partition} are
commensurable (this is implied by Theorem~\ref{SH} and
Lemma~\ref{dyn-cell}), we have
\begin{equation}\label{com3}
dist(\omega, v) \succeq |J| > |J'|.
\end{equation}
In fact, if $J = I$, then (\ref{com3}) follows from (\ref{com2}).
Otherwise, let $I'$ denote the interval in the cell partition of
level $n$ which is between $I$ and $J$ and which is adjacent to $J$
(we write $I'$ rather than $M$ to avoid a clash with the constant of
(\ref{idd})). Then $|I'| \asymp |J|$ by Theorem~\ref{SH} and
Lemma~\ref{dyn-cell}. If $I' \ne I$, we must have $dist(\omega, v)
\succeq |I'|$ and (\ref{com3}) follows. If $I' = I$, then
(\ref{com3}) follows again from (\ref{com2}).
\begin{figure}
\bigskip
\begin{center}
\centertexdraw { \drawdim cm \linewd 0.02 \move(-2 1)
\move(-6 -2) \lvec(-2 -2)
\move(0 -2) \lvec(2 -2)
\move(-6 -3) \lvec(-2 -3) \move(-6 -2.5) \lvec(-2 -2.5)
\move(-5.8 -2) \lvec(-5.8 -3)
\move(-3.5 -2) \lvec(-3.5 -3)
\move(-2.2 -2) \lvec(-2.2 -3) \move(0.95 -2.35) \htext{$1$}
\move(-5 -2) \lvec(-5 -2.5)
\move(-4.2 -2) \lvec(-4.2 -2.5)
\move(-2.8 -2) \lvec(-2.8 -2.5)
\move(1 -2) \lvec(2 0) \move(1 -2) \lvec(0 0)
\move(-3 -2.75) \fcir f:0.7 r:0.15
\move(-4.6 -2) \fcir f:0.5 r:0.05
\move(-3.1 -2.3) \fcir f:0.5 r:0.05
\move(-4.7 -1.9) \htext{$v$}
\move(-3.4 -2.4) \htext{$\omega$}
\move(-3.1 -2.7) \lcir r:0.6
\move(0.6 -0.4) \fcir f:0.7 r:0.15
\move(0.76 -0.5) \lcir r:0.4
\move(0.6 -0.75) \fcir f:0.5 r:0.05
\move(0.7 -0.83) \htext{$z$}
\move(-1.6 -1.8)
\arrowheadtype t:V \avec(-0.6 -1.1)
\move(-1.15 -1.85) \htext{$g_{\theta}$}
}
\end{center}
\vspace{0.2cm} \caption{}
\end{figure}
It now follows from Lemma~\ref{good geometry} that there is a
Euclidean disk $B \subset X_{n} \setminus X_{n+2}$ such that $B$ and
$\omega$ are contained in the same cell of level $n$ and
\begin{equation}\label{com4}
diam(B) \asymp dist(\omega, B) \asymp |J'| \preceq dist(v, B).
\end{equation}
From (\ref{com3})
and (\ref{com4}), it follows that for such a Euclidean disk $B$, there
is an open topological disk $A \subset \Delta$ such that
\begin{itemize}
\item[1.] $\omega \in A$,
\item[2.] $B \subset A$,
\item[3.] $diam(A) \preceq diam(B) \preceq dist(v, A)$.
\end{itemize}
See Figure 2 for an illustration of the sets $A$ and $B$. Let $U$
and $V$ be the pull backs of $A$ and $B$ by $g_{\theta}$
respectively such that $z \in U$ and $V \subset U$. The first two
properties in Definition~\ref{admiss} hold automatically. Let us
verify the property (3). In fact, since $A \subset \Delta$ by
construction, $U$ is contained in the cone. From $diam(A) \preceq
dist(v, A)$, it follows that
$$
\frac{diam (U)}{dist(U, {\Bbb T})} < \rho
$$
for some uniform $\rho > 0$. This implies that $diam_{\Omega}(U) <
K$ where $K > 1$ is some constant depending only on $\rho$ and the
property (3) follows. Since $diam(A) \preceq dist(v, A)$,
$g_{\theta}^{-1}$ can thus be defined in a definitely larger domain
$E \supset \overline{A}$ such that ${\rm mod}(E \setminus
\overline{A})$ has a uniform positive lower bound. The last property
then follows from Koebe's distortion theorem.
\end{proof}
\begin{lem}\label{adm2}
There is a uniform $1 < K < \infty$ such that for any $0 \le i \le
q_{n}-1$ and $z \in X_{n+2}$, if $\omega = g_{\theta}(z) \in
B_{n}^{i}$ but $z \notin B_{n}^{i+1}$, then $z$ is associated to
some $K$-admissible pair $(U, V)$.
\end{lem}
\begin{proof}
Again we have two cases. In the first case, $d(z, \Bbb T) \ge
\wp$. In the second case, $d(z, \Bbb T) < \wp$.
Suppose that we are in the first case. Note that by Lemma~\ref{sch},
$\omega \in B_{n}^{i} \subset H_{\alpha}(I_{n+1}^{i})$. With the
aid of this fact and Lemmas~\ref{dyn-cell}, ~\ref{good geometry},
and Theorem~\ref{SH}, the proof of the first case can be completed
by using exactly the same argument as in the proof of the first case
of Lemma~\ref{adm1}. The reader can easily fill in the details of
the proof in this case.
Now suppose that we are in the second case. That is, $d(z, \Bbb T) <
\wp$. Note that $I_{n+1}^{i} \cap I_{n}^{q_{n+1}-1} = \emptyset$
and that by the third assertion of Lemma~\ref{real bound}, $v$
separates the interval $I_{n}^{q_{n+1}-1}$ into two
$L$-commensurable subintervals for some uniform $1< L < \infty$.
Since $\omega \in H_{\alpha}(I_{n+1}^{i})$, it follows that
\begin{equation}\label{ff-e}
dist(\omega, v) \succeq |I_{n}^{q_{n+1}-1}|.
\end{equation}
Let $I$ be the interval in the \emph{cell partition} of level $n$
which contains the interval $I_{n}^{q_{n+1}-1}$. In particular, $v
\in I$. By Theorem~\ref{SH} and Lemma~\ref{dyn-cell}, it follows
that $|I_{n}^{q_{n+1}-1}|\asymp |I|$ and therefore by (\ref{ff-e})
we have
\begin{equation}\label{re}
dist(\omega, v) \asymp dist(I_{n+1}^{i}, v) \asymp dist
(H_{\alpha}(I_{n+1}^{i}), v) \succeq |I|.
\end{equation}
Let $J$ be the interval in the \emph{cell partition} of level $n$
which contains $I_{n+1}^{i}$. It follows that $|J| \asymp |I_{n+1}^{i}|$
(in Figure 3, $J = I_{n+1}^{i}$). Since any two adjacent intervals
in the \emph{cell partition} are $L$-commensurable for some
uniform $L > 1$, by (\ref{re}) and the same argument as in the proof
of (\ref{com3}), we have
\begin{equation}\label{ree}
dist(\omega, v) \succeq |J|.
\end{equation}
Let $E$ be the cell of level $n$ corresponding to $J$. It follows
from (\ref{re}), (\ref{ree}) and Lemma~\ref{good geometry} that
there is a Euclidean disk $B \subset E \setminus X_{n+2}$ and a
topological disk $A \subset (\Delta \cup H_{\alpha}(I_{n+1}^{i}))$
such that
\begin{figure}
\bigskip
\begin{center}
\centertexdraw { \drawdim cm \linewd 0.02 \move(-2 1)
\move(-6 -2) \lvec(-2 -2)
\move(0 -2) \lvec(2 -2)
\move(-6 -3) \lvec(-2 -3)
\move(-6 -2.5) \lvec(-2 -2.5)
\move(-5.8 -2) \lvec(-5.8 -3)
\move(-4.2 -2) \lvec(-4.2 -3)
\move(-2.2 -2) \lvec(-2.2 -3)
\move(0.95 -2.35) \htext{$1$}
\move(-5 -2) \lvec(-5 -2.5)
\move(-3.5 -2) \lvec(-3.5 -2.5)
\move(-2.8 -2) \lvec(-2.8 -2.5)
\move(1 -2) \lvec(2 0) \move(1 -2) \lvec(0 0)
\move(-3 -2.75) \fcir f:0.7 r:0.2
\move(-5.6 -2) \fcir f:0.5 r:0.05
\move(-3.2 -1.3) \larc r: 1.2 sd:325 ed:215
\move(-3.1 -1.8) \fcir f:0.5 r:0.05
\move(-5.5 -1.9) \htext{$v$}
\move(-3.4 -1.8) \htext{$\omega$}
\move(-3.1 -2.3) \lcir r:0.8
\move(0.6 -0.4) \fcir f:0.7 r:0.15
\move(0.4 -0.5) \lcir r:0.5
\move(0.2 -0.75) \fcir f:0.5 r:0.05
\move(0 -0.7) \htext{$z$}
\move(-1.4 -1.8)
\arrowheadtype t:V \avec(-0.4 -1.1)
\move(-1 -1.85) \htext{$g_{\theta}$}
}
\end{center}
\vspace{0.2cm} \caption{}
\end{figure}
\begin{itemize}
\item[1.] $\omega \in A$,
\item[2.] $B \subset A$,
\item[3.] $diam(A) \preceq diam(B) \preceq |J| \preceq dist(v, A)$.
\end{itemize}
See Figure 3 for an illustration of the sets $A$ and $B$. Let $U$
and $V$ be the pull backs of $A$ and $B$ by $g_{\theta}$
respectively such that $z \in U$ and $V \subset U$. It is clear that
the first two properties of Definition~\ref{admiss} hold
automatically. Since $diam(A) \preceq dist(v, A)$,
$g_{\theta}^{-1}$ can be defined in a definitely larger domain
containing $A$, so the last property follows from Koebe's
distortion theorem.
Now let us prove the property (3). Since $A \subset (\Delta \cup
H_{\alpha}(I_{n+1}^{i}))$ and $diam(A) \preceq dist(v, A)$, it
follows that $diam(U)/dist(U, {\Bbb T}) < \rho$ for some uniform
$\rho
> 0$. This implies that $diam_{\Omega}(U) < K$ for some $K > 1$
depending only on $\rho$.
This proves the property (3) and Lemma~\ref{adm2} follows.
\end{proof}
\begin{lem}\label{adm3}
There is a uniform $1 < K < \infty$ such that for any $0 \le i \le
q_{n+1} -1$ and any $z \in X_{n+2}$ with $\omega = g_{\theta}(z)
\in C_{n}^{i}$, if $z \notin C_{n}^{i+1}$ for $0 \le i \le q_{n+1}
-2$ and $z \notin A_{n} \cup B_{n}$ for $i = q_{n+1} -1$, then $z$
is associated to some $K$-admissible pair $(U, V)$.
\end{lem}
\begin{proof}
Suppose that $0 \le i \le q_{n+1} -2$. As before, we have two
cases. In the first case, $d(z, \Bbb T) \ge \wp$. In the second
case, $d(z, \Bbb T) < \wp$. Again, the first case can be proved by
the same argument as in the proof of the first case of
Lemma~\ref{adm1}. So let us suppose that we are in the second case.
That is, $d(z, \Bbb T) < \wp$. By Lemma~\ref{sch}, $C_{n}^{i}
\subset H_{\alpha}([x_{i-q_{n+1}}, x_{i}])\subset
H_{\alpha}(I_{n}^{i})$ for all $0 \le i \le q_{n+1} -1$. Note that
$I_{n}^{i} \cap I_{n}^{q_{n+1}-1} = \emptyset$ and that by the third
assertion of Lemma~\ref{real bound}, $v$ separates the interval
$I_{n}^{q_{n+1}-1}$ into two $L$-commensurable subintervals for some
uniform $1 < L < \infty$. Since $\omega \in C_{n}^{i} \subset
H_{\alpha}(I_{n}^{i})$, it follows that
$$
dist(\omega, v) \succeq |I_{n}^{q_{n+1}-1}|.
$$
Then the same argument as in the proof of the second case of
Lemma~\ref{adm2} can be used to construct a $K$-admissible pair $(U,
V)$ associated to $z$. The reader can easily supply the details.
\begin{figure}
\bigskip
\begin{center}
\centertexdraw { \drawdim cm \linewd 0.02 \move(-2 2)
\move(-6 -2) \lvec(-2 -2)
\move(-4.5 -0.97) \larc r: 1.2 sd:300 ed:240
\move(-3.5 -1.3) \larc r: 0.8 sd:300 ed:240
\move(-3.9 -2) \lvec(-5.9 1.375)
\move(-3.9 -2) \lvec(-1.9 1.375)
\move(-3.9 -2) \lvec(-5.28 -0.97)
\move(-4.8 -0.1) \larc r: 1 sd:132 ed:240
\move(-4.5 -0.25) \fcir f:0.7 r:0.22
\move(-4.2 -0.2) \htext{$V$}
\move(-5.7 -0.115) \htext{$\Omega'$}
\move(-4.7 0) \lcir r:1.2
\move(-4.7 0.5) \htext{$U$}
\move(-5.3 -2.4) \htext{$x_{q_{n}}$}
\move(-4.0 -2.35) \htext{$1$}
\move(-3.3 -2.4) \htext{$x_{q_{n+1}}$}
}
\end{center}
\vspace{0.2cm} \caption{}
\end{figure}
Now suppose that $ i = q_{n+1} -1$. Again we have two cases. In the
first case, $d(z, \Bbb T) \ge \wp$. In the second case, $d(z, \Bbb
T) < \wp$. The first case can still be treated in the same way as
in the proof of the first case of Lemma~\ref{adm1}. So let us assume
that $d(z, \Bbb T) < \wp$. Note that there are two components of
$g_{\theta}^{-1}(C_{n}^{q_{n+1}-1})$ whose boundaries contain the
critical point $1$. It is clear that one of them is contained in
$B_{n}$. Let $\Omega$ denote the other one. Then $\Omega$ is a
domain which is attached to one side of the cone from the outside.
Let $\Omega' = \Omega \setminus (A_{n} \cup B_{n})$. Since $z \notin
A_{n} \cup B_{n}$, we have $z \in \Omega'$. Since
$|[x_{q_{n}+q_{n+1}-1}, v]| \asymp |[v,x_{q_{n+1}-1}]|$ by the third
assertion of Lemma~\ref{real bound} and $C_{n}^{q_{n+1} -1} \subset
H_{\alpha}([x_{q_{n+1} -1}, v])$ by Lemma~\ref{sch}, it follows that
$$
diam(\Omega') \preceq dist(\Omega', \Bbb T)\asymp |I_{n}|.
$$
On the other hand, by Lemma~\ref{dyn-cell}, Lemma~\ref{real bound},
and Lemma~\ref{good geometry}, it follows that there is a Euclidean
disk $V \subset X_{n} \setminus X_{n+2}$ which is contained in the
cone such that
$$
diam(V) \asymp dist(V, {\Bbb T}) \asymp |I_{n}|.
$$
It follows that one can construct an open topological disk $U$
containing $\Omega'$ and $V$ such that
$$
diam(U) \asymp dist(U, {\Bbb T}) \asymp |I_{n}|.
$$
See Figure 4 for an illustration of the sets $\Omega'$ and $V$. The
properties of Definition~\ref{admiss} are obviously satisfied. The
lemma follows.
\end{proof}
\begin{lem}\label{adm4}
There is a uniform $1 < K < \infty$ such that for any $0 \le i \le
q_{n+1} -1$ and $z \in X_{n+2}$ with $\omega = g_{\theta}(z) \in
D_{n}^{i}$, if $z \notin D_{n}^{i+1}$ for $0 \le i \le q_{n+1}-2$
and $z \notin A_{n} \cup B_{n}$ for $i = q_{n+1} -1$, then $z$ is
associated to some $K$-admissible pair $(U, V)$.
\end{lem}
\begin{figure}
\bigskip
\begin{center}
\centertexdraw { \drawdim cm \linewd 0.02 \move(-2 2)
\move(-6 -2) \lvec(-2 -2)
\move(-4.5 -0.97) \larc r: 1.2 sd:300 ed:240
\move(-3.5 -1.3) \larc r: 0.8 sd:300 ed:240
\move(-3.9 -2) \lvec(-5.9 1.375)
\move(-3.9 -2) \lvec(-1.9 1.375)
\move(-3.9 -2) \lvec(-5.63 -0.07)
\move(-3.9 -2) \lvec(-2.8 -1.3)
\move(-3.2 -0.85) \larc r: 0.6 sd:300 ed:60
\move(-4.6 -2) \lvec(-5.5 -1)
\move(-4.7 -0.4) \larc r: 1 sd:160 ed:218
\move(-3.25 -0.25) \fcir f:0.7 r:0.15
\move(-4.5 -0.25) \fcir f:0.7 r:0.22
\move(-4.2 -0.3) \htext{$V_{1}$}
\move(-3.1 -0.09) \htext{$V_{2}$}
\move(-3.2 -1.2) \htext{$\Omega_{2}$}
\move(-2.55 -0.8) \htext{$\Omega_{2}'$} \move(-5 -1.5)
\htext{$\Omega_{1}$}
\move(-6.05 -0.1) \htext{$\Omega_{1}'$}
\move(-5.3 -2.4)
\htext{$x_{q_{n}}$}
\move(-4.7 -2.4) \htext{$x_{q_{n} + q_{n+1}}$}
\move(-3.3 -2.4) \htext{$x_{q_{n+1}}$}
}
\end{center}
\vspace{0.2cm} \caption{}
\end{figure}
\begin{proof}
The case that $0 \le i \le
q_{n+1} - 2$ can be proved by the same argument as in the proof of
the same case of Lemma~\ref{adm3}. The reader can easily supply
the details.
Suppose that $i = q_{n+1} -1$. As before, we have two cases. In
the first case, $d(z, \Bbb T) \ge \wp$. In the second case, $d(z,
\Bbb T) < \wp$. Again, the first case can be proved by the same
argument as in the proof of the same case of Lemma~\ref{adm1}. So
let us assume that $d(z, \Bbb T) < \wp$. By Lemma~\ref{sch},
$D_{n}^{q_{n+1}-1} \subset H_{\beta}([x_{q_{n}+q_{n+1}-1}, v])$.
There are exactly two components of
$g_{\theta}^{-1}(D_{n}^{q_{n+1}-1})$ which are attached to $1$. Let
us use $\Omega_{1}$ to denote the one which is attached to
$[x_{q_{n}+q_{n+1}}, 1]$, and use $\Omega_{2}$ to denote the other
one which is attached to one side of the cone from the outside. Let
$\Omega_{i}' = \Omega_{i}\setminus (A_{n}\cup B_{n})$ for $i=1, 2$.
By Lemma~\ref{sch} and the third assertion of Lemma~\ref{real bound},
it follows
that for $i = 1,2$,
$$
diam(\Omega_{i}') \preceq dist(\Omega_{i}', {\Bbb T}) \asymp
|I_{n}|.
$$
Then by Lemma~\ref{dyn-cell}, Lemma~\ref{real bound}, and
Lemma~\ref{good geometry}, for $i = 1, 2$, one can take a Euclidean
disk $V_{i} \subset X_{n} \setminus X_{n+2}$ which is contained in
the cone such that
$$
diam(V_{i}) \asymp dist(V_{i}, {\Bbb T}) \asymp |I_{n}|,
$$
and a topological disk $U_{i}$ which contains $\Omega_{i}'$ and
$V_{i}$ such that $$diam(U_{i}) \asymp dist(U_{i}, {\Bbb T}) \asymp
|I_{n}|.$$ See Figure 5 for an illustration of the sets
$\Omega_{i}'$ and $V_{i}$, $i = 1, 2$. The properties of
Definition~\ref{admiss} are obviously satisfied. The lemma follows.
\end{proof}
\begin{lem}\label{adm5}
There is a uniform $1 < K < \infty$ such that for any $z \in
X_{n+2}$, if $z \in A_{n} \setminus( B_{n} \cup C_{n} \cup
D_{n})$, then $z$ is associated to some $K$-admissible pair $(U,
V)$.
\end{lem}
\begin{proof}
Let $W = A_{n} \setminus( B_{n} \cup C_{n} \cup D_{n})$. Note that
$|[x_{q_{n}}, x_{-q_{n+1}}]| \asymp |[x_{-q_{n+1}}, 1]|$ by the
first assertion of Lemma~\ref{real bound}. By the definition of
$A_{n}$, $B_{n}$, $C_{n}$, $D_{n}$ and the fact that $0< \beta <
\alpha$, it follows that
$$
diam (W) \preceq dist (W, \Bbb T)\asymp |I_{n}|.
$$
See Figure 1 for an illustration.
Now by Lemma~\ref{good geometry} we can construct a Euclidean disk
$V \subset X_{n} \setminus X_{n+2}$ in the cone such that
$$
diam(V) \asymp dist(V, {\Bbb T}) \asymp |I_{n}|.
$$
It follows that there is an open topological disk $U$ containing $W$
and $V$ such that
$$
diam(U) \preceq dist(U, {\Bbb T}) \asymp |I_{n}|.
$$
The properties of Definition~\ref{admiss} are obviously satisfied.
The lemma follows.
\end{proof}
\begin{lem}\label{Koe}
For every $1 < K < \infty$, there exists $1 < L < \infty$
depending only on $K$ such that if a point $z \in X_{n+2}$ is
associated to some $K$-admissible pair $(U, V)$, then any point
$\xi \in X_{n+2}$ in the inverse orbit of $z$ is associated
to some $L$-admissible pair $(U', V')$.
\end{lem}
\begin{proof}
Suppose $z \in X_{n+2}$ is associated to some $K$-admissible pair
$(U, V)$. Let $\xi \in X_{n+2}$ be a point in the inverse orbit of
$z$, that is, $g_{\theta}^{k}(\xi) = z$ for some integer $k \ge 1$.
Let $V' \subset U'$ be the pull backs of $V$ and $U$ by
$g_{\theta}^{k}$ such that $\xi \in U'$. The first two properties of
Definition~\ref{admiss} hold automatically. Since $diam_{\Omega}(U)<
K$, the branch of $g^{-k}_{\theta}$, which maps $z$ to $\xi$, can be
defined in a definitely larger domain containing $\overline{U}$. By
Koebe's distortion theorem, the last property of
Definition~\ref{admiss} holds for some constant depending only on
$K$. It remains to prove the third property.
Recall that $\Omega = {\Bbb C} \setminus \overline{\Delta}$. Let
$\Sigma = {\Bbb C} \setminus (\overline{\Delta \cup
g_{\theta}^{-1}(\Delta)})$. It follows that $\Sigma \subset
\Omega$. Note that $g_{\theta}(1) = (G_{\theta}(1))^{2}$ is the
only critical value of $g_{\theta}$ in $\Bbb C$. This implies that
$g_{\theta}: \Sigma \to \Omega$ is a holomorphic covering map and
that any inverse branch of $g_{\theta}$ contracts the hyperbolic
metric in $\Omega$. Thus we get $diam_{\Omega}(U') < K$. This proves
the third property of Definition~\ref{admiss} and the lemma
follows.
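Spelled out, let $h$ be any single-valued branch of $g_{\theta}^{-1}$ defined on a subdomain of $\Omega$, and let $\ell_{\Omega}$ and $\ell_{\Sigma}$ denote hyperbolic length in $\Omega$ and $\Sigma$ respectively (this notation is used only here). For any smooth curve $\gamma$ in the domain of $h$,
$$
\ell_{\Omega}(h(\gamma)) \le \ell_{\Sigma}(h(\gamma)) = \ell_{\Omega}(\gamma),
$$
where the inequality holds because $\Sigma \subset \Omega$ and the equality holds because the covering $g_{\theta}: \Sigma \to \Omega$ is a local hyperbolic isometry. Applying this $k$ times gives $diam_{\Omega}(U') \le diam_{\Omega}(U) < K$.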
\end{proof}
Let $Z_{n}$ be the set defined in (\ref{imp-set}). The importance of
the set $Z_{n}$ is reflected by the following lemma.
\begin{lem}\label{pre-lem}
There is a uniform $K > 1$ such that for any $z \in X_{n+2}$, either
$z \in Z_{n}$, or $z$ is associated to some $K$-admissible pair $(U,
V)$.
\end{lem}
\begin{proof}
Take $z \in X_{n+2}$. Suppose $z \notin Z_{n}$. Recall that $k_{z}
> 0$ is the least integer such that $g_{\theta}^{k_{z}}(z) \in \Delta$. Let $l = k_{z}$.
For $0 \le k \le l$, let $z_{k} = g_{\theta}^{l-k}(z)$. Then $z =
z_{l}$. We may assume that $z_{1} \in A_{n} \cup B_{n}$. Indeed,
otherwise $\omega = g_{\theta}(z_{1}) = g_{\theta}^{l}(z)
\in Y_{n+2}$ and $z_{1} \notin A_{n} \cup B_{n}$, so by
Lemma~\ref{adm1}, $z_{1}$ is associated to a $K$-admissible pair
$(U, V)$ for some uniform $1 < K < \infty$. Since $z = z_{l}$ lies
in the inverse orbit of $z_{1}$, the lemma then follows from
Lemma~\ref{Koe}.
We may further assume that $z_{1} \in Z_{n}$, because otherwise we
would have
$$
z_{1} \in (A_{n} \cup B_{n}) \setminus Z_{n} \subset A_{n}
\setminus (B_{n} \cup C_{n} \cup D_{n}).
$$
By Lemma~\ref{adm5}, it follows that $z_{1}$ is associated to a
$K$-admissible pair for some uniform $1 < K < \infty$. The lemma
then follows again by Lemma~\ref{Koe} since $z$ lies in the inverse
orbit of $z_{1}$.
Now suppose that $k\le l$ is the largest integer such that $z_{i}
\in Z_{n}$ for all $1 \le i \le k$ and $z_{k} \in A_{n}\cup B_{n}$.
By assumption that $z_{l} = z \notin Z_{n}$, it follows that $k <
l$. Now we may assume that one of the following three possibilities
occurs:
$z_{k} \in B_{n}, z_{k} \in C_{n}$, or $z_{k} \in D_{n}$. This is
because otherwise, $z_{k} \in A_{n} \setminus (B_{n} \cup C_{n}
\cup D_{n})$. So by Lemma~\ref{adm5} it follows
that $z_{k}$ is associated to a $K$-admissible pair for some
uniform $1 < K < \infty$. The lemma
then follows again by Lemma~\ref{Koe} since $z = z_{l}$ lies in the
inverse orbit of $z_{k}$.
Suppose that $z_{k} \in B_{n}$. By the assumption that $z \notin
Z_{n}$ and the choice of $k$, either there is a $0\le i \le
q_{n}-2$ such that $z_{k+i} \in B_{n}^{i}$ but $z_{k+i+1} \notin
B_{n}^{i+1}$, or
$z_{k+q_{n}-1} \in B_{n}^{q_{n}-1}$ but
$z_{k+q_{n}} \notin A_{n} \cup B_{n}$, and hence $z_{k+q_{n}} \notin
B_{n}^{q_{n}}$ (because $B_{n}^{q_{n}} \subset A_{n}$). Then the
lemma follows from Lemma~\ref{adm2} and Lemma~\ref{Koe}.
Suppose that $z_{k} \in C_{n}$. By the assumption that $z \notin
Z_{n}$ and the choice of $k$, either there is a $0\le i \le
q_{n+1}-2$ such that $z_{k+i} \in C_{n}^{i}$ but $z_{k+i+1} \notin
C_{n}^{i+1}$, or $z_{k+q_{n+1}-1} \in C_{n}^{q_{n+1}-1}$ but
$z_{k+q_{n+1}} \notin A_{n} \cup B_{n}$. Then the lemma follows from
Lemma~\ref{adm3} and Lemma~\ref{Koe}.
Suppose that $z_{k} \in D_{n}$. By the assumption that $z \notin
Z_{n}$ and the choice of $k$, either there is a $0\le i \le
q_{n+1}-2$ such that $z_{k+i} \in D_{n}^{i}$ but $z_{k+i+1} \notin
D_{n}^{i+1}$, or
$z_{k+q_{n+1}-1} \in D_{n}^{q_{n+1}-1}$ but
$z_{k+q_{n+1}} \notin A_{n} \cup B_{n}$. Then the lemma follows from
Lemma~\ref{adm4} and Lemma~\ref{Koe}.
The proof of the lemma is finished.
\end{proof}
\subsection{Proof of Lemma~\ref{main lemma}}
\begin{proof}
Let $N \ge 1$ and $R > 1$ be large and fixed. For $z \in
X_{n+2}$, recall that $k_{z} \ge 1$ is the least positive integer
such that $g_{\theta}^{k_{z}}(z) \in \Delta$. Define
$$
X_{n+2}^{N,R} = \{z \in X_{n+2}\: \big|\: |z| \le R \hbox{ and }
k_{z} \le N\}.
$$
Note that the inner boundary component of $Y_{n+2}$ is the union of
finitely many straight segments and the outer boundary component of
$Y_{n+2}$ is the unit circle. Let $\overline{X_{n+2}^{N,R}}$ denote
the closure of $X_{n+2}^{N,R}$. Let
$$
W_{n} = {\Bbb T}_{R} \cup (\overline{B_{R}(0)}\cap \bigcup_{0 \le k
\le N}g_{\theta}^{-k} (\partial Y_{n+2}))
$$
where ${\Bbb T}_{R} = \{z\: \big|\: |z| = R\}$. It is clear that
$W_{n}$ is the union of finitely many piecewise smooth curve
segments and moreover, we have
$$
\overline{X_{n+2}^{N,R}}\setminus X_{n+2}^{N,R} \subset W_{n}.
$$
Since $Z_{n}$ is open, it follows that $\overline{X_{n+2}^{N,
R}}\setminus Z_{n}$ is a compact set. Take an arbitrarily small
number $\eta > 0$. It is clear that there is a finite open
cover of $W_{n}$, say $Q_{i}, 1 \le i \le M$, such that
$$
\sum_{1 \le i \le M} area(Q_{i}) < \eta.
$$
By Lemma~\ref{pre-lem}, any point $x$ in the compact set
$\overline{X_{n+2}^{N, R}}\setminus Z_{n}$ either belongs to some
$Q_{i}$ or is associated to some $K$-admissible pair $(U, V)$ for
some uniform $1 < K < \infty$. We thus have finitely many pairs
$(U_{i}, V_{i})$, $i \in \Lambda$, such that
\begin{itemize}
\item[1.] $\overline{X_{n+2}^{N, R}}\setminus Z_{n} \subset
\bigcup_{1 \le i \le M} Q_{i} \cup \bigcup_{i \in \Lambda}
U_{i}$,
\item[2.]$V_{i} \subset X_{n} \setminus X_{n+2}$ for every $i \in
\Lambda$,
\item[3.] there is a uniform $K > 1$ such that for any $i \in
\Lambda$, there exist $x_{i} \in V_{i}$ and $r_{i} > 0$ such
that $B_{r_{i}}(x_{i}) \subset V_{i} \subset U_{i} \subset
B_{Kr_{i}}(x_{i})$.
\end{itemize}
On the other hand, by Theorem~\ref{SH}, it follows that there is a
$0 < \sigma < 1$ such that for any interval $I$ of the \emph{dynamical
partition} of level $n$, $|I| < \sigma^{n}$. This, together with
Lemma~\ref{sch}, implies that there are a uniform $C
> 1$ and a $0< \epsilon < 1$ such that
\begin{equation}\label{m-z}
area(Z_{n}) < C \epsilon^{n}.
\end{equation}
We now claim that there is a $0< \delta < 1$ such that
\begin{equation}\label{m-e-1}
area (X_{n+2}) \le C \epsilon^{n} + \delta \:area(X_{n}).
\end{equation}
In fact, by the first property above, we have
\begin{equation}\label{m-e-2}
area (\overline{X_{n+2}^{N, R}}) \le area(Z_{n}) + area (\bigcup_{1
\le i \le M} Q_{i} ) + area(\bigcup_{i \in \Lambda} U_{i}).
\end{equation}
By Corollary~\ref{spherical area}, we have
\begin{equation}\label{m-e-3}
area(\bigcup_{i \in \Lambda} U_{i}) \le area(\bigcup_{i \in \Lambda}
V_{i})/\lambda(K).
\end{equation}
From the second property above, we have
$$
area(\bigcup_{i \in \Lambda} V_{i}) \le area(X_{n}) - area(X_{n+2}).
$$
Note that
$$
area(X_{n+2}) \ge area(X_{n+2}^{N,R}) \ge area (\overline{X_{n+2}^{N,
R}}) - area(\bigcup_{1 \le i \le M} Q_{i} ).
$$
We thus have
\begin{equation}\label{a-ee}
area(\bigcup_{i \in \Lambda} U_{i}) \le (area(X_{n})- area
(\overline{X_{n+2}^{N, R}}) + area(\bigcup_{1 \le i \le M}
Q_{i}))/\lambda(K).
\end{equation}
Let $\delta = 1 /(1+ \lambda(K))$. From (\ref{m-e-2}) and
(\ref{a-ee}), we have
$$
area (\overline{X_{n+2}^{N, R}}) \le area(Z_{n}) + area (\bigcup_{1
\le i \le M} Q_{i} )+ \delta \: area(X_{n}).
$$
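For the reader's convenience, here is the algebra behind this step. Write $S = area(\overline{X_{n+2}^{N, R}})$, $Z = area(Z_{n})$, $Q = area(\bigcup_{1 \le i \le M} Q_{i})$, and $\lambda = \lambda(K)$. Substituting (\ref{a-ee}) into (\ref{m-e-2}) gives
$$
S \le Z + Q + \frac{area(X_{n}) - S + Q}{\lambda},
$$
and solving for $S$ yields
$$
S \le \frac{\lambda}{1 + \lambda}\, Z + Q + \frac{1}{1 + \lambda}\, area(X_{n}) \le Z + Q + \delta \: area(X_{n}).
$$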
By (\ref{m-z}), we have
$$
area (\overline{X_{n+2}^{N, R}}) \le C \epsilon^{n} + \delta \:
area(X_{n}) + \eta.
$$
Since $\eta > 0$ can be arbitrarily small, we thus have
$$
area (\overline{X_{n+2}^{N, R}}) \le C \epsilon^{n} + \delta \:
area(X_{n}).
$$
In particular, we get
$$
area (X_{n+2}^{N, R}) \le C \epsilon^{n} + \delta \: area(X_{n}).
$$
Since the constants $C, \epsilon$, and
$\delta$ do not depend on $N$ and $R$, (\ref{m-e-1}) now follows by
letting $N, R \to \infty$. This proves the claim and Lemma~\ref{main
lemma} follows.
\end{proof}
It follows that $\nu$, and thus $\mu$ by Lemma~\ref{equi}, satisfy
the condition (\ref{integrability}). We have proved the
integrability of $\mu$.
\section{Proof of the Main Theorem}
Let $\phi: \Bbb C \to \Bbb C$ be the David homeomorphism given by
$\mu$ which fixes $0$ and $\infty$, and maps $1$ to $\pi/2$.
\begin{lem}\label{odd homeo}
The map $\phi$ is odd.
\end{lem}
\begin{proof}
By Lemma~\ref{final}, $\mu(z) = \mu(-z)$. Consider the map
$\tilde{\phi}(z) = \phi(-z)$. It follows that $\phi$ and
$\tilde{\phi}$ have the same Beltrami differential. By
Theorem~\ref{David}, it follows that $\tilde{\phi} \circ \phi^{-1}$
is a conformal map in the plane. Since it fixes $0$ and $\infty$, it
follows that $(\tilde{\phi} \circ \phi^{-1})(z) = az$ for some $a
\ne 0$. That is, $\phi(-z) = a \phi(z)$. It follows that $\phi(-z)
= a \phi(-(-z)) = a^{2} \phi(-z)$ for all $z$. This implies that
$a^{2} = 1$. Clearly $a \ne 1$ since $\phi$ is a homeomorphism of
the plane. It follows that $a = -1$ and thus $\phi(-z) = -\phi(z)$.
This proves the lemma.
\end{proof}
\begin{lem}\label{PH}
$T_{\theta} = \phi \circ \widetilde{G}_{\theta} \circ \phi^{-1}$ is
an odd entire function.
\end{lem}
The proof uses exactly the same argument as the proof of Lemma
5.5 of \cite{PZ}.
\begin{proof}
Let $X$ denote the set of the critical points of
$\widetilde{G}_{\theta}$. It is sufficient to show that the map
$\phi \circ \widetilde{G}_{\theta}$ belongs to $W_{loc}^{1, 1}({\Bbb
C} \setminus X)$. In fact, if $\phi \circ \widetilde{G}_{\theta}$
belongs to $W_{loc}^{1, 1}({\Bbb C} \setminus X)$, then in any small
open neighborhood $U$ of a regular point of
$\widetilde{G}_{\theta}$, since by Lemma~\ref{final} the Beltrami
differentials of $\phi \circ \widetilde{G}_{\theta}$ and $\phi$ are
both equal to $\mu$, it follows from Theorem~\ref{David} that $\phi
\circ \widetilde{G}_{\theta} = \sigma \circ \phi$ where $\sigma$ is
a conformal map defined on $\phi(U)$. This implies that $T_{\theta}$
is holomorphic in the complex plane except at the points of $\phi(X)$.
But it is clear that for any point $z \in \phi(X)$, there is a
neighborhood $W$ of $z$ such that $T_{\theta}$ is bounded in $W$. It
follows that all the points in $\phi(X)$ are removable singularities. So
$T_{\theta}$ is an entire function.
Now let us show that the map $\phi \circ \widetilde{G}_{\theta}$
belongs to $W_{loc}^{1, 1}({\Bbb C} \setminus X)$. Firstly, $\phi
\circ \widetilde{G}_{\theta} \in W_{loc}^{1, 1}({\Bbb C} \setminus
(X \cup \overline{\Delta}))$. This is because
$\widetilde{G}_{\theta}$ is holomorphic in ${\Bbb C} \setminus (X
\cup \overline{\Delta})$ and $\phi \in W_{loc}^{1, 1}(\Bbb C)$.
Secondly, we have $\phi \circ \widetilde{G}_{\theta} \in
W_{loc}^{1, 1}({\Delta})$. To see this, write $\phi \circ
\widetilde{G}_{\theta} = \phi\circ \Phi^{-1} \circ H^{-1} \circ
R_{\alpha} \circ H \circ \Phi$ in $\Delta$. Since $\phi \circ
\Phi^{-1}$ and $H$ have the same Beltrami differential in $\Delta$, it
follows from Theorem~\ref{David} again that $\phi \circ \Phi^{-1}
\circ H^{-1}$, and therefore $\phi \circ \Phi^{-1} \circ H^{-1}\circ
R_{\alpha}$, is conformal. Since $\Phi$ is conformal, $H \circ \Phi$
belongs to $W_{loc}^{1, 1}(\Delta)$. It follows that $$\phi \circ
\widetilde{G}_{\theta} = (\phi \circ \Phi^{-1} \circ H^{-1}\circ
R_{\alpha}) \circ (H \circ \Phi) \in W_{loc}^{1, 1}(\Delta).$$
It remains to prove that for
every small open disk $U$ centered at a point of ${\Bbb T}
\setminus \{1, -1\}$, $\phi \circ \widetilde{G}_{\theta} \in
W_{loc}^{1, 1}({U})$. Note that $\phi \circ \widetilde{G}_{\theta}$
is differentiable almost everywhere in $U$. Therefore
\begin{equation}\label{Jac}
\int_{U} Jac(\phi \circ \widetilde{G}_{\theta}) \le area\: ((\phi
\circ \widetilde{G}_{\theta}) (U)) < \infty.
\end{equation}
This implies that $Jac(\phi \circ \widetilde{G}_{\theta}) \in
L^{1}(U)$. It follows that the ordinary partial derivatives of
$\phi \circ \widetilde{G}_{\theta}$ are equal to the distributional
ones in any compact set in $U \setminus \Bbb T$. It is sufficient to
prove that $\partial (\phi \circ \widetilde{G}_{\theta}) \in
L^{1}(U)$ and thus $\overline{\partial}(\phi \circ
\widetilde{G}_{\theta}) \in L^{1}(U)$ (then the distributional partial
derivatives coincide with the ordinary partial derivatives in $U$
and are thus integrable in $U$). But this follows from the following
argument. Since $\mu_{\phi \circ \widetilde{G}_{\theta}} = \mu$
almost everywhere in $U$, we have
$$
|\partial (\phi \circ \widetilde{G}_{\theta})|^{2} =
\frac{Jac(\phi \circ \widetilde{G}_{\theta})}{1-|\mu_{\phi \circ
\widetilde{G}_{\theta}}|^{2}} \le \frac{Jac(\phi \circ
\widetilde{G}_{\theta})}{1-|\mu_{\phi \circ
\widetilde{G}_{\theta}}|} = \frac{Jac(\phi \circ
\widetilde{G}_{\theta})}{1-|\mu|}
$$ and therefore,
$$
|\partial (\phi \circ \widetilde{G}_{\theta})| \le \frac{Jac(\phi
\circ \widetilde{G}_{\theta})^{1/2}}{(1-|\mu|)^{1/2}}.
$$
Since $\mu$ satisfies the exponential growth condition
(\ref{integrability}),
the measurable function $1/(1- |\mu|)$ is
integrable in $U$. This, together with (\ref{Jac}) and the
Cauchy--Schwarz inequality, implies the integrability of $\partial (\phi \circ
\widetilde{G}_{\theta})$ in $U$.
The odd property of $T_{\theta}$ follows from the odd property of
$\widetilde{G}_{\theta}$ (see Lemma~\ref{final}) and Lemma~\ref{odd
homeo}.
\end{proof}
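The pointwise identity behind the estimate above, $|\partial f|^{2}(1-|\mu_{f}|^{2}) = \operatorname{Jac} f$ for an orientation-preserving map, can be spot-checked numerically; the affine map $f(z) = z + k\bar z$ below is our own toy example, not part of the construction.

```python
# Sanity check of Jac(f) = |df|^2 * (1 - |mu_f|^2), which the proof above
# rearranges as |df|^2 = Jac(f) / (1 - |mu_f|^2).  Toy example (assumption,
# not from the paper): the affine map f(z) = z + k * conj(z), whose
# Wirtinger derivatives are df = 1 and dbar_f = k.
k = 0.3                              # Beltrami coefficient of the toy map
df, dbar = 1.0, k                    # Wirtinger derivatives of f
mu = dbar / df                       # mu_f = dbar_f / df
jac = abs(df)**2 - abs(dbar)**2      # Jacobian of an orientation-preserving map

assert abs(jac - abs(df)**2 * (1 - abs(mu)**2)) < 1e-12
assert abs(abs(df)**2 - jac / (1 - abs(mu)**2)) < 1e-12
```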
\begin{defi}{\rm
Two maps $f: {\Bbb C} \to {\Bbb C}$ and $g: {\Bbb C} \to {\Bbb C}$
are called topologically equivalent if there exist two
homeomorphisms $\theta_{1}$ and $\theta_{2}$ of the complex plane
such that $f = \theta_{2}^{-1} \circ g \circ \theta_{1}$}.
\end{defi}
\begin{lem}[Lemma 1, \cite{DS}]\label{DS}
Let $f$ be an entire function. If $f(z)$ is topologically equivalent
to $\sin(z)$, then $f(z) = a + b\sin(cz + d)$ where $a, b, c, d
\in\Bbb C$, and $b, c \ne 0$.
\end{lem}
For a proof of Lemma~\ref{DS}, see \cite{DS}.
\begin{lem}\label{top-equ}
Let $f: {\Bbb C} \to {\Bbb C}$ and $g: {\Bbb C} \to {\Bbb C}$ be two
continuous maps such that $f = g$ on the outside of the unit disk.
If in addition, $f: \overline{\Delta} \to \overline{\Delta}$ and $g:
\overline{\Delta} \to \overline{\Delta}$ are both homeomorphisms,
then $f$ and $g$ are topologically equivalent to each other.
\end{lem}
\begin{proof}
Define $\theta_{2}(z) = z$ for $z \notin \Delta$ and
$\theta_{2}(z) = g^{-1} \circ f(z)$
for $z \in \Delta$. It follows that $\theta_{2}: {\Bbb C} \to \Bbb C$ is a
homeomorphism.
Let $\theta_{1} = id$. Then $f = \theta_{1}^{-1} \circ g \circ \theta_{2}$.
The Lemma follows.
\end{proof}
Let $\psi: \widehat{\Bbb C} - \overline{\Delta} \to \widehat{\Bbb C}
- \overline{D}$ be the map in the definition of $G(z)$ (see $\S3$). Let
$\eta : \widehat{\Bbb C} \to \widehat{\Bbb C}$ be a homeomorphic
extension of $\psi$. As before let $T(z) = \sin (z)$. It follows
that $T(z)$ is topologically equivalent to $T \circ \eta$. Let $t
\in [0, 1)$ be the number in Lemma~\ref{exist}. Let
$$
S(z) = e^{2\pi it}(T\circ \eta)(z).
$$
\begin{lem}\label{tran}
$S(z)$ is topologically equivalent to $T(z)$ and $\widetilde{G}_{\theta}(z)$.
\end{lem}
\begin{proof}
The first topological equivalence follows from the definition of
$S(z)$. The second one follows from the definition of
$\widetilde{G}_{\theta}$ and Lemma~\ref{top-equ}.
\end{proof}
\begin{lem}\label{sin}
$T_{\theta}(z)$ is topologically equivalent to $T(z)$.
\end{lem}
\begin{proof}
By the construction of $T_{\theta}$, it follows that $T_{\theta}$ is
topologically equivalent to $\widetilde{G}_{\theta}$. The Lemma then
follows from Lemma~\ref{tran}.
\end{proof}
We are now ready to prove the Main Theorem.
\begin{proof}
By Lemma~\ref{DS} and Lemma~\ref{sin}, it follows that $T_{\theta}(z) = a + b\sin(cz +
d)$ where $a, b, c, d \in \Bbb C$ and $b, c \ne 0$. Since
$T_{\theta}$ is odd by Lemma~\ref{PH}, we get
\begin{equation}\label{odd-sin}
a + b\sin(c z + d) \equiv -a + b\sin(c z - d).
\end{equation}
Now, differentiating both sides of (\ref{odd-sin}) and dividing by
$bc \ne 0$, we get
$$
\cos(cz + d) \equiv \cos(cz - d).
$$
It follows that
$$
\sin (d)\sin(cz) \equiv 0.
$$
Since $c \ne 0$, it follows that $d = k\pi$ for some integer $k$.
Therefore, we may assume that $T_{\theta}(z) = a + b \sin(cz)$ for
some $b, c \ne 0$. Since $T_{\theta}(0) = 0$, it follows that $a =
0$. This implies that $T_{\theta}(z) = b \sin (cz)$.
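The trigonometric step above can be spot-checked numerically; the following snippet (our own illustration, with arbitrary sampling ranges) verifies that $\cos(cz+d)-\cos(cz-d) \equiv -2\sin(d)\sin(cz)$, which is the product formula used to conclude $d = k\pi$.

```python
# Numeric spot-check of the trigonometric step above:
# cos(cz + d) - cos(cz - d) = -2 sin(d) sin(cz) for complex c, d, z,
# so the differentiated oddness identity forces sin(d) sin(cz) = 0,
# hence d = k*pi.  The random sampling ranges are our own choice.
import cmath
import random

random.seed(0)
for _ in range(200):
    c = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    d = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    gap = cmath.cos(c*z + d) - cmath.cos(c*z - d)
    assert abs(gap + 2*cmath.sin(d)*cmath.sin(c*z)) < 1e-9
```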
Since $T_{\theta}'(\pi/2) = 0$, it follows that $c$ is some odd
integer. By changing the sign of $b$, we may assume that $c$ is
positive. Suppose $c = 2 l +1$ for some integer $l \ge 0$. Let
$\Omega_{0}$ be the Siegel disk of $T_{\theta}$ centered at the
origin. For $k \in \Bbb Z$, let
$$
\Omega_{k} = \{z+ k\pi\big{|}\: z \in \Omega_{0}\}.
$$ Since
$T_{\theta}$ is odd by Lemma~\ref{PH}, $\Omega_{0}$ is symmetric
about the origin. Since $c$ is odd, $T_{\theta}(z + k\pi) =
(-1)^{k}T_{\theta}(z)$, so $T_{\theta}(\Omega_{k}) =
(-1)^{k}\Omega_{0} = \Omega_{0}$. Therefore each $\Omega_{k}$ is a
component of $T_{\theta}^{-1}(\Omega_{0})$.
Let $D_{k}, k \in {\Bbb Z}$, be the domains in Lemma~\ref{dis-con}.
Recall that $D = D_{0}$. Let $\psi: \widehat{\Bbb C} \setminus
\overline{\Delta} \to \widehat{\Bbb C}\setminus \overline{D}$ be the
map defined immediately after Lemma~\ref{dis-con}. Let
$$
\widetilde{\Omega}_{0} = \Omega_{0} \hbox{ and
}\widetilde{\Omega}_{k} = \phi \circ \psi^{-1} (D_{k}).
$$
By Lemma~\ref{dis-con}, we have
\begin{itemize}
\item[1.] $\widetilde{\Omega}_{k}, k \in {\Bbb Z}$, are all the components
of $T_{\theta}^{-1}(\Omega_{0})$,
\item[2.] every $\partial \widetilde{\Omega}_{k}$ contains exactly two critical
points of $T_{\theta}$,
\item[3.] $\partial \widetilde{\Omega}_{k} \cap
\partial \widetilde{\Omega}_{j} = \emptyset$ if $|k-j| > 1$,
\item[4.] any critical point of $T_{\theta}$ is the intersection
point of $\partial \widetilde{\Omega}_{k}$ and $\partial
\widetilde{\Omega}_{k+1}$ for some $k \in {\Bbb Z}$, and for every
$k \in {\Bbb Z}$, $\partial \widetilde{\Omega}_{k} \cap
\partial \widetilde{\Omega}_{k+1}$ contains exactly one critical
point of $T_{\theta}$.
\end{itemize}
It is clear that every
$\Omega_{k}$ is equal to some $\widetilde{\Omega}_{j}$. We claim
that $\Omega_{k} = \widetilde{\Omega}_{k}$ for all $k \in {\Bbb Z}$.
By definition, $\Omega_{0} = \widetilde{\Omega}_{0}$. Since
$\partial \Omega_{1}$ contains the critical point $\pi/2$, and since
only $\partial \widetilde{\Omega}_{0}$ and $\partial
\widetilde{\Omega}_{1}$ contain $\pi/2$ (this is because $\pi/2 \in
\partial D_{1}$ and $\phi \circ \psi^{-1}(\pi/2) = \pi/2$), we get
${\Omega}_{1} = \widetilde{\Omega}_{1}$ (since $\Omega_{1} \ne
\widetilde{\Omega}_{0} = \Omega_{0}$). Since only $\partial
\widetilde{\Omega}_{0}$ and $\partial \widetilde{\Omega}_{2}$
intersect $\partial \widetilde{\Omega}_{1}$ and since $\partial
\Omega_{2}$ intersects $\partial \Omega_{1}$, it follows that
$\Omega_{2} = \widetilde{\Omega}_{2}$ (since $\Omega_{2} \ne
\Omega_{0}$). Since only $\partial \widetilde{\Omega}_{1}$ and
$\partial \widetilde{\Omega}_{3}$ intersect $\partial
\widetilde{\Omega}_{2}$ and since $\partial \Omega_{3}$ intersects
$\partial \Omega_{2} = \partial \widetilde{\Omega}_{2}$, it follows that
$\Omega_{3} = \widetilde{\Omega}_{3}$ (since $\Omega_{3} \ne
\widetilde{\Omega}_{1} = \Omega_{1}$). Repeating this argument, we
get $\Omega_{k} = \widetilde{\Omega}_{k}$ for all $k \ge 0$. The
same argument implies $\Omega_{k} = \widetilde{\Omega}_{k}$ for all
$k < 0$. The claim has been proved.
Now it follows that the set of the critical points of $T_{\theta}$
is equal to
$$\{\pi/2 + k \pi\:\big{|}\: k \in {\Bbb Z}\}.$$ This implies that
$c = 1$. It follows that $b = e^{2 \pi i \theta}$ and therefore
$T_{\theta}(z) = f_{\theta}(z)$. This completes the proof of the
Main Theorem.
\end{proof}
% https://arxiv.org/abs/0812.0431 -- On David type Siegel Disks of the Sine family
% https://arxiv.org/abs/1101.1649 -- An overdetermined problem in Riesz-potential and fractional Laplacian
\section{Introduction}
It is well-known that the gravitational potential of a ball of
constant mass density is constant on the surface of the ball.
It is shown by Fraenkel \cite{Fr} that this property indeed provides a characterization of balls.
In fact, Fraenkel proves the following
{\bf Theorem A \cite{Fr}}: Let $\Omega\subset \mathbb R^N$ be a
bounded domain and $\omega_N$ be the surface measure of the unit
sphere in $\mathbb R^N$. Consider
\begin{equation}
u(x)=\left \{ \begin{array}{lll} \frac{1}{2 \pi} \int_\Omega \log{\frac{1}{|x-y|}} \,dy, \qquad \qquad & N=2,\\
\\
\frac{1}{(N-2)\omega_N} \int_\Omega \frac{1}{|x-y|^{N-2}} \,dy,
\qquad \qquad & N\geq 3.
\end{array} \right.
\label{Fr}
\end{equation}
If $u(x)$ is constant on $\partial \Omega$, then $\Omega$ is a ball.
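The classical fact motivating Theorem A can be illustrated numerically (our own sketch, with our own sample sizes): by Newton's theorem, for the unit ball $B\subset\mathbb R^3$ the potential $\int_B|x-y|^{-1}\,dy$ equals $\frac{4\pi}{3|x|}$ for $|x|\ge 1$, hence is constant on the unit sphere.

```python
# Monte Carlo check that the Newtonian potential of the unit ball in R^3
# is constant (= 4*pi/3) on the boundary sphere, as asserted for balls
# before Theorem A.  Sample size and tolerance are our own choices.
import math
import random

random.seed(1)

def potential(x, n=100_000):
    """Monte Carlo estimate of int_{|y|<1} dy / |x - y| in R^3."""
    total, count = 0.0, 0
    while count < n:
        y = [random.uniform(-1, 1) for _ in range(3)]
        if sum(t*t for t in y) < 1.0:        # rejection-sample the unit ball
            total += 1.0 / math.dist(x, y)
            count += 1
    vol = 4.0 * math.pi / 3.0                # volume of the unit ball
    return vol * total / n

exact = 4.0 * math.pi / 3.0                  # Newton's theorem on |x| = 1
for x in ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [1/math.sqrt(3)]*3):
    assert abs(potential(x) - exact) < 0.05
```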
This result has been extended by Reichel \cite{R2} to more general
Riesz potential, but under a more restrictive assumption on the
domain $\Omega$, i.e., $\Omega$ is assumed to be convex. In
\cite{R2}, Reichel considers the integral equation
\begin{equation}
u(x)=\left \{ \begin{array}{lll} \int_\Omega \log{\frac{1}{|x-y|}} \,dy, \qquad \qquad & N=\alpha,\\
\\
\int_\Omega \frac{1}{|x-y|^{N-\alpha}} \,dy, \qquad \qquad & N\neq
\alpha,
\end{array} \right.
\label{int}
\end{equation}
and proves the following theorem.
{\bf Theorem B \cite{R2}}: Let $\Omega\subset \mathbb R^N$ be a
bounded convex domain and let $\alpha >2$. If $u(x)$ is constant on
$\partial \Omega$, then $\Omega$ is a ball.
This more general Riesz potential is actually closely related to the
fractional Laplacian $(-\lap)^{\frac{\alpha}{2}}$ in $\mathbb R^N$.
Let $\mathbb{N}_0$ be the collection of nonnegative integers. It is
known that the fundamental solution $G(x,y)$ for pseudo-differential
operator $(-\lap)^{\frac{\alpha}{2}}$ in $\mathbb R^N$ has the
following representation
\begin{equation}
G(x,y)=\left\{
\begin{array}{ll}
\frac{\Gamma(\frac{N-\alpha}{2})}{2^\alpha \pi^{\frac{N}{2}}
\Gamma(\frac{\alpha}{2})}|x-y|^{\alpha-N}, \qquad \qquad
&\mbox{if}\ \ \frac{\alpha-N}{2}\not\in \mathbb{N}_0,\\
\\
\frac{(-1)^k}{2^{\alpha-1}\pi^{\frac{N}{2}}\Gamma({\frac{\alpha}{2}})}|x-y|^{\alpha-N}\log\frac{1}{|x-y|},
\qquad \qquad &\mbox{if}\ \ k:=\frac{\alpha-N}{2}\in \mathbb{N}_0.
\end{array} \right.
\label{green}
\end{equation}
We note that in the case $\alpha=2$, Fraenkel's result holds under a weaker assumption on the domain $\Omega$; namely, $\Omega$ only needs to be bounded and open in $\mathbb R^N$.
The surprising part for $\alpha=2$ is that there is neither a
regularity nor a convexity requirement on $\Omega$. Thus, two open
problems were raised by Reichel in \cite{R2}:
{\bf Question 1.} Is Theorem B true if we remove the convexity
assumption of $\Omega$?
\medskip
{\bf Question 2.} Is there an analogous result as Theorem B for
Riesz-Potential of the form
\begin{equation}
u(x)=\int_\Omega |x-y|^{\alpha-N}\log\frac{1}{|x-y|} \,dy?
\label{log}
\end{equation}\\
It is meaningful to study (\ref{log}) because in the case of
$\frac{\alpha-N}{2}\in \mathbb N_0$, up to some rescaling, the
kernel function in the above integral is the fundamental solution of the
fractional Laplacian $(-\lap)^{\frac{\alpha}{2}}.$
Our goal is to address the above two open questions.
The first result we establish does remove the convexity assumption
in Theorem B.
\begin{mthm}
Let $\Omega$ be a $C^1$ bounded domain. If $u$ in (\ref{int}) is
constant on $\partial\Omega$, then $\Omega$ is a ball. \label{1t}
\end{mthm}
As far as Question 2 is concerned, we partially solve it under some additional assumption on the diameter of the domain $\Omega$. Since we are only interested in the case when $\alpha>N$, we will assume this when we address Question 2.
\begin{mthm}
Assume $\alpha>N$. Let $\Omega$ be a $C^1$ bounded domain with
$diam \,\Omega< e^{\frac{1}{N-\alpha}}$. Then $\Omega$ is a ball if
$u(x)$ in (\ref{log}) is constant on $\partial\Omega$. \label{2t}
\end{mthm}
\begin{rem}
In the above two theorems, if the conclusion that $\Omega$ is a ball
is verified, then we can easily deduce that $u(x)$ is radially
symmetric with respect to the center of the ball.
\end{rem}
There has been extensive study in the literature about
overdetermined problems in elliptic differential equations and
integral equations. In his seminal paper \cite{Se}, Serrin showed
that the overdetermined boundary value determines the geometry of
the underlying set. That is, if $\Omega$ is a bounded $C^2$ domain
and $u\in C^2(\bar\Omega)$ satisfies the following
\begin{equation}
\left \{ \begin{array}{ll} \lap u=-1 & \mbox{in} \ \ \Omega, \\
\\
u=0, \qquad \ \frac{\partial u}{\partial n}=constant &\mbox{on}\ \
\partial\Omega,
\end{array}
\right.
\end{equation}
then $\Omega$ is a ball and $u$ is radially symmetric with respect
to the center of the ball. Serrin's proof is based on what is
nowadays called the moving planes method relying on the maximum
principle of solutions to the differential equations, which is
originally due to Alexandrov, and has been later used to derive
further symmetry results for more general elliptic equations.
Important progress on the moving planes method since then includes the works of Gidas-Ni-Nirenberg \cite{GNN}
and Caffarelli-Gidas-Spruck \cite{CGS}, to name just some of the early works in this direction.
Immediately after Serrin's paper, Weinberger \cite{W} obtained a
very short proof of the same result, using the maximum principle
applied to an auxiliary function. However, compared to Serrin's
approach, Weinberger's proof relies crucially on the linearity of
the Laplace operator.
Since the work of \cite{Se}, many results have been obtained on
overdetermined problems. The interested reader may refer to
\cite{AB}, \cite{B}, \cite{BK}, \cite{BNST}, \cite{BNST1}, \cite{CS},
\cite{EP}, \cite{FG}, \cite{FGK}, \cite{FK}, \cite{FV}, \cite{G},
\cite{GL}, \cite{HPP}, \cite{Lim}, \cite{Liu}, \cite{M}, \cite{MR},
\cite{PP}, \cite{PS}, \cite{P}, \cite{Sh}, \cite{Si}, \cite{WX}
and references therein, for more general elliptic equations. See
also \cite{R1} and the references therein for overdetermined problems in
an exterior domain or general domain. In \cite{BNST}, an alternative
shorter proof of Serrin's result, not relying explicitly on the
maximum principle has been given, where they deduce some global
information concerning the geometry of the solution.
Overdetermined problems are important from the point of view of
mathematical physics. Many models in fluid mechanics, solid
mechanics, thermodynamics, and electrostatics are relevant to the
overdetermined Dirichlet or Neumann boundary problems of elliptic
partial differential equations.
We refer the reader to the article \cite{FG} for a nice introduction in that aspect.
Instead of a volume potential, a single layer potential has also been
considered in overdetermined problems. A single layer potential is
given by
\begin{equation}
u(x)=\left \{ \begin{array}{lll}
A \int_{\partial\Omega}\frac{-1}{2 \pi} \log{\frac{1}{|x-y|}} \,d\sigma_y, \qquad \qquad & N=2,\\
\\
A \int_{\partial\Omega}
\frac{1}{(N-2)\omega_N}\frac{1}{|x-y|^{N-2}}\,d\sigma_y, \qquad
\qquad & N\geq 3,
\end{array} \right.
\label{sig}
\end{equation}
where $A>0$ is the constant source density on the boundary of the
domain $\Omega$. If $u$ is constant in $\bar\Omega$, then $\Omega$
can be proved to be a ball under various smoothness assumptions on
the domain $\Omega$.
See \cite{M} for the case $N=2$ and
\cite{R1} for the case $N\geq 3$, and also some related works in
\cite{Lim} and \cite{Sh}. We also refer the reader to the book of C.
Kenig \cite{K} on this subject of layer potential.
Generally speaking, two approaches are widely applied in dealing
with overdetermined problems. One is the classical moving plane
method. In \cite{Se}, the moving plane method with a sophisticated
version of Hopf boundary maximum principle plays a very important
role in the proof. The other way is based on an equality of Rellich
type, as well as an interior maximum principle, see \cite{W}. Our
approach is a variant of the moving planes method---the method of moving
planes in integral forms. It is quite different from the traditional method of
moving planes used for partial differential equations. Instead of
relying on differentiability and maximum principles, one
estimates a global integral norm. The method of moving
planes in integral forms can be adapted to obtain symmetry and
monotonicity for solutions. The method of moving planes on integral
equations was developed in the work of W. Chen, C. Li and B. Ou
\cite{CLO}, see also Y.Y. Li \cite{Li}, the book by W. Chen and C.
Li \cite{CL1} and an exhaustive list of references therein, where
the symmetry of solutions in the entire space was proved. Moving
plane method in integral form over bounded domains requires some additional
efforts and has been carried out recently in symmetry
problems arising from the integral equations over bounded domains,
see the work of D. Li, G. Strohmer and L. Wang \cite{LSW}.
We end this introduction with the following remark concerning the
characterization of balls by using the Bessel potential. The Bessel
kernel $g_\alpha$ in $\mathbb R^N$ with $\alpha\geq 0$ is defined by
\begin{equation}
g_\alpha(x)=\frac{1}{r(\alpha)}\int_0^{\infty}
\exp(-\frac{\pi}{\delta}|x|^2)\exp(-\frac{\delta}{4\pi})\delta^{\frac{\alpha-N-2}{2}}
\,d\delta, \label{ex1}
\end{equation}
where $r(\alpha)=(4\pi)^{\frac{\alpha}{2}}\Gamma(\frac{\alpha}{2}).$\\
In the paper \cite{HLZ}, we consider the Bessel potential type
equation:
\begin{equation}
u(x)=\int_\Omega g_\alpha(x-y) \,dy. \label{b1}
\end{equation}
Overdetermined problems for Bessel potential over a bounded domain in $\mathbb R^N$ have
been recently studied in \cite{HLZ}. For instance,
the following theorem is proved in \cite{HLZ}, among some other results:
\begin{mthm}
Let $\Omega$ be a $C^1$ bounded domain in $\mathbb R^N$. If $u$ in
(\ref{b1}) is constant on $\partial\Omega$, then $\Omega$ is a ball.
\label{4t}
\end{mthm}
It is well-known that (\ref{b1}) is closely related to the following
fractional equation
$$(I-\lap)^{\frac{\alpha}{2}}u=\chi_\Omega.$$
In the case of $\alpha=2$, it turns out to be the ground state of
the Schr\"{o}dinger equation.
The paper is organized as follows. In Section 2, we prove Theorem 1.
In Section 3, we carry out the proof of Theorem 2. Throughout this
paper, $C$ denotes a positive constant that
may differ from line to line, even within the same line, and
may depend on $u$ in some cases.
Finally, we thank Dr. Xiaotao Huang for his comments on our earlier draft of this paper.
\section{Proof of Theorem \ref{1t}}
In this section, we will prove Theorem \ref{1t} by adapting the
moving plane method in integral forms, see \cite{CLO}. Since we are
dealing with the case of bounded domains, we modify the method
accordingly (see also \cite{LSW}, \cite{CZ}).
We first introduce some notation. Choose any direction and rotate
the coordinate system if necessary so that the $x_1$-axis is
parallel to it. For any $\lambda\in \mathbb R$, define
$$T_\lambda=\{(x_1,...,x_N)\in \mathbb R^N \,|\, x_1=\lambda \}.$$
Since $\Omega$ is bounded, if $\lambda$ is sufficiently negative,
the intersection of $T_\lambda$ and $\Omega$ is empty. Then, we move
the plane $T_\lambda$ all the way to the right until it intersects
$\Omega$. Let
$$\lambda_0=\min\{\lambda :T_\lambda\cap\bar\Omega\not= \emptyset\}.$$
For $\lambda>\lambda_0$, $T_\lambda$ cuts off $\Omega$. We define
$$ \Sigma_\lambda=\{x\in \Omega |x_1<\lambda\}.$$
Set
$$x_\lambda=(2\lambda-x_1,x_2,...,x_N) $$
and
$$
\Sigma^\prime_\lambda=\{x_\lambda \,|\, x\in\Sigma_\lambda\}.$$
For $\lambda$ slightly larger than $\lambda_0$, $ \Sigma^\prime_\lambda$
remains within $\Omega$. As the plane keeps moving to the right, $
\Sigma^\prime_\lambda$ will stay in $\Omega$ until at least
one of the following events occurs:
(i) $\Sigma^\prime_\lambda$ is internally tangent to the boundary of
$\Omega$ at some point $P_\lambda$ not on $T_\lambda$.
\medskip
(ii) $T_\lambda$ reaches a position where it is orthogonal to the
boundary of $\Omega$ at some point $Q$.\\ Let $\bar\lambda$ be the
first value such that at least one of the above positions is
reached.
We assert that $\Omega$ must be symmetric about $T_{\bar\lambda}$;
i.e.,
\begin{equation}
\Sigma_{\bar\lambda}\cup T_{\bar\lambda}\cup
\Sigma^\prime_{\bar\lambda}=\Omega. \label{key}
\end{equation}
If this assertion is verified, for any given direction in $\mathbb
R^N$, there also exists a plane $T_{\bar\lambda}$ such that $\Omega$
is symmetric about $T_{\bar\lambda}$. Moreover, $\Omega$ is
connected. Then the only domain with those properties is a ball, see
\cite{Al}.
In order to assert (\ref{key}), we introduce
$$u_\lambda(x)=u(x_\lambda),$$
$$
\Omega_\lambda=\Omega\backslash(\overline{\Sigma_\lambda\cup\Sigma^\prime_\lambda}).$$
We first establish some lemmas. Throughout the paper we assume
$\alpha\geq 2$.
\begin{lem}
Let $l\in\mathbb N$ with $1\leq l<\alpha$. Then the function $u$ in
(\ref{int}) satisfies $u\in C^l(\mathbb R^N)$, and differentiation of order $l$
can be taken under the integral sign.
\label{mono}
\end{lem}
\begin{proof}
The proof is standard. We refer the reader to \cite{R2}.
\end{proof}
\begin{lem}
For
$\lambda_0<\lambda<\bar\lambda$ and $u(x)$ satisfying (\ref{int}), we have
(i) If $N\geq \alpha$, $u_\lambda(x)>u(x)$ for any $x\in
\Sigma_\lambda.$
\medskip
(ii) If $N< \alpha$, $u_\lambda(x)<u(x)$ for any $x\in
\Sigma_\lambda.$
\end{lem}
\begin{proof}
For $x\in\Sigma_\lambda$, in the case of $N=\alpha$, we rewrite
$u(x)$ and $u_\lambda(x)$ as
$$u(x)=\int_{\Sigma_\lambda} \log\frac{1}{|x-y|} \,dy + \int_{\Sigma_\lambda} \log\frac{1}{|x_\lambda-y|} \,dy
+\int_{\Omega_\lambda} \log\frac{1}{|x-y|} \,dy,$$ and
$$u_\lambda(x)=\int_{\Sigma_\lambda} \log\frac{1}{|x_\lambda-y|} \,dy + \int_{\Sigma_\lambda} \log\frac{1}{|x-y|} \,dy
+\int_{\Omega_\lambda} \log\frac{1}{|x_\lambda-y|} \,dy.$$ Then
\begin{equation}
u_\lambda(x)-u(x)=\int_{\Omega_\lambda}\log\frac{|x-y|}{|x_\lambda-y|}
\,dy. \label{Com}
\end{equation}
Since $|x-y|>|x_\lambda-y|$ for $x\in \Sigma_\lambda$ and
$y\in \Omega_\lambda$, then
$$u_\lambda(x)>u(x).$$
While in the case of $ N\not =\alpha$, $u_\lambda(x)$ and $u(x)$ have the
following representations respectively:
$$u(x)=\int_{\Sigma_\lambda}|x-y|^{\alpha-N}\,dy + \int_{\Sigma_\lambda} |x_\lambda-y|^{\alpha-N} \,dy
+\int_{\Omega_\lambda} |x-y|^{\alpha-N} \,dy,$$
and
$$u_\lambda(x)=\int_{\Sigma_\lambda}|x_\lambda-y|^{\alpha-N}\,dy + \int_{\Sigma_\lambda} |x-y|^{\alpha-N} \,dy
+\int_{\Omega_\lambda} |x_\lambda-y|^{\alpha-N} \,dy.$$ Thus,
\begin{equation}
u_\lambda(x)-u(x)=\int_{\Omega_\lambda}(|x_\lambda-y|^{\alpha-N}-|x-y|^{\alpha-N})
\,dy. \label{Com1}
\end{equation}
Note that $|x-y|>|x_\lambda-y|$ for $x\in \Sigma_\lambda$ and
$y\in \Omega_\lambda$. Thus, (i) and (ii) follow.
\end{proof}
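The inequality $|x-y|>|x_\lambda-y|$ used in both cases reduces to the algebraic identity $|x-y|^{2}-|x_\lambda-y|^{2}=4(\lambda-x_{1})(y_{1}-\lambda)$, which is positive exactly when $x$ and $y$ lie on opposite sides of $T_\lambda$. A quick numerical spot-check (our own illustration):

```python
# Spot-check of |x-y|^2 - |x_lambda-y|^2 = 4 (lambda - x_1)(y_1 - lambda)
# for the reflection x_lambda = (2*lambda - x_1, x_2, ..., x_N): the
# right-hand side is positive whenever x_1 < lambda < y_1, which is the
# configuration x in Sigma_lambda, y in Omega_lambda used above.
import random

random.seed(2)
N, lam = 4, 0.5
for _ in range(200):
    x = [random.uniform(-3, lam)] + [random.uniform(-3, 3) for _ in range(N - 1)]
    y = [random.uniform(lam, 3)] + [random.uniform(-3, 3) for _ in range(N - 1)]
    x_ref = [2*lam - x[0]] + x[1:]                      # reflection about T_lambda
    d2 = sum((a - b)**2 for a, b in zip(x, y))          # |x - y|^2
    d2_ref = sum((a - b)**2 for a, b in zip(x_ref, y))  # |x_lambda - y|^2
    assert abs((d2 - d2_ref) - 4*(lam - x[0])*(y[0] - lam)) < 1e-9
    assert d2 > d2_ref                                  # hence |x-y| > |x_lambda-y|
```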
\begin{lem}
Assume that $u(x)$ satisfies (\ref{int}) and suppose
$\lambda=\bar\lambda$ in the first case; i.e.
$\Sigma^\prime_\lambda$ is internally tangent to the boundary of
$\Omega$ at some point $P_{\bar\lambda}$ not on $T_{\bar\lambda}$,
then $\Sigma_{\bar\lambda}\cup
T_{\bar\lambda}\cup\Sigma^\prime_{\bar\lambda}=\Omega$. \label{le}
\end{lem}
\begin{proof}
When $N\geq \alpha$, by the previous comparison lemma and continuity,
$u_{\bar\lambda}(x)\geq u(x)$ for $x\in \Sigma_{\bar\lambda}.$ When
$ N<\alpha$, $u_{\bar\lambda}(x)\leq u(x)$ for $x\in
\Sigma_{\bar\lambda}.$ We argue by contradiction. Suppose $
\Sigma_{\bar\lambda}\cup
T_{\bar\lambda}\cup\Sigma^\prime_{\bar\lambda}\varsubsetneqq\Omega;$
that is, $\Omega_{\bar\lambda}\not=\emptyset.$ Let $P=
(P_{\bar\lambda})_{\bar\lambda}\in\partial\Omega$ be the reflection of
$P_{\bar\lambda}$ about $T_{\bar\lambda}$. From (\ref{Com}) and
(\ref{Com1}), $u(P_{\bar\lambda})>u(P)$ in the case $N\geq \alpha$. This is a
contradiction, since $P_{\bar\lambda}, P\in
\partial\Omega$ and $u(P_{\bar\lambda})=u(P)=$ constant. For the
same reason, $u(P_{\bar\lambda})<u(P)$ when $N<\alpha$, which also
contradicts the fact that $u$ is constant on the boundary.
This completes the proof of the lemma.
\end{proof}
\begin{lem}
Assume that $u(x)$ satisfies (\ref{int}) and suppose that the second
case occurs: i.e. $T_{\bar\lambda}$ reaches a position where it is
orthogonal to the boundary of $\Omega$ at some point $Q$, then,
$\Sigma_{\bar\lambda}\cup
T_{\bar\lambda}\cup\Sigma^\prime_{\bar\lambda}=\Omega$. \label{la}
\end{lem}
\begin{proof}
Since $u(x)$ is constant on the boundary and $\Omega\in C^1$,
$\grad u$ is parallel to the normal at $Q$. As implied in the
second case, $\frac{\partial u}{\partial x_1}|_Q=0$. We denote the coordinate of $Q$ by $z$. Suppose
$\Omega_{\bar\lambda}\not=\emptyset$; then there exists a ball $
B\subset\subset
\Omega_{\bar\lambda}$.
Choose a sequence $\{x^i\}^\infty_1\subset\Sigma_{\bar\lambda}\setminus T_{\bar\lambda}$ such that
$x^i\to z$ as $i\to \infty$. It is easy to see that
$x^i_{\bar\lambda}\to z$ as $i\to \infty$. Since $ B\subset\subset
\Omega_{\bar\lambda},$ we can also find a $\delta>0$ such that $ diam\,
\Omega>|x^i_{\bar\lambda}-y|>\delta$ for any $y\in B$ and any $x^i_{\bar\lambda}$.
If
$N=\alpha$, by (\ref{Com}),
$$u(x^i_{\bar\lambda})-u(x^i)=\int_{\Omega_{\bar\lambda}}\log\frac{|x^i-y|}{|x^i_{\bar\lambda}-y|}
\,dy.$$
Let $e_1=(1,0,\cdots,0)\in \mathbb R^N$, then $(x^i_{\bar\lambda}-x^i)\cdot e_1$ is the first component of
$(x^i_{\bar\lambda}-x^i)$.
By the Mean Value theorem,
\begin{eqnarray}
\frac{u(x^i_{\bar\lambda})-u(x^i)}{(x^i_{\bar\lambda}-x^i)\cdot e_1} &=&
\int_{\Omega_{\bar\lambda}}\frac{\log|x^i-y|-\log|x^i_{\bar\lambda}-y|}{(x^i_{\bar\lambda}-x^i)\cdot e_1}\,dy \nonumber \\
& = & \int_{\Omega_{\bar\lambda}}\frac{(y-\bar
x^i_{\bar\lambda})\cdot e_1}{|y-\bar x^i_{\bar\lambda}|^2}\,dy \nonumber \\
& > & C\int_B \frac{1}{|diam\,\,\Omega|^2}\,dy \nonumber \\
&>& C, \nonumber \\
\label{Co}
\end{eqnarray}
where $\bar
x^i_{\bar\lambda}$ is some point between $ x^i_{\bar\lambda}$ and
$x^i$. Nevertheless,
$$\lim_{i\to
\infty}\frac{u(x^i_{\bar\lambda})-u(x^i)}{(x^i_{\bar\lambda}-x^i)\cdot
e_1}=\frac{\partial u}{\partial x_1}|_Q=0,$$ which contradicts
(\ref{Co}). Therefore, $\Omega_{\bar\lambda}=\emptyset.$
In the case of $N>\alpha$, similarly we have
\begin{eqnarray}
\frac{u(x^i_{\bar\lambda})-u(x^i)}{(x^i_{\bar\lambda}-x^i)\cdot
e_1}&=&\int_{\Omega_{\bar\lambda}}\frac{|x^i_{\bar\lambda}-y|^{\alpha-N}-|x^i-y|^{\alpha-N}}
{(x^i_{\bar\lambda}-x^i)\cdot e_1} \,dy \nonumber \\
&=&\int_{\Omega_{\bar\lambda}}(\alpha-N)|\bar
x^i_{\bar\lambda}-y|^{\alpha-N-2}((\bar x^i_{\bar\lambda}-y)\cdot
e_1)\,dy
\nonumber \\
&>& \int_{B}(\alpha-N)|\bar
x^i_{\bar\lambda}-y|^{\alpha-N-2}((\bar x^i_{\bar\lambda}-y)\cdot e_1)\,dy \nonumber \\
&>& C.
\end{eqnarray}
It also contradicts $\frac{\partial u}{\partial x_1}|_Q=0$, thus
$\Omega_{\bar\lambda}=\emptyset$.
The same idea can be applied to
the case of $N<\alpha$ with minor modification. In conclusion,
$\Sigma_{\bar\lambda}\cup
T_{\bar\lambda}\cup\Sigma^\prime_{\bar\lambda}=\Omega$ when the
second case occurs.
\end{proof}
Combining Lemma \ref{le} and Lemma \ref{la}, we obtain Theorem \ref{1t}.
\section{Proof of Theorem \ref{2t}}
In this section, we will prove Theorem \ref{2t} under some
restriction on the diameter of
$\Omega$. We are mainly interested in
the case
$\frac{\alpha-N}{2}\in \mathbb N_0$, which is the case when the
fundamental solution of $(-\bigtriangleup)^{\frac{\alpha}{2}}$ has
the representation (\ref{green}). Therefore, we will assume
$\alpha>N$ in this section. Obviously, $u \in C^{1}(\mathbb R^N)$ in
(\ref{log}). We begin by establishing several lemmas.
\begin{lem}
For $ \lambda_0<\lambda<\bar\lambda$, assume $u(x)$ satisfies
(\ref{log}) with $diam\, \Omega< e^{\frac{1}{N-\alpha}}$, then
$u_\lambda(x)<u(x)$ for any $x\in \Sigma_\lambda$.
\end{lem}
\begin{proof}
Since $|x_\lambda-y_\lambda|=|x-y|,$ and
$|x_\lambda-y|=|x-y_\lambda|,$ we write $u(x)$ and $u_\lambda(x)$ in
the following forms:
\begin{eqnarray*}
u(x)&=&
\int_{\Sigma_\lambda}|x-y|^{\alpha-N}\log\frac{1}{|x-y|}\,dy+\int_{\Sigma_\lambda}|x_\lambda-y|^{\alpha-N}\log\frac{1}{|x_\lambda-y|}\,dy
\\ &&{}+ \int_{\Omega_\lambda}|x-y|^{\alpha-N}\log\frac{1}{|x-y|}\,dy,
\end{eqnarray*}
and
\begin{eqnarray*}
u_\lambda(x)&=&\int_{\Sigma_\lambda}|x_\lambda-y|^{\alpha-N}\log\frac{1}{|x_\lambda-y|}\,dy+\int_{\Sigma_\lambda}|x-y|^{\alpha-N}\log\frac{1}{|x-y|}\,dy
\\ &&{}+
\int_{\Omega_\lambda}|x_\lambda-y|^{\alpha-N}\log\frac{1}{|x_\lambda-y|}\,dy.
\end{eqnarray*}
Then,
\begin{equation}
u_\lambda(x)-u(x)=\int_{\Omega_\lambda}|x-y|^{\alpha-N}\log|x-y|\,dy-\int_{\Omega_\lambda}|x_\lambda-y|^{\alpha-N}\log|x_\lambda-y|\,dy.
\label{pw}
\end{equation}
We consider the function $s^{\alpha-N}\log s.$ Note $\alpha>N$, thus
$$(s^{\alpha-N}\log s)^\prime=s^{\alpha-N-1}[(\alpha-N)\log s+1]<0,$$
whenever $s<e^{\frac{1}{N-\alpha}}.$ Since $|x-y|>|x_\lambda-y|$ for
$x\in \Sigma_\lambda , \, y\in \Omega_\lambda$, and $diam \Omega<
e^{\frac{1}{N-\alpha}}$, we easily infer that $u_\lambda(x)<u(x)$
for any $x\in\Sigma_\lambda.$
\end{proof}
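The monotonicity used in the proof above can be illustrated numerically; the exponents $\alpha=5$, $N=3$ below are our own sample choice.

```python
# Numeric check that g(s) = s^(alpha-N) * log(s) is strictly decreasing on
# (0, e^{1/(N-alpha)}) when alpha > N, since
# g'(s) = s^(alpha-N-1) * ((alpha-N) * log(s) + 1) < 0 there.
import math

alpha, N = 5, 3                              # our own illustrative exponents
s_star = math.exp(1.0 / (N - alpha))         # threshold e^{1/(N-alpha)} < 1

def g(s):
    return s**(alpha - N) * math.log(s)

grid = [s_star * k / 200.0 for k in range(1, 200)]   # points in (0, s_star)
assert all(g(a) > g(b) for a, b in zip(grid, grid[1:]))       # g is decreasing
assert all((alpha - N) * math.log(s) + 1 < 0 for s in grid)   # sign of g'
```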
\begin{lem}
Assume that $u(x)$ satisfies (\ref{log}) and suppose $\lambda=\bar\lambda$ in
the first case; i.e. $\Sigma^\prime_{\bar\lambda}$ is internally
tangent to the boundary of $\Omega$ at some point $P_{\bar\lambda}$
not on $T_{\bar\lambda}$, then $\Sigma_{\bar\lambda}\cup
T_{\bar\lambda}\cup\Sigma^\prime_{\bar\lambda}=\Omega$.
\end{lem}
\begin{proof}
The proof is essentially the same as that of Lemma \ref{le}.
\end{proof}
\begin{lem}
Suppose that $u(x)$ satisfies (\ref{log}) with $diam \Omega<
e^{\frac{1}{N-\alpha}}$ and that the second case occurs: i.e.
$T_{\bar\lambda}$ reaches a position where it is orthogonal to the
boundary of $\Omega$ at some point $Q$, then,
$\Sigma_{\bar\lambda}\cup
T_{\bar\lambda}\cup\Sigma^\prime_{\bar\lambda}=\Omega$.
\end{lem}
\begin{proof}
The argument follows that of the proof of Lemma \ref{la}. Since
$u(x)$ is constant on $\partial\Omega$ and $\Omega\in C^1$,
$\frac{\partial u}{\partial x_1}|_Q=0$. We denote the coordinate of $Q$ by $z$. Suppose
$\Omega_{\bar\lambda}\not=\emptyset$; then there exists a ball $
B\subset\subset
\Omega_{\bar\lambda}$.
Choosing a sequence $\{x^i\}^\infty_1\subset\Sigma_{\bar\lambda}\setminus T_{\bar\lambda}$ such that
$x^i\to z$ as $i\to \infty$, then $x^i_{\bar\lambda}\to z$ as $i\to
\infty$. Since $ B\subset\subset
\Omega_{\bar\lambda},$ we find a $\delta>0$ such that $ diam\,
\Omega>|x^i_{\bar\lambda}-y|>\delta$ for any $y\in B$ and any $x^i_{\bar\lambda}$.
From (\ref{pw}), by Mean Value Theorem,
\begin{eqnarray}
\frac{u(x^i_{\bar\lambda})-u(x^i)}{(x^i_{\bar\lambda}-x^i)\cdot
e_1}&=&\int_{\Omega_{\bar\lambda}}\frac{|x^i-y|^{\alpha-N}\log|x^i-y|
-|x^i_{\bar\lambda}-y|^{\alpha-N}\log|x^i_{\bar\lambda}-y|}
{(x^i_{\bar\lambda}-x^i)\cdot e_1} \,dy \nonumber \\
&=&\int_{\Omega_{\bar\lambda}} -|\bar
x^i_{\bar\lambda}-y|^{\alpha-N-2}((x^i_{\bar\lambda}-y)\cdot
e_1)((\alpha-N)\log|\bar x^i_{\bar\lambda}-y|+1) \,dy
\nonumber \\
&<&\int_{B} -|\bar
x^i_{\bar\lambda}-y|^{\alpha-N-2}((x^i_{\bar\lambda}-y)\cdot
e_1)((\alpha-N)\log|\bar x^i_{\bar\lambda}-y|+1) \,dy
\nonumber \\
&<& -C. \label{22}
\end{eqnarray}
where $\bar x^i_{\bar\lambda}$ is some point between $
x^i_{\bar\lambda}$ and $x^i$. The assumption $\mathrm{diam}\,\Omega<
e^{\frac{1}{N-\alpha}}$ is used in the last two inequalities.
Letting $i\to\infty$, (\ref{22}) contradicts $\frac{\partial u}{\partial
x_1}|_Q=0$. This proves the lemma.
\end{proof}
With the help of the above two lemmas, Theorem \ref{2t} is
proved.
% https://arxiv.org/abs/1811.12876
\title{The moduli space of real vector bundles of rank two over a real hyperelliptic curve}
\begin{abstract}
The Desale-Ramanan Theorem is an isomorphism between the moduli space of rank two vector bundles over a complex hyperelliptic curve and the variety of linear subspaces in an intersection of two quadrics. We prove a real version of this theorem for the moduli space of real vector bundles over a real hyperelliptic curve. We then apply this result to study the topology of the moduli space, proving that it is relatively spin and identifying the diffeomorphism type for genus two curves. Our results lay the groundwork for future study of the quantum homology of these moduli spaces.
\end{abstract}
\section{Introduction}\label{sect:intro}
Given a Riemann surface $\Sigma$ of genus $g \geq 2$ and a line bundle $\xi \rightarrow \Sigma$ of odd degree, the moduli space $U_{\xi}$ of stable rank 2 vector bundles $E\rightarrow \Sigma$ with fixed determinant $\wedge^2 E \cong \xi$ is a non-singular projective variety of dimension $3g-3$ which is naturally endowed with a K\"ahler metric.
Given an antiholomorphic involution $\tau: \Sigma \rightarrow \Sigma$, there is an induced antiholomorphic and antisymplectic involution on $U_{\xi}$. The fixed point set $U_{\xi}^\tau \subseteq U_{\xi}$ is a totally geodesic real Lagrangian submanifold of $U_{\xi}$. The points of $U_{\xi}^\tau$ correspond to holomorphic bundles over $\Sigma$ which admit a real structure \cite{BHH,S11}. The topology of $U_{\xi}^\tau$ has been investigated using Atiyah-Bott-Kirwan type methods: the mod 2 Betti numbers were calculated in \cite{Baird13,Baird18,LS}, and when $g \geq 3$ the rational cohomology ring was calculated in \cite{Baird18}.
In the current paper, we study $U_\xi^{\tau}$ using an entirely different method in the special case when $\Sigma$ is a hyperelliptic curve. Recall that a hyperelliptic curve is a 2-fold ramified cover $\pi: \Sigma \rightarrow {\mathbb{C}} P^1$. If $\Sigma$ has genus $g \geq 2$, then $\pi$ has $2g+2$ ramification points $W \subseteq {\mathbb{C}} P^1 $ also known as the Weierstrass points. Choose affine coordinates ${\mathbb{C}} P^1 = {\mathbb{C}} \cup \{\infty\}$ and let $\lambda_1,...,\lambda_{2g+2} \in {\mathbb{C}}$ denote the coordinates of $W$. The following is due to Desale-Ramanan \cite{DesaleRamanan76} (see also \cite{NR, N} for the genus 2 case).
\begin{theorem}[Desale-Ramanan]\label{DRT}
If $\Sigma$ is hyperelliptic then $U_{\xi}$ is isomorphic to the subvariety of the Grassmannian $X \subseteq Gr_{g-1}( {\mathbb{C}}^{2g+2})$ consisting of $(g-2)$-linear subspaces contained in the intersection of the two quadrics $Z(q_0) \cap Z(q_1) \subset {\mathbb{C}} P^{2g+1}$ where
\begin{eqnarray*}
q_0 &=& x_1^2 + ...+ x_{2g+2}^2, \\
q_1 &=& \lambda_1 x_1^2 + ... + \lambda_{2g+2} x_{2g+2}^2.
\end{eqnarray*}
\end{theorem}
Our main result (Theorem \ref{mainresult}) is a real version of the Desale-Ramanan Theorem. Namely, we prove that if $(\Sigma, \tau)$ is a real hyperelliptic curve, then $U_{\xi}^{\tau}$ is isomorphic to the subvariety of $Gr_{g-1}( {\bb{R}}^{2g+2})$ consisting of planes contained in the intersection of two explicit real quadrics. For genus $g=2$ curves, this result was proven by Shuguang Wang \cite{W}.
The rest of the paper is devoted to applications of this result. In \S \ref{Grasssect} we produce formulas for the Stiefel-Whitney classes of $U_{\xi}^{\tau}$ and use these to show that $U_{\xi}^{\tau}$ is relatively spin as a Lagrangian submanifold of $U_{\xi}$. This implies that it has a well-defined quantum homology ring with integer coefficients \cite{BC}.
In \S \ref{g2sect} we identify the diffeomorphism type of $U_{\xi}^{\tau}$ for all genus two examples. Previously, only the ${\bb{Z}}_2$-Betti numbers were known.
\section{Real hyperelliptic curves}
A hyperelliptic curve, $\pi: \Sigma \rightarrow {\mathbb{C}} P^1$, is a 2-fold ramified cover over the complex projective line. We assume always that $\Sigma$ has genus $g \geq 2$. Such a curve admits an involution $\iota: \Sigma \rightarrow \Sigma$, which interchanges the two sheets of the cover and thus identifies $\Sigma/ \iota \cong {\mathbb{C}} P^1$. The ramification points $W \subset {\mathbb{C}} P^1$ are called the Weierstrass points. Choosing affine coordinates on ${\mathbb{C}} P^1 = {\mathbb{C}} \cup \{\infty\}$ we can represent $W$ as a set of $2g+2$ points in the complex plane ${\mathbb{C}}$ (we assume always $\infty \not\in W$).
A real structure on a Riemann surface $\Sigma$ is an anti-holomorphic involution $\tau$, that is, a smooth map $\tau: \Sigma \rightarrow \Sigma$ which reverses the complex structure and satisfies $\tau^2 = Id_{\Sigma}$. The fixed point set of the involution, $\Sigma_{\bb{R}} := \Sigma^{\tau}$, is diffeomorphic to a disjoint union of circles. In this paper, we consider real structures that are compatible with the hyperelliptic projection, i.e. \[\pi\circ \tau = \tau_{P^1} \circ \pi\]
with respect to the standard real involution $\tau_{P^1}$ on ${\mathbb{C}} P^1$, defined by $\tau_{P^1}([z:w]) := [\overline{z}:\overline{w}]$. A hyperelliptic curve admits a compatible real structure if and only if $\tau_{P^1}(W) = W$. If this happens, then $\Sigma$ admits a pair of compatible real structures $\tau$ and $\tau \circ \iota = \iota \circ \tau$, where $\iota$ is the hyperelliptic involution. For consistency, we will always choose $\tau$ so that $\infty \in \pi( \Sigma^{\tau})$ and $\infty \not\in \pi(\Sigma^{\tau\circ \iota})$. A typical situation for genus $g=2$ is illustrated in Figures \ref{Fig1} and \ref{Fig2} below. The ramification points $W \subset {\mathbb{C}} \subset {\mathbb{C}} P^1$, drawn in orange, are arranged symmetrically about the real axis. The respective images $\pi( \Sigma^{\tau})$ and $\pi( \Sigma^{\tau \circ \iota})$ are drawn in green.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=1,>=stealth]
\tkzInit[xmin=-3.5,xmax=3,ymin=-0.7,ymax=0.7]\tkzClip[space=0.1]
\begin{scope}
\tkzDefPoints{-3/0/xl,3/0/xr,-2/0/p1,-1.2/0/p2, 0/0/p3, 1/0/p4, 1.8/0.7/q1, 1.8/-0.7/q2}
\tkzDrawSegments[color=green,<->,line width=1pt,opacity=0.8](xl,xr)
\tkzDrawSegments[color=black,line width=1pt,opacity=1](p1,p2 p3,p4)
\tkzDrawPoints[color=orange,fill=orange](p1,p2,p3,p4,q1,q2)
\tkzLabelPoint[left](xl){${\bb{R}}$}
\end{scope}
\end{tikzpicture}
\caption{$\tau$}
\label{Fig1}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=1,>=stealth]
\tkzInit[xmin=-3.5,xmax=3,ymin=-0.7,ymax=0.7]\tkzClip[space=0.2]
\begin{scope}
\tkzDefPoints{-3/0/xl,3/0/xr,-2/0/p1,-1.2/0/p2, 0/0/p3, 1/0/p4, 1.8/0.7/q1, 1.8/-0.7/q2}
\tkzDrawSegments[color=black,<->,line width=1pt,opacity=0.8](xl,xr)
\tkzDrawSegments[color=green,line width=1pt,opacity=1](p1,p2 p3,p4)
\tkzDrawPoints[color=orange,fill=orange](p1,p2,p3,p4,q1,q2)
\tkzLabelPoint[left](xl){${\bb{R}}$}
\end{scope}
\end{tikzpicture}
\caption{$ \iota \circ \tau$}
\label{Fig2}
\end{center}
\end{figure}
Given a real curve $(\Sigma, \tau)$, the fixed point set $\Sigma^{\tau}$ is a union of disjoint circles called the \emph{real circles}. The topological type of a real curve $(\Sigma, \tau)$ is determined by the genus $g$, the number of path components of $\Sigma^\tau$, and whether or not $\Sigma \setminus \Sigma^{\tau}$ is connected (see \cite{GH}). The following proposition classifies real hyperelliptic curves topologically.
\begin{prop}\label{realcurvesprop}
Decompose $W = W_0 \cup W_+ \cup W_-$, where $W_0$, $W_+$, $W_-$ are those Weierstrass points $w \in {\mathbb{C}}$ whose imaginary part is respectively zero, positive, negative. Clearly $W_+$ and $W_-$ have equal cardinality, so $W_0$ has even cardinality. Let $2n = \# W_0$.
\begin{itemize}
\item[(i)] If $n>0$, then $\#\pi_0(\Sigma^{\tau}) = \#\pi_0(\Sigma^{\tau \circ \iota }) = n$.
\item[(ii)] If $n=0$, then $\#\pi_0(\Sigma^{\tau})= 1$ and $\#\pi_0(\Sigma^{\tau \circ \iota }) = 0$.
\item[(iii)] If $1 \leq n \leq g$, then both $\Sigma \setminus \Sigma^{\tau}$ and $\Sigma \setminus \Sigma^{\tau \circ \iota}$ are connected. If $n = g+1$ then both $\Sigma \setminus \Sigma^{\tau}$ and $\Sigma \setminus \Sigma^{\tau \circ \iota}$ are disconnected. If $n=0$, then $\Sigma \setminus \Sigma^{\tau}$ is disconnected and $\Sigma \setminus \Sigma^{\tau \circ \iota}$ is connected.
\end{itemize}
\end{prop}
\begin{proof}
Statements (i) and (ii) are clear.
For statement (iii), observe that $\Sigma \setminus \Sigma^{\tau}$ is connected if and only if it is possible to draw a closed loop in ${\mathbb{C}} \setminus W$ which encloses an odd number of points of $W$ and crosses the real line. The claim then follows by inspecting the possible configurations.
\end{proof}
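The case analysis of Proposition \ref{realcurvesprop} can be packaged as a short script. This is a sketch under our standing conventions ($W$ given in affine coordinates, $\infty \not\in W$, and $\tau$ chosen so that $\infty \in \pi(\Sigma^\tau)$); the helper name and the sample genus-2 data (modelled on Figures \ref{Fig1} and \ref{Fig2}) are our own choices.

```python
def real_type(W, g):
    """Topological invariants of the real structures tau and tau o iota.

    Returns (n, number of components of Sigma^tau, number of components of
    Sigma^{tau o iota}, complement of Sigma^tau disconnected?,
    complement of Sigma^{tau o iota} disconnected?).
    """
    W0 = [w for w in W if complex(w).imag == 0]
    assert len(W) == 2 * g + 2 and len(W0) % 2 == 0
    n = len(W0) // 2
    comps_tau = n if n > 0 else 1
    comps_tau_iota = n if n > 0 else 0
    disc_tau = (n == 0) or (n == g + 1)       # item (iii) of the proposition
    disc_tau_iota = (n == g + 1)
    return n, comps_tau, comps_tau_iota, disc_tau, disc_tau_iota

# genus-2 configuration as in the figures: four real points, one conjugate pair
print(real_type([-2, -1.2, 0, 1, 1.8 + 0.7j, 1.8 - 0.7j], 2))
# → (2, 2, 2, False, False)
```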
\begin{remark}
Proposition \ref{realcurvesprop} implies that for genus $g \geq 4$, there exist topological types of real curves that cannot be realized as hyperelliptic curves. Namely, those curves for which $1 < \pi_0( \Sigma^{\tau} ) < g+1$ and $\Sigma \setminus \Sigma^{\tau}$ is disconnected.
\end{remark}
\subsection{Real line bundles over real hyperelliptic curves}
Let $(\Sigma, \tau)$ be a real curve. A real line bundle $\xi$ over $(\Sigma,\tau)$ is a line bundle that admits an antiholomorphic lift
$$ \xymatrix{ \xi \ar[r]^{\tilde{\tau}} \ar[d] & \xi \ar[d] \\ \Sigma \ar[r]^\tau & \Sigma}$$ such that $\tilde{\tau}^2 = Id_{\xi}$. If $\tilde{\tau}$ exists then by Schur's Lemma it is uniquely determined up to multiplication by a unit scalar. The fixed point set $\xi^{\tilde{\tau}} \rightarrow \Sigma^{\tau}$ is an ${\bb{R}}^1$-bundle over $\Sigma^{\tau}$. The Stiefel-Whitney class $w_1(\xi^{\tilde{\tau}}) \in H^1(\Sigma^{\tau}; {\bb{Z}}_2)$ depends only on $(\Sigma, \tau, \xi)$, not on the choice of lift $\tilde{\tau}$. If $C \subset \Sigma^{\tau}$ is a real circle, then the following are equivalent
\begin{enumerate}
\item[(i)] $\xi^{\tilde{\tau}}|_C$ is non-orientable (i.e. a Moebius band),
\item[(ii)] $w_1(\xi^{\tilde{\tau}})(C) = 1$.
\end{enumerate}
We call $C$ odd with respect to $\xi$ if it satisfies these equivalent conditions. If $k$ is the number of odd circles with respect to $\xi$ then by (\cite{BHH} Prop. 4.1)
\begin{equation}\label{d=w=k}
\mathrm{deg}(\xi) \equiv w_1(\xi^{\tilde{\tau}})(\Sigma^{\tau}) \equiv k \pmod 2.
\end{equation}
The Desale-Ramanan Theorem requires the degree of $\xi$ to be odd, which implies that our real line bundle $\xi$ will always have an odd number of odd circles. In particular, we need only consider $(\Sigma, \tau)$ for which $\Sigma^{\tau}$ is non-empty. By (\cite{BHH} Prop. 4.1) if $\Sigma^{\tau} \neq \emptyset$, then real line bundles exist over $(\Sigma,\tau)$ in all degrees. Tensoring by a real line bundle $L$ over $(\Sigma, \tau)$ determines an isomorphism $$ U_{\xi}^{\tau} \cong U_{\xi \otimes L^{\otimes 2}}^{\tau}$$ and $\mathrm{deg}(\xi \otimes L^{\otimes 2}) = \mathrm{deg}(\xi) +2 \mathrm{deg}(L)$. Therefore in studying $U_{\xi}^{\tau}$ we may assume without loss of generality that $\xi$ has degree $2g+1$.
Given a real line bundle $\xi$ with real structure $\tilde{\tau}$, there always exists a $\tilde{\tau}$-invariant meromorphic section $s$, which determines a real divisor $D = \sum_{i} m_i p_i$, where $m_i \in {\bb{Z}}$ is the multiplicity of the zero of $s$ at $p_i \in \Sigma$. Observe that $\tau(D) = D$. A real circle $C \subseteq \Sigma^{\tau}$ is odd with respect to $\xi$ if and only if $\sum_{p_i \in C} m_i$ is odd.
\section{The Desale-Ramanan Theorem}
We review what we need from Desale-Ramanan \cite{DesaleRamanan76}.
\subsection{The moduli space of bundles as a subvariety of a Grassmannian}\label{31}
Let $\pi: \Sigma \rightarrow {\mathbb{C}} P^1$ be a hyperelliptic curve of genus $g$ with hyperelliptic involution $\iota$. Let $W$ be the set of $2g+2$ Weierstrass points for $\Sigma$. We abuse notation and identify $W = \pi^{-1}(W) \subset \Sigma$. Fix a line bundle $\xi \rightarrow \Sigma$ of degree $2g+1$.
Let $E \in U_{\xi}$ be a stable bundle with determinant isomorphic to $\xi$. There is a natural map $$H^0(\Sigma, E \otimes \iota^* E) \rightarrow \bigoplus_{w \in W} E_w \otimes \iota^*E_w = \bigoplus_{w \in W} E_w \otimes E_w$$
defined by restriction and identifying $\iota^*E_w = E_w$. This map is $\iota$-equivariant. Restricting to the $-1$ eigenspaces yields a map
\begin{equation}\label{grassmanninj}
H^0(\Sigma, E \otimes \iota^* E) ^{-\iota} \rightarrow \bigoplus_{w \in W} (E_w \otimes E_w)^{-\iota} = \bigoplus_{w \in W} \wedge^2 E_w \cong \bigoplus_{w \in W} \xi_w
\end{equation}
where the last isomorphism uses $\wedge^2 E \cong \xi$, so is only natural up to a non-zero scalar in ${\mathbb{C}}^*$. Desale-Ramanan prove that (\ref{grassmanninj}) is injective, and $H^0(\Sigma, E \otimes \iota^* E) ^{-\iota}$ has fixed dimension $g+3$ for all $E$. We therefore obtain a morphism
$$ U_{\xi} \rightarrow Gr_{g+3} (V^*)$$
where $V^* = \oplus_{w \in W} \xi_w$. Applying duality, we can think of this as a morphism
$$ \phi: U_{\xi} \rightarrow Gr_{g-1}(V) $$
where $V := \oplus_{w\in W} \xi_w^*$. Desale and Ramanan prove that $\phi$ is an embedding, so $U_{\xi}$ is isomorphic to the image $im(\phi)$. Moreover, they prove that $im(\phi)$ is equal to the set of $(g-2)$-linear subspaces in $P(V)$ that lie in the intersection of two quadrics $Q_0$ and $Q_1$. We construct these quadrics in the next section.
Now suppose we endow $\Sigma$ with an antiholomorphic involution $\tau$ which commutes with $\iota$. This means in particular that $\tau$ permutes the Weierstrass points. If $\xi$ is a real line bundle over $(\Sigma,\tau),$ then $V = \oplus_{w \in W} \xi_w^*$ inherits an anti-linear involution inducing an involution on $Gr_{g-1}(V)$. It is clear from the construction that $\phi$ is $\tau$-equivariant.
\subsection{Line bundles and quadrics}\label{subsect:linebundleoncurve}
A pencil of quadratic forms on a vector space $V$ is determined by a surjective linear map $Q: S^2(V) \rightarrow U$ where $U$ is a vector space of dimension two. In terms of a basis for $U$, this is defined by a linearly independent pair of quadratic forms $Q_0,Q_1: S^2(V) \rightarrow {\mathbb{C}}$.
Given a line bundle $\xi$ of degree $2g+1$ over the hyperelliptic curve $\Sigma$, we want to construct a pencil of quadratic forms on $V$ where \begin{equation}\label{decomp}
V := \bigoplus_{w \in W} \xi_w^*.
\end{equation}
We do this by constructing a linear map $$Q:S^2(V) \rightarrow H^0( h^{-2g-1}(W))$$ where $h$ is the hyperplane bundle over ${\mathbb{C}} P^1$ and $W$ is the Weierstrass divisor. Our pencil will be diagonal with respect to the decomposition (\ref{decomp}) so it is determined by morphisms
$$ q_w: (\xi_w^*)^{\otimes 2} \rightarrow H^0( h^{-2g-1}(W))$$
for each $w \in W$. We define $q_w$ to be the composition of morphisms (\ref{eqa}), (\ref{eqb}), (\ref{eqc}) defined below. Fix an isomorphism
\begin{equation}\label{xih}
\phi: \xi^* \otimes \iota^* \xi^* \cong \pi^*(h)^{-2g-1}.
\end{equation}
At a Weierstrass point $w \in W$, $\phi$ restricts to an isomorphism
\begin{equation}\label{eqa}
(\xi_w^*)^{\otimes 2} = \xi^*_w \otimes \iota^* \xi^*_w \cong \pi^*(h)^{-2g-1}_w = h_w^{-2g-1}
\end{equation}
(recall we abuse notation identifying $w$ and $\pi(w)$). Evaluation at $w$ determines an isomorphism
\begin{equation}\label{eqb}
h_w^{-2g-1} \cong H^0( h^{-2g-1}(W - w)),
\end{equation}
and we have the natural inclusion
\begin{equation}\label{eqc}
H^0( h^{-2g-1}(W - w)) \hookrightarrow H^0( h^{-2g-1}(W)).
\end{equation}
Choose affine coordinates on ${\mathbb{C}} P^1 = {\mathbb{C}} \cup \{\infty\}$ with variable $t$ and Weierstrass points $\{t_w \in {\mathbb{C}}\}$. In these coordinates we have $$ H^0( h^{-2g-1}(W)) = \{ (at +b) R(t)| a,b\in {\mathbb{C}} \} $$
where $R(t) = t^{2g+1} \prod_{w \in W} (t-t_w)^{-1}$. The quadratic forms $Q_0, Q_1$ are defined by the identity $Q = (Q_0t - Q_1) R(t)$.
Let $u$ be a meromorphic section of $\xi^*$, which may be chosen to have no poles or zeros on $W$, so that $\{u_w|w \in W\}$ is a basis for $V$. Let $D$ be the divisor of $u$ and express $$\pi(D) = \sum m_i \alpha_i$$ in affine coordinates $\alpha_i \in {\mathbb{C}} \subset {\mathbb{C}} P^1$, where $\sum m_i = -2g-1$. Then, up to a scalar (which we may fix to equal one), $u \otimes \iota^*(u)$ is sent by $\phi$ to the pull-back of $\prod_i (t - \alpha_i)^{m_i}$. Tracing through the definition of $q_w$ we get
$$ q_w( u_w^2) = (t -t_w) \prod_i (t_w - \alpha_i)^{m_i} \prod_{w' \neq w} (t_w -t_{w'}) R(t).$$
Therefore
\begin{eqnarray}
Q_0(u_w^2) &:=& \prod_i (t_w - \alpha_i)^{m_i} \prod_{w' \neq w} (t_w -t_{w'}) \label{Q0}\\
Q_1(u_w^2) &:=& t_w Q_0(u_w^2).
\end{eqnarray}
We recover the expression in Theorem \ref{DRT} simply by replacing the basis $u_w$ with $\frac{1}{ \sqrt{Q_0(u_w^2)}} u_w$. To get the real version, we must choose a different basis.
\section{Main Result}\label{sect:mainresult}
Suppose now that $\pi: \Sigma \rightarrow {\mathbb{C}} P^1$ is a real hyperelliptic curve with anti-holomorphic involution $\tau$ compatible with the standard involution $\tau_{P^1}$ on ${\mathbb{C}} P^1$. Suppose this lifts to a real structure $\tilde{\tau}$ on $\xi$ and choose a $\tilde{\tau}$-invariant meromorphic section $u$. We want to replace the basis $\{u_w| w \in W\}$ of $V$ by a basis whose elements are fixed by $\tau$ and with respect to which the expressions for $Q_0$ and $Q_1$ simplify.
Given affine coordinates ${\mathbb{C}} \cup \{ \infty\}$, recall that if $\# W_0 = 2n >0$, then $\pi(\Sigma^{\tau}) \cap {\mathbb{C}}$ is a union of $n$ disjoint closed intervals in ${\bb{R}}$, two of which are half infinite. Declare such an interval $I \subseteq {\bb{R}}$ to be odd with respect to $\pi(D) = \sum_i m_i \alpha_i$ if $\sum_{\alpha_i \in I} m_i$ is odd. Note that a bounded interval is odd if and only if it is the image of an odd circle for $\xi$. To simplify what follows we require when $n>0$ that the negative half infinite interval be even. This can always be arranged by changing the affine coordinate chart using a Moebius transformation.
\begin{theorem}\label{mainresult}
Suppose that $W_0$ consists of the real numbers $r_1<r_2< ... < r_{2n}$ while $W_+$ consists of the complex numbers $a_1 + i b_1, ..., a_{s} +ib_{s}$, where $s := g+1-n$ and each $b_j>0$. Assume that $ \infty \in \pi( \Sigma^{\tau})$. There is a choice of coordinates $x_1,...,x_{2n}, z_1,...,z_s, w_1,...,w_s$ on $V$, in which the quadratic forms $Q_0$ and $Q_1$ are expressed as polynomials
\begin{eqnarray}
q_0 &:=& \sum_{i=1}^{n} \epsilon_i \left(x_{2i-1}^2-x_{2i}^2\right) + \sum_{j = 1}^{s} \left(z_{j}^2 -w_{j}^2\right)\label{eqn4.1} \\
q_1 &:=& \sum_{i=1}^{n} \epsilon_i \left(r_{2i-1} x_{2i-1}^2-r_{2i}x_{2i}^2\right) + \sum_{j=1}^{s} \left( a_{j} z_{j}^2 - a_j w_{j}^2 + 2b_j z_{j}w_{j} \right) \label{eqn4.2}
\end{eqnarray}
where $\epsilon_i := (-1)^{N_i+1} \in \{\pm 1\}$ and $N_i$ is the number of odd intervals to the right of $r_{2i-1}$. This convention implies $\epsilon_1 =1$.
\end{theorem}
\begin{proof}
Clearly $\tau_{P^1}$ preserves both $\pi(D)$ and $W$. Let $\nu$ be the permutation of $W$ such that $\tau_{P^1}(w) =\nu(w)$. The induced anti-linear involution on $V = \oplus_{w \in W} \xi_w^*$ sends a linear combination $ \sum_{w \in W} \lambda_w u_w$ to $\sum_{w \in W}\overline{\lambda}_w u_{\nu(w)}$.
Introduce the notation $ R_w e^{i \theta_w} := Q_0(u_w^2)$ where $R_w > 0$ and $\theta_w \in [0, 2\pi)$. This is well-defined because from (\ref{Q0}) we see $Q_0(u_w^2) \neq 0$. Define the basis
$$ B := \{ v_w ~|~w \in W_0\} \cup \{ v'_w, v''_w~|~w \in W_+\} $$
where
\begin{eqnarray*}
v_w &:= & \frac{1}{\sqrt{ R_w}} u_w \\
v'_w &:=& \frac{e^{-i \theta_w/2}}{\sqrt{ 2 R_w}}u_w + \frac{e^{i \theta_w/2}}{\sqrt{2 R_w}} u_{\nu(w)}\\
v''_w &:=& -i\Big( \frac{e^{-i \theta_w/2}}{\sqrt{ 2 R_w}}u_w - \frac{e^{i \theta_w/2}}{\sqrt{2 R_w}} u_{\nu(w)} \Big) .
\end{eqnarray*}
Clearly each basis vector in $B$ is fixed by conjugation. It is straightforward to check that $Q_0$ is diagonal in this basis, with
\begin{eqnarray*}
Q_0(v_{w}^2) &=& e^{i \theta_w}\\
Q_0((v_w')^2) &=&1 \\
Q_0((v_w'')^2) &=& -1 \\
\end{eqnarray*}
while $Q_1$ satisfies
\begin{eqnarray*}
Q_1(v_{w}^2) &=& t_w e^{i \theta_w}\\
Q_1((v_w')^2) &=& (t_w + \overline{t}_w)/2 = Re(t_w) \\
Q_1((v_w'')^2) &= &- (t_w + \overline{t}_w)/2 = - Re(t_w) \\
Q_1(v_w' v_w'') &= &- i (t_w - \overline{t}_w)/2 = Im(t_w) \\
\end{eqnarray*}
with all other entries zero. Finally, check that if $w \in W_0$, then $e^{i \theta_w} = (-1)^N$ where $$ N = \#\{w' \in W_0 | t_{w'} > t_w\} + \sum_{\alpha_i \in {\bb{R}}, \alpha_i > t_w} m_i.$$ Therefore $e^{i \theta_w} = \epsilon_i$ if $t_w = r_{2i-1}$ and $e^{i \theta_w} = -\epsilon_i$ if $t_w = r_{2i}$ as desired.
\end{proof}
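The change of basis in the proof can be checked numerically. The sketch below uses purely illustrative data: a toy set of four conjugation-symmetric "Weierstrass" coordinates (two real, one conjugate pair) and a conjugation-invariant divisor; it verifies that the transformed pencil has the real normal form of Theorem \ref{mainresult}.

```python
# Numeric check that the basis change in the proof diagonalizes Q_0 and puts
# Q_1 into the real normal form above.  All data below are our own toy choices.
import cmath
import numpy as np

tw = np.array([-1.0, 2.0, 1.0 + 1.0j, 1.0 - 1.0j])         # W_0, then the pair in W_+, W_-
alphas = [(-3.0, -1), (0.5 + 0.5j, -1), (0.5 - 0.5j, -1)]  # (alpha_i, m_i), conjugation-invariant

def q0(w):
    # Q_0(u_w^2): prod_i (t_w - alpha_i)^{m_i} * prod_{w' != w} (t_w - t_{w'})
    val = 1.0 + 0.0j
    for a, m in alphas:
        val *= (tw[w] - a) ** m
    for wp in range(len(tw)):
        if wp != w:
            val *= tw[w] - tw[wp]
    return val

D0 = np.diag([q0(w) for w in range(4)])           # Q_0 is diagonal in the basis u_w
D1 = np.diag([tw[w] * q0(w) for w in range(4)])   # Q_1(u_w^2) = t_w Q_0(u_w^2)

P = np.zeros((4, 4), dtype=complex)               # columns: v_{w_1}, v_{w_2}, v', v''
for w in (0, 1):                                  # real points: v_w = u_w / sqrt(R_w)
    P[w, w] = 1 / np.sqrt(abs(q0(w)))
R, th = abs(q0(2)), cmath.phase(q0(2)) % (2 * cmath.pi)
P[2, 2] = cmath.exp(-1j * th / 2) / np.sqrt(2 * R)   # coefficients of v'
P[3, 2] = cmath.exp(1j * th / 2) / np.sqrt(2 * R)
P[2, 3] = -1j * P[2, 2]                              # coefficients of v''
P[3, 3] = 1j * P[3, 2]

M0, M1 = P.T @ D0 @ P, P.T @ D1 @ P   # bilinear (transpose, not conjugate) base change

assert np.allclose(M0.imag, 0) and np.allclose(M1.imag, 0)  # real quadratic forms
assert np.allclose(M0, np.diag(np.diag(M0)))                # Q_0 diagonal ...
assert np.allclose(np.abs(np.diag(M0.real)), 1)             # ... with entries +-1
a, b = tw[2].real, tw[2].imag
assert np.allclose(M1[2:, 2:].real, [[a, b], [b, -a]])      # the (z, w) block of q_1
print("basis change reproduces the real normal form")
```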
\section{Stiefel-Whitney classes}\label{Grasssect}
Let $G_{{\mathbb{C}}} := Gr_{g-1} ({\bb{C}}^{2g+2})$ with tautological bundle $V_{{\mathbb{C}}} \rightarrow G_{{\mathbb{C}}}$ and let $G := Gr_{g-1}({\bb{R}}^{2g+2})$ and $V$ be their real counterparts. We saw in \S \ref{31} that there is a $\tau$-equivariant embedding $\phi: U_{\xi} \hookrightarrow G_{{\mathbb{C}}}$, which restricts to an embedding of $U_{\xi}^{\tau}$ into $G$. The image of $\phi$ coincides with the set of $(g-1)$-planes on which the quadratic forms $Q_0, Q_1$ both vanish.
\begin{prop}
The image of $\phi$ equals the zero locus $Z(s)$ of a $\tau$-invariant section $s \in H^0(G_{{\mathbb{C}}}; S^2(V^*_{{\mathbb{C}}})^{\oplus 2})$ which intersects the zero section transversely. Therefore $U^\tau_{\xi} \cong Z(s)^\tau = Z(s^{\tau})$, where $s^{\tau}$ is the restricted section of $S^2(V^*)^{\oplus 2}$.
\end{prop}
\begin{proof}
The identification of $Z(s)$ with linear subspaces of $Q_0, Q_1$ is explained in Borcea \cite{Borcea}. Transversality follows from (\cite{Borcea} Corollary 2.2) because the pencil of quadrics $xQ_0+yQ_1$ is generic. Invariance under $\tau$ is clear.
\end{proof}
\begin{corollary}
Let $(\Sigma_0, \tau_0)$ and $(\Sigma_1, \tau_1)$ be real hyperelliptic curves of the same genus $g \geq 2$ equipped with real line bundles $\xi_0$ and $\xi_1$ respectively of degree $2g+1$. Then
$U_{\xi_0}^{\tau_0}$ and $U_{\xi_1}^{\tau_1}$ are cobordant.
\end{corollary}
\begin{proof}
There exist real sections $s_0,s_1 \in \Gamma(S^2(V^*)^{\oplus 2})$ such that $U_{\xi_i}^{\tau_i}= Z(s_i) := s^{-1}_i(0)$. Choose a homotopy $s: Gr_{g-1}({\bb{R}}^{2g+2}) \times I \rightarrow S^2(V^*)^{\oplus 2}$ from $s_0$ to $s_1$ which intersects the zero section transversely. Then $Z(s)$ provides a cobordism between $Z(s_0) = U_{\xi_0}^{\tau_0}$ and $Z(s_1) = U_{\xi_1}^{\tau_1}$.
\end{proof}
Let $N := im(\phi)^{\tau} \cong U_{\xi}^{\tau}$. We have the following isomorphism of vector bundles
\begin{equation}\label{TGTM}
TG|_N \cong TN \oplus S^2(V^*|_N)\oplus S^2(V^*|_N)
\end{equation}
which can be used to compute Stiefel-Whitney classes.
\begin{prop}\label{chernform}\label{PropSW}
Let $N \cong U_\xi^\tau$ be as above. The total Stiefel-Whitney class of the tangent bundle of $N$ equals
\begin{equation}\label{SWeq}
w(TN) = w(V^*|_N)^{2g+2} w(V|_N \otimes V^*|_N)^{-1} w( S^2(V^*)|_N)^{-2}.
\end{equation}
In particular, we have
\begin{eqnarray*}
w_1(U_\xi^\tau) &=& 0\\
w_2(U_\xi^\tau) &=& (g+1) \phi^*(w_1)^2
\end{eqnarray*}
where $w_i$ is the tautological Stiefel-Whitney class in $H^i(G;{\bb{Z}}_2).$
\end{prop}
\begin{proof}
By (\ref{TGTM}) and the Whitney sum formula, we get
$$ w(TN) = w(TG|_N) w( S^2(V^*)|_N)^{-2} .$$
We have the well-known isomorphism $ TG \cong Hom(V, W)$ where $V$ and $W$ are the tautological bundle and its orthogonal complement respectively. In particular, $V \oplus W \cong G \times {\bb{R}}^{2g+2}$. Therefore
$$ TG \oplus Hom(V, V) = Hom( V, {\bb{R}}^{2g+2}) = V^* \oplus ... \oplus V^*,$$
so by the Whitney sum formula
\begin{eqnarray*}
w(TG) & = & w(V^*)^{2g+2} w(V \otimes V^*)^{-1}
\end{eqnarray*}
proving (\ref{SWeq}). By definition
\begin{eqnarray*}
w(V) = w(V^*) & =& 1+w_1+w_2+....+w_{g-1}.\\
\end{eqnarray*}
A simple calculation using the splitting principle gives
\begin{eqnarray*}
w(V \otimes V^*) &=& 1 + (rk(V)-1)w_1^2 + O(3)\\
&=& 1 + g w_1^2 + O(3)\\
w( S^2(V^*))^2 &=& 1+ w_1(S^2(V^*))^2 +O(3)\\
&=&1 + gw_1^2 + O(3)\\
\end{eqnarray*}
where $O(3)$ is a sum of terms in degree three or higher. The values of $w_i(U_{\xi}^{\tau})$ follow by direct calculation and functoriality.
\end{proof}
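The low-degree splitting-principle identities invoked above can be verified mechanically over ${\bb{Z}}_2$: for a bundle with $k = rk(V)$ formal roots one has $w(V \otimes V^*) = 1 + (k-1)w_1^2 + O(3)$ and $w_1(S^2(V^*)) = (k+1)w_1$, which for $k = g-1$ agree mod 2 with the expressions $1 + gw_1^2$ used in the proof. A sketch with sympy (the helper names are ours):

```python
# Verify the degree <= 2 Stiefel-Whitney identities via formal roots mod 2.
import sympy as sp
from itertools import combinations_with_replacement, product

def part_mod2(expr, gens, d):
    """Degree-d homogeneous part of expr, coefficients reduced mod 2."""
    out = 0
    for monom, coeff in sp.Poly(sp.expand(expr), *gens).terms():
        if sum(monom) == d and coeff % 2 == 1:
            out += sp.Mul(*[g**e for g, e in zip(gens, monom)])
    return sp.expand(out)

def check(k):
    t = sp.symbols(f"t0:{k}")   # formal roots of V (= roots of V* mod 2)
    e1 = sum(t)                 # plays the role of w_1(V)
    # roots of V tensor V*: t_i + t_j over ordered pairs
    w_hom = sp.prod(1 + x + y for x, y in product(t, repeat=2))
    # roots of S^2(V*): t_i + t_j over unordered pairs with repetition
    w_sym = sp.prod(1 + x + y for x, y in combinations_with_replacement(t, 2))
    assert part_mod2(w_hom, t, 1) == 0
    assert part_mod2(w_hom, t, 2) == part_mod2((k - 1) * e1**2, t, 2)
    assert part_mod2(w_sym, t, 1) == part_mod2((k + 1) * e1, t, 1)

for k in (2, 3, 4):
    check(k)
print("splitting-principle identities verified mod 2")
```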
Before stating the next corollary we introduce some terminology.
\begin{defn}
Let $L$ be a Lagrangian submanifold in a symplectic manifold $(M,\omega)$. We call $L$
\begin{itemize}
\item relatively pin if $w_2(L)$ lies in the image of $H^2(M;{\bb{Z}}_2) \rightarrow H^2(L;{\bb{Z}}_2)$.
\item relatively spin if it is relatively pin and orientable.
\item spin if it is orientable and $w_2(L) =0$.
\end{itemize}
\end{defn}
\begin{corollary}\label{pinspin}
The Lagrangian submanifold $U_{\xi}^{\tau} \subset U_{\xi}$ is relatively spin for all $g \geq 2$ and is spin if $g$ is odd.
\end{corollary}
\begin{proof}
That $U_{\xi}^{\tau}$ is orientable follows from $w_1(U_{\xi}^{\tau})=0$ which is proven in Proposition \ref{PropSW}.
Consider the commutative diagram of inclusions
$$ \xymatrix{ U_{\xi} \ar[r]^{\phi} & G_{\mathbb{C}} \\ U_{\xi}^{\tau} \ar[u] \ar[r]^{\phi} & G \ar[u]^i }. $$
We have $w_2( U_{\xi}^{\tau}) = (g+1) \phi^*(w_1)^2$ by Proposition \ref{PropSW}, and we have $w_1^2 = i^*(u)$ where $u$ is the pull-back of the generator of $H^2(G_{{\mathbb{C}}};{\bb{Z}}_2)$ (\cite{MS} problem 15-A). Commutativity completes the argument.
\end{proof}
\begin{remark}
\rm{
We note that the moduli space $U_\xi$ is monotone with minimal Chern number $2$, which implies that $U_\xi^\tau$ is monotone with minimal Maslov number greater than or equal to two (see \cite{Baird18} Theorem 1.6). It follows from \cite{BC}
that the quantum homology of the Lagrangian submanifold $U_\xi^\tau$ is well defined over the Novikov ring with integer coefficients. It is well known that a Dehn twist on $\Sigma$ induces a fibered Dehn twist of $U_\xi$ (see e.g. \cite{WehrheimWoodward} and references therein). For the real curve $(\Sigma, \tau)$, we can consider a real Dehn twist. In this case, the corresponding moduli spaces of real bundles can be related by a Lagrangian cobordism in a Lefschetz fibration (cf. Biran-Cornea \cite{BC15}). We hope to pursue this in future work.
}
\end{remark}
\section{The genus 2 case}\label{g2sect}
Given a generic intersection of real quadrics $X = Z(q_0) \cap Z(q_1) \subseteq {\bb{R}} P^N$ we can form the double cover $\tilde{X} \rightarrow X$ by pulling back the double cover $S^{N} \rightarrow {\bb{R}} P^{N}$. The diffeomorphism types of such $\tilde{X}$ were classified by Guti\'errez and L\'opez de Medrano \cite{GL}, which we review below.
Suppose $q_0$ and $q_1$ are determined by symmetric real matrices $A$ and $B$. Up to a small perturbation that doesn't affect the diffeomorphism type, we may assume that $A^{-1}B$ is diagonalizable with distinct eigenvalues. Introduce real variables $x_1,...,x_r, u_1,v_1,...,u_s,v_s$ where each $x_i$ corresponds to the eigenspace for a real eigenvalue of $A^{-1}B$ and each pair $u_i, v_i$ corresponds to the real part of the sum of eigenspaces for a complex conjugate pair of eigenvalues. Then the coefficients of $q_0, q_1$ can be continuously varied without changing the diffeomorphism type to a pair of quadratic forms
\begin{eqnarray}
p_0 &=& \sum_{i=1}^ra_i x_i^2 + \sum_{j=1}^s( u_j^2 -v_j^2) \label{eq6.1} \\
p_1 &=& \sum_{i=1}^r b_i x_i^2 +\sum_{j=1}^s 2u_jv_j \label{eq6.2}
\end{eqnarray}
where the $b_i/a_i$ are the real eigenvalues of $A^{-1}B$.
Consider the set of points $\lambda_i = (a_i, b_i) \in {\bb{R}}^2$. The fact that the intersection is generic implies that $(0,0)$ does not lie on the line segment joining $\lambda_i$ and $\lambda_j$ for any pair $i,j \in \{1,...,r\}$. If the coefficients $\lambda_i$ are continuously varied without violating this property, then the diffeomorphism type of the intersection does not change. The $\lambda_i$ can in this way be put into a standard form, in which all of the $\lambda_i$ lie among the $(2l+1)$-th roots of unity (in ${\bb{R}}^2 = {\mathbb{C}}$) for a minimal value of $l$. This determines a cyclically ordered partition $ r = n_1 +...+n_{2l+1}$, where $n_i>0$ counts the number of $\lambda_j$ at the $i$th root of unity. The diffeomorphism type of $\tilde{X}$, hence also of $X$, is determined by $s$ and the partition $r = n_1+...+n_{2l+1}$.
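The genericity condition on the points $\lambda_i$ is easy to test mechanically. A sketch (the helper `generic` and the sample point sets are our own illustrations) checking that the origin lies on no segment joining two of the $\lambda_i$:

```python
import numpy as np

def generic(lams):
    """True iff (0,0) lies on no closed segment [lam_i, lam_j] and no lam_i is 0."""
    lams = np.asarray(lams, dtype=float)
    if np.any(np.all(np.isclose(lams, 0), axis=1)):
        return False
    for i in range(len(lams)):
        for j in range(i + 1, len(lams)):
            u, v = lams[i], lams[j]
            cross = u[0] * v[1] - u[1] * v[0]
            # collinear with the origin and pointing in opposite directions
            if np.isclose(cross, 0) and np.dot(u, v) < 0:
                return False
    return True

print(generic([(1, 0), (0, 1), (-1, 1)]))   # True: a generic pencil
print(generic([(1, 1), (-2, -2)]))          # False: a segment passes through 0
```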
\begin{prop}
Suppose that $(\Sigma, \tau)$ is a real hyperelliptic curve of genus $g=2$ with $2n$ real Weierstrass points and let $\xi$ be a real line bundle with $k$ many odd circles. Then the quadric intersection type of $Z(q_0) \cap Z(q_1)$ and the diffeomorphism type of the double cover $\tilde{U}_{\xi}^{\tau}$ are as follows.
\bigskip
\begin{tabular}{|c|l|c|}
\hline
(n,k)& quadric intersection type & diffeomorphism type of $\tilde{U}_{\xi}^{\tau}$ \\
\hline
(0,1) & s=3, r=0 &${\bb{R}} P^3$ \\
(1,1) & s=2, r=2 &$S^1 \times S^2$ \\
(2,1) & s=1, r=1+1+2 &$\#_3 (S^1 \times S^2)$ \\
(3,1) & s=0, r=1+1+1+1+2 &$\#_5 (S^1 \times S^2)$ \\
(3,3) & s=0, r=2+2+2 &$T^3$ \\
\hline
\end{tabular}
\bigskip
\end{prop}
\begin{proof}
Comparing formulas (\ref{eqn4.1}), (\ref{eqn4.2}) with (\ref{eq6.1}), (\ref{eq6.2}), we see that $r = 2n$ is the number of real Weierstrass points and $2s$ is the number of non-real Weierstrass points. The partitions can be worked out by hand case-by-case. The diffeomorphism types follow from the main theorem of \cite{GL}.
\end{proof}
Guti\'errez and L\'opez de Medrano do not provide a general formula for the diffeomorphism type of the intersection of projective quadrics $X$ itself, but it is not hard to determine it for our examples.
\begin{theorem}\label{diffeotype}
Suppose that $(\Sigma, \tau)$ is a real hyperelliptic curve of genus $2$ with $2n$ real Weierstrass points and let $\xi$ be a real line bundle with $k$ many odd circles (as for Theorem 6.1). The diffeomorphism type of $U_{\xi}^{\tau}$ is
\bigskip
\begin{tabular}{|c|c|}
\hline
$(n,k)$ & diffeomorphism type of $U_{\xi}^{\tau}$ \\
\hline
$(0,1)$ & $L(4,1)$ \\
$(1,1)$ & $S^1 \times S^2$ \\
$(2,1)$ & $\#_2 (S^1 \times S^2)$ \\
$(3,1)$ & $\#_3 (S^1 \times S^2)$ \\
$(3,3)$ & $T^3$ \\
\hline
\end{tabular}
\bigskip
\end{theorem}
\begin{proof}
For the case $(0,1)$ we know that $U_{\xi}^{\tau}$ is diffeomorphic to the projective quadric defined by
\[\sum_{i=1}^3 \left( u_i^2-v_i^2 \right) = \sum_{i=1}^3 2u_i v_i = 0\]
If we regard these as affine equations and impose the extra affine condition
\[\sum_{i=1}^3 \left(u_i^2 +v_i^2\right)=1\]
then these define the unit tangent bundle of $S^2$. Taking the projective quotient gives the unit tangent bundle of ${\bb{R}} P^2$ which is diffeomorphic to the lens space $L(4,1)$ (see \cite{K}).
In the cases $(1,1)$, $(2,1)$ and $(3,1)$, the manifold $U_{\xi}^{\tau}$ is diffeomorphic to an intersection of quadrics of the form
\begin{eqnarray*}
x_1^2 +x_2^2 + F(x_3,x_4,x_5,x_6) &=& 0 \\
0 + F(x_3,x_4,x_5,x_6) &=& 0
\end{eqnarray*}
which admits an $SO(2)$-action defined by rotating the coordinates $x_1,x_2$. Note that points in the intersection must either be fixed by this rotation (if $x_1=x_2=0$) or have trivial stabilizer (because the first equation implies there are no non-zero solutions of the form $(a_1,a_2,0,0,0,0)$). In all three cases it is easy to verify that the fixed point set is non-empty. Since by Corollary \ref{pinspin} we also know that $U_{\xi}^{\tau}$ is orientable, a result of Raymond (\cite{R} Theorem 1) implies that $U_{\xi}^{\tau}$ is diffeomorphic to a connected sum of copies of $S^1 \times S^2$. The mod 2 Betti numbers of $U_{\xi}^{\tau}$ were calculated in \cite{Baird18}, from which we can determine the number of copies of $S^1 \times S^2$ occurring in each case.
The case $(3,3)$ actually admits an effective action by the 3-torus $SO(2)^3$ (see \cite{GL} \S 4.1) and therefore must be diffeomorphic to a 3-torus.
\end{proof}
\begin{remark}
\rm{
The case $(3,1)$ was considered by Saveliev and Wang \cite{SW}. They correctly proved that $U_\xi^{\tau}$ has rational Poincar\'e polynomial equal to $1+3t+3t^2+t^3$, but it is not homeomorphic to $T^3$.
}
\end{remark}
\begin{remark}
\rm{
In the proof of Theorem \ref{diffeotype}, we made use of the existence of torus actions on (a manifold diffeomorphic to) $U_{\xi}^{\tau}$. These circle actions may be related to the Jeffrey-Weitsman torus action \cite{JW} obtained from Goldman's integrable system \cite{G}. The Narasimhan-Seshadri Theorem determines a diffeomorphism between $U_{\xi}$ and the twisted representation variety $M = Hom_{-1}(\pi_1(\Sigma), SU(2))/SU(2)$. Jeffrey and Weitsman produced Hamiltonian $SO(2)$ actions defined on dense open subsets of $M$ corresponding to embedded circles in the Riemann surface $\Sigma$. These $SO(2)$ actions commute whenever the embedded circles are disjoint. The action is not defined everywhere on $M$ because the Hamiltonian functions are not everywhere differentiable. However, in unpublished work by the first author it is shown that for an embedded circle coinciding with an odd circle for a real curve $(\Sigma,\tau)$, the corresponding $SO(2)$-action restricts to a globally defined circle action on the submanifold $M^{\tau}$ identified with $U_\xi^{\tau}$. Therefore $M^{\tau}$ admits a torus action of rank equal to the number of odd circles. This may correspond to the torus actions employed above.
}
\end{remark}
\textbf{Acknowledgements:} The authors would like to thank Joel Kamnitzer, Luis Haug and Fran\c{c}ois Charette for insightful discussions. Thanks also to Guti\'errez and L\'opez de Medrano for answering our questions about their work. This research was supported in part by NSERC Discovery grants.
\title{Fr\"oberg's Conjecture and the initial ideal of generic sequences}
\begin{abstract}
Let $K$ be an infinite field and let $I = (f_1,\cdots,f_r)$ be an ideal in the polynomial ring $R = K[x_1,\cdots,x_n]$ generated by generic forms of degrees $d_1,\cdots,d_r$. In the case $r=n$, following an effective method by Gao, Guan and Volny, we give a description of the initial ideal of $I$ with respect to the degree reverse lexicographic order. Thanks to a theorem due to Pardue (2010), we apply our result to give a partial solution to a longstanding conjecture stated by Fr\"oberg (1985) on the Hilbert series of $R/I$.
\end{abstract}
\section{Introduction}
\hspace{15pt} Let $R = K[x_1, \cdots, x_n]$ be the polynomial ring in $n$ variables over an infinite field $K$. A homogeneous ideal $I$ in $R$ is said to be of type $(n; d_1, \cdots, d_r)$ if it is generated by $r$ forms of degrees $d_1, \cdots, d_r$. The Hilbert function of $A= R/I$ is by definition $HF_{A}(t) := \dim_K R_t/I_t$ for every $t \ge 0$, and it reflects important information about the ideal $I$. We are interested in the behavior of the Hilbert function of generic ideals of type $(n; d_1, \cdots, d_r)$. We adopt the definition of generic ideals given by Fr\"{o}berg in \cite{F} because it is more suitable for our approach. Assume that $K$ is an extension of a base field $F$.
\begin{Definition} \emph{(1) A form of degree $d$ in $K[x_1, \cdots, x_n]$ is called generic over $F$ if it is a linear combination of all monomials of degree $d$ and all coefficients are algebraically independent over $F$.}
\noindent \emph{(2) A homogeneous ideal $(f_1, \cdots, f_r)$ is called generic if all $f_i$ are generic forms with all the coefficients algebraically independent over $F$.}
\end{Definition}
Other definitions in the literature appear in terms of the affine space whose points are the coefficients of the $r$ forms: a property of such sequences is called generic if it holds on a nonempty Zariski open subset of this affine space; see for instance \cite{MS} or \cite{P}. The property then holds for a randomly chosen sequence. The Hilbert function of a generic ideal of type $(n;d_1,\cdots,d_n)$ is the Hilbert function of a regular sequence, hence the generating Hilbert series $HS_A(z) := \sum_{t\ge 0} HF_A(t) z^t $ is well known: $$HS_A(z) = \dfrac{\prod_{i=1}^n(1 - z^{d_i})}{(1-z)^n}. $$
In general if $A$ is a graded standard $K$-algebra and $f \in R_d$ is a generic form, it is natural to guess that for every $t \ge 0, $ the multiplication map
$$ A_t \xrightarrow{f} A_{t+d} $$
is of maximal rank (either injective or surjective), hence $$HF_{A/fA} (t) =\max \{0, HF_A(t) -HF_A(t-d)\}.$$
In terms of the corresponding Hilbert series this can be rewritten as $$ HS_{A/fA} (z) = \left\lceil (1-z^d) HS_A(z) \right\rceil,$$ where for a power series $\sum a_i z^i$ one has $\lceil \sum a_i z^i \rceil = \sum b_i z^i$, with $b_i = a_i$ if $a_j > 0$ for all $j \leq i$, and $b_i = 0$ otherwise. Following the terminology of Pardue in \cite{P}, if this is the case, we say that $f$ is {\it{semi-regular}}.
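The truncation $\lceil\,\cdot\,\rceil$ is easy to carry out numerically. The following Python sketch (ours, purely illustrative) expands $\prod_{i}(1-z^{d_i})/(1-z)^n$ as a power series and applies the truncation:

```python
from math import comb

def truncated_series(n, degrees, tmax):
    """Coefficients of ceil( prod(1 - z^d) / (1 - z)^n ) up to degree tmax."""
    num = [1]                          # numerator polynomial prod(1 - z^d)
    for d in degrees:
        new = num + [0] * d
        for i, c in enumerate(num):
            new[i + d] -= c
        num = new
    coeffs = []
    for t in range(tmax + 1):
        # 1/(1-z)^n has coefficient C(n-1+t, n-1) in degree t
        c = sum(num[i] * comb(n - 1 + t - i, n - 1)
                for i in range(min(len(num), t + 1)))
        if c <= 0:                     # truncate: zero from the first non-positive term on
            coeffs.extend([0] * (tmax + 1 - t))
            break
        coeffs.append(c)
    return coeffs
```

For example, three generic quadrics in $K[x_1,x_2]$ give the coefficients $[1, 2, 0, \dots]$, i.e. $HS_A(z) = 1 + 2z$.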
A sequence of homogeneous polynomials $f_1, \cdots, f_r$ is a \emph{semi-regular sequence} on $A$ if each $f_i$ is semi-regular on $A/(f_1,\cdots,f_{i-1})$.
Clearly regular sequences are semi-regular sequences. This approach motivated a longstanding conjecture stated in 1985 by Fr\"{o}berg.
\begin{Conjecture}\label{Froberg}(Fr\"{o}berg's Conjecture). Let $I = (f_1, \cdots, f_r)$ be a generic homogeneous ideal of type $(n;d_1,\cdots,d_r)$ in $R = K [x_1,\cdots,x_n]$. Then the Hilbert series of $A = R/I$ is given by
$$HS_A(z) = \left\lceil \dfrac{\prod_{i=1}^r(1 - z^{d_i})}{(1-z)^n} \right\rceil.$$
\end{Conjecture}
\vskip 2mm
This problem has been of central interest in commutative algebra over the last decades and a great deal of work has been done (see for instance Anick \cite{A}, Fr\"{o}berg \cite{F}, Fr\"{o}berg-Hollman \cite{FH}, Fr\"{o}berg-L\"{o}fwall \cite{FLo}, Fr\"{o}berg-Lundqvist \cite{FLu}, Moreno-Soc\'{\i}as \cite{MS}, Pardue \cite{P}, Stanley \cite{S}, Valla \cite{V}). A large number of computational validations point towards a positive answer. Fr\"{o}berg's Conjecture is known to be true for $r \leq n$ (complete intersections); for $n = 2$ \cite{F}; for $n=3$ \cite{A}; for $r = n+1$ (almost complete intersections) with char$K=0$ \cite{F}; and for $d_1 = \cdots = d_r = 2$ with $n \leq 11$, and $d_1 = \cdots = d_r = 3$ with $n \leq 8$ \cite{FH}.
\vskip2mm
Denote by $\initial_{\tau}(I)$ the initial ideal of $I$ with respect to a term order $\tau$ on $R$. Because the Hilbert series of $R/I$ and of $R/\initial_{\tau}(I)$ coincide for every $\tau$, a rich literature has been developed with the aim of characterizing the initial ideal of generic ideals with respect to suitable term orders. In the case $r=n$, Moreno-Soc\'{\i}as stated a conjecture describing the initial ideal of generic ideals with respect to the degree reverse lexicographic order. From now on, the initial ideal of $I$ will always be taken with respect to the degree reverse lexicographic order and will be denoted simply by $\initial(I)$. For general facts and properties of the degree reverse lexicographic order see for instance \cite[Proposition 15.2]{E}. It is natural to guess that generic complete intersections share {\it{special}} initial ideals. We present here Moreno-Soc\'{\i}as' Conjecture.
\begin{Conjecture}\label{Moreno} (See Moreno-Soc\'{\i}as \cite{MS1}.) Let $I = (f_1,\cdots,f_n)$ be a generic homogeneous ideal of type $(n;d_1,\cdots,d_n)$ in $K[x_1, \cdots , x_n]$. Then $\initial(I)$ is almost reverse lexicographic, i.e, if $x^\mu$ is a minimal generator of $\initial(I)$ then every monomial of the same degree and greater than $x^\mu$ must be in $\initial(I)$ as well.
\end{Conjecture}
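The almost reverse lexicographic property is easy to test on an explicit monomial ideal. A short Python sketch (ours, for illustration; monomials are exponent tuples for $x_1 > \cdots > x_n$):

```python
from itertools import combinations_with_replacement

def degrevlex_greater(a, b):
    """True if monomial a > b in the degree reverse lexicographic order."""
    if sum(a) != sum(b):
        return sum(a) > sum(b)
    for i in reversed(range(len(a))):
        if a[i] != b[i]:
            return a[i] < b[i]   # smaller exponent in the last differing variable wins
    return False

def monomials(n, d):
    """All exponent tuples of degree d in n variables."""
    for c in combinations_with_replacement(range(n), d):
        m = [0] * n
        for i in c:
            m[i] += 1
        yield tuple(m)

def in_ideal(m, gens):
    return any(all(m[i] >= g[i] for i in range(len(m))) for g in gens)

def almost_rev_lex(gens, n):
    """Every same-degree monomial greater than a generator must lie in the ideal."""
    return all(in_ideal(m, gens)
               for g in gens
               for m in monomials(n, sum(g))
               if degrevlex_greater(m, g))
```

For instance, for two generic conics in $K[x_1,x_2]$ the initial ideal is $(x_1^2, x_1x_2, x_2^3)$, which passes the test, while the ideal $(x_2^2)$ fails it, since $x_1^2$ and $x_1x_2$ are larger monomials of degree $2$ not lying in the ideal.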
Moreno-Soc\'{\i}as' Conjecture was proven in the case $n=2$ by Aguirre et al. \cite{AJL} and Moreno-Soc\'{\i}as \cite{MS}, in the case $n=3$ by Cimpoeas \cite{C}, in the case $n=4$ by Harima and Wachi \cite{HW}, and for sequences $d_1, \cdots, d_n$ satisfying $d_i > \sum_{j=1}^{i-1} d_j - i + 1$ for every $i \geq 4$ by Cho and Park, assuming char$K=0$ \cite{CP}. Without restriction on the characteristic of $K$, by using an incremental method from \cite{GGV}, Capaverde and Gao improved the result of Cho and Park: they gave a complete description of the initial ideal of $I$ in the case where $d_1, \cdots, d_n$ satisfy $d_i \geq \sum_{j=1}^{i-1} d_j - i -1$ for every $i$. In particular, they proved Moreno-Soc\'{\i}as' Conjecture under the above conditions; see \cite[Theorem 3.19]{JS}. Moreno-Soc\'{\i}as' Conjecture implies Fr\"{o}berg's Conjecture, and Pardue stated a conjecture which is equivalent to Fr\"{o}berg's Conjecture (\cite[Theorem 2]{P}) by a slight modification of the requirement on the initial ideal.
For every monomial $x^{\alpha} \in K[x_1, \cdots , x_n]$, denote by $\max(x^{\alpha})$ the largest index $i$ such that $x_i$ divides $x^{\alpha}$. It is easy to see that Moreno-Soc\'{\i}as' Conjecture implies the following conjecture stated by Pardue.
\begin{Conjecture}\label{Pardue E} \cite[Conjecture E]{P} Let $I = (f_1,\cdots,f_n)$ be a generic homogeneous ideal of type $(n;d_1,\cdots,d_n)$ in $K [x_1,\cdots,x_n]$. If $x^\mu$ is a minimal generator of $\initial(I)$ and $\max(x^\mu) = m$ then every monomial of the same degree in the variables $x_1,\cdots,x_{m-1}$ must be in $\initial(I)$ as well.
\end{Conjecture}
Our aim is to study Fr\"{o}berg's Conjecture directly through the above conjecture instead of Moreno-Soc\'{\i}as' Conjecture.
\begin{Definition}
\emph{Let $I = (f_1,\cdots,f_n)$ be a generic homogeneous ideal in $K[x_1,\cdots,x_n]$. Suppose $x^{\alpha}$ is a generator of $\initial(I)$ and $\max(x^\alpha) = m$. We say $x^\alpha$ satisfies property $P$ if every monomial of the same degree in the variables $x_1,\cdots,x_{m-1}$ is also in $\initial(I)$.}
\end{Definition}
Thus, in order to claim Conjecture \ref{Pardue E} is true for $I$, we need to prove that every minimal generator of $\initial(I)$ satisfies property $P$. By using the incremental method from \cite{JS} and \cite{GGV}, in Proposition \ref{Structure initial}, we give an explicit description of the initial ideal of generic ideals with respect to the degree reverse lexicographic order. This is a crucial point for proving Theorem \ref{Pardue E with condition} that gives an inductive method for proving Conjecture \ref{Pardue E}.
\vskip 0.3cm
In Theorem \ref{Partial Pardue E3}, we prove that Conjecture \ref{Pardue E} holds true for every generic ideal if $n\le 3$. If $n \ge 4$, in Theorem \ref{Partial Pardue E} we prove Conjecture \ref{Pardue E} for generic ideals of type $(n; d_1, \cdots, d_n)$ where $d_1 \leq \cdots \leq d_n$ and
\begin{equation}\label{Condition}
d_i \geq \min \left\{ d_1+ \cdots + d_{i-2} - i + 2 , \left\lfloor \dfrac{d_1+\cdots+d_{i-1} - i +1}{2} \right\rfloor \right\}
\end{equation}
for every $4 \leq i \leq n$. Here $\left\lfloor a \right\rfloor$ denotes the greatest integer less than or equal to $a$.
As an application, in Theorem \ref{Partial FB} we prove that Fr\"{o}berg's Conjecture holds true for generic ideals of type $(n; d_1, \cdots, d_r)$ with $r \le 3$, or with $r \ge 4$ and $d_1 \leq \cdots \leq d_r$ satisfying condition (\ref{Condition}). Our hope is that our results can be successfully applied to give new insights towards proving Fr\"{o}berg's Conjecture. All the computations in this paper have been performed by using Macaulay2 \cite{GS}.
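Condition (\ref{Condition}) is elementary to check for a given degree sequence. A small Python sketch (ours, purely illustrative):

```python
def satisfies_condition(degrees):
    """Check d_i >= min{ d_1+...+d_{i-2} - i + 2,
                         floor((d_1+...+d_{i-1} - i + 1)/2) }
    for every 4 <= i <= n, with the degrees sorted increasingly."""
    d = sorted(degrees)
    n = len(d)
    for i in range(4, n + 1):            # i is 1-based, as in the text
        bound = min(sum(d[:i - 2]) - i + 2,
                    (sum(d[:i - 1]) - i + 1) // 2)
        if d[i - 1] < bound:
            return False
    return True
```

For instance, the sequence $(2,3,3,4)$ satisfies the condition, while eight quadrics ($d_1=\cdots=d_8=2$) do not.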
We would like to point out that, at the end of our work, we realized that Bertone and Cioffi independently proved in \cite[Corollary 3.9]{BC} Moreno-Soc\'{\i}as' Conjecture under a condition similar to (\ref{Condition}) and the assumption char$K=0$. The methods and the aims are slightly different, even if strictly related.
\section{Preliminaries}
\hspace{15pt} Let $R' = K[x_1,\cdots,x_n,z]$ be the polynomial ring in $n+1$ variables and fix the order on the variables $x_1 > \cdots > x_n > z$. Let $(I,g) = (f_1,\cdots,f_n,g)$ be a generic homogeneous ideal of type $(n+1;d_1,\cdots,d_n,d)$ in $R'$. Note that $I=(f_1,\cdots,f_n)$ is a generic homogeneous ideal of type $(n+1;d_1,\cdots,d_n)$ in $R'$. Define $\pi: R' \longrightarrow R = K[x_1,\cdots,x_n]$ to be the ring homomorphism that takes $z$ to zero, fixing the elements of $K$ and the variables $x_1,\cdots,x_n$. Let $J = \pi(I)$ be the image of $I$. Then $J$ is a generic homogeneous ideal of type $(n;d_1,\cdots,d_n)$ in $R$. We recall that we always consider the degree reverse lexicographic order.
\begin{Proposition}
$\initial(I)$ and $\initial(J)$ have the same minimal generators.
\end{Proposition}
\begin{proof}
From a property of the degree reverse lexicographic order (see \cite[Proposition 15.12]{E}), we get $\pi(\initial(I)) = \initial(J)$. On the other hand, since $z$ is regular on $R'/I$, by \cite[Theorem 15.13]{E} $z$ is regular on $R'/\initial(I)$. Furthermore, by \cite[Theorem 15.14]{E} the minimal generators of $\initial(I)$ are not divisible by $z$. Thus, $\initial(I)$ and $\initial(J)$ have the same minimal generators.
\end{proof}
Let $B = B(J)$ be the set of monomials in $R$ which are not in $\initial(J)$; we call them the standard monomials with respect to $J$. We set
$$\delta = d_1 + \cdots + d_n - n, \qquad \delta^* = d_1 + \cdots + d_{n-1} - (n-1), \qquad \sigma = \min \left\{ \delta^* , \left\lfloor \dfrac{\delta}{2} \right\rfloor \right\}.$$
It is known that $A = R/J$ is a complete intersection and the Hilbert series of $A$ is a symmetric polynomial of degree $\delta$, say $HS_A(z) = \sum_{i = 0}^{\delta} a_i z^i,$ with
$$0 < a_0 < a_1 < \cdots < a_{\sigma} = \cdots = a_{\delta - \sigma} > \cdots > a_{\delta - 1} > a_\delta > 0.$$
Notice that $a_i = | B_i |$ where $B_i$ is the set of monomials of degree $i$ in $B$ (see for instance \cite[Proposition 2.2]{MS}).
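Since for $r=n$ the series factors as $HS_A(z) = \prod_{i=1}^n(1 + z + \cdots + z^{d_i-1})$, the coefficients $a_i$, together with $\delta$, $\delta^*$ and $\sigma$, are quick to compute. The following Python sketch (ours; it assumes the degrees are listed in increasing order) also verifies the symmetry and unimodality pattern just stated:

```python
def ci_hilbert_coeffs(degrees):
    """Coefficients a_0..a_delta of prod_i (1 + z + ... + z^(d_i - 1))."""
    a = [1]
    for d in degrees:
        out = [0] * (len(a) + d - 1)
        for i, c in enumerate(a):
            for j in range(d):
                out[i + j] += c
        a = out
    return a

def check_shape(degrees):
    """Verify a_0 < ... < a_sigma = ... = a_{delta-sigma} > ... > a_delta (symmetric)."""
    a = ci_hilbert_coeffs(degrees)
    delta = sum(degrees) - len(degrees)
    dstar = sum(degrees[:-1]) - (len(degrees) - 1)   # drops the largest degree d_n
    sigma = min(dstar, delta // 2)
    assert a == a[::-1]                                      # symmetry
    assert all(a[i] < a[i + 1] for i in range(sigma))        # strictly increasing part
    assert all(a[i] == a[sigma] for i in range(sigma, delta - sigma + 1))  # plateau
    return a, sigma
```

For the degrees $(2,3,3,4)$ this returns the coefficients $1,4,9,14,16,14,9,4,1$ with $\sigma = 4$, matching the example in the next section.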
\vskip 0.3cm
The set of standard monomials with respect to a generic ideal of type $(n;d_1,\cdots,d_r)$ depends only on $(n; d_1, \cdots, d_r)$. So, we denote it by $B(n;d_1, \cdots, d_r)$. We will describe more clearly the set of standard monomials in the case $r=n$, that is $B = B(n;d_1, \cdots, d_n)$. For each $ 1 \leq i \leq \sigma$, define
$$\widetilde{B}_i^0 = \{ x^\mu \in B_i \ | \ \max(x^\mu) < n \}.$$
\begin{Proposition} \label{Structure B} The structure of $B = B(n;d_1, \cdots, d_n)$ is as follows,
\vskip 0.3cm
(1) $B_i = \widetilde{B}_i^0 \cup x_n \widetilde{B}_{i-1}^0 \cup \cdots \cup x_n^{i-1}\widetilde{B}_1^0 \cup \{ x_n^i \}$
\vskip 0.3cm
\hspace*{30pt} $= \widetilde{B}_i^0 \cup x_nB_{i-1},$ for all $1 \leq i \leq \sigma$.
\vskip 0.3cm
(2) $B_{\sigma+i} = x_n^i \widetilde{B}_{\sigma}^0 \cup x_n^{i+1} \widetilde{B}_{\sigma-1}^0 \cup \cdots \cup x_n^{\sigma +i-1}\widetilde{B}_1^0 \cup \{ x_n^{\sigma+i} \}$
\vskip 0.3cm
\hspace*{42pt} $= x_n^iB_{\sigma}$, for all $0 \leq i \leq \delta - 2\sigma$.
\vskip 0.3cm
(3) $B_{\delta-i} = x_n^{\delta - 2i} \widetilde{B}_{i}^0 \cup x_n^{\delta -2i+1} \widetilde{B}_{i-1}^0 \cup \cdots \cup x_n^{\delta -i-1}\widetilde{B}_1^0 \cup \{ x_n^{\delta-i} \}$
\vskip 0.3cm
\hspace*{42pt} $= x_n^{\delta - 2i}B_i$, for all $ 0 \leq i \leq \sigma$.
\end{Proposition}
\begin{proof}
(1) For $1 \leq i \leq \sigma$, we have $a_{i-1} < a_i$. Let $S$ denote the subset of $B_i$ consisting of the $a_{i-1}$ smallest monomials in $B_i$. By \cite[Lemma 3.5]{JS} we get $S = x_nB_{i-1}$. It is clear that $\widetilde{B}_i^0 \subseteq B_i \setminus S$. Conversely, let $x^\alpha \in B_i \setminus S$ and assume that $x^\alpha \notin \widetilde{B}_i^0$. Then $x_n$ divides $x^\alpha$, so that $x^\alpha / x_n \in B_{i-1}$. This gives a contradiction, since then $x^\alpha = x_n(x^\alpha / x_n) \in S$. Thus, $\widetilde{B}_i^0 = B_i \setminus S$ and (1) holds.
\vskip 0.3cm
\noindent (2) For $0 \leq i \leq \delta - 2\sigma$, we have $a_{\sigma+i} = a_\sigma$. By \cite[Lemma 3.5]{JS} we get $B_{\sigma+i} = x_n^iB_{\sigma}$.
\vskip 0.3cm
\noindent (3) Since $\sigma \leq \dfrac{\delta}{2}$, by \cite[Lemma 3.4]{JS} we get $B_{\delta-i} = x_n^{\delta - 2i}B_i$ for $0 \leq i \leq \sigma$.
\end{proof}
\begin{Remark} \emph{(1) $B=B(n;d_1,\cdots,d_n)$ is determined by $\widetilde{B}_1^0, \cdots, \widetilde{B}_{\sigma}^0$.
\vskip 0.3cm
\noindent (2) $|\widetilde{B}_i^0| = a_i - a_{i-1} = a'_i > 0$, for all $1 \leq i \leq \sigma$.}
\end{Remark}
In the following example, we show explicitly the structure of $B(n; d_1, \cdots, d_n)$ according to Proposition \ref{Structure B}.
\begin{Example} \label{ex structure B}
\emph{Let $B = B(4;2,3,3,4)$ be the set of standard monomials with respect to a generic ideal of type $(4;2,3,3,4)$. Then $\delta = 8; \sigma=4$. Denote by $a_i = |B_i|$ and $a'_i = |\widetilde{B}_i^0|$. We have
\begin{align*}
&B_0 = \{ 1 \} &\Rightarrow a_0 = 1.\\
&B_1 = \widetilde{B}_1^0 \cup \{ x_4 \} \ \text{where} \ \widetilde{B}_1^0 = \{ x_1, x_2, x_3 \} &\Rightarrow a'_1 = 3, a_1 = 4.\\
&B_2 = \widetilde{B}_2^0 \cup x_4B_1 \ \text{where} \ \widetilde{B}_2^0 = \{ x_1x_2, x_2^2, x_1x_3, x_2x_3, x_3^2 \} &\Rightarrow a'_2 = 5, a_2 = 9.\\
&B_3 = \widetilde{B}_3^0 \cup x_4B_2 \ \text{where} \ \widetilde{B}_3^0 = \{ x_1x_2x_3, x_2^2x_3, x_1x_3^2, x_2x_3^2, x_3^3 \} &\Rightarrow a'_3 = 5, a_3 = 14.\\
&B_4 = \widetilde{B}_4^0 \cup x_4B_3 \ \text{where} \ \widetilde{B}_4^0 = \{ x_2x_3^3, x_3^4 \} &\Rightarrow a'_4 = 2, a_4 = 16.\\
&B_5 = x_4^2B_3 &\Rightarrow a_5 = a_3 = 14.\\
&B_6 = x_4^4B_2 &\Rightarrow a_6 = a_2 = 9.\\
&B_7 = x_4^6B_1 &\Rightarrow a_7 = a_1 = 4.\\
&B_8 = x_4^8B_0 &\Rightarrow a_8 = a_0 = 1.
\end{align*}}
\end{Example}
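The recursion of Proposition \ref{Structure B} can be checked mechanically on this example. Below is a short Python sketch (ours, purely illustrative): it rebuilds the graded pieces $B_i$ from the sets $\widetilde{B}_i^0$ listed above and recovers the cardinalities $a_0,\cdots,a_8$.

```python
# widetilde{B}_i^0 for B(4;2,3,3,4), copied from the example above;
# monomials are exponent tuples (e1,e2,e3,e4) with x1 > x2 > x3 > x4
Bt = {
    1: {(1,0,0,0), (0,1,0,0), (0,0,1,0)},
    2: {(1,1,0,0), (0,2,0,0), (1,0,1,0), (0,1,1,0), (0,0,2,0)},
    3: {(1,1,1,0), (0,2,1,0), (1,0,2,0), (0,1,2,0), (0,0,3,0)},
    4: {(0,1,3,0), (0,0,4,0)},
}
delta, sigma = 8, 4

def x4(m, k=1):
    """Multiply a monomial by x_4^k."""
    return m[:3] + (m[3] + k,)

B = {0: {(0,0,0,0)}}
for i in range(1, sigma + 1):                 # (1): B_i = widetilde{B}_i^0 u x_4 B_{i-1}
    B[i] = Bt[i] | {x4(m) for m in B[i - 1]}
for i in range(1, delta - 2 * sigma + 1):     # (2): B_{sigma+i} = x_4^i B_sigma (empty here)
    B[sigma + i] = {x4(m, i) for m in B[sigma]}
for i in range(sigma):                        # (3): B_{delta-i} = x_4^{delta-2i} B_i
    B[delta - i] = {x4(m, delta - 2 * i) for m in B[i]}

sizes = [len(B[i]) for i in range(delta + 1)]
```

Here `sizes` evaluates to `[1, 4, 9, 14, 16, 14, 9, 4, 1]`, matching the values $a_i$ computed in the example.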
In the next section, we will see how to construct $B(n+1; d_1, \cdots, d_{n+1})$ starting from $B(n; d_1, \cdots, d_n)$ by using the incremental method introduced in \cite{GGV} and adapted to our situation in \cite{JS}.
\section{Main results}
\hspace{15pt} Let $(I,g) = (f_1,\cdots,f_n,g)$ be a generic homogeneous ideal of type $(n+1;d_1,\cdots,d_n,d)$ in $R' = K[x_1,\cdots,x_n,z]$. Define $C_I$ to be the set of the coefficients of the polynomials $f_1,\cdots,f_n$ and $\bar{F} = F(C_I) \subset K$. Let $G = \{ g_1, \cdots,g_t \}$ be the reduced Gr\"{o}bner basis of $I$ with respect to the degree reverse lexicographic order. Then $g_1, \cdots, g_t$ are homogeneous polynomials in $\bar{F}[x_1,\cdots,x_n,z]$. Reducing $g$ modulo $G$, we obtain a polynomial that is a $K$-linear combination of all monomials of degree $d$ in the set $E$ of standard monomials with respect to $I$ (introduced below), with coefficients that are still algebraically independent over $\bar{F}$. Hence, from now on we assume that $g$ is reduced modulo $G$ and that the coefficients of $g$ are algebraically independent over $\bar{F}$.
\vskip 0.3cm
In order to construct $B(n+1;d_1,\cdots,d_{n+1})$ from $B(n;d_1,\cdots,d_n)$ we need to compare $\initial(I,g)$ and $\initial(I)$. We recall here the incremental method to construct $\initial(I,g)$ from $\initial(I)$; for more details see \cite[Section 3]{JS}. Let $E = B(n+1; d_1, \cdots, d_n)$ be the set of standard monomials with respect to $I$ and let $B = B(n; d_1, \cdots, d_n)$ be the set of standard monomials with respect to $J = \pi(I)$. Denote by $E_i$ the set of monomials of degree $i$ in $E$ for every $i \geq 0$. Note that, for $0 \leq i \leq \delta$, we have
$$E_i = B_i \cup zB_{i-1} \cup z^2B_{i-2} \cup \cdots \cup z^{i-1}B_1 \cup z^iB_0,$$
\noindent and for $i > \delta$, we have $E_i = z^{i - \delta}E_\delta.$
For $0 \leq i \leq \delta$, denote by $\mathbf{E}_i$ the column vector whose entries are the monomials in $E_i$ listed in decreasing order. For each monomial $x^{\alpha} \in \mathbf{E}_i$, reducing the product $x^{\alpha}g \in R'_{i+d}$ modulo $G$ we obtain a polynomial, called the reduced form of $x^{\alpha}g$, that is a $K$-linear combination of monomials in $E_{i+d}$. Note that each coefficient of the reduced form of $x^{\alpha}g$ is an $\bar{F}$-linear combination of the coefficients of the polynomial $g$. Let $M_i$ denote the matrix such that
\begin{equation}\label{Equation}
\mathbf{E}_i.g \equiv M_i\mathbf{E}_{i+d} \ \text{(mod} \ G),
\end{equation}
where $\mathbf{E}_{i+d}$ denotes the column vector whose entries are the monomials in $E_{i+d}$ listed in decreasing order. By \cite[Lemma 3.2]{JS} the rows of $M_i$ are linearly independent. This means that $\rank(M_i) = |E_i| = a_i + a_{i-1} + \cdots + a_0$. Furthermore, the monomials in $\mathbf{E}_{i+d}$ corresponding to the $|E_i|$ first linearly independent columns of $M_i$ are the generators that will be added to $\initial(I)$ to form $\initial(I,g)$. Note that some of the monomials we add might be redundant. In this section, we will prove the following main result.
\begin{Theorem}\label{Pardue E with condition}
Let $(I,g) = (f_1,\cdots,f_n,g)$ be a generic homogeneous ideal of type $(n+1;d_1,\cdots,d_n,d)$ in $R' = K [x_1,\cdots,x_n,z]$, where $d_1 \leq \cdots \leq d_n \leq d$. If $d \geq \sigma$ and Conjecture \ref{Pardue E} is true for $J = \pi(I)$, then Conjecture \ref{Pardue E} is also true for $(I,g)$.
\end{Theorem}
The proof requires a careful analysis, carried out in Proposition \ref{Structure initial}, Proposition \ref{Part 2} and Proposition \ref{Part 1}. First of all, we notice that Theorem \ref{Pardue E with condition} can be deduced from \cite[Proposition 3.12]{JS} when $d \geq \delta$. Indeed, by \cite[Proposition 3.12]{JS}, if $d \geq \delta$ then
$$\initial(I,g) = (\initial(I), z^{d - \delta}B_{\delta}, z^{d-\delta+2}B_{\delta -1}, \cdots, z^{\delta+d-2}B_1, z^{\delta+d}B_0).$$
Let $x^\mu$ be a generator of $\initial(I,g)$ lying in $z^{d - \delta}B_{\delta}, z^{d-\delta+2}B_{\delta -1}, \cdots, z^{\delta+d-2}B_1, z^{\delta+d}B_0$. We claim that $x^\mu$ satisfies property $P$. Indeed, if $d > \delta$ then $x^\mu$ is divisible by $z$ and $\degree(x^\mu) = k > \delta$. Hence, every monomial $x^\alpha$ of degree $k$ in the variables $x_1, \cdots, x_n$ is not in $E_k = z^{k-\delta}E_\delta$, so that $x^\alpha \in \initial(I) \subset \initial(I,g)$. If $d = \delta$ then
$$\initial(I,g) = (\initial(I),x_n^{\delta}, z^2B_{\delta -1}, \cdots, z^{2\delta-2}B_1, z^{2\delta}B_0).$$
If $x^\mu$ is a monomial in $z^2B_{\delta -1}, \cdots, z^{2\delta-2}B_1, z^{2\delta}B_0$, then $x^\mu$ satisfies property $P$ by a similar argument as above. On the other hand, it is not hard to see that $x_n^\delta$ also satisfies property $P$. Thus, if Conjecture \ref{Pardue E} is true for $J$ and $d \geq \delta$, then every minimal generator of $\initial(I,g)$ satisfies property $P$. This means Conjecture \ref{Pardue E} is true for $(I,g)$.
\vskip 0.3cm
Consider now the case $d < \delta$. Set $i^*=\lfloor\frac{\delta-d}{2}\rfloor$. The following lemma will be useful for proving Proposition \ref{Structure initial}.
\begin{Lemma}\label{multiple}
For $i > j \geq i^*$, we have $a_{d+i} \leq a_{d+j}$. Furthermore, the monomials of $B_{d+i}$ are multiples of the $a_{d+i}$ smallest monomials in $B_{d+j}$.
\end{Lemma}
\begin{proof}
Since $d+i^* \geq \lfloor\frac{\delta}{2}\rfloor \geq \sigma$, we have $d+i > d+j \geq \sigma$, and hence $a_{d+i} \leq a_{d+j}$. Let $S$ denote the subset of $B_{d+j}$ consisting of the $a_{d+i}$ smallest monomials in $B_{d+j}$. By \cite[Lemma 3.5 (ii)]{JS} we have $B_{d+i} = x_n^{i-j}S.$
\end{proof}
By convention, we use the following notation. Let $B$ be a finite subset of monomials in $R = K[x_1,\cdots,x_n]$ and denote by $\mathbf{B} = \{ x^{\alpha_1}, \cdots, x^{\alpha_k} \}$ the set of monomials in $B$ listed in decreasing order. Let $S$ be a subset of $[1,k]$ and denote $\mathbf{B}^S = \{ x^{\alpha_i} \in B \ | \ i \in S \}$. The set of generators of $\initial(I,g)$ can be described as the following.
\begin{Proposition}\label{Structure initial}
Let $(I,g) = (f_1,\cdots,f_n,g)$ be a generic ideal of type $(n+1;d_1,\cdots,d_n,d)$ in $R' = K [x_1,\cdots,x_n,z]$, where $d_1 \leq \cdots \leq d_n \leq d$ and $d < \delta$. Let $B = B(n;d_1,\cdots,d_n)$ be the set of standard monomials with respect to a generic ideal of type $(n;d_1,\cdots,d_n)$ in $R = K [x_1,\cdots,x_n]$.
\vskip 0.3cm
\noindent (1) If $\delta - d \equiv 0$ (mod $2$) then
\begin{center}
$\initial(I,g) = (\initial(I), \mathbf{B}_d^{\{1\}}, \mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}, z^2B_{d+i^*-1}, \cdots, z^{\delta - d}B_d,$\\
$ z^{\delta - d + 2}B_{d-1}, \cdots, z^{\delta+d-2}B_1, z^{\delta+d}B_0),$
\end{center}
\noindent (2) If $\delta - d \equiv 1$ (mod $2$) then
\begin{center}
$\initial(I,g) = (\initial(I), \mathbf{B}_d^{\{1\}}, \mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}, B_{d+i^*+1}, zB_{d+i^*}, \cdots, z^{\delta - d}B_d, $\\
$z^{\delta - d + 2}B_{d-1}, \cdots, z^{\delta+d-2}B_1, z^{\delta+d}B_0),$
\end{center}
\noindent where $S_i$ is a subset of $[1,a_{d+i}]$ containing $a_i$ elements for every $i = 1, \cdots , i^*$.
\end{Proposition}
\begin{proof}
Since $g$ is a combination of all monomials in $E_d = B_d \cup zB_{d-1} \cup \cdots \cup z^dB_0$, we can write
$$g = \mathbf{v}_d\mathbf{B}_d + \mathbf{v}_{d-1}\mathbf{B}_{d-1}z + \cdots + \mathbf{v}_1\mathbf{B}_1z^{d-1} + \mathbf{v}_0z^d,$$
where $\mathbf{v}_i$ is the row vector of the coefficients of $g$ corresponding to the monomials in $\mathbf{B}_i$. Denote the last coefficient of $\mathbf{v}_d$ by $c_d$. Note that $c_d$ is the coefficient corresponding to the monomial $x_n^d$. Set $\mathbf{v}_d^* = \mathbf{v}_d \setminus \{ c_d \}$. We will construct a set of generators for $\initial(I,g)$ by using the incremental method.
\vskip 0.3cm
For $i = 0$, we have $E_0 = \{ 1 \}$ and $M_0$ is the row matrix $\pmt{\mathbf{v}_d & \mathbf{v}_{d-1} & \cdots & \mathbf{v}_1 & \mathbf{v}_0}$. Its first entry is generically nonzero, so the first column of $M_0$ is linearly independent. Thus, the largest monomial of $\mathbf{B}_d$ will be added.
\vskip 0.3cm
For $1 \leq i \leq i^*,$ equation (\ref{Equation}) can be explicitly written in the following form
\begin{equation}\label{Equation 1}
\mt{& \mt{\mathbf{B}_{d+i} & z\mathbf{B}_{d+i-1} & \ \ \cdots & z^{i-1}\mathbf{B}_{d+1} & z^i\mathbf{B}_d & \ \cdots & z^{d+i}} \\
\pmt{\mathbf{B}_i\\ z\mathbf{B}_{i-1}\\ \vdots \\ z^{i-1}\mathbf{B}_1\\ z^i\mathbf{B}_0}.g \equiv & \pmt{\ \Gamma_{i,d+i}&\Gamma_{i,d+i-1}&\cdots&\Gamma_{i,d+1}&\Gamma_{i,d}&\ \cdots&\Gamma_{i,0}\\
\ 0&\Gamma_{i-1,d+i-1}&\cdots&\Gamma_{i-1,d+1}&\Gamma_{i-1,d}&\ \cdots&\Gamma_{i-1,0}\\
\ \vdots&\vdots&\ddots&\vdots&\vdots&\ \ddots&\vdots\\
\ 0&0&\cdots&\Gamma_{1,d+1}&\Gamma_{1,d}&\ \cdots& \Gamma_{1,0}\\
\ 0&0&\cdots&0&\Gamma_{0,d}&\ \cdots&\Gamma_{0,0}}}
\end{equation}
\vskip 0.5cm
Denote by $A_i, A_{i-1}, \cdots, A_1, A_0$ the submatrices of $M_i$ formed by the columns corresponding to the monomials in $\mathbf{B}_{d+i}, z\mathbf{B}_{d+i-1}, \cdots, z^{i-1}\mathbf{B_{d+1}}, z^i\mathbf{B_d}$ respectively.
\vskip 0.3cm
We now consider the block $\Gamma_{i,d+i}$. Note that the entries of $\Gamma_{i,d+i}$ are $\bar{F}$-linear combinations of the coefficients in $\mathbf{v}_d$. Since $i \leq \dfrac{\delta-d}{2}$, we have $i < \sigma$ and
$$\mathbf{B}_i = \widetilde{\mathbf{B}}_i^0 \cup x_n \widetilde{\mathbf{B}}_{i-1}^0 \cup \cdots \cup x_n^{i-1}\widetilde{\mathbf{B}}_1^0 \cup \{ x_n^i \}.$$
Since $d+i \leq \delta -i$, we have $a_{d+i} \geq a_i$ and the $a_i$ smallest monomials in $\mathbf{B}_{d+i}$ are
$$x_n^d\mathbf{B}_i = x_n^d\widetilde{\mathbf{B}}_i^0 \cup x_n^{d+1} \widetilde{\mathbf{B}}_{i-1}^0 \cup \cdots \cup x_n^{d+i-1}\widetilde{\mathbf{B}}_1^0 \cup \{ x_n^{d+i} \}.$$
Let $x^{\alpha} \in \mathbf{B}_i$ and suppose $x^{\alpha} = x_n^jx^{\beta} \in x_n^{j}\widetilde{\mathbf{B}}_{i-j}^0$ for some $0 \leq j \leq i$ and $x^{\beta} \in \widetilde{\mathbf{B}}_{i-j}^0$. Then
$$x^{\alpha}x_n^d = x_n^{d+j}x^{\beta} \in x_n^{d+j}\widetilde{\mathbf{B}}_{i-j}^0 \subset \mathbf{B}_{d+i}.$$
Thus, the term $c_d\,x^{\alpha}x_n^d$ of the product $x^{\alpha}g$ is already reduced modulo $G$. Therefore, among the coefficients of the reduced form of the product $x^{\alpha}g$, $c_d$ appears only in the coefficient of the monomial $x_n^{d+j}x^{\beta} \in \mathbf{\mathbf{B}}_{d+i}$. Thus,
$$\Gamma_{i,d+i} = \pmt{L_{1,1} & \cdots & c_d + L_{1,k} & L_{1,k+1} & \cdots & L_{1,a_{d+i}} \\ L_{2,1} & \cdots & L_{2,k} & c_d + L_{2,k+1} & \cdots & L_{2,a_{d+i}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ L_{a_i,1} & \cdots & L_{a_i,k} & L_{a_i,k+1} & \cdots & c_d + L_{a_i,a_{d+i}}},$$
where $k=a_{d+i} - a_i +1$ and $L_{i,j}$ is an $\bar{F}$-linear combination of the coefficients in $\mathbf{v}_d^*$. Hence, the $a_i$ last columns of $\Gamma_{i,d+i}$ are linearly independent, so that $\rank(\Gamma_{i,d+i}) = a_i$. This implies that the $a_i$ first linearly independent columns of $M_i$ are the $a_i$ first linearly independent columns of $A_i$ (defined above). Define $S_i$ to be the subset of $[1,a_{d+i}]$ whose elements are the indices of the $a_i$ first linearly independent columns of $A_i$. Then the generators which will be added are the elements of $\mathbf{B}_{d+i}^{S_i}$. Since, again, the $a_{i-1}$ last columns of $\Gamma_{i-1,d+i-1}$ are linearly independent, we have
$$\rank \pmt{\Gamma_{i,d+i} & \Gamma_{i,d+i-1} \\ 0 & \Gamma_{i-1,d+i-1}}= a_i + a_{i-1}.$$
Therefore, the $a_{i-1}$ next linearly independent columns of $M_i$ are the $a_{i-1}$ first linearly independent columns of $A_{i-1}$ and so on. We have the $a_i + a_{i-1} + \cdots + a_0$ first linearly independent columns of $M_i$ are the $a_i, a_{i-1}, \cdots, a_1, a_0$ first linearly independent columns of $A_i, A_{i-1}, \cdots, A_1, A_0$ respectively.
However, the monomials in $z\mathbf{B}_{d+i-1}, \cdots, z^{i-1}\mathbf{B}_{d+1}, z^i\mathbf{B}_d$ corresponding to the first linearly independent columns of $A_{i-1}, \cdots, A_1, A_0$, respectively, are redundant, since they are multiples of the monomials that were added in steps $i-1, \cdots, 1, 0$. Thus, in step $i$, only the monomials in $\mathbf{B}_{d+i}^{S_i}$ will be added.
Note that, in the case $\delta - d \equiv 0$ (mod $2$) we have $\mathbf{B}_{d+i^*}^{S_{i^*}} = B_{d+i^*}$. Indeed, since $d+i^* = \delta - i^*$, by \cite[Lemma 3.4]{JS} we have $a_{d+i^*} = a_{\delta-i^*} = a_{i^*}$. Hence, $S_{i^*} = [1,a_{d+i^*}]$.
\vskip 0.2cm
For $i^* < i < \delta - d$, equation (\ref{Equation}) again has the form of (\ref{Equation 1}). Let $\Lambda_i$ denote the square submatrix of $M_i$ given by
$$\mt{& \ \ \mt{\mathbf{B}_{d+i} & z\mathbf{B}_{d+i-1} & \ \ \cdots & z^{d+2i-\delta}\mathbf{B}_{\delta-i}} \\
\Lambda_i = & \pmt{\Gamma_{i,d+i}&\Gamma_{i,d+i-1}&\cdots&\ \Gamma_{i,\delta-i}\\
0&\Gamma_{i-1,d+i-1}&\cdots&\ \Gamma_{i-1,\delta-i}\\
\vdots&\vdots&\ddots&\ \vdots\\
0&0&\cdots&\ \Gamma_{\delta-d-i,\delta-i}}}.$$
\vskip 0.3cm
Then $M_i$ has the form
$$M_i = \pmt{ \Lambda_i & \Omega \\ 0 & M_{\delta-d-i-1}}.$$
\vskip 0.3cm
By \cite[Proposition 3.16]{JS}, $\Lambda_i$ is nonsingular. Hence, the first linearly independent columns of $M_i$ are given by all the columns of $\Lambda_i$ together with the columns corresponding to the first linearly independent columns of $M_{\delta-d-i-1}$. Note that the monomials in $E_{d+i}$ corresponding to the first linearly independent columns of $M_{\delta-d-i-1}$ are redundant, since they are multiples of monomials that were already added in step $\delta-d-i-1$. Furthermore, by using Lemma \ref{multiple}, for $i = i^*+1, \cdots, \delta - d-1$, we obtain the following.
\vskip 0.2cm
$\bullet$ If $\delta - d \equiv 0$ (mod $2$) then only the monomials in $z^2B_{d+i^*-1}, \cdots, z^{\delta - d-2}B_{d+1}$ will be added.
\vskip 0.2cm
$\bullet$ If $\delta - d \equiv 1$ (mod $2$) then only the monomials in $B_{d+i^*+1}, zB_{d+i^*}, \cdots, z^{\delta - d-2}B_{d+1}$ will be added.
\vskip 0.2cm
Finally, for $\delta - d \leq i \leq \delta$, by \cite[Corollary 3.11]{JS} the monomials in
$z^{\delta - d}B_d$ , $z^{\delta - d + 2}B_{d-1}$, $\cdots$, $z^{\delta+d-2}B_1$, $z^{\delta+d}B_0$ will be added.
\end{proof}
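The selection of ``the first linearly independent columns'' of $M_i$, used repeatedly in the proof above, is ordinary linear algebra. The following Python sketch is our own illustration (the function name is hypothetical, and exact rational arithmetic stands in for computations over $\bar{F}$): it picks the leftmost maximal independent set of columns by Gaussian elimination.

```python
from fractions import Fraction

def first_independent_columns(M):
    """Indices of the leftmost maximal set of linearly independent
    columns of M, found by Gaussian elimination over the rationals."""
    rows, cols = len(M), len(M[0])
    A = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(cols):
        # find a pivot entry in column c at or below row r
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue          # column c depends on the pivot columns so far
        A[r], A[p] = A[p], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return pivots
```

For a block upper-triangular matrix with nonsingular diagonal blocks, as in the proof, the columns of the leading block are always selected first.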
\begin{Remark} \emph{(1) The set of generators of $\initial(I,g)$ appearing in Proposition \ref{Structure initial} is not minimal. For instance, the monomial $z^{\delta - d}\mathbf{B}_d^{\{1\}}$ is a multiple of $\mathbf{B}_d^{\{1\}}$. However, with respect to property $P$, we will see that the problems only come from the monomials in $\mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}$.
\vskip 0.3cm
\noindent (2) The set of generators of $\initial(I,g)$ in Proposition \ref{Structure initial} is determined by the monomials in $\mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}$.}
\end{Remark}
The following lemma will be useful for proving Proposition \ref{Part 2}.
\begin{Lemma}\label{Lemma} $B_{d+i} \subset \initial(I,g)$ for every $i > i^*$.
\end{Lemma}
\begin{proof}
If $\delta - d \equiv 0$ (mod $2$), by Proposition \ref{Structure initial} $B_{d+i^*} \subset \initial(I,g)$. Hence, by Lemma \ref{multiple} $B_{d+i} \subset \initial(I,g)$. If $\delta - d \equiv 1$ (mod $2$) then $B_{d+i^*+1} \subset \initial(I,g)$, and Lemma \ref{Lemma} follows from Lemma \ref{multiple} again.
\end{proof}
In the next proposition, we prove that, in every case, the generators of $\initial(I,g)$ other than those in $\initial(I)$, $\mathbf{B}_d^{\{1\}}$, $\mathbf{B}_{d+1}^{S_1}$, $\cdots$, $\mathbf{B}_{d+i^*}^{S_{i^*}}$ satisfy property $P$.
\begin{Proposition}\label{Part 2} (1) If $\delta - d \equiv 0$ (mod $2$), then the generators of $\initial(I,g)$ in $z^2B_{d+i^*-1}$, $\cdots$, $z^{\delta - d}B_d$, $z^{\delta - d + 2}B_{d-1}$, $\cdots$, $z^{\delta+d-2}B_1$, $z^{\delta+d}B_0$ satisfy property $P$.
\vskip 0.2cm
\noindent (2) If $\delta - d \equiv 1$ (mod $2$), then the generators of $\initial(I,g)$ in $B_{d+i^*+1}$, $zB_{d+i^*}$, $\cdots$, $z^{\delta - d}B_d$, $z^{\delta - d + 2}B_{d-1}$, $\cdots$, $z^{\delta+d-2}B_1$, $z^{\delta+d}B_0$ satisfy property $P$.
\end{Proposition}
\begin{proof}
(1) If $x^\mu$ is a generator of $\initial(I,g)$ in $z^2B_{d+i^*-1}$, $\cdots$, $z^{\delta - d}B_d$, $z^{\delta - d + 2}B_{d-1}$, $\cdots$, $z^{\delta+d-2}B_1$, $z^{\delta+d}B_0$, then $x^\mu$ is divisible by $z$ and $\deg(x^\mu) = k > d+i^*$. Let $x^\alpha$ be any monomial of degree $k$ in the variables $x_1, \cdots, x_n$. If $x^\alpha \in B_k$, then $x^\alpha \in \initial(I,g)$ by Lemma \ref{Lemma}; otherwise $x^\alpha \notin B_k$, so that $x^\alpha \in \initial(I) \subset \initial(I,g)$. Thus, $x^\mu$ satisfies property $P$.
(2) If $x^\mu$ is a generator of $\initial(I,g)$ in $B_{d+i^*+1}$, then $x^\mu$ is divisible by $x_n$ because $B_{d+i^*+1} = x_n^{d+1}B_{i^*}$. Hence, for every monomial $x^\alpha$ of degree $d+i^*+1$ in variables $x_1, \cdots, x_{n-1}$, we have $x^\alpha \notin B_{d+i^*+1}$, so that $x^\alpha \in \initial(I) \subset \initial(I,g)$. Thus, $x^\mu$ satisfies property $P$.
If $x^\mu$ is a generator of $\initial(I,g)$ in $zB_{d+i^*}$, $\cdots$, $z^{\delta - d}B_d$, $z^{\delta - d + 2}B_{d-1}$, $\cdots$, $z^{\delta+d-2}B_1$, $z^{\delta+d}B_0$, then, by the same argument as in (1), $x^\mu$ satisfies property $P$.
\end{proof}
We still have to prove that the generators of $\initial(I,g)$ in $\mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}$ satisfy property $P$. Under the condition $d \geq \sigma$, we get the following.
\begin{Proposition} \label{Part 1} If $d \geq \sigma$, then the generators of $\initial(I,g)$ in $\mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}$ satisfy property $P$.
\end{Proposition}
\begin{proof}
Since $d \geq \sigma$, by Proposition \ref{Structure B} the monomials in $B_{d+1}, \cdots, B_{d+i^*}$ are divisible by $x_n$. Hence, if $x^\mu$ is a generator of $\initial(I,g)$ in $\mathbf{B}_{d+i}^{S_i}$ for some $1 \leq i \leq i^*$, then $x^\mu$ is divisible by $x_n$. It follows that $x^\mu$ satisfies property $P$, because every monomial $x^\alpha$ of degree $d+i$ in the variables $x_1, \cdots, x_{n-1}$ satisfies $x^\alpha \notin B_{d+i}$, so that $x^\alpha \in \initial(I) \subset \initial(I,g)$.
\end{proof}
\noindent \emph{Proof of Theorem \ref{Pardue E with condition}}. If $\delta - d \equiv 0$ (mod $2$), by Proposition \ref{Structure initial} we have
\begin{center}
$\initial(I,g) = (\initial(I), \mathbf{B}_d^{\{1\}}, \mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}, z^2B_{d+i^*-1}, \cdots, z^{\delta - d}B_d,$\\
$ z^{\delta - d + 2}B_{d-1}, \cdots, z^{\delta+d-2}B_1, z^{\delta+d}B_0).$
\end{center}
\noindent The monomial $\mathbf{B}_d^{\{1\}}$ satisfies property $P$ because it is the largest monomial of $\mathbf{B}_d$. By Proposition \ref{Part 1} and Proposition \ref{Part 2} (1) the generators of $\initial(I,g)$ in $\mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}$ and in $z^2B_{d+i^*-1}$, $\cdots$, $z^{\delta - d}B_d$, $z^{\delta - d + 2}B_{d-1}$, $\cdots$, $z^{\delta+d-2}B_1$, $z^{\delta+d}B_0$ satisfy property $P$. Hence, if Conjecture \ref{Pardue E} is true for $J = \pi(I)$, then every minimal generator of $\initial(I,g)$ satisfies property $P$, so that Conjecture \ref{Pardue E} is true for $\initial(I,g)$.
If $\delta - d \equiv 1$ (mod $2$), then Theorem \ref{Pardue E with condition} is proved by a similar argument as above.
\vskip 0.3 cm
From Proposition \ref{Structure initial}, we obtain the following corollary, which describes more explicitly the set of standard monomials with respect to $(I,g)$ in the case $d < \delta$.
\begin{Corollary} \label{Structure F}
Let $(I,g) = (f_1,\cdots,f_n, g)$ be a generic homogeneous ideal of type $(n+1;d_1,\cdots,d_n,d)$ in $K [x_1,\cdots,x_n,z]$, where $d_1 \leq \cdots \leq d_n \leq d$ and $d < \delta$. Let $B = B(n;d_1, \cdots , d_n)$ and $F = B(n+1;d_1,\cdots,d_n,d)$.
\vskip 0.2cm
\noindent \begin{minipage}[t]{8.5cm}
\quad (1) If $\delta - d \equiv 0$ (mod $2$), then
\begin{align*}
F_0 &= B_0\\
F_1 &= B_1 \cup zF_0\\
& \ \vdots\\
F_{d-1} &= B_{d-1} \cup zF_{d-2}\\
F_d &= \mathbf{B}_d^{(1,a_d]} \cup zF_{d-1}\\
F_{d+1} &= \mathbf{B}_{d+1}^{[1,a_{d+1}] \backslash S_1} \cup zF_d\\
& \ \vdots\\
F_{d+i^*-1} &= \mathbf{B}_{d+i^*-1}^{[1,a_{d+i^*-1}] \backslash S_{i^*-1}} \cup zF_{d+i^*-2}\\
F_{d+i^*} &= zF_{d+i^*-1}\\
& \ \vdots\\
F_{\delta} &= z^{\delta-d+1}F_{d-1}\\
F_{\delta+1} &= z^{\delta-d+3}F_{d-2}\\
& \ \vdots\\
F_{\delta+d-1} &= z^{\delta+d-1}F_0.
\end{align*}
\end{minipage}
\begin{minipage}[t]{8.5cm}
\quad (2) If $\delta - d \equiv 1$ (mod $2$), then
\begin{align*}
F_0 &= B_0\\
F_1 &= B_1 \cup zF_0\\
& \ \vdots\\
F_{d-1} &= B_{d-1} \cup zF_{d-2}\\
F_d &= \mathbf{B}_d^{(1,a_d]} \cup zF_{d-1}\\
F_{d+1} &= \mathbf{B}_{d+1}^{[1,a_{d+1}] \backslash S_1} \cup zF_d\\
& \ \vdots\\
F_{d+i^*} &= \mathbf{B}_{d+i^*}^{[1,a_{d+i^*}] \backslash S_{i^*}} \cup zF_{d+i^*-1}\\
F_{d+i^*+1} &= z^2F_{d+i^*-1}\\
& \ \vdots\\
F_{\delta} &= z^{\delta-d+1}F_{d-1}\\
F_{\delta+1} &= z^{\delta-d+3}F_{d-2}\\
& \ \vdots\\
F_{\delta+d-1} &= z^{\delta+d-1}F_0.
\end{align*}
\end{minipage}
\end{Corollary}
\vskip 0.5 cm
Thus, in order to construct $F = B(n+1; d_1,\cdots,d_n,d)$ from $B = B(n;d_1,\cdots,d_n)$, we only need to know explicitly the monomials in $\mathbf{B}_{d+1}^{S_1}, \cdots, \mathbf{B}_{d+i^*}^{S_{i^*}}$. In the following example, we illustrate how to construct $\initial(I,g)$ from $\initial(I)$ and $B(n; d_1, \cdots, d_n)$ according to Proposition \ref{Structure initial}. Moreover, we construct $F = B(n+1; d_1, \cdots, d_n,d)$ from $B(n; d_1, \cdots, d_n)$ according to Corollary \ref{Structure F}.
\vskip 0.3cm
\begin{Example}\label{Ex structure in}\emph{Let $(I,g)=(f_1,\cdots,f_4,g)$ be the generic ideal of type $(5; 2,3,3,4,5)$ in $K[x_1, \cdots, x_4, z]$. Let $B = B(4;2,3,3,4)$ as in Example \ref{ex structure B}. Then $\delta = 8, \sigma = 4, d = 5$ and $i^*=\lfloor\frac{\delta-d}{2}\rfloor = 1$. We write $g$ in reduced form as follows}
$$g = \mathbf{v}_5\mathbf{B}_5 + \mathbf{v}_4\mathbf{B}_4z + \cdots + \mathbf{v}_1\mathbf{B}_1z^4 + z^5.$$
\emph{For $i=0$, the largest monomial of $\mathbf{B}_5$ will be added; in this case it is $x_1x_2x_3^2x_4^2$.}
\noindent \emph{For $i=1,$
\begin{align*}
& \quad \mt{\mathbf{B}_6 & \ z\mathbf{B}_5 & \cdots & z^5\mathbf{B}_1 & z^6} \\
\mathbf{E}_1.g \equiv M_1\mathbf{E}_6 \ \Leftrightarrow \ \pmt{\mathbf{B}_1 \\ z}.g \equiv &\pmt{\Gamma_{1,6} & \Gamma_{1,5} & \cdots & \ \Gamma_{1,1} & \Gamma_{1,0} \\ 0 & \Gamma_{0,5} & \cdots & \ \Gamma_{0,1} & \Gamma_{0,0}},
\end{align*}}
\noindent \emph{where $\mathbf{B}_1 = \mathbf{\widetilde{B}}_1^0 \cup \{ x_4 \}$ and $\mathbf{B}_6 = x_4^4\mathbf{\widetilde{B}}_2^0 \cup x_4^5\mathbf{\widetilde{B}}_1^0 \cup \{x_4^6 \}$. The monomials in $\mathbf{B}_6$ corresponding to the first $a_1 = 4$ linearly independent columns of $\Gamma_{1,6}$ will be added. By using Macaulay2 to compute $\initial(I,g)$, we see that the $4$ largest monomials of $\mathbf{B}_6$ are minimal generators of $\initial(I,g)$. This means that the first $4$ columns of $\Gamma_{1,6}$ are linearly independent, so that $\mathbf{B}_6^{S_1} = \mathbf{B}_6^{[1,4]}$. Thus, the set of generators of $\initial(I,g)$ as in Proposition \ref{Structure initial} is
$$\initial(I,g) = (\initial(I), \mathbf{B}_5^{\{1\}}, \mathbf{B}_6^{[1,4]}, B_7, zB_6, z^3B_5, z^5B_4, z^7B_3, z^9B_2, z^{11}B_1, z^{13}).$$ }
\hspace*{15pt}\emph{Let $F = B(5;2,3,3,4,5)$ be the set of the standard monomials with respect to $(I,g)$. Denote by $f_i = |F_i|$ and $f'_i = |\widetilde{F}_i^0|$. By Corollary \ref{Structure F}, we have}
\emph{\begin{align*}
&F_0 = \{ 1 \} &\Rightarrow f_0 = 1.\\
&F_1 = B_1 \cup \{ z \} &\Rightarrow f'_1 = 4 , f_1 = 5 .\\
&F_2 = B_2 \cup zF_1 &\Rightarrow f'_2 = 9 , f_2 = 14 .\\
&F_3 = B_3 \cup zF_2 &\Rightarrow f'_3 = 14, f_3 = 28 .\\
&F_4 = B_4 \cup zF_3 &\Rightarrow f'_4 = 16 , f_4 = 44 .\\
&F_5 = \widetilde{F}_5^0 \cup zF_4 \ \text{where} \ \widetilde{F}_5^0 = \mathbf{B}_5^{(1,14]} &\Rightarrow f'_5 = 13, f_5 = 57.\\
&F_6 = \widetilde{F}_6^0 \cup zF_5 \ \text{where} \ \widetilde{F}_6^0 = \mathbf{B}_6^{(4,9]} &\Rightarrow f'_6 = 5, f_6 = 62.\\
&F_7 = z^2F_5; \ F_{8} = z^4F_4; \ \cdots; F_{11} = z^{10}F_1; \ F_{12} = z^{12}F_0.&
\end{align*}}
\end{Example}
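The recursion $F_i = \widetilde{F}_i^0 \cup zF_{i-1}$ behind the counts above gives $f_i = f'_i + f_{i-1}$. As a quick sanity check (a hypothetical helper, not part of the paper), a few lines of Python reproduce $f_0, \ldots, f_6$ from the increments $f'_0 = 1, f'_1, \ldots, f'_6$ of the example:

```python
def cumulative(fprime):
    """Sizes f_i = f'_i + f_{i-1} of F_i = \\tilde F_i^0 \\cup z F_{i-1},
    computed as running sums of the increments f'_i."""
    out, total = [], 0
    for x in fprime:
        total += x
        out.append(total)
    return out
```

Running it on the increments $1, 4, 9, 14, 16, 13, 5$ recovers the values $1, 5, 14, 28, 44, 57, 62$ listed in the example.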
In \cite[Lemma 3.14]{JS}, it is conjectured that $\mathbf{B}_{d+i}^{S_i}$ are the $a_i$ largest monomials of $\mathbf{B}_{d+i}$ for every $i = 0, \cdots, i^*$. However, in the following example, we show that this conjecture is not true.
\vskip 0.3cm
\begin{Example} \emph{Let $(I,g)=(f_1,\cdots,f_5,g)$ be the generic ideal of type $(6; 2,3,3,4,5,5)$ in $K[x_1, \cdots, x_5, z]$. Let $F = B(5;2,3,3,4,5)$ as in Example \ref{Ex structure in} with the variable $x_5$ instead of variable $z$. Then $\delta = 12, \sigma = 6$, $d=5$ and $i^*=\lfloor\frac{\delta-d}{2}\rfloor = 3$. Here $F$ plays the same role of $B$ in Proposition \ref{Structure initial}. We write}
$$g = \mathbf{v}_5\mathbf{F}_5 + \mathbf{v}_4\mathbf{F}_4z + \cdots + \mathbf{v}_1\mathbf{F}_1z^4 + z^5.$$
\emph{For $i=0$, the largest monomial of $\mathbf{F}_5$ will be added; in this case it is $x_2^2x_3x_4^2$.}
\noindent \emph{For $i=1,$
\begin{align*}
& \quad \mt{\mathbf{F}_6 & \ z\mathbf{F}_5 & \cdots & z^5\mathbf{F}_1 & z^6} \\
\mathbf{E}_1.g \equiv M_1\mathbf{E}_6 \ \Leftrightarrow \ \pmt{\mathbf{F}_1 \\ z}.g \equiv &\pmt{\Gamma_{1,6} & \Gamma_{1,5} & \cdots & \ \Gamma_{1,1} & \Gamma_{1,0} \\ 0 & \Gamma_{0,5} & \cdots & \ \Gamma_{0,1} & \Gamma_{0,0}},
\end{align*}}
\noindent \emph{where $\mathbf{F}_1 = \mathbf{\widetilde{F}}_1^0 \cup \{ x_5 \}$ and $\mathbf{F}_6 = \mathbf{\widetilde{F}}_6^0 \cup x_5\mathbf{\widetilde{F}}_5^0 \cup \cdots \cup x_5^5\mathbf{\widetilde{F}}_1^0 \cup \{x_5^6 \}$. The monomials in $\mathbf{F}_6$ corresponding to the first $f_1=5$ linearly independent columns of $\Gamma_{1,6}$ will be added. We have}
\begin{align*}
& \quad \mt{\mathbf{\widetilde{F}}_6^0 & x_5\mathbf{\widetilde{F}}_5^0 & \cdots & x_5^5\mathbf{\widetilde{F}}_1^0 & x_5^6 } \\
\mathbf{F}_1.g \longleftrightarrow \Gamma_{1,6}\mathbf{F}_6 \ \Leftrightarrow \ \pmt{\mathbf{\widetilde{F}}_1^0 \\ x_5}.g \longleftrightarrow &\pmt{\Omega_{1,6} & \Omega_{1,5} & \cdots & \ \Omega_{1,1} & \Omega_{1,0} \\ 0 & \Omega_{0,5} & \cdots & \ \Omega_{0,1} & \Omega_{0,0}},
\end{align*}
\emph{Since $|\mathbf{\widetilde{F}}_1^0| = f'_1 = 4$ and $|\mathbf{\widetilde{F}}_6^0| = f'_6 = 5$, we get $\rank(\Omega_{1,6}) \leq 4$. By using Macaulay2 to compute $\initial(I,g)$, we see that the $4$ largest monomials of $\mathbf{F}_6$ are minimal generators of $\initial(I,g)$. This means that the first $4$ columns of $\Omega_{1,6}$ are linearly independent. Hence the first $5$ linearly independent columns of $\Gamma_{1,6}$ are the first $4$ columns together with the $6$th column. Thus $\mathbf{F}_6^{S_1} = \mathbf{F}_6^{[1,4]} \cup \mathbf{F}_6^{\{6\}}$. However, the monomial $\mathbf{F}_6^{\{6\}}$ is not a minimal generator, because $\mathbf{F}_6^{\{6\}} = x_5\mathbf{F}_5^{\{1\}}$ and $\mathbf{F}_5^{\{1\}}$ was already added in step $i=0$.}
\end{Example}
\section{Application for Fr\"{o}berg's Conjecture}
\hspace{15pt} In \cite[Theorem 2]{P}, Pardue proved that Fr\"{o}berg's Conjecture is equivalent to Conjecture \ref{Pardue E}. In order to prove the equivalence of the conjectures, Pardue used the notion of semi-regular sequences already quoted in the introduction and introduced in \cite[Section 3]{P}.
Regular sequences and semi-regular sequences can be characterized by Hilbert series.
\begin{Proposition} \cite[Proposition 1]{P}\label{semi-regular}
Let $A = K[x_1,\cdots,x_n]/I$, where $I$ is a homogeneous ideal, and $f_1, \cdots, f_r$ are homogeneous polynomials of degree $d_1, \cdots, d_r$. Then,
\vskip 0.3cm
\noindent (1) $f_1, \cdots, f_r$ is a semi-regular sequence on $A$ if and only if for all $s = 1, \cdots, r$
$$HS_{A/(f_1, \cdots, f_s)}(z) = \left\lceil \pmt{\prod_{i=1}^s(1 - z^{d_i})} HS_A(z) \right\rceil.$$
(2) $f_1, \cdots, f_r$ is a regular sequence on $A$ if and only if
$$HS_{A/(f_1, \cdots, f_r)}(z) = \pmt{\prod_{i=1}^r(1 - z^{d_i})} HS_A(z).$$
\end{Proposition}
\vskip 0.3cm
In \cite[Theorem 2]{P}, Pardue proved that Conjecture \ref{Pardue E} is equivalent to the following conjecture.
\begin{Conjecture} \cite[Conjecture C]{P}\label{Pardue C}
Let $I = (f_1, \cdots, f_n)$ be a generic homogeneous ideal of type $(n;d_1,\cdots,d_n)$ in $K [x_1,\cdots,x_n]$. Then $x_n, x_{n-1}, \cdots, x_1$ is a semi-regular sequence on $A = K[x_1, \cdots, x_n]/I$.
\end{Conjecture}
We apply now Theorem \ref{Pardue E with condition} to get partial answers to Conjecture \ref{Pardue E} and Conjecture \ref{Pardue C}. Let $d_1 \leq \cdots \leq d_n$ be $n$ positive integers. For every $1 \leq i \leq n$, we set
\vskip 0.3cm
\hspace{110pt} $\delta_i = d_1 + \cdots + d_{i} - i,$
\vskip 0.3cm
\hspace{110pt} $\sigma_i = \min \left\{ \delta_{i-1} , \left\lfloor \dfrac{\delta_i}{2} \right\rfloor \right\}$ for all $i \geq 2$.
\vskip 0.3cm
\begin{Theorem}\label{Partial Pardue E3}
Let $I = (f_1,\cdots,f_n)$ be a generic homogeneous ideal of type $(n;d_1,\cdots,d_n)$ in $K [x_1,\cdots,x_n]$ with $n \leq 3$ and $d_1 \leq \cdots \leq d_n$. Then, Conjecture \ref{Pardue E} is true for $I$.
\end{Theorem}
\begin{proof}
It is known that Conjecture \ref{Pardue E} is true in the case $n \leq 2$. For $n=3$, $J = \pi(I)$ is a generic ideal of type $(2;d_1,d_2)$, so Conjecture \ref{Pardue E} is true for $J$. Since
$$d_3 \geq \sigma_2 = \min \left\{ d_1 - 1 , \left\lfloor \dfrac{d_1 + d_2 - 2}{2} \right\rfloor \right\},$$
by Theorem \ref{Pardue E with condition} we have that Conjecture \ref{Pardue E} is true for $I$.
\end{proof}
\begin{Theorem}\label{Partial Pardue E}
Let $I = (f_1,\cdots,f_n)$ be a generic homogeneous ideal of type $(n;d_1,\cdots,d_n)$ in $K [x_1,\cdots,x_n]$ with $n \geq 4$ and $d_1 \leq \cdots \leq d_n$. If $d_i \geq \sigma_{i-1}$ for all $4 \leq i \leq n$, then Conjecture \ref{Pardue E} is true for $I$.
\end{Theorem}
\begin{proof}
We proceed by induction on $n$. For $n=4$, $J = \pi(f_1,f_2,f_3)$ is a generic ideal of type $(3;d_1,d_2,d_3)$. By Theorem \ref{Partial Pardue E3}, Conjecture \ref{Pardue E} is true for $J$. Since $d_4 \geq \sigma_3$, by Theorem \ref{Pardue E with condition} Conjecture \ref{Pardue E} is true for $I$.
For $n > 4$, $J = \pi(f_1,\cdots,f_{n-1})$ is a generic ideal of type $(n-1;d_1,\cdots,d_{n-1})$ with $d_i \geq \sigma_{i-1}$ for all $4 \leq i \leq n-1$. Hence, by induction, Conjecture \ref{Pardue E} is true for $J$. Since $d_n \geq \sigma_{n-1}$, by Theorem \ref{Pardue E with condition} Conjecture \ref{Pardue E} is true for $I$.
\end{proof}
Since Conjecture \ref{Pardue E} is equivalent to Conjecture \ref{Pardue C}, we also get a partial answer to Conjecture \ref{Pardue C}.
\begin{Theorem}\label{Partial Pardue C}
Let $I = (f_1,\cdots,f_n)$ be a generic homogeneous ideal of type $(n;d_1,\cdots,d_n)$ in $K [x_1,\cdots,x_n]$ with $d_1 \leq \cdots \leq d_n$. If $d_i \geq \sigma_{i-1}$ for all $4 \leq i \leq n$ (in case $n \geq 4$), then $x_n, x_{n-1}, \cdots, x_1$ is a semi-regular sequence on $K[x_1, \cdots, x_n]/I$.
\end{Theorem}
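The numerical hypothesis $d_i \geq \sigma_{i-1}$ is easy to test for a given degree sequence. The following Python sketch (the function name is ours) computes $\delta_i = d_1 + \cdots + d_i - i$ and $\sigma_i = \min\{\delta_{i-1}, \lfloor \delta_i/2 \rfloor\}$ as defined above and checks the condition for all $4 \leq i \leq n$:

```python
def satisfies_hypothesis(degrees):
    """Check d_i >= sigma_{i-1} for all 4 <= i <= n (paper indexing),
    where delta_i = d_1 + ... + d_i - i and
    sigma_i = min(delta_{i-1}, floor(delta_i / 2))."""
    d = list(degrees)                                   # d_1 <= ... <= d_n
    delta = [sum(d[:i + 1]) - (i + 1) for i in range(len(d))]
    for i in range(3, len(d)):                          # paper index i+1 >= 4
        sigma_prev = min(delta[i - 2], delta[i - 1] // 2)  # sigma_{i} (paper)
        if d[i] < sigma_prev:
            return False
    return True
```

For instance, the type $(5;2,3,3,4,5)$ of Example \ref{Ex structure in} satisfies the hypothesis, while a sequence with two small degrees followed by large equal degrees, such as $(3,3,30,30,30)$, does not.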
We apply now Theorem \ref{Partial Pardue C} to prove the main result of this section.
\begin{Theorem}\label{Partial FB}
Let $I = (f_1,\cdots,f_r)$ be a generic homogeneous ideal of type $(n;d_1,\cdots,d_r)$ in $R = K [x_1,\cdots,x_n]$ with $d_1 \leq \cdots \leq d_r$. If $d_i \geq \sigma_{i-1}$ for all $4 \leq i \leq r$ (in case $r \geq 4$), then the Hilbert series of $R/I$ is given by
$$HS_{R/I}(z) = \left\lceil \dfrac{\prod_{i=1}^r(1 - z^{d_i})}{(1-z)^n} \right\rceil.$$
\end{Theorem}
\begin{proof}
Since Fr\"{o}berg's Conjecture is known to be true if $r \leq n$, we only have to consider the case $r > n$. Let $R' = K[x_1, \cdots, x_r]$ be the polynomial ring in $r$ variables and view $R$ as $R = R'/(x_r, \cdots, x_{n+1})$. Then, there exist the generic homogeneous polynomials $f_1', \cdots, f_r'$ of type $(r; d_1, \cdots, d_r)$ in $R'$ such that $f_i$ is the image of $f_i'$ in $R = R'/(x_r, \cdots, x_{n+1})$. Set $A = R'/(f_1', \cdots, f_r')$. It is known that $A$ is the complete intersection and Hilbert series of $A$ is given by
$$HS_A(z) = \dfrac{\prod_{i=1}^r(1 - z^{d_i})}{(1-z)^r}.$$
Applying Theorem \ref{Partial Pardue C} to $(f_1', \cdots, f_r')$, we see that $x_r, \cdots, x_{n+1}, \cdots, x_1$ is a semi-regular sequence on $A$. By Proposition \ref{semi-regular} we get
$$HS_{A/(x_r, \cdots, x_{n+1})}(z) = \left\lceil (1-z)^{r-n}HS_A(z) \right\rceil = \left\lceil \dfrac{\prod_{i=1}^r(1 - z^{d_i})}{(1-z)^n} \right\rceil,$$
and Theorem \ref{Partial FB} follows from the following isomorphisms.
$$A/(x_r, \cdots, x_{n+1}) \cong R'/(f_1', \cdots, f_r', x_r, \cdots, x_{n+1}) \cong R/(f_1, \cdots, f_r).$$
\end{proof}
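Reading $\lceil \cdot \rceil$ as truncation of a power series at its first non-positive coefficient (the usual convention for Fr\"{o}berg's predicted series), the right-hand side of the formula in Theorem \ref{Partial FB} can be computed directly. The following Python sketch is our own illustration; the function name is hypothetical:

```python
from math import comb

def froberg_series(n, degrees, precision=20):
    """Coefficients of the truncated series
    ceil( prod_i (1 - z^{d_i}) / (1 - z)^n ),
    cut off at the first non-positive coefficient."""
    # numerator prod_i (1 - z^{d_i}), as a dense coefficient list
    num = [1]
    for d in degrees:
        new = num + [0] * d
        for k, c in enumerate(num):
            new[k + d] -= c
        num = new
    # divide by (1 - z)^n: convolve with 1/(1-z)^n = sum C(n-1+k, n-1) z^k
    series = []
    for k in range(precision):
        s = sum(num[j] * comb(n - 1 + k - j, n - 1)
                for j in range(min(k, len(num) - 1) + 1))
        series.append(s)
    # ceiling: keep coefficients strictly before the first one <= 0
    out = []
    for c in series:
        if c <= 0:
            break
        out.append(c)
    return out
```

For instance, three generic quadrics in two variables give the series $1 + 2z$, while three generic quadrics in three variables form a complete intersection with series $(1+z)^3 = 1 + 3z + 3z^2 + z^3$.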
\noindent \textbf{Acknowledgements}
\vskip 0.5cm
I thank my advisor Maria Evelina Rossi for suggesting the problem and for providing helpful suggestions throughout the preparation of this manuscript. I am also grateful to the Department of Mathematics of the University of Genova for supporting my PhD program. The main algorithm used in this paper is the incremental method from \cite{GGV} and \cite{JS}, so I also wish to thank Gao, Guan, Volny and Capaverde for providing this effective approach.
\renewcommand{\refname}{References}
| arXiv:1803.04997, "Fröberg's Conjecture and the initial ideal of generic sequences", Commutative Algebra (math.AC), https://arxiv.org/abs/1803.04997. Abstract: Let K be an infinite field and let I = (f_1,...,f_r) be an ideal in the polynomial ring R = K[x_1,...,x_n] generated by generic forms of degrees d_1,...,d_r. In the case r=n, following an effective method by Gao, Guan and Volny, we give a description of the initial ideal of I with respect to the degree reverse lexicographic order. Thanks to a theorem due to Pardue (2010), we apply our result to give a partial solution to a longstanding conjecture stated by Froeberg (1985) on the Hilbert series of R/I. |
https://arxiv.org/abs/1210.1008
Two Topological Uniqueness Theorems for Spaces of Real Numbers

A 1910 theorem of Brouwer characterizes the Cantor set as the unique totally disconnected, compact metric space without isolated points. A 1920 theorem of Sierpinski characterizes the rationals as the unique countable metric space without isolated points. The purpose of this exposition is to give an accessible overview of this celebrated pair of uniqueness results. It is illuminating to treat the problems simultaneously because of commonalities in their proofs. Some of the more counterintuitive implications of these results are explored through examples. Additionally, near-examples are provided which thwart various attempts to relax hypotheses.

\section{Introduction}
The problem of characterizing spaces of real numbers topologically is an old and productive one. In 1928, Alexandroff \& Urysohn characterized the irrationals as the unique separable, completely metrizable, zero-dimensional space for which every compact subset has empty interior \cite{alexandroff/urysohn}. In 1936, Ward characterized the real line as the unique connected, locally connected, separable, metrizable space for which the removal of any point results in precisely two connected components \cite{ward}. In 1970, Franklin \& Krishnarao improved Ward's result by weakening \emph{metrizable} to \emph{regular} \cite{franklin/krishnarao}, thereby removing implicit mention of the reals from the characterization. As stated in the abstract, our interest is in two other results from this family. To wit, Brouwer's characterization of the Cantor set \cite{cantor}, and Sierpi\'{n}ski's characterization of the rationals \cite{sierpinski}. Our proof of Sierpi\'{n}ski's theorem is modeled on one given in \cite{dasgupta}.
First, we standardize some notation and terminology. Except where otherwise stated, all sets of real numbers are assumed to carry the subspace topology inherited from the usual topology on $\R$. We denote by $C$ the standard ``middle thirds'' set of Cantor, and by $E$ the countable, dense subset of $C$ consisting of endpoints of intervals deleted during the construction of $C$, along with $0$ and $1$. Denote by $\N$ the set of positive integers and by $\mathrm{2}$ the (discrete) 2-point space $\{0,1\}$. It is standard that the space $\mathrm{2}^\N$ of infinite binary sequences, with the product topology, is homeomorphic to $C$ (the natural homeomorphism is the one sending $(b_i) \in \mathrm{2}^\N$ to $\sum_{i=1}^\infty \frac{2 b_i}{3^i} \in C$). Since the 2-point discrete space has a natural topological group structure (that of $\Z/2\Z$), the identification $C \cong \mathrm{2}^\N$ shows that $C$ has the structure of an uncountable (abelian) topological group. In particular, this shows $C$ has many self-homeomorphisms, a point which will be of use later. The separation axioms \emph{regular} and \emph{normal} are taken to include the Hausdorff condition, by definition. The word \emph{countable} is taken to mean countably infinite.
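The identification $C \cong \mathrm{2}^\N$ can be made concrete on finite prefixes. The following Python sketch (names ours) evaluates partial sums of $\sum_{i} \frac{2b_i}{3^i}$ exactly and extracts ternary digits, which for points of $C$ use only $0$ and $2$:

```python
from fractions import Fraction

def cantor_point(bits):
    """Partial sum of the natural homeomorphism 2^N -> C:
    the finite binary string (b_1, ..., b_k) maps to sum 2*b_i / 3^i."""
    return sum(Fraction(2 * b, 3 ** i) for i, b in enumerate(bits, start=1))

def ternary_digits(x, n):
    """First n base-3 digits of a rational x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)
        digits.append(d)
        x -= d
    return digits
```

For example, the string $(1,0,1)$ maps to $\frac{2}{3} + \frac{2}{27} = \frac{20}{27}$, whose ternary expansion begins $0.202$.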
The overarching goal is to prove the following theorems which characterize $C$ and $\Q$ uniquely up to homeomorphism.
\begin{thm}[Brouwer] Every nonempty, totally disconnected, compact, metrizable space without isolated points is homeomorphic to the Cantor set $C$.
\end{thm}
\begin{thm}[Sierpi\'{n}ski] Every countable, metrizable space without isolated points is homeomorphic to the rational numbers $\Q$.
\end{thm}
Some applications of these results are given in Examples~\ref{brouw ex} and~\ref{sier ex} below.
\begin{ex}\label{brouw ex} \text{}
\begin{enumerate}[(a)]
\item Any nonempty perfect subset of $C$ is homeomorphic to $C$.
\item Extending (a), a nonempty subset of $\R$ is homeomorphic to $C$ if and only if it is perfect, nowhere dense and compact.
\item The product spaces $C^2,C^3, \ldots$ and $C^\N$ are homeomorphic to $C$.
\item If $X_1,X_2,\ldots$ are totally disconnected, compact, metrizable spaces (e.g. finite discrete spaces) with at least 2 points, then $\prod_{i =1}^\infty X_i$ is homeomorphic to $C$.
\end{enumerate}
\end{ex}
Part (d) holds because total disconnectedness and compactness are preserved under arbitrary products, metrizability is preserved under countable products, and the product of an infinite family of spaces with at least 2 points has no isolated point.
\begin{ex}\label{sier ex} \text{}
\begin{enumerate}[(a)]
\item Any countable, dense subset of $\R$ is homeomorphic to $\Q$ (already for spaces as simple as $\Q \setminus \{0\}$ or $\Q \cup \{\sqrt 2\}$ this is not completely obvious).
\item The endpoints $E$ of the Cantor set are homeomorphic to $\Q$.
\item Extending (a), a countable dense subset of any Euclidean $n$-space is homeomorphic to $\Q$. In particular, the product spaces $\Q^2,\Q^3,\ldots$ are homeomorphic to $\Q$.
\item The ``Sorgenfrey'' topology on $\Q$ generated by half-open intervals $[a,b) \cap \Q$ is homeomorphic to the standard one (metrizability is most easily seen through Urysohn's metrization theorem).
\end{enumerate}
\end{ex}
The spaces in Part (a) above are more basic in the sense that it will in fact be possible to find an \emph{order preserving} homeomorphism with $\Q$ for these spaces. In Part (b), this cannot be so, because the order type of $E$ is very different from that of $\Q$ (every left endpoint is the immediate predecessor of a right endpoint). Thus, an order preserving homeomorphism is too much to hope for in general. This concession paves the way for the intuition-testing examples in Part (c). Here, it is far from apparent that there should be any order structure whatsoever compatible with the topology. Part (d) may be the most perverse example of the lot; the Sorgenfrey topology on $\Q$ is a strictly finer topology than the standard one while, at the same time, homeomorphic to it.
An order preserving homeomorphism between two subsets of $\R$ will be called an \emph{order homeomorphism}. Our general outline for proving Brouwer's theorem and Sierpi\'{n}ski's theorem is the same:
\begin{enumerate}
\item Identify the subsets of $\R$ that are order-homeomorphic to $C$ and $\Q$ respectively.
\item Given $X$ as in Brouwer's theorem or Sierpi\'{n}ski's theorem, construct an embedding of $X$ into $\R$ whose range is one of the above sets.
\end{enumerate}
\section{The Ordered Perspective}
In this paper, we commit a mild travesty by only considering suborders of $\R$. When $X \subset \R$, there is a second natural topology on $X$, besides the subspace topology. This is the \emph{order topology} on $X$. The sets
\[ (a,b) \cap X \ \ \ [\ell,b) \cap X \ \ \ (a,r] \cap X \]
where $\ell$ and $r$ are the largest and smallest elements of $X$ (if indeed such exist) and $a,b \in X$ satisfy $\ell \leq a < b \leq r$ are a basis for the order topology. Note that an order isomorphism between $X,Y \subset \R$ is also a homeomorphism of their order topologies since basic sets will be sent to basic sets. The subspace topology on $X \subset \R$ is always finer than the order topology and, in many familiar instances, the distinction is irrelevant. This is the case for $C$ and $\Q$ (if the former is not clear now, it will be made clear soon). It is an inconvenient fact of life that the subspace topology can be strictly finer. For example, there is a clear order isomorphism between the spaces
\[ X = [0,1) \cup \{2\} \cup (3,4] \text{ and } Y = [0,2] \]
so their order topologies are homeomorphic. However, their subspace topologies are not homeomorphic since $X$ has an isolated point and $Y$ does not. We will record an easy, but highly checkable, necessary and sufficient condition for these topologies to agree. It seems useful and appropriate to phrase this condition, and two others that will become relevant, in a common language.
By a \emph{gap} in $X$ we mean a bounded connected component of $\R \setminus X$. We refine this notion. Every bounded, connected subset of $\R$ has precisely one of the following forms
\[ \underbrace{(a,b)}_\text{open} \ \ \
\underbrace{[a,b] \ \ \{a\}}_\text{closed} \ \ \
\underbrace{[a,b) \ \ (a,b]}_\text{half-open}, \]
so we may classify gaps disjointly as follows: an open gap is an \emph{essential gap} ($X$ contains both endpoints), a closed gap is a \emph{Dedekind gap} ($X$ contains neither endpoint), a half-open gap is a \emph{pseudo-gap} ($X$ contains one endpoint and not the other). For example
\[ X = ([0,1] \cap \Q) \cup [2,3] \cup (4,5) \cup(5,6] \cup [7,\infty) \]
has 2 essential gaps, uncountably many Dedekind gaps (one for each irrational in $(0,1)$, plus the singleton gap $\{5\}$), and 1 pseudo-gap. The first two flavors of gaps are intrinsic to the order on $X$. The essential gaps in $X$ are in one to one correspondence with pairs $(x,y)$ of points in $X$ where $x$ is an immediate predecessor of $y$ (so $y$ is the immediate successor of $x$). The Dedekind gaps in $X$ are in one to one correspondence with the \emph{Dedekind cuts} in $X$, where a Dedekind cut in $X$ is defined as a pair $(L,U)$ of nonempty subsets of $X$ such that $L \cup U =X$, $L<U$, $L$ has no largest element, and $U$ has no smallest element. In contrast, however, pseudo-gaps are \emph{not} detected by the order on $X$ and, when present, indicate that the way $X$ sits in $\R$ is somehow defective. Their importance stems from the following easy result.
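The trichotomy of gaps depends only on which endpoints of the gap belong to $X$, as the following Python sketch records (names ours):

```python
def classify_gap(contains_left, contains_right):
    """Classify a bounded gap of X in R by whether X contains the gap's
    left and right endpoints: essential / Dedekind / pseudo-gap."""
    if contains_left and contains_right:
        return "essential"   # open gap (a, b): X contains both endpoints
    if not contains_left and not contains_right:
        return "Dedekind"    # closed gap [a, b] or {a}: X contains neither
    return "pseudo"          # half-open gap: X contains exactly one endpoint
```

In the example above, the gap $(1,2)$ is essential, the gap $(3,4]$ is a pseudo-gap, and the singleton gap $\{5\}$ is a Dedekind gap.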
\begin{propn}\label{top agree}
The order topology on $X \subset \R$ agrees with the subspace topology if and only if $X$ has no pseudo-gaps.
\end{propn}
Note that a closed subset of $\R$ can only have essential gaps (it must contain both endpoints of any gap). In particular, this shows the order topology and subspace topologies on $C$ are the same.
Essential gaps, on the other hand, are important to us because of the role they play in the following classical, order theoretic analog of Sierpi\'{n}ski's theorem - originally due to Cantor \cite{cantor}.
\begin{thm}[Cantor]\label{cantor}
If $A,B \subset \R$ are countable, have no essential gaps\footnote{It is standard to call a linear order with no essential gaps \emph{dense}, but this conflicts with the meaning of dense in topology so we avoid this usage here.}, and have neither largest nor smallest elements (these conditions mean, for instance, that when $a,a' \in A$ and $a<a'$, we can find $x,y,z \in A$ such that $x<a<y<a'<z$), then $A,B$ are order isomorphic. In particular, both are order isomorphic to $\Q$.
\end{thm}
\begin{proof}
First we fix enumerations of $A$ and $B$. It simplifies matters to index the elements of $A$ with odd numbers and the elements of $B$ with even numbers. So, $A = \{a_1,a_3,a_5,\ldots\}$ while $B = \{b_2,b_4,b_6,\ldots\}$. Let $\mathscr{F}$ be the collection of partially defined, order preserving maps $A \to B$ with finite domain. The plan is to build up an order isomorphism through a countable sequence of extensions $\varphi_1,\varphi_2,\varphi_3,\ldots \in \mathscr{F}$. Moreover, we choose our extensions so that
\[ a_1 \in \dom \varphi_1 \ \ b_2 \in \ran \varphi_2 \ \ a_3 \in \dom \varphi_3 \ \ b_4 \in \ran \varphi_4 \ \ \ldots \]
which ensures the map defined ``in the limit'' is defined on all of $A$ and hits all of $B$. This is sometimes called a ``back-and-forth'' construction. This process can always continue because of the following.
\emph{Claim.} If $\varphi \in \mathscr{F}$ and $a \in A$, then there is an order preserving extension $\varphi^*$ of $\varphi$ with $\dom \varphi^* = \dom \varphi \cup \{a\}$. If $\varphi \in \mathscr{F}$ and $b \in B$, then there is an order preserving extension $\varphi^*$ of $\varphi$ with $\ran \varphi^* = \ran \varphi \cup \{b\}$.
We prove only the first statement. If $a \in \dom \varphi$ already, then do nothing. If $a < \dom \varphi$, then we use the assumption that $B$ has no smallest element to produce $b \in B$ with $b <\ran \varphi$ (note $\ran \varphi$ is finite) and define $\varphi^*(a) = b$. We proceed similarly when $a > \dom \varphi$. Otherwise, $a$ has an immediate predecessor $a^-$ and an immediate successor $a^+$ in $\dom \varphi$ (since $\dom \varphi$ is finite). Since $\varphi(a^-) < \varphi(a^+)$ and $B$ has no essential gaps, there is a $b \in B$ with $\varphi(a^-) < b < \varphi(a^+)$ and we set $\varphi^*(a) = b$. In all of the above cases, the map $\varphi^*$ so obtained is an extension of the desired type.
\end{proof}
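The back-and-forth construction is completely effective, and it can be instructive to run it. Below is a minimal Python sketch (all function names are our own) that carries out finitely many steps between $A$, the dyadic rationals in $(0,1)$, and $B = \Q \cap (0,1)$, using midpoints as the witnesses supplied by the Claim:

```python
from fractions import Fraction as F

def dyadics():
    # enumerate A = the dyadic rationals in (0,1)
    n = 1
    while True:
        for k in range(1, 2 ** n, 2):
            yield F(k, 2 ** n)
        n += 1

def rationals():
    # enumerate B = all rationals in (0,1)
    q = 2
    while True:
        for p in range(1, q):
            x = F(p, q)
            if x.denominator == q:  # skip values already listed
                yield x
        q += 1

def between(lo, hi):
    # a point strictly between lo and hi (None = unconstrained side);
    # midpoints of dyadics are dyadic, so this lands in A or B as needed
    if lo is None:
        lo = F(0)
    if hi is None:
        hi = F(1)
    return (lo + hi) / 2

def back_and_forth(n_steps):
    pairs = []  # a finite order-preserving map, kept sorted
    A, B = dyadics(), rationals()
    for step in range(n_steps):
        side = step % 2        # even: next a into dom; odd: next b into ran
        x = next(A) if side == 0 else next(B)
        if x in [p[side] for p in pairs]:
            continue           # already handled, as in the Claim
        lower = [p for p in pairs if p[side] < x]
        upper = [p for p in pairs if p[side] > x]
        partner = between(lower[-1][1 - side] if lower else None,
                          upper[0][1 - side] if upper else None)
        pairs.append((x, partner) if side == 0 else (partner, x))
        pairs.sort()
    return pairs

pairs = back_and_forth(40)
# the finite stages are order preserving in both coordinates:
assert all(a1 < a2 and b1 < b2
           for (a1, b1), (a2, b2) in zip(pairs, pairs[1:]))
```

Each stage is a member of $\mathscr{F}$, and the alternation guarantees that in the limit the map is defined on all of $A$ and hits all of $B$.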
We now identify the subsets of $\R$ which are order-homeomorphic to $\Q$ (i.e. homeomorphic to $\Q$ via an order preserving homeomorphism). Let $X \subset \R$. We say $p \in \R$ is a \emph{left limit point} of $X$ if it is a limit point of $(-\infty,p) \cap X$. A \emph{right limit point} of $X$ is defined similarly. If $p$ is both a left limit point and a right limit point of $X$, then we say that $p$ is a \emph{2-sided limit point} of $X$.
\begin{thm}[Ordered version of Sierpi\'{n}ski's theorem]\label{ord sier}
A nonempty set $X \subset \R$ is order-homeomorphic to $\Q$ if and only if $X$ is countable and every $x \in X$ is a 2-sided limit point of $X$.
\end{thm}
\begin{proof}
Only the ``if'' part of the statement is nontrivial. Suppose $X$ is countable and each point of $X$ is a 2-sided limit point of $X$. No $x \in X$ can be the left endpoint of a gap in $X$ or a largest element of $X$, or else $x$ is not a right limit point of $X$. Similarly, no $x \in X$ can be the right endpoint of a gap in $X$ or a smallest element of $X$, or else $x$ is not a left limit point of $X$. It follows that $X$ has no largest or smallest element, no essential gaps, and no pseudo-gaps. By Cantor's theorem, there is an order isomorphism $\varphi : X \to \Q$ which is a homeomorphism from the order topology on $X$ to $\Q$. However, since $X$ has no pseudo-gaps, Proposition~\ref{top agree} shows $\varphi$ is also a homeomorphism from the subspace topology on $X$ to $\Q$.
\end{proof}
Before we can prove an analogous result for $C$, we will need to turn our attention to Dedekind gaps. We say that $X \subset \R$ is \emph{Dedekind complete} if it has no Dedekind gaps. As was previously observed, a closed subset of $\R$ can have only essential gaps. Thus, closed subsets of $\R$ are Dedekind complete. It turns out that any linear order embeds into a certain essentially unique Dedekind complete linear order which could be called its \emph{Dedekind completion} (the construction just mimics the construction of the reals using Dedekind cuts in $\Q$). We stop short of defining a Dedekind completion precisely, or proving it always exists. However, we will use the following result - a special case of uniqueness for Dedekind completions.
\begin{propn}\label{ded compl}
Suppose $A \subset X \subset \R$, $X$ is Dedekind complete, and each point in $X \setminus A$ is a 2-sided limit point of $A$. Suppose $B \subset Y \subset \R$, $Y$ is Dedekind complete, and each point in $Y \setminus B$ is a 2-sided limit point of $B$. Then any order isomorphism $\varphi : A \to B$ has a unique extension to an order isomorphism $\varphi^* : X \to Y$.
\end{propn}
\begin{proof}
Let $x \in X \setminus A$. Let $L= \{y \in Y : y \leq \varphi(a) \text{ for some } a \in A \text{ with } a < x\}$ and $U = \{y \in Y : y \geq \varphi(a) \text{ for some } a \in A \text{ with } a > x\}$. Since $x$ is a 2-sided limit point of $A$ and since $\varphi$ is order preserving, it follows that $L$ and $U$ are nonempty, $L < U$, $L$ has no largest element, and $U$ has no smallest element. Since $Y$ is Dedekind complete, $(L,U)$ cannot be a Dedekind cut in $Y$ and there exists $y \in Y$ satisfying $L < y< U$. In fact, this $y$ must be unique since if $y_1,y_2$ have $L < y_1 < y_2 < U$, then $y_2 \in Y \setminus B$ (every element of $B$ lies in $L$ or in $U$) and $y_2$ fails to be a left limit point of $B$, a contradiction. Clearly we are forced to define $\varphi^*(x) = y$ if the extension is to be order preserving. The map $\varphi^*$ obtained by carrying out this argument at each $x \in X \setminus A$ is an order preserving injection $\varphi^* : X \to Y$ extending $\varphi$. The proof that $\varphi^*$ is onto proceeds similarly by fixing a $y \in Y \setminus B$.
\end{proof}
Now we identify the subsets of $\R$ which are order-homeomorphic to $C$.
\begin{thm}[Ordered version of Brouwer's theorem]\label{ord brouw}
A nonempty set $X \subset \R$ is order-homeomorphic to $C$ if and only if $X$ is perfect, nowhere dense and compact.
\end{thm}
\begin{proof}
Only the ``if'' part of the statement is nontrivial. Let $X$ be as above. Since $X$ is closed, it has only essential gaps. Let $L, R \subset X$ be, respectively, the set of left endpoints and the set of right endpoints of essential gaps in $X$. Note that $L \cap R = \varnothing$ since $X$ has no isolated points.
First we show $L$ is order isomorphic to $\Q$ by applying Cantor's theorem. Consider the left endpoint $\ell$ of some essential gap $(\ell, r)$ in $X$. It must be that $\ell$ is a left limit point of $X$ (or else it is isolated, but $X$ is perfect). Whenever $x \in X$ and $x < \ell$, there must be an essential gap in $X$ between $x$ and $\ell$ (or else $X$ has nonempty interior, contradicting nowhere denseness), so there is an $\ell^- \in L$ with $x < \ell^- < \ell$. From these observations, it follows that $\ell$ is not a smallest element of $L$ and has no immediate predecessor. Similarly, $r$ is a right limit point of $X$ and, whenever $x \in X$ and $r < x$, there is an $\ell^+ \in L$ with $r < \ell^+ < x$. From this it follows that $\ell$ is not a largest element of $L$ and that $\ell$ has no immediate successor. From Cantor's theorem, it now follows that $L$ is order isomorphic to $\Q$ as claimed.
There is a natural order isomorphism between $L$ and $R$ by pairing the left and right endpoint of each essential gap. In fact, $L \cup R$ is order isomorphic to $L \times \{0,1\} \cong \Q \times \{0,1\}$ in the dictionary order (adding the elements of $R$ amounts to adjoining an immediate successor to each element of $L$). Let $a = \inf X$ and $b = \sup X$. Since $X$ is compact, $a,b \in X$. Let $E_X = L \cup R \cup \{a,b\} \subset X$. Clearly the order type of $E_X$ is just $\Q \times \{0,1\}$ with a largest and smallest element adjoined. In particular, there is an order isomorphism $\varphi : E_X \to E$.
We claim any point $x \in X \setminus E_X$ is a 2-sided limit point of $E_X$. Such an $x$ is at least a 2-sided limit point of $X$ (since it is not the largest element, not the smallest element, and not an endpoint of a gap). Moreover, if $y \in X$ and $y \neq x$, then there is a gap in $X$ between $x$ and $y$ (since $X$ has empty interior). Thus, there are points of $E_X$ between $x$ and $y$, and it follows that $x$ is a 2-sided limit point of $E_X$ too. But now, Proposition~\ref{ded compl} applies, and $\varphi$ extends uniquely to an order isomorphism $\varphi^* : X \to C$. Na\"{i}vely, $\varphi^*$ need only be a homeomorphism of the order topologies on $X$ and $C$ but, since $X,C$ are closed, these are the same as the subspace topologies and we have our desired order-homeomorphism.
\end{proof}
We have now completed Step 1 from our outline in the introduction. As was observed in Example~\ref{brouw ex}, the subsets of $\R$ which satisfy the hypotheses of Brouwer's theorem are precisely the sets appearing in Theorem~\ref{ord brouw} above\footnote{In contrast, there exist subsets of $\R$ (for example, $E$ or $\Q \cap [0,1]$) which satisfy the hypotheses of Sierpi\'{n}ski's theorem, but are not order-homeomorphic to $\Q$.}. Revisiting the aforementioned outline, it is now apparent that proving Brouwer's theorem is a slightly less delicate business than we originally supposed. \emph{Any} embedding at all that we construct for Step 2 will hit a set of the desired type.
\section{Embeddings of Zero-Dimensional Spaces}
If we are to have any hope of completing Step 2 from our outline, we will need techniques for embedding spaces into $\R$. Typical embedding theorems in general topology take a family $f_i:X \to Y_i$ of continuous functions on a space $X$ and use them as the coordinate functions of a map from $X$ into the product space $\prod_i Y_i$. For this reason, it tends to be easier to construct embeddings when the target space is a large topological product. Although $\R$ is not itself of this form, it has a subspace of this form\footnote{Actually, there is another famous subspace of $\R$ homeomorphic to a large product available. The set of irrational numbers in $\R$ is homeomorphic to $\N^\N$ (roughly, via continued fraction representations).} - namely $C \cong \mathrm{2}^\N$. From this point of view, looking specifically at embeddings into $C \subset \R$ is quite a natural thing to try. To clarify what follows, we state an amusing and somewhat neglected criterion for a map to be an embedding. The trivial proof is left to the reader.
\begin{lemma}
A continuous function $f:X \to Y$ with $X$ a $T_0$ space is an embedding if and only if $p \notin \overline S$ implies $f(p) \notin \overline{f(S)}$ for all $p \in X$, $S \subset X$.
\end{lemma}
We say an indexed family $f_i:X \to Y_i$, $i \in I$ of continuous functions \emph{separates points from closed sets} if for every $p \in X$ and every neighbourhood $U$ of $p$, there exists an index $i \in I$ and a closed set $Z \subset Y_i$ such that $f_i(p) \notin Z$ and $f_i(x) \in Z$ for all $x \in X \setminus U$. Intuitively, one interprets this condition as saying that $f_i$ is nonzero at $p$ and vanishes outside of $U$ (with $Z$ playing the role of $\{0\}$). We now state a quite general embedding theorem, whose proof is fairly transparent in view of the preceding lemma.
\begin{thm}\label{embed}
If $X$ is a $T_0$ space and $f_i:X \to Y_i$, $i \in I$ is a family of continuous functions which separates points from closed sets, then the function $f : X \to \prod_{i \in I} Y_i$ sending $x$ to $(f_i(x))_{i \in I}$ is an embedding.
\end{thm}
If we want to use the preceding to embed a space $X$ into $\mathrm{2}^\N$, we will need lots of continuous functions $X \to \mathrm{2}$. Since $\mathrm{2}$ is discrete, a clopen subset $Q$ of a topological space $X$ is essentially the same as a continuous function $X \to \mathrm{2}$. One simply considers the function which is $1$ on $Q$ and $0$ on $X \setminus Q$. In fact, as is easily verified, this correspondence is such that a collection $Q_i$ of clopen sets is a basis if and only if the corresponding family of functions $f_i : X \to \mathrm{2}$ separates points from closed sets. We say that $X$ is \emph{zero-dimensional} when there exists a basis of clopen sets. We now characterize the subspaces of $C$.
\begin{propn}\label{cantor embed}
A topological space $X$ can be embedded into $C$ if and only if $X$ is $T_0$, 2nd countable, and zero-dimensional.
\end{propn}
\begin{proof}
Only the ``if'' part of the proposition is nontrivial, so suppose $X$ is $T_0$, 2nd countable and zero-dimensional. Let $\mathscr{B}$ be a countable basis for $X$. Construct a countable collection of clopen sets $\mathscr{Q}$ by choosing, for each pair $U,V \in \mathscr{B}$ where this is possible, a clopen set $Q_{U,V}$ satisfying $U \subset Q_{U,V} \subset V$. It is easy to check $\mathscr{Q}$ is a countable basis of clopen sets. So, there is a countable family $f_1,f_2,\ldots$ of continuous functions $X \to \mathrm{2}$ which separates points from closed sets. By Theorem~\ref{embed}, we obtain an embedding of $X$ into $\mathrm{2}^\N \cong C$ so we are done.
\end{proof}
Note that dropping 2nd countability in the above gives us the spaces embeddable into $\mathrm{2}^I$ for some, possibly uncountable, index set $I$.
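Since the identification $C \cong \mathrm{2}^\N$ does so much work here, it may be worth making it concrete. A standard homeomorphism sends a binary sequence $(b_i)$ to $\sum_i 2b_i/3^{i+1}$, whose base-3 expansion uses only the digits $0$ and $2$. The following Python sketch (helper names are our own) checks this on finite truncations using exact arithmetic:

```python
from fractions import Fraction as F
from itertools import product

def to_cantor(bits):
    """The standard map 2^N -> C, truncated to finitely many
    coordinates: (b_0, b_1, ...) maps to sum of 2*b_i / 3^(i+1)."""
    return sum(F(2 * b, 3 ** (i + 1)) for i, b in enumerate(bits))

def ternary_digits(x, n):
    """First n base-3 digits of x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)
        digits.append(d)
        x -= d
    return digits

# the map is injective on length-6 truncations, and every image point
# has a base-3 expansion with digits 0 and 2 only
points = {bits: to_cantor(bits) for bits in product((0, 1), repeat=6)}
assert len(set(points.values())) == 64
for bits, x in points.items():
    assert ternary_digits(x, 6) == [2 * b for b in bits]
```

This also makes visible why the map is order preserving with respect to the lexicographic order on $\mathrm{2}^\N$.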
\section{The theorems of Brouwer and Sierpi\'{n}ski}
In this section we complete the proofs of the two main results. In both cases, we prove the space under consideration is zero-dimensional and apply Proposition~\ref{cantor embed}. For Brouwer's theorem, this is slightly delicate. It is worth pointing out here that the following statement is generally false: if $x$ and $y$ are in different connected components of a space $X$ then there exists a separation $U,V$ of $X$ with $x \in U$ and $y \in V$.
\begin{ex}
Let $X$ be the subspace of the Euclidean plane consisting of the points $p=\langle 0,0 \rangle$ and $q = \langle 0,1 \rangle$ together with the vertical lines $\{1/n\} \times [0,1]$, $n \in \N$. The connected components of $X$ are the individual lines and the singletons $\{p\}$ and $\{q\}$. However, there is no separation $U,V$ of $X$ with $p \in U$ and $q \in V$.
\end{ex}
\begin{proof}[Proof of Brouwer's theorem]
Let $X$ be a nonempty, totally disconnected, compact metric space with no isolated points. We need only prove that $X$ is zero dimensional since then Proposition~\ref{cantor embed} and Theorem~\ref{ord brouw} prove that $X$ is homeomorphic to $C$. In fact, this all comes down to the following claim, and the rest is a routine compactness argument.
\emph{Claim. }If $x,y \in X$ are distinct, there is a separation $U,V$ of $X$ with $x \in U$, $y \in V$.
Fix $x \in X$ and let $Y$ be the (closed) intersection of all clopen neighbourhoods of $x$. We need to prove that $Y = \{x\}$ and, since $X$ is totally disconnected, it suffices to see $Y$ is connected. To this end, suppose that $A,B$ are disjoint closed sets whose union is $Y$. With no harm done, suppose $x \in A$. We will show that $B = \varnothing$ so that $Y$ is connected. Since $X$ is a normal space, there exist disjoint open sets $U,V$ such that $A \subset U, B \subset V$. It is clear that the clopen sets which \emph{exclude} $x$ form an open cover of the (compact) set $X \setminus (U \cup V)$. It follows that there are finitely many clopen neighbourhoods of $x$ whose intersection $Q$, another clopen neighbourhood of $x$, satisfies $A \cup B \subset Q \subset U \cup V$. But now notice that $Q \cap U = Q \setminus V$ is also a clopen neighbourhood of $x$, so $Y = A \cup B \subset Q \cap U \subset U$, requiring $B = \varnothing$.
\end{proof}
For Sierpi\'{n}ski's theorem, it is more straightforward to check zero-dimensionality, but even after applying Proposition~\ref{cantor embed}, more work needs to be done to get an embedding whose range satisfies the hypothesis of Theorem~\ref{ord sier}.
\begin{proof}[Proof of Sierpi\'{n}ski's theorem]
Let $X = (X,d)$ be a nonempty, countable metric space without isolated points. Let $D \subset [0, \infty)$ equal the set of distances $d(x,y)$ as $x,y$ range over $X$. Since $X$ is countable, $D$ is countable. Therefore, there exist positive real numbers $\epsilon_1,\epsilon_2,\ldots$ in $(0,\infty) \setminus D$ converging to zero. For $x \in X$ and $n \in \N$, let $U(x,n) = \{ y \in X : d(x,y) < \epsilon_n\}$. By design, the complement of $U(x,n)$ is $\{y \in X : d(x,y) > \epsilon_n\}$ so each $U(x,n)$ is clopen. Moreover, the $U(x,n)$ are a basis for $X$ (even a countable one). So, by Proposition~\ref{cantor embed} there exists an embedding $f:X \to C$. We would like to apply Theorem~\ref{ord sier} to $f(X)$, but this is not yet justified since $f(X)$ could equal, say, $E$ and fail to be order isomorphic to $\Q$.
Note, however, that $\overline{ f(X) }\subset C \subset \R$ satisfies the hypotheses of Brouwer's theorem, or even Theorem~\ref{ord brouw}. It follows that $\overline{f(X)}$ is homeomorphic to $C$. Therefore, there is in fact an embedding $g: X \to C$ whose image $g(X)$ is \emph{dense} in $C$. Now recall that $C$ is homeomorphic to $\mathrm{2}^\N$ so that $C$ has a natural topological group structure. We claim that there exists an $x \in C$ which ``translates'' $g(X)$ away from the problematic points of $E$. That is, there exists $x \in C$ such that $(g(X) + x) \cap E = \varnothing$. In fact, whenever $A,B \subset C$ are countable, there must be an $x \in C$ with $(A + x) \cap B = \varnothing$. This is because $(A+x) \cap B \neq \varnothing$ if and only if $x$ is in the countable set $\{b - a : a \in A,\ b \in B\}$, so the set of $x \in C$ that \emph{don't} work is countable. So, by composing $g$ with an appropriate self homeomorphism of $C$, we obtain an embedding $h:X \to C$ such that $h(X)$ is dense in $C$ and $h(X) \cap E = \varnothing$. Now, we claim that every point in $h(X)$ is a 2-sided limit point of $h(X)$ so that Theorem~\ref{ord sier} can be applied. Indeed, any point $c \in C \setminus E$ is a 2-sided limit point of $C$, hence a 2-sided limit point of $h(X)$ (since $h(X)$ is dense in $C$). Therefore, $h(X)$ is (order) homeomorphic to $\Q$. Since $X$ is homeomorphic to $h(X)$, this proves Sierpi\'{n}ski's result.
\end{proof}
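The ``translation trick'' used above is easy to simulate in a finite truncation $\mathrm{2}^n$ of the group $\mathrm{2}^\N$, where addition is coordinatewise XOR. In the sketch below (our own toy setup, with small random sets standing in for the countable sets $A$ and $B$), the translates $x$ that fail are exactly those of the form $a + b$, so almost every $x$ works:

```python
from itertools import product
from random import sample, seed

def xor(u, v):
    # coordinatewise addition in the group 2^n
    return tuple(a ^ b for a, b in zip(u, v))

seed(0)
n = 10
all_pts = list(product((0, 1), repeat=n))
A = set(sample(all_pts, 15))   # small sets standing in for countable A, B
B = set(sample(all_pts, 15))

# x is bad iff (A + x) meets B, i.e. iff x = a + b for some a in A, b in B
bad = {xor(a, b) for a in A for b in B}
good = [x for x in all_pts if not any(xor(a, x) in B for a in A)]
assert len(bad) <= len(A) * len(B)       # few bad translates...
assert len(good) == 2 ** n - len(bad)    # ...and everything else works
x = good[0]
assert all(xor(a, x) not in B for a in A)
```

In the proof itself, $A$ and $B$ are countable while $C$ is uncountable, so the same counting argument leaves uncountably many good translates.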
\section{Modifying hypotheses}
Sierpi\'{n}ski's theorem characterizes $\Q$ by the properties \emph{countable}, \emph{metrizable} and \emph{no isolated points}. If one wishes to expunge any reference to a metric from the theorem, one can replace \emph{metrizable} with \emph{1st countable and regular}. This is possible because, for countable spaces, 1st countable implies 2nd countable so Urysohn's metrization theorem gives the nontrivial half of the equivalence. The following example shows that \emph{countable}, \emph{regular} and \emph{no isolated points} do not suffice. Not even if \emph{regular} is improved to \emph{zero-dimensional}.
\begin{ex}
Let $X = \Q[x]$ be the space of polynomial functions from $\R$ to $\R$ with rational coefficients. Give $X$ the topology of pointwise convergence. Clearly $X$ is countable and has no isolated points. Also, $X$ is zero dimensional. To see this, suppose that $U$ is a neighbourhood of $p \in X$. Without loss of generality, there is an $\epsilon >0$ and $a_1,\ldots,a_n \in \R$ such that $U = \{ q \in X : |q(a_i) - p(a_i)| < \epsilon \text{ for } i=1,\ldots,n\}$. For each $i$, $S_i \colonequals \{q(a_i) : q \in X\}$ is countable, so there exist $r_i,s_i \in \R \setminus S_i$ with $p(a_i) - \epsilon < r_i < p(a_i) < s_i < p(a_i) + \epsilon$. Then, the set $Q \colonequals \{q \in X : r_i < q(a_i) < s_i \text{ for } i=1,\ldots,n\}$ is clopen and satisfies $p \in Q \subset U$, so $X$ is zero dimensional. However, $X$ does not have strong enough countability properties to be metrizable and so is not homeomorphic to $\Q$. It is not difficult to see $X$ is not 1st countable. In fact, more is true: sequences do not suffice to detect limit points in $X$. Let $Y$ be the set of $q \in X$ with $|q(x)| \leq 1$ on $[0,1]$ and with $\int_0^1 q(x) \ dx \geq 1/2$. It is clear that the zero polynomial is in the closure of $Y$. But the zero polynomial cannot be the limit of a sequence $(q_i)$ in $Y$: if such $q_i$ converge pointwise to zero, then Lebesgue's dominated convergence theorem implies that the sequence of integrals $\int_0^1 q_i(x) \ dx$ converges to zero as well, which is impossible by design.
\end{ex}
The following example shows that \emph{countable}, \emph{2nd countable} and \emph{no isolated points} do not suffice. Not even if we assume \emph{totally disconnected} and \emph{Hausdorff} in addition.
\begin{ex}
As a set, take $X$ to be the union of $\Q$ with two idealized points $p_0$ and $p_1$. For $n=0,1,2,\ldots$, let $I_n = (n,n+1)$. We topologize $X$ by taking $U \subset X$ open if and only if all of the following hold.
\begin{itemize}
\item $U \cap \Q$ is open in the standard topology on $\Q$.
\item If $p_0 \in U$, then $U$ contains all but finitely many of $I_0,I_2,I_4,\ldots$.
\item If $p_1 \in U$, then $U$ contains all but finitely many of $I_1,I_3,I_5,\ldots$.
\end{itemize}
It is easy to see that $X$ is 2nd countable, Hausdorff, totally disconnected and has no isolated points. However, $X$ is not regular. For example, $\Z \subset X$ is closed and $p_0 \notin \Z$, but $\Z$ and $p_0$ cannot have disjoint neighbourhoods.
\end{ex}
Brouwer's theorem characterizes $C$ by the properties \emph{totally disconnected}, \emph{compact}, \emph{metrizable} and \emph{no isolated points}. Here, \emph{metrizable} can be replaced with \emph{Hausdorff and 2nd countable} (since these conditions are equivalent in the presence of compactness) if a ``metric free'' characterization is desired. Obvious examples show no one of these conditions can be dropped. Finally, in the proof of Brouwer's theorem, compactness was used in a key way to deduce zero-dimensionality from total disconnectedness. The compactness assumption is crucial. For example, Cantor's leaky tent is a (noncompact) subspace of the Euclidean plane which is totally disconnected with no isolated points, but not zero-dimensional - nor even totally separated \cite{ctrexamples}.
| {
"timestamp": "2012-10-04T02:02:16",
"yymm": "1210",
"arxiv_id": "1210.1008",
"language": "en",
"url": "https://arxiv.org/abs/1210.1008",
"abstract": "A 1910 theorem of Brouwer characterizes the Cantor set as the unique totally disconnected, compact metric space without isolated points. A 1920 theorem of Sierpinski characterizes the rationals as the unique countable metric space without isolated points. The purpose of this exposition is to give an accessible overview of this celebrated pair of uniqueness results. It is illuminating to treat the problems simultaneously because of commonalities in their proofs. Some of the more counterintuitive implications of these results are explored through examples. Additionally, near-examples are provided which thwart various attempts to relax hypotheses.",
"subjects": "General Topology (math.GN)",
"title": "Two Topological Uniqueness Theorems for Spaces of Real Numbers"
} |
https://arxiv.org/abs/1004.1379 | Index coding via linear programming | Index Coding has received considerable attention recently motivated in part by real-world applications and in part by its connection to Network Coding. The basic setting of Index Coding encodes the problem input as an undirected graph and the fundamental parameter is the broadcast rate $\beta$, the average communication cost per bit for sufficiently long messages (i.e. the non-linear vector capacity). Recent nontrivial bounds on $\beta$ were derived from the study of other Index Coding capacities (e.g. the scalar capacity $\beta_1$) by Bar-Yossef et al (2006), Lubetzky and Stav (2007) and Alon et al (2008). However, these indirect bounds shed little light on the behavior of $\beta$: there was no known polynomial-time algorithm for approximating $\beta$ in a general network to within a nontrivial (i.e. $o(n)$) factor, and the exact value of $\beta$ remained unknown for any graph where Index Coding is nontrivial. Our main contribution is a direct information-theoretic analysis of the broadcast rate $\beta$ using linear programs, in contrast to previous approaches that compared $\beta$ with graph-theoretic parameters. This allows us to resolve the aforementioned two open questions. We provide a polynomial-time algorithm with a nontrivial approximation ratio for computing $\beta$ in a general network along with a polynomial-time decision procedure for recognizing instances with $\beta=2$. In addition, we pinpoint $\beta$ precisely for various classes of graphs (e.g. for various Cayley graphs of cyclic groups) thereby simultaneously improving the previously known upper and lower bounds for these graphs. Via this approach we construct graphs where the difference between $\beta$ and its trivial lower bound is linear in the number of vertices and ones where $\beta$ is uniformly bounded while its upper bound derived from the naive encoding scheme is polynomially worse. 
| \section{Introduction}\svlabel{sec:intro}
In the Index Coding problem a server holds a set of messages that it wishes to broadcast over a noiseless channel to a set of receivers. Each receiver is interested in one of the messages and has side-information comprising some subset of the other messages. Given the side-information map as an input, the objective is to devise an optimal encoding scheme for the messages (e.g., one minimizing the broadcast length) that allows all the receivers to retrieve their required information.
This notion of source coding that optimizes the encoding scheme given the side-information map of the clients was introduced by Birk and Kol~\cite{BK} and further developed by Bar-Yossef \emph{et al.} in~\cite{BBJK}. Motivating applications include satellite transmission of large files (e.g.\ video on demand), where a slow uplink may be used to inform the server of the side-information map, namely the identities of the files currently stored at each client due to past transmissions. The goal of the server is then to issue a shortest possible broadcast that allows every client to decode its target file while minimizing the overall latency. See~\cites{BK,BBJK,CS} and the references therein for further applications of the model and an account of various heuristic/rigorous Index Coding protocols.
The basic setting of the problem (see~\cite{AHLSW}) is formalized
as follows:
the server holds $n$ messages $x_1,\ldots,x_n \in \Sigma$ where
$|\Sigma|> 1$, and there are $m$ receivers
$R_1,\ldots,R_m$. Receiver $R_j$ is interested in one message,
denoted by $x_{f(j)}$, and knows some subset $N(j)$ of the other
messages.
A solution of the problem must specify a finite alphabet $\Sigma_P$
to be used by the server, and an encoding scheme
${\mathcal{E}}: \Sigma^n \to \Sigma_P$ such that,
for any possible values of $x_1,\ldots,x_n$,
every receiver $R_j$ is able to decode the message
$x_{f(j)}$ from the value of ${\mathcal{E}}(x_1,\ldots,x_n)$ together
with that receiver's side-information. The minimum
encoding length $\ell = \left\lceil \log_2 |\Sigma_P|\right\rceil$ for
messages that are $t$ bits long (i.e.~$|\Sigma|=2^t$)
is denoted by $\beta_t(G)$, where $G$ refers to the data
specifying the communication requirements, i.e.~the functions
$f(j)$ and $N(j)$.
As noted in \cite{LuSt}, due to the overhead associated
with relaying the side-information map to the server, the
main focus is on the case $t\gg1$, namely on the
following \emph{broadcast rate}.
\begin{equation}
\svlabel{eq-beta-limit}
\beta(G) \stackrel{\scriptscriptstyle\triangle}{=} \lim_{t\to\infty}\frac{\beta_t(G)}t = \inf_t \frac{\beta_t(G)}{t}
\end{equation}
(The limit exists by sub-additivity.) This is interpreted as the average asymptotic number of broadcast bits needed per bit of input, that is, the asymptotic broadcast rate for long messages.
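The existence of the limit, and the fact that it equals the infimum, is an instance of Fekete's lemma: concatenating optimal encodings for $s$-bit and $t$-bit messages shows $\beta_{s+t} \leq \beta_s + \beta_t$. A toy numeric illustration (the sequence $a(t)$ below is our own invented subadditive stand-in for $t \mapsto \beta_t$):

```python
from math import ceil

# a(t) = ceil(5t/2) + 1 is subadditive: a(s + t) <= a(s) + a(t)
def a(t):
    return ceil(5 * t / 2) + 1

assert all(a(s + t) <= a(s) + a(t)
           for s in range(1, 60) for t in range(1, 60))

# Fekete's lemma: a(t)/t converges to inf_t a(t)/t (here, 5/2)
ratios = [a(t) / t for t in range(1, 2001)]
assert min(ratios) >= 5 / 2
assert abs(ratios[-1] - 5 / 2) < 1e-3
```

The same argument applies verbatim to any subadditive sequence, which is all that the definition of $\beta(G)$ uses.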
In Network Coding terms, $\beta$ is the \emph{vector} capacity whereas $\beta_1$ is a \emph{scalar} capacity.
An important special case of the problem arises when there is exactly
one receiver for each message, i.e.~$m=n$ and $f(j)=j$
for all $j$. In this case, the side-information map $N(j)$ can equivalently
be described in terms of the binary relation consisting of pairs
$(i,j)$ such that $x_j \in N(i)$. These pairs can be thought of as
the edges of a directed graph on the vertex set $[n]$ or, in case the
relation is symmetric, as the edges of an undirected graph.
This special case of the problem (which we will hereafter identify
by stating that \emph{$G$ is a graph}) corresponds to the original
Index Coding problem introduced by Birk and Kol~\cite{BK}, and
has been extensively studied due to its rich connections with
graph theory and Ramsey theory. These connections stem from
simple relations between broadcast rates and other graph-theoretic
parameters.
Letting $\alpha(G),\overline{\chi}(G)$ denote the independence and clique-cover numbers of $G$, respectively, one has
\begin{equation}
\svlabel{eq-trivial-ineqs}
\alpha(G) \leq \beta(G) \leq \beta_1(G)\leq \overline{\chi}(G)\,.
\end{equation}
The first inequality above is due to an independent set being identified with a set of receivers with no mutual information, whereas the last one, due to~\cites{BK,BBJK}, is obtained by broadcasting the bitwise XOR of the messages in each clique of the optimal clique-cover of $G$.
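The clique-cover scheme behind the bound $\beta_1 \leq \overline{\chi}$ is constructive and easy to simulate. In the Python sketch below (a toy instance of our own choosing), the server transmits one XOR symbol per clique, and each receiver decodes its message from its clique's symbol together with its side information, which comprises the rest of its clique:

```python
from functools import reduce
from operator import xor
import random

# Toy instance: a graph G on 6 vertices with clique cover
# {0,1,2}, {3,4}, {5}; receiver i knows the rest of its clique.
cliques = [[0, 1, 2], [3, 4], [5]]
random.seed(1)
x = [random.randrange(256) for _ in range(6)]   # 8-bit messages

# Server: one broadcast symbol per clique -- the bitwise XOR
# of that clique's messages.
broadcast = [reduce(xor, (x[v] for v in c)) for c in cliques]

# Each receiver XORs its side information back out of its
# clique's symbol to recover its own message.
for ci, c in enumerate(cliques):
    for i in c:
        side_info = [x[j] for j in c if j != i]
        decoded = reduce(xor, side_info, broadcast[ci])
        assert decoded == x[i]

# 3 symbols instead of 6: the broadcast length is the clique
# cover number, not the number of messages.
assert len(broadcast) == 3
```

The independence-number lower bound is visible here too: the receivers of an independent set share no side information, so each forces a fresh symbol.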
\svapdx{ \xhdr{History of the problem}}
{\subsection{History of the problem}}\svlabel{subsec:related}
The framework of graph Index Coding and its scalar capacity $\beta_1$
were introduced in~\cite{BK}, where Reed-Solomon based protocols
hinging on a greedy clique-cover (related to the bound $\beta_1 \leq
\overline{\chi}$) were proposed and empirically analyzed.
In a breakthrough paper~\cite{BBJK}, Bar-Yossef \emph{et al.}\ proposed
a new class
of linear index codes based on a matrix rank minimization problem. The
solution to this problem, denoted by $\operatorname{minrk}_2(G)$,
was shown to achieve the optimal linear scalar capacity over $GF(2)$ and in particular to be superior to the clique-cover method, i.e.\ $\beta_1 \leq \operatorname{minrk}_2 \leq \overline{\chi}$.
The parameter $\operatorname{minrk}_2$ was extended to general fields in~\cite{LuSt},
where arguments from Ramsey Theory
showed that for any fixed $\epsilon>0$ there is a family of graphs on $n$ vertices where $\beta_1 \leq n^\epsilon$ while $\operatorname{minrk}_2 \geq n^{1-\epsilon}$.
The first proof of a separation $\beta < \beta_1$ for graphs was presented by Alon \emph{et al.}\ in~\cite{AHLSW};
the proof introduces a new capacity parameter $\beta^*$ such that $\beta \leq \beta^* \leq \beta_1$ and shows that the second inequality can be strict using a graph-theoretic characterization of $\beta^*$.
In addition, the paper studied hypergraph Index Coding (i.e.~the general broadcasting with side information problem, as defined above), for which several hard instances were constructed --- ones where $\beta=2$ while $\beta^*$ is unbounded and others where $\beta^*<3$ while $\beta_1$ is unbounded.
The first proof of a separation $\alpha < \beta$ for graphs is presented in a companion paper~\cite{BKL11a}; the proof makes use of a new technique for bounding $\beta$ from below using a linear program whose constraints express information inequalities. The paper then uses lexicographic products to amplify this separation, yielding a sequence of graphs in which the ratio $\beta/\alpha$ tends to infinity. The same technique of combining linear programs with lexicographic products also leads to an unbounded multiplicative separation between non-linear and vector-linear Index Coding in hypergraphs.
As is clear from the foregoing discussion,
the prior work on Index Coding has been highly successful in
bounding the broadcast rate above and below by various parameters
(all of which are, unfortunately, NP-hard to compute) and in coming
up with examples that exhibit separations between these parameters.
However it has been less successful at providing general techniques
that allow the determination (or even the approximation)
of the broadcast rate $\beta$ for large classes of problem
instances. The following two facts starkly illustrate this
limitation. First, the exact
value of $\beta(G)$ remained unknown for \emph{every}
graph $G$ except those for which trivial lower and upper
bounds $\alpha(G), \overline{\chi}(G)$ coincide. Second, it was not
known whether the broadcast rate $\beta$ could be approximated
by a polynomial-time algorithm whose approximation ratio
improves the trivial factor $n$ (achieved by simply broadcasting
all $n$ messages) by more than a constant
factor.\footnote{When $G$ is a graph, it is not hard to derive a
polynomial-time $o(n)$-approximation from~\sveqref{eq-trivial-ineqs}.}
In this paper, we extend and apply the linear programming
technique recently introduced in~\cite{BKL11a} to obtain
a number of new results on Index Coding, including
resolving both of the open questions stated in the preceding
paragraph.
The following two sections discuss our contributions, first to the
general problem of broadcasting with side information, and then to
the case when $G$ is a graph.
\medskip
\svapdx{ \xhdr{New techniques for bounding and approximating
the broadcast rate}}
{\subsection{New techniques for bounding and approximating
the broadcast rate}}
\svlabel{subsec:new-techniques}
The technical tool at the heart of our paper is a pair of linear programs
whose values bound $\beta$ above and below. The linear program
that supplies the lower bound was introduced in~\cite{BKL11a}
and discussed above;
the one that supplies the upper bound is strikingly similar,
and in fact the two linear programs fit into a hierarchy
defined by progressively strengthening the constraint set
(although the relevance of the middle levels of this hierarchy
to Index Coding, if any, is unclear).
\begin{maintheorem}\svlabel{thm-hierarchy}
Let $G$ be a broadcasting with side information problem,
having $n$ messages and $m$ receivers.
There is an explicit sequence of $n$ information-theoretic
linear programs, each one a relaxation of its successors, whose
respective solutions $b_1 \leq b_2 \leq \ldots \leq b_n$ are such
that:
\begin{compactenum}[(i)]
\item \svlabel{item-b2-bn}
The broadcast rate $\beta$ satisfies $b_2 \leq \beta \leq b_n$,
and both of the inequalities can be strict.
\item \svlabel{item-b1-bn}
When $G$ is a graph, the extreme LP solutions $b_1$
and $b_n$ coincide with the independence number $\alpha(G)$ and
the fractional clique-cover number $\overline{\chi}_f(G)$
respectively.
\end{compactenum}
\end{maintheorem}
As a first application of this tool, we obtain the following
pair of algorithmic results.
\begin{maintheorem}\svlabel{thm-hypergraph-approx}
Let $G$ be a broadcasting with side information problem,
having $n$ messages and $m$ receivers.
Then there is a polynomial time algorithm which computes a
parameter $\tau=\tau(G)$ such that
$1 \leq \frac{\tau(G)}{\beta(G)} \leq O\big(n \frac {\log\log n}{\log n}\big)$.
There is also a polynomial time algorithm to decide
whether $\beta(G)=2$.
\end{maintheorem}
\svapdx{}{In fact, the $O \big( n \frac{\log \log n}{\log n} \big)$
approximation holds in greater generality for
the \emph{weighted} case, where different messages may
have different rates (in the motivating applications this can correspond e.g.\ to a server that holds files of varying size).
The generalization is explained in
Section~\ref{sec:weighted-hypergraph}.
}
\svapdx{ \xhdr{Consequences for graphs}}
{\subsection{Consequences for graphs}}
\svlabel{sec:graph-consequences}
In
\svapdx{Appendix~\ref{sec:beta-of-graphs}}
{Section~\ref{sec:beta-of-graphs}}
we demonstrate the use of
Theorem~\svref{thm-hierarchy} to derive the exact value of $\beta(G)$
for various families of graphs by analyzing the LP solution $b_2$.
As mentioned above, the exact value of $\beta(G)$ was previously
unknown for any graph except when the trivial lower and upper
bounds --- $\alpha(G)$ and $\overline{\chi}(G)$ --- coincide, as happens
for instance when $G$ is a perfect graph.
Using the stronger lower and upper bounds $b_2$ and $b_n$,
we obtain
the exact value of $\beta(G)$ for all cycles and cycle-complements:
$\beta(C_n) = n/2$ and
$\beta(\overline{C_n}) = n / \lfloor \frac{n}{2} \rfloor$.
In particular this settles the
Index Coding problem for the $5$-cycle investigated
in~\cites{BBJK,BKL11a,AHLSW}, closing the gap between
$b_2(C_5) = 2.5$ and $\beta^*(C_5) = 5 - \log_2 5 \approx 2.68$.
These results also provide simple constructions of networks with gaps between
vector and scalar Network Coding capacities.
We also use Theorem~\svref{thm-hierarchy} to prove separation between broadcast rates and other graph parameters.
Our results, summarized in Table~\svref{tab-comparison},
improve upon several of the best previously known
separations.
Prior to this work there were no known graphs
$G$ where $\beta_1(G) - \beta(G) \geq 1$. (For the more
general setting of broadcasting with side information,
multiplicative gaps that were logarithmic in the number
of messages were established in~\cite{AHLSW}.)
In fact, merely showing that the 5-cycle satisfies
$2 \leq \beta < \beta_1 = 3$ required the involved
analysis of an auxiliary capacity $\beta^*$, discussed earlier in
Section~\svref{subsec:related}.
With the help of our linear programming bounds
(Theorem~\svref{thm-hierarchy}) we supply
in Section~\svref{subsec:cor-of-thm-1}
a family of graphs on $n$ vertices where $\beta_1 - \beta$
is linear in $n$, namely $\beta = n/2$ whereas
$\beta_1=(1-\frac15\log_2 5-o(1))n\approx 0.54 n$.
\begin{table}[t]
\centering
\begin{tabular}{cccc}
\toprule%
Capacities & Best previous & New separation & Appears in\\
compared & bounds in graphs & results & Section\\
\midrule
$\beta-\alpha$ & $\Theta \left(n^{0.56}\right)$
& $\Theta(n)$ & \svref{subsec:cor-of-thm-1}\\
\midrule[0.25pt]
$\beta$ vs.\ $\overline{\chi}_f$
& $\begin{array}{c}
\beta \leq n^{o(1)}\\
\overline{\chi}_f \geq n^{1-o(1)}
\end{array}$
& $\begin{array}{c}
\beta = 3\\
\overline{\chi}_f =\Omega( n^{1/4})
\end{array}$ & \svapdx{\svref{sec:graph-stub}}{\ref{sec:sep-beta-alpha}}\\
\midrule[0.25pt]
$\beta_1-\beta$ & $\approx 0.32$ & $\Theta(n)$ & \svref{subsec:cor-of-thm-1}\\
$\beta_1/\beta$ & $\approx1.32$ & $1.5-o(1)$ & \svref{subsec:cor-of-thm-1}\\
\midrule[0.25pt]
$\beta^*-\beta$ & --- & $\Theta(n)$ & \svref{subsec:cor-of-thm-1}\\
\bottomrule
\end{tabular}
\caption{New separation results for Index Coding capacities in $n$-vertex graphs} \svlabel{tab-comparison}
\end{table}
We turn now to the relation between $\beta(G)$ and $\overline{\chi}_f(G)$,
the upper bound provided by our LP hierarchy. As mentioned earlier,
Lubetzky and Stav~\cite{LuSt} supplied, for every $\varepsilon>0$, a
family of graphs on $n$ vertices satisfying
$\beta(G) \leq \beta_1(G) < n^\varepsilon$
while $\overline{\chi}_f(G) > n^{1-\varepsilon}$,
thus implying that $\overline{\chi}_f(G)$ is not
bounded above by any polynomial function
of $\beta(G)$. We strengthen this result
by showing that $\overline{\chi}_f(G)$ is not bounded
above by \emph{any} function of $\beta(G)$.
To do so, we use
a class of projective Hadamard graphs due to Erd\H{o}s and R\'enyi
to prove the following theorem in
\svapdx{Section~\svref{sec:graph-stub}}{Section~\svref{sec:sep-beta-alpha}}.
\begin{maintheorem}\svlabel{thm-beta-chif-gap}
There exists an explicit family of graphs $G$ on $n$ vertices such that $\beta(G) = 3$ whereas the Index Coding encoding schemes based on clique-covers cost at least $\overline{\chi}_f(G) = \Theta(n^{1/4})$ bits.
\end{maintheorem}
Recall the natural heuristic approach to Index Coding: greedily cover
the side-information graph $G$ by $r \geq \overline{\chi}(G)$ cliques and send
the XORs of messages per clique for an average communication cost of $r$.
A similar protocol based on Reed-Solomon Erasure codes was proposed by~\cite{BK} and was empirically shown to be effective on large random graphs. Theorem~\svref{thm-beta-chif-gap} thus presents a hard instance for this protocol,
namely graphs where $\beta=O(1)$ whereas $\overline{\chi}(G)$ is polynomially large.
\section{Linear programs bounding the broadcast rate}\svlabel{sec:hierarchy}
In this section we present
linear programs that bound
the broadcast rate $\beta$ below and above,
using an information-theoretic analysis.
We demonstrate this technique
by determining $\beta(C_5)$ precisely;
later, in \svapdx{Appendix~\ref{sec:beta-of-graphs}}
{Section~\ref{sec:beta-of-graphs}},
we determine $\beta$ precisely for various infinite
families of graphs.
\subsection{The LP hierarchy}\svlabel{subsec:def-hierarchy}
Numerous results in Network Coding theory
bound the Network
Coding rate (e.g.,~\cites{AHJKL,DFZ1,HKL,HKNW,SYC})
by combining entropy inequalities of two types. The first is
purely information-theoretic and holds for any set of random
variables; the second is derived from the graph structure. An
important example of the second type of inequality, that we
refer to as ``decoding'', enforces the following: if a set
of edges $A$ cuts off a set of edges $B$ from all the sources, then
any information on edges in $B$ is determined by information on edges
in $A$. We translate this idea to the setting of Index Coding in
order to develop stronger lower bounds for the broadcast rate.
\begin{definition}
Given a broadcasting with side information problem and subsets of
messages $A,B$, we say that $A$ \emph{decodes} $B$ (denoted $A
\rightsquigarrow B$) if $A \subseteq B$ and for every message $x \in B
\setminus A$ there is a receiver $R_j$ who is interested in $x$
and knows only messages in $A$ (i.e.\
$x_{f(j)} = x$ and $N(j) \subseteq A$).
\end{definition}
\begin{remark}
For graphs, $A \rightsquigarrow B$ if $A \subseteq B$ and for every $v\in
B\setminus A$ all the neighbors of $v$ are in $A$.
\end{remark}
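For graphs the decoding relation is mechanical to verify. The following sketch (our illustration; the \texttt{decodes} helper and the $C_5$ adjacency map are not part of the paper) checks the relation used repeatedly below, namely that the complement of an independent set decodes the whole vertex set:

```python
# Decoding relation for graphs: A ~> B iff A is a subset of B and every
# vertex of B \ A has all of its neighbours inside A.
def decodes(adj, A, B):
    A, B = set(A), set(B)
    return A <= B and all(adj[v] <= A for v in B - A)

# The 5-cycle C_5 on vertices 0..4.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
V = set(adj)
I = {0, 2}                          # an independent set of C_5

print(decodes(adj, V - I, V))       # True: the complement of I decodes V
print(decodes(adj, V - {0, 1}, V))  # False: vertex 0 keeps a neighbour outside A
```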
If we consider the Index Coding problem on $G$ and a valid solution
${\mathcal{E}}$, then the relation $A \rightsquigarrow B$ implies
$H(A,{\mathcal{E}}(x_1,\ldots,x_n)) \ge H(B,{\mathcal{E}}(x_1,\ldots,x_n))$,
since for each
message in $B \setminus A$ there is a receiver who must be able to
determine the message from only the messages in $A$ and
the public channel ${\mathcal{E}}(x_1,\ldots,x_n)$.
(Here and in what follows we denote by $H(X,Y)$
the joint entropy of the random variables $X,Y$.)
Combining these decoding inequalities with purely
information-theoretic inequalities, one can prove
lower bounds on the entropy of the public channel,
a process formalized by a linear program (that we
denote by $\mathcal{B}_2$) whose solution $b_2$
constitutes a lower bound on $\beta$.
(See~\cites{BKL11a,Yeung} for more on information-theoretic LPs.)
Interestingly, $\mathcal{B}_2$ fits into a hierarchy of $n$
increasing linear programs such that the last LP in the
hierarchy gives an \emph{upper} bound on $\beta$.
\begin{definition}
For a broadcasting with side information problem on a set $V$ of $n$ messages, the \emph{$\beta$-bounding LP hierarchy} is the sequence of LPs, denoted by $\mathcal{B}_1, \mathcal{B}_2, \mathcal{B}_3, \ldots , \mathcal{B}_{n}$ with solutions $b_1, b_2, \ldots, b_{n}$, given by:
\[
\begin{array}{llc}
\hline
\multicolumn{3}{c}{\mbox{\emph{$k$-th level of the LP hierarchy for the broadcast rate}}}\\
\cline{1-3}
\mbox{minimize $X(\emptyset)$} \\
\mbox{subject to:} \\
\qquad X(V) \ge n & & \mbox{(\textit{initialize})}\\
\qquad X(\emptyset) \ge 0 & & \mbox{(\textit{non-negativity})}\\
\qquad X(S) + |T \setminus S| \geq X(T) & \forall S \subseteq T \subseteq V &\mbox{(\textit{slope})}\\
\qquad X(T) \ge X(S) & \forall S \subseteq T \subseteq V &\mbox{(\textit{monotonicity})} \\
\qquad X(A) \ge X(B) & \forall A,B \subseteq V \,:\, A \rightsquigarrow B &\mbox{(\textit{decode})} \\
\qquad \sum_{T \subseteq R} (-1)^{|R \setminus T|} X( T \cup Z) \le 0 &
\!\!\!\begin{array}
{l}
\forall R \subseteq V \,:\, 2 \le |R| \le k \\
\forall Z \subseteq V \,:\, Z \cap R = \emptyset
\end{array} & \mbox{(\textit{$|R|$-th order submodularity})}\\
\hline
\end{array}
\]
\svlabel{def:bbLP}
\end{definition}
\begin{remark}
The above defined \emph{$2$nd-order submodularity} inequalities are equivalent to the classical submodularity inequalities, whereby $X(S) + X(T) \ge X(S \cap T) + X(S \cup T)$ for all $S,T$.
\end{remark}
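This equivalence can be confirmed by brute force. A small sketch (ours, not from the paper) compares the two constraint families on random set functions over a 4-element ground set:

```python
import random
from itertools import combinations

random.seed(0)
V = (0, 1, 2, 3)
subsets = [frozenset(c) for k in range(len(V) + 1) for c in combinations(V, k)]

def second_order_ok(X):
    # sum_{T subseteq R} (-1)^{|R \ T|} X(T u Z) <= 0 for |R| = 2, Z disjoint from R.
    return all(X[Z | {a, b}] - X[Z | {a}] - X[Z | {b}] + X[Z] <= 0
               for a, b in combinations(V, 2)
               for Z in subsets if a not in Z and b not in Z)

def classical_ok(X):
    return all(X[S] + X[T] >= X[S & T] + X[S | T]
               for S in subsets for T in subsets)

# The two constraint families accept exactly the same set functions.
for _ in range(300):
    X = {S: random.randint(0, 6) for S in subsets}
    assert second_order_ok(X) == classical_ok(X)
print("equivalent on all sampled set functions")
```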
Theorem~\svref{thm-hierarchy} traps $\beta$ in the solution sequence of
the above-defined hierarchy and characterizes its extreme values for
graphs. The proofs of these results appear in
\svapdx{Appendix~\ref{sec:hierarchy-proof}}
{Section~\ref{sec:hierarchy-proof}},
and in what follows we\svapdx{}{ first} outline the arguments therein and the
intuition behind them.
As mentioned above, the parameter $b_2$ is the entropy-based lower bound via
Shannon inequalities that is commonly used in the Network Coding
literature. To see that indeed $\beta\geq b_2$ we
interpret a solution to the broadcasting problem as
a feasible primal solution to $\mathcal{B}_2$ via the
assignment $X(A) = H(A \cup {\mathcal{E}}(x_1,\ldots,x_n))$.
The proof that $\alpha(G) = b_1(G)$ for graphs is similarly based on
constructing a feasible primal solution to $\mathcal{B}_1$,
this time via the assignment
$X(A) = |A| + \max \{|I| \,:\, \mbox{$I$ is an independent set
disjoint from $A$}\}$. (The existence of this primal
solution justifies the inequality $b_1 \leq \alpha$; the
reverse inequality is an easy consequence of the decoding,
initialization, and slope constraints.)
To establish that $\beta(G) \leq b_n(G)$ when $G$ is a graph
we\svapdx{}{ will} show that $b_n(G) = \overline{\chi}_f(G)$, the fractional
clique-cover number of $G$,
while $\overline{\chi}_f(G)$ is an upper bound on
$\beta$.
For a general broadcasting network $G$
we\svapdx{}{ will} follow the same
approach via
an analog of $\overline{\chi}_f$ for hypergraphs.
It turns out that there are two natural generalizations
of cliques and clique-covers
in the context of broadcasting with side information.
\begin{definition} \svlabel{def:hyperclique}
A \emph{weak hyperclique} of a broadcasting problem
is a set of receivers ${\mathcal{J}}$ such that
for every pair of distinct elements $R_i,R_j \in {\mathcal{J}}$,
$f(i)$ belongs to $N(j)$.
A \emph{strong hyperclique} is
a subset of messages $T \subseteq V$ such that for any
receiver $R_j$ that desires $x_{f(j)} \in T$ we have that
$T \subseteq N(j) \cup \{f(j)\}$.
A \emph{weak fractional hyperclique-cover} is
a function that assigns a non-negative weight to each
weak hyperclique, such that for every receiver $R_j$,
the total weight assigned to weak hypercliques
containing $R_j$ is at least 1. A \emph{strong
fractional hyperclique-cover} is defined the
same way, except that the weights are assigned
to strong hypercliques and the coverage requirement
is applied to messages rather than receivers.
In both cases, the \emph{size} of the hyperclique-cover
is defined to be the sum of all weights.
\end{definition}
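When the instance is a graph (receiver $j$ wants vertex $j$ and knows exactly its neighbours), the strong hypercliques are exactly the cliques of the graph. A brute-force sketch (our illustration) verifies this on $C_5$:

```python
from itertools import combinations

adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}  # C_5
V = sorted(adj)

def is_strong_hyperclique(T):
    # The receiver for vertex v wants v and knows exactly N(v); the definition
    # then asks that T be contained in N(v) u {v} for every v in T.
    return all(set(T) <= adj[v] | {v} for v in T)

def is_clique(T):
    return all(u in adj[v] for u, v in combinations(T, 2))

subsets = [T for k in range(1, 6) for T in combinations(V, k)]
assert all(is_strong_hyperclique(T) == is_clique(T) for T in subsets)
print(sum(is_strong_hyperclique(T) for T in subsets))  # 10: 5 vertices + 5 edges
```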
Observe that if $T$ is any set of messages and ${\mathcal{J}}$ is the set
of all receivers desiring a message in $T$, then $T$ is a strong
hyperclique if and only if ${\mathcal{J}}$ is a weak hyperclique.
However, it is not the
case that every weak hyperclique can be obtained from a strong
hyperclique $T$ in this way.
Observe also that if ${\mathcal{J}}$ is a
weak hyperclique and each of the
messages $x_{f(j)}\, (R_j \in {\mathcal{J}})$
is a single scalar value in some field,
then broadcasting the sum of those values
provides sufficient information for each
$R_j \in {\mathcal{J}}$ to decode $x_{f(j)}$.
This provides an indication (though not
a proof) that $\beta$ is bounded above by
the weak fractional hyperclique cover number.
The proof of Theorem~\svref{thm-hierarchy}(\svref{item-b2-bn})
in fact
identifies $b_n$ as being equal to the \emph{strong} fractional
hyperclique-cover number, which is obviously greater than or
equal to its weak counterpart. The role of the
$n^{\mathrm{th}}$-order submodularity constraints
is that they force the function $F(S) \stackrel{\Delta}{=}
X(\overline{S}) - |\overline{S}|$ to be a
\emph{weighted coverage function}. Using this representation
of $F$ it is not hard to extract a fractional set cover of
$V$, and the sets in this covering are shown to be
strong hypercliques using the decoding constraints.
Finally, \svapdx{the proof}{we will show}
that one can have $\beta > b_2$ \svapdx{uses}{using} a
construction based on the V\'amos matroid following the approach used in
\cite{DFZ2} to separate the corresponding Network Coding
parameters. As for showing that one can have $\beta < b_n$,
we\svapdx{}{ will} in fact show that one can have $\beta < b_3 \leq b_n$.
We believe that the intermediate parameters $b_3,\ldots,b_{n-1}$ bear no relation to $\beta$; e.g.\ as noted above, there is a broadcasting instance for which $\beta < b_3$, so $b_3$ is not a lower bound on $\beta$.
\svapdx{
}{
\subsection{Proof of Theorem~\svref{thm-hierarchy}}\svlabel{sec:hierarchy-proof}
In this section we prove Theorem~\svref{thm-hierarchy} via a series of claims.
The main inequalities involving the broadcast rate $\beta$ are shown in \S\svref{subsec:ap-hierarchy} whereas the constructions demonstrating that these inequalities can be strict appear in \S\svref{subsec:strict}.
\subsubsection{Bounding the broadcast rate via the LP hierarchy}\svlabel{subsec:ap-hierarchy}
We begin by familiarizing ourselves with the framework of the LP-hierarchy through proving
the following straightforward claim regarding the LP-solution $b_1$ and the graph independence number.
\begin{claim}\svlabel{clm-b1-eq-alpha}
If $G$ is a graph then the LP-solution $b_1$ satisfies $b_1(G) = \alpha(G)$.
\end{claim}
\begin{proof}
In order to show that $b_1(G) \ge \alpha(G)$, let $I$ be an
independent set of maximal size in $G$. Now, $V\setminus I
\rightsquigarrow V$ implies that $X(V\setminus I) \ge X(V) \ge n$ is
true for any feasible solution. Additionally, $X(V\setminus I) \le
X(\emptyset) + |V\setminus I|$. Combining these together, we get
$X(\emptyset) \ge |V| - |V\setminus I| = |I| = \alpha(G)$.
To prove $b_1(G) \le \alpha(G)$ we present a feasible solution to the
primal attaining the value $\alpha(G)$,
\begin{equation}
\svlabel{eq-alpha-feasible-sol}
X(S) = |S| + \max \{|I| \,:\,
\mbox{$I$ is an independent set disjoint from $S$} \}\,.
\end{equation}
We verify that this solution is feasible by checking that it satisfies all the
constraints of $\mathcal{B}_1$. Since $X(V) = n$, the initialization constraint is satisfied.
To prove the slope constraint,
for $S \subseteq T \subseteq V$ let $I, J$ be maximum-cardinality
independent sets disjoint from $S,T$ respectively. Note that $J$
itself is disjoint from $S$, implying $|J| \le |I|$. Thus we have
\[
X(T) = |T| + |J| = |S| + |T \setminus S| + |J| \le
|S| + |T \setminus S| + |I| = X(S) + |T \setminus S|.
\]
Note also that $I \setminus T$ is an independent set disjoint
from $T$, hence it satisfies $|I \setminus T| \le |J|$. Thus
\[
X(T) = |T| + |J| \ge |T| + |I \setminus T| =
|T \cup I| \ge |S \cup I| = |S| + |I| = X(S),
\]
which verifies monotonicity. Finally, to prove
decoding let $A,B$ be any vertex sets
such that $A \rightsquigarrow B$.
Consider $G \setminus A$,
the induced subgraph of $G$ on vertex set $V \setminus A$.
Every vertex of $B \setminus A$
is isolated in $G \setminus A$, and
consequently if $I$ is a maximum-cardinality
independent set disjoint from $B$,
then $I \cup (B \setminus A)$ is an independent
set in $G \setminus A$. Therefore,
\begin{equation*}
X(A) \ge |A| + |I| + |B \setminus A|
= |B| + |I| = X(B)\,.\qedhere
\end{equation*}
\end{proof}
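The primal solution~\sveqref{eq-alpha-feasible-sol} can also be checked exhaustively on a small graph. The sketch below (ours) verifies every $\mathcal{B}_1$ constraint for $C_5$ and recovers the value $\alpha(C_5)=2$:

```python
from itertools import combinations

n = 5
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}  # C_5
V = frozenset(adj)
subsets = [frozenset(c) for k in range(n + 1) for c in combinations(sorted(V), k)]

def independent(I):
    return all(u not in adj[v] for u, v in combinations(I, 2))

# X(S) = |S| + max{|I| : I independent and disjoint from S}
X = {S: len(S) + max(len(I) for I in subsets if independent(I) and not I & S)
     for S in subsets}

def decodes(A, B):
    return A <= B and all(adj[v] <= A for v in B - A)

assert X[V] >= n                                    # initialize
assert X[frozenset()] >= 0                          # non-negativity
for S in subsets:
    for T in subsets:
        if S <= T:
            assert X[S] + len(T - S) >= X[T]        # slope
            assert X[T] >= X[S]                     # monotonicity
        if decodes(S, T):
            assert X[S] >= X[T]                     # decode

print(X[frozenset()])  # 2, the independence number of C_5
```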
We next turn to showing that $b_2$ is a lower bound on the broadcast rate.
\begin{claim}\svlabel{clm-b2-leq-beta}
The LP-solution $b_2$ satisfies $b_2(G) \leq \beta(G)$.
\end{claim}
\begin{proof}
Let $G$ be a broadcasting with side information problem with $n$
messages $V$ and $m$ receivers. Consider the
message $P = {\mathcal{E}}(x_1,\ldots, x_n)$ that we send on the public
channel to achieve $\beta$. Denote by $H$ the entropy function normalized so that $H(x_i) = 1$ for all $i$. This induces a function from the power set of $V \cup \{P\}$ to $\mathbb{R}$ where $H(S) = |S|$ for any subset of messages $S$ and $H(P) = \beta$.
Now, let $X(S) = H(S,P)$ for $S \subseteq V$. We will
show that $X$ satisfies all the constraints of the LP $\mathcal{B}_2$, implying that $X$ is a feasible solution of $\mathcal{B}_2$.
First, $X(V) \ge n$ since $H(V,P) = H(V)$ and our
normalization has $H(V) = n$. Non-negativity holds because $H(P)
\ge 0$. The $X(\cdot)$ values satisfy monotonicity and submodularity
because entropy does. Slope is implied by
the fact that entropy is submodular (that is, $H(S,P) + H(T\setminus
S) \ge H(T,P)$) together with our normalization. Finally,
decoding is satisfied because the coding solution is valid: each receiver $R_j$ can determine its sought information from
$N(j)$ and the public channel.
This solution gives $X(\emptyset) = H(P) = \beta$ and since the LP is
stated as a minimization problem it implies that $\beta$ is an upper
bound on its solution $b_2$.
\end{proof}
Next we prove that $\beta \le b_n$. We do this in three parts.
First, for every
instance $G$ of the
broadcasting with side information problem,
we define a parameter $\overline{\chi}_f(G)$ be the
minimum size of a strong fractional hyperclique-cover; this parameter
specializes to the fractional clique-cover number when $G$ is a graph.
Next we show that $\beta \le \overline{\chi}_f$, and finally we prove
that $\overline{\chi}_f = b_n$.
\begin{claim}\svlabel{clm-beta-leq-chibarf}
For any broadcasting problem with side information, $G$, we have
$\beta(G) \leq \overline{\chi}_f(G)$.
\end{claim}
\begin{proof}
Let $\mathcal{C}$ be the set of strong hypercliques in $G = (V,E)$. If
$\overline{\chi}_f \leq w$ then there is a finite collection of ordered pairs
$\{(S,x_S) \,:\, S \in \mathcal{C}\}$ where the $x_S$'s are positive rational
numbers satisfying
\begin{align*}
\sum_{S \in \mathcal{C}} x_S = w \,,\quad\mbox { and }\quad
\sum_{S \in \mathcal{C}: \, x \in S} x_S \geq 1\mbox{ for all $x \in V$}\,.
\end{align*}
Let $q$ be a positive integer such that each of the numbers $x_S \,
(S \in \mathcal{C})$ is an integer multiple of $1/q.$ Set $p = q w$, noting
that $p$ is also a positive integer. Letting $y_S = q x_S$ for every
$S \in \mathcal{C}$, we have:
\begin{align}
\sum_{S \in \mathcal{C}} y_S = p\,,\quad\mbox{ and }\quad
\sum_{S \in \mathcal{C}: \, x \in S} y_S & \geq q\mbox{ for all $x \in V$}\,.\svlabel{eq:chibarf-q}
\end{align}
Replacing each pair $(S,y_S)$ with $y_S$ copies of the pair $(S,1)$ if
necessary, we can assume that $y_S = 1$ for every $S$.
Similarly, replacing each $S$ by a proper subset if necessary, we can
assume that the inequality~\sveqref{eq:chibarf-q} is tight for every
$x$.
(Note that this step depends on the fact that the collection of
strong hypercliques, $\mathcal{C}$, is closed under taking subsets.)
Altogether we have a sequence of sets $S_1,S_2,\ldots,S_p$, each of which
is a strong hyperclique in $G$, such that every message occurs in
exactly $q$ of these sets.
From such a set system it is easy to construct an index code
where every message has $q$ bits (i.e.\ $\Sigma = \{0,1\}^q$)
and the broadcast utilizes $p$ bits (i.e.\ $\Sigma_P = \{0,1\}^p$).
Indeed, for each message $x \in V$ let
$j_1(x) < j_2(x) < \cdots < j_q(x)$ denote the indices such that
$x \in S_j$ for $j \in \{j_1(x),j_2(x),\ldots,j_q(x)\}$. If the
bits of message $x$ are denoted by
$b_1(x),b_2(x),\ldots,b_q(x)$ then for each $1 \leq i \leq p$
the $i$-th bit of the index code is computed by taking the
sum (modulo 2) of all bits $b_k(z)$ such that $z \in S_i$ and
$i=j_k(z)$. A receiver $R_j$ desiring $x$ (i.e.\ $x_{f(j)}=x$) is able to decode the $k^{\mathrm{th}}$
bit of $x$ by taking the $j_k(x)$-th bit of the index code and
subtracting the bits belonging to the other messages $x' \in
S_{j_k(x)}$. All of these bits are known to $R_j$ since
$S_{j_k(x)}$ is a strong hyperclique containing $x$, whence $S_{j_k(x)} \setminus \{x\} \subseteq N(j)$.
This confirms that
$\beta(G) \leq p/q = w$, as desired.
\end{proof}
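For a concrete instance of this construction (our illustration, not spelled out in the paper), cover $C_5$ by its five edges: every vertex lies in exactly $q=2$ of the $p=5$ cliques, so messages of $q=2$ bits are served by $p=5$ broadcast bits, matching $\overline{\chi}_f(C_5)=\frac52$:

```python
import random

q = 2
S = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]]  # edges of C_5
p = len(S)
J = {x: [i for i in range(p) if x in S[i]] for x in range(5)}  # j_1(x) < j_2(x)

def broadcast(msgs):
    # Bit i of the index code is the XOR of b_k(z) over z in S_i with i = j_k(z).
    out = []
    for i in range(p):
        bit = 0
        for z in S[i]:
            bit ^= msgs[z][J[z].index(i)]
        out.append(bit)
    return out

def decode(x, k, out, msgs):
    # The receiver for x knows its neighbours' bits: S_{j_k(x)} \ {x} lies in N(x).
    i = J[x][k]
    bit = out[i]
    for z in S[i] - {x}:
        bit ^= msgs[z][J[z].index(i)]
    return bit

random.seed(1)
msgs = {x: [random.randint(0, 1) for _ in range(q)] for x in range(5)}
out = broadcast(msgs)
assert all(decode(x, k, out, msgs) == msgs[x][k]
           for x in range(5) for k in range(q))
print(p / q)  # 2.5 broadcast bits per message bit
```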
It remains to characterize the extreme upper LP solution:
\begin{claim}\svlabel{clm-bn-eq-chibarf}
The LP-solution $b_n$ satisfies $b_n(G) = \overline{\chi}_f(G)$.
\end{claim}
\begin{proof}
The proof hinges on the fact that the entire set of constraints of $\mathcal{B}_n$ gives a useful structural characterization of any feasible solution $X$. Once we have this structure it will be simple to infer the required result.
\begin{lemma}\svlabel{lem:coverage-functions}
A vector $X$ satisfies the slope constraint and the i-th order
submodularity constraints for $i \in \{2,\ldots,n\}$ if and only if
there exists a vector of non-negative numbers $w(T)$, defined for
every non-empty set of messages $T$, such that $X(S) = |S| + \sum_{T: T \not \subseteq S} w(T)$ for all $S \subseteq V$.
\end{lemma}
The proof of this fact is similar to a characterization of a weighted coverage function. While much of the proof is likely folklore, we include it in Section~\svref{sec:ap-coverage} for completeness.
Given this fact we now prove that $b_n(G) \ge \overline{\chi}_f(G)$ by showing that any solution $X$ having the form stated in Lemma~\svref{lem:coverage-functions} yields a strong fractional hyperclique-cover of the same size (for a graph, a fractional coloring of $\overline{G}$). Thus, for the remainder of this
subsection, $X$ refers to a solution of $\mathcal{B}_n$ having value $b_n(G)$
and $w$ refers to the associated vector of non-negative numbers whose
existence is guaranteed by Lemma~\svref{lem:coverage-functions}.
\begin{fact}\svlabel{fact:sumT}
For every message $x \in V$, $\sum_{T \ni x } w(T) = 1$.
\end{fact}
To see this, observe that monotonicity and decoding imply that $X(V \setminus \{x\}) = X(V)$. Lemma~\svref{lem:coverage-functions} implies that the right-hand side is $n$ while the left-hand side is $n-1+\sum_{T \ni x} w(T)$.
\begin{fact}\svlabel{fact:sumneigh}
For every receiver $R_j$, if $x$ denotes $x_{f(j)}$,
then
$\sum_{T : \, x \, \in \, T \, \subseteq \, N(j) \cup \{x\}} w(T) = 1$.
\end{fact}
Indeed, monotonicity and decoding imply that $X(N(j) \cup \{x\}) = X(N(j))$. Lemma~\svref{lem:coverage-functions} implies that the right side and left side differ by
$1-\sum_{T : \, x \, \in \, T \, \subseteq \, N(j) \cup \{x\}} w(T).$
For a message $x$, let $N(x) = \bigcap_{j:x =x_{f(j)}} N(j)$ be the
intersection of the side information for every receiver who wants to
know $x$. By combining Facts~\svref{fact:sumT} and~\svref{fact:sumneigh} we find
that if $w(T)$ is positive then $T$ is contained in $N(x) \cup \{x\}$
for every $x$ in $T$. Thus, we can infer the following:
\begin{corollary}\svlabel{cor:clique}
If $w(T) > 0$ then the set of receivers desiring messages in $T$ is a
strong hyperclique.
\end{corollary}
Now, to prove $b_n(G) \le \overline{\chi}_f(G)$ we show that if a
vector $w$ is a feasible strong fractional hyperclique-cover then $X(S) = |S| + \sum_{T: T \not \subseteq S} w(T)$ is feasible for the LP
$\mathcal{B}_n$. By the argument made in the proof of Claim~\svref{clm-beta-leq-chibarf} we can assume without
loss of generality that $\sum_{T \ni u} w(T) = 1$ for all $u \in
V$. The value of $X$ equals the size of the cover because
$X(\emptyset) = \sum_{T} w(T)$. Further,
Lemma~\svref{lem:coverage-functions} implies that $X$ satisfies the
$i$-th order submodularity constraints and slope. It trivially
satisfies initialization and non-negativity. To show that $X$
satisfies monotonicity it is sufficient to prove that $X(S \cup \{u\})
\ge X(S)$ for all $S \subseteq V, u \in V \setminus S$. By definition, we have $X(S \cup \{u\}) - X(S) = 1 -
\sum_{T : \, u \, \in \, T \, \not \subseteq \, S} w(T)
$. Additionally, we know $
\sum_{T : \, u \, \in \, T \, \not \subseteq \, S} w(T)
\le \sum_{T: u \in T}
w(T) = 1$, where the last equality holds by our normalization of
$w$. Finally, for the decoding constraints, it is sufficient
to show that $X(A)\ge X(A\cup\{x\})$ for $A = N(j)$
where $R_j$ is a receiver who desires $x$. By definition of $X$,
$X(A) - X(A\cup\{x\}) =
\sum_{T: \, x \, \in \, T \, \subseteq \, N(j) \cup \{x\}} w(T)
- 1$. Also, $
\sum_{T : \, x \, \in \, T \, \subseteq \, N(j) \cup \{x\}} w(T)
= \sum_{T \ni x} w(T) = 1$ because $T$
with $w(T) > 0 $ is a strong hyperclique.
\end{proof}
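The structural characterization in Lemma~\svref{lem:coverage-functions} also lends itself to a numerical sanity check. The sketch below (ours) draws random non-negative weights $w$, forms $X(S)=|S|+\sum_{T\not\subseteq S}w(T)$, and confirms by brute force, for $n=4$, that the slope and all higher-order submodularity constraints hold:

```python
import random
from itertools import combinations

random.seed(2)
n = 4
V = frozenset(range(n))
subsets = [frozenset(c) for k in range(n + 1) for c in combinations(range(n), k)]
nonempty = [T for T in subsets if T]

for _ in range(50):
    w = {T: random.uniform(0, 1) for T in nonempty}
    X = {S: len(S) + sum(w[T] for T in nonempty if not T <= S) for S in subsets}
    # slope: X(S) + |T \ S| >= X(T) for S subseteq T
    for S in subsets:
        for T in subsets:
            if S <= T:
                assert X[S] + len(T - S) >= X[T] - 1e-9
    # |R|-th order submodularity for 2 <= |R| <= n
    for R in subsets:
        if len(R) < 2:
            continue
        for Z in subsets:
            if Z & R:
                continue
            lhs = sum((-1) ** len(R - T) * X[T | Z] for T in subsets if T <= R)
            assert lhs <= 1e-9
print("coverage-form solutions satisfy slope and all submodularity constraints")
```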
\subsubsection{Strict lower and upper bounds for the broadcast rate}\svlabel{subsec:strict}
\begin{claim}\svlabel{claim-beta-le-b3}
There exists a broadcasting with side information instance $G$ for which $\beta(G) < b_3(G)$.
\end{claim}
\begin{proof}
The construction is an extremely simple instance with only three
messages $\{a,b,c\}$ and three receivers $(\{a\}, b), (\{b\}, c),$
and $(\{c\}, a)$. It is easy to see that $a \oplus b, b
\oplus c$ is a valid solution, and thus $\beta \le 2$. However,
using the 3rd-order submodularity constraint we have that
\[ X(ab) + X(bc) + X(ac) + X(\emptyset) \ge X(abc) + X(a) + X(b) +
X(c).\]
Combining that with decoding inequalities
\[X(a) \ge X(ab)\,,\quad X(b) \ge X(bc)\,,\quad X(c) \ge X(ac)\,,\]
together with the initialization inequality $X(abc) \ge 3$ now gives $X(\emptyset) \ge 3$ for every feasible solution, whence $b_3 \ge 3 > \beta$.
\end{proof}
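The validity of the two-symbol solution in this proof can be confirmed over $GF(2)$ (a quick check of ours; the claim holds over any alphabet):

```python
from itertools import product

# Broadcast p1 = a + b and p2 = b + c; each of the three receivers
# ({a}, b), ({b}, c), ({c}, a) recovers its message from its side information.
for a, b, c in product((0, 1), repeat=3):
    p1, p2 = a ^ b, b ^ c
    assert (a ^ p1) == b            # ({a}, b) recovers b
    assert (b ^ p2) == c            # ({b}, c) recovers c
    assert (c ^ p2 ^ p1) == a       # ({c}, a) recovers a
print("two broadcast symbols suffice, so beta <= 2")
```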
}
\subsection{The broadcast rate of the 5-cycle}
As stated in Theorem~\svref{thm-hierarchy}, whenever the LP-solution $b_2$ equals $\overline{\chi}_f$ we obtain that $\beta$ is precisely this common value; hence one may compute the broadcast rate (previously unknown for any graph where the trivial bounds do not coincide) via a chain of entropy inequalities.
We will demonstrate this in Section~\svref{sec:beta-of-graphs} by determining $\beta$ for several families of graphs, in particular for cycles and their complements (Theorem~\svref{thm-cycles}). These seemingly simple cases were previously studied in~\cites{AHLSW,BBJK} yet their $\beta$ values were unknown before this work.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=6.5in]{beta_c5.pdf}
\caption{A proof-by-picture that $\beta(C_5) = \frac52$. Variables are marked by highlighted subsets of vertices; e.g.\ the first submodularity application uses the LP constraint $X(\{3,4,5\})+X(\{2,3,4\}) \geq X(\{2,3,4,5\})+X(\{3,4\})$. The final outcome is the inequality $3X(\emptyset)+5 \geq X(\emptyset)+10$, i.e.\ $X(\emptyset)\geq\frac52$, and hence $\beta(C_5)\geq b_2(C_5)\geq\frac52$.}
\svlabel{fig:5cycle}
\end{center}
\vspace{-0.5cm}
\end{figure}
To give a flavor of the proof of Theorem~\svref{thm-cycles}, we provide a proof-by-picture for the broadcast rate of the 5-cycle (Figure~\svref{fig:5cycle}), illustrating the intuition behind choosing the set of inequalities one may combine for an analytic lower bound on $\beta$. The inequalities in Figure~\svref{fig:5cycle} establish that $\beta(C_5)\geq\frac52$, thus matching the upper bound $\beta(C_5) \leq \overline{\chi}_f(C_5) = \frac52$.
We note that odd cycles on $n\geq 5$ vertices as well as their complements constitute the first examples for graphs where the independence number $\alpha$ is strictly smaller than $\beta$. Corollary~\svref{cor-c5-union} will further amplify the gap between these parameters.
\subsection{Corollaries for vector/scalar index codes}\svlabel{subsec:cor-of-thm-1}
Prior to this work and its companion paper~\cite{BKL11a} there was no known family of graphs where $\alpha \neq \beta$,
and one could conjecture that for long enough messages the broadcast rate in fact converges to the independence number, the size of the largest set of receivers that are pairwise oblivious of one another. We now know that the 5-cycle provides an example where $\alpha = 2$ while $\beta =\frac52$; however, here the difference $\beta-\alpha<1$ could potentially be attributed to integer-rounding, e.g.\ it could be that $\alpha = \lfloor \beta \rfloor$.
Such was also the case for the best known difference between the vector capacity $\beta$ and the scalar capacity $\beta_1$. The best lower bound on $\beta_1-\beta$ in any graph was again attained by the 5-cycle where it was slightly less than $\frac13$, and again in the constrained setting of graph Index Coding we could conjecture that $\beta_1 = \lceil \beta \rceil$.
The following corollary of the above mentioned results refutes these suggestions by amplifying both these gaps to be linear in $n$. The separation between $\alpha$ and $\beta$ was further strengthened in the companion paper~\cite{BKL11a}, where we obtained a gap of a polynomial factor between these parameters.
\begin{corollary}\svlabel{cor-c5-union}
There exists a family of graphs $G$ on $n$ vertices for which $\beta(G) = n/2$ while $\alpha(G) = \frac25 n$ and $\beta_1(G)=(1-\frac15\log_2 5+o(1))n\approx 0.54 n$. Moreover, we have $\beta^*(G) = (1-o(1))\beta_1(G)$.
\end{corollary}
To prove this result we will use the direct-sum capacity $\beta^*$. Recall that this capacity is defined to be $\beta^*(G)=\lim_{t\to\infty}\frac1{t}\beta_1(t\cdot G)=\inf_t \frac1{t}\beta_1(t\cdot G)$ where $t\cdot G$ denotes the disjoint union of $t$ copies of $G$. This parameter satisfies $\beta\leq\beta^*\leq\beta_1$.
Similarly we let $G+H$ denote the disjoint union of the graphs $G,H$. We need the following simple lemma.
\begin{lemma}\svlabel{lem-additive-beta-beta*}
The parameters $\beta$ and $\beta^*$ are additive with respect to disjoint unions, that is for any two graphs $G,H$ we have $\beta(G+H)=\beta(G)+\beta(H)$ and $\beta^*(G+H)=\beta^*(G)+\beta^*(H)$.
\end{lemma}
\begin{proof}[Proof of lemma]
The fact that $\beta^*$ is additive w.r.t.\ disjoint unions follows immediately from the results of~\cite{AHLSW}. Indeed, it was shown there that for any graph $G$ on $n$ vertices $\beta^*(G) = \log_2 \chi_f ( \mathfrak{C}(G) )$ where $\mathfrak{C}=\mathfrak{C}(G)$ is an appropriate undirected Cayley graph on the group $\mathbb Z_2^n$. Furthermore, it was shown that $\mathfrak{C}(G+H) = \mathfrak{C}(G) \ensuremath{\mathaccent\cdot\vee} \mathfrak{C}(H)$, where $\ensuremath{\mathaccent\cdot\vee}$ denotes the OR-graph-product. It is well-known (see, e.g.,~\cites{Feige,LV}) that the fractional chromatic number is multiplicative w.r.t.\ this product, i.e.\ $\chi_f(G \ensuremath{\mathaccent\cdot\vee} H) = \chi_f(G)\chi_f(H)$ for any two graphs $G,H$. Combining these statements we deduce that
\begin{align*}
2^{\beta^*(G+H)} &= \chi_f( \mathfrak{C}(G+H) ) = \chi_f( \mathfrak{C}(G) \ensuremath{\mathaccent\cdot\vee} \mathfrak{C}(H) ) = \chi_f(\mathfrak{C}(G))\chi_f(\mathfrak{C}(H)) = 2^{\beta^*(G)+\beta^*(H)}\,.
\end{align*}
We shall now use this fact to show that $\beta$ is additive. The inequality $\beta(G+H)\leq \beta(G)+\beta(H)$ follows from concatenating the codes for $G$ and $H$, and it remains to show the matching lower bound $\beta(G+H)\geq \beta(G)+\beta(H)$.
As observed by~\cite{LuSt}, the Index Coding problem for an $n$-vertex graph $G$ with messages that are $t$ bits long has an equivalent formulation as a problem on a graph with $t n$ vertices and messages that are $1$-bit long; denote this graph by $G_t$ (formally this is the $t$-blow-up of $G$ with independent sets, i.e.\ the graph on the vertex set $V(G) \times [t]$, where $(u,i)$ and $(v,j)$
are adjacent iff $u v \in E(G)$).
Under this notation $\beta_t(G) = \beta_1(G_t)$. Notice that $(G+H)_t = G_t+H_t$ for any $t$ and furthermore that $s\cdot G_t$ is a spanning subgraph of $G_{s t}$ for any $s$ and $t$, in particular implying that $\beta_1(s \cdot G_t) \geq \beta_1(G_{s t})$.
Fix $\epsilon > 0$ and let $t$ be a large enough integer such that $\beta(G+H) \geq \beta_t(G+H)/t -\epsilon$. Further choose some large $s$ such that $\beta^*(G_t) \geq \beta_1(s\cdot G_t)/s - \epsilon$ and
$ \beta^*(H_t) \geq \beta_1(s\cdot H_t)/s - \epsilon$.
We now get
\begin{align*}
\beta(G+H) + \epsilon &\geq \beta_1(G_t+H_t)/t \geq \beta^*(G_t+H_t)/t
= \beta^*(G_t)/t+\beta^*(H_t)/t\,,
\end{align*}
where the last equality used the additivity of $\beta^*$. Since
\[ \beta^*(G_t)/t \geq \beta_1(s \cdot G_t)/st - \epsilon \geq \beta_1(G_{st})/st - \epsilon \geq \beta(G) -\epsilon\]
and an analogous statement holds for $\beta^*(H_t)/t$, altogether we have $\beta(G+H) \geq \beta(G) + \beta(H) - 3\epsilon$. Taking $\epsilon\to 0$ completes the proof of the lemma.
\end{proof}
\begin{proof}[\emph{\textbf{Proof of Corollary~\svref{cor-c5-union}}}]
Consider the family of graphs on $n=5k$ vertices given by $G = k \cdot C_5$. It was shown in~\cite{AHLSW} that $\beta^*(C_5) = 5-\log_2 5$, which together with the additivity of $\beta^*$ (Lemma~\svref{lem-additive-beta-beta*}) and the definition of $\beta^*$ as a limit implies that
$\beta^*(G) = (5-\log_2 5)k$ and $\beta_1(G) = \beta^*(G) + o(k)$.
At the same time, clearly $\alpha(G) = 2k$ and combining the fact that $\beta(C_5)=\frac52$ with Lemma~\svref{lem-additive-beta-beta*} gives $\beta(G) = 5k/2 = n/2$, as required.
\end{proof}
The above result showed that the difference between the broadcast rate $\beta$ and the Index Coding scalar capacity $\beta_1$ can be linear in the number of messages. We now wish to use the gap between $\beta$ and $\beta_1$ to infer a gap between the vector and scalar Network Coding capacities.
\begin{corollary}\svlabel{cor-nc-gap}
For any $k\geq 1$ there exists a Network Coding instance on $5k+2$ vertices where the ratio between the vector and scalar-linear capacities is precisely $1.2$ while the ratio between the vector and scalar capacities converges to $2-\frac25\log_25 \approx 1.07$ as $k\to\infty$.
\end{corollary}
\begin{proof}
It is well known (e.g.~\cite{RSG}) that an $n$-vertex graph Index Coding instance $G$ can be translated into a capacitated network $H$ on $2n+2$ vertices via a reduction that preserves linear encoding. It thus suffices to bound the ratio of the corresponding Index Coding capacities.
For $k\geq 1$ consider the graph $G$ consisting of $k$ disjoint $5$-cycles. Corollary~\svref{cor-c5-union} established that $\beta(G) = 5k/2$ whereas $\beta_1(G) = (5-\log_2 5+o(1))k$ where the $o(1)$-term tends to $0$ as $k\to\infty$. At the same time, it was shown in~\cite{BBJK} that the scalar-linear Index Coding capacity over $GF(2)$ coincides with a parameter denoted by $\operatorname{minrk}_2(G)$, and as observed in~\cite{LuSt} this extends to any finite field $\mathbb F$ as follows: For a graph $H=(V,E)$ we say that a matrix $B$ indexed by $V$ over $\mathbb F$ is a \emph{representation} of $H$ over $\mathbb F$ if it has nonzero diagonal entries ($B_{uu}\neq 0$ for all $u\in V$) whereas $B_{u v} = 0$ for any $u\neq v$ such that $u v \notin E$. The smallest possible rank of such a matrix over $\mathbb F$ is denoted by $\operatorname{minrk}_\mathbb F(H)$. For the $5$-cycle we have $\operatorname{minrk}_\mathbb F(C_5) \leq \overline{\chi}(C_5) = 3$ by the linear clique-cover encoding and this is tight since $\operatorname{minrk}_\mathbb F(C_5) \geq \lceil \beta(C_5) \rceil= 3$.
Finally, $\operatorname{minrk}_\mathbb F$ is clearly additive w.r.t.\ disjoint unions of graphs by its definition and thus $\operatorname{minrk}_\mathbb F(G) = 3k$ as required.
\end{proof}
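As a concrete sanity check of the $\operatorname{minrk}$ bound used above, the following sketch (Python, with an ad-hoc bitmask encoding of matrix rows; not part of the formal development) exhibits a rank-$3$ representation of $C_5$ over $GF(2)$ built from the clique cover $\{0,1\},\{2,3\},\{4\}$ of the $5$-cycle on vertices $0,\ldots,4$.

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask of one matrix row."""
    basis = {}           # pivot bit position -> stored row
    rank = 0
    for row in rows:
        while row:
            top = row.bit_length() - 1
            if top not in basis:
                basis[top] = row
                rank += 1
                break
            row ^= basis[top]
    return rank

def is_representation(rows, n, edges):
    """The minrk conditions: nonzero diagonal, zero entries on non-edges."""
    for u in range(n):
        if not (rows[u] >> u) & 1:
            return False
        for v in range(n):
            if u != v and (u, v) not in edges and (v, u) not in edges \
               and (rows[u] >> v) & 1:
                return False
    return True

C5 = {(i, (i + 1) % 5) for i in range(5)}
# block matrix induced by the clique cover {0,1}, {2,3}, {4}:
B = [0b00011, 0b00011, 0b01100, 0b01100, 0b10000]
assert is_representation(B, 5, C5)
assert gf2_rank(B) == 3        # hence minrk_2(C5) <= 3
```

The three all-ones blocks mirror the clique-cover encoding, which is why the rank equals the number of cliques.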
\section{Approximating the Broadcast Rate}\svlabel{sec:approx}
This section is devoted to the proof of Theorem~\svref{thm-hypergraph-approx}, on polynomial-time algorithms for approximating $\beta$ and deciding whether $\beta=2$. Working in the setting of a general broadcast network is somewhat delicate and we begin by sketching the arguments that will follow.
In the simpler case of undirected graphs, an $o(n)$-approximation to $\beta$ is implied by results of~\cites{Wigderson,BH,AKa} that together give a polynomial time procedure that finds either a small clique-cover or a large independent set (see Remark~\svref{rem-undirected-graphs}). To get an approximation for the general broadcasting problem we will apply a similar technique using analogues of independent sets and clique-covers that give lower and upper bounds respectively on the general broadcasting rate. The analogue of an independent set is an \emph{expanding sequence} --- a sequence of receivers where the $i^{\mathrm{th}}$ receiver's desired message is unknown to receivers $1,\ldots,i-1$. The clique-cover analogue is a weak fractional hyperclique-cover (see Definition~\svref{def:hyperclique}).
In the remainder of this section, whenever we refer to hypercliques or hyperclique-covers we always mean weak hypercliques and weak hyperclique-covers.
We will prove that there is a polynomial time algorithm that outputs an expanding sequence of size $k$ or reports a fractional hyperclique-cover of size $O \left( k n^{1-1/k} \right)$; the approximation follows by setting $k$ appropriately. We will argue that either we can partition the graph and apply induction or else the side-information map is dense enough to deduce existence of a small fractional hyperclique-cover. The proof of the latter step deviates significantly from the techniques used for graphs, and seems interesting in its own right. We will give a simple procedure to randomly sample hypercliques and use it to produce a valid weight function for the hyperclique-cover by defining the weight of a hyperclique to be proportional to the probability it is sampled by the procedure.
To prove the second part of Theorem~\svref{thm-hypergraph-approx} we will prove that a structure called an \emph{almost alternating cycle} (AAC) constitutes a minimal obstruction to obtaining a broadcast rate of $2$. The proof makes crucial use of Theorem~\svref{thm-hierarchy}, calculating the parameter $b_2$ for AAC's to prove that their broadcast rate is strictly greater than $2$. Furthermore, the proof reduces finding an AAC to finding the transitive closure of a particular relation, which is polynomial time computable.
\subsection{Approximating the broadcast rate in general networks}\svlabel{subsec:beta-approx}
We now present a nontrivial approximation algorithm for $\beta$ for a general network described by a
hypergraph (that is, the most general framework where there are $m \geq n$ receivers).
\begin{remark}\svlabel{rem-undirected-graphs}
In the setting of undirected graphs a slightly better approximation algorithm for $\beta$ is a consequence of a result of Boppana and Halldorsson~\cite{BH}, following the work of Wigderson~\cite{Wigderson}. In~\cite{BH} the authors showed an algorithm that finds either a ``large'' clique or a ``large'' independent set in a graph (where the size guarantee involves the Ramsey number estimate). A simple adaptation of this result (Proposition~2.1 in the Alon-Kahale~\cite{AKa} work on approximating $\alpha$ via the $\vartheta$-function) gives a polynomial-time algorithm for finding an independent set of size $t_k(m)=\max\big\{ s : \binom{k+s-2}{k-1} \leq m\big\}$ in any graph satisfying $\overline{\chi}(G) \geq n/k + m$. In particular, taking $m=n/k$ with $k = \frac12 \log n$ we clearly have $t_k(m) \geq k$ for any sufficiently large $n$ and obtain that either $\overline{\chi}(G) < 4n/\log n$ or we can find an independent set of size $\frac12\log n$ in polynomial-time.
\end{remark}
We use the following notation: the $n$ message streams are identified with the elements of $[n] = V$.
The data consisting of the
pairs $\{(N(j),f(j))\}_{j=1}^m$ is our \emph{directed hypergraph} instance.
When referring to the hypergraph structure itself
(rather than the corresponding index coding problem) we will
refer to elements of $V$ as \emph{vertices} and we will refer to
pairs $(N(j),f(j))$ as \emph{directed hyperedges}.
For notational convenience,
we denote $S(j) = N(j) \cup \{f(j)\}$.
An \emph{expanding sequence} of size $k$ is a sequence of receivers
$j_1,\ldots,j_k$ such that
\begin{equation}
\svlabel{eq-expanding-def}
f(j_\ell) \not\in \bigcup_{i < \ell} S(j_i)
\end{equation}
for $1 \leq \ell \leq k.$
For a
hypergraph $G$, let $\alpha(G)$
denote the maximum
size of an expanding sequence.
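The defining condition above is easy to operationalize. The following sketch (Python; the list-of-$(N(j),f(j))$ encoding is our own, and the greedy procedure yields a maximal expanding sequence, not necessarily one of maximum size $\alpha(G)$) checks and constructs expanding sequences.

```python
def is_expanding(seq, receivers):
    """Check the defining condition: f(j_l) lies outside S(j_i) for all i < l,
    where receivers[j] = (N_j, f_j) and S(j) = N(j) | {f(j)}."""
    seen = set()                     # union of S(j_i) over earlier receivers
    for j in seq:
        N, f = receivers[j]
        if f in seen:
            return False
        seen |= N | {f}
    return True

def greedy_expanding(receivers):
    """Greedily build an expanding sequence (maximal, not necessarily maximum)."""
    seq, seen = [], set()
    for j, (N, f) in enumerate(receivers):
        if f not in seen:
            seq.append(j)
            seen |= N | {f}
    return seq

# C5 viewed as an index coding instance: receiver i knows its two neighbours
receivers = [({1, 4}, 0), ({0, 2}, 1), ({1, 3}, 2), ({2, 4}, 3), ({3, 0}, 4)]
assert greedy_expanding(receivers) == [0, 2]      # here this attains alpha(C5) = 2
assert not is_expanding([0, 1], receivers)        # f(1) = 1 lies in S(0)
```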
\begin{lemma} \svlabel{lem:triv-lb}
Every
hypergraph $G$ satisfies the bound
$\beta(G) \geq \alpha(G).$
\end{lemma}
\begin{proof}
The proof is by contradiction. Let $j_1,\ldots,j_k$ be an expanding
sequence
and suppose that there is an
index code that achieves rate $r < k.$ Let $J = \{j_1,\ldots,j_k\}.$
For $b = \log_2 |\Sigma|$ we have
\[
|\Sigma|^k = 2^{b k} > 2^{b r} \geq |\Sigma_P|.
\]
Let us fix an element $x^*_i \in \Sigma$ for every
$i \not\in \{f(j) : j \in J\},$ and define $\Psi$ to be the
set of all $\vec{x} \in \Sigma^n$ that satisfy
$x_i = x^*_i$ for all $i \not\in \{f(j) : j \in J\}.$
The cardinality of $\Psi$ is $|\Sigma|^k$, so the
Pigeonhole Principle implies that
the function ${\mathcal{E}}$, restricted to $\Psi$, is not
one-to-one. Suppose that $\vec{x}$ and $\vec{y}$ are
two distinct elements of $\Psi$ such that ${\mathcal{E}}(\vec{x}) =
{\mathcal{E}}(\vec{y}).$ Let $i$ be the smallest index such
that $x_{f(j_i)} \neq y_{f(j_i)}.$ Denoting $j_i$ by $j$, we have
$x_{a} = y_{a}$ for all $a \in N(j)$,
because $N(j)$ does not contain
$f(j_{\ell})$ for any $\ell \geq i$, and the components with
indices $f(j_i), f(j_{i+1}), \ldots, f(j_k)$ are the only components
in which $\vec{x}$ and $\vec{y}$ differ. Consequently
receiver $j$ is unable to distinguish between message vectors
$\vec{x},\vec{y}$ even after observing the broadcast message,
which violates the condition that $j$ must be able to decode
message $f(j)$.
\end{proof}
\begin{lemma} \svlabel{lem:triv-ub}
Let $\psi_f(G)$ denote the minimum weight of a
fractional weak hyperclique-cover of $G$.
Every
hypergraph $G$ satisfies the bound
$\beta(G) \leq \psi_f(G).$
\end{lemma}
\begin{proof}
The linear program defining $\psi_f(G)$ has integer
coefficients, so $G$ has a fractional hyperclique
cover of weight $w = \psi_f(G)$ in which the weight
$w({\mathcal{J}})$ of every hyperclique ${\mathcal{J}}$ is a
rational number. Assume we are given such a fractional
hyperclique-cover, and choose an integer $d$ such that
$w({\mathcal{J}})$ is an integer multiple of $1/d$ for
every ${\mathcal{J}}$. Let $\mathcal{C}$ denote a multiset
of hypercliques containing $d \cdot w({\mathcal{J}})$
copies of ${\mathcal{J}}$ for every hyperclique ${\mathcal{J}}$.
Note that the cardinality of $\mathcal{C}$ is $d \cdot w$.
For any hyperclique ${\mathcal{J}}$, let $f({\mathcal{J}})$
denote the set $\bigcup_{j \in {\mathcal{J}}} \{f(j)\}.$
For each $i \in [n]$, let $\mathcal{C}_i$ denote the
sub-multiset of $\mathcal{C}$ consisting of all
hypercliques ${\mathcal{J}} \in \mathcal{C}$ such that
$i \in f({\mathcal{J}})$. Fix a finite field $\mathbb F$ such
that $|\mathbb F| > dw.$
Define $\Sigma = \mathbb F^d$ and
$\Sigma_P = \mathbb F^{d \cdot w}$.
Let $\{\xi_P^{{\mathcal{J}}}\}_{{\mathcal{J}} \in \mathcal{C}}$
be a basis for the dual vector space $\Sigma_P^*$ and
let $\{\xi_i^{{\mathcal{J}}}\}_{{\mathcal{J}} \in \mathcal{C}_i}$
be a set of dual vectors in $\Sigma^*$ such that
any $d$ of these vectors constitute a basis
for $\Sigma^*$. (The existence of such a set of
dual vectors is guaranteed by our choice of $\mathbb F$ with
$|\mathbb F| > dw \geq d.$)
The encoding function is defined to be the unique
linear function satisfying
\[
\xi_P^{{\mathcal{J}}}({\mathcal{E}}(x_1,\ldots,x_n)) =
\sum_{i \in f({\mathcal{J}})} \xi_i^{{\mathcal{J}}}(x_i)
\qquad \forall {\mathcal{J}}.
\]
For each receiver $j$, let $i=f(j)$. Since $\sum_{{\mathcal{J}} \ni j} w({\mathcal{J}}) \geq 1$, the multiset $\mathcal{C}$ contains at least $d$ hypercliques ${\mathcal{J}}$ with $j \in {\mathcal{J}}$, so the corresponding dual vectors $\xi_i^{{\mathcal{J}}}$ span
$\Sigma^*$; hence, to prove that $j$ can decode message $x_{i}$
it suffices to show that $j$ can determine the value of
$\xi_i^{{\mathcal{J}}}(x_{i})$ whenever $j \in {\mathcal{J}}.$
This holds because the public channel contains the value of
$\sum_{\ell \in f({\mathcal{J}})} \xi_{\ell}^{{\mathcal{J}}}(x_{\ell})$,
and receiver $j$ knows that value of $\xi_{\ell}^{{\mathcal{J}}}(x_{\ell})$
for every $\ell \neq i$ in $f({\mathcal{J}})$ because
$\ell \in N(j).$
\end{proof}
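When the cover is integral and $d=1$, the construction above specializes to the familiar clique-cover code: broadcast one XOR per clique. The following toy sketch (Python, single-bit messages over $GF(2)$; the data layout is ours) illustrates this special case on $C_5$ with the cover $\{0,1\},\{2,3\},\{4\}$.

```python
from functools import reduce
from operator import xor

def encode(x, cover):
    """One broadcast symbol per clique: the integral, d = 1 special case."""
    return [reduce(xor, (x[i] for i in clique)) for clique in cover]

def decode(j, y, x, cover, receivers):
    """Receiver j recovers x[f(j)] from the broadcast y and side information;
    the terms subtracted off all lie in N(j) because the cover uses cliques."""
    N, f = receivers[j]
    for symbol, clique in zip(y, cover):
        if f in clique:
            assert all(i in N for i in clique if i != f)   # side info suffices
            return reduce(xor, (x[i] for i in clique if i != f), symbol)
    raise ValueError("receiver not covered")

receivers = [({1, 4}, 0), ({0, 2}, 1), ({1, 3}, 2), ({2, 4}, 3), ({3, 0}, 4)]
cover = [{0, 1}, {2, 3}, {4}]                  # cliques of C5
x = [1, 0, 1, 1, 0]
y = encode(x, cover)                           # 3 broadcast bits for 5 messages
assert y == [1, 0, 0]
assert all(decode(j, y, x, cover, receivers) == x[receivers[j][1]]
           for j in range(5))
```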
We now turn our attention to bounding the ratio
$\psi_f(G)/\alpha(G)$ for a
hypergraph $G$. Our goal is to show that
this ratio is bounded by a function in $o(n).$
To begin with, we need an analogue of the lemma
that undirected graphs with small maximum degree
have small fractional chromatic number.
\begin{lemma} \svlabel{lem:low-degree}
If $G$ is
a hypergraph with $n$ vertices,
and $d$ is a natural number such
that for every receiver $j$, $|S(j)| + d \geq n,$
then $\psi_f(G) \leq 4d+2.$
\end{lemma}
\begin{proof}
Let us define a procedure for sampling a random subset
$T \subseteq [n]$ and a random hyperclique ${\mathcal{J}}$
as follows. Let $\pi$ be a uniformly
random permutation of $[n+d]$, let $i$ be the least
index such that $\pi(i+1) > n,$ and let $T$ be the
set $\{\pi(1),\pi(2),\ldots,\pi(i)\}.$ (If $\pi(1)>n$
then $i=0$ and $T$ is the empty set.) Now let
${\mathcal{J}}$ be the set of all $j$ such that $f(j) \in T \subseteq S(j).$
(Note that ${\mathcal{J}}$ is indeed a hyperclique.)
For any hyperclique ${\mathcal{J}}$ let $p({\mathcal{J}})$ denote the
probability that ${\mathcal{J}}$ is sampled by this procedure
and let $w({\mathcal{J}}) = (4d+2) \cdot p({\mathcal{J}}).$ We claim
that the weights $w(\cdot)$ define a fractional
hyperclique-cover of $G$, or equivalently, that
for every receiver $j$, $\P(f(j) \in T \subseteq S(j)) \geq \frac{1}{4d+2}.$
Let $U(j)$ denote the set $\{f(j)\} \cup \left( [n] \setminus S(j) \right)
\cup \left( [n+d] \setminus [n] \right).$
The event $\mathcal{E} = \{f(j) \in T \subseteq S(j)\}$ occurs if and only
if, in the ordering of $U(j)$ induced by $\pi$, the first
element of $U(j)$ is $f(j)$ and the next element belongs
to $[n+d] \setminus [n].$ Thus,
\[
\P(\mathcal{E}) = \frac{1}{|U(j)|} \cdot
\frac{d}{|U(j)|-1}.
\]
The bound $\P(\mathcal{E}) \geq \frac{1}{4d+2}$ now follows
from the fact that $|U(j)| \leq 2d+1.$
\end{proof}
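The probability computation in the proof can be verified by brute force on a small instance. The sketch below (Python; the parameters are a toy choice of ours) enumerates all permutations of $[n+d]$, reproduces the sampling procedure, and checks both the exact formula for the event probability and the $\frac{1}{4d+2}$ bound.

```python
from itertools import permutations
from fractions import Fraction

def sampled_T(pi, n):
    """T from the proof: elements of [n] preceding the first element above n."""
    T = set()
    for v in pi:
        if v > n:
            break
        T.add(v)
    return T

def event_probability(n, d, f_j, S_j):
    """Exact Pr[f(j) in T and T subset of S(j)] over all permutations of [n+d]."""
    hits = total = 0
    for pi in permutations(range(1, n + d + 1)):
        total += 1
        T = sampled_T(pi, n)
        hits += (f_j in T) and (T <= S_j)
    return Fraction(hits, total)

# toy receiver: n = 4 messages, d = 2 dummy elements, S(j) = {1,2,3}, f(j) = 1
n, d = 4, 2
p = event_probability(n, d, 1, {1, 2, 3})
U = 1 + (n - 3) + d          # |U(j)| = 1 + |[n] \ S(j)| + d = 4 here
assert p == Fraction(1, U) * Fraction(d, U - 1)   # = 1/6
assert p >= Fraction(1, 4 * d + 2)
```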
\begin{lemma} \svlabel{lem:hyper-wigderson}
If $G$ is
a hypergraph and $\alpha(G) \leq k$,
then $\psi_f(G) \leq 6k \cdot n^{1-1/k}.$ Moreover, there is a
polynomial-time algorithm, whose input is
a hypergraph $G$
and a natural number $k$,
that either outputs an expanding sequence of size $k+1$ or
reports (correctly) that $\psi_f(G) \leq 6k \cdot n^{1-1/k}.$
\end{lemma}
\begin{proof}
The proof is by induction on $k$.
In the base case $k=1$,
either $G$ itself is a hyperclique or
there is some pair of receivers $j,j'$ such
that $f(j)$ is not in $S(j')$. In that case, the sequence
$j_1=j', j_2=j$ is an expanding sequence of size $2$.
For the induction step, for each hyperedge $j$ define the set
$D(j) = \{f(j)\} \cup \left( [n] \setminus S(j) \right)$
and let $j_1$ be a hyperedge for which $|D(j_1)|$ is maximum.
If $|D(j_1)| \leq n^{1 - 1/k}+1,$ then the bound
$|S(j)| + n^{1-1/k} \geq n$ is satisfied for
every $j$ and Lemma~\svref{lem:low-degree} implies
that $\psi_f(G) < 4 n^{1-1/k} + 2 \leq 6 n^{1-1/k}.$
Otherwise, partition the vertex set of $G$
into $V_1 = [n] \setminus S(j_1)$ and $V_2 = S(j_1),$
and for $i=1,2$ define $G_i$ to be the
hypergraph with vertex set $V_i$ and edge
set $E_i$ consisting of all pairs $(N(j) \cap V_i, f(j))$
such that $(N(j),f(j))$ is a hyperedge of $G$
with $f(j) \in V_i.$ (We will call such a structure
the \emph{induced sub-hypergraph of $G$ on vertex
set $V_i$}.)
If $G_1$ contains an expanding sequence
$j_2,j_3,\ldots,j_{k+1}$ of size $k$, then
the sequence $j_1,j_2,\ldots,j_{k+1}$ is an
expanding sequence of size $k+1$ in $G$.
(Moreover, if an algorithm efficiently finds the
sequence $j_2,j_3,\ldots,j_{k+1}$ then it is easy
to efficiently construct the sequence $j_1,\ldots,j_{k+1}.$)
Otherwise, by the induction hypothesis,
$G_1$ has a fractional hyperclique-cover
of weight at most $6 (k-1) |V_1|^{1-1/(k-1)} \leq 6 (k-1) |V_1| n^{-1/k}$.
Continuing to process the induced sub-hypergraph
on vertex set $V_2$ in the same way, we arrive
at a partition of $[n]$ into disjoint vertex sets
$W_1, W_2, \ldots, W_{\ell}$ of cardinalities
$n_1,\ldots,n_{\ell}$, respectively, such that
for $1 \leq i < \ell$, the induced sub-hypergraph
on $W_i$ has a fractional hyperclique-cover of weight
at most $6 (k-1) n_i n^{-1/k}$, and for $i=\ell$
the induced sub-hypergraph on $W_i$ satisfies
the hypothesis of Lemma~\svref{lem:low-degree}
with $d=n^{1-1/k}$
and consequently has a fractional hyperclique-cover
of weight at most $6 n^{1-1/k}.$
The lemma follows by summing the weights of these hyperclique-covers.
\end{proof}
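The recursive procedure from the proof can be sketched as follows (Python; the data layout is ours, and only the expanding sequence is constructed, with `None` certifying the existence of the claimed cover without building it).

```python
def induced(receivers, V):
    """Induced sub-hypergraph: keep receivers with f(j) in V, restrict N(j) to V."""
    return {j: (N & V, f) for j, (N, f) in receivers.items() if f in V}

def expanding_or_cover(receivers, V, k):
    """Return an expanding sequence of size k+1, or None, which (per the lemma)
    certifies psi_f <= 6k|V|^(1-1/k). `receivers` maps j -> (N(j), f(j)) and is
    assumed already restricted to V."""
    S = lambda j: receivers[j][0] | {receivers[j][1]}
    if k == 1:
        for j, (_, f) in receivers.items():
            for j2 in receivers:
                if j2 != j and f not in S(j2):
                    return [j2, j]
        return None                       # the instance is a weak hyperclique
    n, W = len(V), set(V)
    while True:
        E = induced(receivers, W)
        if not E:
            return None
        D = {j: {f} | (W - (N | {f})) for j, (N, f) in E.items()}
        j1 = max(D, key=lambda j: len(D[j]))
        if len(D[j1]) <= n ** (1 - 1 / k) + 1:
            return None                   # low-degree case: a cover exists
        V1 = W - (E[j1][0] | {E[j1][1]})
        seq = expanding_or_cover(induced(receivers, V1), V1, k - 1)
        if seq is not None:
            return [j1] + seq
        W -= V1                           # continue with V2 = S(j1) inside W

receivers = {i: (set(), i) for i in range(4)}     # no side information at all
seq = expanding_or_cover(receivers, set(range(4)), 2)
assert seq is not None and len(seq) == 3
clique = {i: ({0, 1, 2} - {i}, i) for i in range(3)}  # complete side information
assert expanding_or_cover(clique, {0, 1, 2}, 1) is None
```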
\noindent Combining
Lemmas~\svref{lem:triv-lb},~\svref{lem:triv-ub},~\svref{lem:hyper-wigderson},
we obtain the approximation
algorithm asserted by Theorem~\svref{thm-hypergraph-approx}.
\subsection{Extending the algorithm to networks with variable source rates}\svlabel{sec:weighted-hypergraph}
The aforementioned approximation algorithm for $\beta$ naturally extends to the setting where each source in the broadcast network has
its own individual rate. Namely, the $n$ message streams are identified with the elements of $[n] = V$, where message stream $i$ has a rate $r_i$, and the problem input consists of the vector $(r_1,\ldots,r_n)$ and the
pairs $\{(N(j),f(j))\}_{j=1}^m$. Thus the input is a \emph{weighted directed hypergraph} instance.
An index code for a weighted hypergraph consists of the following:
\begin{compactitem}
\item Alphabets $\Sigma_P$ and $\Sigma_i$ for $1 \leq i \leq n$,
\item An encoding function
${\mathcal{E}}: \prod_{i=1}^n \Sigma_i \rightarrow \Sigma_P$,
\item Decoding functions ${\mathcal{D}}_j : \Sigma_P \times \prod_{i \in N(j)} \Sigma_i
\rightarrow \Sigma_{f(j)}.$
\end{compactitem}
The encoding and decoding functions are required to satisfy
\[{\mathcal{D}}_j({\mathcal{E}}(\sigma_1,\ldots,\sigma_n), \sigma_{N(j)}) = \sigma_{f(j)}\]
for all $j=1,\ldots,m$ and all $(\sigma_1,\ldots,\sigma_n) \in
\prod_{i=1}^n \Sigma_i.$ Here the notation
$\sigma_{N(j)}$ denotes the tuple obtained from a complete
$n$-tuple $(\sigma_1,\ldots,\sigma_n)$ by retaining only
the components indexed by elements of $N(j).$
An index code \emph{achieves} rate $r \geq 0$ if there exists
a constant $b>0$ such that $|\Sigma_i| \geq 2^{b\, r_i}$ for
$1 \leq i \leq n$ and $|\Sigma_P| \leq 2^{b\, r}.$ If so,
we say that rate $r$ is \emph{achievable}. If $G$ is a
weighted hypergraph, we define
$\beta(G)$ to be the infimum of the set of achievable
rates.
The first step in generalizing the proof given in the previous subsection to the case where the $r_i$'s are non-uniform is to properly extend the notions of hypercliques and expanding sequences. A weak fractional hyperclique cover of a weighted hypergraph will now assign a weight $w({\mathcal{J}})$ to every weak hyperclique ${\mathcal{J}}$ such that for every receiver $j$, $\sum_{{\mathcal{J}} \ni j} w({\mathcal{J}}) \geq r_{f(j)}$ (cf.\ Definition~\svref{def:hyperclique} corresponding to $r_{f(j)}=1$).
As before, the weight of a fractional weak hyperclique-cover is
given by $\sum_{{\mathcal{J}}} w({\mathcal{J}})$ and for a weighted hypergraph $G$ we let $\psi_f(G)$
denote the minimum weight of a fractional weak hyperclique-cover.
An expanding sequence $j_1,\ldots,j_k$ is defined as before (see Eq.~\svref{eq-expanding-def}) except now we associate such a sequence with the weight $\sum_{\ell=1}^k r_{f(j_\ell)}$ and the quantity $\alpha(G)$ will denote
the maximum weight of an expanding sequence (rather than the maximum cardinality).
With these extended definitions, the proofs in the previous subsection carry over unmodified to the weighted hypergraph setting with the single exception of Lemma~\svref{lem:hyper-wigderson}, where the assumption that the hypergraph is unweighted was essential to the proof. In what follows we adapt the application of that lemma via a dyadic partition of the vertices of our weighted hypergraph according to their weights $r_i$.
Assume without loss of generality that $0 \leq r_i \leq 1$ for
every vertex $i \in [n]$, and partition the vertex set of
$G$ into subsets $V_1, V_2, \ldots$ such that $V_s$
contains all vertices $i$ such that $2^{-s} < r_i \leq 2^{1-s}.$
Let $G_s$ denote the induced hypergraph on vertex
set $V_s$. For each of the nonempty hypergraphs $G_s$,
run the algorithm of Lemma~\svref{lem:hyper-wigderson} for
$k=1,2,\ldots$ until reaching the smallest value $k(s)$ for which
an expanding sequence of size $k(s)+1$ is not found.
If $G_s^{\circ}$ denotes the unweighted version of
$G_s$, then we know that
\begin{align*}
\alpha(G_s) & \geq 2^{-s} \alpha(G_s^{\circ}) \geq 2^{-s} k(s) \\
\psi_f(G_s) & \leq 2^{1-s} \psi_f(G_s^{\circ}) \leq 2^{-s} \cdot
12 k(s) n^{1-1/k(s)}.
\end{align*}
In addition, for each $i \in V_s$ the set of hyperedges
containing $i$ constitutes a hyperclique, which implies
the trivial bound
\[
\psi_f(G_s) \leq \sum_{i \in V_s} r_i \leq 2^{1-s} |V_s|.
\]
Combining these two upper bounds for $\psi_f(G_s)$, we obtain
an upper bound for $\psi_f(G)$:
\begin{equation} \svlabel{eq:tau}
\psi_f(G) \leq \sum_{s=1}^{\infty} \psi_f(G_s) \leq
\sum_{s=1}^{\infty} 2^{-s} \cdot \min \left\{
12 k(s) n^{1-1/k(s)}, \, 2 |V_s| \right\}.
\end{equation}
We define $\tau(G)$ to be the right side of \sveqref{eq:tau}.
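Evaluating $\tau(G)$ is straightforward once the $k(s)$ values are in hand; the sketch below (Python; the values $k(s)$ are assumed precomputed by the unweighted algorithm, and the toy inputs are ours) performs the dyadic bucketing and returns the bound.

```python
import math

def tau_bound(r, k_of_s, n):
    """Right-hand side of the display above: r are the rates (0 < r_i <= 1),
    k_of_s[s] is the value k(s) found for bucket V_s by the unweighted algorithm."""
    sizes = {}
    for ri in r:
        s = math.floor(-math.log2(ri)) + 1   # the unique s with 2^-s < r_i <= 2^(1-s)
        sizes[s] = sizes.get(s, 0) + 1
    return sum(2.0 ** (-s) * min(12 * k_of_s[s] * n ** (1 - 1 / k_of_s[s]),
                                 2 * sizes[s])
               for s in sizes)

# toy instance: two rate-1 streams and one rate-1/2 stream, with k(s) = 1 throughout
assert abs(tau_bound([1.0, 1.0, 0.5], {1: 1, 2: 1}, 3) - 2.5) < 1e-9
```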
We have described a polynomial-time algorithm to compute $\tau(G)$
and have justified the relation $\psi_f(G) \leq \tau(G)$,
so it remains to show that $\tau(G) / \alpha(G) \leq
c n \left( \frac{\log \log n}{\log n} \right)$ for some constant $c$.
The bound $\tau(G) \leq n$ follows immediately
from the definition of $\tau$, so if $\alpha(G) \geq
\frac{\log n}{\log \log n}$ there is nothing to prove.
Assume henceforth that $\alpha(G) < \frac{\log n}{\log \log n}$,
and define $w$ to be the smallest integer such that
$2^w \cdot \alpha(G) > \frac{\log n}{2 \log \log n}.$
We have
\begin{align}
\nonumber
\tau(G) &\leq
\sum_{s=1}^{w} 2^{-s} \cdot 12 k(s) n^{1-1/k(s)} \;+\;
\sum_{s=w+1}^{\infty} 2^{1-s} \cdot |V_s| \\
\nonumber
& \leq
12 n \sum_{s=1}^{w} 2^{-s} k(s) n^{-1/k(s)} \;+\;
2^{-w} \cdot n \\
\svlabel{eq:tau.2}
& <
12 n \alpha(G) \sum_{s=1}^{w} n^{-1/k(s)} \;+\;
2 n \alpha(G) \left( \frac{\log \log n}{\log n} \right),
\end{align}
with the last line derived using the relations
$2^{-s} k(s) \leq \alpha(G_s) \leq \alpha(G)$
and $2^{-w} < \alpha(G) \big( \frac{2 \log \log n}{\log n} \big)$.
Applying once more the fact that $2^{-s} k(s) \leq \alpha(G)$,
we find that $n^{-1/k(s)} \leq n^{-1/\left(2^s \cdot \alpha(G)\right)}.$
Substituting this bound into \sveqref{eq:tau.2} and letting $\alpha$
denote $\alpha(G)$, we have
\[
\frac{\tau(G)}{\alpha(G)} \leq
2 n \left( \frac{\log \log n}{\log n} \right) \;+\;
12 n \left( n^{-1/2\alpha} + n^{-1/4\alpha} + \cdots + n^{-1/2^w \alpha}
\right).
\]
In the sum appearing on the right side, each term is the square of
the one following it. It now easily follows
that the final term in
the sum is less than $1/2$, so the entire sum is bounded above
by twice its final term. Thus
\begin{equation} \svlabel{eq:tau.3}
\frac{\tau(G)}{\alpha(G)} \leq
2 n \left( \frac{\log \log n}{\log n} \right) \;+\;
24 n \cdot n^{-1/2^w \alpha}.
\end{equation}
Our choice of $w$ ensures that $2^w \alpha \leq \frac{\log n}{\log \log n}$
hence
$n^{-1/(2^w \alpha)} \leq n^{-\log \log n / \log n} = (\log n)^{-1}$.
By substituting this bound into \sveqref{eq:tau.3} we obtain
\[
\frac{\tau(G)}{\alpha(G)} \leq n
\left( \frac{2 \log \log n}{\log n} + \frac{24}{\log n} \right)\,,
\]
as desired.
\subsection{Proof of Theorem~\svref{thm-hypergraph-approx}, determining whether the broadcast rate equals 2}\svlabel{subsec:beta-equals-2}
Let $G$ be an undirected graph with independence number $\alpha=2$. Clearly, if $\overline{G}$ is bipartite then $\overline{\chi}(G) = 2$ and so $\beta(G) = 2$ as well. Conversely, if $\overline{G}$ is not bipartite then it contains an odd cycle, the smallest of which is induced and has $k\geq 5$ vertices since the maximum clique in $\overline{G}$ is $\alpha(G)=2$. In particular,
\svapdx{Theorem~\svref{thm-beta-exact}}{Theorem~\svref{thm-cycles}} implies that $\beta(G) \geq \beta(\overline{C_k}) = \frac{k}{\lfloor k/2\rfloor} > 2$. We thus conclude the following:
\begin{corollary}\svlabel{cor-beta-2-undir}
Let $G$ be an undirected graph on $n$ vertices whose complement $\overline{G}$ is nonempty. Then $\beta(G) = 2$ if and only if $\overline{G}$ is bipartite.
\end{corollary}
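Corollary~\svref{cor-beta-2-undir} immediately yields a decision procedure: $2$-color the complement graph. A minimal sketch (Python, BFS coloring; the edge-list encoding is ours):

```python
from collections import deque

def beta_equals_two(n, edges):
    """Decide beta(G) = 2 for an undirected graph G with nonempty complement,
    by testing bipartiteness of the complement (per the corollary above)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if v != u and v not in adj[u]:         # an edge of the complement
                    if color[v] is None:
                        color[v] = 1 - color[u]
                        queue.append(v)
                    elif color[v] == color[u]:
                        return False                   # odd cycle in the complement
    return True

C5 = [(i, (i + 1) % 5) for i in range(5)]
assert not beta_equals_two(5, C5)   # complement of C5 is again C5: not bipartite
assert beta_equals_two(2, [])       # complement is a single edge: bipartite
```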
A polynomial time algorithm for determining whether $\beta=2$ in undirected graphs follows as an immediate consequence of Corollary~\svref{cor-beta-2-undir}. However, for broadcasting with side information in general --- or even for the special case of directed graphs (the main setting of~\cites{BK,BBJK}) --- it is unclear whether such an algorithm exists. In this section we provide such an algorithm, accompanied by a characterization theorem that generalizes the above characterization for undirected graphs. To state our characterization we need the following definitions. As in Section~\svref{subsec:beta-approx} we use $S(j)$ to denote the set $N(j) \cup \{f(j)\}$. Additionally, we introduce the notation $T(j)$ to denote the complement of $S(j)$ in the set of messages. When referring to the message desired by receiver $R_j$, we abbreviate $x_{f(j)}$ to $x(j)$.
\svapdx{}{Henceforth, when referring to a hypergraph $G=(V,E)$, we assume that for each edge $j \in E$, the hypergraph structure specifies the vertex $f(j)$ and both of the sets $S(j),T(j)$.}
\begin{definition} \svlabel{def:h-compat}
If $G=(V,E)$ is a directed hypergraph and $S$ is a set,
a function $F : V \to S$ is said to be \emph{$G$-compatible}
if for every edge $j \in E$, there are two \emph{distinct}
elements $t,u \in S$ such that $F$ maps every element of $T(j)$ to $t$,
and it maps $f(j)$ to $u$.
\end{definition}
\begin{definition} \svlabel{def:aac}
If $G=(V,E)$ is a directed hypergraph, an \emph{almost alternating
$(2n+1)$-cycle} in $G$ is
a sequence of vertices
$v_{-n}, v_{-n+1}, \ldots, v_n$,
and a sequence of edges
$j_0, \ldots, j_n$,
such that for $i=0,\ldots,n$, the vertex $f(j_i)$ is equal to $v_{i-n}$
and the set $T(j_i)$ contains $v_{i}$, as well as $v_{i+1}$ if $i < n$.
\end{definition}
\begin{theorem}\svlabel{thm-directed-beta-2}
For a directed hypergraph $G$ the following are equivalent:
\begin{compactenum}[(i)]
\item $\beta(G)=2$ \svlabel{aac:beta}
\item There exists a set $S$ and a $G$-compatible function $F : V \to S$.
\svlabel{aac:g-compat}
\item $G$ contains no almost alternating cycles.
\svlabel{aac:aac}
\end{compactenum}
Furthermore there is a polynomial-time algorithm to decide
if these equivalent conditions hold.
\end{theorem}
\begin{proof}
{\bf(\svref{aac:beta})}$\Rightarrow${\bf(\svref{aac:aac}):}
The contrapositive statement says that if $G$ contains an almost alternating
cycle then $\beta(G) > 2.$ Let $v_{-n},\ldots,v_n$ be the vertices
of an almost alternating $(2n+1)$-cycle with edges $j_0,\ldots,j_n$.
\svapdx{
We manipulate entropy inequalities to
prove that $b_2(G) > 2$ whenever $G$ contains an
almost alternating cycle; the details are in
Appendix~\ref{subsec:beta-equals-2}. Theorem~\svref{thm-hierarchy}
then implies that $\beta(G) > 2$.
}{
To prove $\beta(G) > 2$ we manipulate
entropy inequalities involving the random variables
$\{x_i : -n \leq i \leq n\}$ and $y$, where $x_i$ denotes
the message associated to vertex $v_i$ normalized to have entropy $1$, and $y$ denotes
the public channel. For $S \subseteq \{y,x_{-n},\ldots,x_n\}$,
let $H(S)$ denote the entropy of the joint distribution of
the random variables in $S$, and let $\hb{S}$ denote
$H(\overline{S})$. Let $S_{i:j}$ denote the set $\{x_i,x_{i+1},\ldots,x_j\}.$
For $0 \leq i \leq n-1$, we have
\begin{equation} \svlabel{eq:aac2}
H(y) + (2n-2) \geq
\hb{\{x_{i-n},x_{i},x_{i+1}\}} =
\hb{\{x_{i},x_{i+1}\}} =
\hb{S_{i:i+1}}\,,
\end{equation}
where the equality $\hb{\{x_{i-n},x_{i},x_{i+1}\}} = \hb{\{x_{i},x_{i+1}\}}$ holds because receiver
$j_i$ can decode message $x_{i-n} = x(j_i)$
given the value of $y$ and the values $x_k$ for $k \in N(j_i)$.
Using submodularity we have that for $0 < j < n,$
\begin{equation} \svlabel{eq:aac3}
\hb{S_{0:j}} + \hb{S_{j:j+1}} \geq
\hb{S_{0:j+1}} + \hb{x_j} =
\hb{S_{0:j+1}} + \hb{\emptyset} =
\hb{S_{0:j+1}} + 2n+1 \,.
\end{equation}
Summing up \sveqref{eq:aac3} for $j=1,\ldots,n-1$ and
canceling terms that appear on both sides, we obtain
\begin{equation} \svlabel{eq:aac4}
\sum_{j=0}^{n-1} \hb{S_{j:j+1}} \geq \hb{S_{0:n}} + (n-1)(2n+1)\,.
\end{equation}
Summing up \sveqref{eq:aac2} for $i=0,\ldots,n-1$ and
combining with \sveqref{eq:aac4} we obtain
\begin{equation} \svlabel{eq:aac5}
n H(y) + n(2n-2) \geq \hb{S_{0:n}} + (n-1)(2n+1)\,.
\end{equation}
Now, observe that
\begin{equation} \svlabel{eq:aac6}
\hb{S_{0:n}} + n-1 \geq
\hb{\{x_0,x_n\}} \geq \hb{\{x_n\}} \geq \hb{\emptyset} = 2n+1 \,.
\end{equation}
Summing \sveqref{eq:aac5} and \sveqref{eq:aac6}, we obtain
\begin{align*}
n H(y) + 2n^2 - n - 1 &\geq 2n^2 + n
\end{align*}
and rearranging we get $H(y)\geq 2+n^{-1}$,
from which it follows that $\beta(G) \geq 2 + n^{-1}$.
}
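As a sanity check on the bookkeeping above, the telescoping step from \sveqref{eq:aac3} to \sveqref{eq:aac4} can be verified mechanically by tracking the coefficient of each $\hb{\cdot}$ term symbolically. The following script is purely illustrative and not part of the formal argument:

```python
from collections import Counter

def telescope_check(n):
    """Sum the inequalities (aac3) for j = 1..n-1 and verify that, after
    cancellation, the result matches (aac4):
        sum_{j=0}^{n-1} hb(S_{j:j+1}) >= hb(S_{0:n}) + (n-1)(2n+1)."""
    lhs, rhs, const = Counter(), Counter(), 0
    for j in range(1, n):
        lhs[('S', 0, j)] += 1          # hb(S_{0:j})
        lhs[('S', j, j + 1)] += 1      # hb(S_{j:j+1})
        rhs[('S', 0, j + 1)] += 1      # hb(S_{0:j+1})
        const += 2 * n + 1             # hb(x_j) = hb(emptyset) = 2n+1
    # Counter subtraction drops non-positive entries, i.e. cancels terms
    assert lhs - rhs == Counter({('S', j, j + 1): 1 for j in range(n)})
    assert rhs - lhs == Counter({('S', 0, n): 1})
    assert const == (n - 1) * (2 * n + 1)

for n in range(2, 12):
    telescope_check(n)
```

The same bookkeeping style is reused below for the cycle and Cayley-graph LP proofs.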
{\bf (\svref{aac:aac})$\Rightarrow$(\svref{aac:g-compat}):}
Define a binary relation $\sharp$ on the vertex set $V$
by specifying that $v \sharp w$ if there exists an
edge $j$ such that $\{v,w\} \subseteq T(j)$.
Let $\sim$ denote the transitive closure of $\sharp$.
Define
$F$ to be the quotient map from $V$ to
the set $S$ of equivalence classes of $\sim$.
We need to check that $F$ is $G$-compatible.
For every edge $j \in E$, the definition of relation $\sharp$
trivially implies that $F$ maps all of $T(j)$ to a single element of $S$.
The fact that it maps $f(j)$ to a \emph{different} element of $S$
is a consequence of
the non-existence of almost alternating cycles.
A relation $f(j) \sim v$ for some $v \in T(j)$ would imply
the existence of a sequence $v_0, \ldots, v_n$ such that
$v_0=f(j), v_n=v,$ and $v_i \sharp v_{i+1}$ for
$i=0,\ldots,n-1$. If we choose $j_i$ for $0 \leq i < n$ to
be an edge such that $T(j_i)$ contains $v_i,v_{i+1}$
(such an edge exists because $v_i \sharp v_{i+1}$)
and we set $j_n = j$ and $v_{i-n} = f(j_i)$ for $i=0,\ldots,n-1$,
then the vertex sequence $v_{-n},\ldots,v_n$ and edge
sequence $j_0,\ldots,j_n$ constitute an almost alternating
cycle in $G$.
Computing the relation $\sim$ and the function $F$,
as well as testing that $F$ is $G$-compatible, can easily
be done in polynomial time, implying the final sentence
of the theorem statement.
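The computation of $\sim$ and $F$ described above is a standard union--find calculation. The sketch below is illustrative only; the encoding (dictionaries \texttt{T} and \texttt{f} mapping each edge $j$ to $T(j)$ and $f(j)$) is ours, not from the paper:

```python
def g_compatible_partition(vertices, T, f):
    """Union all vertices sharing some T(j) (the relation #, then its
    transitive closure ~), and test whether each f(j) avoids the class
    of T(j), i.e. whether the quotient map F is G-compatible."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for tj in T.values():                  # merge each T(j) into one class
        first, *rest = list(tj)
        for w in rest:
            parent[find(w)] = find(first)

    F = {v: find(v) for v in vertices}     # class representative = value of F
    compatible = all(F[f[j]] != F[next(iter(T[j]))] for j in T)
    return F, compatible

# a toy instance with no almost alternating cycle -> F is G-compatible
_, ok = g_compatible_partition('abcd', {0: ['a', 'b'], 1: ['c', 'd']},
                               {0: 'c', 1: 'a'})
assert ok
# overlapping T(j)'s merge a, b, c, so f(0) = c lands in the class of T(0)
_, ok = g_compatible_partition('abcd', {0: ['a', 'b'], 1: ['b', 'c']},
                               {0: 'c', 1: 'a'})
assert not ok
```

Both the construction and the compatibility test clearly run in polynomial time, as claimed.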
{\bf (\svref{aac:g-compat})$\Rightarrow$(\svref{aac:beta}):}
If $F : V \to S$ is $G$-compatible,
we may compose $F$ with a one-to-one mapping from $S$
into a finite field $\mathbb F$, to obtain a function
$\phi : V \to \mathbb F$ that is $G$-compatible.
The public channel broadcasts two elements of $\mathbb F$, namely:
\svapdx{
\[
y = \sum_v x_v, \qquad z = \sum_v \phi(v) x_v.
\]
}{
\begin{align*}
y & = \sum_v x_v \\
z & = \sum_v \phi(v) x_v
\end{align*}
}
Receiver $R_j$ now decodes message $x(j)$ as follows.
Let $c$ denote the unique element of $\mathbb F$ such that $\phi(v)=c$ for every
$v$ in $T(j)$.
Using the pair $(y,z)$ from the public channel, $R_j$
can form the linear combination
\svapdx{$}{$$}
cy - z = \sum_v [c - \phi(v)] x_v.
\svapdx{$}{$$}
We know that every $v \in T(j)$ appears with coefficient zero in this sum.
For every $v \in N(j)$, receiver $R_j$ knows the value of $x_v$ and can
consequently subtract off the term $[c-\phi(v)] x_v$ from the sum.
The only remaining term is $[c-\phi(f(j))]\, x(j)$.
The coefficient $c-\phi(f(j))$ is nonzero, because $\phi$ is $G$-compatible.
Therefore $R_j$ can decode $x(j)$.
\end{proof}
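The two-symbol scheme above is easy to simulate. Here is an illustrative sketch over the prime field $\mathbb F_7$; the toy instance (sets $T(j)$, $N(j)$, vertices $f(j)$, and the map $\phi$) is invented for illustration:

```python
p = 7  # work over the prime field F_p

def broadcast(x, phi):
    """Public channel: y = sum of all messages, z = phi-weighted sum."""
    y = sum(x.values()) % p
    z = sum(phi[v] * xv for v, xv in x.items()) % p
    return y, z

def decode(j, y, z, x, phi, T, N, f):
    """Receiver R_j recovers x(j), the message of vertex f(j)."""
    c = phi[next(iter(T[j]))]            # phi is constant on T(j)
    s = (c * y - z) % p                  # = sum_v (c - phi(v)) x_v
    for v in N[j]:                       # subtract known side information
        s = (s - (c - phi[v]) * x[v]) % p
    coeff = (c - phi[f[j]]) % p          # nonzero by G-compatibility
    return (s * pow(coeff, -1, p)) % p

# toy instance on V = {0,1,2,3}: receiver 0 wants vertex 2's message, has
# side information N(0) = {3}, and T(0) = {0,1}; symmetrically for receiver 1
T = {0: {0, 1}, 1: {2, 3}}
f = {0: 2, 1: 0}
N = {0: {3}, 1: {1}}
phi = {0: 1, 1: 1, 2: 2, 3: 2}           # constant on classes, differs at f(j)

x = {0: 3, 1: 5, 2: 1, 3: 6}
y, z = broadcast(x, phi)
assert all(decode(j, y, z, x, phi, T, N, f) == x[f[j]] for j in T)
```

Note that only the values $x_v$ with $v \in N(j)$ are used inside \texttt{decode}, matching the side-information model.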
\section{The gap between the broadcast rate and clique cover numbers}
\subsection{Separating the broadcast rate from the extreme LP solution $b_n$}\svlabel{sec:sep-beta-alpha}
In this section we prove Theorem~\svref{thm-beta-chif-gap}, which shows a strong form of separation between $\beta$ and its upper bound $b_n = \overline{\chi}_f$. Not only can we have a family of graphs where $\beta=O(1)$ while $\overline{\chi}_f$ is unbounded, but one can construct such a family where $\overline{\chi}_f$ grows polynomially fast with $n$.
\begin{proof}[\textbf{\emph{Proof of Theorem~\svref{thm-beta-chif-gap}}}]
The following family of graphs (up to a small modification) was introduced by Erd\H{o}s and R\'enyi in~\cite{ER}. Due to its close connection to the (Sylvester-)Hadamard matrices when the chosen field has characteristic 2 we refer to it as the \emph{projective-Hadamard} graph $H(\mathbb F_q)$:
\begin{compactenum}
\item Vertices are the non-self-orthogonal vectors in the $2$-dimensional projective space over $\mathbb F_q$.
\item Two vertices are adjacent iff their corresponding vectors are non-orthogonal.
\end{compactenum}
Let $q$ be a prime-power. We claim that the projective-Hadamard graph $H(\mathbb F_q)$ on $n=n(q)$ vertices satisfies $\beta = 3$ while $\overline{\chi}_f = \Theta(n^{1/4})$. The latter
is a well-known fact which appears for instance in~\cites{AK,MW}. Showing that $\overline{\chi}_f \geq (1-o(1))n^{1/4}$ is straightforward and we include an argument establishing this for completeness.
The fact that $\beta \geq 3$ follows from the fact that the standard basis vectors form an independent set of size $3$. A matching upper bound will follow from the $\operatorname{minrk}_\mathbb F$ parameter defined in Section~\svref{subsec:cor-of-thm-1}:
Let $\mathbb F$ be some finite field and let $\ell=\operatorname{minrk}_\mathbb F(G)$ be the length of the optimal linear encoding over $\mathbb F$ for the Index Coding problem of a graph $G$ with messages taking values in $\mathbb F$. Broadcasting $\ell \lceil \log_2 |\mathbb F| \rceil$ bits allows each receiver to recover his required message in $\mathbb F$ and so clearly $\beta \leq \ell$. It thus follows that $\lceil \beta(G) \rceil \leq \operatorname{minrk}_\mathbb F(G)$ for any graph $G$ and finite field $\mathbb F$.
Here, dealing with the projective-Hadamard graph $H$, let $B$ be the Gram matrix over $\mathbb F_q$ of the vectors corresponding to the vertices of $H$. By definition the diagonal entries are nonzero and whenever two vertices $u,v$ are nonadjacent we have $B_{uv} = 0$. In particular $B$ is a representation for $H$ over $\mathbb F_q$ which clearly has rank $3$ as the standard basis vectors span its entire row space. Altogether we deduce that $\beta(H) = 3$ whereas $\overline{\chi}_f = \Theta(n^{1/4})$, as required.
The fact that $\overline{\chi}_f \geq (1-o(1))n^{1/4}$ will follow from a straightforward calculation showing that the clique-number of $H$ is at most $(1+o(1))q^{3/2} = (1+o(1))n^{3/4}$.
Consider the following multi-graph $G$ which consists of the entire projective space:
\begin{compactenum}
\item Vertices are all vectors of the $2$-dimensional projective space over $\mathbb F_q$.
\item Two (possibly equal) vertices are adjacent iff their corresponding vectors are orthogonal.
\end{compactenum}
Clearly, $G$ contains the complement of the Hadamard graph $H(\mathbb F_q)$ as an induced subgraph
and it suffices to show that $\alpha(G) \leq (1+o(1))q^{3/2}$.
It is well-known (and easy) that $G$ has $N=q^2+q+1$ vertices and that every vertex of $G$ is adjacent to precisely $q+1$ others. Further observe that for any two distinct $u,v \in V(G)$ precisely one vertex of $G$ belongs to $\{u,v\}^\bot$ (as $u,v$ are linearly independent vectors). In other words, the codegree of any two vertices in $G$ is $1$.
We conclude that $G$ is a strongly-regular graph (see e.g.~\cite{GR} for more details on this special class of graphs) with codegree parameters $\mu=\nu=1$
(where $\mu$ is the codegree of adjacent pairs and $\nu$ is the codegree of non-adjacent ones).
There are thus precisely 2 nontrivial eigenvalues of $G$ given by
$\frac12 ((\mu-\nu)\pm\sqrt{(\mu-\nu)^2+4(q+1-\nu)})= \pm\sqrt{q}$,
and in particular the smallest eigenvalue is $\lambda_N=-\sqrt{q}$. Hoffman's eigenvalue bound (stating that $\alpha \leq \frac{-m\lambda_m}{\lambda_1-\lambda_m}$ for any regular $m$-vertex graph with largest and smallest eigenvalues $\lambda_1,\lambda_m$ resp., see e.g.~\cite{GR}) now shows
\begin{equation*}
\alpha(G) \leq \frac{-N \lambda_N}{(q+1)-\lambda_N} = \frac{(q^2+q+1)\sqrt{q}}{q+\sqrt{q}+1}
= q^{3/2}-q+\sqrt{q}\,,
\end{equation*}
as required.
\end{proof}
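The counting behind the strongly-regular parameters used above (degree $q+1$ and codegree $1$ in the orthogonality graph on the projective plane) can be confirmed by brute force for small primes $q$. The script below is an illustrative check, not part of the proof:

```python
from itertools import product

def check_params(q):
    """Brute-force the orthogonality (multi)graph G on the projective plane
    over F_q: there are N = q^2+q+1 points, each point is orthogonal to
    exactly q+1 points (counting itself when self-orthogonal), and any two
    distinct points have exactly one common orthogonal point."""
    pts = []
    for v in product(range(q), repeat=3):
        nz = next((i for i, c in enumerate(v) if c), None)
        if nz is not None and v[nz] == 1:   # one representative per point
            pts.append(v)
    dot = lambda u, w: sum(a * b for a, b in zip(u, w)) % q
    assert len(pts) == q * q + q + 1
    assert all(sum(dot(u, w) == 0 for w in pts) == q + 1 for u in pts)
    assert all(sum(dot(u, w) == 0 and dot(v, w) == 0 for w in pts) == 1
               for u in pts for v in pts if u != v)

check_params(2)
check_params(3)
```

The codegree-one property is exactly the statement that the common perp $\{u,v\}^\bot$ of two independent vectors is one-dimensional.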
In addition to demonstrating a large gap between $\overline{\chi}_f$ and $\beta$ on the projective-Hadamard graphs, we show that even in the extreme cases where $G$ is a triangle-free graph on $n$ vertices, in which case $\overline{\chi}_f(G) \geq n/2$, one can
construct Index Coding schemes that significantly
outperform $\overline{\chi}_f$. We prove this
in Section~\svref{subsec:sep-triangle-free} by providing a
family of triangle-free graphs on $n$ vertices where
$\beta \leq \frac38 n$.
\subsection{Broadcast rates for triangle-free graphs}\svlabel{subsec:sep-triangle-free}
In this section we study the behavior of the broadcast rate for triangle-free graphs, where the upper bound $b_n$ on $\beta$ is at least $n/2$. The first question in this respect is whether possibly $\beta = b_n$ in this regime, i.e.\
for such sparse graphs one cannot improve upon the fractional clique-cover approach for broadcasting. This is answered by the following result.
\begin{theorem}\svlabel{thm-triangle-free}
There exists an explicit family of triangle-free graphs on $n$ vertices where $\overline{\chi}_f \geq n/2$
whereas the broadcast rate satisfies $\beta \leq \frac38 n$.
\end{theorem}
The following lemma will be the main ingredient in the construction:
\begin{lemma}\svlabel{lem-triangle-free-family}
For arbitrarily large integers $n$ there exists a family $\mathcal{F}$ of subsets of $[n]$ whose size is at least $8n/3$ and has the following two properties:
\begin{inparaenum}[(i)]
\item Every $A \in \mathcal{F}$ has an odd cardinality.
\item There are no distinct $A,B,C\in\mathcal{F}$ that have pairwise odd cardinalities of intersections.
\end{inparaenum}
\end{lemma}
\begin{remark}
For $n$ even, a simple family $\mathcal{F}$ of size $2n$ with the above properties
is obtained by taking all the singletons and all their complements. However, for our application here it is crucial to obtain a family $\mathcal{F}$ of size strictly larger than $2n$.
\end{remark}
\begin{remark}
The above lemma may be viewed as a higher-dimensional analogue of the Oddtown theorem: if we consider the graph on the odd subsets of $[n]$ with edges between those whose intersection has odd cardinality, the original theorem asks for a maximum independent set, while the lemma above asks for a maximum triangle-free induced subgraph.
\end{remark}
\begin{proof}[Proof of lemma]
It suffices to prove the lemma for $n=6$ by super-additivity (we can partition a ground-set $[N]$ with $N=6m$ into disjoint $6$-tuples and from each take the original family $\mathcal{F}$).
Let $U_1 = \big\{ \{x\} : x \in [5]\big\}$ be all singletons except the last, and $U_2 = \big\{ A \cup \{6\} : A \subset [5]\,,\,|A|=2 \big\}$.
Clearly all subsets given here are odd.
We first claim that there are no triangles in the graph induced on $U_2$. Indeed, since all subsets there contain the element $6$, two vertices in $U_2$ are adjacent iff their corresponding 2-element subsets $A,A'$ are disjoint, and there cannot be 3 pairwise disjoint 2-element subsets of $[5]$.
The vertices of $U_1$ form an independent set in the graph, hence the only remaining option for a triangle in the induced subgraph on $U_1\cup U_2$ is of the form $\{x\}, (A\cup\{6\}), (A'\cup \{6\})$. However, to support edges from $\{x\}$ to the two sets in $U_2$ we must have that $x$ belongs to both sets, and since $x\neq 6$ by definition we must have $x \in A \cap A'$. However, we must also have $A \cap A' = \emptyset$ for the two vertices in $U_2$ to be adjacent, contradiction.
To conclude the proof observe that adding the extra set $[5]$ does not introduce any triangles, since $U_1$ is an independent set while $[5]$ is not adjacent to any vertex in $U_2$ (its intersection with any set $(A\cup\{6\}) \in U_2$ contains precisely 2 elements). Altogether we have $|\mathcal{F}| = 5+\binom{5}2 + 1=\frac83 n$.
\end{proof}
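For $n=6$ the family in the proof can be written down explicitly and both properties checked exhaustively. The following illustrative script (not part of the proof) does so:

```python
from itertools import combinations

U1 = [frozenset({x}) for x in range(1, 6)]                  # singletons {1},...,{5}
U2 = [frozenset(A) | {6} for A in combinations(range(1, 6), 2)]
family = U1 + U2 + [frozenset(range(1, 6))]                 # plus the set [5]

assert len(family) == 16                                    # = (8/3) * 6
assert all(len(A) % 2 == 1 for A in family)                 # (i) all cardinalities odd
# (ii) no three sets with pairwise odd intersection sizes
assert not any(len(A & B) % 2 == len(B & C) % 2 == len(A & C) % 2 == 1
               for A, B, C in combinations(family, 3))
```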
\begin{proof}[\emph{\textbf{Proof of Theorem~\svref{thm-triangle-free}}}]
Let $\mathcal{F}$ be the family provided by the above lemma and consider the graph $G$ whose $N$ vertices are the elements of $\mathcal{F}$ with edges between $A,B$ whose cardinality of intersection is odd. By definition the graph $G$ is triangle-free and we have $\overline{\chi}_f(G) \geq N/2$.
Next, consider the binary matrix $M$ indexed by the vertices of $G$ where $M_{A,B} = |A\cap B|\pmod{2}$. All the diagonal entries of $M$ equal $1$ by the fact that $\mathcal{F}$ is comprised of odd subsets only, and clearly $M$ is a representation of $G$ over $GF(2)$. At the same time, $M$ can be written as $F F^\mathrm{T}$ where $F$ is the $N\times n$ incidence-matrix of the ground-set $[n]$ and subsets of $\mathcal{F}$. In particular we have that $\operatorname{rank}(M) \leq \operatorname{rank}(F) \leq n$ over $GF(2)$.
This implies that $\operatorname{minrk}_2(G) \leq n$ and the proof is now concluded by the fact that $\beta(G) \leq \operatorname{minrk}_2(G)$.
\end{proof}
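Similarly, the rank bound $\operatorname{rank}(M) \leq n$ over $GF(2)$ can be confirmed directly for the $n=6$ family above (an illustrative script, with rows of $M$ encoded as bitmasks):

```python
from itertools import combinations

family = [frozenset({x}) for x in range(1, 6)] \
       + [frozenset(A) | {6} for A in combinations(range(1, 6), 2)] \
       + [frozenset(range(1, 6))]

# row A of M, with M_{A,B} = |A cap B| mod 2, packed into an integer bitmask
rows = [sum((len(A & B) % 2) << j for j, B in enumerate(family)) for A in family]
assert all((rows[i] >> i) & 1 for i in range(len(family)))  # ones on the diagonal

def gf2_rank(rows):
    """Rank over GF(2) via an XOR basis keyed by leading bit."""
    basis = {}
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in basis:
                row ^= basis[lead]      # reduce by the stored basis row
            else:
                basis[lead] = row
                break
    return len(basis)

assert gf2_rank(rows) <= 6   # rank(M) <= n = 6, so minrk_2(G) <= (3/8) N
```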
\begin{remark} The construction of the family of subsets $\mathcal{F}$ in Lemma~\svref{lem-triangle-free-family} relied on a triangle-free 15-vertex base graph $H$ which is equivalent to the Peterson graph with 5 extra vertices added to it, each one adjacent to one of the independent sets of size 4 in the Peterson graph.
\end{remark}
Having discussed the relation between $\beta$ and $b_n$ for sparse graphs we now turn our attention to the analogous question for the other extreme end, namely whether $\beta=b_1$ when $b_1=\alpha$ attains its smallest possible value (other than in the complete graph) of 2.
\subsection{Graphs with a broadcast rate of nearly 2}
We now return to the setting of undirected graphs, where the class of $\{ G : \beta(G) = 2\}$ is simply the complements of nonempty bipartite graphs, where in particular Index Coding is trivial.
It turns out that extending this class to $\{G : \beta(G) < 2+\epsilon\}$ for any fixed small $\epsilon> 0$ already turns this family of graphs into a much richer one, as the following simple corollary of Theorem~\svref{thm-hierarchy} shows. Recall that the Kneser graph with parameters $(n,k)$ is the graph whose vertices are all the $k$-element subsets of $[n]$ where two vertices are adjacent iff their two corresponding subsets are disjoint.
\begin{corollary}\svlabel{cor-kneser}
Fix $0 < \epsilon < \frac12$ and let $G$ be the complement of the Kneser$(n,k)$ graph on $N=\binom{n}{k}$ vertices for $n = (2+\epsilon)k$. Then $\beta(G) \leq 2+\epsilon$ whereas $\overline{\chi}(G) \geq (\epsilon/2)\log N$.
\end{corollary}
\begin{proof}
Using topological methods, Lov\'asz~\cite{Lovasz} proved that
the Kneser graph with parameters $(n,k)$ has chromatic number $n-2k+2$, in our case giving
that $\overline{\chi}(G) = \epsilon k + 2 \geq (\epsilon/2)\log N$ (with the last inequality due to the fact that $N \leq [\mathrm{e}(2+\epsilon)]^k$ and so $k \geq \frac12 \log N$).
At the same time, it is well known that $G$ satisfies $\overline{\chi}_f = n/k$ (its maximum clique corresponds to a maximum set of intersecting $k$-subsets, which has size $\omega=\binom{n-1}{k-1}$ by the Erd\H{o}s-Ko-Rado Theorem, and being vertex-transitive it satisfies $\overline{\chi}_f = N / \omega$).
The bound $\beta \leq b_n = \overline{\chi}_f$ given in Theorem~\svref{thm-hierarchy} thus completes the proof.
\end{proof}
\section{Establishing the exact broadcast rate for families of graphs}\svlabel{sec:beta-of-graphs}
\subsection{The broadcast rate of cycles and their complements}
The following theorem establishes the value of $\beta$ for cycles and their complements via the LP framework of Theorem~\svref{thm-hierarchy}.
\begin{theorem}\svlabel{thm-cycles}
For any integer $n\geq 4$ the $n$-cycle satisfies $\beta(C_{n}) = n/2$ whereas its complement satisfies $\beta(\overline{C_n})=n/\lfloor n/2\rfloor $. In both cases $\beta_1 = \lceil\beta \rceil$ while
$\alpha = \lfloor \beta \rfloor$.
\end{theorem}
\begin{proof}
As the case of $n$ even is trivial with all the inequalities in~\sveqref{eq-trivial-ineqs} collapsing into an equality (which is the case for any perfect graph), assume henceforth that $n$ is odd.
We first show that $\beta(C_n) = n/2$. Putting $n=2k+1$ for $k \geq 2$, we aim to prove that $b_2 \geq k+1/2 $, which according to Theorem~\svref{thm-hierarchy} will imply the required result since clearly $\overline{\chi}_f = k+1/2$.
Denote the vertices $V$ of the cycle by $0,1,\ldots,2k$. Further define:
\begin{align*}
E &= \{i \;:\; i\equiv0\bmod{2}~,~ i \ne 2k\} &\mbox{(Evens)}\,, \\
O &= \{i\;:\; i \equiv1\bmod{2}\} &\mbox{(Odds)}\,, \\
E^+ &= \{i \;:\; i \le 2k-2\} &\mbox{(Evens decoded)}\,,\\
O^+ &= \{i \;:\; 1 \le i \le 2k-1\} &\mbox{(Odds decoded)}\,,\\
M &= \{i \;:\; 1 \le i \le 2k-2\} &\mbox{(Middle)}\,.
\end{align*}
Next, consider the following constraints in the LP $\mathcal{B}_2$:
\begin{align*}
X(\emptyset) + k &\ge X(E) & \mbox{(slope)}\\
X(\emptyset) + k &\ge X(O) & \mbox{(slope)}\\
X(\emptyset) + 1 &\ge X(\{2k\}) & \mbox{(slope)}\\
X(E) & \ge X(E^+) & \mbox{(decode)}\\
X(O) & \ge X(O^+) & \mbox{(decode)}\\
X(E^+) + X(O^+) & \ge X(V) + X(M) & \mbox{(submod , decode)}\\
X(M) + X(\{2k\}) &\ge X(V) + X(\emptyset) & \mbox{(submod , decode)}\\
2X(V) &\ge 2(2k+1) & \mbox{(initialize)}&\,.
\end{align*}
Summing and canceling we get $2X(\emptyset) + 2k+1 \ge 4k+2$, implying $X(\emptyset) \ge k+1/2$. The main idea of this proof, as with those to follow, is to start from a few sets of vertices, apply decoding to them, and combine them using submodularity so as to eventually produce $X(V)$ and $X(\emptyset)$.
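The summation-and-cancellation step itself can be checked mechanically by tracking the coefficient of every $X(\cdot)$ term. The following illustrative script (the string labels for the sets are ours) reproduces the bound $X(\emptyset)\geq k+1/2$:

```python
from collections import Counter
from fractions import Fraction

def odd_cycle_bound(k):
    """Add up the eight LP constraints listed above for C_{2k+1} and verify
    that everything cancels except 2 X(empty) >= (4k+2) - (2k+1)."""
    constraints = [                 # (lhs const, lhs terms, rhs const, rhs terms)
        (k, ['empty'], 0, ['E']),            # slope
        (k, ['empty'], 0, ['O']),            # slope
        (1, ['empty'], 0, ['2k']),           # slope
        (0, ['E'], 0, ['E+']),               # decode
        (0, ['O'], 0, ['O+']),               # decode
        (0, ['E+', 'O+'], 0, ['V', 'M']),    # submod, decode
        (0, ['M', '2k'], 0, ['V', 'empty']), # submod, decode
        (0, ['V', 'V'], 2 * (2 * k + 1), []),  # initialize
    ]
    lconst = sum(c for c, _, _, _ in constraints)
    rconst = sum(c for _, _, c, _ in constraints)
    lterms, rterms = Counter(), Counter()
    for _, l, _, r in constraints:
        lterms.update(l); rterms.update(r)
    assert lterms - rterms == Counter({'empty': 2})  # only 2 X(empty) survives
    assert rterms - lterms == Counter()
    return Fraction(rconst - lconst, 2)              # lower bound on X(empty)

assert odd_cycle_bound(2) == Fraction(5, 2)          # b_2(C_5) >= 5/2
assert all(odd_cycle_bound(k) == k + Fraction(1, 2) for k in range(2, 20))
```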
It remains to treat complements of odd cycles. Let $H=\overline{{\mathsf{AAC}}_{n}}$ be the complement of a directed odd almost-alternating cycle on $n$ vertices (as defined in Section~\svref{subsec:beta-equals-2}). Treating $\overline{C_{n}}$ as a directed graph (replacing each edge with a bi-directed pair of edges) it is clearly a spanning subgraph of $H$, hence $\beta(\overline{C}_{n})$ is at least as large as $\beta(H)$. The proof in Section~\svref{subsec:beta-equals-2} establishes that $\beta(H) \geq \frac{n}{\lfloor n/2\rfloor}$, translating to a lower bound on $\beta(\overline{C_n})$. The matching upper bound follows from the fact that due to Theorem~\svref{thm-hierarchy} we have $\beta(\overline{C_n}) \leq \overline{\chi}(\overline{C_n}) = \frac{n}{\lfloor n/2\rfloor}$.
\end{proof}
\subsection{The broadcast rate of cyclic Cayley Graphs}\svlabel{sec:ap-cayley}
In this section we demonstrate how the same framework of the proof of Theorem~\svref{thm-cycles} may be applied with a considerably more involved sequence of entropy-inequalities to establish the broadcast rate of two classes of Cayley graphs of the cyclic group $\mathbb Z_n$. Recall that a \emph{cyclic Cayley graph} on $n$ vertices with a set of generators $G \subseteq \{1,2,\ldots,\lfloor n/2\rfloor\}$ is the graph on the vertex set $\{0,1,2,\ldots,n-1\}$ where $(i,j)$ is an edge iff $j - i \equiv g \pmod{n}$ for some $g \in G$.
\begin{theorem}\svlabel{thm-3-reg-cayley}
For any $n\geq 4$, the 3-regular Cayley graph of $\mathbb Z_n$ has broadcast rate $\beta = n/2$.
\end{theorem}
\begin{theorem}[Circulant graphs]\svlabel{thm-circulant}
For any integers $n\geq 4$ and $k < \frac{n-1}2$, the Cayley graph of $\mathbb Z_n$ with generators $\{\pm1,\ldots,\pm k\}$ has broadcast rate $\beta=n/(k+1)$.
\end{theorem}
To simplify the exposition of the proofs of these theorems we make use of the following definition.
\begin{definition}
A \emph{slice} of size $i$ in $\mathbb Z_n$ indexed by $x$ is the subset of $i$ contiguous vertices on the cycle given by $\{x+j \pmod{n} : 0 \le j < i\}$.
\end{definition}
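Concretely, a slice is just a wrap-around interval on the cycle; the following two-line illustration fixes the convention:

```python
def cycle_slice(x, i, n):
    """The slice of size i indexed by x in Z_n (wraps around the cycle)."""
    return {(x + j) % n for j in range(i)}

assert cycle_slice(5, 4, 6) == {5, 0, 1, 2}   # wraps past n-1 back to 0
assert len(cycle_slice(0, 3, 10)) == 3
```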
\begin{proof}[\textbf{\emph{Proof of Theorem~\svref{thm-3-reg-cayley}}}]
It is not hard to see that for a cyclic Cayley graph to be $3$-regular it must have two generators, $1$ and $n/2$, and $n$ must be even. If $n$ is not divisible by four, then it is easy to check that there is an independent set of size $n/2$ and $\overline{\chi}_f$ is also $n/2$. Thus, it immediately follows that $\beta = n/2$.
For 3-regular cyclic Cayley graphs where $n$ is divisible by four, $\alpha$ is strictly less than $n/2$. So to prove that $\beta = n/2$ we use the LP $\mathcal{B}_2$ to show $b_2 \ge n/2$, implying $\beta \ge n/2$.
Let $0,1,2,\ldots,4k-1$ be the vertex set of the graph. We assume that any solution $X$ has cyclic symmetry, that is, $X(S) = X(\{s+i : s \in S\})$ for all $i \in [0,4k-1]$. This assumption is without loss of generality: we can take any LP solution $X$ and obtain a symmetric one $X'$ of the same value by setting $X'(S) = \frac{1}{4k}\sum_{i = 0}^{4k-1} X(\{s+i : s \in S\})$. Every constraint holds for $X'$ because it is simply the average of $4k$ valid constraints.
In our proof we will be using the following subsets of vertices:
\begin{align*}
[i] &= \{0,1,2,\ldots,i-1\} \;\; \text{(a slice of size $i$)}\\
D &= \{0,2, \ldots, 2k-4, 2k-2, 2k+1,2k+3, \ldots , 4k-5,4k-3\}\\
D^+ &= \{0,1,2,\ldots,2k-4, 2k-3, 2k-2, 2k+1, 2k+2,2k+3, \ldots, 4k-4, 4k-3\}\,.
\end{align*}
\begin{figure}[tb]
\begin{center}
\includegraphics{beta_3reg.pdf}
\caption{A 3-regular cyclic Cayley graph on $4k$ vertices. Highlighted vertices mark the set $D$ used in the proof of Theorem~\svref{thm-3-reg-cayley}.}
\svlabel{fig:3regCC}
\end{center}
\vspace{-0.5cm}
\end{figure}
Observe from Figure~\svref{fig:3regCC} that $D \rightsquigarrow D^+$. Also note that $D^+$ is missing only four vertices, two on each side almost directly across from each other, and $|D| = 2k-1$.
Similar to our proof for the 5-cycle, we will prove $b_2 \ge n/2$ by listing a sequence of constraints in the LP $\mathcal{B}_2$ that sum and cancel to give us $X(\emptyset) \ge n/2$. However, this proof differs from the 5-cycle proof because we list inequalities implied not only by the constraints in our LP but also our assumption of cyclic symmetry. The fact that any two slices of size $i$ have the same $X$ value is used heavily in the sequence of inequalities that make up our proof.\\
First, we create $2k-1$ $X(D^+)$ terms on the right-hand-side:
\begin{align*}
(2k-2) + X(\emptyset) &\geq X(D \setminus \{0\}) & \mbox{(slope)}\\
X([1]) + X(D \setminus \{0\}) & \ge X(D^+) + X(\emptyset) & \mbox{(submod , decode)} \\
(2k-2)\big((2k-1) + X(\emptyset)\big) &\geq (2k-2)\,X(D^+) & \mbox{(slope , decode)}
\end{align*}
Now, we apply submodularity to slices of size $i = 2,\ldots,2k$ and an $X(D^+)$ term --- canceling all the $X(D^+)$ terms we created on the right-hand side in the previous step. We pick our slices so that the union is a slice missing only two vertices, and the intersection is a slice of size $i-1$.
\begin{align*}
X(D^+) + X([2k]) & \geq X([4k-2]) + X([2k-1])\\
X(D^+) + X([2k-1]) &\geq X([4k-2]) + X([2k-2])\\
&~\vdots\\
X(D^+) + X([2]) &\geq X([4k-2]) + X([1])
\end{align*}
If we sum and cancel the inequalities listed so far we have: $$2k(2k-2)+ (2k-2)X(\emptyset) + X([2k]) \ge (2k-1)X([4k-2])$$ Now, we combine all $2k-1$ of the $X([4k-2])$ terms to get full cycles.
\begin{align*}
2X([4k-2]) & \geq X(V) + X([4k-3])\\
X([4k-3]) + X([4k-2]) &\geq X(V) + X([4k-4])\\
X([4k-4]) + X([4k-2]) &\geq X(V) + X([4k-5])\\
&~\vdots\\
X([2k+1]) + X([4k-2]) &\geq X(V) + X([2k])
\end{align*}
Now, we are left with: $$2k(2k-2)+ (2k-2)X(\emptyset) \ge (2k-2)X(V)\,.$$
We can apply the constraint $X(V) \ge n$, yielding:
$$2k(2k-2)+ (2k-2)X(\emptyset) \ge (2k-2)4k$$ thus $X(\emptyset)\ge 2k$ for any feasible solution, implying $b_2 \ge 2k = n/2$.
\end{proof}
\begin{proof}[\textbf{\emph{Proof of Theorem~\svref{thm-circulant}}}]
It is easy to check that $\overline{\chi}_f$ for these graphs is $n/(k+1)$, so it is sufficient to prove that $b_2 \ge n/(k+1)$.
As we did in the proof of Theorem~\svref{thm-3-reg-cayley} we will assume that our solution $X$ has cyclic symmetry. Suppose that $n \equiv j \pmod{k+1}$. Now, consider dividing the cycle into sections of size $k+1$ and let $S$ be the set of vertices consisting of the first $k$ in each complete section ($|S| = k(n-j)/(k+1)$). Then by decoding $X(S) = X([-j])$, where $[-j]$ denotes a slice of size $n-j$. We will also use $[j]$ to denote a slice of size $j$, as in the proof of Theorem~\svref{thm-3-reg-cayley}. Observe that if $j = 0$ then this observation alone completes the proof.
\begin{lemma}
$(k+1)X([-j]) + X([k]) \ge (k+1)X([-j+1]) + X(\emptyset)$
\svlabel{lem:gen1k_helper}
\end{lemma}
\begin{proof}
The following inequalities are true by submodularity and the cyclic symmetry of $X$. In each inequality we apply submodularity to two slices, say of size $s$ and $t$, $s \le t$, overlapping such that their intersection is a slice of size $s-1$ and their union a slice of size $t+1$.\begin{align*}
X([-j])+X([-j]) &\ge X([-j+1]) + X([-j-1])\\
X([-j])+X([-j-1]) &\ge X([-j+1]) + X([-j-2])\\
X([-j])+X([-j-2]) &\ge X([-j+1]) + X([-j-3])\\
&~\vdots\\
X([-j])+X([-j-(k-1)]) &\ge X([-j+1]) + X([-j-k])\\
X([-j-k])+X([k]) & \ge X(\emptyset) + X([-j+1]) \qquad\qquad\mbox{(submod , decode).}
\end{align*}
Adding up all of these inequalities gives us the desired inequality.
\end{proof}
Now, if we sum together the following string of inequalities we get the bound we want on $X(\emptyset)$. Essentially, we iteratively apply our Lemma to get us to the trivial $j=0$ case.
\begin{align*}
k(n-j) + (k+1)X(\emptyset) &\ge (k+1)X([-j]) & \mbox{(slope , decode)}\\
jk + jX(\emptyset) &\ge jX([k]) & \mbox{(slope)}\\
(k+1)X([-j]) + X([k]) & \ge (k+1)X([-j+1]) + X(\emptyset) & \mbox{(by Lemma~\svref{lem:gen1k_helper})} \\
(k+1)X([-j+1]) + X([k]) & \ge (k+1)X([-j+2]) + X(\emptyset) & \mbox{(by Lemma~\svref{lem:gen1k_helper})} \\
&~\vdots\\
(k+1)X([-1]) + X([k]) & \ge (k+1)X(V) + X(\emptyset) & \mbox{(by Lemma~\svref{lem:gen1k_helper})} \\
(k+1)X(V) &\ge (k+1)n\,.
\end{align*}
Summing and canceling yields $(k+1)X(\emptyset) \ge n$, that is, $X(\emptyset) \ge n/(k+1)$, which completes the proof.
\end{proof}
\subsection{The broadcast rate of specific small graphs}
For any specific graph one can attempt to solve the second level of the LP-hierarchy directly to yield a possibly tight lower bound $\beta \geq b_2$. The following fact lists a few examples obtained using an AMPL/CPLEX solver.
\begin{fact}\svlabel{cor-specific-graphs} The following graphs satisfy $b_2=\beta=\overline{\chi}_f$:
\begin{compactenum}
[(1)]
\item Petersen graph (Kneser graph on $\binom{5}{2}$ vertices): $n=10$, $\alpha = 4$ and $\beta = 5$.
\item Gr\"{o}tzsch graph (smallest triangle-free graph with $\chi=4$): $n=11$, $\alpha=5$ and $\beta = \frac{11}2$.
\item Chv\'{a}tal graph (smallest triangle-free $4$-regular graph with $\chi=4$): $n=12$, $\alpha=4$ and $\beta = 6$.
\end{compactenum}
\end{fact}
\section{Coverage functions: a proof of Lemma~\svref{lem:coverage-functions}}\svlabel{sec:ap-coverage}
Lemma~\svref{lem:coverage-functions} (\S~\svref{sec:hierarchy-proof}) will readily follow from establishing the following Lemmas~\svref{lem:coverage1} and \svref{lem:coverage2}, as it is easy to verify that the slope constraints and the $i$-th order submodularity constraints in our LP are equivalent to the inequalities in Eq.~\sveqref{eq:almost-LP}.
\begin{lemma}
A vector $X$, indexed over all subsets of the groundset $V$, satisfies
\begin{equation} \svlabel{eq:almost-LP}
\forall R \neq \emptyset, \;\forall Z \subseteq V \setminus R, \;\;
\sum_{T \subseteq R} (-1)^{|R\setminus T|} X(T \cup Z) \le \left\{\begin{array}{ll}1 & \mbox{if $|R|=1$} \\
0 & \mbox{otherwise}
\end{array}\right.
\end{equation}
if and only if it satisfies:
\begin{equation} \svlabel{eq:almost-coverage}
\forall R \subseteq V, R \neq \emptyset, \;\;
\sum_{T \subseteq R} (-1)^{|T|} (X(R \setminus T) - |R\setminus T|) \le 0\,.
\end{equation}
\svlabel{lem:coverage1}
\end{lemma}
\begin{lemma}
A vector $X$, indexed over all subsets of the ground-set $V$, satisfies \sveqref{eq:almost-coverage} if and only if there exists a vector of non-negative numbers $w(T)$, defined for every non-empty vertex set $T$, such that $X(S) = |S| + \sum_{T: T \not \subseteq S} w(T)$ $\forall S \subseteq V$.
\svlabel{lem:coverage2}
\end{lemma}
\begin{proof}[Proof of Lemma~\svref{lem:coverage1}]
First, we claim that $X$ satisfies \sveqref{eq:almost-coverage} if and only if it satisfies:
\begin{equation} \svlabel{eq:middle}
\forall R \subseteq V, R \neq \emptyset, \;\;
\sum_{T \subseteq R} (-1)^{|R \setminus T|} X(T) \le \left\{\begin{array}{ll}1 & \mbox{if $|R|=1$}\,, \\
0 & \mbox{otherwise.}
\end{array}\right.
\end{equation}
Starting with the inequalities \sveqref{eq:almost-coverage}, observe that we get an equivalent set of inequalities when we switch the roles of $T$ and $R \setminus T$, as it is essentially summing over the complements of $T$ instead of $T$. Additionally, for $|R| \ge 2$ we can remove the constant term because it is equal to the alternating sum $\pm\sum_{k=1}^{|R|} (-1)^{k} \binom{|R|}{k}\,k = 0$.
If $|R| = 1$ then the constant term is one.
Now, we show the equivalence of \sveqref{eq:middle} and \sveqref{eq:almost-LP}. Clearly, if $X$ satisfies \sveqref{eq:almost-LP} then it satisfies \sveqref{eq:middle} because the inequalities in the latter are a subset of the inequalities in the former. Now, we show by induction on the size of $Z$ that~\sveqref{eq:middle} implies \sveqref{eq:almost-LP}. Our base case, $|Z| = 0$, holds trivially. We assume that~\sveqref{eq:middle} implies~\sveqref{eq:almost-LP} for $|Z| < |Z^*|$ and show the following inequality holds:
\[\sum_{T \subseteq R^*} (-1)^{|R^*\setminus T|} X(T \cup Z^*) \le \left\{\begin{array}{ll}1 & \mbox{if $|R^*|=1$} \\
0 & \mbox{otherwise}
\end{array}\right.\tag{$\star$}\]
By our inductive hypothesis, Eq.~\sveqref{eq:middle} implies the following two inequalities from~\sveqref{eq:almost-LP}:
\begin{align*}
R &= R^* \cup \{z\}\,,\quad Z = Z^* \setminus \{z\} \tag{I}\,,\\
R &= R^*\,,\qquad Z = Z^* \setminus \{z\} \tag{II}
\end{align*} for some $z \in Z^*$. It is easy to see that $(\star)-(\mathrm{II}) = (\mathrm{I})$, thus we can derive ($\star$) from $(\mathrm{I}),(\mathrm{II})$.
\end{proof}
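The identity $(\star) = (\mathrm{I}) + (\mathrm{II})$ is linear in $X$, so it can be spot-checked on random integer data. The script below is an illustrative check, not part of the proof:

```python
from itertools import chain, combinations
import random

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def alt_sum(X, R, Z):
    """sum over T subseteq R of (-1)^{|R \\ T|} X(T u Z)"""
    return sum((-1) ** (len(R) - len(T)) * X[T | Z] for T in subsets(R))

random.seed(1)
V = frozenset(range(5))
X = {S: random.randint(0, 100) for S in subsets(V)}  # arbitrary set function

R_star, Z_star, z = frozenset({0, 1}), frozenset({2, 3}), 2
star = alt_sum(X, R_star, Z_star)
ineq_I = alt_sum(X, R_star | {z}, Z_star - {z})   # R = R* u {z}, Z = Z* \ {z}
ineq_II = alt_sum(X, R_star, Z_star - {z})        # R = R*,       Z = Z* \ {z}
assert star == ineq_I + ineq_II
```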
\noindent\emph{Proof of Lemma~\svref{lem:coverage2}.}
Suppose there exists a vector of non-negative numbers $w(T)$, defined for every non-empty vertex set $T$, such that $X(S) = |S| + \sum_{T: T \not \subseteq S} w(T)$ $\forall S \subseteq V$ as in the statement of our Lemma. Then rearranging, we have:
$$X(\overline{S})-|\overline{S}| = \sum_{T: T \not \subseteq \overline{S}} w(T) = \sum_{T:T \cap S \ne \emptyset} w(T) \qquad \forall S \subseteq V\,.$$
Now, define $F(S) = X(\overline{S}) - |\overline{S}|$.
\begin{lemma}
The set function $F$ satisfies
\begin{equation} \svlabel{eq:F_almost-coverage}
\forall R \subseteq V, R \neq \emptyset, \;\;
\sum_{T \subseteq R} (-1)^{|T|} F(\overline{R} \cup T)
\leq 0\,,
\end{equation}
if and only if there exists a vector of non-negative numbers $w(T)$, defined for every non-empty vertex set $T$, such that
\begin{equation}
\svlabel{eq:set-cover-fn}
F(S) = \sum_{T:T \cap S \ne \emptyset} w(T) \;\;\forall S \subseteq V.
\end{equation}
\svlabel{lem:coverage-helper}
\end{lemma}
\begin{remark}
A set function $F$ is called a weighted set cover function if it can be written as in Eq.~\sveqref{eq:set-cover-fn}.
\end{remark}
Plugging in $X(\overline{S})-|\overline{S}|$ for $F(S)$ and noting that $\overline{\overline{R} \cup T} = R \setminus T$ it is easy to see that Lemma~\svref{lem:coverage-helper} implies our desired result. Thus, it remains to prove Lemma~\svref{lem:coverage-helper}.
\begin{proof}[Proof of Lemma~\svref{lem:coverage-helper}]
In this proof we will be working with vectors and matrices whose rows
and columns are indexed by subsets of $V$. Let $n=|V|, N = 2^n$. Expressing $F$ and $w$ as vectors with $N-1$ components
(ignoring the component corresponding to the empty set),
this equation can be written in matrix form as
\[
F = M w,
\]
where $M$ is the $(N-1)$-by-$(N-1)$ matrix defined by
\[
M_{TS} = \begin{cases}1 & \mbox{if } T \cap S \neq \emptyset \\
0 & \mbox{otherwise.} \end{cases}
\]
We shall see below that $M$ is invertible.
It follows that $F$ can be written as in Eq.~\sveqref{eq:set-cover-fn} if and only if $M^{-1} F$ is a vector $w$ with non-negative
components.
To prove that $M$ is invertible and to obtain
a formula for the entries of the inverse matrix,
let $L$ be the $N$-by-$N$ matrix defined by
\[
L_{TS} = \begin{cases}1 & \mbox{if } T \cap S \neq \emptyset \\
0 & \mbox{otherwise.} \end{cases}
\]
In other words, $L$ is the matrix obtained from $M$
by adding a top row and a left column consisting
entirely of zeros. Let us define another matrix
$K$ by
\[
K_{TS} = 1 - L_{TS} =
\begin{cases}1 & \mbox{if } T \cap S = \emptyset \\
0 & \mbox{otherwise.} \end{cases}
\]
We can now begin to make progress on inverting
these matrices, using the observation that both
$K$ and $K+L$ can be represented as Kronecker
products of $2$-by-$2$ matrices. Specifically,
\begin{align*}
K &= \left( \begin{array}{rr} 1 & 1 \\ 1 & 0 \end{array}
\right)^{\otimes n} ~,\quad
K+L = \left( \begin{array}{rr} 1 & 1 \\ 1 & 1 \end{array}
\right)^{\otimes n}.
\end{align*}
The inverse of $\left(\begin{smallmatrix} 1 & 1 \\ 1 & 0\end{smallmatrix}\right)$
is $\left(\begin{smallmatrix} 0 & 1 \\ 1 & -1\end{smallmatrix}\right)$.
We now use the mixed-product property of Kronecker
products, $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$, to deduce that
\begin{align*}
L \left( \begin{array}{rr} 0 & 1 \\ 1 & -1 \end{array}
\right)^{\otimes n} &=
(K+L) \left( \begin{array}{rr} 0 & 1 \\ 1 & -1 \end{array}
\right)^{\otimes n} -
K \left( \begin{array}{rr} 0 & 1 \\ 1 & -1 \end{array}
\right)^{\otimes n} \\
&=
\left[
\left( \begin{array}{rr} 1 & 1 \\ 1 & 1 \end{array} \right)
\left( \begin{array}{rr} 0 & 1 \\ 1 & -1 \end{array} \right)
\right]^{\otimes n} -
\left[
\left( \begin{array}{rr} 1 & 1 \\ 1 & 0 \end{array} \right)
\left( \begin{array}{rr} 0 & 1 \\ 1 & -1 \end{array} \right)
\right]^{\otimes n} \\
&=
\left( \begin{array}{rr} 1 & 0 \\ 1 & 0 \end{array} \right)^{\otimes n}
-
\left( \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array}
\right)^{\otimes n}.
\end{align*}
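This identity can be sanity-checked numerically for small $n$ (an illustrative sketch using NumPy, not part of the proof):

```python
import numpy as np
from functools import reduce

def kron_pow(B, n):
    # n-fold Kronecker power B (x) B (x) ... (x) B
    return reduce(np.kron, [B] * n)

n = 4
N = 2 ** n
K = kron_pow(np.array([[1.0, 1.0], [1.0, 0.0]]), n)   # K_{TS} = 1 iff T and S are disjoint
L = kron_pow(np.ones((2, 2)), n) - K                  # L_{TS} = 1 iff T and S intersect
X = kron_pow(np.array([[0.0, 1.0], [1.0, -1.0]]), n)

# The computation above: L X = [[1,0],[1,0]]^{(x)n} - I
rhs = kron_pow(np.array([[1.0, 0.0], [1.0, 0.0]]), n) - np.eye(N)
assert np.allclose(L @ X, rhs)
```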
Examine the matrices occurring on the
left and right sides of the equation
above, and consider the submatrix obtained
by deleting the left column and top row.
On the right side, we obtain $-I$, where $I$
denotes the $(N-1)$-by-$(N-1)$ identity matrix.
On the left side we obtain $M \cdot A$, where
$A$ is the matrix obtained from
$\left(\begin{smallmatrix} 0 & 1 \\ 1 & -1\end{smallmatrix}\right)^{\otimes n}$
by deleting the left
column and top row. This implies that
$M$ is invertible and its inverse is $-A$.
Moreover, one can verify that the
entries of $-A$ are given by
\[
-A_{TS} = \begin{cases}
0 & \mbox{if } T \cup S \neq V \\
(-1)^{|T \cap S|+1} & \mbox{if } T \cup S = V.
\end{cases}
\]
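Both the invertibility of $M$ and the entry formula for its inverse can be sanity-checked numerically for small $n$ (an illustrative sketch; note that since $MA=-I$, each non-zero entry of $M^{-1}=-A$ is $-(-1)^{|T \cap S|}$):

```python
import itertools
import numpy as np

n = 3
V = frozenset(range(n))
subsets = [frozenset(c) for k in range(1, n + 1)
           for c in itertools.combinations(range(n), k)]  # non-empty sets

# M_{TS} = 1 iff T and S intersect.
M = np.array([[1.0 if T & S else 0.0 for S in subsets] for T in subsets])

# Candidate inverse: (M^{-1})_{TS} = -(-1)^{|T cap S|} when T cup S = V, else 0.
Minv = np.array([[-((-1.0) ** len(T & S)) if T | S == V else 0.0
                  for S in subsets] for T in subsets])

assert np.allclose(M @ Minv, np.eye(len(subsets)))
assert np.allclose(Minv @ M, np.eye(len(subsets)))
```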
Recall that a set function $F$ can be
expressed as it is in Eq.~\sveqref{eq:set-cover-fn}
if and only if $M^{-1} F$ has non-negative
entries. Now that we have derived an expression
for $M^{-1}$ we find that this criterion is
equivalent to stating that for all nonempty sets
$T \subseteq V$,
\[
\sum_{S \,:\, T \cup S = V}
(-1)^{|T \cap S|} F(S) \leq 0.
\]
This condition is equivalent to Eq.~\sveqref{eq:F_almost-coverage} (with the roles of $R$ and $T$ interchanged) because every set $S$ such that $T \cup S = V$
can be uniquely written as the disjoint union of
the two sets $\overline{T}$ and $R = T \cap S.$
This completes the proof of Lemma~\svref{lem:coverage-helper}, and with it the proofs of Lemmas~\svref{lem:coverage2} and~\svref{lem:coverage-functions}.
\end{proof}
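As an independent numerical sanity check (a sketch, not part of the paper's argument): for a randomly weighted set cover function $F$, the alternating sum in Eq.~\sveqref{eq:F_almost-coverage} should be non-positive for every non-empty $R$; unwinding the proof shows that it in fact equals $-w(R)$.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4
V = frozenset(range(n))
nonempty = [frozenset(c) for k in range(1, n + 1)
            for c in itertools.combinations(range(n), k)]

# Random weighted set cover function: F(S) = sum of w(T) over T meeting S,
# with F(emptyset) = 0.
w = {T: rng.random() for T in nonempty}
F = {S: sum(w[T] for T in nonempty if T & S) for S in nonempty}
F[frozenset()] = 0.0

def alternating_sum(R):
    # sum over T subseteq R of (-1)^{|T|} F(complement(R) union T)
    return sum((-1) ** k * F[(V - R) | frozenset(T)]
               for k in range(len(R) + 1)
               for T in itertools.combinations(sorted(R), k))

for R in nonempty:
    assert alternating_sum(R) <= 1e-9              # the condition of the lemma
    assert abs(alternating_sum(R) + w[R]) < 1e-9   # in fact it equals -w(R)
```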
\section{Open problems}\svlabel{sec:conclusion}
\begin{compactitem}
\item We provide an information-theoretic lower bound $b_2$ on the broadcast rate $\beta$, enabling us to answer fundamental questions about the behavior of $\beta$. While one can have $b_2 < \beta$, what is the largest possible gap between the two parameters? Recalling that the linear program for $b_2$ contains exponentially many constraints, is there an efficient algorithm for computing $b_2$?
\item Our results include a polynomial time algorithm for determining whether $\beta=2$ for any broadcasting network. A major open problem is establishing the hardness of determining whether $\beta < C$ for a given graph $G$ and real $C>0$. While no such hardness result is known, this problem is presumably extremely difficult; e.g., it is unclear whether it is even decidable.
\item In an effort to approximate $\beta$, we give an efficient multiplicative $o(n)$-approximation algorithm for the general broadcasting problem. Can one approximate $\beta$ (even in the case of undirected graphs) within a multiplicative factor of $n^{1-\epsilon}$ for some fixed $\epsilon > 0$?
\item Using certain projective-Hadamard graphs introduced by Erd\H{o}s and R\'enyi, we show that the broadcast rate can be uniformly bounded while its upper bound $b_n$ is polynomially large. Is the scalar capacity $\beta_1$ of these graphs unbounded as the field characteristic $q$ tends to $\infty$?
\end{compactitem}
\bigskip
\noindent \textbf{Acknowledgment.} We thank Noga Alon for useful discussions.
\end{document}
% --- arXiv metadata for the preceding paper ---
% arXiv:1004.1379, "Index coding via linear programming"
% Subjects: Information Theory (cs.IT); Combinatorics (math.CO)
% Timestamp: 2011-07-14
% --- arXiv:1411.2164, "Regularity of Loewner Curves" ---
% Abstract: The Loewner equation encrypts a growing simple curve in the plane into
% a real-valued driving function. We show that if the driving function $\lambda$ is
% in $C^{\beta}$ with $\beta>2$ (or real analytic) then the Loewner curve is in
% $C^{\beta + \frac{1}{2}}$ (respectively analytic). This is a converse to a result
% by Earle and Epstein and extends a result of Wong.
\section{Introduction and results}
The Loewner differential equation, a classical tool that has attracted recent attention due to Schramm-Loewner evolution, provides a unique way of encoding a simple 2-dimensional curve into a continuous 1-dimensional function.
In particular, let $\gamma : [0,T] \to \mathbb{C}$ be a simple curve with $\gamma(0) = 0$ and $\gamma(0,T) \in \mathbb{H} = \{ x+iy \, : \, y >0\}$. For each $t\in [0,T]$, there is a unique conformal map $g_t : \H \setminus \gamma(0,t) \to \H$ with the so-called hydrodynamic normalization:
\begin{equation}\label{hydro norm}
g_t(z) = z + \frac{a(t)}{z} + O(z^{-2}) \text{ for $z$ near infinity.}
\end{equation}
Further, it is possible to reparametrize $\gamma$ so that $a(t) = 2t$ in equation \eqref{hydro norm}. In this case, we say that $\gamma$ is parametrized by halfplane capacity
(since $a(t)$ is called the halfplane capacity of $\gamma[0,T]$ and can be thought of as a measure of the size of $\gamma[0,T]$).
Unless stated otherwise, we will assume $\gamma$ has this parametrization throughout the paper. The Loewner equation describes the time evolution of $g_t$:
$$\partial_t g_t(z) = \frac{2}{g_t(z) - \lambda(t)},~~~g_0(z)=z,$$
where $\lambda(t) = g_t(\gamma(t))$ is a continuous real-valued function, called the driving function.
(See \cite{L} for further details.)
It is natural to ask how properties of the Loewner curve $\gamma$ correspond to properties of the driving function $\l$. The results in this paper relate the regularity of $\l$ to the regularity of $\gamma$. Precise definitions of the regularity are given in Section 2.1, but at this point, we remind the reader that the Zygmund space $\Lambda_*^n$ is a generalization of $C^{n+1}$.
\begin{theorem}
\label{t: main theorem 1}
Let $\lambda \in C^\beta [0,T] $ for $\beta > 2$.
Then the Loewner curve $\gamma$ is $C^{\beta+\frac{1}{2}} (0,T]$
when $\beta+1/2 \notin \mathbb{N}$,
and $\gamma$ is in $\Lambda_*^{\beta - 1/2}(0, T]$ when $\beta+ 1/2 \in \mathbb{N}$.
\end{theorem}
See Theorem \ref{t: quantitative} for the quantitative version of this result.
This theorem extends the work in \cite{C}, where the result was proven for $\beta \in (1/2, 2] \setminus \{3/2\}$.
We do not know if the Zygmund space $\Lambda_*^n$ is optimal for the case $\beta=n+1/2$, but we do know that it is not possible to strengthen Theorem \ref{t: main theorem 1} to say that $\gamma \in C^{n+1}$ when $\l \in C^{n+1/2}$. This is illustrated in
Section \ref{sec:ex}, in which we discuss an example where $\l \in C^{3/2}$ but $\gamma$ fails to be $C^2$.
We also address the analytic case:
\begin{theorem} \label{t:analytic}
If $\lambda$ is real analytic on $[0,T]$, then $\gamma$ is also real analytic on $(0,T]$.
\end{theorem}
Notice that in both of these theorems, the regularity of $\gamma$ is on the time interval $(0,T]$.
With the halfplane-capacity parametrization, it is not possible to extend these results to $t=0$.
To see this, consider the example when
the driving function is $\lambda(t) \equiv 0$. Then the corresponding Loewner curve is $\gamma(t) = 2i\sqrt{t}$, which is not differentiable at $t=0$.
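For this driving function the maps are explicit, $g_t(z)=\sqrt{z^2+4t}$, which makes the example easy to check numerically (a quick sketch, not needed for the proofs):

```python
import numpy as np

def g(t, z):
    # Explicit solution of the Loewner equation with lambda = 0:
    # dg/dt = 2/g, g_0(z) = z  =>  g_t(z) = sqrt(z^2 + 4t).
    return np.sqrt(z * z + 4 * t)

t = 0.7
tip = 2j * np.sqrt(t)          # gamma(t) = 2i sqrt(t)
assert abs(g(t, tip)) < 1e-6   # g_t(gamma(t)) = lambda(t) = 0

# Hydrodynamic normalization: g_t(z) = z + 2t/z + O(z^{-2}), so a(t) = 2t.
z = 1e4 + 1e4j
assert abs((g(t, z) - z) * z - 2 * t) < 1e-5
```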
Further, with the halfplane-capacity parametrization, $\gamma(t)$ can always be expanded at $t=0$ in powers of $\sqrt{t}$, as we see in the following theorem.
\begin{theorem} \label{t:series at 0}
Assume that $\l \in C^{n+\alpha} [0,T]$ for $n \in \mathbb{N}$ and $\alpha \in (0, 1]$. Then near $t=0$, \begin{equation*}
\gamma(t) =
\begin{cases}
2i\sqrt{t} + a_2 t + i \,a_3 t^{3/2} + a_4 t^2 + \cdots + a_{2n} t^{n} + O(t^{n+\a})
&\mbox{if } \a \leq 1/2 \\
2i\sqrt{t} + a_2 t + i \,a_3 t^{3/2} + a_4 t^2 + \cdots + a_{2n} t^{n} +i \,a_{2n+1} t^{n+1/2} + O(t^{n+\a})
&\mbox{if } \a > 1/2 \\
\end{cases}
\end{equation*}
where the real-valued coefficients $a_m$ depend on
$\l^{(k)}(0)$ for $k=1, \cdots, \lfloor \frac{m}{2} \rfloor$.
\end{theorem}
As Theorem \ref{t:series at 0} suggests, if we make the simple change of parametrization $t=s^2$, then the smoothness extends to $s=0$.
\begin{theorem}
\label{t: main theorem 2}
Let $\Gamma(s) = \gamma(s^2)$ be the reparametrized Loewner curve with driving function $\l$.
If $\l$ is real analytic on $[0,T]$, then $\Gamma$ is real analytic on $[0,\sqrt{T}]$.
If $\lambda \in C^\beta [0,T] $, then $\Gamma \in C^{\beta + 1/2}[0, \sqrt{T}]$ when $\beta + 1/2 \notin \mathbb{N}$.
\end{theorem}
\begin{figure}
\begin{tikzpicture}
\draw[thick] (0,0) to [out=145,in=220] (0.5,2) to [out=40,in=270] (1,2.8)
to [out=90,in=300] (0.6,4);
\draw[fill] (0.5,2) circle [radius=0.05];
\node[right] at (0.5,2) {$\gamma(s)$};
\draw[fill] (1,2.8) circle [radius=0.04];
\node[right] at (1,2.8) {$\gamma(s+u)$};
\draw (-3,0) -- (3,0);
\draw[->] (3,2) to [out=45,in=135] (5,2);
\node[above] at (4,2.5) {$g_s-\lambda(s)$};
\draw[fill] (0,0) circle [radius=0.05];
\node[below] at (0,0) {$0$};
\draw[fill] (8,0) circle [radius=0.05];
\node[below] at (8,0) {$0$};
\draw[thick] (8,0) to [out=70, in=290] (8.3,1.5) to [out=110, in=290] (7.8,2.5);
\draw[fill] (8.3,1.5) circle [radius=0.04];
\node[right] at (8.3,1.5) {$\gamma(s,s+u)$};
\draw (5,0) -- (11,0);
\end{tikzpicture}
\caption{The curve $\gamma(s,s+u) = g_s(\gamma(s + u)) - \lambda( s )$.
}\label{gamma(s,s+u) figure}
\end{figure}
We wish to briefly describe the key tool used in this paper.
For $s\in [0,T]$, consider the simple curve $g_s(\gamma(s + u)) - \lambda( s )$, which we denote by $\gamma( s, s + u), 0 \leq u\leq T - s $. This is illustrated in Figure \ref{gamma(s,s+u) figure}.
We are following the notation introduced in \cite{C}, and to avoid confusion, we wish to point out that $\gamma(s, s+u)$ is {\it not} the image of the time interval $(s, s+u)$ under $\gamma$.
Rather, for fixed $s$ the curve $\gamma( s, s + u)$ corresponds to the time-shifted driving function $\lambda_s(u)= \lambda(u + s ) - \lambda(s)$, $0\leq u\leq T - s$.
It follows from \cite[Theorem 6.2]{C} that under the assumption $\lambda\in C^2 [0,T]$, the curve $\gamma$ is in $C^2$ and
\begin{equation} \label{e: gamma''}
\gamma''(s)=\frac{2\gamma'(s)}{\gamma(s)^2} - 4\gamma'(s)\int^s_0 \frac{\partial_s [\gamma(s-u,s)]}{\gamma(s-u,s)^3}du.
\end{equation}
In order to understand the higher differentiability of $\gamma$, we need to understand $\gamma(s-u,s)$. Differentiating this function with respect to $u$, we obtain
\begin{equation}
\label{e: g(s-u,s)}
\partial_u [\gamma(s-u,s)] = \partial_u [g_{s-u} ( \gamma(s)) - \lambda(s - u)] = \frac{-2}{\gamma(s - u, s)} + \lambda'(s - u) \mbox{ for } 0<u \leq s,
\end{equation}
and $\gamma(s-u, s) |_{u=0} = \gamma(s,s) = 0$. We note that the above differential equation does not hold for $u=0$. This is the reason for us to investigate the following ODE:
\begin{eqnarray}
f'(u)&=&\frac{-2}{f(u)}+\lambda'(s-u),~~~~0\leq u\leq s, \label{e: ODE}\\
f(0) & = & i \epsilon \in \mathbb{H}. \nonumber
\end{eqnarray}
The work in this paper depends on a deep understanding of the function $f(u) = f(u, s, \epsilon)$ which is the solution to \eqref{e: ODE}. Once we show that $f(u, s, \epsilon)$ converges uniformly to $\gamma(s-u, s)$ as $\epsilon \to 0^+$ (see Lemma \ref{l: upward ODE}), we can use \eqref{e: gamma''} to translate information about $f$ into information about the derivatives of $\gamma$.
The paper is organized as follows:
Section \ref{prelim section} includes initial properties of $f(u, s, \epsilon)$
and some lemmas regarding solutions to a particular class of ODEs.
These lemmas will be useful in analyzing $f$ and its partial derivatives, and this is the content of Section \ref{Properties}.
In Section \ref{smoothness section}, we state and prove a quantitative version of Theorem \ref{t: main theorem 1}.
The real analyticity of the curve $\gamma$ in Theorem \ref{t:analytic} is proved in Section \ref{sec: analytic}.
In Section \ref{sec:behavior at 0}, we analyze the behavior of the trace at its base, proving Theorem \ref{t: main theorem 2} and Theorem \ref{t:series at 0}. The latter is proven by constructing
a nice curve that well-approximates a given Loewner curve at its base.
We conclude in Section \ref{sec:ex} with two examples.
\noindent {\bf Remark.} Theorem \ref{t: main theorem 1} and Theorem \ref{t:analytic} provide a converse to the results of Earle and Epstein in \cite{EE}.
Their results (translated from the radial setting to the chordal setting using \cite{M}) state that
if any parametrization of $\gamma$ is $C^n$, then the halfplane-capacity parametrization of $\gamma$ is in $C^{n-1}(0,T)$ and $\l \in C^{n-1}(0,T)$. They also prove that if $\gamma$ is real analytic, then $\l$ must be real analytic.
{\bf Acknowledgement:} We appreciate the conversations and comments we received at various stages from Kyle Kinneberg, Michael Frazier, Donald Marshall, Steffen Rohde, Fredrik Johansson-Viklund and Carto Wong. Part of this research was performed while the second author was visiting the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation, and he thanks the institute for its hospitality and the use of its facilities.
\section{Preliminaries} \label{prelim section}
\subsection{Notation}
Let $I$ be an interval on the real line. The space $C^0(I)$ consists of all continuous functions on $I$ and $||\phi||_{\infty,I}=\sup_{t\in I} |\phi(t)|$ for $\phi\in C^0(I)$.
Let $\alpha\in (0,1)$. A function $\phi$ defined on $I$ is in $C^\alpha$ if $||\phi||_{\infty,I}<\infty$ and
its $\alpha$-H\"older norm is bounded:
$$
||\phi||_{C^\alpha}:= \sup_{s,t\in I, s\neq t} \frac{|\phi(t) - \phi(s)|}{|t-s|^\alpha} <\infty.
$$
Let $n\in \mathbb{N}_0$, $\alpha\in [0,1]$ and $M>0$. A function $\phi$ is in $C^{n,\alpha}(I;M)$
if $\phi',\cdots, \phi^{(n)}$ exist and are continuous and the following two conditions hold:
\begin{align*}
||\phi^{(k)}||_{\infty,I}\leq& \,M \, \mbox { for all } 0\leq k\leq n,\\
\text{ and } ||\phi^{(n)}||_{C^\alpha}:=& \sup_{s,t\in I, s\neq t} \frac{|\phi^{(n)}(t) - \phi^{(n)}(s)|}{|t-s|^\alpha} \leq M.
\end{align*}
In particular, the $n^{\text{th}}$ derivative of a function in $C^{n,1}$ is Lipschitz. A function $\phi$ is in $C^n$ if $\phi\in C^{n,0}(I;M)$ for some $M$. When $\alpha\in(0,1)$, we also write $C^{n+\alpha}$ for $C^{n,\alpha}$.
Zygmund introduced a generalization of $C^{0,1}$ called $\Lambda_*$. A continuous function $\phi$ is in $\Lambda_*(I)$ if
$$
||\phi||_{\Lambda_*}:= \sup_{s-\delta,s+\delta \in I, \delta > 0}
\frac{|\phi(s+\delta) +\phi(s-\delta)- 2\phi(s)|}{\delta} <\infty.
$$
We say that $\phi \in \Lambda^n_*(I;M)$ if $\phi',\cdots, \phi^{(n)}$ exist and are continuous, $\phi^{(n)} \in \Lambda_*$, and the following two conditions hold:
\begin{align*}
||\phi^{(k)}||_{\infty,I}\leq& \,M \, \mbox { for all } 0\leq k\leq n,\\
\text{ and } ||\phi^{(n)}||_{\Lambda_*}\leq& M.
\end{align*}
Note that $ C^{n+1} \subset C^{n,1} \subset \Lambda_*^n $.
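These inclusions are strict: the classical example $\phi(x) = x\log|x|$ (with $\phi(0)=0$) belongs to $\Lambda_*$ but is not Lipschitz near $0$. A quick numerical illustration (a sketch, not used elsewhere):

```python
import numpy as np

# phi(x) = x log|x|, phi(0) = 0: Zygmund class but not Lipschitz.
# The (x == 0) term makes the formula evaluate to 0 at x = 0.
phi = lambda x: x * np.log(np.abs(x) + (x == 0))

s = np.linspace(-0.9, 0.9, 181)
for d in [1e-1, 1e-3, 1e-6, 1e-9]:
    # second symmetric differences stay O(delta) ...
    q = np.abs(phi(s + d) + phi(s - d) - 2 * phi(s)) / d
    assert q.max() <= 2.0

# ... while the difference quotient at 0 blows up: |phi(d) - phi(0)|/d = |log d|.
assert abs(phi(1e-9) - phi(0)) / 1e-9 > 20
```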
The following proposition will be needed in Section \ref{sec:behavior at 0}.
\begin{proposition} \label{Cn_alpha}
If a function $\phi$ belongs to $C^{n,\alpha}(I;M)$ then there exists $c=c(n, M)$ such that for all $t_0, t+ t_0\in I$,
$$|\phi(t + t_0) - \sum^n_{k = 0} \frac{1}{k!} t^k \phi^{(k)}( t_0)|\leq c t^{n + \alpha}.$$
\end{proposition}
\noindent
The proof follows from the integral form of the remainder in Taylor's theorem.
We use $C$ for a universal constant. For estimates related to a driving function $\lambda\in C^{n, \alpha}([0,T];M)$, we use $c$ for constants depending on $M,n,T$. When constants depend on other factors, we will state this explicitly.
\subsection{Loewner equation} \label{Loewner equation}
In the introduction we described how the Loewner equation can be used to encode a simple curve into its driving function. This process can be reversed.
Let $\lambda$ be a real-valued continuous function on $[0,T]$ with $T>0$. Then the forward chordal Loewner equation is the following initial value problem:
\begin{equation} \label{forward LE}
\partial_t g_t (z)= \frac{2}{g_t(z)-\lambda(t)},~~~ g_0(z)=z.
\end{equation}
For each $z \in \mathbb{H}$, the solution $g_t(z)$ exists up to $T_z = \inf\{t>0: g_t(z)-\lambda(t)=0\}$. Let $K_t = \{ z\in \mathbb{H}: T_z\leq t\}$. It is known that $g_t$ is the unique conformal map from $\mathbb{H}\backslash K_t$ to $\mathbb{H}$ that satisfies
the hydrodynamic normalization at infinity:
$$g_t(z) = z + O(\frac{1}{z}),~~~~ \mbox{ near } z=\infty.$$
We say that $\lambda$ generates the curve $\gamma:[0,T] \to \overline{\mathbb{H}}$ if, for each $t$, $\mathbb{H}\backslash K_t$ is the unbounded component of $\mathbb{H} \backslash \gamma(0,t]$. In this case $\lambda(t) = g_t(\gamma(t))$, $0\leq t\leq T.$
An important property of the chordal Loewner equation is the concatenation property, which says that for fixed $s$, the time-shifted driving function $t \mapsto \lambda(s + t)$ generates
the mapped curve $t \mapsto g_{s}(\gamma(s+t))$.
For more details, see \cite{L}.
It was shown that if $\lambda\in C^{1/2} [0,T] $ with $||\lambda||_{C^{1/2}}<4$, then $\lambda$ generates a simple quasi-arc $\gamma$ (\cite{MR05}, \cite{Lind}). Since we work with $\lambda\in C^\beta$ for $\beta>2$, the driving function is Lipschitz, and so $||\lambda||_{C^{1/2}}\leq 1$ on sufficiently small intervals. Therefore we are guaranteed that the corresponding Loewner curve is a simple curve. We can prove Theorems \ref{t: main theorem 1} and \ref{t:analytic} on small intervals, then use the concatenation property of the Loewner equation to derive the regularity of $\gamma$ on $[0,T]$. Henceforth, we assume $||\lambda||_{C^{1/2}}\leq 1.$
Changing \eqref{forward LE} by a negative sign gives the
backwards chordal Loewner equation:
\begin{equation} \label{backward LE}
\partial_t h_t (z)= \frac{-2}{h_t(z)-\xi(t)},~~~ h_0(z)=z
\end{equation}
for a continuous real-valued function $\xi$ defined on $[0,T]$.
The solution $h_t(z)$ exists for all $z \in \H$ and $t \in [0, T]$, and $h_t$ is a conformal map from $\H$ into $\H$. The forward and backward versions of the Loewner equation are related as follows:
if $g_t$ is the solution to \eqref{forward LE} with driving function $\l \in C[0, T]$ and $h_t$ is the solution to \eqref{backward LE} with driving function $\xi(t) = \l(T-t)$, then $h_t=g_{T-t} \circ g_T^{-1}$, and in particular, $h_T = g_T^{-1}$.
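For the zero driving function both flows are explicit ($g_t(z)=\sqrt{z^2+4t}$ and $h_t(z)=\sqrt{z^2-4t}$), which gives a quick numerical sketch of these relations (illustrative only):

```python
import numpy as np

T = 0.3
g_T = lambda z: np.sqrt(z * z + 4 * T)   # forward map for lambda = 0
h = lambda t, z: np.sqrt(z * z - 4 * t)  # backward flow for xi = lambda(T - .) = 0

z = 1.0 + 2.0j
w = g_T(z)
assert abs(h(T, w) - z) < 1e-12                              # h_T = g_T^{-1}

t = 0.1
assert abs(h(t, w) - np.sqrt(z * z + 4 * (T - t))) < 1e-12   # h_t = g_{T-t} o g_T^{-1}
```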
We think of \eqref{e: ODE} as a variant of the backward Loewner equation (with $\xi(u) = \l(s-u)$ and $f(u) = h_u(i \epsilon) - \xi(u)$), and our first goal is to understand some basic properties of its solution
$f(u) = f(u, s, \epsilon)$, when $(u,s) \in D := \{ (u,s) \, : \, 0 \leq u \leq s \leq T\}$.
Further properties of $f(u,s,\epsilon)$ are in Section \ref{Properties}.
\begin{lemma}
\label{l: upward ODE}
Let $\lambda \in C^1([0,T]; M)$, and let $0\leq s\leq T$ and $\epsilon>0$. Then the ODE
\begin{eqnarray*}
f'(u)&=&\frac{-2}{f(u)}+\lambda'(s-u),~~~~0\leq u\leq s,\\
f(0) & = & i \epsilon \in \mathbb{H}. \nonumber
\end{eqnarray*}
has a unique solution $f(u)=f(u,s,\epsilon)$, with $0\leq u\leq s$, satisfying the following properties:
\noindent
(i) $\mbox{Im\,} f$ is increasing in $u$.
\noindent
(ii) For all $(u,s) \in D = \{ (u,s) \, : \, 0 \leq u \leq s \leq T\}$
$$\sqrt{3u + \epsilon^2} \leq \mbox{Im\,} f(u,s,\epsilon)\leq \sqrt{4u+\epsilon^2} $$
$$\mbox{and }\,\,\,\,\,\,|\mbox{Re\,} f(u,s,\epsilon)| \leq \sqrt{u} \leq \frac{1}{\sqrt{3}} \mbox{Im\,} f(u,s,\epsilon) .$$
\noindent
(iii) For every $\delta>0$, there is $\epsilon(\delta)>0$ such that
$$|f(u, s, \epsilon_1) - f(u, s, \epsilon_2)|\leq \delta\mbox{ for all } (u,s)\in D \text{ and } \epsilon_1,\epsilon_2 \leq \epsilon(\delta).$$
In particular, $f(u, s, \epsilon)$ converges uniformly as $\epsilon\to 0+$ to a limit denoted by $f(u, s)$. This limit is the family of curves $\gamma(s - u, s)$ generated by $\lambda_s$, $0 \leq s \leq T$.
(iv) Suppose $\lambda\in C^n([0,T];M)$, and let $l+k\leq n$ and $k\leq n-1$. Then $\partial^l_u \partial^k_s f$ exists and is continuous in $(u,s) \in D$ for all $\epsilon>0$.
(v) If $\lambda\in C^n([0,T];M)$ and $1\leq k\leq n-1$, then
$\partial^k_s f(0,s,\epsilon)=0 \mbox{ for all } s\in [0,T]$ and $\epsilon>0.$
\end{lemma}
\begin{proof}
The equation (\ref{e: ODE}) is of the form:
\begin{eqnarray*}
f'(u)&=&G(f(u), u, s),
\end{eqnarray*}
where $G(z, u, s) = \frac{-2}{z} + \lambda'(s - u)$ is jointly continuous in $z,u,s$, and Lipschitz in the $z$ variable whenever $\mbox{Im\,} z\geq C>0$. So the solution exists on some interval containing $0$. To show that the solution to (\ref{e: ODE}) exists on the whole interval $[0,s]$, it suffices to show that $(i)$ always holds, since then $f$ stays away from the singularity of $G$ at $z=0$. The idea of $(i)-(iii)$ comes from \cite{RTZ}, which contains a study of the Loewner equation when $||\lambda||_{C^{1/2}}<4$. For the convenience of the reader, we present the proof here.
Let $x=x(u), y=y(u)$ be real and imaginary parts of $f(u)$. It follows from (\ref{e: ODE}) that
\begin{eqnarray}
(x + \lambda(s - \cdot))' & = & \frac{-2x}{x^2 + y^2},\\
y' &= &\frac{2y}{x^2+ y^2}. \label{e: Dy}
\end{eqnarray}
In particular, $y$ is increasing and $(y^2)' \leq 4$. The former shows $(i)$, and the latter shows that $y\leq \sqrt{4u+\epsilon^2}$.
\noindent
Now we will show that $|x(u)|\leq \sqrt{u}$, for $0 \leq u \leq s$.
Suppose $0\leq x(u)$ and let $u_0 = \sup\{ v \in [0,u]: x(v)\leq 0\}$. So
$$\partial_v (x(v) + \lambda(s - v)) \leq 0 \mbox{ for } u_0\leq v\leq u,$$
and
$$ x(u) + \lambda( s- u ) \leq x(u_0) + \lambda( s- u_0) = \lambda( s- u_0).$$
Hence
$$x(u)\leq \lambda(s - u_0) - \lambda (s - u) \leq \sqrt{|u_0 - u|}\leq \sqrt{u},$$
where the second inequality follows from $||\lambda||_{1/2}\leq 1$ and the third from $u_0\in[0,u]$. The same argument applies when $x(u)\leq 0$, proving that $|x(u)|\leq \sqrt{u}$.
\noindent
Next we will show $y(u) > \sqrt{3u}$ for $0\leq u\leq s$. Suppose this is not the case.
Then since $y(0)=\epsilon>0$, there exists $u_0\in (0,s]$ such that $y(u_0) = \sqrt{3u_0}$ and $y(u) \geq \sqrt{3u}$ for $u\in [0,u_0]$. It follows from (\ref{e: Dy}) that
$$(y^2)'=\frac{4y^2}{x^2 + y^2} \geq \frac{12u}{u + 3u} = 3 \mbox{ for } 0\leq u\leq u_0.$$
So $y(u_0)\geq \sqrt{3u_0 + \epsilon^2} > \sqrt{3u_0}$. This is a contradiction. Therefore $y(u) > \sqrt{3u}$ and $(y^2)'\geq 3$. These show $(ii)$.
To show $(iii)$, differentiate (\ref{e: ODE}) with respect to $\epsilon$ to obtain
$$\partial_u(\partial_\epsilon f) = \partial_\epsilon \partial_u f = \frac{2 \partial_\epsilon f}{f^2}.$$
Since $\partial_\epsilon f(0, s, \epsilon) = i$,
$$\partial_\epsilon f(u, s, \epsilon) = i \exp \int^u_0 \frac{2}{f^2(v, s, \epsilon)}\, dv.$$
This implies
\begin{eqnarray*}
|\partial_\epsilon f(u, s, \epsilon)| &= &\exp \int^u_0 \mbox{Re\,} \frac{2}{f^2(v, s, \epsilon)}\, dv \\
&= & \exp \int^u_0 \frac{2 (x^2(v) - y^2(v))}{(x^2(v) + y^2(v))^2}\, dv \leq 1.
\end{eqnarray*}
The last inequality comes from $(ii)$. It follows that
$$|f(u,s,\epsilon)-f(u,s,\epsilon')|\leq |\epsilon-\epsilon'|, \mbox{ for all } 0\leq u\leq s\leq T,$$
and $f(u, s, \epsilon)$ converges uniformly in $D$ to a limit, denoted by $f(u,s)$, as $\epsilon\to 0^+$.
Intuitively, the limit $f(u,s)$ is equal to $\gamma(s-u,s)$, since $f(u, s, \epsilon)$ satisfies the same ODE as $\gamma(s-u,s)$ does and $\lim_{\epsilon\to 0^+} f(0, s,\epsilon) = \gamma(s - u, s) \big|_{u=0} = 0$. Indeed, from (\ref{e: g(s-u,s)}) and (\ref{e: ODE}) we can show that
\begin{equation} \label{e: difference}
| f(u,s, \epsilon) - \gamma(s-u,s) | = | f(u_0, s, \epsilon) - \gamma(s-u_0, s)| \exp\int^u_{u_0} \mbox{Re\,} \frac{2\,dv}{f(v, s, \epsilon) \gamma(s-v, s) },
\end{equation}
with $0<u_0\leq u\leq s\leq T$ and $\epsilon>0$.
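To see where (\ref{e: difference}) comes from, note (as a sketch of the computation) that the difference $h(u) = f(u,s, \epsilon) - \gamma(s-u,s)$ satisfies, upon subtracting the two ODEs,
$$h'(u) = \frac{-2}{f(u, s, \epsilon)} + \frac{2}{\gamma(s-u, s)} = \frac{2}{f(u, s, \epsilon)\, \gamma(s-u, s)}\, h(u),$$
so that $h(u) = h(u_0)\exp\int^u_{u_0} \frac{2\, dv}{f(v, s, \epsilon)\gamma(s-v, s)}$; taking absolute values gives (\ref{e: difference}).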
Since $\gamma(s-v,s)$ is the tip of a Loewner curve generated by a driving function whose H\"older-$1/2$ norm is less than 1, it satisfies, by \cite[Lemma 3.1]{C},
$$|\mbox{Re\,} \gamma(s - v,s)|\leq \mbox{Im\,} \gamma(s-v,s).$$
This implies that
$$\mbox{Re\,} \frac{2}{f(v, s, \epsilon) \gamma(s-v, s)} \leq 0.$$
Letting $u_0\to 0^+$ and then $\epsilon\to 0^+$ in (\ref{e: difference}), we get $f(u, s) = \gamma(s - u ,s)$.
Statement $(iv)$ follows from the standard ODE theory (see \cite{CL}, for instance) and the fact that $G$ is $C^{n-1}$ in $(u,s)$.
We show $(v)$ by induction. For the base case,
$$\partial_s f(0, s, \epsilon)=\lim_{\delta \to 0}\frac{f(0, s + \delta, \epsilon)-f(0, s,\epsilon)}{\delta}=\lim_{\delta \to 0}\frac{i\epsilon - i\epsilon}{\delta}=0.$$
\noindent
Now suppose $\partial^k_s f(0,s,\epsilon)=0$ for all $s\in [0,T]$. Then
$$\partial^{k+1}_s f(0, s, \epsilon)=\lim_{\delta \to 0}\frac{\partial^k_s f(0, s + \delta, \epsilon) - \partial^k_s f(0, s, \epsilon)}{\delta}=0.$$
\end{proof}
\noindent
{\bf Remark.} For convenience, in this paper we only consider $\epsilon\in (0,1]$. In this case,
$$ \sqrt{3u} \leq |f(u, s, \epsilon)| \leq \sqrt{Cu + \epsilon^2} \leq C\sqrt{u} + C\epsilon \leq c(T) \mbox{ for all } 0\leq u, s\leq T.$$
\noindent
Later in Lemma \ref{l: partial^n_s f} we will show that $\partial^n_s f$ exists and is continuous in $(u,s)$.
\subsection{ODE lemmas}
The next lemma is frequently used in Section \ref{Properties} to investigate the regularity of $f(u, s, \epsilon)$.
\begin{lemma} \label{l: ODE fact 1}
Consider a complex-valued function $X$ satisfying the initial value problem
$$X'(u)=P(u)X(u)+Q(u),~~~~ X(0)=0.$$
Suppose there exist constants $C, M_1 >0$ so that $|P(u)|\leq - C \mbox{Re\,} P(u)$ and $|Q(u)|\leq M_1$ for $0\leq u\leq u_0$. Then
$$|X(u)|\leq (C+1)M_1 u\mbox { for } 0\leq u\leq u_0.$$
\end{lemma}
\begin{proof} Solving the equation, one obtains
$$X(u) = R(u) + e^{-\mu(u)} \int^u_0 e^{\mu(v)}P(v)R(v)\, dv,$$
where $\mu (u)=-\int^u_0 P(v)\, dv$ and $R(u)=\int^u_0 Q(v) \, dv$. Since $|R(u)| \leq M_1 u$,
\begin{eqnarray*}
|X(u)| & \leq & M_1 u + M_1 u |e^{-\mu(u)}| \int^u_0 |e^{\mu(v)}| \cdot |P(v)| \, dv \\
& \leq & M_1 u + M_1 u e^{-\mbox{Re\,} \mu(u)} \int^u_0 e^{\int^v_0 -\mbox{Re\,} P(w) dw} C(-\mbox{Re\,} P(v)) \, dv \\
& = & M_1 u + C M_1 u e^{-\mbox{Re\,}\mu(u)} \left(e^{-\int^u_0 \mbox{Re\,} P(v) \, dv}-1\right) \\
& = & M_1 u + C M_1 u e^{-\mbox{Re\,} \mu(u)} \left( e^{\mbox{Re\,} \mu(u)}-1\right) \\
& \leq & (C+1) M_1 u.
\end{eqnarray*}
\end{proof}
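\noindent
{\bf Remark.} As a consistency check, one can verify the representation directly: since $\mu'=-P$,
$$X'(u) = Q(u) + P(u)\, e^{-\mu(u)}\int^u_0 e^{\mu(v)}P(v)R(v)\, dv + P(u)R(u) = P(u)X(u) + Q(u),$$
and $X(0)=0$.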
In some cases, we will need a more general version of Lemma \ref{l: ODE fact 1}.
\begin{lemma} \label{l: ODE fact 2}
Let $Y$ be a solution to
$$Y'(u)=P(u)Y(u) - P(u)Q(u) + R(u),~~~~Y(0)=Q(0)$$
with $|P|\leq -C\mbox{Re\,} P$ and $|Q(v)-Q(0)|\leq \omega(v) $ on $[0,u_0]$, where $\omega$ is a non-decreasing function and $C>0$.
\noindent
(i) If $|R|\leq M_2u^{\beta-1}$ for some constant $M_2$, then
$$|Y(u) - Q(u)|\leq (C+1) \omega(u) +(C+1)\frac{M_2}{\beta} u^\beta.$$
\noindent
(ii) If $Y(0)=Q(0)=0$ and $|R|\leq M_2$, then
$$|Y(u)|\leq C \omega(u) + (C+1)M_2 u.$$
\noindent
(iii) More generally,
$$|Y(u) - Q(u)|\leq (C+1) \omega(u) +(C+1)\int^u_0 |R(v)| \, dv.$$
\end{lemma}
\begin{proof}
Let $\mu(u)=\int^u_0 -P(v) \, dv$ and $S(u)=\int^u_0 R(v) \, dv$. We have
\begin{align*}
Y(u) &=e^{-\mu(u)}Y(0) + e^{-\mu(u)}\int^u_0 e^{\mu(v)}(-P Q + R) \, dv \\
&=Q(0)+e^{-\mu(u)}\int^u_0 e^{\mu(v)}(-P)[Q-Q(0)] \, dv + e^{-\mu(u)}\int^u_0 e^{\mu(v)}R \, dv \\
&=Q(0)+e^{-\mu(u)}\int^u_0 e^{\mu(v)}(-P)[Q-Q(0)]\, dv+ S(u)-e^{-\mu(u)}\int^u_0 e^{\mu(v)}(-P) S \,dv,
\end{align*}
where the last equality follows from an integration by parts.
Therefore under the first assumption, $|S(u)| \leq M_2 u^\beta/\beta$ and
$$|Y(u)-Q(u)|\leq |Q(0)-Q(u)| + e^{-\mbox{Re\,}\mu(u)}\int^u_0 e^{\mbox{Re\,} \mu(v)}C (-\mbox{Re\,} P)\omega(v) \, dv +|S(u)|$$
$$ + e^{-\mbox{Re\,} \mu(u)}\int^u_0 e^{\mbox{Re\,} \mu(v)} C (-\mbox{Re\,} P) \frac{M_2}{\beta} u^\beta \,dv$$
$$\leq \omega(u) + C \omega(u) + \frac{M_2}{\beta} u^\beta +C\frac{M_2}{\beta} u^\beta.$$
Under the second assumption,
$$|Y(u)|\leq e^{-\mbox{Re\,}\mu(u)}\int^u_0 e^{\mbox{Re\,} \mu(v)}C (-\mbox{Re\,} P) \omega(v)\, dv + |S(u)| $$
$$ + e^{-\mbox{Re\,} \mu(u)}\int^u_0 e^{\mbox{Re\,} \mu(v)} C(-\mbox{Re\,} P) M_2 u \, dv$$
$$\leq C \omega(u) +M_2 u + CM_2 u.$$
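Finally, $(iii)$ follows from the same computation, using only $|S(v)|\leq \int^u_0 |R(w)|\, dw$ for $0\leq v\leq u$:
$$|Y(u)-Q(u)|\leq (C+1)\omega(u) + |S(u)| + e^{-\mbox{Re\,}\mu(u)}\int^u_0 e^{\mbox{Re\,}\mu(v)} C(-\mbox{Re\,} P(v))\, |S(v)| \, dv \leq (C+1)\omega(u) + (C+1)\int^u_0 |R(v)|\, dv.$$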
\end{proof}
\section{Properties of $f(u,s,\epsilon)$} \label{Properties}
In this section, we will prove the key properties of $f(u, s, \epsilon)$, which are summarized in Proposition \ref{summary}. Then we will let $\epsilon \to 0^+$ to obtain properties of $f(u, s) = \gamma(s - u, s)$.
To accomplish this, we will show that $f(u,s, \epsilon)$ and its partial derivatives satisfy the type of ODE considered in Lemmas \ref{l: ODE fact 1} and \ref{l: ODE fact 2}. These lemmas then provide us with the needed estimates about $f(u,s, \epsilon)$. The next two lemmas concern the $s$-derivatives of $f(u,s, \epsilon)$.
\begin{lemma}\label{l: partial^k_s f} Suppose $\lambda\in C^n([0,T];M)$ with $n\geq 2$. For every $1\leq k\leq n-1$, there exists a function $ Q_k = Q_k(u,s,\epsilon)$ such that
$$\partial_u (\partial_s^k f) = \frac{2}{f^2} \partial^k_s f + Q_k$$
for $(u, s) \in D$ and $\epsilon\in (0,1]$. Moreover, there exists a constant $c=c(M,n,T)>0$ such that
$$|\partial^k_s f(u, s, \epsilon)|\leq c u.$$
\end{lemma}
\begin{proof}
We will prove the lemma by induction. Let $k=1$ and $n\geq 2$. Fix $s\in [0,T]$ and $\epsilon\in (0,1]$, and let $X(u)=\partial_s f(u,s, \epsilon)$. Then
\begin{align*}
X'(u)=\partial_u\partial_s f(u, s, \epsilon)=\partial_s \partial_u f(u, s, \epsilon) &= \frac{2}{f^2(u, s, \epsilon)}\partial_s f(u, s, \epsilon)+\lambda''(s - u)\\
&=\frac{2}{f^2(u, s, \epsilon)}X(u) + \lambda''(s - u),
\end{align*}
and $X(0)=\partial_s f(0, s, \epsilon)=0$. Let $P_s=P_s(u, \epsilon)=\frac{2}{f^2(u, s, \epsilon)}$ and $Q_1(u, s, \epsilon)=\lambda''(s - u)$. Clearly, $|Q_1|\leq M$. We will show that $P_s(\cdot, \epsilon)$ satisfies the property of $P$ in Lemma \ref{l: ODE fact 1}.
Indeed, let $f(u, s, \epsilon)=x+iy$. It follows from Lemma \ref{l: upward ODE}(ii) that there exists a constant $C>0$ such that
$$|P_s(u, \epsilon)|=\frac{2}{x^2+y^2}\leq -C\frac{2(x^2-y^2)}{(x^2+y^2)^2}=-C \,\mbox{Re\,} P_s(u, \epsilon).$$
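Explicitly, Lemma \ref{l: upward ODE}(ii) gives $x^2\leq \frac{1}{3}y^2$, so
$$x^2+y^2\leq \tfrac{4}{3}y^2\leq 2(y^2-x^2),$$
and the displayed inequality holds with $C=2$.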
Applying Lemma \ref{l: ODE fact 1}, we obtain
$$|\partial_s f(u, s, \epsilon)|\leq cu,$$
completing the base case.
Now suppose the lemma holds for $1\leq k-1 \leq n-2$ and $\partial_u (\partial^{k-1}_s f)=P_s\partial^{k-1}_s f + Q_{k-1}$. Then
$$\partial_u \partial^k_s f = \partial_s (\partial_u \partial^{k-1}_s f)=P_s\partial^k_s f + Q_k,$$
with $Q_k=\partial_s Q_{k-1}-\dfrac{4}{f^3}(\partial_s f)(\partial^{k-1}_s f).$
One can show by induction that
$$Q_k=\lambda^{(k+1)}(s-u)+R_k,$$
where $R_k(u, s, \epsilon)$ is a sum of at most $k-1$ terms, each of the form
$$\frac{c}{f^m}\prod^{m-1}_{j=1} \partial^{m_j}_s f$$
for some $3\leq m\leq k+1$ and $1\leq m_j\leq k-1$. By the induction hypothesis, each such term is bounded by
$$\frac{c}{u^{m/2}}u^{m-1}=cu^{m/2-1}\leq c(M,k,T) \sqrt{u}.$$
So $|Q_k|$ is bounded by a constant $c= c(M,k,T)$, and hence Lemma \ref{l: ODE fact 1} implies that $|\partial^k_s f|\leq cu$.
\end{proof}
\noindent
{\bf Remark.} $R_1=0$ and $R_k$ satisfies a recursive formula:
$$R_{k+1}(u, s, \epsilon) = \partial_s R_k(u, s, \epsilon) - \frac{4}{f(u, s, \epsilon)^3} (\partial_s f(u, s, \epsilon)) (\partial^k_s f(u, s, \epsilon)).$$
We have shown that for $1\leq k\leq n-1$,
$$|R_k| \leq c(M, k, T) \sqrt{u}.$$
Since $R_n$ is only related to $\partial^k_s f$ for $0\leq k\leq n-1$, we have the same inequality:
$$|R_n| \leq c(M, n, T) \sqrt{u}.$$
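For example, the recursive formula gives
$$R_2(u, s, \epsilon) = -\frac{4}{f(u, s, \epsilon)^3}\, \big(\partial_s f(u, s, \epsilon)\big)^2,$$
which has the form described in the proof of Lemma \ref{l: partial^k_s f} with $m=3$ and $m_1=m_2=1$, and indeed satisfies $|R_2|\leq c u^2/u^{3/2} = c\sqrt{u}$.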
\begin{lemma}\label{l: partial^n_s f}
Suppose $\lambda\in C^n([0,T];M)$. Then $\partial^n_s f(u, s, \epsilon)$ exists, and if $\lambda\in C^{n,\alpha}([0,T]; M)$, then
$$|\partial^n_s f(u,s,\epsilon)|\leq cu^\alpha,$$
where $c=c(M,n,T)$.
\end{lemma}
\noindent
{\bf Remark.} If $\lambda\in C^n([0,T]; M)$, then we can bound $ \displaystyle |\partial^n_s f(u, s, \epsilon)|$ by the oscillation of $\lambda^{(n)}$:
$$ \displaystyle |\partial^n_s f(u, s, \epsilon)|\leq c \sup\{|\lambda^{(n)}(u_1)-\lambda^{(n)}(u_2)|: |u_1-u_2|\leq u, u_1, u_2\in [0,s] \}\leq cM.$$
\begin{proof}
It follows from the proof of the previous lemma that
\begin{eqnarray*}
\partial_u (\partial^{n-1}_s f) &=&P_s \partial^{n-1}_s f + Q_{n-1}\\
&=& P_s \partial^{n-1}_s f + \lambda^{(n)}(s-u)+R_{n-1}.
\end{eqnarray*}
So
$$ \partial_u X = P_s X + Q, ~~~~X|_{ u = 0}=\lambda^{(n-1)}(s),
$$
where $X=\partial^{n-1}_s f + \lambda^{(n-1)}(s-u)$ and $Q=- P_s \lambda^{(n-1)}(s-u)+R_{n-1}$. Since $Q$ is $C^1$ jointly in $(u,s)$, $\partial_s X$ exists and satisfies
$$\partial_u (\partial_s X)=P_s \partial_s X - P_s \lambda^{(n)}(s-u) + R_n,
$$
and $\partial_s X|_{u=0}=\lambda^{(n)}(s)$. Hence $\partial^n_s f$ exists and is continuous in $(u,s)$. Since $|R_n|\leq c(M,n,T)$, we apply Lemma \ref{l: ODE fact 2} $(i)$ with $\omega(u) = Mu^\alpha$, $M_2=c$, and $\beta=1$ to obtain
$$|\partial^n_s f|=|\partial_s X-\lambda^{(n)}(s-u)|\leq (C+1) Mu^\alpha + cu\leq cu^\alpha.$$
\end{proof}
The next three lemmas concern the oscillation of $\partial^k_s f$ in the variable $s$.
In the proofs, we omit $\epsilon$ from the formulas at times (for ease of reading), but we remind the reader that the functions $f, P_s, Q_k, R_k$ do depend on the three variables $u,s,\epsilon$.
\begin{lemma} \label{l: lambda in C^1}
Suppose $\lambda\in C^{1,\alpha}([0,T];M)$ with $\alpha \in (0,1]$. Then
$$|f(u, s + \delta, \epsilon)-f(u, s, \epsilon)|\leq c\min(u \delta^\alpha,\delta u^\alpha),$$
$$|\partial_s f(u, s + \delta, \epsilon)-\partial_s f(u, s, \epsilon)|\leq c(1 + \frac{\epsilon}{\alpha}) \min (u^\alpha, \delta^\alpha)$$
for $0\leq u\leq s\leq s+\delta\leq T$ and $\epsilon>0$.
\end{lemma}
\begin{proof}
Since $|\partial_s f(u, s, \epsilon)|\leq c u^\alpha$ (by Lemma \ref{l: partial^n_s f}),
$$|f(u, s + \delta, \epsilon) - f(u, s, \epsilon)| \leq c \delta u^\alpha.$$
Omitting the parameter $\epsilon$ for convenience, we have
$$\partial_u [f(u, s + \delta) - f(u, s)]=\frac{2}{f(u, s) f(u, s + \delta)}[f(u, s + \delta) - f(u, s)] + \lambda'(s + \delta - u) - \lambda'(s - u),$$
and $f(0, s + \delta) - f(0, s)=0$. We see that $P:=\dfrac{2}{f(u, s) f(u, s + \delta)}$ satisfies
$$|P(u)|\leq - C \mbox{Re\,} P(u)$$
and that $Q=\lambda'( s + \delta - u) - \lambda'( s - u)$ is bounded by $M \delta^\alpha$.
Therefore, Lemma \ref{l: ODE fact 1} implies
$$|f (u, s + \delta) - f(u, s)|\leq CM u \delta^\alpha.$$
It remains to prove the last inequality. We have
$$\partial_u [\partial_s f(u, s + \delta) + \lambda'(s + \delta - u)] = P_{s+\delta} \partial_s f(u, s + \delta),$$
and
$$\partial_u [\partial_s f(u, s) + \lambda'(s - u)] = P_{s} \partial_s f(u, s).$$
So
\begin{align*}
\partial_u [\partial_s f(u, s + \delta) + \lambda'&(s + \delta - u) - \partial_s f(u, s) - \lambda'(s - u)] \\
= P_{s + \delta} & [\partial_s f(u, s + \delta) +\lambda'(s + \delta - u) - \partial_s f(u, s) - \lambda'(s - u)] \\
&- P_{s + \delta} \left(\lambda'(s + \delta - u) - \lambda'(s - u)\right) + (P_{s+\delta}-P_s)\partial_s f(u, s).
\end{align*}
We will apply Lemma \ref{l: ODE fact 2} with $Q(u) = \lambda'(s + \delta - u) - \lambda'(s - u)$ and $R(u) = (P_{s+\delta}-P_s)\partial_s f(u, s)$.
Note
$$|\lambda'(s+\delta-u)-\lambda'(s-u)-\lambda'(s+\delta)+\lambda'(s)|\leq 2 M \min (u^\alpha, \delta^\alpha).$$
Further
\begin{align*}
|P_{s+\delta}-P_s|\cdot |\partial_s f(u, s)| &\leq \frac{c |f(u, s + \delta) - f(u, s)| \cdot |f(u, s) + f(u, s + \delta)|}{u^2}u^\alpha\\
&\leq \frac{ c u \delta^\alpha \sqrt{Cu+ \epsilon^2}}{u^2}u^\alpha\\
&\leq c \delta^\alpha u^{\alpha - 1/2} + c \delta^\alpha\epsilon u^{\alpha - 1},
\end{align*}
and so
$$\int^u_0 |R(v)| \, dv \leq \int^u_0 \left( c \delta^\alpha v^{\alpha - 1/2} + c \delta^\alpha\epsilon v^{\alpha - 1} \right) \, dv \leq c\delta^\alpha u^{\alpha + 1/2} + c \delta^\alpha \frac{\epsilon}{\alpha} u^\alpha.$$
Therefore, by Lemma \ref{l: ODE fact 2} $(iii)$ with $\omega(u) = 2M \min (u^\alpha, \delta^\alpha)$,
\begin{align*}
|\partial_s f(u, s + \delta) - \partial_s f(u, s)| &\leq C M \min (u^\alpha, \delta^\alpha) + c\delta^\alpha u^{\alpha + 1/2} + c \delta^\alpha \frac{\epsilon}{\alpha} u^\alpha\\
&\leq c (1+ \frac{\epsilon}{\alpha}) \min (u^\alpha, \delta^\alpha).
\end{align*}
\end{proof}
\begin{lemma}
Suppose $\lambda\in C^{n,\alpha}([0,T];M)$ with $n\geq 2$ and $\alpha \in (0,1]$. Then
$$|R_k(u, s + \delta, \epsilon) - R_k (u, s, \epsilon)| \leq c \delta \sqrt{u}~~~~ \mbox{ when } ~~~~ 1\leq k\leq n-1,$$
and
$$|\partial^k_s f(u, s + \delta,\epsilon) - \partial^k_s f(u, s, \epsilon)|\leq cu\delta ~~~~ \mbox{ when }~~~~ 1\leq k\leq n-2, $$
and
$$|\partial^{n-1}_s f(u, s + \delta,\epsilon) - \partial^{n-1}_s f(u, s, \epsilon)|\leq c \min( u^\alpha \delta, u \delta^\alpha).$$
\end{lemma}
\begin{proof}
From the Remark following Lemma \ref{l: partial^k_s f}, we know that
$R_1=0$, $R_k$ satisfies the recursive formula:
$$R_{k+1} = \partial_s R_k - \frac{4}{f^3}(\partial_s f)(\partial^k_s f),$$
and $|R_k| \leq c \sqrt{u}$ for $1 \leq k \leq n$.
Therefore, for $k+1\leq n$, Lemma \ref{l: partial^k_s f} implies that
\begin{align*}
|\partial_s R_k | &\leq |R_{k+1} |+\frac{4}{|f|^3}|\partial_s f|\cdot | \partial^k_s f|\\
&\leq c \sqrt{u}.
\end{align*}
Thus
$$|R_k(u, s + \delta, \epsilon)-R_k(u, s, \epsilon)|\leq \int^{s+\delta}_s |\partial_sR_k (u, r, \epsilon)|dr \leq c \delta \sqrt{u},$$
proving the first statement.
When $1 \leq k \leq n-2$, Lemma \ref{l: partial^k_s f} implies that
$$|\partial^k_s f(u,s+\delta,\epsilon)-\partial^k_s f(u,s,\epsilon)|\leq \int^{s + \delta}_s |\partial^{k+1}_s f(u, r, \epsilon)| dr \leq c u \delta,$$
proving the second statement.
From Lemma \ref{l: partial^n_s f}
$$|\partial^{n-1}_s f(u, s + \delta,\epsilon) - \partial^{n-1}_s f(u, s, \epsilon)|\leq \int^{s + \delta}_s |\partial^n_s f(u, r, \epsilon)| dr \leq c u^\alpha \delta.$$
To prove the third statement, it remains to show
\begin{equation} \label{third statement goal}
|\partial^{n-1}_s f(u, s + \delta, \epsilon) - \partial^{n-1}_s f(u, s, \epsilon)|\leq c \delta^\alpha u.
\end{equation}
Omitting the parameter $\epsilon$, we have
\begin{align*}
\partial_u [\partial^{n-1}_s f(u, s + \delta) - \partial^{n-1}_s f(u, s)] = P_{s + \delta} & [\partial^{n-1}_s f(u, s + \delta) - \partial^{n - 1}_s f(u, s)] \\
&+ (\lambda^{(n)}(s + \delta - u) - \lambda^{(n)}(s - u))\\
&+ (P_{s + \delta} - P_s)\partial^{n-1}_s f(u, s) \\ &+ R_{n - 1}(u, s + \delta) - R_{n - 1}(u, s).
\end{align*}
Since
$$|\lambda^{(n)}(s + \delta - u) - \lambda^{(n)}(s - u)|\leq M \delta^\alpha,$$
and
$$|P_{s+\delta} - P_s|\cdot |\partial^{n-1}_s f(u, s)|\leq \frac{c\delta u C}{u^2} u \leq c \delta \leq c \delta^\alpha, $$
and
$$|R_{n - 1}(u, s + \delta) - R_{n - 1}(u, s)|\leq c\delta \sqrt{u} \leq c \delta^\alpha,$$
we apply Lemma \ref{l: ODE fact 1} with $M_1=c \delta^\alpha$ to prove \eqref{third statement goal}.
\end{proof}
\begin{lemma}
Suppose $\lambda\in C^{n,\alpha}([0,T];M)$ with $n\geq 2$ and $\alpha\in (0,1]$. There exists $c=c(M,n,T)$ so that
\begin{eqnarray*}
|R_{n+1}(u, s, \epsilon)| & \leq & cu^{\alpha-1/2}, \\
|R_n(u, s + \delta, \epsilon) - R_n(u, s, \epsilon)| & \leq & cu^{\alpha-1/2}\delta,\\
|\partial^n_s f(u,s+\delta,\epsilon) - \partial^n_s f(u,s,\epsilon)| & \leq & c (1+ \frac{\epsilon}{\alpha}) \min (u^\alpha, \delta^\alpha).
\end{eqnarray*}
\begin{proof}
Note that
$$R_n = \sum \frac{c}{f^m}\prod^{m-1}_{j=1} \partial^{m_j}_s f$$
with $3\leq m\leq n+1$, $1\leq m_j\leq n-1$, and the number of terms in the sum is no more than $n-1$. Since $\partial^n_s f$ exists, so does $R_{n+1}$:
$$R_{n+1}=\sum \frac{c}{f^m}\prod^{m-1}_{j=1} \partial^{m_j}_s f,$$
with $3\leq m\leq n+2$ and $1\leq m_j\leq n$. We can check that in each product, there is at most one $m_j=n$. Hence
$$|R_{n+1}|\leq cn\frac{u^{m-2}u^\alpha}{u^{m/2}}\leq c u^{\alpha+m/2-2}\leq c(M,n,T) u^{\alpha-1/2},$$
and
$$|\partial_s R_n|\leq |R_{n+1}|+\frac{4}{|f|^3}|\partial_s f|\cdot |\partial^n_s f| \leq cu^{\alpha-1/2}.$$
This implies that
$$|R_n(u,s+\delta)-R_n(u,s)|\leq cu^{\alpha-1/2}\delta.$$
It remains to prove the last statement.
Now we have
$$\partial_u(\partial^n_s f(u, s + \delta) + \lambda^{(n)}(s + \delta - u)) = P_{s + \delta}\partial^n_s f(u, s + \delta) + R_n(u, s + \delta),$$
and
$$\partial_u(\partial^n_s f(u, s)+\lambda^{(n)}(s - u)) = P_s \partial^n_s f(u, s) + R_n(u, s).$$
Let
\begin{align*}
Y(u) &= \partial^n_s f(u, s + \delta) + \lambda^{(n)}(s + \delta - u)-\partial^n_s f(u, s) - \lambda^{(n)}(s - u) \text{ and } \\
Q(u) &=\lambda^{(n)}(s + \delta - u) -\lambda^{(n)}( s - u).
\end{align*}
Then
$$\partial_u Y = P_{s+\delta} Y - P_{s+\delta} Q + (P_{s + \delta} - P_s) \partial^n_s f(u, s) + R_n(u, s + \delta) - R_n (u, s).$$
We see that
$$|Q(u)-Q(0)|\leq c\min (u^\alpha, \delta^\alpha),$$
and
$$|(P_{s + \delta} - P_s) \partial^n_s f(u, s)|\leq \frac{cu\delta \sqrt{Cu+\epsilon^2}}{u^2} u^\alpha \leq c \delta u^{\alpha-1/2} + c\epsilon \delta u^{\alpha - 1}.$$
By Lemma \ref{l: ODE fact 2} $(iii)$ with $|R(u)| \leq c \delta u^{\alpha - 1/2} + c \epsilon \delta u^{\alpha - 1}$,
$$|\partial^n_s f(u, s + \delta, \epsilon) - \partial^n_s f(u, s, \epsilon)| = |Y-Q| \leq c\min (u^\alpha, \delta^\alpha) + c \delta u^{\alpha + 1/2} + \frac{c\epsilon \delta}{\alpha} u^\alpha. $$
\end{proof}
\end{lemma}
\begin{lemma} \label{l: mix_u_s}
(Boundedness of mixed $u$ and $s$ derivatives.)
Suppose $\lambda\in C^n([0,T];M)$. Let $s_0\in (0,T)$ and $D_0=\{ (u,s)\in D: s_0\leq u\}$.
There exists $L_0=L_0(M,n,T,s_0)$ such that for all $l+k\leq n$,
$$|\partial^l_u \partial^k_s f( u, s, \epsilon)| \leq L_0.$$
In other words, $f \in C^n(D_0; L_0)$ for every $\epsilon\in (0,1].$
\end{lemma}
\begin{proof}
The case $l=0$ and $k\leq n$ is proven by Lemmas \ref{l: partial^k_s f} and \ref{l: partial^n_s f}. Consider $k=0$ and $1\leq l\leq n$. We have
$$\partial_u f = \frac{-2}{f} + \lambda'(s-u).$$
This implies that when $s_0\leq u$,
$$|\partial_u f|\leq \frac{2}{C\sqrt{u}} + M \leq L_0.$$
We can show by induction in $l$ that
$$\partial^l_u f = \frac{2}{f^2} \partial^{l-1}_u f +(-1)^{l-1} \lambda^{(l)}(s-u) + \hat{R}_l,$$
where $\hat{R}_l$ is the sum of a finite number (depending on $l$) of terms of the form
$$\frac{c}{f^m} \prod^{m-1}_{j=1} \partial^{m_j}_u f$$
with $3\leq m\leq l$ and $1\leq m_j\leq l-2$. Hence by induction $|\partial^l_u f|\leq L_0$ for $s_0\leq u\leq T$. The other cases $1\leq k\leq n-1$ are proved similarly.
\end{proof}
In summary, we have proved the following results about $f(u, s, \epsilon)$:
\begin{proposition} \label{summary}
If $\lambda$ is in $C^{n, \alpha}[0,T]$, then
$f(u, s, \epsilon)$ satisfies the following properties:
\begin{itemize}
\item $C\sqrt{u+\epsilon^2}\leq |f(u, s, \epsilon)|\leq C'\sqrt{u} + C'\epsilon$.
\smallskip
\item $|\partial^k_s f(u, s, \epsilon)|\leq cu$ for $1\leq k\leq n-1$.
\smallskip
\item $|\partial^n_s f(u, s, \epsilon)|\leq cu^\alpha$.
\smallskip
\item $|\partial^k_s f(u, s + \delta, \epsilon) - \partial^k_s f(u, s, \epsilon)|\leq c u \delta$ for $1 \leq k\leq n-2$.
\smallskip
\item $|\partial^{n-1}_s f(u, s + \delta, \epsilon) - \partial^{n-1}_s f(u, s, \epsilon)|\leq c\min (u\delta^\alpha, u^\alpha \delta)$ for $n\geq 1$.
\smallskip
\item $|\partial^n_s f(u, s + \delta, \epsilon) - \partial^n_s f(u, s, \epsilon)| \leq c (1 + \frac{\epsilon}{\alpha}) \min (u^\alpha, \delta^\alpha)$ for $n\geq 1$.
\smallskip
\item For every $0<s_0<T$, there exists $L_0=L_0(M, n, T, s_0)$ such that for all $l+k\leq n$, $|\partial^l_u \partial^k_s f( u, s, \epsilon)| \leq L_0$.
\end{itemize}
\end{proposition}
We emphasize that $c$ depends only on $M$, $n$, $T$, not on $\alpha$ or $\epsilon$. We know from Lemma \ref{l: upward ODE} that $f(u,s,\epsilon)$ converges uniformly in $D$ to $f(u,s)$ as $\epsilon \to 0^+$. For all $l+k=n$, it follows from the proofs of the previous lemmas that $\partial^l_u \partial^k_s f(u,s,\epsilon)$ can be expressed in terms of lower-order derivatives of $f(u,s,\epsilon)$ in $u$ and $s$. Therefore in $D_0=\{(u,s)\in D: 0<s_0\leq u\leq s\leq T\}$, $\partial^l_u \partial^k_s f(u,s,\epsilon)$ converges uniformly as $\epsilon\to 0^+$. This implies the following:
\begin{corollary} \label{sumry-prop}
If $\lambda$ is in $C^{n, \alpha}[0,T]$, then
$f(u,s)$ is in $C^n (D_0)$ and satisfies
\begin{itemize}
\item $C\sqrt{u}\leq |f(u, s)|\leq C'\sqrt{u}$.
\smallskip
\item $|\partial^k_s f(u, s)|\leq cu$ for $1\leq k\leq n-1$.
\smallskip
\item $|\partial^n_s f(u, s)|\leq cu^\alpha$.
\smallskip
\item $|\partial^k_s f(u, s + \delta) - \partial^k_s f(u, s)|\leq c u \delta$ for $1 \leq k\leq n-2$.
\smallskip
\item $|\partial^{n-1}_s f(u, s + \delta) - \partial^{n-1}_s f(u, s)|\leq c\min (u\delta^\alpha, u^\alpha \delta)$ for $n\geq 1$.
\smallskip
\item $|\partial^n_s f(u, s + \delta) - \partial^n_s f(u, s)| \leq c \min (u^\alpha, \delta^\alpha)$ for $n\geq 1$.
\smallskip
\item For every $0<s_0<T$, there exists $L_0=L_0(M, n, T, s_0)$ such that for all $l+k\leq n$, $|\partial^l_u \partial^k_s f( u, s)| \leq L_0$.
\smallskip
\end{itemize}
\end{corollary}
The first three properties of the corollary will help to show that we can take derivatives of the integral term in the formula (\ref{e: gamma''}). The next three properties will be used to estimate the H\"older norm of the derivatives.
\begin{corollary} If $\lambda$ is in $C^{n,\alpha} [0,T] $ with $n\geq 2$ and $\alpha \in (0,1]$, then $\gamma$ is in $C^n (0,T] $.
\end{corollary}
\begin{proof} The previous arguments imply that $\gamma(s-u,s)\in C^n(D_0)$ for every $s_0\in (0,T)$. Hence $s\mapsto \gamma(0,s)$ is in $C^n (0,T]$. Since $\gamma(s) = \gamma(0,s) + \lambda(0)$, the curve $\gamma$ is in $C^n (0,T]$.
\end{proof}
\section{Smoothness of $\gamma$}\label{smoothness section}
The goal of this section is to prove the following quantitative version of Theorem \ref{t: main theorem 1}.
\begin{theorem}
\label{t: quantitative}
Suppose $\lambda\in C^{n,\alpha}([0,T];M)$ with $n\geq 2$ and $\alpha \in (0,1]$.
(i) If $\alpha < 1/2$, then $\gamma \in C^{n, \alpha + 1/2} (0, T] $. For every $0 < s_0 < T$, there exists $c_0 = c_0 (M,n,T,s_0)$ such that $\gamma\in C^n( [s_0,T];c_0)$ and
$$|\gamma^{(n)}(s + \delta) - \gamma^{(n)}(s)| \leq \frac{c_0}{1 - 2 \alpha} \delta^{\alpha + 1/2}.$$
(ii) If $\alpha = 1/2$, then $\gamma \in \Lambda_*^n(0,T] $. For every $0< s_0 <T$, there exists $c_0=c_0 (M,n,T,s_0)$ such that $\gamma\in C^n([s_0,T];c_0)$ and
$$|\gamma^{(n)}(s + \delta) + \gamma^{(n)}(s - \delta)- 2\gamma^{(n)}(s)| \leq c_0 \delta.$$
(iii) If $\alpha\in (\frac{1}{2},1]$, then $\gamma\in C^{n+1,\alpha - 1/2}(0,T]$. For every $0< s_0 <T$, there exists $c_0=c_0(M, n, T, s_0)$ such that $\gamma\in C^{n+1}([s_0,T];c_0)$ and
$$|\gamma^{(n + 1)}(s + \delta) - \gamma^{(n + 1)}(s)| \leq \frac{c_0}{2\alpha - 1} \delta^{\alpha - 1/2}.$$
\end{theorem}
\begin{proof}
Assume that $\lambda\in C^{n,\alpha}([0,T];M)$ with $n \geq 2$ and $\alpha \in (0,1]$.
Fix $s_0\in (0,T)$ and let $D_0 = \{(u,s)\in D: 0<s_0\leq u\leq s \leq T\}$. Recall from \cite{C} that
$$\gamma''(s)=\frac{2\gamma'(s)}{\gamma(s)^2} - 4\gamma'(s)\int^s_0 \frac{\partial_s [f(u,s)]}{f(u,s)^3}\,du.$$
We need to show
$$ F(s) := \int^s_0 \frac{\partial_s f(u,s)}{f(u,s)^3}\, du \mbox{ is } \left\{
\begin{array}{rcl}
\mbox{ in } C^{n-2} & \mbox{ and } & F^{(n-2)} \in C^{\alpha + 1/2} \mbox{ when } \alpha \in (0, 1/2) \\
\mbox{ in } C^{n - 2} & \mbox{ and } & F^{(n - 2)} \in \Lambda_* \mbox{ when } \alpha = 1/2\\
\mbox{ in } C^{n - 1} & \mbox{ and } & F^{(n - 1)} \in C^{\alpha - 1/2} \mbox{ when } \alpha \in (1/2,1]
\end{array}\right. .$$
Let $F_1(u,s)= \dfrac{\partial_s f(u,s)}{f(u,s)^3}$ and $\hat{R}_1 (u,s) = 0$. We define $F_k$ and $\hat{R}_k$ recursively as follows:
\begin{eqnarray*}
\hat{R}_k &=& \partial_s \hat{R}_{k - 1} - \frac{3 (\partial_s f) (\partial^{k - 1} _s f)}{f^4},\\
F_k & = &\partial_s F_{k - 1} = \frac{\partial^k_s f }{ f^3} + \hat{R}_k.
\end{eqnarray*}
Let $\hat{F}_k(s)=F_k(s, s)$. Then formally
\begin{equation}\label{e: F^{(n-2)}}
F^{(n-2)}(s) = \hat{F}_1^{(n-3)}(s)+ \hat{F}_2^{(n-4)}(s) + \cdots + \hat{F}_{n-2}(s) + \int^s_0 \left[ \frac{\partial^{n-1}_s f(u, s)}{f^3(u, s)} + \hat{R}_{n - 1}(u, s)\right]\, du,
\end{equation}
and
\begin{equation}\label{e: F^{(n-1)}}
F^{(n-1)} (s)= \hat{F}_1^{(n-2)}(s)+ \hat{F}_2^{(n-3)}(s) + \cdots + \hat{F}_{n-1}(s) + \int^s_0 \left[ \frac{\partial^n_s f(u, s)}{f^3(u, s)} + \hat{R}_n(u, s)\right] \, du.
\end{equation}
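The formal identities (\ref{e: F^{(n-2)}}) and (\ref{e: F^{(n-1)}}) result from iterating the Leibniz rule: for each $k$,
$$\frac{d}{ds}\int^s_0 F_k(u, s)\, du = F_k(s, s) + \int^s_0 \partial_s F_k(u, s)\, du = \hat{F}_k(s) + \int^s_0 F_{k+1}(u, s)\, du.$$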
We notice that
\begin{equation} \label{Rkhat}
\hat{R}_k = \sum\frac{c}{f^m}\prod^{m-2}_{j=1} (\partial^{m_j}_s f),
\end{equation}
where the sum has at most $k-1$ terms, $4\leq m\leq k+2$, and $1\leq m_j\leq k-1$.
Further, when $k \geq 3$ each product contains at most one $m_j=k-1$.
Therefore, $\hat{R}_k \in C^{n-(k-1)}(D_0)$, $F_k \in C^{n-k}(D_0)$ and $\hat{F}_k\in C^{n-k} [s_0,T] $. The representation of $\hat{R}_k$ in \eqref{Rkhat} also implies that
\begin{eqnarray}
\label{e: R^k} |\hat{R}_k(u, s)| &\leq & c \mbox{ for } 1 \leq k \leq n,\\
\label{e: R^{n+1}} \mbox{ and }~~|\hat{R}_{n+1}(u, s)| &\leq & \frac{c}{u^{1/2}} \mbox{ if } \alpha \geq \frac{1}{2}.
\end{eqnarray}
Hence equation (\ref{e: F^{(n-2)}}) holds for all $\alpha \in (0,1]$ and equation (\ref{e: F^{(n-1)}}) holds when $\alpha \in (1/2,1]$.
Let
$$\displaystyle I_k(s):=\int^s_0 \dfrac{\partial^k_s f(u,s)}{f(u,s)^3}\, du \; \text{ and }
\; \displaystyle IR_k(s)=\int^s_0 \hat{R}_k (u, s)\, du.$$
Theorem \ref{t: quantitative} will be proven once we show that
\begin{itemize}
\item $I_{n - 1} + IR_{n - 1} \in C^{\alpha + 1/2} [s_0,T] \; \text{ for } \alpha \in (0,1/2), $ \smallskip
\item $I_{n - 1} + IR_{n - 1} \in \Lambda_* [s_0,T] \; \text{ for } \alpha = 1/2, \; \text{ and }$ \smallskip
\item $I_n + IR_n \in C^{\alpha - 1/2} [s_0,T] \; \text{ for } \alpha \in (1/2,1],$
\end{itemize}
along with the needed bounds on $| I_k(s+\delta) - I_k(s) |$ and $| IR_k(s + \delta) - IR_k(s)|$ (and the appropriate estimates for the $\alpha = 1/2$ case).
This is the content of the next three lemmas.
\end{proof}
\begin{lemma}
Suppose $\lambda\in C^{n,\alpha}([0,T]; M)$, with $n\geq 2$ and $\alpha \in (0,1]$.
Then there exists $c=c(M,n,T)$ such that for all $0<s_0\leq s\leq s+ \delta \leq T$,
\begin{eqnarray*}
|IR_k(s + \delta) - IR_k(s)| & \leq & c \delta \mbox{ for all } 1\leq k \leq n-1 \text{ and }\\
|IR_n(s + \delta) - IR_n(s) | & \leq & c \delta \mbox{ if } \alpha \geq \frac{1}{2}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
It follows from the definition of $\hat{R}_k$ and formula (\ref{e: R^k}) that
for $1\leq k\leq n - 1$,
$$
|\hat{R}_k(u, s + \delta) - \hat{R}_k(u, s)| \leq \int_s^{s + \delta} | \partial_v \hat{R}_k(u,v) |\, dv
\leq c\delta.$$
Similarly if $\alpha \geq \frac{1}{2}$ equation \eqref{e: R^{n+1}} implies
$$|\hat{R}_n(u, s + \delta) - \hat{R}_n(u, s)| \leq \frac{c\delta}{u^{1/2}}.$$
Integrating completes the lemma.
\end{proof}
\begin{lemma}Suppose $\lambda\in C^{n,\alpha}([0,T]; M)$, with $n\geq 2$ and $\alpha \in (0,\frac{1}{2}]$.
Then $I_{n-1} \in C^{\alpha + 1/2} [s_0, T] $ when $ \alpha \in (0,1/2)$
and $I_{n-1} \in \Lambda_*[s_0,T] $ when $ \alpha = 1/2.$
In particular,
there exists $c=c(M,n,T)$ such that for all $0<s_0\leq s\leq s+ \delta \leq T$,
$$|I_{n - 1}(s + \delta) - I_{n - 1} (s) | \leq \left\{\begin{array}{rcl} c(\frac{1}{1 - 2 \alpha} +1) \delta^{\alpha + 1/2} + c(1+\frac{1}{\sqrt{s_0}}) \delta & \mbox{ when} & 0< \alpha < \frac{1}{2}\\
c (1+\log^+ \frac{s}{\delta} + \frac{1}{\sqrt{s_0}})\delta& \mbox{ when } & \alpha = \frac{1}{2} \end{array}\right. $$
and when $\alpha = 1/2$,
\begin{equation}\label{LambdaStarEstimate}
|I_{n - 1}(s + \delta) + I_{n - 1}(s - \delta)- 2 I_{n - 1} (s) | \leq c\left(1+\frac{1}{\sqrt{s_0}}\right) \delta
\end{equation}
for all $0<s_0\leq s-\delta \leq s+ \delta \leq T$.
\end{lemma}
\begin{proof}
We decompose $I_{n-1}(s+\delta) - I_{n-1}(s)$ into the sum of four integrals and bound each integral.
\begin{eqnarray*}
I_{n-1}(s + \delta) - I_{n-1} ( s) &= &\int^{\delta \wedge s}_0 \frac{ \partial^{n-1}_s f(u, s + \delta) - \partial^{n-1}_s f(u, s)}{f(u, s + \delta)^3}\,du\\
& + & \int^s_{\delta \wedge s} \frac{ \partial^{n-1}_s f(u, s + \delta) - \partial^{n-1}_s f(u, s)}{f(u, s + \delta)^3}\,du\\
& + & \int^s_0 \frac{\partial^{n-1}_s f(u,s) (f(u,s)^3 - f(u, s + \delta)^3)}{f(u, s)^3 f(u, s + \delta)^3}\, du\\
& + & \int^{s + \delta}_ s \frac{\partial^{n-1}_s f(u, s + \delta)}{f(u, s + \delta)^3}\,du.
\end{eqnarray*}
The first integral:
\begin{eqnarray*}
\left |\int^{\delta \wedge s}_0 \frac{ \partial^{n-1}_s f(u, s + \delta) - \partial^{n-1}_s f(u, s)}{f(u, s + \delta)^3}\,du\right| &\leq & \int^{\delta \wedge s}_0 \frac{c u \delta^\alpha}{u^{3/2}}\, du \\
&=& c \delta^\alpha \sqrt{\delta \wedge s} \leq c \delta^{\alpha+1/2}.
\end{eqnarray*}
The second integral, when $0<\alpha< 1/2$:
\begin{eqnarray*}
\left|\int^s_{\delta \wedge s} \frac{ \partial^{n-1}_s f(u, s + \delta) - \partial^{n-1}_s f(u, s)}{f(u, s + \delta)^3}\, du \right| &\leq & \int^s_{ \delta\wedge s} \frac{c u^\alpha \delta} {u^{3/2}}\, du\\
&\leq & \frac{c\delta}{1 -2 \alpha} (\delta^{\alpha - 1/2} - s^{\alpha - 1/2}) \\
& \leq & \frac{c}{1 - 2\alpha} \delta^{\alpha + 1/2}.
\end{eqnarray*}
In the case $\alpha = 1/2$, the second integral is bounded by
$$
\int^s_{\delta \wedge s} c\delta u^{-1}\, du = c\delta \log \frac{s}{s \wedge \delta} = c\delta \log^+ \frac{s}{\delta}.$$
The third integral:
\begin{eqnarray*}
\left| \int^s_0 \frac{\partial^{n-1}_s f(u,s) (f(u,s)^3 - f(u, s + \delta)^3)}{f(u, s)^3 f(u, s + \delta)^3}\, du\right| & \leq & \int^s_0 \frac{c u(u^2\delta)}{u^3}\,du \\
& = & c\delta s\leq c \delta.
\end{eqnarray*}
The last integral:
\begin{eqnarray*}
\left|\int^{s + \delta}_ s \frac{\partial^{n-1}_s f(u, s + \delta)}{f(u, s + \delta)^3}\,du \right| \leq \int^{s + \delta}_s \frac{c u }{u^{3/2}} \, du &= &c (\sqrt{s+\delta} - \sqrt{s})\\
& = &\frac{c\delta}{\sqrt{s + \delta}+ \sqrt{s}} \leq \frac{c}{\sqrt{s_0}} \delta.
\end{eqnarray*}
To finish the proof, it remains to show \eqref{LambdaStarEstimate}. Set $\alpha = 1/2$ and
write
$$ I_{n - 1}(s + \delta) + I_{n - 1}(s - \delta)- 2 I_{n - 1} (s) =
\left[ I_{n - 1}(s + \delta) - I_{n - 1} (s) \right] - \left[ I_{n - 1} (s) - I_{n - 1}(s - \delta) \right].$$
As with
$ I_{n - 1}(s + \delta) -I_{n-1}(s)$ above, we can decompose $I_{n-1}(s) - I_{n - 1}(s - \delta)$
into the sum of four integrals. In both cases, the first, third and fourth integrals yield adequate bounds. When $\delta \geq s-\delta$, the second integral is also adequately controlled. Thus, we assume $\delta < s-\delta$ and we only need to control the difference of the second integrals:
\begin{equation*}
\int^s_{\delta} \frac{ \partial^{n-1}_s f(u, s + \delta) - \partial^{n-1}_s f(u, s)}{f(u, s + \delta)^3}\, du
- \int^{s-\delta}_{\delta } \frac{ \partial^{n-1}_s f(u, s ) - \partial^{n-1}_s f(u, s-\delta)}{f(u, s )^3}\, du.
\end{equation*}
We can decompose this into the sum $J_1 + J_2 + J_3$ where
\begin{align*}
J_1 &= \int^{s-\delta}_{\delta } \frac{\left( f(u, s)^3 - f(u, s+\delta)^3 \right) \left( \partial^{n-1}_s f(u, s+\delta) - \partial^{n-1}_s f(u, s) \right)}{f(u, s + \delta)^3 f(u, s)^3} \, du \\
J_2 &= \int^{s-\delta}_{\delta } \frac{ \partial^{n-1}_s f(u, s+ \delta ) + \partial^{n-1}_s f(u, s - \delta ) - 2 \partial^{n-1}_s f(u, s ) }{ f(u, s)^3} \, du\\
J_3 &= \int^{s}_{s-\delta} \frac{ \partial^{n-1}_s f(u, s + \delta) - \partial^{n-1}_s f(u, s)}{f(u, s + \delta)^3}\, du.
\end{align*}
Then
$$ |J_1| \leq \int^{s-\delta}_{\delta } \frac{c(u^2\delta )( u \sqrt{\delta}) }{u^3}\, du \leq c \delta^{3/2},$$
and
$$ |J_3| \leq \int_{s-\delta}^{s } c \delta u^{-1} \, du = c \delta \log \frac{s}{s-\delta} \leq c \delta \log \frac{T}{s_0}.$$
Since
\begin{align*}
&\Big| \left[ \partial^{n-1}_s f(u, s+ \delta ) - \partial^{n-1}_s f(u, s) \right]
- \left[ \partial^{n-1}_s f(u, s) - \partial^{n-1}_s f(u, s- \delta ) \right] \Big| \\
&\;\;\;\;= \left| \int_s^{s+\delta} \partial^{n}_s f(u, r ) - \partial^{n}_s f(u, r-\delta ) \, dr \right| \\
&\;\;\;\;\leq \int_s^{s+\delta} c \sqrt{\delta} \, dr \leq c \delta^{3/2},
\end{align*}
then
$$|J_2| \leq \int^{s-\delta}_{\delta } \frac{ c\delta^{3/2}}{u^{3/2}} \, du \leq c \delta.$$
This establishes \eqref{LambdaStarEstimate} and completes the proof of the lemma.
\end{proof}
\begin{lemma} Suppose $\lambda\in C^{n,\alpha}([0,T];M)$ with $n\geq 2$ and $\alpha\in (\frac{1}{2},1]$.
Then $I_{n} \in C^{\alpha - 1/2} [ s_0, T] $, and
there exists $c=c(M,T,n)$ such that for all $0\leq s \leq s+ \delta\leq T$
$$|I_n(s + \delta) - I_n(s) | \leq \frac{c}{2 \alpha - 1} \delta^{\alpha - 1/2}.$$
\end{lemma}
\begin{proof}
We proceed in a manner similar to the previous proof.
\begin{eqnarray*}
I_n(s + \delta) - I_n(s) &= & \int^{\delta \wedge s}_0 \frac{ \partial^n_s f(u, s + \delta) - \partial^n_s f(u, s)}{f(u, s + \delta)^3}\,du\\
&+ &\int^s_{\delta \wedge s} \frac{ \partial^n_s f(u, s + \delta) - \partial^n_s f(u, s)}{f(u, s + \delta)^3}\,du\\
& + & \int^s_0 \frac{\partial^n_s f(u,s) (f(u,s)^3 - f(u, s + \delta)^3)}{f(u, s)^3 f(u, s + \delta)^3}\, du\\
& + &\int^{s + \delta}_ s \frac{\partial^n_s f(u, s + \delta)}{f(u, s + \delta)^3}\, du.
\end{eqnarray*}
The first integral:
\begin{eqnarray*}
\left|\int^{\delta \wedge s}_0 \frac{ \partial^n_s f(u, s + \delta) - \partial^n_s f(u, s)}{f(u, s + \delta)^3}\,du\right|& \leq & \int^{\delta \wedge s}_0 \frac{c\min(u^\alpha, \delta^\alpha)}{u^{3/2}}\, du\\
& \leq &c\int^{\delta \wedge s}_0 u^{\alpha - 3/2} \, du \leq \frac{c}{2\alpha - 1} \delta^{\alpha - 1/2}.
\end{eqnarray*}
The second integral:
\begin{eqnarray*}
\left| \int^s_{\delta \wedge s} \frac{ \partial^n_s f(u, s + \delta) - \partial^n_s f(u, s) }{ f(u, s + \delta)^3}\,du \right| & \leq & \int^s_{s \wedge \delta} \frac{c\min (u^\alpha, \delta^\alpha)}{u^{3/2}}\, du\\
&\leq & \int^s_{s \wedge \delta} \frac{c\delta^\alpha}{u^{3/2}}\, du \leq c\delta^{\alpha}(\delta^{-1/2}-s^{-1/2})\leq c \delta^{\alpha - 1/2}.
\end{eqnarray*}
The third integral:
\begin{eqnarray*}
\left|\int^s_0 \frac{\partial^n_s f(u,s) (f(u,s)^3 - f(u, s + \delta)^3)}{f(u, s)^3 f(u, s + \delta)^3}\, du \right| & \leq & \int^s_0 cu^\alpha \frac{u^2\delta}{u^3}\, du \\
&= &\int^s_0 c\delta u^{\alpha - 1}\, du = \frac{c\delta}{\alpha}s^\alpha\leq c\delta^{\alpha - 1/2}.
\end{eqnarray*}
The last integral:
\begin{eqnarray*}
\left| \int^{s + \delta}_ s \frac{\partial^n_s f(u, s + \delta)}{f(u, s + \delta)^3} \, du \right| & \leq & \int^{s + \delta}_s \frac{cu^\alpha}{u^{3/2}}\, du = \frac{c}{2\alpha - 1} ((s + \delta)^{\alpha - 1/2} - s^{\alpha - 1/2})\\
&\leq & \frac{c}{2\alpha - 1} \delta^{\alpha - 1/2}.
\end{eqnarray*}
\end{proof}
\section{Real analyticity of $\gamma$}
\label{sec: analytic}
In this section we prove Theorem \ref{t:analytic}. Since $\lambda$ is real analytic, there exists $\delta>0$ such that $\lambda$ can be extended (complex) analytically to $E=\{z\in\mathbb{C}: d(z,[0,T])\leq \delta\}$.
Notice that $f(s,s) = \gamma(0,s)= \gamma(s)-\lambda(0)$ and $f(u,s,\epsilon)$ converges uniformly to $f(u,s)$ on $D=\{(u,s):0<u\leq s, 0<s \leq T\}$.
So it suffices to show that $f(u, s, \epsilon)$ can be extended analytically to the same neighborhood of $D$ (in $\mathbb{C}\times \mathbb{C}$) for all $\epsilon$.
Recall that $G(z,u,s) = \frac{-2}{z} + \lambda'(s - u)$ is analytic in $(z,u,s)$, hence by the dependence of solutions of ODE on parameters (see \cite[Theorem 8.1]{CL}) the function $f( \cdot,s,\epsilon)$ in (\ref{e: ODE}) exists and is analytic in a neighborhood of $u=0$ for each $\epsilon\in (0,1]$ and $s\in E$.
The main difficulty is to show this neighborhood is the same for all $\epsilon$ and $s$.
The outline of this section is as follows:
First we show in Lemma \ref{l: complex} that the equation (\ref{e: ODE}) still has a solution when $s$ is in the domain $$E_1 = \{t: 0< \mbox{Re\,} t<T + \delta_1, |\mbox{Im\,} t|<\delta_1 \}$$ with $\delta_1$ small enough and not depending on $\epsilon$.
Then in Lemma \ref{l: extension} we show that one can take complex $u$-derivatives in (\ref{e: ODE}), which means that the solutions extend analytically.
Finally by \cite[Theorem 8.3]{CL} the solutions are analytic in $(u,s)$ on the same domain for all $\epsilon$.
Let $M$ be an upper bound for the sup-norms of $\lambda'$ and $\lambda''$ on $E$. As a first step, we will show the following:
\begin{lemma} \label{l: complex}
There exists $\delta_1 \in (0, \delta) $ depending on $\delta, M$ and $T$ such that for every $s\in E_1$ and $\epsilon\in (0,1]$, the solution to the equation
\begin{eqnarray*}
\partial_u f(u, s, \epsilon) &= & \frac{-2}{f(u, s, \epsilon)} + \lambda'(s - u),~~~~ u\geq 0,\\
f(0, s, \epsilon) &= & i\epsilon,
\end{eqnarray*}
exists uniquely for $u \in [0, \mbox{Re\,} s + \delta_1]$. Moreover,
$$
\max(\sqrt{2u},\frac{\epsilon}{2}) \leq \mbox{Im\,} f(u, s, \epsilon) \mbox{ for } 0 \leq u \leq \mbox{Re\,} s + \delta_1.
$$
\end{lemma}
\begin{proof}
The solution $f(u,s, \epsilon)$ exists on a neighborhood of $u=0$, and it continues to exist as long as it stays above the real line.
The uniqueness of this solution follows from standard ODE arguments.
To establish the results of the lemma, we will compare $f(u, s, \epsilon)$ to $f(u,s_0,\epsilon)$ where $s_0 = \mbox{Re\,} s$ and
\begin{eqnarray*}
\partial_u f(u, s_0, \epsilon) &= & \frac{-2}{f(u, s_0, \epsilon)} + \lambda'(s_0 - u),~~~~ u\geq 0,\\
f(0, s_0, \epsilon) &= & i\epsilon.
\end{eqnarray*}
It follows from Lemma \ref{l: upward ODE} $(i,ii)$ that
\begin{eqnarray*}
\sqrt{3u + \epsilon^2} & \leq & \mbox{Im\,} f(u, s_0, \epsilon) \\
\mbox{and }\,\,\,\,\,\,|\mbox{Re\,} f(u, s_0, \epsilon)| &\leq & \sqrt{u}~~~~~~~~~ \mbox{ for } 0\leq u \leq s_0 + \delta_1,
\end{eqnarray*}
where $\delta_1 < \delta$ will be specified momentarily.
By following the same argument as in Lemma \ref{l: lambda in C^1},
we obtain a bound for the difference of $f(u,s,\epsilon)$ and $f(u, s_0,\epsilon)$:
$$|f(u, s, \epsilon) - f(u, s_0, \epsilon)| \leq C M u |s - s_0| \leq C M u \delta_1$$
whenever $0\leq u \leq S$ with
$$S = \inf\{ 0\leq v \leq s_0+\delta_1: \mbox{Im\,} f(v, s, \epsilon) < \frac{\epsilon}{3} \mbox { or } \frac{|\mbox{Re\,} f(v, s, \epsilon)|}{\mbox{Im\,} f(v, s, \epsilon)} > C_1\}, $$
where $C_1$ is a constant in $(0,1)$ and close to $1$.
It follows that
$$\mbox{Im\,} f(u, s, \epsilon) \geq \mbox{Im\,} f(u, s_0, \epsilon) - C M u \delta_1 \geq \sqrt{3u+\epsilon^2} - CMu\delta_1,$$
and
$$ |\mbox{Re\,} f(u, s, \epsilon)| \leq |\mbox{Re\,} f(u, s_0, \epsilon)| + C M u \delta_1 \leq \sqrt{u} + C M u \delta_1.$$
By choosing $\delta_1$ small enough, $\mbox{Im\,} f(u, s, \epsilon) \geq \max(\sqrt{2u},\epsilon/2)$ and
$$\frac{|\mbox{Re\,} f(u, s, \epsilon)|}{\mbox{Im\,} f(u, s, \epsilon)} < C_1$$
for all $0\leq u \leq S$. It follows that $S=s_0 + \delta_1$, and the lemma follows.
\end{proof}
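As a purely illustrative numerical sanity check of Lemma \ref{l: complex} (not part of the proof), one can integrate the initial value problem with a classical Runge-Kutta scheme for a sample analytic driving term and verify the lower bound on $\mbox{Im\,} f$. The driving term $\lambda(t)=0.1\sin t$, the point $s = 1 + 0.005i$ and $\epsilon = 0.1$ below are illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative check of Lemma `l: complex`: integrate
#   df/du = -2/f + lambda'(s - u),  f(0) = i*eps,
# for the sample driving term lambda(t) = 0.1*sin(t) (so |lambda'| <= 0.1),
# and verify Im f(u) >= max(sqrt(2u), eps/2) along the trajectory.

def lam_prime(t):
    return 0.1 * np.cos(t)  # works for complex t as well

def solve_f(s, eps, u_max, n_steps=20000):
    """Classical RK4 for the complex-valued ODE above."""
    h = u_max / n_steps
    f = 1j * eps
    traj = [(0.0, f)]
    rhs = lambda u, f: -2.0 / f + lam_prime(s - u)
    for k in range(n_steps):
        u = k * h
        k1 = rhs(u, f)
        k2 = rhs(u + h / 2, f + h / 2 * k1)
        k3 = rhs(u + h / 2, f + h / 2 * k2)
        k4 = rhs(u + h, f + h * k3)
        f = f + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(((k + 1) * h, f))
    return traj

traj = solve_f(s=1.0 + 0.005j, eps=0.1, u_max=1.0)
print(all(f.imag >= max(np.sqrt(2 * u), 0.05) for u, f in traj))
```

With these parameters, $\mbox{Im\,} f(u)$ tracks $\sqrt{4u+\epsilon^2}$ closely, comfortably above the bound $\max(\sqrt{2u},\epsilon/2)$.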
Now we will show the following lemma.
\begin{lemma} \label{l: extension} For every $\epsilon\in (0,1]$, $s\in E_1$ and $0< \tilde{u}<\mbox{Re\,} s + \delta_1$, there exist $r=r(\tilde{u}, M, \delta, T) \in (0, \delta - \delta_1)$ and an analytic extension of $f(\cdot, s, \epsilon)$ on $B_{\tilde{u}} =\{ z\in \mathbb{C}: |z - \tilde{u}| < r\}$ such that
$$\partial_u f(u, s, \epsilon) = \frac{-2}{ f(u, s, \epsilon)} + \lambda'(s - u).$$
\end{lemma}
\begin{proof}
We will use the Picard iteration to show that the equation
\begin{eqnarray}
\label{e: g} g'(u) & = & -\frac{2} {g(u)} + \lambda'( s - u),\\
\nonumber g(\tilde{u}) & = & f( \tilde{u}, s, \epsilon)
\end{eqnarray}
has a solution on $B_{\tilde{u}} =\{ z\in \mathbb{C}: |z - \tilde{u}| < r\}$, where $r$ will be specified later.
Indeed for $|u - \tilde{u}| < r$ define
$g_0(u) = f(\tilde{u}, s, \epsilon)$ and
$$g_{n+1}(u) = f(\tilde{u}, s, \epsilon) + \int^u_{ \tilde{u}} \frac{-2}{g_n(v)} + \lambda'(s-v) \, dv.$$
We will show by induction on $n$ that $g_n$ is well-defined and analytic in $B_{\tilde{u}}$ and
$$\mbox{Im\,} g_n(u) \geq \sqrt{\tilde{u}}.$$
The base case $n=0$ is clear because of Lemma \ref{l: complex}.
Suppose the claim holds for $n$.
The function $g_{n+1}$ is well-defined and analytic in $B_{\tilde{u}}$ since $ \frac{1}{g_n}$ is analytic in a simply connected domain. Now
\begin{eqnarray*}
\mbox{Im\,} g_{n+1}(u) & \geq &\mbox{Im\,} f(\tilde{u}, s, \epsilon) - |u-\tilde u|\max_{ v\in B_{\tilde{u}}} \left( \frac{2}{|g_n(v)|} + |\lambda'(s-v)|\right) \\
%
& \geq & \sqrt{2\tilde{u}} - r (\frac{2}{\sqrt{\tilde{u}}} + M).
\end{eqnarray*}
The claim holds for $n+1$ by choosing $r$ small enough depending on $\tilde{u}, M$ and $T$. We also require that $r$ is small enough so that $2r/\tilde{u}<1$. Then the sequence $g_n$ converges uniformly in $B_{\tilde{u}}$ since
\begin{eqnarray*}
|g_{n+1}(u) - g_n(u)| & \leq & |u - \tilde{u}| \max_{ v\in B_{\tilde{u}}} \frac{2 |g_n(v) - g_{n-1}(v)|}{|g_n(v) g_{n-1}(v)|}\\
& \leq & \frac{2r} { \tilde{u}} ||g_n - g_{n-1}||_{B_{\tilde{u}},\infty}.
\end{eqnarray*}
\noindent
Let $g$ be the limit. Then this function is analytic and satisfies the differential equation (\ref{e: g}). In particular $g(u)$ and $f(u, s, \epsilon)$ solve the same initial value problem, hence they are equal when $u$ is real. In other words, $f(\cdot, s, \epsilon)$ extends analytically to $B_{\tilde{u}}$.
\end{proof}
\emph{Proof of Theorem \ref{t:analytic}.}
By \cite[Theorem 8.3]{CL}, for every $\epsilon\in (0,1]$ the function $f(u, s, \epsilon)$ is analytic in the domain $\{(u,s): s\in E_1, u\in B_{\tilde{u}} \mbox{ for some } \tilde{u}\in (0, \mbox{Re\,} s + \delta_1) \}$. It follows that $f(u,s)$ is also analytic in the same domain, which contains $\{(s,s): 0<s \leq T\}$. Hence $f(s,s)$ and $\gamma(s)$ are real analytic on $(0,T]$. \qed
\section{Behavior of $\gamma$ at $s=0$}\label{sec:behavior at 0}
In this section we analyze the behavior of $\gamma$ at its base, proving
Theorem \ref{t: main theorem 2} and Theorem \ref{t:series at 0}.
\subsection{Smoothness of $\gamma(s^2)$ at $s=0$}
We may extend $\lambda$ smoothly to $(-\delta, T)$ by the concatenation property of the Loewner equation.
Thus, it suffices to show that for fixed $t_0 \in (0, T)$, the curve $\gamma_0(s^2) = g_{t_0} (\gamma(s^2 + t_0))$ is smooth at $s=0$ provided $\gamma$ is smooth on $(0,T)$.
The idea, illustrated in Figure \ref{at0}, is as follows. Let $U$ be the intersection of $\mathbb{H}$ and a small disk centered at $\lambda(0)$ and let $V=g_{t_0}^{-1}(U)$. Define an analytic branch $\phi$ of $\sqrt{z-\gamma(t_0)}$ in a neighborhood of $\gamma(t_0)$ such that the branch cut is $\gamma(0,t_0]$. Let $W=\phi(V)$. All we need to check is that for small $\epsilon>0$ the images under $\phi$ of $\gamma((t_0-\epsilon,t_0])$ and $\gamma(t_0+s^2)$, $0\leq s^2\leq \epsilon$, are smooth. Finally the smoothness of $\gamma_0(s^2)$ follows immediately from the Schwarz reflection principle through $E=\phi(\gamma((t_0-\epsilon,t_0]))$ (in the case $\gamma$ is analytic)
or the Kellogg-Warschawski theorem (in the case $\gamma$ is $C^{n, \alpha}$) applied to the map $\phi\circ g_{t_0}^{-1}$ from $U$ to $W$.
\begin{figure}[h]
\centering
\includegraphics[width=5in]{smooth1}
\vspace{-1.5in}
\caption{ Illustration for the proof of Theorem \ref{t: main theorem 2}}\label{at0}
\end{figure}
\begin{proof}[Proof of Theorem \ref{t: main theorem 2} when $\l$ is analytic]
It follows from (\ref{e: gamma''}) that $\gamma'(t) \neq 0$ for all $t$. Thus, there exists a (real) analytic function $h$ on $(-\sqrt{\epsilon},\sqrt{\epsilon})$ such that
$$\frac{\gamma(t_0+s) - \gamma(t_0)}{s} = h(s)^2 \mbox{ for all } s\in (-\sqrt{\epsilon},\sqrt{\epsilon}) \backslash \{0\}. $$
Let $\phi_1(s) = is h(-s^2)$ and $\phi_2(s) = sh(s^2)$. We see that these two functions are analytic and one-to-one. Moreover,
$$\phi_1(s)^2 = \gamma(t_0 - s^2) - \gamma(t_0) \mbox{ and }$$
$$\phi_2(s)^2 = \gamma( t_0 + s^2) - \gamma(t_0).$$
Therefore the boundary $E$ of $W$, which is parametrized by $\phi_1(s)$ near 0, and $\phi(\gamma(t_0 + s^2))$ are analytic.
Since the latter map is the image of $\gamma_0(s^2)$ under $\phi \circ g_{t_0}^{-1}$, it follows from the Schwarz reflection principle that $\gamma_0(s^2)$ is analytic at $0$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t: main theorem 2} when $\l$ is $C^{\beta}$]
By Theorem \ref{t: main theorem 2}, $\gamma \in C^{n,\a}(0,T]$ for appropriate $\a \in (0, 1)$.
It is not obvious that the function $h$ from the previous case is $C^{n,\a}$;
indeed, one can find a function $\gamma \in C^{n,\a}$ for which $h$ is not. Now let
$$H(s) = \frac{\gamma(t_0 + s) - \gamma(t_0)}{s} \mbox{ for } s \in (-\sqrt{\epsilon}, \sqrt{\epsilon})\backslash\{0\}, \mbox{ and } H(0)=\gamma'(t_0).$$
We claim that $H\in C^{n-1,\a} (-\sqrt{\epsilon},\sqrt{\epsilon})$. Indeed
$$H^{(n)}(s) = \frac{n!}{s^{n+1}} \sum^{n}_{k=0} \frac{(-1)^{n-k}}{k!} s^k \gamma^{(k)}( t_0+s)
- \frac{(-1)^n n!}{s^{n+1}} \gamma(t_0)
\mbox{ for } s\neq 0.$$
Apply Proposition \ref{Cn_alpha} to the functions $\gamma,\gamma',\cdots, \gamma^{(n)}$ to get $|H^{(n)}(s)| \leq c|s|^{\a - 1}$, which implies the claim.
Since $\inf_{s\in (-\sqrt{\epsilon},\sqrt{\epsilon})} |H(s)|>0$, it follows from the claim that the function $s \mapsto \sqrt{H(-s^2)} $ is $C^{n-1,\a}(-\sqrt{\epsilon},\sqrt{\epsilon})$
for any well-defined square-root function.
Let $\phi_1(s)$ be a parametrization near 0 of $E$ such that $\phi_1(s)^2 =\gamma(t_0 - s^2) - \gamma(t_0)$ and $\phi_1(s) = is\sqrt{H(-s^2)}$ for $s \in (-\sqrt{\epsilon}, \sqrt{\epsilon})$.
Since $\phi_1'(s) = i\,\dfrac{\gamma'(t_0 - s^2)}{\sqrt{H(-s^2)}}$, the function $\phi_1$ is $C^{n,\a}(-\sqrt{\epsilon},\sqrt{\epsilon})$.
The same argument shows that the function $\phi(\gamma(t_0 + s^2))$ is $C^{n,\a}[0,\sqrt\epsilon)$.
Together with the last two statements,
the Kellogg-Warschawski theorem \cite[Theorem 3.6]{Pommerenke}
implies that the function $\gamma_0(s^2)$ is $C^{n,\a}[0,\sqrt\epsilon)$.
\end{proof}
{\bf Remark.} The proof also shows that if $\lambda\in C^{n,\alpha}([0,T];M)$ then $\Gamma\in C^{n,\alpha+1/2}([0,T]; c)$
with $c=c(T, M, n, \alpha)$.
\subsection{Expansion of $\gamma$ at $s=0$}\label{sec: comparison curves}
The goal of this section is to prove Theorem \ref{t:series at 0}, which illuminates why the $s^2$ parametrization is a natural parametrization at the base of a Loewner curve $\gamma$.
To accomplish this, we create a comparison curve $\tilde \gamma$ that closely approximates $\gamma$ near its base and is ``nice" at $s=0$ (that is, $\tilde \Gamma(s) =\tilde \gamma(s^2)$ is smooth at $s=0$).
The properties of the comparison curve are summarized in Proposition \ref{prop} below.
Assume $\gamma$ is generated by $\l \in C^{n,\a}[0,T]$. We define $\tilde{\gamma}$ as a perturbation of a vertical slit, as done in Section 4.6 of \cite{L}.
Set
$$\phi(z)
= z + \sum_{m=2}^{4n+1} \frac{b_m}{2^m} z^m,$$
which is conformal on a neighborhood of the origin.
The real-valued coefficients $b_m$ will depend on $\l^{(k)}(0)$ as we will describe later.
Then define
\begin{align*}
\tilde{\gamma}(t) = \phi(2i\sqrt{t})
&= 2i\sqrt{t} + \sum_{m=2}^{4n+1} i^{m} b_m t^{m/2} \\
&= 2i\sqrt{t} - b_2 t - i \,b_3 t^{3/2} + b_4 t^2 + \cdots + i \,b_{4n+1} t^{2n+1/2}.
\end{align*}
Let $g_t: \H \setminus [0,2i\sqrt{t}] \to \H$ and $\tilde{g}_t:\H \setminus \tilde\gamma[0,t] \to \H$ be conformal maps with the hydrodynamic normalization at infinity. Then we set $\phi_t = \tilde{g}_t \circ \phi \circ g_t^{-1}$
and $\tilde{\l}(t) = \phi_t(0)$, as illustrated in Figure \ref{comparison curve figure}.
In this form, $\tilde{\gamma}$ and $\tilde{\l}$ are not parametrized by halfplane capacity. We will need to reparametrize by $t=t(s)$, which satisfies $t(0)=0$ and $\frac{dt}{ds} = \phi_t'(0)^{-2}$.
Note in particular that $\frac{dt}{ds}\big|_{s=0} = 1$, since $\phi_0 = \phi$ satisfies $\phi'(0)=1$.
\begin{figure}
\begin{tikzpicture}
\draw[thick] (0,0) -- (0,2);
\draw[fill] (0,2) circle [radius=0.05];
\draw[thick,dashed] (0,2) -- (0,3.2);
\node[right] at (0,2) {$2i\sqrt t$};
\draw (-3,0) -- (3,0);
\draw[->] (3,2) to [out=45,in=135] (5,2);
\node[above] at (4,2.5) {$\phi$};
\draw[->] (3,-3) to [out=45,in=135] (5,-3);
\node[above] at (4,-2.5) {$\phi_t$};
\draw[->] (-3,-1) to [out=225,in=135] (-3,-3);
\node[left] at (-3.5,-2) {$g_t$};
\draw[->] (11,-1) to [out=315,in=45] (11,-3);
\node[right] at (11.5,-2) {$\tilde g_t$};
\draw[thick] (8,0) to [out=90, in=270] (8.5,1) to [out=90,in=290] (7.8,1.9);
\draw[fill] (7.8,1.9) circle [radius=0.05];
\draw[dashed] (7.8,1.9) to [out=100,in=250] (8.1,3);
\node[right] at (7.8,1.9) {$\tilde\gamma(t)$};
\draw (5,0) -- (11,0);
\draw [dashed] (0,-5) -- (0,-3.8);
\draw[very thick] (-2,-5) -- (2,-5);
\draw[fill] (0,-5) circle [radius=0.05];
\node[below] at (0,-5) {$0$};
\draw (-3,-5) -- (3,-5);
\draw [dashed] (8.2,-5) to [out=90,in=210] (9,-4.2);
\draw [very thick] (7.5,-5) -- (10.5,-5);
\draw[fill] (8.2,-5) circle [radius=0.05];
\node[below] at (8.2,-5) {$\tilde\lambda(t)$};
\draw (5,-5) -- (11,-5);
\end{tikzpicture}
\caption{The conformal maps $\phi, g_t, \tilde g_t, \phi_t,$ the comparison curve $\tilde \gamma$, and $\tilde \l$.
}\label{comparison curve figure}
\end{figure}
\begin{lemma} \label{polylem}
Assume $\phi_t$, $\tilde \l$ and $t=t(s)$ are defined as above, and let $k \in \mathbb{N}$.
Then there exist $\tilde T >0$, polynomials
$p_k(x_1, x_2, \cdots, x_{k+2} ),$
$ q_k(x_1, x_2, \cdots, x_{2k} )$
and $r_k(x_1, x_2, \cdots, x_{2k-1} )$,
and nonzero constants $c_k, d_k, e_k$ such that for $t \in [0, \tilde T],$
\begin{align}
\label{polylem1}
& \partial_t \phi^{(k)}_t(0) = c_k \,\phi^{(k+2)}_t(0)+
p_k\left( \phi'_t(0), \phi''_t(0), \cdots, \phi^{(k+1)}_t(0), \phi_t'(0)^{-1} \right), \\
\label{polylem2}
& \partial_s^{k} \tilde \l(t) = d_k \, \phi^{(2k)}_t(0) \cdot \phi_t'(0)^{-2k} +
q_k\left( \phi'_t(0), \phi''_t(0), \cdots, \phi^{(2k-1)}_t(0), \phi_t'(0)^{-1} \right), \text{ and}\\
\label{polylem3}
&\partial_s^{k} t = e_k \, \phi_t^{(2k-1)}(0) \cdot \phi_t'(0)^{-(2k+1)} + r_k\left( \phi'_t(0), \phi''_t(0), \cdots, \phi^{(2k-2)}_t(0), \phi_t'(0)^{-1} \right).
\end{align}
Further $\tilde \l \in C^{\infty}[0, s(\tilde T)]$ under the halfplane-capacity parametrization.
\end{lemma}
\begin{proof}
Write $ \phi_t(z) = \sum_{k=0}^{\infty} a_k z^k$, keeping in mind that $a_k$ depends on $t$.
Then from Proposition 4.40 in \cite{L},
\begin{align}
\nonumber
\partial_t \phi_t(z) &= 2 \left( \frac{ \phi_t'(0)^2}{\phi_t(z) - \phi_t(0)} - \frac{ \phi_t'(z)}{z} \right)\\ \label{uglysums}
&= -2\, \frac{ \sum_{k=0}^{\infty} (a_1a_{k+2}+2 a_2 a_{k+1}+ \cdots + (k+2) a_{k+2} a_1)z^{k}}
{\sum_{k=0}^{\infty} a_{k+1}z^k}.
\end{align}
Since $a_1=1$ when $t=0$, there exists a neighborhood $U$ of 0 and $\tilde T>0$ so that the denominator is nonzero for $z \in U$ and $t \leq \tilde T$.
Therefore $ \partial_t \phi^{(k)}_t(z)$ is defined for $(z,t) \in U \times [0,\tilde T]$.
Equation \eqref{polylem1} follows from \eqref{uglysums} (with $c_k = -\frac{2(k+3)}{(k+2)(k+1)}$.)
We verify \eqref{polylem2} inductively. For the base case,
$$\partial_s \tilde \l(t) = \partial_t \phi_t(0) \cdot \frac{dt}{ds} = -3\, \phi_t''(0) \cdot \phi_t'(0)^{-2}.$$
Assume \eqref{polylem2} holds for a fixed $k$. Then
\begin{align*}
\partial_s^{k+1} \tilde \l(t) &= \partial_t \left( d_k \, \phi^{(2k)}_t(0) \cdot \phi_t'(0)^{-2k} +
q_k\left( \phi'_t(0), \phi''_t(0), \cdots, \phi^{(2k-1)}_t(0), \phi_t'(0)^{-1} \right) \right)\cdot \phi_t'(0)^{-2} \\
& = d_k \,c_{2k} \, \phi_t^{(2k+2)}(0) \cdot \phi_t'(0)^{-2k-2} + q_{k+1}\left( \phi'_t(0), \phi''_t(0), \cdots, \phi^{(2k+1)}_t(0), \phi_t'(0)^{-1} \right).
\end{align*}
We also prove \eqref{polylem3} inductively. When $k=1$,
$$ \frac{dt}{ds} = \phi_t'(0) \cdot \phi_t'(0)^{-3}.$$
If \eqref{polylem3} holds for fixed $k$, then
\begin{align*}
\partial_s^{k+1} t &= \frac{d}{dt} \left( e_k \, \phi_t^{(2k-1)}(0) \cdot \phi_t'(0)^{-(2k+1)} + r_k\left( \phi'_t(0),
\phi''_t(0), \cdots, \phi^{(2k-2)}_t(0), \phi_t'(0)^{-1} \right) \right)\cdot \phi_t'(0)^{-2} \\
&= e_k \, c_{2k-1} \, \phi_t^{(2k+1)}(0) \cdot \phi_t'(0)^{-(2k+3)} + r_{k+1} \left( \phi'_t(0), \phi''_t(0), \cdots, \phi^{(2k)}_t(0), \phi_t'(0)^{-1} \right).
\end{align*}
The last assertion follows from \eqref{polylem2}.
\end{proof}
We are now ready to recursively define the coefficients of $\phi$.
The coefficient $b_m$ will depend on $\l^{(k)}(0)$ for $k=1, \cdots, \lfloor \frac{m}{2} \rfloor \wedge n$.
For even values of $m$, our choice of $b_m$ will ensure that $\partial_s^{k} \tilde{\l}(0) = \l^{(k)}(0)$ for $k \leq n$.
For odd values of $m$, we choose $b_m$ so that the $t$-parametrization of $\tilde \gamma$ is close to the halfplane-capacity parametrization.
\begin{itemize}
\item Set $b_2 = -\frac{2}{3}\l'(0)$.
Since $\partial_s \tilde{\l}(0)= -\frac{3}{2} b_2$, this implies that $ \partial_s \tilde \l(0)= \l'(0)$.
\smallskip
\item Set $\displaystyle b_3 = \frac{b_2^2}{8}$.
This implies that $ \frac{d^{2}t}{d s^{2}} \big|_{s=0}= 2 b_3 - b_2^2/4=0$.
\smallskip
\item Assume that $b_2, b_3, \cdots, b_{2k-1}$ have been defined.
Then by Lemma \ref{polylem},
$$ \partial_s^{k} \tilde \l(0) = d_k \, \frac{(2k)!}{2^{2k}}\, b_{2k} +
q_k\left( 1, \frac{1}{2}b_2, \cdots, \frac{ (2k-1)!}{2^{2k-1} }b_{2k-1}, 1 \right) .$$
If $k \leq n$, define $b_{2k}$ so that $\partial_s^{k} \tilde{\l}(0) = \l^{(k)}(0)$. If $k>n$, we may define $b_{2k}$ however we like; for instance, we choose $b_{2k}$ so that $\partial_s^{k} \tilde{\l}(0) =0$.
\item Assume that $b_2, b_3, \cdots, b_{2k}$ have been defined.
\smallskip
Then by Lemma \ref{polylem},
$$ \frac{d^{k+1}t}{d s^{k+1}}\bigg|_{s=0} = e_{k+1} \, \frac{(2k+1)!}{2^{2k+1}} \, b_{2k+1}
+ r_{k+1} \left( 1, \frac{1}{2}b_2, \cdots, \frac{ (2k)!}{2^{2k} }b_{2k}, 1 \right).$$
Define $b_{2k+1}$ so that this quantity is zero.
\end{itemize}
This construction ensures that
$\partial_s^{k} \tilde{\l}(0) = \l^{(k)}(0)$ for $k \leq n$
and that
$t = s + O(s^{2n+2})$.
The first fact, together with Theorem 3.3 in \cite{C}, implies that
$ |\gamma(s) - \tilde \gamma(t(s))| = O(s^{n+\a})$ for $s$ near 0.
The second fact implies that under the halfplane-capacity parametrization $\tilde \gamma(t(s))$ will have the same coefficients as $\tilde \gamma(t)$ for the terms with exponents at most $n+1/2$.
Together, this provides precise information about the expansion of $\gamma(s)$ near $s=0$.
In summary, we have proved the following, which establishes Theorem \ref{t:series at 0}.
\begin{proposition} \label{prop}
Assume that $\l \in C^{n,\alpha} [0,T]$ generates the curve $\gamma$.
Then there exists $\tilde \l \in C^{\infty} [0,S] $ that generates a (halfplane-capacity-parametrized) curve $\tilde \gamma \in C^{\infty}(0, S]$ with the following properties:
\begin{itemize}
\item $\l^{(k)}(0) = \tilde \l^{(k)}(0)$ for $1 \leq k \leq n$.
\smallskip
\item $\tilde \Gamma(s) = \tilde \gamma(s^2)$ is in $C^{\infty}[0,\sqrt{S}]$.
\smallskip
\item $\tilde \Gamma^{(m)}(0)$ depends on $\l^{(k)}(0)$ for $m \leq 2n+1$ and $k=1, \cdots, \lfloor \frac{m}{2} \rfloor$.
\smallskip
\item $|\gamma(s) - \tilde \gamma(s)| = O(s^{n+\alpha}).$
\end{itemize}
In particular near $s=0$, the curve $\gamma$ has the form
\begin{equation*}
\gamma(s) =
\begin{cases}
2i\sqrt{s} + a_2 s + i \,a_3 s^{3/2} + a_4 s^2 + \cdots + a_{2n} s^{n} + O(s^{n+\a})
&\mbox{if } \a \leq 1/2 \\
2i\sqrt{s} + a_2 s + i \,a_3 s^{3/2} + a_4 s^2 + \cdots + a_{2n} s^{n} +i \,a_{2n+1} s^{n+1/2} + O(s^{n+\a})
&\mbox{if } \a > 1/2 \\
\end{cases}
\end{equation*}
where the real-valued coefficients $a_m$ depend on
$\l^{(k)}(0)$ for $k=1, \cdots, \lfloor \frac{m}{2} \rfloor$.
\end{proposition}
We note the equations for the first few coefficients:
\begin{align*}
a_2 &= \frac{2}{3} \l'(0) \\
a_3 &= -\frac{1}{18} \l'(0)^2 \\
a_4 &= \frac{4}{15} \l''(0) + \frac{1}{135} \l'(0)^3 \\
a_5 & = -\frac{1}{15} \l''(0) \l'(0) + \frac{1}{2160} \l'(0)^4
\end{align*}
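As a consistency check, the first two coefficients can be read off directly from the construction: since $\tilde\gamma(t) = 2i\sqrt{t} - b_2 t - i\, b_3 t^{3/2} + \cdots$ and $t = s + O(s^{2n+2})$,
$$a_2 = -b_2 = \frac{2}{3}\l'(0) \qquad \text{and} \qquad a_3 = -b_3 = -\frac{b_2^2}{8} = -\frac{1}{8}\left(\frac{2}{3}\l'(0)\right)^{2} = -\frac{1}{18}\l'(0)^2,$$
in agreement with the formulas above.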
Coefficients $a_2, a_3, a_4$ were discovered in \cite{LR2} by comparison with specific example curves (such as those generated by $c\sqrt{\tau-t}$.)
Along with the tools developed in Sections \ref{Properties} and \ref{smoothness section}, Proposition \ref{prop} could be used to show that if $\Gamma(s) = \gamma(s^2)$, then
$\Gamma^{(k)}(0)$ exists and equals $\tilde \Gamma^{(k)}(0)$ for $k=1, \cdots, n+1$.
\section{Examples} \label{sec:ex}
In this section we discuss two examples that illustrate the two special cases of Theorem \ref{t: quantitative}.
The first special case is when the driving function is $C^{n+1/2}$. Here the conclusion is weaker than we might initially expect: it is not necessarily true that $\gamma \in C^{n+1}$, but rather $\gamma$ is in the larger space $\Lambda^n_*$ (which contains both $C^{n+1}$ and $C^{n,1}$.)
This case is illustrated in the first example where the driving function is $C^{3/2}$ and the associated curve is $C^{1,1}$ but not $C^{2}$.
The second special case of Theorem \ref{t: quantitative} is when the driving function is $ C^{n,1}$. Here the conclusion is slightly stronger than might be initially expected: $\gamma \in C^{n+1, 1/2}.$
This is illustrated in the second example, where
the driving function is $C^{0,1}$ but not $C^1$ and the associated curve is $C^{3/2}$.
We describe the computational steps needed to verify these examples, but leave the details to the reader.
\subsection{Example 1: $\lambda \in C^{3/2}$ and $\gamma \in C^{1,1}\backslash C^2$}
This example was communicated to us from Don Marshall.
We will create $\gamma$ via a sequence of conformal maps, as pictured in Figure \ref{fig:rect-line}.
Let $f_1(z) = z+\frac{1}{z} + c\ln z$, and let $r_{1,2} = \frac{-c\pm \sqrt{c^2 + 4}}{2}$ be the finite critical points of $f_1$.
Define
$$g(z)=\frac{c\pi}{f_1(z)-f_1(r_1)},$$
which is a conformal map from $\mathbb{H}$ onto the $C^{1,1}$ domain $\mathbb{C}\backslash((-\infty,0]\cup \text{ a circle arc})$.
Finally, set
$$F(z) = i\sqrt{g(z)+1}.$$
The image of $\mathbb{H}$ under $F$ is a slit half-plane, and we let $\gamma$ be the resulting slit.
\begin{figure}[h!]
\includegraphics[scale=0.3]{example_domain.pdf}
\caption{Conformal maps used in the construction of $\gamma$ for Example 1.}\label{fig:rect-line}
\end{figure}
For $t \in [0,1/4]$, $\gamma(t) = 2i\sqrt{t}$ and $\lambda(t) \equiv 0$.
To compute $\l$ and $\gamma$ for $t >1/4$, we will need to use the conformal maps,
since $\gamma(t) = F(r_2)$ and $\l(t) = L^{-1}(r_2)$ for the automorphism $L$ of $\mathbb{H}$
with
\begin{equation}\label{normalization}
F(L(z))=z+ 0 + \frac{-2t}{z} + \cdots \; \text{ near infinity}.
\end{equation}
Since $L$ must send $\infty$ to $r_1$,
\begin{equation*}
L(z) = r_1 + \frac{a}{z-b}
= r_1+\frac{a}{z} + \frac{ab}{z^2} +\frac{ab^2}{z^3}+\frac{ab^3}{z^4} + O(|z|^{-5})\text{ near infinity},
\end{equation*}
where $a<0$ and $b\in\mathbb{R}$.
Using this and the Taylor series expansion of $f_1 - f_1(r_1)$ at $z=r_1$, one can compute that
$$ f_1(L(z))-f_1(r_1) = \frac{A}{z^2}+\frac{B}{z^3}+\frac{D}{z^4} + O(1/|z|^5) \text{ near infinity},$$
with $$A= \frac{a^2f_1^{(2)}(r_1)}{2}, \;\; B= a^2b f_1^{(2)}(r_1)+ \frac{a^3 f_1^{(3)}(r_1)}{6},$$
$$ \text{ and } D = \frac{3a^2b^2 f_1^{(2)}(r_1)}{2} + \frac{a^3 b f_1^{(3)}(r_1)}{2} + \frac{a^4 f_1^{(4)}(r_1)}{24}.$$
Thus near infinity,
\begin{align*}
F(L(z))
&= i\sqrt{\frac{c\pi}{A}z^2 -\frac{c\pi B}{A^2} z -\frac{c\pi D}{A^2} + \frac{c\pi B^2}{A^3} + 1 + O(1/|z|)} \\
&= i\left(-i\sqrt{\frac{c\pi}{|A|}} z -i B \frac{\sqrt{c\pi}}{2|A|^{3/2}}+ O(1/|z|)\right).
\end{align*}
Note that in choosing the appropriate branch for the square root, we used the fact that $A <0$.
In order to satisfy \eqref{normalization}, we must have
\begin{itemize}
\item $ \displaystyle A = -c\pi, \,$ or equivalently, $\displaystyle a =\frac{r_1\sqrt{-2\pi cr_1}}{\sqrt{2-cr_1}}$, and
\item $\displaystyle B=0, \, $ or equivalently, $\displaystyle b = \frac{(cr_1-3)\sqrt{-2\pi c r_1}}{3(2-cr_1)^{3/2}}.$
\end{itemize}
Using these two facts, we expand further and find that at infinity,
$$F(L(z)) = z+0 - \frac{1}{2}\left(\frac{D}{A}+1\right) \frac{1}{z}+ O(1/|z|^2),$$
which implies that
$$4t = \frac{D}{A}+1= \frac{-\pi c r_1(c^2r_1^2-6c r_1+6)}{3(2-cr_1)^3} +1.$$
Next we compute $\l(t)$ for $t>1/4$:
$$\l(t) = L^{-1}(r_2) = b+\frac{a}{r_2-r_1}=\frac{-2\sqrt{2\pi}(-cr_1)^{3/2}}{3(2-cr_1)^{3/2}}.$$
Thus with $y=-cr_1$, we have
$$t= \frac{1}{4} +\frac{\pi y(y^2+6y+6)}{12(2+y)^{3}}
\; \text{ and } \; \l(t) = \frac{-2\sqrt{2\pi}y^{3/2}}{3(2+y)^{3/2}}.$$
So for $t>1/4$,
$$\l'(t) = \frac{\frac{d\l}{dy} }{ \frac{dt}{dy}} = \frac{-2\sqrt{2}\sqrt{y}(2+y)^{3/2}}{\sqrt{\pi}(y+1)}.$$
Using this, one can show that for $s> t \geq 1/4$,
$$|\l'(s)-\l'(t)| \leq c \sqrt{y_s-y_t} \leq c' \sqrt{s-t},$$
proving that $\l \in C^{3/2}[0,T]$. We also note that away from $t=1/4$, one can check that $\l(t)$ is $C^2$.
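The closed form for $\l'$ can be sanity-checked numerically (an illustrative computation, not part of the example): compare a central-difference approximation of $(d\l/dy)/(dt/dy)$ against the displayed formula.

```python
import numpy as np

# Illustrative numerical check of the formula for lambda'(t) in Example 1,
# using the parametrization by y = -c*r_1 > 0 given in the text.

def t_of_y(y):
    return 0.25 + np.pi * y * (y**2 + 6 * y + 6) / (12 * (2 + y)**3)

def lam_of_y(y):
    return -2 * np.sqrt(2 * np.pi) * y**1.5 / (3 * (2 + y)**1.5)

def lam_prime_claimed(y):
    # the closed form displayed above
    return -2 * np.sqrt(2) * np.sqrt(y) * (2 + y)**1.5 / (np.sqrt(np.pi) * (y + 1))

def lam_prime_numeric(y, h=1e-6):
    # lambda'(t) = (dlambda/dy)/(dt/dy), via central differences
    dlam = (lam_of_y(y + h) - lam_of_y(y - h)) / (2 * h)
    dt = (t_of_y(y + h) - t_of_y(y - h)) / (2 * h)
    return dlam / dt

print(max(abs(lam_prime_numeric(y) - lam_prime_claimed(y)) for y in (0.5, 1.0, 3.0)))
```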
Lastly, for $t \geq 1/4$, $\gamma(t) = F(r_2)$. Using this, one can determine computationally that with the halfplane-capacity parametrization, $\gamma'$ and $\gamma''$ exist on $[1/4, T]$ (by computing, for instance, $\gamma'(t) = \frac{dF(r_2)}{dc}/\frac{dt}{dc}$ and $\gamma''= \frac{d\gamma'(t)}{dc}/\frac{dt}{dc}$). Further,
$$ \lim_{t \searrow 1/4} \gamma'(t) = 2i =\lim_{t \nearrow 1/4} \gamma'(t), $$
but
$$ \lim_{t \searrow 1/4} \gamma''(t)=-4i-16 \neq \lim_{t \nearrow 1/4} \gamma''(t) = -4i.$$
Therefore on the full interval $(0,T]$, $\gamma$ is $C^{1, 1}$ but not $C^2$.
\subsection{Example 2: $\lambda \in C^{0,1}$ and $\gamma \in C^{3/2}$}
Consider the driving function
\begin{equation*}
\lambda(t) = \left\{
\begin{array}{ccr}
0 & \mbox{ for } & 0\leq t \leq \frac{1}{4}\\
\frac{3}{2}-\frac{3}{2}\sqrt{1-8(t-1/4)} & \mbox{ for } & \frac{1}{4} \leq t < \frac{1}{4} + \frac{1}{10}\\
\end{array}
\right. .
\end{equation*}
There exists $c>0$ so that
$$|\lambda(t)-\lambda(s)|\leq c |t-s|$$
for all $s, t \in [0, 0.35]$, implying that $\l \in C^{0,1}$.
However, $\l$ is not in $C^1$, since $\l'$ is not continuous at $t=1/4$.
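Indeed, a direct computation on the second branch gives
$$\l'(t) = \frac{6}{\sqrt{1-8(t-1/4)}} \quad \text{ for } \quad \frac{1}{4} < t < \frac{1}{4}+\frac{1}{10},$$
which is bounded above by $6/\sqrt{1/5}=6\sqrt{5}$ there (so one may take $c=6\sqrt{5}$), while $\l'(t)=0$ for $0\leq t<\frac{1}{4}$; the jump of $\l'$ from $0$ to $6$ at $t=\frac{1}{4}$ rules out $C^1$.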
The driving function $ \frac{3}{2}-\frac{3}{2}\sqrt{1-8s}$, defined on $[0,\frac{1}{8}]$, generates the upper half-circle of radius $\frac{1}{2}$ centered at $\frac{1}{2}$.
Let $\hat{\gamma}$ be the portion of this circle generated on the time interval $[0, \frac{1}{10}]$.
Then the curve $\gamma$ generated by $\l$ is the image of $[-1,1] \cup \hat\gamma$ by the map $S(z)=\sqrt{z^2-1}$.
See Figure \ref{Example2gamma}.
By Proposition 3.12 in \cite{MR07},
$\gamma \in C^{3/2}$ (and no better) under the arclength parametrization. This is also true under the halfplane-capacity parametrization. Note that $\hat \gamma$ is smooth on $(0, \frac{1}{10}]$ (because its driving function is smooth),
and near $s=0$
$$\hat \gamma(s) = 2i\sqrt{s} + 4s -2is^{3/2} + O(s^2)$$
by Theorem \ref{t:series at 0}.
Thus $\gamma$ is piecewise smooth, and for $t \geq 1/4$
$$\gamma(t) = S(\hat \gamma(t-1/4))
= i +2i(t-1/4)+8(t-1/4)^{3/2}+O((t-1/4)^2).$$
From this we can determine that $\gamma \in C^{3/2}(0,0.35]$ (and no better) under the halfplane-capacity parametrization.
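This expansion can be checked directly: writing $s=t-1/4$ and $z=\hat\gamma(s)$, the series for $\hat\gamma$ gives $z^2-1 = -1-4s+16is^{3/2}+O(s^2)$, so
$$S(z) = i\sqrt{1+4s-16is^{3/2}+O(s^2)} = i+2is+8s^{3/2}+O(s^2),$$
and the $s^{3/2}$ term is precisely what limits the regularity of $\gamma$ to $C^{3/2}$ at $t=1/4$.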
\begin{figure}
\begin{tikzpicture}
\draw[thick] (0,0) arc[radius=1, start angle=180, end angle=45];
\node[right] at (-0.2,1) {$\hat{\gamma}$};
\draw (-2,0) -- (3,0);
\draw[->] (3,2) to [out=45,in=135] (5,2);
\node[above] at (4,2.5) {S};
\draw[thick] (7,0) to (7,1.8);
\draw[thick] (7,1.8) to [out=90,in=180] (7.25, 2);
\draw[thick] (7.25, 2) to [out=0,in=100] (7.8,1.4);
\node[right] at (7.7,1.9) {$\gamma$};
\draw (5,0) -- (10,0);
\end{tikzpicture}
\caption{The curve $\gamma$ for Example 2.
}\label{Example2gamma}
\end{figure}
\bibliographystyle{alpha}